
Meta: Adopt a "project" process #3443

Open
RalfJung opened this issue Apr 3, 2024 · 15 comments
Labels
A-meta Not about any part of Miri per se, but about shaping the environment to make something in/with Miri
C-proposal Category: a proposal for something we might want to do, or maybe not; details still being worked out

Comments

@RalfJung
Member

RalfJung commented Apr 3, 2024

Close to two years ago, we landed stubs for epoll and some related functions to get an empty tokio program to work. Those stubs were buggy and lacked basic sanity checks, and two years later they are still just stubs. IMO we can conclude that this approach of just landing something and then figuring things out as we go was a failure.

When embarking on larger projects, like epoll support, we need a plan -- t-lang has design meetings, t-compiler has MCPs, t-libs-api has ACPs. The pattern is always the same: we should have an idea and consensus on where we want to go and how we want to get there before we start even reviewing PRs that move us in that direction.

So I think for Miri we want a "project" process. It doesn't have to be heavyweight, but I am imagining something roughly like this:

  1. A project proposal clearly defines the goal of the project. This defines the scope of the project, i.e. which parts of which APIs should be supported. If this involves functions that expose a big API surface with lots of flags, the project may want to support only a tiny subset of flags; that should be documented. The proposal must give at least one testcase that Miri currently doesn't support and that should be supported when the project is done. This testcase doesn't need to cover everything, but it should call each relevant API at least once with a representative set of arguments and flags. IOW, the project is not defined by "we are done when the testcase passes"; the testcase just documents the scope of the project more concretely.
    The testcase should be expressed by directly calling the relevant C FFI functions, not by going through a high-level Rust crate (see the sketch after this list). It's okay to start with a high-level test, but it's hard to judge the scope of the project and to evaluate the plan in step 2 without knowing the actual API surface Miri will be responsible for. So in that case, developing the low-level testcase is a blocker for moving to the next step of the project.
  2. Once the team accepts that this is a goal we want to pursue, we need to make a plan. What exactly the plan looks like will depend heavily on the concrete project. Any extension to the machine state (adding anything new to MiriMachine or any of its fields) definitely needs to be part of the plan, specifying the type of the new state component and where it goes. Similarly, if a new file descriptor type is involved, the data stored in the FD table for this file descriptor type needs to be part of the plan. There doesn't have to be any code, just a rough description of how the new operations Miri needs to implement for this project interact with the new state -- and, if applicable, how existing operations interact with the new state.
  3. Once the team accepts the plan, implementation can begin. Now that we have a plan, implementation doesn't need to be one big PR, but we can still make sense of each PR as taking small steps towards the goal, following the plan.
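
For illustration, the low-level testcase from step 1 might look roughly like the sketch below. This is not an existing Miri test; it just calls the relevant C APIs directly via the libc crate (as Miri's existing shim tests do), covering epoll and eventfd once each. The concrete proposal would pin down the exact APIs, flags, and assertions.

fn main() {
    unsafe {
        // Create an epoll instance and an eventfd -- the two APIs in scope here.
        let epfd = libc::epoll_create1(libc::EPOLL_CLOEXEC);
        assert!(epfd >= 0);
        let evfd = libc::eventfd(0, libc::EFD_CLOEXEC | libc::EFD_NONBLOCK);
        assert!(evfd >= 0);

        // Register the eventfd for edge-triggered readiness notifications.
        let mut ev = libc::epoll_event { events: (libc::EPOLLIN | libc::EPOLLET) as u32, u64: 0 };
        assert_eq!(libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, evfd, &mut ev), 0);

        // Make the eventfd readable, then check that epoll reports it.
        let val: u64 = 1;
        assert_eq!(libc::write(evfd, &val as *const u64 as *const libc::c_void, 8), 8);
        let mut out = libc::epoll_event { events: 0, u64: 0 };
        assert_eq!(libc::epoll_wait(epfd, &mut out, 1, -1), 1);
        assert_ne!(out.events & libc::EPOLLIN as u32, 0);

        assert_eq!(libc::close(evfd), 0);
        assert_eq!(libc::close(epfd), 0);
    }
}

A testcase like this makes the API surface explicit, which is exactly what the plan from step 2 needs to be evaluated against.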

I am not sure what exactly the threshold is for when the process should be required, and we probably don't need an exact threshold. But roughly speaking, if a machine state extension (or new FD type, which is a machine state extension) is required, then the project process should be used.

@rust-lang/miri what do you think?

@RalfJung added the C-proposal and A-meta labels Apr 3, 2024
@oli-obk
Contributor

oli-obk commented Apr 3, 2024

I don't know if we could have figured out the shims we need for tokio by writing up a plan. I don't think we could have even estimated what part of epoll we need.

If we had just stated that we want to support epoll, I wouldn't have known how to write tests. The rest of the proposal makes sense to me. We can then extract the actually supported use cases (and some unsupported ones) and write direct shim tests.

@RalfJung
Member Author

RalfJung commented Apr 3, 2024

I don't know if we could have figured out the shims we need for tokio by writing up a plan. I don't think we could have even estimated what part of epoll we need.

We still don't know which shims we need for tokio since the current support is extremely incomplete.

That needs some exploration. For instance, you can just stub out some functions and return plausible dummy values to see which function we run into next. Or you can run tokio "for real" and use ltrace to see which calls are needed. Or you can look at the source code.

But I think that should be done out-of-tree, and then once we know which shims we need -- or once we have identified a first hurdle, like "we need socketpair" -- we can proceed with those. I think this kind of exploration should not be used as justification for actually landing anything.

@oli-obk
Contributor

oli-obk commented Apr 3, 2024

This sounds like a huge hurdle, and it feels like it would basically just cause people to decide it's not worth it.

To me personally that would mean I'd implement the whole thing in a branch and open a project proposal when it works. Along with all the rebasing pain and so on.

@RalfJung
Member Author

RalfJung commented Apr 3, 2024

I don't understand the problem. Running tokio you see a socketpair call, so the first thing to do is socketpair -- you can forget about tokio for now, that's just something that needs to be implemented, at least basic read/write/close. Then you see something with eventfd, so you go look at the eventfd documentation and tokio sources to figure out which fragment of eventfd is required. And so on.

Stubbing out a bunch of calls to figure out what the next call after that is should take maybe 15 minutes. Running a bare-bones tokio application with ltrace should take less than a minute. I think that time investment is absolutely worth it.
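
For concreteness, the kind of bare-bones tokio application meant here is something like the sketch below (the actual program used for this exploration isn't shown in this thread; this assumes tokio with its "full" feature set so that #[tokio::main] and the timer are available):

// Minimal tokio program to trace.
#[tokio::main]
async fn main() {
    // A short sleep gives the runtime something to actually wait on.
    tokio::time::sleep(std::time::Duration::from_millis(1)).await;
}

Running this under a tracer (ltrace, or strace -f as shown further down) lists the calls such a program makes.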

But some understanding of what it is that needs to be done is IMO absolutely required before we land anything. Implementing "just the next function call tokio makes" is very fragile and means we'll design things that can completely fall apart 5 function calls later. It means we'll have stubs like the current socketpair that incorrectly accept many flags we will never support -- or we'll have stubs that work "exactly if the first argument is 0 and the second argument is 15" (or something like that) because that's what tokio does. Having such stubs in a local explorative branch is fine, but I don't think we should have them upstream. It's the equivalent of flying blind. This is not how we develop any part of Rust, and we shouldn't develop Miri like that either.

@oli-obk
Contributor

oli-obk commented Apr 3, 2024

The stubbing never got us far, it's exactly what we tried. I've never used ltrace, but that seems reasonable.

Looking at code is almost useless, as the devil is in the interaction of multiple calls and the runtime state they return, write, or store somewhere.

If ltrace gives us all the libc functions that were invoked, that's reasonable. Figuring out the flags that are potentially used will be harder, but can be delayed by just not supporting any flags that we haven't seen via stubbing so far.

@RalfJung
Member Author

RalfJung commented Apr 3, 2024

If ltrace gives us all the libc functions that were invoked,

That's what it should do. And it works on many binaries, including cargo itself, but somehow not on locally built Rust binaries. I'm not the first to try this but that thread also doesn't have a solution.

strace however does show this -- it's on the syscall level but at least for the tokio MVP that matches libc quite well:

epoll_create1(EPOLL_CLOEXEC)            = 3
eventfd2(0, EFD_CLOEXEC|EFD_NONBLOCK)   = 4
epoll_ctl(3, EPOLL_CTL_ADD, 4, {events=EPOLLIN|EPOLLRDHUP|EPOLLET, data={u32=0, u64=0}}) = 0
fcntl(3, F_DUPFD_CLOEXEC, 3)            = 5
socketpair(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0, [6, 7]) = 0
fcntl(6, F_DUPFD_CLOEXEC, 3)            = 8
epoll_ctl(5, EPOLL_CTL_ADD, 8, {events=EPOLLIN|EPOLLRDHUP|EPOLLET, data={u32=1, u64=1}}) = 0

And then there's a bunch of junk for spawning the threads...

It shows the epoll_wait from the "sleep" test as well (when using strace -f, otherwise I think it doesn't trace off-main threads).

The stubbing never got us far, it's exactly what we tried.

Except for epoll_ctl and Event::write, the things that landed are basically stubs. So I don't follow. I'm basically saying that instead of polishing them and then landing that, they should be seen as the initial exploration which would have told us about the three APIs involved: socketpair, eventfd, epoll. These can then be designed and tested independently, to get something minimal but actually functional (a socketpair you can send data over, an eventfd you can actually read), which would bring us closer to actually supporting tokio with more than an empty main.
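
To make "minimal but actually functional" concrete, the socketpair part could be exercised by a test roughly like this sketch (again calling libc directly; not an existing Miri test):

fn main() {
    unsafe {
        // Create a connected pair of Unix stream sockets.
        let mut fds = [-1i32; 2];
        assert_eq!(libc::socketpair(libc::AF_UNIX, libc::SOCK_STREAM, 0, fds.as_mut_ptr()), 0);

        // Send a few bytes over one end and read them back from the other.
        let msg = b"hello";
        assert_eq!(libc::write(fds[0], msg.as_ptr() as *const libc::c_void, msg.len()), msg.len() as isize);
        let mut buf = [0u8; 5];
        assert_eq!(libc::read(fds[1], buf.as_mut_ptr() as *mut libc::c_void, buf.len()), msg.len() as isize);
        assert_eq!(&buf, msg);

        assert_eq!(libc::close(fds[0]), 0);
        assert_eq!(libc::close(fds[1]), 0);
    }
}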

@saethlin
Member

saethlin commented Apr 3, 2024

I agree that more team coordination would be healthy.

I am also not opposed to what you're laying out here. Clearly stating a goal and agreeing on a plan to accomplish the goal is certainly one possible way to avoid problems.

But I am not sure that what you've laid out here is actually going to prevent what happened from happening again. If we agree with an interested contributor that their plan seems good, but then after landing the first of many planned PRs their circumstances change and they stop contributing... What do we do then? This is more of a project management question than anything else. If there's a plan laid out and the contributor vanishes but never clearly records publicly that they will not be finishing the project, what do we do?

@RalfJung
Member Author

RalfJung commented Apr 4, 2024

If we agree with an interested contributor that their plan seems good, but then after landing the first of many planned PRs their circumstances change and they stop contributing... What do we do then?

Then we still have the plan described in the project tracking issue, which should make it much easier for someone else to pick up the work.

If there's a plan laid out and the contributor vanishes but never clearly records publicly that they will not be finishing the project, what do we do?

Sure, we shouldn't wait a year until we declare the project open for others again. But I don't think that was even the main issue here. The main issue, as I see it, was that there was no clear idea of what the next step would even be.

@saethlin
Member

saethlin commented Apr 7, 2024

Well, I don't have any objection to asking for a plan before we merge anything that's part of a much larger effort. Planning is generally good, communicating expectations even better. I'm just wary of this having more limited upside than you are indicating.

@LegNeato
Contributor

Perhaps dumb questions:

  1. Are the stubs a maintenance burden?
  2. Is it strictly better to have stubs than nothing at all?
  3. Does having the stubs and test machinery in-repo lower the barrier for someone else to contribute in the future?

@saethlin
Member

saethlin commented Apr 19, 2024

Are the stubs a maintenance burden?

Mostly, no.

Is it strictly better to have stubs than nothing at all?

It might be better to have nothing. The stubs in their current state make it easier to ICE Miri. There are a small handful of crates that hit an ICE path in the stubs, which makes it look like they've hit an interpreter bug.

Does having the stubs and test machinery in-repo lower the barrier for someone else to contribute in the future?

The above discussion makes a pretty solid case that the answer is no.

@RalfJung
Member Author

Are the stubs a maintenance burden?

To some extent. We already had a case of someone adding a new shim modeled after one of the stubs. But it is unclear whether that stub can even be extended to anything useful without a major redesign, so IMO it is an unhelpful distraction to new contributors.

Is it strictly better to have stubs than nothing at all?

Beyond what @saethlin said, there's also the problem that the stubs do NOT properly report errors when they are called with flags they will never support. So (a) some code that now passes Miri will actually fail in the future, and (b) we'll have to carefully re-review all that code when the feature gets properly implemented. It's easier to review code when it lands, and make sure it makes sense then, than to remember to re-review code in the future as it was landed in a shape that is unsuitable for the long term.

@LegNeato
Contributor

To some extent. We already had a case of someone adding a new shim modeled after one of the stubs. But it is unclear whether that stub can even be extended to anything useful without a major redesign, so IMO it is an unhelpful distraction to new contributors.

Perhaps the focus should be on review, and on making sure stubbed-out PRs are in the right form?

Is it strictly better to have stubs than nothing at all?

Beyond what @saethlin said, there's also the problem that the stubs do NOT properly report errors when they are called with flags they will never support. So (a) some code that now passes Miri will actually fail in the future, and (b) we'll have to carefully re-review all that code when the feature gets properly implemented. It's easier to review code when it lands, and make sure it makes sense then, than to remember to re-review code in the future as it was landed in a shape that is unsuitable for the long term.

Agreed. But again, isn't this pointing to changing maintainer review standards for stubs rather than getting rid of stubs entirely with an up-front process?

FWIW, as a drive-by committer, the fact that all the plumbing is there is usually a huge benefit for me. Perhaps this is just the way I like to work, but it is much easier for me to take something and change it than to create something from scratch. I love to see bugs that are like "here is a pointer to some code, we need to do X".

@RalfJung
Member Author

RalfJung commented Apr 19, 2024

FWIW, as a drive-by committer, the fact that all the plumbing is there is usually a huge benefit for me. Perhaps this is just the way I like to work, but it is much easier for me to take something and change it than to create something from scratch.

For some things this works, and there we can keep doing it.

But for e.g. socketpair, there's no simple drive-by improvement. There needs to be actual design. It's impossible to decide during PR review whether the proposed PR is compatible with a future design without having seen the design. We could ask the reviewer to do the full design work and then evaluate the PR against that, but that's more than reviewers typically have the time to do. (Also it may often end up with "oh and the PR needs to be entirely rewritten as it turns out the design is very different from a naive stub".) That's why I don't think this can just be different standards for review of stub PRs. Instead I am saying we should not accept such stub PRs until there's a design they can be evaluated against.

Once there is an agreed-upon design, I am completely fine with landing a stub PR that implements just the first 5% of that design. During review it is then possible to actually check whether the PR matches the design, and later another contributor can do the next 5%. But the design needs to exist for this to be possible.

I love to see bugs that are like "here is a pointer to some code, we need to do X".

I agree. But not everything can be developed like that. It can be a lot of work to bring an issue to this stage (from "let's have socketpair" to "here's what you need to do in the code"). This work can't be skipped. And crucially, that work can benefit from review and feedback the same way code can, before a single line of code is written.

@LegNeato
Contributor

Makes sense to me!
