
doc: add experimental stages #46100

Merged
merged 3 commits into from
Jan 14, 2023

Conversation

GeoffreyBooth
Member

@GeoffreyBooth GeoffreyBooth commented Jan 5, 2023

Implements the first PR per #45900 (comment). This only defines the stages; follow-up PRs will classify all currently experimental features accordingly and add any stage requirements such as flags or warnings that we decide are appropriate. We could potentially land this PR at the same time as the one that updates the statuses for all experimental features, but I don’t want to start work on that one without reaching consensus on this one first.

Even just on its own, this PR solves a few problems:

  • It encourages more use and testing of experimental features that we’ve been referring to as in the “baking” phase, where we think they’re ready to be marked as stable. This should hopefully catch more bugs before features become stable.

  • It clarifies the refactoring risk that various experimental features have; “release candidate” ones should hopefully require no refactoring due to breaking changes, but “active development” ones very well might; and “early development” ones almost certainly will. Users therefore have a better idea of how much maintenance work they’re signing up for by choosing to use a subject-to-change feature.

Per #45900 (comment) I tried to come up with better names for the stages. Please let me know if there are better ideas for names that can reach consensus. 🚲

cc @nodejs/tsc @nodejs/documentation

@GeoffreyBooth GeoffreyBooth added doc Issues and PRs related to the documentations. meta Issues and PRs related to the general management of the project. experimental Issues and PRs related to experimental features. labels Jan 5, 2023
Member

@mcollina mcollina left a comment

I'm -1 to this approach. I think it creates more confusion and process where there is no need.

I think we should focus on documenting:

  1. what criteria are needed for a feature to exit experimental status
  2. when to use CLI flags for experimental features
  3. when to emit a warning

Those three things have been applied wildly inconsistently, and we should document some guidelines.


@Trott
Member

Trott commented Jan 5, 2023

I apologize if I'm off base here as I haven't been able to follow the prior discussion as closely as I would ordinarily like to. With that out of the way....

With apologies to George Orwell, at the current time, all experimental features are equal but some are more equal than others. That's...not good (IMO).

In other words: The effective statuses of various Experimental features vary tremendously. Sometimes we require a CLI flag but sometimes not. Sometimes we emit a warning and sometimes not. Sometimes we know that we can't actually introduce breaking changes because of widespread adoption, but we keep things Experimental anyway for unpublished reasons.

This PR attempts to take the first step in solving this issue by trying to document experimental statuses that are currently only implied. I like that approach, but it's certainly not the only approach.

Going a different way, I also like Matteo's three suggestions for what to do. Those suggestions solve specific problems (which is good!) whereas this is an attempt to document (as best we can) current practice which is inconsistent (or at least certainly seems inconsistent) as a way of laying groundwork for fixing the issues.

If this PR is likely to stall in review, maybe a good way forward is to take just one of the three suggestions Matteo made and open a PR trying to solve just that one. And then move to the next one. Or even divide up the work. For example, and purely hypothetically, maybe Geoffrey could try to get consensus on the exit-from-Experimental status criteria, James could work on when a CLI flag can be omitted, and Matteo could work on the criteria for when a warning can be left out.

@mcollina
Member

mcollina commented Jan 5, 2023

I'm ok with documenting what we do. However, this proposal implies attaching specific warnings and conditions to each stage. I'm strongly opposed to that.

To solve that specific mess, I think we should create a guide instead, as each feature is mostly its own case with its own risk level.

My point of view is that Experimental means "breaking changes are possible", and we should keep it that way.

jasnell
jasnell previously requested changes Jan 5, 2023
@GeoffreyBooth GeoffreyBooth added the tsc-agenda Issues and PRs to discuss during the meetings of the TSC. label Jan 5, 2023
@GeoffreyBooth
Member Author

GeoffreyBooth commented Jan 5, 2023

It looks like we need to discuss this among the TSC again, or have a breakout meeting. There are so many divergent opinions here that I’m not sure that this PR can get resolved via a simple review process.

The point of this PR is as I wrote at the top: to better set user expectations around experimental features. We have been burned in both directions by users considering experimental features either stable when they shouldn’t be considered as such, or too rough to test when we really want users trying out our experiments before we graduate them to stable. By subdividing “experimental” into stages we can better focus our testers’ efforts, and also avoid situations where we upset the user base by breaking or removing an experimental feature that they’ve come to depend on even though they shouldn’t have.

These goals can’t be achieved by staying silent about how ready or not we consider each stage to be, or how likely or not each stage is to have breaking changes. It’s a simple fact that some experimental features are more stable than others, and both users and we are better served by knowing where each feature stands. They’re not all the same, hence the need to divide into stages.

We’re also at a disadvantage currently in landing new experimental features in that most past experimental features have landed straight into near-stable state and then rarely changed (and often never got marked as stable) so user expectations right now are that experimental features are generally okay to use right away. That’s wrong, of course, but the only ways to reeducate our users are either through a docs update like this or by burning them a few times by shipping rougher experimental features that do break or go away. I think a documentation update is preferable to that.

We should also write something in the contributor’s guide about how to write experimental features and graduate them and so on, but that follows after first getting consensus around what the stages are.

cjihrig
cjihrig previously requested changes Jan 5, 2023
Contributor

@cjihrig cjihrig left a comment

I don't think we should do this for a few reasons:

  • We should never discourage user testing. We've had the problem of not getting enough user testing in the past.
  • We should not try to discourage things from getting popular. Yes, things could have gone better with async_hooks, but that is now a pretty old API and seems like more of an isolated incident than a big recurring problem. Also, Node is not the only JS runtime in town anymore. If we don't want features to get popular, I can think of at least two other runtimes that would love to have our users and give them features.
  • If a feature is so immature that we don't want anyone to use it, then we should not ship it. We should maintain a fork with the feature as we have done in the past, or tell people to build from main. In your other proposal, you mentioned a build flag here - so we'd just be telling people to build from source anyway.
  • Even though these are bullets under the Experimental stability index, we are essentially creating a stability index with six distinct stages and only really want people to use one of them.
  • The text here doesn't provide any real guarantees. If something is listed as a release candidate, it can still be completely broken or removed at any time.
  • Users should only be concerned with whether or not something is production ready. All of these shades of experimental grey are just likely to confuse people - or upset them because we don't provide any guarantees ("you completely removed this release candidate feature!").
  • Have we done any surveying of how other languages/runtimes handle this? If not, we should do that first.

I'm not opposed to adding text stating that experimental features may be in various stages of development.

@GeoffreyBooth
Member Author

I pushed an update with revised text that tries to resolve all the notes that people left. The various opinions are conflicting, so the perfect text isn’t possible, but hopefully this comes closer to a majority opinion and we can continue to revise. If you don’t mind, if you see something you don’t like in the new text, please suggest an alternative (and thanks to those who did so for the previous draft).

@GeoffreyBooth
Member Author

GeoffreyBooth commented Jan 8, 2023

Have we done any surveying of how other languages/runtimes handle this? If not, we should do that first.

An example that deserves special mention is Oracle Java: https://blogs.oracle.com/javamagazine/post/the-role-of-preview-features-in-java-14-java-15-java-16-and-beyond. They have three categories of what they call “nonfinal” features: “preview,” “experimental” and “incubating.” The article goes into deep detail about what kind of feedback the core team wants from each stage and where the feedback should be submitted. Like the other examples I found, none of these features are enabled by default:

Important safeguards prevent developers from using nonfinal features accidentally. This is necessary because a nonfinal feature may well be different when it becomes final and permanent in a later Java feature release. Moreover, only final, permanent features are subject to Java’s stringent backward-compatibility rules.

Therefore, to avoid unintentional use, preview and experimental features are disabled by default, and the JDK documentation unequivocally warns developers about the nonfinal nature of these features and any of their associated APIs.

Preview features are specific to a given Java SE feature release and require the use of special flags at compile time as well as at runtime. …

Experimental features are JVM features and are disabled by default; the -XX:+UnlockExperimentalVMOptions flag instructs HotSpot to allow experimental features. The actual experimental feature can then be enabled via specific flags, for example, -XX:+UseZGC for ZGC.

Finally, incubator modules are also shielded from accidental use because incubating can be done only in the jdk.incubator namespace. Therefore, an application on the classpath must use the --add-modules command-line option to explicitly request resolution for an incubating feature. Alternatively, a modular application must specify requires or requires transitive dependencies upon an incubating feature directly.

That “incubator” namespace is an interesting idea; a version of that for Node could look like import test from 'node:experimental-test', where instead of requiring an --experimental-test flag we just name the builtin experimental-test.

What these all have in common is that users must explicitly opt into enabling experimental features; not just by using the feature, but via a config setting/flag or by using a non-stable release/build. They’re never enabled by default in a stable release. I couldn’t find any project that does so, though I encourage you all to look harder than I did 😄 This list includes all the projects I found that ship semver-exempt features; the list isn’t cherry-picked. There were lots of other projects I looked at that don’t follow this pattern; I would guess they probably don’t merge in or release features that aren’t ready to be considered stable.

@cjihrig cjihrig dismissed their stale review January 8, 2023 01:34

Thanks for the survey. I'll dismiss my objection.

@GeoffreyBooth
Member Author

Thanks for the survey. I’ll dismiss my objection.

Thanks. And with regard to “I can think of at least two other runtimes that would love to have our users and give them features,” yes that’s absolutely true, but I would argue that Node.js’ value proposition is that it’s stable. Only the very brave are using Bun in production, and Deno may be past 1.0 but it’s still nowhere near as mature as Node.js. I think that a big part of our market share relative to those runtimes is that we’re considered battle-tested and proven and safe in production for big, important sites.

Reading that long article about how Java handles this made me think, whoa, Java is a platform even more mature than we are, and it really shows in the level of detail they have around this. Perhaps this is just necessary complexity when you reach the point that Java’s at and we’re possibly at, where we want to continue innovating while also needing to continue being stable enough for big companies to run mission-critical applications.

@GeoffreyBooth
Member Author

GeoffreyBooth commented Jan 8, 2023

@mcollina

I think we should focus on documenting:

  1. what criteria are needed for a feature to exit experimental status

I think a feature should be marked as stable after it’s been in 1.2 / “release candidate” for a while and no one’s reported any issues that would require breaking changes to fix. It should also require TSC approval to ensure that we’re aware.

  2. when to use CLI flags for experimental features

In the TSC meeting, we discussed hashing that out as the follow-up to this PR. If we follow the proposal in #45900, flags would be required for 1.0 and 1.1, “early development” and “active development” (previously alpha and beta). I think that in general the primary factor in deciding whether a flag should be needed would be the relative maturity of the experimental feature, which corresponds to which stage it’s in.

  3. when to emit a warning

Along the same lines as flags. In #45900 I suggested the warning be emitted for all experimental features, even RC ones; that gives contributors an incentive to get their features over the finish line and marked as stable. (If you search the codebase for Stability: 1 - Experimental you’ll find a shockingly high number of features, many of which haven’t been updated for years and should probably either be marked as stable or removed as abandoned.)
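The search mentioned above can be reproduced with a plain `grep` over the docs of a local checkout. This is only a sketch: the `NODE_SRC` path is an assumption you would point at your own clone of nodejs/node, and the count will vary by branch.

```shell
# Count experimental-feature markers in a Node.js source checkout.
# NODE_SRC is an assumed path; set it to your local nodejs/node clone.
NODE_SRC="${NODE_SRC:-$HOME/src/node}"
grep -rn "Stability: 1 - Experimental" "$NODE_SRC/doc/api" | wc -l
```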

Those three things have been applied wildly inconsistently, and we should document some guidelines.

That’s what this PR aims to begin. I absolutely agree with you that we should set some standards around flags and warnings and moving to “stable” status; and I think the most sensible way to do so is to establish experimental stages to use as a framework for doing so. The follow-up PR to this one can and should dive into all these issues.

@mcollina
Member

mcollina commented Jan 8, 2023

I don't think I will ever agree to mandatory CLI flags and experimental warnings. I believe they will greatly damage our momentum in shipping new features, because they will make it harder for people to test them.

This goes against what we have been doing for the last while, and all the excitement we generated in the community about the features we shipped in the last year or so.

This also adds another barrier to contribution, where our numbers are already dwindling. Why make things harder?

We should be working to increase our velocity, not slow it down.

My objection remains on the basis that this will be used to attach mandatory CLI flags and warnings to features.

@jasnell
Member

jasnell commented Jan 8, 2023

Honestly, if we're going to start requiring a flag to opt in to experimental features, we shouldn't emit warnings for those. Going back to the survey of other runtimes, are there any that emit console warnings when an explicitly opted-in experimental feature is enabled?

That said, I am in full agreement with @mcollina and I'm still generally -1 on this whole approach.

@GeoffreyBooth
Member Author

@mcollina: I don’t think I will ever agree to mandatory CLI flags and experimental warnings. . . . My objection remains on the basis that this will be used to attach mandatory CLI flags and warnings to features.

This PR never mentions flags or warnings, mandatory or otherwise. Is there anything in this PR that you find objectionable?

Yes the intention is to follow this up with another PR that discusses what, if any, requirements we want to attach to each stage. But we haven’t even begun that discussion, much less made decisions about that. We agreed in the last TSC meeting to first settle on what we thought the stages were, and then move on to discussing what requirements or recommendations we would make for each stage. I would think that that would take the form of a section in the collaborator’s guide, as it’s not really information that the general public needs; and when we write it we can go into detail about what considerations to take. For example, features that are unlikely to ever be used in production like --watch or --test probably don’t need as much protection (flag, warning) as something like fetch. I think that the relative maturity of a feature is the primary factor in terms of how much protection it needs, but it’s not the only factor; and we can use the follow-up PR to discuss what the other considerations should be and how they should all be weighed.

@jasnell: Honestly, if we’re going to start requiring a flag to opt in to experimental features, we shouldn’t emit warnings for those.

This PR never mentions flags or warnings, mandatory or otherwise. We can discuss standards for when flags and warnings should be used in a follow-up PR.

Going back to the survey of other runtimes, are there any that emit console warnings when an explicitly opted-in experimental feature is enabled?

I didn’t notice any mention of warnings in the various projects’ documentation; I think you would have to try each one. Deno doesn’t print a warning from what I can tell.

That said, I am in full agreement with @mcollina and I’m still generally -1 on this whole approach.

What are your objections to this PR? I’ve already revised per your notes above.

@jasnell
Member

jasnell commented Jan 8, 2023

On this PR I still do not believe the additional formality is needed or that it helps. I appreciate the changes that have been made already but I still don't think it's necessary at all.

@mcollina
Member

mcollina commented Jan 8, 2023

I'm ok with the text as currently written, and I'm ok to land it as long as there are no planned flags or runtime checks attached to the levels.

However, the whole premise of this work is adding runtime flags or build-time options for specific levels:

follow-up PRs will classify all currently experimental features accordingly and add any stage requirements such as flags or warnings that we decide are appropriate

I cannot approve this approach.

@GeoffreyBooth
Member Author

GeoffreyBooth commented Jan 8, 2023

Thanks. I think we should keep this on the TSC agenda to discuss at our next meeting. I assume the next steps are:

  • A few people need to actually approve this, not just unobject to it. I’m happy to revise further if anyone has more notes.

  • Once this has a few approvals, let’s consider this PR “locked” as approved until a companion PR can go and update all the existing experimental features to classify them as a particular stage. Then we can land the two PRs at the same time, so that the new docs are updated only once with the changes from both PRs.

I feel like there might be a lot of debate around the classifications of various features, so I want to separate that discussion into its own PR (or even a set of PRs) rather than adding all of that onto this one. We might even decide to revise this one as a result of reviewing all our existing features, for example if we decide we need a fourth stage or could maybe make do with only two. When I search for Stability: 1 - Experimental in the codebase I get 137 results, but up to 26 of those hits may be duplicates because they’re for a flag marked as experimental in cli.md (19) or an error marked as experimental in errors.md (7) separate from the feature itself also marked as experimental; so we have something like 111 features to classify.

@mcollina
Member

mcollina commented Jan 8, 2023

I think most of those could be graduated to stable.

@GeoffreyBooth GeoffreyBooth removed the request for review from jasnell January 9, 2023 00:41
Contributor

@RaisinTen RaisinTen left a comment

LGTM

@RaisinTen
Contributor

Once this has a few approvals, let’s consider this PR “locked” as approved until a companion PR can go and update all the existing experimental features to classify them as a particular stage. Then we can land the two PRs at the same time, so that the new docs are updated only once with the changes from both PRs.

FWIW, the companion PRs would probably need changes to some of the scripts in tools/doc to adjust the documentation website according to the newer indices. Also, I think updating the existing experimental features would take a considerable amount of time. All of this would result in this PR staying open for a long time, so if I were you I wouldn't block this PR on the companion PRs, because this is an improvement already.

doc/api/documentation.md Show resolved Hide resolved
@mhdawson
Member

mhdawson commented Jan 9, 2023

@RaisinTen one option might be adding a line in the new text saying that this subdivision is new and that some APIs may still just be tagged as experimental. The tools/doc change might make sense to have in place as part of this PR.

Member

@mhdawson mhdawson left a comment

LGTM

Member

@ChALkeR ChALkeR left a comment

I do have a slight concern about the border between 1.2 and 2, as if we don't implement a warning recommendation at level 1.2, then the ecosystem might eventually start treating 1.2 as 2 -- the language is similar, and if some features will seamlessly go from 1.2 to 2, the ecosystem might stop seeing any difference.

This is not a blocker, but something that we should keep in mind later, when implementing recommendations and actually using these stages.

LGTM

@mhdawson
Member

We discussed this again in the TSC meeting yesterday, and the next steps agreed by the 7 TSC members there were:

  1. Land this PR, but mark it "don't backport" so that it will not show up in a release immediately.
  2. Work to get PRs landed that update the experimental status for 2-3 subsystems, also marked "don't backport". Doing this will help validate that the categories can be applied to currently experimental features.
  3. Once we have the 2-3 PRs and we are happy with how the categories applied, remove the "don't backport" from all of the PRs.

@nodejs/tsc since we had a smaller group in the discussion at the meeting.

@GeoffreyBooth GeoffreyBooth added dont-land-on-v14.x dont-land-on-v18.x PRs that should not land on the v18.x-staging branch and should not be released in v18.x. author ready PRs that have at least one approval, no pending requests for changes, and a CI started. and removed tsc-agenda Issues and PRs to discuss during the meetings of the TSC. labels Jan 12, 2023
@GeoffreyBooth
Member Author

Per #46100 (comment) I’ll land this tomorrow (Friday) unless anyone objects. It’s marked as “don’t backport.”

@GeoffreyBooth GeoffreyBooth added commit-queue Add this label to land a pull request using GitHub Actions. commit-queue-squash Add this label to instruct the Commit Queue to squash all the PR commits into the first one. labels Jan 14, 2023
@nodejs-github-bot nodejs-github-bot removed the commit-queue Add this label to land a pull request using GitHub Actions. label Jan 14, 2023
@nodejs-github-bot nodejs-github-bot merged commit 1022c6f into nodejs:main Jan 14, 2023
@nodejs-github-bot
Collaborator

Landed in 1022c6f
