It is not currently possible to enable experiments for package dependencies in the analyzer #55688
Without this there is no way to enable the experiment for a dependency, so nobody can try out package:json.

Bug: #55688
Change-Id: Id2f10b10dde0eb6d68d0bf76030ff557087ee83b
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/366041
Reviewed-by: Leaf Petersen <leafp@google.com>
Auto-Submit: Jake Macdonald <jakemac@google.com>
Reviewed-by: Konstantin Shcheglov <scheglov@google.com>
Commit-Queue: Leaf Petersen <leafp@google.com>
Yes, the analyzer is different from other tools, primarily because it has to support the analysis server, which means that it has to be able to analyze multiple "entry points" simultaneously. But the analyzer's handling of experiment flags might not be as different from other tools as it first appears to be.

If you have two packages open in the workspace, P1 and P2, where P1 has a dependency on P2 and each has its own package configuration file, the analyzer treats those packages as two separate entry points. It's equivalent to compiling both an entry point in P1 and an entry point in P2. In other words, it analyzes P2 twice, just as a compiler would compile the code in P2 twice: once in its own context, without reference to P1, and once in the context of P1.

Enabling experiments in an analysis options file is equivalent to enabling them on the command line when the analyzer is analyzing code in, or reachable from, a file in the package containing that options file. From that perspective it sounds like the handling is exactly the same. What the analyzer doesn't do is borrow the experiment flags from a depended-on package. In some ways that would be like asking

One work-around for users is to require that if they want to depend on a package that requires an experiment (P2), then they need to enable that experiment in the analysis options file of the root package (P1), just as they would if compiling from the command line. That doesn't feel unreasonable to me, but YMMV.

That doesn't mean that we shouldn't consider ways to make the behavior more consistent, but we do need to weigh the actual value of changes from the user's perspective against the cost of implementing and maintaining the change. Macros is the first, and might be the last, experiment that we're planning on encouraging users to try before it's ready to ship, so this appears to have bounded (though possibly great) value.
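The work-around described above can be illustrated with a minimal options file. This sketch assumes the root package (P1) depends on a package that requires the `macros` experiment; the experiment name is only an example:

```yaml
# analysis_options.yaml at the root of P1 (the package being developed).
# Enabling the experiment here covers code in P1 and code reachable
# from it, including the depended-on package P2.
analyzer:
  enable-experiment:
    - macros
```

This mirrors what you would do when compiling from the command line by passing `--enable-experiment=macros`.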
We have had some preliminary discussions about alternatives, and I'll capture the thoughts that I can remember off-hand. These are not exhaustive, nor are they in any particular order, but I'm hopefully representing the discussions accurately.
Option 1: Leave things as they are. This suggestion is founded on one or both of the following assessments. First, the value to users, aside from this one experiment, is likely to be very low, so we shouldn't spend time on it. Second, what we're already doing is as good as or better than any of the other alternatives, and there's a work-around, so it isn't worth changing.
Option 2: Analyze the files in each package using the experiments enabled for that package. That's still different (from my perspective) from the way other tools work: if you compile P1 without enabling the experiment that P2 depends on, your compilation might well fail, whereas the analyzer would not tell you that with this approach. The difference in output between the analyzer and a compiler might be confusing. It might also be confusing to try to remember which experiments are enabled in any given file.
Option 3: Analyze all files using the union of the experiments enabled across the packages. (It isn't clear to me whether that option would be performant; I'd have to think about it a little more.) The result here would be similar to the result in option 2, except that all of the files would be analyzed consistently with each other. While users wouldn't need to remember which experiments are enabled for which file, they would have to search through analysis options files to figure out which experiments are enabled. It could also lead to some confusion if the user doesn't realize that by adding a dependency on P2 they're opting in to a potentially breaking experiment. This option also doesn't feel consistent with other tools.
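The union behavior just described could be sketched as a simple walk over the dependency graph, collecting the experiments each package enables. Everything here is hypothetical and for illustration only: the package names, the experiment name, and the function itself.

```python
def union_of_experiments(root, deps, experiments):
    """Return the union of experiments enabled by `root` and all of its
    transitive dependencies.

    deps: package name -> list of direct dependencies
    experiments: package name -> set of experiments its options file enables
    """
    seen, stack, enabled = set(), [root], set()
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue  # skip packages already visited (handles cycles)
        seen.add(pkg)
        enabled |= experiments.get(pkg, set())
        stack.extend(deps.get(pkg, []))
    return enabled
```

For example, if P1 depends on P2 and only P2 enables `macros`, analyzing P1 with this scheme would enable `macros` everywhere, which is exactly the "opting in by adding a dependency" concern above.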
From what I understand, this is not how it works. If your package enables the experiment via analysis_options.yaml, it doesn't enable it for transitively reachable files (specifically, we care about package dependencies in this case). If it can be changed to work that way, that would resolve the issue. Or if I am mistaken and it does work that way, but then I don't understand what was happening in the original issue.
This is exactly the UX I want (enable an experiment in my app, and all my dependencies also get that experiment enabled). To me that sounds the closest to how the other tools work.
@scheglov Since I might have misunderstood: if the experiments in a package being analyzed are not enabled when analyzing files from depended-on packages (when they're being analyzed in the original package's context), why not? And can they be?
IIUC, with the changes that @pq implemented, there is no such thing as "experiments in a package". I propose that we bring this idea to its logical conclusion, and use
For all of the other analysis options, I think it makes sense to allow them to be specified per file. I'm not as convinced that it's helpful to users to be able to have different experiments enabled for different files within a package. Of course, doing so would answer the question of what it means if an options file below the root of the package enables experiments.
For everything other than experiments I think that would be a no-op. I don't think we're going to enable lints, apply severity changes, or any other options that I can think of when analyzing those files. And it would have a (possibly small) performance implication, because now we'd need to search the directory structure within these packages in order to locate the analysis options files.
A user should rarely enable experiments at all. But if they do, I think it is fine for us to implement it in the most straightforward way. Here this means using `analysis_options.yaml`.
We will have to walk to the root only once.
I don't know what that means. If a package under development depends on another package, and imports N different libraries from that other package, won't we have to find the analysis options file associated with each of those N libraries? Which seems to me like we're either walking up to the root of the package N times or we're traversing the whole package structure one time.
Each directory has only one `analysis_options.yaml`. In the worst case, if the package does not have its own `analysis_options.yaml`, we walk up to the root. This presumes that we use some options-file-producing class with caching, a single instance per
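The walk-once-with-caching idea could be sketched as follows. This is a hypothetical illustration of the scheme being discussed, not the analyzer's actual implementation; the function and file names are assumptions.

```python
from pathlib import Path

OPTIONS_FILE = "analysis_options.yaml"

def find_options_file(directory: Path, cache: dict):
    """Return the nearest analysis_options.yaml at or above `directory`,
    or None if there is none. Every directory visited is memoized, so
    resolving N libraries from the same package walks each directory
    only once."""
    if directory in cache:
        return cache[directory]
    candidate = directory / OPTIONS_FILE
    if candidate.is_file():
        result = candidate
    elif directory.parent == directory:  # reached the filesystem root
        result = None
    else:
        result = find_options_file(directory.parent, cache)
    cache[directory] = result
    return result
```

With a shared cache, looking up the options file for many libraries under the same package amortizes to a single walk to the root, which matches the "walk to the root only once" claim above.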
For any package inside
Is anybody working on this one? We might want to consider bumping its priority. We have a number of experiments with macros out there (https://pub.dev/packages/freezed/versions/3.0.0-0.0.dev as an example). I don't believe any of these can work today unless you are developing within the package itself, or otherwise have it open in a workspace.
I believe that the answer is no. This is still on our radar, but is lower priority than some other work at the moment. |
This comes up for macros - it is not possible today to release an experimental macro package and have people try it out.
In my understanding, this is because analysis_options.yaml files are only loaded if they are in the workspace. So even if the package has an analysis_options.yaml which enables the experiment, it won't actually be enabled unless you add that package to your workspace.
This is contrary to how other tools work, because they globally enable experiments.