rbenv shell specs- refactor to test the behavior instead of the implementation? #1455
As a follow-up to this issue, I made a PR in my forked version of the codebase which refactors some of these specs.

**Side note:** Obviously, this refactor was not something that the RBENV core team requested, and I won't be surprised or hurt if the suggestion is declined. I'm treating this mostly as a learning opportunity, one that I hope doesn't take up too much of the core team's bandwidth. I welcome any constructive criticism you might have.

**Question:** Most of the re-written specs pass as intended, but one of them fails. The reason I would expect the failing spec to pass is that the command it exercises works when I run it manually in my shell.

Since the command works in the shell but not in the test, I'm guessing the test in its current form leaves out some important setup step, or is otherwise written incorrectly. As a side question, I'm also wondering whether the three re-written specs which do pass are giving me a misleading result.
Hi, thanks for the code spike, and sorry for the late reply. Absolutely agreed about the limitation of testing output (i.e. generated code) instead of that code's effect. I'm a big believer in integration tests being the best kind of verification for an intended feature, but in practice I find these kinds of tests hard to set up and maintain over a long period of time. One limitation of your eval-based test is that from within bats (which is implemented in bash) we can only verify the correctness of the bash implementation, but probably not zsh, fish, and others without spawning their subshells. And if we were to spawn subshells of fish, zsh, etc. for integration tests, our test suite would become dependent on the existence of those shells on the system, whereas now it's only dependent on bash. Do you think that would be a worthwhile tradeoff?
No worries, I appreciate your feedback! :-)
I'm trying to understand the implications of this. It sounds like we'd be asking RBENV contributors to install two additional shells (fish and zsh) in order to run the updated tests; is that correct? If the request is limited to installing two additional shells, that seems reasonable to me. They're each a one-time, small-ish download (on the order of 15.5MB).

On the other hand, if we're asking contributors to install every conceivable shell on which RBENV could be run (i.e. not just fish and zsh), that seems like a debilitatingly huge request, and we'd be better off leaving things as they are.

Another option we could consider: why not both? We could supplement the current testing harness with additional integration-level tests of the kind under discussion. These tests could potentially bail out early with a success code (?) if the user doesn't have the required dependencies.

One downside here is that I'd expect CI to run these supplemental tests, meaning it would need those shells as dependencies. This, in turn, means that some specs would run in CI but not locally (for users who are missing those dependencies). I don't love the idea of the test suite working differently in a local env vs. in CI, or the precedent that sets. But if these tests are nice-to-haves (instead of must-haves), maybe that's not a deal-breaker?

Thoughts? Am I over-thinking this?
I like this, as it presents a path of least disruption to contributors. We can ensure that in CI, zsh and fish are always present in addition to bash.
Sounds good, I'll get started on a solution.
@mislav here's a first pass at a PR. I'll update it based on any subsequent feedback. |
I'm reading through the `bats` tests for `rbenv-sh-shell` now, and I notice that many of them assert that specific code is output via `echo`, `cat`, etc. Some even assert that entire `if` conditions are printed to the screen.

This was initially confusing to me as a newbie to the codebase, and it was only later that I landed on the hypothesis that the tests are written this way because that output will be `eval`uated by the `rbenv` shell function (correct?). If that's the case, isn't this equivalent to testing the implementation instead of testing the behavior? I would think this would require someone attempting a green-to-green refactor of the code to change both the code and the spec, which feels like it goes against the spirit of green-to-green refactors. Separately, it also makes it harder to use the tests as complementary documentation to the README and the usage/summary comments.

I took a stab at refactoring two of the tests as an experiment, and came up with the following:
This test passed:
I'm not sure if the other specs would be as easy to port over as this one was, but I'm happy to take a shot at it. First though, I wanted to check with y'all to see whether such a PR would be well-received or not, since there could be a good reason why these tests were written the way that they were.