
Parameterized test doesn't execute before_all and after_all for each iteration #71

Open
skaina00 opened this issue Apr 12, 2022 · 1 comment
Labels
enhancement New feature or request

Comments

@skaina00

If you create a unit test using parameterized, it doesn't execute before_all and after_all for each iteration (the behavior expected in most common test frameworks). During my tests I could see that they execute once per test fixture, not once per iteration.

I couldn't find any docs saying Nutter is compatible with the parameterized library. Does anyone know of an option?

See the example below:

import uuid
from parameterized import parameterized
from runtime.nutterfixture import NutterFixture, tag

class TestParam(NutterFixture):
    def before_all(self):
        self.random_name = uuid.uuid4().hex
        print(f"started [{self.random_name}]")

    @parameterized.expand([("AAA"), ("BBB"), ("CCC")])
    def assertion_test(self, param1):
        print(f"processing [{param1}] with [{self.random_name}]")
        assert param1 == param1

    def after_all(self):
        print(f"finished [{self.random_name}]")

result = TestParam().execute_tests()
print(result.to_string())

Current results:

started [23c4c652b5364d44a0bcac132df51317]
processing [AAA] with [23c4c652b5364d44a0bcac132df51317]
processing [BBB] with [23c4c652b5364d44a0bcac132df51317]
processing [CCC] with [23c4c652b5364d44a0bcac132df51317]
finished [23c4c652b5364d44a0bcac132df51317]

Notebook: N/A - Lifecycle State: N/A, Result: N/A
Run Page URL: N/A
============================================================
PASSING TESTS
------------------------------------------------------------
test_0_AAA (1.4899997040629387e-05 seconds)
test_1_BBB (9.099996532313526e-06 seconds)
test_2_CCC (7.400005415547639e-06 seconds)
============================================================

Expected results:

started [23c4c652b5364d44a0bcac132df51317]
processing [AAA] with [23c4c652b5364d44a0bcac132df51317]
finished [23c4c652b5364d44a0bcac132df51317]

started [9999c652b5364d44a0bcac132df59999]
processing [BBB] with [9999c652b5364d44a0bcac132df59999]
finished [9999c652b5364d44a0bcac132df59999]

started [aaaac652b5364d44a0bcac132df5aaaa]
processing [CCC] with [aaaac652b5364d44a0bcac132df5aaaa]
finished [aaaac652b5364d44a0bcac132df5aaaa]

Notebook: N/A - Lifecycle State: N/A, Result: N/A
Run Page URL: N/A
============================================================
PASSING TESTS
------------------------------------------------------------
test_0_AAA (1.4899997040629387e-05 seconds)
test_1_BBB (9.099996532313526e-06 seconds)
test_2_CCC (7.400005415547639e-06 seconds)
============================================================
@giventocode
Contributor

Currently, before_all and after_all are executed once per test fixture instance. What you describe is functionally analogous to before_each and after_each, which would be a new feature we could consider. If you want to execute something before each test case, you need to call a method in each of your test cases explicitly.
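The workaround described above can be sketched as follows. This is a hypothetical, library-independent stand-in (plain Python classes instead of NutterFixture and parameterized, and the `_before_each`/`_after_each`/`run_case` names are invented for illustration): each parameterized case explicitly calls a shared setup helper at its start and a teardown helper at its end, so per-iteration state is regenerated even though the fixture-level hooks run only once.

```python
import uuid

class TestParamWorkaround:
    """Sketch of per-case setup/teardown invoked manually from each test case."""

    def _before_each(self):
        # Per-case setup: regenerate state for every iteration.
        self.random_name = uuid.uuid4().hex

    def _after_each(self):
        # Per-case teardown: release per-iteration state.
        self.random_name = None

    def run_case(self, param1):
        self._before_each()
        try:
            assert param1 == param1
            result = f"processed [{param1}] with [{self.random_name}]"
        finally:
            self._after_each()
        return result

fixture = TestParamWorkaround()
outputs = [fixture.run_case(p) for p in ("AAA", "BBB", "CCC")]
```

Because `_before_each` runs inside every case, each iteration sees a fresh `random_name`, matching the "Expected results" shown in the issue; the trade-off is that every test case must remember to make the call.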

@giventocode giventocode added the enhancement New feature or request label Apr 25, 2022