New Official Pytest plugin for Celery: pytest-celery v1.0.0 is now released! #8963
Nusnus started this conversation in Show and tell
Is there an upgrade guide available? I'm seeing my test suite hang on Linux and errors on Windows in CI after upgrading.
The Windows errors:
Python 3.12:
_____________ ERROR at setup of test_celery[celery_setup_worker] ______________
request = <SubRequest 'celery_worker' for <Function test_celery[celery_setup_worker]>>
@pytest.fixture(params=ALL_CELERY_WORKERS)
def celery_worker(request: pytest.FixtureRequest) -> CeleryTestWorker: # type: ignore
"""Parameterized fixture for all supported celery workers. Responsible for
tearing down the node.
This fixture will add parametrization to the test function, so that
the test will be executed for each supported celery worker.
"""
> worker: CeleryTestWorker = request.getfixturevalue(request.param)
.venv\Lib\site-packages\pytest_celery\fixtures\worker.py:31:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv\Lib\site-packages\pytest_docker_tools\factories\build.py:36: in build
image, logs = docker_client.images.build(**kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.models.images.ImageCollection object at 0x00000206349C1B80>
kwargs = {'buildargs': {'CELERY_LOG_LEVEL': 'INFO', 'CELERY_VERSION': '', 'CELERY_WORKER_NAME': 'celery_test_worker', 'CELERY_W...env\\Lib\\site-packages\\pytest_celery\\vendors\\worker', 'rm': True, 'tag': 'pytest-celery/components/worker:default'}
resp = <generator object APIClient._stream_helper at 0x0000020634583740>
last_event = {'id': '3.10-slim-buster', 'status': 'Pulling from library/python'}
image_id = None, result_stream = <itertools._tee object at 0x0000020634917680>
internal_stream = <itertools._tee object at 0x00000206349179C0>
chunk = {'error': 'no matching manifest for windows/amd64 10.0.20348 in the manifest list entries', 'errorDetail': {'message': 'no matching manifest for windows/amd64 10.0.20348 in the manifest list entries'}}
match = None
def build(self, **kwargs):
"""
Build an image and return it. Similar to the ``docker build``
command. Either ``path`` or ``fileobj`` must be set.
If you already have a tar file for the Docker build context (including
a Dockerfile), pass a readable file-like object to ``fileobj``
and also pass ``custom_context=True``. If the stream is also
compressed, set ``encoding`` to the correct value (e.g ``gzip``).
If you want to get the raw output of the build, use the
:py:meth:`~docker.api.build.BuildApiMixin.build` method in the
low-level API.
Args:
path (str): Path to the directory containing the Dockerfile
fileobj: A file object to use as the Dockerfile. (Or a file-like
object)
tag (str): A tag to add to the final image
quiet (bool): Whether to return the status
nocache (bool): Don't use the cache when set to ``True``
rm (bool): Remove intermediate containers. The ``docker build``
command now defaults to ``--rm=true``, but we have kept the old
default of `False` to preserve backward compatibility
timeout (int): HTTP timeout
custom_context (bool): Optional if using ``fileobj``
encoding (str): The encoding for a stream. Set to ``gzip`` for
compressing
pull (bool): Downloads any updates to the FROM image in Dockerfiles
forcerm (bool): Always remove intermediate containers, even after
unsuccessful builds
dockerfile (str): path within the build context to the Dockerfile
buildargs (dict): A dictionary of build arguments
container_limits (dict): A dictionary of limits applied to each
container created by the build process. Valid keys:
- memory (int): set memory limit for build
- memswap (int): Total memory (memory + swap), -1 to disable
swap
- cpushares (int): CPU shares (relative weight)
- cpusetcpus (str): CPUs in which to allow execution, e.g.,
``"0-3"``, ``"0,1"``
shmsize (int): Size of `/dev/shm` in bytes. The size must be
greater than 0. If omitted the system uses 64MB
labels (dict): A dictionary of labels to set on the image
cache_from (list): A list of images used for build cache
resolution
target (str): Name of the build-stage to build in a multi-stage
Dockerfile
network_mode (str): networking mode for the run commands during
build
squash (bool): Squash the resulting images layers into a
single layer.
extra_hosts (dict): Extra hosts to add to /etc/hosts in building
containers, as a mapping of hostname to IP address.
platform (str): Platform in the format ``os[/arch[/variant]]``.
isolation (str): Isolation technology used during build.
Default: `None`.
use_config_proxy (bool): If ``True``, and if the docker client
configuration file (``~/.docker/config.json`` by default)
contains a proxy configuration, the corresponding environment
variables will be set in the container being built.
Returns:
(tuple): The first item is the :py:class:`Image` object for the
image that was built. The second item is a generator of the
build logs as JSON-decoded objects.
Raises:
:py:class:`docker.errors.BuildError`
If there is an error during the build.
:py:class:`docker.errors.APIError`
If the server returns any other error.
``TypeError``
If neither ``path`` nor ``fileobj`` is specified.
"""
resp = self.client.api.build(**kwargs)
if isinstance(resp, str):
return self.get(resp)
last_event = None
image_id = None
result_stream, internal_stream = itertools.tee(json_stream(resp))
for chunk in internal_stream:
if 'error' in chunk:
> raise BuildError(chunk['error'], result_stream)
E docker.errors.BuildError: no matching manifest for windows/amd64 10.0.20348 in the manifest list entries
.venv\Lib\site-packages\docker\models\images.py:304: BuildError
Python 3.8:
_____________ ERROR at setup of test_celery[celery_setup_worker] ______________
request = <SubRequest 'celery_worker' for <Function test_celery[celery_setup_worker]>>
@pytest.fixture(params=ALL_CELERY_WORKERS)
def celery_worker(request: pytest.FixtureRequest) -> CeleryTestWorker: # type: ignore
"""Parameterized fixture for all supported celery workers. Responsible for
tearing down the node.
This fixture will add parametrization to the test function, so that
the test will be executed for each supported celery worker.
"""
> worker: CeleryTestWorker = request.getfixturevalue(request.param)
.venv\lib\site-packages\pytest_celery\fixtures\worker.py:31:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv\lib\site-packages\pytest_docker_tools\factories\volume.py:122: in volume
_populate_volume(docker_client, volume, seeds)
.venv\lib\site-packages\pytest_docker_tools\factories\volume.py:56: in _populate_volume
image, logs = docker_client.images.build(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.models.images.ImageCollection object at 0x0000016F5FBA4DF0>
kwargs = {'path': 'd:\\a\\kolo\\kolo\\python\\.venv\\lib\\site-packages\\pytest_docker_tools\\factories\\..\\contexts/scratch', 'rm': True}
resp = <generator object APIClient._stream_helper at 0x0000016F5FE18660>
last_event = {'stream': '\n'}, image_id = None
result_stream = <itertools._tee object at 0x0000016F5FA41340>
internal_stream = <itertools._tee object at 0x0000016F5FFBEF80>
chunk = {'error': 'Windows does not support FROM scratch', 'errorDetail': {'message': 'Windows does not support FROM scratch'}}
match = None
def build(self, **kwargs):
"""
Build an image and return it. Similar to the ``docker build``
command. Either ``path`` or ``fileobj`` must be set.
If you already have a tar file for the Docker build context (including
a Dockerfile), pass a readable file-like object to ``fileobj``
and also pass ``custom_context=True``. If the stream is also
compressed, set ``encoding`` to the correct value (e.g ``gzip``).
If you want to get the raw output of the build, use the
:py:meth:`~docker.api.build.BuildApiMixin.build` method in the
low-level API.
Args:
path (str): Path to the directory containing the Dockerfile
fileobj: A file object to use as the Dockerfile. (Or a file-like
object)
tag (str): A tag to add to the final image
quiet (bool): Whether to return the status
nocache (bool): Don't use the cache when set to ``True``
rm (bool): Remove intermediate containers. The ``docker build``
command now defaults to ``--rm=true``, but we have kept the old
default of `False` to preserve backward compatibility
timeout (int): HTTP timeout
custom_context (bool): Optional if using ``fileobj``
encoding (str): The encoding for a stream. Set to ``gzip`` for
compressing
pull (bool): Downloads any updates to the FROM image in Dockerfiles
forcerm (bool): Always remove intermediate containers, even after
unsuccessful builds
dockerfile (str): path within the build context to the Dockerfile
buildargs (dict): A dictionary of build arguments
container_limits (dict): A dictionary of limits applied to each
container created by the build process. Valid keys:
- memory (int): set memory limit for build
- memswap (int): Total memory (memory + swap), -1 to disable
swap
- cpushares (int): CPU shares (relative weight)
- cpusetcpus (str): CPUs in which to allow execution, e.g.,
``"0-3"``, ``"0,1"``
shmsize (int): Size of `/dev/shm` in bytes. The size must be
greater than 0. If omitted the system uses 64MB
labels (dict): A dictionary of labels to set on the image
cache_from (list): A list of images used for build cache
resolution
target (str): Name of the build-stage to build in a multi-stage
Dockerfile
network_mode (str): networking mode for the run commands during
build
squash (bool): Squash the resulting images layers into a
single layer.
extra_hosts (dict): Extra hosts to add to /etc/hosts in building
containers, as a mapping of hostname to IP address.
platform (str): Platform in the format ``os[/arch[/variant]]``.
isolation (str): Isolation technology used during build.
Default: `None`.
use_config_proxy (bool): If ``True``, and if the docker client
configuration file (``~/.docker/config.json`` by default)
contains a proxy configuration, the corresponding environment
variables will be set in the container being built.
Returns:
(tuple): The first item is the :py:class:`Image` object for the
image that was built. The second item is a generator of the
build logs as JSON-decoded objects.
Raises:
:py:class:`docker.errors.BuildError`
If there is an error during the build.
:py:class:`docker.errors.APIError`
If the server returns any other error.
``TypeError``
If neither ``path`` nor ``fileobj`` is specified.
"""
resp = self.client.api.build(**kwargs)
if isinstance(resp, str):
return self.get(resp)
last_event = None
image_id = None
result_stream, internal_stream = itertools.tee(json_stream(resp))
for chunk in internal_stream:
if 'error' in chunk:
> raise BuildError(chunk['error'], result_stream)
E docker.errors.BuildError: Windows does not support FROM scratch
.venv\lib\site-packages\docker\models\images.py:304: BuildError
I'm also completely failing to install on PyPy on Windows:
v1.0.0 Official Release
User Manual: https://pytest-celery.readthedocs.io
PyPI: https://pypi.org/project/pytest-celery
Source: https://github.com/celery/pytest-celery
Install with:
pip install -U "pytest-celery[all]"
Key Highlights
Simple
The plugin provides a single entry point to the test case and makes sure everything is configured according to the selected architecture and requirements.
By default, all of the supported architecture components are added to a matrix of all possible combinations.
Pytest will generate a test case for each combination, and will run it in an isolated environment.
This allows separation of concerns, and makes it simple to access different architectures in a single test case, for example:
This code generates a test case for every combination of the supported brokers and backends, using the latest version of Celery. Within the context of the test, each combination is available through the celery_setup fixture, with access to all of the required components, and runs in an isolated environment.
Flexible
The plugin is highly configurable, and can be used to test a wide range of Celery architectures.
It can be configured to use a specific version of Celery, or to use a specific version of a broker or backend.
It can also be configured to use a custom broker or backend, or to use a custom Celery application.
For basic usage, the plugin provides default components that can be configured and extended.
For more advanced use cases, the plugin uses the pytest fixtures mechanism to allow injecting custom components into the environment and building a custom Celery architecture for your project.
For example, see the rabbitmq-management example, which demonstrates how to replace the default broker matrix with a single RabbitMQ Management broker.
Fast
The plugin is designed to run tests in parallel using isolated environments. It supports the pytest-xdist plugin to run tests in parallel and scales well with available resources to improve the overall test suite performance.
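For example, parallel execution can be enabled with a standard pytest-xdist invocation (shown here as a sketch; install pytest-xdist separately if it is not already present in your environment):

```shell
# Run the suite in parallel across available CPU cores with pytest-xdist.
pip install pytest-xdist
pytest -n auto
```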
Annotated
The codebase is fully annotated with type hints and is tested with mypy to ensure type safety across the board, allowing for a better development experience.
Supports
Workers
Brokers
Backends
Clusters
Features
Architecture Injection
By default, a set of predefined components is used to build the Celery architecture for each test.
Each built-in component can be either configured or completely replaced with a custom implementation.
Architecture Injection can be done at different layers, and can be used to replace only specific elements of the architecture pipeline, or to replace the entire pipeline altogether.
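As a sketch of fixture-level injection (following the plugin's documented pattern of overriding its fixtures in conftest.py; `my_project.tasks` is a hypothetical module standing in for your own task code):

```python
import pytest


@pytest.fixture
def default_worker_tasks(default_worker_tasks: set) -> set:
    # Hypothetical project module containing Celery task definitions.
    from my_project import tasks

    # Add the module so the worker environment is built with these tasks.
    default_worker_tasks.add(tasks)
    return default_worker_tasks
```

Overriding a built-in fixture like this replaces only one element of the pipeline; the rest of the default architecture stays intact.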
Docker Based
The plugin uses Docker containers to build the Celery architecture for each test.
This means that the plugin is not limited to specific versions and can be used to test potentially any Celery setup.
It uses the pytest-docker-tools plugin to manage the Docker containers, making it possible to access the containers from the test case during the test run and assert on their state with high granularity.
Batteries Included
The plugin provides a set of built-in components that can be used to test ideas quickly.
You can start with the default settings and gradually modify the configurations to fine-tune the test environment. By focusing on the test case, you can quickly iterate and test ideas, without wasting time on the overhead of setting up different environments manually.
Code Generation
One of the challenges in testing production Celery applications is the need to inject testing infrastructure into the Celery worker container at runtime. The plugin provides a Code Generation mechanism that can be used to inject code into the Celery worker container at runtime according to the test case. This opens the door to a wide range of testing scenarios and allows a higher level of control over the tested Celery application.
Isolated Environments
Each test case is executed in an isolated environment. This means that each test case has its own Celery architecture and is not affected by other test cases. Tests may run in parallel, and each tears down its own environment when done, regardless of the test result.
Tests as First-Class Citizens
The plugin is designed to enhance testing capabilities by treating tests as first-class citizens. It uses advanced mechanisms to encapsulate the complexity of setting up a Celery environment, allowing the developer to focus on the test case itself and leave the heavy lifting to the plugin.
Extensible
The plugin is designed to be extensible to fit a wide range of use cases and provides a set of built-in components that can be extended to fit more advanced use cases.
It's based on the S.O.L.I.D principles and provides APIs for developing high-quality test suites. It combines the sophistication of the pytest fixtures mechanism with OOP principles to create separation of concerns between each layer of the infrastructure and its elements, which allows a higher level of granularity and control when extending the plugin.