
[FEATURE] Add structured outputs using outlines #601

Merged — 33 commits into `develop` on May 14, 2024

Conversation

@plaguss (Contributor) commented May 2, 2024

Description

This PR adds an integration with `outlines` to generate structured outputs from LLMs.

Supported LLMs:

The following LLMs are implemented; just pass the `StructuredOutputType`:

  • Transformers
  • LlamaCpp
  • vLLM

Example pipeline using `LlamaCppLLM`:

from enum import Enum
from pathlib import Path

from distilabel.llms import vLLM, LlamaCppLLM
from distilabel.pipeline import Pipeline
from distilabel.steps import LoadDataFromDicts
from distilabel.steps.tasks import TextGeneration
from pydantic import BaseModel, StringConstraints, conint
from typing_extensions import Annotated


class Weapon(str, Enum):
    sword = "sword"
    axe = "axe"
    mace = "mace"
    spear = "spear"
    bow = "bow"
    crossbow = "crossbow"


class Armor(str, Enum):
    leather = "leather"
    chainmail = "chainmail"
    plate = "plate"
    mithril = "mithril"


class Character(BaseModel):
    name: Annotated[str, StringConstraints(max_length=30)]
    age: conint(gt=1, lt=3000)
    armor: Armor
    weapon: Weapon


# Download the model with
# curl -L -o ~/Downloads/openhermes-2.5-mistral-7b.Q4_K_M.gguf https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF/resolve/main/openhermes-2.5-mistral-7b.Q4_K_M.gguf

model_path = "Downloads/openhermes-2.5-mistral-7b.Q4_K_M.gguf"

with Pipeline("RPG-characters") as pipeline:
    system_prompt = (
        "You are a leading role play gamer. You have seen thousands of different characters and their attributes."
        " Please return a JSON object with common attributes of an RPG character."
    )

    load_dataset = LoadDataFromDicts(
        name="load_instructions",
        data=[
            {
                "system_prompt": system_prompt,
                "instruction": f"Give me a character description for a {char}",
            }
            for char in ["dwarf", "elf", "human", "ork"]
        ],
    )
    llm = LlamaCppLLM(
        model_path=str(Path.home() / model_path),  # type: ignore
        n_gpu_layers=-1,
        n_ctx=1024,
        structured_output={"format": "json", "schema": Character},
    )
    # Change to vLLM as such:
    # llm = vLLM(
    #     model="teknium/OpenHermes-2.5-Mistral-7B",
    #     extra_kwargs={"tensor_parallel_size": 1},
    #     structured_output={"format": "json", "schema": Character},
    # )

    text_generation = TextGeneration(
        name="text_generation_rpg",
        llm=llm,
        input_batch_size=8,
        output_mappings={"model_name": "generation_model"},
    )
    load_dataset >> text_generation


if __name__ == "__main__":
    distiset = pipeline.run(
        parameters={
            text_generation.name: {
                "llm": {"generation_kwargs": {"max_new_tokens": 256}}
            }
        },
        use_cache=False,
    )
    print(distiset)
    df = distiset["default"]["train"].to_pandas()
    df.to_csv("rpg_characters.csv", index=False)

    for num, character in enumerate(distiset["default"]["train"]["generation"]):
        print(f"Character: {num}")
        print(character)

# Character: 0
# {
# "name": "Gimli",
# "age": 42,
# "armor": "plate",
# "weapon": "axe" }
# Character: 1
# {"name":"Gaelen","age":600,"armor":"leather","weapon":"bow"}
# Character: 2
# {"name": "John Smith","age": 35,"armor": "leather","weapon": "sword"}
# Character: 3
# { "name": "Grug", "age": 35, "armor": "leather", "weapon": "axe"}
  • OpenAI
    Outlines offers an OpenAI integration, but since OpenAI already supports JSON output natively and the other output types are not available, this case should be handled directly from the corresponding LLM:

OpenAILLM(..., response_format="json")  # To generate JSON: https://platform.openai.com/docs/api-reference/chat/create#chat-create-response_format

More information from OpenAI on function calling can be found here.
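Whether via outlines-style constrained decoding or OpenAI's native JSON support, the constraint ultimately comes from the JSON Schema that Pydantic derives from the model class; a minimal sketch (pydantic v2 assumed, fields illustrative):

```python
from pydantic import BaseModel


class Character(BaseModel):
    name: str
    age: int


# The JSON Schema that a constrained decoder (or function-calling API)
# would be handed for this model:
schema = Character.model_json_schema()
print(schema["required"])                     # ['name', 'age']
print(schema["properties"]["age"]["type"])    # integer
```

This is why `structured_output={"format": "json", "schema": Character}` only needs the class itself: the schema dict can be generated on demand.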

@alvarobartt alvarobartt added this to the 1.1.0 milestone May 7, 2024
@plaguss plaguss linked an issue May 7, 2024 that may be closed by this pull request

@pytest.fixture(scope="module")
def tiny_mistral_llm() -> TransformersLLM:
    llm = TransformersLLM(model="openaccess-ai-collective/tiny-mistral")

Member: If we're planning to use a real LLM, shouldn't this be an integration test instead?

Contributor (author): Sure, I started with the simplest/fastest approach to iterate, but I wanted to discuss it once the other pending details are done 😄

Member: Just chiming in, as mentioned this morning: we were using real LLMs in some cases, which shouldn't be the case for unit tests if we can avoid it (talking about the llama-cpp case now).

@plaguss plaguss self-assigned this May 7, 2024
@plaguss plaguss added the `enhancement` (New feature or request) and `integrations` labels May 7, 2024
pyproject.toml — review comment resolved (outdated)
@plaguss plaguss marked this pull request as ready for review May 9, 2024 09:06
@gabrielmbmb (Member) left a comment:

LGTM! This feat is awesome 🙌 just some small comments.

@@ -72,11 +74,12 @@ class LLM(RuntimeParametersMixin, BaseModel, _Serializable, ABC):
description="The kwargs to be propagated to either `generate` or `agenerate`"
" methods within each `LLM`.",
)
structured_output: Optional[Any] = None
Member: Can we be more specific with the type hint for this attribute? Or can't we, because of circular imports, etc.?

Contributor (author): I couldn't, due to circular imports and/or pydantic validation; I'd love to improve this if we can find a way.

src/distilabel/llms/huggingface/transformers.py — review comment resolved (outdated)
src/distilabel/steps/tasks/structured_outputs/outlines.py — review comments resolved (outdated)
tests/unit/llms/test_openai.py — review comment resolved (outdated)
tests/unit/steps/tasks/structured_outputs/test_outlines.py — review comment resolved (outdated)
@plaguss plaguss requested a review from gabrielmbmb May 13, 2024 10:10
@plaguss plaguss merged commit de9081c into develop May 14, 2024
3 of 4 checks passed
@plaguss plaguss deleted the structured-output branch May 14, 2024 08:40
plaguss added a commit that referenced this pull request May 20, 2024
* Allow nested connect calls and overload rshift method to connect steps (#490)

* Allow nested connect calls and overload rshift method to connect steps

* Update src/distilabel/steps/base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/pipeline/test_base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/pipeline/test_base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/pipeline/test_base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/pipeline/test_base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/pipeline/test_base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Add comment to simplify reading the tests

* Add reference on the Pipeline of alternative ways of connecting the steps

---------

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Fix `llm_blender` installation from `argilla-io` fork (#557)

* Warn user about unknown runtime parameters (#555)

* Add warning of unknown runtime parameters

* Run check only when in a pipeline, to allow unit tests for runtime parameters to pass

* Add missing `model_name`, update docstrings, and add `*.jinja2` templates to `Task` subclasses (#560)

* Remove not required `else` statement in `UltraFeedback`

* Add missing `model_name` and clean formatting in `SelfInstruct`

* Move `QualityScorer` template to `quality-scorer.jinja2`

* Move `ComplexityScorer` template to `complexity-scorer.jinja2`

* Add `model_name` to `GenerateEmbeddings`

* Fix docstrings in `InstructionBacktranslation`

* Remove `input_batch_size` in `PairRM`

* Add `model_name` in `PairRM` and update docstrings

* Add `model_name` and missing docstrings in `ComplexityScorer`

* Fix docstrings and add `model_name` in `QualityScorer`

* Fix `TestPairRM` with `model_name` and `input_batch_size`

* Split `ChatGeneration` from `TextGeneration` (#558)

* Add `ChatGeneration` and rename `text_generation->generation`

* Add `ChatGeneration` tests and catch `DeprecationWarning`

* Revert `generation.py` rename and add `system_prompt` to `{Chat,Text}Generation`

* Add missing tests for `{Chat,Text}Generation`

* Add missing `InstructionBacktranslation` task in `preference_tasks.md`

* Fix weird characters around `#`

* Update `docs/` related to `{Chat,Text}Generation`

* Add `use_system_prompt` in `TextGeneration`

* Update `text_generation.md`

* Set `extra="forbid"` in `{_Step,LLM}.model_config` (#577)

* Set `extra="forbid"` in `_Step.model_config`

* Set `extra="forbid"` in `LLM.model_config`

Co-authored-by: Gabriel Martin <gabrielmbmb@users.noreply.github.com>

* Fix `TestVertexAILLM` since `api_key` does not exist

Most likely due to a copy-over from an existing test

* Pop `runtime_parameters_info` in `from_dict`

As `runtime_parameters_info` is only used in the CLI, and not required to instantiate a `_Step` subclass

---------

Co-authored-by: Gabriel Martin <gabrielmbmb@users.noreply.github.com>

* Infer `Step.name` if not provided (#575)

* Log with a warning instead of raising an error when a step name is not found when passing runtime parameters via pipeline.run

* Infer the name of the step if the user doesn't set one

* Make the function to infer the name private and add docstrings

* Add tests for the inferred name of steps and move the call to be done after the pipeline is set via the pipeline manager

* Update src/distilabel/steps/base.py

Co-authored-by: Gabriel Martín Blázquez <gmartinbdev@gmail.com>

* Update warning message to make it more explicit

* Fix name inference with more than 10 steps and possible repeated names

---------

Co-authored-by: Gabriel Martín Blázquez <gmartinbdev@gmail.com>

* Set `spawn` as multiprocessing start method if Windows (#578)

* Dump logs within a file in `.cache/distilabel/pipelines` dir (#568)

* Write logs to file in the cache folder

* Push log file to the dataset in hugging face hub

* Ensure the cache folder exists when the pipeline log file is created on setup_logging

* Fix docstring

* Update log handlers to write the timestamp and simplify the setup logging via the queue listener

* Fix empty batches causing misalignment when branching (#590)

* Fix empty batches causing misalignment when branching

* Add `_BatchManager.can_generate` unit test

* Update tests/integration/test_branching_missaligmnent.py

Co-authored-by: Agus <agustin@argilla.io>

---------

Co-authored-by: Agus <agustin@argilla.io>

* Add `GroqLLM` (#583)

* Add `GroqLLM`

Co-authored-by: Krishna Tripathi <kcentric@users.noreply.github.com>

* Improve dependency installation in `test.yml`

* Add `GroqLLM` to documentation

* Add `TestGroqLLM`

* Remove extra line break in `CohereLLM` docstring

* Add `GroqLLM` docstring

---------

Co-authored-by: Krishna Tripathi <kcentric@users.noreply.github.com>

* Add `Format{Chat,Text}Generation{DPO,SFT}` (#584)

* Add `Format{Chat,Text}Generation{DPO,SFT}` (WIP)

* Add `TestFormat{Chat,Text}GenerationSFT`

* Add `TestFormat{Chat,Text}GenerationDPO`

* Fix `title` in `RatingQuestion` of `PreferenceToArgilla` (#597)

* Remove `based on the annotation guidelines` from `PreferenceToArgilla`

* Add missing `# type: ignore`

* Fix `test_preference`

* Set `streaming=False` and add `num_examples` to `LoadHubDataset` (#565)

* Update LoadHubDataset to allow for more flexibility related to streaming and column fetching via API

* Add tests for new load_hub_dataset cases

* Fix docstring comment

* Allow passing num_examples as a runtime parameter to simplify loading small number of examples from datasets

* Update src/distilabel/steps/generators/huggingface.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

---------

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Make `pipeline` argument of `Step` optional (#566)

* Make pipeline argument of step optional

* Fix failing tests

* Fix logger instantiation from model_post_init of step

* Update tests/unit/pipeline/test_dag.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/steps/tasks/evol_instruct/test_base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/steps/tasks/evol_instruct/test_generator.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/steps/tasks/evol_quality/test_base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/steps/tasks/test_base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/steps/test_base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update tests/unit/steps/test_base.py

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Fix comment from review

* Format tests

---------

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Extend `LLM` kwargs to align with counterparts (#594)

* Extend `vLLM` supported kwargs

* Extend `LlamaCppLLM` supported kwargs

* Fix `super().load()` placement in `LlamaCppLLM`

Revert from a previous commit where this was changed unintentionally, since no notes were written down about it, but now a note and context has been included to prevent this from happening to someone else

* Add missing kwargs in `InferenceEndpointsLLM.agenerate`

* Rename `llamacpp_llm` to `llm`

* Add `Genstruct` task (#600)

* Add `Genstruct` and `genstruct.jinja2`

* Add `test_genstruct.py`

* Fix `Genstruct` regex

* Fix `num_examples` to be optional in `LoadHubDataset` (#603)

* Fix `list_files_in_dir` returning unsorted files (#609)

Co-authored-by: plaguss <plaguss@users.noreply.github.com>

* Add `PrometheusEval` task (#610)

* Add `PrometheusAbsEval`

* Add `PrometheusRelEval`

* Add `Prometheus{Abs,Rel}Eval` docstrings

* Add imports in `distilabel.steps.tasks`

* Fix import order in `UltraFeedback`

* Fix `_template` path in `Prometheus{Abs,Rel}Eval`

* Add `TestPrometheus{Abs,Rel}Eval`

* Add missing `model_name` in `Genstruct` and `Prometheus{Abs,Rel}Eval`

* Combine `Prometheus{Abs,Rel}Eval` into `PrometheusEval`

* Update `test_imports` and `test_prometheus_eval`

* Update `ValueError` on missing inputs message (#617)

* Run `codespell` to fix typos

* Update `ValueError` exception on missing inputs

* Add `routing_batch_function` (#595)

* Update `connect` to accept `*args` and `routing_batch_function`

* Use `routing_batch_function`

* Add `convergence_step` attribute

* Update `_BatchManager` buffers to store `_Batch`es

* Fix `step_empty_buffers` method

* Fix steps not processing all the rows because no batch copy

* Add `stop` and `stop_sequences` in `LLM.generate` subclasses (#585)

* Add `stop_sequences` arg to `InferenceEndpointsLLM.generate`

* Add `stop` arg to `OpenAILLM.generate`

* Set `stop_token_ids` from `eos_token_id` if not set

* Bump version to `1.0.3`

* Skip `tokenizer.eos_token_id` defaults for `stop_sequences`

Should be handled in the LLM / TGI / etc. side, so as long as we can set the values we're good, no need to set defaults too, as those can be misleading and wrong in some cases

Co-authored-by: Gabriel Martin <gabrielmbmb@users.noreply.github.com>

---------

Co-authored-by: Gabriel Martin <gabrielmbmb@users.noreply.github.com>

* Dump logs within a file in `.cache/distilabel/pipelines` dir (#568)

* Write logs to file in the cache folder

* Push log file to the dataset in hugging face hub

* Ensure the cache folder exists when the pipeline log file is created on setup_logging

* Fix docstring

* Update log handlers to write the timestamp and simplify the setup logging via the queue listener

* Fix empty batches causing misalignment when branching (#590)

* Fix empty batches causing misalignment when branching

* Add `_BatchManager.can_generate` unit test

* Update tests/integration/test_branching_missaligmnent.py

Co-authored-by: Agus <agustin@argilla.io>

---------

Co-authored-by: Agus <agustin@argilla.io>

* Add checking if can create batch for convergence step

* Remove unit test

* Add `GroqLLM` (#583)

* Add `GroqLLM`

Co-authored-by: Krishna Tripathi <kcentric@users.noreply.github.com>

* Improve dependency installation in `test.yml`

* Add `GroqLLM` to documentation

* Add `TestGroqLLM`

* Remove extra line break in `CohereLLM` docstring

* Add `GroqLLM` docstring

---------

Co-authored-by: Krishna Tripathi <kcentric@users.noreply.github.com>

* Add `_get_data_for_convergence_step` method

* Add sending `LAST_BATCH_SENT_FLAG` to steps

* It's working

* Add `routing_batch_function` decorator

* Fix circular import

* Fix pyright `>>` errors

* Having second thoughts with this thing

* Confirmed

* Add `routing_batch_function` related dag validation

* Add test for routing batch function

* Add `time.sleep`

* Fix `_BatchManager` unit tests

* Remove unit test

* Fix unit test

* Add convergence step batch manager unit tests

* Add example

* Add missing attributes to docstring

* Add `sample_n_steps` routing batch function

* Add `routing_batch_function` docs

* Add pipeline typing api

* Add DAG nodes keys constants

* Fix `test` workflow

* Fix typo

Co-authored-by: Agus <agustin@argilla.io>

---------

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>
Co-authored-by: Gabriel Martin <gabrielmbmb@users.noreply.github.com>
Co-authored-by: Agus <agustin@argilla.io>
Co-authored-by: Krishna Tripathi <kcentric@users.noreply.github.com>

* Fix `pipeline.log` inconsistency & include LLM info in signature (#598)

* Include LLM info in signature

* Add draft of fix

* Fix problem with pipeline.log order of cache folder creation

* Fix creation of parent dir if doesn't already exist

* Add custom `rubrics` attribute to `PrometheusEval` (#621)

* Add `rubrics` attribute in `PrometheusEval`

* Fix `typing.Self` import to `typing_extensions`

* Fix `TestPrometheusEval` to use `_DEFAULT_RUBRICS`

* Update `PrometheusEval` docstrings

* Add tests for `rubrics` in `PrometheusEval`

* Remove duplicated `Dict[str,str]` validation

Co-authored-by: Gabriel Martin <gabrielmbmb@users.noreply.github.com>

---------

Co-authored-by: Gabriel Martin <gabrielmbmb@users.noreply.github.com>

* Update `UltraFeedback` paper replication to use `routing_batch_function` (#620)

* Update `UltraFeedback` paper replication to use `routing_batch_function`

* Add note about batch sizes

* Update docs/sections/papers/ultrafeedback.md

* Add `distilabel_metadata` column to the datasets to include general data (#586)

* Add the option of passing the multiprocessing context via env var (#604)

* Add name of the pipeline to group the hashed folders (#626)

* Add `routing_batch_function` serialization (#628)

* Add `RoutingBatchFunction` serialization

* Add `RoutingBatchFunction` serialization

* Update info command

* Fix routing batch function tests

* Fix `@routing_batch_function` detecting factory function bug

* Don't serialize `model_path` in `LlamaCpp`

* Fix problem of sorting files whose name is numbered (#622)

* Add `dry_run` method to the pipelines to run with a single example. (#635)

* [FEATURE] Add structured outputs using `outlines` (#601)

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>
Co-authored-by: Gabriel Martín Blázquez <gmartinbdev@gmail.com>

* Force pipeline stop after 2 SIGINT signals caught (#630)

* Refactor and update `docs` (#634)

* Bump version to 1.1.0

* Fix `project.license` in `pyproject.toml`

* Fix `docs/scripts/gen_ref_pages.py`

* Update `description` in `pyproject.toml` and `mkdocs.yml`

* Update `index.md`

* Remove unused `docs/overview.md`

* Update `mkdocs.yml` and `docs/*md` (WIP)

* Fix typo in `ComplexityScorer` docstring

* Update `mkdocs.yml` and `docs/*.md` (WIP)

* Fix indentation after `!!! NOTE`

* Update `mkdocs.yml` and `docs/*md` (WIP)

* Update `mkdocs.yml` and `docs/*md` (WIP)

* Update `mkdocs.yml` and `docs/*md` (WIP)

* Include example of dry_run method

* Fix link to pipeline

* Change section names with class methods

* Add FAQ section in `docs` (WIP)

* Update `docs/sections/components/step/index.md`

* Update `docs/sections/components/step/index.md`

* Fix typos via `codespell`

* Apply suggestions from code review

* Update docs structured outputs (#636)

* Update `step/index.md` and `faq.md`

* Update `step/*.md` and `task/index.md`

* Update more docs (#637)

* Update `llm/index.md`

Not sure if the structured generation fits nicely there, as that's most likely a tutorial or a subpage or something else (?)

* Rename `docs/components/pipelines` to `docs/components/pipeline`

* Update docs/sections/components/llm/index.md

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Fix `@step` section name

* Update `generator_task.md`

* Move structured outputs to it's own section and include references (#638)

* Update FAQ layout (#639)

* Apply suggestions from code review

Co-authored-by: Ignacio Talavera <ignaciotalaveracepeda@gmail.com>

* Update docs/sections/faq.md

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* Update docs/sections/faq.md

Co-authored-by: Alvaro Bartolome <alvaro@argilla.io>

* refactor file name for explicitness (#641)

* Fix file name to render docs (#642)

* Update `docs/*.md` and `mkdocs.yml` (WIP)

* Update `docs/*.md` and `mkdocs.yml` (WIP)

* Run `codespell` to fix typos

* Collapse examples to simplify reading (#643)

* Avoid long titles in `nav`

* Update `docs/*.md` and `mkdocs.yml` (WIP)

* Update `Task Gallery` API reference

* Remove `prometheus.md` as not written yet

---------

Co-authored-by: plaguss <agustin@argilla.io>
Co-authored-by: Ignacio Talavera <ignaciotalaveracepeda@gmail.com>

* Export components info & components gallery in docs (#640)

* Refactor to `classmethod`s

* Add `distilabel/components-gallery` mkdocs plugin

* Update to exclude `self` parameter

* Update docstrings

* Fix parsing short and long description

* Remove `self`

* Revert "Refactor to `classmethod`s"

This reverts commit 3384de9.

* Remove exporting runtime parameters info with method

* Remove leading whitespaces

* Add GitHub icon in docs

* Finish component-gallery

* Update docstrings

* Update `parse_google_docstring` unit tests

* Update sections and fix warnings

* Deploy `dev` version from `develop` branch

* Add hide toc

* Update `Format{Chat,Text}Generation{DPO,SFT}`

* Fix wrong import in `step_gallery/extra.md`

* Documentation updates (#646)

* Include section for Note in gallery

* Fix rendering

* Fix step to llm in jinja template

* Updated layout for examples/papers section and included default page for Learn section

* Update unit tests to take into account the note section from docstrings

* Nest components from steps and tasks gallery into its parent section

* Refactor docs 1.1.0 (#650)

* Remove redundant information of available objects, those are in the components gallery now

* Remove more redundant documentation of available steps and tasks

* Reintroduce input/output_mappings in steps

* Add reference to runtime parameters

* Fix routing batch function deadlocks and unordered batches (#649)

* Add checking step `input_batch_size` multiple

* Fix unordered batches when using `routing_batch_function`

* Fix `can_generate` condition

* Remove metadata and style

* Fix getting data for batch when irregular batch sizes

* Fix steps receiving routed batches getting stuck

* Fix `_last_batch_convergence_step` method

* Fix stop not checking for `None`

* Fix issues related to the queues

* Remove unused variable

* Add integration tests timeout

* Fix deadlock caused because of the next expected batch in convergence step

* Update unit tests

* Add timeout to tests

* Simplify condition

* Fix unit test

* Update timeouts

---------

Co-authored-by: Agus <agustin@argilla.io>
Co-authored-by: Gabriel Martín Blázquez <gmartinbdev@gmail.com>
Co-authored-by: Gabriel Martin <gabrielmbmb@users.noreply.github.com>
Co-authored-by: Krishna Tripathi <kcentric@users.noreply.github.com>
Co-authored-by: plaguss <plaguss@users.noreply.github.com>
Co-authored-by: Ignacio Talavera <ignaciotalaveracepeda@gmail.com>
Labels
enhancement (New feature or request) · integrations

Projects
Status: Done

Development
Successfully merging this pull request may close these issues:
  • [FEATURE] Add structured generation from the LLMs

3 participants