feat: Add serve module and cli tools #448

Open
wants to merge 1 commit into
base: master

Conversation

ocss884 (Member) commented on Feb 26, 2024

This PR aims to add our own OpenAI-compatible API server and command-line tools, with the help of systemd.
TODO:

  • add test cases
  • complete documentation
  • better variable naming (need everyone's help)
  • lazy loading of commands

dosubot added the size:XL label (This PR changes 500-999 lines, ignoring generated files) on Feb 26, 2024
coderabbitai bot commented on Feb 26, 2024

Important

Auto Review Skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository.

To trigger a single review, invoke the @coderabbitai review command.

Walkthrough

The updates introduce a comprehensive command-line interface and server functionality for the Camel service, focusing on chatbot interactions and model management. It includes licensing updates, a CLI for service operations, enhanced text generation capabilities, model management, non-blocking server execution, an OpenAI-compatible API server, utility functions for model loading and GPU usage, and improved server type definitions for request and response handling.

Changes

File(s) | Summary
camel/serve/__init__.py, .../model_manager.py | Introduced licensing information and imported CLI functionalities.
camel/serve/cli.py | Added a CLI for managing the Camel service, including chatbot interaction and model management.
camel/serve/inference.py | Enhanced text generation with translation of parameters and handling the generation process.
camel/serve/non_block.py | Added functionality for running the server in the background using threading and uvicorn.
camel/serve/openai_api_server.py | Introduced an OpenAI-compatible API server for chat completions with FastAPI and aiohttp.
camel/serve/utils.py, camel/utils/__init__.py | Utility functions for model loading from Hugging Face and exporting messages_to_prompt.
camel/types/server_types.py | Defined data models for chat completion requests/responses and model management.

🐇✨
In the land of code and lore,
Came the Camel, ready to explore.
With CLI commands, it dances,
Managing chats and model stances.
Through server's night, it runs with glee,
A service for all, open and free.
🌟🐾

Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?


Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit-tests for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit tests for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit tests.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • The JSON schema for the configuration file is available here.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/coderabbit-overrides.v2.json

CodeRabbit Discord Community

Join our Discord Community to get help, request features, and share feedback.

coderabbitai bot left a comment

Review Status

Actionable comments generated: 11

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between 9c0f598 and 4d5101e.
Files ignored due to path filters (2)
  • poetry.lock is excluded by: !**/*.lock
  • pyproject.toml is excluded by: !**/*.toml
Files selected for processing (9)
  • camel/serve/__init__.py (1 hunks)
  • camel/serve/cli.py (1 hunks)
  • camel/serve/inference.py (1 hunks)
  • camel/serve/model_manager.py (1 hunks)
  • camel/serve/non_block.py (1 hunks)
  • camel/serve/openai_api_server.py (1 hunks)
  • camel/serve/utils.py (1 hunks)
  • camel/types/server_types.py (1 hunks)
  • camel/utils/__init__.py (1 hunks)
Files skipped from review due to trivial changes (2)
  • camel/serve/__init__.py
  • camel/serve/model_manager.py
Additional comments: 14
camel/serve/non_block.py (1)
  • 9-11: Using daemon=True for the thread running the server ensures that the server thread will close when the main program exits. However, it's important to ensure that this behavior is desired, as it may lead to unexpected server shutdowns if the main program's lifecycle is not managed properly.
camel/utils/__init__.py (1)
  • 31-31: The addition of messages_to_prompt to the list of exported entities suggests that this is a new utility function or variable introduced for use outside this module. Ensure that messages_to_prompt is properly documented and tested, as it is now part of the public API of this module.
Verification successful

The messages_to_prompt function within camel/utils/token_counting.py is documented, as indicated by the presence of a docstring that begins to describe the function's purpose and parameters. This addresses the concern about ensuring that new utilities introduced to the public API are properly documented. However, the full extent of the documentation and the quality thereof cannot be fully assessed based on the provided snippet. It is recommended to review the complete docstring and any associated tests to ensure comprehensive documentation and coverage.

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify the existence and documentation of messages_to_prompt in the utils module.
rg --type py "def messages_to_prompt" camel/utils/
rg --type py "messages_to_prompt =" camel/utils/

Length of output: 213



Script:

#!/bin/bash
# Extract the documentation (docstring) for messages_to_prompt function.
rg --type py --context 5 "def messages_to_prompt" camel/utils/token_counting.py

Length of output: 414

camel/serve/utils.py (1)
  • 63-64: The get_gpu_nums function is straightforward and correctly returns the number of available GPUs. This is a good utility function for dynamically determining device allocation based on available resources.
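For context, such a helper is usually a thin wrapper around PyTorch's device count; a minimal sketch of what it might look like (the PR's actual implementation is not shown in this excerpt, so treat this as an assumption):

import torch


def get_gpu_nums() -> int:
    # Number of CUDA devices visible to this process; 0 when CUDA is unavailable.
    return torch.cuda.device_count()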
camel/serve/cli.py (2)
  • 37-37: The init command uses sudo systemctl start camel to start the service, which implies that a systemd service named camel must be configured on the system. Ensure that documentation is provided on how to set up this systemd service, including necessary permissions and security considerations.
  • 76-79: The add and remove commands correctly use requests to interact with the service. Ensure that error handling is implemented to provide user-friendly feedback in case of failures, such as a model not being found or server errors.

Consider adding error handling to these commands to catch and display HTTP errors or connection issues, as sketched below.
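A rough illustration of that error handling (a sketch only; the helper name and payload shape are assumptions, not the PR's code):

import click
import requests


def post_with_feedback(url: str, payload: dict) -> None:
    """POST to the serve API and print user-friendly feedback on failure."""
    try:
        response = requests.post(url, json=payload, timeout=30)
        response.raise_for_status()
    except requests.exceptions.ConnectionError:
        click.echo("Could not reach the camel service. Is it running?")
    except requests.exceptions.HTTPError as e:
        click.echo(f"Server returned {e.response.status_code}: {e.response.text}")
    else:
        click.echo(response.json())

The add and remove commands could then call post_with_feedback instead of calling requests.post directly.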

camel/serve/openai_api_server.py (3)
  • 41-43: The root route (/) returns a simple "Hello World" message along with a list of models managed by LLM_MGR. This is a useful diagnostic endpoint but consider documenting its purpose and potential use cases to avoid confusion.
  • 73-85: The add_model route handler loads a model and tokenizer using load_model and registers them with LLM_MGR. Ensure that error handling is implemented for cases where model loading fails or the model is already registered, to provide clear feedback to the client.

Consider adding error handling for model loading failures and duplicate model registration.

  • 106-107: The application is configured to run with uvicorn using command-line arguments for host and port, which is a flexible setup. Ensure that the timeout_keep_alive parameter is set appropriately for your use case, as it might need adjustment based on the expected client behavior and server load.
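For reference, a configurable entry point along these lines might look like the following sketch; the flag names and the default keep-alive value are assumptions rather than the PR's exact code:

import argparse

import uvicorn

from camel.serve.openai_api_server import app

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--host", default="127.0.0.1")
    parser.add_argument("--port", type=int, default=8000)
    parser.add_argument("--timeout-keep-alive", type=int, default=5,
                        help="Seconds to hold idle keep-alive connections open.")
    args = parser.parse_args()
    uvicorn.run(app, host=args.host, port=args.port,
                timeout_keep_alive=args.timeout_keep_alive)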
camel/serve/inference.py (1)
  • 71-144: The translate_param_to_hf_config function translates OpenAI chat completion parameters to a configuration compatible with Hugging Face's generate method. While it covers essential parameters like max_new_tokens, temperature, and top_p, ensure that this translation aligns with the intended use cases and that any omitted parameters are intentionally excluded based on compatibility or relevance.
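To make that translation concrete, a minimal sketch of mapping a few OpenAI-style fields onto Hugging Face generate() keyword arguments could look like this; the request field names follow the ChatCompletionRequest model shown later in this review, but the fallback values and the exact mapping in the PR may differ:

from typing import Any, Dict


def translate_request_to_generate_kwargs(request) -> Dict[str, Any]:
    """Map OpenAI-style chat completion parameters onto HF generate() kwargs."""
    kwargs: Dict[str, Any] = {
        "max_new_tokens": request.max_tokens or 256,  # default cap is an assumption
        "num_return_sequences": request.n or 1,
        "top_p": request.top_p,
    }
    # transformers requires a strictly positive temperature when sampling,
    # so a temperature of 0 falls back to greedy decoding here.
    if request.temperature and request.temperature > 0:
        kwargs["do_sample"] = True
        kwargs["temperature"] = request.temperature
    else:
        kwargs["do_sample"] = False
    return kwargs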
camel/types/server_types.py (5)
  • 26-29: The ListModelResponse model is well-defined for listing models. It includes essential fields like model_name and model_type, which should cover basic needs for model identification and categorization.
  • 112-114: The ChatCompletionResponse model is simple and effective for returning chat completion messages. If additional metadata or information about the completion is needed in the future, consider extending this model.
  • 117-150: The HFModelLoadingParam model includes a configuration option for device_map that supports a variety of input types. This flexibility is beneficial for deploying models across different hardware configurations. Ensure that the documentation clearly explains how to use this field, especially for users unfamiliar with PyTorch's device management.
  • 152-164: The ModelRegistrationRequest model is well-structured for registering new models with the service. It allows for specifying a model by ID or path and optionally renaming it. This flexibility supports a wide range of use cases, from using pre-trained models to integrating custom models.
  • 167-171: The ModelRemovalRequest model is straightforward and effective for requesting the removal of a model from the service. It uses the model name as the key identifier, which aligns with common practices for managing resources.

from camel.serve.openai_api_server import app
from time import sleep
def start_server():
    uvicorn.run(app, host=None, port=8000)

The server is hardcoded to run on port 8000 without allowing customization of the host or port. Consider making these parameters configurable to enhance flexibility and avoid potential port conflicts.

- uvicorn.run(app, host=None, port=8000)
+ uvicorn.run(app, host=host, port=port)

And modify the start_server function to accept host and port as arguments.


Committable suggestion


Suggested change:
- uvicorn.run(app, host=None, port=8000)
+ uvicorn.run(app, host=host, port=port)
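Expanded into a fuller sketch, start_server and run_in_background could both accept host and port; the parameter names and defaults below are assumptions:

import threading
from time import sleep

import uvicorn

from camel.serve.openai_api_server import app


def start_server(host: str = "127.0.0.1", port: int = 8000) -> None:
    uvicorn.run(app, host=host, port=port)


def run_in_background(host: str = "127.0.0.1", port: int = 8000) -> None:
    # Daemon thread: the server stops when the main process exits.
    t = threading.Thread(target=start_server, args=(host, port), daemon=True)
    t.start()
    sleep(0.1)  # give uvicorn a moment to bind the port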

    t = threading.Thread(target=start_server, daemon=True)
    t.start()
    sleep(0.1)

run_in_background()

The call to run_in_background() at the global level will execute the server startup as soon as this module is imported. This could lead to unintended side effects, especially in a larger application where imports might happen conditionally or in a specific order. Consider controlling server startup through a main function or an explicit call from outside this module.

- run_in_background()

Move this call to a controlled environment, such as a main function or an explicit initialization routine.


Committable suggestion


Suggested change:
- run_in_background()
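One way to keep the import side-effect free while still allowing direct execution, as suggested above (sketch only):

if __name__ == "__main__":
    # Start the background server only when this module is run directly,
    # not when it is merely imported.
    run_in_background()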

Comment on lines 21 to 41
def load_model(
    model: str,
    hf_param: HFModelLoadingParam,
):
    """
    General entry of loading a model in CAMEL project
    """

    # device input check
    hf_param = hf_param.model_dump() if PYDANTIC_V2 else hf_param.dict()  # type: ignore
    device = hf_param["device_map"]
    if isinstance(device, str):
        if device not in {"cuda", "cpu"}:
            raise ValueError(f"Invalid device {device}, expect one of 'cuda' and 'cpu'.")
    if isinstance(device, int):
        gpu_nums = get_gpu_nums()
        if device >= gpu_nums:
            raise ValueError(f"Invalid device number {device}, only have {tuple(range(gpu_nums))}.")

    model, tokenizer = load_HF_model(model, hf_param)
    return model, tokenizer

The load_model function correctly checks the device type and validates GPU device numbers. However, it lacks error handling for the case where get_gpu_nums might be called on a system without CUDA support, leading to a potential unhandled exception. Consider adding a check for CUDA availability.

+ if torch.cuda.is_available():

Add this check before accessing CUDA-specific functions to ensure compatibility with systems without GPU support.


Committable suggestion


Suggested change (before):

def load_model(
    model: str,
    hf_param: HFModelLoadingParam,
):
    """
    General entry of loading a model in CAMEL project
    """
    # device input check
    hf_param = hf_param.model_dump() if PYDANTIC_V2 else hf_param.dict()  # type: ignore
    device = hf_param["device_map"]
    if isinstance(device, str):
        if device not in {"cuda", "cpu"}:
            raise ValueError(f"Invalid device {device}, expect one of 'cuda' and 'cpu'.")
    if isinstance(device, int):
        gpu_nums = get_gpu_nums()
        if device >= gpu_nums:
            raise ValueError(f"Invalid device number {device}, only have {tuple(range(gpu_nums))}.")
    model, tokenizer = load_HF_model(model, hf_param)
    return model, tokenizer

Suggested change (after):

def load_model(
    model: str,
    hf_param: HFModelLoadingParam,
):
    """
    General entry of loading a model in CAMEL project
    """
    # device input check
    hf_param = hf_param.model_dump() if PYDANTIC_V2 else hf_param.dict()  # type: ignore
    device = hf_param["device_map"]
    if isinstance(device, str):
        if device not in {"cuda", "cpu"}:
            raise ValueError(f"Invalid device {device}, expect one of 'cuda' and 'cpu'.")
    if isinstance(device, int):
        if torch.cuda.is_available():
            gpu_nums = get_gpu_nums()
            if device >= gpu_nums:
                raise ValueError(f"Invalid device number {device}, only have {tuple(range(gpu_nums))}.")
    model, tokenizer = load_HF_model(model, hf_param)
    return model, tokenizer

Comment on lines +43 to +65
def load_HF_model(hf_model, configs):
    """
    Load a model from Hugging Face model hub
    """
    try:
        model = AutoModelForCausalLM.from_pretrained(
            hf_model,
            **configs
        )
    except Exception as e:
        raise HTTPException(status_code=404, detail=str(e))
    try:
        tokenizer = AutoTokenizer.from_pretrained(
            hf_model
        )
    except Exception as e:
        raise HTTPException(status_code=404, detail=str(e))

    return model, tokenizer

The load_HF_model function uses a broad except Exception as e clause which might catch more exceptions than intended, potentially obscuring the root cause of errors. It's generally a good practice to catch specific exceptions where possible. Additionally, raising an HTTPException in this utility function couples it tightly to FastAPI, reducing its reusability. Consider returning error information in a way that doesn't depend on FastAPI.

- raise HTTPException(status_code=404, detail=str(e))
+ raise ValueError(f"Failed to load model or tokenizer: {str(e)}")

This change decouples the utility function from FastAPI and encourages handling exceptions at a higher level where the context of HTTP requests is relevant.


Committable suggestion


Suggested change (before):

def load_HF_model(hf_model, configs):
    """
    Load a model from Hugging Face model hub
    """
    try:
        model = AutoModelForCausalLM.from_pretrained(
            hf_model,
            **configs
        )
    except Exception as e:
        raise HTTPException(status_code=404, detail=str(e))
    try:
        tokenizer = AutoTokenizer.from_pretrained(
            hf_model
        )
    except Exception as e:
        raise HTTPException(status_code=404, detail=str(e))
    return model, tokenizer

Suggested change (after):

def load_HF_model(hf_model, configs):
    """
    Load a model from Hugging Face model hub
    """
    try:
        model = AutoModelForCausalLM.from_pretrained(
            hf_model,
            **configs
        )
    except Exception as e:
        raise ValueError(f"Failed to load model or tokenizer: {str(e)}")
    try:
        tokenizer = AutoTokenizer.from_pretrained(
            hf_model
        )
    except Exception as e:
        raise ValueError(f"Failed to load model or tokenizer: {str(e)}")
    return model, tokenizer

Comment on lines 52 to 60
out = requests.post(
    f"http://localhost:{8000}/v1/chat/completions", json={
        "model": model,
        "messages": [{
            "role": "user",
            "content": messages
        }]
    })
click.echo(out.json()["message"])

The chat command hardcodes the host and port for the service. Consider making these configurable through CLI options or environment variables to enhance flexibility and support different deployment scenarios.

- f"http://localhost:{8000}/v1/chat/completions"
+ f"http://{host}:{port}/v1/chat/completions"

And add host and port as parameters to the chat function, with default values or retrieved from environment variables.


Committable suggestion


Suggested change (before):

out = requests.post(
    f"http://localhost:{8000}/v1/chat/completions", json={
        "model": model,
        "messages": [{
            "role": "user",
            "content": messages
        }]
    })
click.echo(out.json()["message"])

Suggested change (after):

out = requests.post(
    f"http://{host}:{port}/v1/chat/completions", json={
        "model": model,
        "messages": [{
            "role": "user",
            "content": messages
        }]
    })
click.echo(out.json()["message"])

"""
Exits the camel, closing all running LLMs
"""
subprocess.run(["sudo", "systemctl", "stop", "camel"])

Tip

Codebase Verification

The search for documentation related to setting up and securing the systemd service, as mentioned in the review comment, did not yield any results directly mentioning "systemd" or providing instructions for systemd service setup in the README.md or other common documentation files. The mentions of "service" found in the search results are not related to systemd service setup but rather to setting environment variables for an OpenAI API proxy service and using an external server for LLM inference service. Based on this, it appears that the specific documentation requested in the review comment might not exist in the repository.

Analysis chain

The exit command uses sudo systemctl stop camel to stop the service. As with the init command, ensure that documentation is provided on setting up and securing the systemd service.


Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Verify documentation for systemd service setup.
rg "systemd" README.md

Length of output: 22



Script:

#!/bin/bash
# Broaden the search for systemd service setup documentation in other common documentation files.
rg "systemd" -g '*.md'
# Additionally, search for keywords related to service setup to catch indirect references.
rg "(service setup|systemctl|service)" -g '*.md'

Length of output: 838

Comment on lines 51 to 65
#TODO: make the return match the openai api
@app.post("/v1/chat/completions")
async def chat_completions(
        request: ChatCompletionRequest) -> ChatCompletionResponse:
    """
    Creates a chat completion for chat messages
    """

    check_model_availability(request.model)

    model, tokenizer = LLM_MGR.get_model(request.model), LLM_MGR.get_tokenizer(
        request.model)
    output = generate(model, tokenizer, request)

    return ChatCompletionResponse(message=output[0])

The chat_completions route handler correctly processes chat completion requests. However, it directly accesses LLM_MGR without error handling for cases where the model might not be found. Consider adding error handling to provide a more informative response in such cases.

- check_model_availability(request.model)
+ try:
+     check_model_availability(request.model)
+ except KeyError as e:
+     raise HTTPException(status_code=404, detail=str(e))

This change ensures that a clear error message is returned when the requested model is not found.


Committable suggestion


Suggested change (before):

#TODO: make the return match the openai api
@app.post("/v1/chat/completions")
async def chat_completions(
        request: ChatCompletionRequest) -> ChatCompletionResponse:
    """
    Creates a chat completion for chat messages
    """
    check_model_availability(request.model)
    model, tokenizer = LLM_MGR.get_model(request.model), LLM_MGR.get_tokenizer(
        request.model)
    output = generate(model, tokenizer, request)
    return ChatCompletionResponse(message=output[0])

Suggested change (after):

#TODO: make the return match the openai api
@app.post("/v1/chat/completions")
async def chat_completions(
        request: ChatCompletionRequest) -> ChatCompletionResponse:
    """
    Creates a chat completion for chat messages
    """
    try:
        check_model_availability(request.model)
    except KeyError as e:
        raise HTTPException(status_code=404, detail=str(e))
    model, tokenizer = LLM_MGR.get_model(request.model), LLM_MGR.get_tokenizer(
        request.model)
    output = generate(model, tokenizer, request)
    return ChatCompletionResponse(message=output[0])

Comment on lines 88 to 95
@app.delete("/v1/delete")
async def remove_model(request: ModelRemovalRequest):
    model_name = request.model_name
    try:
        LLM_MGR.delete_model(model_name)
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
    return {"message": f"Model {model_name} deleted."}

The remove_model route handler correctly attempts to remove a model from LLM_MGR. However, it catches a broad Exception which might obscure the root cause of errors. Consider catching specific exceptions related to model removal and providing informative error messages.

- except Exception as e:
+ except KeyError as e:

Change the exception type to KeyError or another specific exception type relevant to model removal failures.


Committable suggestion


Suggested change (before):

@app.delete("/v1/delete")
async def remove_model(request: ModelRemovalRequest):
    model_name = request.model_name
    try:
        LLM_MGR.delete_model(model_name)
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
    return {"message": f"Model {model_name} deleted."}

Suggested change (after):

@app.delete("/v1/delete")
async def remove_model(request: ModelRemovalRequest):
    model_name = request.model_name
    try:
        LLM_MGR.delete_model(model_name)
    except KeyError as e:
        raise HTTPException(status_code=500, detail=str(e))
    return {"message": f"Model {model_name} deleted."}

Comment on lines 44 to 56
def generate(model, tokenizer, request):
    hf_generation_config = translate_param_to_hf_config(request)

    # tokenizer hosted in huggingface hub may not have chat_template configured
    if tokenizer.chat_template is not None:
        input_ids = tokenizer.apply_chat_template(
            request.messages, return_tensors="pt").to("cuda")
    else:
        raise AttributeError(
            "Chat template not found in tokenizer, please provide a template")
    if request.stop is not None:
        stop = [
            tokenizer.encode(stop_word, add_prefix_space=False)
            for stop_word in tuple(request.stop)
        ]
    else:
        stop = None

    prefix_length = input_ids.shape[-1]
    out_ids = model.generate(input_ids=input_ids, stopping_criteria=stop,
                             **hf_generation_config)

    decoded = tokenizer.batch_decode(out_ids[:, prefix_length:],
                                     skip_special_tokens=True)
    return decoded

The generate function correctly translates parameters and generates text completions. However, it assumes the presence of tokenizer.chat_template, which might not exist for all tokenizers. This is correctly handled with an AttributeError, but consider documenting this requirement or providing a fallback mechanism.

Consider adding documentation about the chat_template requirement or implementing a fallback mechanism for tokenizers without this attribute.
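One possible fallback, sketched as a standalone helper; the role-prefixed prompt format below is a generic placeholder and not the PR's actual formatting:

def build_input_ids(tokenizer, messages):
    """Prefer the tokenizer's chat template; otherwise fall back to a plain prompt."""
    if getattr(tokenizer, "chat_template", None) is not None:
        return tokenizer.apply_chat_template(messages, return_tensors="pt")
    # Generic placeholder formatting; real chat formats are model-specific.
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    prompt += "\nassistant:"
    return tokenizer(prompt, return_tensors="pt").input_ids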

Comment on lines +31 to +130
class ChatCompletionRequest(BaseModel):
    r"""Base class for completion create parameters.
    Fields not usable by open-source models are
    commented out.
    # ref: https://platform.openai.com/docs/api-reference/chat/create
    """

    messages: List[ChatCompletionMessageParam]
    r"""
    A list of openai-style user-assistant alternating messages.
    """

    model: str
    r"""
    Model to use.
    """

    # frequency_penalty: Optional[float] = 0
    r"""
    Number between -2.0 and 2.0. Positive values penalize new tokens based
    on their existing frequency in the text so far, decreasing the model's
    likelihood to repeat the same line verbatim.
    """

    logit_bias: Optional[Dict] = None
    r"""
    Modify the likelihood of specified tokens appearing in the completion.

    Accepts a JSON object that maps tokens (specified by their token ID in the
    tokenizer) to an associated bias value from -100 to 100. Mathematically,
    the bias is added to the logits generated by the model prior to sampling.
    The exact effect will vary per model, but values between -1 and 1 should
    decrease or increase likelihood of selection; values like -100 or 100
    should result in a ban or exclusive selection of the relevant token.
    """

    #TODO: fix -inf output score from HF model
    # logprobs: Optional[bool] = False
    r"""
    Whether to return log probabilities of the output tokens or not. If true,
    returns the log probabilities of each output token returned in the
    `content` of `message`. This option is currently not available on the
    `gpt-4-vision-preview` model.
    """

    # top_logprobs: Optional[int] = None  # int between 0 and 5
    r"""
    An integer between 0 and 5 specifying the number of most likely tokens
    to return at each token position, each with an associated log
    probability. `logprobs` must be set to `true` if this parameter is used.
    """

    max_tokens: Optional[int] = None
    r"""
    The maximum number of tokens that can be generated in the chat completion.
    The total length of input tokens and generated tokens is limited by the
    model's context length. Example Python code for counting tokens.
    """
    n: Optional[int] = 1
    r"""
    How many chat completion choices to generate for each input message.
    Note that you will be charged based on the number of generated
    tokens across all of the choices. Keep `n` as `1` to minimize costs.
    """
    # presence_penalty: Optional[float] = 0  # from -2.0 to 2.0
    r"""
    Number between -2.0 and 2.0. Positive values penalize new tokens
    based on whether they appear in the text so far, increasing
    the model's likelihood to talk about new topics.
    """
    # response_format: None
    # seed: None
    stop: Optional[Union[str, List[str]]] = None
    stream: Optional[bool] = False
    temperature: Optional[float] = 0.2  # from 0 to 2.0
    top_p: Optional[float] = 1.0  # from 0 to 1.0
    # tools: None
    # tool_choice: None
    # user:

The ChatCompletionRequest model accurately captures the necessary parameters for chat completion requests. However, there are several commented-out fields and TODO comments indicating potential areas for future development or compatibility issues. Ensure that these are addressed or documented appropriately before finalizing the API.

Consider addressing or documenting the commented-out fields and TODO comments to clarify their status and plans for future development.

ocss884 (Member, Author) commented on Mar 13, 2024

Hi! @dandansamax @Wendong-Fan Do you have time to take a look at this PR and give me some feedback?

Wendong-Fan (Member) replied:

@ocss884 thanks for the contribution, will do the review later this week. Please fix the pre-commit errors when you have time.

Wendong-Fan added this to the Sprint 3 milestone on May 9, 2024.
Wendong-Fan changed the title from "Add serve module and cli tools" to "feat: Add serve module and cli tools" on May 19, 2024.
Labels: size:XL (This PR changes 500-999 lines, ignoring generated files.)
Projects: Status: Reviewing
Development: successfully merging this pull request may close these issues: none yet
Participants: 2