feat: Add serve module and cli tools #448
base: master
Conversation
Important: Auto Review Skipped. Auto reviews are disabled on this repository; please check the settings in the CodeRabbit UI.

Walkthrough: The updates introduce a comprehensive command-line interface and server functionality for the Camel service, focusing on chatbot interactions and model management. They include licensing updates, a CLI for service operations, enhanced text generation capabilities, model management, non-blocking server execution, an OpenAI-compatible API server, utility functions for model loading and GPU usage, and improved server type definitions for request and response handling.
Review Status
Actionable comments generated: 11
Configuration used: CodeRabbit UI
Files ignored due to path filters (2)
- poetry.lock (excluded by `!**/*.lock`)
- pyproject.toml (excluded by `!**/*.toml`)
Files selected for processing (9)
- camel/serve/__init__.py (1 hunks)
- camel/serve/cli.py (1 hunks)
- camel/serve/inference.py (1 hunks)
- camel/serve/model_manager.py (1 hunks)
- camel/serve/non_block.py (1 hunks)
- camel/serve/openai_api_server.py (1 hunks)
- camel/serve/utils.py (1 hunks)
- camel/types/server_types.py (1 hunks)
- camel/utils/__init__.py (1 hunks)
Files skipped from review due to trivial changes (2)
- camel/serve/__init__.py
- camel/serve/model_manager.py
Additional comments: 14
camel/serve/non_block.py (1)
- 9-11: Using `daemon=True` for the thread running the server ensures that the server thread will close when the main program exits. However, it's important to confirm that this behavior is desired, as it may lead to unexpected server shutdowns if the main program's lifecycle is not managed properly.

camel/utils/__init__.py (1)
- 31-31: The addition of `messages_to_prompt` to the list of exported entities suggests that this is a new utility function or variable introduced for use outside this module. Ensure that `messages_to_prompt` is properly documented and tested, as it is now part of the public API of this module.

Verification successful

The `messages_to_prompt` function within `camel/utils/token_counting.py` is documented, as indicated by the presence of a docstring that begins to describe the function's purpose and parameters. This addresses the concern about ensuring that new utilities introduced to the public API are properly documented. However, the full extent and quality of the documentation cannot be assessed from the provided snippet alone. It is recommended to review the complete docstring and any associated tests to ensure comprehensive documentation and coverage.

Scripts Executed

The following scripts were executed for the analysis:

Script:

```sh
#!/bin/bash
# Verify the existence and documentation of messages_to_prompt in the utils module.
rg --type py "def messages_to_prompt" camel/utils/
rg --type py "messages_to_prompt =" camel/utils/
```

Length of output: 213

Script:

```sh
#!/bin/bash
# Extract the documentation (docstring) for messages_to_prompt function.
rg --type py --context 5 "def messages_to_prompt" camel/utils/token_counting.py
```

Length of output: 414
camel/serve/utils.py (1)
- 63-64: The `get_gpu_nums` function is straightforward and correctly returns the number of available GPUs. This is a good utility for dynamically determining device allocation based on available resources.

camel/serve/cli.py (2)
- 37-37: The `init` command uses `sudo systemctl start camel` to start the service, which implies that a systemd service named `camel` must be configured on the system. Ensure that documentation is provided on how to set up this systemd service, including necessary permissions and security considerations.
- 76-79: The `add` and `remove` commands correctly use `requests` to interact with the service. Ensure that error handling is implemented to provide user-friendly feedback in case of failures, such as model not found or server errors. Consider adding error handling to these commands to catch and display HTTP errors or connection issues, as sketched below.
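A hedged sketch of what such feedback handling could look like; the helper name, message wording, and the 30-second timeout are illustrative assumptions, not code from this PR:

```python
import click
import requests

def post_with_feedback(url: str, payload: dict) -> None:
    """POST to the serve API and echo a friendly message on failure."""
    try:
        resp = requests.post(url, json=payload, timeout=30)
        resp.raise_for_status()
        click.echo(resp.json())
    except requests.exceptions.ConnectionError:
        click.echo("Could not reach the camel server. Is it running?", err=True)
    except requests.exceptions.HTTPError as exc:
        # Surface the server-provided detail (e.g. "model not found") when present.
        try:
            detail = exc.response.json().get("detail", exc.response.text)
        except ValueError:
            detail = exc.response.text
        click.echo(f"Server error ({exc.response.status_code}): {detail}", err=True)
```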
camel/serve/openai_api_server.py (3)
- 41-43: The root route (`/`) returns a simple "Hello World" message along with a list of models managed by `LLM_MGR`. This is a useful diagnostic endpoint, but consider documenting its purpose and potential use cases to avoid confusion.
- 73-85: The `add_model` route handler loads a model and tokenizer using `load_model` and registers them with `LLM_MGR`. Ensure that error handling is implemented for cases where model loading fails or the model is already registered, to provide clear feedback to the client. Consider adding error handling for model loading failures and duplicate model registration.
- 106-107: The application is configured to run with `uvicorn` using command-line arguments for host and port, which is a flexible setup. Ensure that the `timeout_keep_alive` parameter is set appropriately for your use case, as it might need adjustment based on the expected client behavior and server load.

camel/serve/inference.py (1)
- 71-144: The `translate_param_to_hf_config` function translates OpenAI chat completion parameters to a configuration compatible with Hugging Face's `generate` method. While it covers essential parameters like `max_new_tokens`, `temperature`, and `top_p`, ensure that this translation aligns with the intended use cases and that any omitted parameters are intentionally excluded based on compatibility or relevance. A rough sketch of this kind of mapping follows.
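For orientation, a sketch of the kind of mapping such a function performs; the function name and everything beyond the parameters cited above is an assumption, not the PR's actual code:

```python
from typing import Any, Dict

def openai_to_hf_generation_config(request) -> Dict[str, Any]:
    """Map OpenAI-style chat parameters onto kwargs accepted by
    transformers' `model.generate` (illustrative only)."""
    config: Dict[str, Any] = {}
    if request.max_tokens is not None:
        config["max_new_tokens"] = request.max_tokens
    if request.temperature is not None:
        config["temperature"] = request.temperature
        # HF only applies temperature/top_p when sampling is enabled.
        config["do_sample"] = request.temperature > 0
    if request.top_p is not None:
        config["top_p"] = request.top_p
    if request.n is not None:
        config["num_return_sequences"] = request.n
    return config
```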
camel/types/server_types.py (5)
- 26-29: The `ListModelResponse` model is well-defined for listing models. It includes essential fields like `model_name` and `model_type`, which should cover basic needs for model identification and categorization.
- 112-114: The `ChatCompletionResponse` model is simple and effective for returning chat completion messages. If additional metadata or information about the completion is needed in the future, consider extending this model.
- 117-150: The `HFModelLoadingParam` model includes a configuration option for `device_map` that supports a variety of input types. This flexibility is beneficial for deploying models across different hardware configurations. Ensure that the documentation clearly explains how to use this field, especially for users unfamiliar with PyTorch's device management.
- 152-164: The `ModelRegistrationRequest` model is well-structured for registering new models with the service. It allows for specifying a model by ID or path and optionally renaming it. This flexibility supports a wide range of use cases, from using pre-trained models to integrating custom models.
- 167-171: The `ModelRemovalRequest` model is straightforward and effective for requesting the removal of a model from the service. It uses the model name as the key identifier, which aligns with common practices for managing resources. A sketch of how these models might fit together follows this list.
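For orientation, a minimal sketch of how these request models might be declared; the field names echo the review above, but the exact types, defaults, and the `loading_param` field are assumptions:

```python
from typing import Dict, Optional, Union
from pydantic import BaseModel

class HFModelLoadingParam(BaseModel):
    # device_map accepts a device string, a GPU index, or a per-module mapping,
    # mirroring the flexibility of transformers' from_pretrained (assumed shape).
    device_map: Optional[Union[str, int, Dict[str, Union[int, str]]]] = "cpu"

class ModelRegistrationRequest(BaseModel):
    model: str  # Hugging Face model ID or local path
    model_name: Optional[str] = None  # optional alias to register the model under
    loading_param: HFModelLoadingParam = HFModelLoadingParam()  # hypothetical field

class ModelRemovalRequest(BaseModel):
    model_name: str  # models are keyed by name for removal
```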
camel/serve/non_block.py
Outdated
```python
import uvicorn  # needed by start_server; presumably imported earlier in the file

from camel.serve.openai_api_server import app
from time import sleep

def start_server():
    uvicorn.run(app, host=None, port=8000)
```
The server is hardcoded to run on port 8000 without allowing customization of the host or port. Consider making these parameters configurable to enhance flexibility and avoid potential port conflicts.
```diff
- uvicorn.run(app, host=None, port=8000)
+ uvicorn.run(app, host=host, port=port)
```
And modify the `start_server` function to accept `host` and `port` as arguments.
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.
```python
uvicorn.run(app, host=host, port=port)
```
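A fuller sketch of the configurable variant; the default host/port values and the `args=` wiring are assumptions:

```python
import threading
from time import sleep

import uvicorn

from camel.serve.openai_api_server import app

def start_server(host: str = "127.0.0.1", port: int = 8000) -> None:
    uvicorn.run(app, host=host, port=port)

def run_in_background(host: str = "127.0.0.1", port: int = 8000) -> None:
    # Daemon thread so the server does not keep the process alive on exit.
    t = threading.Thread(target=start_server, args=(host, port), daemon=True)
    t.start()
    sleep(0.1)  # give uvicorn a moment to bind the port
```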
camel/serve/non_block.py
Outdated
```python
def run_in_background():  # enclosing definition inferred from the surrounding file
    t = threading.Thread(target=start_server, daemon=True)
    t.start()
    sleep(0.1)

run_in_background()
```
The call to `run_in_background()` at the global level will execute the server startup as soon as this module is imported. This could lead to unintended side effects, especially in a larger application where imports might happen conditionally or in a specific order. Consider controlling server startup through a main function or an explicit call from outside this module.

```diff
- run_in_background()
```

Move this call to a controlled environment, such as a main function or an explicit initialization routine, as sketched below.
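A minimal sketch of the guarded form, assuming a standard `__main__` guard is acceptable here:

```python
if __name__ == "__main__":
    # Only start the server when this file is executed directly,
    # never as a side effect of `import camel.serve.non_block`.
    run_in_background()
```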
camel/serve/utils.py
Outdated
```python
def load_model(
    model: str,
    hf_param: HFModelLoadingParam,
):
    """
    General entry of loading a model in CAMEL project
    """
    # device input check
    hf_param = hf_param.model_dump() if PYDANTIC_V2 else hf_param.dict()  # type: ignore
    device = hf_param["device_map"]
    if isinstance(device, str):
        if device not in {"cuda", "cpu"}:
            raise ValueError(f"Invalid device {device}, expect one of 'cuda' and 'cpu'.")
    if isinstance(device, int):
        gpu_nums = get_gpu_nums()
        if device >= gpu_nums:
            raise ValueError(f"Invalid device number {device}, only have {tuple(range(gpu_nums))}.")

    model, tokenizer = load_HF_model(model, hf_param)
    return model, tokenizer
```
The `load_model` function correctly checks the device type and validates GPU device numbers. However, it lacks error handling for the case where `get_gpu_nums` might be called on a system without CUDA support, leading to a potential unhandled exception. Consider adding a check for CUDA availability.

```diff
+ if torch.cuda.is_available():
```

Add this check before accessing CUDA-specific functions to ensure compatibility with systems without GPU support.
Committable suggestion
```python
def load_model(
    model: str,
    hf_param: HFModelLoadingParam,
):
    """
    General entry of loading a model in CAMEL project
    """
    # device input check
    hf_param = hf_param.model_dump() if PYDANTIC_V2 else hf_param.dict()  # type: ignore
    device = hf_param["device_map"]
    if isinstance(device, str):
        if device not in {"cuda", "cpu"}:
            raise ValueError(f"Invalid device {device}, expect one of 'cuda' and 'cpu'.")
    if isinstance(device, int):
        if torch.cuda.is_available():
            gpu_nums = get_gpu_nums()
            if device >= gpu_nums:
                raise ValueError(f"Invalid device number {device}, only have {tuple(range(gpu_nums))}.")
    model, tokenizer = load_HF_model(model, hf_param)
    return model, tokenizer
```
```python
def load_HF_model(hf_model, configs):
    """
    Load a model from Hugging Face model hub
    """
    try:
        model = AutoModelForCausalLM.from_pretrained(
            hf_model,
            **configs
        )
    except Exception as e:
        raise HTTPException(status_code=404, detail=str(e))
    try:
        tokenizer = AutoTokenizer.from_pretrained(
            hf_model
        )
    except Exception as e:
        raise HTTPException(status_code=404, detail=str(e))

    return model, tokenizer
```
The `load_HF_model` function uses a broad `except Exception as e` clause, which might catch more exceptions than intended, potentially obscuring the root cause of errors. It's generally good practice to catch specific exceptions where possible. Additionally, raising an `HTTPException` in this utility function couples it tightly to FastAPI, reducing its reusability. Consider returning error information in a way that doesn't depend on FastAPI.

```diff
- raise HTTPException(status_code=404, detail=str(e))
+ raise ValueError(f"Failed to load model or tokenizer: {str(e)}")
```

This change decouples the utility function from FastAPI and encourages handling exceptions at a higher level where the context of HTTP requests is relevant.
Committable suggestion
```python
def load_HF_model(hf_model, configs):
    """
    Load a model from Hugging Face model hub
    """
    try:
        model = AutoModelForCausalLM.from_pretrained(
            hf_model,
            **configs
        )
    except Exception as e:
        raise ValueError(f"Failed to load model or tokenizer: {str(e)}")
    try:
        tokenizer = AutoTokenizer.from_pretrained(
            hf_model
        )
    except Exception as e:
        raise ValueError(f"Failed to load model or tokenizer: {str(e)}")
    return model, tokenizer
```
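With that change, the FastAPI layer would translate the error itself. A hedged sketch of what that could look like; the route path, the `loading_param` field, and the `LLM_MGR.register_model` call are hypothetical placeholders, not this PR's actual API:

```python
from fastapi import FastAPI, HTTPException

app = FastAPI()  # stand-in; the PR defines its own app in openai_api_server

@app.post("/v1/add")  # hypothetical route path, for illustration only
async def add_model(request: ModelRegistrationRequest):
    try:
        # load_model is the PR's utility; after the change it raises ValueError.
        model, tokenizer = load_model(request.model, request.loading_param)
    except ValueError as e:
        # Translate the library-level error at the HTTP boundary,
        # where the request context is known.
        raise HTTPException(status_code=404, detail=str(e))
    LLM_MGR.register_model(request.model, model, tokenizer)  # hypothetical call
    return {"message": f"Model {request.model} loaded."}
```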
camel/serve/cli.py
Outdated
```python
out = requests.post(
    f"http://localhost:{8000}/v1/chat/completions", json={
        "model": model,
        "messages": [{
            "role": "user",
            "content": messages
        }]
    })
click.echo(out.json()["message"])
```
The `chat` command hardcodes the host and port for the service. Consider making these configurable through CLI options or environment variables to enhance flexibility and support different deployment scenarios.

```diff
- f"http://localhost:{8000}/v1/chat/completions"
+ f"http://{host}:{port}/v1/chat/completions"
```

And add `host` and `port` as parameters to the `chat` function, with default values or retrieved from environment variables.
Committable suggestion
```python
out = requests.post(
    f"http://{host}:{port}/v1/chat/completions", json={
        "model": model,
        "messages": [{
            "role": "user",
            "content": messages
        }]
    })
click.echo(out.json()["message"])
```
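One way this could be wired up with click; the option names, env-var names, and defaults below are assumptions, not the PR's choices:

```python
import click
import requests

@click.command()
@click.option("--host", default="localhost", envvar="CAMEL_HOST",
              show_default=True, help="Serve API host.")
@click.option("--port", default=8000, type=int, envvar="CAMEL_PORT",
              show_default=True, help="Serve API port.")
@click.argument("model")
@click.argument("messages")
def chat(host: str, port: int, model: str, messages: str) -> None:
    """Send a single user message to the chat completions endpoint."""
    out = requests.post(
        f"http://{host}:{port}/v1/chat/completions",
        json={"model": model,
              "messages": [{"role": "user", "content": messages}]})
    click.echo(out.json()["message"])
```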
camel/serve/cli.py
Outdated
""" | ||
Exits the camel, closing all running LLMs | ||
""" | ||
subprocess.run(["sudo", "systemctl", "stop", "camel"]) |
Tip: Codebase Verification
The search for documentation related to setting up and securing the systemd service, as mentioned in the review comment, did not yield any results directly mentioning "systemd" or providing instructions for systemd service setup in the README.md or other common documentation files. The mentions of "service" found in the search results are not related to systemd service setup but rather to setting environment variables for an OpenAI API proxy service and using an external server for LLM inference service. Based on this, it appears that the specific documentation requested in the review comment might not exist in the repository.
Analysis chain
The `exit` command uses `sudo systemctl stop camel` to stop the service. As with the `init` command, ensure that documentation is provided on setting up and securing the systemd service.
Scripts Executed
The following scripts were executed for the analysis:
Script:
```sh
#!/bin/bash
# Verify documentation for systemd service setup.
rg "systemd" README.md
```

Length of output: 22
Script:
```sh
#!/bin/bash
# Broaden the search for systemd service setup documentation in other common documentation files.
rg "systemd" -g '*.md'
# Additionally, search for keywords related to service setup to catch indirect references.
rg "(service setup|systemctl|service)" -g '*.md'
```

Length of output: 838
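For illustration only, a minimal unit file of the kind such documentation might describe; every path, user name, and flag here is an assumption to be adapted, and nothing like it ships in the repository:

```ini
# /etc/systemd/system/camel.service (illustrative sketch only)
[Unit]
Description=CAMEL OpenAI-compatible API server
After=network.target

[Service]
User=camel
WorkingDirectory=/opt/camel
# The PR's server takes host/port on the command line; adapt the interpreter path and flags.
ExecStart=/opt/camel/.venv/bin/python -m camel.serve.openai_api_server --host 127.0.0.1 --port 8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```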
camel/serve/openai_api_server.py
Outdated
```python
#TODO: make the return match the openai api
@app.post("/v1/chat/completions")
async def chat_completions(
        request: ChatCompletionRequest) -> ChatCompletionResponse:
    """
    Creates a chat completion for chat messages
    """
    check_model_availability(request.model)

    model, tokenizer = LLM_MGR.get_model(request.model), LLM_MGR.get_tokenizer(
        request.model)
    output = generate(model, tokenizer, request)

    return ChatCompletionResponse(message=output[0])
```
The `chat_completions` route handler correctly processes chat completion requests. However, it directly accesses `LLM_MGR` without error handling for cases where the model might not be found. Consider adding error handling to provide a more informative response in such cases.

```diff
- check_model_availability(request.model)
+ try:
+     check_model_availability(request.model)
+ except KeyError as e:
+     raise HTTPException(status_code=404, detail=str(e))
```

This change ensures that a clear error message is returned when the requested model is not found.
Committable suggestion
```python
#TODO: make the return match the openai api
@app.post("/v1/chat/completions")
async def chat_completions(
        request: ChatCompletionRequest) -> ChatCompletionResponse:
    """
    Creates a chat completion for chat messages
    """
    try:
        check_model_availability(request.model)
    except KeyError as e:
        raise HTTPException(status_code=404, detail=str(e))
    model, tokenizer = LLM_MGR.get_model(request.model), LLM_MGR.get_tokenizer(
        request.model)
    output = generate(model, tokenizer, request)
    return ChatCompletionResponse(message=output[0])
```
camel/serve/openai_api_server.py
Outdated
```python
@app.delete("/v1/delete")
async def remove_model(request: ModelRemovalRequest):
    model_name = request.model_name
    try:
        LLM_MGR.delete_model(model_name)
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
    return {"message": f"Model {model_name} deleted."}
```
The `remove_model` route handler correctly attempts to remove a model from `LLM_MGR`. However, it catches a broad `Exception`, which might obscure the root cause of errors. Consider catching specific exceptions related to model removal and providing informative error messages.

```diff
- except Exception as e:
+ except KeyError as e:
```

Change the exception type to `KeyError` or another specific exception type relevant to model removal failures.
Committable suggestion
```python
@app.delete("/v1/delete")
async def remove_model(request: ModelRemovalRequest):
    model_name = request.model_name
    try:
        LLM_MGR.delete_model(model_name)
    except KeyError as e:
        raise HTTPException(status_code=500, detail=str(e))
    return {"message": f"Model {model_name} deleted."}
```
camel/serve/inference.py
Outdated
```python
def generate(model, tokenizer, request):
    hf_generation_config = translate_param_to_hf_config(request)

    # tokenizer hosted in huggingface hub may not have chat_template configed
    if tokenizer.chat_template is not None:
        input_ids = tokenizer.apply_chat_template(
            request.messages, return_tensors="pt").to("cuda")
    else:
        raise AttributeError(
            "Chat template not found in tokenizer, please provide a template")
    if request.stop is not None:
        stop = [
            tokenizer.encode(stop_word, add_prefix_space=False)
            for stop_word in tuple(request.stop)
        ]
    else:
        stop = None

    prefix_length = input_ids.shape[-1]
    out_ids = model.generate(input_ids=input_ids, stopping_criteria=stop,
                             **hf_generation_config)

    decoded = tokenizer.batch_decode(out_ids[:, prefix_length:],
                                     skip_special_tokens=True)
    return decoded
```
The `generate` function correctly translates parameters and generates text completions. However, it assumes the presence of `tokenizer.chat_template`, which might not exist for all tokenizers. This is correctly handled with an `AttributeError`, but consider documenting this requirement or providing a fallback mechanism. Consider adding documentation about the `chat_template` requirement or implementing a fallback for tokenizers without this attribute, as sketched below.
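A hedged sketch of what such a fallback could look like; the helper name and the naive role-prefixed prompt format are illustrative assumptions, not this PR's code:

```python
def build_input_ids(tokenizer, messages, device: str = "cuda"):
    """Apply the tokenizer's chat template when present, otherwise fall back
    to a plain role-prefixed prompt (quality will vary by model)."""
    if getattr(tokenizer, "chat_template", None) is not None:
        return tokenizer.apply_chat_template(
            messages, return_tensors="pt").to(device)
    # Fallback: naive "role: content" concatenation ending with an
    # assistant cue, so the model continues as the assistant.
    prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    prompt += "\nassistant:"
    return tokenizer(prompt, return_tensors="pt").input_ids.to(device)
```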
```python
class ChatCompletionRequest(BaseModel):
    r"""Base class for completion create parameters.
    Fields not usable for open-source models are
    commented out.
    # ref: https://platform.openai.com/docs/api-reference/chat/create
    """

    messages: List[ChatCompletionMessageParam]
    r"""
    A list of openai-style user-assistant alternating messages.
    """

    model: str
    r"""
    Model to use.
    """

    # frequency_penalty: Optional[float] = 0
    r"""
    Number between -2.0 and 2.0. Positive values penalize new tokens based
    on their existing frequency in the text so far, decreasing the model's
    likelihood to repeat the same line verbatim.
    """

    logit_bias: Optional[Dict] = None
    r"""
    Modify the likelihood of specified tokens appearing in the completion.

    Accepts a JSON object that maps tokens (specified by their token ID in the
    tokenizer) to an associated bias value from -100 to 100. Mathematically,
    the bias is added to the logits generated by the model prior to sampling.
    The exact effect will vary per model, but values between -1 and 1 should
    decrease or increase likelihood of selection; values like -100 or 100
    should result in a ban or exclusive selection of the relevant token.
    """

    #TODO: fix -inf output score from HF model
    # logprobs: Optional[bool] = False
    r"""
    Whether to return log probabilities of the output tokens or not. If true,
    returns the log probabilities of each output token returned in the
    `content` of `message`. This option is currently not available on the
    `gpt-4-vision-preview` model.
    """

    # top_logprobs: Optional[int] = None  # int between 0 and 5
    r"""
    An integer between 0 and 5 specifying the number of most likely tokens
    to return at each token position, each with an associated log
    probability. `logprobs` must be set to `true` if this parameter is used.
    """

    max_tokens: Optional[int] = None
    r"""
    The maximum number of tokens that can be generated in the chat completion.
    The total length of input tokens and generated tokens is limited by the
    model's context length. Example Python code for counting tokens.
    """
    n: Optional[int] = 1
    r"""
    How many chat completion choices to generate for each input message.
    Note that you will be charged based on the number of generated
    tokens across all of the choices. Keep `n` as `1` to minimize costs.
    """
    # presence_penalty: Optional[float] = 0  # from -2.0 to 2.0
    r"""
    Number between -2.0 and 2.0. Positive values penalize new tokens
    based on whether they appear in the text so far, increasing
    the model's likelihood to talk about new topics.
    """
    # response_format: None
    # seed: None
    stop: Optional[Union[str, List[str]]] = None
    stream: Optional[bool] = False
    temperature: Optional[float] = 0.2  # from 0 to 2.0
    top_p: Optional[float] = 1.0  # from 0 to 1.0
    # tools: None
    # tool_choice: None
    # user:
```
The `ChatCompletionRequest` model accurately captures the necessary parameters for chat completion requests. However, several commented-out fields and TODO comments indicate potential areas for future development or compatibility issues. Consider addressing or documenting these to clarify their status and plans for future development before finalizing the API.
Force-pushed from 4d5101e to 8c4ed85
Hi! @dandansamax @Wendong-Fan Do you have time to take a look at this PR and give me some feedback?
Force-pushed from 8c4ed85 to c25b911
Force-pushed from c25b911 to f2ae5cc
@ocss884 thanks for the contribution, will do the review later this week, please fix the pre-commit errors when you have time
This PR aims to add our own OpenAI-compatible API server and command-line tools with the help of systemd.
TODO: