
Add pne.chat use openai provider #650

Open
Undertone0809 opened this issue May 7, 2024 · 0 comments
Labels
enhancement New feature or request good first issue Good for newcomers

Comments

@Undertone0809 (Owner) commented May 7, 2024

🚀 Feature Request

Add support for pne.chat() to use the OpenAI provider to proxy specified models, e.g.:

import promptulate as pne

pne.chat(messages="hello", model="gpt-4-turbo")

If developers want to use the OpenAI provider via pne.chat(), they can call pne.chat(messages="hello", model="openai/custom-model", model_config={"base_url": "xxxx", "api_key": "xxx"}) to chat with a custom model. The model name uses the openai/ prefix.
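One way the prefix routing could work, as a minimal sketch: split the model string on the first `/` and dispatch on the provider part. `parse_model_name` is an illustrative helper, not an existing promptulate API.

```python
# Hypothetical sketch of how pne.chat() could route "openai/<model>" names.
# parse_model_name is illustrative only, not existing promptulate code.
def parse_model_name(model: str) -> tuple[str, str]:
    """Split 'openai/deepseek-chat' into ('openai', 'deepseek-chat').

    Names without a '/' fall back to the default provider, so existing
    calls like pne.chat(model="gpt-4-turbo") keep working unchanged.
    """
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    return "default", model
```

Splitting only on the first `/` keeps model names that themselves contain slashes intact.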

Why?

There are lots of providers that use the OpenAI SDK to proxy their models, e.g.:

  1. https://platform.deepseek.com/api-docs/api/create-chat-completion
    Original:
from openai import OpenAI

client = OpenAI(api_key="<your API key>", base_url="https://api.deepseek.com")

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"},
    ],
    max_tokens=1024,
    temperature=0.7,
    stream=False
)

print(response.choices[0].message.content)

Expect:

import promptulate as pne

pne.chat(
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"},
    ], 
    model="openai/deepseek-chat", 
    model_config={
        "base_url": "https://api.deepseek.com",
        "max_tokens": 1024,
        "temperature": 0.7,
        "stream": False
    }
)
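Note that model_config mixes keys that belong to the OpenAI client constructor (base_url, api_key) with per-request keys (max_tokens, temperature, stream). A minimal sketch of how an implementation could separate them; the helper name and key set here are assumptions, not existing promptulate code:

```python
# Illustrative only: split model_config into OpenAI client-constructor kwargs
# and per-request completion kwargs. The key set is an assumption.
CLIENT_KEYS = {"base_url", "api_key", "timeout", "max_retries"}

def split_model_config(model_config: dict) -> tuple[dict, dict]:
    """Return (client_kwargs, completion_kwargs) from a flat model_config."""
    client_kwargs = {k: v for k, v in model_config.items() if k in CLIENT_KEYS}
    completion_kwargs = {
        k: v for k, v in model_config.items() if k not in CLIENT_KEYS
    }
    return client_kwargs, completion_kwargs
```

client_kwargs would feed `OpenAI(**client_kwargs)`, completion_kwargs would feed `client.chat.completions.create(...)`.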

2. https://openrouter.ai/docs#principles


from openai import OpenAI
from os import getenv

# gets API Key from environment variable OPENAI_API_KEY
client = OpenAI(
  base_url="https://openrouter.ai/api/v1",
  api_key=getenv("OPENROUTER_API_KEY"),
)

completion = client.chat.completions.create(
  extra_headers={
    "HTTP-Referer": "<YOUR_SITE_URL>",  # Optional, for including your app on openrouter.ai rankings.
    "X-Title": "<YOUR_APP_NAME>",  # Optional. Shows in rankings on openrouter.ai.
  },
  model="openai/gpt-3.5-turbo",
  messages=[
    {
      "role": "user",
      "content": "Say this is a test",
    },
  ],
)
print(completion.choices[0].message.content)
3. Zhipu

What to do?

1. Optimize the pne.chat() core code.
2. Add unit tests.
3. Add docs and a notebook showing how to use a custom model via the OpenAI provider in pne.chat().

Attention

With OpenRouter, the OpenAI client needs to pass extra_headers. How pne.chat() should expose these headers is an open question.
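One option, sketched below under the assumption that model_config may carry an `extra_headers` key (the helper name is illustrative, not existing promptulate API): pass the headers straight through to the OpenAI SDK call, since `client.chat.completions.create()` accepts an `extra_headers` argument.

```python
# Illustrative sketch: forward an optional extra_headers entry from
# model_config to the OpenAI SDK request kwargs. Names are assumptions.
def build_request_kwargs(model: str, messages: list, model_config: dict) -> dict:
    """Assemble kwargs for client.chat.completions.create()."""
    kwargs = {"model": model, "messages": messages}
    extra_headers = model_config.get("extra_headers")
    if extra_headers:
        # The OpenAI Python SDK accepts extra_headers on per-request calls,
        # which is how OpenRouter's HTTP-Referer / X-Title headers get sent.
        kwargs["extra_headers"] = extra_headers
    return kwargs
```

This keeps provider-specific headers out of pne.chat()'s signature while still letting OpenRouter users supply them.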

@Undertone0809 Undertone0809 added the enhancement New feature or request label May 7, 2024
@Undertone0809 Undertone0809 self-assigned this May 7, 2024
@Undertone0809 Undertone0809 added the good first issue Good for newcomers label May 12, 2024
@Undertone0809 Undertone0809 removed their assignment May 18, 2024