Basically it requires adding two additional parameters to ChatCompletionCreateRequest:

- `logprobs`: Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message. This option is currently not available on the gpt-4-vision-preview model.
- `top_logprobs`: An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. `logprobs` must be set to true if this parameter is used.
This part can be done with the existing library, by deriving from ChatCompletionCreateRequest and adding these parameters:

```csharp
public class ChatCompletionCreateRequest2 : ChatCompletionCreateRequest
{
    /// <summary>
    /// Whether to return log probabilities of the output tokens or not.
    /// If true, returns the log probabilities of each output token returned in the content of message.
    /// This option is currently not available on the gpt-4-vision-preview model.
    /// </summary>
    [JsonPropertyName("logprobs")]
    public bool? LogProbs { get; set; }

    /// <summary>
    /// An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
    /// logprobs must be set to true if this parameter is used.
    /// </summary>
    [JsonPropertyName("top_logprobs")]
    public int? TopLogprobs { get; set; }
}
```
But I could not find an easy way to extend the parsing of the response. It is hardcoded to parse the server response into ChatCompletionCreateResponse, which in turn decodes ChatChoiceResponse, which doesn't have a property to capture the log probabilities from the server.
So, the suggestion is to add the two optional parameters mentioned above to ChatCompletionCreateRequest,
and to add a LogProbs property to ChatChoiceResponse (the same way it is declared in ChoiceResponse).
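As a sketch, the response-side models could mirror the shape of the `logprobs` object that the chat completions endpoint returns (a `content` array of tokens, each with a log probability and its own `top_logprobs` list). The class and property names below are suggestions only, not existing types in the library:

```csharp
// Hypothetical response models for the "logprobs" object in a chat completion choice.
public class LogProbsResponse
{
    // One entry per output token in the message content.
    [JsonPropertyName("content")]
    public List<LogProbContent>? Content { get; set; }
}

public class LogProbContent
{
    [JsonPropertyName("token")]
    public string? Token { get; set; }

    [JsonPropertyName("logprob")]
    public double LogProb { get; set; }

    // Up to top_logprobs alternatives considered at this position.
    [JsonPropertyName("top_logprobs")]
    public List<TopLogProb>? TopLogProbs { get; set; }
}

public class TopLogProb
{
    [JsonPropertyName("token")]
    public string? Token { get; set; }

    [JsonPropertyName("logprob")]
    public double LogProb { get; set; }
}
```

ChatChoiceResponse would then only need one new property, e.g. `[JsonPropertyName("logprobs")] public LogProbsResponse? LogProbs { get; set; }`, so the existing deserialization path picks it up without any other changes.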
Here is a description of the feature from OpenAI: https://cookbook.openai.com/examples/using_logprobs