I noticed that you're making async calls to the OpenAI API manually in `async def async_create_completion()`. Is there any reason for this instead of using `AsyncOpenAI` from the official library?
The reason I noticed this is that I was instrumenting our app with OpenTelemetry and OpenLLMetry, only to find that async OpenAI calls from Adala are not instrumented - OpenLLMetry expects calls to go through the standard `openai` package. It would be great to stick to the official package so that this (and other) useful tricks keep working.
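To illustrate why this matters: tracing libraries like OpenLLMetry typically work by wrapping the methods of the official client at import time, so any request made through a hand-rolled HTTP call never passes through the wrapper and produces no span. The sketch below is a toy stand-in (not OpenLLMetry's actual code, and `FakeOpenAIClient` is a hypothetical class) that shows the mechanism:

```python
# Toy illustration of library-level instrumentation: the tracer wraps the
# official client's method, so only calls routed through that client are seen.

calls_seen = []  # stands in for emitted trace spans

class FakeOpenAIClient:
    """Hypothetical stand-in for the official OpenAI client."""
    def create_completion(self, prompt):
        return f"completion for: {prompt}"

def instrument(client):
    """Wrap the client's method to record each call, roughly the way
    tracing libraries monkeypatch the official `openai` package."""
    original = client.create_completion
    def traced(prompt):
        calls_seen.append(prompt)   # a span would be emitted here
        return original(prompt)
    client.create_completion = traced
    return client

def manual_http_call(prompt):
    """Simulates calling the HTTP API directly (e.g. via aiohttp):
    the instrumented client is bypassed, so nothing is recorded."""
    return f"completion for: {prompt}"

client = instrument(FakeOpenAIClient())
client.create_completion("hello")   # traced
manual_http_call("world")           # invisible to the tracer
print(calls_seen)                   # → ['hello']
```

The manual call returns a result just as well, but the tracer never sees it, which is exactly the gap observed with Adala's hand-written async requests.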
Hi, thanks for reaching out! Happy to say this is under development in #120 and should be merged in next week. Interested to hear more about your use case if you have details to share.
Thank you. This is great to hear!
I hope it'll make it easier for us to use LangFuse or something similar as we move towards more instrumentation and observability for our production environment.
PS I'm happy to share our use cases privately, as we're working on an internal proprietary tool.
I would also appreciate it if our (very simple) bugfixes were merged (or otherwise addressed) - like this one: #98