[ICML 2024] LLMCompiler: An LLM Compiler for Parallel Function Calling
Python - Updated Apr 15, 2024
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Built on llama.cpp, it provides a simple yet robust interface for chatting with LLMs, executing structured function calls, and getting structured output. It also supports llama-index and OpenAI Tools.
A sample app demonstrating function calling with the latest format in both the Chat Completions API and the Assistants API.
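To illustrate what parallel function calling looks like in practice: in the Chat Completions tool-calling format, a single assistant message can carry several `tool_calls` at once, and the client executes each one and returns one `tool` message per call. Below is a minimal, hedged sketch of that dispatch loop; the tool names (`get_weather`, `get_time`), arguments, and the mocked response dict are illustrative assumptions, not part of any of the listed repos.

```python
import json

# Hypothetical local tools the model might request in parallel.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

def get_time(city: str) -> str:
    return f"12:00 in {city}"

TOOLS = {"get_weather": get_weather, "get_time": get_time}

# A mocked assistant message in the Chat Completions tool-calling shape:
# with parallel function calling, one response carries multiple tool_calls.
assistant_message = {
    "role": "assistant",
    "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}},
        {"id": "call_2", "type": "function",
         "function": {"name": "get_time", "arguments": '{"city": "Paris"}'}},
    ],
}

def run_tool_calls(message: dict) -> list[dict]:
    """Execute every requested tool call and return one 'tool' message per call."""
    results = []
    for call in message.get("tool_calls", []):
        fn = TOOLS[call["function"]["name"]]
        # Arguments arrive as a JSON string and must be decoded before dispatch.
        args = json.loads(call["function"]["arguments"])
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],  # links the result back to its request
            "content": fn(**args),
        })
    return results

tool_messages = run_tool_calls(assistant_message)
```

Each result message echoes the originating `tool_call_id`, which is how the API matches tool outputs back to the parallel requests in the next turn.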