Releases: pgalko/BambooAI

v0.3.50

21 May 06:10
  • The library now supports scraping of dynamic web content via Selenium
  • Requires a manual ChromeDriver download, with the path to it set in the SELENIUM_WEBDRIVER_PATH env var
  • If the env var is set, the library uses Selenium for all scraping tasks
  • A couple of bug fixes
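As a quick sketch of the configuration step (the ChromeDriver path below is illustrative, not a required location):

```python
import os

# Point the library at a manually downloaded ChromeDriver binary.
# The path is illustrative; use wherever you saved your download.
os.environ["SELENIUM_WEBDRIVER_PATH"] = "/usr/local/bin/chromedriver"

# Per the notes above, the presence of this variable is what switches
# all scraping tasks to Selenium; unset it to return to static scraping.
uses_selenium = "SELENIUM_WEBDRIVER_PATH" in os.environ
```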

v0.3.48

15 May 03:45

Major refactor of qa_retrieval.py, new gemini models

  • Adds support for the new Pinecone client
  • Removes duplication of vector DB records
  • Adds support for OpenAI embeddings models in addition to hf_sentence_transformers
  • text-embedding-3-small is now the default embeddings model
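The default-model behaviour can be sketched as follows (the EMBEDDINGS_MODEL override variable is hypothetical, for illustration only; the notes do not name one):

```python
import os

# Hedged sketch of the embeddings-model selection described above:
# text-embedding-3-small is the default. The EMBEDDINGS_MODEL override
# variable is an assumption for illustration, not the library's API.
def choose_embeddings_model() -> str:
    return os.environ.get("EMBEDDINGS_MODEL", "text-embedding-3-small")
```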

v0.3.44

05 May 09:30
1e217c4
  • Google search is now seamlessly incorporated into the flow. Very positive results with Groq: Llama 3 70B when selected as the model for the Search agents:

    {"agent": "Google Search Query Generator", "details": {"model": "llama3-70b-8192", "provider": "groq", "max_tokens": 4000, "temperature": 0}},
    {"agent": "Google Search Summarizer", "details": {"model": "llama3-70b-8192", "provider": "groq", "max_tokens": 4000, "temperature": 0}}

  • Some improvements to Jupyter notebook output formatting.
  • Search could benefit further from something like Selenium or Pyppeteer to allow scraping of dynamic websites; at the moment only static content is supported. Tricky, as we do not want the library to become too bloated.

v0.3.42

03 May 03:50
  • I have now updated the notebook output formatting to use Markdown instead of HTML. It is now much more pleasant for the user. The changes are included in this version (v0.3.42) and pushed to PyPI.

  • Video to illustrate the new output here: https://github.com/pgalko/BambooAI/assets/39939157/6058a3a2-63d9-44b9-b065-0a0cda5d7e17

  • Also benchmarked the library against the "OpenAI Assistants API + Code Interpreter". BambooAI is much cheaper and faster :-).

Task: Devise a machine learning model to predict the survival of passengers on the Titanic. The output should include the accuracy of the model and visualizations of the confusion matrix, correlation matrix, and other relevant metrics.

Dataset: Titanic.csv

Model: GPT-4-Turbo

OpenAI Assistants API (Code Interpreter)

  • Result:
    • Confusion Matrix:
      • True Negative (TN): 90 passengers were correctly predicted as not surviving.
      • True Positive (TP): 56 passengers were correctly predicted as surviving.
      • False Negative (FN): 18 passengers were incorrectly predicted as not surviving.
      • False Positive (FP): 15 passengers were incorrectly predicted as surviving.
| Metric | Value |
| --- | --- |
| Execution Time | 77.12 seconds |
| Input Tokens | 7128 |
| Output Tokens | 1215 |
| Total Cost | $0.1077 |

BambooAI (No Planning, Google Search, or Vector DB)

  • Result:
    • Confusion Matrix:
      • True Negative (TN): 92 passengers were correctly predicted as not surviving.
      • True Positive (TP): 55 passengers were correctly predicted as surviving.
      • False Negative (FN): 19 passengers were incorrectly predicted as not surviving.
      • False Positive (FP): 13 passengers were incorrectly predicted as surviving.
| Metric | Value |
| --- | --- |
| Execution Time | 47.39 seconds |
| Input Tokens | 722 |
| Output Tokens | 931 |
| Total Cost | $0.0353 |
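As a sanity check, the headline metrics implied by the two confusion matrices above can be recomputed from the raw counts (computed here, not output from either tool):

```python
# Derive accuracy, precision, and recall from the confusion-matrix
# counts reported above. Both runs cover the same 179 test passengers.
def metrics(tn: int, tp: int, fn: int, fp: int) -> dict:
    total = tn + tp + fn + fp
    return {
        "accuracy": (tn + tp) / total,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

assistants_api = metrics(tn=90, tp=56, fn=18, fp=15)  # accuracy ~0.816
bambooai = metrics(tn=92, tp=55, fn=19, fp=13)        # accuracy ~0.821
```

Both pipelines land within about half a percentage point of each other on accuracy; the cost and latency differences in the tables are the more telling contrast.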

v0.3.38

02 May 05:34
  • The Planning agent can now use Google search via function calls. This is currently only available for OpenAI LLMs.
  • New logic for the Expert Selector
  • Plans are now included in vector DB record metadata alongside the code. This is particularly beneficial for non-OpenAI models.
  • A completely new google_search.py module using the ReAct method
  • Some prompt adjustments. The current date is now included in some system prompts.
  • A bunch of bug fixes

v0.3.32

21 Apr 13:12
9613a21

Added support for a bunch of models and APIs

  • Added support for Ollama
  • Added support for Anthropic, Mistral, Groq, Google Gemini
  • Some bug fixes

v0.3.30

17 Apr 11:40
38ae773
  • BambooAI is now compatible with the latest version of the OpenAI client library
  • New open-source LLM added: "Open Code Interpreter"
  • A few modifications to prompts
  • The default OpenAI model switched to GPT-4-Turbo

v0.3.29

25 Oct 03:37
dc0b258
  • Load the LLM config from an env var or JSON file
  • Load prompt templates from a JSON file
  • Add the ability to specify an LLM config individually for each agent
  • Append the full traceback to error-correction calls
  • Refactor the functions and classes to match the agent workflow
  • Change variable names to be more descriptive
  • Change output messages to be more descriptive

Deprecation Notice (October 25, 2023):
Please note that the "llm", "local_code_model", "llm_switch_plan", and "llm_switch_code" parameters have been deprecated as of v0.3.29. The assignment of models and model parameters to agents is now handled via LLM_CONFIG. This can be set either as an environment variable or via an LLM_CONFIG.json file in the working directory. Please see the README "Usage" section for details.
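A minimal LLM_CONFIG.json sketch, following the per-agent format shown in the v0.3.44 notes above (the "Expert Selector" entry and all parameter values here are illustrative; see the README "Usage" section for the authoritative format):

```json
[
  {"agent": "Expert Selector", "details": {"model": "gpt-4-turbo", "provider": "openai", "max_tokens": 500, "temperature": 0}},
  {"agent": "Google Search Query Generator", "details": {"model": "llama3-70b-8192", "provider": "groq", "max_tokens": 4000, "temperature": 0}}
]
```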

v0.3.28

07 Oct 13:52

  • Result of the previous computation added to the user prompt
  • Model max_tokens increased to 2000
  • Minor output wording changes

v0.3.27

29 Sep 08:01

  • Consolidated all output and print logic in the "output_manager" module.
  • A few bug fixes related to duplication of log entries.