The support for GPT-4o has been improved. AIlice has demonstrated the strongest performance yet! #32
stevenlu137 started this conversation in Show and tell
-
Hi! I just want to say that I love what you're building. I've read through all the documentation and will be trying to run it myself (nervous! Not a dev, but your documentation makes it sound doable, so I'm going to try!). I really like the forward planning that I can see went into this project. Once I learn more, I'm sure I'll be trying to build my own modules. Could I ask your advice? Before getting started on my PC, I would like to have an instruction for AIlice somewhere explicitly telling her to always be honest. I would also like to put in a version of Asimov's laws. Where could I put that in the code?
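To make the question concrete, here is the kind of thing I have in mind, as a pure sketch. I don't know where AIlice actually keeps her prompts, so all the names here are made up, not real AIlice code:

```python
# Purely illustrative: SAFETY_DIRECTIVES and build_system_prompt are
# invented names; AIlice's real prompt-building code may look nothing
# like this.
SAFETY_DIRECTIVES = (
    "Always be honest: report what you actually did and what you do not know.\n"
    "Never harm a human, or through inaction allow a human to come to harm.\n"
    "Follow the user's instructions unless they conflict with the rule above.\n"
    "Protect your own operation unless that conflicts with the rules above.\n"
)

def build_system_prompt(base_prompt: str) -> str:
    """Return the agent's system prompt with the standing directives prepended."""
    return SAFETY_DIRECTIVES + "\n" + base_prompt
```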
-
Recently, I implemented fault tolerance in the interpreter to address some issues with GPT-4o, and added video multimodal support (there are still some problems, but this is not critical: video multimodality is hard to put to practical use in agent tasks until open-source models reach GPT-4o-level multimodal capability). I also made a first pass at resolving the long-standing issue of incorrect token estimation for multimodal content.
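For context on what estimating multimodal tokens involves: OpenAI documents a tiling scheme for GPT-4o-class vision input, where a high-detail image costs a base of 85 tokens plus 170 tokens per 512x512 tile after rescaling. Here is a minimal sketch of that published formula; it is not AIlice's actual estimator:

```python
import math

def estimate_image_tokens(width: int, height: int, detail: str = "high") -> int:
    """Estimate prompt tokens for one image under OpenAI's documented
    tiling scheme for GPT-4o-class models: 85 base tokens plus 170 per
    512x512 tile. A sketch of the published formula, not AIlice's code."""
    if detail == "low":
        return 85  # low-detail images cost a flat 85 tokens
    # First scale the image to fit within a 2048x2048 square.
    scale = min(1.0, 2048 / max(width, height))
    w, h = width * scale, height * scale
    # Then scale so the shorter side is at most 768 pixels.
    scale = min(1.0, 768 / min(w, h))
    w, h = w * scale, h * scale
    tiles = math.ceil(w / 512) * math.ceil(h / 512)
    return 85 + 170 * tiles

print(estimate_image_tokens(1920, 1080))  # a 1080p frame -> 1105 tokens
```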
Currently, we are seeing the best performance ever on GPT-4o. The performance of Mixtral-8x22b-instruct is also quite impressive, subjectively smarter than LLaMA-3-70b. By using the Interrupt feature to give agents real-time hints, we can now complete some complex tasks.
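As a rough illustration of the idea behind Interrupt (a hypothetical sketch under assumed names, not AIlice's implementation): the user drops a hint into a queue at any time, and the agent loop splices pending hints into the conversation between reasoning steps:

```python
import queue

class InterruptChannel:
    """Hypothetical sketch of an interrupt mechanism: the user can inject
    a hint at any moment; the agent picks it up between reasoning steps."""

    def __init__(self) -> None:
        self._hints: "queue.Queue[str]" = queue.Queue()

    def interrupt(self, hint: str) -> None:
        self._hints.put(hint)  # safe to call from another thread (e.g. a UI)

    def drain(self) -> list[str]:
        out = []
        while not self._hints.empty():
            out.append(self._hints.get_nowait())
        return out

def agent_loop(channel: InterruptChannel, llm_step, conversation: list) -> None:
    """Run model steps until completion, splicing in user hints as extra
    user messages. llm_step stands in for one LLM call."""
    while True:
        for hint in channel.drain():
            conversation.append({"role": "user", "content": hint})
        reply = llm_step(conversation)
        conversation.append({"role": "assistant", "content": reply})
        if "TASK COMPLETE" in reply:
            break
```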
Our next goals remain: long-term memory and complex software engineering.