Looking at the code, I believe it should be possible to implement serialisation. The idea would be simple: dump the algorithm into a JSON file.
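A minimal sketch of that idea, assuming an agent's "algorithm" can be represented as an ordered list of steps (the `Step` shape and function names here are hypothetical, not the project's actual classes):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Step:
    # Illustrative fields only: what the step does, which tool it calls,
    # and the arguments passed to that tool.
    name: str
    tool: str
    arguments: dict

def dump_algorithm(steps: list[Step], path: str) -> None:
    """Serialise the step list to a JSON file."""
    with open(path, "w") as f:
        json.dump([asdict(s) for s in steps], f, indent=2)

def load_algorithm(path: str) -> list[Step]:
    """Reconstruct the step list from a JSON file."""
    with open(path) as f:
        return [Step(**d) for d in json.load(f)]
```

Anything that survives a round trip through `dump_algorithm`/`load_algorithm` could then be replayed or inspected offline.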
I do wonder whether there's a bigger question here: agents being able to abstract their own tasks into concepts and build up a list of reusable tasks.
For example, when I was writing a trading bot I kept thinking: why can't the agent create a tool like "get_technical_indicators" once and then reuse it? Instead, it writes the tool from scratch every time.
It's like software development before the invention of libraries.
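The reuse pattern described above could be sketched as a tool registry: the agent generates a tool once, registers it under a name, and later runs look it up instead of regenerating the code. All names here are assumptions for illustration:

```python
from typing import Callable

class ToolRegistry:
    """Stores generated tools by name so they can be reused across runs."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable] = {}

    def register(self, name: str, fn: Callable) -> None:
        self._tools[name] = fn

    def has(self, name: str) -> bool:
        return name in self._tools

    def call(self, name: str, *args, **kwargs):
        return self._tools[name](*args, **kwargs)

registry = ToolRegistry()

# First run: the agent writes the tool and registers it.
if not registry.has("get_technical_indicators"):
    def get_technical_indicators(prices: list[float]) -> dict:
        # Toy stand-in: a simple moving average instead of real indicators.
        return {"sma": sum(prices) / len(prices)}
    registry.register("get_technical_indicators", get_technical_indicators)

# Later runs: reuse the stored tool instead of rewriting it.
result = registry.call("get_technical_indicators", [1.0, 2.0, 3.0])
```

Persisting the registry (e.g. as generated source files on disk) would be the step that makes tools survive across separate agent runs.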
Developing ways to pause and resume agent runs (and to provide feedback to agents) is something I've been thinking about. As a first step, I want to have a clean step execution tree. That tree will be serializable and is the main component in serializing a full agent run. Such a serialized run could be used to resume agents. It could also be the foundation for extracting information, maybe even reusable execution fragments as you suggest.
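One way such a serializable step execution tree might look, as a rough sketch (class and field names are assumptions, not the project's design): each node records its status, so a dumped run can be reloaded and resumed from the first incomplete step.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class StepNode:
    name: str
    status: str = "pending"  # "pending" | "done"
    children: list["StepNode"] = field(default_factory=list)

def dump_run(root: StepNode) -> str:
    """Serialise the whole execution tree to JSON."""
    return json.dumps(asdict(root))

def load_run(blob: str) -> StepNode:
    """Rebuild the execution tree from its JSON form."""
    def build(d: dict) -> StepNode:
        return StepNode(d["name"], d["status"],
                        [build(c) for c in d["children"]])
    return build(json.loads(blob))

def next_pending(node: StepNode) -> "StepNode | None":
    """Depth-first search for the first step that still needs to run."""
    if node.status == "pending":
        return node
    for child in node.children:
        found = next_pending(child)
        if found is not None:
            return found
    return None
```

A resumed run would deserialize the tree and continue executing from whatever `next_pending` returns.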
Inspired by https://jina.ai/news/auto-gpt-unmasked-hype-hard-truths-production-pitfalls/