
Improving resilience to inaccurate code generation #42

Open
5 tasks done
alextrzyna opened this issue Jun 23, 2023 · 5 comments
Labels
enhancement New feature or request good first issue Good for newcomers

Comments

@alextrzyna

⚠️ Please check that this feature request hasn't been suggested before.

  • I searched previous Ideas in Discussions and didn't find any similar feature requests.
  • I searched previous Issues and didn't find any similar feature requests.

🔖 Feature description

First of all, really cool project! I found gpt-code-ui when looking for an alternative to Code Interpreter/Notable that I could run locally.

I have noticed that gpt-code-ui is not as resilient to mistakes in its own generated code as, for example, ChatGPT with the Noteable plugin. If gpt-code-ui makes a mistaken assumption about the name of a dataframe row in the code it generates, execution fails and it gives up, whereas in the Noteable scenario ChatGPT is more likely to proactively inspect the results and attempt a fix.

✔️ Solution

Instead of just outputting the errors associated with a failed execution, proactively inspect the error and attempt a fix/re-run.
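A minimal sketch of what such an error-feedback loop could look like. The helper names `generate_code` (calls the model) and `run_code` (executes a snippet and reports success) are hypothetical; gpt-code-ui's actual internals will differ:

```python
def run_with_retries(prompt, generate_code, run_code, max_retries=3):
    """Generate code, execute it, and on failure feed the error text
    back to the model so it can attempt a fix before re-running."""
    code = generate_code(prompt)
    for _ in range(max_retries):
        ok, output = run_code(code)
        if ok:
            return output
        # Ask the model to repair its own code using the error message.
        fix_prompt = (
            f"The following code failed:\n{code}\n"
            f"Error:\n{output}\n"
            "Return a corrected version of the code."
        )
        code = generate_code(fix_prompt)
    raise RuntimeError(f"Still failing after {max_retries} attempts: {output}")


def run_code(code):
    """Toy executor: run the snippet, report (success, result-or-error)."""
    try:
        ns = {}
        exec(code, ns)
        return True, ns.get("result")
    except Exception as e:
        return False, repr(e)


attempts = []

def toy_model(prompt):
    """Stub model: first call emits buggy code, the repair call fixes it."""
    attempts.append(prompt)
    return "result = undefined_name" if len(attempts) == 1 else "result = 42"


print(run_with_retries("compute", toy_model, run_code))  # -> 42
```

The key design point is that the traceback goes back into the prompt, mirroring what ChatGPT + Noteable effectively does when it inspects a failed cell.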

❓ Alternatives

No response

📝 Additional Context

No response

Acknowledgements

  • My issue title is concise, descriptive, and in title casing.
  • I have searched the existing issues to make sure this feature has not been requested yet.
  • I have provided enough information for the maintainers to understand and evaluate this request.
@alextrzyna alextrzyna added the enhancement New feature or request label Jun 23, 2023
@ricklamers ricklamers added the good first issue Good for newcomers label Jun 24, 2023
@ricklamers
Owner

Agree there is room for improvement in retrying/feeding error messages back into the model. Inviting the community to contribute PRs – it’s out of scope for what I wanted to build personally.

Maybe GPT-5 is good enough not to hallucinate variable names? 😄

@CiaranYoung

Although OpenAI has now made Code Interpreter available to all Plus users, this project is still very cool. One question: is it as powerful as the official plugin?

@darkacorn

What kind of work would need to be done to run this on, say, a local LLM with ooba (which has an OpenAI-compatible API)?
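In principle, targeting an OpenAI-compatible server mostly means changing the endpoint base URL. A minimal sketch that builds (but does not send) a chat-completions request; the local port, endpoint path, and model name below are assumptions for a typical text-generation-webui setup, not gpt-code-ui's actual wiring:

```python
import json
import urllib.request


def build_request(base_url, model, prompt):
    """Build a POST request for an OpenAI-compatible
    /v1/chat/completions endpoint (not sent here)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_request("http://localhost:5000", "local-model", "print('hi')")
print(req.full_url)  # http://localhost:5000/v1/chat/completions
```

With that, only the base URL differs between the hosted API and a local server; the payload shape stays the same.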

@dasmy
Contributor

dasmy commented Jul 26, 2023

Working on this one.
@ricklamers: I am preparing a pull request with a rough idea in https://github.com/dasmy/gpt-code-ui/tree/dev/conversation_history. Then we can discuss if and how my approach fits into the overall picture.

@bitsnaps

@dasmy I have two ideas in mind:

  1. Detect if the code fails to execute statically (through the OS exit code, thrown exceptions, etc.).
  2. Auto-detect and fix the issue, similar to the official CodeInterpreter implementation. The concept involves piping the output to ChatGPT with a simple prompt to identify the problem and attempt to resolve it. This approach may yield better results but could require more resources.
Ideally, we should ask the user to choose one of these options.
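Option 1 above can be sketched with a plain subprocess and its exit code. This is a generic approach for illustration, not gpt-code-ui's actual execution path:

```python
import subprocess
import sys


def execute_and_check(code: str):
    """Run a code snippet in a fresh interpreter and report
    (success, captured stderr) based on the OS exit code."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
    )
    return proc.returncode == 0, proc.stderr


ok, err = execute_and_check("1 / 0")
print(ok)  # False -- the snippet raised ZeroDivisionError
```

The captured stderr is exactly the text one would pipe back to the model in option 2.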
