OpenDevin: Code Less, Make More


demo-video.webm

Mission 🎯

Welcome to OpenDevin, an open-source project aiming to replicate Devin, an autonomous AI software engineer capable of executing complex engineering tasks and collaborating actively with users on software development projects. Through the power of the open-source community, we aspire not only to replicate Devin but to enhance and innovate upon it.

Work in Progress

OpenDevin is still a work in progress, but you can already run the alpha version to see things working end-to-end.

Requirements
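The steps below assume you have the following installed (see the installation steps for where each is used):

  • Docker
  • Python >= 3.10
  • Node.js and npm (for the frontend)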

Installation

First, make sure Docker is running:

docker ps # this should exit successfully

Then pull our latest sandbox image:

docker pull ghcr.io/opendevin/sandbox

Then copy config.toml.template to config.toml. (See below for how to use different models.)
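From the repository root:

cp config.toml.template config.toml

Then add an API key and your workspace directory to config.toml: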

OPENAI_API_KEY="..."
WORKSPACE_DIR="..."

Next, start the backend. We manage Python packages and the virtual environment with pipenv, so make sure you have Python >= 3.10.

python -m pip install pipenv
pipenv install -v
pipenv shell
uvicorn opendevin.server.listen:app --port 3000

Then, in a second terminal, start the frontend:

cd frontend
npm install
npm start

Picking a Model

We use LiteLLM, so you can run OpenDevin with any supported foundation model, including models from OpenAI, Anthropic (Claude), and Google (Gemini). LiteLLM maintains a full list of providers.

To change the model, set the LLM_MODEL and LLM_API_KEY in config.toml.

For example, to run Claude:

LLM_API_KEY="your-api-key"
LLM_MODEL="claude-3-opus-20240229"

You can also set the base URL for local/custom models:

LLM_BASE_URL="https://localhost:3000"

And you can customize which embeddings are used for the vector database storage:

LLM_EMBEDDING_MODEL="llama2" # can be "llama2", "openai", "azureopenai", or "local"
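Putting these options together, a complete config.toml for running Claude might look like this (the values below are illustrative placeholders, not project defaults):

WORKSPACE_DIR="./workspace"
LLM_API_KEY="your-api-key"
LLM_MODEL="claude-3-opus-20240229"
LLM_EMBEDDING_MODEL="local" # see the embedding options above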

Running the app

You should now be able to run the backend:

uvicorn opendevin.server.listen:app --port 3000

Then in a second terminal:

cd frontend
npm install
npm run start -- --port 3001

You'll see OpenDevin running at localhost:3001.

Running on the Command Line

You can run OpenDevin from your command line:

PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"
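Here -d points the agent at a working directory and -t supplies the task description; -i appears to cap the number of agent iterations. The exact flag semantics may change while the project is in alpha.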

πŸ€” What is Devin?

Devin represents a cutting-edge autonomous agent designed to navigate the complexities of software engineering. It leverages a combination of tools such as a shell, code editor, and web browser, showcasing the untapped potential of LLMs in software development. Our goal is to explore and expand upon Devin's capabilities, identifying both its strengths and areas for improvement, to guide the progress of open code models.

🐚 Why OpenDevin?

The OpenDevin project is born out of a desire to replicate, enhance, and innovate beyond the original Devin model. By engaging the open-source community, we aim to tackle the challenges faced by Code LLMs in practical scenarios, producing works that significantly contribute to the community and pave the way for future advancements.

⭐️ Research Strategy

Achieving full replication of production-grade applications with LLMs is a complex endeavor. Our strategy involves:

  1. Core Technical Research: Focusing on foundational research to understand and improve the technical aspects of code generation and handling.
  2. Specialist Abilities: Enhancing the effectiveness of core components through data curation, training methods, and more.
  3. Task Planning: Developing capabilities for bug detection, codebase management, and optimization.
  4. Evaluation: Establishing comprehensive evaluation metrics to better understand and improve our models.

πŸ›  Technology Stack

  • Sandboxing Environment: Ensuring safe execution of code using technologies like Docker and Kubernetes (see the sketch after this list).
  • Frontend Interface: Developing user-friendly interfaces for monitoring progress and interacting with Devin, potentially leveraging frameworks like React or creating a VSCode plugin for a more integrated experience.
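As a rough illustration of the sandboxing idea, the sandbox image pulled during installation can be started manually with a command along these lines (the mount point, shell, and flags are assumptions for illustration, not the project's actual invocation):

docker run -it --rm \
  -v "$WORKSPACE_DIR":/workspace \
  ghcr.io/opendevin/sandbox \
  /bin/sh

The intent is that commands run by the agent only see the mounted workspace rather than the host filesystem.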

πŸš€ Next Steps

An MVP demo is our most urgent priority. Here are the most important things to do:

  • UI: a chat interface, a shell demonstrating commands, a browser, etc.
  • Architecture: an agent framework with a stable backend, which can read, write and run simple commands
  • Agent: capable of generating bash scripts, running tests, etc.
  • Evaluation: a minimal evaluation pipeline that is consistent with Devin's evaluation.

After the MVP is built, we will move towards research on different topics, including foundation models, specialist capabilities, evaluation, and agent studies.

How to Contribute

OpenDevin is a community-driven project, and we welcome contributions from everyone. Whether you're a developer, a researcher, or simply enthusiastic about advancing the field of software engineering with AI, there are many ways to get involved:

  • Code Contributions: Help us develop the core functionalities, frontend interface, or sandboxing solutions.
  • Research and Evaluation: Contribute to our understanding of LLMs in software engineering, participate in evaluating the models, or suggest improvements.
  • Feedback and Testing: Use the OpenDevin toolset, report bugs, suggest features, or provide feedback on usability.

For details, please check the contributing guide in this repository.

Join Us

We use Slack for discussion. If you would like to join the OpenDevin Slack organization, feel free to fill in the sign-up form, and we will reach out shortly if we feel you are a good fit for the current team!

Stay updated on OpenDevin's progress, share your ideas, and collaborate with fellow enthusiasts and experts. Together, we can make significant strides towards simplifying software engineering tasks and creating more efficient, powerful tools for developers everywhere.

🐚 Code less, make more with OpenDevin.
