Bropilot is a GitHub Copilot alternative that takes advantage of local LLMs through Ollama's API.
Current working models:
- codellama (7b & 13b)
- codegemma (2b & 7b)
- starcoder2 (3b & 7b)
You need to have Ollama installed and running for bro to work (official downloads are available at https://ollama.com).
For Linux:
```sh
curl -fsSL https://ollama.com/install.sh | sh
# and check that the service is running
systemctl status ollama
```
Here is the default configuration.
- `model` is a string (e.g. "codellama:7b-code" or "codegemma:2b-code")
- `prompt` is an object defining the prefix, suffix and middle keywords for FIM (fill-in-the-middle)
- `debounce` is a number in milliseconds
- `auto_pull` is a boolean that allows bro to pull the model if it is not listed in the Ollama API
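When `auto_pull` is enabled, the plugin needs to know whether the model is already available locally; Ollama exposes this through its `GET /api/tags` endpoint, which returns the list of pulled models. A Python sketch of that check over a sample response (illustrative only — the plugin itself is written in Lua, and the function name here is made up):

```python
import json

def model_is_listed(tags_json: str, model: str) -> bool:
    """Return True if `model` appears in an Ollama GET /api/tags response."""
    data = json.loads(tags_json)
    return any(m.get("name") == model for m in data.get("models", []))

# Sample shape of a GET http://localhost:11434/api/tags response
sample = json.dumps({"models": [{"name": "codegemma:2b-code"}]})
print(model_is_listed(sample, "codegemma:2b-code"))  # True
print(model_is_listed(sample, "starcoder2:3b"))      # False
```

If the model is missing and `auto_pull` is true, bro pulls it; otherwise the missing model is reported.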
```lua
require('bropilot').setup({
  model = "codegemma:2b-code",
  prompt = { -- FIM prompt for codegemma
    prefix = "<|fim_prefix|>",
    suffix = "<|fim_suffix|>",
    middle = "<|fim_middle|>",
  },
  debounce = 1000,
  auto_pull = true,
})
```
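The three `prompt` keywords implement fill-in-the-middle: the code before the cursor follows `prefix`, the code after the cursor follows `suffix`, and the model generates the completion after `middle`. As a rough Python sketch of how such a prompt is assembled (illustrative, not the plugin's actual code):

```python
def build_fim_prompt(before: str, after: str,
                     prefix: str = "<|fim_prefix|>",
                     suffix: str = "<|fim_suffix|>",
                     middle: str = "<|fim_middle|>") -> str:
    """Assemble a fill-in-the-middle prompt from the code around the cursor."""
    # prefix + text-before-cursor + suffix + text-after-cursor + middle;
    # the model then completes the "middle" part.
    return f"{prefix}{before}{suffix}{after}{middle}"

prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
print(prompt.startswith("<|fim_prefix|>def add"))  # True
```

Each model family uses its own marker tokens, which is why the defaults change between codegemma and starcoder2 in the examples below.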
Install and configure using lazy.nvim
```lua
{
  'meeehdi-dev/bropilot.nvim',
  event = "VeryLazy", -- preload model on start
  dependencies = {
    "nvim-lua/plenary.nvim",
    "j-hui/fidget.nvim", -- optional
  },
  config = true, -- setup with default options
  keys = {
    {
      "<Tab>",
      function()
        require("bropilot").accept_block()
      end,
      mode = "i",
    },
  },
}
-- or
{
  'meeehdi-dev/bropilot.nvim',
  event = "InsertEnter", -- preload model on insert start
  dependencies = {
    "nvim-lua/plenary.nvim",
    -- "j-hui/fidget.nvim", -- optional
  },
  opts = {
    model = "starcoder2:3b",
    prompt = { -- FIM prompt for starcoder2
      prefix = "<fim_prefix>",
      suffix = "<fim_suffix>",
      middle = "<fim_middle>",
    },
    debounce = 500,
    auto_pull = false,
  },
  config = function(_, opts)
    require("bropilot").setup(opts)
  end,
  keys = {
    -- Soon
    {
      "<C-Right>",
      function()
        require("bropilot").accept_word()
      end,
      mode = "i",
    },
    {
      "<M-Right>",
      function()
        require("bropilot").accept_line()
      end,
      mode = "i",
    },
    {
      "<Tab>",
      function()
        require("bropilot").accept_block()
      end,
      mode = "i",
    },
  },
}
```
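The `debounce` value is the quiet period after a keystroke before a completion request is sent; every new keystroke resets the timer, so only the last edit triggers a request. The idea, sketched in Python (the plugin itself uses Neovim's async timers, not threads):

```python
import threading
import time

class Debouncer:
    """Run `fn` only after `delay_ms` of inactivity; new calls reset the timer."""
    def __init__(self, fn, delay_ms: int):
        self.fn = fn
        self.delay = delay_ms / 1000.0
        self.timer = None

    def call(self, *args):
        if self.timer is not None:
            self.timer.cancel()  # a new keystroke cancels the pending request
        self.timer = threading.Timer(self.delay, self.fn, args)
        self.timer.start()

hits = []
d = Debouncer(hits.append, delay_ms=50)
for ch in "abc":  # three rapid "keystrokes"
    d.call(ch)
time.sleep(0.2)
print(hits)  # ['c'] — only the last call fires
```

A higher debounce means fewer requests to Ollama but a laggier feel; 500–1000 ms is the range used in the examples above.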
Roadmap:
- show suggestion as virtual text
- accept line
- accept block
- progress while suggesting
- cleanup current code
- skip suggestion if text after cursor (except if just moving?)
- fix: accepting line resets suggestion
- fix: remove additional newlines at end of suggestion
- fix: sometimes the suggestion is not cancelled even though the inserted text doesn't match
- improve init
- rewrite async handling and use callbacks to avoid timing problems
- rejoin model & tag
- fix: partial accept + newline => doesn't clear suggestion
- fix: sometimes the pid is already killed
- fix: notify non existent model
- some lua callbacks in async process, need to use scheduler (async util function)
- wait for model to be ready before trying to suggest (does ollama api provide that info? -> using preload)
- check that suggestion is created after model finishes preload
- notify on ollama api errors
- keep subsequent suggestions in memory
- accepting block resets suggestions
- refactor everything (wip)
- fix: keep same suggestion when partially accepting
- custom init options
  - model
  - tag
  - prompt (assert if unknown model)
  - debounce time
  - pull model if missing
- show progress
- keep all current suggestions in memory (option to keep only n blocks)
- ollama params
- check if model is listed in ollama api
- pull model if not listed (behind option)
- replace unix sleep with async job
- accept word
- commands (might need an additional model -instruct?-)
  - describe
  - refactor
  - comment
  - chat
  - commit msg (using git diff --staged + conventional commit rules)
- add more context to prompt
  - opened splits
  - opened tabs
  - lsp info (arg types, return types)
  - imported files outlines (with lsp info also?)
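For the planned commit-msg command, the roadmap combines `git diff --staged` output with conventional commit rules. A hypothetical sketch of such a prompt builder (the function name and prompt wording are illustrative, not the plugin's):

```python
def build_commit_prompt(staged_diff: str) -> str:
    """Wrap a staged diff in instructions asking for a Conventional Commits message."""
    # Hypothetical wording; the plugin's actual prompt is not specified yet.
    rules = (
        "Write a commit message for the following staged diff using the "
        "Conventional Commits format: <type>(<scope>): <description>."
    )
    return rules + "\n\n" + staged_diff

diff = "diff --git a/init.lua b/init.lua\n+require('bropilot').setup({})"
prompt = build_commit_prompt(diff)
print("Conventional Commits" in prompt)  # True
```

Since this is an instruction-following task rather than code completion, it would likely need an instruct-tuned model, as the roadmap itself notes.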