add Msty provider #916

Open · wants to merge 8 commits into base: preview
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -119,7 +119,7 @@ After you've written your context provider, make sure to complete the following:

### Adding an LLM Provider

Continue has support for more than a dozen different LLM "providers", making it easy to use models running on OpenAI, Ollama, Together, LM Studio, and more. You can find all of the existing providers [here](https://github.com/continuedev/continue/tree/main/core/llm/llms), and if you see one missing, you can add it with the following steps:
Continue has support for more than a dozen different LLM "providers", making it easy to use models running on OpenAI, Ollama, Together, LM Studio, Msty, and more. You can find all of the existing providers [here](https://github.com/continuedev/continue/tree/main/core/llm/llms), and if you see one missing, you can add it with the following steps:

1. Create a new file in the `core/llm/llms` directory. The name of the file should be the name of the provider, and it should export a class that extends `BaseLLM`. This class should contain the following minimal implementation. We recommend viewing pre-existing providers for more details. The [LlamaCpp Provider](./core/llm/llms/LlamaCpp.ts) is a good simple example.
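
   For orientation, a rough sketch of such a provider class is shown below. It is not taken from the repository: the import paths, the `_streamComplete` method, and the endpoint are assumptions modeled on existing providers, so check `BaseLLM` and the linked `LlamaCpp` provider for the real interface.

   ```typescript
   import { BaseLLM } from "..";
   import { CompletionOptions, LLMOptions, ModelProvider } from "../../index";

   // Hypothetical skeleton: the import paths, the _streamComplete signature,
   // and the /completion endpoint are assumptions modeled on existing
   // providers -- check BaseLLM and LlamaCpp.ts for the real interface.
   class MyProvider extends BaseLLM {
     // A real provider also adds its name to the ModelProvider union
     // (as this PR does for "msty" in core/index.d.ts), which is what
     // makes this assignment type-check.
     static providerName: ModelProvider = "my-provider";
     static defaultOptions: Partial<LLMOptions> = {
       apiBase: "http://localhost:8080",
     };

     protected async *_streamComplete(
       prompt: string,
       options: CompletionOptions,
     ): AsyncGenerator<string> {
       const resp = await fetch(`${this.apiBase}/completion`, {
         method: "POST",
         headers: { "Content-Type": "application/json" },
         body: JSON.stringify({ prompt, ...options }),
       });
       // Yield the whole response as a single chunk; a real provider would
       // stream partial tokens as they arrive.
       const data = (await resp.json()) as { content: string };
       yield data.content;
     }
   }

   export default MyProvider;
   ```

   The `Msty.ts` file added later in this PR takes an even simpler route: because Msty exposes an Ollama-compatible server, it subclasses the existing `Ollama` provider and only overrides its defaults.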

3 changes: 2 additions & 1 deletion core/config/types.ts
@@ -449,7 +449,8 @@ declare global {
| "mistral"
| "bedrock"
| "deepinfra"
| "flowise";
| "flowise"
| "msty";

export type ModelName =
| "AUTODETECT"
3 changes: 2 additions & 1 deletion core/index.d.ts
@@ -478,7 +478,8 @@ type ModelProvider =
| "mistral"
| "bedrock"
| "deepinfra"
| "flowise";
| "flowise"
| "msty";

export type ModelName =
| "AUTODETECT"
2 changes: 2 additions & 0 deletions core/llm/autodetect.ts
@@ -37,6 +37,7 @@ const PROVIDER_HANDLES_TEMPLATING: ModelProvider[] = [
"openai",
"ollama",
"together",
"msty",
"anthropic",
];

@@ -45,6 +46,7 @@ const PROVIDER_SUPPORTS_IMAGES: ModelProvider[] = [
"ollama",
"google-palm",
"free-trial",
"msty",
"anthropic",
];

12 changes: 12 additions & 0 deletions core/llm/llms/Msty.ts
@@ -0,0 +1,12 @@
import Ollama from "./Ollama";
import { LLMOptions, ModelProvider } from "../../index";

// Msty's local server speaks the same API as Ollama (on port 10000 by
// default), so this provider reuses the Ollama implementation and only
// overrides the base URL and default model.
class Msty extends Ollama {
  static providerName: ModelProvider = "msty";
  static defaultOptions: Partial<LLMOptions> = {
    apiBase: "http://localhost:10000",
    model: "codellama-7b",
  };
}

export default Msty;
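
Since the class only overrides defaults, a user whose Msty server listens somewhere other than port 10000 should be able to point Continue at it by overriding `apiBase` in `config.json`. A hypothetical example (the port and model tag here are illustrative):

```json title="~/.continue/config.json"
{
  "models": [
    {
      "title": "Msty",
      "provider": "msty",
      "model": "deepseek-coder:6.7b",
      "apiBase": "http://localhost:11000"
    }
  ]
}
```
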
2 changes: 2 additions & 0 deletions core/llm/llms/index.ts
@@ -26,6 +26,7 @@ import OpenAIFreeTrial from "./OpenAIFreeTrial";
import Replicate from "./Replicate";
import TextGenWebUI from "./TextGenWebUI";
import Together from "./Together";
import Msty from "./Msty";

function convertToLetter(num: number): string {
let result = "";
@@ -94,6 +95,7 @@ const LLMs = [
DeepInfra,
OpenAIFreeTrial,
Flowise,
Msty,
];

export async function llmFromDescription(
14 changes: 14 additions & 0 deletions docs/docs/config-file-migration.md
@@ -197,6 +197,20 @@ After the "Full example" these examples will only show the relevant portion of t
}
```

### Msty with CodeLlama 13B

```json
{
"models": [
{
"title": "Msty",
"provider": "msty",
"model": "codellama-13b"
}
]
}
```

### OpenAI-compatible API

This is an example of serving a model using an OpenAI-compatible API on http://localhost:8000.
2 changes: 1 addition & 1 deletion docs/docs/model-setup/overview.md
@@ -10,7 +10,7 @@ If you are unsure what model or provider to use, here is our current rule of thu

- Use GPT-4 via OpenAI if you want the best possible model overall
- Use DeepSeek Coder 33B via the Together API if you want the best open-source model
- Use DeepSeek Coder 6.7B with Ollama if you want to run a model locally
- Use DeepSeek Coder 6.7B with Ollama or Msty if you want to run a model locally
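
For illustration, a minimal local setup along these lines might look like the following; the `msty` provider and the `deepseek-coder:6.7b` tag are taken from elsewhere in this PR, and the Ollama provider accepts the same model tag:

```json title="~/.continue/config.json"
{
  "models": [
    {
      "title": "DeepSeek Coder 6.7B",
      "provider": "msty",
      "model": "deepseek-coder:6.7b"
    }
  ]
}
```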

Learn more:

3 changes: 2 additions & 1 deletion docs/docs/model-setup/select-provider.md
@@ -1,7 +1,7 @@
---
title: Select a provider
description: Swap out different LLM providers
keywords: [openai, anthropic, PaLM, ollama, ggml]
keywords: [openai, anthropic, PaLM, ollama, ggml, msty]
---

# Select a model provider
@@ -23,6 +23,7 @@ You can run a model on your local computer using:
- [FastChat](../reference/Model%20Providers/openai.md) (OpenAI compatible server)
- [llama-cpp-python](../reference/Model%20Providers/openai.md) (OpenAI compatible server)
- [TensorRT-LLM](https://github.com/NVIDIA/trt-llm-as-openai-windows?tab=readme-ov-file#examples) (OpenAI compatible server)
- [Msty](../reference/Model%20Providers/msty.md)

Once you have it running, you will need to configure it in the GUI or manually add it to your `config.json`.

49 changes: 49 additions & 0 deletions docs/docs/reference/Model Providers/msty.md
@@ -0,0 +1,49 @@
# Msty

[Msty](https://msty.app/) is an application for Windows, Mac, and Linux that makes it easy to run both online and local open-source models, including Llama 2 and DeepSeek Coder. There is no need to fiddle with your terminal or run any commands: just download the app from the website, click a button, and you are up and running. Continue can then be configured to use the `Msty` LLM class:

```json title="~/.continue/config.json"
{
"models": [
{
"title": "Msty",
"provider": "msty",
"model": "deepseek-coder:6.7b",
"completionOptions": {}
}
]
}
```

## Completion Options

In addition to the model type, you can configure some of the parameters that Msty uses to run the model (a sample configuration follows this list).

- `temperature` (`options.temperature`): Controls the randomness of the generated text. Higher values produce more creative but potentially less coherent output; lower values produce more predictable, focused output.
- `top_p` (`options.topP`): Sets a cumulative-probability threshold (between 0 and 1) for sampling: only the smallest set of most-probable tokens whose cumulative probability reaches the threshold is considered when choosing the next token. Lower values make the output more focused.
- `top_k` (`options.topK`): Limits how many of the most probable tokens are considered when generating the next token. Higher values increase the variety of the output; lower values lead to more focused output.
- `num_predict` (`options.maxTokens`): The maximum number of tokens to generate for the given prompt.
- `num_thread` (`options.numThreads`): Controls how many CPU threads the model uses for parallel processing. Higher values may speed up generation but also increase memory usage. When running a model locally, set this to one or two fewer than the number of threads your CPU supports, to leave some headroom for your GUI.
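
These map onto the `completionOptions` block of a model entry in `config.json`. A sketch is below; the field names follow the `options.*` mapping above (in particular `numThreads` is assumed from that mapping), and the values are arbitrary examples rather than recommendations:

```json title="~/.continue/config.json"
{
  "models": [
    {
      "title": "Msty",
      "provider": "msty",
      "model": "deepseek-coder:6.7b",
      "completionOptions": {
        "temperature": 0.2,
        "topP": 0.9,
        "topK": 40,
        "maxTokens": 1024,
        "numThreads": 6
      }
    }
  ]
}
```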

## Authentication

If you need to send custom headers for authentication, you may use the `requestOptions.headers` property like this:

```json title="~/.continue/config.json"
{
"models": [
{
"title": "Msty",
"provider": "msty",
"model": "deepseek-coder:6.7b",
"requestOptions": {
"headers": {
"Authorization": "Bearer xxx"
}
}
}
]
}
```

[View the source](https://github.com/continuedev/continue/blob/main/core/llm/llms/Msty.ts)
22 changes: 20 additions & 2 deletions docs/docs/walkthroughs/codellama.md
@@ -1,12 +1,12 @@
---
title: Using Code Llama with Continue
description: How to use Code Llama with Continue
keywords: [code llama, meta, togetherai, ollama, replciate, fastchat]
keywords: [code llama, meta, togetherai, ollama, replicate, fastchat, msty]
---

# Using Code Llama with Continue

With Continue, you can use Code Llama as a drop-in replacement for GPT-4, either by running locally with Ollama or GGML or through Replicate.
With Continue, you can use Code Llama as a drop-in replacement for GPT-4, either by running it locally with Ollama, Msty, or GGML, or by running it through Replicate.

If you haven't already installed Continue, you can do that [here](https://marketplace.visualstudio.com/items?itemName=Continue.continue). For more general information on customizing Continue, read [our customization docs](../customization/overview.md).

@@ -83,3 +83,21 @@ If you haven't already installed Continue, you can do that [here](https://market
]
}
```

## Msty

1. Download Msty [here](https://msty.app/) for your platform (Windows, Mac, or Linux)
2. Open the app and click "Setup Local AI". Optionally, download any model you want from the Text Module page with a single click.
3. Change your Continue config file like this:

```json title="~/.continue/config.json"
{
"models": [
{
"title": "Code Llama",
"provider": "msty",
"model": "codellama:7b"
}
]
}
```
14 changes: 14 additions & 0 deletions docs/docs/walkthroughs/config-file-migration.md
@@ -196,6 +196,20 @@ After the "Full example" these examples will only show the relevant portion of t
}
```

### Msty with CodeLlama 13B

```json
{
"models": [
{
"title": "Msty",
"provider": "msty",
"model": "codellama-13b"
}
]
}
```

### OpenAI-compatible API

This is an example of serving a model using an OpenAI-compatible API on http://localhost:8000.
54 changes: 51 additions & 3 deletions docs/static/schemas/config.json
@@ -33,7 +33,7 @@
},
"mirostat": {
"title": "Mirostat",
"description": "Enable Mirostat sampling, controlling perplexity during text generation (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0). Only available for Ollama, LM Studio, and llama.cpp providers",
"description": "Enable Mirostat sampling, controlling perplexity during text generation (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0). Only available for Ollama, LM Studio, Msty, and llama.cpp providers",
"type": "number"
},
"stop": {
@@ -139,7 +139,8 @@
"llamafile",
"mistral",
"deepinfra",
"flowise"
"flowise",
"msty"
],
"markdownEnumDescriptions": [
"### OpenAI\nUse gpt-4, gpt-3.5-turbo, or any other OpenAI model. See [here](https://openai.com/product#made-for-developers) to obtain an API key.\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/openai)",
@@ -155,7 +156,8 @@
"### LMStudio\nLMStudio provides a professional and well-designed GUI for exploring, configuring, and serving LLMs. It is available on both Mac and Windows. To get started:\n1. Download from [lmstudio.ai](https://lmstudio.ai/) and open the application\n2. Search for and download the desired model from the home screen of LMStudio.\n3. In the left-bar, click the '<->' icon to open the Local Inference Server and press 'Start Server'.\n4. Once your model is loaded and the server has started, you can begin using Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/lmstudio)",
"### Llamafile\nTo get started with llamafiles, find and download a binary on their [GitHub repo](https://github.com/Mozilla-Ocho/llamafile#binary-instructions). Then run it with the following command:\n\n```shell\nchmod +x ./llamafile\n./llamafile\n```\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/llamafile)",
"### Mistral API\n\nTo get access to the Mistral API, obtain your API key from the [Mistral platform](https://docs.mistral.ai/)",
"### DeepInfra\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/deepinfra)"
"### DeepInfra\n\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/deepinfra)",
"### Msty\nMsty is the simplest way to get started with online or local LLMs on all desktop platforms - Windows, Mac, and Linux. No fussing around, one-click and you are up and running. To get started, follow these steps:\n1. Download from [Msty.app](https://msty.app/), open the application, and click 'Setup Local AI'.\n2. Go to the Local AI Module page and download a model of your choice.\n3. Once the model has finished downloading, you can start asking questions through Continue.\n> [Reference](https://continue.dev/docs/reference/Model%20Providers/Msty)"
],
"type": "string"
},
@@ -694,6 +696,52 @@
}
}
},
{
"if": {
"properties": {
"provider": {
"enum": ["msty"]
}
},
"required": ["provider"]
},
"then": {
"properties": {
"model": {
"anyOf": [
{
"enum": [
"mistral-7b",
"llama2-7b",
"llama2-13b",
"codellama-7b",
"codellama-13b",
"codellama-34b",
"codellama-70b",
"phi-2",
"phind-codellama-34b",
"wizardcoder-7b",
"wizardcoder-13b",
"wizardcoder-34b",
"zephyr-7b",
"codeup-13b",
"deepseek-7b",
"deepseek-33b",
"neural-chat-7b",
"deepseek-1b",
"stable-code-3b",
"starcoder-1b",
"starcoder-3b",
"AUTODETECT"
]
},
{ "type": "string" }
],
"markdownDescription": "Select a pre-defined option, or find the exact model tag for a model from the Local AI Module page in Msty."
}
}
}
},
{
"if": {
"properties": {
Binary file added gui/public/logos/msty.png
2 changes: 1 addition & 1 deletion gui/src/pages/models.tsx
@@ -67,7 +67,7 @@ function Models() {
To set up an LLM you will choose
<ul>
<li>
a provider (the service used to run the LLM, e.g. Ollama,
a provider (the service used to run the LLM, e.g. Ollama, Msty,
TogetherAI) and
</li>
<li>a model (the LLM being run, e.g. GPT-4, CodeLlama).</li>