Migrate to using Env Vars everywhere #1371

Open · 5 tasks

lenaxia (Contributor) opened this issue May 12, 2024 · 1 comment
lenaxia commented May 12, 2024

Describe the bug
When mounting a config file into a Kubernetes pod at /root/.memgpt/config, the program exits because it cannot write to the config file.

ubuntu@terraform:~/workspace/home-ops-prod/cluster/apps/home/localai/memgpt(⎈|prod:home)$ kcl memgpt-776b89b584-gh5km -c main
Starting MEMGPT server...
server :: loading configuration from '/root/.memgpt/config'
Traceback (most recent call last):
  File "/app/.venv/bin/uvicorn", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/app/.venv/lib/python3.11/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.11/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
 ...
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/memgpt/server/rest_api/server.py", line 46, in <module>
    server: SyncServer = SyncServer(default_interface=interface)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/memgpt/server/server.py", line 222, in __init__
    self.config.save()
  File "/memgpt/config.py", line 272, in save
    with open(self.config_path, "w", encoding="utf-8") as f:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 30] Read-only file system: '/root/.memgpt/config'
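The crash happens because the server unconditionally writes the config back at startup, which fails on a read-only mount (e.g. a Kubernetes ConfigMap volume). As a sketch of a more tolerant behavior, a hypothetical `save_config` helper (not MemGPT's actual `MemGPTConfig.save`) could catch `EROFS` and continue instead of exiting:

```python
import errno


def save_config(text: str, path: str) -> None:
    """Write the config file, but tolerate a read-only mount.

    Hypothetical helper for illustration; the real save() lives in
    memgpt/config.py and writes an INI file rather than raw text.
    """
    try:
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)
    except OSError as e:
        if e.errno == errno.EROFS:
            # Config is mounted read-only (e.g. a ConfigMap volume);
            # skip the write instead of crashing at startup.
            print(f"warning: config at {path!r} is read-only, not saving")
        else:
            raise
```

This only papers over the symptom; the issue title proposes the real fix, which is not needing to write the config at all.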

Please describe your setup

Additional context
Comment from Swooders in Discord:

swooders. — Today at 11:00 AM
We did something a bit hacky to get the environment variable configuration (for postgres) to work for the docker container with minimal changes, where we override the config to point to the provided envs for postgres (basically so we could keep using the config file for DB configuration) -- but I think we need to change this and migrate to using envs everywhere else in the code instead of reading the config file
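The "env vars everywhere" direction described above can be sketched as an env-first lookup that falls back to the config file, so a mounted config never needs to be rewritten. The variable name `MEMGPT_PG_URI` is an assumption for illustration; the project may spell it differently:

```python
import os
from configparser import ConfigParser


def get_pg_uri(config: ConfigParser) -> str:
    """Prefer an environment variable over the config file.

    MEMGPT_PG_URI is a hypothetical variable name used for
    illustration, not necessarily what MemGPT actually reads.
    """
    env_uri = os.environ.get("MEMGPT_PG_URI")
    if env_uri:
        return env_uri
    # Fall back to the [archival_storage] section of the INI config.
    return config.get("archival_storage", "uri", fallback="")
```

With this pattern the config file becomes a read-only default, which is exactly what a ConfigMap mount provides.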

MemGPT Config
Please attach your ~/.memgpt/config file or copy-paste it below.

[defaults]
preset = memgpt_chat
persona = sam_pov
human = basic

[model]
model = neuralhermes-2.5-7b
model_endpoint = http://localai.home.svc.cluster.local:8080/v1
model_endpoint_type = openai
model_wrapper = null
context_window = 8192

[embedding]
embedding_endpoint_type = openai
embedding_endpoint = http://localai.home.svc.cluster.local:8080/v1
embedding_model = bert-embeddings
embedding_dim = 1536
embedding_chunk_size = 300

[archival_storage]
type = postgres
path = /root/.memgpt/chroma
uri = postgresql+pg8000://memgpt:adsfjkh*&^wer13@localhost:5432/memgpt

[recall_storage]
type = postgres
path = /root/.memgpt
uri = postgresql+pg8000://memgpt:adsfjkh*&^wer13@localhost:5432/memgpt

[metadata_storage]
type = postgres
path = /root/.memgpt
uri = postgresql+pg8000://memgpt:adsfjkh*&^wer13@localhost:5432/memgpt


[client]
anon_clientid = 00000000-0000-0000-0000-000000000000

If you're not using OpenAI, please provide additional information on your local LLM setup:

Local LLM details

If you are trying to run MemGPT with local LLMs, please provide the following information:

  • The exact model you're trying to use (e.g. dolphin-2.1-mistral-7b.Q6_K.gguf)
  • The local LLM backend you are using (web UI? LM Studio?)
  • Your hardware for the local LLM backend (local computer? operating system? remote RunPod?)
sarahwooders (Collaborator) commented:
I think we actually just need to remove this config overriding, since the DB connectors already check the env variables: https://github.com/cpacker/MemGPT/blob/main/memgpt/server/server.py#L216
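If the connectors derive their connection string purely from the environment, the config-override step in server.py becomes unnecessary. A minimal sketch of such a connector, where all `MEMGPT_PG_*` variable names are assumptions for illustration:

```python
import os


def build_db_uri() -> str:
    """Sketch of a DB connector deriving its URI from env vars only,
    with no config-file override step.

    All MEMGPT_PG_* names and defaults here are illustrative
    assumptions, not MemGPT's documented interface.
    """
    # A full URI wins outright if provided.
    uri = os.environ.get("MEMGPT_PG_URI")
    if uri:
        return uri
    # Otherwise assemble one from individual parts with defaults.
    user = os.environ.get("MEMGPT_PG_USER", "memgpt")
    password = os.environ.get("MEMGPT_PG_PASSWORD", "")
    host = os.environ.get("MEMGPT_PG_HOST", "localhost")
    port = os.environ.get("MEMGPT_PG_PORT", "5432")
    db = os.environ.get("MEMGPT_PG_DB", "memgpt")
    return f"postgresql+pg8000://{user}:{password}@{host}:{port}/{db}"
```

This also keeps credentials like the Postgres password out of the on-disk config file, which the config dump above currently embeds in plain text.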
