
Tabby exits with the exit code 1 without any errors #2162

Open
yurivict opened this issue May 17, 2024 · 12 comments
Comments

@yurivict

Describe the bug
It exits at this point:

📄 Version 0.11.1
🚀 Listening at 0.0.0.0:8080


  JWT secret is not set

  Tabby server will generate a one-time (non-persisted) JWT secret for the current process.
  Please set the TABBY_WEBSERVER_JWT_TOKEN_SECRET environment variable for production usage.
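
For production, one way to satisfy that warning (a sketch, assuming any sufficiently random string is accepted as the secret; openssl is just one way to generate it):

$ export TABBY_WEBSERVER_JWT_TOKEN_SECRET=$(openssl rand -hex 32)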

Information about your version
0.11.1

FreeBSD 14.0

@wsxiaoys
Member

Hi - since we're not distributing the FreeBSD binary, could you confirm you're building from scratch with the v0.11.1 tag?

@yurivict
Author

Yes, this is a from-scratch build of the v0.11.1 tag.

The build was performed in the FreeBSD ports framework.

@yurivict
Author

yurivict commented May 17, 2024

I also tried using the Tabby plugin from Vim, but I couldn't get any code suggestions.
The help page says that a suggestion is supposed to appear when you stop typing, but this didn't happen.

I used this command line:
$ tabby serve --model TabbyML/StarCoder-1B

Perhaps it attempts to use the GPU while no GPU is available? Might this be the cause?
Is there a compile-time or run-time switch to use only the CPU?

I couldn't easily find in the docs how to enable CPU-only inference.

@mblarsen

Is there a compile-time or run-time switch to use only CPU?

I couldn't easily find in the docs how to only enable CPU inference.

--device <DEVICE>            Device to run model inference [default: cpu] [possible values: cpu, metal]

Setting --device cpu should make it use only the CPU, I guess?
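
For example, combining that flag with the model used earlier in this thread (a usage sketch based on the help text above, not a verified invocation):

$ tabby serve --model TabbyML/StarCoder-1B --device cpu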

@yurivict
Author

OK, so I was using the default: CPU.

It is still unclear why it exits without any errors.

@metal3d

metal3d commented May 28, 2024

I've got the same on Fedora Linux with v0.11.1: it exits with "Error 132" and no verbose message.
I tried to recompile with CUDA 12.4.1 - the interface comes up, but then issue #2263 happens.

v0.11.1 exits without any delay, with both the CPU and GPU devices.

You should print more information to STDERR, IMHO.
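
As an aside, in POSIX shells an exit status of 128+N means the process was killed by signal N, so "Error 132" would be SIGILL (illegal instruction) - often a sign of a binary built with CPU instructions the host doesn't support. A general shell sketch (not Tabby-specific) to check:

$ tabby serve --model TabbyML/StarCoder-1B; echo "exit status: $?"
$ kill -l 132   # maps the status back to a signal name; prints ILL in bash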

@arnm

arnm commented Jun 1, 2024

Same issue with the latest unstable NixOS tabby v0.11.1.

Changing the model or toggling ROCm doesn't change the "exit 1 without error" behavior.

services.tabby = {
  enable = true;
  acceleration = "cpu";
  model = "TabbyML/DeepseekCoder-1.3B";
};

@yurivict
Author

yurivict commented Jun 1, 2024

It seems to be broken in general, not specific to any OS.

@wsxiaoys Any chance to get it fixed?

@wsxiaoys
Member

wsxiaoys commented Jun 1, 2024

Hi, please share more information (e.g. set RUST_LOG=debug, the Docker image tag, or a release page link) to help with troubleshooting, thanks.
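
For example, a debug run might look like this (a sketch reusing the command line from earlier in the thread; RUST_LOG=debug enables debug-level logging in Rust programs that follow the standard env-filter convention):

$ RUST_LOG=debug tabby serve --model TabbyML/StarCoder-1B --device cpu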

@arnm

arnm commented Jun 1, 2024

I was able to roll back to tabby v0.11.0, and CPU with the default model is working; trying ROCm now. I also tried v0.8.3 and v0.10.0 and couldn't get them to work on the first try - it might have been an issue with the files v0.11.1 had already created, I'm not quite sure.

  services.tabby =
    let
      tabby_0_11_0 = (import
        (builtins.fetchGit {
          name = "tabby_0_11_0";
          url = "https://github.com/NixOS/nixpkgs/";
          ref = "refs/heads/nixpkgs-unstable";
          # rev = "e89cf1c932006531f454de7d652163a9a5c86668"; #0.8.3
          # rev = "a064513ad395d680ec3d5f56abc4ed30c23150ee"; # 0.10.0
          rev = "3e1464aff56e5c26996e974a0a5702357a01a127"; # 0.11.0
        })
        { system = "x86_64-linux"; }).pkgs.tabby;
    in
    {
      enable = true;
      package = tabby_0_11_0;
    };

@lirc571

lirc571 commented Jun 4, 2024

Hi, please share more information (e.g. set RUST_LOG=debug, the Docker image tag, or a release page link) to help with troubleshooting, thanks.

Hi @wsxiaoys - for me, tabby stops immediately if I run it with --chat-model TabbyML/Deepseek-V2-Lite-Chat:

tabby-1  | 2024-06-04T09:41:03.358298Z ERROR llama_cpp_bindings: crates/llama-cpp-bindings/src/lib.rs:61: Unable to load model: /data/models/TabbyML/Deepseek-V2-Lite-Chat/ggml/model.gguf

It works if I remove the --chat-model option.
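
In other words (a sketch; the completion model name here is hypothetical - only the presence or absence of --chat-model matters):

$ tabby serve --model TabbyML/StarCoder-1B                                              # works
$ tabby serve --model TabbyML/StarCoder-1B --chat-model TabbyML/Deepseek-V2-Lite-Chat   # exits with "Unable to load model"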

@wsxiaoys
Member

wsxiaoys commented Jun 4, 2024

Hi @lirc571 - Deepseek-V2-Lite support was added in 0.12 (currently in RC); it's not supported in 0.11.
