[v0.0.15] LLMstack NoneType error in Chatbot #50

Open · Moep90 opened this issue Sep 26, 2023 · 0 comments

Describe the bug
With a chatbot app backed by LocalAI, asking the same question multiple times makes the chat fail with llmstack.processors.providers.promptly.text_chat.TextChatOutput() argument after ** must be a mapping, not NoneType instead of returning an answer.

To Reproduce
Steps to reproduce the behavior:

  1. Create a Website Chatbot from the template
  2. Add a datasource for the company website
  3. Check "use local ai if available"
  4. Change the model to gpt-3.5-turbo
  5. Enter preview mode
  6. Ask "What is the VAT of Company?"; the AI responds with:
     I'm sorry, but I don't have access to the current VAT rates of Company. However, you can check the official website or contact their customer support for more information.
  7. Ask "What is the VAT Number of Company?" multiple times
  8. See the error: llmstack.processors.providers.promptly.text_chat.TextChatOutput() argument after ** must be a mapping, not NoneType
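For context, this error message is exactly what Python raises when a call unpacks None with **, so the output actor presumably receives None where it expects the processor's result dict. A standalone sketch (stand-in class, not LLMStack code):

```python
# Stand-in for the real output model, just to reproduce the Python-level error.
class TextChatOutput:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

result = None  # assumption: the processor result arrives as None
TextChatOutput(**result)
# TypeError: TextChatOutput() argument after ** must be a mapping, not NoneType
```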

Expected behavior
I expect LLMstack to scrape the website for the VAT number and return it.

Version
v0.0.15

Environment
DISTRIB_DESCRIPTION="Ubuntu 22.04.3 LTS"
Docker version 24.0.5, build ced0996
Docker Compose version v2.20.3


Additional context

Local-AI: I use the .env option:
PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}, { "url": "github:go-skynet/model-gallery/bert-embeddings.yaml", "name": "text-embedding-ada-002"}]
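In case the shape of that value matters: LocalAI reads PRELOAD_MODELS as a JSON list of url/name pairs, so it should parse cleanly along these lines (plain Python, just to illustrate the expected structure):

```python
import json

# The PRELOAD_MODELS value from above: gallery model files mapped to the
# model names LLMStack requests (gpt-3.5-turbo, text-embedding-ada-002).
preload = '''[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"},
 {"url": "github:go-skynet/model-gallery/bert-embeddings.yaml", "name": "text-embedding-ada-002"}]'''

for model in json.loads(preload):
    print(model["name"], "->", model["url"])
```

Container logs from when the error occurs: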

age=0.000 s; distance=16 kB, estimate=16 kB
local-ai.example.com        | [127.0.0.1]:38564  200  -  GET      /readyz
local-ai.example.com        | [127.0.0.1]:44518  200  -  GET      /readyz
llmstack-0015-api-1         | INFO 2023-09-26 05:37:13,575 coordinator Spawned actor InputActor (urn:uuid:c14a0071-7373-4e57-900b-bc847e0f3977) for coordinator urn:uuid:c9288903-25c1-4998-a08f-62832159e5e9
llmstack-0015-api-1         | INFO 2023-09-26 05:37:13,575 coordinator Spawned actor OutputActor (urn:uuid:9227f669-8f57-4fca-8702-f038c7454268) for coordinator urn:uuid:c9288903-25c1-4998-a08f-62832159e5e9
llmstack-0015-api-1         | INFO 2023-09-26 05:37:13,576 coordinator Spawned actor TextChat (urn:uuid:cc6c9de3-878e-419b-9bd7-a4eafaf68268) for coordinator urn:uuid:c9288903-25c1-4998-a08f-62832159e5e9
llmstack-0015-api-1         | INFO 2023-09-26 05:37:13,576 coordinator Spawned actor BookKeepingActor (urn:uuid:b2f016da-e10b-483b-87ce-1ded941669d6) for coordinator urn:uuid:c9288903-25c1-4998-a08f-62832159e5e9
local-ai.example.com        | 5:37AM DBG Request received: 
local-ai.example.com        | 5:37AM DBG Configuration read: &{PredictionOptions:{Model:ggml-gpt4all-j.bin Language: N:0 TopP:0.7 TopK:80 Temperature:0.7 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-3.5-turbo F16:false Threads:10 Debug:true Roles:map[] Embeddings:false Backend:gpt4all-j TemplateConfig:{Chat:gpt4all-chat ChatMessage: Completion:gpt4all-completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0}}
local-ai.example.com        | 5:37AM DBG Parameters: &{PredictionOptions:{Model:ggml-gpt4all-j.bin Language: N:0 TopP:0.7 TopK:80 Temperature:0.7 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-3.5-turbo F16:false Threads:10 Debug:true Roles:map[] Embeddings:false Backend:gpt4all-j TemplateConfig:{Chat:gpt4all-chat ChatMessage: Completion:gpt4all-completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0}}
local-ai.example.com        | 5:37AM DBG Prompt (before templating): You are a helpful chat assistant
local-ai.example.com        | You are a chatbot that uses the provided context to answer the user's question.
local-ai.example.com        | If you cannot answer the question based on the provided context, say you don't know the answer.
local-ai.example.com        | No answer should go out of the provided input. If the provided input is empty, return saying you don't know the answer.
local-ai.example.com        | Keep the answers terse.
local-ai.example.com        | ----
local-ai.example.com        | context: 
local-ai.example.com        | What is the VAT of Company? 
local-ai.example.com        | I'm sorry, but I don't have access to the current VAT rates of Company. However, you can check the official website or contact their customer support for more information.
local-ai.example.com        | [172.23.0.8]:59670  200  -  POST     /chat/completions
local-ai.example.com        | What is the VAT Number of Company? 
local-ai.example.com        | What is the VAT Number of Company? 
local-ai.example.com        | 5:37AM DBG Stream request received
local-ai.example.com        | 5:37AM DBG Template found, input modified to: The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
local-ai.example.com        | ### Prompt:
local-ai.example.com        | You are a helpful chat assistant
local-ai.example.com        | You are a chatbot that uses the provided context to answer the user's question.
local-ai.example.com        | If you cannot answer the question based on the provided context, say you don't know the answer.
local-ai.example.com        | No answer should go out of the provided input. If the provided input is empty, return saying you don't know the answer.
local-ai.example.com        | Keep the answers terse.
local-ai.example.com        | ----
local-ai.example.com        | context: 
local-ai.example.com        | What is the VAT of Company? 
local-ai.example.com        | I'm sorry, but I don't have access to the current VAT rates of Company. However, you can check the official website or contact their customer support for more information.
local-ai.example.com        | What is the VAT Number of Company? 
local-ai.example.com        | What is the VAT Number of Company? 
local-ai.example.com        | ### Response:
local-ai.example.com        | 5:37AM DBG Prompt (after templating): The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
local-ai.example.com        | ### Prompt:
local-ai.example.com        | You are a helpful chat assistant
local-ai.example.com        | You are a chatbot that uses the provided context to answer the user's question.
local-ai.example.com        | If you cannot answer the question based on the provided context, say you don't know the answer.
local-ai.example.com        | No answer should go out of the provided input. If the provided input is empty, return saying you don't know the answer.
local-ai.example.com        | Keep the answers terse.
local-ai.example.com        | ----
local-ai.example.com        | context: 
local-ai.example.com        | What is the VAT of Company? 
local-ai.example.com        | I'm sorry, but I don't have access to the current VAT rates of Company. However, you can check the official website or contact their customer support for more information.
local-ai.example.com        | What is the VAT Number of Company? 
local-ai.example.com        | What is the VAT Number of Company? 
local-ai.example.com        | ### Response:
local-ai.example.com        | 5:37AM DBG Loading model gpt4all-j from ggml-gpt4all-j.bin
local-ai.example.com        | 5:37AM DBG Sending chunk: {"object":"chat.completion.chunk","model":"gpt-3.5-turbo","choices":[{"index":0,"delta":{"role":"assistant","content":""}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
local-ai.example.com        | 5:37AM DBG Model already loaded in memory: ggml-gpt4all-j.bin
local-ai.example.com        | [127.0.0.1]:57536  200  -  GET      /readyz
llmstack-0015-api-1         | INFO 2023-09-26 05:37:42,059 output Error in output actor: {'_inputs1': 'llmstack.processors.providers.promptly.text_chat.TextChatOutput() argument after ** must be a mapping, not NoneType'}
llmstack-0015-api-1         | INFO 2023-09-26 05:37:42,067 coordinator Coordinator urn:uuid:c9288903-25c1-4998-a08f-62832159e5e9 stopping
llmstack-0015-api-1         | INFO 2023-09-26 05:37:42,069 bookkeeping Stopping BookKeepingActor
llmstack-0015-api-1         | sys:1: ResourceWarning: unclosed <socket.socket fd=29, family=2, type=1, proto=6, laddr=('172.23.0.8', 59670), raddr=('172.23.0.2', 8080)>
llmstack-0015-api-1         | ResourceWarning: Enable tracemalloc to get the object allocation traceback
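From these logs the LocalAI stream appears to end without a final payload, after which the output actor unpacks None. A hypothetical guard (reusing the stand-in class from the sketch above, not actual LLMStack code) would at least fail with a clearer message:

```python
# Hypothetical sketch: validate the processor result before unpacking it
# into the output model, instead of letting **None blow up.
def build_text_chat_output(result):
    if not isinstance(result, dict):
        raise ValueError(f"processor returned {result!r} instead of a mapping")
    return TextChatOutput(**result)
```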

Docker Compose

version: "3.8"

services:
  api:
    image: ${REGISTRY:-ghcr.io/trypromptly/}llmstack-api:latest
    build:
      context: .
      cache_from:
        - llmstack-api:latest
    command: apiserver
    links:
      - postgres:postgres
    expose:
      - 9000
    env_file:
      - .env
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
  rqworker:
    image: ${REGISTRY:-ghcr.io/trypromptly/}llmstack-api:latest
    build:
      context: .
      cache_from:
        - llmstack-rqworker:latest
    command: rqworker
    depends_on:
      - redis
      - postgres
    links:
      - redis:redis
      - postgres:postgres
    env_file:
      - .env
    security_opt:
      - seccomp:unconfined
  nginx:
    image: ${REGISTRY:-ghcr.io/trypromptly/}llmstack-nginx:latest
    build:
      context: nginx
      dockerfile: Dockerfile
      cache_from:
        - llmstack-nginx:latest
      args:
        - REGISTRY=${REGISTRY:-ghcr.io/trypromptly/}
    ports:
      - ${LLMSTACK_PORT:-3000}:80
    env_file:
      - .env
    depends_on:
      - api
  playwright:
    image: ${REGISTRY:-ghcr.io/trypromptly/}llmstack-playwright:latest
    build:
      context: playwright
      dockerfile: Dockerfile
      cache_from:
        - llmstack-playwright:latest
    command: npx --yes playwright launch-server --browser chromium --config /config.json
    expose:
      - 30000
    ipc: host
    user: pwuser
    security_opt:
      - seccomp:playwright/seccomp_profile.json
  redis:
    image: redis:alpine
    command: redis-server
    restart: unless-stopped
    volumes:
      - ${REDIS_VOLUME}:/data
    env_file:
      - .env
  postgres:
    image: postgres:15.1-alpine
    command: "postgres -c fsync=off -c full_page_writes=off -c synchronous_commit=OFF"
    restart: unless-stopped
    volumes:
      - ${POSTGRES_VOLUME}:/var/lib/postgresql/data/pgdata
    environment:
      POSTGRES_HOST_AUTH_METHOD: "password"
      PGDATA: /var/lib/postgresql/data/pgdata
      POSTGRES_USER: ${DATABASE_USERNAME:-llmstack}
      POSTGRES_PASSWORD: ${DATABASE_PASSWORD:-llmstack}
    env_file:
      - .env
  weaviate:
    image: semitechnologies/weaviate:1.20.5
    volumes:
      - ${WEAVIATE_VOLUME}:/var/lib/weaviate
    environment:
      QUERY_DEFAULTS_LIMIT: 20
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: "true"
      PERSISTENCE_DATA_PATH: "/var/lib/weaviate"
      DEFAULT_VECTORIZER_MODULE: text2vec-openai
      ENABLE_MODULES: text2vec-openai
      CLUSTER_HOSTNAME: "weaviate-node"
    env_file:
      - .env
  local-api:
    image: quay.io/go-skynet/local-ai:v1.25.0
    container_name: local-ai.example.com
    ports:
      - 8080
    env_file:
      - .env
    environment:
      - DEBUG=true
      - MODELS_PATH=/models
      - THREADS=10
      - CONTEXT_SIZE=1024
    volumes:
      - ./models:/models:cached
    command: ["/usr/bin/local-ai"]
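For what it's worth, the LocalAI container itself reports healthy; a quick probe of the /readyz endpoint seen in the logs confirms it (the compose file publishes container port 8080 on a random host port, so substitute the actual one):

```python
import urllib.request

# Smoke test against LocalAI's health endpoint (path taken from the logs
# above). Replace localhost:8080 with the actual published host port.
with urllib.request.urlopen("http://localhost:8080/readyz", timeout=5) as resp:
    print(resp.status, resp.read().decode())
```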

.env file

# Change the secrets below before running LLMStack
SECRET_KEY='3!4^11rgx0-53)!n#18(1_^)&pj-3n^Afpc#mbm(+!fj4r$rp7ea!s'
CIPHER_KEY_SALT=salt
DATABASE_PASSWORD=llmstack

# Update the location of the persistent volumes to non-temporary locations
POSTGRES_VOLUME=./data/postgres_llmstack
REDIS_VOLUME=./data/redis_llmstack
WEAVIATE_VOLUME=./data/weaviate_llmstack

# LLMStack port
LLMSTACK_PORT=3000

# Platform default keys (optional)
DEFAULT_OPENAI_API_KEY=
DEFAULT_DREAMSTUDIO_API_KEY=
DEFAULT_AZURE_OPENAI_API_KEY=
DEFAULT_COHERE_API_KEY=
DEFAULT_FOREFRONTAI_API_KEY=
DEFAULT_ELEVENLABS_API_KEY=

OPENBLAS_NUM_THREADS=39
ALLOWED_HOSTS="127.0.0.1,localhost,x.x.x.x"
CSRF_TRUSTED_ORIGINS="http://x.x.x.x:3000"
LOG_LEVEL=INFO

PRELOAD_MODELS=[{"url": "github:go-skynet/model-gallery/gpt4all-j.yaml", "name": "gpt-3.5-turbo"}, { "url": "github:go-skynet/model-gallery/bert-embeddings.yaml", "name": "text-embedding-ada-002"}]