[Support]: Error! Controller unreachable #408

Closed
1 of 2 tasks
rootella opened this issue May 11, 2024 · 11 comments · Fixed by #419
Labels
support support request for ZTNET

Comments

@rootella

📝 Inquiry

Going to Settings > Controller, I can't see any stats about the controller; after a 5 s timeout I get a red banner "Error! Controller unreachable". Orgs and networks are present and have been working without issue since 0.6.1.
What can I check to troubleshoot this issue? Thanks!

🔖 Version

0.6.3

🔧 Deployment Type

  • Docker
  • Standalone

💻 Operating System

Other Linux

📚 Any Other Information That May Be Helpful

Rocky 9.3, docker CE 26.0.2, ztnet port on localhost:3000 revproxy with nginx-proxymanager

rootella added the support label May 11, 2024
@sinamics
Owner

Check the status of the zerotier container:
docker logs zerotier

@rootella
Author

rootella commented May 13, 2024

Here are some logs; I can't see any relevant anomalies.

zerotier

WARNING: using manually-specified secondary and/or tertiary ports. This can cause NAT issues.
Starting Control Plane...
Starting V6 Control Plane...

ztnet

 ⨯ Error: 'sharp' is required to be installed in standalone mode for the image optimization to function correctly. Read more at: https://nextjs.org/docs/messages/sharp-missing-in-production
Socket is initializing
 ⨯ Error: 'sharp' is required to be installed in standalone mode for the image optimization to function correctly. Read more at: https://nextjs.org/docs/messages/sharp-missing-in-production
Creating .env file...
Applying migrations to the database...
Environment variables loaded from .env
Prisma schema loaded from prisma/schema.prisma
Datasource "db": PostgreSQL database "ztnet", schema "public" at "postgres:5432"

29 migrations found in prisma/migrations


No pending migrations to apply.
Migrations applied successfully!
Seeding the database...
Environment variables loaded from .env
Running seed command `ts-node --compiler-options {"module":"CommonJS"} prisma/seed.ts` ...
Seeding:: User Options complete!
Seeding:: Updating user ID complete!

🌱  The seed command has been executed.
Database seeded successfully!
Executing command
   ▲ Next.js 14.1.4
   - Local:        http://050c87e422b1:3000
   - Network:      http://172.18.0.4:3000

 ✓ Ready in 191ms

db

PostgreSQL Database directory appears to contain a database; Skipping initialization

2024-05-13 11:54:18.799 UTC [1] LOG:  starting PostgreSQL 15.2 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 12.2.1_git20220924-r4) 12.2.1 20220924, 64-bit
2024-05-13 11:54:18.799 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2024-05-13 11:54:18.799 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2024-05-13 11:54:18.802 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2024-05-13 11:54:18.806 UTC [24] LOG:  database system was shut down at 2024-05-13 11:54:18 UTC
2024-05-13 11:54:18.812 UTC [1] LOG:  database system is ready to accept connections
2024-05-13 11:54:30.882 UTC [29] LOG:  could not receive data from client: Connection reset by peer
2024-05-13 11:54:30.990 UTC [30] LOG:  could not receive data from client: Connection reset by peer

compose.yaml (real ip/dns redacted)

version: "3.1"
services:
  postgres:
    image: postgres:15.2-alpine
    container_name: postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: ztnet
    volumes:
      - postgres-data:/var/lib/postgresql/data
    networks:
      - app-network
  zerotier:
    #image: zyclonite/zerotier:1.12.2
    image: zyclonite/zerotier:1.14.0
    hostname: zerotier
    container_name: zerotier
    restart: unless-stopped
    volumes:
      - zerotier:/var/lib/zerotier-one
      - /root/docker/ztnet/zerotier/local.conf:/var/lib/zerotier-one/local.conf
    cap_add:
      - NET_ADMIN
      - SYS_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    networks:
      - app-network
    ports:
      - 9993:9993/udp
      - 9993:9993/tcp
      - 443:443/udp
      - 443:443/tcp
      - 53:53/udp
    environment:
      - ZT_OVERRIDE_LOCAL_CONF=false
      - ZT_ALLOW_MANAGEMENT_FROM=172.31.255.0/29
  ztnet:
    image: sinamics/ztnet:latest
    container_name: ztnet
    working_dir: /app
    volumes:
      - zerotier:/var/lib/zerotier-one
    restart: unless-stopped
    ports:
      #- 3000:3000
      - 127.0.0.1:3000:3000
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: ztnet
      #NEXTAUTH_URL: http://real.ip:3000
      NEXTAUTH_URL: https://real.domain:3000
      NEXTAUTH_SECRET: password
      NEXTAUTH_URL_INTERNAL: http://ztnet:3000
      NEXT_PUBLIC_SITE_NAME: APUNet
    networks:
      - app-network
      - nginxproxy_default
    depends_on:
      - postgres
      - zerotier
volumes:
  zerotier: null
  postgres-data: null
networks:
  app-network:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.31.255.0/29
  nginxproxy_default:
    external: true
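One detail worth double-checking in this compose file: ZT_ALLOW_MANAGEMENT_FROM=172.31.255.0/29 only admits eight addresses (.0 through .7), and ztnet is attached to two networks, so its requests to the controller may not source from the allowed range (the ztnet log above shows 172.18.0.4, for example). A minimal sketch for checking membership in a /29, in plain POSIX shell; `ip_in_29` is a hypothetical helper, not part of ztnet:

```shell
# Check whether an IPv4 address falls inside a /29 (eight addresses).
# $1 = address to test, $2 = network base, e.g. 172.31.255.0
ip_in_29() {
  last=${1##*.}
  [ "${1%.*}" = "${2%.*}" ] && \
    [ "$last" -ge "${2##*.}" ] && [ "$last" -lt $(( ${2##*.} + 8 )) ]
}

# Example: 172.18.0.4 (from the ztnet log above) is outside the range:
#   ip_in_29 172.18.0.4 172.31.255.0 && echo allowed || echo blocked
```

You can feed it the addresses reported by `docker inspect` for the ztnet container to see which network the controller traffic could be sourced from.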

@sinamics
Copy link
Owner

sinamics commented May 13, 2024

Do you have ZeroTier installed on the host? sudo systemctl status zerotier-one
If you do, you need to uninstall it or change its default port to something other than 9993.

@rootella
Author

No, it's a clean Docker host; 9993 is used only by the container, with no zerotier-one instance present and no socket used by other processes.
Changed to only 9994/udp, but nothing changed.

(lsof output below; palace-2 is the /etc/services alias for port 9993)

COMMAND      PID USER   FD   TYPE  DEVICE SIZE/OFF NODE NAME
docker-pr 516756 root    4u  IPv4 2386566      0t0  TCP *:palace-2 (LISTEN)
docker-pr 516763 root    4u  IPv6 2386569      0t0  TCP *:palace-2 (LISTEN)
docker-pr 516776 root    4u  IPv4 2386605      0t0  UDP *:palace-2 
docker-pr 516783 root    4u  IPv6 2387163      0t0  UDP *:palace-2 

@sinamics
Owner

I believe it is somehow related to the nginxproxy_default network you have added.
If you remove it, do you still see a fault in the controller section?

Also, try to ping the ztnet container from zerotier:
docker exec zerotier ping ztnet
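Beyond a plain ping, a more direct test is to query the controller's HTTP status endpoint from inside the ztnet container. A minimal sketch, assuming the standard ZeroTier service API (TCP 9993, X-ZT1-Auth header) and the container names from the compose file above; `zt_status_cmd` is a hypothetical helper that only builds the command string:

```shell
# Build the curl command for the ZeroTier /status endpoint.
# $1 = auth token read from authtoken.secret
zt_status_cmd() {
  printf 'curl -s -H "X-ZT1-Auth: %s" http://zerotier:9993/status' "$1"
}

# On the Docker host (assumes the shared volume paths from the compose file):
#   TOKEN=$(docker exec zerotier cat /var/lib/zerotier-one/authtoken.secret)
#   docker exec ztnet sh -c "$(zt_status_cmd "$TOKEN")"
```

If that request times out while the ping succeeds, the problem is at the controller API layer (port, token, or allowed management range) rather than basic container networking.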

@sinamics
Owner

Did you solve the issue?

@rootella
Author

Unfortunately not; these are very busy days. I will check by the end of the week and update with the results.
In any case, thank you for the support!

@rootella
Author

Hi @sinamics, I removed all extra networks and the reverse proxy (published :3000 directly) and updated to 0.6.4, but the issue persists.
I can ping the ztnet container from zerotier, and there are no relevant warnings in the logs.

@sinamics
Owner

Did you make any changes to the URL or Secret in the controller section?
If so, just submit empty fields to let ztnet use the defaults.
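For reference, "the defaults" here would plausibly be the controller reached by its compose hostname and the token from the shared volume; this is an assumption inferred from the compose file above, not from ztnet documentation. The hypothetical helpers below just spell those values out, with a readability check to run on the Docker host:

```shell
# Hypothetical defaults, inferred from the compose file (both containers
# mount the "zerotier" volume, so ztnet can read the controller's token).
default_zt_addr() { printf 'http://zerotier:9993'; }
default_token_path() { printf '/var/lib/zerotier-one/authtoken.secret'; }

# Confirm the token is readable from inside ztnet (run on the Docker host):
#   docker exec ztnet cat "$(default_token_path)"
```

If that `cat` fails, ztnet cannot authenticate to the controller even with empty URL/Secret fields.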

@rootella
Author

Nothing that I'm aware of; in any case, the URL and secret are now commented out, but the issue persists.
This is a test installation; I can give you access if that helps with troubleshooting.

@sinamics
Owner

Sure, send me the login details by mail or Discord.
