Example Vagrantfile doesn't work out of the box: Error querying node status: Get http://127.0.0.1:4646/v1/nodes: dial tcp 127.0.0.1:4646: connect: connection refused #113

Open
dnk8n opened this issue Nov 19, 2020 · 3 comments

dnk8n commented Nov 19, 2020

[dnk8n@localhost brianshumate.nomad]$ cd examples/
[dnk8n@localhost examples]$ ls
bin  README_VAGRANT.md  site.yml  Vagrantfile  vagrant_hosts
[dnk8n@localhost examples]$ pwd
/home/dnk8n/.ansible/roles/brianshumate.nomad/examples
[dnk8n@localhost examples]$ ./bin/preinstall 
✅  nomad VM node information present in /etc/hosts
✅  Vagrant Hosts plugin is installed
[dnk8n@localhost examples]$ vagrant up
Bringing machine 'nomad1' up with 'virtualbox' provider...
Bringing machine 'nomad2' up with 'virtualbox' provider...
Bringing machine 'nomad3' up with 'virtualbox' provider...
==> nomad1: Importing base box 'debian/jessie64'...
==> nomad1: Matching MAC address for NAT networking...
==> nomad1: Checking if box 'debian/jessie64' version '8.11.1' is up to date...
==> nomad1: Setting the name of the VM: nomad-node1
==> nomad1: Clearing any previously set network interfaces...
==> nomad1: Preparing network interfaces based on configuration...
    nomad1: Adapter 1: nat
    nomad1: Adapter 2: hostonly
==> nomad1: Forwarding ports...
    nomad1: 22 (guest) => 2222 (host) (adapter 1)
==> nomad1: Running 'pre-boot' VM customizations...
==> nomad1: Booting VM...
==> nomad1: Waiting for machine to boot. This may take a few minutes...
    nomad1: SSH address: 127.0.0.1:2222
    nomad1: SSH username: vagrant
    nomad1: SSH auth method: private key
    nomad1: 
    nomad1: Vagrant insecure key detected. Vagrant will automatically replace
    nomad1: this with a newly generated keypair for better security.
    nomad1: 
    nomad1: Inserting generated public key within guest...
    nomad1: Removing insecure key from the guest if it's present...
    nomad1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> nomad1: Machine booted and ready!
==> nomad1: Checking for guest additions in VM...
    nomad1: No guest additions were detected on the base box for this VM! Guest
    nomad1: additions are required for forwarded ports, shared folders, host only
    nomad1: networking, and more. If SSH fails on this machine, please install
    nomad1: the guest additions and repackage the box to continue.
    nomad1: 
    nomad1: This is not an error message; everything may continue to work properly,
    nomad1: in which case you may ignore this message.
==> nomad1: Setting hostname...
==> nomad1: Configuring and enabling network interfaces...
==> nomad1: Installing rsync to the VM...
==> nomad1: Rsyncing folder: /home/dnk8n/.ansible/roles/brianshumate.nomad/examples/ => /vagrant
==> nomad1: Running provisioner: hosts...
==> nomad2: Importing base box 'debian/jessie64'...
==> nomad2: Matching MAC address for NAT networking...
==> nomad2: Checking if box 'debian/jessie64' version '8.11.1' is up to date...
==> nomad2: Setting the name of the VM: nomad-node2
==> nomad2: Fixed port collision for 22 => 2222. Now on port 2200.
==> nomad2: Clearing any previously set network interfaces...
==> nomad2: Preparing network interfaces based on configuration...
    nomad2: Adapter 1: nat
    nomad2: Adapter 2: hostonly
==> nomad2: Forwarding ports...
    nomad2: 22 (guest) => 2200 (host) (adapter 1)
==> nomad2: Running 'pre-boot' VM customizations...
==> nomad2: Booting VM...
==> nomad2: Waiting for machine to boot. This may take a few minutes...
    nomad2: SSH address: 127.0.0.1:2200
    nomad2: SSH username: vagrant
    nomad2: SSH auth method: private key
    nomad2: 
    nomad2: Vagrant insecure key detected. Vagrant will automatically replace
    nomad2: this with a newly generated keypair for better security.
    nomad2: 
    nomad2: Inserting generated public key within guest...
    nomad2: Removing insecure key from the guest if it's present...
    nomad2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> nomad2: Machine booted and ready!
==> nomad2: Checking for guest additions in VM...
    nomad2: No guest additions were detected on the base box for this VM! Guest
    nomad2: additions are required for forwarded ports, shared folders, host only
    nomad2: networking, and more. If SSH fails on this machine, please install
    nomad2: the guest additions and repackage the box to continue.
    nomad2: 
    nomad2: This is not an error message; everything may continue to work properly,
    nomad2: in which case you may ignore this message.
==> nomad2: Setting hostname...
==> nomad2: Configuring and enabling network interfaces...
==> nomad2: Installing rsync to the VM...
==> nomad2: Rsyncing folder: /home/dnk8n/.ansible/roles/brianshumate.nomad/examples/ => /vagrant
==> nomad2: Running provisioner: hosts...
==> nomad3: Importing base box 'debian/jessie64'...
==> nomad3: Matching MAC address for NAT networking...
==> nomad3: Checking if box 'debian/jessie64' version '8.11.1' is up to date...
==> nomad3: Setting the name of the VM: nomad-node3
==> nomad3: Fixed port collision for 22 => 2222. Now on port 2201.
==> nomad3: Clearing any previously set network interfaces...
==> nomad3: Preparing network interfaces based on configuration...
    nomad3: Adapter 1: nat
    nomad3: Adapter 2: hostonly
==> nomad3: Forwarding ports...
    nomad3: 22 (guest) => 2201 (host) (adapter 1)
==> nomad3: Running 'pre-boot' VM customizations...
==> nomad3: Booting VM...
==> nomad3: Waiting for machine to boot. This may take a few minutes...
    nomad3: SSH address: 127.0.0.1:2201
    nomad3: SSH username: vagrant
    nomad3: SSH auth method: private key
    nomad3: 
    nomad3: Vagrant insecure key detected. Vagrant will automatically replace
    nomad3: this with a newly generated keypair for better security.
    nomad3: 
    nomad3: Inserting generated public key within guest...
    nomad3: Removing insecure key from the guest if it's present...
    nomad3: Key inserted! Disconnecting and reconnecting using new SSH key...
==> nomad3: Machine booted and ready!
==> nomad3: Checking for guest additions in VM...
    nomad3: No guest additions were detected on the base box for this VM! Guest
    nomad3: additions are required for forwarded ports, shared folders, host only
    nomad3: networking, and more. If SSH fails on this machine, please install
    nomad3: the guest additions and repackage the box to continue.
    nomad3: 
    nomad3: This is not an error message; everything may continue to work properly,
    nomad3: in which case you may ignore this message.
==> nomad3: Setting hostname...
==> nomad3: Configuring and enabling network interfaces...
==> nomad3: Installing rsync to the VM...
==> nomad3: Rsyncing folder: /home/dnk8n/.ansible/roles/brianshumate.nomad/examples/ => /vagrant
==> nomad3: Running provisioner: hosts...
==> nomad3: Running provisioner: ansible...
    nomad3: Running ansible-playbook...

PLAY [Installing Nomad] ********************************************************

TASK [Gathering Facts] *********************************************************
[WARNING]: Platform linux on host nomad2.local is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change this. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html
for more information.
ok: [nomad2.local]
[WARNING]: Platform linux on host nomad3.local is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change this. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html
for more information.
ok: [nomad3.local]
[WARNING]: Platform linux on host nomad1.local is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change this. See
https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html
for more information.
ok: [nomad1.local]

TASK [brianshumate.nomad : Check distribution compatibility] *******************
skipping: [nomad2.local]
skipping: [nomad1.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : Fail if not a new release of Red Hat / CentOS] ******
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : Fail if not a new release of Debian] ****************
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : Fail if not a new release of Ubuntu] ****************
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : Check nomad_group_name is included in groups] *******
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : Include OS variables] *******************************
ok: [nomad1.local]
ok: [nomad2.local]
ok: [nomad3.local]

TASK [brianshumate.nomad : Gather facts from other servers] ********************

TASK [brianshumate.nomad : Expose bind_address, advertise_address and node_role as facts] ***
ok: [nomad1.local]
ok: [nomad2.local]
ok: [nomad3.local]

TASK [brianshumate.nomad : Add Nomad group] ************************************
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : Add Nomad user] *************************************
changed: [nomad2.local]
changed: [nomad3.local]
changed: [nomad1.local]

TASK [brianshumate.nomad : Install dmsetup for Ubuntu 16.04] *******************
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : Run dmsetup for Ubuntu 16.04] ***********************
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : Add Nomad user to docker group] *********************
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : OS packages] ****************************************
changed: [nomad1.local] => (item=cgroup-bin)
changed: [nomad2.local] => (item=cgroup-bin)
changed: [nomad3.local] => (item=cgroup-bin)
changed: [nomad1.local] => (item=curl)
changed: [nomad2.local] => (item=curl)
changed: [nomad3.local] => (item=curl)
changed: [nomad3.local] => (item=git)
changed: [nomad2.local] => (item=git)
ok: [nomad3.local] => (item=libcgroup1)
ok: [nomad2.local] => (item=libcgroup1)
changed: [nomad1.local] => (item=git)
ok: [nomad1.local] => (item=libcgroup1)
changed: [nomad3.local] => (item=unzip)
changed: [nomad2.local] => (item=unzip)
changed: [nomad1.local] => (item=unzip)

TASK [brianshumate.nomad : Check Nomad package checksum file] ******************
ok: [nomad1.local]

TASK [brianshumate.nomad : Get Nomad package checksum file] ********************
changed: [nomad1.local]

TASK [brianshumate.nomad : Get Nomad package checksum] *************************
changed: [nomad3.local]
changed: [nomad1.local]
changed: [nomad2.local]

TASK [brianshumate.nomad : Check Nomad package file] ***************************
ok: [nomad2.local]
ok: [nomad1.local]
ok: [nomad3.local]

TASK [brianshumate.nomad : Download Nomad] *************************************
changed: [nomad2.local]
ok: [nomad3.local]
ok: [nomad1.local]

TASK [brianshumate.nomad : Create Temporary Directory for Extraction] **********
changed: [nomad2.local]
changed: [nomad1.local]
changed: [nomad3.local]

TASK [brianshumate.nomad : Unarchive Nomad] ************************************
changed: [nomad3.local]
changed: [nomad1.local]
changed: [nomad2.local]

TASK [brianshumate.nomad : Install Nomad] **************************************
changed: [nomad2.local]
changed: [nomad3.local]
changed: [nomad1.local]

TASK [brianshumate.nomad : Cleanup] ********************************************
changed: [nomad1.local]
changed: [nomad2.local]
changed: [nomad3.local]

TASK [brianshumate.nomad : Disable SELinux for Docker Driver] ******************
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : Create directories] *********************************
changed: [nomad2.local] => (item=/var/nomad)
changed: [nomad1.local] => (item=/var/nomad)
changed: [nomad3.local] => (item=/var/nomad)

TASK [brianshumate.nomad : Create config directory] ****************************
changed: [nomad2.local]
changed: [nomad1.local]
changed: [nomad3.local]

TASK [brianshumate.nomad : Base configuration] *********************************
changed: [nomad2.local]
changed: [nomad1.local]
changed: [nomad3.local]

TASK [brianshumate.nomad : Server configuration] *******************************
skipping: [nomad3.local]
changed: [nomad2.local]
changed: [nomad1.local]

TASK [brianshumate.nomad : Client configuration] *******************************
skipping: [nomad1.local]
skipping: [nomad2.local]
changed: [nomad3.local]

TASK [brianshumate.nomad : Custom configuration] *******************************
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : SYSV init script] ***********************************
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : Debian init script] *********************************
skipping: [nomad1.local]
skipping: [nomad2.local]
skipping: [nomad3.local]

TASK [brianshumate.nomad : extract systemd version] ****************************
ok: [nomad3.local]
ok: [nomad1.local]
ok: [nomad2.local]

TASK [brianshumate.nomad : systemd script] *************************************
changed: [nomad1.local]
changed: [nomad3.local]
changed: [nomad2.local]

TASK [brianshumate.nomad : reload systemd daemon] ******************************
ok: [nomad3.local]
ok: [nomad2.local]
ok: [nomad1.local]

TASK [brianshumate.nomad : Start Nomad] ****************************************
changed: [nomad1.local]
changed: [nomad3.local]
changed: [nomad2.local]

TASK [Start nomad] *************************************************************
ok: [nomad1.local]
ok: [nomad2.local]
ok: [nomad3.local]

RUNNING HANDLER [brianshumate.nomad : restart nomad] ***************************
changed: [nomad3.local]
changed: [nomad2.local]
changed: [nomad1.local]

PLAY RECAP *********************************************************************
nomad1.local               : ok=24   changed=15   unreachable=0    failed=0    skipped=15   rescued=0    ignored=0   
nomad2.local               : ok=22   changed=15   unreachable=0    failed=0    skipped=15   rescued=0    ignored=0   
nomad3.local               : ok=22   changed=14   unreachable=0    failed=0    skipped=15   rescued=0    ignored=0   


==> nomad1: Machine 'nomad1' has a post `vagrant up` message. This is a message
==> nomad1: from the creator of the Vagrantfile, and not from Vagrant itself:
==> nomad1: 
==> nomad1: Vanilla Debian box. See https://app.vagrantup.com/debian for help and bug reports

==> nomad2: Machine 'nomad2' has a post `vagrant up` message. This is a message
==> nomad2: from the creator of the Vagrantfile, and not from Vagrant itself:
==> nomad2: 
==> nomad2: Vanilla Debian box. See https://app.vagrantup.com/debian for help and bug reports

==> nomad3: Machine 'nomad3' has a post `vagrant up` message. This is a message
==> nomad3: from the creator of the Vagrantfile, and not from Vagrant itself:
==> nomad3: 
==> nomad3: Vanilla Debian box. See https://app.vagrantup.com/debian for help and bug reports
[dnk8n@localhost examples]$ vagrant ssh nomad1

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Thu Nov 19 12:56:38 2020 from 10.1.42.1
vagrant@nomad1:~$ nomad node status
Error querying node status: Get http://127.0.0.1:4646/v1/nodes: dial tcp 127.0.0.1:4646: connect: connection refused

dnk8n commented Nov 19, 2020

Part of the problem, although I am not sure how to fix it: it looks like Nomad is listening on the wrong IP address. It is bound to the NAT interface (10.0.2.15) rather than to loopback or the host-only interface (10.1.42.70), as the checks below show.

vagrant@nomad1:~$ nc -zv 127.0.0.1 4646
localhost [127.0.0.1] 4646 (?) : Connection refused

vagrant@nomad1:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:8d:c0:4d brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe8d:c04d/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:37:fd:17 brd ff:ff:ff:ff:ff:ff
    inet 10.1.42.70/24 brd 10.1.42.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe37:fd17/64 scope link 
       valid_lft forever preferred_lft forever

vagrant@nomad1:~$ nc -zv 10.0.2.15 4646
10.0.2.15: inverse host lookup failed: Unknown host
(UNKNOWN) [10.0.2.15] 4646 (?) open

vagrant@nomad1:~$ nc -zv 10.1.42.70 4646
nomad1.local [10.1.42.70] 4646 (?) : Connection refused

vagrant@nomad1:~$ wget 10.0.2.15:4646
--2020-11-19 13:04:42--  http://10.0.2.15:4646/
Connecting to 10.0.2.15:4646... connected.
HTTP request sent, awaiting response... 307 Temporary Redirect
Location: /ui/ [following]
--2020-11-19 13:04:42--  http://10.0.2.15:4646/ui/
Reusing existing connection to 10.0.2.15:4646.
HTTP request sent, awaiting response... 200 OK
Length: 1478 (1.4K) [text/html]
Saving to: ‘index.html’

index.html                              100%[===============================================================================>]   1.44K  --.-KB/s   in 0s     

2020-11-19 13:04:42 (55.9 MB/s) - ‘index.html’ saved [1478/1478]

vagrant@nomad1:~$ cat index.html 
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title>Nomad</title>
    <meta name="description" content="">
    <meta name="viewport" content="width=device-width, initial-scale=1">

    
<meta name="nomad-ui/config/environment" content="%7B%22modulePrefix%22%3A%22nomad-ui%22%2C%22environment%22%3A%22production%22%2C%22rootURL%22%3A%22%2Fui%2F%22%2C%22locationType%22%3A%22auto%22%2C%22EmberENV%22%3A%7B%22FEATURES%22%3A%7B%7D%2C%22EXTEND_PROTOTYPES%22%3A%7B%22Date%22%3Afalse%7D%2C%22_JQUERY_INTEGRATION%22%3Atrue%7D%2C%22APP%22%3A%7B%22blockingQueries%22%3Atrue%2C%22mirageScenario%22%3A%22smallCluster%22%2C%22mirageWithNamespaces%22%3Atrue%2C%22mirageWithTokens%22%3Atrue%2C%22mirageWithRegions%22%3Atrue%7D%2C%22ember-cli-mirage%22%3A%7B%22usingProxy%22%3Atrue%2C%22useDefaultPassthroughs%22%3Atrue%7D%2C%22exportApplicationGlobal%22%3Afalse%7D" />

    <link rel="stylesheet" href="/ui/assets/vendor-d8602bc1ee5a13b26e7066b76b62d063.css">
    <link rel="stylesheet" href="/ui/assets/nomad-ui-2ffef17efbbc4361902d6cc1d42ce2b8.css">
    <link rel="icon" type="image/png" href="/ui//favicon-1c2527a7a07d130ecbafce75e4615a69.png" />

    
  </head>
  <body>
    

    <script src="/ui/assets/vendor-44114fe8d7aaeb1fcb23a1020f2623c2.js"></script>
    <script src="/ui/assets/nomad-ui-3f5aade8a94ca946d42dfb0ab70ffee0.js"></script>

    <div id="ember-basic-dropdown-wormhole"></div>
  </body>
</html>

dnk8n commented Nov 19, 2020

I have since moved on from the example Vagrantfile (I will post my setup soon). I hit exactly the same issue there, though, and some changes I made helped:

Ansible role variables:

- nomad_bind_address: 0.0.0.0 (avoids the error in this issue, but then no leader can be elected; see the next item for a fix)
- nomad_bootstrap_expect: 1 (probably 2 in this example; it erroneously defaults to 3 when there are 3+ nodes but fewer than 3 of them are server nodes. With this set correctly, servers and clients find each other)
- nomad_iface: enp0s8 (with the extra interface present, Nomad advertises through the wrong IP; setting this fixed it, though I am a bit worried that it is hardcoded. In this example I think the value should be "eth1")
- nomad_advertise_address: <set differently per node> (not sure whether this was required, but it works with it set; e.g. in this example nomad_advertise_address for nomad1 is 10.1.42.70)
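For concreteness, here is a minimal sketch of how these variables might be wired into the example's site.yml. The play name matches the output above, but the nomad_instances group name, the eth1 interface, and the 10.1.42.70 address are assumptions based on this example, not something I have verified against the role:

```yaml
# site.yml (sketch): assumes the 10.1.42.0/24 host-only network shown above
- name: Installing Nomad
  hosts: nomad_instances          # assumed inventory group name
  become: true
  roles:
    - brianshumate.nomad
  vars:
    nomad_bind_address: "0.0.0.0"
    nomad_bootstrap_expect: 2     # two of the three nodes run as servers here
    nomad_iface: eth1             # the host-only interface, not the NAT eth0
    # nomad_advertise_address must differ per node, so it belongs in
    # host_vars (or the Vagrantfile's ansible.host_vars), e.g. for nomad1:
    # nomad_advertise_address: 10.1.42.70
```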


dnk8n commented Nov 19, 2020

In the following repo (https://github.com/dnk8n/monitored-orchestration/tree/main/dev) you can find a Vagrantfile that uses both ansible-nomad and ansible-consul to spin up a dev cluster. You may want to adjust the VM resources to suit your machine.

In it I follow some of the suggestions above to work around this issue. I tried to get away from needing nomad_bind_address: 0.0.0.0. Setting the shell variable NOMAD_ADDR="http://{{ nomad_bind_address }}:4646", with nomad_bind_address set to the private IP of the VM, does fix this issue, but it causes another one, visible in sudo systemctl status nomad: Nomad erroneously tries to connect via RPC to 127.0.0.1:4647 instead of the configured bind address.
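For illustration, a hypothetical extra provisioning task (not part of the brianshumate.nomad role; the /etc/profile.d path and task name are my own) that would export NOMAD_ADDR for login shells instead of setting it by hand. Note this only redirects the CLI's HTTP calls; it does not touch the RPC connection to 127.0.0.1:4647 mentioned above:

```yaml
# Hypothetical task: point the nomad CLI at the configured bind address
# rather than its default of http://127.0.0.1:4646.
- name: Export NOMAD_ADDR for login shells
  copy:
    dest: /etc/profile.d/nomad.sh
    content: 'export NOMAD_ADDR="http://{{ nomad_bind_address }}:4646"'
    mode: "0644"
```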

I am not sure all of the modifications were necessary. I often had to destroy everything and vagrant up again to be sure a change had taken effect. Eventually I got a clean run that works out of the box.

I tested with the nomad init output (cat example.nomad) and uploaded it through the UI. It worked as expected.

The UI endpoints are:

nomad:
http://n1.lan:4646
http://n2.lan:4646
http://n3.lan:4646

consul:
http://n1.lan:8500
http://n2.lan:8500
http://n3.lan:8500
