Combining This Role With ‘ansible-consul’ #147

Open
egmanoj opened this issue Jan 10, 2022 · 5 comments

Comments


egmanoj commented Jan 10, 2022

I'm looking for examples that illustrate how to combine this role with ansible-consul to bring up a local cluster for development with Nomad and Consul. I couldn't find any examples out in the wild, so I thought I'd ask here.

Please point me in the right direction if this is not the right place to ask.


ppacher commented Jan 17, 2022

I'm using something like the following (just a quick copy from my playbook). I hope I didn't forget anything important:

#
# Consul
#
# Setup a consul cluster
#
- name: Consul Cluster Setup
  hosts: consul_instances
  gather_facts: true
  become: yes
  roles:
  - role: ansible-consul
    vars:
      consul_connect_enabled: True

  # The dnsmasq role installs and configures dnsmasq as a local DNS resolver that forwards
  # queries for .consul to Consul and all others to some public DNS.
  # See https://learn.hashicorp.com/tutorials/consul/dns-forwarding
  # (a sketch of what this amounts to appears after the variables below)
  - role: dnsmasq

#
# Nomad Cluster Setup
#
- name: Setup a nomad cluster
  hosts: nomad_instances
  gather_facts: true
  become: yes
  roles:
  # we need docker on each server that runs a
  # nomad client (nomad_node_role == 'client' | 'both')
  - role: tkd.common.docker
    vars:
      docker_daemon_config:
        insecure-registries: ["registry.service.consul:5000"]
        bip: "172.17.0.1/24"
        dns:
          - "172.17.0.1"
    when: nomad_node_role != 'server'

  - role: ansible-nomad
    vars:
      nomad_version: 1.2.3
      nomad_use_consul: True
      #nomad_bind_address: "0.0.0.0" # Dangerous: make sure to restrict access from the public network.
      nomad_group_name: "nomad_instances"
      nomad_docker_enable: True
      nomad_retry_join: True
      nomad_retry_max: 20
      nomad_manage_user: False # https://github.com/ansible-community/ansible-nomad/issues/109#issuecomment-877225241
      nomad_encrypt_enable: True
      # nomad_vault_enabled: True
      # nomad_vault_address: "http://active.vault.service.consul:8200/"
      nomad_telemetry: yes
      nomad_telemetry_disable_hostname: true
      nomad_telemetry_publish_allocation_metrics: true
      nomad_telemetry_publish_node_metrics: true
      nomad_telemetry_prometheus_metrics: true
      nomad_host_networks: [] # TODO(ppacher): configure for ingress
      nomad_plugins:
        docker:
          config:
            allow_privileged: true
      nomad_host_volumes:
      - name: "shared-data"
        path: /mnt/zonedata0
        owner: root
        group: bin
        mode: "0755"
        read_only: false

With the following variables set for all participating hosts:

consul_version: latest
consul_raft_protocol: 3
consul_bootstrap_expect: true
consul_iface: cluster0
consul_node_role: server

nomad_iface: cluster0
nomad_network_interface: cluster0
nomad_node_role: both
nomad_node_class: node

Note that I'm using a WireGuard mesh between all my hosts; that's why the _iface values are set to cluster0. You should instead use the name of the network interface for the local LAN connection between those nodes.
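
As an aside, the .consul forwarding that the dnsmasq role takes care of comes down to something like the following. This is a hypothetical sketch rather than the role's actual tasks; it assumes dnsmasq is already installed and relies on Consul serving DNS on its default port 8600:

# Sketch only: forward *.consul to the local Consul agent
- hosts: consul_instances
  become: true
  tasks:
    - name: Forward .consul queries to the local Consul agent (DNS port 8600 by default)
      ansible.builtin.copy:
        dest: /etc/dnsmasq.d/10-consul
        content: |
          server=/consul/127.0.0.1#8600
    - name: Restart dnsmasq to pick up the forwarding rule
      ansible.builtin.service:
        name: dnsmasq
        state: restarted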

Also, the host groups consul_instances and nomad_instances consist of more or less the same hosts in my setup (I don't run dedicated Nomad clients; each node serves as both a server and a client, for budget reasons).
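
To make the group layout concrete, here is a hypothetical inventory sketch matching the group names above (hostnames and the file name are placeholders):

# inventory.yml (placeholder names)
all:
  children:
    consul_instances:
      hosts:
        node1.cluster.internal:
        node2.cluster.internal:
        node3.cluster.internal:
    nomad_instances:
      hosts:
        node1.cluster.internal:
        node2.cluster.internal:
        node3.cluster.internal:

You would then run the plays with something like ansible-playbook -i inventory.yml site.yml (site.yml being whatever you name the playbook above).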


egmanoj commented Jan 21, 2022

Thanks @ppacher! Will take this for a spin.

Could you also share (the relevant sections of) your hosts file?

@rizerzero

Hi @egmanoj, did you manage to do this?


egmanoj commented Feb 24, 2022

Hi @rizerzero,

I did not. I created a new playbook with both ansible-consul and ansible-nomad. Consul was installed and started, but Nomad was not.

In the end I gave up and wrote my own simple roles/tasks to install and configure both Consul and Nomad. Hope this helps.
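
For anyone who finds this later, the core of what I ended up with is roughly the following. It's a simplified sketch of the hand-rolled approach, assuming Debian/Ubuntu hosts with HashiCorp's official apt repository already configured:

# Sketch: install and start both services from the HashiCorp apt repo
- hosts: all
  become: true
  tasks:
    - name: Install Consul and Nomad
      ansible.builtin.apt:
        name:
          - consul
          - nomad
        state: present
        update_cache: true

    - name: Make sure both services are enabled and running
      ansible.builtin.service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop:
        - consul
        - nomad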


IamTheFij commented Oct 24, 2022

I've got a playbook with Consul, Vault, and Nomad all integrated. It handles setup, bootstrapping, and updates. I've also got a playbook to bootstrap values into Consul and Vault, as well as Terraform configuration for setting up the jobs.

I might do a significant refactor now that Nomad 1.4 supports variables and service discovery. I may drop Consul and Vault.

It's probably worth noting that this configuration uses my forks of each of the Ansible roles because some changes I made haven't been merged upstream yet.
