Agent node IP not replaced with Tailscale VPN IP #10064

Open · npitsillos opened this issue May 3, 2024 · 7 comments
Assignees: manuelbuil
Labels: kind/bug (Something isn't working)
Milestone: v1.30.1+k3s1

Comments

@npitsillos commented May 3, 2024:

Environmental Info:
K3s Version:

k3s version v1.29.4+k3s1 (94e29e2e)
go version go1.21.9

Node(s) CPU architecture, OS, and Version:

Linux drone-one 6.8.0-1004-raspi #4-Ubuntu SMP PREEMPT_DYNAMIC Sat Apr 20 02:29:55 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux

Cluster Configuration:

1 server 1 agent

Describe the bug:
Adding nodes using --vpn-auth-file only changes the IP address of the server node; the agent's node-ip is not updated to reflect its Tailscale address.

Steps To Reproduce:

  1. Install Tailscale on server node
  2. Install k3s on server node with curl -sfL https://get.k3s.io | sh -s - --disable servicelb --disable traefik --vpn-auth-file /home/terminator/vpn-auth-file
  3. Install Tailscale on agent node
  4. Install k3s on agent node with curl -sfL https://get.k3s.io | K3S_URL=$URL K3S_TOKEN=$TOKEN sh -s - --vpn-auth-file /home/terminator/vpn-auth-file
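For reference, the vpn-auth-file used in steps 2 and 4 is assumed to contain a single vpn-auth string in the same format as the inline --vpn-auth flag mentioned later in this thread; the key below is a placeholder:

# Create the auth file consumed by --vpn-auth-file (assumed format; placeholder key)
echo 'name=tailscale,joinKey=<tailscale-auth-key>' > /home/terminator/vpn-auth-file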

Expected behavior:
The drone-one node IP should match its Tailscale address, not the local-network one.

Actual behavior:
(screenshot: node listing showing drone-one registered with its local-network IP 192.168.1.102 instead of its Tailscale IP)
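One way to confirm which InternalIP each node registered with (not part of the original report, just a generic check):

# List nodes together with their internal/external addresses
kubectl get nodes -o wide

# Or print only the node name and its InternalIP
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'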

Additional context / logs:
Here are some logs from running:

sudo k3s agent --server https://192.168.1.101:6443 --token <token> --vpn-auth-file /home/terminator/vpn-auth-file
INFO[0000] Starting k3s agent v1.29.4+k3s1 (94e29e2e)   
INFO[0000] Starting VPN: tailscale                      
INFO[0000] Adding server to load balancer k3s-agent-load-balancer: 192.168.1.101:6443 
INFO[0000] Adding server to load balancer k3s-agent-load-balancer: 100.81.154.10:6443 
INFO[0000] Removing server from load balancer k3s-agent-load-balancer: 192.168.1.101:6443 
INFO[0000] Running load balancer k3s-agent-load-balancer 127.0.0.1:6444 -> [100.81.154.10:6443] [default: 192.168.1.101:6443] 
WARN[0004] Host resolv.conf includes loopback or multicast nameservers - kubelet will use autogenerated resolv.conf with nameserver 8.8.8.8 
INFO[0004] Module overlay was already loaded            
INFO[0004] Module nf_conntrack was already loaded       
INFO[0004] Module br_netfilter was already loaded       
INFO[0004] Module iptable_nat was already loaded        
INFO[0004] Module iptable_filter was already loaded     
INFO[0004] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[0004] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[0005] containerd is now running                    
INFO[0005] Getting list of apiserver endpoints from server 
INFO[0005] Updated load balancer k3s-agent-load-balancer default server address -> 100.81.154.10:6443 
INFO[0005] Connecting to proxy                           url="wss://100.81.154.10:6443/v1-k3s/connect"
INFO[0005] Creating k3s-cert-monitor event broadcaster  
INFO[0005] Running kubelet --address=0.0.0.0 --allowed-unsafe-sysctls=net.ipv4.ip_forward,net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=CloudDualStackNodeIPs=true --healthz-bind-address=127.0.0.1 --hostname-override=drone-one --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-ip=192.168.1.102 --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/var/lib/rancher/k3s/agent/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key 
INFO[0005] Remotedialer connected to proxy               url="wss://100.81.154.10:6443/v1-k3s/connect"
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
I0503 13:49:40.570165    3366 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
I0503 13:49:40.576596    3366 server.go:482] "Kubelet version" kubeletVersion="v1.29.4+k3s1"
I0503 13:49:40.576695    3366 server.go:484] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0503 13:49:40.582451    3366 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
I0503 13:49:40.601360    3366 server.go:740] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I0503 13:49:40.602150    3366 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0503 13:49:40.602693    3366 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"/k3s","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
I0503 13:49:40.602837    3366 topology_manager.go:138] "Creating topology manager with none policy"
I0503 13:49:40.602871    3366 container_manager_linux.go:301] "Creating device plugin manager"
I0503 13:49:40.603036    3366 state_mem.go:36] "Initialized new in-memory state store"
I0503 13:49:40.603635    3366 kubelet.go:396] "Attempting to sync node with API server"
I0503 13:49:40.603763    3366 kubelet.go:301] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0503 13:49:40.603825    3366 kubelet.go:312] "Adding apiserver pod source"
I0503 13:49:40.603890    3366 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I0503 13:49:40.605752    3366 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.15-k3s1" apiVersion="v1"
I0503 13:49:40.606283    3366 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
I0503 13:49:40.607915    3366 server.go:1251] "Started kubelet"
I0503 13:49:40.608407    3366 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
I0503 13:49:40.609143    3366 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
I0503 13:49:40.609988    3366 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
I0503 13:49:40.610271    3366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0503 13:49:40.612494    3366 server.go:461] "Adding debug handlers to kubelet server"
I0503 13:49:40.614479    3366 volume_manager.go:291] "Starting Kubelet Volume Manager"
E0503 13:49:40.616501    3366 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"drone-one\" not found"
I0503 13:49:40.619254    3366 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
I0503 13:49:40.621372    3366 reconciler_new.go:29] "Reconciler: start to sync state"
I0503 13:49:40.624353    3366 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
E0503 13:49:40.626394    3366 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I0503 13:49:40.629996    3366 factory.go:221] Registration of the containerd container factory successfully
I0503 13:49:40.630067    3366 factory.go:221] Registration of the systemd container factory successfully
E0503 13:49:40.646372    3366 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"drone-one\" not found" node="drone-one"
I0503 13:49:40.689069    3366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
I0503 13:49:40.695359    3366 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
I0503 13:49:40.695486    3366 status_manager.go:217] "Starting to sync pod status with apiserver"
I0503 13:49:40.695556    3366 kubelet.go:2329] "Starting kubelet main sync loop"
E0503 13:49:40.695785    3366 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
I0503 13:49:40.718622    3366 kubelet_node_status.go:73] "Attempting to register node" node="drone-one"
I0503 13:49:40.733798    3366 kubelet_node_status.go:76] "Successfully registered node" node="drone-one"
I0503 13:49:40.740011    3366 cpu_manager.go:214] "Starting CPU manager" policy="none"
I0503 13:49:40.740118    3366 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
I0503 13:49:40.740212    3366 state_mem.go:36] "Initialized new in-memory state store"
I0503 13:49:40.740835    3366 state_mem.go:88] "Updated default CPUSet" cpuSet=""
I0503 13:49:40.740945    3366 state_mem.go:96] "Updated CPUSet assignments" assignments={}
I0503 13:49:40.740980    3366 policy_none.go:49] "None policy: Start"
I0503 13:49:40.742631    3366 memory_manager.go:170] "Starting memorymanager" policy="None"
I0503 13:49:40.742812    3366 state_mem.go:35] "Initializing new in-memory state store"
I0503 13:49:40.743493    3366 state_mem.go:75] "Updated machine memory state"
I0503 13:49:40.749339    3366 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
I0503 13:49:40.750214    3366 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
INFO[0005] Failed to set annotations and labels on node drone-one: Operation cannot be fulfilled on nodes "drone-one": the object has been modified; please apply your changes to the latest version and try again 
INFO[0005] Failed to set annotations and labels on node drone-one: Operation cannot be fulfilled on nodes "drone-one": the object has been modified; please apply your changes to the latest version and try again 
I0503 13:49:40.848799    3366 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.42.3.0/24"
I0503 13:49:40.850633    3366 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.3.0/24"
I0503 13:49:40.852855    3366 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
INFO[0005] Failed to set annotations and labels on node drone-one: Operation cannot be fulfilled on nodes "drone-one": the object has been modified; please apply your changes to the latest version and try again 
INFO[0006] Failed to set annotations and labels on node drone-one: Operation cannot be fulfilled on nodes "drone-one": the object has been modified; please apply your changes to the latest version and try again 
INFO[0006] Annotations and labels have been set successfully on node: drone-one 
INFO[0006] Starting flannel with backend vxlan          
INFO[0006] Flannel found PodCIDR assigned for node drone-one 
INFO[0006] The interface eth0 with ipv4 address 192.168.1.102 will be used by flannel 
I0503 13:49:40.965328    3366 kube.go:139] Waiting 10m0s for node controller to sync
I0503 13:49:40.965378    3366 kube.go:461] Starting kube subnet manager
I0503 13:49:40.982266    3366 kube.go:482] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.0.0/24]
INFO[0006] Starting network policy controller version v2.1.0, built on 2024-04-25T17:33:09Z, go1.21.9 
I0503 13:49:41.142097    3366 network_policy_controller.go:164] Starting network policy controller
INFO[0006] Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=drone-one --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables 
INFO[0006] Tunnel authorizer set Kubelet Port 10250     
I0503 13:49:41.296599    3366 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.1.102"]
I0503 13:49:41.306198    3366 network_policy_controller.go:176] Starting network policy controller full sync goroutine
I0503 13:49:41.321789    3366 server.go:652] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0503 13:49:41.321873    3366 server_others.go:168] "Using iptables Proxier"
I0503 13:49:41.332461    3366 server_others.go:512] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
I0503 13:49:41.332523    3366 server_others.go:529] "Defaulting to no-op detect-local"
I0503 13:49:41.332593    3366 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0503 13:49:41.333480    3366 server.go:865] "Version info" version="v1.29.4+k3s1"
I0503 13:49:41.333555    3366 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0503 13:49:41.335984    3366 config.go:188] "Starting service config controller"
I0503 13:49:41.336261    3366 shared_informer.go:311] Waiting for caches to sync for service config
I0503 13:49:41.336502    3366 config.go:97] "Starting endpoint slice config controller"
I0503 13:49:41.336721    3366 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
I0503 13:49:41.338653    3366 config.go:315] "Starting node config controller"
I0503 13:49:41.341161    3366 shared_informer.go:311] Waiting for caches to sync for node config
I0503 13:49:41.437107    3366 shared_informer.go:318] Caches are synced for endpoint slice config
I0503 13:49:41.437107    3366 shared_informer.go:318] Caches are synced for service config
I0503 13:49:41.442061    3366 shared_informer.go:318] Caches are synced for node config
E0503 13:49:41.504199    3366 proxier.go:838] "Failed to ensure chain jumps" err=<
	error appending rule: exit status 2: Ignoring deprecated --wait-interval option.
	iptables v1.8.10 (nf_tables): Chain 'KUBE-NODEPORTS' does not exist
	Try `iptables -h' or 'iptables --help' for more information.
 > table="filter" srcChain="INPUT" dstChain="KUBE-NODEPORTS"
I0503 13:49:41.504294    3366 proxier.go:803] "Sync failed" retryingTime="30s"
I0503 13:49:41.604644    3366 apiserver.go:52] "Watching apiserver"
I0503 13:49:41.621302    3366 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
I0503 13:49:41.965935    3366 kube.go:146] Node controller sync successful
I0503 13:49:41.966232    3366 vxlan.go:141] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0503 13:49:41.979074    3366 kube.go:621] List of node(drone-one) annotations: map[string]string{"alpha.kubernetes.io/provided-node-ip":"192.168.1.102", "k3s.io/hostname":"drone-one", "k3s.io/internal-ip":"192.168.1.102", "k3s.io/node-args":"[\"agent\",\"--server\",\"https://192.168.1.101:6443\",\"--token\",\"********\",\"--vpn-auth-file\",\"/home/terminator/vpn-auth-file\"]", "k3s.io/node-config-hash":"CXT2ST2JGTGXHFT7PS4KBUD7YUV5A3FHPN3BZ6UYFUBLTY6KB4TQ====", "k3s.io/node-env":"{\"K3S_DATA_DIR\":\"/var/lib/rancher/k3s/data/381112d65aad62fd1acd373e1fc0c430cb9c3fc77232ffd864b8532a77aef54d\"}", "node.alpha.kubernetes.io/ttl":"0", "volumes.kubernetes.io/controller-managed-attach-detach":"true"}
I0503 13:49:42.013354    3366 kube.go:482] Creating the node lease for IPv4. This is the n.Spec.PodCIDRs: [10.42.3.0/24]
I0503 13:49:42.038326    3366 iptables.go:290] generated 3 rules
INFO[0007] Wrote flannel subnet file to /run/flannel/subnet.env 
INFO[0007] Running flannel backend.                     
I0503 13:49:42.038630    3366 vxlan_network.go:65] watching for new subnet leases
I0503 13:49:42.038752    3366 subnet.go:160] Batch elem [0] is { lease.Event{Type:0, Lease:lease.Lease{EnableIPv4:true, EnableIPv6:false, Subnet:ip.IP4Net{IP:0xa2a0000, PrefixLen:0x18}, IPv6Subnet:ip.IP6Net{IP:(*ip.IP6)(nil), PrefixLen:0x0}, Attrs:lease.LeaseAttrs{PublicIP:0x64519a0a, PublicIPv6:(*ip.IP6)(nil), BackendType:"extension", BackendData:json.RawMessage{}, BackendV6Data:json.RawMessage(nil)}, Expiration:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Asof:0}} }
W0503 13:49:42.038934    3366 vxlan_network.go:101] ignoring non-vxlan v4Subnet(10.42.0.0/24) v6Subnet(::/0): type=extension
I0503 13:49:42.044098    3366 iptables.go:290] generated 7 rules
I0503 13:49:42.098269    3366 iptables.go:283] bootstrap done
I0503 13:49:42.157344    3366 iptables.go:283] bootstrap done

On the server, by contrast, the IP address is correctly updated to match the Tailscale one:

INFO[0000] Starting VPN: tailscale                      
INFO[0001] Changed advertise-address to 100.81.154.10 due to VPN 
WARN[0001] Etcd IP (PrivateIP) remains the local IP. Running etcd traffic over VPN is not recommended due to performance issues 
INFO[0001] Starting k3s v1.29.4+k3s1 (94e29e2e)         
INFO[0001] Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s 
INFO[0001] Configuring database table schema and indexes, this may take a moment... 
INFO[0001] Database tables and indexes are up to date   
INFO[0001] Kine available at unix://kine.sock           
INFO[0001] Reconciling bootstrap data between datastore and disk 
INFO[0002] Running kube-apiserver --advertise-address=100.81.154.10

I remember this working fine on 1.27, but I am not sure whether I ran anything differently, to be honest, so I can try that and update the issue here. Could someone also point me to how I can set up k3s for local development in order to contribute? I would really appreciate any help on this.

Thank you for such an amazing tool with so many frequent updates!

@npitsillos (Author) commented May 3, 2024:

After some more digging: using --vpn-auth name=tailscale,joinKey=<key> works as expected, whereas the --vpn-auth-file argument does not. My assumption is that the code that reads the file when an agent starts contains a bug. The issue only appears when --vpn-auth-file is used to install k3s on the agent node; on the server node it works as expected.
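For comparison, a minimal sketch of the two agent installs discussed above, reusing the placeholders from the reproduction steps (the join key is illustrative):

# Works: the vpn-auth string passed inline
curl -sfL https://get.k3s.io | K3S_URL=$URL K3S_TOKEN=$TOKEN sh -s - \
  --vpn-auth "name=tailscale,joinKey=<key>"

# Does not pick up the Tailscale IP: the same string read from a file
curl -sfL https://get.k3s.io | K3S_URL=$URL K3S_TOKEN=$TOKEN sh -s - \
  --vpn-auth-file /home/terminator/vpn-auth-file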

@manuelbuil added the kind/bug label and self-assigned this issue on May 6, 2024
@manuelbuil (Contributor) commented:

Thanks for the report! You are totally correct, there is a bug. I have just created a PR to fix it.

@manuelbuil added this to the v1.30.1+k3s1 milestone on May 7, 2024
@npitsillos (Author) commented:

Thanks @manuelbuil! I would like to try fixing this myself if possible; it might take me longer than usual, but I'd be interested in contributing. I am not really sure how to approach local development, though.

@manuelbuil (Contributor) commented:

Oh sorry! I have already created a fix this morning :( ==> #10074

@manuelbuil (Contributor) commented May 7, 2024:

> Thanks @manuelbuil! I would like to try fixing this myself if possible; it might take me longer than usual, but I'd be interested in contributing. I am not really sure how to approach local development, though.

If you'd like to help, there is one thing you could certainly do: enhance our tailscale e2e test case with one extra node that uses the vpn-auth-file parameter, so that we can catch this kind of bug if it happens again. Here is the test case: https://github.com/k3s-io/k3s/tree/master/tests/e2e/tailscale. If you are up for the challenge, I can help you.

Right now we are deploying both server and worker nodes with vpn-auth ===> https://github.com/k3s-io/k3s/blob/master/tests/e2e/tailscale/Vagrantfile#L38-L55
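A rough, shell-level sketch of what the extra vpn-auth-file test node would end up doing (the real change would live in the Vagrantfile and the e2e Go code; the paths, token, and key below are illustrative placeholders):

# On the additional agent VM: write the auth file and install k3s pointing at it
echo 'name=tailscale,joinKey=<tailscale-auth-key>' > /home/vagrant/vpn-auth-file
curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> \
  sh -s - --vpn-auth-file /home/vagrant/vpn-auth-file

# The test can then assert that the node's InternalIP is a Tailscale address (100.64.0.0/10)
kubectl get nodes -o wide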

@npitsillos (Author) commented:

I would definitely like to attempt this! If I understand correctly, Vagrant sets up multiple VMs that act as nodes in the cluster, and the tests are then run against them? I am currently away, so I will try to work on this next week. Thank you very much for your help; I'm looking forward to contributing!

@manuelbuil (Contributor) commented:

> I would definitely like to attempt this! If I understand correctly, Vagrant sets up multiple VMs that act as nodes in the cluster, and the tests are then run against them? I am currently away, so I will try to work on this next week. Thank you very much for your help; I'm looking forward to contributing!

Correct. You run the test by executing go test -v -timeout=30m ./tests/e2e/tailscale/; that triggers the Vagrant creation, and once it is ready the testing starts. Hit me up on Slack for more details.
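For context, running it from a checkout of the repo looks roughly like this (assumes Vagrant and a supported provider are installed, plus whatever Tailscale credentials the test expects):

git clone https://github.com/k3s-io/k3s.git
cd k3s
go test -v -timeout=30m ./tests/e2e/tailscale/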
