
Volumes are not forced to use SELinux label #208

Open · wants to merge 1 commit into base: main
Conversation

rikislaw
Contributor

With this change, the SELinux label is no longer added by default to all volumes/mounts when SELinux labeling is specified in the main Nomad config for the Podman driver; the setting now applies only to the standard Nomad mounts. For volumes defined in the job definition, the z label has to be provided explicitly for each volume that should use it. With the previous approach I was not able to mount a share that could not take the z label at all. I hit this on RHEL 7/8 with SELinux enabled and NFS mounts: the NFS mounts failed with permission denied when mounted with the z label.
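For illustration, a hypothetical task fragment (not taken from this PR) showing the intent: with no forced label, z becomes opt-in per volume, so an NFS share can stay unlabeled while other binds are still relabeled:

config {
  image = "docker.io/library/alpine:latest"
  volumes = [
    "/opt/app/data:/data:z",  # relabel this bind explicitly
    "/mnt/nfs/share:/share"   # NFS share mounted without the z label
  ]
}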

@lgfa29
Contributor

lgfa29 commented Jan 26, 2023

@jdoss we had this change suggested before, but you mentioned in #66 (comment) that it breaks the CSI integration.

Do you have more details? Is it related to #185?

@jdoss
Contributor

jdoss commented Jan 26, 2023

I believe this does the same thing as #66: it moves the SELinux logic up so that it applies to all mounts, which can have unintended consequences.

The better solution would be to add support for selinuxlabel in volume_mount so you can be specific about the SELinux contexts you want applied:

volume_mount {
  volume       = "victoriametrics"
  destination  = "/victoria-metrics-data"
  read_only    = false
  selinuxlabel = "z"
}

@rikislaw
Contributor Author

Hi @jdoss,

Is the volume_mount stanza some new feature? I don't see it in the Podman driver documentation. Will it also allow mounting any folder?

@lgfa29
Contributor

lgfa29 commented Jan 27, 2023

Ah right, thanks @jdoss, I think I remember it now. This issue in Nomad has some more details: hashicorp/nomad#9123

@rikislaw volume_mount is handled by Nomad itself, so it's outside the task driver's control:
https://developer.hashicorp.com/nomad/docs/job-specification/volume_mount
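For reference, a minimal sketch of the stanza as documented there (all names here are placeholders):

group "app" {
  volume "data" {
    type      = "host"
    source    = "data-volume"
    read_only = false
  }

  task "app" {
    driver = "podman"

    volume_mount {
      volume      = "data"
      destination = "/data"
      read_only   = false
    }
  }
}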

@jdoss
Contributor

jdoss commented Jan 27, 2023

No problem @lgfa29!

@rikislaw
Contributor Author

Hi @lgfa29 , @jdoss,

So, in the end, what approach should we follow to let the Podman driver mount a host directory without the z label when the label is specified in the main config file? For the Docker driver I had a workaround: the mount stanza, which doesn't apply the z label at all (sketched below), but it is not available for Podman. In other words, I currently cannot migrate my jobs directly from Docker to Podman, because I need to mount a few directories, some with and some without the z label.
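A sketch of that Docker-driver workaround, with placeholder paths and image, using the Docker driver's mount block (which performs the bind without relabeling):

config {
  image = "registry.example.com/app:latest"

  mount {
    type     = "bind"
    source   = "/mnt/nfs/share"
    target   = "/share"
    readonly = false
  }
}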

@lgfa29
Contributor

lgfa29 commented Feb 6, 2023

Hi @rikislaw,

I think @jdoss is suggesting something like these changes:
https://github.com/hashicorp/nomad/compare/f-volume-mount-label
https://github.com/hashicorp/nomad-driver-podman/compare/f-volume-mount-label

I haven't tested this much, so I'm not sure it actually works, but the idea is that you would be able to set these labels at the volume_mount level in your task, so the Podman plugin doesn't have to apply the label to all binds.

The implementation I have is fairly naive because it leaves it to plugins to handle that config, so we would have to discuss what a better implementation would be. Possibly a new way to validate whether the plugin supports the additional label? Not sure 🤔

Would either of you be interested in moving the development of this feature forward?

@rikislaw
Contributor Author

Hi @lgfa29

I'm coming back to this issue after some time, as it is getting more critical for us. The suggested solution assumes we are using CSI or host volumes, but we are not: we just use the volumes list inside the config of a Podman (previously Docker) task. At the moment CSI and host volumes are not a good fit for us.

Are we 100% sure that the change from this PR will break the CSI integration? At which point? Maybe it can be fixed by adding some more changes in the "create binds for host volumes, CSI plugins, and CSI volumes" section?

BR,
Piotr

@jdoss
Contributor

jdoss commented Jul 24, 2023

@rikislaw Yes, I am very confident that if we merge this in, it will break CSI for a lot of folks. The better path is the work that @lgfa29 talked about in his last response. If you need a faster fix, I would remove selinuxlabel = "z" from the Podman driver block in your Nomad config:

plugin "nomad-driver-podman" {
  config {
    socket_path = "unix://var/run/podman/podman.sock"
    volumes {
      enabled      = true
      selinuxlabel = "z"
    }
  }
}

and then manually manage your mounts in the task section of your Nomad jobs with:

volumes = [
  "/etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:z"
]

I think that should work in the short term until we reach consensus on whether adding selinuxlabel to the volume_mount directive would be a better path forward for this kind of issue.

Also, as a nuclear option, you can turn off SELinux on your client nodes. I don't recommend it, but it will unblock you.
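For completeness, that would look roughly like this on a client node (again, not recommended):

# switch to permissive mode until the next reboot
sudo setenforce 0

# to make it persistent, set SELINUX=permissive (or disabled)
# in /etc/selinux/config and reboot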

@jdoss
Contributor

jdoss commented Jul 24, 2023

I got some time at lunch to test this with Nomad 1.6.1 and nomad-podman-driver 1.5.0 on Fedora CoreOS 38.20230625.3.0, and it works as expected. You might get different results on RHEL 7 / RHEL 8, which ship much older versions of podman and container-selinux that might affect your success as well. You could also try a custom SELinux policy on your NFS mounts as a workaround.

volumes = [
  "/opt/joetest:/joetest:z",
]

I set this on a job task.

[root@testing ~]# podman exec -it testjob-d639d53c-fcc6-8f13-5bac-0a9cd47e939b sh
/ # cd /joetest/
/joetest # ls
/joetest # touch foo
/joetest # ls -lah
total 0      
drwxr-xr-x    2 root     root          17 Jul 24 18:24 .
dr-xr-xr-x    1 root     root         150 Jul 24 18:23 ..
-rw-r--r--    1 root     root           0 Jul 24 18:24 foo
/joetest # 
[root@testing ~]# ls -lah /opt/joetest/
total 0
drwxr-xr-x. 2 root root 17 Jul 24 18:24 .
drwxr-xr-x. 5 root root 47 Jul 24 18:23 ..
-rw-r--r--. 1 root root  0 Jul 24 18:24 foo
[root@testing ~]# getenforce 
Enforcing

@rikislaw
Contributor Author

rikislaw commented Jul 25, 2023

Hi @jdoss,

Regarding the versions used: I currently work on RHEL 8.8, podman 4.4.1, and nomad-podman-driver 0.5.0. I think those are quite recent versions.

Generally, I did try not using selinuxlabel = "z" in the Podman driver config, and then yes, I could specify the z label in the volumes list in the task section. But unfortunately the default Nomad allocation mounts (like /alloc, /local, /secrets, ...) are then also mounted without the z label and fail with permission denied.

A not-perfect workaround would be to add an extra option alongside selinuxlabel = "z", something like alloc_mounts_selinuxlabel = "z", which would affect only the default allocation mounts. Of course this parameter would not be mandatory if selinuxlabel = "z" is already specified.
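Sketched as driver config (alloc_mounts_selinuxlabel is not implemented anywhere, it is just an illustration of the idea):

plugin "nomad-driver-podman" {
  config {
    volumes {
      enabled = true
      # no global selinuxlabel: task volumes stay unlabeled unless
      # given a :z suffix individually

      # hypothetical knob, not implemented: label only the default
      # allocation mounts (/alloc, /local, /secrets, ...)
      alloc_mounts_selinuxlabel = "z"
    }
  }
}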

BR,
Piotr Richter

@zemm

zemm commented Sep 12, 2023

I also had a case where I needed to mount NFS storage into a Podman task without the :z.

At first I tried using the task's volumes instead of host volumes, but found out that the implicit :z is still forcefully applied to them.

Then I tried removing the driver's selinuxlabel = "z", but found that none of the Podman tasks work anymore, because the alloc's secrets mount gets the wrong labels. Creating semanage fcontext rules for the alloc path doesn't fix this, because tmpfs mounts don't obey that list.
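Roughly what that attempt looks like, with an example alloc path (adjust for your client's data_dir):

# add a file-context rule for the alloc dir and re-label it
semanage fcontext -a -t container_file_t "/opt/nomad/data/alloc(/.*)?"
restorecon -Rv /opt/nomad/data/alloc

# this has no effect on the tmpfs-backed secrets mount, which is
# labeled at mount time rather than from the fcontext database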

The only solution I found was to run the tasks that use NFS storage as privileged, which is obviously not a good solution.

There needs to be more control than one global flag. As a user I didn't expect that when the config.volumes takes explicit flags, there would also be implicit additions that I can't remove.

Even if host volumes gained support for per-volume/mount labels, I think the separate alloc_mounts_selinuxlabel option suggested above would still be needed to get rid of the all-infecting selinuxlabel, because of the problem with the wrong alloc mount labels.

@Juanadelacuesta
Member

Juanadelacuesta commented Mar 11, 2024

The PR to support it is ready, but it requires a new Nomad release that includes the new option in the volume_mount configuration.

Labels: hcc/cst Admin - internal
6 participants