
OAuth 2.0 (RFC 6749) violations #4426

Open
zacknewman opened this issue Mar 13, 2024 · 30 comments
Labels
enhancement New feature or request

Comments

@zacknewman

zacknewman commented Mar 13, 2024

Sections 10.3 and 10.4 of RFC 6749 require the authorization server to use TLS when exchanging the access and refresh tokens; however, Vaultwarden—which acts as both the resource server and the authorization server—currently does not force TLS, meaning deployments without TLS violate the RFC.

Additionally, Section 6 requires the refresh token to be uniquely linked to the client; however, Vaultwarden does not define a UNIQUE constraint on devices.refresh_token, which means it is possible, albeit unlikely, for multiple clients to end up with the same refresh token.

If adherence to RFC 6749 is not a goal, I still believe a UNIQUE constraint should be defined on devices.refresh_token, since it ensures the token is truly unique. Furthermore, Vaultwarden already (incorrectly) assumes/hopes the refresh token is unique; and with an actual UNIQUE constraint defined—which is backed by a UNIQUE index in every real-world RDBMS I am aware of—performance will improve as well (and, most importantly, lookups will be correct).
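As a minimal illustration (a SQLite sketch with a made-up two-column schema, not Vaultwarden's actual table definition), a UNIQUE constraint makes the database reject a duplicate refresh token loudly instead of silently accepting it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE devices (
           uuid          TEXT PRIMARY KEY,
           refresh_token TEXT NOT NULL UNIQUE  -- backed by a UNIQUE index
       )"""
)
conn.execute("INSERT INTO devices VALUES ('dev-1', 'token-abc')")

# A second device with the same refresh token now fails at the database level.
try:
    conn.execute("INSERT INTO devices VALUES ('dev-2', 'token-abc')")
except sqlite3.IntegrityError as e:
    print(f"rejected: {e}")  # rejected: UNIQUE constraint failed: devices.refresh_token
```

The same guarantee holds in MySQL and PostgreSQL, where the constraint is likewise backed by a unique index.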

With this change, it would also be nice if refresh-token compromise were detected, forcing all clients associated with a user to re-authenticate via password and whatever two-factor authentication is (hopefully) configured.
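One way to sketch that detection (hypothetical logic, not Vaultwarden's code; class and method names are mine): rotate the refresh token on every use, and treat presentation of any retired token as evidence of compromise that revokes the user's sessions:

```python
import secrets

class RefreshTokens:
    """Rotating refresh tokens with reuse detection (illustrative sketch)."""

    def __init__(self):
        self.current: dict[str, str] = {}   # user -> live token
        self.retired: dict[str, str] = {}   # retired token -> user

    def issue(self, user: str) -> str:
        # Retire any previously live token before issuing a new one.
        if user in self.current:
            self.retired[self.current[user]] = user
        token = secrets.token_urlsafe(32)
        self.current[user] = token
        return token

    def refresh(self, token: str) -> str:
        # Reuse of a retired token signals compromise: revoke the live
        # session and force full re-authentication (password + 2FA).
        if token in self.retired:
            user = self.retired[token]
            self.current.pop(user, None)
            raise PermissionError(f"token reuse detected for {user}; sessions revoked")
        for user, live in self.current.items():
            if secrets.compare_digest(live, token):
                return self.issue(user)
        raise LookupError("unknown refresh token")
```

For example, after `t2 = store.refresh(t1)`, a later `store.refresh(t1)` raises `PermissionError` and the user's live token is dropped.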

@BlackDex
Collaborator

Thanks for the report.

The enforcement of TLS/HTTPS will not be added, I think, for multiple reasons.

  1. People could use a reverse proxy which does HTTPS termination, and Vaultwarden might not know whether it is in use or not.
  2. People who only use it locally (localhost/127.0.0.1, or whatever resolves to the same) do not need TLS for a working environment.

Regarding the uniqueness, that is probably something we should adjust, together with some lifetime/TTL checks.
There are some PRs open which relate to this, but those need to be tested first to see what the impact would be.

@zacknewman
Author

I assumed TLS enforcement was not desired; however, I'll remark nonetheless. First, Rocket is TLS capable, so a reverse proxy could still be used. Second, binding Vaultwarden to the loopback interface is an exceptional case, since we are then talking about an environment that has only one client device; additionally, TLS—while useless in that situation—could still technically be enforced. Last, one could parse the IP in the config file and exempt TLS enforcement iff Vaultwarden is bound to ::1 or 127.0.0.0/8—yes, I realize that doesn't exempt even more exceptional cases where the loopback interface is associated with an address other than a loopback address.
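The loopback exemption sketched above could look roughly like this (an illustrative Python helper; the `tls_exempt` name and the config-parsing context are my own, not Vaultwarden's):

```python
import ipaddress

def tls_exempt(bind_addr: str) -> bool:
    """Return True iff the configured bind address is a loopback address
    (::1 or 127.0.0.0/8), in which case TLS enforcement could arguably
    be skipped."""
    try:
        return ipaddress.ip_address(bind_addr).is_loopback
    except ValueError:
        # Not a literal IP (e.g., a hostname); do not exempt.
        return False

print(tls_exempt("127.0.0.1"))    # True
print(tls_exempt("::1"))          # True
print(tls_exempt("192.168.1.5"))  # False
```

As noted, this deliberately ignores the exotic case where a loopback interface carries a non-loopback address.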

To me, if you are taking on the burden and responsibility of hosting a password manager and setting up TLS is undesirable, you probably shouldn't be hosting something so important. I realize this project's stance is much more relaxed, though.

@BlackDex
Collaborator

In general, most projects—whether they handle something security-sensitive or not—do not enforce TLS on the container side. They leave that to the person who deploys it.

We do notify in the admin/diagnostics page that HTTPS is not enabled, though.
And we also say in several places that some things will not work without TLS.

The only way I would see this happening is if we generated a CA and self-signed certificate ourselves and used that as a default.
But that seems nasty too.

@zacknewman
Author

I don't have enough data to assess assertions like "most projects, even if it is something secure or not, do not enforce TLS on the container side", so I won't comment on that, other than to note that one could interpret Vaultwarden as the crate and not a "container" (e.g., a Docker image).

This is just a philosophical difference, so I won't spend much more time arguing my stance. There are things that will always be forced by a project (e.g., Vaultwarden doesn't allow the use of UNIX sockets). I'm of the opinion that forcing TLS 1.2 or TLS 1.3 is OK since it simplifies development (e.g., tests only need to occur on TLS-based deployments and not on both TLS and non-TLS deployments), improves security for end users (e.g., some may not even realize that not using TLS means their credentials can trivially be read by anyone), and allows for better RFC adherence. My opinion is motivated in part by my stance that X.509 v3 certificates are the responsibility of the admin, because I agree deploying self-signed certificates does not make sense.

Regardless, it will be nice to see enforcement of the uniqueness of refresh tokens as well as detecting the use of an expired token and interpreting that as a compromised token.

@BlackDex
Collaborator

OK, let's say we enforce it. But I want my TLS to be terminated by my reverse proxy. How would you then determine that I have TLS enabled there? Or would you force me to configure TLS on Vaultwarden as well?

Adding a variable to allow disabling TLS enforcement would not make development easier, as you suggested.

What would you suggest for this scenario?

@zacknewman
Author

zacknewman commented Mar 19, 2024

I would indeed force TLS; in fact, that is what I am doing in my WIP Bitwarden-compatible server. How one communicates with the HTTPS server/Rocket/Vaultwarden would be considered out of scope.
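For reference, Rocket 0.5 can terminate TLS itself when its tls feature is enabled; per Rocket's configuration docs, a minimal Rocket.toml fragment looks roughly like the following (paths are placeholders):

```toml
[default.tls]
certs = "/etc/ssl/vaultwarden/fullchain.pem"  # certificate chain (PEM)
key = "/etc/ssl/vaultwarden/privkey.pem"      # corresponding private key (PEM)
```

Vaultwarden exposes the same capability through its own configuration, so "forcing TLS" would not require new plumbing so much as refusing to start without it.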

Using a reverse proxy does not ensure secure transmission—even if the reverse proxy uses TLS 1.2 (technically, the proper subset of secure ciphersuites of TLS 1.2) or TLS 1.3—since you must also ensure all communication to the HTTP server/Vaultwarden is blocked from any client except the reverse proxy. The only way you can do that is with firewall rules, and hope that no one is spoofing an IP to appear as the reverse proxy. Firewalls are not a replacement for actual cryptography, though; so unless Vaultwarden adds some form of client-side certificate that can be enforced to ensure a specific client (e.g., a reverse proxy) is connecting, not enforcing TLS is strictly weaker. This justification is not much different from why RFC 6749 requires TLS: it too allows reverse proxies to communicate with the authorization server, but it still requires TLS in all situations (i.e., "reverse proxy servers" are not exempt).

This does mean the contrived scenario of running Vaultwarden on the loopback interface would still require TLS even though from a security perspective it adds nothing.

I don't expect to change your mind though. I understand why a project would take a looser stance. I am merely replying to your questions.

@eoprede

eoprede commented Mar 30, 2024

@zacknewman Correct me if I am wrong, but from my understanding RFC 6749 does not require that TLS termination happen on the entity serving authorization tokens. I believe a Vaultwarden installation does not violate RFC 6749 as long as it is fronted by a reverse proxy or other system that does TLS termination.
With that said, I personally run Vaultwarden in k8s. It suits the environment perfectly. And the last thing I need is 100 different places where I have to manage my certs. I already have an ingress controller that manages them all; that's enough. If somebody manages to get into my systems/network and starts sniffing packets between k8s hosts, I have a whole lot of other issues. So from my perspective, Vaultwarden should focus on what it does best (being a backend providing the Bitwarden API) and let another system manage TLS.
P.S. Don't get me wrong, I really appreciate somebody taking a look at the project with a security-minded approach. Thank you for that! I'm just disagreeing with this one point.

@zacknewman
Author

zacknewman commented Mar 30, 2024

@zacknewman Correct me if I am wrong, but from my understanding RFC 6749 does not require that TLS termination happens on the entity serving authorization tokens. I believe Vaultwarden installation does not violate RFC 6749 as long as it is fronted by a reverse proxy or other system that does TLS termination.

Please cite, via links or copy-paste, where your "understanding" is coming from. I cited the relevant sections that make it rather cut-and-dried that TLS MUST be used (and, more generally, that confidentiality MUST exist between the client and resource server and between the resource server and the authorization server):

10.3 Access Tokens

Access token credentials (as well as any confidential access token
attributes) MUST be kept confidential in transit and storage, and
only shared among the authorization server, the resource servers the
access token is valid for, and the client to whom the access token is
issued. Access token credentials MUST only be transmitted using TLS
as described in Section 1.6 with server authentication as defined by
RFC2818 (emphasis added).

The client is the Bitwarden application the user is using to access their vault. "Transit" means the entire path from authorization server to client. There are no special exceptions stated there or anywhere else in that document literally or inferred about "blessed" intermediaries (e.g., "reverse proxy servers") that don't need TLS.

More generally, the world is (slowly) transitioning to encrypt-by-default, as it should have long ago. This means common setups where "reverse proxy servers" talk to their backends in plaintext will become increasingly less common, at least in non-hobby setups where security and privacy matter—not much differently than how application servers use TLS when talking to database backends even on a local network. In fact, even the oft-cited (but incorrect) recommendation that SMTP submission should use STARTTLS (i.e., "TCP port 587") was corrected over 5 years ago in favor of SMTPS (i.e., "TCP port 465").

Obviously there are situations for certain entities where this won't matter as has already been discussed in this issue. An important point to emphasize is that requiring TLS does not technically preclude setups from happening even if they slightly complicate them. That is where this conversation becomes philosophical and opinionated unlike the RFC violations.

If the target audience includes hobby projects, then the lower the barrier to entry the better, even if that comes at the cost of security and privacy (and RFC violations). For more serious projects, the bar is higher. Where one draws the line is up to them; and no matter where it is drawn, it will necessarily exclude some people.

I already stated the project would likely not change its stance even with the RFC violations, so you don't have anything to worry about, especially based on @BlackDex's responses. As someone who manages many separate domains and services, many of which are protected by a WireGuard server and not just a "reverse proxy server", I think that if handling X.509 v3 certificates is "too much", then you shouldn't be hosting something so important. It is quite easy to manage this, and I don't even use Kubernetes or any other orchestration system. But this is a controversial and opinionated stance that will of course have its detractors.

Ideally this discussion stays technical; and until someone shows me where in the RFC I am wrong, I won't retract my claim that there are violations. To stay on track with the technical: what do you mean by "Bitwarden API"? There are many ways one can define that. Any definition that implies, or is based on, the necessary-but-not-sufficient condition that if any portion of the API requires TLS then the implementation should require TLS implies Vaultwarden MUST require TLS, since portions of the API that depend on WebAuthn (e.g., WebAuthn 2FA) indirectly require TLS.

Similarly, the Bitwarden API requires HTTP + REST + JSON—in practice REST implies the other two, but it doesn't actually require them—which then raises the question: what is HTTP? That can be HTTP/0.9, HTTP/1.0, HTTP/1.1, HTTP/2, or HTTP/3. The latter two require TLS practically and formally, respectively. The point being, there are both technical and philosophical reasons a Bitwarden-conforming server would require TLS, even though that will cause some entities not to host since it's "too complicated".

@eoprede

eoprede commented Mar 30, 2024

@zacknewman
The section you quoted says access token credentials MUST only be transmitted using TLS and also that they MUST be kept confidential in transit and storage. I think it demonstrates the gap between the RFC and real-life implementations. Let me illustrate with examples:

  1. vaultwarden terminates TLS in rust and uses that to talk to the client. This clearly fits the definition.
  2. vaultwarden terminates TLS in rust, but there's a TCP load balancer between it and the client. Since a TCP load balancer does not terminate the TLS connection, this also clearly fits the definition.
  3. vaultwarden terminates TLS in rust, but there's an HTTP load balancer between it and the client. Now this gets complicated. The load balancer terminates the TLS session from the client and then creates another TLS session to vaultwarden. It still definitely fits the definition of "transmitting using TLS", but what about confidentiality? Technically speaking, this session can be listened to (in case the load balancer is compromised, for example). So is it confidential? I don't know how it is from the RFC standpoint, but I can assure you that a huge part of OAuth implementations out there use this method, as all the major players make extensive use of HTTP load balancers (or reverse proxies) in their infrastructure. I'm going to argue that this implementation still fits the RFC; otherwise that means the world is just ignoring the RFC, rendering it useless.
  4. vaultwarden runs on a host, where it does NOT do any TLS. However, there's a reverse proxy on the same host that does handle TLS. From the perspective of the traffic path, it's identical to example 3 - traffic is encrypted during transit (meaning when the data hits the wire). There's a small hop between the local rust process and the local reverse proxy process where the traffic is not encrypted, and the client does not have a direct TLS session straight to the rust process. So does it conform to the definition? I'd argue yes - we don't call passing data from one process to another a "transmission", and it's not any different from example 3 in terms of attack vector.
  5. vaultwarden runs as a k8s pod without TLS, using ingress controller (aka reverse proxy) to terminate TLS. This k8s cluster consists of just a single node. This is pretty much identical to option 4 from the traffic perspective, you just use k8s to configure vaultwarden and reverse proxy, as opposed to doing it manually.
  6. vaultwarden runs as a k8s pod without TLS, using ingress controller (aka reverse proxy) to terminate TLS. This k8s cluster consists of multiple nodes, but it is using Calico WireGuard to encrypt internal communication. But WireGuard is not TLS, so I guess technically speaking now we are in breach of the RFC, as there may be a case where a token hits a wire without being encrypted by TLS. But in practice, the security of communication for this option is pretty much identical to option 5.
  7. vaultwarden runs as a k8s pod without TLS, using ingress controller (aka reverse proxy) to terminate TLS. This k8s cluster consists of multiple nodes and is using a standard Calico network plugin. Again, it does not conform to the OAuth spec, as tokens are not transmitted using TLS. But in reality, given a secure data center environment, stealing traffic off a wire in a locked cage is not an easy task. I'm not saying that it can't be done, but it does require physical access to the system, in which case you may as well install a keylogger on the console cart, steal credentials, and compromise the system via a local admin account. So again, for all practical purposes, this setup is not any less secure than the others. And other security standards recognize that. For example, PCI explicitly calls out that data has to be encrypted over public networks, leaving internal networks open to implementation.

Hopefully these examples illustrate that there are multiple ways of deploying vaultwarden with strict adherence to the RFC spec, many of which do not require vaultwarden to worry about a TLS implementation. As vaultwarden is positioned as a lightweight product, I don't see any benefit to adding unnecessary features. And there are even more ways that, while they technically may not adhere to the spec, in reality are just as secure.

Oh, regarding "I think that if handling X.509 v3 certificates is 'too much', then you shouldn't be hosting something so important. It is quite easy to manage this": managing certs for 10 systems is easy. It's easy to do for 100 systems as well. For 100,000 systems—trust me, you don't want to manage certs for each individual system. You would either implement some kind of service mesh (like VPC Lattice), or you would do TLS termination at the edge and run stuff unencrypted internally. The places I have worked at (and you would know these companies) usually use a combination of these methods depending on the workload.

@zacknewman
Author

zacknewman commented Mar 30, 2024

  1. vaultwarden terminates TLS in rust and uses that to talk to the client. This clearly fits the definition.

Agreed.

  2. vaultwarden terminates TLS in rust, but there's a TCP load balancer between it and the client. Since a TCP load balancer does not terminate the TLS connection, this also clearly fits the definition.

Perhaps pedantic, but this is problematic given the requirement that TCP be used. While it may be the case now, and perhaps forever, that Vaultwarden uses HTTP/1.1, Bitwarden doesn't mandate that, which means you preclude a perfectly reasonable HTTP/3-based implementation. Of course, generalizing "TCP" to "TCP/UDP" solves the problem, so I agree with that amendment.

  3. vaultwarden terminates TLS in rust, but there's an HTTP load balancer between it and the client. Now this gets complicated. The load balancer terminates the TLS session from the client and then creates another TLS session to vaultwarden. It still definitely fits the definition of "transmitting using TLS", but what about confidentiality? Technically speaking, this session can be listened to (in case the load balancer is compromised, for example). So is it confidential? I don't know how it is from the RFC standpoint, but I can assure you that a huge part of OAuth implementations out there use this method, as all the major players make extensive use of HTTP load balancers (or reverse proxies) in their infrastructure. I'm going to argue that this implementation still fits the RFC; otherwise that means the world is just ignoring the RFC, rendering it useless.

Indeed. This is why mathematicians should write RFCs, so such ambiguity is less likely—the documentation would be substantially longer and more complex though, lol. I would posit that Section 3.2 allows this with the paragraph:

The means through which the client obtains the location of the token
endpoint are beyond the scope of this specification, but the location
is typically provided in the service documentation.

I do agree that this is not obvious; thus "usage" must take over, which is to say this conforms.

  4. vaultwarden runs on a host, where it does NOT do any TLS. However, there's a reverse proxy on the same host that does handle TLS. From the perspective of the traffic path, it's identical to example 3 - traffic is encrypted during transit (meaning when the data hits the wire). There's a small hop between the local rust process and the local reverse proxy process where the traffic is not encrypted, and the client does not have a direct TLS session straight to the rust process. So does it conform to the definition? I'd argue yes - we don't call passing data from one process to another a "transmission", and it's not any different from example 3 in terms of attack vector.

Disagree. Anyone that can listen to the traffic between the reverse proxy and Vaultwarden has access to the plaintext data. This wouldn't be true if TLS were used, which means there are more attack vectors. Hopefully the setup is such that the only entities that have access to such traffic are the same entities that have access to Vaultwarden, but that's an added assumption; furthermore, how is that assumption enforced? Firewall rules? Those do not have the same security guarantees that cryptography like TLS has, so I don't believe this conforms.

  5. vaultwarden runs as a k8s pod without TLS, using ingress controller (aka reverse proxy) to terminate TLS. This k8s cluster consists of just a single node. This is pretty much identical to option 4 from the traffic perspective, you just use k8s to configure vaultwarden and reverse proxy, as opposed to doing it manually.

For the same reasons I think point 4 does not conform, I believe this does not.

  6. vaultwarden runs as a k8s pod without TLS, using ingress controller (aka reverse proxy) to terminate TLS. This k8s cluster consists of multiple nodes, but it is using Calico WireGuard to encrypt internal communication. But WireGuard is not TLS, so I guess technically speaking now we are in breach of the RFC, as there may be a case where a token hits a wire without being encrypted by TLS. But in practice, the security of communication for this option is pretty much identical to option 5.

I think this conforms in spirit, so I have a similar stance as I do with point 3. The RFC seemingly uses TLS as a synonym for "confidentiality", but this is clearly not true from a cryptographic perspective.

  7. vaultwarden runs as a k8s pod without TLS, using ingress controller (aka reverse proxy) to terminate TLS. This k8s cluster consists of multiple nodes and is using a standard Calico network plugin. Again, it does not conform to the OAuth spec, as tokens are not transmitted using TLS. But in reality, given a secure data center environment, stealing traffic off a wire in a locked cage is not an easy task. I'm not saying that it can't be done, but it does require physical access to the system, in which case you may as well install a keylogger on the console cart, steal credentials, and compromise the system via a local admin account. So again, for all practical purposes, this setup is not any less secure than the others. And other security standards recognize that. For example, PCI explicitly calls out that data has to be encrypted over public networks, leaving internal networks open to implementation.

I'm not even going to pretend to know enough about what you're talking about. If it's actually the case that you need physical access, then I agree. It is sort of similar to points 3 and 6, where you meet the "confidentiality" aspect without TLS—here via physical access.

Hopefully these examples will illustrate that there are multiple ways of deploying vaultwarden with strict adherence to RFC specs and many of them do not require vaultwarden to worry about TLS implementation. As vaulwarden is positioned as a lightweight product, I don't see any benefits to adding unnecessary features. And there are even more ways that while technically may not adhere to the spec, in reality they are just as secure.

As my replies indicate, I only think a proper subset of those examples conforms. I don't think requiring TLS makes Vaultwarden that much less lightweight; and as stated, I don't think some of your examples are "as secure". Like most things, an "expert" will almost always be able to circumvent formal requirements in a way that is as secure or even more secure; however, such experts are not common enough to assume them. So I still conclude a Bitwarden-conforming server has good reason to require TLS. If Daniel Bernstein is comfortable not conforming to RFC 6749, then his reasons are likely justified, as he's an actual mathematician and cryptographer and has likely forgotten more about crypto than the people involved in the RFC will ever know. Of course it won't hurt to have his reasons peer-reviewed, but I am not at all qualified to be one of said peers.

Oh regarding I think that if handling X.509 v3 certificates is "too much", then you shouldn't be hosting something so important. It is quite easy to manage this. Managing certs for 10 systems is easy. It's easy to do that for 100 systems as well. For 100,000 systems - trust me, you don't want to manage certs for each individual system. You would either implement some kind of service mesh (like VPC Lattice), or you would do TLS termination at the end and run stuff unencrypted internally. The places I have worked at (and you would know these companies) usually use combination of these methods depending on the workload.

I have no experience with that, so I cannot really comment. I'm curious what a company like Google—which has quite an immaculate record regarding security (not so much privacy)—does when handling backend communication. Sadly, there are many big companies that I don't respect very much from a security perspective, so those examples are less meaningful to me.

More generally, I never stated that it's impossible for specific configurations to be as secure or more secure without conforming to RFC 6749; however, the existence of such possibilities doesn't change my stance. The exception proves the rule, as they say. As an analogy, one should not "roll their own crypto"; that rule should be followed by the great majority. An application or library that does not allow users to control certain TLS ciphersuites is perfectly OK even though there exist experts that know more than enough (e.g., Thomas Pornin) to have such control. Is the target audience dominated by Pornins? I don't think so.

@eoprede

eoprede commented Mar 30, 2024

@zacknewman
I think we may have a misunderstanding. Can you explain how "anyone can listen to the traffic between the reverse proxy server and Vaultwarden" when we are talking about communication within the same host (meaning the vaultwarden and reverse proxy are running on the same machine/within the same VM)? The only way this can be done is if the attacker gained access to the machine/VM. But if they have access to it, they can intercept data from the processes themselves. TLS won't solve anything, as the attacker can simply look at the data TLS is about to encrypt. The moment an attacker has access to the host/VM, the data is fully compromised with any of the options. And if the attacker has no access to the host/VM, the data cannot be intercepted in any way in its unencrypted form, as it never physically leaves that host/VM.

Basically, option 3 is more secure than option 6 because traffic never leaves the physical host/VM in option 3, while in option 6 it does leave the host/VM while being encrypted by something outside of TLS. And yes, to intercept data in option 7, one would either have to gain remote access to a load balancer/reverse proxy (the same attack vector as many other options), or gain access to the host running vaultwarden (the same attack vector as all the other options), or get access to the data center, the cage within it, and then install a physical device on the wire.

Regarding "I'm curious what a company like Google...": I can't talk specifically about Google, but I've seen multiple companies (that employ ex-Google engineers) run a service mesh with a sidecar container. Basically, you deploy your application in a k8s pod; your application runs as a container that listens for HTTP requests, but all communication has to go through the sidecar container, which handles encryption. Because containers in the same pod are scheduled to run on the same node, this is effectively identical to options 3-4 that I outlined above. The point is: you don't want your developers wasting time re-inventing the wheel and creating custom encryption in the thousands (or even millions) of microservice applications you have; instead, you have your security team come up with a robust solution that can be re-used with every single application. And k8s makes it very easy to run these together. If it's good enough for the big boys (and I am willing to bet Google does it too), it's good enough for me when it comes to running vaultwarden.

@zacknewman
Author

zacknewman commented Mar 30, 2024

@eoprede, I think you've done enough to show there exist real-world setups that are "secure"; but I still fail to see how requiring TLS on Vaultwarden's side precludes such setups. You need to establish that there exist secure setups without Vaultwarden requiring TLS (check), that such a setup is common enough to even consider (I'll just take this by assumption), and that enforcing TLS prevents such setups (yet to be seen).

For example, you claim that doing so will no longer make Vaultwarden "lightweight"; but I don't like blind assertions, especially ones on performance without hard data. Where is the actual data that shows TLS will have a measurable negative impact? Until then, I am highly skeptical.

You also didn't show how using Kubernetes makes X.509 certificates "unreasonably" burdensome. So to me, the examples given at worst don't benefit, but they also are not "unreasonably" affected adversely. This means I think it's a net gain: entities that don't benefit security-wise are not appreciably impacted negatively, and entities that do benefit benefit well.

I find it hard to believe that you are equipped to support so many services yet one of them requiring TLS prevents the setup. Until one of those two things is shown/explained to me, I don't see how "secure by requirement" would cause an issue. And just to reiterate for external observers: Vaultwarden has made it quite clear TLS won't be enforced, so this is largely academic.

For example, can you prove to me that reusing the same X.509 certificate the reverse proxy uses to enforce TLS is "too complex"? I struggle to see how you can easily enforce X.509 certificates and renewals on the reverse proxy but then somehow can't just re-use the certificate for Vaultwarden. That's only one example. The onus of proving that there doesn't exist any setup that makes X.509 certificates on Vaultwarden's side fairly simple, while simultaneously having such a setup on the reverse proxy, is a lot harder to meet. But let's just start with this one simple example.

The implications of such a proof seem too catastrophic as well. Does this mean you take a philosophical (perhaps even technical, if Kubernetes is not capable) stance against any service that integrates public-key cryptography? Any HTTP/3 service will never be supported because TLS is part of the protocol and no longer at a lower "OSI layer"? Any service that integrates the Noise protocol won't be supported? I find that stance archaic. As time goes on, we are seeing services integrate cryptography directly, even at the protocol level (e.g., HTTP/3); so perhaps in your lifetime you can only support services without public-key cryptography, but I find such a setup less than ideal, to put it mildly. As mentioned multiple times before, this is a philosophical disagreement I don't think you and I will resolve.

@eoprede

eoprede commented Mar 31, 2024

@zacknewman
but I still fail to see how requiring TLS on Vaultwarden's side precludes such setups
It doesn't, but it unnecessarily complicates things IMO.
you claim that doing so will no longer make Vaultwarden "lightweight"; but I don't like blind assertions especially ones on performance without hard data. Where is the actual data that shows TLS will have a measurable negative impact?
This is not about performance at all. It's about the effort needed to maintain it. The developers of vaultwarden would now also have to monitor all the potential security risks and bugs related to TLS implementations in rust, and address them as needed. That's a lot of extra time that could be spent on much better things. Why not let other teams (that develop well-known reverse proxies like haproxy, nginx, etc.) focus on the security aspect? But if you feel like TLS is necessary—why should vaultwarden "outsource" its database? Wouldn't creating a custom database solution enhance security?
You also didn't show how using Kubernetes makes X.509 certificates "unreasonably" burdensome.
I run a personal Vaultwarden on my home k8s cluster (since I maintain them for a living, running a small one at home takes minimal effort). I have a script to renew certs with Let's Encrypt for all the local services I run that are internet-accessible. Once it has been written, it takes exactly zero effort to maintain, and adding a new cert is automatic: it happens as part of k8s ingress configuration. Compare zero effort with some effort for no security benefit: I think there are zero reasons (aka it's unreasonable) to put in that extra effort. And don't forget another thing: maintaining it all. Imagine Let's Encrypt stops functioning. I just fix my solution for some other certificate provider and it works for all the services I host. Otherwise, I'd have to go and figure out a solution for every single service I have; that's again an unnecessary amount of effort.
For example can you prove to me that reusing the same X.509 certificate the reverse proxy server uses to enforce TLS is "too complex"?
So, the first thing I'd have to figure out is how the reverse proxy accesses the container in a pod. I think that's done by internal IP. That alone makes it impossible to reuse the same cert, because the cert is created for vaultwarden.mydomain.com while the internal system will be accessed at, let's say, 10.25.67.12. Which means it's going to be an invalid cert, so I may as well use a self-signed cert. But that's insecure, right? Which means I have to run my own certificate authority, figure out how to integrate that with k8s, how to issue and revoke certs once the pod restarts (as it gets different IPs every time), and how to integrate that code with the Vaultwarden container (because I need to know which IP the container has in order to generate the cert, but the IP is allocated only once the container is created, so I'd have to create the container with a dummy cert, then generate a new cert, then replace it, and then restart the service). I hope you are seeing how this gets very complicated very quickly.
Oh, and that's just my setup in my k8s cluster. Somebody else may choose to run their Vaultwarden in AWS and use AWS's load balancer with an AWS-managed certificate in front. You cannot get that cert; there's simply no way to download it. It lives only on the load balancer and nowhere else, which makes it quite difficult to reuse.
Oh, and... I can keep coming up with examples, but hopefully you get the point: it's actually not easy at all, and each solution has to be custom-made for each environment to be robust.
As time goes on, we are seeing services integrate cryptography directly even at the protocol level (e.g., HTTP/3)
Sure, but let's not forget that many backend services still run HTTP/1, while HTTP/2 is provided by reverse proxies in front of them. Who says HTTP/3 won't be the same?
I agree that true end-to-end encryption is a great goal. But operating it isn't easy. I am sure the big guys have enough engineering resources (maybe, though maybe not with all the layoffs) to pull it off. But nobody else will do it properly until it's made very simple. And I hope you see that it's anything but simple.

@zacknewman
Author

zacknewman commented Mar 31, 2024

It doesn't, but it unnecessarily complicates things IMO.

As stated, I don't find that it does. This is a subjective argument that likely won't get resolved. What you find to be unnecessarily complicated another won't just as another will find something unnecessarily complicated that you don't. Handling X.509 v3 certificates even for "backend services" is not complicated in my opinion.

This is not about performance at all. It's about the effort needed to maintain it. The developers of Vaultwarden would also have to monitor all the potential security risks and bugs in the Rust TLS implementation, and patch them as needed. That's a lot of extra time that could be spent on much better things. Why not let other teams (that develop well-known reverse proxies like HAProxy, nginx, etc.) focus on the security aspect? But if you feel like TLS is necessary, why should Vaultwarden "outsource" its database? Wouldn't creating a custom database solution enhance security?

You didn't make it clear what you meant by "lightweight", so I assumed you were referring to the often-incorrect assumption that TLS has a measurable negative performance impact. Vaultwarden supports TLS directly regardless of whether it's forced or not. This means the project absolutely should be testing TLS configurations regardless; otherwise the project should require the opposite: TLS must not be used directly, since it's not a tested or approved setup. Until then, supporting only TLS actually makes for less work, since they're already testing such a config. As far as the security of the dependencies is concerned, that is outside the scope; otherwise Rocket shouldn't be used at all, right? Use "established" HTTP servers like nginx for the HTTP logic. I'm not convinced nginx is more secure than Rust-based TLS solutions anyway, which internally rely on OpenSSL, BoringSSL, LibreSSL, etc. Regardless, this is an argument that states not only that Vaultwarden shouldn't require TLS but in fact should outright prevent it, which is a crazy stance as far as I am concerned.

I run a personal Vaultwarden on my home k8s cluster (since I maintain them for a living, running a small one at home takes minimal effort). I have a script to renew certs with Let's Encrypt for all the local services I run that are internet-accessible. Once it has been written, it takes exactly zero effort to maintain, and adding a new cert is automatic: it happens as part of k8s ingress configuration. Compare zero effort with some effort for no security benefit: I think there are zero reasons (aka it's unreasonable) to put in that extra effort. And don't forget another thing: maintaining it all. Imagine Let's Encrypt stops functioning. I just fix my solution for some other certificate provider and it works for all the services I host. Otherwise, I'd have to go and figure out a solution for every single service I have; that's again an unnecessary amount of effort.

You keep getting hung up on your situation. I have a lot of security layers in my setup that also wouldn't benefit much, but the idea is to require such setups for the "common good". There are a seemingly infinite number of situations where I don't benefit from some law, RFC, or other requirement because I already address such issues myself, often in a more thorough way, yet I understand why they exist and don't dismiss them as unnecessary just because they don't affect me. There are a myriad of ways you can automate X.509 certificates that require 10 minutes or so of one-time configuration. That is very much reasonable.

So, the first thing I'd have to figure out is how the reverse proxy accesses the container in a pod. I think that's done by internal IP. That alone makes it impossible to reuse the same cert, because the cert is created for vaultwarden.mydomain.com while the internal system will be accessed at, let's say, 10.25.67.12. Which means it's going to be an invalid cert, so I may as well use a self-signed cert. But that's insecure, right? Which means I have to run my own certificate authority, figure out how to integrate that with k8s, how to issue and revoke certs once the pod restarts (as it gets different IPs every time), and how to integrate that code with the Vaultwarden container (because I need to know which IP the container has in order to generate the cert, but the IP is allocated only once the container is created, so I'd have to create the container with a dummy cert, then generate a new cert, then replace it, and then restart the service).

One way you can solve this, with a small initial investment of time, is setting up a dedicated X.509 certificate server that is responsible for auto-renewing all certificates. You can configure SFTP on the other machines to fetch said X.509 certificate, hardened such that those machines can only fetch the X.509 certificate from their own directory. This can all be automated so that it takes essentially no time or effort. Even easier, though, is to use a self-signed certificate. Why does this hurt security? You seem to forget that we are dealing with one of the seven situations you mentioned, in addition to the situations mentioned by @BlackDex and me (e.g., running Vaultwarden on the loopback interface), which means "security" is already established. The reason you are using the X.509 certificate is to get Vaultwarden to work, with zero impact on security. Again, we are dealing with a rather selfish and myopic perspective: that just because you don't benefit and have to do something requiring very minimal effort, like using a self-signed certificate, the "good of all" must be rejected too. I vehemently disagree with this on technical, crypto, and moral grounds. If something benefits a lot of people with minimal impact on me, then I'm all for it. No matter what Vaultwarden or any other service requires, there will always be situations where you don't benefit from how it's set up, but you nonetheless configure it so that it works.
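To sketch the certificate-distribution idea above: the hostnames, paths, and schedule below are hypothetical, and this is only one way such a setup could be wired together with stock OpenSSH.

```
# /etc/ssh/sshd_config on a hypothetical certificate server "certs.internal":
# lock the fetch account into read-only SFTP inside its own directory.
Match User certfetch
    ChrootDirectory /srv/certs
    ForceCommand internal-sftp -R   # -R = read-only; no shell access

# crontab entry on each service host (paths and schedule are placeholders):
# fetch the renewed certificate nightly over key-authenticated SFTP.
15 3 * * * sftp -i /etc/ssl/fetch_key certfetch@certs.internal:vaultwarden/fullchain.pem /etc/ssl/vaultwarden/
```

With something like this in place, renewal happens in one location and every consumer pulls the result on a schedule, which is the "essentially no time or effort" automation described above.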

I hope you are seeing how this gets very complicated very quickly. Oh, and that's just my setup in my k8s cluster. Somebody else may choose to run their Vaultwarden in AWS and use AWS's load balancer with an AWS-managed certificate in front. You cannot get that cert; there's simply no way to download it. It lives only on the load balancer and nowhere else, which makes it quite difficult to reuse. Oh, and... I can keep coming up with examples, but hopefully you get the point: it's actually not easy at all, and each solution has to be custom-made for each environment to be robust.

Such points are addressed with a self-signed certificate so long as they meet one of the seven situations you have defined or the situations @BlackDex and I have defined.

Sure, but let's not forget that many backend services still run HTTP/1, while HTTP/2 is provided by reverse proxies in front of them. Who says HTTP/3 won't be the same? I agree that true end-to-end encryption is a great goal. But operating it isn't easy. I am sure the big guys have enough engineering resources (maybe, though maybe not with all the layoffs) to pull it off. But nobody else will do it properly until it's made very simple. And I hope you see that it's anything but simple.

As stated, I disagree with this stance. End-to-end encryption should not be avoided just to cater to people who find it "too difficult" to set up. That just means such services won't be managed by those people. I want applications to integrate cryptography directly rather than cross their fingers and hope it's hardened outside. We used to be a world where crypto and security were not a thing. Then we shifted to optional crypto and security. Then crypto and security by default. Finally we are slowly shifting to crypto and security by requirement. If a setup can't handle such services, then this is an XY problem: the issue is in the setup not the service with mandatory crypto.

@BlackDex
Collaborator

Let's agree to disagree on this topic.
In regards to HTTP/2 and HTTP/3: that is not something currently supported by Rocket (at least not the version we use). But that can be solved via a reverse proxy too 🎉.

Also, whether we enable TLS or not, WebAuthn will not work without it, so in that sense it's mandatory.

What @eoprede means in regards to managing certificates is this: it's fine to manage a few manually, but it becomes a pain in the ... if you need to do this for all the separate services in a dynamic environment like k8s, where you may run 30 or more different services, and all those services (like Vaultwarden) have different ways of configuring where the certificate is located and how you can enable or disable TLS versions, ciphers, MACs, etc. (if at all). You can configure a reverse proxy just once in a secure way, even allow or force new HTTP versions, and keep it secure and up to date for all services in one go.
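The "configure the reverse proxy just once" approach could look roughly like this nginx server block, which terminates TLS at the proxy and forwards plain HTTP to Vaultwarden on the loopback interface. The server name, backend port, and certificate paths are placeholders, not project defaults.

```
server {
    listen 443 ssl http2;
    server_name vaultwarden.example.com;

    # TLS versions/ciphers are managed here, once, for every proxied service.
    ssl_certificate     /etc/ssl/vaultwarden/fullchain.pem;
    ssl_certificate_key /etc/ssl/vaultwarden/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        # HTTP/1.1 plus Upgrade headers are needed for websocket notifications.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Whether the hop between nginx and Vaultwarden also needs TLS is exactly the disagreement in this thread; the fragment above is only a sketch of the proxy-terminated setup being described.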

Also Vaultwarden will still need HTTP/1.1 for websocket connections. So that will be the minimum version until something else is going to be used.

@zacknewman
Author

zacknewman commented Mar 31, 2024

Let's agree to disagree on this topic.

I've said that many times already.

In regards to HTTP/2 and HTTP/3: that is not something currently supported by Rocket (at least not the version we use). But that can be solved via a reverse proxy too 🎉.

There is no reason this should be outright prevented, though. If Rocket were to enable it, then why not support it? More generally, the HTTP/2 and HTTP/3 points were about showing that TLS is very much part of higher "OSI layers" now, so not having a way to support such services is a deficiency that should be addressed. It wasn't necessarily related to Vaultwarden specifically.

Also, whether we enable TLS or not, WebAuthn will not work without it, so in that sense it's mandatory.

Which I stated, and which means a Bitwarden-conforming server is well within its rights to require TLS directly, as otherwise the server does not conform since it implicitly relies on external configuration.

What @eoprede means in regards to managing certificates is this: it's fine to manage a few manually, but it becomes a pain in the ... if you need to do this for all the separate services in a dynamic environment like k8s, where you may run 30 or more different services, and all those services (like Vaultwarden) have different ways of configuring where the certificate is located and how you can enable or disable TLS versions, ciphers, MACs, etc. (if at all). You can configure a reverse proxy just once in a secure way, even allow or force new HTTP versions, and keep it secure and up to date for all services in one go.

I know what they were stating, and I disagree. If your setup cannot handle services with mandatory crypto, then the setup is the issue. I want services to require crypto even if such services make it too challenging for some entities. Vaultwarden disagrees with this, which I stated in my initial post would likely be true; but requiring it is reasonable, as that guarantees better security without relying on external configuration and hoping it's set up correctly.

Also Vaultwarden will still need HTTP/1.1 for websocket connections. So that will be the minimum version until something else is going to be used.

I don't see Vaultwarden ever not supporting earlier HTTP versions than HTTP/2, but WebSockets work with TLS too. So again, we're repeating ourselves.

If you want to have better assurances that deployments are secure, then you force TLS which is a reasonable stance to take. If you want to support "easier" deployments at the cost of secure deployment assurance, then you don't. Vaultwarden takes the latter stance.

I do think the project should be more explicit about the consequences of not using TLS (whether via a reverse proxy server or directly). As of now, the project only mentions that certain features won't work; it doesn't mention that very sensitive information is accessible in plaintext without it. This in turn should cause the project to strongly recommend only TLS-based deployments.

@BlackDex
Collaborator

@zacknewman , as always you are free to provide a good PR which could help do this. But as we already know, you are not going to do this.

We will pick up the database adjustments and maybe a bit more, so thanks for reporting this specific issue and bringing it to our and others' attention.

@zacknewman
Author

But as we already know, you are not going to do this.

I don't appreciate that line, and so I feel the need to defend myself. As a security product, any security issue would ideally be appreciated even if it doesn't come with a PR that addresses it. Would a PR be better? Of course. As stated in the past, if this is not true and you'd rather me no longer create issues; then I will respect such wishes. Second, I have in the past explicitly offered to provide PRs to address "vulnerabilities" (e.g., circumventing admin-configured limits for attachments); and you rejected such an offer without any explanation even when such a PR would have come one month sooner than the PR that eventually addressed it (which I still did review). Should a one-time unjustified rejection of help prevent me from offering help in the future? Probably not, but it certainly doesn't help.

@BlackDex
Collaborator

BlackDex commented Apr 2, 2024

Don't get me wrong, I do appreciate your findings and reports of improvements we can make. But I could say the same about your remark regarding not forcing TLS, and you assumed it already. That falls into the same category for me.

It's not about shame or blame or anything. But you do have a strong feeling about this topic, and while I agree it should be as secure as possible, we differ on the solution. I also asked for a simple solution without generating a custom self-signed cert (which is what Bitwarden does, btw), and you agreed that that is also not a good way to go. Then, for me, it ends there.

Having a should/should-not discussion about this is not going to help. Providing a solution, either in comment form or as a PR, is fine by me. But continuing to post about how it should be without providing a viable solution workable from all standpoints is not fruitful, I think.

Again, I do appreciate your findings, so i still hope you will keep providing those.

@zacknewman
Author

zacknewman commented Apr 2, 2024

It's not about shame or blame or anything. But you do have a strong feeling about this topic, and while I agree it should be as secure as possible, we differ on the solution. I also asked for a simple solution without generating a custom self-signed cert (which is what Bitwarden does, btw), and you agreed that that is also not a good way to go. Then, for me, it ends there.

This also supports my stance of not offering PRs, and why I'd appreciate if you refrained from off-handed comments like "we already know you are not going to do this". The larger our philosophical differences the less likely a PR aligns with your position making said PR a waste of both of our time. I am more on the "secure" part of the continuum, and the project is more on the "support as many deployments as possible" part of the continuum. Neither one is more "correct" than the other, but it does mean that "just create an issue" seems to be the better fit since then the project can decide if/how to address it.

I apologize if this issue derailed. When someone attempts to discuss some of the points I raised even after the project has made it clear on their stance, then I feel the need to address them. I probably should have exercised better judgment and not replied though.

@BlackDex
Collaborator

BlackDex commented Apr 2, 2024

Also, regarding the PRs, and especially the attachment one: I do not recall you providing any PR at all, and I would never reject a PR or someone offering to create one. My memory could fail me, of course. If you are referring to the sentence "I can attempt a diff myself if preferred", then for me that was a bit ambiguous, and I nowhere disregarded it in any specific form. I just mentioned we would take a look at it. That shouldn't have prevented anyone who is willing and able from providing a PR that would fix it, in my opinion. I think that specific item is a case of miscommunication or misunderstanding.

@zacknewman
Author

I think that specific item is a case of miscommunication or misunderstanding.

That appears to be the case indeed.

@zacknewman
Author

zacknewman commented Apr 2, 2024

Back to the issue of enforcing TLS to guarantee RFC 6749 and WebAuthn conformance. The reason I was against deploying a self-signed certificate is that I don't think it makes TLS enforcement appreciably easier. Deployments that rely on TLS via Rocket are unaffected entirely since they will remain using the X.509 v3 certificate they are already using. Deployments that rely on TLS via an intermediary (e.g., reverse proxy server) benefit very little. Specifically such deployments wouldn't have to create a self-signed certificate themselves, but they would still have to configure their intermediary to trust the certificate. Is that worth it? If you think that is the only way TLS enforcement works, then fine.

Of course that would mean that insecure deployments are strictly forbidden; and as you stated:

Also, whether we enable TLS or not, WebAuthn will not work without it, so in that sense it's mandatory.

that seems to be acceptable. If TLS is not enforced, then I think the project should emphasize the consequences of that even if it should be "obvious" to most people. Specifically, all data that is sent is in plaintext, which includes the encrypted vault, OAuth tokens, master password hash, and potentially other secrets like TOTP keys when a user views the value (in addition to the initial time it was set up). This in turn means that deployments that enable TLS later may not be any more secure, since an entity could have already captured said data and can use it.

Of course I realize that a major aspect of password managers like Bitwarden is the vault is encrypted and decryption is done on the client, so an entity having another entity's encrypted vault isn't nearly as catastrophic as other services. However security is about layers; so since authentication exists, it's clear the project thinks it adds some benefit (which of course I agree); at which point, it should be done right.

@BlackDex
Collaborator

BlackDex commented Apr 2, 2024

Back to the issue of enforcing TLS to guarantee RFC 6749 and WebAuthn conformance. The reason I was against deploying a self-signed certificate is that I don't think it makes TLS enforcement appreciably easier. Deployments that rely on TLS via Rocket are unaffected entirely since they will remain using the X.509 v3 certificate they are already using. Deployments that rely on TLS via an intermediary (e.g., reverse proxy server) benefit very little. Specifically such deployments wouldn't have to create a self-signed certificate themselves, but they would still have to configure their intermediary to trust the certificate. Is that worth it? If you think that is the only way TLS enforcement works, then fine.

I don't think that. What other solutions (besides forcing certificates) do you see for this challenge?

Of course that would mean that insecure deployments are strictly forbidden; and as you stated:

Also, whether we enable TLS or not, WebAuthn will not work without it, so in that sense it's mandatory.

that seems to be acceptable. If TLS is not enforced, then I think the project should emphasize the consequences of that even if it should be "obvious" to most people. Specifically all data that is sent is in plaintext which means the encrypted vault, OAuth tokens, master password hash, and potentially other secrets like TOTP keys when a user looks at the value (in addition to the initial time it was set up). This in turn means that deployments that enable TLS later may not be any more secure since an entity could have already captured said data and use it.

Where should we emphasize this? We already provide a note in the admin/diagnostics page. We have a better warning in the web-vault than Bitwarden if someone tries to use the Crypto API without SSL, because that also does not work.

I'm searching for good solutions here since, as I stated before, I'm also for security, but it indeed should not become an annoyance to configure. Do you maybe know other software/packaged solutions which do something like this that we could check?

@zacknewman
Author

Where should we emphasize this? We already provide a note in the admin/diagnostics page. We have a better warning in the web-vault than Bitwarden if someone tries to use the Crypto API without SSL, because that also does not work.

A good start is README.md. Now it only states that "Web Crypto APIs" are not supported in insecure contexts, but I think it should use much stronger language that emphasizes TLS is required and the consequences of non-supported deployments that don't rely on TLS.

The wiki would ideally be updated to make the HTTPS section part of the deployment section in an effort to emphasize that TLS is required not merely optional. The HTTPS section should be updated to emphasize the adverse consequences of not using TLS.

I don't think that. What other solutions (besides forcing certificates) do you see for this challenge?

I'm guessing there is not a manner you will be comfortable with. Under my philosophical belief system, especially with the above updates, I think it's reasonable to do nothing but enforce TLS via Rocket, since the documentation will go over such a config and clearly state it as a prerequisite. Users of intermediaries create self-signed certs, and the rest create the X.509 cert as they are already doing. The benefit is that you are not relying on documentation alone to "enforce" TLS; it's literally enforced at the application level. I view this analogously to a developer that uses the type system of a language to enforce certain invariants instead of relying purely on API documentation. As has been gone over ad nauseam, though, this is too "extreme" and not backward compatible for this project.
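For context on what "enforce TLS via Rocket" means in practice: Vaultwarden exposes Rocket's TLS support through an environment variable, per the project's HTTPS documentation. The file paths below are placeholders; the certificate and key are expected to be PEM-encoded.

```
# Placeholder paths; point these at your actual PEM-encoded cert and key.
ROCKET_TLS={certs="/ssl/vaultwarden.crt",key="/ssl/vaultwarden.key"}
```

With this set, Vaultwarden itself serves HTTPS, which is the application-level enforcement being argued for above.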

The bundled self-signed-certificate approach doesn't improve on the above much and opens the door to deployments that treat said self-signed certificate as actually secure, which would require documentation alone to emphasize that the bundled self-signed certificate is not trustworthy and only exists to lower the bar for deployments that rely on intermediaries to enforce TLS. Thus, if I had to guess, the best the project can do is improve documentation and guides.
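For deployments behind a TLS-terminating intermediary, the self-signed certificate discussed throughout this thread can be generated in one command. The common name, output filenames, and validity period here are illustrative, not anything the project prescribes.

```shell
# Generate a self-signed certificate and key for an internal hostname.
# -nodes: no passphrase on the key; -subj: skip the interactive prompts.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout vaultwarden.key -out vaultwarden.crt \
  -subj "/CN=vaultwarden.internal"
```

The intermediary would then be configured to trust vaultwarden.crt for the proxy-to-backend hop, which is the extra step the thread debates the worth of.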

@stefan0xC
Contributor

From my point of view, this issue should be closed / turned into a discussion (as it's not really a technical problem with this project or something that will be fixed by a PR, but seems like a difference of opinion about security vs. usability).

@zacknewman
Author

zacknewman commented Apr 6, 2024 via email

@stefan0xC
Contributor

@zacknewman Thanks for the clarification. I still think that the issue should be closed / turned into a discussion because there's no disagreement about that first issue and you already agreed to disagree on the second issue.

@zacknewman
Author

zacknewman commented Apr 6, 2024 via email

@BlackDex BlackDex added the enhancement New feature or request label Apr 30, 2024
@xxsuperdreamxx

@zacknewman, RFC 6749 is a Proposed Standard. This means that no one is “forced” to follow that standard in every single detail. There is a reason why OAuth 2.1 is still in draft form and being updated.

Regarding concerns about security behind reverse proxies, there are two relevant standards you can consider:

[RFC 9209: The Proxy-Status HTTP Response Header Field](https://www.rfc-editor.org/rfc/rfc9209.pdf) provides guidance on handling proxy-related information in HTTP responses.
RFC 9440: Client-Cert HTTP Header Field discusses how a reverse proxy can make the TLS client certificate available to the backend application (you can use this to decline requests at your web server).

Many companies adopt the practice of using locally hosted backends that are not exposed to the internet directly. These backends operate behind reverse proxies or load balancers and rely on SSL offloading (see Article 1 and Article 2). In such cases, Vaultwarden is not required to enforce SSL. Instead, you can configure your DNS server (e.g., Cloudflare) to allow communication only to servers with the same root CA certificate.

Lastly, RFC 6749 is an older standard. We now have RFC 8996: Deprecating TLS 1.0 and TLS 1.1, which addresses TLS security. It has been approved by the IETF as a Best Current Practice. TLS 1.2, mentioned in RFC 6749, never received approval, and we have since moved to TLS 1.3 as the new standard.

Keep in mind that these papers primarily apply to widespread public production environments. Vaultwarden, like Bitwarden, is considered an enterprise product for internal company use and for companies with solid IT knowledge. If you host it yourself, you can take control of its security.
