
Add feature = "zeroize" #92

Open · wants to merge 1 commit into base: master
Conversation

FauxFaux

Add a feature which causes us to run zeroize() on Drop. It is disabled by default.

zeroize tries to ensure that the memory inside the object is zeroed on Drop, not just left lying around on the heap. This may improve security in certain kinds of applications, where BigInt or BigUint is being used for cryptography. It does not address any timing issues.

It is useful to have this inside this crate. Even if the backing storage were exposed (which it is not), the crate uses temporary allocations all over the place internally, so you would want to wipe all of those instances, too. I checked many of the places where the crate appears to use temporary vectors, and in all cases these appear to be moved into a Big(U)int, so they would be wiped on Drop.

I do not believe the feature should be enabled by default; it necessarily and intentionally adds extra performance overhead that many users would not care about.

The code change is not entirely trivial, because adding a Drop implementation to an existing struct stops you from moving out of the struct's fields (as I learned today!).
In one case, this can be fixed by inlining a variable, with no semantic changes.
In two others, I have added #[cfg(...)] attributes to use an existing "copying" implementation when in zeroize mode. These two are probably avoidable, with larger code changes.
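The Drop-based pattern described above can be sketched on a hypothetical, simplified type (this is not the crate's actual struct, and it hand-rolls volatile zeroing in place of the zeroize crate that the real feature uses):

```rust
use std::ptr;
use std::sync::atomic::{compiler_fence, Ordering};

/// Hypothetical simplified stand-in for BigUint's digit storage.
struct SecretUint {
    data: Vec<u64>,
}

impl SecretUint {
    /// Zero every live digit. Volatile writes plus a compiler fence
    /// keep the wipe from being optimized away as a dead store.
    fn wipe(&mut self) {
        for d in self.data.iter_mut() {
            unsafe { ptr::write_volatile(d, 0) };
        }
        compiler_fence(Ordering::SeqCst);
    }
}

// Once this Drop impl exists, you can no longer move out of the
// struct's fields -- the constraint mentioned above.
impl Drop for SecretUint {
    fn drop(&mut self) {
        self.wipe();
    }
}
```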

@cuviper
Member

cuviper commented May 2, 2019

At first glance, there are two problems I see here:

  1. Zeroize for Vec<T> only zeros the used space (self.as_mut_slice().zeroize()), not the entire capacity. So if you ever use operations that reduce the data length in place, like Shr or Sub, then leftover upper bits will remain untouched. I think the zeroize crate could fix this to clear the whole capacity, if they want.

  2. Zeroize does nothing for realloc when the vector grows. The previous allocation with your sensitive data will just be abandoned. I don't think the zeroize crate can do anything to help this. We could carefully avoid doing anything to increase the capacity ourselves, always forcing a new Vec replacement, but this would be terribly cumbersome and error-prone.
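Problem 1 can be demonstrated with plain standard-library types (a minimal std-only sketch; the wipe call itself is elided and stood in by the slice length):

```rust
/// Shrink a vec in place and report (slice_len, capacity):
/// what a slice-based wipe can reach vs. what was actually written.
fn shrink_and_measure() -> (usize, usize) {
    let mut v: Vec<u64> = (1..=8).collect();
    v.truncate(4); // e.g. after a Shr/Sub-style in-place shrink
    // A wipe via as_mut_slice() would see only the first 4 words;
    // the stale upper 4 words remain inside the capacity.
    (v.as_mut_slice().len(), v.capacity())
}
```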

I think a better approach would be to hook the actual allocator. You could do this today by setting a #[global_allocator] to something that clears every dealloc and realloc, but of course this would affect every allocation in your program, which might be too much. Someday we might be able to use custom allocators to let you focus a zeroing allocator just for your sensitive data.
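The #[global_allocator] idea might look like this (hypothetical ZeroingAlloc type; an illustrative sketch, not a vetted implementation):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::ptr;

/// Wrap the system allocator and clear every block before freeing it.
struct ZeroingAlloc;

unsafe impl GlobalAlloc for ZeroingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, p: *mut u8, layout: Layout) {
        // Volatile writes so the wipe survives optimization.
        for i in 0..layout.size() {
            ptr::write_volatile(p.add(i), 0);
        }
        System.dealloc(p, layout);
    }
}

// Because this wrapper does not override realloc, the trait's default
// realloc (alloc + copy + dealloc) kicks in, so the old block of a
// grown allocation is wiped through dealloc as well.
#[global_allocator]
static ALLOC: ZeroingAlloc = ZeroingAlloc;
```

Note the trade-off stated above: this clears every allocation in the program, not just the sensitive ones.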

This may improve security in certain types of application, where BigInt or BigUint is being used for cryptography. It does not address any timing issues.

I'm no cryptographer, but it doesn't seem very useful to address just this one small part of security. Reading data from old allocations in Rust means that something must have gone terribly wrong in the borrow checker or, more likely, unsafe code. On the other hand, timing issues are an everyday fact of life that Rust does nothing to prevent, so this seems like a much more important concern.

@FauxFaux
Author

FauxFaux commented May 3, 2019

Excellent review of Zeroize. I have raised the capacity() issue with them directly. For the realloc issue, I suspect a Vec-like type (with zero-on-realloc) would work here? I also raised an issue for discussing that; maybe continue over there?

For the crypto side, a couple of points:

  • This is defence in depth.
    • It should never matter that the data is still in memory, but it would be nice if we tried to eliminate it. It's not unreasonable to believe that a Rust application would be using a library with this kind of leak internally, such as OpenSSL's CVE-2014-0160 ("heartbleed").
    • There's also the issue that a machine is compromised after a user has disconnected, and someone attaches a debugger to the Rust process, and can see non-wiped memory. (This is the case I care about today.)
  • There's plenty of crypto where timing doesn't matter, but the data is still sensitive, e.g. generating signatures for files (although maybe this would be better handled by a process boundary), and (to some extent) EDH, where we're dealing only with single-use keys an attacker can't control.

I believe a better approach would be to write an allocation-free version of modpow (for a fixed prime) for my project, on something like generic-array, but I failed to do that, so here we are. :)

@FauxFaux
Author

FauxFaux commented May 3, 2019

Oh dear. I've already thought of more things to add.

There are mathematical solutions to timing issues which can be implemented on top of the non-constant-time library as it currently is, e.g. the RSA blinding implementation.
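RSA blinding, for instance, can be sketched with toy textbook-sized numbers (illustration only: real code needs a CSPRNG, a fresh blinding factor per operation, and constant-time bigint arithmetic; the parameter values below are the standard small textbook RSA example, not anything from this crate):

```rust
/// Square-and-multiply modular exponentiation (fits in u64 only
/// because the toy modulus is tiny).
fn modpow(mut b: u64, mut e: u64, m: u64) -> u64 {
    let mut acc = 1;
    b %= m;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % m; }
        b = b * b % m;
        e >>= 1;
    }
    acc
}

/// Modular inverse via the extended Euclidean algorithm;
/// assumes gcd(a, m) == 1.
fn modinv(a: i64, m: i64) -> i64 {
    let (mut old_r, mut r) = (a, m);
    let (mut old_s, mut s) = (1i64, 0i64);
    while r != 0 {
        let q = old_r / r;
        (old_r, r) = (r, old_r - q * r);
        (old_s, s) = (s, old_s - q * s);
    }
    ((old_s % m) + m) % m
}

/// Blinded RSA signing: the secret exponentiation operates on a value
/// an attacker cannot correlate with msg, so its timing leaks nothing
/// about msg. Since (r^e)^d = r (mod n), the result is msg^d * r, and
/// multiplying by r^-1 unblinds it.
fn sign_blinded(msg: u64, d: u64, e: u64, n: u64, r: u64) -> u64 {
    let x = msg * modpow(r, e, n) % n; // blind
    let y = modpow(x, d, n);           // exponentiate blinded value
    y * (modinv(r as i64, n as i64) as u64) % n // unblind
}
```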

The notes on explicit_bzero(3) about how it can decrease security:

In some circumstances, explicit_bzero() can decrease security. If the compiler determined that the variable containing the sensitive data could be optimized to be stored in a register (because it is small enough to fit in a register, and no operation other than the explicit_bzero() call would need to take the address of the variable), then the explicit_bzero() call will force the data to be copied from the register to a location in RAM that is then immediately erased (while the copy in the register remains unaffected). The problem here is that data in RAM is more likely to be exposed by a bug than data in a register, and thus the explicit_bzero() call creates a brief time window where the sensitive data is more vulnerable than it would otherwise have been if no attempt had been made to erase the data.

I'm done for real this time. Short summary: I believe this usefully increases safety, despite the downsides, some of which can be improved (in other libraries).

@elichai

elichai commented Jul 18, 2021

@cuviper Now that Zeroize does zero out the full capacity, would you accept a PR for this?
I agree that it doesn't solve realloc and a lot of other problems, but a careful implementation might never reallocate (and instead allocate the needed size and then use all operations in a modular field).
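The "never reallocate" idea rests on a simple property of Vec: pushes within reserved capacity never move the backing allocation. A std-only sketch:

```rust
/// Push `n` words into a preallocated buffer and report whether the
/// backing allocation ever moved (i.e. whether a realloc happened).
fn stays_put(n: usize) -> bool {
    let mut buf: Vec<u64> = Vec::with_capacity(n);
    let before = buf.as_ptr();
    for i in 0..n as u64 {
        buf.push(i); // within reserved capacity: no growth
    }
    // Same pointer => no realloc, so no abandoned unwiped copy.
    before == buf.as_ptr()
}
```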

IMHO it's better to do best effort than nothing, and currently a cryptographer using this library can't make a best effort here without invoking UB.

@cuviper
Member

cuviper commented Jul 27, 2021

@elichai Would it be sufficient to you if we simply implemented Zeroize for BigUint/BigInt? Then you'd be able to wrap your important values in Zeroizing, or otherwise clean them up, without us making any promises/implications about how temporary values should be cleaned up internally.
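The wrap-it-yourself usage suggested here resembles zeroize's Zeroizing guard. A hand-rolled, std-only stand-in (hypothetical Wiping type, for illustration only) shows the shape of the pattern:

```rust
use std::ops::{Deref, DerefMut};
use std::ptr;

/// Guard that wipes its inner buffer when dropped, while Deref lets
/// the wrapped value be used transparently in the meantime.
struct Wiping(Vec<u64>);

impl Deref for Wiping {
    type Target = Vec<u64>;
    fn deref(&self) -> &Vec<u64> { &self.0 }
}

impl DerefMut for Wiping {
    fn deref_mut(&mut self) -> &mut Vec<u64> { &mut self.0 }
}

impl Drop for Wiping {
    fn drop(&mut self) {
        for d in self.0.iter_mut() {
            // Volatile write so the wipe is not optimized away.
            unsafe { ptr::write_volatile(d, 0) };
        }
    }
}
```

With Zeroize implemented on BigUint/BigInt, zeroize::Zeroizing would fill this role without the hand-rolled unsafe code.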

@elichai

elichai commented Jul 27, 2021

@cuviper IMHO that's already better than nothing, and we can push experimentation/thoughts about how to handle temporaries to the future; maybe the future allocator API could be involved here.

(It would also be interesting to take a performance-critical application, replace the global allocator with one that zeroizes everything, and test the performance.)

@10maurycy10

For now adding zeroize is the simplest solution.

@tarcieri

tarcieri commented Mar 3, 2022

FWIW, here is the zeroize implementation in @dignifiedquire's fork of num-bigint (used by the rsa crate):

https://github.com/dignifiedquire/num-bigint/blob/c64439f/src/biguint.rs#L136-L141

I also just opened a PR which improves the impl on BigInt:

dignifiedquire#39
