On 12/09/2014 09:49 AM, Daniel P. Berrange wrote:
>>>>> The question is how should we make use of it? Should we use it as
>>>>> the seed for initstate_r, or just use it for virRandomBits directly?
>>>>
>>>> Well, consider that libvirt might be run in a VM with snapshot. IIUC
>>>> nowadays when the VM is started from the snapshot virRandomBits()
>>>> produces the same sequence. If we want to prevent that we must use
>>>> the new syscall every time virRandomBits() is called. I'm afraid
>>>> using the syscall just to set the seed won't be sufficient.
Forking two guests from the same point in time is NOT my concern with
this proposal (when forking, you should already coordinate with the
forked guests to clear their entropy pools; virt-sysprep can do this).
Rather, this is about making libvirtd startup use a better seed, when it
can.
>>>
>>> If you are restoring a VM from a snapshot, the entropy pool for
>>> /dev/random will have been preserved too, so you'll still have the
>>> same problem. This is just one of hundreds of examples of why you
>>> should never try to use snapshots as the basis for forking multiple
>>> independent VM instances.
>>
>> Correct. But after some time (and some packets, keypresses and mouse
>> movements too) both the syscall and /dev/random will produce an
>> unpredictable sequence. But that's not the case for our RNG. What I'm
>> saying is, if we feel that our RNG is not good enough we should use a
>> better one.
It's not our RNG that is bad, just our seeding.
>> Improving the seed setting just defers the problem. Although I can
>> live with Eric's approach too.
>
> I don't think we are saying the RNG is not good enough. We're saying the
> way we initialize it when libvirtd starts is not good enough. Basically
> two libvirtd processes started on identical OS instances should not get
> the same random numbers. Giving a good seed achieves that.
Furthermore, my biggest reason for using getrandom(2) _only_ for the
seed is that this is the usage recommended by the syscall's own
documentation:
http://lists.openwall.net/linux-kernel/2014/07/17/235
    The system call getrandom() fills the buffer pointed to by buf
    with up to buflen random bytes which can be used to seed user
    space random number generators (i.e., DRBG's) or for other
    cryptographic processes. It should not be used for Monte Carlo
    simulations or for other probabilistic sampling applications.
We do NOT want to be draining /dev/[u]random on every call, but merely
seeding our PRNG sufficiently that the rest of our random bits are
decently "random" because the seed was unpredictable.
Speaking with Florian Weimer about this, he suggests that we might be
better off not using any of the POSIX rand functions at all, and instead
using the random number source from a crypto library, which would be
gnutls in our case. We allow compilation without gnutls, so I guess we'd
need to keep the current code as a fallback, but if we can use a
cryptographically strong RNG by default that sounds like a nice idea -
assuming it isn't simply a facade for /dev/urandom :-)