On Fri, May 22, 2015 at 11:02:57AM +0100, Daniel P. Berrange wrote:
> On Fri, May 22, 2015 at 11:56:24AM +0200, Michal Privoznik wrote:
> > On 20.05.2015 19:35, John Ferlan wrote:
> > >
> > >
> > > On 05/20/2015 10:22 AM, Ján Tomko wrote:
> > >> s/read/refresh/ in the commit message?
> > >>
> > >> On Wed, May 20, 2015 at 08:52:57AM -0400, John Ferlan wrote:
> > >>>
> > >>> https://bugzilla.redhat.com/show_bug.cgi?id=1195882
> > >>>
> > >>> Original commit id 'cbde3589' indicates that the cache file would be
> > >>> discarded if either the QEMU binary or libvirtd 'ctime' changes; however,
> > >>> the code only discarded it if the QEMU binary ctime didn't match or if
> > >>> the new libvirtd ctime was later than the one that created the cache file.
> > >>>
> > >>> This could cause problems for maintenance or patch branches: if someone
> > >>> had a libvirtd built on 'date x' from (a) backported patch(es), followed
> > >>> by a subsequent install of the primary release built on 'date y', where
> > >>> 'date x' was greater than 'date y', then features in the primary release
> > >>> branch may not be available.
> > >>
> > >> I can see how there can be two daemons with different ctimes on the same
> > >> system during development, so ACK to the change.
> > >
> > > But they'd use different cache paths, right?
> > >
> > >>
> > >> However, when you install the daemon (even from an older package), ctime
> > >> should only move forward, so I'm sceptical about its relevance to the
> > >> referenced bug.
> > >
> > > Perhaps that's my misinterpretation or misunderstanding of ctime -
> > > certainly in a different OS I used to work on many years ago, ctime was
> > > when the image was created by a build, not the inode or file change
> > > time as I (now) read...
> > >
> > > So hmmm... that blows my theory to shreds - unless you account for that
> > > window of time during an install where files are being replaced while
> > > libvirtd is still running an older release. As Dan notes in my 1/2,
> > > removing the cache in between would be racy. So would it also be racy
> > > if something caused a cache read, code finds updated libvirtd, asks qemu
> > > for current capabilities, gets answer, creates file based on current (or
> > > old) understanding... Then when libvirtd restarts it finds a ctime of
> > > itself, doesn't update the cache, and of course doesn't have the latest
> > > bits.
> >
> > How about computing a hash of the qemu binary and, if the hashes are not
> > equal, rebuilding the cache?
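Michal's suggestion could be sketched like this (an illustrative Python sketch, not libvirt code; `binary_digest` is a hypothetical helper, and libvirt itself is written in C):

```python
import hashlib

# Key the capabilities cache on a content hash of the QEMU binary instead
# of a timestamp: any rebuild or reinstall changes the digest, so the
# cache is invalidated regardless of how timestamps behave.
def binary_digest(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()
```

The trade-off Daniel raises below is cost: this reads the whole multi-megabyte binary on every check, where a timestamp comparison needs only a stat().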
>
> That'd be significantly slower than today's code, and I don't think it
> is needed. We just need to change the check from < date to != date. The
> current < date check was written based on the (mistaken) assumption
> that the ctime would be set based on the time at which the RPM package
> was installed. ie, even if you downgraded libvirt, I thought you'd
> get a newer ctime due to install time. Since RPM preserves timestamps
> from the build environment, this assumption was invalid. Using != date
> should deal with it adequately, as the chance of two separate RPM
> builds having been done at precisely the same time is minimal.
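The < vs != distinction can be sketched as follows (an illustrative Python sketch, not the actual libvirt C code; the function names are made up, and the cache is assumed to store the libvirtd ctime seen when it was written):

```python
# Sketch of the staleness check for the capabilities cache file.

def cache_stale_old(cached_libvirtd_ctime, current_libvirtd_ctime):
    # Old logic: only a *newer* libvirtd ctime invalidates the cache,
    # so a binary whose ctime went backwards keeps the stale cache.
    return current_libvirtd_ctime > cached_libvirtd_ctime

def cache_stale_new(cached_libvirtd_ctime, current_libvirtd_ctime):
    # Fixed logic: any ctime difference invalidates the cache.
    return current_libvirtd_ctime != cached_libvirtd_ctime
```

Under the scenario in the bug report, a cache written against a libvirtd with ctime 200 and a current binary with ctime 100: the old check keeps the cache (100 > 200 is false), while the new check correctly refreshes it (100 != 200).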
> RPM preserves mtimes, not ctimes.
>
> We use ctimes since commit f5059a929e0db6689e908e354b13a60661c143e1
>     Change QEMU capabilities cache to check ctime instead of mtime
>     CommitDate: 2014-03-11 10:52:29 +0000
> http://libvirt.org/git/?p=libvirt.git;a=commitdiff;h=f5059a92
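The mtime/ctime distinction here can be demonstrated with a quick sketch (illustrative Python, not part of the thread; `shutil.copy2` preserves the mtime of the source, much as rpm preserves build-environment timestamps, while the ctime of the new inode is always the time the file was created):

```python
import os
import shutil
import tempfile

# Create a "binary" and push its mtime far into the past.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"fake libvirtd binary")
src.close()
old = 1000000000  # 2001-09-09, well before any recent install
os.utime(src.name, (old, old))

# copy2 carries the mtime over; the copy's ctime is set to "now".
dst = src.name + ".copy"
shutil.copy2(src.name, dst)

st = os.stat(dst)
print(int(st.st_mtime), st.st_ctime > old)  # prints "1000000000 True"
```

So a freshly installed file can carry an old mtime, but its ctime always moves forward, which is why the cache should still invalidate on install.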
Oh, so it does. I thought it preserved both, but I was comparing the
wrong fields. In that case, I'm not clear on just what problem John
is seeing. Even if he installs an older maint release RPM, or
switches to an old maint branch and rebuilds, the ctime should
always get changed. So the cache should invalidate.
Regards,
Daniel
--