On Fri, Jun 24, 2016 at 18:09:01 +0200, Ján Tomko wrote:
On Wed, Jun 22, 2016 at 07:03:46AM -0400, John Ferlan wrote:
>On 06/17/2016 11:16 AM, Ján Tomko wrote:
>> On Thu, Jun 16, 2016 at 07:44:05AM -0400, John Ferlan wrote:
>>> On 06/16/2016 06:03 AM, Ján Tomko wrote:
>>>> On Wed, Jun 15, 2016 at 06:45:59PM -0400, John Ferlan wrote:
>>>>> On 06/15/2016 01:19 PM, Ján Tomko wrote:
>>>>>> Regression introduced by commit 71b803a for [1] that prevents
>>>>>> starting up a logical pool created with <source><device path=''></source>
>>>>>> after it has been moved to a different physical volume.
[...]
>> <pool type='logical'>
>>   <name>vgname</name>
>>   <source>
>>     <device path='/dev/sda4'/>
>>     <name>vgname</name>
>
>As I see things, this more or less lets libvirt "know" that "/dev/sda4"
>and "vgname" are associated.
>
>>   </source>
>>   <target>
>>     <path>/dev/vgname</path>
>>   </target>
>> </pool>
>>
>> Then pvmove /dev/sda4 to /dev/sda5, libvirt will refuse that pool
>> even though <name>vgname</name> is enough to uniquely identify a
>> storage pool.
>>
>
>As an admin you take this step to pvmove your data from /dev/sda4 to
>/dev/sda5.
Note that this messes with the configuration without notifying libvirt,
since we don't support such an operation via the APIs.
>But then you expect libvirt to know that? Do you expect libvirt to
>automagically update the XML to /dev/sda5 during 'build' or 'start'?
That is one possibility ...
>>>> The whole point of LVM is abstraction from the lower layers so we
>>>> shouldn't ever be checking this.
>>>>
>>>
>>> So it's OK to activate/start a pool where the source device path is
>>> incorrect?
>>
>> For LVM, the important path is in <source><name>.
>>
>
>But isn't there a 1-to-1 relationship between the VG and the PV? A PV
>cannot be in more than one VG, so if you move your PV, then you have to
>update your XML to let libvirt know.
>
Why would it need to know? Apart from pool building, libvirt barely
needs the value: it is also used when deleting the pool, where libvirt
attempts to pvremove the devices. Fortunately, even in the current
broken state it is not possible to pvremove PVs that are members of a
different VG, which the lax check would otherwise allow.
The broken part of the check is that it does not enforce that ALL the
PVs listed in the XML are members of the given VG and that no others
are; if just one PV matches, the code happily accepts the pool.
In its current state the check is pointless, since it does not properly
enforce the configuration and still allows other wrong configurations.
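To make the difference concrete, here is a minimal Python model of the
two behaviours (the function names and example data are mine, not
libvirt's; the actual check lives in libvirt's C storage backend):

```python
def check_current(xml_devices, vg_pvs):
    # Rough model of the current lax check: accept the pool as soon
    # as any single device from the XML is found among the VG's PVs.
    return any(dev in set(vg_pvs) for dev in xml_devices)

def check_strict(xml_devices, vg_pvs):
    # Option 2: the devices listed in the XML must exactly match the
    # PVs that are actually members of the volume group.
    return set(xml_devices) == set(vg_pvs)

# After 'pvmove /dev/sda4 /dev/sda5' the VG is backed by /dev/sda5
# while the pool XML still lists /dev/sda4 - both variants refuse it:
print(check_current(["/dev/sda4"], ["/dev/sda5"]))  # False
print(check_strict(["/dev/sda4"], ["/dev/sda5"]))   # False

# But a config listing one right and one wrong PV slips past the lax
# check, while the strict one rejects it:
print(check_current(["/dev/sda4", "/dev/sda5"], ["/dev/sda5"]))  # True
print(check_strict(["/dev/sda4", "/dev/sda5"], ["/dev/sda5"]))   # False
```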
So the options are:
1) Remove it:
- If the admin messes with the state we will happily ignore the
difference. Deletion will not work properly unless updated.
Pros: the storage pool will work if the volume group is found
Cons: deletion may break
2) Enforce it properly:
- Fix the check so that the data provided in the XML must match the
state in the system.
Pros: the state will always be right
Cons: existing setups may stop working
3) Update the state in the XML:
- Re-detect the PV paths on pool startup.
Pros: state will be in sync, existing configs will work
Cons: less robust in some cases, weird semantics with persistent config
4) 2 + 3. Update it via a flag when starting.
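As a sketch of what option 3 (or the option-4 behaviour behind a flag)
would mean in practice, assuming a hypothetical pool definition and
re-detection step (none of these names exist in libvirt):

```python
def refresh_source_devices(pool_def, detected_pvs):
    # Option 3 sketch: on startup, overwrite the device paths stored
    # in the pool definition with the PVs the volume group actually
    # reports, so a later pool deletion pvremoves the right devices.
    stale = set(pool_def["source_devices"]) != set(detected_pvs)
    pool_def["source_devices"] = sorted(detected_pvs)
    return stale  # True if the persistent config was out of date

pool_def = {"name": "vgname", "source_devices": ["/dev/sda4"]}
# Pretend pvs/vgs reported that vgname now sits on /dev/sda5:
print(refresh_source_devices(pool_def, ["/dev/sda5"]))  # True
print(pool_def["source_devices"])                       # ['/dev/sda5']
```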
Peter