Hi all. Having trouble figuring out the magic to set up VMChannel comms between EL6.x
host/guests. My end goal is to enable fence_virt in the guest to talk to fence_virtd on
the host via VMChannel. I'd prefer to use that instead of multicast because it is
supposed to work even if networking in the guest is down/borked.
My analysis is that there is a mismatch between what libvirt is feeding qemu-kvm and what
qemu-kvm is willing to accept:
virt-install option:
--channel unix,path=/var/run/cluster/fence/foobar,mode=bind,target_type=guestfwd,target_address=10.0.2.179:1229
turns into this XML:
<channel type='unix'>
  <source mode='bind' path='/var/run/cluster/fence/foobar'/>
  <target type='guestfwd' address='10.0.2.179' port='1229'/>
</channel>
Which then gets fed to qemu-kvm as this:
-chardev socket,id=charchannel0,path=/var/run/cluster/fence/foobar,server,nowait -netdev user,guestfwd=tcp:10.0.2.179:1229,chardev=charchannel0,id=user-channel0
And then qemu-kvm barfs like so:
qemu-kvm: -netdev user,guestfwd=tcp:10.0.2.179:1229,chardev=charchannel0,id=user-channel0: Invalid parameter 'chardev'
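For what it's worth, once a guest does start with that chardev (mode='bind' means qemu is the server side, and server,nowait means it listens without blocking), the host side can be sanity-checked independently of the netdev problem: a plain unix-socket connect to the path should succeed. A minimal Python check, using the socket path from my virt-install command above:

```python
import os
import socket

def socket_listening(path):
    """Return True if something is accepting connections on the unix socket at path."""
    if not os.path.exists(path):
        return False
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return True
    except socket.error:
        return False
    finally:
        s.close()

# Path from the virt-install option above; this only reports True
# once qemu-kvm has actually created and bound the socket.
print(socket_listening('/var/run/cluster/fence/foobar'))
```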
Versions:
libvirt-0.9.10-21.el6.1
qemu-kvm-0.12.1.2-2.295.el6
NB: I did try this with the stock 6.2 versions and got the same result. I upgraded to the versions above to see if the problem went away, but no joy.
NB2: The fence_virt.conf manpage lists the following as example XML for defining a channel
device:
<channel type='unix'>
  <source mode='bind' path='/sandbox/guests/fence_molly_vmchannel'/>
  <target type='guestfwd' address='10.0.2.179' port='1229'/>
</serial>
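(Note that the manpage example as printed opens with <channel> but closes with </serial>, so it isn't even well-formed XML and would be rejected by any XML parser before libvirt could do anything with it. A quick illustration with Python's ElementTree, with the close tag corrected to </channel> in the good variant:)

```python
import xml.etree.ElementTree as ET

# The manpage example with the close tag corrected to </channel>.
good = (
    "<channel type='unix'>"
    "<source mode='bind' path='/sandbox/guests/fence_molly_vmchannel'/>"
    "<target type='guestfwd' address='10.0.2.179' port='1229'/>"
    "</channel>"
)

# Verbatim mismatch from the manpage: <channel> closed by </serial>.
bad = good.replace("</channel>", "</serial>")

channel = ET.fromstring(good)              # parses cleanly
print(channel.find('target').get('port'))  # -> 1229

try:
    ET.fromstring(bad)
except ET.ParseError as err:
    print("manpage snippet rejected:", err)
```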
Any advice would be greatly appreciated. I can fall back to multicast, of course, but
I'd like to make this work if possible.
Thanks!
Mike