[libvirt-users] Facing problems while connecting virsh to XenServer 6.5 to import VMs to RHEV / oVirt
by Anantha Raghava
Hi,
I am trying to import a couple of important virtual machines from
XenServer 6.5 (downloaded from xenserver.org) into a RHEV / oVirt
environment. When I used the RHEV / oVirt import wizard, I ran into
issues. To check further, I attempted to connect with the
virsh -c ssh://root@<xenhost> command, but received the same set of
errors as in the RHEV / oVirt wizard.
I am able to connect to the XenServer 6.5 host over ssh without a
password, and even "sudo -u vdsm ssh root@<xenhost>" works without any
issues. To dig further, I disabled the firewall on the XenServer host
and set SELinux to disabled, and I am still finding that libvirt is
unable to connect to the hypervisor.
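In case it helps to reproduce the failure outside the wizard, here is a
minimal sketch using the libvirt Python bindings; it assumes the
xen+ssh transport normally used for remote Xen hosts, and <xenhost> is
a placeholder for the real host name:
import sys
import libvirt

# Placeholder: replace <xenhost> with the actual XenServer address.
uri = 'xen+ssh://root@<xenhost>/'

try:
    # A read-only connection is enough to confirm that libvirt can reach
    # the remote hypervisor over ssh.
    conn = libvirt.openReadOnly(uri)
except libvirt.libvirtError as e:
    print('Failed to connect to %s: %s' % (uri, e))
    sys.exit(1)

# List the guests the remote driver can see.
for dom in conn.listAllDomains():
    print(dom.name())
conn.close()
If this fails with the same errors, the problem is in the libvirt
connection itself rather than in the import wizard.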
I have attached the libvirt client logs for your reference.
Can someone help me debug this and import the VMs from XenServer 6.5
to RHEV / oVirt? We cannot recreate these virtual machines, as they are
critical ones and rebuilding them would be a herculean task.
Thanks in advance.
--
Thanks & Regards,
Anantha Raghava
Do not print this e-mail unless required. Save Paper & trees.
8 years, 2 months
[libvirt-users] Python API does not return all information available
by David Ashley
All -
The Python virDomain method info() does not return all of the entries
contained in the C struct virNodeInfo. While this is somewhat
documented, the entries that are returned are unlabeled, and the
documentation does not specify which virNodeInfo entries they
correspond to.
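For what it's worth, here is a minimal sketch of how I read the
unlabeled lists coming back from the Python bindings; the field order
below is my understanding of the mapping to virNodeInfo and
virDomainInfo, so treat it as an assumption to double-check against the
C headers ('somedomain' is a hypothetical guest name):
import libvirt

conn = libvirt.open('qemu:///system')

# conn.getInfo() wraps virNodeGetInfo(); the list is unlabeled but appears
# to follow the virNodeInfo member order (memory is labeled as MB in the
# libvirt development guide examples).
model, memory, cpus, mhz, nodes, sockets, cores, threads = conn.getInfo()
print('node: %s, %s MB RAM, %s CPUs @ %s MHz' % (model, memory, cpus, mhz))

# dom.info() wraps virDomainGetInfo() and seems to follow virDomainInfo:
# state, maxMem (KiB), memory (KiB), nrVirtCpu, cpuTime (ns).
dom = conn.lookupByName('somedomain')
state, max_mem, mem, nr_vcpus, cpu_time = dom.info()
print('domain state=%d vcpus=%d' % (state, nr_vcpus))

conn.close()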
Does anyone have more information they can pass on?
Thanks,
David Ashley
8 years, 2 months
[libvirt-users] /dev/shm definition prevents LXC container from starting: "Failed to access '(null)': Bad address"
by mxs kolo
Hi all
We use libvirt 1.2.x and 1.3.x with the following filesystem definition in the config:
<filesystem type='ram' accessmode='passthrough'>
  <source usage='524288' units='KiB'/>
  <target dir='/dev/shm'/>
</filesystem>
But libvirt 2.1.0 with the same config reports an error:
Failed to access '(null)': Bad address
Debug level 1 shows:
2016-08-11 10:14:56.661+0000: 1: debug : lxcContainerMountAllFS:1618 : Mounting all non-root filesystems
2016-08-11 10:14:56.661+0000: 1: debug : lxcContainerMountAllFS:1625 : Mounting '(null)' -> '/dev/shm'
2016-08-11 10:14:56.661+0000: 1: error : lxcContainerResolveSymlinks:629 : Failed to access '(null)': Bad address
2016-08-11 10:14:56.661+0000: 1: debug : virFileClose:102 : Closed fd 15
2016-08-11 10:14:56.661+0000: 1: debug : virFileClose:102 : Closed fd 10
2016-08-11 10:14:56.661+0000: 1: debug : virFileClose:102 : Closed fd 12
2016-08-11 10:14:56.661+0000: 1: debug : lxcContainerChild:2271 : Tearing down container
Failure in libvirt_lxc startup: Failed to access '(null)': Bad address
Is anyone using libvirt 2.x with a /dev/shm device?
Do we perhaps need to change the config and add some definitions for /dev/shm?
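In case it is useful for comparing notes, here is a minimal diagnostic
sketch (libvirt Python bindings, with a hypothetical container name)
that dumps the <filesystem> entries libvirt has recorded for the
container, to see which one ends up without a usable source:
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open('lxc:///')
dom = conn.lookupByName('mycontainer')  # hypothetical container name

# Print every <filesystem> source/target pair from the domain definition.
root = ET.fromstring(dom.XMLDesc(0))
for fs in root.findall('./devices/filesystem'):
    src = fs.find('source')
    tgt = fs.find('target')
    print('type=%s source=%s target=%s' % (
        fs.get('type'),
        src.attrib if src is not None else None,
        tgt.get('dir') if tgt is not None else None))
conn.close()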
b.r.
Maxim Kozin
8 years, 2 months
[libvirt-users] Guest Agent and API Relationship
by David Ashley
All -
I have looked everywhere I can think of (except the C code) to find
the relationship between an API and the guest agent. What I want to
know is whether a given API depends on the guest agent being
available. I know that some of the APIs now have such a dependency,
but there does not seem to be an indicator in the documentation for
when this is the case, nor is there any comprehensive list of APIs
that depend on the guest agent.
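To illustrate the kind of dependency I mean, here is a minimal sketch
that calls one API which, as far as I know, is serviced by the qemu
guest agent, and catches the error raised when the agent is unavailable
('somedomain' is a hypothetical guest name); probing like this is the
only way I have found to tell:
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('somedomain')  # hypothetical guest name

try:
    # interfaceAddresses() with the AGENT source is one example of a call
    # answered by the guest agent rather than by QEMU itself.
    addrs = dom.interfaceAddresses(
        libvirt.VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_AGENT)
    print(addrs)
except libvirt.libvirtError as e:
    # Typically fails with a "guest agent is not responding" style error
    # when the agent channel is missing or the agent is not running.
    print('agent-backed call failed: %s' % e)

conn.close()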
Any help would be useful.
Thanks,
W. David Ashley
8 years, 2 months
[libvirt-users] Attaching disks with external snapshots
by Erlon Cruz
Hi folks,
I'm trying to attach a disk to an instance using libvirt. The problem
is that this disk has external snapshots. The process I tried was:
1 - Attach the disk to the domain:
virsh# attach-device instance-00000006 /tmp/disk.xml [1] --live
2 - Snapshot the disk [2]:
virsh# snapshot-create instance-00000006 --quiesce --xmlfile /tmp/snap-from-disk.xml --disk-only
3 - Dump the domain XML and create a new disk file from it:
virsh# dumpxml instance-00000006
... [3]
4 - Detach the device and re-add it using the new disk file:
virsh# detach-device instance-00000006 /tmp/disk-with-snap.xml [4]
Device detached successfully
virsh# attach-device instance-00000006 /tmp/disk-with-snap.xml
error: Failed to attach device from /tmp/disk-with-snap.xml
error: internal error: unable to execute QEMU command 'device_add': Property 'virtio-blk-device.drive' can't find value 'drive-virtio-disk1'
Question: is this operation supported? If so, what is the correct procedure?
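For reference, the Python-binding equivalent of the live detach/attach
in step 4 looks roughly like the sketch below (same hypothetical domain
name and XML path as above; this is just the programmatic form of the
virsh commands, not a fix for the error):
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000006')

with open('/tmp/disk-with-snap.xml') as f:
    disk_xml = f.read()

# virsh detach-device --live == detachDeviceFlags(..., VIR_DOMAIN_AFFECT_LIVE)
dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# virsh attach-device --live == attachDeviceFlags(..., VIR_DOMAIN_AFFECT_LIVE)
dom.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)

conn.close()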
Erlon
[1] http://paste.openstack.org/show/556055/
[2] http://paste.openstack.org/show/556056/
[3] http://paste.openstack.org/show/556063/
[4] http://paste.openstack.org/show/556064/
8 years, 2 months
[libvirt-users] QEMU IMG vs Libvirt block commit
by Erlon Cruz
Hi folks,
I'm having an issue with the standard NFS driver in OpenStack, which
uses qemu-img and libvirt to create snapshots of volumes. It uses
qemu-img on the Controller to manage the snapshots when the volume is
not attached (offline), or calls the Compute node (which calls libvirt)
to manage snapshots if the volume is attached (online). When I try to
create/delete snapshots in a snapshot chain, there are three situations:
1 - I create/delete the snapshots in Cinder (it uses qemu-img). It goes OK:
I can delete and I can create snapshots.
2 - I create/delete the snapshots in online mode (it uses libvirt). It
goes OK as well.
3 - I create the snapshots in Cinder (offline) and delete them in online
mode (using libvirt); then it fails with this message [1]:
libvirtError: invalid argument: could not find image
'volume-13cb83a2-880f-40e8-b60e-7e805eed76f9.d024731c-bdc3-4943-91c0-215a93ee2cf4'
in chain for
'/opt/stack/data/nova/mnt/a3b4c6ddd9bf82edd4f726872be58d05/volume-13cb83a2-880f-40e8-b60e-7e805eed76f9'
But the backing files are there in this folder [2], and they are
chained as I think they are supposed to be [3].
The versions on the two hosts in the tests are:
Controller/Cinder node: qemu-img version 2.0.0 && libvirt 1.2.2-0ubuntu13
Compute node: qemu-img version 2.5.0 && libvirt-1.3.1-1ubuntu10
Is there any compatibility problem between libvirt and qemu-img snapshots?
Have you guys found any problem like that?
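To help narrow this down outside OpenStack, below is a minimal sketch
of the online deletion path as I understand the Compute side drives it:
an active block commit through the Python bindings, polled until it is
ready and then pivoted. The domain name and disk target are
hypothetical placeholders, and the "could not find image ... in chain"
error comes from libvirt walking the backing chain recorded in the live
domain XML during this kind of call:
import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('instance-00000001')  # hypothetical domain

disk = 'vdb'  # hypothetical target device whose chain should be shortened

# Commit the active layer down into its backing file(s).
dom.blockCommit(disk, None, None, 0,
                libvirt.VIR_DOMAIN_BLOCK_COMMIT_ACTIVE)

# Poll the block job until all data is copied, then pivot so the domain
# switches to the base image.
while True:
    info = dom.blockJobInfo(disk, 0)
    if not info:
        break  # job disappeared (should not happen for an active commit)
    if info['end'] > 0 and info['cur'] >= info['end']:
        break  # all data copied; the job is ready to pivot
    time.sleep(0.5)

dom.blockJobAbort(disk, libvirt.VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT)
conn.close()
If this reproduces the error on a chain that was created offline with
qemu-img, that would suggest the live domain XML simply does not know
about the overlays qemu-img added.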
Erlon
----------------------------------
[1] http://paste.openstack.org/show/543270/
[2] http://paste.openstack.org/show/543273/
[3] http://paste.openstack.org/show/543274/
8 years, 2 months
[libvirt-users] Go bindings to LibVirt
by Sergey Bronnikov
Hi, everyone
For a small project I need Go bindings for the libvirt library. There
is a page on the official site with a list of bindings for different
languages [1], but it lacks Go bindings. Does that mean there are no
official bindings for that language?
[1] https://libvirt.org/bindings.html
Sergey
8 years, 2 months
[libvirt-users] Sharing a host directory with a Windows guest
by Peter Steele
I'm running libvirt on a CentOS 7.2 host and have created a Windows 10
VM under it. I've been researching how to share a folder on my host
with the Windows guest VM, but can't figure out whether it's possible.
I've added the following
<filesystem type='mount' accessmode='passthrough'>
  <driver type='path' wrpolicy='immediate'/>
  <source dir='/var/lib/mydata'/>
  <target dir='/mydata'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</filesystem>
to the XML definition of my Windows VM, but I assume something needs to
be configured on the VM side as well. Is there a way to accomplish this?
Peter
8 years, 2 months
[libvirt-users] Proper XML for compareCPU method
by David Ashley
All -
What is the proper XML to supply to the Python Connections compareCPU
method? I have looked at the cpu_map.xml file and I believe I have the
proper XML configured but the method always throws an exception with a
missing CPU architecture error. Here is the code.
from __future__ import print_function
import sys
import libvirt

conn = libvirt.open('qemu:///system')
if conn == None:
    print('Failed to open connection to qemu:///system', file=sys.stderr)
    exit(1)

retc = conn.compareCPU('<cpu><arch name="x86"><model name="kvm64"/></arch></cpu>')
if retc == -1:
    print("CPUs are not the same.")
else:
    print("CPUs are the same.")

conn.close()
exit(0)
This is probably just a simple error on my part, but I have tried more
than a few permutations of the XML with no success, so I guess I need
some help with this.
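In case it helps frame the question: my working assumption (worth
checking against the <cpu> block that virsh capabilities prints) is
that compareCPU wants the capabilities-style CPU description, with
<arch> and <model> as elements carrying text rather than name
attributes. A minimal sketch of that variant:
import libvirt

conn = libvirt.open('qemu:///system')

# Assumed format: values as element text, mirroring the host capabilities XML.
cpu_xml = '<cpu><arch>x86_64</arch><model>kvm64</model></cpu>'

ret = conn.compareCPU(cpu_xml)
if ret == libvirt.VIR_CPU_COMPARE_IDENTICAL:
    print('Host CPU is identical to the described CPU.')
elif ret == libvirt.VIR_CPU_COMPARE_SUPERSET:
    print('Host CPU is a superset of the described CPU.')
elif ret == libvirt.VIR_CPU_COMPARE_INCOMPATIBLE:
    print('Host CPU is incompatible with the described CPU.')

conn.close()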
Thanks,
David Ashley
8 years, 2 months
[libvirt-users] Cannot guestmount a Fedora 24 XFS disk.
by Andre Goree
I seem to be having trouble using guestmount to mount a Fedora 24 disk that is using XFS. These are the error messages I get when I try:
root@cpdev-cn5:/var/lib/libvirt/images/base# guestmount --rw -a ${disk_path}/${disk_name} -m /dev/${target}1 /tmp/fedora-master/
libguestfs: error: mount_options: /dev/vda1 on /: mount: wrong fs type, bad option, bad superblock on /dev/vda1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
guestmount: '/dev/sda1' could not be mounted. Did you mean one of these?
/dev/sda1 (xfs)
Verbose output shows:
libguestfs: recv_from_daemon: received GUESTFS_LAUNCH_FLAG
libguestfs: [03647ms] appliance is up
libguestfs: send_to_daemon: 72 bytes: 00 00 00 44 | 20 00 f5 f5 | 00 00 00 04 | 00 00 00 4a | 00 00 00 00 | ...
guestfsd: main_loop: new request, len 0x44
/dev/sda1: No such file or directory
mount -o /dev/vda1 /sysroot/
[ 1.759120] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
[ 1.762869] XFS (vda1): Version 5 superblock detected. This kernel has EXPERIMENTAL support enabled!
[ 1.762869] Use of these features in this kernel is at your own risk!
[ 1.764819] XFS (vda1): Superblock has unknown read-only compatible features (0x1) enabled.
[ 1.765834] XFS (vda1): Attempted to mount read-only compatible filesystem read-write.
[ 1.765834] Filesystem can only be safely mounted read only.
[ 1.767472] XFS (vda1): SB validate failed with error 22.
mount: wrong fs type, bad option, bad superblock on /dev/vda1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
guestfsd: error: /dev/vda1 on /: mount: wrong fs type, bad option, bad superblock on /dev/vda1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
guestfsd: main_loop: proc 74 (mount_options) took 0.09 seconds
libguestfs: recv_from_daemon: 272 bytes: 20 00 f5 f5 | 00 00 00 04 | 00 00 00 4a | 00 00 00 01 | 00 12 34 00 | ...
libguestfs: error: mount_options: /dev/vda1 on /: mount: wrong fs type, bad option, bad superblock on /dev/vda1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
I've tried with both /dev/sda1 and /dev/vda1 but no luck. Interestingly enough, a CentOS 7.2 image with XFS and the same setup (i.e., a single partition formatted with XFS) mounts without issue. Any ideas?
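One thing the kernel messages above seem to say is that the appliance
kernel only considers this XFS v5 superblock (with its unknown
read-only compatible feature bits) safe to mount read-only. A minimal
sketch for testing that theory with the libguestfs Python bindings,
read-only throughout (the disk path is a placeholder):
import guestfs

disk = '/var/lib/libvirt/images/base/fedora-master.img'  # placeholder path

g = guestfs.GuestFS(python_return_dict=True)
g.add_drive_opts(disk, readonly=1)   # attach the disk read-only
g.launch()

print(g.list_filesystems())          # e.g. {'/dev/sda1': 'xfs'}

# Mount the partition read-only instead of read-write.
g.mount_ro('/dev/sda1', '/')
print(g.ls('/'))

g.shutdown()
g.close()
The guestmount equivalent would presumably be --ro in place of --rw.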
Andre Goree
Senior Engineer, Atlantic.Net
8 years, 2 months