[libvirt-users] Updating domain XML issue
by Brian Rak
I'm trying to unmount a guest cdrom using libvirt_domain_update_device
(via php-libvirt). The guest cdrom is currently backed by Ceph, defined
with this XML:
<disk type='network' device='cdrom'>
<driver name='qemu' type='raw'/>
<auth username='cdroms'>
<secret type='ceph' uuid='XXXX'/>
</auth>
<source protocol='rbd' name='cdrom/test'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
In order to unmount it, I'm trying to use this XML:
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
However, I'm getting an error from libvirt: 'internal error: invalid
secret type 'ceph''. I suspect this is because it's still trying to use
the authentication information from the old cdrom definition. How do I
convince it not to do this? I've tried specifying an empty <auth/>
block, but that generates its own error ('missing username for auth').
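For what it's worth, libvirt also exposes this operation through virsh
change-media, and an empty drive is normally represented by a <disk>
element with no <source> at all. A sketch of the ejected-state XML based
on the definition above (untested with php-libvirt; keeping type='network'
rather than switching to 'file' is an assumption, the idea being not to
make libvirt reconcile the stale auth data):

```xml
<disk type='network' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
  <address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
```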
10 years, 5 months
[libvirt-users] Problem connecting to hypervisor
by Vikas Kokare
I am trying to establish a connection (using the virsh command) to a couple
of hypervisor hosts, both SLES 11.3 environments. The host from which I am
making both connections is a SLES 10.4 instance.
kvmh911351246:~ # virsh -c qemu+ssh://9.113.51.247:22/system
libvir: XML-RPC error : authentication required
error: failed to connect to the hypervisor
The SSH authentication to hypervisor host 9.113.51.247 has already been
established.
kvmh911351246:~ # ssh root@9.113.51.247
Last login: Tue Jun 3 11:31:56 2014 from scm-kvmh2-rhel6
SLES11-51-247:~ #
Hence the cause of the authentication failure is not clear.
The second connection to another SLES 11.3 host looks like
virsh # connect qemu+ssh://9.121.58.19:22/system
bash: socat: command not found
libvir: Remote error : socket closed unexpectedly
error: Failed to connect to the hypervisor
where the cause of the error is entirely different.
The libvirt packages available on these environments are:

9.113.51.247 (SLES 11.3):
libvirt-cim-0.5.12-0.7.16
libvirt-1.0.5.1-0.7.10
libvirt-python-1.0.5.1-0.7.10
libvirt-client-32bit-1.0.5.1-0.7.10
libvirt-lock-sanlock-1.0.5.1-0.7.10
libvirt-client-1.0.5.1-0.7.10

9.121.58.19 (SLES 11.3):
libvirt-cim-0.5.12-0.7.16
libvirt-client-1.0.5.1-0.7.10
libvirt-python-1.0.5.1-0.7.10
libvirt-1.0.5.1-0.7.10

9.113.51.246 (SLES 10.4):
libvirt-python-0.3.3-18.22.1
libvirt-devel-0.3.3-18.22.1
libvirt-0.3.3-18.22.1
How can these connection failures be debugged? Is there a way to know more
information about them?
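For the first failure, a good starting point is libvirt's client-side
tracing, which shows exactly which step demands authentication
(LIBVIRT_DEBUG is a standard libvirt environment variable; the host
addresses are the ones from the post, and the remote-side checks are
suggestions, not a confirmed diagnosis):

```shell
# Trace the whole connection attempt on the client:
LIBVIRT_DEBUG=1 virsh -c qemu+ssh://9.113.51.247:22/system list

# On the remote host, check how libvirtd authenticates local (UNIX-socket)
# clients; auth_unix_rw = "sasl" or polkit settings here would explain an
# "authentication required" error even when SSH itself succeeds:
ssh root@9.113.51.247 'grep -E "^auth_unix" /etc/libvirt/libvirtd.conf'

# The second failure is literal: the qemu+ssh transport runs a helper on
# the remote side to reach the libvirtd socket, and that helper (socat)
# is missing on 9.121.58.19:
ssh root@9.121.58.19 'zypper install socat'
```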
10 years, 5 months
[libvirt-users] Live snapshots of a single block device
by Andrew Martin
Hello,
I am working on a script to automatically create live snapshots of running VMs using qemu-kvm 1.4.0 and libvirt 1.0.2. If a VM has multiple disks, I'd like to back them up individually with separate calls to snapshot-create-as, so I can more easily manage the disk images. The code I have now is essentially as follows:
virsh snapshot-create-as --domain "vmname" --name "snapshotname" --description "snapshot description" --disk-only --atomic --no-metadata --diskspec "vda"
I run this command for each block device (e.g. vda, vdb, etc.). However, after running the above command (to back up only vda), I see that a snapshot has also been created for vdb. I tried running it with --print-xml, which gives this output:
<domainsnapshot>
<name>05-22-14_17-09-25</name>
<description>05-22-14_17-09-25</description>
<disks>
<disk name='vda'/>
</disks>
</domainsnapshot>
What am I doing wrong - how can I tell snapshot-create-as to create an external snapshot for a specific block device only (not all block devices)?
Also, while looking at the manpage, does the --live option do anything different if used with the above command?
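A detail worth checking against the virsh man page: with --disk-only,
disks not named in a --diskspec still default to an external snapshot, so
each disk to be left alone has to be excluded explicitly. A sketch using
the same names as above (snapshot=no for every disk that should be
skipped):

```shell
virsh snapshot-create-as --domain "vmname" --name "snapshotname" \
  --description "snapshot description" --disk-only --atomic --no-metadata \
  --diskspec vda,snapshot=external --diskspec vdb,snapshot=no
```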
Thanks,
Andrew Martin
10 years, 5 months
[libvirt-users] Libvirt Python Snapshot - Domain Crashing
by Sijo Jose
Hi,
I'm using libvirt (1.0.0) with Python for managing virtual machines,
but while taking multiple snapshots the domain crashes.
Snapshot XML
-------------------------
<domainsnapshot>
<name>snp1</name>
<creationTime></creationTime>
<description>Description</description>
<state></state>
<domain>
<uuid></uuid>
</domain>
<parent>
<name></name>
</parent>
</domainsnapshot>
----------------
API Call
--------------
snp1=domain1.snapshotCreateXML(snp_xml,0)
Here I'm passing the flag value as zero.
It created the first snapshot without any error, but when I tried a second
snapshot:
1) The domain switched its state to paused and is not coming back.
The OS installed in the domain is Ubuntu 12.04.
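For reference, most of the elements in that XML (<creationTime>, <state>,
<parent>, and the <domain>/<uuid> block) are output elements that libvirt
fills in itself; a minimal input document only needs <name> and optionally
<description>. A sketch, where the final snapshotCreateXML call assumes an
already-open libvirt domain object `dom` (hypothetical name):

```python
import xml.etree.ElementTree as ET

# Minimal snapshot input XML: libvirt computes creationTime, state and
# parent itself, so they should not be supplied on input.
snap_xml = """
<domainsnapshot>
  <name>snp2</name>
  <description>second snapshot</description>
</domainsnapshot>
"""

# Sanity-check that the document is well-formed XML before submitting it:
root = ET.fromstring(snap_xml)
assert root.tag == 'domainsnapshot'

# Hypothetical call, assuming `dom` is a libvirt.virDomain object:
# snap = dom.snapshotCreateXML(snap_xml, 0)
```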
Rgds
-Sijo
10 years, 5 months
[libvirt-users] Wrong tty at autologin
by Mateusz Malicki
Hello,
I am working on a system that uses systemd.
On the host, the user session (uid=5000) starts automatically, but in the container I get the following error message:
May 12 00:18:15 localhost user-session-launch[110]: pam_systemd(login:session): Asking logind to create session: uid=5000 pid=110 service=login type=tty class=user seat=seat0 vtnr=1 tty=tty1 display= remote=no remote_user= remote_host=
May 12 00:18:16 localhost user-session-launch[110]: pam_systemd(login:session): Failed to create session: Invalid argument
May 12 00:18:16 localhost user-session-launch[110]: pam_unix(login:session): session opened for user app by (uid=0)
After hard-coding tty=pts/0 and vtnr=0, the session starts up.
Is there any way to specify these values in the XML config file or some other config file?
Where do these values come from?
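In case it helps narrow it down: libvirt LXC guests only get pty-backed
consoles, declared in the domain XML like this (standard libvirt console
syntax; this controls how many ptys the container has, which is why the
session sees pts/0 rather than a real tty1/vtnr):

```xml
<console type='pty'>
  <target type='lxc' port='0'/>
</console>
```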
Regards,
Mateusz
10 years, 5 months
[libvirt-users] kvm domain won't start with vhost-net loaded
by Raphael Bauduin
Hi,
I have multiple kvm domains running fine, but to improve network
performance, I wanted to try vhost-net. However, once I load the module:
modprobe vhost-net
I can't start the domain anymore:
virsh start kvmGitInt
gives the error:
internal error: process exited while connecting to monitor:
qemu-system-x86_64: -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26:
vhost-net requested but could not be initialized
qemu-system-x86_64: -netdev tap,fd=25,id=hostnet0,vhost=on,vhostfd=26:
Device 'tap' could not be initialized
This is a 3.14.4 kernel, with libvirt 1.2.1 and qemu 1.7.0.
I have searched online but haven't found any indication of how to solve it
in my case (the kernel is not SELinux-enabled, I'm not using systemd, ...).
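In the absence of SELinux, the usual suspects here are the device node
itself and libvirt's cgroup device ACL (a sketch; the paths are the stock
libvirt/qemu ones, so adjust for the distro):

```shell
# qemu needs read/write access to /dev/vhost-net when vhost=on:
ls -l /dev/vhost-net

# If libvirt confines guests with a devices cgroup, /dev/vhost-net must
# be listed in cgroup_device_acl in /etc/libvirt/qemu.conf:
grep -A5 cgroup_device_acl /etc/libvirt/qemu.conf
```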
Any suggestion on how to solve this is very welcome!
Thanks in advance
Raph
10 years, 5 months