[libvirt-users] some problem with snapshot by libvirt
by xingxing gao
Hi all, I am using libvirt to manage my VMs. These days I have been
testing libvirt snapshots, but I ran into a problem:
The snapshot was created with this command:
snapshot-create-as win7 --disk-only --diskspec
vda,snapshot=external --diskspec hda,snapshot=no
but when I tried to revert to the snapshot created by the command
above, I got the error below:
virsh # snapshot-revert win7 1338041515 --force
error: unsupported configuration: revert to external disk snapshot not
supported yet
version:
virsh # version
Compiled against library: libvir 0.9.4
Using library: libvir 0.9.4
Using API: QEMU 0.9.4
Running hypervisor: QEMU 1.0.93
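For what it's worth, until revert is implemented, the usual workaround is to point the domain back at the snapshot's backing file by hand. Roughly (the overlay path here is an assumption):

```shell
# Find the backing file of the external snapshot overlay
qemu-img info /path/to/vda.1338041515

# Drop libvirt's snapshot metadata, keeping the disk files on disk
virsh snapshot-delete win7 1338041515 --metadata

# Edit the domain so vda's <source file='...'/> points at the backing file again
virsh edit win7
```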
[libvirt-users] NWFilter and IPv6
by Guido Winkelmann
Hi,
Libvirt's nwfilter ships a number of useful filter scripts by default, but
none to handle IPv6 traffic. Is there a particular reason for that, or is that
just because nobody has got around to that yet?
One interesting thing about dealing with IPv6 traffic is that hosts often have
several auto-configured addresses, usually at least one auto-configured link-
local address under fe80::/64 and one auto-configured one from router-
advertisements. For writing filter rules, it would be nice to have some
function/notation to calculate those auto-configured addresses for the user,
so we can write something like this:
<rule action='return' direction='out' priority='500'>
<ipv6 srcipaddr='ipv6_autoconf($IPV6_PREFIX[@1], $IPV6_MASK[@1], $MAC)'/>
</rule>
<rule action='return' direction='out' priority='500'>
<ipv6 srcipaddr='$IPV6'/>
</rule>
<rule action='drop' direction='out' priority='1000'/>
or maybe more like this:
<ipv6 mode='autoconf' field='srcipaddr' prefix='$IPV6_PREFIX[@1]'
netmask='$IPV6_MASK[@1]' mac='$MAC'/>
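For what it's worth, the autoconf part is just the EUI-64 construction from RFC 4291: flip the universal/local bit of the MAC's first octet and insert ff:fe in the middle. A minimal Python sketch of what such an ipv6_autoconf() helper would have to compute (function name and example MAC are mine, not nwfilter syntax):

```python
import ipaddress

def eui64_address(prefix: str, mac: str) -> str:
    """Compute the SLAAC (EUI-64) address a host autoconfigures
    from a /64 prefix and its MAC address."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                      # flip the universal/local bit
    iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])   # insert ff:fe in the middle
    net = ipaddress.IPv6Network(prefix, strict=False)
    return str(net[int.from_bytes(iid, "big")])       # prefix + interface id

# The link-local address for a typical KVM MAC:
print(eui64_address("fe80::/64", "52:54:00:12:34:56"))  # fe80::5054:ff:fe12:3456
```

A filter helper would essentially evaluate this once per address prefix at rule-instantiation time.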
Guido
[libvirt-users] error: unsupported configuration: block I/O throttling not supported with this QEMU binary
by hzguanqiang
Hi, guys.
I tried to change the block I/O throttle using 'virsh blkdeviotune' while the VM was not running, and it reported an error:
# virsh blkdeviotune instance-000000dc /dev/loop0 --total-bytes-sec 20000000 --total-iops-sec 20 --config
error: Unable to change block I/O throttle
error: unsupported configuration: block I/O throttling not supported with this QEMU binary
I can do this when the VM is running. Is it a bug? How can I fix this problem?
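In case it helps: with the domain shut off, the same limits can also be written into the persistent domain XML with 'virsh edit'. A sketch, with the values from the command above (the disk target is an assumption, and whether the limits take effect at next boot still depends on the qemu binary):

```xml
<disk type='file' device='disk'>
  <!-- existing driver/source/target elements stay as they are -->
  <iotune>
    <total_bytes_sec>20000000</total_bytes_sec>
    <total_iops_sec>20</total_iops_sec>
  </iotune>
</disk>
```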
BTW, my libvirt version is 0.9.13:
# virsh version
Compiled against library: libvir 0.9.13
Using library: libvir 0.9.13
Using API: QEMU 0.9.13
Running hypervisor: QEMU 1.1.2
Thanks.
2012-11-28
hzguanqiang
Re: [libvirt-users] [Freeipa-users] libvirt with vnc freeipa
by Simo Sorce
Hi Natxo,
On Fri, 2012-11-30 at 13:06 +0100, Natxo Asenjo wrote:
> hi,
>
> I'm following the howto on
> http://freeipa.org/page/Libvirt_with_VNC_Consoles to authenticate
> users for virsh with ipa.
>
> I have it mostly working :-) except for the fact that libvirtd is not
> respecting the sasl_allowed_username_list parameter.
>
> If I do not set it, and I have a realm ticket, then I can log in with virsh
> or virt-manager and I get tickets for libvirt/vnc services.
>
> If I do set it, then it tells me the client is not in the whitelist,
> so I cannot log in :-)
>
>
> 2012-11-30 12:00:53.403+0000: 7786: error :
> virNetSASLContextCheckIdentity:146 : SASL client admin not allowed in
> whitelist
> 2012-11-30 12:00:53.403+0000: 7786: error :
> virNetSASLContextCheckIdentity:150 : Client's username is not on the
> list of allowed clients
> 2012-11-30 12:00:53.403+0000: 7786: error :
> remoteDispatchAuthSaslStep:2447 : authentication failed:
> authentication failed
> 2012-11-30 12:00:53.415+0000: 7781: error : virNetSocketReadWire:999 :
> End of file while reading data: Input/output error
>
> Is this a question for the libvirt folks or is it ok to post it here?
Seems more like a libvirt or maybe even a cyrus-sasl question, but I would
be interested in knowing what is going on.
Have you used a full principal name including the realm in the list, or
just the bare user names ?
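I.e., something like this in /etc/libvirt/libvirtd.conf (realm and names
made up):

```
sasl_allowed_username_list = ["admin@EXAMPLE.COM", "natxo@EXAMPLE.COM"]
```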
CCing libvirt-users.
Simo.
--
Simo Sorce * Red Hat, Inc * New York
[libvirt-users] error: argument unsupported: unable to handle disk requests in snapshot
by 于长江
Hello, a problem occurred while making a snapshot of a guest; asking for help...
1. The XML for the domain snapshot looks like this:
# cat deployment.1
<domainsnapshot>
<name>a</name>
<disks>
<disk name='/home/qcow2/disk.0'>
<driver type='qcow2'/>
<source file='/home/qcow2/disk.3'/>
</disk>
</disks>
</domainsnapshot>
2. When I execute the command, an error occurs:
# virsh snapshot-create qcow2 deployment.1
error: argument unsupported: unable to handle disk requests in snapshot
3. When I add the parameter '--disk-only', another error occurs:
# virsh snapshot-create qcow2 deployment.1 --disk-only
error: Failed to take snapshot: unknown command: 'snapshot_blkdev'
4. My libvirtd version was 0.9.4; I updated to 1.0.0 and still had this problem:
# libvirtd --version
libvirtd (libvirt) 1.0.0
5. This is my guest:
# ls -l
-rw-r--r-- 1 oneadmin oneadmin 1540 2012-11-27 15:27 deployment.0
-rw-r--r-- 1 root root 204 2012-11-27 21:28 deployment.1
-rw-r--r-- 1 root root 5772345344 2012-11-28 17:42 disk.0
-rw-r--r-- 1 root root 459264 2012-11-27 21:02 disk.1
-rw-r--r-- 1 root root 372736 2012-11-27 15:25 disk.2
# virsh list
Id    Name                 State
----------------------------------------------------
1     qcow2                running
# qemu-img info disk.0
image: disk.0
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 5.4G
cluster_size: 65536
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1354019482 142M 2012-11-27 20:31:22 00:00:08.971
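One more thing I wonder about (a guess on my part: upgrading libvirt does not upgrade the emulator): live external snapshots need a qemu binary whose monitor knows 'snapshot_blkdev'. That can be checked against the running domain, e.g.:

```shell
# Ask the running qemu whether the monitor command exists (domain name as above)
virsh qemu-monitor-command qcow2 --hmp 'help snapshot_blkdev'
```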
Has anyone else run into the same problem?
于长江 | Neusoft
[libvirt-users] error when configuring management access via PolicyKit
by Arindam Choudhury
Hi,
Libvirtd is in listen mode.
/etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "sasl"
I am trying to set up polkit authentication using
http://wiki.libvirt.org/page/SSHPolicyKitSetup
[root@aopcach ~]# cat
/etc/polkit-1/localauthority/50-local.d/50-org.arindam-libvirt-remote-access.pkla
[Remote libvirt SSH access]
Identity=unix-user:arindam
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
but it fails with:
[arindam@aopcach ~]$ virsh -c qemu+ssh:///system
arindam@localhost's password:
error: authentication failed:
polkit\56retains_authorization_after_challenge=1
Authorization requires authentication but no agent is available.
error: failed to connect to the hypervisor
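A side note for debugging: polkit can be asked directly how it resolves the action (the action id is the one from the .pkla above):

```shell
pkaction --verbose --action-id org.libvirt.unix.manage
```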
[libvirt-users] How to pass through a block device from host to the guest OS in KVM?
by Dennis Chen
Hi,
Is there any API or XML schema that can pass a local block device
of the host, e.g. /dev/sda1, through to its running guest OS?
Specifically, say there is a block device /dev/sda1 in the host, mount
it: mount /dev/sda1 /mnt; ls /mnt:
a.file, b.file
After the passthrough, the same /dev/sda1 can be used by the guest OS (maybe
the dev name is changed): mount /dev/sda1 /mnt; ls /mnt:
a.file, b.file.
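If it helps, the usual way is a <disk type='block'> element in the guest definition. A sketch (the guest-side target name is an assumption), attached with 'virsh attach-device <domain> disk.xml' or added via 'virsh edit':

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sda1'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```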
BRs,
Dennis
[libvirt-users] Live migration with non-shared storage leads to corrupted file system
by Xinglong Wu
Hi,
We have the following environment for live migration with
non-shared storage between two nodes:
Host OS: RHEL 6.3
Kernel: 2.6.32-279.el6.x86_64
Qemu-kvm: 1.2.0
libvirt: 0.10.1
and use "virsh" to do the job as
virsh -c 'qemu:///system' migrate --live --persistent
--copy-storage-all <guest-name> qemu+ssh://<target-node>/system
The above command itself returns no error, and the migrated domain in
the destination node starts fine. But when I log into the migrated
domain, some commands fail immediately. And if I shut down the
domain, it won't boot up any more, complaining about a corrupted
file system. Furthermore, I can confirm that before the migration
the domain worked flawlessly, after thorough testing.
The log file in /var/log/libvirt/qemu looks fine without any warnings
or errors. And the only error message I can observe is found at
/var/log/libvirt/libvirtd.log
2012-11-25 10:00:55.001+0000: 15398: warning :
qemuDomainObjBeginJobInternal:838 : Cannot start job (query, none) for
domain testVM; current job is (async nested, migration out) owned by
(15397, 15397)
2012-11-25 10:00:55.001+0000: 15398: error :
qemuDomainObjBeginJobInternal:842 : Timed out during operation: cannot
acquire state change lock
2012-11-25 10:00:57.009+0000: 15393: error : virNetSocketReadWire:1184
: End of file while reading data: Input/output error
I also noticed that the raw image file used by the migrated domain has
the different sizes (reported by "du") before and after the migration.
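One workaround worth trying, a sketch under the assumption that the size mismatch is the problem (path and size are made up): pre-create the destination image with the same virtual size before migrating, so --copy-storage-all fills an existing file instead of allocating one itself:

```shell
# On the destination host, create a sparse image of the same virtual size
qemu-img create -f raw /var/lib/libvirt/images/testVM.img 20G

# Then run the same migrate command from the source host
virsh -c 'qemu:///system' migrate --live --persistent \
    --copy-storage-all testVM qemu+ssh://<target-node>/system
```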
Has anybody had a similar experience with live migration on
non-shared storage? It apparently leads to failed migrations in
libvirt, but no critical errors are ever reported.
Brett
[libvirt-users] IPV6 configuration
by Patrick Chemla
Hi,
I am running package libvirt-0.9.11.6-1.fc17.x86_64 on a
kernel-3.6.5-1.fc17.x86_64 Fedora 17.
I have set up the host with an IPv6 address, and I made some successful
tests transferring in IPv6 mode to/from other hosts.
I want to set up libvirt to get an IPv6 address for its local network
interface, and the same for the guests.
I want the guests to be able to communicate directly on the internet
through this ipv6 address.
I have added an <ip family="ipv6" address="2001:xxxxxxx" /> line to the
/etc/libvirtd/qemu/network/default.xml file, but when I restart the host
this line disappears and no IPv6 address is added to the libvirt interface.
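One guess as to why the line disappears: the files under /etc/libvirt are managed by libvirtd, and hand edits there get overwritten. Defining the address through virsh should persist. A sketch (the address is an example):

```shell
virsh net-edit default
# then, inside the <network> element, add something like:
#   <ip family='ipv6' address='2001:db8::1' prefix='64'/>
virsh net-destroy default
virsh net-start default
```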
I have added an ipv6 address to the guest in the same range of the host
address. I can ping6 the address locally on the guest, but can't access
the host.
Is there somewhere a howto-libvirt-ipv6 to understand how to set up the
whole stuff?
Thanks for the help
Patrick
[libvirt-users] Block device IO limits
by Davide Guerri
Hi all,
is there a way to limit the throughput (or the number of operations per second) of a block device?
I ask because I have a couple of misconfigured VMs that are performing a lot of disk activity (swapping, I'd bet, but I don't have access to their OS). Since their storage is on an NFS share, they are slowing down other VMs on the same share.
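For QEMU guests, libvirt does expose per-device throttling. A hedged sketch on a running domain (domain name, device target, and the limit values are assumptions; needs a reasonably recent libvirt/qemu):

```shell
virsh blkdeviotune <domain> vda --total-bytes-sec 10485760 --total-iops-sec 100 --live

# Running the command without limit arguments shows the current settings:
virsh blkdeviotune <domain> vda
```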
Thank you.
Davide.