[libvirt-users] Snapshot without volumes temporarily
by Hubert Chiang
Hello,
I want to take a snapshot of a VM that has volumes attached, but I want to
skip the volumes when taking the snapshot (in other words, take the snapshot
without detaching the volumes).
So I tried to write a snapshot XML, following
http://libvirt.org/formatsnapshot.html, as below:
vda is my VM basic disk (qcow2)
vdb is a volume (raw)
================ t1.xml ===============
<domainsnapshot>
  <disks>
    <disk name='vda' snapshot='internal'>
      <source file='/var/lib/libvirt/VM1/disk' />
    </disk>
    <disk name='vdb' snapshot='no' />
  </disks>
</domainsnapshot>
=====================================
Then I ran this command: # virsh snapshot-create VM1 --xmlfile t1.xml
and got this error:
error: argument unsupported: unable to handle disk requests in snapshot
My environment is libvirt 0.9.8 and QEMU 1.0.0 on Ubuntu 12.04.02.
I did this before on libvirt 0.9.2 and QEMU 0.14.1 on Ubuntu 11.10 with the
following steps, and it succeeded (sketched as commands below):
Step 1. Copy the XML from /etc/libvirt/qemu/VM1.xml to VM1.xml.backup
Step 2. Edit VM1.xml to remove the disk element that was created for the volume
Step 3. Run the virsh command: virsh snapshot-create VM1
Step 4. Move VM1.xml.backup back to VM1.xml
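Roughly, as commands (a sketch only; the path is from my setup, and the edit
in step 2 means deleting the volume's <disk>...</disk> element for vdb by hand):
# cp /etc/libvirt/qemu/VM1.xml VM1.xml.backup
# vi /etc/libvirt/qemu/VM1.xml      (remove the <disk> block for vdb)
# virsh snapshot-create VM1
# mv VM1.xml.backup /etc/libvirt/qemu/VM1.xml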
This worked on libvirt 0.9.2, but on 0.9.8 it doesn't, because libvirt now
reads the configuration from memory rather than from the XML file in
/etc/libvirt/qemu/.
What can I do with libvirt 0.9.8?
Thanks in advance.
[libvirt-users] virsh migrate -- no route to host
by Hari Pyla
Hi,
I am trying to migrate a guest domain from one node to another. I've
tried several options of the virsh migrate command, but in vain. It
seems to be a networking issue. I wanted to make sure that my setup is
correct and that I am not missing anything.
I've issued the below command on the source node (n0):
[user@n0 ~]$ virsh --connect qemu:///system migrate --verbose Fedora-17-x86_64-1 qemu+ssh://n1/system
error: unable to connect to server at 'n1:49156': No route to host
I was wondering if you have any ideas on how to fix this issue.
My current setup/configuration:
1. virsh version : 0.10.2
2. The user has all the necessary access controls set up; I followed
http://libvirt.org/auth.html
I am able to connect to both nodes locally and remotely. I am also
able to start, stop, and perform other actions on guest domains on the
local and remote nodes, e.g.:
/* access destination node */
[user@n0 ~]$ virsh --connect qemu+ssh://n1/system list --all
Id Name State
----------------------------------------------------
/* access local node */
[user@n0 ~]$ virsh --connect qemu:///system list --all
Id Name State
----------------------------------------------------
2 Fedora-17-x86_64-1 running
- Fedora-18-x86_64-DVD shut off
3. SSH keys are configured for passwordless access to both nodes.
4. I've opened ports 49152-49215 on the destination node (n1); please
see the iptables listing below (a quick rule-ordering check is sketched
after this list).
[user@n1 images]$ sudo iptables -L
[sudo] password for user:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere            udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere            udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:bootps
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ssh
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited
ACCEPT     tcp  --  anywhere             anywhere            tcp dpts:49152:49215
ACCEPT     tcp  --  anywhere             anywhere            tcp spts:49152:49215

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             192.168.122.0/24    state RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere
ACCEPT     all  --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere            reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere            reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
5. I've restarted libvirtd after updating the iptables rules. I don't
think that is necessary, but I did it anyway.
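One thing I am also going to double-check on my side (only a guess, sketched
as commands and not something I have confirmed): whether the ACCEPT rules for
49152:49215 take effect at all, since in the listing above they come after the
catch-all REJECT in the INPUT chain. For example, inserting the rule at the
top of the chain and retrying the migration:
[user@n1 ~]$ sudo iptables -I INPUT 1 -p tcp --dport 49152:49215 -j ACCEPT
[user@n0 ~]$ virsh --connect qemu:///system migrate --verbose Fedora-17-x86_64-1 qemu+ssh://n1/system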
I was wondering if I am missing something. Any help is greatly appreciated.
Thanks,
--Hari
[libvirt-users] netfilter+libvirt=(smth got broken?)
by Nikolai Zhubr
Hello,
I'm having a problem setting up traffic filtering for a virtual machine
managed by libvirt. The strange thing is that such a setup has been working
fine for me on an older version of the distro (namely, openSUSE 11.3 with
updates, kernel 2.6.34, libvirt 0.8.8) but refuses to work on the shiny new
openSUSE 12.4 (kernel 3.7.10, libvirt 1.0.2).
The definition of the filter in question is pretty simple:
<filter name='some-filt' chain='ipv4'>
  <rule action='accept' direction='in'>
    <tcp dstportstart='110'/>
  </rule>
  <rule action='drop' direction='inout'>
    <all/>
  </rule>
</filter>
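For completeness, this is roughly how the filter gets attached on my side (a
sketch; the file name is simply whatever I saved the definition as):
# virsh nwfilter-define some-filt.xml
and the guest's <interface> element in the domain XML carries a reference
like <filterref filter='some-filt'/>.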
So basically it should allow incoming connections to the specified port
and nothing else. After activating this filter on the box in question,
connections to port 110 started to fail (timeout). Examining the iptables
rules manually and comparing them to the rules from my old box did not
reveal anything suspicious to me. However, through pure guesswork, I
eventually managed to "fix" the problem by manually editing the three
relevant rules as follows:
--A FI-vnet0 -p tcp -m tcp --sport 110 -m conntrack --ctstate ESTABLISHED -m conntrack --ctdir ORIGINAL -j RETURN
+-A FI-vnet0 -p tcp -m tcp --sport 110 -m conntrack --ctstate ESTABLISHED -m conntrack --ctdir REPLY -j RETURN
--A FO-vnet0 -p tcp -m tcp --dport 110 -m conntrack --ctstate NEW,ESTABLISHED -m conntrack --ctdir REPLY -j ACCEPT
+-A FO-vnet0 -p tcp -m tcp --dport 110 -m conntrack --ctstate NEW,ESTABLISHED -m conntrack --ctdir ORIGINAL -j ACCEPT
--A HI-vnet0 -p tcp -m tcp --sport 110 -m conntrack --ctstate ESTABLISHED -m conntrack --ctdir ORIGINAL -j RETURN
+-A HI-vnet0 -p tcp -m tcp --sport 110 -m conntrack --ctstate ESTABLISHED -m conntrack --ctdir REPLY -j RETURN
So essentially, just manually switching the "--ctdir" values to their
opposites makes the filter allow connections to port 110. I've also
verified that the filter still blocks unwanted connections originating
from port 110 inside the VM, exactly as it should:
(in VM): netcat -v -v -n -p 110 192.168.122.1 22
(UNKNOWN) [192.168.122.1] 22 (?) : Connection timed out
sent 0, rcvd 0
I then compared /proc/net/nf_conntrack on both (old and new) boxes.
They look roughly the same; nothing suspicious.
This all looks to me as if the "--ctdir" argument had somehow magically
changed its meaning to the opposite, but that just cannot be! I'm out of
ideas and looking for insights. Any hints would be appreciated quite a lot.
Thank you.
Nikolai
[libvirt-users] Bug in DOMINFO command when balloon driver is used on a VM with more than 8 GB of MaxMemory?
by fc lists
Hi,
I sent this to the wrong list (libvirt-devel) on Friday, so I am trying to
send it to the correct one this time. Apologies for the double posting.
I also created a ticket on bugzilla.redhat.com for this:
https://bugzilla.redhat.com/show_bug.cgi?id=927336
Still, I am posting it here because it is entirely possible that I am doing
something wrong and someone here will spot it.
Description of the problem:
When the virsh setmem command is used to inflate (or deflate) the balloon
on a VM that has 8 GB of MaxMemory or more allocated, the information
reported by virsh dominfo and virsh dumpxml after the balloon change has
been performed is wrong.
The balloon itself is inflated correctly, though, and that is verifiable
both inside the VM and through the "virsh dommemstat" command.
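(For reference, the checks I use each time, matching the examples further
down: "virsh dommemstat <domain>" on the host and "free -m" inside the guest.)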
The platform is CentOS 6.4 (the problem was present in 6.3 as well, starting
with libvirt-0.9.10-21.el6.3.7).
The patch that is actually causing the problem is patch number 398 in the
src rpm for libvirt-0.9.10-21.el6.3.7:
libvirt-Wire-up-handling-for-QMP-s-BALLOON_EVENT.patch
(https://bugzilla.redhat.com/show_bug.cgi?id=884713)
I verified this by removing the patch and recompiling the rpm, which indeed
worked perfectly after that.
The problem exists in every version up to the latest (which is what I am
running now), but the change is now part of the libvirt source code itself
and no longer a separate patch in the rpm.
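For the record, the verification was roughly this (a sketch only; the exact
spec-file lines differ, and the srpm file name is reconstructed from the
version string above):
# rpm -ivh libvirt-0.9.10-21.el6.3.7.src.rpm
# (disable patch 398 in ~/rpmbuild/SPECS/libvirt.spec, i.e. its Patch/%patch entries)
# rpmbuild -ba ~/rpmbuild/SPECS/libvirt.spec
and then install and test the rebuilt packages.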
Software used:
# yum list installed | egrep -i "qemu|libvirt"
gpxe-roms-qemu.noarch 0.9.7-6.9.el6 @base
libvirt.x86_64 0.10.2-18.el6_4.2 @updates
libvirt-client.x86_64 0.10.2-18.el6_4.2 @updates
libvirt-python.x86_64 0.10.2-18.el6_4.2 @updates
qemu-img.x86_64 2:0.12.1.2-2.355.0.1.el6.centos.2 @updates
qemu-kvm.x86_64 2:0.12.1.2-2.355.0.1.el6.centos.2 @updates
qemu-kvm-tools.x86_64 2:0.12.1.2-2.355.0.1.el6.centos.2 @updates
Full example (long; includes debug logs):
1) Start the VM with 12 GB of MAX and CURRENT memory (12582912 KiB)
(the log lines below are from: tail -f libvirtd.log | grep -i balloon)
2013-03-22 21:06:34.438+0000: 23180: debug : qemuMonitorSetBalloon:1690 :
mon=0x7f3d6c1789e0 newmem=12582912
2013-03-22 21:06:34.438+0000: 23180: debug : virJSONValueToString:1133 :
result={"execute":"balloon","arguments":{"value":12884901888},"id":"libvirt-6"}
2013-03-22 21:06:34.438+0000: 23180: debug :
qemuMonitorJSONCommandWithFd:265 : Send command
'{"execute":"balloon","arguments":{"value":12884901888},"id":"libvirt-6"}'
for write with FD -1
2013-03-22 21:06:34.438+0000: 23180: debug : qemuMonitorSend:903 :
QEMU_MONITOR_SEND_MSG: mon=0x7f3d6c1789e0
msg={"execute":"balloon","arguments":{"value":12884901888},"id":"libvirt-6"}
2013-03-22 21:06:34.439+0000: 23179: debug : qemuMonitorIOWrite:461 :
QEMU_MONITOR_IO_WRITE: mon=0x7f3d6c1789e0
buf={"execute":"balloon","arguments":{"value":12884901888},"id":"libvirt-6"}
virsh # dominfo 2
Id: 2
Name: centos_jt
UUID: bc25a6c4-ba34-a593-47c7-6372999946d6
OS Type: hvm
State: running
CPU(s): 1
CPU time: 26.6s
Max memory: 12582912 KiB
Used memory: 12582912 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
2) Set memory to 8 GB (fails as described)
virsh # setmem 2 --live --config --size 8388608
2013-03-22 21:13:45.522+0000: 23183: debug : qemuMonitorSetBalloon:1690 :
mon=0x7f3d6c1789e0 newmem=8388608
2013-03-22 21:13:45.522+0000: 23183: debug : virJSONValueToString:1133 :
result={"execute":"balloon","arguments":{"value":8589934592
},"id":"libvirt-9"}
2013-03-22 21:13:45.522+0000: 23183: debug :
qemuMonitorJSONCommandWithFd:265 : Send command
'{"execute":"balloon","arguments":{"value":8589934592},"id":"libvirt-9"}'
for write with FD -1
2013-03-22 21:13:45.522+0000: 23183: debug : qemuMonitorSend:903 :
QEMU_MONITOR_SEND_MSG: mon=0x7f3d6c1789e0
msg={"execute":"balloon","arguments":{"value":8589934592},"id":"libvirt-9"}
2013-03-22 21:13:45.523+0000: 23179: debug : qemuMonitorIOWrite:461 :
QEMU_MONITOR_IO_WRITE: mon=0x7f3d6c1789e0
buf={"execute":"balloon","arguments":{"value":8589934592},"id":"libvirt-9"}
2013-03-22 21:13:45.528+0000: 23179: debug : qemuMonitorIOProcess:353 :
QEMU_MONITOR_IO_PROCESS: mon=0x7f3d6c1789e0 buf={"timestamp": {"seconds":
1363986825, "microseconds": 528314}, "event": "BALLOON_CHANGE", "data":
{"actual": 12883853312}}
2013-03-22 21:13:45.528+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:152 : Line [{"timestamp": {"seconds":
1363986825, "microseconds": 528314}, "event": "BALLOON_CHANGE", "data":
{"actual": 12883853312}}]
2013-03-22 21:13:45.528+0000: 23179: debug : virJSONValueFromString:975 :
string={"timestamp": {"seconds": 1363986825, "microseconds": 528314},
"event": "BALLOON_CHANGE", "data": {"actual": 12883853312}}
2013-03-22 21:13:45.528+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:167 : QEMU_MONITOR_RECV_EVENT:
mon=0x7f3d6c1789e0 event={"timestamp": {"seconds": 1363986825,
"microseconds": 528314}, "event": "BALLOON_CHANGE", "data": {"actual":
12883853312}}
2013-03-22 21:13:45.528+0000: 23179: debug :
qemuMonitorJSONIOProcessEvent:138 : handle BALLOON_CHANGE handler=0x4b4de0
data=0x1685ce0
2013-03-22 21:13:45.528+0000: 23179: debug :
qemuMonitorEmitBalloonChange:1151 : mon=0x7f3d6c1789e0
2013-03-22 21:13:45.528+0000: 23179: debug :
qemuProcessHandleBalloonChange:1248 : Updating balloon from 12582912 to
12581888 kb
2013-03-22 21:13:46.528+0000: 23179: debug : qemuMonitorIOProcess:353 :
QEMU_MONITOR_IO_PROCESS: mon=0x7f3d6c1789e0 buf={"timestamp": {"seconds":
1363986826, "microseconds": 352941}, "event": "BALLOON_CHANGE", "data":
{"actual": 12884901888}}
2013-03-22 21:13:46.528+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:152 : Line [{"timestamp": {"seconds":
1363986826, "microseconds": 352941}, "event": "BALLOON_CHANGE", "data":
{"actual": 12884901888}}]
2013-03-22 21:13:46.528+0000: 23179: debug : virJSONValueFromString:975 :
string={"timestamp": {"seconds": 1363986826, "microseconds": 352941},
"event": "BALLOON_CHANGE", "data": {"actual": 12884901888}}
2013-03-22 21:13:46.528+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:167 : QEMU_MONITOR_RECV_EVENT:
mon=0x7f3d6c1789e0 event={"timestamp": {"seconds": 1363986826,
"microseconds": 352941}, "event": "BALLOON_CHANGE", "data": {"actual":
12884901888}}
2013-03-22 21:13:46.528+0000: 23179: debug :
qemuMonitorJSONIOProcessEvent:138 : handle BALLOON_CHANGE handler=0x4b4de0
data=0x1686770
2013-03-22 21:13:46.528+0000: 23179: debug :
qemuMonitorEmitBalloonChange:1151 : mon=0x7f3d6c1789e0
2013-03-22 21:13:46.528+0000: 23179: debug :
qemuProcessHandleBalloonChange:1248 : Updating balloon from 12581888 to
12582912 kb
virsh # dominfo 2
Id: 2
Name: centos_jt
UUID: bc25a6c4-ba34-a593-47c7-6372999946d6
OS Type: hvm
State: running
CPU(s): 1
CPU time: 41.6s
Max memory: 12582912 KiB
Used memory: 12582912 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
virsh # dommemstat 2
actual 8388608
rss 599916
The balloon was actually inflated, and in the VM I can see 8 GB of RAM
available with free.
3) setmem to 3000000 (fails as described)
2013-03-22 21:18:19.182+0000: 23181: debug : qemuMonitorSetBalloon:1690 :
mon=0x7f3d6c1789e0 newmem=3000000
2013-03-22 21:18:19.182+0000: 23181: debug : virJSONValueToString:1133 :
result={"execute":"balloon","arguments":{"value":3072000000
},"id":"libvirt-63"}
2013-03-22 21:18:19.182+0000: 23181: debug :
qemuMonitorJSONCommandWithFd:265 : Send command
'{"execute":"balloon","arguments":{"value":3072000000},"id":"libvirt-63"}'
for write with FD -1
2013-03-22 21:18:19.182+0000: 23181: debug : qemuMonitorSend:903 :
QEMU_MONITOR_SEND_MSG: mon=0x7f3d6c1789e0
msg={"execute":"balloon","arguments":{"value":3072000000},"id":"libvirt-63"}
2013-03-22 21:18:19.183+0000: 23179: debug : qemuMonitorIOWrite:461 :
QEMU_MONITOR_IO_WRITE: mon=0x7f3d6c1789e0
buf={"execute":"balloon","arguments":{"value":3072000000},"id":"libvirt-63"}
2013-03-22 21:18:19.184+0000: 23179: debug : qemuMonitorIOProcess:353 :
QEMU_MONITOR_IO_PROCESS: mon=0x7f3d6c1789e0 buf={"timestamp": {"seconds":
1363987099, "microseconds": 184245}, "event": "BALLOON_CHANGE", "data":
{"actual": 12883853312}}
2013-03-22 21:18:19.184+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:152 : Line [{"timestamp": {"seconds":
1363987099, "microseconds": 184245}, "event": "BALLOON_CHANGE", "data":
{"actual": 12883853312}}]
2013-03-22 21:18:19.184+0000: 23179: debug : virJSONValueFromString:975 :
string={"timestamp": {"seconds": 1363987099, "microseconds": 184245},
"event": "BALLOON_CHANGE", "data": {"actual": 12883853312}}
2013-03-22 21:18:19.184+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:167 : QEMU_MONITOR_RECV_EVENT:
mon=0x7f3d6c1789e0 event={"timestamp": {"seconds": 1363987099,
"microseconds": 184245}, "event": "BALLOON_CHANGE", "data": {"actual":
12883853312}}
2013-03-22 21:18:19.184+0000: 23179: debug :
qemuMonitorJSONIOProcessEvent:138 : handle BALLOON_CHANGE handler=0x4b4de0
data=0x1688320
2013-03-22 21:18:19.184+0000: 23179: debug :
qemuMonitorEmitBalloonChange:1151 : mon=0x7f3d6c1789e0
2013-03-22 21:18:19.191+0000: 23179: debug :
qemuProcessHandleBalloonChange:1248 : Updating balloon from 12582912 to
12581888 kb
2013-03-22 21:18:20.184+0000: 23179: debug : qemuMonitorIOProcess:353 :
QEMU_MONITOR_IO_PROCESS: mon=0x7f3d6c1789e0 buf={"timestamp": {"seconds":
1363987100, "microseconds": 184128}, "event": "BALLOON_CHANGE", "data":
{"actual": 11874074624}}
2013-03-22 21:18:20.184+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:152 : Line [{"timestamp": {"seconds":
1363987100, "microseconds": 184128}, "event": "BALLOON_CHANGE", "data":
{"actual": 11874074624}}]
2013-03-22 21:18:20.184+0000: 23179: debug : virJSONValueFromString:975 :
string={"timestamp": {"seconds": 1363987100, "microseconds": 184128},
"event": "BALLOON_CHANGE", "data": {"actual": 11874074624}}
2013-03-22 21:18:20.184+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:167 : QEMU_MONITOR_RECV_EVENT:
mon=0x7f3d6c1789e0 event={"timestamp": {"seconds": 1363987100,
"microseconds": 184128}, "event": "BALLOON_CHANGE", "data": {"actual":
11874074624}}
2013-03-22 21:18:20.184+0000: 23179: debug :
qemuMonitorJSONIOProcessEvent:138 : handle BALLOON_CHANGE handler=0x4b4de0
data=0x1685ce0
2013-03-22 21:18:20.184+0000: 23179: debug :
qemuMonitorEmitBalloonChange:1151 : mon=0x7f3d6c1789e0
2013-03-22 21:18:20.184+0000: 23179: debug :
qemuProcessHandleBalloonChange:1248 : Updating balloon from 12581888 to
11595776 kb
2013-03-22 21:18:21.184+0000: 23179: debug : qemuMonitorIOProcess:353 :
QEMU_MONITOR_IO_PROCESS: mon=0x7f3d6c1789e0 buf={"timestamp": {"seconds":
1363987100, "microseconds": 224930}, "event": "BALLOON_CHANGE", "data":
{"actual": 11661934592}}
2013-03-22 21:18:21.184+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:152 : Line [{"timestamp": {"seconds":
1363987100, "microseconds": 224930}, "event": "BALLOON_CHANGE", "data":
{"actual": 11661934592}}]
2013-03-22 21:18:21.184+0000: 23179: debug : virJSONValueFromString:975 :
string={"timestamp": {"seconds": 1363987100, "microseconds": 224930},
"event": "BALLOON_CHANGE", "data": {"actual": 11661934592}}
2013-03-22 21:18:21.184+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:167 : QEMU_MONITOR_RECV_EVENT:
mon=0x7f3d6c1789e0 event={"timestamp": {"seconds": 1363987100,
"microseconds": 224930}, "event": "BALLOON_CHANGE", "data": {"actual":
11661934592}}
2013-03-22 21:18:21.184+0000: 23179: debug :
qemuMonitorJSONIOProcessEvent:138 : handle BALLOON_CHANGE handler=0x4b4de0
data=0x1686770
2013-03-22 21:18:21.184+0000: 23179: debug :
qemuMonitorEmitBalloonChange:1151 : mon=0x7f3d6c1789e0
2013-03-22 21:18:21.184+0000: 23179: debug :
qemuProcessHandleBalloonChange:1248 : Updating balloon from 11595776 to
11388608
virsh # dominfo 2
Id: 2
Name: centos_jt
UUID: bc25a6c4-ba34-a593-47c7-6372999946d6
OS Type: hvm
State: running
CPU(s): 1
CPU time: 53.0s
Max memory: 12582912 KiB
Used memory: 11388608 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
virsh # dommemstat 2
actual 3000000
rss 599036
4) Restart the SAME VM with 4 GB of Current and Max memory (4194304 KiB)
2013-03-22 21:20:58.894+0000: 23182: debug : qemuMonitorSetBalloon:1690 :
mon=0x7f3d643eb5a0 newmem=4194304
2013-03-22 21:20:58.894+0000: 23182: debug : virJSONValueToString:1133 :
result={"execute":"balloon","arguments":{"value":4294967296},"id":"libvirt-6"}
2013-03-22 21:20:58.894+0000: 23182: debug :
qemuMonitorJSONCommandWithFd:265 : Send command
'{"execute":"balloon","arguments":{"value":4294967296},"id":"libvirt-6"}'
for write with FD -1
2013-03-22 21:20:58.894+0000: 23182: debug : qemuMonitorSend:903 :
QEMU_MONITOR_SEND_MSG: mon=0x7f3d643eb5a0
msg={"execute":"balloon","arguments":{"value":4294967296},"id":"libvirt-6"}
2013-03-22 21:20:58.894+0000: 23179: debug : qemuMonitorIOWrite:461 :
QEMU_MONITOR_IO_WRITE: mon=0x7f3d643eb5a0
buf={"execute":"balloon","arguments":{"value":4294967296},"id":"libvirt-6"}
virsh # dominfo 3
Id: 3
Name: centos_jt
UUID: bc25a6c4-ba34-a593-47c7-6372999946d6
OS Type: hvm
State: running
CPU(s): 1
CPU time: 15.4s
Max memory: 4194304 KiB
Used memory: 4194304 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
virsh # dommemstat 3
actual 4194304
rss 431384
5) Reduce RAM to 1 GB (works)
virsh # setmem 3 --live --config --size 1048576
2013-03-22 21:26:08.783+0000: 23183: debug : qemuMonitorSetBalloon:1690 :
mon=0x7f3d643eb5a0 newmem=1048576
2013-03-22 21:26:08.783+0000: 23183: debug : virJSONValueToString:1133 :
result={"execute":"balloon","arguments":{"value":1073741824},"id":"libvirt-11"}
2013-03-22 21:26:08.783+0000: 23183: debug :
qemuMonitorJSONCommandWithFd:265 : Send command
'{"execute":"balloon","arguments":{"value":1073741824},"id":"libvirt-11"}'
for write with FD -1
2013-03-22 21:26:08.783+0000: 23183: debug : qemuMonitorSend:903 :
QEMU_MONITOR_SEND_MSG: mon=0x7f3d643eb5a0
msg={"execute":"balloon","arguments":{"value":1073741824},"id":"libvirt-11"}
2013-03-22 21:26:08.784+0000: 23179: debug : qemuMonitorIOWrite:461 :
QEMU_MONITOR_IO_WRITE: mon=0x7f3d643eb5a0
buf={"execute":"balloon","arguments":{"value":1073741824},"id":"libvirt-11"}
2013-03-22 21:26:08.789+0000: 23179: debug : qemuMonitorIOProcess:353 :
QEMU_MONITOR_IO_PROCESS: mon=0x7f3d643eb5a0 buf={"timestamp": {"seconds":
1363987568, "microseconds": 789570}, "event": "BALLOON_CHANGE", "data":
{"actual": 4293918720}}
2013-03-22 21:26:08.789+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:152 : Line [{"timestamp": {"seconds":
1363987568, "microseconds": 789570}, "event": "BALLOON_CHANGE", "data":
{"actual": 4293918720}}]
2013-03-22 21:26:08.789+0000: 23179: debug : virJSONValueFromString:975 :
string={"timestamp": {"seconds": 1363987568, "microseconds": 789570},
"event": "BALLOON_CHANGE", "data": {"actual": 4293918720}}
2013-03-22 21:26:08.789+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:167 : QEMU_MONITOR_RECV_EVENT:
mon=0x7f3d643eb5a0 event={"timestamp": {"seconds": 1363987568,
"microseconds": 789570}, "event": "BALLOON_CHANGE", "data": {"actual":
4293918720}}
2013-03-22 21:26:08.789+0000: 23179: debug :
qemuMonitorJSONIOProcessEvent:138 : handle BALLOON_CHANGE handler=0x4b4de0
data=0x1686360
2013-03-22 21:26:08.789+0000: 23179: debug :
qemuMonitorEmitBalloonChange:1151 : mon=0x7f3d643eb5a0
2013-03-22 21:26:08.789+0000: 23179: debug :
qemuProcessHandleBalloonChange:1248 : Updating balloon from 4194304 to
4193280 kb
2013-03-22 21:26:09.789+0000: 23179: debug : qemuMonitorIOProcess:353 :
QEMU_MONITOR_IO_PROCESS: mon=0x7f3d643eb5a0 buf={"timestamp": {"seconds":
1363987569, "microseconds": 408889}, "event": "BALLOON_CHANGE", "data":
{"actual": 1073741824}}
2013-03-22 21:26:09.789+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:152 : Line [{"timestamp": {"seconds":
1363987569, "microseconds": 408889}, "event": "BALLOON_CHANGE", "data":
{"actual": 1073741824}}]
2013-03-22 21:26:09.789+0000: 23179: debug : virJSONValueFromString:975 :
string={"timestamp": {"seconds": 1363987569, "microseconds": 408889},
"event": "BALLOON_CHANGE", "data": {"actual": 1073741824}}
2013-03-22 21:26:09.789+0000: 23179: debug :
qemuMonitorJSONIOProcessLine:167 : QEMU_MONITOR_RECV_EVENT:
mon=0x7f3d643eb5a0 event={"timestamp": {"seconds": 1363987569,
"microseconds": 408889}, "event": "BALLOON_CHANGE", "data": {"actual":
1073741824}}
2013-03-22 21:26:09.789+0000: 23179: debug :
qemuMonitorJSONIOProcessEvent:138 : handle BALLOON_CHANGE handler=0x4b4de0
data=0x16826e0
2013-03-22 21:26:09.789+0000: 23179: debug :
qemuMonitorEmitBalloonChange:1151 : mon=0x7f3d643eb5a0
2013-03-22 21:26:09.789+0000: 23179: debug :
qemuProcessHandleBalloonChange:1248 : Updating balloon from 4193280 to
1048576 kb
virsh # dominfo 3
Id: 3
Name: centos_jt
UUID: bc25a6c4-ba34-a593-47c7-6372999946d6
OS Type: hvm
State: running
CPU(s): 1
CPU time: 28.6s
Max memory: 4194304 KiB
Used memory: 1048576 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: none
Security DOI: 0
virsh # dommemstat 3
actual 1048576
rss 411740
[libvirt-users] Failed to boot lxc with libvirt 1.0.3:2013-03-25 06:54:17.620+0000: 1: error : lxcContainerMountBasicFS:563 : Failed to mount /selinux on /selinux type selinuxfs flags=e opts=(null): No such device
by 张章
Hi all, I am using LXC with libvirt, but I can't boot an LXC container with libvirt 1.0.3 (libvirt 0.9.8 works). Below is my environment. Am I missing something?
lxc1.xml:
<domain type='lxc'>
  <name>lxc1</name>
  <memory>1024000</memory>
  <cputune>
    <shares>100</shares>
  </cputune>
  <os>
    <type>exe</type>
    <init>/bin/sh</init>
  </os>
  <devices>
    <console type='pty'/>
  </devices>
</domain>
I used "virsh -c lxc:/// create lxc1.xml" and got the following error:
2013-03-25 06:54:17.620+0000: 1: info : libvirt version: 1.0.3
2013-03-25 06:54:17.620+0000: 1: error : lxcContainerMountBasicFS:563 : Failed to mount /selinux on /selinux type selinuxfs flags=e opts=(null): No such device
2013-03-25 06:54:17.620+0000: 10668: info : libvirt version: 1.0.3
2013-03-25 06:54:17.620+0000: 10668: error : virLXCControllerRun:1468 : error receiving signal from container: Input/output error
My OS: Ubuntu 11.10
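(A check I can run on my side, assuming the relevant failure is the selinuxfs
mount reported above (only a guess): whether the host kernel knows about
selinuxfs at all, e.g.
# grep selinux /proc/filesystems
If that prints nothing, it would at least match the "No such device" error.)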
Can anyone help me? Thanks a lot!
regards,
zhangzhang
[libvirt-users] virsh list running VMs as idle, libvirt-1.0.3 xen-4.2
by Heiko L.
Hello,
I am using libvirt-1.0.3 + squeeze + xen-4.2 + linux-3.8 (compiled).
"virsh list --all" now shows the running VM, but with state "idle".
(libvirt-0.8.3 showed the VM only if state != running.)
What's going wrong?
regards Heiko
PS (if interested):
I tried many times; among others:
- test4: list running VM
-> the output differs between "virsh list" and "virsh -c ...localhost list" ?!
- summary (host: libvirtd-1.0.3)
action          | client    | ver   | result
list vm offline | virsh     | 0.8.3 | ok
list vm offline | virsh     | 1.0.3 | ok
list vm offline | convirt   | 2.1   | ok
start vm        | virsh     | 0.8.3 | ok
start vm        | virsh     | 1.0.3 | ok
start vm        | convirt   | 2.1   | fail (already exists)
list vm running | virsh     | 0.8.3 | fail (empty)
list vm running | virsh     | 1.0.3 | ok (idle)
list vm running | convirt   | 2.1   | ok
list vm running | virsh+ssh | 0.8.3 | ok (idle)
stop vm         | virsh     | 0.8.3 | ok
stop vm         | virsh     | 1.0.3 | ok
stop vm         | convirt   | 1.0.3 | ok
- details available at:
http://www-home.fh-lausitz.de/~hlehmann/tmp/13/virt/130323.prob.libvirt10...
[libvirt-users] virsh list shows shut-off VMs only
by Heiko L.
Hello,
I am using virsh with squeeze and xen-4.2 + linux-3.8 (compiled).
Problem: "virsh list --all" shows VMs only when they are shut off.
(See test3, "virsh list" after shutdown, below.)
virsh start/shutdown works correctly.
I suspect that this version of libvirt is too old.
Is there some workaround that does not require updating the system?
regards Heiko
details:
- test1 start vm
root@e2:/tmp# virsh start vm1_test1
Domain vm1_test1 started
root@e2:/tmp# xm list
Name ID Mem VCPUs State Time(s)
Domain-0 0 64487 8 r----- 6834.7
vm1_test1 6 96 1 -b---- 2.3
root@e2:/tmp# virsh list
Id Name State
----------------------------------
root@e2:/tmp# virsh list --all
Id Name State
----------------------------------
root@e2:/tmp#
-> the VM is running, but "virsh list" shows nothing
- test2 debug virsh list
root@e2:/tmp# LIBVIRT_DEBUG=1 virsh list | egrep -v "do_open|virRegister"
14:36:38.540: debug : virInitialize:340 : register drivers
14:36:38.541: debug : xenHypervisorInit:2010 : Using new hypervisor call: 40002
14:36:38.542: debug : xenHypervisorInit:2123 : Failed to find any Xen hypervisor method
14:36:38.542: debug : vboxRegister:122 : VBoxCGlueInit failed, using dummy driver
14:36:38.543: debug : virConnectOpenAuth:1499 : name=(null), auth=0x7f46145b2700, flags=0
14:36:38.543: debug : xenUnifiedOpen:326 : Trying hypervisor sub-driver
14:36:38.543: debug : xenUnifiedOpen:329 : Activated hypervisor sub-driver
14:36:38.543: debug : xenUnifiedOpen:337 : Trying XenD sub-driver
14:36:38.546: debug : xenUnifiedOpen:340 : Activated XenD sub-driver
14:36:38.546: debug : xenUnifiedOpen:353 : Trying XS sub-driver
14:36:38.547: debug : xenStoreOpen:345 : Failed to add event handle, disabling events
14:36:38.547: debug : xenUnifiedOpen:356 : Activated XS sub-driver
14:36:38.551: debug : xenUnifiedOpen:392 : Trying Xen inotify sub-driver
14:36:38.551: debug : xenInotifyXendDomainsDirLookup:139 : Looking for dom with uuid: d868acf8-56b7-b260-4c85-c2e1e2a67e9b
14:36:38.576: debug : virGetDomain:382 : New hash entry 0x65b0a0
14:36:38.576: debug : virDomainFree:2243 : domain=0x65b0a0
14:36:38.576: debug : virUnrefDomain:463 : unref domain 0x65b0a0 vm1_test1 1
14:36:38.576: debug : virReleaseDomain:416 : release domain 0x65b0a0 vm1_test1 d868acf8-56b7-b260-4c85-c2e1e2a67e9b
14:36:38.576: debug : virReleaseDomain:433 : unref connection 0x634ad0 2
14:36:38.576: debug : xenInotifyOpen:441 : Adding a watch on /var/lib/xend/domains
14:36:38.576: debug : xenInotifyOpen:453 : Building initial config cache
14:36:38.576: debug : xenInotifyOpen:460 : Registering with event loop
14:36:38.576: debug : xenInotifyOpen:464 : Failed to add inotify handle, disabling events
14:36:38.576: debug : xenUnifiedOpen:395 : Activated Xen inotify sub-driver
14:36:38.576: debug : doRemoteOpen:565 : proceeding with name = xen:///
14:36:38.576: debug : remoteIO:9888 : Do proc=66 serial=0 length=28 wait=(nil)
14:36:38.576: debug : remoteIO:9963 : We have the buck 66 0x7f461474d010 0x7f461474d010
14:36:38.577: debug : remoteIODecodeMessageLength:9316 : Got length, now need 64 total (60 more)
14:36:38.577: debug : remoteIOEventLoop:9814 : Giving up the buck 66 0x7f461474d010 (nil)
14:36:38.577: debug : remoteIO:9992 : All done with our call 66 (nil) 0x7f461474d010
14:36:38.577: debug : remoteIO:9888 : Do proc=1 serial=1 length=48 wait=(nil)
14:36:38.577: debug : remoteIO:9963 : We have the buck 1 0x65e2d0 0x65e2d0
14:36:38.608: debug : remoteIODecodeMessageLength:9316 : Got length, now need 56 total (52 more)
14:36:38.608: debug : remoteIOEventLoop:9814 : Giving up the buck 1 0x65e2d0 (nil)
14:36:38.608: debug : remoteIO:9992 : All done with our call 1 (nil) 0x65e2d0
14:36:38.608: debug : doRemoteOpen:942 : Adding Handler for remote events
14:36:38.608: debug : doRemoteOpen:949 : virEventAddHandle failed: No addHandleImpl defined. continuing without events.
14:36:38.609: debug : doRemoteOpen:565 : proceeding with name = xen:///
14:36:38.609: debug : remoteIO:9888 : Do proc=66 serial=0 length=28 wait=(nil)
14:36:38.609: debug : remoteIO:9963 : We have the buck 66 0x69e3b0 0x69e3b0
14:36:38.609: debug : remoteIODecodeMessageLength:9316 : Got length, now need 64 total (60 more)
14:36:38.609: debug : remoteIOEventLoop:9814 : Giving up the buck 66 0x69e3b0 (nil)
14:36:38.609: debug : remoteIO:9992 : All done with our call 66 (nil) 0x69e3b0
14:36:38.610: debug : remoteIO:9888 : Do proc=1 serial=1 length=48 wait=(nil)
14:36:38.610: debug : remoteIO:9963 : We have the buck 1 0x69e3b0 0x69e3b0
14:36:38.643: debug : remoteIODecodeMessageLength:9316 : Got length, now need 56 total (52 more)
14:36:38.643: debug : remoteIOEventLoop:9814 : Giving up the buck 1 0x69e3b0 (nil)
14:36:38.643: debug : remoteIO:9992 : All done with our call 1 (nil) 0x69e3b0
14:36:38.643: debug : doRemoteOpen:942 : Adding Handler for remote events
14:36:38.643: debug : doRemoteOpen:949 : virEventAddHandle failed: No addHandleImpl defined. continuing without events.
14:36:38.643: debug : doRemoteOpen:565 : proceeding with name = xen:///
14:36:38.643: debug : remoteIO:9888 : Do proc=66 serial=0 length=28 wait=(nil)
14:36:38.643: debug : remoteIO:9963 : We have the buck 66 0x6de490 0x6de490
14:36:38.645: debug : remoteIO:9888 : Do proc=1 serial=1 length=48 wait=(nil)
14:36:38.645: debug : remoteIO:9963 : We have the buck 1 0x6de490 0x6de490
14:36:38.677: debug : remoteIODecodeMessageLength:9316 : Got length, now need 56 total (52 more)
14:36:38.677: debug : remoteIOEventLoop:9814 : Giving up the buck 1 0x6de490 (nil)
14:36:38.677: debug : remoteIO:9992 : All done with our call 1 (nil) 0x6de490
14:36:38.677: debug : doRemoteOpen:942 : Adding Handler for remote events
14:36:38.677: debug : doRemoteOpen:949 : virEventAddHandle failed: No addHandleImpl defined. continuing without events.
14:36:38.677: debug : virConnectNumOfDomains:1894 : conn=0x634ad0
Id Name State
----------------------------------
14:36:38.678: debug : virConnectClose:1525 : conn=0x634ad0
14:36:38.678: debug : virUnrefConnect:294 : unref connection 0x634ad0 1
14:36:38.678: debug : remoteIO:9888 : Do proc=2 serial=2 length=28 wait=(nil)
14:36:38.678: debug : remoteIO:9963 : We have the buck 2 0x6de490 0x6de490
14:36:38.678: debug : remoteIODecodeMessageLength:9316 : Got length, now need 56 total (52 more)
14:36:38.678: debug : remoteIOEventLoop:9814 : Giving up the buck 2 0x6de490 (nil)
14:36:38.678: debug : remoteIO:9992 : All done with our call 2 (nil) 0x6de490
14:36:38.678: debug : remoteIO:9888 : Do proc=2 serial=2 length=28 wait=(nil)
14:36:38.678: debug : remoteIO:9963 : We have the buck 2 0x65e2d0 0x65e2d0
14:36:38.679: debug : remoteIODecodeMessageLength:9316 : Got length, now need 56 total (52 more)
14:36:38.679: debug : remoteIOEventLoop:9814 : Giving up the buck 2 0x65e2d0 (nil)
14:36:38.679: debug : remoteIO:9992 : All done with our call 2 (nil) 0x65e2d0
14:36:38.679: debug : remoteIO:9888 : Do proc=2 serial=2 length=28 wait=(nil)
14:36:38.679: debug : remoteIO:9963 : We have the buck 2 0x65e2d0 0x65e2d0
14:36:38.680: debug : remoteIODecodeMessageLength:9316 : Got length, now need 56 total (52 more)
14:36:38.680: debug : remoteIOEventLoop:9814 : Giving up the buck 2 0x65e2d0 (nil)
14:36:38.680: debug : remoteIO:9992 : All done with our call 2 (nil) 0x65e2d0
14:36:38.682: debug : virReleaseConnect:249 : release connection 0x634ad0
-> entry "unref domain 0x65b0a0 vm1_test1 1" found
but not displayed
- test3 virsh list shutdown
# virsh shutdown vm1_test1
Domain vm1_test1 is being shutdown
root@e2:/tmp# virsh list
Id Name State
----------------------------------
root@e2:/tmp# virsh list --all
Id Name State
----------------------------------
- vm1_test1 shut off
-> "virsh list" show vm if shutdown only
- version
# apt-cache policy libvirt0
libvirt0:
Installed: 0.8.3-5+squeeze2
Candidate: 0.8.3-5+squeeze2
Version table:
*** 0.8.3-5+squeeze2 0
# uname -a
Linux e2 3.8.0-xen0 #1 SMP Fri Mar 1 14:01:33 CET 2013 x86_64 GNU/Linux
# xm info | grep ^xen_m
xen_major : 4
xen_minor : 2
[libvirt-users] how to build libvirtd with xen-4.2
by Heiko L.
Hello,
I wanted to compile libvirtd against xen-4.2,
but [3] says: "...current version of libvirt is not compatible with xen 4.2..."
Which version can I test?
regards heiko
--------------------------------------------------
- details
130305.xen.make.libvirtd
#### var
tmp=/tmp
libvirtver="0.10.2"
pkgs="libxml2-dev libdevmapper-dev"
#### dl
cd $tmp
## dl [2]
wget ftp://libvirt.org/libvirt/libvirt-$libvirtver.tar.gz
apt-get install $pkgs
#### make
tar -xzf libvirt-$libvirtver.tar.gz
cd libvirt-$libvirtver
./configure
make && make install
-----------------------------------------------------
#### details
# make
...
CC libvirt_driver_xen_impl_la-xen_inotify.lo
CCLD libvirt_driver_xen_impl.la
CCLD libvirt_driver_xen.la
CC libvirt_driver_libxl_impl_la-libxl_conf.lo
In file included from libxl/libxl_conf.c:43:
libxl/libxl_conf.h:61: error: field 'ctx' has incomplete type
libxl/libxl_conf.h:80: error: field 'ctx' has incomplete type
libxl/libxl_conf.h:81: error: expected specifier-qualifier-list before 'libxl_waiter'
libxl/libxl_conf.c: In function 'libxlMakeDomCreateInfo':
libxl/libxl_conf.c:365: warning: implicit declaration of function 'libxl_init_create_info' [-Wimplicit-function-declaration]
libxl/libxl_conf.c:365: warning: nested extern declaration of 'libxl_init_create_info' [-Wnested-externs]
- [3] says: "...current version of libvirt is not compatible with xen 4.2..."
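(A variant I might try, purely as a sketch and assuming the failure is limited
to the libxl driver while the legacy xen driver is enough for my setup:
./configure --without-libxl
make && make install
The --without-libxl switch is my assumption from libvirt's configure options;
./configure --help would confirm whether this release accepts it.)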
=========================================================================
[1] https://www.redhat.com/archives/libvirt-users/2012-September/msg00113.html
Re: [libvirt-users] virsh list not working with xen 4
virsh list --all only shows turned off machines registered in xend.
[2] ftp://libvirt.org/libvirt/
Index of ftp://libvirt.org/libvirt/
[3] http://superuser.com/questions/481662/xen-4-2-on-centos-6-3-cant-compile-...
Xen 4.2 on CentOs 6.3 : can't compile a libvirt 0.9.10 xen-activated?
...current version of libvirt is not compatible with xen 4.2...
[libvirt-users] Errors while using blkiotune command
by Saurabh Deochake
Hi all,
I want to limit the I/O bandwidth inside the container, so I used virsh
command blkiotune. But when I enter a command:
virsh # blkiotune lxcguest --weight 250
I get the following errors:
error: Unable to change blkio parameters
error: Requested operation is not valid: blkio cgroup isn't mounted
I do have the blkio cgroup mounted. What can be the problem?
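(For reference, I check the mount roughly like this, as a sketch:
# grep blkio /proc/mounts
One thing I might also try is restarting libvirtd after the blkio cgroup is
mounted, in case it only detects cgroups at startup; that is just a guess on
my part.)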
Thanks in advance.
Regards,
Saurabh Deochake