[libvirt-users] some problem with snapshot by libvirt
by xingxing gao
Hi all, I am using libvirt to manage my VMs. These days I am testing
the libvirt snapshot feature, but have run into a problem.
The snapshot was created with this command:
snapshot-create-as win7 --disk-only --diskspec
vda,snapshot=external --diskspec hda,snapshot=no
but when I tried to revert to the snapshot created by the above
command, I got the error below:
virsh # snapshot-revert win7 1338041515 --force
error: unsupported configuration: revert to external disk snapshot not
supported yet
version:
virsh # version
Compiled against library: libvir 0.9.4
Using library: libvir 0.9.4
Using API: QEMU 0.9.4
Running hypervisor: QEMU 1.0.93
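For what it's worth, reverting to an external disk snapshot simply is not implemented in libvirt of this vintage, so the revert has to be done by hand. A minimal sketch, assuming the guest can be shut down and with example paths only: the overlay that snapshot-create-as produced holds everything written since the snapshot, while the original image is frozen at snapshot time, so pointing the domain back at that original image (or at a fresh overlay on top of it) effectively reverts the disk:

virsh destroy win7                                    # or a clean shutdown
qemu-img create -f qcow2 -b /vmstore/win7-base.img \
    /vmstore/win7-revert.qcow2                        # new overlay over the frozen base (example paths)
virsh edit win7                                       # point vda's <source file='...'/> at the new overlay
virsh start win7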
[libvirt-users] libvirt v1.0.2 fails to boot LXC container, but v1.0.0 works
by Dennis Jenkins
Hello.
tl;dr: v1.0.0 can boot my LXC containers; v1.0.1 and v1.0.2 fail.
Paraphrased error message: "lxcContainerMountProcFuse:616 : Failed to
mount ..../meminfo"
I'd like to know whether my host is misconfigured, or my domains, or
otherwise why 1.0.1 and 1.0.2 are not working for me.
I've been using libvirt for a while to manage QEMU instances. I
have experimented with lxc. Back in October of last year, I had some
working LXC containers. I don't recall what version of libvirt I was
using at the time.
I recently attempted to boot my containers, and they failed
(libvirt v1.0.2). I then reverted to v1.0.1 and tried again; it failed
with the same result (same error text, just different line numbers).
I then reverted to 1.0.0 and my containers boot up just fine.
My host runs Gentoo Linux on an Intel Core i5 with 16 GB of RAM. I
regularly (weekly) apply all available Gentoo updates.
I strongly prefer to install all third-party software (including
libvirt) from Gentoo portage, not manually from source or git. v1.0.2
is the most recent version available.
First, items common to all test cases:
*** GCC version
ostara ~ # gcc --version
gcc (Gentoo 4.6.3 p1.11, pie-0.5.2) 4.6.3
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
*** Linux kernel
ostara ~ # uname -a
Linux ostara 3.6.11-gentoo #4 SMP PREEMPT Sat Jan 26 10:27:55 CST 2013
x86_64 Intel(R) Core(TM) i5 CPU 760 @ 2.80GHz GenuineIntel GNU/Linux
ostara ~ # zcat /proc/config.gz | egrep "CONFIG_(CGROUP|.*_NS|NAMESPACES)"
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_NF_CONNTRACK_NETBIOS_NS=m
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_NS87410 is not set
*** Gentoo "USE flags" when building libvirt:
ostara ~ # equery u libvirt | xargs echo
-audit -avahi +caps -firewalld -iscsi +libvirtd +lvm +lxc +macvtap
+nfs +nls -numa -openvz -parted +pcap -phyp -policykit -python +qemu
-rbd +sasl +udev +uml +vepa +virt-network -virtualbox -xen
*** libvirt config over-rides:
ostara ~ # egrep "^\w" < /etc/libvirt/libvirtd.conf
host_uuid = "ab8c50b8-2337-4a02-9274-00923fe8f476"
ostara ~ # egrep "^\w" < /etc/libvirt/libvirt.conf
ostara ~ # egrep "^\w" < /etc/conf.d/libvirtd
rc_need="net"
LIBVIRTD_OPTS="--listen"
LIBVIRTD_KVM_SHUTDOWN="managedsave"
*** Network when no domains (qemu or libvirt) are running:
ostara ~ # ifconfig -a | grep -e "^[a-z]"
br0: flags=4355<UP,BROADCAST,PROMISC,MULTICAST> mtu 1500
br1: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST> mtu 1500
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 16436
*** Network with libvirt running (libvirt manages "virbr0" for my qemu
instances. It attaches to "br0").
ostara ~ # /etc/init.d/libvirtd start
* Starting libvirtd ...
[ ok ]
ostara ~ # ifconfig -a | grep -e "^[a-z]"
br0: flags=4355<UP,BROADCAST,PROMISC,MULTICAST> mtu 1500
br1: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST> mtu 1500
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 16436
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
*** Mounts (cgroup is mounted)
ostara ~ # mount
rootfs on / type rootfs (rw)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs
(rw,nosuid,relatime,size=10240k,nr_inodes=2050547,mode=755)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
/dev/md3 on / type ext3 (rw,noatime,errors=continue,barrier=1,data=writeback)
tmpfs on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
cgroup_root on /sys/fs/cgroup type tmpfs
(rw,nosuid,nodev,noexec,relatime,size=10240k,mode=755)
openrc on /sys/fs/cgroup/openrc type cgroup
(rw,nosuid,nodev,noexec,relatime,release_agent=/lib64/rc/sh/cgroup-release-agent.sh,name=openrc)
cpuset on /sys/fs/cgroup/cpuset type cgroup
(rw,nosuid,nodev,noexec,relatime,cpuset)
debug on /sys/fs/cgroup/debug type cgroup
(rw,nosuid,nodev,noexec,relatime,debug)
cpu on /sys/fs/cgroup/cpu type cgroup (rw,nosuid,nodev,noexec,relatime,cpu)
cpuacct on /sys/fs/cgroup/cpuacct type cgroup
(rw,nosuid,nodev,noexec,relatime,cpuacct)
memory on /sys/fs/cgroup/memory type cgroup
(rw,nosuid,nodev,noexec,relatime,memory)
devices on /sys/fs/cgroup/devices type cgroup
(rw,nosuid,nodev,noexec,relatime,devices)
freezer on /sys/fs/cgroup/freezer type cgroup
(rw,nosuid,nodev,noexec,relatime,freezer)
blkio on /sys/fs/cgroup/blkio type cgroup
(rw,nosuid,nodev,noexec,relatime,blkio)
perf_event on /sys/fs/cgroup/perf_event type cgroup
(rw,nosuid,nodev,noexec,relatime,perf_event)
/chroot/ssh-jails/jamiel/home/jamiel on /home/jamiel type none (rw,bind)
libvirt on /run/libvirt/lxc/dwj-lnx-dev type fuse (rw,nosuid,nodev)
*** My LXC containers:
ostara ~ # virsh -c lxc:// list --all
Id Name State
----------------------------------------------------
- dwj-lnx-dev shut off
- vm1 shut off
*** Config from one container (the container itself is Gentoo Linux,
sharing some filesystems)
ostara ~ # virsh -c lxc:// dumpxml dwj-lnx-dev
<domain type='lxc'>
<name>dwj-lnx-dev</name>
<uuid>fbcd8c3a-9939-12b4-727d-5d3526bc448f</uuid>
<memory unit='KiB'>500000</memory>
<currentMemory unit='KiB'>500000</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64'>exe</type>
<init>/sbin/init</init>
</os>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/libexec/libvirt_lxc</emulator>
<filesystem type='mount' accessmode='passthrough'>
<source dir='/vm/lxc/dwj-lnx-dev'/>
<target dir='/'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
<source dir='/usr/portage'/>
<target dir='/usr/portage'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
<source dir='/usr/src'/>
<target dir='/usr/src'/>
</filesystem>
<filesystem type='mount' accessmode='passthrough'>
<source dir='/home'/>
<target dir='/home'/>
</filesystem>
<interface type='bridge'>
<mac address='82:00:00:00:01:00'/>
<source bridge='br0'/>
</interface>
<console type='pty'>
<target type='lxc' port='0'/>
</console>
</devices>
</domain>
ostara ~ # ls -l /vm/lxc/dwj-lnx-dev
total 108
drwxr-xr-x 2 root root 4096 Oct 18 09:38 bin
drwxr-xr-x 2 root root 4096 Apr 27 2011 boot
drwxr-xr-x 10 root root 45056 Jan 26 10:52 dev
drwxr-xr-x 53 root root 4096 Feb 8 11:23 etc
drwxr-xr-x 2 root root 4096 Apr 27 2011 home
lrwxrwxrwx 1 root root 5 Oct 18 00:46 lib -> lib64
drwxr-xr-x 2 root root 4096 Oct 18 00:51 lib32
drwxr-xr-x 10 root root 4096 Oct 18 09:11 lib64
drwxr-xr-x 4 root root 4096 Apr 27 2011 mnt
drwxr-xr-x 4 root root 4096 May 22 2012 opt
drwxr-xr-x 2 root root 4096 Apr 27 2011 proc
drwx------ 3 root root 4096 Oct 18 08:13 root
drwxr-xr-x 2 root root 4096 Oct 18 00:57 run
drwxr-xr-x 2 root root 4096 Oct 18 09:11 sbin
drwxr-xr-x 2 root root 4096 Apr 27 2011 sys
drwxrwxrwt 4 root root 4096 Feb 8 11:23 tmp
drwxr-xr-x 13 root root 4096 May 4 2011 usr
drwxr-xr-x 14 root root 4096 May 22 2012 var
************ v 1.0.0 works great!
ostara ~ # virsh --version
1.0.0
ostara ~ # virsh -c lxc:// list --all
Id Name State
----------------------------------------------------
- dwj-lnx-dev shut off
- vm1 shut off
ostara ~ # virsh -c lxc:// start dwj-lnx-dev
Domain dwj-lnx-dev started
ostara ~ # virsh -c lxc:// list --all
Id Name State
----------------------------------------------------
21364 dwj-lnx-dev running
- vm1 shut off
************* v 1.0.2 fails:
ostara ~ # /etc/init.d/libvirtd start
* Caching service dependencies ...
[ ok ]
* Starting libvirtd ...
[ ok ]
ostara ~ # virsh --version
1.0.2
ostara ~ # ls -l /var/lib/libvirt/lxc/
total 0
ostara ~ # virsh -c lxc:// start dwj-lnx-dev
error: Failed to start domain dwj-lnx-dev
error: internal error guest failed to start: PATH=/bin:/sbin
TERM=linux container=lxc-libvirt
container_uuid=fbcd8c3a-9939-12b4-727d-5d3526bc448f
LIBVIRT_LXC_UUID=fbcd8c3a-9939-12b4-727d-5d3526bc448f
LIBVIRT_LXC_NAME=dwj-lnx-dev /sbin/init
2013-02-08 18:09:28.402+0000: 1: info : libvirt version: 1.0.2
2013-02-08 18:09:28.402+0000: 1: error : lxcContainerMountProcFuse:616
: Failed to mount /.oldroot//var/run/libvirt/lxc/dwj-lnx-dev/meminfo
on /proc/meminfo: No such file or directory
2013-02-08 18:09:28.402+0000: 23867: info : libvirt version: 1.0.2
2013-02-08 18:09:28.402+0000: 23867: error : virLXCControllerRun:1468
: error receiving signal from container: Input/output error
2013-02-08 18:09:28.814+0000: 23867: error : virCommandWait:2287 :
internal error Child process (ip link del veth1) unexpected exit
status 1: Cannot find device "veth1"
ostara ~ # ls -l /var/lib/libvirt/lxc/
total 0
ostara ~ # mount | grep libvirt
libvirt on /run/libvirt/lxc/dwj-lnx-dev type fuse (rw,nosuid,nodev)
ostara ~ # ls -l /run/libvirt/lxc
total 0
drwxr-xr-x 2 root root 40 Feb 8 12:09 dwj-lnx-dev
srwx------ 1 root root 0 Feb 8 12:09 dwj-lnx-dev.sock
srwx------ 1 root root 0 Feb 8 11:22 vm1.sock
ostara ~ # ls -l /run/libvirt/lxc/dwj-lnx-dev
total 0
*** "veth1" (regarding the rror "Cannot find device")
*** "veth1" is still no created. But I do know that when
*** using libvirt v1.0.0, and the container is running, the device
DOES exist (not shown above)
ostara ~ # ifconfig -a | egrep "^\w"
br0: flags=4355<UP,BROADCAST,PROMISC,MULTICAST> mtu 1500
br1: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST> mtu 1500
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
eth2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 16436
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
*** The last entry in "/var/log/libvirt/lxc/dwj-lnx-dev.log"
2013-02-08 18:09:27.771+0000: starting up
PATH=/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/opt/bin:/usr/x86_64-pc-linux-gnu/gcc-bin/4.6.3:/usr/x86_64-pc-linux-gnu/i686-pc-mingw32/gcc-bin/4.7.2
LIBVIRT_DEBUG=3 LIBVIRT_LOG_OUTPUTS=3:stderr /usr/libexec/libvirt_lxc
--name dwj-lnx-dev --console 20 --security=none --handshake 23
--background --veth veth1
PATH=/bin:/sbin TERM=linux container=lxc-libvirt
container_uuid=fbcd8c3a-9939-12b4-727d-5d3526bc448f
LIBVIRT_LXC_UUID=fbcd8c3a-9939-12b4-727d-5d3526bc448f
LIBVIRT_LXC_NAME=dwj-lnx-dev /sbin/init
2013-02-08 18:09:28.402+0000: 1: info : libvirt version: 1.0.2
2013-02-08 18:09:28.402+0000: 1: error : lxcContainerMountProcFuse:616
: Failed to mount /.oldroot//var/run/libvirt/lxc/dwj-lnx-dev/meminfo
on /proc/meminfo: No such file or directory
2013-02-08 18:09:28.402+0000: 23867: info : libvirt version: 1.0.2
2013-02-08 18:09:28.402+0000: 23867: error : virLXCControllerRun:1468
: error receiving signal from container: Input/output error
2013-02-08 18:09:28.814+0000: 23867: error : virCommandWait:2287 :
internal error Child process (ip link del veth1) unexpected exit
status 1: Cannot find device "veth1"
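As far as I know, the FUSE-backed /proc/meminfo for LXC guests was introduced in libvirt 1.0.1, which lines up with 1.0.0 working here while 1.0.1/1.0.2 fail. Purely as a hedged diagnostic: the error path is /.oldroot//var/run/libvirt/lxc/dwj-lnx-dev/meminfo, while the FUSE filesystem is mounted at /run/libvirt/lxc/dwj-lnx-dev and its directory listing above is empty, so it may be worth checking whether /var/run resolves to /run on this host and whether the FUSE mount ever exposes a meminfo file:

ls -ld /var/run                        # on Gentoo this is normally a symlink to /run
ls -la /run/libvirt/lxc/dwj-lnx-dev    # does a meminfo entry ever appear here?
mount | grep fuse                      # confirm where the libvirt FUSE filesystem is mounted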
[libvirt-users] lxc--sshd
by Brandon Foster
Hey all,
I am new to libvirt LXC and am trying to get a container that I can
ssh to. So far I've booted up a container and given it an IP address;
it can ping out and I can ping it, but I cannot get ssh to work.
When I try to run an ssh command inside the container I get a "command
not found" error. Here is my XML file:
<domain type='lxc'>
<name>helloworld</name>
<memory>102400</memory>
<os>
<type>exe</type>
<init>/bin/sh</init>
</os>
<devices>
<console type='pty'/>
<filesystem type='mount'>
<source dir='/export/helloworld/config'/>
<target dir='/etc/httpd'/>
</filesystem>
<filesystem type='mount'>
<source dir='/export/helloworld/data'/>
<target dir='/var/www'/>
</filesystem>
<interface type='bridge'>
<mac address='52:54:00:5e:02:45'/>
<source bridge='br0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
<model type='virtio' />
</interface>
</devices>
</domain>
I haven't restricted it with busybox, for simplicity, thinking the
problem was that I wasn't allowing it access to the necessary ssh files,
but I'm not sure what I am missing now.
Surely someone has done this before.
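For what it's worth, two hedged observations (a sketch under assumptions, since the container's filesystem layout is not shown): the environment libvirt_lxc hands to the container's init is only PATH=/bin:/sbin, so /usr/bin/ssh comes back as "command not found" unless the full path is used or PATH is extended; and to ssh *into* the container, an sshd has to be running inside it, which normally means giving the container its own root filesystem and an init that starts sshd, rather than sharing the host's root with /bin/sh as init.

# inside the container's /bin/sh (standard paths, not verified for this setup):
export PATH=/usr/sbin:/usr/bin:/sbin:/bin   # default container PATH is only /bin:/sbin
/usr/sbin/sshd                              # needs host keys and /var/empty visible in the container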
thanks
--
Brandon Foster
Infrastructure Administrator
Liferay, Inc.
Enterprise. Open Source. For life.
[libvirt-users] VM cannot start
by Qinghua Cheng
Hi Experts,
I have downloaded and built libvirt 1.0.2 on my RH machine, and I have
also downloaded and built qemu 1.4.0 on the same machine.
When I try to start my VM ubuntu12, I get these error messages:
error: Failed to start domain ubuntu12
error: internal error Process exited while reading console log output:
I have no idea what this message means. Could you please help point
out what I should do next?
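"Process exited while reading console log output" generally means qemu itself exited immediately, and the actual reason ends up in the per-domain qemu log, so checking that log is the usual next step (a sketch; the exact location depends on the prefix the self-built libvirt was configured with):

# packaged builds log here; a from-source build with the default prefix may use
# /usr/local/var/log/libvirt/qemu/ instead
tail -n 50 /var/log/libvirt/qemu/ubuntu12.log

# it can also help to confirm which qemu binary libvirt is invoking
virsh dumpxml ubuntu12 | grep emulator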
Thanks,
Conny
[libvirt-users] Libvirt not creating .vmdk file and .nvram files on vmware esx host
by Nithin Mp
Hi,
I am currently using the libvirt API (Java API) to create VMs on an ESXi host.
I am passing commands as follows (trying from a terminal):
virsh -c esx://192.168.0.144?no_verify=1
define /home/bm/eee.xml
Domain eee defined from /home/bm/eee.xml
I have attached the eee.xml file and the eee.vmx from the ESXi datastore
to this mail.
The versions I am using are:
virsh # version
Compiled against library: libvirt 1.0.0
Using library: libvirt 1.0.0
Using API: ESX 1.0.0
Running hypervisor: ESX 5.0.0
The system I am using is Ubuntu 12.04 LTS, 64-bit.
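In case it helps narrow things down: as far as I know, 'virsh define' against an esx:// URI only registers the domain from the generated .vmx; it does not create the virtual disk, and the .nvram file is normally written by ESX itself the first time the VM powers on. A sketch of creating the disk through libvirt's ESX storage driver, assuming this build includes it and with 'datastore1' standing in for the real datastore name:

virsh -c 'esx://192.168.0.144?no_verify=1' pool-list --all
virsh -c 'esx://192.168.0.144?no_verify=1' vol-create-as datastore1 eee.vmdk 10G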
Please reply; I am waiting for your response.
--
*Thanks & Regards*
NITHIN.M.P
[libvirt-users] No luck with virDomainGetInfo and virDomainMemoryStats for memory usage of a running VM
by 张章
Hi, all
I am trying to get the used memory of a running VM using libvirt, and then to calculate the memory usage, but with no luck.
1. The used memory returned by virDomainGetInfo is equal to the maximum memory when no virtio balloon driver is set. When using the virtio balloon driver and setting currentMemory to less than memory in the XML, the used memory returned by virDomainGetInfo is nearly equal to currentMemory. Either way, I can't get the memory actually used by the VM.
2. When I turned to virDomainMemoryStats, I can only get two values: the current balloon value and the resident set size of the process running the domain (for kvm 0.14.1 and libvirt 1.0.2).
That's not what I want. I only need to know how much memory is used by the running VM, so I can calculate the memory usage of this VM (like the used memory returned by the command "free -m").
Any hints?
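A hedged note in case it helps: with a qemu/kvm as old as 0.14.1 the balloon driver does not report the guest-internal statistics, so from the host only the balloon target ("actual") and the RSS are visible, exactly as described above. With a newer qemu and guest balloon driver the extra fields show up in virsh dommemstat, and guest-used memory can then be estimated as available minus unused; otherwise the figure has to come from inside the guest (a guest agent, or "free -m" over ssh). A host-side sketch, with 'myguest' as a placeholder domain name:

virsh dommemstat myguest
# with a new enough qemu and guest driver this lists fields such as
# actual, available, unused and rss; used is roughly available - unused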
Many thanks!
Z. Zhang
National Research Center for Intelligent Computing Systems, ICT, Chinese Academy of Sciences
[libvirt-users] Deleting and coalescing live snapshots
by Skardal, Harald
All,
I have a service that takes new live KVM snapshots S_i regularly, keeps a
fixed number N (S_i, ..., S_(i-N+1)), and therefore needs to delete S_(i-N)
in each cycle.
Until libvirt includes support for this capability, which is said to be
available in qemu, what is a safe workflow for deleting old live KVM
snapshots without losing data? Do I need to pause or shut down the VM?
The development environment is Fedora 18, with qemu, libvirtd, and libvirt
upgraded to the more recent stable versions.
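Not an authoritative answer, but one workflow that should apply here provided qemu is new enough to support block-commit (1.3+) and libvirt is at least 0.10.2: since S_(i-N) is not the active layer, its contents can be committed down into the image beneath it while the VM keeps running, after which the overlay file is no longer referenced and can be deleted, with no pause or shutdown needed. Domain name and paths below are placeholders; it is worth verifying the resulting chain afterwards with "qemu-img info --backing-chain":

# live-commit the oldest overlay into the image beneath it
virsh blockcommit myvm vda \
      --base /vmstore/myvm-S0.qcow2 \
      --top  /vmstore/myvm-S1.qcow2 --wait --verbose
# once the commit completes, myvm-S1.qcow2 drops out of the chain and can be removed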
Harald Skardal,
Stratus Technologies.
[libvirt-users] The accurate CPU usage of a domain?
by 张章
Hi,all
I want to get a relatively accurate CPU usage figure for a domain. I have a few questions about virDomainGetInfo:
struct virDomainInfo {
    unsigned char state;          /* the running state, one of virDomainState */
    unsigned long maxMem;         /* the maximum memory in KBytes allowed */
    unsigned long memory;         /* the memory in KBytes used by the domain */
    unsigned short nrVirtCpu;     /* the number of virtual CPUs for the domain */
    unsigned long long cpuTime;   /* the CPU time used in nanoseconds */
};
If a domain is assigned 4 vCPUs and my machine has 6 physical cores, then the cpuTime in virDomainInfo should be the sum of the CPU time of the 4 vCPUs. When I want to calculate the CPU usage, which of the following is more accurate?
1/4 * cpu time / total time, or
4/6 * cpu time / total time, or
just cpu time / total time?
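My understanding, hedged: cpuTime is the total time consumed by all of the domain's vCPUs, so neither fixed prefactor is right as written. Utilisation of the whole 6-core host is delta cpuTime / (delta wallclock * 6), while utilisation relative to the guest's own 4 vCPUs is delta cpuTime / (delta wallclock * 4). A small sampling sketch using virsh ('mydomain' is a placeholder; dominfo already reports CPU time in seconds rather than nanoseconds):

T1=$(virsh dominfo mydomain | awk -F: '/CPU time/ {gsub(/[ s]/, "", $2); print $2}')
sleep 10
T2=$(virsh dominfo mydomain | awk -F: '/CPU time/ {gsub(/[ s]/, "", $2); print $2}')
echo "scale=4; ($T2 - $T1) / (10 * 6)" | bc   # fraction of the whole 6-core host
echo "scale=4; ($T2 - $T1) / (10 * 4)" | bc   # fraction of the guest's own 4 vCPUs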
Regards,
Zhang Zhang
[libvirt-users] error when calling the migrate function: "error: invalid argument: qemuDomainMigratePrepare3: unsupported flags (0x200)"
by loic.cloatre@orange.com
Hello,
I am trying to migrate a VM and I get this error:
loic@loic-VirtualBox:~/workspace/src/build/bin$ ./virsh --connect qemu:///system migrate --live vmUbuntuPartage2CacheWriteBack qemu+ssh://192.168.10x.2x/system --unsafe
loic@192.168.10x.2x's password:
error: invalid argument: qemuDomainMigratePrepare3: unsupported flags (0x200)
Does somebody recognize this error?
I am working on Windows 7. I have:
1) VirtualBox with two Ubuntu Desktop 12.04 VMs: "VM1" and "VM40"
2) One shared hard drive built with VirtualBox: "newDiskSharedFev.vdi"
In my VM1:
1) I have built one VM, "vmUbuntuPartage" (Ubuntu Server 12.10), on the shared disk with this command:
sudo virt-install -n vmUbuntuPartage -r 1500 --disk path=/media/partage/newDiskSharedFev.vdi,bus=virtio,size=4,cache=writeback -c /home/loic/Downloads/ubuntu-12.04.1-server-i386.iso --network network=default,model=virtio --connect=qemu:///system --vnc -v
2) I have downloaded and built libvirt-0.10.0 (I can give the configure command if necessary)
In my VM40:
1) I installed libvirt with apt-get
2) I have mounted the shared folder containing "newDiskSharedFev.vdi"
So, in VM1 I try to migrate my "vmUbuntuPartage" from VM1 to VM40 and I get the error above.
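One data point that may explain this: flag 0x200 is VIR_MIGRATE_UNSAFE, i.e. exactly the --unsafe option, and it only exists since libvirt 0.9.11. The source side here is a self-built libvirt 0.10.0, but the destination (VM40, libvirt installed with apt-get on Ubuntu 12.04, so most likely 0.9.8) also has to understand the flag, and an older destination rejects it with precisely this "unsupported flags" message. Two hedged checks/workarounds:

# confirm the libvirt version actually answering on the destination
virsh -c qemu+ssh://192.168.10x.2x/system version

# or avoid needing --unsafe at all: with cache='none' on the shared disk,
# libvirt considers the migration safe, so the --unsafe flag can be dropped
virsh edit vmUbuntuPartage    # set <driver name='qemu' type='raw' cache='none'/>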
Here is my configuration file /etc/libvirt/qemu/vmUbuntuPartage.xml.
I have tried changing <driver name='qemu' type='raw' cache='...'/> between
'none', 'writeback', and 'writethrough'.
================
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh edit vmUbuntuPartage
or other application using the libvirt API.
-->
<domain type='qemu'>
<name>vmUbuntuPartage</name>
<uuid>5d38cf86-d9c5-b0f2-cc03-982a1f18bede</uuid>
<memory>1536000</memory>
<currentMemory>1536000</currentMemory>
<vcpu>1</vcpu>
<os>
<type arch='i686' machine='pc-1.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source file='/media/partage/SharedDiskUbuntu.vdi'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<disk type='block' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='1' unit='0'/>
</disk>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<interface type='network'>
<mac address='52:54:00:8b:b1:80'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes'/>
<video>
<model type='cirrus' vram='9216' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</memballoon>
</devices>
</domain>
==================
Loïc Cloâtre
[libvirt-users] questions about libvirt's LXC
by Han Yuejuan-B42073
Hi all,
I am trying to start LXC containers with libvirt instead of using the LXC tools. I have several questions, as follows:
Software platform: libvirt-0.10.1 integrated into Yocto, based on SDK 1.3
Hardware platform: P4080DS
Steps:
root@p4080ds:~# cat > container.xml <<EOF
<domain type='lxc'>
<name>container</name>
<memory>500000</memory>
<os>
<type>exe</type>
<init>/bin/sh</init>
</os>
<vcpu>1</vcpu>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/libexec/libvirt_lxc</emulator>
<interface type='network'>
<source network='default'/>
</interface>
<console type='pty' />
</devices>
</domain>
EOF
root@p4080ds:~# libvirtd -d
root@p4080ds:~# virsh --connect lxc:/// define container.xml
Domain container defined from container.xml
root@p4080ds:~# virsh --connect lxc:/// start container
error: Failed to start domain container
error: internal error Network 'default' is not active.
To resolve this issue I changed container.xml to the following:
<domain type='lxc'>
<name>container</name>
<memory>500000</memory>
<os>
<type>exe</type>
<init>/bin/sh</init>
</os>
<vcpu>1</vcpu>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/libexec/libvirt_lxc</emulator>
<interface type='bridge'>
<source network='br0'/>
<mac address='00:11:22:34:34:34'/> (I am not sure whether this makes a difference)
</interface>
<console type='pty' />
</devices>
</domain>
Then I get the errors below:
root@p4080ds:~# virsh --connect lxc:/// define container.xml
error: Failed to define domain from container.xml
error: internal error No <source> 'bridge' attribute specified with <interface type='bridge'/>
Q1: What virtual network configuration is appropriate and supported by virsh, given the error mentioned above?
Linux containers support several network types, such as veth, vlan, macvlan, and phys; how do I edit the XML for each network type?
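Regarding Q1 and the error above, a minimal sketch: with <interface type='bridge'> the source element needs a bridge= attribute rather than network=, and the named bridge (br0 here) must already exist on the host:

<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='00:11:22:34:34:34'/>
</interface>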
Q2: There are two kinds of containers, application containers and system containers.
With the LXC tools, lxc-execute establishes an application container, while lxc-create followed by lxc-start establishes a system container.
So with libvirt, how do I set up these two kinds of containers? With different XML files, or with different virsh commands?
Q3: If I want to define a rootfs (such as busybox) for the container, how do I edit the corresponding XML file? Does anyone have related information that can be shared?
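Regarding Q3, a sketch only (the source path below is a made-up example): a dedicated root filesystem is given to the container with a <filesystem> entry whose target is '/', usually paired with an <init> that is the rootfs's real init rather than /bin/sh:

<os>
  <type>exe</type>
  <init>/sbin/init</init>
</os>
...
<filesystem type='mount'>
  <source dir='/var/lib/libvirt/lxc/container-rootfs'/>
  <target dir='/'/>
</filesystem>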
Q4: Cgroups manage and control the resources used by LXC containers. When using libvirt, how do I achieve similar resource tuning for containers on PowerPC?
There is limited information at: https://www.berrange.com/posts/2009/12/03/using-cgroups-with-libvirt-and-...
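Regarding Q4, and assuming the relevant cgroup controllers (cpu, memory) are mounted on the P4080DS kernel: the LXC driver applies resource limits through the generic virsh tuning commands, so something like the following should be the architecture-agnostic equivalent (values are placeholders):

virsh --connect lxc:/// schedinfo container --set cpu_shares=512
virsh --connect lxc:/// memtune container --hard-limit 262144   # KiB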
Regards,
Yuejuan