[libvirt] [PATCH] add some text to http://libvirt.org/drvlxc.html
by Serge E. Hallyn
--- drvlxc-orig.html 2009-04-15 09:36:48.000000000 -0500
+++ drvlxc.html 2009-04-15 09:44:27.000000000 -0500
@@ -140,7 +140,89 @@
</div>
<div id="content">
- <h1>LXC container driver</h1>
+<h1>LXC container driver</h1>
+<p>
+The libvirt LXC driver manages "Linux Containers". Containers are sets of processes
+with private namespaces which can (but don't always) look like separate machines, but
+do not have their own OS. Here are two example configurations. The first is a very
+light-weight "application container" which does not have it's own root image. You would
+start it using
+</p>
+
+<h3>Example config version 1</h3>
+<p>
+<pre>
+<domain type='lxc'>
+ <name>vm1</name>
+ <memory>500000</memory>
+ <os>
+ <type>exe</type>
+ <init>/bin/sh</init>
+ </os>
+ <vcpu>1</vcpu>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/libexec/libvirt_lxc</emulator>
+ <interface type='network'>
+ <source network='default'/>
+ </interface>
+ <console type='pty' />
+ </devices>
+</domain>
+</pre>
+</p>
+
+<p>
+The next example assumes there is a private root filesystem
+(perhaps hand-crafted using busybox, or installed from media,
+debootstrap, whatever) under /opt/vm-1-root:
+</p>
+<p>
+<pre>
+<domain type='lxc'>
+ <name>vm1</name>
+ <memory>32768</memory>
+ <os>
+ <type>exe</type>
+ <init>/init</init>
+ </os>
+ <vcpu>1</vcpu>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/libexec/libvirt_lxc</emulator>
+ <filesystem type='mount'>
+ <source dir='/opt/vm-1-root'/>
+ <target dir='/'/>
+ </filesystem>
+ <interface type='network'>
+ <source network='default'/>
+ </interface>
+ <console type='pty' />
+ </devices>
+</domain>
+
+</pre>
+</p>
+
+<p>
+In both cases, you can define and start a container using:
+<pre>
+lxc --connect lxc:/// define v1.xml
+lxc --connect lxc:/// start v1.xml
+</pre>
+and then get a console using:
+<pre>
+lxc --connect lxc:/// console v1
+</pre>
+Now doing 'ps -ef' will only show processes in the container, for
+instance.
+</p>
</div>
</div>
<div id="footer">
15 years, 6 months
[libvirt] virsh ttyconsole [node] has rc of 1?
by Darryl L. Pierce
I have a script that's breaking when trying to get the ttyconsole for a
node. The script gets the console by invoking:
virsh ttyconsole [nodename]
The script also has trap on error enabled, and the above command has a
return code of 1. This is causing the script to exit at that point.
Is this a bug? Or is there a reason that return code is 1 and not 0?
--
Darryl L. Pierce, Sr. Software Engineer @ Red Hat, Inc.
Virtual Machine Management - http://www.ovirt.org/
Is fearr Gaeilge bhriste ná Béarla cliste.
15 years, 6 months
[libvirt] How to prevent libvirt from adding iptables rules?
by Mariano Absatz
Hi,
I'm new to libvirt but not a complete neophite.
I'm using libvirt and kvm in ubuntu with "vmbuilder".
I'm creating a couple of VMs inside a host that is directly connected to
internet with a public routeable address. Since I only have one public
address, I won't use bridging.
I'm using shorewall (www.shorewall.net) to configure my iptables rules.
I intend to use DNAT to route specific ports in the host to one or other VM.
With standard masquerading, I give the VMs access to the outside world.
At first I used the 'default' network (with a different rfc1918
network)... everything was kinda working until I rebooted the host... at
that point I lost connectivity between the outside world and the VMs.
From inside the host I had no trouble connecting to the VMs.
If I restarted shorewall (which actually cleans all iptables rules and
regenerate them according to its configuration) everything works fine.
After sending a report and some debugging in the shorewall mailing list,
it was clear that libvirt was adding rules to iptables.
After reading a bit
(http://libvirt.org/formatnetwork.html#examplesPrivate) I created a new
network called "isolated". I stopped default (and disabled its
autostart), and defined and started isolated.
This is the content of isolated.xml:
<network>
<name>isolated</name>
<uuid>51cffbcc-88f5-4edc-a81c-1765c1045691</uuid>
<bridge name='virbr%d' stp='on' forwardDelay='0' />
<ip address='10.3.14.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.3.14.128' end='10.3.14.254' />
</dhcp>
</ip>
</network>
I modified my VMs to use isolated rather than default, but rules keep
being added to iptables when libvirt-bin is started.
Is there a way to convince libvirt not to add these rules?
Feel free to ask for any data that I didn't send here.
TIA.
--
Mariano Absatz - "El Baby"
el.baby(a)gmail.com
www.clueless.com.ar
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
An expert is a person who has made all the mistakes
that can be made in a very narrow field.
Niels Bohr
Danish physicist (1885 - 1962)
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
* TagZilla 0.066 * http://tagzilla.mozdev.org
15 years, 6 months
[libvirt] Libvirt PHP binding
by Radek Hladik
Hi,
I am trying to develop PHP binding for Libvirt API. As discussed
previously on the list the best way is to create a ZEND extension. There
is a great tutorial on the net ( http://devzone.zend.com/tag/Extension )
and PHP provides a lot of macros and functions to make extension writing
easier.
I've implemented functions for connecting, listing domains, getting info
about domains and node, dumping domain XML, some basic domain life-cycle
actions and very basic error reporting. Before moving to other functions
I need to solve a few issues :-)
The biggest issue is that PHP used from webserver can (and usually does)
run multithread. One thread can process more than one request and the
threads can run in parallel. PHP/ZEND provides "tools" to solve this. It
provides "request-global" variables, resources (containers for objects
like file handles or connection pointers) and much more. In single
threaded environment (like calling from command line) all these fallback
to usual behavior. In addition ZEND has its own memory manager to be
able to better manage the memory on per request basis (functions are
called emalloc(), efree(),...). One can still use malloc() and free()
but this is bypassing the ZEND memory manager. Free()ing emalloced
memory or vice versa leads to crash.
So one thing is that every memory allocated by libvirt (using malloc())
I need to copy to memory allocated using emalloc(). Also every memory,
that would be free()ed by libvirt I need to malloc() and copy from
emmaloc()ed memory. The first one is done by simple macro:
#define RECREATE_STRING_WITH_E(str_out, str_in) \
str_out = estrndup(str_in, strlen(str_in)); \
free(str_in);
The second case I encountered only on authentication callback and I use
simple strndup() to copy it for libvirt.
When running from command line everything seems to work fine. This is of
course singlethreaded and potential resource leaks need not to cause a
problem. However when running from webserver, the result is much more
interesting. When connecting to qemu:///system (via local socket) using
<?
libvirt_connect($uri,true);
?>
I can crash the libvirt daemon after cca 10 page reloads - sometimes
with message
Apr 13 15:32:44 kvmtest kernel: libvirtd[8263]: segfault at 4 ip
00000039d7223fc0 sp 00007fa6fbc29a88 error 6 in
libdbus-1.so.3.4.0[39d7200000+3c000]
in system log, sometimes without any message.
(When running the same script from command line it worked for 1000
executions without noticing a problem).
When connecting to qemu+tcp:///system using credentials (explained later):
<?
libvirt_connect($uri,true,Array(VIR_CRED_AUTHNAME=>"fred",VIR_CRED_PASSPHRASE=>"fred"));
?>
It works but httpd processes open a lot pipes and after a few hours with
page refreshing every 10 sec I even ran into error:
[Mon Apr 13 02:40:26 2009] [error] [client 10.38.25.152] PHP Warning:
libvirt_connect() unable to make pipe: Too many open files
Next issue I am not sure about is callbacks. I need them mainly for
authentication. As PHP is intended to run the whole script
non-interactively at most times, I've created this solution. When
calling the libvirt_connect() PHP function you can provide list of
credentials in form of an array
Array(VIR_CRED_AUTHNAME=>"fred",VIR_CRED_PASSPHRASE=>"fred"). This PHP
array is parsed to C array and this prepopulated array is passed to my
authentication callback function. That function receives the requested
credential, looks it up in the provided array and returns it. I think
this suits more the PHP nature but in future I may provide real callback
solution.
Little trouble is that libvirt free()s the values returned by callback
so I need to copy them to malloc()ed memory.
I am not sure about multithread safety of callback functions but I think
that if the function only obtains the parameters via cbdata and operates
only with them then it should be safe. However sometimes I get error:
[Mon Apr 13 14:19:48 2009] [error] [client 10.38.25.152] PHP Warning:
libvirt_connect(): Failed to collect auth credentials
and I need to restart http daemon for a few times to make it work again.
The last trouble is with error callback, where I am not sure whether it
is thread safe. I need to call PHP function for reporting error, storing
the error in "global" variable for next use and the PHP may even
terminate the whole processing of the request...
And if you are still reading then I can point you to
http://phplibvirt.cybersales.cz/ where you can download the source code
and browse the documentation - you can find there list of implemented
functions and brief instructions how to install the extension. But be
warned it really can crash your libvirt and maybe apache! For
completeness: I use Fedora 10 with some rawhide updates:
httpd-2.2.11-6.x86_64
php-5.2.9-1.fc11.x86_64
php-devel-5.2.9-1.fc11.x86_64
libvirt-0.6.0-2.fc11.x86_64
qemu-0.9.1-12.fc11.x86_64
libvirt-devel-0.6.0-2.fc11.x86_64
php-cli-5.2.9-1.fc11.x86_64
kvm-83-5.fc10.x86_64
php-common-5.2.9-1.fc11.x86_64
Radek
15 years, 6 months
[libvirt] [Patch] Fix vcpupin to inactive domains on Xend3.0.3.
by Takahashi Tomohiro
Hi,
I made the patch that corrected the following problem.
It is that when I execute the "virsh vcpupin" command to inactive
domains on Xend3.0.3,
Libvirt outputs the following Segmentation fault.
# virsh vcpupin guest_dom 0 0,1,2
Segmentation fault
Signed-off-by: Tomohiro Takahashi <takatom(a)jp.fujitsu.com>
Thanks,
Tomohiro Takahashi
--- xm_internal.c.org 2009-04-07 21:50:17.000000000 +0900
+++ xm_internal.c 2009-04-13 20:53:22.000000000 +0900
@@ -1700,7 +1700,8 @@ int xenXMDomainPinVcpu(virDomainPtr doma
ret = 0;
cleanup:
- VIR_FREE(mapstr);
+ if( *mapstr != '\0' )
+ VIR_FREE(mapstr);
VIR_FREE(cpuset);
xenUnifiedUnlock(priv);
return (ret);
15 years, 6 months
[libvirt] [PATCH] lxc: Add lxcGetHostname()
by Dan Smith
This patch adds the getHostname method to the lxc driver structure
(using the qemu driver's generic code). Apparently virsh started
using that method during console attachment recently. Without this
implementation, the virsh refused to attach to lxc consoles.
Signed-off-by: Dan Smith <danms(a)us.ibm.com>
--
Dan Smith
IBM Linux Technology Center
email: danms(a)us.ibm.com
Index: src/lxc_driver.c
===================================================================
RCS file: /data/cvs/libvirt/src/lxc_driver.c,v
retrieving revision 1.65
diff -u -r1.65 lxc_driver.c
--- src/lxc_driver.c 31 Mar 2009 15:47:17 -0000 1.65
+++ src/lxc_driver.c 14 Apr 2009 18:22:45 -0000
@@ -1404,6 +1404,20 @@
return ret;
}
+static char *lxcGetHostname (virConnectPtr conn)
+{
+ char *result;
+
+ result = virGetHostname();
+ if (result == NULL) {
+ virReportSystemError (conn, errno,
+ "%s", _("failed to determine host name"));
+ return NULL;
+ }
+ /* Caller frees this string. */
+ return result;
+}
+
/* Function Tables */
static virDriver lxcDriver = {
VIR_DRV_LXC, /* the number virDrvNo */
@@ -1413,7 +1427,7 @@
NULL, /* supports_feature */
NULL, /* type */
lxcVersion, /* version */
- NULL, /* getHostname */
+ lxcGetHostname, /* getHostname */
NULL, /* getMaxVcpus */
NULL, /* nodeGetInfo */
NULL, /* getCapabilities */
15 years, 6 months
[libvirt] vcpuinfo returns wrong CPU value for kvm
by Gerrit Slomma
vcpuinfo returns a wrong value for kvm with libvirt 0.6.2:
virsh # nodeinfo
CPU-Modell: x86_64
CPU(s): 2
CPU-Frequenz: 2333 MHz
CPU-Socket(s): 1
Kern(e) pro Socket: 2
Thread(s) pro Kern: 1
NUMA-Zelle(n): 1
Speichergrösse: 3062956 kB
virsh # dominfo rr019v3
Id: 2
Name: rr019v3
UUID: ff1ba599-5801-fca9-ab33-8962c2dfa46c
OS Typ: hvm
Status: laufend
CPU(s): 2
CPU-Zeit: 46,5s
Max Speicher: 1572864 kB
Verwendeter Speicher: 1572864 kB
Automatischer Start: deaktiviert
As default a kvm-vm has a affinity on all cpus provided by the node
virsh # vcpuinfo rr019v3
VCPU: 0
CPU: 0
Status: laufend
CPU-Affinität: yy
VCPU: 1
CPU: 0
Status: laufend
CPU-Affinität: yy
When i set the affinity of the vCPUs to dedicated CPUs of the node
vcpuinfo returns CPU 0 as the assigned CPU.
virsh # vcpupin rr019v3 0 0
virsh # vcpupin rr019v3 1 1
virsh # vcpuinfo rr019v3
VCPU: 0
CPU: 0
Status: laufend
CPU-Affinität: y-
VCPU: 1
CPU: 0
Status: laufend
CPU-Affinität: -y
Furthermore even when pinning all - in my case both - vCPUs of the
domain to the second CPU of the node virsh reports the first cpu as the
assigned cpu.
virsh # vcpupin rr019v3 0 1
virsh # vcpuinfo rr019v3
VCPU: 0
CPU: 0
Status: laufend
CPU-Affinität: -y
VCPU: 1
CPU: 0
Status: laufend
CPU-Affinität: -y
Watching at the code in virsh.c i am a little bit on my wits end where
virVcpuInfoPtr is defined as i am no c-programmer - only some java on my
side.
On my xen-machine with libvirt 0.3.3 the CPU is shown correctly though:
virsh vcpuinfo rr010v2
VCPU: 0
CPU: 0
Status: blockiert
CPU-Zeit: 550,4s
CPU-Affinität: y---
VCPU: 1
CPU: 1
Status: blockiert
CPU-Zeit: 27,9s
CPU-Affinität: -y--
VCPU: 2
CPU: 2
Status: blockiert
CPU-Zeit: 31,2s
CPU-Affinität: --y-
VCPU: 3
CPU: 3
Status: blockiert
CPU-Zeit: 49,1s
CPU-Affinität: ---y
Is this only working with the xen-driver?
15 years, 6 months
[libvirt] [PATCH] lxc: stop rootless containers from messing with system mounts
by Serge E. Hallyn
If a container has no root, liblxc remounts /proc. If the
system had marked / as MS_SHARED, then even though the
container is in a new mounts namespace, the mount event is
propagated back to the host mounts namespace, overwriting
/proc. After that, for instance, ps will no longer show
system processes.
A Fedora 11 default install has / MS_SHARED.
Make sure that root is not MS_SHARED before remounting
/proc. I'm making it MS_SLAVE so that the container
will receive mount events from the host, but not vice
versa.
Signed-off-by: Serge Hallyn <serue(a)us.ibm.com>
---
src/lxc_container.c | 11 ++++++++++-
1 files changed, 10 insertions(+), 1 deletions(-)
diff --git a/src/lxc_container.c b/src/lxc_container.c
index d3959f6..8addd23 100644
--- a/src/lxc_container.c
+++ b/src/lxc_container.c
@@ -273,7 +273,11 @@ static int lxcContainerChildMountSort(const void *a, const void *b)
#endif
#ifndef MS_PRIVATE
-#define MS_PRIVATE 1<<18
+#define MS_PRIVATE (1<<18)
+#endif
+
+#ifndef MS_SLAVE
+#define MS_SLAVE (1<<19)
#endif
static int lxcContainerPivotRoot(virDomainFSDefPtr root)
@@ -558,6 +562,11 @@ static int lxcContainerSetupExtraMounts(virDomainDefPtr vmDef)
{
int i;
+ if (mount("", "/", NULL, MS_SLAVE|MS_REC, NULL) < 0) {
+ virReportSystemError(NULL, errno, "%s",
+ _("failed to make / slave"));
+ return -1;
+ }
for (i = 0 ; i < vmDef->nfss ; i++) {
// XXX fix to support other mount types
if (vmDef->fss[i]->type != VIR_DOMAIN_FS_TYPE_MOUNT)
--
1.6.2
15 years, 6 months
[libvirt] libvirt arch detection on x86_64 host
by Gerry Reno
I have a 64-bit host that is running a 32-bit OS (Fedora 10).
# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 16
model : 2
model name : AMD Phenom(tm) 9850 Quad-Core Processor
stepping : 3
cpu MHz : 2511.416
cache size : 512 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc pni monitor cx16
popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse
3dnowprefetch osvw ibs
bogomips : 5022.83
clflush size : 64
power management: ts ttp tm stc 100mhzsteps hwpstate
processor : 1
vendor_id : AuthenticAMD
cpu family : 16
model : 2
model name : AMD Phenom(tm) 9850 Quad-Core Processor
stepping : 3
cpu MHz : 2511.416
cache size : 512 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 4
apicid : 1
initial apicid : 1
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc pni monitor cx16
popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse
3dnowprefetch osvw ibs
bogomips : 5023.06
clflush size : 64
power management: ts ttp tm stc 100mhzsteps hwpstate
processor : 2
vendor_id : AuthenticAMD
cpu family : 16
model : 2
model name : AMD Phenom(tm) 9850 Quad-Core Processor
stepping : 3
cpu MHz : 2511.416
cache size : 512 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
apicid : 3
initial apicid : 3
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc pni monitor cx16
popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse
3dnowprefetch osvw ibs
bogomips : 5023.06
clflush size : 64
power management: ts ttp tm stc 100mhzsteps hwpstate
processor : 3
vendor_id : AuthenticAMD
cpu family : 16
model : 2
model name : AMD Phenom(tm) 9850 Quad-Core Processor
stepping : 3
cpu MHz : 2511.416
cache size : 512 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
apicid : 2
initial apicid : 2
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext
fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc pni monitor cx16
popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse
3dnowprefetch osvw ibs
bogomips : 5023.06
clflush size : 64
power management: ts ttp tm stc 100mhzsteps hwpstate
# virsh nodeinfo
CPU model: i686
CPU(s): 4
CPU frequency: 2511 MHz
CPU socket(s): 1
Core(s) per socket: 4
Thread(s) per core: 1
NUMA cell(s): 1
Memory size: 4147340 kB
# virsh capabilities
<capabilities>
<host>
<cpu>
<arch>i686</arch>
</cpu>
</host>
<guest>
<os_type>hvm</os_type>
<arch name='i686'>
<wordsize>32</wordsize>
<emulator>/usr/bin/qemu</emulator>
<machine>pc</machine>
<machine>isapc</machine>
<domain type='qemu'>
</domain>
<domain type='kvm'>
<emulator>/usr/bin/qemu-kvm</emulator>
</domain>
</arch>
<features>
<pae/>
<nonpae/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
</features>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='x86_64'>
<wordsize>64</wordsize>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<machine>pc</machine>
<machine>isapc</machine>
<domain type='qemu'>
</domain>
</arch>
<features>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
</features>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='mips'>
<wordsize>32</wordsize>
<emulator>/usr/bin/qemu-system-mips</emulator>
<machine>mips</machine>
<domain type='qemu'>
</domain>
</arch>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='mipsel'>
<wordsize>32</wordsize>
<emulator>/usr/bin/qemu-system-mipsel</emulator>
<machine>mips</machine>
<domain type='qemu'>
</domain>
</arch>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='sparc'>
<wordsize>32</wordsize>
<emulator>/usr/bin/qemu-system-sparc</emulator>
<machine>sun4m</machine>
<domain type='qemu'>
</domain>
</arch>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='ppc'>
<wordsize>32</wordsize>
<emulator>/usr/bin/qemu-system-ppc</emulator>
<machine>g3bw</machine>
<machine>mac99</machine>
<machine>prep</machine>
<domain type='qemu'>
</domain>
</arch>
</guest>
</capabilities>
Shouldn't the host arch have been detected and identified as x86_64?
Regards,
Gerry
15 years, 6 months