[libvirt-users] some problem with snapshot by libvirt
by xingxing gao
Hi all, I am using libvirt to manage my VMs. These days I have been testing
libvirt snapshots, but I have run into a problem.
The snapshot was created with this command:
snapshot-create-as win7 --disk-only --diskspec vda,snapshot=external --diskspec hda,snapshot=no
But when I tried to revert to the snapshot created by the above
command, I got the error below:
virsh # snapshot-revert win7 1338041515 --force
error: unsupported configuration: revert to external disk snapshot not
supported yet
version:
virsh # version
Compiled against library: libvir 0.9.4
Using library: libvir 0.9.4
Using API: QEMU 0.9.4
Running hypervisor: QEMU 1.0.93
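While revert is unimplemented for external snapshots, a manual revert seems possible. A rough sketch only, assuming the changes written to the external overlay since the snapshot can be thrown away:

virsh shutdown win7
virsh edit win7     # point vda's <source file=.../> back at the pre-snapshot base image
virsh start win7    # the now-orphaned overlay file can then be deleted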
[libvirt-users] Managing Live Snapshots with Libvirt 1.0.1
by Andrew Martin
Hello,
I recently compiled libvirt 1.0.1 and qemu 1.3.0 on Ubuntu 12.04. I have performed live snapshots on VMs using "virsh snapshot-create-as" and then later re-merged the images using "virsh blockpull". I am wondering how I can do a couple of other operations on the images while the VM is running. For example, VM1 is running from the snap3 image, with the following snapshot history (backing files):
[orig] <-- [snap1] <-- [snap2] <-- [snap3]
1. Can I revert VM1 to use snap2 while it is live, or must it be shut down? After shutting it down, is the best way to revert to snap2 just to edit the XML file and change the block device to point to snap2? Afterwards, I believe snap3 would become unusable and should be deleted?
2. If I would like to start a new VM from snap1, is there a way to extract a copy of this snapshot from the chain into an independent image file? I tried "virsh blockcopy", but it returned this error:
# virsh blockcopy VM1 vda snap1.qcow2 --wait --verbose
error: Requested operation is not valid: domain is not transient
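Regarding question 2, one possible route is to flatten the snapshot into an independent image with qemu-img. This is only a sketch; it assumes the chain is qcow2 throughout, and relies on snap1 being safe to read while VM1 runs because it is a read-only backing file:

# collapse snap1 plus its backing chain into one standalone image
qemu-img convert -O qcow2 snap1.qcow2 snap1-standalone.qcow2

The copy has no backing file, so a new VM can be pointed at it directly.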
Thanks,
Andrew
[libvirt-users] Sanlock gives up lock when VM is paused
by Michael Rodrigues
Hello,
I'm using libvirt and sanlock on qemu-kvm guests. Each guest has its
own logical volume for its root filesystem. Sanlock is configured and
working, and prevents me from starting the same VM twice on multiple
nodes and corrupting its root filesystem. Each VM's domain XML resides
on 2 servers that share the LVM volume group over Fibre Channel.
In testing, I noticed that if I pause a VM on node 1, the sanlock lock
is relinquished, and I am able to start the same VM, using the same root
filesystem, on node 2. I get a lock error when unpausing node 1's VM if
node 2's copy is still running, but by this point, the disk may already
be corrupted.
Is it necessary that paused VMs don't keep their locks? Is there a
way to configure sanlock to hold the lock while a VM is paused?
Versions:
sanlock(-devel, -lib) 2.3-1.el6
libvirt(-lock-sanlock, -client) 0.9.10-21.el6_3.8
I'm using NFS for the lockspace.
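For reference, the sanlock plugin is wired up via lock_manager = "sanlock" in /etc/libvirt/qemu.conf and configured in /etc/libvirt/qemu-sanlock.conf; a sketch of the settings this setup presumably uses (the concrete values are assumptions):

# /etc/libvirt/qemu-sanlock.conf
auto_disk_leases = 1                          # acquire a lease per disk automatically
disk_lease_dir = "/var/lib/libvirt/sanlock"   # the NFS-backed lockspace directory
host_id = 1                                   # must be unique per node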
Thanks,
Michael
--
Michael Rodrigues
Interim Help Desk Manager
Gevirtz Graduate School of Education
Education Building 4203
(805) 893-8031
help(a)education.ucsb.edu
Re: [libvirt-users] windows 2008 guest causing rcu_sched to emit NMI
by Andrey Korolyov
On Thu, Jan 31, 2013 at 12:11 AM, Marcelo Tosatti <mtosatti(a)redhat.com> wrote:
> On Wed, Jan 30, 2013 at 11:21:08AM +0300, Andrey Korolyov wrote:
>> On Wed, Jan 30, 2013 at 3:15 AM, Marcelo Tosatti <mtosatti(a)redhat.com> wrote:
>> > On Tue, Jan 29, 2013 at 02:35:02AM +0300, Andrey Korolyov wrote:
>> >> On Mon, Jan 28, 2013 at 5:56 PM, Andrey Korolyov <andrey(a)xdel.ru> wrote:
>> >> > On Mon, Jan 28, 2013 at 3:14 AM, Marcelo Tosatti <mtosatti(a)redhat.com> wrote:
>> >> >> On Mon, Jan 28, 2013 at 12:04:50AM +0300, Andrey Korolyov wrote:
>> >> >>> On Sat, Jan 26, 2013 at 12:49 AM, Marcelo Tosatti <mtosatti(a)redhat.com> wrote:
>> >> >>> > On Fri, Jan 25, 2013 at 10:45:02AM +0300, Andrey Korolyov wrote:
>> >> >>> >> On Thu, Jan 24, 2013 at 4:20 PM, Marcelo Tosatti <mtosatti(a)redhat.com> wrote:
>> >> >>> >> > On Thu, Jan 24, 2013 at 01:54:03PM +0300, Andrey Korolyov wrote:
>> >> >>> >> >> Thank you Marcelo,
>> >> >>> >> >>
>> >> >>> >> >> The host node locked up again, somewhat later than yesterday, but the
>> >> >>> >> >> problem is still here; please see the attached dmesg. The stuck process looks like
>> >> >>> >> >> root 19251 0.0 0.0 228476 12488 ? D 14:42 0:00
>> >> >>> >> >> /usr/bin/kvm -no-user-config -device ? -device pci-assign,? -device
>> >> >>> >> >> virtio-blk-pci,? -device
>> >> >>> >> >>
>> >> >>> >> >> on the fourth VM by count.
>> >> >>> >> >>
>> >> >>> >> >> Should I try an upstream kernel instead of applying the patch to the
>> >> >>> >> >> latest 3.4, or is that useless?
>> >> >>> >> >
>> >> >>> >> > If you can upgrade to an upstream kernel, please do that.
>> >> >>> >> >
>> >> >>> >>
>> >> >>> >> With vanilla 3.7.4 there is almost no change, and the NMIs started
>> >> >>> >> firing again. The external symptoms look like this: starting from
>> >> >>> >> some count, maybe the third or sixth VM, the qemu-kvm process
>> >> >>> >> allocates its memory very slowly and in jumps, 20M-200M-700M-1.6G
>> >> >>> >> over minutes. The patch helps, of course - on both patched 3.4 and
>> >> >>> >> vanilla 3.7 I`m able to kill stuck kvm processes and the node
>> >> >>> >> returns to normal, whereas on 3.2 sending SIGKILL to the process
>> >> >>> >> produces zombies and hung ``ps'' output (the problem, and a
>> >> >>> >> workaround for when no scheduler is involved, are described here:
>> >> >>> >> http://www.spinics.net/lists/kvm/msg84799.html).
>> >> >>> >
>> >> >>> > Try disabling pause-loop exiting with the ple_gap=0 kvm-intel.ko module parameter.
>> >> >>> >
>> >> >>>
>> >> >>> Hi Marcelo,
>> >> >>>
>> >> >>> thanks, this parameter helped to increase the number of working VMs by
>> >> >>> half an order of magnitude, from 3-4 to 10-15. A very high SY load, 10
>> >> >>> to 15 percent, persists at such counts for a long time, whereas Linux
>> >> >>> guests in the same configuration do not jump over one percent even
>> >> >>> under a stress benchmark. After I disabled HT, the crash happens only
>> >> >>> in long runs, and now it is a kernel panic :)
>> >> >>> The stair-like memory allocation behaviour disappeared, but another
>> >> >>> symptom leading to the crash, which I had not noticed previously,
>> >> >>> persists: if the VM count is ``enough'' for a crash, some qemu
>> >> >>> processes start to eat one core each, and they`ll panic the system
>> >> >>> after tens of minutes running in that state, or if I try to attach a
>> >> >>> debugger to one of them. If needed, I can log the entire crash output
>> >> >>> via netconsole; for now I have a tail, almost the same every time:
>> >> >>> http://xdel.ru/downloads/btwin.png
>> >> >>
>> >> >> Yes, please log entire crash output, thanks.
>> >> >>
>> >> >
>> >> > Here please, 3.7.4-vanilla, 16 vms, ple_gap=0:
>> >> >
>> >> > http://xdel.ru/downloads/oops-default-kvmintel.txt
>> >>
>> >> Just an update: I was able to reproduce this on pure Linux VMs using
>> >> qemu-1.3.0 with the ``stress'' benchmark running on them - the panic
>> >> occurs at the start of a VM (with ten machines working at that moment).
>> >> Qemu-1.1.2 generally is not able to reproduce it, but a host node with
>> >> the older version crashes with fewer Windows VMs (three to six instead
>> >> of ten to fifteen) than with 1.3; please see the trace below:
>> >>
>> >> http://xdel.ru/downloads/oops-old-qemu.txt
>> >
>> > Single-bit memory error, apparently. Try:
>> >
>> > 1. memtest86.
>> > 2. Boot with the slub_debug=ZFPU kernel parameter.
>> > 3. Reproduce on a different machine.
>> >
>> >
>>
>> Hi Marcelo,
>>
>> I always follow the rule - if some weird bug exists, check it on an
>> ECC-enabled machine, and check the IPMI logs too, before starting to
>> complain :) I have finally managed to ``fix'' the problem, but my
>> solution seems a bit strange:
>> - I noticed that if the virtual machines are started without any cgroup
>> settings, they do not trigger this bug under any conditions;
>> - I had thought, quite wrongly as it turns out, that
>> CONFIG_SCHED_AUTOGROUP would regroup only tasks outside any cgroup and
>> would not touch tasks already inside an existing cpu cgroup. A first
>> look at the 200-line patch shows that autogrouping always applies
>> to all tasks, so I tried disabling it;
>> - wild magic: the VMs didn`t crash the host any more; even at counts of
>> 30+ they work fine.
>> I still don`t know what exactly triggered this, or whether I will face
>> it again under different conditions, so my solution is more likely a
>> patch of mud in the wall of the dam than a proper fix.
>>
>> There seem to be two possible origins of such an error - a very, very
>> hideous race condition involving cgroups and processes like qemu-kvm
>> that cause frequent context switches, or a simple incompatibility
>> between NUMA, the logic of CONFIG_SCHED_AUTOGROUP, and qemu VMs already
>> doing work in a cgroup - since I have not observed these errors on a
>> single NUMA node (i.e. a desktop) under relatively heavier conditions.
>
> Yes, it would be important to track it down, though. Enabling the
> slub_debug=ZFPU kernel parameter should help.
>
>
Hi Marcelo,
I have finally beat that one. As I have mentioned before in the
off-list message, nested cgroups for vcpu/emulator threads created by
libvirt was a root cause of this problem. Today we`ve disabled
creation of cgroup deeper than qemu/vm/ level and trace didn`t showed
up under different workloads. So for libvirt itself, it may be a
feature request to create thread-based cgroups iff any element of the
VM` config requires that. As for cgroups, seems it is fatal to have
very large amount of nested elements inside cpu on qemu-kvm, or on
very large amount of threads - since I have limited core amount on
each node, I can`t prove what exactly, complicated cgroup hierarchy or
some side effects putting threads on the dedicated cgroup, caused all
this pain. And, of course, without Windows(tm) bug is very hard to
observe in the wild, since almost no synthetic test I have put on the
linux VMs is able to show it.
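For context, the nested layout libvirt builds for those threads looks roughly like this (an illustrative sketch for the cgroup-v1 cpu controller; the exact mount point and path prefix vary by distribution and libvirt version):

/sys/fs/cgroup/cpu/libvirt/qemu/<vm-name>/emulator
/sys/fs/cgroup/cpu/libvirt/qemu/<vm-name>/vcpu0
/sys/fs/cgroup/cpu/libvirt/qemu/<vm-name>/vcpu1

Capping the hierarchy at the <vm-name> level is what disabling the creation of cgroups deeper than qemu/vm/ amounts to.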
[libvirt-users] There's no output when connecting to console of domain on PowerPC
by Yin Olivia-R63875
Hi,
I tried to use libvirt to run KVM/QEMU on Freescale PowerPC platforms.
So far there's only one serial device (spapr-vty) defined in QEMU to work as a console, and it is for the IBM pSeries platform.
There's no serial device support in QEMU for Freescale PowerPC (ePAPR).
libvirt/src/qemu/qemu_command.c
/* This function generates the correct '-device' string for character
 * devices of each architecture.
 */
char *
qemuBuildChrDeviceStr(virDomainChrDefPtr serial,
                      virBitmapPtr qemuCaps,
                      char *os_arch,
                      char *machine)
{
    virBuffer cmd = VIR_BUFFER_INITIALIZER;

    if (STREQ(os_arch, "ppc64") && STREQ(machine, "pseries")) {
        if (serial->deviceType == VIR_DOMAIN_CHR_DEVICE_TYPE_SERIAL &&
            serial->source.type == VIR_DOMAIN_CHR_TYPE_PTY &&
            serial->info.type == VIR_DOMAIN_DEVICE_ADDRESS_TYPE_SPAPRVIO) {
            virBufferAsprintf(&cmd, "spapr-vty,chardev=char%s",
                              serial->info.alias);
            if (qemuBuildDeviceAddressStr(&cmd, &serial->info, qemuCaps) < 0)
                goto error;
        }
    } else
        virBufferAsprintf(&cmd, "isa-serial,chardev=char%s,id=%s",
                          serial->info.alias, serial->info.alias);

    if (virBufferError(&cmd)) {
        virReportOOMError();
        goto error;
    }

    return virBufferContentAndReset(&cmd);

error:
    virBufferFreeAndReset(&cmd);
    return NULL;
}
We usually connect to the guest with telnet. For instance:
/usr/bin/qemu-system-ppc -name demo -M ppce500v2 -enable-kvm -m 256 -nographic -kernel /media/ram/uImage -initrd /media/ram/ramdisk -append "root=/dev/ram rw console=ttyS0,115200" -serial tcp::4445,server
Then running 'telnet 10.193.20.xxx 4445' connects to the guest.
The temporary workaround is to not add a '-device' string after the '-serial' option:
diff -Nur libvirt-0.10.1.orig/src/qemu/qemu_command.c libvirt-0.10.1/src/qemu/qemu_command.c
--- libvirt-0.10.1.orig/src/qemu/qemu_command.c 2012-08-30 15:35:18.000000000 +0530
+++ libvirt-0.10.1/src/qemu/qemu_command.c 2012-10-05 17:19:32.060368755 +0530
@@ -5501,13 +5501,15 @@
             virCommandAddArg(cmd, devstr);
             VIR_FREE(devstr);
 
-            virCommandAddArg(cmd, "-device");
-            if (!(devstr = qemuBuildChrDeviceStr(serial, qemuCaps,
+            if (!STREQ(def->os.arch, "ppc")) {
+                virCommandAddArg(cmd, "-device");
+                if (!(devstr = qemuBuildChrDeviceStr(serial, qemuCaps,
                                                  def->os.arch,
                                                  def->os.machine)))
-                goto error;
-            virCommandAddArg(cmd, devstr);
-            VIR_FREE(devstr);
+                    goto error;
+                virCommandAddArg(cmd, devstr);
+                VIR_FREE(devstr);
+            }
         } else {
             virCommandAddArg(cmd, "-serial");
             if (!(devstr = qemuBuildChrArgStr(&serial->source, NULL)))
With the above patch applied to libvirt, all the other domain control
commands work, except 'virsh console domain'.
# cat >demo.args <<EOF
> /usr/bin/qemu-system-ppc -name demo -M ppce500v2 -enable-kvm -m 256 -nographic -kernel /media/ram/uImage -initrd /media/ram/ramdisk -append "root=/dev/ram rw console=ttyS0,115200" -serial tcp::4445,server -net nic
> EOF
# vi demo.args
/usr/bin/qemu-system-ppc -name demo -M ppce500v2 -enable-kvm -m 256 -nographic -kernel /media/ram/uImage -initrd /media/ram/ramdisk -append "root=/dev/ram rw console=ttyS0,115200" -serial tcp::4445,server -net nic
# virsh domxml-from-native qemu-argv demo.args >demo.xml
# vi demo.xml
<domain type='kvm'>
  <name>demo</name>
  <uuid>985d7154-83c8-0763-cbac-ecd159eee8a6</uuid>
  <memory unit='KiB'>262144</memory>
  <currentMemory unit='KiB'>262144</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='ppc' machine='ppce500v2'>hvm</type>
    <kernel>/media/ram/uImage</kernel>
    <initrd>/media/ram/ramdisk</initrd>
    <cmdline>root=/dev/ram rw console=ttyS0,115200</cmdline>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-ppc</emulator>
    <serial type='tcp'>
      <source mode='bind' host='' service='4445'/>
      <protocol type='raw'/>
      <target port='0'/>
    </serial>
    <console type='tcp'>
      <source mode='bind' host='' service='4445'/>
      <protocol type='raw'/>
      <target type='serial' port='0'/>
    </console>
    <memballoon model='virtio'/>
  </devices>
</domain>
# virsh -c qemu:///system define demo.xml
# virsh -c qemu:///system start demo
But it seems I can't connect to the console:
# virsh -c qemu:///system console demo
Connected to domain test
Escape character is ^]
error: internal error character device (null) is not using a PTY
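The error suggests virsh console only attaches to a PTY-backed character device, not a TCP chardev. A minimal sketch of the XML it appears to expect (an assumption based on the message above):

<serial type='pty'>
  <target port='0'/>
</serial>
<console type='pty'>
  <target type='serial' port='0'/>
</console>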
I also tried the '-serial pty' option:
/usr/bin/qemu-system-ppc -name test -M ppce500v2 -enable-kvm -m 256 -nographic -kernel /media/ram/uImage -initrd /media/ram/ramdisk -append "root=/dev/ram rw console=ttyS0,115200" -serial pty
Then there's no further output after the message below:
# virsh -c qemu:///system console demo
Connected to domain test
Escape character is ^]
It doesn't seem to be a libvirt group issue.
I also tried LXC: it could connect to the console if <init>/bin/sh</init>
is used instead of <init>/sbin/init</init>.
Best Regards,
Olivia
[libvirt-users] VMs fail to start with NUMA configuration
by Doug Goldstein
I am using libvirt 0.10.2.2 and qemu-kvm 1.2.2 (qemu-kvm 1.2.0 + qemu
1.2.2 applied on top, plus a number of stability patches). I am having an
issue where my VMs fail to start with the following message:
kvm_init_vcpu failed: Cannot allocate memory
Following the instructions at
http://libvirt.org/formatdomain.html#elementsNUMATuning I've added the
following to my VCPU configuration:
<vcpu placement='auto'>2</vcpu>
Which libvirt expands out as I'd expect per the documentation to:
<vcpu placement='auto'>2</vcpu>
<numatune>
  <memory mode='strict' placement='auto'/>
</numatune>
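A manual alternative described in the same document, sketched here under the assumption that nodes 0-1 are the desired targets, is to pin the memory to explicit nodes instead of using automatic placement:

<numatune>
  <memory mode='strict' nodeset='0-1'/>
</numatune>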
However, the VMs won't start, and the system is not low on memory:
# numactl --hardware
available: 8 nodes (0-7)
node 0 cpus: 0 4 8 12 16 20 24 28
node 0 size: 16374 MB
node 0 free: 11899 MB
node 1 cpus: 32 36 40 44 48 52 56 60
node 1 size: 16384 MB
node 1 free: 15318 MB
node 2 cpus: 2 6 10 14 18 22 26 30
node 2 size: 16384 MB
node 2 free: 15766 MB
node 3 cpus: 34 38 42 46 50 54 58 62
node 3 size: 16384 MB
node 3 free: 15347 MB
node 4 cpus: 3 7 11 15 19 23 27 31
node 4 size: 16384 MB
node 4 free: 15041 MB
node 5 cpus: 35 39 43 47 51 55 59 63
node 5 size: 16384 MB
node 5 free: 15202 MB
node 6 cpus: 1 5 9 13 17 21 25 29
node 6 size: 16384 MB
node 6 free: 15197 MB
node 7 cpus: 33 37 41 45 49 53 57 61
node 7 size: 16368 MB
node 7 free: 15669 MB
The system has 4 Opteron 6272 CPUs, which add up to a total of 64 cores, 16
cores per socket. These are the CPUs that Dan B noticed topology issues with
in the past; I thought that had been resolved, but in the capabilities
posted below you'll notice the topology is still incorrect.
# virsh capabilities
<capabilities>
  <host>
    <uuid>44454c4c-5300-1038-8031-c4c04f545331</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Opteron_G4</model>
      <vendor>AMD</vendor>
      <topology sockets='1' cores='8' threads='2'/>
      <feature name='nodeid_msr'/>
      <feature name='wdt'/>
      <feature name='skinit'/>
      <feature name='ibs'/>
      <feature name='osvw'/>
      <feature name='cr8legacy'/>
      <feature name='extapic'/>
      <feature name='cmp_legacy'/>
      <feature name='fxsr_opt'/>
      <feature name='mmxext'/>
      <feature name='osxsave'/>
      <feature name='monitor'/>
      <feature name='ht'/>
      <feature name='vme'/>
    </cpu>
    <power_management>
      <suspend_disk/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='8'>
        <cell id='0'>
          <cpus num='8'>
            <cpu id='0'/>
            <cpu id='4'/>
            <cpu id='8'/>
            <cpu id='12'/>
            <cpu id='16'/>
            <cpu id='20'/>
            <cpu id='24'/>
            <cpu id='28'/>
          </cpus>
        </cell>
        <cell id='1'>
          <cpus num='8'>
            <cpu id='32'/>
            <cpu id='36'/>
            <cpu id='40'/>
            <cpu id='44'/>
            <cpu id='48'/>
            <cpu id='52'/>
            <cpu id='56'/>
            <cpu id='60'/>
          </cpus>
        </cell>
        <cell id='2'>
          <cpus num='8'>
            <cpu id='2'/>
            <cpu id='6'/>
            <cpu id='10'/>
            <cpu id='14'/>
            <cpu id='18'/>
            <cpu id='22'/>
            <cpu id='26'/>
            <cpu id='30'/>
          </cpus>
        </cell>
        <cell id='3'>
          <cpus num='8'>
            <cpu id='34'/>
            <cpu id='38'/>
            <cpu id='42'/>
            <cpu id='46'/>
            <cpu id='50'/>
            <cpu id='54'/>
            <cpu id='58'/>
            <cpu id='62'/>
          </cpus>
        </cell>
        <cell id='4'>
          <cpus num='8'>
            <cpu id='3'/>
            <cpu id='7'/>
            <cpu id='11'/>
            <cpu id='15'/>
            <cpu id='19'/>
            <cpu id='23'/>
            <cpu id='27'/>
            <cpu id='31'/>
          </cpus>
        </cell>
        <cell id='5'>
          <cpus num='8'>
            <cpu id='35'/>
            <cpu id='39'/>
            <cpu id='43'/>
            <cpu id='47'/>
            <cpu id='51'/>
            <cpu id='55'/>
            <cpu id='59'/>
            <cpu id='63'/>
          </cpus>
        </cell>
        <cell id='6'>
          <cpus num='8'>
            <cpu id='1'/>
            <cpu id='5'/>
            <cpu id='9'/>
            <cpu id='13'/>
            <cpu id='17'/>
            <cpu id='21'/>
            <cpu id='25'/>
            <cpu id='29'/>
          </cpus>
        </cell>
        <cell id='7'>
          <cpus num='8'>
            <cpu id='33'/>
            <cpu id='37'/>
            <cpu id='41'/>
            <cpu id='45'/>
            <cpu id='49'/>
            <cpu id='53'/>
            <cpu id='57'/>
            <cpu id='61'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>none</model>
      <doi>0</doi>
    </secmodel>
    <secmodel>
      <model>dac</model>
      <doi>0</doi>
    </secmodel>
  </host>
  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <machine>pc-1.2</machine>
      <machine canonical='pc-1.2'>pc</machine>
      <machine>pc-1.1</machine>
      <machine>pc-1.0</machine>
      <machine>pc-0.15</machine>
      <machine>pc-0.14</machine>
      <machine>pc-0.13</machine>
      <machine>pc-0.12</machine>
      <machine>pc-0.11</machine>
      <machine>pc-0.10</machine>
      <machine>isapc</machine>
      <machine>none</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/bin/qemu-kvm</emulator>
        <machine>pc-1.2</machine>
        <machine canonical='pc-1.2'>pc</machine>
        <machine>pc-1.1</machine>
        <machine>pc-1.0</machine>
        <machine>pc-0.15</machine>
        <machine>pc-0.14</machine>
        <machine>pc-0.13</machine>
        <machine>pc-0.12</machine>
        <machine>pc-0.11</machine>
        <machine>pc-0.10</machine>
        <machine>isapc</machine>
        <machine>none</machine>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <pae/>
      <nonpae/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>
  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <machine>pc-1.2</machine>
      <machine canonical='pc-1.2'>pc</machine>
      <machine>pc-1.1</machine>
      <machine>pc-1.0</machine>
      <machine>pc-0.15</machine>
      <machine>pc-0.14</machine>
      <machine>pc-0.13</machine>
      <machine>pc-0.12</machine>
      <machine>pc-0.11</machine>
      <machine>pc-0.10</machine>
      <machine>isapc</machine>
      <machine>none</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/bin/qemu-kvm</emulator>
        <machine>pc-1.2</machine>
        <machine canonical='pc-1.2'>pc</machine>
        <machine>pc-1.1</machine>
        <machine>pc-1.0</machine>
        <machine>pc-0.15</machine>
        <machine>pc-0.14</machine>
        <machine>pc-0.13</machine>
        <machine>pc-0.12</machine>
        <machine>pc-0.11</machine>
        <machine>pc-0.10</machine>
        <machine>isapc</machine>
        <machine>none</machine>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>
</capabilities>
Any suggestions are appreciated.
--
Doug Goldstein
[libvirt-users] Libvirt support for windows
by varun bhatnagar
Hi,
I want to use libvirt on Windows, but when I try to launch virsh it gives
an error message saying: "error: invalid argument in transport methods
unix, ssh and ext are not supported under Windows".
The version I downloaded is libvirt-0.8.8.exe. I want to connect to
two virtualization technologies, VirtualBox and VMware. If libvirt is
not supported on Windows, is there any other tool or API that supports
multiple virtualization technologies?
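For what it's worth, both hypervisors have libvirt driver URIs that avoid the unix/ssh/ext transports entirely. A sketch (the ESX host name is a placeholder, and whether the 0.8.8 Windows build ships both drivers is an assumption):

virsh -c vbox:///session list --all
virsh -c esx://esx.example.com/?no_verify=1 list --all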
Thanks in advance!
[libvirt-users] Fail to build libvirt-sandbox under ubuntu 12.10
by pablo platt
I'm trying to build libvirt-sandbox under Ubuntu 12.10:
sudo apt-get install git build-essential lxc libvirt-bin libvirt-glib-1.0
libglib2.0-0 libglib2.0-dev gtk-doc-tools libxml2-dev libselinux-dev
git clone git://libvirt.org/libvirt-sandbox.git
cd libvirt-sandbox
sudo ./autobuild
The error I'm getting:
make[2]: Entering directory `/home/user/libvirt-sandbox/build/bin'
CC virt_sandbox-virt-sandbox.o
CCLD virt-sandbox
CC virt_sandbox_service_util-virt-sandbox-service-util.o
make[2]: *** No rule to make target `container.c', needed by
`virt_sandbox_service_util-container.o'. Stop.
Am I doing something wrong?
Thanks
[libvirt-users] http problem with (a particular url) and default (nat) networking
by John McFarlane
At work I have a script that provisions a VM for use by employees. One
step in this process is to fetch Hadoop, which we happen to get from
Cloudera. I noticed the script always failed when I used libvirt's default
networking (NAT) but worked fine when I used user-mode networking. My
instinct is that this is related to (potentially uncommon) network traffic
from the server in question and the iptables rules added by libvirt.
Repro steps:
1. Create a VM (I tested with Linux and FreeBSD guests) using the default
libvirt networking settings (<interface type='network'>).
2. Fetch the following URL with wget, curl, or fetch:
http://archive.cloudera.com/one-click-install/lucid/cdh3-repository_1.0_a...
Observe it will "hang". If you use strace you'll see it block on the select
call.
My particular host is using a virbr0 network bridge, with the following
iptables rules:
$ iptables -S -v -Z
-P INPUT ACCEPT -c 404828 91071544
-P FORWARD ACCEPT -c 0 0
-P OUTPUT ACCEPT -c 402905 45139291
-A INPUT -i virbr0 -p udp -m udp --dport 53 -c 26 1703 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -c 0 0 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -c 70 22960 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -c 0 0 -j ACCEPT
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED -c 1191 1495856 -j ACCEPT
-A FORWARD -s 192.168.122.0/24 -i virbr0 -c 853 64266 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -c 6 1968 -j ACCEPT
-A FORWARD -o virbr0 -c 0 0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -c 0 0 -j REJECT --reject-with icmp-port-unreachable
Zeroing chain `INPUT'
Zeroing chain `FORWARD'
Zeroing chain `OUTPUT'
I'm not sure how best to diagnose this problem. Any ideas or tips?
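One way to narrow it down might be to watch the flow cross the NAT bridge from the host while reproducing (a sketch; the full URL is the one truncated above):

# show each packet of the guest's HTTP session on the bridge
tcpdump -ni virbr0 'tcp port 80'

If small requests complete but this transfer stops mid-stream, that would point at size- or flag-dependent handling in the forwarding path.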
Thanks!
John M.
[libvirt-users] not able to stop vmware player node through libvirt API
by varun bhatnagar
Hi,
I have started one node through the libvirt API using its start method. I am
trying to stop that node using the destroy method provided by the API, but
I am not able to do so. Instead, I get a message saying:
libvir: error: internal error Child process (vmrun -T player stop
/root/test_folder/myImage.vmx soft) unexpected exit status 255
Can anybody tell me why I am not able to stop the node?
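For reference, the equivalent calls through virsh would look roughly like this (a sketch; the URI assumes the VMware Player driver, and the domain name is inferred from the .vmx path above):

virsh -c vmwareplayer:///session start myImage
virsh -c vmwareplayer:///session destroy myImage   # runs 'vmrun ... stop ... soft' underneath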
Thanks in advance. :)