[libvirt-users] some problem with snapshot by libvirt
by xingxing gao
Hi all, I am using libvirt to manage my VMs. Lately I have been testing
libvirt snapshots, but I have run into a problem.
The snapshot was created with this command:
snapshot-create-as win7 --disk-only --diskspec
vda,snapshot=external --diskspec hda,snapshot=no
But when I tried to revert to the snapshot created by the command
above, I got the error below:
virsh # snapshot-revert win7 1338041515 --force
error: unsupported configuration: revert to external disk snapshot not
supported yet
version:
virsh # version
Compiled against library: libvir 0.9.4
Using library: libvir 0.9.4
Using API: QEMU 0.9.4
Running hypervisor: QEMU 1.0.93
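Since virsh in this version cannot revert external disk snapshots, one common manual workaround (a sketch only; the image paths below are assumptions) is to point the domain back at the backing file, which the external snapshot left untouched:

```
# Stop the guest first; reverting a disk under a running guest is not safe.
virsh destroy win7

# The external snapshot turned the original image into a read-only backing
# file and switched vda to a new overlay; confirm the chain with:
qemu-img info /var/lib/libvirt/images/win7.1338041515

# Edit the domain so vda's <source file='...'/> points back at the
# original backing image:
virsh edit win7

# Then remove the now-stale snapshot metadata (on newer libvirt):
virsh snapshot-delete win7 1338041515 --metadata
```

Any changes written to the overlay since the snapshot are discarded by this procedure, which is the point of a revert.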
10 years, 1 month
[libvirt-users] Passing non-standard Options through Libvirt to QEMU
by Asadullah Hussain
Hello, I have a custom QEMU which requires some non-standard command-line
arguments to launch a VM, e.g., the "--proc-type=secondary" option is always
required to launch a QEMU VM.
To launch the VM through libvirt (virsh), how do I specify these
non-standard options in the domain XML?
Or, if that is not possible, can you point me to where in the code
libvirt converts the XML into the QEMU command line (so that I can insert
the option there)?
Regards
--
Asadullah Hussain
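For reference, libvirt's QEMU driver can pass arbitrary extra arguments through to QEMU via the qemu XML namespace, which avoids patching libvirt at all. A minimal sketch (the rest of the domain definition is elided, and the domain name is a placeholder):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>myvm</name>
  <!-- ... the usual os/memory/devices sections ... -->
  <qemu:commandline>
    <qemu:arg value='--proc-type=secondary'/>
  </qemu:commandline>
</domain>
```

If patching is preferred instead, the XML-to-argv conversion lives in src/qemu/qemu_command.c in the libvirt sources. Note that libvirt marks domains using qemu:commandline as tainted, since it cannot reason about the extra arguments.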
10 years, 8 months
[libvirt-users] method for communication between containers
by WANG Cheng D
Dear all,
In my system, two containers need to exchange data as quickly as possible, and the two containers are hosted on the same physical machine. I wonder: is a socket the only method for communication between containers?
Thank you.
Cheng Wang
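Sockets are not the only option: a directory bind-mounted into both containers can carry a UNIX-domain socket, a FIFO, or a shared-memory file, and for same-host peers the UNIX socket is usually the simplest and fast, since data never touches the network stack. A minimal sketch of the pattern in Python, with threads standing in for the two containers and the socket path as a placeholder:

```python
import os
import socket
import threading

# Placeholder path; between real containers this would live in a
# directory bind-mounted into both of them.
PATH = "/tmp/demo_ipc.sock"
if os.path.exists(PATH):
    os.unlink(PATH)

# "Container A": bind and listen on the shared UNIX-domain socket.
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(PATH)
srv.listen(1)

def serve():
    conn, _ = srv.accept()
    conn.sendall(b"ack:" + conn.recv(1024))  # echo back with a prefix
    conn.close()

t = threading.Thread(target=serve)
t.start()

# "Container B": connect through the same path and exchange a message.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(PATH)
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
os.unlink(PATH)
print(reply.decode())  # prints: ack:hello
```

For very high throughput, a shared-memory region (e.g. an mmap'd file on a shared mount) with a socket used only for signalling is a common refinement.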
10 years, 8 months
[libvirt-users] Programmatically force shutdown a guest: possible?
by Pasquale Dir
I am looking at the shutdown method, but if the guest is a desktop
system, like for example Ubuntu, it just has the effect of showing a box
prompting the user for shutdown/reboot and such.
I could enter the guest and change this default behaviour, and that actually
works, but I'd like a way to send a shutdown command without doing so.
Is it possible?
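For completeness, a couple of ways this is commonly done from virsh (the guest name is a placeholder; --mode requires libvirt >= 0.9.10, and agent mode needs qemu-guest-agent installed in the guest):

```
# Hard power-off, no guest cooperation required (like pulling the plug):
virsh destroy myguest

# Clean shutdown via a specific mechanism, bypassing the desktop dialog:
virsh shutdown myguest --mode acpi    # ACPI power-button event
virsh shutdown myguest --mode agent   # request via qemu-guest-agent
```

Programmatically, the same choice is exposed through virDomainDestroy() and virDomainShutdownFlags().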
10 years, 8 months
[libvirt-users] Connecting libvirt to manually compiled QEMU
by Asadullah Hussain
Hello, I have manually compiled a customized QEMU (1.4.0) which runs fine on
its own (creating VMs, etc.), but I want to access this QEMU through libvirt
(virt-manager, virsh, etc.).
However, the libvirt driver only looks in "/usr/bin" for QEMU binaries. How
can I tell libvirt to connect to my QEMU, which is placed in the
"/home/user/qemu" directory?
--
Asadullah Hussain
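One possible answer: the domain XML's <emulator> element selects which binary libvirt runs for a given guest, so no relocation to /usr/bin should be needed. A sketch, where the exact binary name under /home/user/qemu is an assumption:

```xml
<devices>
  <emulator>/home/user/qemu/x86_64-softmmu/qemu-system-x86_64</emulator>
  <!-- ... disks, interfaces, etc. ... -->
</devices>
```

The binary must still be executable by the user/group libvirt runs guests as, and security drivers (AppArmor/SELinux) may need to be taught about the new path.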
10 years, 8 months
[libvirt-users] Specify a disk image file which is on an iscsi target
by Pasquale Dir
Hello,
for my VM I'd like to specify a disk image which would be on an iSCSI
target.
Currently, by using targets directly, I'd just specify the path
/dev/disk/by-path/ip-192.168.1.2:3260-iscsi-iqn.myiqn
but how do I specify a file which is actually INSIDE this path, e.g.
/dev/disk/by-path/ip-192.168.1.2:3260-iscsi-iqn.myiqn/myVmImage.img ?
I saw that there is another path under /media/myHomeUser/anId, but does this
id change each time?
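A note on the two cases, with a sketch in standard libvirt disk XML: the by-path entry is a block device, not a directory, so a file can only live "inside" it if the LUN carries a filesystem that is mounted somewhere stable on the host (the /mnt/iscsi mount point below is an assumption; the auto-generated /media/... paths are not reliable for this):

```xml
<!-- Variant 1: use the iSCSI LUN directly as the guest disk -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-path/ip-192.168.1.2:3260-iscsi-iqn.myiqn'/>
  <target dev='vda' bus='virtio'/>
</disk>

<!-- Variant 2: the LUN holds a filesystem mounted on the host,
     and the guest disk is an image file on that mount -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt/iscsi/myVmImage.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

For variant 2, adding a fixed mount point via /etc/fstab (by filesystem UUID) gives a path that does not change between logins or reboots.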
10 years, 8 months
[libvirt-users] virsh attach-disk live guest problem
by john fisher
Hypervisor = Ubuntu 12.04.2
Guest = Ubuntu 12.04.4
I used virt-manager to create a new storage file in qcow2 format on the hypervisor and then attached it to the guest.
In virt-manager I can see the disk is attached.
Trying virsh attach-disk, I get a failure saying the disk is already attached.
virsh dumpxml shows the disk at vdb, as it should,
but
on the guest, fdisk -l shows no vdb and it's not in /proc.
I found one reference to AppArmor saying to add /proc rw to the hypervisor's AppArmor libvirt config.
But I see some release notes that might be about avoiding this (it sounds insecure).
So... do I have to stop and restart the guest, change AppArmor, or what?
--
John Fisher
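A hedged sketch of the usual invocation (guest name and path are placeholders): attach-disk can target the live guest, the persistent config, or both, and a disk added only to the persistent config does not appear in the guest until its next boot, which may be what happened here:

```
# Attach to the running guest AND persist it in the config in one step:
virsh attach-disk guest01 /var/lib/libvirt/images/extra.qcow2 vdb \
      --driver qemu --subdriver qcow2 --live --config
```

On older virsh releases the --live/--config pair is replaced by the single --persistent flag.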
10 years, 9 months
[libvirt-users] Using qemu+ssh on openSUSE 13.1/Tumbleweed
by Johannes Kastl
Hi everyone,
I am trying to get libvirt with qemu-kvm to work on my machines
running openSUSE 13.1 / Tumbleweed. My question is in regard to using
qemu+ssh, which would be my preference, as I already have a working
ssh-key authentication with SSH-Agent.
I set the permissions of the manage-socket to 0770, added my user to
the libvirt group and set auth_unix_rw = "none" in
/etc/libvirt/libvirtd.conf.
I am now able to see all my VMs using
virsh list --all
(I had to set a default_uri in .config/libvirt/libvirt.conf to get any
non-empty output).
Trying from another machine (say the target hostname is HOSTX, username
testuser) with virsh -c qemu+ssh://testuser@HOSTX/system always
results in
>> error: failed to connect to the hypervisor error: internal error:
>> received hangup / error event on socket
I see this in /var/log/messages (lines will be wrapped in the mail,
sorry):
> 2014-03-24T21:25:55.023525+01:00 HOSTX sshd[3509]: Accepted
> publickey for testuser from 192.168.79.8 port 54176 ssh2
> 2014-03-24T21:25:55.024744+01:00 HOSTX sshd[3509]:
> pam_unix(sshd:session): session opened for user testuser by (uid=0)
> 2014-03-24T21:25:55.025717+01:00 HOSTX kernel: [ 3162.225673]
> type=1006 audit(1395692755.024:69): pid=3509 uid=0 old
> auid=4294967295 new auid=1000 old ses=4294967295 new ses=24 res=1
> 2014-03-24T21:25:55.027501+01:00 HOSTX systemd[1]: Starting Session
> 24 of user testuser. 2014-03-24T21:25:55.027918+01:00 HOSTX
> systemd-logind[703]: New session 24 of user testuser.
> 2014-03-24T21:25:55.028213+01:00 HOSTX systemd[1]: Started Session
> 24 of user testuser. 2014-03-24T21:25:55.081719+01:00 HOSTX
> sshd[3511]: Received disconnect from 192.168.79.8: 11: disconnected
> by user 2014-03-24T21:25:55.082095+01:00 HOSTX sshd[3509]:
> pam_unix(sshd:session): session closed for user testuser
> 2014-03-24T21:25:55.086268+01:00 HOSTX systemd-logind[703]: Removed
> session 24.
I am curious about the 'Received disconnect from 192.168.79.8: 11:
disconnected by user' thing.
How can I fix this? Or how to debug this? How can I get more info from
libvirt? /var/log/kvm is an empty directory, /var/log/libvirt/qemu
just has logs for the different VMs.
I have already tried to get polkit running, although I think it should
not be used due to the auth_unix_rw = "none".
Any help will be appreciated.
Versions:
openSUSE Tumbleweed (which is openSUSE 13.1 with some more recent
software)
qemu-kvm is version 1.7.90-19.1.x86_64
libvirt is 1.1.2
The following packages are installed:
libvirt-1.1.2-2.18.3.x86_64
libvirt-client-1.1.2-2.18.3.x86_64
libvirt-daemon-1.1.2-2.18.3.x86_64
libvirt-daemon-config-network-1.1.2-2.18.3.x86_64
libvirt-daemon-config-nwfilter-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-interface-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-libxl-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-lxc-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-network-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-nodedev-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-nwfilter-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-qemu-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-secret-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-storage-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-uml-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-vbox-1.1.2-2.18.3.x86_64
libvirt-daemon-driver-xen-1.1.2-2.18.3.x86_64
libvirt-daemon-lxc-1.1.2-2.18.3.x86_64
libvirt-daemon-qemu-1.1.2-2.18.3.x86_64
libvirt-doc-1.1.2-2.18.3.x86_64
libvirt-glib-1_0-0-0.1.7-2.1.3.x86_64
libvirt-python-1.1.2-2.18.3.x86_64
Regards,
Johannes
- --
Have you ever noticed that the Klingons are all speaking unix? 'Grep
ls awk chmod.' 'Mknod ksh tar imap.' 'Wall fsck yacc!' (that last is
obviously a curse of some sort).
Gandalf Parker
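Two checks that often narrow a "received hangup" down (assuming a standard remote setup): the qemu+ssh transport simply runs nc against the remote libvirt socket over ssh, so the client trace and the remote socket access can be tested separately:

```
# Verbose trace of what the client is doing during the connect:
LIBVIRT_DEBUG=1 virsh -c qemu+ssh://testuser@HOSTX/system list --all

# qemu+ssh effectively runs 'nc -U /var/run/libvirt/libvirt-sock' on the
# remote side; if the remote nc lacks -U support (traditional netcat),
# the connection drops exactly like this. Test it directly:
ssh testuser@HOSTX nc -U /var/run/libvirt/libvirt-sock
```

If the second command fails, installing an nc variant with UNIX-socket support (e.g. the OpenBSD netcat) on HOSTX is a common fix.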
10 years, 9 months
[libvirt-users] host crashes "unable to handle paging request"
by Raphael Bauduin
Hi,
we have regular crashes of a KVM host with the error "unable to handle
kernel paging request".
Can this be due to memory over-commitment, even though some memory is still
used by the kernel for caches and buffers? (The collectd graph shows no free
memory, with 15 GB used, very little in buffers, and 1 GB of cache.) There are
32 GB of swap, of which only 150 MB are used.
I suspect this might be the direction to search to find the cause, but I would
be happy to hear from people versed in kernel behaviour who can confirm or
reject my hypothesis. Below is the full error.
Thanks!
Raph
745 Mar 23 14:27:37 sMaster01 kernel: [241450.355339] BUG: unable to handle
kernel paging request at ffff8804c001fade
746 Mar 23 14:27:37 sMaster01 kernel: [241450.355384] IP:
[<ffffffff8117e9e9>] bio_check_eod+0x29/0xcd
747 Mar 23 14:27:37 sMaster01 kernel: [241450.355433] PGD 1002063 PUD 0
748 Mar 23 14:27:37 sMaster01 kernel: [241450.355464] Oops: 0000 [#1] SMP
749 Mar 23 14:27:37 sMaster01 kernel: [241450.355496] last sysfs file:
/sys/devices/system/cpu/cpu15/
topology/thread_siblings
750 Mar 23 14:27:37 sMaster01 kernel: [241450.355551] CPU 4
751 Mar 23 14:27:37 sMaster01 kernel: [241450.355577] Modules linked in:
ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4
xt_conntrack nf_conntrack ipt_REJECT xt_tcpudp kvm_amd kvm ip6table_filter
ip6_tables iptable_fi lter ip_tables x_tables tun nfsd exportfs nfs
lockd fscache nfs_acl auth_rpcgss sunrpc bridge stp bonding dm_round_robin
dm_multipath scsi_dh loop snd_pcm snd_timer snd soundcore snd_page_alloc
serio_raw evdev tpm_tis tpm tpm_bios p smouse pcspkr amd64_edac_mod
edac_core button edac_mce_amd shpchp i2c_piix4 container pci_hotplug
i2c_core processor ext3 jbd mbcache dm_mirror dm_region_hash dm_log
dm_snapshot dm_mod sd_mod crc_t10dif mptsas mptscsih mptbase lpfc
ehci_hcd scsi_transport_fc tg3 scsi_tgt scsi_transport_sas ohci_hcd libphy
scsi_mod usbcore nls_base thermal fan thermal_sys [last unloaded:
scsi_wait_scan]
752 Mar 23 14:27:37 sMaster01 kernel: [241450.356084] Pid: 3557, comm:
kjournald Not tainted 2.6.32.61vanilla #1 PRIMERGY BX630 S2
753 Mar 23 14:27:37 sMaster01 kernel: [241450.356141] RIP:
0010:[<ffffffff8117e9e9>] [<ffffffff8117e9e9>] bio_check_eod+0x29/0xcd
754 Mar 23 14:27:37 sMaster01 kernel: [241450.356196] RSP:
0018:ffff8804229abba0 EFLAGS: 00010202
755 Mar 23 14:27:37 sMaster01 kernel: [241450.356228] RAX:
ffff8804c001fad6 RBX: ffff8802e7235080 RCX: 00011200061e5110
756 Mar 23 14:27:37 sMaster01 kernel: [241450.356279] RDX:
0000000000000008 RSI: 0000000000000008 RDI: ffff8802e7235080
757 Mar 23 14:27:37 sMaster01 kernel: [241450.356331] RBP:
ffff8802e7235080 R08: 0000000000000000 R09: ffff880425c54c00
758 Mar 23 14:27:37 sMaster01 kernel: [241450.356383] R10:
0000000000000003 R11: 00000000022e539e R12: ffff8802e7235080
759 Mar 23 14:27:37 sMaster01 kernel: [241450.356434] R13:
ffff8802e7235080 R14: ffff880425c54c00 R15: ffff8802e6281850
760 Mar 23 14:27:37 sMaster01 kernel: [241450.356486] FS:
00007faa6a757820(0000) GS:ffff88000fc80000(0000) knlGS:0000000000000000
761 Mar 23 14:27:37 sMaster01 kernel: [241450.356540] CS: 0010 DS: 0018
ES: 0018 CR0: 000000008005003b
762 Mar 23 14:27:37 sMaster01 kernel: [241450.356573] CR2:
ffff8804c001fade CR3: 00000000cc11f000 CR4: 00000000000006e0
763 Mar 23 14:27:37 sMaster01 kernel: [241450.356628] DR0:
0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
764 Mar 23 14:27:37 sMaster01 kernel: [241450.356681] DR3:
0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
765 Mar 23 14:27:37 sMaster01 kernel: [241450.356733] Process kjournald
(pid: 3557, threadinfo ffff8804229aa000, task ffff88041490a300)
766 Mar 23 14:27:37 sMaster01 kernel: [241450.356788] Stack:
767 Mar 23 14:27:37 sMaster01 kernel: [241450.356812] ffff880415382c00
0000000100000285 ffff8804229abfd8 0000000000005186
768 Mar 23 14:27:37 sMaster01 kernel: [241450.356852] <0> 0000000000000000
000000000f1c2776 ffff8804128efa38 ffff8802e7235080
769 Mar 23 14:27:37 sMaster01 kernel: [241450.356913] <0> ffff8802e7235080
ffff8802e7235080 ffff8800cdacae40 ffffffff8117eb5a
770 Mar 23 14:27:37 sMaster01 kernel: [241450.356993] Call Trace:
771 Mar 23 14:27:37 sMaster01 kernel: [241450.357021]
[<ffffffff8117eb5a>] ? generic_make_request+0xcd/0x2f9
772 Mar 23 14:27:37 sMaster01 kernel: [241450.357058]
[<ffffffff810b6034>] ? mempool_alloc+0x55/0x106
773 Mar 23 14:27:37 sMaster01 kernel: [241450.357091]
[<ffffffff8117ee5c>] ? submit_bio+0xd6/0xf2
774 Mar 23 14:27:37 sMaster01 kernel: [241450.357125]
[<ffffffff8110d83f>] ? submit_bh+0xf5/0x115
775 Mar 23 14:27:37 sMaster01 kernel: [241450.357158]
[<ffffffff8110edc0>] ? sync_dirty_buffer+0x51/0x93
776 Mar 23 14:27:37 sMaster01 kernel: [241450.357196]
[<ffffffffa01727c7>] ? journal_commit_transaction+0xaa6/0xe4f [jbd]
777 Mar 23 14:27:37 sMaster01 kernel: [241450.357252]
[<ffffffffa0175194>] ? kjournald+0xdf/0x226 [jbd]
778 Mar 23 14:27:37 sMaster01 kernel: [241450.357288]
[<ffffffff810651de>] ? autoremove_wake_function+0x0/0x2e
779 Mar 23 14:27:37 sMaster01 kernel: [241450.357324]
[<ffffffffa01750b5>] ? kjournald+0x0/0x226 [jbd]
780 Mar 23 14:27:37 sMaster01 kernel: [241450.357357]
[<ffffffff81064f11>] ? kthread+0x79/0x81
781 Mar 23 14:27:37 sMaster01 kernel: [241450.357391]
[<ffffffff81011baa>] ? child_rip+0xa/0x20
782 Mar 23 14:27:37 sMaster01 kernel: [241450.357425]
[<ffffffff81016568>] ? read_tsc+0xa/0x20
783 Mar 23 14:27:37 sMaster01 kernel: [241450.357456]
[<ffffffff81064e98>] ? kthread+0x0/0x81
784 Mar 23 14:27:37 sMaster01 kernel: [241450.357487]
[<ffffffff81011ba0>] ? child_rip+0x0/0x20
785 Mar 23 14:27:37 sMaster01 kernel: [241450.357517] Code: 5c c3 41 55 49
89 fd 41 54 55 53 48 83 ec 38 65 48 8b 04 25 28 00 00 00 48 89 44 24 28 31
c0 85 f6 0f 84 86 00 00 00 48 8b 47 10 <48> 8b 40 08 48 8b 40 68 48 c1 f8
09 74 74 89 f2 48 8b 0f 48 39
786 Mar 23 14:27:37 sMaster01 kernel: [241450.357738] RIP
[<ffffffff8117e9e9>] bio_check_eod+0x29/0xcd
787 Mar 23 14:27:37 sMaster01 kernel: [241450.357772] RSP
<ffff8804229abba0>
788 Mar 23 14:27:37 sMaster01 kernel: [241450.357799] CR2: ffff8804c001fade
789 Mar 23 14:27:37 sMaster01 kernel: [241450.358183] ---[ end trace
608fcf1f5a482549 ]---
--
Web database: http://www.myowndb.com
Free Software Developers Meeting: http://www.fosdem.org
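One hedged note on the hypothesis: memory over-commitment by itself normally ends in the OOM killer picking a victim, not a kernel oops; a failed paging request at a kernel address like the one in the trace above more often points at bad RAM or a kernel/driver bug. The overcommit side can at least be checked quickly:

```
# Overcommit policy (0 = heuristic, 1 = always, 2 = strict) and accounting:
cat /proc/sys/vm/overcommit_memory
grep -E '^Commit' /proc/meminfo    # CommitLimit vs Committed_AS

# A memtest86+ pass on the host would help rule out faulty RAM.
```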
10 years, 9 months
[libvirt-users] LXC + passthrough mount and host filesystem-cache
by James R. Leu
Hello,
I'm using libvirt to build/run LXC instances. My LXC instances use
passthrough filesystem mounts. When I try to do large filesystem
operations (e.g. tar or rsync), the filesystem cache on the host
spikes, which causes the OOM handler to run and kill processes
in the LXC.
Has anyone else seen this? Is there a way around this?
At this point I'm resorting to running a cron job that drops
the filesystem cache every 5 minutes. The result is that the filesystem
cache on the host never grows too large and the OOM killer never runs
against LXC processes. The obvious downside is that I'm killing my
filesystem performance by dropping the cache.
I'm currently running libvirt 1.0.0
--
James R. Leu
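For reference, the cron workaround described above usually looks like this (an /etc/cron.d entry; requires root):

```
# Every 5 minutes: write out dirty pages, then drop the page cache
*/5 * * * * root sync && echo 3 > /proc/sys/vm/drop_caches
```

A less drastic alternative worth trying (the limit values below are placeholders): libvirt exposes per-container memory cgroup limits in the domain XML, and with a limit in place cache pages charged to the container are reclaimed before the OOM killer fires:

```xml
<memtune>
  <hard_limit unit='KiB'>4194304</hard_limit>
  <soft_limit unit='KiB'>2097152</soft_limit>
</memtune>
```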
10 years, 9 months