[libvirt-users] libvirt-lxc
by Mark Clarkson
Hi,
I noticed that libvirt-lxc is being deprecated for Red Hat:
"Future development on the Linux containers framework is now based on
the docker command-line interface. libvirt-lxc tooling may be removed in
a future release of Red Hat Enterprise Linux (including Red Hat
Enterprise Linux 7) and should not be relied upon for developing custom
container management applications." -
https://access.redhat.com/articles/1365153
And CentOS:
"further deprecated packages: libvirt-daemon-driver-lxc,
libvirt-daemon-lxc and libvirt-login-shell " -
http://wiki.centos.org/Manuals/ReleaseNotes/CentOS7
And LXC support from linuxcontainers.org is poor on Red Hat/CentOS:
"... Also Cgmanager which is currently not available on CentOS 7. So
cannot support unprivileged containers and thus LXD. Systemd based
containers need at least LXC 1.1, lxcfs and related dependencies that
are not available on CentOS. ... For a stable, seamless and smooth
experience we suggest either Debian Wheezy with Flockport packages or
Ubuntu Trusty. ..." -
https://www.flockport.com/lxc-and-lxd-support-across-distributions/
It seems that the only way for me to use LXC containers on CentOS/Red Hat
is to use Docker, which I am not particularly happy about since, as I
understand it, Docker and libvirt-lxc/linuxcontainers.org LXC serve
different use cases, each with its own pros and cons. For example:
"Why use LXD? ... Full operating system functionality within containers,
not just single processes ..." - http://www.ubuntu.com/cloud/tools/lxd
There are many uses for full containers within build, server management,
testing, etc., where quickly creating containers that look, 'feel' and
act just like real servers is beneficial, and far cheaper (in many ways)
and more versatile than fully virtualised machines or Docker containers.
I have just discovered libvirt-lxc and found that it works well on both
Ubuntu and Red Hat, and is designed to be integrated into tooling, which
is exactly what I need.
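For example, a full-OS container needs only a few lines of domain XML and
can then be driven entirely through virsh or the API. A minimal sketch
(the name, rootfs path and network below are placeholders, not from any
real setup):
<domain type='lxc'>
  <name>mycontainer</name>
  <memory unit='MiB'>512</memory>
  <vcpu>1</vcpu>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <devices>
    <filesystem type='mount'>
      <source dir='/var/lib/libvirt/lxc/mycontainer'/>
      <target dir='/'/>
    </filesystem>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
followed by "virsh -c lxc:/// define mycontainer.xml" and
"virsh -c lxc:/// start mycontainer".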
I considered runc before libvirt, but libvirt is so versatile - allowing
me to use other technologies such as qemu/kvm, and offering a rich API -
that I would prefer to use it. It would give me the most options for
change in the future, and seemed like a 'no-brainer' until I saw the
deprecation announcement.
Will libvirt-lxc be dropped from libvirt?
Will there be an alternative for a similar container-based use case (i.e.
one requiring a full machine) on Red Hat?
Best Regards
Mark Clarkson
[libvirt-users] Libvirt LXC vcpu doesn't seem to work
by Dave Riches
Hi,
I seem to have a problem when creating an LXC container through virsh.
While virsh -c lxc:/// dominfo <container> shows (for example) 2 VCPUs as
defined, if I run a CPU-intensive task (such as stress --cpu 10) it will
max out 10 CPU cores on the host.
If I echo "0" > /cgroup/cpuset/libvirt/lxc/<domain>/cpuset.cpus,
then the container is properly confined to just one CPU core:
subsequently, running stress --cpu 10 maxes out only one core on the
host.
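(For comparison, I would have expected a cpuset on the vcpu element in the
domain XML to do this persistently - a guess on my part, I don't know
whether the LXC driver honours it - e.g. to restrict the 2 VCPUs to host
cores 0-1:)
<vcpu placement='static' cpuset='0-1'>2</vcpu>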
Does anyone know what syntax/config I'm missing when installing the LXC
container via virsh, to ensure that the VCPUs are restricted to the
number defined?
Thanks in advance
[libvirt-users] VirtIO specific to KVM ??
by Jatin Davey
Hi
I have a query with respect to the VirtIO drivers.
I have experience using them with guests (Red Hat guests) on a KVM
hypervisor. I want to know whether we can use these drivers on ESX as well.
Appreciate any response in this regard.
Thanks
Jatin
[libvirt-users] creating a network on existing bridge
by Fırat KÜÇÜK
Hello,
Put simply, I want to create a network on an existing bridge.
My XML:
<network>
  <name>nn1</name>
  <bridge name="br-nn1" />
</network>
When I try to start the network, I get a "File exists" error:
error: Failed to start network nn1
error: Unable to create bridge br-nn1: File exists
I could define this on older releases, but now I can't.
Ubuntu 15.04 / libvirt version: 1.2.12
virsh version:
Compiled against library: libvirt 1.2.12
Using library: libvirt 1.2.12
Using API: QEMU 1.2.12
Running hypervisor: QEMU 2.2.0
Is there any way to activate the network?
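(One guess on my side - untested - is that reusing an existing bridge
needs an explicit bridge forward mode, along these lines:)
<network>
  <name>nn1</name>
  <forward mode="bridge"/>
  <bridge name="br-nn1"/>
</network>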
Regards.
[libvirt-users] W2k3 with more than one QXL video card
by Ruzsinszky Attila
Hi,
Is that possible?
If I start in normal VGA mode I can see a yellow "!" next to the 2nd and 3rd
QXL cards, and Windows complains that the device cannot start (Code 10).
If I try to change the normal VGA resolution to something higher (for example
800x600), I get a black screen in virt-manager. I use the SPICE protocol (for
multiple screens).
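For context, by "more than one QXL video card" I mean multiple <video>
devices with the qxl model in the domain XML - illustrative only, my exact
attributes may differ:
<video>
  <model type='qxl'/>
</video>
<video>
  <model type='qxl'/>
</video>
<video>
  <model type='qxl'/>
</video>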
Any solution?
W2k3 R2 Ent. Ed. SP2
TIA,
Ruzsi
[libvirt-users] Do I need to enable qemu-ga's guest-suspend: hybrid/suspend-ram/disk/shutdown?
by james harvey
How do I "enable" qemu-ga on a guest to be able to (I think this means have
a success-response:true) for: guest-suspend-hybrid; guest-suspend-ram;
guest-suspend-disk; and guest-shutdown?
At least I think that's my question.
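(Concretely, what I want to work on the host side - and, as far as I
understand it, the agent command each one uses - is:)
virsh # dompmsuspend <vmname> disk        (guest-suspend-disk)
virsh # dompmsuspend <vmname> mem         (guest-suspend-ram)
virsh # dompmsuspend <vmname> hybrid      (guest-suspend-hybrid)
virsh # shutdown <vmname> --mode agent    (guest-shutdown)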
http://wiki.stoney-cloud.org/wiki/Qemu_Guest_Agent_Integration shows these
same 4 as false, so I'm not sure if they're always supposed to be that way.
My overall point is to be able to run dompmsuspend servo disk. After
setting up qemu-ga as far as I have, it at least says:
=====
Domain <vmname> successfully suspended
=====
But it doesn't seem to actually suspend the guest.
Using libvirt 1.2.18 (-1 Arch) and QEMU git-master (2.4.0.r40384.2d69736).
pm-utils 1.4.1-6. acpid 2.0.23-4
Installed using Q35 chipset.
======
{{{ domain xml file includes: }}}
<pm>
  <suspend-to-mem enables='yes'/>
  <suspend-to-disk enables='yes'/>
</pm>
...
<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/<vmname>.org.qemu.guest_agent.0'/>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
  <address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
=====
client # qemu-ga --verbose
debug: received EOF
{{{ server:
virsh # shutdown <vmname> --mode agent
error: Failed to shutdown domain <vmname>
error: internal error: unable to execute QEMU agent command
'guest-shutdown': child process has failed to shutdown
}}}
debug: read data, count: 59, data: {"execute":"guest-sync",
"arguments":{"id":1439251067706}}
debug: process_event: called
debug: processing command
debug: sending data, count: 26
debug: read data, count: 62, data:
{"execute":"guest-shutdown","arguments":{"mode":"powerdown"}}
debug: process_event: called
debug: processing command
debug: sending data, count: 85
=====
{{{ server:
virsh # qemu-agent-command <vmname> '{"execute":"guest-ping"}'
{"return":{}}
}}}
debug: read data, count: 59, data: {"execute":"guest-sync",
"arguments":{"id":1439251455241}}
debug: process_event: called
debug: processing command
debug: sending data, count: 26
debug: read data, count: 25, data: {"execute":"guest-ping"}
debug: process_event: called
debug: processing command
debug: sending data, count: 15
=====
{{{ server - the JSON data is all on one line - I broke it up:
virsh # qemu-agent-command servo '{"execute":"guest-info"}'
{"return":
{"version":"2.3.94","supported_commands":
[{"enabled":true,"name":"guest-get-memory-block-info","success-response":true},
{"enabled":true,"name":"guest-set-memory-blocks","success-response":true},
{"enabled":true,"name":"guest-get-memory-blocks","success-response":true},
{"enabled":true,"name":"guest-set-user-password","success-response":true},
{"enabled":true,"name":"guest-get-fsinfo","success-response":true},
{"enabled":true,"name":"guest-set-vcpus","success-response":true},
{"enabled":true,"name":"guest-get-vcpus","success-response":true},
{"enabled":true,"name":"guest-network-get-interfaces","success-response":true},
{"enabled":true,"name":"guest-suspend-hybrid","success-response":*false*},
{"enabled":true,"name":"guest-suspend-ram","success-response":*false*},
{"enabled":true,"name":"guest-suspend-disk","success-response":*false*},
{"enabled":true,"name":"guest-fstrim","success-response":true},
{"enabled":true,"name":"guest-fsfreeze-thaw","success-response":true},
{"enabled":true,"name":"guest-fsfreeze-freeze-list","success-response":true},
{"enabled":true,"name":"guest-fsfreeze-freeze","success-response":true},
{"enabled":true,"name":"guest-fsfreeze-status","success-response":true},
{"enabled":true,"name":"guest-file-flush","success-response":true},
{"enabled":true,"name":"guest-file-seek","success-response":true},
{"enabled":true,"name":"guest-file-write","success-response":true},
{"enabled":true,"name":"guest-file-read","success-response":true},
{"enabled":true,"name":"guest-file-close","success-response":true},
{"enabled":true,"name":"guest-file-open","success-response":true},
{"enabled":true,"name":"guest-shutdown","success-response":*false*},
{"enabled":true,"name":"guest-info","success-response":true},
{"enabled":true,"name":"guest-set-time","success-response":true},
{"enabled":true,"name":"guest-get-time","success-response":true},
{"enabled":true,"name":"guest-ping","success-response":true},
{"enabled":true,"name":"guest-sync","success-response":true},
{"enabled":true,"name":"guest-sync-delimited","success-response":true}]}}
}}}
=====
[libvirt-users] libvirt-guests.service doesn't work, but manually running libvirt-guests.sh stop does
by james harvey
Using libvirt 1.2.18 (-1 Arch) and QEMU git-master (2.4.0.r40384.2d69736).
pm-utils 1.4.1-6. acpid 2.0.23-4
Installed using Q35 chipset.
I can perform virsh # shutdown <vmname>, and watching the client VM console
see a graceful shutdown.
On host:
=====
# systemctl reboot
{{{ client VM console immediately blanks out - I do NOT see a graceful
shutdown }}}
libvirt-guests.sh: Running guests on default URI: <vmname>
libvirt-guests.sh: Shutting down guests on default URI...
libvirt-guests.sh: Starting shutdown on guest: <vmname>
libvirt-guests.sh: Waiting for 1 guests to shut down, 300 seconds left
A stop job is running for Suspend Active Libvirt Guests...
libvirt-guests.sh: Waiting for 1 guests to shut down
=====
In about 85 seconds, it moves past this, but the guest was not suspended or
gracefully shut down. The guest's "last" shows a crash. And, watching the
VM console, the minute systemctl reboot runs on the host, the VM console
disconnects.
=====
# systemctl status libvirt-guests
● libvirt-guests.service - Suspend Active Libvirt Guests
Loaded: loaded (/usr/lib/systemd/system/libvirt-guests.service; enabled;
vendor preset: disabled)
Active: active (exited) since Mon 2015-08-10 20:27:57 EDT; 6s ago
Docs: man:libvirtd(8)
http://libvirt.org
Process: 751 ExecStart=/usr/lib/libvirt/libvirt-guests.sh start
(code=exited, status=0/SUCCESS)
Main PID: 751 (code=exited, status=0/SUCCESS)
=====
$ cat /etc/conf.d/libvirt-guests
BYPASS_CACHE=0
CONNECT_RETRIES=10
ON_BOOT=start
ON_SHUTDOWN=shutdown
PARALLEL_SHUTDOWN=4
RETRIES_SLEEP=1
SHUTDOWN_TIMEOUT=300
START_DELAY=0
URIS=default
=====
A virsh # shutdown <vmname> or an in-guest systemctl poweroff takes a few
seconds (3-5?)
=====
{{{ libvirt-guests.service is: }}}
[Unit]
Description=Suspend Active Libvirt Guests
After=network.target libvirtd.service time-sync.target
Documentation=man:libvirtd(8)
Documentation=http://libvirt.org
[Service]
EnvironmentFile=-/etc/conf.d/libvirt-guests
# Hack just call traditional service until we factor
# out the code
ExecStart=/usr/lib/libvirt/libvirt-guests.sh start
ExecStop=/usr/lib/libvirt/libvirt-guests.sh stop
Type=oneshot
RemainAfterExit=yes
StandardOutput=journal+console
[Install]
WantedBy=multi-user.target
=====
Strangely, manually running "/usr/lib/libvirt/libvirt-guests.sh stop" lets
me see a graceful shutdown on the VM console and shows (executing in a few
seconds):
Running guests on default URI: <vmname>
Shutting down guests on default URI...
Starting shutdown on guest: <vmname>
Waiting for 1 guests to shut down, 300 seconds left
Shutdown of guest <vmname> complete.
=====
[libvirt-users] managedsave/start causes IRQ and task blocked for more than 120 seconds errors
by james harvey
If I manually delete the Q35 USB Controllers, I can use
managedsave/start, but I start getting tty errors that don't happen
before the managedsave/start.
Using libvirt 1.2.18 (-1 Arch) and QEMU git-master (2.4.0.r40384.2d69736).
Installed using Q35 chipset.
I'm running QEMU git, which allows SCSI controller migration, so I can
attempt doing this.
I started my guest, and waited for 15 minutes. No post-booting
tty1/dmesg messages.
After a managedsave/start, I start getting tty1/dmesg errors. I can
also no longer ping the guest system.
=====
$ dmesg|grep "IRQ 21"
[ 1.141040] ACPI: PCI Interrupt Link [GSIF] enabled at IRQ 21
$ ls -lA /proc/irq/21
total 0
-r--r--r-- 1 root root 0 Aug 10 19:42 affinity_hint
-r--r--r-- 1 root root 0 Aug 10 19:42 node
dr-xr-xr-x 2 root root 0 Aug 10 19:42 qxl
-rw-r--r-- 1 root root 0 Aug 10 19:42 smp_affinity
-rw-r--r-- 1 root root 0 Aug 10 19:42 smp_affinity_list
-r--r--r-- 1 root root 0 Aug 10 19:42 spurious
dr-xr-xr-x 2 root root 0 Aug 10 19:42 virtio2
$ ls -lA /proc/irq/21/qxl
total 0
$ ls -lA /proc/irq/21/virtio2
total 0
=====
{{{ on host, everything else is on guest }}}
virsh # managedsave <vmname>
Domain servo state saved by libvirt
virsh # start <vmname>
Domain <vmname> started
=====
[ 1117.083236] irq 21: nobody cared (try booting with the "irqpoll" option)
[ 1117.083236] handlers:
[ 1117.083236] [<ffffffffa00cfc60>] qxl_irq_handler [qxl]
[ 1117.083236] [<ffffffffa00f2530>] vp_interrupt [virtio_pci]
[ 1117.083236] Disabling IRQ #21
=====
{{{ bit later, doing nothing, just waiting }}}
[ 1440.223239] INFO: task vballoon:147 blocked for more than 120 seconds.
[ 1440.223409] Not tainted 4.1.4-1-ARCH #1
[ 1440.223556] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 1440.223805] INFO: task btrfs-transacti:242 blocked for more than 120 seconds.
[ 1440.223947] Not tainted 4.1.4-1-ARCH #1
[ 1440.224069] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
[ 1440.224422] INFO: task systemd-journal:280 blocked for more than 120 seconds.
[ 1440.224575] Not tainted 4.1.4-1-ARCH #1
[ 1440.224710] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.
{{{ these message lines show up very slowly, as if the VM console
video is running at 1% speed }}}
=====
{{{ bit later, doing nothing, just waiting }}}
[ 1560.223227] {{{ repeats the above 9 lines,
vballoon:147/btrfs-transacti:242/systemd-journal:280] blocked for more
than 120 seconds }}}
=====
{{{ bit later, doing nothing, just waiting }}}
[ 1680.223217] {{{ repeats the above 9 lines,
vballoon:147/btrfs-transacti:242/systemd-journal:280 blocked for more
than 120 seconds }}}
[ 1688.885605] systemd[1]: systemd-journald.service: Watchdog timeout
(limit 1min)!
=====
[libvirt-users] managedsave/start fails because of Q35 UHCI Host Controller
by james harvey
managedsave/start appears to work, but the VM is unusable due to non-stop
repeating uhci_hcd errors.
I can't remove the USB controller from QEMU, but I CAN remove it
through virsh edit. Doing so eliminates the USB controller in the
guest, preventing non-stop repeating uhci_hcd errors. (QEMU only lets
you remove the USB redirection entries, which didn't help.)
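To be specific about what gets deleted in virsh edit: judging by the dmesg
output below, the controllers in question are the ICH9 EHCI controller and
its UHCI companions, i.e. elements roughly like these (PCI addresses
omitted; the exact attributes in my XML may differ):
<controller type='usb' index='0' model='ich9-ehci1'/>
<controller type='usb' index='0' model='ich9-uhci1'/>
<controller type='usb' index='0' model='ich9-uhci2'/>
<controller type='usb' index='0' model='ich9-uhci3'/>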
Using libvirt 1.2.18 (-1 Arch) and QEMU git-master (2.4.0.r40384.2d69736).
Installed using Q35 chipset.
I'm running QEMU git, which allows SCSI controller migration, so I can
attempt doing this.
==========
virsh # managedsave <vmname>
Domain <vmname> state saved by libvirt
virsh # start <vmname>
Domain <vmname> started
{{{ Then, viewing the VM console through QEMU, its screen is extremely
laggy, taking about a minute to fill up the screen with repeating
errors within 0.003 kernel seconds: }}}
uhci_hcd 0000:02:02.2: host system error, PCI problems?
uhci_hcd 0000:02:02.2: host controller process error, something bad happened!
{{{ repeats }}}
==========
{{{ after a forced reboot }}}
$ dmesg|grep 0000:02:02
[ 0.480054] pci 0000:02:02.0: [8086:2934] type 00 class 0x0c0300
[ 0.498451] pci 0000:02:02.0: reg 0x20: [io 0xc0e0-0xc0ff]
[ 0.506965] pci 0000:02:02.1: [8086:2935] type 00 class 0x0c0300
[ 0.525101] pci 0000:02:02.1: reg 0x20: [io 0xc100-0xc11f]
[ 0.533634] pci 0000:02:02.2: [8086:2936] type 00 class 0x0c0300
[ 0.554992] pci 0000:02:02.2: reg 0x20: [io 0xc120-0xc13f]
[ 0.563642] pci 0000:02:02.7: [8086:293a] type 00 class 0x0c0320
[ 0.566679] pci 0000:02:02.7: reg 0x10: [mem 0xfc041000-0xfc041fff]
[ 1.720668] ehci-pci 0000:02:02.7: EHCI Host Controller
[ 1.720709] ehci-pci 0000:02:02.7: new USB bus registered, assigned
bus number 1
[ 1.721343] ehci-pci 0000:02:02.7: irq 22, io mem 0xfc041000
[ 1.730198] ehci-pci 0000:02:02.7: USB 2.0 started, EHCI 1.00
[ 1.740641] uhci_hcd 0000:02:02.0: UHCI Host Controller
[ 1.740667] uhci_hcd 0000:02:02.0: new USB bus registered, assigned
bus number 2
[ 1.740813] uhci_hcd 0000:02:02.0: detected 2 ports
[ 1.741026] uhci_hcd 0000:02:02.0: irq 23, io base 0x0000c0e0
[ 1.747517] uhci_hcd 0000:02:02.1: UHCI Host Controller
[ 1.747543] uhci_hcd 0000:02:02.1: new USB bus registered, assigned
bus number 3
[ 1.747664] uhci_hcd 0000:02:02.1: detected 2 ports
[ 1.747904] uhci_hcd 0000:02:02.1: irq 20, io base 0x0000c100
[ 1.753055] uhci_hcd 0000:02:02.2: UHCI Host Controller
[ 1.753074] uhci_hcd 0000:02:02.2: new USB bus registered, assigned
bus number 4
[ 1.753624] uhci_hcd 0000:02:02.2: detected 2 ports
[ 1.753724] uhci_hcd 0000:02:02.2: irq 21, io base 0x0000c120
==========
Re: [libvirt-users] machine='pc-q35-2.1' and sata controller
by Michael Darling
What's the status of the SATA controller migration bug? Are the
patches for it expected to be in 2.4?
Looks like they didn't make it into 2.3.
Since you last wrote, if the SATA controller patches aren't in yet, is
there any new way to avoid the SATA controller device, if you have no
SATA devices?
Thank you.
=== REPLYING TO ===
On 02/23/2015 02:26 PM, Thomas Stein wrote:
> Hello.
>
> I'm not able to disable the sata controller on a machine='pc-q35-2.1' type VM.
> Whenever i delete:
>
> <controller type='sata' index='0'>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f'
> function='0x2'/>
> </controller>
>
> it gets added again when i close the editor.
>
> The reason i would like to delete the sata controller is this error while
> trying to migrate the machine:
>
> 2015-02-23 19:04:11.181+0000: 1972: error : qemuMonitorJSONCheckError:381 :
> internal error: unable to execute QEMU command 'migrate': State blocked by
> non-migratable device '0000:00:1f.2/ich9_ahci'
>
> Someone has an idea to solve this?
>
Yep. You have described the situation perfectly. Here are my notes from
libvirt commit c27b0bb171d9bdac10a93492a2a99eaa22746694, which fixed the
handling of "default" devices in the Q35 machinetype:
sata - a q35 machine always has a sata controller implicitly
added at slot 0x1F, function 2. There is no way to avoid this
controller, so we always add it. Note that the xml2xml tests
for the pcie-root and q35 cases were changed to use
DO_TEST_DIFFERENT() so that we can check for the sata
controller being automatically added. This is especially
important because we can't check for it in the xml2argv output
(it has no effect on that output since it's an implicit device).
So basically when you specify a q35 machinetype, you get a SATA
controller at 00:1f.2 even though you have added no commandline args to
ask for it. And unlike the default network device (which libvirt can
eliminate by adding "-net none" to the qemu commandline), there is no
way to avoid this device.
And it is also true that any machine with a SATA controller can't be
migrated because of problems with the driver. I just talked to the
person responsible for fixing these bugs in qemu, and he said that the
patches will go upstream "soon", and that he hopes they will be in qemu
2.3. In the meantime, the only way to avoid this problem is to switch to
an i440fx-based machinetype (this will require removing all the PCI
controllers from your config as well as removing all <address type='pci'
.../> elements within all devices in the domain; libvirt will then
reassign PCI addresses in the guest appropriate to the new machine type).
IMPORTANT: If the guest is running MS Windows, this may require a new
"activation"; best to keep a copy of the old config and an image of the
disk just in case there are problems!
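For illustration, the machine-type part of that switch is a one-line change
in the <os> element of the domain XML (a sketch only - the exact i440fx
version string depends on the QEMU build):
  <type arch='x86_64' machine='pc-q35-2.1'>hvm</type>
becomes, for example:
  <type arch='x86_64' machine='pc-i440fx-2.1'>hvm</type>
followed by deleting the PCI controllers and the per-device <address
type='pci' .../> elements as described above, so libvirt can regenerate
the addresses.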