[libvirt] [PATCH v2] docs: mention bhyve SATA address changes in news.xml
by Roman Bogorodskiy
---
docs/news.xml | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/docs/news.xml b/docs/news.xml
index f408293a1..7990cc6d4 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -56,6 +56,25 @@
a fabric name has been removed by making it optional.
</description>
</change>
+ <change>
+ <summary>
+ bhyve: change address allocation schema for SATA disks
+ </summary>
+ <description>
+ Previously, the bhyve driver assigned PCI addresses to SATA disks
+ directly rather than assigning them to a controller and
+ using SATA addresses for disks. It was implemented this way
+ because bhyve has no notion of an explicit SATA controller.
+ However, as this doesn't match libvirt's understanding of
+ disk addresses, it was changed for the bhyve driver
+ to follow the common schema and have PCI addresses
+ for SATA controllers and SATA addresses for disks. If you're
+ having issues because of this, it's recommended to edit
+ the domain's XML and remove <address type='pci'>
+ from the <disk> elements with <target bus='sata'/>
+ and let libvirt regenerate it properly.
+ </description>
+ </change>
</section>
</release>
<release version="v3.0.0" date="2017-01-17">
--
2.11.0
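For illustration, here's the kind of edit the description above recommends (a hypothetical domain fragment, not taken from the patch; the file path and slot number are made up):

```xml
<!-- before: a PCI address was assigned directly to the SATA disk -->
<disk type='file' device='disk'>
  <source file='/vms/disk0.img'/>
  <target dev='sda' bus='sata'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</disk>

<!-- after: drop the <address type='pci'> line; on the next define/start
     libvirt allocates a SATA controller with a PCI address and gives the
     disk a SATA (drive-type) address instead -->
<disk type='file' device='disk'>
  <source file='/vms/disk0.img'/>
  <target dev='sda' bus='sata'/>
</disk>
```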
7 years, 9 months
[libvirt] [PATCH] bhyve: fix virtio disk addresses
by Roman Bogorodskiy
As usually happens, I fixed one thing and broke another:
in 803966c76 address allocation was fixed for SATA disks, but
broken for virtio disks, because disk address
assignment was dropped completely. It's no longer needed for SATA disks,
but is still needed for the virtio ones.
Bring that back and add a couple of tests to make sure it won't
happen again.
---
src/bhyve/bhyve_device.c | 16 +++++++++
.../bhyvexml2argv-addr-multiple-virtio-disks.args | 11 ++++++
...bhyvexml2argv-addr-multiple-virtio-disks.ldargs | 3 ++
.../bhyvexml2argv-addr-multiple-virtio-disks.xml | 32 +++++++++++++++++
.../bhyvexml2argv-addr-single-virtio-disk.args | 9 +++++
.../bhyvexml2argv-addr-single-virtio-disk.ldargs | 3 ++
.../bhyvexml2argv-addr-single-virtio-disk.xml | 22 ++++++++++++
tests/bhyvexml2argvtest.c | 2 ++
.../bhyvexml2xmlout-addr-multiple-virtio-disks.xml | 42 ++++++++++++++++++++++
.../bhyvexml2xmlout-addr-single-virtio-disk.xml | 30 ++++++++++++++++
tests/bhyvexml2xmltest.c | 2 ++
11 files changed, 172 insertions(+)
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.args
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.xml
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.args
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.xml
create mode 100644 tests/bhyvexml2xmloutdata/bhyvexml2xmlout-addr-multiple-virtio-disks.xml
create mode 100644 tests/bhyvexml2xmloutdata/bhyvexml2xmlout-addr-single-virtio-disk.xml
diff --git a/src/bhyve/bhyve_device.c b/src/bhyve/bhyve_device.c
index 29528230f..55ce631ec 100644
--- a/src/bhyve/bhyve_device.c
+++ b/src/bhyve/bhyve_device.c
@@ -129,6 +129,22 @@ bhyveAssignDevicePCISlots(virDomainDefPtr def,
goto error;
}
+ for (i = 0; i < def->ndisks; i++) {
+ /* We only handle virtio disk addresses as SATA disks are
+ * attached to a controller and don't have their own PCI
+ * addresses */
+ if (def->disks[i]->bus != VIR_DOMAIN_DISK_BUS_VIRTIO)
+ continue;
+
+ if (def->disks[i]->info.type == VIR_DOMAIN_DEVICE_ADDRESS_TYPE_PCI &&
+ !virPCIDeviceAddressIsEmpty(&def->disks[i]->info.addr.pci))
+ continue;
+ if (virDomainPCIAddressReserveNextAddr(addrs, &def->disks[i]->info,
+ VIR_PCI_CONNECT_TYPE_PCI_DEVICE,
+ -1) < 0)
+ goto error;
+ }
+
return 0;
error:
diff --git a/tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.args b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.args
new file mode 100644
index 000000000..8cc166894
--- /dev/null
+++ b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.args
@@ -0,0 +1,11 @@
+/usr/sbin/bhyve \
+-c 1 \
+-m 214 \
+-u \
+-H \
+-P \
+-s 0:0,hostbridge \
+-s 3:0,virtio-net,faketapdev,mac=52:54:00:bc:85:fe \
+-s 2:0,virtio-blk,/tmp/freebsd.img \
+-s 4:0,virtio-blk,/tmp/test.img \
+-s 5:0,virtio-blk,/tmp/test2.img bhyve
diff --git a/tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.ldargs b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.ldargs
new file mode 100644
index 000000000..32538b558
--- /dev/null
+++ b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.ldargs
@@ -0,0 +1,3 @@
+/usr/sbin/bhyveload \
+-m 214 \
+-d /tmp/freebsd.img bhyve
diff --git a/tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.xml b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.xml
new file mode 100644
index 000000000..9bcd0a629
--- /dev/null
+++ b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-multiple-virtio-disks.xml
@@ -0,0 +1,32 @@
+<domain type='bhyve'>
+ <name>bhyve</name>
+ <uuid>df3be7e7-a104-11e3-aeb0-50e5492bd3dc</uuid>
+ <memory>219136</memory>
+ <vcpu>1</vcpu>
+ <os>
+ <type>hvm</type>
+ </os>
+ <devices>
+ <disk type='file' device='disk'>
+ <driver name='file' type='raw'/>
+ <source file='/tmp/freebsd.img'/>
+ <target dev='vda' bus='virtio'/>
+ </disk>
+ <disk type='file' device='disk'>
+ <driver name='file' type='raw'/>
+ <source file='/tmp/test.img'/>
+ <target dev='vdb' bus='virtio'/>
+ </disk>
+ <disk type='file' device='disk'>
+ <driver name='file' type='raw'/>
+ <source file='/tmp/test2.img'/>
+ <target dev='vdc' bus='virtio'/>
+ </disk>
+ <interface type='bridge'>
+ <mac address='52:54:00:bc:85:fe'/>
+ <model type='virtio'/>
+ <source bridge="virbr0"/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
+ </interface>
+ </devices>
+</domain>
diff --git a/tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.args b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.args
new file mode 100644
index 000000000..4dcc40404
--- /dev/null
+++ b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.args
@@ -0,0 +1,9 @@
+/usr/sbin/bhyve \
+-c 1 \
+-m 214 \
+-u \
+-H \
+-P \
+-s 0:0,hostbridge \
+-s 3:0,virtio-net,faketapdev,mac=52:54:00:bc:85:fe \
+-s 2:0,virtio-blk,/tmp/freebsd.img bhyve
diff --git a/tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.ldargs b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.ldargs
new file mode 100644
index 000000000..32538b558
--- /dev/null
+++ b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.ldargs
@@ -0,0 +1,3 @@
+/usr/sbin/bhyveload \
+-m 214 \
+-d /tmp/freebsd.img bhyve
diff --git a/tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.xml b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.xml
new file mode 100644
index 000000000..6be9ae134
--- /dev/null
+++ b/tests/bhyvexml2argvdata/bhyvexml2argv-addr-single-virtio-disk.xml
@@ -0,0 +1,22 @@
+<domain type='bhyve'>
+ <name>bhyve</name>
+ <uuid>df3be7e7-a104-11e3-aeb0-50e5492bd3dc</uuid>
+ <memory>219136</memory>
+ <vcpu>1</vcpu>
+ <os>
+ <type>hvm</type>
+ </os>
+ <devices>
+ <disk type='file'>
+ <driver name='file' type='raw'/>
+ <source file='/tmp/freebsd.img'/>
+ <target dev='vda' bus='virtio'/>
+ </disk>
+ <interface type='bridge'>
+ <mac address='52:54:00:bc:85:fe'/>
+ <model type='virtio'/>
+ <source bridge="virbr0"/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
+ </interface>
+ </devices>
+</domain>
diff --git a/tests/bhyvexml2argvtest.c b/tests/bhyvexml2argvtest.c
index e80705780..c36b55a0a 100644
--- a/tests/bhyvexml2argvtest.c
+++ b/tests/bhyvexml2argvtest.c
@@ -190,6 +190,8 @@ mymain(void)
DO_TEST("addr-single-sata-disk");
DO_TEST("addr-multiple-sata-disks");
DO_TEST("addr-more-than-32-sata-disks");
+ DO_TEST("addr-single-virtio-disk");
+ DO_TEST("addr-multiple-virtio-disks");
/* The same without 32 devs per controller support */
driver.bhyvecaps ^= BHYVE_CAP_AHCI32SLOT;
diff --git a/tests/bhyvexml2xmloutdata/bhyvexml2xmlout-addr-multiple-virtio-disks.xml b/tests/bhyvexml2xmloutdata/bhyvexml2xmlout-addr-multiple-virtio-disks.xml
new file mode 100644
index 000000000..542bff121
--- /dev/null
+++ b/tests/bhyvexml2xmloutdata/bhyvexml2xmlout-addr-multiple-virtio-disks.xml
@@ -0,0 +1,42 @@
+<domain type='bhyve'>
+ <name>bhyve</name>
+ <uuid>df3be7e7-a104-11e3-aeb0-50e5492bd3dc</uuid>
+ <memory unit='KiB'>219136</memory>
+ <currentMemory unit='KiB'>219136</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='x86_64'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <disk type='file' device='disk'>
+ <driver name='file' type='raw'/>
+ <source file='/tmp/freebsd.img'/>
+ <target dev='vda' bus='virtio'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
+ </disk>
+ <disk type='file' device='disk'>
+ <driver name='file' type='raw'/>
+ <source file='/tmp/test.img'/>
+ <target dev='vdb' bus='virtio'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
+ </disk>
+ <disk type='file' device='disk'>
+ <driver name='file' type='raw'/>
+ <source file='/tmp/test2.img'/>
+ <target dev='vdc' bus='virtio'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
+ </disk>
+ <controller type='pci' index='0' model='pci-root'/>
+ <interface type='bridge'>
+ <mac address='52:54:00:bc:85:fe'/>
+ <source bridge='virbr0'/>
+ <model type='virtio'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
+ </interface>
+ </devices>
+</domain>
diff --git a/tests/bhyvexml2xmloutdata/bhyvexml2xmlout-addr-single-virtio-disk.xml b/tests/bhyvexml2xmloutdata/bhyvexml2xmlout-addr-single-virtio-disk.xml
new file mode 100644
index 000000000..d7abb5abc
--- /dev/null
+++ b/tests/bhyvexml2xmloutdata/bhyvexml2xmlout-addr-single-virtio-disk.xml
@@ -0,0 +1,30 @@
+<domain type='bhyve'>
+ <name>bhyve</name>
+ <uuid>df3be7e7-a104-11e3-aeb0-50e5492bd3dc</uuid>
+ <memory unit='KiB'>219136</memory>
+ <currentMemory unit='KiB'>219136</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='x86_64'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <disk type='file' device='disk'>
+ <driver name='file' type='raw'/>
+ <source file='/tmp/freebsd.img'/>
+ <target dev='vda' bus='virtio'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
+ </disk>
+ <controller type='pci' index='0' model='pci-root'/>
+ <interface type='bridge'>
+ <mac address='52:54:00:bc:85:fe'/>
+ <source bridge='virbr0'/>
+ <model type='virtio'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
+ </interface>
+ </devices>
+</domain>
diff --git a/tests/bhyvexml2xmltest.c b/tests/bhyvexml2xmltest.c
index 004afda14..ba9af2996 100644
--- a/tests/bhyvexml2xmltest.c
+++ b/tests/bhyvexml2xmltest.c
@@ -109,6 +109,8 @@ mymain(void)
DO_TEST_DIFFERENT("addr-single-sata-disk");
DO_TEST_DIFFERENT("addr-multiple-sata-disks");
DO_TEST_DIFFERENT("addr-more-than-32-sata-disks");
+ DO_TEST_DIFFERENT("addr-single-virtio-disk");
+ DO_TEST_DIFFERENT("addr-multiple-virtio-disks");
/* The same without 32 devs per controller support */
driver.bhyvecaps ^= BHYVE_CAP_AHCI32SLOT;
--
2.11.0
[libvirt] [PATCH 00/10] Another set of qemu namespace fixes
by Michal Privoznik
The major problem was with symlinks. Imagine the following chain of symlinks:
/dev/my_awesome_disk -> /home/user/blaah -> /dev/disk/by-uuid/$uuid -> /dev/sda
We really need to create all of those /dev/* symlinks and the /dev/sda device. Also,
some other (less critical) bugs are fixed.
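The hop-by-hop resolution the series needs can be sketched in Python (a hedged illustration, not the C code of the series — `virFileReadLink` is the actual helper being introduced; the paths here are stand-ins built in a temporary directory rather than the real /dev and /home):

```python
import os
import tempfile

def resolve_chain(path):
    """Walk a symlink chain one hop at a time (repeated readlink),
    returning every node visited, final target last. Every node in the
    returned list must exist inside the namespace for the guest to work."""
    chain = [path]
    seen = set()
    while os.path.islink(path):
        if path in seen:                  # guard against symlink loops
            raise RuntimeError("symlink loop at %s" % path)
        seen.add(path)
        target = os.readlink(path)
        if not os.path.isabs(target):     # relative links resolve against their directory
            target = os.path.join(os.path.dirname(path), target)
        path = target
        chain.append(path)
    return chain

# Build a miniature version of the chain from the cover letter.
tmp = tempfile.mkdtemp()
dev = os.path.join(tmp, "sda")
open(dev, "w").close()                    # stands in for the real device node
link2 = os.path.join(tmp, "by-uuid-1234")
link1 = os.path.join(tmp, "my_awesome_disk")
os.symlink(dev, link2)
os.symlink(link2, link1)

print(resolve_chain(link1))               # three nodes: both links plus the device
```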
Michal Privoznik (10):
virProcessRunInMountNamespace: Report errors from child
util: Introduce virFileReadLink
qemuDomainPrepareDisk: Fix ordering
qemuSecurityRestoreAllLabel: Don't use transactions
qemu_security: Use more transactions
qemuDomain{Attach,Detach}Device NS helpers: Don't relabel devices
qemuDomainCreateDevice: Properly deal with symlinks
qemuDomainCreateDevice: Don't loop endlessly
qemuDomainAttachDeviceMknod: Deal with symlinks
qemuDomainAttachDeviceMknod: Don't loop endlessly
src/libvirt_private.syms | 1 +
src/qemu/qemu_domain.c | 438 +++++++++++++++++++++++++++--------------------
src/qemu/qemu_hotplug.c | 20 +--
src/qemu/qemu_security.c | 137 +++++++++------
src/util/virfile.c | 12 ++
src/util/virfile.h | 2 +
src/util/virprocess.c | 8 +-
7 files changed, 374 insertions(+), 244 deletions(-)
--
2.11.0
7 years, 9 months
[libvirt] [PATCH] qemu: Forbid <memoryBacking><locked> without <memtune><hard_limit>
by Andrea Bolognani
In order for memory locking to work, the hard limit on memory
locking (and usage) has to be set appropriately by the user.
The documentation mentions the requirement already: with this
patch, it's going to be enforced by runtime checks as well.
Note that this will make existing guests that don't satisfy
the requirement disappear; that said, such guests have never
been able to start in the first place.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1316774
---
src/qemu/qemu_domain.c | 20 ++++++++++++++++++++
tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml | 3 +++
....xml => qemuxml2argv-mlock-without-hardlimit.xml} | 0
tests/qemuxml2argvtest.c | 1 +
4 files changed, 24 insertions(+)
copy tests/qemuxml2argvdata/{qemuxml2argv-mlock-on.xml => qemuxml2argv-mlock-without-hardlimit.xml} (100%)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index c6ce090..bb29cfe 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -2629,6 +2629,23 @@ qemuDomainDefVcpusPostParse(virDomainDefPtr def)
static int
+qemuDomainDefMemoryLockingPostParse(virDomainDefPtr def)
+{
+ /* Memory locking can only work properly if the memory locking limit
+ * for the QEMU process has been raised appropriately: the default one
+ * is extremely low, so there's no way the guest will fit in there */
+ if (def->mem.locked && !virMemoryLimitIsSet(def->mem.hard_limit)) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("Setting <memoryBacking><locked> requires "
+ "<memtune><hard_limit> to be set as well"));
+ return -1;
+ }
+
+ return 0;
+}
+
+
+static int
qemuDomainDefPostParse(virDomainDefPtr def,
virCapsPtr caps,
unsigned int parseFlags,
@@ -2692,6 +2709,9 @@ qemuDomainDefPostParse(virDomainDefPtr def,
if (qemuDomainDefVcpusPostParse(def) < 0)
goto cleanup;
+ if (qemuDomainDefMemoryLockingPostParse(def) < 0)
+ goto cleanup;
+
ret = 0;
cleanup:
virObjectUnref(qemuCaps);
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml b/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml
index 20a5eaa..2046663 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml
+++ b/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml
@@ -3,6 +3,9 @@
<uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
<memory unit='KiB'>219136</memory>
<currentMemory unit='KiB'>219136</currentMemory>
+ <memtune>
+ <hard_limit unit='KiB'>256000</hard_limit>
+ </memtune>
<memoryBacking>
<locked/>
</memoryBacking>
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml b/tests/qemuxml2argvdata/qemuxml2argv-mlock-without-hardlimit.xml
similarity index 100%
copy from tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml
copy to tests/qemuxml2argvdata/qemuxml2argv-mlock-without-hardlimit.xml
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 3532cb5..9b2fec5 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -2129,6 +2129,7 @@ mymain(void)
DO_TEST_FAILURE("mlock-on", NONE);
DO_TEST("mlock-off", QEMU_CAPS_REALTIME_MLOCK);
DO_TEST("mlock-unsupported", NONE);
+ DO_TEST_PARSE_ERROR("mlock-without-hardlimit", NONE);
DO_TEST_PARSE_ERROR("pci-bridge-negative-index-invalid",
QEMU_CAPS_DEVICE_PCI_BRIDGE);
--
2.7.4
[libvirt] [PATCH v2] qemu: Forbid <memoryBacking><locked> without <memtune><hard_limit>
by Andrea Bolognani
In order for memory locking to work, the hard limit on memory
locking (and usage) has to be set appropriately by the user.
The documentation mentions the requirement already: with this
patch, it's going to be enforced by runtime checks as well,
by forbidding a non-compliant guest from starting at all.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1316774
---
Changes from [v1]
* Address review feedback:
- check in BuildCommandLine rather than in PostParse,
so that non-compliant guests will merely fail to
start rather than disappear completely.
[v1] https://www.redhat.com/archives/libvir-list/2017-February/msg00180.html
src/qemu/qemu_command.c | 9 +++++++++
tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml | 3 +++
2 files changed, 12 insertions(+)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 1396661..ca3bcdc 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -7340,6 +7340,15 @@ qemuBuildMemCommandLine(virCommandPtr cmd,
qemuBuildMemPathStr(cfg, def, qemuCaps, cmd) < 0)
return -1;
+ /* Memory locking can only work properly if the memory locking limit
+ * for the QEMU process has been raised appropriately: the default one
+ * is extremely low, so there's no way the guest will fit in there */
+ if (def->mem.locked && !virMemoryLimitIsSet(def->mem.hard_limit)) {
+ virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
+ _("Setting <memoryBacking><locked> requires "
+ "<memtune><hard_limit> to be set as well"));
+ return -1;
+ }
if (def->mem.locked && !virQEMUCapsGet(qemuCaps, QEMU_CAPS_REALTIME_MLOCK)) {
virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
_("memory locking not supported by QEMU binary"));
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml b/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml
index 20a5eaa..2046663 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml
+++ b/tests/qemuxml2argvdata/qemuxml2argv-mlock-on.xml
@@ -3,6 +3,9 @@
<uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
<memory unit='KiB'>219136</memory>
<currentMemory unit='KiB'>219136</currentMemory>
+ <memtune>
+ <hard_limit unit='KiB'>256000</hard_limit>
+ </memtune>
<memoryBacking>
<locked/>
</memoryBacking>
--
2.7.4
[libvirt] [PATCH] docs: mention bhyve SATA address changes in news.xml
by Roman Bogorodskiy
---
docs/news.xml | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/docs/news.xml b/docs/news.xml
index f408293a1..9e3c4ec3d 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -56,6 +56,22 @@
a fabric name has been removed by making it optional.
</description>
</change>
+ <change>
+ <summary>
+ bhyve: change address allocation schema for SATA disks
+ </summary>
+ <description>
+ Previously, the bhyve driver assigned PCI addresses to SATA disks directly
+ rather than assigning them to a controller and using SATA addresses for disks.
+ It was implemented this way because bhyve has no notion of an explicit SATA
+ controller. However, as this doesn't align with the internal libvirt model,
+ the bhyve driver was changed to follow the common schema and
+ have PCI addresses for SATA controllers and SATA addresses for disks. If you're having
+ issues because of this, it's recommended to edit the domain's XML and remove
+ <address type='pci'> from the <disk> elements with
+ <target bus='sata'/> and let libvirt regenerate it properly.
+ </description>
+ </change>
</section>
</release>
<release version="v3.0.0" date="2017-01-17">
--
2.11.0
Re: [libvirt] Issue with libvirtd
by Michal Privoznik
On 01/10/2017 07:48 AM, Faysal Ali wrote:
> Hi Michal,
[It is usually good idea to keep the list CCed - it may help others
finding a solution to their problems]
>
> Well I have created my little python/libvirt app to manage my virtual
> machines. I am using a python socket to check libvirt port availability, and
> the error *End of file while reading data: Input/output error* only happens
> whenever that python socket script is triggered.
>
> Here is the script of python socket
>
> import libvirt, socket, sys
>
> hostname='kvm09'
> port = 16509
>
> try:
>     socket_host = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>     socket_host.settimeout(1)
>     socket_host.connect((hostname, port))
>     socket_host.close()
>     print 'true'
> except Exception as err:
>     print err
Ah, I hadn't expected this. Of course you are seeing the error
message. Libvirt has its own protocol on top of TCP, and since you
are connecting and dying immediately - without sending any valid libvirt
packet - the daemon logs an error. This is perfectly expected.
Also, this is *not* how you connect to libvirt. You want to open a
libvirt connection:
conn = libvirt.open(uri)
dom = conn.lookupByName(name)
...
Michal
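A fuller sketch of that approach (hedged: the URI below uses libvirt's built-in test driver so it runs without a daemon — for the setup in this thread it would be something like "qemu+tcp://kvm09/system" — and the function degrades gracefully when the libvirt Python bindings aren't installed):

```python
def check_libvirt(uri):
    """Return True if a libvirt connection to `uri` can be opened,
    False if it cannot, or None when the bindings are unavailable.
    Unlike a bare TCP probe, this speaks the libvirt RPC protocol,
    so the daemon doesn't log spurious end-of-file errors."""
    try:
        import libvirt            # from the libvirt-python package
    except ImportError:
        return None
    try:
        conn = libvirt.open(uri)  # full handshake, not just a TCP connect
        conn.close()
        return True
    except libvirt.libvirtError:
        return False

# The mock "test" driver needs no daemon, so it is a safe smoke test.
print(check_libvirt("test:///default"))
```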
[libvirt] [PATCH] char: drop data written to a disconnected pty
by Ed Swierk
When a serial port writes data to a pty that's disconnected, drop the
data and return the length dropped. This avoids triggering pointless
retries in callers like the 16550A serial_xmit(), and causes
qemu_chr_fe_write() to write all data to the log file, rather than
logging only while a pty client like virsh console happens to be
connected.
Signed-off-by: Ed Swierk <eswierk@skyportsystems.com>
---
qemu-char.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/qemu-char.c b/qemu-char.c
index 676944a..ccb6923 100644
--- a/qemu-char.c
+++ b/qemu-char.c
@@ -1528,7 +1528,7 @@ static int pty_chr_write(CharDriverState *chr, const uint8_t *buf, int len)
/* guest sends data, check for (re-)connect */
pty_chr_update_read_handler_locked(chr);
if (!s->connected) {
- return 0;
+ return len;
}
}
return io_channel_send(s->ioc, buf, len);
--
1.9.1
[libvirt] [PATCH v2 0/3] add per cpu stats to all domain stats
by Nikolay Shirokovskiy
Info provided in virDomainGetCPUStats is currently missing from all
domain stats. This series removes that discrepancy.
You may need the not-yet-merged patch
'qemu: fix crash on getting block stats for empty cdrom' to test this
series.
diff from v1:
================
1. reuse code (patches 1, 2)
2. move per cpu stats to a distinct stats group
3. add documentation
Nikolay Shirokovskiy (3):
cgroup: extract interface part from virCgroupGetPercpuStats
cgroup: reuse virCgroupGetPercpuTime in virCgroupGetPercpuVcpuSum
add per cpu stats to all domain stats
docs/news.xml | 9 ++
include/libvirt/libvirt-domain.h | 1 +
src/libvirt-domain.c | 7 ++
src/libvirt_private.syms | 2 +
src/qemu/qemu_driver.c | 64 ++++++++++++++
src/util/vircgroup.c | 178 +++++++++++++++++++++++++++------------
src/util/vircgroup.h | 17 ++++
tools/virsh-domain-monitor.c | 7 ++
tools/virsh.pod | 11 ++-
9 files changed, 238 insertions(+), 58 deletions(-)
--
1.8.3.1
[libvirt] OSX Homebrew support note ;)
by Justin Clift
Hi all,
Compiling Libvirt directly from latest git master head is now supported on
OSX Homebrew. It's as simple as:
$ brew install --HEAD libvirt
...
$ virsh -v
3.0.0
This should make development of Libvirt and related virtualisation things
much easier on OSX. :)
The credit for this goes to Roman Bogorodskiy and Homebrew developer
"ilovezfs" (CC-d), for adding a functional rpcgen to Homebrew so we can
generate the needed RPC bindings during the build (previously missing).
Thanks heaps! :)
Regards and best wishes,
Justin Clift