[libvirt] [PATCH] lxc: Fix coverity
by Martin Kletzander
Commit 399394ab74ebf3f6e60771044fda0ee69a2acf67 removed some coverity
comments which skipped the dead code, so add them back.
Signed-off-by: Martin Kletzander <mkletzan(a)redhat.com>
---
src/lxc/lxc_driver.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c
index 4f35f93..e319234 100644
--- a/src/lxc/lxc_driver.c
+++ b/src/lxc/lxc_driver.c
@@ -2621,6 +2621,7 @@ lxcDomainGetBlkioParameters(virDomainPtr dom,
goto cleanup;
break;
+ /* coverity[dead_error_begin] */
default:
break;
/* should not hit here */
@@ -2812,6 +2813,7 @@ lxcDomainGetBlkioParameters(virDomainPtr dom,
}
break;
+ /* coverity[dead_error_begin] */
default:
break;
/* should not hit here */
--
1.8.5.3
[libvirt] [libvirt-java] [PATCH] Depend on JNA versions 3.3 to 4.0
by Claudio Bley
Specify a version range for the net.java.dev.jna / jna artefact
in order to accept any version we tested the libvirt Java bindings
against.
---
It's been some time since we discussed this[1], but here we go...
https://www.redhat.com/archives/libvir-list/2013-September/msg00929.html
pom.xml.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pom.xml.in b/pom.xml.in
index 25b2ae7..4e7a7c1 100644
--- a/pom.xml.in
+++ b/pom.xml.in
@@ -27,7 +27,7 @@
<groupId>net.java.dev.jna</groupId>
<artifactId>jna</artifactId>
<scope>provided</scope>
- <version>3.5.0</version>
+ <version>[3.3,4.0]</version>
</dependency>
</dependencies>
--
1.8.5.2.msysgit.0
[libvirt] [PATCH v2 0/8] Add throttle blkio cgroup support for libvirt
by Gao feng
Right now, libvirt only supports the cfq-based blkio cgroup;
this means that if a block device doesn't use the cfq scheduler, the
blkio cgroup has no effect.
This patchset adds throttle blkio cgroup support to libvirt,
introduces four elements for the domain configuration and extends the
virsh blkiotune command.
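To give an idea of the intended configuration, the new per-device
settings are meant to look roughly like the following (the element names
here simply mirror the per-disk <iotune> settings, see change 2 from v1
below, so treat them as illustrative rather than final):
<blkiotune>
  <device>
    <!-- element names follow disk <iotune>; illustrative only -->
    <path>/dev/sda</path>
    <read_bytes_sec>10485760</read_bytes_sec>
    <write_bytes_sec>10485760</write_bytes_sec>
    <read_iops_sec>100</read_iops_sec>
    <write_iops_sec>100</write_iops_sec>
  </device>
</blkiotune>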
This patchset is a new version of Guan Qiang's patchset
https://www.redhat.com/archives/libvir-list/2013-October/msg01066.html
Changes from v1:
1, rearrange the order of the patches
2, change the options/elements of the throttle blkio cgroup to be consistent
with disk iotune.
3, fix a compile error when cgroup is unavailable.
4, remove virCgroupSetBlkioDevice, splitting it into virCgroupSetBlkioDeviceBps
and virCgroupSetBlkioDeviceIops
Changes from Guan Qiang's patchset:
1, split into 8 patches, to make the logic clearer
2, change the type of read/write iops from unsigned long long to unsigned int;
trying to set read/write iops to a value bigger than the maximum of
unsigned int will fail.
3, fix some logic shortcomings.
Gao feng (8):
rename virDomainBlkioDeviceWeightParseXML to
virDomainBlkioDeviceParseXML
rename virBlkioDeviceWeightArrayClear to virBlkioDeviceArrayClear
rename virBlkioDeviceWeightPtr to virBlkioDevicePtr
domain: introduce xml elements for throttle blkio cgroup
blkio: Setting throttle blkio cgroup for domain
virsh: add setting throttle blkio cgroup option to blkiotune
qemu: allow to setup throttle blkio cgroup through virsh
lxc: allow to setup throttle blkio cgroup through virsh
docs/schemas/domaincommon.rng | 28 +-
include/libvirt/libvirt.h.in | 45 ++
src/conf/domain_conf.c | 113 +++-
src/conf/domain_conf.h | 16 +-
src/libvirt_private.syms | 6 +-
src/lxc/lxc_cgroup.c | 29 +-
src/lxc/lxc_driver.c | 649 ++++++++++++++++++++-
src/qemu/qemu_cgroup.c | 29 +-
src/qemu/qemu_driver.c | 443 ++++++++++++--
src/util/vircgroup.c | 224 ++++++-
src/util/vircgroup.h | 16 +
.../qemuxml2argv-blkiotune-device.xml | 8 +
tools/virsh-domain.c | 64 ++
tools/virsh.pod | 36 +-
14 files changed, 1583 insertions(+), 123 deletions(-)
--
1.8.3.1
[libvirt] [v10 0/6] Write separate module for hostdev passthrough
by Chunyan Liu
These patches implement a separate module for hostdev passthrough so that it
can be shared by different drivers and can maintain the global state of a host
device.
patch 1/6: extract hostdev passthrough function from qemu_hostdev.c and make it
reusable by multiple drivers.
patch 2/6: add a unit test for hostdev common library.
patch 3/6: switch qemu driver to use the common library instead of its own
hostdev passthrough APIs.
patch 4/6: switch lxc driver to use the common library instead of its own
hostdev passthrough APIs.
patch 5/6: add a hostdev pci backend type for xen usage (a rough XML sketch
follows this list).
patch 6/6: add pci passthrough to libxl driver.
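For the xen backend type added in patch 5/6, a rough sketch of the intended
domain XML follows; the 'xen' value for the <driver> name attribute is my
assumption of how the new backend would be selected, and the PCI address is
only an example:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <!-- the 'xen' backend name and the address below are assumptions/examples -->
  <driver name='xen'/>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x12' function='0x5'/>
  </source>
</hostdev>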
---
Changes
* change copyright to 2014
* use VIR_DEBUG instead of self-defined DPRINTF in virhostdevtest.c
* rebase to latest source code
Chunyan Liu (6):
add hostdev passthrough common library
add unit test to hostdev common library
change qemu driver to use hostdev common library
change lxc driver to use hostdev common library
add hostdev pci backend type for xen
add pci passthrough to libxl driver
.gnulib | 2 +-
docs/schemas/domaincommon.rng | 1 +
po/POTFILES.in | 3 +-
src/Makefile.am | 3 +-
src/conf/domain_conf.c | 3 +-
src/conf/domain_conf.h | 1 +
src/libvirt_private.syms | 21 +
src/libxl/libxl_conf.c | 63 +
src/libxl/libxl_conf.h | 4 +
src/libxl/libxl_domain.c | 9 +
src/libxl/libxl_driver.c | 448 +++++-
src/lxc/lxc_conf.h | 4 -
src/lxc/lxc_driver.c | 47 +-
src/lxc/lxc_hostdev.c | 413 -----
src/lxc/lxc_hostdev.h | 43 -
src/lxc/lxc_process.c | 24 +-
src/qemu/qemu_command.c | 4 +-
src/qemu/qemu_conf.h | 9 +-
src/qemu/qemu_domain.c | 22 +
src/qemu/qemu_driver.c | 81 +-
src/qemu/qemu_hostdev.c | 1454 -----------------
src/qemu/qemu_hostdev.h | 76 -
src/qemu/qemu_hotplug.c | 136 +-
src/qemu/qemu_process.c | 40 +-
src/util/virhostdev.c | 1703 ++++++++++++++++++++
src/util/virhostdev.h | 134 ++
src/util/virpci.c | 30 +-
src/util/virpci.h | 9 +-
src/util/virscsi.c | 28 +-
src/util/virscsi.h | 8 +-
src/util/virusb.c | 29 +-
src/util/virusb.h | 8 +-
tests/Makefile.am | 5 +
.../qemuxml2argv-hostdev-pci-address.xml | 1 +
.../qemuxml2argvdata/qemuxml2argv-net-hostdev.xml | 1 +
tests/qemuxml2argvdata/qemuxml2argv-pci-rom.xml | 2 +
tests/virhostdevtest.c | 473 ++++++
tests/virpcimock.c | 23 +-
38 files changed, 3152 insertions(+), 2213 deletions(-)
delete mode 100644 src/lxc/lxc_hostdev.c
delete mode 100644 src/lxc/lxc_hostdev.h
delete mode 100644 src/qemu/qemu_hostdev.c
delete mode 100644 src/qemu/qemu_hostdev.h
create mode 100644 src/util/virhostdev.c
create mode 100644 src/util/virhostdev.h
create mode 100644 tests/virhostdevtest.c
[libvirt] [PATCH] Add test for transient disk support in VMX files
by Wout Mertens
From: Wout Mertens <Wout.Mertens(a)gmail.com>
Adds test for transient disk translation in vmx files
---
tests/vmx2xmldata/vmx2xml-harddisk-transient.vmx | 6 +++++
tests/vmx2xmldata/vmx2xml-harddisk-transient.xml | 25 ++++++++++++++++++++++
tests/vmx2xmltest.c | 1 +
3 files changed, 32 insertions(+), 0 deletions(-)
create mode 100644 tests/vmx2xmldata/vmx2xml-harddisk-transient.vmx
create mode 100644 tests/vmx2xmldata/vmx2xml-harddisk-transient.xml
diff --git a/tests/vmx2xmldata/vmx2xml-harddisk-transient.vmx b/tests/vmx2xmldata/vmx2xml-harddisk-transient.vmx
new file mode 100644
index 0000000..68ef382
--- /dev/null
+++ b/tests/vmx2xmldata/vmx2xml-harddisk-transient.vmx
@@ -0,0 +1,6 @@
+config.version = "8"
+virtualHW.version = "4"
+ide0:0.present = "true"
+ide0:0.deviceType = "ata-hardDisk"
+ide0:0.fileName = "harddisk.vmdk"
+ide0:0.mode = "independent-nonpersistent"
diff --git a/tests/vmx2xmldata/vmx2xml-harddisk-transient.xml b/tests/vmx2xmldata/vmx2xml-harddisk-transient.xml
new file mode 100644
index 0000000..3786e2f
--- /dev/null
+++ b/tests/vmx2xmldata/vmx2xml-harddisk-transient.xml
@@ -0,0 +1,25 @@
+<domain type='vmware'>
+ <uuid>00000000-0000-0000-0000-000000000000</uuid>
+ <memory unit='KiB'>32768</memory>
+ <currentMemory unit='KiB'>32768</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='i686'>hvm</type>
+ </os>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <disk type='file' device='disk'>
+ <source file='[datastore] directory/harddisk.vmdk'/>
+ <target dev='hda' bus='ide'/>
+ <transient/>
+ <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+ </disk>
+ <controller type='ide' index='0'/>
+ <video>
+ <model type='vmvga' vram='4096'/>
+ </video>
+ </devices>
+</domain>
diff --git a/tests/vmx2xmltest.c b/tests/vmx2xmltest.c
index 13515f0..70178f7 100644
--- a/tests/vmx2xmltest.c
+++ b/tests/vmx2xmltest.c
@@ -221,6 +221,7 @@ mymain(void)
DO_TEST("harddisk-scsi-file", "harddisk-scsi-file");
DO_TEST("harddisk-ide-file", "harddisk-ide-file");
+ DO_TEST("harddisk-transient", "harddisk-transient");
DO_TEST("cdrom-scsi-file", "cdrom-scsi-file");
DO_TEST("cdrom-scsi-device", "cdrom-scsi-device");
--
1.7.1
[libvirt] [PATCH] Use AC_PATH_PROG to search for dmidecode
by Roman Bogorodskiy
This is useful in certain circumstances; for example, when
libvirtd is executed by the FreeBSD rc script, it cannot find
dmidecode installed from FreeBSD ports because it doesn't have
/usr/local (the default prefix for ports) in PATH.
---
configure.ac | 4 ++++
src/util/virsysinfo.c | 2 +-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/configure.ac b/configure.ac
index 146418f..c34c7b8 100644
--- a/configure.ac
+++ b/configure.ac
@@ -391,6 +391,8 @@ dnl External programs that we can use if they are available.
dnl We will hard-code paths to these programs unless we cannot
dnl detect them, in which case we'll search for the program
dnl along the $PATH at runtime and fail if it's not there.
+AC_PATH_PROG([DMIDECODE], [dmidecode], [dmidecode],
+ [/sbin:/usr/sbin:/usr/local/sbin:$PATH])
AC_PATH_PROG([DNSMASQ], [dnsmasq], [dnsmasq],
[/sbin:/usr/sbin:/usr/local/sbin:$PATH])
AC_PATH_PROG([RADVD], [radvd], [radvd],
@@ -408,6 +410,8 @@ AC_PATH_PROG([OVSVSCTL], [ovs-vsctl], [ovs-vsctl],
AC_PATH_PROG([SCRUB], [scrub], [scrub],
[/sbin:/usr/sbin:/usr/local/sbin:$PATH])
+AC_DEFINE_UNQUOTED([DMIDECODE],["$DMIDECODE"],
+ [Location or name of the dmidecode program])
AC_DEFINE_UNQUOTED([DNSMASQ],["$DNSMASQ"],
[Location or name of the dnsmasq program])
AC_DEFINE_UNQUOTED([RADVD],["$RADVD"],
diff --git a/src/util/virsysinfo.c b/src/util/virsysinfo.c
index 18f426d..92484f5 100644
--- a/src/util/virsysinfo.c
+++ b/src/util/virsysinfo.c
@@ -44,7 +44,7 @@
VIR_ENUM_IMPL(virSysinfo, VIR_SYSINFO_LAST,
"smbios");
-static const char *sysinfoDmidecode = "dmidecode";
+static const char *sysinfoDmidecode = DMIDECODE;
static const char *sysinfoSysinfo = "/proc/sysinfo";
static const char *sysinfoCpuinfo = "/proc/cpuinfo";
--
1.8.4.3
[libvirt] 'host-passthrough' for arm64
by Oleg Strikov
Hello guys,
I'm trying to come up with basic OpenStack support for an arm64 node.
I'd like to use the 'libvirt_cpu_mode=host-passthrough' configuration option
with Nova, which puts <cpu mode='host-passthrough'> into the libvirt XML config.
But with this option passed, libvirt crashes with 'error: unsupported
configuration: CPU specification not supported by hypervisor'.
This happens because the following handlers are not implemented (or
implemented as stubs) inside src/cpu/cpu_aarch64.c:
* AArch64Decode()
* AArch64Update()
* AArch64guestData()
To solve exactly this 'host-passthrough'-related issue, it's enough to
have the following set of handlers:
AArch64Decode(<...>)
{
    virCheckFlags(VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES, -1);

    /* I don't know any way to detect 'cortex-a57' or any other armv8 CPU
       for now */
    /* But I don't think that we can meet anything else than cortex-a57 */
    /* We may also put 'host' there to specifically point out that
       qemu-aarch64 supports only '-cpu host' for now */
    /* pm215 told me that the ETA for '-cpu cortex-a57' and friends is
       around 3 months from now */
    return !(VIR_STRDUP(cpu->model, "host" or "cortex-a57") == 1);
}

static int AArch64Update(<...>)
{
    /* qemu-aarch64 supports only '-cpu host' for now */
    guest->match = VIR_CPU_MATCH_EXACT;
    virCPUDefFreeModel(guest);
    return virCPUDefCopyModel(guest, host, true);
}

static virCPUCompareResult
AArch64guestData(<..>)
{
    return VIR_CPU_COMPARE_IDENTICAL;
}
It's clear that these handlers provide just basic functionality
('host-passthrough' only) and will have to be extended in the future.
But is this something we can commit for now?
Another way to deal with this issue is to adopt some code from the PPC handlers
(including CPU model detection and best-fit qemu configuration discovery).
But this way will be blocked until:
(1) I find a way to reliably detect the CPU model on an ARMv8 board (any ideas?)
(2) pm215 implements TCG for arm64
Which way is the best to choose in order to come up with committable code?
Many thanks for your help!
Oleg
[libvirt] [RFC PATCH 0/3] Implement two-tier driver loading
by Adam Walters
This patchset implements a two-tier driver loading system. I split the hypervisor drivers out into their own tier, which is loaded after the other drivers. This has the net effect of ensuring that things like secrets, networks, etc., are initialized and auto-started before any hypervisors, such as qemu, lxc, etc., are touched. This resolves the race condition present when starting libvirtd while domains are running, which happens when restarting libvirtd after having started at least one domain.
This patch will work without my config driver patchset, but does prevent RBD storage pools from auto-starting. It may also affect other pool types, but I only have file and RBD to test with, personally. The RBD storage pool is only affected because it requires a hypervisor connection (prior to this patchset, that connection was hardcoded to be a connection to qemu on localhost) in order to look up secrets. Any pool type that does not use/need data outside of the base storage pool definition should continue to auto-start (file backed pools definitely still work) and also no longer be part of the restart race condition.
For anyone who is not familiar with the race condition I mentioned above, the basic description is that upon restarting libvirtd, any running QEMU domains using storage-pool-backed disks are killed (randomly) due to their storage pool not being online. This is because storage pool auto-start has not finished before QEMU initialization runs.
I would appreciate any comments and suggestions about this patchset. It works for me on 4 machines running three different distros of Linux (Archlinux, Gentoo, and CentOS), so I would imagine it should work most anywhere.
Adam Walters (3):
driver: Implement new state driver field
storage: Fix hardcoded qemu connection
libvirt: Implement two-tier driver loading
src/config/config_driver.c | 1 +
src/driver.h | 6 ++++
src/interface/interface_backend_netcf.c | 1 +
src/libvirt.c | 57 ++++++++++++++++++++++++++++-----
src/libxl/libxl_driver.c | 1 +
src/lxc/lxc_driver.c | 1 +
src/network/bridge_driver.c | 1 +
src/node_device/node_device_hal.c | 1 +
src/node_device/node_device_udev.c | 1 +
src/nwfilter/nwfilter_driver.c | 1 +
src/qemu/qemu_driver.c | 1 +
src/remote/remote_driver.c | 1 +
src/secret/secret_driver.c | 1 +
src/storage/storage_driver.c | 13 ++++----
src/uml/uml_driver.c | 1 +
src/xen/xen_driver.c | 1 +
16 files changed, 75 insertions(+), 14 deletions(-)
--
1.8.5.2
[libvirt] How to configure MacVtap passthrough mode to SR-IOV VF?
by opendaylight
Hi guys.
These days I'm doing research on SR-IOV & live migration. As we all know, there is a big problem: SR-IOV & live migration cannot be used at the same time.
I heard that KVM + SR-IOV + MacVtap can solve this problem, so I want to try it.
My environment:
Host: Dell R610, OS: RHEL 6.4 ( kernel 2.6.32)
NIC: intel 82599
I followed a document from an Intel engineer; it said that I should write XML like below:
============================
<network>
<name>macvtap_passthrough’</name>
<forward mode=’passthrough>
<interface dev=’vf0’ />
<interface dev=’vf1’ />
.. ..
</forward>
</network>
============================
I guess here the vf0 & vf1 should be the VFs of the Intel 82599.
What confuses me is that we cannot see vf0 & vf1 directly from the host server with "ifconfig"; that is to say, vf0 & vf1 are not real physical interfaces.
I try #: virsh net-define macvtap_passthrough.xml
#: virsh net-start macvtap_passthrough
When I try to configure macvtap_passthrough for a VNIC of a VM, virt-manager tells me: "Can't get vf 0, no such a device".
When I try from virt-manager: add hardware ---> network ---> host device (macvtap_passthrough:pass_through network), I get an error like: "Error adding device: xmlParseDoc() failed".
I guess I cannot write " <interface dev=’vf0’ />" like this in the XML.
I tried changing it as below, but the result is the same.
============================
<network>
<name>macvtap_passthrough’</name>
<forward mode=’passthrough>
<pf dev=’p2p1’ /> // p2p1 is intel sriov physical nic
</forward>
</network>
============================
I don't know how to write this correctly. Please help me.
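My best guess at a syntactically valid version is below (the Intel document uses curly quotes and never closes the mode attribute value, which may explain the xmlParseDoc() failure); p2p1 is just the PF name on my host, and I am not sure whether listing the PF here or per-VF <interface> entries is semantically right:
<network>
  <name>macvtap_passthrough</name>
  <forward mode='passthrough'>
    <!-- p2p1 is only an example PF name; per-VF <interface dev='...'/> entries may be needed instead -->
    <pf dev='p2p1'/>
  </forward>
</network>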
You can refer to the Intel document below.
Many thanks.
==========document from intel========================
Linux/KVM VM Live Migration (SRIOV And MacVtap)
By Waseem Ahmad (waseem.ahmad(a)intel.com)
In this scenario we are using 3 machines:
Server 1: DNS/NFS – nfs.vtt.priv
Server 2: HV1
Server 3: HV2
HV1 and HV2 are Linux/KVM machines. We will get to them in a minute; however, we first must address KVM and NFS.
NFS:
Create a storage area that both HV1 and HV2 can access. There are several methods available for this (FCoE/iSCSI/NFS). For this write-up we use NFS.
Configure NFS:
Create a directory on nfs.vtt.priv where you want your storage to be. In this case we used /home/vmstorage.
Edit /etc/exports and add the following
/home/vmstorage 172.0.0.0/1(rw,no_root_squash,sync)
Now edit /etc/sysconfig/nfs
Uncomment RPCNFSDARGS=”-N 4”
This will disable NFS v4. If you don't do this, you will have issues accessing the share from within VirtManager.
Add all three machines' IP addresses to each machine's hosts file.
MIGRATION WILL NOT WORK WITHOUT FULLY QUALIFIED DOMAIN NAMES.
KVM:
On both HV1, and HV2 servers:
Edit /etc/selinux/config
SELINUX=disabled
Edit /etc/libvirt/qemu.conf
Change security_driver=none
On HV1 and HV2 start Virtual Machine Manager
Double click on localhost(QEMU)
Then click on the storage tab at the top of the window that pops up
Down in the left hand corner is a box with a + sign in it, click on that. A new window will appear entitled Add a New Storage Pool
In the name box type vmstorage, then click on the type box and select netfs: Network Exported Directory, now click next.
You will see the last step of the network Storage Pool Dialog. The first option is the target path. This is the path where we will mount our storage on the local server. I have chosen to leave this alone.
The next option is format, leave this set on auto:
Host name: nfs.vtt.priv
Source path: /home/vmstorage
Click on finish
Repeat the above steps on HV2 server
Create vms
On the HV1 server go back to the connection details screen (this is the one that showed up when you double-clicked on localhost (QEMU)), and click on the storage tab again.
Click on vmstorage then click on new volume at the bottom.
A new dialog will appear entitled add a storage volume.
In the Name box type vm1
In the Max Capacity box type 20000
And do the same in the allocation box then click finish.
Now you can close the connection details box by clicking on the x in the corner.
Now click on the terminal icon in the corner, right underneath File, and type the name of our VM, vm1, in the box entitled Name; choose your installation media (probably local install media), and click forward. Click on use cdrom or dvd, and place a RHEL 6.2 DVD in the DVD drive on HV1. Select Linux for the OS type, and Red Hat Enterprise Linux 6 for the version. I chose to leave the memory at its default of 1024 and assigned 1 CPU to the guest. Click forward, select “select managed or other existing storage” and click the browse button. Click on vmstorage, select vm1.img, then click forward. Then click on finish.
We will configure network after we make sure migration between the two servers works properly.
Now go ahead and install the operating system as you would normally.
Create networks
Create a file that looks like the following (there is no support for adding a macvtap interface from the GUI as of yet; this is the only manual step in the process). Create a file named macvtap_passthrough.xml with the following contents.
<network>
<name>macvtap_passthrough’</name>
<forward mode=’passthrough>
<interface dev=’vf0’ />
<interface dev=’vf1’ />
.. ..
</forward>
</network>
<network>
<name>’macvtap_bridge’</name>
<forward mode=’bridge’>
<interface dev=’p3p1’/>
</forward>
</network>
Save it and run the following commands:
virsh net-define macvtap_passthrough.xml
virsh net-start macvtap_passthrough
Make sure all of your virtual interfaces that you used in the xml file are up.
for i in $(ifconfig –a | awk ‘/eth/ {print $1}’); do ifconfig $i up; done
Then double click on your vm and click on the big blue i
On the next screen click on add hardware, then on network, then select Virtual network “macvtap_passthrough”
Then click on finish.
Start your vm and make sure that the macvtap was created on the host by doing
ip link | grep ‘macvtap’
In the vm configure the ip information for the virtio adapter.
In the virtual machine manager click on file, add connection.
Then check the connect to remote host box, fill in the username and hostname, then click on connect.
Right click on your VM and select Migrate, select the host you want to migrate the machine to, then click on advanced options, check the address box, and type the ip address of the machine you want to migrate to, and click the migrate button.
[libvirt] Propose patch?
by Joel Simoes
Hi, all.
I'm Joel.
I want to propose a patch to correct a bug when refreshing volumes on sheepdog from
a sheep pool.
But I don't understand how to correctly configure my .git/config to
send the patch.
Warning: I'm not a C developer; this patch requires correction for the char
analysis and copying. It works, but ...
My patch: add auto volumes (vdi) and refresh them when adding a sheepdog pool.
Thanks.