[libvirt] Doc v2: How to use NPIV in libvirt
by Osier Yang
Thanks to John Ferlan for the many rounds of internal feedback; I believe
the document is more readable and better organized now. Should we create
a page for it under http://libvirt.org/deployment.html or add it to the wiki?
==========================================
NPIV in libvirt
NPIV (N_Port ID Virtualization) is a Fibre Channel technology that
allows a single physical Fibre Channel HBA to be shared as multiple
virtual ports. Each virtual port, henceforth known as a "virtual Host
Bus Adapter" (vHBA), is identified by its own WWPN (World Wide Port
Name) and WWNN (World Wide Node Name). In the virtualization world,
the vHBA controls the LUNs for virtual machines.
The libvirt implementation provides the flexibility to configure the
LUNs either directly on the virtual machine or as part of a storage
pool, which can then be configured for use by a virtual machine.
NPIV support was first added in libvirt 0.6.5; however, the following
sections primarily describe NPIV functionality as of the current libvirt
release, 1.1.2. A troubleshooting section describes historical
differences and prior-version considerations.
1) Discovery
Discovery of HBA(s) capable of NPIV is provided through the virsh
command 'virsh nodedev-list --cap vports'. If no HBA is returned,
the host configuration should be checked. The XML output of the
command "virsh nodedev-dumpxml" lists the fields <name>, <wwnn>, and
<wwpn>, which are needed to create a vHBA. Take care to also note
the <max_vports> value, as it tells you whether the HBA can
accommodate any more vHBAs.
The following output shows a host with two HBAs that support vHBA,
and the layout of an HBA's XML:
# virsh nodedev-list --cap vports
scsi_host4
scsi_host5
# virsh nodedev-dumpxml scsi_host5
<device>
  <name>scsi_host5</name>
  <parent>pci_0000_04_00_1</parent>
  <capability type='scsi_host'>
    <host>5</host>
    <capability type='fc_host'>
      <wwnn>2001001b32a9da4e</wwnn>
      <wwpn>2101001b32a9da4e</wwpn>
      <fabric_wwn>2001000dec9877c1</fabric_wwn>
    </capability>
    <capability type='vport_ops'>
      <max_vports>164</max_vports>
      <vports>5</vports>
    </capability>
  </capability>
</device>
The "max_vports" value indicates there are a possible of 164 vports
available for use in the HBA configuration. The "vports" value indicates
the number of vports currently being used.
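For example, the remaining vport capacity of every NPIV-capable HBA can
be computed with a small script. This is only a sketch; it assumes the
xmllint tool from libxml2 is installed on the host:

for hba in $(virsh nodedev-list --cap vports); do
    max=$(virsh nodedev-dumpxml "$hba" | xmllint --xpath 'string(//max_vports)' -)
    used=$(virsh nodedev-dumpxml "$hba" | xmllint --xpath 'string(//vports)' -)
    # <vports> may be absent when no vHBA is active, hence the default of 0
    echo "$hba: $(( max - ${used:-0} )) vports available"
done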
Detection of HBAs capable of NPIV prior to libvirt 1.0.4 is described
in the "Troubleshooting" section.
2) Creation of a vHBA using the node device driver
In order to create a vHBA using the node device driver, select an HBA
with available "vport" space and use that HBA's "<name>" field as the
"<parent>" field in the following XML:
<device>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
    </capability>
  </capability>
</device>
Then create the vHBA with the command "virsh nodedev-create" (assuming
the above XML file is named "vhba.xml"):
# virsh nodedev-create vhba.xml
Node device scsi_host6 created from vhba.xml
NOTE: If you specify a "name" for the vHBA, it will be ignored.
The kernel automatically picks the next SCSI host name in sequence that
is not already in use. The "wwpn" and "wwnn" values will be
automatically generated by libvirt.
In order to see the generated vHBA XML, use the command "virsh
nodedev-dumpxml" as follows:
# virsh nodedev-dumpxml scsi_host6
<device>
  <name>scsi_host6</name>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwnn>2001001b32a9da5e</wwnn>
      <wwpn>2101001b32a9da5e</wwpn>
    </capability>
  </capability>
</device>
This vHBA will only remain defined as long as the host is not rebooted.
In order to create a persistent vHBA, one must use a libvirt storage
pool (see the next section).
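Putting these steps together, the following sketch (reusing the example
names from above) creates a transient vHBA and captures the generated
WWNN/WWPN for later reuse, e.g. in a storage pool definition. Note that
parsing the third word of virsh's "Node device ... created" message is
an assumption about its output format:

cat > vhba.xml <<'EOF'
<device>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
    </capability>
  </capability>
</device>
EOF
new_vhba=$(virsh nodedev-create vhba.xml | awk '{print $3}')   # e.g. scsi_host6
virsh nodedev-dumpxml "$new_vhba" | grep -E '<wwnn>|<wwpn>'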
3) Creation of vHBA by the storage pool
By design, vHBAs managed by the node device driver are transient across
host reboots. It is therefore recommended to define a libvirt storage
pool based on the vHBA in order to preserve the vHBA configuration.
Using a storage pool has two primary advantages: first, the libvirt code
will find the LUN's path via simple virsh command output; second, if you
reference the LUN by libvirt storage pool and volume name in the virtual
machine config (see section 5), migrating a virtual machine requires
only defining and starting a storage pool with the same vHBA name on the
target machine.
In order to create a persistent vHBA configuration, create a libvirt
'scsi' storage pool using XML as follows:
<pool type='scsi'>
  <name>poolvhba0</name>
  <source>
    <adapter type='fc_host' wwnn='20000000c9831b4b' wwpn='10000000c9831b4b'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
You must use "type='scsi'" for the pool, and the source adapter type
must be "fc_host". The attributes "wwnn" and "wwpn" provide the unique
identifier for the vHBA to be created.
The source adapter has an optional attribute "parent", which names the
HBA to use to create the vHBA. Its value should be consistent with what
the node device driver reports (e.g. scsi_host5). If it is not
specified, libvirt will pick the first NPIV-capable HBA that has not
yet exceeded the maximum number of vports it supports.
NOTE: You can also create a scsi pool with source adapter type "fc_host"
for an HBA itself, in which case the "parent" attribute is not
necessary.
If you prefer to choose which parent HBA to use for your vHBA, then
you must provide the parent, wwnn, and wwpn in the source adapter XML as
follows:
<source>
  <adapter type='fc_host' parent='scsi_host5' wwnn='20000000c9831b4b'
           wwpn='10000000c9831b4b'/>
</source>
To define the persistent pool (assuming the above XML is saved as
poolvhba0.xml):
# virsh pool-define poolvhba0.xml
NOTE: One must use pool-define to make the pool persistent; a pool
created by pool-create is transient and will disappear after a system
reboot or a libvirtd restart.
To start the pool:
# virsh pool-start poolvhba0
To destroy the pool:
# virsh pool-destroy poolvhba0
When starting the pool, libvirt checks whether a vHBA with the same
"wwnn:wwpn" already exists. If it does not, a new vHBA with the
provided "wwnn:wwpn" will be created. Correspondingly, when the pool
is destroyed, the vHBA is destroyed too.
Finally, in order to ensure that subsequent reboots of your host
automatically define vHBAs for use in virtual machines, one must set the
storage pool autostart feature as follows (assuming the name of the
created pool was "poolvhba0"):
# virsh pool-autostart poolvhba0
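Putting the commands of this section together, a typical persistent
setup looks like this (assuming the poolvhba0.xml file shown earlier):

virsh pool-define poolvhba0.xml
virsh pool-start poolvhba0
virsh pool-autostart poolvhba0
virsh pool-list --all     # verify poolvhba0 is active and autostarted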
4) Finding LUNs on your vHBA
4.1) Utilizing LUNs from a vHBA created by the storage pool
Assuming that a storage pool was created for a vHBA, use the
"virsh vol-list" command to generate a list of the available LUNs
on the vHBA, as follows:
# virsh vol-list poolvhba0 --details
Name        Path                                                             Type
---------------------------------------------------------------------------------
unit:0:2:0  /dev/disk/by-path/pci-0000:04:00.1-fc-0x203500a0b85ad1d7-lun-0   block
The list of LUN names displayed will be available for use as disk volumes
in virtual machine configurations.
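A single volume name can also be resolved to its stable device path
with the "virsh vol-path" command, for example:

# virsh vol-path unit:0:2:0 --pool poolvhba0
/dev/disk/by-path/pci-0000:04:00.1-fc-0x203500a0b85ad1d7-lun-0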
4.2) Utilizing LUNs from a vHBA created using the node device driver
Finding an available LUN from a vHBA created using the node device
driver can be achieved either via the "virsh nodedev-list" command or
by manually searching the host's file system.
Run "virsh nodedev-list --tree | more" and find the parent HBA on
which the vHBA was configured. The following example lists the
pertinent part of the tree for the example HBA "scsi_host5":
  +- scsi_host5
      |
      +- scsi_host7
      +- scsi_target5_0_0
      |   |
      |   +- scsi_5_0_0_0
      |
      +- scsi_target5_0_1
      |   |
      |   +- scsi_5_0_1_0
      |
      +- scsi_target5_0_2
      |   |
      |   +- scsi_5_0_2_0
      |       |
      |       +- block_sdb_3600a0b80005adb0b0000ab2d4cae9254
      |
      +- scsi_target5_0_3
          |
          +- scsi_5_0_3_0
The "block_" indicates it's a block device, the "sdb_" is a
convention to signify the the short device path of "/dev/sdb", and the
short device path or the number can be used to search the
"/dev/disk/by-{id,path,uuid,label}/" name space for the specific LUN
by name, for example:
# ls /dev/disk/by-id/ | grep 3600a0b80005adb0b0000ab2d4cae9254
scsi-3600a0b80005adb0b0000ab2d4cae9254
# ls /dev/disk/by-path/ -l | grep sdb
lrwxrwxrwx. 1 root root 9 Sep 16 05:58
pci-0000:04:00.1-fc-0x203500a0b85ad1d7-lun-0 -> ../../sdb
As an alternative to "virsh nodedev-list", it is possible to manually
iterate through the "/sys/bus/scsi/devices" and "/dev/disk/by-path"
directory trees in order to find a LUN, using the following steps:
1. Iterate over all the directories beginning with the SCSI host number
of the vHBA under the "/sys/bus/scsi/devices" tree. For example, if the
SCSI host number is 6, the command would be:
# ls /sys/bus/scsi/devices/6:* -d
/sys/bus/scsi/devices/6:0:0:0 /sys/bus/scsi/devices/6:0:1:0
/sys/bus/scsi/devices/6:0:2:0 /sys/bus/scsi/devices/6:0:3:0
2. List the "block" names of all the entries belongs to the SCSI host
as follows:
# ls /sys/bus/scsi/devices/6:*/block/
/sys/bus/scsi/devices/6:0:2:0/block/:
sdc
/sys/bus/scsi/devices/6:0:3:0/block/:
sdd
This indicates that "scsi_host6" has two LUNs: one is attached to
"6:0:2:0", with the short device name "sdc", and the other is attached
to "6:0:3:0", with the short device name "sdd".
3. Determine the stable path to the LUN.
Unfortunately, a device name such as "sdc" is not stable enough for use
by libvirt. In order to get the stable path, run
"ls -l /dev/disk/by-path" and look for the "sdc" entry:
# ls -l /dev/disk/by-path/ | grep sdc
lrwxrwxrwx. 1 root root 9 Sep 10 22:28
pci-0000:08:00.1-fc-0x205800a4085a3127-lun-0 -> ../../sdc
Thus "/dev/disk/by-path/pci-0000:08:00.1-fc-0x205800a4085a3127-lun-0"
is the stable path of the LUN attached to address "6:0:2:0" and will be
used in virtual machine configurations.
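This lookup can be automated. The following sketch (assuming SCSI host
number 6, as in the example above) prints the stable by-path entry for
every LUN on the vHBA:

for blk in /sys/bus/scsi/devices/6:*/block/*; do
    [ -e "$blk" ] || continue
    dev=$(basename "$blk")     # short device name, e.g. "sdc"
    ls -l /dev/disk/by-path/ | grep -w "$dev"
done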
5) Virtual machine configuration change to use vHBA LUN
Adding the vHBA LUN to the virtual machine configuration is done via
an XML modification to the virtual machine.
5.1) Using a LUN from a vHBA created by the storage pool
Adding the vHBA LUN to the virtual machine is handled via XML that
creates a disk volume on the virtual machine, as in the following
example:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='poolvhba0' volume='unit:0:2:0'/>
  <target dev='hda' bus='ide'/>
</disk>
In particular, note the use of the "<source>" element with the "pool"
and "volume" attributes naming the storage pool and the short volume
name.
5.2) Using a LUN from a vHBA created using the node device driver
Configuring the LUN on the virtual machine is done with its stable
path (a path under {by-id|by-path|by-uuid|by-label}). The following is
an XML example of a direct LUN path:
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-path/pci-0000\:04\:00.1-fc-0x203400a0b85ad1d7-lun-0'/>
  <target dev='sda' bus='scsi'/>
</disk>
NOTE: the use of "type='block'", "device='disk'", and the long
"<source>" device name. The example uses the "by-path" option. The
backslashes before the colons are required, since colons can otherwise
be treated as delimiters.
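The resulting XML can then be attached to a guest with "virsh
attach-device"; the guest name "guest1" and the file name
"lun-disk.xml" below are placeholders, and --config makes the change
persistent:

# virsh attach-device guest1 lun-disk.xml --config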
5.3) To configure the LUN as a pass-through device, use the following XML
examples.
For a vHBA created using the node device driver:
<disk type='block' device='lun'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-path/pci-0000\:04\:00.1-fc-0x203400a0b85ad1d7-lun-0'/>
  <target dev='sda' bus='scsi'/>
</disk>
NOTE: The use of "device='lun'" and again the long "<source>" device
name. Again, the backslashes prior to the colons are required.
For a vHBA created by a storage pool:
<disk type='volume' device='disk'>
  <driver name='qemu' type='raw'/>
  <source pool='poolvhba0' volume='unit:0:2:0'/>
  <target dev='hda' bus='ide'/>
</disk>
Although it is possible to use the LUN's path directly as the disk
source even for a vHBA created by the storage pool, it is recommended
to use the libvirt storage pool and storage volume names instead.
6) Destroying a vHBA
A vHBA created by the storage pool can be destroyed by the virsh command
"pool-destroy", for example:
# virsh pool-destroy poolvhba0
NOTE: If the storage pool is persistent, the vHBA will also be removed
by libvirt when it destroys the storage pool.
A vHBA created using the node device driver can be destroyed by the
command "virsh nodedev-destroy", for example (assuming that scsi_host6
was created as shown earlier):
# virsh nodedev-destroy scsi_host6
Destroying a vHBA removes it just as a reboot would, since the node
device driver does not support persistent configurations.
7) Troubleshooting
7.1) Discovery of HBAs capable of NPIV prior to 1.0.4
Prior to libvirt 1.0.4, discovering HBAs capable of NPIV required
checking each of the HBAs on the host for the capability flag
"vport_ops", as follows:
First, find all the HBAs using the capability flag "scsi_host":
# virsh nodedev-list --cap scsi_host
scsi_host0
scsi_host1
scsi_host2
scsi_host3
scsi_host4
scsi_host5
Now check each HBA to find one with the "vport_ops" capability, one
at a time as follows:
# virsh nodedev-dumpxml scsi_host3
<device>
  <name>scsi_host3</name>
  <parent>pci_0000_00_08_0</parent>
  <capability type='scsi_host'>
    <host>3</host>
  </capability>
</device>
This shows that "scsi_host3" does not support vHBA.
# virsh nodedev-dumpxml scsi_host5
<device>
  <name>scsi_host5</name>
  <parent>pci_0000_04_00_1</parent>
  <capability type='scsi_host'>
    <host>5</host>
    <capability type='fc_host'>
      <wwnn>2001001b32a9da4e</wwnn>
      <wwpn>2101001b32a9da4e</wwpn>
      <fabric_wwn>2001000dec9877c1</fabric_wwn>
    </capability>
    <capability type='vport_ops' />
  </capability>
</device>
But "scsi_host5" supports it.
NOTE: In addition to libvirt 1.0.4 automating the lookup of HBAs
capable of supporting a vHBA configuration, the XML tags "max_vports"
and "vports" describe the maximum vports allowed and the number of
vports currently in use.
Alternatively, you can avoid the cumbersome steps above with a simple
script like:
for i in $(virsh nodedev-list --cap scsi_host); do
    if virsh nodedev-dumpxml $i | grep vport_ops > /dev/null; then
        echo $i;
    fi
done
NOTE: It is possible that a node device is named like
"pci_10df_fe00_scsi_host_0". This is because libvirt supports two
backends for the node device driver ("udev" and "HAL"), which lead to
completely different naming styles. The udev backend is preferred over
the HAL backend, since HAL support is in maintenance mode. The udev
backend is the more common one; however, if your distribution packaged
the libvirt binaries without the udev backend, then the more
complicated names such as "pci_10df_fe00_scsi_host_0" must be used.
7.2) Creation of a vHBA using the node device driver prior to 0.9.10
With libvirt prior to 0.9.10, you need to specify the "wwnn" and "wwpn"
manually when creating a vHBA, with example XML as follows:
<device>
  <name>scsi_host6</name>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwnn>2001001b32a9da5e</wwnn>
      <wwpn>2101001b32a9da5e</wwpn>
    </capability>
  </capability>
</device>
7.3) Creation of a storage pool based on a vHBA prior to 1.0.5
Prior to libvirt 1.0.5, one could define a "scsi" type pool based on a
vHBA by its SCSI host name (e.g. "host0" in the XML below), using an
example XML as follows:
<pool type='scsi'>
  <name>poolhba0</name>
  <uuid>e9392370-2917-565e-692b-d057f46512d6</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <adapter name='host0'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
    <permissions>
      <mode>0700</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
There are two disadvantages to using the SCSI host name as the source
adapter. First, the SCSI host number is not stable, so it may cause
trouble for your storage pool after a system reboot. Second, the
adapter name (e.g. "host5") is not consistent with the node device name
(e.g. "scsi_host5"). Moreover, using the SCSI host name as the source
adapter does not allow you to create a vHBA.
NOTE: Since 1.0.5, the source adapter name has been changed to be
consistent with the node device name, so the second disadvantage no
longer applies.
Regards,
Osier
[libvirt] [PATCH 0/3] Libvirt Wireshark dissector
by Yuto KAWAMURA(kawamuray)
From: "Yuto KAWAMURA(kawamuray)" <kawamuray.dadada(a)gmail.com>
Introduce a Wireshark dissector plugin which adds support to Wireshark
for dissecting the libvirt RPC protocol.
This feature was presented by Michal Privoznik the year before
last [1], but at that time it only supported dissecting packet headers.
This time I have enhanced the dissector to also dissect packet
payloads. Furthermore, I provide a code generator for the dissector, so
you can get a fresh build of the dissector from the libvirt RPC
specification file at any version you like.
[1] http://www.redhat.com/archives/libvir-list/2011-October/msg00301.html
Yuto KAWAMURA(kawamuray) (3):
Exclude files in VC_LIST_ALWAYS_EXCLUDE_REGEX from
bracket-spacing-check
Introduce Libvirt Wireshark dissector
Add sample output of Wireshark dissector
Makefile.am | 3 +-
cfg.mk | 10 +-
configure.ac | 69 +-
devtools/wireshark-dissector/Makefile.am | 28 +
devtools/wireshark-dissector/README.md | 25 +
.../samples/libvirt-sample.pdml | 7970 ++++++++++++++++++++
devtools/wireshark-dissector/src/.gitignore | 2 +
devtools/wireshark-dissector/src/Makefile.am | 31 +
devtools/wireshark-dissector/src/moduleinfo.h | 36 +
devtools/wireshark-dissector/src/packet-libvirt.c | 512 ++
devtools/wireshark-dissector/src/packet-libvirt.h | 127 +
devtools/wireshark-dissector/src/plugin.c | 27 +
devtools/wireshark-dissector/util/genxdrstub.pl | 1009 +++
13 files changed, 9842 insertions(+), 7 deletions(-)
create mode 100644 devtools/wireshark-dissector/Makefile.am
create mode 100644 devtools/wireshark-dissector/README.md
create mode 100644 devtools/wireshark-dissector/samples/libvirt-sample.pdml
create mode 100644 devtools/wireshark-dissector/src/.gitignore
create mode 100644 devtools/wireshark-dissector/src/Makefile.am
create mode 100644 devtools/wireshark-dissector/src/moduleinfo.h
create mode 100644 devtools/wireshark-dissector/src/packet-libvirt.c
create mode 100644 devtools/wireshark-dissector/src/packet-libvirt.h
create mode 100644 devtools/wireshark-dissector/src/plugin.c
create mode 100755 devtools/wireshark-dissector/util/genxdrstub.pl
--
1.8.1.5
[libvirt] [PATCH v3 0/7] add new API virConnectGetCPUModelNames
by Giuseppe Scrivano
This series adds a new API, "virConnectGetCPUModelNames", that makes it
possible to retrieve the list of CPU models known by the hypervisor for
a specific architecture.
This new function is mainly needed so that virt-manager does not have
to read the cpu_map.xml file directly (whose contents could also differ
when accessing a remote daemon).
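A rough sketch of the corresponding virsh usage; the subcommand name
below is an assumption based on the virsh patch in this series:

# virsh cpu-models x86_64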
I have amended all the comments reported for v2.
*v3 main changes
- virConnectGetCPUModelNames returns the number of models instead of
0 on success.
- Use VIR_INSERT_ELEMENT instead of VIR_EXPAND_N.
- Fix a potential memory leak in the python bindings.
- Move virsh changes to a separate commit.
- Remove API documentation from libvirt.h.
*v2 main changes
- set a hard limit for the number of CPU models that is possible to
fetch from a remote server.
- Use VIR_EXPAND_N instead of VIR_REALLOC_N.
- s|1.1.2|1.1.3|
Giuseppe Scrivano (7):
libvirt: add new public API virConnectGetCPUModelNames
cpu: add function to get the models for an arch
virConnectGetCPUModelNames: implement the remote protocol
virConnectGetCPUModelNames: add the support for qemu
virConnectGetCPUModelNames: add the support for the test protocol
virsh: add function to get the CPU models for an arch
python: add bindings for virConnectGetCPUModelNames
daemon/remote.c | 43 +++++++++++++++++++++++++++++++
include/libvirt/libvirt.h.in | 4 +++
python/generator.py | 1 +
python/libvirt-override-api.xml | 7 +++++
python/libvirt-override.c | 52 +++++++++++++++++++++++++++++++++++++
python/libvirt-override.py | 11 ++++++++
src/cpu/cpu.c | 56 ++++++++++++++++++++++++++++++++++++++++
src/cpu/cpu.h | 3 +++
src/driver.h | 7 +++++
src/libvirt.c | 46 +++++++++++++++++++++++++++++++++
src/libvirt_private.syms | 1 +
src/libvirt_public.syms | 5 ++++
src/qemu/qemu_driver.c | 14 ++++++++++
src/remote/remote_driver.c | 57 +++++++++++++++++++++++++++++++++++++++++
src/remote/remote_protocol.x | 20 ++++++++++++++-
src/remote_protocol-structs | 11 ++++++++
src/test/test_driver.c | 11 ++++++++
tools/virsh-host.c | 54 ++++++++++++++++++++++++++++++++++++++
tools/virsh.pod | 5 ++++
19 files changed, 407 insertions(+), 1 deletion(-)
--
1.8.3.1
[libvirt] [PATCH] qemu: Avoid dangling job in qemuDomainSetBlockIoTune
by Jiri Denemark
virDomainSetBlockIoTuneEnsureACL was incorrectly called after we already
started a job. As a result of this, the job was not cleaned up when an
access driver had forbidden the action.
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
src/qemu/qemu_driver.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 0763f9b..8a302d1 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -14673,15 +14673,15 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
if (!(vm = qemuDomObjFromDomain(dom)))
return -1;
+ if (virDomainSetBlockIoTuneEnsureACL(dom->conn, vm->def, flags) < 0)
+ goto cleanup;
+
if (qemuDomainObjBeginJob(driver, vm, QEMU_JOB_MODIFY) < 0)
goto cleanup;
priv = vm->privateData;
cfg = virQEMUDriverGetConfig(driver);
- if (virDomainSetBlockIoTuneEnsureACL(dom->conn, vm->def, flags) < 0)
- goto cleanup;
-
if (!(caps = virQEMUDriverGetCapabilities(driver, false)))
goto endjob;
--
1.8.3.2
[libvirt] Announce: Nuxis 1.3.0 and 2.0.0
by Nuno Fernandes
Hello,
Nuxis is an integrated solution for virtualization management. Some of
its features are centralized management of nodes/physical machines and
virtual machines, management of virtual networks, storage management
with LVM, ISO management, monitoring and statistics charts,
backup/restore of appliance configurations, import from and export to
other virtualization systems using the OVF format, access control,
support for multiple operating systems on 32-bit and 64-bit
architectures (including Linux and Windows), paravirtualized hardware
acceleration drivers, live migration, PXE boot, web management, and
more.
We are pleased to announce major version 2.0.0. It is now based on
64-bit CentOS 6 with the xen4centos repository. There are many bugfixes
and a few enhancements:
- physical volume resizing
- a warning message triggered when a snapshot runs out of free space
- implementation of iSCSI storage configuration and support
- libvirt updated to 0.10.2.7, Xen to 4.2.3, and the kernel to 3.4.61
For more information please check http://freecode.com/projects/etvm and
the homepage http://www.nuxis.com. As always, the code is on GitHub at
https://github.com/eurotux/ETVA.
We have a users mailing list at
http://mailman.nuxis.com/pipermail/nuxis-users/
Best regards,
Nuno Fernandes
[libvirt] [PATCH 0/2] Add support for device blkio iops and bps throttle
by hzguanqiang@gmail.com
From: Guan Qiang <hzguanqiang(a)corp.netease.com>
The patches add support for setting/getting per-device blkio
read/write bps/iops throttling via the blkio cgroup.
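A rough sketch of the intended virsh usage; the option names below are
an assumption based on the parameter naming convention, see the virsh
patch in this series for the real ones:

# virsh blkiotune guest1 --device-read-bytes-sec /dev/sda,10485760
# virsh blkiotune guest1 --device-write-iops-sec /dev/sda,200
# virsh blkiotune guest1
(the last invocation reads back the current blkio tuning parameters)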
Guan Qiang (2):
blkiotune: add support for device iops and bps throttle
blkiotune: add virsh support for blkiotune.throttle.iops/bps
docs/formatdomain.html.in | 8 +
docs/schemas/domaincommon.rng | 28 +-
include/libvirt/libvirt.h.in | 40 ++
src/conf/domain_conf.c | 115 +++-
src/conf/domain_conf.h | 16 +-
src/libvirt_private.syms | 4 +-
src/lxc/lxc_cgroup.c | 9 +-
src/qemu/qemu_cgroup.c | 10 +-
src/qemu/qemu_driver.c | 579 ++++++++++++++++++--
src/util/vircgroup.c | 79 ++-
src/util/vircgroup.h | 8 +-
.../qemuxml2argv-blkiotune-device.xml | 4 +
tools/virsh-domain.c | 64 +++
tools/virsh.pod | 32 +-
14 files changed, 883 insertions(+), 113 deletions(-)
--
1.7.9.5
[libvirt] [RFC] Add iommu group commands
by Li Zhang
Hi,
Currently, we need to assign IOMMU groups to guests manually, which
requires knowing the group information. I think we can add some IOMMU
group commands to provide users with that information.
I can think of the following commands:
#virsh group-list <--active>
* list all the groups in the system with VFIO
* --active: list only the active groups which are being used by guests.
#virsh group-devs <groupnum>
* list the devices in the group.
* If groupnum is not specified, it will list every group's devices.
# virsh group-dumpxml <groupnum>
* dump the group's XML configuration.
Example dumpxml output:
<iommuGroup number='1'>
  <address domain='0x0001' bus='0x40' slot='0x00' function='0x0'/>
  <address domain='0x0001' bus='0x40' slot='0x00' function='0x1'/>
</iommuGroup>
This information can also be obtained with #nodedev-dumpxml <device>,
but users still can't see all the groups and the devices in each group
directly.
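Today, without such commands, users have to walk sysfs by hand. A
sketch, assuming a VFIO-capable kernel that populates
/sys/kernel/iommu_groups:

for group in /sys/kernel/iommu_groups/*; do
    echo "Group ${group##*/}:"
    ls "$group/devices"     # PCI addresses of the devices in this group
done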
Any suggestions?
I wanted to include "iommu" in the command names, but that makes them
too long.
Thanks
Li Zhang
[libvirt] [PATCH] virsh-domain: Remove unnecessary check and tune code in cmdDesc()
by Hongwei Bi
Since there is a check on buf through virBufferError(),
it is not necessary to check desc again.
---
tools/virsh-domain.c | 62 +++++++++++++++++++++----------------------------
1 files changed, 27 insertions(+), 35 deletions(-)
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index e47877b..a8a0105 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -6724,45 +6724,37 @@ cmdDesc(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED)
}
desc = virBufferContentAndReset(&buf);
- if (edit || desc) {
- if (!desc) {
- desc = vshGetDomainDescription(ctl, dom, title,
- config?VIR_DOMAIN_XML_INACTIVE:0);
- if (!desc)
- goto cleanup;
- }
-
- if (edit) {
- /* Create and open the temporary file. */
- if (!(tmp = vshEditWriteToTempFile(ctl, desc)))
- goto cleanup;
+ if (edit) {
+ /* Create and open the temporary file. */
+ if (!(tmp = vshEditWriteToTempFile(ctl, desc)))
+ goto cleanup;
- /* Start the editor. */
- if (vshEditFile(ctl, tmp) == -1)
- goto cleanup;
+ /* Start the editor. */
+ if (vshEditFile(ctl, tmp) == -1)
+ goto cleanup;
- /* Read back the edited file. */
- if (!(desc_edited = vshEditReadBackFile(ctl, tmp)))
- goto cleanup;
+ /* Read back the edited file. */
+ if (!(desc_edited = vshEditReadBackFile(ctl, tmp)))
+ goto cleanup;
- /* strip a possible newline at the end of file; some
- * editors enforce a newline, this makes editing the title
- * more convenient */
- if (title &&
- (tmpstr = strrchr(desc_edited, '\n')) &&
- *(tmpstr+1) == '\0')
- *tmpstr = '\0';
-
- /* Compare original XML with edited. Has it changed at all? */
- if (STREQ(desc, desc_edited)) {
- vshPrint(ctl, _("Domain description not changed.\n"));
- ret = true;
- goto cleanup;
- }
+ /* strip a possible newline at the end of file; some
+ * editors enforce a newline, this makes editing the title
+ * more convenient */
+ if (title &&
+ (tmpstr = strrchr(desc_edited, '\n')) &&
+ *(tmpstr+1) == '\0')
+ *tmpstr = '\0';
+
+ /* Compare original XML with edited. Has it changed at all? */
+ if (STREQ(desc, desc_edited)) {
+ vshPrint(ctl, _("Domain description not changed.\n"));
+ ret = true;
+ goto cleanup;
+ }
- VIR_FREE(desc);
- desc = desc_edited;
- desc_edited = NULL;
+ VIR_FREE(desc);
+ desc = desc_edited;
+ desc_edited = NULL;
}
if (virDomainSetMetadata(dom, type, desc, NULL, NULL, flags) < 0) {
--
1.7.1
[libvirt] [PATCHv3] Add forwarder attribute to <dns /> element.
by Diego Woitasen
Useful for setting custom forwarders instead of using the contents of
/etc/resolv.conf. It helps me set up dnsmasq as a local nameserver to
resolve VM domain names from domain 0 when the domain option is used.
Signed-off-by: Diego Woitasen <diego.woitasen(a)vhgroup.net>
---
docs/formatnetwork.html.in | 8 ++++
docs/schemas/network.rng | 5 +++
src/conf/network_conf.c | 43 ++++++++++++++++++++--
src/conf/network_conf.h | 2 +
src/network/bridge_driver.c | 8 ++++
.../nat-network-dns-forwarders.conf | 16 ++++++++
.../nat-network-dns-forwarders.xml | 12 ++++++
tests/networkxml2conftest.c | 1 +
8 files changed, 92 insertions(+), 3 deletions(-)
create mode 100644 tests/networkxml2confdata/nat-network-dns-forwarders.conf
create mode 100644 tests/networkxml2confdata/nat-network-dns-forwarders.xml
diff --git a/docs/formatnetwork.html.in b/docs/formatnetwork.html.in
index e1482db..9fdc3cf 100644
--- a/docs/formatnetwork.html.in
+++ b/docs/formatnetwork.html.in
@@ -631,6 +631,8 @@
<domain name="example.com"/>
<dns>
<txt name="example" value="example value" />
+ <forwarder addr="8.8.8.8"/>
+ <forwarder addr="8.8.4.4"/>
<srv service='name' protocol='tcp' domain='test-domain-name' target='.' port='1024' priority='10' weight='10'/>
<host ip='192.168.122.2'>
<hostname>myhost</hostname>
@@ -685,6 +687,12 @@
Currently supported sub-elements of <code><dns></code> are:
<dl>
+ <dt><code>forwarder</code></dt>
+ <dd>A <code>dns</code> element can have 0 or more <code>forwarder</code> elements.
+ Each forwarder element defines an IP address to be used as forwarder
+ in DNS server configuration. The addr attribute is required and defines the
+ IP address of every forwarder. <span class="since">Since N/A</span>
+ </dd>
<dt><code>txt</code></dt>
<dd>A <code>dns</code> element can have 0 or more <code>txt</code> elements.
Each txt element defines a DNS TXT record and has two attributes, both
diff --git a/docs/schemas/network.rng b/docs/schemas/network.rng
index ab183f1..95db5c2 100644
--- a/docs/schemas/network.rng
+++ b/docs/schemas/network.rng
@@ -217,6 +217,11 @@
</attribute>
</optional>
<zeroOrMore>
+ <element name="forwarder">
+ <attribute name="addr"><ref name="ipAddr"/></attribute>
+ </element>
+ </zeroOrMore>
+ <zeroOrMore>
<element name="txt">
<attribute name="name"><ref name="dnsName"/></attribute>
<attribute name="value"><text/></attribute>
diff --git a/src/conf/network_conf.c b/src/conf/network_conf.c
index d54f2aa..c9b90e7 100644
--- a/src/conf/network_conf.c
+++ b/src/conf/network_conf.c
@@ -175,6 +175,11 @@ virNetworkDNSSrvDefClear(virNetworkDNSSrvDefPtr def)
static void
virNetworkDNSDefClear(virNetworkDNSDefPtr def)
{
+ if (def->forwarders) {
+ while (def->nfwds)
+ VIR_FREE(def->forwarders[--def->nfwds]);
+ VIR_FREE(def->forwarders);
+ }
if (def->txts) {
while (def->ntxts)
virNetworkDNSTxtDefClear(&def->txts[--def->ntxts]);
@@ -1037,8 +1042,9 @@ virNetworkDNSDefParseXML(const char *networkName,
xmlNodePtr *hostNodes = NULL;
xmlNodePtr *srvNodes = NULL;
xmlNodePtr *txtNodes = NULL;
+ xmlNodePtr *fwdNodes = NULL;
char *forwardPlainNames = NULL;
- int nhosts, nsrvs, ntxts;
+ int nfwds, nhosts, nsrvs, ntxts;
size_t i;
int ret = -1;
xmlNodePtr save = ctxt->node;
@@ -1058,6 +1064,30 @@ virNetworkDNSDefParseXML(const char *networkName,
}
}
+ nfwds = virXPathNodeSet("./forwarder", ctxt, &fwdNodes);
+ if (nfwds < 0) {
+ virReportError(VIR_ERR_XML_ERROR,
+ _("invalid <forwarder> element found in <dns> of network %s"),
+ networkName);
+ goto cleanup;
+ }
+ if (nfwds > 0) {
+ if (VIR_ALLOC_N(def->forwarders, nfwds) < 0)
+ goto cleanup;
+
+ for (i = 0; i < nfwds; i++) {
+ def->forwarders[i] = virXMLPropString(fwdNodes[i], "addr");
+ if (virSocketAddrParse(NULL, def->forwarders[i], AF_UNSPEC) < 0) {
+ virReportError(VIR_ERR_XML_ERROR,
+ _("Invalid forwarder IP address '%s' "
+ "in network '%s'"),
+ def->forwarders[i], networkName);
+ goto cleanup;
+ }
+ def->nfwds++;
+ }
+ }
+
nhosts = virXPathNodeSet("./host", ctxt, &hostNodes);
if (nhosts < 0) {
virReportError(VIR_ERR_XML_ERROR,
@@ -1121,6 +1151,7 @@ virNetworkDNSDefParseXML(const char *networkName,
ret = 0;
cleanup:
VIR_FREE(forwardPlainNames);
+ VIR_FREE(fwdNodes);
VIR_FREE(hostNodes);
VIR_FREE(srvNodes);
VIR_FREE(txtNodes);
@@ -2267,13 +2298,14 @@ virNetworkDNSDefFormat(virBufferPtr buf,
int result = 0;
size_t i, j;
- if (!(def->forwardPlainNames || def->nhosts || def->nsrvs || def->ntxts))
+ if (!(def->forwardPlainNames || def->forwarders || def->nhosts ||
+ def->nsrvs || def->ntxts))
goto out;
virBufferAddLit(buf, "<dns");
if (def->forwardPlainNames) {
virBufferAddLit(buf, " forwardPlainNames='yes'");
- if (!(def->nhosts || def->nsrvs || def->ntxts)) {
+ if (!(def->forwarders || def->nhosts || def->nsrvs || def->ntxts)) {
virBufferAddLit(buf, "/>\n");
goto out;
}
@@ -2282,6 +2314,11 @@ virNetworkDNSDefFormat(virBufferPtr buf,
virBufferAddLit(buf, ">\n");
virBufferAdjustIndent(buf, 2);
+ for (i = 0; i < def->nfwds; i++) {
+ virBufferAsprintf(buf, "<forwarders addr='%s' />\n",
+ def->forwarders[i]);
+ }
+
for (i = 0; i < def->ntxts; i++) {
virBufferAsprintf(buf, "<txt name='%s' value='%s'/>\n",
def->txts[i].name,
diff --git a/src/conf/network_conf.h b/src/conf/network_conf.h
index c28bfae..b425986 100644
--- a/src/conf/network_conf.h
+++ b/src/conf/network_conf.h
@@ -122,6 +122,8 @@ struct _virNetworkDNSDef {
virNetworkDNSHostDefPtr hosts;
size_t nsrvs;
virNetworkDNSSrvDefPtr srvs;
+ size_t nfwds;
+ char **forwarders;
};
typedef struct _virNetworkIpDef virNetworkIpDef;
diff --git a/src/network/bridge_driver.c b/src/network/bridge_driver.c
index 3a8be90..a2cfb35 100644
--- a/src/network/bridge_driver.c
+++ b/src/network/bridge_driver.c
@@ -708,6 +708,14 @@ networkDnsmasqConfContents(virNetworkObjPtr network,
if (!network->def->dns.forwardPlainNames)
virBufferAddLit(&configbuf, "domain-needed\n");
+ if (network->def->dns.forwarders) {
+ virBufferAddLit(&configbuf, "no-resolv\n");
+ for (i=0; i < network->def->dns.nfwds; i++) {
+ virBufferAsprintf(&configbuf, "server=%s\n",
+ network->def->dns.forwarders[i]);
+ }
+ }
+
if (network->def->domain) {
virBufferAsprintf(&configbuf,
"domain=%s\n"
diff --git a/tests/networkxml2confdata/nat-network-dns-forwarders.conf b/tests/networkxml2confdata/nat-network-dns-forwarders.conf
new file mode 100644
index 0000000..ebca289
--- /dev/null
+++ b/tests/networkxml2confdata/nat-network-dns-forwarders.conf
@@ -0,0 +1,16 @@
+##WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
+##OVERWRITTEN AND LOST. Changes to this configuration should be made using:
+## virsh net-edit default
+## or other application using the libvirt API.
+##
+## dnsmasq conf file created by libvirt
+strict-order
+domain-needed
+no-resolv
+server=8.8.8.8
+server=8.8.4.4
+local=//
+except-interface=lo
+bind-dynamic
+interface=virbr0
+addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
diff --git a/tests/networkxml2confdata/nat-network-dns-forwarders.xml b/tests/networkxml2confdata/nat-network-dns-forwarders.xml
new file mode 100644
index 0000000..eebec97
--- /dev/null
+++ b/tests/networkxml2confdata/nat-network-dns-forwarders.xml
@@ -0,0 +1,12 @@
+<network>
+ <name>default</name>
+ <uuid>81ff0d90-c91e-6742-64da-4a736edb9a9c</uuid>
+ <forward dev='eth0' mode='nat'/>
+ <bridge name='virbr0' stp='on' delay='0' />
+ <dns>
+ <forwarder addr='8.8.8.8' />
+ <forwarder addr='8.8.4.4' />
+ </dns>
+ <ip address='192.168.122.1' netmask='255.255.255.0'>
+ </ip>
+</network>
diff --git a/tests/networkxml2conftest.c b/tests/networkxml2conftest.c
index 5825af3..ad50e88 100644
--- a/tests/networkxml2conftest.c
+++ b/tests/networkxml2conftest.c
@@ -145,6 +145,7 @@ mymain(void)
DO_TEST("nat-network-dns-srv-record", full);
DO_TEST("nat-network-dns-hosts", full);
DO_TEST("nat-network-dns-forward-plain", full);
+ DO_TEST("nat-network-dns-forwarders", full);
DO_TEST("dhcp6-network", dhcpv6);
DO_TEST("dhcp6-nat-network", dhcpv6);
DO_TEST("dhcp6host-routed-network", dhcpv6);
--
1.8.1.2
Re: [libvirt] Mass rebuild report for August 29 2013
by Eric Blake
On 08/29/2013 11:38 AM, Erik van Pienbroek wrote:
>
> This mass rebuild was done using winpthreads instead of the old
> pthreads-w32 implementation. In Fedora itself winpthreads isn't
> used by default yet, but it will be introduced in Fedora 20 once
> all build failures which are caused by it are resolved (if this
> takes too long the introduction of winpthreads in Fedora will
> have to be postponed until Fedora 21 which is scheduled for
> release in Q2 2014). The gcc package is still being built without
> --enable-threads=posix (thus support for C++11 std::thread
> is not enabled yet)
>
>> mingw-libvirt-1.1.1-1
>> Package owner: berrange
>> Time to build: 6 minutes, 39 seconds
>> Build logs: http://build1.vanpienbroek.nl/fedora-mingw-rebuild/20130829/mingw-libvirt...
>
>
> Also caused by winpthreads:
>
> CCLD libvirt.la
> ./.libs/libvirt_driver_remote.a(libvirt_net_rpc_client_la-virnetclient.o): In function `virNetClientIOEventLoop':
> /builddir/build/BUILD/libvirt-1.1.1/build_win32/src/../../src/rpc/virnetclient.c:1517: undefined reference to `pthread_sigmask'
> /builddir/build/BUILD/libvirt-1.1.1/build_win32/src/../../src/rpc/virnetclient.c:1524: undefined reference to `pthread_sigmask'
> /builddir/build/BUILD/libvirt-1.1.1/build_win32/src/../../src/rpc/virnetclient.c:1524: undefined reference to `pthread_sigmask'
> ./.libs/libvirt_driver_remote.a(libvirt_net_rpc_client_la-virnetclient.o): In function `virNetClientSetTLSSession':
> /builddir/build/BUILD/libvirt-1.1.1/build_win32/src/../../src/rpc/virnetclient.c:785: undefined reference to `pthread_sigmask'
> /builddir/build/BUILD/libvirt-1.1.1/build_win32/src/../../src/rpc/virnetclient.c:792: undefined reference to `pthread_sigmask'
> ./.libs/libvirt_driver_remote.a(libvirt_net_rpc_client_la-virnetclient.o):/builddir/build/BUILD/libvirt-1.1.1/build_win32/src/../../src/rpc/virnetclient.c:809: more undefined references to `pthread_sigmask' follow
> collect2: error: ld returned 1 exit status
Hmm. The libvirt build for mingw explicitly wants to avoid pthread_*,
and use native threading instead (at least we wanted to explicitly avoid
the old pthreads-w32, and since we already have native thread support,
we might as well use it instead of dragging in winpthreads). Probably a
case of our configure checks not detecting the right situation once
winpthreads are turned on. I'll see if we can get this fixed up for
libvirt 1.1.2 (due real soon now), or if it will have to wait for 1.1.3
(a month out, but probably still in time to make it into F20). Is there
an easy environment to set up (such as rawhide + a repo) for testing a
mingw cross-build with winpthreads?
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org