Re: [libvirt] [PATCH] storage_scsi: Handle physical HBA when deleting vHBA vport.
by John Ferlan
On 04/15/2016 02:26 PM, Nitesh Konkar wrote:
> On Fri, Apr 15, 2016 at 8:08 PM, John Ferlan <jferlan(a)redhat.com> wrote:
>
Please do not remove libvir-list from a response. I've replaced it.
Someone may have a different idea.
>>
>>
>> On 04/15/2016 10:11 AM, Nitesh Konkar wrote:
>>> Thanks John for the reply.
>>>
>>> On Fri, Apr 15, 2016 at 5:08 PM, John Ferlan <jferlan(a)redhat.com> wrote:
>>>
>>>>
>>>>
>>>> On 04/07/2016 05:09 AM, Nitesh Konkar wrote:
>>>>> A physical HBA will get treated as a vHBA unless we return early
>>>>> after detecting that the vhba_parent is in PCI (physical HBA) format.
>>>>>
>>>>> Signed-off-by: Nitesh Konkar <nitkon12(a)linux.vnet.ibm.com>
>>>>> ---
>>>>> Before Patch:
>>>>> # virsh pool-destroy poolhba_name
>>>>> error: Failed to destroy pool poolhba_name
>>>>> error: internal error: Invalid adapter name 'pci_000x_0x_00_x' for SCSI
>>>> pool
>>>>>
>>>>> # virsh nodedev-dumpxml scsi_host2
>>>>> <device>
>>>>> <name>scsi_host2</name>
>>>>> <path>xxxx</path>
>>>>> <parent>pci_000x_0x_00_x</parent>
>>>>> <capability type='scsi_host'>
>>>>> <host>2</host>
>>>>> ...
>>>>> ...
>>>>> <capability type='vport_ops'>
>>>>> <max_vports>255</max_vports>
>>>>> <vports>0</vports>
>>>>> </capability>
>>>>> </capability>
>>>>> </device>
>>>>>
>>>>> After Patch:
>>>>> # virsh pool-destroy poolhba_name
>>>>> Pool poolhba_name destroyed
>>>>>
>>>>> src/storage/storage_backend_scsi.c | 5 +++++
>>>>> 1 file changed, 5 insertions(+)
>>>>>
>>>>
>>>> Can you provide the pool-dumpxml for poolhba_name? Can you provide the
>>>> nodedev-dumpxml of the 'scsi_host#' that was created for the vHBA pool?
>>>>
>>>>
>>> This patch is to destroy a pool created from a physical HBA. Apologies
>>> if the commit message was misleading.
>>>
>>> # virsh pool-dumpxml poolhba_name
>>> <pool type='scsi'>
>>> <name>poolhba_name</name>
>>> <uuid>60d74134-0c18-4d4f-9305-24d96ce1a1b6</uuid>
>>> <capacity unit='bytes'>268435456000</capacity>
>>> <allocation unit='bytes'>268435456000</allocation>
>>> <available unit='bytes'>0</available>
>>> <source>
>>> <adapter type='fc_host' managed='yes' wwnn='20000120fa8f1271'
>>> wwpn='10000090fa8f1271'/>
>>> </source>
>>> <target>
>>> <path>/dev/disk/by-id</path>
>>> <permissions>
>>> <mode>0700</mode>
>>> <owner>0</owner>
>>> <group>0</group>
>>> </permissions>
>>> </target>
>>> </pool>
>>>
>>
>> OK, maybe I wasn't clear enough... Which 'scsi_host#' is *this* pool
>> associated with? Prior to creating it, do a virsh nodedev-list
>> scsi_host. Then create it. Then generate the list again.
>>
> The pool poolhba_name is associated with scsi_host2.
>
> #virsh list --all
> Id Name State
> ----------------------------------------------------
>
> # virsh nodedev-list scsi_host
> scsi_host0
> scsi_host1
> scsi_host2
> scsi_host3
> scsi_host4
>
> #virsh pool-list --all
> Name State Autostart
> -------------------------------------------
> poolhba_name active yes
>
> # virsh nodedev-dumpxml scsi_host2
> <device>
> <name>scsi_host2</name>
>
> <path>/sys/devices/pci0001:00/0001:00:00.0/0001:01:00.0/0001:02:09.0/0001:09:00.1/host2</path>
> <parent>pci_0001_09_00_1</parent>
> <capability type='scsi_host'>
> <host>2</host>
> <unique_id>1</unique_id>
> <capability type='fc_host'>
> <wwnn>20000120fa8f1271</wwnn>
> <wwpn>10000090fa8f1271</wwpn>
> <fabric_wwn>100050eb1a99d430</fabric_wwn>
> </capability>
> <capability type='vport_ops'>
> <max_vports>255</max_vports>
> <vports>1</vports>
> </capability>
> </capability>
> </device>
>
>> Or of course, since you cannot delete the poolvhba_name, go through the
>> various scsi_host#'s on your host looking for the one with the matching
>> wwnn/wwpn - then do the nodedev-dumpxml of that. For your example you
>> are looking for the scsi_host# with the matching wwnn='20000120fa8f1271'
>> and wwpn='10000090fa8f1271'.
>>
>> That one is supposed to list 'scsi_host2' in the <parent> field as my
>> 'scsi_host19' does below.
>>
> # virsh pool-list --all
> Name State Autostart
> -------------------------------------------
> poolhba_name active yes
>
> # virsh nodedev-dumpxml scsi_host2
> <device>
> <name>scsi_host2</name>
>
> <path>/sys/devices/pci0001:00/0001:00:00.0/0001:01:00.0/0001:02:09.0/0001:09:00.1/host2</path>
> <parent>pci_0001_09_00_1</parent>
> <capability type='scsi_host'>
> <host>2</host>
> <unique_id>1</unique_id>
> <capability type='fc_host'>
> <wwnn>20000120fa8f1271</wwnn>
> <wwpn>10000090fa8f1271</wwpn>
> <fabric_wwn>100050eb1a99d430</fabric_wwn>
> </capability>
> <capability type='vport_ops'>
> <max_vports>255</max_vports>
> <vports>1</vports>
> </capability>
> </capability>
> </device>
>
> Here the parent field has 'pci_0001_09_00_1' and not 'scsi_host2'. This is
> why it errors out.
>
> # virsh pool-destroy poolhba_name
> 2016-04-15 18:21:07.054+0000: 113209: error : virGetSCSIHostNumber:1922 :
> internal error: Invalid adapter name 'pci_0001_09_00_1' for SCSI pool
> error: Failed to destroy pool poolhba_name
> error: internal error: Invalid adapter name 'pci_0001_09_00_1' for SCSI pool
>
> Am I missing something?
>
Perhaps - I think I explained how the vHBA is created before... Look at
the code for createVport() - see the 'parent_hoststr' description.
When you did a pool-{create|define} for the "poolhba_name", you provided
some XML which would search the existing 'scsi_host#' for one that's
capable of supporting a vHBA.
What that create is supposed to do (at least it does it on the systems I
used) is create another 'scsi_host#'. That scsi_host# is then the vHBA -
its *parent* is supposed to be 'scsi_host2' - see my example. On your
host perhaps scsi_host3 or scsi_host4. Do either one of those two have
the wwnn/wwpn that's in your poolhba_name:
<adapter type='fc_host' managed='yes' wwnn='20000120fa8f1271'
wwpn='10000090fa8f1271'/>
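If one of them does, its nodedev-dumpxml should look roughly like the
sketch below (the host number is just an example; path, unique_id and
fabric_wwn omitted) - the important part is the <parent> pointing back
at scsi_host2:
<device>
  <name>scsi_host3</name>
  <parent>scsi_host2</parent>
  <capability type='scsi_host'>
    <host>3</host>
    <capability type='fc_host'>
      <wwnn>20000120fa8f1271</wwnn>
      <wwpn>10000090fa8f1271</wwpn>
    </capability>
  </capability>
</device>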
I'm really not quite sure what's happening on your host/environment.
Perhaps you could follow how createVport generates things and report
back. You can always 'create' another vHBA using a different wwnn/wwpn.
John
[libvirt] [PATCH libvirt v2 0/2] libxl: support vscsi
by Olaf Hering
Add support for upcoming vscsi= in libxl.
Changes between v1 and v2:
- rebase to 'master' (ad584cb)
- Update API to v12
Olaf Hering (2):
libxl: include a XLU_Config in _libxlDriverConfig
libxl: support vscsi
src/libxl/libxl_conf.c | 66 +++++++++++++++++
src/libxl/libxl_conf.h | 8 ++
src/libxl/libxl_domain.c | 2 +-
src/libxl/libxl_driver.c | 187 +++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 262 insertions(+), 1 deletion(-)
[libvirt] [PATCH] libxl config file conversion: correct 'type=netfront' to 'type=vif'
by Chunyan Liu
According to the current xl.cfg docs and the xl code, it uses type=vif
instead of type=netfront.
Currently, after domxml-to-native, libvirt XML model=netfront is converted
to xl type=netfront. This was not a problem before: for a long time the Xen
code only checked for type=ioemu and, if it did not match, set the type to
_VIF. Now that libxl uses parse_nic_config to avoid duplicated code, it
only accepts 'type=vif' and 'type=ioemu' as valid values; anything else is
considered invalid, so type=netfront in an xl config file causes a problem.
#xl create sles12gm-hvm.orig
Parsing config from sles12gm-hvm.orig
Invalid parameter `type'.
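For reference, the libvirt interface definition that takes this path is
something like the following (the bridge name is only an example); with the
current code, domxml-to-native turns its model into type=netfront in the
vif= line, which xl now rejects:
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='netfront'/>
</interface>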
Correct the conversion in libvirt so that it matches the libxl code and
xl.cfg.
Signed-off-by: Chunyan Liu <cyliu(a)suse.com>
---
Since the type=netfront config has been used for a very long time, at least
with xm/xend, where it causes no problem, I'm not sure whether we need to
split into xenParseXLVif vs xenParseXMVif, and xenFormatXLVif vs
xenFormatXMVif, for this change?
src/xenconfig/xen_common.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/xenconfig/xen_common.c b/src/xenconfig/xen_common.c
index 4dcd484..ae81635 100644
--- a/src/xenconfig/xen_common.c
+++ b/src/xenconfig/xen_common.c
@@ -944,7 +944,8 @@ xenParseVif(virConfPtr conf, virDomainDefPtr def)
VIR_STRDUP(net->model, model) < 0)
goto cleanup;
- if (!model[0] && type[0] && STREQ(type, "netfront") &&
+ if (!model[0] && type[0] &&
+ (STREQ(type, "netfront") || STREQ(type, "vif")) &&
VIR_STRDUP(net->model, "netfront") < 0)
goto cleanup;
@@ -1201,7 +1202,7 @@ xenFormatNet(virConnectPtr conn,
virBufferAsprintf(&buf, ",model=%s", net->model);
} else {
if (net->model != NULL && STREQ(net->model, "netfront")) {
- virBufferAddLit(&buf, ",type=netfront");
+ virBufferAddLit(&buf, ",type=vif");
} else {
if (net->model != NULL)
virBufferAsprintf(&buf, ",model=%s", net->model);
--
2.1.4
[libvirt] [PATCH V2] libxl: use LIBXL_API_VERSION 0x040200
by Jim Fehlig
To ensure the libvirt libxl driver will build with future versions
of Xen where the libxl API may change in incompatible ways,
explicitly use LIBXL_API_VERSION 0x040200. The libxl driver
does use new libxl APIs that have been added since Xen 4.2, but
currently it does not make use of any changes made to existing
APIs such as libxl_domain_create_restore or libxl_set_vcpuaffinity.
The version can be bumped if/when the libxl driver consumes the
changed APIs.
Further details can be found in the following discussion thread
https://www.redhat.com/archives/libvir-list/2016-April/msg00178.html
Signed-off-by: Jim Fehlig <jfehlig(a)suse.com>
---
configure.ac | 2 ++
src/libxl/libxl_conf.h | 12 ------------
src/libxl/libxl_domain.c | 15 ---------------
3 files changed, 2 insertions(+), 27 deletions(-)
diff --git a/configure.ac b/configure.ac
index b1500f6..446f2a2 100644
--- a/configure.ac
+++ b/configure.ac
@@ -894,6 +894,7 @@ if test "$with_libxl" != "no" ; then
PKG_CHECK_MODULES([LIBXL], [xenlight], [
LIBXL_FIRMWARE_DIR=`$PKG_CONFIG --variable xenfirmwaredir xenlight`
LIBXL_EXECBIN_DIR=`$PKG_CONFIG --variable libexec_bin xenlight`
+ LIBXL_CFLAGS="$LIBXL_CFLAGS -DLIBXL_API_VERSION=0x040200"
with_libxl=yes
], [LIBXL_FOUND=no])
if test "$LIBXL_FOUND" = "no"; then
@@ -906,6 +907,7 @@ if test "$with_libxl" != "no" ; then
LIBS="$LIBS $LIBXL_LIBS"
AC_CHECK_LIB([xenlight], [libxl_ctx_alloc], [
with_libxl=yes
+ LIBXL_CFLAGS="$LIBXL_CFLAGS -DLIBXL_API_VERSION=0x040200"
LIBXL_LIBS="$LIBXL_LIBS -lxenlight"
],[
if test "$with_libxl" = "yes"; then
diff --git a/src/libxl/libxl_conf.h b/src/libxl/libxl_conf.h
index 3c0eafb..24e2911 100644
--- a/src/libxl/libxl_conf.h
+++ b/src/libxl/libxl_conf.h
@@ -69,18 +69,6 @@
# endif
-/* libxl interface for setting VCPU affinity changed in 4.5. In fact, a new
- * parameter has been added, representative of 'VCPU soft affinity'. If one
- * does not care about it (and that's libvirt case), passing NULL is the
- * right thing to do. To mark that change, LIBXL_HAVE_VCPUINFO_SOFT_AFFINITY
- * is defined. */
-# ifdef LIBXL_HAVE_VCPUINFO_SOFT_AFFINITY
-# define libxl_set_vcpuaffinity(ctx, domid, vcpuid, map) \
- libxl_set_vcpuaffinity((ctx), (domid), (vcpuid), (map), NULL)
-# define libxl_set_vcpuaffinity_all(ctx, domid, max_vcpus, map) \
- libxl_set_vcpuaffinity_all((ctx), (domid), (max_vcpus), (map), NULL)
-# endif
-
typedef struct _libxlDriverPrivate libxlDriverPrivate;
typedef libxlDriverPrivate *libxlDriverPrivatePtr;
diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 86fb713..14a900c 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -1026,9 +1026,6 @@ libxlDomainStart(libxlDriverPrivatePtr driver, virDomainObjPtr vm,
int managed_save_fd = -1;
libxlDomainObjPrivatePtr priv = vm->privateData;
libxlDriverConfigPtr cfg;
-#ifdef LIBXL_HAVE_DOMAIN_CREATE_RESTORE_PARAMS
- libxl_domain_restore_params params;
-#endif
virHostdevManagerPtr hostdev_mgr = driver->hostdevMgr;
libxl_asyncprogress_how aop_console_how;
@@ -1118,20 +1115,8 @@ libxlDomainStart(libxlDriverPrivatePtr driver, virDomainObjPtr vm,
ret = libxl_domain_create_new(cfg->ctx, &d_config,
&domid, NULL, &aop_console_how);
} else {
-#if defined(LIBXL_HAVE_DOMAIN_CREATE_RESTORE_SEND_BACK_FD)
- params.checkpointed_stream = 0;
- ret = libxl_domain_create_restore(cfg->ctx, &d_config, &domid,
- restore_fd, -1, &params, NULL,
- &aop_console_how);
-#elif defined(LIBXL_HAVE_DOMAIN_CREATE_RESTORE_PARAMS)
- params.checkpointed_stream = 0;
- ret = libxl_domain_create_restore(cfg->ctx, &d_config, &domid,
- restore_fd, &params, NULL,
- &aop_console_how);
-#else
ret = libxl_domain_create_restore(cfg->ctx, &d_config, &domid,
restore_fd, NULL, &aop_console_how);
-#endif
}
virObjectLock(vm);
--
2.6.1
[libvirt] Regarding PCI/PCIe Multifunction hotplug support in libvirt
by Shivaprasad bhat
Hi All,
I am exploring how to go about supporting multi-function hot-plug/unplug
support from libvirt now that Qemu has enabled it.
The QEMU semantics are that QEMU queues the notification to the guest until
function 0 is hot-plugged; on function 0 hot-plug, all the previously added
devices in the slot are announced to the guest.
For hot-unplug: on x86, unplugging any device in the slot unplugs all the
functions of the slot. On PPC, all non-zero functions are unplugged first,
then function zero is unplugged, which triggers the unplug of all the
functions in the slot.
On the libvirt side, I had a brief chat with Laine Stump on IRC and we
"think" the semantics below would be appropriate.
1) Change virDomainAttachDeviceFlags() to recognize multiple devices in the
XML, so that the application makes one call to attach all the functions for
the slot at one time.
2) The libvirt QEMU driver translates this into multiple attach-device
commands to QEMU, with the final operation being for function 0.
3) The XML now needs to accept multiple devices, so a single parent element
is needed. I am wondering whether all the devices should be enclosed in a
<device></device> parent element. Note that this applies only when all the
devices are meant to go to the same slot; we should probably disallow a
user attempting to hot-plug to different slots with this.
4) For hot-unplug, the application sends all the devices the same way,
enclosed in <device></device>, and libvirt goes ahead with the unplug only
if all the devices in the slot are specified in the XML.
I want to know whether you foresee any problem with the <device></device>
semantics, or whether you have a different approach/suggestion. Any
comments are greatly appreciated.
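For illustration only, a rough sketch of what such a request could look
like with the proposed wrapper (the wrapper element name is still open, and
all host/guest addresses below are made up):
<device>
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x06' slot='0x10' function='0x0'/>
    </source>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
  </hostdev>
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x06' slot='0x10' function='0x1'/>
    </source>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
  </hostdev>
</device>
libvirt would then issue the individual attach operations, ending with
function 0, as described in 2).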
Thanks and Regards,
Shivaprasad
[libvirt] [PATCH v2 0/4] Fix parsing our own XMLs
by Martin Kletzander
v2:
- Just a rebase
- I did *not* use virPCIDeviceAddress wording instead as discussed in
the v1 thread. That's because we have a lot of functions working
with virDevicePCIAddress named exactly after that and renaming
those would be ugly IMHO.
v1:
- https://www.redhat.com/archives/libvir-list/2016-April/msg00081.html
Martin Kletzander (4):
Change virPCIDeviceAddress to virDevicePCIAddress
Move capability formatting together
schemas: Update nodedev schema to match reality
conf: Parse more of our nodedev XML
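For context, the SR-IOV part of the nodedev XML these tests exercise looks
roughly like this (addresses are illustrative and other elements such as
product/vendor are omitted):
<device>
  <name>pci_0000_02_10_7</name>
  <capability type='pci'>
    <domain>0</domain>
    <bus>2</bus>
    <slot>16</slot>
    <function>7</function>
    <capability type='phys_function'>
      <address domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
    </capability>
  </capability>
</device>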
docs/schemas/nodedev.rng | 29 +++---
src/conf/device_conf.h | 11 +--
src/conf/node_device_conf.c | 109 +++++++++++++++++++--
src/conf/node_device_conf.h | 6 +-
src/libvirt_private.syms | 10 +-
src/network/bridge_driver.c | 4 +-
src/node_device/node_device_linux_sysfs.c | 6 +-
src/util/virhostdev.c | 12 +--
src/util/virnetdev.c | 4 +-
src/util/virnetdev.h | 2 +-
src/util/virpci.c | 80 +++++++--------
src/util/virpci.h | 29 +++---
.../pci_0000_00_1c_0_header_type.xml | 2 +-
tests/nodedevschemadata/pci_0000_02_10_7_sriov.xml | 23 +++++
.../pci_0000_02_10_7_sriov_pf_vfs_all.xml | 29 ++++++
...i_0000_02_10_7_sriov_pf_vfs_all_header_type.xml | 30 ++++++
.../pci_0000_02_10_7_sriov_vfs.xml | 26 +++++
.../pci_0000_02_10_7_sriov_zero_vfs_max_count.xml | 21 ++++
tests/nodedevxml2xmltest.c | 7 ++
19 files changed, 335 insertions(+), 105 deletions(-)
create mode 100644 tests/nodedevschemadata/pci_0000_02_10_7_sriov.xml
create mode 100644 tests/nodedevschemadata/pci_0000_02_10_7_sriov_pf_vfs_all.xml
create mode 100644 tests/nodedevschemadata/pci_0000_02_10_7_sriov_pf_vfs_all_header_type.xml
create mode 100644 tests/nodedevschemadata/pci_0000_02_10_7_sriov_vfs.xml
create mode 100644 tests/nodedevschemadata/pci_0000_02_10_7_sriov_zero_vfs_max_count.xml
--
2.8.1
[libvirt] [PATCH] ploop: Fix build with gluster
by Jiri Denemark
Recent patches adding support for ploop volumes did not properly update the
gluster backend.
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
Pushed as a build-breaker.
src/storage/storage_backend_gluster.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/storage/storage_backend_gluster.c b/src/storage/storage_backend_gluster.c
index d2e79bc..0085052 100644
--- a/src/storage/storage_backend_gluster.c
+++ b/src/storage/storage_backend_gluster.c
@@ -435,6 +435,7 @@ virStorageBackendGlusterVolDelete(virConnectPtr conn ATTRIBUTE_UNUSED,
case VIR_STORAGE_VOL_FILE:
case VIR_STORAGE_VOL_DIR:
case VIR_STORAGE_VOL_BLOCK:
+ case VIR_STORAGE_VOL_PLOOP:
case VIR_STORAGE_VOL_LAST:
virReportError(VIR_ERR_NO_SUPPORT,
_("removing of '%s' volumes is not supported "
--
2.8.1
[libvirt] [PATCH v6 0/7] storage:dir: ploop volumes support
by Olga Krishtal
This series of patches introduces support for ploop volumes in storage
pools (dir, fs, etc.).
A ploop volume is a disk loopback block device consisting of root.hds
(the image file) and DiskDescriptor.xml:
https://openvz.org/Ploop/format. Because of this it cannot be treated
as a file or any other volume type; moreover, the ploop tools must be
installed in order to manipulate such a volume successfully.
All callbacks except the wipeVol are supported.
The first patch introduces the ploop volume type. This is a directory
with two files inside; the name of the directory is the name of the volume.
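For illustration (the volume and pool path names below are made up), the
on-disk layout of such a volume is:
  /path/to/pool/myvolume/root.hds
  /path/to/pool/myvolume/DiskDescriptor.xml
and, assuming the new format type is exposed as 'ploop', the volume XML
would carry roughly:
<volume>
  <name>myvolume</name>
  <target>
    <format type='ploop'/>
  </target>
</volume>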
Patch 2 deals with creating an empty volume and cloning an existing one.
Cloning is done via a simple cp operation. If either operation fails, the
directory is deleted.
Patch 3 recursively deletes the ploop directory.
Patch 4 uses the ploop tool to resize the volume.
Patch 6 adapts all refresh functions to work with ploop. To gather
information, the directory is checked; the volume is treated as a ploop
volume only if it contains the ploop files. Every time the pool is
refreshed, DiskDescriptor.xml is restored. This is necessary because the
contents of the volume may have changed.
Upload and download (patch 7) can't be done if the volume contains snapshots.
v6:
- DiskDescriptor.xml is no longer restored during the pool refresh
operation; this is moved to the storage driver.
- fixed an issue with regenerating the XML for a ploop volume with snapshots.
v5:
- added ploop volume type
- there is no change in the volume opening functions; reopening now takes
place if the volume is a ploop one.
- restore DiskDescriptor.xml on every pool refresh
- all information except the format is taken from the header
- forbid upload and download ops for a volume with snapshots
- there is no separate function for deleting the volume
- fixed indentation and leaks
v4:
- fixed indentation issues.
- in case of .uploadVol, DiskDescriptor.xml is restored.
- added a check of ploop's accessibility
v3:
- no VIR_STORAGE_VOL_PLOOP type any more
- adapted all patches according to previous change
- fixed comments
v2:
- fixed memory leak
- changed the return value of all helper functions to 0/-1.
Now the check for success looks something like: vir****Ploop() < 0
- fixed some indentation issues.
[libvirt] [PATCH] storage_scsi: Handle physical HBA when deleting vHBA vport.
by Nitesh Konkar
A physical HBA will get treated as a vHBA unless we return early
after detecting that the vhba_parent is in PCI (physical HBA) format.
Signed-off-by: Nitesh Konkar <nitkon12(a)linux.vnet.ibm.com>
---
Before Patch:
# virsh pool-destroy poolhba_name
error: Failed to destroy pool poolhba_name
error: internal error: Invalid adapter name 'pci_000x_0x_00_x' for SCSI pool
# virsh nodedev-dumpxml scsi_host2
<device>
<name>scsi_host2</name>
<path>xxxx</path>
<parent>pci_000x_0x_00_x</parent>
<capability type='scsi_host'>
<host>2</host>
...
...
<capability type='vport_ops'>
<max_vports>255</max_vports>
<vports>0</vports>
</capability>
</capability>
</device>
After Patch:
# virsh pool-destroy poolhba_name
Pool poolhba_name destroyed
src/storage/storage_backend_scsi.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/src/storage/storage_backend_scsi.c b/src/storage/storage_backend_scsi.c
index e6c8bb5..dd0343f 100644
--- a/src/storage/storage_backend_scsi.c
+++ b/src/storage/storage_backend_scsi.c
@@ -842,6 +842,11 @@ deleteVport(virConnectPtr conn,
if (!(vhba_parent = virStoragePoolGetVhbaSCSIHostParent(conn, name)))
goto cleanup;
+ if (STRPREFIX(vhba_parent, "pci")) {
+ ret = 0;
+ goto cleanup;
+ }
+
if (virGetSCSIHostNumber(vhba_parent, &parent_host) < 0)
goto cleanup;
}
--
1.8.3.1