[libvirt-users] changing clock on host temporarily blocks guest OS operation
by Andrei Perietanu
I have the following running on my system:
- one Linux host running libvirt 1.2.20
- two Linux guest VMs running
When I change the clock on the host, the VMs stop responding: the VNC
connection stops working, the console connection to the VMs does not work,
ping stops responding, and libvirt commands temporarily stop working (after
issuing a command, the system just hangs).
I get control back after 5 min or so, but I have to restart the VMs for
everything to work again.
Any ideas what's causing this?
Is this a known bug? Is there a fix for it?
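In case it helps with diagnosis, here is a minimal sketch using the libvirt Python
bindings (the qemu:///system URI and the idea of simply checking state after the hang
clears are my assumptions, not part of the original report) that records which state
libvirt believes each guest is in once the daemon starts answering again:

# Diagnostic sketch: once libvirtd responds again, print the state libvirt
# reports for every guest. The connection URI is an assumption.
import libvirt

STATE_NAMES = {
    libvirt.VIR_DOMAIN_RUNNING: "running",
    libvirt.VIR_DOMAIN_PAUSED: "paused",
    libvirt.VIR_DOMAIN_SHUTOFF: "shut off",
    libvirt.VIR_DOMAIN_CRASHED: "crashed",
}

conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    state, reason = dom.state()
    print("%s: %s (reason %d)" % (dom.name(), STATE_NAMES.get(state, state), reason))
conn.close()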
Thanks in advance,
Andrei
--
The information transmitted is intended only for the person or entity to
which it is addressed and may contain confidential and/or privileged
material. Any review, retransmission, dissemination or other use of or
taking of any action in reliance upon this information by persons or
entities other than the intended recipient is prohibited. If you receive
this in error please contact the sender and delete the material from any
computer immediately. It is the policy of Klas Limited to disavow the
sending of offensive material and should you consider that the material
contained in the message is offensive you should contact the sender
immediately and also your I.T. Manager.
Klas Telecom Inc., a Virginia Corporation with offices at 1101 30th St. NW,
Washington, DC 20007.
Klas Limited (Company Number 163303) trading as Klas Telecom, an Irish
Limited Liability Company, with its registered office at Fourth Floor, One
Kilmainham Square, Inchicore Road, Kilmainham, Dublin 8, Ireland.
[libvirt-users] virtual machine won't autostart when using LVM with cache
by David Hlacik
Hello guys,
Recently I switched to using the LVM cache feature on the logical volume
/dev/hdd/windata1 to improve its performance, using a 32 GB partition from an SSD
disk.
However, when my computer starts, my virtual machine does not autostart:
Mar 21 10:48:57 brutus-coreos libvirtd[956]: Cannot access storage file
'/dev/hdd/windata1' (as uid:107, gid:107): No such file or directory
Mar 21 10:48:57 brutus-coreos libvirtd[956]: Failed to autostart VM
'winos1': Cannot access storage file '/dev/hdd/windata1' (as uid:107,
gid:107): No such file or directory
Mar 21 10:48:57 brutus-coreos libvirtd[956]: Cannot access storage file
'/dev/hdd/windata1' (as uid:107, gid:107): No such file or directory
It seems that when using an LVM cache, one has to wait until it initializes?
When I manually start the virtual machine afterwards, everything works OK:
[root@brutus-coreos ~]# virsh start winos1
Domain winos1 started
I have tried removing the LVM cache from /dev/hdd/windata1, and afterwards
autostart works! So it must be LVM cache related.
Can you please help me solve this?
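One possible workaround until the root cause is found (a minimal sketch, not tested
against this setup; the device path and domain name are taken from the log messages
above, while the URI and timeout are assumptions): run a small script at boot that
waits for the cached LV's device node to appear and then starts the guest, instead of
relying on libvirt autostart.

# Sketch: wait for the cached LV to show up, then start the guest manually.
# Device path and domain name come from the log messages above; the timeout
# is an arbitrary assumption.
import os
import time
import libvirt

DEVICE = "/dev/hdd/windata1"
DOMAIN = "winos1"
TIMEOUT = 120  # seconds

deadline = time.time() + TIMEOUT
while not os.path.exists(DEVICE) and time.time() < deadline:
    time.sleep(1)

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(DOMAIN)
if not dom.isActive():
    dom.create()  # same effect as "virsh start winos1"
conn.close()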
Thanks in advance
David Hlacik
+420-777-307-745 | david(a)hlacik.cz
[libvirt-users] Incorrect memory usage returned from virsh
by Connor Osborn
When I run `virsh dominfo <domain>` I get the following:
Id: 455
Name: instance-000047e0
UUID: 50722aa0-d5c6-4a68-b4ef-9b27beba48aa
OS Type: hvm
State: running
CPU(s): 4
CPU time: 123160.4s
Max memory: 33554432 KiB
Used memory: 33554432 KiB
Persistent: yes
Autostart: disable
Managed save: no
Security model: apparmor
Security DOI: 0
Security label: libvirt-50722aa0-d5c6-4a68-b4ef-9b27beba48aa (enforcing)
The domain is not actually at 100% memory usage, yet virsh reports Used memory equal to Max memory. How can I diagnose this further?
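For what it's worth, `virsh dominfo` reports the memory currently assigned to the
guest (the balloon size), not what the guest is actually using. A minimal sketch for
digging further with the libvirt Python bindings (the URI is an assumption; which
statistics come back depends on the hypervisor and the guest's balloon driver):

# Sketch: compare the balloon size with the guest-reported memory statistics.
# Keys such as rss, actual, available and unused only appear when the
# hypervisor/guest balloon driver provides them.
import libvirt

conn = libvirt.open("qemu:///system")   # URI is an assumption
dom = conn.lookupByName("instance-000047e0")

dom.setMemoryStatsPeriod(5)             # ask the balloon driver to refresh stats
stats = dom.memoryStats()
for key, value in sorted(stats.items()):
    print("%-12s %d KiB" % (key, value))
conn.close()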
[libvirt-users] CfP 11th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '16)
by VHPC 16
====================================================================
CALL FOR PAPERS
11th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '16)
held in conjunction with the International Supercomputing Conference - High Performance,
June 19-23, 2016, Frankfurt, Germany.
====================================================================
Date: June 23, 2016
Workshop URL: http://vhpc.org
Paper Submission Deadline: April 25, 2016
Call for Papers
Virtualization technologies constitute a key enabling factor for flexible resource
management in modern data centers, and particularly in cloud environments. Cloud
providers need to manage complex infrastructures in a seamless fashion to support the
highly dynamic and heterogeneous workloads and hosted applications customers deploy.
Similarly, HPC environments have been increasingly adopting techniques that enable
flexible management of vast computing and networking resources, close to marginal
provisioning cost, which is unprecedented in the history of scientific and commercial
computing.
Various virtualization technologies contribute to the overall picture in different
ways: machine virtualization, with its capability to enable consolidation of multiple
underutilized servers with heterogeneous software and operating systems (OSes), and
its capability to live-migrate a fully operating virtual machine (VM) with a very
short downtime, enables novel and dynamic ways to manage physical servers; OS-level
virtualization (i.e., containerization), with its capability to isolate multiple
user-space environments and to allow for their coexistence within the same OS kernel,
promises to provide many of the advantages of machine virtualization with high levels
of responsiveness and performance; I/O virtualization allows physical NICs/HBAs to
take traffic from multiple VMs or containers; network virtualization, with its
capability to create logical network overlays that are independent of the underlying
physical topology and IP addressing, provides the fundamental ground on top of which
evolved network services can be realized with an unprecedented level of dynamicity
and flexibility; the increasingly adopted paradigm of Software-Defined Networking
(SDN) promises to extend this flexibility to the control and data planes of network
paths.
Topics of Interest
The VHPC program committee solicits original, high-quality submissions related to
virtualization across the entire software stack with a special focus on the
intersection of HPC and the cloud. Topics include, but are not limited to:
- Virtualization in supercomputing environments, HPC clusters, cloud HPC and grids
- OS-level virtualization including container runtimes (Docker, rkt et al.)
- Lightweight compute node operating systems/VMMs
- Optimizations of virtual machine monitor platforms, hypervisors
- QoS and SLA in hypervisors and network virtualization
- Cloud based network and system management for SDN and NFV
- Management, deployment and monitoring of virtualized environments
- Virtual per job / on-demand clusters and cloud bursting
- Performance measurement, modelling and monitoring of virtualized/cloud workloads
- Programming models for virtualized environments
- Virtualization in data intensive computing and Big Data processing
- Cloud reliability, fault-tolerance, high-availability and security
- Heterogeneous virtualized environments, virtualized accelerators, GPUs and co-processors
- Optimized communication libraries/protocols in the cloud and for HPC in the cloud
- Topology management and optimization for distributed virtualized applications
- Adaptation of emerging HPC technologies (high performance networks, RDMA, etc.)
- I/O and storage virtualization, virtualization aware file systems
- Job scheduling/control/policy in virtualized environments
- Checkpointing and migration of VM-based large compute jobs
- Cloud frameworks and APIs
- Energy-efficient / power-aware virtualization
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC) aims to
bring together researchers and industrial practitioners facing the challenges posed
by virtualization in order to foster discussion, collaboration, mutual exchange of
knowledge and experience, enabling research to ultimately provide novel solutions for
virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20 min paper presentations, each
followed by 10 min discussion sections, plus lightning talks that are limited to 5
minutes. Presentations may be accompanied by interactive demonstrations.
Important Dates
April 25, 2016 - Paper submission deadline
May 30, 2016 - Acceptance notification
June 23, 2016 - Workshop Day
July 25, 2016 - Camera-ready version due
Chair
Michael Alexander (chair), TU Wien, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Balazs Gerofi (co-chair), RIKEN Advanced Institute for Computational Science, Japan
Program committee
Stergios Anastasiadis, University of Ioannina, Greece
Costas Bekas, IBM Research, Switzerland
Jakob Blomer, CERN
Ron Brightwell, Sandia National Laboratories, USA
Roberto Canonico, University of Napoli Federico II, Italy
Julian Chesterfield, OnApp, UK
Stephen Crago, USC ISI, USA
Christoffer Dall, Columbia University, USA
Patrick Dreher, MIT, USA
Robert Futrick, Cycle Computing, USA
Robert Gardner, University of Chicago, USA
William Gardner, University of Guelph, Canada
Wolfgang Gentzsch, UberCloud, USA
Kyle Hale, Northwestern University, USA
Marcus Hardt, Karlsruhe Institute of Technology, Germany
Krishna Kant, Temple University, USA
Romeo Kinzler, IBM, Switzerland
Brian Kocoloski, University of Pittsburgh, USA
Kornilios Kourtis, IBM Research, Switzerland
Nectarios Koziris, National Technical University of Athens, Greece
John Lange, University of Pittsburgh, USA
Nikos Parlavantzas, IRISA, France
Kevin Pedretti, Sandia National Laboratories, USA
Che-Rung Roger Lee, National Tsing Hua University, Taiwan
Giuseppe Lettieri, University of Pisa, Italy
Qing Liu, Oak Ridge National Laboratory, USA
Paul Mundt, Adaptant, Germany
Amer Qouneh, University of Florida, USA
Carlos Reaño, Technical University of Valencia, Spain
Seetharami Seelam, IBM Research, USA
Josh Simons, VMware, USA
Borja Sotomayor, University of Chicago, USA
Dieter Suess, TU Wien, Austria
Craig Stewart, Indiana University, USA
Ananta Tiwari, San Diego Supercomputer Center, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Amit Vasudevan, Carnegie Mellon University, USA
Yasuhiro Watashiba, Osaka University, Japan
Nicholas Wright, Lawrence Berkeley National Laboratory, USA
Chao-Tung Yang, Tunghai University, Taiwan
Gianluigi Zanetti, CRS4, Italy
Paper Submission-Publication
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include abstract, key words, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
The format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
Format Guidelines:
ftp://ftp.springer.de/pub/tex/latex/llncs/latex2e/llncs2e.zip
Abstract, Paper Submission Link:
https://edas.info/newPaper.php?c=21801
Lightning Talks
Lightning Talks are a non-paper track, synoptical in nature, and are strictly limited
to 5 minutes. They can be used to gain early feedback on ongoing research, for
demonstrations, to present research results, early research ideas, perspectives and
positions of interest to the community. Submit abstracts via the main submission link.
General Information
The workshop is one day in length and will be held in conjunction with the
International Supercomputing Conference - High Performance (ISC) 2016, June 19-23,
Frankfurt, Germany.
[libvirt-users] save - restore problem
by Fırat KÜÇÜK
I was trying to live-migrate a VM and have not succeeded so far. Instead of live
migration, I just tried a simple save and restore on the same host:
virsh save my-domain my-domain.saved
virsh restore my-domain.saved
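For reference, the same cycle can also be driven from the libvirt Python bindings; a
minimal sketch (the qemu:///system URI is an assumption, the domain name and file name
are the ones used in the commands above):

# Sketch of the same save/restore cycle via the Python bindings.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("my-domain")
dom.save("my-domain.saved")      # equivalent of "virsh save"
conn.restore("my-domain.saved")  # equivalent of "virsh restore"
conn.close()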
Everything seems fine after restoring, but after about 5 minutes the VM halted: no
network connectivity, no response to VNC keystrokes, totally halted.
Any suggestions?
[libvirt-users] Fwd: [Issue]: Regarding client socket getting closed from the server once the lxc container is started
by rammohan madhusudan
Hi Folks,
Using the libvirt Python bindings we are creating an LXC container. Here is the
problem that we sometimes see (say 20% of the time) when we create a new container:
1. The container gets created and also starts. However, we are not able to enter the
namespace of the container; it throws an error that initPid is not available. Using
the netstat command, we see that the socket connection is closed.
2. To get around this problem we have to stop and start the container again. We then
see that the socket connection under /var/run/libvirt/* is established and we are
able to enter the namespace.
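(For context, a minimal sketch of the stop/start workaround from point 2, using the
Python bindings; the lxc:/// URI and the container name are placeholders, not our
actual values:)

# Sketch of the workaround from point 2: stop the container and start it again.
import libvirt

conn = libvirt.open("lxc:///")
dom = conn.lookupByName("mycontainer")   # placeholder name
if dom.isActive():
    dom.destroy()   # stop the container
dom.create()        # start it again; afterwards the monitor socket is usable
conn.close()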
Enabled the libvirtd debug logs to debug this issue.
For the *success* case we see that a new client connection gets created and is able
to handle async incoming events:
2016-03-12 08:18:55.748+0000: 1247: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed54005460 classname=virLXCMonitor
2016-03-12 08:18:55.748+0000: 1247: debug : virNetSocketNew:159 : localAddr=0x7fed7cd1d170 remoteAddr=0x7fed7cd1d200 fd=28 errfd=-1 pid=0
2016-03-12 08:18:55.749+0000: 1247: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed54009040 classname=virNetSocket
2016-03-12 08:18:55.749+0000: 1247: info : virNetSocketNew:209 : RPC_SOCKET_NEW: sock=0x7fed54009040 fd=28 errfd=-1 pid=0 localAddr=127.0.0.1;0, remoteAddr=127.0.0.1;0
2016-03-12 08:18:55.749+0000: 1247: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed54009d10 classname=virNetClient
2016-03-12 08:18:55.749+0000: 1247: info : virNetClientNew:327 : RPC_CLIENT_NEW: client=0x7fed54009d10 sock=0x7fed54009040
2016-03-12 08:18:55.749+0000: 1247: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed54009d10
2016-03-12 08:18:55.749+0000: 1247: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed54009040
2016-03-12 08:18:55.750+0000: 1247: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed540009a0 classname=virNetClientProgram
2016-03-12 08:18:55.750+0000: 1247: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed540009a0
2016-03-12 08:18:55.750+0000: 1247: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed54005460
2016-03-12 08:18:55.750+0000: 1247: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7fed5c168eb0
2016-03-12 08:18:55.750+0000: 1247: debug : virLXCProcessCleanInterfaces:475 : Cleared net names: eth0
2016-03-12 08:18:55.750+0000: 1247: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7fed5c168eb0
2016-03-12 08:18:55.750+0000: 1247: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7fed5c169600
2016-03-12 08:18:55.755+0000: 1244: debug : virNetClientIncomingEvent:1808 : client=0x7fed54009d10 wantclose=0
2016-03-12 08:18:55.755+0000: 1244: debug : virNetClientIncomingEvent:1816 : Event fired 0x7fed54009040 1
2016-03-12 08:18:55.755+0000: 1244: debug : virNetMessageDecodeLength:151 : Got length, now need 36 total (32 more)
2016-03-12 08:18:55.756+0000: 1244: info : virNetClientCallDispatch:1116 : RPC_CLIENT_MSG_RX: client=0x7fed54009d10 len=36 prog=305402420 vers=1 proc=2 type=2 status=0 serial=1
2016-03-12 08:18:55.756+0000: 1244: debug : virKeepAliveCheckMessage:377 : ka=(nil), client=0x7fed81fc5ed4, msg=0x7fed54009d78
2016-03-12 08:18:55.756+0000: 1244: debug : virNetClientProgramDispatch:220 : prog=305402420 ver=1 type=2 status=0 serial=1 proc=2
2016-03-12 08:18:55.756+0000: 1244: debug : virLXCMonitorHandleEventInit:109 : Event init 1420
For the *failure* case, we see that the client socket connection is initiated and
gets closed immediately after receiving an incoming event. In this case, I don't see
an object for virNetClientProgram being created. The incoming event arrives and,
since the dispatcher is unable to find client->prog, it bails out and closes the
connection.
Snapshot of the code:
static int virNetClientCallDispatchMessage(virNetClientPtr client)
{
    size_t i;
    virNetClientProgramPtr prog = NULL;

    for (i = 0; i < client->nprograms; i++) {
        if (virNetClientProgramMatches(client->programs[i],
                                       &client->msg)) {
            prog = client->programs[i];
            break;
        }
    }
    if (!prog) {
        VIR_DEBUG("No program found for event with prog=%d vers=%d",
                  client->msg.header.prog, client->msg.header.vers);
        return -1;
    }
2016-03-12 08:19:15.935+0000: 1246: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed5c168eb0
2016-03-12 08:19:15.935+0000: 1246: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed82bd7bc0
2016-03-12 08:19:15.935+0000: 1246: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed82bd8120 classname=virLXCMonitor
2016-03-12 08:19:15.935+0000: 1246: debug : virNetSocketNew:159 : localAddr=0x7fed7d51e170 remoteAddr=0x7fed7d51e200 fd=31 errfd=-1 pid=0
2016-03-12 08:19:15.936+0000: 1246: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed82bd8660 classname=virNetSocket
2016-03-12 08:19:15.936+0000: 1246: info : virNetSocketNew:209 : RPC_SOCKET_NEW: sock=0x7fed82bd8660 fd=31 errfd=-1 pid=0 localAddr=127.0.0.1;0, remoteAddr=127.0.0.1;0
2016-03-12 08:19:15.936+0000: 1246: info : virObjectNew:202 : OBJECT_NEW: obj=0x7fed82bd8ca0 classname=virNetClient
2016-03-12 08:19:15.936+0000: 1246: info : virNetClientNew:327 : RPC_CLIENT_NEW: client=0x7fed82bd8ca0 sock=0x7fed82bd8660
2016-03-12 08:19:15.936+0000: 1246: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed82bd8ca0
2016-03-12 08:19:15.936+0000: 1246: info : virObjectRef:296 : OBJECT_REF: obj=0x7fed82bd8660
2016-03-12 08:19:15.942+0000: 1244: debug : virNetClientIncomingEvent:1808 : client=0x7fed82bd8ca0 wantclose=0
2016-03-12 08:19:15.942+0000: 1244: debug : virNetClientIncomingEvent:1816 : Event fired 0x7fed82bd8660 1
2016-03-12 08:19:15.942+0000: 1244: debug : virNetMessageDecodeLength:151 : Got length, now need 36 total (32 more)
2016-03-12 08:19:15.942+0000: 1244: info : virNetClientCallDispatch:1116 : RPC_CLIENT_MSG_RX: client=0x7fed82bd8ca0 len=36 prog=305402420 vers=1 proc=2 type=2 status=0 serial=1
2016-03-12 08:19:15.942+0000: 1244: debug : virKeepAliveCheckMessage:377 : ka=(nil), client=0x7fed81fc5ed4, msg=0x7fed82bd8d08
2016-03-12 08:19:15.942+0000: 1244: debug : virNetClientCallDispatchMessage:1008 : No program found for event with prog=305402420 vers=1
2016-03-12 08:19:15.942+0000: 1244: debug : virNetMessageClear:57 : msg=0x7fed82bd8d08 nfds=0
2016-03-12 08:19:15.942+0000: 1244: debug : virNetClientMarkClose:632 : client=0x7fed82bd8ca0, reason=0
2016-03-12 08:19:15.942+0000: 1244: debug : virNetClientCloseLocked:648 : client=0x7fed82bd8ca0, sock=0x7fed82bd8660, reason=0
Here is a snapshot of the code:
virLXCMonitorPtr virLXCMonitorNew(virDomainObjPtr vm,
                                  const char *socketdir,
                                  virLXCMonitorCallbacksPtr cb)
{
    virLXCMonitorPtr mon;
    char *sockpath = NULL;

    if (virLXCMonitorInitialize() < 0)
        return NULL;

    if (!(mon = virObjectLockableNew(virLXCMonitorClass)))
        return NULL;

    if (virAsprintf(&sockpath, "%s/%s.sock",
                    socketdir, vm->def->name) < 0)
        goto error;

    if (!(mon->client = virNetClientNewUNIX(sockpath, false, NULL)))
        goto error;

    if (virNetClientRegisterAsyncIO(mon->client) < 0)
        goto error;

    if (!(mon->program = virNetClientProgramNew(VIR_LXC_MONITOR_PROGRAM,
                                                VIR_LXC_MONITOR_PROGRAM_VERSION,
                                                virLXCMonitorEvents,
                                                ARRAY_CARDINALITY(virLXCMonitorEvents),
                                                mon)))
        goto error;

    if (virNetClientAddProgram(mon->client,
                               mon->program) < 0)
        goto error;

    mon->vm = vm;
    memcpy(&mon->cb, cb, sizeof(mon->cb));

    virObjectRef(mon);
    virNetClientSetCloseCallback(mon->client, virLXCMonitorEOFNotify, mon,
                                 virLXCMonitorCloseFreeCallback);
Is the problem occurring because the virNetClientRegisterAsyncIO call is made before
virNetClientAddProgram? Presumably, once we register for async IO, an event can come
in immediately and that thread takes priority and bails out because it does not find
client->prog. Also, the client is not retrying to establish a new connection.
Please let me know any thoughts/comments. Is there any patch already available which
has fixed this issue? We are using libvirt 1.2.15.
Thanks,
Rammohan
[libvirt-users] Questions regarding hostdev scsi
by Martin Polednik
Hi!
I'm an oVirt developer responsible for most of the 'hostdev' support. While working
on SCSI passthrough (that is, hostdev type='scsi'), I've encountered a few issues I'm
not sure how to solve somewhat effectively and nicely.
Just a note - oVirt by default disables 'dynamic_ownership', meaning we have to
handle endpoint ownership/labeling ourselves. This is not something I can change in
the short term. Also, oVirt uses libvirt's Python API, but I'll do my best to use the
original C names.
To report and construct the hostdev element, I am using
virConnectListAllNodeDevices. To get information about the devices,
virNodeDeviceGetXMLDesc is called on each device. For PCI and USB devices, the XML of
the device contains everything needed to
a) construct the element,
b) fix endpoint permissions.
SCSI devices become more difficult as the information is scattered across multiple
devices. The devices I have encountered contain this subtree:
<device>
<name>scsi_host4</name>
<path>/sys/devices/pci0000:00/0000:00:1f.2/ata5/host4</path>
<parent>pci_0000_00_1f_2</parent>
<capability type='scsi_host'>
<host>4</host>
<unique_id>5</unique_id>
</capability>
</device>
<device>
<name>scsi_target4_0_0</name>
<path>/sys/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0</path>
<parent>scsi_host4</parent>
<capability type='scsi_target'>
<target>target4:0:0</target>
</capability>
</device>
<device>
<name>scsi_4_0_0_0</name>
<path>/sys/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0</path>
<parent>scsi_target4_0_0</parent>
<driver>
<name>sd</name>
</driver>
<capability type='scsi'>
<host>4</host>
<bus>0</bus>
<target>0</target>
<lun>0</lun>
<type>disk</type>
</capability>
</device>
<device>
<name>block_sdb_Samsung_SSD_850_PRO_256GB_S251NXAGB42213R</name>
<path>/sys/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/block/sdb</path>
<parent>scsi_4_0_0_0</parent>
<capability type='storage'>
<block>/dev/sdb</block>
<bus>ata</bus>
<drive_type>disk</drive_type>
<model>Samsung SSD 850</model>
<vendor>ATA</vendor>
<serial>Samsung_SSD_850_PRO_256GB_S251NXAGB42213R</serial>
<size>256060514304</size>
<logical_block_size>512</logical_block_size>
<num_blocks>500118192</num_blocks>
</capability>
</device>
<device>
<name>scsi_generic_sg1</name>
<path>/sys/devices/pci0000:00/0000:00:1f.2/ata5/host4/target4:0:0/4:0:0:0/scsi_generic/sg1</path>
<parent>scsi_4_0_0_0</parent>
<capability type='scsi_generic'>
<char>/dev/sg1</char>
</capability>
</device>
To construct the element, information from the device scsi_4_0_0_0 is needed for the
<address> element (minus the host element). The adapter element is one of the places
where I am not sure which information I can rely on; currently the 'host' element
from scsi_4_0_0_0 is used to form the string 'scsi_host{host}'. Is this correct?
Reliable?
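For illustration, a minimal sketch that assembles such an element from the
scsi_4_0_0_0 capability fields shown above (the helper function is hypothetical, and
the exact hostdev attributes should be checked against the libvirt domain XML
documentation):

# Sketch: build a <hostdev type='scsi'> element from the fields of the
# scsi_4_0_0_0 capability above (host=4, bus=0, target=0, lun=0).
def make_scsi_hostdev(host, bus, target, lun):
    return (
        "<hostdev mode='subsystem' type='scsi'>\n"
        "  <source>\n"
        "    <adapter name='scsi_host%d'/>\n"
        "    <address bus='%d' target='%d' unit='%d'/>\n"
        "  </source>\n"
        "</hostdev>" % (host, bus, target, lun)
    )

print(make_scsi_hostdev(4, 0, 0, 0))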
A whole different issue is locating the endpoint (/dev/sg{int}) to set the
permissions. From the subtree, it's apparent that it is included in the device called
scsi_generic_sg1, but there is no *direct* link between scsi_4_0_0_0 and
scsi_generic_sg1. At this point, we present the user with devices matching
capability='scsi', therefore to get the information two additional parses of the tree
would have to be done. The other way would be reporting the 'scsi_generic'
capability, where only two additional virNodeDeviceLookupByName calls would be
required (to descend two levels). Is there anything *really* wrong with this
approach? Are there any other options without dynamic ownership?
Seclabel doesn't seem to be available.
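For reference, a minimal sketch of the second option (going through the
'scsi_generic' capability and using the parent link to reach the scsi_4_0_0_0-style
device); the URI is an assumption and error handling is omitted:

# Sketch: map each scsi_generic node device (/dev/sgN) to its parent SCSI
# device, i.e. the link between scsi_generic_sg1 and scsi_4_0_0_0 that the
# XML does not express directly. URI is an assumption.
import xml.etree.ElementTree as ET
import libvirt

conn = libvirt.open("qemu:///system")
for dev in conn.listAllDevices(0):
    if "scsi_generic" not in dev.listCaps():
        continue
    xml = ET.fromstring(dev.XMLDesc(0))
    char = xml.find("./capability[@type='scsi_generic']/char")
    parent = conn.nodeDeviceLookupByName(dev.parent())  # e.g. scsi_4_0_0_0
    print("%s -> %s (%s)" % (dev.name(), parent.name(), char.text))
conn.close()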
I'm thankful for any hints.
Regards,
mpolednik