[libvirt-users] a question on vCPU setting for lxc
by WANG Cheng D
Dear all,
I am not clear about the 'vcpu' element for CPU allocation. I allocated 1 vCPU to my container, after I started the container, I ran 4 computation-intensive tasks on the container. And I found all the 4 physical core are 100% used (my host has 4 physical cores and no other application ran on the host except the container). That is, all available cores were used by the container. I want to know how to give a hard limitation for CPU usage of a container.
So I don't understand what 'vcpu' setting can be used for.
I know that another CPU allocation element 'shares' can also be used, but this elements only give a relative quota. If new containers are started, the CPU quota for the already started containers will change.
Regards,
Cheng
10 years, 8 months
[libvirt-users] Sys::Virt integration into other event loops
by Scott Wiersdorf
Hi all,
I’m trying to integrate Perl’s Sys::Virt into an already existing AnyEvent program.
I’m accustomed to writing things like this:
use EV;
use AnyEvent;
use AnyEvent::Handle;
my $h = AnyEvent::Handle->new(fh => $fh, …);
$h->on_read(sub { … });
EV::run; ## start the event loop
I can add some code in the on_read() handler and every time the $fh has something to read, it will fire off. I’d like to do something similar with Sys::Virt, but I can’t seem to wrap my head around its event system. The only examples I can find are the ones included in Sys::Virt source, which consist of a series of run_once() calls, or a while loop around run_default().
Does anyone have any idea how I can make this play nicely with an existing event loop such as EV, or even to fire off ($cv->send) an AnyEvent condvar when the event I set in domain_event_register_any() triggers?
Our current solution involves setting an AnyEvent->timer, which periodically fires off and then does a run_once() to see if any events happened during the period, but we’d like something more responsive (i.e., triggers when the event actually occurs) and less poll-y feeling.
I’d appreciate any advice you may have—thanks!
Scott
10 years, 8 months
Re: [libvirt-users] Problem in getting memory statistics
by Claudio Bley
[please don't top post on technical lists. it is easier to follow
conversation and less likely you forget answering a question if
you reply to the questions in line]
[please keep the conversation on the list - I've re-added libvirt-users]
At Sat, 15 Mar 2014 10:15:15 +0100,
Pasquale Dir wrote:
>
> I am on a kubuntu 13.10 x64, qemu version 1.5.0, guest uses kvm as emulator
> (1.5.0 as well), libvirt version is 1.1.1.
> I don't know what is a balloon driver...so I can't tell the version of it.
If you're using Linux inside your guest, just run "modinfo
virtio_balloon" and post the output.
> 2014-03-14 12:00 GMT+01:00 Claudio Bley <cbley(a)av-test.de>:
>
> > At Wed, 12 Mar 2014 13:36:50 +0100,
> > Pasquale Dir wrote:
> > >
> > > Problem is that it returns me just tags 0,6 and 7.
> > > Looking at the documentation I see they are not what I am looking for...I
> > > would rather need 4 (VIR_DOMAIN_MEMORY_STAT_UNUSED) and 5
> > > (VIR_DOMAIN_MEMORY_STAT_AVAILABLE).
> >
> > The information available depends on the kind of hypervisor you're
> > using and which guest OS you have.
> >
> > I guess you're using qemu. Which version are you using?
>
> > What guest OS do you use? Which version?
Er, you missed this question...
10 years, 8 months
[libvirt-users] How assign static ip using NAT?
by Peng Yu
Hi,
I have the following configuration in the xml of the guest. But if I
set a static ip on the guest, then the guest can not access the
outside network. I don't find an example on how to set a static IP
address for the guest. Could anybody know how to modify the following
xml code to do so? Thanks.
41 <interface type='network'>
42 <mac address='52:54:00:6c:a7:6d'/>
43 <source network='default'/>
44 <model type='virtio'/>
45 <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
46 </interface>
--
Regards,
Peng
10 years, 8 months
[libvirt-users] CfP 9th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '14)
by VHPC 14
we apologize if you receive multiple copies of this CfP
=================================================================
CALL FOR PAPERS
9th Workshop on Virtualization in High-Performance Cloud Computing (VHPC
'14)
held in conjunction with Euro-Par 2014, August 25-29, Porto, Portugal
=================================================================
Date: August 26, 2014
Workshop URL: http://vhpc.org
Paper Submission Deadline: May 30, 2014
CALL FOR PAPERS
Virtualization technologies constitute a key enabling factor for flexible
resource
management in modern data centers, and particularly in cloud environments.
Cloud providers need to dynamically manage complex infrastructures in a
seamless fashion for varying workloads and hosted applications,
independently of
the customers deploying software or users submitting highly dynamic and
heterogeneous workloads. Thanks to virtualization, we have the ability to
manage
vast computing and networking resources dynamically and close to the
marginal
cost of providing the services, which is unprecedented in the history of
scientific
and commercial computing.
Various virtualization technologies contribute to the overall picture in
different
ways: machine virtualization, with its capability to enable consolidation
of multiple
under-utilized servers with heterogeneous software and operating systems
(OSes),
and its capability to live-migrate a fully operating virtual machine (VM)
with a very
short downtime, enables novel and dynamic ways to manage physical servers;
OS-level virtualization, with its capability to isolate multiple user-space
environments and to allow for their co-existence within the same OS kernel,
promises to provide many of the advantages of machine virtualization with
high
levels of responsiveness and performance; I/O Virtualization allows physical
NICs/HBAs to take traffic from multiple VMs; network virtualization, with
its
capability to create logical network overlays that are independent of the
underlying physical topology and IP addressing, provides the fundamental
ground on top of which evolved network services can be realized with an
unprecedented level of dynamicity and flexibility; the increasingly adopted
paradigm of Software-Defined Networking (SDN) promises to extend this
flexibility to the control and data planes of network paths. These
technologies
have to be inter-mixed and integrated in an intelligent way, to support
workloads that are increasingly demanding in terms of absolute performance,
responsiveness and interactivity, and have to respect well-specified
Service-
Level Agreements (SLAs), as needed for industrial-grade provided services.
Indeed, among emerging and increasingly interesting application domains
for virtualization, we can find big-data application workloads in cloud
infrastructures, interactive and real-time multimedia services in the cloud,
including real-time big-data streaming platforms such as used in real-time
analytics supporting nowadays a plethora of application domains. Distributed
cloud infrastructures promise to offer unprecedented responsiveness levels
for
hosted applications, but that is only possible if the underlying
virtualization
technologies can overcome most of the latency impairments typical of current
virtualized infrastructures (e.g., far worse tail-latency). What is more,
in data
communications Network Function Virtualization (NFV) is becoming a key
technology enabling a shift from supplying hardware-based network functions,
to providing them in a software-based and elastic way. In conjunction with
(public and private) cloud technologies, NFV may be used for constructing
the
foundation for cost-effective network functions that can easily and
seamlessly
adapt to demand, still keeping their major carrier-grade characteristics in
terms
of QoS and reliability.
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC)
aims to bring together researchers and industrial practitioners facing the
challenges
posed by virtualization in order to foster discussion, collaboration,
mutual exchange
of knowledge and experience, enabling research to ultimately provide novel
solutions for virtualized computing systems of tomorrow.
The workshop will be one day in length, composed of 20 min paper
presentations,
each followed by 10 min discussion sections, and lightning talks, limited
to 5
minutes. Presentations may be accompanied by interactive demonstrations.
TOPICS
Topics of interest include, but are not limited to:
- Management, deployment and monitoring of virtualized environments
- Language-process virtual machines
- Performance monitoring for virtualized/cloud workloads
- Virtual machine monitor platforms
- Topology management and optimization for distributed virtualized
applications
- Paravirtualized I/O
- Improving I/O and network virtualization including use of RDMA,
Infiniband, PCIe
- Improving performance in VM access to GPUs, GPU clusters, GP-GPUs
- HPC storage virtualization
- Virtualized systems for big-data and analytics workloads
- Optimizations and enhancements to OS virtualization support
- Improving OS-level virtualization and its integration within cloud
management
- Performance modelling for virtualized/cloud applications
- Heterogeneous virtualized environments
- Network virtualization
- Software defined networking
- Network function virtualization
- Hypervisor and network virtualization QoS and SLAs
- Cloudbursting
- Evolved European grid architectures including such based on network
virtualization
- Workload characterization for VM-based environments
- Optimized communication libraries/protocols in the cloud
- System and process/bytecode VM convergence
- Cloud frameworks and APIs
- Checkpointing/migration of VM-based large compute jobs
- Job scheduling/control/policy with VMs
- Instrumentation interfaces and languages
- VMM performance (auto-)tuning on various load types
- Cloud reliability, fault-tolerance, and security
- Research, industrial and educational use cases
- Virtualization in cloud, cluster and grid environments
- Cross-layer VM optimizations
- Cloud HPC use cases including optimizations
- Services in cloud HPC
- Hypervisor extensions and tools for cluster and grid computing
- Cluster provisioning in the cloud
- Performance and cost modelling
- Languages for describing highly-distributed compute jobs
- VM cloud and cluster distribution algorithms, load balancing
- Instrumentation interfaces and languages
- Energy-aware virtualization
Important Dates
Rolling Paper registration
May 30, 2014 - Full paper submission
July 4, 2014 - Acceptance notification
October 3, 2014 - Camera-ready version due
August 26, 2014 - Workshop Date
TPC
CHAIR
Michael Alexander (chair), TU Wien, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Tommaso Cucinotta (co-chair), Bell Labs, Dublin, Ireland
PROGRAM COMMITTEE
Costas Bekas, IBM
Jakob Blomer, CERN
Roberto Canonico, University of Napoli Federico II, Italy
Paolo Costa, MS Research Cambridge, England
Jorge Ejarque Artigas, Barcelona Supercomputing Center, Spain
William Gardner, University of Guelph, USA
Balazs Gerofi, University of Tokyo, Japan
Krishna Kant, Temple University, USA
Romeo Kinzler, IBM
Nectarios Koziris, National Technical University of Athens, Greece
Giuseppe Lettieri, University of Pisa, Italy
Jean-Marc Menaud, Ecole des Mines de Nantes, France
Christine Morin, INRIA, France
Dimitrios Nikolopoulos, Queen's University of Belfast, UK
Herbert Poetzl, VServer, Austria
Luigi Rizzo, University of Pisa, Italy
Josh Simons, VMWare, USA
Borja Sotomayor, University of Chicago, USA
Vangelis Tasoulas, Simula Research Lab, Norway
Yoshio Turner, HP Labs, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Chao-Tung Yang, Tunghai University, Taiwan
PAPER SUBMISSION-PUBLICATION
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include abstract, key words, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
Accepted papers will be published in the Springer LNCS series - the
format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
Format Guidelines:
http://www.springer.de/comp/lncs/authors.html
EasyChair Abstract Submission Link:
https://www.easychair.org/conferences/?conf=europar2014ws
GENERAL INFORMATION
The workshop is one day in length and will be held in conjunction with
Euro-Par 2014, 25-29 August, Porto, Portugal
10 years, 8 months
Re: [libvirt-users] Scheduler Parameters
by Faruk Caglar
Hi All,
I have emailed the message below to Arnaud, but it got failed.
I will need to have get set scheduling parameters (including timeslice,
ratelimit, weight, cap for Xen)
thru C# libvirt bindings.
is there any plan to have those wrappers soon?
Thanks
Faruk
On Fri, Mar 14, 2014 at 4:06 PM, Faruk Caglar
<faruk.caglar(a)vanderbilt.edu>wrote:
>
> Hi Arnaud,
>
> Thanks for all your work for the libvirt C# bindings.
>
> Recently I have started to use libvirt C# bindings for my doctoral
> research. However,
> I have noticed that it currently does not support the get/set scheduler
> parameters.
>
> Do you have any plan to implement those in a near future?
>
> Thanks
> Faruk
> Vanderbilt University
>
>
10 years, 8 months
[libvirt-users] questions on clock catchup
by Jincheng Miao
In http://libvirt.org/formatdomain.html#elementsTime , we could find there are
three attributes of catchup tickpolicy: limit, threshold and slew.
"
The catchup element has three optional attributes, each a positive integer.
The attributes are threshold, slew, and limit.
"
The xml format likes:
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'>
<catchup slew='123'/> or <catchup threshold='123'/> or <catchup limit='123'/>
</timer>
</clock>
But there is no further explanation.
Does anyone know what's the meaning of these attributes ?
Best wishes,
Jincheng Miao
10 years, 8 months
Re: [libvirt-users] PCI Passthrough of 2 identical devices
by Thomas Jagoditsch
hi laine,
thx for the fast answer.
i tried VFIO as it seems supported and got at least one step further.
starting the guest with one of the DVB cards is possible now, even when the other card is in the hosts pci config (not removed soft or hard).
adding the second card/device thru ...
> <hostdev mode='subsystem' type='pci' managed='yes'>
> <driver name='vfio'/>
> <source>
> <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
> </source>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
> </hostdev>
> <hostdev mode='subsystem' type='pci' managed='yes'>
> <driver name='vfio'/>
> <source>
> <address domain='0x0000' bus='0x03' slot='0x01' function='0x0'/>
> </source>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
> </hostdev>
will render the guest unbootable:
--><-- /var/log/libvirt/qemu/tvBackend.log
2014-03-12 16:48:48.018+0000: starting up
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm-spice -name tvBackend -S -machine pc-i440fx-1.5,accel=kvm,usb=off -m 1024 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid f647bb10-7f9a-f94c-33b9-3d99e8e753e0 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/tvBackend.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/kvm/tvBackend.img,if=none,id=drive-virtio-disk0,format=raw -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=30 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:10:26:ec,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:2 -vga std -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.0,addr=0x6 -device vfio-pci,host=03:01.0,id=hostdev1,bus=pci.0,addr=0x7 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
W: kvm binary is deprecated, please use qemu-system-x86_64 instead
char device redirected to /dev/pts/0 (label charserial0)
qemu-system-x86_64: -device vfio-pci,host=03:00.0,id=hostdev0,bus=pci.0,addr=0x6: Warning, device 0000:03:00.0 does not support reset
qemu-system-x86_64: -device vfio-pci,host=03:01.0,id=hostdev1,bus=pci.0,addr=0x7: Warning, device 0000:03:01.0 does not support reset
qemu-system-x86_64: -device vfio-pci,host=03:01.0,id=hostdev1,bus=pci.0,addr=0x7: vfio: Error: Failed to setup INTx fd: Device or resource busy
qemu-system-x86_64: -device vfio-pci,host=03:01.0,id=hostdev1,bus=pci.0,addr=0x7: Device initialization failed.
qemu-system-x86_64: -device vfio-pci,host=03:01.0,id=hostdev1,bus=pci.0,addr=0x7: Device 'vfio-pci' could not be initialized
2014-03-12 16:48:48.639+0000: shutting down
--><-- /var/log/syslog
Mar 12 18:07:45 father kernel: [34317.290958] genirq: Flags mismatch irq 16. 00000000 (vfio-intx(0000:03:01.0)) vs. 00000080 (ehci_hcd:usb1)
--><--
as i have only 2 pci slots on the board and no other board with serious VT-d/IOMMU support i have no way to swap slots.
as the second card shared IRQ 16 with the ehci controller i soft-removed the ehci controller for testing purposes and voila the guest started with the 2 cards successfully inside, even mythtv works ... for now.
i hope to find a way to somehow rearrange the irqs as i sometimes need USB on the machine :D
wbr,tja...
----- Ursprüngliche Mail -----
Von: "Laine Stump" <laine(a)laine.org>
An: libvirt-users(a)redhat.com
CC: "Thomas Jagoditsch" <tja(a)tjasoft.com>
Gesendet: Mittwoch, 12. März 2014 16:13:15
Betreff: Re: [libvirt-users] PCI Passthrough of 2 identical devices
On 03/12/2014 06:29 AM, Thomas Jagoditsch wrote:
> hi,
>
> i have a small trouble with pci-passthrough.
>
> i have a working configuration passing a tv card into the guest, all is fine and dandy.
> as soon as i add my second identical card into the host i cant start the guest anymore, whether i add the second card to the guest or not. error message is identical in both cases.
>
> message of virt-manager|virsh|libvirtd.log:
>> libvirtError: internal error: Unable to reset PCI device 0000:03:00.0: internal error: Active 0000:03:01.0 devices on bus with 0000:03:00.0, not doing bus reset
> the host is a plain kvm server, no services or apps whatsoever accessing the two devices.
> the other guests (firewall and fileserver) do not use the cards either.
The problem is that for kvm device assignment to work properly, the
device needs to be reset by libvirt after detaching it from the host
driver and before passing it to kvm, and these devices you are trying to
reset support neither "function level reset" nor "power management
reset", so libvirt must fallback to resetting the entire bus where the
device is plugged in. But of course it can't do that if the bus contains
other devices that are in use by the host or by other guests.
If your host OS is new enough to support vfio device assignment, I would
suggest using that instead (if you update to the latest upstrem libvirt
it will automatically use VFIO if it is available, otherwise you can add
"<driver name='vfio'/>" to the device definition to assure that it will
either use vfio or fail. You will also probably need to run "modprobe
vfio" before starting the guest). The reason I suggest this is that VFIO
will automatically handle resetting the assigned devices whenever
necessary, compared to the old KVM device assignment, where libvirt must
always reset the device because it has no better information about
whether or not it is really necessary.
Another thing you can try is plugging one of the cards into a different
slot - if you can find a slot that is on a different bus, then libvirt
will be able to reset the bus containing the card that will be assigned
(since there won't be any other active devices on that same bus).
>
> if i (soft) remove the 2nd card via
>> echo -n 1 > /sys/bus/pci/devices/0000\:03\:01.0/remove
> i can start the guest with the 1st card assigned.
>
> thx for anyone looking into this.
libvirt attempts to reset the
>
> wbr,tja..
>
>
> PS:
> host:
>> root@father:~# lspci
>> 00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
>> 00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
>> 00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 05)
>> 00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
>> 00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V (rev 05)
>> 00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 05)
>> 00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d5)
>> 00:1c.3 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d5)
>> 00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 05)
>> 00:1f.0 ISA bridge: Intel Corporation H87 Express LPC Controller (rev 05)
>> 00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 05)
>> 00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 05)
>> 02:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)
>> 03:00.0 Multimedia controller: Philips Semiconductors SAA7146 (rev 01)
>> 03:01.0 Multimedia controller: Philips Semiconductors SAA7146 (rev 01)
>> root@father:~# cat /etc/lsb-release
>> DISTRIB_ID=Ubuntu
>> DISTRIB_RELEASE=13.10
>> DISTRIB_CODENAME=saucy
>> DISTRIB_DESCRIPTION="Ubuntu 13.10"
>> root@father:~# uname -a
>> Linux father 3.11.0-18-generic #32-Ubuntu SMP Tue Feb 18 21:11:14 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>> root@father:~# libvirtd --version
>> libvirtd (libvirt) 1.1.1
> guest:
>> root@father:~# cat /etc/libvirt/qemu/tvBackend.xml
>> <!--
>> WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
>> OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
>> virsh edit tvBackend
>> or other application using the libvirt API.
>> -->
>>
>> <domain type='kvm'>
>> <name>tvBackend</name>
>> <uuid>f647bb10-7f9a-f94c-33b9-3d99e8e753e0</uuid>
>> <memory unit='KiB'>1048576</memory>
>> <currentMemory unit='KiB'>1048576</currentMemory>
>> <vcpu placement='static'>2</vcpu>
>> <os>
>> <type arch='x86_64' machine='pc-i440fx-1.5'>hvm</type>
>> <boot dev='hd'/>
>> </os>
>> <features>
>> <acpi/>
>> <apic/>
>> <pae/>
>> </features>
>> <clock offset='utc'/>
>> <on_poweroff>destroy</on_poweroff>
>> <on_reboot>restart</on_reboot>
>> <on_crash>restart</on_crash>
>> <devices>
>> <emulator>/usr/bin/kvm-spice</emulator>
>> <disk type='file' device='disk'>
>> <driver name='qemu' type='raw'/>
>> <source file='/kvm/tvBackend.img'/>
>> <target dev='vda' bus='virtio'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
>> </disk>
>> <disk type='block' device='cdrom'>
>> <driver name='qemu' type='raw'/>
>> <target dev='hdc' bus='ide'/>
>> <readonly/>
>> <address type='drive' controller='0' bus='1' target='0' unit='0'/>
>> </disk>
>> <controller type='usb' index='0'>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
>> </controller>
>> <controller type='pci' index='0' model='pci-root'/>
>> <controller type='ide' index='0'>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
>> </controller>
>> <interface type='bridge'>
>> <mac address='52:54:00:10:26:ec'/>
>> <source bridge='brlan'/>
>> <model type='virtio'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>> </interface>
>> <serial type='pty'>
>> <target port='0'/>
>> </serial>
>> <console type='pty'>
>> <target type='serial' port='0'/>
>> </console>
>> <input type='mouse' bus='ps2'/>
>> <graphics type='vnc' port='-1' autoport='yes'/>
>> <video>
>> <model type='vga' vram='9216' heads='1'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
>> </video>
>> <hostdev mode='subsystem' type='pci' managed='yes'>
>> <source>
>> <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
>> </source>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
>> </hostdev>
>> <memballoon model='virtio'>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
>> </memballoon>
>> </devices>
>> </domain>
--
thomas jagoditsch - tjaSoft
softWareEntwicklung - netzWerkManagement
10 years, 8 months