[libvirt-users] NTP traffic blocked
by Sinan Polat
I have multiple VMs on the same KVM host. One of the VMs is running NTP.
All VMs can reach each other and there is no firewall in between, but the
VMs cannot communicate with the NTP VM over port 123/udp.
Network: 172.24.100.0/22
KVM: 172.24.101.50
VM ntp: 172.24.102.10
VM foo: 172.24.102.20
1. On the NTP server, listen for any incoming packets from VM foo on port
123:
[ntp ~]# tcpdump -i any host 172.24.102.20 and port 123 -n
2. Execute the following on server foo. Since server ntp is listening with
tcpdump, packets should be visible in tcpdump.
[foo ~]# ntpdate 172.24.102.10
This is failing:
ntpdate[30443]: no server suitable for synchronization found
No packets arrive at the ntp server; tcpdump stays blank. Weird.
To troubleshoot further, start over and do the following:
[ntp ~]# tcpdump -i any host 172.24.102.20 and port 123 -n ## Listen for
packets filtering host 172.24.102.20 and port 123
[foo ~]# tcpdump -i any host 172.24.102.10 and port 123 -n ## Listen for
packets filtering host 172.24.102.10 and port 123
While both tcpdumps are running, execute the following:
[foo ~]# ntpdate 172.24.102.10
Now, on the tcpdump of VM foo, you will see outgoing packets:
19:45:26.644630 IP 172.24.102.20.ntp > 172.24.102.10.ntp: NTPv4, Client,
length 48
As you can see, packets are exiting the server, but there is no response.
And the tcpdump on the ntp server is still empty; it doesn't receive the
packets (so it won't reply). But why?
Let's troubleshoot further and run ntpdate in debug mode:
[foo ~]# ntpdate -dv 172.24.102.10
22 Aug 19:51:23 ntpdate[30465]: ntpdate 4.2.6p5@1.2349-o Wed Mar 1 09:00:52
UTC 2017 (1)
Looking for host 172.24.102.10 and service ntp
host found : some-host.com
transmit(172.24.102.10)
receive(172.24.102.10)
transmit(172.24.102.10)
receive(172.24.102.10)
server 172.24.102.10, port 123
22 Aug 19:51:29 ntpdate[30465]: step time server 172.24.102.10 offset
1.414813 sec
Wow, it worked!? But it only works with the "-d" option. What is the
difference between normal and debug mode? Let's have a closer look: without
the "-d" option, the src and dest ports are both 123. With the "-d" option,
the src port is not 123 (it is a random high port number).
There is no firewall active on the KVM host or on the VMs, and even if there
were one, the packets should still have shown up in tcpdump.
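One way to narrow this down further (a rough sketch, assuming the guests are
attached to a Linux bridge named br0 on the KVM host) is to also capture on
the host bridge while repeating the test, check for host-side filtering, and
retry from an unprivileged source port, which is what ntpdate -u does:
[kvm ~]# tcpdump -i br0 -n udp port 123   ## do the packets reach the host bridge at all?
[kvm ~]# iptables -L -n -v | grep 123     ## any iptables rule matching NTP traffic?
[kvm ~]# ebtables -L --Lc                 ## bridge-level rules (e.g. libvirt nwfilter) with counters
[foo ~]# ntpdate -u 172.24.102.10         ## query using an unprivileged source port
If the -u variant works like the -d run did, whatever is filtering appears to
drop UDP packets specifically when the source port is 123.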
Anyone who can help? Thanks!
Sinan
[libvirt-users] virConnectClose
by llilulu
libvirt version: 3.4.0
When I register events on a hypervisor connection with virConnectDomainEventRegisterAny, should I call virConnectDomainEventDeregisterAny before virConnectClose? If the hypervisor connection closes unexpectedly, will calling virConnectDomainEventDeregisterAny return an error? Can you tell me more details about events? The libvirt documentation does not describe this much.
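For illustration, a rough sketch with the Python bindings of the ordering being asked about (placeholder names; this is a sketch of usual practice, not a statement of what libvirt requires): keep the callback ID from registration, deregister it, then close. If the connection has already died, the deregister call may raise an error, so it is wrapped defensively here.
import libvirt

def lifecycle_cb(conn, dom, event, detail, opaque):
    print("domain %s: event %d detail %d" % (dom.name(), event, detail))

# An event loop implementation must be registered before the connection is opened.
libvirt.virEventRegisterDefaultImpl()

conn = libvirt.open('qemu:///system')

# Keep the callback ID returned by registration so it can be deregistered later.
cb_id = conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                    lifecycle_cb, None)

# ... call libvirt.virEventRunDefaultImpl() in a loop to dispatch events ...

try:
    conn.domainEventDeregisterAny(cb_id)
except libvirt.libvirtError:
    # The connection may already be gone; ignore the error in that case.
    pass
conn.close()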
Thanks
[libvirt-users] Increasing video memory available to Windows
by Alex
Hi,
I have a Fedora 25 system with a Windows 10 guest and would like to use
it for Photoshop. However, it complains that the video memory is too low.
I'm using the QXL driver and it appears to be limited to 256MB? I've
installed the Red Hat QXL driver in Windows.
I have 4GB of memory allocated overall, and could allocate more if necessary.
How do I increase the available video memory? Photoshop reports that
it's detected less than 512MB and isn't operating properly.
When I view "Adapter Properties", it reports there are 2047 MB of
available graphics memory and 2047 MB of shared system memory.
Attempts to modify the vram and other video-related variables for the
guest with "virsh edit" have failed.
Included below is my qemu XML configuration. Any idea greatly appreciated.
<domain type='kvm'>
<name>alex-win10</name>
<uuid>337f9410-3286-4ef5-a3e8-8271e38ea1e5</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-2.4'>hvm</type>
</os>
<features>
<acpi/>
<apic/>
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
</hyperv>
<vmport state='off'/>
</features>
<cpu mode='host-model'>
<model fallback='allow'/>
</cpu>
<clock offset='localtime'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
<timer name='hypervclock' present='yes'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/alex-win10.qcow2'/>
<target dev='vda' bus='virtio'/>
<boot order='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07'
function='0x0'/>
</disk>
<disk type='block' device='cdrom'>
<driver name='qemu' type='raw'/>
<source dev='/dev/sr0'/>
<target dev='hdb' bus='ide'/>
<readonly/>
<boot order='1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
<master startport='0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
<master startport='2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
<master startport='4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>
</controller>
<controller type='scsi' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09'
function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:52:6b:61'/>
<source bridge='br0'/>
<model type='rtl8139'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='spice' autoport='yes'>
<listen type='address'/>
<image compression='off'/>
</graphics>
<sound model='ich6'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
</sound>
<video>
<model type='qxl' ram='65536' vram='65536' vgamem='536870912'
heads='1' primary='yes'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0'/>
</video>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='2'/>
</redirdev>
<redirdev bus='usb' type='spicevmc'>
<address type='usb' bus='0' port='3'/>
</redirdev>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08'
function='0x0'/>
</memballoon>
</devices>
</domain>
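Regarding the question above about raising the video memory: the qxl model's ram, vram and vgamem attributes are specified in KiB, so the vgamem='536870912' in the config above asks for roughly 512 GiB rather than the presumably intended 512 MB. A hedged example of a larger QXL video element (values are illustrative, in KiB, and not tested against this particular guest):
<video>
  <model type='qxl' ram='131072' vram='131072' vgamem='65536' heads='1' primary='yes'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
Whether Photoshop then reports more memory also depends on the Windows QXL driver, so treat this only as a starting point.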
[libvirt-users] virsh blockcommit fails regularily (was: virtual drive performance)
by Dominik Psenner
Hi,
a small update on this. We have migrated the virtualized guest to use the
virtio drivers and the drive performance has improved, so we now see a
constant transfer rate. Before, it ran at the same rate but regularly
dropped to a few bytes/sec for a few seconds and then was fast again.
However, we still observe that the following fails regularly:
$ virsh snapshot-create-as --domain domain --name backup --no-metadata
--atomic --disk-only --diskspec hda,snapshot=external
$ virsh blockcommit domain hda --active --pivot
error: failed to pivot job for disk hda
error: block copy still active: disk 'hda' not ready for pivot yet
Could not merge changes for disk hda of domain. VM may be in invalid state.
Then running the following in the morning succeeds and pivots the snapshot
into the base image while the VM is live:
$ virsh blockjob domain hda --abort
$ virsh blockcommit domain hda --active --pivot
Successfully pivoted
We run the backup process once a day, and it failed on the following
days:
2017-07-07
2017-07-20
2017-07-27
2017-08-12
2017-08-14
Looking at this, it happens roughly once a week, and from then on the guest
writes into the snapshot overlay. That overlay file grows by about 8 GB per
day, so the issue always needs immediate attention.
Any ideas what could cause this issue? Is this a bug (race condition) of
`virsh blockcommit` that sometimes fails because it is invoked at the wrong
time?
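One way to make the nightly job more defensive (a sketch only, reusing the placeholder domain/disk names from above, and not a statement about the root cause) is to let virsh wait for the commit job to become ready before pivoting, and, when a job from a failed run is still active, to pivot that job directly instead of aborting it:
$ virsh blockcommit domain hda --active --wait --verbose --pivot
## If a job from a failed attempt is still running, inspect it and pivot it
## once it is ready (essentially the manual morning fix described above):
$ virsh blockjob domain hda --info    ## assumes progress is printed as a percentage
$ virsh blockjob domain hda --pivot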
Cheers,
Dominik
2017-07-07 9:21 GMT+02:00 Dominik Psenner <dpsenner(a)gmail.com>:
> Of course the cronjob fails when trying to virsh blockcommit and not when
> creating the snapshot, sorry for the noise.
>
> 2017-07-07 9:15 GMT+02:00 Dominik Psenner <dpsenner(a)gmail.com>:
>
>> Hi,
>>
>> different day, same issue.. cronjob runs and fails:
>>
>> $ virsh snapshot-create-as --domain domain --name backup --no-metadata
>> --atomic --disk-only --diskspec hda,snapshot=external
>> error: failed to pivot job for disk hda
>> error: block copy still active: disk 'hda' not ready for pivot yet
>> Could not merge changes for disk hda of domain. VM may be in invalid
>> state.
>>
>> Then running the following in the morning succeeds and pivots the
>> snapshot into the base image while the vm is live:
>>
>> $ virsh blockjob domain hda --abort
>> $ virsh blockcommit domain hda --active --pivot
>> Successfully pivoted
>>
>> This need for manual intervention is becoming tiring..
>>
>> Is someone else seeing the same issue, or does anyone have an idea what
>> the cause could be?
>> Can I trust the output and is the base image really up to the latest
>> state?
>>
>> Cheers
>>
>> 2017-07-02 10:30 GMT+02:00 Dominik Psenner <dpsenner(a)gmail.com>:
>>
>>> Just a little catch-up. This time I was able to resolve the issue by
>>> doing:
>>>
>>> virsh blockjob domain hda --abort
>>> virsh blockcommit domain hda --active --pivot
>>>
>>> Last time I had to shut down the virtual machine and do this while being
>>> offline.
>>>
>>> Thanks Wang for your valuable input. As far as the memory goes, there's
>>> plenty of head room:
>>>
>>> $ free -h
>>>               total        used        free      shared  buff/cache   available
>>> Mem:           7.8G        1.8G        407M        9.7M        5.5G        5.5G
>>> Swap:          8.0G        619M        7.4G
>>>
>>> 2017-07-02 10:26 GMT+02:00 王李明 <wanglm(a)certusnet.com.cn>:
>>>
>>>> Maybe this is because your physical host memory is small,
>>>>
>>>> which could cause instability of the virtual machine.
>>>>
>>>> But I'm just guessing.
>>>>
>>>> You can try to increase your memory.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> Wang Liming
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> *From:* libvirt-users-bounces(a)redhat.com [mailto:libvirt-users-bounces@redhat.com] *On behalf of* Dominik Psenner
>>>> *Sent:* 2 July 2017 16:22
>>>> *To:* libvirt-users(a)redhat.com
>>>> *Subject:* Re: [libvirt-users] virtual drive performance
>>>>
>>>>
>>>>
>>>> Hi again,
>>>>
>>>> just today an issue I thought had been resolved popped up again. We
>>>> back up the machine by doing:
>>>>
>>>> virsh snapshot-create-as --domain domain --name backup --no-metadata
>>>> --atomic --disk-only --diskspec hda,snapshot=external
>>>>
>>>> # backup hda.qcow2
>>>>
>>>> virsh blockcommit domain hda --active --pivot
>>>>
>>>> Every now and then this process fails with the following error message:
>>>>
>>>> error: failed to pivot job for disk hda
>>>> error: block copy still active: disk 'hda' not ready for pivot yet
>>>> Could not merge changes for disk hda of domain. VM may be in invalid
>>>> state.
>>>>
>>>> I expect live backups to be a great asset, and they should work. Is this a
>>>> bug that may also relate to the virtual drive performance issues we observe?
>>>>
>>>> Cheers
>>>>
>>>>
>>>>
>>>> 2017-07-02 10:10 GMT+02:00 Dominik Psenner <dpsenner(a)gmail.com>:
>>>>
>>>> Hi
>>>>
>>>> a small update on this. I just migrated the vm from the site to my
>>>> laptop and fired it up. The exact same xml configuration (except file paths
>>>> and such) starts up and bursts with 50Mb/s to 115Mb/s in the guest. This
>>>> allows only one reasonable answer: the CPU in my laptop is somehow better
>>>> suited to emulating IO than the CPU built into the host on site. The host
>>>> there is an HP ProLiant MicroServer Gen8 with a Xeon processor, but the
>>>> processor there is also never maxed out at 100% when the guest copies files.
>>>>
>>>> I just ran another test by copying a 3 GB file on the guest. What
>>>> I can observe on my computer is that the copy process is not at a constant
>>>> rate but rather starts with 90Mb/s, then drops down to 30Mb/s, goes up to
>>>> 70Mb/s, drops down to 1Mb/s, goes up to 75Mb/s, drops to 1Mb/s, goes up to
>>>> 55Mb/s and the pattern continues. Please note that the drive is still
>>>> configured as:
>>>>
>>>> <driver name='qemu' type='qcow2' cache='none' io='threads'/>
>>>>
>>>> and I would expect a constant rate that is either high or low since
>>>> there is no caching involved and the underlying hard drive is a samsung ssd
>>>> evo 850. To have an idea how fast that drive is on my laptop:
>>>>
>>>> $ dd if=/dev/zero of=testfile bs=1M count=1000 oflag=direct
>>>> 1000+0 records in
>>>> 1000+0 records out
>>>> 1048576000 bytes (1.0 GB, 1000 MiB) copied, 2.47301 s, 424 MB/s
>>>>
>>>>
>>>>
>>>> I can further observe that the smaller the saved chunks are the slower
>>>> the overall performance is:
>>>>
>>>> dd if=/dev/zero of=testfile bs=512K count=1000 oflag=direct
>>>> 1000+0 records in
>>>> 1000+0 records out
>>>> 524288000 bytes (524 MB, 500 MiB) copied, 1.34874 s, 389 MB/s
>>>>
>>>> $ dd if=/dev/zero of=testfile bs=5K count=1000 oflag=direct
>>>> 1000+0 records in
>>>> 1000+0 records out
>>>> 5120000 bytes (5.1 MB, 4.9 MiB) copied, 0.105109 s, 48.7 MB/s
>>>>
>>>> $ dd if=/dev/zero of=testfile bs=1K count=10000 oflag=direct
>>>> 10000+0 records in
>>>> 10000+0 records out
>>>> 10240000 bytes (10 MB, 9.8 MiB) copied, 0.668438 s, 15.3 MB/s
>>>>
>>>> $ dd if=/dev/zero of=testfile bs=512 count=20000 oflag=direct
>>>> 20000+0 records in
>>>> 20000+0 records out
>>>> 10240000 bytes (10 MB, 9.8 MiB) copied, 1.10964 s, 9.2 MB/s
>>>>
>>>> Could this be a limiting factor? Does qemu/kvm do many many writes of
>>>> just a few bytes?
>>>>
>>>>
>>>> Ideas, anyone?
>>>>
>>>> Cheers
>>>>
>>>>
>>>>
>>>> 2017-06-21 20:46 GMT+02:00 Dan <srwx4096(a)gmail.com>:
>>>>
>>>> On Tue, Jun 20, 2017 at 04:24:32PM +0200, Gianluca Cecchi wrote:
>>>> > On Tue, Jun 20, 2017 at 3:38 PM, Dominik Psenner <dpsenner(a)gmail.com>
>>>> wrote:
>>>> >
>>>> > >
>>>> > > to the following:
>>>> > >
>>>> > > <disk type='file' device='disk'>
>>>> > > <driver name='qemu' type='qcow2' cache='none'/>
>>>> > > <source file='/var/data/virtuals/machines/windows-server-2016-
>>>> > > x64/image.qcow2'/>
>>>> > > <backingStore/>
>>>> > > <target dev='hda' bus='scsi'/>
>>>> > > <address type='drive' controller='0' bus='0' target='0' unit='0'/>
>>>> > > </disk>
>>>> > >
>>>> > > Do you see any gotchas in this configuration that could prevent the
>>>> > > virtualized guest to power on and boot up?
>>>> > >
>>>> > >
>>>> > When I configure like this, from a linux guest point of view I get
>>>> this
>>>> > Symbios Logic SCSI Controller:
>>>> > 00:08.0 SCSI storage controller: LSI Logic / Symbios Logic 53c895a
>>>> >
>>>> > But this is true only if you add the SCSI controller too, not only the
>>>> > disk definition.
>>>> > In my case
>>>> >
>>>> > <controller type='scsi' index='0'>
>>>> > <address type='pci' domain='0x0000' bus='0x00' slot='0x08'
>>>> > function='0x0'/>
>>>> > </controller>
>>>> >
>>>> > Note the slot='0x08', which is reflected in the first field of lspci
>>>> > inside my linux guest.
>>>> > So between your controllers you have to add the SCSI one
>>>> >
>>>> > In my case (Fedora 25 with virt-manager-1.4.1-2.fc25.noarch,
>>>> > qemu-kvm-2.7.1-6.fc25.x86_64, libvirt-2.2.1-2.fc25.x86_64) with "Disk bus"
>>>> > set as SCSI in virt-manager, the xml definition for the guest is
>>>> > automatically updated with the controller if it does not exist yet.
>>>> > And the disk definition sections is like this:
>>>> >
>>>> > <disk type='file' device='disk'>
>>>> > <driver name='qemu' type='qcow2'/>
>>>> > <source file='/var/lib/libvirt/images/slaxsmall.qcow2'/>
>>>> > <target dev='sda' bus='scsi'/>
>>>> > <boot order='1'/>
>>>> > <address type='drive' controller='0' bus='0' target='0'
>>>> unit='0'/>
>>>> > </disk>
>>>> >
>>>> > So I think you should set dev='sda' and not 'hda' in your xml for it
>>>> >
>>>>
>>>> I am actually very curious to know if that would make a difference. I
>>>> don't have such a Windows VM image ready to test at present.
>>>>
>>>> Dan
>>>> > I don't know if w2016 contains the Symbios Logic drivers already
>>>> > installed, so that a "simple" reboot could imply an automatic
>>>> > reconfiguration of the guest....
>>>> > Note also that in Windows when the hw configuration is considered
>>>> heavily
>>>> > changed, you could be asked to register again (I don't think that the
>>>> IDE
>>>> > --> SCSI should imply it...)
>>>> >
>>>> > Gianluca
>>>>
>>>> > _______________________________________________
>>>> > libvirt-users mailing list
>>>> > libvirt-users(a)redhat.com
>>>> > https://www.redhat.com/mailman/listinfo/libvirt-users
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Dominik Psenner
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Dominik Psenner
>>>>
>>>
>>>
>>>
>>> --
>>> Dominik Psenner
>>>
>>
>>
>>
>> --
>> Dominik Psenner
>>
>
>
>
> --
> Dominik Psenner
>
--
Dominik Psenner
[libvirt-users] libvirt: XML-RPC error : Cannot write data: Broken pipe
by netsurfed
Hi all,
I think this is a bug: when I call virDomainGetState after "service libvirtd stop", I receive the signal SIGPIPE.
I want to know how to avoid this problem. Will other interfaces also encounter this problem?
Below the bt information:
Below some information about my hypervisor:
root@ubuntu-05:/datapool/zhuohf# virsh -v
3.4.0
root@ubuntu-05:/datapool/zhuohf# qemu-x86_64 -version
qemu-x86_64 version 2.9.0
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
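One common way to keep a client from being killed when libvirtd goes away (shown here with the Python bindings as a sketch; the domain name is a placeholder, and this works around the symptom rather than explaining the cause) is to ignore SIGPIPE in the client process and handle the resulting libvirtError:
import signal
import libvirt

# Ignore SIGPIPE so writes to a dead libvirtd socket surface as libvirtError
# instead of terminating the process.
signal.signal(signal.SIGPIPE, signal.SIG_IGN)

conn = libvirt.open('qemu:///system')
try:
    dom = conn.lookupByName('somedomain')   # placeholder domain name
    state, reason = dom.state()
    print("state=%d reason=%d" % (state, reason))
except libvirt.libvirtError as e:
    print("libvirt call failed: %s" % e)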
[libvirt-users] ANNOUNCE: Oz 0.16.0 release
by Chris Lalancette
All,
I'm pleased to announce release 0.16.0 of Oz. Oz is a program for
doing automated installation of guest operating systems with limited input
from the user. Release 0.16.0 is a bugfix and feature release for Oz.
Some of the highlights between Oz 0.15.0 and 0.16.0 are:
* Windows 10 and 2016 support
* All timeouts are now configurable
* Ubuntu 16.04, 16.10, 17.04 support
* Mageia 2, 3, 4, 5 support
* Properly find UEFI firmware, which should fix aarch64 installs
* Fedora 24, 25, 26 support
* FreeBSD 11 support
* Replace internal use of pycurl with requests
* OpenSUSE Leap support
* Timeouts are now based on time, not the number of iterations of the loop
* Modern Fedora and RHEL guests will print out a lot more anaconda
debugging information to the terminal that oz-install was launched from
A tarball and zipfile of this release is available on the Github releases
page: https://github.com/clalancette/oz/releases . Packages for Fedora
rawhide and 26 have been built in Koji and will eventually make their way
to stable. Instructions on how to get and use Oz are available at
http://github.com/clalancette/oz/wiki .
If you have questions or comments about Oz, please feel free to contact me
at clalancette at gmail.com, or open up an issue on the github page:
http://github.com/clalancette/oz/issues .
Thanks to everyone who contributed to this release through bug reports,
patches, and suggestions for improvement.
Chris Lalancette
[libvirt-users] Avoiding console prints by Libvirt Qemu python APIs
by swaroop sp
Hi,
I am trying to check whether a domain exists by using the libvirt Python API
lookupByName(). If the domain does not exist, it prints an error message on
the console saying "Domain not found".
I need the errors or logs only in syslog. I have tried redirecting stderr
and stdout. But, it doesn't have any effect. I have also tried playing
around with the libvirt logging settings described in
https://libvirt.org/logging.html . No effect again. "stdio_handler" flag in
/etc/libvirt/qemu.conf is set to "file" as well.
Following is my test code:
import os, sys
import syslog
import libvirt

conn = libvirt.open('qemu:///system')

# Find the application in the virsh domain
try:
    sys.stdout = open(os.devnull, "w")
    sys.stderr = open(os.devnull, "w")
    dom = conn.lookupByName('abcd')
    sys.stdout = sys.__stdout__
    sys.stderr = sys.__stderr__
except Exception as e:
    syslog.syslog(syslog.LOG_ERR, 'Could not find the domain. ERROR: %s.' % (e))
    sys.stdout = sys.__stdout__
    sys.stderr = sys.__stderr__
Output:
$ python test.py
libvirt: QEMU Driver error : Domain not found: no domain with matching
name 'abcd'
$
Is there a way to avoid this console print?
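A common way to silence libvirt's default error reporting on stderr (offered as a sketch, not necessarily the only answer) is to install a libvirt error handler; the library then stops printing, and the failure is still raised as libvirtError, which can be forwarded to syslog:
import syslog
import libvirt

def libvirt_error_handler(ctx, err):
    # err is a list describing the error; do nothing here so nothing is
    # printed on the console, or forward it to syslog if desired.
    pass

libvirt.registerErrorHandler(libvirt_error_handler, None)

conn = libvirt.open('qemu:///system')
try:
    dom = conn.lookupByName('abcd')
except libvirt.libvirtError as e:
    syslog.syslog(syslog.LOG_ERR, 'Could not find the domain. ERROR: %s.' % e)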
Regards,
Swaroop
[libvirt-users] creating new vm with virt-manager, existing disk failure
by Marko Weber | 8000
Hello,
I rsynced a KVM VM from one host to another.
I start virt-manager and tell it to use an existing disk.
I set the CPU to Haswell, which is what the host has.
The option to configure before starting is set, and I click "Begin Installation".
I get this output from virt-manager:
Unable to complete install: 'internal error: process exited while
connecting to monitor: 2017-07-19T09:27:10.861928Z qemu-system-x86_64:
can't apply global Haswell-x86_64-cpu.cmt=on: Property '.cmt' not found'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 88, in
cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/create.py", line 2288, in
_do_async_install
guest.start_install(meter=meter)
File "/usr/share/virt-manager/virtinst/guest.py", line 477, in
start_install
doboot, transient)
File "/usr/share/virt-manager/virtinst/guest.py", line 405, in
_create_guest
self.domain.create()
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1062, in
create
if ret == -1: raise libvirtError ('virDomainCreate() failed',
dom=self)
libvirtError: internal error: process exited while connecting to
monitor: 2017-07-19T09:27:10.861928Z qemu-system-x86_64: can't apply
global Haswell-x86_64-cpu.cmt=on: Property '.cmt' not found
Trying to set it to another CPU and clicking start, I get this:
Unable to complete install: 'Domain has already been started!'
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 88, in
cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/create.py", line 2288, in
_do_async_install
guest.start_install(meter=meter)
File "/usr/share/virt-manager/virtinst/guest.py", line 455, in
start_install
raise RuntimeError(_("Domain has already been started!"))
RuntimeError: Domain has already been started!
How do I stop the already running VM? virsh list doesn't show the VM.
And why do I get the error with Haswell?
lscpu:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz
Stepping: 1
CPU MHz: 2974.658
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 7191.87
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0-11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe
syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good
nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor
ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1
sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand
lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt tpr_shadow vnmi
flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms
invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc
cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
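Not from the original message, but a rough way to check whether a half-started guest is lingering and to stop it (assuming the usual qemu:///system connection; VMNAME is a placeholder):
$ virsh --connect qemu:///system list --all      ## also shows transient and shut-off domains
$ pgrep -af qemu-system-x86_64                   ## any stray QEMU process left behind?
$ virsh --connect qemu:///system destroy VMNAME    ## force-stop the guest if it is listed
$ virsh --connect qemu:///system undefine VMNAME   ## drop a half-created definition if needed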
kind regards
marko
--
zbfmail - Mittendrin statt nur Datei!
[libvirt-users] Understanding the contents of virsh dump --memory-only
by Arnabjyoti Kalita
Hello,
I was trying to understand the ELF file generated by the virsh dump
(--memory-only) command. I have successfully generated a dump of the VM
memory using this command.
I am specifically trying to understand the loadable segments of this ELF
file.
I ran readelf -a <filename> to get the information I need. Below are the
details of the loadable segments in a more readable format:
Loading ELF header #1. offset: 1320 filesize: 655360 memsize: 655360 vaddr: 0 paddr: 0 align: 0 flags: 0
Loading ELF header #2. offset: 656680 filesize: 65536 memsize: 65536 vaddr: 0 paddr: a0000 align: 0 flags: 0
Loading ELF header #3. offset: 722216 filesize: 1072955392 memsize: 1072955392 vaddr: 0 paddr: c0000 align: 0 flags: 0
Loading ELF header #4. offset: 1073677608 filesize: 67108864 memsize: 67108864 vaddr: 0 paddr: f4000000 align: 0 flags: 0
Loading ELF header #5. offset: 1140786472 filesize: 67108864 memsize: 67108864 vaddr: 0 paddr: f8000000 align: 0 flags: 0
Loading ELF header #6. offset: 1207895336 filesize: 8192 memsize: 8192 vaddr: 0 paddr: fc054000 align: 0 flags: 0
Loading ELF header #7. offset: 1207903528 filesize: 262144 memsize: 262144 vaddr: 0 paddr: fffc0000 align: 0 flags: 0
I wanted to know why, in this case, the virtual address (denoted by vaddr) is
0 for each of the loadable segments. Will it be okay if I load the ELF file
taking the values of the physical address (denoted by paddr) into account?
Specifically, after loading the file, can I be certain that all of the
contents will have been loaded starting at memory address 0? Will the loaded
contents be present at the exact locations specified by paddr here?
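For illustration, a small sketch (assuming the pyelftools package and a placeholder file name) that walks the program headers and prints where each range of the dump lives: in these memory-only dumps the paddr field generally carries the guest-physical address of the segment, which is why vaddr can be left at 0.
from elftools.elf.elffile import ELFFile   # assumes pyelftools is installed

with open('dump.elf', 'rb') as f:          # path to the virsh dump output (placeholder)
    elf = ELFFile(f)
    for seg in elf.iter_segments():
        if seg['p_type'] != 'PT_LOAD':
            continue
        # p_offset: where the segment's data sits inside the dump file
        # p_paddr:  the guest-physical address the data was taken from
        print("file offset 0x%x -> guest phys 0x%x, size 0x%x"
              % (seg['p_offset'], seg['p_paddr'], seg['p_filesz']))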
Thanks and Regards.
Arnab
[libvirt-users] Libvirt fails on network disk with ISCSI protocol
by Fred Rolland
Hi,
I am working on oVirt, and I am trying to run a VM with a network disk using
the iSCSI protocol (the storage is on a Cinder server).
Here is the disk XML I use:
<disk device="disk" snapshot="no" type="network">
<address bus="0" controller="0" target="0" type="drive"
unit="0" />
<source
name="iqn.2010-10.org.openstack:volume-37fea687-040c-4a88-844c-117d1a90e9b2"
protocol="iscsi">
<host name="10.35.0.20" port="3260" />
</source>
<target bus="scsi" dev="sda" />
<boot order="1" />
<driver cache="none" error_policy="stop" io="threads"
name="qemu" type="raw" />
</disk>
I get the following error:
libvirtError: internal error: process exited while connecting to monitor:
2017-08-02T14:38:58.378430Z qemu-kvm: -drive file=iscsi://
10.35.0.20:3260/iqn.2010-10.org.openstack%3Avolume-37fea687-040c-4a88-844c-117d1a90e9b2,format=raw,if=none,id=drive-scsi0-0-0-0,cache=none,werror=stop,rerror=stop,aio=threads:
Failed to parse URL : iscsi://
10.35.0.20:3260/iqn.2010-10.org.openstack%3Avolume-37fea687-040c-4a88-844c-117d1a90e9b2
It seems that the ':' character got changed on the way to %3A.
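For what it's worth, %3A is just the standard percent-encoding of ':', so the URL itself is not corrupted as such; whether the qemu iscsi driver accepts the encoded form is a separate question and likely depends on the qemu/libiscsi build in use (that part is an assumption). A quick illustration of the encoding:
from urllib.parse import quote, unquote

iqn = "iqn.2010-10.org.openstack:volume-37fea687-040c-4a88-844c-117d1a90e9b2"
print(quote(iqn, safe=''))   # ':' becomes %3A, everything else is unchanged
print(unquote("iqn.2010-10.org.openstack%3Avolume-37fea687-040c-4a88-844c-117d1a90e9b2"))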
Any suggestions on the root cause of this issue?
Thanks,
Fred Rolland