[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an RBD pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using "virsh edit ceph-test.powercraft.nl" and creating the
disk manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains. How
can I use virt-install to manage my rbd disks?
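For reference, the pool-based workflow being asked about would look roughly like this (a sketch only, not verified against virt-install 1.0.1; the `--disk vol=pool/volume` spelling and the sizes are assumptions, with names taken from the XML above):

```shell
# Create a volume inside the already-defined rbd pool (size is illustrative):
virsh vol-create-as myrbdpool kvm01-storage 20G

# Then reference the pool volume at install time instead of hand-editing
# the domain XML afterwards:
virt-install --name kvm01 --memory 2048 \
    --disk vol=myrbdpool/kvm01-storage,bus=virtio \
    --import
```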
Kind regards,
Jelle de Jong
6 years, 1 month
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where I had about 30 guests get duplicate mac
addresses assigned. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate mac addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
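The effect is easy to reproduce outside libvirt. A small shell sketch of the same seed computation, `time(NULL) ^ getpid()`, for daemons started in the same second with adjacent PIDs (the PID values are illustrative):

```shell
#!/bin/sh
# Simulate libvirt's RNG seed for four libvirtd processes that start in
# the same second with nearby PIDs. XOR with adjacent PIDs only flips the
# low bits of the timestamp, so the seeds come out nearly identical.
now=$(date +%s)
for pid in 1000 1001 1002 1003; do
    echo "pid=$pid seed=$(( now ^ pid ))"
done
```

With an identical timestamp, the seeds for PIDs 1000-1003 differ only in their two lowest bits, which is exactly the kind of near-collision that makes a weak PRNG hand out duplicate MAC addresses.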
6 years, 4 months
[libvirt-users] Increasing video memory available to Windows
by Alex
Hi,
I have a Fedora 25 system with a Windows 10 guest and would like to use
it for Photoshop. However, it complains that the video memory is too low.
I'm using the QXL driver and it appears to be limited to 256MB? I've
installed the Red Hat QXL driver in Windows.
I have 4GB of memory allocated overall, and could allocate more if necessary.
How do I increase the available video memory? Photoshop reports that
it's detected less than 512MB and isn't operating properly.
When I view "Adapter Properties", it reports there are 2047 MB of
available graphics memory and 2047 MB of shared system memory.
Attempts to modify the vram and other video-related settings for the
guest with "virsh edit" have failed.
My qemu XML configuration is included below. Any ideas are greatly appreciated.
<domain type='kvm'>
  <name>alex-win10</name>
  <uuid>337f9410-3286-4ef5-a3e8-8271e38ea1e5</uuid>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.4'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
    <vmport state='off'/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/alex-win10.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <boot order='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sr0'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='scsi' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:52:6b:61'/>
      <source bridge='br0'/>
      <model type='rtl8139'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <image compression='off'/>
    </graphics>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='536870912' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='3'/>
    </redirdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>
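For what it's worth, the QXL sizes in the <video> element are given in KiB, so vram='65536' is only 64 MiB. A hedged sketch of raising the guest's video memory (the values below are illustrative assumptions; whether Windows sees more than 256 MB also depends on the QXL guest driver):

```shell
# Edit the domain XML and change the <video> model line to something like
# (values are in KiB, so 262144 = 256 MiB):
#   <model type='qxl' ram='262144' vram='262144' vgamem='65536'
#          heads='1' primary='yes'/>
virsh edit alex-win10
# The new sizes take effect on the next cold boot of the guest:
virsh shutdown alex-win10
virsh start alex-win10
```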
7 years, 1 month
[libvirt-users] creating new vm with virt-manager, existing disk failure
by Marko Weber | 8000
Hello,
I rsynced a KVM VM from one host to another.
I start virt-manager and tell it to use an existing disk.
I set the CPU to Haswell, which is what the host has.
"Configure before install" is set, and I click "Begin installation".
virt-manager gives me this output:
Unable to complete install: 'internal error: process exited while
connecting to monitor: 2017-07-19T09:27:10.861928Z qemu-system-x86_64:
can't apply global Haswell-x86_64-cpu.cmt=on: Property '.cmt' not found'
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 88, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/create.py", line 2288, in _do_async_install
    guest.start_install(meter=meter)
  File "/usr/share/virt-manager/virtinst/guest.py", line 477, in start_install
    doboot, transient)
  File "/usr/share/virt-manager/virtinst/guest.py", line 405, in _create_guest
    self.domain.create()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1062, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: process exited while connecting to
monitor: 2017-07-19T09:27:10.861928Z qemu-system-x86_64: can't apply
global Haswell-x86_64-cpu.cmt=on: Property '.cmt' not found
Trying to set it to another CPU and clicking start, I get this:
Unable to complete install: 'Domain has already been started!'
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 88, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/create.py", line 2288, in _do_async_install
    guest.start_install(meter=meter)
  File "/usr/share/virt-manager/virtinst/guest.py", line 455, in start_install
    raise RuntimeError(_("Domain has already been started!"))
RuntimeError: Domain has already been started!
How do I stop the already running VM? "virsh list" doesn't show it.
And why do I get the error on Haswell?
lscpu:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz
Stepping: 1
CPU MHz: 2974.658
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 7191.87
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0-11
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe
syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good
nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor
ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1
sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand
lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt tpr_shadow vnmi
flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms
invpcid rtm cqm rdt_a rdseed adx smap xsaveopt cqm_llc cqm_occup_llc
cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
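Two hedged sketches relevant to the questions above: "virsh list" only shows running domains, and the failing cmt flag can often be masked in the guest CPU definition (the domain name below is a placeholder, and the CPU XML is an untested assumption):

```shell
# A failed install can leave a defined or even running domain behind that
# the plain listing hides; check stopped domains too:
virsh list --all
# Remove the half-created domain so virt-manager can retry cleanly:
virsh destroy myvm 2>/dev/null
virsh undefine myvm

# For the Haswell error, one workaround sketch via "virsh edit myvm":
# disable the feature QEMU rejects in the <cpu> element:
#   <cpu mode='custom' match='exact'>
#     <model fallback='allow'>Haswell</model>
#     <feature policy='disable' name='cmt'/>
#   </cpu>
```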
kind regards
marko
--
zbfmail - Mittendrin statt nur Datei!
7 years, 2 months
[libvirt-users] Changing <on_reboot> in the domain XML file
by Sam Varshavchik
Using virsh edit, I made the following change to the domain XML file:
<on_reboot>destroy</on_reboot>
But this appears to do nothing. The VM guest is Windows 10, and when
directing it to reboot, it still reboots, instead of shutting off the VM.
I was trying to work around some kind of a bug, somewhere, that started
happening after updating to Fedora 26 and qemu 2.9, where a reboot throws my
Windows 10 guests into some kind of a bizarre automatic recovery mode, which
then claims that the hard drive is hosed. It's not, and a forced shutdown
followed by a cold start boots everything back up like nothing has happened
(which I didn't figure out until reinstalling one of the guests, sigh...),
and everything is peachy once again.
So, anyway, I was trying to figure out a way around it, and
<on_reboot>destroy</on_reboot> seemed to be exactly what I was looking for.
Perusing qemu's man page, it seems that this option should result in a
-no-reboot option getting added to qemu's command line. But, looking at
the actual command line after starting the VM, it's nowhere to be seen.
A bit more Google-fu found this:
http://blog.vmsplice.net/2011/04/how-to-pass-qemu-command-line-options.html
And I manually added a -no-reboot option to the domain XML file that way.
And you know what? After doing that, telling Windows 10 to reboot simply
shuts off the VM. Wonderful.
But that still leaves me wondering: what's up with the <on_reboot> tag? Why
didn't it work? Is this a PEBCAK, or something worth throwing a ticket for?
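For completeness, the command-line passthrough from the vmsplice.net post looks roughly like this (a sketch; the guest name is a placeholder, and note that the xmlns:qemu declaration on <domain> is required or libvirt silently drops the qemu:* elements):

```shell
# Added via "virsh edit <domain>":
#   <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
#     ...
#     <qemu:commandline>
#       <qemu:arg value='-no-reboot'/>
#     </qemu:commandline>
#   </domain>
virsh edit win10-guest
```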
7 years, 2 months
[libvirt-users] libvirt virDomainDestroy
by llilulu
When stopping a QEMU VM via the libvirt API, if I call virDomainShutdown I can see the VM's shutdown process. But if I call virDomainDestroyFlags with VIR_DOMAIN_DESTROY_GRACEFUL, I can't see the VM shut down. virDomainDestroyFlags with VIR_DOMAIN_DESTROY_GRACEFUL sends the VM driver process a SIGTERM; how does the VM driver process (such as qemu) handle that SIGTERM signal?
Thanks
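The virsh equivalents of the two calls make the difference visible (a sketch; the domain name is a placeholder):

```shell
# virDomainShutdown: asks the guest (via ACPI or the guest agent) to shut
# itself down, so a normal in-guest shutdown sequence is visible:
virsh shutdown mydom

# virDomainDestroyFlags with VIR_DOMAIN_DESTROY_GRACEFUL: sends SIGTERM to
# the QEMU process itself. The guest OS never runs its shutdown path; QEMU
# just flushes and exits, which is why no shutdown is visible on the console:
virsh destroy mydom --graceful
```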
7 years, 2 months
[libvirt-users] Libvirt and dnsmasq
by Erik Lukács
Hi guys, I have a question about libvirt and dnsmasq.
According to the documentation, libvirt starts a dnsmasq process for each virtual network, listening on that network's interface.
My setup looks like this: in order to rebuild a customer's setup, I created a VM with several interfaces (machine1) and an API server with two interfaces (machine2). Two of these networks, net_ext and net_int, are attached to both VMs. One is meant for internet access (net_ext), the other for intercommunication (net_int). The "internet" network is routed via another virtual network (net_infra) which is attached to neither VM. The outgoing gateway, NTP server and DNS server used by the VMs are configured on that network.
This is driving me crazy: DNS resolution on machine1 does not work until I either kill the dnsmasq process that listens on net_infra (which makes the system-wide dnsmasq also answer on that interface) or make the config changes mentioned below (which only persist until the host is rebooted).
Both VMs and the host run CentOS 7.
My problem is that dnsmasq runs on every virtual network, and each process listens only on its own interface. All changes are undone by a reboot.
At runtime (followed by a dnsmasq restart) I can add "interface=net_infra,net_ext" to the config, which temporarily fixes my problem.
Another thing I could do is kill the per-network dnsmasq process and use the host's own dnsmasq config.
Without these changes, DNS resolution doesn't work within my machines (as written, the DNS server MUST be an IP on net_infra, which must not be attached to both VMs).
So my question: how do I either configure libvirt not to start dnsmasq for each network, or configure the dnsmasq config for net_infra to also listen on net_ext? "chattr +i /var/lib/libvirt/dnsmasq/net_infra.conf" is not an option!
Thanks in advance!
Erik
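One direction worth checking, though it is an assumption and not verified for the libvirt version in use here: newer libvirt can pass raw options through to a network's dnsmasq via an XML namespace, which would make the "interface=" change persistent:

```shell
# Via "virsh net-edit net_infra": declare the dnsmasq namespace on
# <network> and append the raw option (sketch; namespace URI assumed):
#   <network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
#     ...
#     <dnsmasq:options>
#       <dnsmasq:option value='interface=net_ext'/>
#     </dnsmasq:options>
#   </network>
virsh net-edit net_infra
# Restart the network so its dnsmasq is relaunched with the new option:
virsh net-destroy net_infra && virsh net-start net_infra
```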
7 years, 2 months
Re: [libvirt-users] Xen died - Fedora upgrade from 21 to 26
by G Crowe
My server was installed as a "minimal install" in text mode, so there
may be some difference between my setup and your laptop. Are you able to
email the output of "systemctl list-unit-files" to me so that I can see
what services are enabled?
I tried installing a couple of the other drivers mentioned in the log
file...
# dnf install libvirt-daemon-driver-lxc
# dnf install libvirt-daemon-driver-uml
# dnf install libvirt-daemon-driver-vbox
But this made no difference.
I then tried disabling xenconsole.service, which fixed some things and
broke others.
- reboots now work (I don't have to hard power cycle - it was locking up
when trying to kill both the libvirtd and the xenconsole services)
- "virsh list" now works (shows an empty list)
- "xl list" now gives an error instead of just locking up (the same
error as if libvirtd wasn't running - see below).
- "virsh define vmtest.xml" now gives an error (the same error that it
gave before I installed libvirt-daemon-driver-libxl - see below)
- libvirtd.service starts fine on bootup. Manually restarting libvirtd
does not resolve the above problems.
Note that on my Fedora 21 machine, it has libvirtd.service and
xendomains.service and xenconsoled.service all running simultaneously,
and it has been running for years without issues (even on reboots)
Is there any documentation on what needs to be installed and/or running
to use Xen, and what conflicts?
Or is there any documentation on how Xen should be installed for the
current version of Fedora?
Thank you
GC
# virsh define vmtest.xml
error: Failed to define domain from vmtest.xml
error: invalid argument: could not find capabilities for arch=x86_64
domaintype=xen
# xl list
xencall: error: Could not obtain handle on privileged command interface:
No such file or directory
libxl: error: libxl.c:108:libxl_ctx_alloc: cannot open libxc handle: No
such file or directory
cannot init xl context
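That xencall error usually means the Xen privileged control interface isn't available to the tools at all. A quick check sketch (the systemd unit names are an assumption about Fedora's Xen packaging):

```shell
# If /dev/xen/privcmd is missing, the kernel was not booted under the Xen
# hypervisor (or the xen backend modules are not loaded):
ls -l /dev/xen/privcmd

# xl/libxl also need Xen's control daemons running:
systemctl status xenstored.service xenconsoled.service
```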
On 27/07/2017 11:40 AM, Alvin Starr wrote:
> I just tried my laptop and booted xen and can virsh list and xl seem
> to work.
>
>
> On 07/26/2017 06:27 PM, G Crowe wrote:
>> Alvin,
>> Thanks for that. I disabled the xendomains service (I had to do
>> it twice, as it seemed to re-enable itself after the first reboot)
>> and it now starts libvirtd automatically on startup.
>>
>> However, all is not well!
>> "xl list" and "virsh list" will just hang
>> "systemctl stop libvirtd" just hangs and after this "ps aux" lists
>> the process as defunct
>> root 673 0.0 0.0 0 0 ? Zsl 08:04 0:00
>> [libvirtd] <defunct>
>> "kill -KILL 673" has no effect
>> "shutdown -r now" also hangs (I have to physically power off the
>> machine) irrespectively of whether I have tried to stop the service
>> before shutdown or not.
>>
>> None of this happened yesterday when I manually started libvirtd
>> after bootup (when xendomains was still running).
>>
>> The logs show nothing in error or warning category, just a heap of
>> info (I have the logs set to level 2). Is there anything in
>> particular that I should be looking for in the logs?
>>
>>
>>
>> Regarding the use of a VM to access the host in a remote location...
>> I had considered this and there was little choice. The machine is on
>> the end of a VDSL line and was installed to consolidate a few servers
>> in that location in order to save power. I figured that automatically
>> starting a VM that ran the networking was not much more risk than a
>> physical server that had to start up its networking, and that has (so
>> far) proven to be true with all of the remote restarts & power
>> outages to date. The only other option is an expensive second link to
>> the host, and I've decided to live with the possibility of having to
>> talk someone through a restart over the phone rather than go down
>> that route.
>>
>> Interestingly, with the NBN in Australia (the country-wide rollout of
>> fibre or broadband to every premises) they have effectively converted
>> all our phone lines to VoIP. This has resulted in the old physical
>> landline now requiring one of the VMs to be running as well, so if I
>> ever need to talk someone through a restart it will need to be on a
>> mobile phone.
>>
>>
>> Thank you
>>
>> GC
>>
>>
>> On 27/07/2017 5:47 AM, Alvin Starr wrote:
>>> I addressed the fix to Jim but I should have addressed it to you.
>>>
>>>
>>> Try systemctl disable xendomains.service
>>>
>>> It is conflicting with libvirtd.service.
>>> I found it by using
>>> "systemd-analyze dot libvirtd.service | dot -Tgif > /tmp/dot1.gif"
>>>
>>>
>>> It looks like libvirt-daemon-driver-xen or
>>> libvirt-daemon-driver-libxl should disable the xendomains.service
>>>
>>>
>>> On 07/26/2017 06:46 AM, G Crowe wrote:
>>>> Jim,
>>>> Thanks for that, I had manually installed
>>>> libvirt-daemon-driver-xen, but also needed to install
>>>> libvirt-daemon-driver-libxl. I can now create VMs and convert
>>>> config formats.
>>>>
>>>> However the daemon still fails to start on bootup. It starts fine
>>>> when I manually start it with "systemctl start libvirtd" but
>>>> setting it to autostart with "systemctl enable libvirtd" seems to
>>>> have no effect. When I look at the status, it tells me that the
>>>> service is "enabled" which means that it starts on bootup (well,
>>>> that's what it means for any other service).
>>>>
>>>> This is rather critical on this PC, as it has unattended restarts
>>>> and the VPN/routing is done in one of the VMs (i.e. I can't get to
>>>> it unless it auto-starts at least one VM).
>>>>
>>>> Unfortunately there is nothing appearing in the libvirtd log over a
>>>> reboot to help diagnose.
>>>>
>>>> Any suggestions?
>>>>
>>>>
>>>> Thank you
>>>>
>>>> GC
>>>>
>>>>
>>>> [root@testhost ~]# systemctl status libvirtd
>>>> ● libvirtd.service - Virtualization daemon
>>>> Loaded: loaded (/usr/lib/systemd/system/libvirtd.service;
>>>> enabled; vendor preset: enabled)
>>>> Active: inactive (dead)
>>>> Docs: man:libvirtd(8)
>>>> http://libvirt.org
>>>>
>>>>
>>>>
>>>>
>>>> On 25/07/2017 1:20 AM, Jim Fehlig wrote:
>>>>> On 07/23/2017 04:25 PM, G Crowe wrote:
>>>>>> Hi,
>>>>>> I am trying to upgrade my Xen host (Dom0) and am having
>>>>>> trouble getting it to work.
>>>>>>
>>>>>> I think that it has booted into a kernel that supports Xen
>>>>>> (running 'xl info' does list some Xen capabilities), but I have
>>>>>> three problems (that I have found so far).
>>>>>>
>>>>>> Firstly, the "libvirtd" daemon doesn't start on bootup (and as a
>>>>>> result all 'virsh' commands fail). It is set to auto-start
>>>>>> (systemctl enable libvirtd), and can be manually started
>>>>>> (systemctl start libvirtd), but it will not auto-start on reboot.
>>>>>>
>>>>>> Secondly, Once I have manually started libvirtd, when I try to
>>>>>> define a domain it gives me an error "could not find capabilities
>>>>>> for arch=x86_64 domaintype=xen" and I haven't yet been able to
>>>>>> define any domains. This domain type works fine on Fedora 21.
>>>>>>
>>>>>> Thirdly, I am unable to convert to/from xml config format, it
>>>>>> gives me the error "error: invalid argument: unsupported config
>>>>>> type xen-xl" however the format "xen-xl" works fine on the Fedora
>>>>>> 21 machine.
>>>>>
>>>>> It sounds like the libvirt libxl driver is not loaded. Is the
>>>>> libvirt-daemon-driver-libxl package installed? If it's installed,
>>>>> enabling debug logging in libvirtd can provide hints why it is not
>>>>> loading
>>>>>
>>>>> http://libvirt.org/logging.html
>>>>>
>>>>> Regards,
>>>>> Jim
>>>>>
>>>>>>
>>>>>> I had these same issues when I tried to upgrade to Fedora 25 and
>>>>>> assumed that something had been broken and so abandoned further
>>>>>> attempts to upgrade, however since Fedora 26 is the same I am now
>>>>>> assuming that I have stuffed something up myself (or missed
>>>>>> something).
>>>>>>
>>>>>> Fedora 21 uses kernel 3.19.3 and xen 4.4.1
>>>>>> Fedora 26 uses kernel 4.11.8 and xen 4.8.1
>>>>>>
>>>>>> I have tried following the info on
>>>>>> https://wiki.xen.org/wiki/Fedora_Host_Installation but it appears
>>>>>> to be out of date now (I used this site when I started using Xen
>>>>>> under Fedora 19, and when I upgraded to Fedora 21)
>>>>>>
>>>>>> Does anyone have any suggestions? outputs from "xl info" and the
>>>>>> domain config are below. I have also tried disabling SELinux, but
>>>>>> it made no difference.
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> GC
>>>>>>
>>>>>> -----------------------------------------------
>>>>>> On the fedora 26 box.....
>>>>>> # xl info
>>>>>> host : family.mydomain.mytld
>>>>>> release : 4.11.8-300.fc26.x86_64
>>>>>> version : #1 SMP Thu Jun 29 20:09:48 UTC 2017
>>>>>> machine : x86_64
>>>>>> nr_cpus : 4
>>>>>> max_cpu_id : 3
>>>>>> nr_nodes : 1
>>>>>> cores_per_socket : 4
>>>>>> threads_per_core : 1
>>>>>> cpu_mhz : 2712
>>>>>> hw_caps :
>>>>>> b7ebfbff:77faf3bf:2c100800:00000121:0000000f:009c67af:00000000:00000100
>>>>>>
>>>>>> virt_caps : hvm hvm_directio
>>>>>> total_memory : 8072
>>>>>> free_memory : 128
>>>>>> sharing_freed_memory : 0
>>>>>> sharing_used_memory : 0
>>>>>> outstanding_claims : 0
>>>>>> free_cpus : 0
>>>>>> xen_major : 4
>>>>>> xen_minor : 8
>>>>>> xen_extra : .1
>>>>>> xen_version : 4.8.1
>>>>>> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p
>>>>>> hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>>>>>> xen_scheduler : credit
>>>>>> xen_pagesize : 4096
>>>>>> platform_params : virt_start=0xffff800000000000
>>>>>> xen_changeset :
>>>>>> xen_commandline : placeholder
>>>>>> cc_compiler : gcc (GCC) 7.0.1 20170421 (Red Hat
>>>>>> 7.0.1-0.15)
>>>>>> cc_compile_by : mockbuild
>>>>>> cc_compile_domain : [unknown]
>>>>>> cc_compile_date : Wed May 3 21:23:49 UTC 2017
>>>>>> build_id : 1c6e5a40165e05837303942b54757ae1f2d5033d
>>>>>> xend_config_format : 4
>>>>>>
>>>>>> ---------------------------------------------------
>>>>>> # cat vmtest.xml
>>>>>> <domain type='xen' id='21'>
>>>>>> <name>testVM</name>
>>>>>> <memory unit='KiB'>2097152</memory>
>>>>>> <currentMemory unit='KiB'>2097152</currentMemory>
>>>>>> <vcpu placement='static'>1</vcpu>
>>>>>> <os>
>>>>>> <type arch='x86_64' machine='xenfv'>hvm</type>
>>>>>> <loader type='rom'>/usr/lib/xen/boot/hvmloader</loader>
>>>>>> <boot dev='network'/>
>>>>>> </os>
>>>>>> <features>
>>>>>> <acpi/>
>>>>>> <apic/>
>>>>>> <pae/>
>>>>>> </features>
>>>>>> <clock offset='variable' adjustment='0' basis='utc'/>
>>>>>> <on_poweroff>destroy</on_poweroff>
>>>>>> <on_reboot>restart</on_reboot>
>>>>>> <on_crash>restart</on_crash>
>>>>>> <devices>
>>>>>> <interface type='bridge'>
>>>>>> <mac address='02:02:00:03:00:00'/>
>>>>>> <source bridge='enp1s0'/>
>>>>>> <script path='vif-bridge'/>
>>>>>> </interface>
>>>>>> <serial type='pty'>
>>>>>> <target port='0'/>
>>>>>> </serial>
>>>>>> <console type='pty'>
>>>>>> <target type='serial' port='0'/>
>>>>>> </console>
>>>>>> <input type='mouse' bus='ps2'/>
>>>>>> <input type='keyboard' bus='ps2'/>
>>>>>> <graphics type='vnc' port='5901' autoport='no'
>>>>>> listen='192.168.131.54'>
>>>>>> <listen type='address' address='192.168.131.54'/>
>>>>>> </graphics>
>>>>>> </devices>
>>>>>> </domain>
>>>>>>
>>>>>> --------------------------------------
>>>>>>
>>>>>> _______________________________________________
>>>>>> libvirt-users mailing list
>>>>>> libvirt-users(a)redhat.com
>>>>>> https://www.redhat.com/mailman/listinfo/libvirt-users
>>>>>>
>>>>>
>>>>
>>>> _______________________________________________
>>>> libvirt-users mailing list
>>>> libvirt-users(a)redhat.com
>>>> https://www.redhat.com/mailman/listinfo/libvirt-users
>>>
>>
>
7 years, 2 months
Re: [libvirt-users] Xen died - Fedora upgrade from 21 to 26
by G Crowe
Alvin,
Thanks for that. I disabled the xendomains service (I had to do it
twice, as it seemed to re-enable itself after the first reboot) and it
now starts libvirtd automatically on startup.
However, all is not well!
"xl list" and "virsh list" will just hang
"systemctl stop libvirtd" just hangs and after this "ps aux" lists the
process as defunct
root 673 0.0 0.0 0 0 ? Zsl 08:04 0:00
[libvirtd] <defunct>
"kill -KILL 673" has no effect
"shutdown -r now" also hangs (I have to physically power off the
machine) irrespectively of whether I have tried to stop the service
before shutdown or not.
None of this happened yesterday when I manually started libvirtd after
bootup (when xendomains was still running).
The logs show nothing in error or warning category, just a heap of info
(I have the logs set to level 2). Is there anything in particular that I
should be looking for in the logs?
Regarding the use of a VM to access the host in a remote location... I
had considered this and there was little choice. The machine is on the
end of a VDSL line and was installed to consolidate a few servers in
that location in order to save power. I figured that automatically
starting a VM that ran the networking was not much more risk than a
physical server that had to start up its networking, and that has (so
far) proven to be true with all of the remote restarts & power outages
to date. The only other option is an expensive second link to the host,
and I've decided to live with the possibility of having to talk someone
through a restart over the phone rather than go down that route.
Interestingly, with the NBN in Australia (the country-wide rollout of
fibre or broadband to every premises) they have effectively converted
all our phone lines to VoIP. This has resulted in the old physical
landline now requiring one of the VMs to be running as well, so if I
ever need to talk someone through a restart it will need to be on a
mobile phone.
Thank you
GC
On 27/07/2017 5:47 AM, Alvin Starr wrote:
> I addressed the fix to Jim but I should have addressed it to you.
>
>
> Try systemctl disable xendomains.service
>
> It is conflicting with libvirtd.service.
> I found it by using
> "systemd-analyze dot libvirtd.service | dot -Tgif > /tmp/dot1.gif"
>
>
> It looks like libvirt-daemon-driver-xen or
> libvirt-daemon-driver-libxl should disable the xendomains.service
>
>
> On 07/26/2017 06:46 AM, G Crowe wrote:
>> Jim,
>> Thanks for that, I had manually installed
>> libvirt-daemon-driver-xen, but also needed to install
>> libvirt-daemon-driver-libxl. I can now create VMs and convert config
>> formats.
>>
>> However the daemon still fails to start on bootup. It starts fine
>> when I manually start it with "systemctl start libvirtd" but setting
>> it to autostart with "systemctl enable libvirtd" seems to have no
>> effect. When I look at the status, it tells me that the service is
>> "enabled" which means that it starts on bootup (well, that's what it
>> means for any other service).
>>
>> This is rather critical on this PC, as it has unattended restarts and
>> the VPN/routing is done in one of the VMs (i.e. I can't get to it
>> unless it auto-starts at least one VM).
>>
>> Unfortunately there is nothing appearing in the libvirtd log over a
>> reboot to help diagnose.
>>
>> Any suggestions?
>>
>>
>> Thank you
>>
>> GC
>>
>>
>> [root@testhost ~]# systemctl status libvirtd
>> ● libvirtd.service - Virtualization daemon
>> Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled;
>> vendor preset: enabled)
>> Active: inactive (dead)
>> Docs: man:libvirtd(8)
>> http://libvirt.org
>>
>>
>>
>>
>> On 25/07/2017 1:20 AM, Jim Fehlig wrote:
>>> On 07/23/2017 04:25 PM, G Crowe wrote:
>>>> Hi,
>>>> I am trying to upgrade my Xen host (Dom0) and am having
>>>> trouble getting it to work.
>>>>
>>>> I think that it has booted into a kernel that supports Xen (running
>>>> 'xl info' does list some Xen capabilities), but I have three
>>>> problems (that I have found so far).
>>>>
>>>> Firstly, the "libvirtd" daemon doesn't start on bootup (and as a
>>>> result all 'virsh' commands fail). It is set to auto-start
>>>> (systemctl enable libvirtd), and can be manually started (systemctl
>>>> start libvirtd), but it will not auto-start on reboot.
>>>>
>>>> Secondly, Once I have manually started libvirtd, when I try to
>>>> define a domain it gives me an error "could not find capabilities
>>>> for arch=x86_64 domaintype=xen" and I haven't yet been able to
>>>> define any domains. This domain type works fine on Fedora 21.
>>>>
>>>> Thirdly, I am unable to convert to/from xml config format, it gives
>>>> me the error "error: invalid argument: unsupported config type
>>>> xen-xl" however the format "xen-xl" works fine on the Fedora 21
>>>> machine.
>>>
>>> It sounds like the libvirt libxl driver is not loaded. Is the
>>> libvirt-daemon-driver-libxl package installed? If it's installed,
>>> enabling debug logging in libvirtd can provide hints why it is not
>>> loading
>>>
>>> http://libvirt.org/logging.html
>>>
>>> Regards,
>>> Jim
>>>
>> _______________________________________________
>> libvirt-users mailing list
>> libvirt-users(a)redhat.com
>> https://www.redhat.com/mailman/listinfo/libvirt-users
>
7 years, 2 months
[libvirt-users] Xen died - Fedora upgrade from 21 to 26
by G Crowe
Hi,
I am trying to upgrade my Xen host (Dom0) and am having trouble
getting it to work.
I think that it has booted into a kernel that supports Xen (running 'xl
info' does list some Xen capabilities), but I have three problems (that
I have found so far).
Firstly, the "libvirtd" daemon doesn't start on bootup (and as a result
all 'virsh' commands fail). It is set to auto-start (systemctl enable
libvirtd), and can be manually started (systemctl start libvirtd), but
it will not auto-start on reboot.
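(A first step for the auto-start problem, assuming the default Fedora
systemd/journald setup, is to ask systemd whether the unit was attempted
at boot and what stopped it:

```shell
# Is the unit actually enabled for the default boot target?
systemctl is-enabled libvirtd

# Everything libvirtd logged during the current boot
journalctl -b -u libvirtd --no-pager

# Units that failed during boot; a failed dependency can keep
# libvirtd from ever being started
systemctl --failed
```

These commands need to be run on the affected host; they are a sketch of
the usual triage, not a known fix for this report.)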
Secondly, once I have manually started libvirtd, when I try to define a
domain it gives me an error "could not find capabilities for arch=x86_64
domaintype=xen" and I haven't yet been able to define any domains. This
domain type works fine on Fedora 21.
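(A hedged sketch of the checks suggested in the reply below: first confirm
the libxl driver package is installed, e.g. `rpm -q
libvirt-daemon-driver-libxl` on Fedora, then turn up libvirtd's logging so
the daemon records why the driver is being skipped. The snippet appends
the two relevant settings to a scratch copy of the config for
illustration; on the real host the file is /etc/libvirt/libvirtd.conf and
libvirtd must be restarted afterwards:

```shell
# Enable verbose logging so libvirtd reports why the libxl driver
# fails to load. Using a scratch file here; on the real host edit
# /etc/libvirt/libvirtd.conf and restart the libvirtd service.
conf=$(mktemp)
cat >> "$conf" <<'EOF'
log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
EOF

# Show the settings that were appended
grep '^log_' "$conf"
```

The option names are the documented libvirtd.conf logging knobs; the log
file path is the conventional Fedora location.)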
Thirdly, I am unable to convert to/from xml config format, it gives me
the error "error: invalid argument: unsupported config type xen-xl"
however the format "xen-xl" works fine on the Fedora 21 machine.
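(The xen-xl converter lives in the same libxl driver, so this symptom
should clear up once the driver loads. For reference, a sketch of the
conversion in both directions, using the domain file from this thread;
the .cfg file name is illustrative:

```shell
# Convert libvirt domain XML to a xen-xl (xl.cfg) file and back again.
# Both directions report "unsupported config type xen-xl" while the
# libxl driver is not loaded.
virsh domxml-to-native xen-xl vmtest.xml > vmtest.cfg
virsh domxml-from-native xen-xl vmtest.cfg
```

These require a running libvirtd with the libxl driver present.)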
I had these same issues when I tried to upgrade to Fedora 25, assumed
that something had been broken, and abandoned further attempts to
upgrade; however, since Fedora 26 behaves the same way, I now assume that
I have stuffed something up myself (or missed something).
Fedora 21 uses kernel 3.19.3 and xen 4.4.1
Fedora 26 uses kernel 4.11.8 and xen 4.8.1
I have tried following the info on
https://wiki.xen.org/wiki/Fedora_Host_Installation but it appears to be
out of date now (I used this site when I started using Xen under Fedora
19, and when I upgraded to Fedora 21).
Does anyone have any suggestions? Outputs from "xl info" and the domain
config are below. I have also tried disabling SELinux, but it made no
difference.
Thanks
GC
-----------------------------------------------
On the fedora 26 box.....
# xl info
host : family.mydomain.mytld
release : 4.11.8-300.fc26.x86_64
version : #1 SMP Thu Jun 29 20:09:48 UTC 2017
machine : x86_64
nr_cpus : 4
max_cpu_id : 3
nr_nodes : 1
cores_per_socket : 4
threads_per_core : 1
cpu_mhz : 2712
hw_caps : b7ebfbff:77faf3bf:2c100800:00000121:0000000f:009c67af:00000000:00000100
virt_caps : hvm hvm_directio
total_memory : 8072
free_memory : 128
sharing_freed_memory : 0
sharing_used_memory : 0
outstanding_claims : 0
free_cpus : 0
xen_major : 4
xen_minor : 8
xen_extra : .1
xen_version : 4.8.1
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset :
xen_commandline : placeholder
cc_compiler : gcc (GCC) 7.0.1 20170421 (Red Hat 7.0.1-0.15)
cc_compile_by : mockbuild
cc_compile_domain : [unknown]
cc_compile_date : Wed May 3 21:23:49 UTC 2017
build_id : 1c6e5a40165e05837303942b54757ae1f2d5033d
xend_config_format : 4
---------------------------------------------------
# cat vmtest.xml
<domain type='xen' id='21'>
<name>testVM</name>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>1</vcpu>
<os>
<type arch='x86_64' machine='xenfv'>hvm</type>
<loader type='rom'>/usr/lib/xen/boot/hvmloader</loader>
<boot dev='network'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='variable' adjustment='0' basis='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<interface type='bridge'>
<mac address='02:02:00:03:00:00'/>
<source bridge='enp1s0'/>
<script path='vif-bridge'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='5901' autoport='no' listen='192.168.131.54'>
<listen type='address' address='192.168.131.54'/>
</graphics>
</devices>
</domain>
--------------------------------------
7 years, 2 months