About Libvirt Setmem&dommemstat Function
by 🍭
I have already sent a similar email about this problem, but I did not describe it carefully, so I would like to explain it in more detail.
I am using 'virsh setmem' to adjust VM memory online. However, I don't know the lower limit it can be set to. I tried using 'virsh dommemstat' to read the 'unused' memory so that I could calculate the lower limit from that value, but this does not work for Windows guests: I cannot reduce the VM memory down to 'available - unused' on a Windows system.
I suspect cache memory is causing this, so I upgraded libvirt and QEMU in order to get 'stat_disk_caches'. Although I upgraded libvirt to v4.10.0 and QEMU to v4.1.0, I still cannot get this value.
In summary, I ran into two problems: how can I know the exact lower limit that a VM's memory can be set to, and if I need the 'stat_disk_caches' value, how can I obtain it properly?
My libvirt version is v4.10.0, my QEMU version is v4.1.0, and the hypervisor runs on a CentOS 7.6 system.
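For reference, the commands I am using look roughly like this (the domain name 'vm1', the 5-second polling period and the setmem value are just placeholders):
# enable periodic collection of balloon statistics, then read them
virsh dommemstat vm1 --period 5 --live
virsh dommemstat vm1
# reported fields include actual, unused and available, plus disk_caches when
# the guest driver and QEMU expose it; my idea was to treat (available - unused)
# as the floor when shrinking the guest
virsh setmem vm1 2097152 --live   # example new size, in KiB
As far as I understand, the polling interval can also be set persistently via the <stats period='5'/> sub-element of the guest's <memballoon> device.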
macvtap direct
by Subhendu Ghosh
Hi
A couple of questions around macvtap direct usage:
1) Is the documentation here current?
https://libvirt.org/formatnetwork.html#examplesDirect
I have been able to get host-to-guest network traffic without any special
configuration or switch since Fedora 28, when I first started using it.
Using <forward mode='vepa'> requires switch port mirroring, but just using
<forward mode='bridge'> doesn't.
2) Do any of the language bindings assume that libvirt networks
must have a <bridge name='xx'> element? Foreman's Ruby interface to libvirt
errors out when attempting to build a VM on a KVM host with a network
defined with <forward mode='bridge'>:
https://projects.theforeman.org/issues/25890
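For context, the kind of network definition I mean looks roughly like this (the network and interface names are just examples):
<network>
  <name>direct-net</name>
  <forward mode='bridge'>
    <interface dev='eth0'/>
  </forward>
</network>
Note that a definition like this has no <bridge name='...'/> element at all, which seems to be exactly what the Foreman code stumbles over.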
thanks
Subhendu
About Libvirt Setmem&dommemstat Function
by 🍭
I am using 'virsh setmem' to adjust VM memory, but I don't know the lower limit it can be adjusted to, so I tried updating libvirt and QEMU to get the cache size of the VM memory, which would let me calculate the lower limit. Even after upgrading libvirt to 4.10.0 and QEMU to 4.1.0, I still cannot get 'stat-disk-caches'. How can I solve this? Thx.
Firmware auto-select limitation
by GUOQING LI
Hi everyone and Martin
I would like to confirm the conversation we had regarding the possible limitations of the firmware auto-select feature that has been available since v5.2.0. I recall you saying that there were a lot of issues with auto-select, and that the firmware descriptions were later shipped as JSON files, but that still didn't solve all the problems, did it?
Is it better to explicitly specify the loader and nvram paths than to use auto-select?
Just today I ran into an issue with firmware="efi" on libvirt 5.4.0. I am running Ubuntu eoan 19.10 and am wondering how this happened.
Detailed error
Error starting domain: internal error: process exited while connecting to monitor: 2020-05-15T14:19:06.033267Z qemu-system-x86_64: -drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on: Failed to lock byte 100
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
callback(asyncjob, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/asyncjob.py", line 111, in tmpcb
callback(*args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/libvirtobject.py", line 66, in newfn
ret = fn(self, *args, **kwargs)
File "/usr/share/virt-manager/virtManager/object/domain.py", line 1279, in startup
self._backend.create()
File "/usr/lib/python3/dist-packages/libvirt.py", line 1080, in create
if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirt.libvirtError: internal error: process exited while connecting to monitor: 2020-05-15T14:19:06.033267Z qemu-system-x86_64: -drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,unit=0,readonly=on: Failed to lock byte 100
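For comparison, the two ways of requesting UEFI that I am referring to look roughly like this in the domain XML (the OVMF paths are the Ubuntu defaults and may differ on other distributions; the nvram file name is just an example):
<!-- firmware auto-select -->
<os firmware='efi'>
  <type arch='x86_64' machine='q35'>hvm</type>
</os>
<!-- explicit loader and nvram -->
<os>
  <type arch='x86_64' machine='q35'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
</os>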
DNSMASQ Libvirt
by Santhosh Kumar Gunturu
Hi,
Here is the net-dumpxml output:
root@NFV-FRU:~# virsh net-dumpxml data-2
<network>
<name>data-2</name>
<uuid>ace4935f-0367-4632-be7c-61bab2da7d4a</uuid>
<bridge name='data-2br1' stp='on' delay='0'/>
<mac address='52:54:00:46:43:7d'/>
<ip address='10.12.12.1' netmask='255.255.255.240'>
<dhcp>
<range start='10.12.12.4' end='10.12.12.4'/>
</dhcp>
</ip>
</network>
The DHCP response goes out with subnet mask 255.255.255.0 instead of the configured 255.255.255.240:
10.12.12.1.bootps > 10.12.12.4.bootpc: [udp sum ok] BOOTP/DHCP, Reply,
length 300, xid 0x7331d77a, Flags [none] (0x0000)
Your-IP 10.12.12.4
Server-IP 10.12.12.1
Client-Ethernet-Address 52:54:00:5f:ab:94 (oui Unknown)
Vendor-rfc1048 Extensions
Magic Cookie 0x63825363
DHCP-Message Option 53, length 1: Offer
Server-ID Option 54, length 4: 10.12.12.1
Lease-Time Option 51, length 4: 3600
RN Option 58, length 4: 1800
RB Option 59, length 4: 3150
Subnet-Mask Option 1, length 4: 255.255.255.0
BR Option 28, length 4: 10.12.12.255
Domain-Name-Server Option 6, length 4: 10.12.12.1
Can someone explain the reason for this?
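One check that might help narrow this down (the path below is assumed from libvirt's default dnsmasq layout, so it may differ):
grep dhcp-range /var/lib/libvirt/dnsmasq/data-2.conf
# expected something along the lines of:
#   dhcp-range=10.12.12.4,10.12.12.4,255.255.255.240
# if the netmask is missing from that line, dnsmasq has to derive one itself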
Thanks & Regards
Santhosh Kumar Gunturu
What does expiry-time represent in this format
by Santhosh Kumar Gunturu
I see the following output:
root@X10SDV-8C-TLN4F:/mnt/config# cat /var/lib/libvirt/dnsmasq/mgmt-1br1.status
[
{
"ip-address": "192.168.27.8",
"mac-address": "52:54:00:42:21:14",
"hostname": "vyatta",
"expiry-time": 1589500228
}
]
Can you please explain what the expiry-time means? What are its units?
Please let me know.
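For what it's worth, the value looks like a plain Unix timestamp (seconds since 1970-01-01 UTC); if that is right, it can be converted with GNU date:
date -u -d @1589500228
# Thu May 14 23:50:28 UTC 2020, i.e. when this lease would expire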
Thanks & Regards
Santhosh Kumar Gunturu
Re: Storage cleaning
by Lothar Schilling
Thank you, that's it!
virsh vol-list storage
VM1 /dev/storage/VM1.img
VM2 /dev/storage/VM2.img
VM3 /dev/storage/VM3.img [dead]
VM4 /dev/storage/VM4.img [dead]
One last stupid question (I don't want to make a big mistake...): Is
virsh vol-delete VM3
virsh vol-delete VM4
the right command to get rid of the offending ones?
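(A sketch of what I have in mind, assuming the pool is the 'storage' pool shown above; vol-delete seems to want the pool name when given a bare volume name, otherwise the full path:)
virsh vol-delete VM3 --pool storage
virsh vol-delete VM4 --pool storage
# or, equivalently, by path:
virsh vol-delete /dev/storage/VM3.img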
On 14.05.2020 at 19:10, Alvin Starr wrote:
>
> virsh pool-list
> you will get something like:
> Name                       State    Autostart
> -----------------------------------------------
> default                    active   yes
> gnome-boxes                active   no
> windows-openstack-image    active   yes
>
> then run virsh vol-list <your pool name>
>
> and you should be able to see the volumes that are still defined.
>
>
> On 5/14/20 1:01 PM, Lothar Schilling wrote:
>> virsh list --all
>>
>> 15 VM1 running
>> 16 VM2 running
>>
>> ps ax | grep virt
>>
>> 14281 ? Sl 1170:30 /usr/libexec/qemu-kvm -name VM1 [...]
>> 14384 ? Sl 376:45 /usr/libexec/qemu-kvm -name VM2 [...]
>>
>> On 14.05.2020 at 17:45, Alvin Starr wrote:
>>
>>> List your storage pool to ensure that they have been deleted from
>>> the pool.
>>> If they are not there anymore then check to make sure nothing is
>>> running that would have the VM images open.
>>>
>>> On 5/14/20 11:01 AM, Lothar Schilling wrote:
>>>> Hi everybody,
>>>>
>>>> we have a CentOS 6 host with libvirtd 0.10.2. It's holding a
>>>> storage pool of about 3.5 TB with 4 VMs. I decided to rearrange
>>>> them, so I destroyed and undefined two of them. But now I am not
>>>> able to install a new one because virsh gives me a "not enough
>>>> space left" error. Those two undefined VMs still linger around,
>>>> somehow occupying a lot of that storage. How can I get rid of them?
>>>>
>>>> Name: storage
>>>> UUID: 8b25e085-38d8-5a09-f80f-a29150f25d42
>>>> State: running
>>>> Persistent: yes
>>>> Autostart: yes
>>>> Capacity: 3.54 TiB
>>>> Allocation: 3.39 TiB
>>>> Available: 155.27 GiB
>>>>
>>>> Thank you very much
>>>>
>>>> Lothar Schilling
>>
>>
>
Re: Storage cleaning
by Lothar Schilling
virsh list --all
15 VM1 running
16 VM2 running
ps ax | grep virt
14281 ? Sl 1170:30 /usr/libexec/qemu-kvm -name VM1 [...]
14384 ? Sl 376:45 /usr/libexec/qemu-kvm -name VM2 [...]
On 14.05.2020 at 17:45, Alvin Starr wrote:
> List your storage pool to ensure that they have been deleted from the
> pool.
> If they are not there anymore then check to make sure nothing is
> running that would have the VM images open.
>
> On 5/14/20 11:01 AM, Lothar Schilling wrote:
>> Hi everybody,
>>
>> we have a CentOS 6 host with libvirtd 0.10.2. It's holding a storage
>> pool of about 3.5 TB with 4 VMs. I decided to rearrange them, so I
>> destroyed and undefined two of them. But now I am not able to install
>> a new one because virsh gives me a "not enough space left" error. Those
>> two undefined VMs still linger around, somehow occupying a lot of that
>> storage. How can I get rid of them?
>>
>> Name: storage
>> UUID: 8b25e085-38d8-5a09-f80f-a29150f25d42
>> State: running
>> Persistent: yes
>> Autostart: yes
>> Capacity: 3.54 TiB
>> Allocation: 3.39 TiB
>> Available: 155.27 GiB
>>
>> Thank you very much
>>
>> Lothar Schilling
Storage cleaning
by Lothar Schilling
Hi everybody,
we have a CentOS 6 host with libvirtd 0.10.2. It's holding a storage
pool of about 3.5 TB with 4 VMs. I decided to rearrange them, so I
destroyed and undefined two of them. But now I am not able to install a
new one because virsh gives me a "not enough space left" error. Those two
undefined VMs still linger around, somehow occupying a lot of that
storage. How can I get rid of them?
Name: storage
UUID: 8b25e085-38d8-5a09-f80f-a29150f25d42
State: running
Persistent: yes
Autostart: yes
Capacity: 3.54 TiB
Allocation: 3.39 TiB
Available: 155.27 GiB
Thank you very much
Lothar Schilling