[libvirt] Host OS, Storage Info

Hi, Is there any way to get host OS information and host storage information using the libvirt API? Rgds -Sijo

Sijo,

*Pertaining to virt-manager and virsh sluggish behaviour after a clone operation:*

Thanks for your response. Honestly, I do not know what "host storage information using the libvirt API" means. Sorry. I use virt-manager and virsh to do everything within KVM. If there is something better, or another product/app that will enable me to drill down into the system, let me know.

However, perhaps this can help: We are running *CentOS 6 (Update 5) 64-bit*, patched as of 11 April 2014. I create the virtual machines with the virt-install command using the *--file* switch and lay the system images of the VMs on the RAID5. The RAID5 uses ext4. The I/O to that volume is nice. We currently are running twenty-six (26) VMs. There is no I/O wait. The system has been up for thirteen (13) days. The load index (top) is between 1 and 3.

Also, I have the following kernel tweak in /etc/sysctl.conf:

    vm.drop_caches = 3

NOTE: Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free; this helps to mitigate dipping into swap.

Thanks in advance for everything,
Tom

On Thu, May 22, 2014 at 2:46 AM, Sijo Jose <mailtosijojose@gmail.com> wrote:
Hi, Is there any way to get host OS information and host storage information using the libvirt API? Rgds -Sijo
_______________________________________________ libvirt-users mailing list libvirt-users@redhat.com https://www.redhat.com/mailman/listinfo/libvirt-users

On Thu, May 22, 2014 at 12:16:17PM +0530, Sijo Jose wrote:
Hi, Is there any way to get host OS information and host Storage in formations using libvirt API...? Rgds -Sijo
Check virsh help (most of the commands you probably want start with 'node' or 'vol'/'pool'), or have a look at our hvsupport page [1] for the virNode* and virStorage* functions. If this isn't what you're looking for, please restate your question more specifically.

Martin

[1] http://libvirt.org/hvsupport.html
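(If the API rather than virsh is what's wanted: the same virNode*/virStorage* calls are exposed through the libvirt Python bindings. A minimal sketch follows; `summarize_node` is my own helper, not part of the API, and a running libvirtd plus the libvirt-python package are assumed for the `__main__` part.)

```python
# Sketch: query host (node) info and storage pool info via the libvirt
# Python bindings.  summarize_node() is a hypothetical helper; the
# libvirt calls themselves (open, getInfo, listAllStoragePools) are real.

def summarize_node(nodeinfo):
    """Condense the 8-element list from conn.getInfo() into a dict.

    Per the libvirt application development guide examples, the memory
    field is reported in megabytes by the Python binding.
    """
    model, memory_mb, cpus, mhz, nodes, sockets, cores, threads = nodeinfo
    return {"model": model, "memory_mb": memory_mb, "cpus": cpus, "mhz": mhz}

if __name__ == "__main__":
    import libvirt  # provided by the libvirt-python package

    conn = libvirt.open("qemu:///system")
    print(summarize_node(conn.getInfo()))

    # Host storage: iterate pools; same data as 'virsh pool-info'.
    for pool in conn.listAllStoragePools():
        state, capacity, allocation, available = pool.info()  # bytes
        print(pool.name(), capacity, allocation, available)
    conn.close()
```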

Martin, et al,

Sorry for the lag in response.

So I started playing with the various virsh commands. Awesome. I have been doing some reading and I believe I have some things configured not so well. As I stated earlier in the thread, we have all of the VM image files on one RAID5. Very fast machine. When using top, the load average is a stable "5.xx". No I/O wait. GBs of free memory. Swap has not been touched. Using vmstat, I am writing to the RAID5 volume at a constant 150 MB/s and reading at a constant 275 MB/s.

With all of that said, here are some results from virsh commands:

    # virsh pool-list --all
    Name                 State      Autostart
    -----------------------------------------
    default              active     yes

    # virsh pool-info default
    Name:           default
    UUID:           xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    State:          running
    Persistent:     yes
    Autostart:      yes
    Capacity:       30.76 GiB
    Allocation:     2.10 GiB
    Available:      28.66 GiB

Now, is it OK to have all of the VMs using the default pool? Or should a pool be created for each VM instance? I honestly am not even sure what a pool references...

The more I read, the more I am moving away from thinking something in the OS is the cause of my sluggishness. Suggestions?

Many thanks in advance,
Tom

On Thu, May 22, 2014 at 9:16 AM, Martin Kletzander <mkletzan@redhat.com> wrote:

On Thu, May 29, 2014 at 01:31:13PM -0400, Ainsworth, Thomas wrote:
Now, is that ok to have all of the VM's using a default pool? Or should a pool be created for each VM instance. I honestly am not even sure what a pool references...?...
A pool is a set of volumes of the same type (iSCSI, LVs, files in a directory, etc.) in the same place. An example is the default pool, which lives, by default, in /var/lib/libvirt/images; the volumes there are files (the pool type is "dir"). If you want to have all the domain disks in that place, and all the disks (volumes) should be files, then the default pool is enough.
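(For completeness, a directory-backed pool like "default" can also be defined through the API. A sketch via the Python bindings; the pool name and target path here are made up for illustration, and `dir_pool_xml` is my own helper.)

```python
# Sketch: define and start a 'dir'-type storage pool via the libvirt API.
# The name and path are illustrative, not taken from the thread.

def dir_pool_xml(name, path):
    """Build minimal XML for a 'dir' pool -- files in a directory."""
    return ("<pool type='dir'>\n"
            "  <name>{0}</name>\n"
            "  <target><path>{1}</path></target>\n"
            "</pool>").format(name, path)

if __name__ == "__main__":
    import libvirt  # provided by the libvirt-python package

    conn = libvirt.open("qemu:///system")
    pool = conn.storagePoolDefineXML(dir_pool_xml("raid5-images",
                                                  "/raid5/images"), 0)
    pool.setAutostart(1)   # like 'virsh pool-autostart'
    pool.create(0)         # like 'virsh pool-start'
    conn.close()
```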
The more I read, the more I am moving away from thinking something in the OS is the cause of my sluggishness.
I hadn't read your previous mail before, so I've only found it now. How often are you dropping those caches? That won't help you stay out of swap. Having memory occupied by buffers and caches is good if you read/write from/to disks. Even when the reads/writes are as fast as you mentioned, reading/writing from/to RAM is far faster, and as long as there is some free memory, why not use it?

Martin

Martin,

Thanks for the information. That makes sense. I *believe* we are good there.

I noticed something weird yesterday. After a clone (via the virt-manager GUI), it seems libvirtd locked up. A force-quit pop-up appeared, and I had to kill it. Then I restarted libvirtd. Then I did a "ps -edf | grep libvirt" and there were three (3) "libvirtd --daemon" processes. After that, any virsh command or the virt-manager GUI (when it finally would come up) was very sluggish. By the end of the day I had four (4) of the processes running. Keep in mind, whilst all of this was going on, the VMs were just cranking along fine. I could not find any dead PID files related to the processes to kill...

...we rebooted the server at the end of the day. It should be fine until the next time I attempt a clone operation, which I am hesitant to do for obvious reasons...

Any ideas?

Thanks,
Tom

On Fri, May 30, 2014 at 5:12 AM, Martin Kletzander <mkletzan@redhat.com> wrote:

On Fri, May 30, 2014 at 07:31:28AM -0400, Ainsworth, Thomas wrote:
Any ideas?
I'd definitely try looking at the debug logs to see what the daemon is doing, and when there are more processes, I'd look at what the others are doing by attaching with strace/gdb/whatever. As a way out, you can always stop the daemon (killing all the remaining ones in your case) and start the daemon again.

Martin
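(The "kill all remaining ones" step can be scripted. A sketch; `find_libvirtd_pids` is my own helper, and the `ps` column format may need adjusting if libvirtd shows up with a full path such as /usr/sbin/libvirtd.)

```python
# Sketch: locate stray 'libvirtd --daemon' processes so they can be
# inspected (strace/gdb) or killed before restarting the service.
import subprocess

def find_libvirtd_pids(ps_output):
    """Extract PIDs of libvirtd processes from 'ps -e -o pid=,args=' output.

    Assumes the command column starts with 'libvirtd'; adjust the match if
    your ps shows the full binary path instead.
    """
    pids = []
    for line in ps_output.splitlines():
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[1].startswith("libvirtd"):
            pids.append(int(parts[0]))
    return pids

if __name__ == "__main__":
    out = subprocess.check_output(["ps", "-e", "-o", "pid=,args="]).decode()
    for pid in find_libvirtd_pids(out):
        print("libvirtd pid:", pid)  # candidates for strace/gdb or kill(1)
```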
-- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
participants (3):
- Ainsworth, Thomas
- Martin Kletzander
- Sijo Jose