[libvirt-users] How to change virtfs/9p/v9fs umask
by Javi Legido
Hi there.
My question is about virtfs/9p/v9fs [1]; I'm not sure which is the
appropriate name :)
Basically I have a KVM + libvirt server sharing a directory with a
guest in mapped mode.
It works fine; the only issue is the permissions of the files the guest
creates, as seen on the host:
-Directories are created 0700 and files 0400
-The files belong to the user that runs the "qemu-system-x86_64"
process, which is "libvirt-qemu"
Questions:
1. Is there a way to change the umask of this user (I'm almost sure I
already tried this and it didn't work), or any other setting to force
wider permissions?
2. Is it acceptable to run "qemu-system-x86_64" as root and switch
to "passthrough" mode?
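On question 2 specifically, a minimal sketch of what the passthrough variant would look like, assuming the stock Debian config paths (treat it as a starting point, not a recommendation): with accessmode 'passthrough' the 9p server creates files as the guest's own users, which is why QEMU then needs to run as root, trading isolation for correct ownership.

```
# /etc/libvirt/qemu.conf (host): run the emulator as root so the
# passthrough server can chown/chmod as the guest requests
user = "root"
group = "root"
```

Then, in the domain XML, change accessmode='mapped' to accessmode='passthrough', restart libvirtd, and restart the guest. Files should then land on the host with the guest's own UIDs and modes instead of the mapped 0700/0400.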
Below some details of my environment.
==== Host ====
$ uname -r
3.12-1-amd64
$ cat /etc/issue
Debian GNU/Linux jessie/sid \n \l
$ sudo dpkg -l | grep libvirt
ii libvirt-bin 1.2.0-2
amd64 programs for the libvirt library
ii libvirt0 1.2.0-2
amd64 library for interfacing with different virtualization
systems
ii python-libvirt 1.2.0-2
amd64 libvirt Python bindings
$ ps ax | grep vm_name
23307 ? Sl 0:40 qemu-system-x86_64 -enable-kvm -name
vm_name -S -machine pc-1.1,accel=kvm,usb=off -cpu
core2duo,+lahf_lm,+pdcm,+xtpr,+cx16,+tm2,+est,+smx,+vmx,+ds_cpl,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds
-m 1024 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid
2387f160-ffa2-3463-1aa3-771594779df3 -nographic -no-user-config
-nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/vm_name.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
-no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
-drive file=/dev/vg/lv_vm_name,if=none,id=drive-virtio-disk0,format=raw,cache=none
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-fsdev local,security_model=mapped,id=fsdev-fs0,path=/srv/share
-device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=virtfs_share,bus=pci.0,addr=0x3
-netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:68:90:d8,bus=pci.0,addr=0x4
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
Guest XML snippet:
<filesystem type='mount' accessmode='mapped'>
<source dir='/srv/share'/>
<target dir='virtfs_share'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</filesystem>
Thanks.
Javier
[1] http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Docu...
10 years, 10 months
[libvirt-users] PCI Passthrough
by The PowerTool
I'm trying to pass-through my VGA card to a guest session.
I found http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM, which is frustrating because the documentation is undated; I suspect it's old. It does clearly say that you must have VT-d support for PCI pass-through. It then goes on to say "Some work towards allowing this ["software pass-through"] were done, but the code never made it into KVM". If that page is out of date: is there now support for PCI pass-through on hardware that doesn't support VT-d?
I have an HP p7-1456c which has:
Intel Core i5-3330 Processor (VT-x=yes, VT-d=yes)
http://ark.intel.com/products/65509/Intel-Core-i5-3330-Processor-6M-Cache...
on a motherboard: H-Joshua-H61-UATX (whose specs say *nothing* about virtualization)
http://h10025.www1.hp.com/ewfrf/wc/document?cc=us&lc=en&docname=c03135925...
and the H-Joshua-H61-UATX uses the Intel H61 Express Chipset (VT-d=No)
http://ark.intel.com/products/52806/Intel-BD82H61-PCH?q=intel%20h61%20exp...
On the KVM how to assign devices page it provides a way to verify IOMMU support on Intel:
]# dmesg | grep -e DMAR -e IOMMU
[ 0.000000] ACPI: DMAR 00000000d8d29460 000B0 (v01 HPQOEM SLIC-CPC 00000001 INTL 00000001)
[ 0.023074] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c0000020e60262 ecap f0101a
[ 0.023078] dmar: IOMMU 1: reg_base_addr fed91000 ver 1:0 cap c9008020660262 ecap f0105a
[ 0.023151] IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
Does this mean I do have VT-d/IOMMU support???
I attempted to follow the basic instructions to pass through my VGA card:
]$ lspci -nn | grep VGA
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller [8086:0152] (rev 09)
Then added to my domain definition:
<hostdev mode='subsystem' type='pci' managed='yes'>
<source>
<address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</source>
<address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</hostdev>
With that added code I consistently get "Connection to guest failed" messages and the guest fails to start.
/var/log/libvirt/qemu is empty. No log.
So my first question is: can I do this at all, given my hardware? If the answer is "yes", then this is where I'm stuck.
Additionally, I tried:
]$ virsh nodedev-detach pci_0000_00_02_0
error: Failed to detach device pci_0000_00_02_0
error: Failed to add PCI device ID '8086 0152' to pci-stub: Permission denied
My thinking was simply to verify whether I could manually detach the device. I couldn't find any reference for this error. Any help would be greatly appreciated!
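For what it's worth, "Permission denied" from nodedev-detach usually just means the operation ran unprivileged; the pci-stub binding needs root. As a cross-check, the manual binding described on the KVM page can be done by hand. A sketch, run as root; "8086 0152" and 0000:00:02.0 are taken from the lspci output above:

```
modprobe pci-stub
# tell pci-stub it may claim this vendor/device ID
echo "8086 0152" > /sys/bus/pci/drivers/pci-stub/new_id
# detach the device from its current driver, then bind it to pci-stub
echo 0000:00:02.0 > /sys/bus/pci/devices/0000:00:02.0/driver/unbind
echo 0000:00:02.0 > /sys/bus/pci/drivers/pci-stub/bind
lspci -k -s 00:02.0   # "Kernel driver in use" should now show pci-stub
```

One caveat: 00:02.0 is the processor's integrated graphics, and passing through an IGD has extra constraints beyond plain VT-d, so even with working IOMMU support this particular device may not behave as a simple hostdev.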
Thank you!
[libvirt-users] Yet another disk I/O performance issue
by Matteo Lanati
Hi all,
I'm running a VM using libvirt+KVM and I have a disk performance issue.
The host is the following:
4-core Intel Xeon 5140 @ 2.33 GHz, 16 GB of RAM, SATA HDD, OS Debian Wheezy,
libvirt 0.9.12-11, QEMU-KVM 1.1.2+dfsg-2.
The guest:
1 CPU, 2 GB RAM running Debian 7.0, image in compressed qcow2 format.
When I run "dd if=/dev/zero of=io.test bs=32768k count=40" I get
around 500 MB/s on bare metal, while only around 30 MB/s inside the VM.
I'm trying to get more out of the virtualization layer; I hope
there's room for improvement.
I'm using virtio, and I already set cache='none' and io='native' in the domain
definition. Both host and guest use deadline as the I/O scheduler. The VM
uses an ext4 filesystem, while the image is stored on an ext3 disk. I
mounted the host and guest filesystems with the nodiratime and noatime
options. Even converting the image to raw format changes nothing.
I didn't mess with iotune nor blockio.
Is there something that I overlooked or any other suggestion?
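One thing worth ruling out first: 500 MB/s sustained on a single SATA HDD is far beyond the hardware, so the bare-metal figure is almost certainly the host page cache, not the disk. Forcing the data to stable storage gives numbers that are comparable between host and guest. A sketch; the sizes are arbitrary:

```shell
# flush data to disk before dd prints its rate, so writeback is included
dd if=/dev/zero of=io.test bs=1M count=64 conv=fdatasync
# or bypass the page cache entirely; some filesystems refuse O_DIRECT
dd if=/dev/zero of=io.test bs=1M count=64 oflag=direct || echo "O_DIRECT not supported here"
rm -f io.test
```

With the cache taken out of the picture, the host/guest gap is usually far smaller than 500 vs. 30 MB/s, and the remaining difference is what the virtio tuning actually addresses.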
Thanks in advance for your help.
Matteo
--
A refund for defective software might be nice, except it would bankrupt the
entire software industry in the first year.
Andrew S. Tanenbaum, Computer Networks, 2003, Introduction, page 14
Linux registered user #463400
[libvirt-users] Retry reboots in xmls files for libvirt
by Fernando Porro
Hi,
I have a virtual machine, defined in XML, that boots from PXE (network
boot). My problem is that sometimes the DHCP server is not ready when the
virtual machine tries to boot.
I need the virtual machine to keep retrying DHCP (network boot) until
the DHCP server is ready and the machine can come up with a valid IP.
How can I configure that in the XML file?
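A sketch of one way this can be done, assuming a QEMU guest and a reasonably new libvirt: the <os> element accepts a rebootTimeout attribute on <bios> (it appeared around libvirt 0.10.2) that makes the firmware reboot and retry the whole boot order when no boot device succeeds, which re-runs the PXE ROM and its DHCP request each time.

```
<os>
  <type arch='x86_64' machine='pc'>hvm</type>
  <boot dev='network'/>
  <!-- on boot failure, reboot and retry the boot order after 5000 ms
       (value in milliseconds; -1 disables the retry) -->
  <bios rebootTimeout='5000'/>
</os>
```

The guest then keeps re-asking for DHCP every cycle until the server answers.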
BR
//Fernando
[libvirt-users] Should domain be undefined after migration, or not?
by Brian Candler
I have been running a lab using libvirt under Debian Wheezy (libvirt
0.9.12.3-1, qemu-kvm 1.1.2+dfsg-6, virt-manager 0.9.1-4). There are a
number of machines as front-end servers and an nbd shared storage backend.
When I live-migrate a domain from one machine to another, normally I
observe that afterwards the domain remains on the source host (but in
"shutdown" state), as well as on the target host (in "running" state).
But occasionally I have observed the domain being removed from the
source host.
The trouble with the domain remaining on the source host is that it is
all too easy to double-click the shut-off domain in virt-manager and
accidentally start it there, in addition to the copy on the target host,
resulting in disaster. (I know this can be prevented using the sanlock
plugin.)
Furthermore, there could be stale copies of the XML lying around on some
machines where the domain had been running at some point in the past.
My question is, what is the expected behaviour? Is not removing the
domain definition from the source host a bug? Has this been changed in a
newer version of libvirt?
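For reference, this is controlled by flags at migration time rather than by a fixed policy, and whether virt-manager passes them is version-dependent; that would explain the inconsistent behaviour. From the shell it can be pinned down explicitly. A sketch, with the hostname as a placeholder:

```
# live-migrate, persist the definition on the target, and remove it
# from the source so no stale copy is left behind
virsh migrate --live --persistent --undefine-source vm_name \
    qemu+ssh://target-host/system
```

Without --undefine-source the definition staying behind on the source is the expected default, not a bug.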
Thanks,
Brian.
[libvirt-users] jumbo frame
by Anton Gorlov
I would like to use jumbo frames on the local interfaces, but I am not
sure in which order to set them up. Should the MTU be raised on the
physical interface first, then on the bridge and in the guest, or in
another order? On some servers the configuration also has a VLAN on the bridge.
[libvirt-users] How to configure MacVtap passthrough mode to SR-IOV VF?
by opendaylight
Hi guys.
These days I'm doing research on SR-IOV & live migration. There is a well-known problem: SR-IOV and live migration cannot be used at the same time.
I heard that KVM + SR-IOV + MacVtap can solve this problem, so I want to try it.
My environment:
Host: Dell R610, OS: RHEL 6.4 ( kernel 2.6.32)
NIC: intel 82599
I'm following a document from an Intel engineer, which says I should write XML like the following:
============================
<network>
<name>macvtap_passthrough</name>
<forward mode='passthrough'>
<interface dev='vf0'/>
<interface dev='vf1'/>
.. ..
</forward>
</network>
============================
I guess here the vf0 & vf1 should be the VFs of Intel 82599.
What confuses me is that we cannot see vf0 & vf1 directly on the host with "ifconfig"; that is to say, vf0 & vf1 are not real physical interfaces.
I try #: virsh net-define macvtap_passthrough.xml
#: virsh net-start macvtap_passthrough
When I try to configure macvtap_passthrough for a VNIC of a VM, virt-manager reports: "Can't get vf 0, no such a device".
When I try from virt-manager (add hardware ---> network ---> host device (macvtap_passthrough: passthrough network)), I get an error like: "Error adding device: xmlParseDoc() failed".
I guess I cannot write "<interface dev='vf0'/>" like this in the XML.
I tried the change below, but the result is the same.
============================
<network>
<name>macvtap_passthrough</name>
<forward mode='passthrough'>
<pf dev='p2p1'/> <!-- p2p1 is the Intel SR-IOV physical NIC -->
</forward>
</network>
============================
I don't know how to write correctly. Please help me.
You can refer to intel document as below.
Many thanks.
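Two guesses from the symptoms, both hedged: the curly quotes in the pasted document are not valid XML syntax, which alone would explain "xmlParseDoc() failed"; and "vf0" is not a real interface name. VFs only appear as host netdevs once they have been created and the VF driver is bound (for the 82599 that means something like "modprobe ixgbe max_vfs=2" and then ixgbevf; "ip link" shows their actual names). A form that at least parses, using the physical function so libvirt enumerates the VF-backed interfaces itself (assuming your libvirt accepts <pf> in passthrough mode, as its network XML documentation describes):

```
<network>
  <name>macvtap_passthrough</name>
  <forward mode='passthrough'>
    <!-- p2p1 is the SR-IOV physical function from the post -->
    <pf dev='p2p1'/>
  </forward>
</network>
```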
==========document from intel========================
Linux/KVM VM Live Migration (SRIOV And MacVtap)
By Waseem Ahmad (waseem.ahmad(a)intel.com
In this scenario we are using 3 machines:
Server 1: DNS/NFS – nfs.vtt.priv
Server 2: Hv1
Server3: Hv2
HV1 and HV2 are Linux/KVM machines. We will get to them in a minute; first, however, we must address KVM and NFS.
NFS:
Create a storage area where both HV1 and HV2 can access it. There are several methods available for this (FCoE/iSCSI/NFS). For this write-up we use NFS.
Configure NFS:
Create a directory on nfs.vtt.priv where you want your storage to be. In this case we used /home/vmstorage.
Edit /etc/exports and add the following
/home/vmstorage 172.0.0.0/1(rw,no_root_squash,sync)
Now to /etc/sysconfig/nfs
Uncomment RPCNFSDARGS="-N 4"
This will disable NFS v4. If you don't do this, you will have issues accessing the share from within virt-manager.
Add all three machines' IP addresses to each machine's /etc/hosts file.
MIGRATION WILL NOT WORK WITHOUT FULLY QUALIFIED DOMAIN NAMES.
KVM:
On both HV1, and HV2 servers:
Edit /etc/selinux/config
SELINUX=disabled
Edit /etc/libvirt/qemu.conf
Set security_driver = "none"
On HV1 and HV2 start Virtual Machine Manager
Double click on localhost(QEMU)
Then click on the storage tab at the top of the window that pops up
Down in the left hand corner is a box with a + sign in it, click on that. A new window will appear entitled Add a New Storage Pool
In the name box type vmstorage, then click on the type box and select netfs: Network Exported Directory, now click next.
You will see the last step of the network Storage Pool Dialog. The first option is the target path. This is the path where we will mount our storage on the local server. I have chosen to leave this alone.
The next option is format, leave this set on auto:
Host name: nfs.vtt.priv
Source path: /home/vmstorage
Click on finish
Repeat the above steps on HV2 server
Create vms
On HV1 server go back to the connection details screen, (this is the one that showed up when you double clicked on localhost (qemu), and click on the storage tab again.
Click on vmstorage then click on new volume at the bottom.
A new dialog will appear entitled add a storage volume.
In the Name box type vm1
In the Max Capacity box type 20000
And do the same in the allocation box then click finish.
Now you can close the connection details box by clicking on the x in the corner.
Now click on the terminal icon in the corner, right underneath File, and type the name of our vm (vm1) in the Name box. Choose your installation media (probably local install media) and click Forward. Click on "use CDROM or DVD", and place a RHEL 6.2 DVD in the DVD drive on HV1. Select Linux for the OS type, and Red Hat Enterprise Linux 6 for the version. For memory I chose to leave the default of 1024, and assigned 1 CPU to the guest. Click Forward, select "select managed or other existing storage" and click the Browse button. Click on vmstorage, select vm1.img, then click Forward, then Finish.
We will configure network after we make sure migration between the two servers works properly.
Now go ahead and install the operating system as you would normally.
Create networks
Create a file that looks like the following (there is no support for adding a macvtap interface from the GUI as of yet; this is the only manual step in the process). Create a file named macvtap_passthrough.xml with the following contents:
<network>
<name>macvtap_passthrough</name>
<forward mode='passthrough'>
<interface dev='vf0'/>
<interface dev='vf1'/>
.. ..
</forward>
</network>
<network>
<name>macvtap_bridge</name>
<forward mode='bridge'>
<interface dev='p3p1'/>
</forward>
</network>
Save it and run the following commands:
virsh net-define macvtap_passthrough.xml
virsh net-start macvtap_passthrough
Make sure all of your virtual interfaces that you used in the xml file are up.
for i in $(ifconfig -a | awk '/eth/ {print $1}'); do ifconfig $i up; done
Then double click on your vm and click on the big blue i
On the next screen click on add hardware, then on network, then select Virtual network “macvtap_passthrough”
Then click on finish.
Start your vm and make sure that the macvtap was created on the host by doing
ip link | grep 'macvtap'
In the vm configure the ip information for the virtio adapter.
In the virtual machine manager click on file, add connection.
Then check the connect to remote host fill in the username and hostname, then click on connect
Right click on your VM and select Migrate, select the host you want to migrate the machine to, then click on advanced options, check the address box, and type the ip address of the machine you want to migrate to, and click the migrate button.
[libvirt-users] How to update filterref of a vm on the fly?
by Gao Yongwei
Hello,
I defined a VM with a filterref like:
<filterref filter='clean-traffic'>
<parameter name='IP' value='192.168.1.161'/>
</filterref>
and now I need to add another IP parameter for this VM. Is there any way to
achieve this?
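For reference, a filterref can carry several values for the same variable. A sketch of the target XML, with the second address purely hypothetical; whether it can be applied on the fly (e.g. via virsh update-device on the interface) or needs a domain restart depends on the libvirt version:

```
<filterref filter='clean-traffic'>
  <parameter name='IP' value='192.168.1.161'/>
  <parameter name='IP' value='192.168.1.162'/>
</filterref>
```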
thanks.
[libvirt-users] blockcopy, userspace iSCSI support?
by Scott Sullivan
Right now, with a virsh blockcopy, I know you can do something like this:
# Connect DEST target
iscsiadm -m node -p ${DESTINATION}:3260 -T ${VOLNAME} -o new
iscsiadm -m node -p ${DESTINATION}:3260 -T ${VOLNAME} --login
# Copy to connected iSCSI target
virsh blockcopy ${DOMAIN} vda /dev/sdc --raw --bandwidth 300
However, I have libiscsi compiled into my QEMU, so I can do this with the
monitor directly (and avoid calling out to the external iscsiadm):
virsh qemu-monitor-command ${DOMAIN} '{"execute":"drive-mirror",
"arguments": { "device": "drive-virtio-disk0", "target":
"iscsi://${TARGET_IP}:3260/${DOMAIN}/1", "mode": "existing", "sync":
"full", "on-source-error": "stop", "on-target-error": "stop" } }'
Is there a way to use the libiscsi compiled into my QEMU with virsh
blockcopy command? I haven't been able to find any examples of using
blockcopy with iSCSI compiled into QEMU.
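As far as I know, at the time of this post blockcopy only takes a local destination path; later libvirt releases (around 1.2.9, treat the exact version as an assumption) added a --xml option that describes a network destination, letting QEMU's built-in libiscsi handle the transport. A sketch, with the IQN and portal address as placeholders:

```
# dest.xml describes the iSCSI target for QEMU's built-in initiator
cat > dest.xml <<'EOF'
<disk type='network'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi' name='iqn.2013-01.example:target/1'>
    <host name='192.0.2.10' port='3260'/>
  </source>
</disk>
EOF
# --reuse-external mirrors the "mode": "existing" of the drive-mirror call
virsh blockcopy ${DOMAIN} vda --xml dest.xml --reuse-external
```

On an older libvirt, the qemu-monitor-command approach above is the workaround, with the usual caveat that libvirt does not track block jobs started behind its back.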