[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my rbd disks?
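For reference, one approach that should work (a sketch, untested here): create
the volume in the pool with virsh vol-create-as, then hand it to virt-install
by pool/volume name. The volume name kvm01-storage is just an example:
# virsh vol-create-as myrbdpool kvm01-storage 10G --format raw
# virt-install ... --disk vol=myrbdpool/kvm01-storage,bus=virtio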
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests were assigned duplicate MAC
addresses. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate MAC addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the pid?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
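A quick illustration of how close those seeds get (a Python stand-in for the
C, assuming same-second restarts and nearby pids as described above):
import os, time

# The weak seed described above: two hosts rebooted in the same second,
# with libvirtd pids only a few apart, differ in just a few low bits.
weak_seed = int(time.time()) ^ os.getpid()

# A stronger seed costs a single read from the kernel's entropy pool.
strong_seed = int.from_bytes(os.urandom(4), 'little')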
[libvirt-users] Zombie processes being created when console buffer is full
by Peter Steele
We have been researching stuck zombie processes in our libvirt lxc
containers. What we found was:
1) Each zombie's parent was pid 1 (init, which symlinks to systemd).
2) In some cases, the zombies were launched by systemd, in others the
zombie was inherited.
3) While the child is in the zombie state, the parent process (systemd)
/proc/1/status shows no pending signals.
4) Attaching gdb to systemd, there was one thread, and it was blocked in
write(); the file being written was /dev/console.
This write() to the console never returns. We operated under the
assumption that systemd's SIGCHLD handler sets a bit and a foreground
thread (the only thread) would see that child processes needed reaping.
While the single thread is stuck in write(), the reaping never takes
place.
So why is write() blocking? The answer seems to be that there is
nothing draining the console, so write() eventually blocks when the
console's buffers become full. When we attached to the container's
console, the buffer was cleared, allowing systemd's write() to return.
The zombies were then reaped and everything went back to normal.
Our “solution” was more of a workaround: systemd was altered to log
errors/warnings/etc. to /dev/null instead of /dev/console. This
prevented the problem only in the sense that the console buffer was
unlikely to fill up, since systemd is generally the only thing that
writes to it. This is definitely a hack though.
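A possibly less invasive variant of the same idea (assuming the container's
systemd parses its command line the way a host boot does): systemd accepts a
systemd.log_target= argument, so passing
systemd.log_target=null
on the container init's command line redirects systemd's own logging without
patching it.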
This may be a bug in the libvirt container library (you can't expect
something to periodically connect to a container's console to empty it
out). We suspect there may also be a configuration issue in our
containers with regards to the console.
Has anyone else observed this problem?
Peter
[libvirt-users] UserID Permissions: Virtual Machine Manager vs virsh and Python
by David Ashley
I have added a user to the libvirt group on my CentOS 7.2 server and
that user can successfully access the Virtual Machine Manager without
authenticating as expected. This allows the user to perform all
functions in the VMS as if they were root. This is acceptable as this is
a private server with no outside access so security is not a real issue.
But when that same user tries to perform functions with virsh or using a
Python script that uses the libvirt module, the connection is just
read-only.
Why are the permissions different for these environments and what must I
do to give the user r/w access in virsh or the Python script?
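One thing worth ruling out first: virsh and the Python bindings may simply be
landing on a different, read-only connection than Virtual Machine Manager
uses. A minimal sketch that forces the read-write system URI (assuming the
libvirt group is granted access via polkit):
import libvirt

# Connect read-write to the system daemon; omitting the URI (or using
# openReadOnly) can silently yield a session or read-only connection.
conn = libvirt.open('qemu:///system')
print([dom.name() for dom in conn.listAllDomains()])
conn.close()
The virsh equivalent is virsh -c qemu:///system.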
David Ashley
[libvirt-users] Networking with qemu/kvm+libvirt
by Andre Goree
I have some questions regarding the way that networking is handled via
qemu/kvm+libvirt -- my apologies in advance if this is not the proper
mailing list for such a question.
I am trying to determine how exactly I can manipulate traffic from
a _guest's_ NIC using iptables on the _host_. On the host, there is a
bridged virtual NIC that corresponds to the guest's NIC. That interface
does not have an IP set up on it on the host; within the VM itself,
however, the IP is configured and everything works as expected.
During my testing, I've seemingly determined that traffic from the VM
does NOT traverse iptables on the host, but I _can_ in fact see the
traffic via tcpdump on the host. This seems odd to me, unless the
traffic is passed on during interaction with the kernel, and thus never
actually reaches iptables. I've gone as far as trying to log via
iptables any and all traffic traversing the guest's interface on the
host, but to no avail (iptables does not see any traffic from the
guest's NIC on the host).
Is this the way it's supposed to work? And if so, is there any way I
can do IP/port redirection silently on the _host_?
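One guess worth checking (an assumption about the setup, not a diagnosis):
bridged frames only pass through iptables when the bridge-netfilter hooks
are enabled, e.g.:
# modprobe br_netfilter
# sysctl -w net.bridge.bridge-nf-call-iptables=1
With that sysctl at 0, the bridge forwards frames below the IP layer, which
would match seeing the traffic in tcpdump but never in iptables.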
Thanks in advance for any insight that anyone can share :)
--
Andre Goree
-=-=-=-=-=-
Email - andre at drenet.net
Website - http://www.drenet.net
PGP key - http://www.drenet.net/pubkey.txt
-=-=-=-=-=-
[libvirt-users] generate interface MAC addresses in a particular order
by Andrei Perietanu
Hi all,
I am using libvirt to manage VMs on my system; after creating a VM (by
default no NICs are present in the configuration) you can add any number of
interfaces to it (as long as they exist on the host).
To do that, I edit the configuration XML:
import xml.etree.ElementTree as ET

# Fetch the current domain XML and append a bridge interface to <devices>.
vmXml = self.domain.XMLDesc()
root = ET.fromstring(vmXml)
devices = root.find('./devices')
intf = ET.SubElement(devices, 'interface')
intf.set('type', 'bridge')
src = ET.SubElement(intf, 'source')
src.set('bridge', bIntf)
model = ET.SubElement(intf, 'model')
model.set('type', 'e1000')
# Redefine the domain with the updated XML (tostring() returns bytes).
xml = ET.tostring(root).decode()
self.conn.defineXML(xml)
Now the problem I have is that the MAC addresses are auto-generated, and
because of this there is no way to predict which interface number the newly
added interface will map to on the VM. Ideally, the first added interface
is mapped to eth0/0, the second to eth0/1, etc. Since the mappings
depend on the MAC addresses, I figured that is the part I need to have
control over.
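For example, setting the MAC explicitly instead of letting libvirt generate
it (a sketch; nicIndex is a hypothetical per-VM counter, and 52:54:00 is the
conventional KVM/QEMU prefix), inserted before defineXML() in the code above:
mac = ET.SubElement(intf, 'mac')
mac.set('address', '52:54:00:00:00:%02x' % nicIndex)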
Any ideas?
Thanks,
Andrei
[libvirt-users] storage pool and volume usage question
by Paul Carlton
Hi
I'm creating a storage volume in a storage pool (directory type).
I'd like to create a volume whose file lives in a subdirectory under
the directory that the pool points to, but am not able to work out how;
maybe it is not possible?
My pool was created using the following XML...
<pool type='dir'>
  <name>nova-local-pool</name>
  <target>
    <path>/home/pcarlton/openstack/stuff/instances</path>
    <permissions>
      <mode>0755</mode>
      <owner>1000</owner>
      <group>130</group>
    </permissions>
  </target>
</pool>
The volume was created using this XML...
stgvol_xml = """
<volume>
  <name>4291f96a-759a-4b8a-bfe2-7a8ccb118b75-disk</name>
  <allocation>0</allocation>
  <capacity unit="G">1</capacity>
  <target>
    <format type='qcow2'/>
    <permissions>
      <owner>1000</owner>
      <group>130</group>
      <mode>0644</mode>
      <label>image_overlay</label>
    </permissions>
  </target>
  <backingStore>
    <path>/home/pcarlton/openstack/stuff/nova/instances/_base/07f224f72581f8e029e08ff96827d581b2321a7b</path>
    <format type='raw'/>
    <permissions>
      <owner>1000</owner>
      <group>130</group>
      <mode>0644</mode>
      <label>image_base</label>
    </permissions>
  </backingStore>
</volume>"""
All works OK, but I'd like to create the 'disk' file in
/home/pcarlton/openstack/stuff/nova/instances/4291f96a-759a-4b8a-bfe2-7a8ccb118b75
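As far as I can tell, a dir pool only exposes files directly under its target
path as volumes and does not descend into subdirectories, so one workaround
(a sketch, untested; the pool name is made up) would be a pool per instance
directory:
# virsh pool-define-as instance-4291f96a dir --target /home/pcarlton/openstack/stuff/nova/instances/4291f96a-759a-4b8a-bfe2-7a8ccb118b75
# virsh pool-build instance-4291f96a
# virsh pool-start instance-4291f96a
and then creating the volume in that pool instead.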
Thanks
--
Paul Carlton
Software Engineer
Cloud Services
Hewlett Packard Enterprise
BUK03:T242
Longdown Avenue
Stoke Gifford
Bristol BS34 8QZ
Mobile: +44 (0)7768 994283
Office: +44 (0)117 316 2189
Email: mailto:paul.carlton2@hpe.com
irc: paul-carlton2
[libvirt-users] virt-viewer and virt-manager problem
by Pascal Legrand
Hello,
Here is my problem:
I migrated an old Debian server, a KVM host with guest VMs, to a new
one (wheezy to jessie). Everything worked fine on the old server.
On the new server, I created a storage pool, copied the qcow2 guest files
(from the old server) into this storage pool, copied the guest XML files
(from the old server) to /etc/libvirt/qemu,
then ran virsh define and virsh start for each guest.
Every guest works fine.
My problem is with virt-viewer and virt-manager
when I connect from my workstation to my KVM host over ssh (ssh -X
192.168.151.248). I can't use virt-manager or virt-viewer:
virt-manager displays all the guests, but I can't open a guest itself
or its details, and virt-viewer opens a blank window.
When I do vncviewer 192.168.151.248:5906 it works fine,
so I think the problem comes from virt-viewer and virt-manager, but I
can't see where the problem is.
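One thing that might be worth trying instead of ssh -X (an untested
suggestion): let the tools connect remotely themselves over a libvirt URI,
e.g.:
virt-manager -c qemu+ssh://root@192.168.151.248/system
virt-viewer -c qemu+ssh://root@192.168.151.248/system <guest>
(root@ is an assumption; use whatever account can reach libvirtd), since
that takes the forwarded X display out of the picture.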
Could someone help me solve this problem?
Thanks
--
Pascal