[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an RBD pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install?
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my RBD disks?
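For reference, a sketch of what should work against a pool like the one
above (the pool and volume names are taken from the XML; the vol=pool/volume
disk syntax is what the virt-install documentation describes for pool-backed
disks, untested here with virt-install 1.0.1):

virsh vol-create-as myrbdpool kvm01-storage 20G --format raw
virsh vol-list myrbdpool
virt-install --name ceph-test.powercraft.nl --ram 2048 --vcpus 2 \
    --disk vol=myrbdpool/kvm01-storage,bus=virtio \
    --import --noautoconsole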
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests were assigned duplicate
MAC addresses. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate MAC addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
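To illustrate how little entropy goes into that seed, here is a throwaway
shell re-creation of the time(NULL) ^ getpid() calculation for hosts rebooted
in the same second (the PIDs are made up):

now=$(date +%s)
echo "host A seed: $(( now ^ 1234 ))"   # libvirtd got PID 1234
echo "host B seed: $(( now ^ 1234 ))"   # same PID on another host -> identical seed
echo "host C seed: $(( now ^ 1235 ))"   # nearby PID -> seed differs only in the low bits

Two hosts that boot in the same second and start libvirtd with the same PID
get byte-for-byte identical random streams, and with them identical generated
MAC addresses.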
[libvirt-users] stream finish throws exception via python API
by Shahar Havivi
Hi,
The following snippet works fine for receiving the data, but when calling
stream.finish() we get the following error:
stream = con.newStream()
vol.download(stream, 0, 0, 0)
buf = stream.recv(1024)
stream.finish()
libvirt: I/O Stream Utils error : internal error: I/O helper exited abnormally
Traceback (most recent call last):
File "./helpers/kvm2ovirt", line 149, in <module>
download_volume(vol, item[1], diskno, disksitems, pksize)
File "./helpers/kvm2ovirt", line 102, in download_volume
stream.finish()
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 5501, in finish
if ret == -1: raise libvirtError ('virStreamFinish() failed')
libvirt.libvirtError: internal error: I/O helper exited abnormally
Am I doing something wrong?
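Not an answer, but one way to narrow it down: virsh vol-download drives the
same virStorageVolDownload/virStream code path, so running it against the same
volume shows whether the failure is in the snippet or in the storage backend
(the pool and volume names below are placeholders):

virsh vol-download --pool default some-volume /tmp/some-volume.img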
Thank you,
Shahar.
[libvirt-users] unhelpful error message on failed "virsh migrate"
by Andreas Buschmann
Hello,
I have two servers where I can push VMs from one to the other by issuing
the command
virsh migrate --live --persistent --copy-storage-all --verbose \
test6 qemu+ssh://kvmhost2/system
on kvmhost1. I can get the VM back by issuing the equivalent command on
kvmhost2:
virsh migrate --live --persistent --copy-storage-all --verbose \
test6 qemu+ssh://kvmhost1/system
After an update (because of the glibc bug) it only works for some VMs
and fails for others. The error message is unhelpful:
virsh --debug 4 migrate --live --persistent --copy-storage-all --verbose \
vmware-mgmt qemu+ssh://kvmhost1/system
error: internal error: info migration reply was missing return status
Is there a way to get a more helpful error message?
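One way to get more detail (a sketch; the paths are the stock CentOS ones) is
to turn up libvirtd's own logging on both hosts and retry the failing
migration. Note also that for virsh, --debug 0 is the most verbose level, not 4:

# On both kvmhost1 and kvmhost2: enable debug logging for the QEMU driver.
cat >> /etc/libvirt/libvirtd.conf <<'EOF'
log_filters="1:qemu 1:libvirt"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
EOF
systemctl restart libvirtd

# Then retry with client-side debugging as well:
LIBVIRT_DEBUG=1 virsh --debug 0 migrate --live --persistent --copy-storage-all \
    --verbose vmware-mgmt qemu+ssh://kvmhost1/system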
Migrating VMs only works for a very limited subset of the VMs on the
system in the direction kvmhost2 --> kvmhost1.
Stopping the VMs, copying them over to kvmhost1 and restarting still
works, and I can even migrate them back.
How do I debug this problem?
One possible way would be to stop all VMs on kvmhost2, manually copy
them to kvmhost1 and start them, and afterwards reinstall all packages
(or the whole server if that doesn't help).
But that would be a workaround, and I still would not know what the
problem is.
The system is CentOS 7.2 now and was CentOS 7.1 before the upgrade, with
the qemu-kvm-rhev repo added for qemu.
Kind regards
Andreas Buschmann
--
Andreas Buschmann
[Senior Systems Engineer]
net.DE AG
[libvirt-users] I have ping through the bridge and the net has been started, but the IPs are not equivalent.
by Mohsen Pahlevanzadeh
Dear all,
I have the following configuration and I have ping from the bridge:
/////////////////////////////////////////////////////
iface eth0 inet static
address 192.168.1.4
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
auto ivbr0
iface ivbr0 inet static
address 192.168.1.4
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1
bridge_ports eth0
bridge_stp on
bridge_maxwait 0
bridge_fd 0
/////////////////////////////////////////////
# brctl show
bridge name bridge id STP enabled interfaces
ivbr0 8000.18037360b44e yes eth0
/////////////////////////////////////////////
My network XML file:
<network>
  <name>myintranet</name>
  <uuid>465ce6cb-0a69-4f89-92ba-629349741e73</uuid>
  <forward mode='nat'>
    <interface dev="eth0" />
  </forward>
  <bridge name='ivbr0' stp='on' delay='0' />
  <mac address='52:54:00:0f:f4:f0'/>
  <ip address='192.168.1.4' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.1.3' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
////////////////////////////////////////////
Now I have ping to my modem and to the internet.
Then I ran the following commands:
# ip l set dev ivbr0 down
# brctl delbr ivbr0
# virsh net-start myintranet
Network myintranet started
According to the above, the myintranet network has been started, but with
the following IP address:
# ip a show dev ivbr0
17: ivbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue
state DOWN group default
link/ether 52:54:00:0f:f4:f0 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global ivbr0
valid_lft forever preferred_lft forever
////////
192.168.122.1?
The question is, where do I change this address? (I set it in my XML file.)
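Worth checking: 192.168.122.1/24 is the range of libvirt's stock "default"
network, which suggests the definition libvirt is actually using is not the
file that was edited. A sketch of confirming and correcting that through
virsh rather than by editing files on disk:

virsh net-dumpxml myintranet        # what libvirt really has for this network
virsh net-edit myintranet           # change the stored definition
virsh net-destroy myintranet
virsh net-start myintranet

Note also that giving a forward mode='nat' network the same 192.168.1.0/24
subnet and the same 192.168.1.4 address as eth0 will conflict with the
physical LAN, so a separate private range is usually used.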
--best regards
Mohsen
[libvirt-users] vhost-user libvirt 1.2.18 issue
by chintu hetam
Hello there,
I am trying to start my VM with 2 virtio interfaces configured as
vhost-user interfaces.
The following is my relevant XML network configuration:
" <interface type='vhostuser'>
<mac address='52:54:00:c7:ac:38'/>
<source type='unix' path='/tmp/vhost1.sock' mode='server'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09'
function='0x0'/>
</interface>
<interface type='vhostuser'>
<mac address='52:54:00:9d:ea:73'/>
<source type='unix' path='/tmp/vhost2.sock' mode='server'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0b'
function='0x0'/>
</interface>"
When I execute the virsh start domain_name command, the domain starts and
then gets paused. The log shows it's stuck at:
n-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
QEMU waiting for connection on: disconnected:unix:/tmp/vhost1.sock,server
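That last log line is the clue: with mode='server' QEMU creates
/tmp/vhost1.sock itself and then waits, paused, until a vhost-user client
(for example OVS-DPDK or testpmd) connects to it. A quick way to see whether
anything has attached to the sockets (paths taken from the XML above):

ss -x | grep vhost
lsof /tmp/vhost1.sock /tmp/vhost2.sock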
Is there anything I am missing?
Thank you in advance.
-hardik-
[libvirt-users] Use virt-* tools from within container?
by Ken D'Ambrosio
Hi, all. I've set up an LXC container for various OpenStack admin
chores, and now I'm being asked to use it for importing qcow2 images --
on which we first want to run utilities like virt-sparsify and
virt-sysprep. Sadly, when I do that, it dies horribly, e.g.:
root@openstack-cli:/tmp# virt-sysprep -a cloud-image.qcow2
Examining the guest ...
libguestfs: warning: supermin-helper -f checksum returned a short string
Fatal error: exception Guestfs.Error("cannot find any suitable
libguestfs supermin, fixed or old-style appliance on LIBGUESTFS_PATH
(search path: /usr/lib/guestfs)")
Is there a way around this? Or is the answer, "Don't do that in a
container"?
Thanks!
-Ken
P.S. /usr/lib/guestfs -- the only part of the error I can easily look
into -- has more stuff in it than the system that does process virt-*
commands properly. Here's what's there: excludelist make.sh
packagelist supermin.d
[libvirt-users] Networking issues with lxc containers in AWS EC2
by Peter Steele
I've created an EC2 AMI for AWS that essentially represents a CentOS 7
"hypervisor" image. I deploy instances of these in AWS and create a
number of libvirt-based LXC containers on each of these instances. The
containers run fine within a single host and have no problem
communicating with each other as well as with their host, and vice
versa. However, containers hosted in one EC2 instance cannot communicate
with containers hosted in another EC2 instance.
We've tried various tweaks with our Amazon VPC but have been unable to
find a way to solve this networking issue. If I use something like
VMware or KVM and create VMs using this same hypervisor image, the
containers running under these VMs can communicate with each other,
even across different hosts.
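Not a complete answer, but one EC2-specific setting worth ruling out is the
per-instance source/destination check: by default the VPC drops traffic whose
source or destination address does not belong to the instance itself, which
is exactly the situation for container addresses bridged out of a host. A
sketch with a placeholder instance ID:

aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --no-source-dest-check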
My real question is: has anyone tried deploying EC2 images that host
containers and figured out how to successfully communicate between
containers on different hosts?
Peter
[libvirt-users] /proc/meminfo
by Peter Steele
Has anyone seen this issue? We're running containers under CentOS 7.2
and some of these containers are reporting incorrect memory allocation
in /proc/meminfo. The output below comes from a system with 32 GB of
memory and 84 GB of swap. The values reported are completely wrong
(a few observations and a diagnostic sketch follow the listing).
# cat /proc/meminfo
MemTotal: 9007199254740991 kB
MemFree: 9007199224543267 kB
MemAvailable: 12985680 kB
Buffers: 0 kB
Cached: 119744 kB
SwapCached: 10804 kB
Active: 110228 kB
Inactive: 111716 kB
Active(anon): 53840 kB
Inactive(anon): 57568 kB
Active(file): 56388 kB
Inactive(file): 54148 kB
Unevictable: 0 kB
Mlocked: 15347728 kB
SwapTotal: 0 kB
SwapFree: 18446744073709524600 kB
Dirty: 20304 kB
Writeback: 99596 kB
AnonPages: 18963368 kB
Mapped: 231472 kB
Shmem: 51852 kB
Slab: 1891324 kB
SReclaimable: 1805244 kB
SUnreclaim: 86080 kB
KernelStack: 60656 kB
PageTables: 81948 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 104487760 kB
Committed_AS: 31507444 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 354796 kB
VmallocChunk: 34359380456 kB
AnonHugePages: 15630336 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 81684 kB
DirectMap2M: 3031040 kB
DirectMap1G: 32505856 kB
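For what it's worth, the broken values above have a recognisable shape:
MemTotal is exactly 2^53 - 1 kB, MemFree is just below it, and SwapFree sits
just under 2^64, which looks like "no limit" sentinels plus an unsigned
underflow rather than random corruption, and points at the container's memory
cgroup limits (and swap accounting). A sketch of checking that from the host
(the container name is a placeholder):

# What libvirt thinks the container's memory limits are:
virsh -c lxc:/// dominfo mycontainer
virsh -c lxc:/// memtune mycontainer

# What the kernel cgroup actually enforces (path layout varies by distro):
find /sys/fs/cgroup/memory -path '*mycontainer*' -name memory.limit_in_bytes \
    -exec grep -H . {} \;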