A little while ago now I wrote about the initial libvirt TCK (Technology
Compatibility Kit):
http://www.mail-archive.com/libvir-list@redhat.com/msg12703.html
Since that time, it has moved on somewhat and I now run it regularly
during development & immediately before each release. It has caught a large
number of bugs and was very helpful when porting the QEMU driver to the
JSON mode monitor and -device syntax.
First and foremost, I moved the source repository to libvirt.org after we
switched to Git:
http://libvirt.org/git/?p=libvirt-tck.git
The list of Perl module pre-requisites is still accurate as per the original
mail.
To simplify its use, it is now possible to run it on a system on which you
already have VMs. The only restriction is that none of your virtual
machines, storage pools, networks etc have a name starting with the
prefix 'tck'. The test suite will complain & refuse to run if it finds
any clashes. The --force option will tell it to blow away your existing
VMs with names matching 'tck*'.
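As a concrete (purely illustrative) sketch of that clash check, assuming the
guest names have already been collected, e.g. via 'virsh list --all':

```shell
# Scan a list of guest names for the reserved 'tck' prefix.
# The list is inlined here for illustration; in practice it would come
# from "virsh list --all" (and similarly for storage pools & networks).
names="fedora12 tck-f12-guest www-server"
for n in $names; do
    case "$n" in
        tck*) echo "clash: $n" ;;
    esac
done
```

The TCK performs an equivalent scan itself before running, so this is just a
way to see in advance what it would complain about.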
I still, however, only recommend its use on machines that you don't care
about. While I make a reasonable effort not to trash your system, this kind
of testing always carries a fairly high level of risk, whether from bugs in
libvirt, the hypervisor, or the TCK itself.
Many of the tests can in fact be run from within another virtual machine.
eg, I have a Fedora 12 host, on which I run KVM. For running TCK tests, I
installed a Fedora 12 guest, and then run the TCK inside that. Obviously you
won't be testing anything that requires hardware virt this way, but it is
good enough for the vast majority of tests so far. It also makes it easy
to set up tests against a large number of different OSs.
It is no longer necessary to manually download kernels/initrds. The test
suite knows where to grab kernels/initrds off the interweb for Xen and any
full virtualization hypervisor. It uses the hypervisor capabilities data to
determine the best kernels to use, and if it detects container-based virt
it will set up a busybox chroot for testing instead. It will also create
any disk images it needs, usually as sparse files, but sometimes as full
files. The config file controls where these are created, in case you are
short on space.
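For illustration only, the relevant config entries might look something like
this (the key names below are my sketch, not the authoritative syntax; the
conf/default.cfg shipped in the source tree documents the real ones):

```
# Illustrative sketch - see conf/default.cfg for the real syntax
uri = qemu:///system                  # hypervisor to test against
scratch_dir = /var/lib/libvirt-tck    # where disk images & kernels land
```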
There are now some tests which require access to hardware, so the config
file allows the admin running the TCK to specify any disks, USB devices,
and PCI devices that can be used. If none are specified, any tests that
require them are skipped automagically.
Again, testing within a virtual machine makes this easy. Just add 4 extra
PCI NICs and a bunch of USB disks to your guests. You can then tell the
TCK to play with them for its tests.
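A hedged sketch of what such a device list might look like in the config
file (the key names and addressing formats here are assumptions for
illustration; conf/default.cfg documents the real ones):

```
# Illustrative sketch - key names are assumptions
host_block_devices = /dev/sdb
host_usb_devices = 0951:1607          # vendor:product
host_pci_devices = 0000:01:00.0       # domain:bus:slot.function
```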
The number of tests has increased significantly:
* domain/030-dominfo.t
The virDomainGetInfo API works
* domain/050-transient-lifecycle.t
Basic lifecycle operations on transient guests
* domain/051-transient-autostart.t
Verify autostart is blocked on transient guests
* domain/060-persistent-lifecycle.t
Basic lifecycle operations on persistent guests
* domain/061-persistent-autostart.t
Verify autostart is allowed on persistent guests
* domain/065-persistent-redefine.t
Verify a persistent guest config can be updated
* domain/070-transient-to-persistent.t
Conversion of transient guests to persistent guests
* domain/080-unique-id-define.t
* domain/081-unique-id-create.t
Verify name/uuid/id uniqueness when creating / defining guests
* domain/090-invalid-ops-when-inactive.t
Get suitable errors for APIs which should not run on inactive guests
* domain/100-transient-save-restore.t
* domain/101-persistent-save-restore.t
Save/restore to/from files for guests
* domain/102-broken-save-restore.t
Verify a nice error message for deliberately corrupt save files
* domain/120-disks-stats.t
The disk IO stats reporting
* domain/200-disk-hotplug.t
Basic hotplug of disks
* domain/205-disk-hotplug-ordering.t
Ordering of disks when hotplugging different types (virtio/scsi)
* domain/210-nic-hotplug.t
Basic NIC hotplug
* domain/215-nic-hotplug-many.t
Hotplug of many NICs
* domain/240-usb-host-hotplug.t
USB hostdevice assignment to guests, by vendor/product and dev/bus
* domain/250-pci-host-hotplug.t
PCI hostdevice assignment to guests
* qemu/100-disk-encryption.t
A QEMU specific test for qcow2 disk encryption
* storage/100-create-vol-dir.t
Creation of all types of files in a directory pool
* storage/110-disk-pool.t
Operation of the disk pool type (ie adding/removing partitions)
* storage/200-clone-vol-dir.t
Cloning of disk volumes
There is of course scope for many many more tests to be added. Not least
for storage, where we have NFS, iSCSI, SCSI, filesystem pools, and LVM;
the entirety of the network interface APIs; many more aspects of guest
domain mgmt; host devices, etc.
There is also work to be done to ensure the tests are fully portable across
hypervisors. I've only really tested with QEMU/KVM, and to a lesser extent
Xen and LXC.
Running the tests is quite simple:
- Install the pre-requisite Perl modules (all in Fedora 12 and other distros)
- Get a GIT checkout of the libvirt-tck
- Build it
perl ./Build.PL
./Build
- Take the default config file conf/default.cfg in the source tree, or
/etc/libvirt-tck/default.cfg after installation, and set the hypervisor
URI. Optionally list some USB, PCI & block devices if available for
testing
- Run it
libvirt-tck -c /path/to/your/config.cfg
The '-v' flag prints detailed progress info.
The '-t' flag lets you point it to a single test script, to avoid running
all of them every time.
It is possible to run it without installing, too:
export PERL5LIB=$TCK_GIT_TREE/lib
perl $TCK_GIT_TREE/bin/libvirt-tck -c my.cfg -t $TCK_GIT_TREE/scripts
Regards,
Daniel
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://deltacloud.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|