Signed-off-by: Beraldo Leal <bleal(a)redhat.com>
---
tests/lavocado/Makefile | 2 +
tests/lavocado/README.md | 124 +++++++++++++++++++++++++++++++++++++++
2 files changed, 126 insertions(+)
create mode 100644 tests/lavocado/Makefile
create mode 100644 tests/lavocado/README.md
diff --git a/tests/lavocado/Makefile b/tests/lavocado/Makefile
new file mode 100644
index 0000000000..33df604105
--- /dev/null
+++ b/tests/lavocado/Makefile
@@ -0,0 +1,2 @@
+libvirt-tests:
+ avocado run --test-runner='nrunner' tests/
diff --git a/tests/lavocado/README.md b/tests/lavocado/README.md
new file mode 100644
index 0000000000..b22015fe46
--- /dev/null
+++ b/tests/lavocado/README.md
@@ -0,0 +1,124 @@
+# lavocado - libvirt test framework based on Avocado
+
+lavocado aims to be an alternative test framework for the libvirt project,
+built only on Avocado and python-libvirt. It can be used to write unit,
+functional and integration tests. It is inspired by the libvirt-tck
+framework, but it is written in Python instead of Perl.
+
+The idea is to provide test writers with helper classes that avoid
+boilerplate code and improve code readability and maintainability.
+
+## Disclaimer
+
+**For now, this framework assumes that you are going to run the tests in a
+fresh, clean environment (e.g. a VM). If you decide to use your local system,
+beware that executing the tests may affect it.**
+
+One of the future goals of this framework is to use nested virtualization so
+that an L1 guest is automatically provisioned, the tests run inside it, and
+your main system is not tampered with.
+
+## Requirements
+
+All libvirt interactions are done via the libvirt Python bindings
+(libvirt-python). We also rely on Avocado, not only to execute the tests but
+also to generate artifacts in multiple output formats (which can be uploaded
+to CI environments) and to fetch images, so we do not have to handle local
+caching, checksums or other test requirement tasks ourselves.
+
+To install the requirements, run the following command:
+
+```bash
+ $ pip3 install -r requirements.txt
+```
+
+If you wish, you can run this command inside a virtual environment.
+
+## Features
+
+Below we list some of the features that can be explored in this proposal:
+
+ * Parallel execution: if your tests are independent of each other, you can
+   run them in parallel. When fetching images, Avocado creates a snapshot to
+   avoid conflicts, and domains created by tests get unique names to avoid
+   collisions.
+
+ * Creating domains easily: You can use the `.create_domain()` method to create
+ a generic domain which is based on a Jinja2 template. If you want to use an
+ already existing XML domain description, no problem, use
+ `Domain.from_xml_path()` instead.
+
+ * Multiple output formats: TAP, HTML, JSON, xUnit (Gitlab CI ready).
+
+ * Test tags: the ability to mark tests with tags, which can be used to
+   filter tests during execution and/or in the results.
+
+ * Different isolation models: by default the Avocado runner spawns each test
+   in a local process (subprocess), but for specific use cases it can be
+   configured to run the tests under different isolation models (containers,
+   for instance).
+
+ * Requirements resolver: Avocado implements a 'requirements resolver'
+   feature that makes it easy to define test requirements. Currently, the
+   RequirementsResolver can handle `assets` (any local or remote files) and
+   `packages`, but new requirement types are on the roadmap, including
+   `ansible playbooks`. For now, it is still an experimental feature.
+
+ * Test result artifacts for future analysis: take a look at the `avocado
+   jobs` command or inspect your `~/avocado/job-results/` folder for more
+   detailed results.
+
+ * Multiple test execution with variants: the variants subsystem is what allows
+ the creation of multiple variations of parameters, and the execution of
+ tests with those parameter variations.
+
+ * Clear separation between the test and bootstrap stages: if something fails
+   during setUp() execution, your test will not be marked as FAIL; instead,
+   it will be flagged as ERROR. You can also use decorators (@cancel_on,
+   @skipUnless, ...) around your test to avoid false positives.
+
+ * Ready-to-use utility libraries: the `avocado.utils` libraries are rich and
+   cover common operating-system-level needs, such as service management,
+   hardware introspection, storage management, networking, etc.
+
+ * Avocado Job API: a programmable interface for defining test jobs, which
+   allows one to create custom test jobs suited to developers' systems and
+   CI environments.
+
+ * Output artifacts for debugging: Avocado stores all job results in
+   `~/avocado/job-results/` (you can also see details with the `avocado jobs`
+   command). If any test fails, the files stored there can be used for
+   debugging. Besides that, the `sysinfo` plugin can collect additional
+   system information (such as libvirt/qemu logs).
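+
+To make the "creating domains easily" and "parallel execution" points above
+more concrete, here is a rough stand-in sketch. The template, the
+`unique_name()` helper and the XML skeleton below are invented for this
+example and are not the actual lavocado API (lavocado itself renders a
+richer Jinja2 template):
+
+```python
+import uuid
+from string import Template
+
+# Minimal domain XML skeleton, standing in for the real Jinja2 template.
+DOMAIN_TEMPLATE = Template("""\
+<domain type='qemu'>
+  <name>$name</name>
+  <memory unit='MiB'>$memory</memory>
+  <vcpu>$vcpus</vcpu>
+</domain>
+""")
+
+
+def unique_name(prefix="lavocado"):
+    # A random suffix keeps parallel tests from colliding on domain names.
+    return f"{prefix}-{uuid.uuid4().hex[:8]}"
+
+
+def render_domain_xml(memory=512, vcpus=1):
+    # Render the template with a unique name; the resulting XML could then
+    # be fed to libvirt's defineXML()/createXML().
+    return DOMAIN_TEMPLATE.substitute(name=unique_name(),
+                                      memory=memory, vcpus=vcpus)
+```
+
+In the real framework, the `.create_domain()` helper plays this role on top
+of python-libvirt.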
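+
+As an illustration of the variants feature above, a hypothetical parameters
+file for Avocado's yaml_to_mux variants plugin (the parameter names below
+are invented for this example) could look like this:
+
+```yaml
+# memory.yaml: each selected test runs once per variant (small, large)
+memory: !mux
+  small:
+    memory_mb: 512
+  large:
+    memory_mb: 4096
+```
+
+Passing this file via `avocado run --mux-yaml memory.yaml ...` would then
+execute every selected test once per variant, with the test reading the
+value through `self.params.get('memory_mb')`.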
+
+## Running
+
+After installing the requirements, you can run the tests with the following
+commands:
+
+```bash
+ $ export PYTHONPATH=.:$PYTHONPATH
+ $ avocado run --test-runner='nrunner' ./tests/domain/*.py
+```
+
+Please note that the Next Runner (nrunner) will soon become the default
+runner in Avocado, at which point the `--test-runner='nrunner'` option will
+no longer be needed.
+
+Or, if you prefer, you can also execute the tests with `make`:
+
+```bash
+ $ make libvirt-tests
+```
+
+## Writing Tests
+
+You can write your tests here the same way you write them for the [Avocado
+Framework](https://avocado-framework.readthedocs.io/en/latest/).
+Avocado supports "simple tests" (just executables) and "instrumented tests"
+(Python tests).
+
+See the `tests/` folder for some references and ideas. In addition, feel free
+to read the [Avocado Test Writer’s
+Guide](https://avocado-framework.readthedocs.io/en/latest/guides/writer/) to
+play with some advanced features of the framework.
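+
+As a starting point, a minimal instrumented test written directly against
+python-libvirt could look like the sketch below. It uses libvirt's built-in
+`test:///default` driver, which provides a mock hypervisor with one
+predefined, running domain named `test`, so it is safe to run anywhere:
+
+```python
+import libvirt
+from avocado import Test
+
+
+class DomainState(Test):
+    """Checks that the libvirt test driver reports a running domain."""
+
+    def setUp(self):
+        # test:///default is an in-memory mock hypervisor; opening it
+        # never touches the host's real virtualization stack.
+        self.conn = libvirt.open("test:///default")
+
+    def test_domain_is_running(self):
+        domain = self.conn.lookupByName("test")
+        state, _reason = domain.state()
+        self.assertEqual(state, libvirt.VIR_DOMAIN_RUNNING)
+
+    def tearDown(self):
+        self.conn.close()
+```
+
+Real tests in this framework would build on the helper classes mentioned
+earlier instead of opening a connection by hand.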
--
2.26.3