On Mon, Jun 03, 2019 at 01:48:55PM +0200, Peter Krempa wrote:
On Mon, Jun 03, 2019 at 13:13:14 +0200, Erik Skultety wrote:
> On Mon, Jun 03, 2019 at 12:31:56PM +0200, Peter Krempa wrote:
> > On Mon, Jun 03, 2019 at 08:52:55 +0200, Erik Skultety wrote:
[...]
> > I think the problem here is that it's unclear what the purpose of the
> > test driver is supposed to be. Clarifying the purpose first might have
> > positive impact on the design decisions here.
> >
> > So what's the point of the test driver?
> >
> > Is it meant for human interaction, e.g. for application developers or
> > users wanting to test stuff? In that case you probably want to simulate
> > the "worst" behaviour we have, which would be the blocking timeout
so
> > that users/devs can see the worst case implications of running such a
> > command.
>
> I agree and disagree at the same time. The part I disagree with is that anyone
> would be truly interested in seeing how much the API blocks, they'd just want
Well, if you are designing a user interface around this, it might be
useful to know and experience that this API will block for a while in
certain cases.
Ideally you should be able to figure that out from the docs.
There are also other APIs which can have interesting semantics in some
cases. E.g. in the case of the qemu driver, all APIs which use the
guest agent may block or fail if the guest agent is stuck or not
installed.
Obviously covering all the cases may be hard, thus we want to limit the
scope ...
> to get the job done and as such the wait doesn't bring any value to that and I agree
> I'd also recommend doing anything related to ^this to be done on dummy VMs as
> you mentioned below, but then again, we're focusing on test driver coverage, so
How something is covered really depends on the purpose and that's
why I asked about the purpose. To me it's not clear what the test driver
is supposed to achieve and thus it's hard to determine whether a given
mock approach makes sense.
> from coverage perspective, we probably want to cover this API too - to some
> degree.
Sure we want to cover it, but to which degree? That's what I want to
clarify. If we do that, answering questions about how to do things
should be easier.
To answer your scope question, this is what virt-manager is using the
test driver for:
- It's used by CLI commands to validate our functionality: whether we
  can generate correct XMLs and use the specific libvirt features.
  This is probably what you've called "unit" testing.
- We have some basic UI testing in virt-manager that would ideally
  cover all UI elements; that one may be considered more like CI
  testing, even though it uses the test driver as well.
- During development we sometimes use the test driver to roughly
  check some feature before it is tested using the real driver with
  a proper host and VM setup, where that setup might not be easy or
  the specific HW required for the feature is not that easily
  available.
In libvirt-dbus we use the test driver as well to validate that APIs
provided over DBus actually work, but again it's more like "unit"
testing where we probably would not care about any specific blocking
behavior.
The idea of the whole project to implement the test driver is to have
full coverage, which would ideally make the test driver implementation
mandatory for everyone introducing a new API.
It probably doesn't answer what scope should be covered by the test
driver; I'm just writing down some thoughts and the use cases some
projects have for it.
Pavel