[libvirt] [test-API] RFC: Stabilization of libvirt-test-API

Hi everyone,

following our minutes, I'd like to start a discussion on what should be done with libvirt-test-API so we can say it's stable and usable. I would like to stress that everything mentioned here is just an opinion and I don't mean to talk down to anyone, as it may have seemed earlier. I think we should get some ideas from everyone, mostly QE as they will be (are) the ones using this the most (if I understood correctly), and then I'll be happy to help get the code to the agreed state. I was probably thinking about this the wrong way; changing the angle I look at it from (and knowing there is a deadline) makes me think of a few levels of changes which, when introduced, could speed up test development and make the code easier to understand.

So here are the things I would definitely like to do (the optional things follow later on):
- fix hard-coded options into real options (e.g. commit 65449e)
- fix some env_* and util* code (functions duplicated with different behavior)
- fix or remove harmful and pointless code (at this point, when creating a domain on a remote machine, be prepared for the test to fail with any user other than root; and with root, have a backup of both the local and remote '/root/.ssh' directories, as their contents will be erased!)
- fix method names for the {connect,domain,etc.}API (get_host_name vs. lookupByUUID etc.)

The optional things:
- get rid of the classes in lib and make just a few utility functions covering *only* the methods that do something other than call the same method on the underlying class from the libvirt module
- get rid of the new exception (I don't see any difference other than the name, which can make a difference in an "except:" clause, but it's converted everywhere)
- be able to share variables between tests (connection object and anything else)
- introduce a new config file for tests (.ini format, can be parsed by ConfigParser, same as env.cfg; define variables used throughout the test specifications)
- update the documentation
- use some Python code style (PEP-8?), make use of True/False, None
- eliminate duplicated (and x-plicated) code (append_path in all the files, etc.)

I have all of these figured out, so I'm willing to discuss all of them, but in most cases changing the current code seems very time-consuming to me.

Please, feel free to comment on any of these, add yours, discuss, shout at me, etc. =)

Regards,
Martin

P.S.: I don't see any point in sending my patches until some of these points are resolved, as that could mean rewriting more code.

On 03/29/2012 02:14 PM, Martin Kletzander wrote:
Hi everyone,
following our minutes, I'd like to start a discussion on what should be done with libvirt-test-API so we can say it's stable and usable.
I would like to stress that everything mentioned here is just an opinion and I don't mean to talk down to anyone, as it may have seemed earlier.
I think we should get some ideas from everyone, mostly QE as they will be (are) the ones using this the most (if I understood correctly), and then I'll be happy to help get the code to the agreed state. I was probably thinking about this the wrong way; changing the angle I look at it from (and knowing there is a deadline) makes me think of a few levels of changes which, when introduced, could speed up test development and make the code easier to understand.
So here are the things I would like to do definitely (the optional things follow later on): - fix hard-coded options into real options (e.g. commit 65449e)
- fix some env_* and util* code (functions duplicated with different behavior) - fix or remove harmful and pointless code (at this point, when creating a domain on a remote machine, be prepared for the test to fail with any user other than root; and with root, have a backup of both the local and remote '/root/.ssh' directories, as their contents will be erased!) - fix method names for the {connect,domain,etc.}API (get_host_name vs. lookupByUUID etc.)
The optional things: - get rid of the classes in lib and make just a few utility functions covering *only* the methods that do something other than call the same method on the underlying class from the libvirt module. Apart from actually enabling all the functionality provided by the libvirt Python API, this would also increase the object orientation of the code. The current API breaks the idea of objects in some places: e.g. it looks up a guest by name, uses the domain object to call one API, and then discards it instead of re-using it for further calls, which have to look the domain up again.
- get rid of the new exception (I don't see any difference other than the name, which can make a difference in an "except:" clause, but it's converted everywhere) Another useful thing would be to improve exception handling, to free the test writer from handling exceptions that actually signal that an error has occurred, and from having to catch the exception and print the error message in a nice way (so nobody has to read a backtrace). (On the other hand, handling exceptions will still be needed when an error is actually supposed to happen as the result of a test.)
- be able to share variables between tests (connection object and anything else) This would enable writing really simple test cases that would not each require creating a separate hypervisor connection and completing the whole test separately; instead you could combine these simple test cases into complex ones.
- introduce a new config file for tests (.ini format, can be parsed by ConfigParser, same as env.cfg; define variables used throughout the test specifications)
- update the documentation This might speed up new deployments of the test suite, as some filenames and other details have changed and users currently have to figure them out by themselves.
- use some python code style (PEP-8?), make use of True/False, None - eliminate duplicated (and x-plicated) code (append_path in all the files, etc.) Agreed.
I have all of these figured out, so I'm willing to discuss all of them, but in most cases changing the current code seems very time-consuming to me.
Writing a test now requires some redundant work. A common test case (written in Python) requires that the writer creates a hypervisor connection, gets the domain object, and only then does the actual testing. This could be minimized if these common tasks had utility tests (e.g. a test that just connects to the hypervisor and returns the object) that you could then combine in the test case file.
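Peter's utility-test idea could be sketched roughly like this. Everything here is hypothetical (none of these names exist in test-API), and a plain dict stands in for the libvirt connection:

```python
# Hypothetical sketch of "utility tests": common setup steps written
# once and combined per test case, instead of every test opening its
# own hypervisor connection.  A plain dict stands in for libvirt here.

def connect_step(params, state):
    # In the real suite this would be something like
    # libvirt.open(params["uri"]).
    state["conn"] = {"uri": params["uri"], "open": True}

def lookup_domain_step(params, state):
    # Reuses the shared connection instead of creating a new one.
    assert state["conn"]["open"]
    state["domain"] = {"name": params["domain"]}

def run_case(steps, params):
    # The runner threads one shared state dict through all steps.
    state = {}
    for step in steps:
        step(params, state)
    return state

result = run_case([connect_step, lookup_domain_step],
                  {"uri": "qemu:///system", "domain": "testdom"})
print(result["domain"]["name"])  # testdom
```

The point is only that later steps consume what earlier steps produced, so the connection boilerplate lives in exactly one place.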
Please, feel free to comment on any of these, add yours, discuss, shout at me, etc. =)
Regards, Martin
Peter

On 03/29/2012 08:14 PM, Martin Kletzander wrote:
Hi everyone,
following our minutes, I'd like to start a discussion on what should be done with libvirt-test-API so we can say it's stable and usable.
I would like to stress that everything mentioned here is just an opinion and I don't mean to talk down to anyone, as it may have seemed earlier.
I think we should get some ideas from everyone, mostly QE as they will be (are) the ones using this the most (if I understood correctly), and then I'll be happy to help get the code to the agreed state. I was probably thinking about this the wrong way; changing the angle I look at it from (and knowing there is a deadline) makes me think of a few levels of changes which, when introduced, could speed up test development and make the code easier to understand.
So here are the things I would like to do definitely (the optional things follow later on): - fix hard-coded options into real options (e.g. commit 65449e)
Absolutely; we should either change/remove it or generalize it into a global config.
- fix some env_* and util* code (functions duplicated with different behavior)
This was probably caused by multiple people working on it without enough review.
- fix or remove harmful and pointless code (at this point, when creating a domain on a remote machine, be prepared for the test to fail with any user other than root; and with root, have a backup of both the local and remote '/root/.ssh' directories, as their contents will be erased!)
So this means test-API only supports qemu:///system testing now and needs to be improved for qemu:///session too. Also, I'd guess there are cases which only consider QEMU/KVM driver testing. If so, we need to either generalize them or separate them (in case they are too specific to generalize), perhaps into separate directories for different drivers. But that should be the future plan; what we should do currently is try to generalize the tests as much as we can.
- fix method names for the {connect,domain,etc.}API (get_host_name vs. lookupByUUID etc.)
Yes, we need consistent function/variable names, and also a consistent coding style (including the comments at the top of scripts).
The optional things: - get rid of classes in lib and make just few utility functions covering *only* the methods that do something else than call the same method in underlying class from the libvirt module.
Agreed; it looks to me like many of the lib functions just pass parameters straight to the underlying libvirt-python API, which is just pointless.
- get rid of the new exception (I don't see any other difference than in the name, which can make a difference in "except:" clause, but it's converted everywhere)
Agreed. Just like the class methods in lib, it's like drawing feet on a snake (superfluous), ;)
- be able to share variables between tests (connection object and anything else)
I'm not sure what you mean exactly; could you explain more?
- introduce a new config file for tests (.ini format, can be parsed by ConfigParser, same as env.cfg; define variables used throughout the test specifications)
Do you mean replacing the current config parsing code? If so, we need to rewrite (or heavily modify) the generator code too. What critical disadvantage do you see in the current parsing/generator code? I'd think the principle of the current parsing (not the generator) is right; though it might have bugs or disadvantages, we can improve/extend it.
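For context on the .ini proposal being debated here: the kind of parsing Martin describes needs no custom generator, since the stock ConfigParser already supports %(name)s interpolation. A minimal sketch (using the Python 3 module name; the section and option names are made up for illustration):

```python
# Sketch, not test-API code: parse an .ini-style testcase file with the
# standard-library ConfigParser, including %(name)s interpolation.
import configparser

cfg_text = """
[DEFAULT]
my_hyper = qemu

[Test.Connect]
module = connections
testcase = connect
uri = %(my_hyper)s:///system
"""

cfg = configparser.ConfigParser()
cfg.read_string(cfg_text)

# Options in [DEFAULT] are visible in every section, so the uri option
# interpolates my_hyper automatically.
print(cfg.get("Test.Connect", "uri"))  # qemu:///system
```

Whether this beats the existing parser/generator is exactly the open question in this subthread; the sketch only shows that the parsing side is essentially free.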
- update the documentation
The current documentation is in publican format; honestly I'm not a fan of it. It might be good for an enterprise product, but for a community project I've never seen it used much. So I'd vote for it if you mean dropping it and writing just simple text docs.
- use some python code style (PEP-8?), make use of True/False, None
pylint should be able to take care of it.
- eliminate duplicated (and x-plicated) code (append_path in all the files, etc.)
Guannan has already started to do it, :-)
I haven't gone through the code carefully yet, so I don't have many thoughts yet, but what I can think of currently is:
* Make use of JEOS; I'd guess that for most of the testing we won't do much complicated work inside the guest, so JEOS will be enough.
* Add default configuration for generalized testing functions. For example, domain installation is used in many places, and specific configuration only needs to be spelled out when testing domain installation itself; tests which just want to create a domain only need a usable domain, i.e. lazy guys won't want to specify the parameters again and again.
I will comment further once I have gone through the code.
Regards, Osier

On 03/29/2012 03:42 PM, Osier Yang wrote:
On 03/29/2012 08:14 PM, Martin Kletzander wrote:
Hi everyone,
following our minutes, I'd like to start a discussion on what should be done with libvirt-test-API so we can say it's stable and usable.
I would like to stress that everything mentioned here is just an opinion and I don't mean to talk down to anyone, as it may have seemed earlier.
I think we should get some ideas from everyone, mostly QE as they will be (are) the ones using this the most (if I understood correctly), and then I'll be happy to help get the code to the agreed state. I was probably thinking about this the wrong way; changing the angle I look at it from (and knowing there is a deadline) makes me think of a few levels of changes which, when introduced, could speed up test development and make the code easier to understand.
So here are the things I would like to do definitely (the optional things follow later on): - fix hard-coded options into real options (e.g. commit 65449e)
Absolutely we should either change/destroy it or generalize it as global config.
- fix some env_* and util* code (functions duplicated with different behavior)
This was probably caused by multiple people working on it without enough review.
- fix or remove harmful and pointless code (at this point, when creating a domain on a remote machine, be prepared for the test to fail with any user other than root; and with root, have a backup of both the local and remote '/root/.ssh' directories, as their contents will be erased!)
So this means test-API only supports qemu:///system testing now and needs to be improved for qemu:///session too.
Also, I'd guess there are cases which only consider QEMU/KVM driver testing. If so, we need to either generalize them or separate them (in case they are too specific to generalize), perhaps into separate directories for different drivers. But that should be the future plan; what we should do currently is try to generalize the tests as much as we can.
- fix method names for the {connect,domain,etc.}API (get_host_name vs. lookupByUUID etc.)
Yes, we need consistent function/variable names, and also a consistent coding style (including the comments at the top of scripts).
The optional things: - get rid of classes in lib and make just few utility functions covering *only* the methods that do something else than call the same method in underlying class from the libvirt module.
Agreed; it looks to me like many of the lib functions just pass parameters straight to the underlying libvirt-python API, which is just pointless.
- get rid of the new exception (I don't see any other difference than in the name, which can make a difference in "except:" clause, but it's converted everywhere)
Agreed. Just like the class methods in lib, it's like drawing feet on a snake (superfluous), ;)
- be able to share variables between tests (connection object and anything else)
I'm not sure what you mean exactly; could you explain more?
- introduce a new config file for tests (.ini format, can be parsed by ConfigParser, same as env.cfg; define variables used throughout the test specifications)
Do you mean replacing the current config parsing code? If so, we need to rewrite (or heavily modify) the generator code too. What critical disadvantage do you see in the current parsing/generator code? I'd think the principle of the current parsing (not the generator) is right; though it might have bugs or disadvantages, we can improve/extend it.
To answer two of your questions at once, I'll show you an example of what I had in mind (BTW: one of the current disadvantages is also the fact that each indentation level _must_ be 4 spaces, otherwise you'll get an error). Please be aware that this hardly makes any sense by itself; it just shows a couple of things I think could help everyone a lot.

file testcase.cfg:

# Local variables. You can use these variables only in this testcase
# file. This requires 1 more line of code in ConfigParser.
[LocalVariables]
my_hyper = "qemu"
module = connections

[GlobalVariables]
# Could be named "Defaults" as these variables will be passed to all
# tests in this testcase file...
uri = %(my_hyper)s:///system

# This section is not needed if the tests are named Test1.Connect and
# so on, but it is more readable and understandable for some readers
[Tests]
Test_1 = Connect
Test_2 = Disconnect

[Test.Connect]
Module = %(module)s
Testcase = connect
# if not specified, this could default to Test.<name>.Params
Params = SomethingElse

[SomethingElse]
# ...unless they are overwritten with some others
uri = %(my_hyper)s:///session

[Test.Disconnect]
Module = %(module)s
Testcase = disconnect
# No parameters here (none needed)

And then you will have two tests that look something like this (a very rough idea with lots of things I had in my mind, just to show how nice it could look):

file tests/connections/connect.py:

def get_params(params):
    # clean the parameters, put defaults for undefined ones, etc.
    return params

# this means that if needed, this test will create/update these
# variables in the "shares" (will get to that in a few lines)
provides = ('connection', 'uri')

def run(logger, test_params, shares):
    # no need to test the return code, we can raise an exception that
    # will be caught outside of the test and reported through logger
    params_cleaned = get_params(test_params)

    # "shares" would be an object that takes care of the values shared
    # between tests, checks for dependencies, etc.
    shares.update('uri', params_cleaned['uri'])

    logger.debug('using uri: %s' % params_cleaned['uri'])

    # again, no need to check for an exception
    conn = libvirt.open(params_cleaned['uri'])

    # and for example it could return the values that should be
    # provided in "shares"
    return { 'connection' : conn }

file tests/connections/disconnect.py:

def get_params(params):
    # clean the parameters, put defaults for undefined ones, etc.
    return params

# this can be either here or evaluated when the test is trying to get
# the value from the "shares" object
requires = ('connection',)

def run(logger, test_params, shares):
    params_cleaned = get_params(test_params)
    conn = shares.get('connection')
    logger.info('disconnecting from uri: %s' % conn.getURI())
    conn.close()

And these would be the two tests, with a test case that tries to connect and disconnect. All the errors are caught in the test runner; if some test depends on an exception that is raised in underlying libvirt, it can catch it itself without propagating it up (the whole point of exceptions), etc.
- update the documentation
The current documentation is in publican format; honestly I'm not a fan of it. It might be good for an enterprise product, but for a community project I've never seen it used much. So I'd vote for it if you mean dropping it and writing just simple text docs.
Actually, I meant at least updating the information. However, writing a text doc is a good idea (maybe some man-page-styled manual?).
- use some python code style (PEP-8?), make use of True/False, None
pylint should be able to take care of it.
- eliminate duplicated (and x-plicated) code (append_path in all the files, etc.)
Guannan has already started to do it, :-)
I haven't gone through the code carefully yet, so I don't have many thoughts yet, but what I can think of currently is:
* Make use of JEOS; I'd guess that for most of the testing we won't do much complicated work inside the guest, so JEOS will be enough.
* Add default configuration for generalized testing functions. For example, domain installation is used in many places, and specific configuration only needs to be spelled out when testing domain installation itself; tests which just want to create a domain only need a usable domain, i.e. lazy guys won't want to specify the parameters again and again.
I will comment further once I have gone through the code.
Regards, Osier
Martin

On 03/30/2012 05:33 PM, Martin Kletzander wrote:
To answer two of your questions at once, I'll show you an example of what I had in mind (BTW: one of the current disadvantages is also the fact that each indentation level _must_ be 4 spaces, otherwise you'll get an error). Please be aware that this hardly makes any sense by itself; it just shows a couple of things I think could help everyone a lot.
file testcase.cfg:
# Local variables. You can use these variables only in this testcase
# file. This requires 1 more line of code in ConfigParser.
[LocalVariables]
my_hyper = "qemu"
module = connections

[GlobalVariables]
# Could be named "Defaults" as these variables will be passed to all
# tests in this testcase file...
uri = %(my_hyper)s:///system

# This section is not needed if the tests are named Test1.Connect and
# so on, but it is more readable and understandable for some readers
[Tests]
Test_1 = Connect
Test_2 = Disconnect

[Test.Connect]
Module = %(module)s
Testcase = connect
# if not specified, this could default to Test.<name>.Params
Params = SomethingElse

[SomethingElse]
# ...unless they are overwritten with some others
uri = %(my_hyper)s:///session

[Test.Disconnect]
Module = %(module)s
Testcase = disconnect
# No parameters here (none needed)

And then you will have two tests that look something like this (a very rough idea with lots of things I had in my mind, just to show how nice it could look):

file tests/connections/connect.py:

def get_params(params):
    # clean the parameters, put defaults for undefined ones, etc.
    return params

# this means that if needed, this test will create/update these
# variables in the "shares" (will get to that in a few lines)
provides = ('connection', 'uri')

def run(logger, test_params, shares):
    # no need to test the return code, we can raise an exception that
    # will be caught outside of the test and reported through logger
    params_cleaned = get_params(test_params)

    # "shares" would be an object that takes care of the values shared
    # between tests, checks for dependencies, etc.
    shares.update('uri', params_cleaned['uri'])

    logger.debug('using uri: %s' % params_cleaned['uri'])

    # again, no need to check for an exception
    conn = libvirt.open(params_cleaned['uri'])

    # and for example it could return the values that should be
    # provided in "shares"
    return { 'connection' : conn }

file tests/connections/disconnect.py:

def get_params(params):
    # clean the parameters, put defaults for undefined ones, etc.
    return params

# this can be either here or evaluated when the test is trying to get
# the value from the "shares" object
requires = ('connection',)

def run(logger, test_params, shares):
    params_cleaned = get_params(test_params)
    conn = shares.get('connection')
    logger.info('disconnecting from uri: %s' % conn.getURI())
    conn.close()
And these would be the two tests, with a test case that tries to connect and disconnect. All the errors are caught in the test runner; if some test depends on an exception that is raised in underlying libvirt, it can catch it itself without propagating it up (the whole point of exceptions), etc.
The .ini format is often used for data storage; it is perfect for a config file. The testcase config format currently used by test-API is the default testcase-writing format in upstream autotest, which is supported by qemu. Guannan Ren

On 03/29/2012 08:14 PM, Martin Kletzander wrote:
- fix hard-coded options into real options (e.g. commit 65449e) - fix some env_* and util* code (functions duplicated with different behavior) - fix or remove harmful and pointless code (at this point, when creating a domain on a remote machine, be prepared for the test to fail with any user other than root; and with root, have a backup of both the local and remote '/root/.ssh' directories, as their contents will be erased!) - fix method names for the {connect,domain,etc.}API (get_host_name vs. lookupByUUID etc.)
The optional things: - get rid of the classes in lib and make just a few utility functions covering *only* the methods that do something other than call the same method on the underlying class from the libvirt module. - get rid of the new exception (I don't see any difference other than the name, which can make a difference in an "except:" clause, but it's converted everywhere)
The above should be easy to fix/clean up. I can do it.
- be able to share variables between tests (connection object and anything else)
This belongs to new features; we'd better consider it later.
- introduce a new config file for tests (.ini format, can be parsed by ConfigParser, same as env.cfg; define variables used throughout the test specifications)
Please list some critical reasons why.
- update the documentation
I can do it.
- use some python code style (PEP-8?), make use of True/False, None
We used pylint to review it; it is fine (though it could maybe be better).
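For illustration of the True/False/None point raised above, something like the following is what PEP 8 asks for (these helpers are hypothetical, not code from test-API):

```python
# Illustration (hypothetical helpers): return real booleans instead of
# 0/1 return codes, return None for "nothing found", and compare to
# None with "is", never with "==".

def domain_exists(domains, name):
    # Good: a real boolean, not a 0/1 return code.
    return name in domains

def find_domain(domains, name):
    # Good: None (not -1 or "") when nothing is found.
    return domains.get(name)

domains = {"testdom": "dummy-domain-object"}
dom = find_domain(domains, "missing")
if dom is None:          # PEP 8: use "is None", never "== None"
    print("not found")
print(domain_exists(domains, "testdom"))  # True
```

pylint catches the `== None` comparisons, but the boolean-return convention still has to be applied by hand.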
- eliminate duplicated (and x-plicated) code (append_path in all the files, etc.)
Easy to do. Guannan Ren

On 03/29/2012 04:20 PM, Guannan Ren wrote:
On 03/29/2012 08:14 PM, Martin Kletzander wrote:
- fix hard-coded options into real options (e.g. commit 65449e) - fix some env_* and util* code (functions duplicated with different behavior) - fix or remove harmful and pointless code (at this point, when creating a domain on a remote machine, be prepared for the test to fail with any user other than root; and with root, have a backup of both the local and remote '/root/.ssh' directories, as their contents will be erased!) - fix method names for the {connect,domain,etc.}API (get_host_name vs. lookupByUUID etc.)
The optional things: - get rid of the classes in lib and make just a few utility functions covering *only* the methods that do something other than call the same method on the underlying class from the libvirt module. - get rid of the new exception (I don't see any difference other than the name, which can make a difference in an "except:" clause, but it's converted everywhere)
The above should be easy to fix/clean up. I can do it.
- be able to share variables between tests (connection object and anything else)
This belongs to new features; we'd better consider it later.
No problem with that.
- introduce a new config file for tests (.ini format, can be parsed by ConfigParser, same as env.cfg; define variables used throughout the test specifications)
Please list some critical reasons why.
Look at the mail I sent to Osier; I think it could clean up the look of it quite a bit.
- update the documentation
I can do it.
- use some python code style (PEP-8?), make use of True/False, None
We used pylint to review it; it is fine (though it could maybe be better).
- eliminate duplicated (and x-plicated) code (append_path in all the files, etc.)
easy to do.
Guannan Ren
-- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list
Thanks for your opinion. The only thing I'm wondering about is whether it is worth doing in the current code; but as Dave said, the main thing now is the deadline, and that's the end of April. After that, when I'm done with school, I can put something together in my free time and present it; maybe you'll like it, but there is no time for that right now. Martin

On 03/29/2012 10:20 PM, Guannan Ren wrote:
On 03/29/2012 08:14 PM, Martin Kletzander wrote:
- fix hard-coded options into real options (e.g. commit 65449e) - fix some env_* and util* code (functions duplicated with different behavior) - fix or remove harmful and pointless code (at this point, when creating a domain on a remote machine, be prepared for the test to fail with any user other than root; and with root, have a backup of both the local and remote '/root/.ssh' directories, as their contents will be erased!) - fix method names for the {connect,domain,etc.}API (get_host_name vs. lookupByUUID etc.)
The optional things: - get rid of the classes in lib and make just a few utility functions covering *only* the methods that do something other than call the same method on the underlying class from the libvirt module. - get rid of the new exception (I don't see any difference other than the name, which can make a difference in an "except:" clause, but it's converted everywhere)
The above should be easy to fix/clean up. I can do it.
The work is done, tested by QE, and pushed.
- be able to share variables between tests (connection object and anything else)
This belongs to new features; we'd better consider it later.
- introduce a new config file for tests (.ini format, can be parsed by ConfigParser, same as env.cfg; define variables used throughout the test specifications)
Please list some critical reasons why.
- update the documentation
Document is ongoing.
- eliminate duplicated (and x-plicated) code (append_path in all the files, etc.)
The work is done and pushed. Hi Martin, could you review the code? If anything requires changes, you could send a patch based on git head or list the items out and let me know. Guannan Ren

On 04/01/2012 04:30 PM, Guannan Ren wrote:
On 03/29/2012 10:20 PM, Guannan Ren wrote:
On 03/29/2012 08:14 PM, Martin Kletzander wrote: [...] Hi Martin
Could you review the code? If anything requires changes, you could send a patch based on git head or list the items out and let me know.
Guannan Ren
Hi, I went through almost the whole patch and it looks good to me. The tests are more readable and easier to write. I still haven't managed to go through all the code; I'll keep you posted. In the meantime, I created a test case Dave asked me for (thus the Cc). It is a simple screenshot test (it saves a screenshot into a specified file); could you have a look at it, please? It is here: https://www.redhat.com/archives/libvir-list/2012-April/msg00038.html Thanks and have a nice day, Martin

On 03/29/2012 08:14 AM, Martin Kletzander wrote:
So here are the things I would like to do definitely (the optional things follow later on): - fix hard-coded options into real options (e.g. commit 65449e)
Forgive me if I'm suggesting things that already exist - I haven't had time to go through all of the test-API code, but want to make sure this gets brought up sooner rather than later. Two things that I think are very important for a general purpose test setup are:

1) The ability to have multiple physical machines, network devices of various types (bridges, standard NICs, sr-iov NICs) in the test harness available via standard names (implying, of course, that the address/capabilities (and even the existence/non-existence) of at least one other machine be configurable in a global config file referenced by every test).

2) The ability to mark some tests as requiring certain standard objects from the global configuration (e.g. the second machine, a bridge interface, sr-iov NICs) and to either skip the test when some required object is missing, or fail the test (the test itself would be marked either OPTIONAL or REQUIRED).

This would allow us to have, for example, full testing of networking capabilities present in the test suite, but without raising an entry barrier for "random Joe" who only has a single machine, one NIC, etc., but wants to run the test suite. When Joe ran the tests, each test that required multiple hosts (or sr-iov NICs or whatever esoteric piece of hardware/driver) and was designated as an "OPTIONAL" test would be semi-silently skipped, but someone with a full complement of hardware who demanded a thorough and complete test could set a switch and guarantee that every single test would be run (or get a failure saying, e.g., "object REMOTE_HOST required by this test is missing in config", or something like that).

Is there currently an allowance for these two items?

On 04/04/2012 11:09 PM, Laine Stump wrote:
On 03/29/2012 08:14 AM, Martin Kletzander wrote:
So here are the things I would like to do definitely (the optional things follow later on): - fix hard-coded options into real options (e.g. commit 65449e) Forgive me if I'm suggesting things that already exist - I haven't had time to go through all of the test-API code, but want to make sure this gets brought up sooner rather than later.
Two things that I think are very important for a general purpose test setup are:
1) The ability to have multiple physical machines, network devices of various types (bridges, standard NICs, sr-iov NICs) in the test harness available via standard names (implying, of course, that the address/capabilities (and even the existence/non-existence) of at least one other machine be configurable in a global config file referenced by every test).
The env.cfg is the global config file that lists the default data needed by testcases; we could expand it for various requirements. The test suite neither manages these devices and machines nor checks whether they are spelled correctly or present on the testing machine right now, because that depends on the testing environment; the tester should check it before writing it into env.cfg.

<snip>
##########################
#
# PCI device
# a PCI device to use for attach/detach/reset tests
# for example testpic = 00:19.0
testpic =
</snip>
2) The ability to mark some tests as requiring certain standard objects from the global configuration (e.g. the second machine, a bridge interface, sr-iov NICs) and to either skip the test when some required object is missing, or fail the test (the test itself would be marked either OPTIONAL or REQUIRED).
Currently, there is a file named BUGSKIP in the root of test-API. If a certain testcase is listed in it for a bug-related reason, the testcase will be skipped at run time; we could improve that for your purpose. There are 70-80 testcases, and any number of them can be combined in different orders according to various testing requirements, so there is no single fixed running model that goes through all of the testcases (we could have a basic one). There are some testcase files that are most commonly used in the "cases" folder in the root directory.
This would allow us to have, for example, full testing of networking capabilities present in the test suite, but without raising an entry barrier for "random Joe" who only has a single machine, one NIC, etc., but wants to run the test suite. When Joe ran the tests, each test that required multiple hosts (or sr-iov NICs or whatever esoteric piece of hardware/driver) and was designated as an "OPTIONAL" test, would be semi-silently skipped, but someone with a full complement of hardware who demanded a thorough and complete test could set a switch and guarantee that every single test would be run (or a failure saying, e.g. "object REMOTE_HOST required by this test is missing in config", or something like that).
The current data listed in env.cfg is the default and is sufficient for all of the testcases. "Random Joe" has to know which testcases he wants to run; if he needs a testcase which relies on two NICs, he needs to write the MAC addresses of the NICs into env.cfg before testing.
Is there currently an allowance for these two items?
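Laine's OPTIONAL/REQUIRED idea from above could be sketched like this (the function name, return values, and env.cfg keys are all made up for illustration, not existing test-API code):

```python
# Sketch of the OPTIONAL/REQUIRED proposal: a test declares the named
# env.cfg objects it needs; a missing object either skips the test
# (OPTIONAL) or fails the run (REQUIRED).

def check_requirements(test_name, needed, env_cfg, optional=True):
    # Treat an absent or empty env.cfg entry as "object missing".
    missing = [obj for obj in needed if not env_cfg.get(obj)]
    if not missing:
        return "RUN"
    if optional:
        return "SKIP"  # semi-silently skipped for "random Joe"
    raise RuntimeError("object %s required by test %s is missing in "
                       "config" % (missing[0], test_name))

# Made-up env.cfg data: a PCI device is configured, no remote host is.
env_cfg = {"testpic": "00:19.0", "remote_host": ""}
print(check_requirements("pci_attach", ["testpic"], env_cfg))     # RUN
print(check_requirements("migration", ["remote_host"], env_cfg))  # SKIP
```

A "thorough run" switch would then just flip `optional` to False for every test, turning each skip into the loud failure Laine describes.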
participants (5)
- Guannan Ren
- Laine Stump
- Martin Kletzander
- Osier Yang
- Peter Krempa