On 02/27/2018 07:17 AM, Andrea Bolognani wrote:
> On Fri, 2018-02-23 at 14:18 -0500, Laine Stump wrote:
>> On 02/21/2018 09:14 AM, Andrea Bolognani wrote:
>>> The input configurations set all existing options for all PCI
>>> controllers, even those that are not valid for the controller.
>>> As we implement validation for PCI controller options, we expect
>>> these tests to start failing.
>> A noble cause, but since multiple options are being tested for multiple
>> controllers in the same file, once you have all the proper checks in
>> place the tests won't actually be verifying all of the negative tests -
>> only the first failure will be noticed - if one of the others is missed,
>> it won't "fail extra hard" or anything.
> I'm well aware of that.
Yeah, I was just trying to be funny.
>> Although I hate to explode the number of tests, I think if you want to
>> have proper negative testing for every option that doesn't belong on a
>> particular controller, then you'll need a separate test case for each
>> combination of option and controller model. And since that would make
>> for a *lot* of test cases if we tried to extrapolate to all other
>> options for all other elements, I don't know that it's worth going down
>> that rabbit hole.
> So should I just drop this one, or is it still somewhat valuable
> to have any sort of test suite coverage for PCI controller options?
I was thinking that having this test is better than not having it. But
then I thought about what would happen if there was a regression in
just a single one of these validations - the negative test would still
"succeed" (i.e. "succeed in detecting a failure") because it would hit
the next disallowed attribute. As a matter of fact, the test would
continue to "succeed" until there was a regression in the validation of
every single attribute, so that no error at all was triggered. So a
negative test that contains multiple examples of failures actually
gives us a false sense of security - we believe it's verifying that we
catch incorrect config, but it won't actually notify us until *all* of
the bad config is missed by validation.
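To make that concrete, imagine one of the fixtures has a controller
like this (I'm making up the exact combination here - assume for the
sake of argument that both chassisNr and busNr are invalid for a
pcie-root-port):

  <controller type='pci' index='1' model='pcie-root-port'>
    <!-- both attributes below are disallowed for this model, but
         validation reports only whichever one it checks first -->
    <target chassisNr='1' busNr='20'/>
  </controller>

If the chassisNr check regresses, the parse still fails on busNr, so
the negative test still "succeeds" and nobody is ever notified.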
So, I think each negative test should have exactly one piece of
incorrect data. That necessarily means it's only testing for proper
validation of a single aspect of a single attribute. Making that
generally useful with the current test apparatus would mean a huge
explosion in the number of test files, and I don't think that's
practical. But if we're only testing one out of a thousand
validations, there's really not much point in it.
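For the record, the split-up version would isolate exactly one invalid
setting per file, along these lines (again assuming busNr is rejected
for this model - the specific attribute is just an example):

  <!-- pcie-root-port with busNr as the *only* invalid setting -->
  <controller type='pci' index='1' model='pcie-root-port'>
    <target busNr='20'/>
  </controller>

That way a regression in that single check immediately flips the test
result instead of being masked by the other bad attributes - but
multiply it by every option/model combination and you get exactly the
file explosion I was complaining about above.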