> Also, there was some discussion on the mailing list about
modifying
> the negative test cases so that they only check the provider return
> codes. I think it'll be a while before we can add implementation
> specific return codes to the providers. Since the CIM return codes
> aren't specific enough to indicate exactly what kind of error
> occurred, I'm inclined to continue checking the return messages in the
> test cases for now.
>
> Thoughts?
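For reference, the check in question is roughly this shape (a minimal
Python sketch, not actual cimtest code -- the status code value is the
standard CIM one, but the message string and helper name are made up):

    CIM_ERR_NOT_FOUND = 6          # standard CIM status code
    EXP_MSG = "Domain not found"   # hypothetical provider message

    def verify_error(rc, desc):
        # Pass only when both the return code and the message match.
        # In the real harness, rc/desc come from a caught pywbem.CIMError.
        if rc != CIM_ERR_NOT_FOUND:
            return "FAIL: unexpected rc %d" % rc
        if EXP_MSG not in desc:
            return "FAIL: unexpected message: %r" % desc
        return "PASS"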
I agree with you on checking both the return codes and the messages. But
I think branching the test cases for different changesets of the
providers is a little risky. My concern is that we'll end up maintaining
massive if-else branches on this if the provider message strings change
too fast.
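Something like this is what I'm picturing (a hypothetical sketch; the
version cutoff and both message strings are invented):

    def expected_message(provider_version):
        # Every reword of a provider error adds another branch here.
        if provider_version >= (0, 5, 1):   # hypothetical changeset cutoff
            return "Referenced domain does not exist"
        else:
            return "Unable to find domain"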
Completely agree - they've already been a headache to maintain as it is.
An optimistic view would be that even though we have to maintain a few
too many branches at first, these frequent changes become less likely
as the providers get more stable.
Ok, my third view is a little unrealistic: we could develop a fifth test
case return code, named 'conditional pass', specifically for the case
where the rc matches but the string doesn't. :=)
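Something like this, purely illustrative -- the names of the existing
four codes are approximate, and COND_PASS is invented:

    PASS, FAIL, XFAIL, SKIP = range(4)  # the four existing verdicts
    COND_PASS = 4                       # rc matched, message text did not

    def grade(rc_ok, msg_ok):
        if rc_ok and msg_ok:
            return PASS
        if rc_ok and not msg_ok:
            return COND_PASS            # flag for review, not a hard FAIL
        return FAIL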
While not a bad idea, maintaining yet another return code can be a pain,
especially if it doesn't get set back to pass/fail when it needs to be.
Heidi was working on updating these messages, and is most of the way done.
From your F9 release provider test run, it looks like only 1 or possibly
2 test cases encounter this issue. I'm inclined to have these tests
branch for now. If I see a trend of more error messages changing, then
we can add something like an additional return code.
Either way, I'd say leave these test cases as a lower priority to fix
for now. I'd rather focus on ensuring the more complex tests pass on
KVM. =)
--
Kaitlin Rupert
IBM Linux Technology Center
kaitlin(a)linux.vnet.ibm.com