Easy-Bake Ovens…
From time to time I have a lot of fun getting engaged in sales cycles here at iTKO. Frequently there’s some sort of bake-off between our product LISA and alternatives. That’s not what this entry’s about. This entry is about overselling test-case generation. Here’s how one went just a couple of days ago.

We gave our LISA pitch to a large financial institution, and I thought we had them engaged; they were excited and they liked what they saw. Then one person raised a hand and started a dialog that went something like this...

Customer: “Hey, we just saw a tool the other day that can automatically generate all of our test cases. Can LISA do that?”

John: “Wow, what do you mean, automatically generate all your test cases? You know, LISA generates test cases. You can use our product to point and click your way through all of the steps in your workflow, and that’s a generation, if you will, of test cases, but I’m not sure that’s the way you meant it.”

Customer: “No, no, no. I mean you point this tool at the code or the compiled product and it just creates the test cases for you.”

John: “Really?”

Customer: “Yeah, we’ll get 90% code coverage or so with this tool, and it just generates them for you. We’re 90% done testing within minutes. So, can your product do something like that?”

John: “Get you 90% done with testing in minutes? No. But I don’t get it... who taught that tool the business requirements?”

-- awkward pause --

Customer: “Well, nobody.”

John: “Well then, what was it testing for? What are you proving?”

Customer: “Things that don’t work the way they’re supposed to.”

John: “How does it know what it is and isn’t supposed to do?”

Customer: “For example, if you send a null to a particular call and it throws some kind of runtime exception, then that’s certainly something you wouldn’t want.”

John: “That’s a great example of a garbage in, garbage out scenario, and it sounds like a pretty good thing to have testing for...
But what can the business actually measure against the requirements documents with the tests this tool automatically created?”

-- another lengthy pause, and of course at this point they were getting my point --

The answer is “not a whole lot.” Maybe one requirement in a hundred is to protect yourself from invalid input. Code-level testing tools are certainly useful, especially for those who produce an API for public consumption, since they have very little control over who executes their APIs and how. Putting a certain amount of testing around making sure garbage input can’t put your system into an invalid or inconsistent state sounds like a pretty good idea. I like it; I’m not against it. The problem is that sales guys go a little too far with the concept when they say, almost in quotes, “Our tool will generate all your test cases for you, and we’ll get you to 90%+ code coverage without you doing any work; our tool will do it all for you.” Any amount of investigation into what is actually going on here shows that this is a farce. We would all love a little magic box you could put on the network that automatically tests all the applications for you. But until we get requirements-oriented, we can’t claim much coverage of the testing space we need from the business’ point of view. So I hope that the next time you see a pitch from a tool that does such a thing, you’ll have a little more reality about what they’re really offering: a valuable function, but one that just can’t replace requirements-based testing by domain-aware tests.
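To make the contrast concrete, here is a minimal sketch in Python of the two kinds of tests being discussed. The function and its rules are entirely hypothetical, invented for illustration: a tool pointed at the code could plausibly derive the first test (garbage in should fail cleanly, not with an unhandled runtime exception), but only someone who knows the business requirements could write the second (a transfer must not exceed the balance), because that rule isn’t visible in the code’s signature.

```python
import unittest

def transfer_funds(balance, amount):
    """Hypothetical business function: return the new balance after a transfer."""
    if balance is None or amount is None:
        # Guard against garbage input instead of blowing up with a TypeError.
        raise ValueError("balance and amount are required")
    if amount <= 0:
        raise ValueError("transfer amount must be positive")
    if amount > balance:
        # Business rule: you cannot overdraw the account.
        raise ValueError("insufficient funds")
    return balance - amount

class GeneratedStyleTests(unittest.TestCase):
    # The kind of check a code-level generator can derive on its own:
    # null input should be rejected cleanly, not crash at runtime.
    def test_null_input_is_rejected_cleanly(self):
        with self.assertRaises(ValueError):
            transfer_funds(None, None)

class RequirementsBasedTests(unittest.TestCase):
    # The kind of check only a domain-aware tester can write: no tool can
    # infer "transfers must not exceed the balance" from the code alone.
    def test_cannot_overdraw_account(self):
        with self.assertRaises(ValueError):
            transfer_funds(100, 500)

    def test_valid_transfer_reduces_balance(self):
        self.assertEqual(transfer_funds(100, 40), 60)
```

Both test classes pass (run them with `python -m unittest`), and both push up the code-coverage number, but only the second one proves anything the business actually asked for.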