The Testability Trap

 

Unconventional Wisdom

“It isn’t what we don’t know that gives us trouble, it’s what we know that ain’t so.”

– Will Rogers

Welcome to my Unconventional Wisdom blog. Much of conventional wisdom is valid and usefully time-saving. However, too much of it is mistaken yet blindly accepted as truth, which I call “conventional we’s dumb”. Each month I’ll share some alternative, possibly unconventional ideas and perspectives that I hope you’ll find wise and helpful.

 

Conventional Testing View of Requirements

Lack of testability seems to be what the testing community considers the biggest issue with requirements. In turn, a requirement’s lack of clarity is the main reason one cannot write tests to demonstrate that the requirement has been met. Consequently, unclear/untestable requirements are widely accepted as the cause of the creep that produces project budget and schedule overruns.

 

 

Conventional Wisdom about Early Testing Involvement

In a related but somewhat different vein, many testers complain about the negative impact on their ability to test that results from not getting involved in projects until very late in the development process. Many of the testers I encounter in classes, consulting, and at conferences report that they enter projects only after the code is delivered to them for testing.

 

By the time these testers do become involved, all the defects are already in the code, so the testers’ challenges are at their greatest. Because it is so late, and because many projects are already past their deadlines by that point, testing time tends to be minimal; and some of that limited window is spent creating the tests before their execution can actually start. Moreover, each defect found is much more difficult and costly to fix than if it had been found earlier, which may mean it is not fixed at all.

 

These testers also report that the problems of late involvement frequently are compounded by receiving inadequate information about the systems they are supposed to test. The value of their testing is further diminished because the testers lack suitable ways to understand what to test, or how to test it, in the limited time remaining.

 

Not surprisingly, virtually every tester and testing authority I’ve met accepts as gospel that testers should become involved earlier, so they can have more of their tests ready to run in the little time left after the code finally does arrive.

 

I’ll See Your Early Involvement and Raise It

Beyond merely endorsing earlier tester involvement, a number of testing authors, speakers, and instructors further advocate that testers participate in requirements reviews: first to help the testers better understand the requirements against which they must test the systems, and second to ensure those requirements are testable. Part of this conventional wisdom is that testers may have to fight their way into the requirements reviews, since testers often are not invited to them.

 

But Actually…

While I generally agree with the need for earlier tester involvement, several key qualifiers must be recognized. Without also receiving suitable information to guide test development, early involvement by itself not only has little value but actually can interfere, because testers hunting for information to test against get in the developers’ way. Regardless of when testers become involved, relevant, useful, low-overhead documentation, which is too seldom present, is needed to help both testers and developers do their jobs well.

 

Similarly, being allowed to participate in requirements reviews is beneficial only when the tester actually is prepared and able to contribute productively. To the rest of the review participants, testers who are present only to learn what to test are costly overhead providing no value to the review. Furthermore, most non-testers view testers’ yammering about untestable requirements as counter-productive, trivial nitpicking. Analysts surely are not going to follow testers’ demands that they rewrite the requirements in more testable form. Too many testers who follow the supposed gurus’ conventional-wisdom advice to fight their way into reviews are ejected and cannot return.

 

Testability Isn’t Top

Clarity/testability is an issue of form rather than content, whereas requirements should be, but seldom are, all about content. While requirements clarity/testability is important, much of the creep occurs because product/system/software requirements, regardless of how clear and testable they are, do not satisfy the REAL business requirements. That is largely because the REAL business requirements have not been defined adequately, which in turn is because people mistakenly think the product requirements are the requirements. A requirement can be perfectly clear and perfectly wrong; and if a requirement has been overlooked, as so often happens, its clarity/testability is irrelevant.
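To make that concrete, here is a minimal, purely hypothetical sketch in Python/pytest style; the requirement, the figures, and the discounted_total function are all invented for illustration. The test passes against a perfectly clear, perfectly testable product requirement that is nonetheless wrong because it does not reflect the REAL business requirement.

    # Hypothetical product requirement (perfectly clear and testable):
    #   "Orders of $100 or more receive a 10% discount."
    def discounted_total(order_total):
        """Implement the stated product requirement exactly as written."""
        if order_total >= 100:
            return round(order_total * 0.90, 2)
        return order_total

    def test_discount_meets_stated_requirement():
        # Both assertions pass, so the requirement is demonstrably "testable"...
        assert discounted_total(100) == 90.00
        assert discounted_total(99.99) == 99.99

    # ...yet if the overlooked REAL business requirement was to discount only
    # repeat customers, the delivered system is still wrong, and no amount of
    # clarity or testability in the stated requirement would have revealed it.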

 

Others understand, albeit often unconsciously, that most review value comes from detecting wrong and overlooked requirements content. However, form is all one can address when one lacks sufficient skills or knowledge to address content. Not realizing why they aren’t adding value to the review further seals the testers’ likely disinvitation from participating in further reviews.

 

Value-Adding Participation

Reviews are widely accepted as valuable for detecting requirements issues, but few recognize how much conventional reviews still miss. Typical reviews use only one or two, usually weaker-than-realized, techniques, ordinarily concentrating on format and mistakenly over-relying on reviewers’ knowledge alone to spot wrong and missing requirements content.

 

My book and related seminars describe more than 21 Proactive Testing™ ways to review requirements, including a number of more powerful special techniques that reveal wrong and overlooked requirements content. Testers who skillfully apply these methods truly add value and are valued, thereby avoiding the “Testability Trap.”

About the Author

Robin Goldsmith

Consultant and trainer on quality and testing, requirements, process measurement and improvement, project management, return on investment, and metrics.

Find out more about @robingoldsmith.