Design for Testability

Many years ago, I wrote a program that converted Gregorian dates to Chinese dates and vice versa. I researched the topic, found the formulas, designed the user interface, and finished the code in a few days. I ran it, tried several dates for which I knew the correct results, and everything appeared to work. But I was not sure.

So I bought a conversion table from a local Chinese bookstore that listed all Gregorian and Chinese dates for several hundred years. I randomly picked about 50 entries from the table and checked them against my program's output. To my disappointment, three or four did not match. I scratched my head, went back to the code, found the bugs, and fixed them. In software terms, my code was unit tested at this stage.

Then I modified my program to generate 200 random dates, ran it to convert all of them, and printed out the results. I checked them, manually, against the table. To my surprise, 2 of them (1%) did not match. Upon investigation, those turned out to be dates at boundary conditions: midnight and the new moon falling within seconds of each other, the winter solstice landing right at the beginning or end of a lunar month, and so on. Hmm…
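This kind of random spot-checking can be automated as a round-trip property test: generate random dates, convert forward and back, and flag any mismatch. The sketch below uses a Gregorian ↔ Julian-day-number conversion (the Fliegel & Van Flandern integer algorithm) as a simple stand-in for the far more involved Chinese-calendar code; the function names and the 200-sample harness are illustrative, not the original program.

```python
import random
from datetime import date, timedelta

def gregorian_to_jdn(y, m, d):
    # Fliegel & Van Flandern (1968) integer algorithm for the Julian day number.
    a = (14 - m) // 12
    yy = y + 4800 - a
    mm = m + 12 * a - 3
    return d + (153 * mm + 2) // 5 + 365 * yy + yy // 4 - yy // 100 + yy // 400 - 32045

def jdn_to_gregorian(jdn):
    # Exact inverse of the algorithm above.
    a = jdn + 32044
    b = (4 * a + 3) // 146097
    c = a - 146097 * b // 4
    d = (4 * c + 3) // 1461
    e = c - 1461 * d // 4
    m = (5 * e + 2) // 153
    day = e - (153 * m + 2) // 5 + 1
    month = m + 3 - 12 * (m // 10)
    year = 100 * b + d - 4800 + m // 10
    return year, month, day

def test_random_round_trips(n=200, seed=42):
    # Sample n random dates over a wide range and round-trip each one,
    # collecting any date that does not survive the trip unchanged.
    rng = random.Random(seed)
    start = date(1600, 1, 1)
    span = (date(2400, 1, 1) - start).days
    failures = []
    for _ in range(n):
        dt = start + timedelta(days=rng.randrange(span))
        jdn = gregorian_to_jdn(dt.year, dt.month, dt.day)
        if jdn_to_gregorian(jdn) != (dt.year, dt.month, dt.day):
            failures.append(dt)
    return failures

# With a correct converter, the failure list comes back empty.
assert test_random_round_trips() == []
```

Checking a round-trip invariant catches internal inconsistencies cheaply, but, as the table comparison above shows, it cannot catch a converter that is self-consistently wrong; for that you still need an external reference.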

I fixed the code to handle them. At the same time, I added a hunt for the next boundaries: the program would search for upcoming occasions when solar and lunar events fall very close together. This time, nearly half of the cases were inconsistent with the table. I refined my program, double-checked my parameters, and improved the computation precision.
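Hunting for boundaries deliberately, rather than hoping random samples hit them, is the key move here. As a minimal sketch of the idea, the snippet below enumerates the Gregorian analogue of near-coincident solar/lunar events (month ends, including leap-day handling) and checks a continuity invariant across each one: consecutive days must have consecutive day numbers. The converter and the ranges are illustrative assumptions, not the original calendar code.

```python
import calendar
from datetime import date, timedelta

def gregorian_to_jdn(y, m, d):
    # Fliegel & Van Flandern integer algorithm for the Julian day number.
    a = (14 - m) // 12
    yy = y + 4800 - a
    mm = m + 12 * a - 3
    return d + (153 * mm + 2) // 5 + 365 * yy + yy // 4 - yy // 100 + yy // 400 - 32045

def boundary_dates(y0, y1):
    # Month ends (including Feb 28/29 and Dec 31): the places a date
    # converter is most likely to be off by one -- by analogy with the
    # solstice/new-moon near-misses in the lunar calendar.
    for y in range(y0, y1 + 1):
        for m in range(1, 13):
            yield date(y, m, calendar.monthrange(y, m)[1])

# Continuity check: crossing any boundary must advance the day number by exactly 1.
failures = []
for d in boundary_dates(1600, 2400):
    nxt = d + timedelta(days=1)
    if gregorian_to_jdn(nxt.year, nxt.month, nxt.day) != gregorian_to_jdn(d.year, d.month, d.day) + 1:
        failures.append(d)
assert failures == []
```

The point of the design choice is that the test generator knows where the hard cases live; a few thousand targeted dates exercise more edge behavior than millions of uniform random ones.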


This story is not about calendrical conversion. It is about software testing. Statistically, the events that trigger such bugs are far rarer than a needle in a haystack; they are literally astronomically rare. A manual QA process would have extreme difficulty finding any of them. The only way to ensure correctness is to design in testability.

A good piece of software probably has more than 75% of its code doing error checking and only 25% implementing the algorithm. This is something they did not teach you in school. It also makes it very hard for a separate QA person to exercise all the paths. If software was not designed to be tested, there is really no hope of it achieving high quality.


After some soul searching, I actually left my program alone and did not fix those “boundary case” bugs. I researched how that bookstore-bought table was produced, double-checked my algorithm and coefficients, and concluded that I might actually be right and the table wrong.

More than 10 years have passed, and I have not received any complaint about my program (on the accuracy of the conversion). That fact is not proof either way, since such astronomical coincidences probably happen less than once in ten years. The program will fade into oblivion long before a bug is filed against it.

That’s another entry on software management. Stay tuned.

