To have high enough confidence that the product will be good enough.
When a company thinks it is done developing a new product, be it a new car, a kitchen detergent, a lipstick, or a piece of software, it would like to know whether the product will really succeed in the marketplace. So it gets a small number of real customers to try the product before it starts selling. That is the famous “beta program.”
It turns out pollsters use the same technique to predict election results. Surprisingly, a random sample of just a few thousand people can accurately predict the next president. This is a well-studied science, and a long-practiced one.
How high a confidence is enough? There are two popular choices: 95% and 99%. How precise should the answer be? The “interval” is usually 10% wide, that is, plus or minus 5%. The answer, therefore, comes in three numbers: for example, “we have 95% confidence that 65% to 75% of customers will find it favorable.”
The real science is in calculating the sample size: how many people do you need to poll to arrive at such an answer? For that, you can Google “sample size calculator.” All of them ask for the size of the actual population, the confidence you wish for, and the interval you can tolerate. With those, out spits a number: the required sample size. If you are too lazy to search, 400 is not a bad go-to number. If you push it, 250 will work too.
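For the curious, the standard formula behind those calculators estimates a proportion: n = z²·p(1−p)/e², where z is the z-score for the confidence level, e is half the interval width, and p is the worst case 0.5, optionally followed by a finite-population correction. Here is a minimal sketch in Python (the function name and the worst-case p = 0.5 assumption are mine, not from any particular calculator):

```python
import math

# z-scores for the two popular confidence levels
Z_SCORES = {0.95: 1.96, 0.99: 2.576}

def sample_size(confidence, margin, population=None):
    """Required sample size for estimating a proportion.

    confidence: 0.95 or 0.99
    margin:     half the interval width (a "10% interval" is margin=0.05)
    population: actual population size, or None for "effectively infinite"
    """
    z = Z_SCORES[confidence]
    # Worst case p = 0.5 maximizes p * (1 - p), so the result is conservative.
    n0 = (z ** 2) * 0.25 / (margin ** 2)
    if population is not None:
        # Finite-population correction: small populations need fewer samples.
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(sample_size(0.95, 0.05))  # 385 -- close to the 400 "go to number"
```

At 95% confidence and a 10% interval the formula gives 385, which is where the go-to number of 400 comes from.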
Recently, we released a product after a beta program with fewer than 30 participants. At the end of the program, all of them were satisfied with the product, so we released it to the general public. It turned out the product had a defect that affected roughly 5% of customers. As luck would have it, none of the beta participants hit the defect. Had we computed the required sample size, we would have known that 30 participants yield such a low confidence level that the beta program was meaningless: it told us the product was good enough, and it was wrong.
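A quick back-of-the-envelope check shows just how likely that bad luck was. If a defect hits 5% of customers, the chance that all 30 beta participants miss it is 0.95 raised to the 30th power, roughly one in five. This sketch assumes each participant is an independent random draw with exactly a 5% hit rate, which is a simplification:

```python
def miss_probability(participants, defect_rate):
    """Probability that every participant misses a defect affecting
    a fraction `defect_rate` of customers, assuming each participant
    is an independent random draw."""
    return (1 - defect_rate) ** participants

print(round(miss_probability(30, 0.05), 3))  # 0.215 -- about a 1-in-5 chance
```

In other words, even a defect this common slips past a 30-person beta about 21% of the time, so the "all satisfied" result proved very little.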