A recent failed browser extension release reminded me of an interesting presentation a job applicant once gave. Back then, a short talk on any topic was part of our team's hiring process, and this candidate chose to talk about assumptions testing. I don't remember the exact slides or words, but I do remember the spirit of the message, and I will try to describe it here.
This person defined assumptions testing as a technique in which you deliberately try to identify the expectations you hold about the object of your testing, and then try to break those assumptions in the hope of uncovering issues with the item under test.
There are two types of assumptions you can make in this context:
- Explicit assumptions – known and declared. Defined product limitations, for example. They may appear as statements about what a user would never do. A tester performing assumptions testing would naturally not only ask, “why do we believe a user would never do it?”, but would actually perform that very action. The idea behind such a test is to catch serious product malfunctions before they can do real harm. An example that comes to mind is airplane doors. One might assume that no passenger would ever attempt to open them in flight. But it happens, many times per year. So a prudent assumptions tester would devise a scenario to test exactly that.
- Implicit assumptions – unknown and undeclared. These could stem from the engineering culture at your company, or from your individual experience. They may be harder to identify than the explicit ones, but there are methods that can help:
  - Peer reviews of test plans and test cases: especially when the reviewer is not directly involved in the project. They are less likely to share your assumptions about the solution, and more likely to ask questions that break them.
  - Templates: I am not a big fan of documentation, but in some cases you can prepare templates that trigger questions similar to the ones a peer reviewer would ask.
Failing to break either type of assumption can be equally catastrophic. In our failed browser extension release, the assumption was an implicit one: the extension had never had any issues with upgrading, so testing the upgrade scenario never crossed our minds. The fresh installation we tested worked well, but we broke the extension for every user who upgraded. Lesson learned.
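The upgrade lesson generalizes: a fresh install and an upgrade exercise different code paths, so both need coverage. Here is a minimal sketch in Python of what testing both paths might look like, with a hypothetical `migrate_settings` function standing in for the extension's upgrade logic (all names and formats below are illustrative, not from our actual codebase):

```python
# Hypothetical upgrade logic: suppose old versions stored settings as a flat
# dict, while new versions expect them nested under "preferences".
def migrate_settings(stored: dict) -> dict:
    if "preferences" in stored:  # already in the new format (fresh install)
        return stored
    # Upgrade path: wrap legacy flat settings into the new structure.
    return {"schema": 2, "preferences": dict(stored)}

# Fresh-install scenario: data is already in the new format.
fresh = {"schema": 2, "preferences": {"theme": "dark"}}
assert migrate_settings(fresh) == fresh

# Upgrade scenario: data left behind by an old version.
legacy = {"theme": "dark"}
migrated = migrate_settings(legacy)
assert migrated["preferences"] == {"theme": "dark"}
```

Testing only the fresh-install scenario, as we did, would never execute the upgrade branch at all, which is exactly where a migration bug would hide.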