The Volcano of Software Testing

What happens when you combine the Test Automation Pyramid with Exploratory Testing concepts? You get the Volcano of Software Testing:

Volcano of Tests

The volcano represents the fact that both checking and exploration are part of the activity we call software testing.

Note:

  • The shape of the Test Automation Pyramid refers to healthy test ratios and relative importance. The “volcano ash” cloud follows that pattern:
    • It sits at the top, as these tests are typically the most relevant to the end user, and can be relatively complex and costly.
    • In my opinion they are key to achieving customer satisfaction, so the cloud is large.
  • Using tools is not limited to automation. Manual testing should be supported by tools when necessary.
  • “Checking” can be both manual and automated.

Send me your thoughts.

Tester toolbox 101 – Developers

I like the developer-tester tension as much as the next person. I think it can be healthy, fun, and lead to improved quality: a prudent tester would make sure to have developers “in their toolbox”.

As with any other tool, it is important to know:

  1. How is it used? What benefits do I get from getting developers’ assistance in this context? Why would I want to ask for it?
  2. When to use it? What is the right time to ask?

Why would you ask for help?

  • To learn

Developers wrote the software you are testing, so you can at least hope they know how it works. And you probably want to know how the thing works, right? A developer friend would be happy to share their knowledge – not only on how they coded this or that module, but also on development in general, on engineering, and on other subjects entirely (that’s how I learned about the best food in Hawaiʻi).

A great developer would also be happy to hear your feedback on their software – and in return give you hints on how to test it better.

  • To build tools

You can try writing all your tools yourself. Or you could get help: have your code reviewed, or get fresh ideas. If you are lucky, you may even have tools built for you.

  • To enjoy

It is great to make new friends, and celebrate success together. If you do not agree, I hope you at least like to learn (two bullet points up).

What is the right time to reach out?

  • When you are stuck

Asking for another person’s opinion is one of the best ways I know to get unstuck. They may come up with a fresh idea, or at least listen so that you can organize your thoughts.

  • When you need validation of your work

Unsure whether what you are doing is the right thing? Or maybe you did something awesome and want to share? Both are great opportunities to talk to your developers.

  • Choose your timing

Engineers tend to work best when they are “in the zone”. You do not want to break their focus – nobody likes that. If you see a busy developer, do not interrupt with your questions. The same goes for when they are staring into the distance (looking for inspiration), reading, sleeping… unless you know exactly when the right time is to approach this specific person, it is best not to interrupt them at all.

Managing interruptions is probably part of your work environment culture. Some allow asynchronous requests (calendar invite, email, IM). Others use the headphones rule or some other sign that people are open to interaction.

  • Do your homework first

If you have not googled the solution to your problem, or have not checked the internal issue database, do it before approaching others to ask for help. Otherwise the answer you get might be a lmgtfy.com link – and a reputation for laziness.

My rule of thumb is I set a time limit for myself to solve a problem. If I still have not solved it by then, I spend a little extra time to push myself before reaching out for help. This approach may not work in every case, and estimating the correct amount of time is hard. But when I do it, I use this extra effort to try to learn something new. Even if it does not lead to the solution, I don’t count it as wasted.

How do you get along with developers, then?

I know what does not work:

  • “Us” vs “Them” blame game – developing vs testing software is not a zero-sum game. Start thinking about the development team as the unit that delivers value, and you are more likely to help each other. You are all in the same boat, after all. So don’t waste time trying to blame this or that person for not catching a bug, or for introducing it. That does not mean you should not do Root Cause Analysis – you absolutely should, and use it to learn from your mistakes – but not to point fingers.
  • “Quality Police” approach – if a development team ships a buggy product, you all lose. But if you don’t ship at all, you lose, too. The customer is unhappy in both cases: either they got a broken product, or they got nothing at all. Focus on delivering value and try finding ways to move things forward.

What works:

  • Cover the basics – do your job. It may sound trivial, but I have seen many times how a poorly written bug report led to unnecessary back-and-forth between developers and testers. So if your bug reports have to contain certain information (environment, reproducibility rate, etc.), make sure you always provide it.
  • Find great bugs – everybody loves these. Interesting bugs are a signal that you care, that you spend time and energy testing others’ hard work.
  • Learn and respond to change – be flexible, be agile, adapt.
  • Small gestures – bake a cake, bring cookies, get some cool stickers… whatever makes your developers happy.

I hope you find this post useful. I would love to hear your opinions on this topic – do leave a comment, whether you agree or not.

TL;DR

I am a nice tester, not a mindless bug reporting machine. If I am to change this image, I must first change myself. Developers are friends, not defect producers.

Bruce, the shark tester

Tester toolbox 101 – Fiddler


Fiddler is my favorite web debugging proxy. It is Windows-only, but in my opinion it justifies keeping a Windows VM just to be able to use it. I run VMware Fusion or Oracle VM VirtualBox on my Mac, with one of the Windows VMs dedicated to running Fiddler.

See below for some typical use cases of this tool.

Disclaimer: some steps described below may affect your computer’s or network’s security, so be sure you know what you are doing.

Monitor local applications

This works right out of the box. Enabling HTTPS inspection is one of the first options you may want to turn on after starting the tool.

Fiddler may prompt you to trust the certificate it generated. It is required for HTTPS inspection.

Fiddler2 requires additional steps to monitor Metro-style apps. But with Fiddler4 all should just work automagically.

Monitor remote applications

To monitor the traffic from other computers (like a Mac), you need to allow remote computers to connect in Fiddler’s options.

Take note of the port Fiddler listens on, shown on the same options page (8888 in my case). The only other information you need is the IP address of the system running the proxy. Then set the other computer to use this IP and port as its proxy (on a Mac, this is under System Preferences > Network > Advanced > Proxies). At this point you should see all the remote traffic going through Fiddler.

For HTTPS inspection from remote computers, remember to export Fiddler’s root certificate and import it as a trusted root CA on the remote computer. Otherwise you will get security prompts, or your applications may refuse to contact their servers.
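To sanity-check the setup, you can send a request through Fiddler from the remote machine and watch it appear in the session list. Here is a minimal sketch using Python’s requests library – the IP address, port, and certificate path are assumptions, so substitute the values from your own setup:

    import requests

    # Hypothetical values - use your Fiddler machine's IP and listening port.
    fiddler = "http://192.168.1.10:8888"
    proxies = {"http": fiddler, "https": fiddler}

    # Point verify at Fiddler's exported root certificate so HTTPS
    # inspection does not trigger certificate errors.
    r = requests.get(
        "https://example.com/",
        proxies=proxies,
        verify="fiddler-root.pem",
    )
    print(r.status_code)  # the session should now show up in Fiddler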


iOS and Android

The setup here is similar to the remote applications case above – you need to allow remote computers to connect. Next, install the CertMaker for iOS and Android add-on and restart Fiddler. Once it comes back up, change your mobile device’s settings to use the Fiddler machine as its proxy. Last, visit the http://<Fiddler.machine.ipv4.address>:8888/ page from your mobile device and install the root certificate from that page.

Modify the traffic on the fly

FiddlerScript is very powerful. One useful case might be simulating server errors.

When I wanted to test whether my application handles server outages gracefully, I added a rule to the OnBeforeResponse function that faked a service issue:
oSession.oResponse.headers.HTTPResponseCode = 503;
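If you cannot run Fiddler on a given setup, a tiny stub server can fake the same outage. A minimal sketch using Python’s standard library – the port is an arbitrary assumption; point the application under test at it instead of the real backend:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class OutageHandler(BaseHTTPRequestHandler):
        """Answers every GET with 503 Service Unavailable."""
        def do_GET(self):
            self.send_response(503)
            self.send_header("Retry-After", "120")
            self.end_headers()

    # Hypothetical port - adjust to whatever your application expects.
    HTTPServer(("localhost", 8080), OutageHandler).serve_forever()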

Troubleshooting

One very common issue I have seen with Fiddler is that it may leave the proxy enabled on the local system even when it is not running. This may not break some applications (Firefox maintains its own proxy settings, for example) but will affect others (IE, Chrome, etc.). So if you see network-related issues while Fiddler is not running, check Control Panel > Internet Options and disable the proxy if needed.
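If you would rather check this programmatically, the same setting lives in the registry. A small sketch (Windows-only) that reads the values Internet Options edits:

    import winreg

    # WinINET stores the system proxy settings under this key.
    key = winreg.OpenKey(
        winreg.HKEY_CURRENT_USER,
        r"Software\Microsoft\Windows\CurrentVersion\Internet Settings",
    )
    enabled, _ = winreg.QueryValueEx(key, "ProxyEnable")
    if enabled:
        server, _ = winreg.QueryValueEx(key, "ProxyServer")
        print(f"Proxy is still enabled: {server}")
    else:
        print("No proxy configured.")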


The add-ons

The list goes on and on. My favorites are:

  • CertMaker for iOS and Android – makes capturing mobile traffic easy
  • Syntax-Highlighting Add-Ons
  • Watcher – a passive security auditor – generates a security report as you click around your web application

The book

Yes, there is a book. I have not read “Debugging with Fiddler” yet, but since The Man himself wrote it, it looks like a safe recommendation. The official documentation is also great.

By the way…

Other similar tools I have used, and may suit your needs better:

  • ZAP – by OWASP, and has a Jenkins plugin
  • Burp – has awesome website mapping, and powerful security scanning built-in
  • Charles Proxy – works on Mac (Yay!), but is not free (Nay)

Tester toolbox 101 – Wireshark


Wireshark – Go Deep

Indeed, Wireshark allows you to inspect your network traffic in depth, and that can be very useful in testing. It may not be as easy to use as, for example, Fiddler (more on that tool in the future), but at the same time it helps you learn the basics of networking. This is why I would start with Wireshark before moving on to other tools.

Installation

Easy on Windows systems: just get the 32-bit or 64-bit installer and follow the wizard (do remember to install WinPcap if you plan to capture on the local system).

OSX: I suggest using homebrew rather than the official installers. If you want the UI, do remember to use the --with-qt option, like this: brew install wireshark --with-qt

Linux: use your favorite method.

WinPcap, and tcpdump

These go along with Wireshark like peanut butter and jelly. Both are based on the libpcap packet capture library, and are used with Wireshark to capture network traffic. WinPcap gets installed with the Windows version of Wireshark by default; tcpdump may require additional installation on OSX or Linux.

Capturing traffic with WinPcap is pretty much automatic when you are on Windows – all you have to do is use the Capture menu. But let me share my favorite, universal way of running tcpdump to capture traffic on Linux or OSX systems:

tcpdump -s0 -i eth0 -w dump.pcap

The options are:

  • -s0: dump the entire packet, important if you want to inspect the full payload
  • -i eth0: listen on the eth0 interface only, will work in 80% of the cases
  • -w dump.pcap: dump the output to the dump.pcap file, you will open this file in Wireshark later

I would typically run tcpdump only for a brief period of time, to capture just the traffic I am interested in. But if you cannot trigger the specific network event at will, you may have to filter tcpdump further by specifying a port (for example, appending port 443 to the command above), limiting the amount of payload captured, and so on.

After I have captured the data, I copy it over to my local system with something like scp for further analysis.

Use cases

The typical Wireshark use case for a tester is understanding the application under test better, by seeing what actually gets sent on the wire. That knowledge may enable you to find interesting bugs, or simply make you an expert on the application.

Another case is security testing of almost any sort: verifying that all traffic is sent over a secure channel, for example, or looking for ways to exploit the system.

Some cases where it helped me: investigating Windows Phone 8 certificate-based authentication or debugging an iOS application.
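The secure-channel check can even be automated once you have a capture file. A minimal sketch using the scapy library (my assumption – any pcap-reading library would do), flagging anything that talks plain HTTP on port 80:

    from scapy.all import rdpcap, TCP, Raw

    # Walk the capture and flag any packet sent to port 80 with a payload -
    # on a fully TLS-protected application this should print nothing.
    for packet in rdpcap("dump.pcap"):
        if packet.haslayer(TCP) and packet[TCP].dport == 80 and packet.haslayer(Raw):
            print("Plaintext HTTP payload found:")
            print(packet[Raw].load[:100])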

Further information

On top of the official documentation, there is plenty of community material worth checking out.

Assumptions testing

A failed browser extension release we did recently reminded me of an interesting presentation a job applicant once gave. A short talk on any topic was part of a hiring process for our team back then, and this candidate decided to talk about assumptions testing. Now I don’t remember the exact slides or words, but I do remember the spirit of the message, and I will try to describe it here.

This person defined assumptions testing as a technique in which you deliberately try to identify the expectations you have about the object of your testing, and then try breaking those assumptions in the hope of finding issues with the item under test.

There are two types of assumptions you can make in this context:

  • Explicit assumptions – known and declared. Defined product limitations, for example. They may appear as statements about what a user would never do. A tester performing assumptions testing would naturally not only ask “why do we believe a user would never do it?”, but would actually perform that very action. The idea behind such a test is to catch serious product malfunctions before they can do real harm. An example that comes to mind is plane doors: one may assume that no passenger would ever attempt to open them in flight, yet it happens many times per year. So a prudent assumptions tester would come up with a scenario to test exactly that.
  • Implicit assumptions – unknown and undeclared. These may stem from the engineering culture at your company, or from your individual experience. They may be harder to identify than the explicit ones, but there are methods that can help:
    • Peer reviews of test plans and test cases: especially when the reviewer is not directly involved in the project, and so is less likely to share your assumptions about the solution, and more likely to ask questions that break them.
    • Templates: I am not a big fan of documentation, but in some cases you can prepare templates that trigger questions similar to the ones a peer reviewer would ask.

Failing to break either type of assumption can be equally catastrophic. In our case of the failed browser extension release, the assumption was an implicit one: this extension had never had any issues with upgrading, so testing the upgrade scenario never came to our minds. The fresh installation we tested worked well, but we broke the extension for every user who upgraded. Lesson learned.
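To make the explicit case concrete, here is what an assumption-breaking check might look like as an automated test. Everything here is hypothetical – the myapp module, the create_account function, and the absurd input are stand-ins for whatever “a user would never do that” claim you are attacking:

    import pytest
    from myapp import create_account  # hypothetical application code

    def test_absurdly_long_name_is_handled_gracefully():
        # The assumption under attack: "no user would ever paste
        # ten thousand characters into the name field". So we do exactly that.
        huge_name = "x" * 10_000
        # We only care that the application fails in a controlled way,
        # instead of crashing or corrupting data.
        with pytest.raises(ValueError):
            create_account(name=huge_name)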


Tester toolbox 101 – Excel

There are three reasons I can think of that make Microsoft Excel one of the tools I use often enough to make it into my basic toolbox.

Creating the test data

Imagine testing an application that handles creating user accounts. You checked the happy path and a sample user was created. But can it handle a hundred? Ten thousand? You are an engineer, so you start thinking about writing a script to create the test data. Writing scripts is fun, but scripting everything is not always the most efficient way to get things done. Enter Excel: if your application supports CSV files, all you have to do is write the first couple of lines and then drag to create as much sample data as you need. You can then use this data set in your automation, load testing, and so on.


Excel will also create date ranges, long strings, random data… you name it.
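That said, when drag-filling stops being enough, the scripted route is only a few lines anyway. A minimal sketch using Python’s standard csv module – the column names and value formats are made up, so match them to your application’s import format:

    import csv

    # Generate ten thousand sample accounts in a CSV file.
    with open("users.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "email", "password"])
        for i in range(10_000):
            writer.writerow([f"user{i:05d}", f"user{i:05d}@example.com", "Passw0rd!"])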

Reporting

Now that’s an important part of every tester’s life. Typical reports requested from testers include testing progress, product quality, or outstanding bugs. And while some of the reports you need are provided by the tools you already use, some specific ones may be missing. I can guarantee, though, that every Test Case Management System or Issue Tracking System will let you export data into a format that Excel can consume: directly into an XLS file, as CSV, or via some sort of API. The last resort is using ODBC to talk to the TCM or ITS database directly.

I found a handful of Excel skills particularly useful for this kind of reporting.

Your users use it too

This is obvious if your application allows uploading Excel spreadsheets (to create graphs, to add users, to process orders, etc.), or if Excel shows up on the output side (sales reports, or current usage data, for example). But even when plain CSV files are used, your users are likely to create or process them with Excel. So here are a couple of things to think about when it comes to CSV file processing (see the sketch after this list):

  • When saving or opening a CSV file, Excel is likely to attempt to use the system locale’s ANSI codepage. So your application may have to set it correctly on output, and handle it on input.
  • But surely UTF-8 is always supported? Well… not on Mac.
  • CSV files are always comma-separated? Hint: they are not.
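One practical workaround for the codepage problem, sketched in Python: the utf-8-sig codec prepends a byte-order mark, which desktop Excel on Windows uses to detect UTF-8 instead of falling back to the ANSI codepage. (A sketch under that assumption – verify against the Excel versions your users actually run.)

    import csv

    # "utf-8-sig" writes a BOM so Excel recognizes the file as UTF-8.
    with open("report.csv", "w", newline="", encoding="utf-8-sig") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "city"])
        writer.writerow(["Zażółć", "Łódź"])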

So those were my three reasons for using Excel. What are yours?

 


Piñata testing


I did not come up with this term. We started using it back in 2009/2010 at Symantec, inspired by Michael Bolton’s “When do we stop testing?” presentation, in which he mentions a piñata heuristic as one of the ways to decide when you have tested enough.

The idea is to beat on the piñata (the software) until candy (bugs) starts falling out – and stop when you hit the first dramatic problem.

The term also describes well a testing technique we were using at Symantec. Its basic principle is to use the piñata-beating approach to identify weak, or lower-quality, parts of the software. You start testing the application, and when issues begin showing up in a certain area, you concentrate on it even more. Keep hammering as long as candy (bugs) keeps falling out – that is, as long as the rate of new issues is not slowing down. When it does, you take a pause.

Now comes the time to assess the damage done to the piñata (our test subject):

  • Have you found something that may require a fix invalidating your further tests? If yes, you should probably stop here.
  • Do the issues found so far have something in common? Patterns in bugs may indicate repeated issues in the code under test. This might be a good time to chat with your developers(*). Maybe they are using a buggy library? Patterns may also show up only in code coming from a specific developer. Maybe that person is not following the coding practices others are using? If it seems like a more general problem, it might be worth testing other areas of your software for similar issues, not only the module that exposed them.
  • Looking at the issues found, can you predict others? For example, poor input handling may also signal potential SQL injection exposure. Think about scenarios like that one, and check the suspicious ones.
  • How does this specific area compare to the rest of your software? Is its quality significantly worse? Significantly better? Can you tell why?


Depending on the results of the assessment described above, you may decide to stop testing this specific area, or to keep hitting it. If you keep testing, you will stop and assess again at some point, and eventually decide you are done. You may then ask yourself one of two questions:

  1. Have I found all the issues in the given area? Am I sure there is not an even more critical issue still uncovered?
  2. Have I correctly identified the risky/weak part of the application?

The answer to both questions is “maybe not”. You may not have identified all the issues, there may still be a severe bug hiding, and other areas of your software may be even worse than this one. But when you look at testing as a risk-reducing activity, I think piñata testing – combined with other testing techniques – is a fun way to find good issues, and a great help in identifying weak spots.

 

(*) Actually, any time is a good time to chat with developers. They are smart people, and you can both learn from each other.