Dashboard or Automated Test?

There’s a tendency to use automated tests to do a lot more than testing. They’re great for detecting problems… but so are humans!

The trick is balancing when to let automation detect problems, and when to let a human do it.

Automation Isn’t Our Only Tool

The human mind is an amazing piece of organic machinery. People can spot patterns that would take a computer a long time to learn.

I think a lot of us don’t see the potential we have tied up in our heads. I think sometimes we try not to see it. 

But what if…

  • you could aim that mental power and spot problems almost instantly?
  • you had a diagram showing problem areas in the system?
  • you had a tool that just shows you facts about the system without lots of logic?

The goal of automation is to free up manual time spent running tests.

Does that mean we have to use automation for everything? Nope! We can use something that actively tells us things about what we’re testing–not necessarily by running tests.

That something is a Dashboard.

What’s a Dashboard?

A dashboard is a tool that tells you facts about a system.

They don’t usually point out problems with the system directly, although some indicators can do that.

Here’s an example: An airplane has a lot of instrumentation. As you’re flying around, you take in pieces of info to determine how well you’re flying.

One instrument is the altimeter. It tells you how high up in the air you are.

Another instrument is the horizon indicator. It indicates what your angle is relative to the ground.

On their own, each of these instruments has value.

But together, a human can look at the altimeter and see that it’s dropping, look at the horizon indicator and see that the plane is headed toward the ground, and then their brain goes, “!”, and they react accordingly.

Is there an instrument that says, “WE’RE GONNA CRASH LOL!” No. We made that correlation based on multiple pieces of data.

This parallel can be made to software testing as well.

If we were to treat an airplane cockpit the same way we use automated tests, we would have a red/green LED for every possible condition. One is the LOLCRASH indicator. Another is the wiper fluid indicator.

It would be difficult to quickly tell the condition of the craft by looking at a bunch of little lights. We might even assume everything was probably OK just because very few of the lights were red.

A dashboard helps clean up the “noise” of individual conditions.

But if we’re going to use some kind of dashboard, there’s an important question to ask…

Where To Draw The Line?

A good question to ask is: are we trying to see something about the system, or are we trying to see how parts of the system behave?

There’s a granularity component to this question. Although we can tell something about the system based on how the parts behave, it’s often difficult to extrapolate.

We tend to use automated tests as a canary to tell whether something’s wrong with the system. But when several tests fail for the same reason, the cause is usually something at a higher level that we could’ve figured out quicker.

It’s understandable, though: while we’re in there, we might as well write another test to make another check, right?

So: Do we need an automated test for something, or would a dashboard give us the same information more easily?

Here’s an example: Let’s say you have 100 webservices in some kind of configuration, each connected to at least one other webservice. No orphans.

If you had automated tests for every webservice’s functionality, what would they look like if one of the webservices was down?

You’d probably have a lot of failed tests, right?

And then wouldn’t you have to go into each test to determine what the problem was, and then realize: oh hey, there’s a pattern here, and maybe this particular webservice is down?

And if that weren’t the only problem in the system, would it be harder to diagnose the problem?
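
To make that concrete, here’s a toy sketch in Python of what those tests tend to look like. Everything in it is hypothetical (the URL, the service names, the workflows), but notice that every test implicitly re-checks the same dependency:

    # Toy illustration: three functional tests that all depend on one
    # hypothetical "auth" webservice. If auth is down, all three blow up,
    # and none of the failures says "auth is down" directly.
    import urllib.request

    AUTH_URL = "http://auth.internal.example.com/health"  # hypothetical

    def login(user):
        # Every workflow starts by hitting the auth service.
        resp = urllib.request.urlopen(AUTH_URL, timeout=2)
        return resp.status == 200

    def test_checkout():
        assert login("alice")
        # ... exercise the checkout webservice ...

    def test_search():
        assert login("bob")
        # ... exercise the search webservice ...

    def test_profile_update():
        assert login("carol")
        # ... exercise the profile webservice ...

Three red marks, one root cause, and you only spot the pattern by reading through each failure.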

What if instead of using automated tests for that, you had a dashboard that could display the topology of the webservices, ping each one, and color the node on the diagram appropriately?

How long would it take you to realize something’s wrong with a webservice if it’s on the screen visibly, and it’s RED? And most importantly, how simple and low-maintenance would that code be?

I bet there’s already something out there that does that. It can probably email you when something goes wrong too. Fast feedback!
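
For comparison, here’s a minimal sketch of that kind of dashboard, again in Python with made-up service names. There’s no test logic at all, just one fact per node (is it reachable?), colored green or red:

    # Minimal "ping the topology" dashboard sketch, standard library only.
    import socket

    # Hypothetical topology; a real dashboard would read this from config
    # and draw an actual diagram instead of printing lines.
    SERVICES = {
        "auth":    ("auth.internal.example.com", 443),
        "billing": ("billing.internal.example.com", 443),
        "catalog": ("catalog.internal.example.com", 8080),
    }

    GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

    def is_up(host, port, timeout=2.0):
        # One fact about the system: can we open a TCP connection?
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for name, (host, port) in SERVICES.items():
        color = GREEN if is_up(host, port) else RED
        label = "UP  " if color == GREEN else "DOWN"
        print(f"{color}{label}{RESET} {name} ({host}:{port})")

Put that on a loop on a big TV, and a down webservice shows up as one red line instead of a pile of red tests.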

When we want to find out something high-level about the system, a dashboard is definitely an option.

 

Why Use a Dashboard?

Great question. If you can get the same answer with an automated test, why not use it?

Because:

  • Dashboards can be exposed to everybody–you don’t have to be a tester or an automator to have access to the information they provide. And big TVs are somewhat cheap. Throw it up and make the info public.
  • They’re usually simpler–as long as you stick to the idea of not having a ton of logic in your dashboard, you don’t have to fool with them much. From the example above: ping has been around for a long time. It’s everywhere and it’s mature. Will you have to change it? Probably not. Low maintenance.
  • More mental bang for the buck–your brain (and mine too!) can make correlations that it would take a while to code out. Take the plane cockpit example from before: there are probably thousands of things that can go wrong in a plane, but relatively few instruments available. It’s much easier and cheaper to let the human brain make correlations than to bang out code to do the same thing.

It’s another tool in the toolbox. Another weapon in the arsenal of QA. Where can you use a dashboard in lieu of an automated test? 

 

6 Responses

  1. T.J. Maher says:

    I am having trouble picturing this… do you have a screenshot of a Dashboard that you use?

    -T.J. Maher

    • Hmm.. Not one that I can show here, sorry. Another example may help though.

      If you had a list of ping times across these webservices, it would look like a gob of info. Hard to tell what’s going on.

      But if they’re displayed as a bar graph, you’d see any outliers sticking way out relative to the others.
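
      Something like this quick sketch would do it (the services and times below are made up):

          # Made-up ping times (ms) per webservice. An outlier jumps
          # right out once the numbers become bars.
          ping_ms = {"auth": 12, "billing": 15, "catalog": 11,
                     "search": 240, "cart": 14}

          for name, ms in ping_ms.items():
              bar = "#" * (ms // 5)  # one '#' per 5 ms
              print(f"{name:>8} {ms:>4}ms {bar}")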

      There are lots of ways to visualize data. Hope this helps!

  2. I was recently at a meetup where the discussion ended with monitoring coming up as a valuable addition to testing for certain aspects. Your dashboard fits in nicely there, so thanks for sharing that idea!
