
In Regression Tests, Conditionals = Complexity

The use of conditionals should be considered an anti-pattern in regression tests.

Todd McNeal
Published December 30, 2021

At Reflect, we care a lot about making regression tests maintainable. Our regression testing guide covers many common situations that lead to flaky tests, but one anti-pattern we rarely see discussed is the presence of conditionals (i.e. if-else and switch statements) in tests. In this article, we’ll cover common scenarios where we’ve seen conditionals used in regression tests, along with better alternatives.

Handling Third-Party Integrations

Since regression tests are mainly run on shared systems like a Staging or QA environment, third-party integrations like live chat widgets, personalization tools, and A/B testing platforms may be present in those environments and have a habit of interfering with tests. Imagine that your web application uses an A/B testing framework, and your marketing team is using it to run a site-wide campaign that displays a pop-up modal to 50% of sessions. This could easily interfere with your tests, and it would be tempting to write an if-statement that checks for the modal and dismisses it if displayed. That may be a quick fix, but it leaves open a class of issues that could make many of your other tests flaky down the road. Rather than using an if-statement, a better approach is to disable the marketing widgets entirely. Here are some alternatives for doing just that:

Alternative #1: Set user to ‘control’ group at start of the test

Our recommended approach is to set your test users to be in the “control” group in your marketing campaigns. The specifics of this approach depend on the implementation of your marketing software, but usually there’s a way to set a user’s state by setting a cookie, localStorage, or sessionStorage value.

We think this is the best alternative since keeping your marketing campaigns active ensures your test environment is most closely replicating production, but setting the user’s initial state to a control group lets you remove non-deterministic behavior caused by your marketing software.
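As a concrete sketch of this approach, the helper below builds the cookie objects that Playwright’s `browserContext.addCookies` accepts, pinning the session to the control variation before any page loads. The cookie name and value here are hypothetical — check your A/B tool’s documentation for the real variation-forcing or opt-out mechanism it reads.

```javascript
// Sketch: pin the test user to the "control" variation before any page loads.
// The cookie name and value are hypothetical stand-ins for whatever your
// A/B testing tool actually reads.
function controlGroupCookies(domain) {
  return [
    {
      name: 'ab_test_variation', // hypothetical cookie your A/B tool checks
      value: 'control',
      domain,
      path: '/',
    },
  ];
}

// In a Playwright test, this would be applied before the first navigation:
//   await context.addCookies(controlGroupCookies('staging.example.com'));
```

The same idea works for tools that key off localStorage instead of cookies; there you’d seed the value with `page.addInitScript` so it is set before the marketing script runs.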

Alternative #2: Disable the marketing widget in your test sessions

Many marketing integrations can be disabled for either a single user or a single session. There will sometimes also be an ‘Opt Out’ link available to opt out your test users from marketing campaigns, such as this opt-out feature in Visual Website Optimizer.

Alternative #3: Test in an environment where marketing integrations are not present

If you’re testing in a staging or QA environment, see if you can disable the marketing integrations entirely. This is a viable approach if the team(s) owning those marketing integrations are not testing them outside of production. Since this approach prevents your marketing team from testing outside of production, as well as prevents you from detecting bugs caused by your marketing integrations outside of production, we think it’s the least desirable of the three alternatives here.

Note that some marketing software lets you set up “first-run experiences” for users who have never visited a page or used a specific feature. Since this behavior IS deterministic, we think this is the rare case where using a conditional is perfectly fine.

Handling frequently changing data

Conditionals tend to show up in tests when the underlying state of the application is changing frequently. Consider an e-commerce site with a dynamic catalog. As products go in and out of stock, the state of the application changes and tests can fail. If you were testing the search and add-to-cart features of the site, you may find yourself adding conditionals to do things like finding an active product if the first product is out of stock, or selecting a different size or colorway when the desired one is unavailable. This can lead to a ‘whack-a-mole’ approach to test maintenance: as application state changes and tests fail, more and more conditionals are added and tweaked to get tests back to a passing state.

Here’s what we’d recommend doing instead:

Alternative #1: Adjust your selectors to match on the first available size / color

In the add-to-cart scenario, instead of selecting a specific size and color, you could instead match on the first available size and/or color in the list. Usually there’s a class or attribute available that only appears on active sizes and colors, and so you can use that as part of your selector.
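Here’s a minimal sketch of the idea. The data shape is illustrative; on a real page you’d express the same thing as a selector, e.g. something like `.size-option:not(.disabled)` with a first-match query, rather than branching on availability in test code.

```javascript
// Sketch: instead of `if (size 'M' is available) ... else ...`, derive the
// target deterministically by taking the first in-stock option.
function firstAvailable(options) {
  const match = options.find((opt) => opt.inStock);
  if (!match) {
    // Fail loudly rather than silently branching to some fallback behavior.
    throw new Error('No in-stock option available');
  }
  return match;
}

const sizes = [
  { label: 'S', inStock: false },
  { label: 'M', inStock: true },
  { label: 'L', inStock: true },
];

firstAvailable(sizes); // picks 'M', the first in-stock size
```

Note that this is still deterministic: for a given catalog state the test always does the same thing, and when nothing is available it fails with a clear error instead of masking the problem.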

Note that with Reflect, we handle this for you and eliminate the need to think about and maintain selectors yourself.

Alternative #2: Explicitly manage the underlying test data

If you can make stronger guarantees on the state of the system prior to your tests running, then your tests can afford to be a lot simpler. In the e-commerce example, if we can guarantee that a specific product will rank first for a specific search term, and will have the desired size and colorway available, then our test can be very straightforward.
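One way to get that guarantee is a setup step that seeds a known product before the test runs. The fixture below is a hypothetical sketch — the SKU, fields, and the fixtures endpoint mentioned in the comment are stand-ins for whatever seeding mechanism your staging environment exposes.

```javascript
// Sketch: deterministic test data for the e-commerce search scenario.
// Every field here is a hypothetical example, not a real API contract.
function buildSearchFixture() {
  return {
    searchTerm: 'acme running shoe',
    product: {
      sku: 'ACME-RUN-001',
      name: 'Acme Running Shoe',
      sizes: ['M', 'L'],
      colorway: 'blue',
      inStock: true,
    },
  };
}

// A test setup hook might push this to a fixtures endpoint, e.g.:
//   await request.post('/test-fixtures/products', {
//     data: buildSearchFixture().product,
//   });
```

With the data pinned down like this, the test itself can search for the term, click the first result, and select size ‘M’ unconditionally — no branches needed.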

Testing multiple scenarios in a single test

Another common scenario where we see conditionals used is when the tester is testing multiple scenarios in a single test. Imagine that you have a feature which in some circumstances is enabled, and in other circumstances is disabled. This could be something that’s dependent upon the role of your test user (e.g. an admin can access this feature, but a read-only user cannot). Or it might be based on the state of the application as outlined in the example above. It could be based on time, such as a calendar app that doesn’t allow you to book meetings for dates in the past.

It’s again tempting to use if-else or switch statements to take different actions depending on what’s present in the app when running the test. Instead of using conditionals, we’d recommend the following alternative:

Alternative: Switch to a data-driven testing approach

For testing scenarios that depend on user roles, we recommend refactoring the test into two tests. The first test would verify the “can access” state, and the second test would verify the “can’t access” state. Common steps across these two tests could be saved as common functions, or what we call Test Segments in Reflect. This gets you the benefit of maintaining common steps in a single place, and splitting this into two separate scenarios makes the tests more self-documenting and straightforward to understand.

The other benefit to this approach is that you can make these tests data-driven by running them multiple times with different inputs. You may have ten different user roles in your application, with half being able to access the feature, and half not being able to access the feature. By allowing the first test to be overridden with a different username/password when logging on, you can set up that test to run five times using each of the five user roles, and do the same for the five roles using the “can’t access” test.
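The data-driven approach can be sketched as a small role matrix that expands into one test case per role. The role names and `canAccess` flags below are hypothetical; the point is that each (role, expectation) pair becomes its own test rather than a branch inside one test.

```javascript
// Sketch: expand a role matrix into concrete test cases. Role names and
// access flags are illustrative placeholders.
const roles = [
  { name: 'admin', canAccess: true },
  { name: 'editor', canAccess: true },
  { name: 'viewer', canAccess: false },
];

function testCasesFor(roleList) {
  return roleList.map(({ name, canAccess }) => ({
    title: `${name} ${canAccess ? 'can' : 'cannot'} access the feature`,
    role: name,
    expectAccess: canAccess,
  }));
}

// In a code-based framework like Playwright, this maps onto a loop of
// test declarations -- no conditionals inside any individual test:
//   for (const tc of testCasesFor(roles)) {
//     test(tc.title, async ({ page }) => {
//       // log in as tc.role, then assert access matches tc.expectAccess
//     });
//   }
```

Each generated test has a descriptive title and a fixed expectation, so a failure immediately tells you which role broke.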


Hopefully these tips will help you avoid conditionals in your own tests. By using an alternative to if-else statements, your tests should become easier to understand, and easier to maintain.

