Reflect Docs

Creating Resilient Tests

We recommend that you follow a few best practices when creating Reflect tests. These recommendations will help ensure that your tests are resilient to false-positive failures.

Designing Tests

Keep Tests Small

We recommend keeping tests as small as possible, but no smaller. What this means is that a test should replicate how real users interact with your web application, including all of their logically-related actions, but no additional, unrelated actions. Sometimes this requires a liberal use of visual assertions to verify that the site looks like you expect. What we advise against is chaining actions which could be split up into separate tests. This has the dual advantage of making your tests run faster (if you’re running them in parallel), as well as making the issue clearer when a test run fails.

Don’t Repeat Yourself

When possible, we recommend not creating multiple tests that have a lot of overlapping test steps and/or assertions unless those steps are a critical part of the workflow under test (such as logging in). If tests assert against the same behavior, they will also fail at the same time. In the worst case, this can lead to a majority of tests failing together, all for the same reason.

Utilize Existing Resources

Since Reflect tests are essentially manual tests that you record once and execute as many times as you wish, many of the principles of designing an effective manual test plan apply to designing Reflect tests. Utilizing some of the many resources on this topic may help you refine your testing strategy.

Handling Dynamic Elements

Most web applications today contain some form of dynamic behavior. From the standpoint of an automated test, dynamic behavior is behavior that may change between test runs, and it is usually the main cause of a test failing to record successfully (or failing in one of its first few test runs).

Examples of dynamic behavior include:

  • Pricing and availability changes on an e-commerce store which result in a frequently changing assortment of products on a category page.
  • Adjustments to the test account’s internal application state, which result in different “state” from one test run to the next.
  • Ongoing A/B tests which result in small changes to page components, visual styling, or marketing copy between test runs.

As a subject-matter expert of the site you’re testing, your best defense for avoiding this class of false positive failure is to know which elements of the page will change between test runs, and which will not. If you are testing an e-commerce store, instead of clicking on the first product in a category page that changes frequently, you could instead search for a specific product or SKU before selecting it. If you are testing an element which frequently changes (such as the nth-item in a collection of products), assert against visual elements which will not change even if a different item was selected. For example, instead of asserting on the name of the product added to cart, you could instead assert that the number of items in the cart is ‘1’.


How Reflect Generates Selectors

With code-based automation tools, you need to choose your own selector when interacting with an element. This process of manually finding selectors is time-consuming, and hand-rolled selectors can often lead to false-positive failures even for innocuous code changes. With Reflect, you’ll never need to choose a selector. For every element that you interact with, we will automatically generate multiple selectors that uniquely identify the element. Our selector generation algorithm uses several strategies to produce selectors that are resilient to change. Here’s how.

When you interact with an element using our recorder, Reflect will generate multiple selectors that uniquely identify the element under test. The selectors we generate are ordered in terms of specificity, meaning that the first selector we’ll use is the one that we’ve deemed to most narrowly define an element. For example, a selector #baz has higher specificity than a selector .foo, because an ID is more likely to uniquely identify an element than a class. Similarly, a selector of .item:nth-of-type(5) will have the lowest specificity since it’s essentially an XPath value in the form of a CSS selector.

Note: Reflect always attempts to include a generic “XPath”-like selector as the lowest-specificity selector.

When executing a test, for each test step we will attempt to target the element by trying the list of selectors associated with that step, one by one. We will use a selector if (1) it uniquely identifies a single element in the DOM and (2) that element is visible in the current browser window. If no selectors match these criteria, we will mark the test step as failed and fail the test.
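The fallback behavior described above can be sketched as a simple loop. Everything here is illustrative: the selector list and the `queryAll`/`isVisible` helpers are stand-ins for real DOM queries, not Reflect’s actual implementation.

```javascript
// Hypothetical ordered selector list for one recorded element,
// highest specificity first (illustrative only).
const selectors = ['[data-testid="add-to-cart"]', '#baz', '.item:nth-of-type(5)'];

// queryAll returns the elements matching a selector; isVisible reports
// whether an element is rendered in the current browser window.
function findTarget(selectors, queryAll, isVisible) {
  for (const selector of selectors) {
    const matches = queryAll(selector);
    // Use a selector only if it uniquely identifies one visible element.
    if (matches.length === 1 && isVisible(matches[0])) {
      return matches[0];
    }
  }
  return null; // No selector matched: the test step fails.
}

// Mock "DOM": the test attribute was removed in a redesign,
// but the id-based fallback selector still matches.
const dom = { '#baz': [{ id: 'baz', visible: true }] };
const found = findTarget(
  selectors,
  (sel) => dom[sel] || [],
  (el) => el.visible
);
console.log(found ? 'matched' : 'step failed'); // → matched
```

Because the element is found via the second selector, the step still succeeds even though the first selector no longer matches anything.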

At recording time we’ll also attempt to generate a diverse set of selectors that do not share the same classes, attributes, or IDs. This makes our tests more resilient to change: if a given class, attribute, or ID is removed in the future, we’re more likely to have other selectors that we can fall back to.

“Test Attributes”

Some software organizations prefer to add explicit attributes to elements under test. This makes it easier for test automation to target those elements, and makes it less likely that false-positive failures will occur when innocuous changes invalidate selectors. This practice is not required with Reflect, but if Reflect encounters an element with a “test attribute”, it will generate a selector for that test attribute and make it the first selector that is used. Reflect will consider any of the following attributes to be test attributes:

  • data-test
  • data-testid
  • data-test-id
  • data-cy
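A sketch of how a recorder might prioritize test attributes when generating selectors. The attribute names come from the list above; the function itself is illustrative, not Reflect’s implementation.

```javascript
// Attributes treated as test attributes, per the docs above.
const TEST_ATTRIBUTES = ['data-test', 'data-testid', 'data-test-id', 'data-cy'];

// Given an element's attributes as a plain object, return a test-attribute
// selector if one exists. It would be placed first in the selector list.
function testAttributeSelector(attributes) {
  for (const name of TEST_ATTRIBUTES) {
    if (name in attributes) {
      return `[${name}="${attributes[name]}"]`;
    }
  }
  return null; // No test attribute: fall back to other selector strategies.
}

console.log(testAttributeSelector({ 'data-cy': 'submit-btn', class: 'btn' }));
// → [data-cy="submit-btn"]
```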

An Alternative to Hard-coded “Waits”

One practice in test automation which is widely adopted yet widely considered an anti-pattern is adding explicit waits inside a test. Reflect does not support the concept of an explicit wait, and for good reason; they make tests more non-deterministic and prone to failure. In place of an explicit “wait”, we recommend adding a Visual Observe step instead. By adding an Observe step, you are essentially telling Reflect to wait until that element appears on the page, or otherwise fail the test if the element does not appear. This is a great solution to validate that a long-running operation has completed, or to guard against performance issues on your site which could cause tests to fail non-deterministically.
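Conceptually, an Observe step behaves like a bounded retry loop rather than a fixed sleep. This sketch is illustrative (the `maxAttempts` parameter and the simulated page are assumptions, and the real sleep between attempts is omitted so the code runs instantly); it shows why the step fails only when the element truly never appears.

```javascript
// Poll a visibility check up to maxAttempts times instead of sleeping
// for a fixed duration and hoping the element has appeared by then.
function observe(isElementVisible, maxAttempts = 10) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    if (isElementVisible()) {
      return { ok: true, attempt };
    }
    // A real runner would sleep briefly here (e.g. 500ms) before retrying.
  }
  return { ok: false, attempt: maxAttempts }; // Element never appeared: fail.
}

// Simulated page: the element becomes visible on the third check,
// as if a long-running operation just completed.
let checks = 0;
const result = observe(() => ++checks >= 3);
console.log(result); // → { ok: true, attempt: 3 }
```

A hard-coded wait of the same total duration would either waste time when the operation finishes early, or fail spuriously when it finishes late; the polling loop does neither.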

Scroll into View

Often the height (or width) of a page will change over time. Since Reflect captures user-initiated scrolls, this can cause elements that you interact with to be misaligned or even out-of-view. Reflect can automatically detect this situation and scroll these elements into view. This feature, scroll into view, is enabled by default on all relevant test steps, but you can toggle it on or off in Selector Options.

Failure Settings

Each test step in Reflect can be configured with one of three failure modes. Generally, when a test step cannot be executed successfully, you will want to mark your test as “Failed” and stop the test execution immediately. A test step might fail because the element could not be found, or because the element was found but the action failed (for example, because the input element was disabled).

An exception to the above rule is Visual Validations. By default, when a visual validation step fails, Reflect will ultimately mark the test itself as failed, but will continue executing the test until a non-visual validation failure occurs. This allows you to see all failing visual validations from a single test execution. For example, if a website has wide-reaching visual updates, then every visual validation step will potentially fail. Rather than forcing you into the tedious cycle of seeing and fixing a single failed visual validation per execution, Reflect continues the test execution after a failed visual validation so that you can update all failed validations in one pass.

The last failure setting makes a test step optional. If the test step’s target element is not found, or the step fails during execution, the step will not be marked as failed and the test execution will continue.

You can configure the failure behavior for a test step by clicking on the test step and modifying the Failure Settings in the middle pane before clicking ‘Save and Run’.

Handling ReCAPTCHAs

ReCAPTCHA is a technology that presents a short puzzle that is easy for humans to solve, but difficult for machines. Being a completely automated system, Reflect cannot solve reCAPTCHAs. This means that if you have workflows that include a reCAPTCHA step, you must conditionally disable this step for Reflect test recordings and test runs. There are multiple options for conditionally disabling reCAPTCHAs:

  1. Every Reflect test run and test recording sets a global variable on the Window object called isReflectTest. In your front-end code, you can add the following logic to conditionally disable reCAPTCHAs:
if (!window.isReflectTest) {
  // Display reCAPTCHA
}
  2. You can use Execution Overrides to pass a parameter containing a shared secret from your Reflect tests. This would be a query parameter that’s appended to the starting URL of each test. On your app’s side, you can then add logic to look for the existence of this parameter and make a server-side call to validate the parameter (thus not exposing the shared secret on the frontend). If the server-side check passes, set a flag on the user’s session (potentially server-side again, to prevent tampering) to disable the captcha.
  3. The Standard tier and higher offers the ability to run all tests from a static IP address. You can configure your server-side check to validate that the IP address matches the static IP, and if so, disable the captcha check in a way that can’t be forged client-side.

Handling HTTP Basic Auth

If your site utilizes Basic HTTP Authentication, you may see a prompt when initially navigating to your website.

You can bypass this prompt in Reflect by either:

  1. Specifying an Authorization header as an Execution Override.
    • The value of the header should be Basic, followed by a space and a base64-encoded string of the username and password separated by a colon.
    • For example, to authenticate for a user named foo with a password of bar, you would specify an Authorization header with a value of Basic Zm9vOmJhcg== (where Zm9vOmJhcg== is the base64-encoded value of foo:bar).
  2. Including the login credentials in the starting URL of your test with a prefix of username:password@ (for example, https://username:password@example.com).
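The header value from option 1 can be computed in one line. A Node sketch using the foo/bar example from above:

```javascript
// Build the value of the Authorization header for HTTP Basic Auth:
// "Basic " + base64(username + ":" + password).
function basicAuthHeader(username, password) {
  const encoded = Buffer.from(`${username}:${password}`).toString('base64');
  return `Basic ${encoded}`;
}

console.log(basicAuthHeader('foo', 'bar')); // → Basic Zm9vOmJhcg==
```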


Copyright © 2022 Reflect Software Inc. All Rights Reserved.