Creating Resilient Tests

We recommend that you follow a few best practices when creating Reflect tests. These recommendations will help ensure that your tests are resilient to false-positive failures.

Designing Tests

Keep Tests Small

We recommend keeping tests as small as possible, but no smaller. What this means is that a test should replicate how real users interact with your web application, including all of their logically-related actions, but no additional, unrelated actions. Sometimes this requires a liberal use of visual assertions to verify that the site looks the way you expect. What we advise against is chaining together actions that could be split into separate tests. Splitting them up has the dual advantage of making your tests run faster (if you’re running them in parallel) and making the cause clearer when a test run fails.

Don’t Repeat Yourself

Where possible, avoid creating multiple tests that share a lot of overlapping test steps and/or assertions, unless those steps are a critical part of the workflow under test (such as logging in). Tests that assert against the same behavior will also fail at the same time. In the worst case this can lead to a majority of tests failing together, all for the same reason.

Utilize Existing Resources

Since Reflect tests are essentially manual tests that you record once and execute as many times as you wish, many of the principles of designing an effective manual test plan apply to designing Reflect tests. Utilizing some of the many resources on this topic may help you refine your testing strategy.

Handling Dynamic Elements

Most web applications today contain some form of dynamic behavior. From the standpoint of an automated test, dynamic behavior is behavior that may change between test runs, and it is usually the main cause of a test failing to record successfully (or failing in its first few test runs).

Examples of dynamic behavior include:

  • Pricing and availability changes on an e-commerce store which result in a frequently changing assortment of products on a category page.
  • Adjustments to the test account’s internal application state, which result in different “state” from one test run to the next.
  • Ongoing A/B tests which result in small changes to page components, visual styling, or marketing copy between test runs.

Since you are the subject-matter expert on the site you’re testing, your best defense against this class of false-positive failure is knowing which elements of the page will change between test runs and which will not. If you are testing an e-commerce store, instead of clicking on the first product in a category page that changes frequently, you could search for a specific product or SKU before selecting it. If you are testing an element that frequently changes (such as the nth item in a collection of products), assert against visual elements that will not change even if a different item is selected. For example, instead of asserting on the name of the product added to the cart, you could assert that the number of items in the cart is ‘1’.
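Reflect tests are recorded rather than coded, but the same principle carries over to code-based tools. Here is a hypothetical Playwright-style sketch (the URL, search placeholder, SKU, and .cart-count selector are all invented for illustration) contrasting a brittle interaction and assertion with more resilient ones:

    // Hypothetical Playwright-style sketch; the URL, SKU, and selectors below are
    // invented purely to illustrate the brittle-vs-resilient choices described above.
    import { test, expect } from '@playwright/test';

    test('add a specific product to the cart', async ({ page }) => {
      await page.goto('https://shop.example.com/category/shoes');

      // Brittle: click the first product on a category page whose assortment
      // changes frequently.
      // await page.locator('.product-card').first().click();

      // More resilient: search for a specific, stable SKU before selecting it.
      await page.getByPlaceholder('Search').fill('SKU-12345');
      await page.keyboard.press('Enter');
      await page.getByRole('link', { name: 'SKU-12345' }).click();
      await page.getByRole('button', { name: 'Add to cart' }).click();

      // Assert on something that stays stable even if the catalog changes:
      // the cart count, rather than the name of the product that was added.
      await expect(page.locator('.cart-count')).toHaveText('1');
    });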

Selectors

With code-based automation tools, you need to choose your own selector when interacting with an element. This process of manually finding selectors is time-consuming, and hand-rolled selectors can often lead to false-positive failures even for innocuous code changes. With Reflect, you’ll never need to choose a selector. For every element that you interact with, we will automatically generate multiple selectors that uniquely identify the element. Our selector generation algorithm uses several strategies to produce selectors that are resilient to change. Here’s how.

When you interact with an element using our recorder, Reflect will generate multiple selectors that uniquely identify the element under test. The selectors we generate are ordered in terms of specificity, meaning that the first selector we’ll use is the one that we’ve deemed to most narrowly define an element. For example, a selector #baz has higher specificity than a selector .foo, because an ID is more likely to uniquely identify an element than a class. Similarly, a selector of .item:nth-of-type(5) will have the lowest specificity since it’s essentially an XPath value in the form of a CSS selector.

Note: Reflect always attempts to include a generic “XPath”-like selector as the lowest-specificity selector.
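As an illustration, the ordered selector list for a single button might look like the following (the element and all of its classes and ids are hypothetical):

    // Hypothetical ordered selector list for one "Add to cart" button, from highest
    // to lowest specificity. Reflect generates and orders such a list automatically.
    const candidateSelectors: string[] = [
      '#add-to-cart',                  // id: most likely to uniquely identify the element
      'button.btn-add-to-cart',        // class-based selector
      'form > button:nth-of-type(1)',  // positional, XPath-like fallback (lowest specificity)
    ];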

When executing a test, for each test step we will attempt to target the element by trying the list of selectors associated with that step, one by one. We will use a selector if (1) it uniquely identifies a single element in the DOM and (2) that element is visible in the current browser window. If no selectors match these criteria, we will mark the test step as failed and fail the test.
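A minimal sketch of that resolution strategy, assuming a plain DOM environment (this is not Reflect’s actual implementation, and the visibility check is deliberately simplified):

    // Try each selector in order of specificity and use the first one that uniquely
    // matches a single, visible element. Returning null corresponds to the case
    // where Reflect would mark the test step as failed.
    function resolveElement(selectors: string[]): HTMLElement | null {
      for (const selector of selectors) {
        const matches = document.querySelectorAll<HTMLElement>(selector);
        if (matches.length !== 1) continue;          // must uniquely identify one element
        const element = matches[0];
        const rect = element.getBoundingClientRect();
        if (rect.width > 0 && rect.height > 0) {     // simplified visibility check
          return element;
        }
      }
      return null;
    }

Applied to a list like candidateSelectors above, the id selector wins as long as it still uniquely matches a visible element; otherwise the class-based and positional selectors act as fallbacks.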

At recording time we’ll also attempt to generate a diverse set of selectors that do not share the same classes, attributes, or ids. This makes our tests more resilient to change, because if a given class, attribute, or id is removed in the future, we’re more likely to have other selectors that we can fall back to.

“Test Attributes”

Some software organizations prefer to add explicit attributes to elements under test, both to make it easier for test automation to target those elements and to reduce the likelihood that innocuous changes invalidate selectors and cause false-positive failures. This practice is not required with Reflect, but if Reflect encounters an element with a “test attribute” it will generate a selector for that attribute and make it the first selector that is used (see the example after this list). Reflect will consider any of the following attributes to be test attributes:

  • data-test
  • data-testid
  • data-test-id
  • data-cy
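For example (the markup and attribute value below are hypothetical), an element carrying a test attribute yields an attribute-based selector that is tried before any other:

    // Hypothetical example: a button annotated with a test attribute, and the
    // selector derived from it that would sit first in the selector list.
    const markup = '<button data-testid="submit-order" class="btn">Place order</button>';
    const firstSelector = '[data-testid="submit-order"]';

    document.body.innerHTML = markup;
    console.log(document.querySelector(firstSelector)?.textContent); // "Place order"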

An Alternative to Hard-coded “Waits”

One practice in test automation which is widely adopted yet widely considered an anti-pattern is adding explicit waits inside a test. Reflect does not support the concept of an explicit wait, and for good reason: explicit waits make tests more non-deterministic and prone to failure. In place of an explicit “wait”, we recommend adding a Visual Observe step. By adding an Observe step, you are essentially telling Reflect to wait until that element appears on the page, and to fail the test if the element does not appear. This is a great way to validate that a long-running operation has completed, or to guard against performance issues on your site which could cause tests to fail non-deterministically.
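To see why an element-based wait beats a fixed delay, here is a rough sketch (not Reflect’s implementation; the selector, polling interval, and timeout are arbitrary):

    // Anti-pattern: a fixed delay is either too short (flaky) or too long (slow).
    //   await new Promise((resolve) => setTimeout(resolve, 5000));

    // Element-based wait: succeed as soon as the element appears, and fail only
    // if it never appears within the timeout.
    async function waitForElement(selector: string, timeoutMs = 30_000): Promise<Element> {
      const deadline = Date.now() + timeoutMs;
      while (Date.now() < deadline) {
        const element = document.querySelector(selector);
        if (element) return element;                               // the long-running operation finished
        await new Promise((resolve) => setTimeout(resolve, 250));  // poll every 250ms
      }
      throw new Error(`"${selector}" did not appear within ${timeoutMs}ms`);
    }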

Scroll into View

Often the height (or width) of a page will change over time. Since Reflect captures and replays user-initiated scrolls, this can cause elements that you interact with to be misaligned or even out of view. Reflect can automatically detect this situation and scroll these elements into view. This feature, scroll into view, is enabled by default on all relevant test steps, but you can toggle it on or off in Selector Options.
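Conceptually, the behavior looks something like this simplified sketch (not Reflect’s actual implementation):

    // If the recorded scroll position no longer lines up with the target element,
    // bring the element back into the viewport before interacting with it.
    function ensureInView(element: HTMLElement): void {
      const rect = element.getBoundingClientRect();
      const inView = rect.top >= 0 && rect.bottom <= window.innerHeight;
      if (!inView) {
        element.scrollIntoView({ block: 'center' });  // re-center the element vertically
      }
    }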
