Introduction
First, let’s clarify what creates value for the customer: an interface that works flawlessly, because that interface is how users interact with your company. In production, this is a basic requirement; during development, it often determines priorities. And no one wants new bugs to appear in features that have already been tested and delivered (known as regression bugs), since these cause unplanned costs and delays.
How do you achieve this? Through testing; that much is standard practice by now. The next step, from the customer’s point of view, is to automate that testing. If testing is automated, we can find more bugs with less effort, right?
Short-term: not true.
Long-term: yes.
Let’s say Automation Anthony joins the project and waits for his first assignment. From the client’s perspective, value comes from a working interface, so they hand him tasks accordingly: they want real processes tested through the interface, and that is what they want automated first. This is a reasonable expectation.
What Does the Specialist Say?
He’ll say that UI automation is time-consuming and, to be effective, needs to be built on a comprehensive framework. Setting this up takes time and may affect the development process. Instead, the specialist would rather automate the APIs.
But if the client doesn’t perceive that as valuable, why would they ask for it?
Let’s Examine UI Test Automation
When a function’s specification is completed, ideally the developer and the tester start working in parallel. The tester begins planning manual test cases, creating test data, and defining test conditions. This means that once the function is deployed to the test environment, the tester already has a defined plan to follow—only the time needed to execute tests stands between deployment and identifying bugs.
Is this also true for automated tests?
Yes—and no.
Specifications often lack all the information needed for creating UI automation scripts—many details are finalized during development. So in practice, while test cases may be ready, actual automation can only start after development and deployment. And time is the key factor—automating takes time, meaning UI automation lags behind manual testing and is less useful for catching regression during development.
In short: the longer it takes to automate a test, the later it can be run, and the later it contributes to improving software quality.
Instead of UI Tests… What Do We Recommend?
Don’t start by automating the UI—automate APIs instead. API test automation is faster, cheaper, and still covers most of the business logic.
Sounds too good to be true? Let’s look deeper.
What is an API?
An API is the intermediary between the frontend and the backend (the processing layer); this holds for REST, SOAP, and MQ alike. Essentially, all the data users enter through the UI is sent to the backend in API messages.
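For example, when a user fills in a sign-up form and clicks Save, the browser might send the backend a message along these lines (the endpoint and field names here are made up for illustration):

```
POST /api/users HTTP/1.1
Content-Type: application/json

{"name": "Anthony", "email": "anthony@example.com"}
```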
Does this mean API testing can replace UI testing?
Yes—and no.
In terms of data input and processes, APIs can provide near 100% coverage. However, some functionality exists only on the frontend: UI validations, animations (like tooltips on hover), and other visual interactions that have real business value.
Also, some business-critical logic is implemented only in the UI; for example, the regional options shown in a dropdown may depend on the selected country.
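As a rough sketch of what automating such UI-only logic looks like (the page, locators, and values are hypothetical; SeleniumLibrary and the standard Collections library are assumed):

```
*** Settings ***
Library    SeleniumLibrary
Library    Collections

*** Test Cases ***
Region Options Follow Selected Country
    # Hypothetical signup page with country and region dropdowns
    Open Browser    https://example.com/signup    chrome
    Select From List By Label    id:country    Germany
    # The region list is repopulated on the frontend after the country changes
    Wait Until Element Is Visible    id:region
    ${options}=    Get List Items    id:region
    List Should Contain Value    ${options}    Bavaria
    [Teardown]    Close Browser
```

A check like this has no API equivalent, because the filtering happens entirely in the frontend code.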
Example
Although some business logic is UI-only, why do we still recommend not prioritizing UI automation?
Because:
- Most business logic lives in the backend.
- UI automation is fragile, slow, and expensive.
Below are two test cases written in Robot Framework that test the same feature. One does it through the UI, the other directly through the API that the UI uses. Functionally, they are identical.
(Of course, these are simplified—real tests would include checks and validations.)
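For example, creating a new user through the UI versus through the API behind it might look like this (the URL, locators, and endpoint are illustrative; the UI test assumes SeleniumLibrary, the API test RequestsLibrary):

```
*** Settings ***
Library    SeleniumLibrary

*** Test Cases ***
Create User Via UI
    Open Browser    https://example.com/users/new    chrome
    # Each element must be visible (and the button clickable) before we touch it
    Wait Until Element Is Visible    id:user-name
    Input Text    id:user-name    Anthony
    Wait Until Element Is Visible    id:user-email
    Input Text    id:user-email    anthony@example.com
    Wait Until Element Is Enabled    id:save-button
    Click Button    id:save-button
    Wait Until Page Contains    User created
    [Teardown]    Close Browser
```

```
*** Settings ***
Library    RequestsLibrary

*** Test Cases ***
Create User Via API
    # The same feature, exercised directly through the endpoint the UI calls
    Create Session    app    https://example.com
    ${payload}=    Create Dictionary    name=Anthony    email=anthony@example.com
    ${response}=    POST On Session    app    /api/users    json=${payload}
    Status Should Be    201    ${response}
```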


Two things I would like to highlight.
First, the automated UI test is longer and contains more verification steps and instructions: we have to make sure a given element has appeared and is clickable before entering data into it. This already shows that creating such a test takes more time, and the second point underlines it even more.
If you look closely, every data entry and interaction (clicking, typing, and so on) involves an object identified by a so-called “locator,” which determines exactly where and how the automated test finds that object on the page. These locators almost never appear in specifications; the developer works them out during implementation. Finding them is therefore always a manual, lengthy, meticulous task. It becomes much easier if frontend developers build the interface with automation in mind: with a consistent naming pattern, this drawback can be minimized.
I didn’t want to prove my point by overcomplicating the UI test, so I based it on a page where each frontend component has a unique identifier and follows a consistent naming convention. If we pay attention to these two properties during interface implementation, it already greatly facilitates the work of the automation engineer, leading to faster and more stable automated tests.
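For instance, if every element id on the page follows one pattern, the locators become predictable and can be collected in one place, so a UI change only has to be followed up in a single file (the ids and the pattern are, again, hypothetical):

```
*** Variables ***
# Assumed convention: field ids are <entity>-<field>, button ids are <action>-button
${USER_NAME_FIELD}     id:user-name
${USER_EMAIL_FIELD}    id:user-email
${SAVE_BUTTON}         id:save-button
```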
However, even if we use consistent and thus relatively easy-to-identify locators, there is still a risk that any modification will “break” the test: it will no longer find an object, resulting in a false negative test result.
APIs change less frequently during development, but changes can still break API tests. The difference is that API specifications are almost always (in roughly 90% of cases) part of the documentation, so the tests can be updated directly from the specification as soon as it changes. Because of this, API tests can often be written in parallel with development, whereas for UI tests, experience shows we must wait for deployment to the test environment before we can accurately define the locators and write the automated test. This results in a time lag for UI test automation.
Comparison of Time Allocation
“Regular” Sprint
The first diagram illustrates the timeline of a classic sprint. After the specification phase, the sprint begins; test planning happens in parallel with implementation; after implementation come testing and bug fixing; and finally, the completed package is released.

Sprint with UI Test Automation
The second diagram includes UI automation as well. As we can see, since some of the information required for UI tests is only finalized during implementation, the creation of these tests can only begin later. As a result, the tests for the given sprint will only be completed and executed after the sprint has ended—meaning automated testing will lag behind by one sprint in such cases.

Sprint with API Automation
In the third image, we automate the API tests. Since all the necessary information is available from the specification, the automation engineer can start creating the tests in parallel with manual test planning, and they can be run right after implementation. Here I optimistically indicated that manual testers might still have some time left for brief exploratory regression testing, focusing on the UI elements that cannot be covered through the API, thereby reducing the number of regression defects.

What Do We Recommend?
- Always test new features manually. Automated testing does not replace this step.
- Prioritize automating API tests that cover business logic.
- Establish rules for specifications that facilitate UI test automation.
- Once these are in place, automate those UI tests where API tests do not cover the functions important to you.
Thank you for reading; I hope you found it useful.
#API #APItesting #testautomation