Automated UI tests on Android are hard because they have too many moving parts. The key to rock-solid automated tests is determinism.
What tests are we talking about?
These are big UI flows that traverse multiple activities through a series of complex, simulated user interactions. For the sake of discussion, let's imagine a sophisticated Android app: an Instagram clone unimaginatively called InstaClone.
Complex user actions like:
Signing up as a new user in the app
Posting a picture with tags
Sending a message in chat
Navigating to a user profile via search and following or blocking a user
Automated testing follows the Arrange-Act-Assert pattern
Arrange: here's where we set up the test and arrange all necessary preconditions.
Create state of the world as the test expects
With UI tests this can get very elaborate for an app of any sophistication (because of too many external dependencies).
Say, for an automated test that checks if the user can send an image in chat, the following must be arranged in the app:
the user must be signed into the app with a valid account
the receiver of the image must be in the user's list of contacts and visible in search
the user must have an image to send
the user must have internet connectivity
the receiver of the image must have internet connectivity for the sender to get the delivery receipt
This gets tricky as dependency injection has its limitations on Android. Access to Android components' constructors is restricted: the Android OS controls the instantiation of the Activity, Service, ContentProvider and Application classes.
Act: simulate a series of complex user interactions, traversing through app activities.
Assert: verify the different expected outcomes.
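As a sketch, the Arrange-Act-Assert shape of such a test looks like this. ChatService and its methods are hypothetical stand-ins for illustration, not InstaClone's real API:

```java
import java.util.List;

// Arrange-Act-Assert sketch. ChatService and its method names are
// hypothetical stand-ins, not a real app's API.
public class SendImageTest {

    // A trivial fake "app" whose behaviour the test controls completely.
    static class ChatService {
        private final List<String> contacts;
        ChatService(List<String> contacts) { this.contacts = contacts; }
        boolean sendImage(String toUser, String imagePath) {
            // Deterministic: succeeds only if the receiver is a known contact.
            return contacts.contains(toUser) && imagePath != null;
        }
    }

    public static boolean runTest() {
        // Arrange: put the world in the exact state the test expects.
        ChatService chat = new ChatService(List.of("alice", "bob"));

        // Act: simulate the user action under test.
        boolean delivered = chat.sendImage("alice", "/sdcard/cat.jpg");

        // Assert: check the outcome.
        return delivered;
    }
}
```

The point of the sketch is the Arrange step: because the test itself constructs the contact list, the precondition is guaranteed rather than hoped for.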
Testing types on Android
Let’s start by defining our vocabulary. Based on what they test, Android tests split into the following:
UI Tests: check if user action on app results in the appropriate UI response from the app.
End To End (E2E) Tests: involve the full system (app + network + backend) in the test. E2E tests verify that data reaches the client app and that the whole system functions correctly.
Strategy 1: Using End To End as UI Tests:
The app lies at the top of the stack. Events on the app pass through app --> network --> backend microservice mesh and back through the stack, so these tests check the integration of the entire system. The trouble is that E2E UI tests are flaky by default because of too many variables: network flakiness, authentication issues with real servers, inconsistent data in the backend DB, etc.
Such E2E UI tests (ideally) need:
An isolated server environment which is a clone of the production system
Ability to create data in the backend db as per what the tests expect through some golden requests to the backend APIs
Problems with End To End Tests as UI Tests:
Instinctively it seems like a good idea to base all your UI tests on End to End tests that work the whole system, as this most closely simulates user behaviour. Through pain and tears over the years I have realised that this approach is in fact an anti-pattern. The non-determinism of external dependencies ultimately dooms E2E UI tests.
Anyone who has ever built and run E2E UI tests of any sophistication has faced the following:
High flakiness because of too many moving parts
Staging backend server environment issues
Tests are super slow (as all calls take place over the network)
Tests are large
It's almost impossible to arrange some tests:
Suppose you want to simulate an edge case response from the server for a particular request the app sends. You can't.
Hard to debug. When a test fails you have to begin an investigation every time to say for sure what broke, so you can't really build automated escalation on test failure.
Such tests don't give reproducible results.
Authentication issues with the backend (auth issues with automated tests are especially notorious)
Massive E2E tests are a hangover from the monolith server architecture era. While there is a need for some tests that work the entire system, such E2E UI tests should be far fewer in number than your UI functional tests. Most of your E2E tests should live not at the UI but at your app's backend wrapper. This is a topic for another time.
These problems, caused by external dependencies, can be solved by adopting extensive hermetic testing for app tests as well as backend server tests.
Strategy 2: Hermetic UI Testing with Mock WebServer + Fake Data
The other strategy is to split the End To End tests into hermetic UI app tests and hermetic backend tests. The key to building fast, reliable automated tests is avoiding network access. As a rule of thumb, inject fake hermetic test data as close to what you are testing as possible. To be hermetic, control background operations: app background operations are non-deterministic, as network calls made by the app can take anything from a couple of seconds to minutes.
The hermetic test pattern states that each test must be completely independent and self-sufficient. Any dependency on other tests, or on third-party services that cannot be controlled, must be avoided at all costs.
So you refactor tests and build the following:
A MockWebServer with pre-canned responses to handle all network requests from the app
A Mock Location provider if your app needs location data
A request dispatcher for handling multiple edge case responses for the same request
Storage for all faked responses, either in data files or in in-memory implementations
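A minimal sketch of the last two pieces, a dispatcher plus in-memory canned responses, might look like this. The class name, paths and scenario names are my own inventions, not from any library:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an in-memory canned-response store with a dispatcher that can
// serve different edge-case responses for the same request path.
public class FakeResponseDispatcher {
    // Keyed by "path#scenario", e.g. "/feed#happy" or "/feed#server_error".
    private final Map<String, String> canned = new HashMap<>();
    private String scenario = "happy";

    public void enqueue(String path, String scenarioName, String body) {
        canned.put(path + "#" + scenarioName, body);
    }

    // Tests flip this to steer subsequent responses toward an edge case.
    public void setScenario(String scenarioName) { this.scenario = scenarioName; }

    public String dispatch(String path) {
        String body = canned.get(path + "#" + scenario);
        if (body == null) {
            throw new IllegalStateException("No canned response for " + path
                    + " in scenario '" + scenario + "'");
        }
        return body;
    }
}
```

In a real suite this logic would sit behind MockWebServer's dispatch hook, with the app's base URL pointed at the local server; a test calls setScenario("server_error") before exercising the flow it wants to break.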
Beware: Moving to this model commits you to having hermetic server tests.
A sweet compromise: Android Testing Pyramid
Don’t write any E2E tests as UI tests unless you absolutely have to
The bulk of your app tests should be app UI flow tests with mocked network and location.
Write 5-6 times as many app UI tests as E2E UI tests.
Build as many Unit Tests and Integration Tests as you possibly can. These run on the JVM unlike the instrumented tests above which run on Android.
External dependencies at each level of the pyramid:
Unit & Integration Tests
run locally on the JVM
all dependencies can be handled through dependency injection (e.g. Dagger)
App UI Tests
need the Android OS, so they must run on a device/emulator. This is a big overhead!
instrumentation tests are a lot slower than unit tests
all external dependencies, like backend servers and location, are mocked
mocked and faked dependencies allow app edge cases to be tested
E2E UI Tests
need all of the above + network + backend (microservice mesh) + external data backend to perform precisely as expected,
which is why they are flaky by default.
Prepare to get a lot of false positives here.
As you go up this pyramid:
Test flakiness increases
Test Execution time increases
Maintenance cost increases
Where to build most of your E2E tests then?
Not on the UI. Build your E2E tests on the backend wrapper the app uses to talk to the microservice mesh underneath. Extend the hermetic test pattern to the server.
More on this later. Watch this space.
How to build fast, robust UI Tests?
You can build fast, reliable, Android UI tests if you follow these 3 suggestions:
1.) Build tests on Android’s espresso framework:
`Espresso` and `UiAutomator` are UI test frameworks from Google, part of the Android Testing Support Library, that let you create automated UI tests for Android apps. Espresso test authors think in terms of what a user might do while interacting with the application: locating elements and interacting with them.
Android UI tests are instrumented tests. Unlike unit tests, which run on the JVM, instrumented tests run straight on the emulator/device. These tests have access to the instrumentation API, which allows replication of user behaviour via UI actions like click, swipe and long-press. This works because the instrumented test app runs in the same process as the app being tested. Instrumentation is instantiated before any of the app code, allowing it to see the interactions the system has with the app under test.
If you choose the wildly popular Appium framework, you are gonna have a bad time. Appium tests are inherently flaky. Espresso is not only much faster than Appium in execution time; critically, Espresso tests are more reliable because the framework handles the app's async calls elegantly. With Espresso, we have built suites of automated tests for sophisticated real-world apps without using any sleeps and with zero flakiness. This is impossible with Appium, which is why Appium is dead on arrival.
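For flavour, here is a sketch of what an Espresso interaction reads like. The view IDs and strings are hypothetical InstaClone names, and this code needs an Android emulator/device, so it won't run on a plain JVM:

```java
// Illustrative Espresso sketch only; requires an Android emulator/device.
// R.id.search_box / R.id.follow_button are hypothetical InstaClone view IDs.
import static androidx.test.espresso.Espresso.onView;
import static androidx.test.espresso.action.ViewActions.click;
import static androidx.test.espresso.action.ViewActions.typeText;
import static androidx.test.espresso.assertion.ViewAssertions.matches;
import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
import static androidx.test.espresso.matcher.ViewMatchers.withId;
import static androidx.test.espresso.matcher.ViewMatchers.withText;

public class FollowUserTest {
    public void followUserViaSearch() {
        // Locate elements the way a user would, then interact with them.
        onView(withId(R.id.search_box)).perform(typeText("alice"));
        onView(withText("alice")).perform(click());
        onView(withId(R.id.follow_button)).perform(click());
        // Espresso waits for the UI to be idle before asserting.
        onView(withText("Following")).check(matches(isDisplayed()));
    }
}
```

Note there are no sleeps anywhere: Espresso synchronises with the UI thread and idling resources before each action and assertion, which is exactly where its reliability over Appium comes from.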
2.) Mock your network responses
Say your app's UI has a button. When you press this button, an API call is made. In response to this API call, any of the following can come back:
A happy response with expected payload
A request time out
A 500 internal server error
Edge case response 1
Edge case response 2
Now for anything but the first one, your UI test fails because the assumptions your test relies on fail. UI tests need to be deterministic; without mocking the network layer, you lose that. So, build: a MockWebServer to intercept all network calls, and a mock response dispatcher with multiple variants of pre-canned responses. Square's excellent, open-source MockWebServer library makes this easy. Watch this space for a tutorial post.
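The real tool for this is Square's MockWebServer, but the idea can be sketched using only the JDK's built-in HttpServer as a stand-in. The path, payloads and scenario names below are made up for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A stand-in for MockWebServer using only the JDK: the same request path
// gets a different pre-canned response depending on the active scenario.
public class MiniMockServer {
    static volatile String scenario = "happy";

    public static int start() throws Exception {
        // Port 0 asks the OS for any free port.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/upload", exchange -> {
            // Same request, different canned responses per scenario.
            int status = scenario.equals("server_error") ? 500 : 200;
            byte[] body = (status == 500 ? "{\"error\":\"internal\"}"
                                         : "{\"ok\":true}").getBytes();
            exchange.sendResponseHeaders(status, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        return server.getAddress().getPort();
    }

    public static int callUpload(int port) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://localhost:" + port + "/api/upload")).build();
        HttpResponse<String> resp =
                client.send(req, HttpResponse.BodyHandlers.ofString());
        return resp.statusCode();
    }
}
```

In a real Espresso suite, the app's API base URL would be injected to point at this local server, and each test would select the scenario it wants to exercise before acting.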
2.1) Consequences of mocking
Once you have mocked the network layer for your UI tests, you get the following benefits:
As test reliability increases greatly and tests fail only when the app under test fails, you can finally build an automated test-failure escalation pipeline.
|test failure| ----programmatically----> |create a task on issue tracker|
Test App Edge Cases to increase test coverage
With mocking you can have automated tests for all edge case responses. You can turn the edge case responses into new test cases.
This is something that’s possible only when you are mocking all the responses for an automated test. Think about it: a manual tester on most days has no chance of testing the edge case responses, unless the tester is using an HTTP proxy tool (like Charles) to mock network responses, which is not something testers usually do every day.
Apps mostly break around edge cases, precisely because edge cases are harder to test. Not anymore: mocking allows all possible responses to be tested precisely.
Tests become blazing fast
This is a big win when you have your tests deeply integrated with the CI server in your deployment pipeline.
My sanity run time halved once I mocked the network layer.
With mocked network and location, my UI functional tests have become rock solid. The only time tests fail is when the app under test fails.
3.) Deeply integrate your tests in agile development pipeline for your app
It’s not enough to just have tests. The idea is to completely automate the testing and deployment pipelines at all points. This involves:
CI server triggers automated tests for:
every PR that's generated
any PR failing automated tests is automatically rejected till it's fixed
commits on important branches like master and develop
Additionally, have scheduled sanity runs for important branches that trigger daily.
Once you have rock-solid, fast hermetic UI tests that fail only when the app under test fails, you can programmatically build a failure escalation pipeline.
All failures must automatically create an issue in your bug/task tracker labeled Automation Bugs_underReview or something
Such issues can periodically be reviewed by the app developers
Also, any test failure must notify app devs via a Slack notification.
Make it seamless for App devs to run automated sanity on demand
Until automated sanity runs existed, manual app testing by QA was a black box to the app dev.
Automation must turn this around.
Build bash/python scripts that allow app devs to trigger an automated sanity run in the cloud from their command line.
Here’s my script that will do the following:
Build your Android app
Send it programmatically to Google's Firebase device lab
Run the tests on real devices and send the report back to the developer
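As a sketch, such a script might look like this, assuming the gcloud CLI is installed and authenticated for your Firebase project. The module paths and device spec are placeholder values:

```shell
#!/usr/bin/env bash
# Sketch: build the app + test APKs and run the instrumented suite on
# Firebase Test Lab. APK paths and the device spec are placeholders.
set -euo pipefail

# 1. Build the app APK and the instrumentation (androidTest) APK.
./gradlew assembleDebug assembleDebugAndroidTest

# 2. Ship both APKs to Firebase Test Lab and run on a real device;
#    gcloud streams results back and exits nonzero on test failure.
gcloud firebase test android run \
  --type instrumentation \
  --app app/build/outputs/apk/debug/app-debug.apk \
  --test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
  --device model=Pixel2,version=28,locale=en,orientation=portrait \
  --timeout 30m
```

Because the script exits nonzero on failure, the same command doubles as a CI step and as the developer's on-demand sanity trigger.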
Test Reports meta analysis
It might be a good idea to collect your test reports over the months for meta analysis
Doing this in a single place for tests from across the stack might help you ascertain release health.