Testing in the cloud with AWS
What is AWS anyway?
As demand for highly scalable and versatile applications grows, businesses have to expand their knowledge and adopt the newest technology to keep up with the market. Amazon Web Services (AWS) allows just about anyone with some computing knowledge to achieve this goal, in as sophisticated or as straightforward a way as they like. With multiple server farms across regions worldwide, the service offers scalability and a wide choice of server locations. Users of AWS pay only for the computing resources they use, and can draw on an array of solutions for storage, deployment, user management, analytics, and application services. Included in this expanding list of services is a little (actually huge) gem titled Device Farm, which is essentially Amazon's take on a cloud-based test device cluster. With an impressive device list covering all the major iOS, Android, and even Fire OS devices, it is immensely useful for debugging older or more obscure devices.

Hoping the cloud graces you with green ticks throughout your test runs
Competition and key players
AWS Device Farm falls under the familiar concept of cloud testing on infrastructure provided by a third party, and the industry is certainly no stranger to this. Google offers Firebase Test Lab, Microsoft acquired Xamarin in early 2016, and there is a growing list of services like Sauce Labs offering mobile testing in the cloud. In my personal view, what sets AWS apart from the competition is the integration with the other services it provides, which enables an end-to-end development and testing cycle, in theory never having to host or run anything locally.
Prime candidate app
It is important to choose the correct project for automation, both mobile and web, as you otherwise risk spending too much time writing automated tests and not actually performing the required volume of testing throughout the project.
We will be looking at CUSoar as our target application – the app is a perfect candidate for mobile automation testing as it is under ongoing development by a team of varying size, coupled with a seamless Jenkins continuous integration setup, which enables an almost hassle-free development and testing cycle.
As individual credit unions request extra features, or new AWS instances are spun up to cater for the growing membership base, regression testing becomes the focus of testing between sprints, and tools like Appium make the process a fairly easy and intuitive one, providing consistent feedback to developers and business stakeholders alike.

We rely on tools like Jenkins and Fastlane to make Continuous Deployment a seamless part of our process
Mobile automation at its finest
Introducing Appium
While AWS Device Farm supports multiple methods of interacting with your apps in the cloud, one of the most effective is a nifty implementation like the Appium framework. Originally based on Selenium, this clever piece of software finds a common language with your Android and iOS devices and allows inspection and manipulation of the app's components, all done live via the emulator you already use on a daily basis as part of your development. Supporting both native and hybrid apps, and letting you write tests in an array of languages (Ruby, Python, Java, JavaScript, C#, and PHP), Appium is most certainly one of the biggest players in the mobile automation field.
Diving head first into the setup
It is always a good idea to try things out locally, within a familiar environment, ensuring that when the process is replicated in the cloud you won't encounter any showstoppers. To execute tests locally you'll need:
- An APK for the target application
- Android Studio
- Android SDK
- A cup of coffee (optional, but recommended)
Throw some commands into a terminal
The GUI version of Appium can be downloaded via this link and opened by running the executable once the installation has finished; however, the command line approach is boatloads more fun. Go ahead and Ctrl-C Ctrl-V these commands into your console to achieve the same result:
npm install -g appium
followed by:
appium
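If you want a quick sanity check that the server is actually listening before pointing tests at it, something along these lines works – a minimal sketch assuming a local Appium 1.x server on the default port 4723 with the /wd/hub base path:

# check_appium.py - quick sanity check that the local Appium server is up.
# Assumes Appium 1.x on the default port 4723 with the /wd/hub base path.
import json
from urllib.request import urlopen

with urlopen('http://127.0.0.1:4723/wd/hub/status') as response:
    status = json.loads(response.read().decode('utf-8'))

# A healthy server responds with build/version details.
print(json.dumps(status, indent=2))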
The next step is having a detailed look into your APK's structure and extracting vital details to be used in Appium. Ensure that you've navigated into your Android SDK build-tools folder, and run the following command in your terminal:
aapt dump badging C:\Users\SwarmUser\Downloads\cusoar-app.apk
The results will display a bunch of information, including the permissions the app uses, which labels are to be used for international releases, and a heap of other technical data. What we're interested in, though, are a few lines, extracted below:
package: name='com.swarmonline.cusoar' versionCode='41' versionName='0.2.10' platformBuildVersionName='6.0-27'
sdkVersion:'18'
targetSdkVersion:'23'
launchable-activity: name='com.swarmonline.cusoar.MainActivity' label='CU Soar' icon=''
The four lines above give us all we need to switch over to Appium and plug in the configuration options.

All the exciting information you need to feed Appium
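If you would rather not copy these values out by hand, a small script can scrape them straight from the aapt output – a rough sketch that assumes Python 3, that aapt is on your PATH, and that reuses the example APK path from above:

# extract_caps.py - pull the package name and launchable activity out of
# `aapt dump badging` so they can be dropped into the desired capabilities.
import re
import subprocess

apk_path = r'C:\Users\SwarmUser\Downloads\cusoar-app.apk'  # example path from above
output = subprocess.run(['aapt', 'dump', 'badging', apk_path],
                        capture_output=True, text=True, check=True).stdout

package = re.search(r"package: name='([^']+)'", output).group(1)
activity = re.search(r"launchable-activity: name='([^']+)'", output).group(1)

print('appPackage:', package)    # com.swarmonline.cusoar
print('appActivity:', activity)  # com.swarmonline.cusoar.MainActivity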
Writing the tests
Targeting your Android app
Open your favourite text editor (Atom is particularly popular here at SwarmOnline) and create a new file (e.g. appium_test.py) – this Python script will contain our test suite. I have chosen Python as the language of choice here, as I have prior experience with it and I love the syntax; however, Appium supports other languages too if you prefer. Let's break it down line by line:
Import the required modules; if there are any complaints during runtime, install the missing modules using pip install <module>.
# appium_test.py
import os
import unittest
from appium import webdriver
from time import sleep
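The webdriver import above comes from the Appium Python client, which is published on PyPI as Appium-Python-Client, so the most likely install you'll need is:

pip install Appium-Python-Client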
Create a new class to hold our functions, the first of which performs the initial setup of Appium by setting the desired capabilities and letting Appium know a bit about your app. The snippet below is only to be used when running locally; when running in the cloud, you need to use the cut-down version displayed underneath the next snippet.
class CUSoarAndroidTests(unittest.TestCase):
    "Class to run tests against CUSoar mobile app"

    def setUp(self):
        "Setup for the test"
        desired_caps = {}
        desired_caps['platformName'] = 'Android'
        desired_caps['platformVersion'] = '6.0'
        desired_caps['deviceName'] = 'Android Emulator'
        # Returns abs path relative to this file and not cwd
        desired_caps['app'] = os.path.abspath(os.path.join(os.path.dirname(__file__), 'C:/Users/SwarmUser/Downloads/cusoar-app.apk'))
        desired_caps['appPackage'] = 'com.swarmonline.cusoar'
        desired_caps['appActivity'] = 'com.swarmonline.cusoar.MainActivity'
        self.driver = webdriver.Remote('http://localhost:4723/wd/hub', desired_caps)
desired_caps = {}
self.driver = webdriver.Remote(
    'http://127.0.0.1:4723/wd/hub', desired_caps)
After the initial setup, define a new function that will contain the instructions for your first test case. In the code snippet below, we give the WebDriver 5 seconds for our app to fully render using sleep(5), followed by locating the textbox class and firing across the member number. Using the handy Appium Inspector, you are able to break the views down to a very granular level and grab an array of all TextView components, which allows for clicking an individual one as shown on line 8 below. You'll start to notice that we constantly have to slow Appium down by a few seconds, otherwise it won't be able to locate the elements in the views.
The next several lines utilise Android's helpful UiSelector class to locate and tap the individual keypad buttons. The view is then given a short 2-second pause to successfully log us in. We grab an array of the TextView components and extract the third item, which is the name of the first account. We use this information to confirm that the login has indeed been successful and the user has access to the Share 1 account.
def testLogin(self):
    "Confirm user is able to perform a successful login"
    sleep(5)
    element = self.driver.find_elements_by_class_name('android.widget.EditText')
    element[0].click()
    element[0].send_keys("112996")
    loginButton = self.driver.find_elements_by_class_name('android.widget.TextView')
    loginButton[1].click()
    sleep(8)
    el1 = self.driver.find_element_by_android_uiautomator('new UiSelector().text("1")')
    el1.click()
    el1.click()
    el3 = self.driver.find_element_by_android_uiautomator('new UiSelector().text("2")')
    el3.click()
    el4 = self.driver.find_element_by_android_uiautomator('new UiSelector().text("9")')
    el4.click()
    el4.click()
    el6 = self.driver.find_element_by_android_uiautomator('new UiSelector().text("6")')
    el6.click()
    sleep(2)
    accountElement = self.driver.find_elements_by_class_name('android.widget.TextView')
    accountName = accountElement[2].get_attribute("text")
    self.assertEqual("Share 1", accountName)
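As an aside, if the hard-coded sleeps ever prove flaky, the same thing can be achieved with an explicit wait that polls for the element instead – a sketch of a fragment that would sit inside the test method, relying on the selenium package the Appium Python client is built on:

# Poll for up to 10 seconds instead of sleeping for a fixed period.
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

member_field = WebDriverWait(self.driver, 10).until(
    EC.presence_of_element_located((By.CLASS_NAME, 'android.widget.EditText'))
)
member_field.send_keys("112996")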
Specifically when executing your tests on AWS, you can use the following code to take screenshots at any point during the test execution, and Device Farm will automatically pick these up and display them at the end of the test run.
screenshot_folder = os.getenv('SCREENSHOT_PATH', '/tmp')
self.driver.save_screenshot(screenshot_folder + '/devicefarm.png')
Tidying up after your tests have run is an essential part of writing repeatable and scalable test cases, and this time is no different – in addition to closing the WebDriver instance, you can also uninstall the application if required as part of your test suite.
def tearDown(self):
    "Tear down the test"
    self.driver.quit()
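If you do want the teardown to remove the app as well, the Appium client exposes a remove_app call for this – a sketch below, with the package name taken from the capabilities above:

def tearDown(self):
    "Tear down the test and uninstall the app"
    # Optional: remove the app so each run starts from a clean install.
    self.driver.remove_app('com.swarmonline.cusoar')
    self.driver.quit()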
Finish the script off by running the test suite as per the code snippet below.
if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(CUSoarAndroidTests)
    unittest.TextTestRunner(verbosity=2).run(suite)
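With everything in place, and the Appium server plus an emulator running, the suite can then be kicked off locally with a plain Python invocation:

python appium_test.py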
Any further test cases you write go into separate functions, and with coherent test case naming and descriptions you should be able to keep track of a fairly large file of test cases. In terms of organisation, each file may be grouped into a test suite in whatever order you see fit.
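For instance, if the file grows to hold more than one test case class, the runner block can combine them into a single suite – the CUSoarAccountTests class below is purely hypothetical:

if __name__ == '__main__':
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(CUSoarAndroidTests))
    suite.addTests(loader.loadTestsFromTestCase(CUSoarAccountTests))  # hypothetical second class
    unittest.TextTestRunner(verbosity=2).run(suite)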
Briefly about Inspector
Appium comes bundled with an inspection tool, which allows you to view the components of your app and how it has been put together. You can use this to locate elements, and to find how deep within an array a particular element may sit.

Having a look at the application elements
Pushing to the device farm
Setting up a new project
Log in to your AWS Management Console, navigate to Device Farm, and create a new project:

Creating a new project on AWS Device Farm
On the next screen, locate and click the button to create a new test run and upload your APK file:

Uploading your APK
Make sure to select Appium Python as your chosen test type and the correct Appium version, then upload your zipped tests (more on this further down):

Selecting test types and uploading your zipped tests
Now comes the exciting part of creating your own device pool – we have opted for a range of the most popular Android devices on the market:

Choosing your device pool to test against
Steps four and five are optional extras you can add, such as configuring the device's network state and installing other applications alongside your app. Review and confirm the test run on the last step, taking into consideration how long your tests take to run and adjusting the timeout slider to compensate.

Setting an execution timeout so you don’t exceed your quota
Cleaning up and bundling your test suite
When you come to setting up your test suite, AWS Device Farm requires the test cases to be bundled up in a very specific way; this ensures no extra dependencies are included and the automation service on their end knows where to look for and execute your test cases. Amazon has included a variety of guides to help developers and quality assurance personnel bundle their test cases correctly – you can access the Appium Python test package guide here. After you have successfully packaged up your test cases and required dependencies, your workspace should resemble the following:
─ workspace
  ├─ tests/
  ├─ test_bundle.zip
  ├─ requirements.txt
  └─ wheelhouse/
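The exact commands are spelled out in Amazon's guide, but roughly speaking the dependencies are frozen into requirements.txt, wheels are built into the wheelhouse folder, and everything is zipped up together – something along these lines, assuming pip and zip are available in your workspace:

pip freeze > requirements.txt
pip wheel --wheel-dir wheelhouse -r requirements.txt
zip -r test_bundle.zip tests/ wheelhouse/ requirements.txt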
Once you are all set, click on Confirm and Start run, which pushes your tests into the cloud and executes them on your selected device pool.
Reviewing results
After the service executes all your test scripts on all the chosen devices, you will be notified that the test run has concluded and should see an overview results screen similar to the one below:

A brief overview of your test run
Clicking into this test run brings you to a section containing all your individual test cases executed on each device, and an indication of how successful the session turned out to be. The screenshot below shows the sole test case passing on alternative devices after we increased the wait time in our test case so that slower devices can catch up.

Results shown for each device taken from a previous test run
Navigating into an individual test device also allows you to view the screenshots, which are saved automatically at the points you specified within your test cases. This can be very useful for spotting minor design or layout issues on particular devices that you might not otherwise have access to. In the example test run below, you can see that the view is slightly pushed up due to the virtual action bar on some devices.

Difference in devices with physical and virtual action buttons
The results screen also allows you to view the stack trace of all events happening on each device; with Android this is usually the output of a verbose logcat command, meaning it will need to be heavily filtered if you want to narrow down the issue and gather useful metrics.
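If you download the logcat output for a device, even a few lines of Python go a long way towards cutting it down – the filename and filter terms below are purely illustrative assumptions:

# filter_logcat.py - strip a downloaded logcat file down to error lines for our app.
with open('logcat.txt', encoding='utf-8', errors='ignore') as log:
    for line in log:
        # Keep only error-level lines that mention the app's package name.
        if ' E ' in line and 'com.swarmonline.cusoar' in line:
            print(line.rstrip())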
What have we learned?
In conclusion
Using a service like AWS Device Farm clearly has benefits, but it is also a major investment in time, resources, and cost – especially if you target a very large pool of devices. Automated testing at this scale has the advantage of never having to touch a single physical device, letting the cloud handle all of that for you. As an organisation, you need to be actively developing and maintaining a product over a long period of time to get the maximum benefit from testing at this scale, and that is not always feasible. You can most certainly go the extra mile and integrate a cloud service like this into your internal Continuous Deployment pipeline, where Jenkins or Travis pushes the tests up to AWS Device Farm automatically at the end of each release cycle and feeds back results for the next iteration of fixes. Appium supports iOS testing across Apple devices too, but that is out of scope for this post, and is significantly more difficult to set up and configure.
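Coming back to the CI idea for a moment, the sketch below gives a flavour of what that integration could look like, using boto3's Device Farm client to kick off a run against an already-uploaded app and test package. All of the ARNs are placeholders, and in practice your pipeline would upload fresh artifacts first via create_upload:

# schedule_devicefarm_run.py - a rough sketch of triggering a Device Farm run from CI.
# The ARNs below are placeholders; Device Farm lives in the us-west-2 region.
import boto3

client = boto3.client('devicefarm', region_name='us-west-2')

run = client.schedule_run(
    projectArn='arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE',
    appArn='arn:aws:devicefarm:us-west-2:123456789012:upload:APP-EXAMPLE',
    devicePoolArn='arn:aws:devicefarm:us-west-2:123456789012:devicepool:POOL-EXAMPLE',
    name='cusoar-nightly-regression',
    test={
        'type': 'APPIUM_PYTHON',
        'testPackageArn': 'arn:aws:devicefarm:us-west-2:123456789012:upload:TESTS-EXAMPLE',
    },
)

print(run['run']['arn'], run['run']['status'])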