Automated Screen Testing for DevOps Success


My colleague Zvonimir Ivanetic recently noted in TECHniques Issue 1, 2018 (“DevOps for the Mainframe”) and in the Enterprise Tech Journal 2018 Issue 1 article “Win-Win: Mainframe and DevOps” (http://ourdigitalmags.com/publication/?i=484809): “Continuous testing is required for DevOps success. Faced with increasingly complex applications delivered at dramatically faster speed, software testers have the potential to be the bottleneck that determines if a DevOps initiative fails or succeeds. To implement full DevOps, the testing process must be automated and transformed to happen continuously.”

In this article, I’d like to dive deep into why automated screen testing for mainframe applications is a vital element of DevOps success and introduce the new Screen Test Automation capability in Software AG’s wM ApplinX.

The role of testing in DevOps

DevOps is about delivering better software faster to meet end users’ needs. Automated testing is instrumental to ensuring software quality and is needed to overcome the significant pitfalls of manual testing. Manual tests are plagued by high costs and lengthy testing time. Test cases are typically not documented and, with the retirement age of many programmers rapidly approaching, the ability to build on past results is rapidly eroding. Manual results tend to be error prone, and both code coverage and test coverage are low.

ApplinX’s Screen Test Automation feature automates testing and thus dramatically reduces the level of effort required to ensure application functionality. By testing at the User Interface (UI) level, ApplinX goes beyond component testing to ensure that the application is working correctly from the end user’s perspective. Thus, the tests also cover the UI code (typically 5 to 10 percent of the application code) containing the rules and screen navigation that call tested business objects. Unless you test at the screen level, the software components (for example, those tested in NaturalONE) may work, but no one knows whether the application is working correctly for the user.

Screen-based test automation for business functions

There are a vast number of test tools on the market that test at different levels and for different goals. Some only compare characters on the screen to a standard visual. ApplinX's Screen Test Automation feature tests what happens on the screen at the business level for any programming language (e.g., Natural, COBOL, RPG, Fortran, PL/I, Assembler) over the terminal protocols listed in Table 1.

Table 1: Supported terminal protocols

Device type        Protocol
IBM® 3278          TN3270, TN3270E
AS/400             TN5250, TN5250E
BS2000             TN9750
TANDEM             TN6530
FUJITSU®           TN6680
(Video) Terminal   VT100, VT200, VT220-7, VT220-8

ApplinX's Screen Test Automation allows you to:

  • Record the flow of green-screen-based business functions and auto-convert them into unit tests
  • Automatically turn existing Path/Flow procedures into JUnit tests
  • Build executable test cases that can be shared across teams and environments
  • Record the tests once and provide test assertions—files that contain the input and output parameters that allow you to run a test using different sets of inputs to test all possible combinations
  • Run tests quickly and visibly see results
  • Integrate your tests into a build server and run them with every new build (DevOps)

Let me demonstrate with a Human Resources (HR) application example. In the HR app, I go to certain screens and enter the input data for adding a new employee. Then, at the end, I check to see if the employee has been entered correctly into the system. To test this, I store the input parameters and the output parameters in a separate test data file. The same test can run multiple times against multiple test data files.

ApplinX simulates the business user by feeding input data into the business objects and then comparing the results of the business function against the expected output data. The decision to focus only on input and output data in the test was driven specifically by customer requirements. If every character on the screen were to be compared (i.e., tested), the UI definitions would need to be very detailed. By testing only what is important—input and output—we save a tremendous amount of time and cost by eliminating the need to detail entire screens prior to test runs. This approach lets the user of the tool determine what is important. With automated screen-based testing, you will spend less time on routine tasks and release software faster.
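This input/output approach can be sketched in a few lines. The following Python snippet is a conceptual illustration only, not the ApplinX API: the `add_employee` function and its field names are hypothetical stand-ins for a recorded screen flow. The point is that each test-data record names only the fields that matter, and the comparison ignores everything else on the "screen."

```python
# Conceptual sketch of input/output screen testing (NOT the ApplinX API).
# Each test-data record holds only the fields that matter: the inputs to
# feed in, and the expected outputs to assert on.

def add_employee(screen_inputs):
    """Hypothetical business function behind the 'Add Employee' screens."""
    # Simulate the host application: return a confirmation screen.
    return {
        "status": "Employee added",
        "name": screen_inputs["name"],
        "department": screen_inputs["department"],
    }

test_data = [
    {"inputs": {"name": "Smith", "department": "HR"},
     "expected": {"status": "Employee added", "name": "Smith"}},
    {"inputs": {"name": "Jones", "department": "IT"},
     "expected": {"status": "Employee added", "name": "Jones"}},
]

def run_tests(data):
    """Run every data set through the same flow; fail on any mismatch."""
    for record in data:
        actual = add_employee(record["inputs"])
        # Compare only the asserted fields, not the whole screen.
        for field, expected_value in record["expected"].items():
            assert actual[field] == expected_value, (
                f"{field}: expected {expected_value!r}, got {actual[field]!r}")
    return len(data)

print(run_tests(test_data))  # → 2 (both data sets passed)
```

Because the expected values are data rather than code, adding another combination of inputs means adding one more record, not writing another test.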

Automate once and done

When new functionality is introduced into an application, you cannot get away with only testing that new element. You run a tremendous risk of not knowing what impact that one change may have on the remainder of your code. To increase your test coverage, you need to retest all of the previous code changes as well. This may require you to run thousands of tests before you can be truly certain that the system maintains the same quality and stability it had before a new change is introduced.

Doing this manually is expensive and error prone. In the past, people typically retested again and again whenever new functions were added. This repetition gets boring, and the process becomes error prone: testers may overlook that a certain result of the business function is incorrect—they just don’t see it. Today, you may not even be able to replicate those previous tests, as they likely were not documented and the experts who ran them may have retired or moved on to other organizations.

With manual testing, we just don’t have the capacity to re-run all of the tests done in the past. This is where the value of automation comes into play. If I can prove that all of the tests that were done (manually) previously are still working, then I can prove the application is still in a stable state after the latest change.
Through automation, you can simply reuse the tests created in the past and re-run for near total code coverage.


Fig 1: Compare the results

The idea behind automated testing is that a test is set up once, when new functionality is introduced (see Figure 1). Automating the test “cans” it, meaning it is preserved and can be re-run in a fully automated environment. This is the main purpose of automating tests: to be certain that one small change has not broken the system.

There are a couple of ways to ensure automated testing works for your enterprise. If you are satisfied with using static tests for certain functions, simply store the input/output parameters in JSON™, Microsoft® Excel® or CSV format. By using a source control system such as Git, you can keep all versions together with their test data. The application code is checked into the source control repository, and the test assertions are checked in alongside the same application code version.
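As a rough illustration of the static approach, the assertions might live in a JSON file that is versioned in the same repository as the application code. The file layout below is purely hypothetical (ApplinX's actual file format may differ); the sketch just shows how cleanly such data loads and iterates.

```python
# Sketch: static test assertions stored as JSON and versioned in Git
# alongside the application code. The file layout here is hypothetical,
# not ApplinX's actual format.
import json

# In practice this string would be the contents of a file such as
# add_employee_assertions.json, checked in with the matching code version.
assertions_json = """
[
  {"inputs":   {"name": "Smith", "department": "HR"},
   "expected": {"status": "Employee added"}},
  {"inputs":   {"name": "Jones", "department": "IT"},
   "expected": {"status": "Employee added"}}
]
"""

records = json.loads(assertions_json)
for record in records:
    print(record["inputs"]["name"], "->", record["expected"]["status"])
```

Checking the assertion file into the same commit as the code it tests means any historical version of the application can be checked out and re-verified with the test data that belonged to it.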

Another option to using static “test assertions” is to compare the results directly with data from a database or web service. This approach makes sense when rules change frequently or the application lives in a world where change is the default. For example, by keeping test data separate from the tests, you can change the data (e.g., VAT values) without having to change the test itself.
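The dynamic variant can be sketched as follows. Here `current_vat_rate` is a hypothetical stub standing in for a database query or web-service call, and `gross_price` stands in for the business function under test; neither is part of ApplinX. Because the expected value is looked up at run time, a VAT change requires no change to the test.

```python
# Sketch: compare results against live reference data instead of a
# static assertion file. current_vat_rate() stands in for a database
# query or web-service call; both functions are purely hypothetical.

def current_vat_rate(country):
    """Stub for a lookup against a database or web service."""
    rates = {"DE": 0.19, "FR": 0.20}  # example data only
    return rates[country]

def gross_price(net, country):
    """Hypothetical business function under test."""
    return round(net * (1 + current_vat_rate(country)), 2)

# The test fetches the expected VAT at run time, so a rate change
# updates the expectation automatically.
expected = round(100 * (1 + current_vat_rate("DE")), 2)
assert gross_price(100, "DE") == expected
print(expected)  # → 119.0
```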

Creating a test bed of scenarios goes a long way toward securing testing know-how, which is typically held in the heads of the people who run the tests by heart, and it gives your testers more time for exploratory testing. Automated testing is not about replacing testers; it is about not wasting their time on routine tasks and about securing institutional knowledge.

How does it work?

Our goal with developing this new Screen Test Automation feature was to allow users to build portable, executable test cases that can be shared across teams and environments by recording green-screen applications and auto-converting them into unit tests. It records the tests once and provides input files to run the tests using different sets of inputs to test all possible combinations. Since the tests are in Eclipse™, they can be run quickly and the results are immediately visible. Here is how it works.

Step 1 – Start a terminal session
Start your terminal session within the ApplinX Designer. Connect to the host application using a direct TCP/IP connection, just like every ApplinX application.


Fig 2: Start a terminal session and record the screen flow.

Step 2 – Record the screen flow
Record the screen flow you want to turn into a unit test. Capture the screens and input fields used in the navigation flow.


Fig 3: Enter the data you want to use as inputs

Step 3 – Enter data
Identify the data you want to use as “inputs” (which can be dynamic) in your test case. ApplinX lets you easily select the fields to be used as “inputs” from a suggested list of potential fields already marked in your application, based on your field identification.


Fig 4: Select the fields to be used as "assertions" out of a suggested list.

Step 4 – Record the output
Identify the data you want to use as “assertions.” Easily select the fields to be used as “assertions” from a suggested list of potential fields already marked in your application, based on your field identification.


Fig 5: Compare the results.

Step 5 – Compare the results
After the test case has been created, you can generate it as a unit test, then execute it to compare the runtime data received against the expected result data. ApplinX lets you create multiple sets of inputs to be compared with different sets of expected results, all run in the same test case multiple times. It also integrates with your Integrated Development Environment (IDE) and build environments such as Jenkins®.
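Conceptually, the generated unit test runs the same recorded flow once per data set. The sketch below is a rough Python/unittest analogue of the JUnit test shape described above; `lookup_employee` is a stub standing in for the recorded screen navigation, and nothing here is ApplinX's actual generated code.

```python
# Rough unittest analogue of a generated screen test: one test case,
# several input/expected data sets, each executed as a sub-test.
# lookup_employee() is a stub for the recorded flow (NOT the ApplinX API).
import unittest

def lookup_employee(emp_id):
    """Stub standing in for the recorded screen navigation."""
    directory = {"1001": "Smith", "1002": "Jones"}
    return {"name": directory[emp_id]}

DATA_SETS = [
    {"inputs": {"emp_id": "1001"}, "expected": {"name": "Smith"}},
    {"inputs": {"emp_id": "1002"}, "expected": {"name": "Jones"}},
]

class EmployeeScreenTest(unittest.TestCase):
    def test_lookup(self):
        # Same test case, multiple data sets, as described in Step 5.
        for data in DATA_SETS:
            with self.subTest(inputs=data["inputs"]):
                screen = lookup_employee(data["inputs"]["emp_id"])
                for field, value in data["expected"].items():
                    self.assertEqual(screen[field], value)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(EmployeeScreenTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

A test structured this way runs identically from the IDE and from a build server such as Jenkins, which is what makes it usable in a continuous-integration pipeline.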

Benefits of automated screen testing

Testing is an important part of developing mission-critical applications. Don’t just test what you’ve implemented; remember that many other areas can be impacted by your change. Be sure to test all of your application. Thanks to automated testing, it is no longer too expensive to make testing part of your development process.

The benefits of automated screen testing are numerous. Whether you are addressing small code changes for business or regulatory reasons, moderate changes while modernizing your applications, or significant changes when re-hosting your application to a new platform, relying on automated testing will help you achieve your goals within a reasonable cost and timeframe. With ApplinX's Screen Test Automation, integrated with DevOps, you can:

  • Protect your legacy systems from regressions and defects
  • Cut costs—spend less time doing testing
  • Improve quality
  • Preserve knowledge
  • Reuse tests during and after re-hosting
  • Integrate with build servers and DevOps tools

DevOps is really about getting software out to the end user fast and right. After the software is developed, integration testing is, in the best case, fully automated. Automated testing that strikes the proper balance between ensuring functions operate as required and verifying that user output is correctly delivered is vital to your overall DevOps success.

Try ApplinX's new Screen Test Automation feature

Try Software AG’s wM ApplinX Screen Test capability to close the gap between Agile development, Continuous Integration and Continuous Delivery of code caused by manual tests of screen-based business functions. ApplinX automates testing, allowing new business functions to be tested through the screen interface by providing input data to the business function and then checking the results.

To evaluate ApplinX, simply ask your Software AG representative for a test contract. This new Screen Test Automation capability is free to use for any existing ApplinX customer, for both API and Web enablement, and can be installed with the Software AG Installer (available in Empower).