Postman Testrunner Framework

work in progress ...

text from ScratchPad

The latest project I've enjoyed working on has been learning how to use Postman and developing a framework that lets us take an existing Postman collection and apply it to a new business need. Our existing API automation suite is closely tied to a particular data set available only in a mocked environment. However, the business also needed to be able to run some form of automation suite in integrated environments. It was decided that we would use Postman to create solutions for performing the positive and alternate test flows in the integrated test environments.

I started a personal project to see if I could leverage the extensive Postman collection we already had for functional testing of the API. I was pleased to find I could create a framework that allowed us to store, and easily switch between, different settings for users, financial institutions and environments. This eliminated the close association the existing automation suite had with the mocked data set. I was then able to create a mechanism by which we could use an external data file to specify the API calls necessary to perform a list of use-case scenarios as individual user sessions. Incidentally, we were also able to implement a simple mechanism for the code reuse that Postman lacked.

This simple proof of concept was extended by other team members to add reporting customised to our needs, and to run periodically from our TFS build server. We now have a dashboard of test results spanning many environments and many clients' particular configurations. Our initial implementation was added to one Postman collection, but recently another team took the framework and applied it to their collection as well, which showed the benefit of designing the framework so that it could be retrofitted to any collection, one of the goals I was attempting to achieve. I'm sorry I'm not in a position to share any specific links to the work, as it's proprietary to Fiserv, but I am thinking about writing a LinkedIn article about it in general terms.


Overview

The Postman Testrunner Framework was born out of a need to test the API serving our mobile banking apps in a large, integrated test environment, over which we have little control.
Our API platform acts as an aggregator of several core online banking systems (OLBs), each serving multiple financial institutions (FIs), and each with its own data/interface contract.
The integrated environment is used by hundreds of staff across the company, and the data setup changes constantly. Testing here is never deterministic; it has to be opportunistic, and it needs to respond appropriately to any number of situations.
To monitor the environment's health we needed a simple check of API functionality, capable of exercising all integration paths across the range of OLB data interfaces and FI/user configurations. It had to be flexible enough to run for a range of users, FIs, and OLBs, as well as for different deployed instances of our platform, and finally to handle the range of dynamic responses possible in such a fluid environment.
To start with, we captured and observed the API calls made by the mobile app (using Fiddler, Burp Suite, and MITM Proxy) and tried to design a Postman solution that emulates the app's behaviour.
To initiate a check, we select a platform instance, a code for the FI, and a user agent for the device, and then enter the user's credentials. Thereafter, the information required for subsequent scenarios must be obtained from prior calls; for example, to test a transfer scenario you first need to obtain a list of the user's accounts.
Our development teams have been using Postman for over a year and have built up a collection of 100+ endpoints and requests. Many requests are furnished with helpful test scripts that extract data from the response and save it to the Postman global/environment variables. The collection is organised into feature folders and alphabetised to facilitate interactive functional testing of the platform API. However, the developer/test analyst must know the sequence of calls needed to start a session before they can perform any feature testing.
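
As a sketch of the kind of test script attached to a request (the response fields and variable names are illustrative, not taken from our collection):

    // Test script on a login request: verify the status, then stash data
    // from the response for later requests to use.
    pm.test("login succeeded", function () {
        pm.response.to.have.status(200);
    });
    const body = pm.response.json();
    pm.environment.set("sessionToken", body.token);    // assumed response field
    pm.globals.set("customerId", body.customer.id);    // assumed response field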
This collection is actively maintained and versioned with pull requests and reviews in a Git repo. It is a really wonderful resource, and this project tries to leverage its value by implementing a framework that can orchestrate the correct sequence of API requests to automate common functional (API) scenarios.
The Postman Testrunner Framework (PTF) uses an external data file to specify a sequence of steps called userActions. A userAction executes a request from the underlying collection and then has a list of handlers for the possible response codes. Response handlers are little snippets of code that determine the next userAction to perform. When a response handler specifies no next userAction, execution moves to the next userAction in the external data file, until the scenario is complete. The PTF is, in effect, a simple state machine.
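
Conceptually, the dispatch rests on Postman's setNextRequest(), which tells the collection runner which request to execute next. The sketch below is illustrative; the variable names and handler convention are assumptions, not the framework's actual identifiers.

    // Runs in a request's test script. A response handler is a snippet of
    // code keyed by HTTP status; evaluating it may yield the next userAction.
    const action = JSON.parse(pm.variables.get("ptf.currentAction"));
    const handler = action.handlers && action.handlers[pm.response.code];
    const next = handler ? eval(handler) : null;
    // When the handler names no successor, fall through to the datafile order.
    postman.setNextRequest(next || pm.variables.get("ptf.nextActionInFile"));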
The PTF implements a data store of the information needed to test with many different users, FIs, OLBs, deploy instances, etc. A data syntax was developed that links the different data types and selects the values necessary to initiate a scenario for a user. The input variables are processed and the relevant data links expanded, so that the Postman global and environment variables are ready before the first request.
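
The linking syntax shown below is an assumption, but it illustrates the idea: selecting a user pulls in its FI, which pulls in its OLB, and the chosen environment supplies the platform instance; each level's values are then expanded into Postman variables.

    {
      "environments": { "sit1":  { "baseUrl": "https://api.sit1.example.internal" } },
      "olbs":         { "coreA": { "transferPath": "/v1/transfers" } },
      "fis":          { "demoFI": { "olb": "coreA", "fiCode": "0042" } },
      "users":        { "alice": { "fi": "demoFI", "env": "sit1" } }
    }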
Throughout the implementation ...

A custom reporter was developed to receive the test results and write them out in JUnit XML format for the TFS build results (see Custom Reporter below).

Objectives

Code Library

Postman Object Model

classes in the Code Library
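
The scratchpad notes above mention a simple code-reuse mechanism that Postman itself lacked. A common pattern for this, and my assumption of how a code library can work here, is to serialise shared functions into a global variable once, then rehydrate them with eval() in any script that needs them:

    // Setup request: publish the shared library once per testrun.
    pm.globals.set("ptfLib", `({
        qualify: function (fiCode, user) { return fiCode + ":" + user; },
        randomAmount: function () { return (Math.random() * 100).toFixed(2); }
    })`);

    // Any later pre-request or test script: rehydrate and use it.
    const lib = eval(pm.globals.get("ptfLib"));
    pm.environment.set("transferAmount", lib.randomAmount());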

Settings & Variables

orthogonal syntax; cascading, overriding values
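
As a sketch of the idea (the precedence order is an assumption): values cascade from OLB defaults through FI overrides down to user overrides, with later levels winning.

    // Assumed precedence: OLB defaults < FI overrides < user overrides.
    const olbDefaults   = { txnPageSize: 20, transfersEnabled: true };
    const fiOverrides   = { txnPageSize: 50 };
    const userOverrides = { transfersEnabled: false };
    const effective = Object.assign({}, olbDefaults, fiOverrides, userOverrides);
    // Push the resolved settings into Postman variables before the first request.
    Object.keys(effective).forEach(function (key) {
        pm.environment.set(key, String(effective[key]));
    });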

userActions
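
Based on the Overview's description, a userAction pairs a collection request with handlers for the possible response codes. The shape below is illustrative; the handler snippets are written so that evaluating them yields the next userAction's name, or null to fall through:

    {
      "name": "getAccounts",
      "request": "Accounts / Get Account List",
      "handlers": {
        "200": "null /* fall through to the next userAction in the datafile */",
        "401": "'login' /* session expired: re-authenticate first */"
      }
    }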

Testrun Datafiles
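
Postman's collection runner (and newman) accept a JSON array as iteration data, so a testrun datafile might look like the following; the field names are assumptions:

    [
      {
        "scenario": "transfer between own accounts",
        "user": "alice",
        "userActions": ["login", "getAccounts", "createTransfer", "logout"]
      },
      {
        "scenario": "view recent transactions",
        "user": "alice",
        "userActions": ["login", "getAccounts", "getTransactions", "logout"]
      }
    ]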

Requirements

Schema Validation
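
A sketch of how a datafile entry could be validated with tv4, the JSON-schema validator bundled in the Postman sandbox (the schema mirrors the illustrative datafile above):

    const entrySchema = {
        type: "object",
        required: ["scenario", "user", "userActions"],
        properties: {
            scenario:    { type: "string" },
            user:        { type: "string" },
            userActions: { type: "array", items: { type: "string" }, minItems: 1 }
        }
    };
    pm.test("datafile entry matches schema", function () {
        // In the framework the entry would come from pm.iterationData;
        // shown inline here to keep the sketch self-contained.
        const entry = {
            scenario: "transfer between own accounts",
            user: "alice",
            userActions: ["login", "getAccounts", "createTransfer", "logout"]
        };
        pm.expect(tv4.validate(entry, entrySchema)).to.be.true;
    });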

Custom Reporter

Uses the JUnit XML file format, optimised for TFS build results.
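
For illustration, the JUnit XML that TFS picks up has roughly this shape (the suite/test names and timings here are hypothetical):

    <testsuite name="PTF - demoFI - sit1" tests="4" failures="0" time="3.2">
      <testcase classname="transfer between own accounts" name="getAccounts: accounts ok" time="0.412"/>
      <testcase classname="transfer between own accounts" name="createTransfer: transfer accepted" time="0.788"/>
    </testsuite>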

TFS

PowerShell script
variables
builds
- base
- FIs
- time schedule
build agents
testrun results
dashboards of historic data
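
The notes above mention a PowerShell script; the build step ultimately amounts to running the collection with newman against a chosen environment and testrun datafile. Sketched here via newman's Node API (the file names are assumptions):

    const newman = require('newman');
    newman.run({
        collection: require('./ptf-collection.json'),
        environment: require('./sit1.postman_environment.json'),
        iterationData: './testruns/transfer-scenarios.json',
        reporters: ['junit'],
        reporter: { junit: { export: './results/ptf-junit.xml' } }
    }, function (err) {
        if (err) { throw err; }
        console.log('PTF testrun complete');
    });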

PTFWeb

Dashboards of results