Postman Testrunner Framework


work in progress ...

Overview

The Postman Testrunner Framework was born out of a need to test the API serving our mobile banking apps in a large, integrated test environment, over which we have little control.
Our API platform acts as an aggregator of several core online banking systems (OLBs), each serving multiple financial institutions (FIs), and each with its own data/interface contract.
The integrated environment is used by hundreds of staff across the company, and the data setup changes constantly. Testing here is never deterministic; it has to be opportunistic, and needs to respond appropriately to any number of situations.
To monitor the environment's health we needed a simple check of API functionality, capable of exercising all integration paths across the range of OLB data interfaces and FI/user configurations. It had to be flexible enough to run for a range of users, FIs, and OLBs, as well as for different deploy instances of our platform, and finally to handle the range of dynamic responses possible in such a fluid environment.
To start with, we captured and observed the API calls made by the mobile app (using Fiddler, Burp Suite, and MITM Proxy) and tried to design a Postman solution that emulates the app's behaviour.
To initiate a check, we select a platform instance, a code for the FI, and a user agent for the device, and then enter the user's credentials. Thereafter, the information required for each subsequent scenario must be obtained from earlier calls; for example, to test a transfer scenario you must first obtain a list of the user's accounts.
Our development teams have been using Postman for over a year and have built up a collection with 100+ endpoints and requests. Many requests are furnished with helpful test scripts that extract data from the response and save it to Postman global/environment variables. The collection is organised into feature folders and alphabetised to facilitate interactive functional testing of the platform API. However, the developer/test analyst must know the sequence of calls required to start a session before they can perform any feature testing.
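A typical test script of this kind, sketched here with hypothetical endpoint and variable names, uses the standard pm API to pull values out of the response:

  // Runs after the response to a (hypothetical) GET Accounts request.
  pm.test("accounts list returned", function () {
      pm.response.to.have.status(200);
  });

  // Extract the account ids and save them for later requests in the session.
  var body = pm.response.json();
  pm.environment.set("accountIds", JSON.stringify(
      body.accounts.map(function (a) { return a.id; })
  ));
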
This collection is actively maintained and versioned, with pull requests and reviews, in a Git repo. It is a really wonderful resource, and this project tries to leverage its value by implementing a framework that can orchestrate the correct sequence of API requests to automate common functional (API) scenarios.
The Postman Testrunner Framework (PTF) uses an external data file to specify a sequence of steps called userActions. A userAction executes a request from the underlying collection and then consults a list of handlers for the possible response codes. Response handlers are little snippets of code that determine the next userAction to perform. When a response handler specifies no next userAction, execution moves to the next userAction in the external data file, until the scenario is complete. The PTF is, in effect, a simple state machine.
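As a sketch of the idea, a testrun data file entry might look something like the following; the field names and handler calls are illustrative, not the framework's actual schema:

  // One scenario from a testrun data file (illustrative, shown as a
  // JavaScript literal so it can be annotated). Each userAction names a
  // request in the collection and maps response codes to handler snippets;
  // a handler can pick the next userAction, otherwise execution falls
  // through to the next entry in the list.
  var scenario = {
      name: "transfer",
      userActions: [
          {
              action: "login",
              request: "Authentication / POST Session",
              responses: {
                  "200": "next('getAccounts')",
                  "401": "fail('bad credentials')"
              }
          },
          {
              action: "getAccounts",
              request: "Accounts / GET Accounts",
              responses: {
                  "200": "saveAccounts()",     // no next: fall through
                  "503": "retry('getAccounts')"
              }
          },
          {
              action: "transfer",
              request: "Transfers / POST Transfer",
              responses: { "201": "pass()" }
          }
      ]
  };
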
The PTF implements a data store holding the information needed to test with many different users, FIs, OLBs, deploy instances, etc. A data syntax was developed that links the different data types and selects the values necessary to initiate a scenario for a user. The input variables are processed and the relevant data links expanded, so that the Postman global and environment variables are ready before the first request.
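The syntax itself isn't reproduced here; as a rough illustration, assuming a hypothetical "@" link notation, the input for one check might expand like this:

  // Illustrative input variables for one check (the "@" link syntax is
  // hypothetical). Links are expanded from the data store before the
  // first request, populating the Postman environment variables.
  var input = {
      instance: "qa2",            // which deploy instance of the platform
      fi: "FI042",                // financial institution code
      user: "@fi.users[0]",       // a test user configured for that FI
      olb: "@fi.olb",             // resolved: the OLB system behind FI042
      userAgent: "iPhone-app/5.2" // device user agent to present
  };
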
Throughout the implementation

A custom reporter was developed to receive the testrun results and write them to a JUnit.xml file suited to TFS build results (see Custom Reporter below).

Objectives

Code Library

Postman Object Model

Classes in the code library

Settings & Variables

Orthogonal settings syntax, with values cascading and overriding from broader to more specific levels
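As a sketch of the idea (the layer names here are assumptions, not the framework's actual ones), a value set at a more specific level overrides the same key set at a broader one:

  // Illustrative cascade: later (more specific) layers override earlier
  // ones key-by-key. The layer names are hypothetical.
  var defaults     = { timeoutMs: 5000, retries: 1 };
  var instanceVars = { baseUrl: "https://qa2.example.com" };  // per deploy instance
  var fiVars       = { retries: 3 };                          // per financial institution
  var testrunVars  = { timeoutMs: 10000 };                    // per testrun data file

  var settings = Object.assign({}, defaults, instanceVars, fiVars, testrunVars);
  // -> { timeoutMs: 10000, retries: 3, baseUrl: "https://qa2.example.com" }
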

userActions

Testrun Datafiles

Requirements

Schema Validation

Custom Reporter

Uses the JUnit.xml file format, optimised for TFS build results
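Newman custom reporters are plain Node modules that subscribe to run events; a skeleton for a reporter of this kind might look like the following (the TFS-specific shaping of the XML is elided):

  // Skeleton of a Newman custom reporter (module name illustrative).
  // Newman calls this function with the run's event emitter.
  module.exports = function (newman, reporterOptions) {
      var results = [];

      // Fired once per assertion executed in a request's test script.
      newman.on('assertion', function (err, o) {
          results.push({ test: o.assertion, item: o.item.name, failed: !!err });
      });

      // Fired when the whole run completes: write the JUnit-style XML,
      // shaped the way the TFS build results page expects, to
      // reporterOptions.export or a default path.
      newman.on('done', function (err, summary) {
          // ... map `results` and `summary` into <testsuites>/<testcase> ...
      });
  };
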

TFS

PowerShell script
variables
builds
- base
- FIs
- time schedule
build agents
testrun results
dashboards of historic data

PTFWeb

Dashboards of results