Postman Testrunner Framework

Overview

The Postman Testrunner Framework was born out of a need to test the API serving our mobile banking apps in a large, integrated test environment, over which we have little control.
Our API platform acts as an aggregator of several core online banking systems (OLBs), each serving multiple financial institutions (FIs), and each with its own data/interface contract.
The integrated environment is used by hundreds of staff across the company, and the data setup changes constantly. Testing here is never deterministic; it has to be opportunistic, and it needs to respond appropriately to any number of situations.
To monitor the environment's health we needed a simple check of API functionality, capable of exercising all integration paths across the range of different OLB data interfaces and FI/user configurations. It had to be flexible enough to run for a range of users, FIs, and OLBs, as well as for different deployed instances of our platform, and finally to handle the range of dynamic responses possible in such a fluid environment.
To start with, we captured and observed the API calls made by the mobile app (using Fiddler, Burp Suite, and MITM Proxy) and tried to design a Postman solution to emulate the app's behaviour.
To initiate a check, we select a platform instance, a code for the FI, and a useragent for the device, and then enter the user's credentials. Thereafter, the information required for subsequent scenarios must be obtained from prior calls. For example, to test a transfer scenario, you first need to obtain a list of the user's accounts.
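As a sketch of how a check is parameterised, the pre-request script below guards the session/login request by checking that the selected settings are present and attaching them to the call. The variable names (platformBaseUrl, fiCode, userAgent, username, password) and the X-FI-Code header are assumptions for illustration; the real environment keys may differ.

    // Pre-request script on the session/login request.
    // NOTE: variable and header names here are illustrative assumptions.
    const required = ['platformBaseUrl', 'fiCode', 'userAgent', 'username', 'password'];
    required.forEach(function (key) {
        if (!pm.environment.get(key)) {
            throw new Error('Missing environment variable: ' + key);
        }
    });

    // Send the selected device identity and FI code with the request.
    pm.request.headers.upsert({ key: 'User-Agent', value: pm.environment.get('userAgent') });
    pm.request.headers.upsert({ key: 'X-FI-Code', value: pm.environment.get('fiCode') });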
Our development teams have been using Postman for over a year and have built up a collection with 100+ endpoints and requests. Many requests are furnished with helpful test scripts that extract data from the responses and save them to Postman global/environment variables. The collection is organised into feature folders and alphabetised to facilitate interactive functional testing of the platform API. The user must know the sequence of calls to make to start a session and then perform some feature testing.
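The test script below is a minimal sketch of that extract-and-save pattern for an accounts request: it checks the response and stores an account id for a later transfer request. The response shape (an accounts array with an id field) and the fromAccountId variable name are assumptions for illustration.

    // Test script on a "get accounts" request (response shape assumed).
    pm.test('Status code is 200', function () {
        pm.response.to.have.status(200);
    });

    const body = pm.response.json();

    pm.test('At least one account returned', function () {
        pm.expect(body.accounts).to.be.an('array').that.is.not.empty;
    });

    // Save the first account id so a later transfer request can use {{fromAccountId}}.
    if (body.accounts && body.accounts.length > 0) {
        pm.environment.set('fromAccountId', body.accounts[0].id);
    }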
This project tries to orchestrate the correct sequence of API requests in the collection to automate some common functional scenarios.

The check must therefore chain calls in the right order, carrying data forward from one response to the next, to fulfil the feature test scenarios. Finally, we need to handle different responses: we may receive different success or failure codes, but we may also receive information that makes a feature impossible to execute, e.g. if the feature is switched off for the FI, or the user isn't configured to permit it.
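One way to handle this in Postman, sketched below, is to branch the run with postman.setNextRequest rather than fail the test when the environment cannot support the scenario. The transfersEnabled field and the request names 'Create Transfer' and 'Logout' are hypothetical.

    // Test script on a feature/eligibility response (field and request names assumed).
    const body = pm.response.json();

    if (pm.response.code === 200 && body.transfersEnabled === false) {
        // Feature switched off for this FI/user: skip the scenario instead of failing.
        console.log('Transfers disabled for this FI/user - skipping transfer scenario');
        postman.setNextRequest('Logout');
    } else if (pm.response.code === 200) {
        pm.test('Transfer feature available', function () {
            pm.expect(body.transfersEnabled).to.be.true;
        });
        postman.setNextRequest('Create Transfer');
    } else {
        // Anything else should at least be a recognised failure code.
        pm.test('Failure code is an expected one (' + pm.response.code + ')', function () {
            pm.expect(pm.response.code).to.be.oneOf([401, 403]);
        });
    }

Run end to end in the collection runner, this kind of branching lets a single collection cover the different FI/user configurations without hard failures on paths that simply don't apply.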

Objectives

Code Library

Postman Object Model

classes in the code library

Settings & Variables

orthogonal syntax for cascading and overriding values

userActions

Testrun Datafiles

TFS

PowerShell script variables and builds

- base
- FIs
- time schedule

build agents

PTFWeb

Dashboards of results