Postman Testrunner Framework
work in progress ...
 
==Overview==
:The Postman Testrunner Framework was born out of a need to test the API serving our mobile banking apps in a large, integrated test environment, over which we have little control.  
:Our API platform acts as an aggregator of several core online banking systems (OLB's), each serving multiple financial institutions (FI's), and each with its own data/interface contract.
:The integrated environment is used by hundreds of staff across the company, and the data setup changes constantly. Testing here is never deterministic; it has to be opportunistic, and needs to respond appropriately to any number of situations.
:To monitor the environment's health we needed a simple check of API functionality, capable of exercising all integration paths across the range of different OLB data interfaces and FI/user configurations. It had to be flexible enough to run for a range of users, FI's, and OLB's, as well as for different deploy instances of our platform, and finally to handle the range of dynamic responses possible in such a fluid environment.
:To start with, we captured and observed the API calls made by the mobile app (using Fiddler, Burp Suite, and mitmproxy) and tried to design a Postman solution to emulate the app's behaviour.
:To initiate a check, we select a platform instance, a code for the FI, and a user agent for the device, and then enter the user's credentials. Thereafter, the information required for subsequent scenarios must be obtained in prior calls. For example, to test a transfer scenario, we must first obtain a list of the user's accounts.
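:As an illustrative sketch, the inputs that initiate a check could be captured as a handful of variables like those below. The key names and values are assumptions for this example, not the PTF's actual syntax.
<syntaxhighlight lang="javascript">
// Illustrative only: the key names below are assumptions, not the PTF's
// actual variable names.
const checkInputs = {
    platformInstance: "int2",                          // deploy instance of the platform
    fiCode: "FI1234",                                  // selects the financial institution
    userAgent: "MobileBanking/4.2 (iPhone; iOS 14.3)", // device emulated by the check
    username: "testuser01",
    password: "********"
};
</syntaxhighlight>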
:Our development teams have been using Postman for over a year and have built up a collection with 100+ endpoints and requests. Many requests are furnished with helpful test scripts that extract data from the response and save it to the Postman global/environment variables. The collection is organised into feature folders and alphabetised to facilitate interactive functional testing of the platform API. However, the developer/test analyst must know the sequence of calls to make to start a session before they can perform any feature testing.
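:A typical test script of this kind looks something like the sketch below; the response fields (accounts, id) are assumptions for the example.
<syntaxhighlight lang="javascript">
// Runs in the Postman sandbox after a "Get Accounts" request.
pm.test("Accounts list returned", function () {
    pm.response.to.have.status(200);
    const body = pm.response.json();
    pm.expect(body.accounts).to.be.an("array").that.is.not.empty;
    // Save the first account id so a later request (e.g. a transfer)
    // can reference it as {{accountId}}.
    pm.globals.set("accountId", body.accounts[0].id);
});
</syntaxhighlight>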
:This collection is actively maintained and versioned with pull requests and reviews in a Git repo. It is a wonderful resource, and this project tries to leverage its value by implementing a framework that can orchestrate the correct sequence of API requests to automate common functional (API) scenarios.
:The Postman Testrunner Framework (PTF) uses an external data file to specify a sequence of steps called userActions. A userAction executes a request from the underlying collection and has a list of handlers for the possible response codes. Response handlers are little snippets of code that determine the next userAction to perform. When no next userAction is specified by the response handler, execution moves to the next userAction in the external data file, until the scenario is completed. In effect, the PTF is a simple state machine.
:The PTF implements a data store of the information necessary to test with many different users, FI's, OLB's, deploy instances, etc. A data syntax was developed that links the different data types and selects the values necessary to initiate a scenario for a user. The input variables are processed and the relevant data links expanded, so that the Postman global and environment variables are ready before the first request.
:Throughout the implementation
A custom reporter was developed to receive the
  
 
==Objectives==

==Code Library==
:Postman Object Model
:classes in Code lib

==Settings & Variables==
:orthogonal syntax; cascading and overriding values

==userActions==
  
 
==Testrun Datafiles==
==Requirements==
==Schema Validation==
==Custom Reporter==
:Uses the JUnit XML file format, optimised for TFS build results
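:A Newman custom reporter is a module that hooks into the run's events. The event hooks below follow Newman's reporter API, but the XML written here is a simplified assumption about the PTF reporter's actual output.
<syntaxhighlight lang="javascript">
// Minimal sketch of a JUnit-style Newman reporter; simplified for illustration.
const fs = require("fs");

module.exports = function PtfJunitReporter(newman, reporterOptions) {
    const cases = [];

    newman.on("request", function (err, args) {
        // One <testcase> per executed request (userAction).
        const failed = Boolean(err) || (args.response && args.response.code >= 400);
        cases.push(failed
            ? `  <testcase name="${args.item.name}"><failure/></testcase>`
            : `  <testcase name="${args.item.name}"/>`);
    });

    newman.on("done", function () {
        const xml = `<testsuite tests="${cases.length}">\n${cases.join("\n")}\n</testsuite>`;
        fs.writeFileSync(reporterOptions.export || "ptf-results.xml", xml);
    });
};
</syntaxhighlight>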
  
 
==TFS==
:powershell script
:variables
:builds
: - base
: - FI's
: - time schedule
:build agents
:testrun results
:dashboards of historic data
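:The build step essentially wraps a Newman run of the collection against a testrun datafile. Below is a minimal sketch of the equivalent programmatic invocation; the file names and the "ptf-junit" reporter name are assumptions for illustration.
<syntaxhighlight lang="javascript">
// Sketch of the Newman run a TFS build step could drive; file names and
// the "ptf-junit" reporter name are assumptions, not the actual setup.
const newman = require("newman");

newman.run({
    collection: require("./platform-api.postman_collection.json"),
    globals: require("./ptf-globals.json"),
    iterationData: "./testruns/transfer-smoke-check.json",
    reporters: ["cli", "ptf-junit"],
    reporter: { "ptf-junit": { export: "./results/ptf-results.xml" } }
}, function (err) {
    // Fail the build step when the run errors.
    if (err) { process.exit(1); }
});
</syntaxhighlight>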
  
 
==PTFWeb==
 
:Dashboards of results
