Fiserv Auckland - Intermediate Software Test Engineer



=== Software Developer in Testing - 2019-2020 ===


During my tenure as a Software Developer in Testing at Fiserv, I was tasked with addressing the complexities of the API server used by the mobile apps as a gateway to a network of core online banking systems (OLBs), each serving multiple financial institutions (FIs), and each with its own interface contract. Because these systems were expensive and difficult to replicate for integration testing, only three such test environments were available, and they were subject to frequent configuration changes. Despite the non-deterministic nature of testing in this environment, integration testing remained essential.
To streamline integration testing and monitor environment readiness, I spearheaded the development of the Postman Testrunner Framework (PTF), a flexible solution capable of executing end-to-end user scenarios across various OLB data interfaces and FI/user configurations.
'''Key Contributions:'''
* '''Development of the Postman Collection:''' Utilizing tools like Fiddler, Burp Suite, and MITM Proxy, I captured the API calls made by the mobile app to create a comprehensive Postman collection. Often a sequence of calls was required, each performing an action and storing relevant values in Postman environment variables.
* '''Architecture of the Postman Testrunner Framework (PTF):''' The PTF automatically orchestrated the calls in the correct order to execute end-to-end user scenarios reliably. It used an external JSON data file to specify a sequence of steps called userActions; each userAction executed a request from the collection and declared handlers that determined the next userAction to perform for each HTTP response code (see the sketch after this list). It operated as a simple state machine, and wherever possible it fetched data dynamically from the OLB system through API calls to minimize reliance on potentially stale data.
* '''Custom Development and Integration:''' The PTF was executed using Newman in a Node.js project, with a custom reporter developed to process events emitted by Newman during execution (sketched below). This allowed real-time capture of scenario results and detailed logs, providing clear insight into failed scenarios and partial successes. Results were sent to an in-house web dashboard and a dedicated Splunk instance for comprehensive monitoring and analysis.
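For illustration, a scenario definition in the external JSON data file might have looked something like the following minimal sketch. The scenario name, request paths, and handler fields here are hypothetical placeholders, not the PTF's actual schema.
<syntaxhighlight lang="json">
{
  "scenario": "transferBetweenAccounts",
  "userActions": [
    {
      "name": "login",
      "request": "Authentication/Login",
      "handlers": {
        "200": { "nextUserAction": "fetchAccounts" },
        "401": { "nextUserAction": "answerSecurityQuestion" }
      }
    },
    {
      "name": "fetchAccounts",
      "request": "Accounts/GetAccountList",
      "handlers": {
        "200": { }
      }
    },
    {
      "name": "transfer",
      "request": "Transfers/CreateTransfer",
      "handlers": {
        "200": { },
        "422": { "nextUserAction": "reportCouldNotRun" }
      }
    }
  ]
}
</syntaxhighlight>
When a handler names no next userAction, execution falls through to the next step in the file, which is what makes the PTF a simple state machine.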
This approach proved invaluable in navigating the fluid, non-deterministic testing environment: it enabled nuanced categorization of scenario outcomes beyond pass/fail distinctions, ensuring a more accurate assessment of system readiness and performance.
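To give a flavour of the Newman integration, here is a minimal sketch of running a scenario and listening to the events a custom reporter would consume. The file names are invented, and plain console logging stands in for the dashboard/Splunk forwarding; this is an illustration under those assumptions, not the project's actual code.
<syntaxhighlight lang="javascript">
// Minimal sketch: drive a PTF scenario with Newman and observe its events.
const newman = require('newman');

const run = newman.run({
    collection: require('./mobile-api.postman_collection.json'),       // hypothetical file
    environment: require('./integration.postman_environment.json'),   // hypothetical file
    iterationData: './scenarios/transfer-between-accounts.json'       // hypothetical file
}, (err) => {
    if (err) { console.error('Run could not start:', err); }
});

// A custom reporter is essentially a set of listeners on these events.
run.on('request', (err, args) => {
    // Capture each request/response pair for the scenario log.
    if (!err && args.response) {
        console.log(`${args.item.name}: HTTP ${args.response.code}`);
    }
});

run.on('done', (err, summary) => {
    if (err) { return; }
    // Summarise the run; in the real project this is where results
    // would be forwarded to the dashboard and Splunk.
    console.log(`Run finished with ${summary.run.failures.length} failure(s).`);
});
</syntaxhighlight>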
----
<!--
=== Software Developer in Testing - 2019-2020 ===
The API server used by the mobile app was essentially a gateway to a network of core online banking systems (OLBs), each serving multiple financial institutions (FIs), and each with its own interface contract. Such large and complex systems are expensive to replicate for integration testing, and as such there were only three of them at Fiserv. Each environment's configuration was tightly controlled from the USA and used by many staff across the company. Hence, the configuration and data were always in flux, so testing there was never fully deterministic. Regardless of the difficulties, the integration testing needed to be performed. To know the integration environment's readiness for testing, there was a strong desire to create an automated dashboard of system health and readiness. To monitor the environment's health we decided to create a suite of simple checks of API functionality, capable of exercising all integration paths across the range of different OLB data interfaces and FI/user configurations. It needed to be flexible enough to run for a range of users, FIs, and OLBs, and to handle the range of dynamic responses possible in such a fluid environment.


==== Developed the PTF ([[Postman Testrunner Framework]]) ====
To start with, we captured and observed the API calls made by the mobile app (using Fiddler, Burp Suite, and MITM Proxy) and created a Postman collection that could replicate the calls.
With this Postman collection, I was able to create a system that orchestrated the calls in the correct order to reliably execute end-to-end user scenarios, such as transferring money between two accounts for any user for whom we were provided credentials.
The Postman Testrunner Framework (PTF) uses an external data file to specify a sequence of steps called userActions. A userAction executes a request from the underlying collection, and then has a list of handlers for the possible HTTP response codes that determine the next userAction to perform. When no next userAction is specified in the handler, execution moves to the next step in the external data file until the scenario is completed.
The PTF is really just a simple state machine.


I had a policy that, where possible, the PTF would always try to obtain data from the OLB system using API calls, rather than store data that might become stale. We did, however, need to store some settings and user credentials, so the PTF implemented a simple nested JSON data syntax for this.
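As an illustration only, that nested store might have looked something like the following; every key, URL, and value here is invented for the example.
<syntaxhighlight lang="json">
{
  "platforms": {
    "uat1": { "baseUrl": "https://uat1.example.invalid/api" }
  },
  "fis": {
    "demobank": { "platform": "uat1", "olb": "olb-a" }
  },
  "users": {
    "demobank": [
      { "username": "testuser01", "password": "example-only" }
    ]
  }
}
</syntaxhighlight>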


The PTF was usually executed using Newman in a Node.js project, and I developed a custom reporter to process events emitted by Newman during the run. I used these events to capture the results from the scenarios and, most usefully, logs explaining why and how scenarios failed to complete. This data was immediately sent to the in-house dashboard, as well as the dedicated Splunk instance (described in the sections below).
To initiate a check, we select a platform instance, a code for the FI, and a user agent for the device, and then enter the user's credentials. Thereafter, the information required for subsequent scenarios must be obtained by prior calls. For example, to test a transfer scenario, you first need to fetch the user's list of accounts.
Our development teams had been using Postman for over a year and had built up a collection with 100+ endpoints and requests. Many requests are furnished with helpful test scripts that extract data from the response and save it to the Postman global/environment variables. The collection is organised into feature folders and alphabetised to facilitate interactive functional testing of the platform API. However, the developer/test analyst must know the sequence of calls needed to start a session before they can perform any feature testing.
This collection is actively maintained and versioned with pull requests and reviews in a Git repo. It is a really wonderful resource, and this project tries to leverage its value by implementing a framework that can orchestrate the correct sequence of API requests to automate common functional (API) scenarios.
The Postman Testrunner Framework (PTF) uses an external data file to specify a sequence of steps called userActions. A userAction executes a request from the underlying collection, and then has a list of handlers for the possible response codes. Response handlers are little snippets of code that determine the next userAction to perform. When no next userAction is specified in the response handler, execution moves to the next userAction in the external data file until the scenario is completed. The PTF is a simple state machine.
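To illustrate the idea, a response handler in this style could be a snippet along these lines, run as a Postman test script. This is a hypothetical reconstruction (the variable name <code>nextUserAction</code> and the code-to-action mapping are invented), not the PTF's actual handler code.
<syntaxhighlight lang="javascript">
// Hypothetical response handler: map the HTTP response code to the next
// userAction and store it for the framework to pick up.
const nextByCode = {
    200: null,                      // no override: fall through to the next step
    401: 'answerSecurityQuestion',  // an MFA challenge was issued
    422: 'reportCouldNotRun'        // preconditions for the scenario not met
};

const next = nextByCode[pm.response.code];
if (next) {
    pm.environment.set('nextUserAction', next);
}
</syntaxhighlight>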
The PTF implements a data store of the information necessary to test with many different users, FIs, OLBs, deploy instances, etc. A data syntax was developed that links the different data types and selects the values necessary to initiate a scenario for a user. The input variables are processed and the relevant data links expanded, so that the Postman global and environment variables are ready before the first request.


In this fluid, non-deterministic environment, it proved very helpful to separate results not just into passes and failures, but to recognise that some scenarios could not run, or were only partially successful. For example, a user with just one account could not attempt to transfer money between accounts (marked as "could not run"), and a fetch of the bill-payment list might return no items simply because none had been made (marked as "pass ⚠").


-->


==== Setup Splunk Enterprise & Integrated PTF with Splunk ====