Fiserv Auckland - Intermediate Software Test Engineer

== Roles ==


=== Software Developer in Testing - 2019-2020 ===


During my tenure as a Software Developer in Testing at Fiserv, I was responsible for integration testing of the API server that the mobile apps used as a gateway to a network of core online banking systems (OLBs), each with its own interface contract and each serving multiple financial institutions (FIs). Because the OLB systems were expensive and difficult to replicate, only three integrated testing environments were available. These environments were subject to frequent configuration changes, shared by many staff, and tightly controlled from the USA. Despite these difficulties, and the non-deterministic nature of testing in such environments, integration testing remained essential.


To streamline integration testing and monitor environment readiness, I spearheaded the development of the Postman Testrunner Framework (PTF), a flexible solution capable of executing complete user scenarios across the various OLB data interfaces and FI/user configurations.


'''Key Contributions:'''


==== Development of the Postman Collection ====
Using tools such as Fiddler, Burp Suite, and MITM Proxy, I captured the API calls made by the mobile app and built a comprehensive Postman collection from them. Each scenario comprised a sequence of calls, with each call performing an action and storing relevant data in Postman environment variables. I emphasised obtaining data dynamically from the OLB system through API calls, to minimise reliance on potentially stale data.
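For illustration, a call's Postman "Tests" script might pull a value out of the response and store it for later calls to use. The `pm` object below is a minimal stub of Postman's real sandbox API (which Postman provides automatically), and the response body and variable names are invented:

```javascript
// Minimal stub of Postman's `pm` sandbox, for illustration only --
// inside Postman the real `pm` object is provided automatically.
const env = new Map();
const pm = {
  environment: {
    set: (key, value) => env.set(key, value),
    get: (key) => env.get(key),
  },
  response: {
    // Canned example body; a real run would return the OLB's live data.
    json: () => ({ accounts: [{ id: "acc-123" }, { id: "acc-456" }] }),
  },
};

// The kind of "Tests" script a call in the collection might run:
// store the first account id so the next call can reference it.
const body = pm.response.json();
pm.environment.set("sourceAccountId", body.accounts[0].id);

console.log(pm.environment.get("sourceAccountId")); // "acc-123"
```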


==== Architecture of the Postman Testrunner Framework (PTF) ====
The PTF automatically orchestrated the calls in the correct order to execute the user scenarios reliably. It used an external JSON data file to specify a sequence of steps called userActions: each userAction referenced a request from the collection and contained handlers that mapped each HTTP response code to the next userAction to perform. Effectively, the PTF was a simple state machine. The PTF also implemented a simple nested JSON data syntax for storing data such as user credentials and FI connection settings.
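As a sketch of that state-machine idea (all action names, endpoints, and statuses here are invented for illustration), the data file might map each userAction's possible response codes to the next userAction, and a small loop would walk the chain:

```javascript
// Hypothetical userAction definitions in the style described above:
// each step names a request and maps HTTP status codes to the next step.
const userActions = {
  login:        { request: "POST /auth",      handlers: { 200: "listAccounts", 401: "done" } },
  listAccounts: { request: "GET /accounts",   handlers: { 200: "transfer" } },
  transfer:     { request: "POST /transfers", handlers: { 200: "done" } },
};

// Stubbed executor returning a canned HTTP status per request;
// the real PTF would execute the Postman request here.
const cannedStatus = { "POST /auth": 200, "GET /accounts": 200, "POST /transfers": 200 };

function runScenario(start) {
  const trail = [];
  let current = start;
  while (current && current !== "done") {
    const action = userActions[current];
    const status = cannedStatus[action.request];
    trail.push(`${current}:${status}`);
    current = action.handlers[status]; // the handler decides the next userAction
  }
  return trail;
}

console.log(runScenario("login"));
// → ["login:200", "listAccounts:200", "transfer:200"]
```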


==== Custom Development and Integration ====
The PTF was executed using Newman in a Node.js project, with a custom reporter developed to process the events emitted by Newman during execution. This allowed real-time capture of scenario results and detailed logs, providing clear insight into failed scenarios and partial successes. Results were sent to the Current Health Status dashboard, as well as to a dedicated Splunk instance, for comprehensive monitoring and analysis; the dashboard and Splunk implementations are detailed in the sections below.

This approach proved invaluable in navigating the fluid, non-deterministic testing environment: it enabled nuanced categorisation of scenario outcomes beyond pass/fail distinctions, ensuring a more accurate assessment of system readiness and performance.


The PTF used environment variables to select which FI and user to run as, and it was designed to run several users in parallel. The TFS build server was configured to run the scenarios for all users simultaneously, once per hour.
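The parallel fan-out can be sketched as follows; `runForUser` is a stub standing in for a full Newman/PTF execution, and the FI/user names are invented:

```javascript
// Hypothetical stub for one PTF execution; the real framework would
// launch Newman with environment variables selecting the FI and user.
async function runForUser(fi, user) {
  return { fi, user, passed: true };
}

// Invented FI/user combinations, one PTF run each.
const runs = [
  { fi: "FI-A", user: "alice" },
  { fi: "FI-A", user: "bob" },
  { fi: "FI-B", user: "carol" },
];

// Fan out all runs in parallel, as the hourly TFS job did.
async function runAll() {
  return Promise.all(runs.map(({ fi, user }) => runForUser(fi, user)));
}

runAll().then((results) => console.log(`${results.length} runs completed`)); // 3 runs completed
```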


----
==== Development of Inhouse Web UI for Current Health Status Dashboard ====
This part of the solution used Node.js with Express.js and Pug to create:
* an API for receiving events from the PTF, and
* a Web UI displaying a snapshot of the latest results in a tabular dashboard.

See a snapshot of the PTF dashboard [https://dirksonline.net/CV/PTF%20Dashboard.JPG here].

The API was able to receive events from concurrently running PTF executions, and the Web UI updated itself in real time to give immediate feedback about environment health from multiple user perspectives. This fast feedback across multiple users was particularly useful following a deployment of the mobile API server.

The results were not shown merely as bland passes and failures; I chose to show that some scenarios:
* could not run, e.g. a user with just one account could not attempt to transfer money between accounts (marked as "could not run");
* were only partially successful, e.g. an attempt to fetch a list of bill payments returned no items because none had been made (marked as "pass ⚠");
* were not supported by the FI/OLB; or
* did not run, e.g. were skipped or still waiting to be run.

For each cell of the dashboard, hover and mouse actions revealed further details.
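A hypothetical sketch of mapping a raw scenario result onto dashboard outcome categories beyond plain pass/fail; the input field names are invented for illustration:

```javascript
// Illustrative outcome mapping; the real dashboard's rules and result
// shape may have differed.
function categorise(result) {
  if (!result.supported) return "not supported";       // FI/OLB lacks the feature
  if (!result.preconditionsMet) return "could not run"; // e.g. only one account, so no transfer possible
  if (!result.executed) return "not run";               // skipped, or still waiting to be run
  if (!result.passed) return "fail";
  if (result.emptyData) return "pass ⚠";                // e.g. bill-payment list came back empty
  return "pass";
}

const result = { supported: true, preconditionsMet: true, executed: true, passed: true, emptyData: true };
console.log(categorise(result)); // "pass ⚠"
```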
----


==== Setup Splunk Enterprise & Integrated PTF with Splunk ====
* Set up Splunk Enterprise, configured indexing of historic PTF results, and built [https://dirksonline.net/CV/Splunk%20feature%20grid.JPG dashboard monitors] on top of them.
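As an illustration of the Splunk side, a scenario result could be shaped as a Splunk HTTP Event Collector (HEC) payload. The outer envelope (`time`/`sourcetype`/`index`/`event`) follows Splunk's HEC event format; the field names inside `event`, the sourcetype, and the index name are invented, and sending it would be an HTTP POST to `/services/collector/event` with an `Authorization: Splunk <token>` header:

```javascript
// Illustrative payload builder; only the HEC envelope keys are Splunk's,
// everything inside `event` is an invented example shape.
function toHecEvent(result, indexName) {
  return {
    time: Math.floor(result.timestamp / 1000), // HEC expects epoch seconds
    sourcetype: "ptf:scenario",
    index: indexName,
    event: {
      fi: result.fi,
      user: result.user,
      scenario: result.scenario,
      outcome: result.outcome,
    },
  };
}

const payload = toHecEvent(
  { timestamp: 1600000000000, fi: "FI-A", user: "alice", scenario: "transfer", outcome: "pass" },
  "ptf_results"
);
console.log(JSON.stringify(payload));
```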


=== Software Test Engineer - 2017-2018 ===
