Fiserv Auckland - Intermediate Software Test Engineer
Revision as of 06:25, 8 May 2024
=== Software Developer in Testing - 2019-2020 ===
During my tenure as a Software Developer in Testing at Fiserv, I was tasked with addressing the complexities of the API server used by the mobile apps as a gateway to a network of core online banking systems (OLBs), each serving multiple financial institutions (FIs) and each with unique interface contracts. Because these systems were expensive and difficult to replicate for integration testing, only three test environments were available, and they were subject to frequent configuration changes. Despite the non-deterministic nature of testing in this environment, integration testing remained essential.
To streamline integration testing and monitor environment readiness, I spearheaded the development of the Postman Testrunner Framework (PTF), a flexible solution capable of executing end-to-end user scenarios across various OLB data interfaces and FI/user configurations.
'''Key Contributions:'''
* '''Development of the Postman Collection:''' Using tools such as Fiddler, Burp Suite, and MITM Proxy, I captured the API calls made by the mobile app to create a comprehensive Postman collection. Often a sequence of calls was required, each performing an action and storing relevant values in Postman environment variables for use by later requests.
* '''Architecture of the Postman Testrunner Framework (PTF):''' The PTF automatically orchestrated the calls in the correct order to execute end-to-end user scenarios reliably. It used an external JSON data file to specify a sequence of steps called userActions: each userAction executed a request from the collection, and handlers keyed by HTTP response code determined the next userAction to perform. The PTF operated as a simple state machine, preferring to obtain data dynamically from the OLB system through API calls rather than rely on potentially stale stored data.
* '''Custom Development and Integration:''' The PTF was executed using Newman in a Node.js project, with a custom reporter developed to process events emitted by Newman during execution. This allowed for real-time capture of scenario results and detailed logs, providing clear insights into failed scenarios and partial successes. Results were sent to an in-house web dashboard and a dedicated Splunk instance for comprehensive monitoring and analysis.
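The state-machine behaviour described above can be sketched as a small driver loop. This is a minimal illustration, not the original implementation: the field names (`steps`, `userActions`, `handlers`) are assumptions based on the description, and the Postman request execution is stubbed. Values captured from responses are stored in a shared `env` object, mirroring the Postman environment-variable chaining between calls.

```javascript
// Sketch of a PTF-style external data file and its state-machine driver.
// Field names are assumed; the real framework ran Postman collection
// requests under Newman, stubbed out here.
const scenario = {
  steps: ["login", "listAccounts", "transfer"],
  userActions: {
    login:        { request: "POST /auth",     handlers: { 200: null, 401: "abort" } },
    listAccounts: { request: "GET /accounts",  handlers: { 200: null, 404: "abort" } },
    transfer:     { request: "POST /transfer", handlers: { 200: null } },
    abort:        { request: null,             handlers: {} } // terminal action
  }
};

// Stand-in for executing a named request from the collection; stores
// values captured from responses in `env`, like Postman environment variables.
function execute(requestName, env) {
  if (requestName === "POST /auth") env.token = "stub-token";
  return 200; // pretend every call succeeds
}

function runScenario(scenario, execute) {
  const env = {}, log = [];
  for (const step of scenario.steps) {
    let action = step;
    while (action) {
      const ua = scenario.userActions[action];
      if (!ua.request) return { outcome: "could not run", log };
      const code = execute(ua.request, env);
      log.push(`${action} -> ${code}`);
      if (!(code in ua.handlers)) return { outcome: "fail", log };
      action = ua.handlers[code]; // null => fall through to the next step
    }
  }
  return { outcome: "pass", log };
}

console.log(runScenario(scenario, execute)); // outcome: 'pass'
```

A handler that maps a response code to no next userAction advances to the next step, which is what keeps the data file flat and readable while still allowing per-code branching.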
This approach proved invaluable in navigating the fluid, non-deterministic testing environment, enabling nuanced categorization of scenario outcomes beyond pass/fail distinctions. For example, a user with only one account could not attempt an inter-account transfer (marked "could not run"), and a bill-payment list that returned no items because none had been made was marked "pass ⚠" rather than failed. This ensured a more accurate assessment of system readiness and performance.
----
==== Setup Splunk Enterprise & Integrated PTF with Splunk ====