Fiserv Auckland - Intermediate Software Test Engineer

'''Jan-2017 - Apr-2020'''
 
== Fiserv ==
Fiserv Auckland develops mobile apps used by over 2,000 banks (mainly in the USA), serving more than 8 million active users. It also maintains the multi-tier, multi-tenanted Web and API integration servers that interface with core online banking (OLB) systems and third-party platforms. Fiserv's solutions are highly configurable, allowing customization of features and branding. Because the banking domain is stringent and risk averse, reliability and quality are paramount. Testing at Fiserv was complex and challenging, yet the role proved rewarding and intellectually stimulating.
  
 
== [[References_-_Full_List|References]] ==
 
* [https://dirksonline.net/CV/Letter%20of%20Recommendation%20from%20K%20V%20Kaufman%20-%20signed.pdf 2020 '''K Vaughan Kaufman'''] Letter of Recommendation
 
== Software Developer in Testing - 2019-2020 ==
  
In this role I assisted with integration testing of the mobile API server, which the mobile apps used as a gateway to a network of core online banking (OLB) systems. Each OLB had its own interface contract, and each served multiple financial institutions (FIs). Because the OLB systems were expensive and difficult to replicate, only three integrated testing environments existed. These environments were subject to frequent configuration changes, shared by many staff, and tightly controlled from the USA. Despite these difficulties and the non-deterministic nature of testing in them, integration testing remained essential.
  
To streamline integration testing and monitor environment readiness, I spearheaded the development of the Postman Testrunner Framework (PTF), a flexible solution capable of dynamically executing complete user scenarios across the various OLBs and FI/user configurations.
  
=== Development of Postman Collection ===
Using tools such as [https://www.telerik.com/fiddler '''Fiddler'''], [https://portswigger.net/burp/communitydownload '''Burp Suite CE'''], and [https://mitmproxy.org/ '''MITM Proxy'''], we captured the API calls made by the mobile app and built a comprehensive Postman collection of requests from them. Each user scenario was a sequence of calls, with each call performing an action and storing relevant data in Postman environment variables. I emphasised obtaining data dynamically from the OLB to minimize reliance on potentially stale data.
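
As a rough illustration of this chaining, a request's Tests script could stash live values for the next call (the request, field, and variable names here are invented for the example):

<syntaxhighlight lang="javascript">
// Tests script on a hypothetical "List Accounts" request.
// Later requests in the scenario read these environment variables.
pm.test("accounts returned", function () {
    pm.response.to.have.status(200);
    const accounts = pm.response.json().accounts;
    pm.expect(accounts).to.be.an("array").that.is.not.empty;

    // Prefer live data from the OLB over stored values that may go stale.
    pm.environment.set("fromAccountId", accounts[0].id);
    if (accounts.length > 1) {
        pm.environment.set("toAccountId", accounts[1].id);
    }
});
</syntaxhighlight>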
  
=== Architecture of the Postman Testrunner Framework (PTF) ===
The PTF automatically orchestrated the calls in the correct order to execute the user scenarios reliably. It used an external JSON file to specify a sequence of steps called userActions; each userAction referenced a request from the collection and contained a response handler for each HTTP response code, which set the next userAction to perform. Effectively, the PTF was a simple state machine. The PTF also implemented a simple nested JSON data syntax for storing data such as user credentials and FI connection settings. Passwords were encrypted when stored, and decrypted at run time.
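
A minimal sketch of a userActions sequence (the field names are illustrative, not the actual PTF schema):

<syntaxhighlight lang="javascript">
// Illustrative scenario definition: each step names a collection request,
// and handlers map HTTP status codes to the next userAction, forming
// a simple state machine.
const scenario = {
  "userActions": [
    {
      "name": "login",
      "request": "Authenticate User",
      "handlers": {
        "200": { "next": "listAccounts" },
        "401": { "next": "answerChallenge" }  // e.g. step-up authentication
      }
    },
    {
      "name": "answerChallenge",
      "request": "Submit MFA Answer",
      "handlers": { "200": { "next": "listAccounts" } }
    },
    {
      "name": "listAccounts",
      "request": "List Accounts",
      "handlers": { "200": {} }  // no "next": continue to the following step
    }
  ]
};
</syntaxhighlight>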
  
=== Custom Development and Integration ===
The PTF was implemented using [https://www.npmjs.com/package/newman '''Newman'''] in a [https://nodejs.org/en '''Node.js'''] project, with a custom JavaScript reporter developed to process events emitted by Newman during execution. This allowed for real-time capture of results and detailed logs, providing clear insights into failures and partial successes. Results were sent to the PTF dashboard, as well as to a dedicated [https://www.splunk.com/ '''Splunk'''] instance for comprehensive monitoring and analysis. The PTF dashboard and Splunk implementations are detailed in the sections below.  
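
The shape of that integration might look like the following sketch, using Newman's programmatic API (the event names are Newman's; the file paths and forwarding logic are invented for illustration):

<syntaxhighlight lang="javascript">
// Run the collection programmatically and hook Newman's run events.
const newman = require('newman');

newman.run({
  collection: require('./ptf-collection.json'),
  environment: require('./environments/test3.json')
})
.on('request', (err, args) => {
  // Capture every request/response pair for detailed logging.
  console.log(args.item.name, args.response && args.response.code);
})
.on('assertion', (err, args) => {
  if (err) console.error('FAILED:', args.assertion, '-', err.message);
})
.on('done', (err, summary) => {
  // Forward results to the PTF dashboard and Splunk from here.
  console.log('failures:', summary.run.failures.length);
});
</syntaxhighlight>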
  
The PTF was executed inside a shell terminal on [https://learn.microsoft.com/en-us/previous-versions/azure/devops/all/overview?view=tfs-2018 '''TFS'''] build agents, using shell environment variables to supply the PTF with FI settings and user credentials. The PTF was designed to execute in parallel, and TFS was configured to run all users concurrently once per hour.
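
For example, the runner could read its per-run configuration like this (the variable names are hypothetical; the actual settings came from the TFS build definition):

<syntaxhighlight lang="javascript">
// Hypothetical per-run settings, injected by the TFS build definition
// so that parallel agents can each target a different FI and user.
const runSettings = {
  fiId: process.env.PTF_FI_ID,
  olbHost: process.env.PTF_OLB_HOST,
  username: process.env.PTF_USERNAME,
  password: process.env.PTF_PASSWORD  // held as a TFS secret variable
};
</syntaxhighlight>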
  
=== Development of PTF Dashboard ===
I used [https://nodejs.org/en '''Node.js'''] with [https://expressjs.com/ '''Express.js'''] and [https://pugjs.org/ '''Pug'''] to create
  
* an API for receiving events from the PTF, and  
* a Web UI to display a snapshot of the latest results in a tabular dashboard.  
  
The API was designed to process data from concurrent PTF executions, and the Web UI updated in real time to give immediate feedback on environment health from multiple user perspectives. This fast feedback across many users was particularly useful following a deployment of the mobile API server.
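
A stripped-down sketch of the service (the routes, fields, and view names are invented; the in-house implementation differed):

<syntaxhighlight lang="javascript">
const express = require('express');
const app = express();
app.set('view engine', 'pug');   // expects a views/dashboard.pug template
app.use(express.json());

// Keep only the latest result per (user, scenario) pair.
const latest = {};

// Endpoint the PTF posts results to; safe under concurrent runs
// because Node serialises the handler callbacks.
app.post('/api/results', (req, res) => {
  const { user, scenario, status } = req.body;
  latest[user + ':' + scenario] = { user, scenario, status, at: new Date() };
  res.sendStatus(204);
});

// Tabular dashboard rendered from the current snapshot.
app.get('/', (req, res) => {
  res.render('dashboard', { results: Object.values(latest) });
});

app.listen(3000);
</syntaxhighlight>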
  
In addition to ''pass'' and ''fail'', I chose to show that scenarios sometimes
* ''could not run'', e.g. a user with just one account could not attempt to transfer money between accounts.
* ''pass ⚠'' when only partially successful, e.g. an attempt to fetch a list of bill payments returning no items because none had been made.
* ''not supported'' by the FI/OLB
* ''not run'', e.g. skipped, or still waiting to be run.
  
For each result cell I used hover and mouse actions to show details.
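
The classification logic might be sketched like this (the names are invented; the real PTF derived these states from Newman events and each scenario's context):

<syntaxhighlight lang="javascript">
// Map a completed (or skipped) scenario onto a dashboard status.
function classify(scenario) {
  if (!scenario.supported)        return 'not supported'; // FI/OLB lacks the feature
  if (!scenario.preconditionsMet) return 'could not run'; // e.g. only one account
  if (!scenario.executed)         return 'not run';       // skipped or still queued
  if (scenario.failures > 0)      return 'fail';
  return scenario.warnings > 0 ? 'pass ⚠' : 'pass';
}
</syntaxhighlight>
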
[https://dirksonline.net/CV/PTF%20Dashboard.JPG Link to a screenshot of the PTF dashboard]
  
=== Setup Splunk Enterprise ===
  
I set up a dedicated instance of [https://www.splunk.com/en_us/products/splunk-enterprise.html '''Splunk Enterprise'''] to store and analyze trends in the PTF data (results, logging, and full API requests and responses). This involved configuring indexes, HTTP Event Collector (HEC) endpoints, and user access permissions, and managing VM storage requirements. I developed dashboards to visualize historical PTF data, using shades of green, red, and grey to represent pass, fail, and indeterminate results, with shading also differentiating users. These grids provided valuable insights into environment health, user status, feature performance, and OLB status. Click-through functionality supported investigations, drilling down through the layers into increasingly detailed views of the data. A sketch of posting an event to the HEC follows the summary list below.
  
* Configured indexes, HEC event collectors, and user access permissions
* Extensively analysed historical PTF data
* Developed dashboards to visualize historical PTF data, using colour to show health, status, and performance
* Implemented click-through functionality for detailed data exploration
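
As an indication of the HEC integration (the host, token, and event fields are placeholders), a PTF result could be posted like this:

<syntaxhighlight lang="javascript">
// Post one PTF result to a Splunk HTTP Event Collector.
// Requires Node 18+ for the built-in fetch API.
async function sendToSplunk(result) {
  await fetch('https://splunk.example.internal:8088/services/collector/event', {
    method: 'POST',
    headers: { 'Authorization': 'Splunk 00000000-0000-0000-0000-000000000000' },
    body: JSON.stringify({
      index: 'ptf',
      sourcetype: 'ptf:result',
      event: result  // e.g. { user, fi, scenario, status, durationMs }
    })
  });
}
</syntaxhighlight>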
  
[https://dirksonline.net/CV/Splunk%20feature%20grid.JPG Link to screenshot of the feature grid]
  
== Software Test Engineer - 2017-2018 ==
 
 
At Fiserv, I began as a QA member within agile teams responsible for implementing changes across various mobile banking solutions.  
 
My responsibilities included:
 
* Testing new features for mobile apps, and conducting cross-device regression checks.
* Contributing to the development of the C# Specflow API automation suite for mobile API servers.
* Deploying environments and modifying configurations using Octopus.
* Testing a banking Web App hosted on dedicated hardware, where I leveraged Powershell scripts for configuring and automating deployments.
 
== Tools and Technologies ==
At Fiserv I used the following tools and technologies:
  
 
* [https://www.postman.com/ '''Postman'''] and [https://www.soapui.org/ '''SoapUI'''] for API testing.
* [https://www.splunk.com/ '''Splunk'''] for log analysis and monitoring.
* [https://learn.microsoft.com/en-us/previous-versions/azure/devops/all/overview?view=tfs-2018 '''Team Foundation Server (TFS)'''] for version control (Git repos) and continuous integration (build server). (Note: TFS has since been rebranded as Azure DevOps.)
* '''Powershell''' for automation tasks.
* '''Octopus''' as a deployment automation tool.
* [https://specflow.org/ '''Specflow'''] and [https://en.wikipedia.org/wiki/C_Sharp_(programming_language) '''C#'''] for API automation using BDD.
* Testing across various domains, including mobile functional, accessibility, iOS upgrade, and platform API functional testing.
* [https://www.telerik.com/fiddler '''Fiddler'''], [https://portswigger.net/burp/communitydownload '''Burp Suite CE'''], and [https://mitmproxy.org/ '''MITM Proxy'''] for capturing network calls.
* [https://xmind.app/ '''XMind'''] for mind mapping.
* [https://www.atlassian.com/software/confluence '''Confluence'''] for product & project documentation.
* [https://en.wikipedia.org/wiki/SQL_Server_Management_Studio '''SQL Server Management Studio'''] for data queries, test data setup, and testing SQL scripts.
* [https://visualstudio.microsoft.com/ '''Visual Studio'''] for code-related tasks.
* '''Microsoft Test Manager''' for managing test cases and suites, and recording test progress.
== Agile ==
Fiserv used the [https://www.scaledagileframework.com '''Scaled Agile Framework''' (SAFe)] to govern its Agile practices. Squads were typically about ten people, engaged in the common Agile rituals, and were responsible for the SDLC through to integration. We followed Gitflow (on TFS).
The squad:
* Engaged in Agile Rituals - stand-ups, backlog grooming, estimation, planning, demos, and retros.
* Owned the development lifecycle - story design, implementation, testing, and integration.
* Followed Gitflow - feature branches for development, with changes integrated into release-train branches.
* Contributed to quality checking at various stages before code changes were deployed to production.