Stephan Baumhoff

Head of Test Center Performance Testing

Biography.

Stephan has a long history as a performance test engineer and team lead for performance testing and test automation. As a Senior Performance Test Engineer, he is passionate about solutions with the potential to address long-standing critical issues, particularly those that hinder the advancement of automation in performance testing. At PostFinance, while routine performance test execution is manageable, delivering high-quality test analysis within a reasonable timeframe remains a challenge. The desire for fast, automated, high-quality evaluation of complex performance tests has been on his team’s wish list for quite some time. Despite having well-defined requirements, they had little hope of achieving them—until recently.

Talk.

A Case Study of AI-Powered Software QA & Reliability Engineering

While most of us intuitively agree on the need for more agility and speed, traditional decision-making mechanisms such as Change Advisory Boards (CABs) often become bottlenecks. These human-centric steps, valuable in the past, now hinder the DevOps flow.
To effectively automate decision-making within DevOps pipelines, we need to invest in the transition towards smart quality gates that operate on insights generated by machine learning (ML). This requires a shift in quality assurance practices. Within the Swiss Digital Network, we call this new approach “Effective Continuous Verification”, and we are conducting an Innosuisse (Swiss Innovation Agency, https://www.innosuisse.admin.ch/en) project called AI-SQUARE together with three Swiss universities from Zurich, Geneva, and Neuchâtel.
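As a rough illustration of what such a gate could look like (a minimal sketch under our own assumptions, not the project's implementation; the metric names, thresholds, and the GateInsight structure are hypothetical), a smart quality gate can be modelled as a pipeline step whose exit code, driven by ML-generated insights rather than a manual CAB approval, decides whether the release proceeds:

# Hypothetical sketch: a CI/CD step that consults ML-generated insights
# instead of waiting for a manual Change Advisory Board approval.
from dataclasses import dataclass

@dataclass
class GateInsight:
    metric: str           # e.g. "p95_response_time_ms"
    anomaly_score: float  # 0.0 (normal) .. 1.0 (highly anomalous), produced by an ML model
    threshold: float      # score above which the gate blocks the release

def evaluate_quality_gate(insights: list[GateInsight]) -> bool:
    """Return True if the release may proceed, False if the gate blocks it."""
    violations = [i for i in insights if i.anomaly_score > i.threshold]
    for v in violations:
        print(f"GATE BLOCKED: {v.metric} anomaly score {v.anomaly_score:.2f} > {v.threshold:.2f}")
    return not violations

if __name__ == "__main__":
    # In a pipeline, a non-zero exit code stops the next stage from running.
    insights = [
        GateInsight("p95_response_time_ms", anomaly_score=0.12, threshold=0.80),
        GateInsight("error_rate", anomaly_score=0.91, threshold=0.80),
    ]
    raise SystemExit(0 if evaluate_quality_gate(insights) else 1)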
This talk presents the design patterns behind AI-SQUARE, an AI-driven solution for assessing software quality and maturity. We address the challenges of integrating dynamic smart quality gates into the CI/CD pipeline and show how these checkpoints improve alignment with current DevOps practices. Further, we illustrate the concept of knowledge graphs and how they can be used to capture and store QA context parameters alongside the standard observability data coming from various test results and system behavior data sources. We share how the enriched data in such a knowledge graph serves as input for ML models that detect anomalies and assess quality gates related to performance, reliability, and functional requirements.
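To make the knowledge-graph idea more tangible, here is a minimal sketch (the node kinds, property names, baseline numbers, and the choice of IsolationForest are our own assumptions for illustration, not the AI-SQUARE design) showing how observability data enriched with QA context could be queried and fed to an ML anomaly detector:

import networkx as nx
from sklearn.ensemble import IsolationForest

# 1) Enrich observability data with QA context in a small graph.
g = nx.DiGraph()
g.add_node("release:1.4.0", kind="release", risk_class="medium")       # QA context
g.add_node("testrun:load-01", kind="test_run", environment="staging")  # QA context
g.add_node("metric:p95_ms", kind="metric", value=512.0)                # observability data
g.add_node("metric:error_rate", kind="metric", value=0.7)              # observability data
g.add_edge("release:1.4.0", "testrun:load-01", relation="verified_by")
g.add_edge("testrun:load-01", "metric:p95_ms", relation="measured")
g.add_edge("testrun:load-01", "metric:error_rate", relation="measured")

# 2) Pull the metric values attached to a test run out of the graph.
def metrics_for(run: str) -> list[float]:
    return [g.nodes[n]["value"] for n in g.successors(run)
            if g.nodes[n]["kind"] == "metric"]

# 3) Fit an anomaly detector on historical runs and score the current one.
historical_runs = [[500.0, 0.4], [495.0, 0.5], [505.0, 0.3], [510.0, 0.6]]  # made-up baseline
detector = IsolationForest(contamination=0.1, random_state=0).fit(historical_runs)
is_anomalous = detector.predict([metrics_for("testrun:load-01")])[0] == -1

print("performance gate:", "BLOCK" if is_anomalous else "PASS")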
Finally, we present a concrete customer environment where we explain:
• what the main requirements are for automated verification of performance quality gates within the pipelines;
• why a new AI-driven paradigm, rather than a human-centric one, is needed to automate quality gate verification in this customer context;
• to what extent the target AI-SQUARE capabilities address the requirements introduced above and comply with the new paradigm.
