By Bruno Plasch
Swiss Testing Day 2025 kicked off with a warm welcome to the testing community, valued partners, and board members at StageOne in Zurich. In the opening speech, we reflected on the event’s nearly two-decade journey – from its founding in 2006 to becoming one of the most significant gatherings for professionals in testing, quality, and digitalization. Built on a foundation of expertise, interdisciplinary exchange, and a shared belief that quality is a mindset, not just a goal, the event once again lived up to its legacy.
This year’s theme, “Conquer the Chaos,” invited us to bring clarity to complexity, take responsibility in a world increasingly shaped by algorithms, and strengthen the trust that underpins both technology and human decision-making.
🌱 Technological Development: From Flint to the Cloud
Since ancient times, people have been testing: whether fire burns, whether a spear flies, or whether a structure holds. Trial and error was a survival strategy. Today, testing is highly specialized – yet the core remains the same: gaining understanding through verification, learning through failure, and achieving safety through experience.
Technological milestones mark our progress. Agriculture enabled sedentary living, but also introduced the first division of labor – and with it, new chains of error. Writing preserved knowledge – but also mistakes. The steam engine generated efficiency – and brought new risks. Today, we experience not only progress but especially acceleration. The leap from calculators to AI systems that influence legal decisions or diagnose diseases took place within just a few decades. And the next quantum leap is already on the horizon: through quantum computers, biotechnology, and interstellar communication.
The systems of tomorrow will be scalable, resilient, and dynamic. Yet they are only as effective as the culture in which they are developed. Cloud Native is not a buzzword. It is a paradigm shift: moving away from monolithic structures towards responsive, modular, and self-regulating environments. But technology alone is not enough. If testing is not an integral part of the corporate culture – no attitude, no dialogue, no collaboration – quality becomes a matter of chance.
That is why we need leadership that listens, teams that trust, and processes that do not demand quality, but enable it. We stand at the threshold of technical self-referentiality: machines creating machines, systems evaluating other systems – and we are challenged to understand and shape this phenomenon.
Testing becomes the key to ensuring that technology serves humanity – and not the other way around. It creates orientation where automation makes decisions and prevents technical progress from becoming an end in itself. It preserves humanity in the digital transformation. Thus, testing has become a cultural competence. One that asks questions – especially when no one else does. One that understands: the faster progress is made, the clearer the purpose must be.
⚙ Test Architecture & Data: Structures That Support
The more complex our systems become, the more crucial it is to understand architecture not merely as a blueprint, but as a communication model. A good test architecture is like a city map: it shows the routes, obstacles, and connections, and it must remain legible. Only then does it come to life instead of becoming a bottleneck in times of change.
When roles blur in projects and responsibilities overlap, test architecture becomes the guiding framework – both technically and in human terms. It provides stability in a dynamic field. It creates consistency in distributed teams, promotes reusability, enables governance, and prevents chaos from starting at the code level.
Closely linked to this is the issue of test data. In times of GDPR, globally distributed teams, and AI-driven systems, it is no longer sufficient to work with fixed data sets. What we need are data landscapes that are flexible, automated, and secure all at once. They must be scalable, traceable, anonymized, and testable – and at the same time suitable for training, validation, and audits.
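To make the idea of anonymized yet repeatable test data concrete, here is a minimal sketch (all field names and data are hypothetical, not from the talk). It pseudonymizes sensitive fields with a stable hash, so the same input always yields the same pseudonym – which keeps test runs deterministic and referential integrity intact across data sets:

```python
import hashlib

def pseudonymize(record: dict, sensitive_fields: tuple = ("name", "email")) -> dict:
    """Return a copy of the record with sensitive fields replaced by
    stable pseudonyms; non-sensitive fields are kept unchanged."""
    anonymized = dict(record)
    for field in sensitive_fields:
        if field in anonymized:
            digest = hashlib.sha256(str(anonymized[field]).encode()).hexdigest()[:12]
            anonymized[field] = f"{field}_{digest}"
    return anonymized

# Hypothetical customer record used as test data
original = {"name": "Alice Example", "email": "alice@example.com", "order_total": 42.50}
masked = pseudonymize(original)
print(masked["name"])         # stable pseudonym instead of personal data
print(masked["order_total"])  # non-sensitive fields survive as-is
```

Note that deterministic hashing is pseudonymization, not full anonymization: in production, a per-environment salt or a dedicated masking tool would be needed to resist re-identification.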
The combination of intelligent architecture and a dynamic data strategy forms the foundation of modern quality assurance. It is more than technology: it is an expression of our commitment to responsibility. Those who invest here not only gain time – they also earn trust.
💡 Automation & Delivery: Speed with Purpose
Automation means more than simply being faster. It means asking which tasks make sense to automate – and which are better left to humans. Speed in itself is not a value; progress is only truly progress if accompanied by meaningfulness.
Continuous Integration, Delivery, and Testing – all of these require not just tools, but responsibility. Every automated test case carries the risk of an unnoticed error. And the higher the degree of automation, the greater the risk that errors will multiply systematically if human involvement is lacking.
Test automation is no longer optional. It is essential for shortening release cycles, detecting errors early, and scaling quality. At the same time, it must not become an end in itself: the true benefit lies not only in saving time, but in the repeatability, reliability, and transparency of processes.
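The repeatability argument can be illustrated with a small, hypothetical regression check (the business rule and test cases are invented for illustration): the same deterministic, table-driven assertions run identically on every commit, which is exactly what makes automation reliable and transparent rather than merely fast:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule under test: percentage discount, validated input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Table-driven regression cases: deterministic inputs and expected outputs,
# re-run unchanged on every build so any behavioral drift surfaces immediately.
CASES = [
    (100.00, 0, 100.00),   # no discount
    (100.00, 25, 75.00),   # regular case
    (19.99, 100, 0.00),    # full discount
]

for price, percent, expected in CASES:
    actual = apply_discount(price, percent)
    assert actual == expected, f"price={price}, percent={percent}: got {actual}, want {expected}"
print("all regression cases passed")
```

In a real pipeline the same pattern would live in a pytest or JUnit suite wired into CI; the point is the structure, not the framework.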
In modern organizations, hybrid teams are emerging: testers, DevOps specialists, SMEs, and business stakeholders working together to ensure quality. Automation must be embedded in this structure to be sustainable. It must be orchestrated, monitored, and continuously improved – it only thrives if it is maintained.
At the same time, it is not self-operating: without regular upkeep, test cases become outdated; without governance, oversight is lost; without strategy, automation descends into chaos. That is why methodological frameworks are necessary to structure automation, make it measurable, and embed it within larger quality objectives.
When understood correctly, automation creates space: for exploratory testing, creative approaches, and a stronger user orientation. It is not a substitute – it is a multiplier of human diligence. Good automation not only creates more efficiency but also builds more trust. It allows us to invest more time where human judgment is indispensable.
🧠 AI & Ethics: The Human at the Center
Artificial Intelligence is revolutionizing our world, but with this transformation comes great responsibility. Today, we’re not just testing functions, but also effects – systems and their consequences. Decisions are increasingly made by algorithms whose behavior we can no longer always explain. This raises critical questions: Who tests these systems? Who is liable when an algorithm discriminates?
The ethics of AI require new ways of thinking. What values shape our models? Who is responsible when an algorithm fails? How do we ensure transparency, fairness, and traceability? What safeguards protect individual dignity?
Initiatives from IBM, ETH Zurich, and UNESCO have made strides in addressing these issues, but more is needed: clear guidelines, ethical testing processes, and heightened awareness among those developing and testing AI. For example, IBM calls for explainable, fair, and robust AI. UNESCO’s global principles include do-no-harm, human-centered control, and data protection. ETH Zurich fosters interdisciplinary collaboration on AI ethics.
Yet, there is no simple solution. As philosopher Michael Sandel warns, algorithmic decisions can not only reproduce human prejudices but also present them as objective truth. This is why testers are needed as an ethical safeguard, serving as a reminder of our responsibilities. We are accountable not only for systems but for the society in which they operate. Responsibility begins at the code and extends to us all – whether we build tools, train models, or design scenarios.
Testing has become a moral discipline. It ensures technology aligns with our core values, identifies potential harms, and, when necessary, has the courage to say no. Even Asimov’s famous robot laws were insufficient. He added a “Zeroth Law” – a robot may not harm humanity. Today, we understand that even this is not enough.
Organizations such as UNESCO, IBM, the EU, and ETH Zurich have identified six central ethical challenges in AI:
Bias and Discrimination – AI can perpetuate or amplify human biases, affecting areas like lending, hiring, or criminal justice.
Transparency and Explainability – Ensuring AI decisions are understandable and traceable to build trust.
Privacy and Data Protection – Safeguarding personal data amid vast amounts of information.
Human Oversight and Responsibility – Clear human supervision and accountability for AI.
Regulation and Guidelines – Ethical guidelines and government regulations are essential.
Shared Responsibility – Ethics requires collaboration among businesses, governments, academia, and society.
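The first of these challenges, bias, can at least be probed quantitatively in a test suite. Here is a minimal sketch (the function name, metric choice, and decision data are illustrative assumptions, not prescribed by any of the bodies above) computing the demographic parity difference – the gap in positive-outcome rates between groups:

```python
def demographic_parity_difference(outcomes: list) -> float:
    """Gap in positive-outcome rates between groups.
    outcomes: (group_label, decision) pairs; values near 0 suggest parity."""
    rates = {}
    for group in {g for g, _ in outcomes}:
        decisions = [d for g, d in outcomes if g == group]
        rates[group] = sum(decisions) / len(decisions)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Hypothetical loan decisions: (applicant group, approved?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),   # group A: 75% approved
    ("B", True), ("B", False), ("B", False), ("B", False), # group B: 25% approved
]
gap = demographic_parity_difference(decisions)
print(f"approval-rate gap: {gap:.2f}")  # 0.50 – a gap this size warrants investigation
```

A single metric is only a smoke test: real fairness audits combine several criteria (equalized odds, calibration) and, above all, human review of how the data was collected.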
These principles aren’t just checkboxes; they represent an attitude that makes them testable – not through tools alone, but through culture. Quality without ethics is efficiency without meaning.
This means we must critically reflect on our ethical testing procedures, adopt diverse perspectives, and consider the broader impact of our tests – on the environment, society, and future generations. In short, quality means not only “it works,” but also “it is responsible.”