In Conversation with Ben - QA Tester

A Q&A With QA
March 18, 2026

To start us off, tell us a little bit about yourself, your background, and the work you do at ClearSky?

My professional background actually started in the world of video game QA, where I worked on everything from Nintendo Switch and Xbox titles to PC and mobile games. I made the leap into software testing when I joined ClearSky almost three years ago.
As a QA tester at ClearSky, my day-to-day is incredibly varied. I’m currently integrated into several different squads, which keeps things fresh, and I can be working on anything from writing test cases and documentation to automating complex mobile applications. Moving from games to software was a big jump, but my core skill of breaking things professionally has been the cornerstone of my career and helped me build new skills such as automation.

Rather than being a final check at the end of a project, QA is deeply integrated into our work. How do you collaborate day-to-day with developers and product owners to ensure quality is baked into the code from the start?

As a team, we try to incorporate shift-left practices where we can and get QA involved as early as possible. By avoiding using QA as a ‘gatekeeper’ and moving towards an integrated partnership at every stage of the SDLC (Software Development Life Cycle), we can move fast without breaking things and ensure high quality at every step. It also saves the company time and money.

For a typical project, we’ll jump right in at the discovery phase, familiarising ourselves with the platform and gaining a full understanding of the upcoming work. By understanding the "why" behind the work, we can spot potential logic flaws before a single line of code is written. We’ll then check in with the design team at the prototype phase. Here, we verify that the proposed UI/UX actually meets the functional requirements and remains testable from a technical perspective. From there, we set up a feedback loop with the developers for faster delivery and adaptability. With regular check-ins, we can keep the quality high without compromising timescale, budget and effort.

Very rarely is our approach for one project the same as for another, so as a QA team we have set up an adaptable standard way of working so that our testing approach can be as flexible as required. By maintaining this high level of transparency and communication with product owners and developers alike, we ensure that quality is a shared responsibility, keeping projects on schedule, on budget, and above standard.

How do you define 'quality' within QA?

Quality is a subjective term, but I ground my definition in the "Zero Defects Attitude," a concept championed by industry pioneers like Dr. Kennedy and Phil Crosby. It feels against the grain to say “zero defects” when, as software developers and testers, we know that is practically impossible. However, this mindset is less about achieving literal perfection and more about aligning our standards with the client’s perspective. From a user's point of view, the answer to "How many defects are too many?" is almost always "One."

By adopting a Zero Defects Attitude, we shift our focus from merely finding bugs to preventing them. This involves rigorous documentation, clear acceptance criteria, and a refusal to settle for “good enough”, meaning our target is always the ceiling, never the floor. We don’t expect zero defects to be literally achievable, but by aiming for zero, the quality will always be high as a result. Ultimately, our goal is to ensure that the final product doesn't just "work," but excels in reliability, security, and user satisfaction, guaranteeing we meet and exceed our clients' high expectations.

What are your favourite moments within a project?

I have a bit of a reputation for breaking things in weird and wonderful ways, and I have to admit I quite enjoy breaking them. There is a certain creative satisfaction in finding a path through an application that no one else considered, effectively "stress-testing" the logic of the system. The whole team gets a good laugh when you find bugs like that, and the look of bewilderment on a developer’s face as I demonstrate a particularly ridiculous edge case is always entertaining. But there’s a serious side to the fun: for every strange bug I find, there is a real-world user out there who would have eventually stumbled upon it. Catching those "how did you even think to do that?" bugs before they reach a customer is incredibly rewarding. It’s that "aha!" moment of discovery (finding the needle in the haystack) that makes the job feel like a high-stakes puzzle I get to solve every single day.

How is the emergence of AI and new technology affecting QA? Is it a different challenge or are there consistencies with more traditional tech?

The emergence of AI is fundamentally shifting the QA landscape from deterministic systems (where input A always equals output B) to probabilistic systems. Testing a tool that might have a 5% margin of error requires a complete rethink of what "quality" looks like. We can no longer rely solely on binary pass/fail results; instead, we have to hold a higher level of accountability over AI-generated outputs, ensuring a "human-in-the-loop" focus to maintain our standards.
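
That shift can be made concrete in how acceptance checks are written. The sketch below is a hypothetical illustration (the component, names, and 5% error budget are assumptions, not any tool we use): instead of demanding a binary pass on every single input, the test samples many cases and passes if accuracy clears an agreed threshold.

```python
import random

random.seed(7)

def flaky_classifier(x):
    # Stand-in for a probabilistic component (e.g. an ML model or LLM):
    # it returns the right answer roughly 99% of the time. Hypothetical.
    return (x % 2 == 0) if random.random() < 0.99 else (x % 2 != 0)

def meets_accuracy_bar(component, cases, threshold=0.95):
    # Probabilistic acceptance check: rather than failing on the first
    # wrong answer, pass if accuracy over a large sample meets the
    # agreed threshold (here, a 5% error budget).
    correct = sum(1 for x, expected in cases if component(x) == expected)
    return correct / len(cases) >= threshold

cases = [(i, i % 2 == 0) for i in range(2000)]
print(meets_accuracy_bar(flaky_classifier, cases))
```

The "human-in-the-loop" part is deciding what that threshold should be for a given product; the test only holds the system to whatever bar the team has agreed with the client.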

Despite these new challenges, AI is proving to be a massive asset in several areas, such as automation and documentation. In traditional automation, a single changed button ID previously had the potential to break hundreds of tests; now AI can ‘self-heal’ automated tests, reducing maintenance time. One of our go-to tools for test case management, TestRail, now has an AI feature that can take in user stories and generate test cases and steps in the same style as previous cases written by our QA team.
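
To show the idea behind self-healing (this is a minimal sketch of the concept, not any vendor's actual implementation; the function names and the toy DOM are invented): a locator keeps fallback attributes captured when the test was written, so if the primary ID is renamed, the lookup can "heal" by matching on the remaining attributes instead of failing outright.

```python
def find_element(dom, primary_id, fallback_attrs):
    # First try the primary locator (the element ID).
    for el in dom:
        if el.get("id") == primary_id:
            return el, "matched by id"
    # Primary locator broken: fall back to the other attributes captured
    # at authoring time, and only "heal" if the match is unambiguous.
    candidates = [el for el in dom
                  if all(el.get(k) == v for k, v in fallback_attrs.items())]
    if len(candidates) == 1:
        return candidates[0], "healed via fallback attributes"
    raise LookupError("element not found; test is genuinely broken")

dom = [
    {"id": "btn-submit-v2", "text": "Submit", "role": "button"},  # id renamed
    {"id": "btn-cancel", "text": "Cancel", "role": "button"},
]
el, how = find_element(dom, "btn-submit", {"text": "Submit", "role": "button"})
print(how)  # -> healed via fallback attributes
```

Real tools layer much more on top (similarity scoring, history of past locators, human review of heals), but the maintenance saving comes from exactly this fallback step.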

Rather than AI replacing traditional tech, we are using it alongside or integrated with our go-to test tools for efficiency and consistency. Whether we are testing a legacy web app or a cutting-edge LLM integration, the core principles of verification and validation remain the same. We are simply adding more powerful, intelligent tools to our kit to keep up with the increasing speed and complexity of modern development.

What specific trends or technologies are you most interested in right now? Is there anything that you think will be game changing to QA?

The most significant hurdle in modern QA is the fragmentation of automation tools. Currently we have to split our efforts, using Playwright for web browsers and Appium for mobile devices. My biggest area of interest right now is the move toward unified, cross-platform automation suites that can handle web, mobile, and desktop all under one roof.

The real game-changer I’m watching is the development of AI vision-based testing. Traditional automation relies on locators (specific pieces of code like IDs or XPaths) to find elements on a screen; if the code changes, the test breaks. New AI technology, however, allows automation tools to "see" the interface just like a human does. This means a single test suite could potentially be applied to a web app, a mobile app, and even a desktop application simultaneously. That would not only reduce the time taken to write the automation, but also make the tests more robust as the platform continues to evolve, and allow us to scale our tests to different operating systems and screen resolutions efficiently. As these AI vision tools mature, I believe we will see a shift away from platform-specific scripts toward a more holistic approach to quality.
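
The unification being described is essentially an abstraction over how an element is found. The sketch below is hypothetical (the stub classes and strings are invented for illustration, not a real framework API): the test step expresses intent ("tap the element labelled Submit"), and each backend decides whether to resolve that by a DOM locator or by how the element looks on screen.

```python
class WebDriverStub:
    # Locator-based backend: resolves the label to a code-level selector.
    def find_by_label(self, label):
        return f"css=button:has-text('{label}')"

class VisionDriverStub:
    # Vision-based backend: resolves the label by on-screen appearance,
    # so it is indifferent to IDs, XPaths, or platform.
    def find_by_label(self, label):
        return f"vision-match('{label}')"

def tap(driver, label):
    # One test step, many backends: the step never hard-codes a locator,
    # so the same suite can run against web, mobile, or desktop.
    target = driver.find_by_label(label)
    return f"tap {target}"

print(tap(WebDriverStub(), "Submit"))
print(tap(VisionDriverStub(), "Submit"))
```

The robustness gain falls out of the design: when a button's ID changes, only the locator-based backend notices; the test itself, written against intent, stays valid.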

Finally, is there a common myth about the role of QA in the tech industry that you’d like to debunk?

The most common myth, and one even developers sometimes believe, is that a QA’s entire job is simply finding bugs. In reality, bug hunting is only the most visible and reactive tip of the iceberg. I would estimate that roughly 95% of effective QA happens before a developer even begins to code. Our role involves deep-level test planning, architecting automation frameworks, analysing requirements for logical gaps, and collaborating with stakeholders to define "done." When QA is performing at its best, it often goes completely unnoticed by the end user. This is because a successful QA process prevents the "fire drills" and major failures that usually grab headlines. We are the silent engine of efficiency, saving the team immense amounts of time, money, and stress by ensuring the path is clear before the journey even begins. I wish more people understood that we aren't just here to find errors; we are here to build the foundations that prevent them.