
Simulating Services for Effective QA Testing: A Comprehensive Guide

Updated: Dec 12, 2023



Hi! In this post, we'll delve into the fascinating world of QA processes, focusing on emulators, mock services, and why these tools are invaluable.


I'll introduce some terms you might be less familiar with, such as stubs, mocks, emulators, and simulators. Let's clarify the differences between them:


  • Emulator: An emulator is a versatile tool or software that replicates the behavior of real systems, components, or environments. It allows us to conduct testing in conditions closely resembling real-world scenarios. Emulators often prove essential when mimicking hardware, devices, or services that may not be available during testing, facilitating comprehensive scenario testing.

  • Simulator: A simulator, similar to an emulator, replicates aspects of a system or environment. However, it typically focuses on specific functionalities or components, making it ideal for testing complex systems without mimicking every detail.

  • Mock: In the testing context, a mock is a simulated object or component mimicking real behavior. It serves to isolate the code under test from external dependencies like databases, services, or APIs. Mocks allow for controlled testing of specific interactions without engaging actual external components.

  • Stub: A stub is a minimal component or function implementation that offers predefined responses to specific inputs. Stubs are commonly employed in unit testing to isolate the code under examination, providing simplified and predictable behavior for external dependencies. They remain static and don't simulate the dynamic behavior of real systems or components.


In summary, emulators and simulators recreate real-world conditions or systems, mocks isolate and control interactions with external components, and stubs furnish predefined responses for particular inputs in unit testing. Each of these tools serves a distinct purpose in software testing and quality assurance.
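To make the stub/mock distinction concrete, here is a minimal Python sketch using the standard library's `unittest.mock` (the `PaymentGateway`-style names and the `checkout` function are hypothetical, invented just for this illustration). A stub only supplies a canned response so the code under test can run; a mock additionally lets us verify how the dependency was called.

```python
from unittest.mock import Mock

# Hypothetical code under test: charges a gateway and returns a receipt.
def checkout(gateway, amount):
    response = gateway.charge(amount)
    return {"ok": response["status"] == "approved"}

# Used as a stub: only provides a canned response so checkout() runs in isolation.
stub_gateway = Mock()
stub_gateway.charge.return_value = {"status": "approved"}
assert checkout(stub_gateway, 100) == {"ok": True}

# Used as a mock: same canned response, but we also verify HOW it was called.
mock_gateway = Mock()
mock_gateway.charge.return_value = {"status": "declined"}
assert checkout(mock_gateway, 50) == {"ok": False}
mock_gateway.charge.assert_called_once_with(50)  # interaction check
```

The same `Mock` object plays both roles; what makes it a "mock" rather than a "stub" is the final interaction assertion.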


Now, let's explore why simulating services is crucial. Often, we encounter scenarios where a specific device or service is unavailable in our testing environment, causing limitations. The opposite can also happen: you want to test a negative scenario, but the service cannot be temporarily disabled.


This is where emulators come to the rescue, enabling comprehensive end-to-end and high-level integration tests. Throughout this post, I'll use the term "emulator" to encompass stubs, mocks, mocked services, and emulators. Emulators provide greater testing flexibility, allowing QA professionals to assess service connections under various conditions.


In automated environments, where stability and repeatability are paramount, using emulators is a wise choice, while reserving more comprehensive checks for the manual testing team.


Why Simulating Services Matters


Whether a service is unavailable in our environment or we rely on an always-available production service, testing is limited. In the former scenario, we can only use simple, static, stubbed responses, while the latter offers no opportunity to test negative scenarios.


You might wonder why static mocks or stubs fall short. Let me clarify. Developers often create mock responses within the code for testing, typically representing ideal or "good" responses. These mocks may not cover the diverse scenarios and behaviors that real-world services exhibit.


Having a service emulator or simulator at a QA professional's disposal allows the creation of dynamic and versatile testing environments, including negative scenarios.


This flexibility enables comprehensive testing that closely mirrors real-world conditions, ensuring the application's robustness and reliability.


In essence, relying solely on code-based mock responses can limit testing possibilities, whereas dedicated service emulators offer breadth and depth.
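As an illustration of that difference, here is a minimal sketch (all names are hypothetical): a static stub always returns the happy path, while a small configurable emulator lets a tester switch in error responses, slow responses, and malformed payloads at runtime, without touching the code under test.

```python
import time

# Static stub: developers typically hard-code the "good" response.
def static_user_service(user_id):
    return {"id": user_id, "name": "Alice", "status": 200}

# Configurable emulator: QA can inject failure modes without code changes.
class UserServiceEmulator:
    def __init__(self):
        self.mode = "ok"     # one of: "ok", "error", "malformed"
        self.latency = 0.0   # extra seconds before responding

    def get_user(self, user_id):
        time.sleep(self.latency)  # simulate a slow or timing-sensitive service
        if self.mode == "error":
            return {"status": 503, "error": "service unavailable"}
        if self.mode == "malformed":
            return {"status": 200}  # missing the fields the client expects
        return {"id": user_id, "name": "Alice", "status": 200}

emulator = UserServiceEmulator()
assert emulator.get_user(1)["status"] == 200

# Negative scenario: impossible with the static stub above.
emulator.mode = "error"
assert emulator.get_user(1)["status"] == 503
```

The static stub can only ever answer one way; the emulator turns "the service misbehaves" into a one-line configuration change during a test run.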


To reinforce this, let's consider some real-world examples.


We were testing a payment processing engine that was linked to a mainframe via Message Queues, one per cluster of countries. Each MQ was a live system, as was the mainframe, so we could not kill it without risking problems at a global level. However, if we had had a simulated MQ, here is what we could have done:

1. Test Failover Scenarios: We could simulate what would happen if the MQ system went down unexpectedly. This is crucial to ensure that our payment processing engine can gracefully handle disruptions without compromising data integrity or customer experience.


2. Scenario-Based Testing: With an emulator, we could create various testing scenarios, such as delayed message processing, error responses, or high message traffic, to evaluate how our application behaves under different conditions.


3. Isolation of Testing: By using an emulator, we could isolate our testing environment from the live production system, reducing the risk of accidental disruptions to real payment processing operations.


4. Controlled Testing: Emulators give us control over the timing and conditions of tests. This level of control is essential for thorough testing of how our system responds to different situations.


5. Reproducibility: We could easily reproduce specific scenarios or edge cases for testing, ensuring that our payment processing engine performs consistently and reliably under various conditions.
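Assuming a simple in-memory queue stands in for the real broker (the class and scenario names here are hypothetical sketches, not the actual MQ API), the failover and error scenarios above could be driven like this:

```python
import queue

# Hypothetical in-memory MQ emulator with injectable failure modes.
class EmulatedMQ:
    def __init__(self):
        self._queue = queue.Queue()
        self.down = False             # failover scenario: broker outage
        self.fail_next_send = False   # error-response scenario

    def send(self, message):
        if self.down:
            raise ConnectionError("MQ broker unreachable")
        if self.fail_next_send:
            self.fail_next_send = False
            raise RuntimeError("broker rejected message")
        self._queue.put(message)

    def receive(self):
        if self.down:
            raise ConnectionError("MQ broker unreachable")
        return self._queue.get_nowait()

# Failover scenario: the engine should handle a broker outage gracefully.
mq = EmulatedMQ()
mq.send({"payment_id": 1, "amount": 100})
mq.down = True
try:
    mq.send({"payment_id": 2, "amount": 200})
except ConnectionError:
    pass  # the real engine would retry or route to a fallback queue
mq.down = False
assert mq.receive()["payment_id"] == 1  # earlier message survived the outage
```

Because every failure is a flag flip, each scenario is trivially reproducible, which is exactly the controlled, repeatable testing the list above describes.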


In a critical system like a payment processing engine for a bank, where downtime and errors can have significant financial implications, having an emulator for essential components like the MQ system becomes an essential part of the testing toolkit. It allows QAs to proactively identify and address potential issues, ensuring the system's stability and resilience in real-world scenarios, even when testing conditions aren't ideal.


The QA Professional's Dilemma


As I mentioned before, the problem is often a lack of access to the services or devices that users have (a real-life setup). Usually this is caused by a lack of infrastructure to support the myriad of devices in the client's setup, or by services that do not provide an acceptance counterpart or any other non-production environment.


How this challenge affects both manual and automated testing processes


In simple words, manual QAs can't test the application as a whole, which means many areas go uncovered and become potential hiding places for bugs.

I can share with you another life example.


In our network lab we had around twenty devices. However, clients have labs with up to a thousand different devices. This is hardly possible to replicate with real devices due to cost (maintenance, electricity...). As a result, scalability and performance problems kept popping up unnoticed by the QA team, simply because our twenty devices were not "challenging" enough.


I faced a similar problem with automation. Since I lacked access to devices and had to borrow them from the manual QA team, the support team, or even the PO (who was using them for live demos), I was given strict time windows and boundaries on when and what I could run with automation scripts, and on which devices (so as not to corrupt someone's data). Sometimes specific scenarios could not be run at all because I was lacking equipment. How reliable does this sound? "Sorry, I didn't run tests for devices XYZ and ZYX because we faced downtime, and it wasn't caused by the device or the test harness; somebody just borrowed them. So if users come back to complain, I won't be shocked."

Once, I wasn't even notified that a device was in use for a live demo for a big client. It was a device designed to modify incoming video input, letting the user change color channels, saturation, hue, and so on. And I started the automated tests! It was a tragedy: the device suddenly acted as if possessed (boundary value testing), and video that should have been crystal clear dropped to red, then blue, then green, or the saturation suddenly increased so much that people turned orange. (On the other hand... was it that bad? It works, and you can do goofy stuff with it.)


Challenges faced by developers in providing service emulators


Usually this is not considered a crucial task by the dev team, especially since during the sprint they are busy developing new functionalities or fixing bugs. Simulators and emulators matter more to the QA team, so the QA team has to keep repeating: "we don't have this device, we need it mocked." As for the challenge itself: whenever the service changes, the emulator has to be updated. Accept it as fact that this constant maintenance is a pain in the lower back. Maintenance and updates are often hindered by a mixture of missing documentation and the emulator having been hacked together as quickly as possible.


Real-World Benefits


Imagine you are a QA on the engineering team at Cyberdyne (the company that developed the terminators and Skynet in James Cameron's 1984 movie "The Terminator", which sparked a franchise). You can spawn a simulator on a whim, because the development team are code virtuosos who build simulators during their lunch break.


Testing doomsday scenarios and critical operations matters most in a complex environment like an application controlling killer machines.

As a QA:

1. Disaster Recovery Testing: You could simulate the worst-case scenario, such as a complete outage of the control system for a specific country or group of countries, to ensure that Skynet could recover gracefully when the system was restored: no data corruption, smooth resumption of dropped work, and so on.


2. Error Handling: Testing how your application responds when an error occurs during an active process is crucial. Emulation allowed you to deliberately introduce errors and assess your system's ability to handle them without data corruption or loss (imagine Skynet having a brain lag at the crucial moment it got its hands on John Connor: an exception causing a total global freeze, with nobody willing to hit the restart button...).


3. Fallback Procedures: In scenarios where Skynet systems were down, you could evaluate the effectiveness of your fallback procedures. This includes identifying any manual interventions required and validating their correctness.


4. Concurrency and Load Testing: Emulation enabled you to simulate high message traffic scenarios to assess your system's performance and scalability, ensuring that it could handle large volumes of drones during peak periods.


5. Boundary Testing: You could test boundary conditions, such as processing a high volume of operations just before a power outage, to assess how your system manages such situations.


6. Audit and Compliance Testing: In the AI sector, audit and compliance requirements are stringent. Emulated testing allowed you to ensure that your system logs all necessary information and adheres to regulatory requirements even during disruptions.


Conclusion


By taking this proactive approach to testing and using emulators effectively, you will be able to identify potential weaknesses, strengthen your AUT's resilience, and ensure that it can reliably handle various operational scenarios. This level of testing is vital for maintaining integrity and availability, and not only in financial systems, where errors can have far-reaching consequences.


In the next post, I will show you how to craft a simple mocked service for user CRUD operations using the Python Flask library, accessible from Postman.


So stay tuned and happy testing!

