Hi! One of the hottest topics for business people in the IT industry is "test parallelism". Apparently it is meant to be a miracle solution to any problem, ranging from flaky tests to frequent timeouts. However, with parallel testing power comes great responsibility (thanks, Uncle Ben). So let us dive a bit deeper into the subject.
Considerations:
If you really have to run tests in parallel, you need to consider a few things.
Test Granularity
If your tests share a resource, you might end up with failing tests for no apparent reason.
For example, suppose you have a few tests related to RBAC (Role-Based Access Control), and some of them manipulate the same user's roles to test that user's access levels.
Run in sequence, there should be no problem: we change the user's role, try to log in as that user, and expect the login to fail. Now consider running those tests in parallel. While Test A switches the user's role to, let's say, read-only, Test B removes all roles/groups from the same user to test the negative scenario where a user without groups cannot access the AUT.
Both tests perform this setup almost simultaneously. Here's where the issue arises: Test A tries to access the AUT and fails because the role is missing, preventing the login, while Test B passes. On the second attempt Test A passes and Test B fails. That is not only an example of flaky tests but also of tests that are not granular enough.
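One way to make such tests granular is to give each test its own throwaway user instead of mutating a shared one. Here is a minimal pytest sketch, assuming a hypothetical api_client fixture with create_user, set_roles, can_access and delete_user helpers:
```python
import uuid

import pytest


@pytest.fixture
def isolated_user(api_client):
    """Create a dedicated user per test so role changes cannot collide."""
    username = f"test-user-{uuid.uuid4().hex[:8]}"  # unique name per test
    user = api_client.create_user(username)         # hypothetical helper
    yield user
    api_client.delete_user(user.id)                 # clean up even if the test fails


def test_read_only_user_can_access(isolated_user, api_client):
    api_client.set_roles(isolated_user.id, ["read-only"])  # hypothetical helper
    assert api_client.can_access(isolated_user.id)


def test_user_without_groups_cannot_access(isolated_user, api_client):
    api_client.set_roles(isolated_user.id, [])  # no roles at all
    assert not api_client.can_access(isolated_user.id)
```
With a dedicated user per test, Test A and Test B can run in any order, in parallel, without stepping on each other.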
AUT Capabilities
Ask yourself the question: 'Is my AUT (Application Under Test) capable of supporting parallel testing?'
Imagine you have a test harness that runs in parallel; however, your AUT requires login and does not support multiple sessions per user.
What to do? One option is to ask the developers to disable the limitation on the dev/test and acceptance environments. Option two is to simply add a new 'bot' user. Of course, the second option comes with increased maintenance, and the tests will also run a bit longer, since each login operation takes some time too.
This particular problem can be mitigated fairly easily. But what if our AUT's performance is low? Multiple sessions will degrade performance even further, leading to very annoying timeouts.
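If you go the 'bot user' route, pytest-xdist makes it easy to give each worker its own account: the plugin provides a worker_id fixture ('gw0', 'gw1', ..., or 'master' when run without -n). A sketch, assuming accounts like bot-user-gw0 are pre-provisioned in the test environment:
```python
import pytest


@pytest.fixture(scope="session")
def bot_credentials(worker_id):
    """Return per-worker login credentials to dodge single-session-per-user limits."""
    # worker_id is 'master' when running without -n, otherwise 'gw0', 'gw1', ...
    # Assumes accounts bot-user-master, bot-user-gw0, ... already exist.
    return {"username": f"bot-user-{worker_id}", "password": "change-me"}
```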
Leaking Implementation
The third thing some AQA engineers might forget about is memory leaks in our own code. A memory leak happens when resources are not freed correctly.
Can it be avoided? Sure. But that does not change the fact that it happens, and it may surface in longer-running tests while remaining unnoticeable in shorter runs.
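In Selenium-based suites the classic example is a browser that never gets closed when a test fails halfway through; each parallel worker then accumulates orphaned processes. A yield fixture guarantees cleanup either way:
```python
import pytest
from selenium import webdriver


@pytest.fixture
def driver():
    """Provide a browser and always quit it, even when the test body raises."""
    driver = webdriver.Chrome()
    yield driver
    driver.quit()  # runs on pass and on failure alike, so no leaked browsers
```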
Implementation of parallelism with pytest:
Now, let's explore how to implement parallel testing using pytest.
To parallelize tests we will be using pytest-xdist. You can follow these steps:
Install pytest-xdist:
Install the pytest-xdist plugin using:
```bash
pip install pytest-xdist
```
Run Tests in Parallel:
Modify the test run command (in our case, hidden in the runner script) to include the -n option and specify the number of workers:
```bash
pytest -n <number_of_workers>
```
For example, if you want to run tests with four parallel workers:
```bash
pytest -n 4
```
This will distribute your tests across the specified number of workers, running them in parallel.
Considerations (again!):
Ensure that your tests can run concurrently without interference. If your tests modify shared resources (e.g., a shared database), you might need to set up separate instances for each worker or take other measures to avoid conflicts.
Check the pytest-xdist documentation for additional options and configurations.
Here's an example of how your test command might look with parallelization:
```bash
pytest -n auto
```
The auto option will automatically detect the number of available CPUs and use them for parallelization.
Pytest code changes
I made a slight modification to BasePage:
```python
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


class BasePage:
    def __init__(self, base_web_test):
        """Initialize the BasePage with the browser driver taken from the test class."""
        self.driver = base_web_test.driver  # Get the driver from the base_web_test fixture

    def fluent_click(self, locator):
        """Perform a fluent click using WebDriverWait."""
        element = WebDriverWait(self.driver, 10).until(
            EC.element_to_be_clickable(locator)
        )
        element.click()
```
I have added a new test (navigation to the Forms page) and a new Page Object class (the Forms page), which for now is an almost exact copy of the Elements page.
```python
class TestNavigationPage(BaseWebTest):
    """Test class for navigating to the Elements and Forms pages."""

    def test_navigate_to_elements_page(self):
        home_page = ToolsQAHomePage(self)  # Create an instance of the ToolsQAHomePage
        home_page.navigate_to_home_page()  # Navigate to the home page
        elements_page = home_page.navigate_to_elements_page()  # Click the "Elements" link
        assert elements_page.is_elements_page_loaded()  # Verify that the Elements page is loaded

    def test_navigate_to_forms_page(self):
        home_page = ToolsQAHomePage(self)  # Create an instance of the ToolsQAHomePage
        home_page.navigate_to_home_page()  # Navigate to the home page
        forms_page = home_page.navigate_to_forms_page()  # Click the "Forms" link
        assert forms_page.is_forms_page_loaded()  # Verify that the Forms page is loaded
```
And here is the runner code; note the '-n' flag.
```python
import pytest

if __name__ == '__main__':
    # Set up pytest arguments
    args = [
        '-v',    # Verbose output
        '-s',    # Print output to console
        '-n=2',  # Run tests in parallel using 2 workers
        # '--html=report.html',     # HTML test report
        # '--junitxml=report.xml',  # JUnit XML test report
        'tests/',  # Path to the test directory
    ]
    # Call pytest with the arguments
    pytest.main(args)
```
Output
without XDIST:
```bash
plugins: xdist-3.5.0
collecting ... collected 2 items
tests/test_navigate_elements_page.py::TestNavigationPage::test_navigate_to_elements_page PASSED
tests/test_navigate_elements_page.py::TestNavigationPage::test_navigate_to_forms_page PASSED
======================= 2 passed in 5.52s ==========================
```
with XDIST:
```bash
plugins: xdist-3.5.0
created: 2/2 workers
2 workers [2 items]
scheduling tests via LoadScheduling
tests/test_navigate_elements_page.py::TestNavigationPage::test_navigate_to_forms_page
tests/test_navigate_elements_page.py::TestNavigationPage::test_navigate_to_elements_page
[gw0] PASSED tests/test_navigate_elements_page.py::TestNavigationPage::test_navigate_to_elements_page
[gw1] PASSED tests/test_navigate_elements_page.py::TestNavigationPage::test_navigate_to_forms_page
====================== 2 passed in 11.35s ==============================
```
Note that despite the parallel run, the sequential run was quicker. With only two short tests, the overhead of starting the workers (and one browser per worker) outweighs the time saved by parallelism.
Summary:
Parallelizing tests can introduce challenges related to resource usage, including memory consumption, dependencies, and the capabilities of both the System Under Test (SUT) and the machine running the tests. Here are a few additional tips:
Memory Leaks:
Keep an eye on memory usage during parallel test execution. Running tests simultaneously can uncover memory leaks that remain unnoticed in sequential runs due to different resource usage patterns.
Monitor memory consumption and investigate any significant increases or issues. Tools like psutil in Python can help with monitoring resource usage.
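For instance, a tiny helper built on psutil can log the resident memory of the test process before and after a run:
```python
import os

import psutil


def log_memory_usage(label):
    """Print the resident memory (RSS) of the current process."""
    rss_mb = psutil.Process(os.getpid()).memory_info().rss / (1024 * 1024)
    print(f"{label}: {rss_mb:.1f} MiB RSS")
```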
Dependency Isolation:
Ensure that each test is independent and does not rely on shared resources or state. Tests should be able to run in isolation without interfering with each other.
If your tests interact with external services or databases, consider using separate instances for each test worker to avoid conflicts.
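As with the bot users above, the worker_id fixture from pytest-xdist is handy here too. A sketch that points each worker at its own database, assuming databases named test_db_gw0, test_db_gw1, ... already exist:
```python
import pytest


@pytest.fixture(scope="session")
def database_url(worker_id):
    """Give every xdist worker its own database to avoid cross-worker conflicts."""
    name = "test_db" if worker_id == "master" else f"test_db_{worker_id}"
    return f"postgresql://localhost/{name}"  # hypothetical connection string
```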
SUT and AUT Capabilities:
Consider the capabilities of the SUT and AUT. For example, if your application or service has limitations on the number of concurrent connections, ensure that your parallelization strategy does not exceed these limits.
Test your SUT and AUT under conditions that simulate the production environment as closely as possible.
Machine Resources:
Ensure that the machine running the tests has sufficient resources (CPU, memory, etc.) to handle the parallel workload.
Consider distributing the tests across multiple machines if a single machine is not sufficient. Tools like pytest-xdist support distributed testing.
Test Environment Setup:
Make sure that your test environment setup and teardown procedures are robust and can handle concurrent test execution.
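pytest's built-in tmp_path fixture is a good example of concurrency-safe setup: every test gets its own fresh directory, so parallel workers never write to the same files:
```python
def test_export_report(tmp_path):
    """tmp_path is unique per test, so parallel workers cannot clash on disk."""
    report_file = tmp_path / "report.txt"
    report_file.write_text("ok")
    assert report_file.read_text() == "ok"
```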
Retry Mechanism:
Introduce a retry mechanism for tests that might intermittently fail due to parallel execution. Sometimes, a test might fail in a parallel environment but pass when run individually. This can be caused by either the dependency problems I highlighted earlier or random issues like network problems.
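One common way to do this is the pytest-rerunfailures plugin (pip install pytest-rerunfailures), which adds a --reruns option. With our runner from earlier, it could look like this:
```python
import pytest

if __name__ == '__main__':
    args = [
        '-n=2',              # run tests in parallel using 2 workers
        '--reruns=2',        # retry each failing test up to 2 times
        '--reruns-delay=1',  # wait 1 second between attempts
        'tests/',            # path to the test directory
    ]
    pytest.main(args)
```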
Monitoring and Reporting:
Use monitoring tools to keep track of resource usage, test progress, and potential issues during parallel test execution. For example, you could utilize the psutil library to monitor CPU usage. If running locally, you can also monitor CPU usage and processes using Task Manager.
Configure comprehensive reporting to capture results and identify any tests that may have failed due to parallelization-related issues.
Always conduct thorough testing in a controlled environment to validate that parallel execution does not introduce unexpected behaviors or issues. Regularly review and adjust your testing strategy based on the evolving needs of your project.
Happy testing!