In this post, I would like to show how to create multiple instances of JMeter servers/slaves on demand using Docker Compose. I assume you have some idea of using Docker for JMeter distributed load testing. If not, please read this post first.
Docker Compose:
As part of our application design, we might have a web server, a few app servers and a db server, with a different Docker image for each of them. To bring the application up, we need to run all of these containers and place them on a network (or link them) so that they can communicate with each other.
Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, we describe the entire multi-container application in a single YAML file, then spin it up with a single command.
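As a quick illustration (not the file we will use for JMeter), the web and db services mentioned above could be described roughly like this, with hypothetical image names:

version: '2'
services:
  web:
    image: my-web-image   # hypothetical image name
    ports:
      - "8080:80"
  db:
    image: my-db-image    # hypothetical image name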
Installing Docker Compose:
Check this link for detailed steps to install Docker Compose.
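As a quick reference, Compose can typically be installed by downloading the standalone binary (a sketch; replace 1.x.x with the release you need and verify the URL against the official docs):

sudo curl -L "https://github.com/docker/compose/releases/download/1.x.x/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version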
Compose File:
This is a YAML file in which we describe how we want our Docker containers to run and connect to each other. The entire application and its network details are defined in it. The default path for a Compose file is
./docker-compose.yml
To run a JMeter distributed load test, we need 1 master and N slaves. Using the Compose file reference, we create a compose file as shown below.
version: '2'
services:
  master:
    image: vinsdocker/jmmaster
    container_name: master
    tty: true
    hostname: master
    networks:
      - vins
  slave:
    image: vinsdocker/jmserver
    tty: true
    networks:
      - vins
networks:
  vins:
    driver: bridge
- master:
- We reuse the same Docker image we created in this post for the master container. We also create a new network to which the master and all the slaves will be connected.
- slave:
- We reuse the Docker image for the JMeter server. We do not set a container_name or hostname for the slave because we will create more than one slave container, so fixed names cannot be used in the compose file. The docker-compose tool itself assigns a name in this format – <projectname>_<servicename>_<index>.
With this docker-compose file we have defined the architecture needed to run the JMeter test. Now let's see it in action!
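As an optional sanity check before bringing anything up, the compose file can be validated; docker-compose config prints the resolved configuration or reports any YAML errors (useful since indentation matters):

sudo docker-compose config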
Running application with Compose:
- Create a directory for this project
mkdir tag
- Create a new docker-compose file by copying the above file content.
cd tag
sudo vim docker-compose.yml
- Run the application. This single command starts all the containers, sets up the network, etc.
sudo docker-compose up -d
Creating network "tag_vins" with driver "bridge"
Creating master
Creating tag_slave_1
- We now have 1 master & 1 slave running. Let's assume we need 15 slaves to run our JMeter test. Simply issue the below command to spin up 14 more slaves.
sudo docker-compose scale slave=15
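Note that on newer Compose releases the standalone scale command is deprecated; assuming Compose 1.13 or later, the same result can be achieved with the --scale flag:

sudo docker-compose up -d --scale slave=15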
- In the terminal, we can see 14 more jmeter-slave containers starting. To get information on the running containers:
sudo docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------
master /bin/bash Up 60000/tcp
tag_slave_1 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_10 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_11 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_12 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_13 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_14 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_15 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_2 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_3 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_4 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_5 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_6 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_7 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_8 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
tag_slave_9 /bin/sh -c $JMETER_HOME/bi ... Up 1099/tcp, 50000/tcp
- Now our master and all the slaves are up and running, with the appropriate ports open.
- Let's run one more command to get the IP addresses of all the containers.
sudo docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(sudo docker ps -aq)
/tag_slave_12 - 172.19.0.15
/tag_slave_14 - 172.19.0.16
/tag_slave_13 - 172.19.0.12
/tag_slave_15 - 172.19.0.17
/tag_slave_11 - 172.19.0.11
/tag_slave_10 - 172.19.0.10
/tag_slave_9 - 172.19.0.13
/tag_slave_8 - 172.19.0.14
/tag_slave_7 - 172.19.0.9
/tag_slave_6 - 172.19.0.7
/tag_slave_4 - 172.19.0.8
/tag_slave_3 - 172.19.0.6
/tag_slave_2 - 172.19.0.5
/tag_slave_5 - 172.19.0.4
/tag_slave_1 - 172.19.0.3
/master - 172.19.0.2
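Since JMeter's -R option expects a comma-separated list, the slave IPs can also be collected in that form directly (a sketch; it assumes the container names contain "slave", as created by Compose above):

sudo docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(sudo docker ps -q --filter "name=slave") | paste -sd, -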
Note:
Even though all these containers run in the same custom network and can reach each other by name (for example, tag_slave_1), docker-compose builds the slave names from the project name, service name and index joined by underscores (_) when we issue the scale command. Java RMI does not accept an underscore in a hostname, which causes issues when running the test in distributed mode. So we use IP addresses instead.
Running JMeter test:
- We have our entire application ready to run the load test.
- Run the below command to connect to the master container.
sudo docker exec -it master /bin/bash
- Navigate to /jmeter/apache-jmeter-2.13/bin
- Create a dummy JMeter test for your application yourself, or run the below command to download a simple test which I have uploaded.
cd /jmeter/apache-jmeter-2.13/bin
wget https://s3-us-west-2.amazonaws.com/dpd-q/jmeter/jmeter-docker-compose.jmx
- Everything is ready to run the test with all the slaves. So, I run the below command and see the below output in my terminal. The test I have uploaded uses 10 threads; with 15 slaves, it creates 150 users.
./jmeter -n -t jmeter-docker-compose.jmx -R172.19.0.16,172.19.0.15..........
Creating summariser
Created the tree successfully using jmeter-docker-compose.jmx
Configuring remote engine: 172.19.0.16
Configuring remote engine: 172.19.0.15
Configuring remote engine: 172.19.0.17
Configuring remote engine: 172.19.0.13
Configuring remote engine: 172.19.0.14
Configuring remote engine: 172.19.0.11
Configuring remote engine: 172.19.0.12
Configuring remote engine: 172.19.0.9
Configuring remote engine: 172.19.0.10
Configuring remote engine: 172.19.0.8
Configuring remote engine: 172.19.0.7
Configuring remote engine: 172.19.0.6
Configuring remote engine: 172.19.0.5
Configuring remote engine: 172.19.0.4
Configuring remote engine: 172.19.0.3
Starting remote engines
Starting the test @ Sat Sep 24 16:17:22 UTC 2016 (1474733842116)
Remote engines have been started
Waiting for possible shutdown message on port 4445
summary + 6016 in 8s = 795.6/s Avg: 0 Min: 0 Max: 2 Err: 0 (0.00%) Active: 45 Started: 33 Finished: 0
summary + 132200 in 30s = 4405.9/s Avg: 0 Min: 0 Max: 6 Err: 0 (0.00%) Active: 150 Started: 138 Finished: 0
summary = 138216 in 38s = 3679.2/s Avg: 0 Min: 0 Max: 6 Err: 0 (0.00%)
summary + 179100 in 30s = 5965.0/s Avg: 0 Min: 0 Max: 3 Err: 0 (0.00%) Active: 150 Started: 138 Finished: 0
summary = 317316 in 68s = 4694.6/s Avg: 0 Min: 0 Max: 6 Err: 0 (0.00%)
summary + 179100 in 30s = 5975.2/s Avg: 0 Min: 0 Max: 2 Err: 0 (0.00%) Active: 150 Started: 138 Finished: 0
summary = 496416 in 98s = 5088.0/s Avg: 0 Min: 0 Max: 6 Err: 0 (0.00%)
summary + 138980 in 24s = 5852.8/s Avg: 0 Min: 0 Max: 2 Err: 0 (0.00%) Active: 0 Started: 138 Finished: 150
summary = 635396 in 121s = 5237.7/s Avg: 0 Min: 0 Max: 6 Err: 0 (0.00%)
Tidying up remote @ Sat Sep 24 16:19:23 UTC 2016 (1474733963754)
... end of run
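If the sample results need to be kept for later analysis, the same run can also write them to a JTL file on the master using JMeter's -l option (a sketch; results.jtl is an arbitrary file name and the IP list is the one gathered earlier):

./jmeter -n -t jmeter-docker-compose.jmx -l results.jtl -R<comma-separated-slave-IPs>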
- Once we are done with our testing, stop and remove all the containers. Of course, a single command does it, as shown below. Within the next few seconds, the master and all the slave containers, along with the network, are removed.
sudo docker-compose down
Stopping tag_slave_12 ... done
Stopping tag_slave_14 ... done
Stopping tag_slave_13 ... done
Stopping tag_slave_15 ... done
Stopping tag_slave_11 ... done
Stopping tag_slave_10 ... done
Stopping tag_slave_9 ... done
Stopping tag_slave_8 ... done
Stopping tag_slave_7 ... done
Stopping tag_slave_6 ... done
Stopping tag_slave_4 ... done
Stopping tag_slave_3 ... done
Stopping tag_slave_2 ... done
Stopping tag_slave_5 ... done
Stopping tag_slave_1 ... done
Stopping master ... done
Removing tag_slave_12 ... done
Removing tag_slave_14 ... done
Removing tag_slave_13 ... done
Removing tag_slave_15 ... done
Removing tag_slave_11 ... done
Removing tag_slave_10 ... done
Removing tag_slave_9 ... done
Removing tag_slave_8 ... done
Removing tag_slave_7 ... done
Removing tag_slave_6 ... done
Removing tag_slave_4 ... done
Removing tag_slave_3 ... done
Removing tag_slave_2 ... done
Removing tag_slave_5 ... done
Removing tag_slave_1 ... done
Removing master ... done
Removing network tag_vins
Summary:
We learnt a few basic and important docker-compose commands. Docker along with Compose saves us a lot of time in setting up the load testing infrastructure. With the scale command we can create any number of jmeter-slave instances we need, and with a single command we bring the entire application up or stop and remove it.
Note: In this article I create all the containers on a single host. This setup is helpful for testing your scripts on your local machine before the actual performance testing, where we would create one container per host. Please check the article here – JMeter – Distributed Load Testing using Docker + RancherOS in Cloud
Happy Testing 🙂
I don’t quite get it – won’t this simply create the master & all slaves on the same host?
Doing docker compose in combination with swarm might make sense, though..?
You are totally right. I had explained that in my previous article – http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker/. The idea here is to show how Compose works. Yes, for the real use of Docker & JMeter – http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-in-aws/. The Swarm article is yet to be released.
Very useful. Eagerly waiting to see the JMeter Swarm article, as we have to implement it in our project :). Hope it will be published soon.
Sure, I will try to do it soon. There are many pending in my drafts list; I have to clear them one by one. But did you check this – http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-rancheros-in-cloud/?
Guys, you will definitely love this: https://github.com/ajeetraina/jmeter-docker
Are you still working on the article of the swarm mode? I’m really curious how this is realized.
I found that RancherOS/K8S might be a better choice in creating the jmeter multi-host network. You can check this article – http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-rancheros-in-cloud/
Could you please go a little bit more in detail why you have chosen Rancher/K8s? Because Swarm became part of the Docker core as well.
Dear Vins,
In the above setup, if InfluxDB is not part of the Docker image, how can we send the data out of JMeter?