The "RPS" generated by locust is much fewer #236

odidev opened this issue Jun 29, 2023 · 8 comments

odidev commented Jun 29, 2023

@arey, I performed load testing on an HTTP interface with Locust and found that the RPS generated by Locust is very low.

Details of the load test:

  1. 6 Locust worker nodes
  2. 160 users, RPS = 44.1
  3. Spawn rate = 16

[Chart: results for the "1 petclinic", "6 petclinic" and "10 petclinic" configurations]

The api/customer/owners, api/customer/owners/pets and api/customer/pettypes APIs take longer to respond to the GET and POST requests issued during the Locust load test. Because the average response time of these APIs is higher than expected, the overall RPS drops.

Above 160 users the RPS decreases further and failures start to appear.
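For reference, the load is driven by a locustfile along the lines of the sketch below (the endpoint paths are the ones listed above; the task weights and the POST payload fields are placeholders, not my exact script):

```python
# locustfile.py -- minimal sketch; weights and payload fields are illustrative only
from locust import HttpUser, task, between

class PetClinicUser(HttpUser):
    wait_time = between(1, 3)  # simulated user think time between requests

    @task(3)
    def list_owners(self):
        self.client.get("/api/customer/owners")

    @task(1)
    def list_pet_types(self):
        self.client.get("/api/customer/pettypes")

    @task(1)
    def create_owner(self):
        # POST that ends up writing to the customers database
        self.client.post("/api/customer/owners", json={
            "firstName": "Load",
            "lastName": "Test",
            "address": "1 Main Street",
            "city": "Testville",
            "telephone": "1234567890",
        })
```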

Could you please share some pointers on this? Let me know if you need any other information.


arey commented Jul 7, 2023

Hi @odidev. Thank you for the feedback.
Could you give us some context about your test?
For instance, how do you deploy the microservices and the databases: cloud, Docker, VM, physical machine?
And while doing your load testing with Locust, do you monitor the application with any APM tool in order to identify the bottleneck?
What RPS do you expect?

arey added the question label Jul 8, 2023

odidev commented Jul 10, 2023

Hi @arey ,

I deployed spring-petclinic on AWS Arm64 instances using Docker. I built the images locally for arm64 by running ./mvnw clean install -P buildDocker, then deployed spring-petclinic with the docker-compose up command.

Cloud: AWS
Docker version: 24.0.2

I deployed spring-petclinic on c6g.2xlarge, c7g.2xlarge, c6i.2xlarge and c6a.2xlarge instances respectively, generated the load with Locust from a c7g.16xlarge instance with 6 worker nodes, and recorded data for 10 minutes. If I increase the number of users beyond 165, failures start to appear. I found that the api-gateway and customers-service limit the RPS, because their response times are much higher than those of the other services; in particular, the POST requests that create owners and pets take longer since the data has to be written to the database. Also, CPU utilization during the load test is only around 20-30%.
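The distributed run is started roughly as follows (a sketch using Locust's standard master/worker options; host addresses and counts are placeholders):

```
# On the master node (web UI disabled, 160 users, spawn rate 16, 10-minute run):
locust -f locustfile.py --master --headless -u 160 -r 16 -t 10m \
       --host http://<api-gateway-host>:8080

# On each of the 6 worker nodes:
locust -f locustfile.py --worker --master-host <master-ip>
```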

[Chart: CPU utilization (%) across all instances during the test]

I’m not using any APM tools while load testing.

Could you please share some pointers to increase the RPS? Let me know if you need any other information.


odidev commented Jul 17, 2023

@arey, could you please share your feedback regarding the above issue?


arey commented Jul 18, 2023

Hi @odidev

The docker-compose up command starts all the microservices on the same machine. I don't think this is a good practice for scalability testing: it means you can't take advantage of the horizontal scalability provided by a microservices architecture.

The 3 microservices vets-service, visits-service and customers-service can have replicas. You could increase the customers-service replica count to 2 or 3. To do this, you should not use the in-memory database but a MySQL or PostgreSQL database, so the instances can share their data.
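With a recent Docker Compose you can declare the replicas directly in docker-compose.yml, for example along these lines (a sketch only: the image name, profile name and port handling are assumptions and must be adapted to this project's actual compose file):

```yaml
  customers-service:
    image: springcommunity/spring-petclinic-customers-service
    environment:
      # assumption: a Spring profile that switches from the in-memory DB to MySQL
      - SPRING_PROFILES_ACTIVE=docker,mysql
    deploy:
      replicas: 3        # honoured by "docker compose up" with Compose v2
    # note: replicas cannot share a fixed host port mapping such as "8081:8081";
    # drop the published port and reach the service through the api-gateway instead
```

Alternatively, docker compose up --scale customers-service=3 gives the same result without editing the file. In both cases, if the service publishes a fixed host port, the extra replicas will fail to bind it, so the port mapping has to be removed.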


odidev commented Jul 20, 2023

Hi @arey ,

As per your suggestion, I added replicas for the 3 microservices vets-service, visits-service and customers-service in the docker-compose.yml file attached below:

docker-compose.yml

Adding replicas to vets-service and visits-service does not change the RPS or the failures, but after adding replicas to customers-service the RPS increases, while failures now appear at a smaller number of users.

Previously the setup supported up to 160 users; after adding replicas it cannot even support 100 users. I also tested with 50 users, but it still shows 3 to 5 failures.

Logs of the load test for 160 users after adding replicas:
[Screenshot: Locust results for 160 users]


odidev commented Jul 28, 2023

@arey, could you please share your feedback regarding the above issue?

@seungmin-uplus

@odidev, could you share your locust.py? I am testing observability services with the spring-petclinic application, and I am looking for a load-testing setup that can create enough traffic.


arey commented Sep 4, 2023

Thank you @odidev for your new test.

In order to identify the source of the errors, I'll need the logs. Did you keep them?

At the same time, although it's practical on a developer workstation, I'm not sure that docker-compose is the best tool to deploy a microservices architecture in a live environment. All the microservices are deployed on the same machine, so there is a risk of CPU overload.
