Spryker Performance Load Test

Fabian Wesner, CTO of Spryker
18 May 2016

In this load test we want to demonstrate Spryker's performance and scalability capabilities. The idea is to run a JMeter-based test under normal and peak load conditions with a single cloud instance, and then to scale up to prove Spryker's scalability.


Spryker is divided into two applications: a lightweight shop frontend called Yves and a more heavyweight backend application called Zed. Yves retrieves all data from a key-value store (Redis) and a search engine (Elasticsearch). It also performs calls to Zed to run business logic. You can read more about the concepts in the previous blog post.

From a conceptual point of view Yves has three types of pages:

  •  Pages that only read from Redis (e.g. homepage, product-detail-page, …)
  •  Pages that query Elasticsearch (e.g. all catalog-pages with full text search and facet-filters)
  •  Pages that perform calls to Zed (e.g. add-to-cart)

Each type has a specific behavior under peak load conditions. To give you a better understanding, we decided to run the load tests for each page type independently. The first test demonstrates the baseline of the system: the execution time when there is no high traffic. The second test shows what happens when we increase the number of requests per minute; afterwards we increase the number of servers from one to five and then to ten.

To run all tests we use our current Demoshop (online, code), which acts as a boilerplate for new projects. You can therefore think of these results as a footprint of the Spryker (e)commerce framework. Feel free to install and evaluate it: Getting Started Guide.

Test setup

All load tests are performed on Heroku PaaS “Performance-L” dynos. According to Heroku, this type of dyno has 14 GB of RAM and eight CPU cores, and it is hosted on Amazon Web Services. To optimize dyno usage we set the WEB_CONCURRENCY variable to 224 and allocate 64 MB of RAM to each process. The application runs on PHP 7.0.6 and nginx 1.8.1.

The only parameter we changed during the tests is the number of servers, which are called dynos in Heroku's terminology. All performance tests are run with a single dyno for Yves and a single dyno for Zed. The scalability tests are performed with five and ten dynos and an adapted WEB_CONCURRENCY of 448.
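The worker count can be sanity-checked against the dyno's memory. A quick back-of-the-envelope check, using only the figures from this section:

```python
# Sanity check of the WEB_CONCURRENCY setting against the dyno's RAM
# (assumption: all 14 GB are available to the PHP worker pool):
dyno_ram_mb = 14 * 1024   # "Performance-L" dyno: 14 GB of RAM
worker_mem_mb = 64        # RAM allocated per PHP process

max_workers = dyno_ram_mb // worker_mem_mb
print(max_workers)        # 224 -> matches the WEB_CONCURRENCY value above
```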

To run Spryker we needed to add some additional add-ons. For Yves we added Bonsai Elasticsearch (Dedicated 80) and Redis Cloud (2.5 GB). For Zed we added Heroku Postgres (Standard 4) and shared the connections for Redis and Elasticsearch. We also added the Loader.io add-on, an online load testing service based on JMeter. We decided to use the “Maintain client load“ test type to linearly scale up the number of requests per minute. Each test run uses the same number of parallel clients. Each client performs one request at a time and waits for the response. If the response time is 100 ms, a single client makes 10 requests per second (~600 requests per minute). As you can see, the number of requests per second depends on the application's response time.
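The relationship between clients, response time and throughput described above can be sketched as a small calculation (illustrative only; `requests_per_minute` is a hypothetical helper, not part of Loader.io):

```python
# Each client sends one request at a time and waits for the response,
# so throughput per client is bounded by the response time.
def requests_per_minute(clients, response_time_ms):
    per_client_rps = 1000 / response_time_ms   # requests per second per client
    return clients * per_client_rps * 60

print(requests_per_minute(1, 100))   # 600.0 -> one client, 100 ms responses
```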

Finally we added New Relic to retrieve the test results. All measurements are presented in “requests per minute (rpm)” which is New Relic’s default metric for throughput.

Baseline performance

In the first test we show the execution time for Spryker under normal load conditions, generated by five parallel clients and hosted on a single dyno.


The following screenshot is taken from Loader.io. The number of clients increases from zero to five, which results in up to 42 requests per second (~2,310 requests per minute at the end of the test). In the diagram you can see the increasing number of requests and an average response time of 119 ms for the whole HTTP round trip.

In contrast to Loader.io, New Relic measures the traffic in requests per minute. The following graph shows a peak of 2,310 requests per minute, which corresponds to ~40 requests per second.

Let’s look into New Relic to capture the server-side execution time:

As you can see, the average server-side execution time (without HTTP overhead) is 34.5 ms and is not affected by the increasing traffic. The blue area represents the raw PHP execution time to bootstrap the application, run the controller and render the templates. The yellow section indicates the time spent fetching data from Redis. Behind the scenes most of this is network latency, because Redis does not run on the same server.


In contrast to the homepage, the catalog makes use of Elasticsearch, which is significantly slower than Redis. With up to 1,840 requests per minute, the average execution time of 55.6 ms is a bit higher than for the homepage.

As you can see, there is a green area which represents the time needed to query Elasticsearch. Although the green and yellow parts look about the same size, it is important to know that we only perform one query to Elasticsearch but several Redis::get() calls.



The third page type represents all requests that perform a call to the backend application Zed. For this reason the execution time is the sum of the Yves and Zed execution times.

The green part in this diagram shows the time which is needed to make the request to Zed.


When we look into the breakdown table, we can see the single call to Zed with an execution time of 71.4 ms.



New Relic also allows us to look directly into the Zed application. Here you can see the usage of the PostgreSQL database, which is queried during the add-to-cart procedure. As you can see, Zed takes 71.5 ms on average, so the overall time for the add-to-cart request sums up to 108 ms.
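The numbers above also tell us how the add-to-cart time splits between the two applications (a simple breakdown of the New Relic figures; the variable names are ours):

```python
# Rough breakdown of the add-to-cart request, using the New Relic numbers:
zed_ms = 71.5            # time spent inside Zed (incl. PostgreSQL)
total_ms = 108           # overall server-side time for the whole request

yves_ms = total_ms - zed_ms
print(yves_ms)           # ~36.5 ms spent in Yves itself
```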



Peak load conditions

The next test run demonstrates how Spryker behaves under peak load conditions. This is important for special situations, for instance when a TV airing generates high traffic. Again we test the three different types of pages: homepage, catalog and add-to-cart.


This time we increased the number of requests per minute from zero to 10,000 over five minutes and measured the execution time.


As you can see, the execution time of the homepage does not change for the first three minutes. Then it increases to up to 200 ms. The reason is the limited capacity of the network, which makes the Redis::get() calls slower. The website is still very fast, but we don't recommend permanently going higher in a production environment. A doubled execution time would be the perfect trigger to add more servers, so let's run this test again with the same configuration but five dynos.
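The "doubled execution time" rule mentioned above could be expressed as a simple scale-up trigger. This is a hypothetical sketch; the function name and the use of the baseline from the first test are our assumptions, not part of Spryker or Heroku:

```python
# Hypothetical scale-up trigger based on the "doubled execution time" rule.
BASELINE_MS = 34.5   # baseline server-side time from the first test

def should_scale_up(current_ms, baseline_ms=BASELINE_MS):
    # Add dynos once the execution time reaches twice the baseline.
    return current_ms >= 2 * baseline_ms

print(should_scale_up(200))   # True  -> time to add more dynos
print(should_scale_up(40))    # False -> still within normal range
```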


Now the execution time is flat again:

Because of the faster response time we can even achieve a higher throughput with the same number of parallel clients:



Surprisingly, the catalog behaves better under peak load conditions than the homepage. This is because we only perform a single query to Elasticsearch and do not run into latency issues.




We also ran this test for the add-to-cart action. The Zed part becomes slower under load, but in general the execution time never exceeds 200 ms. We reach the limit of a single dyno at 2,400 requests per minute. Just imagine a shop that gets this number of add-to-cart requests … we would like to see their revenue :-)

Scaling up

In the first test runs we demonstrated the performance of Spryker. Now it's time to prove its scalability. For this reason we perform the load tests on several dynos. What we expect is linear horizontal scalability: five dynos handle five times the load of a single one, ten dynos twice as much as five, and so on. This only works out if there are no bottlenecks. You can read the previous article which explains Spryker's Performance and Scalability Concepts.


We scaled the system up to five dynos and went full throttle with up to 600 clients. Let's see what happens now…
Loader.io shows that the number of parallel clients goes up. The average response time for the whole HTTP round trip increases but is still acceptable:

The number of requests rises linearly up to a maximum of 847 requests per second (~50,000 requests per minute).

The application behaves as expected. The response time stays flat at the beginning and goes up to ~ 250 ms at the end of the test.

We finally reached 50,900 requests per minute, which is almost exactly five times the maximum throughput of a single dyno, which saturated at 10,000 requests per minute.
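The "almost exactly five times" claim is easy to verify from the measured numbers:

```python
# Scaling factor from one to five dynos, using the measured throughput:
single_dyno_rpm = 10_000   # saturation point of a single dyno (peak-load test)
five_dyno_rpm = 50_900     # measured maximum with five dynos

print(five_dyno_rpm / single_dyno_rpm)   # 5.09 -> near-linear scaling
```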


Now we want to see what happens when we scale up to ten dynos. We repeat the last test with the same number of clients.

This time the application executes much faster and we reached a maximum throughput of 92k rpm, almost double the 50.9k rpm of the five-dyno run.



For the catalog we only ran the last test with ten dynos. We know that Elasticsearch scales very well, so we don't expect any surprises. We reached up to 81.5k rpm with an execution time between 60 and 300 ms:


It is also interesting to look under the hood. As you can see, the CPU usage went up to 6.71k%, which means that each of the ten dynos has a usage of 671%. This makes sense because each one has eight cores and every core has a usage of ~84%.
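The CPU figure breaks down as follows (simple arithmetic on the New Relic reading):

```python
# Breaking down the aggregate CPU reading across dynos and cores:
total_cpu_pct = 6710    # 6.71k% aggregate CPU usage reported by New Relic
dynos = 10
cores_per_dyno = 8

per_dyno = total_cpu_pct / dynos        # 671% per dyno
per_core = per_dyno / cores_per_dyno    # ~84% per core
print(per_dyno, per_core)
```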



In this performance load test we demonstrated a very fast server-side execution time for Spryker of between 36 and 100 ms under normal load conditions. We also showed that the system does not stop working under peak load conditions; instead, execution times increase slowly, so there is enough buffer and time to scale up. Finally we scaled up to five and ten servers to verify Spryker's scalability.

What does this mean for real-life projects? From my experience you can expect to reach 5,000 to 10,000 requests per minute for the whole website with a single cloud instance/dyno. For availability reasons we don't recommend using a single server for any business-critical website. You should plan a minimum of three dynos for Yves and Zed. You can also run Spryker on smaller instances or choose other add-ons to optimize costs. In any case, with three “Performance-L” dynos you can expect to reach 15,000 to 30,000 requests per minute, which is enough even for shops with a massive marketing budget. Most users will not make more than 2 requests per minute on average, which means you can serve up to 15,000 parallel visitors. 99% of all shops will never reach this limit, and in case you are the 1% exception, just log in to Heroku and pull up the slider to scale up. We easily reached 92,000 requests per minute, and you can certainly go higher.
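The capacity estimate above can be worked out explicitly (using the upper end of the ranges given; the 2 requests per visitor per minute figure is the assumption stated in the text):

```python
# Back-of-the-envelope capacity estimate for a three-dyno setup:
rpm_per_dyno = 10_000    # upper end of the 5,000-10,000 rpm range per dyno
dynos = 3
rpm_per_visitor = 2      # assumed average requests per visitor per minute

total_rpm = rpm_per_dyno * dynos        # 30,000 rpm at the upper end
visitors = total_rpm / rpm_per_visitor
print(visitors)                          # 15000.0 parallel visitors
```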

I want to thank my team members Ehsan and Oliwier, who adapted Spryker for Heroku and carried out the load test. I also want to thank David Zuelke, Heroku's PHP ambassador, for his great support. Finally I want to thank Ben Longden at Inviqa for his awesome assistance.

The tested Demoshop is still online. We scaled it down a bit to reduce costs, but it is still very fast.
