MZBench Tutorial

From ErlangCentral Wiki

Overview

Modern internet applications are required to handle millions of simultaneous connections and requests. To ensure that these applications can handle such connections and meet their SLAs, it is important to create benchmarks that generate this kind of traffic while measuring the performance of the application as it serves it.

There are many open source tools like Basho Bench, JMeter, Tsung, etc. that generate load and measure the performance of the system under test. However, scaling these tools past a single node is challenging and requires effort from the user to run them in a distributed environment.

MZBench is a distributed, cloud-aware benchmarking tool that can seamlessly scale to millions of requests. It is built around the principles that are fundamental to the continuous delivery of code changes in high-traffic, critical environments.

What is MZBench

MZBench is a distributed benchmarking tool. Some of its main features are:

  • Cloud-aware: built-in plugin for AWS, but it can be used on other public or private clouds with additional plugins.
  • Trivially deployable and usable on localhost (such as a developer's box).
  • Open sourced with a BSD License: get it at https://github.com/machinezone/mzbench
  • Flexible: MZBench uses a plugin system, so support for a new communication protocol can be added without modifying core code. It also features a DSL for writing custom scenarios.
  • Scalable: tested with 100 nodes and millions of connections.
  • Erlang-based: OTP is just the right foundation for such a tool.

How MZBench works

The following chart gives a high level system overview:

http://carprofile.ru/erlang_central/tutorial_1.png

  1. A client performs an HTTP request asking for a particular test to be run. The CLI can be bypassed by accessing the Web interface directly.
  2. MZBench asks a cloud controller to allocate the required number of nodes (N worker nodes + 1 director node).
  3. MZBench installs to the freshly allocated nodes:
     • MZBench node code;
     • protocol-specific worker code;
     • the execution scenario;
     • resource files.
  4. Status is continuously reported throughout the whole process.


After everything is installed and set up, the nodes start to interact with the target system. The director polls the nodes for metrics, aggregates the data, and pushes it to the API server. The whole process can be tracked from the very beginning with an interactive web dashboard.
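The CLI request in step 1 might look like the following sketch (command names are from the mzbench CLI; the scenario file name is hypothetical and the exact flags may differ between versions):

```shell
# submit a scenario to a running MZBench server and wait for completion
./bin/mzbench run my_scenario.bdl

# or start it asynchronously and poll for status later
./bin/mzbench start my_scenario.bdl
./bin/mzbench status <bench_id>
```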

MZBench usage example

The most widely used internet application protocol is HTTP, so let's use an HTTP worker to test a remote HTTP server:

git clone https://github.com/machinezone/mzbench.git && cd mzbench
./bin/mzbench start_server
# now you should be able to see MZBench dashboard at http://localhost:4800/

Open this dashboard in your browser and create a new bench scenario:

http://carprofile.ru/erlang_central/tutorial_2.png

We’ll try to execute the following scenario:

#!benchDL
# this simplified http example is trying to request "target_url" at variable rate from 1 rps to 200
# "ok" rate should be always greater than 0.5 per second, otherwise it fails

assert(always, "http_ok.rps" > 0.5)

make_install(
        git = "https://github.com/machinezone/mzbench.git", # worker location
        dir = "workers/simple_http") # sub-folder in git repo

pool(size = numvar("worker_count", 20), # 20 parallel "threads"
     worker_type = simple_http_worker):
        loop(time = 120 sec, # execution time 120 seconds
             rate = ramp(linear, 1 rps, numvar("max_rps", 200) rps)):
            get(var("target_url", "http://172.21.3.3/index.html"))
            # GET parameters are passed through url, for example get("http://172.21.3.3/?x=y")
  • Skipping the comments, the very first statement is an assert, which requires the http_ok rate to stay above 0.5 per second: 1 is fine, 2 is fine, 0.3 is not. This value is checked every ten seconds, so rates under 0.1 are not recommended because the measurement period would be smaller than the event period.
  • "make_install" is a command that can be thought of as a kind of "include": it downloads and installs worker code. In this case it takes the simple_http worker from the "workers/simple_http" subfolder of our repository. Workers are usually not embedded in the MZBench code, so adding or updating one does not require a server restart.
  • "pool" is the main top-level operation responsible for load generation; it has a size and a worker_type. Every pool can have only one worker type, and every worker can be used in multiple pools.
  • "loop" is similar to a "for" loop in ordinary programming languages, except that instead of an iteration count it has a duration and a rate, which can be constant or variable.
  • "ramp" inside the loop rate means the frequency of loop body execution increases over time; in this case the rate grows from 1 rps to 200 rps (or the "max_rps" input value).
  • "get" is a worker-specific function; in this case it performs an HTTP GET on a web page, but it could be anything the worker's author came up with. We'll see below how to do that.
  • Now we could change the rate, request two pages in a row, or create two simultaneous pools requesting different pages. To see how a protocol is specified, check out workers/simple_http/src/simple_http_worker.erl
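For instance, two simultaneous pools requesting different pages could be sketched like this (the URLs and rates are illustrative, not from the original scenario):

```benchDL
#!benchDL
# sketch: two pools hitting different pages at different constant rates
make_install(
        git = "https://github.com/machinezone/mzbench.git",
        dir = "workers/simple_http")

pool(size = 10, worker_type = simple_http_worker):
    loop(time = 60 sec, rate = 10 rps):
        get("http://example.com/index.html")

pool(size = 10, worker_type = simple_http_worker):
    loop(time = 60 sec, rate = 5 rps):
        get("http://example.com/about.html")
```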

After clicking the "Run" button, the script starts executing and additional tabs such as logs and graphs become available.

Cloud

In MZBench, a "cloud" stands for a pool of nodes plus a defined allocation protocol. There are several cloud providers; one of the most popular is Amazon EC2 (https://aws.amazon.com/ec2/). An allocation plugin for it ships with MZBench, and it is also possible to implement a custom cloud plugin.

The execution above is single-node. Once you need higher request rates, you will probably need more nodes working on your task. To do that, add the following file as ~/.config/mzbench/server.conf:

[

   {mzbench_api, [
       {cloud_plugin, {module, mzb_api_ec2_plugin}},
       {aws_config, [{ec2_host, "ec2.us-west-2.amazonaws.com"},
                     {access_key_id, "<your_key_id>"},
                     {secret_access_key, "<your_key>"}]},
       {ec2_instance_spec, [
                     {image_id, "ami-3b90a80b"},
                     {group_set, ""},
                     {key_name, "<your_keypair_name>"},
                     {subnet_id, "<your_subnetwork_if_required>"},
                     {iam_instance_profile_name, ""},
                     {instance_type, "t2.micro"},
                     {availability_zone, "<your_zone>"}]},
       {ec2_instance_user, "ec2-user"}
   ]}

].

You should provide your AWS credentials in this file.

After a server restart, MZBench will allocate and deallocate nodes in AWS, and the same script can be used to create millions and millions of requests!
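Because the scenario reads worker_count and max_rps through numvar, you can scale the same script by overriding those values per run; a sketch using the mzbench CLI's environment flags (values illustrative):

```shell
# same scenario, much more load: 1000 workers ramping up to 50000 rps
./bin/mzbench run my_scenario.bdl --env worker_count=1000 --env max_rps=50000
```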

Implementing a worker plugin

MZBench has a pluggable part that describes the methods used by a scenario. This part is called a worker, and it usually implements protocol-specific functions. The deployment model keeps worker code completely separate, so you don't need to modify MZBench to add a protocol. Some workers are already implemented, of course; in this part we will describe how to create a new one.

A worker has three mandatory parts: 1. a metric description; 2. a functional description; 3. the initial state.

The metric description is a list containing metric names and their types. This list can be hierarchical, which means some metrics can be grouped on charts.
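A hierarchical metric description is an exported metrics/0 function; a hedged sketch (the group/graph names here are illustrative, and the exact format depends on your MZBench version):

```erlang
%% One "HTTP" group containing a single graph with two counters.
metrics() ->
    [{group, "HTTP", [
        {graph, #{title => "HTTP responses",
                  metrics => [{"http_ok", counter},
                              {"http_fail", counter}]}}
    ]}].
```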

The function description is the set of functions available for the protocol; for simple HTTP we have only one request function, but any other HTTP method such as HEAD or DELETE could also be implemented. To do that, you code these functions and export them from an Erlang module; nothing more is required, and after that you can use them from a script.

Each worker thread has its own state, which is useful when you have connections or sessions of some kind. The initial_state function is called once for every newly started worker to obtain its initial state.
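Putting the three parts together, a minimal worker module might look like this sketch (the module name is hypothetical; statement functions receive the state and a metadata argument and return {Result, NewState}, and the snippet assumes the inets application is available for httpc):

```erlang
-module(my_http_worker).  % hypothetical worker name
-export([initial_state/0, metrics/0, get/3]).

%% Called once per worker "thread" to build its starting state;
%% here the state is just a request counter.
initial_state() -> 0.

%% Flat (non-hierarchical) metric description: one counter.
metrics() -> [{"http_ok", counter}].

%% A statement function callable from a scenario as get("http://...").
%% It performs the request, bumps the counter on success, and
%% returns the new state.
get(State, _Meta, Url) ->
    case httpc:request(Url) of
        {ok, _Response} -> mzb_metrics:notify({"http_ok", counter}, 1);
        {error, _Reason} -> ok
    end,
    {nil, State + 1}.
```

With this module installed via make_install, a scenario could declare pool(size = ..., worker_type = my_http_worker) and call get(...) inside a loop, just as the simple_http example does.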