ML Performance Characterization Repository

Here you will find Performance Characterizations for the Mojaloop Services.

1. High-Level Characterization Scenarios, Components or Libraries

| # | Description | Documentation | Notes |
|---|-------------|---------------|-------|
| 1 | FSPIOP Discovery | ./fspiop-discovery/README | Done 1st-pass. |
| 2 | FSPIOP Agreement | ./fspiop-agreement/README | Done 1st-pass. |
| 3 | FSPIOP Transfers | ./fspiop-transfers/README | Done 1st-pass. |
| 4 | FSPIOP Discovery + Agreement + Transfers | | To Be Done |
| 5 | Central-Services-Stream Library | ./lib-central-services-stream/README.md | Done 1st-pass. |
| 6 | Support Services (e.g. Network) | ./support-profiling/README.md | |

2. Capturing End-to-end Metrics

We have two approaches to capture the End-to-end metrics of a transaction.

2.1 Tracestate Headers

The Tracestate header is part of the Mojaloop Specification, which conforms to the W3C Trace Context standard.

As such, we are able to take advantage of this header by propagating the following key-value pairs during the End-to-end transaction:

| tracestate-key | tracestate-value | Notes |
|----------------|------------------|-------|
| tx_end2end_start_ts | timestamp | Generated by the Test-runner (i.e. K6) |
| tx_callback_start_ts | timestamp | Generated by the Payee Participant Simulator (e.g. when receiving the FSPIOP GET /parties Request) |

Example header: `tracestate=tx_end2end_start_ts={{TIMESTAMP}}, tx_callback_start_ts={{TIMESTAMP}}`
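
As an illustration only, the following k6 sketch (TypeScript) shows how a test script might generate tx_end2end_start_ts and attach it via the tracestate header when issuing the FSPIOP GET /parties request. The service URL, participant ID, and traceparent value are hypothetical placeholders rather than the actual harness configuration:

```typescript
import http from 'k6/http';
import { check } from 'k6';

export default function (): void {
  // Timestamp marking the start of the End-to-end transaction, generated by the test runner.
  const txEnd2EndStartTs = Date.now();

  const params = {
    headers: {
      Accept: 'application/vnd.interoperability.parties+json;version=1.1',
      'Content-Type': 'application/vnd.interoperability.parties+json;version=1.1',
      Date: new Date().toUTCString(),
      'FSPIOP-Source': 'perffsp1', // hypothetical Payer participant ID
      // Example trace context; in a real run the trace-id would be randomised per iteration.
      traceparent: '00-aabbccddeeff00112233445566778899-0123456789abcdef-01',
      // Propagate tx_end2end_start_ts; per the table above, tx_callback_start_ts is added
      // later by the Payee Participant Simulator.
      tracestate: `tx_end2end_start_ts=${txEnd2EndStartTs}`,
    },
  };

  // Hypothetical Account Lookup Service endpoint; the real URL comes from the test harness.
  const res = http.get('http://account-lookup-service:4002/parties/MSISDN/27713803912', params);

  // FSPIOP is asynchronous, so the synchronous response is an acknowledgement only.
  check(res, { 'GET /parties accepted (202)': (r) => r.status === 202 });
}
```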

2.2 WebSocket Subscriptions

The Simulators (i.e. "Callback Handler Service") have been developed to support a simple WebSocket (WS) mechanism that allows the Test Executor (i.e. K6) to subscribe to Callback events.

For example, let's take the FSPIOP GET /parties use-case. Here, K6 subscribes to a Callback via a WS connection to the Payer Participant Simulator based on the following properties:

  1. The TraceID
  2. The HTTP Operation (i.e. PUT)
  3. The Party ID (i.e. MSISDN Number)

This ensures that the K6 subscription-notification will be unique for each test.
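
The exact subscription URL and message format are defined by the Callback Handler Service and are not documented here; as a rough sketch only, assuming a hypothetical WS path built from those three properties, a k6 iteration might look like this:

```typescript
import http from 'k6/http';
import ws from 'k6/ws';
import { check } from 'k6';

export default function (): void {
  const traceId = 'aabbccddeeff00112233445566778899'; // normally randomised per iteration
  const partyId = '27713803912'; // MSISDN used in the GET /parties request

  // Hypothetical subscription path combining the HTTP Operation, Party ID and TraceID.
  const subscriptionUrl = `ws://callback-handler-svc:3001/subscribe/PUT/parties/MSISDN/${partyId}/${traceId}`;

  const res = ws.connect(subscriptionUrl, {}, (socket) => {
    socket.on('open', () => {
      // Fire the request only once the subscription is active, so the callback cannot be missed.
      http.get(`http://account-lookup-service:4002/parties/MSISDN/${partyId}`, {
        headers: {
          Accept: 'application/vnd.interoperability.parties+json;version=1.1',
          'Content-Type': 'application/vnd.interoperability.parties+json;version=1.1',
          Date: new Date().toUTCString(),
          'FSPIOP-Source': 'perffsp1', // hypothetical Payer participant ID
          traceparent: `00-${traceId}-0123456789abcdef-01`,
        },
      });
    });

    // The simulator notifies the subscriber when the matching PUT /parties callback arrives.
    socket.on('message', (msg) => {
      check(msg, { 'callback notification received': (m) => m.length > 0 });
      socket.close(); // end the iteration only after the transaction completes End-to-end
    });

    socket.setTimeout(() => socket.close(), 5000); // guard against a missing callback
  });

  check(res, { 'ws subscription established': (r) => r.status === 101 });
}
```

Because the iteration only finishes once the notification (or the timeout) arrives, k6's built-in iteration metrics directly reflect the End-to-end duration and operations per second described below.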

We gain two benefits by using this approach:

  1. The K6 Runner will only iterate once the current request has completed End-to-end, which means that our execution strategy is closer to a real-world scenario.
  2. The K6 Runner will be able to report on the End-to-end duration and operations per second.

The downside of this approach is that it only works well when we have a single Payer Participant Simulator. It may be possible to scale the Payer Participant Simulator by having the K6 Runners subscribe to multiple instances, but that is currently not supported.

3. Types of tests

| Test Type | Description |
|-----------|-------------|
| Smoke | Validates that the scripts work and that the target env/system performs adequately under minimal load. |
| Average-load | Assesses how the system performs under expected normal conditions. |
| Stress | Assesses how the system performs at its limits when load exceeds the expected average. |
| Spike | Validates the behavior and survival of the system under sudden, short, and massive increases in activity. |
| Breakpoint | Gradually increases load to identify the capacity limits of the system. |

Reference.
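
To show how these profiles map onto k6 configuration, here is a minimal sketch of the options for an Average-load test; the stage durations, VU counts, and threshold are arbitrary illustrative values, not the ones used in these characterizations:

```typescript
import http from 'k6/http';
import { Options } from 'k6/options';

// Example Average-load profile: ramp up to the expected normal load, hold it, then ramp down.
export const options: Options = {
  stages: [
    { duration: '2m', target: 100 },  // ramp up to 100 virtual users
    { duration: '10m', target: 100 }, // hold the expected average load
    { duration: '2m', target: 0 },    // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<1000'], // example SLO: 95% of requests under 1s
  },
};

export default function (): void {
  http.get('http://account-lookup-service:4002/health'); // placeholder request
}
```

A Spike or Breakpoint profile would differ only in the shape of the stages (a sharp, short ramp, or a steadily increasing ramp until the system's capacity limit is found).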

4. Tools Used

| Tool | Description |
|------|-------------|
| ml-core-test-harness | A light-weight Docker Compose-based test harness used by the Mojaloop community to execute Functional and, now, Performance-Characterization tests. |
| K6 | Grafana k6 is an open-source load testing tool. |
| Docker Compose | Docker Compose is a tool for defining and running multi-container Docker applications. |
