

Redis Latency Monitoring - How to enable it

Redis Latency Monitoring helps the user to check and troubleshoot possible latency problems.

Here at Ibmi Media, as part of our Server Management Services, we regularly help our Customers to perform Redis configuration tasks.

In this context, we shall look into the latency monitoring and how it relates to Redis.


More about Redis Latency Monitoring?

Redis is often used in the context of demanding use cases: it serves a large number of queries per second per instance while meeting very strict latency requirements, both for the average response time and for the worst-case latency.

While Redis is an in-memory system, it still deals with the operating system in different ways, for instance when persisting to disk.

Moreover, Redis implements a rich set of commands. Certain commands are fast and run in constant or logarithmic time, while others are slower and can cause latency spikes.

Finally, Redis is single-threaded. This is usually an advantage, both in the amount of work it can perform per core and in the latency figures it is able to provide. However, it also poses a latency challenge, since the single thread must perform certain tasks, such as key expiration, incrementally.

For all these reasons, Redis 2.8.13 introduced Latency Monitoring. It helps the user to check and troubleshoot possible latency problems.

Latency monitoring is composed of the following conceptual parts:

i. Latency hooks that sample different latency-sensitive code paths.

ii. Time series recording of latency spikes, split by different events.

iii. Reporting engine to fetch raw data from the time series.

iv. Analysis engine to provide human-readable reports and hints according to the measurements.


More about Events and time series?

Different monitored code paths have different names, called events. For example, command is the event measuring latency spikes of possibly slow command executions, while fast-command is the event name for the monitoring of O(1) and O(log N) commands.

Other events are less generic and monitor a very specific operation by Redis. For example, the fork event only monitors the time taken by Redis to execute the fork(2) system call.

A latency spike is an event that takes more time to run than the configured latency threshold. A separate time series is associated with every monitored event.

This is how the time series work:

i. Every time a latency spike happens, it is logged in the appropriate time series.

ii. Every time series is composed of 160 elements.

iii. Each element is a pair: a Unix timestamp of the time the latency spike was measured, and the number of milliseconds the event took to execute.

iv. Latency spikes for the same event happening in the same second are merged. So even if continuous latency spikes are measured for a given event, at least 160 seconds of history are available.

v. For every element, the all-time maximum latency is recorded.
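
As an illustration of this structure, the series for an event can be fetched with the LATENCY HISTORY subcommand covered later in this article. The reply is simply a list of pairs, each holding the Unix timestamp of a spike and its duration in milliseconds; the values below are made up for the example:

LATENCY HISTORY command
1) 1) (integer) 1649088200
   2) (integer) 251
2) 1) (integer) 1649088215
   2) (integer) 120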


The framework monitors and logs latency spikes in the execution time of these events:

command: regular commands.
fast-command: O(1) and O(log N) commands.
fork: the fork(2) system call.
rdb-unlink-temp-file: the unlink(2) system call.
aof-write: writing to the AOF – a catch-all event for write(2) system calls.
aof-fsync-always: the fsync(2) system call when invoked by the appendfsync always policy.
aof-write-pending-fsync: the write(2) system call when there is a pending fsync.
aof-write-active-child: the write(2) system call when performed with an active child process.
aof-write-alone: the write(2) system call when performed by the main process, with no active children.
aof-fstat: the fstat(2) system call.
aof-rename: the rename(2) system call for renaming the temporary file after completing BGREWRITEAOF.
aof-rewrite-diff-write: writing the differences accumulated while performing BGREWRITEAOF.
active-defrag-cycle: the active defragmentation cycle.
expire-cycle: the expiration cycle.
eviction-cycle: the eviction cycle.
eviction-del: deletes during the eviction cycle.
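
Each of these event names can be passed to the LATENCY subcommands covered later in this article. For example, to fetch the recorded spikes for the fork event or to clear the series of the expiration cycle:

LATENCY HISTORY fork
LATENCY RESET expire-cycle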

 

How to enable latency monitoring?

There are applications where all the queries must be served in less than 1 millisecond, and applications where from time to time a small percentage of clients experiencing a 2-second latency is acceptable.

So the first step is to set a latency threshold in milliseconds. The Redis latency-monitor-threshold directive sets a limit in milliseconds: any command or activity of the Redis instance that exceeds this limit is logged. The default is 0, which means Redis does not run latency monitoring automatically; it must be actively enabled.

Only events that take longer than the configured threshold are logged as latency spikes. The user should set the threshold according to their needs.

For example, if the maximum acceptable latency is 100 milliseconds, the threshold should be set to that value in order to log all the events blocking the server for a time equal to or greater than 100 milliseconds.

We can enable the latency monitor with the following command:

CONFIG SET latency-monitor-threshold 100

By default, the threshold is 0, so monitoring is disabled: while the memory requirements of latency monitoring are very small, there is no good reason to raise the baseline memory usage of a Redis instance that is working well.
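
As a quick sanity check (a minimal sketch, assuming a local test instance reachable with redis-cli), we can trigger an artificial latency spike with the DEBUG SLEEP command, which blocks the server for the given number of seconds, and then confirm that it was recorded with the LATENCY LATEST subcommand covered in the next section:

CONFIG SET latency-monitor-threshold 100
DEBUG SLEEP 1
LATENCY LATEST

The one-second sleep exceeds the 100-millisecond threshold, so a spike should now be reported under the command event. Since DEBUG SLEEP blocks the server, only use it on a test instance.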


[Can't enable latency monitoring? We'd be happy to assist!]


Information reporting with the LATENCY command

The user interface to the latency monitoring subsystem is the LATENCY command. Like many other Redis commands, LATENCY accepts subcommands that modify its behavior.

These subcommands are:

LATENCY LATEST – returns the latest latency samples for all events.

LATENCY HISTORY – returns latency time series for a given event.

LATENCY RESET – resets latency time-series data for one or more events.

LATENCY GRAPH – renders an ASCII-art graph of an event’s latency samples.

LATENCY DOCTOR – replies with a human-readable latency analysis report.
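
For illustration (the numbers are made up), the LATENCY LATEST reply contains four fields for every event: the event name, the Unix timestamp of the latest latency spike, the latest spike in milliseconds, and the all-time maximum latency in milliseconds:

LATENCY LATEST
1) 1) "command"
   2) (integer) 1649088215
   3) (integer) 120
   4) (integer) 251

LATENCY RESET, in turn, replies with the number of event time series that were reset.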

The LATENCY DOCTOR command reports about different latency-related issues and advises about possible remedies.


This command is the most powerful analysis tool in the latency monitoring framework. It is able to provide additional statistical data, such as the average period between latency spikes and the median deviation, together with a human-readable analysis of the events.
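
Its reply is free-form text rather than a fixed structure, so the exact wording varies between instances and Redis versions. Invoking it requires no arguments:

LATENCY DOCTOR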


[Stuck in between the analysis of Redis? We are here for you!]


Conclusion

This article guided you through the aspects of Latency Monitoring in Redis, which helps the user to check and troubleshoot possible latency problems.