A CLI tool to gather performance data and visualize it using HTML graphs. Data from multiple collection runs can be viewed side by side, allowing for easy comparison of the same workload across different system configurations.
A CLI tool to gather many pieces of performance data in one go. APerf includes a recorder and a reporter sub-tool. The recorder gathers performance metrics and stores them in a set of local files, which can then be analyzed via the reporter sub-tool.
Performance issues in applications are typically investigated by recreating them locally and collecting data/metrics with monitoring tools such as sysstat, perf, sysctl, and eBPF, or by running those tools remotely. Installing and executing the various performance monitoring tools is a manual, error-prone process, and even with the Graviton Performance Runbook, understanding their output requires deep domain-specific knowledge.
The aim of APerf is to enable anyone to collect performance data in their environment while providing the tools to analyze and visualize application performance. By automatically analyzing and highlighting deviations in performance between two application environments, APerf aims to enable faster troubleshooting.
APerf collects the following metadata:
APerf collects the following performance data:
- Profiling data (with the `--profile` flag and the `perf` binary present)

Download the binary from the Releases page. `aperf` only supports running on Linux.
To build and test from source:

```
cargo build
cargo test
```
`aperf record` records performance data and stores it in a series of files. A report is then generated with `aperf report` and can be viewed on any system with a web browser.
KNOWN LIMITATION

The default value of 10ms for `perf_event_mux_interval_ms` is known to cause serious performance overhead on systems with large core counts. We recommend setting this value to 100ms by running:

```
echo 100 | sudo tee /sys/devices/*/perf_event_mux_interval_ms
```
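The `echo … | tee` pattern above fans a single value out to every file matched by the glob. A minimal, self-contained demo of the same pattern, using temporary files in place of the sysfs entries (writing the real `/sys` files requires root):

```shell
# Stand-in files for /sys/devices/*/perf_event_mux_interval_ms
tmpdir=$(mktemp -d)
echo 10 > "$tmpdir/pmu_a"
echo 10 > "$tmpdir/pmu_b"

# One write, fanned out by tee to every file the glob matches
echo 100 | tee "$tmpdir"/* > /dev/null

cat "$tmpdir"/*   # both files now contain 100
```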
`aperf record`

1. Download the `aperf` binary.
2. Run `aperf record`:

```
./aperf record -r <RUN_NAME> -i <INTERVAL_NUMBER> -p <COLLECTION_PERIOD>
```
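As a concrete sketch, a hypothetical one-minute collection sampling once per second, stored under a run named `my_run` (all three values are illustrative; check `./aperf record -h` on your build for the exact units expected):

```shell
./aperf record -r my_run -i 1 -p 60
```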
`aperf report`

1. Download the `aperf` binary.
2. Collect data with `aperf record`.
3. Run `aperf report`:

```
./aperf report -r <COLLECTOR_DIRECTORY> -n <REPORT_NAME>
```
To compare the results of two different performance runs, use the following command:

```
./aperf report -r <COLLECTOR_DIRECTORY_1> -r <COLLECTOR_DIRECTORY_2> -n <REPORT_NAME>
```
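Putting the steps together, a hypothetical comparison of the same workload on two machines might look like the following (run and report names are illustrative):

```shell
# On machine A
./aperf record -r baseline -i 1 -p 60

# On machine B
./aperf record -r candidate -i 1 -p 60

# On either machine, with both run directories copied locally
./aperf report -r baseline -r candidate -n baseline_vs_candidate
# Then open the generated report in a web browser
```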
For a step-by-step walkthrough, please see our example here.
`aperf record` has the following flags available for use:

Recorder flags:
- `-V, --version`: print the version of APerf
- `-i, --interval`: collection interval (default 1)
- `-p, --period`: how long you want the data collection to run (default 10s)
- `-r, --run-name`: name of the run, for organization purposes; creates a directory of the same name (default `aperf_[timestamp]`)
- `-v, --verbose`: verbose messages
- `-vv, --verbose --verbose`: more verbose messages
- `--profile`: gather profiling data using the `perf` binary
- `--profile-java`: profile JVMs by PID or name using async-profiler (by default, profiles all JVMs)

Run `./aperf record -h` to see the full list of options.
Reporter flags:
- `-V, --version`: print the version of the APerf visualizer
- `-r, --run`: run data to be visualized; can be a directory or a tarball
- `-n, --name`: name of the report, for organization purposes; creates a directory of the same name (default `aperf_report_`)
- `-v, --verbose`: verbose messages
- `-vv, --verbose --verbose`: more verbose messages

Run `./aperf report -h` to see the full list of options.
Below are some prerequisites for profiling with APerf:

- If not running with `root` or `sudo` permissions, set `perf_event_paranoid` to `0`.
- Set your `ulimit` settings accordingly.
- Kernel symbol resolution uses `/proc/kallsyms`, so we need to relax `kptr_restrict` by setting it to `0` (on Ubuntu OS).
- Install the `perf` binary on your instances.

`env_logger` is used to log information about the tool run to stdout. Use `./aperf <command> -v` for verbose messages and `./aperf <command> -vv` for more verbose messages.
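The kernel-side profiling prerequisites above map to standard Linux `sysctl` knobs. A sketch of applying them (run as root; these are generic kernel settings, not APerf-specific commands, and the `ulimit` value is only an example):

```shell
sysctl -w kernel.perf_event_paranoid=0   # allow perf events without root
sysctl -w kernel.kptr_restrict=0         # expose symbols in /proc/kallsyms
ulimit -n 8192                           # example: raise the open-file limit
```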
See CONTRIBUTING for more information.
This project is licensed under the Apache-2.0 License. See LICENSE for more information.