Gathering performance data with sysstat

The sysstat package is included in all distributions but not always installed by default. It is a collection of performance monitoring tools; check the manual pages for the options available in your version. It is platform independent, so everything here applies to Linux on any architecture.

After installation of the package there are two different ways to gather data.

Using the command line
Ad hoc data can be gathered by calling the data collector directly from the command line. For newer versions of sysstat the command

/usr/lib64/sa/sadc -S XALL -F 5 datafile

and for older versions of sysstat the command

/usr/lib64/sa/sadc -d -F 5 datafile

will collect data in 5-second intervals into the binary file datafile until interrupted by ^C. The -F option forces a file format compatible with the current sysstat version; -S XALL enables the collection of all optional statistics (use -S DISK, or -d on older versions, if you only need the block devices).
You can then convert the created binary file into a readable report with

sar -A -f datafile > outfile.txt

There are many sar options to select what is shown in the report; please check the man page for them. The -A option shows all the data.
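Put together, a minimal ad-hoc capture could look like the sketch below. This assumes a newer sysstat; the sadc path and the file names are illustrative, and the collector is only started if the binary is actually present:

```shell
#!/bin/sh
# Ad-hoc capture sketch: collect everything for a short while, then report.
SADC=/usr/lib64/sa/sadc          # path on RHEL/SLES; may differ elsewhere
DATA=/tmp/sysstat.data           # illustrative name for the binary data file

if [ -x "$SADC" ]; then
    "$SADC" -S XALL -F 5 "$DATA" &      # sample every 5 seconds, in background
    COLLECTOR=$!
    sleep 12                            # let a couple of samples accumulate
    kill "$COLLECTOR"                   # stands in for ^C in an interactive run
    sar -A -f "$DATA" > /tmp/sysstat-report.txt   # render a readable report
else
    echo "sadc not found at $SADC - install the sysstat package first"
fi
```

In an interactive session you would simply run sadc in the foreground and stop it with ^C instead of the sleep/kill pair.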

Using cron
For regular monitoring the distributions configure sysstat to run as a cron job or as a service. The history files are then kept under /var/log/sa and converted once a day to a text file. The archive settings for this history can be adjusted in the configuration file /etc/sysstat/sysstat (SUSE) or /etc/sysconfig/sysstat (Red Hat).

For SLES you can install the cron settings with:

  • SLES10: /etc/init.d/sysstat start
  • SLES11: /etc/init.d/boot.sysstat start
  • SLES12: systemctl start sysstat

For RHEL5, RHEL6, RHEL7 the installation is done automatically.

For Ubuntu 16.04.1 the cron jobs are disabled by default after installation. You have to edit the file /etc/default/sysstat and change the variable ENABLED from “false” to “true”. After that, restart the service: /etc/init.d/sysstat restart
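The Ubuntu steps can also be scripted; a sketch, assuming the Debian-style /etc/default/sysstat with an ENABLED variable, to be run as root:

```shell
#!/bin/sh
# Enable the sysstat cron jobs on Ubuntu/Debian (sketch; needs root).
CONF=/etc/default/sysstat
if [ -w "$CONF" ]; then
    sed -i 's/^ENABLED="false"/ENABLED="true"/' "$CONF"   # flip the switch
    /etc/init.d/sysstat restart                           # pick up the change
else
    echo "$CONF not writable - run as root with sysstat installed"
fi
```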

In the end there is a file or link in /etc/cron.d/ that you can adapt, e.g. to change the 10-minute collection interval or to extend the reports (e.g. add -S XALL). As usual, the documentation for the sa1 and sa2 commands used there is in the respective man pages.
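For illustration, an adapted /etc/cron.d/sysstat could look like this (the sa1/sa2 paths and times are examples from a RHEL-style layout, not a drop-in file; options given to sa1 are passed on to sadc):

```
# Collect all statistics every 10 minutes (adjust */10 for another interval)
*/10 * * * * root /usr/lib64/sa/sa1 -S XALL 1 1
# Generate the daily text report shortly before midnight
53 23 * * * root /usr/lib64/sa/sa2 -A
```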


I’m often asked what the overhead is. I did a quick test on a really large system with 100 cores, SMT2 enabled, and 1004 SCSI disks with two paths plus device mapper, resulting in 8000+ disk entries in sar.
Running this with RHEL 7.7 on a z14 at a 1-second interval uses around 25% of one logical CPU. At 10 seconds that drops to 2.5% of a logical CPU for this really large system, and at a 1-minute interval it is 0.4%.
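These figures suggest the collection overhead scales roughly inversely with the sampling interval. A quick sanity check of that rule of thumb (only the 25% base value is taken from the measurement above; the rest is derived):

```shell
# Rule of thumb: overhead ~ base / interval. Base figure from the text:
# ~25% of one logical CPU at a 1-second interval on that large system.
base_tenths=250                  # 25.0%, expressed in tenths of a percent
for interval in 1 10 60; do
    echo "${interval}s interval: $(( base_tenths / interval )) tenths of a percent"
done
```

which reproduces the measured 2.5% at 10 seconds and 0.4% at 1 minute (integer division rounds 250/60 down to 4).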

(updated 10/12/2016)
