Benchmarking Side-Channel Mitigations

November 19, 2019

by Florian Pester

Last week we announced our public side-channel mitigation test and benchmarking lab in light of the publication of the TAA variant of ZombieLoad. This new lab enables us to evaluate new side-channel attacks and their mitigations and to answer questions such as:

  • Do the mitigations work?
  • How much performance does each individual mitigation cost?
  • How much performance do the combined mitigations cost?
  • Are there platforms that stand out in performance cost?

In this article we take a closer look at these questions by comparing a Linux kernel compile benchmark across different hardware platforms and different Linux kernels.

Side-Channel Lab Value

Every time a new side-channel attack is published, Cyberus advises businesses and academia about its impact. People frequently ask what they can and should do. Soon after the publication of an attack, operating system vendors and/or Intel usually publish updates with mitigations.

There is a history of faulty updates, poor performance after updates, and similar issues. Another question that comes up frequently is therefore whether a particular update is advisable or not.

Finally, a particularly interesting question that IT decision-makers often ask is:

  • Do we need to acquire more hardware for our workloads? And if so: How much?

The Cyberus Side-Channel Lab enables us to answer all of these questions.

Background & Setup

Since the publication of Meltdown and Spectre in early 2018, a constant stream of side-channel attacks has been published, and for each of these attacks software and hardware mitigations have followed. Every additional mitigation potentially introduces a new performance regression, and over the years a whole series of such regressions has accumulated. We were interested in the individual and combined performance impact these mitigations have on real workloads.

As a benchmark we take a full compilation of the Linux kernel and measure the compilation time in seconds. We ran this benchmark both on a booted Linux 4.19.84 and on Linux 5.3.11.
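For illustration, the measurement itself can be as simple as timing a full build. The following Python sketch shows one way such a measurement could be scripted; the kernel tree path, the defconfig configuration and the job count are illustrative assumptions and not our actual lab harness:

    # Minimal sketch of a kernel-compile benchmark run (not the actual lab harness).
    # Assumes an unpacked kernel source tree at ./linux and a default x86_64 config.
    import os
    import subprocess
    import time

    KERNEL_TREE = "./linux"     # hypothetical path to the kernel sources
    JOBS = os.cpu_count() or 1  # one compile job per hardware thread

    def timed_kernel_compile() -> float:
        """Configure, clean and fully build the kernel; return wall-clock seconds."""
        subprocess.run(["make", "defconfig"], cwd=KERNEL_TREE, check=True)
        subprocess.run(["make", "clean"], cwd=KERNEL_TREE, check=True)
        start = time.monotonic()
        subprocess.run(["make", f"-j{JOBS}"], cwd=KERNEL_TREE, check=True)
        return time.monotonic() - start

    if __name__ == "__main__":
        print(f"compile time: {timed_kernel_compile():.1f} s")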

The benchmark is run on opensource.sotest.io, which at the moment consists of an Intel Core i7-3612QE (Ivy Bridge, 4 cores, 8 threads) and a Core i5-7300U (Kaby Lake, 2 cores, 4 threads). We also ran the same tests on our internal non-public instance, which additionally has a Xeon Bronze 3104 (6 cores, 6 threads) and a Core i5-5300U (Broadwell, 2 cores, 4 threads).

Evaluation

Enabling all mitigations slows the compilation down by roughly 23 seconds, or 26%. Disabling Hyperthreading has by far the biggest impact here: when we enable all mitigations except for disabling Hyperthreading, we see a slowdown of only 2-4 seconds, or 3-5%, for the Linux kernel compilation.

For example, on the Kaby Lake hardware running Linux 4.19.84 we end up with the following benchmark times:

  Configuration        Command Line                  Compile Time (s)  Overhead (%)
  No mitigations       mitigations=off               89                -
  Spectre & Meltdown   mds=off tsx_async_abort=off   89                0
  + ZombieLoad         mds=full tsx_async_abort=off  92                3.4
  + TAA                mitigations=auto              92                3.4
  + NOSMT              mitigations=auto,nosmt        112               25.8
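The overhead column is simply each compile time relative to the mitigations=off baseline; the percentages above can be reproduced with a few lines of Python:

    # Overhead relative to the "mitigations=off" baseline, using the numbers above.
    baseline = 89  # seconds, compile time with all mitigations off

    for label, seconds in [
        ("Spectre & Meltdown", 89),
        ("+ ZombieLoad", 92),
        ("+ TAA", 92),
        ("+ NOSMT", 112),
    ]:
        overhead = (seconds - baseline) / baseline * 100
        print(f"{label:20} {seconds:4d} s  {overhead:5.1f} %")
    # prints overheads of 0.0, 3.4, 3.4 and 25.8 percent, matching the table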

When we take a look at all machines and all configurations we end up with the following graph:

[Graph: Linux Kernel Compile Time Overhead]

We can clearly see that the overhead stays below 5% for most configurations. The impact of the TAA mitigation is also rather low for a compilation benchmark. An interesting thing to note is that the overhead of the Spectre & Meltdown configuration for the i5-7300U (a Kaby Lake processor) is 0. This processor already contains hardware mitigations against Spectre and Meltdown, so Linux does not turn on the corresponding software mitigations for this CPU. From the kernel documentation:

If the CPU is vulnerable, enable all available mitigations for the MDS vulnerability, CPU buffer clearing on exit to userspace and when entering a VM. Idle transitions are protected as well if SMT is enabled.
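Independent of any benchmark, whether a given kernel actually enables a mitigation on a given CPU can be checked via the vulnerability status files the kernel exports under sysfs. A small Python sketch reads these files; the exact set of files and the wording of the status strings depend on the kernel version and CPU:

    # Print the kernel's own mitigation status per known vulnerability.
    # Each file under this directory corresponds to one vulnerability and reports
    # whether the CPU is affected and which mitigation (if any) is active.
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name:20} {entry.read_text().strip()}")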

The Xeon Bronze has 6 physical cores and no Hyperthreading. It is therefore the only CPU in the field that does not suffer massively once Hyperthreading is turned off via nosmt.

Summary

Our current publicly available demo setup features a limited set of machines and operating system kernels. Our side-channel lab is built on top of our Sotest test infrastructure, which makes it easy to add more hardware platforms and more benchmarks in the future. The lab itself is fully automated and allows us to quickly test and benchmark new side-channel attacks and the mitigations against them.

We can now say that the mitigations do work. The performance impact is generally less than 5% for a compile benchmark (which is already quite a lot), but it gets much worse when SMT (Hyperthreading) is disabled: in that case performance can decrease by roughly 25%.

We are in the process of scaling this lab up with additional hardware and software environments and improved analysis resources, allowing you to turn data into decisions.

We are interested in a conversation with you about automated testing and benchmarking labs. Do you have a special testing or benchmarking need that you think is not addressed by current market solutions? If so, we would like to hear from you.

Please feel free to contact Jacek Galowicz.

