Multiple programs running on the same computer may not be able to directly access one another’s hidden information, but because they share the same memory hardware, their secrets can be stolen by a malicious program through a “memory-timing side-channel attack.”
The malicious program notices delays when it tries to access the computer’s memory, because the hardware is shared among all the programs using the machine. It can then interpret those delays to obtain another program’s secrets, such as a password or cryptographic key.
One way to prevent these types of attacks is to allow only one program to use the memory controller at a time, but this slows down computation considerably. Instead, a team of MIT researchers devised a new approach that allows memory sharing to continue while providing strong security against this type of side-channel attack. Their method speeds up programs by 12 percent when compared to state-of-the-art security schemes.
In addition to providing better security while allowing faster computation, the technique could be applied to a range of other side-channel attacks that target shared computing resources, the researchers say.
“It is very common these days to share a computer with others, especially if you are doing computation in the cloud or even on your own mobile device. A lot of this resource sharing is happening. Through these shared resources, an attacker can seek out even very fine-grained information,” said senior author Mengjia Yan, the Homer A. Burnell Career Development Assistant Professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Co-authors are CSAIL PhD students Peter Deutsch and Yuheng Yang. Other co-authors include Joel Emer, a professor of the practice in EECS, and CSAIL graduate students Thomas Bourgeat and Jules Drean. The research will be presented at the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS).
Committed to Memory
One can think of a computer’s memory as a library, and the memory controller as the library’s door. A program needs to go to the library to retrieve some stored information, so the program opens the library door very briefly to go inside.
There are several ways a malicious program can exploit shared memory to get at confidential information. This work focuses on a contention attack, in which the attacker needs to determine the exact instant when the victim program walks through the library door. The attacker does that by trying to use the door at the same time.
“The attacker is poking at the memory controller, the library door, to say, ‘Is it busy now?’ If they get blocked because the library door is already open, because the victim program is already using the memory controller, they are going to be delayed. Noticing that delay is the information that is being leaked,” said Emer.
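To make this contention channel concrete, here is a toy simulation, illustrative only and not the paper’s actual threat model or hardware, of a single-ported memory controller that can serve one request per cycle. An attacker that probes every cycle is delayed exactly when the victim also issued a request, so its delay trace mirrors the victim’s access pattern. The function name and the 30 percent access probability are invented for the example.

```python
import random

def simulate_contention(cycles=16, seed=0):
    """Toy model of a memory-contention side channel (illustrative names and numbers)."""
    rng = random.Random(seed)
    # The victim's accesses follow some secret-dependent pattern.
    victim_accesses = [rng.random() < 0.3 for _ in range(cycles)]
    attacker_delays = []
    for cycle in range(cycles):
        # Single-ported controller: if the victim used it this cycle,
        # the attacker's probe is delayed; otherwise it completes on time.
        attacker_delays.append(victim_accesses[cycle])
    return victim_accesses, attacker_delays

victim, observed = simulate_contention()
print("victim accesses :", "".join("X" if v else "." for v in victim))
print("attacker delays :", "".join("X" if d else "." for d in observed))
assert victim == observed  # the delay trace exactly reveals the victim's pattern
```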
To prevent contention attacks, the researchers developed a scheme that “shapes” a program’s memory requests into a predefined pattern that is independent of when the program actually needs to use the memory controller. Before a program can access the memory controller, and before it could interfere with another program’s memory requests, it must go through a “request shaper” that uses a graph structure to process requests and send them to the memory controller on a fixed schedule. This type of graph is known as a directed acyclic graph (DAG), and the group’s security scheme is called DAGguise.
Fool the attacker
Using that rigid schedule, DAGguise sometimes delays a program’s request until the next time it is permitted to access memory (according to the fixed schedule), and sometimes it sends a fake request if the program does not need to access memory during the next scheduled slot.
“Sometimes the program will have to wait an extra day to go to the library, and sometimes it will go when it didn’t really need to. But by doing this very structured pattern, you can hide from the attacker what you are actually doing,” says Deutsch.
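The behavior Deutsch describes, delaying real requests and inserting fake ones so that the visible pattern never changes, can be sketched as follows. This is a minimal sketch assuming a simple fixed-interval schedule, not the authors’ implementation; the class name, the `DUMMY` marker, and the interval of four cycles are invented for illustration.

```python
from collections import deque

class RequestShaper:
    """Releases exactly one request per scheduled slot, real or fake (illustrative sketch)."""

    def __init__(self, interval=4):
        self.interval = interval   # fixed number of cycles between released requests
        self.queue = deque()       # real requests waiting for their slot

    def enqueue(self, request):
        # Called by the program whenever it actually needs memory.
        self.queue.append(request)

    def tick(self, cycle):
        # Called every cycle; returns what (if anything) goes to the memory controller.
        if cycle % self.interval != 0:
            return None                      # off-schedule: nothing is sent
        if self.queue:
            return self.queue.popleft()      # a real request, possibly delayed
        return "DUMMY"                       # a fake request hides that the program is idle

shaper = RequestShaper(interval=4)
shaper.enqueue("load A")
shaper.enqueue("load B")
trace = [shaper.tick(c) for c in range(16)]
print([r for r in trace if r is not None])   # ['load A', 'load B', 'DUMMY', 'DUMMY']
```

Whatever the program is really doing, an observer of the shaper’s output sees the same one-request-per-slot pattern.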
DAGguise represents a program’s memory access requests as a graph, where each request is stored in a “node,” and the “edges” that connect the nodes are the timing dependencies between requests (request A must be completed before request B). The edges between the nodes, the time between each request, are fixed.
A program can send a memory request to DAGguise whenever it needs to, and DAGguise adjusts the timing of that request so that confidentiality is always maintained. No matter how long it takes to process a memory request, the attacker can only see when the request is actually sent to the controller, which happens on a fixed schedule.
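The graph representation can be illustrated with a small sketch, again assuming a simplified model rather than the paper’s design: each node is a (possibly fake) request, each edge carries a fixed delay, and a node’s issue time is determined entirely by the graph, so the issue times an attacker can observe carry no program-specific information. The `DagNode` and `schedule` names and the four-cycle delays are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DagNode:
    name: str
    deps: list = field(default_factory=list)  # (predecessor, fixed_delay) pairs
    issue_time: int = 0

def schedule(nodes):
    """Assign fixed issue times: a node issues only after each predecessor plus its edge delay."""
    for node in nodes:                         # assumes nodes are listed in topological order
        if node.deps:
            node.issue_time = max(pred.issue_time + delay for pred, delay in node.deps)
    return {n.name: n.issue_time for n in nodes}

a = DagNode("request A")
b = DagNode("request B", deps=[(a, 4)])        # B issues exactly 4 cycles after A
c = DagNode("request C", deps=[(b, 4)])        # C issues exactly 4 cycles after B
print(schedule([a, b, c]))                     # {'request A': 0, 'request B': 4, 'request C': 8}
```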
This graph structure enables the memory controller to be shared dynamically. DAGguise can adapt if there are many programs trying to use memory at once, adjusting the fixed schedule accordingly, which enables more efficient use of the shared memory hardware while still keeping secrets safe.
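The article does not spell out the adaptation policy, but one plausible, purely hypothetical reading is that each program’s fixed issue interval is renegotiated as the number of programs sharing the controller changes, for example:

```python
def adapted_interval(base_interval, active_programs):
    """Hypothetical policy: stretch each program's fixed interval as sharing increases."""
    return base_interval * max(1, active_programs)

# With one program the shaper releases a request every 4 cycles; with four programs, every 16.
print(adapted_interval(4, 1), adapted_interval(4, 4))  # 4 16
```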
Increasing productivity
The researchers tested DAGguise by simulating how it would perform in a real implementation. They constantly sent signals to the memory controller, which is how an attacker would try to determine another program’s memory access patterns, and formally verified that, no matter what the attacker tries, no private data are leaked.
They then used a computer simulation to see how their system could improve performance compared to other security methods.
“When you add these security features, you are going to slow down compared to a normal execution. You are going to pay for this in performance,” Deutsch explains.
Although their method was slower than a baseline insecure implementation, DAGguise delivered a 12 percent performance improvement over other secure alternatives.
With these encouraging results in hand, the researchers want to apply their approach to other computational structures that are shared between programs, such as on-chip networks. They are also interested in using DAGguise to quantify how threatening certain types of side-channel attacks are, in an effort to better understand the performance and security tradeoffs, Deutsch said.
This work was funded in part by the National Science Foundation and the Air Force Office of Scientific Research.