The Wayback Machine - https://web.archive.org/web/20101004082050/http://en.wikipedia.org/wiki/Side_channel_attack

Side channel attack

From Wikipedia, the free encyclopedia
[Figure] An attempt to decode RSA key bits using power analysis. The left peak shows the CPU power variation during an algorithm step without multiplication; the right (broader) peak shows a step with multiplication, allowing bits 0 and 1 to be read.

In cryptography, a side channel attack is any attack based on information gained from the physical implementation of a cryptosystem, rather than brute force or theoretical weaknesses in the algorithms (compare cryptanalysis). For example, timing information, power consumption, electromagnetic leaks or even sound can provide an extra source of information which can be exploited to break the system. Many side-channel attacks require considerable technical knowledge of the internal operation of the system on which the cryptography is implemented.

Attempts to break a cryptosystem by deceiving or coercing people with legitimate access are not typically called side-channel attacks: see social engineering and rubber-hose cryptanalysis. For attacks on computer systems themselves (which are often used to perform cryptography and thus contain cryptographic keys or plaintexts), see computer security. The rise of Web applications and software-as-a-service has also raised the possibility of side-channel attacks on these programs, even when transmissions between a Web browser and server are encrypted, according to Microsoft and Indiana University researchers.


General

General classes of side channel attack include:

- Timing attacks, based on measuring how much time various computations take to perform.
- Power-monitoring attacks, which exploit the varying power consumption of hardware during computation.
- Electromagnetic attacks, based on leaked electromagnetic radiation (see TEMPEST).
- Acoustic attacks, which exploit sound produced during a computation.
- Thermal imaging attacks, which exploit heat emitted by hardware during a computation.

In all cases, the underlying principle is that physical effects caused by the operation of a cryptosystem (on the side) can provide useful extra information about secrets in the system, for example, the cryptographic key, partial state information, full or partial plaintexts and so forth. The term cryptophthora (secret degradation) is sometimes used to express the degradation of secret key material resulting from side channel leakage.

Examples

A timing attack watches data movement into and out of the CPU or memory on the hardware running the cryptosystem or algorithm. Simply by observing how long it takes to transfer key material, it is sometimes possible to determine the key length in that instance (or to rule out certain lengths, which can also be cryptanalytically useful). Internal stages of many cipher implementations leak partial information about the plaintext, key values and so on, and some of this information can be inferred from observed timings. Alternatively, a timing attack may simply measure how long a cryptographic operation takes end to end; this alone is sometimes enough to be cryptanalytically useful.
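To make the leak concrete, the following Python sketch (a toy illustration, not an attack on any real system) shows a comparison routine that returns as soon as a byte differs, so its running time grows with the length of the matching prefix, something an attacker can measure over many trials:

```python
import time

def leaky_compare(secret: bytes, guess: bytes) -> bool:
    # Early exit on the first mismatching byte: the running time depends
    # on how long the matching prefix is, which leaks information.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def time_guess(secret: bytes, guess: bytes, trials: int = 100000) -> float:
    start = time.perf_counter()
    for _ in range(trials):
        leaky_compare(secret, guess)
    return time.perf_counter() - start

secret = b"hunter2!"
# A guess sharing a longer prefix with the secret tends to take longer;
# on a quiet machine the difference is measurable over many trials.
t_far = time_guess(secret, b"zzzzzzzz")   # 0 matching bytes
t_near = time_guess(secret, b"hunter2?")  # 7 matching bytes
```

An attacker who can repeat such measurements can, in principle, recover a secret one byte at a time.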

A power monitoring attack can provide similar information by observing the power lines to the hardware, especially the CPU. As with a timing attack, considerable information is inferable for some algorithm implementations under some circumstances.
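A common simplification in the power-analysis literature is the Hamming-weight model: the instantaneous power drawn while handling a word is roughly proportional to the number of 1 bits in it. The toy Python model below (hypothetical units, purely illustrative) shows why two different data bytes produce distinguishable power figures:

```python
def hamming_weight(x: int) -> int:
    # Number of 1 bits in x.
    return bin(x).count("1")

def simulated_power(byte_value: int, baseline: int = 10) -> int:
    # Toy Hamming-weight leakage model: modeled power draw is a fixed
    # baseline plus one unit per 1 bit in the processed byte.
    return baseline + hamming_weight(byte_value)

print(simulated_power(0x00))  # 10: no 1 bits
print(simulated_power(0xFF))  # 18: eight 1 bits
```

Real attacks compare many recorded traces against such a model to test hypotheses about individual key bytes.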

As a fundamental and inevitable fact of electrical life, fluctuations in current generate radio waves, making whatever is producing the currents subject -- at least in principle -- to a van Eck (aka, TEMPEST) attack. If the currents concerned are patterned in distinguishable ways, which is typically the case, the radiation can be recorded and used to infer information about the operation of the associated hardware.

A recently declassified NSA document reveals that as far back as 1943, an engineer with Bell telephone observed decipherable spikes on an oscilloscope associated with the decrypted output of a certain encrypting teletype.[2] According to former MI5 officer Peter Wright, the British Security Service analysed emissions from French cipher equipment in the 1960s.[3] In the 1980s, Soviet eavesdroppers were known to plant bugs inside IBM Selectric typewriters to monitor the electrical noise generated as the type ball rotated and pitched to strike the paper; the characteristics of those signals could determine which key was pressed.[4]

If the relevant currents are those associated with a display device (i.e., highly patterned and intended to produce human-readable images), the task is greatly eased. CRT displays use substantial currents to steer their electron beams and have been 'snooped' in real time with inexpensive hardware from considerable distances (hundreds of meters have been demonstrated). LCDs require, and use, smaller currents and are less vulnerable. This is not to say they are invulnerable; some LCDs have been proven vulnerable too.[5]

Flowing currents also heat the materials through which they flow, and those materials continually lose that heat to the environment, so there is a continually changing, thermally induced mechanical stress from these heating and cooling effects. That stress appears to be the most significant contributor to low-level acoustic emissions (noise, at about 10 kHz in some cases) from operating CPUs. Recent research by Shamir et al. has demonstrated that information about the operation of cryptosystems and algorithms can be obtained in this way as well; this is an acoustic attack. If the surface of the CPU chip, or in some cases the CPU package, can be observed, infrared images can also provide information about the code being executed, which is known as a thermal imaging attack.

Countermeasures

Because side channel attacks rely on emitted information (such as electromagnetic radiation or sound) or on exploitable relationships (as in timing and power attacks), the most straightforward countermeasures are to limit the release of such information or access to those relationships. Commercially available displays are now specially shielded to lessen electromagnetic emissions, reducing susceptibility to TEMPEST attacks. Power-line conditioning and filtering can help defeat power-monitoring attacks, as can some continuous-duty UPSs. Physical security of hardware reduces the risk of surreptitious installation of microphones (to counter acoustic attacks) and other micro-monitoring devices (against CPU power-draw or thermal-imaging attacks).

Another countermeasure is to jam the emitted channel with noise. For instance, a random delay can be added to foil timing attacks. As the amount of data in the side channel increases, this rapidly becomes impractical; while useful against simple timing attacks or scripted attacks, it is not a practical countermeasure against TEMPEST attacks if the adversary is capable of sophisticated cryptanalysis (as such an adversary typically would be).
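Why averaging defeats added noise can be seen in a small simulation. Assume, purely for illustration, that an operation takes 5 time units when a key bit is 0 and 6 units when it is 1, and that the defence adds uniform random jitter of up to 10 units:

```python
import random

random.seed(1)  # deterministic for the example

def observed_time(key_bit: int) -> float:
    # True cost is 5 or 6 units depending on the key bit;
    # the countermeasure adds random jitter in [0, 10).
    return 5 + key_bit + random.uniform(0, 10)

def mean_time(key_bit: int, samples: int = 100000) -> float:
    return sum(observed_time(key_bit) for _ in range(samples)) / samples

# Averaging many measurements washes the jitter out: the two means still
# differ by roughly one unit, so the key bit remains distinguishable.
gap = mean_time(1) - mean_time(0)
```

With 100,000 samples the standard error of each mean is about 0.01 units, far smaller than the 1-unit signal, which is why jamming with noise merely forces the attacker to collect more traces rather than stopping the attack.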

The most obvious countermeasure against timing attacks is to design the software to be isochronous, that is, to run in a constant amount of time, independent of secret values. This makes timing attacks impossible.[6]
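A standard way to achieve this for secret comparisons is to accumulate the differences of all bytes instead of returning at the first mismatch. A Python sketch of the idea (the standard library's `hmac.compare_digest` is the vetted equivalent and should be preferred in real code):

```python
import hmac

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Every byte is always examined; the OR-accumulated difference means
    # the running time does not depend on where the first mismatch lies.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

# In production code, prefer the standard library's vetted routine:
same = hmac.compare_digest(b"secret", b"secret")
```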

One partial countermeasure against power attacks is to design the software to be "PC-secure" in the "program counter security model". In a PC-secure program, the execution path does not depend on secret values; in other words, all conditional branches depend only on public information. (This is a more restrictive condition than isochronous code, but a less restrictive condition than branch-free code.) Even though multiply operations draw more power than NOP on practically all CPUs, using a constant execution path prevents such operation-dependent power differences (differences in power from choosing one branch over another) from leaking any secret information.[6] On architectures where instruction execution time is not data-dependent, a PC-secure program is also immune to timing attacks and error-disclosure attacks. Almost any arbitrary code can be automatically transformed into PC-secure code, and typical cryptographic algorithms are simple enough that the transformed code usually suffers only a constant-factor slowdown compared to the original.[7][8]
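The core trick behind PC-secure code is to replace a secret-dependent branch with arithmetic. A minimal Python sketch of a branchless conditional select (Python integers are unbounded, so `-bit` acts as an all-ones mask when `bit` is 1):

```python
def select(bit: int, a: int, b: int) -> int:
    # Returns a when bit == 1 and b when bit == 0, with no
    # secret-dependent branch: mask is all ones iff bit is 1.
    mask = -bit
    return (a & mask) | (b & ~mask)

x = select(1, 0xAA, 0x55)  # 0xAA (170)
y = select(0, 0xAA, 0x55)  # 0x55 (85)
```

Both code paths execute the same instructions regardless of `bit`, so the program counter, and hence the branch-dependent component of the power trace, carries no information about the secret.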

Another way in which code can fail to be isochronous involves the memory cache of modern CPUs: accessing infrequently used memory incurs a large timing penalty, revealing information about how often particular memory blocks are used. Cryptographic code therefore tends to access memory only in a very predictable fashion (touching only the inputs, outputs and program data, and doing so according to a fixed pattern). For example, data-dependent look-up tables are avoided, because the cache could reveal which part of the table was actually accessed. Cryptographic schemes therefore favor primitives that need no such look-ups: addition, subtraction, Boolean operations, and logical shifts and rotations.
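The ChaCha quarter round is a well-known example of this design philosophy: it is built entirely from 32-bit additions, XORs and fixed rotations, so it never indexes memory with secret data. A Python rendering:

```python
def rotl32(x: int, n: int) -> int:
    # 32-bit left rotation.
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def quarter_round(a: int, b: int, c: int, d: int):
    # Add-rotate-XOR only: no secret-indexed table look-ups, so no
    # data-dependent cache footprint.
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xFFFFFFFF; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xFFFFFFFF; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 8439, section 2.1.1:
out = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
```

By contrast, a table-driven S-box implementation leaves key-dependent traces in the cache, which is exactly the leak this construction avoids.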

Other partial countermeasures attempt to reduce the amount of information leaked from data-dependent power differences. Some operations use power that depends strongly on the number of 1 bits in a secret value. Using a constant-weight code (such as using Fredkin gates or dual-rail encoding) keeps such operations from leaking information about the Hamming weight of the secret value. This "balanced design" can be approximated in software by manipulating both the data and its complement together.[6]
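In software, the dual-rail idea amounts to carrying each secret word together with its bitwise complement, so the pair's combined Hamming weight is constant. A Python sketch:

```python
def dual_rail(v: int, width: int = 8):
    # Encode v as (v, ~v): the pair always contains exactly `width`
    # one-bits, so weight-dependent power draw no longer tracks v.
    mask = (1 << width) - 1
    return v & mask, (~v) & mask

def weight(x: int) -> int:
    # Hamming weight of x.
    return bin(x).count("1")

v, v_bar = dual_rail(0b10110000)
total = weight(v) + weight(v_bar)  # always 8 for width=8
```

Hardware dual-rail logic enforces the same invariant at the gate level; the software version is only an approximation, since the two halves are processed at slightly different times.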

Several "secure CPUs" have been built as asynchronous CPUs; they have no global timing reference, which makes both timing and power attacks more difficult.[6]


References

  1. ^ Compromising Reflections -or- How to Read LCD Monitors around the Corner.
  2. ^ Wired.com
  3. ^ Cryptome.org
  4. ^ Church, George (April 20, 1987). "The Art of High-Tech Snooping". Time. http://www.time.com/time/magazine/article/0,9171,964052-2,00.html. Retrieved January 21, 2010.
  5. ^ Newscientist.com
  6. ^ a b c d "A Network-based Asynchronous Architecture for Cryptographic Devices" by Ljiljana Spadavecchia 2005 in sections "3.2.3 Countermeasures", "3.4.2 Countermeasures", "3.5.6 Countermeasures", "3.5.7 Software countermeasures", "3.5.8 Hardware countermeasures", and "4.10 Side-channel analysis of asynchronous architectures".
  7. ^ "The Program Counter Security Model: Automatic Detection and Removal of Control-Flow Side Channel Attacks"
  8. ^ Usenix.org by David Molnar, Matt Piotrowski, David Schultz, David Wagner (2005)


