(TL;DR: technical definitions and notes on the cryptographic notion(s) of program obfuscation, philosophical asides on what it means
to obfuscate a program and why one would want to;
the main impossibility result for program obfuscation.)
Let's talk about copyright Easter eggs.
You're working on a piece of IP, but for some reason it's really tough to prove that your work is your own. Maybe you're producing a transcription of a famous speech. Or you're implementing a computer algorithm you invented to find shortest paths in linear time. The canonical example is mapmaking, which is a painstaking process but one where, if done right, the end result is (approximately) the same regardless of who does it.
It's very easy for someone else to steal your work and pass it off as their own without proper attribution, so, as a pre-emptive defence, you might subtly watermark your work: replace some words in the transcript with plausible synonyms; rig your program to produce a wrong answer on one very specific input; add a fake street or exaggerate a bend on your city map.
If you know what you're looking for, you can easily spot when someone has copied your work. But for them to notice the error, they'd have to be doing as much work as it took to make the transcript/program/map in the first place, so why bother?
...well, almost. This strategy doesn't work so well for the case where you watermark your algorithm. The Easter egg is going to be a snippet of code:
if (hash(input_graph) == "50a2fabfdd276f573ff") { return 42; }
...and a clever thief can go through your (decompiled) code, find odd functionality like this, and strip it away before repackaging it.
Worse still, they can just trace through your algorithm to see how it works.
Ideally, you would like to obfuscate your program: run it through some automated tool that makes it as difficult as possible for an attacker to descry anything from the compiled program.
Plenty of approaches for this have been devised over the years, be it padding the code with extra variables and gibberish, adding new blocks of code that provably but non-obviously do nothing, or the host of techniques used by Obfuscator-LLVM, such as flattening the program's control flow into a giant state machine.
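To make the last of these techniques concrete, here's a toy Python sketch of control-flow flattening: a small function rewritten by hand as a dispatch loop over numbered states. (This is only an illustration of the idea; Obfuscator-LLVM applies the transformation to LLVM IR, not to source code like this.)

```python
# Original function: straightforward, structured control flow.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

# Flattened version: the same control flow expressed as a dispatch loop
# over opaque numbered states, hiding the loop structure.
def gcd_flattened(a, b):
    state = 0
    while True:
        if state == 0:      # loop condition check
            state = 1 if b != 0 else 2
        elif state == 1:    # loop body
            a, b = b, a % b
            state = 0
        elif state == 2:    # exit
            return a

assert gcd_flattened(252, 105) == gcd(252, 105) == 21
```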
The cryptography community has approached this security problem in the reverse direction. Starting with [BGR01], researchers have been carefully defining notions of obfuscation which impose provable limits on an attacker with access to obfuscated programs.
In the remainder of this post, I'll introduce some of these technical definitions and show how they inter-relate. We'll see an important upper bound on how strong/general an obfuscator can be, and consider some philosophical arguments for why even the weaker notions of obfuscation still confer strong security properties. This will provide a basis for a subsequent post exploring Garg, Gentry and Halevi's candidate indistinguishability obfuscation scheme (which has been doing the rounds lately).
Terminology
Some shorthand terms borrowed from the relevant literature:
- Efficient: polynomial time, i.e. \(O(n^k)\) for some fixed \(k\).
- Negligible: superpolynomially small, i.e. \(o\left(\frac{1}{n^k}\right)\) for all \(k\). We write \(a \simeq b\) to denote that \(|a-b|\) is negligible.
- \(PPM\): efficient randomised algorithms (short for probabilistic polynomial-time Turing machine).
Black boxes
One very natural notion of what it means for code to be obfuscated is "all an attacker can do is run the program". This leads us to the notion of black box obfuscation, wherein an attacker who just runs the obfuscated program as a black box is just as [non-]powerful as an attacker who tries to inspect the source code.
Definition (black box obfuscation): A black box obfuscator for some (deterministic/probabilistic) Turing machine class \(\mathcal{M}\) is a (potentially randomised) algorithm \(\mathcal{O}\) that, given a machine \(M \in \mathcal{M}\), outputs a new (deterministic/probabilistic) Turing machine \(\mathcal{O}(M)\), with the following properties:
- Functional equivalence: the programs compute the same thing.
I.e. for all \(M \in \mathcal{M}\), for any input \(x\) and expected output \(y\), \[\Pr\left[\mathcal{O}(M)(x) = y\right] \simeq \Pr\left[M(x) = y\right] \]
- Polynomial slowdown: the obfuscated program is at most polynomially longer, and polynomially slower, than the original program.
- Black box obfuscation: anything an attacker could learn from \(\mathcal{O}(M)\), they could have learned from running the program as a black box.
I.e. for any attacker \(\mathcal{A} \in PPM\), there exists an attacker-simulator (with black box oracle access to some "plug and play" oracle \(\langle M \rangle\)) \(S^{\langle M \rangle} \in PPM^{\langle M \rangle}\), so that for all \(M \in \mathcal{M}\) and all possible things \(y\) the attacker could learn: \[\Pr\left[\mathcal{A}(\mathcal{O}(M)) = y\right] \simeq \Pr\left[S^M(1^{|M|}) = y\right] \]
A black box obfuscator for a circuit family is defined analogously.
Example: Suppose one-way functions exist. Then there is a black box obfuscator for the class of "password verify" programs \(\{PV_\alpha \; | \; \alpha \in \{0,1\}^*\}\), where \(PV_\alpha\) is the program that outputs \(1\) if its input is \(\alpha\), and \(0\) otherwise.
An obfuscator for this program class would replace \(PV_\alpha\) with a machine that compares the hash of its input to the hash of \(\alpha\).
An attacker who learns anything about an obfuscated program using non-black-box techniques would effectively invert the one-way function, a contradiction.
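Here's a minimal Python sketch of such an obfuscator, with SHA-256 standing in (heuristically) for the assumed one-way function; the function names are mine, purely for illustration.

```python
import hashlib

def obfuscate_password_check(alpha: bytes):
    """Return an 'obfuscated' PV_alpha: only a hash of alpha is baked into
    the returned program, never alpha itself. SHA-256 stands in for the
    one-way function assumed to exist."""
    digest = hashlib.sha256(alpha).digest()

    def pv(x: bytes) -> int:
        return 1 if hashlib.sha256(x).digest() == digest else 0

    return pv

check = obfuscate_password_check(b"hunter2")
assert check(b"hunter2") == 1
assert check(b"password") == 0
```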
Unfortunately, black box obfuscators cannot, in general, be realised:
Theorem 1 (no black box obfuscator for TMs): There is no black box obfuscator for Turing Machines.
Proof (sketch): We're going to define a class of machines, \(\{X_{\alpha, \beta, \alpha', \beta'}\}\), for which an attacker can easily learn whether \((\alpha, \beta) = (\alpha', \beta')\) by looking at the source code, but for which this can't be learned by simply running the program. (The parameters \(\alpha, \beta, \alpha', \beta'\) are all binary strings of the same length.)
The program \(X_{\alpha, \beta, \alpha', \beta'}\) has two modes. If the first bit of the input is \(0\), the program runs in check-and-say mode; if it's \(1\), the program runs in say-and-check mode.
The two modes are akin to two halves of a secret handshake, like two spies exchanging secret passphrases in Cold War-era Vienna.
In check-and-say mode, the program expects a passphrase \(\alpha\), and outputs \(\beta\) if and only if the correct passphrase is given (and the null string otherwise).
In say-and-check mode, the program expects a description of a TM \(M\), and outputs \(SUCCESS\) if and only if passing input \(\alpha'\) to \(M\) produces output \(\beta'\) (and the null string otherwise)*.
Given an obfuscated machine \(\mathcal{O}(X_{\alpha, \beta, \alpha', \beta'})\), if an attacker with source code access wants to check whether \((\alpha, \beta) = (\alpha', \beta')\), they need only pass the first mode (i.e. a program that prepends \(0\) to its input and then runs \(\mathcal{O}(X_{\alpha, \beta, \alpha', \beta'})\)) to the second mode and read the output.
On the other hand, it is near-impossible to determine this by black box examination**. (Intuitively, this is because there is negligible probability of a query producing anything but null, hence negligible probability of learning anything by running the program.)
* The only catch here is that say-and-check mode cannot simulate its input program forever, but this is easily solved by having the simulation terminate after a small (read: polynomial in the input size) number of steps. If the obfuscator \(\mathcal{O}\) is fixed, we can pick a sufficiently large polynomial to bound the simulation time. (Alternatively, we can further parameterise our class of machines by a choice of polynomial, so that this class of machines cannot be obfuscated regardless of our choice of \(\mathcal{O}\).)
** Given an attacker-simulator \(S^{\langle M \rangle} \in PPM^{\langle M \rangle}\), and drawing \(\alpha, \beta, \alpha', \beta'\) at random (s.t. \((\alpha, \beta) \neq (\alpha', \beta')\)), the probability that \(S\) behaves differently on \(X_{\alpha, \beta, \alpha', \beta'}\) and \(X_{\alpha, \beta, \alpha, \beta}\) is negligible. By an analogue of the Pigeonhole Principle, this means there is at least one pair of such programs which \(S\) cannot distinguish.
Actually, it's a little worse than this. Our construction uses only polynomial-time-boxed (i.e. easily provably polynomial) deterministic Turing machines, so there's no black box obfuscator for even this constrained subset of \(P\). It appears that seeking black box obfuscators for most "interesting" classes of program is a moot exercise.
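To make the construction concrete, here's a minimal Python sketch of the two-mode program and the source-code attack, using Python callables in place of Turing machine descriptions and ignoring the step-bounding caveat from the first footnote.

```python
def make_X(alpha, beta, alpha_p, beta_p):
    """The two-mode program X_{alpha, beta, alpha', beta'}."""
    def X(first_bit, payload):
        if first_bit == 0:
            # check-and-say mode: reveal beta only on the correct passphrase
            return beta if payload == alpha else None
        # say-and-check mode: run the supplied program (a callable standing in
        # for a TM description; a real construction would bound its steps)
        # on alpha' and check whether it outputs beta'
        return "SUCCESS" if payload(alpha_p) == beta_p else None
    return X

def source_code_attack(program):
    """Feed 'prepend 0 and run the program' into the program's own second
    mode; outputs SUCCESS exactly when (alpha, beta) == (alpha', beta')."""
    prepend_zero_and_run = lambda x: program(0, x)
    return program(1, prepend_zero_and_run)

# The attack works on X itself, and equally on any functionally equivalent
# reimplementation (e.g. an obfuscation of X), since it only needs to plug
# the program into itself.
assert source_code_attack(make_X("a", "b", "a", "b")) == "SUCCESS"
assert source_code_attack(make_X("a", "b", "x", "y")) is None
```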
Alternative models of obfuscation
In this section we consider two weaker notions of obfuscation. The first of these is best-possible obfuscation, which formalises the intuition that "an obfuscated program's code should reveal as little as possible". In particular, we require that an obfuscated program's code reveal no more information than any functionally equivalent program (of the original program's size) does; this implies that our obfuscator's choice of functionally equivalent program was "the best possible".
Definition (best-possible obfuscation): A best-possible obfuscator for some Turing machine class \(\mathcal{M}\) is an algorithm \(\mathcal{O}\), mapping machines \(M \in \mathcal{M}\) to machines, with the following properties:
- Functional equivalence.
- Polynomial slowdown.
- Best-possible obfuscation: anything an attacker could learn from \(\mathcal{O}(M)\), they could have learned from an equivalent program of similar size to \(M\).
I.e. for any attacker \(\mathcal{A} \in PPM\), there exists an attacker-simulator \(S \in PPM\), so that for all functionally equivalent pairs of programs \((M_1, M_2) \in \mathcal{M}^2\) with equal size (\(|M_1| = |M_2|\)), and all possible things \(y\) the attacker could learn: \[\Pr\left[\mathcal{A}(\mathcal{O}(M_1)) = y\right] \simeq \Pr\left[S(M_2) = y\right] \]
Our next notion of obfuscation is indistinguishability obfuscation, which advances the requirement that "an obfuscator should discard as much of the input program's structure as possible". In particular, we require that for two functionally equivalent programs of the same size, their obfuscations should be indistinguishable.
Definition (indistinguishability obfuscation): An indistinguishability obfuscator for some Turing machine class \(\mathcal{M}\) is an algorithm \(\mathcal{O}\), mapping machines \(M \in \mathcal{M}\) to machines, with the following properties:
- Functional equivalence.
- Polynomial slowdown.
- Indistinguishability obfuscation: anything an attacker could learn from \(\mathcal{O}(M_1)\), they could have learned from an equivalent \(\mathcal{O}(M_2)\).
I.e. for any attacker \(\mathcal{A} \in PPM\), for all functionally equivalent pairs of programs \((M_1, M_2) \in \mathcal{M}^2\) with equal size (\(|M_1| = |M_2|\)), and all possible things \(y\) the attacker could learn: \[\Pr\left[\mathcal{A}(\mathcal{O}(M_1)) = y\right] \simeq \Pr\left[\mathcal{A}(\mathcal{O}(M_2)) = y\right] \]
Both these notions extend readily to circuit families.
Example (circuit canonical form): Consider the class \(\mathcal{C}\) of all Boolean circuits. An (incredibly slow) indistinguishability obfuscator for this class is an algorithm that, given an input circuit \(C \in \mathcal{C}\), finds the smallest, then lexicographically smallest circuit which is equivalent (testing for equivalence by brute force).
Clearly, any two functionally equivalent circuits will be obfuscated to the exact same "canonical form", so no attacker can tell them apart.
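Here's a toy Python sketch of this canonical-form obfuscator, restricted to circuits (really, formulas) over two variables, and with "lexicographically smallest" approximated by a fixed enumeration order.

```python
from itertools import product

VARS = ("a", "b")   # a toy class: circuits over two inputs

def evaluate(f, env):
    """Evaluate a formula: bools and variable names are leaves;
    tuples are ('not', g), ('and', g, h) or ('or', g, h)."""
    if isinstance(f, bool):
        return f
    if isinstance(f, str):
        return env[f]
    op, *args = f
    vals = [evaluate(g, env) for g in args]
    if op == "not":
        return not vals[0]
    return all(vals) if op == "and" else any(vals)

def truth_table(f):
    return tuple(evaluate(f, dict(zip(VARS, bits)))
                 for bits in product((False, True), repeat=len(VARS)))

def formulas(size):
    """Enumerate formulas with exactly `size` nodes, in a fixed order."""
    if size == 1:
        yield False
        yield True
        yield from VARS
        return
    for g in formulas(size - 1):
        yield ("not", g)
    for left in range(1, size - 1):
        for g in formulas(left):
            for h in formulas(size - 1 - left):
                yield ("and", g, h)
                yield ("or", g, h)

def canonicalize(circuit):
    """The 'obfuscator': return the first equivalent formula found when
    enumerating by increasing size. Brute force; illustration only."""
    target = truth_table(circuit)
    size = 1
    while True:
        for f in formulas(size):
            if truth_table(f) == target:
                return f
        size += 1

# Two different-looking but equivalent circuits share one canonical form...
c1 = ("and", "a", ("or", "b", ("not", "b")))                # a AND (b OR NOT b)
c2 = ("or", ("and", "a", "b"), ("and", "a", ("not", "b")))  # (a AND b) OR (a AND NOT b)
assert canonicalize(c1) == canonicalize(c2) == "a"
# ...and an unsatisfiable circuit collapses to False, playing the role of ⊥.
assert canonicalize(("and", "a", ("not", "a"))) is False
```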
In light of the above example, we might ask whether indistinguishability obfuscation can really be thought of as "obfuscation" at all.
After all, it reduces circuits like this...
\[(a \lor b) \land (c \lor d) \land (e \lor f)
\land (\lnot a \lor \lnot c)
\land (\lnot c \lor \lnot e) \\
\land (\lnot e \lor \lnot a)
\land (\lnot b \lor \lnot d)
\land (\lnot d \lor \lnot f)
\land (\lnot f \lor \lnot b)\]
...into this...
\[\bot\]
...which is decidedly easier for an attacker to analyse.
But this obfuscator is ridiculously slow, running in \(O(4^n)\) time in the size of its input. If we restrict our focus to efficient indistinguishability obfuscators, however, they turn out to be powerful indeed.
Example (proof checkers): Consider the family \(\{C_n\}\) of Boolean circuits, comprising two kinds of circuit:
- Proof checker circuits, which check if their input bitstring is an axiomatic proof of some open mathematical problem (e.g. the Riemann Hypothesis, \(P \neq NP\), \(P = NP\)), and return \(0\) or \(1\) according to failure/success.
- Zero circuits, which always return \(0\).
An efficient indistinguishability obfuscator \(\mathcal{O}\) for this (constrained, polynomial) class of circuits produces indistinguishable output whether it is fed a proof checker circuit for a false statement (e.g. a proof checker for "Fermat's Last Theorem is false") or a zero circuit. There are two main ways in which it might do this:
Firstly, it might "realise" that the statement the proof checker is considering is false (or: unprovable within the given proof length bounds), and output a zero circuit accordingly.
But these problems are intractable, and it seems implausible that they could mechanistically be solved in polynomial time.
(An alternate argument: Boolean satisfiability readily reduces to proof checking, so this obfuscator would have to be powerful enough to solve \(NP\)-complete problems; see the sketch at the end of this example.)
Secondly (and more plausibly), it might perform some "dumb" (random) transformation to its input program without attempting to prove any properties about its behaviour. In this case, for the obfuscations of zero circuits and (impossible-)proof checker circuits to be indistinguishable, the transformation must be noisy/one-way enough for an attacker to be unable to see anything of the zero circuit's simplicity afterwards. This justifies describing the transformation as "obfuscation".
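As an aside, here's a sketch of the reduction mentioned in the parenthetical above: from a CNF formula we can build a "proof checker" whose proofs are satisfying assignments; it computes the same function as the zero circuit exactly when the formula is unsatisfiable, so an obfuscator that reliably detected that equivalence would be deciding satisfiability.

```python
def cnf_checker(clauses):
    """A 'proof checker' for the statement 'this CNF formula is satisfiable':
    a proof is an assignment, and checking it means evaluating the clauses.
    Clauses are lists of non-zero ints, DIMACS-style (-2 means NOT x2)."""
    def check(assignment):  # assignment: a tuple of booleans
        return int(all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                       for clause in clauses))
    return check

def zero_circuit(assignment):
    return 0

# (x1 OR x2) AND (NOT x1) AND (NOT x2) is unsatisfiable, so its checker
# computes the same function as the zero circuit -- but detecting that
# equivalence is exactly deciding the (un)satisfiability of the formula.
checker = cnf_checker([[1, 2], [-1], [-2]])
assert all(checker(bits) == zero_circuit(bits)
           for bits in [(False, False), (False, True), (True, False), (True, True)])
```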
Some lemmas regarding types of obfuscator
Lemma 2: Black box obfuscators are indistinguishability obfuscators.
Proof: Let an efficient black box obfuscator \(\mathcal{O}\), an attacker \(\mathcal{A}\), and a pair of functionally equivalent same-length programs \((M_1, M_2)\) be given. From the definition of black box obfuscation, obtain the "plug-and-play" simulator \(S^{\langle M \rangle}\).
\(S^{M_1}\) and \(S^{M_2}\) must behave the same, since the oracles are equivalent. But then: \[\Pr\left[\mathcal{A}(\mathcal{O}(M_1)) = y\right] \simeq \Pr\left[S^{M_1}(1^{|M_1|}) = y\right] \simeq \Pr\left[\mathcal{A}(\mathcal{O}(M_2)) = y\right] \] Indistinguishability follows by transitivity of \(\simeq\).
Lemma 3: Best-possible obfuscators are indistinguishability obfuscators.
Proof: Given \(\mathcal{A}\), obtain \(S\) from our best-possible assumption, and notice that: \[\Pr\left[\mathcal{A}(\mathcal{O}(M_1)) = y\right] \simeq \Pr\left[S(M_2) = y\right] \simeq \Pr\left[\mathcal{A}(\mathcal{O}(M_2)) = y\right] \]
Lemma 4: Efficient indistinguishability obfuscators are (efficient) best-possible obfuscators.
Proof (sketch): Given an attacker \(\mathcal{A}\), take the attacker-simulator to be the composition \(S = \mathcal{A} \cdot \mathcal{O}\), which (by our efficiency assumption) runs in polynomial time. Then \[\Pr\left[S(M_2) = y\right] = \Pr\left[\mathcal{A}(\mathcal{O}(M_2)) = y\right] \simeq \Pr\left[\mathcal{A}(\mathcal{O}(M_1)) = y\right]\] as required.
Corollary 5: Efficient black-box obfuscators are (efficient) best-possible obfuscators.
So as long as we restrict ourselves to efficient obfuscators, indistinguishability is "the best we can do".
References:
[BGR01] Barak, Goldreich, Impagliazzo, Rudich, et al. "On the (im)possibility of obfuscating programs." Advances in Cryptology (CRYPTO 2001). Springer Berlin Heidelberg, 2001.
[GSR07] Goldwasser, Rothblum. "On best-possible obfuscation." Theory of Cryptography (TCC 2007). Springer Berlin Heidelberg, 2007. 194-213.