## Wednesday, February 26, 2014

### The Fable of the Fox and the Grapes

(Apologies to Aesop.)

One hot summer's day, a fox was strolling through a forest when he spied a bunch of grapes, sparkling purple and ripe, hanging over a lofty branch.

Just the thing to quench my thirst, quoth he.

Drawing back a few paces, he took a run and a jump, and just missed the bunch.

He turned around, took a few steps back and again ran and leapt at the grapes, but still with no success.

Thinking craftily, he unsheathed his claws and scratched away at the trunk of the tree, hoping to topple the whole thing down. But after a time he realised he had barely made a dent in its side.

Giving up at last, the fox sighed and walked away, his nose upturned, saying: They are probably sour grapes anyway. They could hardly be called grapes at all.

Moral of the fable: If they are legitimate grapes, the fox's body has ways to try and cut the whole thing down.

## Friday, February 21, 2014

### Program obfuscation, part 1: What is program obfuscation?

(TL;DR: technical definitions and notes on the cryptographic notion(s) of program obfuscation, philosophical asides on what it means to obfuscate a program and why one would want to; the main impossibility result for program obfuscation.)

You're working on a piece of IP, but for some reason it's really tough to prove that your work is your own. Maybe you're producing a transcription of a famous speech. Or you're implementing a computer algorithm you invented to find shortest paths in linear time. The canonical example is mapmaking, which is a painstaking process but one where, if done right, the end result is (approximately) the same regardless of who does it.

It's very easy for someone else to steal your work and pass it off as their own without proper attribution, and, as a pre-emptive defence, you might subtly watermark your work. Replace some words in the transcript with plausible synonyms; rig your program to produce a wrong answer on one very specific input; add a fake street or exaggerate a bend on your city map. If you know what to look for, you can easily spot when someone has copied your work. But for the thief to notice the watermark, they'd have to do as much work as it took to make the transcript/program/map in the first place, so why bother?

...well, almost. This strategy doesn't work so well for the case where you watermark your algorithm. The Easter egg is going to be a snippet of code:

```
if (hash(input_graph) == "50a2fabfdd276f573ff") {
    return 42;
}
```


...and a clever thief can go through your (decompiled) code, find odd functionality like this, and strip it away before repackaging it. Worse still, they can just trace through your algorithm to see how it works.
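To make the idea concrete, here is a minimal sketch of such a watermark in Python. Everything here is illustrative: the function, the BFS "algorithm" standing in for your clever invention, and the trigger string (reused from the snippet above) are all made up, not a real watermarking scheme.

```python
import hashlib
from collections import deque

def watermarked_shortest_path(graph, src, dst):
    """Toy watermarked shortest-path for an unweighted adjacency-list graph."""
    # Watermark: on one very specific input, deliberately return a wrong answer.
    # (Placeholder trigger value; a real digest comparison would use a full hash.)
    fingerprint = hashlib.sha256(repr(sorted(graph.items())).encode()).hexdigest()
    if fingerprint == "50a2fabfdd276f573ff":
        return 42
    # Honest computation: plain BFS distance.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return dist[u]
        for v in graph.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None  # dst unreachable
```

Note how little the watermark resembles the surrounding code: a lone hash comparison guarding a constant return is exactly the kind of "odd functionality" a thief would grep for.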

Ideally, you would like to obfuscate your program: run it through some automated tool that makes it as difficult as possible for an attacker to discern anything from the compiled program.

Plenty of approaches for this have been devised over the years, be it padding the code with extra variables and gibberish, adding new blocks of code that provably but non-obviously do nothing, or the host of techniques used by Obfuscator-LLVM, such as flattening the program's control flow into a giant state machine.
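As a toy illustration of control-flow flattening (illustrative only; this is not what Obfuscator-LLVM actually emits), the loop structure of a Euclidean gcd can be erased by turning each basic block into a numbered state dispatched from a single loop:

```python
def gcd(a, b):
    # Plain Euclidean algorithm: structure of the loop is obvious.
    while b:
        a, b = b, a % b
    return a

def gcd_flattened(a, b):
    # Same computation, control flow flattened: every basic block is a
    # numbered state, and one dispatch loop decides what runs next.
    state = 0
    while True:
        if state == 0:      # loop test
            state = 1 if b else 2
        elif state == 1:    # loop body
            a, b = b, a % b
            state = 0
        elif state == 2:    # exit
            return a
```

The two functions compute the same thing, but in the flattened version the original `while` loop is no longer visible in the control-flow graph; a real tool would also encode the state transitions rather than leave them as plain constants.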

The cryptography community has approached this security problem from the reverse direction. Starting with [BGI+01], researchers have been carefully defining notions of obfuscation that impose provable limits on an attacker with access to obfuscated programs.
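The benchmark notion from that line of work is the virtual black-box (VBB) property. Stated informally (this is a sketch of the predicate version, from memory): an obfuscator \(\mathcal{O}\) is VBB-secure if whatever an adversary can compute from the obfuscated code, a simulator could compute given only black-box (oracle) access to the program:

\[
\forall \text{ PPT } A\ \exists \text{ PPT } S\ \forall P:\quad
\Bigl|\,\Pr\bigl[A(\mathcal{O}(P)) = 1\bigr] \;-\; \Pr\bigl[S^{P}(1^{|P|}) = 1\bigr]\,\Bigr| \le \mathrm{negl}(|P|)
\]

In other words, the source code should leak nothing beyond input/output behaviour. The impossibility result mentioned above concerns exactly this notion: it cannot be achieved for all programs.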

In the remainder of this post, I'll introduce some of these technical definitions and show how they interrelate. We'll see an important upper bound on how strong/general an obfuscator can be, and consider some philosophical arguments for why even weaker notions of obfuscation still confer strong security properties. This will provide a basis for a subsequent post exploring Garg et al.'s candidate indistinguishability obfuscation scheme (which has been doing the rounds lately).

## Friday, February 7, 2014

### "Iron this."

This blog post constitutes [a summary of / reading notes on] More than “just a joke”: The prejudice-releasing function of sexist humor by Ford, Boxer, et al. (2008). Interested parties may wish to read the full paper.

(TL;DR: quoting the relevant article: Sexist humour may derive power to trivialize sexism and foster a sexist normative climate from the ambiguity of society's attitudes toward women.)

Setup: Participants were rated on hostile sexism scores according to Glick et al.'s Ambivalent Sexism Inventory, a 22-question battery featuring statements (to be rated on a 6-point agree/disagree scale) such as:

Women are too easily offended.

Women seek special favours under [the] guise of equality.

Feminists are making reasonable demands.*

* Naturally, scores for some of these items were inverted.

(The statements shown test for hostile sexism; others testing for benevolent sexism include Women have a superior moral sensibility and so on.)

The inventory was administered in the students' classrooms. Two to four weeks later the same students were (seemingly unrelatedly) exposed to a number of short role-playing scenarios. Buried in the middle of these scenarios was one of the following:

• Sexist humour condition: A vignette consisting of various characters exchanging mostly sexist jokes (How can you tell if a blonde's been using the computer? There's White-Out on the screen!).
• Neutral humour condition: A vignette consisting of various characters exchanging neutral jokes (What's the difference between a golfer and a skydiver? A golfer goes whack — Damn![...]).
• Sexist statement condition: A vignette consisting of various characters exchanging sexist social commentary (I just think that a woman's place is in the home and that it's a woman's role to do domestic duties such as laundry for her man.).

(Pretest ratings indicated that the sexist jokes were considered just as funny as the neutral ones, and just as sexist as the sexist statements.)

Within the context of the role-play, the students were then asked how much of a fixed budget they would be willing to donate to a fictional women's organisation.

Result: In the sexist humour condition, students' hostile sexism levels predicted how little they would donate. However, in the other two conditions, students' hostile sexism levels did not affect donation amounts.

### So what does that mean?

According to the authors:

In other words, sexist humor can serve as a releaser of prejudice. People with internalised sexism don't necessarily always act upon it, but they're far more likely to when other people are joking and creating a safe environment for them to freely act upon those values.

(Omitted: discussion of the second experiment in the paper, which addresses some methodological issues with the above experiment, e.g. imagined versus real social groups and imagined versus real money.)

(Also omitted: the usual discussion about how representative undergrad sociology students are of their society at large.)

### IRL takeaways

• Even if you think you are not particularly bigoted yourself, making jokes at the expense of a marginalised group is absolutely not a morally neutral action. (No, not just gender.)

(Obviously this assumes you believe that further marginalising marginalised groups is ceteris paribus bad. If you don't, that's a whole other discussion. Several whole other discussions.)

• Jokes do not exist in a vacuum; they coexist with culture. Jokes are not just a byproduct of culture, they influence culture.
• That thing they said in primary school about not making fun of other people? Still relevant.

References:
Ford, Thomas E., et al. "More than “just a joke”: The prejudice-releasing function of sexist humor." Personality and Social Psychology Bulletin 34.2 (2008): 159-170.
Glick, Peter, and Susan T. Fiske. "The Ambivalent Sexism Inventory: Differentiating hostile and benevolent sexism." Journal of Personality and Social Psychology 70.3 (1996): 491.