Wednesday, February 11, 2015

In praise of counterfactuals

“Look, just, say for the sake of argument that anthropogenic climate change is real,” he says. “Would you support government spending to combat it then?”

There’s the temptation, of course, to reply that the question is nonsense because Andrew Wakefield proved in 1998 that the planet is actually cooling by two degrees every year; hell, he published a paper and everything. But once you’ve done that (as is probably best with trolls like this fellow you’re arguing with), there’s personal value in taking the question seriously.

It’s a yes-or-no question, admitting two obvious strains of answer: “Even if it were so, that wouldn’t change anything” and “Well, in that case, of course things would be different”. Either of those, by itself, is boring: a pre-packaged answer recited in two seconds.

The exciting part is getting to ask Why?

Credit: kevron2001 / deviantArt

I sat down to write a blog post about the ethics/pragmatics of particular kinds of rhetoric. (Implicit versus explicit universal quantifiers, if you care. It’s beside the point, because as I’m going to reveal below, I got sidetracked.)

I got sidetracked.

The post was going to open with “For the purposes of this post, I’m going to start from the assumption that X is inappropriate in context Y”. I was hedging that point specifically because “X is inappropriate in context Y” is contentious, but it’s awfully impractical to prepend “and so assuming so-and-so...” to every sentence, so, ugh, why not get it over with up front.

It’s a little like picking a scientific paradigm, an article of faith, an axiom system. You want to make a point in a context. It’s hardly unusual; the vast majority of arguments are built upon some kind of premise.

Picking your premises is like picking a research programme in the sense of Lakatos: your argument exists within a programme of thinkers building a shared body of understanding in the context of a socially agreed collection of base assumptions. “Phlogiston explains everything.” “Electrons orbit nuclei like planets around stars.” “Central planning produces better results than markets.” “Markets produce better results than central planning.” “Gender is performative.” “Improving the plight of our country’s poor is more important than that of other countries’.” You take your base beliefs and you do important work with others who share them.

Picking your premises is an article of faith: it’s an inferential gap, an uncrossable divide. If you say “Given that morality is relative...” you’ve lost the attention/credulity of the moral realists in the room. They don’t have to accept any argument that extends from premises they don’t believe. This is true even if the premises are provably right or wrong! Proof is social, proof is contextual.

Those people probably think they’re provably correct, too.

(Human beliefs are not closed under logical implication, and not necessarily consistent. Imagine how redundant most STEM education would be if it were otherwise.)

But most interestingly, yes, picking your premises is like picking an ‘axiom system’. And where this particular analogy shines is that sometimes there’s a huge boon to playing around with the implications of an axiom system you don’t necessarily believe.

Philosophical arguments and mathematical proofs may ofttimes differ in surface syntax, but they’re both very much about exploring the connection between ideas and concepts. A lecturer can hand me an assignment with “Assume that \(P = NP\)” at the top of one question and “Assume that \(P \neq NP\)” on the next. There’s a good chance that one of these is true. (A good chance, from where I’m sitting, not a certainty. Fuck your law of excluded middle, here’s my middle.) Certainly one of these is false. But there is still value in both these questions.
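Proof assistants make this literal: an assumption is just a hypothesis in scope, and you can derive consequences from a proposition in one exercise and from its negation in the next without either derivation being “wrong”. A minimal sketch in Lean 4 (where `P` is an arbitrary proposition standing in for a claim like \(P = NP\), not the real thing):

```lean
-- Two exercises, two incompatible assumptions, both locally valid.
variable (P Q : Prop)

-- Exercise 1: "Assume P." The proof lives entirely inside that assumption.
example (h : P) : P ∨ Q := Or.inl h

-- Exercise 2: "Assume ¬P." A different, equally coherent world.
example (h : ¬P) : P → Q := fun hp => absurd hp h

-- Fittingly, Lean's core logic is intuitionistic: `P ∨ ¬P` is not
-- available for free unless you opt into `Classical` reasoning,
-- so even "one of these must be true" is itself an extra axiom.
```

At most one of the two assumptions matches reality, yet both exercises teach you something about what follows from what.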

Still from Elementary, 2x02, Solve for X. CBS, 2013.

Exploring the consequences of falsehoods (let alone merely things you’re not one hundred percent certain about) is a way of understanding the ways in which you are tied to the contingent and the ways in which you aren’t.

If your grandmother’s ghost wasn’t literally real and you’d really just been imagining her scratching on the windows at night, some things would change. You wouldn’t salt your windowsills before going to bed, for instance. But other things wouldn’t change. You still wouldn’t throw rubbish on her grave. Some of your behaviours and beliefs depend on whether or not you think that ghost is real or just branches making noise, and the counterfactual gives you the ability to play with those beliefs in a safe hypothetical context.

I’m just starting to browse through Alicorn’s social justice AUs (tag).

In one note, she speaks of the project not as direct analogies that can be verbatim exported to the real world (as happens when soapboxes and fiction mix poorly) but rather as a context in which one can more readily step away from their object-level beliefs to examine the language and the meta-concerns:

Most of [these AUs] are just “Have a look at this rhetoric when it’s about something that isn’t real, so you can look directly at it without being distracted by its personal relevance to you, your friends, and your political battles.”

You see the threads binding your beliefs best when you start moving them around a bit. Like (good) science fiction, or fiction in general, stepping a hair’s breadth away from reality can be enough to grant you a whole new perspective.