Lanchester's Laws are a handful of mathematical formulae describing armed combat between two forces. I'll talk about their derivation for a little bit, then move on to dodgeball (duh).

### Ranged combat

Two armies of foes face one another. One is armed with rifles; the other with crossbows. Which army prevails?

Let's call the size of the respective armies \(x\) and \(y\), and call their soldiers' *lethality* \(\alpha\) and \(\beta\) respectively. (The lethality of a soldier is the number of enemy soldiers they can strike down per unit time.)

An army's casualty rate is equal to the total lethality of all the soldiers in the other army: \[ \left. \begin{array}{c} \frac{dx}{dt} = -\beta y \\ \frac{dy}{dt} = -\alpha x \end{array} \right\} x, y > 0 \]

A little calculus (see, e.g. [Wa00] for one derivation) gives the following relation, sometimes known as Lanchester's Square Law: \[ \beta (y_0^2 - y^2) = \alpha (x_0^2 - x^2) , \; x, y > 0 \]

(Here \(x_0\) and \(y_0\) denote the initial sizes of the two armies.)

In general, the winner of the battle depends on which is higher: \(\beta y_0^2\), or \(\alpha x_0^2\). Thus we can define an army's *strength* in a way that is completely independent of
its opponents (and so if we wanted to we could rank a collection of armies by strength). An army's strength is *the lethality of its soldiers times the square of its size*.
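Here's a quick numerical sanity check, with invented army sizes and lethalities: simulate the two ODEs with simple Euler steps and confirm that the strength comparison predicts the winner.

```python
# Simulate dx/dt = -beta*y, dy/dt = -alpha*x with Euler steps until one
# army is wiped out. Army sizes and lethalities are made-up examples.

def simulate_square_law(x0, y0, alpha, beta, dt=1e-4):
    x, y = x0, y0
    while x > 0 and y > 0:
        x, y = x - beta * y * dt, y - alpha * x * dt
    return x, y

# Army x: 100 soldiers, lethality 0.06 -> strength 0.06 * 100**2 = 600.
# Army y: 120 soldiers, lethality 0.04 -> strength 0.04 * 120**2 = 576.
# So x should win despite being outnumbered; the square-law invariant
# further predicts x survives with sqrt(x0**2 - (beta/alpha)*y0**2) = 20.
x_left, y_left = simulate_square_law(100, 120, alpha=0.06, beta=0.04)
print(round(x_left), y_left > 0)   # -> 20 False
```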

The intuition: give your soldiers guns that fire twice as fast, and they're twice as deadly.
Double your soldiers' numbers, and they're *four times* as deadly. (That's times two because
of the twice-as-big total lethality, and times another two 'cause the average soldier will survive [and keep shooting] for twice as long.)
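The "four times as deadly" claim is easy to verify directly from the strength formula (the numbers below are arbitrary):

```python
# Strength = lethality * size^2 (Lanchester's Square Law).
def strength(lethality, size):
    return lethality * size ** 2

# Doubling an army's size quadruples its strength...
assert strength(1.0, 200) == 4 * strength(1.0, 100)
# ...so it takes slightly *more* than 4x the lethality to beat a
# same-lethality army of twice the size.
assert strength(3.9, 100) < strength(1.0, 200)
assert strength(4.1, 100) > strength(1.0, 200)
print("square law checks out")
```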

The applications of this are questionable even for WWI combat, where strategy and tactical manoeuvring often accounted for a lot more than sheer numbers did — let alone modern combat, which is almost never two giant armies shooting at each other in an open field.

On the other hand, this is surprisingly useful for game design [Ad04], specifically for balancing games. Equations like this allow the designers of real-time strategy games like Starcraft or Age of Empires (sequel pictured above) to estimate whether the games' different factions are competitive with one another.

I'd wager that laws like this apply to any game where the worse off you are, the harder it is to be effective. Civilisation V. Vanilla Magic: the Gathering. Maybe even Monopoly. (On the other hand, probably not Tekken or Call of Duty, since in those games your raw lethality isn't affected by how much damage you've suffered.)

### Ranged combat with bad aim

Same setup, slightly different assumption. Let's assume that the soldiers are all *very* bad at aiming. (Maybe they're fighting in the dark.) They fire randomly in the enemy army's
direction. This means that it's a hundred times harder to hit a lone gunwoman than an army of a hundred. This is effectively how action movies work.

We model this thusly: \[ \left. \begin{array}{c} \frac{dx}{dt} = -\beta xy \\ \frac{dy}{dt} = -\alpha xy \end{array} \right\} x, y > 0 \]

Solving these equations gives us what is sometimes called Lanchester's Linear Law: \[ \beta (y_0 - y) = \alpha (x_0 - x) , \; x, y > 0 \]

Thus the armies whittle away at each other at the same proportional rate, and whichever of \(\alpha x_0\) or \(\beta y_0\) is larger determines the winner. It's a game of attrition.
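Here's a minimal sketch of the bad-aim battle with invented parameters. As it happens, the Euler update preserves the linear-law invariant (up to floating-point noise), and the \(\alpha x_0\) vs \(\beta y_0\) comparison picks the winner:

```python
# Simulate dx/dt = -beta*x*y, dy/dt = -alpha*x*y (unaimed fire) and check
# the invariant beta*(y0 - y) = alpha*(x0 - x). Parameters are made up.

def simulate_linear_law(x0, y0, alpha, beta, dt=1e-3, steps=200_000):
    x, y = x0, y0
    for _ in range(steps):
        x, y = x - beta * x * y * dt, y - alpha * x * y * dt
    return x, y

x0, y0, alpha, beta = 100, 120, 0.002, 0.001
x, y = simulate_linear_law(x0, y0, alpha, beta)
print(abs(beta * (y0 - y) - alpha * (x0 - x)) < 1e-6)   # invariant holds
# alpha*x0 = 0.2 beats beta*y0 = 0.12, so team x wins; when y -> 0 the
# invariant gives x -> x0 - (beta/alpha)*y0 = 40 survivors.
print(round(x))   # -> 40
```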

Compare and contrast to the law of Conservation of Ninjitsu, which posits that *any* two armies are approximately evenly matched, regardless of size or weapon strength.

### Dodgeball with bad aim

Let's talk about a variation of dodgeball I used to play, where whenever you get hit by a ball, you switch to the opposite team. (Yes, nobody's ever out 'til the game's over. There are no losers: how very 21st century primary school.)

There are obvious similarities to the military battles that Lanchester's Laws describe, but instead of casualties eliminating people from the field, we merely have people swapping between sides.

To start with, let's assume the kids playing this game have terrible aim (not unreasonable). Each hit swaps a player rather than removing one, so team \(x\) gains converts at rate \(\alpha xy\) and loses them at rate \(\beta xy\). The equations here are similar to those for Lanchester's Linear Law: \[ \left. \begin{array}{c} \frac{dx}{dt} = (\alpha - \beta)xy \\ \frac{dy}{dt} = (\beta - \alpha)xy \end{array} \right\} x, y > 0 \]

Eliminating \(t\) as before gives: \[\frac{dx}{dy} = -1\]

...which is completely useless. It tells us that as one team grows, the other shrinks, which we already knew. Instead, let's look at how the team sizes change over time. Let \(N > 0\) be the total number of players, a constant, so that \(y = N - x\), and let \(\gamma = \alpha - \beta\), i.e. the lethality advantage of team \(x\). Then:

\[ \begin{align} \frac{dx}{dt} &= \gamma x (N-x) \\ \frac{dx}{x(N-x)} &= \gamma \; dt \\ \ln \frac{x}{N-x} &= \gamma N t + C \\ x &= N \left(\dfrac{Ae^{\gamma N t}}{Ae^{\gamma N t} + 1}\right) \end{align} \]

The equation makes sense: if \(\gamma\) is positive, i.e. team \(x\) is stronger, then \(x \rightarrow N\) as \(t \rightarrow \infty\). Similarly, if team \(y\) is stronger, then \(x\) approaches zero over time.

Interpreting this: \(\frac{A}{A+1}\) is the fraction of players who are on team \(x\) at time 0; hence \(A\) is the ratio of \(x\) players to \(y\) players at time 0. That said, the approach is only asymptotic: the game drags on more and more slowly over time, and then suddenly recess is over.
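The closed-form solution can be sanity-checked against a direct simulation of the ODE; all the numbers below (\(N\), \(\gamma\), starting team size) are invented:

```python
import math

# Compare the bad-aim dodgeball ODE dx/dt = gamma*x*(N - x) against its
# closed-form logistic solution x = N * A*exp(gamma*N*t) / (A*exp(gamma*N*t) + 1).

def x_closed_form(t, x0, N, gamma):
    A = x0 / (N - x0)                  # initial ratio of team x to team y
    g = A * math.exp(gamma * N * t)
    return N * g / (g + 1)

def x_simulated(t_end, x0, N, gamma, dt=1e-4):
    x = x0
    for _ in range(int(t_end / dt)):
        x += gamma * x * (N - x) * dt  # Euler step
    return x

N, gamma, x0 = 30, 0.01, 10            # 30 kids; team x slightly stronger
print(abs(x_simulated(2.0, x0, N, gamma) - x_closed_form(2.0, x0, N, gamma)) < 0.01)
```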

But it's barely worth getting any deeper into this, because our use of \(\alpha\) and \(\beta\) implies that the two teams have different skill levels, which makes no sense if kids are swapping between teams all the time. (The only conceivable exception is if being on one team confers an extra advantage; e.g. if everyone on Team Emu stands in the parking lot while everyone on Team Cockatoo drops balls down on them from the second storey.)

So in all practicality, \(\gamma = 0\), i.e. the game is a complete equilibrium, and kids move between the teams at equal rates, regardless of which team is bigger.

### Dodgeball with good aim

Now, what if we assume the kids can aim? This brings us back to the concentrated-fire conditions of Lanchester's Square Law:
\[
\left.
\begin{array}{c}
\frac{dx}{dt} = \alpha x - \beta y \\
\frac{dy}{dt} = \beta y - \alpha x
\end{array}
\right\}
x, y > 0
\]

*This* gives us the following solution (letting \(N = x + y\)):
\[
x = \dfrac{\beta}{\alpha + \beta}N + Ae^{(\alpha + \beta) t}
\]

(...up until one of the teams loses.)

Unlike the previous scenario, this is still vaguely interesting when the teams are evenly matched, i.e. when \(\alpha = \beta\): \[ x = \frac{N}{2} + Ae^{2 \alpha t} \]

Here \(A\) can be interpreted as the initial value of \(\frac{x-y}{2}\), i.e. the (scaled) difference between initial team sizes.

There are two possible cases in which this is constant:

- when \(A = 0\), i.e. when the game starts with *exactly* equal numbers between the teams, *or*
- when \(\alpha = 0\), i.e. when none of the players are able to hit each other because they are all toddlers and they cannot throw for shit.

Notice that this is a highly unstable equilibrium: if even one extra player joins team \(y\), then team \(x\) is sunk, and their size exponentially snowballs downwards until it smashes to zero with a painful *thunk*.
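A tiny simulation (invented team sizes and \(\alpha\)) shows the knife-edge: a perfectly balanced game sits at \(N/2\) forever, while a single extra player snowballs until their team absorbs everyone.

```python
# Simulate dx/dt = alpha*x - beta*y with alpha == beta (evenly skilled
# teams), stopping once one team has absorbed everyone. Numbers invented.

def simulate(x0, N, alpha, t_end, dt=1e-4):
    x, t = x0, 0.0
    while t < t_end and 0 < x < N:
        x += (alpha * x - alpha * (N - x)) * dt    # beta = alpha
        t += dt
    return x

N, alpha = 30, 0.1
print(simulate(15, N, alpha, t_end=20))    # A = 0: stays exactly at N/2
print(simulate(16, N, alpha, t_end=20))    # one extra player: x hits N
```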

### Conclusion

Kids' games can be pretty bloody unfair.
