It’s a snowy day here, so it’s a great time to sit down and do some thinking about games. Lately, the subject of the runaway leader problem seems to be on everyone’s minds. I’ve heard 4 or 5 different podcasts that have spent a lot of time talking about it and potential balancing methods like rubber banding. Even my own article on luck references it. I wonder what happened to get everybody talking about it all at once.

I had been writing down my thoughts when I read a great post about positional balance from Precipice Games that not only references most of the articles that appeared and the games that were used as examples, but also hits a lot of the points I wanted to make, so I decided to refocus and take a different approach. I particularly felt the need to go back and address some of the commonly referenced games from a different angle.

First, Mario Kart is often called out as a big example of rubber-banding, where the player who is ahead gets slowed down and the player in the back gets all the great items. Looking back at the history of the series, this was much more subtle in the original Super Mario Kart and Mario Kart 64, where your item distribution changes, but a good driver can overcome this effect and still win. By the time Mario Kart Wii rolled around, they had pushed this effect so far that good driving has very little to do with winning, and in fact the leader is punished by the game. Why would I want to play a game that punishes me for playing well? The first-place player is literally slowed down, other players have the ability to draft to overtake, and you get only useless items unless you’re in the back of the pack. (The CPU players don’t seem to suffer these effects, which makes me think the game is actually cheating.) The result is a game where everyone is bunched up and the outcome is fairly random.

This is an effect of feedback, in which your rate of progress through the game is somehow related to your current progress. Feedback can take many forms. Some common examples are building things that give you extra abilities or resources to use later, taxes that hurt everyone proportionally, and turn order effects. Really, any time you have to put something you have back into the game in order to continue, it is a form of feedback.

For a different take on the subject, let’s look at this from a mathematical angle. Warning: it gets a bit more mathy from here on out, so run away now if you have a fear of calculus. Or just skip to the bottom for the useful stuff.

Let’s start with a simple game, with no effects of feedback. All players approach the goal at the same rate throughout the game. How do we figure out your progress? Well, your rate R of progress on turn t is roughly constant throughout the game, R[t]≈C_{0}. Using integration (Math!) over the number of turns, your progress is basically the constant gain, C_{0}, times the number of turns: P[t]≈C_{0}t. Progress through the game is roughly linear, and small differences don’t get larger with time. Something like Ticket to Ride fits here, with no built-in benefit for the player with the most points. These games tend to be simple, but do not have a runaway leader problem, because the leader gains no advantage from being the leader.
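The linear case above can be sketched as a quick simulation. This is a toy model with made-up per-turn gains, not the scoring of any particular game:

```python
# Toy model: linear progress with no feedback, P[t] = C0 * t.
# Two players with slightly different constant gains; their gap
# grows only linearly and their ratio never changes.

def linear_progress(c0, turns):
    """Progress is just the constant per-turn gain times the turn count."""
    return [c0 * t for t in range(turns + 1)]

leader = linear_progress(10, 10)   # gains 10 per turn
trailer = linear_progress(9, 10)   # gains 9 per turn

# The gap grows by exactly 1 point per turn -- steady, never compounding.
gaps = [a - b for a, b in zip(leader, trailer)]
print(gaps)  # [0, 1, 2, ..., 10]
```

Because nothing feeds progress back into the rate, being ahead never accelerates the leader: the ratio between the two players stays 10/9 for the whole game.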

Now let’s introduce the concept of feedback into the equation. Let’s assume for simplicity that the rate of progress is based on some portion of your current progress, plus some constant amount: R[t]≈C_{1}P[t] + C_{0}. But hold on: the current progress P[t] is itself determined by the rate. Mathematically, this is known as a first order differential equation. Fortunately, this is easily solved, and the result includes the term e^{C_{1}t}, an exponential model of progress. Suddenly, small changes do get larger with time, and this is how you get a runaway leader. Catan and many other games exhibit this, where people appear to be evenly matched, but suddenly one player leaps ahead and can’t be stopped. When the growth is steady from the beginning of the game, it is called snowballing, but the end result is the same.
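A turn-by-turn simulation shows the compounding directly. This is a discrete sketch of R[t]≈C_{1}P[t]+C_{0} with illustrative constants, not the exact continuous solution, but it exhibits the same exponential behavior:

```python
# Toy model: progress with positive feedback, R[t] = C1*P[t] + C0,
# stepped turn by turn.  Constants are illustrative.

def feedback_progress(p0, c1, c0, turns):
    p = p0
    history = [p]
    for _ in range(turns):
        p += c1 * p + c0   # the rate depends on current progress
        history.append(p)
    return history

# Two players under identical rules; one has a tiny 1-point head start.
a = feedback_progress(10.0, 0.2, 1.0, 20)
b = feedback_progress(11.0, 0.2, 1.0, 20)

# The gap multiplies by (1 + C1) every turn, so after 20 turns the
# initial 1-point lead has compounded to roughly 1.2**20, about 38 points.
print(b[-1] - a[-1])
```

The constant term C_{0} affects both players equally; it is the C_{1}P[t] term that turns a small early difference into a runaway lead.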

Let’s look at a more complicated case, where you can not only increase your score, but also your rate of scoring, and let’s add in an effect that the higher your score, the more quickly your rate of scoring decreases. This is roughly what happens in Suburbia, where your reputation increases your score each round, but the higher your score, the faster your reputation decreases. We’ll call this the Rate of Reputation, or RR. Reputation is conveniently the rate of progress, and can just be R. Without going into the detailed math of setting the problem up (since this is all simplified and idealized, anyway), this results in a relationship between all of the factors, Progress, Rate of Progress, and Rate of Reputation, giving a second order differential equation of the form RR[t]≈C_{2}R[t]+C_{1}P[t]+C_{0}.

Here is where things get really interesting. If the feedback is purely negative and proportional to the input, this simplifies to an equation with a well-known solution (how convenient!): simple harmonic motion, or oscillation. If the two are perfectly balanced, the lead just swings back and forth until an outside event triggers the end of the game. But the more interesting case is when there isn’t a perfect balance and other factors are applied. This is effectively what happens in Power Grid, since the leader has a number of disadvantages relative to the players farther behind that slow down how quickly they can progress. Part of the gameplay is not just scoring, but timing the oscillation in scoring to coincide with the end of the game. (A side note: Longest Road and Largest Army can swing back and forth in Catan, but they don’t directly feed back into your future ability to score.)
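As a rough illustration of the harmonic case, here is a toy simulation of pure negative feedback proportional to the lead (RR = -k·P, not any particular game’s numbers). The semi-implicit integration step keeps the numerical oscillation from blowing up:

```python
# Toy model: pure negative feedback proportional to progress,
# RR[t] = -k * P[t], which is simple harmonic motion.
# "p" is the current lead; "r" is how fast the lead is changing.

def lead_swing(p0, k, dt, steps):
    p, r = p0, 0.0
    history = [p]
    for _ in range(steps):
        r += -k * p * dt   # the bigger the lead, the harder it is pulled back
        p += r * dt        # semi-implicit Euler: update p with the new r
        history.append(p)
    return history

lead = lead_swing(5.0, 1.0, 0.1, 700)

# Instead of growing, the lead oscillates between roughly +5 and -5:
# whoever is ahead keeps getting pulled back toward the pack.
print(max(lead), min(lead))
```

This is the perfectly balanced case; with imperfect balance or added damping, the swings grow or shrink instead of repeating forever.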

Now, sometimes, the effect of the oscillation is too large, and this is what is called rubber banding, where the game keeps bouncing you back and forth between making a lot of progress when you’re behind, and little, or negative progress when you’re ahead. In mechanical systems, this effect typically leads to catastrophic failure, and the effects on game balance can be just as damaging when they completely overwhelm the player choices.

The smaller the level of feedback in the exponential case, the more turns it will take for the problem to present itself. An interesting consequence of the feedback is that you can create a case with a negative effect that diminishes throughout the game. For example, starting with a pool of money or resources that can be used to progress, but taxes make you lose some each turn; anything modelling radioactive decay fits here (a mechanic which is not frequently used in games). Pandemic has a negative feedback effect in which the longer you play (and the more outbreaks you see), the faster cities become infected, independently of whether you have made progress on eradicating a disease or not. This is represented mathematically by introducing independent variables in place of the constants; the solution works out similarly, just with more complex coefficients.
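The taxed-pool example can be sketched in a few lines. The numbers here are made up; the point is the shape of the curve, which is the discrete analogue of radioactive decay:

```python
# Toy model: a starting pool taxed proportionally each turn.
# Because the tax is a fraction of what remains, its bite shrinks
# over time -- negative feedback whose effect decays like e^{-t}.

def taxed_pool(start, tax_rate, turns):
    pool = start
    history = [pool]
    for _ in range(turns):
        pool -= pool * tax_rate   # proportional tax shrinks with the pool
        history.append(pool)
    return history

money = taxed_pool(100.0, 0.10, 10)

# Early taxes hurt (10, then 9, then 8.1, ...), but late ones barely
# register; after 10 turns about 34.9 of the original 100 remains.
print(money[-1])
```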

The more factors that contribute to progress, the more difficult it is to find a good balance. This is where the concept of stability comes in. Ideally, the game should neither get out of control nor make it impossible to change the outcome. A linear game is stable because the feedback is always the same. But with more complex behavior, stability takes on a new meaning: the feedback changes on the local level, but over time is bounded. There are several ways to achieve this, and we can look to math and physics for some hints. One answer is “damping”, which can take many forms.

Mass damping determines how fast the feedback can change. This is something like using a large deck of cards or having a lot of units and choices. Another way of stating this is that the initial conditions are important: if players start off with a lot, it will take longer for the exponential function to take over, because the values in the progress equation are scaled by the initial conditions. The portion of things affected by a change in progress is the critical balance point. (The radioactive decay case also fits here, since the effect is directly proportional to the total quantity, which is always decreasing.)

Friction and drag are really two forms of the same thing, and change the rate of progress directly. One way this frequently shows up is in diminishing returns (like the decreased income per additional city powered). The amount a player can increase or decrease their rate of progress relative to the magnitude of that rate is an important balance point.

The values themselves can be limited, much like a speaker turned up too high will sound “tinny” because it can’t vibrate fully. The values can be artificially truncated to stay within set limits. In Suburbia, the reputation and income tracks are limited to between -5 and 15, so the feedback can’t grow indefinitely. I don’t know of many games where players can actually change these limits; they are usually either fixed or completely open.
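Mechanically, this kind of limiting is just clamping. A minimal sketch, using Suburbia’s -5 to 15 range as the example bounds:

```python
# Sketch: hard limits on a feedback track, as on Suburbia's
# reputation and income tracks (bounded between -5 and +15).

def clamp(value, lo=-5, hi=15):
    """Truncate a track value so the feedback can't grow without bound."""
    return max(lo, min(hi, value))

# However large a swing a player engineers, the track absorbs it:
print(clamp(40))   # 15
print(clamp(-12))  # -5
print(clamp(7))    # 7
```

Because the rate of progress can never exceed the cap, the exponential term is cut off before it runs away, no matter what the rest of the equation does.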

Another class of approaches includes conservation of energy, momentum, and mass. Without getting too deep into the subject (too late, maybe), being able to shift resources between progress and rate of progress is a very powerful tool for forming feedback in a lot of games. Taking the mathematical approach, the cost should reflect the feedback, i.e. to increase the rate of progress, you give up progress linearly, and to affect the rate of change in progress, you give up progress exponentially. The game becomes about the tradeoffs for time spent to improve and the expected return over time.

—–

So let’s return to game design. If your eyes glazed over when all the math stuff started, you can start reading again. All the math above gives us some strategies for addressing the runaway leader problem.

Give the player a lot to work with and make any effects of feedback small, so that the game ends before a runaway leader occurs. If it is carefully tuned, the feedback could be used instead to quickly end the game once a tipping point has been passed. But depending on the game, it might be better to just end the game at the tipping point.

Give the player ways to control their rate of progress, like actions that affect growth tracks. Similarly, limit how much the game affects the rate of progress. You can limit it naturally by making large changes prohibitively expensive, or just set a hard limit to how much it can change.

For a more complex approach, make the game limit progress more as progress increases, so that players can’t just rely on increasing their rate of progress to win. This is a “diminishing returns” approach. Power Grid includes this with diminishing returns of income based on population. To balance this effect, you should give players ways to make progress that are independent of their current progress.

The flip side of diminishing returns is increasing cost. This isn’t simply having things later or more powerful cost more, although that can be part of it. Have cost increase simply by use, so that the first building costs 1, the second costs 2, the third costs 3. That way each player’s costs grow based on their own progress. Kevin G Nunn just had a great post about different growth schemes in scoring as part of his excellent series on statistics in games, but the same patterns can be used for costs, as well.
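The cost-by-use idea from the paragraph above can be sketched directly. The triangular 1, 2, 3 schedule is the one the text describes; the function names are mine:

```python
# Sketch: cost that grows with use rather than with time.
# The nth building a player buys costs n, so each player's costs
# scale with their own progress, not with the table's.

def next_building_cost(buildings_owned):
    """The next building costs one more than you have built so far."""
    return buildings_owned + 1

def total_cost(n):
    """Total spent to own n buildings: 1 + 2 + ... + n."""
    return n * (n + 1) // 2

# A player racing ahead pays quadratically for the privilege:
print(total_cost(3))   # 6
print(total_cost(10))  # 55
```

Since the total cost grows with the square of the count, this acts as a built-in drag on whoever is expanding fastest, without touching the other players at all.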

Add non-linear behavior into the system by using relative progress instead of absolute progress. If this is done in a fixed way, it can make the relative positions a meta-game, like in Power Grid. If the effect is scaled by the relative difference, then progress will tend to balance out.
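One simple way to scale an effect by relative difference is a catch-up bonus proportional to each player’s distance from the leader. This is a hypothetical scheme of my own, not taken from any specific game:

```python
# Sketch: feedback based on relative rather than absolute progress.
# Each player gets a bonus proportional to how far they trail the
# leader; the leader gets nothing.  The factor is illustrative.

def catch_up_bonus(scores, factor=0.1):
    leader = max(scores)
    return [factor * (leader - s) for s in scores]

print(catch_up_bonus([50, 40, 20]))  # [0.0, 1.0, 3.0]
```

Because the bonus shrinks as the gap closes, this pulls the pack together without the hard rubber-band bounce of a fixed handout to whoever is last.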

Provide more than a single path for progress, so that there might be a runaway leader in one category, but other players have the chance to do the same in other categories. Be careful when considering the number of players, so that you don’t force some players to compete over one path while leaving another player free to progress.

For more variety, apply these effects in a maintenance phase, or translate them directly into the game by having actions that scale based on progress.

What I’ve done here is study how an element of feedback in a game can lead to a runaway leader problem. I’ll throw in one last case study to show how some of these concepts can be applied. When launching a rocket, most of the weight is the fuel, and most of it is used early on. Gravity, the atmosphere, and conservation of momentum and energy all work against it. Skipping over the actual rocket science, it turns out that the weight of the fuel tank is a big player, which is why most rockets are staged, and drop their fuel tanks bit by bit. This seems like a great mechanic to use in a game: staged effects on rate of progress, where once you’ve maxed out your returns, you can make a big play to reset everything without losing progress. Each stage becomes a mini-race to see who can get to the next level most quickly and efficiently, and once any player does, everyone else gets brought along for the ride. In the end, that’s the job of a game designer: to make the game go somewhere fun and exciting, and bring all the players along for the ride.

#1 by Alex Harkey (@GamesPrecipice) on February 17, 2014 - 10:16 pm

Great article Nat, I was looking forward to seeing your thoughts on the topic and it is a wonderful read.

#2 by Adam Torgerson (@A_Torg) on August 1, 2014 - 11:09 am

Thanks. I referenced this in a discussion about economics to describe wealth distribution. I’m a board gamer (BGG:amtorgerson), so the runaway leader effect was a perfect example of a feedback loop. Wading through a 5 hour game of Outpost made me sincerely appreciate games’ balancing mechanisms.