What is game theory? Why should we study it? Game Theory 101 is a free introductory course to the basics of game theory. Over the course of the next few dozen videos, we will learn about strict dominance, iterated elimination of strictly dominated strategies, pure strategy Nash equilibrium, mixed strategy Nash equilibrium, the mixed strategy algorithm, weak dominance, backward induction, subgame perfect equilibrium, comparative statics, and more.

This initial lecture outlines the course. The material follows *Game Theory 101: The Complete Textbook*. I strongly recommend purchasing it--it's only $3.99 and it spells out everything we will be doing in much greater depth and with more examples. However, you should be fine without it.

Let's have some fun!

Two prisoners are locked into separate interrogation rooms. The cops know they were trespassing and believe they were planning on robbing a store, but they lack sufficient evidence to charge them with the latter crime. Thus, they offer the prisoners the following deal:

If no one confesses, both will only be charged with trespassing and receive a sentence of one month. If one confesses while the other keeps quiet, the confessor will get to walk away free, while the one who kept quiet will be charged to the fullest extent of the law--twelve months in jail. Finally, if both confess, each criminal's testimony is less useful, and both will be locked up for eight months.

If each prisoner wants only to minimize the amount of time he spends in jail, what should each of them do?

This lesson introduces the concept of strict dominance, which is a very useful tool for a game theorist.
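To make the check concrete, here is a minimal Python sketch. The jail terms come from the story above; the encoding and function name are my own.

```python
# Jail time in months for (this prisoner, other prisoner); less is better.
# Strategies: 'quiet' or 'confess'.
jail = {
    ('quiet', 'quiet'): 1,      # both charged with trespassing only
    ('quiet', 'confess'): 12,   # charged to the fullest extent of the law
    ('confess', 'quiet'): 0,    # the confessor walks away free
    ('confess', 'confess'): 8,  # both locked up
}

def strictly_dominates(a, b):
    """True if strategy a yields strictly less jail time than b
    against every strategy the other prisoner might pick."""
    return all(jail[(a, other)] < jail[(b, other)]
               for other in ('quiet', 'confess'))

print(strictly_dominates('confess', 'quiet'))  # True: confessing is always better
```

Confessing gives 0 months instead of 1 if the other stays quiet, and 8 instead of 12 if the other confesses, so it strictly dominates keeping quiet.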

We cover strict dominance in lesson 1.1 of *Game Theory 101: The Complete Textbook*.

The prisoner's dilemma had an obvious solution because each player had one strategy that was always best regardless of what the other player did. Most games don't have solutions that simple, however. What if one player's best strategy depends entirely on which strategy the other player chooses?

This video covers iterated elimination of strictly dominated strategies. If one strategy is always worse than another for a player, that means the other player should infer that the first player would never choose that poor strategy. But this has interesting implications. It is possible to keep removing strategies from a game based on this information, until you eventually arrive at a single solution. We go over an example in this video.

As a general rule, if you ever see a strictly dominated strategy, you should always eliminate it immediately. Although there may be more strictly dominated strategies that you could eliminate first, those other strictly dominated strategies will still be strictly dominated in the reduced game. Therefore, you lose nothing by immediately eliminating a strategy.
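A rough sketch of that elimination loop in Python. The 3x3 payoffs below are an arbitrary example of mine, chosen so the process ends at a single cell:

```python
# An illustrative 3x3 game; iterated elimination leaves one strategy each.
row_p = {('U', 'L'): 13, ('U', 'M'): 1, ('U', 'R'): 7,
         ('M', 'L'): 4,  ('M', 'M'): 3, ('M', 'R'): 6,
         ('D', 'L'): -1, ('D', 'M'): 2, ('D', 'R'): 8}
col_p = {('U', 'L'): 3, ('U', 'M'): 4, ('U', 'R'): 3,
         ('M', 'L'): 1, ('M', 'M'): 3, ('M', 'R'): 2,
         ('D', 'L'): 9, ('D', 'M'): 8, ('D', 'R'): -1}

def iesds(row_payoffs, col_payoffs):
    """Repeatedly delete any pure strategy strictly dominated by
    another pure strategy until nothing more can be removed."""
    rows = sorted({r for r, _ in row_payoffs})
    cols = sorted({c for _, c in row_payoffs})
    changed = True
    while changed:
        changed = False
        for a in rows[:]:   # row player's strategies
            if any(all(row_payoffs[(b, c)] > row_payoffs[(a, c)] for c in cols)
                   for b in rows if b != a):
                rows.remove(a)
                changed = True
        for a in cols[:]:   # column player's strategies
            if any(all(col_payoffs[(r, b)] > col_payoffs[(r, a)] for r in rows)
                   for b in cols if b != a):
                cols.remove(a)
                changed = True
    return rows, cols

print(iesds(row_p, col_p))  # (['M'], ['M'])
```

Note how each round of deletion can expose new dominated strategies in the reduced game, exactly as described above.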

We cover iterated elimination of strictly dominated strategies in lesson 1.2 of *Game Theory 101: The Complete Textbook*.

What happens when we have a game that doesn't have any strictly dominated strategies? This video introduces the concept of Nash equilibrium. A Nash equilibrium is a set of strategies, one for each player, such that no player has incentive to change his or her strategy given what the other players are doing. This video shows how to find pure strategy Nash equilibria by looking at each individual outcome and checking for profitable deviations.
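That outcome-by-outcome check is easy to sketch in code. The stag hunt payoffs below are an illustrative example of mine, not taken from the lecture:

```python
# Stag hunt payoffs as (row player's, column player's) -- illustrative numbers.
rows = cols = ['stag', 'hare']
payoffs = {
    ('stag', 'stag'): (3, 3), ('stag', 'hare'): (0, 2),
    ('hare', 'stag'): (2, 0), ('hare', 'hare'): (2, 2),
}

def pure_nash():
    """Return every outcome from which no player can profitably deviate alone."""
    eq = []
    for r in rows:
        for c in cols:
            row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
            col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
            if row_ok and col_ok:
                eq.append((r, c))
    return eq

print(pure_nash())  # [('stag', 'stag'), ('hare', 'hare')]
```

Both coordination outcomes survive the check: given the other hunter's choice, neither player gains by switching.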

Nash equilibrium is the most important topic in game theory, so we will spend a lot of time further dissecting it.

We cover pure strategy Nash equilibrium in lesson 1.3 of *Game Theory 101: The Complete Textbook*.

A Nash equilibrium is a set of strategies, one for each player, such that no player has incentive to change his or her strategy given what the other players are doing. But what does that mean? This lecture discusses how Nash equilibria are essentially laws that no one would want to break even in the absence of an effective police force. We look at traffic responding to stoplights as an example.

We cover this intuitive interpretation of Nash equilibrium in lesson 1.3 of *Game Theory 101: The Complete Textbook*.

Finding pure strategy Nash equilibria was easy when there were only four outcomes. But if there are many more outcomes, say 16, going through each of them individually would be far too time-consuming. This lesson shows how to find pure strategy Nash equilibria using the best responses method.

To illustrate the concept, we use the safety in numbers game. Two generals decide whether to pass on fighting a battle or to send one, two, or three units. If at least one general decides not to fight, or the generals send the same number of units, the game ends in a draw. Otherwise, the general who sends more units wins.
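A sketch of the best responses method on this game. The payoff numbers (win +1, lose -1, draw 0) are assumptions of mine for the illustration:

```python
# Safety in numbers. Strategies: 0 = pass on the battle; 1, 2, 3 = units sent.
strategies = [0, 1, 2, 3]

def payoff(mine, theirs):
    if mine == 0 or theirs == 0 or mine == theirs:
        return 0                       # someone passed, or equal forces: draw
    return 1 if mine > theirs else -1  # more units wins

def best_responses(theirs):
    """My strategies that maximize my payoff against a fixed opponent choice."""
    best = max(payoff(m, theirs) for m in strategies)
    return {m for m in strategies if payoff(m, theirs) == best}

# An outcome is a pure strategy Nash equilibrium exactly when each
# strategy is a best response to the other.
nash = [(a, b) for a in strategies for b in strategies
        if a in best_responses(b) and b in best_responses(a)]
print(nash)  # [(0, 0), (0, 3), (3, 0), (3, 3)]
```

Marking best responses for each opponent choice and keeping only the mutual ones is far faster than checking all 16 outcomes by hand.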

We cover best responses in lesson 1.3 of *Game Theory 101: The Complete Textbook*.

What happens when a game has no pure strategy Nash equilibria? We must turn our attention to mixed strategy Nash equilibria, in which players randomize between two or more strategies. We use matching pennies--the classic game of two diametrically opposed players--to illustrate the concept.

We cover mixed strategy Nash equilibrium in lesson 1.5 of *Game Theory 101: The Complete Textbook*.

To check for mixed strategy Nash equilibria, we must run the mixed strategy algorithm. This algorithm shows whether there exists a mixed strategy for a player that leaves the other player indifferent between his or her two pure strategies. If such a mixed strategy exists for both players, then those strategies collectively form a mixed strategy Nash equilibrium.

The mixed strategy algorithm is the first computationally intensive part of game theory we have encountered. However, we will be using the algorithm quite a bit later on, so do not be worried if the logic of the math is difficult to grasp at first.
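For a 2x2 game, the indifference condition boils down to one linear equation. Here is a minimal Python sketch using the matching pennies payoffs; the function name and encoding are my own:

```python
from fractions import Fraction

def indifference_mix(u_first, u_second):
    """Probability q the opponent puts on her first pure strategy so that
    this player is indifferent between his two pure strategies.
    u_first and u_second are this player's payoffs against the opponent's
    (first, second) pure strategies."""
    # Solve q*u_first[0] + (1-q)*u_first[1] == q*u_second[0] + (1-q)*u_second[1]
    numerator = u_second[1] - u_first[1]
    denominator = (u_first[0] - u_first[1]) - (u_second[0] - u_second[1])
    return Fraction(numerator, denominator)

# Matching pennies: the row player earns 1 on a match and -1 on a mismatch.
q = indifference_mix((1, -1), (-1, 1))
print(q)  # 1/2: column must mix 50-50 to keep row indifferent
```

If the same calculation produces a valid probability for both players, those two mixes form the mixed strategy Nash equilibrium.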

We cover the mixed strategy algorithm in lesson 1.5 of *Game Theory 101: The Complete Textbook*.

For a player to be willing to mix between two strategies, he must be indifferent between them. Put differently, he must expect to earn exactly the same amount by choosing either strategy.

As a result, we must be very careful when we write mixed strategies. Although we commonly write 1/3 as .33, those two numbers are not equal to one another. This lecture shows why using decimals can be problematic. In general, you should always play it safe and write numbers as fractions, not decimals.
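A quick Python illustration of the difference between an exact fraction and a rounded decimal:

```python
from fractions import Fraction

third = Fraction(1, 3)
print(third * 3 == 1)   # True: 1/3 + 1/3 + 1/3 is exactly 1
print(0.33 * 3 == 1)    # False: .33 is only an approximation of 1/3
```

The rounding error seems tiny, but indifference conditions must hold exactly, so the approximation breaks the equilibrium logic.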

We cover mixed strategies in lesson 1.5 of *Game Theory 101: The Complete Textbook*.

Some games have both pure strategy Nash equilibria and mixed strategy Nash equilibria. Battle of the Sexes provides a classic example. A couple wants to get together for an evening of entertainment. Will they be able to coordinate even if they want to meet at different places?

We cover mixed strategies in lesson 1.5 of *Game Theory 101: The Complete Textbook*.

How do we calculate payoffs in mixed strategy Nash equilibria? The process has three simple steps:

1) Find the probability each outcome occurs in equilibrium.

2) For each outcome, multiply that probability by a particular player's payoff.

3) Sum all of those numbers together.

That's all it takes!
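Sketching those three steps in Python for battle of the sexes. The payoff numbers and the equilibrium mixing probabilities below are a standard textbook parameterization I'm assuming for the illustration, not necessarily the lecture's exact numbers:

```python
from fractions import Fraction

# Battle of the Sexes, row player's payoffs: 2 at (Ballet, Ballet),
# 1 at (Opera, Opera), 0 when the couple miscoordinates.
row_payoff = {('B', 'B'): 2, ('B', 'O'): 0, ('O', 'B'): 0, ('O', 'O'): 1}

# Equilibrium mixes (taken as given): row plays B with probability 2/3,
# column plays B with probability 1/3.
p_row = {'B': Fraction(2, 3), 'O': Fraction(1, 3)}
p_col = {'B': Fraction(1, 3), 'O': Fraction(2, 3)}

# Step 1: probability of each outcome; step 2: weight by the payoff;
# step 3: add everything up.
expected = sum(p_row[r] * p_col[c] * row_payoff[(r, c)]
               for r in 'BO' for c in 'BO')
print(expected)  # 2/3
```

Notice that 2/3 is less than either pure strategy equilibrium payoff in this game, which is exactly the kind of comparison the next paragraph asks you to make.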

It is always a good exercise to calculate payoffs in mixed strategy Nash equilibria and compare them to the payoffs for pure strategy Nash equilibria, or even non-equilibrium outcomes. Is the mixed strategy Nash equilibrium efficient? Are there pure strategy Nash equilibria that are better for both players? What about non-equilibrium outcomes?

We cover how to calculate payoffs in lesson 1.6 of *Game Theory 101: The Complete Textbook*.

Sometimes a pure strategy is not strictly dominated by any other pure strategy but is strictly dominated by a mixed strategy. If that is the case, we can eliminate the strictly dominated pure strategy from the game as usual. From there, we can use the same tools as before to find the game's Nash equilibria.

We cover this type of dominance in lesson 1.7 of *Game Theory 101: The Complete Textbook*.

A weakly dominated strategy is a strategy that is never better than another strategy and sometimes worse. Would a player ever want to play a weakly dominated strategy? That answer may not be so obvious. Sometimes, a player earns more in a Nash equilibrium by playing a weakly dominated strategy than he earns in any other Nash equilibrium. As a result, we cannot easily dismiss weakly dominated strategies as inherently foolish.

We cover weak dominance in lesson 1.4 of *Game Theory 101: The Complete Textbook*.

Sometimes, a game can have infinitely many equilibria. This lecture provides an example and also illustrates the concept of partially mixed strategy Nash equilibria.

We cover games with infinitely many equilibria in greater depth in lesson 1.8 of *Game Theory 101: The Complete Textbook*.

Virtually all games have an odd number of equilibria. We've already seen examples of games with infinitely many equilibria; this lecture shows an even rarer case: a game with an even, finite number of equilibria. Weak dominance is usually at fault. Thus, if you are working on a homework assignment and find an even number of equilibria, check whether there are any weakly dominated strategies. If there are none, you should keep looking for more equilibria.

We cover the odd rule in greater depth in lesson 1.8 of *Game Theory 101: The Complete Textbook*.

This lecture begins our adventure through sequential games, in which players take turns moving. Not all Nash equilibria are sensible in this context, so we introduce a new concept: subgame perfect equilibrium. A subgame perfect equilibrium requires the players' strategies to form a Nash equilibrium in every subgame of the larger game. In essence, this requires every threat a player makes to be credible.

We consider a game between two firms deciding whether to enter a market and engage in a price war. Can a monopolist's threat to launch a price war convince a challenger to stay out of the market?

We introduce subgame perfect equilibrium in lesson 2.1 of *Game Theory 101: The Complete Textbook*.

How do we find subgame perfect equilibria? Backward induction is the simplest method. To know the smart moves at the beginning of the game, we must first figure out how today's actions affect tomorrow's consequences. As such, we start at the end of the game and work our way back up. This is backward induction in a nutshell.
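Backward induction is easy to mechanize on a small tree. Here is a sketch using the market entry game from the previous lecture; the payoff numbers and tree encoding are my own assumptions:

```python
# A decision node is (player_index, {action: subtree});
# a leaf is a (challenger, monopolist) payoff pair. Payoffs are illustrative.
entry_game = (0, {                      # the challenger moves first
    'stay out': (0, 2),                 # monopolist keeps the market
    'enter': (1, {                      # the monopolist responds
        'fight': (-1, -1),              # the price war hurts both firms
        'accommodate': (1, 1),          # share the market peacefully
    }),
})

def backward_induct(node):
    """Solve the last mover's choice first, then work back up the tree."""
    if not isinstance(node[1], dict):
        return node, []                 # leaf: payoffs, no further moves
    player, actions = node
    results = {a: backward_induct(sub) for a, sub in actions.items()}
    best = max(results, key=lambda a: results[a][0][player])
    payoffs, path = results[best]
    return payoffs, [best] + path

print(backward_induct(entry_game))  # ((1, 1), ['enter', 'accommodate'])
```

Because the monopolist would accommodate once entry has occurred, the threat of a price war is not credible, and the challenger enters.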

We cover backward induction in lesson 2.2 of *Game Theory 101: The Complete Textbook*.

A subgame perfect equilibrium is a complete and contingent plan of action. It must tell us what happens both on and off the equilibrium path. If it only says what happens on the equilibrium path, then we have no perspective on why taking the strategies that lead to that particular outcome is wiser than selecting an alternative strategy. This lecture explains how to avoid losing easy points by making such a mistake.

We cover this issue with backward induction in lesson 2.2 of *Game Theory 101: The Complete Textbook*.

Occasionally, extensive form games can have multiple subgame perfect equilibria. This lecture provides an example and explains why indifference plays an important role here. We also learn how to solve extensive form games with simultaneous move games embedded in them.

We cover how multiple subgame perfect equilibria can arise in lesson 2.3 of *Game Theory 101: The Complete Textbook*.

You might think that limiting your future options is a bad thing. After all, if you eliminate one possible course of action in the future, you will be unable to utilize it if it becomes necessary. However, to make a threat credible, you sometimes need to take all other options off the table. We call this *tying hands*. This lecture gives the classic example of an invading army burning the bridge behind it to close off any avenue of retreat.

We cover hand tying in lesson 2.4 of *Game Theory 101: The Complete Textbook*.

It's a hot day. A police officer pulls you over and asks to search your vehicle. If you refuse, he can call a K-9 unit in to sniff around. It will take the K-9 a half hour to arrive. "Just let me conduct a quick search of your vehicle. It will be better for both of us," the officer tells you.

Should you wait for the dog or give the officer permission to search?

I actually faced this situation in real life. If you want to learn more about it--and how commitment problems relate to civil war--check out this lecture: http://www.youtube.com/watch?v=xOEvRzolBsE

We cover commitment problems in lesson 2.5 of *Game Theory 101: The Complete Textbook*.

Two players take turns deciding whether to add $2 to a pot or end the game and take slightly more than half of the money. If the players can cooperate all the way through, they stand to make $100 each. If they do not cooperate at all, then the players will only receive $2 between them in total. Will selfishness get in the way of the great payoff?

We cover the centipede game in lesson 2.7 of *Game Theory 101: The Complete Textbook*.

The backward induction solution to the centipede game differs greatly from how actors play in laboratory settings. What gives? This lecture explains how our game theoretical results are actually a function of our assumptions. Our assumptions drive our conclusions; game theory is merely a mathematical way to ensure that these conclusions are logically valid.

In other words, game theory provides no black magic. It is merely a logical accounting standard. (But accounting standards are very useful!)

We cover problems with backward induction in lesson 2.7 of *Game Theory 101: The Complete Textbook*.

Backward induction essentially assumes that all future play will be rational. What if we think of it the opposite way? That is, what if we assume that all *past* play *was* rational? Will that change our results? The answer is yes. The pub hunt provides an intuitive example.

We cover forward induction in lesson 2.8 of *Game Theory 101: The Complete Textbook*.

Making slight changes to a player's payoffs changes the game's mixed strategy Nash equilibrium. Rather than solve each of these minor variations individually, it would be nice to have a general formula that solves *all* versions of a game at once. We will work toward that in this unit.

However, to ensure that the results of the generalized game actually make sense, we must be able to correctly identify valid probability distributions. This lecture is the first step in that direction.

We cover probability distributions in lesson 3.1 of *Game Theory 101: The Complete Textbook*.

How do we solve a game that has variable payoffs rather than specific numerical values? This lecture looks at the generalized form of battle of the sexes for an example. We also go over how to solve for mixed strategy Nash equilibria step-by-step, which is tricky when you do not have numbers to work with.

*Game Theory 101: The Complete Textbook* has a bunch of examples of this in lesson 3.2. Since these types of games can be difficult, I suggest you work through the examples in the book on your own.
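As a sketch of the kind of algebra involved (the labels $a$ and $b$ are my own illustration, not necessarily the book's notation): suppose the row player earns $a > 0$ when the couple coordinates on his favorite event, $b > 0$ when they coordinate on the other event, and $0$ otherwise. If the column player attends row's favorite event with probability $\sigma$, row's indifference condition pins down $\sigma$:

```latex
\underbrace{\sigma a + (1-\sigma)\cdot 0}_{\text{go to his favorite}}
= \underbrace{\sigma \cdot 0 + (1-\sigma) b}_{\text{go to the other event}}
\quad\Longrightarrow\quad
\sigma = \frac{b}{a+b}
```

Plugging in any particular values of $a$ and $b$ then recovers the numerical equilibria we solved for earlier.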

When games have exogenous variables, some equilibria may exist only for particular configurations of those variables--increase or decrease a variable by even a small amount, and those equilibria completely disappear. We call them knife-edge equilibria, as their existence rests precariously on the edge of a knife. The downside is that knife-edge conditions usually produce a large number of equilibria, which require extra work to solve.

The good news is that knife-edge equilibria are inherently unrealistic, so we can ignore them for the most part. We cover knife-edge conditions in lesson 3.3 of *Game Theory 101: The Complete Textbook*.

A soccer penalty kick provides a compelling reason to study generalized games. In this lecture, we consider optimal strategies when the striker is more accurate toward one side than the other. How should the players optimally play?

Learning about penalty kicks is the first step in learning about comparative statics, which is the subject of lesson 3.4 of *Game Theory 101: The Complete Textbook*.

Does a striker aim at his weaker side more or less frequently as his accuracy on that side improves? Rather than just finding equilibria, we should also analyze how equilibria change as a function of the game's parameters. We use comparative statics to study such changes. This lecture introduces the concept of comparative statics and walks through the method of calculation.

Unfortunately, this is the one lecture where calculus is necessary. If you have a quarter or semester's worth of calculus, you will be fine; we only need to calculate simple derivatives here. The lecture also includes a non-technical proof for the claim in case you have no calculus background.

In general, comparative statics are what make game theory interesting. If you need more practice running through the process, check lesson 3.4 of *Game Theory 101: The Complete Textbook*. I cover five examples in great detail. You should have the hang of it by the end.
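For instance (an illustrative calculation of my own, not the penalty kick model itself): if a game's equilibrium mixing probability works out to $\sigma^* = b/(a+b)$ for payoff parameters $a, b > 0$, a quick derivative tells us how the mix moves as $b$ grows:

```latex
\frac{\partial \sigma^*}{\partial b}
= \frac{(a+b) - b}{(a+b)^2}
= \frac{a}{(a+b)^2} > 0
```

The derivative is strictly positive, so the player places more weight on that strategy as $b$ increases. That sign, not the exact formula, is the comparative static.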

Our focus now shifts to understanding how mixed strategy Nash equilibrium functions beyond the scope of 2x2 games. This lecture covers the support of mixed strategies--that is, the pure strategies a player chooses with positive probability in his mixed strategies. Such strategies in the support must fulfill certain properties or they cannot be a part of a mixed strategy Nash equilibrium.

We cover the support of mixed strategies in lesson 3.5 of *Game Theory 101: The Complete Textbook*.

Weak dominance usually causes us headaches. Here, however, is one bright spot. If your opponent is mixing among all her strategies, you cannot play a weakly dominated strategy in response in equilibrium. This lecture explains why and provides an example using the take-or-share game, which is commonly featured on the U.K. game show Golden Balls and the U.S. game show Friend or Foe.

We cover this trick in lesson 3.5 of *Game Theory 101: The Complete Textbook*.

You have probably played rock paper scissors at some point in your life. This video looks at the straightforward strategic form of the game and uses our knowledge from matching pennies to correctly guess the game's mixed strategy Nash equilibrium.

However, slight tweaks to the payoff structure prevent us from accurately guessing the equilibria. As a result, we will have to learn how to find mixed strategy Nash equilibria that utilize more than two pure strategies. We will work toward this goal with the remainder of the unit.

Rock paper scissors is the subject of lesson 3.6 of *Game Theory 101: The Complete Textbook*.

Every Nash equilibrium of a two-player, symmetric, zero-sum game must give each player an expected utility of zero. This lecture defines symmetric and zero-sum games and then explains why that rule must hold. The rule is very useful for proving that certain strategies are not Nash equilibria, as we will see in the next lecture.
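We can verify the rule numerically for ordinary rock paper scissors. The payoff encoding (win 1, lose -1, tie 0) is mine:

```python
from fractions import Fraction

# Ordinary rock paper scissors, row player's payoffs.
strategies = ['R', 'P', 'S']
beats = {'R': 'S', 'P': 'R', 'S': 'P'}   # what each strategy defeats

def payoff(mine, theirs):
    if beats[mine] == theirs:
        return 1
    if beats[theirs] == mine:
        return -1
    return 0

# Both players mix uniformly -- the game's Nash equilibrium.
p = Fraction(1, 3)
eu = sum(p * p * payoff(r, c) for r in strategies for c in strategies)
print(eu)  # 0, exactly as the symmetric zero-sum rule requires
```

Every win for one player is a loss for the other, and symmetry means neither player can do systematically better, so the expected utilities must balance at zero.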

This trick appears in lesson 3.6 of *Game Theory 101: The Complete Textbook*.

In this lecture, we apply the theorem on symmetric, zero sum games to eliminate possible Nash equilibria in the game of modified rock paper scissors. Afterward, we will know that the only equilibrium involves both players mixing among all three strategies. We will solve for it in the next lecture.

The modified version of rock paper scissors appears in lesson 3.6 of *Game Theory 101: The Complete Textbook*.

This lecture expands the mixed strategy algorithm to three strategies. The process is very similar to the algorithm with two strategies, except we have more equations and more unknowns. We use the modified game of rock paper scissors to illustrate.
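Here's a sketch of that larger system in Python with exact fractions. The payoffs are an assumed modification of my own (a win with rock scores 2), not necessarily the version used in the lecture. The two indifference equations plus the requirement that the probabilities sum to one give three equations in three unknowns:

```python
from fractions import Fraction

# Modified rock paper scissors (assumed payoffs): a win with rock scores 2.
# Row player's payoffs against the column's pure strategies (R, P, S):
A = [[0, -1, 2],   # Rock
     [1, 0, -1],   # Paper
     [-2, 1, 0]]   # Scissors

def solve(M, b):
    """Gaussian elimination with exact fractions: solve M x = b."""
    n = len(M)
    M = [[Fraction(x) for x in row] + [Fraction(y)] for row, y in zip(M, b)]
    for i in range(n):
        pivot = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[pivot] = M[pivot], M[i]
        M[i] = [x / M[i][i] for x in M[i]]
        for r in range(n):
            if r != i and M[r][i] != 0:
                M[r] = [x - M[r][i] * y for x, y in zip(M[r], M[i])]
    return [row[-1] for row in M]

# The column player's mix q must leave the row player indifferent:
# EU(Rock) = EU(Paper), EU(Paper) = EU(Scissors), and q sums to 1.
equations = [[A[0][j] - A[1][j] for j in range(3)],
             [A[1][j] - A[2][j] for j in range(3)],
             [1, 1, 1]]
q = solve(equations, [0, 0, 1])
print(q)  # [Fraction(1, 4), Fraction(1, 2), Fraction(1, 4)]
```

Intuitively, making rock's wins bigger makes paper (which beats rock) more attractive, so paper gets the extra weight in equilibrium.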

We go over the mixed strategy algorithm for three strategies in lesson 3.6 of *Game Theory 101: The Complete Textbook*.

Where do we go from here?