Introduction to Incentives

Why is it important to develop a well-structured incentive system? One reason is the concept of moral hazard. Moral hazard is the idea that an individual or group that is protected in some way will act differently than if they did not have that protection.

We come across examples of moral hazard every day. For example, once a professor is tenured, does the quality of his or her lectures decrease as the focus shifts to research? If workers go from an hourly wage to a salaried one, do they take extra breaks and longer lunches? If you have car insurance, are you more likely to take risks when driving your vehicle?

More recently, the fallout from the Great Recession illustrates moral hazard. Are corporations that are too big to fail more willing to take larger risks because they expect to be bailed out by the government and Main Street?

A major issue with moral hazard is that you can't always observe someone's action or effort. You can only assume that they are being productive at work and not shirking (slacking). As a result, an incentive, say a bonus payment, cannot be based on effort. This is unfortunate, since companies ultimately want more effort from their employees. For an incentive system to work, it must be based on a metric that is observable, like an increase in a company's profit.

However, incentive programs can be affected by other factors. For example, in agricultural areas like the Central Valley of California, the recent drought cut into the profits of agricultural firms. Even though farmers and workers may have put in maximum effort, profits decreased due to factors beyond their control, resulting in a missed incentive payout. Ultimately, when designing incentives, you need to be cautious about penalizing workers for bad luck (a bad outcome) or rewarding them for good luck (a good outcome). This is especially the case when the outside factor has a high probability of occurring.
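To make the role of luck concrete, here is a minimal sketch in Python (the effort level, bonus threshold, and size of the random shock are all hypothetical numbers chosen for illustration) showing how a bonus tied to observed profit can be missed even when effort is at its maximum:

```python
import random

def observed_profit(effort, luck_std=30):
    """The metric the firm can actually observe: effort plus a random shock (weather, prices, etc.)."""
    return effort + random.gauss(0, luck_std)

def bonus_paid(profit, threshold=100, bonus=10):
    """The incentive is tied to the observable metric (profit), not to unobservable effort."""
    return bonus if profit >= threshold else 0

random.seed(1)
trials = 10_000
# A worker who always puts in maximum effort (effort = 100, right at the bonus threshold).
missed = sum(bonus_paid(observed_profit(100)) == 0 for _ in range(trials))
print(f"Maximum-effort worker misses the bonus in {missed / trials:.0%} of periods")
```

With a symmetric shock and a threshold set right at the maximum-effort profit level, the hard-working employee misses the bonus roughly half the time purely through bad luck, which is exactly the danger described above.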

Sometimes, as an employer, you must pay a premium for talent and to reduce the chances of shirking and employee turnover. This is known as an efficiency wage: a wage that is higher than the market equilibrium wage. Incentives can be tied to efficiency wages in the form of a carrot (an additional reward).

As consumers, we deal with efficiency wages all the time. For example, why do people choose to buy Apple products? One reason could be the quality of the product relative to competing products: we are willing to pay a premium for better quality (an efficiency wage). Why do people keep going to the same mechanic when they could go somewhere else and possibly pay less for the same work? The answer could be honesty. People are willing to pay a premium for honesty (an efficiency wage) and the honest work the mechanic will do, rather than take on the risks and costs of a potentially dishonest mechanic.

Game Theory & Bargaining

High-level overview of bargaining problems in sports.

Today I will cover the topic of bargaining and how it applies to game theory. Throughout our lives we engage in instances of bargaining, whether work related, familial, or social. In this blog, I will focus on bargaining related to the NFL.

Recently, the NFL collective bargaining agreement has been under intense scrutiny due to the powers of the NFL commissioner. Let's assume the current collective bargaining agreement has expired and talks have gone nowhere. The lack of an agreement risks spilling into the NFL season, with billions lost in revenue as the dispute continues.

Both the NFL and the NFLPA have agreed to ultimatum bargaining (take it or leave it). A judge has been appointed to specify the rules of the negotiation. The overall size of the "pie" has been set at 2.25 billion dollars (four times that amount would be more realistic).

Here are the rules:

  1. The players will begin the negotiations by making an offer to the owners. The offer will state the proposed split of the pie.
  2. The owners can either accept or reject the offer. If the owners accept, the negotiation ends and the pie is split as specified by the players. If the owners reject the offer, the owners would then make an offer to the players on how to split the pie.
  3. Each time a rejection occurs, $250 million in game revenue is lost, shrinking the entire pie.
  4. The players can either accept or reject the owners' offer. If accepted, the negotiation ends. If rejected, the players make another offer. The process continues until either an offer is accepted or the entire $2,250,000,000 is lost.

To solve this bargaining game, we can use a simple share formula together with backwards induction.

[Figure: the bargaining scenario through five rounds]

Above is the scenario of the game through five rounds. The players have a first-mover advantage due to the rules set by the judge.

The players' share is given by the following formula: (n+1)/(2n).

"n" represents the number of rounds in the game, determined by dividing the total amount by the revenue lost per rejection: $2,250,000,000 / $250,000,000 = 9.

So the optimal share for the players would be (9+1)/(2*9) = 56%, or 0.56.

The owners are at a disadvantage, represented by the formula: (n-1)/(2n) = (9-1)/(2*9) = 44%, or 0.44.

So the optimal choice for the players is to propose a 56/44 split of the pie. If we work backwards, we can see that it is in the owners' best interest to accept the initial offer. The reason is that once the owners reject the initial offer, the pie shrinks by about 11% ($250 million for each rejected offer). In the 2nd round of negotiations the players will reject the owners' offer because they face more favorable terms in the 3rd round. Once we enter the 3rd round, it doesn't get any better for the owners as the pie continues to shrink.
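Here is a minimal sketch of that backward induction in Python, using the $2.25 billion pie and the $250 million lost per rejection from the rules above; it reproduces the 1,250 / 1,000 (56/44) first-round split:

```python
def first_round_split(total=2250, loss_per_rejection=250):
    """Backward-induct the alternating-offer game with a pie that shrinks after each rejection.

    Amounts are in millions. The round-1 proposer is the players; roles alternate each round.
    """
    rounds = total // loss_per_rejection   # 9 rounds before the pie is completely gone
    responder_value = 0                    # rejecting the final offer leaves nothing
    for r in range(rounds, 0, -1):         # walk from the last round back to round 1
        pie = total - loss_per_rejection * (r - 1)
        # The proposer offers the responder exactly what the responder could get
        # as next round's proposer, and keeps the rest of the current pie.
        proposer_value = pie - responder_value
        responder_value = proposer_value   # roles swap as we step back one round
    return proposer_value, pie - proposer_value

players, owners = first_round_split()
print(players, owners)             # 1250 1000
print(round(players / 2250, 2))    # 0.56, matching (n + 1) / (2n) with n = 9
```

The loop simply applies "accept anything at least as good as what rejecting would eventually earn you" at every round, which is why the owners should take the very first 56/44 offer.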

This is a simple representation of bargaining, which is far more complex in the real world. In addition, the NFL would have more power and influence in negotiations than the players, as seen in recent collective bargaining agreements.

Manipulating Information

An introduction to how players can manipulate information to their advantage.

It is very common to be in situations where you don't have all the facts. In different games, other players may have more information than you do, giving them an advantage. We have seen extreme cases of this in instances of insider trading and other types of financial fraud. In addition, those who possess the additional information may choose to conceal or reveal it depending on the situation.

An example is shopping for a used car. When going to the dealership to purchase a used car, you are at a disadvantage because you have less information. The dealer knows the value of the car you are looking to purchase as well as how much he or she is willing to sell it for. Even if you have done plenty of research through Kelley Blue Book, Carfax, and other sources, the dealer has more information and thus ultimately has the advantage.

When judging what other players do, remember that actions speak louder than words, especially when games are repeated with the same players. Depending on the type of information somebody holds, they may choose to reveal or conceal it. For example, if the information is damaging to the player, he or she will more than likely conceal it from others.

As a result, the actions they take will reveal information that is favorable to them; this is an example of signaling. They can also attempt to conceal something that would be unfavorable to them, which is known as signal jamming.

An example of signaling relates to a college education. When I completed my undergrad, there were a number of classes I took that were not really transferable to my job. Does that mean some of those classes were a waste of time? It depends on the overall goal of the individual. Many of us would agree that most of your skill development happens on the job. But the main point is the signal a college degree sends to potential employers.

The general idea is that by completing your degree you have demonstrated an aptitude to think and learn, and that is the signal sent to employers. Beyond a college education, you also send signals to potential employers through your attire during a job interview, and we can expand further to your handshake, posture, and behavior.

What about signal jamming? Imagine a scenario where you are looking to purchase a used vehicle from a private party. You go to see the vehicle and both the interior and exterior of the car are clean. Do not be fooled into assuming that the current condition of the car reflects its overall quality and maintenance. For all you know, the car could be a lemon. The owner may be using signal jamming to paint the picture that the vehicle has been well kept and is in great condition.

To screen out this potentially misleading information, it is best to have a certified mechanic inspect the vehicle before a deal is made.

Credibility in Game Theory

A framework for developing credibility.

How do you make strategies credible? We have seen strategies that do not hold much weight, and as a result, cooperation and agreements become unattainable. For example, I remember action movies where the villain is about to escape and the hero is holding the villain at gunpoint. However, the villain knows that the hero will not shoot (due to the hero's reputation for doing the right thing), so the threat is not credible and the villain escapes.

According to The Art of Strategy, there are three broad principles for making your strategic moves credible. The first principle is changing the payoffs of the game. Depending on your strategy, this turns a threat into a warning or a promise into an assurance. It is attainable through written contracts and by establishing a reputation.

Contracts are very effective because the parties agree to pay penalties and/or fines should there be a breach of the contract. For example, a financial institution could hold a vendor accountable and be compensated if service level agreements are not met.

With regard to reputation, you never want to make a strategic move in the game and then back off; otherwise you lose credibility. We've seen this occur in past presidencies, leading to lasting damage. At times you may be playing a given game against different players at different times. By establishing a good reputation, you build a history that future players will remember, which gives you instant credibility.

The second principle is to change the game by limiting your ability to back out of a commitment. This can be achieved by either cutting off communication or burning bridges. Cutting off communication can appear credible because the action is truly irreversible. An extreme case would be the ultimate sacrifice made by heroes in movies. Another example is someone being served legal papers: once they have identified themselves, there is no going back and the action is irreversible.

Burning bridges in business is like fighting off potential entrants attempting to break into the market. An example is the NFL's monopoly on its market: some competitors have tried, but all have failed.

The third principle is using others to help you maintain commitment and, ultimately, credibility. For example, teams can often establish credibility more easily than an individual. Health programs like Weight Watchers use this as part of their business model: by bringing together like-minded people with the same goals, commitment, accountability, and motivation are all increased.

Another example is the use of mandated agents. Sometimes you may find yourself in a difficult position to negotiate, whether because of a family relationship or some other social bond. As a result, it may be more advantageous to hire an impersonal agent to negotiate on your behalf. We see this with professional athletes who hire agents to handle contract negotiations with teams.

Next time we will discuss ways to manipulate information during strategic decisions.

Game Theory: Achieving Cooperation

Concepts for achieving cooperation through punishment

Last time we discussed potential strategies for analyzing the Prisoner's Dilemma. However, most of those strategies contained flaws and were not sustainable in the long run. Today I am going to provide a framework for designing punishments that achieve cooperation. These requirements come from the book The Art of Strategy, which I highly recommend to readers.

The first requirement is detection of cheating. This makes sense: before cheating can be punished, it must be detected. In addition, if detection is quick and accurate, the resulting punishment can be quick and accurate as well. You see this a lot in business, for example with airline prices. As soon as one airline drops its price for a fare, a competing airline can do the same with a quick turnaround thanks to its monitoring capabilities.

The second requirement is the nature, or choice, of the punishment. In different scenarios, players have actions available to them that can be used to hurt others, and in the long run those actions can be severe enough to erase all potential gains from cheating. One example that comes to mind is the SMU football program in the 1980s. Due to numerous NCAA infractions, the program was given the death penalty. Even though the football program is back, it has never fully recovered from the punishment.

The third requirement is clarity. There must be clear boundaries of acceptable behavior as well as clear consequences, and these should be clear to all players and potential cheaters. If there is a lack of clarity, there is a risk of players cheating by mistake. The fourth requirement is certainty. All players involved in the game must be confident that cheating will be detected and punished and that cooperation will be rewarded. Otherwise they will not have faith in the system and the potential for cheating will increase.

The fifth requirement is the size of the punishment. The question that needs to be asked is: how harsh should the punishment be? This is a tough question, and the answer depends on the situation. Sometimes it is a good idea to threaten a punishment strong enough to deter cheating entirely; as a result, the punishment may never actually have to be used.

The final requirement is repetition. Punishment needs to be applied consistently to all players and repeated as necessary. In addition, players in long-term relationships run the risk of developing a reputation that damages their relationships with current and future players. If you are a known cheater, whether in sports, business, or elsewhere, others will turn away from you; this could be teammates, teams, customers, or business partners. This is not to say that punishment is always the way to achieve cooperation. However, for punishments to be effective, meeting these requirements will improve the chances of cooperation.

Game Theory: Applying Strategy to Prisoner's Dilemma

Intro to strategies for solving Prisoner’s Dilemma

Previously we used Nash equilibrium to solve both sequential and simultaneous games. However, there are other methods and strategies for analyzing games. In this blog we will direct our attention to the Prisoner's Dilemma.

An example of Prisoner’s Dilemma is below:

1\2         Cooperate   Defect
Cooperate   3, 3        -2, 6
Defect      6, -2       0, 0

The Nash equilibrium for this game is (Defect, Defect) for a payoff of (0, 0). This assumes that both players are rational decision makers and that each knows the other is rational. However, in the real world we know this is not always the case; players can have advantages or disadvantages. So is it possible for players to end up in a different strategy profile, (Cooperate, Cooperate), rather than (Defect, Defect)?
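A short sketch in Python makes this concrete: it checks every cell of the matrix above for mutual best responses and confirms that (Defect, Defect) is the only Nash equilibrium, even though (Cooperate, Cooperate) pays both players more:

```python
# Payoffs from the matrix above: payoffs[(row, col)] = (Player 1's payoff, Player 2's payoff)
C, D = "Cooperate", "Defect"
payoffs = {
    (C, C): (3, 3),  (C, D): (-2, 6),
    (D, C): (6, -2), (D, D): (0, 0),
}
actions = [C, D]

def is_nash(a1, a2):
    """A cell is a Nash equilibrium if neither player gains by deviating alone."""
    p1_best = all(payoffs[(a1, a2)][0] >= payoffs[(alt, a2)][0] for alt in actions)
    p2_best = all(payoffs[(a1, a2)][1] >= payoffs[(a1, alt)][1] for alt in actions)
    return p1_best and p2_best

print([cell for cell in payoffs if is_nash(*cell)])   # [('Defect', 'Defect')]
```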

One possible strategy is providing an incentive or reward. Player 2 can be given an incentive to cooperate rather than defect by means of a suitable reward. On the flip side, Player 2 can be deterred from defecting by being threatened. But the reward approach is unsustainable and can create problems. For example, the reward cannot be given before the choice is made; otherwise Player 2 can simply take the reward and then defect. In addition, if the reward is merely promised and not credible, Player 2 may still decide to defect.

On the other hand, punishment is more often used to solve the Prisoner's Dilemma. However, the threat must be credible and hold weight; otherwise the other player will not cooperate. Fear of retaliation can be a very effective tool.

Sports like baseball illustrate the threat of punishment to maintain cooperation. American League batters are more likely to be hit by pitches than National League batters. This is because American League pitchers do not come up to bat, so the threat of punishment doesn't hold much weight. National League pitchers do have to bat, so the fear of retaliation is far more apparent.

Another strategy used in the Prisoner's Dilemma is tit for tat. This is a variation of the eye-for-an-eye rule: do unto others as they have done unto you. The strategy cooperates in the first period and from then on copies the other player's action from the previous period. The founder of the tit for tat strategy listed four principles that an effective strategy must satisfy.

  1. Clarity – Tit for tat is clear and simple.
  2. Niceness – It never initiates cheating.
  3. Provocability – The strategy does not let cheating go unpunished.
  4. Forgiveness – It does not hold a grudge for too long and will return to cooperation.

However, a major problem with tit for tat is that there is no end: it involves too much provocation and not enough forgiveness. Player 1 punishes Player 2 for a defection, which sets off an endless cycle; Player 2 responds to the punishment with retaliation, which provokes another punishment from Player 1, and so on. Unfortunately we see this dynamic in Middle Eastern conflicts (Israel vs. Palestine). Next time we will continue to look at more ways to achieve cooperation.
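A minimal simulation of two tit-for-tat players (using C for cooperate and D for defect, as in the matrix above) shows how a single defection, even an accidental one, locks both players into the alternating cycle of retaliation just described:

```python
def tit_for_tat(their_history):
    """Cooperate in the first period, then copy the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

h1, h2 = [], []
for period in range(10):
    a1 = tit_for_tat(h2)
    a2 = tit_for_tat(h1)
    if period == 3:          # a single accidental defection by Player 1
        a1 = "D"
    h1.append(a1)
    h2.append(a2)

print("Player 1:", " ".join(h1))   # C C C D C D C D C D
print("Player 2:", " ".join(h2))   # C C C C D C D C D C
```

After the single mistake, the two players punish each other forever in alternation; neither strategy contains enough forgiveness to break the cycle.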

Sequential Games V

Final look at sequential games

Today we will take one last look at solving sequential games. Before we do, though, let's solve the example from the end of the previous blog.

[Figure: game tree for the shirk-or-work-hard example]

Let p represent the probability that your boss writes you up and 1-p the probability that your boss does not. Setting the expected payoff from shirking equal to the payoff from working hard gives -10p + 40(1-p) = 20.

-10p + 40 – 40p = 20 =>

-50p = -20 =>

p = -20/-50 = 2/5

1-p = 3/5

So you will choose to shirk if the probability of your boss writing you up is less than 2/5, or equivalently if the probability of your boss not writing you up is greater than 3/5. Of course, you would never shirk to begin with and would always work hard, right?
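The same cutoff can be checked with a few lines of Python, assuming the payoffs implied by the equation above (-10 if you shirk and get written up, 40 if you shirk and don't, and 20 for working hard):

```python
def expected_shirk_payoff(p, caught=-10, not_caught=40):
    """Expected payoff from shirking when the boss writes you up with probability p."""
    return caught * p + not_caught * (1 - p)

work_payoff = 20
p_star = (40 - work_payoff) / (40 - (-10))        # solving -10p + 40(1 - p) = 20 gives p = 2/5
print(p_star)                                      # 0.4
print(expected_shirk_payoff(0.3) > work_payoff)    # True:  shirking beats working when p < 2/5
print(expected_shirk_payoff(0.5) > work_payoff)    # False: working hard wins when p > 2/5
```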

In most of our examples we have looked at sequential games involving two players. In the following example there will be four participants. Yogurtland is looking to establish a shop in a new market; however, three potential competitors (Mickey's Yogurt, Yum Yum Yogurt, and Big Kahuna) are also debating whether to enter the market.

[Figure: game tree for the four-firm yogurt market entry game]

The payoffs are organized from top to bottom representing Yogurtland, Yum Yum, Mickey’s, and Big Kahuna respectively. We can solve this game using backwards induction. Let’s start with the options for Big Kahuna.

Scenarios for Big Kahuna (there are a lot):

Enter, Enter, Enter, Enter = 0

Enter, Enter, Enter, No = 1

Enter, Enter, No, Enter = 3

Enter, Enter, No, No = 1

Enter, No, Enter, Enter = 3

Enter, No, Enter, No = 2

Enter, No, No, Enter = 3

Enter, No, No, No = 4 

No, No, Enter, Enter = 4

No, No, Enter, No = 4

No, No, No, Enter = 3

No, No, No, No = 0

No, Enter, Enter, Enter = 2

No, Enter, Enter, No = 4

No, Enter, No, Enter = 3

No, Enter, No, No = 1

[Figure: game tree with Big Kahuna's preferred choices highlighted in red]

The preferences of Big Kahuna have been highlighted in red based on the payouts from each outcome. I also placed in bold the preferred actions of Big Kahuna above. Now let’s look at Mickey’s. Since Mickey’s assumes Big Kahuna will make optimal decisions, Mickey’s options are the following:

Enter, Enter, Enter, No = 3

Enter, Enter, No, Enter  = 1

Enter, No, Enter, Enter = 2

Enter, No, No, No = 1

No, No, Enter, Enter = 1

No, No, Enter, No = 1

No, No, No, Enter = 2

No, Enter, Enter, No = 1

No, Enter, No, Enter = 0

[Figure: game tree with Mickey's optimal choices highlighted in blue]

Using backwards induction, Mickey's optimal choices are highlighted in blue. Let's continue the process by identifying Yum Yum's options.

Scenarios for Yum Yum:

Enter, Enter, Enter, No = 2

Enter, No, Enter, Enter = 3

No, No, No, Enter = 1

No, Enter, Enter, No = 3

[Figure: game tree with Yum Yum's optimal choices highlighted in green]

Yum Yum's optimal choices are now highlighted in green. As you can see, the yogurt shops' options narrow as we apply backwards induction. This leaves Yogurtland with a decision to make: enter or do not enter the market.

Their scenarios are:

Enter, No, Enter, Enter = 2

No, Enter, Enter, No = 2

[Figure: the fully solved game tree showing Yogurtland's remaining options]

Based on these options, Yogurtland is indifferent between entering the market and not entering, since its payoff is the same either way. So the solutions to this game are (2,3,2,3) and (2,3,1,4), or (Enter, No, Enter, Enter) and (No, Enter, Enter, No).
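For readers who want to automate this kind of analysis, here is a minimal, generic backward-induction sketch in Python. The tree below is a small hypothetical two-firm entry game invented for illustration, not the four-firm yogurt tree from the figures, but the same function would solve any finite tree written in this format:

```python
# A node is either a payoff tuple (one entry per player, indexed by mover)
# or a pair (player_index, {action: subtree}).
# Hypothetical two-firm entry game; payoffs are made up for illustration.
tree = (0, {
    "Enter": (1, {"Enter": (1, 1), "No": (4, 0)}),
    "No":    (1, {"Enter": (0, 3), "No": (2, 2)}),
})

def backward_induct(node):
    """Return (payoff tuple, list of optimal actions) for a finite game tree."""
    if not isinstance(node[1], dict):      # leaf: the node is just a payoff tuple
        return node, []
    player, branches = node
    best_payoffs, best_path, best_action = None, None, None
    for action, subtree in branches.items():
        payoffs, path = backward_induct(subtree)
        # The player moving at this node keeps whichever branch pays it the most.
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_payoffs, best_path, best_action = payoffs, path, action
    return best_payoffs, [best_action] + best_path

print(backward_induct(tree))   # ((1, 1), ['Enter', 'Enter'])
```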