L07: Smart Contracts & Game Theory

Master the intersection of smart contracts and game theory: Nash equilibrium, mechanism design, and how to align incentives in decentralized systems.

⏱️ Estimated Time: 3 hours for complete mastery

Learning Objectives

By the end of this study session, you will be able to:

  • Understand Nash equilibrium and identify it in crypto economic games
  • Analyze security models: 51% attacks, incentive compatibility, and mechanism design
  • Design incentive structures that align individual and network goals
  • Apply game theory to evaluate consensus mechanisms and voting systems
  • Understand smart contract vulnerabilities from a game theory perspective
  • Model multi-agent systems and predict equilibrium outcomes
  • Recognize and prevent gaming attacks: fee sniping, MEV extraction, Sybil attacks

Study Path

Read Summary Slides

Start with the summary slides (PDF). Focus on game theory diagrams and protocol mechanism designs.

Learn Game Theory Basics

Review the key concepts below: Nash equilibrium, mechanism design, and incentive compatibility. These form the foundation for understanding all crypto protocols.

Analyze Real Protocols

Work through practice problems analyzing actual blockchain protocols and identifying game-theoretic properties.

Take the Quiz

Test your knowledge with Quiz 7. Aim for at least 80% correct.

Smart Contract Audit (Optional)

Try the Smart Contract Audit Challenge to practice spotting vulnerabilities.

Key Concepts Summary

Nash Equilibrium

Definition: A state where no player can improve their outcome by unilaterally changing strategy, given what others are doing.

In Blockchain: If all miners follow the protocol honestly and earn block rewards, each miner's best response is to also follow the protocol honestly. This is a Nash equilibrium: everyone is happy, no incentive to deviate.

Bad Equilibrium: If miners expect a 51% attack to succeed (crashing the coin's value), each miner's best response is to stop mining and sell their rigs. Everyone defecting is also a Nash equilibrium, but a far worse one for the network.
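Both equilibria can be checked mechanically. A minimal sketch with hypothetical payoffs (5 for mutual honesty, 1 for mutual defection, 0 otherwise; the numbers are illustrative, not protocol data):

```python
from itertools import product

# Hypothetical payoffs for a symmetric two-miner game: mutual honesty pays
# best, mutual defection pays a little, being the odd one out pays nothing.
strategies = ["honest", "defect"]
payoff_a = {("honest", "honest"): 5, ("honest", "defect"): 0,
            ("defect", "honest"): 0, ("defect", "defect"): 1}
# Symmetric game: B's payoff is A's with the roles swapped.
payoff_b = {(a, b): payoff_a[(b, a)] for a, b in product(strategies, repeat=2)}

def is_nash(a, b):
    # Nash equilibrium: neither player gains by unilaterally switching.
    a_ok = all(payoff_a[(a, b)] >= payoff_a[(alt, b)] for alt in strategies)
    b_ok = all(payoff_b[(a, b)] >= payoff_b[(a, alt)] for alt in strategies)
    return a_ok and b_ok

equilibria = [p for p in product(strategies, repeat=2) if is_nash(*p)]
print(equilibria)  # [('honest', 'honest'), ('defect', 'defect')]
```

Both all-honest and all-defect survive the check, matching the good and bad equilibria described above.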

Incentive Compatibility (IC)

Goal: Design mechanisms so that truthful/honest behavior is optimal for each participant.

Example: Proof of Work: Miners earn rewards for finding valid blocks. Lying (creating invalid blocks) costs electricity for no reward. Honest > dishonest, so IC is satisfied.

Violation: If protocol rewards long-range attacks (reversing old blocks), miners have incentive to rewrite history. System breaks.

Mechanism Design

Challenge: Design rules (mechanisms) so that individuals pursuing self-interest produce socially optimal outcomes.

Example: Auction design. A second-price auction makes bidding your true value optimal; a first-price auction encourages underbidding. Same good for sale, different rules, different incentives.

In Crypto: Emissions, slashing penalties, and governance rules are mechanisms. Good design aligns incentives; bad design creates perverse outcomes.
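The auction claim can be verified by brute force. A sketch with hypothetical valuations (my value 100, competing bids 60 and 80; whole-unit bids):

```python
def second_price_utility(bid, value, other_bids):
    # Winner pays the highest competing bid, not their own bid.
    return value - max(other_bids) if bid > max(other_bids) else 0

def first_price_utility(bid, value, other_bids):
    # Winner pays exactly what they bid.
    return value - bid if bid > max(other_bids) else 0

value, others = 100, [60, 80]   # hypothetical true value and competing bids
bids = range(151)

# Second-price: bidding the true value attains the maximum possible utility.
best = max(second_price_utility(b, value, others) for b in bids)
print(second_price_utility(value, value, others) == best)  # True

# First-price: truthful bidding earns 0; shading below value does better.
best_bid = max(bids, key=lambda b: first_price_utility(b, value, others))
print(best_bid < value, first_price_utility(value, value, others))  # True 0
```

The same valuations produce truthful bidding in one mechanism and underbidding in the other, which is the whole point of mechanism design.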

Consensus Security & Game Theory

51% Attack: If attacker controls >50% of hashpower/stake, they can rewrite history or censor transactions. The question: Is attacking profitable?

Incentive Analysis: If attacking costs more (lost rewards, slashing) than gained (profit from double-spend), rational actors won't attack. This is incentive compatibility.

Example: Bitcoin: Cost of 51% attack ≈ billions (buying hardware, electricity). Profit from one double-spend ≈ millions. Not worth it for most actors.
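This cost-benefit argument is just an expected-value check; a sketch using the illustrative orders of magnitude from the text (billions to attack, millions to gain):

```python
def attack_is_rational(attack_cost, expected_gain, success_prob=1.0):
    # A rational actor attacks only when the expected gain exceeds the cost.
    # (Lost block rewards and hardware depreciation could be folded into
    # attack_cost for a fuller model.)
    return success_prob * expected_gain > attack_cost

# Billions to mount the attack, millions from a double-spend: not rational.
print(attack_is_rational(attack_cost=5e9, expected_gain=50e6))  # False
# If the attack were cheap relative to the prize, the conclusion flips.
print(attack_is_rational(attack_cost=1e6, expected_gain=50e6))  # True
```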

Maximal Extractable Value (MEV)

Definition: Additional profit miners/validators can extract by reordering or inserting transactions in a block.

Example: Sandwich attack - see a pending swap in mempool, insert your own trade before it (pushing price up), let victim's trade execute at worse price, profit from price change.

Impact: MEV benefits miners but harms users. Reduces incentive compatibility: users can't rely on transaction ordering.

Voting & Governance Games

Voter Apathy: Rational voters might not participate if cost (time, gas fees) > benefit (influence). Leads to low participation and whale dominance.

Quadratic Voting: Cost to vote = (votes)². Encourages many people to vote once rather than few people to vote many times. More aligned incentives.

Delegation: Allow voters to delegate power. Reduces barrier to participation but concentrates power among delegates.

Sybil Attacks & Identity

Problem: In open networks, one person can create many accounts (Sybils) to gain disproportionate voting/reward power.

Solutions:
- Proof of Work (costly to create many identities)
- Stake-based voting (need capital, not just accounts)
- Human verification (captchas, biometrics)
- Reputation systems (long-lived accounts trusted more)
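The first two defenses work by attaching a per-identity cost, so the attack cost scales linearly with the number of Sybils. A sketch with a hypothetical 100-token stake requirement:

```python
def sybil_attack_cost(n_identities, cost_per_identity):
    # With a stake or work requirement, every additional fake identity
    # costs as much as the first one.
    return n_identities * cost_per_identity

print(sybil_attack_cost(1000, 0))    # 0      -- open network: Sybils are free
print(sybil_attack_cost(1000, 100))  # 100000 -- 100-token stake per wallet
```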

Practice Problems

Problem 1: In Bitcoin's PoW, mining a valid block costs ~$100k (electricity, hardware amortized) and earns ~$156k block reward. Is honest mining a Nash equilibrium?
Answer: Yes. Cost ≈ $100k, revenue ≈ $156k, profit ≈ $56k per block. Given that everyone else mines honestly, each miner's best response is to mine honestly too.
Alternative (attacking): A miner could attempt a 51% attack. But success requires controlling >50% of hashpower (billions in investment), while the profit from a double-spend is at most around $100M (and rarely that much). The expected value is negative, so attacking is irrational. Honest mining dominates attacking, making the all-honest profile a strict Nash equilibrium. The system is secure.
Problem 2: An Ethereum validator can stake 32 ETH and earn 5% annual rewards. If they attack the network and get caught, their stake is slashed (lost entirely). Is honest validation incentive-compatible?
Answer: Yes, highly incentive-compatible. Honest validation: 32 ETH * 1.05 = 33.6 ETH after one year, a profit of 1.6 ETH. Attacking: if caught, lose all 32 ETH; if undetected, gain, say, 10 ETH. For honesty to win, the detection probability p must satisfy p * (-32) + (1-p) * 10 < 1.6. Solving: -32p + 10 - 10p < 1.6, so 42p > 8.4, so p > 20%. Since slashing in practice detects attacks far more often than 20% of the time, honest validation is strictly better. Incentive-compatible and secure.
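The break-even detection probability can be checked numerically (figures from the problem; the 10 ETH attack gain is the problem's assumption):

```python
import math

def attack_ev(p_caught, stake, attack_gain):
    # Expected value of attacking: lose the stake if caught,
    # keep the gain if undetected.
    return p_caught * (-stake) + (1 - p_caught) * attack_gain

stake, gain = 32.0, 10.0          # ETH
honest_profit = stake * 0.05      # 1.6 ETH per year at 5%

# Solve p*(-stake) + (1-p)*gain = honest_profit for p:
p_star = (gain - honest_profit) / (stake + gain)
print(round(p_star, 4))  # 0.2 -- attacks must be caught >20% of the time
assert math.isclose(attack_ev(p_star, stake, gain), honest_profit)
```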
Problem 3: A governance token allows 1 token = 1 vote. Whales with 30% of tokens can control votes. How could quadratic voting reduce whale power?
Answer: In 1-token-1-vote: a whale with 30% of tokens has 30% of voting power.
In Quadratic Voting: casting n votes costs n² tokens, so a balance of t tokens buys sqrt(t) votes. For example, 1000 tokens buys sqrt(1000) ≈ 31.6 votes, not 1000. The distribution becomes:
- 10 people with 1 token each: 10 * sqrt(1) = 10 votes total
- 1 person with 100 tokens: sqrt(100) = 10 votes
Equal voting power despite a 100x per-person capital difference! The whale's dominance is severely reduced. Trade-off: whales might exit if they feel underrepresented, but the outcome is better aligned with the democratic "one person, one vote" principle.
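The vote counts in that distribution follow directly from the square-root rule; a short sketch:

```python
import math

def qv_votes(tokens):
    # Under quadratic voting, casting n votes costs n^2 tokens,
    # so a balance of t tokens buys sqrt(t) votes.
    return math.sqrt(tokens)

small_holders = 10 * qv_votes(1)   # ten people with 1 token each
whale = qv_votes(100)              # one person with 100 tokens
print(small_holders, whale)        # 10.0 10.0
```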
Problem 4: A DEX allows MEV extraction via sandwich attacks. A user's trade will increase token price 5%. A sandwich attacker buys before, sells after. Calculate profit if attacking a $100k trade.
Answer: Price impact: +5%.
1. Attacker buys at current price, say Token A at $100. Gets 1000 tokens. Cost: $100k.
2. User's trade pushes price to $105 (5% impact).
3. Attacker sells 1000 tokens at $105. Gets $105k.
4. Attacker profit: $105k - $100k = $5k (ignoring fees).
The victim (user) pays $5k extra for their trade due to MEV extraction. Is this incentive-compatible? From the user's perspective, no - they're harmed. From the attacker's perspective, yes - the profit is essentially risk-free. This misalignment is why MEV is problematic. Mitigations: private mempools (encrypted transactions), PBS (Proposer-Builder Separation), MEV-resistant ordering designs.
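The arithmetic above reduces to a one-line formula under the answer's simplifications (fees and the attacker's own price impact are ignored):

```python
def sandwich_profit(attacker_capital, victim_price_impact):
    # Buy before the victim's trade, sell after the price moves by
    # victim_price_impact: profit is simply capital * impact here.
    return attacker_capital * victim_price_impact

print(round(sandwich_profit(100_000, 0.05), 2))  # 5000.0
```

In a real AMM the attacker's own buy also moves the price, so actual profit is lower; this is only the idealized upper bound used in the answer.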
Problem 5: A governance vote requires 50% participation threshold. If voting costs 1 gwei (time, gas) per vote, and average voter benefit from voting is 0.5 gwei, will voters participate?
Answer: Rationally, no. Cost (1 gwei) > benefit (0.5 gwei), so expected value is negative. This is voter apathy problem. Participation will be low (only idealistic voters will participate despite losses).
To fix:
1. Reduce voting cost: Lower gas fees via L2s, batch voting.
2. Increase benefit: Make voting more impactful, reward early voters.
3. Change threshold: Remove participation requirement, just require quorum or majority of participants.
Example: Uniswap governance requires a fixed absolute quorum of UNI votes rather than 50% participation. This is achievable with whales and an engaged community, even if most token holders don't vote. Trade-off: whale dominance. Gasless delegation to active community members further reduces the cost of participation.
Problem 6: A Sybil attack in an airdrop: Create 1000 wallets to claim airdrop 1000 times. What mechanisms prevent this?
Answer:
Mechanisms:
1. Proof of Work/Stake: Require stake or work for each wallet. Attacker needs 1000x cost. Sybil attack becomes economically infeasible.
2. Historical Snapshots: Airdrop to wallets that held funds or used the protocol before the announcement. Since the snapshot predates the announcement, an attacker would have had to fund 1000 wallets before knowing the airdrop existed - expensive and speculative.
3. Verification: KYC (Know Your Customer) or proof of personhood (e.g., BrightID, Worldcoin). One person = one airdrop. Privacy trade-off.
4. Reputation: Airdrop to old accounts (>2 years). New wallets excluded. Incentivizes long-term participation.
5. Sqrt/Quadratic Weighting: Weight each verified identity by the square root of its holdings, favoring many small genuine holders over one large one. Caveat: applied per wallet without identity checks, sqrt weighting actually rewards splitting funds across Sybil wallets, so it must be combined with one of the verification mechanisms above.
Reality: Most projects combine several of these. Uniswap's airdrop, for example, was based on a historical snapshot of past protocol users.
Problem 7: Compare two protocol designs: A requires 100 tokens to vote, B requires 1 token. Which prevents Sybil attacks better, and what are trade-offs?
Answer:
Sybil Prevention: Protocol A is better. To create 1000 voting accounts, attacker needs 100k tokens (expensive). In B, need only 1k tokens (much easier).
Trade-offs:
A's benefits: Sybil-resistant, whales get proportional voting power (fair for capital contribution).
A's drawbacks: Excludes poor users, reduces participation.
B's benefits: Inclusive, "one person, one vote" ideal, high participation.
B's drawbacks: Vulnerable to Sybil attacks, easily gamed.
Hybrid Solution: Voting power = sqrt(tokens held), plus a minimum balance (say 100 tokens) to vote at all. The minimum deters Sybils, sqrt weighting flattens whale power, and small holders can still participate. Note that sqrt weighting alone is not Sybil-resistant - splitting tokens across wallets would increase total power - so the per-identity minimum is essential.
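One subtlety is worth checking numerically: sqrt weighting applied per wallet rewards splitting a balance across many wallets, which is why a per-identity cost such as Protocol A's token minimum still matters. A sketch with hypothetical balances:

```python
import math

def sqrt_power(balances):
    # Total voting power under per-wallet square-root weighting.
    return sum(math.sqrt(b) for b in balances)

whole = sqrt_power([10_000])      # one wallet holding 10k tokens
split = sqrt_power([10] * 1000)   # same tokens across 1000 Sybil wallets
print(whole, round(split, 1))     # 100.0 3162.3 -- splitting multiplies power
```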
Problem 8: Design a mechanism to align small validator incentives with network security. Small validators are ignored; large validators make all decisions. How do you fix this?
Answer:
Problem: Large validators dominate governance, small validators are demotivated. Network quality suffers (small validators might drop out).
Solutions:
1. Committee Rotation: Randomly select small validators to propose blocks. Even small validators get occasional opportunities and rewards.
2. Tiered Voting: Weight votes logarithmically (log(stake)) instead of linearly. Small holders' votes matter more.
3. Delegation: Small validators delegate to large ones but get a % of delegated rewards. Aligns incentives without removing participation.
4. Minimum Rewards: Guarantee minimum per-validator reward, regardless of size. Ensures profitability even for small validators.
5. Network Incentives: Extra rewards for nodes that improve decentralization (e.g., independent operators outside the largest pools). Favor architectures with many small validators.
Example: Cosmos uses Tendermint consensus with a bounded active validator set (the top validators by stake; the cap started at 100 and has been raised over time), but allows delegation. Small holders delegate to validators and earn a share of rewards. Reduces Sybil risk while maintaining participation.
Problem 9: A protocol has a fee market: transaction fees = f. Users want low fees, miners want high fees. Design a mechanism that balances both.
Answer:
Problem: User interests (low fees) vs. miner interests (high fees) are opposed. Without mechanism design, this becomes an arms race.
Ethereum's Solution - EIP-1559:
1. Base Fee: Automatically adjusts each block based on how full the previous block was relative to a gas target. If blocks are full, the base fee rises; if empty, it falls. It converges toward an equilibrium fee where demand ≈ capacity.
2. Fee Burn: Base fee is burned, not given to miners. Miners only get tip (prioritization). This decouples miner incentive from fee level, reducing fee pressure.
3. Game Theory: Users see base fee, know it will adjust. Miners can't influence base fee directly (disincentive to game it). Users adjust transaction timing based on base fee (respond to supply-demand). System reaches equilibrium.
Result: More predictable fees, less volatility. Trade-off: Validators earn less (the base fee is burned rather than paid to them), but income from tips plus staking rewards is more stable. Overall incentive-compatible.
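The base-fee feedback loop can be sketched with a simplified update rule (the real EIP-1559 formula works in integer gas units with a 12.5% max change per block; the gas numbers here are illustrative):

```python
def next_base_fee(base_fee, gas_used, gas_target, max_change=0.125):
    # The base fee moves toward the point where demand meets the gas
    # target, by at most 12.5% per block.
    delta = (gas_used - gas_target) / gas_target * max_change
    return base_fee * (1 + delta)

# A block exactly at target leaves the fee unchanged: the equilibrium.
print(next_base_fee(100.0, 15_000_000, 15_000_000))  # 100.0

fee = 100.0  # starting base fee in gwei (illustrative)
for _ in range(5):  # five consecutive completely full blocks
    fee = next_base_fee(fee, gas_used=30_000_000, gas_target=15_000_000)
print(round(fee, 2))  # 180.2 -- compounds up 12.5% per full block
```

Because users respond to the rising fee by delaying transactions, demand falls back toward the target and the fee stabilizes, which is the equilibrium argument made above.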
Problem 10: Critical thinking - A protocol has no slashing penalties for validators. Is this safe? What game-theoretic consequences arise?
Answer:
Safety: Without slashing, validators can attack with minimal cost. They earn rewards from honest validation without downside risk.
Game-Theoretic Consequences:
1. Incentive Misalignment: Honest and dishonest behavior earn similar rewards. No strong incentive to stay honest.
2. Attacks Become Profitable: If an attacker can earn $1M from a 51% attack and the downside is zero, the expected value is positive. Attacks increase.
3. Reduced Security: Network becomes vulnerable. Users lose confidence.
4. Concentration Risk: Validators with nothing to lose might go rogue or disappear. Participation may drop.
5. Bad Equilibrium: Rational validators see others attacking, realize staying honest is not optimal, also attack. Network spirals down.
Solution: Implement slashing. Getting caught attacking means losing stake, so size the penalty such that honest validation's expected value (the reward, with no slashing risk) exceeds attacking's (attack gain × success probability − slashed stake × detection probability). This restores incentive compatibility.
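The incentive gap can be made concrete with a two-line expected-value comparison (hypothetical numbers: 50 gained by a successful attack, 90% detection, 32 at stake):

```python
def attack_ev(gain, p_caught, slash_amount):
    # Expected value of attacking: keep the gain if undetected,
    # lose the slashed amount if caught.
    return (1 - p_caught) * gain - p_caught * slash_amount

# No slashing: even with 90% detection, attacking has positive EV.
print(round(attack_ev(gain=50, p_caught=0.9, slash_amount=0), 2))   # 5.0
# With slashing: the same attack is strongly negative in expectation.
print(round(attack_ev(gain=50, p_caught=0.9, slash_amount=32), 2))  # -23.8
```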

Self-Check Questions

Before moving to Lesson 8, ensure you can confidently answer these questions:

  • Can you define Nash equilibrium and identify it in a simple game?
  • Can you explain why honest mining is a Nash equilibrium in Bitcoin?
  • Can you design a slashing penalty to prevent validator attacks?
  • Can you explain MEV and describe sandwich attacks?
  • Can you compare quadratic voting to token-based voting?
  • Can you describe mechanisms to prevent Sybil attacks?
  • Can you identify misaligned incentives in a protocol design?

If you answered "yes" to most, you're ready for Lesson 8: Regulation & Future!