Preparation Checklist
- Print 4 team prep sheets (one per team)
- Print 6 use case scenario cards
- Print voting ballots (1 per student; students will not vote for their own team)
- Print grading rubrics (1 per team = 4 copies)
- Review fact-check reference below
- Prepare timer for each phase
- Optional: Whiteboard/slides for tracking time and scores
Session Timeline (60 Minutes)
0:00-0:10
Team Formation & Prep
- Divide class into 4 teams, assign mechanisms randomly
- Distribute prep sheets
- Teams read and strategize (encourage note-taking)
0:10-0:30
Opening Statements & Cross-Examination
- Each team: 2-minute opening argument (8 min total)
- After all openings: Cross-examination round (3 min per team = 12 min)
- Teams take turns answering questions from opponents
- Keep time strictly; use a timer visible to all
0:30-0:45
Use Case Analysis
- Randomly distribute 4 of the 6 scenario cards (one per team)
- Teams have 2 minutes to discuss
- Each team presents 3-minute argument for why their mechanism fits the use case
- Brief 1-minute rebuttal period for opposing teams
0:45-0:55
Voting & Tabulation
- Distribute voting ballots
- Students vote anonymously (cannot vote for own team)
- Collect and quickly tally votes
- Optional: Announce "People's Choice" winner per category
0:55-1:00
Instructor Debrief
- Key takeaways about tradeoffs
- Emphasize: "No perfect mechanism—only appropriate choices for specific contexts"
- Preview next lesson on consensus mechanism details
Moderation Tips
Keep Energy High
- Encourage respectful rivalry—this is a debate, not a lecture
- Interject with "Good point!" or "That's a strong counterargument" to validate contributions
- If a team struggles, prompt with leading questions: "What about energy consumption?" or "How does finality work in your mechanism?"
Handle Dominance/Silence
- If one student dominates: "Let's hear from [name]—what's your take?"
- If team is silent: "Team X, you have 30 seconds to respond—what's your defense?"
- Encourage equal participation in grading (see rubric)
Common Pitfalls to Watch
- False claims: Gently correct major factual errors (see fact-check section below)
- Ad hominem attacks: Redirect to technical arguments—"Let's focus on the mechanism, not the team"
- Oversimplification: Nudge students to acknowledge tradeoffs—"But what's the cost of that benefit?"
Fact-Check Reference
Use this to gently correct major inaccuracies during debates. Don't interrupt flow for minor errors.
Proof of Work (PoW)
| Metric | Accurate Value | Nuance / Common Misconception |
|---|---|---|
| Bitcoin TPS | ~7 TPS | "Bitcoin is slow" (true, but that's by design for security) |
| Energy Consumption | ~150 TWh/year (2024) | Comparable to Argentina; 58.9% renewable energy (2022 data) |
| 51% Attack Cost | ~$20B+ hardware + ongoing electricity | Often underestimated; requires controlling majority hashrate |
| Finality Time | ~60 min (6 confirmations) | Probabilistic, not absolute; exchanges vary (some use 3-6 blocks) |
| Mining Centralization | Top 4 pools: ~55% hashrate | Concerning but miners can switch pools instantly |
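For instructors who want to make the "probabilistic, not absolute" finality row concrete during the debrief, the double-spend success probability from the original Bitcoin whitepaper can be computed directly. A minimal sketch (the function name is ours, not a library API):

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability an attacker with hashrate fraction q ever overtakes
    the honest chain after z confirmations (Nakamoto whitepaper formula)."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker eventually wins with certainty
    lam = z * (q / p)  # expected attacker blocks while honest chain mines z
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

# With 10% of hashrate, waiting 6 confirmations leaves the attacker
# roughly a 0.02% chance of success; with 0 confirmations, certainty.
print(attacker_success(0.10, 6))
```

This shows why "6 confirmations" became the convention: the success probability drops geometrically with each added block, but never reaches exactly zero.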
Proof of Stake (PoS)
| Metric | Accurate Value | Nuance / Common Misconception |
|---|---|---|
| Ethereum Energy Reduction | 99.95% (post-merge) | From ~94 TWh/year (PoW) to ~0.01 TWh/year (PoS) |
| Validator Count | ~900,000 validators (2024) | But top staking pools control ~40% of stake (Lido ~30%) |
| Finality Time | ~15 minutes in practice (2 epochs ≈ 12.8 min minimum) | Faster than PoW but not instant; economic finality earlier |
| Minimum Stake | 32 ETH (~$96k at $3k/ETH) | Liquid staking pools enable participation with less |
| Slashing | Minimum 1 ETH, max entire stake | Penalties scale with number of simultaneous slashings |
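The finality row follows directly from Ethereum mainnet's slot parameters, which can serve as a quick arithmetic check in the debrief:

```python
SECONDS_PER_SLOT = 12    # Ethereum mainnet slot time
SLOTS_PER_EPOCH = 32
EPOCHS_TO_FINALIZE = 2   # a checkpoint finalizes after two justified epochs

finality_seconds = EPOCHS_TO_FINALIZE * SLOTS_PER_EPOCH * SECONDS_PER_SLOT
print(finality_seconds, "s =", finality_seconds / 60, "min")  # 768 s = 12.8 min
```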
Delegated Proof of Stake (DPoS)
| Metric | Accurate Value | Nuance / Common Misconception |
|---|---|---|
| EOS Throughput | 4,000 TPS (achieved) | Theoretical max higher, but 4k is real-world sustained |
| EOS Validators | 21 active + 49 standby | "Only 21" is concerning but standby validators provide backup |
| Block Time | 0.5 seconds (EOS) | Much faster than PoW/PoS, enables high TPS |
| Transaction Fees | Zero (EOS model) | Users stake tokens for network resources instead |
| Governance Issues | Steem/Hive fork (2020) | Example of both vulnerability and resilience—community forked |
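The 0.5-second block time and 21-producer set imply a simple round-robin schedule, which helps students see where DPoS gets its speed. A sketch using EOS-style parameters (each producer signs 12 consecutive blocks per turn; the helper function is illustrative, not actual EOSIO code):

```python
PRODUCERS = 21
BLOCKS_PER_TURN = 12   # consecutive blocks each producer signs (EOS)
BLOCK_TIME_S = 0.5

def producer_for_block(height: int) -> int:
    """Index of the producer scheduled to sign a given block height."""
    return (height // BLOCKS_PER_TURN) % PRODUCERS

round_seconds = PRODUCERS * BLOCKS_PER_TURN * BLOCK_TIME_S
print(round_seconds)  # 126.0: every producer gets one 6-second turn per round
```

Because the schedule is fixed in advance, producers never race each other for a block, which is exactly the tradeoff debaters should surface: speed bought with a small, known validator set.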
Byzantine Fault Tolerance (BFT)
| Metric | Accurate Value | Nuance / Common Misconception |
|---|---|---|
| Finality | 1-3 seconds, deterministic | Instant and provable—no probabilistic waiting |
| Throughput | 1,000-10,000 TPS (Tendermint) | Varies by implementation; Hedera claims 10k+ |
| Fault Tolerance | Tolerates f Byzantine out of 3f+1 total | Requires >2/3 honest voting power; tolerates strictly fewer than 1/3 Byzantine validators |
| Validator Limit | ~100-200 practical | O(n²) communication complexity; not infinitely scalable |
| Cosmos Validators | 175 active validators | Tendermint (BFT + PoS hybrid); entry is stake-based and permissionless, with the set size decided by governance |
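The f-out-of-3f+1 bound is easy to demonstrate on the board or in two small helper functions (the names are ours):

```python
def max_byzantine(n: int) -> int:
    """Largest f such that n >= 3f + 1: the Byzantine faults n validators tolerate."""
    return (n - 1) // 3

def quorum(n: int) -> int:
    """Votes needed for a decision: more than two-thirds of n (2f + 1 when n = 3f + 1)."""
    return n - max_byzantine(n)

# 4 validators tolerate 1 fault; 100 tolerate 33; 175 (Cosmos Hub) tolerate 58
print(max_byzantine(4), max_byzantine(100), max_byzantine(175))  # 1 33 58
```

This also makes the validator-limit row tangible: adding validators raises f only one-third as fast, while the all-to-all voting traffic grows quadratically.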
Key Takeaways to Emphasize
The Blockchain Trilemma is Real
No mechanism perfectly achieves all three: decentralization, security, and scalability. Each mechanism makes deliberate tradeoffs:
- PoW: Maximizes security and decentralization, sacrifices scalability and efficiency
- PoS: Balances all three, but with economic rather than thermodynamic security
- DPoS: Maximizes scalability and governance, accepts reduced validator decentralization
- BFT: Maximizes consistency and finality, requires smaller validator sets
Context Determines "Best"
There is no universally superior consensus mechanism. The right choice depends on:
- Trust assumptions: Permissionless vs. permissioned participants?
- Performance requirements: High TPS needed or can tolerate low throughput?
- Finality needs: Is probabilistic finality acceptable or must it be instant?
- Regulatory/ESG constraints: Energy consumption a concern?
- Governance model: Who makes decisions about protocol upgrades?
Evolution is Ongoing
Consensus mechanisms continue to evolve:
- Ethereum's successful merge (2022) validated PoS at scale
- Layer 2 solutions (Lightning, rollups) address PoW/PoS scalability limits
- Hybrid mechanisms combine approaches (e.g., Tendermint = BFT + PoS)
- New variants emerge (e.g., Avalanche's repeated sub-sampled voting, Algorand's Pure PoS)
Grading Guidance
Holistic Assessment
- Don't penalize teams for their assigned mechanism—grade their defense of it
- Reward honest acknowledgment of tradeoffs over blind advocacy
- Look for understanding of why tradeoffs exist, not just what they are
- Equal participation matters—observe who contributes during prep and presentation
Use Peer Voting as Input
Student votes can inform (but not replace) instructor grading:
- If team wins peer votes but made technical errors → reduce technical accuracy score
- If team loses peer votes but was technically excellent → peer engagement may have been weak
- Peer votes reveal which arguments resonated with audience—valuable signal
© Joerg Osterrieder 2025-2026. All rights reserved.