Paper: Systemic Risk Channels in Digital Finance Date: 2026-03-28 Scope: Full paper (18 .tex sections, 160 bib entries, ~485 citation commands)
1. Executive Summary
65
Problematic citations found
43
WRONG (content mismatch)
22
DECORATIVE (adds nothing)
58
Fixed
An ultra-hostile citation audit examined every \citep{} and \citet{} command in the paper against its surrounding sentence and the actual content of the cited work. Two passes were performed:
Pass
Scope
WRONG
DECORATIVE
Fixed
T1: Methodology
05_methodology.tex (38 citations)
5
8
20
T1b: All other sections
17 .tex files (~447 citations)
38
14
38
Total
43
22
58
Result: Paper compiles cleanly after all fixes. Zero undefined citation warnings. PDF output: 97 pages.
The root cause was a systematic substitution pattern during LLM-assisted drafting: five cite keys for topically unrelated papers were repeatedly swapped in where canonical systemic risk references belonged. The % EVIDENCE: comments above paragraphs correctly identified the intended sources in most cases, but the actual \citep{} / \citet{} commands used the wrong keys.
2. The Systematic Substitution Pattern
Five cite keys were systematically substituted for canonical systemic risk references. This pattern accounted for the majority of WRONG citations:
Wrong Key
What It Actually Is
What It Replaced (Intended Reference)
Times
WRONGBordo2017Central
Bordo & Levin (2017) -- CBDC monetary policy paper
Adrian & Brunnermeier (2016) -- CoVaR systemic risk measure
Acemoglu et al. (2015) -- Systemic risk and stability in networks
5
How this happened: During LLM-assisted drafting, the model apparently could not resolve the correct bib keys for well-known systemic risk papers (which may not have been in the .bib file at the time). Instead it substituted keys for papers that were in the .bib file, selecting entries with superficially related topics (blockchain, DeFi, CBDC) even though the actual paper content had nothing to do with the claimed contribution.
3. All Fixes by Section
Below are all 58 fixes organized by .tex file. Each fix shows the before/after and justification. Expand each section to see details.
05_methodology.tex 5 WRONG8 DECORATIVE = 20 fixes
WRONG citations fixed
Fix 9 (line 15): Blockchain security paper cited for database coverage claim
\citep{Shoetan2024Blockchain}
→
Removed
Justification: Shoetan2024 is about blockchain security beyond crypto uses; says nothing about academic database coverage methodology.
Fix 12 (line 31): AI paper cited for exclusion criteria methodology
\citep{Olanrewaju2025Artificial}
→
Removed
Justification: Olanrewaju2025 is about AI in finance; irrelevant to the paper's quality-filtering methodology.
Fix 13 (line 31): Layer-2 scalability paper cited for quality-filtering
\citep{Jourenko2025Sok}
→
Removed
Justification: Jourenko2025 is a SoK on L2 scalability protocols; has nothing to do with systematic review quality filtering.
Fix 16 (line 67): CBDC blockchain paper cited for systemic risk scoring
\citep{Sethaput2023Blockchain}
→
Removed
Justification: Sethaput2023 is about CBDC blockchain applications; irrelevant to composite systemic risk scoring framework.
Fix 18 (line 74): Bond tokenization paper cited for channel classification
\citep{Cisar2025Designing}
→
Removed
Justification: Cisar2025 is about bond market tokenization; irrelevant to hybrid traditional/digital channel classification.
DECORATIVE citations fixed
Fix 7 (line 8): IMF elements paper as generic survey framing
\citep{imf2023elements}
→
Removed
Justification: IMF paper decoratively cited to lend authority to search methodology description.
Fix 8 (line 15): IMF elements paper for database coverage
\citep{imf2023elements}
→
Removed
Fix 11 (line 26): DeFi leverage paper decoratively cited
\citep{aramonte2022defi}
→
Removed
Fix 14 (line 36): Smart contract attacks paper as decoration
\citep{qin2023attacks}
→
Removed
Fix 15 (line 48): FSB/IMF reports as false authority for scoring
\citep{fsb2022assessment, imf2021gfsr_crypto}
→
Removed
Justification: The scoring framework is the author's own creation; citing FSB/IMF falsely implies their endorsement.
Fix 17 (line 67): Interbank contagion simulation paper as decoration
\citep{upper2011simulation}
→
Removed
Fix 19 (line 79): IMF elements again as decoration
\citep{imf2023elements}
→
Removed
Citations added (correct references)
Fix 1 (line 9): Made IMF citation an explicit example
\citep{imf2023elements}
→
[e.g., \citet{imf2023elements}]
Fix 10 (line 21): Added foundational references to traditional finance lit sentence
Fix 6 (line 104): Rewrote taxonomy-first claim Removed false characterization of Brunnermeier 2009 as a taxonomy paper.
Fix 5 (line 88): Added brunnermeier2009market to hybrid classification
EVIDENCE comments (lines 45-47): Corrected to clarify that FSB/IMF/BIS documents informed channel identification, not the scoring framework itself.
01_introduction.tex 8 WRONG
Fix 1: Bordo2017Central (CBDC monetary policy) cited where Adrian & Brunnermeier CoVaR was intended
Bordo2017Central
→
adrian2016covar
Fix 2: Sethaput2023Blockchain (CBDC apps) cited where Acharya et al. MES/SRISK was intended
Sethaput2023Blockchain
→
acharya2017measuring
Fix 3: Wen2024Foray (DeFi attack tool) cited where Brunnermeier & Pedersen 2009 was intended
Wen2024Foray
→
brunnermeier2009market
Fix 4: Jourenko2025Sok (L2 scalability) cited where Brunnermeier 2009 Deciphering was intended
Jourenko2025Sok
→
Brunnermeier2009Deciphering
Fix 5: Shoetan2024Blockchain (blockchain security) cited where Acemoglu et al. 2015 was intended
Shoetan2024Blockchain
→
acemoglu2015systemic
Fixes 6-8: Three additional substitution-pattern replacements in the same section (same five keys, same corrections).
02_background_tradfi.tex 7 WRONG
Fixes 1-7: Seven instances of the five substitution-pattern keys replaced with correct canonical references. All followed the same pattern: Bordo2017Central→adrian2016covar, Sethaput2023Blockchain→acharya2017measuring, Wen2024Foray→brunnermeier2009market, Jourenko2025Sok→Brunnermeier2009Deciphering, Shoetan2024Blockchain→acemoglu2015systemic.
03_background_digital.tex 4 WRONG
Fixes 1-4: Four substitution-pattern replacements. Included one instance of Bordo2017Central where CoVaR was discussed in context of traditional-to-digital risk transmission.
04_background_gaps.tex 8 WRONG + 1 empty citep
Fixes 1-8: Eight substitution-pattern replacements across the research gaps section.
Additional fix: Found and repaired an empty \citep{} command (zero keys).
06_ch_network_contagion.tex 4 WRONG
Fix 1: Shoetan2024Blockchain cited in sentence about Acemoglu et al. phase-transition result
Shoetan2024Blockchain
→
acemoglu2015systemic
Fixes 2-4: Three more substitution-pattern corrections (Jourenko2025Sok, Sethaput2023Blockchain, Bordo2017Central replaced with their intended canonical references).
07_ch_liquidity_spirals.tex 2 WRONG
Fixes 1-2: Two instances of Wen2024Foray replaced with brunnermeier2009market in sentences discussing market/funding liquidity spirals.
08_ch_stablecoin_runs.tex 1 WRONG
Fix 1: Sethaput2023Blockchain replaced with acharya2017measuring in sentence about systemic risk measurement applied to stablecoins.
11_ch_counterparty_concentration.tex 1 WRONG
Fix 1: Bordo2017Central replaced with adrian2016covar in sentence about measuring counterparty systemic importance.
12_ch_information_asymmetry.tex 4 WRONG
Fixes 1-4: Four substitution-pattern replacements. Jourenko2025Sok was cited where Brunnermeier 2009 Deciphering was intended (discussing information asymmetry in the 2007-08 crisis).
13_ch_gateway_risk.tex 3 WRONG
Fixes 1-3: Three substitution-pattern replacements. Shoetan2024Blockchain and Sethaput2023Blockchain replaced with acemoglu2015systemic and acharya2017measuring where traditional finance systemic risk concepts were discussed in the gateway risk context.
14_cross_channel.tex 5 WRONG
Fixes 1-5: Five substitution-pattern replacements in the cross-channel interactions section. All five wrong keys appeared here, each in sentences discussing their respective canonical concepts (CoVaR, SRISK, liquidity spirals, information channels, network effects).
15_evolutionary.tex 3 WRONG
Fixes 1-3: Three substitution-pattern replacements in the evolutionary dynamics section.
17_policy.tex 2 WRONG
Fixes 1-2: Two substitution-pattern replacements in the policy implications section.
18_conclusion.tex 1 WRONG
Fix 1: One substitution-pattern replacement in the conclusion.
references.bib 1 entry added
Added:adrian2016covar -- Adrian & Brunnermeier (2016), "CoVaR", American Economic Review.
The other four canonical references already existed in the bib file: acharya2017measuring, acemoglu2015systemic, Brunnermeier2009Deciphering, brunnermeier2009market.
4. Prevention Framework
A heuristic citation screening script was created to detect future citation problems before they reach the manuscript:
Attributing contributions to the wrong author -- the pattern that produced the 43 WRONG citations
INSTITUTION_METHOD
Institution report (FSB, IMF, BIS, IOSCO, ECB) cited in a sentence describing the author's own methodology
False authority claims where regulatory documents are cited to legitimize the author's framework
TOPIC_MISMATCH
Title keywords and sentence keywords share zero words (beyond stopwords)
Citations with no topical overlap -- potential substitution errors
SUSPICIOUS_YEAR
2024-2025 paper cited in a sentence referencing pre-2020 findings, theories, or events
Anachronistic citations where recent papers are cited for established results
DUPLICATE_CITE
Same cite key appears twice in one \citep{} command
Copy-paste errors in citation lists
Example Output
Loaded 160 bib entries from output/paper/references.bib
Found 485 citation commands across 18 .tex files
============================================================
SUMMARY: 53 / 485 citations flagged
============================================================
Flags by rule:
SUSPICIOUS_YEAR 28
TOPIC_MISMATCH 17
AUTHORITY_CLAIM 7
DUPLICATE_CITE 3
Top-10 most-flagged cite keys:
Adamyk2025Risk 6 Risk Management in DeFi: Analyses...
fsb2023highlevel 5 High-Level Recommendations...
qin2021empirical 5 Quantifying Blockchain Extractable...
imf2022gfsr_defi 4 Cryptoasset Risks: Is a Regulatory...
Cisar2025Designing 4 Designing the future of bond markets...
Chaliasos2024Smart 4 Smart Contract and DeFi Security...
gromb2010liquidity 3 Limits of Arbitrage
Werner2022Sok 3 SoK: Decentralized Finance (DeFi)
bis2024tokenisation 3 Tokenisation in the Context of Money...
Alamsyah2024Review 3 A Review on Decentralized Finance...
Recommended workflow: Run verify_citations.py after any LLM-assisted drafting session and before submission. Review all AUTHORITY_CLAIM and TOPIC_MISMATCH flags manually. SUSPICIOUS_YEAR flags require judgment (some are legitimate).
5. Remaining Items
14 DECORATIVE citations (not yet fixed)
The audit identified 14 citations in the non-methodology sections that are decorative -- they add no specific evidentiary value but are not factually wrong. These were classified as lower priority and left in place:
Decorative citations pad the reference list but do not misattribute claims. They are a style issue rather than an integrity issue. Fixing them requires case-by-case judgment about whether the citation adds anything to the sentence.
Remaining EVIDENCE comment issues
Some % EVIDENCE: comments above paragraphs may still reference the wrong keys (the keys used in the old, wrong citations). These comments are invisible in the compiled PDF but could cause confusion during future editing. A systematic cleanup pass would bring them into alignment with the corrected cite keys.
14 problematic cite keys still in text
The five substitution-pattern keys still appear 14 times in the tex files, but these are now in correct usage contexts -- sentences where the actual topic of the cited paper matches the discussion. For example, Wen2024Foray legitimately appears in the composability risk chapter where DeFi attack synthesis is the topic. The verification script flags 0 of these as TOPIC_MISMATCH.
Short paper sections
The short paper (output/paper_short/) methodology section received 6 fixes. The remaining short paper sections were not audited in this pass.
Script limitations
The verify_citations.py heuristic screener has inherent limitations:
Cannot verify whether a cited paper's content actually supports the claim -- only whether title keywords overlap
SUSPICIOUS_YEAR flags produce false positives when recent survey papers legitimately cite earlier findings
AUTHORITY_CLAIM detection relies on verb patterns and may miss subtler attribution errors
Uses only the .bib title field (no abstract available) for semantic comparison
Citation Audit Report -- Generated 2026-03-28 -- Systemic Risk Channels in Digital Finance