Computational methodology documentation and verification reports
External verification of ALL 2,433 papers against the live OpenAlex API and ALL 25 crisis events against public sources. Breaks the circular verification chain. Findings: 100% paper existence confirmed, zero citation fabrication, all crisis losses sourced. However, irrelevant papers (biomedical studies) were found contaminating the top-citation scores.
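A minimal sketch of what a live existence check against OpenAlex could look like. The function names (`work_url`, `openalex_exists`) are hypothetical, not the report's actual script; only the OpenAlex `works` endpoint and its DOI-resolution form are real API features.

```python
import urllib.error
import urllib.request

OPENALEX_WORKS = "https://api.openalex.org/works/"

def work_url(identifier: str) -> str:
    """Build the OpenAlex works URL for either a native ID (W...) or a DOI.
    (Hypothetical helper for illustration.)"""
    if identifier.startswith("W"):
        return OPENALEX_WORKS + identifier
    # OpenAlex resolves DOIs when given as a full https://doi.org/ URL
    return OPENALEX_WORKS + "https://doi.org/" + identifier

def openalex_exists(identifier: str) -> bool:
    """Return True if the work resolves on the live API (HTTP 200)."""
    try:
        with urllib.request.urlopen(work_url(identifier), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 = paper not found, i.e. possible fabrication
```

Running this over all 2,433 papers (with polite rate limiting) is what breaks the circularity: the check goes to an external source rather than back to the pipeline's own data.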
Where do those numbers come from? Shows exactly how each paper contributes to the scoring. Per-channel deep dives with the actual top-10 most-cited papers (with titles), crisis events with log₁₀ weights, red-flag badges for irrelevant papers, and the full weight sensitivity table.
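As a rough illustration of how a log₁₀ crisis weight behaves, here is one plausible form; the report's exact formula and units are not shown here, so this is an assumption, not the actual implementation.

```python
import math

def crisis_weight(loss_billion_usd: float) -> float:
    """Hypothetical log10 loss weight: +1 shift keeps zero-loss events at weight 0.

    A $9B event and a $999B event differ by 100x in loss but only ~3x in
    weight, which is the point of a log scale: big crises dominate, but not
    linearly.
    """
    return math.log10(1 + loss_billion_usd)
```

Under this form a $9B loss maps to weight 1.0 and a $999B loss to weight 3.0, so the sensitivity table in the report can show how rankings move when the base or shift changes.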
78 citation fixes across both papers. Found a systematic LLM-drafting pattern: 5 cite keys swapped for their canonical references (e.g., a CBDC paper wrongly credited with inventing CoVaR). Full before/after inventory with the prevention framework.
Complete narrative explanation of the paper collection and composite scoring methodology. Includes worked examples with real numbers, independent verification of all 42 sub-scores and 14 composites, sensitivity analysis, and known limitations. Every number traced to source data — zero hallucinations.
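The composite-from-sub-scores step can be sketched as a weighted average. This is a generic form under assumed names (`composite`); the report's actual 42 sub-scores, 14 composites, and weights are documented there, not here.

```python
def composite(sub_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Hypothetical weighted-average composite of named sub-scores.

    Each sub-score contributes in proportion to its weight; dividing by the
    total weight keeps the composite on the same scale as the inputs, so a
    sensitivity analysis can perturb weights without rescaling.
    """
    total = sum(weights.values())
    return sum(sub_scores[k] * weights[k] for k in weights) / total
```

Independent verification then means recomputing every composite from the raw sub-scores with this kind of formula and diffing against the published numbers.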
Technical reference for all 17 functions across 3 Python scripts. Each function documented with INPUT (parameters, types, sources), CALCULATION (algorithms, formulas, edge cases), and OUTPUT (return types, downstream consumers). Includes data flow diagram and function call graph.
Line-by-line hostile review of all Python code. 31 findings across 17 functions: 5 BUG (red), 7 LOGIC (orange), 8 STYLE (yellow), 6 INFO (blue), 5 PASS (green). Per-function verdicts with exact line references and code excerpts.