Quick Start

```python
import brisma as br

# Load every sheet of the bundled workbook
data = br.load_data("data/brisma_data.xlsx")

# weights, cov, and betas are your portfolio weights, covariance
# matrix, and factor loadings (see the module reference below)
result = br.inverse_optimize_mv(weights, cov, lambda_=2.5)
premia = br.extract_factor_premia_ols(result.mu, betas, rf=0.02)
```
| Module | Key Functions |
|---|---|
| data_loading | load_data, create_mapping_tables |
| inverse_optimization | inverse_optimize_mv, inverse_optimize_mrar, inverse_optimize_omega |
| factor_premia | extract_factor_premia_ols, price_new_asset |
| optimizer | optimize_min_variance, optimize_max_sharpe, efficient_frontier |
data_loading Core
load_data
(file_path="data/brisma_data.xlsx") -> Dict[str, DataFrame]
Load all sheets of the Excel workbook as DataFrames.
Returns: tbl_client_portfolio, tbl_risk_model, tbl_idx_data, tbl_fx_data
create_mapping_tables
(portfolio, risk_model) -> Dict
ID-to-currency and ID-to-name mappings.
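The mapping tables can be sketched as plain dicts built from the portfolio sheet. The column names used here (`id`, `name`, `currency`) are assumptions for illustration; the real sheets may differ:

```python
import pandas as pd

# Hypothetical portfolio sheet -- real column names may differ
portfolio = pd.DataFrame({
    "id": ["A1", "A2"],
    "name": ["Asset One", "Asset Two"],
    "currency": ["EUR", "USD"],
})

# One dict per mapping, keyed by asset ID
id_to_currency = dict(zip(portfolio["id"], portfolio["currency"]))
id_to_name = dict(zip(portfolio["id"], portfolio["name"]))
```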
inverse_optimization New
mu* = lambda * Q * w
inverse_optimize_mv
(weights, cov, lambda_=2.5, rf=0.0) -> Result
Mean-variance implied returns (closed-form).
- weights: (n,) array of portfolio weights, sums to 1
- cov: (n, n) positive semi-definite covariance matrix
- lambda_: risk-aversion coefficient
Returns: mu, utility, risk
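The closed form mu* = lambda · Q · w can be verified in a few lines; the numbers below are illustrative, and the utility shown is the standard quadratic form w'mu − (lambda/2) w'Qw:

```python
import numpy as np

lambda_ = 2.5
w = np.array([0.6, 0.4])              # portfolio weights, sum to 1
Q = np.array([[0.04, 0.01],
              [0.01, 0.09]])          # PSD covariance matrix

mu = lambda_ * Q @ w                  # implied expected returns
risk = float(np.sqrt(w @ Q @ w))      # portfolio volatility
utility = float(w @ mu - 0.5 * lambda_ * w @ Q @ w)
```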
inverse_optimize_mrar
(weights, returns, gamma=2.0, rf=0.0) -> Result
MRAR (CRRA) implied returns.
inverse_optimize_omega
(weights, returns, threshold=0.0) -> Result
Omega ratio implied returns.
factor_premia New
pi = (B'B)^-1 B'(mu*-rf)
extract_factor_premia_ols
(mu, betas, rf=0.0) -> Result
OLS cross-sectional regression (APT).
Returns: factor_premia, r_squared, t_stats
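The cross-sectional regression pi = (B'B)⁻¹ B'(mu* − rf) is ordinary least squares of implied excess returns on the factor loadings; a sketch with illustrative numbers:

```python
import numpy as np

rf = 0.02
mu = np.array([0.07, 0.105, 0.06])    # implied returns, n assets
B = np.array([[1.0, 0.2],
              [1.2, 0.8],
              [0.8, 0.1]])            # (n, k) factor loadings

excess = mu - rf
# OLS cross-section: pi = (B'B)^-1 B'(mu - rf)
pi, *_ = np.linalg.lstsq(B, excess, rcond=None)

fitted = B @ pi
ss_res = np.sum((excess - fitted) ** 2)
ss_tot = np.sum((excess - excess.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```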
fama_macbeth_regression
(returns, factors, window=60) -> Result
Two-pass with Shanken correction.
price_new_asset
(premia, new_betas, rf=0.0) -> float
mu_new = rf + B'pi
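Pricing a new asset from estimated premia is a single dot product; illustrative values:

```python
import numpy as np

rf = 0.02
pi = np.array([0.03, 0.05])           # estimated factor premia
new_betas = np.array([1.1, 0.4])      # loadings of the new asset

# APT pricing: mu_new = rf + B' pi
mu_new = rf + new_betas @ pi
```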
optimizer New
optimize_min_variance
(cov, min_wt=0.0, max_wt=1.0) -> Result
optimize_max_sharpe
(mu, cov, rf=0.0) -> Result
optimize_target_return
(mu, cov, target) -> Result
efficient_frontier
(mu, cov, n_points=50) -> List[Result]
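In the unconstrained case, minimum variance has the closed form w = Q⁻¹1 / (1'Q⁻¹1); the box constraints (min_wt, max_wt) need a numerical solver instead. A sketch of the unconstrained solution:

```python
import numpy as np

Q = np.array([[0.04, 0.01],
              [0.01, 0.09]])

ones = np.ones(len(Q))
w_mv = np.linalg.solve(Q, ones)       # Q^-1 1
w_mv /= w_mv.sum()                    # normalize: weights sum to 1
```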
reporting New
create_dashboard
(data, path=None) -> Figure
create_risk_attribution_chart
(weights, cov, names) -> Figure
create_efficient_frontier_chart
(frontier, current=None) -> Figure
generate_report_html
(data, path, title) -> str
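A risk attribution chart presumably plots each asset's share of total portfolio variance; the standard decomposition behind such a chart is:

```python
import numpy as np

w = np.array([0.6, 0.4])
Q = np.array([[0.04, 0.01],
              [0.01, 0.09]])

marginal = Q @ w                      # marginal risk per asset
contrib = w * marginal / (w @ Q @ w)  # fractional variance contributions
```

By construction the contributions sum to one, which is what makes them chartable as a pie or stacked bar.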
data_quality New
validate_data_quality
(portfolio, idx_data, ...) -> Report
investigate_portfolio_weights
(portfolio) -> (sum, valid, text)
detect_outliers_zscore
(values, threshold=3.0) -> array
detect_outliers_iqr
(values, multiplier=1.5) -> array
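Both outlier rules are standard and easy to sketch with the defaults above; the sample data is illustrative:

```python
import numpy as np

values = np.array([1.0, 1.1, 0.9, 1.05, 8.0])

# z-score rule: flag points more than `threshold` SDs from the mean
z = (values - values.mean()) / values.std()
out_z = np.abs(z) > 3.0

# IQR rule: flag points outside [Q1 - m*IQR, Q3 + m*IQR]
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
out_iqr = (values < q1 - 1.5 * iqr) | (values > q3 + 1.5 * iqr)
```

On this 5-point sample |z| cannot exceed sqrt(n−1) = 2, so the z-score rule at threshold 3.0 misses the obvious outlier while the IQR rule flags it; the z-score test is only meaningful for larger samples.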
bantleon.lambda_calibration New
extract_lambda_from_returns
(weights, mu, cov) -> float
lambda* = w'mu / w'Qw
black_litterman_lambda
(market_ret, rf, market_var) -> float
(E[rm]-rf) / sigma^2
confidence_weighted_lambda
(observed, prior, conf) -> float
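The two calibration formulas above are one-liners; a linear blend is one plausible form for the confidence weighting (the real blending rule may differ):

```python
import numpy as np

w = np.array([0.6, 0.4])
mu = np.array([0.07, 0.105])
Q = np.array([[0.04, 0.01],
              [0.01, 0.09]])

# Implied risk aversion of an observed portfolio: lambda* = w'mu / w'Qw
lambda_implied = float(w @ mu) / float(w @ Q @ w)

# Black-Litterman style calibration: (E[rm] - rf) / sigma_m^2
market_ret, rf, market_var = 0.08, 0.02, 0.025
lambda_bl = (market_ret - rf) / market_var

# Assumed linear confidence blend of the two estimates, conf in [0, 1]
conf = 0.7
lambda_blend = conf * lambda_implied + (1 - conf) * lambda_bl
```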
bantleon.method1
lambda_M1 = ln((1+y)/(1+rf))/beta_ref
calculate_lambda_m1
(y_10y, rf, beta_ref=7.0) -> float
compute_expected_returns_m1
(betas, lambda_m1) -> array
mu_i = beta_i * lambda
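Method 1 follows directly from the two formulas above; the yield and rate values are illustrative:

```python
import math

y_10y, rf, beta_ref = 0.035, 0.02, 7.0

# lambda_M1 = ln((1 + y) / (1 + rf)) / beta_ref
lambda_m1 = math.log((1 + y_10y) / (1 + rf)) / beta_ref

# mu_i = beta_i * lambda_M1
betas = [1.0, 1.5, 0.5]
mu = [b * lambda_m1 for b in betas]
```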
bantleon.method2
calculate_lambda_m2
(factor_rets, weights, annualize=True) -> float
create_time_weights
(n, decay="exp", halflife=24) -> array
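Assuming the `halflife` parameter has its usual meaning (the weight halves every `halflife` periods going back in time), the exponential decay variant can be sketched as:

```python
import numpy as np

def time_weights(n, halflife=24):
    # Age 0 = most recent observation; weight halves every
    # `halflife` periods of age. Normalized to sum to 1.
    ages = np.arange(n)[::-1]
    w = 0.5 ** (ages / halflife)
    return w / w.sum()

w = time_weights(48, halflife=24)
```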
bantleon.hybrid
mu = R^2*mu_M1 + (1-R^2)*mu_M2
compute_hybrid_returns
(mu_m1, mu_m2, r_squared) -> array
recommend_method
(r_squared, high=0.6, low=0.3) -> Dict
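The blend and the threshold rule are both simple; the rule's direction (high R² favors Method 1) is an assumption consistent with R² weighting mu_M1:

```python
import numpy as np

mu_m1 = np.array([0.002, 0.003])
mu_m2 = np.array([0.004, 0.001])
r_squared = 0.6

# mu = R^2 * mu_M1 + (1 - R^2) * mu_M2
mu = r_squared * mu_m1 + (1 - r_squared) * mu_m2

# Assumed regime rule mirroring recommend_method's thresholds
if r_squared >= 0.6:
    method = "method1"
elif r_squared <= 0.3:
    method = "method2"
else:
    method = "hybrid"
```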
covariance Core
estimate_iterative_covariance
(ret22, idx, id_, id_rm, period) -> Dict
GARCH-weighted iterative covariance.
Returns: Q_emp, Q_rm_comp, ei, weights
estimate_factor_model
(Q_rm, Q_emp, ei, id_port, threshold=0.95) -> Dict
Returns: Q_shrink, betas, id_comp
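The shrinkage step can be sketched as a linear blend of the empirical and risk-model covariances; the blend weight `alpha` here is an assumed knob, whereas the real routine derives it from the 0.95 threshold:

```python
import numpy as np

Q_emp = np.array([[0.040, 0.012],
                  [0.012, 0.090]])    # empirical covariance
Q_rm = np.array([[0.036, 0.008],
                 [0.008, 0.081]])     # risk-model covariance

# Assumed linear shrinkage toward the risk-model matrix
alpha = 0.5
Q_shrink = alpha * Q_rm + (1 - alpha) * Q_emp
```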
garch_utils Core
fit_garch_models
(idx, beta_resid, id_port, id_comp, horizon=260) -> Dict
Returns: sd_comp, sd_resid
calculate_garch_covariance
(beta_fit, sd_comp, sd_resid, id_port) -> array
Returns: Q_garch (n,n)
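One common way to combine GARCH volatility forecasts with a factor structure, and a plausible reading of the inputs above, is to rescale the component covariance to the forecast SDs and add idiosyncratic variance on the diagonal; all inputs here are hypothetical:

```python
import numpy as np

beta = np.array([[1.0, 0.2],
                 [1.2, 0.8]])         # (n assets, k components)
Q_comp = np.array([[0.04, 0.01],
                   [0.01, 0.09]])     # component covariance
sd_comp = np.array([0.25, 0.35])      # GARCH forecast SDs, components
sd_resid = np.array([0.05, 0.08])     # GARCH forecast SDs, residuals

# Convert Q_comp to a correlation matrix, re-apply forecast SDs
d = np.sqrt(np.diag(Q_comp))
corr = Q_comp / np.outer(d, d)
Q_garch_comp = np.outer(sd_comp, sd_comp) * corr

# Systematic part plus idiosyncratic GARCH variances on the diagonal
Q_garch = beta @ Q_garch_comp @ beta.T + np.diag(sd_resid ** 2)
```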