Model faster, automate reports, and stop being limited by Excel.
I'm a finance analyst. I build models in Excel all day. My colleagues keep telling me to learn Python but I don't see why — Excel works fine.
How many rows is your biggest dataset?
We have one client report with about 800,000 rows. Excel crashes if I try to pivot it.
Python handles 800,000 rows without blinking. Ten million rows? Still fine. No row limit, no crashing, no "not responding." And beyond scale, Python models are version-controlled and testable: when a formula reference shifts silently in Excel, a Python test catches the changed output immediately.
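Here's a quick sketch of that pivot in pandas. The data is synthetic, generated as a stand-in for the real client export, and the column names (`region`, `product`, `revenue`) are made up for illustration:

```python
import numpy as np
import pandas as pd

# Simulate an 800,000-row dataset as a stand-in for the client export
rng = np.random.default_rng(42)
n = 800_000
df = pd.DataFrame({
    "region": rng.choice(["NA", "EMEA", "APAC"], size=n),
    "product": rng.choice(["Equities", "FX", "Rates"], size=n),
    "revenue": rng.normal(10_000, 2_500, size=n),
})

# The pivot that makes Excel hang: total revenue by region and product
pivot = df.pivot_table(index="region", columns="product",
                       values="revenue", aggfunc="sum")
print(pivot.round(0))
```

The same `pivot_table` call works unchanged on ten million rows; only the runtime grows.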
We actually had an error in a model last quarter that nobody caught for weeks.
That's the deeper problem. Spreadsheet models break silently. Python models can include automated tests that verify outputs against known values, something plain Excel makes very hard to do reliably. Here's what a DCF function looks like with a test attached:
```python
def dcf_valuation(free_cash_flows, discount_rate, terminal_growth_rate):
    pv_cash_flows = sum(
        cf / (1 + discount_rate) ** year
        for year, cf in enumerate(free_cash_flows, start=1)
    )
    terminal_value = free_cash_flows[-1] * (1 + terminal_growth_rate) / (discount_rate - terminal_growth_rate)
    pv_terminal = terminal_value / (1 + discount_rate) ** len(free_cash_flows)
    return pv_cash_flows + pv_terminal
```

Version-controlled. Testable. Auditable. No cell references to misplace.
Automated morning reports. My VP would love that.
JPMorgan now puts new analysts through mandatory Python training, and firms like Goldman and Citadel expect it in quant-facing roles. The analysts who get promoted to associate are the ones who can model faster and automate the grunt work. Start the Python Fundamentals track: by week 3 you'll be building your first automated finance task.
I'm starting tonight. My VP is going to wonder how I got so fast.
Finance is the last major professional discipline still running critical work on spreadsheets. That's changing fast. The analysts making VP faster aren't the ones with better Excel skills — they're the ones who automated the Excel work entirely.
Excel models have a structural flaw: they look correct even when they aren't. A misplaced formula reference, an accidentally deleted row, a circular reference that resolves to the wrong value — these produce wrong numbers that look like right numbers. At scale, that's dangerous.
Python models don't have this problem because they're code: version-controlled in git, testable with assertions, reproducible on any machine. Every change is tracked. Every output can be verified.
```python
def test_dcf_model():
    # Discount rate equal to growth rate should raise ZeroDivisionError
    try:
        dcf_valuation([100, 110], discount_rate=0.025, terminal_growth_rate=0.025)
        assert False, "Should have raised an error"
    except ZeroDivisionError:
        pass

    # Known input should produce expected output within 1% tolerance
    expected = 2085.29  # pinned from a verified run of the model
    result = dcf_valuation([120, 135, 150, 165, 180], 0.10, 0.025)
    assert abs(result - expected) / expected < 0.01, f"Model output drift: {result}"

test_dcf_model()
```

"If discount rate equals growth rate, raise an error." Excel cannot do that.
A 50-scenario sensitivity table that takes hours in Excel runs in under a second in Python:
```python
import pandas as pd
import numpy as np

growth_rates = np.arange(0.05, 0.25, 0.02)  # 10 growth scenarios: 5% to 23%
margin_rates = np.arange(0.10, 0.30, 0.04)  # 5 margin scenarios: 10% to 26%
base_revenue = 500  # $M

results = pd.DataFrame(
    index=[f"{m:.0%} margin" for m in margin_rates],
    columns=[f"{g:.0%} growth" for g in growth_rates],
)
for margin in margin_rates:
    for growth in growth_rates:
        projected_revenue = base_revenue * (1 + growth) ** 3
        ebitda = projected_revenue * margin
        results.loc[f"{margin:.0%} margin", f"{growth:.0%} growth"] = f"${ebitda:.0f}M"

print(results.to_string())
```

Change the base revenue or projection years and every cell updates instantly. There are no data tables to rebuild.
Pulling prices and calculating risk metrics used to mean manual Bloomberg exports. Python connects directly:
```python
import yfinance as yf
import numpy as np

tickers = ["AAPL", "MSFT", "JPM", "GS"]
prices = yf.download(tickers, period="2y", auto_adjust=True)["Close"]
returns = prices.pct_change().dropna()

risk_free = 0.045 / 252  # daily risk-free rate (4.5% annualized)

def rolling_sharpe(series, window=30):
    excess = series - risk_free
    return (excess.rolling(window).mean() / excess.rolling(window).std()) * np.sqrt(252)

sharpe = returns.apply(rolling_sharpe)
print(sharpe.tail())
```

Schedule that for 6 AM and your VP has a Sharpe ratio report before they sit down.
| Task | Before Python | After Python |
|---|---|---|
| 800K-row pivot | Excel crashes | Pandas processes in under 2 seconds |
| DCF model audit | Manual cell-by-cell check | Automated test suite catches drift instantly |
| Sensitivity table (50 scenarios) | Half-day data table setup | Nested loop, done in under a second |
| Morning risk report | Manual Bloomberg download + Excel | Scheduled script, auto-formatted, auto-emailed |
| Scenario analysis for CFO deck | Rebuild from scratch each time | Parameterized function, call with new inputs |
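The last row of that table, a parameterized scenario function, can be sketched like this (the function name and inputs are illustrative, not from any particular deck):

```python
def ebitda_scenario(base_revenue, growth, margin, years=3):
    """Project EBITDA from base revenue, annual growth rate, and margin.

    Re-running the whole scenario means one call with new inputs,
    not rebuilding a spreadsheet from scratch.
    """
    projected_revenue = base_revenue * (1 + growth) ** years
    return projected_revenue * margin

# Base case vs. downside case, each a single call
base = ebitda_scenario(base_revenue=500, growth=0.12, margin=0.22)
downside = ebitda_scenario(base_revenue=500, growth=0.05, margin=0.15)
print(f"Base: ${base:.0f}M  Downside: ${downside:.0f}M")
```

When the CFO asks "what if growth is only 5%?", the answer is a keyword argument, not an afternoon.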
JPMorgan's internal Python platform, Goldman's Marquee, Citadel's quant infrastructure: these are not fringe experiments. Python is now the operating language of modern finance. Associates who arrive knowing Python are productive from day one. Senior roles in risk, quant research, and FP&A now list Python as a requirement, not a nice-to-have.
The analysts promoted to senior roles are the ones who eliminated their own grunt work and spent that time on judgment. Python is how you do that.
Not syntax — just thinking. How would you solve these?
1. Your Excel DCF model produced a $2.1B enterprise value last quarter. A junior analyst changed one formula reference and the model now shows $1.4B — but nobody noticed for two weeks. What would have prevented this?
2. You need to run a sensitivity analysis across 50 combinations of discount rates (8% to 14%) and terminal growth rates (1.5% to 3.5%). What's the right approach?
3. A portfolio manager wants a report that shows the rolling 30-day Sharpe ratio for each holding, updated every morning before market open. What's the correct setup?
Build real Python step by step — runs right here in your browser.
Calculate Portfolio Weighted Return
You manage a portfolio of positions. Each position has a ticker, a weight (as a decimal, e.g. 0.25 for 25%), and a return (as a decimal, e.g. 0.08 for 8%). Write a function `portfolio_return(positions)` that calculates the weighted return of the portfolio (sum of weight * return for each position), rounded to 4 decimal places. Also validate that weights sum to approximately 1.0 (within 0.001 tolerance). If weights don't sum to ~1.0, raise a ValueError with message "Weights must sum to 1.0".
# portfolio_return([{"ticker":"AAPL","weight":0.4,"return":0.12},{"ticker":"JPM","weight":0.35,"return":0.08},{"ticker":"BND","weight":0.25,"return":0.03}])
0.0835

Start with the free Python track. No credit card required.
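For reference, here is one possible solution sketch for the exercise above (the in-browser checker may accept other implementations):

```python
def portfolio_return(positions):
    # Validate that weights sum to ~1.0 before computing anything
    total_weight = sum(p["weight"] for p in positions)
    if abs(total_weight - 1.0) > 0.001:
        raise ValueError("Weights must sum to 1.0")
    # Weighted return: sum of weight * return per position, rounded to 4 dp
    return round(sum(p["weight"] * p["return"] for p in positions), 4)

portfolio = [
    {"ticker": "AAPL", "weight": 0.4, "return": 0.12},
    {"ticker": "JPM", "weight": 0.35, "return": 0.08},
    {"ticker": "BND", "weight": 0.25, "return": 0.03},
]
print(portfolio_return(portfolio))  # 0.0835
```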