Stedman Blake Hood

Optimizing for personal growth, fun, and impact
$$V = \max_{t} \big\{\, f(\,\text{growth},\ \text{fun},\ \text{impact}\,) \,\big\}$$

In short, my business interests are:

  • Product strategy
  • Sales discovery
  • Testing acquisition & distribution channels
  • Debugging people-related issues
  • Scripting to automate the slow stuff

Projects

GovSimple

After an amazing weekend at the Bayes Impact hackathon, I began preliminary market research to validate the idea my team had developed: "TurboTax for unemployment benefits".

The goal was to minimize frictional unemployment for low-skill workers.

Low-skill workers have a hard time searching for jobs effectively, typically relying on first-degree networks and local job boards.

We sought to facilitate their job search using the employment-history data they provided in the unemployment application.

Employment Development Centers currently fill this role, but they have a hard time getting the folks who need them most in the door.

By providing an easy UI for the unemployment benefits application, we sought to reach people as soon as possible after they'd lost their jobs.

I used Google AdWords, bidding on search terms like "online unemployment application" to direct traffic to the site, and helped ~25 people apply for unemployment.

Though the top of the funnel was promising, I ultimately couldn't validate a sustainable business model around the service, and decided to table it.

Research & Publications


After graduating from McGill, I joined a research section in the Division of International Finance at the Federal Reserve in DC.

My goal was to determine whether I wanted to dedicate my life to academic research. Over the course of my time there, I decided that I did not. But I got to build some cool stuff!

I was involved in two main areas of research:

  1. Forecast evaluation and macroeconomic time series modeling:

    • Ericsson, N.R., S.B. Hood, F. Joutz, T. Sinclair, and H. Stekler. "Greenbook Forecasts and the Business Cycle." Working paper, 2013.
    • Ericsson, N.R., S.B. Hood, F. Joutz, T. Sinclair, and H. Stekler. "Time-varying Bias in the Fed's Greenbook Forecasts." In JSM Proceedings, Business and Economic Statistics Section, American Statistical Association, Alexandria, VA, 2015.
    • Ericsson, N.R., D.F. Hendry, and S.B. Hood. "Milton Friedman as an Empirical Modeler." In Milton Friedman: Contributions to Economics and Public Policy, ed. R.A. Cord and J.D. Hammond. 2016. 91-142.
    • Ericsson, N.R., D.F. Hendry, and S.B. Hood. "Milton Friedman and Data Adjustment." Vox, forthcoming 2017.

  2. Jump-robust volatility estimation with high-frequency time series:

    TL;DR

    Working under Dobrislav Dobrev, I built a high-frequency financial data pipeline to run through his analytics engine. This enabled us to statistically identify anomalous activity in public markets. Fed policymakers requested that we build this as a tool for macroprudential risk management.

    Our system found an ideal test case in the summer of 2012, when the European Central Bank's president declared that "the euro is irreversible". Using a panel of currency futures cross-rates, we statistically identified the ensuing market reaction as a purely euro-centric event.

    A bit more on the method of identification:

    Dobri and his PhD advisor Torben Andersen treated volatility as a Lévy process composed of a continuous component and a discontinuous component. Within this stochastic framework, they developed "jump-robust volatility estimators", which can distinguish the continuous component of a volatility series from its discrete counterpart. Their measures are said to be jump-robust in comparison with the traditional Realized Volatility (RV) measure, which simply sums the squared returns of a financial series.
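
    For intuition, here's a minimal sketch (mine, not the Fed's production code) contrasting plain RV with MedRV, the nearest-neighbor-truncation estimator from Andersen, Dobrev, and Schaumburg. The simulated data and all variable names are illustrative.

    ```python
    import numpy as np

    def realized_vol(returns):
        """Classic RV: the sum of squared intraday returns (jumps included)."""
        returns = np.asarray(returns)
        return np.sum(returns ** 2)

    def med_rv(returns):
        """MedRV (Andersen, Dobrev & Schaumburg): square the median of each
        triple of adjacent absolute returns, then sum. A single jump return
        gets 'outvoted' by its two continuous neighbors, so the estimator
        tracks the continuous component of volatility."""
        returns = np.asarray(returns)
        n = len(returns)
        abs_r = np.abs(returns)
        meds = np.array([np.median(abs_r[i - 1:i + 2]) for i in range(1, n - 1)])
        scale = np.pi / (6 - 4 * np.sqrt(3) + np.pi)  # bias-correction constant
        return scale * (n / (n - 2)) * np.sum(meds ** 2)

    # Simulated 5-minute returns over one day: a smooth diffusion plus one jump.
    rng = np.random.default_rng(0)
    r = rng.normal(0.0, 0.001, 288)
    r[144] += 0.02  # a large discrete move at midday

    print(realized_vol(r))  # inflated by the jump
    print(med_rv(r))        # stays near the continuous component
    ```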

    Distinguishing between the discrete and continuous components matters because a large shock can hit the market and cause massive discrete moves. These blow up the RV measure, dwarfing the continuous contribution to overall RV. That in turn makes it harder for analysts to compare the continuous parts of daily volatility estimates, which give them a sense of overall activity independent of the large discrete moves.

    With these jump-robust volatility measures, one can construct a t-test of sorts with respect to RV: $$t = \frac{RV_{\text{robust}} - RV}{\sqrt{\text{quadratic volatility}}}$$

    When the magnitude of this t-like statistic is sufficiently large, we can statistically identify the presence (and magnitude) of a discontinuous contribution to volatility, i.e., we can detect a jump. Below we discuss a use case in which this statistical inference yields immediate qualitative insight.
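
    A rough sketch of that statistic, building on the functions above. The denominator here is a crude realized-quarticity stand-in, not the exact normalization the actual tests use:

    ```python
    import numpy as np

    def jump_stat(rv, rv_robust, returns):
        """t-like statistic comparing total RV to its jump-robust counterpart.
        The scaling below is illustrative only: it uses realized quarticity,
        (n/3) * sum(r**4), as a stand-in for the proper asymptotic variance."""
        returns = np.asarray(returns)
        quarticity = (len(returns) / 3) * np.sum(returns ** 4)
        return (rv_robust - rv) / np.sqrt(quarticity)

    # Using realized_vol, med_rv, and r from the sketch above:
    # stat = jump_stat(realized_vol(r), med_rv(r), r)
    # A large |stat| flags a discontinuous (jump) contribution to volatility.
    ```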

    Boosting statistical power with a large panel of currency cross pairs:

    Ok so now that we can statistically detect a jump... detect this!

    Suppose you were to measure this t-test (the jump detector) across a panel of currency cross pairs. You could arrange the results in a square matrix where element [i, j] corresponds to the exchange rate from currency i to currency j (the diagonal is trivial, since each currency's rate against itself is 1).

    What would it mean if a single entire row of this matrix identified jumps, while all other rows detected nothing but continuous volatility?

    For example, consider EUR, USD, and JPY. What if we noticed a simultaneous jump in EUR-USD and EUR-JPY, but not in the third cross-pair, USD-JPY?

    That would tell us that there had been a "EUR-specific event". Pretty neat qualitative insight, using nothing but a few beefed-up t-tests!!

    Now suppose that instead of 3 currencies you had \(n\). Then you'd have \({n \choose 2}\) cross pairs, and for a particular currency, \(x\), there are \(n-1\) corresponding cross-pairs.

    You can now use all \(n-1\) t-tests of currency \(x\)'s cross-pairs to test for a jump in that currency. Since averaging \(n-1\) roughly independent test statistics shrinks the noise by a factor of \(\sqrt{n-1}\), your statistical power against the null hypothesis of "no \(x\) currency-specific event" scales up at the rate \(\sqrt{n}\), as the sketch below illustrates.
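
    A hypothetical sketch of that pooling step: if each cross-pair statistic is roughly standard normal and independent under the null, combining the \(n-1\) statistics for currency \(x\) yields a z-score whose signal grows like \(\sqrt{n}\). (With \(n = 20\) currencies, the panel has \({20 \choose 2} = 190\) cross pairs and 19 tests per currency.) All numbers below are made up.

    ```python
    import numpy as np

    def currency_event_zscore(pair_stats):
        """Pool the jump statistics of every cross pair involving one currency.
        Under the null of 'no currency-specific event', each statistic is
        roughly N(0, 1) and independent, so their mean has standard error
        1/sqrt(n-1) and the pooled z-score below grows like sqrt(n)."""
        stats = np.asarray(pair_stats, dtype=float)
        return stats.mean() * np.sqrt(len(stats))

    # Hypothetical EUR row of the panel: EUR-USD, EUR-JPY, EUR-GBP, EUR-CHF, EUR-CAD
    eur_row = [-3.1, -2.8, -3.4, -2.9, -3.2]
    print(currency_event_zscore(eur_row))  # far out in the tail => EUR-specific event
    ```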

    Super useful for the International Finance Division of the Fed in keeping an eye on international currency markets :)