The Modeling Problem

April 16, 2009 in Economics

Mike at Rortybomb has a post on the pros and cons of MBAs.  Among his cons is the following:

They just wrote a post about MBA students being owned by their models. The general idea underneath this is that these models are too complicated to understand, and taught to be something that is true rather than something that approximates a lot of contingent data.

There’s a joke that goes around quant message boards. An MBA class at a highly ranked school has their esteemed teacher solve the Black-Scholes PDE, and they get the final answer:

∂C/∂t + rS ∂C/∂S + ½σ²S² ∂²C/∂S² − rC = 0

at which point someone raises their hand and goes “can’t we cancel out the d’s?” Ha, right? “No, that’s the symbol of the derivative, not a variable. Moving on….”

Now think about that for a second. What should we expect the student to walk away with in this situation? There’s too much emphasis on solving models rather than seeing models used. Often the final answer on the test is “write out the CAPM equation” rather than “discuss how you would use the CAPM in your work day.” Not the normal “list the X assumptions this model makes”, but actually thinking through the deployment in the field. I think shifting the focus from models as a kind of deep economic truth to just another tool you use in your day-to-day, like Excel, would be very useful.
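
Mike's CAPM example is worth making concrete. The model itself is one line: E[R_i] = R_f + β_i(E[R_m] − R_f). Using it in your work day means estimating β from data, choosing a risk-free rate and a market premium, and knowing which of those choices your answer is sensitive to. A minimal sketch of that workflow (the returns and parameters below are invented for illustration, not taken from Mike's post):

```python
import numpy as np

# Invented monthly excess returns for one asset and the market;
# in practice these would come from your data provider.
asset_excess = np.array([0.021, -0.012, 0.034, 0.015, -0.022, 0.011, 0.005, -0.008])
market_excess = np.array([0.015, -0.006, 0.021, 0.010, -0.016, 0.008, 0.004, -0.005])

# Beta is the slope of the regression of asset on market excess
# returns -- equivalently cov(asset, market) / var(market).
beta = np.cov(asset_excess, market_excess)[0, 1] / np.var(market_excess, ddof=1)

risk_free = 0.03       # assumed annual risk-free rate
market_premium = 0.05  # assumed equity risk premium

# CAPM: E[R_i] = R_f + beta_i * (E[R_m] - R_f)
expected_return = risk_free + beta * market_premium
print(f"beta = {beta:.2f}, CAPM expected return = {expected_return:.1%}")
```

Every commented line is a judgment call -- the sample period, the premium estimate, whether a single-factor model even fits the asset -- and none of that judgment is tested by writing the equation on an exam.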

I agree, though if anything I think he is being too soft on naive model-users (for lack of a better term); to be fair, his purpose is to focus on what's lacking in the curriculum.  Modeling - or the use of models - can't simply be part of day-to-day work unless those models are well understood in the first place.  The article he links to puts it more aggressively:

Most MBA programs are taught in such a way that rather than owning the models, the models own students. Management research has become more thorough, rigorous, and technical, and it has developed tools based on complex models. Students in business school have to absorb many tools in a short time, so they aren't inclined to delve deep into the inputs or the workings of the underlying models. They focus mainly on the outputs. When professors try to go into the details, students make it clear that they prefer the takeaways--not the derivations or caveats. In any case, faculty members, proud of the models they've developed or sharpened, aren't eager to focus too much attention on situations in which their frameworks don't work.

As a result of this little dance, MBAs join organizations with a toolbox full of models for which they primarily understand only the outputs. Worse, they believe: "I know a bunch of powerful tools that work in most, if not all, circumstances. I can therefore apply them aggressively, confidently, and to their fullest."

This problem has been epitomized by the recent crisis in financial risk management.  Salesmen pitching CDOs don't know that the underlying model is a single-factor Gaussian copula, and even if they did, they wouldn't know what to make of its single correlation parameter.  Finally, even crossing that bridge, the model would still have failed, since it was itself inadequate.
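
For the curious, the model in question is almost embarrassingly simple to state: each name defaults when a latent Gaussian variable falls below a threshold, and all of the dependence between names rides on that single correlation parameter ρ. A minimal simulation sketch (the portfolio size, default probability, and ρ below are illustrative assumptions, not calibrated values):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=0)

n_names, n_sims = 100, 50_000
p_default = 0.02  # assumed one-period default probability for every name
rho = 0.30        # the single correlation parameter in question

# A name defaults when its latent variable falls below this threshold.
threshold = norm.ppf(p_default)

# One-factor Gaussian copula: X_i = sqrt(rho)*Z + sqrt(1 - rho)*eps_i,
# with Z a factor common to all names and eps_i idiosyncratic noise.
common = rng.standard_normal((n_sims, 1))
idiosyncratic = rng.standard_normal((n_sims, n_names))
latent = np.sqrt(rho) * common + np.sqrt(1 - rho) * idiosyncratic

defaults = (latent < threshold).sum(axis=1)  # defaults per scenario
print(f"mean defaults: {defaults.mean():.2f}")
print(f"99.9th percentile: {np.percentile(defaults, 99.9):.0f}")
```

Every assumption on display -- one common factor, Gaussian tails, one ρ shared by every pair of names -- is precisely where the model proved inadequate.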

I must stress that I am not speaking in favor of models per se.  But modeling is an unavoidable consequence of dealing with data: a necessary simplification (I use the term loosely) to extract signal from noise.  With the exception of staring at raw numbers, there is little we do in this age that does not involve data transformed in some way.  It may not all be as complex as an obscure mathematical formula, but it is just as important to understand the underlying assumptions before taking the output for granted (or accepting it on blind faith as correct).
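
Even the humblest transformation carries a model. A moving average, for instance, assumes the signal changes slowly relative to the window; choose the window badly and the smooth curve you get back reflects that assumption more than the data. A toy illustration (the signal and window sizes are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A slow sinusoidal "signal" buried in noise.
t = np.linspace(0.0, 1.0, 200)
signal = np.sin(2 * np.pi * t)
data = signal + rng.normal(scale=0.5, size=t.size)

def moving_average(x, window):
    # The implicit model: the signal is roughly constant over each window.
    return np.convolve(x, np.ones(window) / window, mode="same")

# A modest window recovers the signal; an oversized one smooths it away.
for window in (15, 120):
    err = np.abs(moving_average(data, window) - signal).mean()
    print(f"window={window:3d}  mean abs error={err:.3f}")
```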

The realization that models are fallible should not drive us away from models.  If possible, it should drive us toward better ones, or at least toward an understanding of what creates those failures.  As George Box famously stated, "Essentially, all models are wrong, but some are useful."

There is a firm that has two types of employees: salesmen and quants.  The salesmen take information from clients, hand it to the quants, receive a result, and pass the new information back to the client.  The salesmen have no idea what the client's data means, nor what the quants do to it.  In many cases the salesman does not even fully comprehend the inputs or outputs; he merely acts as a messenger, formatting the data as it needs to be.  If there is any problem with the final result, the salesmen are powerless to fix it, since they cannot identify where things went wrong.  Conversely, the quants cannot fix the problem because they have no conception of where the inputs come from or where the outputs go; they comprehend only the intermediate step.  This (real) firm is an excellent analogy for model use on a micro scale, where the calculation is a black box (the quant) and the user feeds data in and receives data out (the salesman).
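
In code terms the arrangement looks something like this (a deliberately cartoonish sketch; the function names and the calculation are invented):

```python
def quant_model(inputs: dict) -> float:
    # Opaque to the salesman: all assumptions and failure modes live here,
    # with no knowledge of where the inputs came from or where they go.
    return sum(inputs.values()) / len(inputs)  # stand-in calculation

def salesman(raw_client_data: str) -> str:
    # Formats the data without understanding it, forwards it, and
    # relays the answer back -- a messenger on both legs of the trip.
    inputs = {key: float(value)
              for key, value in (pair.split("=")
                                 for pair in raw_client_data.split(","))}
    result = quant_model(inputs)  # a black box from the salesman's side
    return f"Your number is {result:.2f}"

print(salesman("exposure=1.5,hedge=0.7"))
```

When the client calls to say the number looks wrong, neither function's owner can locate the fault.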

Needless to say, the system is broken.
