
Übermacro models

Ricardo and Robert Fernholz have a new paper (PDF) about a model of theirs showing that economic inequality arises necessarily, even though the big winners win strictly by chance.  I say shows instead of proves, because I’ve literally only glanced at it.  James Kwak has read it.  He still doesn’t understand the details of the model, which is OK because it’s clearly complicated, but he summarizes the main argument for us.  (Thank you, James.)  I really like his conclusion:

Because we want to find order and meaning in the universe, we like to think that success is deserved, but it almost always comes with a healthy serving of luck. Bear that in mind the next time you hear some gazillionaire hedge fund manager or corporate CEO insisting that he knows how the country ought to be run.

There’s a lot of truth to the notion that success can go to your head.  The fact that someone has been successful doesn’t mean that they can be objective about themselves (cough Jack Welch cough).  I’ve been wondering lately whether the same phenomenon can occur with a successful society.  The amount of economic growth over the last five centuries has been nothing short of amazing.  How much of it is due to historical accident, and how much to our own inherent awesomeness?


You don’t know, Jack

We all know what Jack Welch, former CEO of General Electric, said last week when the Bureau of Labor Statistics unemployment report showed that unemployment had dropped below 8% for the first time since January 2009.

Unbelievable jobs numbers..these Chicago guys will do anything..can’t debate so change numbers

Astounding.  A government agency staffed by career civil servants cooks the books to help the current administration get a second term?  What evidence did Welch have of the manipulation he alleges? None whatsoever (of course). Further, Andrew Gelman notes that under Jack Welch, General Electric manipulated its earnings reports and settled with the SEC over accounting fraud charges, and implies that Jack Welch is just projecting his own corrupt standards onto the BLS.  Nice.

Simple rules make lovely networks

This is a synthetic network generated using a few dozen lines of Python and a very simple rule. More on this later.

Hello, Dude

Marc Leder, the host of the fundraiser for Mitt Romney where the presidential candidate said, famously,

There are 47 percent of the people who will vote for the president no matter what. All right, there are 47 percent who are with him, who are dependent upon government, who believe that they are victims, who believe the government has a responsibility to care for them, who believe that they are entitled to health care, to food, to housing, to you-name-it. That that’s an entitlement. And the government should give it to them. And they will vote for this president no matter what…These are people who pay no income tax.

is known for being quite the party boy:

In August 2011, the same tabloid reported on a Hamptons bacchanal at a $500,000-a-night oceanfront mansion rented by Leder, “where guests cavorted nude in the pool and performed sex acts, scantily dressed Russians danced on platforms and men twirled lit torches to a booming techno beat.”

All of this is bizarre enough.  As Paul Krugman writes, “Clearly, we’re living in a bad political novel written by some kind of liberal.”  The truth of the matter, however, is that we’re living in a brilliant political film written by the Coen brothers, with Mitt Romney as the rich Jeffrey Lebowski and Marc Leder as Jackie Treehorn.

Wind power, I’m a huge fan

Being a huge fan (pun intended!) of wind power, I’m made hopeful by studies that signal a big future for wind. Unfortunately, the model behind this paper rests on a horrible assumption about the scale of the wind-power buildout: “wind turbines [can] be installed anywhere and everywhere, without regard to societal, environmental, climatic, or economic considerations.” Sure, simplifying assumptions are sometimes necessary when modeling complex phenomena, but what problem does this scale assumption solve? Would the model have been intractable if it were at least somewhat conservative, not to mention realistic, about turbine deployments?

Prescience — always welcome, usually unanticipated

In May 2006, Dino Kos, the then-Executive Vice President of the Markets Group at the New York Fed, finished a speech about volatility in foreign exchange markets with the near-prescient words:

… we should be extremely skeptical of risk measures based on the status quo assumption that volatility will remain low. This is precisely because the events that we should be most concerned about are those that have the ability to turn the status quo on its head.

I’d not heard of Mr. Kos before reading this transcript, but at the time of his speech, US unemployment was a mere 4.5 percent and was to go even lower before the beginning of the recession in late 2007.


So I’m impressed by his caution and sobriety, especially considering the Panglossian views of the Fed Chairman at the time.

Dimensionality and correlation

I’ve been thinking about the difficulties that highly correlated variables pose in a supervised learning context. The supervised learning problem is typically to learn a regressor or classifier from input \mathbf{X} to observation \mathbf{Y}, where \mathbf{X} is a set of predictor variables X_1, X_2, \ldots, X_p and \mathbf{Y} a set of observations y_1, y_2, \ldots, y_n. If the predictor variables X_1, X_2, \ldots, X_p are highly correlated, the learning algorithm, which perhaps assumes that they are independent, is at a disadvantage.

This is in part the motivation for regularization techniques such as the lasso, which is designed to handle the case where p \gg n. It can nonetheless be useful to winnow one’s set of independent variables \mathbf{X} to remove highly correlated variables. Doing so can, for instance, result in models that are easier to interpret. Further, since the lasso tends to choose a variable X_i more or less arbitrarily from any set of strongly correlated variables within \mathbf{X}, reducing or eliminating highly correlated variables from \mathbf{X} can result in more consistent variable selection when building multiple models.
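
To make that instability concrete, here is a minimal sketch, not from the original post, of the lasso choosing arbitrarily between two near-duplicate predictors. It assumes the glmnet package, and the synthetic variables (x1, x2, the noise columns, and y) are invented for the illustration; refitting on bootstrap resamples shows the selected member of the correlated pair flip-flopping.

> # Illustration only: lasso's arbitrary choice among nearly identical predictors
> library(glmnet)
> set.seed(1)
> n  <- 100
> x1 <- rnorm(n)
> x2 <- x1 + rnorm(n, sd=0.01)              # x2 is nearly a copy of x1
> X  <- cbind(x1, x2, matrix(rnorm(n*8), ncol=8))
> y  <- x1 + rnorm(n)                       # signal carried by the correlated pair
> picks <- replicate(20, {
+   i <- sample(n, replace=TRUE)            # bootstrap resample
+   b <- coef(cv.glmnet(X[i,], y[i]), s="lambda.min")[2:3]
+   paste(which(b != 0), collapse="+")      # "1", "2", or "1+2"
+ })
> table(picks)                              # the chosen variable varies across refits

On data like this, the nonzero coefficient often lands on x1 in one resample and x2 in the next, which is exactly the inconsistency that winnowing correlated predictors beforehand helps avoid.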

After running across Stephen Turner’s recent post about visualizing correlations (especially the oh-so-useful chart.Correlation in PerformanceAnalytics), I decided to go a step further and see whether fractal dimensionality can expose correlations in one’s data. Using the correlation sum C(l), the fraction of pairs of points lying within distance l of one another, I plotted \log C(l) against \log l. Roughly speaking, the slope of that plot is a measure of the dimensionality of the data. I expected the highly correlated variables to distort the log-log plot in some way, causing unexpected curvature or even discontinuity.

Consider these two matrices.

> # 100 observations of 50 uncorrelated standard-normal variables
> M1 <- matrix(rnorm(50*100), ncol=50)
> # copy of M1 with its first 25 columns zeroed out, i.e. a block of
> # identical, completely redundant columns
> M2 <- M1
> M2[,1:25] <- 0
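
The plotting code isn’t included in the post; this is a rough sketch of how the log-log curves for M1 and M2 might be produced. Euclidean distances between rows, log-spaced radii spanning the observed distances, and the helper name corr_sum_curve are my own choices, not the post’s.

> # Sketch only: the correlation sum C(l) is the fraction of point pairs
> # within distance l; plot log C(l) against log l for both matrices
> corr_sum_curve <- function(M, n_radii=40) {
+   d <- dist(M)                            # pairwise Euclidean distances between rows
+   radii <- exp(seq(log(min(d) * 1.01),    # pad slightly so C(l) > 0 at the smallest radius
+                    log(max(d)), length.out=n_radii))
+   list(logl=log(radii), logC=log(sapply(radii, function(l) mean(d <= l))))
+ }
> c1 <- corr_sum_curve(M1)
> c2 <- corr_sum_curve(M2)
> plot(c1$logl, c1$logC, type="l", col="blue",
+      xlim=range(c1$logl, c2$logl), ylim=range(c1$logC, c2$logC),
+      xlab="log(l)", ylab="log(C(l))")
> lines(c2$logl, c2$logC, col="red")

The slope of each curve over its roughly linear middle section is what stands in for the dimensionality of the data.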

And their correlation dimension plot.


Log-log correlation dimension plot for M1 and M2. M1 is shown in blue, M2 in red. The lack of strong correlation among the variables in M1 is reflected in its smoothness and concavity. The irregularity of the plot for M2 indicates that it contains at least one group of highly correlated variables.

The correlation dimension plot for M1 shows a fairly smooth, likely concave line. That for M2 — and this appears to be typical of data sets with groups of highly correlated variables — has segments with greater curvature and segments with zero slope. My intuition is that the segments with zero slope occur because correlated variables are close to one another; the reason for the greater curvature is not at all clear to me. Regardless, if there’s research on the use of fractal dimensionality for subset selection, I’d love to hear about it.