Epsilon Theory: Two Discoveries

The world made two discoveries last week. Everyone is aware of the first discovery – that ISIS is not “a junior varsity team” but an able protagonist in what Pope Francis quite rightly calls “a piecemeal third World War”. Very few are aware of the second discovery – a quasipolynomial-time algorithm, fast enough for all practical purposes, for determining whether two networks, no matter how complex, are structurally identical. Both are watershed events, part of a continuing destabilization of politics and science. Neither will impact markets very much today. Both will change markets forever in the years to come.

I won’t say much about the first discovery here, but will take this opportunity to reprint a note I wrote in December 2014, eerily right before the Charlie Hebdo attack: “The Clash of Civilizations”. I’d also point out that the all-too-predictable Orwellian response to events like the Paris attack, namely to rewrite history and expand government monitoring of our private lives, is in full swing.

For example, here’s a before-and-after France Inter headline (hat-tip to Epsilon Theory reader M.O.), as noted by The Daily Telegraph. The headline as it originally ran a few weeks back dismissed a potential terrorist infiltration of the Syrian refugee flow as a “fantasy” of the lunatic right. Immediately after the attack, the headline was rewritten (and the body of the article partially rewritten as well) to suggest that of course one might question whether or not a few terrorists managed to sneak in with the refugees. France Inter – surprise! – is part of the state-owned media apparatus, now in full-throated advocacy for a “pitiless” war.

Given how this Narrative is developing within left-leaning European governments (hmm, amazingly enough, no marches this time around calling for solidarity with peace-loving Muslims), my political advice to left-leaning US politicians like Connecticut Governor Dannel Malloy is this: you might be getting a wee bit ahead of yourself by loudly and publicly promoting Syrian refugee relocation in your state. Just sayin’.

The second discovery – an algorithm that dramatically shortens the information processing power required to tell if two networks are structurally the same – requires a bit more explication. Here’s a picture of two such visibly different but structurally identical (isomorphic) networks.

For a whole host of data science applications – from cryptography to genomics to nuclear physics to, yes, finance – we’d often like to know how or if two networks or two data arrays are permutations of the same underlying structure. Maybe you could eyeball an answer for an 8-node network like the example above, but it doesn’t take much imagination to realize that this problem gets very hard very quickly for the human brain as the number of nodes increases.
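If you’d like to see the problem concretely, here’s a minimal sketch in Python using the open-source networkx library (the particular graphs and the permutation are invented for illustration): it builds two 8-node networks that are written down differently but are permutations of the same ring structure, then asks whether they’re isomorphic.

    import networkx as nx

    # Two 8-node networks written down differently: G2 is G1 with its nodes
    # relabeled by a permutation, so the underlying structure is identical.
    G1 = nx.cycle_graph(8)                        # nodes 0..7 joined in a ring
    mapping = {i: (3 * i) % 8 for i in range(8)}  # a permutation of 0..7
    G2 = nx.relabel_nodes(G1, mapping)
    print(nx.is_isomorphic(G1, G2))               # True

    # A genuinely different 8-node network fails the same test.
    G3 = nx.path_graph(8)                         # nodes 0..7 in a line
    print(nx.is_isomorphic(G1, G3))               # False

The caveat is that the matcher networkx calls here (a VF2-style algorithm) can itself take an impractically long time on hard inputs once the networks get large, which is exactly the gap the new discovery speaks to.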


Fortunately, of course, we have non-human intelligences to help us crack these problems today, but the only known algorithms for this particular problem ran in what is loosely called “non-deterministic polynomial” or NP-time, where the amount of time or information processing power required increases at a staggering rate as the number of nodes grows. This is in contrast to a polynomial or P-time algorithm, where the time required increases at a far more manageable rate. Think of it as the difference between 2^n (the NP-time flavor) and n^2 (the P-time flavor), where n is something like 1 million. 2 raised to the 1,000,000th power is an incomprehensibly large number, far greater than the number of atoms in the observable universe. If that’s your algorithm for solving the isomorphic network problem, then it’s physically impossible for any computer, no matter how powerful, to crack the problem for a network with 1 million nodes. On the other hand, 1,000,000 squared is a trivially small number (1 trillion) relative to a powerful computer’s information processing capabilities. If that’s your algorithm, then there is no network too large for you to measure and compare to another network.
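To put rough numbers on that comparison, here’s a quick back-of-the-envelope check in Python (the only input is the 1 million node count from the paragraph above):

    import math

    n = 1_000_000

    # P-time flavor: n^2 steps is one trillion, well within reach of a
    # modern computer.
    print(n ** 2)  # 1000000000000

    # NP-time flavor: 2^n steps. The number itself is too large to grasp,
    # so count its decimal digits instead: floor(n * log10(2)) + 1.
    print(int(n * math.log10(2)) + 1)  # 301030 digits, versus roughly 80
                                       # digits for the number of atoms in
                                       # the observable universe

Either way you slice it, the exponential version is not merely slow but physically out of reach, which is why moving a problem from the 2^n column toward the n^2 column matters so much.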

The isomorphic network problem was a classic example of something that most computer scientists believed could only be solved with NP-time algorithms. But last week, László Babai at the University of Chicago announced an algorithm for this class of problems that runs in quasipolynomial time – not quite P-time in the strict sense, but for all practical purposes close enough. Why is this important? Because it is the modern-day equivalent of discovering a new continent, one that happens to exist in cyberspace rather than human space. Because it is now by no means clear that there are ANY problems of data science that are inexorably lost in the cosmic fog of NP-time algorithms. Why will this one day change markets forever? Because the ability of computers to analyze and predict (and ultimately shape) the behavior of a complex network composed of millions of semi-autonomous agents exchanging a set of symbolic chips with each other – The Market – just took a giant step forward. If you thought that humans were already marginalized participants in public capital markets today … if you thought that the casino-fication of markets had reached some sort of natural limit … well, you ain’t seen nothing yet.

Sigh. Last week was a tough week for the human team. With the loud explosions out of Paris, the illiberal left, the illiberal right, and the illiberal jihadists are now ALL in political ascendancy. And with the quiet announcement out of Chicago, we are oh-so close to the day when no human communication over any network can be shielded or kept private from a machine intelligence. God help us as the two discoveries merge into one.
