Tuesday, June 12, 2012

You're a Mean One, Mr. Grinch - Or Are You?

An Inside Look at High-Frequency Trading

High-frequency trading has been in the news a lot in the past few years.  I’d like to say it has been in the news the way the recent Queen’s Jubilee has: a highly covered, well-reported, highly esteemed occasion that at its heart is nonetheless an ultimately boring and uneventful affair of pomp and ceremony.  Unfortunately, a better analogy is the photos of a drunk and underdressed celebrity, let’s call her Britney, exiting a car with a little less modesty and decorum than befits a grimy alleyway at 2am.

Why the animosity, why the disdain? The most visceral responses are to general headlines about executive compensation, insider trading, market manipulation, and collusion.  It’s not as pervasive as it seems, however.  The recent rash of such headlines has less to do with any real increase in such occurrences and more to do with public interest in them, which has risen sharply with outrage over the banks’ involvement in the housing market, outrage over government bailouts, and outrage over the growing disparity between the upper echelon and the general public.  That abundance of passion and ideals, as well as ire, has fueled the Occupy Wall Street movement and continues to ricochet through dinner-party cocktail chatter.

The irony is that high-frequency trading is almost completely absent from these problems.  The genesis of high-frequency trading comes from the “high-frequency” part, which requires lots of participants who are confident in the rules, understand the simple nature of the product, find the process easy and affordable, and so interact frequently.  This ubiquity and confidence are primarily found in the stock market.  The same cannot be claimed by the OTC derivative markets, credit default swaps, mortgage-backed securities, convertible bonds, and numerous other deals and contracts heavily customized on a one-off basis and then sold and resold with little oversight and hazy mathematical mechanics.  By contrast, from Hollywood to grandma’s house, nearly everyone understands how to invest in stocks, with the protection and oversight provided by the SEC, FINRA, numerous pieces of legislation, and an abundance of case law around each.  This means regulators and politicians understand, for the most part, how the stock market works and have as a result built an extensive system of checks and balances with vigorous monitoring and strict enforcement.  Far from subverting these rules, high-frequency trading thrives by adhering to them, because these rules ensure the fairness, ease, and ubiquity that are the exact ingredients of the “high-frequency” part of the name.

Moreover, due to the frequency of the trades, all profits are considered “short-term capital gains” by the IRS.  While the lens of public scrutiny over capital gains being taxed at a flat 15% is well deserved, it bears mentioning that only “long-term capital gains” and some interest and dividends are taxed at that rate.  This means private-equity firms like Bain Capital and their former members like Mitt Romney paid this highly coveted, effective rate.  It also means that high-frequency trading firms pay ordinary income tax, which is how short-term capital gains are treated.  A high-frequency trading firm in New York City pays a combined federal, state, and city tax rate north of 45%.  If there were any hard evidence that high-frequency trading is not a privileged member of the establishment or bosom buddies with political elites, this harsh tax treatment is it.

Why, then, all the focus on high-frequency trading?  The ease of understanding that makes the stock market accessible to a multitude of participants is the same ease of understanding that makes it a convenient target for the political microscope.  It also fits the Luddite narrative of evil machines taking over the world, a very evocative and frightening image.  And it plays to the sentiment that there’s some inherent unfairness and difficulty in the stock market, which resonates at a time of plummeting net worths, a prolonged recession, and postponed retirements.  This narrative is convenient in that it garners readers easily, sounds extremely plausible because it does contain a kernel of truth, and helps move the spotlight away from the many real problems with Wall Street and inequality that desperately need reform.

Let’s demystify high-frequency trading.  A lot of the misconceptions persist because, while fanciful prose about machines, the speed of light, and number-crunching algorithms has been penned and repenned, few writers actually know what it is.  The most common misconception is that, somehow, really fast machines buy before someone else can buy and then sell back to that initial person at a higher price.  The first problem with this notion is that it implies high-frequency trades cannot lose money, which, as any auditor can assure the public, is not true.  Like most financial transactions, nearly half of all high-frequency trades lose money.  The second problem is that it implies the high-frequency firm has advance knowledge of the initial buyer’s intention.  This is called “front-running”, and not only is it already illegal, it is nearly impossible to execute, since high-frequency firms are not retail brokers and are therefore not privy to a buyer’s intent.  The easiest way to tell that a writer has no idea how high-frequency trading works is to spot the word “somehow” in the description of the trading strategies.  It might as well read “magically”, a tell-tale sign of fantasy, not fact.

Now that we know what it isn’t, what is it?  Well, we first need to understand “two-sided quoting”, and a delicious way to do so is with cake!  It’s an old puzzle: two individuals both desire cake, and it needs to be divided in a way both deem fair.  Even with rules like “cut down the middle”, what prevents the cutter from cheating and giving himself the bigger piece?  One clever solution is to have one individual cut the cake and the second individual choose which piece to take.  If the first person cuts unfairly, the second will pick the larger piece, and the cutter winds up with the smaller portion.  This system ensures the first person will try to cut the cake as fairly as possible.  In financial lingo, the first person is a “market-maker”, a fancy name for the same concept.  In the stock market, a market-maker thinks of a fair price for, say, shares of IBM, and offers to buy and to sell around that price.  The passerby can then choose whether to buy, whether to sell, or to keep going without interacting.  Accusing the market-maker of quoting an unfair price would be silly, since an unfair quote hurts the market-maker in the same way an unfair cut hurts the cake-cutter.
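If it helps to see the mechanics, a market-maker’s two-sided quote can be sketched in a few lines of Python; the fair value, spread, and prices here are invented purely for illustration:

```python
# A minimal sketch of a two-sided quote, with made-up numbers.
# The market-maker picks a fair value, then offers to buy a little
# below it and sell a little above it -- cutting the cake down the
# middle and letting the passerby choose a side.

def two_sided_quote(fair_value, spread):
    """Return (bid, ask) centered on the market-maker's fair value."""
    bid = fair_value - spread / 2  # price at which the maker will buy
    ask = fair_value + spread / 2  # price at which the maker will sell
    return bid, ask

bid, ask = two_sided_quote(fair_value=195.50, spread=0.02)
print(f"IBM quote: {bid:.2f} bid / {ask:.2f} ask")
```

A quote skewed in the maker’s favor would, like an unfair cake cut, simply leave the maker holding the worse side of every trade.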

Okay, so market-makers seem to actually have some value, but how do computers and high speed come into play?  First we need to understand what a “spread” is.  It’s jargon for the difference between the highest price someone is willing to pay for a stock and the lowest price someone is willing to sell it for.  If you travel overseas and try to exchange currency, you’ll also notice a spread; converting from dollars to euros and back to dollars will leave you with fewer dollars.  Smaller spreads benefit investors in the same way that ending a dollars-to-euros-to-dollars round trip with nearly the initial amount benefits travelers.
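The round-trip cost of a spread is easy to make concrete; the exchange rates below are hypothetical:

```python
# Illustrative round trip through a currency spread.  The kiosk
# buys and sells euros at slightly different rates; the gap between
# the two is the spread, and it is exactly what a dollars -> euros
# -> dollars round trip costs the traveler.

usd_to_eur = 0.78  # euros received per dollar (hypothetical rate)
eur_to_usd = 1.22  # dollars received per euro (hypothetical rate)

dollars = 1000.00
euros = dollars * usd_to_eur       # 780.00 euros
dollars_back = euros * eur_to_usd  # 951.60 dollars
print(f"started with ${dollars:.2f}, returned with ${dollars_back:.2f}")
```

The tighter those two rates hug each other, the less the round trip costs, which is the same benefit a tight bid/ask spread gives a stock investor.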

Now, suppose an investor has the view that Facebook should have a market value somewhere between 40 and 70 billion dollars.  This investor will then be willing to buy stock when the market value of the company dips below this range and willing to sell the stock when the market value rises above it.  In between, the investor is neutral and will likely neither buy nor sell.  The investor, like a market-maker, has a two-sided quote, except the spread is $30 billion of market value.  A company like Facebook does not change in market value by $30 billion very often or very quickly, so the frequency of such trades is low, even though the magnitude of the trading opportunity may be large.  This would be low-frequency trading.

By contrast, if an investor has a view that Facebook should have a market value somewhere between $58.01 and $58.03 billion, then the investor will find himself frequently buying and selling the stock.  This is high-frequency trading.  Unlike the investor who may be very confident about the $40 to $70 billion range, and therefore willing to make a massive investment, an investor is unlikely to be very confident about the $58.01 to $58.03 billion range and will thus make only a tiny investment.  These two investors co-exist within the same ecosystem: one attempting to make big payoffs on sizable investments over opportunities that occur once in a while, the other attempting to make modest payoffs on small investments over opportunities that occur multiple times a day.
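One way to see the difference is to count how often the same price path pokes outside each investor’s band; the path and the bands in this sketch are invented for illustration:

```python
# Sketch: the same price path triggers few trades against a wide
# band and many against a narrow one.  The path and the bands are
# made-up numbers, in billions of dollars of market value.

def count_trades(prices, low, high):
    """Buy when the price dips below `low`, sell when it rises above `high`."""
    return sum(1 for p in prices if p < low or p > high)

path = [58.02, 58.04, 58.01, 58.00, 58.03, 58.05, 58.02, 57.99]

wide = count_trades(path, low=40.0, high=70.0)      # the $40-$70B investor
narrow = count_trades(path, low=58.01, high=58.03)  # the $58.01-$58.03B investor
print(wide, narrow)  # 0 4 -- the narrow band trades all day
```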

Hmm, again, how do high-speed computers come into play?  Well, in order to be semi-confident about the tight spread around the market value of the company from the last example, the investor needs to process a lot of rapidly updating information.  The information that directly affects the company does not update frequently: quarterly earnings, customer surveys, the health of the CEO, pending court cases, employee retention, and so on.  However, information that hints about the company, such as whether a related company has suddenly surged or fallen, does update frequently.  These relevant but not directly related data points are called correlating factors.  It’s a bit like seeing the neighbor’s kid getting bitten by a dog and then worrying about your own kid.  While information that directly affects the company can send its value up or down by several billions of dollars, these indirect but correlating factors affect the company only subtly, often by less than a tenth of a percent in value.  However, the interval of $58.01 billion to $58.03 billion in market value for Facebook translates to only 1 cent per share, so subtlety is key.  If the two-sided quote is a penny per share too high or too low, the high-frequency investor loses 100% of his spread.  This means a lot of subtle, frequently updating correlating factors need to be processed and the forecast for the stock updated in a timely fashion, a feat only made possible by well-designed computer software.
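A toy version of folding one correlating factor into the forecast might look like this; the beta and every figure here are hypothetical:

```python
# Sketch of nudging a fair-value forecast by a correlated peer's
# move.  The beta (correlation strength) and all numbers are
# hypothetical; real models juggle many such factors at once.

def updated_fair_value(fair_value, peer_return, beta):
    """Scale the peer's percentage move by beta and apply it to the forecast."""
    return fair_value * (1 + beta * peer_return)

fair_value = 58.02e9   # mid-forecast, in dollars of market value
peer_return = -0.001   # a related company just slipped 0.1%
beta = 0.5             # assumed strength of the correlation

new_value = updated_fair_value(fair_value, peer_return, beta)
print(f"forecast nudged down by ${fair_value - new_value:,.0f}")
```

A nudge of a few hundredths of a percent is invisible to the $40-to-$70-billion investor but is the whole game for a penny-wide quote, which is why the reprocessing has to happen in software, and quickly.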

The comparison between high-frequency and low-frequency trading is analogous to the comparison between weather and climate.  Weather forecasting is about predicting whether it will rain this afternoon in your city.  It requires watching, minute by minute, the clouds over nearby cities, communicating that back to your city, and having a computer quickly process the data.  Climate forecasting is about predicting how sea levels will rise over the next ten years.  It uses monthly or annual data and also uses computational methods, but it can tolerate less efficient software.  Both are essential endeavors.

What about news that these computers destabilize markets and caused the ‘Flash Crash’?  The Flash Crash was a widely reported incident wherein the U.S. broad market fell by more than 5% in the span of minutes, then recovered to its original level over the course of a few more minutes.  The popular blame lay squarely on “robots going berserk”, a dismissive way of blaming high-frequency traders for the volatility.  Most such news reports are sadly written by people who have no clue what they’re talking about.

The SEC scrutinized the incident and found that the Flash Crash likely originated with a futures contract trader who had mis-entered the quantity of contracts to sell by two orders of magnitude.  The massive selling spree of $4.1 billion, manually instigated, would have sent the futures market to depths far lower than what was witnessed had high-frequency trading firms not formed a virtual mattress to soften the plunge.  However, this cushioning effect is achieved only because the load is distributed to a variety of other related stocks and products.  When high-frequency firms see the futures contract go down while the highly related underlying stocks do not, they see a mismatch.  It brings into doubt whether the stocks that didn’t go down perhaps ought to have, and whether the futures contract that did go down perhaps ought not to have.  This downward doubt on stocks and upward doubt on the futures contract resulted in high-frequency firms taking small, measured trades that sent the stocks slightly lower toward the related futures contract and sent the futures contract slightly upward toward the related underlying stocks.  In essence, high-frequency firms acted as a type of elastic glue that slowed the falling object and tugged related stationary objects down a bit.  Blaming high-frequency firms for the Flash Crash is a bit like running a car into a brick wall at 60mph and blaming the airbag for breaking your nose.  After the firm that entered the manual sell orders detected its mistake and stopped, the futures market quickly rebounded, and the glue-like mechanism provided by high-frequency trading firms brought the related, underlying stocks back up.
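The elastic-glue logic boils down to comparing two prices that ought to move together; the prices and the threshold below are invented for illustration:

```python
# Sketch of the "elastic glue": when a futures contract and its
# highly related basket of stocks diverge, lean each toward the
# other with small, measured trades.  All numbers are invented.

def glue_signal(futures_price, basket_price, threshold=0.001):
    """Return (buy_futures, sell_basket) when the basket trades rich
    relative to the futures by more than `threshold`."""
    gap = (basket_price - futures_price) / basket_price
    if gap > threshold:
        # Futures fell while stocks didn't: buy the doubted-cheap
        # futures, sell the doubted-rich stocks.
        return True, True
    return False, False

print(glue_signal(futures_price=1060.0, basket_price=1120.0))  # (True, True)
print(glue_signal(futures_price=1120.0, basket_price=1120.0))  # (False, False)
```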

Okay, so perhaps high-frequency trading isn’t harmful, but how is it fair for investors to make money while not contributing to society in any material way?  Well, this is a bit of a loaded question in that it presumes no societal benefit by investing.  Investing is a broad area, and high-frequency trading is only one specific type of investing; nonetheless, it’s fair to question whether investing helps society.  We’ll take a quick aside before answering.

Companies can raise money in basically one of two ways: borrowing, whether from a bank or from bond holders, or selling an investment stake.  That is, instead of borrowing $20 million and having to repay it with interest, a company may give an investor 10% of itself in exchange for $20 million.  It never has to repay the investor, since the money isn’t borrowed -- the company sells 10% of itself for that money.  In both examples, the company raises $20 million and may hire employees, expand its facilities, or otherwise spend the cash however it sees fit.  In the investment example, the company has the added benefit of never being on the hook to repay -- it doesn’t have a gun to its head to constantly pay interest or risk bankruptcy.  For this reason, many companies choose to raise money through investors.
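The trade-off between the two routes can be summed up in a few lines; the interest rate and loan term below are made up:

```python
# Sketch of the two ways to raise $20 million, with made-up terms.
# Debt must be repaid with interest; equity never has to be repaid,
# but the founders give up a slice of all future profits.

raise_amount = 20_000_000

# Route 1: borrow at a hypothetical 6% for 10 years (interest-only).
rate, years = 0.06, 10
total_interest = raise_amount * rate * years
print(f"debt: repay ${raise_amount + total_interest:,.0f} over {years} years")

# Route 2: sell a 10% stake.  Nothing is ever owed back.
stake_sold = 0.10
print(f"equity: repay $0, but part with {stake_sold:.0%} of the company")
```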

It’s a little less understood how the stock market contributes to this ecosystem.  The investment example from earlier is called “private equity investment”.  As the word ‘private’ suggests, it is not open to the public; unless you have a chat with the CEO over dinner, such opportunities are not available to you.  This is problematic not only for the general public but also for the CEO, who may not have that many dinner buddies interested in investing.  This is where stock markets come into play.  The company may decide to do a “public offering”, selling a large chunk of stock to an investment bank, which then facilitates providing this stock to the general public.  This helps the CEO raise the cash he may not have been able to with dinner buddies alone, and it allows anyone in the general public to become an investor -- no yacht-club memberships required!

So, how do high-frequency trading firms play a part in this mechanism of helping companies raise capital?  The effect is somewhat indirect.  Take the traveler’s dilemma from before, where converting from dollars to euros and back to dollars results in a slight loss.  When converting is more efficient, it encourages people to make the conversion.  Similarly, because high-frequency trading firms make trading extremely efficient for everyone, promoting cheaper conversion from cash to stock and back again, they encourage people to make the conversion.  It’s important to keep in mind that a crucial factor dictating whether a company can raise $15 million, $20 million, or $25 million in a public offering is the appetite of investors.  Some of those investors may be enamored with the company and invest regardless of any other facilitation.  However, some investors are skittish and need a nudge.  Imagine a company selling a new gizmo.  It costs $299 to buy one of these gizmos, and returns are not allowed.  Suppose a thousand people are eager to take that deal.  Now imagine the company allows returns with a 35% restocking fee.  Perhaps an extra hundred people who were on the fence will now take the deal.  Now imagine the company allows returns with a 0% restocking fee.  Perhaps an extra five hundred fence-sitters now participate.  In the case of a public offering, if the company can raise $25 million instead of $20 million because a few thousand more investors on the fence now decide to buy, confident that if they have buyer’s remorse they can punt their investment back to the stock market quickly and efficiently, that means $5 million more to hire employees, expand facilities, et cetera.

The impact of not having high-frequency trading is apparent in investing opportunities that lack its participation: “low-liquidity” stocks which seldom trade, “pink sheets”, and private investments.  Without high-frequency trading firms providing clarity about the stock price, accurate to less than a tenth of a percent of the market value, and the elastic glue to make these stocks ebb and flow with their related peers, low-liquidity stocks exist in a sort of neverland where they may stagnate in price while their peers all go up.  The most disadvantaged in this example are not wealthy investors, who can patiently wait for their investment to turn a profit, but investors in dire financial straits who need cash.  A wide swath of Hollywood movies feature the iconic image of a lad down on his luck pawning off precious items for fractions of their true worth.  If pawn shops had the spreads offered by high-frequency trading firms, that lad would get the same price for his autographed Mickey Mantle baseball as a Christie’s auctioneer, and be able to buy it back at cost whenever he’s ready.  It all comes down to the spread and being able to turn cash into an item and back into cash while barely losing anything in the process.  It encourages risk-taking, and this in turn puts money back into the hands of the greatest risk-takers of all, the American middle class.

Copyright 2012  thoreaulylazy.  All Rights Reserved.

Monday, February 20, 2012


Internet Poster's Question:
I am not an economist, but just a simple question: what does India really make to warrant its economic growth? The only thing I have experienced is the frustrating call center service from India. I don't think India really exports any agriculture, natural resources, or anything else high tech. So really, what is India doing to generate wealth?

My Answer:
GDP does not equal the trade imbalance, or even exports. It is a measure of goods & services provided. Keep in mind that Earth does not export anything to Mars, Pluto, or other planets, and yet you would agree that we Earthlings have become wealthier from 4000 BC until the present day. Earthlings enjoy televisions, the internet, cellphones, and all the comforts we know despite never exporting to Mars and Pluto. That is because wealth doesn't merely come from exporting; it comes from producing. India has a large domestic industry, producing a multitude of goods, as well as various services like tailors, spas, restaurants, etc., for a population of 1.2 billion people, the same as Earth's entire population in 1850.

Is India self-sufficient? No. It lacks one key resource: oil, which is by far India's largest import. The largest export industries for India are textiles, agriculture, steel, automobiles, and pharmaceuticals, and the smallest of its exports are its IT-related services like call centers and software. Will India improve its exports? Yes, slowly. Why does India have tiny exports? Part of it is that India did not open its economy until the early 1990s; China, by contrast, opened its economy in the late 1970s / early 1980s, following the thaw begun with Nixon's ping-pong diplomacy. A closed economy does not trade very much with outside economies, a state usually achieved with high tariffs to discourage trade and restrictive rules regarding foreigners opening shop in the country.

Why were China and India once closed economies? Security. A closed economy is immune to sanctions and other trading decisions of a foreign market. An open economy can become (dangerously) reliant on some key component, say, rubber, that can be turned off by the foreign supplier at any time, or used to extort the fledgling nation. During WW2, the Allies stopped trading with Germany, and Germany suddenly lost key imports like rubber; earlier, a U.S. embargo had cut Germany off from helium, forcing its airships to use hydrogen instead, which set the stage for the Hindenburg fire. Also, foreign companies can engage in hostile takeovers of domestic companies, or in product dumping, and this too creates a security problem. For example, Domino's Pizza in the U.S. was reputed to sell a large pizza for $5 in a market that ordinarily charged $12, a ridiculously low price that lost them money; however, it ensured all competing pizzerias in the area went out of business, after which Domino's Pizza could raise its prices to $20. For a fledgling nation with small companies, the potential for a larger foreign company to use this practice was a threat.

Why did they open their economies? Trade imbalance. Despite being closed economies, certain items, like oil, needed to be imported. These imports must be paid for in a currency the seller desires. Oil is sold in U.S. dollars, and the only ways countries like India and China can acquire U.S. dollars are (i) exporting, (ii) foreign direct investment (FDI), or (iii) borrowing. Borrowing only delays the problem, since if they borrow U.S. dollars, they owe more U.S. dollars at a later date. FDI is when, say, an American company wants to create an Indian subsidiary. All the Indian construction workers who erect the offices, the employees, and the local computers and other goods need to be paid for in rupees, the local Indian currency. The American company exchanges some of its U.S. dollars for Indian rupees in order to invest rupees into its Indian subsidiary. This currency exchange leaves India with some U.S. dollars and the American company with Indian rupees. India can then spend those U.S. dollars to buy oil. Exports are the simplest to explain: Indian companies sell some item to consumers in the U.S. and are paid U.S. dollars. The U.S. is actually a really bad exporter, as our trade deficit shows, but the U.S. dollar can be spent to buy oil, which is by far the most valuable product sold for U.S. dollars.

China actually has a problem in that it has too much U.S. currency. How on earth can you have too much? Because U.S. currency can only be spent in certain ways, such as buying oil, and China acquires more U.S. dollars each year than it needs to buy oil. The excess money it then re-invests into U.S. treasuries, earning interest, since it can't otherwise spend it. The treasury interest is only about 5% whereas China's domestic market is growing at 14%, so it would much rather invest that money for growth higher than 5%, but can't.

This is also why the world watches with bated breath as several oil-producing nations have been grumbling about the weakening U.S. dollar and contemplate selling oil for other instruments, like a basket of currencies. Iran, under recent trade sanctions by the U.S. and Europe, is rumored to be in negotiations to sell oil for Chinese yuan. If oil is no longer U.S.-dollar denominated, it radically changes global trade dynamics, since much of the value of the U.S. dollar lies in buying this key commodity. It is also why the U.S. has such a huge vested interest in the Middle East.

I hope that helps give an introduction to world economics and currencies.

Thursday, November 24, 2011


Internet Poster's Question:
I understand (drag etc) why the asteroid would disintegrate, but that is something different from actually exploding which, to me at least, implies that there is some kind of internal energy reserve which is released causing the asteroid to fragment violently from the inside. This is different from breaking up due to external forces (equivalent to the difference between how a bomb explodes on hitting the ground and how a china teapot disintegrates on hitting the ground).

If I hit the water at 100 miles an hour I would, indeed, splatter quite unhappily, but I would not actually explode.

The Siberian example given by the speaker features an asteroid actively exploding, not just burning up. How does that happen?

My Answer:

The explosion is not a chemical explosion but a pressure-wave induced explosion. Take a look at this high-speed camera footage of a bullet going through an apple and a banana:

Inside the apple and banana, if the bullet were traveling slowly, it would just enter and exit, leaving a hole-shaped burrow, much like slowly jabbing a pencil through chocolate cake. But because of the bullet's speed relative to the resistance of the medium, the bullet creates a pressure wave as it passes through.

This article talks some more about asteroids exploding in the atmosphere.

An explosion can be triggered by expanding gas, as with nuclear bombs that heat the air and cause it to expand, or with TNT, which chemically decomposes into gases like CO2 and N2 that expand from denser, solid matter (and heat the air, since the reaction is exothermic). But an explosion can be any shock wave moving outward from a center.

When an asteroid hits the hard rocky surface of the earth, an explosion occurs as the kinetic energy is transferred to the surrounding rock as shock wave energy. That's why a room-sized asteroid can leave a mile-wide crater.

Similarly, if an asteroid is moving fast enough, the atmosphere will seem "hard" enough for much of its kinetic energy to be converted into shock wave energy (and heat from friction). When a smooth asteroid fractures and breaks off into numerous jagged non-aerodynamically shaped objects, further shock waves are created as these non-aerodynamic objects suddenly transfer even more kinetic energy into shock wave energy.

P.S. Regarding your teapot v bomb analogy: if your teapot were moving at 3,000 m/s, it'd have the explosive power of a kilogram of TNT (about 4.2 megajoules). Remember, kinetic energy = 0.5mv^2, so something moving 100 times faster has 10,000 times the energy.
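For anyone who wants to check the arithmetic, here is the back-of-envelope in Python; the teapot's mass is assumed to be roughly a kilogram:

```python
# Back-of-envelope for the speeding teapot.  Assumes a ~0.9 kg
# teapot; one kilogram of TNT releases roughly 4.2 megajoules.

def kinetic_energy(mass_kg, speed_m_s):
    """KE = 0.5 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

teapot = kinetic_energy(mass_kg=0.9, speed_m_s=3000)
print(f"{teapot / 1e6:.2f} MJ")  # 4.05 MJ -- about a kilogram of TNT

# The quadratic scaling: 100x the speed means 10,000x the energy.
assert kinetic_energy(1, 100) == 10_000 * kinetic_energy(1, 1)
```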

Friday, August 26, 2011


I was reading a certain xkcd comic talking about how large the clouds in the sky are and yet how difficult it is for us as humans to recognize their size. In the comic, the feat was accomplished through a marvelous use of technology, utilizing HD cameras kept 100 feet apart to achieve the depth perception needed to appreciate the true scale of these giants floating above us.

Well, that started cranking some of my now aged and beleaguered neurons into remembering an epiphany I had as a small child.

I was four years old and it was my first night out, literally. I had until then never experienced the night sky, the stars, or the moon in all their splendor for more than a glimpse, courtesy of getting tucked into bed each night. Well, this particular night was different; I was ushered into a car and, as per usual, wasn't told anything. Not being told anything actually had its advantages -- it meant I grew savvy at eavesdropping, though listening at that age generally led to more confusion than clarity. From bits of conversation, I gleaned that this was a long road trip. I disliked confined spaces, and I especially disliked confined spaces that so nauseatingly hurtled down monotonous roads.

It was late, I was tired, but I was too excited to sleep. The night was so mysterious. My nose was practically pressed against the car window as I tried to soak in the strange new world. A bright glowing creature seemed to be hovering and following us, though. At first I wondered if this was some new insect or animal I hadn't known. None of the adults seemed to pay it any heed. It behaved very strangely for an insect, though. It seemed to follow and keep pace however fast we went, yet if we stopped, so did it, and it seemed to keep its distance. It didn't seem like an insect; if it were interested, why did it not fly straight into the car? If it were not interested, why did it follow? Why did it not chase any other cars; why was it fixated on us? Moreover, insects fly forward, and thus would chase us from behind; this creature seemed just as content to fly sideways, hovering outside my side window. Then, something strange happened; we turned at an intersection, and this creature, rather than hovering to our side, was hovering in front of us! This was either some devilishly clever creature trying to play tricks on me or, as was made increasingly likely as we turned at more intersections, a fixture in the sky!

I marveled at it. It seemed so near and tangible, like a firefly, yet so very distant. I trained my gaze to probe it, to study it. It was the most inspiring object I had seen in that short little life I had led. After some time we meandered through the small streets and intersections and hit a large, straight road. The glowing orb was still outside. The car picked up speed, the pavement whizzed by in a blur, and the grass across the shoulder of the road sped by as well. The trees in the distance danced along, and even the faint outlines of mountains in the extreme distance crept by. The glowing ball stood still -- it stood perfectly still! This was astonishing to me. A tree was large, and a mountain gigantic; what did that say about this glowing ball? Well, it was no ball at all; it was a planet!

And for the next four years I believed the moon was the largest object in the universe, bigger than Earth or anything else that had ever existed.

Tuesday, May 03, 2011

Patriarchy, Patrilineality, and Primogeniture

With movies such as The King's Speech recently out on DVD, and the media showering attention on the wedding of a bald bloke named Willy and his air hostess bride Katy, one can't help but be cognizant of the English hereditary system.

In my case, I was additionally reminded of conversations I had in college regarding the distinctions between the English and French systems of heredity. The English system confers all status and privilege on the eldest-born male; by contrast, the French system distributed status and privilege among all male offspring. Both systems employed patriarchy and patrilineality, but only the English system additionally employed primogeniture. It seems like an inconsequential difference, but, as with mathematical fractals and chaos, any iterative rule-based system takes subtle deviations and manifests them into mammoth ones.

Under the English rules of heredity, fiefdoms were never divided upon inheritance, whereas under the French rules, fiefdoms were continually divided and subdivided, leaving a patchwork of ineffective plots of land ultimately conquered by others, not to mention the severe inflation French titles underwent as a result. This ultimately led to the French lords losing their grasp on power, as there were simply too many of them trying to uphold the standard of living meant for a much smaller group of their ancestors.

Well, my train of thought continued along this line, and yesterday I pondered: is there an even better iterative rule-based system for consolidating power? I envisioned a Medieval society, so it had to conform to patriarchy and patrilineality. However, I decided to bend the definition of patrilineality to non-contiguous patrilineality. That is, instead of a father bequeathing to his eldest son, let him bequeath to his eldest grandson. Seems like a subtle difference, doesn't it? But, as was demonstrated earlier, subtle differences in the rules can lead to large-scale differences in impact.

Let's play it out, shall we? Suppose a man from the Montague family and a woman from the Capulet family form a union. If their son is the eldest among all his male cousins, on his Montague side as well as his Capulet side, then he would be twice-inherited, creating a merged Montague-Capulet colossus!

If the English system proved superior to the French system because estates were preserved rather than divided across inheritance, this new hypothetical system may have proven even more potent in this Medieval society by actually consolidating estates across inheritance. Noble families often intermarried to form alliances, but under the proposed hereditary system, you wouldn't get a mere alliance, but an actual consolidation of two families' estates into a single heir.

In fact, this phenomenon did occasionally happen in English noble society, but more by accident than by systematic forethought. Every so often, a couple was left with no male heirs, and in such cases their son-in-law or grandson could become twice-inherited. This happened rarely, however, and was generally avoided because the maternal grandfather had the unenviable distinction of generally not being accorded by English society any influential role over either his son-in-law or his matrilineal grandson. In fact, much of the fortune in such circumstances was afforded to charity. By contrast, a hereditary system that specifically inherited across a more removed relation — from a grandfather to his eldest grandson — would confer far greater influence to such a grandfather over such a grandson.

Monday, July 26, 2010

Media Hype and Bell's Theorem

So, I found the following e-mail dated 2007-08-24, in which I replied to an e-mail forward titled "FW: We Have Broken The Speed Of Light". The forward concerned physicists from the University of Koblenz demonstrating quantum entanglement, but the reporter chose to headline that as "We Have Broken The Speed Of Light".

It's just a misleading headline, and the German scientists have done nothing new. The news agencies are renowned for pulling this sort of stuff -- ignoring scientific progress which inches along like a slug on the pavement, and then after decades of ignoring this, some news agency realizes that the slug has moved far enough from where the public was last informed it was, and the agency now declares this a "leap" in scientific understanding with a sensationally misleading headline and complete misunderstanding of the fundamentals. Unfortunately, most people do not grasp quantum mechanics and so any reporter can rise to fame with whatever sort of babble he writes.

The public would be more skeptical if, for example, the reporter claimed that the new Toyota manages to draw their new cars with 2,000 actual horses. That's not what a horsepower is, and people know that. We know that horsepower is a unit of power approximately equal to the sustained output of one horse. This is important because the output of one horse is not the same thing as a horse: one is a rate of energy delivery (power) that can be produced by a reasonably small device, and the other is an equestrian creature that needs to be fed, will occupy all lanes of a typical highway when in quantities exceeding twenty, and would struggle to fit inside the average consumer's garage.

Physics isn't the only realm where "new"s is really "old"s repackaged under the mantra "if you haven't seen it, it's new to you." Remember 9/10/2001? The summer was all about shark attacks, and every news agency jumped on that bandwagon because shark attacks had gone unreported so long that they became new and terrifying. Of course, the little-known fact was that 2001 witnessed a below-average number of annual deaths due to sharks. That's right, a year with fewer shark attacks than usual fueled a media frenzy over each and every shark attack.

Take, for example, these headlines:

Why Can't We Be Friends? A horrific attack raises old fears, but new research is revealing surprising keys to shark behavior
(TIME magazine, cover, 2001)
A Scary Jump in Shark Attacks...Could Threaten the Sharks (Businessweek, April 23, 2001)
Expert confirms surfer was bitten by shark (CNN, July 20, 2001)
Boy dies after shark attack (CNN, Sept 2, 2001)
Man killed in N.C. shark attack; woman hurt (CNN, Sept 3, 2001)

And many more...

Contrast that media hype with academia trying to bring people to their senses:

"GAINESVILLE, Florida, February 18, 2002 (ENS) - Despite the prevailing perception that 2001 was a banner year for shark attacks, actual numbers were slightly down, a new University of Florida study shows." -- University of Florida

" 'Falling coconuts kill 150 people worldwide each year, 15 times the number of fatalities attributable to sharks,' said George Burgess, Director of the University of Florida's International Shark Attack File and a noted shark researcher." -- Daily University Science News, May 23, 2002

Dubious Data Award 2001: "The frenzy was remarkable. According to the NEXIS database, there were a mere 58 stories about shark attacks in the US print media in June. This increased tenfold in July to 592, and it rose again in August to 684. September was the month the story would have consumed all others, with 624 entries up to and including September 11. The advent of another dangerous but unseen villain stopped all that. ... During the 1990s, when only five people were killed by sharks, 28 children were killed by falling TV sets. The Times editorial mentioned above concluded from our data that, loosely speaking, 'watching Jaws on TV is more dangerous than swimming in the Pacific.' " (Jan 1, 2002)

Right, so we've thoroughly exposed how most of these headlines are useless "old"s repackaged with bad math and poor editorializing to shock you as "new"s. Why is the faster-than-light article misleading? Well, first, we need to delve into what speed is. Speed is distance per time. What is distance? One might answer "the distance between A and B is the length of a straight line between those points". But that's not quite right. A straight line defines distance only in Cartesian space; in other geometries the straight line may be sub-optimal (longer than the shortest path) or not even available within the space (think of a chord cutting through the interior of a sphere), in which case its length is not the distance. So what is the distance? It's the length of the geodesic -- which is just a math term for the "shortest possible path". Aha! So now we're making progress - that means that I can travel at a speed (distance per time) slower than light but arrive sooner by reducing the distance! But of course we all knew that -- we find "short-cuts" to drive to work, often traveling on slow back-roads instead of fast superhighways yet still arriving sooner.
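The back-roads intuition is nothing more than arithmetic on speed = distance / time. A sketch with made-up numbers:

```python
# Two hypothetical routes between home and work (numbers invented):
highway = {"distance_km": 100, "speed_kmh": 100}   # straight line, fast
backroad = {"distance_km": 60, "speed_kmh": 80}    # shorter geodesic, slow

def travel_time_h(route):
    """Time in hours: distance divided by speed."""
    return route["distance_km"] / route["speed_kmh"]

print(travel_time_h(highway))   # 1.0 hour
print(travel_time_h(backroad))  # 0.75 hours: lower speed, earlier arrival
```

The backroad driver never exceeds 80 km/h yet beats the highway driver, exactly as a signal along a spatial short-cut could "beat" light without locally exceeding light speed.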

So how do these spatial short-cuts work? Well, there are all kinds of spatial short-cuts, because physicists have long known that spacetime is heavily warped, with many microfissures like quantum entanglements peppering the universe as well as large fissures made by gravity wells such as black holes, neutron stars, and wormholes. Physics of the very big (astrophysics) offers only a looking glass -- we can observe but not experiment -- whereas physics of the very small (quantum mechanics) prescribes experiments which are feasible to undertake. Specifically, the easiest way we can warp spacetime is through quantum entanglement -- which is far more than just a theory since Bell's paper shook the scientific community in 1964 with results which we will eventually get to in this email.

Okay, but what is quantum mechanics? Back in Einstein's day, it was centered on one notion: that a particle could be indeterminate -- that is, not only is its state unknown by you and me, but its state is unknown by even itself and the universe and, if we want to drag theology into this, God. Einstein famously exclaimed "God does not play dice!" to ridicule the notion of indeterminate states. Who were Einstein's intellectual nemeses? Niels Bohr and Werner Heisenberg, the two who came up with what became known as the Copenhagen interpretation while working together in Copenhagen, 1927. Essentially, Bohr and Heisenberg believed that sub-atomic particles were probabilistic and not deterministic. Einstein believed that probabilities were merely mathematical constructs on paper and that actual things in the universe were deterministic.

Unlike a purely philosophical disagreement, this one was scientific. If particles were determinate, they would behave as you would expect. If particles could be indeterminate, then some peculiar things can be done: particle X can be at A with 50% probability and at B with 50% probability -- which is a very different thing from saying X is at A 50% of the time and at B 50% of the time. To illustrate the difference, suppose I am (A) dialing my apartment number with my cell phone with 50% probability and I am (B) dialing my cell phone with my apartment number with 50% probability. There is just one (My Name – edited out), so you would expect either the apartment phone to ring or my cell phone to ring. However, that interpretation (Einstein's) would assume that (A) happens 50% of the time and (B) happens 50% of the time but never both. Under the Copenhagen interpretation, you would sometimes hear the apartment phone ring (A), you would sometimes hear the cell phone ring (B), and you would sometimes hear a busy signal (A+B)! That third possibility is highly important -- it means that both (A) and (B) occurred simultaneously and interacted with each other. How on earth can I get a busy signal by calling one of my phones from the other? You can see why there was a lot of debate - quantum mechanics does not sound reasonable. The scientific community was split.
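The mathematical heart of that difference is that quantum mechanics adds amplitudes before squaring, which produces an interference cross term that a classical 50/50 mixture lacks. A minimal numeric sketch (the relative phase of pi/3 is an arbitrary choice for illustration, not from the original):

```python
import cmath
import math

# Two possibilities A and B, each with probability |amplitude|^2 = 0.5
amp_A = cmath.rect(math.sqrt(0.5), 0.0)          # phase 0
amp_B = cmath.rect(math.sqrt(0.5), math.pi / 3)  # arbitrary relative phase

# Classical mixture (Einstein's reading): probabilities add, no cross term
p_mixture = abs(amp_A) ** 2 + abs(amp_B) ** 2    # = 1.0

# Quantum superposition: amplitudes add FIRST, then we square
# (left unnormalized here; only the extra cross term matters for the point)
p_superposed = abs(amp_A + amp_B) ** 2

cross_term = p_superposed - p_mixture  # 2*|A|*|B|*cos(pi/3) = 0.5
print(round(cross_term, 3))            # 0.5
```

That nonzero cross term is the "busy signal": a contribution that exists only because both possibilities are simultaneously present and interacting.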

All this was purely theoretical at the time (1920s and 1930s), but people were intrigued. So what is quantum entanglement? Well, "entanglement" was a term Schrodinger used during his debates with Einstein after the 1935 EPR paper (a paper written by Einstein, Podolsky and Rosen showing bizarre conclusions derived from quantum mechanics, a thinly veiled attempt at demonstrating the absurdity of such a view in a rational universe). Put briefly, an indeterminate state can be made determinate by entangling something with it. Schrodinger famously used his "cat in a box" thought experiment. We start with a single radioactive atom with a 1-week half-life, which means it is equally likely to decay or not decay within the first week. Second, we add an execution machine which releases poisonous gas, triggered by alpha/beta emission (radioactive decay). Third, we add a living cat. Fourth, we seal them all in one impenetrable box. The cat's fate is entangled with the fate of the radioactive atom. If the atom decays, the cat dies; else the cat lives. However, because of how the psi wave function works (the mathematical function calculating probabilities for indeterminate states), only the act of observing will collapse the wave function into a single determinate state, and only for the observer and no one else. The cat is observing the execution machine (through being alive or dead), and the execution machine is observing the radioactive atom (through being triggered or not). Therefore, the radioactive atom is in a determinate state for the machine and the cat. However, we the people on the outside are observing none of this, so for us the radioactive atom is in a quantum superposition of both states. Therefore, the cat has also entered quantum entanglement, because it observed the quantum particle yet was not itself observed. This means that the cat is also in quantum superposition, one involving both living and dead states.
If we observe the radioactive atom, we make determinate the state of the cat; similarly, if we observe the cat first, then we make determinate the state of the radioactive atom. This is entanglement.

The Copenhagen interpretation says that such a thing is possible. The classical approach, endorsed by Einstein, says such a thing is impossible -- that it cannot possibly be that only when the box is opened do the atom and cat decide to be either decayed and dead or non-decayed and alive. The classical approach suggests that this decision already happened: either it was fated that the atom decay and the cat die, or it was fated that the atom not decay and the cat remain alive. The classical approach could never accept that a cat dead for one entire day could be decided now, at the time of opening the box, to have died yesterday. The present affecting the past is considered counterfactual, and this was the crux of the EPR paradox showing how quantum mechanics is in direct contradiction with a rational universe.

As reasonable as the classical interpretation was, and as absurd as the Copenhagen interpretation, and quantum mechanics along with it, sounded, the mathematics was inescapable, and physicists had known since Thomas Young's double-slit experiment in 1801 that physics of the very small behaves in absurd ways. While the rift within the scientific community lingered, exacerbated by Einstein's obstinacy at pursuing a feud with quantum mechanics, many bright minds and leaders of the community flocked toward quantum mechanics. In many ways, Einstein's opposition was a bit hypocritical. His own theory of special relativity, and later general relativity, battled against classical Newtonian physics, and the new theories of quantum mechanics did not jeopardize relativity -- in fact, quantum mechanics and the theory of relativity can coexist side by side without contradiction. The dispute was academic, it was abstract, it was about the philosophical nature of reality. It was, that is, until 1964, when the Northern Irish physicist John Bell released a paper which thundered through the community, shaking classical beliefs down to their very core and showing that the "rational universe" envisaged by classical notions -- local realism -- was itself the illusion.

Specifically, Bell's theorem proved that local hidden variables cannot explain the phenomena observed at the quantum scale. It sounds like a benign statement, unless one realizes what local hidden variables are. In the example with Schrodinger's cat in a box, the classical approach would suggest that whether the atom decayed and the cat died was merely hidden, not indeterminate. Classical thinking states that this special impenetrable box merely hides what happened, but all the events still transpired and are not in an indeterminate state. Bell's theorem shows that, however reasonable such classical thinking is, it is wrong: local hidden variables do not work. The theorem is simple, and involves setting up an apparatus to empirically verify that our universe isn't a rational universe, one which could be explained merely by local hidden variables.

The remainder of this e-mail is devoted to summarizing Bell's theorem:

Bell's theorem begins: if local realism is true, then a distant actor's actions have no impact when measuring a local particle's spin. The phenomenon where two particles under quantum entanglement can have perfect correlation when measured at identical angles (let X be this angle) is a phenomenon that can be explained under the model of local realism by employing local hidden variables, as follows: A deterministic mapping could be programmed into these entangled particles at the time they were entangled, which is feasible because at the time they were entangled they were proximate to each other. The deterministic mapping, call it m, would map an angle, from 0 to 360 degrees, to spin, -1 for counter-clockwise and +1 for clockwise. Therefore, even if these entangled particles are now apart by great distances, one actor measuring one of the particles at angle X would see a spin m(X) and a distant actor measuring the other particle at angle X would also see spin m(X). Since m is a local property carried by each particle, nothing non-local affects the measurement.

While local realism seems to work at explaining entanglement, Bell's theorem devises a situation where it fails. Let Q and Q' be two angles the distant actor can measure at, and R and R' be two angles the local actor can measure at. Therefore, we need concern ourselves with only a reduced m, one which doesn't handle 0 to 360 degrees but instead only four angles, Q, Q', R, and R'. Since m is deterministic, there are only 16 possible mappings it could be -- e.g. m(Q,Q',R,R') could be (1,1,1,1) or (1,1,1,-1) or (1,-1,-1,1) et cetera. Let M be the set of these 16 possible mappings. Let s be the spin the local actor observes and let t be the spin the remote actor observes. We can decompose E[s*t|Q,R], the expected value for the product s*t for when angles Q and R are chosen but m is free to be anything in M, into "sum p(m)*m(Q)*m(R) for all m in M", where p(m) is the probability of m occurring, which may be 1/16 if all 16 mappings in M are equally likely to occur, but we place no such restriction on the distribution.

Next, let W = E[s*t|Q,R] + E[s*t|Q,R'] + E[s*t|Q',R] - E[s*t|Q',R'], which decomposes into "sum p(m)*(m(Q)*m(R) + m(Q)*m(R') + m(Q')*m(R) - m(Q')*m(R')) for all m in M". We know that any expression S of the form qr + qr' + q'r - q'r', where q, q', r, and r' are real numbers in the closed interval [-1,1], must be a real number in the closed interval [-2,2]. (Explanation: qr + qr' + q'r - q'r' is linear in all four variables; in other words, the partial derivative of S with respect to any single variable is an expression lacking that same variable, so S must take on its maximum and minimum values at the corners of its domain. Thus, some inputs q, q', r, and r', each in {-1,+1}, must yield the expression's min/max bounds. Either employing further algebra or applying brute force on all 16 possible corner inputs, we find -2 and 2 are the bounds.) Therefore, W equals "sum p(m)*S_m for all m in M", where each S_m is an unknown in the interval [-2,2]. Therefore, W is in the interval [-2,2]. (Explanation: the p(m) are weights that sum to 1, and a weighted average of values in an interval must yield a result in the same interval.) This inequality, -2 ≤ E[s*t|Q,R] + E[s*t|Q,R'] + E[s*t|Q',R] - E[s*t|Q',R'] ≤ 2, is the BCHSH inequality, which gives us a formal mathematical constraint if one believes spin is determined by local hidden variables and not by a distant actor.
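The 16-mapping argument is small enough to check by brute force. This sketch enumerates every deterministic m and evaluates S = m(Q)m(R) + m(Q)m(R') + m(Q')m(R) - m(Q')m(R'):

```python
from itertools import product

angles = ["Q", "Qp", "R", "Rp"]  # Qp, Rp stand in for Q', R'
S_values = []
for spins in product([-1, 1], repeat=4):   # all 16 deterministic mappings m
    m = dict(zip(angles, spins))
    S = (m["Q"] * m["R"] + m["Q"] * m["Rp"]
         + m["Qp"] * m["R"] - m["Qp"] * m["Rp"])
    S_values.append(S)

print(sorted(set(S_values)))  # [-2, 2]
```

Every deterministic mapping yields S of exactly -2 or +2, so any probability-weighted average over the 16 mappings, i.e. W, is pinned inside [-2, 2].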

Bell's theorem concludes by finding a second constraint for W, this time using quantum theory. The wave function in quantum mechanics simplifies E[s*t|A,B] to cos(2B-2A) for any real angles A and B. Letting Q = 2pi/8, Q' = 0, R = pi/8, and R' = 3pi/8, W simplifies to cos(-pi/4) + cos(pi/4) + cos(pi/4) - cos(3pi/4), which is approx. 0.707 + 0.707 + 0.707 - (-0.707) = 4 * 0.707 = 2.828, thereby violating BCHSH. This is perfect: it places contradictory mathematical constraints on W, within [-2,2] with local hidden variables but 2.828 with the wave function.
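Plugging those four angles into the cos(2B - 2A) correlation is easy to verify numerically:

```python
import math

def E(a, b):
    """Quantum-mechanical correlation E[s*t|a,b] = cos(2b - 2a)."""
    return math.cos(2 * b - 2 * a)

Q, Qp = 2 * math.pi / 8, 0.0       # Qp, Rp stand in for Q', R'
R, Rp = math.pi / 8, 3 * math.pi / 8

W = E(Q, R) + E(Q, Rp) + E(Qp, R) - E(Qp, Rp)
print(round(W, 3))  # 2.828, i.e. 2*sqrt(2), outside the BCHSH interval [-2, 2]
```

The value 2*sqrt(2) is the maximum quantum mechanics allows for W, and it sits well outside what any local-hidden-variable model can produce.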

Due to the law of large numbers, the running average, obtained by repeatedly rerunning experiments, rapidly converges to the expected value, meaning we can empirically measure E[s*t|A,B] for any angles A and B at arbitrarily high confidence levels. Current empirical evidence shows with high statistical confidence that the expected value is above 2.7, easily violating BCHSH and approaching 2.828 predicted by the wave function.
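That convergence can be sketched by simulating draws of s*t whose expectation is cos(2B - 2A). The sampling rule P(s*t = +1) = (1 + cos(2B - 2A))/2 is a standard trick for generating a ±1 variable with a prescribed mean, not something from the original e-mail:

```python
import math
import random

def sample_product(a, b, rng):
    """Draw one s*t in {-1, +1} whose expectation is cos(2b - 2a)."""
    c = math.cos(2 * b - 2 * a)
    return 1 if rng.random() < (1 + c) / 2 else -1

rng = random.Random(42)              # fixed seed for reproducibility
A, B = 0.0, math.pi / 8              # E[s*t] should be cos(pi/4) ~ 0.707
n = 200_000
running = sum(sample_product(A, B, rng) for _ in range(n)) / n
print(running)                        # close to 0.707 after 200,000 trials
```

By the law of large numbers the running average tightens around cos(pi/4) as n grows, which is exactly how laboratories pin down each E[s*t|A,B] term at high confidence.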

There is a caveat about Bell's theorem I had omitted from the above e-mail. A hidden variable could exist at the time of the universe's creation. Let's call this variable U, for universal simulator. If everything in the universe were proximate at this time of creation, then all particles may share this variable U, and any new particle may copy U whenever it comes across a U-carrying particle. Thus, all particles with high statistical confidence are U-carriers. Now, since U contains all information from the time of the universe's creation, then, if everything in our universe is pre-determined, we can know seemingly non-local information by merely querying the local hidden variable U, which, as a universal simulator, knows everything about our universe if events are pre-determined from initial conditions. Therefore, two entangled particles may "know" about each other from U and conspire to yield strange results.

Rather than a gaping loophole, this is a minor caveat because such a universe that conspires so maliciously to deceive us is akin to the ones imagined by philosophers of antiquity when they questioned whether the universe may have arisen only now, with every particle perfectly in place with the right momentum to trick us into believing there was any history at all when none existed. Were such celestial malice present, it would be impervious to philosophy, science, and studies in general. Therefore, by reductio ad absurdum, if we wish to study, we must study the alternative, the scenario in which the universe is not so malicious.

Saturday, February 07, 2009

Re: Imminent U.S. Attack On Iran?

Continuing on the theme of dredging up old emails in lieu of posting anything new, I found this reply I gave to someone who had forwarded to me a widely circulating email. The circular suggested an imminent U.S. attack on Iran while hypothesizing false-flag operations would precipitate the event and referencing a film called "Terrorstorm". She asked what I thought, if investments need to be protected, and if I had heard of the film. My reply, posted Wed, Nov 29, 2006 at 1:14PM is as follows:

It's unlikely that anything large-scale is imminent. Moreover, it's extremely dubious that any administration can plan and execute self-injury to its own naval vessel.

This is not to say that were accidental injury to occur that the event would not be propagandized mercilessly in an opportunistic manner. The Spanish-American War of 1898 was precipitated in large part by the sinking of the USS Maine on February 15, 1898. While the cause of the sinking was deemed inconclusive both by independent experts of that time and by those today, the U.S. Govt., under strained relations with Spain, decided to spin the event as a deliberate attack by Spain. The rest is history. Current theories as to what caused the USS Maine to sink agree that the ammunition magazines in the ship exploded, destroying the vessel. The theories diverge from there, one holding that the vessel detonated a Spanish mine which ignited the magazines, another that the high temperatures of the coal engine ignited the magazines. Despite the theories, it is widely agreed that Spain desperately wanted to avoid any confrontation with the U.S. because it was more than aware of its naval inferiority to the newly industrialized U.S. Why a nation aware of the certain defeat it would suffer in a war with the U.S. would precipitate one stands against reason; yet, in 1898 the U.S. Govt. played Spain as the aggressor and broadcast that view across all distribution channels.

One thing to note, however, is that the U.S. Govt would never explicitly execute injury to one of its own vessels because the gravity of that act would be so earth-shattering that it could simply never be kept under wraps. Too many individuals would need to be involved and the risk exposure too high. Passive negligence is often the route to achieve casus belli, as some suspect Roosevelt employed with Pearl Harbor. The passive negligence Roosevelt is suspected of is in concealing and not relaying in a timely manner information pertaining to an imminent strike on Pearl Harbor, in the hopes that a successful Japanese strike, even if on a remote outpost in the Pacific, would engage the American public in furor. Of course, as witnessed with the slow information pipeline under the Clinton administration regarding the 1998 Pokhran-II tests, it seems reasonable that inefficiency and lack of readiness is caused by bureaucracy and not by malicious intent.

All that said, is an invasion of Iran imminent? The option of passive negligence is always available and it would inevitably lead to a confrontation of some kind with an enemy of the Govt.'s choosing, presumably Iran. However, I do not think our current administration would exercise such an option for two reasons.

The first reason is that the military command structure has built-in shortcuts to bypass sending every decision to Bush. If reinforcements or some other defensive precaution is needed, the military can carry out the needed tasks without involving Bush. The only time the president is necessary is in transforming intelligence provided by the CIA into action carried out by the military, and the president can choose to delay this process as some claim Roosevelt had. While the CIA may provide vital information which Bush has the ability to be passively negligent of, the strong presence of the U.S. military forces in the middle-east supply the military with a self-sufficient source of intelligence, effectively diminishing the utility of the CIA and removing opportunities for Bush to be negligent.

The second reason is that Bush's political capital is extremely low, both with allies and with ordinary folk, and I doubt he would successfully manage to convince everyone that stretching our military even thinner than it is already stretched is a good idea.

Of course, it's likely that Ahmadinejad's advisers have made similar analyses and will try to push the boundaries of what the U.S. will allow them to do. This could spiral into brinkmanship. As of yet, there hasn't been any serious news of Iran testing the patience of the U.S. Without properly testing our reactions to various minor infractions of internationally acceptable behavior under the current settings, Iran would not be able to triangulate a clear enough landscape of permissibility. Thus, even if they are aware that the landscape of permissibility has expanded due to an emaciated White House and depleted spare military power, they are in the dark as to the exact boundaries of this new permissive landscape. So long as they are uncertain what our reactions will be, Iran will not take any path of major consequence. Of course, I could be over-estimating Iran's prudence and they may decide that venturing into an unknown landscape is of high enough value to merit the risk.

Still, should investments be protected? Of course. But the advice is no different than it always has been: a well-balanced portfolio. Equity, currency baskets, funds, etc. I don't think anything in particular needs to be done whether or not we invade Iran. In the extreme case, the Govt. might issue higher-interest bonds to summon additional, immediate spending power for war-financing, thus reducing the price of existing bonds currently trading at lower yields and reducing the price of mediocre equity. The impact on stocks would not be as dire as one would think. If war-financing is conducted through the issuance of bonds, the US Dollar will weaken further, having two effects: firstly, better exports to Europe; secondly, cheaper labor and higher inflation. Traditionally, such devaluation has a positive effect on the economy because the mood of the consumer revolves around nominal wages. Thus, if they make 45,000 now and 52,000 next year, even if purchasing power has diminished, they tend to be giddy with joy at, yes, earning less, but receiving more currency units. This consumer confidence acts as a steroid, bolstering demand for almost every product, and helping stocks perform.
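The nominal-versus-real distinction in that last point is simple arithmetic. Here is a sketch using the text's 45,000-to-52,000 example, with the 20% inflation figure invented purely for illustration:

```python
# Hypothetical wages from the text: 45,000 this year, 52,000 next year.
wage_now, wage_next = 45_000, 52_000
nominal_raise = wage_next / wage_now - 1   # ~ +15.6% more currency units

# Assume, purely for illustration, that war-financing devaluation pushes
# inflation to 20% over the same year.
inflation = 0.20
real_change = (1 + nominal_raise) / (1 + inflation) - 1  # ~ -3.7%

print(f"nominal: {nominal_raise:+.1%}, real: {real_change:+.1%}")
```

Under these assumed numbers the worker is giddy about a 15.6% raise while actually losing roughly 3.7% of purchasing power, which is the money-illusion effect driving the consumer-confidence "steroid" described above.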

No, I haven't seen "Terrorstorm", but I now quickly googled and read reviews about it. It seems to be done with the theme that the govt manufactures a perpetual state-of-war in order to subdue domestic freedoms and attain maximum centralized power in order to perpetuate status quo and concentrate wealth to an elite class. It's a bit like the elected chancellor in Star Wars who manufactures an enemy to help receive enough votes to assume dictatorial control, legally, so as to manage the war effectively. "Terrorstorm" seems to be a mix of 1984 and Machiavelli's The Prince, done with a collage of news events. I think it assumes a higher degree of cohesion and capability than exists in reality, even if its assumption that greed and self-interest fuel most administrations is probably correct. However, even if most top-officials are corrupt and greedy, I think it would be disingenuous to assume that the corrupt and greedy are mostly top-officials. There are plenty of corrupt and greedy people acting purely on self-interest at all levels, from high-school dropouts at the lower rungs of the UCLA campus police to various mayors of small, insignificant towns.

It's not disconcerting that greed and corruption are pervasive, since that's an unfortunate reality; what is disconcerting is that the central govt., which is best suited to "policing the police" is instead distracted with imperialistic dreams abroad and is inattentive at home, or, worse, an enabler in granting an oversupply of power to domestic bureaus and state law-enforcement with little to no oversight. While the '60s showed a central govt. willing to act for civil liberties by stepping in with U.S. Marshals and other federal forces to ensure that black students were permitted into white schools in the South, despite opposition from the Southern governors and local police, it now no longer seems conceivable that the new central govt. would use forces to protect citizens against rogue police. The path currently taken seems to steal focus away from domestic abuse and onto foreign affairs, leaving local forces with a carte blanche in using newly conferred powers. With the evaporation of habeas corpus and other guarantees to prevent abuse, there's less and less apart from per-capita income that differentiates the U.S. from a stereotypical non-democratic regime.

(My name)

Thursday, February 05, 2009

Re: high school graduation speech

Walking down memory lane with my gmail archive, I found an interesting email I sent in response to Paul Graham's graduation speech. I had emailed him on Fri, Jan 21, 2005 at 3:57 PM, the following:

Hello Paul,

I hope you're not being inundated with slashdot readers writing their feedback. I'm certainly not lessening the effect by writing, but I feel I should write since I enjoyed reading your graduation speech and am as dismayed as you that it couldn't be delivered. I'm not in high school anymore, but from mentoring high school students in my free time I know your advice and reality-unveiling anecdotes are good. I've even forwarded your webpage to my younger cousins, who are themselves in college but still very curious about life, the division between childhood and adulthood, and the "point of life" - to stay upwind, as you put it.

Apart from my compliments, I also have a suggestion, which I'll get to after a short anecdote of my own. I was very mathematically inclined as a kid and delved into software more than most I knew. All along, I had only one friend whom I could learn from, and he equally learned from me. Even my father, a software developer, was uninterested in the murkier topics of computer science theory, such as lambda calculus, frequency analysis, and Feistel networks, as the practicing world cared more about programming libraries. Needless to say, it was a difficult journey to learn, and a lonely one at that. I knew college would be better, but that's eons away when you're in eighth grade. I stumbled upon Linux, open source, and a community working on things without monetary purpose near the end of my tenth grade. My first email communique outside of my high school was to Andrew Tridgell, then the sole samba developer. I had my vague notions of TCP and UDP and OS datagram frames, but in one email response he clarified questions I would've spent the next year analyzing. Instantly, solitary learning, where I learned from my own mistakes like an ape, became human learning, where I stood on civilization and learned from the past mistakes of all. It was incredible, and I wish I hadn't had to stumble upon that revelation.

I'm not saying open source is the answer to everyone's grade-school intellectual doldrums, even if it was my answer; I'm saying that a useful community will inevitably exist outside college, and high school students impatient for that college life can tap into it earlier. The key element is people. Most students, including both the academic ones trying to learn and the non-academic ones trying to be popular, will benefit from the idea that there are more people to know and learn from than those in their own school. I know of far too many students who never communicate in email or chat beyond their school classmates, and parents unfortunately find that comforting. The concept I couldn't grasp was how many people 6 billion is, and yet how only a few dozen of them would be interested in Samba in 1995. As a ratio, it's astounding. However, these non-popular -- distinctly separate from the unpopular -- projects are bastions of clever, lonely people, the perfect type for a student with little to offer besides attention and a lot to gain, such as knowledge.

Ergo, my suggestion. I believe it would be highly useful to impress upon students that they should look beyond their region - that while their home town may be the only place in the world to know certain inside jokes and terms, there are things far grander in the world, fragmented enough to make the individual teams small, close-knit, and meaningful. While being trendy and knowing the latest, local, and popular things can make one feel good about oneself, the eclectic, esoteric, and historical things are longer-lasting benefits which compound in value over time. To be eclectic, however, one cannot be content with what is provided. I was very distrusting of supposed quality, and knew the world contained a spectrum of quality far greater than I could contemplate. Far too many people cling to the first anchorman, reporter, or developer they meet if they're interested in that subject. I suggest exploring and discerning whom to emulate.

Being mindful of history is necessary in order to be eclectic. Old news tends to be overlooked, or be seen as unprofitable, making it somewhat immune to the noise of limelight seekers' premature ideas and to marketing propaganda. There's money in making software, not in writing a taxonomy of comparisons between, or meticulously documenting the concepts wielded by, different software. There are even taxonomies of taxonomies, each level becoming less popular and less profitable and taking longer to complete, thereby missing the slim window of public interest. However, by sitting in 1995 and reading TCP/IP lessons from 1994 about lessons from 1991, and so on, dating back to the release of the Morris worm out of Cornell in 1988, one could learn a lot more than if one had only read some arbitrary book circa 1980 focusing purely on unproven and trendy '80s paradigms. I've given the following advice to many: we should look at the thirty-year topics that are still alive today, even if barely, and trace the discussion back to their origins. By learning from the collective discussion, any contemporary incident can be seen through the lens of a learned person, yielding an amalgam of concepts that survived a Darwinian massacre of preceding years' ideas. Once the lens has been shaped, it serves as a crude tool to craft finer tools. Apply that lens to another, more recent, incident, and repeat the process until you're looking at present-day situations with an extremely well-pruned, eclectic corpus of knowledge with which to interpret anything you choose. This strategy works tremendously well at solving the problem of "noise" -- too much nonsense posing as quality work; and, interestingly, this same approach is taken by Bayesian spam filtering.

Sorry for writing as much as I have - it was originally meant to be two or three short paragraphs. Pardon me for any typographical errors.

(My name)

Tuesday, April 22, 2008

Fun with Number Theory: Guessing Your Age

  1. Take the last digit of your age, multiply it by 2, then for every decade you've lived add 1 to it. Color this result Blue in your mind.
  2. Think of your favorite digit, any digit at all, multiply it by 11.
  3. Add the last digit of your age to the previous result.
  4. For every decade you've lived subtract 1 from the previous result. This is your Red number.
Alright, now tell me your Blue and Red numbers and I will guess your age!
Blue Number:
Red Number:
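For readers who want to see why the trick works, here is a sketch in Python of one way the guesser can decode the answer (my own working, since the post itself keeps the method secret). Writing the age as 10d + u, the steps above give Blue = 2u + d and Red = 11f + u - d for a favorite digit f, so Blue + Red = 3u + 11f; reducing mod 11 and multiplying by 4 (the inverse of 3 mod 11) recovers u, and then d = Blue - 2u:

```python
def encode(age, favorite_digit):
    """Simulate the player's four steps for a given age and favorite digit."""
    u, d = age % 10, age // 10            # last digit of age, decades lived
    blue = 2 * u + d                      # step 1
    red = 11 * favorite_digit + u - d     # steps 2-4
    return blue, red

def guess_age(blue, red):
    """Recover the age from Blue and Red alone.

    blue + red = 3u + 11f, so (blue + red) mod 11 == 3u mod 11,
    and since 4 * 3 == 12 == 1 (mod 11), multiplying by 4 isolates u.
    """
    u = (4 * (blue + red)) % 11           # last digit of age
    d = blue - 2 * u                      # decades, from Blue = 2u + d
    return 10 * d + u

# The favorite digit cancels out for every age and every digit choice:
for age in range(110):
    for f in range(10):
        assert guess_age(*encode(age, f)) == age
```

Note that the favorite digit f never needs to be known; it only contributes a multiple of 11, which vanishes in the mod-11 step.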

© 2008 by Thoreaulylazy. All Rights Reserved.


Monday, March 10, 2008

Astrology in the Modern World

Below is my comment left on a Slashdot article regarding astrology. I was replying to another poster who wrote something along the lines of "astrology is 100% wrong."
I think you meant to say astrology is 50% wrong, because if it were 100% wrong, it would have perfect anti-correlation (akin to scoring a perfect zero on a T/F test, which is just as difficult as scoring a perfect 100). If astrology is 50% wrong, it is therefore 50% right, and depending on a person's brain chemistry, happy memories may get weighted more than unhappy memories, so the weighted average of astrology "working" can be significantly higher than 50% - assuming a person who adheres to astrology derives happiness from when it is correct.

In fact, for such a person, whose happy memories are weighted more than unhappy memories, any catalyst for increased variance will lead to a happier life, including a coin toss on whether to drive or walk to work. If astrology is a method to higher variance in the day-to-day experiences of its adherents, then so be it; it results in a happier life among those humans who benefit from high variance. Conversely, for those whose brain chemistry weights unhappy memories more than happy ones, lowered variance in day-to-day experience is the best method for maximizing happiness.

The world needs both kinds of people: those who enjoy variance and are willing to eat a mysterious berry, be it a sweet, tasty berry or a bitter, sour berry, and those who hate variance and will only eat the safe, known berry. The risk-takers help society learn about new, tasty berries, and the risk-averse help continue the species in case the berries were poisonous after all. Astrology is merely a shrub blooming random berries, half of which are sweet (+1 correlation), half of which are bitter (-1 correlation).
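The variance argument can be put into a small simulation. The sketch below is my own illustration of the comment's reasoning (the specific weights and the ±1 outcome values are assumptions, not from the comment): a person who weights happy memories twice as heavily as unhappy ones recalls a random stream of good and bad days as net positive, while a zero-variance stream of neutral days stays at exactly zero.

```python
import random

def recalled_happiness(outcomes, happy_weight=2.0, unhappy_weight=1.0):
    """Weighted average of remembered experiences: positive outcomes
    (happy memories) count more than the rest (unhappy or neutral)."""
    total = weight = 0.0
    for x in outcomes:
        w = happy_weight if x > 0 else unhappy_weight
        total += w * x
        weight += w
    return total / weight

random.seed(1)
n = 100_000
# High-variance life: each day the prediction "works" (+1) or doesn't (-1)
varied = [random.choice([1, -1]) for _ in range(n)]
# Low-variance life: every day is a neutral 0
flat = [0] * n

print(recalled_happiness(varied))  # ≈ (2 - 1) / (2 + 1) = 1/3, net positive
print(recalled_happiness(flat))    # exactly 0.0
```

With the weights reversed (unhappy memories weighted more), the same ±1 stream is recalled as net negative, matching the comment's point that low variance is the better strategy for that brain chemistry.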