
The ugly secrets behind magic tricks

A recent Radiolab episode described the magic act of the Australian couple Syd and Lesley Piddington. The two claimed to be mentalists and had a radio show on which they would communicate information “telepathically” even when separated by hundreds of miles.

The entire episode is great (as usual), and I recommend it. One thing that I found interesting is Penn Jillette’s comment about the secrets behind great magic tricks: they are invariably ugly and boring. Jillette says that when you learn how a trick is done, there is no “A-ha!” moment – compared to experiencing the trick, the revelations are mundane, uninteresting, and disappointing.

To our ancestors, most of what happened around them was magical. Imagine not knowing why it rains, what clouds are made of, or why you get sick. Science has often taken away the mystery. Sometimes what it reveals is awe-inspiring – the fact that our Sun is one of countless others in an unimaginably large universe is more mind-blowing than if we lived on a disk riding on the back of a turtle.

However, sometimes scientific revelations can turn the magical into the mundane. For instance, the moving rocks of Death Valley were somehow more interesting before we knew that they just skated around on ice. An explanation might have felt different if it involved something unexpected, something outside our daily experience – strong magnetic fields, perhaps, or aliens with hockey sticks. But it was partly the mystery itself, not knowing the secret, that gave the moving rocks their special aura. Once we know how the magic is done, something is lost.

The main goal of science is to understand how the world works. Some of the time what we find will be awe-inspiring. At other times, the explanations will be mundane, irreducibly complicated, and even ugly. We certainly crave the first kind. But if what matters is finding out how the world works, should we put such a high value on aesthetics?

Indeed, I fear that some of the great unanswered questions of science will have answers that we will find unsatisfying. I am reading the book Consciousness and the Social Brain by Michael Graziano (here is a shorter post about it) – roughly, the idea is that awareness is the result of the brain’s model of what it is paying attention to. We need to have a model of what we ourselves, as well as others, attend to. Awareness is just an abstract, communicable representation of the act of paying attention. I am not sure that this is right, but Graziano offers pretty good arguments that it is plausible.

Even if this theory is not right, it is quite possible that we will ultimately find the answer to the question of consciousness disappointing. It is arguably the greatest magic trick of all, the one that lets us experience all the rest of the magic around us. And like the magic tricks that Jillette describes, the revelation may ultimately be ugly and unsatisfying.

The Friendship Paradox

It seems to be in our nature to compare our accomplishments to those of others. Teenagers worry about popularity. Later in life we compare our success to that of our friends. But there is a mathematical reason why we usually come up short!

You may have noticed that your friends seem to have more friends than you do. And you are right – on average, your friends are more popular than you are. This is true on Facebook and in real life, and it is a consequence of what statisticians call biased sampling – you are more likely to befriend an outgoing person who is easy to get along with than a recluse who hardly talks to anyone. Your typical friends make friends easily. You therefore do not form friendships at random – statisticians would say that you are taking a biased sample of society.

Biased sampling is why playing a game of poker against strangers in a casino is usually a bad idea – you are more likely to meet an opponent who spends a lot of time playing the game, rather than a beginner or a complete amateur.

More surprisingly, not only are your friends more popular, but on average they are also more successful. The reason is that people with more friends tend to be more successful. We already established that, because of biased sampling, your friends have more friends than you do. If more friends means more success, it follows that your friends are on average more successful than you are. Hence comparing yourself to your friends is not a good idea – you are comparing yourself against a biased sample of society that is likely doing better than you.
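If you want to see the biased sampling at work, here is a minimal simulation of the friendship paradox (my own illustration, not something from the references below). It builds a hypothetical scale-free social network with the Python networkx library, and compares the average number of friends people have with the average number of friends their friends have:

    # Friendship paradox on a hypothetical random social network.
    import random
    import networkx as nx

    random.seed(1)
    G = nx.barabasi_albert_graph(n=10_000, m=3, seed=1)  # toy "society"

    # Average number of friends, taken over everyone.
    mean_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()

    # Average number of friends of a randomly chosen friend of a random person.
    friend_degrees = []
    for _ in range(100_000):
        person = random.randrange(G.number_of_nodes())
        friend = random.choice(list(G.neighbors(person)))
        friend_degrees.append(G.degree(friend))
    mean_friend_degree = sum(friend_degrees) / len(friend_degrees)

    print(f"average number of friends:    {mean_degree:.1f}")
    print(f"average friends of a friend:  {mean_friend_degree:.1f}")

The second number comes out reliably larger: popular people are overrepresented among friends, which is exactly the biased sampling described above.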

OK, you may say, but is this useful? Indeed it can be – select a group of students at a university, and ask each to give you the names of a few of their friends. On average they will name people who are more popular, and who therefore have more social contacts, than the average student. If you are interested in hearing the latest rumor, you would be well advised to go to this group of named friends. But since these friends are more popular, they also have more interactions with others, and may be among the first to get sick in an epidemic.

Researchers have confirmed that this is the case: they asked random students to name friends, and found that during a flu outbreak the named group got sick about two weeks earlier than the average student. To get an early warning of an epidemic, just pick people at random, ask them to name friends, and watch whether those friends get sick. You can do the same if you are trying to spot a new trend.
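Here is a toy version of that experiment (an illustration of the idea, not the protocol of the actual study). A simple infection spreads over the same kind of hypothetical network, and we compare when a random sample of people gets sick with when the friends they name get sick:

    # "Friends as sensors": a toy susceptible-infected epidemic on a network.
    import random
    import statistics
    import networkx as nx

    random.seed(2)
    G = nx.barabasi_albert_graph(n=5_000, m=3, seed=2)

    # Each step, every infected node infects each susceptible neighbor with
    # probability p. Record the step at which each node gets sick.
    p = 0.05
    infected_at = {random.randrange(G.number_of_nodes()): 0}  # patient zero
    step = 0
    while len(infected_at) < G.number_of_nodes():
        step += 1
        newly_infected = set()
        for node in list(infected_at):
            for nbr in G.neighbors(node):
                if nbr not in infected_at and random.random() < p:
                    newly_infected.add(nbr)
        for nbr in newly_infected:
            infected_at[nbr] = step

    # A random sample of people, and one friend named by each of them.
    sample = random.sample(range(G.number_of_nodes()), 500)
    friends = [random.choice(list(G.neighbors(n))) for n in sample]

    print("median infection step, random sample:", statistics.median(infected_at[n] for n in sample))
    print("median infection step, named friends:", statistics.median(infected_at[n] for n in friends))

The friend group is consistently infected earlier, for the same reason it is more popular.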

So mathematics tells us something valuable about our friends: comparing our accomplishments to theirs is likely to leave us depressed. Instead, listen to your friends if you want to hear about a good place to eat, what concert to go to, or an interesting new technology. Your friends will be able to tell you about these better than the average person.

Some references: 

Popular articles about the friendship paradox can be found here. Here is a more detailed, yet very easy-to-follow, discussion by the inimitable Steve Strogatz.

The paradox was originally described by Scott L. Feld in  “Why your friends have more friends than you do,” American Journal of Sociology, Vol. 96:6, pp. 1464–1477 (1991).

The generalized friendship paradox – the observation that your friends are on average more successful than you are – is described here. I have taken some liberties with the term success – the effect has been demonstrated, for example, for the number of co-authors and citations of scientific papers, and for the number of followers on Twitter. If these count as “success” – and in scientific circles they do – then what I have said is strictly true. It is likely, however, that the observation extends to other measures of success as well.

The article that describes how friends of friends can be used to track outbreaks of disease is Christakis NA, Fowler JH (2010) Social Network Sensors for Early Detection of Contagious Outbreaks. PLoS ONE 5(9): e12948. doi:10.1371/journal.pone.0012948. A follow-up is Garcia-Herranz M, Moro E, Cebrian M, Christakis NA, Fowler JH (2014) Using Friends as Sensors to Detect Global-Scale Contagious Outbreaks. PLoS ONE 9(4): e92413.

There is a difference between the mean and the median of the number of friends, which I did not get into. It is described in more detail here, along with the mechanism behind the generalized friendship paradox.

The power of cities

In the 1930s the Swiss-born biologist Max Kleiber studied how much energy different animals expend at rest, and noticed something curious. A human weighs about 10 times more than a cat. But rather than expending 10 times the energy of a resting tabby, we expend only 6 times as much. This number is not arbitrary: a cow is about 10 times heavier than a human, and also expends about 6 times the energy of a human.

Kleiber was the first to notice this regularity: he showed that energy expenditure follows a 3/4 power law. What this means is that if you double the size of an animal, it will use about 2^(3/4), or roughly 1.7, times as much energy. If you increase the size tenfold, it will use about 10^(3/4), or roughly 5.6, times as much energy. Amazingly, organisms from bacteria to whales follow this law.

Power laws are all around us: if you add twice the amount of salt, a dish will not taste twice as salty. Rather, it will taste about 2^1.4, or roughly 2.6, times as salty. A star twice the mass of our Sun will be about 10 times as bright. There are many other examples, and surprisingly even human constructs behave similarly. Cities are particularly fascinating: if the size of a city doubles, we see more than a doubling of the number of patents, inventors and artists, and of the amount of time we spend in traffic. All of these quantities follow power laws.

On the other hand, some quantities grow more slowly – if a city doubles in size, we use less than twice the gasoline or electricity – bigger cities are more efficient. These may be among the reasons why cities grow. But all is not rosy – unfortunately, the number of crimes and the number of cases of disease more than double in a city twice as large.
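To make the arithmetic of these claims concrete, here is a short back-of-the-envelope sketch in Python. The exponents for animals, salt and stars are the ones quoted above; the city exponents (roughly 1.15 for superlinear quantities such as patents, and roughly 0.85 for sublinear ones such as fuel use) are approximate values reported in the scaling literature, not numbers from this post:

    # A quantity that scales as Y ~ X**beta changes by a factor of r**beta
    # when X changes by a factor of r.
    def scale_factor(r: float, beta: float) -> float:
        return r ** beta

    # Kleiber's law: metabolic rate ~ mass**(3/4).
    print(f"double the animal: {scale_factor(2, 0.75):.2f}x the energy")    # ~1.68
    print(f"10x the animal:    {scale_factor(10, 0.75):.2f}x the energy")   # ~5.62

    # Stevens' law for saltiness: perceived intensity ~ concentration**1.4.
    print(f"double the salt:   {scale_factor(2, 1.4):.2f}x as salty")       # ~2.64

    # Luminosity of Sun-like stars: brightness ~ mass**3.5.
    print(f"double the star:   {scale_factor(2, 3.5):.2f}x as bright")      # ~11.3

    # Cities: superlinear (patents) ~1.15, sublinear (fuel use) ~0.85.
    print(f"double the city:   {scale_factor(2, 1.15):.2f}x the patents")   # ~2.22
    print(f"double the city:   {scale_factor(2, 0.85):.2f}x the fuel use")  # ~1.80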

We do not yet know exactly why quantities as different as a city’s crime rate, road density, and number of inventors behave so predictably. But scientists have plausible theories: it is possible that ideas, information and inspiration behave like diseases, and spread more easily when populations are larger and denser. The larger the city, the more contacts we have, and the higher the chance that we will hear the latest important news and insights, or learn about a good job opportunity.

More than half the world’s population now lives in urban areas. Can we reap the benefits of living in a city – the higher wages and energy efficiency – without the disproportionate increase in crime, pollution and disease? The city of Zürich in Kleiber’s native Switzerland suggests that this may be possible. Zürich has grown tremendously in the last 20 years, but proper planning has kept traffic reasonable and crime low. The very existence of laws that govern how cities behave gives me hope that we will also come to understand the mechanisms behind them. This will allow urban planners and administrators to avoid the mistakes of the past, and to work with physicists and mathematicians to help cities reach their full potential.

References and notes:

Here is a nice article about Kleiber’s Law (there are many other good ones, easy to find with Google). A reason why it may hold was proposed in the 1990s. However, that explanation relies on the fractal geometry of the circulatory system, while Kleiber’s Law seems to extend to organisms that do not have one. The mechanisms behind the law are therefore still under debate.

For a discussion of power laws and perception, see this Wikipedia entry on Stevens’ power law. The laws here are a bit controversial, because quantifying subjective experiences is difficult. More information about how luminosity scales with the mass of stars is here.

Here are some references (not complete) on how to explain power-law scaling in cities. Arbesman, Kleinberg and Strogatz assume that the network of human contacts has a hierarchical, self-similar (fractal) structure. Under certain conditions, as a city grows, the increased number of contacts can lead to power-law growth of the overall benefit. However, the assumption that interactions are hierarchical may be too strong. It could simply be the increase in density that facilitates the interchange of ideas and information, as explained here and reviewed here. Luís Bettencourt’s explanation develops this idea, but is also more complete.

A couple more interesting short references: a call for a science of cities (polisology?) and an overview of why innovation thrives in cities.

Luís Bettencourt and Geoffrey West also give a nice review of the statistical findings and how they could be used. Unfortunately, it is behind a paywall at Nature.

Here is another take on city complexity, and here is yet another.

New paper on circadian clocks

I have recently worked with Jae Kyoung Kim, Zack Kilpatrick, and Matt Bennett on the problem of synchronization in circadian clocks. The paper just came out in the Biophysical Journal. Suppose you take a bunch of cells that oscillate with slightly different frequencies. If you couple the cells in the right way, they will tend to synchronize, and hence oscillate with a single frequency. However, it could be that the fast oscillators pull the slower ones along, so that the synchronized population speeds up. Or the slower oscillators could drag the faster ones down, so that the population slows down after coupling. How can we make sure that the population does neither, and oscillates at the average frequency of the uncoupled population?
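A generic way to see the dichotomy is the classic Kuramoto phase model – a caricature of coupled oscillators, and emphatically not the biochemical model in our paper. With the standard antisymmetric sine coupling, the pulls of fast and slow oscillators cancel exactly, and the locked population rotates at the mean of the natural frequencies:

    # Kuramoto model: d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    import numpy as np

    rng = np.random.default_rng(0)
    N, K, dt = 100, 2.0, 0.01
    omega = rng.normal(1.0, 0.1, N)        # slightly different natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases

    def drift(theta):
        return omega + K * np.sin(theta[None, :] - theta[:, None]).mean(axis=1)

    for _ in range(5_000):                 # let the population synchronize
        theta = theta + dt * drift(theta)

    theta0, T = theta.copy(), 1_000        # then measure each oscillator's frequency
    for _ in range(T):
        theta = theta + dt * drift(theta)
    freq = (theta - theta0) / (T * dt)

    print(f"mean natural frequency: {omega.mean():.4f}")
    print(f"locked frequency:       {freq.mean():.4f} (spread {freq.std():.1e})")

The question in the paper is what enforces this balance in a population of biochemical clocks, where the effective coupling need not be so symmetric.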

Jae asked how this happens in the master circadian clock of mammals. He suspected that the answer lies in the mechanism that drives the individual cells to oscillate. In particular, he showed that if the genetic oscillator is driven by protein sequestration, then the synchronized population exhibits the behavior observed in experiments: cells synchronize at the mean frequency. This does not happen if the oscillations are modeled using the more popular Hill kinetics.

Thus the synchronous oscillations of thousands of cells can provide clues about what makes each of the individual cells oscillate. Here is a nice overview of the paper.

Why write about mathematics?

I am stepping down as managing editor of the Dynamical Systems magazine, a post I held for a bit more than two years. David Uminsky is joining the team, and will carry on along with Elizabeth Cherry and Peter van Heijster. I am sure the magazine has a bright future. In my last contribution as managing editor I tried to briefly explain why it is important to write about mathematics. While this is directed at the readers of the DSWeb magazine, the message applies much more generally. Comments are welcome, as always:

In parting, I wanted to encourage the dynamical systems community in general to contribute to the magazine. The content here is mainly generated by you, and many of you will be contacted by the new editing team about contributing. Writing an article is a serious investment of time, and there are no immediate rewards – you will not get paid, and it is unlikely to help you get promoted. So why spend time writing for the DSWeb magazine instead of working on a manuscript or a grant proposal? Indeed, is outreach ever worth the time we invest in it?

I think the answer is a resounding yes. Many of us chose to study mathematics or science in part because we were inspired by Martin Gardner, Ian Stewart, James Gleick or Carl Sagan. Perhaps the best-known applied dynamicist today is Steve Strogatz, a truly wonderful popularizer of mathematics (if you haven’t read his column in the New York Times, do yourself a favor and do so). While your piece will not reach such a large audience, it will almost certainly be read by more people than your next academic paper. And you have the chance to tell people about your work, about what you think is interesting, and about your concerns. Thereby you will help shape the public discourse about the profession.

But there is another thing that I have learned from Steve: go and read any one of the papers that he has written in the last 10–15 years or so. You will notice that they are marvelously written (I admit, I have not read them all, but I doubt that my sample is biased). This is certainly not unexpected. But what comes first: are some people simply born good communicators, so that writing both popular and technical prose comes easily to them? Or is writing something that needs to be learned and practiced? If the latter, then the practice of writing cogently for a nontechnical audience will translate into clearer technical expositions. If you talk to popular science writers, or to scientists who write popular pieces, you will find that in most cases writing did not come easily to them. However, writing has invariably gotten easier, and the results have gotten better, after years of practice.

So, you can choose your own reason to contribute to the DSWeb magazine. You can be altruistic, and help the community. Or you can make this part of a more general outreach effort, which will increase your visibility and help you become a better expositor.

Dual Inheritance Theory

Here is a piece I am writing for Engines. The problem is always to make the presentation clear and concise, which makes it difficult to address any controversies. Evolutionary theory is not controversial (at least not amongst those who know what they are talking about). However, Dual Inheritance Theory is far more recent, and it is not fully developed. Here is the story – comments are welcome, of course:

The theory of evolution is well understood and accepted. It explains how simple organisms gave rise to the multitude of complex creatures that inhabit our planet. But human societies and cultures have also changed over the ages. The small hunting groups of our ancestors were the precursors of today’s complex states, their stone tools replaced by the smartphones in our pockets. Does the evolution of our genes also impact cultural changes? Can cultural shifts in turn impact the genetic makeup of our species?

There are a number of examples to support this idea. For instance, most ancient humans were lactose intolerant. However, as our ancestors domesticated animals, milk became a readily available food source. Those who could use it had an advantage over those who could not. The ability to digest milk in adulthood thus became widespread in certain populations.

A cultural change – the domestication of animals – thus influenced our genes. But this new ability to drink milk in turn affected our culture. Those of our ancestors who could digest milk were less likely to slaughter their cattle for food. Some became shepherds, and developed pastures. Hence, changes in our genes in turn influenced how we spent our time, how we raised animals, and how we used the land.

These processes are the domain of Dual Inheritance Theory – a mathematically grounded approach to understanding how our culture and our physical selves co-evolved.

The theory also suggests that our capacity for culture evolved along with our genes. How we innovate, and how we propagate knowledge, was shaped by evolution. The cost of experimentation can be high – you do not want to be the first to try a new type of plant or mushroom. If you remember a successful way to hunt, you will prosper. To use a well-worn phrase, we are lucky that not every generation needs to re-invent the wheel. Our ability to learn is therefore beneficial. It is possible that good learners had more children, who were good learners in turn. The capacity to learn and impart information may thus have evolved and shaped our culture. But these capacities may have shaped our genetic selves in turn.

If you compare our abilities to those of our closest relatives, the difference is remarkable. Consider the Kellogg family, who in the 1930s began rearing their baby son, Donald, along with the baby chimpanzee Gua. Their goal was to see whether the chimp would learn to behave and vocalize like a human. However, the experiment was halted when baby Donald started copying the chimp’s shrieks.

We have now mapped the human genome, and can sequence one for less than $1000. Yet we now know that we are not just a direct product of this genetic information. The evolutionary biologist Theodosius Dobzhansky said that “Nothing in Biology Makes Sense Except in the Light of Evolution”. To understand humans and our societies, we will need more than an understanding of the evolution of our genes. We will need to understand how they evolved along with our culture.

Some notes:

Here is an interesting article about the evolution of lactose tolerance. Here is a link to Theodosius Dobzhansky’s famous essay “Nothing in Biology Makes Sense Except in the Light of Evolution”. More about Winthrop Kellogg, who was an interesting character, and his experiment. I couldn’t find much information about what happened to Gua after the experiment, except that she died a year later.

A classic book on Dual Inheritance Theory is Culture and the Evolutionary Process by R. Boyd and P. Richerson. One of its conclusions is that humans are less likely to use parent-to-offspring transmission of cultural information, and instead prefer conformist transmission, i.e. learning from the broader environment. Over the last decades it has become easy and cheap to access a broad range of cultural models. If conformist transmission is the norm, this may have a profound influence on how our culture develops.

New paper on temperature compensation in synthetic gene circuits

Will Ott, Chinmaya Gupta and I  have been collaborating with Matt Bennett’s group at Rice on modeling different synthetic gene circuits.  A new paper in PNAS describes some of our recent work (link to paper at bottom).  Matt is interested in engineering  synthetic gene circuits that are robust and predictable.  This is difficult, since most current technology produces circuits that are often fragile – perturbations will alter their behavior.  The mathematical tools that will allow us to design circuits with desired properties are also in their infancy.  

In our work we showed that environmental sensitivity can be reduced by simultaneously engineering circuits at the protein and gene-network levels. Faiza Hussain and others in Matt’s lab constructed a synthetic genetic clock whose period does not depend on temperature. Why is this surprising? Well, as temperature increases, biochemical reactions speed up. Unless the genetic oscillator has special properties, its frequency will therefore increase with temperature (incidentally, this is also a problem with mechanical clocks, one famously solved by John Harrison).

To solve the problem, Matt’s group engineered thermal inducibility into the clock’s regulatory structure. What this means is that they used a mutant gene as part of the circuit. We hypothesized that this mutation changed the rates of a particular reaction in the genetic circuit. Chinmaya Gupta used a computational model to check whether this idea explains the observed temperature compensation. Indeed, including the rate changes in the computational model produced a clock with a stable period across a large range of temperatures, precisely matching the behavior of the mutant synthetic clock.
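Here is a cartoon of the compensation idea (my own illustration; the model in the paper is a detailed model of the gene circuit, not this one). Reaction rates typically follow Arrhenius scaling, so a clock whose period goes like the inverse of a single rate speeds up as the temperature rises. If the period instead depends on a ratio of rates whose temperature dependencies cancel, it stays put:

    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def arrhenius(k_ref, E_a, T, T_ref=303.15):
        """Reaction rate at temperature T, given the rate k_ref at T_ref."""
        return k_ref * np.exp(-(E_a / R) * (1.0 / T - 1.0 / T_ref))

    temps = np.linspace(298.15, 313.15, 4)  # 25 C to 40 C

    # Uncompensated toy clock: period ~ 1/k, so it runs faster when warm.
    k = arrhenius(1.0, 60e3, temps)
    print("uncompensated period:", np.round(1.0 / k, 2))

    # Compensated toy clock: period ~ k2 / k1**2, with activation energies
    # chosen so the temperature dependencies cancel (E2 = 2*E1). A thermally
    # induced rate can supply such an opposing dependence, loosely analogous
    # to the role the mutant plays in the engineered clock.
    k1 = arrhenius(1.0, 60e3, temps)
    k2 = arrhenius(1.0, 120e3, temps)
    print("compensated period:  ", np.round(k2 / k1**2, 2))

The first period drops by a factor of about three over this temperature range; the second does not budge.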

I find this satisfying for two reasons: first, I think it shows that we can set out to design genetic circuits that behave robustly. Second, and more important to me, it shows that we can use mathematical modeling to understand what makes these circuits tick. I hope that we will be able to understand native gene circuits, and design new ones, using such tools.

Here is some coverage with a video.
