The amazing variety of life on this planet is a product of evolution. However, it took billions of years for sharks, chimps and magnolias to evolve from their common ancestor. Given that evolution operates on such an enormous timescale, how could we possibly study it in a laboratory? Human life just seems too short.
But not all evolution is slow. Within our lifetime bacteria have evolved defenses against the most powerful antibiotics. Indeed, many antibiotics are themselves the result of evolution-fueled warfare between different bacteria, and between bacteria and the fungi they attack. We simply borrow some of their weapons. But bacteria that are not killed by antibiotics can prosper. They give rise to new resistant generations, rendering our weapons useless. This type of evolution can occur within days, and if we don’t discover new drugs, the resulting antibiotic resistance may end up costing millions of lives by the middle of this century.
Can we control how organisms evolve? Our ancestors have long done this to suit their own ends: dogs and wheat in their current forms are the result of evolution that humans have been steering.
Scientists have tried to do this more deliberately. Perhaps the first was the Reverend William Dallinger. Just over 20 years after Darwin published his theory of evolution, Dallinger examined whether single-celled organisms could adapt to slow changes in their environment. He started with an incubator filled with microbes that could initially only survive at room temperature. Over six years, he slowly increased the temperature inside the incubator to 158 degrees Fahrenheit to see whether the microbes would adapt.
More recent versions of this experiment are being carried out by many scientists, including Tim Cooper at the University of Houston, and Richard Lenski at Michigan State University. In 1988, Lenski started growing bacteria, giving them just enough food to survive from day to day. He has been observing 12 different lines of bacteria ever since. This amounts to over 60,000 bacterial generations, equivalent to about 1,500,000 human years – longer than our species has been around.
The results of this experiment are giving extraordinary insights into how life changes and adapts. The 12 different lines of bacteria have all evolved to thrive on their meager diets. Looking at their genes reveals that they have often used the same tricks – the same mutations – to achieve this. But in one line something unexpected happened. The bacteria started feeding in a completely new way, a change similar to us evolving the ability to eat wood.
So the churn of mutations, and transfer of genes, keeps creating variants of organisms that have never before existed. Most quickly disappear. A few succeed and create offspring that inherit their parents’ characteristics. And so – as Darwin wrote at the end of his Origin of Species – “from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”
We have become so used to antibiotics that it is hard to imagine what medicine would look like without them. Antibiotics are not only used to treat infections, but also make many other medical treatments possible by preventing infections in the first place. Here evolution works against us – medicine without antibiotics will look very different indeed.
Here are some of Richard Lenski’s reflections on his long-running experiment.
The last two issues of SIAM News each carry an article describing collaborative efforts with different colleagues. Eric Shea-Brown writes about our joint effort to relate the patterns of connections between the neurons in the brain to their joint dynamics, and ultimately function. Dana Mackenzie gives a nice overview of interactions between mathematicians and synthetic biologists. This includes a description of some joint work with Will Ott (UH) and Matt Bennett (Rice). Unfortunately, the second article does not mention the two postdocs who did most of the work: Faiza Hussain (Rice), who did the majority of the experimental work, and Chinmaya Gupta (UH), who did most of the modeling. The press release from UH is here.
When asked why he no longer frequented Ruggeri’s restaurant in St. Louis, Yogi Berra famously replied: “Nobody goes there any more. It’s too crowded.” Like other Yogi Berra quotes, this one is silly, yet insightful. And it is related to an interesting problem at the interface of mathematics and economics.
The problem was inspired by the El Farol bar in Santa Fe, near the famous Santa Fe Institute for the study of complex systems. Scientists at the institute like to frequent El Farol on Thursdays to listen to live Irish music. However, the bar is too small for all of them. Let’s assume that if more than half of them decide to go, it is so packed that nobody has a good time. However, when the bar is not overcrowded, an evening of music is more fun than staying at home. Importantly, in our example everybody needs to make up their mind at the same time about whether to go or not. Nobody can call ahead to see how many people are at the bar, or coordinate with others.
How can a patron decide whether to go or not? If everybody acts the same, then everybody either stays at home, or everybody goes to the bar. Neither of these outcomes is optimal, since the bar is either completely empty or completely packed.
But these are scientists. They observe the bar and track how crowded it is from one week to the next. They look for patterns in attendance to help them decide whether to go or not. Maybe the bar was nearly empty the last couple of weeks, indicating that it might be empty again this week. Or maybe there is a cycle with the bar overcrowded one week, and nearly empty the next. Thus each scientist develops a strategy to translate these observations into a decision.
But here is the catch – there is no one strategy that guarantees success. If there were, everybody would be using it. But with the same strategy everybody would again be making the same decision each week, and the bar would be either overcrowded or empty. We would be back where we started. The best hope is that the scientists choose different strategies: some stay home, and each Thursday the bar fills exactly to capacity.
This may seem like a frivolous problem. However, it has been studied extensively by mathematicians and economists as a simple model of a market. Indeed, the scientists compete for a resource – music at a bar. They are rational, as they monitor attendance and use this information strategically. But, as in a real market, they have limited information – they do not know the strategies of others, only how many show up each week. To do well, everybody needs to keep learning and adapting.
But the El Farol problem applies more widely. Suppose that instead of a bar and patrons we think of the ocean and fishermen. If all go out to fish, the stock will collapse. But if none do, many will go hungry. Or think of how we make use of our environment. And consider that, unlike the patrons of the El Farol bar, we residents of planet Earth will not get a chance to try again if we overcrowd it and overtax its resources.
The original El Farol problem was proposed in a very readable paper by Brian Arthur. In the original paper, the assumption is that if more than 60% of potential patrons visit the bar, then it will be overcrowded. I have changed this to make the exposition a bit simpler, and combined elements of the El Farol problem and the Minority Game. These are similar, but not the same. You can find the original paper here.
As noted, the El Farol problem has an even simpler version – the Minority Game. The idea is that each agent plays in successive rounds of a game where there are only two choices, A and B. Any agent in the minority wins a round. Everybody in the majority loses. For instance, if during one round most agents chose A, then all those who chose B win a set amount. Each agent learns a strategy based on their previous choices. Here is an introduction, and an accessible overview discussing the implications for economics can be found here.
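The round-by-round dynamics are easy to simulate. Below is a minimal sketch of a standard Minority Game: the number of agents, the memory length, the number of strategies per agent, and the scoring scheme are all illustrative choices, not taken from any particular paper.

```python
import random

random.seed(1)

N, M, S = 101, 3, 2          # agents, memory length, strategies per agent
HISTORIES = 2 ** M           # number of possible recent-history patterns

# Each strategy is a lookup table: history pattern -> choice (0 = "A", 1 = "B").
agents = [{"strategies": [[random.randint(0, 1) for _ in range(HISTORIES)]
                          for _ in range(S)],
           "scores": [0] * S} for _ in range(N)]

history = random.randint(0, HISTORIES - 1)   # last M outcomes encoded as bits

minority_sizes = []
for _ in range(200):
    # Every agent plays its currently best-scoring strategy.
    choices = []
    for a in agents:
        best = max(range(S), key=lambda i: a["scores"][i])
        choices.append(a["strategies"][best][history])
    ones = sum(choices)
    minority = 1 if ones < N - ones else 0   # the less popular choice wins
    minority_sizes.append(min(ones, N - ones))
    # Reward every strategy that would have picked the minority side.
    for a in agents:
        for i in range(S):
            if a["strategies"][i][history] == minority:
                a["scores"][i] += 1
    history = ((history << 1) | minority) % HISTORIES

# The minority can never exceed (N - 1) // 2 = 50 agents.
print(max(minority_sizes))
```

With an odd number of agents there is never a tie, and the interesting question is how close the typical minority size gets to the maximum of 50 – that is, how efficiently this population of competing learners fills the "bar" to capacity.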
A recent Radiolab episode described the magic act of the Australian couple Syd and Lesley Piddington. The two claimed to be mentalists and had a radio show on which they would communicate information “telepathically” even when separated by hundreds of miles.
The entire episode is great (as usual), and I recommend it. One thing that I found interesting is Penn Jillette’s comment about the secrets behind great magic tricks: they are invariably ugly and boring. Jillette says that when you learn how the trick is done, there is no “A-ha!” moment – compared to experiencing the trick, the revelations are mundane, uninteresting, and disappointing.
To our ancestors most of what was happening around them was magical. Imagine not knowing why it rains, what clouds are made of, or why you get sick. Science has often taken away the mystery. Sometimes what it revealed is awe inspiring – the fact that our Sun is one of countless others in an unimaginably large universe is more mind blowing than if we lived on a disk riding on the back of a turtle.
However, sometimes scientific revelations can turn the magical into the mundane. For instance, the moving rocks of Death Valley were somehow more interesting before we knew they just skated around on ice. The explanation might have felt different if it involved something unexpected, or outside our daily experience – strong magnetic fields, perhaps, or aliens with hockey sticks. However, partly it was the mystery itself, not knowing the secret, that gave the moving rocks their special aura. Once we know how the magic is done, something is lost.
The main goal of science is to understand how the world works. Some of the time what we find will be awe inspiring. At other times, the explanations will be mundane, irreducibly complicated and even ugly. We definitely crave the first kind. But if it is important to find out how the world works, should we put such a high value on aesthetics?
Indeed, I fear that some of the great unanswered questions of science will have answers that we will find unsatisfying. I am reading the book Consciousness and the Social Brain by Michael Graziano (here is a shorter post about it) – roughly, the idea is that awareness is the result of the brain’s model of what it is paying attention to. We need to have a model of what we ourselves, as well as others, attend to. Awareness is just an abstract, communicable representation of the act of paying attention. I am not sure that this is right, but Graziano offers pretty good arguments that it is plausible.
Even if this theory is not right, it is quite possible that we will ultimately find the answer to the question of consciousness disappointing. It is arguably the greatest magic trick of all, the one that lets us experience all the rest of the magic around us. And like the magic tricks that Jillette describes, the revelation may ultimately be ugly and unsatisfying.
It seems to be in our nature to compare our accomplishments to those of others. Teenagers worry about popularity. Later in life we compare our success to that of friends. But, there is a mathematical reason why we usually come up short!
You may have noticed that your friends seem to have more friends than you do. And you are right – on average, your friends are more popular than you are. This is true on Facebook and in real life, and is a consequence of what statisticians call biased sampling – you are more likely to befriend an outgoing, easy-to-get-along-with person than a recluse who hardly talks to anyone. Your typical friends make friends easily. Therefore you do not form friendships at random – statisticians would say that you are taking a biased sample of society.
Biased sampling is why playing a game of poker against strangers in a casino is usually a bad idea – you are more likely to meet an opponent who spends a lot of time playing the game, rather than a beginner or a complete amateur.
More surprisingly, not only are your friends more popular, but on average they are also more successful. The reason for this is that people with more friends seem to be on average more successful. We already established that because of biased sampling your friends have more friends than you do. If more friends means more success, it follows that your friends are on average more successful than you are. Hence comparing yourself to your friends is not a good idea – you are using a biased sample of society that is likely doing better than you.
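The biased-sampling effect is easy to demonstrate numerically. Here is a minimal sketch using a random social network; the network size and the connection probability are arbitrary illustrative choices.

```python
import random

random.seed(42)

# Build a random social network: n people, each pair befriended with probability p.
n, p = 1000, 0.01
neighbors = [set() for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            neighbors[i].add(j)
            neighbors[j].add(i)

degrees = [len(nb) for nb in neighbors]
mean_degree = sum(degrees) / n

# For each person with at least one friend, the mean popularity of their friends.
friend_means = [sum(degrees[j] for j in nb) / len(nb)
                for nb in neighbors if nb]
mean_friend_degree = sum(friend_means) / len(friend_means)

# The second number comes out larger: your friends have more friends than you do.
print(round(mean_degree, 2), round(mean_friend_degree, 2))
```

Popular people appear on many friend lists, so they are overrepresented when you average over friends. For networks like this one, the excess of the friends' average over the plain average grows with how unevenly popularity is spread – in the jargon, it equals the variance of the degree divided by its mean.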
OK, you may say, but is it useful? Indeed it can be – select a group of students at a university, and ask them to give you the names of a few of their friends. On average they will name people who are more popular, and therefore have more social contacts than the average student. If you are interested in hearing the latest rumor, you will be well advised to go to this named group. But, since these friends are more popular, they also have more interactions with others, and may be among the first to get sick in an epidemic.
Researchers have confirmed that this is the case: they asked random students to name friends, and found that in a flu outbreak this named group got sick about two weeks earlier than the average student. To get an early warning of an epidemic, just pick people at random and ask them if their friends are sick. You can do the same if you are trying to spot a new trend.
So mathematics tells us something valuable about our friends: Comparing our accomplishments to theirs is likely going to leave us depressed. Instead listen to your friends if you want to hear about a good place to eat, what concert to go to, or interesting new technology. Your friends will be able to tell you about it better than the average person.
The paradox was originally described by Scott L. Feld in “Why your friends have more friends than you do,” American Journal of Sociology, Vol. 96:6, pp. 1464–1477 (1991).
The generalized friendship paradox – the observation that your friends are more successful than you are (on average) – is described here. I have taken some liberties with the term success – the fact that friends are more successful has been shown, for example, for the number of co-authors and citations of scientific papers, and for the number of followers on Twitter. If this counts as “success” – and in scientific circles it does – then what I have said is strictly true. However, it is likely that the observation extends more broadly.
The article that describes how friends of friends can be used to track outbreaks of disease is Christakis NA, Fowler JH (2010) Social Network Sensors for Early Detection of Contagious Outbreaks. PLoS ONE 5(9): e12948. doi:10.1371/journal.pone.0012948. A followup is Garcia-Herranz M, Moro E, Cebrian M, Christakis NA, Fowler JH (2014) Using Friends as Sensors to Detect Global-Scale Contagious Outbreaks. PLoS ONE 9(4): e92413.
There is a difference between the mean and the median of the number of friends that I did not get into. This is described in more detail here. This also provides a more detailed description of the mechanism behind the generalized friendship paradox.
In the 1930s the Swiss-born biologist Max Kleiber studied how much energy different animals expend at rest, and noticed something curious. A human weighs about 10 times more than a cat. But rather than expending 10 times the energy of a resting tabby, we only expend 6 times as much. This number is not arbitrary – a cow is about 10 times heavier than a human, and also expends about 6 times the energy.
Kleiber was the first to notice this regularity: he showed that energy expenditure follows a 3/4 power law. What this means is that if you double the size of an animal, it will use about 2^(3/4), or about 1.7, times as much energy. If you increase the size tenfold, it will use about 10^(3/4), or about 5.6, times as much energy. Amazingly, organisms from bacteria to whales follow this law.
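The arithmetic behind these numbers is easy to check. A quick sketch of the 3/4-power scaling (the function name is just an illustrative label):

```python
def relative_energy(mass_ratio, exponent=0.75):
    """Energy use of a larger animal relative to a smaller one,
    assuming metabolic rate scales as mass ** exponent (Kleiber's law)."""
    return mass_ratio ** exponent

print(round(relative_energy(2), 2))    # doubling mass: 2**0.75 = 1.68
print(round(relative_energy(10), 2))   # tenfold mass: 10**0.75 = 5.62, the "about 6" above
```

The same one-liner, with a different exponent, reproduces the other scalings mentioned below: an exponent above 1 gives faster-than-proportional growth, an exponent below 1 gives the economies of scale.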
Power laws are all around us: if you add twice the amount of salt, a dish will not taste twice as salty. Rather, it will appear about 2^1.4, or 2.6, times as salty. A star twice the mass of our Sun will be 10 times as bright. There are many other examples, and surprisingly even human constructs behave similarly. Cities are particularly fascinating: if the size of a city doubles, we see more than a doubling of the number of patents, inventors and artists, and the amount of time we spend in traffic. All of these quantities follow power laws.
On the other hand some quantities grow more slowly – if a city doubles in size, we spend less than twice the gasoline or electricity – bigger cities are more efficient. These may be reasons why cities grow in size. But all is not rosy – unfortunately, the number of crimes and cases of a disease more than double in a city twice as large.
We do not yet know exactly why such different quantities as a city’s crime rate, road density and the number of inventors behave so predictably. But scientists have plausible theories: It is possible that ideas, information and inspiration behave like diseases, and spread more easily when populations are larger and denser. The larger the city, the more contacts we have, the higher the chance that we will hear the latest important news and insight, or hear about a good job opportunity.
More than half the world’s population now lives in urban areas. Can we reap the benefits of living in a city – the higher wages, and higher energy efficiency – without the disproportionate increase in crime, pollution and disease? The city of Zürich in Kleiber’s native Switzerland suggests that this may be possible. Zürich has grown tremendously in the last 20 years. But proper planning has kept traffic reasonable, and crime low. The underlying laws that govern how cities behave give me hope that we will also be able to understand the mechanisms behind these laws. This will allow urban planners and administrators to avoid the mistakes of the past. They will be able to work with physicists and mathematicians to help cities reach their full potential.
References and notes:
Here is a nice article about Kleiber’s Law (there are many other good ones easy to find with Google). A reason why it may hold was proposed in the 1990s. However, it relies on the fractal geometry of the circulatory system, while Kleiber’s Law seems to extend to organisms that do not have one. The mechanisms behind the law are therefore still under debate.
For a discussion of power laws and perception you can see the Wikipedia entry on Stevens’s power law. The laws here are a bit controversial because quantifying subjective experiences is difficult. More information about how luminosity scales with the mass of stars is here.
Here are some references (not complete) on how to explain power law scaling in cities. Arbesman, Kleinberg and Strogatz assume that the network of human contacts has a hierarchical, self-similar (fractal) structure. Under certain conditions, as a city grows, the increased number of contacts can lead to power law growth of the overall benefit. However, the assumption that interactions are hierarchical may be too strong. It could simply be the increase in density that facilitates the interchange of ideas and information, as explained here and reviewed here. Luís Bettencourt’s explanation develops this idea, but is also more complete.
Luís Bettencourt and Geoffrey West also give a nice review of the statistical findings and how they could be used. Unfortunately, it is behind a paywall at Nature.
I have recently worked with Jae Kyoung Kim, Zack Kilpatrick, and Matt Bennett on the problem of synchronization in circadian clocks. The paper just came out in the Biophysical Journal. Suppose you take a bunch of cells that oscillate with slightly different frequencies. If you couple the cells in the right way, they will tend to synchronize, and hence oscillate with a single frequency. However, it could be that the fast oscillators pull the slower ones along, and the synchronous population speeds up. Or the slower oscillators drag the faster ones down, and the population slows down after coupling. How can we make sure that the population does neither, and oscillates at the average frequency of the uncoupled population?
Jae asked how this happens in the master circadian clock of mammals. He had a suspicion that it is due to the mechanism that drives the individual cells to oscillate. In particular, he showed that if the genetic oscillator is driven by protein sequestration, then the synchronous state will exhibit the behavior observed in experiments (cells will synchronize at the mean frequency). This will not happen if the oscillations are modeled using the more popular Hill kinetics.
Thus the synchronous oscillations of thousands of cells can provide clues about what makes each of the individual cells oscillate. Here is a nice overview of the paper.
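The paper works with specific models of genetic oscillators, but the basic phenomenon – a coupled population settling on the mean of the individual frequencies – can be illustrated with the classic Kuramoto model of coupled phase oscillators. This is a generic toy sketch, not the model in the paper; the parameter values are arbitrary.

```python
import math
import random

random.seed(3)

# Kuramoto model: N phase oscillators with different natural frequencies,
# coupled symmetrically through the sine of their phase differences.
N, K, dt = 50, 2.0, 0.01
omega = [1.0 + random.uniform(-0.2, 0.2) for _ in range(N)]   # natural frequencies
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]    # initial phases

def step(theta):
    # Mean-field form: (1/N) * sum_j sin(theta_j - theta_i)
    #                = s * cos(theta_i) - c * sin(theta_i).
    s = sum(math.sin(t) for t in theta) / N
    c = sum(math.cos(t) for t in theta) / N
    return [t + dt * (w + K * (s * math.cos(t) - c * math.sin(t)))
            for t, w in zip(theta, omega)]

for _ in range(10000):            # let the population phase-lock
    theta = step(theta)
snapshot = list(theta)
for _ in range(10000):            # then measure frequencies over a further stretch
    theta = step(theta)

elapsed = 10000 * dt
freqs = [(t1 - t0) / elapsed for t0, t1 in zip(snapshot, theta)]
mean_natural = sum(omega) / N

# With this symmetric coupling, the locked population runs at the mean
# natural frequency: the two numbers printed below agree.
print(round(mean_natural, 3), round(sum(freqs) / N, 3))
```

The symmetric sine coupling has a special property: the pulls between any pair of oscillators cancel exactly, so the average frequency of the population is preserved. With asymmetric coupling this cancellation fails, and the population can speed up or slow down – which is why the form of the coupling, and hence the mechanism driving the individual cells, shows up in the synchronized frequency.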