
Why write about mathematics?

I am stepping down as managing editor of the Dynamical Systems magazine, a post I held for a bit more than two years. David Uminsky is joining the team and will carry on along with Elizabeth Cherry and Peter van Heijster. I am sure the magazine has a bright future. In my last contribution as managing editor I tried to explain briefly why it is important to write about mathematics. While this is directed at the readers of the DSWeb magazine, the message applies much more generally. Comments are welcome, as always:

In parting, I wanted to encourage the dynamical systems community in general to contribute to the magazine. The content here is mainly generated by you. Many of you will be contacted by the new editing team to contribute. Writing an article is a serious investment of time, and there will be no immediate rewards – you will not get paid, and it is unlikely to help you in getting promoted. So why spend time writing for the DSWeb magazine, instead of working on a manuscript or a grant proposal?  Indeed, is outreach ever worth the time we invest in it?

I think the answer is a resounding yes. Many of us chose to study mathematics or science in part because we were inspired by Martin Gardner, Ian Stewart, James Gleick or Carl Sagan. Perhaps the best known applied dynamicist today is Steve Strogatz. He is a truly wonderful popularizer of mathematics (if you haven’t read his column in the New York Times, do yourself a favor and do so). While your piece will not reach such a large audience, it will almost certainly be read by more people than your next academic paper. And you have the chance to tell people about your work, about what you think is interesting, and about your concerns. In doing so, you will help shape the public discourse about the profession.

But there is another thing that I have learned from Steve: go and read any of the papers he has written in the last 10-15 years. You will notice that they are marvelously written (I admit, I have not read them all, but I doubt that my sample is biased). This is certainly not unexpected. But what comes first: are some people simply born good communicators, so that writing both popular and technical prose comes easily to them? Or is writing something that needs to be learned and practiced? If the latter, then the practice of writing cogently for a nontechnical audience will translate into clearer technical expositions. If you talk to popular science writers, or scientists who write popular pieces, you will find that in most cases writing did not come easily to them. However, writing has invariably gotten easier, and the results better, after years of practice.

So, you can choose your reason to contribute to the DSWeb magazine. You can be altruistic, and help the community. Or you can make this part of a more general outreach effort, which will increase your visibility and help you become a better expositor.

Dual Inheritance Theory

Here is a piece I am writing for Engines. The problem is always to make a clear, concise presentation, which makes it difficult to address any controversies. Evolutionary theory is not controversial (at least not amongst those who know what they are talking about). However, Dual Inheritance Theory is far more recent, and it is not fully developed. Here is the story – comments are welcome, of course:

The theory of evolution is well understood and accepted. It explains how simple organisms gave rise to the multitude of complex creatures that inhabit our planet. But human societies and cultures also changed over the ages. The small hunting groups of our ancestors were the precursors of today’s complex states, their stone tools replaced with the smart phones in our pockets. Does the evolution of our genes also impact cultural changes? Can cultural shifts impact the genetic makeup of our species in turn?

There are a number of examples to support this idea. For instance, most ancient humans were lactose intolerant. However, as our ancestors domesticated animals, milk became a readily available food source. Those who could use it had an advantage over those who could not. The ability to digest milk in adulthood thus became widespread in certain populations.

A cultural change – the domestication of animals – thus influenced our genes. But this new ability to drink milk in turn affected our culture. Those of our ancestors that could digest milk were less likely to slaughter their cattle for food. Some became shepherds, and developed pastures. Hence, changes in our genes in turn influenced how we spend our time, how we raise animals, and how we use the land.

These processes are the domain of Dual Inheritance Theory – a mathematically grounded approach to understanding how our culture and our physical selves co-evolved.

The theory also suggests that our capacity for culture evolved along with our genes. How we innovate, and how we propagate knowledge, was shaped by evolution. But the cost of experimentation can be high – you do not want to be the first to try a new type of plant or mushroom. If you remember a successful way to hunt you will prosper. To use a well-worn phrase – we are lucky that not every generation needs to re-invent the wheel. Our ability to learn is therefore beneficial. It is possible that good learners had more children, who were good learners in turn. The capacity to learn and impart information may thus have evolved and shaped our culture. But these capacities may have shaped our genetic selves in turn.

If you compare our abilities to those of our closest relatives, the difference is remarkable. Consider the Kellogg family, who in the 1930s began rearing their baby son, Donald, along with the baby chimpanzee Gua. Their goal was to see whether the chimp would learn to behave and vocalize like a human. However, the experiment was halted when baby Donald started copying the chimp’s shrieks.

We have now mapped the human genome, and can sequence it for less than $1000. We now know that we are not just a direct product of this genetic information. The evolutionary biologist Theodosius Dobzhansky said that “Nothing in Biology Makes Sense Except in the Light of Evolution”. To understand humans and our societies we will need more than an understanding of how our genes evolved. We will need to understand how they evolved along with our culture.

Some notes:

Here is an interesting article about the evolution of lactose tolerance. Here is a link to Theodosius Dobzhansky’s famous essay “Nothing in Biology Makes Sense Except in the Light of Evolution”. More about Winthrop Kellogg, who was an interesting character, and his experiment. I couldn’t find more information about what happened to Gua after the experiment, except that she died a year later.

A classic book on Dual Inheritance Theory is Culture and the Evolutionary Process by R. Boyd and P. Richerson. One of its conclusions is that humans are less likely to use parent-to-offspring transmission of cultural information, and instead prefer conformist transmission, i.e. learning from the broader environment. Over the last decades it has become easy and cheap to access a broad range of cultural models. If conformist transmission is the norm, this may have a profound influence on how our culture develops. A minimal simulation of the idea is sketched below.
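To make the difference concrete, here is a toy sketch of the classic conformist-transmission recursion (my own illustration, with arbitrary parameter values, not code from the book). When individuals copy one of a few role models, and the majority variant is adopted disproportionately often, the frequency p of a cultural variant changes each generation as p' = p + D·p(1−p)(2p−1), where D measures the strength of the conformist bias:

```python
# Toy model of conformist transmission (cf. Boyd & Richerson).
# With conformist bias D > 0, whichever cultural variant is in the
# majority is adopted disproportionately often, so its frequency p
# obeys p' = p + D * p * (1 - p) * (2p - 1) each generation.
# Parameter values here are arbitrary, chosen only for illustration.

def step(p, D):
    """One generation of cultural transmission with conformity strength D."""
    return p + D * p * (1 - p) * (2 * p - 1)

for D in (0.0, 0.2):      # D = 0: unbiased copying; D = 0.2: conformist
    p = 0.6               # the variant starts as a modest majority
    for _ in range(50):
        p = step(p, D)
    print(f"D = {D}: frequency after 50 generations = {p:.3f}")

# Unbiased copying leaves the frequency at 0.6, while even a modest
# conformist bias drives the majority variant toward fixation.
```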

New paper on temperature compensation in synthetic gene circuits

Will Ott, Chinmaya Gupta and I have been collaborating with Matt Bennett’s group at Rice on modeling different synthetic gene circuits. A new paper in PNAS describes some of our recent work (link to paper at bottom). Matt is interested in engineering synthetic gene circuits that are robust and predictable. This is difficult, since current technology produces circuits that are often fragile – perturbations will alter their behavior. The mathematical tools that will allow us to design circuits with desired properties are also in their infancy.

In our work we showed that environmental sensitivity can be reduced by simultaneously engineering circuits at the protein and gene network levels. Faiza Hussain and others in Matt’s lab constructed a synthetic genetic clock whose period does not depend on temperature. Why is this surprising? Well, as temperature increases, biochemical reactions speed up. Unless the genetic oscillator has special properties, its frequency will thus increase with temperature (BTW, this is also a problem with mechanical clocks, one famously solved by John Harrison).

To solve the problem, Matt’s group engineered thermal inducibility into the clock’s regulatory structure. What this means is that they used a mutant gene as part of the gene circuit. We hypothesized that this mutation changed the rates of a particular reaction in the genetic circuit. Chinmaya Gupta used a computational model to check whether this idea explains the observed temperature compensation. Indeed, including the rate changes in the computational model resulted in a clock with a stable period across a large range of temperatures. This matched precisely the behavior of the mutant synthetic clock.
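Why does an ordinary clock speed up at all? If every rate constant in the circuit scales with temperature by the same Arrhenius factor, time itself is simply rescaled, so the period shrinks by exactly that factor. Here is a back-of-the-envelope sketch of the effect (my own illustration, with an assumed activation energy and period, not numbers from the paper):

```python
# Minimal sketch (not the model from the paper): if every reaction
# rate in a clock scales by the same Arrhenius factor
#   phi(T) = exp(-(Ea/R) * (1/T - 1/T0)),
# then time is rescaled and the period shrinks by exactly 1/phi(T).
import math

R = 8.314          # gas constant, J/(mol K)
Ea = 50e3          # assumed activation energy, J/mol
T0 = 298.0         # reference temperature, K (25 C)
period_T0 = 60.0   # assumed period at T0, in minutes

for T in (288.0, 298.0, 308.0):
    phi = math.exp(-(Ea / R) * (1 / T - 1 / T0))
    print(f"T = {T - 273:.0f} C: period = {period_T0 / phi:6.1f} min")

# The period roughly quadruples as the temperature falls from 35 C
# to 15 C. Temperature compensation requires breaking this uniform
# scaling: at least one rate must respond to temperature differently,
# which is what the mutation in the engineered clock is hypothesized
# to provide.
```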

I find this satisfying for two reasons. First, I think it shows that we can set out to design genetic circuits that behave robustly. Second, and more important to me, we can use mathematical modeling to understand what makes these circuits tick. I hope that we will be able to understand native gene circuits, and design new ones, using such tools.

Here is some coverage with a video.

A math talk for kindergartners

Today I went to my son’s kindergarten class to tell them about my work. This was a bit of a challenge. Instead of talking specifically about my research, I tried to explain to them what mathematics is all about. The presentation is here. If you are called upon to do something similar, feel free to build on it. It is a bit short, but about right given their attention span. I emphasized applications, since I thought that the more abstract ideas of pure math would be hard to get across. I probably underestimated them. It was a fun exercise, and I think they got something out of it.

Randomized Clinical Trials

There has been a lot of controversy about a recent clinical trial of an oxygen treatment for premature babies. However, I found the reporting of the issue very confused. This article from the AP demonstrates the difficulty in communicating the issues surrounding randomized clinical trials (RCTs). There is a general misunderstanding – and frequently, misrepresentation – of what clinical trials are all about. It is all too easy to get the impression that these are just experiments on humans. The truth is that the evidence gathered from clinical trials is essential in deciding which treatments and medicines work, and which might be harmful. Without them medicine would not have come nearly as far.

My point here is not to give a full review of the controversy. For a good explanation see here.  Rather, I would like to use this as an example to explain some misconceptions about RCTs.

The following statement from the article demonstrates how it is easy to misunderstand the issue:

“… the debate is about one of modern medicine’s dirty little secrets: Doctors frequently prescribe one treatment over another without any evidence to know which option works best. There’s no requirement that they tell their patients when they’re essentially making an educated guess, or that they detail the pros and cons of each choice.”

I fully agree with this statement. But the writer never follows up to explain that this is precisely why we need clinical trials – to provide the evidence that will help decide which option is best.

It is easy to be reminded of the horrors of the past when reading about RCTs (like the Tuskegee syphilis experiment). I am not saying that we are living in a wonderful world in which medical researchers always do what is best for their patients – far from it. However, RCTs are the very tools that allow doctors to offer demonstrably better medical care.

If you read the beginning of the article, it remains unclear whether the premature babies in the study were hurt (perhaps even on purpose) in order to test new medical approaches. This would be truly horrific if true. As you read further the picture becomes more confusing. The article states that

“Oxygen has been a mainstay of treating them [premature babies], but doctors didn’t know just how much to use. Too much causes a kind of blindness called retinopathy of prematurity. Too little can cause neurologic damage, even death. So hospitals used a range of oxygen, with some doctors opting for the high end and some for the low.”

This is exactly the point: before the study was performed, a range of treatments was prescribed (85%-95% oxygen saturation levels). Doctors knew that oxygen treatment helped. They did their best to guess how much to use. But before the study was performed they were just guessing which treatment would lead to the best outcome. They did not know whether they could be doing more harm than good by administering too much or too little oxygen. In the absence of evidence, they essentially gambled.

This was a very important issue to resolve, and that is precisely why the trial was performed. Doctors could not have guessed that the higher oxygen levels both reduced mortality and improved outcomes. Now that the answer is known, future generations of premature babies will receive better care.

But would this be ethical if it came at the expense of the babies involved in the study? Of course not! We cannot pay for progress in medicine by knowingly harming patients – indeed, the very thought of it evokes the darkest chapters of medical history.

So the question is whether the babies in the study received the best medical care known at the beginning of the study. In clinical trials patients are split into groups that are given different treatments. One treatment cannot be known to be worse than the other(s) – this is what the trial is designed to resolve. However, if one treatment turns out to be better, then one of the groups will have received better (more effective) medical care. But this will be known only after the study is completed.

This is an essential point: before the trial is performed, nobody knows for certain which treatment is better. Indeed, babies that did not participate in the study received a range of treatments, according to the best guess of their doctors. Had the trial not been performed, the entire range of treatments – including the worse one – would still be administered.
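The logic of such a trial is easy to illustrate with a simulation (a minimal sketch with made-up numbers, not data from the oxygen study): randomize patients into two arms, observe the outcomes, and ask whether the difference between the arms is larger than chance alone would produce.

```python
# Toy two-arm trial with invented numbers -- NOT data from the oxygen
# study. Randomization is what lets us attribute any difference in
# outcomes to the treatment itself rather than to how patients were
# chosen.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 600                                  # patients per arm (assumed)
p_low, p_high = 0.90, 0.94               # hypothetical survival rates

survived_low = rng.binomial(n, p_low)    # arm randomized to one treatment
survived_high = rng.binomial(n, p_high)  # arm randomized to the other

table = [[survived_low, n - survived_low],
         [survived_high, n - survived_high]]
chi2, p_value, dof, expected = chi2_contingency(table)

print(f"arm 1: {survived_low}/{n} survived")
print(f"arm 2: {survived_high}/{n} survived")
print(f"p-value for 'no difference between arms': {p_value:.3f}")

# Before the trial, both arms lie within the range of treatments
# doctors were already prescribing; neither is known to be worse.
# Only after the data are in can we say which arm received the
# better care.
```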

What may be difficult to accept is that sometimes, perhaps more often than we realize, doctors simply do not know what the best treatment is. We laugh at the medieval use of leeches, bloodletting, and remedies meant to balance the humors. But doctors today still often guess about what works (how much is a matter of debate) – and I am not even talking about nutritional supplements, almost all of which are completely unproven, if not known to be harmful.

I think this is where scientists in general – and mathematicians and statisticians in particular – need to better explain, and keep on explaining why we think certain things are true. Clinical trials offer a way forward in situations where we simply cannot base decisions on experience, but need to look at data and use statistics.

Returning to the case of the premature babies, the stories are heartbreaking:

“I unknowingly placed my son in harm’s way,” said Sharissa Cook of Attalla, Ala., who wonders if vision problems experienced by her 6-year-old, Dreshan Collins, were caused by the study or from weighing less than 2 pounds at birth. “The only thing a mother wants is for her baby to be well.”

Dagen’s mother, Carrie Pratt, was more blunt with reporters: “Why is omitting information not considered lying?” she said. “We were told they would give her the best care every day.”

I cannot imagine what these families have gone through. But I do believe that what they suffered was not a consequence of participation in the study. The babies in the study on average had lower mortality and better outcomes than babies that were not in the study (perhaps due to the “inclusion benefit”). Had they not participated, it is impossible to know what treatment their physicians would have chosen. It may have been any of the ones used in the study, since the entire range of treatments offered was used in practice.

Explaining the need for, and the reasoning behind, RCTs is not easy. It is far easier to write a story about premature babies harmed by a heartless group of faceless scientists and doctors in white coats. There are many examples of people who misuse evidence, knowingly or unknowingly, and harm patients by doing so: from the anti-vaccine fantasies of Andrew Wakefield to the cancer therapies of Dr. Burzynski. However, the people behind this clinical trial have done nothing of the sort.

Gather all the data

I keep hearing variations of the following comment: “Living organisms generate the equivalent of exabytes (or zettabytes, or whatever) of information per second. We will need to store all this data, and then analyze it to make sense of what is going on.” The first part of the statement is certainly true. A detailed description of the position and state of every molecule, even within a single cell, would take enormous amounts of data. Similarly, recordings of the activity of a population of neurons already generate gigabytes of data per second. As our recording techniques get better, the rate at which data is generated will only increase.

However, I am worried about the second part of the statement. There are a couple of concerns here. If we mindlessly accumulate data, it is possible that important features will be buried in the mess. As I wrote separately, some have suggested that this is perfectly fine: we just need to feed the whole shebang to some data-crunching supercomputer, and it will tell us what matters and what does not. In this brave new world, scientists would only have to decide which questions need to be answered – machines would collect and interpret the data for us.

However, I doubt that our algorithms are this powerful. For the foreseeable future, we will have to play an active part in analyzing and understanding the data. And this means that blindly collecting all the data we can may not be the best approach.

A second, related question concerns the complexity of a satisfactory description of a living organism – say a bacterium, or the human brain. I would expect the complexity of the description to dictate how much data we will need to develop it fully.

What constitutes a satisfactory description is subjective. Satisfactory can mean that the description gives us a feeling that we understand how the organism functions. But a satisfactory description could also give accurate predictions of how an organism behaves, without giving us an understanding of the underlying mechanisms.

I am relatively optimistic that we will be able to develop the second type of model. We already have some computational models of organisms that give very good predictions about their behavior (here is an example by Jae Kyoung Kim and collaborators, and a computational model of a cell about which I wrote before). However, these models are not simple. I doubt that you can really stare at them and gain a deep understanding of how the model, or the organism, ticks.

Perhaps we will be able to develop models of living organisms that both give us accurate predictions, and deep insights into how they function. After all, physicists have given us such descriptions of the physical world. However, I doubt that we will get there just by blindly amassing data.

The illusion of absolute categories

Here is a recent fascinating paper (original study and a nice comment). Briefly, the study shows that absolute pitch — the ability to identify a note when played on its own — is not really that absolute. In one of the experiments, subjects with absolute pitch listened to a piece of music that was slightly detuned. For listeners with absolute pitch this detuned music established a new reference point. After listening, their internal map of pitches shifted, and they identified notes in accord with the detuned reference they had just heard.

John Lienhard pointed me to his related radio episode. He points out that even people without absolute pitch will sing a familiar song in the original key. However, the experiment above points to how flexible our minds can be. Our memories, and the internal categories we establish are not absolute. They can be shifted to adjust to the environment – and this will happen without us being aware of these internal changes.

Perhaps this is even more impressive than muscle memory. We do many things mechanically and unconsciously. But our unconscious brain is not a dumb robot. It is flexible, and self-correcting. We are under the illusion that our ever-changing mind is stable.
