INTRODUCTION
by John Brockman
In modern medicine you might find it strange that many people don't
think in theoretical terms. It's a shock to many physical scientists when
they encounter this attitude, particularly when it is accompanied by a
conflation of correlation with causation. Meanwhile, in physics, it is
extremely hard to go from modeling simple situations consisting of a
handful of particles to the complexity of the real world, and to combine
theories that work at different levels, such as macroscopic theories
(where there is an arrow of time) and microscopic ones (where theories
are indifferent to the direction of time).
All my life, the problems that have been of interest to me are the ones
that connect science into a whole. Do we have an integrated theory of
science, as it were, or is science broken into many separate parts that
don't add up? From observation, we have the expectation that we live in a
single universe, so we'd expect consistency, and that's what leads you to
demand consistency of the theories that describe the world we live in.
If you look at the way we categorize our theories, there are different ways
of analyzing them. Some lie within the domain of physics or even applied
mathematics. We have chemistry, biology, engineering; these usually are
regarded as separate disciplines and historically have comparatively little
to do with one another. So when you ask who's doing what in scientific
terms, it's no surprise that the answer to my question about a unified
theory of knowledge, so to speak, is that it remains rather fragmented
today.
In modern biology and medicine today you would find most people not
even trying to think in theoretical terms. It's quite a shock to many
physical scientists when they encounter this. It's a funny clash between
two philosophies of science that have been around for some 500 years
or so. What we call "Baconian theory" says, don't worry about a
theoretical underpinning, just make observations, collect data, and
interrogate the data. This Baconianism, as it's come to be known, is very
widespread in modern biology and medicine. It's sometimes also called
informatics today.
Certainly, that's not my position on this at all. I would agree that it's
beneficial, in areas where we don't understand a whole lot, to do a lot of
observational work initially. That will help you unravel some of the
correlations that do exist. The big challenge is then to make sense of that
in a deeper way, and that's usually forgotten. Unfortunately, that's where
we get the conflation of correlation with causality, which is clearly
fallacious.
If you just stick with personalized medicine for a moment, the questions
are of this sort: so what if I know your entire genome sequence? Again, if
you were a reductionist molecular biologist, pulling Dawkins' leg for a
moment, you might think that's the blueprint, that's all we need to know
and the rest is then a consequence of what that genome sequence is. I
don't think anyone seriously believes that to be the case. The huge
number of genome studies that people carry out today show that no
matter how much people try to use so-called big data analytics
(informatics), they cannot get clear correlations, based on genomic data
alone, that account for cases of disease. Such clear correlations are
extremely rare.
It's going to depend on what the problem is, which level you select to
take as the primary one to base your analysis on. This is the same
through all of science. When we talk about physics, there's a sense that
chemistry somehow sits off it, maybe biology and engineering too. The
same applies when we talk about the organic or the medical; the same
analogies pertain there. If I'm trying to design new materials, once again
their chemistry calls for quite different levels of description than their
properties do.
The opportunities that come the way of chemistry to promote itself are
usually spurned and squandered by the establishment in the field. I've
already mentioned one in passing a few minutes ago, and that's to do
with the origin of life. The origin of life on Earth is fundamentally a
chemical question. How did the first self-replicating molecules emerge, if
they did, from some Darwinian soup? That's a chemical question;
therefore, it's the equivalent of consciousness, the origin of evolution,
cosmology, and things like this. The thinking person wants to know about
it. And yet the community has spurned it, on the grounds that it's
somehow not a respectable discipline.
My own research agenda may seem curious to people who just look at
what I'm doing randomly, as if it's kaleidoscopic, but it's not. It's always
been systematic, exactly along the lines we're talking about. The
fundamental thing I'm interested in is how I connect an understanding of
things on the very small scales with the larger levels. Microscopic to
macroscopic. That might be seen as enshrined in the relationship
between atoms and molecules and thermodynamics, like a classical
description from the 19th century, 20th century, Boltzmann, et cetera.
But that program is still there, and that's the hope that makes you believe
that it is possible. As long as you can connect microscopic descriptions
to larger scales, you have the hope of being able to predict all these
things, whether inanimate or animate matter, in terms of the shorter
length scales wherever that's necessary.
It's a good question: how do you relate the two together? They appear to
be completely at odds with one another. Depending on whom you're
dealing with, they are, because the education and training of the people
concerned is so different. But if you have enough understanding of what's
going on between the two, you can draw on both beneficially. That's
something hinted at in some of the descriptions we're giving.
For example, Judea Pearl (at UCLA) and others share a similar take on
this. The problems we face in the bio and biomedical world today amount
to a serious potential clash between two approaches which should be
aligned, for the beneficial reasons I just outlined. You use data with
Popperian methods; in a Bayesian fashion, you can extract deterministic
descriptions in a desirable way.
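As a toy illustration of that Bayesian fashion of extracting a description from data, here is a minimal sketch (not the speaker's actual method, and with invented numbers): a Beta-Binomial update of a drug's response rate as patient outcomes accumulate.

```python
# Minimal Bayesian updating sketch (hypothetical numbers): start from a
# uniform Beta(1, 1) prior on a drug's response rate and update it with
# observed patient outcomes, then read off a point estimate.

def beta_binomial_update(successes, failures, alpha=1.0, beta=1.0):
    """Update a Beta(alpha, beta) prior with binomial outcomes."""
    alpha_post = alpha + successes   # responders shift mass upward
    beta_post = beta + failures      # non-responders shift it downward
    mean = alpha_post / (alpha_post + beta_post)  # posterior mean
    return alpha_post, beta_post, mean

# 30 of 100 hypothetical patients responded:
a_post, b_post, estimate = beta_binomial_update(30, 70)
print(round(estimate, 3))  # posterior mean response rate, about 0.304
```

The point of the sketch is only that the estimate is extracted from the data in a principled, updatable way, rather than being read off a correlation table.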
But the real fundamental limitation there, I have to say, is education and
training of people in these disciplines. One of the things we may be
moving on to, and it should appear in the descriptions that are given in
the future, is that if I want to be able to educate and train a doctor to
carry out the sort of interventions that I mentioned earlier, they have to
understand a lot more about the theoretical basis of their subject.
Otherwise, they are not going to be able to champion these approaches,
and we will be spending years with fingers crossed hoping that they do
adopt them. There is a big, big challenge there.
It's quite easy to give real-world examples. In fact, the latest thing we did
got picked up by Toyota, which is a famous car manufacturer. They put
out a patent back in the late '80s, which was getting at the thing I
mentioned earlier: the desirability of creating what are called
nanocomposite materials, with no metal in them, which would be as strong
and durable as steel, among other things. Their first patent on that came
from mixing a material as banal as clay with nylon, and they found some
extremely interesting properties. In fact, within a few years, they were
making some. They still make some car parts out of that, but not the
entire frame.
The idea is, tell me, as the experimentalist, what ingredients I should mix
in order to get the important properties I want: low density, strength,
toughness, that the thing won't undergo fractures, et cetera. Fracture is a
classic example of a multi-scale challenge. It involves, at the smaller scale,
a chemical bond-breaking. That's an electron rearrangement, so there's
going to be quantum mechanics in this. Its manifestation on a larger
scale could be that the whole wing of an aircraft fractures as a result of
that broken bond, so we need to know how those things are connected, and we have
to find ways in these scenarios to stop it. I need to be able to design a
material (it could be a so-called self-healing material) that just doesn't
allow a fracture to propagate. At the larger scale of everyday life, it's
clear what we're trying to do here, and it just requires all of these bits and
pieces to be brought together.
At this point, the biggest motivation for me has been the purely
intellectual one. How do you do this kind of thing? When you have some
ideas that are good, you find you can apply them in particular instances.
I'm now telling you about where we can apply this, and I haven't been
talking to them for very long. But the same approach is necessary when
I'm dealing with these medical issues because I know I've got to get the
molecular end in, but it could have a manifestation that might be in a
heart arrhythmia or something.
Here's an example of the sort of thing we're after: In the genomic era,
we're going to know, if we don't already, individual genome sequences.
This is happening already for cases like HIV because we know the
sequence of the virus that's doing the infecting, which is much shorter
than ours, so a virologist will get that sequence on a patient. That's useful
information because if you know what the sequence is, it might give you
an indication as to which drug to give the patient. Well, how would you
do that? Existing approaches are Baconian. Data is collected on endless
patients and stored in databases and some form of expert system is run
on them. When a new sequence comes in from a virologist, it's matched
up to everything that was done before and someone will infer that the
best treatment now is the same thing as what was done for some group
of people before. This is not a reliable approach to individual medical
treatment.
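To make that database-matching style concrete, here is a deliberately naive sketch of the kind of nearest-match lookup such an expert system performs. The sequences and drug names are hypothetical; nothing here is a clinical method.

```python
# Toy "Baconian" lookup (hypothetical data): match a new viral sequence
# against stored cases and reuse the treatment of the closest match.

def hamming(a, b):
    """Count mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def suggest_treatment(new_seq, database):
    """Return the treatment recorded for the nearest stored sequence."""
    best = min(database, key=lambda rec: hamming(new_seq, rec["seq"]))
    return best["treatment"]

past_cases = [
    {"seq": "ACGTACGT", "treatment": "drug A"},
    {"seq": "ACGTTCGT", "treatment": "drug B"},
]
print(suggest_treatment("ACGTTCGA", past_cases))  # nearest case used drug B
```

The weakness the speaker is pointing at is visible even in the toy: the lookup encodes no mechanism at all, only proximity to what was done before.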
If you can find a Popperian method, you'd be much better off. What is
that method? That's one of the things I'm interested in, and that is doing
sequence-specific studies from the virushow it binds to individual drugs.
It's no longer a generic task that a drug company is interested in. The
drug companies have their problems now. They're trying to produce drugs
as blockbusters, one-size-fits-all. This is not going to work in the future
anyway. We have to tailor drugs to individuals. The challenge there is, can
I match drugs to individual sequences? That's quite a demanding thing. It
has quantum mechanics in it, it has classical mechanics, and it connects
up to the way the patient is treated clinically. It too is a multilevel thing.
Where do I get my funding from? It's like other things I do. Origins of
life I contribute to, but you can't get large funding opportunities there. I
do it with the resources I get from other places, in what I call the
interstices; it's not obvious to people, including myself, where it's coming
from next.
There are limits to what we're talking about here. I'm not trying to go
from the blockbuster one-size-fits-all to every single person has their
own. That would be a reach too far all at one fell swoop. But there's this
idea of stratification, which simply means clustering into groups for whom
we know there may be adverse reactions to the drug that's on the
market. It is quite shocking how low a percentage of the population
responds positively to the drugs that exist. In cancer, it's well under 50
percent. In a sense, scientifically, we owe it to people to understand
better what drugs to give them. It's not a question of suddenly having to
give everyone a different one, but finding different sets of drugs. It does
challenge the model, which typically takes 2 billion dollars and up to
17 years to produce one drug. I need lots more of them.
Computational methods of the sort I'm talking about are going to have an
impact on speeding all that up, no question.
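Stratification in the sense above is, computationally, just clustering patients into groups. Here is a minimal sketch with invented biomarker values, using a simple one-dimensional k-means (an illustrative choice, not a method attributed to the speaker), to split patients into two candidate strata.

```python
# Illustrative stratification sketch (made-up marker values): cluster
# patients on one biomarker with a tiny 1-D k-means so a drug could be
# targeted at the group likelier to respond.

def kmeans_1d(values, iters=50):
    """Tiny 1-D k-means for two clusters; returns (centroids, labels).
    Initialization at the extremes assumes exactly two clusters."""
    centroids = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each value to its nearest centroid
        labels = [min(range(2), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        # move each centroid to the mean of its members
        for j in range(2):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return centroids, labels

markers = [0.9, 1.1, 1.0, 4.8, 5.2, 5.0]  # hypothetical biomarker levels
centroids, groups = kmeans_1d(markers)
# two strata emerge: one around 1.0, one around 5.0
```

Real stratification would of course use many markers and validated outcomes, but the structure of the computation (group first, then treat groups differently) is the same.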
It's such an intensive approach today. It's expensive and labor intensive.
One of the points I was making earlier is that, unlike a pharmaceutical
company, most companies that make chemical products (when you're
talking about shampoos and other things, it might be a company like
Procter & Gamble or Unilever) don't actually have chemists to make these
compounds. They reach for the shelf and look for suppliers who provide a
set of chemicals that they will mix, because those suppliers are in the
business of doing the chemistry. The suppliers know what it takes to
make their compounds, and those are the ones the other companies will
choose from.
You can imagine what I would be advocating, and that would be for a lot
more scientific basis to what's going on in medicine. My humble opinion is
it isn't particularly scientific today. It's a lot of experience and rote
knowledge, but it's not informed by proper mechanistic understanding. In
the end, we need that kind of mechanistic understanding to have a
predictive capability in the discipline.
The one or two examples I've given you, and I can give you plenty more,
are all pointing towards the fact that we now have enough theoretical
knowledge to build models that have predictive capability in the medical
area. Today, many clinicians would admit to you that their decision-
making is a bit of a finger-in-the-air job. They have to take a decision,
and I know from discussing this with medical colleagues that many would like to
use better methods to support those decisions. That doesn't need to
imply we're going to do away with doctors at all, but it's just enhancing
the value and quality of the decision-making.
It all plays into the fact that we do not want to do too much animal
testing in the future. If you can have a virtual human model, clearly you
can do testing on that, and you don't have to do the amount of animal
testing we've been doing to date. There'd be more high-fidelity work. I
can't refrain from mentioning this, because I use high-fidelity
modeling and simulation for medical purposes: a lot of people who are
not experienced with these things think, oh, how can we ever trust the
outcome of a prediction, especially from a computer, if it's on a human
being? Somehow, it's got to be 100 percent correct or we'll never use it.
This is certainly not true. What does it mean for a model to be 100
percent correct?
There's a question mark about who's going to pay for it, and in the US
you might have to pay for your simulation if you want an enhanced result.
I've been in discussions like that. For example, some of my work is funded
by the EU in their eHealth sector, and there the guys in the
Commission in Brussels assume that everyone would get access to these
techniques; it would be free at the point of delivery. But a US colleague
would expect you'd pay a premium to get a computer simulation done to
enhance your clinician's decision. This is all part of the way that the
future will evolve.
It ceases, in the end, to be only something for people who are ill. It's
relevant to people in a state of wellness. My friend Leroy Hood is talking
all the time about wellness things. He wants to do a 100,000 Genomes
Project on wellness, because he's not trying to do the disease cases that
the UK's project is about. It's just to help people understand their
predicament, and to make decisions, lifestyle choices, based on that
information.