Beyond Zero and One: Machines, Psychedelics, and Consciousness
Ebook · 258 pages · 4 hours

About this ebook

Can we build a robot that trips on acid? This is not a frivolous question, according to neuroscientist Andrew Smart. If we can’t, he argues, we haven’t really created artificial intelligence.
In an exposition reminiscent of crossover works such as Gödel, Escher, Bach and Fermat’s Last Theorem, Andrew Smart weaves together Mangarevan binary numbers, the discovery of LSD, Leibniz, computer programming, and much more to connect the vast but largely forgotten world of psychedelic research with the resurgent field of AI and the attempt to build conscious robots. A book that draws on the history of mathematics, philosophy, and digital technology, Beyond Zero and One challenges fundamental assumptions underlying artificial intelligence. Is the human brain based on computation? Can information alone explain human consciousness and intelligence? Smart convincingly makes the case that true intelligence, and artificial intelligence, requires an appreciation of what is beyond the computational.
Language: English
Publisher: OR Books
Release date: December 3, 2015
ISBN: 9781682190074
Author

Andrew Smart

Andrew Smart is a neuroscientist and the author of Autopilot: The Art and Science of Doing Nothing.


    Book preview

    Beyond Zero and One - Andrew Smart


    CHAPTER 0001¹

    BASEL, SWITZERLAND

    If the doors of perception were cleansed everything would appear to man as it is, infinite.

    —William Blake

    It’s not like it’s the end of the world—just the world as you think you know it.

    —Rita Dove

    The great organic chemist Albert Hofmann was fond of saying that luck favors the prepared. Because he was Swiss, there was little for which Hofmann did not prepare. Meticulous and well-organized, Hofmann indeed made one of his most famous discoveries through an amazing accident and stroke of luck. The famous story is that in 1943, while Hofmann was re-synthesizing lysergic acid diethylamide-25, some of the formulation accidentally got on his hand. He soon began to experience the first inkling of the psychedelic effects of LSD, writing in his lab notes:

    Last Friday, April 16, 1943, I was forced to interrupt my work in the laboratory in the middle of the afternoon and proceed home, being affected by a remarkable restlessness, combined with a slight dizziness. At home I lay down and sank into a not unpleasant intoxicated-like condition, characterized by an extremely stimulated imagination. In a dreamlike state, with eyes closed (I found the daylight to be unpleasantly glaring), I perceived an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors. After some two hours this condition faded away.

    Initially confused by what could have caused this episode, Hofmann thought it might have been the solvent he used, because it was similar to chloroform. If it had been the crystallized LSD itself, it was an extremely potent substance given the small amount that his body had absorbed. Being a thorough and above all curious scientist, Hofmann planned a self-experiment to determine if the LSD had induced his bizarre altered state. Guessing based on the known activity of similar ergot alkaloid substances from which LSD is derived, Hofmann prepared 250 micrograms of LSD and ingested it diluted in water. On April 19, 1943, his lab journal read:

    4/19/43 16:20: 0.5 cc of 1/2 promil aqueous solution of diethylamide tartrate orally = 0.25 mg tartrate. Taken diluted with about 10 cc water. Tasteless.
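
    (For readers who want to check the arithmetic compressed into that entry: a 1/2 promil solution contains 0.5 mg of the tartrate per cubic centimeter of water, so 0.5 cc × 0.5 mg/cc = 0.25 mg, i.e. 250 micrograms.)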

    This was the smallest dose he estimated might produce an effect. Doubting that such a small amount would produce any effect at all, he planned to increase the dose gradually until he found one that worked. As it turned out, a quarter of a milligram of LSD is an enormous dose of an extremely active compound. The rest is, of course, psychedelic history, to which we will return throughout this book.

    I wrote this book in Basel, Switzerland. I spent a few months working in the same building where Hofmann performed his historic psychedelic experiment. Basel is also where Friedrich Nietzsche taught at the university and wrote some of his great works. These historical facts may not appear related to each other at first thought, or to the title of this book. But this is a story about a philosophical thread that I started in Basel and that ends with Nietzsche, Hofmann, the LSD molecule, hallucinations, binary numbers, machine consciousness, and superintelligence.

    Ray Kurzweil, director of engineering at Google, predicts that by the year 2045 we will reach the technological singularity. This is the point at which computers become superintelligent and their intelligence—their comprehension and intellectual independence—will continue to grow exponentially, far surpassing ours. The Swedish philosopher Nick Bostrom has already written a book called Superintelligence: Paths, Dangers, Strategies about how to deal with the possibility that we may lose control of the very technology we invented.

    The last few years have produced an increasing amount of popular discourse about the singularity: the hypothesized point in technological development at which machines reach human-level intelligence and then go on to achieve superintelligence. Following this occurrence, there would come an explosion of intelligent machines with cognition so advanced that it poses an immediate existential risk to humanity. Machine intelligence would learn how to improve itself independently of human programmers; it would spawn other machines capable of improving their own intelligence, resulting, some argue, in the end of the human race.

    Recent books such as Superintelligence, Kurzweil’s How to Create a Mind: The Secret of Human Thought Revealed and The Singularity Is Near: When Humans Transcend Biology, and Martine Rothblatt’s Virtually Human: The Promise—and the Peril—of Digital Immortality describe with often breathless enthusiasm the near future in which robots, whole brain emulations, mindclones, superintelligences, and cyberconsciousnesses live either harmoniously or not-so-harmoniously with or as part of their human inventors. I am beginning to suspect that the singularity refers simply to an explosion of books about artificial intelligence.²

    The more immediate and practical threat posed by AI may be economic. There are increasing fears that robots are going to cause widespread unemployment in the near future. Martin Ford argues in Rise of the Robots that even professional jobs in the law, software engineering, and healthcare will soon be eaten by software.³ These areas have traditionally required years of training and education for humans to acquire the necessary skills. We assume that competence in professional jobs requires a high degree of human judgment. But Ford convincingly shows that automation is invading every aspect of our working lives. As artificial intelligence improves, this trend toward workforce automation will only accelerate.

    Among this expanding literature, there is a strain of thought that holds that some type of Artificial General Intelligence (AGI)⁴ and then superintelligence is inevitable—i.e., the singularity will happen sooner or later, probably around the year 2100 (or 2045, or 2075, depending on which survey you believe). While most futurists hedge their bets about the exact year when machines will reach human-level intelligence, almost all agree that this century will see revolutionary advances in machine capabilities. If this sounds like science fiction, consider the fact that in just a century airplanes and air travel have gone from impossible fantasy to mundane reality. And it appears that computer technology is advancing at an even more rapid pace than aviation did.

    Once this intelligence explosion takes place, Luke Muehlhauser writes in Singularity Hypotheses: A Scientific and Philosophical Assessment, "Such machines would be superior to us in manufacturing, harvesting resources, scientific discovery, social aptitude, and strategic action, among other capacities. We would not be in a position to negotiate with them, just as neither chimpanzees nor dolphins are in a position to negotiate with humans."

    But would this superintelligence be conscious? Bostrom’s otherwise excellent book on the dangers of machine intelligence includes only cursory treatment of human consciousness, and does not discuss whether the various forms of machines he foresees taking over the world would be conscious. The focus is instead on intelligence. Many researchers in AI feel that consciousness simply does not matter—the things the super AI would need to do to take over the world and wipe out the human race do not require consciousness. Others feel that human consciousness will be explained away as an ill-defined philosophical invention.

    There are a few who argue that consciousness is a result of the fundamental organizational principle of the brain and therefore that we cannot call any machine truly intelligent without it. In other words, human consciousness is not a useless phenomenon but confers some real adaptive advantage. Whether that advantage is enough to survive a mega-superfast-and-supersmart army of zombie artificial intelligences is another question. I argue that consciousness is the fundamental issue for superintelligence, and thus, in contrast to many writers on AI, I focus on machine consciousness.

    Martine Rothblatt’s Virtually Human, much like Kurzweil’s earlier The Age of Spiritual Machines, deals more with machine and human consciousness but insists that cyberconscious mindclones will become commonplace. Rothblatt glosses over some deep philosophical issues about objectivity and subjectivity, and assumes that hardware and software engineering will overcome all their myriad difficulties. She is less worried than Bostrom about robots wiping us out, but more worried that conscious machines will be treated as a kind of oppressed minority, to whom we’d better be nice lest we make them angry.

    Rothblatt writes, "Given the exciting work on artificial intelligence that’s already been accomplished, it’s only a matter of time before brains made entirely of computer software express the complexities of the human psyche, sentience, and soul." Contrast this view with that of Duke University neuroscientist Miguel Nicolelis,⁵ who is actually building brain-machine interfaces, or BMIs. "The arguments favoring the emergence of a relativistic brain strongly suggest," writes Nicolelis, "that the primate central nervous system, and the human mind in particular, may not be compressed into any type of classical computational algorithm."

    Bostrom and Rothblatt are preparing for a future in which it is assumed that technology will achieve these dazzling heights of superintelligence and cyberconsciousness. Both are much more concerned with economic and social issues surrounding superintelligence than with the actual science, philosophy, and engineering involved. There seems to be a curious correlation between how closely an author works with actual brains and her views on AI—the more you work in neuroscience the less you believe in superintelligence.

    My intention is not to try to poke holes in the superintelligence and cyberconsciousness arguments. Nor do I attempt to catalogue the legion of specific ongoing AI projects, all of which, according to Rothblatt, Bostrom, and Kurzweil, portend the rise of the supermachine. Rather, I take the possibility of machine consciousness seriously, attempting a few answers and posing a few questions.

    As philosopher Hilary Putnam said in 1964, "What I hope to persuade you is that the problem of the Minds of Machines will prove, at least for a while, to afford an exciting new way to approach quite traditional issues in the philosophy of mind. Whether, and under what conditions, a robot could be conscious is a question that cannot be discussed without at once impinging on the topics that have been treated under the headings Mind-Body Problem and Problem of Other Minds."

    A parallel trend within medical science, and to a lesser extent in popular culture, is the return of what is probably the most controversial substance ever developed by man: LSD. Before it was classified as a Schedule I drug in 1970 under the Controlled Substances Act, more than forty thousand people had participated in roughly one thousand clinical studies on LSD, about which some two thousand research papers were written. The drug showed major promise as a treatment for depression, alcoholism, and anxiety, but after the ban, serious research on LSD stopped virtually overnight. While it has always been popular as a recreational drug, and an estimated nine percent of Americans have tried it, its therapeutic potential in clinical settings has been forgotten. With modern neuroscience techniques, a trickle of new studies is starting to reveal the precise mechanisms by which LSD and other hallucinogens alter consciousness. The insights provided by this research have the potential to transform the science of human consciousness.

    But why does the human brain react to LSD the way it does? And, if a machine were to achieve human-like consciousness, would it also react to digital LSD and enter an altered state of machine consciousness? Recently, Google’s artificial neural networks for image recognition have been creating fantastic psychedelic distortions of images all across the web. The networks’ parameters are tuned to search for certain features, and this tuning is fed back into the network as it analyzes an image. In other words, according to Google, "neural networks that are trained to discriminate between different kinds of images have a lot of the information needed to generate images too." Google calls this inceptionism.
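
    To make that feedback loop concrete, here is a minimal Python sketch, assuming PyTorch and torchvision are installed. It is a simplified illustration rather than Google’s actual code: instead of amplifying a chosen intermediate layer, it takes gradients of the network’s output with respect to the image itself, so whatever features the network already detects get exaggerated.

        import torch
        import torchvision.models as models

        # An Inception-style image-recognition network, in the spirit of Google's examples.
        model = models.googlenet(weights="DEFAULT").eval()

        def dream(image, steps=20, lr=0.05):
            """Amplify whatever the network already 'sees' in the image."""
            image = image.clone().requires_grad_(True)
            for _ in range(steps):
                loss = model(image).norm()   # how strongly do the learned features respond?
                loss.backward()              # gradient w.r.t. the image, not the weights
                with torch.no_grad():
                    image += lr * image.grad / (image.grad.abs().mean() + 1e-8)
                    image.grad.zero_()
            return image.detach()

        # Start from any (1, 3, 224, 224) image tensor; random noise works for a demo.
        hallucinated = dream(torch.rand(1, 3, 224, 224))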

    Given the unique and powerful way in which LSD and other hallucinogens alter human and animal consciousness, it is surprising to me that the field of artificial intelligence has not engaged with research on psychedelic drugs. It seems clear that if you are interested in building a machine that matches or exceeds human intelligence, and has human-like conscious experience, you would be extremely interested in a group of molecules that reliably produce altered states of consciousness in humans and animals. If you are willing to entertain the possibility that machines can have human consciousness, you should be willing to consider the possibility that these machines can, and perhaps should, trip on acid.

    This book is also an attempt to ground some of the often-hyperbolic discourse on machine intelligence in philosophical, psychopharmacological, engineering, and scientific contexts. Here, the details are important: in fact, they are everything. As Daniel Dennett, a philosopher sympathetic to artificial intelligence, has said:

    The recent history of neuroscience can be seen as a series of triumphs for the lovers of detail. Yes, the specific geometry of the connectivity matters; yes, the architecture matters; yes, the fine temporal rhythms of the spiking patterns matter, and so on. Many of the fond hopes of opportunistic minimalists have been dashed: they had hoped they could leave out various things, and have learned that no, if you leave out x, or y, or z, you can’t explain how the mind works.

    Or, I would argue, build a superintelligent machine.

    You can do all the formal philosophical and mathematical proofs you want, and design the greatest self-improving algorithm known to AI, but until you really put a thing you have designed and built into the real world and it works reliably, it remains what those in the tech industry call smoke and mirrors or, worse, vaporware. All the AI systems shown in public thus far are extremely good at domain-specific tasks—like chess or Jeopardy! But they only work in these narrow domains. Give IBM’s artificially intelligent computer system Watson the clue "this happens to your voice when you inhale the helium from a balloon," and he is just as likely to ask, "What is Moscow?"

    The likelihood is that the majority of scary AI systems of the future are still vaporware. In other words, they are decent demos that don’t actually work, which doesn’t mean they will never work. But before we prepare to bow down before our robot overlords, we should demand more than dramatic books and papers.

    In response to this challenge, many of the writers cited above would point to dozens of cool AI projects, such as Jürgen Schmidhuber’s Gödel Machine, which can self-improve by writing its own code after it proves the new code is more optimal. It is a stunning model, but has yet to be really programmed because of the serious technical challenges involved. (Of course these challenges will be overcome eventually.) Or they might point to the success of the artificial intelligence startup based in London called DeepMind, recently acquired by Google for half a billion dollars. Larry Page has said that DeepMind is the most exciting thing he has seen in a long time. DeepMind created an artificial intelligence system that can teach itself how to play video games—oftentimes better than humans. The algorithm is simply given the instruction to increase the game score and it learns the rest by itself. Google wants to apply these deep learning algorithms to many types of data, from healthcare to finance.

    DeepMind’s algorithm is directly inspired by theories about how the human brain stores, retrieves, and learns information—an approach called reinforcement learning. The founder of the company, Demis Hassabis, is himself a neuroscientist who began his career as a game developer. Using insights from neuroscience, he proposes to solve the problem of intelligence. Even though DeepMind’s AI can only play relatively simple video games at this point, the implications for autonomous computer systems are enormous. These theoretical breakthroughs and practical successes all contribute to the growing chorus of voices announcing the advent of the age of superintelligence. However, the inexorable march of progress in artificial intelligence does not happen without raising serious ethical, economic, social, and philosophical questions.
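
    As a concrete illustration of that learning loop, here is a minimal tabular Q-learning sketch in Python. It is a hypothetical toy, not DeepMind’s method: their system combined this kind of reward-driven update with deep neural networks rather than the simple lookup table used here. The only supervision is the reward signal, the change in game score.

        import random
        from collections import defaultdict

        Q = defaultdict(float)             # Q[(state, action)] -> expected future score
        alpha, gamma, epsilon = 0.1, 0.99, 0.1

        def choose_action(state, actions):
            if random.random() < epsilon:                       # explore occasionally
                return random.choice(actions)
            return max(actions, key=lambda a: Q[(state, a)])    # otherwise exploit

        def update(state, action, reward, next_state, actions):
            # Nudge the value of (state, action) toward the reward plus the
            # discounted value of the best action available in the next state.
            best_next = max(Q[(next_state, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])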

    Most of the philosophical issues in this book have been addressed elsewhere in great and (at times) agonizing detail. However, there are still a few implicit assumptions in the current discourse about superintelligence that should be made explicit, so that they can be examined and, if possible, improved—or jettisoned. I focus on the implicit assumptions that interest me, but there are surely many more.

    This includes the computational stance—the assumption that brains and computers are carrying out computations. This also includes the assumption that consciousness is not important for artificial intelligence or for us. Finally, this includes the assumption that information underlies reality and is a natural entity like mass, which can be used to explain physical phenomena like human consciousness, an idea I call the information theory of everything.

    I also focus on the consciousness part of the equation, at the expense of the intelligence part. Hopefully intelligence is a quality of this book, but what is more interesting to me than intelligence is the study of subjective internal conscious states.⁸ Intelligence, super or not, is boring. Being a philosopher at heart, I am interested in the first-person subjective ineffableness of human experience, and whether a robot could have this too.

    Within contemporary cognitive science there is a blurry and awkward line between intelligence and consciousness. However, it seems clear to me that much of our intelligence derives from the fact that we are conscious, even though most of our brain’s activity is well known to remain unconscious. To use the computer AI metaphor, we can to a certain degree inspect and change our code to improve our own behavior. This is in fact borne out by current neuroscience; the organization of our cortex is in constant flux. Rats are conscious too, but we would not claim that they are very intelligent relative to humans. The assumption in artificial intelligence is that a computer must first achieve intelligence in order to become conscious. So if a computer has a rat’s level of intelligence, would it also have a rat’s level of consciousness?

    Neither brains nor computers carry out computations. Consciousness is a hallucination, albeit a very useful one. And information is an epistemic tool that is a cultural invention rather than a natural entity.

    Some philosophers, like David Chalmers, argue that the brain doesn’t have to be computational or informational to be simulated. Artificial intelligence does not have to be conscious to achieve either evil or benign demigod-level powers of intelligence. One thing discussions on superintelligence are missing that is fundamental to human brains is hallucination. Human brains hallucinate frequently in many different circumstances, because of certain diseases and on certain types of drugs such as LSD. What’s more, our consciousness itself shares many properties commonly associated with hallucinations. In other words, our brains are probably more or less hallucinating all the time. Hallucinations are real perceptions, just not very practical ones.

    Some argue that once superintelligence occurs, machines will simply enslave or wipe out the human race. Later in the book, I will examine an idea for robots that dates from the 1960s: would superintelligent machines be nicer to us if we gave them LSD? The great psychedelic psychiatrist Stanislav Grof said, "Psychedelics are spiritual tools. If they are properly used, they open spiritual awareness." Could psychedelics open spiritual awareness for conscious machines? And if that did not work, could digital LSD incapacitate superintelligence at least temporarily? This was the idea the CIA had for weaponizing LSD in the 1960s—give the enemy army LSD so they could not fight.

    I embarked on this
