

Excerpts from Homo Deus: A Brief History of Tomorrow

Reading A Brief History of Tomorrow, with Excerpts

I finally finished reading Homo Deus: A Brief History of Tomorrow today. I did not see or buy this book in a bookstore. Coincidentally, I had some expiring reward points from China Unicom that I had to redeem before they became invalid, and almost by accident I used them to get the electronic edition of this book.

Although it was a book I redeemed points for on a whim, reading it felt rewarding, and it is full of classic examples. I read books about the brain and the psyche as a child, and later took some neuroscience courses on Coursera, but this book examines the future of humanity in the context of history and social development, and its many novel insights helped me broaden my horizons.

The book covers a lot of ground: what life in the future may be like, changes in warfare, the connection between science and religion, the distinction between reality and fiction, life, pleasure, and death, as well as human evolution, AI, and algorithms. Many of the topics were fascinating, and I will excerpt some passages here that I may use in my future studies.

On Neuroscience

I once analyzed the connection between my father's stroke and his hemiplegia in one of my mini-essays. Based on the enhanced CT, I thought his stroke had occurred in the corpus callosum, blocking the transmission of motor nerve signals from the right side of the brain to the left side of the body and thus causing hemiplegia. My view at the time was quite naive, but I became very interested in the corpus callosum's role in exchanging information between the left and right hemispheres. Cases of people who have lost the connection between the two hemispheres are also mentioned in A Brief History of Tomorrow:

Many of the breakthroughs in the study of left-right brain relationships stem from studies of people with epilepsy. In severe epilepsy, an electrical storm sets off in one area of the brain and spreads rapidly to other areas, causing an acute seizure. During a seizure, patients have no control over their bodies; once seizures become frequent, they often lose their jobs and cannot live a normal life. In the mid-20th century, if other treatments failed, doctors' last resort was to cut the bundle of nerves connecting the two hemispheres, so that an electrical storm in one hemisphere could not spill over into the other. For brain scientists, these patients were a gold mine, providing a wealth of astonishing data.

The most famous researchers of these "split-brain" patients are Roger Sperry (who won the 1981 Nobel Prize in Physiology or Medicine for his groundbreaking discoveries) and his student, Professor Michael S. Gazzaniga. One study involved an adolescent boy. The researchers asked him what he wanted to be when he grew up, and the boy replied, "A draftsman." This answer was provided by the left brain, which mostly controls logical reasoning and language. However, the boy's right brain also had an active language center that, while unable to control spoken language, could spell out words using Scrabble letter tiles. The researchers were curious to know what the boy's right brain had to say, so they scattered the letter tiles on the table, wrote on a piece of paper, "What do you want to be when you grow up?" and placed the paper at the edge of the boy's left visual field. Data from the left visual field is processed by the right brain, which has no control over speech, so the boy said nothing; but his left hand began to move quickly around the table, collecting tiles from all over and spelling out "car race."

Another equally surprising behavior was seen in World War II veteran WJ, whose hands were controlled by separate hemispheres. With no connection between his left and right brain, his right hand would sometimes open a door while his left hand closed it.

In another experiment, Gazzaniga's team showed the left brain (responsible for language) of a patient known as PS a picture of a chicken claw, while showing his right brain a picture of a snowy landscape. When asked what he saw, PS replied, "A chicken claw." Gazzaniga then showed PS many more pictures and asked him to point out the one that best matched what he had seen. The patient's right hand (controlled by his left brain) pointed to a chicken, but at the same time his left hand reached out and pointed to a snow shovel. Gazzaniga then asked the obvious question: "Why did you point at both the chicken and the snow shovel?" PS replied, "Uh, the chicken claw has something to do with the chicken, and to clean the chicken coop you need a shovel."

What on earth was going on here? The left brain, which controls speech, never received the information about the snow scene and had no idea why the left hand was pointing at the shovel; as a result, it invented an explanation of its own that felt reasonable. After repeating the experiment many times, Gazzaniga concluded that the left brain not only controls verbal expression but also acts as an internal interpreter, weaving seemingly plausible stories out of fragmentary clues in an attempt to find meaning in our lives.

On AI

AI is evolving very rapidly. As a programmer, I absorb knowledge and information about AI whether I mean to or not. Where it was once very difficult for AI even to identify a cat, today AI has become readily available, and everyone can use it for interesting applications. At this year's Google developer conference, for example, we saw AI applications that could recognize lines drawn by human hands, and others that could continue music composed by humans.

When A Brief History of Tomorrow talks about scientific development and AI, I am excited on the one hand, because the future holds so many possibilities, and worried on the other that I may lose my job as a programmer.

It is wishful thinking to believe that humans will always have unique capabilities of their own and that mindless algorithms will never catch up. Science's current answer to this pipe dream can be summarized in three simple principles.

  1. Creatures are algorithms. Every animal (including Homo sapiens) is a collection of various organic algorithms, the result of millions of years of evolutionary natural selection.
  2. The operation of an algorithm is not affected by the material that carries it out. Whether the beads of an abacus are wooden, iron, or plastic, two beads plus two beads still equal four beads (see the short sketch after this list).
  3. Therefore, there is no reason to believe that non-organic algorithms can never replicate or surpass what organic algorithms can do. As long as the computations remain valid, what difference does it make whether the algorithm runs on carbon or silicon?
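The second principle is essentially what computer science calls substrate independence: an algorithm is defined by its steps, not by the material that executes them. As a toy illustration (my own sketch, not from the book), here is the abacus example in Python; the counting algorithm never inspects what a "bead" is made of, yet it always gives the same answer.

```python
# A toy illustration of substrate independence: the same counting
# algorithm yields the same result regardless of bead material.

def add_on_abacus(row_a, row_b):
    """Add two rows of beads by merging them and counting the total.

    The algorithm only counts beads; it never looks at their material.
    """
    return len(list(row_a) + list(row_b))

wooden = ["wood", "wood"]
iron = ["iron", "iron"]
plastic = ["plastic", "plastic"]

# Two beads plus two beads equal four beads, whatever they are made of.
assert add_on_abacus(wooden, wooden) == 4
assert add_on_abacus(iron, plastic) == 4
print(add_on_abacus(wooden, iron))  # => 4
```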

It is true that there are still many things organic algorithms do better than non-organic ones, and experts have repeatedly claimed that some things non-organic algorithms will "never" be able to do. But it turns out that "never" usually means no more than a decade or two. Not so long ago, people were fond of citing facial recognition as an example of a task that even a baby could do easily but that the most powerful computers could not accomplish. Today, facial recognition programs can recognize faces far faster and more efficiently than humans can, and police and intelligence agencies are well accustomed to using such programs to scan countless hours of surveillance footage to track down suspects and criminals.
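To give a sense of how accessible this has become, here is a minimal sketch using the open-source Python package face_recognition. This is my own illustration, not something from the book, and the image file names are hypothetical placeholders.

```python
# A minimal face-matching sketch with the open-source
# face_recognition package (pip install face_recognition).
import face_recognition

# Load a known face and a frame from surveillance footage.
# These file names are hypothetical placeholders.
known_image = face_recognition.load_image_file("suspect.jpg")
frame_image = face_recognition.load_image_file("camera_frame.jpg")

# Turn each detected face into a 128-dimensional encoding.
# (Assumes at least one face is found in the known image.)
known_encoding = face_recognition.face_encodings(known_image)[0]
frame_encodings = face_recognition.face_encodings(frame_image)

# Compare every face in the frame against the known face.
for encoding in frame_encodings:
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    print("Possible match" if match else "No match")
```

Real systems are far more sophisticated, but the building blocks are the same: detect, encode, and compare.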

Since we cannot predict what the job market will look like in 2030 or 2040, we do not know how to educate the next generation today. By the time today's children reach 40, everything they learned in school may be obsolete. Traditionally, life has been divided into two main periods: a period of study followed by a period of work. This traditional model will soon be completely outdated, and there is only one way to avoid being left behind: keep learning throughout your life and keep reinventing yourself. The trouble is that many, if not most, people probably cannot do this.

If you think about the next few decades, the things to watch out for are global warming, worsening inequality, and the disruption of the job market. But if you take the grand view of life as a whole, no other problem or development is as important as the following three closely related ones.

  1. Science is converging on an all-encompassing dogma that all living things are algorithms, and life is about data processing.

  2. Intelligence is being decoupled from consciousness.

  3. Unconscious but highly intelligent algorithms may soon know us better than we know ourselves.

These three developments raise three key questions that I hope will remain on the reader's mind long after they finish reading this book.

  1. Are organisms really just algorithms, and is life really just data processing?

  2. Which is more valuable, intelligence or consciousness?

  3. What will happen to society, politics, and daily life when unconscious but highly intelligent algorithms know us better than we know ourselves?
