The Cybersecurity Canon: Life 3.0 – Being Human in the Age of Artificial Intelligence

Dec 05, 2017


We modeled the Cybersecurity Canon after the Baseball and Rock & Roll Halls of Fame, except for cybersecurity books. We have more than 25 books on the initial candidate list, but we are soliciting help from the cybersecurity community to expand the list well beyond that. Please write a review and nominate your favorite.

The Cybersecurity Canon is a real thing for our community. We have designed it so that you can directly participate in the process. Please do so!

Executive Summary

Life 3.0 – Being Human in the Age of Artificial Intelligence is absolutely fascinating. Although it doesn’t adequately address cybersecurity solutions, is very complicated in places, and is a lengthy read, I recommend including it in the Cybersecurity Canon. Despite these drawbacks, the book made me think deeply about the future and the rather undefined, but imminently critical, role that cybersecurity must play if we are to get the future right – or even have a far future at all. This book is about the future of life in our universe, specifically the next evolution of life, which Tegmark calls Life 3.0, or the third stage of life, in which superintelligent machines will coexist with, and eventually replace, human life. As a very quick reference, bacteria represent the first stage of life, and we humans represent the second. There are many interesting and controversial issues associated with this next evolution of life, including when – not whether – it will come. These issues traverse physics, biology, engineering, and the ethical implications of how we should pursue the goals of artificial intelligence along what is likely a precarious path.

There is also a long list of extremely serious consequences associated with decisions that the world’s best minds across a variety of fields must be involved in making along the way if we are to avoid ultimate catastrophe. Tegmark takes the reader on a journey through various scenarios that could play out – and their implications. He dives deeply into the concepts of life, intelligence and consciousness as they apply to AI and the future. That is the primary reason why I believe this book deserves a spot in the Canon. Today’s cybersecurity community must better understand its role in the profound issues this book illuminates about the future of machines and AI. If the future of life in our cosmos is dependent on machines, then the security of those machines is the fundamental condition for the future of life.


My interest in studying AI in a more serious way was sparked this past September 5, when I heard in a CNN news report that Russian Federation President Vladimir Putin had recently stated, “Artificial intelligence is the future, not only of Russia, but of all mankind. Whoever becomes the leader in this sphere will become the ruler of the world.”[1] He wasn’t the only world-famous person making alarming statements about AI, either. CNN went on to say in the same report that, after seeing Putin’s statement, SpaceX and Tesla CEO Elon Musk tweeted: “Competition for AI superiority at national level most likely cause of WW3.”[2] I therefore wanted to read something current on the topic. The fact that the author of Life 3.0 – Being Human in the Age of Artificial Intelligence, Max Tegmark, is a physics professor at MIT suggested a serious view of the subject, rather than some of the hyperbolic AI rhetoric sometimes seen in the media. Additionally, the fact that Tegmark developed the underlying concept this book represents with other serious people, such as Elon Musk, Stephen Hawking, Bill Gates and Larry Ellis, just to name a few, lured me further to read this particular book about AI.

The underlying concept the book represents is the AI safety research movement (with the goal of creating beneficial AI), probably most notably represented today by the nonprofit Future of Life Institute (FLI) founded by Tegmark. However, the underlying question the book asks the reader to ponder is whether the decisions we make now and into the near future regarding the advancement of technology will provide life the opportunity to flourish in unimaginably magnificent ways throughout the cosmos or destroy itself and become meaningless.

The book begins with a prelude that lays out one possible, relatively near, future: the development and harnessing of an AI technology by a corporate team of brilliant researchers and a CEO with the secret ambition to take over the world for the greater good. Over time, this team accomplishes the most dramatic transition in history. Using its AI technology, the corporation creates an alliance consolidating global power; eliminating all previous national power structures; ending state conflict; raising the entire planet’s standard of living; improving education, health, prosperity and governance; and enabling life to flourish into the far future throughout the cosmos. I must admit that this was a bit dramatic for me, but a good scene-setter nonetheless.

The rest of the book is organized into eight chapters that take the reader on a journey that starts at the beginning of time and describes complexity and how it relates to intelligence. Early on, Tegmark describes the three stages of life: Life 1.0 describes simple biological forms that can survive and replicate; Life 2.0 describes cultural forms that can not only survive and replicate but design their own “software,” such as humans teaching their brains to learn and communicate; and Life 3.0 describes technological forms that can do all of the above, plus design their own “hardware,” such as superintelligent machines that can design their own physical improvements. Next is a detailed description of intelligence, memory, computation and learning. These qualities are discussed in the context of whether they are limited to Life 2.0 humans or applicable to Life 3.0 machines as well.

Then, Tegmark discusses some of the main issues regarding AI and its impact on the near future: breakthroughs, vulnerable versus robust AI, laws, weapons, jobs and wages, and human-level intelligence. I found this to be a very interesting and practical section, in which the cybersecurity professional can first begin to understand why securing AI is critical in the near term and, especially, how this sets the stage for ultimate consequences over the much longer term. Inadequate security for AI-driven systems has already contributed to catastrophic events in space exploration and financial stability, and it poses severe public safety risks in manufacturing, transportation, energy and healthcare. These consequences will only become more pronounced if AI isn’t made secure for the future.

There is also a section describing the current and future-trending cybersecurity problems and their negative impact on communications. This is where I found the discussion in need of a more robust infusion from the cybersecurity community. There’s no doubt that the trend of increasing exploitation of vulnerabilities and delivery of malicious software into our communication systems – resulting, to date, in undesirable influence, deception, disruption and even destruction – has been deplorable. However, there seems to be little realization that, as the operational technology world increasingly connects to the information and communication technology world through the phenomenon known as the “internet of things,” insecure systems and networks will mean drastic risk to national security and economic viability, and even dire public safety issues on a grand scale. While the consequences of inadequate security are at least marginally described, the discussion of what to do about the problem is not adequately covered. This marginal treatment of the role of cybersecurity solutions, without even a projection of the direction required, inspired me to think about what the cybersecurity community should be doing to ensure that we are involved in solving the problem and not just assuming it away.

Subsequently, Tegmark takes the reader on a journey through the “intelligence explosion” and its aftermath over the next 10,000 years. The latter includes some very interesting possible scenarios for outcomes and consequences, all depending on how we design AI’s path, and on whether the superintelligent machines of the future stay on those paths or choose paths of their own.

Next, Tegmark takes the reader through a far-future look at the next billion years and beyond. Of course, much of this portion depends on the decisions and actions of the preceding periods, because at this future point in time there are no humans. Machines have taken over the cosmos, and it’s incumbent on us, now and in the near future, to make decisions about the goals we set not only for ourselves but for the machines we develop, engineer, program and, ultimately, set free to carry on the legacy of life in the cosmos.

Finally, Tegmark describes the ultimate characteristic: consciousness. “Ultimate” is a good word because, in the end, consciousness defines whether the machines that succeed us into the far future will truly extend the continuity of our existence. It’s a fairly cold concept: that the ultimate meaning of life depends on whether the experiences we have today – as biological forms running complex neurological computations limited by the laws of physics – can be experienced as consciousness and replicated by machines with an intelligence we can’t yet comprehend. However, that is, in the end, the premise of this book, and it holds only if we do it right, including doing it securely.


I think this is a book for just about anyone who is, or should be, interested in the direction of AI, its relationship to the human race, and, ultimately, the future of life in the universe. While a bit complex and rather lengthy (749 pages in electronic form), it’s current, relevant and compelling. For the professional cybersecurity community, I think this book is a must-read. I say this even though cybersecurity is only minimally addressed in the book, and solutions to potentially catastrophic cybersecurity problems are conveniently assumed. As far as I’m concerned, this book is an important contribution to a much-needed dialogue to ensure we are making informed decisions about the safe development of intelligent machines in today’s digital age and beyond.

As a community, we must make a serious effort to understand our role in the future of machines and AI – a role that may well be existential for life itself. If you believe that we humans will eventually be replaced by superintelligent machines built to outlive our dying sun and survive interstellar travel to populate the cosmos with life, intelligence and consciousness, then cybersecurity becomes about more than protecting our way of life in today’s digital age. It becomes an issue of protecting this future form of life itself, and the very continuity of existence and consciousness. And if we don’t do that, what’s the meaning of life at all? This is weighty indeed for all cybersecurity professionals to ponder, and the sooner we begin thinking about it, the better, in my opinion.

[1] CNN news report by Gregory C. Allen, September 5, 2017.

[2] Ibid.
