In this episode, Byron and Esther talk about intelligence, jobs, her experience as a backup cosmonaut, and more.
Byron Reese: Today, our guest is Esther Dyson. Esther Dyson is a living legend. She has been an angel investor, and sits on the boards of a number of companies. She is also a best-selling author, a world citizen, and a backup cosmonaut for the Russian Space Program. Now, she serves as the Executive Founder for a non-profit called Way to Wellville. Welcome to the show, Esther.
Esther Dyson: Delighted to be here.
Let’s start with that; that sounds like an intriguing non-profit. Can you talk about what its mission is, and what your role therein is?
Yeah. My role is, I founded it. The reason I founded it was a question. As I was an angel investor, doing tech, and getting more and more interested in healthcare, and biotech, and medicine, I had to ask the basic question, which is: “Why are we spending so much money and countenancing so much tragedy by fixing people when they’re broken, instead of keeping them healthy and resilient, so that they don’t get sick or chronically diseased in the first place?”
The purpose of Way to Wellville is to show what it looks like when you help people stay healthy. I could go on for way too long, but it’s five small communities around the US, so you can get critical mass in a small way, rather than trying to reshape New York City or something.
The basic idea is that this happens in the community. You don’t actually need to experiment on and inspect people one by one; instead, you change the environment they live in and then look at the overall impact of that. It started a few years ago as a five-year project and a contest. Now, it’s a ten-year project, and it’s more like a collaboration among the five communities.
One way AI is really important is that in order to show the impact you’ve had, you need to be able to predict pretty accurately what would’ve happened otherwise. So, in a sense, these are five communities, and the United States is the control group.
But, at the same time, you can look at a class of third graders and do your math, and say that one-third of these are going to be obese by the time they’re sixteen, 30% will have dropped out, 10% will be juvenile delinquents, and that’s simply unacceptable. We need to fix that. So, that’s what we’re doing.
We’ll get to the AI stuff here in a moment but I’m just curious, how do you go about doing that? That seems so monumental, as being one of those problems like, where do you start?
Yeah, and that’s why we’re doing it in small communities. Part of the drill was to ask the communities what they want, but at the same time I went in thinking diabetes and heart disease, and exercise, and nutrition. The more we learned, the more we realized that, as you say, you’ve got to start at the beginning, which is prenatal care and childhood. If you come from a broken home or have abusive parents, chances are it’s going to be hard for you to eat properly, and it’s going to be hard for you to resist drugs.
There’s a concept called adverse childhood experiences. The mind is a very delicate thing. In some ways, we’re incredibly robust and resilient… But then, you look at the fact that a third of the US population is obese—a smaller number, depending on age, is diabetic. You look at the opioid addiction problem, you look at the number of people who have problems with drinking or other kinds of behavior, and you realize: oh, they’re all self-medicating. Again, let’s catch them when they’re kids and help them be addicted to love and children and exciting work, and feeling productive—rather than substances that cause other problems.
What gives you hope that you’ll be successful? Have you had any promising early findings in the first five-year part?
Not the kind you’d want. The first thing is in each community, part of the premise was there’s a group of local leaders who are trying to help the community be healthy. Mostly, they’re volunteers; they don’t have resources; they’re not accountable; so it’s difficult. We’re trying to help bring in some—but not all—of that Silicon Valley startup culture… It’s okay to fail, as long as you learn.
Plan B is not a disaster. Plan B is the result of learning how to fix Plan A, and so forth. If you look at studies, it’s pretty clear that having caring adults in a child’s life is really important. It’s also pretty clear that there’s no way you can eat healthily if you can’t get healthy food, either because you’re too poor, or it’s inaccessible, or you don’t know what’s healthy.
Some of these things are the result of childhood experiences. Some are the result of poverty, and transportation issues… Yes, you’re right, all these things interact. You can’t go in and fix everything; but if you focus on the kids and their parents, that’s a good place to start.
I learned a lot of concepts. One of them is child storage, as opposed to child enrichment. If your child is going to a preschool that helps them learn how to play, that has caring adults, that can help the kid overcome a horrible home environment… It’s not going to solve all the community’s problems, but it’s definitely going to help some percentage of the children do better. That kind of stuff spreads, just the way the opposite spreads.
In the end, is your hope that you come out of it with, I guess, a set of best practices that you can then disseminate?
People know the best practices. What we really want to do is two things. One, show that it’s possible, and inspire people in regular communities. This is not some multi-million-dollar gated community designed for rich people to live healthy and fulfilling lives and go to the spa.
These are five real places in various parts of America: Muskegon, Michigan; Spartanburg, South Carolina; North Hartford, Connecticut; Clatsop County, Oregon; and Lake County, California. The point is that normal people in these places can fundamentally change the community to make it a place where kids are born lucky, instead of unlucky.
Yes, they can look at what we did, and there will be certain things we did. One is that the community needs to come together across different sectors; the schools, and the business people, and the hospital system need to cooperate. And, most likely, somebody needs to pay.
You need coaches to do everything from nurse visits, pre- and post-birth, to early childhood education that’s effectively delivered, caring teachers in the schools, and healthy school lunches. It’s really sad to see that the government just backtracked on sodium and other stuff in the school lunches… But in a sense, we’re trying to simulate what it would look like if we had really wonderful policies around fostering healthy childhoods, and show the impact that has.
Let’s zoom the lens way out from there, because that might be an example of the kinds of things you hear a lot about today. It seems like it’s a world full of insurmountable problems, and then it’s also a world full of real, legitimate hope that there’s a way to get through them.
If I were to ask you in a broad way, how do you see the future? [Through] what lens do you look at the future, either of this country, or the world, or anything, in ten years, twenty years, thirty years? What do you think is going to happen, and what will be the big driving forces?
Well, I get my dopamine from doing something, rather than sitting around worrying. Intellectually, I feel these problems; and practically, I’m doing something about them the best way I know that will have leverage, which is doing something small and concentrated, rather than diffuse with no impact.
I want a real impact in a small number of dense places. Then, make that visible to a lot of other people and scale by having them do it, not by trying to do it myself. If you didn’t have hope, you wouldn’t do anything. Nothing happens without people doing something. So, I’m hopeful. Yeah, this is very circular.
So, I was a journalist, and I didn’t persuade people, I told them the truth. Ultimately, I think the truth is extremely powerful. You need to educate people to understand the truth and pay attention to it, but the truth is always much more persuasive than a lot of people just trying to cajole you, or persuade you, or deceive you, or manipulate you.
I want to create a truth that is encouraging and moves people to action, by making them feel that they could do this too; because they can, if they believe they can. This is not believing you will be blessed… It’s more like: Hey, you’ve got to do a lot of the hard work, and you need to change your community, and you need to think about food, and you need to be helping parents become better parents. There are active things you can do.
Is there any precedent for that? That sounds like it calls for changing lots of behaviors.
Well, the precedent is all the lucky people we know whose parents did love them, and who felt secure, and did amazing things. Many of them don’t realize how lucky they are. There’s also, of course, the people who had horrible circumstances and survived somehow anyway.
One of the best examples currently is J.D. Vance in the book Hillbilly Elegy. Many of them were just lucky to have an uncle, or a neighbor lady, or a grandmother, or somebody who gave them that support that they needed to overcome all the obstacles, and then there’s so many others who didn’t [have that].
Yes, certainly, there are people who’ve done things like this, but not ones that are visible enough that it really moves people to action. As part of this, we’re hoping to have a documentary that explains what we’re doing. Right now, it’s early, because we haven’t done that much.
We’ve done a lot of preparation, and the communities are changing, but believe me: We’re not finished. I will say, when we started, we put out a call for applications, and got applications from forty-two communities asking us to come in and help.
Then, in the Summer of 2014, Rick Brush, our CEO, and I picked ten of them to go visit. One of them we turned down, because they were too good. That’s the town of Columbus, Indiana, which is, basically, the company town of Cummins Engine, which is just a wonderful place.
They were doing such a good job making their community healthier that we said, “Bless you guys, keep doing it. We don’t want to come in and claim the credit. There’s five other places that need us more.”
There are some pretty wonderful places in America, but there are also a lot of places that have lost their middle class, where people are dispirited and unemployment is high. They need employers, they need good parents, they need better schools, they need all this stuff.
It’s not a nice white lady who came from New York to tell you how to live or to give you stuff. It’s this team of five that’s here to help you fix things for yourself, so that when we leave in ten years, you own your community. You will have helped repair it.
That sounds wonderful, in the sense that, if you ever can effect change, it should be kind of a positive reinforcement. Hopefully, it stays and builds on itself.
Yeah. It’s like, if you need us to be there, yes, we believe we’re helping in making a difference. But at some point, it’s their community, they have to own it. Otherwise, it’s not real, because it depends on us and when we leave it, it’s gone.
They’re building it for themselves; we’re just kind of poking them, counseling them, and introducing them to programs. And, “Hey, did you know this is what they’re doing about adverse childhood experiences in this or that study? This is how you can design a program like that for yourselves, or hire the right training company, and build capacity in your own community.”
A lot of this is training people in the community to deliver various kinds of coaching and care, and stuff like that.
Your background is squarely in technology. Let’s switch gears and chat about that for a moment. Let’s start with the topic of the show, which is artificial intelligence. What are your thoughts about it? Where do you think we’re at? Where do you think we’re going? What do you think it’s all about?
Yeah. Well, so, I first wrote about artificial intelligence in a newsletter back in the days of Marvin Minsky and expert systems. Expert systems were basically logic: if this, and that, and the other thing, then… If someone shows up and their blood pressure’s higher than x, and so forth. They didn’t sell very well.
Then they started calling them assistants instead of experts. In other words, we’re not going to replace you with an expert, we’re just going to assist you in doing your job. Pretty soon, they didn’t seem to be AI anymore because they really weren’t. They were simply logic.
The definition of artificial intelligence, to me, is somewhat similar to magic. The moment you really, really understand how it works, it no longer seems artificially intelligent. It just seems like a tool that you design, and it does stuff. Now, of course, we’re moving towards neural nets and the so-called black boxes, things that, in theory, can explain what they do, but that now start to program themselves, based on large datasets.
What exactly they do is beyond the comprehension of a lot of people, and that’s where some of the social and ethical discussions that are happening come from. Or, you ask a bot to mimic a human being, and you discover most human beings make pretty poor decisions a lot of the time, or reflect the biases of their culture.
AI was really hard to do at scale, back when we had very underpowered computers, compared with what we have today. Now, it’s both omnipresent and still pretty pathetic, in terms of… AI is generally still pretty brittle.
There’s not even a consensus definition of what intelligence is, let alone what an AI is, but whatever it means… Would you say we have it, to at least some degree, today?
Oh, yeah. Again, the definition is becoming… Yes, the threshold of what we call AI is rising from what we called AI twenty years ago.
Where do you think it will go? Do you think that we’re building something that as it gradually gets better, in this kind of incrementalism, it’s eventually going to emerge as a general intelligence? Or do you think the quest to build something as smart and versatile as a human will require dramatically different technology than we have now?
Well, there are a couple of different things around that. First of all, if something is not general, is it intelligent, or is it simply good at doing its specific task? Like, you can do amazing machine translation now—with large enough corpora—that simply has a whole lot of pattern recognition and translates from one language into another, but it doesn’t really understand anything.
At some point, if something is a super-intelligence, then I think it’s no longer artificial. It may not be wet. It may be totally electronic. If it’s really intelligent, it’s not artificial anymore, it’s intelligent. It may not be human, or conceived, or wet… But that’s my definition, someone else might just simply define it differently.
No, that’s quite legitimate actually. It’s unclear what the word artificial is doing in the phrase. One view is that it’s artificial in the sense that artificial turf is artificial. It may look like turf, but it’s not really turf. That sounds kind of like how you—not to put words in your mouth—but that sounds kind of like how you view it.
It can look like intelligence for a long time to come, but it isn’t really. It isn’t intelligent until it understands something. If that’s the case, we don’t know how to build a machine that understands anything. Would you agree?
Yes. There are all these jokes, like… The moment it becomes truly intelligent, it’s going to start asking you for a salary. There are all these different jokes about AI. But yeah, until it ‘has a mind of its own’… What is intelligence? Is it because of the soul? Is it purpose? Can you be truly intelligent without having a purpose? Because, if you’re truly intelligent but you have no purpose, you will do nothing, because you need a purpose to do something.
Right. In the past, we’ve always built our machines with implicit purposes, but they’ve never, kind of, gotten a purpose on their own.
Precisely. It’s sort of like dopamine for machines. What is it that makes a machine do something? Then, you have the runaway machines that do something because they want more electricity to grow, but they’ve been programmed to grow. So that’s not their own purpose.
Right. Are you familiar with Searle’s Chinese Room Analogy?
You mean the guy sitting in the back room who does all the work?
Exactly. The point of his illustration is, does this man who’s essentially just looking stuff up in books… He doesn’t speak Chinese, but he does a great job answering Chinese questions, because he can just look stuff up in these special books.
But he has no idea what he’s doing.
Right. He doesn’t know if it’s about cholera, or coffee beans, or cough drops, or anything. The punchline is: does the man understand Chinese? The interesting thing is, you’re one of the few people I’ve spoken to who unequivocally says, “No, if there’s nobody at home, it’s not intelligent.” Because, obviously, Turing would say, “That thing’s thinking; it understands.”
Well, no, I don’t think Turing would’ve said that. The Turing Test is a very good test for its time, but, I mean… George [Dyson, the futurist and technology historian who happens to be her brother] would know this much better. But the ability to pass the test… Again, what AI was at that point is very different from what it is now.
Right. Turing asked the question: can a machine think? The real question he was asking, in his own words, was something to the effect of: could it do something radically different from us, that doesn’t look like thinking… but don’t we kind of have to grant that it is thinking?
That’s when he said… This idea that you could have a conversation with something and, therefore, it’s doing it completely differently. It’s kind of cheating. It’s not really, obviously, but it’s kind of shortcutting its way to knowing Chinese, but it doesn’t really [know Chinese]. By that analogy and by that logic, you probably think it’s unlikely we’ll develop conscious machines. Is that right?
Well, no. I think we might, but then it’s going to be something quite… I mean, this is the really interesting question. In the end, we evolved from just bits of carbon-based stuff, and maybe there’s another form of intelligence that could evolve from electronic stuff. Yeah, I mean, we’re a miracle and maybe there’s another kind of miracle waiting to happen. But, what we’ve got in our machines now is definitely not that.
It is fascinating. Matt Ridley, who wrote The Rational Optimist, said in his book that the most important thing to know about life is that all life is one: life happened on this planet and survived one time, and every living thing shares a huge amount of the same DNA.
Yeah. I think it might’ve evolved multiple times, or little bits went through the same process, but I don’t think we all came from the same cell. I think it’s much more likely there was a lot of soup and there were a whole bunch of random bits that kind of coalesced. There might’ve been bunches of them that coalesced separately, but similarly.
I see. And back in their own day, they merged into something that we are all related to?
Yeah. Again, all carbon-based. There are some interesting things at the bottom of the ocean that are quite different.
Right. In fact, that suggests you’re more likely to find life in the clouds on Venus—as inhospitable as it is, at least stuff’s happening there—than you might find on a barren, more hospitable planet.
Yeah.
When you talk to people who believe in an AGI, who believe we’re going to develop an AGI, and then you ask them, “When?” you get this interesting range between five and five hundred years, depending on who you ask. And these are all people who have some amount of training and familiarity with the issues. What does that suggest to you, that you get that kind of a disparity from people? What would you glean from that?
That we really don’t know.
I think that’s really interesting, because so many people are on that spectrum. Nobody says, “Oh, somewhere between five and five hundred years.” No person says that. The five-year people—
—They’re all so different. Yeah.
But all very confident, all very confident. You know, “We’ll have something by 2050.” A lot of it I think boils down to whether you think we’re a couple of hops, skips, and a jump away from something that can take off on its own… Or, it’s going to be a long, long, long time.
Yeah. It’s also, how you define it. Again, to me, in a sense, I’ve been thinking about this and reading Yuval Noah Harari’s Homo Deus and various other people… But to me, in the end, there’s something about purpose, which means, again, it really is… It’s the anti-entropy thing.
What is it that makes you grow, makes you reproduce? We know how that works physically, but then, when you talk about a soul or a consciousness, there’s some animating thing or some animating force, and it’s this purpose in life. It’s reproduction, to create more life. That’s sort of an accident: something had to have the purpose to reproduce, and the other stuff didn’t.
Again, there are more biological descriptions of that. Where that fits in something that’s not wet, how purpose gets implemented, we haven’t yet found. It’s like we’ve found substances that correlate with purpose, but there’s some anti-entropy that moves us, without which we wouldn’t do anything.
If you’re right that, without purpose, without understanding—as fantastic as the very stone-knives-and-bearskins kind of AI we have today is—I would guess… And not to put words in your mouth, but I would guess you are less worried about AIs taking all the jobs than somebody else might be. What is your view on that?
Yeah. Well, in [terms of] the AIs taking all the jobs… That is something that we can control, not easily. It’s just like saying we can control the government or we can control health. Human beings collectively can—and I believe should—start making decisions about what we do about people and jobs.
I don’t think we want a universal basic income, as much as we want almost universal basic vouchers to… Again, I think people need purpose in their lives. They need to feel useful. Some people can create art and feel useful, and sell it, or just feel good when other people look at their art. But I think a simpler, more practical way to do this is that we need to raise the salaries of people who do childcare, coaching, you know.
We need to give people jobs, for which they are paid, that are useful jobs. Some people can become coders and design things, program artificial intelligence tools, and so forth, and build things. But a lot of people, I think, can be very effectively employed doing some of the most useful things people can do. This goes back to Way to Wellville: caring for children, coaching mothers through pregnancy, running baseball teams in high schools.
We can sit here and talk about artificial intelligence, but this is a world in which people are afraid to let their kids out to play and everywhere you go, bridges are falling down. I live in New York City, and we’re going to have to close some of our train tunnels, because we haven’t done enough repair work. There actually is an awful lot of work out there.
We need to design our society more rationally. Not by giving everybody a basic income, but by figuring out how to construct a world in which almost everybody is employed doing something useful, and they’re being paid to do that, and it’s not like a giant relief act.
This is a society with a lot of surplus. We can somehow construct it so that people get paid enough that they can live comfortable lives. Not easy lives, but comfortable lives, where you do some amount of work and you get paid.
At the margins, yes, take care of people who’ve fallen off; but let’s do a better job raising our children and creating more people whose childhoods don’t destroy their sense of worth and dignity, who want to do something useful, who feel that they matter, and who get paid to do that useful thing.
Then, we can use all the AI that makes society, as a whole, very rich. Consumption doesn’t give people purpose. Production does, whether it’s production of services or production of things.
I think you’re entirely right, you could just… on the back of an envelope say, “Yeah, we could use another half-million kindergarten teachers and another quarter-million…”—you can come up with a list of things, from a societal standpoint, [that] would be good and that maybe market forces aren’t creating. It isn’t just make-work, it’s all actually really important stuff. Do you have any thoughts on how that would work practically?
Yeah.
You implied it’s not the WPA again, or is it…?
No. Go to the people who talk about the universal basic income and say, look, why don’t you make this slightly different. Let’s talk about, you get double dollars for buying vegetables with your food stamps. How do we do something that gives everybody an account, that they can apply to pay for service work?
So, every time I use the services of a hairdresser, or a babysitter, or a basketball coach, or a gym teacher, there’s this category of services. This is not simple, there’s a certain amount of complexity here, because you don’t want to be able to—to be gross, you know—hire the teenage girl next door to provide sexual services. I think it needs to be companies, rather than government.
Whether it’s Uber vetting drivers—and that’s a whole other story—but you want an intermediary that does quality control. Both in terms of how the customers behave, and how the providers behave, and manage the training of the providers, and so forth.
Then, there’s a collective subsidy to the wages that are paid to the people who provide the services that foster… Long ago, women didn’t have many occupations open to them, so second-grade teachers tended to be a lot of very smart women, who were dedicated, and didn’t get paid much.
But that was okay, and now that’s changing. Now, we need to pay them more, which is great. There’s a collective benefit to having people teaching second grade that benefits society and should be paid for collectively.
In a way, you could throw away the entire tax code we have and say for every item, whether it’s a wage or buying something, we’re going to either calculate the cost to society or the benefit to society. Those will either be subsidies or taxes on top of that, so that the bag of potato chips—
—The economic term is—
—Internalizing the externalities?
Yes, exactly.
Yeah, exactly. It’s actually the only thing I can think of that doesn’t actually cause perverse incentives, because in theory, all the externalities have been internalized and reflected in the price.
Yes. So, you’re not interfering with the market, you’re just letting the market reflect both the individual and collective costs and stuff like that. It doesn’t need to be perfect. We’re imperfect, life is imperfect, we all die, but let’s sort of improve things in the brief period that we’re alive.
I can’t quite gauge whether you’re ‘in theory’ optimistic, or practically optimistic. Like, do you think we’re going to accomplish these things? Do you think we’re going to do some flavor of them? Or, do you just realize they’re possibilities and we may or may not?
I’m trying to make this happen. The way I would do that is not, “Gee, I’m going to do this myself.” But I’m going to contribute to a bunch of people, both doing it and feeling… A lot more people would be doing this, if they thought it was possible, so let’s get together and become visible to one another.
It’s just like what I saw happen in Eastern Europe, where individually people felt powerless. And this really was where the Internet did help: people began to say, “Oh, you know, I’m not the only one who is beginning to question our abusive government.” People got together, and felt empowered, and started to change the story, both by telling their own stories and by creating alternative narratives to the one that the government fed them.
In our case, we’re being fed, I don’t know, short-term thinking. Everything in our society is short-term. I’m on the board of The Long Now, just for what it’s worth. Wall Street is short-term. Politicians are mostly concerned with being reelected. People are consuming information in little chunks and not understanding the long-term narratives or the structure of how things work.
It’s great if you hear someone talk about externalities. If you walk down the street and ask people what an externality is, they’ll say, “Is that, like, a science fiction thing or what?” No, it’s a real concept and one that should be paid attention to. There are people who know this, and they need to bring it together, and change how people think about themselves.
The very question you asked: “Do you think you can do this practically?” No, I can’t alone, but together, yeah, we can change how people think about things, and get them to think more about long-term investments. Not this day-by-day, what’s my ROI tomorrow, or what’s next quarters? But if we do this now, what will be different twenty years from now?
It’s never been easier, so I hear, to make a billion dollars. Google and Facebook each minted something like six billionaires. The number of billionaires continues to grow. The percentage who made their own money, as opposed to inheriting it, continues to grow.
Right.
But am I right that all of that money that’s being created at the top… I mean, mathematically, it contributes to income inequality, because it’s moving some to one end… But do you think that that’s part of the problem? Do all of those billions get made at the expense of someone else, or do those billions get made independent of their effect on other people?
There’s no simple answer to that one. It varies. I was very pleased to see the Chan Zuckerberg Initiative. And the people that bother me more, honestly, are… There’s a point at which you stop adding value, and I would say a lot of Wall Street is no longer adding value. Google, it depends what they do with their billions.
I’m less concerned about the money Google makes. It depends what the people who own the shares in Google do with the money they’ve made. Part of the problem is more that the trolls on the Internet are encouraging some of this short-sighted thinking, instant gratification: I’d rather look at cat photos than talk to my two-year-old, or what have you.
For me, the issue’s not to demonize people but to encourage the ones who have assets and capacity to use them more wisely. Sometimes, they’ll do that when they’re young. Sometimes, they will earn all the money and then start to change later, and so forth.
The problem isn’t that Google has a lot of money and the people in Muskegon don’t. The problem is that the people in Muskegon, or so many other places… They have crappy jobs, the people who are parents now might have had parents who weren’t very good. Things are going downhill rather than uphill. Their kids are no longer more educated than they are. They no longer have better jobs. The food is getting worse, etc.
It’s not simply an issue of more money. It’s how the money is spent, and what the money is spent on. Is it spent accountably for the right things? It’s not just giving people money. It’s having an education system that educates people. It’s having a food system that nourishes them. It’s stuff like that.
We now know how to do those things. We also are much better, because of AI, at predicting what will happen if we don’t. I think the market, and incentives, and individual action are tremendously important; but you can influence them. Which is what I’m trying to do, by showing how much better things could work.
Well, no matter what, the world that you would envision as being a better world certainly requires lots and lots and lots of people power, right? Like, you need more teachers, you need more nutritionists, you need all of these other things. It sounds like you don’t—
Right. And you need people voting to fix the bridges, instead of just voting for whichever politician makes promises that are unbelievable, or whatever. In a sense, we need to be much more thoughtful about what it is we’re doing, and to think more about the long-term consequences.
Do you think there ever was a time, or is there any society you look at, or even any time in any society, when you say, “Well, they weren’t perfect, but here was a society that thought ahead, and planned ahead, and organized things in a pretty smart way”? Do you have any examples?
Yes and no. There was never a perfect place. A lot of things were worse a hundred years ago, including how women were treated and how minorities were treated, and a lot of people were poor. But there was a lot less entitlement, and a lot less consumption around instant gratification. People invested.
In many ways, things were much worse, but people took it for granted that they needed to work hard and save. Again, many of them had a sense of purpose. You go back to the 1840s, and the amount of liquor consumed was crazy. There’s no perfect society. The norms were better.
Perhaps there was more hypocrisy. Hey, there was a lot of crime a hundred years ago and, sort of, the notion of polite society was perhaps not all of society. People didn’t aspire to be celebrities. They aspired to be respected, and loved, and productive, and so forth. It just goes back to that word: purpose.
Being a celebrity does not mean having an impact. It means being well-known. There’s something lacking in being a celebrity, versus being of value to society. I think there’s less aspiration towards value and more towards something flashier and emptier. That’s what I’d love to change, without being puritan and boring about it.
Right. It seems you keep coming back to the purpose idea, even when you’re not using that word. You talked about [how] Wall Street used to add value, and [now] they don’t. That’s another way of saying they’ve lost their purpose. We talked about the billionaires… It sounds like you’re fine with that; it depends on what their purpose is with it all. How do you think people find their purpose?
It goes back to their parents. There’s this satisfaction that really can’t be beaten. When I spent time in Russia, the women were much better off than the men, because the men felt—many of them—purposeless. They did useless jobs and got paid money that was not worth much, and then their wives took the rubles and stood in line to get food and raise the children.
Having children gives you purpose, ideally. Then, you get to the point where your children become just one more trophy, and that’s unutterably sad. There are people who love their children and also focus too much on, “Is this child popular?” or “Will he get into the right college and reflect well on me?” But, in the end, children are what give purpose to most people.
Let’s talk about space for a minute. It seems that a lot of Silicon Valley folks, noteworthy ones, have a complete fascination with it. You’ve got Jeff Bezos hauling Apollo 11 boosters out of the ocean. Elon is planning to, according to him, “die on Mars, just not on impact.” You, obviously, have a—
—I want to retire on Mars. That’s my line. And, not too soon.
There’s a large part of this country, for instance, that doesn’t really care about space at all. To them, it seems like a whole lot of wasted money, and emptiness, and all of that. Why do you think it’s so intriguing? What about it is interesting for you? For goodness’ sake, I can’t put you down as “trained to be a backup cosmonaut” in your introduction and then never mention it again; that’s like the worst thing a host can do. So please talk about that, if you don’t mind.
It’s our destiny, we should spread. It’s our backup plan if we really screw up the earth and obliterate ourselves, whether it’s with a polluted atmosphere, or an explosion, or some kind of biological disaster. We need another place to go.
Mars… Number one, it’s a good backup. Number two, maybe we can learn something. There’s this wonderful new thing called the circular economy. The reality is, yes, we’re in a circular economy, but it’s so large we don’t recognize it. On Mars, because you start out so small, it’s much clearer that there’s a circular economy.
I’m hoping that the National Geographic series is actually going to change some people’s opinions. Yeah, in some sense, our purpose is to explore, to learn, to discover what else might lie beyond our own little planet. Again, it’s always good to have Option B.
Final question: We already talked about what you’re working on, but… Because our chat had lots of ups and downs, possibilities, and then worries: what, if there is anything, gives you hope? What gives you hope that there’s a good chance that we’ll muddle through this?
I’m an optimist. I have hope, because I’m a human being and it’s been bred into me over all those generations. The ones who weren’t hopeful didn’t bother to try, and they mostly disappeared. But now you can survive, even if you’re not hopeful; so maybe that’s why all this pessimism, and lassitude and stuff is spreading. Maybe, we should all go to Mars, where it’s much tougher, and you do need to be hopeful to survive.
Yeah, and have purpose. In closing, anybody who wants to keep up with what you’re doing with your non-profit…
WaytoWellville.net.
And if people want to keep up with you, personally, how do they do that?
Probably on Twitter, @edyson.
Excellent. Well, I want to thank you so much for finding the time.
Thank you. It was really fun.
Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster.