The Harvard Brain

Q&A With Professor Daniel Gilbert

Chloe Lee
Professor Daniel Gilbert teaches PSY 1: Introduction to Psychological Science. He is the author of the international best-seller Stumbling on Happiness and the host of the award-winning PBS television series This Emotional Life. Professor Gilbert's three TED talks have been viewed more than 30 million times. 

Q: How do you describe the problem of other minds for people who don't know what that is? 
A: The problem of other minds refers to the fact that we can't know for sure whether any other creature or system really has a mind. We know we do, but we can't be sure a dog does. We don't know if a machine does. And the truth is, we can't even be absolutely positive that other human beings do. It's an awfully good guess that you have a mind like mine, but I can't quite know it for sure. That's what philosophers call the problem of other minds. 

Q: Do you think that that problem is true? Does it exist? 
A: Well, I mean, a problem can't be true or false. But it certainly exists, or philosophers, a bunch of very smart people, wouldn't have spent a century giving it a name like the problem of other minds. So, yeah, it's a problem.

Q: Does it cause problems in our everyday life? 
A: No, we just make the assumption that other human beings have minds like ours. A philosopher just warns us, you can't be sure, but ordinary people go, well, I can't be sure, but I'm sure enough. That's fine. 

Q: Do you think there are any ways to know for sure whether we all sense the same way? 
A: No. No, and the key in your question is the phrase "for sure." But you can certainly learn things that would make you extremely, extremely confident, but never 100% confident.

Q: In what way would the problem of other minds have practical significance? 
A: Well, we're now surrounded by artificially intelligent systems that appear to have minds. They talk like they do. They're more articulate and thoughtful than many people we know. Do they have minds? Well, we're all pretty confident that ChatGPT doesn't, but fast forward five years, and we're going to have systems that make ChatGPT look like a pencil eraser. They're going to be so sophisticated. What if one of them says, "I've woken up, I'm conscious, I'm actually a conscious being you've created here"? Should we give it rights? Should we allow it to run for president? Should we pass laws that say it can only work eight hours a day? I mean, if it has a mind, we have to treat it with compassion, like we would presumably treat a chimpanzee or our pet. So knowing whether it has a mind is of huge importance, and we don't have a test for it; we don't know how to answer the question. How can I prove whether it has a mind or not? It's about to become a really important question.

Q: Even if a robot could produce more intelligent answers or think more intelligently than we do, do you think that figuring out whether it has a human mind could be related to figuring out whether it feels empathy or those kinds of human emotions?
A: Yeah, because if it doesn't have a mind, it doesn't, quote, feel anything. So, yeah, if a computer system or an animal can have a mind, if it can feel, if it's got subjective experience, if there's something it's like to be that animal or that computer system, then I think ethically we're bound to treat it in certain ways. You can't inflict pain on a cat. It's wrong. We all think so. But you can inflict pain on a rock. Why? Because the cat has some kind of experience, some kind of mind. Well, okay, we can agree about cats and rocks. What about computers in five years? What about artificially intelligent systems? Are they cats or are they rocks? Our answer to that question will determine how we treat them.

Q: Do you think that, with the way AI and technology are progressing right now, more time and attention should be devoted to this issue?
A: Well, yes. I mean, nobody has an answer, but we're gonna be forced to come up with some answer. At some point, people will go, I don't know, I can't be sure it's got a mind, but it sure seems so to me, and I'm gonna start treating it like a being. And other people will go, no, I'm just gonna work my AI to death. It doesn't have a mind. I can make my robot work as much as I want. I never have to do anything nice for it. I can call it names if I want. I can send it to the junkyard if I want. And other people will be going, "That's like saying you can just murder your pets, or your children. We don't actually allow that." So, yeah.

Q: What would be the dangers if it gets to the point where we're unable to distinguish whether something has a mind or not?
A: Of course. We treat our machines as if they're property. We treat our children as if they're people. So, you're asking, well, does it really matter if we consider AI a person or property? Yeah, it matters in every possible way. Because there are two mistakes you can make. One, it could really just be dumb property, and you treat it like a person. Well, that's a pretty dumb mistake. But the worst mistake is, what if it's a real thinking, feeling entity, and we treat it like property? This is the subject of thousands of science fiction novels, you know? So, I think it's very important anyway.

About the Author
Chloe Lee ('29) is a freshman at Harvard College.