#277 Aaron Rabinowitz: Moral Realism, and Objective Morality
RECORDED ON OCTOBER 22nd, 2019.
Aaron Rabinowitz is an Adjunct Professor (PTL) in the Rutgers Philosophy department and the Rutgers Honors College. He specializes in ethics, metaethics, and AI. His work focuses on developing a secular moral realism that is compatible with the problem of moral luck. He also hosts two philosophy podcasts: Philosophers in Space and Embrace the Void. The goal of both shows is to make philosophy accessible to everyone, using science fiction and existential horror.
In this episode, we discuss metaethics and moral realism. We first go through the definitions of moral realism and objective morality. Then we get into several different issues, like our conflicting moral intuitions, moral foundations, and the limitations of our evolved morality. We also talk about moral nihilism and moral relativism, and we discuss the differences between knowledge produced by science and moral truths (and value judgments). In the latter part of the conversation, we discuss to what extent moral axioms also apply to people’s decisions about their own wellbeing; the moral implications of the fact that we don’t have direct access to other people’s minds, particularly when it comes to paternalism; and to what extent we should care about how we treat other animals and, in the future, advanced AI systems.
Time Links:
Can Aaron convince me that moral realism is right?
What is moral realism?
What does “objective” mean in morality?
How can we arrive at moral truths?
The problem with conflicting ethical systems
Is morality just about preferences and intuitions?
People with different moral foundations
Evolutionary psychology and our complicated evolved morality
If moral truths are instantiated in the Universe, where do they come from?
What about moral nihilists and moral relativists?
Are moral truths equivalent to scientific truths?
We are stuck with our evolved moral epistemology
Should we take seriously people deciding on moral truths by voting?
Do moral truths also apply to how people deal with their own wellbeing?
We don’t have direct access to other people’s minds
How should we treat advanced AI systems (and other animals)?
Closing statements on moral realism
Follow Aaron’s work:
Philosophers in Space (Spotify): https://spoti.fi/2N8zmDA
Embrace the Void: http://bit.ly/35YAdQ4
Philosophers in Space (facebook group): http://bit.ly/32DBie1
Philosophers in Space (Twitter handle): @0gPhilosophy
Embrace the Void (Twitter handle): @ETVPod