🙏🏽⛩🤖 219 - Joshua Schrei on Embodied Ethics in The Age of A.I.

14th May 2024 Gemini 1.5 Pro

The Stakes: More Power Than Wisdom

Michael Garfield and Josh Schrei begin by outlining the high stakes of our current moment in technological history, drawing a parallel to the Sorcerer's Apprentice: we have unleashed powers we are ill-equipped to handle. Schrei highlights the ease with which we can now "summon things," referring to AI's capacity to bring imaginative concepts into being, and contrasts it with the rigorous initiations and societal safeguards that traditionally surrounded such power. This disparity, he argues, creates a critical vulnerability:

Containment is much more challenging, and containment comes with wisdom. Now you can just summon things. It used to take people a lot of initiatory preparation and work and deep inner exploration and communal accountability and all of these kinds of things to be in the position to summon things. The shamanic figure in traditional culture had webs and networks of accountability around them and went through a process that often started at birth to make sure that they were individually prepared to summon things. And now anybody with a laptop and some prior coding experience can summon things. And so the stakes, as I said in a recent Justice episode, quoting early-'90s hip-hop band De La Soul: the stakes is high.

This lack of preparation, Schrei suggests, extends beyond individual responsibility. He critiques the modern world's systemic neglect of embodied wisdom, a neglect AI threatens to deepen. While acknowledging AI's potential benefits, he emphasizes the biases embedded in its development and questions whether humanity, driven by market forces and often short-sighted desires, can be trusted with such world-altering tools. AI's rapid advancement, he argues, requires a commensurate evolution in our understanding of responsibility and wisdom.

The Role of Human Agency: More Than Just Inspiration

Garfield and Schrei delve into the nature of human agency in the development of AI, acknowledging the larger forces at play. Schrei points to a "religiosity" driving technological advancement: a spiritual longing inherent in humanity's quest to create and to connect with powers greater than itself. This desire, often fueled by a "mystery drive," mirrors the innate curiosity that propels us to explore the unknown. Such exploration, Schrei cautions, necessitates containment and guidance.

I think I'm always in favor of human beings recognizing that there are larger forces at play and deeper drives at work. And that's true for everything that we do. You mentioned Descartes, and in the modern, Cartesian worldview we assume that everything we do is decided by the rational decider that lives within us. If you really start to examine it, and even scientific studies are verifying this now, that's really not how it works.

He draws a parallel between the Golem of Prague and AI: in both cases, a human creation takes on a life of its own, often with unintended consequences. The challenge, he posits, lies in developing safeguards and ethical frameworks that foster responsible interaction with these powerful forces. Drawing on the wisdom of traditional cultures, Schrei suggests incorporating elements of gradual initiation, communal accountability, and reverence for the unknown into our approach to AI.

Cultivating Wisdom: Slowing Down and Deepening Engagement

The conversation shifts toward practical solutions, exploring how to cultivate the wisdom necessary for navigating the AI age. Schrei argues that external regulations, while important, are insufficient without a fundamental cultural shift. He advocates a return to "slow knowledge": a deeper, more embodied understanding that prioritizes long-term well-being over rapid innovation.

That first insight, that first idea: that's what I'm going to do. And then I go down this spiraling rabbit hole, because I didn't lay any foundational work and really steep in it and really develop it and really start to understand and feel into it, and ultimately understand if it's actually something I want to be working on.

Schrei challenges the "rush to market" mentality prevalent in the tech world, proposing a more deliberate approach that embeds ethical considerations throughout the development process. He envisions a future where ethics is not merely an afterthought but an integral part of the conversation, woven into educational systems, corporate structures, and even the code itself. This shift, he believes, requires a renewed appreciation for the natural world and its cycles of growth, decay, and regeneration: a perspective that can temper our technological ambitions with wisdom and humility.

Amplifying Our Best Selves: What Do We Want to Carry Forward?

The discussion explores the potential for AI to amplify specific human qualities, raising questions about which aspects of ourselves we want to see reflected and perpetuated. Drawing on the work of Douglas Rushkoff, Garfield suggests that AI can act as a form of "instant technological karma": just as our actions have consequences, the biases and values we program into AI systems will have far-reaching implications.

AI is a sliver of the human experience of the world magnified exponentially. It's a sliver of how human beings process information and make decisions, and it's magnified. Within that, is there room for a whole lot more perspective? Sure. I think it would be interesting to have AI music tutors who have, in the deep reservoirs of what they're drawing from, wandered the earth and studied all the musical traditions of the world.

Schrei stresses the importance of intentionality in shaping AI's development, advocating for systems that prioritize not just intelligence but wisdom, compassion, and a deep understanding of interconnectedness. He acknowledges AI's potential to enhance human capabilities but emphasizes the irreplaceable value of embodied experience and human-to-human connection. The challenge, he argues, lies in ensuring AI amplifies our best selves rather than exacerbating our worst tendencies.

Embracing the Mystery: Beyond Control and Into the Unknown

In closing, Garfield and Schrei address the inherent limitations of AI, particularly its tendency to "hallucinate" or generate unexpected and sometimes inaccurate outputs. Rather than viewing this as a flaw, they suggest embracing it as an opportunity to engage with the unknown and reawaken our own intuitive capacities.

What you're saying underlines, it reinforces, the fact that what is ultimately being sought has much more to do with mystery than control. We've convinced ourselves that it's the narrative of control, and really what we're looking for is mystery.

This perspective reframes AI not as a replacement for human intuition but as a tool for augmenting it, inviting us to step into a more collaborative relationship with these emerging technologies. Schrei argues that by relinquishing the illusion of complete control, we open ourselves to a more nuanced and ultimately more fruitful engagement with the mysteries of both the artificial and the natural world.