Sentience and Salience

by Peter Saint-Andre


As you might have guessed from yesterday's post about opinion machines, I'm skeptical that so-called artificial intelligence systems will exhibit sentience anytime soon. Thus, although my primary concern is how humans will act in relation to these systems (even more personally, how I will act), I figure it can't hurt to sketch out my thinking on the broader topic.

The perspective I've taken so far is heavily influenced by Wallace Matson's book Sentience, which I first read in college and recently revisited. Matson argues that the core skill underlying sentience is what he calls "sizing things up". Indeed, he states explicitly that this idea is an updated version of Aristotle's concept of nous, which I typically translate as "insight". To my mind, insight involves seeing in a situation what is most salient or significant - as we also say, getting a read on things, grasping their meaning, making sense of them, predicting where they might lead, and so on.

It seems to me that sizing things up is intimately bound up with a whole range of human capacities: memory, foresight, recognizing patterns, envisioning possibilities, drawing inferences, perceiving subtleties of behavior in humans and other animals, etc. Yet activating these cognitive capacities also connects them with emotional reactions, intentionality, valuation, goals, commitments, and meaning in general. Ultimately, sizing up is a biological phenomenon that leads to a judgment about whether a situation might be threatening or advantageous, painful or pleasant, for or against one's interests.

Consistent with both Aristotle's philosophy of biology and modern biological thought, there's no hard cutoff point where we can say definitively that humans size things up but other animals don't. It's pretty clear to me that mammals, birds, and even insects recognize the significance and implications of various situations for their survival and well-being. Sure, the range of such situations might be narrower along various dimensions (e.g., subtlety, physical distance, futurity), but that's a difference in degree, not in kind.

Will artificial intelligences ever have a rich experience of sentience, given that it seems to be rooted in the phenomenon of life? Hell if I know, but so far I'm skeptical. Eventually perhaps, but in the meantime I'm more interested in cultivating deeper sentience in natural intelligences. After all, that's hard enough!
