
PHILOSOPHY DEPARTMENT DEATH MATCH: AGENTIC AI AND FREE WILL

Welcome to the death match of AI philosophy. There isn’t much on the line here; really, it is just people presenting opposing views in research papers. But what a glorious battle takes place on those pages!

Let me set the scene. If we assume Free Will exists at all, does Agentic AI have it?

In one corner we have Frank Martela from Aalto University in Finland, who in his paper concludes that “…the best (and only viable) way of explaining both of their behaviour involves postulating that they have goals, face alternatives, and that their intentions guide their behaviour…we must conclude that they are agents whose behaviour cannot be understood without postulating that they possess functional free will.”

In the opposing corner, Paul Formosa, Inês Hipólito and Thomas Montefiore argue in their paper that: “…while current AI systems are highly sophisticated, they lack genuine agency and autonomy because: they operate within rigid boundaries of pre-programmed objectives rather than exhibiting true goal-directed behaviour within their environment; they cannot authentically shape their engagement with the world; and they lack the critical self-reflection and autonomy competencies required for full autonomy.”

So what is going on? Are we living in a world where not even philosophers can agree? The simple answer is: yes. Actually, I think that is the very essence of philosophy. It is a high-stakes game of opposing viewpoints that can be neither proven nor disproven.

In this case it comes down to a simple matter of definition.

The former paper uses a definition by Christian List, Professor of Philosophy and Decision Theory, under which an entity has free will only if it fulfils all three of the following conditions:

  • the capacity for intentional agency,

  • the capacity to have alternative possibilities, and

  • the capacity to control one’s actions.

At the same time, the latter paper distinguishes between basic and full autonomous agency. While Agentic AI may show basic autonomy, it does not meet the definition of full autonomous agency, which requires more sophisticated capacities for

  • self-direction,

  • authentic decision making, and

  • critical reflection.

But it does call out that autonomy exists on a spectrum, starting from “…the most basic level of machine autonomy and extending through to the full autonomous agency characteristic of some mature humans.”

So I wonder at what level we should stop. Does it make sense to create Agentic AI that does have full autonomous agency? And if we do, have we not created something that should also have rights?


References:
