One of the problems you face when engineering AI systems is determining what information is relevant for the task at hand, whether your system is an Artificial Neural Network (what most folks simply call Machine Learning these days) or a logic-based AI system such as a Knowledge Graph. In a Neural Network the challenge is to define which features are relevant; in reinforcement learning you still need to determine what the inputs are, such as which sensors will be used; and in a Knowledge Graph you need to determine which ontology (or dictionary) will be used. As it turns out, this is a far from trivial problem. It was first identified in the late 1960s, when logic-based AI systems were receiving most of the attention, and it turns out to be a problem for what were then called ‘connectionist’ systems, or Artificial Neural Networks, as well. It is called the Frame (or Framing) Problem: in essence, how do you put a frame around what is relevant, leaving out what is not? And it turns out to be a deep epistemological problem. You can read more about it here: https://plato.stanford.edu/entries/frame-problem/
Let us take an example to show how this works in practice; I absolutely love this one, which comes from a legal context.
“How to Do Things with Contexts: ‘Is There Anything in the Oven?’” by Samuel Bray, Notre Dame Law School: https://reason.com/volokh/2021/07/28/how-to-do-things-with-contexts/ He describes a conversation with a five-year-old child in which the simple question “is there anything in the oven?” gets opposite answers depending on the context. I encourage you to read the article. Also worth a read: he goes on to make the connection to work on the interpretation of legal statutes in his longer article on ‘the mischief rule’: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3452037
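To make the contrast concrete, here is a minimal, purely illustrative Python sketch of my own (the toy world state and feature names are invented for this post, not taken from Bray’s article or the Stanford entry): the same question about the same world gets opposite answers depending on which features the chosen frame counts as relevant.

```python
# Toy illustration of the frame problem: the answer to "is there anything in
# the oven?" depends entirely on which features our chosen frame treats as
# relevant. The world state and feature names are invented for this post.

world_state = {
    "oven.contains_food": False,   # no casserole inside
    "oven.contains_air": True,     # physically, it is full of air
    "oven.contains_racks": True,   # and it has metal racks
}

def anything_in_the_oven(state, relevant_features):
    """Answer the question using only the features the frame says are relevant."""
    return any(state[feature] for feature in relevant_features)

# A cook's frame: only food counts as "something in the oven".
cooking_frame = ["oven.contains_food"]

# A literal-minded five-year-old's frame: air and racks count too.
literal_frame = ["oven.contains_food", "oven.contains_air", "oven.contains_racks"]

print(anything_in_the_oven(world_state, cooking_frame))   # False -> "no, it's empty"
print(anything_in_the_oven(world_state, literal_frame))   # True  -> "yes, there is!"
```

The hard part, of course, is that nothing in the world state itself tells you which frame to apply; deciding that is exactly the frame problem.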
If you look at the frame problem today, as described in the Stanford summary (quoted at some length below), you will see that the way forward was proposed by the late, great Hubert Dreyfus, and cashed out in practical terms by Michael Wheeler:
“Although it can be argued that it arises even in a connectionist setting (Haselager & Van Rappard 1998; Samuels 2010), the frame problem inherits much of its philosophical significance from the classical assumption of the explanatory value of computation over representations, an assumption that has been under vigorous attack for some time (Clark 1997; Wheeler 2005). Despite this, many philosophers of mind, in the company of Fodor and Pylyshyn, still subscribe to the view that human mental processes consist chiefly of inferences over a set of propositions, and that those inferences are carried out by some form of computation. To such philosophers, the epistemological frame problem and its computational counterpart remain a genuine threat.”
Now many folks have said that you just don’t need all this philosophy:
“I have heard expressed many versions of the propositions . . . that philosophy is a matter of mere thinking whereas technology is a matter of real doing, and that philosophy consequently can be understood only as deficient.”
Philip E. Agre, Computation and Human Experience (Cambridge: Cambridge University Press, 1997), 239.
But as Dreyfus (2008) has pointed out, those very same classical AI technologists rest their work on a great number of philosophical foundations, which they simply take as ‘common sense’ but which in fact amount to taking a position. He describes his time in the 1960s, in the heyday of classical AI, in discussion with folks from the MIT Artificial Intelligence Laboratory (such as Marvin Minsky) and the RAND Corporation (Allen Newell and Herbert Simon), where people were saying: we don’t need Philosophy, we can do in a few years what you Philosophers have failed to do in thousands of years, and solve the mysteries of cognition and intelligence. Dreyfus then lists the philosophical foundations of their work, which I summarise here:
- De Corpore, Hobbes – the view that reasoning is computation – https://en.wikipedia.org/wiki/De_Corpore
- René Descartes – “evil demon” and his mental representations – https://www.philosophybasics.com/philosophers_descartes.html
- Leibniz’s systematic character of all knowledge and plans for a universal symbolism, a Characteristica Universalis – https://en.wikipedia.org/wiki/Characteristica_universalis
- Kant’s view that understanding must provide the concepts, which are rules for identifying what is common or universal in different representations – https://en.wikipedia.org/wiki/Immanuel_Kant
- Frege’s formalization of such rules – https://plato.stanford.edu/entries/frege-theorem/
- And finally, Russell’s postulation of logical atoms as the building blocks of reality – https://plato.stanford.edu/entries/logical-atomism/
There is, however, a set of alternative philosophical positions:
Returning once more to the Stanford summary of the frame problem, I quote:
“Dreyfus claims that this “extreme version of the frame problem” is no less a consequence of the Cartesian assumptions of classical AI and cognitive science than its less demanding relatives (Dreyfus 2008, 361). He advances the view that a suitably Heideggerian account of mind is the basis for dissolving the frame problem here too, and that our “background familiarity with how things in the world behave” is sufficient, in such cases, to allow us to “step back and figure out what is relevant and how”. Dreyfus doesn’t explain how, given the holistic, open-ended, context-sensitive character of relevance, this figuring-out is achieved. But Wheeler, from a similarly Heideggerian position, claims that the way to address the “inter-context” frame problem, as he calls it, is with a dynamical system in which “the causal contribution of each systemic component partially determines, and is partially determined by, the causal contributions of large numbers of other systemic components” (Wheeler 2008, 341).”
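To give a flavour of what this kind of reciprocal determination might look like, here is a minimal, purely illustrative Python sketch of my own (a generic coupled update rule, not Wheeler’s actual proposal): a handful of state variables, each of whose next value is partially determined by the current values of all the others, so that no single component’s causal contribution can be fixed in isolation.

```python
import random

# Toy illustration in the spirit of Wheeler's description: each component's
# next state is partially determined by the current states of many other
# components, and partially determines theirs in turn.
# This is my own sketch, not Wheeler's model.

N = 5                                   # number of coupled components
state = [random.uniform(-1, 1) for _ in range(N)]
# coupling[i][j] = influence of component j on component i
coupling = [[random.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]

def step(state, coupling, self_weight=0.5):
    """One update: each component blends its own value with weighted input
    from every other component, then squashes the result to stay bounded."""
    new_state = []
    for i in range(N):
        influence = sum(coupling[i][j] * state[j] for j in range(N) if j != i)
        x = self_weight * state[i] + (1 - self_weight) * influence
        new_state.append(max(-1.0, min(1.0, x)))   # keep values in [-1, 1]
    return new_state

for t in range(10):
    state = step(state, coupling)
    print(t, [round(x, 3) for x in state])
```

The point of the toy is simply that relevance here is not computed over a fixed set of representations; what each component “contributes” only exists in the context of the evolving whole.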
For further references, see:
- Maurice Merleau-Ponty: https://plato.stanford.edu/entries/merleau-ponty/
- Martin Heidegger: https://plato.stanford.edu/entries/heidegger/
Also worth a look is work in biology and neuroscience, to name just two fields:
- The biology of Humberto Maturana: https://en.wikipedia.org/wiki/Humberto_Maturana
- The neuroscience (neurodynamics) of Walter Freeman: https://www.sciencedirect.com/science/article/pii/S0079612306650280
If you really want to dig into this issue, I would strongly recommend you purchase and read Michael Wheeler’s 2005 book, Reconstructing the Cognitive World:
https://mitpress.mit.edu/9780262731829/reconstructing-the-cognitive-world/
Not just because it presents a clear way forward for cognitive science to escape the frame problem, but also because he was my flatmate in the early 1990s, and we shared a great number of late-night philosophical discussions, which will also form the basis of ongoing posts on #JabeOnAI as well as a foundational part of my forthcoming books, the Poetry of Liquid Sunshine (and the first rough draft, the scrapbook of liquid sunshine). More soon …
Comments
One response to “How to escape the frame problem”
Not sure I see the escape path, but definitely love the contextualization of our philosophical and computational forefather framers!