A Framework for Understanding Ourselves and Artificial Intelligence. Or, as I put it later: understanding ourselves, understanding how to engineer useful AI systems, and getting a view on how we can best live in a world with these AI systems ubiquitously around us.
Some personal context here
I want to share a perspective: when I studied Artificial Intelligence, I took for granted, or rather was open to, the basic ideas that were to become the basis of the field of 4E Cognition, due to a number of factors in my background up to that point (which I will elaborate on separately). You can get a great taste of what my experience at Sussex University, at the School of Cognitive Studies, was like, because it is captured well by this anthropological study (http://anthrobase.com/Txt/R/Risan_L_05.htm). I digress.
Another point to make is that the current approach to AI had come out of an AI winter just a few months before I took up my studies at Sussex in 1989 (see the groundbreaking book by Rumelhart and McClelland: https://mitpress.mit.edu/9780262680530/parallel-distributed-processing/). I rocked up there at 18 years of age and just took it for granted that we were programming artificial neural networks; sure, that’s just what you do at university. I had no real conception at the time of just how unusual and new this was.
A counterintuitive start …
Let’s set the scene with a couple of counterintuitive quotes:
“How does the biological wetware of the brain give rise to our experience: the sight of emerald green, the taste of cinnamon, the smell of wet soil? What if I told you that the world around you, with its rich colours, textures, sounds, and scents is an illusion, a show put on for you by your brain? If you could perceive reality as it really is, you would be shocked by its colourless, odorless, tasteless silence. Outside your brain, there is just energy and matter. Over millions of years of evolution the human brain has become adept at turning this energy and matter into a rich sensory experience of being in the world. How?”
The Brain: The Story of You, by David Eagleman.
… and to be honest, ‘energy’ and ‘matter’ are themselves ideas about what this substrate we exist in might be. We only have second-hand, mediated access to anything, so the ideas of ‘energy’ and ‘matter’ are really only guesses at what is “out there”, and indeed “within us”. As those who are fond of telling us that the ‘atoms’ making up our bodies are billions of years old would acknowledge, we are made up of the “out there”, and really there is no boundary between what is us and what is outside …
Another more poetic quote:
“Light, shadows and colours do not exist in the world around us”.
Presentation speech by Professor C. G. Bernard, Member of the Nobel Committee for Physiology or Medicine. Quoted in “Sensational: A New Story of Our Senses” by Ashley Ward.
Engineering AI systems
To paraphrase the prevailing view around AI right now:
“So, why should I care about philosophy of mind? Or this esoteric cognitive science? I just want to build some great AI systems. Just use dictionaries, triangulate it all together, and then get the biggest possible neural network (‘a big artificial brain’) to process all of that data, just like ChatGPT or GPT-4. So why, why isn’t that going to work?”
So, what happens in practice is that you find you can’t combine all of this data; and those people who hold the particular perspective that science and mathematics simply describe that base reality come unstuck, and they don’t really know why. They expend huge amounts of effort trying to create master dictionaries or ontologies; they don’t understand why the world is so messy; they think it just needs a little bit more tidying up. And that is a kind of category error, a kind of deep misunderstanding of our relationship to the world.
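To make that failure concrete, here is a minimal sketch in Python (both vocabularies invented for illustration) of what happens when you naively merge two dictionaries drawn from different communities of practice:

```python
# A minimal sketch: the same label carries different meanings in
# different communities of practice, so a naive merge silently loses one.
microbiology = {"culture": "a colony of microorganisms grown on a medium"}
anthropology = {"culture": "the shared practices and symbols of a human group"}

merged = {**microbiology, **anthropology}
print(merged["culture"])
# -> 'the shared practices and symbols of a human group'
# One sense has silently overwritten the other. There is no neutral,
# 'base' definition to fall back on: each sense is only meaningful
# relative to the tasks of the community that coined it.
```

This is the trivial, visible case; in real master-ontology efforts the collisions are subtler and vastly more numerous.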
So, why might dictionaries not map? Because people are lazy? That seems to be the assumption of some people. Well, if you dig and delve into philosophy of mind and cognitive science, you come across what I alluded to early on in these blog posts: the map is not the territory. You might think, well, sure, it is just a simplification, but the reality is slightly more nuanced than that. In some respects there is, if you want to put it as a polemic, no base reality as such onto which all these things are constructed; there is no firm foundation in reality.
Now, this is the point at which most people say: that is crazy, what are you talking about?
Well, this is why the area of 4E cognition is so important in understanding the challenge we face. A lot of talk around this area centres on the notion of the ‘brain in a box (the skull)’: the human brain that you have, and that everyone you know has, gets no direct access to the world outside. The taste of a glass of red wine, or its colour: all of these things are constructions. It gets weirder than that, because your sense of self is also a construction.
The way in which representations are created is, evolutionarily, based on the need of humans to collaborate in large groups to hunt and feed themselves; representations are packed into a language. And to a great extent the sense of self is used for communication about your motivations, about why you are doing something, within what is called a community of practice, which is a fancy way of saying a bunch of people trying to do a specific thing together. In order to do that thing together, they need to communicate, in language.
Now, language is used to give a sense of what these individuals are experiencing uniquely as qualia (so called), but put in terms of a common reference point. This common reference point is completely constructed, and the reason it seems such a firm foundation to us (as humans) is because that is how we live our lives: everything we do is embedded in a context of objects as common reference points, objects we approach as infants and onto which we map our experiences of the world from sensory input. Part of that input is given out by other humans; we capture the sounds and pressure waves, converted to electrical signals that our brains interpret, and bring these things together as common representations. But these representations are very much associated with our embedded and embodied nature: in our physical bodies, embedded in the cultural context in which we grow. They are enacted, and the representations and the way we comprehend the world are dynamic. You learn these things; you learn what a cup is by picking it up and trying to drink from it. These representations (i.e. spoken words as pressure waves, and later as visual patterns) are dynamic processes mapping onto our lived experiences and our observations of other humans interacting in similar ways and uttering sounds we come to associate with those common shared experiences.
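As a cartoon of that dynamic mapping, and emphatically not a model of the brain, here is a sketch of cross-situational association, with every episode invented: a label ends up bound to whatever features recur across shared, embodied episodes.

```python
from collections import Counter, defaultdict

# Each (invented) episode pairs the sensory features present with the
# sound a carer utters; a label binds to features that recur across them.
episodes = [
    (["red", "round", "graspable", "holds-liquid"], "cup"),
    (["blue", "round", "graspable", "holds-liquid"], "cup"),
    (["red", "flat", "underfoot"], "mat"),
]

associations = defaultdict(Counter)
for features, utterance in episodes:
    for feature in features:
        associations[utterance][feature] += 1

print(associations["cup"].most_common(3))
# -> [('round', 2), ('graspable', 2), ('holds-liquid', 2)]
# 'cup' attaches to the recurring interactional features, not to some
# pre-given 'cup' out in base reality.
```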
And the complex thinking, if you want to call it that, which we do as we grow, is extended by objects in our environment, symbol systems, tools, all of these things.
So, what I am trying to do is touch on why representations need to be understood in the context of the tasks being done, and of communities of practice. With our global culture, certain things are held in common and seem universal; in science (a system of which I am proud to be a part), scientific communities represent communities of practice, and the field of cross-disciplinary semantics is trying to address this, including through ontology mapping. But it comes down to this: people have an intuition, based on our day-to-day experience, that is incorrect, namely that the representations around us are grounded in what you might call the ‘real world’ of cups and clouds, aeroplanes and sticks. That is to a very great extent an illusion. When you truly dig down to ‘base’ reality, it is constructed. Sure, there must ultimately be a substrate with which we are interacting, but the only way we can access it, through our senses or indeed through scientific instruments and measurements, is mediated: mediated by the tools we construct to measure. And if you look at areas such as quantum physics, what you start to find is that our day-to-day intuitions of common-sense reality just do not match, in any way, what our more advanced instruments are finding.
So this is a roundabout way of saying: from an engineering perspective, it is much better to take the radical assumption that there is no base reality as an engineering principle (and perhaps indeed as a day-to-day perspective; see footnote), and to accept that the only foundations we have, for semantics and for data generated from instruments, are constructed ones. The only way to understand that constructed foundation is to look at the context in which those measurements, those understandings if you like, those ontologies were constructed. And they do not map, because purposes differ; the ontology for one area is very different from that of another, and ultimately they may just never map, until you bridge them with a whole chain of other disciplines (intersecting tasks). So, if you are into cooking, what is the bridge to quantum physics, in terms, perhaps, of understanding the physics of how an egg binds a cake? You have to find a path, it doesn’t necessarily follow that things will map, and it is a lot of work.
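A toy way to picture that bridging problem, with every node and edge invented for illustration: treat disciplines as a graph whose edges are mappings someone has actually built, and treat bridging two ontologies as path-finding. A path may exist, at the cost of one mapping per hop, or it may simply not exist.

```python
from collections import deque

# A toy graph of disciplines (all edges invented). An edge means
# 'someone has built a mapping between these two communities of practice'.
bridges = {
    "cooking": ["food science"],
    "food science": ["cooking", "chemistry"],
    "chemistry": ["food science", "physics"],
    "physics": ["chemistry", "quantum physics"],
    "quantum physics": ["physics"],
    "heraldry": [],  # no bridges built: this ontology may simply never map
}

def find_bridge(start, goal):
    """Breadth-first search for a chain of intersecting disciplines."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in bridges.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None  # no path: the ontologies do not map, at least not yet

print(find_bridge("cooking", "quantum physics"))
# -> ['cooking', 'food science', 'chemistry', 'physics', 'quantum physics']
print(find_bridge("heraldry", "cooking"))
# -> None
```

Each hop in that path is real work: a mapping a community has to build, validate, and maintain.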
So, getting back to the position “I just want to build AI systems on common-sense, no-nonsense base reality”: you are gonna fail. Let’s put it plainly, you are just gonna fail.
You are never sure where and when, but over the years I have seen many people in huge frustration building semantic systems, howls of rage: why is my agile team failing?
And basically, this is the reason. To overcome it from an engineering point of view, you have to get your hands dirty with philosophy of mind and 4E cognition, or at least with the heuristic approaches that are based on them.
And I contend the same is probably true of finding a way to live in this new world of ubiquitous AI computing systems in our day-to-day lives.
What has this got to do with daily life?
Taking this on from another perspective: less the engineering view, and more the question of where our common-sense view of “the world” comes from.
So the picture you might want to start from is the concept of autopoiesis (https://en.wikipedia.org/wiki/Autopoiesis).

We each “reach an accommodation” with the outside and inside world, the outside world also consisting of other humans and existing culture. We accommodate by associating sense responses with bodily actions, reflected in sound waves or visual symbols that we come to associate with these.

The argument is that we do the same with our interoception, and that there is a process of introspection, evolved for cooperation and collaboration, which should not be confused with true knowledge. You can think of it as an organ: not “the brain”, but some processes within it focused on collaboration.

Taken together, our senses, our sense of self, and the labels we attach to experiences make up our sense of the world. Let’s call it the content of our consciousness (other animals are surely conscious, but have other content).

This has been described as a hallucination. We see it as base reality, common sense, but really it is a fabrication, and we have no real certainty that what we experience is the same as what someone else experiences. We can be sure that other creatures’ qualia ARE different.

Surely it is just base reality? Well, your sense of what it is depends on what you measure. Surely there is what you might call a substrate; but we surely don’t know what it is, AND our sense of it (excuse the pun) depends on what we measure, and for what purpose.
If there is a gist to this book (which I am writing), it is that this matters, and that it can be useful in understanding ourselves, understanding how to engineer useful AI systems, and getting a view on how we can best live in a world with these systems ubiquitously around us.
Most of this book will be focused on that last point, as opposed to my day job, which focusses to a great extent on the question of engineering AI systems.
You can turn this lens of understanding on our creations. And there is a long tradition of this way of thinking in Buddhism (and in its Westernised strains as they have interacted with phenomenology).
More on all this soon … from Jabe On AI