
"Artificial Bodies and the Promise of Abstraction": a conversation with Peter Wolfendale (Keywords: Philosophy of Mind; Phenomenology; Embodiment)


Artificial Bodies and the Promise of Abstraction

Please can you start by saying a few things about the rise of embodiment within contemporary philosophy? It seems to me to be mainly used as a corrective against: 1) the Cartesian notion of an immaterial mind, and 2) the materialist tendency to place the mind in the brain. But what are the main positive claims that defenders of embodiment are making?

I think that the meaning of the term “embodiment” in philosophical circles is deceptively diverse, and that those who champion the concept are motivated by concerns that overlap less than is often appreciated. If they are unified by one thing, it is a rogues’ gallery of common enemies. Although Descartes is the most reviled of these, his errors are often traced back to some original sin perpetrated by Plato. However, in order to make sense of these conceptual crimes, it’s worth first distinguishing the explanatory concerns of cognitive science and artificial intelligence from the normative concerns of political and social theory, while acknowledging that both of these are downstream from more general metaphysical concerns regarding the difference and/or relation between matter and mind. So, though there are many purely metaphysical objections to the Platonic dualism of intelligible and sensible worlds, and the Cartesian dualism of mental and physical substances, what really unites the embodiment paradigm is a shared objection to the outsized role that Plato, Descartes, and their inheritors give to “the life of the mind” in explaining how we make our way in the world, and establishing which aspects of it we should value. For want of a better word, we might call this “intellectualism”.

This “life of the mind” is distinguished by the capacity for abstract thought. This is to say that it abstracts away from concrete features of the context in which thinking occurs: it is theoretical, or unconstrained by the practical problems posed by our bodily environment; and it is contemplative, or independent of the sensorimotor capacities through which we interact with this environment. Both Plato and Descartes take mathematics to exemplify this sort of thinking, and on that basis, thought as such. Mathematical theorems are not strictly about anything in our physical environment, and they can be verified even if they’re not applicable to it, in ways that needn’t involve interacting with it. This being said, what really distinguishes Descartes from Plato is his conviction that the physical world can be accurately represented by mathematical models, and thus that our experiences can be treated as internal representations akin to such models. There are other problematic aspects of the Cartesian picture, but this will do for now.


So, what are the main explanatory objections to intellectualism?

There are two. On the one hand, its opponents claim that intellectualism ignores the vast majority of human cognition: most of our lives are spent carrying out tasks and navigating obstacles whose contours are determined by the way our body fits into its environment, rather than reasoning our way from premises to conclusions. Making a cup of tea in an unfamiliar kitchen is a more representative instance of our problem-solving capacity than demonstrating the infinity of primes. On the other, they claim that intellectualism has its priorities backwards: rather than treating this sort of “skilled coping” as a deficient form of abstract cognition, we can only understand the latter by showing how it emerges from the former. Even our ability to imagine complex geometric constructions has at some point been bootstrapped from a basic bodily grasp of orientation and gesture.

The most important targets of these complaints are the classical computational theory of mind in cognitive science and what gets called “good old-fashioned AI” (GOFAI). These see cognition as principally a matter of rule-governed symbol manipulation not unlike mathematical reasoning. They are opposed by a range of “4E perspectives”, so called because they emphasise some combination of the embodied, embedded, enactive, and extended dimensions of cognition. The extent to which these diverge from traditional views varies, but, in rough order, the points of contention are: 1) the extent to which cognition is dependent on features of the body outside of the brain (e.g., the structure of sensory organs), and features of the environment outside of the body (e.g., the availability of cognitive resources), 2) whether the concepts of computation and representation are irredeemably intellectualist (e.g., whether they can account for pre-linguistic “meaning”), and 3) whether dependence implies constitution (e.g., whether my notebook is part of my mind).

What about the normative objections?

Again, there are two. On the one hand, opponents claim that intellectualism reflects and reinforces implicit social hierarchies: those who have historically enjoyed the luxury of theoretical contemplation have done so because the practical problems and bodily processes it abstracts away from have been taken care of for them, often by groups who have been systematically identified with their bodies and bodily capacities, such as women, slaves, and colonised peoples. The disembodied Cartesian ego is an illusion engendered by ignorance and privilege. On the other, they claim that intellectualism devalues significant sources of human knowledge: there are forms of “lived experience” and “situated knowledge” that are valuable even if they aren’t (and possibly can’t be) articulated in a manner that divorces them from the embodied contexts in which they occur (e.g., their emotional valence). This Cartesian false-consciousness doesn’t simply impact the way we treat others, but even the way we treat ourselves, potentially disconnecting us from our embodied existence.

The idea that privileging the mind over the body is associated with other sorts of illicit privilege (e.g., economic, racial, sexual, etc.) is now fairly widespread in contemporary feminist and critical theory. However, there are a variety of philosophical frameworks drawn from the Continental tradition that get used to articulate, elaborate, and offer solutions to this problem. Roughly speaking, the main strands are Spinozist (Deleuze, Affect Theory, etc.), Nietzschean (Foucault, Butler, etc.), and phenomenological (Heidegger, Merleau-Ponty, etc.), though there is much cross-pollination. The first is characterised by the metaphysical tenor of its critique, proposing some form of materialist monism as an alternative to the dualisms of Plato and Descartes. The second is characterised by its focus upon social dynamics, providing an analysis of the way bodies are “ensouled” by the internalisation of patterns of thought and action. But the last provides the greatest point of overlap with the explanatory concerns discussed above, as it offers a detailed introspective analysis of the body’s involvement in the constitution of experience. Phenomenology has had a marked influence on 4E approaches to cognition, and is responsible for the concept that straddles and sometimes connects all these varying concerns, namely, “the lived body”.


The idea of the lived body suggests that the body is not just a causal bridge between ourselves and the world, but rather that the body is our engagement with the world in a way that serves as a condition for the emergence of our subjectivity. This suggests that only a “proper” body will be fit for this purpose – no ersatz or artificial alternative will do. Embodiment is in fact “real meat” embodiment. Is this a fair picture, both in phenomenology and in the other frameworks you discuss above?

Though not every proponent of embodiment will go so far as to insist on an essential link between mind and meat, I think it’s fair to say that this is where the rhetoric of embodiment leads. To some extent, this is because it aligns with other philosophical and political goals, such as undermining pernicious distinctions between human and animal or diagnosing dangerous fantasies implicit in the very suggestion that minds could be uploaded into computer simulations. However, there are some arguments for the claim, and I’ll try to tease out the general pattern of these as I see it.

But first, it’s worth saying something more about the idea of the lived body. The cornerstone of the phenomenological tradition is the idea that the content of explicitly articulated representations, such as declarative sentences or mathematical models, depends upon a more primitive form of meaning implicit in ordinary conscious experience. This gets formulated in slightly different ways by Husserl, Heidegger, and Merleau-Ponty, but they essentially agree that our many and varied representations are able to pick out the same object (e.g., galaxies, spleens, recessions) across changes in time, shifts in perspective, differences of opinion, and diverging interests, only because the referential frameworks they deploy (e.g., star charts, anatomy, econometrics) are so many layers arranged on top of those simple unities that tie together our everyday activities (e.g., places, obstacles, tools). My coffee cup is unified as something I can reach out and grasp, but this grasping is not a carefully planned sequence of muscle movements guided by a mechanical understanding of shapes and forces; it is a single fluid movement in which my fingers fit themselves to the cup’s contours without so much as a second thought.

What distinguishes the “lived body” from the “biological body” is not simply that it is not yet an object of scientific representation, but rather that it is what ties everything together in the last instance. It is the origin of all intentional directedness, and it is experienced as such: an immediate awareness of agency. The question remains: if the lived body is not the biological body, why is “real meat” so important?


The notion that there is some split between an original and a dependent (or derived) form of intentionality is not unique to phenomenology. Wittgenstein is famous for arguing that the usage rules that give words their meaning ultimately only make sense in the context of some shared “form of life”, while John Searle is (in)famous for arguing (in his Chinese room thought experiment) that a mind cannot be built from rule-governed symbol manipulation, precisely because these symbols must already be interpreted as meaningful. Wittgenstein and his followers tend to emphasise the role that social constraint plays in making intentionality possible, while Searle and his followers tend to emphasise the sheer uniqueness of the human body’s capacity for intentionality, whatever it consists in. However, they are entirely compatible with embodied phenomenology and other strands of the paradigm, and are often blended together. So a second question emerges: how should we understand the “dependence” between the original (embodied/concrete) and the derived (disembodied/abstract)?


So, the importance of “real meat” has something to do with the way in which “dependence” is understood. How does this work?

I think it is useful to draw two distinctions. On the one hand, we should distinguish empirical claims about the workings of the human mind from transcendental claims about the workings of any possible mind. On the other, we should distinguish conditions that enable our cognitive capacities from constraints that limit the form they take. When these lines are blurred, it becomes all too easy to mistake significant features of our mental make-up for essential features of any possible cognitive architecture: de facto dependence becomes de jure constraint.

For instance, there is much experimental research indicating that basic information processing tasks (e.g., determining the direction of a noise) are carried out by heuristics closely tailored to environmental and/or bodily parameters (e.g., the distance between our ears). Does this mean that all cognition is heuristic, or just good enough for the environmental conditions it is adapted for? Similarly, there is much phenomenological research arguing that most mental content (e.g., heeding the warning “beware of the dog”) is constituted by sensorimotor expectations tied to specific sensory modalities (i.e., an imaginary bundle of potential sights, smells, sounds, and motions). Does this mean that all thought is parochial, or restricted by the range of our sensory imagination?
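To make the first example concrete, here is a minimal sketch in Python of such a heuristic, using the standard far-field approximation for locating a sound by its interaural time difference. The inter-ear distance used is an illustrative effective value, not a measurement; the point is precisely that the estimate is only as good as the bodily parameter it is tuned to.

```python
import math

def direction_from_itd(itd_seconds, ear_distance_m=0.215, speed_of_sound=343.0):
    """Estimate the azimuth (in degrees) of a sound source from the
    interaural time difference (ITD): the delay between the sound
    reaching one ear and the other. Uses the simple far-field
    approximation sin(theta) = c * ITD / d, where d is an illustrative
    effective distance between the ears. The heuristic works only
    because it is tailored to that bodily parameter."""
    ratio = speed_of_sound * itd_seconds / ear_distance_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp against noisy input
    return math.degrees(math.asin(ratio))

# A 0.3 ms delay puts the source roughly 29 degrees off-centre.
print(direction_from_itd(0.0003))
```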

I imagine you would want to dispute such conclusions, but why? Has the critique of intellectualism missed something important?

To give Plato and Descartes their due, pure and applied mathematics provide us with a wealth of counter-examples, and not simply because they involve brute calculation as opposed to creative inspiration. Mathematicians certainly deploy heuristic techniques in searching for solutions to complex problems (cf. George Pólya’s How to Solve It), and physicists clearly exercise their imaginations in exploring theoretical possibilities (cf. Einstein’s “thought experiments”); yet what makes it the case that any two practitioners are thinking about the same things (e.g., a twisted manifold or an alpha decay event) has come unmoored from the trappings of bodily immediacy, be it the fingers they count on or the eyes they see with. One way to approach this is to explore modal differences in the analogies physicists find helpful (e.g., visual/auditory takes on particles), but my preferred example is Smale’s theorem, which, loosely, proves that there must be a way to turn a sphere inside out without creating creases (eversion). Not only is this physically (and so bodily) impossible, but at the time no one could envision a way to do it. It ultimately took several mathematicians working together – one of whom (Bernard Morin) was blind – to find one.

I think there’s no good reason to assume that there couldn’t be similar collaborations between mathematicians with more radical divergences in embodiment (e.g., humans, aliens, and AIs). This is the promise of abstraction: that we can repurpose diverse cognitive talents to common representational ends. I think that the error of much work on embodiment is to see this as a false promise: that whatever enables our immediate purchase upon the world inevitably constrains any more mediated comprehension of its contents; that there is no true escape from the concrete, only misguided escapism. As a consequence, the idea of abstractions anchored to the world in any manner other than our own becomes inherently suspect. Not only is immediate (“lived”) experience seen as more authentic than that which is mediated, but the form taken by our immediate (“embodied”) purchase on the world – meat and all – becomes the only authentic form. I contend that it is this association between immediacy and authenticity that supposedly renders artificial minds and bodies “unreal”.


I’m not sure this is enough to dismiss the importance of meat. It seems to me that it’s still a salient issue when considering the possibility of minds housed in artificial bodies. Just look at social distancing and the impact that changes to our intercorporeal habits will have on our cognition, our sense of trust, our openness to others, and so on. Isn’t our flesh incredibly significant here?

I don’t think that people are invested in the importance of “real meat” because they have identified some positive feature that makes meat the one true medium of cognition. Rather, it serves as an index of authenticity: a stand-in for whatever it is that supposedly enables actual human cognition at the expense of those merely possible minds such people would rather rule out. This gets dressed up in various ways, such as insisting that only socialisation “in the flesh” can provide the sort of social constraint Wittgensteinians think makes intentionality possible, but there’s little reason offered for this beyond its centrality to the current form of life we share. I have no qualms with anyone who wants to analyse this importance. I’ve no doubt there is much of philosophical interest to be said about the spiritual impoverishment produced by the substitution of virtual for physical contact in life during lockdown. I simply think that elevating it to the status of a transcendental condition is a hyperbolic version of familiar complaints about “kids these days and their smartphones”. Meat merely functions as the common denominator of those factors such people deem intrinsic to a “real life”, encapsulating everything from our peculiar emotional palette and the centrality of touch, to our inevitable mortality and the significance of suffering.

To put my own cards on the table, I’m entirely convinced that artificial bodies and minds are possible, with or without meat, but I think we can loosen the link between the “lived” and the “biological” by beginning with less controversial examples. If nothing else, there is much of the biological body that simply is not lived. There is no lived experience of my spleen, my lymph nodes, or my mitochondria as distinct unities that bear upon action. Their (dys)functioning is frustratingly opaque. Similarly, though reaching for my coffee cup is a single fluid movement, I can, through reflection, decompose it to some extent: I can separate the movements of shoulder, elbow, and wrist in my awareness, and consider the motions of individual fingers, and then their joints; but there are limits to this process. When it comes to bodily awareness, immediacy does not imply transparency. The edges of volition blur as we descend deeper into our own somatic depths. The embodiment paradigm sometimes advertises this as a further departure from Descartes, for whom the inner workings of experience must be fully laid open to introspection. The lived body is no Cartesian theatre.

What about the converse? Can the lived body extend beyond the bounds of the biological body? Yes! Merleau-Ponty was particularly interested in the phenomenon of phantom limbs, or cases in which amputees can still feel the presence of appendages that are no longer there. This is a key piece of evidence for the existence of a “body schema”, or an internal model of the body that tracks and organises our experience. There are disagreements over the nature of this schema (e.g., whether it is a “representation”), but there are other psychological phenomena that let us trace its parameters. Consider the rubber hand illusion, in which someone’s hand is hidden from view, but positioned and stroked in the same manner as a rubber hand they can see. This induces the feeling that the rubber hand is part of the subject’s body. This shows both that the schema is multi-modal, or that it integrates information from distinct senses (i.e., vision, as well as touch and proprioception), and that it can identify non-biological things as belonging to our body. There are a number of other so-called “body transfer illusions”, but it’s important to see that these are only deemed illusions on the assumption that their objects are not really part of the body, even if they are felt as such.

This assumption comes into question at the point where phantoms and illusions overlap, namely, in prosthetics. In designing a prosthetic hand, the goal is to exploit the sensory basis of the rubber hand illusion to map the phantom to the mechanism – to put the ghost in the machine, as it were. Thankfully, the human brain is very flexible, and can remap sensorimotor signals so that pressure on a stump can be felt in a hand, or flexing of an unrelated muscle be felt as a grip. If a prosthetic is to play the role of the relevant body part as well as possible, it must be integrated into the body schema. The deep question is whether this is enough to make it a genuine part of my body. The rubber hand is felt, but it is not lived. But the prosthetic hand is lived, even if it is not strictly living. As far as I can see, there’s nothing about the lived body that prevents us from building it to our preferred specifications, as long as it supports an immediate awareness of our agency.

What about the more controversial examples you hinted at?
There’s reason to think that the body schema is even more malleable than it seems, and that the sorts of skilled coping mentioned above involve tools literally being appropriated as temporary extensions of our bodies. A seasoned pool player doesn’t feel the cue striking the white ball in their hands, but at the tip of the cue itself. An experienced driver knows the dimensions of their car in the same way they know the dimensions of their body, not in feet and inches, but in the range of movements that feel comfortable. This protean potential of the lived body can be exploited to create prosthetics that diverge from their natural counterparts in form and function, allowing us to embed ourselves in our environments in new and unexpected ways (e.g., thought-controlled computer cursors used by paralysis victims). This should be perfectly acceptable to those 4E proponents who believe in the extended mind (cf. Andy Clark’s Natural Born Cyborgs), and to those critical/feminist/Continental theorists who endorse certain forms of posthumanism (cf. Donna Haraway’s “Cyborg Manifesto”).

More contentiously, these mechanisms can be exploited not just to extend the physical body, but to embed our bodily awareness into new environments, be they spatially remote (telepresence) or purely simulated (virtual reality). This has opened a whole new frontier of technological experimentation, from surgeons operating on patients on different continents to gamers cooperatively exploring shared fantasy worlds: what it means to “be there” is gradually becoming as flexible as what it means for a hand to “be mine”. Of course, there are still those who will insist that we are not really there unless we are there “in the flesh”, but again, I think this begs the question.

What’s at stake here is whether we can separate out the different cognitive roles played by the human body, which, in homage to 4E, we might call the 3Is: incarnation, interaction, and immersion. In order, these require: 1) that cognition be physically realised (e.g., in the brain and central nervous system); 2) that cognition be causally entangled with an environment (e.g., in sensorimotor feedback loops); and 3) that cognition be grounded in some immediate practical purchase upon that environment (e.g., skilled coping configured by a body schema). Though incarnation and immersion may seem essentially united for us, there’s no good reason to assume that they cannot be teased apart. It’s entirely feasible that isolated human brains could animate androids from a distance, or that distributed artificial intellects could inhabit human bodies from the cloud, without sacrificing any of the cognitive capacities enabled by embodiment. There is nothing in principle preventing a virtual avatar from being a lived body, or its computational underpinnings from being as frustratingly opaque as our own somatic depths.

In sum, though the embodiment paradigm has done a great deal to help us understand the functions of the bodies with which nature has equipped us, this very understanding permits us to engineer systems that realise these functions in new and perhaps quite different ways.


In a recent review of AI research in the TLS, Tim Crane mentions that artificial minds can “reckon”, i.e. calculate, but not “judge”, i.e. give a damn. Crane notes that “[A.I.] is not, and has never been, a theory of human thinking”. Furthermore, he argues that there is no consensus around what the very notion of “general intelligence” amounts to, which raises questions about the extent to which it can be replicated artificially. Is your point that embedding cognition in non-biological bodies will not faithfully reproduce specifically human thinking or general intelligence, but may enable a distinctively super-human or trans-human form of thinking/intelligence?

Tim Crane is not quite right here. He’s right to distinguish the symbolic approach of GOFAI, which focused on emulating the types of competence that we can decompose into explicit rules (e.g., calculating orbital trajectories), from contemporary sub-symbolic approaches such as deep neural networks (DNNs), which focus on emulating the types of implicit competence that can only be acquired through training (e.g., classifying photos of animals by type). But he’s wrong to say that AI has taught us nothing about human thinking, as there has been quite a productive back-and-forth between research on natural and artificial neural networks. Most importantly, research on the structure of the visual cortex in humans and animals inspired the development of the convolutional neural networks now widely used in machine vision and image analysis (e.g., Google’s DeepDream), while the latter have provided new ways of modelling and testing hypotheses about the former. Furthermore, all this work on deep networks that represent features in layers moving from concrete to abstract (e.g., edges > faces > object structure > object type) has helped to foster more general theoretical frameworks that aim to understand what is common to both natural and artificial intelligence (e.g., predictive coding, the Bayesian brain hypothesis, and Karl Friston’s free energy principle). I could say something more discerning about these frameworks, but it’s perhaps better to point out that they are in active dialogue with the 4E perspectives mentioned above.
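To illustrate the concrete end of that layering, here is a minimal sketch (plain NumPy, with invented values) of what a single convolutional filter does: it slides a small pattern detector across an image and responds wherever the pattern occurs – here, a left-to-right change in brightness, i.e. a vertical edge. Stacked layers of such filters, with learned rather than hand-written weights, are the kind of component these networks are built from.

```python
import numpy as np

# Toy image: left half dark, right half bright (a vertical edge).
image = np.zeros((5, 5))
image[:, 2:] = 1.0

# A hand-written filter that responds to left-to-right brightness change.
kernel = np.array([[-1.0, 0.0, 1.0]])

def convolve_rows(img, k):
    """Slide the 1x3 kernel along each row, summing the products
    at every position (a minimal 'valid' convolution)."""
    h, w = img.shape
    kw = k.shape[1]
    out = np.zeros((h, w - kw + 1))
    for i in range(h):
        for j in range(w - kw + 1):
            out[i, j] = np.sum(img[i, j:j + kw] * k)
    return out

print(convolve_rows(image, kernel))  # responses peak where the edge sits
```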

He’s also right to claim that precisely what “general intelligence” amounts to is a significant philosophical problem about which there is no solid consensus. It’s more often indexed to the sorts of competence we humans display than given an independent definition suitable for the study of “thought as such”. However, the idea that it is characterised by judgment, defined within his review as “an overarching, systemic capacity or commitment, involving the whole commitment of the whole system to the whole world”, is admirably Kantian, and I heartily endorse it. A system capable of making judgments about the world must, in principle, be able to integrate any and all information that is relevant to those judgments into a unified picture. The difficulty of designing systems with this capacity is underlined by one of the major stumbling blocks of GOFAI, known as “the frame problem”. Getting to grips with this is a good way to understand the contrast between GOFAI and contemporary approaches.

The problem is this: if you attempt to make a “general problem solver” by writing a program that deduces solutions to problems from a set of propositions describing its environment (using first-order classical logic), the number of propositions you need to give it grows exponentially. For each new variable the program tracks, you must specify how it is related to every other variable, even if the variables are independent. For example, if you want the program to deduce instructions for cooking an omelette, you must not only tell it that the eggs will cook faster if the heat is increased, but also that nothing will change if it begins to rain outside. Everything in the world is potentially relevant to everything else, but a generally intelligent agent must be able to cope with this without learning the actual relationships all at once. This means that it needs to be able to learn “frames” or the local relationships that determine which information is relevant to specific problems, without needing a global picture suitable for every problem.
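A toy illustration of the blow-up, assuming a propositional problem-solver of the kind just described (the action and condition names below are invented for the omelette example): every action/condition pair needs an axiom, even where the honest answer is “nothing changes”, and each new boolean condition doubles the space of world-states a global model would have to cover.

```python
from itertools import product

# Hypothetical vocabulary for the omelette example.
actions = ["increase_heat", "crack_egg", "wait"]
conditions = ["eggs_cooked", "pan_hot", "raining_outside"]

# Each (action, condition) pair needs an axiom stating its effect,
# including all the cases where nothing changes (e.g., rain
# beginning outside does not affect the eggs).
axioms = [f"effect({a}, {c})" for a, c in product(actions, conditions)]
print(len(axioms), "effect/frame axioms for a 3x3 toy world")

# Each new boolean condition doubles the number of distinct
# world-states a global model would have to reason over:
for n in (3, 10, 20, 30):
    print(n, "conditions ->", 2 ** n, "possible world-states")
```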

This is precisely what DNNs are good at capturing. By means of training on sample cases, a DNN learns the complex relationships between elements of its inputs that are relevant to producing the correct outputs, encoding them non-propositionally as an intricately layered pattern of connections and weights. The problem is that there’s no easy way to teach them to take into account a wider range of inputs and outputs without retraining the whole system from scratch. Once more, the system’s model of the world cannot easily be expanded to incorporate new things. We can use trained DNNs as black box components of larger systems, not unlike the way in which the brain incorporates task-specific subsystems (e.g., facial recognition, distance estimation, etc.), but their representations don’t compose. By contrast, a generally intelligent agent must be able to re-frame problems, by reassessing the relevance of other information it has at its disposal. This means that its subsystems need to be organised in a way that enables it to integrate information across them.
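To see why expansion is hard, consider a minimal sketch (plain NumPy rather than a real deep-learning framework, with arbitrary untrained weights standing in for a trained model): everything such a network has learned lives in weight matrices whose dimensions fix exactly which inputs and outputs it can relate, so widening its “frame” by even one input changes those dimensions and invalidates the learned mapping.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_outputs = 4, 8, 2

# The "knowledge" of the network is nothing but these two matrices;
# their shapes hard-code which inputs and outputs it can relate.
W1 = rng.normal(size=(n_inputs, n_hidden))
W2 = rng.normal(size=(n_hidden, n_outputs))

def forward(x):
    hidden = np.maximum(0.0, x @ W1)  # ReLU hidden layer
    return hidden @ W2                # linear readout

print(forward(np.ones(n_inputs)))     # works: exactly 4 inputs expected
# forward(np.ones(n_inputs + 1))      # fails: shape mismatch -- a fifth
# input would require reshaping W1 and retraining the whole mapping
```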

I think that this contrast between symbolic and sub-symbolic approaches parallels that between abstract/disembodied thought and concrete/embodied cognition discussed above. The former creates explicit “knowledge representations” in a manner designed to be more or less independent of the purposes they can be put to, while the latter settles for “know-how” that is implicit in task-specific heuristics. One is organised, but computationally intractable, while the other is tractable, but computationally disorganised. The real question is not which approach is correct, but how they can be united, much as Kant sought the unity of understanding and sensibility. This may be the route to creating distinctively super-human intelligences, but I don’t see why it can’t also be a route to understanding ourselves. Computer science provides us with resources to pursue what Kant would call “transcendental psychology” beyond the bad analogies (e.g., body/mind ≈ hardware/software) and dubious metaphors (e.g., “memory files”) about which phenomenologists complain.

To end on a more methodological note, the difference between what I’m proposing and much embodied phenomenology is that I think we can describe the form and function of concrete experience in thoroughly abstract terms, without falling into contradiction. I believe that we can only explain the immersive character of embodiment if we first understand the computational structure of interaction in general. By contrast, there are those in the embodiment paradigm who not only think that we must begin with the lived body, but that its truth can only be lived. For them immediacy is not just the content, but also the form that our understanding must take. It’s unsurprising that this leads to the sort of somatic chauvinism that cannot imagine forms of life that look nothing like its own.

Peter Wolfendale is a philosopher based at Newcastle University whose work focuses mainly upon the intersection between the methodology of metaphysics and the structure of rationality, but also includes foundational topics in the philosophy of value, ethics, aesthetics, computer science, and social theory.

Interview by Anthony Morgan

