As I stated in my previous post, the first major content theme in The Atlas of Social Complexity is Cognition, Emotion and Consciousness.
- Chapter 6 addresses autopoiesis.
- Chapter 7 turns to the role of bacteria in human consciousness.
- Chapter 8 explores how the immune system, just like bacteria and cells, is cognitive – and the implications this has for our wider brain-based consciousness.
- Chapter 9 explores a complexity framing of brain-based cognition, emotion and consciousness.
- Chapter 10 explores the complex multilevel dynamics of the Self.
- Chapter 11, the final one for this section and the focus of the current post, is about human-machine intelligence.
CLICK HERE to purchase the book, or to request it for your library.
HUMAN-MACHINE: A quick summary
AI is everywhere today. And it happened quickly. What does it all mean for human consciousness? As we navigate the Digital Anthropocene, AI and technology provoke critical questions about how our cognition, emotions, and awareness are evolving alongside these new technologies. In particular, they raise questions about the current development and future evolution (as a species) of our cognition, emotion and consciousness in relation to technological systems. Chapter 11 of the Atlas explores these questions. Our guide for this journey is the American literary critic and posthumanist scholar Katherine Hayles and her theory of human-technical cognitive assemblages, as outlined in her book, Unthought.
To put Hayles’ framing to work, we do three things in this chapter.
- We define what she means by human-technical cognitive assemblages.
- We rework her definition of machine cognition to better align it with the study of social complexity.
- We set the stage for Chapter 22, in which we spend considerable time exploring our current complex system of systems of digital machines and our posthuman condition.
Here is a glimpse of some of our conclusions from this chapter.
The inability of machine cognition to explain itself is why scholars refer to machine learning as a ‘black box’. We know how to programme machine cognition using artificial neural nets, genetic algorithms or computational models; but we often have little insight into how machine cognition arrives at its conclusions, because these machines are ignorant of what they do beyond the output they provide and the data upon which they are trained. A case in point is Cliff Kuang’s New York Times article, Can A.I. be taught to explain itself?[1] In the article, Kuang explains that “as machine learning becomes more powerful, the field’s researchers increasingly find themselves unable to account for what their algorithms know – or how they know it”.
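To make the ‘black box’ point concrete, here is a minimal sketch (not from the Atlas, and assuming Python with scikit-learn): a small neural network learns a hidden rule from data, and while we can print its learned weights, those numbers do not explain, in human terms, how any particular conclusion is reached.

```python
# Minimal illustration of the 'black box' problem: the trained network
# classifies correctly, but its learned weights are just matrices of
# numbers -- nothing in them reads like a reason.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # hidden rule: same-sign inputs -> class 1

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

print("prediction for [0.4, 0.7]:", model.predict([[0.4, 0.7]]))
print("first-layer weights:\n", model.coefs_[0])  # numbers, not reasons
```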
This gets to a core problem of nonconscious cognition: while it extends our cognitive and emotional life and our consciousness to a near-global level, it still requires a significant degree of attendant human intelligence, involvement, management, guidance, or control. This core problem also points to a wider and as yet unaddressed problem in our travels: complexity. Machine cognition is very good at the ‘difficult’ and the ‘complicated’, processing information and large amounts of data, by brute force, at speeds that are humanly impossible; but human cognition is still better at complexity. Winning at chess is one thing; winning at diplomacy is another. Human consciousness needs to retain its executive function. The Self evolved and emerged for a reason: complex living systems, even when their embodiment is extended to the mechanical and digital, require guidance, even when that executive function is limited by its own consciousness.
Or at least for now.
What we will become, as posthuman cyborgs, over the course of the next several hundred years, given our increasing integration into and cognitive dependency upon a global network of human-machine cognitive assemblages, is difficult to determine. One thing, however, is for sure: any exhaustive study of human cognition, emotion and consciousness needs to contend, at some point, with this newly emerging form of human evolution, as we move through the early stages of the Digital Anthropocene.
KEY WORDS: Machine cognition, actor-network theory, new materialism, posthumanism, transhumanism, human-technical cognitive assemblages.