Jakub Ferenc | Články | The science of the interface in extended cognition and its parallels to McLuhan’s media theory


The science of the interface in extended cognition and its parallels to McLuhan’s media theory


Source: https://medium.com/@wordsandsuch/prototyping-the-extended-mind-b522a145335e

Abstract

The seminal paper by Clark and Chalmers (1998a) introduced the Extended Mind (EM) thesis, the parity principle, and arguments for the active externalist view of cognition. Proponents of EM treat the external world and technological artefacts as constitutive parts of our cognition that are tightly coupled with internal brain processes, forming interactive, higher-order extended and distributed cognitive systems (Sutton, 2008; Hollan et al., 2000). According to EM, our cognition leaks into the environment and needs media, technological objects and other people to function properly as a mind (Sutton, 2010), which renders human beings natural-born cyborgs (Clark, 1998b, 2001, 2004).

Reacting to the critique (Adams & Aizawa, 2001), EM theory has moved away from the isomorphic restrictions of the parity principle into its second wave, which emphasizes the complementarity of the biological and the non-biological. Whereas the first wave of EM flattens out any distinction between inner and outer components by treating functionally equivalent elements as equal, the complementarity of the second wave keeps their analytical separation and lets us turn our attention to how these components interface with each other. Scholars in EM therefore call for “a science of the biotechnological mind” (Clark, 2002) or “the science of the interface” (Sutton, 2010) that would investigate how technology scaffolds our cognition and how the interface brings together and transforms the seemingly ontologically stable properties of inner and outer components by virtue of its mediating effects.

Although the EM thesis emerged in the context of cognitive science and philosophy of mind, from the beginning it used thought experiments about how we interact with technological media (Tetris, paper notebooks) to pump our old intuitions about the mind and its relationship to the external world. It is this interrelation of philosophy, technology and media that leads me to analyse both sides of the interface, the human and the technological, via the core ideas presented in the media theory of Marshall McLuhan, who was a vocal proponent of the view that technology brings about “psychic and societal” changes in human affairs.

The aim of my presentation is to explore extended mind theory against the background of McLuhan’s media theory and to sketch an approach to “the science of the interface”, in which the interface is seen as a mediation process: an always transformative force between two or more dissimilar sides that need an element in between to enable their interaction.

Introduction

We are living in an age where most of our interactions with other people and the outside world are technologically mediated (Deuze, 2015). Our world is digital, and our technology is ubiquitous. Most of what we call technology, however, stays in the background as part of massively distributed, multilayered, and interconnected infrastructures (Bratton, 2015).[1] To gain access to their functions, we need an open window that mitigates and smooths over the differences between the human being and technology in order to establish the interaction. But what should we see inside that window besides the functional and aesthetic affordances that we as users look for? It turns out that this window, which we can call the interface of technology, does not represent reality by mimicking it with the strategy of mimesis known from the visual arts. Its strategy is the opposite: to hide the reality of technology by reducing the sheer complexity of technological infrastructures to a level that is perceptible and manageable by human cognitive abilities. On this view, the interface of technology is problematized as something much more profound than the fixed screen displaying a graphical user interface filled with the icons and windows familiar from today’s personal computers and smart devices. Although the interface of many technological devices has a visual side, in addition to representing something existing, such as photographs or videos, it is also a new, synthetic image, or rather a cognitive map of the technological infrastructure, created by engineers and designers in order to hide the true complexity of technology and offer only those features that users deem useful.

Analogously to a language translator or a dispute mediator in law, the interface stands as a third element between two sides that cannot otherwise communicate with each other. In technology, specifically, it creates a relationship between humans and technology. When the interface[2] mediates or translates, in other words, when it works properly, something is paradoxically lost or transformed. This, however, is a prerequisite for being a medium in the first place, because inherent to the process of mediation or translation is a transformation of whatever is being exchanged between the two or more incompatible sides.

McLuhan’s medium

This is the main idea embodied in the famous dictum “the medium is the message” by the media theorist Marshall McLuhan, and also in the work of the Czech phenomenologist Miroslav Petříček (McLuhan, 1994; Petříček, 2009, Pages 10-12). Following McLuhan, instead of asking questions about media institutions or the content of media (the content of a book or of YouTube videos), we should focus on the technology itself and the formal, unique attributes with which we interact. For McLuhan, technology is an extension of our natural sensory capabilities, and each extension brings about different changes in our cognition that permeate from individuals to society as a whole. McLuhan devoted his work to mapping out the impact of various technologies, such as the phonetic alphabet, the printing press or television, on our culture, but also on our cognitive abilities. Due to his devotion to the power of technological change, McLuhan has been accused of being a technological determinist. Many sentences in his work indeed read in this manner, for example when he pronounces “the phonetic alphabet, alone, [as] the technology that has been the means of creating ‘civilized man’” (McLuhan, 1994, Page 84), or the bold assertion that “[p]rint created individualism and nationalism in the sixteenth century. Program and ‘content’ analysis offer no clues to the magic of these media […]” (McLuhan, 1994, Pages 19-20). But McLuhan also expressed a much more nuanced version of the power of technology, in which the human being and technology influence each other in a cybernetic feedback loop (McLuhan, 1994, Page xxi).

Should we take McLuhan’s notion of technology extending our senses literally, or is it just a well-chosen metaphor by a “mere” English literature scholar? McLuhan was clearly serious about his views on how technology shapes our cognitive abilities: he synthesised research on the cognition of illiterate people as well as cognitive and neuroscience research that seemed to vindicate his insistence on the impact of the phonetic alphabet leading to a Western, “linear thinking” (McLuhan, 1978). But how do technology and its interface transform us and our societies when we let them become part of our culture? The answer lies in what McLuhan, as one of the few humanities scholars, studied: a convergence of cognitive science, the analysis of our relationships to technologies, and the technical knowledge behind the construction of the interfaces whereby such relationships are enabled.

If it is the case that technology transforms us in a non-trivial way, this yields exciting new questions that should not be left to engineers or designers alone but should be naturally incorporated into the philosophy of technology. For example, if my iPhone is a part of my cognition, will stealing it be considered not only theft but also an assault on my health? If other smart devices are somehow parts of our cognition, does it mean that designers have the power to literally design our minds?

In the following sections, I will explore how contemporary research in the second wave of extended mind theory shares McLuhan’s project of studying how technology extends us. Moreover, I will show that McLuhan’s insistence on studying the process of mediation, or what we should call the interface (Ferenc, 2018), is embedded in contemporary EM research even though it rarely, if ever, cites the media theorist as a direct source.

First-wave extended mind thesis

The seminal paper by Clark and Chalmers in 1998 introduced the notion of extendedness of our mind and cognition to the broader cognitive science and philosophy of mind community. Since the publication of the paper, Andy Clark has become one of the main proponents of the thesis, authoring several books about the intimate relationship between humans, technology, and the world.

The extended mind (EM) thesis proposes that we should consider the external world a part of our cognitive processing and, under specific circumstances, also a part of our mind. The EM thesis does not merely argue for the common-sense idea that external objects such as technology play an important role in our daily coping with the world when we use them. The intimacy of the connection is much more profound and suggests a different picture: technology is not another layer on top of existing, fixed biological, cognitive and mental layers but, ontologically speaking, actively co-constitutes all of them. Clark (2001) argues in remarkably similar terms as McLuhan before him that:

“[…] the tools and culture are indeed as much determiners of our nature as products of it. Ours are (by nature) unusually plastic brains whose biologically proper functioning has always involved the recruitment and exploitation of non-biological props and scaffolds. More so than any other creature on the planet, we humans emerge as natural-born cyborgs”.

To help us overcome the bias of the internalist view of cognition and mind, proponents of the thesis employ various thought experiments to “pump” our intuitions. Here I will briefly mention four of them: Richard Feynman’s remark about his notes, the personal story of Patrick James, a deacon in a Catholic church who suffers from amnesia and uses technology to keep his life intact, and the famous Tetris and Inga/Otto thought experiments from the original EM paper.

Feynman’s notes

The famous physicist Richard Feynman was once interviewed by a historian of science, Charles Weiner, about his work routines. In his book Supersizing the Mind, Clark retells the story of the encounter and mentions the glee with which the historian looked at Feynman’s work notes and remarked that they were an important record of his work. Feynman surprised the historian by answering that the notes were not a record of the work, they were the work itself (Clark, 2008, Page xxv).

Patrick James’s memento

In the movie Memento, the main character suffers from anterograde amnesia, which prevents him from forming new memories, and relies on external objects such as Polaroid photographs or body tattoos to remind himself of essential information. A real-life person in the US, Patrick James, who works as a deacon in a Catholic church, lives with the same condition (Marcus, 2008). Due to a childhood brain trauma and subsequent concussions, he is also unable to form new memories. In contrast to the character in the movie, James relies on the latest technologies to keep his life and job in order: he uses an iPhone and the Evernote app, which he consults whenever he has memory lapses.

Tetris

The Tetris thought experiment (Clark & Chalmers, 1998) involves a person playing the game. The authors invite us to imagine three similar cases. In the first case, the player mentally rotates the geometrical shapes (or zoids) on the screen in his head. In the second case, the player rotates the zoids using the keyboard. In the third, cyberpunk-like case, the same player has a neural implant that simulates the physical rotation done with the keyboard, but this time the entire rotation is performed at the will of the player inside the head, thanks to the cooperation of the brain and the neural implant. The authors argue that if we do not object to the implant in the third case being part of our cognition, we should regard external resources used for cognitive tasks as cognitive as well. In other words, we should not privilege the skin and the skull as the boundary of cognition, a boundary the authors consider arbitrary and an unsubstantiated historical prejudice.

Inga and Otto experiment

The famous Inga and Otto experiment tries to show that if our beliefs about the state of the world can be external to our head, we should accept that a part of our cognition extends beyond the boundary of brain and skull. The experiment lets us imagine two people, Inga and Otto, who both want to see an art exhibition at MoMA. Inga does not remember offhand where the museum is, but after consulting her memory, she recalls that the museum is on 53rd Street and goes there. Otto suffers from a mild form of Alzheimer’s disease and carries a notebook with him into which he writes vital information. Otto also wants to see the exhibition. Naturally, he cannot remember the street, but after consulting his notebook, he finds the street address and goes there.

The authors of the paper suggest the following: Otto goes to the museum because he wants to go and because he believed, even before consulting his notebook, that the museum is on 53rd Street. We have no problem accepting that Inga believed she knew where the museum was before consulting her biological memory, and the authors argue that Otto’s belief has the same status even though the substrate of the memory is different. Clark and Chalmers write that the information in Otto’s notebook “functions just like the information constituting an ordinary nonoccurrent belief—it just happens that this information lies beyond the skin” (Clark & Chalmers, 1998).

Parity principle

The authors do not make their case by suggesting that there is no difference between biological and non-biological memories. Instead, in their original paper, they propose that an external memory device such as a notebook functions as a process that is functionally equivalent to what the brain does. This functionalist approach to the extended mind thesis represents the core argument, later called the parity principle, which in the original paper reads as follows:

“If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.” (emphasis in the original)

Of course, such a definition may lead to an impossibly broad extension of the mind, counting other people and even the whole internet as parts of our mind. To specify what counts as cognitive in the external world, the authors list four criteria, summarized here, that John Sutton (2010) calls the “glue and trust” criteria. First, the external object, such as Otto’s notebook, must be a constant in one’s life, meaning that the person will rarely take action without it. Second, the information in it must be readily available. Third, upon retrieving the information, the person automatically endorses it. Fourth, the information in the external object was consciously endorsed at some point in the past. The parity principle is a probe based on the functionalist perspective on the mind, designed, as Clark comments, to “free us from mere bio-chauvinistic prejudices”.
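To make the checklist concrete, the four “glue and trust” criteria can be read as a simple conjunction of conditions that an external resource must satisfy. The following toy Python sketch is purely my own illustration; the class and field names are invented here and nothing of the sort appears in the EM literature:

```python
# A toy illustration (my own, not from the EM literature): the four
# "glue and trust" criteria read as a conjunction of conditions that an
# external resource must satisfy to count as part of a cognitive system.

from dataclasses import dataclass


@dataclass
class ExternalResource:
    constant_in_life: bool        # 1. the agent rarely acts without the object
    readily_available: bool       # 2. its information is easy to access
    automatically_endorsed: bool  # 3. retrieved information is trusted outright
    previously_endorsed: bool     # 4. the information was consciously endorsed in the past


def counts_as_cognitive(r: ExternalResource) -> bool:
    """True only if all four 'glue and trust' criteria hold."""
    return all([r.constant_in_life, r.readily_available,
                r.automatically_endorsed, r.previously_endorsed])


# Otto's ever-present notebook meets all four criteria;
# a library book consulted once does not.
ottos_notebook = ExternalResource(True, True, True, True)
library_book = ExternalResource(False, True, False, False)

print(counts_as_cognitive(ottos_notebook))  # True
print(counts_as_cognitive(library_book))    # False
```

The sketch makes visible why the criteria are called “glue and trust”: the first two conditions glue the resource to the agent, the latter two establish trust in its contents.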

Active externalism

The examples of Otto or Patrick James should stimulate our intuitions enough to see clearly that taking their external technological objects away from them would drastically reduce their memory and cognitive competence, changing their personalities as well. On this view, we have to acknowledge that the relationship between humans and technological objects is indeed much more intimate and transformative than it would otherwise seem. As Clark and Chalmers explain:

“the human organism is linked with an external entity in a two-way interaction, creating a coupled system that can be seen as a cognitive system in its own right. All the components in the system play an active causal role, and they jointly govern behaviour in the same sort of way that cognition usually does” (Clark & Chalmers, 1998)

Clark and Chalmers call this coupled human-technology relationship active externalism, distinguishing it from the previously known semantic externalism advocated by Putnam and Burge. Active externalism, as opposed to passive, semantic externalism, emphasises the active and direct impact of external objects on our behaviour. Embracing active externalism means that the tools and methods for studying and explaining human behaviour must change. Whereas the internalist view of mind and cognition would argue that studying human cognition requires only “looking inside the head”, active externalism proposes that when we use technological objects and external features of the world, we become participants in higher-order cognitive systems with emergent properties that we would not find by a close analysis of the individual components of such coupled systems. As the original paper reminds us, active externalism is closely related to approaches in other fields: the distributed cognition approach of Hutchins (1995), in which cognition is distributed between human agents and external objects; phenomenologically inspired situated cognition in human-computer interaction (Suchman, 1987); and robotics and artificial intelligence (Brooks, 1991), in which a dynamical, real-time, two-way coupling with an environment that serves as its own best representation helps humans and robots solve ad-hoc problems.

Parity criticism and second-wave EM

Although the first-wave EM, based on the functionalist parity principle and coupled, two-way interactive systems, attracted a large following, it was not without its critics. The main criticism of the first-wave EM came from Adams and Aizawa (2001, 2007). Their criticism centres on the notion of “intrinsic content” and the inability to clearly identify the “mark of the cognitive”. They argue that only internal, that is, neural representations have intrinsic content and can therefore be regarded as cognitive, as opposed to the derivative content we find in the case of Otto, who on this analysis must first interpret the perceptual data, something Inga need not do. Even though Clark responded to these objections and to others criticizing the functional similarity of various internal and external processes (Clark, 2005; Clark, 2010), John Sutton (2010) presents a further criticism, and a suggestion that moves the EM thesis beyond the parity principle.

Sutton warns about the problems that arise if we commit ourselves to a strong variant of the parity principle and treat the internal and external components of cognitive systems as equal. Sutton cites John Haugeland, who takes functionalist isomorphism to its extreme when he argues that we should not

“treat brain and body as clearly separable components joined at a well-defined psychophysical interface, nor can we slide ‘the hump in the rug’ outward by identifying principled interfaces between body or sense organs and the physical world […] we have to make it all lie flat”. (Haugeland, 1998, Pages 228-229)

Haugeland’s flat ontology without distinctions, influenced by the parity principle, presents an issue if we want to study the individual differences among people who “connect” to extended bio-technological systems via the very interfaces that Haugeland dismisses. Sutton believes that keeping the distinct features of the biological and the non-biological is necessary because there are in fact important and interesting differences between humans and artefacts, even though the higher-order extended systems may smooth them out “under the rug”, as Haugeland claims. As Sutton argues, extended mind research clearly needs to study the human component, because humans are not blank slates but have species-specific and individually specific differences. The study of non-human components is necessary as well; no one would argue that Otto’s notebook is the same as the memory cells in Inga’s brain. Moreover, if the parity principle reduces all external artefacts to the functions and interfaces they provide to us, we would not need, as Sutton says, “media theory, history, or any other ‘social, cultural, and technological studies’ in cognitive science” (Sutton, 2010, Page 200).

The parity principle must therefore be fundamentally incomplete: at best a sufficient, but not a necessary, condition for the EM thesis. Sutton (2010) thus suggests loosening the requirements of the EM thesis: instead of focusing on functional isomorphism, we should stress the complementarity of the internal and the external. Such a position, according to Sutton, moves the EM thesis into its second wave, beyond the parity principle.

Clark himself says that the brain does not have to replicate whatever useful processes the external objects provide. Rather, it

“must learn to interface with the external media in ways that maximally exploit their particular virtues”. (Clark, 1997, Page 220) (my emphasis)

Sutton adds that our brains are “leaky associative engine[s]” that “need media, objects, and other people to function fully as minds”. The complementarity version of EM keeps the distinction between the internal and the external and, importantly, shifts our attention to how these two parts can, despite their natural differences, actually interact with each other. Clark agreed with this already in 1998, when he suggested that the extended mind thesis should turn “primarily on the way disparate inner and outer components co-operate so as to yield integrated larger systems capable of supporting various […] forms of adaptive success” (Clark, 1998).

It seems that the first wave was occupied with comparing the emergent outcomes of various interactions between humans and various forms of media, as if looking at the system from a bird’s-eye perspective. The second wave agrees with the existence and importance of emergent properties at the large scale of extended mind systems, but nevertheless looks closely at the interactions between the system’s components that cause those emergent properties in the first place.

Andy Clark writes that: “[t]he cash value of the emphasis on extended systems (comprising multiple heterogeneous elements) is thus that it forces us to attend to the interactions themselves: to see that much of what matters about human-level intelligence is hidden not in the brain, nor in the technology, but in the complex […] interactions and collaborations between the two.”

Then Clark continues with a prophetic voice:

“The study of these interaction[s] […] is not easy and depends both on new multidisciplinary alliances and new forms of modelling and analysis. The pay-off, however, could be spectacular: nothing less than a new kind of cognitive scientific collaboration involving neuroscience, physiology, and social, cultural, and technological studies in about equal measure”. (Clark, 2001, Pages 153-154)

Sutton calls this turn to study the interactions and relations between the inner and external resources the science of the interface.

The science of the interface

The primary question of the hypothetical science of the interface is how we interface with technology and what taxonomy of interfaces enables such interaction. The question can be tackled from various sides, using methods and tools from a variety of disciplines, which satisfies Sutton’s and Clark’s call for an interdisciplinary approach to the extended mind.

Through the lens of the science of the interface, Sutton explicitly touches upon ontological questions and the subject-object dichotomy when he writes that second-wave EM must take into account that when inner and external resources are “brought to the interface with all […] different media and symbolic technologies”, then “interacting with different external artefacts” and “interfacing” with them is “often inherently transformative”. Strikingly, we can hear in Sutton’s words McLuhan’s fundamental ideas about the process of mediation as always being transformative, as well as about the media’s power to bring about “psychic and societal” changes.

Moreover, McLuhan’s program of studying historical examples of media and their influence on society can clearly be seen in the “historical cognitive science” that Sutton argues for. In the light of the extended mind thesis, the argument for such an enterprise is straightforward: if we agree that cognition is extended and distributed, then the brain is a historically influenced “biosocial organ”, because sociocultural layers have causal power in our cognition. According to Merlin Donald, whom Sutton cites, culturally specific technologies and media “have constituted part of human cognitive architecture since the upper Palaeolithic period” (Sutton, 2008).

Lastly, the science of the interface must accommodate the role of context in the use of technology. Given that different media and technologies feature different interfaces, which in turn afford different kinds of interaction based on contextual cues that a human may or may not take advantage of, Sutton argues that second-wave EM is “an invitation to give detailed attention to […] differences [in kinds of interaction] in specific contexts and case studies” (Sutton, 2010). Sutton calls for anthropological or ethnographic accounts of how the real-world coupling of humans and technologies actually works. The call may remind us of the seminal work of the cognitive ethnographer Edwin Hutchins, who developed a related approach to extended mind theory called “distributed cognition” in a detailed ethnographic report on the way a crew controls a large navy ship by distributing control over various people and objects (Hutchins, 1995).

A cognitive-anthropological study of the real-world coupling of humans and technology faces a methodological and philosophical issue that phenomenologically informed ethnography is well aware of. Studying the use of technology means that we are concerned with the practical skills of coping in the world across various types of human activity, and articulating practical skills, especially at the expert level, proves extremely difficult: because expertise “relies on an immense reservoir of practical skill memory, embodied somehow in the fibres and in the sedimented ability to sequence technical gestures appropriately, verbal descriptions of it (by either actors or observers) will be inadequate” (Sutton, 2008). For example, imagine a situation in which we have to explain to visitors from a different planet how humans walk. At a superficial level, a verbal description of walking will suffice. But once we are compelled to provide a detailed account of walking, at some level the best we can do is show how we do it. This invokes the distinction made by Hubert Dreyfus (1979, 1992, 2009) who, arguing against the older, symbolic approach to artificial intelligence research, divided knowledge into declarative “knowing that” and practical “knowing how”. It also recalls Dreyfus’ mentor Heidegger, who distinguished between present-at-hand and ready-to-hand modes of coping with the world (Heidegger & Macquarrie, Robinson, 2006). As we become proficient in any activity involving tools, our perception of the activity shifts from deliberate coping, in which we are highly aware of the tool (the present-at-hand mode), to proficient coping, in which the tool becomes transparent to us and ceases to occupy our awareness; our attention focuses instead on whatever goal we want to achieve, and the tool becomes an extended part of ourselves. To research the coupling of technology and humans at the expert level, we have to immerse ourselves in the activity and, through apprenticeship, become experts ourselves (Sutton, 2008).

The science of the interface indeed seems like a massively interdisciplinary subject that can be studied from at least four viewpoints. The first resembles Sutton’s project of historical cognitive science, in which McLuhan was already participating even before the idea of such a field existed; it consists primarily of analysing and explaining how past technologies co-constituted our cognition throughout the ages and in turn led to changes in society and culture. The second asks how and why technology works the way it does and how it interacts with us from the engineering and design point of view; this viewpoint is routinely discussed and addressed in the human-computer interaction community. The third asks what stable psychological and cognitive attributes, if any, we bring to the interaction with technology. The fourth asks how the interaction with technology works from the experiential point of view: how does the interaction with technology reveal itself to us, and what are its transformative effects on our cognition as well as our personal identity? Here, the (post)phenomenological analysis of human-technology relations and its concept of technological mediation, which explicitly treats the co-constitution of humans and technology and dispenses with the subject-object dichotomy, will be helpful. Furthermore, phenomenologically oriented ethnography tells us that the researcher must herself become an expert in a given activity to attain a rich understanding of context-dependent interaction with tools.

Conclusion

McLuhan’s life project of mapping out the influence of technology on our cognition, which in turn shapes our culture and society, is increasingly not only vindicated but also developed further by extended mind research, where McLuhan is not routinely cited, if at all. McLuhan set out to understand the deep effects of technology on our lives and to connect historical changes with the introduction of technologies into our culture. McLuhan was also the first media theorist to warn us that if we want to understand the workings of media, we should not be blinded by their content and should instead turn our attention to the formal and technical aspects of how media enable human-technology interaction.

Extended mind theory in its second-wave incarnation goes beyond the unreasonably strict functionalist isomorphism of the parity principle and the call to flatten out any distinction between the biological and non-biological components of extended cognitive systems. Instead, second-wave EM acknowledges the deeper, meaningful differences between the components and makes their incompatibility, and the possibility of their mutual interaction, the main issue for extended mind research, embodied in the proposed science of the interface.

Through this turn to interaction and the interface, extended mind research could benefit greatly from closer cooperation with already established fields such as media theory, human-computer interaction and (post)phenomenology, which treat mediation effects, the interface and human-technology relationships as their main research problems.


References

Adams, F. & Aizawa, K. (2001). The bounds of cognition. Philosophical Psychology, 14(1), 43-64. https://doi.org/10.1080/09515080120033571

Adams, F. & Aizawa, K. (2007). The bounds of cognition. Oxford: Blackwell.

Bratton, B. (2015). The stack: on software and sovereignty. Cambridge, Massachusetts: The MIT press.

Brooks, R. (1991). Intelligence without representation [Online]. Artificial Intelligence, 47(1-3), 139-159. https://doi.org/10.1016/0004-3702(91)90053-M

Clark, A. (2005). Intrinsic content, active memory and the extended mind [Online]. Analysis, 65(1), 1-11. https://doi.org/10.1093/analys/65.1.1

Clark, A. & Chalmers, D. (1998). The Extended Mind [Online]. Analysis, 58(1), 7-19. https://doi.org/10.1093/analys/58.1.7

Clark, A. (1997). Being there: putting brain, body, and world together again. Cambridge, Mass.: MIT Press.

Clark, A. (1998). Author’s response [Online]. Metascience, 7(1), 95-104. https://doi.org/10.1007/BF02913278

Clark, A. (2001). Natural-Born Cyborgs?. In Cognitive Technology: Instruments of Mind (pp. 17-24). Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/3-540-44617-6_2

Clark, A. (2001). Mindware: an introduction to the philosophy of cognitive science. New York: Oxford University Press.

Clark, A. (2008). Supersizing the mind: embodiment, action, and cognitive extension. Oxford: Oxford University Press.

Clark, A. (2010). Memento’s Revenge: The Extended Mind, Extended [Online]. In R. Menary (Ed.), The Extended Mind (pp. 43-66). The MIT Press. https://doi.org/10.7551/mitpress/9780262014038.003.0003

Deuze, M. (2015). Media life: Život v médiích. Praha: Univerzita Karlova v Praze, nakladatelství Karolinum.

Dreyfus, H. (1992). What computers still can’t do: a critique of artificial reason. Cambridge, Mass: MIT Press.

Dreyfus, H. (1979). What computers can’t do: the limits of artificial intelligence. New York: Harper Colophon Books.

Dreyfus, H. (2009). How Representational Cognitivism Failed and is being replaced by Body/World Coupling. In After cognitivism a reassessment of cognitive science and philosophy (pp. 39-73). Dordrecht: Springer.

Eede, Y. (2012). Amor technologiae: Marshall McLuhan as philosopher of technology : toward a philosophy of human-media relationships. Brussels: VUBPRESS.

Ferenc, J. (2018). Postkognitivistické HCI: Vidět interface jako sociotechnický vztah (Diplomová práce). Praha.

Haugeland, J. (1998). Mind embodied and embedded. In Having thought: essays in the metaphysics of mind (pp. 207-237). Cambridge, Mass.: Harvard University Press.

Heidegger, M. (2006). Being and time (J. Macquarrie & E. Robinson, Trans.). Oxford: Blackwell.

Hutchins, E. (1995). Cognition in the wild. Cambridge, Mass.: MIT.

Marcus, G. (2008). What if HM had a Blackberry?: Coping with amnesia, using modern technology [Online]. Psychology Today. Retrieved from https://www.psychologytoday.com/us/blog/kluge/200812/what-if-hm-had-blackberry

McLuhan, M. (1978). The Brain and the Media: The “Western” Hemisphere [Online]. Journal of Communication, 28(4), 54-60. https://doi.org/10.1111/j.1460-2466.1978.tb01656.x

McLuhan, M. (1994). Understanding media: the extensions of man. Cambridge, Mass.: MIT Press.

Petříček, M. (2009). Myšlení obrazem: průvodce současným filosofickým myšlením pro středně nepokročilé. Praha: Herrmann & synové.

Suchman, L. (1987). Plans and situated actions: the problem of human-machine communication. New York: Cambridge University Press.

Sutton, J. (2008). Material Agency, Skills and History: Distributed Cognition and the Archaeology of Memory [Online]. In C. Knappett & L. Malafouris (Eds.), Material Agency (pp. 37-55). Boston, MA: Springer US. https://doi.org/10.1007/978-0-387-74711-8_3

Sutton, J. (2010). Exograms and Interdisciplinarity: History, the Extended Mind, and the Civilizing Process [Online]. In R. Menary (Ed.), The Extended Mind (pp. 189-225). The MIT Press. https://doi.org/10.7551/mitpress/9780262014038.003.0009

 

[1] The most obvious example is the internet: a vast network of interconnected computers, smart devices and other machines.

[2] According to Eede (2012, pp. 167-168), Marshall McLuhan uses the words “interface” and “translation”, and even “medium” and “technology”, interchangeably.