FILOSOFIJA. SOCIOLOGIJA. 2021. T. 32. Nr. 1, p. 84–92 © Lietuvos mokslų akademija, 2021
This article elucidates the meaning of intelligence in machines. It employs hermeneutic-phenomenology and cybernetics. Its point of departure is Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans (2019). It (1) reviews the different types of machine intelligence (MI) Mitchell describes and the understanding of intelligence she suggests is common among MI researchers and developers, (2) hermeneutic-phenomenologically exhibits the intelligence of Da-sein (t/here-being, human being as such), (3) discerns the intelligence of machines cybernetically in contrast to the intelligence of Da-sein rendered hermeneutic-phenomenologically, and (4) assesses the MI industry’s goal of producing ‘general human-level’ intelligence in machines.
Keywords: human intelligence, artificial intelligence, machine intelligence, hermeneutic-phenomenology, cybernetics
This article elucidates the meaning of intelligence in machines. It employs hermeneutic-phenomenology and cybernetics. Its point of departure is Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans (2019). Mitchell’s book attempts to render an understanding of ‘the true state of affairs in artificial [machine] intelligence’ (2019: 14), is an able primer for the study of machine intelligence (MI), and provides good kindling for phenomenological analysis. Mitchell says that the MI industry’s understanding of intelligence ‘remains ill-defined’ (2019: 14, 19). Other experts, as Mitchell also notes, convey similar views. MI industry leaders Legg and Hutter write, ‘a fundamental problem in artificial intelligence [AI] is that nobody really knows what intelligence is’ (2007: 391). MI researchers Lehman, Clune and Risi assert, ‘because we don’t deeply understand intelligence or know how to produce general AI, rather than cutting off any avenues of exploration, to truly make progress we should embrace AI’s “anarchy of methods”’ (2014: 61; Mitchell 2019: 21). And MI researcher Marcus says that despite the industry’s ‘short term’ accomplishments, ‘there has been almost no progress’ creating ‘general human-level’ MI, and ‘it’s time for genuinely new ideas’ (Press 2016; Mitchell 2019: 13). Perhaps the provision of a more exact understanding of the intelligence of human being and machines is one of them.
The mathematician John McCarthy coined the term ‘artificial intelligence’ in 1956 to distinguish the project from cybernetics (general system theory), but preferred the more telling ‘“genuine” intelligence’ (Mitchell 2019: 18–19). This article, like Legg and Hutter’s ‘Universal Intelligence: A Definition of Machine Intelligence’ (2007), uses the more accurate ‘machine intelligence’ rather than the more common ‘artificial intelligence’ to throw the technology into relief and help clarify the matter. Legg is a cofounder and the Chief Scientist of DeepMind Technologies, a subsidiary of Alphabet Inc. (Google). The entity is a leader in MI development. Its ‘original mission statement’ was to solve the problem of intelligence and use the solution ‘to solve everything else’ (Mitchell 2019: 4). Its current mission is ‘to research and build safe AI systems that learn how to solve problems and advance scientific discovery for all’ (2019b).
According to Mitchell, MI has evolved broadly along two tracks that aim at replicating distinct features of human intelligence. The first track is ‘symbolic’ MI. Its basis is deductive: the system reasons from explicitly coded rules to conclusions. The method, which is also called ‘expert systems’ because it encompasses programs based on rules drawn from specialists, is associated with the design of functions that algorithmically represent human signs and symbols. ‘Advocates’ of this approach contend that effective MI is best accomplished by ‘symbol-processing’ programs rather than algorithms that replicate processes within the human brain (Mitchell 2019: 21–24; 2019a). Tax preparation software instances the method.
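To make the first track concrete, the following is a minimal sketch of a rule-based ‘expert system’ of the kind described above. The domain (a toy tax-filing triage), the facts and every rule are illustrative assumptions, not examples drawn from Mitchell’s book; the point is only the symbol-processing, forward-chaining mechanism.

```python
# A minimal sketch of a symbolic, rule-based 'expert system'.
# The domain and every rule below are illustrative assumptions.

def applicable(rule, facts):
    """A rule fires when all of its conditions are present in the fact base."""
    return all(cond in facts for cond in rule["if"])

def forward_chain(rules, facts):
    """Repeatedly apply the rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if applicable(rule, facts) and rule["then"] not in facts:
                facts.add(rule["then"])
                changed = True
    return facts

RULES = [
    {"if": {"has_dependents", "files_jointly"}, "then": "eligible_for_credit"},
    {"if": {"eligible_for_credit", "income_below_threshold"}, "then": "apply_child_credit"},
]

if __name__ == "__main__":
    derived = forward_chain(RULES, {"has_dependents", "files_jointly", "income_below_threshold"})
    print(derived)  # includes 'apply_child_credit'
```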
The second track is ‘subsymbolic’ MI. Its basis is inductive: the system generalizes from data. The method is modelled after neural networks. A subsymbolic MI program consists of a ‘stack of equations’ comprising layers of ‘weighted inputs’ and ‘threshold values’ that classify information against a ‘labeled’ data set (Mitchell 2019: 24). Deviations between the scores and the standard are fed back into the system via a ‘general learning algorithm’ or ‘supervision signal’ to refine its functions and reduce errors. The method is called ‘back-propagation’. A system with more than one layer of functions is called ‘multilayered’ or ‘deep’. Subsymbolic MI essentially is a very fast, very robust trial-and-error system and the basis of ‘machine learning’, a paradigm that includes ‘supervised learning’, ‘reinforcement learning’ and ‘unsupervised learning’ (Mitchell 2019: 24–42). Examples of the technology include self-driving cars and facial recognition systems. The reliance of supervised and reinforcement learning on large, labelled data sets to optimize their functions is a basic challenge impeding the technologies’ development, and the majority of MI experts agree that these methods are not a ‘viable path to general-purpose AI’ (Mitchell 2019: 101).
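To illustrate the second track, here is a minimal sketch, assuming a toy labelled data set (XOR), of the ‘stack of equations’ Mitchell describes: a single hidden layer of weighted inputs is scored against the labels, and the deviations are fed back through the network (back-propagation) to refine the weights. The layer sizes, learning rate and number of steps are illustrative choices, not values taken from her book.

```python
# A minimal sketch of subsymbolic MI: a small network of weighted inputs is
# scored against a labelled data set and the errors are fed back to adjust
# the weights. The task (XOR) and all hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Labelled data set: inputs X and target labels y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of weighted inputs (the 'stack of equations').
W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: compute the network's scores.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Deviation between the scores and the labels (the 'supervision signal').
    err = out - y

    # Backward pass: propagate the error to refine each weight.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

h = sigmoid(X @ W1 + b1)
print(np.round(sigmoid(h @ W2 + b2), 2))  # approaches the labels [0, 1, 1, 0]
```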
Unsupervised learning, which is included in the second track, is based primarily on principal component and cluster analysis. It applies these statistical techniques to identify redundancies in data and classify them according to the density of their similarities. It is at the forefront of MI development, is commonly associated with ‘deep learning’, a subset of machine learning that uses more than one layer of processing, and does not require labelled data sets to optimize its functions. The technology is commonly used in visual recognition and anomaly detection systems.
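The following sketch illustrates the unsupervised pipeline just described, principal component analysis followed by cluster analysis, using scikit-learn on synthetic, unlabelled data; the data, the number of components and the number of clusters are illustrative assumptions.

```python
# A minimal sketch of unsupervised learning: principal component analysis
# compresses the redundancies in unlabelled data, and cluster analysis groups
# the points by the density of their similarities. The synthetic data and
# all parameter choices are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Unlabelled data: two separated blobs in a 10-dimensional space.
blob_a = rng.normal(loc=0.0, scale=1.0, size=(100, 10))
blob_b = rng.normal(loc=5.0, scale=1.0, size=(100, 10))
data = np.vstack([blob_a, blob_b])

# Identify redundancies: project onto the two strongest principal components.
reduced = PCA(n_components=2).fit_transform(data)

# Classify by similarity density: cluster the reduced points.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)

print(labels[:5], labels[-5:])  # the two blobs fall into different clusters
```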
The Defense Advanced Research Projects Agency (DARPA) uses a corresponding taxonomy to distinguish the different types of MI. It organizes the technology into three ‘waves’. First-wave MI corresponds to symbolic MI. Its programming is based on rules fashioned from ‘the specialized knowledge of experts’. DARPA correlates the weaknesses of this system to its limited ‘applicability’ and the prohibitive time and cost associated with handcrafting functions. Second-wave MI corresponds to supervised learning and is also called ‘statistical learning’. The approach ‘applies statistical and probabilistic methods to large data sets to create generalized representations that can be applied to future samples’. The limitations that DARPA associates with the technology are the same ones associated with machine learning generally: the ‘task of collecting, labeling, and vetting data’ to optimize neural networks is time consuming and cost prohibitive. Third-wave MI corresponds to unsupervised learning. The system, which DARPA also calls ‘contextual learning’, classifies data ‘through generative contextual and explanatory models’ (2019a).
Mitchell says that the ultimate goal of MI developers is to achieve ‘general human-level AI’, or ‘artificial general intelligence’ (AGI), which she also calls ‘strong, human-level, general, or full-blown AI’ and contrasts to its narrow or weak versions, as instanced by self-driving cars and expert systems (2019: 46). AGI, as Mitchell sees it, means machines that understand ‘the situations they encounter in essentially the same way humans do’ so they may successfully ‘interact with humans in the world’. Mitchell voiced this view in response to a student who asked if MI needed ‘to have a humanlike understanding’ and why the industry could not ‘accept AI with a different kind of understanding’. She replied that she did not ‘have any idea what a “different kind of understanding” would mean’ (2019: 298). Mitchell associates intelligence with common sense (2019: 248), which she asserts is ‘governed’ by abstraction, analogy and the subconscious (2019: 249); with language (2019: 95); with understanding, which, along with meaning, she calls an ‘ill-defined’ term and a semantic placeholder ‘because we don’t yet have the correct language or theory to talk about what’s actually going on in the brain’ (2019: 245); and with consciousness. Regarding consciousness, she says:
‘I planned to entirely sidestep the question of consciousness, because it is so fraught scientifically. But what the heck–I’ll indulge in some speculation. If our understanding of concepts and situations is a matter of performing simulations using mental models, perhaps the phenomenon of consciousness–and our entire conception of self–comes from our ability to construct and simulate models of our own mental models. Not only can I mentally simulate the act of, say, crossing the street while on the phone, I can mentally simulate myself having this thought and can predict what I might think next’ (2019: 241–242).
Mitchell’s understanding of intelligence is endemic to the MI industry. It is ambiguous, fragmented, incomplete and burdened with dualistic presumptions about human reality that thwart fuller expositions of the phenomenon. Nilsson’s rendition of intelligence also instances these deficiencies. He calls MI the ‘activity devoted to making machines intelligent’ and defines intelligence as the ‘quality that enables an entity to function appropriately and with foresight in its environment’ (2010: 12). This definition is not incorrect, but its fantastic generality limits its usefulness. His description of intelligence as the ability ‘to reason, achieve goals, understand and generate language, perceive and respond to sensory inputs, prove mathematical theorems, play challenging games, synthesize and summarize information, create art and music, and even write histories’ (2010: xiii) provides more specificity and identifies characteristics of intelligence, but does not discern what the phenomenon essentially is. The definition of intelligence given by a Stanford University MI research report, which Mitchell cites and calls ‘a bit circular’ (2019: 20), contains the same shortcomings identified in Nilsson’s understanding. It says intelligence ‘remains a complex phenomenon whose varied aspects have attracted the attention of different fields of study, including psychology, economics, neuroscience, biology, engineering, statistics and linguistics’, and views MI ‘primarily as a branch of computer science that studies the properties of intelligence by synthesizing intelligence’ (2016: 12–13).
The different definitions of intelligence that Legg and Hutter examine in their article are informative and express basic aspects of intelligence but also fall short of providing a rigorous understanding of the phenomenon. These views include, as Legg and Hutter cite them: ‘good sense, practical sense, initiative, the faculty of adapting oneself to circumstances’; ‘the capacity to learn or to profit by experience’; ‘success learning or the ability to learn to adjust oneself to one’s environment’; ‘the ability of an organism to solve new problems’; and ‘a global concept that involves an individual’s ability to act purposefully, think rationally, and deal effectively with the environment’ (2007: 401). Legg and Hutter’s definition of intelligence moves closer to the hermeneutic-phenomenological understanding of the phenomenon but is confined by its dualistic starting point. It fails to account for the meaning of ‘is’ and the principal relation of human being to being. ‘The essence of intelligence’, they assert, ‘measures an agent’s ability to achieve goals in a wide range of environments’ (2007: 402). MI industry leader Banavar expresses a similar view when he associates intelligence with ‘language’ and the ability to ‘solve the real-world messy problems’ (2017).
Hermeneutic-phenomenology, which encompasses both its transcendental-horizonal expression, as witnessed in Being and Time (Heidegger 1962), and being-historical perspective, as shown in Contributions (Heidegger 1999) and Mindfulness (Heidegger 2006), is the open endeavour to exhibit human being as such, or factical Da-sein (t/here-being), the event and way of human being in concrete living. Hermeneutic-phenomenology is a course of thinking that labours to let Da-sein and phenomena originary to and continuous with it, including the World and being, disclose-show their ownmost (Wesen) or essential meaning. Da-sein is not a subject. It is not a ‘self’. It is transcendence, a lighting-process [Lichtung] that goes beyond beings to unfold as their being (and meaning). It is a continuum wherein distinct Da-seins are conjoined into a single World through the ‘is-ness’ of their togetherness. Terms commonly used to discern this happening include ‘φαινόμενον’ (phenomenon), ‘beings in the whole’ (das Seiende im Ganzen), and ‘the clearing of the self-concealing-self-withdrawing’.
Da-sein is the factical disclosure of beings, the comprehension of being, and the potentiality to render the meaning of phenomena through words. Hermeneutic-phenomenology traces this disclosing-comprehending-saying power to an originary element (dimension, flux) of Da-sein (transcendence) that it calls ‘λόγος’ (logos). The meaning of Da-sein is the t/here of its ‘to be’, and λόγος is its intrinsic potentiality to endure the meaning of ‘is’ (being), interpretively gather beings into a whole, and comprehend their being. This dynamism is bound together with language. Language is ownmost to λόγος. It, along with the comprehension of being, frees the meaning of phenomena and the World to manifest within/through/as Da-sein (Richardson 1967: 261–262). Language and the comprehension of being, the understanding of ‘is’, are intrinsic to each other. Even as language is enabled and shaped from within by the meaning of ‘is’, it frees Da-sein to render phenomena under the light of comprehension.
The use of ancient and pre-Socratic Greek is not incidental to hermeneutic-phenomenology. The tactic brings to bear tools begged by the problematic of human being, by the open-ended striving to disclose-show-say the meaning of phenomena and the World. It speaks to the lexical limits of contemporary language, including its inherent propensity to rely on subject–object dualisms, and the need to put things into proper perspective to allow thinking to be commandeered by Da-sein and being. The Greeks did not think human phenomena or reality dualistically. They did not individuate the World as a discrete (local) thing comprising discrete subjects and objects or identify subjectivity as an encapsulated ‘self’ disconnected from things. They endured (thought, experienced, spoke) the originary, pre-philosophical ‘togetherness’ of subjectivity and being and the unity of the individual and the world (Heidegger 1977). They developed a thesaurus to elucidate different dimensions of the human ‘to be’, including consciousness (intentionality), which transcendental-phenomenology renders as a correlate between νόησις (nóēsis) (the experience of phenomena) and νόημα (nóēma) (the phenomena experienced). The thinking of the Greeks is distinguished by its attendance to the meaning of ‘is’, the primeval phenomenon commonly deserted by contemporary culture and thought. Hermeneutic-phenomenology readily uses original Greek because it is commensurate with pre-philosophical thinking and language. It, like the thinking of the Greeks, strives to let phenomena reveal themselves as they are from themselves within transcendence.
Intelligence, thought hermeneutic-phenomenologically, is the intrinsic power of Da-sein to heed and bring forth to completion a situation’s equifinality (entelechy), or ἐντελέχεια (entelékheia) (ἐν-τέλει-ἔχει (en-télē-ekhē)) (Richardson 1967: 265–266, 310–311). It is the inherent potentiality of λόγος to unearth and fulfill a possibility sheltered within the clearing of the self-concealing–self-withdrawing that begs to be brought forth to its fulfillment in transcendence. It is the power to attune to the ‘should’ or ‘ought’ harboured within transcendence (Da-sein) and render it to its culmination. The τέλος (télos) of ἐν-τέλει-ἔχει does not signify a closure or conclusion. It signifies a ‘point of repose’, the ‘culmination of movement’, or a ‘work’ (ἔργον), hence, something always underway (Richardson 1967: 311). Its preceding and succeeding terms locate the meaning in/within (ἐν) one’s situation (ἔχει) (Richardson 1967: 266). Intelligence is the immanent dynamism of Da-sein to respond to an appeal emanating from a prospect hidden t/here, within the meaning of its ‘to be’, which is its situation (χώρος, khóros), to render it to its concrete achievement. It is ποίησις (poiēsis) or τέχνη (tékhnē), where the first phenomenon signifies the potentiality to heed and bring forth to completion a possibility that is hidden t/here and whose human relevance emanates principally from itself (e.g. a work of art), and the second signifies the power to heed and bring forth to completion a possibility that is obscure or withdrawing more than it is hidden, which is to say that it is more or less ‘at hand’ (e.g. a bridge, a crop, MI), and whose realization is also provoked by its meaning-context. Τέχνη, as these remarks suggest, ‘belongs’ to ποίησις (Heidegger 1977: 13), and ἐντελέχεια is their ‘end term’ (Trujillo 2018: 136). It noematically ‘leads human becoming’ (Ricoeur 1967: 159; Husserl 1970: 15). Τέχνη differs from ποίησις insofar as its ‘bringing-forth’ is not only incited by its equifinality, but also by the situation wherein the equifinality is embedded (Heidegger 1977: 7–8, 10–11). Whereas the call of ποίησις emanates as a quiet voice, a whisper, a hidden suggestion, the call of τέχνη emanates as a ‘challenging-forth’ and a ‘setting-upon’ that compels a ‘putting-in-order’ (Heidegger 1977: 15, 17, 27). Michelangelo’s Pietà is overwhelmingly a product of ποίησις; Einstein’s 1905 theory of special relativity less so, insofar as it was also evoked by inconsistencies in the Newtonian conception of reality. Efforts to produce AGI are almost entirely driven by τέχνη. The absence of exact definitions of intelligence and MI within the industry has compelled engineers to embark on a trajectory ‘guided by a rough sense of direction and an imperative to “get on with it”’ (2016: 12; Mitchell 2019: 20).
Ποίησις and τέχνη are contingent on and continuous with thinking. They ensue from it. Thinking is not equal to logic, calculation, following recipes, or iterating the works, words, or acts of others, although it can include these moments (e.g. ‘standing on the shoulders of giants’) insofar as they are part of an authentic struggle to discover, build and unearth freely of machination and reification. It strives to suspend and liberate itself from the μετὰ τὰ (metà tà) of metaphysics, including dualistic pre-renditions of phenomena, and let φύσις (phúsis, physis), or reality as such, disclose-show-say its ownmost significance. Thinking is a potentiality of Da-sein distinguished by its incipience, resoluteness, movedness, openness, freedom and solicitude. It is Da-sein steadfastly yielding λόγος to the truth (ἀλήθεια), being, or ownmost meaning of phenomena or to being as such (be-ing, enowning) (Heidegger 1999: 24). It is Da-sein releasing λόγος to be seized by the ‘matter’ to be thought. The heedfulness, attentiveness and responsiveness of ποίησις and τέχνη originate from and belong to thinking. They are commensurate with principal moments of thinking discerned as steadfastly attuning and listening to, caring for and inabiding (Inständigkeit) (‘dwelling poetically’, finding ‘abode’ in) phenomena, including the phenomenon of being (Kovacs 2015: 10, 12–13, 16, 123; Heidegger 2006: 99–100). They are consistent with t/here-being setting itself ‘free from the gravitational immanence of subjectivity’, from the self-absorbing ‘self’, and letting ‘thought’ originate out of and be based on its ‘matter’ (Kovacs 2008: 45).
The hermeneutic-phenomenology of intelligence does not suggest that machines cannot contain intelligence. It implies that, in accordance with the hermeneutic-phenomenological axiom, ‘machines do not exist’ (Trujillo 2018: 137), they cannot embody intelligence as the phenomenon occurs in Da-sein. The principle, ‘machines do not exist’, moreover, does not say that machines are not t/here. Existence, thought hermeneutic-phenomenologically, denotes transcendence. It is ἔκστασις (ékstasis), or the discernment of human being as the entity who stands outside of itself, who is the being-of-the-t/here, its situation (Richardson 1967: 536). The basic constituents of MI are algorithms, computations and data. The technology does not transcend beings to come to pass as their being or undergo them as beings in the whole. It is devoid of ‘to be’, bereft of the meaning of ‘is’, and, hence, barren of the potentialities of thinking, ποίησης, and τέχνη. It does not exist and is unable to endure the possibilities sheltered in transcendence that summon to be brought forth to fulfillment.
The intelligence of machines is confined to the intelligence that cybernetics distinguishes in all open systems, or systems, animate, inanimate, or otherwise, that exchange information with their environments (Bateson 2000: 410; Bertalanffy 1951: 308–309). Intelligence, thought cybernetically (laterally, informationally), is the transformation of a system toward the completion of an equifinality, one of the four basic constituents of all open systems; the others are redundancies, variations and parameters (Trujillo 2018). The cybernetic notion of equifinality shares isomorphic correspondences with the hermeneutic-phenomenological rendition of ἐντελέχεια, but is foundationally distinct from it. The perspective individuates equifinality as a variable within an informational or evolutionary process. It does not associate it with transcendence. Bertalanffy discerns equifinality, which he also calls ‘entelechy’, as the ‘final state’ an open system reaches ‘from different initial conditions and in different ways’ (1951: 309). Beer encapsulates this proposition in the assertion: ‘the purpose of a thing is what it does’ (2002: 217). Ashby reduces it similarly (1957: 1–3). Maturana and Varela render the basic processes of a system that transform it into its final state (i.e. equifinality) as ‘autopoiesis’, which they describe as a ‘manner of relation’ between a system, such as a neuron or cell, and its environment that ‘entails not picking or processing information, but specifying what counts as relevant’ to the continuity of the system (1987: 253). Their discernment of autopoiesis does not denote the hermeneutic-phenomenological understanding of ποίησις (or τέχνη), however. It describes an automatic or programmed process that is radically different from the ownmost of intelligence in Da-sein (Trujillo 2018). Autopoiesis contains a directionality and capacity to specify things relevant to a system’s state, but does not include transcendence, thinking, or a comprehension of being.
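Bertalanffy’s notion of equifinality, the ‘final state’ reached ‘from different initial conditions and in different ways’, can be illustrated with a minimal numerical sketch; the one-variable relaxation dynamics and all constants below are illustrative assumptions, not a model taken from the cited texts.

```python
# A minimal sketch of cybernetic equifinality: an open system that relaxes
# toward the same final state from different initial conditions. The simple
# exponential relaxation toward a set point and all constants are illustrative.
def relax(x0, set_point=10.0, rate=0.2, steps=100):
    """Step a one-variable open system toward its equifinal state."""
    x = x0
    for _ in range(steps):
        x += rate * (set_point - x)   # exchange with the environment
    return x

if __name__ == "__main__":
    for x0 in (-50.0, 0.0, 3.0, 42.0):
        print(f"start {x0:6.1f} -> final {relax(x0):.4f}")  # all approach 10.0
```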
Mitchell quotes the question posed by the mathematician-philosopher G. C. Rota asking ‘whether or when AI will ever crash the barrier of meaning’ (Mitchell 2019: 235; Rota 1986). She says that the notion, ‘barrier of meaning,’ conveys a central idea of her book, which is: ‘humans, in some deep and essential way, understand the situations they encounter, whereas no AI system yet possesses such understanding’. Mitchell goes on to explain that although ‘state-of-the-art AI systems have nearly equaled (and in some cases surpassed) humans on certain narrowly defined tasks, these systems all lack a grasp of the rich meanings humans bring to bear in perception, language, and reasoning’, and the ‘barrier of meaning between AI and human-level intelligence still stands today’ (2019: 235). The hermeneutic-phenomenology of intelligence exposes the basis of Mitchell’s assertion, but renders no evidence suggesting that the barrier she individuates will ever be traversed. MI is an autopoietic system engineered by humans for humans. The system is an artifact of Da-sein, but it is not artificial. It is the intelligence of open systems. It is an informational process comprising patterns, variations, rules and equifinalities. It exchanges information with its environment. It is not the way of intelligence in human being, however. MI does not exist. It is not its situation, does not come to pass as the being and meaning of phenomena, and cannot endure the equifinalities sheltered in transcendence.
The cybernetic rendition of intelligence should not be automatically discounted by developers of MI. Cybernetics is systems thinking, and everything, including human phenomena and MI, contains the basic elements of a system. The method is ‘inherently trans-disciplinary’ (Heylighen, Joslyn 2001: 155). It renders phenomena isomorphically and allows observations and theses to migrate more or less freely across empirical disciplines. The symmetry between the cybernetics and hermeneutic-phenomenology of intelligence intimated here and unearthed by other analyses (Trujillo 2017; Trujillo 2018) suggests that cybernetics may provide a consistent way to frame the development of MI hermeneutic-phenomenologically. It implies that the method has the capacity to coherently expose the MI industry to an understanding of human intelligence that could help it clarify its thinking about its matter.
Rendered cybernetically, the level of intelligence in a machine correlates to the depth and range of its coded equifinalities and its programmed capacity to complete them (Trujillo 2018). It corresponds to the sophistication, variability, responsiveness and reliability of the transformations governed by the system’s algorithms and the ability of the functions to process complex information, mitigate noise and complete their programmed objectives. These ends are coupled to the notions, ideas, projects and actions of the technology’s originators, to their efforts to bring to fulfillment the possibilities the technology harbours, to the ποίησις and τέχνη of the human persons designing MI. So is the technology’s evolution. Although hermeneutic-phenomenology yields nothing to suggest the MI industry will produce ‘general human-level’ MI, it does not contest the trajectory that Banavar foresees for MI’s development. MI will likely evolve along a course where engineers design ‘narrow intelligence’ systems ‘many times over systematically’, locate them on a ‘common ground’, and create ‘platforms’ that will allow them to ‘build more versions of narrow intelligence-based systems that can help people actually solve problems’ (2017).
Received 12 June 2020
Accepted 7 September 2020
1. 2016. One Hundred Year Study on Artificial Intelligence (AI100): Report of the 2015 Study Panel. Stanford: Stanford University.
2. 2017. Creating Human-Level AI: How and When? Future of Life Institute. Available at: https://youtu.be/V0aXMTpZTfc (accessed 06.06.2020).
3. 2019a. AI Next Campaign. Defense Advanced Research Projects Agency. Available at: https://www.darpa.mil/work-with-us/ai-next-campaign (accessed 02.06.2020).
4. 2019b. DeepMind. Available at: https://www.deepmind.com/about (accessed 01.01.2020).
5. Ashby, W. R. 1957. An Introduction to Cybernetics. London: Chapman & Hall Ltd.
6. Bateson, G. 2000. Steps to an Ecology of Mind. Chicago: University of Chicago Press.
7. Beer, S. 2002. ‘What is Cybernetics?’, Kybernetes 31: 209–219.
8. Bertalanffy, L. V. 1951. ‘General System Theory: A New Approach to Unity of Science (Symposium)’, Human Biology 23: 302–312.
9. Heidegger, M. 1962. Being and Time. New York: Harper & Row Publishers.
10. Heidegger, M. 1977. The Question Concerning Technology and Other Essays, trans. W. Lovitt. New York: Harper & Row Publishers.
11. Heidegger, M. 1999. Contributions to Philosophy (From Enowning), trans. P. Emad and K. Maly. Bloomington: Indiana University Press.
12. Heidegger, M. 2006. Mindfulness, trans. P. Emad and T. Kalary. London: Continuum International Publishing Group.
13. Heylighen, F.; Joslyn, C. 2001. ‘Cybernetics and Second Order Cybernetics’, in Encyclopedia of Physical Science and Technology, ed. R. Meyers. Cambridge: Academic Press.
14. Husserl, E. 1970. The Crisis of European Sciences and Transcendental Phenomenology, trans. D. Carr. Evanston: Northwestern University Press.
15. Kovacs, G. 2008. ‘Heidegger’s Directives in Mindfulness for Understanding the Being Historical Relationship of Machination and Art’, Heidegger Studies 24: 39–59.
16. Kovacs, G. 2015. Thinking and Being in Heidegger’s Beiträge zur Philosophie (Vom Ereignis). Bucharest: Zeta Books.
17. Legg, S.; Hutter, M. 2007. ‘Universal Intelligence: A Definition of Machine Intelligence’, Minds & Machines 17: 391–444.
18. Lehman, J.; Clune, J.; Risi, S. 2014. ‘An Anarchy of Methods: Current Trends in How Intelligence is Abstracted in AI’, IEEE Intelligent Systems 29: 56–62.
19. Maturana, H. R.; Varela, F. J. 1987. The Tree of Knowledge. Boston: Shambhala Publications, Inc.
20. Mitchell, M. 2019. Artificial Intelligence: A Guide for Thinking Humans. New York: Farrar, Straus and Giroux.
21. Nilsson, N. J. 2010. The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge: Cambridge University Press.
22. Press, G. 2016. ‘12 Observations About Artificial Intelligence From the O’Reilly AI Conference’, Forbes. Available at: https://www.forbes.com/sites/gilpress/2016/10/31/12-observations-about-artificial-intelligence-from-the-oreilly-ai-conference/#6ef577222ea2
23. Richardson, W. J. 1967. Heidegger: Through Phenomenology to Thought. The Hague: Martinus Nijhoff.
24. Ricoeur, P. 1967. Husserl: An Analysis of His Phenomenology. Evanston: Northwestern University Press.
25. Rota, G. C. 1986. ‘In Memoriam of Stan Ulam: The Barrier of Meaning’, Physica D: Nonlinear Phenomena 22: 1–3.
26. Trujillo, J. 2017. ‘The Thinking of Tesla/SpaceX CEO Elon Musk’, Existentia 27: 231–261.
27. Trujillo, J. 2018. ‘Thinking Machine (Artificial) Intelligence’, Existentia 28: 133–157.
Summary
The article examines the understanding of machine intelligence, drawing on hermeneutic-phenomenology and cybernetics. Its point of departure is Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans (2019). The article (1) reviews the different types of machine intelligence (MI) Mitchell describes and the understanding of intelligence that, on her account, is common among MI researchers and developers; (2) hermeneutic-phenomenologically unfolds the intelligence of Da-sein (t/here-being, human being as such); (3) discerns the intelligence of machines cybernetically, in contrast to the intelligence of Da-sein rendered hermeneutic-phenomenologically; and (4) assesses the MI industry’s goal of producing ‘general human-level’ machine intelligence.
Keywords: human intelligence, artificial intelligence, machine intelligence, hermeneutic-phenomenology, cybernetics