Artificial general intelligence

Extracted and modified from Wikipedia under CC-BY-SA 3.0

Artificial general intelligence (AGI) is the hypothetical [1] intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, [2] [3] [4] full AI, [5] or general intelligent action. [6] Some academic sources reserve the term "strong AI" for machines that can experience consciousness. [7] Today's AI is speculated to be many years, if not decades, away from AGI. [8] [9]

Some authorities emphasize a distinction between strong AI and applied AI, [10] also called narrow AI [3] or weak AI. [11] In contrast to strong AI, weak AI is not intended to exhibit the full range of human cognitive abilities. Rather, weak AI is limited to the use of software to study or accomplish specific problem-solving or reasoning tasks.

As of 2017, over forty organizations are researching AGI. [12]

Various criteria for intelligence have been proposed (most famously the Turing test) but to date, there is no definition that satisfies everyone. [13] However, there is wide agreement among artificial intelligence researchers that intelligence is required to do the following: [14] reason, use strategy, solve puzzles, and make judgments under uncertainty; represent knowledge, including commonsense knowledge; plan; learn; communicate in natural language; and integrate all these skills towards common goals.

Other important capabilities include the ability to sense (e.g. see) and the ability to act (e.g. move and manipulate objects) in the world where intelligent behaviour is to be observed. [15] This would include an ability to detect and respond to hazards. [16] Many interdisciplinary approaches to intelligence (e.g. cognitive science, computational intelligence and decision making) tend to emphasise the need to consider additional traits such as imagination (taken as the ability to form mental images and concepts that were not programmed in) [17] and autonomy. [18] Computer-based systems that exhibit many of these capabilities do exist (e.g. see computational creativity, automated reasoning, decision support system, robot, evolutionary computation, intelligent agent), but not yet at human levels.

The following tests to confirm human-level AGI have been considered: [19] [20]

The Turing Test (Turing): A machine and a human both converse, unseen, with a second human, who must evaluate which of the two is the machine.

The Coffee Test (Wozniak): A machine is required to enter an average American home and figure out how to make coffee.

The Robot College Student Test (Goertzel): A machine enrolls in a university, taking and passing the same classes that humans would, and obtaining a degree.

The Employment Test (Nilsson): A machine performs an economically important job at least as well as humans in the same job.

Chinese researchers Feng Liu, Yong Shi and Ying Liu conducted intelligence tests in the summer of 2017 with publicly available and freely accessible weak AI systems such as Google AI, Apple's Siri, and others. At the maximum, these AIs reached an IQ value of about 47, which corresponds approximately to a six-year-old child in first grade; an average adult scores about 100. In 2014, similar tests had been carried out in which the AI reached a maximum value of 27. [21] [22]

The most difficult problems for computers are informally known as "AI-complete" or "AI-hard", implying that solving them is equivalent to the general aptitude of human intelligence, or strong AI, beyond the capabilities of a purpose-specific algorithm. [23]

AI-complete problems are hypothesised to include general computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem. [24]

AI-complete problems cannot be solved with current computer technology alone, and also require human computation. This property could be useful, for example, to test for the presence of humans, as CAPTCHAs aim to do; and for computer security to repel brute-force attacks. [25] [26]

Modern AI research began in the mid-1950s. [27] The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do." [28] Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant [29] on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time; Crevier quotes him as having said on the subject in 1967, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved," [30] although Minsky states that he was misquoted. [citation needed]

However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". [31] As the 1980s began, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". [32] In response to this and the success of expert systems, both industry and government pumped money back into the field. [33] However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. [34] For the second time in 20 years, AI researchers who had predicted the imminent achievement of AGI had been shown to be fundamentally mistaken. By the 1990s, AI researchers had gained a reputation for making vain promises. They became reluctant to make predictions at all [35] and avoided any mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer [s]." [36]

In the 1990s and early 21st century, mainstream AI achieved far greater commercial success and academic respectability by focusing on specific sub-problems where it could produce verifiable results and commercial applications, such as artificial neural networks, computer vision and data mining. [37] These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is very heavily funded in both academia and industry. Development in this field is currently considered an emerging trend, with a mature stage expected to arrive more than ten years out. [38]

Most mainstream AI researchers hope that strong AI can be developed by combining the programs that solve various sub-problems using an integrated agent architecture, cognitive architecture or subsumption architecture. Hans Moravec wrote in 1988:

"I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts." [39]

However, even this fundamental philosophy has been disputed; for example, Stevan Harnad of Princeton concluded his 1990 paper on the Symbol Grounding Hypothesis by stating:

"The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer)." [40]

The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud [41] in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. [42] The research objective is much older, for example Doug Lenat's Cyc project (that began in 1984), and Allen Newell's Soar project are regarded as within the scope of AGI. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel [43] as "producing publications and preliminary results". The first summer school in AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. MIT presented a course in AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers. However, as yet, most AI researchers have devoted little attention to AGI, with some claiming that intelligence is too complex to be completely replicated in the near term. However, a small number of computer scientists are active in AGI research, and many of this group are contributing to a series of AGI conferences. The research is extremely diverse and often pioneering in nature. In the introduction to his book, [44] Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century, but the consensus in the AGI research community seems to be that the timeline discussed by Ray Kurzweil in The Singularity is Near [45] (i.e. between 2015 and 2045) is plausible. [46]

However, most mainstream AI researchers doubt that progress will be this rapid. [citation needed] Organizations explicitly pursuing AGI include the Swiss AI lab IDSIA, [citation needed] Nnaisense, [47] Vicarious, Maluuba, [12] the OpenCog Foundation, Adaptive AI, LIDA, and Numenta and the associated Redwood Neuroscience Institute. [48] In addition, organizations such as the Machine Intelligence Research Institute [49] and OpenAI [50] have been founded to influence the development path of AGI. Finally, projects such as the Human Brain Project [51] have the goal of building a functioning simulation of the human brain. A 2017 survey of AGI categorized forty-five known "active R&D projects" that explicitly or implicitly (through published research) research AGI, with the largest three being DeepMind, the Human Brain Project, and OpenAI. [12]

In 2019, video game programmer and aerospace engineer John Carmack announced plans to research AGI. [52]

A widely discussed approach to achieving general intelligent action is whole brain emulation. A low-level brain model is built by scanning and mapping a biological brain in detail and copying its state into a computer system or another computational device. The computer runs a simulation model so faithful to the original that it will behave in essentially the same way as the original brain, or for all practical purposes, indistinguishably. [53] Whole brain emulation is discussed in computational neuroscience and neuroinformatics, in the context of brain simulation for medical research purposes. It is discussed in artificial intelligence research [46] as an approach to strong AI. Neuroimaging technologies that could deliver the necessary detailed understanding are improving rapidly, and futurist Ray Kurzweil in the book The Singularity Is Near [45] predicts that a map of sufficient quality will become available on a similar timescale to the required computing power.

For low-level brain simulation, an extremely powerful computer would be required. The human brain has a huge number of synapses. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections (synapses) to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5×10^14 synapses (100 to 500 trillion). [55] An estimate of the brain's processing power, based on a simple switch model for neuron activity, is around 10^14 (100 trillion) synaptic updates per second (SUPS). [56] In 1997, Kurzweil looked at various estimates for the hardware required to equal the human brain and adopted a figure of 10^16 computations per second (cps). [57] (For comparison, if a "computation" were equivalent to one "floating point operation" – a measure used to rate current supercomputers – then 10^16 "computations" would be equivalent to 10 petaFLOPS, achieved in 2011.) He used this figure to predict the necessary hardware would be available sometime between 2015 and 2025, if the exponential growth in computer power at the time of writing continued.
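
The arithmetic behind these figures is easy to check. The following sketch reproduces the synapse count and the petaFLOPS comparison, using only the estimates quoted above (which are themselves uncertain), not measured values:

```python
# Back-of-the-envelope checks on the brain-capacity estimates quoted above.
# All inputs are the cited estimates, not measured values.

NEURONS = 1e11               # ~10^11 neurons
SYNAPSES_PER_NEURON = 7e3    # ~7,000 connections per neuron (average)

total_synapses = NEURONS * SYNAPSES_PER_NEURON
# ~7e14, near the upper end of the quoted adult range of 1e14 to 5e14
print(f"Implied synapse count: {total_synapses:.0e}")

KURZWEIL_CPS = 1e16          # Kurzweil's 1997 hardware figure, in cps
PETAFLOPS = 1e15             # one petaFLOPS, in operations per second
# If one "computation" is taken as one floating point operation:
print(f"Equivalent supercomputer: {KURZWEIL_CPS / PETAFLOPS:.0f} petaFLOPS")
```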

The artificial neuron model assumed by Kurzweil and used in many current artificial neural network implementations is simple compared with biological neurons. A brain simulation would likely have to capture the detailed cellular behaviour of biological neurons, presently understood only in the broadest of outlines. The overhead introduced by full modeling of the biological, chemical, and physical details of neural behaviour (especially on a molecular scale) would require computational powers several orders of magnitude larger than Kurzweil's estimate. In addition, the estimates do not account for glial cells, which are at least as numerous as neurons, may outnumber neurons by as much as 10:1, and are now known to play a role in cognitive processes. [58]
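
For a sense of how simple the artificial neuron model in question is, here is a minimal sketch (illustrative only, not any particular project's model): a cell reduced to a weighted sum passed through a squashing function.

```python
import math

def simple_neuron(inputs, weights, bias):
    """Point-neuron model typical of current artificial neural networks:
    a weighted sum passed through a sigmoid. It ignores the biological,
    chemical, and physical detail (ion channels, dendritic computation,
    glial interactions) discussed above."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# One neuron with two inputs; a real neuron averages ~7,000 synapses.
print(simple_neuron([0.5, 0.2], [0.8, -0.4], 0.1))
```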

There are some research projects that are investigating brain simulation using more sophisticated neural models, implemented on conventional computing architectures. The Artificial Intelligence System project implemented non-real-time simulations of a "brain" (with 10^11 neurons) in 2005. It took 50 days on a cluster of 27 processors to simulate 1 second of a model. [59] The Blue Brain project used one of the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to create a real-time simulation of a single rat neocortical column consisting of approximately 10,000 neurons and 10^8 synapses in 2006. [60] A longer-term goal is to build a detailed, functional simulation of the physiological processes in the human brain: "It is not impossible to build a human brain and we can do it in 10 years," Henry Markram, director of the Blue Brain Project, said in 2009 at the TED conference in Oxford. [61] There have also been controversial claims to have simulated a cat brain. Neuro-silicon interfaces have been proposed as an alternative implementation strategy that may scale better. [62]
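
To put the 2005 result in perspective, the implied slowdown relative to real time follows directly from the figures given (a rough check that ignores hardware differences between the projects):

```python
# Slowdown factor of the 2005 simulation quoted above: 50 days of
# wall-clock time (on a 27-processor cluster) per 1 second of model time.
SECONDS_PER_DAY = 86_400
wall_clock_seconds = 50 * SECONDS_PER_DAY
model_seconds = 1
print(f"Slowdown: {wall_clock_seconds / model_seconds:,.0f}x real time")  # ~4,320,000x
```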

Hans Moravec addressed the above arguments ("brains are more complicated", "neurons have to be modeled in more detail") in his 1997 paper "When will computer hardware match the human brain?". [63] He measured the ability of existing software to simulate the functionality of neural tissue, specifically the retina. His results do not depend on the number of glial cells, nor on what kinds of processing neurons perform where.

The actual complexity of modeling biological neurons has been explored in the OpenWorm project, which aimed at complete simulation of a worm that has only 302 neurons in its neural network (among about 1000 cells in total). The animal's neural network had been well documented before the start of the project. However, although the task seemed simple at the beginning, models based on a generic neural network did not work. Currently, efforts are focused on precise emulation of biological neurons (partly at the molecular level), but the result cannot yet be called a total success. Even if the number of issues to be solved in a human-brain-scale model is not proportional to the number of neurons, the amount of work along this path is obvious.

A fundamental criticism of the simulated brain approach derives from embodied cognition, where human embodiment is taken as an essential aspect of human intelligence. Many researchers believe that embodiment is necessary to ground meaning. [64] If this view is correct, any fully functional brain model will need to encompass more than just the neurons (i.e., a robotic body). Goertzel [46] proposes virtual embodiment (as in Second Life), but it is not yet known whether this would be sufficient.

Desktop computers using microprocessors capable of more than 10^9 cps (Kurzweil's non-standard unit "computations per second", see above) have been available since 2005. According to the brain power estimates used by Kurzweil (and Moravec), this computer should be capable of supporting a simulation of a bee brain, but despite some interest [65] no such simulation exists [citation needed]. There are at least three reasons for this:

1. The neuron model seems to be oversimplified (see the discussion of neuron models above).
2. There is insufficient understanding of higher cognitive processes [66] to establish accurately what the brain's neural activity correlates with.
3. Even if our understanding of cognition advances sufficiently, early simulation programs are likely to be very inefficient and will therefore need considerably more hardware.
4. The brain of an organism, while critical, may not be an appropriate boundary for a cognitive model; to simulate a bee brain, it may be necessary to simulate the body and the environment, and research into cephalopods has demonstrated clear examples of decentralized nervous systems. [67]

In addition, the scale of the human brain is not currently well-constrained. One estimate puts the human brain at about 100 billion neurons and 100 trillion synapses. [68] [69] Another estimate is 86 billion neurons of which 16.3 billion are in the cerebral cortex and 69 billion in the cerebellum. [70] Glial cell synapses are currently unquantified but are known to be extremely numerous.
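
The spread between these estimates is easy to quantify; a small sketch using only the numbers quoted above:

```python
# Comparing the two neuron-count estimates quoted above.
estimate_a = 100e9                 # "about 100 billion neurons"
estimate_b = 86e9                  # "86 billion neurons" total
cortex, cerebellum = 16.3e9, 69e9  # components of the second estimate

elsewhere = estimate_b - cortex - cerebellum
print(f"Neurons outside cortex and cerebellum: {elsewhere / 1e9:.1f} billion")  # ~0.7
print(f"Gap between the two estimates: {(estimate_a - estimate_b) / 1e9:.0f} billion")  # 14
```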

In 1980, philosopher John Searle coined the term "strong AI" as part of his Chinese room argument. [71] He wanted to distinguish between two different hypotheses about artificial intelligence: [72]

An artificial intelligence system can think and have a mind.

An artificial intelligence system can (only) act like it thinks and has a mind.

The first one is called "the strong AI hypothesis" and the second is "the weak AI hypothesis" because the first one makes the stronger statement: it assumes something special has happened to the machine that goes beyond all its abilities that we can test. Searle referred to the "strong AI hypothesis" as "strong AI". This usage is also common in academic AI research and textbooks. [73]

The weak AI hypothesis is equivalent to the hypothesis that artificial general intelligence is possible. According to Russell and Norvig, "Most AI researchers take the weak AI hypothesis for granted, and don't care about the strong AI hypothesis." [74]

In contrast to Searle, Ray Kurzweil uses the term "strong AI" to describe any artificial intelligence system that acts like it has a mind, [45] regardless of whether a philosopher would be able to determine if it actually has one. In science fiction, AGI is associated with traits such as consciousness, sentience, sapience, and self-awareness observed in living beings. However, according to Searle, it is an open question whether general intelligence is sufficient for consciousness. "Strong AI" (as defined above by Kurzweil) should not be confused with Searle's "strong AI hypothesis", the claim that a computer which behaves as intelligently as a person must also necessarily have a mind and consciousness. AGI refers only to the amount of intelligence that the machine displays, with or without a mind.

There are other aspects of the human mind besides intelligence that are relevant to the concept of strong AI, and these play a major role in science fiction and the ethics of artificial intelligence:

Consciousness: To have subjective experience and thought. [75]

Self-awareness: To be aware of oneself as a separate individual, especially to be aware of one's own thoughts.

Sentience: The ability to "feel" perceptions or emotions subjectively.

Sapience: The capacity for wisdom.

These traits have a moral dimension, because a machine with this form of strong AI may have legal rights, analogous to the rights of non-human animals. As such, preliminary work has been conducted on approaches to integrating full ethical agents with existing legal and social frameworks. These approaches have focused on the legal position and rights of 'strong' AI. [76]

However, Bill Joy, among others, argues a machine with these traits may be a threat to human life or dignity. [77] It remains to be shown whether any of these traits are necessary for strong AI. The role of consciousness is not clear, and currently there is no agreed test for its presence. If a machine is built with a device that simulates the neural correlates of consciousness, would it automatically have self-awareness? It is also possible that some of these properties, such as sentience, naturally emerge from a fully intelligent machine, or that it becomes natural to ascribe these properties to machines once they begin to act in a way that is clearly intelligent. For example, intelligent action may be sufficient for sentience, rather than the other way around.

Although the role of consciousness in strong AI/AGI is debatable, many AGI researchers [78] regard research that investigates possibilities for implementing consciousness as vital. In an early effort Igor Aleksander [79] argued that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language.

Since the launch of AI research in 1956, progress in the field has slowed over time, and the aim of creating machines skilled in intelligent action at the human level has stalled. [80] A possible explanation for this delay is that computers lack a sufficient scope of memory or processing power. [80] In addition, the level of complexity inherent in AI research itself may limit its progress. [80]

While most AI researchers believe strong AI can be achieved in the future, some individuals, such as Hubert Dreyfus and Roger Penrose, deny the possibility of achieving strong AI at all. [80] John McCarthy was one of various computer scientists who believed human-level AI will be accomplished, but that a date cannot accurately be predicted. [81]

Conceptual limitations are another possible reason for the slowness in AI research. [80] AI researchers may need to modify the conceptual framework of their discipline in order to provide a stronger base and contribution to the quest of achieving strong AI. As William Clocksin wrote in 2003: "the framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts". [80]

Furthermore, AI researchers have been able to create computers that can perform jobs that are complicated for people to do, but conversely they have struggled to develop a computer that is capable of carrying out tasks that are simple for humans to do (Moravec's paradox); for example, computers long ago surpassed the best humans at chess, while everyday perception and sensorimotor skills remain hard to automate. [80] A problem described by David Gelernter is that some people assume thinking and reasoning are equivalent. [82] However, the question of whether thoughts and the creator of those thoughts are isolated individually has intrigued AI researchers. [82]

The problems that have been encountered in AI research over the past decades have further impeded the progress of AI. The failed predictions promised by AI researchers and the lack of a complete understanding of human behaviors have helped diminish the primary idea of human-level AI. [46] Although the progress of AI research has brought both improvement and disappointment, most investigators remain optimistic about potentially achieving the goal of AI in the 21st century. [46]

Other possible reasons have been proposed for the slow progress of strong AI research. The intricacy of scientific problems and the need to fully understand the human brain through psychology and neurophysiology have limited many researchers' ability to emulate the function of the human brain in computer hardware. [83] Many researchers tend to underestimate the uncertainty involved in future predictions of AI, but without taking those issues seriously, people can overlook solutions to problematic questions. [46]

Clocksin says that a conceptual limitation that may impede the progress of AI research is that people may be using the wrong techniques for computer programs and implementation of equipment. [80] When AI researchers first began to aim for the goal of artificial intelligence, a main interest was human reasoning. [84] Researchers hoped to establish computational models of human knowledge through reasoning and to find out how to design a computer for a specific cognitive task. [84]

The practice of abstraction, which researchers tend to redefine when working in a particular context, provides them with a way to concentrate on just a few concepts. [84] The most productive use of abstraction in AI research comes from planning and problem solving. [84] Although the aim is to increase the speed of a computation, the role of abstraction has posed questions about the involvement of abstraction operators. [85]

A possible reason for the slowness in AI relates to the acknowledgement by many AI researchers that heuristics is an area where a significant gap remains between computer performance and human performance. [83] The specific functions that are programmed into a computer may account for many of the requirements that would allow it to match human intelligence. These explanations are not necessarily guaranteed to be the fundamental causes of the delay in achieving strong AI, but they are widely agreed upon by numerous researchers.

Many AI researchers debate whether machines should be created with emotions. There are no emotions in typical models of AI, and some researchers say that programming emotions into machines would allow them to have a mind of their own. [80] Emotion sums up the experiences of humans, because it allows them to remember those experiences. [82] David Gelernter writes, "No computer will be creative unless it can simulate all the nuances of human emotion." [82] This concern about emotion has posed problems for AI researchers, and it connects to the concept of strong AI as its research progresses. [86]

As of March 2020, AGI remains speculative [87] [88] as no such system has been demonstrated yet. Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". However, this prediction failed to come true. Microsoft co-founder Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". [89] Writing in The Guardian, roboticist Alan Winfield claimed the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight. [90]

AI experts' views on the feasibility of AGI wax and wane, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the experts, 16.5% answered "never" when asked the same question but with a 90% confidence instead. [91] [92] Further considerations of current AGI progress can be found in the discussions of tests for confirming human-level AGI and of AI IQ testing above.

The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many public figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researcher to endorse the thesis is Stuart J. Russell. Endorsers of the thesis sometimes express bafflement at skeptics: Gates states he does not "understand why some people are not concerned", [93] and Hawking criticized widespread indifference in his 2014 editorial:

'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here–we'll leave the lights on?' Probably not–but this is more or less what is happening with AI.' [94]

Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence? [95] [96]

The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy. [97]

Much of existing criticism argues that AGI is unlikely in the short term. Computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe [a technological singularity] is likely to happen, at least for a long time. And I don't know why I feel that way." [98] Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet." [99]


  1. "DeepMind and Google: the battle to control artificial intelligence". The Economist (1843 (magazine)). 2019. Retrieved 15 March 2020. AGI stands for Artificial General Intelligence, a hypothetical computer program... ^
  2. Kurzweil, Singularity (2005) p. 260 ^
  3. Kurzweil, Ray (5 August 2005), "Long Live AI", Forbes: Kurzweil describes strong AI as "machine intelligence with the full range of human intelligence." ^
  4. Treder, Mike (10 August 2005), "Advanced Human ntelligence", Responsible Nanotechnology, archived from the original on 16 October 2019 ^
  5. "The Age of Artificial Intelligence: George John at TEDxLondonBusinessSchool 2013". Archived from the original on 26 February 2014. Retrieved 22 February 2014. ^
  6. Newell & Simon 1976, This is the term they use for "human-level" intelligence in the physical symbol system hypothesis. ^
  7. Searle 1980, See below for the origin of the term "strong AI", and see the academic definition of "strong AI" in the article Chinese room. ^
  8. How artificial intelligence works, "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI.", European Parliamentary Research Service, retrieved March 3, 2020 ^
  9. Grace, Katja; Salvatier, John; Dafoe, Allan; Zhang, Baobao; Evans, Owain (31 July 2018). "Viewpoint: When Will AI Exceed Human Performance? Evidence from AI Experts". Journal of Artificial Intelligence Research. 62: 729–754. doi:10.1613/jair.1.11222. ISSN 1076-9757. ^
  10. Encyclopædia Britannica Strong AI, applied AI, and cognitive simulation Archived 15 October 2007 at the Wayback Machine or Jack Copeland What is artificial intelligence? Archived 18 August 2007 at the Wayback Machine on ^
  11. "The Open University on Strong and Weak AI". Archived from the original on 25 September 2009. Retrieved 8 October 2007. [dead link] ^
  12. Baum, Seth (12 November 2017). "Baum, Seth, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy (November 12, 2017). Global Catastrophic Risk Institute Working Paper 17-1". Cite journal requires |journal= (help) ^
  13. AI founder John McCarthy writes: "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." McCarthy, John (2007). "Basic Questions". Stanford University. Archived from the original on 26 October 2007. Retrieved 6 December 2007. (For a discussion of some definitions of intelligence used by artificial intelligence researchers, see philosophy of artificial intelligence.)
  14. This list of intelligent traits is based on the topics covered by major AI textbooks, including: Russell & Norvig 2003, Luger & Stubblefield 2004, Poole, Mackworth & Goebel 1998 and Nilsson 1998.
  15. Pfeifer, R. and Bongard, J. C., How the Body Shapes the Way We Think: A New View of Intelligence (The MIT Press, 2007). ISBN 0-262-16239-3
  16. White, R. W. (1959). "Motivation reconsidered: The concept of competence". Psychological Review. 66 (5): 297–333. doi:10.1037/h0040934. PMID 13844397.
  17. Johnson 1987
  18. deCharms, R. (1968). Personal Causation. New York: Academic Press.
  19. Muehlhauser, Luke (11 August 2013). "What is AGI?". Machine Intelligence Research Institute. Archived from the original on 25 April 2014. Retrieved 1 May 2014.
  20. "What is Artificial General Intelligence (AGI)? | 4 Tests For Ensuring Artificial General Intelligence". Talky Blog. 13 July 2019. Archived from the original on 17 July 2019. Retrieved 17 July 2019.
  21. Liu, Feng; Shi, Yong; Liu, Ying (2017). "Intelligence Quotient and Intelligence Grade of Artificial Intelligence". Annals of Data Science. 4 (2): 179–191. arXiv:1709.10242. Bibcode:2017arXiv170910242L. doi:10.1007/s40745-017-0109-0.
  22. "Google-KI doppelt so schlau wie Siri" [Google AI twice as smart as Siri]. Archived from the original on 3 January 2019. Retrieved 2 January 2019.
  23. Shapiro, Stuart C. (1992). "Artificial Intelligence", archived 1 February 2016 at the Wayback Machine. In Stuart C. Shapiro (Ed.), Encyclopedia of Artificial Intelligence (Second Edition, pp. 54–57). New York: John Wiley. (Section 4 is on "AI-Complete Tasks".)
  24. Yampolskiy, Roman V. (2013). "Turing Test as a Defining Feature of AI-Completeness". In Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) – In the Footsteps of Alan Turing. Xin-She Yang (Ed.). pp. 3–17 (Chapter 1). Springer, London. Archived 22 May 2013 at the Wayback Machine.
  25. von Ahn, Luis; Blum, Manuel; Hopper, Nicholas; Langford, John (2003). "CAPTCHA: Using Hard AI Problems for Security", archived 4 March 2016 at the Wayback Machine. In Proceedings of Eurocrypt, Vol. 2656, pp. 294–311.
  26. Bergmair, Richard (7 January 2006). "Natural Language Steganography and an "AI-complete" Security Primitive". CiteSeerX.
  27. Crevier 1993, pp. 48–50
  28. Simon 1965, p. 96, quoted in Crevier 1993, p. 109
  29. "Scientist on the Set: An Interview with Marvin Minsky". Archived from the original on 16 July 2012. Retrieved 5 April 2008.
  30. Marvin Minsky to Darrach (1970), quoted in Crevier (1993, p. 109).
  31. The Lighthill report specifically criticized AI's "grandiose objectives" and led to the dismantling of AI research in England. (Lighthill 1973; Howe 1994) In the U.S., DARPA became determined to fund only "mission-oriented direct research, rather than basic undirected research". See (NRC 1999) under "Shift to Applied Research Increases Investment". See also (Crevier 1993, pp. 115–117) and (Russell & Norvig 2003, pp. 21–22).
  32. Crevier 1993, p. 211; Russell & Norvig 2003, p. 24; and see also Feigenbaum & McCorduck 1983
  33. Crevier 1993, pp. 161–162, 197–203, 240; Russell & Norvig 2003, p. 25; NRC 1999, under "Shift to Applied Research Increases Investment"
  34. Crevier 1993, pp. 209–212
  35. As AI founder John McCarthy writes, "it would be a great relief to the rest of the workers in AI if the inventors of new general formalisms would express their hopes in a more guarded form than has sometimes been the case." McCarthy, John (2000). "Reply to Lighthill". Stanford University. Archived from the original on 30 September 2008. Retrieved 29 September 2007.
  36. "At its low point, some computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers." Markoff, John (14 October 2005). "Behind Artificial Intelligence, a Squadron of Bright Real People". The New York Times.
  37. Russell & Norvig 2003, pp. 25–26
  38. "Trends in the Emerging Tech Hype Cycle". Gartner Reports. Archived from the original on 22 May 2019. Retrieved 7 May 2019.
  39. Moravec 1988, p. 20
  40. Harnad, S. (1990). "The Symbol Grounding Problem". Physica D. 42 (1–3): 335–346. arXiv:cs/9906002. Bibcode:1990PhyD...42..335H. doi:10.1016/0167-2789(90)90087-6.
  41. Gubrud 1997
  42. "Who coined the term "AGI"?". Archived from the original on 28 December 2018. Retrieved 28 December 2018. Via Life 3.0: 'The term "AGI" was popularized by... Shane Legg, Mark Gubrud and Ben Goertzel'.
  43. Goertzel & Wang 2006. See also Wang (2006), with an up-to-date summary and lots of links.
  44. Goertzel & Pennachin 2006.
  45. Kurzweil 2005, p. 260; or see "Advanced Human Intelligence", archived 30 June 2011 at the Wayback Machine, where he defines strong AI as "machine intelligence with the full range of human intelligence."
  46. Goertzel 2007.
  47. Markoff, John (27 November 2016). "When A.I. Matures, It May Call Jürgen Schmidhuber 'Dad'". The New York Times. Archived from the original on 26 December 2017. Retrieved 26 December 2017.
  48. Barrat, James (2013). "Chapter 11: A Hard Takeoff". Our Final Invention: Artificial Intelligence and the End of the Human Era (First ed.). New York: St. Martin's Press. ISBN 9780312622374.
  49. "About the Machine Intelligence Research Institute". Machine Intelligence Research Institute. Archived from the original on 21 January 2018. Retrieved 26 December 2017.
  50. "About OpenAI". OpenAI. Archived from the original on 22 December 2017. Retrieved 26 December 2017.
  51. Theil, Stefan. "Trouble in Mind". Scientific American. pp. 36–42. Bibcode:2015SciAm.313d..36T. doi:10.1038/scientificamerican1015-36. Archived from the original on 9 November 2017. Retrieved 26 December 2017.
  52. Lawler, Richard (13 November 2019). "John Carmack takes a step back at Oculus to work on human-like AI". Engadget. Retrieved 4 April 2020.
  53. Sandberg & Boström 2008. "The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when run on appropriate hardware, it will behave in essentially the same way as the original brain."
  54. Sandberg & Boström 2008.
  55. Drachman 2005.
  56. Russell & Norvig 2003.
  57. In Mind Children (Moravec 1988, p. 61), 10^15 cps is used. More recently, in 1997 ("Archived copy", archived from the original on 15 June 2006, retrieved 23 June 2006), Moravec argued for 10^8 MIPS, which would roughly correspond to 10^14 cps. Moravec talks in terms of MIPS, not "cps", which is a non-standard term Kurzweil introduced.
  58. Swaminathan, Nikhil (January–February 2011). "Glia—the other brain cells". Discover. Archived from the original on 8 February 2014. Retrieved 24 January 2014.
  59. Izhikevich, Eugene M.; Edelman, Gerald M. (4 March 2008). "Large-scale model of mammalian thalamocortical systems" (PDF). PNAS. 105 (9): 3593–3598. Bibcode:2008PNAS..105.3593I. doi:10.1073/pnas.0712231105. PMC 2265160. PMID 18292226. Archived from the original (PDF) on 12 June 2009. Retrieved 23 June 2015.
  60. "Project Milestones". Blue Brain. Retrieved 11 August 2008.
  61. "Artificial brain '10 years away'". BBC News. 22 July 2009. Archived from the original on 26 July 2017. Retrieved 25 July 2009.
  62. University of Calgary news, archived 18 August 2009 at the Wayback Machine; NBC News, archived 4 July 2017 at the Wayback Machine.
  63. Moravec, Hans (1997). "When will computer hardware match the human brain?" ("Archived copy"). Archived from the original on 15 June 2006. Retrieved 23 June 2006.
  64. de Vega, Glenberg & Graesser 2008. A wide range of views in current research, all of which require grounding to some degree.
  65. "Some links to bee brain studies". Archived from the original on 5 October 2011. Retrieved 30 March 2010.
  66. In Goertzel's AGI book (Yudkowsky 2006), Yudkowsky proposes 5 levels of organisation that must be understood – code/data, sensory modality, concept & category, thought, and deliberation (consciousness) – in order to use the available hardware.
  67. Yekutieli, Y.; Sagiv-Zohar, R.; Aharonov, R.; Engel, Y.; Hochner, B.; Flash, T. (August 2005). "Dynamic model of the octopus arm. I. Biomechanics of the octopus reaching movement". J. Neurophysiol. 94 (2): 1443–58. doi:10.1152/jn.00684.2004. PMID 15829594.
  68. Williams & Herrup 1988
  69. "Nervous system, human". Encyclopædia Britannica. 9 January 2007.
  70. Azevedo et al. 2009.
  71. Searle 1980
  72. As defined in a standard AI textbook: "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." (Russell & Norvig 2003)
  73. For example: Russell & Norvig 2003; the Oxford University Press Dictionary of Psychology, archived 3 December 2007 at the Wayback Machine (quoted in "High Beam Encyclopedia"); the MIT Encyclopedia of Cognitive Science, archived 19 July 2008 at the Wayback Machine (quoted in "AITopics"); Planet Math, archived 19 September 2007 at the Wayback Machine; and Will Biological Computers Enable Artificially Intelligent Machines to Become Persons?, archived 13 May 2008 at the Wayback Machine, Anthony Tongen.
  74. Russell & Norvig 2003, p. 947.
  75. Note that consciousness is difficult to define. A popular definition, due to Thomas Nagel, is that it "feels like" something to be conscious. If we are not conscious, then it doesn't feel like anything. Nagel uses the example of a bat: we can sensibly ask "what does it feel like to be a bat?" However, we are unlikely to ask "what does it feel like to be a toaster?" Nagel concludes that a bat appears to be conscious (i.e. has consciousness) but a toaster does not. See (Nagel 1974).
  76. Sotala, Kaj; Yampolskiy, Roman V. (19 December 2014). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1): 8. doi:10.1088/0031-8949/90/1/018001. ISSN 0031-8949.
  77. Joy, Bill (April 2000). "Why the future doesn't need us". Wired.
  78. Yudkowsky 2006.
  79. Aleksander 1996.
  80. Clocksin 2003.
  81. McCarthy 2003.
  82. Gelernter 2010.
  83. McCarthy 2007.
  84. Holte & Choueiry 2003.
  85. Zucker 2003.
  86. Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004.
  87. "How artificial intelligence works", European Parliamentary Research Service, retrieved 3 March 2020. "Concluding remarks: Today's AI is powerful and useful, but remains far from speculated AGI or ASI."
  88. Beyond Mad?: The Race for Artificial General Intelligence, 2 February 2018, retrieved 3 March 2020. "AGI represents a level of power that remains firmly in the realm of speculative fiction as on date."
  89. Allen, Paul. "The Singularity Isn't Near". MIT Technology Review. Retrieved 17 September 2014.
  90. Winfield, Alan. "Artificial intelligence will not turn into a Frankenstein's monster". The Guardian. Archived from the original on 17 September 2014. Retrieved 17 September 2014.
  91. Khatchadourian, Raffi (23 November 2015). "The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?". The New Yorker. Archived from the original on 28 January 2016. Retrieved 7 February 2016.
  92. Müller, V. C.; Bostrom, N. (2016). "Future progress in artificial intelligence: A survey of expert opinion". In Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer, Cham.
  93. Rawlinson, Kevin. "Microsoft's Bill Gates insists AI is a threat". BBC News. Retrieved 30 January 2015.
  94. "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?'". The Independent (UK). Retrieved 3 December 2014.
  95. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First ed.). ISBN 978-0199678112.
  96. Sotala, Kaj; Yampolskiy, Roman (19 December 2014). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1).
  97. "But What Would the End of Humanity Mean for Me?". The Atlantic. 9 May 2014. Retrieved 12 December 2015.
  98. "Tech Luminaries Address Singularity". IEEE Spectrum: Technology, Engineering, and Science News (Special Report: The Singularity). 1 June 2008. Retrieved 8 April 2020.
  99. Shermer, Michael (1 March 2017). "Apocalypse AI". Scientific American. p. 77. Bibcode:2017SciAm.316c..77S. doi:10.1038/scientificamerican0317-77. Retrieved 27 November 2017.