Unveiling the Evolution of Artificial Intelligence: From Clockwork Automata to Neural Networks





In the quest to unravel the mysteries of artificial intelligence (AI), one is often led to ponder: what truly constitutes intelligence? Is it a singular essence or a multifaceted amalgamation of abilities? These questions form the crux of exploration in "What is Intelligence?" - a thought-provoking analysis tracing the origins of AI and the emergence of the concept of intelligence within psychology.


The journey commences in the annals of history, where the seeds of intelligent machines were sown over a millennium ago. The 17th century witnessed the inception of clockwork automata, marvels of engineering that bewildered onlookers with their seemingly sentient behavior. From a copper duck mimicking digestion to a wooden flute player, these creations hinted at the possibility of artificial cognition.


Fast forward to the 19th century, where the visionary Charles Babbage envisioned the first programmable digital computer, propelling humanity closer to the realm of AI. However, his contemporary, Ada Lovelace, voiced skepticism regarding machine creativity, asserting that these contraptions merely executed predefined tasks.


The narrative then shifts to the early 20th century, where luminaries like Norbert Wiener and Ross Ashby laid the groundwork for what would later be termed cybernetics. Rooted in the Greek concept of "kubernētēs" - to steer or govern - cybernetics emphasized the role of controllers in decision-making processes, mirroring the intricate feedback loops of the human mind.


Ashby's Homeostat, a physical manifestation of cybernetic principles, demonstrated the necessity of complexity for stability in dynamic systems, embodying the notion that intelligence thrives within interactive environments. This perspective challenges the notion of intelligence as a solitary attribute, highlighting its inseparable connection to context and adaptation.


The pivotal moment in AI history unfolds in the mid-1950s with the seminal Dartmouth workshop organized by John McCarthy. This gathering of interdisciplinary minds laid the groundwork for modern AI, coining the term "artificial intelligence" and igniting a fervor for simulating human cognition in machines.


From discussions on computing theory to neural networks and natural language processing, the workshop heralded a new era of AI research, setting the stage for ongoing exploration into the depths of machine learning, creativity, and abstraction.


As we traverse the evolution of AI, it becomes evident that intelligence is not a monolithic entity but a tapestry of interconnected facets, woven through centuries of human ingenuity and technological advancement. "What is Intelligence?" invites us to ponder the profound implications of our quest to unlock the secrets of artificial minds, reshaping our understanding of what it means to be intelligent in an age of machines.


When you think about it, artificial intelligence might easily have been called something else. For instance, it could be characterised as the quest to create “artificial psychology”1 or to build “synthetic minds”.2 Minds are more obviously multifaceted things, whereas there is an idea that intelligence might just be one key thing and that when we figure that out, the challenge of AI will be solved. Here, we will look at the origins of research in AI and how we came to think of building AI. We will also look at the emergence of the idea of intelligence in psychology and whether intelligence is one thing or many.


THE ORIGINS OF AI

Intelligent machines have been imagined for more than a thousand years. Machines that could surprise people with their seemingly intelligent behaviour began appearing in the 17th century in the form of clockwork automata. These included a duck made of copper that appeared to eat and digest and a mechanical flute player made from wood.3 Charles Babbage brought AI a step closer to reality in the 19th century with his idea for the first programmable digital computer, provoking his friend Ada Lovelace to argue against the possibility of machine creativity. “The Analytical Engine [Babbage’s computer]”, Lovelace wrote, “has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform”.4 In the first half of the 20th century, beginning with thinkers such as the mathematician Norbert Wiener5 and the psychiatrist Ross Ashby,6 much of the research that we might now call AI went by the name cybernetics. This term comes from the Greek word kubernētēs, meaning to act as a pilot or helmsman. The ideas of the cyberneticians placed emphasis on the role of a controller, that is, a decision-making mechanism embedded in a feedback loop (see Figure 2.1) within which it senses the world and generates appropriate actions.


In the 1940s, Ashby designed and demonstrated a physical device called the Homeostat that incorporated many cybernetic principles, including Ashby’s law of “requisite variety”—that a machine must be at least as complex as the system it controls if it is to be stable in the face of change. Cybernetics reminds us that intelligence does not happen in isolation. We are intelligent to the extent that we act appropriately in the environment that we live in, maintain ourselves, and (hopefully) achieve our personal goals.
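To make Ashby’s law a little more concrete, here is a minimal simulation—my own sketch rather than anything from Ashby or the Homeostat—in which a controller can only cancel a disturbance if it has a matching response in its repertoire. The numbers of disturbances and responses are arbitrary choices for illustration.

```python
# A minimal sketch of the law of requisite variety (assumed toy model, not Ashby's):
# a regulator can only hold an essential variable steady if it has at least as many
# distinct responses as there are kinds of disturbance arriving from the world.
import random

def run(n_disturbances: int, n_responses: int, steps: int = 10_000) -> float:
    """Return the fraction of steps on which the disturbance was cancelled."""
    regulated = 0
    for _ in range(steps):
        d = random.randrange(n_disturbances)   # the world throws a random disturbance
        if d < n_responses:                    # the controller has a matching response
            regulated += 1                     # essential variable held steady
    return regulated / steps

random.seed(0)
print(run(n_disturbances=8, n_responses=8))   # variety matches: 1.0, fully stable
print(run(n_disturbances=8, n_responses=4))   # too little variety: ~0.5, often fails
```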


Figure 2.1 The cybernetic idea of an intelligent controller that is in a closed loop with the world.

The term “artificial intelligence” only really took off after a now-famous research meeting in the mid-1950s. A young researcher, John McCarthy, who had just been appointed to his first academic post, organised an eight-week retreat to which, with the help of some leading visionaries, he managed to recruit some of the most active and well-known researchers with interests in building smart machines. In proposing the workshop for funding, the term artificial intelligence was chosen, perhaps partly to promote the novelty of this enterprise relative to the existing field of cybernetics:


We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.


In the end, more than twenty people, and perhaps as many as forty, attended the workshop, though some for just a few days. On most days, this mix of engineers, mathematicians, cyberneticians, and psychologists met on the top floor of the Dartmouth mathematics building and held lively discussions on different topics. The themes considered included the theory of computing, natural language, neural networks, creativity, and abstraction, all of which are still core research topics for AI today.


After the participants dispersed, many went on to get substantial research funding to advance the new field of AI. Some believed that the development of machines that could match human intelligence would be possible in their own lifetimes. For instance, in 1960, Herb Simon, a key participant at Dartmouth, a leading AI theorist, and later a Nobel prize winner in economics, wrote that “machines will be capable, within twenty years, of doing any work that a man can do”.8 Interviewed in Life magazine in 1970, Marvin Minsky, one of the co-organisers of the retreat, suggested that “in from three to eight years we will have a machine with the general intelligence of an average human being . . . at that point it will be able to educate itself at fantastic speed”,9 and, more ominously, that “if we’re lucky, they [the AIs] might decide to keep us as pets”.10 As we will see, these predictions did not quite play out, though in the 2020s, the term artificial intelligence is more popular than ever, and there is (again) no shortage of notable researchers predicting that AIs will soon surpass humans in all respects.


So, we are stuck, for better or worse, with AI as the label for efforts to make machines that think. As we can see from McCarthy’s proposal, AI is the science and engineering of building intelligent machines. However, this rather begs the question: what do we mean by intelligence? Partly to side-step this challenge, AI is often defined in relation to intelligence of the human variety; that is, whatever it is that we humans do that is seen as requiring intelligence. Of course, this does not explain intelligence either; rather, it passes the buck to the science of human behaviour—psychology.


COMPARING HUMANS WITH AI

This comparison with humans suggests that whatever intelligence is, we know it when we see it. Humans are Homo sapiens—the wise hominid—distinctive for our large brains, capacity for tool use and language, and our invention of a technological culture. Though other animals are intelligent, sometimes in different ways, human intelligence has always been seen as the key benchmark for AI.


This idea led Alan Turing, one of the founders of computer science, to propose a test of whether machines can think, originally named the “imitation game”11 but now universally known as the Turing test.


Specifically, Turing proposed that we can judge whether a machine is truly intelligent by setting it against a person in a question-answer session. In his game, any question on any topic is put, in written form only, to both the AI (usually a chatbot) and the human. A second person, typically someone with relevant expertise and experience, is asked to assess which answer came from the human and which from the artefact. If the success of the judge is no better than guesswork (50% correct), then, according to Turing, we should agree that the machine is intelligent or “can think”, to use Turing’s precise phrase.
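To see what the 50% criterion amounts to in practice, here is a small sketch. It assumes a simple one-sided binomial check of the judge’s hit rate against chance, which is my own illustrative scoring choice rather than anything Turing specified; the trial counts are invented for the example.

```python
# A minimal sketch of scoring a Turing-test-style trial: the judge makes n binary
# calls (machine vs. human), and we ask how likely their accuracy would be if they
# were merely guessing at the 50% chance level Turing proposed.
from math import comb

def p_value_vs_chance(correct: int, trials: int) -> float:
    """One-sided probability of at least `correct` hits out of `trials`
    if the judge were purely guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical example: a judge identifies the machine correctly in 34 of 60 rounds.
p = p_value_vs_chance(34, 60)
print(f"Chance of doing this well by guessing alone: {p:.3f}")
# A large value (here roughly 0.18) means the judge is doing little better than
# guesswork, which, on Turing's criterion, counts in the machine's favour.
```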


Turing proposed his test in 1950. Since then, it has been the subject of many articles, and multiple competitions have been organised to measure AI’s success in this game. One such competition, the Loebner Prize, ran from 1990 through to 2019 but never awarded its ultimate prize of $100,000. This prize could only be won by an AI deemed to have “passed” the test as assessed by a panel of distinguished judges.


The philosopher John Searle, in a famous series of papers describing a “Chinese Room” thought experiment,12 argued that even if an AI were to pass the Turing test, this would still not be proof of machine intelligence since a machine might pass the test without having any real understanding. Searle’s argument has been hugely influential, so it is important to recount it here.


Searle, as a non-Chinese speaker, imagines himself inside a special room whose purpose is to answer questions in Chinese. Chinese speakers outside the room ask questions by writing them on postcards and posting them under the door. To answer them, Searle uses a book of rules written in English but designed specifically for answering questions in Chinese. By carefully following these rules, Searle is able to translate questions composed of Chinese characters—to him, sequences of meaningless “squiggles”—into answers that are also in Chinese. He provides his answers by posting them back under the door.


Searle imagines that the rules he is following are sufficiently comprehensive that, to the outside observers, his answers can be taken for those of a native Chinese speaker. In other words, the Chinese Room can pass this variant of Turing’s famous test. However, Searle maintains that he (Searle in the room) understands nothing about Chinese; he is just mechanically following rules. Similarly, an AI, by following a sufficiently sophisticated program, might pass an actual Turing test but understand nothing about the meaning of the answers it was generating. For Searle, this simple thought experiment was proof that the Turing test was not an adequate test of human-level intelligence and that “programs are neither constitutive of, nor sufficient for, minds”.
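The mechanics of the thought experiment can be caricatured in a few lines of code. The sketch below is my own toy illustration, with invented question-and-answer pairs standing in for Searle’s rule book: the “room” simply matches symbol strings and returns stored replies, and nothing in it represents meaning.

```python
# A toy illustration of the Chinese Room (my sketch, not Searle's): a "room" that
# maps incoming symbol strings to replies by blind rule-following, with no
# representation anywhere of what any symbol means.
RULE_BOOK = {
    # Hypothetical rules: "if you see these squiggles, return those squiggles".
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def chinese_room(question: str) -> str:
    # The operator only matches shapes against the rule book; no grammar,
    # no semantics, no understanding of Chinese is involved.
    return RULE_BOOK.get(question, "请再说一遍。")  # default: "please say that again"

print(chinese_room("你好吗？"))
```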

The Chinese Room critique is highly pertinent today when modern chatbots such as OpenAI’s ChatGPT surprise us with their ability to write and converse in a very human-like way. A dispute is raging as to whether this is a form of “true” intelligence or whether ChatGPT and other recent AIs are, like Searle in the Chinese Room, giving an appearance of intelligence while lacking any genuine understanding.


To the vexation of some AI researchers, behavioural yardsticks for intelligence, of which the conversational capacity required to pass the Turing test is an example, also seem to change over time. For instance, skill at chess playing was once seen as a key benchmark for human-level smarts. However, computers have been beating human grandmasters in chess since 1997, when the IBM chess computer Deep Blue defeated the then world champion Garry Kasparov in a six-game match. A series of computer programs developed by Google DeepMind have, in more recent years, triumphed over human champions in the even more intractable game of Go. However, success in game-playing is now seen as a less important indicator than it used to be. These days, critics of AI are more likely to point to emotional intelligence and to highlight capacities such as creativity rather than expertise in game-playing, deductive thinking, theorem proving, and so on, where AI has had significant success. This could be seen as either “moving the goalposts” or as finding out what the challenging aspects of human intelligence really are.


So just imitating aspects of human intelligence may not be enough to satisfy some of those who are sceptical about thinking machines.


Should we perhaps try, instead, to remove some of the gaps in our definitions and to be more specific about what being intelligent entails?


John McCarthy proposed that intelligence is “the computational part of the ability to achieve goals in the world”. This may help a little, reminding us, as emphasised by cyberneticians, that intelligence does not exist in a vacuum but is part of a loop that allows us to take action in the world. However, the keyword computation is doing a lot of heavy lifting here.


First, though, let us look a bit more closely at human intelligence to see if that can provide a better description of what we should be looking for.


THE ORIGINS OF THE IDEA OF INTELLIGENCE

Our modern English word “intelligence” comes from the Latin intelligentia, which, in turn, has two roots, inter meaning between, and legere—to pick out or read. In other words, it means something like “to choose between”, reminding us of the role of intelligence in determining effective action.


The philosopher Aristotle, who lived in the 4th century BCE, is often regarded as the father of psychology and the person to whom we owe the distinction between perception and intelligence. For Aristotle, aesthesis or perception—seeing, hearing, touching, tasting, and so on—was the capacity that distinguished animals from plants, all animals having at least a sense of touch. In contrast, only humans had nous—which is usually translated as intelligence, intellect, or mind.


Aristotle’s distinction was reinforced during the 17th century by the philosopher René Descartes, famous for his dualist view of mind and body. For a dualist (note, not someone who fights duels!) mind and body are fundamentally different kinds of things, with the mind existing outside the physical realm to which the body belongs. For Descartes, the mind was unique to humans and was the seat of reason, while the body was the realm of the senses. Whilst the mind could be trusted (“cogito ergo sum”—I think; therefore, I exist), the body was a potential source of deception. Indeed, in one of his writings, Descartes imagined an evil demon that manipulated the senses to provide false impressions of the external world.


Modern psychology raises questions about the intelligence–perception distinction. Of course, it is evident now that most animals do more than just perceive and that there is much more continuity between human and animal intelligence than Aristotle or Descartes may have realised. However, more problematically, if we look at how nervous systems work, the distinction between perception and intellect is also fuzzy. Nervous systems are built from specialised cells called neurons. There are sensory neurons that directly encode stimuli from the world, such as the light-sensitive neurons that line the interior of the eyeball, and there are motor neurons that directly stimulate muscles to create movement. Between the sensory and motor cells, all other neurons in the nervous system, the vast majority, are interneurons and are doing something a bit different. Even the simplest animals with nervous systems, such as Hydra—a small freshwater organism with a tubular body and tentacles but no brain—have nerve nets that include neurons that are neither sensory nor motor but connect the two in some complex mesh.


When we look at mammalian and human brains, it is possible to label areas as primarily sensory or perceptual and others as primarily motor. But this is a broad brush. Sensory areas can be seen to be involved in choosing actions, for example, and motor areas can receive and process sensory signals. For areas of the brain that seem to be neither, the terminology can be quite loose. For example, most of the cerebral cortex of the human brain, which is made up of left and right cerebral cortices, is labelled as “associative” (see Figure 2.2, left). This is another term we owe to Aristotle, who proposed that learning involved the association, or linking, of mental concepts.


Figure 2.2 Left. A side-on view of the human brain (the front is towards the left) showing some widely used labels and the division into the cerebral cortex and the brainstem. Note that the brainstem (shown by the darker, shaded region) extends inside the cortical mantle. Centre and right. Three example cortical neurons and the six-layer neural network found in a typical cortical column as drawn by the neuroscientist Santiago Ramón y Cajal, who pioneered the microscopic study of the brain.


When we look closely at these associative areas, we find a very wide variety of mental processes for which psychology has developed additional labels. For instance, there are areas thought to be involved in motivation, emotion, language, reasoning, planning, navigation, decision-making, and so on. However, this labelling is not clearly settled, and many areas seem to contribute to different aspects of function, just as different functions seem to be spread across many areas of the brain. The brain, it seems, has not evolved in a way that is easy for scientists to interpret!


Even if we could agree that parts of the brain can be distinguished as being principally perceptual/motor or other (intellect), when we look at these different brain areas in more detail, they are all made of similar stuff. For example, all of human cortex, be it sensory, motor, or associative, is made up of six-layered neural networks laid out in local patches called columns (see Figure 2.2, right, and next chapter). Neuroscientists agree that these columns have a similar network architecture—the way the neurons are wired to each other—regardless of where they are found. Although there are local differences, and critically, the detailed wiring is subject to adjustment and learning during development, what seems to matter most is how these patches connect to each other and, ultimately, to sensory inputs and motor outputs. This commonality of mechanism suggests that the problems of perception and intellect may not be as different as originally thought.


Despite these issues, Aristotle’s distinction between perception and intellect continues to shape how people commonly think about minds and brains. For example, we are more likely to conceive of people as varying in their intellectual capacity, even though there is plentiful evidence of individuals with exceptional sensory capacities, such as so-called “super tasters” who can discriminate many more flavours than the rest of us. Likewise, we recognise that some people can have exceptional sensory and motor capacities, such as the ability to strike a football particularly well. Nevertheless, we are less likely, culturally, to consider this to be a sign of exceptional intellect, even though such skills undoubtedly involve much more of the brain than just those parts that process signals from the eye or drive movement of the legs and feet.


INTELLIGENCE AND IQ TESTING

A key idea that comes from psychology and often leads us to think of intelligence as “one thing” is IQ or the intelligence quotient. The capacity to measure IQ, and thus, potentially, to quantify human intelligence, is one of the best-known achievements of a research area in psychology known as individual differences or differential psychology. If we have a measure of intelligence, does that not imply that we know what it is? Well, not really.


We owe the IQ test to an ingenious psychologist, Alfred Binet, working at the start of the 20th century. At the behest of the French Ministry for Education, Binet wanted to identify children who were performing poorly in class, particularly those who might benefit from special schooling. Binet’s idea was to bring together many short tasks or puzzles drawn from everyday life that, besides requiring the ability to read, would not rely on any formal education. For example, a task might ask you to identify the missing portion of a symmetric pattern, find a missing number in a sequence, or solve arithmetic riddles such as: “Peter, who is twelve, is three times his brother’s age. How old will Peter be when he is twice the age of his brother?” (You can find the answer at the end of this chapter in case you are wondering).


Whereas earlier approaches to measuring intellect had focused on specific abilities, Binet’s intention was to measure many different aspects of thinking and reasoning in the hope that a summary score would reflect the student’s overall capability. Binet also arranged his tasks in order of increasing difficulty, rating each one according to the chronological age at which a typical child should be able to perform it. By asking a child to work through the sequence until they could no longer perform a task successfully, Binet was able to estimate what he called the child’s mental age. Subtracting mental age from chronological age gave a measure of how a child was performing relative to his peer group.


It is notable that Binet did not start with a definition of intelligence; indeed, quite the opposite, he opted for a large number and wide range of tasks, suggesting “it matters very little what the tests are as long as they are numerous”. This approach avoided the problem of saying exactly what it was he was trying to measure. Binet created three versions of his test during his lifetime. In 1912, shortly after Binet’s death, William Stern suggested dividing mental age by chronological age (rather than subtracting), leading to the measure we now call the intelligence quotient. The notion of IQ and the IQ test was born.
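As a quick worked example of Stern’s quotient: the sketch below divides mental age by chronological age and also applies the conventional multiplication by 100, which was added slightly later (by Lewis Terman) and is assumed here simply because it gives the numbers most readers recognise. The ages are hypothetical.

```python
# A small worked example of the ratio IQ: mental age divided by chronological age,
# scaled by 100 (Terman's later convention, assumed here for familiarity).
def iq(mental_age: float, chronological_age: float) -> float:
    return 100 * mental_age / chronological_age

# Hypothetical children of the same chronological age but different mental ages:
print(iq(mental_age=10, chronological_age=8))   # 125.0: ahead of the peer group
print(iq(mental_age=8, chronological_age=8))    # 100.0: exactly at the expected level
print(iq(mental_age=6, chronological_age=8))    # 75.0: behind the peer group
```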


IQ testing has been, and still is, very widely applied. However, it has also had a chequered history. Arguably, the misapplication of IQ testing has led to reduced life chances for many rather than the improvement in education practices that Binet had hoped for. For example, the culture of IQ testing has led, in some parts of the world, to the exclusion of many children from access to education and to the invention of unfortunate labels, such as “moron”, for people with low IQ scores.


Although Binet considered that he was measuring a wide range of intellectual abilities and that children with lower scores could improve on his tests by extra schooling, a profoundly different view took hold in psychological circles in the first half of the 20th century.


Based on limited and controversial evidence, this view determined that the capacity to perform well on IQ tests was decided through heredity rather than experience. That is, IQ was thought to measure your intellectual endowment that education could do little to change.


Still more controversially, some researchers asserted that differences in IQ could explain differences in social status—for instance, that the poor and underprivileged are so because of lower intelligence. Finally, some large-scale studies of IQ appeared to show differences in average IQ between ethnic groups. Such findings were seized upon by political factions to justify economic inequality or the supposed superiority of one race to others. Many of these studies can be criticised for the failure to control for differences in diet, education, and life chances of the different groups studied. Cultural factors in some of the tasks included in IQ tests also made those tasks easier for some people than others. The many flaws in theories of intelligence based on IQ testing are explored in the book The Mismeasure of Man by Stephen J. Gould.


Alongside these socially divisive effects of some IQ studies, attempts to measure intelligence via IQ testing have cemented, in Western culture, the idea that IQ tests measure the strength of a general intellectual faculty akin to Aristotle’s nous. When, in 1994, Richard Herrnstein and Charles Murray published their book “The Bell Curve”,20 they summarised one popular reading of almost one hundred years of research on intelligence testing: that intelligence is a single monolithic capacity that you are largely born with, and that varies across the population with most people near to the average and a smaller number towards each of the extremes (the “bell curve” of the title).


MULTIPLE INTELLIGENCES

This view of intelligence as a single faculty remains popular in academic psychology, as it is in wider society, but there are alternative schools of thought.


In the field of IQ testing, researchers quickly developed sub-scales for different aspects of intelligence. Today, if you take an IQ test, such as the Wechsler Adult Intelligence Scale (WAIS), you can be given separate scores for verbal comprehension, working memory, perceptual organisation, and processing speed, alongside your overall IQ. Whereas testing of this kind can yield differences for individuals on different sub-scales, it is always possible to take an average—your IQ. It is also the case that scores on sub-scales correlate highly with each other and with the overall measure; that is, people with high scores on one sub-scale tend, on average, to have high scores on others. This, by itself, does not show that there is one thing we should call intelligence, as we will explore shortly. David Wechsler, writing about the WAIS test, which he developed, described intelligence as an “aggregate or global capacity of the individual to act purposefully, to think rationally, and to deal effectively with his environment”. This definition leaves open the idea that intelligence is more than one thing (an “aggregate”).


In the 20th century, several notable psychologists explored what might be called a multiple intelligences view. For instance, Robert Sternberg proposed a triarchic theory of intelligence. One kind, analytical intelligence, corresponds to the kind of problem-solving measured in traditional IQ tests. But Sternberg also argued for a capacity for creative intelligence, the ability to deal with new situations in a successful and appropriate way, and for practical intelligence, the capacity to cope with concrete, real-life challenges.


The cognitive scientist Howard Gardner has also proposed a multiple intelligences view, noting that people have different talents and that being good at one thing does not guarantee being good at another.


Gardner’s list of intelligences includes linguistic, logico-mathematical, and spatial intelligences—again, abilities similar to those measured by traditional intelligence tests. To these, Gardner added intelligences more associated with the creative arts; these included musical and bodily-kinaesthetic (as might be used in sport, dancing, or acting) intelligences.


Finally, at least in Gardner’s original list, he included intrapersonal intelligence, the capacity to understand and plan effectively for oneself, and interpersonal intelligence, the ability to understand others, including reasoning about their feelings, goals, and intentions. The idea of a distinct faculty for interpersonal or social intelligence has a long pedigree. For instance, writing in 1920, Edward Thorndike, a psychologist best known for his contributions to understanding learning, proposed that intelligence had distinct mechanical, abstract, and social components.


Another form of intelligence overlooked by traditional intelligence measures is emotional intelligence. As highlighted in an influential 1990 article by Peter Salovey and John Mayer,25 this relates to our capacity to monitor our own feelings and to use this in a discriminative way to guide decision-making. The importance of attending to our feelings in making wise decisions has also been explored by the neuropsychologist Antonio Damasio,26 who looked at the poor life decisions sometimes made by people with damage to the emotional parts of the brain. The science journalist Daniel Goleman further popularised this idea in a best-selling book, arguing that emotional intelligence, including capacities such as self-regulation and empathy, is critical to success in life. A related contrast, proposed by the psychologist Daniel Kahneman, known for his work on economic decision-making, is between two systems underlying “fast” and “slow” thinking. The fast system is automatic, emotional, and often instinctive and unconscious. The slow system is deliberate, conscious, effortful, and requires logic and reasoning.


What are we to make of all these different kinds of intelligence? There are several possibilities. One is that there is a single underlying general ability alongside several specific abilities related to these different aspects of intelligence. Following Charles Spearman—one of the first people to investigate these relationships statistically—psychologists often call these g and s. According to this theory, each specific ability, s, is partly due to general intelligence, g, and partly due to processes unique to that intelligence. This proposal is illustrated by the left-hand Venn diagram in Figure 2.3.


Psychologists, such as Spearman, inferred that the similarity in scores for different IQ sub-tests is caused by the presence of a hidden causal process, g, or general intelligence. This inference is consistent with the results of a powerful statistical method, also invented by Spearman, called factor analysis, which has been used ever since to look for patterns in intelligence-testing data. However, the right-hand Venn diagram in Figure 2.3 could also be consistent with scores on the different sub-tests being similar. Here, I have illustrated the idea that there might not be just one underlying hidden process but several. Each specific intelligence is influenced by two of these processes in this example, but no one underlying process influences all of them.

Figure 2.3 Two alternative models of how general intelligence could influence specific intelligences. Left. A single general ability (g, darker circle) is inferred to be involved in all specific intelligences. Right. This model suggests the presence of multiple underlying processes (grey ovals), with no two specific intelligences involving the same set of hidden processes.
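A few lines of simulation make the left-hand, single-factor model concrete. This is my own sketch, not Spearman’s analysis: each simulated sub-test score mixes one shared factor, g, with test-specific noise, and the weights and sample sizes are arbitrary illustrative choices. The shared factor alone generates the familiar positive correlations between sub-scales.

```python
# A sketch of the g-plus-s model: every sub-test score is part shared ability (g)
# and part test-specific variation (s). (Assumed toy parameters, not real IQ data.)
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 2_000, 4
g = rng.normal(size=(n_people, 1))            # one general factor per person
s = rng.normal(size=(n_people, n_subtests))   # independent specific factors
scores = 0.7 * g + 0.7 * s                    # each sub-test mixes g and s equally

print(np.round(np.corrcoef(scores, rowvar=False), 2))
# All off-diagonal correlations come out near 0.5, even though the sub-tests share
# nothing except the single hidden variable g.
```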


An alternative model of intelligence, due to Godfrey Thomson, a contemporary of Spearman’s, suggested that there could be many hidden processes influencing different specific measures of intelligence. 


Twenty-first-century analyses have found that modern versions of Thomson’s theory cannot be distinguished statistically from g theory; in other words, both are equally consistent with the data.
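For comparison, here is an equally small sketch of a Thomson-style “bond sampling” account—again my own illustration rather than Thomson’s original model, with invented numbers of “bonds” and sample sizes. Many independent elementary processes, each sub-test drawing on a random subset of them, reproduce much the same correlation pattern without any single g.

```python
# A sketch of Thomson's alternative: no single g, only a large pool of independent
# elementary processes ("bonds"); each sub-test samples a random subset of them,
# and overlapping subsets are enough to produce positive correlations.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_bonds, n_subtests = 2_000, 200, 4
bonds = rng.normal(size=(n_people, n_bonds))      # many small independent abilities

scores = np.column_stack([
    bonds[:, rng.choice(n_bonds, size=100, replace=False)].sum(axis=1)
    for _ in range(n_subtests)                    # each test samples half the bonds
])

print(np.round(np.corrcoef(scores, rowvar=False), 2))
# Correlations again come out around 0.5, so this pattern alone cannot separate
# the many-process account from the single-g account.
```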


INTELLIGENCE: A MULTIFACETED ENIGMA

So, we have seen that there are lots of ways to think about intelligence in psychology, many ideas about what intelligence is and what it might be useful for, but a lack of broad consensus.


One view, which to me seems too narrow, is that there is, at root, one kind of intelligence—the type that is measured by IQ tests—and that this underlies and supports everything else. On this view, if AI can capture this fundamental problem-solving ability, then it could apply it to solve all sorts of challenges that we normally consider as requiring human intelligence.


Another point of view is that intelligence tests measure just one facet of human intelligence and that what matters more broadly is not our ability to solve certain kinds of puzzles but our capacity to perform successfully in the real world in terms of the day-to-day skills and abilities that allow us to survive and thrive. According to this view, emotional and social intelligence may be more critical than the logico-mathematical variety when it comes to succeeding in our highly complex social worlds.


A related view would emphasise that different kinds of intelligence underlie the wide range of human talents, and that specialised intelligences allow the most exceptional to display dazzling skill. These include the intelligences underlying creativity in music and art, or the physical intelligence on display in sports, acrobatics, or dance. 


These different kinds of intelligence might have some overlap, being pieced together from many different component abilities, but each different shade of intelligence may be a different mix. If this multiple intelligences view is correct, then AI may progress at different speeds in different domains of intelligence rather than taking broad strides with each advance towards a machine version of general intelligence.


How are we to distinguish between these approaches when, as we have noted, such widely disparate views can each potentially account for much of the evidence from intelligence testing?


Differential psychology, which we have been exploring here, is a form of behavioural science that seeks to understand people by looking at what they do, for example, their performance on intelligence tests. But psychology, and more broadly cognitive science, has other tools up its sleeve. One is neuroscience. If we can only get a partial understanding of human intelligence by looking at behaviour, we can add to this by examining the substrates in which it arises; that is, in our bodies and, particularly, in our central nervous systems and brains.


Another important approach is theoretical psychology, which looks to understand how the mind works at a functional level. In other words, it tries to build theories that decompose the mind into different parts, explain how each part operates, and how they come together to create the whole. Theoretical psychology is informed by both the science of human behaviour and by neuroscience and seeks to build on insights generated by both sources of evidence. Theories of mind are, inevitably, very complex. Therefore, they are often systematised and tested using computer modelling, just as we use computer models to understand other complex systems, such as the weather and the economy. We can evaluate these models by testing their ability to explain and predict human behaviour, and by their consistency with our understanding of how brains work.


For now, maybe we can agree that we have learned something about the nature of intelligence by looking at both the science of human intelligence and some initial efforts to replicate it in AI, but there is still much more to discover.


NOTES


1 Freidenberg, J. (2010). Artificial Psychology. London: Psychology Press.

2 Franklin, S. (1995). Artificial Minds. Cambridge, MA: MIT Press, and Braitenberg, V. (1986). Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT Press.

3 For a history of automata, see Kang, M. (2011). Sublime Dreams of Living Machines: The Automaton in the European Imagination. Cambridge, MA: Harvard University Press. Also look at https://themadmuseum.co.uk/history-of-automata/

4 Hollings, C., Martin, U., & Rice, A. (2018). Ada Lovelace and the analytical engine. Retrieved from https://blogs.bodleian.ox.ac.uk/adalovelace/2018/07/26/ada-lovelace-and-the-analytical-engine/

5 Wiener, N. (1948/1965). Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press.

6 Ashby, W. R. (1952). Design for a Brain: The Origin of Adaptive Behaviour. London: Chapman and Hall.

7 McCarthy, J. et al. (1956). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Retrieved from http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf

8 Simon, H. A. (1960). The New Science of Management Decision. New York: Harper & Row. Quote on p. 38.

9 Quoted in Darrach, B. (1970). Meet Shakey, the first artificial person. Life Magazine, 20th November 1970, 58–68, p. 58c.

10 Ibid, p. 68.

11 Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.

12 Searle, J. (1990). Is the brain’s mind a computer program? Scientific American, 262(1), 20–25. For some possible replies, see Churchland, P. M., & Churchland, P. S. (1990). Could a machine think? Scientific American, 262(1), 26–31.

13 Ibid, p. 27.

14 Aristotle. (350 BCE). De Anima (On the Soul) (J. A. Smith, Trans.). Retrieved from https://classics.mit.edu/Aristotle/soul.1.i.html

15 Descartes, R. (1641/1984). Meditations 1 & 2 (J. Cottingham, Trans.). Boulder: University of Colorado. Retrieved from https://rintintin.colorado.edu/~vancecd/phil201/Meditations.pdf

16 See SkySports. (2011). Testing Ronaldo to the limits. Retrieved from https://youtu.be/z3tnhgGzAs0?si=8zMnp2EkvnZydwje

17 Binet, A., & Simon, T. (1911). A Method of Measuring the Development of the Intelligence of Young Children. Lincoln, IL: Courier Company.

18 Ibid.

19 Gould, S. J. (2006). The Mismeasure of Man (Revised and Expanded). New York: W. W. Norton & Company.

20 Herrnstein, R. J., & Murray, C. (1994). The Bell Curve: Intelligence and Class Structure in American Life. New York: Free Press.

21 Wechsler, D. (1958). The Measurement and Appraisal of Adult Intelligence. Baltimore, MD: Williams & Wilkins Co.

22 Sternberg, R. J. (1985). Beyond IQ: A Triarchic Theory of Intelligence. Cambridge: Cambridge University Press.

23 Gardner, H. (2006). Multiple Intelligences: New Horizons. New York: Basic Books.

24 Thorndike, E. L. (1920). Intelligence and its uses. Harper’s Magazine, 1st January 1920.

25 Salovey, P., & Mayer, J. D. (1990). Emotional intelligence. Imagination, Cognition & Personality, 9, 185–211.

26 Damasio, A. R. (1994). Descartes’ Error: Emotion, Reason and the Human Brain. New York: Random House.

27 Goleman, D. (1995). Emotional Intelligence: Why It Can Matter More Than IQ. New York: Bantam Books.

28 Kahneman, D. (2011). Thinking, Fast and Slow. New York: Penguin Books.

29 Spearman, C. (1904). “General intelligence,” objectively determined and measured. The American Journal of Psychology, 15, 201–292.

30 Thomson, G. H. (1916). A hierarchy without a general factor. British Journal of Psychology, 8, 271–281.

31 See review in Conway, A. R. A., & Kovacs, K. (2015). New and emerging models of human intelligence. Wiley Interdisciplinary Reviews: Cognitive Science, 6(5), 419–426.


