You know, I often get told, “Thoughty2, you’re so intelligent!” To which I always reply, “Thanks Mum, but really, what is intelligence anyway?” Which sounds kind of profound, even if I do say so myself.
Turns out, though, it’s a question that’s far from straightforward to answer. We all seem to have an internal intelligence radar, that part of us that unconsciously decides, “She’s a genius” or “He probably can’t wipe his own arse.” But that level of subjectivity isn’t useful when trying to establish a standard definition of intelligence.
You might see Albert Einstein as your intellectual hero, while I might regard Ronald McDonald as the greatest philosopher of our times. Which I don’t. Anymore.
Unsurprisingly, I’m not the first person to spot this problem. People have been trying to define intelligence for centuries, but so far, nobody has come up with a definition that everyone can agree on. And this is a hot topic of debate, the kind that gets psychologists foaming at the mouth and shouting stuff about Oedipus.
It’s almost as controversial as the testing of intelligence, but more on that a bit later. The Oxford Dictionary defines intelligence as “the ability to acquire and apply knowledge and skills”. Sounds pretty good, in my opinion.
If it were up to me, we’d all shake hands and break for an early lunch, but there’s no pleasing some people. Over the years, multiple social scientists have offered competing definitions of intelligence. It’s been described as “the ability to deal with cognitive complexity” by some and “goal-directed adaptive behaviour” by others. Physicist and computer scientist Alex Wissner-Gross even defined intelligence through the power of mathematics using this rather nifty equation, where F is the force of intelligence, T is the strength to maintain future actions, and S is the diversity of future options over the time horizon tau. Unfortunately, you have to be a genius to actually understand the equation in the first place.
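For the curious, the equation in question - Wissner-Gross’s “causal entropic force”, which treats intelligent behaviour as a push towards keeping as many future options open as possible - is usually summarised along these lines:

```latex
% Causal entropic force (Wissner-Gross), often summarised as:
%   F = T \, \nabla S_{\tau}
% where:
%   F    - the "force" of intelligent behaviour
%   T    - the strength available to maintain future actions
%   S    - the diversity (entropy) of accessible future options
%   \tau - the time horizon over which those options are counted
F = T \, \nabla S_{\tau}
```

In plain English: intelligence, on this view, acts so as to maximise future freedom of action - which is why a cornered chess engine and a cornered cat both behave as if they hate having their options taken away.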
Some brave souls have tried to build a standard view of intelligence. In 1994, professor of educational psychology Linda Gottfredson prepared a statement for the Wall Street Journal claiming to represent the mainstream science on intelligence. She invited 131 experts to sign the statement, including specialists in anthropology, behaviour genetics, mental retardation, neuropsychology, sociology, and a bunch of other jobs that sound interesting at dinner parties.
Fifty-two university professors signed the statement, which laid out 25 different conclusions about intelligence. Sounds convincing, but the declaration was later criticised by several experts and psychologists, not least because it supported the idea that intelligence is genetically determined by race and included such pearls of racist wisdom as: ‘Genetically caused differences in intelligence are not necessarily irremediable’. These days, there are so many different definitions for intelligence that when researchers Shane Legg and Marcus Hutter tried to catalogue all of them, they gave up counting at 70, but said there were many more.
You might be wondering why it’s so important to define intelligence. It’s not like you can put it on a sandwich, so who cares if we disagree about what it is? Well, one reason there’s an argument about this is that people just like to be right.
And another is that recently, people have become particular about defining intelligence because they want to be able to better define artificial intelligence. At some point, possibly in the not-too-distant future, someone will create a machine with true artificial intelligence. But, to make sure they deserve all the fame, fortune, and cyberporn that comes with being the world’s most famous nerd, the machine’s intelligence will need to be verified; it will need to pass a test.
Since 1950, the accepted standard has been the Turing Test, suggested by the father of computer science himself, Alan Turing. Turing’s simple idea, which he called the Imitation Game, was that a machine could be considered intelligent if someone talking to it was unable to work out whether they were communicating with a human or a computer. In 2014 it was claimed the Turing Test had been passed by a computer program called Eugene Goostman, which simulates a 13-year-old Ukrainian boy.
The claim has been disputed, though, which doesn’t surprise me. I once convinced my mate Dave that I was a 16-year-old schoolgirl named Gretchen, and no-one gave me a prize. Anyway, when theoretical computer scientist Scott Aaronson asked Eugene how many legs a camel has, Eugene answered: ‘Something between 2 and 4. Maybe, three?’ So at this point my best guess is that the 30% of judges he fooled were all just high. So the Turing Test probably still stands, but some say it won’t be complex enough to test full AI in the future.
And if we’re going to develop a new test, we’ll first need to agree on what counts as intelligence. Another reason it’s important to define intelligence is that it has consistently been used to make judgements about who should get what job, who should play what role in society, and even who should live and who should die. It began in the late nineteenth century with Charles Darwin’s cousin, Francis Galton.
If you’ve seen my video on the wisdom of crowds you’ll know that Galton had a particular fondness for oxen, and that he was interested in applying evolutionary theory to humans in, shall we say ‘interesting’ ways. He concluded that a person’s abilities were inherited, and theorised that intelligence could be predicted by examining an individual’s physical traits, like their reflexes, muscle grip, or head size. Of course, by those standards Andre the Giant would be a genius, so it’s no shock Galton found no correlation in the end.
He did, however, become certain that some humans are genetically superior to others, and that desirable and undesirable genetic traits could and should be controlled in humans through selective breeding. Basically, the human race would be better off if we got rid of the messy bits. This didn’t go down well with everyone, especially not with the less desirables, but that didn’t bother Galton.
In his own words, “There exists a sentiment, for the most part quite unreasonable, against the gradual extinction of an inferior race.” The idea of selective breeding in humans wasn’t new, but Galton was the one to give it a catchy name - eugenics - and it wouldn’t be long before it became closely tied to intelligence testing. At the turn of the 20th century, British psychologist Charles Spearman was grappling with the question of whether intelligence is one thing or a bunch of different talents and capabilities.
In 1904, he declared the existence of a general human intelligence he called the G-factor - something which, to this day, women claim men are unable to find. According to Spearman, people may have a range of natural talents, such as their skill with words, numbers, or spatial comprehension, but below all of these capabilities is a single intelligence that differs from one person to another. He came to this conclusion through a process called factor analysis, which uses various tests to identify correlations between different variables.
Spearman found that people who did well in one area of cognitive testing often did well in other areas of testing too. So, if you’re good with numbers, you’re more likely to be good with language as well. Another psychologist, L. L. Thurstone, rejected Spearman’s idea, suggesting that instead of one general intelligence, there were seven primary mental abilities: spatial ability, verbal comprehension, word fluency, perceptual speed, numerical ability, inductive reasoning, and memory.
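The kind of correlation Spearman’s argument rests on is easy to illustrate with a toy calculation - the scores below are entirely made up, just to show what “did well on one test, did well on another” looks like as a number:

```python
# Toy illustration of the correlations behind Spearman's g-factor.
# Scores are invented: eight people's results on two cognitive tests.
verbal  = [55, 62, 70, 48, 81, 66, 59, 74]
numeric = [50, 65, 72, 45, 85, 60, 62, 78]

def pearson(xs, ys):
    """Pearson correlation coefficient: +1 means perfectly in step."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(verbal, numeric)
print(round(r, 2))  # strongly positive: close to 1
```

Spearman’s factor analysis was considerably more sophisticated than this, but the core observation is the same: scores on very different-looking tests move together, which he took as the fingerprint of a single underlying G.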
This was where the study of intelligence split into two broad branches of thought: one which sees intelligence as a single, unifying pattern, and another that sees intelligence as multiple separate streams. Spearman’s idea of a single general intelligence that can be measured and scored in the form of a number may sound familiar to you. If you’ve been through a conventional schooling system, there’s a good chance you’ll have done an IQ test before - a series of assessments designed to establish your Intelligence Quotient.
If you have an IQ of 140 or above, you’re regarded as a genius, and if your IQ is around 100, you have what’s thought of as average intelligence. For the origins of the IQ test, we need to go back to 1905, the year after Spearman introduced his G-factor to the world. In France, two psychologists, Alfred Binet and Theodore Simon, devised the Binet-Simon test, which was designed to identify children who were struggling in school and needed to be placed in special education programs.
Building on the idea of a G-factor, Binet and Simon’s tests evaluated verbal reasoning, working memory, and visual-spatial skills to come up with a mental age which could be used to find and help cognitively slower children. Although the Binet-Simon test was created out of a desire to help children who would otherwise have been sent to mental asylums, it wouldn’t be long before it was used as a tool to serve less compassionate ends. In 1912, German psychologist William Stern built on Binet and Simon’s work, coining the phrase ‘intelligence quotient’ and suggesting a new way to calculate it - by dividing a person’s mental age by chronological age and multiplying by 100.
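Stern’s ratio is simple enough to sketch in a few lines of Python (a toy illustration of the historical formula, not any modern psychometric tool - modern IQ tests score against a population distribution instead):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Stern's original 1912 ratio IQ: mental age / chronological age * 100."""
    return mental_age / chronological_age * 100

# A ten-year-old performing at the level of a typical twelve-year-old:
print(ratio_iq(12, 10))  # 120.0
# A child performing exactly at their age level scores the average:
print(ratio_iq(8, 8))    # 100.0
```

The multiply-by-100 step is just there to turn an awkward fraction into a tidy whole number, which is how 100 came to mean “average”.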
Nothing wrong with that. But when American psychologist Lewis Terman revised the Binet-Simon test and began working with the American government to change army selection criteria, things started to go pear-shaped. To identify the correct candidates for officer training, the American military tested 1.75 million men during World War I in the first mass testing of human intelligence. By this time, eugenics - that fun fest of selective breeding - had grown in popularity. So, the idea of being able to measure someone’s value to society with a number fitted like a hand in a latex glove.
The military testing was problematic for many reasons, varying from inconsistent assessment across different military camps to the fact that the tests were built on unproven assumptions, like the idea that intelligence is an inherited trait. The tests also included questions related more to American culture than general intelligence. This placed many of the recruits, specifically immigrants to the US, at a disadvantage as they had little exposure to American society and often didn’t speak much English.
As a result, men from minority groups scored poorly, which was used as evidence to support the eugenics argument that intelligence was partly determined by race, and psychologists used the data to build a false hierarchy of ethnic groups based on apparent intellect. Soon, this thinking began to influence public policy too. Henry H Goddard, a psychologist who had worked with Lewis Terman in bringing the IQ test to America, was a eugenicist but, unlike Francis Galton who favoured selective breeding to enhance positive traits, Goddard wanted to eliminate undesirable characteristics too.
He called people who had done poorly in IQ tests “feeble-minded” and argued that, because intelligence was hereditary, they should be prevented from having children. And so it was that states like Virginia passed legislation to allow the forced sterilisation of people with low IQ scores, with the Supreme Court upholding the legality of the laws. By the end of these campaigns, more than 64,000 people had been sterilised in the US based on their IQ scores, with 20,000 in California alone.
The full madness of this approach was made abundantly clear when the Nazis were so impressed by the results in California that they came to ask for advice on the best ways to prevent the birth of people they deemed unfit. Call me old-fashioned, but if a Nazi arrived at my door to tell me I was doing a great job, I’d probably consider a career change. Anyway, the Nazis clearly went home brimming with ideas, because in the years that followed they used pseudo IQ tests and other assessments to justify the sterilisation of more than 400,000 people, and the murder of another 300,000 who were determined to be ‘life unworthy of life’.
Back in the US, the horror of what was happening in Germany led to a rapid decline in the popularity of eugenics - there’s nothing like genocide to take the shine off an ideology, is there? IQ tests were still considered relevant, but the way they were used changed. Scientists began to question the structure of the tests, proving that cultural and environmental factors had a significant impact on test results.
They showed, for example, that IQ test scores improved in successive generations. This was known as the Flynn Effect, and it was widely attributed to factors such as better education, nutrition, and healthcare. Interestingly enough, this generation on generation improvement in IQ test performance seems to have begun to reverse over the last couple of decades according to a whole raft of different studies across Europe and Australia.
So yeah, it turns out people really are getting stupider - something which should be abundantly clear to anyone with an internet connection. Over the second half of the twentieth century, it also became increasingly accepted that IQ tests included unconscious cultural biases that were skewing test scores. These days IQ tests are still pretty similar in terms of questions and design to the original tests, but we have more sophisticated ways of making sure they’re free of bias.
But they’re still used in many public systems, like schooling and the civil service, sometimes in unexpected ways. In some areas of the US, for example, new police recruits may be turned down unless they score below a specific IQ, because anyone scoring higher is expected to get bored. Regardless, IQ tests remain controversial and it's no surprise that psychologists have continued to search for alternative ways of measuring intelligence.
Perhaps the best-known example is the Theory of Multiple Intelligences developed by Harvard professor Howard Gardner as an alternative to the notion of one general intelligence. Echoing the spirit of L. L. Thurstone’s work, Gardner suggested that we have several different intelligences that operate fairly independently of each other. He pointed to eight types:

Verbal-linguistic intelligence, which describes your ability with words and language;
Logical-mathematical intelligence, which supports your way with numbers and reasoning;
Visual-spatial intelligence, which relates to how you understand maps and other visual stuff;
Musical intelligence, which is about, well, music;
Naturalistic intelligence, which is how well you can comprehend the natural world, like different plant species and the weather;
Bodily-kinesthetic intelligence, the ability to use your body intelligently - for a real-world example of this, watch out for an upcoming video in which I will communicate the entire message through interpretive dance;
Interpersonal intelligence, which describes how well you read and engage with other people; and
Intrapersonal intelligence, which has to do with your capacity for self-awareness.

Later, Gardner also added existential intelligence, which centres on your ability to think about and answer the more profound questions about, like, life and stuff.
Intuitively, the idea of multiple intelligences makes sense. It helps explain how a person can be a musical ignoramus but have the navigational abilities of a homing pigeon, or why my mate Dave got arrested for indecency despite being pretty good at maths. This approach also fits well with some other types of intelligence, like the emotional intelligence theory first developed by Peter Salovey and John Mayer, then later popularised by Daniel Goleman.
Gardner’s theory of multiple intelligences has gained massive popularity in education systems around the world and has been applied in various industries, but critics say it lacks the scientific evidence to back it up. As another alternative, psychologist Robert Sternberg came up with the triarchic theory of intelligence. Sternberg agreed with Gardner that there’s more than one type of intelligence, but limited his selection to three: analytical intelligence, or problem-solving ability, creative intelligence, and practical intelligence.
He also argued that these types of intelligence contributed to real-world success, but criticisms of his theory point to the fact that there appears to be a correlation between these different intelligences, so if you’re high in one you’re likely to be high in the others. That, of course, takes us back to the start and the notion of a single general intelligence, G. So, if you’re ever worried that you’re not actually as smart as you think you are, don’t fret.
Nobody seems to have the faintest idea what intelligence is anyway, so you have a whole bunch of different versions to choose from, and you’re bound to be a genius in one of them. At the very least, you’ve proved you’re smart enough to watch brain-stretching YouTube videos, and that’s got to count for something.