An intelligence quotient (IQ) is a total score derived from a set of standardised tests designed to estimate human intelligence. The psychologist William Stern coined the abbreviation "IQ" for the German term Intelligenzquotient, his name for a scoring method for intelligence tests at the University of Breslau that he advocated in a 1912 book.
IQ was traditionally calculated by dividing a person's mental age, as determined by an intelligence test, by the person's chronological age, both expressed in years and months. The resulting fraction (quotient) was multiplied by 100 to obtain the IQ score. For modern IQ tests, the raw score is transformed to a normal distribution with a mean of 100 and a standard deviation of 15. As a result, approximately two-thirds of the population scores between IQ 85 and 115, with about 2.5 percent each scoring above 130 and below 70.
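The two scoring schemes above can be sketched in a few lines of Python. This is a minimal illustration, not any publisher's actual scoring procedure: `ratio_iq` computes the classical mental-age quotient, and `normal_cdf` uses the error function to recover the population percentages quoted for the modern mean-100, SD-15 scale.

```python
import math

def ratio_iq(mental_age_months: float, chronological_age_months: float) -> float:
    """Classical ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age_months / chronological_age_months * 100

def normal_cdf(x: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Cumulative probability of a normal distribution, via the error function."""
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# A 10-year-old performing at a 12-year-old's level has a ratio IQ of 120.
print(ratio_iq(12 * 12, 10 * 12))        # 120.0

# Share of the population between 85 and 115 (one SD either side of the mean):
print(normal_cdf(115) - normal_cdf(85))  # ~0.683, i.e. roughly two-thirds

# Share above 130 (two SDs above the mean):
print(1 - normal_cdf(130))               # ~0.023, i.e. about 2.5 percent
```

By symmetry of the normal distribution, the share scoring below 70 equals the share scoring above 130.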
Intelligence test scores are estimates: given the abstract nature of the concept of "intelligence," a concrete measure of it is impossible to achieve, unlike distance and mass. Nutrition, parental socioeconomic status, morbidity and mortality, and the prenatal environment have all been found to be associated with IQ scores. Although the heritability of intelligence has been studied for nearly a century, there is still debate about the significance of heritability estimates and the mechanisms of inheritance.
IQ scores are used for educational placement, diagnosis of intellectual disability, and evaluation of job applicants. In research contexts, they have been studied as predictors of job performance and income. They are also used to study distributions of psychometric intelligence in populations and the correlations between it and other variables. Since the early twentieth century, raw scores on IQ tests for many populations have been rising at an average rate of about three points per decade, a phenomenon known as the Flynn effect. Investigation of the different patterns of increase in subtest scores can also inform current research on human intelligence.
Reliability and validity
Psychometricians generally regard IQ tests as having high statistical reliability. Reliability refers to the measurement consistency of a test: a reliable test yields similar results on repetition. Although test-takers may obtain somewhat different scores when taking the same test on different occasions, and may obtain different scores when taking different IQ tests at the same age, IQ tests generally show high overall reliability. Like all statistical quantities, any estimate of IQ carries a standard error that indicates the degree of uncertainty in the estimate. For modern tests, the 95 percent confidence interval can be as small as about ten points, and the reported standard error of measurement can be as low as about three points. Reported standard error may be an underestimate, however, because it does not account for all sources of error.
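The relationship between the standard error of measurement and a confidence interval can be sketched as follows. This is an illustrative calculation only, assuming the usual normal-theory interval of roughly ±1.96 standard errors around the observed score; actual test manuals may use different procedures.

```python
def confidence_interval(observed_score: float, sem: float, z: float = 1.96):
    """Interval of plausible true scores: observed score plus/minus z times the SEM.

    z = 1.96 corresponds to a 95 percent confidence level under a normal model.
    """
    return (observed_score - z * sem, observed_score + z * sem)

# With a standard error of measurement of 3 points, an observed score of 100
# gives a 95 percent interval roughly 12 points wide:
low, high = confidence_interval(100, 3)
print(round(low, 2), round(high, 2))  # 94.12 105.88
```

A smaller SEM narrows this interval, which is why the interval can approach ten points only on the most reliable tests.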
A person's IQ test result can be affected by external factors such as low motivation or high anxiety. For persons with very low scores, the 95 percent confidence interval may exceed 40 points, which can compromise the accuracy of diagnoses of intellectual disability. Likewise, high IQ scores are substantially less reliable than scores near the population median. Reports of IQ scores much above 160 are generally regarded as doubtful.
Reliability and validity are distinct concepts. While reliability reflects reproducibility, validity refers to lack of bias: a biased test does not measure what it purports to measure. While IQ tests are generally considered to measure some aspects of intelligence well, they may fail to serve as accurate measures of broader conceptions of intelligence, such as creativity and social intelligence. For this reason, psychologist Wayne Weiten argues that their construct validity should be regarded as moderate rather than high. In his words, "IQ tests are valid measures of the kind of intelligence necessary to do well in academic work. But if the purpose is to assess intelligence in a broader sense, the validity of IQ tests is questionable."
Some scientists have questioned the usefulness of IQ as a measure of intelligence altogether. In The Mismeasure of Man (1981, expanded edition 1996), evolutionary biologist Stephen Jay Gould compared IQ testing to the now-discredited practice of determining intelligence through craniometry, arguing that both rest on the fallacy of reification, "our tendency to convert abstract concepts into entities." Gould's argument provoked a great deal of debate, and the book was listed as one of Discover Magazine's "25 Greatest Science Books of All Time."
Along the same lines, critics such as Keith Stanovich acknowledge that IQ test scores can predict some kinds of performance, but contend that defining intelligence solely on the basis of IQ test scores neglects other significant components of mental ability. Robert Sternberg, another prominent critic of IQ as the primary measure of human cognitive abilities, argued that restricting the concept of intelligence to the measure of g does not adequately account for the various kinds of skills and knowledge that lead to success in human society.
Despite these concerns, clinical psychologists generally regard IQ scores as having sufficient statistical validity for a variety of clinical purposes.
A child's IQ can change to some degree over the course of development. In one longitudinal study, the mean IQ scores of tests at ages 17 and 18 correlated at r=0.86 with the mean scores of tests at ages five, six, and seven, and at r=0.96 with the mean scores of tests at ages 11, 12, and 13.
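The correlation coefficients above (r=0.86, r=0.96) measure how tightly two sets of paired scores move together, on a scale from -1 to 1. The following sketch computes Pearson's r from first principles; the score lists are made up purely for illustration and are not data from the study cited.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired lists of scores."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical early-childhood vs. late-adolescent scores for five test-takers.
# The rank order is largely preserved, so r is close to 1.
early = [92, 105, 118, 87, 110]
late = [95, 108, 115, 90, 112]
print(round(pearson_r(early, late), 3))
```

A value near 1 means children tend to keep their relative standing between the two testing ages, even if individual scores shift.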
For decades, IQ testing manuals and textbooks stated that IQ declines with age after a person reaches maturity. Later researchers pointed out that this phenomenon is related to the Flynn effect and is in part a cohort effect rather than a true ageing effect. A number of studies of IQ and ageing have been conducted since the norming of the first Wechsler Intelligence Scale drew attention to IQ differences between adult age groups. The current consensus is that fluid intelligence declines with age after early adulthood, while crystallised intelligence remains intact. Both cohort effects (test-takers' birth year) and practice effects (test-takers taking the same form of IQ test more than once) must be controlled to obtain accurate data. It remains unclear whether any lifestyle changes can help preserve fluid intelligence into older ages.
The precise age at which fluid intelligence or crystallised intelligence peaks remains unknown. Cross-sectional studies suggest that fluid intelligence in particular peaks at a relatively young age (often in early adulthood), while longitudinal data indicate that intelligence remains stable until mid-adulthood or later, after which it appears to decline gradually.
IQ classification is the practice by IQ test publishers of dividing IQ score ranges into categories with labels such as "superior" or "average." Historically, attempts to classify human beings by general ability based on various kinds of behavioural observation predate IQ classification. Other kinds of behavioural observation remain important for validating classifications based primarily on IQ scores.
Several neurophysiological parameters, such as the ratio of brain weight to body weight and the size, shape, and activity level of distinct areas of the brain, have been linked to human intelligence. The size and form of the frontal lobes, the quantity of blood and chemical activity in the frontal lobes, the total amount of grey matter in the brain, the overall thickness of the cortex, and the glucose metabolic rate are all factors that may influence IQ.