People high in creative intelligence are strong in discovering, creating, and inventing ideas and products.
The History of Intelligence Testing
IQ testing as we know it today has evolved from a century of research and speculation. In the latter half of the 19th century, Sir Francis Galton, a scientist and cousin of Charles Darwin, began to speculate about why there seemed to be different limits to people's mental abilities. Although he had displayed precociousness during his childhood - thanks to intensive lessons from his eldest sister - his brightest peers eventually caught up with him, and he never excelled academically in comparison to them. He wondered why he, who had such a head start in his education, was still unable to keep pace with the best and the brightest.
A Theory of Hereditary Genius
Reading Darwin's The Origin of Species further fueled Galton's idea of individual differences in mental ability. Galton applied Darwin's ideas and the latest statistical concepts to create his own theory of hereditary genius. Unlike the prevailing ideas of the time, he speculated that people are not all born alike, mere puppets to be molded by environmental forces. Rather, he believed that individuals have differences in their mental abilities, which can be passed on to successive generations. He based this theory on his research, which showed that great ability - be it intellectual, musical, artistic, or in other realms - seems to run in families. Some people seemed to be destined for greatness and others for failure. Just as muscle development is limited by body type, Galton believed that mental ability could reach a threshold determined by biology, and that thresholds differed between individuals. The idea of a biological basis for mental differences was extremely new and controversial at the time, but today it has become a basic principle in the study of the human mind.
Galton's Theory Heads in a Controversial Direction
Galton used his theory of hereditary genius as an argument in support of his theory of eugenics. He advocated the development of a genetically superior breed of humans in order to improve society. To do that, however, he needed to know which people would be likely to pass on high intelligence to their children. Since most people did not reach the height of their potential until middle or old age, a technique was needed for measuring raw intellect earlier in life. Thus, intelligence testing was born.
Galton's controversial ideas earned him a notoriety that became much more prominent after the Nazis used similar logic to justify their own eugenics project. Although Galton's goal was to increase the proportion of the population reaching the upper limits of intelligence (not to eliminate the lower ranges), it can't be denied that the implications of any eugenics project are dangerous. Galton's reputation as an intelligence researcher suffered because he advocated eugenics, and other, less controversial intelligence researchers are more commonly credited with developing the concept of intelligence.
A student of Galton's, James McKeen Cattell, attempted to introduce widespread intelligence testing to America in 1890 and the practice nearly took off. However, research at the time indicated that there was little relationship between school performance and scores on Cattell's test. This finding damaged Cattell's reputation and the status of mental testing in general.
Alfred Binet - Researcher and Critic
Alfred Binet of France began research on children and testing early in the 1900s and caught the attention of school officials in Paris. They asked him to design an intelligence test to help identify children who needed extra guidance in school. His goal in designing the test was to create an objective measurement of intelligence. Even though his test was very well researched, Binet was aware of the potential for misuse, and was reluctant to place a stigmatizing label on children who scored low on his test.
Binet viewed intelligence as malleable and believed that it could be improved with work. Galton and others did not share these views; they felt that intelligence was a stable, heritable factor, and their goal was to identify subsets of the human population with high intellect in order to eventually raise the overall level of intelligence.
In 1905, Binet and his student, Theodore Simon, devised the first modern system for testing intelligence. Scoring was based on standardized, average mental levels for various age groups. In 1916, Lewis Terman of Stanford University expanded it and released it in the United States. The idea that a test could determine a child's "mental age" became enormously popular.
Just before the First World War, a German psychologist named Wilhelm Stern suggested a better way of expressing results than by mental age alone. Stern determined his results by finding the ratio between the subject's mental age and their chronological age. Thus, the concept of the intelligence quotient (IQ) was born: IQ equals mental age divided by chronological age, multiplied by 100. What we now call the Stanford-Binet Intelligence Test adopted Stern's ratio technique, and the test has become the gold standard against which other IQ tests are measured. The ratio formula, however, was not very useful for adults, since raw scores start to level off around the age of 16. Although Stern's method for determining IQ is no longer in common use, the term IQ is still used to describe the results of several different kinds of intelligence tests. Today, an average IQ score is considered to be 100.
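Stern's ratio can be sketched in a few lines of code (the function name and the example ages here are illustrative, not part of any historical test):

```python
def ratio_iq(mental_age, chronological_age):
    """Stern's ratio IQ: mental age divided by chronological age, times 100."""
    return mental_age / chronological_age * 100

# A 10-year-old performing at the level of an average 12-year-old:
print(ratio_iq(12, 10))  # 120.0

# A child performing exactly at their own age level scores 100:
print(ratio_iq(8, 8))  # 100.0
```

The leveling-off problem is visible in the formula itself: once mental-age scores stop growing around 16, the denominator keeps increasing with age while the numerator does not, so adult ratio IQs would drift downward for no meaningful reason.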
Wechsler's Standard Deviation Technique
Wechsler, creator of the IQ tests most widely used today (the WAIS and WISC), devised a system of calculating IQ based not on mental and chronological age, but on the deviation of a score from the norm for the test-taker's age group. Thus the "deviation IQ" replaced the "ratio IQ". Statisticians quantify the spread of scores with a number called the standard deviation (SD). The standard deviation is a measure of variability in the sample or population - in other words, the spread of the scores and their distance from the average.
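The deviation-IQ idea can be sketched as follows. The function name and the example numbers are illustrative; the convention of rescaling to a mean of 100 with an SD of 15 is the one used by modern Wechsler scales:

```python
def deviation_iq(raw_score, norm_mean, norm_sd, mean=100, sd=15):
    """Deviation IQ: express a raw score in standard-deviation units
    from the norm group, then rescale to a mean of 100 and SD of 15."""
    z = (raw_score - norm_mean) / norm_sd  # distance from the norm in SDs
    return mean + sd * z

# A raw score one SD above the norm-group average maps to an IQ of 115:
print(deviation_iq(60, norm_mean=50, norm_sd=10))  # 115.0

# A raw score exactly at the norm-group average maps to 100:
print(deviation_iq(50, norm_mean=50, norm_sd=10))  # 100.0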
Use of IQ Testing in America
During the drafting of soldiers during the First World War, the military needed a quick way to classify men into ranks. In order to do this, they were asked to take a hastily-created intelligence test, which was essentially based on the research of a man named Arthur Otis. Based on their results, soldiers were placed into different ranks. Within two short years, nearly two million men had taken the Army intelligence test. In reviewing the data from the tests, however, an educational bias in the test came to light, which at the time was not interpreted as a problem, but rather as a "menace of the feeble minded" - the average recruit had a mental age of just 13 years old. Research now shows that the problem was one of measurement; the IQ test was actually more a measure of education level than one of raw intelligence. As a result of the extensive use of intelligence testing by the military, IQ testing became part of American culture. Soon, the tests were not just used by the military, but also by companies when deciding whom to hire, as well as by school systems across the country.
Some Issues in Intelligence Testing
Multiple Versus Singular Intelligence
A popular debate in research on intelligence has been whether intelligence can be said to be a single, uniform entity, or as being made up of many different factors that can vary independently of one another. One individual, for instance, could be excellent in verbal skills, while completely failing to grasp simple mathematical processes. The foremost modern researcher in this topic is a man from Harvard named Howard Gardner, who proposes that there are eight different types of intelligence. These include Auditory-Musical, Logical-Mathematical, Verbal-Linguistic, Visual-Spatial, Bodily-Kinesthetic, Interpersonal, Intrapersonal and Naturalist intelligences. Also currently under consideration is an Existential intelligence. Theories endorsing multiple intelligences tend to encourage acceptance of other forms of success besides academic performance. Gardner believes that intelligence is too narrowly defined and that the current available tests are not complete since they tend to only test Verbal-Linguistic and Logical-Mathematical intelligences. That being said, however, these two intelligence types could be seen as prerequisites for academic success.
Race Discrimination
IQ tests can have a significant impact on the fate of the person taking them. They can be the determining factor in being hired, getting accepted into a good school, or being accepted as an immigrant in a new country, among other high-pressure circumstances. Therefore, the implications of cultural biases in tests are potentially very damaging. Throughout their history, IQ tests have been used to justify the superiority or inferiority of various races. Most notably, this has resulted in the negative stereotyping of African-American individuals. Due to culturally specific test questions, some populations often score lower on average than other populations. This issue has sparked much debate over the years. Fortunately, today there are more sophisticated test-development techniques and laws in place to protect minorities from the effects of unfair tests. The problem isn't yet solved, but people are more aware of the potential for bias, whether intentional or unintentional, and therefore are less likely to draw final conclusions based solely on the results of IQ tests.
The Flynn Effect
According to scholar James R. Flynn, over the years scores on IQ tests are increasing dramatically across at least 20 different cultures. This couldn't be attributed to genetic changes, as the increase has occurred too quickly. This means that there are very likely environmental factors, as opposed to genetic factors, that are responsible for this improvement in IQ scores. Some of the possible reasons for improved scores could include: Improved nutrition.Better parenting in early development.Better educational systems.Today's complex society better prepares people for this type of test.The evidence that environmental factors seem to be at work in the increasing IQ scores raises an important question. Requiring adolescents to take a test in order to graduate from high school, which is a very popular method of assessment in schools and other settings today, is a high-stakes situation. Those individuals who didn't have access to good nutrition, good educational systems, or rich and mentally stimulating environments will be at a disadvantage. Is it fair to punish those individuals for having fewer resources? Unfortunately, this is what can happen when people are forced to take part in high-stakes testing.
IQ Testing: Useful Enough to Justify? Controversy has made IQ testing a very touchy subject. As a result of the potential problems associated with mental testing, it fell out of favor in recent decades. Many people have been concerned about the dangers of creating a self-fulfilling prophecy. Some consider IQ to be a stable trait, and should a child, adolescent or even an adult be labeled as "low IQ", they may then be treated differently. This is why Binet was reluctant to label children back in 1910. IQ Tests can, however, be useful and fair measures of intelligence when used appropriately. Scores can help identify children who need more attention in school, and can be good predictors of academic and professional success. It is the permanent labeling and subsequent stereotyping of individuals as having a "low IQ" that should be discouraged.
What This Classical IQ Test Measures Humans have hundreds of specific mental abilities. Some of these abilities can be assessed more easily and accurately than others, and can then be used reliably as predictors of academic achievement. This test measures mental abilities that are positively correlated with many other skills, as well as academic performance. Her score will be a strong, though not perfect, indication of her true potential in terms of the underlying abilities. This IQ test measures several factors of intelligence - logical reasoning, math skills, language abilities, spatial relations skills, knowledge retained and the ability to solve novel problems. It doesn't take into consideration social or emotional intelligence. This test does not measure all of her potential - no test can do that accurately. Since different IQ tests focus on different factors, her performance will change from test to test. For example, someone who scores 130 on Raven's Progressive matrices or on our Culture-Fair IQ test (both of which measure general intelligence while minimizing the cultural and educational influences) can score 115 on the Wechsler scale, which has both verbal and performance components, the latter of which has been found to be influenced by schooling. The same person may score only 98 on our Verbal IQ test, which is focused purely on verbal skills. The Classical IQ Test is in its third phase of revision. The total sample size for all the validation studies thus far is over 1 million. The changes made subsequent to each validation study have resulted in an extremely reliable measure of intelligence. The Cronbach's alpha (a commonly used measure of reliability) for this IQ test is .8756. The Queendom Classical IQ Test is a consistent, dependable measure of intelligence. Scores on this test were also compared to other, well-established measurements of IQ. 
Comparisons between the test and others generated validity scores, which provide information about whether the test is actually measuring intelligence. The dual concepts of reliability and validity are two essential aspects of test development. The Classical IQ Test- 3rd Revision well exceeds the accepted standards for these two measurements. References available upon request
|