Admissions Testing
One way of testing my augmented theory of successful intelligence is to show that the elements of the theory can be assessed rigorously and that the theory can be subjected to detailed construct validation.
Undergraduate Admissions
In previous research on undergraduate admissions, my colleagues and I showed that (a) it is possible to distinguish analytical, creative, and practical skills factor-analytically; (b) tests of these skills substantially and significantly increase prediction of academic success in college; (c) combining creative, practical, and wisdom-based test results with analytically based test results substantially reduces ethnic-group differences; and (d) applicants and their parents like the expanded tests because they enable a college or university to view an applicant more comprehensively, rather than merely through narrow standardized test scores.
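To give a concrete sense of the incremental-prediction claim in (b), the sketch below uses simulated data and hypothetical variable names; it is not the Rainbow Project analysis itself, only an illustration of how one can test whether creative and practical scores add explained variance in college GPA beyond a conventional test score.

```python
# Hypothetical sketch of an incremental-validity check (simulated data, not the
# actual study data): do creative and practical scores improve prediction of
# college GPA beyond a conventional standardized test score?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

# Simulated, standardized predictors: a conventional test score plus
# hypothetical creative and practical assessment scores.
conventional = rng.normal(size=n)
creative = 0.4 * conventional + rng.normal(scale=0.9, size=n)
practical = 0.3 * conventional + rng.normal(scale=0.95, size=n)

# Simulated criterion: first-year GPA influenced by all three skill measures.
gpa = (0.35 * conventional + 0.25 * creative + 0.20 * practical
       + rng.normal(scale=0.8, size=n))

# Baseline model: conventional test score only.
X_base = sm.add_constant(np.column_stack([conventional]))
base = sm.OLS(gpa, X_base).fit()

# Augmented model: add the creative and practical scores.
X_aug = sm.add_constant(np.column_stack([conventional, creative, practical]))
aug = sm.OLS(gpa, X_aug).fit()

# F-test for the increase in explained variance (delta R^2).
f_stat, p_value, df_diff = aug.compare_f_test(base)
print(f"R^2 baseline:  {base.rsquared:.3f}")
print(f"R^2 augmented: {aug.rsquared:.3f}")
print(f"Delta R^2 = {aug.rsquared - base.rsquared:.3f}, "
      f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

In this kind of analysis, a significant increase in R-squared for the augmented model over the baseline model is what is meant by saying that the added tests "substantially and significantly increase prediction."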
In research on achievement tests, we have shown that expanding the tests (in this case, Advanced Placement tests in psychology, statistics, and physics) to include new items assessing achievement analytically, creatively, and practically reduces ethnic-group differences. In other words, a consistent trend in both the ability and achievement research is that conventional standardized tests tend to magnify differences among ethnic groups relative to what is possible when tests are created on the basis of the augmented theory of successful intelligence.
I plan to continue my program of research on admissions testing. Currently, we have a grant from Cornell University to develop a test that could be used for graduate admissions in the behavioral and brain sciences. The test will be a simulation of the kinds of activities in which behavioral and brain scientists engage, such as reviewing articles and grant proposals and evaluating teaching. Our hope is that the skills we measure will be more relevant to predicting graduate and professional success than are those measured by a general ability test such as the GRE.
I also am interested in developing better and more sophisticated measures for undergraduate admissions, as well as in extending the work we have done to professional-school admissions, as we did some years back in a research project for the University of Michigan Business School.
Key References
Hedlund, J., Wilt, J. M., Nebel, K. R., Ashford, S. J., & Sternberg, R. J. (2006). Assessing practical intelligence in business school admissions: A supplement to the Graduate Management Admissions Test. Learning and Individual Differences, 16, 101–127.
Stemler, S. E., Grigorenko, E. L., Jarvin, L., & Sternberg, R. J. (2006). Using the theory of successful intelligence as a basis for augmenting AP exams in psychology and statistics. Contemporary Educational Psychology, 31(2), 344–376.
Stemler, S. E., Sternberg, R. J., Grigorenko, E. L., Jarvin, L., & Sharpes, D. K. (2009). Using the theory of successful intelligence as a framework for developing assessments in AP Physics. Contemporary Educational Psychology, 34, 195–209.
Sternberg, R. J. (1972). A decision rule to facilitate the undergraduate admissions process. College and University, 48, 48–53.
Sternberg, R. J. (1973). Cost–benefit analysis of the Yale admissions office interview. College and University, 48, 154–164.
Sternberg, R. J. (2003). Wisdom, intelligence, and creativity synthesized. New York: Cambridge University Press.
Sternberg, R. J. (2007). Rethinking university admissions in the 21st century. Perspectives in Education, 25(4), 7–16.
Sternberg, R. J. (2008a). Assessing students for medical school admissions: Is it time for a new approach? Academic Medicine, 83(10), October Supplement, S105–S109.
Sternberg, R. J. (2009a). The Rainbow and Kaleidoscope Projects: A new psychological approach to undergraduate admissions. European Psychologist, 14, 279–287.
Sternberg, R. J. (2010a). College admissions, beyond the No. 2 pencil. Washington Post, November 21, p. B3.
Sternberg, R. J. (2010b). College admissions for the 21st century. Cambridge, MA: Harvard University Press.
Sternberg, R. J. (2010c). The Rainbow Project: Using a psychological theory of intelligence to improve the college admissions process. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world. New York: Worth.
Sternberg, R. J. (2012a). College admissions: Beyond conventional testing. Change, 44(5), 6–13.
Sternberg, R. J. (2012b). College admissions assessments: New techniques for a new millennium. In J. Soares (Ed.), SAT wars: The case for test-optional college admissions (pp. 85–103). New York: Teachers College Press.
Sternberg, R. J. (2013). Character development: Putting it into practice in admissions and instruction. Journal of College and Character, 14(3), 253–258.
Sternberg, R. J., Bonney, C. R., Gabora, L., Karelitz, T., & Coffin, L. (2010). Broadening the spectrum of undergraduate admissions. College and University, 86(1), 2–17.
Sternberg, R. J., Bonney, C. R., Gabora, L., & Merrifield, M. (2012). WICS: A model for college and university admissions. Educational Psychologist, 47(1), 30–41.
Sternberg, R. J., Jarvin, L., & Grigorenko, E. L. (2011). Explorations of the nature of giftedness. New York: Cambridge University Press.
Sternberg, R. J., & The Rainbow Project Collaborators (2006). The Rainbow Project: Enhancing the SAT through assessments of analytical, practical, and creative skills. Intelligence, 34(4), 321–350.
Sternberg, R. J., The Rainbow Project Collaborators, & University of Michigan Business School Project Collaborators (2004). Theory-based university admissions testing for a new millennium. Educational Psychologist, 39(3), 185–198.
Sternberg, R. J., & Williams, W. M. (1997). Does the Graduate Record Examination predict meaningful success in the graduate training of psychologists? A case study. American Psychologist, 52, 630–641.
Graduate Admissions
(Taken from Sternberg, R. J. (2020). It’s time to stem malpractice in STEM admissions. Inside Higher Ed, https://www.insidehighered.com/views/2020/07/28/colleges-shouldnt-use-standardized-admissions-tests-alone-measure-scientific)
In a series of studies, we hypothesized that, whatever it is that college and university admissions tests measure, it is not central but rather peripheral to success in STEM education and later research (as well as teaching). So we designed a series of assessments that would measure STEM reasoning in particular. The first assessments included measures of skills in generating alternative hypotheses, generating experiments, and drawing conclusions from empirical data. These skills seemed to us to be at the heart of scientific thinking.
We presented students at Cornell University with test items directly measuring those scientific thinking skills in the domain of psychological science. We also presented tests of general academic thinking skills: inductive reasoning (number series and classification of letter sets) of the kinds found on conventional intelligence tests. We further asked the students for self-reports of their SAT scores.
The results suggested that, whatever it is that conventional standardized tests directly measure, it is not scientific thinking skills. In particular, we found that, statistically, the tests of scientific reasoning tended to cluster together into one factor and the tests of general academic thinking skills tended to cluster into another factor. This is not to say that skills measured by conventional admissions tests are irrelevant to STEM success; they just do not appear to be central to it. Relying on them in isolation in admissions can, in fact, be STEM malpractice.
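The sketch below illustrates the kind of two-factor clustering just described, using simulated scores and hypothetical task names rather than our actual Cornell data; it is meant only to show the analytic logic, not to reproduce the results.

```python
# Hypothetical sketch of factor-analytic clustering (simulated scores, not the
# actual study data): scientific-reasoning tasks load on one factor, general
# academic reasoning tasks on another, even when the latent abilities correlate.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n = 400

# Two correlated latent abilities: scientific reasoning and general academic reasoning.
latent = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]], size=n)
sci, gen = latent[:, 0], latent[:, 1]

# Hypothetical task scores: the first three draw on scientific reasoning,
# the last two on general academic (inductive) reasoning.
tasks = {
    "hypothesis_generation": 0.80 * sci + rng.normal(scale=0.60, size=n),
    "experiment_generation": 0.70 * sci + rng.normal(scale=0.70, size=n),
    "drawing_conclusions":   0.75 * sci + rng.normal(scale=0.65, size=n),
    "number_series":         0.80 * gen + rng.normal(scale=0.60, size=n),
    "letter_sets":           0.75 * gen + rng.normal(scale=0.65, size=n),
}
X = np.column_stack(list(tasks.values()))

# Two-factor model with varimax rotation.
fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(X)

# Each task's loadings on the two rotated factors; the scientific-thinking
# tasks should cluster together, separately from the general academic tasks.
for name, loadings in zip(tasks, fa.components_.T):
    print(f"{name:22s} {loadings[0]:6.2f} {loadings[1]:6.2f}")
```

The separation of loadings into two clusters in output of this sort is what is meant above by the scientific-reasoning tests forming one factor and the general academic thinking tests forming another.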
In further research, we sought to replicate these findings and also to extend them to another domain of thinking important to STEM careers: teaching. In this work, we had students engage not only in the previous assessments but also in a new one in which they were presented with recorded scenarios of two professors teaching lessons in psychological science. Both professors purposely introduced flaws into their teaching, for example, being disorganized, answering questions poorly or even sarcastically, appearing not to know their material well, and so forth. Student participants were asked to view the lessons and to analyze the flaws in the professors’ teaching. We found that students’ skill in spotting flaws in science teaching clustered with the scientific thinking assessments rather than with the assessments of general academic thinking skills, such as number series and letter-set classifications. STEM research and STEM teaching skills, therefore, are nonidentical but closely related.
But what about other aspects of STEM thinking outside of psychological science? My colleagues and I did a further study in which we assessed the same scientific thinking skills but across a variety of STEM areas, not just psychological science. The results from the earlier studies replicated. It did not matter whether we used scientific thinking items from one STEM area or another: the scientific thinking items clustered together, as did the general academic thinking skills items.
We were still left with another question. In our assessments, students gave free responses to test items. They wrote down their hypotheses, proposed experiments, and performed analyses of conclusions to be drawn. What would happen if we instead made these items multiple choice so that they more closely corresponded to the kinds of items used to measure general academic thinking skills? On the one hand, using multiple choice, it seemed to us, would decrease the content validity of the items because, in STEM research and teaching, problems are not presented in multiple-choice format. Scientists, for example, have to figure out their own alternative hypotheses to explain their results rather than selecting from among multiple-choice options created by unknown test constructors. On the other hand, it seemed to us that introducing a multiple-choice format might increase correlations with the conventional tests of general academic thinking skills. And this is exactly what we found. By mimicking the multiple-choice format, we increased correlations with conventional standardized multiple-choice tests.
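The sketch below illustrates how such a format comparison might look, under the simplifying assumption that a shared multiple-choice format contributes shared method variance; the data are simulated and the variable names hypothetical, so the numbers are not those of the actual study.

```python
# Hypothetical sketch of the format comparison (simulated data, not the actual
# study): score the same scientific-reasoning construct with a free-response
# rubric and with a multiple-choice version, then compare how each correlates
# with a conventional multiple-choice reasoning test.
import numpy as np

rng = np.random.default_rng(2)
n = 300

scientific_ability = rng.normal(size=n)  # latent scientific-reasoning skill
format_variance = rng.normal(size=n)     # variance shared by multiple-choice formats

conventional_test = (0.5 * scientific_ability + 0.6 * format_variance
                     + rng.normal(scale=0.6, size=n))
free_response = 0.8 * scientific_ability + rng.normal(scale=0.6, size=n)
multiple_choice = (0.7 * scientific_ability + 0.5 * format_variance
                   + rng.normal(scale=0.5, size=n))

r_free = np.corrcoef(free_response, conventional_test)[0, 1]
r_mc = np.corrcoef(multiple_choice, conventional_test)[0, 1]
print(f"r(free response,   conventional) = {r_free:.2f}")
print(f"r(multiple choice, conventional) = {r_mc:.2f}  # higher: shared format variance")
```

Under these assumptions, the multiple-choice version correlates more strongly with the conventional test simply because the two share format variance, which parallels the pattern of increased correlations described above.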
What can we conclude from this series of studies? We can conclude what most of us, I suspect, already know -- that the standardized tests currently being used in the United States and elsewhere for admission to STEM (and other) programs are remarkably incomplete in what they measure. Without STEM-relevant supplementation, using them in isolation can lead to a generation of scientists … who are much more comfortable critiquing others’ ideas than coming up with their own creative ideas. The conventional tests do not measure creative or practical skills; they do not even directly measure scientific reasoning. They are, for many students, somewhat useful measures of what is sometimes called general mental ability (GMA), but not of many of the skills that will matter most, whether students go into STEM fields or other fields.
The world is facing enormous problems. Many leaders who went through an educational funnel shaped by standardized tests are failing us. We can -- and given the severity of our problems, we must -- do better.
Key References
Sternberg, R. J., & Sternberg, K. (2017). Measuring scientific reasoning for graduate admissions in psychology and related disciplines. Journal of Intelligence, 5(3), 29. http://www.mdpi.com/2079-3200/5/3/29/pdf
Sternberg, R. J., Sternberg, K., & Todhunter, R. J. E. (2017). Measuring reasoning about teaching for graduate admissions in psychology and related disciplines. Journal of Intelligence, 5(4), 34. www.mdpi.com/2079-3200/5/4/34/pdf
Sternberg, R. J., Todhunter, R. J. E., Litvak, A., & Sternberg, K. (2020). The relation of scientific creativity and evaluation of scientific impact to scientific reasoning and general intelligence. Journal of Intelligence, 8(2), 17. https://doi.org/10.3390/jintelligence8020017
Sternberg, R. J., Wong, C. H., & Sternberg, K. (2019). The relation of tests of scientific reasoning to each other and to tests of fluid intelligence. Journal of Intelligence, 7(3), 20. https://doi.org/10.3390/jintelligence7030020