% 2017
@article{gray17topiCS.gxp,
  title     = {Game-XP: Action Games as Experimental Paradigms for Cognitive Science},
  author    = {Gray, Wayne D.},
  url       = {http://homepages.rpi.edu/~grayw/pubs/papers/2017/gray17topics.gxp.pdf},
  doi       = {10.1111/tops.12260},
  year      = {2017},
  date      = {2017-04-15},
  journal   = {Topics in Cognitive Science},
  volume    = {9},
  number    = {2},
  pages     = {289--307},
  abstract  = {Why games? How could anyone consider action games an experimental paradigm for Cognitive Science? In 1973, as one of three strategies he proposed for advancing Cognitive Science, Allen Newell exhorted us to ``accept a single complex task and do all of it.'' More specifically, he told us that rather than taking an ``experimental psychology as usual approach,'' we should ``focus on a series of experimental and theoretical studies around a single complex task'' so as to demonstrate that our theories of human cognition were powerful enough to explain ``a genuine slab of human behavior'' with the studies fitting into a detailed theoretical picture. Action games represent the type of experimental paradigm that Newell was advocating, and the current state of programming expertise and laboratory equipment, along with the emergence of Big Data and naturally occurring datasets, provide the technologies and data needed to realize his vision. Action games enable us to escape from our field's regrettable focus on novice performance to develop theories that account for the full range of expertise through a twin focus on expertise sampling (across individuals) and longitudinal studies (within individuals) of simple and complex tasks.},
  keywords  = {Action games, Chess, cognitive skill acquisition, Cohort analysis, Computer games, Expertise sampling, extreme expertise, Halo, Longitudinal studies, Skilled performance, Space Fortress, StarCraft, Tetris, Verbal protocol analysis, Video games},
  pubstate  = {published},
  tppubtype = {article}
}
@article{sibert17topiCS.gxp,
  title     = {Interrogating Feature Learning Models to Discover Insights Into the Development of Human Expertise in a Real-Time, Dynamic Decision-Making Task},
  author    = {Sibert, Catherine and Gray, Wayne D. and Lindstedt, John K.},
  url       = {http://homepages.rpi.edu/~grayw/pubs/papers/2017/sibert17topics.gxp.pdf},
  doi       = {10.1111/tops.12225},
  year      = {2017},
  date      = {2017-04-15},
  journal   = {Topics in Cognitive Science},
  volume    = {9},
  number    = {2},
  pages     = {374--394},
  abstract  = {Tetris provides a difficult, dynamic task environment within which some people are novices and others, after years of work and practice, become extreme experts. Here we study two core skills; namely, (a) choosing the goal or objective function that will maximize performance and (b) a feature-based analysis of the current game board to determine where to place the currently falling zoid (i.e., Tetris piece) so as to maximize the goal. In Study 1, we build cross-entropy reinforcement learning (CERL) models (Szita & Lorincz, 2006) to determine whether different goals result in different feature weights. Two of these optimization strategies quickly rise to performance plateaus, whereas two others continue toward higher but more jagged (i.e., variable) heights. In Study 2, we compare the zoid placement decisions made by our best CERL models with those made by 67 human players. Across 370,131 human game episodes, two CERL models picked the same zoid placements as our lowest scoring human for 43% of the placements and as our three best scoring experts for 65% of the placements. Our findings suggest that people focus on maximizing points, not number of lines cleared or number of levels reached. They also show that goal choice influences the choice of zoid placements for CERLs and suggest that the same is true of humans. Tetris has a repetitive task structure that makes it more tractable and more like a traditional experimental psychology paradigm than many more complex games or tasks. Hence, although complex, Tetris is not overwhelmingly complex and presents a right-sized challenge to cognitive theories, especially those of integrated cognitive systems.},
  keywords  = {Cognitive skill, cognitive skill acquisition, Cross-entropy reinforcement learning, expertise, Experts, Machine learning, Methods, Perceptual learning, Strategies, Tetris},
  pubstate  = {published},
  tppubtype = {article}
}
@article{gray17cdps,
  title     = {Plateaus and Asymptotes: Spurious and Real Limits in Human Performance},
  author    = {Gray, Wayne D.},
  url       = {http://homepages.rpi.edu/~grayw/pubs/papers/2017/gray17cdps.pdf},
  doi       = {10.1177/0963721416672904},
  year      = {2017},
  date      = {2017-02-15},
  journal   = {Current Directions in Psychological Science},
  volume    = {26},
  number    = {1},
  pages     = {59--67},
  abstract  = {One hundred twenty years ago, the emergent field of experimental psychology debated whether plateaus of performance during training were real or not. Sixty years ago, the battle was over whether learning asymptoted or not. Thirty years ago, the research community was seized with concerns over stable plateaus at suboptimal performance levels among experts. Applied researchers viewed this as a systems problem and referred to it as the paradox of the active user. Basic researchers diagnosed this as a training problem and embraced deliberate practice. The concepts of plateaus and asymptotes, and the distinction between the two, are important, as the questions asked and the means of overcoming one or the other differ. These questions have meaning as we inquire about the nature of performance limits in skilled behavior and the distinction between brain capacity and brain efficiency. This article brings phenomena that are hiding in the open to the attention of the research community in the hope that delineating the distinction between plateaus and asymptotes will help clarify the distinction between real versus ``spurious limits'' and advance theoretical debates regarding learning and performance.},
  keywords  = {asymptotes, cognitive skill acquisition, expertise, memory, performance, plateaus, spurious limits},
  pubstate  = {published},
  tppubtype = {article}
}