@article{gray17topiCS.gxp,
  title = {Game-XP: Action Games as Experimental Paradigms for Cognitive Science},
  author = {Gray, Wayne D.},
  url = {http://homepages.rpi.edu/~grayw/pubs/papers/2017/gray17topics.gxp.pdf},
  doi = {10.1111/tops.12260},
  year = {2017},
  date = {2017-04-15},
  journal = {Topics in Cognitive Science},
  volume = {9},
  number = {2},
  pages = {289--307},
  abstract = {Why games? How could anyone consider action games an experimental paradigm for Cognitive Science? In 1973, as one of three strategies he proposed for advancing Cognitive Science, Allen Newell exhorted us to ``accept a single complex task and do all of it.'' More specifically, he told us that rather than taking an ``experimental psychology as usual approach,'' we should ``focus on a series of experimental and theoretical studies around a single complex task'' so as to demonstrate that our theories of human cognition were powerful enough to explain ``a genuine slab of human behavior'' with the studies fitting into a detailed theoretical picture. Action games represent the type of experimental paradigm that Newell was advocating and the current state of programming expertise and laboratory equipment, along with the emergence of Big Data and naturally occurring datasets, provide the technologies and data needed to realize his vision. Action games enable us to escape from our field's regrettable focus on novice performance to develop theories that account for the full range of expertise through a twin focus on expertise sampling (across individuals) and longitudinal studies (within individuals) of simple and complex tasks.},
  keywords = {Action games, Chess, cognitive skill acquisition, Cohort analysis, Computer games, Expertise sampling, extreme expertise, Halo, Longitudinal studies, Skilled performance, Space Fortress, StarCraft, Tetris, Verbal protocol analysis, Video games},
  pubstate = {published},
  tppubtype = {article}
}
@article{sibert17topiCS.gxp,
  title = {Interrogating Feature Learning Models to Discover Insights Into the Development of Human Expertise in a Real-Time, Dynamic Decision-Making Task},
  author = {Sibert, Catherine and Gray, Wayne D. and Lindstedt, John K.},
  url = {http://homepages.rpi.edu/~grayw/pubs/papers/2017/sibert17topics.gxp.pdf},
  doi = {10.1111/tops.12225},
  year = {2017},
  date = {2017-04-15},
  journal = {Topics in Cognitive Science},
  volume = {9},
  number = {2},
  pages = {374--394},
  abstract = {Tetris provides a difficult, dynamic task environment within which some people are novices and others, after years of work and practice, become extreme experts. Here we study two core skills; namely, (a) choosing the goal or objective function that will maximize performance and (b) a feature-based analysis of the current game board to determine where to place the currently falling zoid (i.e., Tetris piece) so as to maximize the goal. In Study 1, we build cross-entropy reinforcement learning (CERL) models (Szita & Lorincz, 2006) to determine whether different goals result in different feature weights. Two of these optimization strategies quickly rise to performance plateaus, whereas two others continue toward higher but more jagged (i.e., variable) heights. In Study 2, we compare the zoid placement decisions made by our best CERL models with those made by 67 human players. Across 370,131 human game episodes, two CERL models picked the same zoid placements as our lowest scoring human for 43% of the placements and as our three best scoring experts for 65% of the placements. Our findings suggest that people focus on maximizing points, not number of lines cleared or number of levels reached. They also show that goal choice influences the choice of zoid placements for CERLs and suggest that the same is true of humans. Tetris has a repetitive task structure that makes Tetris more tractable and more like a traditional experimental psychology paradigm than many more complex games or tasks. Hence, although complex, Tetris is not overwhelmingly complex and presents a right-sized challenge to cognitive theories, especially those of integrated cognitive systems.},
  keywords = {Cognitive skill, cognitive skill acquisition, Cross-entropy reinforcement learning, expertise, Experts, Machine learning, Methods, Perceptual learning, Strategies, Tetris},
  pubstate = {published},
  tppubtype = {article}
}
@incollection{marc11csc,
  title = {Use of complementary actions decreases with expertise},
  author = {Marc Destefano and John K. Lindstedt and Wayne D. Gray},
  editor = {Carlson, Laura and H{\"o}lscher, Christoph and Shipley, Thomas},
  year = {2011},
  date = {2011-01-01},
  booktitle = {Proceedings of the 33rd Annual Conference of the Cognitive Science Society},
  pages = {2709--2714},
  publisher = {Cognitive Science Society},
  address = {Austin, TX},
  abstract = {Evidence that the use of complementary (or epistemic) actions increases with expertise in the fast-paced interactive video game of Tetris has been previously reported (Kirsh, 1995; Kirsh & Maglio, 1994; Maglio & Kirsh, 1996). However, the range of expertise considered was small and classifying such actions can be difficult. We sample across a wide range of Tetris expertise and define complementary actions across multiple criteria of varying strictness. Contrary to prior work, our data suggest that complementary actions decrease with expertise, regardless of the criteria used. These findings cast into doubt the accepted wisdom on the role of complementary actions in expertise.},
  keywords = {complementary action, embodied cognition, epistemic action, expertise, games, pragmatic action, soft constraints hypothesis, Tetris},
  pubstate = {published},
  tppubtype = {incollection}
}
@inbook{gray18strugmann,
  title = {The Essence of Interaction in Boundedly Complex, Dynamic Task Environments},
  author = {Gray, Wayne D. and Destefano, Marc and Lindstedt, John K. and Sibert, Catherine and Sangster, Matthew-Donald D.},
  editor = {Gluck, K. A. and Laird, J. E.},
  isbn = {978-0-262-03882-9},
  pages = {147--165},
  publisher = {The MIT Press},
  address = {Cambridge, Massachusetts},
  chapter = {10},
  series = {Str{\"u}ngmann Forum Reports},
  abstract = {Studying the essence of interaction requires task environments in which changes may arise due to the nature of the environment or due to the actions of agents in that environment. In dynamic environments, the agent's choice to do nothing does not stop the task environment from changing. Likewise, making a decision in such environments does not mean that the best decision, based on current information, will remain ``best'' as the task environment changes. In this paper, we summarize work in progress which is bringing the tools of experimental psychology, machine learning, and advanced statistical analyses to bear on understanding the complexity of interactive performance in complex tasks involving single or multiple interactive agents in dynamic environments.},
  keywords = {dynamic task environment, interactive behavior, Tetris},
  pubstate = {published},
  tppubtype = {inbook}
}