Macrocognition Metrics and Scenarios: Design and Evaluation for Real-World Teams translates advances by scientific leaders in the relatively new area of macrocognition into a format that supports immediate use by the software testing and evaluation community for large-scale systems, as well as by trainers of real-world teams. Macrocognition is defined as how activity in real-world teams is adapted to the complex demands of a setting with high consequences for failure. The primary distinction between macrocognition and prior research is the unit of measurement: a real-world team coordinating its activity, rather than individuals processing information, which was the predominant model of cognition for decades. This book provides an overview of the theoretical foundations of macrocognition, describes a set of exciting new macrocognitive metrics, and offers guidance on using these metrics in the context of different approaches to the evaluation and measurement of real-world teams.
Table of Contents
Contents: Preface, Emily S. Patterson, Janet E. Miller, Emilie M. Roth, and David D. Woods; Part I Theoretical Foundations: Theory -> concepts -> measures but policies -> metrics, Robert R. Hoffman; Some challenges for macrocognitive measurement, Robert R. Hoffman; Measuring macrocognition in teams: some insights for navigating the complexities, C. Shawn Burke, Eduardo Salas, Kimberly Smith-Jentsch, Valerie Sims and Michael A. Rosen. Part II Macrocognition Measures for Real-World Teams: Macrocognitive measures for evaluating cognitive work, Gary Klein; Measuring attributes of rigor in information analysis, Daniel J. Zelik, Emily S. Patterson, and David D. Woods; Assessing expertise when performance exceeds perfection, James Shanteau, Brian Friel, Rick P. Thomas, John Raacke and David J. Weiss; Demand calibration in multitask environments: interactions of micro and macrocognition, John D. Lee; Assessment of intent in macrocognitive systems, Lawrence G. Shattuck; Survey of healthcare teamwork rating tools: reliability, validity, ease of use, and diagnostic efficiency, Barbara Künzle, Yan Xiao, Anne M. Miller and Colin Mackenzie; Measurement approaches for transfers of work during handoffs, Emily S. Patterson and Robert L. Wears; The pragmatics of communication-based methods for measuring macrocognition, Nancy J. Cooke and Jamie C. Gorman; From data, to information, to knowledge: measuring knowledge building in the context of collaborative cognition, Stephen M. Fiore, John Elias, Eduardo Salas, Norman W. Warner and Michael P. Letsky. Part III Scenario-Based Evaluation: Forging new evaluation paradigms: beyond statistical generalization, Emilie M. Roth and Robert G. Eggleston; Facets of complexity in situated work, Emily S. Patterson, Emilie M. Roth and David D. Woods; Evaluating the resilience of a human-computer decision-making team: a methodology for decision-centered testing, Scott S. Potter and Robert Rousseau; Synthetic task environments: measuring macrocognition, John M. Flach, Daniel Schwartz, April M. Courtice, Kyle Behymer and Wayne Shebilske; System evaluation using the cognitive performance indicators, Sterling L. Wiggins and Donald A. Cox; Index.
'Over 100 years of research, focused on measuring and understanding highly constrained human behavior and performance, has broken out of the laboratory and given way to a new paradigm for measuring, predicting and harmonizing the capabilities of humans in complex, unconstrained environments. The shift in paradigms has been roiling and emerging over more than several decades, creating the intellectual underpinnings for Cognitive Systems Engineering, Naturalistic Decision Making and now Macrocognition, defined by Schraagen, Klein, and Hoffman (2008) as the study of cognitive adaptation to complexity. The revolution in IT, beginning in the late eighties, energized this movement by spawning a stunning growth in the complexity of work environments and systems and by stimulating a marked shift towards cognitive work, with emphasis on thinking, decision making and problem solving. The failure of the old paradigm of laboratory behavioral research to deal with the challenges and vulnerabilities arising from complexity motivated the critical need for new theory, methods, measures and applications that minimize negative emergent effects and maximize resilience, agility, and real-time and evolutionary adaptivity. This work fills a very important niche in the growing literature of Macrocognition. Its unique contribution is its balanced focus on theory and the state of the art in measurement and evaluation of macrocognition, prepared by an impressive array of current thought-leaders in the field. The book is clearly written and should be required reading for every serious student, scholar, and/or practitioner concerned with harmonizing the capabilities of humans and the demands of work and work environments. I was also very pleased to discover some very provocative original thinking that, in turn, stimulated an epiphany that I expect will pay dividends in my work. I endorse and will recommend this work without reservation.'
Kenneth R. Boff, Tennenbaum Institute, Georgia Institute of Technology, and former Chief Scientist, Human Effectiveness, Air Force Research Laboratory, USA 'Patterson and Miller have synthesised a provocative set of perspectives on the measurement of cognitive processes in team-based work environments. With an authoritative line-up of contributors, this volume provides a wealth of new material on methods of task decomposition for cognitive data gathering in complex team settings. A notable feature is the blend of critical thinking on principles of evaluation with a serious appreciation of real-world applications for the emergent techniques.' Rhona Flin, University of Aberdeen, UK