February 28th, 2025: TBD
Start Date: February 28th, 12pm
End Date: February 28th, 1pm
Location: Zoom, Registration TBD
Speaker: Tara Behrend, Professor, Michigan State University (also President of the Society for Industrial and Organizational Psychology)
Abstract:
TBD
Bio:
TBD
March 28th, 2025: TBD
Start Date: March 28th, 12pm
End Date: March 28th, 1pm
Location: Zoom, Registration TBD
Speaker: Sharon Hill, Professor, George Washington University
Abstract:
TBD
Bio:
TBD
November 15th, 2024: “When AI Makes the Team”
Lindsay Larson, Assistant Professor, Florida International University
Abstract:
Recent technological advances are shifting the role of AI from tool to teammate, sparking a surge of interest in human-AI teams. In fact, tech giants now market “AI teammates,” and military agencies sponsor research on human-AI teams. This new form of teamwork raises the question: how can we apply our knowledge of human teams to enhance teamwork with AI? Importantly, AI agents differ fundamentally from humans, impacting the potential effectiveness of human-centered models for human-AI teamwork. In this talk, I will review my recent research that draws on human-centered models of teamwork to facilitate effective human-AI teamwork. These projects examine the conditions under which AI “makes the team,” such as when teammates are most receptive to an AI teammate, the functional roles teammates are most comfortable with AI enacting, and the mental capabilities humans should perceive in their AI to be considered an AI teammate. I will also discuss how, by studying human-AI teams, we may strengthen our foundational understanding of traditional teams.
Bio:
Lindsay Larson is an Assistant Professor of Global Leadership and Management at Florida International University. She earned her PhD in Media, Technology, and Society at Northwestern University. Before joining FIU, Lindsay completed a two-year postdoctoral research associate position at UNC Chapel Hill’s Kenan-Flagler Business School. Lindsay adopts an interdisciplinary lens, integrating work from organizational behavior, psychology, communication, and human-computer interaction, to further organizational theorizing on leading and working in teams in the digital age. In particular, Lindsay is interested in the interpersonal dynamics and functioning of the people in technologically-advanced teams, from human-AI teams to cross-functional, geographically distributed science teams.
October 18th, 2024: AI Teaming in Classrooms: Building Inclusive, Engaging, and Teacher-led Learning Experiences & Environments
Noel Kuriakos, Graduate Researcher & Instructor, University of Maryland
Abstract:
This presentation explores the integration of AI agents and assistants in K-12 classrooms, focusing on how these technologies can transform teaching and learning by enhancing various pedagogical approaches and aligning with established learning theories. AI teaming involves AI agents and assistants working alongside teachers and students to provide personalized, adaptive, and interactive support that meets diverse learning needs. Drawing on cognitive, constructivist, experiential, sociocultural, and distributed cognition theories, the presentation demonstrates how AI systems can facilitate differentiated instruction, manage classroom logistics, and offer real-time analytics and feedback. These capabilities enable teachers to reclaim valuable time to build stronger relationships, foster community, and focus on social-emotional learning that supports dynamic complementarity between academic success and personal development.
The presentation emphasizes how AI agents and assistants empower teachers as relationship-builders, allowing them to focus on creating inclusive, engaging, and supportive learning environments that honor students’ cultural backgrounds and social contexts. By handling routine tasks, AI frees teachers to build community, improve school climate, and enhance students’ social-emotional skills, which are critical for academic and personal growth. The integration of culturally relevant, responsive, sustaining, and sensitive pedagogies within AI teaming further ensures that AI technologies respect and reflect the diverse identities and experiences of students. Ultimately, AI teaming not only enhances instructional quality but also reinforces the human elements of teaching that are essential for fostering inclusive, student-centered learning environments where every student can thrive.
Bio:
Noel Kuriakos is a doctoral researcher focusing on computational literacy in early childhood education and AI teaming in inclusive classrooms. As Director of a PreK Computational Literacy Initiative (a Research Practice Partnership), Noel co-designs and co-develops learning frameworks and teacher professional development programs for PreK students. Additionally, Noel serves as an Independent Researcher & Consultant, leading research projects and developing curricula for educational organizations, and teaches undergraduate leadership courses. Noel’s diverse background in education, research, and technology—including a prior career in enterprise software product management and experience as a middle school math and computer science teacher—informs a unique and innovative approach to early childhood education. Noel is committed to shaping the future of education and empowering young learners.
September 20th, 2024: Robots that Teach
Brian Scassellati, Professor, Computer Science, Yale University
Abstract:
Robots have long been used to provide assistance to individual users through physical interaction, typically by supporting direct physical rehabilitation or by providing a service such as retrieving items or cleaning floors. Socially assistive robotics (SAR) is a comparatively new field of robotics that focuses on developing robots capable of assisting users through social rather than physical interaction. Just as a good coach or teacher can provide motivation, guidance, and support without making physical contact with a student, socially assistive robots attempt to provide the appropriate emotional, cognitive, and social cues to encourage development, learning, or therapy for an individual. In this talk, I will review some of the reasons why physical robots rather than virtual agents are essential to this effort, highlight some of the major research issues within this area, and describe some of our recent results building supportive robots for teaching social skills to children with autism spectrum disorder.
Bio:
Brian Scassellati is the A. Bartlett Giamatti Professor of Computer Science, Cognitive Science, and Mechanical Engineering at Yale University. His research focuses on building embodied computational models of human social behavior, especially the developmental progression of early social skills. Using computational modeling and socially interactive robots, his research evaluates models of how infants acquire social skills and assists in the diagnosis and quantification of disorders of social development (such as autism).
May 17th, 2024: Interactive Team Cognition for Humans and Machines
Nancy J. Cooke, Professor of Human Systems Engineering, Chief Scientist of the Center for Human, Artificial Intelligence, and Robot Teaming
Abstract:
A team is a heterogeneous group of team members, each with their own roles and responsibilities, who come together to achieve a common goal. Team cognition is the joint processing of information by a team that produces knowledge and actions beyond what an individual could produce. In this talk I will report on team cognition research that I have conducted in my lab over the last 28 years, leading to the theory of Interactive Team Cognition and four discoveries: the importance of team interaction; the use of perturbations to improve team cognition; what it takes to be a good team player; and the power of a single teammate or coordination coaching. In addition, I will suggest some future directions for this work, including a focus on team-level measurement and the extension of team cognition to human, artificial intelligence, and robot teaming.
Bio:
Nancy J. Cooke is a professor in Human Systems Engineering at the Polytechnic School, one of the Ira A. Fulton Schools of Engineering at Arizona State University. She is also Chief Scientist of the Global Security Initiative’s Center for Human, AI, and Robot Teaming. Professor Cooke’s research interests include the study of individual and team cognition and its application to the development of cognitive and knowledge engineering methodologies, remotely piloted aircraft systems, human-robot teaming, healthcare systems, and emergency response systems. She specializes in the development, application, and evaluation of methodologies to elicit and assess individual and team cognition. Ongoing projects in her group include coordination of human-autonomy teams in the face of unexpected events, human-robot teaming and situation awareness, human-machine teaming for the Next Generation Combat Vehicle, and artificial social intelligence. Her work is funded by the DoD and has been widely published.
May 9th, 2024: Exploring immersive meetings in the Metaverse: A conceptual model and first empirical insights
Marvin Grabowski, PhD Candidate, University of Hamburg, Germany
Abstract:
New technological developments open up new possibilities for the way teams can work together virtually. In particular, immersive extended reality (XR) meetings enable groups to represent, view, and interact with each other in a shared three-dimensional (3D) space. XR meetings take place in the highly publicized "metaverse", defined as a multi-user interaction space that merges the virtual world with the real world (e.g., Dwivedi et al., 2022). By wearing a headset that blocks out their current physical environment, group members become immersed in a shared virtual environment (i.e., the metaverse). Users generate realistic embodied avatars, creating an experience that is qualitatively different from two-dimensional (2D) video interactions such as Zoom (e.g., Hennig-Thurau et al., 2023). We developed a conceptual framework of 3D immersive XR group meetings that integrates technological design characteristics, subjective attendee experiences, mediating mechanisms, and meeting outcomes. I am going to present our preliminary findings on meeting outcomes and individual XR experiences (i.e., group interaction characteristics, avatar perception, simulator sickness, and task load). Following the talk, you are cordially invited to discuss opportunities and challenges of the metaverse as a platform for enabling immersive learning scenarios and conducting workplace meetings in the future.
Bio:
As a PhD Candidate at the University of Hamburg, Germany, my research highlights the future of workplace meetings. At the interface of Industrial & Organizational Psychology and Human-Computer Interaction, the immersive experience afforded by VR headsets opens up new interdisciplinary perspectives. In particular, I am interested in the underlying mechanisms of fruitful interactions in immersive meetings in the metaverse. I am also interested in the success factors of hybrid meetings, with the goal of gaining new insights into how the framework of New Work can be applied in practice. Drawing on national and international academic experience, I am happy to build bridges between organizational needs and scientific findings. In addition, I am a speaker on career guidance and professional orientation after high school and published the book “Early Life Crisis”.
March 15th, 2024: Gamification for Team Science & Human-AI Teaming
Josh Strauss, ABD, University of Maryland
Abstract:
An online, multiplayer, gamified task platform was developed to elicit team processes and output detailed log data for both behaviors and outcomes. This task is currently being employed in two ongoing studies. The first study seeks to understand the role of culture on team behavior. The second study aims to deepen understanding of team cognition by assessing the relationship between mental representations’ compatibility (as opposed to similarity, overlap, or distribution) and important consequences: coordination and performance. These studies will be presented as a backdrop for a discussion about the task platform’s utility and flexibility for research on teams in general, and human-AI teaming in particular.
Bio:
Josh Strauss is currently a doctoral candidate in the Social, Decision, and Organizational Science program in UMD’s Psychology Department and a DEI Consultant at CIDIS LLC. He also holds an MS from the same program and a bachelor’s degree from the University of California, Davis, in Psychology, Communication, Organizational Sociology, and Linguistics. His research employs multilevel, process-oriented theory and methods primarily related to diversity or team science. His research and practice converge in advancing Josh’s larger goal of empowering people to come together in compassion and cooperation.
February 23rd, 2024: The X-Culture Global Research Platform: Challenges, Opportunities, and Lessons Learned
Vas Taras, PhD, Professor of International Business, Department Chair, University of North Carolina at Greensboro
Abstract:
X-Culture is a large-scale international collaboration project. Approximately 7,000 students from 150 universities in 70 countries on six continents participate in X-Culture each semester, and over 110,000 students have completed the program since it launched in 2010.
X-Culture provides a unique opportunity to study work in international teams and on global crowdsourcing platforms.
X-Culture collects immense amounts of data, tracking over 3,000 variables, many measured longitudinally, at multiple levels, and using multiple data sources. Over 800 researchers worldwide are involved in X-Culture’s research projects in various capacities.
The presentation will share details of X-Culture research and the lessons learned from it, as well as applications of the findings for crowdsourcing-based international business consulting.
Bio:
Vas Taras is a Professor of International Business and a Department Chair at the University of North Carolina at Greensboro. He is the Vice President-Administration of the Academy of International Business and the founder of the X-Culture Project. He received his Ph.D. in International HR and International Business from the University of Calgary, Canada, and his Master’s in Political Economy from the University of Texas at Dallas. He is an Associate Editor of the International Journal of HRM, Journal of International Management, the International Journal of Cross-Cultural Management, the European Journal of International Management, and Cross-Cultural Strategic Management. Vas has lived, worked, and studied in half a dozen countries and has experience as an academic, manager, entrepreneur, and business consultant.
January 26th, 2024: Human-Agent Team Trust Dynamics
Daniel Nguyen, Ph.D., Florida Institute of Technology
Abstract:
As human and machine collaboration becomes more common in this era of the Fourth Industrial Revolution, some of these interactions are founded on principles of teamwork. In studying and enhancing the effectiveness of these human-agent teams (HATs), many questions come to light about the differences in how humans work with their agent team members. One focal area of research in the field centers around trust. A three-year research grant funded by the Air Force Office of Scientific Research seeks to apply traditional organizational theories of teaming (e.g., multilevel theory, Kozlowski & Klein, 2000; event systems theory, Morgeson et al., 2015) to understand team-level trust in HATs, as well as to develop unobtrusive methods for measuring human trust in agent team members. This talk shares results from the first two years. In year one, a theoretical framework of multi-level trust in HATs was developed based on a systematic literature review. In year two, an experiment was conducted to validate part of the framework, focusing on the multi-level and event-based characteristics of trust violations and how they propagate the degradation of human trust in agent team members.
Bio:
Daniel Nguyen is a recent doctoral graduate in Industrial/Organizational Psychology. He received his B.A. in Psychology at Texas A&M University in 2017, then went on to receive his M.S. and Ph.D. in I/O Psychology at Florida Tech in 2020 and 2023. His research interests and experience center on work teams, with an emphasis on human-agent teaming (HAT), which has led him to a broader interest in related human factors topics such as human performance and trust in automation. During his time at Florida Tech, he served as the team lead for a research grant funded by the Air Force Office of Scientific Research. This three-year grant was aimed at creating and validating a multilevel framework of team-level trust in HATs and at unobtrusively measuring this team-level trust. His work on this topic includes 8 conference presentations, 4 publications, and 4 invited talks. Currently, Daniel works at Aptima Inc. as an Associate Scientist and continues to be involved in HAT research in applied settings.
October 27, 2023: Multimodal Human-AI Interaction
Snehesh Shrestha, PhD Candidate, Department of Computer Science, University of Maryland, College Park
Abstract:
People communicate through verbal and non-verbal cues. AI and ML have made tremendous progress in language understanding. Audio tone, gestures, gaze, and touch, along with speech, offer new challenges and opportunities. My work dissects multimodal human expression, focusing on human-AI interaction in robotics and music. In the first part, I discuss creating a robot capable of understanding natural commands, emphasizing multimodal repair mechanisms. I’ll briefly share data collection challenges, which greatly impact data quality and validity. We used a Wizard-of-Oz setup, deceiving participants into believing we had a human-level AI robot, to capture ‘natural’ interactions. Verbal and non-verbal strategies were studied to train machine learning algorithms for multimodal commands, highlighting the importance of combining gestures with speech. In the second part, I explore AI-mediated student-teacher interaction systems for violin education. I will discuss challenges in remote music lessons, which became particularly pronounced during the COVID-19 pandemic, as well as data collection challenges for precise motion capture, especially with young students. I will share insights into using audio to enhance pose estimation algorithms for 3D player visualization. Lastly, I introduce a novel haptic band designed for remote feedback, prompts, and metronome functions, enhancing online music education experiences.
Bio:
Snehesh Shrestha is a Ph.D. candidate at the University of Maryland, College Park. He works in the Perception and Robotics Group (PRG) lab in the Department of Computer Science under the guidance of Prof. Yiannis Aloimonos (CS), Dr. Cornelia Fermüller (UMIACS), Dr. Ge Gao (INFO), and Dr. Irina Muresanu (School of Music). He has also worked with Dr. Michele Gelfand (Department of Psychology) in the Culture Lab. Additionally, he works at NIST, developing new standards toward recommended practices for the design of human subject studies in human-robot interaction. His research is at the intersection of robotics, artificial intelligence, human factors, arts, and culture. He is interested in multidisciplinary research aimed at building rich and intuitive experiences that ‘amplify human abilities, empowering people and ensuring human control,’ inspired by Dr. Ben Shneiderman’s book Human-Centered AI. His recent work has focused on human-robot interaction and AI for music education.
October 20, 2023: Conflict and Compromise among Museum Exhibit Teams: The Impacts of Organizational Change and Professionalization on Curating Smithsonian’s Fossil Halls
Diana E. Marsh, Assistant Professor of Archives and Digital Curation, University of Maryland, College Park
Abstract:
In this talk, I will highlight the interdisciplinary teamwork behind large-scale exhibitions and the politics among the different experts who curate scientific knowledge for the public. Presenting highlights from my book, From Extinct Monsters to Deep Time: Conflict, Compromise, and the Making of Smithsonian’s Fossil Halls, I describe participant observation among the Smithsonian’s exhibition team tasked with the National Museum of Natural History (NMNH)’s largest-ever exhibit renovation, Deep Time. I highlight how the process of negotiating, planning, and designing scientific knowledge in exhibits is shaped by the intersections of the different expertises involved in the planning process—including Education, Design, Exhibit Writing, Project Management, and three subfields of Paleobiology—as well as by broader institutional cultures and pressures. Drawing on ethnographic fieldwork as well as interview, oral history, and archival research, the work contextualizes the contemporary exhibits process by tracing trends in exhibit development from the late 19th century to the present. I show how telling the story of Deep Time is mediated through 1) different techniques and technologies for museum communication, 2) the recent professionalization of museum disciplines, and 3) the expanding institutional split between the museum’s missions of “research” and “outreach,” leading to new “frictions” and “complementarities” among exhibit teams.
Bio:
Diana E. Marsh is an Assistant Professor of Archives and Digital Curation at the University of Maryland’s College of Information Studies (iSchool) who explores how heritage institutions communicate with the public and communities. Her current research focuses on improving discovery and access to colonially-held archives for Native American and Indigenous communities. Previously, she completed her PhD in Anthropology (Museum Anthropology) at the University of British Columbia, an MPhil in Social Anthropology with a Museums and Heritage focus at the University of Cambridge in 2010, and a BFA in Visual Arts and Photography at the Mason Gross School of the Arts of Rutgers University in 2009. Her recent work has appeared in The American Archivist, Archival Science, Archivaria, and Archival Outlook, and her book, From Extinct Monsters to Deep Time: Conflict, Compromise, and the Making of Smithsonian’s Fossil Halls was released in paperback with Berghahn Books in Fall 2022.
September 23, 2023: Virtual Team Creativity and Innovation
Roni Reiter-Palmon, Distinguished Professor of I/O Psychology, University of Nebraska at Omaha
Abstract:
As communication technology capabilities have improved and the globalization of the workforce has resulted in distributed teams, organizations have been shifting towards virtual teams and virtual meetings over the last decade. This trend has been accelerated by work-from-home orders due to COVID-19. Even though virtual collaboration has, in the past, been the focus of multiple studies, there are some surprising gaps in our knowledge. For instance, there are few empirical studies examining the impact of virtual devices and tools on creative problem-solving. While there is a substantial body of research on electronic brainstorming and the use of virtual tools for idea generation, less is known about earlier processes such as problem construction or later processes such as idea evaluation and idea selection. Furthermore, as a dynamic process, creativity and innovation are heavily influenced by the people engaged in the process and their collaborative environment, yet there is a gap in the literature regarding the type of virtual tools used in the process (for example, audio + video vs. audio alone, or the use of file-sharing technologies). In this paper, we will review the current literature on virtual teams, virtual meetings, and creativity. We will then explore theoretical frameworks such as media richness theory that can help us understand how virtuality and virtual tools may influence team creativity across the dynamic range of the creative problem-solving process. Finally, we provide questions to help guide future research.
Bio:
Dr. Roni Reiter-Palmon is a Distinguished Professor of Industrial/Organizational (I/O) Psychology at the University of Nebraska at Omaha. She is also the Director of Innovation for the Center for Collaboration Science, an interdisciplinary program at UNO. Her research focuses on creativity and innovation in the workplace, cognitive processes of creativity, team creativity, development of teamwork and creative problem-solving skills, and leading creative individuals and teams. Her research has been published in leading journals in I/O psychology, management, and creativity. She is the former Editor of Psychology of Aesthetics, Creativity, and the Arts and the current editor of Organizational Psychology Research. She serves on multiple editorial boards of I/O, management, and creativity journals. She has obtained over 9 million dollars in grant and contract funding focusing on creativity, leadership, and teams. She is a fellow of Divisions 10 and 14 of APA and won the system-wide research award from the University of Nebraska system in 2017.
April 28, 2023: Building Human-agent Teams in Multi-agent Systems
Dr. Susan Campbell
Abstract:
Prior work on the interaction of humans and autonomous systems has focused on human control of systems. Though meaningful human control is imperative, only some humans will exercise control over systems. Other humans will act as teammates, sharing goals and interdependence with systems that they perceive to be autonomous. Unfortunately, current systems have a long way to go before they can perform the behaviors that would make them effective team members. The Artificial Intelligence and Autonomy for Multi-Agent Systems (ArtIAMAS) human-machine teaming area seeks to bridge the gap between what is currently possible with autonomous systems and future human-machine teaming concepts. In this talk, I will discuss our goals and current work.
Bio:
Dr. Campbell is interested in what makes people good at interacting with complex technologies and technological systems. She holds a joint appointment between the Applied Research Laboratory for Intelligence and Security (as an Associate Research Scientist) and the College of Information Studies (as a Senior Lecturer) at the University of Maryland. She is the university lead for human-machine teaming on the ArtIAMAS cooperative agreement. Her work focuses on measuring individual differences related to cognitive performance, training cognitive skills, and building systems that complement human strengths. She holds a PhD in Psychology from the University of Maryland at College Park and a BS in Cognitive Science from Carnegie Mellon.
March 17, 2023: Human-Machine Teaming: What Skills Do the Humans Need?
Dr. Samantha Dubrow, Lead Human-Centered Engineering Researcher, The MITRE Corporation
Abstract:
Over the last few decades, technology has become increasingly intelligent. Technology is no longer a passive tool that supports a single human in their work, but an active teammate that collaborates and learns as a critical entity of the team. To date, human-machine teaming research has primarily focused on the machines – how to design them, what their capabilities are, and how they can “learn.” This presentation takes the opposite view, focusing on the importance of selecting and training humans to be effective human-machine teammates. The presentation addresses two questions: What unique skills do humans need to work well with machines as teammates, and how are those skills different from those required for effective human-human interactions? Details about the human traits and abilities that can be selected for and the human skills that can be trained to maximize human-machine teaming effectiveness will be discussed.
Bio:
Dr. Samantha Dubrow is a Lead Human-Centered Engineering Researcher at The MITRE Corporation. At MITRE, Samantha conducts applied research and development in Industrial-Organizational Psychology, teamwork and leadership, hybrid teaming, decision-making, human factors, user experience, human-machine teaming, and multiteam system collaboration management. She helps teams and multiteam systems across a variety of government agencies utilize technology to improve their teamwork processes and job performance. Samantha holds a PhD in Industrial-Organizational Psychology from George Mason University under Dr. Stephen Zaccaro. Her dissertation focused on team mental models and leadership transitions in ad hoc decision-making teams. During graduate school, Samantha was also involved with projects regarding multidisciplinary teams, multiteam systems, team leadership, simulation and training, and social network analysis.
February 24, 2023: Promoting Astronaut Autonomy in Human Spaceflight Missions (Seminar)
Dr. Jessica Marquez, NASA Ames Research Center
Seminar Abstract:
Mission operations will have to adapt for long-duration, long-distance human spaceflight missions. This change is driven mainly by the significantly different communication availability between Earth and space. As astronauts travel farther from Earth, the one-way communication latency increases, the amount of bandwidth will be limited, and there will be periods of long delays and/or no communication. Currently, ground flight controllers collaborate and cooperate with astronauts in space to accomplish essential operational functions. Astronaut autonomy, i.e., the crew’s ability to work more independently from mission control, will be a key enabler in future exploration missions. Over the last several years, the NASA Ames Human-Computer Interaction (HCI) Group has investigated various ways to promote and support astronaut autonomy in human spaceflight missions. Software prototypes are researched, designed, implemented, and assessed for their ability to enable astronaut autonomy. From an integrated Internet of Things for Space, advanced procedures interfaces, and comm-delayed chats to self-scheduling tools, the HCI Group has explored different aspects of astronaut autonomy. Specifically, the self-scheduling tool Playbook has been evaluated in analog extreme environments and onboard the International Space Station, successfully paving the way for future autonomous astronauts.
Bio:
Since 2007, Dr. Jessica Marquez has been working at the NASA Ames Research Center within the Human Systems Integration Division. As part of the Human-Computer Interaction Group, she has supported the development and deployment of planning and scheduling software tools for various space missions, including the International Space Station Program. She now leads the team that is developing Playbook, a web-based planning, scheduling, and execution software tool. Her work has led to supporting different NASA analog missions that simulate planetary missions and spacewalks. Dr. Marquez also is a subject matter expert for space human factors engineering, specifically in human-automation-robotic integration. She lends her expertise across different NASA research programs, like the Space Technology Research Institutes and the Human Research Program. She currently is the PI for the research project “Crew Autonomy through Self-Scheduling: Guidelines for Crew Scheduling Performance Envelope and Mitigation Strategies.” Dr. Marquez has a Ph.D. in Human Systems Engineering and an S.M. in Aeronautics/Astronautics from the Massachusetts Institute of Technology and a B.S.E. in Mechanical Engineering from Princeton University.
November 18, 2022 Talk: Understanding AI as Socio-Technical Systems
Reva Schwartz, Research Scientist, NIST
Abstract:
The rise of automated decision systems has helped increase awareness about the risks that come from artificial intelligence. To fully tackle these risks and identify practice improvements, it is important to recognize that AI systems are socio-technical in nature. This perspective requires a multi-disciplinary approach, strong governance, and broad, active engagement with stakeholders. I will discuss each of these factors and how NIST’s AI Risk Management Framework seeks to operationalize this perspective.
Suggested readings:
https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
https://www.nist.gov/system/files/documents/2022/08/18/AI_RMF_2nd_draft.pdf
Bio:
Reva Schwartz is a research scientist in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST). She serves as Principal Investigator on Bias in Artificial Intelligence for NIST’s Trustworthy and Responsible AI program. Her research focuses on the role of context in human language and behavior, the nature of subject matter expertise and expert judgment in socio-technical systems, and the role of gatekeepers within institutions. She has advised federal agencies about how experts interact with automation to make sense of information in high-stakes settings. Reva received her MA from the University of Florida in acoustics and socio-phonetics, and her BA in political science from Kent State University. Her background includes a forensic science posting of almost 15 years at the United States Secret Service, advising forensic science practice at NIST, a temporary duty assignment at the National Security Agency, and work as an adjunct researcher at the Johns Hopkins University Human Language Technology Center of Excellence.
October 28, 2022 Talk: Why interdisciplinary knowledge synthesis is so hard, and what we can do about it: A proposal and discussion
Joel Chan, Assistant Professor, University of Maryland College of Information Studies
Abstract:
Sharing, reusing, and synthesizing knowledge is central to the research process, both individually and with others. These core functions are in theory served by our formal scholarly publishing infrastructure, as well as by individual and collaborative tools such as reference management software. But converging lines of empirical and anecdotal evidence suggest otherwise: instead of the smooth functioning of infrastructure, researchers resort to laborious “hacks” and workarounds to “mine” publications for what they need, and struggle to efficiently share the resulting information with others. One key reason for this problem is the privileging of the narrative document as the primary unit. The dream of an alternative infrastructure based on more appropriately granular discourse units like theories, concepts, claims, and evidence — along with key rhetorical relationships between them — has been in motion for decades but remains severely hampered by a lack of sustainable authorship models. In this talk, I sketch out a novel sociotechnical authorship model for a sustainable discourse-based scholarly communication infrastructure. The key insight is to achieve sustainability by seamlessly integrating discourse-graph authorship work into scholars’ research and social practices, such as research idea development, literature reviewing, and reading groups. In this way, this model both draws from and augments core collaborative research processes. I will describe 1) the grounding of this concept in formative research on scholars’ workflows, 2) working prototypes for integrated authoring and sharing of discourse graphs, and 3) field study insights into their promise and path towards a larger synthesis-oriented infrastructure.
Bio:
Joel Chan is an Assistant Professor in the University of Maryland’s College of Information Studies (iSchool) and Human-Computer Interaction Lab (HCIL). Previously, he was a Postdoctoral Research Fellow and Project Scientist in the Human-Computer Interaction Institute (HCII) at Carnegie Mellon University, and received his PhD in Cognitive Psychology at the University of Pittsburgh. His research investigates systems that support creative knowledge work, such as scientific discovery and innovative design. His long-term goal is to help create a future where innovation systems are characterized by openness and sustainability. His research has received funding from the National Science Foundation, the Office of Naval Research, the Institute of Museum and Library Services, Adobe Research, and Protocol Labs, and has received Best Paper awards from the ASME Conference for Design Theory and Methodology, the Journal of Design Studies, and the ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD).
September 23, 2022 Talk: How Leaders Drive Followers’ Unethical Behavior
Professor Debra Shapiro
Abstract:
Numerous organizational scandals have implicated leaders in encouraging employees to advance organizational objectives through unethical means. However, leadership research has not examined leaders’ encouragement of unethical behaviors. We define leader immorality encouragement (LIE) as an employee’s perception that their leader encourages unethical behaviors on behalf of the organization. Across four studies, we found, as hypothesized, that: (1) LIE promotes employees’ unethical behavior carried out with the intention to aid the organization (unethical pro-organizational behavior); (2) this relationship is mediated by employees’ moral disengagement and the expectation of rewards; (3) LIE, via moral disengagement, enhances employees’ self-serving unethical behavior; and (4) the relationship between LIE and unethical behavior is stronger when the leader has long-presumed “good qualities,” such as a higher (rather than lower) quality exchange relationship with the employee and higher (rather than lower) organizational status. Debra’s presentation to OTTRS aims to provoke discussion about how AI (artificial intelligence) in and outside organizations increases as well as decreases the likelihood of unethical behavior (e.g., the spread, as well as fact-correction, of disinformation), hence how AI might moderate this study’s predicted and observed findings.
Keywords: leader immorality encouragement, unethical pro-organizational behavior, leader-member exchange, leader’s organizational status, self-serving unethical behavior
Bio:
Debra L. Shapiro (Ph.D., Northwestern University) is the Clarice Smith Professor at the University of Maryland (UMD) and was formerly the Willard Graham Distinguished Professor at the University of North Carolina-Chapel Hill (UNC), where she was on the faculty from 1986 to 2003. Dr. Shapiro has led UNC’s and UMD’s business schools’ PhD Programs (as Associate Dean at UNC from 1998-2001 and as Assistant Dean at UMD from 2008-2011). Debra has also been Division Chair of the Academy of Management’s (AOM’s) Conflict Management Division, Representative-at-Large on AOM’s Board of Governors, Associate Editor of the Academy of Management Journal (as well as a member of the editorial boards of AMJ, AMR, and other journals), AOM Program Chair/Vice-President, AOM President, and an executive committee member for the Society of Organizational Behavior (SOB). Debra studies interpersonal-level dynamics in organizations such as negotiating, mediating, dispute-resolving, and procedural justice-enhancing strategies that enhance integrative (win-win) agreements, organizational justice, ethical work behaviors, and, more generally, positive work attitudes and their associated behaviors. Debra also studies the challenges of obtaining positive results with the latter strategies when they involve culturally diverse and/or artificially intelligent work colleagues. To study interpersonal dynamics, Debra has used varied methods, such as ethnography, interviews, surveys, negotiation- and dispute-resolving simulations, experiments (including some with electronic confederates), and longitudinal archival data. Debra is a Fellow of the AOM, SOB, and the Association for Psychological Science (APS).
April 22, 2022 Talk: Emotional Contagion in Online Groups as a Function of Valence and Status
Aimée A. Kane, Associate Professor of Management, Palumbo-Donahue School of Business, Duquesne University
Abstract:
This study examines emotional contagion in online group discussions, focusing on language as a mechanism of emotional contagion. In a lab study, 235 participants interacted online with a partner who was an electronic confederate. We manipulated exposure to emotional language to test how a partner’s use of positive versus negative emotional language impacts participants’ felt emotions and their displayed emotional language. The status of one’s partner was also manipulated to test how status moderates emotional contagion. We find that felt emotions are contagious in an online setting. Further, a partner’s emotional language affects participants’ use of emotional language. We examine whether participants’ emotional language mediates the effect of a partner’s emotional language on participants’ felt emotions and find some evidence for mediation through negative emotional language when interacting with a high-status partner. By controlling the partner’s language, we find that the positive emotional language of one’s partner leads to more group reflection and lower perceptions of conflict, both task and relational.
Bio:
Aimée A. Kane is an Associate Professor of Management at the Palumbo-Donahue School of Business at Duquesne University. Dr. Kane’s research focuses on group processes and reveals how members come together, learn, and collaborate effectively, despite the boundaries that separate them. Her research contributes to the organizational sciences, psychological sciences, communication sciences, and computer sciences. It has been published in key journals, conference proceedings, and handbooks, and has won awards. Small Group Research awarded her article Language and group processes: An integrative, interdisciplinary review a Best Article Award. The Academy of Management identified “Am I still one of them?”: Bicultural immigrant managers navigating social identity threats when spanning global boundaries as a finalist for the International Human Resource Management Scholarly Research Award. The Palumbo-Donahue School of Business awarded her a Harry W. Witt Faculty Fellowship and awards for research excellence. Dr. Kane is currently an associate editor at Group Dynamics: Theory, Research, and Practice and serves on the editorial boards of the Academy of Management Discoveries and Organization Science journals. She holds a Ph.D. and an M.S. in organizational behavior and theory from the Tepper School of Business at Carnegie Mellon University and a B.A. from Duke University, where she was elected to Phi Beta Kappa. Prior to joining Duquesne, Kane was an Assistant Professor of Management and Organizations at New York University’s Stern School of Business.