Group Photo

Invited Speaker

Dr. Jenn Wortman Vaughan: Intelligibility Throughout the Machine Learning Lifecycle

Accepted Papers

Human+AI Modeling & Design

AI Explainability & Experimentation

AI in the Real World

Social Morality / Ethics of AI

Other Guest Attendees

  1. Steve Whittaker (University of California, Santa Cruz)

  2. Martin Schuessler (Technische Universität Berlin, Weizenbaum Institute) - Workshop SV

  3. Divya Ramesh (University of Michigan) - Chairs SV

Organizers

Kori Inkpen is a Principal Researcher at Microsoft and a member of the MSR AI team. Dr. Inkpen’s research interests are currently focused on Human+AI Collaboration to enhance decision making, particularly in high-impact social contexts, which inevitably delves into issues of bias and fairness in AI. Kori has been a core member of the CHI community for over 20 years.

Munmun De Choudhury is an Assistant Professor in the School of Interactive Computing at Georgia Tech, where she directs the Social Dynamics and Wellbeing Lab. Dr. De Choudhury’s research interests lie at the intersection of machine learning, social media, and health, with a focus on assessing, understanding, and improving mental health from online social interactions.

Stevie Chancellor is a PhD Candidate in Human-Centered Computing at Georgia Tech. She researches data-driven algorithms to understand deviant mental health behavior in online communities. Her work combines techniques from machine learning and data science with human-centered insights about online communities and mental health, focusing on identifying and predicting content from pro-eating disorder communities on social networks.

Michael Veale is a doctoral researcher in responsible public sector machine learning at the Dept. of Science, Technology, Engineering & Public Policy at University College London. His work spans HCI, law, and policy, looking at how societal and legal concerns around machine learning are understood and coped with on the ground.

Eric P. S. Baumer is an Assistant Professor of Computer Science and Engineering at Lehigh University. His research focuses on interactions with AI and machine learning algorithms in the context of social computing systems. Recent work includes using computational tools to identify the language of political framing, and studying technology refusal in the context of Facebook.

In Short

In recent years, AI systems have become both more powerful and increasingly promising for integration in a variety of application areas. Attention has also been called to the social challenges these systems bring, particularly how they might fail or even actively disadvantage marginalised social groups, or how their opacity might make them difficult to oversee and challenge. In the context of these and other challenges, the roles of humans working in tandem with these systems will be important, yet the HCI community has so far been only a quiet voice in these debates. This workshop aims to catalyse and crystallise an agenda around HCI and AI. Topics of interest (see Guiding Research Questions) include methods to collaboratively explore AI systems and identify unusual failure modes; methods to capture and replicate the social practices of critical engagement and oversight around AI systems; integrating insights from AI systems with complementary human insights; avoiding over- and under-reliance while preserving enjoyable user experiences; creating usable ML/AI systems for HCI practitioners; and modes of engagement with novel AI ‘users’, such as those captured and represented in training sets.

Call for Participation

We invite submissions for a one-day workshop to discuss critical questions in bringing the human into the development and deployment of artificial intelligence (AI) systems.

Papers should be 2-4 pages long in the CHI Extended Abstract format, and may address any topic related to the intersections of HCI, AI, and machine learning. This includes, but is not limited to, ongoing work; reflections on past work; applying methods from HCI and design to AI; and emergent ethical, political, and social challenges. A set of guiding research questions is provided below.

Submissions are due no later than February 12, 2019. Notifications will go out no later than March 1, 2019. At least one author of each accepted position paper must attend the workshop, and all participants must register for both the workshop and at least one day of the conference.

Participants will be selected based on the quality and clarity of their submissions, how well they reflect the interests of the workshop, and their prior experience with and interest in the topic. We will focus on recruiting a diverse group of participants, balancing students and faculty; industry practitioners and academic audiences; contribution areas within HCI and AI research; and representation of different cultures, genders, and ethnic backgrounds.

Context

Advances in deep machine learning as well as in hardware have pushed the development of artificial intelligence (AI) systems. By developing machine learning (ML) techniques to process large volumes and modalities of data, by turning voluminous sources of data into signals, and by providing robust predictions of critical outcomes, AI systems can both supplement and replace human decision-making [36]. AI has begun to make great strides on problems of societal significance, with contributions in areas such as poverty and development [21], education [18], agriculture and the environment [10,33], and healthcare [22]. At the same time, AI has begun to expand our ability to make important decisions in business, law, finance, and politics [2,3,6,13,23,25,32], to more easily reach and help vulnerable populations [7,8,20], to predict health and wellbeing [5,9], to more quickly detect people at risk of poor outcomes and provide early interventions [31], and sometimes to identify actionable or personalized solutions [4,19].

However, in recent years, the pervasive adoption of AI systems in real-world contexts has raised concerns among both researchers and practitioners about bias, accountability, fairness, and discrimination. To address these problems, machine learning researchers and practitioners have focused on providing mathematical insights to correct issues such as bias and discrimination and to make algorithmic choices more transparent. These researchers have focused on improving the algorithms themselves to correct for bias [11,17,30] and improve interpretability [28,38]. This area has seen tremendous growth in the past few years, as can be seen in the many outlets for this work, including the ACM Fairness, Accountability, and Transparency (FAT*) conference and multiple FAT-themed workshops (FAT/ML for machine learning, FATREC for recommender systems, etc.) at premier ML conferences. This work provides important computational prerequisites and scaffolding necessary for the responsible deployment of machine learning.
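To give a concrete flavor of the criteria this line of work formalizes, here is a minimal sketch (in Python, using hypothetical data) of the demographic parity difference, one of the simplest group-fairness measures studied in work like [11]; it illustrates the general shape of such criteria rather than the method of any particular cited paper.

    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute gap in positive-prediction rates between two groups.

        y_pred: binary predictions (0/1) from any classifier.
        group:  binary group-membership indicator (0/1).
        A value near 0 means the classifier assigns positive outcomes
        to both groups at similar rates under this simple criterion.
        """
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    # Hypothetical predictions and group labels, for illustration only.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_difference(preds, groups))  # 0.5: a large gap

Criteria like this are deliberately simple; much of the work cited above concerns when such measures are appropriate, how they trade off against accuracy and each other, and how to satisfy them during training.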

However, the human is still a critical, if not the central, component of many scenarios where AI is being advocated as an assistant to, or an augmentation of, human intelligence. While most AI systems offer robust empirical performance on real-world problems, many of these approaches are developed opaquely and in isolation, without appropriate involvement of the human stakeholders who use these systems or are most affected by them. As the following fictional exemplar implies [26]: “[..] we can’t just tell the doctor ‘my neural network says this patient has cancer!’ The doctor just won’t accept that!”. Human involvement in AI system design, development, and evaluation is critical [16] to ensure that the insights being derived are practical, and that the systems built are meaningful and relatable to those who need them. Some recent HCI work has focused on adoption issues of this kind [35,37], but it remains unclear how the characteristics of emerging AI technologies may interact with existing understanding of decision-support and expert systems in the wild.

It is also important to prevent unintended consequences and to alleviate various risks stemming from bias, errors, irresponsible use, misaligned expectations, privacy concerns, and potential issues around lack of trust, interpretability, and accountability. Moreover, human activities and behaviors are deeply contextual, nuanced, and laden with subjectivity – aspects that many current AI systems often fail to account for adequately [24]. We need to be able to translate AI systems into interactive, usable, and actionable technologies that function in the natural contexts of all human stakeholders in a bias-free manner. This, in turn, requires augmenting these systems with orthogonal but complementary human-centered insights that go beyond aggregated assessments and inferences to ones that factor in individuals’ differences, demands, values, expectations, and preferences [12,29]. Finally, the success of AI systems in the real world requires multi-disciplinary partnerships that bring diverse perspectives to problems that are as much human problems as they are AI problems.

In sum, despite the importance of people in the development, deployment, and use of AI systems, Human-Computer Interaction (HCI) is often not a core component of these research questions. While AI researchers have recently begun to note this important gap in popular discourse, e.g., “Despite its name, there is nothing ‘artificial’ about this technology – it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.”1, we argue that more comprehensive inclusion of HCI’s unique perspectives is essential to solving these challenging societal questions. Therefore, through this workshop, we ask the fundamental question: Where is the human in AI research?

Guiding Research Questions

Our workshop provides an opportunity for researchers and practitioners interested in the intersection of HCI and AI to come together to share interests and discuss ways to move the field forward. We provide a set of guiding questions for the community to consider:

  1. Explainable and Explorable AI: What does the human need to effectively utilize AI insights? How can users explore AI systems’ results and logic to identify failure modes that might not be easy to spot? Examples might be undesirable impacts on latent groups not corresponding to categories in the dataset [34], difficult-to-spot changes (‘concept drift’), or feedback loops in the socio-technical phenomena the AI system is modelling over time [14]; a minimal illustration appears after this list.

  2. Documentation and Review: Recent work has begun to examine how models and datasets might be documented in context [15,27]. Less considered, but called for by practitioners on the ground [35], is how the social routines supporting oversight, human-AI synergy, and the like might be effectively packaged up and documented, particularly in new or changing environments with high staff turnover, or in the context of model trading.

  3. Integrating Artificial and Human Intelligence: AI systems and humans both have unique abilities and are typically better at certain complementary tasks than others. For instance, while AI systems can summarize voluminous data to identify latent patterns, humans can extract meaningful, relatable, and theoretically grounded insights from such patterns. What kind of research designs or problems are most amenable to and would benefit the most from combining artificial and human intelligence? What challenges might surface in attempting to do so?

  4. Collaborative Decision Making: How can we harness the best of humans and algorithms to make better decisions than either alone? How do we ensure that when there is a human in the loop, such as in complex or life-changing decision-making, their role remains critical and meaningful, while creating and maintaining an enjoyable user experience? Where is the line between decision support that anticipates the needs of the user and decision support that removes the user’s ability to bring novel, qualitative, critical knowledge to the system’s goals?

  5. AI/ML in the HCI Design Process: How can algorithmic tools be made more readily accessible during the HCI design process to those whose expertise lies outside of machine learning, and computer science more broadly? What are some successful AI-HCI collaborations? What made them work? Where do the barriers exist and how might we overcome them?

  6. Representing Diverse Human Roles and Relationships in AI Systems: AI systems often involve humans in capacities other than the traditional "user"; for instance, individuals who conceptualize the system, the developers, the people who evaluate the underlying machine learning models, and those whose data the system draws from to make inferences (human stakeholders). What approaches – across the design, implementation, evaluation, and deployment processes – help account for the variety of relationships that people have with AI systems?

  7. Critical Views of AI: Work in fields such as science and technology studies (STS), media studies, and other areas has examined the social, political, and economic ramifications of AI systems [16]. To date, little of this work has been incorporated into HCI [1]. How can critical perspectives be brought into a meaningful, productive dialog with design- and implementation-oriented work? In short, how do we foster a productive dialog between researchers in these different traditions?
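As a concrete illustration of the first guiding question above, the sketch below (in Python) shows one crude way an exploration tool might surface a possible concept-drift signal to a human reviewer. The window size and threshold here are arbitrary assumptions for illustration; deployed systems would draw on established drift-adaptation methods [14].

    from collections import deque

    class DriftMonitor:
        """Flag when the recent positive-prediction rate drifts from a baseline.

        A deliberately simple signal meant for a human reviewer to
        investigate, not a tested drift-detection method.
        """
        def __init__(self, baseline_rate, window=200, threshold=0.15):
            self.baseline = baseline_rate       # positive rate at deployment time
            self.recent = deque(maxlen=window)  # sliding window of predictions
            self.threshold = threshold          # alert when the gap exceeds this

        def observe(self, prediction):
            """Record one binary prediction; return an alert string or None."""
            self.recent.append(prediction)
            if len(self.recent) == self.recent.maxlen:
                gap = abs(sum(self.recent) / len(self.recent) - self.baseline)
                if gap > self.threshold:
                    return f"possible drift: recent rate differs from baseline by {gap:.2f}"
            return None

The point of such a signal is that it is surfaced to a person who can investigate causes such as feedback loops or population change, rather than silently triggering retraining; how that handoff should work is precisely the kind of question this workshop asks.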

Workshop Schedule

Room: Carron 1

0900 - Welcome and Introduction (Kori Inkpen)

0915 - Keynote Speaker - Jenn Wortman Vaughan

1020 - Mid-Morning Break

1045 - Research Speed Dating (Stevie Chancellor)

1130 - Brainstorm Key Areas for HCI Growth (Eric Baumer)

1220 - Lunch Break

1330 - Breakout Groups (Munmun de Choudhury)

1520 - Mid-afternoon Break

1545 - Report Back from Breakout Groups (Munmun de Choudhury)

1630 - Brainstorm Next Steps (Kori Inkpen)

1730 - Workshop Concludes

1900 - Workshop Happy Hour (optional): Taphouse Bar & Kitchen, 1046 Argyle St, Glasgow

Research “Speed Dating”. Participants will have 60 seconds to introduce themselves to another workshop member, give a brief description of their research, and say what they hope to get out of the workshop. This will serve as an ice-breaker activity, and we have found this style of introduction very effective in past workshops.

Next Steps. We will brainstorm important next steps to continue these conversations, strengthen the community of HCI researchers working on Human+AI problems, and facilitate rich collaborations with other disciplines. Additionally, we will discuss ways to have broader impact by ensuring that this topic is central to HCI education.

References

1. Eric P. S. Baumer. 2017. Toward Human-Centered Algorithm Design. Big Data & Society 4, 2. https://doi.org/10.1177/2053951717718854

2. Richard Berk. 2012. Criminal justice forecasts of risk: A machine learning approach. Springer Science & Business Media.

3. Adam Bermingham and Alan Smeaton. 2011. On using Twitter to monitor political sentiment and predict election results. In Proceedings of the Workshop on Sentiment Analysis Where AI Meets Psychology (SAAIP 2011), 2–10.

4. Peter Brusilovsky, Alfred Kobsa, and Wolfgang Nejdl. 2007. The adaptive web: Methods and strategies of web personalization. Springer Science & Business Media.

5. Stevie Chancellor, Zhiyuan Lin, Erica L Goodman, Stephanie Zerwas, and Munmun De Choudhury. 2016. Quantifying and predicting mental illness severity in online pro-eating disorder communities. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, 1171–1184.

6. Hsinchun Chen, Roger HL Chiang, and Veda C Storey. 2012. Business intelligence and analytics: From big data to big impact. MIS Quarterly 36, 4: 1165–1188.

7. Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting depression via social media. ICWSM 13: 1–10.

8. Munmun De Choudhury and Emre Kiciman. 2018. Integrating artificial and human intelligence in complex, sensitive problem domains: Experiences from mental health. AI Magazine 39, 3.

9. Munmun De Choudhury, Emre Kiciman, Mark Dredze, Glen Coppersmith, and Mrinal Kumar. 2016. Discovering shifts to suicidal ideation from mental health content in social media. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2098–2110.

10. Thomas G Dietterich. 2009. Machine learning in ecosystem informatics and sustainability. In IJCAI, 8–13.

11. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In ITCS’12. 214–226. https://doi.org/10.1145/2090236.2090255

12. Hamid Ekbia, Michael Mattioli, Inna Kouper, Gary Arave, Ali Ghazinejad, Timothy Bowman, Venkata Ratandeep Suri, Andrew Tsou, Scott Weingart, and Cassidy R Sugimoto. 2015. Big data, bigger dilemmas: A critical review. Journal of the Association for Information Science and Technology 66, 8: 1523–1545.

13. Jorge Galindo and Pablo Tamayo. 2000. Credit risk assessment using statistical and machine learning: Basic methodology and risk modeling applications. Computational Economics 15, 1-2: 107–143.

14. J Gama, Indre Žliobaitė, A Bifet, M Pechenizkiy, and A Bouchachia. 2014. A survey on concept drift adaptation. ACM Computing Surveys 46, 4. https://doi.org/10.1145/2523813

15. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2018. Datasheets for datasets. Presented at FAT/ML 2018. Retrieved from https://arxiv.org/abs/1803.09010

16. Tarleton Gillespie and Nick Seaver. Critical Algorithm Studies: A Reading List. Social Media Collective.

17. Sara Hajian and Josep Domingo-Ferrer. 2012. Direct and indirect discrimination prevention methods. In Discrimination and privacy in the information society. Springer, Berlin, Heidelberg, 241–254.

18. Jiazhen He, James Bailey, Benjamin IP Rubinstein, and Rui Zhang. 2015. Identifying at-risk students in massive open online courses. In AAAI, 1749–1755.

19. Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yanxin Shi, Antoine Atallah, Ralf Herbrich, and Stuart Bowers. 2014. Practical lessons from predicting clicks on ads at Facebook. In Proceedings of the Eighth International Workshop on Data Mining for Online Advertising, 1–9.

20. Muhammad Imran, Carlos Castillo, Ji Lucas, Patrick Meier, and Sarah Vieweg. 2014. AIDR: Artificial intelligence for disaster response. In Proceedings of the 23rd International Conference on World Wide Web, 159–162.

21. Neal Jean, Marshall Burke, Michael Xie, W Matthew Davis, David B Lobell, and Stefano Ermon. 2016. Combining satellite imagery and machine learning to predict poverty. Science 353, 6301: 790–794.

22. Hian Chye Koh and Gerald Tan. 2011. Data mining applications in healthcare. Journal of Healthcare Information Management 19, 2: 65.

23. Bjoern Krollner, Bruce Vanstone, and Gavin Finnie. 2010. Financial time series forecasting with machine learning techniques: A survey.

24. David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani. 2014. The parable of Google Flu: Traps in big data analysis. Science 343, 6176: 1203–1205.

25. Wei-Yang Lin, Ya-Han Hu, and Chih-Fong Tsai. 2012. Machine learning in financial crisis prediction: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42, 4: 421–436.

26. Zachary C Lipton. 2017. The doctor just won’t accept that! arXiv preprint arXiv:1711.08037.

27. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa D Raji, and Timnit Gebru. 2019. Model Cards for Model Reporting. In Proceedings of ACM FAT* ’19. Retrieved October 17, 2018 from https://arxiv.org/abs/1810.03993

28. Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. 2017. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition 65: 211–222.

29. Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kiciman. 2016. Social data: Biases, methodological pitfalls, and ethical boundaries.

30. Dino Pedreschi, Salvatore Ruggieri, and Franco Turini. 2008. Discrimination-aware data mining. In ACM KDD’08. ACM, Las Vegas, Nevada.

31. John Pestian, Henry Nasrallah, Pawel Matykiewicz, Aurora Bennett, and Antoon Leenaars. 2010. Suicide note classification using natural language processing: A content analysis. Biomedical Informatics Insights 3: BII.S4706.

32. Harry Surden. 2014. Machine learning and law. Wash. L. Rev. 89: 87.

33. Deepak Vasisht, Zerina Kapetanovic, Jongho Won, Xinxin Jin, Ranveer Chandra, Sudipta N Sinha, Ashish Kapoor, Madhusudhan Sudarshan, and Sean Stratman. 2017. FarmBeats: An IoT platform for data-driven agriculture. In NSDI, 515–529.

34. Michael Veale and Reuben Binns. 2017. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society 4, 2. https://doi.org/10/gdcfnz

35. Michael Veale, Max Van Kleek, and Reuben Binns. 2018. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In CHI’18. https://doi.org/10/ct4s

36. Justin Wolfers and Eric Zitzewitz. 2004. Prediction markets. Journal of Economic Perspectives 18, 2: 107–126.

37. Qian Yang, John Zimmerman, Aaron Steinfeld, Lisa Carey, and James F. Antaki. 2016. Investigating the heart pump implant decision process: Opportunities for decision support tools to help. In CHI 2016, 4477–4488.

38. Jiaming Zeng, Berk Ustun, and Cynthia Rudin. 2017. Interpretable classification models for recidivism prediction. Journal of the Royal Statistical Society: Series A (Statistics in Society) 180, 3: 689–722. https://doi.org/10.1111/rssa.12227

  1. https://www.nytimes.com/2018/03/07/opinion/artificial-intelligence-human.html