- Invited Speaker
- Accepted Papers
- Organizers
- In Short
- Context
- Guiding Research Questions
- Workshop Schedule
- Call for Participation
- References
Invited Speaker
Speaker | Title
---|---
Dr. Jenn Wortman Vaughan | Intelligibility Throughout the Machine Learning Lifecycle
Accepted Papers
Human+AI Modeling & Design
- Building Shared Mental Models between Humans and AI for Effective Collaboration. Harmanpreet Kaur (University of Michigan); Alex C. Williams (University of Waterloo); Walter S. Lasecki (University of Michigan)
Intelligent systems have become increasingly common in settings ranging from performing everyday tasks more easily to decision-making for complex domains (e.g., healthcare, autonomous driving, criminal justice). Given this rising ubiquity of artificial intelligence (AI), both researchers and industry practitioners are exploring ways to better integrate AI agents in tasks that people do at home or work. However, these systems are currently limited because of gaps in the understanding between humans and their AI counterparts. In this paper, we propose methods for building shared mental models between humans and AI to enable human-AI collaboration at a level where both can be equal partners working on a shared task. We ground our approach in existing literature from CSCW and UX design.
- A Human-Centered Approach to Designing Teachable Systems. Christopher J. MacLellan (Soar Technology, Inc.); Erik Harpstead (Carnegie Mellon University); Rob Sheline (Soar Technology, Inc.)
- Transparency in Maintenance of Recruitment Chatbots. Kit Kuksenok (jobpal) and Nina Praß (jobpal)
- How concepts from distributed cognition inform human-AI work. Antti Salovaara (Aalto University), Esko Penttinen (Aalto University), and Tapani Rinta-Kahila (Aalto University); Aleksandre Asatiani (Aston University)
Through several case studies on organizations deploying AI, we have started to develop a framework for human–AI collaboration and its management in work contexts. The literature on distributed cognition, in which organizational activities are viewed as an alliance of computation-capable agents, humans and computers alike, offers several useful concepts and viewpoints for this task. In this paper, we summarise some parts of our ongoing theoretical but practice-oriented work, addressing in particular organizing for reliability of AI-based systems in high-risk contexts, the risk of worker deskilling, and the management of collective knowledge across human and AI agents.
- Human Trust Modeling for Bias Mitigation in Artificial Intelligence. Fabian Sperrle (University of Konstanz), Udo Schlegel (University of Konstanz), Mennatallah El-Assady (University of Konstanz), and Daniel A. Keim (University of Konstanz)
- Accounting for the Human when Designing with AI - Challenges Identified. Nathalie Stembert (Rotterdam University of Applied Sciences) and Maaike Harbers (Rotterdam University of Applied Sciences)
Now that the application of Artificial Intelligence (AI) is becoming more mainstream, it is applied in many different fields, and consequently, it is starting to play a more prominent role in design processes. In current mainstream HCI (Human-Computer Interaction) design frameworks, the human (or user) is seen as the main focus, and stakeholders’ perspectives are taken into account throughout the whole problem-solving process to design thoughtful solutions. The increased complexity of design processes caused by the rise of AI, however, poses new challenges to these existing approaches, particularly for involving the human in the design process. Five challenges that can influence accounting for the human in design processes involving AI are identified and elaborated upon: 1) insufficient AI literacy of designers and users, 2) the black-box nature of neural networks, 3) where to start: design vs. data, 4) customized solutions for narrow user segments, and 5) thinking ahead: an extended collaborative design process. These challenges arise across the exploration, design, implementation, evaluation, and deployment phases. This extended abstract discusses possible approaches per challenge on how to warrant integration of the human perspective.
- Exploring the User Experience of Artificial Intelligence Applications: User Survey and Human-AI Relationship Model. Kaisa Väänänen (Tampere University), Helinä Pohjola (Tampere University), and Aino Ahtinen (Tampere University)
Artificial Intelligence (AI) applications are spreading to our everyday lives. Supporting positive User Experience (UX) of novel applications is a crucial factor in the acceptance of those applications. Even though much of AI functionality happens “automatically” without users needing to interact with it or be aware of it, issues specific to users’ perceptions of AI applications deserve attention. Whereas the AI community has started to address important issues such as transparency and bias in algorithm design, the broader UX perspective of AI applications has received less attention. In this paper we explore this issue in two ways: first, by presenting results from a pilot survey on users’ perspectives of AI applications and, second, by proposing a model for Human-AI interaction based on the metaphor of human relationships.
AI Explainability & Experimentation
- Describing Agent Behavior to People. Ofra Amir (Technion - Israel Institute of Technology)
People increasingly interact with AI-based agents. Improving people's ability to understand and anticipate the behavior of such agents is important as it can help facilitate trust and collaboration between people and agents. In this paper, we review our recent efforts towards the development of methods for summarizing agent behavior to people. The goal of such summaries is to provide people with a global understanding of the agent's strategy, thus improving their understanding of the agent's strengths and limitations. We present initial methods for generating such summaries, and discuss open challenges which require integration of HCI and AI perspectives.
- Explainable AI and Smart Home Systems For Health. Rachel Eardley (University of Bristol), Ewan Soubutts (University of Bristol), and Aisling Ann O’Kane (University of Bristol)
- Visualizing Item Spaces to Increase Transparency and Control in Recommender Systems. Johannes Kunkel (University of Duisburg-Essen) and Jürgen Ziegler (University of Duisburg-Essen)
Recommender systems (RS) are very common tools designed to help users choose items from a large number of alternatives. While research in RS has mainly focused on algorithmic precision, it is slowly starting to take user-centric aspects into account as well. In this paper we present two demonstrative applications that aim at increasing transparency and control in RS. Both prototypes follow the same method. As a baseline, the entire item space of a domain is visualized using a map-like interface. Inside this depiction users can express their preferences, to which the RS reacts with matching recommendations. To change recommendations, users can alter their expressed preferences, which creates a continuous feedback loop between user and RS.
- The Role of Experimentation in Human-AI Interaction. Edith Law (University of Waterloo) and Xiaowei Kuang (University of Waterloo)
- Considerations on Explainable AI and Users’ Mental Models. Heleen Rutjes (Eindhoven University of Technology), Martijn C. Willemsen (Eindhoven University of Technology), and Wijnand A. IJsselsteijn (Eindhoven University of Technology)
As the aim of explaining is understanding, XAI is successful when the user has a good understanding of the AI system. This paper shows, using theories from the social sciences and HCI, that appropriately capturing and accounting for the user's mental model while explaining is key to successful XAI.
- The Curious Case of Providing Intelligibility for Smart Speakers. Jo Vermeulen (Aarhus University); Brian Lim (National University of Singapore); Mirzel Avdic (Aarhus University); Danding Wang and Ashraf Abdul (National University of Singapore)
AI In the Real World
- Flud: a hybrid crowd-algorithm approach for visualizing biological networks. Aditya Bharadwaj (Virginia Tech); David Gwizdala (Bridgewater Associates); Yoonjin Kim (Virginia Tech); Kurt Luther (Virginia Tech); T. M. Murali (Virginia Tech)
Many fields of science require meaningful and visually appealing layouts of graphs. However, the problem remains challenging due to multiple conflicting criteria and complex domain-specific constraints. In this workshop paper, we present a gamified graph layout task where the goal of the players is to create a layout that optimises a score based on user-defined priorities. We propose a novel hybrid approach wherein non-experts and a simulated annealing algorithm build on each other’s progress. To facilitate this collaborative process, we have developed Flud, an online game with a purpose that leverages the combination of humans’ cognitive ability to observe patterns and the computational accuracy of simulated annealing to draw graph layouts that can help scientists visualize and understand complex networks.
- Meeting in the Middle: The Interpretation Gap Between People and Machines. Anastasia Kuzminykh (University of Waterloo); Sean Rintel (Microsoft Research, Cambridge)
Effectively bridging the fields of HCI and AI requires operationalizing what human users treat as meaningful in the stream of environmental and content information. Research has yet to systematically address the significant gap between levels of granularity and interpretation of machine labels and of human comprehension. To illustrate the problem, we provide some preliminary results from our study on using machine vision to make work meetings more inclusive, particularly for visually impaired participants.
- Enhancing education with instructor-in-the-loop algorithms. Weiwen Leung (University of Toronto) and Joseph Jay Williams (University of Toronto)
- How to Blend Journalistic Expertise with Artificial Intelligence For Research and Verifying News Stories? Sondess Missaoui, Marisela Gutierrez-Lopez, Andrew MacFarlane, Stephann Makri, Colin Porlezza, and Glenda Cooper (City, University of London)
- A B C, IT’s Not Easy as 1 2 3: Making Choices at a Danish Jobcenter. Anette C. M. Petersen and Lars Rune Christensen (IT University of Copenhagen); Thomas Hildebrandt and Naja Holten-Møller (Copenhagen University)
In this paper we investigate the reasoning of caseworkers in the Danish public sector tasked with placing or moving receivers of benefits in or between “target groups” for administrative purposes. The investigation aims to contribute to a discussion of collaborative decision making, which may potentially involve humans and algorithms. Our preliminary findings show that caseworkers make choices based on foundations that may differ from the assumptions of mainstream AI. As such, our study raises important questions for the design of AI, especially concerning citizen-centric casework.
Social Morality / Ethics of AI
- Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions. Ali Alkhatib (Stanford University) and Michael Bernstein (Stanford University)
- Privacy Implications of Human Intelligence Powered Assistive Tools for Visually Impaired People. Taslima Akter (Indiana University), Bryan Dosono (Syracuse University), Tousif Ahmed (Indiana University), Apu Kapadia (Indiana University), Bryan Semaan (Syracuse University)
Camera-based assistive technologies have the potential to empower people with visual impairments to obtain more independence. People with visual impairments are adopting artificial intelligence (AI) and human intelligence (HI) based technologies in their daily lives to overcome their accessibility barriers. We focus on the privacy concerns experienced by visually impaired people while using HI-based assistive technologies and report their preferences on AI versus HI-based assistive technologies in different situational contexts.
- Fairness Sample Complexity and the Case for Human Intervention. Ananth Balashankar (New York University); Alyssa Lees (Google)
With the aim of building machine learning systems that incorporate standards of fairness and accountability, we explore explicit subgroup sample complexity bounds. The work is motivated by the observation that classifier predictions for real-world datasets often demonstrate drastically different metrics, such as accuracy, when subdivided by specific sensitive variable subgroups. The reasons for these discrepancies are varied and include, but are not limited to, the influence of mitigating variables, institutional bias, underlying population distributions, and sampling bias. Among the numerous definitions of fairness that exist, we argue that, at a minimum, principled ML practices should ensure that classification predictions are able to mirror the underlying sub-population distributions. However, as the number of sensitive variables increases, populations meeting at the intersectionality of these variables may simply not exist or may not be large enough to provide accurate samples for classification. In these increasingly likely scenarios, we make the case for human intervention and applying situational and individual definitions of fairness. In this paper we present lower bounds of subgroup sample complexity for metric-fair learning based on the theory of Probably Approximately Metric Fair Learning. We demonstrate that for a classifier to approach a definition of fairness in terms of specific sensitive variables, adequate subgroup population samples need to exist and the model dimensionality has to be aligned with subgroup population distributions. In cases where this is not feasible, we propose an approach using individual fairness definitions for achieving alignment. We look at two commonly explored UCI datasets under this lens and suggest human interventions for data collection for specific subgroups to achieve approximate individual fairness for linear hypotheses.
- Contextual Morality for Human-Centered Machine Learning. Niels van Berkel (The University of Melbourne), Jorge Goncalves (The University of Melbourne), Benjamin Tag (Keio University); Simo Hosio (University of Oulu)
- Designing for Infrastructural AI: Hidden Labors, Unruly Contingencies, and Ecological Costs. Elizabeth Kaziunas (AI Now Institute) and Roel Dobbe (AI Now Institute)
- What Makes Automated Suggestions Problematic? The Human Factor. Alexandra Olteanu and Fernando Diaz (Microsoft Research); Gabriella Kazai (Microsoft); Luke Stark and Mohamed Musbah (Microsoft Research)
Other Guest Attendees
- Steve Whittaker (University of California, Santa Cruz)
- Martin Schuessler (Technische Universität Berlin, Weizenbaum Institute) - Workshop SV
- Divya Ramesh (University of Michigan) - Chairs SV
Organizers
- Kori Inkpen is a Principal Researcher at Microsoft and a member of the MSR AI team. Dr. Inkpen’s research interests are currently focused on Human+AI Collaboration to enhance decision making, particularly in high-impact social contexts, work that inevitably delves into issues of bias and fairness in AI. Kori has been a core member of the CHI community for over 20 years.
- Munmun De Choudhury is an Assistant Professor in the School of Interactive Computing at Georgia Tech, where she directs the Social Dynamics and Wellbeing Lab. Dr. De Choudhury’s research interests lie at the intersection of machine learning, social media, and health, with a focus on assessing, understanding, and improving mental health from online social interactions.
- Stevie Chancellor is a PhD Candidate in Human Centered Computing at Georgia Tech. She researches data-driven algorithms to understand deviant mental health behavior in online communities. Her work combines techniques from machine learning and data science with human-centered insights around online communities and mental health, focusing on identifying and predicting content from pro-eating disorder communities on social networks.
- Michael Veale is a doctoral researcher in responsible public sector machine learning at the Dept. of Science, Technology, Engineering & Public Policy at University College London. His work spans HCI, law, and policy, looking at how societal and legal concerns around machine learning are understood and coped with on the ground.
- Eric P. S. Baumer is an Assistant Professor of Computer Science and Engineering at Lehigh University. His research focuses on interactions with AI and machine learning algorithms in the context of social computing systems. Recent work includes using computational tools to identify the language of political framing, and studying technology refusal in the context of Facebook.
In Short
In recent years, AI systems have become both more powerful and increasingly promising for integration in a variety of application areas. Attention has also been called to the social challenges these systems bring, particularly in how they might fail or even actively disadvantage marginalised social groups, or how their opacity might make them difficult to oversee and challenge. In the context of these and other challenges, the roles of humans working in tandem with these systems will be important, yet the HCI community has been only a quiet voice in these debates to date. This workshop aims to catalyse and crystallise an agenda around HCI and AI. Topics of interest (see Guiding Research Questions) include methods to collaboratively explore AI systems to identify unusual failure modes; methods to capture and replicate social practices of critical engagement and oversight with AI systems; integrating insights from AI systems with complementary human insights; avoiding over- and under-reliance while preserving enjoyable user experiences; creating usable ML/AI systems for HCI practitioners; and modes of engagement with novel AI ‘users’, such as those captured and represented in training sets.
Call for Participation
We invite submissions for a one-day workshop to discuss critical questions in bringing the human into the development and deployment of artificial intelligence (AI) systems.
Papers should be 2-4 pages long in the CHI Extended Abstract format, and may address any topics related to the intersections of HCI, AI, and machine learning. This includes but is not limited to ongoing work; reflections on past work; bringing methods from HCI and design to AI; and emergent ethical, political, and social challenges. A set of guiding research questions is provided below.
Submissions are due no later than February 12, 2019. The submission site is here. Participants will be selected based on the quality and clarity of their submissions as they reflect the interests of the workshop. Notifications will go out no later than March 1, 2019. At least one author of each accepted position paper must attend the workshop, and all participants must register for both the workshop and at least one day of the conference.
Participants will be selected based on their prior experience and interest in the workshop as well as the quality of their submissions. We will focus on recruiting a diverse group of participants, with a balance of students and faculty; industry practitioners and academic audiences; contribution areas within HCI and AI research; and representation of different cultures, genders, and ethnic backgrounds.
Context
Advances in deep machine learning as well as in hardware have pushed the development of artificial intelligence (AI) systems. By developing machine learning (ML) techniques to process large volumes and modalities of data, by turning voluminous sources of data into signals, and by providing robust predictions of critical outcomes, AI systems can both supplement and replace human decision-making [36]. AI has begun to make great strides in many problems of societal significance and has already made contributions in areas like development and poverty [21], education [18], agriculture and the environment [10,33], and healthcare [22]. At the same time, AI has begun to expand our ability to make important decisions in business, law, finance, and politics [2,3,6,13,23,25,32], to more easily reach and help vulnerable populations [7,8,20], to predict health and wellbeing [5,9], to more quickly detect people at risk of poor outcomes and provide early interventions [31], and sometimes to identify actionable or personalized solutions [4,19].
However, in recent years, the pervasive adoption of AI systems in real-world contexts has also raised concerns among researchers and practitioners about issues of bias, accountability, fairness, and discrimination. To address these problems, machine learning researchers and practitioners have focused on providing mathematical insights to correct issues such as bias, discrimination, and the transparency of algorithmic choices. These researchers have focused on improving the algorithms themselves to correct for bias [11,17,30] and improve interpretability [28,38]. This area has seen tremendous growth in the past few years, as can be seen in the many outlets for this work, including the ACM Fairness, Accountability, and Transparency (FAT*) conference and multiple FAT workshops (e.g., FAT/ML for machine learning, FATREC for recommender systems) at premier ML conferences. This work is providing important computational prerequisites and scaffolding necessary for responsible deployment of machine learning.
However, the human is still a critical, if not the central, component of many scenarios where AI is being advocated as an assistant to, or augmentation of, human intelligence. While most AI systems offer robust empirical performance for real-world problems, many of these approaches are developed opaquely and in isolation, without appropriate involvement of the human stakeholders who use these systems or are most affected by them. As the following fictional exemplar implies [26]: “[..] we can’t just tell the doctor ‘my neural network says this patient has cancer!’ The doctor just won’t accept that!”. Human involvement in AI system design, development, and evaluation is critical [16] to ensure that the insights being derived are practical, and that the systems built are meaningful and relatable to those who need them. Some recent HCI work has focused on adoption issues of this kind [35,37], but it remains unclear how the characteristics of emerging AI technologies may interact with existing understanding around decision-support or expert systems in-the-wild.
It is also important to prevent unintended consequences, and to alleviate various risks stemming from bias, errors, irresponsible use, misaligned expectations, privacy concerns, and potential issues around lack of trust, interpretability, and accountability. Moreover, human activities and behaviors are deeply contextual, nuanced, and laden with subjectivity – aspects that many current AI systems often fail to account for adequately [24]. We need to be able to translate AI systems into interactive, usable, and actionable technologies that function in the natural contexts of all human stakeholders in a bias-free manner. This, in turn, requires augmenting these systems with orthogonal but complementary human-centered insights that go beyond aggregated assessments and inferences to ones that factor in individuals’ differences, demands, values, expectations, and preferences [12,29]. Finally, the success of AI systems in the real world requires multi-disciplinary partnerships that bring diverse perspectives to these problems, which are as much human problems as they are AI problems.
In sum, despite the importance of people in the development, deployment, and use of AI systems, Human-Computer Interaction (HCI) is often not a core component of these research questions. While AI researchers have recently begun to note this important gap in popular discourse, e.g., “Despite its name, there is nothing ‘artificial’ about this technology – it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.”1, we argue that more comprehensive inclusion of HCI’s unique perspectives is essential to solving these challenging societal questions. Therefore, through this workshop, we ask the fundamental question: Where is the human in AI research?
Guiding Research Questions
Our workshop provides an opportunity for researchers and practitioners interested in the intersection of HCI and AI to come together to share interests and discuss ways to move the field forward. We provide a set of guiding questions for the community to consider:
- Explainable and Explorable AI: What does the human need to effectively utilize AI insights? How can users explore AI systems’ results and logic to identify failure modes that might not be easy to spot? Examples might be undesirable impacts on latent groups not corresponding to categories in the dataset [34], difficult-to-spot changes (‘concept drift’), or feedback loops in the socio-technical phenomena the AI system is modelling over time [14].
- Documentation and Review: Some work is beginning to understand how models and datasets might be documented in context [15,27]. Something less considered, but called for by practitioners on-the-ground [35], is how social routines supporting oversight, human-AI synergy, etc., might be effectively packaged up and documented, particularly in new or changing environments with high staff turnover, or in the context of model trading.
- Integrating Artificial and Human Intelligence: AI systems and humans both have unique abilities and are typically better at certain complementary tasks than others. For instance, while AI systems can summarize voluminous data to identify latent patterns, humans can extract meaningful, relatable, and theoretically grounded insights from such patterns. What kind of research designs or problems are most amenable to and would benefit the most from combining artificial and human intelligence? What challenges might surface in attempting to do so?
- Collaborative Decision Making: How can we harness the best of humans and algorithms to make better decisions than either alone? How do we ensure that when there is a human-in-the-loop—such as in complex or life-changing decision-making—their role remains critical and meaningful, while creating and maintaining an enjoyable user experience? Where is the line between decision support that anticipates the needs of the user and decision support that removes the user’s ability to bring in novel, qualitative, critical knowledge in service of the system’s goals?
- AI/ML in the HCI Design Process: How can algorithmic tools be made more readily accessible during the HCI design process to those whose expertise lies outside of machine learning, and computer science more broadly? What are some successful AI-HCI collaborations? What made them work? Where do the barriers exist and how might we overcome them?
- Representing Diverse Human Roles and Relationships in AI Systems: AI systems often involve humans in capacities other than the traditional "user"; for instance, individuals who conceptualize the system, the developers, the people who evaluate the underlying machine learning models, and those whose data the system draws from to make inferences (human stakeholders). What approaches – across the design, implementation, evaluation, and deployment processes – help account for the variety of relationships that people have with AI systems?
- Critical Views of AI: Work in fields such as science and technology studies (STS), media studies, and other areas has examined the social, political, and economic ramifications of AI systems [16]. To date, little of such work has been incorporated into HCI [1]. How can critical perspectives be brought into a meaningful, productive dialog with design- and implementation-oriented work? In short, how do we foster a productive dialog between researchers who critique AI systems and those who design and build them?
Workshop Schedule
Room: Carron 1
0900 - Welcome and Introduction (Kori Inkpen)
0915 - Keynote Speaker - Jenn Wortman Vaughan
1020 - Mid-Morning Break
1045 - Research Speed Dating (Stevie Chancellor)
1130 - Brainstorm Key Areas for HCI Growth (Eric Baumer)
1220 - Lunch Break
1330 - Breakout Groups (Munmun de Choudhury)
1520 - Mid-afternoon Break
1545 - Report Back from Breakout Groups (Munmun de Choudhury)
1630 - Brainstorm Next Steps (Kori Inkpen)
1730 - Workshop Concludes
1900 - Workshop Happy Hour (optional): Taphouse Bar & Kitchen, 1046 Argyle St, Glasgow
Research “Speed Dating”. Participants will get 60 seconds to introduce themselves to another member of the workshop, give a brief description of their research, and say what they hope to get out of participating in the workshop. This will serve as an ice-breaker activity for participants, and we have found this particular style of introduction very effective in past workshops.
Next Steps. Brainstorm important next steps to continue these conversations, strengthen the community of HCI researchers working on Human+AI problems, and facilitate rich collaborations with other disciplines. Additionally, we will discuss ways we can have broader impact by ensuring that this topic is central to HCI education.
References
1. Eric P. S. Baumer. 2017. Toward Human-Centered Algorithm Design. Big Data & Society 4, 2. https://doi.org/10.1177/2053951717718854
2. Richard Berk. 2012. Criminal justice forecasts of risk: A machine learning approach. Springer Science & Business Media.
3. Adam Bermingham and Alan Smeaton. 2011. On using Twitter to monitor political sentiment and predict election results. In Proceedings of the Workshop on Sentiment Analysis where AI meets Psychology (SAAIP 2011), 2–10.
4. Peter Brusilovsky, Alfred Kobsa, and Wolfgang Nejdl. 2007. The adaptive web: Methods and strategies of web personalization. Springer Science & Business Media.
5. Stevie Chancellor, Zhiyuan Lin, Erica L Goodman, Stephanie Zerwas, and Munmun De Choudhury. 2016. Quantifying and predicting mental illness severity in online pro-eating disorder communities. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, 1171–1184.
6. Hsinchun Chen, Roger HL Chiang, and Veda C Storey. 2012. Business intelligence and analytics: From big data to big impact. MIS quarterly 36, 4: 1165–1188.
7. Munmun De Choudhury, Michael Gamon, Scott Counts, and Eric Horvitz. 2013. Predicting depression via social media. ICWSM 13: 1–10.
8. Munmun De Choudhury and Emre Kiciman. 2018. Integrating artificial and human intelligence in complex, sensitive problem domains: Experiences from mental health. AI Magazine 39, 3.
9. Munmun De Choudhury, Emre Kiciman, Mark Dredze, Glen Coppersmith, and Mrinal Kumar. 2016. Discovering shifts to suicidal ideation from mental health content in social media. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2098–2110.
10. Thomas G Dietterich. 2009. Machine learning in ecosystem informatics and sustainability. In IJCAI, 8–13.
11. Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. 2012. Fairness through awareness. In ITCS’12. 214–226. https://doi.org/10.1145/2090236.2090255
12. Hamid Ekbia, Michael Mattioli, Inna Kouper, Gary Arave, Ali Ghazinejad, Timothy Bowman, Venkata Ratandeep Suri, Andrew Tsou, Scott Weingart, and Cassidy R Sugimoto. 2015. Big data, bigger dilemmas: A critical review. Journal of the Association for Information Science and Technology 66, 8: 1523–1545.
13. Jorge Galindo and Pablo Tamayo. 2000. Credit risk assessment using statistical and machine learning: Basic methodology and risk modeling applications. Computational Economics 15, 1-2: 107–143.
14. J Gama, Indre Žliobaitė, A Bifet, M Pechenizkiy, and A Bouchachia. 2013. A survey on concept drift adaptation. ACM Computing Surveys 1, 1. https://doi.org/10.1145/2523813
15. Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2018. Datasheets for datasets. Presented at FAT/ML 2018. Retrieved from https://arxiv.org/abs/1803.09010
16. Tarleton Gillespie and Nick Seaver. Critical Algorithm Studies: A Reading List. Social Media Collective.
17. Sara Hajian and Josep Domingo-Ferrer. 2012. Direct and indirect discrimination prevention methods. In Discrimination and privacy in the information society. Springer, Berlin, Heidelberg, 241–254.
18. Jiazhen He, James Bailey, Benjamin IP Rubinstein, and Rui Zhang. 2015. Identifying at-risk students in massive open online courses. In AAAI, 1749–1755.
19. Xinran He, Junfeng Pan, Ou Jin, Tianbing Xu, Bo Liu, Tao Xu, Yanxin Shi, Antoine Atallah, Ralf Herbrich, and Stuart Bowers. 2014. Practical lessons from predicting clicks on ads at Facebook. In Proceedings of the Eighth International Workshop on Data Mining for Online Advertising, 1–9.
20. Muhammad Imran, Carlos Castillo, Ji Lucas, Patrick Meier, and Sarah Vieweg. 2014. AIDR: Artificial intelligence for disaster response. In Proceedings of the 23rd International Conference on World Wide Web, 159–162.
21. Neal Jean, Marshall Burke, Michael Xie, W Matthew Davis, David B Lobell, and Stefano Ermon. 2016. Combining satellite imagery and machine learning to predict poverty. Science 353, 6301: 790–794.
22. Hian Chye Koh and Gerald Tan. 2011. Data mining applications in healthcare. J. Healthcare. Info. Manag. 19, 2: 65.
23. Bjoern Krollner, Bruce Vanstone, and Gavin Finnie. 2010. Financial time series forecasting with machine learning techniques: A survey.
24. David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani. 2014. The parable of google flu: Traps in big data analysis. Science 343, 6176: 1203–1205.
25. Wei-Yang Lin, Ya-Han Hu, and Chih-Fong Tsai. 2012. Machine learning in financial crisis prediction: A survey. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 42, 4: 421–436.
26. Zachary C Lipton. 2017. The doctor just won’t accept that! arXiv preprint arXiv:1711.08037.
27. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa D Raji, and Timnit Gebru. 2019. Model Cards for Model Reporting. In ACM FAT* ’19. Retrieved October 17, 2018 from https://arxiv.org/abs/1810.03993
28. Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. 2017. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition 65: 211–222.
29. Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kiciman. 2016. Social data: Biases, methodological pitfalls, and ethical boundaries.
30. Dino Pedreschi, Salvatore Ruggieri, and Franco Turini. 2008. Discrimination-aware data mining. In ACM KDD’08. ACM, Las Vegas, Nevada.
31. John Pestian, Henry Nasrallah, Pawel Matykiewicz, Aurora Bennett, and Antoon Leenaars. 2010. Suicide note classification using natural language processing: A content analysis. Biomedical informatics insights 3: BII–S4706.
32. Harry Surden. 2014. Machine learning and law. Wash. L. Rev. 89: 87.
33. Deepak Vasisht, Zerina Kapetanovic, Jongho Won, Xinxin Jin, Ranveer Chandra, Sudipta N Sinha, Ashish Kapoor, Madhusudhan Sudarshan, and Sean Stratman. 2017. FarmBeats: An iot platform for data-driven agriculture. In NSDI, 515–529.
34. Michael Veale and Reuben Binns. 2017. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Big Data & Society 4, 2. https://doi.org/10/gdcfnz
35. Michael Veale, Max Van Kleek, and Reuben Binns. 2018. Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. In CHI’18. https://doi.org/10/ct4s
36. Justin Wolfers and Eric Zitzewitz. 2004. Prediction markets. Journal of economic perspectives 18, 2: 107–126.
37. Qian Yang, John Zimmerman, Aaron Steinfeld, Lisa Carey, and James F. Antaki. 2016. Investigating the heart pump implant decision process: Opportunities for decision support tools to help. In CHI 2016, 4477–4488.
38. Jiaming Zeng, Berk Ustun, and Cynthia Rudin. 2017. Interpretable classification models for recidivism prediction. Journal of the Royal Statistical Society: Series A (Statistics in Society) 180, 3: 689–722. https://doi.org/10.1111/rssa.12227
1. https://www.nytimes.com/2018/03/07/opinion/artificial-intelligence-human.html