Towards Debiasing Sentence Representations
Paul Pu Liang, Irene Mengze Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. ACL 2020. Code available at https://github.com/pliang279/sent_debias.
Abstract: As natural language processing methods are increasingly deployed in real-world scenarios such as healthcare, legal systems, and social science, it becomes necessary to recognize the role they potentially play in shaping social biases and stereotypes. Previous work has revealed the presence of social biases in widely used word embeddings involving gender, race, religion, and other social constructs. While some methods were proposed to debias these word-level embeddings, there is a need to perform debiasing at the sentence level given the recent shift towards new contextualized sentence representations such as ELMo and BERT. In this paper, we investigate the presence of social biases in sentence-level representations and propose a new method, Sent-Debias, to reduce these biases. We show that Sent-Debias is effective in removing biases, and at the same time, preserves performance on sentence-level downstream tasks. We hope that our work will inspire future research on characterizing and removing social biases from widely adopted sentence representations for fairer NLP.

To install the BERT models, go to debias-BERT/ and run pip install .
This paper investigates the post-hoc removal of social biases from pretrained sentence representations. Compared to word-level representations, contextualized models such as ELMo and BERT have achieved better performance on multiple NLP tasks, and as their usage proliferates across real-world applications it becomes important to measure and reduce the social biases they encode. The proposed Sent-Debias method estimates the bias subspace of sentence representations using a diverse set of templates drawn from naturally occurring text corpora, and then removes the component of each representation that lies in this subspace. One evaluated variant, "BERT post CoLA", is BERT fine-tuned on the Corpus of Linguistic Acceptability (CoLA). To ensure that debiasing does not hurt performance on downstream tasks, the authors report the performance of debiased BERT and ELMo on SST-2 and CoLA, obtained by training a linear classifier on top of the debiased sentence representations. The debiasing results in Table 3 show that, for both binary gender bias and multiclass religion bias, the proposed method reduces the amount of bias as measured by the given tests and metrics.
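Sent-Debias's subspace-based debiasing can be sketched with NumPy. This is a minimal illustration assuming sentence embeddings for contrastive template pairs (e.g. "he"/"she" instantiations) are already computed; the function names are illustrative, not the repository's API:

```python
import numpy as np

def bias_subspace(pairs, k=1):
    """Estimate a k-dimensional bias subspace from paired sentence
    embeddings. `pairs` has shape (n_pairs, 2, d), where the two
    entries of a pair differ only in the bias attribute.
    """
    # Center each pair around its own mean, then stack the residuals,
    # as in PCA-based bias subspace estimation.
    centered = pairs - pairs.mean(axis=1, keepdims=True)   # (n, 2, d)
    diffs = centered.reshape(-1, pairs.shape[-1])          # (2n, d)
    # The top-k right singular vectors span the bias subspace.
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:k]                                          # (k, d)

def debias(h, V):
    """Remove the projection of representation h onto the subspace
    spanned by the orthonormal rows of V (hard-debiasing step)."""
    return h - (h @ V.T) @ V
```

After fitting `V` on template embeddings, `debias` can be applied to any new sentence representation before it is used downstream.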
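The bias tests and metrics used in this line of work compare cosine associations between sets of sentence representations. A SEAT-style effect size can be sketched as follows; this is an illustrative sketch of the general association-test idea, not the paper's exact evaluation code:

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    """Differential association of w with attribute sets A and B."""
    return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

def effect_size(X, Y, A, B):
    """Normalized difference in associations between target sets X and Y;
    values near zero indicate little measured bias."""
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```

Applying such a test before and after debiasing (with targets and attributes encoded by the same sentence encoder) is how reductions in measured bias are typically reported.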