This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.
Authors:
(1) Yejin Bang, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology;
(2) Nayeon Lee, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology;
(3) Pascale Fung, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology.
Table of Links
- Abstract and Intro
- Related Work
- Approach
- Experiments
- Conclusion
- Limitations, Ethics Statement and References
- A. Experimental Details
- B. Generation Results
2. Related Work
Framing Bias Framing bias is a well-documented phenomenon in media studies (Wright and Goodwin, 2002; Entman, 2002, 2007, 2010; Gentzkow and Shapiro, 2006; Gentzkow et al., 2015; Beratšová et al., 2016). According to Gentzkow and Shapiro (2006), framing bias occurs when journalists and media outlets selectively emphasize certain aspects of a story while downplaying or ignoring others (informational bias), often with biased use of language (lexical bias). This can distort the public's perception of events, particularly when the framing serves a particular agenda or ideology (Kahneman and Tversky, 2013; Goffman, 1974). The impact of framing bias is especially evident in the political arena, where media outlets and political parties often engage in polarizing discourse designed to appeal to their respective bases (Scheufele, 2000; Chong and Druckman, 2007).
Automatic Mitigation Efforts To mitigate such bias, there have been various automatic media bias mitigation efforts (Fan et al., 2019; Hamborg et al., 2019; Morstatter et al., 2018; Laban and Hearst, 2017; Hamborg et al., 2017; Zhang et al., 2019b; van den Berg and Markert, 2020; Lee et al., 2022). Related lines of work include ideology prediction (Liu et al., 2022), i.e., whether articles are left-, right-, or center-leaning, and stance prediction (Baly et al., 2020), which is a form of polarity detection. In contrast, our work focuses on generating a neutral article from polarized articles. Given that framing bias is often very subtle, Morstatter et al. (2018) learn patterns of framing bias within sentences and attempt to detect them automatically. Another common mitigation approach is to automatically display multiple viewpoints (Hamborg et al., 2017; Park et al., 2009). Lee et al. (2022) took a further step by generating a single summary from polarized articles that automatically provides multiple perspectives. Our work shares the vision of these previous works, but we pursue a more general way to mitigate framing bias by studying the polarity minimization loss.