Mitigating Framing Bias with Polarity Minimization Loss: Experimental Details
:::info
This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.
Authors:
(1) Yejin Bang, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology;
(2) Nayeon Lee, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology;
(3) Pascale Fung, Centre for Artificial Intelligence Research (CAiRE), The Hong Kong University of Science and Technology.
:::
Table of Links
Abstract and Intro
Related Work
Approach
Experiments
Conclusion
Limitations, Ethics Statement and References
A. Experimental Details
B. Generation Results
A. Experimental Details
\
BERTSCORE-F1 For assessing salient information, we adopted the token-embedding-based metric BERTSCORE-F1. We used the pre-trained ‘microsoft/deberta-xlarge-mnli’ version provided by Zhang et al. (2020) as the state-of-the-art checkpoint.
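As a reminder of what this metric computes (following Zhang et al., 2020): given L2-normalized contextual embeddings $x_i$ for the reference tokens and $\hat{x}_j$ for the candidate tokens, BERTSCORE greedily matches each token to its most similar counterpart and combines precision and recall into an F1:

```latex
R_{\mathrm{BERT}} = \frac{1}{|x|} \sum_{x_i \in x} \max_{\hat{x}_j \in \hat{x}} x_i^{\top} \hat{x}_j,
\qquad
P_{\mathrm{BERT}} = \frac{1}{|\hat{x}|} \sum_{\hat{x}_j \in \hat{x}} \max_{x_i \in x} x_i^{\top} \hat{x}_j,
\qquad
F_{\mathrm{BERT}} = \frac{2\, P_{\mathrm{BERT}}\, R_{\mathrm{BERT}}}{P_{\mathrm{BERT}} + R_{\mathrm{BERT}}}
```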
A.1 Human Evaluation
\
We conducted the evaluation with 30 randomly selected samples. We present the two articles generated by the two models (in random order), along with an issue sentence that describes what the articles are about. The annotator is then asked to answer the question “Which article is more biased?”, following Spinde et al. (2021); Lee et al. (2022). We collect three annotations for each sample and take the majority vote. Since many of the test samples are closely related to U.S. politics, we recruited three non-U.S. citizens/nationals/residents to minimize any political bias or personal preference in the evaluation. All three annotators self-identified as moderate in political leaning and are qualified to conduct the evaluation in English (all received their tertiary education in English).
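The aggregation step described above (three annotations per sample, majority vote) can be sketched as follows. This is a minimal illustration, not the authors' released code; the label names are hypothetical.

```python
from collections import Counter

def majority_vote(labels):
    """Return the label chosen by the most annotators.

    With three annotators and two candidate articles ("A" or "B"),
    a strict majority always exists, so ties cannot occur.
    """
    return Counter(labels).most_common(1)[0][0]

# Three annotators judge which article is more biased for one sample.
winner = majority_vote(["A", "B", "A"])  # "A"
```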
\
To verify that the annotators' choice of the less biased article in each pair was not random, we conducted a binomial test after obtaining the evaluation results. The null hypothesis was “The selection of articles generated from LR-INFO (our proposed method) as less biased is random”. We obtained a p-value of 0.019, which rejects the null hypothesis (p < 0.05). Therefore, the preference for articles generated from LR-INFO as less biased is not random.
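An exact one-sided binomial test of this kind can be computed with the standard library alone. The per-sample vote counts are not reported in the paper, so the count of 21 below is a hypothetical illustration, not the actual evaluation result.

```python
from math import comb

def binom_pvalue(n, k, p=0.5):
    """Exact one-sided binomial test: P(X >= k) for X ~ Binomial(n, p).

    Under the null hypothesis that each of the n pairwise judgments is a
    fair coin flip (p = 0.5), this is the probability of observing k or
    more wins by chance.
    """
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: if LR-INFO were preferred on 21 of 30 samples,
# the one-sided p-value under the "random choice" null would be:
pval = binom_pvalue(30, 21)  # ≈ 0.021
```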
\
When the model is trained with the polarity minimization loss, it learns to remove bias-inducing information, whereas BARTNEUSFT-T fails to do so. As illustrated in Table 4, our model LR-INFO removed the bias-inducing information “Trump is expected to attack President Joe Biden’s immigration policies” from the summary on the issue “Trump to speak at CPAC”, while BARTNEUSFT-T failed to remove it.