An online message board uses a machine learning model to detect toxic comments. The model exhibits bias: it incorrectly flags non-toxic comments written by members of underrepresented religious groups, producing false positives and user complaints.
The team has limited budget and is already overextended.
The suggested solution is to add synthetic training data (A). Generating non-toxic examples that mention the affected groups rebalances the training distribution, so the model stops associating those identity terms with toxicity. This mitigates the bias without requiring a complete model overhaul or incurring significant additional cost.
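A minimal sketch of what this augmentation could look like, assuming a hypothetical list of identity terms and hand-written non-toxic templates (real pipelines would use a curated lexicon and human-reviewed text):

```python
from itertools import product

# Hypothetical identity terms and non-toxic comment templates
# (illustrative only; a production system would curate these carefully).
IDENTITY_TERMS = ["Muslim", "Jewish", "Hindu", "Buddhist", "Sikh"]
TEMPLATES = [
    "I am proud to be {term}.",
    "My {term} neighbor brought us dinner last night.",
    "The {term} community center hosts a great food drive.",
]

def synthesize_nontoxic_examples(terms, templates):
    """Pair identity terms with clearly non-toxic contexts to produce
    labeled (text, label) examples that rebalance the training data."""
    return [
        (template.format(term=term), 0)  # 0 = non-toxic label
        for term, template in product(terms, templates)
    ]

augmented = synthesize_nontoxic_examples(IDENTITY_TERMS, TEMPLATES)
print(len(augmented))  # 5 terms x 3 templates = 15 new examples
# The augmented examples would then be appended to the existing
# training set before retraining or fine-tuning the classifier.
```

Because the synthetic examples are cheap to generate and the existing model is simply retrained on the enlarged dataset, this fits a team with limited budget.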