Posted in News

Twitter users are now receiving prompts if their tweets are offensive

Thinking about publishing that nasty tweet? Well, think again.

Twitter has introduced updated prompts that warn users when their tweet replies may be offensive before they are posted to the platform. The feature is currently available for English-language tweets on iOS and Android devices.

Twitter began beta-testing the feature in February to collect user feedback and to learn how to better distinguish between offensive comments and sarcasm or friendly banter:

“In early tests, people were sometimes prompted unnecessarily because the algorithms powering the prompts struggled to capture the nuance in many conversations and often didn’t differentiate between potentially offensive language, sarcasm, and friendly banter. Throughout the experiment process, we analyzed results, collected feedback from the public, and worked to address our errors, including detection inconsistencies.”

Based on the feedback received, the updated prompts assess replies using two major factors:

1. The nature of the relationship between the tweet's author and the replier

2. The presence of potentially offensive language

Prompts in the beta-test phase led to a decrease in the number of offensive tweets circulating on the platform: 34% of prompted users revised or deleted their initial reply, prompted users composed 11% fewer offensive replies afterward, and prompted users received fewer offensive replies overall.