Content Moderation Process


A Content Moderation Process is a workflow that monitors digital content and filters, removes, or labels items that do not comply with a platform's guidelines and policies.



References

2023a

  • (OpenAI, 2023) ⇒ https://openai.com/blog/using-gpt-4-for-content-moderation
    • QUOTE: Content moderation plays a crucial role in sustaining the health of digital platforms. A content moderation system using GPT-4 results in much faster iteration on policy changes, reducing the cycle from months to hours. GPT-4 is also able to interpret rules and nuances in long content policy documentation and adapt instantly to policy updates, resulting in more consistent labeling. We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators. Anyone with OpenAI API access can implement this approach to create their own AI-assisted moderation system.
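
The quoted approach relies on the OpenAI API to apply a written policy to individual pieces of content. The minimal sketch below illustrates such an AI-assisted moderation call, assuming the current OpenAI Python client; the policy text, label set, and model name are illustrative placeholders rather than OpenAI's published implementation.

```python
# Minimal sketch of AI-assisted moderation with the OpenAI Python client.
# The policy text, label set, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = (
    "Label the user-submitted content with exactly one of: ALLOW, WARN, REMOVE.\n"
    "REMOVE: hate speech, threats, or illegal content.\n"
    "WARN: graphic or borderline content that should carry a warning label.\n"
    "ALLOW: everything else. Reply with the label only."
)

def moderate(content: str) -> str:
    """Ask the model to apply the platform policy to one piece of content."""
    response = client.chat.completions.create(
        model="gpt-4",   # assumed model name
        temperature=0,   # deterministic labeling for more consistent results
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": content},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(moderate("Great write-up, thanks for sharing!"))
```

Because the policy lives in the prompt rather than in model weights, a policy change only requires editing the policy text, which is the fast iteration cycle the quote describes.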

2023b

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Content_moderation Retrieved:2023-8-29.
    • On Internet websites that invite users to post comments, content moderation is the process of detecting contributions that are irrelevant, obscene, illegal, harmful, or insulting, in contrast to useful or informative contributions, frequently for censorship or suppression of opposing viewpoints. The purpose of content moderation is to remove or apply a warning label to problematic content or allow users to block and filter content themselves.[1]

      Various types of Internet sites permit user-generated content such as comments, including Internet forums, blogs, and news sites powered by scripts such as phpBB, a Wiki, or PHP-Nuke. Depending on the site's content and intended audience, the site's administrators will decide what kinds of user comments are appropriate, then delegate the responsibility of sifting through comments to lesser moderators. Most often, they will attempt to eliminate trolling, spamming, or flaming, although this varies widely from site to site.

      Major platforms use a combination of algorithmic tools, user reporting and human review.[1] Social media sites may also employ content moderators to manually inspect or remove content flagged for hate speech or other objectionable content. Other content issues include revenge porn, graphic content, child abuse material and propaganda.[1] Some websites must also make their content hospitable to advertisements.[1]

  1. Grygiel, Jennifer; Brown, Nina (June 2019). "Are social media companies motivated to be good corporate citizens? Examination of the connection between corporate social responsibility and social media safety". Telecommunications Policy. 43 (5): 2, 3. doi:10.1016/j.telpol.2018.12.003. S2CID 158295433. Retrieved 25 May 2022.
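
The Wikipedia description above outlines a layered workflow: an automated first pass, escalation driven by user reports, and a human review queue. The sketch below illustrates that combination; the blocklist terms, report threshold, and label names are assumptions for illustration, not any specific platform's rules.

```python
# Illustrative sketch of a moderation pipeline combining an automated check,
# user reports, and a human review queue. Thresholds and labels are assumptions.
from dataclasses import dataclass, field

BLOCKLIST = {"spam-link.example", "bannedterm"}  # assumed keyword/URL blocklist
REPORT_THRESHOLD = 3                             # assumed report count before escalation

@dataclass
class Post:
    post_id: int
    text: str
    reports: int = 0

@dataclass
class ModerationQueue:
    pending_review: list = field(default_factory=list)

    def screen(self, post: Post) -> str:
        """Automated first pass: remove obvious violations, escalate reported posts."""
        if any(term in post.text.lower() for term in BLOCKLIST):
            return "removed"                      # algorithmic removal
        if post.reports >= REPORT_THRESHOLD:
            self.pending_review.append(post)      # hand off to human moderators
            return "pending_human_review"
        return "published"

queue = ModerationQueue()
print(queue.screen(Post(1, "Check out spam-link.example now!")))  # removed
print(queue.screen(Post(2, "Borderline content", reports=5)))     # pending_human_review
```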