Fri, Feb 26th 2021 9:37am — Mike Masnick

I've said over and over and over again that content moderation at scale is impossible to do well, and one of the biggest reasons for this is that it's difficult to understand context. Indeed, I've heard that some content moderation policies explicitly suggest that moderators not try to take context into account. This is not because companies don't think context is important, but because they recognize that understanding the context behind every bit of content would make the process so slow as to be absolutely useless. Yes, it would be great if every content moderator had the time and resources to understand the context of every tweet or Facebook post, but the reality is that we'd then need to employ basically every human being alive to research context. Low-level content moderators tend to have only a few seconds to make decisions on each piece of content, or the entire process slows to a crawl, and then the media will slam those companies for leaving "dangerous" content up too long. So tradeoffs are made, and often that means understanding context is a casualty of the process.