Human moderators were "put offline" during the pandemic, with AI-based filters taking over the bulk of content moderation.
Nearly 11 million videos were removed from the Google-owned video platform between April and June, almost double the usual rate. Around 320,000 of those takedowns were appealed, and about half were reinstated.
According to the Financial Times, the AI systems were "over-zealous" in their attempts to spot harmful content. Back in March, YouTube had said it would rely more heavily on machine learning to flag and remove content that violated its policies on issues such as hate speech and misinformation.
YouTube's chief product officer, Neal Mohan, said computers lack the human ability to grasp the precise cultural context and nuance of various claims. But he added that the ML systems "definitely have their place, even if it is to just remove the most obvious offenders."