YouTube may block video sharing if AI algorithms suspect questionable quality

YouTube will soon be able to prevent the sharing of videos on sensitive or abusive topics by directly blocking the use of sharing options whenever real-time analysis by AI algorithms raises suspicions about the "quality" of the content.

YouTube will not directly remove videos labeled as "borderline", meaning that, even if they do not explicitly violate Google's posting policies, they may be considered offensive by certain categories of people. Instead, YouTube administrators will do everything possible to ensure that these videos are seen by as few people as possible, by preventing them from appearing as viewing suggestions in the content feed and by completely blocking the Share option. As a result, suspicious content can only be discovered by directly accessing the associated YouTube channel or by using direct links, possibly distributed via other messaging platforms.
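
In rough terms, the enforcement flow described above could look like the sketch below. This is a minimal illustration only, not YouTube's actual code; the `Video` class, the `borderline_score` field, and the threshold are hypothetical stand-ins for whatever Google's internal classifier actually produces.

```python
from dataclasses import dataclass

# Hypothetical score above which a video is treated as "borderline".
BORDERLINE_THRESHOLD = 0.8

@dataclass
class Video:
    video_id: str
    channel_id: str
    borderline_score: float  # assumed output of the AI classifier, 0.0-1.0
    in_suggestions: bool = True
    sharing_enabled: bool = True

def apply_borderline_policy(video: Video) -> Video:
    """Limit visibility instead of removing the video outright."""
    if video.borderline_score >= BORDERLINE_THRESHOLD:
        # Keep the video on the channel, but pull it from recommendations
        # and disable the Share option, as described in the article.
        video.in_suggestions = False
        video.sharing_enabled = False
    return video

# The clip stays reachable only via the channel page or a direct link.
video = apply_borderline_policy(Video("abc123", "channel42", borderline_score=0.91))
print(video.in_suggestions, video.sharing_enabled)  # False False
```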

Navigating a real minefield on the border between reasonable moderation of free expression and censorship, YouTube could initially suggest that the user reconsider the posted video. If that first step fails and the user chooses to keep the clip on their YouTube page in its original form, its visibility will be severely limited, with further penalties possible after additional review.

Importantly, suspicious videos will not necessarily be removed once posted; human moderators will later determine what to do with them.

Tasked with constantly identifying potentially offensive posts and learning from content repeatedly reported by users, the AI system should steadily improve its effectiveness. Depending on the results, YouTube is likely to rely less and less on teams of human moderators.
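
The feedback loop described above might be sketched as follows, assuming a hypothetical setup in which videos that users report often enough are collected as new training examples for the classifier; none of the names or thresholds here come from YouTube.

```python
from collections import Counter

# Hypothetical store of user reports: video_id -> number of reports.
report_counts: Counter[str] = Counter()

# Assumed cutoff: videos reported at least this many times become training examples.
REPORT_THRESHOLD = 50

def record_report(video_id: str) -> None:
    """Register a single user report against a video."""
    report_counts[video_id] += 1

def collect_training_examples() -> list[str]:
    """Return videos reported often enough to feed back into classifier retraining."""
    return [vid for vid, count in report_counts.items() if count >= REPORT_THRESHOLD]

# Example: one heavily reported video becomes a training example, the other does not.
for _ in range(60):
    record_report("abc123")
record_report("xyz789")
print(collect_training_examples())  # ['abc123']
```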

Shirley K. Rosa