Zuckerberg Admits He’s Developing Artificial Intelligence to Censor Content


(ANTIMEDIA) — This week we were treated to a veritable carnival attraction as Mark Zuckerberg, CEO of one of the largest tech companies in the world, testified before congressional committees about privacy issues related to Facebook’s handling of user data. Besides highlighting the fact that most United States senators, and most people for that matter, do not understand Facebook’s business model or the user agreement they have already consented to, the spectacle made one fact abundantly clear: Zuckerberg intends to use artificial intelligence to manage the censorship of hate speech on his platform.



Over the two days of testimony, the plan to use algorithmic AI for potential censorship came up repeatedly, under the rubric of containing hate speech, fake news, election interference, discriminatory ads, and terrorist messaging. In fact, AI was mentioned at least 30 times. Zuckerberg claimed Facebook is five to ten years away from a robust AI platform. The other four of the Big 5 tech conglomerates, Google, Amazon, Apple, and Microsoft, are also developing AI, in part for the shared purpose of content control.


For obvious reasons, this should worry civil liberties activists and anyone concerned about the erosion of First Amendment rights online. The encroaching specter of a corporate-government propaganda alliance is not a conspiracy theory. Barely over a month ago, Facebook, Google, and Twitter testified before Congress to announce the launch of a ‘counterspeech’ campaign in which positive and moderate posts will be targeted at people consuming and producing extremist or radical content.



Like the other major social networks, Facebook has already faced accusations of censoring conservative and alternative news sources. The Electronic Frontier Foundation (EFF) outlined some other examples of the company’s “overzealous censorship” in just the last year:



“High-profile journalists in Palestine, Vietnam, and Egypt have encountered a significant rise in content takedowns and account suspensions, with little explanation offered outside a generic ‘Community Standards’ letter. Civil discourse about racism and harassment is often tagged as ‘hate speech’ and censored. Reports of human rights violations in Syria and against Rohingya Muslims in Myanmar, for example, were taken down—despite the fact that this is essential journalist content about matters of significant global public concern.”


Facebook now thinks AI will be the answer to all its woes. “We started off in my dorm room with not a lot of resources and not having the AI technology to be able to proactively identify a lot of this stuff,” Zuckerberg said during his testimony. “Over the long term, building AI tools is going to be the scalable way to identify and root out most of this harmful content.”


To be clear, AI is already at work for Facebook. “Today, as we sit here, 99 percent of the ISIS and al-Qaeda content that we take down on Facebook, our AI systems flag before any human sees it,” Zuckerberg said.
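
Facebook has not disclosed how these systems actually work, but the workflow Zuckerberg describes, a model scoring each post and acting before any human reviewer is involved, can be sketched in rough outline. Everything below is hypothetical: the thresholds, function names, and toy scoring logic are invented stand-ins for illustration, not Facebook’s real pipeline.

```python
# Hypothetical sketch of a "flagged before any human sees it" moderation flow.
# A real system would use large learned models; this placeholder scorer just
# checks a toy watchlist so the example runs end to end.

REMOVE_THRESHOLD = 0.95  # assumed: auto-remove above this model confidence
REVIEW_THRESHOLD = 0.60  # assumed: route to human reviewers above this

def score_terror_content(text: str) -> float:
    """Stand-in for a classifier estimating P(post violates policy)."""
    watchlist = {"attack plan", "join our fighters"}
    return 0.99 if any(phrase in text.lower() for phrase in watchlist) else 0.01

def moderate(post: str) -> str:
    score = score_terror_content(post)
    if score >= REMOVE_THRESHOLD:
        return "removed automatically"   # the "99 percent" path: no human involved
    if score >= REVIEW_THRESHOLD:
        return "queued for human review"
    return "published"

print(moderate("join our fighters at dawn"))  # removed automatically
print(moderate("photos from my vacation"))    # published
```

The civil-liberties question is not whether such a pipeline can be built; it is who sets those thresholds and what the scoring model was trained on.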


But he admits that the linguistic nuances of hate speech will be one of the thornier problems for AI.
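
It is not hard to see why. A filter keyed to surface features cannot tell abuse apart from journalism quoting that abuse, or from counter-speech condemning it. A toy illustration (the blocklist and example posts are invented for this sketch):

```python
import re

# Invented one-word blocklist for the sake of the example.
BLOCKLIST = {"scum"}

def naive_filter(text: str) -> bool:
    """Flags any post containing a blocklisted token, ignoring context."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(tok in BLOCKLIST for tok in tokens)

posts = [
    "Immigrants are scum and should leave.",                        # abuse
    "The senator called immigrants 'scum', drawing condemnation.",  # reporting
    "Calling anyone 'scum' is dehumanizing. Don't do it.",          # counter-speech
]
for post in posts:
    print(naive_filter(post), "-", post)
# All three print True: the filter removes the journalism and the counter-speech
# along with the abuse. Separating them requires modeling intent and context.
```

This is the same failure the EFF describes above, where civil discourse about racism gets tagged as hate speech.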



Is it even possible for the ‘information gatekeepers’ like Facebook and Google to use AI for content regulation without practicing censorship? As EFF notes, “Decision-making software tends to reflect the prejudices of its creators, and of course, the biases embedded in its data.”
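
EFF’s point can be demonstrated in a few lines. In the sketch below (the group name “Norlanders” and every sentence are invented), a group that appears mostly in hateful training examples ends up treated by the model as a hate signal in itself, so a benign post mentioning the group gets flagged.

```python
# Minimal demonstration of bias absorbed from skewed training data,
# using scikit-learn. All text and labels are fabricated for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "norlanders attacked the market",       # labeled hateful
    "norlanders are ruining this country",  # labeled hateful
    "lovely weather this weekend",          # labeled benign
    "the match was exciting",               # labeled benign
]
labels = [1, 1, 0, 0]  # 1 = hate speech, 0 = benign

vec = CountVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_texts), labels)

test = "norlanders opened a community kitchen"  # entirely benign
print(clf.predict(vec.transform([test])))       # [1]: flagged as hateful anyway
```

The model never learned anything about kitchens; it learned only that the group name predicts the hateful label. Scale that same failure up to billions of posts and the stakes of “biases embedded in the data” become clear.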


Of course, in an age when our government increasingly resembles a corporatocracy, with a revolving door between Silicon Valley and the State Department, a discussion of corporate censorship invariably includes an acknowledgment of government propaganda, which was effectively legalized under the Smith-Mundt Modernization Act of 2012, folded into Obama’s NDAA. Is it realistic not to expect overlap between what the government wants us to believe and what corporations allow as free speech?


At one point during his testimony, a senator asked Zuckerberg whether he thinks Facebook is more trustworthy with user data than the government. After a long pause, Zuckerberg replied, “Yes.” The moment was largely overlooked, but in a single word Zuckerberg confirmed that, despite all the talk of privacy violations, he still believes the government is worse on privacy than his company. And after everything revealed to us by Edward Snowden and WikiLeaks, is he wrong?


More importantly, since AI will harbor the biases and values of the entity that creates it, why would we assume that AI will make humans safer? AI (at least early AI) will do the bidding of its maker. While machine learning may be the future arbiter of free speech, it will be corporate and government programmers who determine its protocols. And as we already know, the rights of citizens and the rights of technocrats are not the same.


Creative Commons / Anti-Media