Facebook Faces Ongoing Challenges Around Message Encryption and Election Security
With the 2020 US Presidential election looming, Facebook is going to be a key focus, with government and regulatory bodies of various kinds scrutinizing the ways in which the company is enacting its security features, and the plans it's putting in place as part of its broader shift towards enhanced privacy.
This week, we’ve seen updates on a few of these areas of concern, potentially putting Facebook on the back foot, and setting the stage for the next wave of criticism against the social media giant.
Here are some of the areas where Facebook is facing tough questions.
Facebook Responds on Encryption Plans
First off, Facebook has this week responded to a request from a coalition of government groups to halt its plans for full messaging encryption.
Back in October, US Attorney General Bill Barr, UK Home Secretary Priti Patel, acting US Homeland Security Secretary Kevin McAleenan, and Australian Minister for Home Affairs Peter Dutton collectively published an open letter in which they called on Facebook to abandon its plans to introduce full messaging encryption, as it would:
“…[put] our citizens and societies at risk by severely eroding a company’s ability to detect and respond to illegal content and activity, such as child sexual exploitation and abuse, terrorism, and foreign adversaries’ attempts to undermine democratic values and institutions”
The argument is that by enabling full messaging encryption, Facebook will “significantly increase the risk of child sexual exploitation or other serious harms” by facilitating secret conversations, which authorities won’t be able to access. As a compromise, the letter asked Facebook to provide an access point to such data for law enforcement, ensuring that there’s some level of transparency into these types of communications.
Facebook has now responded by saying that it understands these concerns, but it’s going ahead with its encryption plans anyway.
As per Facebook:
“Cybersecurity experts have repeatedly proven that when you weaken any part of an encrypted system, you weaken it for everyone, everywhere. The ‘backdoor’ access you are demanding for law enforcement would be a gift to criminals, hackers and repressive regimes, creating a way for them to enter our systems and leaving every person on our platforms more vulnerable to real-life harm. It is simply impossible to create such a backdoor for one purpose and not expect others to try and open it.”
Facebook’s stance is largely what industry watchers expected, but still, it could put the company on a collision course with various governments, which may not pan out well for Zuck and Co. in the end.
The shift towards a more privacy-focused approach, driven by an increase in user activity within messaging apps, will give people more freedom to communicate without the concern of being monitored – which, given Facebook’s track record on data tracking, could well be key to the platform maintaining usage. But if government officials insist on having some access, that could put Facebook under increasing scrutiny – and with the company already being examined by a House Judiciary Committee, and with presidential candidates calling for its break-up, the pressure could be set to increase on the world’s largest social network.
That's not to say Facebook should necessarily accede to every government request, but given the current climate around its operations, its refusal looks set to spark more regulatory angst.
Ad Regulation Confusion
This also comes as Facebook faces more questions around its fact-checking in ads and violations of its Community Standards – and what it subsequently allows to be shared with its 2.4 billion active users.
This week, separate investigations have found that Facebook's ad-checking process has allowed misleading messages about medication meant to prevent the transmission of HIV, while the company has also denied requests to remove Islamophobic posts, insisting that such content meets its Community Standards.
Each incident in itself has varying levels of nuance, but the ongoing questions around Facebook's standards, in combination with its stance against fact-checking political ads, paint a picture of a company not equipped to handle the mounting challenges of regulating content at such a scale.
And really, no company is, because no company has ever had to be – Facebook is the largest social media network in the world, and the largest network of its kind in history. No one knows the best way to tackle all of these issues, because no organization has ever done it before, and Facebook, despite its size and influence, is also learning on the fly in many respects.
But Facebook is hugely influential, no matter how you look at it, and what's published on Facebook does have a significant impact. Given its size, there are always bound to be misinterpretations and missteps in regulating content, but the question now is whether Facebook is actually equipped to deal with them. Or, as per the first point, does it actually need to be broken up and/or be subjected to more stringent regulatory measures?
Whether you personally feel like Facebook influences your perspective on major issues or not, the fact is that around 74% of US adults who use Facebook visit the platform every day, and around 43% get at least some news content from the platform.
Even if that's not you, people around you are being influenced by such exposure, and their opinions likely impact your own. This makes Facebook a crucial aggregator of news sentiment, and if the results of the 2016 US Presidential Election told us anything, it's that such influence needs to be taken seriously.
Which brings us to the next element – various admins of local news groups on Facebook have been reporting that political activists are infiltrating their discussions, making it very difficult for these untrained admins to manage – and again, helping to spread misinformation and fuel bias via small, but influential, corners of the site.
As outlined by Marc Owen Jones, a Professor of Middle Eastern Studies and Digital Humanities, activist groups are using bots to flood misinformation in response to selected news stories, in order to fuel counter-movements and arguments.
2/ Firstly, the bots and sock puppet accounts are on the case on Twitter. As you can see, an identical tweet claiming the mother staged the photo was circulated on Twitter. It’s literally copied and pasted, and the accounts are targeting it at various influencers pic.twitter.com/OtdfBnRKnw
— Marc Owen Jones (@marcowenjones) December 9, 2019
These same types of bot floods are also active on Facebook, which then leads to such misinformation reaching into groups – as Jones notes, this particular false report infiltrated several large, local Facebook groups, sparking intense debate.
4/ In one example, Jason Crosby pastes the tweet on the FB group for “Seaham Have Your Say”. Seaham have your say is a page with 24k followers serving the North Eastern coastal town of Seaham. His post gets 91 comments and 26 shares. pic.twitter.com/kgW3x31a57
— Marc Owen Jones (@marcowenjones) December 9, 2019
Jones specifically notes the new trend towards targeting influential local Facebook groups, as opposed to just the major national ones, and with each of these posts sparking such high levels of engagement, you can see why group admins may be incentivized, at least to some degree, to let them stay up. That then enables such rumors to spread, and despite being fake, the report in this case actually ended up being shared by several high-profile identities, giving credence to misinformation.
There are several challenges here, starting with the distribution of such content by bots, and extending to the reliance on volunteer, untrained group admins to police such material. Basically, in this structure, Facebook can’t stop such misinformation from spreading – it may be able to use machine learning tools to detect and remove specific posts, but at Facebook’s scale, that’s a never-ending task, and one which also opens the platform up to accusations of bias and favoritism.
The latter may not be such a big deal in the scheme of things – but then again, if users don't trust Facebook to host such discussions, maybe they'll shift to other, more accommodating platforms instead. Facebook has societal impacts to consider, but it's also a business. Is it a good thing that such an influential platform needs to also consider the bottom-line impacts?
All in all, as we move into the next round of election cycles across the world, there remain significant implications for Facebook, and regulatory groups, to consider. There are no easy answers, and we're likely to see a lot more questions raised before we have any indication of the optimal way forward.