I was asked by the Society of Computers and Law (SCL) to give my predictions, as a media and technology lawyer, on what 2021 has in store for us! Here are my thoughts on social media regulations and artificial intelligence laws. You can read the transcript below the video.
SCL is a registered educational charity in England that seeks to cultivate discussion and provide foundational and advanced training at the intersection of information technology and law.
My name is Kelsey Farish, and I’m a media and technology lawyer at DAC Beachcroft.
I have two predictions for 2021. The first concerns social media, and the second concerns artificial intelligence.
Firstly, I think that the conversation around how social media is regulated is really going to intensify. This will likely include debates over content moderation and censorship, as well as the legal consequences for platforms when individual users post content that violates copyright, privacy or defamation laws, or is otherwise problematic.
I think this will be a key concern for lawmakers because political and social pressure is mounting, and shows no signs of abating. In 2020, we saw US President Donald Trump take aim at TikTok as well as Twitter. We also saw the US Federal Trade Commission – which is similar to the CMA or OFCOM here in the UK – file a lawsuit against Facebook, seeking to unwind its acquisitions of WhatsApp and Instagram. It seems lawmakers are finally getting wise to the power and influence that these platforms have. Could 2021 be the year that the laws catch up? Maybe.
This summer, the United Kingdom, the United States, Australia, the European Union and other jurisdictions formed the Global Partnership on Artificial Intelligence (GPAI, pronounced “Gee-Pay”). As the founding members, they have promised to support the “responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and our shared democratic values.”
The US and the UK then got together to form their own little club and signed a Declaration on Cooperation in Artificial Intelligence Research and Development – this covers what they’re calling a Shared Vision for Driving Technological Breakthroughs in Artificial Intelligence.
But what does this actually mean in practice? I have no idea, and I’m not really sure that other people do, either. In my opinion, there are a lot of nice, fluffy documents floating around which capture some good intentions, but they don’t really have teeth with which to bite. It’s true that European, Commonwealth and North American governments want to collaborate on AI – ostensibly, perhaps, to counter competition from Asia. But will we really see some truly remarkable framework for using AI responsibly? I don’t think so.
Look at deepfakes, for example. They pose a real threat, but we’re still waiting to see what legislation, if any, will come into force. In the US, a deepfake bill entitled the Identifying Outputs of Generative Adversarial Networks Act is set to become law in 2021. But this legislation is only 1,000 words long (that’s like four pages!) and basically just says that the US Government should promote research to detect and defend against realistic-looking fakery that can be used for purposes of deception, harassment, or misinformation. Yeah, we know… that’s sorta obvious!