Who’s watching? Protecting children from harm online is a challenge for governments everywhere. Credit: zhenzhong liu/Unsplash
Governments across the world are racing to balance the needs of online freedom and openness with those of safety and security. At a recent GGF webinar, experts from Europe and Asia explored how policy-makers are responding to the challenge. Adam Branson reports
“It’s not easy, but what is the alternative? We cannot throw up our hands in the air and say: ‘No, we’re not going to try’,” said June Lowery-Kingston, when asked whether civil servants could ever hope to keep citizens safe online.
“It is a bit of an arms race – the technology is evolving all the time,” added the head of the accessibility, multilingualism and safer internet unit at the European Commission’s (EC) Directorate General for Communications Networks, Content and Technology.
“If I think about how children are accessing the internet, it’s completely different to how it was 10 years ago. Your basic portable devices now have interconnectivity in a way that was inconceivable 10 years ago and a very, very high-quality camera, which means that things like self-generated child sexual abuse material and online grooming take on this whole new dimension. But just because it’s hard, doesn’t mean to say we shouldn’t be trying to stop it.”
Lowery-Kingston was speaking at a webinar organised by Global Government Forum last month. The panel – which included experts from the UK, Singapore and Estonia as well as the EC – sought to explore how governments are trying to protect citizens from a wide range of online harms including misinformation, sexual exploitation and fraud. And while the road is long, promoting digital literacy, partnering with the private sector and introducing legislation all form part of policy-makers’ response.
Working in tandem
Legislation – such as Australia’s Online Safety Bill and the UK’s move to introduce a duty of care for online businesses – can do a lot of good by regulating online companies and platforms, but laws alone aren’t sufficient. “No matter how brilliant civil services are, we don’t have all the answers,” said Lowery-Kingston, noting that governments also need to work with both the private sector and citizens.
“No one has the monopoly on good ideas,” she added. “We’re working together with the technology firms that enjoy the benefits of the digital marketplace for all their goods and services. They have to play their part and so do we as individuals – we have to accept that there is some responsibility there as well. And then it’s about us making sure that the rules of the game are clear, and that we catch anyone who’s abusing our values.”
As part of working with citizens, the state has a prominent role to play in educating people about the threat from online harms and how they can keep themselves safe, noted Sarah Connolly, director of security and online harms at the Department for Digital, Culture, Media and Sport in the UK. “Individuals absolutely have a role in this and there’s definitely something about increasing digital literacy, digital skills, as well as much more awareness of data ownership,” she said.
Focus on digital literacy
Many people do not know their rights and cannot identify scams, noted Anna Piperal, managing director of the e-Estonia Briefing Centre. “We have to start teaching that from a very early age… Everybody should get an alarm going when they see some content or some actions from companies that are [threatening] their privacy,” she said.
Piperal illustrated how even seemingly benign online forums can endanger personal data. “I was trying to join a Facebook group of pregnant women and they asked me to send them a picture of a medical certificate confirming that I was pregnant,” she said.
“They had 300 members who had sent in their medical information to the social group and no one had written a question asking whether that was okay. Well, of course, for me it was not okay as I’m a bit more aware [of the dangers] than that. So, I talked to the Data Protection Agency and got them to contact this social media group to stop this illegal activity,” she explained.
Educating citizens so they can protect themselves and raising awareness about online safety through public information campaigns are both part of Singapore’s approach, according to Kelvin Kow, the country’s 2nd director of information policy at the Ministry of Communications and Information. The government has taken a number of steps to this end including setting up the Media Literacy Council: a public-private partnership that educates the public in areas such as cyberbullying, scams and misinformation, and advises the government on policy.
Last year, the Media Literacy Council launched its Better Internet Campaign 2020. This included specific projects with big tech companies to promote safety online, for instance, working with Instagram to produce a guide for parents on how to use the social media platform safely. “It highlighted a few steps that people could take: for example, to add two-factor authentication measures, as well as account settings that would allow people to protect themselves on Instagram,” said Kow.
A similar project was developed with TikTok. “They put out a message that all you need to do is think before you post, basically helping people to understand that actions online have real-life impacts offline as well,” Kow added. “They encouraged people to think carefully before they say things online.”
Legislation and regulation
One far more complex challenge is how governments regulate online content and intervene to remove harmful posts. While some content – such as images of child sexual exploitation or incitements to acts of terrorism – is outright illegal, there is also much where the line is less clearly defined. How, for example, should governments balance people’s right to self-expression and freedom of speech against the harms caused by the dissemination of inaccurate, fabricated and misleading information?
In Singapore, the government has legislated to help prevent the spread of misinformation, but the law does not oblige platforms to remove misleading content, said Kow. Rather, it requires them to publish corrections or qualifications, in much the same way that Twitter did when former US president Donald Trump made false claims about ballot fraud following the 2020 election.
“How we’ve tried to navigate this is to have the law enable the requirement of the publishers to put up correction notices next to the content that is false, so essentially labelling,” he said. “And the objective has not been to get content removed but to leave the original content up and then put next to the original content messages that direct users to accurate content. It says ‘look, here you can get accurate information’ and then we let readers themselves decide what facts are right.”
Addressing concerns that this approach allows government to decide what is true and false, Kow said that the legislation made it clear that such decisions are in the hands of the judiciary. “The act provides [an ability to] appeal to the courts and the courts have the final say on whether representation is true or false,” he said.
Taking a different tack, the UK government published a white paper in December last year that will impose a statutory duty of care on companies. This will make them “take more responsibility for the safety of their users and tackle harm caused by content or activity on their services,” the document noted. While this will help, Connolly said, “there are a whole set of things that make this a genuinely challenging policy problem.”
“I think that’s why it’s really incumbent on countries that think that we have a solution that might help to share that information and to keep sharing,” she added. “We have to make sure that we are able to continue to balance an internet that is safe and secure, but that is also free and open. Both of those things are really important.”