UK’s AI adviser calls for tougher social media regulations

An independent advisory board on artificial intelligence (AI) has urged the government to regulate how social media platforms target users with posts, videos and ads.

New regulations on social media companies should be tightened to include more transparency around how firms target users online, an independent advisory board on artificial intelligence (AI) has said.

It has called on the government to implement new rules on how social media firms target users with posts, videos and ads.

The Centre for Data Ethics and Innovation (CDEI) has urged the Government to also focus on online targeting because the public is concerned about how the technology is being used.

In a new report, it said an analysis of public attitudes on the issue found that many appreciated the value of targeting – where platforms use people’s online habits to target them with content they believe will interest them – but were also concerned about the potential for such data to be exploited.

The CDEI has published three sets of recommendations as part of its research, urging the Government to hold any companies which use online targeting to a higher standard of accountability, to increase the transparency of online targeting systems, and to give users more control over how they are targeted.

A number of high-profile internet and social media platforms, including Google and Facebook, use different forms of online targeting to show users adverts or other content which they believe will interest them.

CDEI chairman Roger Taylor said: “Most people do not want targeting stopped. But they do want to know that it is being done safely and responsibly. And they want more control.

“Tech platforms’ ability to decide what information people see puts them in a position of real power. To build public trust over the long term it is vital for the Government to ensure that the new online harms regulator looks at how platforms recommend content, establishing robust processes to protect vulnerable people,” he said.

Last year, a White Paper on online harms published by the Government proposed stricter regulation for internet and social media companies, including a statutory duty of care and measures to increase web safety, particularly in protecting young and vulnerable people from illegal content, while making tech giants liable to fines or criminal prosecution if they breach their responsibilities.

According to the CDEI report, only 29% of people trust platforms to target them in a responsible way, and 61% said they were in favour of greater regulatory oversight of online targeting.

Only around a third of those asked (34%) trust internet companies to change their settings when asked to do so.

Last month, a separate report by the Royal College of Psychiatrists said social media giants should be forced to hand over data and pay towards research into their potential harms.
