Safeguarding voters from misuse of AI

THE public needs to be protected from any exploitation or possible misuse of content produced by artificial intelligence (AI), especially during elections.

Experts and politicians say that the trustworthiness of such material must be ensured through transparency, given that AI can now automatically generate convincing speeches or images, among other content.

Such a threat is real; some social media platforms have long had gatekeeping measures in place to reduce harmful content, using machine learning.

TikTok, for example, uses automated moderation technology to identify content that violates its community guidelines.

“These systems look at a variety of content signals, including keywords, images, titles, descriptions, and audio,” TikTok said on its website.

It also has human moderators whose responsibilities include reviewing flagged content, reports from users, and trending harmful content.
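The signal-based gatekeeping described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not TikTok's actual system: the signal names, keyword list, scoring weights, and threshold are all assumptions made for the example.

```python
# Hypothetical sketch of signal-based content moderation: score each
# piece of content against several signals (title, description,
# transcript keywords; unlabeled synthetic media), then route anything
# above a threshold to human reviewers. All names and weights are
# illustrative assumptions.

FLAGGED_KEYWORDS = {"deepfake", "fabricated", "impersonation"}

def signal_score(content: dict) -> int:
    """Count how many content signals look suspicious."""
    text = " ".join(
        content.get(field, "") for field in ("title", "description", "transcript")
    ).lower()
    score = sum(1 for kw in FLAGGED_KEYWORDS if kw in text)
    if content.get("is_ai_generated") and not content.get("labeled_synthetic"):
        score += 2  # unlabeled synthetic media weighs heavily
    return score

def route(content: dict, threshold: int = 2) -> str:
    """Queue high-scoring content for human review; pass the rest."""
    return "human_review" if signal_score(content) >= threshold else "published"
```

In a real pipeline these keyword checks would be replaced by trained classifiers over images and audio as well as text, but the two-stage shape shown here (automated scoring, then human review of flagged items) matches the process the platform describes.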

It was reported in April that TikTok had updated its guidelines to cover content created or modified by AI, which it classified as “synthetic media”. According to the updated guidelines, creators should label such content to reduce the risk of spreading misinformation.

“To minimize potential harm, synthetic media featuring real private individuals is prohibited. Private individuals include anyone under 18 and adults who are not public figures.

“Use of public figures over 18 – government officials, politicians, business leaders, and celebrities – is permitted, but with restrictions.

“Creators must not use synthetic media to violate policies against hate speech, sexual exploitation, and extreme harassment. They must also clearly disclose synthetic media and manipulated content that depicts realistic scenes with fake people, places, or events.”

If not labeled, any such synthetic media content will be removed upon detection, TikTok said.

As many tech experts have said, however, AI-generated text isn’t easy to spot.

On Friday, it was reported that United States President Joe Biden had announced voluntary commitments from seven of the most influential AI companies – including Google and OpenAI – to put safeguards on the use of generative AI. His administration is also working to draft legislation to regulate AI, amid concerns raised by consumers and ethicists.

Meta, which owns Facebook, Instagram and WhatsApp, among others, implements several measures to prevent the spread of false information or false content, including labeling it as such.

“We include warnings on posts that are rated false so you can decide what to read or share.

“When third-party fact-checkers label content as false, we significantly reduce its distribution so fewer people see it,” Meta said on its site.
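The mechanism Meta describes, labeling a post and reducing its distribution once fact-checkers rate it false, can be sketched simply. The function name, field names, and demotion factor below are assumptions for illustration only, not Meta's actual ranking code.

```python
# Hedged sketch of "reduce its distribution": when fact-checkers rate a
# post false, attach a warning label and scale down its ranking weight
# so fewer people see it. Field names and the 0.1 multiplier are
# illustrative assumptions.

def apply_fact_check(post: dict, rating: str, demotion_factor: float = 0.1) -> dict:
    if rating == "false":
        post["warning_label"] = "Rated false by independent fact-checkers"
        post["rank_weight"] = post.get("rank_weight", 1.0) * demotion_factor
    return post
```

The key design point is that the content is not removed outright; it is demoted and labeled, leaving the final decision to read or share with the user, as the quoted policy states.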

In the same vein, former Shah Alam Umno youth leader Mohd Dendera Billah Zamzuri said that speeches drafted by AI tools such as ChatGPT, for example, should be labeled “produced with AI assistance”, so that both the speaker and the audience understand how the content was produced.

He said this was particularly important for top political leaders, whose words would dictate party or government policy.

“The reliability of information or proposals made by generative AI remains to be questioned especially during electoral campaigns. As such, the speeches created by ChatGPT should be cited as such. This is transparency, and is particularly important for party leaders.”

A political party insider echoed Mohd Dendera’s views, saying that this is a way to protect the interests of the people, including party members, during elections, when pledges and promises are widespread.

“At the end of the day, the speeches will be seen as the policy of their organization.”

Agreeing, Muda data and technology director Faezrah Rizalman said political parties and candidates should be obliged to disclose their AI strategies and provide clarity on how AI algorithms are used to target voters.

“This transparency will enable scrutiny and ensure the responsible and ethical use of AI.”

He said specific regulations and guidelines should be established to govern the use of AI in electoral campaigns.

“These rules can define permissible applications of AI, emphasize the importance of fair and transparent campaigning practices, and establish boundaries to prevent the dissemination of false or misleading information.

“Independent regulatory bodies may be entrusted with overseeing compliance and enforcement of these guidelines. For example, regulations could address the ethical collection, storage, and analysis of voter data, drawing lessons from international best practices such as the General Data Protection Regulation in the European Union.”

With the possibility of AI being used to create fake news or to target voters with personalized messages designed to manipulate them, awareness of such risks remains essential.

The development of ethical guidelines for the use of AI in electoral campaigns should be undertaken by various parties, said policy and political lobbying firm Imperium Consulting’s chief executive officer Rahman Hussin.

“These guidelines should be developed by various stakeholders, including political parties, civil society groups, and technology companies.”

Ultimately, it is necessary for people to be aware of AI’s capabilities, uses and abuses, said Bar Council technology, cyber and privacy law committee chairman Sarah Yong Li Hsien.

“When people know that what they see or hear may not be true or real, or may be designed to manipulate them, they will know that they should not take information at face value and should always check the truth.”

Apart from awareness and AI-literacy programs, Yong also suggested more transparency from online platforms regarding curated content and profiling.

“We also need to promote more transparency from online platforms when it comes to targeted advertising and profiling, as well as the criminalization of profiling based on some sensitive personal data.

“One of the initiatives of the Bar Council’s law committee on technology, cyber and privacy is to conduct webinars on AI for awareness, not only within the legal fraternity, but for the public at large.”
