How to unite AI tech and safety? Reface’s policy advisor answers
Today we are talking about a new world of user-generated content, which will bring a range of new services, products, and entertainment. The overall aim is to ensure we all benefit from this new world while keeping the environment safe and secure for users.
The role of policies in AI startups
Q: If Reface just seems like a fun app, can you please explain why it needs a policy advisor at all?
Anna: Reface is a fun app, but the company recognizes the potential for its misuse. To create a safe space for users and the technology, and to maximize users' creative potential, we see the need to merge a security-policy mindset with tech thinking. By bringing security into product design from the outset, we ensure that security and public safety are not bolted on as an afterthought but built in from the start.
Q: What kind of impact is policy having at Reface currently?
Anna: Policy on synthetic media is still at an early stage, but it is becoming increasingly topical among policymakers. The European Commission's draft regulation already signals a more precise direction on how the Commission intends to develop risk-mitigation requirements for synthetically modified content.
But what we see as immediately necessary is for fast-growing applications that generate unprecedented amounts of synthetically modified content to start proactively creating codes of good practice and educating their users.
We will discuss and consult with regulators on such codes of practice, but clearly we cannot wait until legislation is enacted or sectoral codes of practice are agreed upon. We are developing our own safety and security policies alongside our product development: product and security advance in parallel.
Regulatory issues and a “duty of care”
Q: Which upcoming government policies need to be considered by startups working with AI or machine learning? Could you name some of the recent regulatory documents? And what impact will they have?
Anna: The UK White Paper and the follow-up draft legislation contained in the Online Safety Bill follow a common direction of travel across democracies. While we are keeping a close eye on the bill's progress, we are working toward a very high level of consumer protection that will ensure more than full compliance with the proposed legislation.
We are also following developments in Brussels with the DMA (Digital Markets Act) and the DSA (Digital Services Act), as well as the implications of the EC AI White Paper and the draft AI legislation. In the US, we are keeping a close eye on policy developments in the Biden White House and on US case law, for instance the comments of Justice Thomas in Biden v. Knight.
What is critical in developing effective legislative responses across the globe is to give voice to the fast-growing synthetic media startups.
They move fast and have to be proactive, not reactive, in adopting codes of practice to create a safe space for users and technology.
Q: Could you please explain a bit about "duty of care"? What does it mean, and how would it work?
Anna: It means building an approach that recognizes a company's duty of care in the tools it gives to users.
The duty of care responsibilities proposed in the Online Safety Bill fit very well with how we envisage approaching consumer safety and security protection. Part of our security by design approach is thinking about the risk of harm to users from the very beginning. We are also developing a range of systems and processes to enhance user safety as we develop the product.
There we see the importance of:
- Content moderation (tech and human)
- User data privacy (the app does not trade user data)
- Limiting functionality of the technology we give to users
- User authentication/identity verification
- Labeling content when it leaves the platform and reaches third-party platforms
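To make the last point concrete, here is a minimal sketch of how a platform might attach provenance metadata to synthetically modified content before it is exported. The function name, field names, and schema below are illustrative assumptions, not Reface's actual implementation; real deployments would more likely follow an emerging standard such as C2PA rather than an ad-hoc format.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_synthetic_content(media_bytes: bytes, app_name: str = "ExampleApp") -> dict:
    """Build an illustrative provenance label for synthetically modified media.

    The schema is hypothetical: it flags the content as synthetic, records
    the generating tool and a timestamp, and binds the label to the media
    via a SHA-256 digest so the label cannot be reattached to other content.
    """
    return {
        "synthetic": True,                          # flag content as AI-modified
        "generator": app_name,                      # which tool produced it
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # ties label to bytes
    }

if __name__ == "__main__":
    label = label_synthetic_content(b"\x89PNG example media bytes")
    print(json.dumps(label, indent=2))
```

A label like this could travel with the file (for example, in container metadata) so that third-party platforms receiving the content can detect and disclose that it is synthetic.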
Q: Are there any other significant policy challenges you expect to face in the foreseeable future?
Anna: I don't believe there are huge challenges for those ready to play a fair game and approach it responsibly. Instead, I feel that policymakers themselves should be open to engaging in active discussion with the fast-growing companies and taking input on what we see, how users interact with the tools, and what drives them to create synthetic media content.
I see more of a challenge in keeping the gap between policy and tech as narrow as possible, considering the speed of tech growth and user adoption of these technologies.
De-escalate the stigma of synthetic media
Q: Do you feel that there is sometimes a disagreement about the danger that deepfakes pose? How dangerous do you think synthetic media is?
Anna: Clearly, there is a potential for abuse and misuse. However, we can mitigate this with good regulation. Codes of practice and responsible firms deal with the abuse and misuse and create a safe environment in which consumers can enjoy the product.
I think any stigma will disappear through effective policing of product abuse by companies and the state, and through the fun users have in generating their own content. There are potential forms of misuse, and we should focus on tools that create a safe space for users.
Some of the initial concerns focused on political applications, such as the widely circulated deepfake of former US President Barack Obama. However, the most commonly deployed harmful user content is not political but ethical and criminal, such as pornographic deepfakes. Deepfakes have also been used to defame individuals, to assist in fraud and other illegal activity, and to create opportunities for disinformation operations.
It is the intent of the user that matters in such cases. We are working on communicating with our users to improve their literacy and understanding of these new technologies.
🎧 In May 2021, Anna Bulakh was a guest on the Synthetic Society Podcast. Listen to the episode "Giving Digital Policy A Facelift", in which she discussed the importance of baking safety and security into apps to future-proof them against malicious use.
The Synthetic Society Podcast explores the future of a technological society, with each episode featuring interviews with leading experts, political figures, and entrepreneurs discussing and teaching us about complex issues. The podcast is hosted by disinformation researcher Tom Ascott.