Friday, April 13, 2018

Will Deepfakes Lead to a Video Forensics Boom?

Deepfakes are still a relatively new phenomenon, having first surfaced in late 2017. Thus far their uses range from harmless mischief (putting Nicolas Cage's face in every movie ever made) to outright harassment (superimposing the faces of celebrities onto porn videos).

At present, a deepfake can still be easily spotted by the untrained eye. The effect is impressive, but there are still obvious artifacts and inconsistencies in most of these videos that make it clear they have been doctored.

That won't always be the case, however. The point at which anyone with a computer can make a very convincing fake video featuring any public figure may only be months away. Imagine the ramifications of a perfectly faked video in which a CEO announces a revolutionary new product that sends the stock market into a frenzy, or a national leader announces that missiles have been launched at another country.

Deepfakes could potentially be very bad for society, but very good for entrepreneurs working in the field of video verification.

Video As Evidence

In a court setting, video evidence has long been inadmissible without corroborating evidence and a clear chain of custody. Even if the technology is perfected, deepfakes are very unlikely to survive the rigorous examination that takes place during a court case.

There are areas of concern in which scrutiny will be much less rigorous, however. Posting videos directly to social media or a website and getting the public riled up about them seems to be where the greatest potential for damage is, given the current "fake news" climate.

Security camera footage is another major point of concern. This type of footage is often already of low quality, lacking detail or shot in poor lighting conditions. While a faked clip may never be admissible in court, it could find its way to local media and set a manhunt for an innocent person underway.

So what are some potential solutions? 

Trusted Time-stamping Services

Though they are not universally applicable as a solution, trusted third-party timestamp services may well see a huge uptick in business due to deepfakes.

These services cannot protect against the fabrication of an entirely new video, but they can protect against altered versions of existing videos being passed off as authentic (for example, a hacker gaining illicit access to a server and editing a video stored there). A "hash", or data signature, is computed from the completed authentic video; it is effectively unique, and altering the video in any way changes it. The hash is then sent to the trusted third-party service, which attaches a timestamp to it. Anyone can later recompute the video's hash and compare it against the timestamped copy to verify the footage has not been altered since that point in time.

This can also be done in an entirely automated and decentralized way through the use of blockchain technology. Of course, this method relies entirely on the integrity and ongoing security of the particular blockchain being used.
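To illustrate why a blockchain makes anchored hashes tamper-evident, here is a toy hash chain (not any real blockchain's format): each block commits to the previous block's hash, so rewriting an anchored video hash invalidates every block that follows it.

```python
import hashlib
import json

def make_block(prev_hash: str, video_hash: str, ts: float) -> dict:
    """Build a minimal block anchoring one video hash. The block's own
    hash covers the previous block's hash, the video hash, and the
    timestamp, chaining the blocks together."""
    body = {"prev": prev_hash, "video": video_hash, "ts": ts}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify_chain(chain: list) -> bool:
    """Recompute every block hash and check each link to its predecessor."""
    for i, block in enumerate(chain):
        body = {k: block[k] for k in ("prev", "video", "ts")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True
```

Editing any anchored hash after the fact breaks the recomputed chain, which is exactly the integrity property the timestamping use case needs.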

Public-Private Key Schemes

The creation of a harmful deepfake will often require the shooting of entirely new base footage over which to superimpose someone else's face. This could be nipped in the bud by widespread adoption of private-public key schemes in recording devices.

As the camera films, it would sign every individual frame with its private key, embedding a timestamped watermark or signature in each one. Anyone could later use the manufacturer's published public key to verify those signatures and confirm the video has not been doctored.
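A minimal sketch of per-frame signing follows. For the sake of a self-contained example, HMAC (a symmetric scheme) stands in for the asymmetric signature, such as Ed25519, that a real device would use so that verifiers need only the manufacturer's public key; the device key and function names here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical per-device secret; a real camera would hold an
# asymmetric private key provisioned by the manufacturer.
DEVICE_KEY = b"secret-key-burned-into-the-camera"

def sign_frame(frame_bytes: bytes, frame_index: int, key: bytes = DEVICE_KEY) -> str:
    """Tag one frame with a signature covering both its pixel data and
    its position in the stream, so frames can't be altered or reordered."""
    msg = frame_index.to_bytes(8, "big") + hashlib.sha256(frame_bytes).digest()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_frame(frame_bytes: bytes, frame_index: int, tag: str,
                 key: bytes = DEVICE_KEY) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_frame(frame_bytes, frame_index, key), tag)
```

Because the signature binds each frame to its index, a doctored frame fails verification, and so does an authentic frame spliced into the wrong position.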

There are obvious weaknesses to this scheme, not least of which is the large number of existing recording devices that don't employ such a system. It could still be used effectively in certain situations, however. For example, social media platforms could maintain a policy of not allowing potentially harmful videos that lack key verification.

New Video File Formats

A truly revolutionary step would be the creation of an entirely secure read-only video format used as the "gold standard" for any video of a sensitive nature.

As with the key systems, this would primarily be a security measure at the point of upload; platforms could reject videos containing sensitive content (like the declaration of a political leader) if they are not in the secure read-only format.

Just Scratching the Surface

These are very basic ideas that entrepreneurs may expand on in the near future. The responsibility for detecting deepfakes is going to fall heavily on the outlets that publish video, such as social media platforms and journalists. They'll need to ensure they are effectively screening footage for telltale signs of tampering before publishing it, yet another avenue for entrepreneurial software developers to explore.

  • What are your thoughts on deepfakes? 
  • Is this the first you are hearing of this sort of high-tech faceswap? 
  • Will a new niche of video forensics emerge in the next several years as a result?
