The FaceApp phenomenon is a microcosm of consumer attitudes to privacy: awareness of how data is being used is rising, but it is ultimately outweighed by a love of ephemeral, gimmicky apps. At this stage, then, it seems people care about how their data is used, but not enough to change their behaviour. FaceApp also puts into sharp focus potential future areas of concern for those working in fraud and cybercrime.
FaceApp: An app with good or bad intentions?
For those working in fraud, data and tech, the recent reappearance of FaceApp, a mobile face-editing application, offers a timely talking point around consumer identity and its current and future vulnerability.
When using social media, consumers are often relaxed and predisposed to share information. On the face of it, 'gimmicks' like FaceApp appear harmless, giving everyone the opportunity to laugh at photographs of their friends or of famous people.
FaceApp highlights the evolving nature of consumers' relationship with technology. Those of us who downloaded and used it – after digesting and finally accepting how our future selves might turn out – are likely to have shared our selfie prophecy across social media with those we know and those we don't.
As the LOLZ kicked in, so did the press coverage, including listicles of celebrities using the app. Highlights included the Jonas Brothers transitioning into elderly grape growers and retired tech bros, Drake ageing elegantly into a Blaxploitation anti-hero, and the jokey memes that populated our timelines. These fun pieces were followed by more serious, earnest articles raising concerns about the software's developers and whether users should be worried about the app's T&Cs and the ways their data could be used.
With Cambridge Analytica and the threat of political interference fresh in our minds, the app's connections raised questions. Was our camera roll being uploaded in its entirety and retained in perpetuity? Who was ultimately harvesting our information, and for what purpose? The speculation went into overdrive until it emerged that only the photograph being modified was accessed, and that images were stored on servers that were not based in Russia.
The question is: where do consumers stand on this? Live Google Trends data indicates that UK consumer interest in data security shows fairly regular but small peaks, and that this interest is dwarfed by interest in FaceApp. This measurement, although admittedly crude, hammers home the point that consumers care about data security (or privacy) but care more about the fun they can have with an app.
The data risks associated with apps
The fact is that there are many applications out there that regularly capture data from users, and a lot of them have badly written or unintelligible T&Cs. Whilst the vast majority are not collecting data with any adverse intentions, the proliferation of these apps means more data sources that can be breached, and therefore increased risk.
Consumer education, including press coverage, is vital to ensuring that people are aware of the risks and become more vigilant when deciding whether or not to use these applications. However, education alone is unlikely to mitigate the risk completely, as the general public will continue to use them. It is therefore vital that organisations, such as those in the financial services sector, continue to deploy state-of-the-art controls to prevent the misuse of any compromised data.
As Matt Stanton in our fraud consultancy team explains: “The number of data points presents opportunities to fraudsters but also, when harnessed correctly, to fraud prevention teams.
“We work with organisations to deploy market-leading identity verification, financial crime and fraud prevention services, including document and liveness verification, and device and digital identity risk assessment.”
The risks presented by deepfakes
Whilst FaceApp perhaps demonstrates a level of consumer indifference to, or lack of understanding of, privacy, it is interesting to consider the criminal risks facial editing applications could present in the future.
At the recent TransUnion Summit, digital journalist and author Jamie Bartlett presented on the Dark Web and afterwards sat down for a short Q&A. One of the questions asked was around the threat of deepfakes, a technique for human image synthesis based on artificial intelligence.
Deepfakes combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network (GAN). The recent deepfake that swaps Jim Carrey in for Jack Nicholson in The Shining is an excellent, if frightening, example of the software in action.
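For the more technically minded, the adversarial idea behind a GAN can be illustrated in miniature: a generator learns to produce fakes while a discriminator learns to spot them, and each improves by playing against the other. The toy sketch below (a plain NumPy illustration with made-up numbers, assuming one-dimensional data rather than images – it is not FaceApp's or any deepfake tool's actual code) shows that push and pull in its simplest form:

```python
import numpy as np

# Toy GAN: a two-parameter "generator" learns to mimic a target distribution
# while a logistic "discriminator" learns to tell real samples from fakes.
# All values here are illustrative assumptions.

rng = np.random.default_rng(0)

REAL_MEAN, REAL_STD = 4.0, 0.5   # the "real data" distribution to imitate
a, b = 1.0, 0.0                  # generator: fake = a * z + b
w, c = 0.0, 0.0                  # discriminator: D(x) = sigmoid(w * x + c)
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(3000):
    z = rng.standard_normal(batch)
    fake = a * z + b
    real = rng.normal(REAL_MEAN, REAL_STD, batch)

    # Discriminator ascent step: maximise log D(real) + log(1 - D(fake))
    s_real = sigmoid(w * real + c)
    s_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - s_real) * real) - np.mean(s_fake * fake))
    c += lr * (np.mean(1 - s_real) - np.mean(s_fake))

    # Generator descent step (non-saturating loss): minimise -log D(fake)
    s_fake = sigmoid(w * fake + c)
    d_fake = -(1 - s_fake) * w   # gradient of the loss w.r.t. each fake sample
    a -= lr * np.mean(d_fake * z)
    b -= lr * np.mean(d_fake)

samples = a * rng.standard_normal(1000) + b
print(f"generated mean = {samples.mean():.2f} (target {REAL_MEAN})")
```

Real deepfake systems apply the same adversarial principle to high-dimensional images using deep convolutional networks, but the contest between generator and discriminator is exactly the one sketched here.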
Because of these capabilities, deepfakes can be used to create fake news and malicious hoaxes, and Bartlett is of the opinion that the technique may move further into the public realm. It is worth considering how consumers can be educated about the potential uses of facial applications and about who they grant access to their data. Organisations, meanwhile, should consider the criminal opportunities these techniques present around identity and verification, and how they can begin to develop solutions to counter them.