LinkedIn is a bit of an odd social network that is far more focused on the networking than the social aspect — unless you're very into sharing vaguely inspirational stories full of mostly meaningless platitudes about the importance of working hard that ultimately say nothing. But assuming that isn't your thing, it's mostly about making connections with people in your industry, regardless of whether you have any idea who they are. A recent story from the Associated Press might just give you pause before hitting "Accept" the next time you get an invitation to connect: spies are using LinkedIn to keep tabs on potential targets.
Katie Jones, a young woman in her 30s, appeared to be climbing the corporate ladder and connecting with lots of high-profile people throughout Washington, D.C. Her connections included fellows at the Brookings Institution and the Heritage Foundation, Senate aides, State Department employees, and high-profile economists and experts. The problem, though, as the AP discovered, is that Katie Jones isn't a real person. Her credentials, including a role at the Center for Strategic and International Studies and a degree from the University of Michigan, were made up.
Also made up: her face. The AP consulted a number of experts who are convinced that Katie Jones' face is completely computer generated, the result of thousands of images processed by machine. While her face looks like any other at a glance, a closer examination shows some telltale signs that she isn't what she appears to be.
It's most likely that her face was created using a generative adversarial network (GAN). GANs are built on neural network architectures designed to recognize specific patterns. The neural nets are fed huge collections of data, in this instance photos of faces, which they are tasked with processing.
What makes GANs interesting is that they pit two different neural nets against one another. The first is a generator, which uses all of the information it is given to create new instances of whatever it's processing. So in this case, the generator would start creating computer-generated faces based on the photos it processed. At first, it's not very good at this, which is where the second neural net — the discriminator — comes in. The discriminator has the same original dataset, but it also processes the work created by the generator. It evaluates every image for authenticity based on its original dataset, eliminating anything that is clearly distinguishable from real images. Eventually, the generator gets good enough at what it is tasked with to create work that can deceive the discriminator. And that's how Katie Jones was born.
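The adversarial loop described above can be sketched in miniature. This is purely an illustrative toy, not how face generators are actually built: here the "real data" are just numbers drawn from a bell curve centered at 4, the "generator" is a single learned shift G(z) = z + theta, and the "discriminator" is one-dimensional logistic regression. All the names and parameter values are assumptions chosen for the demo.

```python
# Toy GAN: the generator learns to shift random noise so its samples
# become indistinguishable from "real" data drawn from N(4, 1).
import numpy as np

rng = np.random.default_rng(0)

theta = 0.0          # generator parameter: where it places its fake samples
w, b = 0.1, 0.0      # discriminator parameters (logistic regression)
lr_d, lr_g = 0.1, 0.05
batch = 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    # Discriminator update: learn to score real samples high, fakes low.
    real = rng.normal(4.0, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    # Gradients of the binary cross-entropy loss w.r.t. w and b.
    grad_w = np.mean((d_real - 1.0) * real) + np.mean(d_fake * fake)
    grad_b = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr_d * grad_w
    b -= lr_d * grad_b

    # Generator update: move theta so the discriminator scores fakes as real.
    fake = rng.normal(0.0, 1.0, batch) + theta
    d_fake = sigmoid(w * fake + b)
    # Non-saturating generator loss -log D(fake); since dD/dtheta = D*(1-D)*w,
    # the gradient of the loss w.r.t. theta is -(1 - D(fake)) * w on average.
    grad_theta = -np.mean((1.0 - d_fake) * w)
    theta -= lr_g * grad_theta

print(f"generator shift theta = {theta:.2f} (real data centered at 4.0)")
```

After training, theta ends up near 4: the generator's fakes sit where the real data sit, and the discriminator can no longer tell them apart. A face generator works the same way in principle, just with deep convolutional networks and millions of photos instead of a single shift parameter.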
It might sound a little out there that an algorithm can create such a realistic human face effectively out of thin air. Check out This Person Doesn't Exist for an endless number of samples of this technology in action. The site, created by software engineer Phil Wang, endlessly produces photos of people who never were and never will be. The site Which Face Is Real, developed by Jevin West and Carl Bergstrom at the University of Washington, takes the photos from This Person Doesn't Exist and pits them against real pictures, tasking you with determining which one is authentic. It's a bizarre experience that will make you question every face you've ever seen online that doesn't belong to someone you know.
While the odds are low that you are the target of some foreign adversary biding its time before attempting to compromise you, this still serves as a worthwhile cautionary tale. People online are not always what they seem, and this technology can be applied in any number of ways. It doesn't have to happen on LinkedIn, where the stakes are higher and everyone is focused on their careers; these types of fake-outs can sneak into (relatively) lower-stakes situations like, say, Tinder.
Between face swaps, Snapchat's gender filter (that has already been used to great effect to fool everyone from thirsty dudes online to a cop who was allegedly looking to have sex with an underage girl), deepfakes and GAN-generated faces, it's getting harder to know what is and isn't authentic online. All the more reason to remain vigilant.
Machines are still bad at generating finer details like hair and teeth, per West and Bergstrom, and often produce artifacts like water splotches that, once you notice them, become telltale signs of a fake photo. You can also run a reverse image search with a tool like TinEye or Google Images to determine whether a photo appears anywhere else online. If you can't find it anywhere, it might be worth some additional scrutiny. Of course, the best way to avoid any of this is to not accept random invitations on social networks. You never know when you're connecting with a double agent.