LinkedIn is a useful platform for connecting with others, but last week a startling NPR article covered a dystopian AI story that could have implications for how we interact with candidates online.
It all started when a researcher at the Stanford Internet Observatory was contacted on LinkedIn by a certain ‘Keenan Ramsey’, who sent an emoji-laden message with a strange sales pitch. The recipient (an expert in online disinformation and conspiracies) grew suspicious of the sender’s profile image. As the article explains:
Little things seemed off in what should have been a typical corporate headshot. Ramsey was wearing only one earring. Bits of her hair disappeared and then reappeared. Her eyes were aligned right in the middle of the image.
The researcher teamed up with a colleague at Stanford and did some digging, ultimately uncovering what they believed to be more than 1,000 AI-generated fake faces on LinkedIn.
LinkedIn is aware that fake accounts are a problem. According to its latest Community Report, LinkedIn removed 11.6 million fake accounts at registration in the first half of 2021. But detecting fake faces is a trickier proposition: one recent study suggested that AI-generated faces are now almost indistinguishable from the real thing.
Concerns about ‘deepfakes’ are nothing new. Generative adversarial network (GAN) technology has been around since 2014; it uses datasets of real people’s faces to create new lifelike (but entirely fake) visages. What is new is the emergence of fake profiles built for marketing purposes. So what does this mean for hiring teams and their online presence?
In a world of disinformation and deepfakes, candidates need recruiters they can trust, and recruiters need to have a fair shot at earning that trust. But if a candidate can’t be sure if they’re being contacted by an actual person or a replicant from Blade Runner, it poisons the well for all of us.
We all want to improve our hiring efficiency, so naturally, we turn to automation tools like Lemlist, Gem and Expandi. While these tools can bring short-term efficiencies, there is a danger of over-saturation: candidates get bombarded with spammy messages and simply tune out. All noise, no signal.
Simply mimicking the tools that salespeople have been using for years is not a healthy hiring tactic. But what you can do is focus on creating a great place to work, which starts with thinking about your Employee Value Proposition (EVP).
A good EVP, rather than spammy hiring tools, can ensure you attract people who actually want to work for your company. Personalisation should not become pestering. This Vadim Liberman piece covers the importance of nurturing positive relationships with candidates – even if they’re not hired – and why simply ‘filling roles’ is not the sole name of the game. Too many tools can make recruiters lazy, but choosing the right tech stack (and fine-tuning your EVP) can help you forge meaningful long-term relationships, rather than leaving candidates disengaged, desensitised and suffering from LinkedIn fatigue.
Of course, it works both ways. You may remember that a few weeks ago (newsletter #8) we shared the story of the lip-synching virtual interview candidate who wasn’t actually the person being filmed. Technology is evolving at the speed of light, and it’s vital that recruiters and candidates alike can navigate new tools and platforms with a healthy skepticism, rather than toxic mistrust.
Practice makes perfect. If you’re looking to test your AI-spotting abilities, try Which Face Is Real, a fun online game that challenges users to guess which faces belong to actual people and which are fake digital avatars. How can you tell a fake profile from the real deal? As the Stanford researchers noticed, the telltale signs are often small: mismatched accessories (like a single earring), hair that blurs or disappears into the background, and eyes aligned exactly in the centre of the image.
We all know that robots are basically taking over the world, but are we actually aware of the pace of change in AI? Matt Clifford, co-founder and CEO of Entrepreneur First, used his newsletter last week to comment on the scale at which machine learning techniques are evolving. Read an excerpt from the blog below.
‘Jack Clark (whose Import AI newsletter is consistently excellent) reports on an interesting AI study out of China. Researchers trained a “brain-scale” model with over a trillion parameters (and demonstrated that the model should scale to close to 200 trillion parameters). These are enormous numbers. OpenAI’s GPT-3, which we’ve discussed many times before, was trained with 175 billion parameters. It’s not as simple as “more parameters = more powerful model”, but as we discussed in TiB 128 and 152, there’s good evidence that we’ve not yet reached the point where the benefits of scaling existing machine learning techniques are tapped out.’
There’s no doubt that AI is the unstoppable force of our time: more than 85 million jobs are expected to be displaced by AI by 2025 (replaced by 97 million new ones). And if you don’t believe us, ask one of the three billion voice assistants now in use. But navigating these challenges doesn’t mean beating the robots at their own game – it means making the way we hire as human as possible.
Want a chance to flex your entrepreneurial muscles and transform the way companies build teams?
Contact our support team with any specific requests and we’ll get back to you within 24 hours.