Authorities in Lahore, Pakistan, have arrested and questioned a web developer named as Farhan Asif on suspicion of cyber terrorism. The arrest relates to misinformation posted in the wake of the 29 July mass stabbing at a children’s Taylor Swift-themed dance workshop in Southport, Merseyside, in which three children were murdered.
The attack on a community dance studio was carried out by a 17-year-old British national of Rwandan background, later named as Axel Rudakubana. However, in the wake of the atrocity, misinformation circulated online suggesting the perpetrator was a Muslim asylum seeker who had arrived in the UK after crossing the English Channel in a small boat. A fake name for the attacker was also invented and widely circulated.
These claims were propagated by a ‘news’ site called Channel3Now, which Asif operated through an account on X (formerly Twitter), and were seized upon by far-right extremists in the UK.
Although later retracted, the claims led directly to an attack on a Southport mosque by a violent mob, and racist anti-immigration riots spread across the UK, fuelled further by statements from prominent right-wing figures including Reform UK leader Nigel Farage and erratic tech billionaire Elon Musk.
Over 1,000 people have now been arrested in connection with the riots, with hundreds charged and many imprisoned.
According to Lahore police, Asif operated the Channel3Now account alone, and had written his post based on misinformation he had copied from a UK-based social media account, without bothering to verify it.
Computer Weekly understands the case has been handed to Pakistan’s Federal Investigation Agency (FIA).
It was initially reported that Asif had been charged with cyber terrorism offences, but according to the BBC, the FIA has since stated that he has not been charged. It is not known whether the UK has requested his extradition.
Verify information
Asif’s arrest serves as an apt reminder to verify the authenticity of online information before sharing it, particularly when it relates to controversial issues, such as the UK’s response to asylum seekers and immigration, the recent General Election, or the upcoming US Presidential Election.
The growing ease with which artificial intelligence can be co-opted to create highly convincing deepfake propaganda, as some state-backed threat actors are already demonstrating, makes it even more important for security professionals to communicate the risks accurately.
Some key actions that anybody can take include the following:
- Verify the credibility of a source before trusting it or sharing it – established, reputable news outlets are more likely to be trustworthy and less likely to distribute false information or fake content (a minimal sketch of how such a check might be automated follows this list);
- Cross-check information across multiple reliable sources, using trusted verification websites such as Full Fact in the UK, or FactCheck.org in the US;
- Be alert to inconsistencies in video and audio quality – deepfake material can have subtle indications that it has been tampered with or created using AI;
- Where appropriate, use tools and software designed to spot deepfakes;
- Be sceptical: if something seems too outrageous to be true, it may well be nonsense – reflect and investigate further before sharing it;
- Try to keep abreast of new developments in deepfakes and misinformation, and talk to those around you, especially children, teens, or elderly family members, about the risks.
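To illustrate how the first two points might be partially automated, the following minimal Python sketch runs two crude pre-sharing checks on a link: whether it is served over HTTPS, and whether its domain matches a hand-maintained allow-list of established outlets and fact-checkers. The domain list, function names and example URLs are illustrative assumptions, not a recommended or complete verification tool – human judgement and the checks above still apply.

```python
from urllib.parse import urlparse

# Hypothetical, hand-maintained allow-list of established outlets and
# fact-checking services. The entries are examples, not an authoritative list.
TRUSTED_DOMAINS = {
    "bbc.co.uk",
    "reuters.com",
    "fullfact.org",
    "factcheck.org",
}


def domain_of(url: str) -> str:
    """Return the host portion of a URL, minus any leading 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host


def pre_share_checks(url: str) -> list[str]:
    """Return warnings to consider before sharing a link.

    An empty list means no automated red flags were found, not that the
    content itself is true.
    """
    warnings = []
    host = domain_of(url)

    if not url.lower().startswith("https://"):
        warnings.append("Link is not served over HTTPS.")

    # Accept an exact match or a subdomain of a trusted outlet.
    trusted = any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)
    if not trusted:
        warnings.append(
            f"'{host}' is not on the allow-list; cross-check the story with "
            "an established outlet or fact-checker before sharing."
        )

    return warnings


if __name__ == "__main__":
    for link in (
        "https://www.bbc.co.uk/news/some-story",
        "http://channel3now.example/breaking",
    ):
        flags = pre_share_checks(link)
        print(link, "->", flags or "no automated red flags")
```

A deliberately simple allow-list approach like this cannot catch false stories published by otherwise reputable sites, but it does surface the kind of obscure, unverified source that was at the centre of this case, prompting the reader to pause and cross-check before amplifying it.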