Apple should scan iPhones for child abuse images, says inventor of scanning technology

Apple should heed warnings from UK security services and relaunch its controversial plans to scan iPhones for child abuse images, the inventor of the scanning technology has claimed.

Professor Hany Farid, an expert in image analysis at the University of California, Berkeley, is the inventor of PhotoDNA, an “image hashing” technique used by companies across the web to identify and remove illegal images. He said that, following an intervention from technical officials at GCHQ and the National Cyber Security Centre backing an extension of the technology to individual phones, Apple should be encouraged to revive its abandoned plans to do just that.
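(PhotoDNA itself is proprietary and its exact algorithm is not public, but the broad idea of image hashing can be illustrated with a simpler, openly documented relative, the “difference hash”. The Python sketch below is illustrative only: it reduces a photo to a 64-bit fingerprint that survives resizing, recompression and minor edits, which is what lets services recognise copies of a known image.)

```python
# A minimal "difference hash" (dHash) sketch: an open stand-in for
# proprietary systems such as PhotoDNA, illustrating how a compact
# fingerprint can survive resizing, recompression and small edits.
from PIL import Image  # pip install Pillow

def dhash(path: str, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual fingerprint of the image at `path`."""
    # Shrink to a (hash_size + 1) x hash_size greyscale thumbnail,
    # throwing away the fine detail that distinguishes near-duplicates.
    img = Image.open(path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS
    )
    pixels = list(img.getdata())
    bits = 0
    # One bit per horizontally adjacent pair: is the right pixel brighter?
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(right > left)
    return bits
```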

“The pushback came from a relatively small number of privacy groups,” Farid said, speaking on the latest podcast from the Internet Watch Foundation (IWF), the child safety group. “I would argue that the vast majority of people would have said, ‘Sure, that sounds perfectly reasonable’, yet a relatively small but vocal group put enormous pressure on Apple and, I think somewhat cowardly, Apple succumbed to this pressure.

“I think they should have stood firm and said: ‘It’s the right thing to do and we’re going to do it.’ And I’m a huge proponent of not just Apple but Snap, Google and all online services doing this.”

Apple first announced plans to perform “client-side scanning” in August 2021, alongside other child safety proposals that have since arrived on iPhones. The company intended to update iPhones with software that could match child abuse images stored in a user’s photo library against identical copies already known to the authorities to have been shared on the web, and to report such users to child protection agencies.
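(Apple’s actual design paired its own “NeuralHash” fingerprints with a cryptographic “private set intersection” protocol, so the sketch below is not Apple’s implementation. Purely as an illustration of the matching step described above, a naive client-side check against a list of known fingerprints might look like this, with all names and values hypothetical.)

```python
# Hypothetical sketch of the matching step only: NOT Apple's design.
# Apple's proposal paired its own "NeuralHash" with cryptographic
# "private set intersection", so no plain hash list ever sits on the
# phone; this simple comparison just shows the flagging logic.

def hamming(a: int, b: int) -> int:
    """Count the bits on which two 64-bit fingerprints differ."""
    return bin(a ^ b).count("1")

def flag_matches(photo_hashes: dict[str, int],
                 known_hashes: set[int],
                 max_distance: int = 4):
    """Yield photo ids whose fingerprint lies within `max_distance`
    bits of a known hash. Near-duplicates survive recompression and
    resizing with a few flipped bits, so matching tolerates a small
    Hamming distance rather than requiring exact equality."""
    for photo_id, h in photo_hashes.items():
        if any(hamming(h, k) <= max_distance for k in known_hashes):
            yield photo_id

# Example with made-up values: one exact match, one a single bit off.
known = {0x9F3B2C4D5E6F7081}
library = {
    "IMG_001": 0x9F3B2C4D5E6F7081,  # identical fingerprint
    "IMG_002": 0x9F3B2C4D5E6F7083,  # 1 bit away: a near-duplicate
    "IMG_003": 0x0123456789ABCDEF,  # unrelated image
}
print(list(flag_matches(library, known)))  # ['IMG_001', 'IMG_002']
```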

After an outcry from privacy groups, the company shelved the proposal in September of that year and has not discussed it publicly since. But in July, UK security officials published a paper detailing their belief that such scanning could be deployed in a way that allays some fears, such as the concern that an oppressive state could hijack it to search for politically controversial images.

“Details matter when talking about this subject,” wrote Ian Levy of the NCSC and Crispin Robinson of GCHQ. “Discussing the subject in generalities, using ambiguous language or hyperbole, will almost certainly lead to a bad outcome.”

Farid argued that now is the time for Apple and other tech companies to act and get ahead of legislation. “With the Online Safety Bill making its way through the UK parliament, and with the DSA [Digital Services Act] and DMA [Digital Markets Act] moving through Brussels, I think now is the time for companies to say: ‘We’re going to do this, and we’re going to do it on our terms.’ And, if they don’t, then I think we have to step in with a very heavy hand and insist that they do.

“We regularly scan our devices, our emails, our cloud services for everything including spam, malware, viruses and ransomware, and we do it willingly because it keeps us safe. I don’t think it’s hyperbolic to say that if we’re ready to protect ourselves, we should be ready to protect the most vulnerable among us.

“It’s the same basic core technology, and I reject those who say this is somehow giving something up. I would argue that this is, in fact, exactly the balance we need to protect children online and to protect our privacy and rights.”


Speaking about the Levy/Robinson paper, Mike Tunks, head of policy and public affairs at the IWF, said: “Over the last few years, the government has said, ‘We want tech companies to do more to tackle child sexual abuse in end-to-end encrypted environments.’

“As we know, at present no technology can do that, but this paper sets out some ways in which it could be achieved.”
