
Apple's Image Abuse Scanning Worries Privacy Experts

Expert: Tool Could Open Door to Broader Device Content Checks

Apple on Thursday unveiled a new system for detecting child sexual abuse photos on its devices, but computer security experts fear the system may morph into a privacy-busting tool.


The system, called CSAM Detection, is designed to catch offensive material that's uploaded to iCloud accounts from devices. It works partially on a device itself - a detail that privacy and security experts say could open a door to broader monitoring of devices.

"I don’t particularly want to be on the side of child porn and I’m not a terrorist," tweets Matthew Green, a cryptographer who is a professor at Johns Hopkins University. "But the problem is that encryption is a powerful tool that provides privacy, and you can’t really have strong privacy while also surveilling every image anyone sends."

The system will be implemented later this year in iOS 15, watchOS and macOS Monterey, the next version of Apple's desktop operating system. The Financial Times reports that it will only apply to U.S. devices.

"CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos," Apple says on its website. "This program is ambitious, and protecting children is an important responsibility. These efforts will evolve and expand over time."

Apple is also tweaking its Messages app to address explicit content either sent or received by children. Messages will use machine learning to automatically analyze image attachments and blur content that is sexually explicit. The analysis happens on the device, and Apple says it does not have access to the content. Apple will also display different types of warnings, including warnings shown to parents.

"When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources and reassured it is okay if they do not want to view this photo," Apple says.

Examples of messages Apple will display when children send or receive explicit material (Source: Apple)
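Apple has not published the model behind the Messages feature, but the general flow it describes (analyze an attachment on the device, then blur it if it is flagged) can be sketched in a few lines of Python. The classifier below is a hypothetical stand-in, not Apple's model.

```python
# Toy sketch of an "analyze, then blur" attachment flow, using Pillow.
# classify_attachment is a placeholder; Apple's on-device Messages model is not public.
from PIL import Image, ImageFilter

def classify_attachment(image: Image.Image) -> float:
    """Stand-in for an on-device ML model that scores an image from 0 to 1."""
    return 0.0  # replace with a real classifier

def prepare_attachment(path: str, threshold: float = 0.5) -> Image.Image:
    """Blur the preview of any attachment the classifier flags."""
    image = Image.open(path)
    if classify_attachment(image) >= threshold:
        # The recipient would have to tap through a warning to see the original.
        return image.filter(ImageFilter.GaussianBlur(radius=25))
    return image
```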

Flagging Abuse

Apple's CSAM Detection system flags abusive images before they're uploaded to iCloud. The system uses a database of known abusive material compiled by the National Center for Missing and Exploited Children, or NCMEC.

Apple renders that database into a set of unreadable hashes using a system called NeuralHash, which converts each image into a number specific to that image. The hash list is then delivered to and securely stored on users' devices. The design makes it virtually impossible for someone to figure out which images would trigger a positive detection.

Apple says the system can also detect images that are essentially identical but have slightly different attributes.

"Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value," according to an Apple technical document.

Apple says it's using a technique called private set intersection, or PSI, to detect photos with CSAM content and also ensure other photos remain private.
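Apple has published its own description of its protocol; the toy below only illustrates the core idea behind one classic PSI construction, in which each party blinds its hashed items with a secret exponent and only doubly blinded values are compared, so neither side learns anything about items outside the intersection. It is a teaching sketch under those assumptions, not Apple's protocol, and it is not secure for real use.

```python
import hashlib
import secrets

P = 2**127 - 1  # a prime used only for this toy; real PSI uses proper elliptic-curve groups

def h(item: str) -> int:
    """Hash an item into the multiplicative group mod P (toy mapping)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P or 1

def blind(value: int, key: int) -> int:
    return pow(value, key, P)

# Each party keeps a secret exponent.
a = secrets.randbelow(P - 2) + 1  # server's secret
b = secrets.randbelow(P - 2) + 1  # client's secret

server_set = {"hash1", "hash2", "hash3"}   # e.g., known-image hashes
client_set = {"hash3", "hash4"}            # e.g., hashes of a user's photos

# Server blinds its items once; the client blinds them a second time.
server_twice = {blind(blind(h(x), a), b) for x in server_set}

# Client blinds its items once; the server blinds them a second time.
client_twice = {blind(blind(h(y), b), a) for y in client_set}

# Exponentiation commutes, so only shared items collide.
print(len(server_twice & client_twice))  # prints 1 (the one common item)
```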

If the system detects enough matches for CSAM content, Apple will review the material, and in some cases a user's account may be disabled and a report sent to NCMEC.
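Apple does not spell out the mechanism in the passage above, but a standard way to enforce an "only after enough matches" rule is threshold secret sharing: each match releases one share of a decryption key, and the key can only be reconstructed once the number of shares crosses the threshold. The Shamir-style sketch below illustrates that idea; it is an illustrative assumption, not Apple's exact scheme.

```python
import secrets

PRIME = 2**61 - 1  # prime field for the toy example

def make_shares(secret: int, threshold: int, count: int):
    """Split `secret` into `count` shares; any `threshold` of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, count + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 123456789                                   # stands in for a decryption key
shares = make_shares(key, threshold=5, count=12)  # one share per matched image
print(recover(shares[:4]) == key)  # almost surely False: below the threshold
print(recover(shares[:5]) == key)  # True: threshold reached, key recoverable
```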

Apple says the system preserves users' privacy because it learns nothing about images that do not match the CSAM database.

Surveillance Creep?

It's not entirely clear why Apple has chosen to do part of the analysis on users' devices, which some security experts find concerning.

iCloud backups are not end-to-end encrypted, making them fair game for law enforcement agencies. There's no reason Apple couldn't scan material for CSAM after it has been uploaded from a device, as other cloud storage providers already do.

Green postulates that the system's design may be part of an effort to enable broader device scanning, extending to files not shared with iCloud and possibly even to end-to-end encrypted content.

From there, it may be a slippery slope, Green contends. Apple has "sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content. That’s the message they’re sending to governments, competing services, China, you," he writes.

But Apple's website features three assessments of CSAM Detection from cryptography and security experts who endorsed the system.

"In conclusion, I believe that the Apple PSI system provides an excellent balance between privacy and utility, and will be extremely helpful in identifying CSAM content while maintaining a high level of user privacy and keeping false positives to a minimum," writes Benny Pinkas of the Department of Computer Science at Bar-Ilan University in Israel, in a three-page paper.


About the Author

Jeremy Kirk


Executive Editor, Security and Technology, ISMG

Kirk was executive editor for security and technology for Information Security Media Group. Reporting from Sydney, Australia, he created "The Ransomware Files" podcast, which tells the harrowing stories of IT pros who have fought back against ransomware.




