Apple has announced plans for what it calls “Expanded Protections for Children”, to be implemented across Apple products and services such as iCloud, iOS and iPadOS. Whilst some have praised the move as an effort to protect children and reduce the spread of child sexual abuse material (CSAM), others have raised concerns about the invasion of privacy, particularly the scanning of personal data and the links to law enforcement.
What was announced
There are three main child protection features that Apple wishes to introduce:
- Communication tools – Messages will use on-device machine learning to warn about sensitive content. Apple says conversations will not be read by the company.
- Cryptography changes – CSAM detection will work by hashing images as they are uploaded to iCloud and comparing those hashes against a database of known CSAM. Importantly, Apple will “provide valuable information to law enforcement” regarding the results.
- Siri & search – These will be updated to provide more information and support when users encounter “unsafe situations”.
These changes will be introduced in “updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey”.
Furthermore, Apple says these child protection measures will “expand and evolve over time”.
Privacy concerns raised
The most notable of the announcements is the CSAM image scanning. Whilst there is broad agreement that CSAM is a problem that needs addressing, Apple has sparked debate over how it should be approached.
Let’s first look at how the image scanning will work in more detail:
- After updating to the operating system versions that include this feature, every device will contain a database of hashes of known CSAM.
- Before an image is uploaded to iCloud Photos, its hash is compared on-device against those in that (also on-device) database. The image is then uploaded to iCloud along with a “safety voucher” encoding the result of the scan (a simplified sketch of this flow follows the list).
- If an iCloud account accumulates a threshold number of matches that may indicate CSAM possession, Apple may access a report on the account. Apple claims the chance of an account being falsely flagged in this way is one in a trillion.
- When the threshold is exceeded, Apple will review the report, disable the associated Apple account, and send a report to the US National Center for Missing and Exploited Children (NCMEC).
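To make the flow above concrete, here is a minimal, hypothetical sketch in Python. The names (KNOWN_CSAM_HASHES, MATCH_THRESHOLD, SafetyVoucher) and the plaintext match flag are illustrative only: Apple’s actual design uses its NeuralHash algorithm, a blinded hash database and threshold cryptography, so the device never learns whether an image matched and Apple can only read the matching vouchers once the threshold is crossed.

```python
from dataclasses import dataclass

# Illustrative on-device hash database and reporting threshold.
# The real values and formats are not public; these are placeholders.
KNOWN_CSAM_HASHES = {"placeholder_hash_1", "placeholder_hash_2"}
MATCH_THRESHOLD = 30  # illustrative value only

@dataclass
class SafetyVoucher:
    image_id: str
    matched: bool  # simplification: in Apple's design this result is encrypted
                   # and hidden from the device itself

def scan_before_upload(image_id: str, image_hash: str) -> SafetyVoucher:
    """On-device step: compare the image's perceptual hash with the local
    database and attach the result to the upload as a safety voucher."""
    return SafetyVoucher(image_id, image_hash in KNOWN_CSAM_HASHES)

def account_needs_review(vouchers: list[SafetyVoucher]) -> bool:
    """Server-side step: only when enough vouchers match is the account
    flagged for human review and a possible report to NCMEC."""
    return sum(v.matched for v in vouchers) >= MATCH_THRESHOLD
```

What the sketch makes visible is the point critics keep returning to: the matching happens on the user’s own device, before the upload, rather than purely on Apple’s servers.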
Apple continues to emphasise its privacy-preserving techniques. But can a system designed to scan users’ private data really be 100% privacy-preserving? Many in cybersecurity and user-privacy advocacy say no.
Let’s go through some of the concerns:
- Could governments enact regulation to expand the use of this type of technology? For example, countries may wish to censor those who possess content seen as opposing the government.
- How sensitive is the system, and could it generate false positives? Cryptography researcher Matthew Green told the AP that “the system could be used to frame innocent people”. It has been demonstrated in the past that such scanning systems can be tricked into producing false matches (a toy illustration follows this list).
- Doesn’t this system undermine Apple’s reputation as a leading privacy- and security-focused company? Apple has run advertising campaigns focused on privacy (“what happens on your iPhone, stays on your iPhone”) and is often chosen by consumers for that reason.
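To illustrate how perceptual hashing systems can be tricked, here is a toy demonstration that assumes nothing about Apple’s far more sophisticated NeuralHash. It uses a simple average hash: given the hash of one image, a visually unrelated image is constructed that produces the identical hash, which is exactly the kind of collision an attacker would need in order to plant a false match.

```python
import numpy as np

def average_hash(img: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Toy perceptual hash: average the image over an 8x8 grid of blocks and
    set each bit according to whether the block is brighter than the mean."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    blocks = img[: bh * hash_size, : bw * hash_size].reshape(hash_size, bh, hash_size, bw)
    block_means = blocks.mean(axis=(1, 3))
    return (block_means > block_means.mean()).astype(np.uint8).ravel()

rng = np.random.default_rng(0)

# "Original" image: 256x256 greyscale noise standing in for a real photo.
original = rng.integers(0, 256, size=(256, 256)).astype(float)
target = average_hash(original)

# Construct a visually unrelated image with the same hash: paint each 32x32
# block bright or dark according to the bit it needs to produce.
forged = np.where(np.kron(target.reshape(8, 8), np.ones((32, 32))) == 1, 200.0, 50.0)

assert np.array_equal(average_hash(forged), target)
print("Collision: two very different images share the same 64-bit hash")
```

A real attack on a production system is much harder than this, but the underlying property is the same: perceptual hashes are designed to match near-duplicates rather than exact files, which is what makes deliberate collisions possible in principle.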
The Electronic Frontier Foundation, a group advocating for digital liberties and privacy, was more than disappointed, claiming Apple is opening a “backdoor to your private life”. They cite how the system could be abused by authoritarian governments passing laws to expand scanning of citizens for anti-government content, and raise concerns that anti-LGBTQ+ governments will wish to scan for content, undermining people’s safety.
The Center for Democracy and Technology criticised the proposals for similar reasons.
Analysis
Possession of CSAM and related content is a disgusting, heinous crime that must be dealt with firmly. But whilst Apple should be applauded for its intentions, the gaping privacy holes and the potential for “expansion” of such a system raise major concerns. The system could result in the investigation of innocent people and may be abused by governments.
Apple wishes to use machine learning, which is difficult for third parties to oversee, in conjunction with a database of CSAM hashes stored on every Apple device; once again, no one outside Apple knows the details or has oversight of this. Other platforms have tried and failed to filter out inappropriate sexual content: Tumblr is a good example, as its system filtered out various innocent images. In Apple’s case, the changes essentially make its Messages service less secure and undermine end-to-end encryption, since a truly secure and private service has zero access to, or scanning of, user content, hashed or not.
I argue that it is not Apple’s job to perform the role of law enforcement. And where do we draw the line between protecting citizens from crime and protecting our right to a private life? It is a debate that will never be settled, and one that will only intensify as our lives become increasingly digital.
These changes will initially be rolled out to US Apple devices only, and the image scanning applies only to photos stored in iCloud. You can read Apple’s plans in full, with links to technical assessments, on their website.