Apple is postponing the rollout of its child safety features.

Following backlash from critics, Apple says it will delay the release of its Child Sexual Abuse Material (CSAM) detection tools "to make improvements." One of those features, a check for known CSAM in iCloud Photos, has raised particular concern among privacy advocates.

The CSAM detection features were expected to arrive in upcoming OS updates, including iOS 15, iPadOS 15, watchOS 8, and macOS Monterey, which are due for release in the coming weeks. Apple didn't go into detail about the improvements it intends to make. Engadget has contacted the company for a response.

One of the proposed features for Messages would warn children and their parents when Apple's on-device machine learning detected sexually explicit photos being shared in the app. Images sent to children would be blurred and accompanied by warnings. Siri and the built-in search features on iOS and macOS would also point people to appropriate resources when they ask how to report CSAM or attempt CSAM-related searches.

The iCloud Photos feature is likely the most divisive of Apple's CSAM detection technologies. It uses an on-device process to match photos against a database of known CSAM image hashes (a kind of digital fingerprint for such images) maintained by the National Center for Missing and Exploited Children and other organizations.

This analysis is meant to take place before an image is uploaded to iCloud Photos. If the system detected known CSAM and human reviewers manually confirmed a match, Apple would disable the person's account and file a report with NCMEC.
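For readers curious about the general mechanics, the sketch below illustrates the idea of hashing outgoing photos and checking them against a set of known-image hashes before upload, with a threshold before anything is escalated for human review. It is only an illustration: it substitutes a plain SHA-256 digest and an in-memory set for Apple's actual NeuralHash and private set intersection protocol, and the names, hashes, and threshold value are all hypothetical.

```swift
import Foundation
import CryptoKit

// Illustrative sketch only. Apple's real system relies on a perceptual
// "NeuralHash" and a cryptographic private set intersection protocol;
// here a plain SHA-256 digest and an in-memory Set stand in for both,
// just to show the match-before-upload flow described above.
struct KnownImageMatcher {
    let knownHashes: Set<String>   // hypothetical database of known image hashes
    let reviewThreshold: Int       // hypothetical match count before human review

    // Hex-encoded SHA-256 digest of the raw image bytes.
    func digest(of imageData: Data) -> String {
        SHA256.hash(data: imageData)
            .map { String(format: "%02x", $0) }
            .joined()
    }

    // Count matches in the pending upload queue and decide whether
    // the account would be escalated for manual review.
    func shouldEscalate(pendingUploads: [Data]) -> Bool {
        let matches = pendingUploads.filter { knownHashes.contains(digest(of: $0)) }.count
        return matches >= reviewThreshold
    }
}

// Usage with placeholder values.
let matcher = KnownImageMatcher(knownHashes: ["<known-hash-1>", "<known-hash-2>"],
                                reviewThreshold: 30)
let queuedPhotos: [Data] = []   // images queued for iCloud Photos
print(matcher.shouldEscalate(pendingUploads: queuedPhotos))   // false for an empty queue
```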

In Apple's words, the approach offers "privacy improvements over previous solutions because Apple only learns about customers' photos if they've got a collection of known CSAM in their iCloud Photos account." Privacy groups, however, were alarmed by the plan.

According to some critics, CSAM photo scanning could lead to law enforcement agencies or governments pressuring Apple to hunt for other types of images, such as material used to crack down on dissidents. Two Princeton University researchers who say they built a comparable system called the technology "hazardous." "Our approach may be simply repurposed for surveillance and censorship," they said, adding that the design wasn't limited to a single type of content; a service could easily swap in any content-matching database, and the user would have no way of knowing.

Critics also chastised Apple for what they saw as a reversal of its long-held stance on protecting customer privacy. The company made headlines in 2016 when it refused to unlock the iPhone used by the San Bernardino shooter, sparking a legal dispute with the FBI.

In mid-August, Apple acknowledged that poor communication had caused confusion about the features, which it had unveiled just over a week earlier. The photo-scanning technology includes "several degrees of auditability," according to Craig Federighi, the company's senior vice president of software engineering. Nonetheless, Apple is now rethinking its approach and hasn't said when the features will be available.
