From “Apple’s New ‘Child Safety’ Initiatives, and the Slippery Slope” posted Friday on Daring Fireball:
In short, if these features work as described and only as described, there’s almost no cause for concern. In an interview with The New York Times for its aforelinked report on this initiative, Erik Neuenschwander, Apple’s chief privacy engineer, said, “If you’re storing a collection of CSAM material, yes, this is bad for you. But for the rest of you, this is no different.” By all accounts, that is fair and true.
But the “if” in “if these features work as described and only as described” is the rub. That “if” is the whole ballgame. If you discard alarmism from critics of this initiative who clearly do not understand how the features work, you’re still left with completely legitimate concerns from trustworthy experts about how the features could be abused or misused in the future.
What happens, for example, if China demands to supply its own database of image fingerprints for use with this system — a database that would likely include images related to political dissent? Tank man, say, or any of the remarkable litany of comparisons showing the striking resemblance of Xi Jinping to Winnie the Pooh.
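The core of the concern is that a fingerprint matcher is agnostic to what its database represents. Here is a minimal sketch of that idea — note that the `fingerprint` function below uses a plain cryptographic hash purely as a stand-in (Apple's actual system uses a perceptual hash, NeuralHash, which matches visually similar images rather than exact bytes), and the image data and database contents are invented for illustration:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash. Apple's real system uses NeuralHash,
    # which tolerates resizing and re-encoding; sha256 here only matches
    # byte-identical files, but the structural point is the same.
    return hashlib.sha256(image_bytes).hexdigest()

def flagged(image_bytes: bytes, database: set[str]) -> bool:
    # The matcher only performs set membership. It has no notion of what
    # the database "means" -- swap in a different database and the exact
    # same code flags entirely different content.
    return fingerprint(image_bytes) in database

# Hypothetical databases for illustration only.
csam_db = {fingerprint(b"known-csam-image")}
dissent_db = {fingerprint(b"tank-man-photo")}

photo = b"tank-man-photo"
print(flagged(photo, csam_db))     # False
print(flagged(photo, dissent_db))  # True
```

The point of the sketch: nothing in the matching code restricts it to CSAM. The only safeguard is who controls which fingerprints go into the database — which is exactly the pressure point a government demand would target.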
This slippery-slope argument is a legitimate concern. Apple’s response is simply that they’ll refuse…
Will Apple actually flatly refuse any and all such demands? If they do, it’s all good. If they don’t, and these features creep into surveillance for things like political dissent, copyright infringement, LGBT imagery, or adult pornography — anything at all beyond irrefutable CSAM — it’ll prove disastrous to Apple’s reputation for privacy protection. The EFF seems to see such slipping down the slope as inevitable.
My take: A good backgrounder by an honest Apple enthusiast who cares about privacy. Recommended.