From "Expanded Protections for Children," an Apple Q&A posted Monday:
What are the differences between communication safety in Messages and CSAM [Child Sexual Abuse Material] detection in iCloud Photos?
Communication safety in Messages is designed to give parents and children additional tools to help protect their children from sending and receiving sexually explicit images in the Messages app. It works only on images sent or received in the Messages app for child accounts set up in Family Sharing. It analyzes the images on-device, and so does not change the privacy assurances of Messages. When a child account sends or receives sexually explicit images, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view or send the photo. As an additional precaution, young children can also be told that, to make sure they are safe, their parents will get a message if they do view it...
Does this break end-to-end encryption in Messages?
No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple...
Will parents be notified without children being warned and given a choice?
No. First, parent/guardian accounts must opt-in to enable communication safety in Messages, and can only choose to turn on parental notifications for child accounts age 12 and younger. For child accounts age 12 and younger, each instance of a sexually explicit image sent or received will warn the child that if they continue to view or send the image, their parents will be sent a notification. Only if the child proceeds with sending or viewing an image after this warning will the notification be sent. For child accounts age 13–17, the child is still warned and asked if they wish to view or share a sexually explicit image, but parents are not notified.
My take: I'm impressed by how much thought went into the design of this program -- produced, I have no doubt, to fend off three-letter-agency pressure dating back to San Bernardino. It reads like government policy created by a competent government. But there's no getting around the fact that it's complicated and has subtle, possibly unknowable privacy implications. Unlike the hot new Letter to Apple that's already drawn more than 6,000 signatures ("Apple's proposal introduces a backdoor that threatens to undermine fundamental privacy protections for all users of Apple products"), it doesn't come with an explanation that fits in a tweet.