How can Apple monitor a kid’s Messages without breaking end-to-end encryption?

From “Expanded Protections for Children,” an Apple Q&A posted Monday:

What are the differences between communication safety in Messages and CSAM [Child Sexual Abuse Material] detection in iCloud Photos?

Communication safety in Messages is designed to give parents and children additional tools to help protect their children from sending and receiving sexually explicit images in the Messages app. It works only on images sent or received in the Messages app for child accounts set up in Family Sharing. It analyzes the images on-device, and so does not change the privacy assurances of Messages. When a child account sends or receives sexually explicit images, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view or send the photo. As an additional precaution, young children can also be told that, to make sure they are safe, their parents will get a message if they do view it…

Does this break end-to-end encryption in Messages?

No. This doesn’t change the privacy assurances of Messages, and Apple never gains access to communications as a result of this feature. Any user of Messages, including those with communication safety enabled, retains control over what is sent and to whom. If the feature is enabled for the child account, the device will evaluate images in Messages and present an intervention if the image is determined to be sexually explicit. For accounts of children age 12 and under, parents can set up parental notifications which will be sent if the child confirms and sends or views an image that has been determined to be sexually explicit. None of the communications, image evaluation, interventions, or notifications are available to Apple…

Will parents be notified without children being warned and given a choice?

No. First, parent/guardian accounts must opt-in to enable communication safety in Messages, and can only choose to turn on parental notifications for child accounts age 12 and younger. For child accounts age 12 and younger, each instance of a sexually explicit image sent or received will warn the child that if they continue to view or send the image, their parents will be sent a notification. Only if the child proceeds with sending or viewing an image after this warning will the notification be sent. For child accounts age 13–17, the child is still warned and asked if they wish to view or share a sexually explicit image, but parents are not notified.
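
In code terms, the policy described in the Q&A boils down to a short decision tree. The Swift sketch below is purely illustrative: the type and function names are invented here, and it stands in for whatever Apple actually runs on-device, which has not been published.

```swift
// Hypothetical sketch of the communication-safety flow described in the Q&A.
// All names are invented for illustration; this is not Apple's code.

enum ChildAgeBand {
    case twelveAndUnder       // parental notifications may be enabled
    case thirteenToSeventeen  // warned only; parents are never notified
}

struct CommunicationSafetySettings {
    let enabledByGuardian: Bool        // guardian must opt in via Family Sharing
    let parentalNotificationsOn: Bool  // configurable only for age 12 and under
    let ageBand: ChildAgeBand
}

enum SafetyOutcome {
    case deliverNormally
    case blurAndWarn(notifyParentsIfChildProceeds: Bool)
}

/// Evaluation happens on-device; nothing in this flow is sent to Apple.
func handleImage(isSexuallyExplicit: Bool,
                 settings: CommunicationSafetySettings) -> SafetyOutcome {
    guard settings.enabledByGuardian, isSexuallyExplicit else {
        return .deliverNormally
    }
    // The image is blurred and the child is warned in every case.
    // Parents are notified only for accounts age 12 and under, only if
    // notifications are turned on, and only if the child chooses to proceed.
    let notifyParents = settings.ageBand == .twelveAndUnder && settings.parentalNotificationsOn
    return .blurAndWarn(notifyParentsIfChildProceeds: notifyParents)
}
```

In this sketch, an account in the 13-to-17 band always comes back with the notification flag set to false, matching the Q&A's statement that older teens are warned but their parents are not told.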

My take: I’m impressed by how much thought went into the design of this program — produced, I have no doubt, to fend off three-letter-agency pressure dating back to San Bernardino. It reads like government policy created by a competent government. But there’s no getting around the fact that it’s complicated and has subtle, possibly unknowable privacy implications. Unlike the hot new Letter to Apple that’s already drawn more than 6,000 signatures (“Apple’s proposal introduces a backdoor that threatens to undermine fundamental privacy protections for all users of Apple products”), it doesn’t come with an explanation that fits in a tweet.

14 Comments

  1. Fred Stein said:
    All this will blow over.

    Surely Apple thoroughly considered all the issues raised by all sides. And examined all technical options. (“All” meaning 99%.)

    Good news: Apple now uses the word safety more frequently. That’s a stronger positioning statement than privacy.

    4
    August 9, 2021
    • Steven Noyes said:
      Safety != Privacy.

      Not even related topics IMO.

      0
      August 9, 2021
  2. David Emery said:
    “it doesn’t come with an explanation that fits in a tweet.”

    And that’s Yet Another Reason/Justification for why I despise Twitter. It’s only useful for yelling, not for serious nuanced complex discussion.

    What remains to be seen is whether Apple’s approach does set a pattern for future legislation.

    0
    August 9, 2021
  3. David Baraff said:
    I admit to now being concerned about how Apple prevents governments from adding to the list of “bad” photos content that has to do with, for example, political dissent or LGBTQ issues.

    On the one hand I trust Apple put a lot of thought into it. On the other hand, I really don’t see how Apple can robustly control what is deemed as reportable and what is not, and I sure would like to understand how and why they maintain a belief that this is doable.

    2
    August 9, 2021
    • David Drinkwater said:
      This is a complicated problem, and I will try to state this “non-aggressively,” which can be a challenge for me as a sex-positivity activist:

      The rules and laws regarding sexual activity and/or content are shaped in many states by the gender of the respective partners, with the implicit (but potentially incorrect) understanding of sexual orientation. (E.g., in some states the age of consent is different for a male-female couple than for a male-male couple. Don’t get me started on gender…)

      It seems to me that Apple is making a “reasonable effort” to protect kids. Such efforts are rarely perfect, but I’m glad they are trying.

      1
      August 9, 2021
      • David Baraff said:
        I agree with “I’m glad they are trying.”

        I want to understand though how they don’t end up sliding down the slippery slope while trying to do something I agree with.

        Perhaps if you can’t predict the future, which we can’t, the morally correct thing is to reasonably fix what you can, today, and deal with the future when it comes.

        But I still worry.

        0
        August 9, 2021
        • Bart Yee said:
          To paraphrase:
          Doing hard things is hard. Doing the impossible takes a little longer.

          Progress takes forward and backward steps in very haphazard fashion with many different forces pushing and pulling. I’m glad that Apple is taking this issue on with what seems like thoughtful approaches to some sides of the problems: imagery, warnings and prompts, available parental controls and monitoring, and educating young people on appropriate behavior, at least from an American legal and moral perspective. I believe (but not confirmed by Apple) that they have had a lot of input or discussion from advocacy groups about what was needed and how Apple could create effective first step solutions that work within Apple’s ecosystem and ethos principles.

          Remember, we have a world where, in some places, men believe it’s OK to sexually assault young and underage women and girls, entrap them into sexting, send, receive, and extort explicit photos, and traffic in imagery, fraud, and human lives. All because they believe the rules and mores don’t apply to them, they have mental illnesses, or their society allows it, indirectly or explicitly.

          Criminals of all types have always exploited whatever tools they have in their era. The internet, email, messaging, file sharing, encrypted storage, etc. are all great but are also tools that can be used for bad, even criminal, behavior. If Apple can help users protect themselves, IMO, that’s great. If Apple comes under fire for questions about tech overreach, frankly, I think Apple is ready to have this conversation and articulate why it takes the thoughtful and mindful approaches it does. Contrast that to the very different approaches Google, Facebook, etc. take with their users’ data, personal info, and tagged/identified photos.

          Would it surprise anyone that an “advertiser” seeks info on a demo that is socially active, aged 12–21, maybe posts a lot of selfie pictures, likes to chat, has a location specified, and seems interested in meeting others? That demo might be great for selling clothing ads, social media and dating apps, concert ads, etc. Or it might just be exploited by traffickers, sexually abusive or worse individuals or groups, or extortionists looking for their next marks.

          If Apple can help us address the above, IMO they are working to better their ecosystems and users.

          1
          August 9, 2021
    • Steven Noyes said:
      This is why I like the parental control of CSIM but find CSAM deeply troubling.

      0
      August 9, 2021
  4. Fred Stein said:
    Two important quotes from the Q&A:

    “Existing techniques as implemented by other companies scan all user photos stored in the cloud. This creates privacy risk for all users.”

    “In most countries, including the United States, simply possessing these images is a crime and Apple is obligated to report any instances we learn of to the appropriate authorities.”

    3
    August 9, 2021
    • David Emery said:
      It will be interesting to see if/how TikTok and others currently throwing rocks at Apple address the legal requirement. They could just say “We report all images we detect” (with the hidden footnote that ‘we don’t have any way to detect them.’) Hence my thought about legal action on this as part of Big Tech regulation.

      0
      August 9, 2021
  5. Gary Gouriluk said:
    6000 complaints! That’s a lot of pedophiles.

    1
    August 9, 2021
  6. Steven Noyes said:
    The answer to the first question is a red herring. The question is about CSAM and the answer is about iMessage. CSAM detection deals with uploads to iCloud Photos, NOT with iMessage. I have no issue with parental control over CSIM, but CSAM is evil.

    CSAM detection compares uploaded photos to a government-supplied list of hashes of known “bad” photos. What if that photo is a meme critical of the current regime? The government could sneak in the hash and track people they consider “subversive.”

    CSAM detection has serious, serious issues and should be scrapped.

    0
    August 9, 2021
    • Lalit Jagtap said:
      “The set of image hashes used for matching are from known, existing images of CSAM that have been acquired and validated by child safety organizations. Apple does not add to the set of known CSAM image hashes,” per Apple’s FAQ about child safety.

      0
      August 10, 2021
