Episode #47: The Apple Turns

Has Apple Abandoned Privacy to Snoop on Your Photos?

Apologies! Given its importance, this episode was intended to be published for all subscribers, both paid and free. I accidentally published for paid subscribers only so am now correcting that error!


Paid subscribers got access to Episode #42, detailing how we can better protect and secure our identities, finances, and other important digital data when going on to major gaming platforms.

Free subscribers only receive access to a third of what I research and write; paid subscribers have access to every episode. They all get to comment and ask questions on our Substack discussion board.

Join us! Become a paid subscriber for the cost of one Starbucks Latte per month: $60/year which equates to just $5/month. Yes, coffee is important… however, in the Digital Age, so is knowing how to improve your privacy and security practices.

Make the leap and join us as a paid subscriber.


The Shit Hits The Fan

Well, this is a pickle…

Apple, Inc. - the iconic company that’s made customer privacy the centerpiece of its marketing and PR - has been harpooned by privacy and security advocates regarding new features that hundreds of millions of users will have access to if they upgrade to iOS 15 in the Fall of 2021.

Subscriber Charles Yaker brought this to my attention a few days before the story made international headlines, so thank you, Charles.

Let’s break things down in easy-to-understand language, and let’s begin with the obvious, first question:

Should We Be Concerned or Alarmed With What Apple is Doing?

I’ll speak for myself:

Yes, I am concerned. I believe some concern here is reasonable, given that Apple is changing how it handles the execution of users’ privacy and security on its devices and services.

No, I am not alarmed. Not yet, at least. I’m not seeing enough evidence from Apple - nor from their detractors - to suggest that Apple is attempting anything with malicious intent.

You might think or feel differently and that’s cool. My only goals with this episode are to share my opinions and to help you make an informed decision for yourself.

For now, just know: I don’t consider this to be a black & white matter, so your final opinions will probably depend on who you are, who you trust, and how you view the world. It’s a complicated recipe with some rather tasty ingredients:

  • humanity’s increasing dependence on technology to communicate

  • profit-driven, publicly-traded technology companies

  • autocratic (and democratic!) nations who use technology to oppress or surveil their own citizens

With that backdrop in mind, let’s get into the weeds...


What $@*&! Is Apple Actually Doing?!

Apple is releasing three new tools in the Fall of 2021 as part of iOS 15, developed in collaboration with child safety and cryptography experts. Some of those experts and their PDF reports are provided at the bottom of Apple’s Child Safety announcement page and are worth reading.

Apple says the new features will “help protect children from predators who use communication tools to recruit and exploit them and limit the spread of Child Sexual Abuse Material (CSAM).”

While this sure sounds like Apple’s protecting minors, let’s learn HOW the company plans to accomplish its goals.

Feature #1: Changes To The Messages App

The Messages app will now include a parental control feature. If this feature is activated - and it is not by default - it alerts minors that an image they’ve received may be sexual in nature. The minors then have a choice whether to view the image or not. If they DON’T view the image(s) in question, the conversation goes unreported. If they DO elect to view the image(s) in question and they are 12 years old or younger, their parents or guardians will be notified. (I’ve sketched this decision flow in rough code after the list of details below.) Apple illustrates how this feature will work:

Four important details:

  1. For this new feature, all image scanning is done on-device. This means that there’s no uploading of images to Apple’s servers for checking. Instead, Apple’s software and hardware running on a user’s iOS device will handle this task. This also means that scanned images will not be checked against known child porn images in the CSAM database.

  2. These new features aren’t activated by default. Users need to opt in. To opt in, parents or guardians must add a child’s phone to the iCloud account as a family device. Once activated, parental notification only happens for children aged 12 or younger.

  3. Apple’s new changes are to the app, not the service. As longtime Apple blogger John Gruber rightly points out, the Messages app can be empowered to scan images from anyone using any messaging service it handles - iMessage or SMS - for questionable or sensitive content.

  4. Image scanning “happens before (for sending) or after (for receiving)” on people’s devices. Gruber maintains that, because of this, Apple can still claim that its messaging system is end-to-end encrypted.
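To make that flow concrete, here’s a rough, hypothetical sketch of the decision logic in Python. Apple hasn’t published its code and its on-device classifier isn’t public, so every name and function below is my own invention - treat this purely as an illustration of the steps, not as Apple’s implementation.

```python
# Hypothetical sketch of the Messages parental-control flow described above.
# All names are invented for illustration; this is NOT Apple's code.

from dataclasses import dataclass

PARENT_NOTIFICATION_MAX_AGE = 12  # parents are notified only for kids aged 12 or younger


@dataclass
class ChildDevice:
    age: int
    feature_enabled: bool = False  # opt-in via the family iCloud account; off by default


def handle_received_image(child: ChildDevice,
                          flagged_by_on_device_classifier: bool,
                          child_chooses_to_view: bool) -> str:
    """Decide what happens when a child receives an image.

    The classification happens on-device; nothing is uploaded to Apple, and
    nothing is checked against the CSAM database in this particular feature.
    """
    if not child.feature_enabled:
        return "not scanned"                       # feature is off by default
    if not flagged_by_on_device_classifier:
        return "shown normally"                    # image wasn't flagged
    if not child_chooses_to_view:
        return "blurred; child declined; nobody is notified"
    if child.age <= PARENT_NOTIFICATION_MAX_AGE:
        return "viewed; parents or guardians are notified"
    return "viewed; child was warned, but nobody is notified"  # ages 13-17


# Example: an 11-year-old with the feature enabled chooses to view a flagged image.
print(handle_received_image(ChildDevice(age=11, feature_enabled=True), True, True))
```

Notice how many gates sit in front of a parental notification: the feature has to be switched on, the on-device classifier has to flag the image, the child has to choose to view it anyway, and the child has to be 12 or younger.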

Feature #2: Changes To The iCloud Photo Library Services

Apple’s second feature is designed to detect known child porn images from the CSAM database that users have uploaded to or keep in their iCloud Photo libraries.

In this system, all photos uploaded to the iCloud Photo library will receive a fingerprint. Think of this as a secure, digital identifier. In Apple’s case, the fingerprints of a user’s images will each have two layers of encryption, including something called a safety voucher. This process ensures that Apple won’t be able to see anyone’s private images. Not at first, anyway. However…

If the fingerprints of a user’s images match the fingerprints of any known child-porn images kept in the CSAM database, those matches are noted against the user’s account and a threshold counter is activated. If that noted account’s iCloud Photo library contains 30 or more known child porn images, only then is Apple notified.

At that point - and only at that point - a human review team at Apple checks those specific images to confirm or deny the presence of any known child porn. If confirmed by Apple’s human review team, the user is then reported to the proper authorities. If no child porn exists, the user is not reported.
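To make the threshold idea concrete, here’s a drastically simplified, hypothetical sketch in Python. In Apple’s actual design, matching happens through encrypted “safety vouchers,” so Apple can’t see individual results - or even the running count - until the 30-match threshold is crossed. The function names and the plain hash below are my own stand-ins, not Apple’s code.

```python
# Drastically simplified, hypothetical sketch of the iCloud Photos matching
# threshold described above. NOT Apple's code or algorithm.

import hashlib

REVIEW_THRESHOLD = 30  # human review only happens at 30 or more matches


def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a 'fingerprint' of an image's content.

    Apple's real system uses a perceptual hash designed to survive resizing and
    recompression, NOT a plain cryptographic hash like this one.
    """
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(user_photos: list[bytes], known_csam_fingerprints: set[str]) -> int:
    """Count how many of a user's uploaded photos match the known-CSAM list."""
    return sum(1 for photo in user_photos if fingerprint(photo) in known_csam_fingerprints)


def should_escalate_to_human_review(user_photos: list[bytes],
                                    known_csam_fingerprints: set[str]) -> bool:
    """Only accounts at or above the threshold ever reach Apple's review team."""
    return count_matches(user_photos, known_csam_fingerprints) >= REVIEW_THRESHOLD
```

The point the sketch tries to capture: a single match does nothing on its own. Only an account that crosses the 30-image threshold is ever surfaced to Apple’s human reviewers, and only then are the matching images examined.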

Three important details:

  1. Apple has not yet detailed the communication procedures for this practice. Will users be notified if their accounts are flagged? Will they be locked out of their accounts temporarily while a human team reviews the photos in question? We don’t know and Apple should make that clear to the public ASAP.

  2. Only users who sign up for or currently use iCloud Photo Library will undergo these new scanning and - if necessary - human review features. Users who do not use iCloud Photos will not be scanned, reviewed, or affected.

  3. Apple now joins other large tech companies that have been scanning for child porn and reporting users: Facebook’s been at it for years, Google’s been doing it for over a decade in both search AND in Gmail, and Microsoft’s been doing it since at least 2015.

Feature #3: Changes to Siri

Apple’s final announcement - and the least contentious - concerns changes that the company made to Siri, its voice assistant.

In simple terms, if you ask Siri for help in reporting child porn, it will direct you to the proper authorities (image at bottom left). But if you ask Siri for help in searching for child porn, Siri will warn you that this is illegal (image at bottom right) and offer to get you help. These changes will, according to Apple, roll out across iOS 15, iPadOS 15, watchOS 8, and macOS Monterey.
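In code terms, it’s little more than a branch on the user’s intent. This toy Python sketch is mine, not Apple’s - the wording and names are invented - but it captures the two behaviors described above.

```python
def siri_csam_response(user_intent: str) -> str:
    """Toy illustration of the two new Siri behaviors (invented wording, not Apple's)."""
    if user_intent == "report_child_exploitation":
        # Asking for help REPORTING abuse: point the user toward the proper authorities.
        return "Here's how to file a report with the appropriate authorities..."
    if user_intent == "search_for_csam":
        # Asking to FIND such material: warn that it's illegal and offer resources for help.
        return "This material is harmful and illegal. Here are resources for getting help..."
    return "Normal Siri behavior for everything else."
```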


So What’s The Fuss All About?!?

Let’s start with comments from The Electronic Frontier Foundation, or “EFF”, a non-profit beloved for its stances on online safety, freedom, and privacy. Despite a long history of supporting Apple, the EFF recently posted a scathing response criticizing how Apple plans to combat child porn. They make three claims which other privacy & security advocates are also making:

  1. That Apple is using artificial intelligence or “AI” to determine what is and is not a sexually-explicit image in the Messages app - a poor choice, they argue, since AI is prone to malfunctioning and difficult to audit.

  2. That oppressive or authoritarian governments might force Apple to change or expand its classifiers to restrict LGBTQ+ content or any other political or social content that their leadership decides is unsavory.

  3. That “people have the right to communicate privately without backdoors or censorship, including when those people are minors.”

My Take on These Claims

On Apple’s use of AI to scan users’ photos: I disagree with the EFF.

Like others, I’m concerned about Apple’s use of AI to determine what is and is not “sexually explicit” in the Messages app. My concern is understandable when you account for the large number of embarrassing and very racist AI failures at other notable companies like Facebook, Microsoft, Flickr, Google Photos, Google Maps, & Google Ads; and, sadly, the AI tool used by US courts to assess criminal risk, which falsely flagged Black defendants as likely to re-offend at nearly twice the rate of white defendants.

Frankly, none of this bodes well for Apple. But… if there’s one thing we know about Apple it’s that they never enter a market first. They watch, wait, study, learn, plan, and then - finally - share with the world a better way. They’ve done this with personal computers, buying music and apps, and making pocket computers and wristwatches that many of us now use to make phone calls.

I believe Apple can make - and has made - something far more elegant than what’s come before it. Will it be perfect? Uh, no. Apple has wisely acknowledged that its on-device system isn’t perfect but claims that its software is difficult to fool. Time will tell.

But being imperfect isn’t the same as being ineffective.

Traffic lights aren’t perfect, but they’re very effective at managing traffic safely.

That’s why, on this matter, I disagree with the EFF. It seems premature to judge a company simply because they’ve decided to use AI and because some AI is imperfect.


On Apple’s being coerced in the future by those who wish to target sensitive communities such as LGBTQ+ people, political dissidents, religious minorities, journalists, etc.: I partly agree with the EFF.

The fear of authoritarian governments pushing Apple to make changes to this kind of system is understandable. Apple will, most certainly, be asked by unsavory political movements to target anyone who might challenge authoritarian power. Apple has responded to this concern with the following statement:

“We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future.”

While that’s reassuring to hear, sadly, it’s also a blatant lie as the New York Times revealed in a rather shocking exposé from May of 2021.

Apple’s deal with China has clearly served the needs of that country’s authoritarian leaders to oppress, suppress, restrict, undermine, and target its citizens. Apple is doing this, presumably, because 20% of its profits - and, I assume, a huge part of its component manufacturing and assembly - come from China.

Therefore, if you’re Apple and you fuck with China’s political wishes then you also most certainly fuck with your corporate bottom line. And while Apple is far from the only technology company that’s bowed to China like this, it still disgusts me as an Apple shareholder and as a human being. Which makes me wonder:

Is China the exception to Apple’s stated ethics, morals, and policies… or is it the rule? What I mean is this: other than China, are there other oppressive nations who use the lure of a giant customer base and profits to get Apple to cave to their political demands?

Well… we might have our answer in just a few days, actually. As of two days ago - September 2nd, 2021 - Russia demanded that Apple and Google remove an app created by political opposition leader Alexei Navalny to help Russians vote out corrupt politicians, including Vladimir Putin.

So far, Apple’s allowed the app to stay in the App Store and receive updates. Will they continue to do so? Russia has announced that they’ll start to levy fines of $55,000 on each company.

For some of you, the fact that Apple’s already bowed to the political will of China will be enough for you to not trust them. I understand, respect, and appreciate that. This is why I agree, partly, with the EFF’s fears on this matter.

But for me personally, I’ll need to see that Apple demonstrates that same behavior with at least one other nation for me to truly know if a pattern has emerged. If Apple demonstrates a pattern of making human dignity and basic freedoms less important than profits, then I’ll need to find another platform.

Time will tell.


On people having “the right to communicate privately without backdoors or censorship, including when those people are minors”: I partly agree with the EFF.

Adults, regardless of where they live, should have the right to communicate freely and privately, including without censorship. I couldn’t agree more. People should always be allowed to freely share what they think, believe, know, and feel. None of that should be disallowed. But…

Societies around the world have always had a vested interest in protecting the youngest among us. In most countries, you must be a certain age before you can drive, vote, drink, have sex, serve in the military, ride certain rides at amusement parks, see certain films, or be hired to perform certain kinds of jobs. I don’t consider that to be oppression or censorship: I consider it to be common sense.

For that reason, I disagree with the EFF’s take on minors deserving the same freedoms as adults.

Parents can and should take an interest in ensuring that their children are safe from both strangers and known associates. Does that mean overseeing EVERYTHING our children do? No. Does it mean overseeing NOTHING our children do? Also, no. The answer isn’t black or white: it’s grey. Much depends on the children, the adults who protect them, and the activities in question.

It, therefore, strikes me as positive that Apple’s provided an extra tool for parents to use - if they decide to use it - to help guard their children against sexual predators or others who might seek to groom or abuse them.

Some privacy advocates are also saying that this makes being an LGBTQ+ minor even more challenging as it allows parents and guardians to digitally snoop on their children. Maybe, but let’s be honest: they can already do that using other tech that’s readily available. Apple’s newest features don’t differ much from other, currently available services:

  • OpenDNS, a free internet filtering service that keeps a network safe by blocking entire categories of websites (think porn, gambling, or social media). The service can also be used to log which devices are TRYING to access forbidden websites.

  • Apple’s Screen Time feature, which offers controls similar to OpenDNS, including managing children’s devices, restricting which websites they can visit, and setting up reports to show which sites they have visited.

  • Commercial software titles like CyberSitter and NetNanny, which allow for filtering, blocking, and reporting of suspect web activity.

  • Network routers like the Firewalla and Gryphon Guardian (affiliate link), which come equipped to block parts of the Internet for different people and to alert administrators (or parents) about attempted violations of those restrictions. I covered this in-depth in Episode #40.

Kids are smart and love is love is love. If a child knows that her phone is being watched and has a romantic partner, then how pictures are shared will shift. Instead of sharing over a monitored network or app, pics can be shared in person using AirDrop, or over Signal, Telegram, or Kik.

You get the idea. And so do the kids who, by the way, are way smarter than you when it comes to tech.


Final Verdict: Cautious Optimism

In my opinion, Apple is not trying to be malicious. They’ve gone to great lengths to ensure that their features and systems are well-documented, auditable, created in partnership with other individuals and organizations, and designed around the increased safety of kids.

They’ve ensured that the new tools and features aren’t mandatory and, to help clear up some of the confusion, Apple’s SVP Craig Federighi sat down with the Wall Street Journal for a very rare and transparent interview. You can (and should) watch that interview in its 12-minute entirety here.

John Gruber, the noted blogger I mentioned earlier, perfectly captures my opinions here when he says that:

“…if Apple’s new features work as described and only as described, there’s almost no cause for concern. In an interview with The New York Times for its aforelinked report on this initiative, Erik Neuenschwander, Apple’s chief privacy engineer, said, ‘If you’re storing a collection of CSAM material, yes, this is bad for you. But for the rest of you, this is no different.’ By all accounts, that is fair and true.

But the ‘if’ in ‘if these features work as described and only as described’ is the rub. That ‘if’ is the whole ballgame. If you discard alarmism from critics of this initiative who clearly do not understand how the features work, you’re still left with completely legitimate concerns from trustworthy experts about how the features could be abused or misused in the future.”

But What About All of Those Alarmed Voices…

There are those who - on this matter and others - loudly oppose Apple. Some of that is 100% warranted: technology is an extremely powerful tool in our lives and the companies who make our tech should hear our concerns.

But being concerned and being alarmed aren’t the same thing and shouldn’t be confused by consumers.

To illustrate what I mean, there’s usually a worthy discussion among security professionals about how any new technology might be used in the future. I like to call this the “If they can do that now, then they’ll certainly do other, much worse things later” argument. Here’s a notable example:

While I’m usually a fan of Snowden’s, his tweet here is not only alarmist but also factually incorrect, based on what Apple has detailed in its technical documentation. Therefore, alarm isn’t what’s called for here: keep a level head, gather evidence, and critically examine it.

When I do that, I see that Apple doesn’t have a compelling or long-standing track record of malicious intent or creating and using software to track, target or oppress people. While I’m disappointed in Apple’s caving to China on certain issues, that appears to be a one-off situation. So far.

People aren’t perfect and, therefore, neither are corporations. Like all companies, Apple’s made notable blunders in the past - like this one - where they’ve used confusing or no messaging around how their technology works. They are then forced to address a concerned and sometimes frightened public who might not understand or see the full picture.

I just don’t see a future where a tool that Apple has created to help protect children from porn gets used by others to target or oppress citizens. Therefore, on this matter, I am not worried about what might happen and choose, instead, to watch vigilantly what does happen.

So… let’s just say that I “think different” than Apple’s detractors at this time.

But We Do Have… One More Thing

It turns out that Apple’s scanning for child porn is nothing new.

Since 2019, Apple’s been scanning all users’ iCloud Mail attachments to check if any are known images from the CSAM database. That fascinating detail would have gone unknown were it not for the great reporting by Ben Lovejoy over at 9to5Mac.


And that’s a wrap for today’s episode, everyone. Thanks again to my subscribers for subscribing and supporting independent technology journalism. As a reminder, please use the link below to share Tech Talk with friends, family, and colleagues. It’s a quick way you can help me spread the word about this newsletter.

Share Tech Talk - The Technology Newsletter for Everyone

Thank you and, as always… Surf safe.




Transparency Statement

Some products or services that I recommend in my articles pay me a commission if you decide to purchase them. This makes me something called an “affiliate”. Making affiliate purchases doesn’t cost you a penny extra and sometimes they actually SAVE you money! It’s an easy way that you can help me earn additional money from my writing, so thank you for supporting my work, research, and expertise by considering those products and services that I recommend.

To keep things transparent, when I share a link with you that points to an affiliate sales page, it will now parenthetically say (affiliate link) for you to clearly see.

Please know: I’ve personally purchased, tested, researched, or used everything I recommend. Additionally, I am never paid to sell software or hardware to you and I retain 100% editorial control over everything I write. The companies, products, or services I recommend don’t know in advance that I’ll be mentioning them.