Voluntary restraint by big tech companies isn't a sustainable solution.
There is a clear and simple distinction between the use cases you are worried about — Clearview-style "who is this person?" — and the use cases you want to preserve for the general public — "a face recognition feature that only works within a single user’s photo library." It's the difference between identifying someone you don't know from a photo and identifying a photo of someone you already know.
This distinction leads to a technical basis for legislation. Facial recognition is safer when *the user brings the dataset* of faces to match against, but very dangerous when *the app brings the dataset*. So a face-recognition law could prohibit recognition against an app-supplied database except by law enforcement with a warrant, but allow apps to perform recognition against a user-supplied dataset (perhaps limited to some reasonable number of images, or number of distinct people the app can identify). This would allow companies to create sophisticated models that are good at face recognition in general, but which do not have specific identified faces embedded in them. Instead, the models could be fine-tuned on an individual user's photos. Apple, for example, does face recognition on-device. (https://machinelearning.apple.com/research/recognizing-people-photos)
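The "user brings the dataset" rule above could be sketched in code. The following is a minimal illustration, not any vendor's actual implementation: the app ships a generic embedding model (stood in for here by toy vectors), and matching runs only against a user-enrolled gallery with a hypothetical cap on distinct identities. All names, the threshold, and the `MAX_IDENTITIES` limit are illustrative assumptions.

```python
import numpy as np

MAX_IDENTITIES = 20  # hypothetical legal cap on distinct people an app may label

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Match a query embedding ONLY against a user-supplied gallery.

    The app provides no reference faces of its own; every identity in
    `gallery` was enrolled by the user from their own photo library.
    Returns the best-matching name, or None if nothing clears the threshold.
    """
    if len(gallery) > MAX_IDENTITIES:
        raise ValueError("gallery exceeds the permitted number of identities")
    best_name, best_score = None, threshold
    for name, ref in gallery.items():
        score = cosine_similarity(query, ref)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Toy 4-dimensional "embeddings" stand in for a real model's output.
gallery = {"Alice": np.array([1.0, 0.0, 0.0, 0.0]),
           "Bob":   np.array([0.0, 1.0, 0.0, 0.0])}
print(identify(np.array([0.9, 0.1, 0.0, 0.0]), gallery))  # → Alice
```

The point of the structure is that the dangerous capability (a large app-supplied database of identified strangers) is absent by construction; the model only ever answers "is this someone you already know?"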
Just on your early point re captchas, the writing has been on the wall for some time (also existing algorithms can already do quite a good job on captchas, not to mention services that just farm out the task to human workers).
Google has released v3 of its widely used reCAPTCHA service, and notably, it does not actually use captchas. It takes the algorithms they built to estimate whether someone is likely to be a bot from their browser session context (or something like that), which previously were used to decide whether a captcha was necessary, and makes that risk score alone into the entire product. If your score falls on the wrong side of the threshold, it won't serve you a captcha as a fallback; it just won't let you in at all.
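For anyone curious what that looks like server-side: reCAPTCHA v3's verification endpoint returns a JSON verdict including a `score` in [0, 1], where higher means more likely human, and the site decides what to do with it. A minimal sketch of that decision (the 0.5 threshold is an illustrative choice, not Google's recommendation, and the response dicts here are hand-written samples rather than live API output):

```python
def allow_request(siteverify_response: dict, min_score: float = 0.5) -> bool:
    """Decide whether to let a request through, given a reCAPTCHA v3 verdict.

    `siteverify_response` is the parsed JSON a server gets back from
    Google's /recaptcha/api/siteverify endpoint. v3 never shows a puzzle:
    the request is simply allowed or denied based on the risk score.
    """
    if not siteverify_response.get("success", False):
        return False  # token was invalid, expired, or missing
    return siteverify_response.get("score", 0.0) >= min_score

# Sample verdicts (illustrative, not real API responses):
print(allow_request({"success": True, "score": 0.9}))  # → True (likely human)
print(allow_request({"success": True, "score": 0.1}))  # → False (likely bot)
```

Note there is no "serve a captcha" branch at all, which is exactly the design change described above.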
"Lately I’ve been frustrated with the vagueness of the “AI safety” debate. A lot of people want stricter oversight of AI. But we’re far from a consensus about the nature of the threat—to say nothing of what to do about it."
All the talk about "stricter oversight" is mostly delusional. US and EU law has no jurisdiction over 90% of humanity. As your story makes clear, those with few scruples are going to take the AI ball and run with it. Some of the world's largest nations are ruled by psychopaths with very few if any scruples.
Talk of AI governance is mostly a PR stunt designed to present developers as being responsible, and politicians as being in control, neither of which are really very true.
A Face Search Engine Anyone Can Use Is Alarmingly Accurate
PimEyes is a paid service that finds photos of a person from across the internet, including some the person may not want exposed. “We’re just a tool provider,” its owner said.
May 26, 2022
"You can run but you can't hide" in today's world. Nothing is truly private.
How do I say this. Until the bias goes away, which it probably never will, facial recognition software is too dangerous to life and liberty. I am not just speaking about the law enforcement aspect; giving the general public the ability to ID someone can lead to nefarious encounters. I am usually for technological advancement, but dare I say all facial recognition software should be scrapped and outlawed. It is already abused to the point that law enforcement can pick you up on the street based solely on facial recognition, and the various systems are extremely flawed, especially for people who look like me.
Don't trust it one bit, which is weird coming from me.
"For example, think of a young woman who meets a stranger at a bar ... in the future, the man might be able to pull out his phone, snap a photo, and upload it to a facial recognition app. That might enable him to show up uninvited at her home or workplace the next day."
While I think this is a very legitimate concern, I think there is a flip side to it. Imagine the scenario where a guy shows up at a young woman's workplace repeatedly and stares at her, makes her feel uncomfortable, but never reveals who he is. It seems like members of the general public, like her, would appreciate being able to identify people who might pose a threat to them. You could even rig up a system that automatically flags people with certain criminal histories when you encounter them on the street, making it easier for potential victims to escape or prepare to defend themselves.
We need facial recognition. Just look at how much safer China is.
There is no privacy any longer. Get over it. It is too late to cram Pandora back into her box.