Tech
Online age-verification tools spread across U.S. for child safety
New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content, leading to backlash from users and criticism from privacy advocates who say a free and open internet is at stake. Roughly half of U.S. states have enacted or are advancing laws requiring platforms — including adult content sites, online gaming services, and social media apps — to block underage users, forcing companies to screen everyone who approaches these digital gates.
“There’s a big spectrum,” said Joe Kaufman, global head of privacy at Jumio, one of the largest digital identity-verification and authentication platforms. He explained that the patchwork of state laws varies in technical demands and compliance expectations. “The regulations are moving in many different directions at once,” he said.
Social media company Discord announced plans in February to roll out mandatory age verification globally, saying it would rely on verification methods designed so that facial analysis occurs on a user’s device and any submitted data is deleted immediately. The proposal quickly drew backlash from users concerned about having to submit selfies or government IDs to access certain features, leading Discord to delay the launch until the second half of this year.
“Let me be upfront: we knew this rollout was going to be controversial. Any time you introduce something that touches identity and verification, people are going to have strong feelings,” Discord chief technology officer and co-founder Stanislav Vishnevskiy wrote in a Feb. 24 blog post.
Websites offering adult content, gambling, or financial services often rely on full identity verification that requires scanning a government ID and matching it to a live image. But most of the verification systems powering these checkpoints — often run by specialized identity-verification vendors on behalf of websites — rely on artificial intelligence such as facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone is old enough to access content. Social media and lower-risk services may use lighter estimation tools designed to confirm age without permanently storing detailed identity records.
Vendors say a challenge is balancing safety with how much friction users will tolerate. “We’re in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible,” said Rivka Gerwitz Little, chief growth officer at identity-verification platform Socure. Excessive data collection, she added, creates friction that users resist.
Still, many users perceive mandatory identity checks as invasive. “Having another way to be forced to provide that information is intrusive to people,” said Heidi Howard Tandy, a partner at Berger Singerman who specializes in intellectual property and internet law. Some users may attempt workarounds — including prepaid cards or alternative credentials — or turn to unauthorized distribution channels. “It’s going to cause a piracy situation,” she added.
Where adult data goes
In many implementations, verification vendors — not the websites themselves — process and retain the identity information, returning only a pass-fail signal to the platform.
Gerwitz Little said Socure does not sell verification data and that in lightweight age-estimation scenarios, where platforms use quick facial analysis or other signals rather than government documentation, the company may store little or no information. But in fuller identity-verification contexts, such as gaming and fraud prevention that require ID scans, certain adult verification records may be retained to document compliance. She said Socure can keep some adult verification data for up to three years while following applicable privacy and purging rules.
Civil liberties advocates warn that concentrating large volumes of identity data among a small number of verification vendors can create attractive targets for hackers and government demands. Earlier this year, Discord disclosed a data breach that exposed ID images belonging to approximately 70,000 users through a compromised third-party service, highlighting the security risks associated with storing sensitive identity information.
In addition, they warn that expanding age-verification systems represent not only a usability challenge but a structural shift in how identity becomes tied to online behavior. Age verification risks tying users’ “most sensitive and immutable data” — names, faces, birthdays, home addresses — to their online activity, according to Molly Buckley, a legislative analyst at the Electronic Frontier Foundation. “Age verification strikes at the foundation of the free and open internet,” she said.
Even when vendors promise to safeguard personal information, users ultimately rely on contractual terms they rarely read or fully understand. “There’s language in their terms-of-use policies that says if the information is requested by law enforcement, they’ll hand it over. They can’t confirm that they will always forever be the only entity who has all of this information. Everyone needs to understand that their baseline information is not something under their control,” Tandy said.
As more platforms route age checks through third-party vendors, that concentration of identity data is also creating new legal exposure for the companies that rely on them. “A company is going to have some of that information passing through their own servers,” Tandy said. “And you can’t offload that kind of liability to a third party.”
Companies can distribute risk through contracts and insurance, she said, but they remain responsible for how identity systems interact with their infrastructure. “What you can do is have really good insurance and require really good insurance from the entities that you’re contracting with,” she said.
Tandy also cautioned that retention promises can be more complex than they appear. “If they say they’re holding it for three years, that’s the minimum amount of time they’re holding it for,” she said. “I wouldn’t feel comfortable trusting a company that says, ‘We delete everything one day after three years.’ That is not going to happen,” she added.
Legal battles are not over
Federal and state regulators argue that age-verification laws are primarily a response to documented harms to minors and insist the rules must operate under strict privacy and security safeguards.
An FTC spokesperson told CNBC that companies must limit how collected information is used. While age-verification technologies can help parents protect children online, the agency said firms are still bound by existing consumer protection rules governing data minimization, retention, and security. The agency pointed to existing rules requiring firms to retain personal information only as long as reasonably necessary and to safeguard its confidentiality and integrity.
CNBC
AI’s got a gender gap: Women are more skeptical
The artificial intelligence craze faces a significant gender gap, with more men showing enthusiasm about the technology, and women expressing greater skepticism. That’s according to CNBC’s 5th annual SurveyMonkey Women at Work survey.
Some 69% of men polled say that AI is a “valuable assistant and collaborator,” while just 61% of women agreed with that statement. Half of women in the survey view AI with suspicion and say that “using AI at work feels like cheating.” Only 43% of men agree.
The survey, conducted from Feb. 10 through Feb. 16, with participation from 6,330 people, landed just over three years after the generative AI boom took off with the launch of OpenAI’s ChatGPT. Since then, chatbots have spread rapidly, followed by AI photo and video generators, coding agents, and all sorts of tools that now make it easy to create apps with just a few text prompts and mouse clicks.
Wall Street is betting that AI will displace much of the enterprise software stack, which explains why software stocks have taken a beating over the past year.
Within the workplace, men use AI more frequently than women. Almost two-thirds (64%) of women say they never use AI at work, compared to 55% of men. And when it comes to AI power users, they’re also more likely to be men, with 14% saying they use AI “multiple times a day,” compared to 9% for women.
It’s a constant topic now for company executives. JPMorgan Chase CEO Jamie Dimon has called AI “critical to our company’s future success,” and he said at the bank’s 2026 investor day that nearly two-thirds of the company’s employees now use an internal large language model. Dimon said AI will eliminate jobs, so companies are better off retraining people.
Notably, while men are more likely to use AI, they still say they need to work more at it. Some 59% of men in the survey say they need more training on how to use AI at work, and 39% express a fear of missing out (FOMO) if they don’t embrace it, compared to 35% of women. And 42% of women “strongly disagree” with the idea that failing to embrace AI will result in them missing out at work, with the sentiment at 36% for men.
What happens if women don’t jump into AI training at the same pace as men? LeanIn.Org founder and former Meta operating chief Sheryl Sandberg addressed this question in an interview in December.
“We know that AI is going to be challenging for jobs, and it’s going to be the most challenging for the people that don’t know how to use those tools,” Sandberg said.
If more men than women use AI, especially early in their careers, that could widen gender gaps at a time when women already miss out on the first promotion to a manager-level position. That has ripple effects for the rest of their careers.
“We are going to see disproportionate impacts,” Sandberg said, “and that would be a real shame for our companies [and] bad for our economy.”
CNBC
Annoyances cost Americans $165 billion every year
Sorting through scam messages. Waiting on hold with your insurance provider. Annoyances like these drain our time and even our bank accounts.
In a new report published by Groundwork Collaborative, economists took a stab at calculating just how much consumers pay in time, fees, and irritation to navigate the economy.
“So I think it’s just the tip of the iceberg,” said Neal Mahoney, a professor of economics at Stanford University and co-author of the report. “But what we tried to do in the piece is tote up how much time and money we are spending on health insurance paperwork, dealing with spam calls and text messages, waiting on hold for customer service … and what we got to was $165 billion.”
Mahoney spoke with “Marketplace” host Kai Ryssdal about this report.
Social Media Age Checks Raise Fresh Privacy Concerns
As governments push stricter online child safety rules, digital rights advocates warn about the risks of collecting IDs and facial data.
A landmark trial against Meta and YouTube is underway, as the companies face claims that their platforms harm children’s mental health.
This comes as lawmakers around the world are advancing new child safety laws — including age-verification requirements that could require users to upload a government ID or submit facial scans to confirm their age. But some digital rights advocates warn that efforts to make the internet safer for children could introduce new privacy risks, especially if sensitive personal data is collected or stored by third-party vendors.
Marketplace’s David Brancaccio spoke with Kian Vesteinsson, senior researcher at Freedom House — a nonprofit focused on democracy and human rights — for more on the tension between child safety legislation and online privacy. The following is an edited transcript of their conversation.
David Brancaccio: Age verification for what we get access to online — I mean, to keep younger people away from harmful or age-inappropriate content — you’re not against that in itself?
Kian Vesteinsson: That’s right. Protecting children from the worst of the internet is a pressing policy aim. There’s plenty of evidence that children using social media platforms can face real harms. But the important thing here is that online anonymity has long been a key enabler for free expression, free speech, and access to online information, and we need to make sure that we protect it.
Brancaccio: And you have specific concerns about what happens when we’re asked to verify our age before getting access to certain content — what are people doing with the ID that we present?
Vesteinsson: So it might be helpful to take a step back, because there are a couple of different ways that companies go about doing this. When a platform has a lot of data about a user, it is possible to forecast their age based on their online activities. This is usually called “age inference,” and it tends to require really sophisticated machine learning tools.
For example, you know, my YouTube history has been live videos of Prince guitar solos and instructions on how to make the best chicken stock. That’s a pretty good signal that I’m an adult. My account has been active on YouTube for around 20 years; that’s another great signal that I’m an adult. But this sort of inference isn’t always possible, so in those circumstances, companies need to check someone’s age by estimating it from an analysis of facial features — like facial hair, for example, or wrinkles — or by scanning a government-issued identification card. And it’s at this stage that we see really sensitive personal information introduced into the picture. That’s where the privacy and security concerns come in.
Brancaccio: It’s happened to me before. There was somebody tampering with one of my online accounts, and I think it was Meta’s Facebook that asked me to take a picture of myself holding up my driver’s license. That should have made me more nervous at the time?
Vesteinsson: Well, that’s a really good example where you are opting into this face comparison to get something that’s yours. But age verification measures introduced at scale pull an incredible amount of personal data into the online ecosystem. Last fall, Discord disclosed that hackers had breached a vendor providing age-verification services. Discord estimates that in this one single breach, around 70,000 people had their government ID cards exposed, and those documents are now presumably being traded by cybercriminals on the internet. We should also anticipate that these companies will be a target for state-backed hackers.
Brancaccio: Because there are good ways and bad ways to do this. There are ways that are more vulnerable, but there are ways — you’re persuaded in this world of hackers, where there’s a decent chance that your data will be safeguarded?
Vesteinsson: There are promising efforts being developed right now to do age verification in a way that’s privacy-preserving, but they’re not ready to go to market. One model that’s gaining steam involves creating third-party digital infrastructure that would check a government-issued identification card and then immediately delete any associated sensitive data. This would be [a] nonprofit third-party tool. That service could then supply a token confirming someone’s age when they request it in order to access a social media platform. But it’s going to take time and money to figure out how to do this in a privacy-preserving way, and as we invest in developing these tools, policymakers should look towards other mechanisms, rather than these sort of blunt-hammer age-verification approaches.
Brancaccio: I’ve been focused on hackers, however we define those. Do you have an additional worry that, depending on which government you’re talking about in some part of the world, that, in fact, governments could get a hold of this private data and misuse it?
Vesteinsson: Yes, age verification laws are ripe for abuse in countries with weak rule of law and widespread government surveillance. Freedom House puts out a report each year that assesses conditions for free expression and privacy online in 72 countries around the world. Our research has found that authorities in many countries deploy censorship and surveillance to target online expression of dissent. In fact, we estimate that 81% of the world’s internet users live in countries where people have been arrested or imprisoned for posting content about political or social issues as of mid-2025.
In environments like these, there is considerable risk in connecting a person’s online activities to a photo of their face or their identification card. Now, most countries have legal procedures in place that empower law enforcement to request user data from private companies in order to investigate crimes. This is standard practice. It’s normal, and it’s necessary, but our research has found that repressive governments routinely abuse standard legal process for data requests in order to target activists or people criticizing government conduct on the internet. And age verification poses an enormous risk to empower authorities to abuse those laws even further.
Marketplace