
Tech

TikTok’s U.S. user base stabilizes after rocky start

TikTok’s U.S. joint venture seems to have survived a turbulent rollout with minimal change in usership, as early narratives of a mass user exodus prompted by service outages and censorship concerns now appear overstated, according to new figures.

Data from market intelligence firm Sensor Tower show that, despite a surge in deletions following the Jan. 23 announcement of TikTok’s U.S. joint venture, the average number of TikTok’s daily active users in the U.S. remains at about 95% of its level during the week of Jan. 19-25.

The joint venture — officially the TikTok USDS Joint Venture — was established in compliance with U.S. President Donald Trump’s executive order mandating the divestiture of TikTok in the U.S. from its Chinese parent company ByteDance.

While ByteDance retains a 19.9% stake in TikTok’s U.S. operations after the agreement, Oracle, Silver Lake, and Abu Dhabi-based investment firm MGX each own a 15% share, with the remaining shares divided among several other firms.

Following the announcement, users were quick to express discontent over TikTok’s new ownership.

The deal drew scrutiny, with prominent figures like Sen. Bernie Sanders (I-Vt.) raising concerns about cronyism over the involvement of Oracle co-founder and Chief Technology Officer Larry Ellison.

Following the joint venture’s announcement that Ellison’s Oracle would “retrain, test, and update the content recommendation algorithm on U.S. user data,” online speculation mounted that TikTok would begin mining user data or promoting content supportive of Trump’s policy positions.

Such concerns spiked on Jan. 25, with users claiming that TikTok was suppressing content critical of controversial Immigration and Customs Enforcement operations and censoring buzzwords like “Epstein” on the platform.

Last month, CNBC confirmed that messages containing the word “Epstein” triggered an error message, but was unable to independently verify broader claims of political censorship.

Asked about the issues, a spokesperson for the TikTok joint venture told CNBC in January that the platform does not prohibit sharing the name “Epstein” in messages and that it was investigating why some users were experiencing the problem.

CNBC reached out to the White House and TikTok for comment but did not receive a response by publication.

Engagement metrics unchanged

Although TikTok attributed last month’s disruptions to power outages, the glitches “no doubt impact[ed] how and what content was being served, even without any intent or motive,” according to Jim Johnston, partner at law firm Davis+Gilbert LLP.

Yet despite various user pledges to boycott the platform over apparent political suppression, engagement metrics among U.S. users suggest there has been little sign of a mass exodus.

The average daily time spent by American users on the platform has since returned to around 80 minutes, after dipping to an average of 77 minutes during the week of the reported disruptions, according to Sensor Tower data.

Additionally, while deletions spiked after the reported disruptions, they tapered off the following week, suggesting a temporary surge rather than a sustained boycott of the app.

“It is plausible that the short-lived rise in observed uninstalls was due to an attempt to troubleshoot the app,” Abraham Yousef, senior insights analyst at Sensor Tower, told CNBC, as the number of uninstalls followed by re-installations on the same day surged more than 70% on Jan. 25 from the day before.

While Yousef grants that the data suggest a “slight impact to overall usage” in the weeks after the joint venture was announced, there is no clear indication of a structural shift in user trends, as many sites touted as alternatives to TikTok have also struggled to sustain interest.

According to Sensor Tower, new installs of UpScrolled – a social media platform whose algorithm forgoes shadow banning, the automated filtering that suppresses content from certain users – surged by about 770% from the previous week, with more than 955,000 new U.S. downloads over the week of Jan. 26 to Feb. 1.

New UpScrolled downloads, however, fell sharply by about 80% the following week, bringing in only around 191,000 new users. In comparison, TikTok registered 870,000 downloads over the week of Jan. 26 to Feb. 1, and around 800,000 the following week.

Similarly, new downloads of other alternative platforms such as Skylight Social and Red Note declined by 96% and 33%, respectively, week-on-week from the week of Jan. 26.

Tenuous evidence of mass exodus

Sensor Tower’s user data more fundamentally seems to suggest that beyond anecdotal claims, users have largely been unable to identify tangible changes in TikTok’s American operations, or at least, not enough to meaningfully shift user sentiment.

“The idea of a mass exodus from TikTok now looks overblown,” Kelsey Chickering, principal analyst at Forrester, told CNBC. “Anecdotally, most users say the app feels largely the same – the algorithm hasn’t meaningfully changed, and the experience is still strong.”

While some American users may have perceived changes in the operation of their TikTok algorithms, “some changes to content suggestions are bound to occur simply due to the changed data set,” according to Johnston, referring to the joint venture’s announced plan to retrain the algorithm on U.S. data.

But while analysts have been unable to find evidence that TikTok’s new American owners have engineered the platform in their favor, this is not a foregone conclusion.

According to Johnston, there are at least three notable changes to TikTok’s new terms of use: the platform’s ability to collect precise location data from enabled devices, its collection of data on interactions with artificial intelligence tools on the app, and its explicit integration with ad networks.

Although there has been no hard evidence of its occurrence, it remains technically possible to adjust TikTok’s algorithm to enhance or diminish the impact of certain types of content on recommendations, Johnston said.

Chickering adds that under its new owners, TikTok has more control over what shows up on American feeds – and that control, she says, is where TikTok’s opportunity and risk lie.

“If moderation starts to feel politically slanted or misinformation isn’t adequately addressed, the platform could face backlash from users and advertisers alike,” Chickering said. “We’ve seen this play out before: Twitter’s shift to X is a recent reminder of how quickly trust can erode.”

For now, however, the discontent from TikTok’s American users that marred its first few weeks under new ownership seems to have largely subsided.

As Chickering notes, “we’ve seen time and time again, if the product works, users tend to stick around regardless of who owns it.”

— CNBC


Tech

Online age-verification tools spread across U.S. for child safety

New U.S. laws designed to protect minors are pulling millions of adult Americans into mandatory age-verification gates to access online content, leading to backlash from users and criticism from privacy advocates that a free and open internet is at stake. Roughly half of U.S. states have enacted or are advancing laws requiring platforms — including adult content sites, online gaming services, and social media apps — to block underage users, forcing companies to screen everyone who approaches these digital gates.

“There’s a big spectrum,” said Joe Kaufman, global head of privacy at Jumio, one of the largest digital identity-verification and authentication platforms. He explained that the patchwork of state laws varies in technical demands and compliance expectations. “The regulations are moving in many different directions at once,” he said.

Social media company Discord announced plans in February to roll out mandatory age verification globally, which the company said would rely on verification methods designed so facial analysis occurs on a user’s device and submitted data would be deleted immediately. The proposal quickly drew backlash from users concerned about having to submit selfies or government IDs to access certain features, which led Discord to delay the launch until the second half of this year.

“Let me be upfront: we knew this rollout was going to be controversial. Any time you introduce something that touches identity and verification, people are going to have strong feelings,” Discord chief technology officer and co-founder Stanislav Vishnevskiy wrote in a Feb. 24 blog post.

Websites offering adult content, gambling, or financial services often rely on full identity verification that requires scanning a government ID and matching it to a live image. But most of the verification systems powering these checkpoints — often run by specialized identity-verification vendors on behalf of websites — rely on artificial intelligence such as facial recognition and age-estimation models that analyze selfies or video to determine in seconds whether someone is old enough to access content. Social media and lower-risk services may use lighter estimation tools designed to confirm age without permanently storing detailed identity records.  

Vendors say a challenge is balancing safety with how much friction users will tolerate. “We’re in the business of ensuring that you are absolutely keeping minors safe and out and able to let adults in with as little friction as possible,” said Rivka Gerwitz Little, chief growth officer at identity-verification platform Socure. Excessive data collection, she added, creates friction that users resist. 
 
Still, many users perceive mandatory identity checks as invasive. “Having another way to be forced to provide that information is intrusive to people,” said Heidi Howard Tandy, a partner at Berger Singerman who specializes in intellectual property and internet law. Some users may attempt workarounds — including prepaid cards or alternative credentials — or turn to unauthorized distribution channels. “It’s going to cause a piracy situation,” she added. 

Where adult data goes 

In many implementations, verification vendors — not the websites themselves — process and retain the identity information, returning only a pass-fail signal to the platform. 

Gerwitz Little said Socure does not sell verification data and that in lightweight age-estimation scenarios, where platforms use quick facial analysis or other signals rather than government documentation, the company may store little or no information. But in fuller identity-verification contexts, such as gaming and fraud prevention that require ID scans, certain adult verification records may be retained to document compliance. She said Socure can keep some adult verification data for up to three years while following applicable privacy and purging rules.  

Civil liberties advocates warn that concentrating large volumes of identity data among a small number of verification vendors can create attractive targets for hackers and government demands. Earlier this year, Discord disclosed a data breach that exposed ID images belonging to approximately 70,000 users through a compromised third-party service, highlighting the security risks associated with storing sensitive identity information.

In addition, they warn that expanding age-verification systems represent not only a usability challenge but a structural shift in how identity becomes tied to online behavior. Age verification risks tying users’ “most sensitive and immutable data” — names, faces, birthdays, home addresses — to their online activity, according to Molly Buckley, a legislative analyst at the Electronic Frontier Foundation.  “Age verification strikes at the foundation of the free and open internet,” she said.

Even when vendors promise to safeguard personal information, users ultimately rely on contractual terms they rarely read or fully understand. “There’s language in their terms-of-use policies that says if the information is requested by law enforcement, they’ll hand it over. They can’t confirm that they will always forever be the only entity who has all of this information. Everyone needs to understand that their baseline information is not something under their control,” Tandy said. 

As more platforms route age checks through third-party vendors, that concentration of identity data is also creating new legal exposure for the companies that rely on them. “A company is going to have some of that information passing through their own servers,” Tandy said. “And you can’t offload that kind of liability to a third party.” 

Companies can distribute risk through contracts and insurance, she said, but they remain responsible for how identity systems interact with their infrastructure. “What you can do is have really good insurance and require really good insurance from the entities that you’re contracting with,” she said. 

Tandy also cautioned that retention promises can be more complex than they appear. “If they say they’re holding it for three years, that’s the minimum amount of time they’re holding it for,” she said. “I wouldn’t feel comfortable trusting a company that says, ‘We delete everything one day after three years.’ That is not going to happen,” she added. 

Legal battles are not over

Federal and state regulators argue that age-verification laws are primarily a response to documented harms to minors and insist the rules must operate under strict privacy and security safeguards. 

An FTC spokesperson told CNBC that companies must limit how collected information is used. While age-verification technologies can help parents protect children online, the agency said firms are still bound by existing consumer protection rules governing data minimization, retention, and security. The agency pointed to existing rules requiring firms to retain personal information only as long as reasonably necessary and to safeguard its confidentiality and integrity. 

— CNBC


Tech

AI’s got a gender gap: Women are more skeptical

The artificial intelligence craze faces a significant gender gap, with more men showing enthusiasm about the technology, and women expressing greater skepticism. That’s according to CNBC’s 5th annual SurveyMonkey Women at Work survey.

Some 69% of men polled say that AI is a “valuable assistant and collaborator,” while just 61% of women agreed with that statement. Half of women in the survey view AI with suspicion and say that “using AI at work feels like cheating.” Only 43% of men agree.

The survey, conducted from Feb. 10 through Feb. 16, with participation from 6,330 people, landed just over three years after the generative AI boom took off with the launch of OpenAI’s ChatGPT. Since then, chatbots have spread rapidly, followed by AI photo and video generators, coding agents and all sorts of tools that now make it easy to create apps with just a few text prompts and mouse clicks.

Wall Street is betting that AI will displace much of the enterprise software stack, which explains why software stocks have taken a beating over the past year. 

Within the workplace, men use AI more frequently than women. Almost two-thirds (64%) of women say they never use AI at work, compared to 55% of men. And when it comes to AI power users, they’re also more likely to be men, with 14% saying they use AI “multiple times a day,” compared to 9% for women.

It’s a constant topic now for company executives. JPMorgan Chase CEO Jamie Dimon has called AI “critical to our company’s future success,” and he said at the bank’s 2026 investor day that nearly two-thirds of the company now uses an internal large language model. Dimon said AI will eliminate jobs, so companies are better off retraining people.

Notably, while men are more likely to use AI, they still say they need to work more at it. Some 59% of men in the survey say they need more training on how to use AI at work, and 39% express a fear of missing out (FOMO) if they don’t embrace it, compared to 35% of women. And 42% of women “strongly disagree” with the idea that failing to embrace AI will result in them missing out at work, with the sentiment at 36% for men.

What happens if women don’t jump into AI training at the same pace as men? LeanIn.Org founder and former Meta operating chief Sheryl Sandberg addressed this question in an interview in December.

“We know that AI is going to be challenging for jobs, and it’s going to be the most challenging for the people that don’t know how to use those tools,” Sandberg said.

If more men than women use AI, especially early in their careers, that could broaden gender gaps at a time when women already miss out on first promotions to manager-level positions. That has ripple effects for the rest of their careers.

“We are going to see disproportionate impacts,” Sandberg said, “and that would be a real shame for our companies [and] bad for our economy.”

— CNBC


Tech

Annoyances cost Americans $165 billion every year

Sorting through scam messages. Waiting on hold with your insurance provider. Annoyances like these drain our time and even our bank accounts.

In a new report published by Groundwork Collaborative, economists took a stab at calculating just how much consumers pay in time, fees, and irritation to navigate the economy.

“So I think it’s just the tip of the iceberg,” said Neal Mahoney, a professor of economics at Stanford University and co-author of the report. “But what we tried to do in the piece is tot up how much time and money we are spending on health insurance paperwork, dealing with spam calls and text messages, waiting on hold for customer service … and what we got to was $165 billion.”

Mahoney spoke with “Marketplace” host Kai Ryssdal about this report.
