Westlake Legal Group > Posts tagged "Social Media"

Peloton’s Cringe-y Ad Got Everyone Talking. Its C.E.O. Is Silent.


During a talk in New York on Monday, John Foley, the chief executive of Peloton, did not laugh off the negative reaction to the fitness company’s holiday commercial, a 30-second spot that drew intense criticism and caused Peloton stock to drop 9 percent in one day. Neither did he apologize or defend it. In fact, Mr. Foley did not mention it at all.

During his 40-minute appearance at a Midtown Manhattan conference hosted by the financial firm UBS, Mr. Foley discussed profitability and international expansion. The closest he got to discussing the commercial — which has been derided as sexist, classist, dystopian and tone deaf — was talking about the high prices of the company’s equipment.

“We have a fun challenge, and we’re going to solve it as marketers, because the reality is that it is an incredible value, and we’re changing lives, and we’re allowing people to get more fit and get more healthy and get those endorphins and be better versions of themselves and all this existential stuff that we’re excited about at the top of Maslow’s hierarchy of needs,” he said, referring to the 1943 theory by the psychologist Abraham Maslow.

“But we need to communicate that better,” Mr. Foley continued, in what seemed a tacit acknowledgment that the commercial may have hurt the company.

After his session, he refused to answer reporters’ questions about the ad, saying only, “It was in the news last week.”

The broad strokes of the commercial, called “The Gift That Gives Back,” have become cringe canon. A svelte mother, played by the actress Monica Ruiz, receives a Peloton stationary bike from her husband for Christmas. She spends the next year filming herself in her luxurious home as she approaches the contraption or pedals like mad, often appearing anxious, perhaps terrified. She turns the footage into a video for her spouse and declares that she “didn’t realize how much this would change me.”

Viewed more than seven million times on YouTube, the ad drew the wrath of social media and generated a viral parody video by the comedian Eva Victor. On Friday, Aviation American Gin, a brand owned in part by the “Deadpool” star Ryan Reynolds, released a response ad featuring a deadpan Ms. Ruiz as a woman who seeks the consolation of good friends and spirits after having apparently endured a crisis. “Saturday Night Live” made multiple references to the Peloton commercial over the weekend, with the “Weekend Update” co-anchor Colin Jost joking, “At least they decided against using the slogan ‘Peloton: You’d better keep it tighter than the babysitter.’”

The intense reaction took the company by surprise. Peloton said in a statement last week that it was “disappointed in how some have misinterpreted this commercial.” Sean Hunter, the actor who played Ms. Ruiz’s spouse, told Psychology Today that he was worried about potential repercussions to his career. Ms. Ruiz, who declined to comment, put out a statement Saturday, saying she “was shocked and overwhelmed by the attention this week (especially the negative).”

The advertising agency behind “The Gift That Gives Back” is Mekanism, a San Francisco shop that had also created campaigns for Ben & Jerry’s, HBO and Uber. Mekanism did not respond to requests for comment.

The ad was the talk of the Ad Council’s 66th Annual Public Service Award Dinner last week, a black-tie event known in the industry as “ad prom.” Many of the guests said Peloton should have done more to make sure the commercial was interpreted not as a call to lose weight but as an invitation to gain strength.

In an interview on Sunday, Mr. Reynolds said he had heard about the Peloton commercial on Tuesday, when his business partner, George Dewey, sent him a 2:34 p.m. text about the backlash. Mr. Reynolds and Mr. Dewey run Maximum Effort Productions, an entertainment and marketing company that has gained a reputation for advertising stunts, including a mock Twitter war between Mr. Reynolds and his fellow superhero-movie star Hugh Jackman.

Mr. Reynolds and Mr. Dewey decided to respond to the Peloton ad with a promotion for the gin company, knowing they had to work quickly, before the social media furor died down.

“Ads are generally disposable pieces of content,” Mr. Reynolds said. “If you’re going to do something like this, you have to jump on the zeitgeist-y moment as it happens.”

On Wednesday morning, Mr. Reynolds and Mr. Dewey contacted Ms. Ruiz. They shot the gin commercial, called “The Gift That Doesn’t Give Back,” for less than $100,000 in several hours on Friday and released it that night, as Mr. Reynolds was boarding a flight to Brazil. In the ad, Ms. Ruiz sits at a bar between two friends, gazing blankly ahead, as if stunned or traumatized. An awkward silence follows.

“You’re safe here,” one friend says.

“To new beginnings,” Ms. Ruiz responds, before guzzling one gin cocktail and accepting another from her concerned pal.

Mr. Reynolds said he had sympathy for Ms. Ruiz’s plight as collateral damage in the backlash to the Peloton commercial.

“As an actor, I can certainly relate to creating a piece of content or being part of something that’s not well received, and how alienating that can feel,” he said. “We had immense respect for any reservations she might have had. We don’t want to make the situation any worse for her.”

Real Estate, and Personal Injury Lawyers. Contact us at: https://westlakelegal.com 

How Huawei Lost the Heart of the Chinese Public


On the first anniversary of her arrest in Canada, Meng Wanzhou, the chief financial officer of the Chinese telecom giant Huawei, issued an open letter describing how she experienced fear, pain, disappointment, helplessness, torment and acceptance of the unknown.

She wrote at length about the support she received from her colleagues, about friendly people at a courthouse in Vancouver and about “numerous” Chinese online users who expressed their trust.

Her letter, posted on Monday, was not well received on the Chinese internet, where Ms. Meng is known — in a term meant to be endearing — as “princess” because she is a daughter of Huawei’s founder, Ren Zhengfei.

On the Twitter-like social media platform Weibo, many users posted the numbers 985, 996, 251 and 404 in the comment section below her letter. They were slyly referring to a former Huawei employee who graduated from one of the country’s 985 top universities, worked from 9 a.m. to 9 p.m. six days a week and was jailed for 251 days after he demanded severance pay when his contract wasn’t renewed.

His story went viral in China, generating angry responses online. That resulted in 404 error messages as articles and comments were deleted, a sign of China’s censors at work.

The former employee, Li Hongyuan, was eventually released from jail with no charges and received $15,000 in government compensation last week. He shared his story online last week, and that’s when the hit to Huawei’s reputation began.

“One enjoyed a sunny Canadian mansion while the other enjoyed the cold and damp detention cell in Shenzhen,” Jiang Feng, a psychologist, commented on the Quora-like question-and-answer site Zhihu. Ms. Meng has been under house arrest in a six-bedroom home, awaiting potential extradition to the United States on charges that she conspired to defraud banks about Huawei’s relationship with an Iranian company.

The anger directed toward Ms. Meng reflected an uneasy moment for both Huawei and China’s middle-class professionals. In the past year, Huawei had been fending off claims by the United States government that it is secretive and unreliable and that it spies for Beijing, an allegation the company has repeatedly denied.

In China, however, Huawei has been considered the crown jewel of the country’s tech industry and has enjoyed tremendous good will. Many Chinese proudly abandoned their iPhones for Huawei phones. But the backlash to the jailing of a longtime employee after a labor dispute has made it clear that people in China are starting to sour on the company.  

The anger on social media was also indicative of new insecurity among members of China’s middle class, who have never experienced an economic downturn and have always thought they had more protections than lower-paid migrant workers. People said they could see themselves in Mr. Li.

“Many middle-class Chinese used to believe that if they went to good schools, worked hard and cared little about the current affairs they would be able to realize their Chinese dreams,” a blogger wrote on Weibo. “Now their dreams are in tatters.”

Huawei declined to comment on the public response.

Mr. Li, a Huawei employee for 12 years, negotiated a $48,000 severance package in March 2018, according to interviews he gave to Chinese media outlets. But he didn’t get an end-of-the-year bonus that he said had been promised to him. He sued Huawei in November last year.

A month later, he was detained in Shenzhen and accused of leaking commercial secrets. He was officially arrested in January on an extortion accusation. But he was released in August with no charges. He did not respond to interview requests.

Huawei insisted in a statement that it had done nothing wrong and challenged Mr. Li to prove that he had been treated unfairly.

“Huawei has the right, and in fact a duty, to report the facts of any suspected illegal conduct to authorities. We respect the decisions made by the authorities,” the statement said. “If Li Hongyuan believes that he has suffered damages or that his rights have been infringed, we support his right to seek satisfaction through legal means, up to and including lawsuit against Huawei.”

Online commentators called the statement “arrogant” and “cold blooded.” “The elephant stepped on you, but you can step back on it,” one popular WeChat article said. “What a response of justice!”

Jiang Jingjing, a blogger, criticized Huawei for trampling on its employees’ rights with its tough performance evaluation system and legal firepower. “Once a company becomes a cold, dehumanized grinding machine, what’s the point for it to exist?” he wrote.

In some ways, new criticism of Huawei harks back to the early days of the company. Huawei cultivated an aggressive “wolf culture” that encouraged its employees to work extremely hard.

New employees would get a mattress when they joined because everyone was expected to work late and often sleep in the office. Over a decade ago, a series of employee deaths drew harsh scrutiny of the company. An investigative report by a news weekly counted six unnatural deaths in two years, including four suicides.

Since then, especially after the United States started a global campaign to try to stop its allies from using Huawei’s next-generation wireless technology, known as 5G, Huawei has become a symbol of China’s technology prowess and American attempts to keep China down.

After Ms. Meng’s arrest, there was an outpouring of support for Huawei. In the most recent quarter, Huawei’s smartphone sales in China grew 66 percent from a year earlier. Sales for Apple and most of Huawei’s domestic competitors declined, according to the research firm Canalys.

Now many people are talking about boycotting Huawei products. Images of a pair of Huawei-branded handcuffs, styled as a new smart-fitness wristband, are circulating online. One of the “bands” is called the “free meal and accommodation version,” a reference to jail life.

Tang Ting, a public relations executive, posted on his WeChat social media timeline that the outrage could cause long-lasting damage for Huawei’s brand. Chinese companies think consumers respond only to freebies and discounts, he wrote, “but a very high percentage of the young generation care about values, too.”

In a sign that many middle-class professionals are worried that what happened to Mr. Li could happen to them, online users circulated articles about jail life, especially in the Longgang detention center in Shenzhen, where Mr. Li spent more than eight months. Huawei is based in Shenzhen’s Longgang district.

Some online users are circulating a three-part blog post by a programmer who spent over a year in the detention center for working on gaming and gambling software. Gambling is illegal in China. The blogger wrote in detail what it was like to live in a 355-square-foot cell with 55 people in tropical weather — what they ate, wore and did every day.

Many Chinese are especially outraged by the degree to which news coverage and online responses have been censored. They say they feel helpless because they can’t criticize the government. Now they feel they are also not able to criticize a giant corporation.

One of the Weibo posts of Ms. Meng’s letter received 1,400 comments. Many simply said 251, the number of days Mr. Li was detained. Fewer than 10 comments, sympathetic ones, are still visible to the public.

“A company that’s too big to criticize is even scarier than a company that’s too big to fail,” Nie Huihua, an economics professor at Renmin University in Beijing, told the news site Jiemian on Tuesday.

Jiemian’s interview with Mr. Li, published on Monday, was deleted.


TikTok Reverses Ban on Teen Who Slammed China’s Muslim Crackdown


SHANGHAI — The video app TikTok on Wednesday reversed its decision to block an American teenager who posted a clip in which she discussed the mass internment of minority Muslims in China, and acknowledged that its moderation system had overreached in shutting her out of her account.

The incident raised fresh concerns about whether TikTok, which is owned by the Chinese tech giant ByteDance, muzzles its users in line with censorship directives from Beijing — an accusation the company has denied.

TikTok said the teenager, Feroza Aziz, 17, had been barred from using her personal device to access the app, but not because of her video this past week about China’s detention camps. Rather, the company said, it was because she had used a previous account earlier this month to post a clip that included a photo of Osama bin Laden.

After TikTok banned that account for terrorist imagery, Ms. Aziz used a different one to post her video about the plight of Muslims in China. As the second video began to go viral, TikTok on Monday blocked more than 2,400 devices associated with accounts that had been banned for terrorist content and other malicious material, in what it called a scheduled enforcement action.

This, TikTok said, resulted in Ms. Aziz being locked out from her new account, even though her videos on that account, including the one on China, were still visible to others.

On Wednesday, Ms. Aziz expressed skepticism about TikTok’s explanation. She was blocked only after she had posted about Muslims in internment camps in China. Did she believe that TikTok had actually shut her out for her earlier video? “No,” she wrote on Twitter.

Earlier in the day, the episode had taken another turn when TikTok took down Ms. Aziz’s video about China for 50 minutes. The company said that this was the result of a human moderation error, and that the video should not have been removed.

In a statement, Eric Han, the head of safety for TikTok in the United States, apologized for the mistake. He also said the platform banned devices associated with a blocked account to prevent the spread of “coordinated malicious behavior.”

“It’s clear that this was not the intent here,” Mr. Han wrote.

Earlier this week, Ms. Aziz had told The Times that her video containing an image of Bin Laden was satirical in nature, an attempt to use humor to defuse the discrimination that she felt as a young Muslim in the United States.

“While we recognize that this video may have been intended as satire, our policies on this front are currently strict,” Mr. Han wrote. But he added that TikTok would consider exempting satirical and educational videos in the future.

Mr. Han also said TikTok would conduct a broader review of its moderation process and publish a “much fuller” version of its guidelines on acceptable content within the next two months.

TikTok has risen quickly over the past year to become a veritable cultural phenomenon. But its Chinese ownership has also aroused the concerns of United States lawmakers, who have voiced worries both about potential censorship on the platform and about how user data is stored and secured.


TikTok Blocks Teen Who Posted About China’s Detention Camps


SHANGHAI — The teenage girl, pink eyelash curler in hand, begins her video innocently: “Hi, guys. I’m going to teach you guys how to get long lashes.”

After a few seconds, she asks viewers to put down their curlers. “Use your phone that you’re using right now to search up what’s happening in China, how they’re getting concentration camps, throwing innocent Muslims in there,” she says.

The sly bait-and-switch puts a serious topic — the mass detentions of minority Muslims in northwest China — in front of an audience that might not have known about it before. The 40-second clip has amassed more than 498,000 likes on TikTok, a social platform where the users skew young and the videos skew silly.

But the video’s creator, Feroza Aziz, said this week that TikTok had suspended her account after she posted the clip. That added to a widespread fear about the platform: that its owner, the Chinese social media giant ByteDance, censors or punishes videos that China’s government might not like.

A ByteDance spokesman, Josh Gartner, said Ms. Aziz had been blocked from her TikTok account because she used a previous account to post a video that contained an image of Osama bin Laden. This violated TikTok’s policies against terrorist content, Mr. Gartner said, which is why the platform banned both her account and the devices from which she was posting.

“If she tries to use the device that she used last time, she will probably have a problem,” Mr. Gartner said.

Ms. Aziz, a 17-year-old Muslim high school student in New Jersey, told BuzzFeed News on Tuesday that this was not the first time TikTok had taken down her account or removed her videos in which she talked about her religion. She did not respond to The New York Times’s requests to comment on the specifics of her situation.

In recent months, United States lawmakers have expressed concerns that TikTok censors video content at Beijing’s behest and shares user data with the Chinese authorities.

The head of TikTok, Alex Zhu, denied those accusations in an interview with The Times this month. Mr. Zhu said that Chinese regulators did not influence TikTok in any way, and that even ByteDance could not control TikTok’s policies for managing video content in the United States.

But episodes such as Ms. Aziz’s show how difficult it might be for TikTok to escape the fog of suspicion that surrounds it and other Chinese tech companies.

China’s government rigidly controls the internet within the nation’s borders. It exerts influence, sometimes subtly, over the activities of private businesses. The concern is that, when companies like ByteDance and the telecom equipment maker Huawei expand overseas, Beijing’s long arm follows them.

China would certainly prefer that the world did not talk about its clampdown on Muslims. Over the past few years, the government has corralled as many as one million ethnic Uighurs, Kazakhs and others into internment camps and prisons.

Chinese leaders have presented their efforts as a mild and benevolent campaign to fight Islamic extremism. But internal Communist Party documents reported by The Times this month provided an inside glimpse at the crackdown and confirmed its coercive nature.

On Tuesday, Secretary of State Mike Pompeo said at a news conference in Washington that the documents showed “brutal detention and systematic repression” of Uighurs and called on China to immediately release those who were detained.

Davey Alba contributed reporting from New York and Edward Wong from Austin, Texas.


When Mom Slams a Brand on Instagram


As a mom with influence, Caitlin Houston felt the need to act.

Ms. Houston, of Wallingford, Conn., read a post on Instagram this summer by Karen Feldman, the founder of the Striped Sheep, a mommy-and-me clothing company, claiming that one of her designs had been ripped off by Tuckernuck, a high-end women’s clothing retailer.

So Ms. Houston did what comes naturally to her: She posted stories about the Striped Sheep’s predicament on Instagram, and encouraged friends who are mom influencers to do the same. Members of her audience — she gets roughly 34,000 views a month through her website and social media — commented on Tuckernuck’s various online platforms, demanding the company not only apologize but explain itself. Soon Tuckernuck removed the disputed items from its website and apologized via a comment on Ms. Feldman’s Instagram page, offering her a share of profits from the products. The company also called Ms. Feldman directly to apologize.

The Striped Sheep gained thousands of new followers and sold out of its inventory. “Now I’m making six new colors in the same style, as well as a new style,” Ms. Feldman said. “It was a huge boost for me, the best thing that could have happened. It was tons of free P.R.”

Tuckernuck, started by three friends who also happen to be mothers, was not as pleased.

“I think it told her story, but we were very misrepresented,” Jocelyn Gailliot, the chief executive and a founder of Tuckernuck, said of the situation. She declined to provide details, adding: “But we are a positive brand, a happy brand, and we are women who have children. We don’t feel social media is the right place to engage over this, and it’s a bad precedent to set for future generations.”

A card from one of the companies that send Ms. Houston products to promote. Credit: Yael Malka for The New York Times

Recording an Instagram video for a product review. Credit: Yael Malka for The New York Times

There are 4.5 million mom influencers in the United States, according to Mom 2.0, a conference that brings them together. They use websites and social media to record seemingly every detail of their lives: the sweater they bought for the fall, their child’s favorite new toy, which coffee helps them wake up. What began, for many, as a creative outlet or a way to build community has morphed over the years into big business with a more direct link to brands and companies.

Their followers spend money on the items they endorse and boycott the ones they pan. (Some influencers receive money if people buy products after clicking on the links they provide, and they sometimes have sponsorship deals with brands.) Millennial mothers are 18 percent more likely than those from Generation X to rely on advice from their fellow moms, according to research done by Mom 2.0.

“People think: ‘This woman also drives around four kids and gets crumbs in her seat. She is just like me, so I am going to listen to her,’” said Elisa Camahort Page, a co-founder and former chief operating officer of BlogHer, a media company that put on events and acted as a publisher for female content creators.

Influential moms have been using their power to pressure brands for at least a decade. Heather B. Armstrong, who lives in Salt Lake City and runs a website that receives 250,000 views a month, discovered her power in 2009 after buying a $1,300 Maytag washing machine that kept breaking. After her interactions with customer service and corporate headquarters frustrated her, she posted five tweets that other mom influencers picked up.

“Within 12 hours, someone from headquarters called me, and then they flew someone out to fix my washing machine,” said Ms. Armstrong, who is known as Dooce to her followers. “I got their attention.”

But with influence comes responsibility, and some have raised questions about when and how mom influencers should use their power. How much research should be done before discussing a brand? What constitutes an experience worth sharing? What is, in other words, proper mom influencer etiquette?

“We have to look out for each other,” Ms. Houston said. “If I see a mom who has started a business and is now getting squashed by a bigger person, I want to use the influence I have.”

She acknowledged that in the Striped Sheep situation her research did not extend beyond reading the blog post by Ms. Feldman and talking to other mom influencers. “I had to take her word for it,” Ms. Houston said.

For Ms. Armstrong, keeping companies in check is a positive use of her influence. “It’s like we have these tools, we have power against these companies who want to take advantage of us,” she said. Her rule is that she engages on social media only when she cannot resolve something through regular customer service channels.

Other influencers feel that endorsing or deflating a brand, regardless of how much interaction they have had with it, is their responsibility to their audience. It’s part of portraying their day-to-day lives authentically.

An indoor herb garden sent to Ms. Armstrong for her to promote. Credit…Daniel Dorsa for The New York Times

For Ms. Armstrong, keeping companies in check is a positive use of her influence. “It’s like we have these tools, we have power against these companies who want to take advantage of us,” she said. Credit…Daniel Dorsa for The New York Times

Kermilia White, a full-time influencer who lives in Birmingham, Ala., has 75,000 followers across her social media platforms, including on Instagram, where she posts as themillennialsahm. She often writes about products meeting her expectations or disappointing her. (Her position is complicated by the fact that these brands often give her free products or sponsorships. She says her reviews represent her honest opinion, whether she gets the product free or not.)

“There was a start-up company I worked with in January that had a stroller attachment,” she said. “I felt there was a good return on investment for that brand, so I made sure they could get some eyes on it as soon as they launched. I have a huge new-mom following.”

She added: “I always make it clear this is my experience. I say, ‘It won’t necessarily be the same for you, but for me, this is what happened.’”

Some influencers aim to be more journalistic. Liz Gumbinner’s website, Cool Mom Picks, gets millions of readers and hundreds of thousands of social followers. She also runs Cool Mom Tech, Cool Mom Eats and social media communities for mothers. Part of what the sites do is review products, especially start-up brands owned by women and mothers.

Ms. Gumbinner has a team of writers, whom she pays, and she encourages them to do thorough research before posting a review.

“Our writers may try a new cosmetic product for a month, for example, before recommending it,” she said in an email. “With every product we write about, we need to earn our readers’ trust.”

Companies clearly like positive coverage. But when they are on the receiving end of negative chatter, they often feel at a disadvantage. Once a criticism goes viral, the damage is hard to undo.

Mom 2.0 and Dad 2.0, a conference for father content creators, hold training sessions that help participants communicate more effectively with brands and share their experiences with audiences. Keynote speakers and members of panels discuss the topic, and there are small-group workshops and round-table meetings.

At this year’s Mom 2.0 Summit, session topics included “How to Connect and Work With Global Brands” and “Activism as an Influencer: How to Be the Driver of Change in Your Community.”

“I’ve seen somebody go after a hotel when they could have called the front desk to fix the problem,” said John Pacini, a co-founder of the conference. “We are trying to apply the same standards to our industry that work in normal face-to-face culture.”

Sometimes, Ms. Houston said, it is best to just back away from the internet. She recounted what she told a friend, a new influencer popular with mothers, who had asked whether she should talk publicly about a bad experience at her neighborhood’s new coffee shop.

“I said: ‘No! They are brand new. Why would you want to ruin them?’” Ms. Houston said. “It’s good she even asked us, though. A lot of people, that thought, to check, doesn’t even cross their mind.”

Real Estate, and Personal Injury Lawyers. Contact us at: https://westlakelegal.com 

How Taylor Swift Dragged Private Equity Into Her Fight Over Music Rights


A little over a week ago, a troubling alert appeared on the smartphone of an executive at the private equity giant the Carlyle Group: The firm had been invoked by Taylor Swift.

In an open letter posted to social media, the pop megastar had implored her fans to intervene in what might otherwise have been an obscure music industry dispute. The new owners of her former record company, she said, were trying to prevent her from playing her old hits at the American Music Awards on Sunday night. Ms. Swift asked her followers to tell Scooter Braun, the top music manager who now controlled her music catalog, how they felt. She added that she was “especially asking for help” from the Carlyle Group, which had backed Mr. Braun’s deal for the label, Big Machine.

After Ms. Swift’s post, her fans swarmed Mr. Braun — he has said that he and his family have received death threats — and the issue even became a high-profile political talking point, with Senator Elizabeth Warren and Representative Alexandria Ocasio-Cortez using it to attack private equity.

The public callout also got the attention of Carlyle, one of the world’s biggest private equity firms, which moved quickly to encourage a deal between the two sides and urged Mr. Braun to reach out to Ms. Swift, according to four people close to the situation.

Carlyle’s intervention — which people in both camps say has brought the bitter fight closer to a resolution — says as much about the zeitgeist as it does about private equity. At a time of public outrage over corporate greed and a heightened awareness of gender-based power dynamics, the 29-year-old Ms. Swift was able to turn a commercial dispute into a cause célèbre.

According to the four people close to the discussions, a deal could take various forms, including a partnership or joint-venture arrangement. But the artist’s ideal outcome — and probably the only one she will accept, according to her team — appears to be a sale that would give Ms. Swift possession of her master recordings from Big Machine. Such an agreement could well cost her hundreds of millions of dollars.

Ownership of her master recordings, and the copyrights associated with them, would put Ms. Swift in rare company. Relatively few major-label artists — among them Jay-Z, Metallica and Janet Jackson — have gained such control, since labels typically own those assets in exchange for the risks they take in financing artists’ careers. But Ms. Swift has made no secret of her desire to control her work, which became a crucial contract point when she signed a new deal last year with Universal Music Group.

The singer’s battle with Mr. Braun and his partners at Carlyle represents an unusually public collision between a superstar’s social media power and the status quo of the music business. It has also exposed the awkward fit between artists and private equity investors more accustomed to dealing with corporate boards at large industrial and consumer companies than with popular entertainment figures.

“People in private equity look at music copyrights and think, ‘It’s like real estate,’ but it’s not,” said Matt Pincus, the founder of Songs Music Publishing, which was sold in 2017. “You’re dealing with living, breathing artists.”

Carlyle bought into Beats Electronics, Dr. Dre’s headphones company, in 2013 and made an initial investment in 2017 in Ithaca Holdings, the entertainment holding company that Mr. Braun runs. With its headquarters in Washington and its past advisory arrangements with former heads of state like President George Bush and John Major, the former prime minister of Britain, the firm is in some ways a natural mediator in the Swift dispute.

But its portfolio managers, who tend to wear dark suits and skinny ties, were not used to dealing with sneaker- and leather-jacket-wearing musicians and managers for whom business is often personal. And then, there were the obsessive, protective fans known as Swifties. While social media heat can be directed at anyone, anytime, Carlyle was unhappy to be dragged into the dispute in such a public way, three of the people said.

Ms. Swift has long railed against what she considers injustices in the music business, including past disputes with Apple and Spotify. She first went public in June with her displeasure over the deal that brought Carlyle into the picture.

On June 30, Mr. Braun, who discovered Justin Bieber and has managed Ariana Grande and Kanye West, took over most of Ms. Swift’s music catalog as part of Ithaca’s purchase of Big Machine, an independent Nashville label whose founder, Scott Borchetta, signed Ms. Swift as a 15-year-old country singer. Big Machine’s ownership of Ms. Swift’s first six albums — all of them multiplatinum hits — allows it to make money anytime that music is used, whether it’s streamed by a fan or placed in a Hollywood movie.

Ithaca paid $300 million to $350 million for Big Machine, according to two people briefed on its valuation at the time. But it also brought Ms. Swift into Mr. Braun’s orbit, stirring up heated feelings toward him largely because of his association with Mr. West, a longtime antagonist. The deal, she wrote on Tumblr, meant that Mr. Braun and Mr. Borchetta, who joined Ithaca’s board, would be in the position of “controlling a woman who didn’t want to be associated with them. In perpetuity.”

Carlyle’s stake in Ithaca was overseen by Jay W. Sammons, who runs the firm’s media, retail and consumer team. Carlyle added to its investment in Ithaca with the recent Big Machine deal. (It currently owns roughly a third of Ithaca, according to a person briefed on the size of the position.) And Mr. Sammons knew Carlyle was taking on exposure to an outspoken pop figure in Ms. Swift. But given the substantial value of her music catalog and the chance to invest in other Big Machine label artists, including Reba McEntire and Florida Georgia Line, he saw it as a compelling opportunity, a person familiar with his thinking said.

Soon after the deal was signed, Ms. Swift threatened in interviews to make new recordings of her old songs, potentially devaluing Mr. Braun’s investment in her original catalog. Mr. Braun and Mr. Borchetta accused Ms. Swift of bending the facts to fit a victim’s narrative, and said that she had rejected attempts to negotiate over the last six months.

Then, on Nov. 14, the pop star published another Tumblr post. Mr. Braun and Mr. Borchetta had essentially told Ms. Swift to “be a good little girl and shut up,” she wrote. The two men were imposing “tyrannical control” over her, she added, threatening to block her American Music Awards performance and refusing to license her music for a Netflix documentary in the works.

Ms. Swift’s note sent Swifties on a passionate mission to familiarize themselves with the Carlyle portfolio. Some shared clips from the socially conscious comedian Hasan Minhaj in which he discussed the company’s investments and its connection — through its ownership of the aerospace component manufacturer Wesco Aircraft Holdings, which supplies parts used to make a combat aircraft — to Saudi Arabia’s war against Yemen.

“When you think about it, you either support Taylor Swift or the war in Yemen,” one fan wrote on Twitter, earning more than 3,700 retweets.

Mr. Sammons soon saw Ms. Swift’s post on his phone. Concerned by the turn things had taken, he rolled into action, according to two people familiar with his role, contacting Mr. Braun and others to see what could be done to assuage Ms. Swift — potentially even by selling her masters back to her. Shortly thereafter, both public and private overtures were made to Ms. Swift, inviting her to explore an outcome that would satisfy all parties.

On Friday, Mr. Braun used Instagram to publish his first public statement on the matter, urging Ms. Swift to consider her role in the threats facing his family. But he added that he hoped to “come together and try to find a resolution,” adding: “I’m open to ALL possibilities.”

Despite Mr. Braun’s open-minded public stance, he may be reluctant to sell. Big Machine is a crucial part of a larger strategy to expand the reach of Ithaca, which has investments in music publishing, artist management and even a film studio, Mythos. Big Machine would also help establish Mr. Braun, 38, as a mogul in the style of his hero, David Geffen, who controlled his own major record label and became a top Hollywood producer.

Private equity has long been a part of the financial backdrop of the music industry, with mixed results. Some deals have gone well; in 2013, for example, Kohlberg Kravis Roberts doubled its investment in BMG, the recorded music and publishing company, after five years.

Others have been disasters. Terra Firma Capital Partners paid about $8 billion for EMI in a highly leveraged deal in 2007, shortly before credit markets collapsed. The deal fell apart four years later, but by then Terra Firma had already alienated top artists with a tone-deaf approach to cost cutting. Paul McCartney, the Rolling Stones, Radiohead and other stars fled the label, some with harsh words for its management.

Mr. Pincus, the music publisher who now advises a financial firm, put it this way: “The question of whether the power of a global superstar can wildly swing the asset value of music is hard to quantify in a spreadsheet.”


Internet Companies Prepare to Fight the ‘Deepfake’ Future


SAN FRANCISCO — Several months ago, Google hired dozens of actors to sit at a table, stand in a hallway and walk down a street while talking into a video camera.

Then the company’s researchers, using a new kind of artificial intelligence software, swapped the faces of the actors. People who had been walking were suddenly at a table. The actors who had been in a hallway looked like they were on a street. Men’s faces were put on women’s bodies. Women’s faces were put on men’s bodies. In time, the researchers had created hundreds of so-called deepfake videos.

By creating these digitally manipulated videos, Google’s scientists believe they are learning how to spot deepfakes, which researchers and lawmakers worry could become a new, insidious method for spreading disinformation in the lead-up to the 2020 presidential election.

For internet companies like Google, finding the tools to spot deepfakes has gained urgency. If someone wants to spread a fake video far and wide, Google’s YouTube or Facebook’s social media platforms would be great places to do it.

Imagine a fake Senator Elizabeth Warren, virtually indistinguishable from the real thing, getting into a fistfight in a doctored video. Or a fake President Trump doing the same. The technology capable of that trickery is edging closer to reality.

“Even with current technology, it is hard for some people to tell what is real and what is not,” said Subbarao Kambhampati, a professor of computer science at Arizona State University who is among the academics partnering with Facebook on its deepfake research.

Video

transcript

[HIGH-PITCHED NOTE] “You know when a person is working on something and it’s good, but it’s not perfect? And he just tries for perfection? That’s me in a nutshell.” [MUFFLED SPEECH] “I just want to recreate humans.” “O.K. But why?” “I don’t know. I mean, it’s that feeling you get when you achieve something big. (ECHOING) “It’s really interesting. You hear these words coming out in your voice, but you never said them.” “Let’s try again.” “We’ve been working to make a convincing total deepfake. The bar we’re setting is very high.” “So you can see, it’s not perfect.” “We’re trying to make it so the population would totally believe this video.” “Give this guy an Oscar.” [LAUGHTER] “There are definitely people doing it at Google, Samsung, Microsoft. The technology moves super fast.” “Somebody else will beat you to it if you wait a year.” “Someone else will. And that will hurt.” “O.K., let’s try again.” “Just make it natural, right?” “It’s hard to be natural.” “It’s hard to be natural when you’re faking it.” “O.K.” “What are you up to these days?” “Today, I’m announcing my candidacy for the presidency of the United States.” [LAUGHTER] “And I would like to announce my very special running mate, the most famous chimp in the world, Bubbles Jackson. Are we good?” “People do not realize how close this is to happen. Fingers crossed. It’s going to happen, like, in the upcoming months. Yeah, the world is going to change.” “I squint my eyes.” “Yeah.” “Look, this is how we got into the mess we’re in today with technology, right? A bunch of idealistic young people thinking, we’re going to change the world.” “It’s weird to see his face on it.” [LAUGHTER] “I wondered what you would say to these engineers.” “I would say, I hope you’re putting as much thought into how we deal with the consequences of this as you are into the realization of it. This is a Pandora’s box you’re opening.” [THEME MUSIC]

Deepfakes — a term that generally describes videos doctored with cutting-edge artificial intelligence — have already challenged our assumptions about what is real and what is not.

In recent months, video evidence was at the center of prominent incidents in Brazil, Gabon in Central Africa and China. Each was colored by the same question: Is the video real? The Gabonese president, for example, was out of the country for medical care and his government released a so-called proof-of-life video. Opponents claimed it had been faked. Experts call that confusion “the liar’s dividend.”

“You can already see a material effect that deepfakes have had,” said Nick Dufour, one of the Google engineers overseeing the company’s deepfake research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”

For decades, computer software has allowed people to manipulate photos and videos or create fake images from scratch. But it has been a slow, painstaking process usually reserved for experts trained in the vagaries of software like Adobe Photoshop or After Effects.

Now, artificial intelligence technologies are streamlining the process, reducing the cost, time and skill needed to doctor digital images. These A.I. systems learn on their own how to build fake images by analyzing thousands of real images. That means they can handle a portion of the workload that once fell to trained technicians. And that means people can create far more fake stuff than they used to.

The technology used to create deepfakes is still fairly new, and the results are often easy to spot. But it is evolving, and while the tools used to detect these bogus videos are evolving too, some researchers worry that they won’t be able to keep pace.

Google recently said that any academic or corporate researcher could download its collection of synthetic videos and use them to build tools for identifying deepfakes. The video collection is essentially a syllabus of digital trickery for computers. By analyzing all of those images, A.I. systems learn how to watch for fakes. Facebook recently did something similar, using actors to build fake videos and then releasing them to outside researchers.

Engineers at a Canadian company called Dessa, which specializes in artificial intelligence, recently tested a deepfake detector that was built using Google’s synthetic videos. It could identify the Google videos with almost perfect accuracy. But when they tested their detector on deepfake videos plucked from across the internet, it failed more than 40 percent of the time.

They eventually fixed the problem, but only after rebuilding their detector with help from videos found “in the wild,” not created with paid actors — proving that a detector is only as good as the data used to train it.
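Dessa has not published its detector, but the general failure mode it ran into is easy to illustrate. The toy NumPy sketch below (entirely hypothetical, not Dessa's or Google's system) trains the simplest possible classifier on fakes that all share one strong artifact, then evaluates it on fakes whose artifact is weaker, the way "in the wild" videos differ from a lab-made training set:

```python
# Toy illustration of distribution shift in a fake-video detector.
# Each "video" is reduced to a single artifact score: real videos score
# near 0, fakes score near some mean that depends on how they were made.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, fake_artifact_mean):
    """Return (scores, labels) for n real and n fake samples; label 1 = fake."""
    real = rng.normal(0.0, 1.0, n)
    fake = rng.normal(fake_artifact_mean, 1.0, n)
    x = np.concatenate([real, fake])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# "Training set": lab-made fakes with a strong, consistent artifact.
x_train, y_train = make_data(1000, fake_artifact_mean=4.0)

# Simplest detector: a threshold halfway between the class means.
threshold = (x_train[y_train == 0].mean() + x_train[y_train == 1].mean()) / 2

def accuracy(x, y):
    return float(((x > threshold) == y).mean())

# Fakes from the same source are caught almost perfectly...
x_same, y_same = make_data(1000, fake_artifact_mean=4.0)
# ...but "wild" fakes with a subtler artifact slip past the threshold.
x_wild, y_wild = make_data(1000, fake_artifact_mean=1.0)

print(f"same-source accuracy: {accuracy(x_same, y_same):.2f}")
print(f"wild accuracy:        {accuracy(x_wild, y_wild):.2f}")
```

The detector never sees anything wrong with its rule until the fakes change; retraining on data that includes the wild examples is the only fix, which is the point the researchers make above.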

Their tests showed that the fight against deepfakes and other forms of online disinformation will require nearly constant reinvention. Several hundred synthetic videos are not enough to solve the problem, because they don’t necessarily share the characteristics of fake videos being distributed today, much less in the years to come.

“Unlike other problems, this one is constantly changing,” said Ragavan Thurairatnam, Dessa’s founder and head of machine learning.

In December 2017, someone calling themselves “deepfakes” started using A.I. technologies to graft the heads of celebrities onto nude bodies in pornographic videos. As the practice spread across services like Twitter, Reddit and PornHub, the term deepfake entered the popular lexicon. Soon, it was synonymous with any fake video posted to the internet.

The technology has improved at a rate that surprises A.I. experts, and there is little reason to believe it will slow. Deepfakes should benefit from one of the few tech industry axioms that have held up over the years: Computers always get more powerful and there is always more data. That makes the so-called machine-learning software that helps create deepfakes more effective.

“It is getting easier, and it will continue to get easier. There is no doubt about it,” said Matthias Niessner, a professor of computer science at the Technical University of Munich who is working with Google on its deepfake research. “That trend will continue for years.”

The question is: Which side will improve more quickly?

Researchers like Dr. Niessner are working to build systems that can automatically identify and remove deepfakes. This is the other side of the same coin. Like deepfake creators, deepfake detectors learn their skills by analyzing images.

Detectors can also improve by leaps and bounds. But that requires a constant stream of new data representing the latest deepfake techniques used around the internet, Dr. Niessner and other researchers said. Collecting and sharing the right data can be difficult. Relevant examples are scarce, and for privacy and copyright reasons, companies cannot always share data with outside researchers.

Though activists and artists occasionally release deepfakes as a way of showing how these videos could shift the political discourse online, these techniques are not widely used to spread disinformation. They are mostly used to spread humor or fake pornography, according to Facebook, Google and others who track the progress of deepfakes.

Right now, deepfake videos have subtle imperfections that can be readily detected by automated systems, if not by the naked eye. But some researchers argue that the improved technology will be powerful enough to create fake images without these tiny defects. Companies like Google and Facebook hope they will have reliable detectors in place before that happens.

“In the short term, detection will be reasonably effective,” said Mr. Kambhampati, the Arizona State professor. “In the longer term, I think it will be impossible to distinguish between the real pictures and the fake pictures.”


Though activists and artists occasionally release deepfakes as a way of showing how these videos could shift the political discourse online, these techniques are not widely used to spread disinformation. They are mostly used to spread humor or fake pornography, according to Facebook, Google and others who track the progress of deepfakes.

Right now, deepfake videos have subtle imperfections that can be readily detected by automated systems, if not by the naked eye. But some researchers argue that the improved technology will be powerful enough to create fake images without these tiny defects. Companies like Google and Facebook hope they will have reliable detectors in place before that happens.

“In the short term, detection will be reasonably effective,” said Mr. Kambhampati, the Arizona State professor. “In the longer term, I think it will be impossible to distinguish between the real pictures and the fake pictures.”
