Real-Time Surveillance Will Test the British Tolerance for Cameras

CARDIFF, Wales — A few hours before a recent Wales-Ireland rugby match in Cardiff, amid throngs of fans dressed in team colors of red and green, and sidewalk merchants selling scarves and flags, police officers popped out of a white van.

The officers stopped a man carrying a large Starbucks coffee, asked him a series of questions and then arrested him. A camera attached to the van had captured his image, and facial recognition technology used by the city identified him as someone wanted on suspicion of assault.

The presence of the cameras, and the local police’s use of the software, are at the center of a debate in Britain that’s testing the country’s longstanding acceptance of surveillance.

Britain has traditionally sacrificed privacy more than other Western democracies, mostly in the name of security. The government’s use of thousands of closed-circuit cameras and its ability to monitor digital communications have been influenced by domestic bombings during years of conflict involving Northern Ireland and attacks since Sept. 11, 2001.

But now a new generation of cameras is beginning to be used. Like the one perched on the top of the Cardiff police van, these cameras feed into facial recognition software, enabling real-time identity checks — raising new concerns among public officials, civil society groups and citizens. Some members of Parliament have called for a moratorium on the use of facial recognition software. The mayor of London, Sadiq Khan, said there was “serious and widespread concern” about the technology. Britain’s top privacy regulator, Elizabeth Denham, is investigating its use by the police and private businesses.

And this month, in a case that has been closely watched because there is little legal precedent in the country on the use of facial recognition, a British High Court ruled against a man from Cardiff, the capital of Wales, who sued to end the use of facial recognition by the South Wales Police. The man, Ed Bridges, said the police had violated his privacy and human rights by scanning his face without consent on at least two occasions — once when he was shopping, and again when he attended a political rally. He has vowed to appeal the decision.

“Technology is driving forward, and legislation and regulation follows ever so slowly behind,” said Tony Porter, Britain’s surveillance camera commissioner, who oversees compliance with the country’s surveillance camera code of practice. “It would be wrong for me to suggest the balance is right.”

Britain’s experience mirrors debates about the technology in the United States and elsewhere in Europe. Critics say the technology is an intrusion of privacy, akin to constant identification checks of an unsuspecting public, and has questionable accuracy, particularly at identifying people who aren’t white men.


A Cardiff man who sued to end the use of facial recognition by the South Wales Police lost a ruling by the British High Court this month. Credit: Francesca Jones for The New York Times

In May, San Francisco became the first American city to ban the technology, and some other cities have followed. Some members of Congress want to limit its use across the United States, with Representative Jim Jordan of Ohio, the top Republican on the House Oversight Committee, comparing the technology to George Orwell’s “1984” and a threat to free speech and privacy. A school in Sweden was fined after using facial recognition to keep attendance. The European Commission is considering new restrictions.

Britain’s use of facial recognition technology is nowhere near as widespread as China’s, where the government uses it in a variety of ways, including to track ethnic Muslims in the country’s western region. Opponents of the software say its use in a democratic country needs to be more carefully considered, not left to the police to determine.

But the British public has already grown accustomed to the use of surveillance cameras. The roughly 420,000 closed-circuit television cameras in London are more than in any other city except Beijing, equaling about 48 cameras per 1,000 people, according to a 2017 report by the Brookings Institution. A recent government poll showed a mixed reaction to facial recognition, with about half of the people surveyed supporting its use if certain privacy safeguards were in place.

The South Wales police have arrested 58 people using facial recognition technology since 2017. Credit: Francesca Jones for The New York Times

The Metropolitan Police Service in London tested facial recognition technology 10 times from 2016 until July of this year. Officers were often stationed in a control center near the cameras monitoring computers with a real-time feed of what was being recorded. The system sent an alert when it had identified a person who matched someone on the watch list. If officers agreed it was a match, they would radio to police officers on the street to pick up the person.
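The alert loop described here (camera feed, comparison against a watch list, then human confirmation before any arrest) can be sketched in a few lines. Everything below, from the watch-list names to the similarity threshold, is invented for illustration; the article does not describe the Met's actual system internals.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    face_id: str       # identity proposed by the recognition model
    similarity: float  # model confidence, 0.0 to 1.0

# Hypothetical watch list and threshold, for illustration only.
WATCH_LIST = {"suspect-001", "suspect-002"}
MATCH_THRESHOLD = 0.9

def alerts_for_frame(detections):
    """Flag watch-list matches for human review; as the article notes,
    officers confirm the match themselves before acting on it."""
    return [d for d in detections
            if d.face_id in WATCH_LIST and d.similarity >= MATCH_THRESHOLD]

frame = [Detection("suspect-001", 0.95), Detection("unknown-7", 0.99)]
print(len(alerts_for_frame(frame)))  # 1 -- only the watch-list face alerts
```

The key design point the article describes is that the system only raises candidates; the decision to stop someone stays with officers.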

During one deployment near a subway station in London, officers detained a person intentionally seeking to obscure his face from the cameras to avoid detection. He was released after being ordered to pay a fine. In other instances, researchers found that the system flagged people who had been wanted for a past crime that had already been dealt with by the legal system.

Daragh Murray, a researcher at Essex University who spent time observing the use of facial recognition technology by the London police, said officials discussed integrating the technology in cameras around the city, including on buses.

“They were seeing it as the first step in a much bigger deployment,” said Mr. Murray, who published a 128-page report in July on use of the technology in London. He added, “The potential for really invasive technology is very high, but it can also be incredibly useful under certain circumstances.”

The technology has been most widely used by the South Wales Police after it received funding for systems from the Home Office, the agency that oversees domestic security across Britain. The police force uses the cameras about twice per month at large events like the Wales-Ireland rugby match, which was held at a stadium that fits more than 70,000 fans. At the national air show in July, more than 21,000 faces were scanned, according to the police. The system identified seven people from a watch list — four incorrectly.
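A quick back-of-envelope check on the air-show figures reported here (21,000 faces scanned, seven people flagged, four of those flags wrong) shows how error-prone the alerts were:

```python
# Figures from the article: 21,000 faces scanned at the air show,
# 7 people flagged against the watch list, 4 of the flags incorrect.
flagged = 7
incorrect = 4
correct = flagged - incorrect

precision = correct / flagged   # share of alerts that were genuine
flag_rate = flagged / 21_000    # share of the crowd flagged at all

print(f"{precision:.0%}")   # 43% of alerts were correct
print(f"{flag_rate:.3%}")   # 0.033% of scanned faces triggered an alert
```

In other words, a majority of that day's alerts pointed at the wrong person, even though almost no one in the crowd was flagged at all.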

Stephen Williams, who volunteers for the Socialist Party in Cardiff, said police vans with facial recognition cameras were now frequent sights at busy events. Credit: Francesca Jones for The New York Times

In Cardiff, the largest city in Wales, vans carrying facial recognition cameras have become a common sight over the past year. On game days, the vehicles have taken the place of vans the police used to detain fans causing trouble, said Stephen Williams, 57, who volunteers for the Socialist Party at a table nearby. “On most occasions, if it’s a busy event, you’ll see a van there,” he said.

The South Wales Police said the technology was necessary to make up for years of budget cuts by the central government. “We are having to do more with less,” said Alun Michael, the South Wales police and crime commissioner. He said the technology was “no different than a police officer standing on the corner looking out for individuals and if he recognizes somebody, saying, ‘I want to talk to you.’”

The police said that since 2017, 58 people had been arrested after being identified by the technology.

New questions are being raised about facial recognition’s use extending beyond the police to private companies. This month, after a report was published by the Financial Times, a large London property developer acknowledged that it used the technology at King’s Cross, a commercial and transit hub.

Critics say there has been a lack of transparency about the technology’s use, particularly about the creation of watch lists, which are considered the backbone of the technology because they determine which faces a camera system is hunting for. In tests in Britain, the police often programmed the system to look for a few thousand wanted people, according to a research paper published in July. But the potential could be far greater: Another government report said that as of July 2016, there were over 16 million images of people who had been taken into custody in the country’s Police National Database that could be searchable with facial recognition software.

Critics of the technology in Britain say there is little transparency about its use, particularly about the creation of watch lists of wanted individuals. Credit: Francesca Jones for The New York Times

Silkie Carlo, the executive director of Big Brother Watch, a British privacy group calling for a ban on the technology’s use, said the murky way watch lists were created showed that police departments and private companies, not elected officials, were making public policy about the use of facial recognition.

“We’ve skipped some real fundamental steps in the debate,” Ms. Carlo said. “Policymakers have arrived so late in the discussion and don’t fully understand the implications and the big picture.”

Sandra Wachter, an associate professor at Oxford University who focuses on technology ethics, said that even if the technology could be proven to identify wanted people accurately, laws were needed to specify when the technology could be used, how watch lists were created and shared, and the length of time images could be stored.

“We still need rules around accountability,” she said, “which right now I don’t think we really do.”

Real Estate, and Personal Injury Lawyers. Contact us at: https://westlakelegal.com 

Is “private correspondence” truly private?

In the ongoing search of every constitutional nook and cranny for potential tactics to deploy against the Government, Dominic Grieve and his colleagues provoked an angry row by seeking to require nine named individuals to release correspondence on private channels relating to prorogation.

The battle lines write themselves. One side declares that it seeks only truth, and there’s definitely no element of wishing to intimidate, disrupt or harass the specific named people it is picking out, your honour. The other denounces the effort as an outrageous intrusion into personal privacy, an assault on civil liberties, and an effort to make Government an impossibly hostile environment in which to operate.

You may already have a preferred side in the debate; but the truth lies somewhere between the two.

First, the actual rules. The sought messages are widely described as “private correspondence”, meaning that (if they exist at all) they are alleged to be privately held, through personal accounts on various platforms, and on personal phones and computers. But there are circumstances in which such material can still legally be considered public.

To understand this, we need to look back to the early days of the Coalition. In particular to the Department for Education, between 2010 and 2012. Then the Financial Times and the DfE fought a lengthy battle over exactly this question: were ‘private’ messages really private if they related to public officials doing their official work?

The battleground then was Freedom of Information – a journalist used the FOI Act to request emails which he knew existed, between named individuals (including Dominic Cummings, just like Grieve’s proposal). The Department replied that it did not hold the data – which was true, because it was held in private inboxes hosted by whichever email providers were being used.

The FT’s case was that this was a failure of the Department to hold (and thereby disclose) all the required data on its official work, not a failure of the FOI request. Ultimately, the Information Commissioner upheld the journalist’s appeal, ruling that messages between public employees discussing public work, no matter the medium used, were disclosable public records, not private correspondence.

So this might be correspondence on private media, but if it’s on an official topic discussing public work, then it would generally fall under the definition of disclosable public data.

That isn’t the end of the story, however. As Sam Freedman – then a DfE civil servant, and also one of the people whose supposed correspondence was sought under that FOI request – notes, the ruling was clear on the theory but the practice was more blurry: the disclosure was never successfully enforced.

There may yet be legalistic skirmishing over this different route to demand similar publication. But even if it succeeds on paper, it might well run up against the same issue.

Whose job is it to enforce Parliament’s resolution? By what means will Grieve secure the passwords to the named individuals’ phones, email accounts, WhatsApp accounts, Signal accounts, Facebook accounts? In what way will he or his enforcement agents ascertain that they have got all correspondence from all of the nine? I can chat to people via a messenger built into Words With Friends, have chats with other players on Call of Duty, or whistle coded messages as I pass someone’s house while on a stroll – how is it proposed for Parliament to capture all these possibilities and more?

It’s at this point, of practical implementation, that the civil liberties issues which don’t seem to apply in terms of the messages being “private” do actually start to bite.

And that’s before we wonder what the model is for adjudicating whether disclosed (or seized) data is indeed relevant to the official’s public duties or not. Would the assumption be that everything on all their possible correspondence media should be collected and pored over? Again, by whom? Grieve himself? The Information Commissioner? The Cabinet Office?

Now that threatens an invasion of privacy and a trampling of important rights.

From the invasive prospect of strangers riffling through messages to their partners or pictures of their children or personal financial information, through to the politically controversial idea of leaky officials or even one’s opponents getting access to records of separate but sensitive conversations with the press or one’s party colleagues, it doesn’t have to go very far before it starts feeling rather hazardous. We must assume those hazards are unintended, rather than a deliberate form of intimidation of those being targeted, but they are undesirable nonetheless.

It seems unlikely that Grieve would be satisfied to let the subjects of the proposal sift messages for disclosure themselves. But the alternative is both impractical and unpalatable.

Even in systems where there is a structure for such tasks – like the workings of the police and CPS when investigating crimes – the job is hard, intrusive, open to misuse, and very controversial. Here, there isn’t a structure or system at all.

If Parliament really wants such a principle, it seems irresponsible and ineffective at best to vote for the desired outcome without any preparation for or consideration of securing it reasonably and successfully.

Even some of those former Tory MPs who lost the Whip last week appear to be aware of these dangers. They haven’t criticised the idea publicly, but it seems unlikely to be a coincidence that former ministers like Rory Stewart and Caroline Nokes voted against Grieve’s measure, while others including Philip Hammond abstained. They know how government works and can presumably imagine some of the downsides of such a tactic.

It also, of course, risks opening this field to tit-for-tat retaliation. The former Chancellor might perhaps not be that keen for his own ex-advisers to be made to submit to such a trawl on, for example, preparations (or the lack thereof) for No Deal.

Sometimes there is a difference between what you can do and what you should do. Much of Conservatism rests on that principle, and it’s one Dominic Grieve himself has in the past been quite sympathetic towards. It’s telling that even some of his allies on the Brexit issue seem to think he’s on the wrong side of the line this time.


Dave Chappelle and Big Tech Rotten Tomatoes: Everyone Is Biased About Everything


My last actual gig prior to venturing out and founding Less Government – was at the Media Research Center (MRC).

I described MRC’s toil as: “We work to expose and catalogue Leftist media bias.”  And then always added “We’re understaffed.”

But I have always held what I know was a minority perspective at the MRC.  On the media – and perhaps all humans everywhere.

MRC’s Mission Statement:

“To create a media culture in America where truth and liberty flourish.”

The MRC at least used to want to get back to the alleged halcyon days of unbiased journalism.

Except there was never any such thing as “unbiased journalism.”  Because human nature.  Humans – are biased.

CBS News anchor Walter Cronkite – was “The Most Trusted Man in America.”  And, we know now, a pronounced Leftist liar.

Because media culture is human culture – truth and liberty will never, ever flourish.  Personal perspectives on things – always will.

The Leftist perspective – has nigh always dominated the news media.

Right now, the Leftist perspective dominates the entertainment media and Hollywood.  But ’twas not always so – at least amongst the actors.

Before there was the heinousness of Rob Reiner and the beautiful ignorance of Alyssa Milano – there was the Golden Era of Tinsel Town.  When Ronald Reagan and John Wayne, Jimmy Stewart and Bob Hope – and many, many others – were huge stars…and conservative as the day is long.

But in all of this: One word you would never, ever use to describe any of these people – is “unbiased.”

Because humans – are biased.  Because human nature.

Humans do media – so media has been, is, and always will be biased.

When robots start doing journalism – they will reflect the biases of their programmers.

Speaking of media, Hollywood, Big Tech and pronounced bias….

Rotten Tomatoes Gives Dave Chappelle Special Zero Percent Rating

Dave Chappelle – is absolutely one of the funniest people alive.  Even if he simply came on stage and Bill Burr-style extruded a profanity-laced insult-fest of the city in which he was appearing – it would get something higher than…zero.

But you see – Chappelle’s “Sticks and Stones” special – is an actually funny thing.  (I know – I’ve watched it.)

Which means it draws humor from and pokes fun at – everyone.  And everyone – includes Leftists.

Rotten Tomatoes couldn’t let that go unchecked.  So Rotten Tomatoes – rigged their Chappelle rating:

“Chappelle’s latest hour-long stand-up set for Netflix received an extremely rare 0 percent Rotten Tomatoes rating from five professional critics….”

Wait – only “five professional critics?”  Rotten Tomatoes’ business model is – per their website:

“(T)he leading online aggregator of movie and TV show reviews from critics, we provide fans with a comprehensive guide to what’s Fresh – and what’s Rotten – in theaters and at home….

“The Tomatometer score – based on the opinions of hundreds of film and television critics – is a trusted measurement of critical recommendation for millions of fans.”

“Hundreds of…critics….”  So why was the entirety of the Chappelle review – limited to five obviously Leftist, pre-selected-by-Rotten-Tomatoes reviewers?
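For context, the Tomatometer described above is essentially a percent-positive aggregate over reviews. Here is a minimal sketch of that arithmetic; this is my simplification with invented review lists, not Rotten Tomatoes' actual pipeline:

```python
# Simplified model of the percent-positive aggregation quoted above.
# The review lists here are invented to mirror the scores the article
# cites (0% from five critics, 99% from the audience).
def tomatometer(reviews):
    """Score = share of reviews judged positive ('Fresh'), as a percent."""
    if not reviews:
        return None
    return round(100 * sum(reviews) / len(reviews))

critic_reviews = [False] * 5               # five negative critic reviews
audience_reviews = [True] * 99 + [False]   # near-unanimous audience praise

print(tomatometer(critic_reviews))    # 0
print(tomatometer(audience_reviews))  # 99
```

The arithmetic also shows why the pool size matters: with only five reviewers, a single score swings the result by 20 points, which is the heart of the "why only five critics?" complaint.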

Because Rotten Tomatoes – is biased.  Because Rotten Tomatoes – is made up of humans.  In modern Hollywood and Big Tech – that means Leftist humans.

An important part of Rotten Tomatoes and its success – is serving as a place where We the Viewing Public can also rate what the “experts” rate.

When Rotten Tomatoes finally published the ratings of We the Viewing Public….:

“Rotten Tomatoes unveiled Chappelle’s special has received an equally rare 99 percent audience score.

“The high audience rating was the cumulative score from at least 3,753 casual reviewers who praised the comedian for daring to broach controversial topics that most comic stars have avoided in the era of ‘cancel culture.’

“Such a stark contrast among critics and regular viewers is almost unheard of and illustrates the wide cultural divide among the general public and media elites.”

This is…oh, I don’t know…about the nine millionth instance of Leftist Big Tech abusing their nigh-monopoly online platforms to screw anyone not in lockstep with their hard Leftism.

Of course, anecdotes of Big Tech Leftist bias – no matter if they number in the infinities – do not add up to data.

So one particular Leftist – delivered us the data on one particularly influential Leftist Big Tech function: Web Search.  How you get answers – when you ask the Internet questions.

Rotten Tomatoes rigging entertainment ratings is…bad.  But not fundamentally transformational of our politics – and thus our nation.

Leftist Big Tech rigging Web searches…is exceedingly awful.  In a great many ways – including politically.

And in 2018, 87.3% of all Web searches in the United States – took place via uber-Left, uber-huge Google (Market Cap: $851 billion).

So when you want to search the Web – you “Google” something.  Both rhetorically – and literally.

So when Google rigs things – they’re rigging nigh everything Americans see on the Web.

We all by our onesies have documented dozens and dozens of instances of Google (and other Big Tech joints) screwing conservatives.

But again, let’s get scientific.  Meet Dr. Robert Epstein – a self-avowed man of the Left.  I would not call him a Leftist – because he’s actually honest.

He’s been studying Big Tech’s political bias – for quite a while:

“Regarding elections, Dr. Epstein has found in multiple studies that search rankings that favor a political candidate drive the votes of undecided voters toward that candidate, an effect he calls SEME (“seem”), the Search Engine Manipulation Effect….

“(B)iased search rankings exercise undue influence over voter’s opinions – influence that cannot be counteracted by individual candidates but that can easily determine who will win a close election.”

Before Dr. Epstein did an in-depth study of Google manipulating voters and potential voters in the lead-up to the 2016 election – he called his shot:

How Google Could Rig the 2016 Election – August 19, 2015

Shocker – Google did.

In fact, Dr. Epstein’s study reveals – the election doesn’t even have to be close…to be transformationally affected by Google’s Leftist bias.

“In 2016, I set up the first-ever monitoring system that allowed me to look over the shoulders of a diverse group of American voters — there were 95 people in 24 states,” (Epstein) said….

“The study looked into ‘politically oriented searches’ from a ‘diverse group of American voters,’….

“‘I looked at politically oriented searches that these people were conducting on Google, Bing and Yahoo. I was able to preserve more than 13,000 searches and 98,000 web pages, and I found very dramatic bias in Google’s search results… favoring Hillary Clinton — whom I supported strongly.”

Again: The election doesn’t have to be close – to be won by Google:

“‘That level of bias was sufficient, I calculated, to have shifted over time somewhere between 2.6 and 10.4 million votes to Hillary without anyone knowing that this had occurred….’”

We have heard INCESSANTLY since Donald Trump defeated Hillary Clinton – that she won the popular vote by about three million votes.  That popular-vote-Clinton-victory – has re-ginned-up the Left’s push to end the electoral college.

And it turns out that if 90%-of-US-Search Google hadn’t uber-rigged their results for Clinton – Trump most likely would have also won the popular vote.  Maybe by a lot.

And how did Google get so uber-huge?  So as to wield such huge, Leftist, stealth political power?

Government cronyism.

Platform, or Publisher?:

“Section 230 of the (1996) Communications Decency Act immunizes online platforms for their users’ defamatory, fraudulent, or otherwise unlawful content. Congress granted this extraordinary benefit to facilitate ‘forum[s] for a true diversity of political discourse.’

“This exemption from standard libel law is extremely valuable to the companies that enjoy its protection, such as Google, Facebook, and Twitter, but they only got it because it was assumed that they would operate as impartial, open channels of communication—not curators of acceptable opinion.”

Get that?  Google and the rest of Big Tech get this massive cronyism – so long as they do not act as “curators of acceptable opinion.”

But Google – and the rest of Big Tech – have done exactly that.  Trillions and trillions of times.

The accumulated tonnage of anecdotes – proves it.

Dr. Epstein’s in-depth study – documents it.

Big Tech is biased – because they’re human.  And human nature…trumps…all.

Thus legislation dependent upon humans not behaving like humans – is folly.

Thus Section 230 – has gots to go.

The post Dave Chappelle and Big Tech Rotten Tomatoes: Everyone Is Biased About Everything appeared first on RedState.


Google Is Fined $170 Million for Violating Children’s Privacy on YouTube

Google on Wednesday agreed to pay a record $170 million fine and to make changes to protect children’s privacy on YouTube, as regulators said the video site had knowingly and illegally harvested personal information from youngsters and used that data to profit by targeting them with ads.

The measures were part of a settlement with the Federal Trade Commission and New York’s attorney general. They said YouTube had violated a federal children’s privacy law known as the Children’s Online Privacy Protection Act, or COPPA.

Regulators said YouTube, which is owned by Google, had illegally gathered children’s data — such as identification codes that are used to track web browsing over time — without their parents’ consent. The site also marketed itself as a top destination for young children to advertisers, even as it told some advertising companies that no compliance with the children’s privacy law was needed because it did not have viewers younger than 13. YouTube then made millions of dollars by using the information harvested from children to target them with ads, regulators said.

To settle the charges, YouTube agreed to pay $170 million, of which $136 million will go to the F.T.C. and $34 million to New York. The sum represents the largest civil penalty ever obtained by the F.T.C. in a children’s privacy case, dwarfing the previous record fine of $5.7 million that the agency levied this year against the owner of TikTok, a social video-sharing app.

Under the settlement, which the F.T.C. approved in a 3 to 2 vote, YouTube also agreed to set up a system that asks video channel owners to identify the children’s content they post so that targeted ads are not placed in those videos. YouTube must also obtain consent from parents before collecting or sharing personal details like their child’s name or photos, regulators said.

The move adds to the enforcement actions that American regulators have recently taken against tech companies for violations of people’s privacy, indicating the Trump administration’s willingness to aggressively pursue the powerful corporations. It follows a $5 billion privacy settlement between the F.T.C. and Facebook in July over how the social network collected and handled its users’ data.

But critics warned that the fine and measures against YouTube did not go far enough to protect children’s privacy.

Children’s advocates said the $170 million penalty was a slap on the wrist for one of the world’s richest companies. They added that Google had simply agreed to abide by a children’s privacy law that it was already obligated to comply with. COPPA prohibits operators of online services from collecting personal data, like home addresses, from children under 13 without a parent’s verifiable permission.


Susan Wojcicki, YouTube’s chief executive, said in a blog post on Wednesday that “nothing is more important than protecting kids and their privacy.” Credit: Peter Prato for The New York Times

“Merely requiring Google to follow the law, that’s a meaningless sanction,” said Jeffrey Chester, executive director of the Center for Digital Democracy, a nonprofit whose efforts in the 1990s helped trigger COPPA’s passage. “It’s the equivalent of a cop pulling somebody over for speeding at 110 miles an hour — and they get off with a warning.”

The agreement split the F.T.C. down partisan lines, with the agency’s three Republican commissioners voting to approve the settlement and the two Democratic commissioners dissenting.

In a statement, two of the Republican commissioners, Joseph J. Simons, the agency’s chairman, and Christine S. Wilson, said that the settlement “achieves a significant victory for the millions of parents whose children watch child-directed content on YouTube.” In particular, they said, this was the first time a platform would have to ask its content producers to identify themselves as creators of children’s material. The settlement, they said, “sends a strong message to children’s content providers and to platforms about their obligation to comply with the COPPA rule.”

But while the agreement prohibits YouTube and Google from using or sharing the children’s data they have already obtained, one of the Democratic commissioners, Rohit Chopra, said it did not hold company executives personally accountable for illegal data-mining of children. The other Democratic commissioner, Rebecca Kelly Slaughter, said the agreement did not go far enough by requiring YouTube itself to proactively identify children’s videos on its platform.

“No individual accountability, insufficient remedies to address the company’s financial incentives, and a fine that still allows the company to profit from its lawbreaking,” Mr. Chopra wrote in his dissent. “The terms of the settlement were not even significant enough to make Google issue a warning to its investors.”

COPPA is the strongest federal consumer privacy statute in the United States, enabling the F.T.C. to level fines of up to $42,530 for each violation.

Noah Phillips, a Republican commissioner, argued that Congress should give the F.T.C. more guidance about how to levy fines.

In a blog post on Wednesday about the settlement, YouTube’s chief executive, Susan Wojcicki, said that “nothing is more important than protecting kids and their privacy.” She added, “From its earliest days, YouTube has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased.”

YouTube said that not only had it agreed to stop placing targeted ads on children’s videos, it would also stop gathering personal data about anyone who watches those videos, even if the company believes the viewer is an adult. It said it would also eliminate other features on children’s videos, such as comments and notifications, that require the use of personal data.

YouTube said it planned to promote YouTube Kids, its child-focused app, to shift parents away from letting their children use the main YouTube app for watching videos.CreditAndrew Harrer/Bloomberg

Ms. Wojcicki said YouTube also plans to use artificial intelligence to scan for content that targets young audiences, like videos featuring kids’ toys, games or characters, in addition to relying on creator reports.

Under the settlement, YouTube must implement the changes by early next year.

The privacy case against YouTube began in 2016 after the New York attorney general’s office, which has been active in enforcing the federal children’s privacy law in the state, notified the F.T.C. of apparent children’s privacy violations on the video site.

“Google and YouTube knowingly and illegally monitored, tracked and served targeted ads to young children just to keep advertising dollars rolling in,” Letitia James, New York’s attorney general, said in a statement on Wednesday. “These companies put children at risk and abused their power.”

Google has repeatedly dealt with privacy violations in recent years. The internet search company is subject to a 20-year federal consent order from 2011 for deceptive data-mining involving its now-defunct social network Buzz. That order required Google to put a comprehensive privacy program in place and forbade it from misrepresenting how it handles people’s data.

In 2012, Google also agreed to pay $22.5 million to settle F.T.C. charges that it had violated that consent order by deceiving users of Apple’s Safari browser about its data-mining practices.

The Silicon Valley company is also the subject of a state lawsuit over alleged children’s privacy violations, brought by Hector Balderas, the attorney general of New Mexico. The suit alleges the company failed to ensure that children’s apps in its Google Play store complied with the children’s privacy law. Google has asked that the case be dismissed.

The settlement on Wednesday is likely to have implications beyond YouTube. The changes required under the agreement could limit how much video makers earn on the platform because they will no longer be able to profit from targeted ads on children’s videos.

To offset some of those losses, YouTube said it would funnel $100 million to creators of children’s content over the next three years. It will also heavily promote YouTube Kids, its child-focused app, to shift parents away from using the main YouTube app when allowing their kids to watch videos.

The crackdown on creators of children’s content could make it financially difficult to produce videos for kids, said Maureen Ohlhausen, a former acting chairwoman of the F.T.C.

“There is a lot of free content available for children,” she said. “You want to be sure that you don’t kill the goose that lays the golden egg.”

Real Estate, and Personal Injury Lawyers. Contact us at: https://westlakelegal.com 

Regulators Fine Google $170 Million for Violating Children’s Privacy on YouTube


Google on Wednesday agreed to pay a record $170 million fine and to make changes to protect children’s privacy on YouTube, as regulators said the video site had knowingly and illegally harvested personal information from youngsters and used that data to profit by targeting them with ads.

The measures were part of a settlement with the Federal Trade Commission and New York’s attorney general. They said YouTube had violated a federal children’s privacy law known as the Children’s Online Privacy Protection Act, or COPPA.

Regulators said YouTube, which is owned by Google, had illegally gathered children’s data — such as identification codes that are used to track web browsing over time — without their parents’ consent. The site also marketed itself to advertisers as a top destination for young children, even as it told some advertising companies that no compliance with the children’s privacy law was needed because it did not have viewers younger than 13. YouTube then made millions of dollars by using the information harvested from children to target them with ads, regulators said.
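The "identification codes" at issue are persistent identifiers — typically cookies — that let a server recognize the same browser across visits and accumulate a viewing profile without any account or consent. The sketch below is a hypothetical illustration of that mechanism, not YouTube's actual implementation:

```python
import uuid

# A toy ad server: it assigns each new browser a persistent identifier,
# then accumulates a viewing history keyed on that identifier.
class ToyAdServer:
    def __init__(self):
        self.profiles = {}  # identifier -> list of videos watched

    def handle_request(self, cookies, video):
        # Reuse the identifier the browser sent back, or mint a new one.
        uid = cookies.get("uid") or str(uuid.uuid4())
        self.profiles.setdefault(uid, []).append(video)
        return {"uid": uid}  # the browser stores this for the next visit

server = ToyAdServer()
cookies = {}
cookies = server.handle_request(cookies, "toy unboxing")
cookies = server.handle_request(cookies, "cartoon episode")

# Two separate visits are now linked to a single profile.
print(server.profiles[cookies["uid"]])
```

Nothing in this flow asks who is watching, which is why COPPA treats the identifier itself as personal information when the viewer is a child.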

To settle the charges, YouTube agreed to pay $170 million, of which $136 million will go to the F.T.C. and $34 million to New York. The sum represents the largest civil penalty ever obtained by the F.T.C. in a children’s privacy case, dwarfing the previous record fine of $5.7 million that the agency levied this year against the owner of TikTok, a social video-sharing app.

Under the settlement, which the F.T.C. approved in a 3 to 2 vote, YouTube also agreed to set up a system that asks video channel owners to identify the children’s content they post so that targeted ads are not placed in those videos. YouTube must also obtain consent from parents before collecting or sharing personal details like their child’s name or photos, regulators said.

The move adds to the enforcement actions that American regulators have recently taken against tech companies for violations of people’s privacy, indicating the Trump administration’s willingness to aggressively pursue the powerful corporations. It follows a $5 billion privacy settlement between the F.T.C. and Facebook in July over how the social network collected and handled its users’ data.

But critics warned that the fine and measures against YouTube did not go far enough to protect children’s privacy.

Children’s advocates said the $170 million penalty was a slap on the wrist for one of the world’s richest companies. They added that Google had simply agreed to abide by a children’s privacy law that it was already obligated to comply with. COPPA prohibits operators of online services from collecting personal data, like home addresses, from children under 13 without a parent’s verifiable permission.

“Merely requiring Google to follow the law, that’s a meaningless sanction,” said Jeffrey Chester, executive director of the Center for Digital Democracy, a nonprofit whose efforts in the 1990s helped trigger COPPA’s passage. “It’s the equivalent of a cop pulling somebody over for speeding at 110 miles an hour — and they get off with a warning.”

The agreement split the F.T.C. down partisan lines, with the agency’s three Republican commissioners voting to approve the settlement and the two Democratic commissioners dissenting.

In a statement, two of the Republican commissioners, Joseph J. Simons, the agency’s chairman, and Christine S. Wilson, said that the settlement “achieves a significant victory for the millions of parents whose children watch child-directed content on YouTube.” In particular, they said, this was the first time a platform would have to ask its content producers to identify themselves as creators of children’s material. The settlement, they said, “sends a strong message to children’s content providers and to platforms about their obligation to comply with the COPPA rule.”

But while the agreement prohibits YouTube and Google from using or sharing the children’s data they have already obtained, one of the Democratic commissioners, Rohit Chopra, said it did not hold company executives personally accountable for illegal data-mining of children. The other Democratic commissioner, Rebecca Kelly Slaughter, said the agreement did not go far enough because it stopped short of requiring YouTube itself to proactively identify children’s videos on its platform.

“No individual accountability, insufficient remedies to address the company’s financial incentives, and a fine that still allows the company to profit from its lawbreaking,” Mr. Chopra wrote in his dissent. “The terms of the settlement were not even significant enough to make Google issue a warning to its investors.”

COPPA is the strongest federal consumer privacy statute in the United States, enabling the F.T.C. to levy fines of up to $42,530 for each violation.
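A back-of-envelope calculation puts the critics' "slap on the wrist" argument in perspective: at the maximum per-violation rate, the $170 million settlement corresponds to only about 4,000 violations. The comparison below is ours, using the figures reported in this article:

```python
# Figures from the article; the division is only a rough illustration,
# since the F.T.C. did not disclose a per-violation breakdown.
max_per_violation = 42_530
settlement = 170_000_000

equivalent_violations = settlement / max_per_violation
print(round(equivalent_violations))
```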

Noah Phillips, a Republican commissioner, argued that Congress should give the F.T.C. more guidance about how to levy fines.

In a blog post on Wednesday about the settlement, YouTube’s chief executive, Susan Wojcicki, said that “nothing is more important than protecting kids and their privacy.” She added, “From its earliest days, YouTube has been a site for people over 13, but with a boom in family content and the rise of shared devices, the likelihood of children watching without supervision has increased.”

YouTube said that not only had it agreed to stop placing targeted ads on children’s videos, it would also stop gathering personal data about anyone who watches those videos, even if the company believes the viewer is an adult. It said it would also eliminate other features on children’s videos, such as comments and notifications, that require the use of personal data.

Ms. Wojcicki said YouTube also plans to use artificial intelligence to scan for content that targets young audiences, like videos featuring kids’ toys, games or characters, in addition to relying on creator reports.
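Ms. Wojcicki did not describe how that scanning would work. As a hedged illustration of the general shape of such a system — combining the creator self-reports required by the settlement with an automated signal scan — the keyword heuristic below is a hypothetical stand-in for YouTube's actual machine-learning models:

```python
# Hypothetical stand-in for a "made for kids" detector. The signal set
# and threshold are invented for illustration.
KID_SIGNALS = {"toy", "toys", "nursery", "cartoon", "kids", "preschool"}

def likely_child_directed(creator_flag, title, tags):
    if creator_flag:  # creator self-reports, as the settlement requires
        return True
    words = set(title.lower().split()) | {t.lower() for t in tags}
    return len(words & KID_SIGNALS) >= 2  # arbitrary threshold

print(likely_child_directed(False, "Surprise toy unboxing for kids", ["toys"]))
print(likely_child_directed(False, "Quarterly earnings call", ["finance"]))
```

In practice a production classifier would weigh thumbnails, audience metrics and watch patterns, not just text, but the two-source design (self-report plus automated check) mirrors what the commissioners described.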

Under the settlement, YouTube must implement the changes by early next year.

The privacy case against YouTube began in 2016 after the New York attorney general’s office, which has been active in enforcing the federal children’s privacy law in the state, notified the F.T.C. of apparent children’s privacy violations on the video site.

“Google and YouTube knowingly and illegally monitored, tracked and served targeted ads to young children just to keep advertising dollars rolling in,” Letitia James, New York’s attorney general, said in a statement on Wednesday. “These companies put children at risk and abused their power.”

Google has repeatedly dealt with privacy violations in recent years. The internet search company is subject to a 20-year federal consent order from 2011 for deceptive data-mining involving its now-defunct social network Buzz. That order required Google to put a comprehensive privacy program in place and forbade it from misrepresenting how it handles people’s data.

In 2012, Google also agreed to pay $22.5 million to settle F.T.C. charges that it had violated that consent order by deceiving users of Apple’s Safari browser about its data-mining practices.

The Silicon Valley company is also the subject of a state lawsuit over alleged children’s privacy violations, brought by Hector Balderas, the attorney general of New Mexico. The suit alleges the company failed to ensure that children’s apps in its Google Play store complied with the children’s privacy law. Google has asked that the case be dismissed.

The settlement on Wednesday is likely to have implications beyond YouTube. The changes required under the agreement could limit how much video makers earn on the platform because they will no longer be able to profit from targeted ads on children’s videos.

To offset some of those losses, YouTube said it would funnel $100 million to creators of children’s content over the next three years. It will also heavily promote YouTube Kids, its child-focused app, to shift parents away from using the main YouTube app when allowing their kids to watch videos.

The crackdown on creators of children’s content could make it financially difficult to produce videos for kids, said Maureen Ohlhausen, a former acting chairwoman of the F.T.C.

“There is a lot of free content available for children,” she said. “You want to be sure that you don’t kill the goose that lays the golden egg.”


When Apps Get Your Medical Data, Your Privacy May Go With It

Americans may soon be able to get their medical records through smartphone apps as easily as they order takeout food from Seamless or catch a ride from Lyft.

But prominent medical organizations are warning that patient data-sharing with apps could facilitate invasions of privacy — and they are fighting the change.

The battle stems from landmark medical information-sharing rules that the federal government is now working to complete. The rules will for the first time require health providers to send medical information to third-party apps, like Apple’s Health Records, after a patient has authorized the data exchange. The regulations, proposed this year by the Department of Health and Human Services, are intended to make it easier for people to see their medical records, manage their illnesses and understand their treatment choices.

Yet groups including the American Medical Association and the American College of Obstetricians and Gynecologists warned regulators in May that people who authorized consumer apps to retrieve their medical records could open themselves up to serious data abuses. Federal privacy protections, which limit how health providers and insurers may use and share medical records, no longer apply once patients transfer their data to consumer apps.

The American Medical Association, the American Hospital Association and other groups said they had recently met with health regulators to push for changes to the rules. Without federal restrictions in place, the groups argued, consumer apps would be free to share or sell sensitive details like a patient’s prescription drug history. And some warned that the spread of such personal medical information could lead to higher insurance rates or job discrimination.

“Patients simply may not realize that their genetic, reproductive health, substance abuse disorder, mental health information can be used in ways that could ultimately limit their access to health insurance, life insurance or even be disclosed to their employers,” said Dr. Jesse M. Ehrenfeld, an anesthesiologist who is the chair of the American Medical Association’s board. “Patient privacy can’t be retrieved once it’s lost.”

Enabling people to use third-party consumer apps to easily retrieve their medical data would be a milestone in patient rights.


“Patient privacy can’t be retrieved once it’s lost,” said Dr. Jesse M. Ehrenfeld, the chair of the American Medical Association’s board. Credit: David Kasnic for The New York Times

Dr. Don Rucker, the federal health department’s national coordinator for health information technology, said that allowing people convenient access to their medical data would help them better manage their health, seek second opinions and understand medical costs. He said the idea was to treat medicine as a consumer service, so people can shop for doctors and insurers on their smartphones as easily as they pay bills, check bus schedules or buy plane tickets.

“This is major, major, major,” he said. “The provision of health care will be brought into the app economy and, through that, to a much, much higher degree of patient control.”

The new rules are emerging just as Amazon, Apple, Google and Microsoft are racing to capitalize on health data and capture a bigger slice of the health care market. Opening the floodgates on patient records now, Dr. Rucker said, could help tech giants and small app makers alike develop novel consumer health products.

The regulations are part of a government effort to push health providers to use and share electronic health records. Regulators have long hoped that centralizing medical data online would let doctors get a fuller, more accurate picture of patient health and help people make more informed medical choices, with the promise of better health outcomes.

In reality, digital health records have been cumbersome for many physicians to use and difficult for many patients to retrieve.

Americans have had the right to obtain copies of their medical records since 2000 under the federal Health Insurance Portability and Accountability Act, known as HIPAA. But many health providers still send medical records by fax or require patients to pick up paper or DVD copies of their files.

The new regulations are intended to banish such bureaucratic hurdles.

Dr. Rucker said it was self-serving for physicians and hospitals, which may benefit financially from keeping patients and their data captive, to play up privacy concerns.

“All we’re saying is that patients have a right to choose as opposed to the right being denied them by the forces of paternalism,” he said.

Dr. Don Rucker of the Department of Health and Human Services said it was self-serving for doctors and hospitals to play up privacy concerns. Credit: Department of Health and Human Services

The Department of Health and Human Services proposed two new data-sharing rules this year to carry out provisions in the 21st Century Cures Act, a 2016 law designed to speed medical innovation.

Dr. Rucker’s office developed the one that would allow patients to send their electronic medical information, including treatment pricing, directly to apps from their health providers. It will require vendors of electronic health records to adopt software known as application programming interfaces, or A.P.I.s. Once the software is in place, Dr. Rucker said, patients will be able to use smartphone apps “in an Uber-like fashion” to get their medical data.
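The common standard these A.P.I.s are built around is HL7 FHIR, which exchanges records as JSON "resources." A minimal sketch of what a third-party app receives and parses once a patient authorizes the exchange — the resource below is a simplified, hypothetical sample, not a real record:

```python
import json

# A simplified FHIR-style MedicationRequest resource, as an app might
# receive it from a provider's API endpoint (hypothetical sample data).
resource_json = """
{
  "resourceType": "MedicationRequest",
  "status": "active",
  "medicationCodeableConcept": {"text": "Lisinopril 10 MG Oral Tablet"},
  "subject": {"display": "Jane Example"},
  "authoredOn": "2019-06-01"
}
"""

resource = json.loads(resource_json)

# Once parsed, nothing technical stops the app from storing, sharing or
# selling these fields -- HIPAA's protections end at this hand-off.
summary = "{}: {} ({})".format(
    resource["subject"]["display"],
    resource["medicationCodeableConcept"]["text"],
    resource["status"],
)
print(summary)
```

The ease of that parsing step is precisely the double edge the medical groups describe: the same openness that makes the data useful to patients makes it frictionless for apps to repurpose.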

To foster such data-sharing, a coalition of tech giants — including Amazon, Google and Microsoft — has committed to using common standards to categorize and format health information. Microsoft, for instance, has developed cloud services to help health providers, insurers and health record vendors make data available to patients.

“What that lets an individual consumer do is to connect an app or service of their own choice into their health care records and pull down data about their historical lab tests, about their medical problems or condition, about medication prescription,” said Josh Mandel, chief architect for Microsoft Healthcare.

The other proposed rule, developed by the Centers for Medicare and Medicaid Services, would require Medicare and Medicaid plans, and plans participating in the federal health insurance marketplace, to adopt A.P.I.s so people could use third-party apps to get their insurance claims and benefit information.

The regulations are expected to become final this year. Health providers and health record vendors will have two years to comply with the A.P.I. requirements. Electronic health record vendors that impede data-sharing — a practice called “information blocking” — could be fined up to $1 million per violation. Doctors accused of information blocking could be subject to federal investigation.

Brett Meeks, vice president of policy and legal for the Center for Medical Interoperability, a nonprofit that works to advance data sharing among health care technologies, said it would be better for regulators to help foster a trustworthy data-sharing platform before requiring doctors to entrust patients’ medical records to consumer tech platforms.

“Facebook, Google and others are currently under scrutiny for being poor stewards of consumer data,” he said. “Why would you carte blanche hand them your health data on top of it so they could do whatever they want with it?”

Tech executives are promoting data-sharing in health care. From left, Taha Kass-Hout of Amazon, Aashima Gupta of Google and Peter Lee of Microsoft attended a conference in July for Medicare’s Blue Button system. Credit: Microsoft

Physicians’ organizations and others said the rules failed to give people granular control over their data. They added that the regulations could require them to share patients’ sensitive medical or financial information with apps and insurers against their better judgment.

The current protocols for exchanging patients’ data, for instance, would let people use consumer apps to get different types of information, like their prescription drug history. But it is an all-or-nothing choice. People who authorized an app to collect their medication lists would not be able to stop it from retrieving specific data — like the names of H.I.V. or cancer drugs — they might prefer to keep private.
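The granularity gap can be made concrete. Under an all-or-nothing grant, the app receives the complete list; withholding sensitive entries would require a filtering step before release, which the current exchange protocols do not provide. A hedged sketch of that missing step — the category labels and drug names below are hypothetical:

```python
# Full medication list, as released under an all-or-nothing authorization.
medications = [
    {"name": "Lisinopril", "category": "blood pressure"},
    {"name": "Emtricitabine/Tenofovir", "category": "HIV"},
    {"name": "Metformin", "category": "diabetes"},
]

# What granular consent would need: a filter, applied before the data
# ever reaches the app, that withholds categories the patient marked
# as private. Today the patient has no such switch.
private_categories = {"HIV"}
released = [m for m in medications if m["category"] not in private_categories]

print([m["name"] for m in released])
```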

Dr. Rucker said that current information-sharing standards could not accommodate granular data controls and that privacy concerns needed to be balanced against the benefits of improved patient access to their medical information.

In any case, he said, many people are comfortable liberally sharing personal health details — enabling, say, fitness apps to collect their heart rate data — that are not covered by federal protections. Patients, he said, have the right to make similar choices about which apps to entrust with their medical data.

“A lot of this actually will be enforced by people picking apps they trust from brand names they trust in exactly the same way that people don’t let their banking data and their financial data just go out randomly,” he said.

Apple’s Health Records app, for instance, lets people send a subset of their medical data directly to their iPhones from more than 300 health care centers. Apple said it did not have access to that information because it was encrypted and stored locally on people’s personal devices.

But even proponents of the new regulations are calling for basic privacy and security rules for tech platforms that collect and use people’s medical information.

“The moment our data goes into a consumer health tech solution, we have no rights,” said Andrea Downing, a data rights advocate for people with hereditary cancers. “Without meaningful protections or transparency on how data is shared, it could be used by a recruiter to deny us jobs,” or by an insurer to deny coverage.


Real Estate, and Personal Injury Lawyers. Contact us at: https://westlakelegal.com 

Fake Indian Elizabeth Warren – Has a Lot of Really Terrible Ideas


Translucent Massachusetts Democrat Senator Elizabeth Warren dined out for decades and made millions of dollars – lying about being an American Indian.

Lyingly laying claim to much more melanin than she actually possesses – is but one of many, many awful things to which Warren has dedicated much of her life.

Warren is almost certainly a lifelong Leftist.  Almost no one gets this hardcore radical Left – if they started out on the Right.

As one gets older – and learns more and sees more of the world – one moves Right, not Left.  Ask former avowed Marxist Thomas Sowell.  Ask Winston Churchill:

“Any man under 30 who is not a liberal has no heart, and any man over thirty who is not a conservative has no brains.”

Oh sure, like Hillary Clinton, James Comey, Robert Mueller and many others – Warren and her Leftist media lackeys lyingly claim….

‘Liz Was a Diehard Conservative’

But they tell that lie – only so they can all then follow with lying variations of…

“Elizabeth Warren doesn’t like to talk about it, but for years she was a registered Republican. Why she left the GOP – and what it means for her campaign.”

See, Regular America?  ‘Liz’ used to be like you are now – but the GOP radicals drove her away.

So you should join with her – and her modest plan for $20-trillion-per-year in additional federal government spending.  And we can all hold hands together – as we hurl ourselves off the fiscal cliff into our final Oblivion.

Many, many of the really terrible ideas Warren has championed – have along the way become really terrible laws.  And they are now nigh all – REALLY destroying our nation.

Medicare for Seniors is more than $38 trillion short.  Warren wants Medicare for All.

Social Security is more than $32 trillion short.  Warren wants to further expand the expenditures.

And on.  And on.  And….

Warren is now, of course, running for President of the United States of America (God help us all).

And – believe it or not – the Leftist media has deemed Warren…the Idea Candidate.

‘I Have a Plan for That.’ Elizabeth Warren Is Betting That Americans Are Ready for Her Big Ideas

Elizabeth Warren’s Excellent Ideas

The Secret to Elizabeth Warren’s Surge? Ideas

Elizabeth Warren Has Lots of Plans. Together, They Would Remake the Economy

Elizabeth Warren Has a Plan to Fix Everything

Elizabeth Warren Has the Plans

Elizabeth Warren Unveils Immigration Plan That Would Decriminalize Border Crossings

Elizabeth Warren Outlines Sweeping New Gun Control Plans

Elizabeth Warren: Here’s My Plan to Cancel Student Loan Debt

And on.  And on.  And….

The Leftist idea that Warren is the Idea Candidate has now so thoroughly soaked the zeitgeist….

‘Elizabeth Warren Has a Plan for That’ Memes Are Here:

“Rest assured that if we need someone to inspire new memes, Elizabeth Warren is the woman for that.”

Seriously – I am now exceedingly nauseated.  When I began contemplating this – I was merely a bit queasy.

Never mind that the seventy years’ worth of Warren’s really terrible ideas – have ALL turned out terrible.

Never mind that Warren’s really terrible ideas for tomorrow – are nigh identical to all the Left’s really terrible ideas for today.  And yesterday.  And for more than an entire century.

Never mind ALL of that really terrible history.  The Left’s March to Oblivion – looks and moves only…Forward!!!

For the Left – there is no Yesterday.  There is no Today.  There is only Tomorrow.

Because Yesterday and Today – prove the Left wrong.  The Left isn’t yet wrong Tomorrow.

Yet another really terrible idea from the Idea Candidate?

Universal Broadband? Warren Has a Plan for That

Of course she does.  She even put quill to parchment – to detail her Idea!-Plan! for the Washington Post….

Elizabeth Warren: Here’s How We Get Broadband Internet to Rural America

(What did Leftists use to write before quill and parchment?  Computers.)

Guess what Warren’s Idea!-Plan! is?

Is Warren calling for less government?  Or more government?

As with all things – Warren is calling for the most government.

As with all things – Warren is calling for government, the whole government and nothing but the government.

As really terrible ideas go – hers could be the absolute worst:

“The Federal Communications Commission (FCC) reports that a staggering 21.3 million Americans don’t have access to high-speed broadband – no doubt an underestimate given the notorious loopholes in FCC reporting requirements.”

Stop right there.

What didn’t Warren tell you?:

“The number of Americans lacking access to a terrestrial fixed broadband connection meeting the FCC’s benchmark of at least 25 Mbps/3 Mbps has dropped from 26.1 million Americans at the end of 2016 to 21.3 million Americans at the end of 2017, a decrease of more than 18%.”

The 21.3 million number – was from 2017.  And that number – is down 18% from just one year prior.
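The FCC's two figures bear out the cited decline; a quick check of the arithmetic:

```python
# Verify the FCC's year-over-year decline quoted above (figures in millions
# of Americans lacking a 25 Mbps/3 Mbps terrestrial fixed connection).
lacking_2016 = 26.1
lacking_2017 = 21.3

decline_pct = (lacking_2016 - lacking_2017) / lacking_2016 * 100
print(f"{decline_pct:.1f}% decrease")  # 18.4% -- "more than 18%", per the FCC
```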

And the FCC still bizarrely only counts a hardline (terrestrial fixed) Internet connection as a connection.  Meaning buried cables criss-crossing America – all the way into your abode.

But nearly everyone in America has a 4G (Fourth Generation) wireless smartphone. On which you can seamlessly stream High Definition (HD) video – the most bandwidth intensive thing currently to do on the Internet.  (And we’re but a few years away from exponentially faster 5G.)

And then there is Satellite Internet – for the remaining very few who are bereft of both wired and cellular wireless.

And those 25 Mbps (download) and 3 Mbps (upload) speeds that serve as the FCC minimum?  They are WAY faster than 98+% of Americans currently need.

The Truth About Faster Internet: It’s Not Worth It:

“Typical U.S. households don’t use most of their bandwidth while streaming and get marginal gains from upgrading speeds.”

So even while streaming High Definition (HD) video – the most bandwidth intensive thing currently to do on the Internet – “typical U.S. households don’t use most of their bandwidth.”

In other words – just about everyone in America has more-than-sufficient access to the Internet.
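The bandwidth claim can be made concrete with commonly published streaming bitrates (the figures below are rough, assumed round numbers in line with what major streaming services recommend, not values from the article):

```python
# Rough comparison of the FCC's 25 Mbps download benchmark against assumed
# typical streaming bitrates (round-number approximations of published
# streaming-service recommendations).
FCC_BENCHMARK_MBPS = 25

stream_mbps = {
    "SD video": 3,
    "HD (1080p) video": 5,
}

for name, mbps in stream_mbps.items():
    simultaneous = FCC_BENCHMARK_MBPS // mbps
    print(f"{name}: ~{mbps} Mbps -> room for {simultaneous} simultaneous streams at the benchmark")
```

On these assumed numbers, a single HD stream uses roughly a fifth of the FCC benchmark, which is the point the quoted study is making.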

But Government Everywhere Warren – remains steadfastly impervious to facts:

“I have a plan for a new public option for broadband Internet…that would manage an $85 billion federal grant program. Only electricity and telephone cooperatives, nonprofit organizations, tribes, cities, counties and other state subdivisions would be eligible for grants.”

Get that?  No private sector companies are eligible.  The entities that, you know, actually built the modern Internet – from scratch.  The only entities that have consistently connected anyone – and have already connected basically everyone – are forbidden from participating.

And would Warren’s be the government’s first foray into pretending to be an Internet Service Provider (ISP)?  Heavens no – as Warren admits:

“(The 21.3 million American tally – which is really metaphysically close to zero) is despite more than a decade of efforts by policymakers at the state and federal level to end the ‘digital divide’ and deliver universal access to high-speed Internet.”

Government has tried and tried and tried again to be an ISP.  And government has failed and failed and failed again.

Municipal Broadband Is A Failed Model

The Failures of Government-Owned Internet

Municipal Broadband: A Bad Deal For Taxpayers

Municipal Broadband Fails – Again

There’s even an interactive online map – documenting all the very many places government’s very many attempts at being an ISP have failed.

Broadband Boondoggles: A Map of Failed Taxpayer-Funded Networks:

“For decades, local governments have made promises of faster and cheaper broadband networks. Unfortunately, these municipal networks often don’t deliver or fail, leaving taxpayers to foot the bill. Explore the map to learn about the massive debt, waste and broken promises left behind by these failed government networks.”

Decades of failed attempts by government – to pretend to be an ISP.

Which Warren – readily admits.

But Warren demands she drag us once more into that breach.

Off the fiscal cliff – and into our final Oblivion.

Because for the Left – there is no Yesterday.  There is no Today.  There is only Tomorrow.

And Tomorrow.  And….

The post Fake Indian Elizabeth Warren – Has a Lot of Really Terrible Ideas appeared first on RedState.


YouTube Said to Be Fined Up to $200 Million for Children’s Privacy Violations


The Federal Trade Commission has voted to fine Google $150 million to $200 million to settle accusations that its YouTube subsidiary illegally collected personal information about children, according to three people briefed on the matter.

The case could have significant repercussions for other popular platforms used by young children in the United States.

The settlement would be the largest civil penalty ever obtained by the F.T.C. in a children’s privacy case. It dwarfs the previous record fine of $5.7 million for children’s privacy violations the agency levied this year against the owners of TikTok, a social video-sharing app.

Politico first reported the amount of the settlement, which would have to be approved by the Justice Department.

The news of the F.T.C.’s settlement with Google comes at a moment when regulators and lawmakers in Washington and the European Union are challenging the power — and the aggressive data-mining practices — of tech giants like Facebook and Google.

Last month, the F.T.C. announced a $5 billion fine against Facebook for abusing its users’ personal data. Members of Congress this year have also introduced at least a dozen privacy and transparency bills to bolster protections for Americans’ social media data, genetic data, facial recognition data and other kinds of information.

The F.T.C.’s agreement with YouTube involves a larger fine than in previous children’s privacy settlements, but the case has renewed complaints from consumer advocates that the agency has generally failed to require privacy violators to make substantive change to their data-mining practices.

“Once again, this F.T.C. appears to have let a powerful company off the hook with a nominal fine for violating users’ privacy online,” Senator Edward Markey, Democrat of Massachusetts, said in a statement on Friday.

If regulators fail to take a tougher stand to protect children’s privacy, the problematic practices of social media companies will not change, said Josh Golin, the executive director of the Campaign for a Commercial-Free Childhood, a nonprofit group.

“YouTube has reaped huge profits by ignoring federal children’s privacy law and engaging in illegal data collection and targeted marketing,” Mr. Golin said.

The F.T.C., which is expected to announce the settlement in September, declined to comment. YouTube declined to comment.

Google has faced scrutiny before over how it collects and uses people’s data. It is already subject to an F.T.C. consent order from 2011 for deceptive data-mining involving its now-defunct social network Buzz.

That order required the internet search company to institute a comprehensive privacy program and prohibited it from misrepresenting its data-handling practices.

In 2012, Google agreed to pay $22.5 million to settle F.T.C. charges that it had violated the consent order by deceiving users of Apple’s Safari browser about its data-mining practices.

Whether YouTube’s alleged misuse of children’s data also violates that order is not known.

Children are among the most avid viewers of YouTube. Yet the video site has struggled to police content intended for and featuring them. In February, a video documenting how pedophiles used the comments on videos of children to guide other predators went viral on YouTube.

The revelations were especially damaging because YouTube had pledged in 2017 to do more to protect families after reports of pedophiles cruising the site for videos of minors and leaving lewd or sexual comments. YouTube addressed the latest issue by disabling comments on most videos featuring children under 13 years old after brands threatened to suspend advertising on the site.

In June, The New York Times published an investigation into how YouTube’s recommendation system automatically promoted videos of scantily clad children to people who had watched other videos of young children in compromising positions or sexually themed content.

The accusations against YouTube emerged last year after a coalition of more than 20 consumer advocacy groups filed a complaint to the F.T.C. saying that the video platform was violating a federal privacy law by collecting and exploiting the personal information of children.

That law, called the Children’s Online Privacy Protection Act, prohibits online services aimed at children under 13 from collecting personal details — like a child’s birth date, contact information, photos or precise location — without a parent’s permission. The law also prohibits children’s apps from using persistent identifiers to target youngsters with ads based on their behavior.

YouTube has long maintained that its platform is not intended for children under 13, even as some of its most popular channels — with names like Cocomelon Nursery Rhymes and ChuChu TV — are clearly aimed at youngsters, offering colorful animated videos that have been viewed more than a billion times.

People who set up accounts on YouTube must affirm that they are at least 13 and must agree to Google’s terms of service, enabling the company to track users’ video-viewing activities, internet browsing habits and other details. YouTube has said that it deletes accounts when it determines that a user is under 13.

YouTube maintains a separate app for younger users, called YouTube Kids, which says that it does not allow behavior-based ads.

But the children’s groups said in their complaint that YouTube was aware that millions of children were watching its main channel and collected the children’s personal details anyway, without parental consent.

The F.T.C. settlement with YouTube could have major implications for other popular, general interest apps — like animated video games — in the United States that have millions of users under the age of 13.

Google is also the subject of a state lawsuit over alleged children’s privacy violations brought by Hector Balderas, the attorney general of New Mexico. Google has asked that the case be dismissed.


The Baroness Fighting to Protect Children Online

MENLO PARK, Calif. — Beeban Kidron, Silicon Valley’s latest antagonist, sat on the patio of a boutique hotel near Facebook’s headquarters recently, camouflaged in the local uniform of jeans and sneakers.

A member of the House of Lords, she had just flown in from London to attend an international meeting hosted by the social network. And now, in a hotel thronging with tech executives, she was recounting her plan to overhaul how their companies treat children.

The problem, as Baroness Kidron sees it, is that apps like YouTube and Instagram use data-fueled enticements — such as tallying “likes” and automatically personalizing videos that play one after another — to get youngsters hooked on their services. Children, she says, are no match for the turbocharged influence tactics, and often stay glued to the services even if doing so makes them unhappy.

“The idea that it’s O.K. to nudge kids into endless behaviors, just because you are pushing their evolutionary buttons — it’s not a fair fight,” Lady Kidron told me, as she sat a few tables away from a Facebook policy executive. “It’s little Timmy in his bedroom versus Mark Zuckerberg in his Valley.”

Her goal is to counter that power dynamic, so that children’s rights and protections in the digital world more closely resemble those in real life. And she’s not just talking about it — she is changing the law.

Lady Kidron, who was born a commoner, helped lead a campaign to remake how Facebook, Google and other tech companies treat children online in Britain. Two years ago, she persuaded her peers in Parliament to push through sweeping new rules meant to stop online services from exploiting children’s personal data to manipulate their behavior. The changes are meeting fierce resistance, with tech companies and trade groups lobbying to weaken the new rules before they are codified this year.

Lady Kidron’s efforts are part of a drive by lawmakers and regulators on both sides of the Atlantic to rein in the immense power, and data abuses, of Big Tech. Amid that intensifying scrutiny, some officials have zeroed in on the companies’ treatment of children.

In March, Senators Edward J. Markey, a Massachusetts Democrat, and Josh Hawley, a Missouri Republican, introduced a bill to strengthen children’s online privacy. In July, the Federal Trade Commission said it was considering updating a children’s privacy rule to keep pace with advances in technology. Also last month, Senator Hawley introduced a “social media addiction bill” that would require online services to turn off “nudging” techniques like autoplaying videos — an effort that seems inspired by the new British rules.

“Parents are concerned. Parliamentarians are concerned,” said Elizabeth Denham, Britain’s information commissioner, an independent government regulator who wrote the new children’s rules after Lady Kidron persuaded Parliament to support the protections. “The kids aren’t all right.”

It took a long time for Silicon Valley to see Lady Kidron coming. That may be because she does not fit the mold — privacy advocate, remorseful former tech executive — of a typical industry challenger.

Lady Kidron, 58, says she developed her antennae for power, and how people wield it, as a child in London. While healing from surgery to repair a cleft palate when she was 10 years old, she was not allowed to run or speak. To communicate, she carried around a pencil, a notebook and a horn — “like Harpo Marx,” she said.


Lady Kidron was a filmmaker, directing movies like “Bridget Jones: The Edge of Reason,” with Renee Zellweger and Colin Firth. Credit: Universal Studios, via Everett Collection

“One thing about being silent for a period of a year is that you very much notice who is included, who is excluded, who is dominant and where the power lies,” Lady Kidron recounted.

She channeled that interest in power dynamics into a career as a filmmaker. She alternated directing popular feature films — “Bridget Jones: The Edge of Reason” — with making documentaries on social justice issues like child prostitution in India.

In 2012, Lady Kidron decided to make a documentary about children and the internet, called “InRealLife.” Credit: Dogwoof Pictures

In 2012, she was one of two commoners nominated to serve in the House of Lords, an appointment that came with the lifetime aristocratic title of baroness. That year, Lady Kidron decided to make a documentary about children and the internet, “InRealLife.”

She spent months embedded with tweens and teenagers, sitting next to them as they lived their digital lives. They exchanged messages with friends, fell in love with strangers, were bullied, watched pornography and played video games. The documentary features the children describing how they felt simultaneously interested, hooked, influenced and repulsed by online services.

“I was worried about some of the feelings they were having,” Lady Kidron said.

By 2014, Lady Kidron had started a foundation, 5Rights, to promote children’s digital rights. But she felt frustrated with major tech companies. They scrambled to handle one problem after another for children on their sites, she said, but appeared unwilling to make major changes to try to avert such problems in the first place. She concluded that the only way to give children more privacy, more freedom and more control over their online experiences was through regulation.

In 2017, she proposed the idea of children’s online protections in Parliament as an amendment to a national data protection bill. The bill generally called for protections for children’s data, but Lady Kidron pushed to detail those protections.

In real life, “we’ve already decided that children have rights,” Lady Kidron said. “For me, it’s all a part of just making tech normal.”

The proposed rules are called the Age Appropriate Design Code. Among other things, they would require that online services turn on by default the highest privacy settings for all minors, preventing things like automated location tracking. They would also require the services to automatically turn off techniques that could push children to stay online longer or provide the companies with more personal data than necessary.
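In engineering terms, the code's "high privacy by default" requirement is a defaulting rule keyed on the user's age. A minimal sketch of how a service might apply it (the setting names below are hypothetical illustrations, not drawn from the code itself):

```python
# Sketch of "high privacy by default" under an Age Appropriate Design Code
# style rule: users under 18 get the most protective defaults, with
# engagement "nudges" such as autoplay switched off. All setting names
# are hypothetical examples.

ADULT_DEFAULTS = {
    "location_tracking": True,
    "personalized_recommendations": True,
    "autoplay_next_video": True,
    "profile_visibility": "public",
}

MINOR_DEFAULTS = {
    "location_tracking": False,           # no automated location tracking
    "personalized_recommendations": False,
    "autoplay_next_video": False,         # nudge techniques off by default
    "profile_visibility": "private",
}

def default_settings(age: int) -> dict:
    """The code defines a child as anyone under 18."""
    return dict(MINOR_DEFAULTS if age < 18 else ADULT_DEFAULTS)

print(default_settings(15)["location_tracking"])  # False
```

The notable design choice, and the one the industry objects to, is the threshold: the rule applies to everyone under 18, not just under 13 as in the American law.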

The changes present a challenge to platforms like YouTube and Instagram, which have said their services are not intended for people under 13 — even though tens of millions of children use them. The British code would apply to all online services, including social networks and messaging apps, that are likely to have users who are under 18.

“It is provocative to technology companies,” Ms. Denham, the information commissioner, acknowledged. “It is going to change systematically, and on a systems level, how services are delivered for kids.”

The tech industry is pushing back, saying the code is too broad and so vague that it will be difficult to comply with.

In particular, companies have objected to the code’s definition of a child as a person under 18 — instead of, say, 13 or 16. They have also objected to applying the protections not just to children’s sites but to all sites in Britain that are likely to have minors as users. A children’s online privacy law in the United States, by contrast, applies only to nursery rhyme apps and other services directed at children under 13.

Industry and civil liberties experts also warn that the code could reduce user privacy, the opposite of its intended outcome. To comply with the code, these critics say, apps might need to collect more information about users to determine their age.

In real life, “we’ve already decided that children have rights,” Lady Kidron said. “For me, it’s all a part of just making tech normal.” Credit: Eleonora Agostini for The New York Times

Facebook, which owns Instagram, said it had held discussions with the Information Commissioner’s Office about the children’s code. Last month, the company hosted a design challenge to create ways to show teens online how their data is being collected and used.

“Facebook works with parents, experts, and policymakers to ensure that privacy and safety measures are in place to protect teens and limit the ways young people can be targeted by advertisers,” said Jay Nancarrow, a Facebook spokesman. “We are committed to continuing this work.”

Google, which owns YouTube and offers an app to help parents control their children’s internet use, declined to comment.

In public comments filed with Ms. Denham’s office last year, the companies said parents should be the ultimate arbiters of their children’s internet use. Facebook also warned that strict new standards might inadvertently push young people to use untrustworthy services that ignored the rules.

“The code is trying to break new ground and being very ambitious,” said Vinous Ali, the head of policy for TechUK, a trade organization in London whose members include Amazon, Apple, Google and Facebook. “Yet it is not taking a very targeted, narrow approach to where the harms are.”

Lady Kidron and Ms. Denham said some companies seemed not to have processed the idea that under the new rules, their businesses would need to put the best interests of children above their corporate bottom lines.

“What I haven’t heard from the companies is acknowledgment about the kind of practices that are in place right now to keep kids’ eyes online,” Ms. Denham said, practices “that push for a quantification of friends, that enable more data-mining, that treat our kids like commodities.”

Back at the boutique hotel, it was time for Lady Kidron to head to her room. She was meeting the next day with Nick Clegg, Facebook’s global policy head and a former deputy prime minister of Britain, to discuss the company’s objections to the children’s code. And she wanted to prepare.

“The main thing they are asking me is: ‘Are you really expecting companies to give up profits by restricting the data they collect on children?’” she said, referring to various online services she had met with this year. “Of course I am! Of course, everyone should.”

As she arose from a rattan couch and crossed the largely empty hotel patio, a Facebook policy lawyer spied her and came over to welcome her to Silicon Valley.

“Am I really welcome?” Lady Kidron asked pointedly in a comment that was half charm, half stiletto.

“You are always welcome,” the lawyer said.

The conversation quickly finished, and Lady Kidron, still mulling the comment, moved off toward the lobby.

“The more trouble you are,” she concluded, “the more they say that to you.”
