Panicking About Your Kids’ Phones? New Research Says Don’t

SAN FRANCISCO — It has become common wisdom that too much time spent on smartphones and social media is responsible for a recent spike in anxiety, depression and other mental health problems, especially among teenagers.

But a growing number of academic researchers have produced studies that suggest the common wisdom is wrong.

The latest research, published on Friday by two psychology professors, combs through about 40 studies that have examined the link between social media use and both depression and anxiety among adolescents. That link, according to the professors, is small and inconsistent.

“There doesn’t seem to be an evidence base that would explain the level of panic and consternation around these issues,” said Candice L. Odgers, a professor at the University of California, Irvine, and the lead author of the paper, which was published in the Journal of Child Psychology and Psychiatry.

The debate over the harm we — and especially our children — are doing to ourselves by staring into phones is generally predicated on the assumption that the machines we carry in our pockets pose a significant risk to our mental health.

Worries about smartphones have led Congress to pass legislation to examine the impact of heavy smartphone use and pushed investors to pressure big tech companies to change the way they approach young customers.

The World Health Organization said last year that infants under a year old should not be exposed to electronic screens and that children between the ages of 2 and 4 should not have more than an hour of “sedentary screen time” each day.

Even in Silicon Valley, technology executives have made a point of keeping the devices and the software they develop away from their own children.

But some researchers question whether those fears are justified. They are not arguing that intensive use of phones does not matter. Children who are on their phones too much can miss out on other valuable activities, like exercise. And research has shown that excessive phone use can exacerbate the problems of certain vulnerable groups, like children with mental health issues.

They are, however, challenging the widespread belief that screens are responsible for broad societal problems like the rising rates of anxiety and sleep deprivation among teenagers. In most cases, they say, the phone is just a mirror that reveals the problems a child would have even without the phone.

The researchers worry that the focus on keeping children away from screens is making it hard to have more productive conversations about topics like how to make phones more useful for low-income people, who tend to use them more, or how to protect the privacy of teenagers who share their lives online.

“Many of the people who are terrifying kids about screens, they have hit a vein of attention from society and they are going to ride that. But that is super bad for society,” said Andrew Przybylski, the director of research at the Oxford Internet Institute, who has published several studies on the topic.

The new article by Ms. Odgers and Michaeline R. Jensen of the University of North Carolina at Greensboro comes just a few weeks after the publication of an analysis by Amy Orben, a researcher at the University of Cambridge, and shortly before the planned publication of similar work from Jeff Hancock, the founder of the Stanford Social Media Lab. Both reached similar conclusions.

“The current dominant discourse around phones and well-being is a lot of hype and a lot of fear,” Mr. Hancock said. “But if you compare the effects of your phone to eating properly or sleeping or smoking, it’s not even close.”

Mr. Hancock’s analysis of about 226 studies on the well-being of phone users concluded that “when you look at all these different kinds of well-being, the net effect size is essentially zero.”

The debate about screen time and mental health goes back to the early days of the iPhone. In 2011, the American Academy of Pediatrics published a widely cited paper that warned doctors about “Facebook depression.”

But by 2016, as more research came out, the academy revised that statement, deleting any mention of Facebook depression and emphasizing the conflicting evidence and the potential positive benefits of using social media.

Megan Moreno, one of the lead authors of the revised statement, said the original statement had been a problem “because it created panic without a strong basis of evidence.”

Dr. Moreno, a professor of pediatrics at the University of Wisconsin, said that in her own medical practice, she tends to be struck by the number of children with mental health problems who are helped by social media because of the resources and connections it provides.

Concern about the connection between smartphones and mental health has also been fed by high-profile works like a 2017 article in The Atlantic — and a related book — by the psychologist Jean Twenge, who argued that a recent rise in suicide and depression among teenagers was linked to the arrival of smartphones.

In her article, “Have Smartphones Ruined a Generation?,” Ms. Twenge attributed the sudden rise in reports of anxiety, depression and suicide from teens after 2012 to the spread of smartphones and social media.

Ms. Twenge’s critics argue that her work found a correlation between the appearance of smartphones and a real rise in reports of mental health issues, but that it did not establish that phones were the cause.

It could, researchers argue, just as easily be that the rise in depression led teenagers to excessive phone use at a time when there were many other potential explanations for depression and anxiety. What’s more, anxiety and suicide rates appear not to have risen in large parts of Europe, where phones have also become more prevalent.

“Why else might American kids be anxious other than telephones?” Mr. Hancock said. “How about climate change? How about income inequality? How about more student debt? There are so many big giant structural issues that have a huge impact on us but are invisible and that we aren’t looking at.”

Ms. Twenge remains committed to her position, and she points to several more recent studies by other academics who have found a specific link between social media use and poor mental health. One paper found that when a group of college students gave up social media for three weeks, their sense of loneliness and depression declined.

Ms. Odgers, Mr. Hancock and Mr. Przybylski said they had not taken any funding from the tech industry, and all have been outspoken critics of the industry on issues other than mental health, such as privacy and the companies’ lack of transparency.

Ms. Odgers added that she was not surprised that people had a hard time accepting her findings. Her own mother questioned her research after one of her grandsons stopped talking to her during the long drives she used to enjoy. But children tuning out their elders when they become teenagers is hardly a new trend, she said.

She also reminded her mother that their conversation was taking place during a video chat with Ms. Odgers’s son — the kind of intergenerational connection that was impossible before smartphones.

Ms. Odgers acknowledged that she was reluctant to give her two children more time on their iPads. But she recently tried playing the video game Fortnite with her son and found it an unexpectedly positive experience.

“It’s hard work because it’s not the environment we were raised in,” she said. “It can be a little scary at times. I have those moments, too.”

Science Under Attack: How Trump Is Sidelining Researchers and Their Work

WASHINGTON — In just three years, the Trump administration has diminished the role of science in federal policymaking while halting or disrupting research projects nationwide, marking a transformation of the federal government whose effects, experts say, could reverberate for years.

Political appointees have shut down government studies, reduced the influence of scientists over regulatory decisions and in some cases pressured researchers not to speak publicly. The administration has particularly challenged scientific findings related to the environment and public health opposed by industries such as oil drilling and coal mining. It has also impeded research around human-caused climate change, which President Trump has dismissed despite a global scientific consensus.

But the erosion of science reaches well beyond the environment and climate: In San Francisco, a study of the effects of chemicals on pregnant women has stalled after federal funding abruptly ended. In Washington, D.C., a scientific committee that provided expertise in defending against invasive insects has been disbanded. In Kansas City, Mo., the hasty relocation of two agricultural agencies that fund crop science and study the economics of farming has led to an exodus of employees and delayed hundreds of millions of dollars in research.

“The disregard for expertise in the federal government is worse than it’s ever been,” said Michael Gerrard, director of the Sabin Center for Climate Change Law at Columbia University, which has tracked more than 200 reports of Trump administration efforts to restrict or misuse science since 2017. “It’s pervasive.”

Hundreds of scientists, many of whom say they are dismayed at seeing their work undone, are departing.

Among them is Matthew Davis, a biologist whose research on the health risks of mercury to children underpinned the first rules cutting mercury emissions from coal power plants. But last year, with a new baby of his own, he was asked to help support a rollback of those same rules. “I am now part of defending this darker, dirtier future,” he said.

This year, after a decade at the Environmental Protection Agency, Mr. Davis left.

“Regulations come and go, but the thinning out of scientific capacity in the government will take a long time to get back,” said Joel Clement, a former top climate-policy expert at the Interior Department who quit in 2017 after being reassigned to a job collecting oil and gas royalties. He is now at the Union of Concerned Scientists, an advocacy group.

Mr. Trump has consistently said that government regulations have stifled businesses and thwarted some of the administration’s core goals, such as increasing fossil-fuel production. Many of the starkest confrontations with federal scientists have involved issues like environmental oversight and energy extraction — areas where industry groups have argued that regulators have gone too far in the past.

“Businesses are finally being freed of Washington’s overreach, and the American economy is flourishing as a result,” a White House statement said last year. Asked about the role of science in policymaking, officials from the White House declined to comment on the record.

The administration’s efforts to cut certain research projects also reflect a longstanding conservative position that some scientific work can be performed cost-effectively by the private sector, and taxpayers shouldn’t be asked to foot the bill. “Eliminating wasteful spending, some of which has nothing to do with studying the science at all, is smart management, not an attack on science,” two analysts at the conservative Heritage Foundation wrote in 2017 of the administration’s proposals to eliminate various climate change and clean energy programs.

The president’s desk. Credit: Erin Schaff/The New York Times

Industry groups have expressed support for some of the moves, including a contentious E.P.A. proposal to put new constraints on the use of scientific studies in the name of transparency. The American Chemistry Council, a chemical trade group, praised the proposal by saying, “The goal of providing more transparency in government and using the best available science in the regulatory process should be ideals we all embrace.”

In some cases, the administration’s efforts to roll back government science have been thwarted. Each year, Mr. Trump has proposed sweeping budget cuts at a variety of federal agencies like the National Institutes of Health, the Department of Energy and the National Science Foundation. But Congress has the final say over budget levels and lawmakers from both sides of the aisle have rejected the cuts.

For instance, in supporting funding for the Department of Energy’s national laboratories, Senator Lamar Alexander, Republican of Tennessee, recently said it “allows us to take advantage of the United States’ secret weapon, our extraordinary capacity for basic research.”

As a result, many science programs continue to thrive, including space exploration at NASA and medical research at the National Institutes of Health, where the budget has increased more than 12 percent since Mr. Trump took office and where researchers continue to make advances in areas like molecular biology and genetics.

Nevertheless, in other areas, the administration has managed to chip away at federal science.

At the E.P.A., for instance, staffing has fallen to its lowest levels in at least a decade. More than two-thirds of respondents to a survey of federal scientists across 16 agencies said that hiring freezes and departures made it harder to conduct scientific work. And in June, the White House ordered agencies to cut by one-third the number of federal advisory boards that provide technical advice.

The White House said it aimed to eliminate committees that were no longer necessary. Panels cut so far had focused on issues including invasive species and electric grid innovation.

At a time when the United States is pulling back from world leadership in other areas like human rights or diplomatic accords, experts warn that the retreat from science is no less significant. Many of the achievements of the past century that helped make the United States an envied global power, including gains in life expectancy, lowered air pollution and increased farm productivity, are the result of the kinds of government research now under pressure.

“When we decapitate the government’s ability to use science in a professional way, that increases the risk that we start making bad decisions, that we start missing new public health risks,” said Wendy E. Wagner, a professor of law at the University of Texas at Austin who studies the use of science by policymakers.

Skirmishes over the use of science in making policy occur in all administrations: Industries routinely push back against health studies that could justify stricter pollution rules, for example. And scientists often gripe about inadequate budgets for their work. But many experts say that current efforts to challenge research findings go well beyond what has been done previously.

In an article published in the journal Science last year, Ms. Wagner wrote that some of the Trump administration’s moves, like a policy to restrict certain academics from the E.P.A.’s Science Advisory Board or the proposal to limit the types of research that can be considered by environmental regulators, “mark a sharp departure with the past.” Rather than isolated battles between political officials and career experts, she said, these moves are an attempt to legally constrain how federal agencies use science in the first place.

Some clashes with scientists have sparked public backlash, as when Trump officials pressured the nation’s weather forecasting agency to support the president’s erroneous assertion this year that Hurricane Dorian threatened Alabama.

But others have garnered little notice despite their significance.

This year, for instance, the National Park Service’s principal climate change scientist, Patrick Gonzalez, received a “cease and desist” letter from supervisors after testifying to Congress about the risks that global warming posed to national parks.

“I saw it as attempted intimidation,” said Dr. Gonzalez, who added that he was speaking in his capacity as an associate adjunct professor at the University of California, Berkeley, a position he also holds. “It’s interference with science and hinders our work.”

Even though Congress hasn’t gone along with Mr. Trump’s proposals for budget cuts at scientific agencies, the administration has still found ways to advance its goals.

One strategy: eliminate individual research projects not explicitly protected by Congress.

For example, just months after Mr. Trump’s election, the Commerce Department disbanded a 15-person scientific committee that had explored how to make National Climate Assessments, the congressionally mandated studies of the risks of climate change, more useful to local officials. It also closed its Office of the Chief Economist, which for decades had conducted wide-ranging research on topics like the economic effects of natural disasters. Similarly, the Interior Department has withdrawn funding for its Landscape Conservation Cooperatives, 22 regional research centers that tackled issues like habitat loss and wildfire management. While California and Alaska used state money to keep their centers open, 16 of 22 remain in limbo.

A Commerce Department official said the climate committee it discontinued had not produced a report, and highlighted other efforts to promote science, such as a major upgrade of the nation’s weather models.

An Interior Department official said the agency’s decisions “are solely based on the facts and grounded in the law,” and that the agency would continue to pursue other partnerships to advance conservation science.

Research that potentially posed an obstacle to Mr. Trump’s promise to expand fossil-fuel production was halted, too. In 2017, Interior officials canceled a $1 million study by the National Academies of Sciences, Engineering, and Medicine on the health risks of “mountaintop removal” coal mining in places like West Virginia.

Mountaintop removal is as dramatic as it sounds — a hillside is blasted with explosives and the remains are excavated — but the health consequences still aren’t fully understood. The process can kick up coal dust and send heavy metals into waterways, and a number of studies have suggested links to health problems like kidney disease and birth defects.

“The industry was pushing back on these studies,” said Joseph Pizarchik, an Obama-era mining regulator who commissioned the now-defunct study. “We didn’t know what the answer would be,” he said, “but we needed to know: Was the government permitting coal mining that was poisoning people, or not?”

While coal mining has declined in recent years, satellite data shows that at least 60 square miles in Appalachia have been newly mined since 2016. “The study is still as important today as it was five years ago,” Mr. Pizarchik said.

The cuts can add up to significant research setbacks.

For years, the E.P.A. and the National Institute of Environmental Health Sciences had jointly funded 13 children’s health centers nationwide that studied, among other things, the effects of pollution on children’s development. This year, the E.P.A. ended its funding.

At the University of California, San Francisco, one such center has been studying how industrial chemicals such as flame retardants in furniture could affect placenta and fetal development. Key aspects of the research have now stopped.

“The longer we go without funding, the harder it is to start that research back up,” said Tracey Woodruff, who directs the center.

In a statement, the E.P.A. said it anticipated future opportunities to fund children’s health research.

At the Department of Agriculture, Secretary of Agriculture Sonny Perdue announced in June he would relocate two key research agencies to Kansas City from Washington: The National Institute of Food and Agriculture, a scientific agency that funds university research on topics like how to breed cattle and corn that can better tolerate drought conditions, and the Economic Research Service, whose economists produce studies for policymakers on farming trends, trade and rural America.

Nearly 600 employees had less than four months to decide whether to uproot and move. Most couldn’t or wouldn’t, and two-thirds of those facing transfer left their jobs.

In August, Mick Mulvaney, the acting White House chief of staff, appeared to celebrate the departures.

“It’s nearly impossible to fire a federal worker,” he said in videotaped remarks at a Republican Party gala in South Carolina. “But by simply saying to people, ‘You know what, we’re going to take you outside the bubble, outside the Beltway, outside this liberal haven of Washington, D.C., and move you out in the real part of the country,’ and they quit. What a wonderful way to sort of streamline government and do what we haven’t been able to do for a long time.”

The White House declined to comment on Mr. Mulvaney’s speech.

The exodus has led to upheaval.

At the Economic Research Service, dozens of planned studies into topics like dairy industry consolidation and pesticide use have been delayed or disrupted. “You can name any topic in agriculture and we’ve lost an expert,” said Laura Dodson, an economist and acting vice president of the union representing agency employees.

The National Institute of Food and Agriculture manages $1.7 billion in grants that fund research on issues like food safety or techniques that help farmers improve their productivity. The staff loss, employees say, has held up hundreds of millions of dollars in funding, such as planned research into pests and diseases afflicting grapes, sweet potatoes and fruit trees.

Former employees say they remain skeptical that the agencies could be repaired quickly. “It will take 5 to 10 years to rebuild,” said Sonny Ramaswamy, who until 2018 directed the National Institute of Food and Agriculture.

Mr. Perdue said the moves would save money and put the offices closer to farmers. “We did not undertake these relocations lightly,” he said in a statement. A Department of Agriculture official added that both agencies were pushing to continue their work, but acknowledged that some grants could be delayed by months.

In addition to shutting down some programs, the administration has, in notable instances, challenged established scientific research. Early on, as it started rolling back regulations on industry, administration officials began questioning the research findings underpinning those regulations.

In 2017, aides to Scott Pruitt, the E.P.A. administrator at the time, told the agency’s economists to redo an analysis of wetlands protections that had been used to help defend an Obama-era clean-water rule. Instead of concluding that the protections would provide more than $500 million in economic benefits, they were told to list the benefits as unquantifiable, according to Elizabeth Southerland, who retired in 2017 from a 30-year career at the E.P.A., finishing as a senior official in its water office.

“It’s not unusual for a new administration to come in and change policy direction,” Dr. Southerland said. “But typically you would look for new studies and carefully redo the analysis. Instead they were sending a message that all the economists, scientists, career staff in the agency were irrelevant.”

Internal documents show that political officials at the E.P.A. have overruled the agency’s career experts on several occasions, including in a move to regulate asbestos more lightly, in a decision not to ban the pesticide chlorpyrifos and in a determination that parts of Wisconsin were in compliance with smog standards. The Interior Department sidelined its own legal and environmental analyses in advancing a proposal to raise the Shasta Dam in California.

Michael Abboud, an E.P.A. spokesman, disputed Dr. Southerland’s account in an emailed response, saying “It is not true.”

The E.P.A. is now finalizing a narrower version of the Obama-era water rule, which in its earlier form had prompted outrage from thousands of farmers and ranchers across the country who saw it as overly restrictive.

“E.P.A. under President Trump has worked to put forward the strongest regulations to protect human health and the environment,” Mr. Abboud said, noting that several Obama administration rules had been held up in court and needed revision. “As required by law E.P.A. has always and will continue to use the best available science when developing rules, regardless of the claims of a few federal employees.”

Past administrations have, to varying degrees, disregarded scientific findings that conflicted with their priorities. In 2011, President Obama’s top health official overruled experts at the Food and Drug Administration who had concluded that over-the-counter emergency contraceptives were safe for minors.

But in the Trump administration, the scope is wider. Many top government positions, including at the E.P.A. and the Interior Department, are now occupied by former lobbyists connected to the industries that those agencies oversee.

Scientists and health experts have singled out two moves they find particularly concerning. Since 2017, the E.P.A. has moved to restrict certain academics from sitting on its Science Advisory Board, which provides scrutiny of agency science, and has instead increased the number of appointees connected with industry.

And, in a potentially far-reaching move, the E.P.A. has proposed a rule to limit regulators from using scientific research unless the underlying raw data can be made public. Industry groups like the Chamber of Commerce have argued that some agency rules are based on science that can’t be fully scrutinized by outsiders. But dozens of scientific organizations have warned that the proposal in its current form could prevent the E.P.A. from considering a vast array of research on issues like the dangers of air pollution if, for instance, the underlying studies are based on confidential health data.

“The problem is that rather than allowing agency scientists to use their judgment and weigh the best available evidence, this could put political constraints on how science enters the decision-making process in the first place,” said Ms. Wagner, the University of Texas law professor.

The E.P.A. says its proposed rule is intended to make the science that underpins potentially costly regulations more transparent. “By requiring transparency,” said Mr. Abboud, the agency spokesman, “scientists will be required to publish hypothesis and experimental data for other scientists to review and discuss, requiring the science to withstand skepticism and peer review.”

“In the past, when we had an administration that was not very pro-environment, we could still just lay low and do our work,” said Betsy Smith, a climate scientist with more than 20 years of experience at the E.P.A. who in 2017 saw her long-running study of the effects of climate change on major ports get canceled.

“Now we feel like the E.P.A. is being run by the fossil fuel industry,” she said. “It feels like a wholesale attack.”

After her project was killed, Dr. Smith resigned.

The loss of experienced scientists can erase years or decades of “institutional memory,” said Robert J. Kavlock, a toxicologist who retired in October 2017 after working at the E.P.A. for 40 years, most recently as acting assistant administrator for the agency’s Office of Research and Development.

His former office, which researches topics like air pollution and chemical testing, has lost 250 scientists and technical staff members since Mr. Trump came to office, while hiring 124. Those who have remained in the office of roughly 1,500 people continue to do their work, Dr. Kavlock said, but are not going out of their way to promote findings on lightning-rod topics like climate change.

“You can see that they’re trying not to ruffle any feathers,” Dr. Kavlock said.

The same can’t be said of Patrick Gonzalez, the National Park Service’s principal climate change scientist, whose work involves helping national parks protect against damage from rising temperatures.

In February, Dr. Gonzalez testified before Congress about the risks of global warming, saying he was speaking in his capacity as an associate adjunct professor at the University of California, Berkeley. He is also using his Berkeley affiliation to participate as a co-author on a coming report by the Intergovernmental Panel on Climate Change, a United Nations body that synthesizes climate science for world leaders.

But in March, shortly after testifying, Dr. Gonzalez’s supervisor at the National Park Service sent the cease-and-desist letter warning him that his Berkeley affiliation was not separate from his government work and that his actions were violating agency policy. Dr. Gonzalez said he viewed the letter as an attempt to deter him from speaking out.

The Interior Department, asked to comment, said the letter did not indicate an intent to sanction Dr. Gonzalez and that he was free to speak as a private citizen.

Dr. Gonzalez, with the support of Berkeley, continues to warn about the dangers of climate change and work with the United Nations climate change panel using his vacation time, and he spoke again to Congress in June. “I’d like to provide a positive example for other scientists,” he said.

Still, he noted that not everyone may be in a position to be similarly outspoken. “How many others are not speaking up?” Dr. Gonzalez said.

A Few Cities Have Cornered Innovation Jobs. Can That Be Changed?

There are about a dozen industries at the frontier of innovation. They include software and pharmaceuticals, semiconductors and data processing. Most of their workers have science or tech degrees. They invest heavily in research and development. While they account for only 3 percent of all jobs, they generate 6 percent of the country’s economic output.

And if you don’t live in one of a handful of urban areas along the coasts, you are unlikely to get a job in one of them.

Boston, Seattle, San Diego, San Francisco and Silicon Valley captured nine out of 10 jobs created in these industries from 2005 to 2017, according to a report released on Monday. By 2017, these five metropolitan regions had accumulated almost a quarter of these jobs, up from under 18 percent a dozen years earlier. On the other end, about half of America’s 382 metro areas — including big cities like Los Angeles, Chicago and Philadelphia — lost such jobs.

And the concentration of prosperity does not appear to be slowing down.

America’s deepening inequality has become a cause for alarm. The picture of a country cloven between a small set of prosperous urban “haves” and a large collection of “have-nots” has come sharply into focus as an opioid epidemic has overtaken vast swaths of the country. It gained the attention of the political class in 2016, when voters across the industrial heartland embraced Donald J. Trump’s populist message.

The search for ideas that could improve the economic conditions of deprived areas, long derided by economists as a fool’s errand — why spend money on improving the lot of places rather than people, many experts argued — is now at the top of policymakers’ lists.

The report is by Mark Muro and Jacob Whiton from the Brookings Institution’s Metropolitan Policy Program, and Rob Atkinson of the Information Technology and Innovation Foundation, a research group that gets funding from tech and telecom companies. They identified 13 “innovation industries” — which include aerospace, communications equipment production and chemical manufacturing — where at least 45 percent of the work force has degrees in science, tech, engineering or math, and where investments in research and development amount to at least $20,000 per worker.

The authors argue that a broad federal push is needed to spread the business of invention beyond the 20 cities that dominate it. “Hoping for economic convergence to reassert itself would not be a good strategy,” Mr. Muro said.

[Chart: Metro areas that have gained the most innovation jobs (change in thousands): Raleigh, N.C.; San Francisco; Madison, Wis.; Silicon Valley; Salt Lake City; Charleston, S.C. Metro areas that have lost the most: Wichita, Kan.; Oxnard, Calif.; Los Angeles; Albuquerque; Colorado Springs; Durham, N.C.; Philadelphia; Washington.]

Data are the change in jobs from 2005 to 2017 in 13 industries, including scientific research and development services, aerospace product and parts manufacturing, and software publishers.

Source: Brookings Institution analysis of Emsi data. By Karl Russell

Expanding the knowledge economy across all of America might indeed be a fool’s errand. As Mr. Atkinson noted, Erie, Pa., and Flint, Mich., might never attract the Googles or Apples of the world. But midsize cities like St. Louis, Pittsburgh and Columbus, Ohio, could feasibly transform into hubs of technological entrepreneurship.

The report’s authors propose identifying eight to 10 cities, far from the coasts, that already have a research university and a critical mass of people with advanced degrees. The government would then spend about $700 million a year for research and development in each of them for a decade. Lawmakers could give high-tech businesses that set up shop in these cities tax and regulatory breaks. Mr. Atkinson suggested a limited break from antitrust law to allow businesses to coordinate location decisions.

Battling the forces driving concentration will be tough. Unlike the manufacturing industries of the 20th century, which competed largely on cost, tech businesses compete on having the next big thing. Cheap labor, which can help attract manufacturers to depressed areas, doesn’t work as an incentive. Instead, innovation industries cluster in cities where there are lots of highly educated workers, sophisticated suppliers and research institutions.

Unlike businesses in, say, retail or health care, innovation businesses experience a sharp rise in the productivity of their workers if they are in places with lots of other such workers, according to research by Enrico Moretti, who is an economist at the University of California, Berkeley, and others.

Other industries and workers are also better off if they have the good fortune of being near leading-edge companies. The report points out that the average output per worker in the 20 cities with the most employment in the 13 high-tech industries is $109,443, one-third more than in the other 363 metros across the country.

The cycle is hard to break: Young educated workers will flock to cities with large knowledge industries because that’s where they will find the best opportunities to earn and learn and have fun. And start-ups will go there to seek them out.

Even skyrocketing housing costs have not stopped the concentration of talent in a few superstar cities. High-tech companies that seek cheaper places to set up beyond their hubs often go to Bangalore, India, rather than Birmingham, Ala.

“They keep the core team in Silicon Valley or Seattle but put the other stuff in Shenzhen or Vancouver or Bangalore,” Mr. Atkinson said. Shenzhen, China, may not be much cheaper than Indianapolis, he added, but Shenzhen is already a tech hub in its own right.

[Chart: Annual output per worker in innovation industries, basic manufacturing and health care. Innovation jobs in the most concentrated metro areas are the most productive. For metro areas in the bottom 75% of employment in each sector.]

Source: Brookings Institution and Information Technology and Innovation Foundation analysis of Emsi data. By Karl Russell

It is uncertain whether government support could pull innovation out of the clutches of superstar cities. The proposal by Brookings and the Information Technology and Innovation Foundation will not come cheap: They estimate a $100 billion price tag over 10 years.

The payoff, however, would extend beyond the new technology hubs. Jon Gruber, an economist at the Massachusetts Institute of Technology, noted that in a world where Cincinnati becomes a hub of entrepreneurship, “we don’t need to fix opioid country” in Appalachia. That’s because many of those areas are within commuting distance of Cincinnati.

What’s more, not trying also entails risks. In his book “Jump-Starting America,” Mr. Gruber and his co-writer, M.I.T.’s Simon Johnson, argue for a sustained national effort to seed new technology clusters widely. Without federal government support, Mr. Gruber said, the United States is unlikely to produce many new high-tech hubs.

The risk, he said, is not only that much of America will be left to founder as superstar cities become more congested and less affordable. Political support for publicly funded research will crumble unless more of the country enjoys the benefits from innovation.

Real Estate, and Personal Injury Lawyers. Contact us at: https://westlakelegal.com 

A Few Cities Have Cornered Innovation Jobs. Can That Be Changed?

There are about a dozen industries at the frontier of innovation. They include software and pharmaceuticals, semiconductors and data processing. Most of their workers have science or tech degrees. They invest heavily in research and development. While they account for only 3 percent of all jobs, they account for 6 percent of the country’s economic output.

And if you don’t live in one of a handful of urban areas along the coasts, you are unlikely to get a job in one of them.

Boston, Seattle, San Diego, San Francisco and Silicon Valley captured nine out of 10 jobs created in these industries from 2005 to 2017, according to a report released on Monday. By 2017, these five metropolitan regions had accumulated almost a quarter of these jobs, up from under 18 percent a dozen years earlier. On the other end, about half of America’s 382 metro areas — including big cities like Los Angeles, Chicago and Philadelphia — lost such jobs.

And the concentration of prosperity does not appear to be slowing down.

America’s deepening inequality has become a cause for alarm. The picture of a country cloven between a small set of prosperous urban “haves” and a large collection of “have-nots” has come sharply into focus as an opioid epidemic has overtaken vast swaths of the country. It gained the attention of the political class in 2016, when voters across the industrial heartland embraced Donald J. Trump’s populist message.

The search for ideas that could improve the economic conditions of deprived areas, long derided by economists as a fool’s errand — why spend money on improving the lot of places rather than people, many experts argued — is now at the top of policymakers’ lists.

The report is by Mark Muro and Jacob Whiton from the Brookings Institution’s Metropolitan Policy Program, and Rob Atkinson of the Information Technology and Innovation Foundation, a research group that gets funding from tech and telecom companies. They identified 13 “innovation industries” — which include aerospace, communications equipment production and chemical manufacturing — where at least 45 percent of the work force has degrees in science, tech, engineering or math, and where investments in research and development amount to at least $20,000 per worker.

The authors argue that a broad federal push is needed to spread the business of invention beyond the 20 cities that dominate it. “Hoping for economic convergence to reassert itself would not be a good strategy,” Mr. Muro said.

Westlake Legal Group innovation_maps-335 A Few Cities Have Cornered Innovation Jobs. Can That Be Changed? Urban Areas Research Productivity Metropolitan Policy Program, Brookings Institution Labor and Jobs Innovation Information Technology and Innovation Foundation Income Inequality

Metro areas that have

gained innovation jobs . . .

Gained the most

In thousands

Raleigh, N.C.

San Francisco

Madison, Wis.

Silicon Valley

Salt Lake City

Charleston, S.C.

. . . and those that

have lost them.

Lost the most

In thousands

Wichita, Kan.

Oxnard, Calif.

Los Angeles

Albuquerque

Colorado Springs

Durham, N.C.

Philadelphia

Washington

Westlake Legal Group innovation_maps-600 A Few Cities Have Cornered Innovation Jobs. Can That Be Changed? Urban Areas Research Productivity Metropolitan Policy Program, Brookings Institution Labor and Jobs Innovation Information Technology and Innovation Foundation Income Inequality

Metro areas that have gained

innovation jobs . . .

Gained the most

In thousands

Lost the most

In thousands

Wichita, Kan.

Oxnard, Calif.

San Francisco

Raleigh, N.C.

Los Angeles

Albuquerque

Madison, Wis.

Colorado Springs

Silicon Valley

Durham, N.C.

Philadelphia

Salt Lake City

Washington

Charleston, S.C.

. . . and those that

have lost them.

Westlake Legal Group innovation_maps-1050 A Few Cities Have Cornered Innovation Jobs. Can That Be Changed? Urban Areas Research Productivity Metropolitan Policy Program, Brookings Institution Labor and Jobs Innovation Information Technology and Innovation Foundation Income Inequality

Metro areas that have gained

innovation jobs . . .

. . . and those that

have lost them.

In thousands

Gained the most

Lost the most

In thousands

Wichita, Kan.

Oxnard, Calif.

San Francisco

Raleigh, N.C.

Los Angeles

Albuquerque

Madison, Wis.

Colorado Springs

Silicon Valley

Durham, N.C.

Philadelphia

Salt Lake City

Washington

Charleston, S.C.

Data are the change in jobs from 2005 to 2017 in 13 industries including scientific research and development services, Aerospace product and parts manufacturing and Software publishers.

Source: Brookings Institution analysis of Emsi data

By Karl Russell

Expanding the knowledge economy across all of America might indeed be a fool’s errand. As Mr. Atkinson noted, Erie, Pa., and Flint, Mich., might never attract the Googles or Apples of the world. But midsize cities like St. Louis, Pittsburgh and Columbus, Ohio, could feasibly transform into hubs of technological entrepreneurship.

The report’s authors propose identifying eight to 10 cities, far from the coasts, that already have a research university and a critical mass of people with advanced degrees. The government would then spend about $700 million a year for research and development in each of them for a decade. Lawmakers could give high-tech businesses that set up shop in these cities tax and regulatory breaks. Mr. Atkinson suggested a limited break from antitrust law to allow businesses to coordinate location decisions.

Battling the forces driving concentration will be tough. Unlike the manufacturing industries of the 20th century, which competed largely on cost, the tech businesses compete on having the next best thing. Cheap labor, which can help attract manufacturers to depressed areas, doesn’t work as an incentive. Instead, innovation industries cluster in cities where there are lots of highly educated workers, sophisticated suppliers and research institutions.

Unlike businesses in, say, retail or health care, innovation businesses experience a sharp rise in the productivity of their workers if they are in places with lots of other such workers, according to research by Enrico Moretti, who is an economist at the University of California, Berkeley, and others.

Other industries and workers are also better off if they have the good fortune of being near leading-edge companies. The report points out that the average output per worker in the 20 cities with the most employment in the 13 high-tech industries is $109,443, one-third more than in the other 363 metros across the country.

The cycle is hard to break: Young educated workers will flock to cities with large knowledge industries because that’s where they will find the best opportunities to earn and learn and have fun. And start-ups will go there to seek them out.

Even skyrocketing housing costs have not stopped the concentration of talent in a few superstar cities. High-tech companies that seek cheaper places to set up beyond their hubs often go to Bangalore, India, rather than Birmingham, Ala.

“They keep the core team in Silicon Valley or Seattle but put the other stuff in Shenzhen or Vancouver or Bangalore,” Mr. Atkinson said. Shenzhen, China, may not be much cheaper than Indianapolis, he added, but Shenzhen is already a tech hub in its own right.

Westlake Legal Group innovation-productivity-335 A Few Cities Have Cornered Innovation Jobs. Can That Be Changed? Urban Areas Research Productivity Metropolitan Policy Program, Brookings Institution Labor and Jobs Innovation Information Technology and Innovation Foundation Income Inequality

Annual output

per worker

Health care

Basic manufacturing

Innovation industries

Innovation jobs in the most

concentrated metro areas

are the most productive.

For metro areas in the bottom 75%

of employment in each sector.

Westlake Legal Group innovation-productivity-600 A Few Cities Have Cornered Innovation Jobs. Can That Be Changed? Urban Areas Research Productivity Metropolitan Policy Program, Brookings Institution Labor and Jobs Innovation Information Technology and Innovation Foundation Income Inequality

Annual output per worker

Innovation

industries

Basic

manufacturing

Health care

Innovation jobs in the most

concentrated metro areas

are the most productive.

For metro areas in the bottom 75% of employment in each sector.

Source: Brookings Institution and Information Technology and Innovation Foundation analysis of Emsi data

By Karl Russell

It is uncertain whether government support could pull innovation out of the clutches of superstar cities. The proposal by Brookings and the Information Technology Foundation will not come cheap: They estimate a $100 billion price tag over 10 years.

The payoff, however, would extend beyond the new technology hubs. Jonathan Gruber, an economist at the Massachusetts Institute of Technology, noted that in a world where Cincinnati becomes a hub of entrepreneurship, “we don’t need to fix opioid country” in Appalachia. That’s because many of those areas are within commuting distance of Cincinnati.

What’s more, not trying also entails risks. In their book “Jump-Starting America,” Mr. Gruber and his co-writer, M.I.T.’s Simon Johnson, argue for a sustained national effort to seed new technology clusters widely. Without federal government support, Mr. Gruber said, the United States is unlikely to produce many new high-tech hubs.

The risk, he said, is not only that much of America will be left to founder as superstar cities become more congested and less affordable. Political support for publicly funded research will crumble unless more of the country enjoys the benefits from innovation.


China’s Genetic Research on Ethnic Minorities Sets Off Science Backlash


BEIJING — China’s efforts to study the DNA of the country’s ethnic minorities have incited a growing backlash from the global scientific community, as a number of scientists warn that Beijing could use its growing knowledge to spy on and oppress its people.

Two publishers of prestigious scientific journals, Springer Nature and Wiley, said this week that they would re-evaluate papers they previously published on Tibetans, Uighurs and other minority groups. The papers were written or co-written by scientists backed by the Chinese government, and the two publishers want to make sure the authors got consent from the people they studied.

Springer Nature, which publishes the influential journal Nature, also said that it was toughening its guidelines to make sure scientists get consent, particularly if those people are members of a vulnerable group.

The statements followed articles by The New York Times that describe how the Chinese authorities are trying to harness bleeding-edge technology and science to track minority groups. The issue is particularly stark in Xinjiang, a region on China’s western frontier, where the authorities have locked up more than one million Uighurs and other members of predominantly Muslim minority groups in internment camps in the name of quelling terrorism.

Chinese companies are selling facial recognition systems that they claim can tell when a person is a Uighur. Chinese officials have also collected blood samples from Uighurs and others to build new tools for tracking members of minority groups.

In some cases, Western scientists and companies have provided help for those efforts, often unwittingly. That has included publishing papers in high-profile journals, which grants prestige and respectability to the authors that can lead to access to funding, data or new techniques.

When Western journals publish such papers by Chinese scientists affiliated with the country’s surveillance agencies, it amounts to selling a knife to a friend “knowing that your friend would use the knife to kill his wife,” said Yves Moreau, a professor of engineering at the University of Leuven in Belgium.

On Tuesday, Nature published an essay by Dr. Moreau calling for all publications to retract papers written by scientists backed by Chinese security agencies that focus on the DNA of minority ethnic groups.

“If you produce a piece of knowledge and know someone is going to take that and harm someone with it, that’s a huge problem,” said Dr. Moreau.

The scientific reaction is part of a broader backlash to China’s actions in Xinjiang. Lawmakers in the United States and elsewhere are taking an increasingly critical stance toward Beijing’s policies. On Tuesday, the House voted almost unanimously for a bill condemning China’s treatment of Uighurs and others.

Dr. Moreau and other scientists worry that China’s research into the genes and personal data of ethnic minorities is being used to build databases, facial recognition systems and other methods for monitoring and subjugating China’s ethnic minorities.

They also worry that research into DNA in particular violates widely followed scientific rules involving consent. In Xinjiang, where so many people have been confined to camps and a heavy police presence dominates daily life, they say, it is impossible to verify that Uighurs have given their blood samples willingly.

China’s Ministry of Public Security and the Ministry of Science and Technology did not respond to requests for comment.

In September, Dr. Moreau and three other scientists asked Wiley to retract a paper on the faces of minorities it published last year, citing the potential for abuse and the tone of discussion about race.

“The point of this work was to improve surveillance capabilities on all Tibetans and Uighurs,” said Jack Poulson, a former Google research scientist and founder of the advocacy group Tech Inquiry, and another member of the group that reached out to Wiley. Even if the authors obtained consent from those they studied, he added, that would be “insufficient to satisfy their ethical obligations.”

Wiley initially declined, but said this week that it would reconsider. Last week, Curtin University, an Australian institution that employs one of the authors of the study, said it had found “significant concerns” with the paper.

Science journals are now setting different standards.

In February, a journal called Frontiers in Genetics rejected a paper that was based on findings from the DNA of more than 600 Uighurs. Some of its editors cited China’s treatment of Uighurs, people familiar with the deliberations said.

The paper was instead accepted by Human Genetics, a journal owned by Springer Nature, and published in April.

Philip Campbell, the editor of Springer Nature, said this week that Human Genetics would add an editorial note to the study saying that concerns had been raised regarding informed consent. Springer Nature will also bolster guidelines across its journals and is contacting their editors to “request that they exercise an extra level of scrutiny and care in handling papers where there is a potential that consent was not informed or freely given,” it said in an email.

The paper published in Human Genetics was a subject of a Times article on Tuesday that raised questions about whether the Uighurs had contributed their blood samples willingly. Those Uighurs lived in Tumxuk, a city in Xinjiang that is ringed by paramilitary forces and is home to two internment camps.

Scientists like Dr. Moreau are not calling for a blanket ban on Chinese research into the genetics of China’s ethnic minorities. He drew a distinction between fields like medicine, where research is aimed at treating people, and forensics, which involves matters of criminal justice.

But Dr. Moreau found that recent genetic forensics research from China focused overwhelmingly on ethnic minorities and was increasingly driven by Chinese security agencies.

Of 529 studies in the field published between 2011 and 2018, he found, about half had a co-author from the police, military or judiciary. He also found that Tibetans were over 40 times more frequently studied than China’s ethnic Han majority, and that the Uighur population was 30 times more intensely studied than the Han.

Over the past eight years, he wrote, three leading forensic genetics journals — one published by Springer Nature and two by Elsevier — have published 40 articles co-authored by members of the Chinese police that describe the DNA profiling of Tibetans and Muslim minorities.

Tom Reller, a spokesman for Elsevier, said the company was in the process of producing more comprehensive guidelines for the publication of genetic data. But he added that the journals “are unable to control the potential misuse of population data articles” by third parties.

The principle of informed consent has been a scientific mainstay since forced experiments on inmates in Nazi death camps came to light. To verify that those standards are followed, academic journals and other outlets depend heavily on ethical review committees at individual institutions. Bioethicists say that arrangement can break down when an authoritarian state is involved. Already, Chinese scientists are under scrutiny for publishing papers on organ transplantation without saying whether there was consent.

In its own review of more than 100 papers published by Chinese scientists in international journals on biometrics and computer science, The Times found a number of examples of what appeared to be inadequate consent from study participants or no consent at all. Those concerns have also dogged facial recognition research in the United States.

One 2016 facial recognition paper published by Springer International was based on 137,395 photos of Uighurs, which the scientists said were from identification photos and surveillance cameras at railway stations and shopping malls. The paper does not mention consent.

A 2018 study, focused on using traffic cameras to identify drivers by beard, uses surveillance footage without mentioning whether it got permission from the subjects. The paper was also published by Springer.

A second 2018 Springer article that analyzes Uighur cranial shape to determine gender was based on “whole skull CT scans” of 267 people, mostly Uighurs. While the study said the subjects were “voluntary,” it made no mention of consent forms.

The latter two papers were part of a book published by Springer as part of a biometrics conference in Xinjiang’s capital, Urumqi, in August 2018, months after rights groups had documented the crackdown in the region. In a statement, Steven Inchcoombe, chief publishing officer of Springer Nature, said that conference organizers were responsible for editorial oversight of the conference proceedings. But he added that the company would in the future strengthen its requirements of conference organizers and ensure that their proceedings also comply with Springer Nature’s editorial policies.

Two papers assembled databases of facial expressions for different minority groups, including Tibetans, Uighurs and Hui, another Muslim minority. The papers were released in journals run by Wiley and the Institute of Electrical and Electronics Engineers. Wiley said the paper “raises a number of questions that are currently being reviewed.” It added that the paper was published on behalf of a partner, the International Union of Psychological Science, and referred further questions to it. The engineers institute did not respond to an emailed request for comment.

The science world has been responding to the pressure. Thermo Fisher, a maker of equipment for studying genetics, said in February that it would suspend sales to Xinjiang, though it will continue to sell to other parts of China. Still, Dr. Moreau said, the issue initially gained little traction in academia.

“If we don’t react in the community, we are going to get more and more into trouble,” he said. “The community has to take a major step and say: ‘This is not us.’”



Latin Dictionary’s Journey: A to Zythum in 125 Years (and Counting)

MUNICH — When German researchers began working on a new Latin dictionary in the 1890s, they thought they might finish in 15 or 20 years.

In the 125 years since, the Thesaurus Linguae Latinae (T.L.L.) has seen the fall of an empire, two world wars and the division and reunification of Germany. Its researchers are now up to the letter R.

This is not for lack of effort. Most dictionaries focus on the most prominent or recent meaning of a word; this one aims to show every single way anyone ever used it, from the earliest Latin inscriptions in the sixth century B.C. to around A.D. 600. The dictionary’s founder, Eduard Wölfflin, who died in 1908, described entries in the T.L.L. not as definitions, but “biographies” of words.

A slip of paper from the T.L.L. archive for the word “regina,” which means “queen.” Credit: Gordon Welters for The New York Times

The T.L.L.’s offices and library are in the Bavarian Academy of Sciences, in a former palace. Credit: Gordon Welters for The New York Times

The first entry, for the letter A, was published in 1900. The T.L.L. is expected to reach its final word — “zythum,” an Egyptian beer — by 2050. A scholarly project of painstaking exactness and glacial speed, it has so far produced 18 volumes of huge pages with tiny text, the collective work of nearly 400 scholars, many of them long since dead. The letters Q and N were set aside, because they begin too many difficult words, so researchers will have to go back and work on those, too.

“Its scale is prodigious,” David Butterfield, a senior lecturer in Classics at Cambridge, said in an email, adding that when the first publication appeared in 1900, “it did not go unnoticed that the word closing that installment was ‘absurdus.’”

It’s a monumental effort aimed at a small group of classicists, for whom the ability to understand every way a word was used is important not only for reading literature, but also for understanding language and history.

Once the language of a vast physical empire, then a vast spiritual one, Latin is now spoken mostly within the walls of the Vatican and among a handful of “living Latin” enthusiasts, who promote speaking the language as an educational tool.

In the United States, Latin education dropped off sharply through the 1970s, but it has held steady in recent decades. About 210,000 public school students are learning the language (slightly fewer than are learning Chinese, and a tiny fraction of the 7.3 million in Spanish classes), according to Sherri Halloran, a spokeswoman for the American Council on the Teaching of Foreign Languages.

But because it was Europe’s primary literary language for over a thousand years, Latin is “the key for a considerable piece of human history,” said Michael Hillen, the project’s director.

Around half of English words are also derived directly or indirectly from Latin. (We also, of course, use intact phrases such as “quid pro quo,” a theme of the recent impeachment hearings. It means “this for that.”)

The poet and classicist A.E. Housman, who died in 1936, once referred to “the chaingangs working at the dictionary in the ergastulum [dungeon] at Munich,” but the T.L.L. is now housed in two sunny floors of a former palace. Sixteen full-time staffers and some visiting lexicographers work in offices and a library, which contains editions of all the surviving Latin texts from before A.D. 600, and about 10 million yellowing paper slips, arranged in stacks of boxes reaching to the ceiling.

There are about 10 million paper slips in stacks of boxes. Credit: Gordon Welters for The New York Times

Boxes of slips for the word “ego,” meaning “I.” Credit: Gordon Welters for The New York Times

These slips form the heart of the project. There is a piece of paper for every surviving piece of writing from the classical period. The words, arranged chronologically, are given in context: they come from poems, prose, recipes, medical texts, receipts, dirty jokes, graffiti, inscriptions, and anything else that survived the vicissitudes of the last two thousand years.

Most Latin students read from the same rarefied canon without much contact with how the language was used in everyday life. But the T.L.L. insists that the anonymous person who insulted an enemy with graffiti on a wall in Pompeii is as valuable a witness to the meaning of a Latin word as a poet or emperor. (“Phileros spado,” reads one barb, or “Phileros is a eunuch.”)

Reading these texts creates “respect, empathy, and understanding — which doesn’t mean condoning the things they did,” said Kathleen Coleman, a member of the board overseeing the dictionary’s progress. “We don’t have to think gladiators were a great idea. But to try and understand what they were getting at. What they thought. Why they thought what they were doing was right. And you get that kind of depth from language.”

About 90,000 of the slips represent uses of the word “et.” In order to grasp every possible shade of the word’s meaning, the researcher who wrote the entry read each of the passages in which it occurred and sorted them into categories of usage, like a scientist cataloging specimens. It took years.

“Et,” an apparently simple word that usually means “and,” can also mean a range of slightly different things, including “even,” “and also,” “and then,” “and moreover,” et cetera.

An excerpt from the Latin thesaurus that has been in the works for over 100 years.


“You have to know about all kinds of texts: Roman law and medicine and poetry and prose and history,” said Marijke Ottink, an editor at the T.L.L. She has been working on the word “res,” which means “thing,” on and off for a decade.

Visiting researchers often come to look into particular words — the guest book outside the library contains, in faint letters, the name Joseph Ratzinger, better known as Pope Benedict XVI. He came to consult the boxes for “populus,” which means “masses” or “people.”

Some assignments are more coveted than others: Josine Schrickx, an editor, said she would like to write the entry for the word “thesaurus.” In Latin, it means “treasury.”

On the horizon, however, is “non,” which means “no.” With nearly 50,000 slips, it is a source of anxiety at the T.L.L. “I don’t know how to deal with a word on that scale,” said Adam Gitner, a researcher. “And that does frighten me.”

The complicated conjunction and adverb “ut” also looms. Mr. Butterfield said that it is “the sort of infernal business that would make Sisyphus and Ixion smile kindly on the job satisfaction they got from their daily toil,” referring to figures from classical mythology forced to labor in pain for eternity.

The dictionary is not only difficult to produce, but also to use. Written in Latin, entries are made up of “dense print in numbered columns, subdivided by capital Roman numerals, then capital letters, then Arabic numerals, then perhaps more Arabic numerals, then lowercase letters, then — if you’re still on the trail — Greek letters,” said Mr. Butterfield. But the difficulty in using the T.L.L. was “an essential hurdle of scholarship,” he added; the dictionary was “a tool that is without parallel in understanding how Latin was deployed.”
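To picture that nesting, here is a minimal, hypothetical sketch in Python of how one path through such a hierarchy of subdivisions could be represented; the division labels and the citation below are invented for illustration and are not taken from any real T.L.L. entry.

# Hypothetical sketch of the nested subdivisions described above:
# Roman numeral -> capital letter -> Arabic numeral -> lowercase letter -> Greek letter.
# All labels and the citation are invented; this is not real T.L.L. content.
entry = {
    "lemma": "et",
    "sections": {
        "I": {                     # broadest sense division
            "A": {                 # subdivision
                "1": {             # further subdivision
                    "a": {         # lowercase subdivision
                        "alpha": [ # finest (Greek-letter) subdivision
                            "Cic. Att. 1, 1, 1 (invented citation)",
                        ],
                    },
                },
            },
        },
    },
}

def citations_at(entry, *path):
    """Walk the nested divisions and return the citations stored at that path."""
    node = entry["sections"]
    for key in path:
        node = node[key]
    return node

print(citations_at(entry, "I", "A", "1", "a", "alpha"))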

It is also expensive: An online version costs $379 for individual yearly access. Many universities have subscriptions, but to improve access, this year the T.L.L. posted PDFs of entries through the letter P for free online.

The T.L.L. has survived a chaotic century: A significant portion of its staff died in combat at the beginning of the First World War. During the Second, the slips were moved to a monastery to escape the bombing of Munich. In response to postwar nuclear fears, they were copied onto microfilm, which was placed in a bunker below the Black Forest, where it remains, alongside other culturally significant work.

Originally a German state undertaking, the project became international after the Second World War. Its 1.25 million euro annual budget still mostly comes from German taxpayers, but international partners, including the United States, send researchers to Munich.

Judging by the accuracy of previous estimates, the 2050 end date may be optimistic. Many of the researchers at the dictionary say they don’t expect to live to see it finished.

But Christian Flow, a visiting assistant professor at Mississippi State University who wrote a dissertation about the T.L.L., said that its duration is also its strength. “The irony is that the timelessness of the thesaurus,” he said, lay “in its inability to finish itself.”


Internet Companies Prepare to Fight the ‘Deepfake’ Future


SAN FRANCISCO — Several months ago, Google hired dozens of actors to sit at a table, stand in a hallway and walk down a street while talking into a video camera.

Then the company’s researchers, using a new kind of artificial intelligence software, swapped the faces of the actors. People who had been walking were suddenly at a table. The actors who had been in a hallway looked like they were on a street. Men’s faces were put on women’s bodies. Women’s faces were put on men’s bodies. In time, the researchers had created hundreds of so-called deepfake videos.

By creating these digitally manipulated videos, Google’s scientists believe they are learning how to spot deepfakes, which researchers and lawmakers worry could become a new, insidious method for spreading disinformation in the lead-up to the 2020 presidential election.

For internet companies like Google, finding the tools to spot deepfakes has gained urgency. If someone wants to spread a fake video far and wide, Google’s YouTube or Facebook’s social media platforms would be great places to do it.

Imagine a fake Senator Elizabeth Warren, virtually indistinguishable from the real thing, getting into a fistfight in a doctored video. Or a fake President Trump doing the same. The technology capable of that trickery is edging closer to reality.

“Even with current technology, it is hard for some people to tell what is real and what is not,” said Subbarao Kambhampati, a professor of computer science at Arizona State University who is among the academics partnering with Facebook on its deepfake research.

Video transcript:

[HIGH-PITCHED NOTE] “You know when a person is working on something and it’s good, but it’s not perfect? And he just tries for perfection? That’s me in a nutshell.” [MUFFLED SPEECH] “I just want to recreate humans.” “O.K. But why?” “I don’t know. I mean, it’s that feeling you get when you achieve something big. (ECHOING) “It’s really interesting. You hear these words coming out in your voice, but you never said them.” “Let’s try again.” “We’ve been working to make a convincing total deepfake. The bar we’re setting is very high.” “So you can see, it’s not perfect.” “We’re trying to make it so the population would totally believe this video.” “Give this guy an Oscar.” [LAUGHTER] “There are definitely people doing it at Google, Samsung, Microsoft. The technology moves super fast.” “Somebody else will beat you to it if you wait a year.” “Someone else will. And that will hurt.” “O.K., let’s try again.” “Just make it natural, right?” “It’s hard to be natural.” “It’s hard to be natural when you’re faking it.” “O.K.” “What are you up to these days?” “Today, I’m announcing my candidacy for the presidency of the United States.” [LAUGHTER] “And I would like to announce my very special running mate, the most famous chimp in the world, Bubbles Jackson. Are we good?” “People do not realize how close this is to happen. Fingers crossed. It’s going to happen, like, in the upcoming months. Yeah, the world is going to change.” “I squint my eyes.” “Yeah.” “Look, this is how we got into the mess we’re in today with technology, right? A bunch of idealistic young people thinking, we’re going to change the world.” “It’s weird to see his face on it.” [LAUGHTER] “I wondered what you would say to these engineers.” “I would say, I hope you’re putting as much thought into how we deal with the consequences of this as you are into the realization of it. This is a Pandora’s box you’re opening.” [THEME MUSIC]

Deepfakes — a term that generally describes videos doctored with cutting-edge artificial intelligence — have already challenged our assumptions about what is real and what is not.

In recent months, video evidence was at the center of prominent incidents in Brazil, Gabon in Central Africa and China. Each was colored by the same question: Is the video real? The Gabonese president, for example, was out of the country for medical care and his government released a so-called proof-of-life video. Opponents claimed it had been faked. Experts call that confusion “the liar’s dividend.”

“You can already see a material effect that deepfakes have had,” said Nick Dufour, one of the Google engineers overseeing the company’s deepfake research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”

For decades, computer software has allowed people to manipulate photos and videos or create fake images from scratch. But it has been a slow, painstaking process usually reserved for experts trained in the vagaries of software like Adobe Photoshop or After Effects.

Now, artificial intelligence technologies are streamlining the process, reducing the cost, time and skill needed to doctor digital images. These A.I. systems learn on their own how to build fake images by analyzing thousands of real images. That means they can handle a portion of the workload that once fell to trained technicians. And that means people can create far more fake stuff than they used to.

The technology used to create deepfakes is still fairly new, and the results are often easy to notice. But the technology is evolving. While the tools used to detect these bogus videos are also evolving, some researchers worry that they won’t be able to keep pace.

Google recently said that any academic or corporate researcher could download its collection of synthetic videos and use them to build tools for identifying deepfakes. The video collection is essentially a syllabus of digital trickery for computers. By analyzing all of those images, A.I. systems learn how to watch for fakes. Facebook recently did something similar, using actors to build fake videos and then releasing them to outside researchers.

Engineers at a Canadian company called Dessa, which specializes in artificial intelligence, recently tested a deepfake detector that was built using Google’s synthetic videos. It could identify the Google videos with almost perfect accuracy. But when they tested their detector on deepfake videos plucked from across the internet, it failed more than 40 percent of the time.

They eventually fixed the problem, but only after rebuilding their detector with help from videos found “in the wild,” not created with paid actors — proving that a detector is only as good as the data used to train it.
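The dynamic Dessa ran into can be seen in miniature. The sketch below, written in Python, is purely illustrative (it is not Google’s or Dessa’s system, and the “features” are random stand-ins for whatever a real detector would extract from video frames), but it shows how a classifier trained on one flavor of fakes can stumble on another:

# Minimal, hypothetical sketch of the train/evaluate gap described above.
# Features are random stand-ins for whatever a real system would extract from video frames.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_dataset(n, fake_shift):
    """Simulate frame features; 'fake' examples are shifted by fake_shift."""
    real = rng.normal(0.0, 1.0, size=(n, 16))
    fake = rng.normal(fake_shift, 1.0, size=(n, 16))
    X = np.vstack([real, fake])
    y = np.array([0] * n + [1] * n)
    return X, y

# "Studio" fakes (like a curated synthetic-video collection) carry one artifact signature...
X_train, y_train = make_dataset(2000, fake_shift=0.8)
# ...while fakes found "in the wild" carry a different, weaker one.
X_wild, y_wild = make_dataset(500, fake_shift=0.3)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_test, y_test = make_dataset(500, fake_shift=0.8)
print("same-source accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("in-the-wild accuracy:", accuracy_score(y_wild, clf.predict(X_wild)))

In this toy setup, the detector does well on held-out examples from its own training distribution and noticeably worse on the differently generated “wild” fakes, which is the gap Dessa’s engineers describe.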

Their tests showed that the fight against deepfakes and other forms of online disinformation will require nearly constant reinvention. Several hundred synthetic videos are not enough to solve the problem, because they don’t necessarily share the characteristics of fake videos being distributed today, much less in the years to come.

“Unlike other problems, this one is constantly changing,” said Ragavan Thurairatnam, Dessa’s founder and head of machine learning.

In December 2017, someone calling themselves “deepfakes” started using A.I. technologies to graft the heads of celebrities onto nude bodies in pornographic videos. As the practice spread across services like Twitter, Reddit and PornHub, the term deepfake entered the popular lexicon. Soon, it was synonymous with any fake video posted to the internet.

The technology has improved at a rate that surprises A.I. experts, and there is little reason to believe it will slow. Deepfakes should benefit from one of the few tech industry axioms that have held up over the years: Computers always get more powerful and there is always more data. That makes the so-called machine-learning software that helps create deepfakes more effective.

“It is getting easier, and it will continue to get easier. There is no doubt about it,” said Matthias Niessner, a professor of computer science at the Technical University of Munich who is working with Google on its deepfake research. “That trend will continue for years.”

The question is: Which side will improve more quickly?

Researchers like Dr. Niessner are working to build systems that can automatically identify and remove deepfakes. This is the other side of the same coin. Like deepfake creators, deepfake detectors learn their skills by analyzing images.

Detectors can also improve by leaps and bounds. But that requires a constant stream of new data representing the latest deepfake techniques used around the internet, Dr. Niessner and other researchers said. Collecting and sharing the right data can be difficult. Relevant examples are scarce, and for privacy and copyright reasons, companies cannot always share data with outside researchers.

Though activists and artists occasionally release deepfakes as a way of showing how these videos could shift the political discourse online, these techniques are not widely used to spread disinformation. They are mostly used to spread humor or fake pornography, according to Facebook, Google and others who track the progress of deepfakes.

Right now, deepfake videos have subtle imperfections that can be readily detected by automated systems, if not by the naked eye. But some researchers argue that the improved technology will be powerful enough to create fake images without these tiny defects. Companies like Google and Facebook hope they will have reliable detectors in place before that happens.

“In the short term, detection will be reasonably effective,” said Mr. Kambhampati, the Arizona State professor. “In the longer term, I think it will be impossible to distinguish between the real pictures and the fake pictures.”


Four Problems With 2016 Trump Polling That Could Play Out Again in 2020


Meetings of the American Association of Public Opinion Research tend to be pretty staid affairs. But when members of the group gathered for a conference call at this time in 2016, the polling industry was experiencing a crisis of confidence.

Donald J. Trump had swept most of the Midwest to win a majority in the Electoral College, a shocking upset that defied most state-by-state polls and prognoses. An association task force, which was already working on a routine report about pre-election poll methodologies, was suddenly tasked with figuring out what had gone wrong.

“We moved from doing this sort of niche industry report to almost like more of an autopsy,” said Courtney Kennedy, the director of survey research at Pew Research Center, who headed the task force. “Something major just happened, and we have to really understand from A to Z why it happened.”

The group released its report the following spring. Today, with the next presidential election less than a year away, pollsters are closely studying the findings of that document and others like it, looking for adjustments they can make in 2020 to avoid the misfires of 2016.

“Polling is one of those things like military battles: You always re-fight the last war,” said Joshua D. Clinton, who co-directs Vanderbilt University’s poll and served on the AAPOR committee. The 2020 election “might have a different set of considerations,” he said, but pollsters have an obligation to learn from the last cycle’s mistakes.

By and large, nationwide surveys were relatively accurate in predicting the popular vote, which Hillary Clinton won by two percentage points. But in crucial parts of the country, especially in the Midwest, individual state polls persistently underestimated Mr. Trump’s support. And election forecasters used those polls in Electoral College projections that gave the impression Mrs. Clinton was a heavy favorite.

AAPOR’s analysis found several reasons the state polls missed the mark. Certain groups were underrepresented in poll after poll, particularly less educated white voters and those in counties that had voted decisively against President Barack Obama in 2012. Respondents’ unwillingness to speak honestly about their support for Mr. Trump may have also been a factor.

These and other issues could reappear in 2020, pollsters warn, if they’re not addressed directly.

To make sure their results reflect the true makeup of the population, pollsters typically “weight” their data, adding emphasis to certain respondents so that a group that was underrepresented in the random sample still has enough influence over the poll’s final result. Polls typically weight by age, race and other demographic categories.

But some state-level polls in 2016 did not weight by education levels, therefore giving short shrift to less educated voters, who tend to be harder to reach.

This often understated Mr. Trump’s support, since he was markedly more popular than past Republican nominees among less educated voters — and noticeably less popular among those with higher degrees, who research suggests are more likely to participate in polls.

The AAPOR analysts found that many polls in swing states would have achieved significantly different results had they been weighted for education. This, in turn, would have noticeably decreased Mrs. Clinton’s lead in much-watched polling averages and forecasts of these states.

A Michigan State University poll that found Mrs. Clinton holding a 17-point lead in that state just before the election did not weight by education. If it had, her lead would have dropped to 10 points in that poll, the AAPOR researchers found — still a far cry from predicting Mr. Trump’s narrow victory there, but a significant change.

And a University of New Hampshire poll put Mrs. Clinton up by 16 points in that state on the eve of the election, though in the end she barely won it. That poll’s gap would have closed entirely if its analysts had weighted for education, according to the AAPOR report.
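The arithmetic behind such shifts can be sketched with invented numbers. In the toy Python example below, college-educated respondents are overrepresented in the raw sample relative to an assumed electorate, and reweighting narrows the leading candidate’s margin; the sample, the population shares and the result are all hypothetical:

# Minimal sketch of weighting by education, with invented numbers.
# Each respondent's weight = population share of their group / sample share of their group.
sample = [
    # (education, candidate); hypothetical respondents
    *[("college", "Clinton")] * 42, *[("college", "Trump")] * 18,
    *[("non_college", "Clinton")] * 18, *[("non_college", "Trump")] * 22,
]
population_share = {"college": 0.40, "non_college": 0.60}  # assumed electorate

n = len(sample)
sample_share = {g: sum(1 for e, _ in sample if e == g) / n for g in population_share}
weights = {g: population_share[g] / sample_share[g] for g in population_share}

def weighted_support(candidate):
    """Weighted share of respondents backing the candidate."""
    total = sum(weights[e] for e, _ in sample)
    favor = sum(weights[e] for e, c in sample if c == candidate)
    return favor / total

unweighted = sum(1 for _, c in sample if c == "Clinton") / n
print("unweighted Clinton share:", round(unweighted, 2))                   # 0.6 in this toy sample
print("weighted Clinton share:  ", round(weighted_support("Clinton"), 2))  # 0.55 in this toy sample

Here the unweighted sample shows a 20-point lead; once the underrepresented, less educated respondents are weighted up, the lead shrinks to 10 points.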

Some polling firms that did not weight by education in 2016 have since taken up the practice, but not all of them. Mark Blumenthal, the former head of polling at Survey Monkey and a member of the AAPOR task force, said that weighting by education ought to be accepted as necessary. “I think that’s a reasonable line to draw,” he said.

A pre-election study by Morning Consult warned that wealthier, more educated Republicans appeared slightly more reluctant to tell phone interviewers that they supported Mr. Trump, compared with similar voters who responded to online polls.

Pollsters refer to this phenomenon as the “shy Trump” effect, or — in academic parlance — a form of “social-desirability bias.” Studies have affirmed that in races where a candidate or cause is perceived as controversial or otherwise undesirable, voters can be wary of voicing their support, especially to a live interviewer.

Charles Franklin, the director of the Marquette University Law School poll of Wisconsin voters, said he worried that the shy Trump effect had played a role in skewing the poll’s results away from Mr. Trump in 2016.

Mr. Franklin, who was a member of the AAPOR team, suggested how telephone interviewers might confront the issue with respondents next year: “When they indicate they’re undecided or maybe considering a third-party vote, maybe push people a little more on whether they could change their mind,” he said.

One polling firm that showed Mr. Trump narrowly leading in some of the most inaccurately polled states — Michigan, Pennsylvania and Florida, all of which he won — was Trafalgar Group, a Republican polling and consulting firm that uses a variety of nontraditional polling methodologies.

It sought to combat the shy Trump effect by asking respondents not only how they planned to vote but also how they thought their neighbors would vote — possibly offering Trump supporters a way to project their feelings onto someone else.

The AAPOR report posited that the neighbor question could help overcome shyness among Trump supporters, particularly in phone interviews. It “warrants experimentation in a broad array of contests,” the report said.

That was not the only way Trafalgar innovated. Polls typically use a formula based on past elections to determine which voters are likely to show up on Election Day. They then discard or devalue responses from those who seem less predisposed — typically those without much history of voting, or who don’t express much enthusiasm about politics.

Trafalgar used a generously inclusive model, with a particular eye toward less frequent voters whom Mr. Trump’s anti-establishment campaign had drawn in.

“With Trump, we saw in the primary how new people were being brought into the process, and so we widened the net of who we reached out to,” Robert C. Cahaly, a pollster at Trafalgar, said in an interview.
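A toy Python version of that contrast between screens might look like the following; the respondents, the scoring rules and the cutoffs are invented for illustration and are not any firm’s actual model:

# Hypothetical likely-voter screens; respondents and cutoffs are invented for illustration.
respondents = [
    # (elections voted in, of the last four; self-reported enthusiasm 0-10; candidate)
    (4, 9, "Clinton"), (3, 7, "Clinton"), (0, 8, "Trump"),
    (1, 6, "Trump"), (4, 5, "Clinton"), (0, 9, "Trump"),
]

def topline(voters, candidate):
    """Share of the screened-in respondents backing the candidate."""
    return sum(1 for *_, c in voters if c == candidate) / len(voters)

# Conventional screen: require a voting history, regardless of enthusiasm.
conventional = [r for r in respondents if r[0] >= 2]
# Wider screen: also keep infrequent voters who report high enthusiasm this cycle.
inclusive = [r for r in respondents if r[0] >= 2 or r[1] >= 8]

print("conventional screen, Trump share:", round(topline(conventional, "Trump"), 2))
print("inclusive screen, Trump share:   ", round(topline(inclusive, "Trump"), 2))

In this invented sample, the conventional screen drops the enthusiastic first-time voters entirely, while the wider net picks them up and shifts the topline toward Mr. Trump.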

When the Census Bureau in 2017 released detailed voting information from the 2016 election, it revealed that turnout had surged in many counties that Mr. Obama had lost by 10 points or more in 2012 — particularly in Michigan, Pennsylvania and Wisconsin.

It is a reminder that who voted in the previous election is not always a good indicator of who will vote the next time.

“This is where the art comes in, and it’s hard to know until it actually happens which approach is the best approach,” Mr. Clinton said, referring to how polling firms construct their likely-voter models.

The polls from 2016 make clear that finding a representative sample is both the hardest and the most important part of conducting an effective survey. This is not new knowledge for public-opinion professionals, but many said it was a lesson worth relearning.

Compounding all the other factors in 2016 was the simple fact that — in a race with two historically unpopular candidates — many voters didn’t reach a decision until just before Election Day.

In Michigan, Pennsylvania and Wisconsin, between 13 and 15 percent of respondents in exit polls said they had decided in the last week of the campaign. Those voters broke for Mr. Trump by a wide margin; in Wisconsin, it was about 30 points.

Pew researchers also called back respondents of their pre-election polls and found that many had changed their minds and voted differently than they’d said they would, which is not uncommon. But these voters broke for Mr. Trump by a 16-point margin — a heavier tilt than in any other year on record.

So, in a volatile election, even a perfectly effective poll might not be able to gauge the outcome; a poll can only take the pulse of where voters’ feelings lie in a particular moment.

That points to a major source of agita for some observers of the 2016 election: electoral forecasts in the news media and elsewhere that used polling data to suggest Mrs. Clinton was highly likely to win. Most of them put her chances at somewhere between 70 and 99 percent.

“I’m not sure people understand how these probabilistic projections are produced or what they mean,” Gary Langer, a pollster who works with ABC News, said in an email. “I’d suggest that predicting election outcomes is the least important contribution of pre-election polls. Bringing us to a better understanding of how and why the nation comes to these choices is the higher value that good-quality polls provide.”

Election forecasters do not mean to convey absolute certainty. Just before Election Day, The New York Times’s Upshot forecast gave Mr. Trump a 15 percent chance of winning, and FiveThirtyEight’s model put his chances at 29 percent, indicating that a Republican win was not out of the question.

But the Princeton Election Consortium, which had predicted the 2012 results with striking accuracy, was more certain of a Clinton win, giving her a 99 percent chance in the days leading up to the election.

Sam Wang, a neuroscientist who runs the Princeton model, said in an email that in 2016 he had not factored in enough potential “systematic error” — a catchall variable that accounts for imperfections in individual polls. In 2016, he never set that variable higher than 1.1 percentage points, but in 2020 he plans to set it at two points.

“That will increase the uncertainty much more,” he said, “which will set expectations appropriately in case the election is close.”
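A minimal sketch of how that change plays out in a simple Monte Carlo forecast, written in Python; the polling margin, the error sizes and the model itself are invented for illustration and are not Mr. Wang’s actual method:

# Hypothetical Monte Carlo forecast: how a larger systematic-error term widens the uncertainty.
import numpy as np

rng = np.random.default_rng(0)
polling_margin = 2.0   # the leading candidate is up 2 points in the polling average (invented)
sampling_error = 2.0   # independent noise per simulated election, in points (invented)

def win_probability(systematic_error, n_sims=200_000):
    # One shared shift (the systematic error) plus independent noise for each simulation.
    shift = rng.normal(0.0, systematic_error, n_sims)
    noise = rng.normal(0.0, sampling_error, n_sims)
    outcomes = polling_margin + shift + noise
    return (outcomes > 0).mean()

print("win probability with a 1.1-point systematic error:", round(win_probability(1.1), 2))
print("win probability with a 2.0-point systematic error:", round(win_probability(2.0), 2))

With the larger systematic-error term, the same 2-point polling lead translates into a lower, more cautious win probability, which is the effect Mr. Wang describes.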
