
Internet Companies Prepare to Fight the ‘Deepfake’ Future


SAN FRANCISCO — Several months ago, Google hired dozens of actors to sit at a table, stand in a hallway and walk down a street while talking into a video camera.

Then the company’s researchers, using a new kind of artificial intelligence software, swapped the faces of the actors. People who had been walking were suddenly at a table. The actors who had been in a hallway looked like they were on a street. Men’s faces were put on women’s bodies. Women’s faces were put on men’s bodies. In time, the researchers had created hundreds of so-called deepfake videos.

By creating these digitally manipulated videos, Google’s scientists believe they are learning how to spot deepfakes, which researchers and lawmakers worry could become a new, insidious method for spreading disinformation in the lead-up to the 2020 presidential election.

For internet companies like Google, finding the tools to spot deepfakes has gained urgency. If someone wants to spread a fake video far and wide, Google’s YouTube or Facebook’s social media platforms would be great places to do it.

Imagine a fake Senator Elizabeth Warren, virtually indistinguishable from the real thing, getting into a fistfight in a doctored video. Or a fake President Trump doing the same. The technology capable of that trickery is edging closer to reality.

“Even with current technology, it’s hard for some people to tell what is real and what is not,” said Subbarao Kambhampati, a professor of computer science at Arizona State University who is among the academics partnering with Facebook on its deepfake research.

Video transcript:

[HIGH-PITCHED NOTE] “You know when a person is working on something and it’s good, but it’s not perfect? And he just tries for perfection? That’s me in a nutshell.” [MUFFLED SPEECH] “I just want to recreate humans.” “O.K. But why?” “I don’t know. I mean, it’s that feeling you get when you achieve something big. (ECHOING) “It’s really interesting. You hear these words coming out in your voice, but you never said them.” “Let’s try again.” “We’ve been working to make a convincing total deepfake. The bar we’re setting is very high.” “So you can see, it’s not perfect.” “We’re trying to make it so the population would totally believe this video.” “Give this guy an Oscar.” [LAUGHTER] “There are definitely people doing it at Google, Samsung, Microsoft. The technology moves super fast.” “Somebody else will beat you to it if you wait a year.” “Someone else will. And that will hurt.” “O.K., let’s try again.” “Just make it natural, right?” “It’s hard to be natural.” “It’s hard to be natural when you’re faking it.” “O.K.” “What are you up to these days?” “Today, I’m announcing my candidacy for the presidency of the United States.” [LAUGHTER] “And I would like to announce my very special running mate, the most famous chimp in the world, Bubbles Jackson. Are we good?” “People do not realize how close this is to happen. Fingers crossed. It’s going to happen, like, in the upcoming months. Yeah, the world is going to change.” “I squint my eyes.” “Yeah.” “Look, this is how we got into the mess we’re in today with technology, right? A bunch of idealistic young people thinking, we’re going to change the world.” “It’s weird to see his face on it.” [LAUGHTER] “I wondered what you would say to these engineers.” “I would say, I hope you’re putting as much thought into how we deal with the consequences of this as you are into the realization of it. This is a Pandora’s box you’re opening.” [THEME MUSIC]

Deepfakes — a term that generally describes videos doctored with cutting-edge artificial intelligence — have already challenged our assumptions about what is real and what is not.

In recent months, video evidence was at the center of prominent incidents in Brazil, Gabon in Central Africa and China. Each was colored by the same question: Is the video real? The Gabonese president, for example, was out of the country for medical care and his government released a so-called proof-of-life video. Opponents claimed it had been faked. Experts call that confusion “the liar’s dividend.”

“You can already see a material effect that deepfakes have had,” said Nick Dufour, one of the Google engineers overseeing the company’s deepfake research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”

For decades, computer software has allowed people to manipulate photos and videos or create fake images from scratch. But it has been a slow, painstaking process usually reserved for experts trained in the vagaries of software like Adobe Photoshop or After Effects.

Now, artificial intelligence technologies are streamlining the process, reducing the cost, time and skill needed to doctor digital images. These A.I. systems learn on their own how to build fake images by analyzing thousands of real images. That means they can handle a portion of the workload that once fell to trained technicians. And that means people can create far more fake stuff than they used to.

The technology used to create deepfakes is still fairly new, and the results are often easy to spot. But it is evolving. While the tools used to detect these bogus videos are also improving, some researchers worry that they won’t be able to keep pace.

Google recently said that any academic or corporate researcher could download its collection of synthetic videos and use them to build tools for identifying deepfakes. The video collection is essentially a syllabus of digital trickery for computers. By analyzing all of those images, A.I. systems learn how to watch for fakes. Facebook recently did something similar, using actors to build fake videos and then releasing them to outside researchers.

Engineers at a Canadian company called Dessa, which specializes in artificial intelligence, recently tested a deepfake detector that was built using Google’s synthetic videos. It could identify the Google videos with almost perfect accuracy. But when they tested their detector on deepfake videos plucked from across the internet, it failed more than 40 percent of the time.

They eventually fixed the problem, but only after rebuilding their detector with help from videos found “in the wild,” not created with paid actors — proving that a detector is only as good as the data used to train it.
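Dessa’s finding is easy to reproduce in miniature. The sketch below is hypothetical code, not Dessa’s system: it trains a fake-versus-real classifier on synthetic features standing in for “staged” fakes, shows it falling to near chance on fakes drawn from a different distribution, then recovering once examples from that distribution join the training set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_data(n, fake_shift):
    # Toy stand-in for per-frame features: real clips cluster near 0,
    # fakes cluster near `fake_shift`.
    real = rng.normal(0.0, 1.0, size=(n, 8))
    fake = rng.normal(fake_shift, 1.0, size=(n, 8))
    return np.vstack([real, fake]), np.array([0] * n + [1] * n)

# "Staged" fakes (paid actors) and "in the wild" fakes look different.
X_staged, y_staged = make_data(500, fake_shift=2.0)
X_wild, y_wild = make_data(500, fake_shift=-1.5)

clf = RandomForestClassifier(random_state=0).fit(X_staged, y_staged)
print("accuracy on staged fakes:", clf.score(X_staged, y_staged))
print("accuracy on wild fakes:  ", clf.score(X_wild, y_wild))  # near chance

# Retraining with wild examples closes the gap, as Dessa found.
X_mix, y_mix = np.vstack([X_staged, X_wild]), np.concatenate([y_staged, y_wild])
clf2 = RandomForestClassifier(random_state=0).fit(X_mix, y_mix)
print("retrained, on wild fakes:", clf2.score(X_wild, y_wild))
```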

Their tests showed that the fight against deepfakes and other forms of online disinformation will require nearly constant reinvention. Several hundred synthetic videos are not enough to solve the problem, because they don’t necessarily share the characteristics of fake videos being distributed today, much less in the years to come.

“Unlike other problems, this one is constantly changing,” said Ragavan Thurairatnam, Dessa’s founder and head of machine learning.

In December 2017, someone calling themselves “deepfakes” started using A.I. technologies to graft the heads of celebrities onto nude bodies in pornographic videos. As the practice spread across services like Twitter, Reddit and PornHub, the term deepfake entered the popular lexicon. Soon, it was synonymous with any fake video posted to the internet.

The technology has improved at a rate that surprises A.I. experts, and there is little reason to believe it will slow. Deepfakes should benefit from one of the few tech industry axioms that have held up over the years: Computers always get more powerful and there is always more data. That makes the so-called machine-learning software that helps create deepfakes more effective.

“It is getting easier, and it will continue to get easier. There is no doubt about it,” said Matthias Niessner, a professor of computer science at the Technical University of Munich who is working with Google on its deepfake research. “That trend will continue for years.”

The question is: Which side will improve more quickly?

Researchers like Dr. Niessner are working to build systems that can automatically identify and remove deepfakes. This is the other side of the same coin. Like deepfake creators, deepfake detectors learn their skills by analyzing images.

Detectors can also improve by leaps and bounds. But that requires a constant stream of new data representing the latest deepfake techniques used around the internet, Dr. Niessner and other researchers said. Collecting and sharing the right data can be difficult. Relevant examples are scarce, and for privacy and copyright reasons, companies cannot always share data with outside researchers.

Though activists and artists occasionally release deepfakes as a way of showing how these videos could shift the political discourse online, these techniques are not widely used to spread disinformation. They are mostly used to spread humor or fake pornography, according to Facebook, Google and others who track the progress of deepfakes.

Right now, deepfake videos have subtle imperfections that can be readily detected by automated systems, if not by the naked eye. But some researchers argue that the improved technology will be powerful enough to create fake images without these tiny defects. Companies like Google and Facebook hope they will have reliable detectors in place before that happens.

“In the short term, detection will be reasonably effective,” said Mr. Kambhampati, the Arizona State professor. “In the longer term, I think it will be impossible to distinguish between the real pictures and the fake pictures.”


Would You Like Fries With That? McDonald’s Already Knows the Answer


McDonald’s has a new plan to sell more Big Macs: act like Big Tech.

Over the last seven months, the fast-food chain has spent hundreds of millions of dollars to acquire technology companies that specialize in artificial intelligence and machine learning. McDonald’s has even established a new tech hub in the heart of Silicon Valley — the McD Tech Labs — where a team of engineers and data scientists is working on voice-recognition software.

The goal? To turn McDonald’s, a chain better known for supersized portions than for supercomputers, into a saltier, greasier version of Amazon.

As fast-food sales decline across the increasingly competitive marketplace, McDonald’s is looking for new ways to lure customers. On Tuesday, the chain said same-store sales in the United States were weaker than expected for the third quarter, sending shares lower.

But in the coming years, its machine learning technology could change how consumers decide what to eat — and, in a potentially ominous development for their waistlines, make them eat more.

So far, the new technological advances can be experienced mostly at the chain’s thousands of drive-throughs, where for years menu boards have displayed a familiar array of McDonald’s favorites: Big Macs, Quarter Pounders, Chicken McNuggets.

Now, the chain has digital boards programmed to market that food more strategically, taking into account such factors as the time of day, the weather, the popularity of certain menu items and the length of the wait. On a hot afternoon, for example, the board might promote soda rather than coffee. At the conclusion of every transaction, screens now display a list of recommendations, nudging customers to order more.
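As a rough illustration of the logic such a board could run, here is a toy scorer built from the signals the article names: time of day, weather, item popularity and the length of the wait. The item names and weights are invented for the example; a production system would learn them from sales data.

```python
from dataclasses import dataclass

@dataclass
class Context:
    hour: int            # local time, 0-23
    temp_f: float        # current temperature
    wait_minutes: float  # current drive-through wait

# Invented items and weights; a real system would learn these from sales.
MENU = {
    "iced soda":       {"popularity": 0.5, "hot_weather": 0.4, "slow_to_make": False},
    "hot coffee":      {"popularity": 0.6, "hot_weather": -0.3, "slow_to_make": False},
    "quarter pounder": {"popularity": 0.7, "hot_weather": 0.0, "slow_to_make": True},
}

def score(item, ctx):
    traits = MENU[item]
    s = traits["popularity"]                        # popularity signal
    if ctx.temp_f > 80:
        s += traits["hot_weather"]                  # weather signal
    if 6 <= ctx.hour < 10 and item == "hot coffee":
        s += 0.3                                    # time-of-day signal
    if ctx.wait_minutes > 5 and traits["slow_to_make"]:
        s -= 0.5                                    # don't lengthen a long line
    return s

ctx = Context(hour=15, temp_f=88.0, wait_minutes=7.0)
print(sorted(MENU, key=lambda item: score(item, ctx), reverse=True))
# Hot afternoon, long line: soda is promoted ahead of coffee.
```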

At some drive-throughs, McDonald’s has tested technology that can recognize license-plate numbers, allowing the company to tailor a list of suggested purchases to a customer’s previous orders, as long as the person agrees to sign away the data.

“You just grow to expect that in other parts of your life. Why should it be different when you’re ordering at McDonald’s?” said Daniel Henry, the chain’s chief information officer. “We don’t think food should be any different than what you buy on Amazon.”

As the evolution of the McDonald’s drive-through shows, the internet shopping experience, with its recommendation algorithms and personalization, is increasingly shaping the world of brick-and-mortar retail, as restaurants, clothing stores, supermarkets and other businesses use new technology to collect consumer data and then deploy that information to encourage more spending.

At some stores, Bluetooth devices now track shoppers’ movements, allowing companies to send texts and emails recommending products that customers lingered over but did not buy. And a number of retailers are experimenting with facial-recognition tools and other technologies — sometimes known as “offline cookies” — that allow businesses to gather information about customers even when they are away from their computers.

In the restaurant world, the increasingly popular food-delivery apps have produced a slew of customer data. But much of that information is controlled by third-party technology companies rather than by the restaurants themselves, underlining the importance of tech expertise in an increasingly competitive industry.

“A lot of the restaurant chains, the larger ones that have the cash and the clout and the depth, are really turning into quasi-technology companies,” said Michael Atkinson, who runs Orderscape, a company that provides voice-ordering technology. “All of them have that ambition.”

In recent years, Domino’s Pizza has distinguished itself as a technology leader in the slow-moving world of pizza (it’s hard to disrupt a crust recipe), aiming to capture the growing food-delivery market with streamlined phone and online ordering systems, data-collection techniques and even self-driving cars.

Like the new McD Tech Labs in California, Domino’s also has a tech headquarters: the “innovation garage” in Ann Arbor, Mich., where teams of employees drawn from departments across the company work on specific projects under one roof — an approach borrowed from Silicon Valley.

“That’s 60 years’ worth of legacy corporate structure that we have blown up by moving into this building,” said Dennis Maloney, the company’s chief digital officer. “Domino’s started off as a pizza company that sells online, and we’ve managed to transform ourselves into an e-commerce company that sells pizza.”

So far, however, Domino’s has stopped short of the latest McDonald’s play: acquiring entire tech start-ups. (Pizza Hut recently acquired a company that produces online ordering software.) In March, McDonald’s spent more than $300 million to buy Dynamic Yield, the Tel Aviv-based company that developed the artificial intelligence tools now used at thousands of McDonald’s drive-throughs.

The deal “has changed the way the high-tech industry thinks about potential M&A,” said Liad Agmon, a former Israeli intelligence official who co-founded Dynamic Yield. “We’ll see more nontraditional tech companies buying tech companies as an accelerator for their digital efforts. It was genius on McDonald’s side.”

Already, the recommendation algorithms built into the drive-through menu boards have generated larger orders, the McDonald’s chief executive, Steve Easterbrook, said during an earnings call in July. (Mr. Henry, the chain’s information executive, declined to reveal the size of the increase.) By the end of the year, the new system is expected to be in place at nearly every McDonald’s drive-through in the U.S.

In September, McDonald’s purchased a second tech company, Apprente, a start-up based in Mountain View, Calif., that develops voice-activated platforms that can process orders in multiple languages and accents. In recent months, McDonald’s has tested voice recognition at some of its restaurants, seeking to replace the human workers who take orders with a faster system.

McDonald’s insists that the rollout of the voice technology will not cost jobs. But at a time when it faces renewed protests from workers over low wages and sexual harassment, the chain’s new focus on technology could intensify scrutiny of how it treats its workers and how they might be affected by automation. While McDonald’s has reported impressive growth over the last couple of years, some employees at its restaurants make less than $10 an hour.

“Try raising a family on that,” said Adriana Alvarez, an employee in Cicero, Ill., who has helped lead the high-profile campaign for a $15 hourly wage at McDonald’s. “The company should be able to balance tech and other investments and, in the process, ensure workers like me are safe on the job and have a seat at the table.”

With unemployment at just 3.5 percent in the United States, the fast-food industry is facing one of its worst labor shortages in decades. Rather than eliminate jobs, McDonald’s claims that voice-recognition technology would allow franchise owners to reassign workers to understaffed areas of their restaurants. But across the industry, fast-food experts say, some chains may attempt to use voice tools and other technologies to replace workers.

“The labor shortage frankly has done more to push restaurants toward technology than almost anything else,” said Jonathan Maze, the executive editor of Restaurant Business Magazine, a trade publication. “It enables you theoretically to be able to run your restaurant with fewer people.”

At the McDonald’s drive-through on Fort Hamilton Parkway in Brooklyn, every order still must go through a human being: Last week, the voice on the other end of the speaker sounded perplexed when a reporter turned down the free soda that usually comes with a cheeseburger and fries.

But the rest of the drive-through experience — with its digital screens and recommendation algorithms — does indeed feel a bit like shopping online.

“It’s a great, efficient way to take people’s money,” said Marayah Jerry as she waited at the drive-through to collect a Ranch Snack Wrap. “I’ll come with an idea of what I want, and then I see the pictures, and I’m like, ‘That looks good.’”

Another drive-through customer, Dalila Ruiz, said she noticed the suggested add-ons at the bottom of the menu board but resisted the temptation to splurge. “I don’t want to be so fat,” Ms. Ruiz said.

Not all McDonald’s customers are likely to show such discipline. Critics of artificial intelligence have long warned that the technology could lead to a dystopian future in which humans are subordinate to machines.

Before the robot apocalypse, however, A.I. might simply make us fatter.

“There are real, significant unintended consequences of something like this further driving unhealthy eating and more fast-food eating and obesity rates and diabetes rates going up,” said Scott Kahan, a doctor who directs the National Center for Weight and Wellness, an obesity clinic in Washington, D.C. “These sorts of technologies are making it hard for people to just find some reasonable moderation.”

There is plenty of precedent for companies like McDonald’s finding creative ways to persuade Americans to consume more calories. But the marriage of a fast-food giant and an artificial-intelligence start-up marks an unusual new chapter.

When Mr. Agmon, the co-founder of Dynamic Yield, announced the McDonald’s acquisition in a company WhatsApp chat in March, his colleagues thought he was joking. “When you start working for a tech company,” Mr. Agmon said, “you don’t expect this.”

Soon, however, the news began to sink in: The next day, 250 McDonald’s hamburgers arrived at Dynamic Yield’s headquarters in Tel Aviv, along with fries for the whole staff.

But this wasn’t really a McDonald’s crowd. By the time the staff finished hugging and congratulating each other, the burgers were cold.


Should we be modeling AI on octopus brains instead of humans?


Here’s an odd thing to ponder if you’re tired of impeachment theater. Big tech continues to pursue the elusive dream (or nightmare, depending on whom you ask) of achieving Artificial General Intelligence, or perhaps even “Strong AI”: an AI entity capable of possessing consciousness. To date, none of them have succeeded, at least as far as we know. (Yes, some people believe the AI has already “woken up” and is hiding on the internet so we won’t discover it, but that’s a debate for another day.)

One possible problem, at least according to some tech experts, is that we’ve been trying to model Artificial Intelligence on the human brain. Why is that a problem? Because our brains are too complicated to replicate, at least for now. But there are other intelligent species with powerful brains hanging around. Why not model AI on one of those? One candidate being suggested is the octopus. (Boston Globe)

Many believe that mimicking the human brain is the optimal way to create artificial intelligence. But scientists are struggling to do this, due to the substantial intricacies of the human mind. Billye [the octopus] reminds us that there is a vast array of nonhuman life that is worthy of emulation.

Much of the excitement around state-of-the-art artificial intelligence research today is focused on deep learning, which utilizes layers of artificial neural networks to perform machine learning through a web of nodes that are modeled on interconnections between neurons in the vertebrate brain cortex. While this science holds incredible promise, given the enormous complexity of the human brain, it is also presenting formidable challenges, including that some of these AI systems are arriving at conclusions that cannot be explained by their designers.

The linked article is lengthy and rather dense, but definitely worth diving into if you are at all interested in this subject. The author, Flynn Coleman, may not be a computer scientist but she’s proposing some interesting ideas that are already being explored by experts in the field. In terms of why our brains are so difficult to recreate in a computer, she lists some of the aspects of our own brains that we don’t even understand yet. These include:

– We don’t know exactly how we make decisions
– We don’t have an accepted definition of what human intelligence is
– We don’t exactly know why we sleep or dream
– We don’t know how we process memories
– We don’t know what consciousness is
– We don’t have an equation to define what we call “common sense”

When you consider how much we don’t really understand about how our own brains work, it’s no wonder we can’t teach our computers how to replicate them.

But if we’re looking for a different model, is the octopus really the way to go? They apparently have impressive brains, to be sure. Those brains are spread out, with much of the neural matter residing in the arms, and different parts can work independently or in tandem with the rest. But since we can’t really communicate with octopuses (yet), it’s probably going to be hard to reverse engineer their gray matter. Besides, some scientists are pretty well convinced that octopuses are actually aliens.

The octopus just seems frightening and, well… alien to me. Do they understand concepts like compassion or empathy? If we’re going to “wake up” the AI one of these days, I’d rather roll the dice and hope that the newly sentient technology at least has a chance of containing some compassion for us before it starts rolling out the terminator robots at the first automated car factory it takes over.

But I don’t want to be too hard on the octopuses. For a different and more intriguing look, here’s a video of an octopus dreaming. You won’t be sorry you clicked.



When You Take a Great Photo, Thank the Algorithm in Your Phone


Not too long ago, tech giants like Apple and Samsung raved about the number of megapixels they were cramming into smartphone cameras to make photos look clearer. Nowadays, all the handset makers are shifting focus to the algorithms, artificial intelligence and special sensors that are working together to make our photos look more impressive.

What that means: Our phones are working hard to make photos look good, with minimal effort required from the user.

On Tuesday, Google showed its latest attempt to make cameras smarter. It unveiled the Pixel 4 and Pixel 4 XL, new versions of its popular smartphone, which comes in two screen sizes. While the devices include new hardware features — like an extra camera lens and an infrared face scanner to unlock the phone — Google emphasized the phones’ use of so-called computational photography, which automatically processes images to look more professional.

Among the Pixel 4’s new features is a mode for shooting the night sky and capturing images of stars. And by adding the extra lens, Google augmented a software feature called Super Res Zoom, which allows users to zoom in more closely on images without losing much detail.

Apple also highlighted computational photography last month when it introduced three new iPhones. One yet-to-be-released feature, Deep Fusion, will process images with an extreme amount of detail.

The big picture? When you take a digital photo, you’re not actually shooting a photo anymore.

“Most photos you take these days are not a photo where you click the photo and get one shot,” said Ren Ng, a computer science professor at the University of California, Berkeley. “These days it takes a burst of images and computes all of that data into a final photograph.”

Computational photography has been around for years. One of the earliest forms was HDR, for high dynamic range, which involved taking a burst of photos at different exposures and blending the best parts of them into one optimal image.
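Stripped to its core, that blend can be written in a few lines. The sketch below is a simplification (real HDR pipelines also align the frames and tone-map the output): it weights each pixel by how well exposed it is and averages across the burst.

```python
import numpy as np

def blend_exposures(frames):
    # frames: the same scene shot at different exposures, values in [0, 1].
    stack = np.stack(frames)                    # shape (n_frames, H, W)
    # A pixel near mid-gray is well exposed; near 0 or 1 it is clipped.
    weights = np.clip(1.0 - 2.0 * np.abs(stack - 0.5), 1e-6, None)
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# Under-, normally and over-exposed versions of a tiny synthetic scene.
scene = np.linspace(0.0, 1.0, 16).reshape(4, 4)
burst = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.5, 1.0, 2.0)]
print(blend_exposures(burst).round(2))
```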

Over the last few years, more sophisticated computational photography has rapidly improved the photos taken on our phones.

Google gave me a preview of its Pixel phones last week. Here’s what they tell us about the software that’s making our phone cameras tick, and what to look forward to. (For the most part, the photos will speak for themselves.)

Last year, Google introduced Night Sight, which made photos taken in low light look as though they had been shot in normal conditions, without a flash. The technique took a burst of photos with short exposures and reassembled them into an image.

With the Pixel 4, Google is applying a similar technique for photos of the night sky. For astronomy photos, the camera detects when it is very dark and takes a burst of images at extra-long exposures to capture more light. The result, Google said, is the kind of image that previously could be captured only with a full-size camera and a bulky lens.

Apple’s new iPhones also introduced a mode for shooting photos in low light, employing a similar method. Once the camera detects that a setting is very dark, it automatically captures multiple pictures and fuses them together while adjusting colors and contrast.

A few years ago, phone makers like Apple, Samsung and Huawei introduced cameras that produced portrait mode, also known as the bokeh effect, which sharpened a subject in the foreground and blurred the background. Most phone makers used two lenses that worked together to create the effect.

Two years ago with the Pixel 2, Google accomplished the same effect with a single lens. Its method largely relied on machine learning — computers analyzing millions of images to recognize what’s important in a photo. The Pixel then made predictions about the parts of the photo that should stay sharp and created a mask around it. A special sensor inside the camera, called dual-pixel autofocus, helped analyze the distance between the objects and the camera to make the blurring look realistic.

With the Pixel 4, Google said, it has improved the camera’s portrait-mode ability. The new second lens will allow the camera to capture more information about depth, which lets the camera shoot objects with portrait mode from greater distances.
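Once a subject mask exists, the effect itself is a masked blur. Here is a minimal sketch under that assumption; on a phone, the mask would come from the machine-learning model and the depth estimate described above rather than being hard-coded.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_mode(image, subject_mask, sigma=3.0):
    # image: (H, W) grayscale array; subject_mask: True where the subject is.
    background = gaussian_filter(image, sigma=sigma)    # stand-in for bokeh
    return np.where(subject_mask, image, background)    # subject stays sharp

image = np.random.default_rng(1).random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True        # pretend the model found the subject here
result = portrait_mode(image, mask)
```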

In the past, zooming in with digital cameras was practically taboo because the image would inevitably become very pixelated, and the slightest hand movement would create blur. Google used software to address the issue last year in the Pixel 3 with what it calls Super Res Zoom.

The technique takes advantage of natural hand tremors to capture a burst of photos in varying positions. By combining each of the slightly varying photos, the camera software composes a photo that fills in detail that wouldn’t have been there with a normal digital zoom.

The Pixel 4’s new lens expands the ability of Super Res Zoom by adjusting to zoom in, similar to a zoom lens on a film camera. In other words, now the camera will take advantage of both the software feature and the optical lens to zoom in extra close without losing detail.
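Why do slightly shifted frames add real detail? Because each one samples the scene at different sub-pixel positions. The toy merge below makes that concrete under an idealized assumption: the shifts are known exactly, whereas Google’s software must estimate them from the hand tremor itself.

```python
import numpy as np

def merge_burst(frames, shifts, scale=2):
    # frames: list of (H, W) arrays; shifts: known sub-pixel (dy, dx) offsets.
    H, W = frames[0].shape
    acc = np.zeros((H * scale, W * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = np.arange(H) * scale + round(dy * scale)   # rows on the fine grid
        xs = np.arange(W) * scale + round(dx * scale)   # cols on the fine grid
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)   # cells never sampled stay zero

rng = np.random.default_rng(1)
truth = rng.random((8, 8))                       # the "fine" scene
frames = [truth[::2, ::2], truth[::2, 1::2],     # four coarse frames, each
          truth[1::2, ::2], truth[1::2, 1::2]]   # offset by half a pixel
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
print(np.allclose(merge_burst(frames, shifts), truth))   # True: full coverage
```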

Computational photography is an entire field of study in computer science. Dr. Ng, the Berkeley professor, teaches courses on the subject. He said he and his students were researching new techniques like the ability to apply portrait-mode effects to videos.

Say, for example, two people in a video are having a conversation, and you want the camera to automatically focus on whoever is speaking. A video camera can’t typically know how to do that because it can’t predict the future. But in computational photography, a camera could record all the footage, use artificial intelligence to determine which person is speaking and apply the auto-focusing effects after the fact. The video you’d see would shift focus between two people as they took turns speaking.

“These are examples of capabilities that are completely new and emerging in research that could completely change what we think of that’s possible,” Dr. Ng said.


If a Robotic Hand Solves a Rubik’s Cube, Does It Prove Something?

SAN FRANCISCO — Last week, on the third floor of a small building in San Francisco’s Mission District, a woman scrambled the tiles of a Rubik’s Cube and placed it in the palm of a robotic hand.

The hand began to move, gingerly spinning the tiles with its thumb and four long fingers. Each movement was small, slow and unsteady. But soon, the colors started to align. Four minutes later, with one more twist, it unscrambled the last few tiles, and a cheer went up from a long line of researchers watching nearby.

The researchers worked for a prominent artificial intelligence lab, OpenAI, and they had spent several months training their robotic hand for this task.

Though it could be dismissed as an attention-grabbing stunt, the feat was another step forward for robotics research. Many researchers believe it was an indication that they could train machines to perform far more complex tasks. That could lead to robots that can reliably sort through packages in a warehouse or to cars that can make decisions on their own.


Researchers at the OpenAI lab in San Francisco spent months training their robotic hand to solve the Rubik’s Cube. (Credit: Matt Edge for The New York Times)

“Solving a Rubik’s Cube is not very useful, but it shows how far we can push these techniques,” said Peter Welinder, one of the researchers who worked on the project. “We see this as a path to robots that can handle a wide variety of tasks.”

The project was also a way for OpenAI to promote itself as it seeks to attract the money and the talent needed to push this sort of research forward. The techniques under development at labs like OpenAI are enormously expensive — both in equipment and personnel — and for that reason, eye-catching demonstrations have become a staple of serious A.I. research.

The trick is separating the flash of the demo from the technological progress — and understanding the limitations of that technology. Though OpenAI’s hand can solve the puzzle in as little as four minutes, it drops the cube eight times out of 10, the researchers said.

“This is an interesting and positive step forward, but it is really important not to exaggerate it,” said Ken Goldberg, a professor at the University of California, Berkeley, who explores similar techniques.

A robot that can solve a Rubik’s Cube is not new. Researchers previously designed machines specifically for the task — devices that look nothing like a hand — and they can solve the puzzle in less than a second. But building devices that work like a human hand is a painstaking process in which engineers spend months laying down rules that define each tiny movement.

The OpenAI project was an achievement of sorts because its researchers did not program each movement into their robotic hand. That might take decades, if not centuries, considering the complexity of a mechanical device with a thumb and four fingers. The lab’s researchers built a computer system that learned to solve the Rubik’s Cube largely on its own.

“What is exciting about this work is that the system learns,” said Jeff Clune, a robotics professor at the University of Wyoming. “It doesn’t memorize one way to solve the problem. It learns.”

OpenAI trains its hand in simulation, randomly changing the environment as it learns. (Credit: OpenAI)

Development began with a simulation of both the hand and the cube — a digital recreation of the hardware on the third floor of OpenAI’s San Francisco headquarters. Inside the simulation, the hand learned to solve the puzzle through extreme trial and error. It spent the equivalent of 10,000 years spinning the tiles up, down, left and right, completing the task over and over again.

The researchers randomly changed the simulation in small but distinct ways. They changed the size of the hand and the color of the tiles and the amount of friction between the tiles. After the training, the hand learned to deal with the unexpected.

When the researchers transferred this computer learning to the physical hand, it could solve the puzzle on its own. Thanks to the randomness introduced in simulation, it could even solve the puzzle when wearing a rubber glove or with two fingers tied together.
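This training recipe is commonly called domain randomization. The runnable toy below is a stand-in, nothing like OpenAI’s actual simulator or reinforcement-learning code, but it captures the loop the researchers describe: re-sample the physics every episode, and keep only behavior that scores well across many randomized worlds.

```python
import random

def sample_sim_params():
    # Re-sampled every episode, like the randomized hand size, tile
    # colors and friction described above.
    return {"friction": random.uniform(0.5, 1.5),
            "hand_scale": random.uniform(0.95, 1.05)}

def episode_reward(grip_force, params):
    # Toy simulator: the ideal grip depends on the randomized physics,
    # so only a setting that works on average survives.
    ideal = params["hand_scale"] / params["friction"]
    return -(grip_force - ideal) ** 2

def train(steps=2000, worlds_per_eval=16):
    grip, best = 1.0, float("-inf")
    for _ in range(steps):
        candidate = grip + random.gauss(0, 0.05)
        score = sum(episode_reward(candidate, sample_sim_params())
                    for _ in range(worlds_per_eval))
        if score > best:            # crude hill climbing stands in for RL
            grip, best = candidate, score
    return grip

random.seed(0)
print("grip force robust to randomized physics:", round(train(), 2))
```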

At OpenAI and similar labs at Google, the University of Washington and Berkeley, many researchers believe this kind of “machine learning” will help robots master tasks they cannot master today and deal with the randomness of the physical world. Right now, robots cannot reliably sort through a bin of random items moving through a warehouse.

The hope is that will soon be possible. But getting there is expensive.

That is why OpenAI, led by the Silicon Valley start-up guru Sam Altman, recently signed a billion-dollar deal with Microsoft. And it’s why the lab wanted the world to see a demo of its robotic hand solving a Rubik’s Cube. On Tuesday, the lab released a 50-page research paper describing the science of the project. It also distributed a news release to news outlets across the globe.

“In order to keep their operation going, this is what they have to do,” said Zachary Lipton, a professor in the machine learning group at Carnegie Mellon University in Pittsburgh. “It is their life blood.”

When The New York Times was shown an early version of the news release, we asked to see the hand in action. On the first attempt, the hand dropped the cube after a few minutes of twisting and turning. A researcher placed the cube back into its palm. On the next attempt, it completed the puzzle without a hitch.

Many academics, including Dr. Lipton, bemoaned the way that artificial intelligence is hyped through news releases and showy demonstrations. But that is not something that will change anytime soon.

“These are serious technologies that people need to think about,” Dr. Lipton said. “But it is difficult for the public to understand what is happening and what they should be concerned about and what will actually affect them.”


A.I. Researchers See Danger of Haves and Have-Nots

Each big step of progress in computing — from mainframe to personal computer to internet to smartphone — has opened opportunities for more people to invent on the digital frontier.

But there is growing concern that trend is being reversed at tech’s new leading edge, artificial intelligence.

Computer scientists say A.I. research is becoming increasingly expensive, requiring complex calculations done by giant data centers, leaving fewer people with easy access to the computing firepower necessary to develop the technology behind futuristic products like self-driving cars or digital assistants that can see, talk and reason.

The danger, they say, is that pioneering artificial intelligence research will be a field of haves and have-nots. And the haves will be mainly a few big tech companies like Google, Microsoft, Amazon and Facebook, which each spend billions a year building out their data centers.

In the have-not camp, they warn, will be university labs, which have traditionally been a wellspring of innovations that eventually power new products and services.

“The huge computing resources these companies have pose a threat — the universities cannot compete,” said Craig Knoblock, executive director of the Information Sciences Institute, a research lab at the University of Southern California.

The research scientists’ warnings come amid rising concern about the power of the big tech companies. Most of the focus has been on the current generation of technology — search, online advertising, social media and e-commerce. But the scientists are worried about a barrier to exploring the technological future, when that requires staggering amounts of computing.

The modern data centers of the big tech companies are sprawling and secretive. The buildings are the size of a football field, or larger, housing rack upon rack with hundreds of thousands of computers. The doors are bulletproof. The walls are fireproof. Outsiders are rarely allowed in.

These are the engine rooms of cloud computing. They help deliver a cornucopia of entertainment and information to smartphones and laptops, and they enable millions of developers to write cloud-based software applications.

But artificial intelligence researchers, outside the big tech companies, see a worrying trend in their field. A recent report from the Allen Institute for Artificial Intelligence observed that the volume of calculations needed to be a leader in A.I. tasks like language understanding, game playing and common-sense reasoning has soared an estimated 300,000 times in the last six years.
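Compounded, that figure is striking. A quick back-of-the-envelope calculation converts the six-year total into an annual rate:

```python
import math

# The Allen Institute estimate quoted above: compute for leading A.I.
# results grew roughly 300,000-fold over six years.
total_growth, years = 300_000, 6
annual = total_growth ** (1 / years)                    # ~8.2x per year
doubling_months = 12 * math.log(2) / math.log(annual)   # ~4 months
print(f"~{annual:.1f}x per year, doubling every ~{doubling_months:.1f} months")
```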

All that computing fuel is needed to turbocharge so-called deep-learning software models, whose performance improves with more calculations and more data. Deep learning has been the primary driver of A.I. breakthroughs in recent years.

“When it’s successful, there is a huge benefit,” said Oren Etzioni, chief executive of the Allen Institute, founded in 2014 by Paul Allen, the billionaire co-founder of Microsoft. “But the cost of doing research is getting exponentially higher. As a society and an economy, we suffer if there are only a handful of places where you can be on the cutting edge.”

The evolution of one artificial intelligence lab, OpenAI, shows the changing economics, as well as the promise of deep-learning A.I. technology.

Founded in 2015, with backing from Elon Musk, OpenAI began as a nonprofit research lab. Its ambition was to develop technology at the frontier of artificial intelligence and share the benefits with the wider world. It was a vision that suggested the computing tradition of an inspired programmer, working alone on a laptop, coming up with a big idea.

This spring, OpenAI used its technology to defeat the world champion team of human players at a complex video game called Dota 2. Its software learned the game by constant trial and error over months, the equivalent of more than 45,000 years of game play.

The OpenAI scientists have realized they are engaged in an endeavor more like particle physics or weather simulation, fields demanding huge computing resources. Winning at Dota 2, for example, required spending millions of dollars renting access to tens of thousands of computer chips inside the cloud computing data centers run by companies like Google and Microsoft.
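
A little arithmetic shows why such a run cannot happen on one machine. The sketch below assumes a ten-month wall-clock training run and games simulated at real-time speed; both are illustrative assumptions, not OpenAI’s published figures.

```python
# Rough parallelism estimate for compressing 45,000 years of
# game play into a months-long training run.
years_of_play = 45_000
wall_clock_months = 10   # assumed length of the run
speedup_per_game = 1     # assume each game runs at real time

wall_clock_years = wall_clock_months / 12
parallel_games = years_of_play / (wall_clock_years * speedup_per_game)

print(f"~{parallel_games:,.0f} simultaneous games needed")
# ~54,000 games running at once under these assumptions -- hence
# tens of thousands of chips rented from cloud data centers.
```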


“As a society and an economy, we suffer if there are only a handful of places where you can be on the cutting edge,” said Oren Etzioni, the chief executive of the Allen Institute. Credit: Kyle Johnson for The New York Times

Earlier this year, OpenAI morphed into a for-profit company to attract financing and, in July, announced that Microsoft was making a $1 billion investment. Most of the money, OpenAI said, would be spent on the computing power it needed to pursue its goals, which still include widely sharing the benefits of A.I., after paying off its investors.

As part of OpenAI’s agreement with Microsoft, the software giant will eventually become the lab’s sole source of computing.

“If you don’t have enough compute, you can’t make a breakthrough,” said Ilya Sutskever, chief scientist of OpenAI.

Academics are also raising concerns about the power consumed by advanced A.I. software. Training a large deep-learning model can generate the same carbon footprint as five American cars over their lifetimes, gas included, three computer scientists at the University of Massachusetts, Amherst, estimated in a recent research paper. (The big tech companies say they buy as much renewable energy as they can, reducing the environmental impact of their data centers.)
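
Estimates like the Amherst one rest on straightforward energy accounting: the power drawn by the hardware, multiplied by training time and a data-center overhead factor, then converted to carbon at the grid’s average emissions rate. Below is a minimal sketch of that style of calculation; the constants and the example workload are illustrative assumptions, not figures quoted from the paper.

```python
# Simplified energy-accounting sketch in the style of the Amherst
# study. The constants below are illustrative assumptions: a common
# data-center overhead multiplier (PUE) and a rough U.S. grid average
# for carbon emissions per kilowatt-hour.
PUE = 1.58
LBS_CO2_PER_KWH = 0.954

def training_footprint_lbs(avg_power_watts: float, hours: float) -> float:
    """CO2-equivalent pounds emitted by one training run."""
    kwh = avg_power_watts * hours / 1000.0
    return kwh * PUE * LBS_CO2_PER_KWH

# Hypothetical workload: 8 GPUs drawing ~300 W each for 30 days.
footprint = training_footprint_lbs(avg_power_watts=8 * 300, hours=30 * 24)
print(f"{footprint:,.0f} lbs CO2e")
# ~2,600 lbs for this modest run; headline figures in the literature
# come from far larger experiments repeated across many models.
```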

Mr. Etzioni and his co-authors at the Allen Institute say that perhaps both concerns — about power use and the cost of computing — could be at least partially addressed by changing how success in A.I. technology is measured.

The field’s single-minded focus on accuracy, they say, skews research along too narrow a path.

Efficiency should also be considered. They suggest that researchers report the “computational price tag” for achieving a result in a project as well.
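
The exact reporting format is left open; one hypothetical way a “computational price tag” could accompany a headline accuracy number is sketched below. The field names and values are invented for illustration, not drawn from the Green A.I. paper.

```python
from dataclasses import dataclass

@dataclass
class ResultReport:
    """Report accuracy alongside the cost of obtaining it."""
    accuracy: float      # the usual headline number
    total_flops: float   # floating-point operations for the full project
    gpu_hours: float     # hardware time, including failed runs
    kwh_consumed: float  # energy, for the carbon-minded reader

# A hypothetical result with its computational price tag attached.
report = ResultReport(
    accuracy=0.912,
    total_flops=3.1e21,
    gpu_hours=12_500,
    kwh_consumed=4_200,
)
print(report)
```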

Since their “Green A.I.” paper was published in July, their message has resonated with many in the research community.

Henry Kautz, a professor of computer science at the University of Rochester, noted that accuracy is “really only one dimension we care about in theory and in practice.” Others, he said, include how much energy is used, how much data is required and how much skilled human effort is needed for A.I. technology to work.

A more multidimensional view, Mr. Kautz added, could help level the playing field between academic researchers and computer scientists at the big tech companies, if research projects relied less on raw computing firepower.

Big tech companies are pursuing greater efficiency in their data centers and their artificial intelligence software, which they say will make computing power more available to outside developers and academics.

John Platt, a distinguished scientist in Google’s artificial intelligence division, points to its recent development of deep-learning models, EfficientNets, which are 10 times smaller and faster than conventional ones. “That democratizes use,” he said. “We want these models to be trainable and accessible by as many people as possible.”
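
The kind of size comparison behind such claims is easy to reproduce with the pretrained architectures that ship in common frameworks. The sketch below assumes TensorFlow 2.3 or later, where these Keras model constructors are available; the roughly tenfold figure Google cites comes from its own benchmark pairings, which compare different models than the two used here.

```python
import tensorflow as tf

# Compare parameter counts of a compact EfficientNet variant
# against a conventional ResNet-50 baseline. weights=None builds
# the architectures without downloading pretrained weights.
efficientnet = tf.keras.applications.EfficientNetB0(weights=None)
resnet = tf.keras.applications.ResNet50(weights=None)

print(f"EfficientNet-B0: {efficientnet.count_params():,} parameters")
print(f"ResNet-50:       {resnet.count_params():,} parameters")
# ~5.3M vs ~25.6M -- about a 5x reduction for this pairing; larger
# gaps appear against bigger conventional models.
```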

The big tech companies have given universities many millions over the years in grants and donations, but some computer scientists say they should do more to close the gap between the A.I. research haves and have-nots. Today, they say, the relationship that tech giants have to universities is largely as a buyer, hiring away professors, graduate students and even undergraduates.

The companies would be wise to also provide substantial support for academic research, including much greater access to their wealth of computing, so that the competition for ideas and breakthroughs extends beyond corporate walls, said Ed Lazowska, a professor at the University of Washington.

A more supportive relationship, Mr. Lazowska argues, would be in their corporate self-interest. Otherwise, he said, “We’ll see a significant dilution of the ability of the academic community to produce the next generation of computer scientists who will power these companies.”

At the Allen Institute in Seattle, Mr. Etzioni said, the team will pursue techniques to improve the efficiency of artificial intelligence technology. “This is a big push for us,” he said.

But Mr. Etzioni emphasized that what he was calling green A.I. should be seen as “an opportunity for additional ingenuity, not a restraint” — or a replacement for deep learning, which relies on vast computing power, and which he calls red A.I.

Indeed, the Allen Institute has just reached an A.I. milestone by correctly answering more than 90 percent of the questions on a standard eighth-grade science test. That feat was achieved with the red A.I. tools of deep learning.


Video: BoJo versus the “terrifying, limbless chickens”


I don’t know how well UK Prime Minister Boris Johnson is going to do with his Brexit plan, but after his rather stunning speech at the United Nations yesterday, I’m certainly a fan of his when it comes to the future of technology. We’ve been warning everyone for years about the advent of Artificial Intelligence, killer robots and the dystopian future that awaits us once our technology wakes up and takes control. As it turns out, BoJo thinks about these things all the time too.

If you’ve never gotten the chance to listen to Johnson give a speech, don’t miss the supercut from The Guardian below. They’ve culled out two minutes of the best bits, where the PM warns the world about the internet of things, Alexa running your lives, nanorobots living in your bodies and, yes… terrifying limbless chickens. All of this awaits us if we don’t keep an eye on what companies like Google and Amazon are up to. (Associated Press)

Things the beleaguered British prime minister said in his astonishing speech to the U.N. General Assembly on Tuesday night: “Pink-eyed Terminators from the future.” “Terrifying limbless chickens.” “Your fridge will beep for more cheese.” …

Many didn’t know what to expect Tuesday after the court ruling came down hours before Johnson’s inaugural U.N. General Assembly speech as prime minister.

But it’s safe to say few anticipated what he dramatically and energetically delivered: a caffeinated screed about the damage that technology can do if misused — and the glories it can hand humanity if it is delivered properly.

Here’s the video. It’s really worth the click, trust me.

Johnson is always an entertaining speaker. If you Google “Boris Johnson greatest quotes” you’ll find a goldmine of memorable one-liners. But even knowing how eccentric he can be at the podium, I didn’t see the “terrifying limbless chickens” line coming.

Still, BoJo may be on to something here. We don’t have to go so far as the terrifying killer robots of Boston Dynamics to find areas for concern. Google’s algorithms are getting smarter all the time and worming their way into all aspects of society. For just one example, it was only this week that McDonald’s announced that you can now use either the Google Assistant or Amazon’s Alexa to apply for a job at the fast food chain. Just tell your phone or home-based device, “Help me find a job at McDonald’s,” and it will launch you into the process of landing an interview. After each question you answer, it plays the “I’m lovin’ it” jingle.

But what else will your digital servants be telling your prospective employer? Will they send along a greatest-hits compilation of your dodgy tweets and Facebook posts? Perhaps they will forward a list of what porn sites you’ve been visiting or how often you go to the doctor. Imagine the possibilities.

The future is here, folks. And it’s inspiring and terrifying all at the same time. Boris Johnson knows this and he’s trying to warn the world. So don’t blame him when your refrigerator produces a limbless chicken for your dinner.

The post Video: BoJo versus the “terrifying, limbless chickens” appeared first on Hot Air.
