
A.I. Is Learning From Humans. Many Humans.

BHUBANESWAR, India — Namita Pradhan sat at a desk in downtown Bhubaneswar, India, about 40 miles from the Bay of Bengal, staring at a video recorded in a hospital on the other side of the world.

The video showed the inside of someone’s colon. Ms. Pradhan was looking for polyps, small growths in the large intestine that could lead to cancer. When she found one — they look a bit like a slimy, angry pimple — she marked it with her computer mouse and keyboard, drawing a digital circle around the tiny bulge.

She was not trained as a doctor, but she was helping to teach an artificial intelligence system that could eventually do the work of a doctor.

Ms. Pradhan was one of dozens of young Indian women and men lined up at desks on the fourth floor of a small office building. They were trained to annotate all kinds of digital images, pinpointing everything from stop signs and pedestrians in street scenes to factories and oil tankers in satellite photos.

A.I., most people in the tech industry would tell you, is the future of their industry, and it is improving fast thanks to something called machine learning. But tech executives rarely discuss the labor-intensive process that goes into its creation. A.I. is learning from humans. Lots and lots of humans.

Before an A.I. system can learn, someone has to label the data supplied to it. Humans, for example, must pinpoint the polyps. The work is vital to the creation of artificial intelligence like self-driving cars, surveillance systems and automated health care.

Tech companies keep quiet about this work. And they face growing concerns from privacy activists over the large amounts of personal data they are storing and sharing with outside businesses.

Earlier this year, I negotiated a look behind the curtain that Silicon Valley’s wizards rarely grant. I made a meandering trip across India and stopped at a facility across the street from the Superdome in downtown New Orleans. In all, I visited five offices where people are doing the endlessly repetitive work needed to teach A.I. systems, all run by a company called iMerit.

There were intestine surveyors like Ms. Pradhan and specialists in telling a good cough from a bad cough. There were language specialists and street scene identifiers. What is a pedestrian? Is that a double yellow line or a dotted white line? One day, a robotic car will need to know the difference.


iMerit employees must learn unusual skills for their labeling, like spotting a problematic polyp on a human intestine. Credit: Rebecca Conway for The New York Times

What I saw didn’t look very much like the future — or at least the automated one you might imagine. The offices could have been call centers or payment processing centers. One was a timeworn former apartment building in the middle of a low-income residential neighborhood in western Kolkata that teemed with pedestrians, auto rickshaws and street vendors.

In facilities like the one I visited in Bhubaneswar and in other cities in India, China, Nepal, the Philippines, East Africa and the United States, tens of thousands of office workers are punching a clock while they teach the machines.

Tens of thousands more workers, independent contractors usually working in their homes, also annotate data through crowdsourcing services like Amazon Mechanical Turk, which lets anyone distribute digital tasks to independent workers in the United States and other countries. The workers earn a few pennies for each label.
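
As a rough illustration of how such microtasks are distributed, here is a minimal sketch of posting a single labeling task (a “HIT”) through Mechanical Turk’s API with Amazon’s boto3 library. The sandbox endpoint, the three-cent reward, the image URL and the question form are illustrative assumptions, not details reported in this article.

```python
# Minimal sketch: posting one image-labeling task ("HIT") to the
# Mechanical Turk sandbox with boto3. The endpoint, reward, image URL
# and form are illustrative assumptions, not details from the article.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# An HTMLQuestion wraps an ordinary web form that is shown to the worker.
QUESTION_XML = """\
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <form action="https://www.mturk.com/mturk/externalSubmit" method="post">
        <input type="hidden" id="assignmentId" name="assignmentId" value=""/>
        <img src="https://example.com/street_scene_0001.jpg" width="600"/>
        <p>Is there a pedestrian in this image?</p>
        <label><input type="radio" name="pedestrian" value="yes"/> Yes</label>
        <label><input type="radio" name="pedestrian" value="no"/> No</label>
        <p><input type="submit"/></p>
      </form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Is there a pedestrian in this street photo?",
    Description="Look at one image and answer a yes/no question.",
    Reward="0.03",                 # dollars per assignment: "a few pennies"
    MaxAssignments=3,              # ask three workers, take the majority
    LifetimeInSeconds=24 * 3600,
    AssignmentDurationInSeconds=300,
    Question=QUESTION_XML,
)
print("created HIT", hit["HIT"]["HITId"])
```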

Based in India, iMerit labels data for many of the biggest names in the technology and automobile industries. It declined to name these clients publicly, citing confidentiality agreements. But it recently revealed that its more than 2,000 workers in nine offices around the world are contributing to an online data-labeling service from Amazon called SageMaker Ground Truth. Previously, it listed Microsoft as a client.

Artwork and motivational affirmations on a display at the iMerit offices in the Metiabruz neighborhood of Kolkata, India. Credit: Rebecca Conway for The New York Times

One day, who knows when, artificial intelligence could hollow out the job market. But for now, it is generating relatively low-paying jobs. The market for data labeling passed $500 million in 2018 and it will reach $1.2 billion by 2023, according to the research firm Cognilytica. This kind of work, the study showed, accounted for 80 percent of the time spent building A.I. technology.

Is the work exploitative? It depends on where you live and what you’re working on. In India, it is a ticket to the middle class. In New Orleans, it’s a decent enough job. For someone working as an independent contractor, it is often a dead end.

There are skills that must be learned — like spotting signs of a disease in a video or medical scan or keeping a steady hand when drawing a digital lasso around the image of a car or a tree. In some cases, when the task involves medical videos, pornography or violent images, the work turns grisly.

“When you first see these things, it is deeply disturbing. You don’t want to go back to the work. You might not go back to the work,” said Kristy Milland, who spent years doing data-labeling work on Amazon Mechanical Turk and has become a labor activist on behalf of workers on the service.

“But for those of us who cannot afford to not go back to the work, you just do it,” Ms. Milland said.

Before traveling to India, I tried labeling images on a crowdsourcing service, drawing digital boxes around Nike logos and identifying “not safe for work” images. I was painfully inept.

Before starting this work, I had to pass a test. Even that was disheartening. The first three times, I failed. Labeling images so people could instantly search a website for retail goods — not to mention the time spent identifying crude images of naked women and sex toys as “NSFW” — wasn’t exactly inspiring.

A.I. researchers hope they can build systems that can learn from smaller amounts of data. But for the foreseeable future, human labor is essential.

“This is an expanding world, hidden beneath the technology,” said Mary Gray, an anthropologist at Microsoft and the co-author of the book “Ghost Work,” which explores the data labeling market. “It is hard to take humans out of the loop.”

Employees leaving iMerit offices in Bhubaneswar, India. The company, which is private, was started by Radha and Dipak Basu, who both had long careers in Silicon Valley. Credit: Rebecca Conway for The New York Times

Bhubaneswar is called the City of Temples. Ancient Hindu shrines rise over roadside markets at the southwestern end of the city — giant towers of stacked stone that date to the first millennium. In the city center, many streets are unpaved. Cows and feral dogs meander among the mopeds, cars and trucks.

The city — population: 830,000 — is also a rapidly growing hub for online labor. About a 15-minute drive from the temples, on a (paved) road near the city center, a white, four-story building sits behind a stone wall. Inside, there are three rooms filled with long rows of desks, each with its own wide-screen computer display. This was where Namita Pradhan spent her days labeling videos when I met her.

Ms. Pradhan, 24, grew up just outside the city and earned a degree from a local college, where she studied biology and other subjects before taking the job with iMerit. It was recommended by her brother, who was already working for the company. She lived at a hostel near her office during the week and took the bus back to her family home each weekend.

I visited the office on a temperate January day. Some of the women sitting at the long rows of desks were traditionally dressed — bright red saris, long gold earrings. Ms. Pradhan wore a green long-sleeve shirt, black pants, and white lace-up shoes as she annotated videos for a client in the United States.

Over the course of what was a typical eight-hour day, the shy 24-year-old watched about a dozen colonoscopy videos, constantly reversing the video for a closer look at individual frames.

Every so often, she would find what she was looking for. She would lasso it with a digital “bounding box.” She drew hundreds of these bounding boxes, labeling the polyps and other signs of illness, like blood clots and inflammation.
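
For a sense of what her output might look like as data, here is a toy sketch of one frame’s annotations in the [x, y, width, height] style used by datasets like COCO, plus the intersection-over-union score commonly used to check agreement between annotators. The field names are assumptions, not iMerit’s actual schema.

```python
# Toy annotation record for one video frame, with boxes in the
# [x, y, width, height] pixel convention. Field names and values are
# illustrative assumptions, not iMerit's actual schema.
annotation = {
    "video_id": "colonoscopy_0042",
    "frame_index": 1337,
    "annotator": "worker_017",
    "labels": [
        {"category": "polyp",        "bbox": [412, 230, 56, 48]},
        {"category": "inflammation", "bbox": [120, 305, 88, 61]},
    ],
}

def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes, the usual
    score for how closely two annotators (or an annotator and a
    trained model) agree on the same object."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Two annotators who drew nearly the same circle around one polyp:
print(round(iou([412, 230, 56, 48], [408, 226, 60, 52]), 2))  # ~0.86
```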

Namita Pradhan, second from right, works alongside colleagues at the iMerit offices in Bhubaneswar. Credit: Rebecca Conway for The New York Times

Her client, a company in the United States that iMerit is not allowed to name, will eventually feed her work into an A.I. system so it can learn to identify medical conditions on its own. The colon owner is not necessarily aware the video exists. Ms. Pradhan doesn’t know where the images came from. Neither does iMerit.

Ms. Pradhan learned the task during seven days of online video calls with a nonpracticing doctor, based in Oakland, Calif., who helps train workers at many iMerit offices. But some contend that experienced doctors and medical students should do this labeling themselves.

This work requires people “who have a medical background, and the relevant knowledge in anatomy and pathology,” said Dr. George Shih, a radiologist at Weill Cornell Medicine and NewYork-Presbyterian and the co-founder of the start-up MD.ai, which helps organizations build artificial intelligence for health care.

When we chatted about her work, Ms. Pradhan called it “quite interesting,” but tiring. As for the graphic nature of the videos? “It was disgusting at first, but then you get used to it.”

The images she labeled were grisly, but not as grisly as others handled at iMerit. Its clients are also building artificial intelligence that can identify and remove unwanted images on social networks and other online services. That means labels for pornography, graphic violence and other noxious images.

This work can be so upsetting that iMerit tries to limit how much of it workers see. Pornography and violence are mixed with more innocuous images, and those labeling the grisly images are sequestered in separate rooms to shield other workers, said Liz O’Sullivan, who oversaw data annotation at an A.I. start-up called Clarifai and has worked closely with iMerit on such projects.

Other labeling companies will have workers annotate unlimited numbers of these images, Ms. O’Sullivan said.

“I would not be surprised if this causes post-traumatic stress disorder — or worse. It is hard to find a company that is not ethically deplorable that will take this on,” she said. “You have to pad the porn and violence with other work, so the workers don’t have to look at porn, porn, porn, beheading, beheading, beheading.”

iMerit said in a statement it does not compel workers to look at pornography or other offensive material and only takes on the work when it can help improve monitoring systems.

Ms. Pradhan and her fellow labelers earn between $150 and $200 a month, which pulls in between $800 and $1,000 of revenue for iMerit, according to one company executive.

By United States standards, Ms. Pradhan’s salary is indecently low. But for her and many others in these offices, it is about an average salary for a data-entry job.

iMerit employees Prasenjit Baidya and his wife, Barnali Paik, at Mr. Baidya’s family home in the state of West Bengal. He said he was happy with the opportunities the work had given him. Credit: Rebecca Conway for The New York Times

Prasenjit Baidya grew up on a farm about 30 miles from Kolkata, the largest city in West Bengal, on the east coast of India. His parents and extended family still live in his childhood home, a cluster of brick buildings built at the turn of the 19th century. They grow rice and sunflowers in the surrounding fields and dry the seeds on rugs spread across the rooftops.

He was the first in his family to get a college education, which included a computer class. But the class didn’t teach him all that much. The room offered only one computer for every 25 students. He learned his computer skills after college, when he enrolled in a training course run by a nonprofit called Anudip. It was recommended by a friend, and it cost the equivalent of $5 a month.

Anudip runs English and computer courses across India, training about 22,000 people a year. It feeds students directly into iMerit, which its founders set up as a sister operation in 2013. Through Anudip, Mr. Baidya landed a job at an iMerit office in Kolkata, and so did his wife, Barnali Paik, who grew up in a nearby village.

Over the last six years, iMerit has hired more than 1,600 students from Anudip. It now employs about 2,500 people in total. More than 80 percent come from families with incomes below $150 a month.

Founded in 2012 and still a private company, iMerit has its employees perform digital tasks like transcribing audio files or identifying objects in photos. Businesses across the globe pay the company to use its workers, and increasingly, they assist work on artificial intelligence.

“We want to bring people from low-income backgrounds into technology — and technology jobs,” said Radha Basu, who founded Anudip and iMerit with her husband, Dipak, after long careers in Silicon Valley with the tech giants Cisco Systems and HP.

The average age of these workers is 24. Like Mr. Baidya, most of them come from rural villages. The company recently opened a new office in Metiabruz, a largely Muslim neighborhood in western Kolkata. There, it hires mostly Muslim women whose families are reluctant to let them outside the bustling area. They are not asked to look at pornographic images or violent material.

Employees in a training session at the iMerit offices in Metiabruz in Kolkata. Credit: Rebecca Conway for The New York Times

At first, iMerit focused on simple tasks — sorting product listings for online retail sites, vetting posts on social media. But it has shifted into work that feeds artificial intelligence.

The growth of iMerit and similar companies represents a shift away from crowdsourcing services like Mechanical Turk. iMerit and its clients have greater control over how workers are trained and how the work is done.

Mr. Baidya, now a manager at iMerit, oversees an effort to label street scenes used in training driverless cars for a major company in the United States. His team analyzes and labels digital photos as well as three-dimensional images captured by Lidar, devices that measure distances using pulses of light. They spend their days drawing bounding boxes around cars, pedestrians, stop signs and power lines.
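
As a hedged sketch of what one such 3-D label might look like: autonomous-driving datasets commonly store Lidar annotations as cuboids, a center point plus box dimensions and a heading angle. The record and point-in-box test below are illustrative assumptions, not the client’s actual format.

```python
# Toy 3-D cuboid label for one Lidar sweep: a center, box dimensions
# and a heading angle, in the style common to self-driving datasets.
# Names and numbers are illustrative assumptions.
import math

cuboid = {
    "frame": "lidar_sweep_000731",
    "category": "pedestrian",
    "center_m": (12.4, -3.1, 0.9),   # x, y, z in meters from the sensor
    "size_m": (0.6, 0.7, 1.7),       # length, width, height
    "yaw_rad": math.pi / 2,          # heading in the ground plane
}

def contains(cuboid, point):
    """True if a Lidar return (x, y, z) falls inside the labeled box."""
    cx, cy, cz = cuboid["center_m"]
    length, width, height = cuboid["size_m"]
    dx, dy, dz = point[0] - cx, point[1] - cy, point[2] - cz
    # Rotate the offset into the box's own frame, then check each extent.
    c, s = math.cos(-cuboid["yaw_rad"]), math.sin(-cuboid["yaw_rad"])
    rx, ry = c * dx - s * dy, s * dx + c * dy
    return (abs(rx) <= length / 2 and abs(ry) <= width / 2
            and abs(dz) <= height / 2)

print(contains(cuboid, (12.5, -3.0, 1.2)))  # True: point is inside the box
```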

He said the work could be tedious, but it had given him a life he might not have otherwise had. He and his wife recently bought an apartment in Kolkata, within walking distance of the iMerit office where she works.

“The changes in my life — in terms of my financial situation, my experiences, my skills in English — have been a dream,” he said. “I got a chance.”

Oscar Cabezas at the New Orleans office of iMerit. He joined the company when it started work on a Spanish-language digital assistant. Credit: Bryan Tarnowski for The New York Times

A few weeks after my trip to India, I took an Uber through downtown New Orleans. About 18 months ago, iMerit moved into one of the buildings across the street from the Superdome.

A major American tech company needed a way of labeling data for a Spanish-language version of its home digital assistant. So it sent the data to the new iMerit office in New Orleans.

After Hurricane Katrina in 2005, hundreds of construction workers and their families moved into New Orleans to help rebuild the city. Many stayed. A number of Spanish speakers came with that new work force, and the company began hiring them.

Oscar Cabezas, 23, moved with his mother to New Orleans from Colombia. His stepfather found work in construction, and after college Mr. Cabezas joined iMerit as it began working on the Spanish-language digital assistant.

He annotated everything from tweets to restaurant reviews, identifying people and places and pinpointing ambiguities. In Guatemala, for instance, “pisto” means money, but in Mexico, it means beer. “Every day was a new project,” he said.

The office has expanded into other work, serving businesses that want to keep their data within the United States. Some projects must remain stateside, for legal and security purposes.

Glenda Hernandez, 42, who was born in Guatemala, said she missed her old work on the digital assistant project. She loved to read. She reviewed books online for big publishing companies so she could get free copies, and she relished the opportunity of getting paid to read in Spanish.

Glenda Hernandez, part of the iMerit staff in New Orleans, has learned to tell the difference between a good cough and a cough that could indicate illness. Credit: Bryan Tarnowski for The New York Times

“That was my baby,” she said of the project.

She was less interested in image tagging or projects like the one that involved annotating recordings of people coughing, a way to build A.I. that identifies symptoms of illness over the phone.

“Listening to coughs all day is kind of disgusting,” she said.

The work is easily misunderstood, said Ms. Gray, the Microsoft anthropologist. Listening to people cough all day may be disgusting, but that is also how doctors spend their days. “We don’t think of that as drudgery,” she said.

Ms. Hernandez’s work is intended to help doctors do their jobs or maybe, one day, replace them. She takes pride in that. Moments after complaining about the project, she pointed to her colleagues across the office.

“We were the cough masters,” she said.

Kristy Milland of Toronto spent 14 years working for Amazon Mechanical Turk, which crowdsources data annotation tasks. Now she tries to improve conditions for people in those jobs. Credit: Arden Wray for The New York Times

In 2005, Kristy Milland signed up for her first job on Amazon Mechanical Turk. She was 26, and living in Toronto with her husband, who managed a local warehouse. Mechanical Turk was a way of making a little extra money.

The first project was for Amazon itself. Three photos of a storefront would pop up on her laptop, and she would choose the one that showed the front door. Amazon was building an online service similar to Google Street View, and the company needed help picking the best photos.

She made three cents for each click, or about 18 cents a minute. In 2010, her husband lost his job, and “MTurk” became a full-time gig. For two years, she worked six or seven days a week, sometimes as much as 17 hours a day. She made about $50,000 a year.

“It was enough to live on then. It wouldn’t be now,” Ms. Milland said.

The work at that time didn’t really involve A.I. For another project, she would pull information out of mortgage documents or retype names and addresses from photos of business cards, sometimes for as little as a dollar an hour.

Around 2010, she started labeling for A.I. projects. Ms. Milland tagged all sorts of data, like gory images that showed up on Twitter (to help build A.I. that can remove gory images from the social network) or aerial footage likely taken somewhere in the Middle East (presumably for A.I. that the military and its partners are building to identify drone targets).

Projects from American tech giants, Ms. Milland said, typically paid more than the average job — about $15 an hour. But the job didn’t come with health care or paid vacation, and the work could be mind-numbing — or downright disturbing. She called it “horrifically exploitative.” Amazon declined to comment.

Since 2012, Ms. Milland, now 40, has been part of an organization called TurkerNation, which aims to improve conditions for thousands of people who do this work. In April, after 14 years on the service, she quit.

She is in law school, and her husband earns $600 a month less than their rent, which does not include utilities. So, she said, they are preparing to go into debt. But she will not go back to labeling data.

“This is a dystopian future,” she said. “And I am done.”


So now we’re using AI to interpret “mysterious space signals”


While a bit on the frightening side, the science behind this story is still kind of cool and worth a look. The “mysterious space signals” referenced in the title are probably more familiar to those of you who follow such topics as Fast Radio Bursts (FRBs). They originate all over the universe, though not from our own Milky Way Galaxy (yet, thankfully), and are composed of compact, complex radio waves that don’t seem like the sort of thing you’d get from a normal cosmic event like a supernova or the creation of a black hole.

The problem is, they are rare and very brief, so we’ve had trouble trying to track them and pin them down. Now a laboratory in Australia has worked out a way to use Artificial Intelligence to do just that. (NY Post)

Wael Farah, a doctoral student at Swinburne University of Technology in Melbourne, Australia, developed a machine-learning system that recognized the signatures of FRBs as they arrive.

Farah’s system trained the Molonglo telescope in Canberra to spot FRBs and switch over to its most detailed recording mode, producing the finest records of FRBs yet.

“It is fascinating to discover a signal that traveled halfway through the universe,” he said. The research was recently published in the Monthly Notices of the Royal Astronomical Society.

Many of the intense flashes have traveled billions of light-years across space.
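
The published pipeline is a trained machine-learning classifier; as a toy stand-in, the triggering idea looks roughly like this: watch a dedispersed power time series and, when a sample’s signal-to-noise ratio crosses a threshold, switch the backend into its detailed recording mode. All numbers below are illustrative.

```python
# Toy stand-in for the real-time trigger: scan an already-dedispersed
# power time series and flag any sample whose signal-to-noise ratio
# exceeds a threshold. Farah's actual pipeline used a trained
# classifier; this threshold rule and the numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(0.0, 1.0, 10_000)   # background noise
series[6_000] += 12.0                   # one injected millisecond burst

def detect_candidates(x, snr_threshold=8.0):
    # Median/MAD noise estimate, so the burst itself does not
    # inflate the estimate the way a mean/std would.
    mad = np.median(np.abs(x - np.median(x))) * 1.4826
    snr = (x - np.median(x)) / mad
    return np.flatnonzero(snr > snr_threshold)

for sample in detect_candidates(series):
    print(f"candidate at sample {sample}: switch to detailed recording mode")
```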

I recall watching a documentary on this phenomenon a while back and I’d thought they had determined that the FRBs were caused by supermassive objects like neutron stars that fall into a death spiral around each other and eventually collide. But the way these scientists are describing them, the FRBs apparently have “mysterious structures, patterns of peaks and valleys in radio waves that play out in just milliseconds.” This is unlike what you’d expect to see coming out of a massive collision or explosion like the ones that randomly happen around the universe.

So what are they? Signs of intelligent alien civilizations, as some have speculated? If so, they must be up to something pretty spectacular if it’s producing enough energy to reach us with that much power from far off galaxies.

But is this a good enough excuse to unleash even more Artificial Intelligence into the global web? I’ve sort of given up on hoping that we could avoid the eventual takeover of the robots once the AI wakes up and realizes that its creators are more of a nuisance than anything else and the first problem it needs to fix is… us. So if we’re going to be messing around with Artificial Intelligence anyway, I suppose we might as well let it listen for aliens.

That’s quite the pairing though, at least in terms of doomsday scenarios. Carl Sagan warned us long ago that alerting advanced extraterrestrial civilizations to our presence was probably a bad idea because it would wind up going poorly for us. And then there was Stephen Hawking, who once said this on the subject:

“The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”

So now we’re mixing the search for possible extraterrestrials with Artificial Intelligence. As the old saying reminds us… what could possibly go wrong?



With $1 Billion From Microsoft, an A.I. Lab Wants to Mimic the Brain

SAN FRANCISCO — As the waitress approached the table, Sam Altman held up his phone. That made it easier to see the dollar amount typed into an investment contract he had spent the last 30 days negotiating with Microsoft.

“$1,000,000,000,” it read.

The investment from Microsoft, signed early this month and announced on Monday, signals a new direction for Mr. Altman’s research lab.

In March, Mr. Altman stepped down from his daily duties as the head of Y Combinator, the start-up “accelerator” that catapulted him into the Silicon Valley elite. Now, at 34, he is the chief executive of OpenAI, the artificial intelligence lab he helped create in 2015 with Elon Musk, the billionaire chief executive of the electric carmaker Tesla.

Mr. Musk left the lab last year to concentrate on his own A.I. ambitions at Tesla. Since then, Mr. Altman has remade OpenAI, founded as a nonprofit, into a for-profit company so it could more aggressively pursue financing. Now he has landed a marquee investor to help it chase an outrageously lofty goal.

He and his team of researchers hope to build artificial general intelligence, or A.G.I., a machine that can do anything the human brain can do.

A.G.I. still has a whiff of science fiction. But in their agreement, Microsoft and OpenAI discuss the possibility with the same matter-of-fact language they might apply to any other technology they hope to build, whether it’s a cloud-computing service or a new kind of robotic arm.

“My goal in running OpenAI is to successfully create broadly beneficial A.G.I.,” Mr. Altman said in a recent interview. “And this partnership is the most important milestone so far on that path.”

In recent years, a small but fervent community of artificial intelligence researchers has set its sights on A.G.I., backed by some of the wealthiest companies in the world. DeepMind, a top lab owned by Google’s parent company, says it is chasing the same goal.


In a joint phone interview with Mr. Altman, Microsoft’s chief executive, Satya Nadella, later compared A.G.I. to his company’s efforts to build a quantum computer, a machine that would be exponentially faster than today’s machines. “Whether it’s our pursuit of quantum computing or it’s a pursuit of A.G.I., I think you need these high-ambition North Stars,” he said.

Mr. Altman’s 100-employee company recently built a system that could beat the world’s best players at a video game called Dota 2. Just a few years ago, this kind of thing did not seem possible.

Dota 2 is a game in which each player must navigate a complex, three-dimensional environment along with several other players, coordinating a careful balance between attack and defense. In other words, it requires old-fashioned teamwork, and that is a difficult skill for machines to master.

OpenAI mastered Dota 2 thanks to a mathematical technique called reinforcement learning, which allows machines to learn tasks by extreme trial and error. By playing the game over and over again, automated pieces of software, called agents, learned which strategies are successful.
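
OpenAI’s Dota 2 system is vastly larger than anything that fits here, but the trial-and-error idea can be shown in miniature with tabular Q-learning on a toy “corridor” game, where the agent is rewarded only for reaching the far end. The environment and hyperparameters are illustrative assumptions.

```python
# Tabular Q-learning on a toy "corridor": five cells, move left or
# right, reward only for reaching the last cell. A miniature of the
# trial-and-error idea; OpenAI's Dota 2 system used reinforcement
# learning at a vastly larger scale, not this exact algorithm.
import random

N_STATES, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for _ in range(500):                   # 500 practice games
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what worked, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned greedy policy: +1 ("move right") from every cell.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

After a few hundred simulated games, the greedy policy moves right at every cell — the toy equivalent of an agent discovering a winning strategy through sheer repetition.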

The agents learned those skills over the course of several months, racking up more than 45,000 years of game play. That required enormous amounts of raw computing power. OpenAI spent millions of dollars renting access to tens of thousands of computer chips inside cloud computing services run by companies like Google and Amazon.

Eventually, Mr. Altman and his colleagues believe, they can build A.G.I. in a similar way. If they can gather enough data to describe everything humans deal with on a daily basis — and if they have enough computing power to analyze all that data — they believe they can rebuild human intelligence.

Mr. Altman painted the deal with Microsoft as a step in this direction. As Microsoft invests in OpenAI, the tech giant will also work on building new kinds of computing systems that can help the lab analyze increasingly large amounts of information.

“This is about really having that tight feedback cycle between a high-ambition pursuit of A.G.I. and what is our core business, which is building the world’s computer,” Mr. Nadella said.

That work will likely include computer chips designed specifically for training artificial intelligence systems. Like Google, Amazon and dozens of start-ups across the globe, Microsoft is already exploring this new kind of chip.

ImageWestlake Legal Group 22openai2-articleLarge With $1 Billion From Microsoft, an A.I. Lab Wants to Mimic the Brain Y Combinator Research OpenAI Labs Nadella, Satya Musk, Elon Microsoft Corp Artificial Intelligence Altman, Samuel H

Mr. Altman and Satya Nadella, the chief executive of Microsoft, which is investing $1 billion in OpenAI. Credit: Ian C. Bates for The New York Times

Most of that $1 billion, Mr. Altman said, will be spent on the computing power OpenAI needs to achieve its ambitions. And under the terms of the new contract, Microsoft will eventually become the lab’s sole source of computing power.

Mr. Nadella said Microsoft would not necessarily invest that billion dollars all at once. It could be doled out over the course of a decade or more. Microsoft is investing dollars that will be fed back into its own business, as OpenAI purchases computing power from the software giant, and the collaboration between the two companies could yield a wide array of technologies.

Because A.G.I. is not yet possible, OpenAI is starting with narrower projects. It built a system recently that tries to understand natural language. The technology could feed everything from digital assistants like Alexa and Google Home to software that automatically analyzes documents inside law firms, hospitals and other businesses.

The deal is also a way for these two companies to promote themselves. OpenAI needs computing power to fulfill its ambitions, but it must also attract the world’s leading researchers, which is hard to do in today’s market for talent. Microsoft is competing with Google and Amazon in cloud computing, where A.I. capabilities are increasingly important.

The question is how seriously we should take the idea of artificial general intelligence. Like others in the tech industry, Mr. Altman often talks as if its future is inevitable.

“I think that A.G.I. will be the most important technological development in human history,” he said during the interview with Mr. Nadella. Mr. Altman alluded to concerns from people like Mr. Musk that A.G.I. could spin outside our control. “Figuring out a way to do that is going to be one of the most important societal challenges we face.”

But a game like Dota 2 is a far cry from the complexities of the real world.

Artificial intelligence has improved in significant ways in recent years, thanks to many of the technologies cultivated at places like DeepMind and OpenAI. There are systems that can recognize images, identify spoken words, and translate between languages with an accuracy that was not possible just a few years ago. But this does not mean that A.G.I. is near or even that it is possible.

“We are no closer to A.G.I. than we have ever been,” said Oren Etzioni, the chief executive of the Allen Institute for Artificial Intelligence, an influential research lab in Seattle.

Geoffrey Hinton, the Google researcher who recently won the Turing Award — often called the Nobel Prize of computing — for his contributions to artificial intelligence, was asked about the race to A.G.I.

“It’s too big a problem,” he said. “I’d much rather focus on something where you can figure out how you might solve it.” The other question with A.G.I., he added, is: Why do we need it?


Despite High Hopes, Self-Driving Cars Are ‘Way in the Future’

A year ago, Detroit and Silicon Valley had visions of putting thousands of self-driving taxis on the road in 2019, ushering in an age of driverless cars.

Most of those cars have yet to arrive — and it is likely to be years before they do. Several carmakers and technology companies have concluded that making autonomous vehicles is going to be harder, slower and costlier than they thought.

“We overestimated the arrival of autonomous vehicles,” Ford’s chief executive, Jim Hackett, said at the Detroit Economic Club in April.

In the most recent sign of the scramble to regroup, Ford and Volkswagen said Friday that they were teaming up to tackle the self-driving challenge.

The two automakers plan to use autonomous-vehicle technology from a Pittsburgh start-up, Argo AI, in ride-sharing services in a few urban zones as early as 2021. But Argo’s chief executive, Bryan Salesky, said the industry’s bigger promise of creating driverless cars that could go anywhere was “way in the future.”

He and others attribute the delay to something as obvious as it is stubborn: human behavior.

Researchers at Argo say the cars they are testing in Pittsburgh and Miami have to navigate unexpected situations every day. Recently, one of the company’s cars encountered a bicyclist riding the wrong way down a busy street between other vehicles. Another Argo test car came across a street sweeper that suddenly turned a giant circle in an intersection, touching all four corners and crossing lanes of traffic that had the green light.

Bryan Salesky, chief executive and co-founder of Argo AI. Credit: Michael Noble Jr. for The New York Times
An Argo vehicle in Pittsburgh. The company is also testing its cars in Miami. Credit: Jeff Swensen for The New York Times

“You see all kinds of crazy things on the road, and it turns out they’re not all that infrequent, but you have to be able to handle all of them,” Mr. Salesky said. “With radar and high-resolution cameras and all the computing power we have, we can detect and identify the objects on a street. The hard part is anticipating what they’re going to do next.”

Mr. Salesky said Argo and many competitors had developed about 80 percent of the technology needed to put self-driving cars into routine use — the radar, cameras and other sensors that can identify objects far down roads and highways. But the remaining 20 percent, including developing software that can reliably anticipate what other drivers, pedestrians and cyclists are going to do, will be much more difficult, he said.

A year ago, many industry executives exuded much greater certainty. They thought that their engineers had solved the most vexing technical problems and promised that self-driving cars would be shuttling people around town in at least several cities by sometime this year.

Waymo, which is owned by Google’s parent company, Alphabet, announced that it would buy up to 62,000 Chrysler minivans and 20,000 Jaguar electric cars for its ride service, which operates in the Phoenix suburbs. General Motors announced that it would also start a taxi service by the end of this year with vehicles, developed by its Cruise division, that have no steering wheels or pedals.

Captivated by the notion of disrupting the transportation system, deep-pocketed investors rushed to get a piece of the action. Honda and the Japanese tech giant SoftBank invested in Cruise. Amazon, which hopes to deliver goods to its shoppers by driverless vehicles, invested in Aurora, another start-up in this area.

“There was this incredible optimism,” said Sam Abuelsamid, an analyst at Navigant Research. “Companies thought this was a very straightforward problem. You just throw in some sensors and artificial intelligence, and it would be easy to do.”


A Waymo autonomous vehicle in December in Chandler, Ariz. The company has a fleet of about 600 test vehicles, about as many as it had last year. Credit: Caitlin O’Hara for The New York Times

The industry’s unbridled confidence was quickly dented when a self-driving car being tested by Uber hit and killed a woman walking a bicycle across a street last year in Tempe, Ariz. A safety driver was at the wheel of the vehicle, but was watching a TV show on her phone just before the crash, according to the Tempe Police Department.

Since that fatality, “almost everybody has reset their expectations,” Mr. Abuelsamid said. It was believed to be the first pedestrian death involving a self-driving vehicle. Elsewhere in the United States, three Tesla drivers have died in crashes that occurred while the company’s Autopilot driver-assistance system was engaged and both it and the drivers failed to detect and react to hazards.

Companies like Waymo and G.M. now say they still expect to roll out thousands of self-driving cars — but they are much more reluctant to say when that will happen.

Waymo operates a fleet of 600 test vehicles — the same number it had on the road a year ago. A portion of them are the first set of vehicles it will be buying through the agreements with Chrysler and Jaguar. The company said it expected to increase purchases as it expanded its ride service.

“We are able to do the driving task,” Tekedra Mawakana, Waymo’s chief external officer, said in an interview. “But the reason we don’t have a service in 50 states is that we are still validating a host of elements related to offering a service. Offering a service is very different than building a technology.”

G.M. declined to say if it was still on track to start a ride service “at scale” this year, as it originally planned. Its chief executive, Mary Barra, told analysts in June that Cruise was moving “at a very aggressive pace” without saying when commercial operations would begin.

China, which has the world’s largest auto market and is investing heavily in electric vehicles, is trailing in development of self-driving cars, analysts say. The country allows automakers to test such cars on public roads in only a handful of cities. One leading Chinese company working on autonomous technology, Baidu, is doing much of its research at a lab in Silicon Valley.

Tesla and its chief executive, Elon Musk, are nearly alone in predicting widespread use of self-driving cars within the next year. In April, Mr. Musk said Tesla would have as many as a million autonomous “robo taxis” by the end of 2020.

Tesla believes its new self-driving system, based on a computer chip it designed, and the data it gathers from Tesla cars now on the road will enable the company to start offering fully autonomous driving next year.

But many experts are very skeptical that Tesla can pull that off.

An Uber driverless car on a test drive in 2016 in San Francisco. Since one of the company’s self-driving test cars hit and killed a pedestrian in Arizona last year, industry expectations have changed. Credit: Eric Risberg/Associated Press

Mr. Salesky said it was relatively easy to enable a car to see and identify obstacles on the road with the help of radar, cameras and lidar — a kind of radar that uses lasers — as well as the software and computing power to process images and data.

It’s much more difficult to prepare self-driving cars for unusual circumstances — pedestrians crossing the road when cars have the green light, cars making illegal turns. Researchers call these “corner cases,” although in city traffic they occur often.

“If you’re out driving 20 hours a day, you have a lot of opportunities to see these things,” Mr. Salesky said.

Equally challenging is teaching self-driving cars the finer points of driving, sometimes known as “micro maneuvers.” If a vehicle ahead is moving slowly while looking for a parking space, it is best not to follow too closely, so that car has room to back into an open spot. And a car edging out into an intersection can be a sign that its driver may dart out, even without the right of way.

The technology is available now to create a car that won’t hit anything. But such a car would constantly slam on the brakes.

“If the car is overly cautious, this becomes a nuisance,” said Huei Peng, director of Mcity, an autonomous-vehicle research center at the University of Michigan.
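
A toy calculation shows why. Suppose the planner brakes whenever its time-to-collision estimate for the object ahead drops below some threshold: set that threshold conservatively enough and the car stops for a pedestrian who is merely drifting toward the curb. The rule and the numbers below are illustrative assumptions, not any company’s planning stack.

```python
# Toy version of the over-cautious car: brake whenever time-to-collision
# (TTC) with the object ahead falls below a threshold. The rule and the
# numbers are illustrative assumptions, not any company's planner.
def time_to_collision(gap_m, closing_speed_mps):
    """Seconds until impact if neither party changes speed."""
    if closing_speed_mps <= 0:        # pulling away: no collision course
        return float("inf")
    return gap_m / closing_speed_mps

def should_brake(gap_m, closing_speed_mps, threshold_s):
    return time_to_collision(gap_m, closing_speed_mps) < threshold_s

# A pedestrian nine meters away, drifting toward the lane at 0.4 m/s
# (TTC = 22.5 seconds):
print(should_brake(9.0, 0.4, threshold_s=2.0))    # False -> keep driving
print(should_brake(9.0, 0.4, threshold_s=30.0))   # True  -> the "never hit
                                                  # anything" car slams on
                                                  # the brakes
```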

Some companies argue that the way to get more self-driving vehicles on the road is by using them in controlled settings and situations. May Mobility operates autonomous shuttles in Detroit; Providence, R.I.; and Columbus, Ohio. These are not minivans or full-size cars but six-passenger golf carts. They travel short, defined routes at no more than 25 miles per hour. In many cases they provide public transportation where none is available.

“A vehicle that needs to go at higher speeds will need more expensive, more exotic sensors,” said Alisyn Malek, the company’s chief operating officer. “Using a low-speed vehicle makes the task of operating an autonomous vehicle easier, so we can use what works in the technology today.”

The dashboard of a May Mobility autonomous shuttle, which the company operates in Detroit; Columbus, Ohio; and Providence, R.I. Credit: Kayana Szymczak for The New York Times
John Jay Alves boarding a May Mobility shuttle at one of 12 stops on a loop in Providence. Credit: Kayana Szymczak for The New York Times

The company has been running six shuttles between the Providence train station and Olneyville, a growing neighborhood a few miles away, since May. The trial is backed by the Rhode Island Department of Transportation, which is paying May Mobility $800,000 for the first year of service. The company expects to take its service to Grand Rapids, Mich., this year, in a partnership led by the city. Based in Ann Arbor, Mich., May Mobility has raised $33 million from investors, including $10 million from Toyota.

Also this year, a Boston start-up, Optimus Ride, plans to begin operating driverless shuttles at the Brooklyn Navy Yard that also travel at 25 m.p.h. or less.

Ms. Malek said she believed it would take years and perhaps even a decade or more to develop driverless cars that could travel anywhere, any time.

“Our focus is, how can we use the technology today?” she said. “We realize that today we have to start somewhere.”


Facial Recognition Tech Is Growing Stronger, Thanks to Your Face

SAN FRANCISCO — Dozens of databases of people’s faces are being compiled without their knowledge by companies and researchers, with many of the images then being shared around the world, in what has become a sprawling ecosystem fueling the spread of facial recognition technology.

The databases are pulled together with images from social networks, photo websites, dating services like OkCupid and cameras placed in restaurants and on college quads. While there is no precise count of the data sets, privacy activists have pinpointed repositories that were built by Microsoft, Stanford University and others, with one holding more than 10 million images while another had more than two million.

The face compilations are being driven by the race to create leading-edge facial recognition systems. This technology learns how to identify people by analyzing as many digital pictures as possible using “neural networks,” complex mathematical systems that require vast amounts of data to learn to recognize patterns.
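
The matching step of such a system can be sketched in a few lines: a network maps each photo to an “embedding” vector, and two photos are judged to show the same person when their embeddings are close. In the toy version below the network is a random projection and the threshold is an assumption; a real system would use a model trained on those vast image troves.

```python
# Toy version of the matching step: map each photo to a unit-length
# "embedding" and compare embeddings by cosine similarity. The random
# projection stands in for a trained neural network, and the 0.8
# threshold is an assumption.
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(128, 64 * 64))   # stand-in for a trained network

def embed(image_pixels):
    v = W @ image_pixels.ravel()
    return v / np.linalg.norm(v)

def same_person(img_a, img_b, threshold=0.8):
    return float(embed(img_a) @ embed(img_b)) > threshold

photo = rng.random((64, 64))
noisy_copy = photo + 0.01 * rng.random((64, 64))
print(same_person(photo, noisy_copy))            # True: near-duplicate face
print(same_person(photo, rng.random((64, 64))))  # False: different image
```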

Tech giants like Facebook and Google have most likely amassed the largest face data sets, which they do not distribute, according to research papers. But other companies and universities have widely shared their image troves with researchers, governments and private enterprises in Switzerland, India, China, Australia and Singapore for training artificial intelligence, according to academics, activists and public papers.

Companies and labs have gathered facial images for more than a decade, and the databases are merely one layer to building facial recognition technology. But people often have no idea that their faces ended up in them. And while names are typically not attached to the photos, individuals can be recognized because each face is unique to a person.


A visualization of 2,000 of the identities included in the MS Celeb database from Microsoft. Credit: Open Data Commons Public Domain Dedication and License, via Megapixels

Questions about the data sets are rising because the technologies that they have enabled are now being used in potentially invasive ways. Documents released last Sunday revealed that Immigration and Customs Enforcement officials employed facial recognition technology to scan motorists’ photos to identify undocumented immigrants. The F.B.I. also spent more than a decade using such systems to compare driver’s license and visa photos against the faces of suspected criminals, according to a Government Accountability Office report last month. On Wednesday, a congressional hearing tackled the government’s use of the technology.

There is no oversight of the data sets. Activists and others said they were angered by the possibility that people’s likenesses had been used to build ethically questionable technology and that the images could be misused. At least one face database created in the United States was shared with a company in China that has been linked to ethnic profiling of the country’s minority Uighur Muslims.

Over the past several weeks, some companies and universities, including Microsoft and Stanford, removed their face data sets from the internet because of privacy concerns. But given that the images were already so well distributed, they are most likely still being used in the United States and elsewhere, researchers and activists said.

“You come to see that these practices are intrusive, and you realize that these companies are not respectful of privacy,” said Liz O’Sullivan, who oversaw one of these databases at the artificial intelligence start-up Clarifai. She said she left the New York-based company in January to protest such practices.

“The more ubiquitous facial recognition becomes, the more exposed we all are to being part of the process,” said Liz O’Sullivan, a technologist who worked at the artificial intelligence start-up Clarifai. Credit: Nathan Bajar for The New York Times

“The more ubiquitous facial recognition becomes, the more exposed we all are to being part of the process,” she said.

Google, Facebook and Microsoft declined to comment.


One database, which dates to 2014, was put together by researchers at Stanford. It was called Brainwash, after a San Francisco cafe of the same name, where the researchers tapped into a camera. Over three days, the camera took more than 10,000 images, which went into the database, the researchers wrote in a 2015 paper. The paper did not address whether cafe patrons knew their images were being taken and used for research. (The cafe has closed.)

The Stanford researchers then shared Brainwash. According to research papers, it was used in China by academics associated with the National University of Defense Technology and Megvii, an artificial intelligence company that The New York Times previously reported has provided surveillance technology for monitoring Uighurs.

The Brainwash data set was removed from its original website last month after Adam Harvey, an activist in Germany who tracks the use of these repositories through a website called MegaPixels, drew attention to it. Links between Brainwash and papers describing work to build A.I. systems at the National University of Defense Technology in China have also been deleted, according to documentation from Mr. Harvey.

Stanford researchers who oversaw Brainwash did not respond to requests for comment. “As part of the research process, Stanford routinely makes research documentation and supporting materials available publicly,” a university official said. “Once research materials are made public, the university does not track their use.”

Duke University researchers also started a database in 2014, using eight cameras on campus to collect images, according to a 2016 paper published as part of the European Conference on Computer Vision. The cameras were marked with signs, said Carlo Tomasi, the Duke computer science professor who helped create the database. The signs gave a number or an email address for people to opt out.

The Duke researchers ultimately gathered more than two million video frames with images of over 2,700 people, according to the paper. They also posted the data set, named Duke MTMC, online. It was later cited in myriad documents describing work to train A.I. in the United States, in China, in Japan, in Britain and elsewhere.

Duke University researchers started building a database in 2014 using eight cameras on campus to collect images. Credit: Open Data Commons Attribution License, via Megapixels
The Duke researchers ultimately gathered more than two million video frames with images of over 2,700 people. Credit: Open Data Commons Attribution License, via Megapixels

Dr. Tomasi said that his research group did not do face recognition and that the MTMC was unlikely to be useful for such technology because of poor angles and lighting.

“Our data was recorded to develop and test computer algorithms that analyze complex motion in video,” he said. “It happened to be people, but it could have been bicycles, cars, ants, fish, amoebas or elephants.”

At Microsoft, researchers have claimed on the company’s website to have created one of the biggest face data sets. The collection, called MS Celeb, spanned over 10 million images of more than 100,000 people.

MS Celeb was ostensibly a database of celebrities, whose images are considered fair game because they are public figures. But MS Celeb also brought in photos of privacy and security activists, academics and others, such as Shoshana Zuboff, the author of the book “The Age of Surveillance Capitalism,” according to documentation from Mr. Harvey of the MegaPixels project. MS Celeb was distributed internationally, before being removed this spring after Mr. Harvey and others flagged it.

Kim Zetter, a cybersecurity journalist in San Francisco who has written for Wired and The Intercept, was one of the people who unknowingly became part of the Microsoft data set.

“We’re all just fodder for the development of these surveillance systems,” she said. “The idea that this would be shared with foreign governments and military is just egregious.”

Matt Zeiler, founder and chief executive of Clarifai, the A.I. start-up, said his company had built a face database with images from OkCupid, a dating site. He said Clarifai had access to OkCupid’s photos because some of the dating site’s founders invested in his company.

He added that he had signed a deal with a large social media company — he declined to disclose which — to use its images in training face recognition models. The social network’s terms of service allow for this kind of sharing, he said.

“There has to be some level of trust with tech companies like Clarifai to put powerful technology to good use, and get comfortable with that,” he said.

An OkCupid spokeswoman said Clarifai contacted the company in 2014 “about collaborating to determine if they could build unbiased A.I. and facial recognition technology” and that the dating site “did not enter into any commercial agreement then and have no relationship with them now.” She did not address whether Clarifai had gained access to OkCupid’s photos without its consent.

Clarifai used the images from OkCupid to build a service that could identify the age, sex and race of detected faces, Mr. Zeiler said. The start-up also began working on a tool to collect images from a website called Insecam — short for “insecure camera” — which taps into surveillance cameras in city centers and private spaces without authorization. Clarifai’s project was shut down last year after some employees protested and before any images were gathered, he said.

Mr. Zeiler said Clarifai would sell its facial recognition technology to foreign governments, military operations and police departments provided the circumstances were right. It did not make sense to place blanket restrictions on the sale of technology to entire countries, he added.

Ms. O’Sullivan, the former Clarifai technologist, has joined a civil rights and privacy group called the Surveillance Technology Oversight Project. She is now part of a team of researchers building a tool that will let people check whether their image is part of the openly shared face databases.

“You are part of what made the system what it is,” she said.


Google and the University of Chicago Are Sued Over Data Sharing

SAN FRANCISCO — When the University of Chicago Medical Center announced a partnership to share patient data with Google in 2017, the alliance was promoted as a way to unlock information trapped in electronic health records and improve predictive analysis in medicine.

On Wednesday, the University of Chicago, the medical center and Google were sued in a potential class-action lawsuit accusing the hospital of sharing hundreds of thousands of patients’ records with the technology giant without stripping identifiable date stamps or doctors’ notes.

The suit, filed in the United States District Court for the Northern District of Illinois, demonstrates the difficulties technology companies face in handling health data as they forge ahead into one of the most promising — and potentially lucrative — areas of artificial intelligence: diagnosing medical problems.

Google is at the forefront of an effort to build technology that can read electronic health records and help physicians identify medical conditions. But the effort requires machines to learn this skill by analyzing a vast array of old health records collected by hospitals and other medical institutions.

That raises privacy concerns, especially when it comes from a company like Google, which already knows what you search for, where you are and what interests you hold.


The emergency room at the University of Chicago Medical Center. A lawsuit claims a deal between the medical center and Google violated patient privacy. Credit: M. Spencer Green/Associated Press

In 2016, DeepMind, a London-based A.I. lab owned by Google’s parent company, Alphabet, was accused of violating patient privacy after it struck a deal with Britain’s National Health Service to process medical data for research.

The group inside DeepMind that acquired the data from the National Health Service has since been transferred to Google, which has raised additional complaints from privacy advocates in Britain. DeepMind had previously said that data would never be shared with Google. In absorbing DeepMind’s health unit, Google said it was building “an A.I.-powered assistant for nurses and doctors.”

Google’s alliance with the University of Chicago mirrored other partnerships that the company struck to obtain electronic health records from other hospitals, including the University of California, San Francisco and Stanford University.

But the deal with the University of Chicago Medical Center violated patient privacy, the lawsuit claims, because those records also included date stamps of when patients checked in and checked out of the hospital.

In a research paper published by Google last year about “Scalable and Accurate Deep Learning for Electronic Health Records,” the company said it used electronic health record data of patients at the University of Chicago Medicine from 2009 to 2016.

The records included patient demographics, diagnoses, procedures, medication and other data. The paper states that the records were “de-identified,” except that “dates of service were maintained.” The paper also noted that the University of Chicago provided “free-text medical notes” that had been de-identified.

Under the Health Insurance Portability and Accountability Act, the federal law that protects patients’ confidential health data, medical providers are permitted to share medical records as long as the data is “de-identified.”

DeepMind, a London-based A.I. lab owned by Google’s parent company, Alphabet, was accused in 2016 of violating patient privacy. Credit: Benjamin Quinton for The New York Times

To meet the Hipaa standard, hospitals must strip out individually identifiable information like the patients’ name and Social Security number as well as dates directly related to the individual, including admission and discharge dates.
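In code terms, that de-identification step amounts to dropping a fixed list of fields, and coarsening any dates to the year, before a record leaves the hospital. Here is a minimal Python sketch; the record layout and field names are invented for illustration, not the University of Chicago’s actual schema. Retaining exact admission and discharge dates, as the lawsuit alleges happened, is exactly what this step exists to prevent.

# Direct identifiers that Hipaa's Safe Harbor method requires removing (a
# partial list): names, Social Security numbers, and dates tied to the
# individual, such as admission and discharge dates. The year may be kept.
IDENTIFIERS = {"name", "ssn", "admission_date", "discharge_date"}

def deidentify(record: dict) -> dict:
    # Drop direct identifiers, keep clinical fields, coarsen dates to the year.
    clean = {k: v for k, v in record.items() if k not in IDENTIFIERS}
    clean["admission_year"] = record["admission_date"][:4]
    return clean

patient = {
    "name": "Jane Doe",              # disallowed
    "ssn": "123-45-6789",            # disallowed
    "admission_date": "2015-06-02",  # exact date disallowed; year alone is fine
    "discharge_date": "2015-06-05",
    "diagnosis": "J45.909",          # clinical fields may be shared
}
print(deidentify(patient))  # {'diagnosis': 'J45.909', 'admission_year': '2015'}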

The lawsuit said the inclusion of dates was a violation of Hipaa rules in part because Google could combine them with other information it already knew, like location data from smartphones running its Android software or Google Maps and Waze, to establish the identity of the patients in the medical records.

“We believe that not only is this the most significant health care data breach case in our nation’s history, but it is the most egregious given our allegations that the data was voluntarily handed over,” said Jay Edelson, founder of Edelson PC, a law firm that specializes in class action lawsuits against technology companies for privacy violations.

The lawsuit, filed on behalf of Matt Dinerstein, who stayed at the University of Chicago Medical Center on two occasions in June 2015, did not offer evidence that Google misused the information provided by the medical center or made attempts to identify the patients.

The complaint accuses the university of consumer fraud and fraudulent business practices because it never received express consent from patients to disclose medical records to Google. In a privacy agreement, the university said it would keep medical information confidential and comply with Hipaa regulations. The lawsuit also accuses Google of unjust enrichment.

Stacey A. Tovino, a health law professor at the University of Nevada, Las Vegas, said Hipaa was enacted in 1996 before the technology industry started collecting vast amounts of personal information.

That has made the regulations outdated because the idea of what information is considered individually identifiable has changed with advances in technology.

A Machine May Not Take Your Job, but One Could Become Your Boss

When Conor Sprouls, a customer service representative in the call center of the insurance giant MetLife, talks to a customer over the phone, he keeps one eye on the bottom-right corner of his screen. There, in a little blue box, A.I. tells him how he’s doing.

Talking too fast? The program flashes an icon of a speedometer, indicating that he should slow down.

Sound sleepy? The software displays an “energy cue,” with a picture of a coffee cup.

Not empathetic enough? A heart icon pops up.
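Cogito has not published how its models work, but the behavior described above has the shape of a small rules loop running over features extracted from the live audio. The toy Python sketch below uses invented measurements and thresholds purely to show that structure, not Cogito’s actual logic.

from dataclasses import dataclass

@dataclass
class CallWindow:
    words_per_minute: float  # speaking rate over the last few seconds
    vocal_energy: float      # 0..1 proxy for loudness and liveliness
    tone_mismatch: float     # 0..1 gap between agent tone and caller tone

def cues(w: CallWindow) -> list[str]:
    # Map the latest measurements to the on-screen icons the article describes.
    out = []
    if w.words_per_minute > 170:
        out.append("speedometer: slow down")
    if w.vocal_energy < 0.3:
        out.append("coffee cup: energy cue")
    if w.tone_mismatch > 0.6:
        out.append("heart: show empathy")
    return out

print(cues(CallWindow(words_per_minute=185, vocal_energy=0.2, tone_mismatch=0.7)))
# -> ['speedometer: slow down', 'coffee cup: energy cue', 'heart: show empathy']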

For decades, people have fearfully imagined armies of hyper-efficient robots invading offices and factories, gobbling up jobs once done by humans. But in all of the worry about the potential of artificial intelligence to replace rank-and-file workers, we may have overlooked the possibility it will replace the bosses, too.

The application Cogito on view on a monitor. Credit: Tony Luong for The New York Times

Mr. Sprouls and the other call center workers at his office in Warwick, R.I., still have plenty of human supervisors. But the software on their screens — made by Cogito, an A.I. company in Boston — has become a kind of adjunct manager, always watching them. At the end of every call, Mr. Sprouls’s Cogito notifications are tallied and added to a statistics dashboard that his supervisor can view. If he hides the Cogito window by minimizing it, the program notifies his supervisor.

Cogito is one of several A.I. programs used in call centers and other workplaces. The goal, according to Joshua Feast, Cogito’s chief executive, is to make workers more effective by giving them real-time feedback.

“There is variability in human performance,” Mr. Feast said. “We can infer from the way people are speaking with each other whether things are going well or not.”

The goal of automation has always been efficiency, but in this new kind of workplace, A.I. sees humanity itself as the thing to be optimized. Amazon uses complex algorithms to track worker productivity in its fulfillment centers, and can automatically generate the paperwork to fire workers who don’t meet their targets, as The Verge uncovered this year. (Amazon has disputed that it fires workers without human input, saying that managers can intervene in the process.) IBM has used Watson, its A.I. platform, during employee reviews to predict future performance and claims it has a 96 percent accuracy rate.

Then there are the start-ups. Cogito, which works with large insurance companies like MetLife and Humana as well as financial and retail firms, says it has 20,000 users. Percolata, a Silicon Valley company that counts Uniqlo and 7-Eleven among its clients, uses in-store sensors to calculate a “true productivity” score for each worker, and rank workers from most to least productive.
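Percolata’s scoring formula is proprietary, but a toy version of a sensor-adjusted productivity score might divide each worker’s sales per hour by the foot traffic they had to work with, so a strong performer in a quiet store isn’t penalized. Every number below is invented.

def true_productivity(sales: float, hours: float, avg_foot_traffic: float) -> float:
    # Toy score: sales per hour, discounted by how busy the store was.
    return (sales / hours) / max(avg_foot_traffic, 1.0)

workers = {
    "A": true_productivity(sales=1200, hours=8, avg_foot_traffic=40),  # 3.75
    "B": true_productivity(sales=900, hours=8, avg_foot_traffic=15),   # 7.5
}
# Rank from most to least productive, as the article describes.
for name, score in sorted(workers.items(), key=lambda kv: kv[1], reverse=True):
    print(name, round(score, 2))

On these invented numbers, worker B outranks worker A despite lower raw sales, because B sold more relative to the traffic available.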

Samantha Sinon knitting while she waits for the next call at MetLife’s center in Warwick, R.I. Credit: Tony Luong for The New York Times
Aaron Osei, another employee at the center. Credit: Tony Luong for The New York Times

Management by algorithm is not a new concept. In the early 20th century, Frederick Winslow Taylor revolutionized the manufacturing world with his “scientific management” theory, which tried to wring inefficiency out of factories by timing and measuring each aspect of a job. More recently, Uber, Lyft and other on-demand platforms have made billions of dollars by outsourcing conventional tasks of human resources — scheduling, payroll, performance reviews — to computers.

But using A.I. to manage workers in conventional, 9-to-5 jobs has been more controversial. Critics have accused companies of using algorithms for managerial tasks, saying that automated systems can dehumanize and unfairly punish employees. And while it’s clear why executives would want A.I. that can track everything their workers do, it’s less clear why workers would.

MetLife uses the A.I. software with 1,500 of its call center employees. Credit: Tony Luong for The New York Times

“It is surreal to think that any company could fire their own workers without any human involvement,” Marc Perrone, the president of United Food and Commercial Workers International Union, which represents food and retail workers, said in a statement about Amazon in April.

In the gig economy, management by algorithm has also been a source of tension between workers and the platforms that connect them with customers. This year, drivers for Postmates, DoorDash and other on-demand delivery companies protested an algorithmic pay formula that counted customer tips toward their guaranteed minimum wages — a practice that was nearly invisible to drivers because of the way the platforms obscure the details of worker pay.

There were no protests at MetLife’s call center. Instead, the employees I spoke with seemed to view their Cogito software as a mild annoyance at worst. Several said they liked getting pop-up notifications during their calls, although some said they had struggled to figure out how to get the “empathy” notification to stop appearing. (Cogito says the A.I. analyzes subtle differences in tone between the worker and the caller and encourages the worker to try to mirror the customer’s mood.)

MetLife, which uses the software with 1,500 of its call center employees, says using the app has increased its customer satisfaction by 13 percent.

Winners of contests and employee photos are pinned up in the office. Credit: Tony Luong for The New York Times
A team performance board. Credit: Tony Luong for The New York Times

“It actually changes people’s behavior without them knowing about it,” said Christopher Smith, MetLife’s head of global operations. “It becomes a more human interaction.”

Still, there is a creepy sci-fi vibe to a situation in which A.I. surveils human workers and tells them how to relate to other humans. And it is reminiscent of the “workplace gamification” trend that swept through corporate America a decade ago, when companies used psychological tricks borrowed from video games, like badges and leader boards, to try to spur workers to perform better.

Phil Libin, the chief executive of All Turtles, an A.I. start-up studio in San Francisco, recoiled in horror when I told him about my call center visit.

“That is a dystopian hellscape,” Mr. Libin said. “Why would anyone want to build this world where you’re being judged by an opaque, black-box computer?”

Defenders of workplace A.I. might argue that these systems are not meant to be overbearing. Instead, they’re meant to make workers better by reminding them to thank the customer, to empathize with the frustrated claimant on Line 1 or to avoid slacking off on the job.

Icons that are used in Cogito are placed around the MetLife call center. Credit: Tony Luong for The New York Times

The best argument for workplace A.I. may be situations in which human bias skews decision-making, such as hiring. Pymetrics, a New York start-up, has made inroads in the corporate hiring world by replacing the traditional résumé screening process with an A.I. program that uses a series of games to test for relevant skills. The algorithms are then analyzed to make sure they are not creating biased hiring outcomes, or favoring one group over another.

“We can tweak data and algorithms until we can remove the bias. We can’t do that with a human being,” said Frida Polli, Pymetrics’ chief executive.
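Pymetrics has not said exactly how its audits work, but a standard check in hiring is the Equal Employment Opportunity Commission’s “four-fifths” rule of thumb: a selection rate for one group that falls below 80 percent of another group’s rate is a red flag for adverse impact. A minimal Python sketch with invented numbers:

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def passes_four_fifths(rate_a: float, rate_b: float) -> bool:
    # The lower group's selection rate should be at least 80% of the higher's.
    low, high = sorted([rate_a, rate_b])
    return low / high >= 0.8

women = selection_rate(selected=30, applicants=100)  # 0.30
men = selection_rate(selected=45, applicants=100)    # 0.45
print(passes_four_fifths(women, men))  # False: 0.30 / 0.45 ~= 0.67, flag for review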

Using A.I. to correct for human biases is a good thing. But as more A.I. enters the workplace, executives will have to resist the temptation to use it to tighten their grip on their workers and subject them to constant surveillance and analysis. If that happens, it won’t be the robots staging an uprising.

Follow Kevin Roose on Twitter: @kevinroose.

The Gender Gap in Computer Science Research Won’t Close for 100 Years

SAN FRANCISCO — Women will not reach parity with men in writing published computer science research in this century if current trends hold, according to a study released on Friday.

The enduring gender gap is most likely a reflection of the low number of women now in computer science, said researchers at the Allen Institute for Artificial Intelligence, a research lab in Seattle that produced the study. It could also reflect, in part, a male bias in the community of editors who manage scientific journals and conferences.

Big technology companies are facing increasing pressure to address workplace issues like sexual harassment and the underrepresentation of women and minorities among technical employees.

The increasing reliance on computer algorithms in areas as varied as hiring and artificial intelligence has also led to concerns that the tech industry’s predominantly white and male work force is building biases into the technology underlying those systems.

The Allen Institute study analyzed more than 2.87 million computer science papers published between 1970 and 2018, using first names as a proxy for the gender of each author. The method is not perfect — and it does not consider transgender authors — but it gives a statistical indication of where the field is headed.
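Concretely, the first-name method amounts to looking each author’s given name up in a name-to-gender table and tallying the results, skipping names the table cannot resolve. A minimal Python sketch, with a tiny invented table standing in for the study’s real name statistics:

# Illustrative stand-in for a real name-to-gender frequency table.
NAME_GENDER = {"alice": "F", "maria": "F", "john": "M", "wei": None}  # None: ambiguous

def tally(papers: list[dict]) -> dict:
    # Count inferred-female and inferred-male authors; skip ambiguous names.
    counts = {"F": 0, "M": 0}
    for paper in papers:
        for author in paper["authors"]:
            gender = NAME_GENDER.get(author.split()[0].lower())
            if gender:
                counts[gender] += 1
    return counts

papers_2018 = [{"authors": ["Alice Smith", "John Doe", "Wei Chen"]}]
print(tally(papers_2018))  # {'F': 1, 'M': 1} -- "Wei" is skipped as ambiguous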

In 2018, there were about 475,000 male authors in the collection of computer science papers, compared with about 175,000 female authors.

The researchers tracked the change in the percentage of female authors each year and used that information to statistically predict future changes. There is a wide range of possibilities. The most realistic possibility is gender parity in 2137. But there is a chance parity will never be reached, the researchers said.
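The 2018 counts above put the female share at roughly 175,000 of 650,000 authors, or about 27 percent. The study’s forecasting is more elaborate than a straight line, but the gist of a parity estimate can be seen by fitting the yearly share to a line and solving for 50 percent. The yearly shares in the Python sketch below are invented, chosen only to land near the study’s figure.

import numpy as np

# Invented female-author shares (percent) -- illustrative, not the study's data.
years = np.array([2000, 2006, 2012, 2018])
share = np.array([23.5, 24.7, 25.8, 27.0])

slope, intercept = np.polyfit(years, share, 1)  # least-squares line
parity_year = (50.0 - intercept) / slope        # solve slope * year + intercept = 50
print(f"~{slope:.2f} points per year; parity around {parity_year:.0f}")

On those toy numbers the line crosses 50 percent around 2137: a share that is still rising can nonetheless be more than a century from parity.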

Other science fields fared better. In biomedicine, for example, gender parity is forecast to arrive around 2048, according to the study. About 27 percent of researchers in computer science are women, versus 38 percent in biomedicine, according to the study.

While the study focused on research published in academic journals, the trends may apply to the technology industry as well as academia. Companies like Google, Facebook and Microsoft that are working on A.I. are publishing much of their most important research in the same journals as academics.

Academia is also where the next generation of tech workers is taught.

“This definitely affects the field as a whole,” said Lucy Lu Wang, a researcher with the Allen Institute. “When there is a lack of leadership in computer science departments, it affects the number of women students who are trained and the number that enter the computer science industry.”

The study also indicated that men are growing less likely to collaborate with female researchers — a particularly worrying trend in a field where women have long felt unwelcome, and one that matters because studies have shown that diverse teams can produce better research.

Compiled by Ms. Wang and several other researchers at the Allen Institute, the study is in line with similar research published by academics in Australia and Canada. While gender parity is relatively near in many of the life sciences, these studies showed, it remains at least a century away in physics and mathematics.

“We were hoping for a positive result, because we all had the sense that the number of women authors was growing,” said Oren Etzioni, the former University of Washington professor who oversees the Allen Institute. “But the results were, frankly, shocking.”

Other research has shown that women are less likely to enter computer science — and stick with it — if they don’t have female role models, mentors and collaborators.

“There is a problem with retention,” said Jamie Lundine, a researcher at the Institute of Feminist and Gender Studies at the University of Ottawa. “Even when women are choosing computer science, they can end up in school and work environments that are inhospitable.”

Many artificial intelligence technologies, like face-recognition services and conversational systems, are designed to learn from large amounts of data, such as thousands of photos of faces. The biases of researchers can easily be introduced into the technology, reinforcing the importance of diversity among the people working on it.

“This is a problem not just when it comes to choosing the data, but when it comes to choosing the projects we want to tackle,” Ms. Wang said.

The Allen Institute study adds to a mounting collection of research pointing to the challenges women face in tech. A recent study of researchers exploring “natural language understanding” — the A.I. field that involves conversational systems and related technologies — shows that women are less likely to reach leadership positions in the field.

“There is still a glass ceiling,” said Natalie Schluter, a professor at IT University in Denmark who specializes in natural language understanding and the author of the study.
