With robots and AI-driven machines expected to perform half of all productive functions in the workplace by 2025, it’s no wonder many of us are raising legitimate concerns over the future role of human employees, as well as fears about safety and other issues.
“Danger, Will Robinson!” That’s the warning heard often from the robot in the 1960s TV series Lost in Space, as he kept a watchful eye on his young charge.
While the robot dutifully protected his young companion, scientists and others caution that AI must be applied responsibly and ethically in order to avoid creating future “monsters.”
BCG Digital Ventures’ CTO Dharmesh Syal wrote in a recent column for the World Economic Forum that “The only way to make sure we don’t create a monster that could turn against us is to incorporate ethical safeguards into the architecture of the AI we’re creating today.”
He says there are three ways to help ensure this:
- Bring a human into sensitive scenarios. AI should be part of a human-in-the-loop, or HITL, system, in which machines do the work with people ready to handle questionable situations (a rough sketch of this pattern follows the list).
- Put safeguards in place so machines can self-correct. Ideally, these defenses should be designed into AI products from the start, not bolted on later.
- Create an ethics code. There should be standard AI operating policies around data privacy, personalization and deep learning.
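To make the first point concrete, here is a minimal, hypothetical sketch of a human-in-the-loop gate in Python: the model acts on its own when its confidence is high and routes anything questionable to a person. The function names, the Decision class, and the 0.9 threshold are illustrative assumptions, not drawn from any particular product or framework.

```python
# Minimal sketch of a human-in-the-loop (HITL) gate: the machine handles
# routine cases, and anything it is unsure about is deferred to a person.
# All names and the 0.9 threshold are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Decision:
    label: str
    confidence: float  # model's own confidence estimate, 0.0 to 1.0


def hitl_decide(
    item: str,
    model: Callable[[str], Decision],
    ask_human: Callable[[str, Decision], str],
    threshold: float = 0.9,
) -> str:
    """Return the model's answer when it is confident, else defer to a human."""
    decision = model(item)
    if decision.confidence >= threshold:
        return decision.label            # routine case: the machine does the work
    return ask_human(item, decision)     # questionable case: a person decides


# Example wiring with stand-in callables:
if __name__ == "__main__":
    model = lambda text: Decision(label="approve", confidence=0.62)
    ask_human = lambda text, d: f"escalated to reviewer (model confidence {d.confidence:.0%})"
    print(hitl_decide("loan application #123", model, ask_human))
```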
This last item “may seem obvious,” says Syal, “but you’d be surprised how few companies are actually doing this.”
Arizona State University professor Nancy Cooke offers a few more best practices, such as giving robots specific roles in a team setting. Similar to a surgical team whose members may include a nurse, surgeon and anesthesiologist—individuals with separate, yet interdependent roles and responsibilities—members of a human/robot team should be assembled to take on different elements of a complex task.
“Robots should do things that they are best at, or that people don’t want to do—like lifting heavy items, testing chemicals and crunching data. That frees up people to do what they’re best at—like adapting to changing situations and coming up with creative solutions to problems,” advises Cooke.
“Most importantly, humans should not be asked to adapt to their nonhuman teammates. Rather, developers should design and create technology to serve as a good team player alongside people.”
And if you end up working alongside a robot named HAL, and your name is Dave, it’s probably wise to keep a safe distance.