Robotic assistants may adapt to humans in the factory, thanks to new algorithm

In today's manufacturing plants, the division of labor between humans and robots is quite clear: Large, automated robots are typically cordoned off in metal cages, manipulating heavy machinery and performing repetitive tasks, while humans work in less hazardous areas on jobs requiring finer detail.
But according to Julie Shah, the Boeing Career Development Assistant Professor of Aeronautics and Astronautics at MIT, the factory floor of the future may host humans and robots working side by side, each helping the other in common tasks. Shah envisions robotic assistants performing tasks that would otherwise hinder a human's efficiency, particularly in airplane manufacturing.
"If the robot can provide tools and materials so the person doesn't have to walk over to pick up parts and walk back to the plane, you can significantly reduce the idle time of the person," says Shah, who leads the Interactive Robotics Group in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). "It's really hard to make robots do careful refinishing tasks that people do really well. But providing robotic assistants to do the non-value-added work can actually increase the productivity of the overall factory."
A robot working in isolation simply follows a set of preprogrammed instructions to perform a repetitive task. But working with humans is a different matter: each mechanic working at the same station at an aircraft assembly plant may prefer to work differently, and Shah says a robotic assistant would have to adapt effortlessly to an individual's particular style to be of any practical use.
Now Shah and her colleagues at MIT have devised an algorithm that enables a robot to quickly learn an individual's preference for a certain task, and adapt accordingly to help complete the task. The group is using the algorithm in simulations to train robots and humans to work together, and will present its findings at the Robotics: Science and Systems Conference in Sydney in July.
"It's an interesting machine-learning human-factors problem," Shah says. "Using this algorithm, we can significantly improve the robot's understanding of what the person's next likely actions are."
Taking wing
As a test case, Shah's team looked at spar assembly, a process of building the main structural element of an aircraft's wing. In the typical manufacturing process, two pieces of the wing are aligned. Once in place, a mechanic applies sealant to predrilled holes, hammers bolts into the holes to secure the two pieces, then wipes away excess sealant. The entire process can be highly individualized: For example, one mechanic may choose to apply sealant to every hole before hammering in bolts, while another may like to completely finish one hole before moving on to the next. The only constraint is the sealant, which dries within three minutes.
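To make that timing constraint concrete, here is a minimal sketch of how per-hole state and the three-minute drying window might be tracked; the Hole class, field names, and timing check are illustrative assumptions, not part of the MIT system.

```python
from dataclasses import dataclass
from typing import Optional

SEALANT_DRY_TIME = 180.0  # seconds; the article's three-minute drying window

@dataclass
class Hole:
    """State of one predrilled hole on the spar (hypothetical model)."""
    sealant_time: Optional[float] = None  # time sealant was applied, if any
    bolted: bool = False

    def apply_sealant(self, now: float) -> None:
        self.sealant_time = now

    def hammer_bolt(self, now: float) -> bool:
        """Succeeds only while the sealant is still wet."""
        if self.sealant_time is None or now - self.sealant_time > SEALANT_DRY_TIME:
            return False  # no sealant, or it has already dried
        self.bolted = True
        return True
```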
The researchers say robots such as FRIDA, designed by Swiss robotics company ABB, may be programmed to help in the spar-assembly process. FRIDA is a flexible robot with two arms capable of a wide range of motion that Shah says can be manipulated to either fasten bolts or paint sealant into holes, depending on a human's preferences.
To enable such a robot to anticipate a human's actions, the group first developed a computational model in the form of a decision tree. Each branch along the tree represents a choice that a mechanic may make: for example, after applying sealant, does the mechanic hammer in the bolt, or move on and apply sealant to the next hole?
"If the robot places the bolt, how sure is it that the person will then hammer the bolt, or just wait for the robot to place the next bolt?" Shah says. "There are many branches."
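A toy version of such a decision tree could look like the following; the node structure, action names, and counts are invented for illustration and are not the researchers' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ChoiceNode:
    """A branch point: after this action, which action does the mechanic take next?"""
    action: str
    counts: Dict[str, int] = field(default_factory=dict)      # observed follow-up actions
    children: Dict[str, "ChoiceNode"] = field(default_factory=dict)

    def probability(self, next_action: str) -> float:
        """Estimated chance the mechanic chooses `next_action` from this node."""
        total = sum(self.counts.values())
        return self.counts.get(next_action, 0) / total if total else 0.0

# Made-up observations for one mechanic: after sealing a hole, they usually bolt it.
root = ChoiceNode("apply_sealant", counts={"hammer_bolt": 7, "seal_next_hole": 3})
print(root.probability("hammer_bolt"))  # 0.7
```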
Using the model, the group ran experiments with human subjects, training a laboratory robot to observe an individual's chain of preferences. Once the robot learned a person's preferred order of tasks, it quickly adapted, either applying sealant or fastening a bolt according to that person's particular style of work.
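In spirit, that observe-then-adapt loop can be sketched as simple transition counting; the action names and the robot's support table below are assumptions for illustration, not the published algorithm.

```python
from collections import Counter, defaultdict

class PreferenceLearner:
    """Counts which action a worker tends to take after each observed action."""

    def __init__(self) -> None:
        self.transitions = defaultdict(Counter)

    def observe(self, previous_action: str, next_action: str) -> None:
        self.transitions[previous_action][next_action] += 1

    def predict(self, previous_action: str) -> str:
        followers = self.transitions[previous_action]
        return followers.most_common(1)[0][0] if followers else "wait"

# Hypothetical mapping from the worker's predicted action to a helpful robot action.
SUPPORT_ACTION = {
    "hammer_bolt": "stage_next_bolt",
    "apply_sealant": "fetch_sealant",
    "wait": "hold_position",
}

learner = PreferenceLearner()
learner.observe("apply_sealant", "hammer_bolt")
predicted = learner.predict("apply_sealant")   # -> "hammer_bolt"
print(SUPPORT_ACTION[predicted])               # robot stages the next bolt
```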
Working side by side
Shah says that in a real-life manufacturing setting, she envisions robots and humans undergoing an initial training session off the factory floor. Once the robot learns a person's work habits, its factory counterpart can be programmed to recognize that same person and initialize the appropriate task plan. Shah adds that many workers in existing plants wear radio-frequency identification (RFID) tags, a potential way for robots to identify individuals.
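One plausible, and entirely hypothetical, way to wire that up is to key stored preference profiles by badge ID, so the factory robot can resume with the counts learned during training; the tag format, registry, and function name below are assumptions.

```python
from collections import Counter, defaultdict

# Hypothetical registry of learned profiles, keyed by RFID tag ID; each profile
# holds the per-worker transition counts used by the learner sketched earlier.
preference_profiles: dict = {}

def on_worker_detected(tag_id: str):
    """Return the worker's stored task-preference profile, creating one if new."""
    return preference_profiles.setdefault(tag_id, defaultdict(Counter))

profile = on_worker_detected("RFID-00042")    # made-up tag ID
profile["apply_sealant"]["hammer_bolt"] += 1  # resume updating this worker's counts
```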
Steve Derby, associate professor and co-director of the Flexible Manufacturing Center at Rensselaer Polytechnic Institute, says the group's adaptive algorithm moves the field of robotics one step closer to true collaboration between humans and robots.
"The evolution of the robot itself has been way too slow on all fronts, whether on mechanical design, controls or programming interface," Derby says. "I think this paper is important; it fits in with the whole spectrum of things that need to happen in getting people and robots to work next to each other."
Shah says robotic assistants may also be programmed to help in medical settings. For instance, a robot may be trained to monitor lengthy procedures in an operating room and anticipate a surgeon's needs, handing over scalpels and gauze, depending on a doctor's preference. While such a scenario may be years away, with the right algorithms robots and humans may eventually work side by side.
"We have hardware, sensing, and can do manipulation and vision, but unless the robot really develops an almost seamless understanding of how it can help the person, the person's just going to get frustrated and say, 'Never mind, I'll just go pick up the piece myself,'" Shah says.
This research was supported in part by Boeing Research and Technology and conducted in collaboration with ABB.
Provided by Massachusetts Institute of Technology
This story is republished courtesy of MIT News, a popular site that covers news about MIT research, innovation and teaching.