Tariq Iqbal is striving for robots that work with people, not replace them. (Photo by Dan Addison, University Communications)
Tariq Iqbal has a vision of humans and robots working together and trusting one another through a combination of native and artificial intelligence.
Iqbal, an assistant professor of systems engineering and computer science in the University of Virginia’s School of Engineering and Applied Science, works in the realm of artificial intelligence and machine learning to make robots more useful in their interactions with humans. He stresses that he strives for machines to work with people, not replace them.
“The overall goal of my research is how can we build a fluid and efficient human/robot team where both human and robot can share the same physical space, materials and environment,” Iqbal said. “The hypothesis is that by doing so, we can achieve that which neither the human nor the robot can achieve alone. We can achieve something bigger.”
Humans and robots, however, need a bond of trust.
“Can a robot understand how much trust the human counterpart is putting into it?” Iqbal said. “Can it calibrate that trust? If I’m a factory worker and there is a robot, but I’m under-trusting it and thinking that the robot cannot perform the task, then I will not utilize that robot as much as I could.”
That requires an “appropriate level of trust” between the human and robot teammates.
“It happens in human teams and group settings,” Iqbal said. “Whenever I delegate something, I trust that the human in my team can do it.”
He said that while applications such as ChatGPT have advanced the field of text generation, robotics remains a long way from a similar leap. The more sophisticated algorithms give him more powerful tools to work with, Iqbal said, and he is exploring how to use large language models in robotics.
“Right now, ChatGPT can generate content,” Iqbal said. “But generation doesn’t mean understanding. It doesn’t have the simplest understanding of the thing. We are still exploring how we can utilize these large language models in our systems.”
Iqbal hopes that combining many disciplines will shape human-robot interaction into a field that understands how both sides of the equation work.
Sujan Sarker, a member of Tariq Iqbal’s laboratory, studies notes on a free-standing robot that can move around on its own legs. (Photos by Dan Addison, University Communications)
“It encompasses various disciplines such as systems engineering and human factors where we try to understand the human behaviors, then from there to artificial intelligence and machine learning,” he said. “We model the human behaviors, their interaction with the environment, with others in groups. We work on various perceptions, decision-making and control algorithms on the robot side. We do a lot of manipulation and navigation on the robot side so that the robot can help the human to perform the task better.”
Iqbal, leader of the Collaborative Robotics Lab, uses multiple cameras, along with physiological sensors such as smartwatches, to track and record both gross and subtle human movements, gestures and expressions. He cited a factory setting where humans work with “cobots,” or “collaborative robots”: the humans perform fine motor tasks while the robots handle gross motor tasks, such as fetching tools for a human worker. Iqbal wants artificial intelligence and machine learning to inform the robot of what is expected of it.
“We want the robotic manipulator to try to understand what activity humans are performing, which phases of that activity the human is currently in, and then go and bring the object that the human will need in the near future so that the human doesn’t need to go back and forth,” Iqbal said.
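To make the idea concrete, here is a minimal, hypothetical sketch of that kind of anticipation, not the lab’s actual system: a classifier estimates which phase of a known task the human is in, and the robot prefetches the tool the next phase will need. The phase names, tool mapping and features are all invented for illustration.

```python
# Hypothetical sketch of anticipatory assistance (not Iqbal's system):
# a classifier estimates the human's current task phase, and the robot
# prefetches the tool the next phase will need.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented task model: each phase maps to the tool needed next.
NEXT_TOOL = {"align_parts": "screwdriver", "fasten": "wrench", "inspect": None}
PHASES = list(NEXT_TOOL)

# Stand-ins for real training data: pose features from cameras,
# phase labels from annotated recordings of the task.
X_train = np.random.rand(300, 12)
y_train = np.random.choice(PHASES, 300)
phase_clf = RandomForestClassifier().fit(X_train, y_train)

def prefetch_decision(feature_window: np.ndarray):
    """Return the tool to fetch now, given the estimated current phase."""
    phase = phase_clf.predict(feature_window.reshape(1, -1))[0]
    return NEXT_TOOL[phase]

print(prefetch_decision(np.random.rand(12)))  # e.g., "wrench"
```

In a real deployment, the features would come from the lab’s cameras and wearable sensors rather than random numbers, and the phase labels from annotated recordings of people actually performing the task.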
He acknowledged it is difficult to translate the plethora of human social cues for a machine to understand.
“Whenever we try to build something to understand human behavior, it always changes,” Iqbal said. “Understanding the human intent is so hard of a problem itself, because we are expressing ourselves in so many immense ways, and capturing all those is very hard. In many cases, every time we learn something new, it’s hard to teach the machine how to interpret the human intent.”
Part of the difficulty is the variety of human expression.
“Whatever I’m saying is not just the message that I’m passing,” Iqbal said. “I’m passing a lot of my messages with my gestures, so just understanding the verbal message is not sufficient. If I am saying, ‘Give me that thing,’ from just the audio, there is no way for you to know which thing I’m referring to because I’m referring to some objects with my hand gesture.”
To overcome this, Iqbal works with “multimodal representation learning,” an approach that fuses verbal messages; nonverbal cues such as pointing, eye gaze and head motion; and even physiological signals, such as heart rate and skin temperature dynamics.
“Now the challenge is we have so many different types of data, how to fuse those together to find what exactly it means, because all those data are referring to something similar,” Iqbal said. “We work on the decision-making part. Now what can the robot do with those understandings?”
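As a toy illustration of the fusion problem Iqbal describes, and not the lab’s method, the sketch below embeds each modality separately and then combines the embeddings into one shared representation that a downstream decision-making policy could act on. All of the dimensions and projections are invented.

```python
# Toy illustration of late multimodal fusion (not the lab's method):
# each modality is embedded separately, then the embeddings are combined
# into a single representation of the human's intent.
import numpy as np

def embed(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Project raw modality features into a shared embedding space."""
    return np.tanh(w @ x)

rng = np.random.default_rng(0)
speech  = rng.random(20)  # e.g., audio features ("give me that thing")
gesture = rng.random(6)   # e.g., pointing direction, eye gaze, head motion
physio  = rng.random(4)   # e.g., heart-rate and skin-temperature dynamics

# Hypothetical learned projections into a shared 8-dimensional space.
W_speech, W_gesture, W_physio = (rng.standard_normal((8, d))
                                 for d in (20, 6, 4))

# Simplest possible fusion: average the per-modality embeddings.
fused = np.mean([embed(speech, W_speech),
                 embed(gesture, W_gesture),
                 embed(physio, W_physio)], axis=0)
print(fused.shape)  # (8,) -- one joint representation of human intent
```

Averaging is the crudest choice of fusion; a real system would more likely learn attention weights that decide, moment to moment, how much each modality should count.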
Iqbal has been working on these issues for more than 10 years. He earned his bachelor’s degree at Bangladesh University of Engineering and Technology, his doctorate at the University of California, San Diego, and was a postdoctoral associate at the Massachusetts Institute of Technology.
“I was very excited about understanding not only AI behaviors, but various human behaviors,” Iqbal said. “And how to incorporate that knowledge in building efficient human-robot teams.”
“The whole goal or the purpose of the robot should be to support the human. It’s not replacing the human in any means,” Iqbal said. “That means our goal should be to find out where the human needs support and build a robot to help the human in those dimensions.”