Generating Synthetic Objects to Train Robots
Written by AZoRobotics, Jan 20, 2021
Computer scientists from the University of Texas at Arlington, USA, are exploring the use of AI and supercomputers to generate synthetic objects for training robots.
Examples of 3D point clouds synthesised by the progressive conditional generative adversarial network (PCGAN) for an assortment of object classes. Credit: William Beksi
William Beksi, assistant professor in UT Arlington’s Department of Computer Science and Engineering and founder of the university’s Robotic Vision Laboratory, is leading the research with a group that includes six computer science PhD students.
Drawing on his earlier internship at consumer robot producer iRobot, Beksi said he was particularly interested in developing algorithms that enable machines to learn from their interactions with the physical world and autonomously acquire the skills necessary to execute high-level tasks.
Before he joined the University of Texas at Arlington as an Assistant Professor in the Department of Computer Science and Engineering and founded the Robotic Vision Laboratory there, William Beksi interned at iRobot, the world's largest producer of consumer robots (mainly through its Roomba robotic vacuum).
To navigate built environments, robots must be able to sense and make decisions about how to interact with their locale. Researchers at the company were interested in using machine and deep learning to train their robots to learn about objects, but doing so requires a large dataset of images. While there are millions of photos and videos of rooms, none were shot from the vantage point of a robotic vacuum. Efforts to train using images with human-centric perspectives failed.
Computer scientists from UT Arlington developed a deep learning method to create realistic objects for virtual environments that can be used to train robots. The researchers used the Texas Advanced Computing Center's (TACC) Maverick2 supercomputer to train the generative adversarial network. The network is the first that can produce colored point clouds with fine details at multiple resolutions. The team presented their results at the International Conference on 3D Vision (3DV) in November 2020.
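The sketch below is not the team's PCGAN; it is a minimal, hypothetical example in PyTorch of the general idea of conditioning a generator on an object class so that it emits a colored point cloud (XYZ coordinates plus RGB values per point). All layer sizes, class counts, and names are illustrative assumptions, and the discriminator and the progressive, multi-resolution training used in the actual work are omitted.

```python
# Illustrative sketch only (not the authors' PCGAN): a minimal conditional
# generator that maps a latent vector plus an object-class label to a
# colored point cloud of shape (N, 6) -- XYZ coordinates plus RGB values.
import torch
import torch.nn as nn

class ConditionalPointCloudGenerator(nn.Module):
    def __init__(self, latent_dim=128, num_classes=10, num_points=2048):
        super().__init__()
        self.num_points = num_points
        self.label_embed = nn.Embedding(num_classes, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim * 2, 512),
            nn.ReLU(),
            nn.Linear(512, 1024),
            nn.ReLU(),
            nn.Linear(1024, num_points * 6),  # 3 coords + 3 colors per point
        )

    def forward(self, z, labels):
        # Condition the latent code on the object class, then decode a
        # fixed-size set of colored points.
        cond = torch.cat([z, self.label_embed(labels)], dim=1)
        out = self.net(cond).view(-1, self.num_points, 6)
        xyz = out[..., :3]                 # unbounded spatial coordinates
        rgb = torch.sigmoid(out[..., 3:])  # colors constrained to [0, 1]
        return torch.cat([xyz, rgb], dim=-1)

# Usage: sample one synthetic point cloud for a hypothetical class index 0.
gen = ConditionalPointCloudGenerator()
z = torch.randn(1, 128)
cloud = gen(z, torch.tensor([0]))  # shape: (1, 2048, 6)
```

In a full progressive GAN setup, a generator like this would be trained adversarially against a discriminator, with the output resolution (number of points and level of detail) grown in stages during training.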