Generating Synthetic Objects to Train Robots
Written by AZoRobotics, Jan 20, 2021
Before he joined the University of Texas at Arlington as an Assistant Professor in the Department of Computer Science and Engineering and founded the Robotic Vision Laboratory there, William Beksi interned at iRobot, the world's largest producer of consumer robots (mainly through its Roomba robotic vacuum).
To navigate built environments, robots must be able to sense and make decisions about how to interact with their locale. Researchers at the company were interested in using machine and deep learning to train their robots to learn about objects, but doing so requires a large dataset of images. While there are millions of photos and videos of rooms, none were shot from the vantage point of a robotic vacuum, and efforts to train using images with human-centric perspectives failed.
Computer scientists from the University of Texas at Arlington, USA, are exploring the use of AI and supercomputers to generate synthetic objects for training robots.
Examples of 3D point clouds synthesised by the progressive conditional generative adversarial network (PCGAN) for an assortment of object classes. Credit: William Beksi
Beksi is leading the research with a group that includes six computer science PhD students.
Beksi said he was particularly interested in developing algorithms that enable machines to learn from their interactions with the physical world and autonomously acquire the skills necessary to execute high-level tasks.
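For readers curious what a conditional point-cloud generator looks like in code, the sketch below is a minimal PyTorch illustration, not the team's PCGAN: the architecture, layer sizes, class count, and point count are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' PCGAN): a conditional generator that
# maps a latent noise vector plus an object-class label to an N x 3 point cloud.
import torch
import torch.nn as nn

class ConditionalPointCloudGenerator(nn.Module):
    def __init__(self, latent_dim=128, num_classes=10, num_points=2048):
        super().__init__()
        self.num_points = num_points
        # Embed the class label so the generator can be conditioned on it.
        self.label_embedding = nn.Embedding(num_classes, latent_dim)
        # A simple MLP that expands the conditioned latent code into 3D points.
        self.net = nn.Sequential(
            nn.Linear(latent_dim * 2, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_points * 3),
            nn.Tanh(),  # keep coordinates in [-1, 1]
        )

    def forward(self, z, labels):
        # Concatenate noise with the label embedding, then reshape to (B, N, 3).
        cond = torch.cat([z, self.label_embedding(labels)], dim=1)
        return self.net(cond).view(-1, self.num_points, 3)

# Usage: sample a small batch of synthetic point clouds for one class
# (the class index here is hypothetical).
generator = ConditionalPointCloudGenerator()
z = torch.randn(4, 128)                         # latent noise
labels = torch.full((4,), 3, dtype=torch.long)  # hypothetical class id
points = generator(z, labels)                   # -> shape (4, 2048, 3)
```

In the actual research, the progressive, conditional structure described in the figure caption would replace this flat MLP; the sketch only shows how class conditioning and point-cloud output fit together.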