July 9, 2021, by Sarah Yang

Delivery services may be able to overcome snow, rain, heat and the gloom of night, but a new class of legged robots is not far behind. Artificial intelligence algorithms developed by a team of researchers from UC Berkeley, Facebook and Carnegie Mellon University are equipping legged robots with an enhanced ability to adapt to and navigate unfamiliar terrain in real time. Their test robot successfully traversed sand, mud, hiking trails, tall grass and dirt piles without falling. It also outperformed alternative systems in adapting to a weighted backpack thrown onto its back and to slippery, oily slopes. When walking down steps and scrambling over piles of cement and pebbles, it achieved 70% and 80% success rates, respectively, still an impressive feat given that it had no simulation calibration or prior experience with these unstable environments.
Stumble-proof robot adapts to challenging terrain in real time – TechCrunch
2021 APS Board of Directors Election
When I was in graduate school in the 1990s, one of my favorite classes was neural networks. Back then, we didn’t have access to TensorFlow, PyTorch, or Keras; we programmed neurons, neural networks, and learning algorithms by hand, straight from the formulas in textbooks. We didn’t have access to cloud computing, so we coded sequential experiments that often ran overnight. There weren’t platforms like Alteryx, Dataiku, SageMaker, or SAS to enable a machine learning proof of concept or to manage the end-to-end MLOps lifecycle.
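For readers who never had to do this, here is a minimal sketch of what "by hand" meant: a single sigmoid neuron trained with the textbook delta rule, no framework in sight. The toy data, weights, and learning rate are illustrative assumptions, not anything from my old coursework.

```python
import numpy as np

# A minimal, illustrative sketch: one sigmoid neuron trained with the
# textbook delta rule on a toy linearly separable problem.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))       # toy inputs (assumed data)
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy target: a linear rule

w = rng.normal(scale=0.1, size=2)           # weights
b = 0.0                                     # bias
lr = 0.5                                    # learning rate (assumed)

for epoch in range(200):
    out = sigmoid(X @ w + b)                # forward pass
    err = y - out                           # prediction error
    grad = err * out * (1 - out)            # delta rule for squared error
    w += lr * (X.T @ grad) / len(X)         # gradient step on weights
    b += lr * grad.mean()                   # gradient step on bias

print("final accuracy:", ((sigmoid(X @ w + b) > 0.5) == y).mean())
```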
I was most interested in reinforcement learning algorithms, and I recall writing hundreds of reward functions trying to stabilize an inverted pendulum. I never got it working, and I was never sure whether I had coded the algorithms incorrectly, chosen poor reward functions, or picked bad learning parameters. Today, by contrast, I can find worked examples of reinforcement learning applied to the inverted pendulum problem, and even the schematics to build one.
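To make the reward-shaping problem concrete, here is a hedged sketch of one plausible hand-written reward function for the inverted pendulum. The state layout (pole angle from vertical, angular velocity, cart position) and the penalty weights are assumptions for illustration only; these weights are exactly the kind of knobs that were so easy to get wrong.

```python
import math

# A plausible, assumed reward function for the inverted pendulum:
# reward an upright, slow-moving pole with the cart near track center.
# State layout and weights are illustrative, not from the original post.

def pendulum_reward(theta, theta_dot, cart_pos,
                    angle_weight=1.0, spin_weight=0.1, drift_weight=0.05):
    """Higher reward for an upright, slow-moving pole near the center."""
    upright = math.cos(theta)                # 1.0 when perfectly vertical
    spin_penalty = spin_weight * theta_dot ** 2
    drift_penalty = drift_weight * cart_pos ** 2
    return angle_weight * upright - spin_penalty - drift_penalty

# Example: a nearly upright, nearly still pole scores close to 1.0.
print(pendulum_reward(theta=0.05, theta_dot=0.1, cart_pos=0.2))
```

Small changes to the penalty weights can flip the learned behavior from balancing the pole to, say, parking the cart at the center and letting the pole fall, which is one reason hand-tuning hundreds of reward functions was such a slog.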