Robots Learn to Complete Actions by Simply Watching

Another week, another step closer to the robot revolution: researchers at NVIDIA are teaching bots to complete tasks by simply observing humans.

The team, led by Stan Birchfield and Jonathan Tremblay, developed a deep learning system to "enhance communication between humans and robots." Using NVIDIA Titan X GPUs, the researchers trained a series of neural networks to execute actions based on a single real-world demonstration.

"In order for robots to perform useful tasks in real-world settings, it must be easy to communicate the task to the robot; this includes both the desired end result and any hints as to the best means to achieve that result," the recently published paper said.

Live video of a scene (someone stacking colored cubes, for instance) is fed into a neural network, which infers the positions and relationships of the objects; a second network then generates a plan to recreate those perceptions. Finally, an execution network reads that plan and produces an intelligible description of the steps, which a user can edit directly before the android even moves.

The key, according to NVIDIA, is synthetic data. While existing approaches to machine learning require large amounts of labeled training data, "a serious bottleneck in these systems," synthetic data generation allows an "almost infinite" amount of data to be produced "with very little effort."

To demonstrate, the team used colored blocks (red, yellow, green, and blue) and a toy car (also red). In the accompanying video, a human operator shows a pair of stacked cubes to the robot, which infers an appropriate program to place them in the correct order. The system is able to recover from mistakes in real time.

The hard part is inferring intent from a single demonstration. "If someone pours water into a glass, the intent of the demonstration remains ambiguous. Should the robot also pour water? If so, then into which glass? Should it also pour water into an adjacent mug? When should it do so? How much water should it pour? What should it do if there is no water? And so forth," the study said.

"Concrete actions themselves are insufficient to answer such questions," the researchers continued. "Rather, abstract concepts must be inferred from the actions."

Birchfield, Tremblay & Co. will present their research at this week's International Conference on Robotics and Automation (ICRA) in Brisbane, Australia.

Moving forward, the group plans to fix existing issues and explore additional features, including making domain randomization more robust, leveraging information from past executions, and expanding the vocabulary of the human-readable programs.
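
NVIDIA has not released code for this work, but the three-stage pipeline described above (perception, program generation, execution) can be sketched at a high level. The following Python sketch is purely illustrative; every class and function name in it is hypothetical, and the real system uses trained neural networks where this sketch uses stand-in logic.

```python
# Illustrative sketch of the three-stage pipeline described above:
# a perception network, a program-generation network, and an execution
# network. All names are hypothetical; NVIDIA has not released this code.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class ObjectState:
    """One detected object plus the relationship the perception net inferred."""
    name: str                              # e.g. "red_cube"
    position: Tuple[float, float, float]   # inferred 3D position (meters)
    on_top_of: Optional[str] = None        # e.g. stacked on "blue_cube"

def perceive(frame) -> List[ObjectState]:
    """Stage 1 (stand-in): map a camera frame to objects and relationships.
    A real system would run a trained perception network here."""
    return [
        ObjectState("blue_cube", (0.1, 0.2, 0.0)),
        ObjectState("red_cube", (0.1, 0.2, 0.05), on_top_of="blue_cube"),
    ]

def generate_program(scene: List[ObjectState]) -> List[str]:
    """Stage 2 (stand-in): turn the perceived scene into human-readable
    steps that reproduce the demonstrated arrangement."""
    return [f"place {o.name} on {o.on_top_of}" for o in scene if o.on_top_of]

def execute(steps: List[str]) -> None:
    """Stage 3 (stand-in): carry out each step on the robot."""
    for step in steps:
        print("executing:", step)

# Because the generated plan is plain text, a user can read and edit it
# before the robot moves:
steps = generate_program(perceive(frame=None))
print(steps)   # ['place red_cube on blue_cube']
execute(steps)
```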
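
The synthetic-data idea is also concrete enough to sketch. Scenes of colored cubes are rendered with randomized poses and nuisance factors (domain randomization), and labels come for free because the generator knows the ground truth. This is a minimal sketch under those assumptions, not NVIDIA's actual data pipeline.

```python
# Minimal sketch of synthetic training-data generation with domain
# randomization: labels are free because the generator knows ground truth.
# Illustrative only; not NVIDIA's pipeline.

import random

COLORS = ["red", "yellow", "green", "blue"]

def random_scene(num_cubes: int = 4) -> dict:
    """Generate one labeled synthetic scene: randomized cube poses plus
    randomized nuisance factors the network must learn to ignore."""
    cubes = [
        {
            "color": color,
            "position": (round(random.uniform(-0.5, 0.5), 3),  # x on table (m)
                         round(random.uniform(-0.5, 0.5), 3),  # y on table (m)
                         0.0),
        }
        for color in random.sample(COLORS, num_cubes)
    ]
    return {
        "cubes": cubes,                                # ground-truth labels
        "light_intensity": random.uniform(0.3, 1.0),   # randomized nuisance
        "camera_jitter": random.gauss(0.0, 0.02),      # randomized nuisance
    }

# An "almost infinite" labeled dataset, produced with very little effort:
dataset = [random_scene() for _ in range(100_000)]
```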
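
Finally, the ability to "recover from mistakes in real time" implies closed-loop execution: re-perceive the scene after each step and retry steps that did not take effect. A sketch of that loop, with hypothetical helper callables (act, perceive, satisfied) standing in for the real robot interface:

```python
# Sketch of closed-loop execution, the mechanism that lets a system
# recover from mistakes in real time. The act/perceive/satisfied
# callables are hypothetical stand-ins for the robot interface.

def run_with_recovery(steps, act, perceive, satisfied, max_retries=3):
    """Execute each step, then verify it against a fresh perception of
    the world; retry any step that did not take effect."""
    for step in steps:
        for _ in range(max_retries):
            act(step)                        # command the robot
            if satisfied(step, perceive()):  # check the live scene
                break                        # step verified; move on
        else:
            raise RuntimeError(f"could not complete step: {step}")
```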
