Guilherme Maeda
Researcher in robotics and robot motor skill learning


Eventually, our society will depend on robots as general-purpose helpers that will assist us throughout our entire lives. This vision requires robots capable of learning skills by themselves to manipulate and interact with the world in a way that is useful for us. To this end, my work involves the investigation and design of learning control methods, and their implementation and validation using real robots. 


News and updates

  1. December 2019 - New NeurIPS workshop paper. Hong, Z.-W.; Nagarajan, P.; Maeda, G. (2019) "Swarm-Inspired Reinforcement Learning via Collaborative Inter-Agent Knowledge Distillation". NeurIPS 2019 Deep Reinforcement Learning Workshop. [pdf][BibTeX]
  2. December 2019 - New IJRR journal article. Lioutikov, R.; Maeda, G.; Veiga, F.; Kersting, K.; Peters, J. (2019) “Learning Attribute Grammars for Movement Primitive Sequencing”. The International Journal of Robotics Research (IJRR). In press. [BibTeX]
  3. December 2019 - I am starting to move my previous website here. The previous one (https://gjmaeda.com/) will no longer be updated and will eventually be shut down.

Research posts

Phase Portrait Movement Primitives (pre-print)

Animals plan complex motor actions not only quickly but seemingly with little effort, even for unseen tasks. This natural sense of timing and coordination motivates us to approach robot control from a motor skill learning perspective: designing fast, computationally light controllers that the robot can learn autonomously under mild modeling assumptions.

We propose Phase Portrait Movement Primitives, a new primitive representation that includes a phase predictor trained to adapt the timing of the robot's actions. We tested the method on a 20-degree-of-freedom task using a hydraulic upper-body humanoid. Click here for details and videos.
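
The basic idea of timing adaptation can be pictured with a toy sketch. This is not the formulation from the pre-print; the phase-rate predictor, the RBF decoder, and all names and parameters below are hypothetical placeholders, meant only to show how a state-dependent phase velocity lets the same primitive stretch or compress its execution in time.

    import numpy as np

    def rbf_features(phase, n_basis=10, width=0.02):
        # Normalized radial basis features over the phase variable in [0, 1].
        centers = np.linspace(0.0, 1.0, n_basis)
        feats = np.exp(-(phase - centers) ** 2 / (2.0 * width))
        return feats / feats.sum()

    class PhaseRatePredictor:
        # Stand-in for a learned model mapping the robot state to a phase velocity.
        def __init__(self, nominal_rate=1.0):
            self.nominal_rate = nominal_rate

        def __call__(self, tracking_error):
            # Here a larger tracking error simply slows the phase; a trained
            # predictor would learn this timing behavior from data.
            return self.nominal_rate / (1.0 + np.linalg.norm(tracking_error))

    def rollout(weights, predictor, dt=0.01, max_steps=1000):
        # Step the phase at the predicted rate and decode the desired posture.
        n_dof = weights.shape[0]
        phase, trajectory = 0.0, []
        for _ in range(max_steps):
            tracking_error = np.zeros(n_dof)           # would come from sensors
            phase = min(1.0, phase + predictor(tracking_error) * dt)
            trajectory.append(weights @ rbf_features(phase))
            if phase >= 1.0:
                break
        return np.array(trajectory)

    # Example: 20 degrees of freedom, 10 basis functions per joint.
    trajectory = rollout(np.random.randn(20, 10), PhaseRatePredictor())

With the error held at zero the primitive plays back at its nominal speed; feeding a real tracking error would slow the motion down as the robot falls behind.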

Reinforcement Learning of Motor Skills using Policy Search and Human Corrective Advice (IJRR 2019)

A research project led by Carlos Celemin (now at TU Delft) on a human-in-the-loop approach in which human feedback can arrive at any time during an exploration roll-out, and only sporadically: most roll-outs do not need to include any feedback at all. The feedback is seamlessly incorporated into the policy update as informed (biased) exploration noise, and learning speeds up by a factor of 4 to 40 depending on the task. You can watch the video of a ball-in-cup task where the policy starts from a blank state here, and find the details of the method in the paper here.
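
As a rough illustration of how corrective advice can bias exploration, the sketch below runs a toy episodic policy search in which the sampled exploration noise is shifted toward the advised direction whenever feedback is present. The reward-weighted update, the advice_gain parameter, and the toy task are assumptions made for illustration, not the algorithm from the paper.

    import numpy as np

    def sample_exploration(n_params, sigma, advice=None, advice_gain=1.0):
        # Zero-mean Gaussian exploration, optionally biased by a human correction.
        eps = sigma * np.random.randn(n_params)
        if advice is not None:                 # advice: signed correction per parameter
            eps = eps + advice_gain * advice   # informed / biased exploration noise
        return eps

    def policy_search(theta, rollout_return, get_advice, iters=200, sigma=0.05, lr=0.5):
        # Toy reward-weighted parameter update; roll-outs with advice explore
        # preferentially in the direction suggested by the human.
        for _ in range(iters):
            samples, returns = [], []
            for _ in range(10):                    # roll-outs per iteration
                eps = sample_exploration(len(theta), sigma, get_advice())
                samples.append(eps)
                returns.append(rollout_return(theta + eps))
            w = np.exp(np.array(returns) - np.max(returns))
            w = w / w.sum()
            theta = theta + lr * (w[:, None] * np.array(samples)).sum(axis=0)
        return theta

    # Toy usage: reach a goal in parameter space, with sporadic advice pointing at it.
    goal = np.ones(5)
    advice = lambda: 0.05 * goal if np.random.rand() < 0.2 else None
    theta = policy_search(np.zeros(5), lambda th: -np.sum((th - goal) ** 2), advice)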

Active Incremental Learning of Robot Movement Primitives (PMLR/CoRL 2017)

Robots must be capable of learning new tasks incrementally, via demonstrations. The problem is then to decide when the user should teach the robot a new skill and when to trust the robot to generalize its own actions. In this paper, we propose a method in which the robot actively makes such decisions by quantifying, via Gaussian processes, how suitable its own skill set is for a given query.
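
A minimal sketch of this kind of active decision could look as follows, with a 1-D query space, toy demonstration data, and an assumed uncertainty threshold rather than the paper's exact criterion: a Gaussian process maps task queries to primitive parameters, and the robot requests a demonstration whenever the predictive standard deviation at a new query is too large.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    # Demonstrated queries (e.g., object positions) and the primitive parameters
    # taught for them; both are stand-ins for real demonstration data.
    queries = np.array([[0.1], [0.3], [0.5]])
    primitive_params = np.sin(np.pi * queries)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-4)
    gp.fit(queries, primitive_params)

    def decide(query, uncertainty_threshold=0.1):
        # Execute the generalized primitive if the GP is confident; otherwise
        # actively request a new demonstration from the user.
        mean, std = gp.predict(np.atleast_2d(query), return_std=True)
        if std.max() > uncertainty_threshold:
            return None, "request a demonstration for this query"
        return mean, "execute the robot's own generalization"

    print(decide([0.4]))   # close to the demonstrations: generalize
    print(decide([0.9]))   # far from the demonstrations: ask the user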

In this video you can see a robot indicating to the user which demonstrations should be provided to increase its repertoire of skills. The experiment also shows that, by incrementally learning from neighboring demonstrations, the robot becomes confident in reaching objects for which demonstrations were never provided.

Interaction ProMPs for Human-Robot Collaboration

An interaction learning method for collaborative and assistive robots based on movement primitives. Our method allows for both action recognition and human–robot movement coordination. It uses imitation learning to construct a mixture model of human–robot interaction primitives. This probabilistic model allows the assistive trajectory of the robot to be inferred from human observations.
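
At its core, the inference step can be pictured as conditioning a joint Gaussian over human and robot primitive weights on an observation of the human, which is a simplification of the mixture model used in the paper. The dimensions, the observation-noise term, and the toy distribution below are illustrative assumptions.

    import numpy as np

    def condition_on_human(mu, Sigma, human_obs, n_human, obs_noise=1e-4):
        # Condition a joint Gaussian over [human; robot] weights on the observed
        # human weights, yielding the inferred distribution over robot weights.
        mu_h, mu_r = mu[:n_human], mu[n_human:]
        S_hh = Sigma[:n_human, :n_human] + obs_noise * np.eye(n_human)
        S_rh = Sigma[n_human:, :n_human]
        gain = S_rh @ np.linalg.inv(S_hh)
        mu_r_post = mu_r + gain @ (human_obs - mu_h)
        S_rr_post = Sigma[n_human:, n_human:] - gain @ S_rh.T
        return mu_r_post, S_rr_post

    # Toy joint distribution "learned from demonstrations": 4 human + 4 robot weights.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 8))
    mu, Sigma = rng.standard_normal(8), A @ A.T + 0.1 * np.eye(8)

    # Observing the human's weights yields the assistive trajectory weights for the robot.
    robot_mu, robot_cov = condition_on_human(mu, Sigma, rng.standard_normal(4), n_human=4)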

Potential applications of the method are personal caregiver robots, control of intelligent prosthetic devices, and robot coworkers in factories.

We first introduced the Interaction ProMP in this paper, and improved the action recognition here.