Guilherme Maeda
Robotics Researcher at Sony AI


Since 2022, I have been a senior research scientist at Sony AI in Tokyo, Japan. Previously, I held positions at Preferred Networks, Inc. and at the ATR Computational Neuroscience Laboratory. Between 2013 and 2017, I was with the Intelligent Autonomous Systems group at TU Darmstadt. Since my Ph.D. at the Australian Centre for Field Robotics, I have been a robotics researcher with a particular interest in learning and control methods.

News and updates


  1. August 2022. Our paper (while at Preferred Networks) "F1 Hand: A Versatile Fixed-Finger Gripper for Delicate Teleoperation and Autonomous Grasping" was accepted for publication in IEEE Robotics and Automation Letters (RA-L) and for presentation at IROS 2022. (pdf)
  2. June 2022. Our paper "F3 Hand: A Versatile Robot Hand Inspired by Human Thumb and Index Fingers" was accepted for presentation at the IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2022).
  3. April 2022. I have joined the Steering Committee of ROBOTS.ieee.org.
  4. April 2022. Our ICRA 2022 paper "Blending Primitive Policies in Shared Control for Assisted Teleoperation" (while at Preferred Networks) is now available on arXiv.

Past news and updates [here].


Selected papers


F1 Hand: A Versatile Fixed-Finger Gripper for Delicate Teleoperation and Autonomous Grasping



[Accepted for publication in RA-L and for presentation at IROS 2022]


In this research, we introduce the F1 hand, a single-motor gripper that facilitates teleoperation through the use of a fixed finger. The hand can grasp objects as thin as a paper clip and as heavy and large as a cordless drill. However, the hand's atypical asymmetric structure and actuation render usual grasping strategies inapplicable. We propose a grasping controller that approximately recovers actuation symmetry by using the motion of the whole arm. The controller can be used in both teleoperation and fully autonomous grasping modes.

Phase Portrait Movement Primitives

[Neural Networks Journal, 2020]

Animals plan complex motor actions not only fast but seemingly with little effort even on unseen tasks. This natural sense of time and coordination motivates us to approach robot control from a motor skill learning perspective to design fast and computationally light controllers that can be learned autonomously by the robot under mild modeling assumptions.
We propose Phase Portrait Movement Primitives, a new primitive representation that includes a phase predictor trained to adapt the timing of the robot's actions. We tested the method on a task comprising 20 degrees of freedom using a hydraulic upper-body humanoid.

Interaction ProMPs for Human-Robot Collaboration

[Autonomous Robots, 2015; International Journal of Robotics Research, 2017]

An interaction learning method for collaborative and assistive robots based on movement primitives. Our method allows for both action recognition and human–robot movement coordination. It uses imitation learning to construct a mixture model of human–robot interaction primitives. This probabilistic model allows the assistive trajectory of the robot to be inferred from human observations.
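The core inference step can be illustrated with a toy numerical sketch: model human and robot trajectory weights jointly as a Gaussian learned from demonstrations, then condition on an observation of the human to obtain the robot's assistive trajectory weights. The dimensions and random "demonstration" data below are placeholders, not values from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

n_h, n_r = 5, 5                       # human / robot weight dimensions
W = rng.normal(size=(50, n_h + n_r))  # stand-in demonstration weights
mu = W.mean(axis=0)
Sigma = np.cov(W, rowvar=False) + 1e-6 * np.eye(n_h + n_r)

# Partition the joint Gaussian over [human; robot] weights
mu_h, mu_r = mu[:n_h], mu[n_h:]
S_hh = Sigma[:n_h, :n_h]
S_rh = Sigma[n_h:, :n_h]

def infer_robot_weights(w_h_obs):
    """Posterior mean of the robot weights given observed human weights."""
    return mu_r + S_rh @ np.linalg.solve(S_hh, w_h_obs - mu_h)

w_r = infer_robot_weights(rng.normal(size=n_h))
print(w_r.shape)  # (5,)
```

In the actual method, the weights parameterize movement primitives and a mixture model handles multiple interaction patterns; this sketch shows only the single-Gaussian conditioning at its core.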

Reinforcement Learning of Motor Skills using Policy Search and Human Corrective Advice

[International Journal of Robotics Research, 2019]

A research project led by Carlos Celemin (now at TU Delft) on a human-in-the-loop approach where human feedback can come at any time during the execution of an exploration roll-out. The feedback can also come sporadically: most roll-outs do not even need to include it. This feedback is seamlessly incorporated into the policy update as informed/biased exploration noise. Depending on the task, the learning rate increases by factors of 4 to 40. You can find the details of the method in the journal.
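The idea of advice as biased exploration can be sketched as follows: instead of always sampling zero-mean exploration noise, shift the noise toward the human's correction whenever advice is available. The reward function, advice signal, and hill-climbing update below are illustrative stand-ins, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(3)  # policy parameters

def reward(params):
    # Stand-in objective with its optimum at [1, -1, 0.5]
    return -np.sum((params - np.array([1.0, -1.0, 0.5])) ** 2)

def rollout_noise(advice=None, scale=0.3):
    """Zero-mean exploration, or exploration biased toward human advice."""
    eps = rng.normal(scale=scale, size=theta.shape)
    if advice is not None:
        eps += advice  # informed/biased exploration noise
    return eps

for step in range(200):
    # Sporadic advice: the "human" only nudges the policy now and then
    advice = (0.1 * np.sign(np.array([1.0, -1.0, 0.5]) - theta)
              if step % 10 == 0 else None)
    eps = rollout_noise(advice)
    if reward(theta + eps) > reward(theta):  # keep improving roll-outs
        theta = theta + eps

print(np.round(theta, 2))
```

Because the advised steps point toward better parameters, roll-outs that include them are accepted more often, which is the intuition behind the reported speed-ups.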

Visual Task Progress Estimation with Appearance Invariant Embeddings for Robot Control and Planning

[IROS 2020]

This work explores the consistency of the progress among different examples and viewpoints of a task to train a deep neural network to map images into measurable features. Our method builds upon Time-Contrastive Networks (TCNs), originally proposed as a representation for continuous visuomotor skill learning, to train the network using only discrete snapshots taken at different stages of a task. The intent is to make the network sensitive to differences in task phases. We associate these embeddings to a sequence of images representing gradual task accomplishment, allowing a robot to iteratively query its motion planner with the current visual state to solve long-horizon tasks.

Active Incremental Learning of Robot Movement Primitives

[CoRL, 2017]

Robots must be capable of learning new tasks incrementally via demonstrations. The problem, then, is to decide when the user should teach the robot a new skill and when to trust the robot to generalize its own actions. In this paper, we propose a method where the robot actively makes such decisions by quantifying the suitability of its own skill set for a given query via Gaussian Processes.
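The decision rule can be sketched numerically: condition a GP on the queries for which demonstrations exist and request a new demonstration when the predictive variance at a query exceeds a threshold. The RBF kernel, length scale, and threshold below are illustrative choices, not the paper's.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential kernel between two 1-D query sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

X = np.array([0.0, 0.2, 0.4])  # queries covered by demonstrations
noise = 1e-4

def predictive_var(x_star):
    K = rbf(X, X) + noise * np.eye(len(X))
    k_s = rbf(X, np.atleast_1d(x_star))
    return 1.0 - (k_s.T @ np.linalg.solve(K, k_s)).item()

def should_request_demo(x_star, threshold=0.1):
    """High predictive variance -> ask the user for a demonstration."""
    return predictive_var(x_star) > threshold

print(should_request_demo(0.3))  # False: close to known demonstrations
print(should_request_demo(2.0))  # True: far from all demonstrations
```

Queries between neighboring demonstrations stay below the threshold, which mirrors the behavior in the video: the robot trusts interpolation near known demonstrations and asks for help elsewhere.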

In this video, you can see a robot indicating to the user which demonstrations should be provided to increase its repertoire of skills. The experiment also shows that the robot becomes confident in reaching objects for which demonstrations were never provided, by incrementally learning from neighboring demonstrations.
