I am a research scientist working with robots. The motivation of my work is to contribute to the technology that will eventually enable robots to autonomously learn how to help us, without the need for hand-coded programs.
Autonomous learning is essential when we cannot anticipate which tasks the robot will face after deployment---for example, robots deployed at home, in a hospital, or on another planet---so an engineer cannot pre-program it in advance. Also, some tasks cannot be easily described by physics modeling and first principles, such as in this paper; in such cases, the only hope is that the robot learns the underlying dynamics by itself, autonomously. A collaborative learning example is when a robot must assess what a human partner is doing so that it can generate the corresponding control commands to assist them. A learning control example is when a remotely located robot must adapt to its environment without human supervision.
Why should we care about such capabilities? Ultimately, we may have to rely on robots to fill existing and future gaps in our society. Think about the inevitable and unprecedented growth of the elderly population around the globe, the assistance of communities and exploration of resources in remote areas, rescue and disaster response, and optimized, adaptive rehabilitation and prosthetics. It is hard to imagine there will be someone available to hand-code the robot's program in these scenarios, each of which may encompass numerous and unforeseeable tasks.
Bio: Since 2019, I have been working as a researcher at Preferred Networks, Inc. in Tokyo, Japan. Previously, I was at the ATR Computational Neuroscience Laboratories, in the Department of Brain Robot Interface (BRI), also in Japan. Between 2013 and 2017, I was with the Intelligent Autonomous Systems (IAS) group at TU Darmstadt, working with Jan Peters, where I led the IAS group's participation in the EU-funded project 3rd Hand. I received my Ph.D. from the Australian Centre for Field Robotics (ACFR) under the supervision of Hugh Durrant-Whyte, Surya Singh, David Rye, and Ian Manchester. Between 2005 and 2007, I completed a master's degree in control engineering at the Tokyo Institute of Technology (TITECH).
Maeda, G.; Koc, O.; Morimoto, J. (2018). "Reinforcement Learning of Phase Oscillators for Fast Adaptation to Moving Targets", Proceedings of Machine Learning Research (PMLR), Conference on Robot Learning (CoRL).
Maeda, G.; Neumann, G.; Ewerton, M.; Lioutikov, R.; Kroemer, O.; Peters, J. (2017). "Probabilistic Movement Primitives for Coordination of Multiple Human-Robot Collaborative Tasks", Autonomous Robots (AURO), vol. 41, no. 3, pp. 593–612.
Maeda, G.; Manchester, I.; Rye, D. (2015). "Combined ILC and Disturbance Observer for the Rejection of Near-Repetitive Disturbances, with Application to Excavation", IEEE Transactions on Control Systems Technology, vol. 23, no. 5, pp. 1754–1769.
Contact: firstname.lastname@example.org