In this research, we introduce the F1 hand, a single-motor gripper whose fixed finger facilitates teleoperation. The hand can grasp objects as thin as a paper clip and as large and heavy as a cordless drill. However, its atypical asymmetric structure and actuation render usual grasping strategies inapplicable. We propose a grasping controller that approximately recovers actuation symmetry by using the motion of the whole arm. The controller can be used both in teleoperation and in fully autonomous grasping.
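A minimal sketch of the underlying idea, under the assumption of a planar closing motion: because only one finger moves while the other is fixed, the arm translates the wrist opposite to the finger at half its speed, so the grasp midpoint stays put as it would with a symmetric two-finger gripper. All names here (`symmetric_grasp_arm_velocity`, `closing_direction`, `fingertip_speed`) are illustrative, not the published controller's API.

```python
import numpy as np

def symmetric_grasp_arm_velocity(closing_direction, fingertip_speed):
    """Compensating Cartesian velocity for the wrist/arm (illustrative).

    closing_direction: vector (3,) along which the actuated finger moves
                       toward the fixed finger.
    fingertip_speed:   scalar closing speed of the actuated fingertip.

    Moving the arm at half the closing speed, opposite to the finger
    motion, keeps the midpoint between the two fingertips fixed in the
    world frame, approximately recovering the symmetric closing behavior
    of a conventional two-finger gripper.
    """
    d = np.asarray(closing_direction, dtype=float)
    d /= np.linalg.norm(d)
    return -0.5 * fingertip_speed * d

# Example: finger closes along +x at 0.04 m/s -> arm moves along -x at 0.02 m/s.
print(symmetric_grasp_arm_velocity([1.0, 0.0, 0.0], 0.04))
```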
Animals plan complex motor actions not only quickly but seemingly with little effort, even on unseen tasks. This natural sense of timing and coordination motivates us to approach robot control from a motor skill learning perspective and to design fast, computationally light controllers that the robot can learn autonomously under mild modeling assumptions.
We propose Phase Portrait Movement Primitives, a new primitive representation that includes a phase predictor that can be trained to adapt the timing of the robot's actions. We tested the method on a task comprising 20 degrees of freedom using a hydraulic upper-body humanoid.
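A minimal sketch of what a trainable phase predictor buys you, with assumed placeholder data and a nearest-neighbor predictor standing in for the learned model: instead of advancing the primitive with a fixed clock, the phase is inferred from the robot's current state, so execution naturally speeds up or slows down.

```python
import numpy as np

# Hypothetical training data: robot states recorded along a demonstration,
# each labelled with its phase in [0, 1] (0 = start, 1 = end).
demo_states = np.random.randn(100, 20)          # 20 degrees of freedom
demo_phases = np.linspace(0.0, 1.0, 100)

def predict_phase(state, states=demo_states, phases=demo_phases):
    """Nearest-neighbor phase predictor (stand-in for a learned model).

    Returns the phase of the demonstrated state closest to the current
    one; driving the primitive with this predicted phase adapts the
    timing of the motion to what the robot has actually achieved so far.
    """
    idx = np.argmin(np.linalg.norm(states - state, axis=1))
    return phases[idx]

def primitive(phase, start=np.zeros(20), goal=np.ones(20)):
    """Toy primitive: minimum-jerk interpolation indexed by phase."""
    s = 10 * phase**3 - 15 * phase**4 + 6 * phase**5
    return (1 - s) * start + s * goal

current_state = demo_states[42]
q_desired = primitive(predict_phase(current_state))
```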
An interaction learning method for collaborative and assistive robots based on movement primitives. Our method allows for both action recognition and human–robot movement coordination. It uses imitation learning to construct a mixture model of human–robot interaction primitives. This probabilistic model allows the robot's assistive trajectory to be inferred from observations of the human.
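A minimal sketch of the inference step for a single interaction, assuming a joint Gaussian over concatenated human and robot primitive weights learned from paired demonstrations (the dimensions and data below are made up). In the full method, one such model exists per mixture component, and the component responsibilities double as the action recognizer.

```python
import numpy as np

# Hypothetical joint Gaussian over concatenated [human; robot] primitive weights.
dim_h, dim_r = 4, 4
rng = np.random.default_rng(0)
W = rng.standard_normal((50, dim_h + dim_r))          # 50 paired demonstrations
mu = W.mean(axis=0)
Sigma = np.cov(W, rowvar=False) + 1e-6 * np.eye(dim_h + dim_r)

def infer_robot_weights(w_human):
    """Condition the joint Gaussian on the observed human weights.

    Returns the posterior mean of the robot's primitive weights, which
    parameterize the assistive trajectory to be executed.
    """
    mu_h, mu_r = mu[:dim_h], mu[dim_h:]
    S_hh = Sigma[:dim_h, :dim_h]
    S_rh = Sigma[dim_h:, :dim_h]
    return mu_r + S_rh @ np.linalg.solve(S_hh, w_human - mu_h)

print(infer_robot_weights(rng.standard_normal(dim_h)))
```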
A research project led by Carlos Celemin (now at TU Delft) on a human-in-the-loop approach in which human feedback can arrive at any time during an exploration roll-out, and only sporadically: most roll-outs do not need to include any feedback at all. The feedback is seamlessly incorporated into the policy update as an informed, biased exploration noise. Learning speeds up by a factor of 4 to 40, depending on the task. You can find the details of the method in the journal paper.
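A minimal sketch of the exploration-biasing idea, not the published algorithm: parameter exploration is zero-mean by default, and an occasional human correction shifts the mean of the exploration noise for that roll-out. All names and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(5)              # current policy parameters
sigma = 0.1                      # exploration noise scale
alpha = 0.5                      # gain on the human correction

def explore(human_correction=None):
    """Sample roll-out parameters, optionally biased by human feedback.

    Without feedback the exploration noise is zero-mean.  When the user
    gives a (sporadic) corrective signal, it shifts the mean of the noise,
    steering the roll-out toward the correction instead of exploring blindly.
    """
    mean = np.zeros_like(theta)
    if human_correction is not None:
        mean = alpha * np.asarray(human_correction, dtype=float)
    return theta + rng.normal(mean, sigma)

params_plain  = explore()                                      # most roll-outs
params_biased = explore(human_correction=[0, 0, 1.0, 0, 0])    # with feedback
```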
This work leverages the consistency of task progress across different examples and viewpoints to train a deep neural network that maps images into measurable features. Our method builds upon Time-Contrastive Networks (TCNs), originally proposed as a representation for continuous visuomotor skill learning, but trains the network using only discrete snapshots taken at different stages of a task, with the intent of making the network sensitive to differences in task phase. We associate these embeddings with a sequence of images representing gradual task accomplishment, allowing a robot to iteratively query its motion planner with the current visual state to solve long-horizon tasks.
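A minimal sketch of the querying loop, assuming the embedding network is already trained (a random projection stands in for it here, and all names are illustrative): the current camera image is located among the embedded snapshots, and the following snapshot becomes the next target handed to the planner.

```python
import numpy as np

def embed(image):
    """Stand-in for the trained embedding network (e.g. a TCN-style CNN):
    here simply a fixed random projection of the flattened image."""
    rng = np.random.default_rng(0)                 # fixed projection
    P = rng.standard_normal((32, image.size))
    return P @ image.ravel()

# Snapshots depicting gradual task accomplishment, in order.
snapshots = [np.random.rand(64, 64, 3) for _ in range(5)]
snapshot_embs = np.stack([embed(s) for s in snapshots])

def next_subgoal(current_image):
    """Locate the current visual state among the snapshot embeddings and
    return the following snapshot as the planner's next target."""
    d = np.linalg.norm(snapshot_embs - embed(current_image), axis=1)
    stage = int(np.argmin(d))
    return snapshots[min(stage + 1, len(snapshots) - 1)]

subgoal_image = next_subgoal(np.random.rand(64, 64, 3))
```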
Robots must be capable of learning new tasks incrementally from demonstrations. The problem then is to decide when the user should teach the robot a new skill and when the robot can be trusted to generalize its existing skills. In this paper, we propose a method in which the robot actively makes such decisions by quantifying, via Gaussian processes, how suitable its own skill set is for a given query.
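A minimal sketch of how a Gaussian process can drive that decision, with made-up one-dimensional queries and skill parameters: the predictive standard deviation at a query measures how well the demonstrated skills cover it, and above a threshold the robot asks for a new demonstration instead of generalizing on its own. The names and threshold are illustrative, not the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical data: task queries (e.g. object positions) and the skill
# parameters that were demonstrated for them.
X_demo = np.array([[0.1], [0.3], [0.4], [0.9]])
y_demo = np.sin(2 * np.pi * X_demo).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1), alpha=1e-4)
gp.fit(X_demo, y_demo)

def act_or_ask(query, std_threshold=0.3):
    """Generalize if the GP is confident at the query, otherwise ask.

    The predictive standard deviation quantifies how well the current
    skill set covers the query; above the threshold the robot requests
    a new demonstration instead of executing its own generalization.
    """
    mean, std = gp.predict(np.atleast_2d(query), return_std=True)
    if std[0] > std_threshold:
        return "request demonstration", None
    return "execute", mean[0]

print(act_or_ask([0.35]))   # close to demonstrations -> execute
print(act_or_ask([0.65]))   # far from demonstrations -> ask for a demo
```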
In this video you can see a robot indicating to the user which demonstrations should be provided to increase its repertoire of skills. The experiment also shows that the robot becomes confident in reaching objects for which demonstrations were never provided, by incrementally learning from neighboring demonstrations.