This paper introduces a control framework for bimanual robotic systems, aiming to automate the dynamic task of grabbing and tossing objects onto a moving target. The proposed method utilizes a mixed learning-optimization approach to compute tossing parameters, incorporating an inverse throwing map for accurate task execution. This map is integrated into a kinematics-based bi-level optimization process to determine feasible release states for the dual-arm robot. Additionally, the paper presents a closed-form model of the robot's tossable workspace, allowing prediction of high-probability intercept or landing locations for successful task outcomes. The coordinated motion of the dual-arm system is generated using dynamical systems, accompanied by an adaptation strategy to ensure robust interception in the face of target perturbations. Experimental validation on two 7-DoF robotic arms demonstrates the accuracy, robustness, speed, and energy advantages of the proposed approach over traditional pick-and-place strategies.
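To make the tossing step concrete, the sketch below computes a release velocity for intercepting a conveyor-borne target under a drag-free ballistic model. The paper itself relies on a learned inverse throwing map inside a bi-level optimization, so the closed-form model, function names, and numbers here are illustrative assumptions only.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, world z pointing up

def release_velocity(p_release, p_land, flight_time):
    """Release velocity carrying the object from p_release to p_land in
    flight_time seconds, assuming drag-free projectile motion."""
    return (p_land - p_release - 0.5 * GRAVITY * flight_time**2) / flight_time

def intercept_moving_target(p_release, p_target_now, v_target, flight_time):
    """Predict where a constant-velocity target (e.g., a conveyor slot) will be
    when the object lands, and the release velocity needed to meet it there."""
    p_land = p_target_now + v_target * flight_time
    return p_land, release_velocity(p_release, p_land, flight_time)

# Illustrative numbers: release 0.6 s before touchdown on a belt moving at 0.5 m/s.
p_land, v_rel = intercept_moving_target(
    p_release=np.array([0.4, 0.0, 1.0]),
    p_target_now=np.array([1.2, 0.3, 0.4]),
    v_target=np.array([0.5, 0.0, 0.0]),
    flight_time=0.6,
)
print("landing point:", p_land, "release velocity:", v_rel)
```

In the paper, candidate release states produced this way (or by the learned map) are then filtered through the kinematic feasibility and tossable-workspace constraints of the dual-arm system.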
Picking up objects and tossing them onto a conveyor belt are tasks performed daily in industry, and they are still carried out largely by humans. This article proposes a unified motion generator for a bimanual robotic system that enables two seven-degree-of-freedom robotic arms to grab and toss an object in a single sweeping motion. Unlike classical approaches that grab the object with quasi-zero contact velocity, the proposed approach can grasp the object while in motion. We control the contact forces before and after impact to stabilize the robots' grip on the object. We show that such swift grasping speeds up the pick-and-place process and reduces the energy expended for tossing. Continuous control of the reach, grab, and toss motion is achieved by combining a sequence of time-invariant dynamical systems (DS) in a single control framework. We introduce a state-dependent modulation function to control the generated velocity in different directions. The framework is validated in simulation and on a real dual-arm system. We show that we can precisely toss objects within a 0.2 m × 0.4 m workspace. Moreover, we show that the algorithm can adapt on the fly to changes in the object's location.
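As an illustration of the state-dependent modulation idea, the sketch below modulates a simple linear DS so that the velocity component pointing towards the object is amplified near contact while other directions are left unchanged. The actual modulation function used in the work differs; the gains, length scale, and function names here are assumptions.

```python
import numpy as np

def nominal_ds(x, x_attractor, gain=2.0):
    """Globally stable linear DS driving the end-effector to the attractor."""
    return -gain * (x - x_attractor)

def modulation_matrix(x, x_object, boost=2.0, sigma=0.15):
    """State-dependent modulation: scale the velocity component pointing at the
    object as the end-effector approaches, leaving other directions unchanged."""
    d = x_object - x
    dist = np.linalg.norm(d)
    n = d / (dist + 1e-9)                      # unit direction towards the object
    alpha = 1.0 + boost * np.exp(-(dist / sigma) ** 2)
    return np.eye(3) + (alpha - 1.0) * np.outer(n, n)

def modulated_velocity(x, x_object, x_attractor):
    return modulation_matrix(x, x_object) @ nominal_ds(x, x_attractor)

# Example: approaching an object placed at the attractor, the commanded
# velocity grows along the approach direction as the distance shrinks.
x_obj = np.array([0.5, 0.0, 0.3])
print(modulated_velocity(np.array([0.45, 0.0, 0.3]), x_obj, x_obj))
```

Because the modulation only rescales the nominal DS velocity, the attractor and its stability properties are preserved while the grasp is made at non-zero contact velocity.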
Impact-Aware Manipulation: Pick and Toss Objects using Dynamical Systems (DS)
Tested at the Innovation Lab of Vanderlande at the Eindhoven University of Technology (TU/e)
This paper proposes a manipulation scheme based on learning the motion of objects after they are hit by a robotic end-effector. This allows an object to be positioned at a desired location outside the physical workspace of the robot. An estimate of the object dynamics under friction and collisions is learnt and used to predict the desired hitting parameters (speed and direction), given the initial and desired locations of the object. Based on the obtained hitting parameters, the desired pre-impact velocity of the end-effector is generated using a stable dynamical system. The proposed DS is validated in simulation and then used to learn a hitting model on hardware; the approach is tested on a real KUKA LBR IIWA robot.
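For intuition, a hand-crafted alternative to the learned object model is a Coulomb-friction sliding model combined with a 1-D impact law, which can be inverted to obtain the hitting parameters. The paper learns this mapping from data instead, so the closed-form model, restitution value, and function names below are illustrative assumptions.

```python
import numpy as np

G = 9.81  # m/s^2

def sliding_distance(v_object, mu):
    """Distance an object slides on a flat surface under Coulomb friction."""
    return v_object**2 / (2.0 * mu * G)

def required_object_speed(distance, mu):
    """Invert the sliding model: post-impact object speed for a desired travel."""
    return np.sqrt(2.0 * mu * G * distance)

def pre_impact_ee_speed(v_object, m_object, m_ee_effective, restitution=0.8):
    """Pre-impact end-effector speed from a 1-D partially elastic impact:
    v_object = (1 + e) * m_ee * v_ee / (m_ee + m_object), solved for v_ee."""
    return v_object * (m_ee_effective + m_object) / ((1.0 + restitution) * m_ee_effective)

# Example: slide a 0.5 kg box 0.8 m on a surface with friction coefficient 0.3.
v_obj = required_object_speed(0.8, mu=0.3)
print(v_obj, pre_impact_ee_speed(v_obj, m_object=0.5, m_ee_effective=3.0))
```

The learned model plays the role of these two analytic maps, and the resulting pre-impact speed and direction are then fed to the stable DS that shapes the end-effector motion up to impact.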
This paper presents an approach to achieve stable bimanual reach-to-grasp and compliant manipulation of an object by a humanoid robot. It uses dynamical systems and exploits the concept of a shrinkable virtual object to coordinate the motion of the robot's hands by imposing virtual constraints on them. Moreover, through its shrinkage, it ensures a smooth transition from the virtual constraints in free motion to the real constraints once the object is grasped. The controller also computes contact-consistent optimal wrenches that stabilize the grasp and achieve the desired manipulation tasks. This manipulation algorithm is then integrated into a whole-body controller that stabilizes the robot. Finally, the proposed solution is validated on the humanoid robot iCub.
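A minimal way to picture the shrinkable virtual object is as two virtual grasp points whose separation shrinks from the initial hand spacing towards the real object width, with each hand's DS attracted to its own point. The sketch below is an assumption-laden illustration of that idea, not the paper's controller; the shrink rate and function names are hypothetical.

```python
import numpy as np

def virtual_grasp_points(object_center, grasp_axis, object_width,
                         current_width, shrink_rate, dt):
    """Shrink the virtual object towards the real one and return the two
    virtual grasp points the hands are driven to, plus the updated width."""
    new_width = max(object_width, current_width - shrink_rate * dt)
    half = 0.5 * new_width * grasp_axis / np.linalg.norm(grasp_axis)
    return object_center - half, object_center + half, new_width

# Example: hands start 0.6 m apart and close on a 0.2 m wide object at 0.5 m/s.
left, right, width = virtual_grasp_points(np.array([0.4, 0.0, 0.8]),
                                          np.array([0.0, 1.0, 0.0]),
                                          object_width=0.2, current_width=0.6,
                                          shrink_rate=0.5, dt=0.01)
print(left, right, width)
```

When the virtual width reaches the real one, the virtual constraints coincide with the physical contact constraints, which is what gives the smooth transition from free motion to grasping.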
This paper proposes a capture-point-based reactive omnidirectional controller for bipedal locomotion. The proposed scheme, formulated within a Model Predictive Control (MPC) framework, exploits the Center of Mass (CoM) and Capture Point (CP) dynamics concurrently. It allows the online generation of the CoM reference trajectory and the automatic generation of footstep positions and orientations in response to a reference velocity to be tracked or a disturbance to be rejected, while explicitly accounting for different walking constraints. For instance, to cope with a disturbance such as a push, the proposed controller not only adjusts the position of the Center of Pressure (CoP) within the support foot but can also trigger one or more steps of appropriate length, thus maintaining the stability of the robot. Finally, the proposed algorithm is validated through simulations and real experiments on the humanoid robot iCub.
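The capture point is defined from the linear inverted pendulum model as ξ = c + ċ/ω with ω = √(g/z_c), and it diverges exponentially away from the CoP, which is why the controller must keep the CoP, or the next footstep, appropriately placed with respect to it. The sketch below is a toy illustration of that dynamics under the constant-height pendulum assumption, not the MPC formulation itself.

```python
import numpy as np

G = 9.81  # m/s^2

def capture_point(com_pos, com_vel, com_height):
    """Instantaneous capture point of the linear inverted pendulum model."""
    omega = np.sqrt(G / com_height)
    return com_pos + com_vel / omega

def propagate_capture_point(cp, cop, com_height, dt):
    """Closed-form CP dynamics: the CP diverges exponentially away from the CoP,
    so stability requires steering the CoP (or stepping) to catch up with it."""
    omega = np.sqrt(G / com_height)
    return cop + np.exp(omega * dt) * (cp - cop)

# Example: after a push gives the CoM a 0.3 m/s velocity, the CP jumps ahead of
# the CoP and keeps drifting until the CoP or a new footstep recaptures it.
cp0 = capture_point(np.array([0.0, 0.0]), np.array([0.3, 0.0]), com_height=0.55)
print(cp0, propagate_capture_point(cp0, np.array([0.02, 0.0]), 0.55, dt=0.1))
```

In the proposed controller, this divergence is handled predictively: the MPC chooses CoP trajectories and footstep locations over a horizon so that the CP remains capturable despite the walking constraints.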
Admittance-based Reactive Walking in Robot-Robot Collaboration
Compliant interaction between a mobile manipulator and a humanoid robot using reactive walking
This project deals with the control of a humanoid robot based on visual servoing. It seeks to confer a degree of autonomy on the robot in the achievement of tasks such as reaching a desired position and tracking and/or grasping an object. The autonomy of humanoid robots is considered crucial to the success of the numerous services such robots can render, given their ability to combine dexterity and mobility in structured, unstructured, or even hazardous environments. To achieve this objective, a humanoid robot is fully modeled and the control of its locomotion, conditioned by postural balance and gait stability, is studied. The presented approach is formulated to account for all the joints of the biped robot. To conform the reference commands from visual servoing to the discrete locomotion mode of the robot, this study exploits a reactive omnidirectional walking pattern generator and a visual task Jacobian redefined with respect to a floating base on the humanoid robot instead of the stance foot. The redundancy stemming from the high number of degrees of freedom, coupled with the omnidirectional mobility of the robot, is handled within the task-priority framework, allowing configuration-dependent sub-objectives such as improving reachability and manipulability and avoiding joint limits to be achieved. Beyond a kinematic formulation of visual servoing, this project explores a dynamic visual approach and proposes two new visual servoing laws. Lyapunov theory is used first to prove the stability and convergence of the visual closed loop, and then to derive a robust adaptive controller for the combined robot-vision dynamics, yielding a uniformly ultimately bounded solution. Finally, all proposed schemes are validated in simulation and experimentally on the humanoid robot NAO.
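The task-priority handling of redundancy mentioned above typically amounts to projecting lower-priority tasks into the null space of higher-priority ones. The sketch below shows a standard two-level hierarchy with a damped pseudo-inverse; it illustrates the general technique under generic Jacobians, not this project's specific controller, tasks, or gains.

```python
import numpy as np

def damped_pinv(J, damping=1e-3):
    """Damped least-squares pseudo-inverse, well behaved near singularities."""
    JJt = J @ J.T
    return J.T @ np.linalg.inv(JJt + damping**2 * np.eye(JJt.shape[0]))

def two_level_task_priority(J1, dx1, J2, dx2):
    """Strict two-level hierarchy: the secondary task (e.g., manipulability or
    joint-limit avoidance) only uses the redundancy left by the primary task."""
    J1_pinv = damped_pinv(J1)
    dq1 = J1_pinv @ dx1                          # primary task (e.g., visual task)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1      # null-space projector of task 1
    dq2 = damped_pinv(J2 @ N1) @ (dx2 - J2 @ dq1)
    return dq1 + N1 @ dq2

# Example with random Jacobians standing in for a redundant kinematic chain.
rng = np.random.default_rng(0)
dq = two_level_task_priority(rng.standard_normal((3, 7)), np.array([0.1, 0.0, 0.0]),
                             rng.standard_normal((2, 7)), np.array([0.0, 0.05]))
print(dq)
```

In this project, the primary task would be the visual task expressed through the floating-base Jacobian, while the secondary tasks encode the configuration-dependent sub-objectives listed above.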