Me, myself and robotics – how the human body has inspired robotics
Walking comes so naturally to us that we rarely give it a thought. Yet the earliest walking robot took a decade to design and used so much energy that its battery lasted only 15 minutes.
Human joints are amongst the most complicated parts of our anatomy. Knee or hip joints must bear our bodyweight, yet be flexible enough to allow for movement. Human joints are impressively well-designed for this; they use the elasticity in our muscles and tendons to help us balance and they can even store energy on a down step to use on an up step, meaning we waste less energy.
With early robots requiring large amounts of energy, robot engineers turned to human bodies for inspiration in designing robotic joints. Robots use devices called actuators to create or prevent movement, and increasingly these are inspired by human muscles, allowing robots to achieve more stable and accurate force control and even conserve energy as humans do. One example of this is Hitachi’s EMIEW3, which utilises “adaptive suspension” in its leg mechanism to absorb the impacts generated by running over obstacles on the floor; the actuator and springs work together to maintain the robot’s balance in an energy-efficient way. It can do this while travelling at 6 km/h, allowing it to keep up with humans.
However, these man-made “muscles” have specific use cases. For example, electric coreless motor actuators are suited to high-speed activity but not to moving heavy loads, while hydraulic actuators handle heavy loads well but can only achieve low speeds. This means robot muscles are not as adaptable as those in humans; a robot’s muscles must be designed with a specific activity in mind. Improvements in actuator design could be the next leap forward in humanoid robotics.
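The trade-off above can be made concrete with a toy selection rule. The categories come from the text, but the thresholds are invented purely for illustration:

```python
# Toy actuator chooser illustrating the speed-vs-load trade-off.
# Thresholds are invented for illustration only.

def choose_actuator(load_kg: float, speed_mps: float) -> str:
    """Suggest an actuator family for a joint, given the payload it
    must move and the speed it must reach."""
    if load_kg > 50 and speed_mps < 0.5:
        return "hydraulic"                 # good with heavy loads, but slow
    if load_kg <= 50 and speed_mps >= 0.5:
        return "electric coreless motor"   # fast, but only light loads
    return "no single actuator fits: redesign the joint or combine actuators"

print(choose_actuator(80, 0.2))  # hydraulic
print(choose_actuator(5, 2.0))   # electric coreless motor
print(choose_actuator(80, 2.0))  # neither family fits well
```

The awkward third case, where a joint needs both speed and strength, is exactly where human muscle still outperforms today’s actuators.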
Evidence suggests that another technique humans use to save energy while walking is being permanently “off balance”: in effect, we are constantly falling forwards and catching ourselves. The earliest robots were kept permanently balanced while walking because, paradoxically, the balancing technology was not yet advanced enough to manage human-style imbalance. This meant that every step the robot took required fresh energy, which further contributed to their short battery lives. Modern robots use these very human strategies to conserve energy so they can stay active for longer.
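This “falling and catching” idea is often pictured as an inverted pendulum: the body tips forward over the standing foot under gravity, and placing the next foot catches the fall. A minimal sketch of that picture, using simplified linearised dynamics and invented parameters:

```python
# Minimal linearised inverted-pendulum walker: the body falls forward
# under gravity, and a step "catches" it by resetting the lean angle.
# Parameters are illustrative, not from any real robot.

G = 9.81           # m/s^2, gravity
LEG_LENGTH = 0.9   # m, hypothetical leg length
DT = 0.001         # s, integration time step
CATCH_ANGLE = 0.2  # rad, forward lean at which the swing foot lands

def simulate_falling_walker(duration_s: float = 2.0) -> int:
    """Integrate theta'' = (g/l) * theta with forward Euler and count
    how many 'catch' steps occur within `duration_s` seconds."""
    theta, omega = 0.05, 0.0  # small initial forward lean, at rest
    steps = 0
    for _ in range(int(duration_s / DT)):
        omega += (G / LEG_LENGTH) * theta * DT  # gravity accelerates the fall
        theta += omega * DT
        if theta >= CATCH_ANGLE:
            steps += 1
            theta, omega = 0.05, 0.0  # foot lands: begin the next fall
    return steps

print(simulate_falling_walker())  # gravity, not a motor, drives each fall
```

The point of the toy model is that gravity does the work of moving the body forward between steps; the walker only spends energy on the catch, which is why this strategy is so economical.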
Another example is grasping. We’ve talked about the complexities of knee and hip joints, but hands must also be capable of handling heavy loads while synchronising the movement of each individual knuckle joint. Such is the complexity of this task that entire companies are focused on mastering the design of hands. Some designs use tiny electric motors; others use “air muscles” that force air into a rubber bladder, causing pushing and pulling motions that mimic the extension and contraction of human muscle. Whichever technology is used, engineers are turning to the human body to overcome technical barriers.
From a sensory perspective, when you’re hurrying to grab a coffee on the way to work while flicking through emails, you probably don’t stop to think about how you grasp the cup. It might be too hot, requiring you to add another layer of insulation; you avoid grabbing it by the lid because it may come off and spill the coffee; and you keep it level so the contents don’t splash out. Our brains handle all of this subconsciously, but robots need to learn it. Much of it can be taught to a robot piecemeal by programmers for each specific situation, but in the future robots will be able to learn it for themselves and react to new situations. For example, say a robot decides to ascend a spiral staircase, and then, halfway up, it detects people coming down in the opposite direction. It may never have used this staircase before, so it would need to respond to a developing situation incredibly fast, making numerous decisions and calculations in the space of a few milliseconds.
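The “piecemeal” programming of the coffee-cup example might look something like the toy rules below. Every rule, name, and threshold here is invented to show the flavour of hand-coded behaviour, and the sheer number of rules a real robot would need is exactly why learning is the more promising path:

```python
# Toy, hand-programmed grasp rules of the kind a programmer might
# teach a robot piecemeal; every rule and threshold is invented.

def plan_cup_grasp(surface_temp_c: float, has_loose_lid: bool) -> list[str]:
    """Return a list of grasp decisions for carrying a coffee cup."""
    actions = []
    if surface_temp_c > 55:
        actions.append("add insulating sleeve before gripping")
    if has_loose_lid:
        actions.append("grip the body, not the lid")
    actions.append("keep the cup level while moving")
    return actions

print(plan_cup_grasp(surface_temp_c=70, has_loose_lid=True))
```

Rules like these only cover the situations a programmer anticipated; a novel situation, like the crowded spiral staircase, has no pre-written rule, which is why future robots will need to learn responses for themselves.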
Compared to the earliest robots, today’s versions are incredibly capable. They can now be used in real-world situations, like speaking to people and guiding them to their destinations. This would simply not have been possible when the first robots were being produced. The next step is to take this further and create robots that learn, make decisions and identify new situations without prior programming. That is our ambition.