Idea: Full-body Myo for robotic avatar


by Jester


Imagine a humanoid, upright robot.
* The moving parts are a head, two arms, a hip, and two legs.
* The head contains a camera.
* The upper torso contains a sensor that measures the tilt of the body with respect to the ground.

Imagine a full-body suit.
* Myo sensors wrapped around the neck, shoulders, upper arms, lower arms, belly, thighs, and lower legs.
* Four modules, located on the front, sides, and back of the user's torso, deliver a buzzing sensation whenever the robot's body is not upright, with a strength correlated to the degree of tilt.
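The tilt-feedback idea above can be sketched in a few lines. This is a hypothetical illustration, not any actual suit firmware: the module names, the saturation angle, and the sign conventions are all assumptions.

```python
# Hypothetical sketch: map the robot's tilt (pitch and roll, in degrees)
# to vibration intensities for four haptic modules worn on the front,
# back, left, and right of the user's torso.
MAX_TILT_DEG = 30.0  # tilt at which the buzz saturates (assumed value)

def tilt_to_feedback(pitch_deg, roll_deg):
    """Return an intensity in [0, 1] for each module.

    Convention (assumed): positive pitch = leaning forward -> buzz the
    front module; positive roll = leaning right -> buzz the right module.
    """
    def intensity(angle):
        return min(abs(angle) / MAX_TILT_DEG, 1.0)

    return {
        "front": intensity(pitch_deg) if pitch_deg > 0 else 0.0,
        "back":  intensity(pitch_deg) if pitch_deg < 0 else 0.0,
        "right": intensity(roll_deg) if roll_deg > 0 else 0.0,
        "left":  intensity(roll_deg) if roll_deg < 0 else 0.0,
    }
```

The strength of the buzz grows linearly with the tilt until it saturates, which matches the "correlated to the degree of tilt" requirement.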

The suit can be used to tele-operate the robot as an avatar.
* Movements of the user's arms and legs are translated directly into movements of the robot's arms and legs.
* By including the movement of the hip and the feedback to the user, the robot can walk in a human-like manner while using human instincts (read: the computational capacities of the human brain) to keep its balance.
* The movement of the user's neck steers the movement of the robot's neck. The robot's cameras are connected to a virtual-reality display the user is wearing.
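The direct arm-and-leg translation above can be sketched as a one-to-one joint mapping. This is a hypothetical illustration: the joint names and mechanical limits are invented for the example, not taken from any real robot.

```python
# Hypothetical sketch: relay the user's measured joint angles to the
# robot one-to-one, clamping each angle to the robot's mechanical range.
ROBOT_JOINT_LIMITS = {  # assumed limits, in degrees
    "neck_yaw": (-80, 80),
    "shoulder_pitch": (-90, 170),
    "hip_pitch": (-30, 110),
    "knee_pitch": (0, 140),
}

def map_pose_to_robot(user_angles):
    """Clamp each measured human joint angle into the robot's range."""
    commands = {}
    for joint, angle in user_angles.items():
        lo, hi = ROBOT_JOINT_LIMITS[joint]
        commands[joint] = max(lo, min(hi, angle))
    return commands
```

Clamping matters because a human can reach poses the robot physically cannot; without it, direct translation would drive the servos against their stops.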

If the robot is used in a low-gravity environment (on the surface of Mars or the Moon, or in space), this gets even easier.
And with an additional Myo sensor on each foot (for the toe muscles), a zero-gravity robot with four arms, which effectively climbs like a monkey, becomes possible.
by @smngreenberg TL


Interesting idea! Unfortunately, though, I think you might run into a number of challenges with your setup. Placing a Myo on your legs will lead to unpredictable results (e.g. no gesture recognition), and it wouldn't fit around a thigh or a shoulder.

Regardless, I'm willing to bet you could find a subset of these ideas that would work. I would love to control a robot like that with Myo.

Cheers,
Scott
by Jester


Gesture recognition is one feature of the Myo, not its defining feature. The Myo is able to recognize which muscles are in use. Those muscles might be the finger muscles; they might be any muscles.

The Myo makes a measurement, nothing more, nothing less. WE tell it that particular results correspond to particular gestures. And in the next layer, WE tell it that particular gestures correspond to particular commands. WE decide what the Myo thinks it's doing. Using the Myo for other body regions is simply a matter of recalibrating its driver: from now on, these measurements no longer correspond to these hand gestures but have new meanings.

We don't even need gesture recognition. The flexing of certain muscles alone could be the signal that triggers a certain program. For example, the muscles in the thigh are used in different patterns depending on whether you sprint a short distance, run a long distance, or walk normally.
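The "flexing alone as a trigger" idea can be sketched without any gesture recognition at all: smooth the raw signal and threshold it. This is a hypothetical illustration; the window size and threshold are assumptions that would need tuning per muscle and per user.

```python
# Hypothetical sketch: trigger a command when the smoothed activity of
# an EMG signal crosses a threshold -- no gesture recognition needed.
def rms(window):
    """Root-mean-square amplitude of one window of EMG samples."""
    return (sum(s * s for s in window) / len(window)) ** 0.5

ACTIVATION_THRESHOLD = 40.0  # assumed; tuned per muscle and user

def detect_flex(window):
    """Return True when the muscle under the sensor is flexing hard."""
    return rms(window) > ACTIVATION_THRESHOLD
```

Distinguishing sprinting from walking would then be a matter of looking at the *pattern* of such activations over time rather than a single threshold crossing.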
by @smngreenberg TL


by Jester
The Myo makes a measurement, nothing more, nothing less. WE tell it that particular results correspond to particular gestures. And in the next layer, WE tell it that particular gestures correspond to particular commands. WE decide what the Myo thinks it's doing. Using the Myo for other body regions is simply a matter of recalibrating its driver: from now on, these measurements no longer correspond to these hand gestures but have new meanings.

Hi Jester,

Unfortunately, you may want to look through the SDK docs for a bit more information: the raw data from the EMG sensors is not available in the API, only discrete gestures.

Cheers,
Scott
by dalejandro89


Which is actually really sad, because I am a biometrics specialist, and one of the main things I wanted to do with the Myo was use the EMG to create a security protocol whereby a computer system can recognize, through the EMG readings, that the person wearing the Myo is a certified user. Creating a level of security for the Myo would be nice, especially if programs are developed that give your Myo a lot of permissions on your different systems. I am still going to try and see what I can do using the spatial information combined with the gestures, but it is going to be a much more difficult arena to work in.
by bengrossmann


Jester, you can accomplish a fair amount of what you presently envision, but you might want to incorporate different technologies. I work in motion capture and motion control from time to time, and I think that the "noise" from trying to run a complete robot with this present implementation of EMG-driven gesture translation would probably result in a robot that's difficult to control.

Since a lot of the muscle movements humans make are for basic things like "balance", translating those movements onto another object isn't necessarily what you need to do, when what you really want to translate is the intent. The muscle movements that I make as a human to move forward, or to remain balanced on a moving surface, aren't going to translate to a robot that isn't constructed using the same system of muscles and bones as a human, with the same mass and weight distribution.

For example, we sometimes use a suit that uses accelerometers to judge change in position on a person, and then we translate that motion change to a rig (skeleton) and then we translate that rig from a human to another object (dinosaur, robot, bird, whatever) and we get the translation of the desired intent from the human, to the new object.
This system is not without its problems, by the way, but I suspect that the data it's getting is probably “cleaner and faster” than trying to translate EMG to muscle-change to bone rigs to determine motion, and then using that to determine intent. At a minimum, it's probably a better intent-driven system anyway. (Here's a suit that does that: http://www.xsens.com/products/xsens-mvn/)
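The intent-driven retargeting Ben describes can be sketched very simply: instead of copying raw muscle data, copy each tracked segment's orientation onto the matching segment of a differently proportioned target rig. This is a hypothetical illustration; the segment names and mapping are invented for the example.

```python
# Hypothetical sketch of intent-style retargeting: segment lengths and
# masses differ between human and robot, but the *pose intent* -- the
# orientation of each limb segment -- can carry over directly.
SEGMENT_MAP = {  # assumed human-segment -> robot-segment mapping
    "upper_arm_r": "arm_link_1",
    "forearm_r": "arm_link_2",
    "head": "camera_mount",
}

def retarget(human_orientations):
    """Map per-segment orientations (e.g. Euler-angle tuples) measured
    on the human onto the corresponding segments of the target rig."""
    return {SEGMENT_MAP[seg]: ori
            for seg, ori in human_orientations.items()
            if seg in SEGMENT_MAP}
```

The robot's own controller is then responsible for balance given its own mass distribution, which is exactly the separation of "intent" from "muscle activity" argued for above.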

You could/should still be able to use the Myo for the hands, to control devices or functions that are more detailed than gross movement (arms, legs, head).

Worth looking into if that's what you really want to do?

-Ben

Last edit: Oct. 14, 2014 11:37 AM

by nestorcaro


by scott_greenberg
Unfortunately you may want to look through the SDK docs for a bit more information. The raw data from the EMG sensors is not available in the API, only discrete gestures.

Hello Scott

I'm quite sad to hear it's not possible to access the EMG. Nevertheless, I was wondering if we could trick the Myo in order to characterize these "unpredictable results", and attempt to assign commands to a leg?

Thank you
by @smngreenberg TL


Welcome to the forums!

Actually, this post is quite old and is now out of date. Access to the raw EMG data *is* now available, so feel free to try out leg gestures. No promises from our end (we haven't tried it), but I assume you may be able to do some simple things.
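With raw EMG access, a first leg experiment could look like the following sketch. This is a hypothetical illustration, not the actual Myo SDK API: it just shows how one might smooth an incoming 8-channel EMG stream into per-channel activity levels that could then be thresholded into commands.

```python
from collections import deque

# Hypothetical sketch (not the actual Myo SDK API): consume a stream of
# raw 8-channel EMG samples and keep a smoothed activity level per
# channel, which a leg-worn experiment could threshold into commands.
WINDOW = 20  # samples per smoothing window (assumed)

class EmgSmoother:
    def __init__(self, channels=8):
        self.windows = [deque(maxlen=WINDOW) for _ in range(channels)]

    def feed(self, sample):
        """sample: one reading per channel, e.g. 8 signed integers."""
        for win, value in zip(self.windows, sample):
            win.append(abs(value))

    def activity(self):
        """Mean rectified amplitude per channel."""
        return [sum(w) / len(w) if w else 0.0 for w in self.windows]
```

Whether the signal from a thigh or calf is clean enough for this is exactly the open question in the thread; the sketch only shows the processing side.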

Cheers,
Scott
by nestorcaro


Thank you
by ProteanVI


by @smngreenberg
Welcome to the forums! Actually this post is quite old and is now out of date. Access to the raw EMG data *is* now available, so feel free to try out leg gestures. No promises from our end (we haven't tried it) but I assume you may be able to do some simple things. Cheers, Scott

To confirm, the Myo armband now allows access to raw EMG data, correct?