Context-aware apps recognize objects touched by user
Researchers at Carnegie Mellon University, including Gierad Laput, a third-year doctoral student in human-computer interaction, have developed a new technique, called EM-Sense, that can detect whether a person is in contact with an electrical or electromechanical object and then identify that object. Incorporated into smartwatches, the technique could reveal far more about a user's activities than current technology does, making the user experience more personalized and accurate.
The technology relies on a sensor in the smartwatch that detects the object the wearer is touching. EM-Sense works on the principle that every electrical or electromechanical object — iPads, electric toothbrushes, electric drills, and printers, for example — emits electromagnetic noise, a characteristic signal in the radio frequency range. Different types of objects have distinct electromagnetic signatures, and when an object is touched, those signals flow through the body. Because the smartwatch sits against the wearer's skin, its sensor can pick up the signal and match it to a specific type of object. Two objects of the same type emit essentially identical signals, but pairing mechanisms can be applied to tell them apart. The technique even extends to large metallic objects like ladders: being metallic, they absorb electromagnetic noise from the objects around them, so the signal the watch detects when they are touched is unique to their surroundings.
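The matching step described above can be thought of as comparing a new reading's frequency-domain signature against a library of known signatures. The sketch below is a minimal illustration of that idea, not the researchers' actual pipeline: it reduces a sensor trace to a normalized magnitude spectrum and labels it with the nearest stored template. All function names, the template labels, and the synthetic "emissions" are hypothetical.

```python
import numpy as np

def em_signature(samples, n_bins=64):
    """Reduce a raw sensor trace to a normalized magnitude spectrum.

    The spectral shape, not the absolute amplitude, is what
    distinguishes one class of object from another here.
    """
    spectrum = np.abs(np.fft.rfft(samples, n=2 * n_bins))[:n_bins]
    norm = np.linalg.norm(spectrum)
    return spectrum / norm if norm > 0 else spectrum

def classify(samples, templates):
    """Return the label of the stored template nearest to the trace."""
    sig = em_signature(samples)
    return min(templates, key=lambda label: np.linalg.norm(sig - templates[label]))

# Toy "objects": each emits noise at one characteristic frequency
# (an illustrative stand-in for real electromagnetic emissions).
t = np.linspace(0, 1, 128, endpoint=False)
templates = {
    "toothbrush": em_signature(np.sin(2 * np.pi * 5 * t)),
    "drill": em_signature(np.sin(2 * np.pi * 20 * t)),
}

# A noisy reading taken while touching the "drill" still matches it.
reading = np.sin(2 * np.pi * 20 * t) \
    + 0.2 * np.random.default_rng(0).normal(size=t.size)
print(classify(reading, templates))  # → drill
```

A real system would classify richer features over many more object classes, but the nearest-template structure captures the basic recognition problem.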
The data obtained from such a smartwatch can be used to build context-aware apps, which would use the detected objects to plan and suggest activities for the user. As demonstrated in a video released by Disney entitled "EM-Sense: Touch Recognition of Uninstrumented Electrical and Electromechanical Objects," the technology could augment activities throughout the day. In the video, the smartwatch records a reading when the user steps on a scale and displays it alongside previous data, starts a timer when the user begins using an electric toothbrush, and surfaces reminders about appointments and meetings as soon as the user touches the handle of their office door. This technology could be the future of smartwatches, making them smarter than ever.
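The behaviors shown in the video amount to mapping each recognized object to an app action. A minimal sketch of that dispatch, with purely illustrative labels and handlers that are not part of EM-Sense itself, might look like:

```python
# Hypothetical context-aware dispatch: each object label the watch
# recognizes triggers a different behavior.
def log_weight():
    return "weight logged against previous readings"

def start_brush_timer():
    return "two-minute brushing timer started"

def show_agenda():
    return "today's appointments and meetings displayed"

HANDLERS = {
    "scale": log_weight,
    "toothbrush": start_brush_timer,
    "office_door": show_agenda,
}

def on_touch(detected_object):
    """Run the action for a recognized object; ignore unknown ones."""
    handler = HANDLERS.get(detected_object)
    return handler() if handler else None

print(on_touch("toothbrush"))  # → two-minute brushing timer started
```

The table-driven design keeps recognition and behavior separate, so new object-action pairs can be added without touching the sensing code.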
When asked about the future plans for the project, Laput said, “[In the future], we’d be happy if giant manufacturers pick the project up. It’s a patent-pending project, so if a large company could integrate this into existing technology, we’d love to see it out there in the real world.”
As of now, the technology is limited to electrical and electromechanical objects. The next step in the project is detecting objects that don't emit electromagnetic noise, such as wooden, plastic, or cloth objects, a research problem Laput may take on next.
On collaborating with Disney, Laput said, "Disney is an awesome place. We are basically Imagineers, so we just imagine. We figure out cool things that can enhance people's experiences. And I think that's a very fun agenda."
Along with Gierad Laput, the research team comprises Alanson Sample, a research scientist at Disney; Jack Yang, also from Disney; Chris Harrison, assistant professor of human-computer interaction; and Robert Xiao, a Ph.D. student in HCII. The researchers presented EM-Sense at the User Interface Software and Technology Symposium, held in Charlotte, North Carolina, Nov. 8–11, where their talk won the "Best Talk Award." "We did live demos and people really enjoyed it. People loved it," Laput said.
As technology becomes a larger part of human life, the study of human-computer interaction becomes increasingly relevant and important. Laput emphasized the importance of research in this area, saying that researchers today are beginning to understand that "focusing on the human as opposed to just focusing on the algorithms is actually a problem worth solving."