@hannusalonen tweeted this article this morning which was in the Financial Times. Including the article b/c it was behind an annoying signup/registration process. http://tinyurl.com/9hmpj4
"The Tog: Mouse users are "little more than cavemen, running around pointing at symbols and 'grunting' with each click". http://bit.ly/yP1M"

Gestures will force the mouse into retirement
By Jessica Twentyman
Published: September 17 2008 03:00 | Last updated: September 17 2008 03:00

At almost 30 years old, is the computer mouse ready for retirement? Certainly, a growing band of human-computer interaction (HCI) specialists believe so. The crude language of "point and click", they argue, seriously limits the "conversations" we have with our computers.

Among them is Bruce "Tog" Tognazzini, a veteran HCI expert who joined Apple in 1978 as its 66th employee and founded the company's Human Interface Group during his 14 years there. These days, after spells at Sun Microsystems and online healthcare company WebMD, Mr Tognazzini is a respected consultant, author and speaker with usability company the Nielsen Norman Group.

"In many ways, our continued reliance on the computer mouse reduces us to little more than cavemen, running around pointing at symbols and 'grunting' with each click," he says. "A revolution is long overdue, because we need more sophisticated tools that will allow us to increase our vocabulary way beyond that caveman grunt." Plus, the link between the computer mouse and cases of repetitive strain injury (RSI) is hardly an argument in its favour, he adds.

Luckily, he says, those "more sophisticated" tools are right in front of our faces and we already know how to use them. They are, in fact, our fingers. "Look at the facts: we've typically got 10 of these 'tools'; they move in a multitude of different ways; and gestural language, which came long before verbal language, is an established and intuitive form of self-expression. Even primates can be trained to express needs and intentions using their fingers," he points out.

What has historically been lacking is the ability of computers to read and understand our gestures - but that is changing very quickly.
In fact, real-time video interpretation and inertial sensors are already being used to recognise facial expression and physical movement in a number of consumer technology devices, says Steven Prentice, an analyst with IT market research company Gartner. He traces the roots of this migration to two recent events: the launch of the Nintendo Wii games console in 2006 and of the Apple iPhone in 2007.

Through clever use of accelerometers and optical sensor technology, the Wii Remote (or "wiimote") is already enabling millions of people to practise their golf swings, play rock guitar or swordfight with imaginary enemies. And since the iPhone was launched, strong sales and high user satisfaction have reinforced just how powerful and intuitive a multitouch interface can be.

These early announcements have been followed by a string of others in consumer technology. In recent months, Panasonic, Sony and NEC have all demonstrated applications that use facial and movement recognition. These include, for example, video displays from Panasonic that can identify users from their faces, serve up content choices based on their individual preferences, and allow screen control by hand gestures.

It's easy for business leaders and chief information officers to dismiss such trends in consumer preference as minimally relevant to enterprise computing - but that's a "dangerous oversimplification", warns Mr Prentice. "Not all consumer-targeted technologies find their way directly into enterprise IT environments," he concedes, "but the growing adoption of these technologies by individuals in their 'personal infrastructures' is leading to increasing frustration and dissatisfaction with the constraints and restrictions the corporate IT environment often imposes on users."

Fortunately, it's not just the consumer technology firms that have their eyes on gestural technologies.
At Accenture Technology Labs, research director Kelly Dempski has a long track record in exploring how they can be used in business applications, most recently concentrating on building multi-touch, interactive display walls. Accenture has installed such walls, for example, in O'Hare International Airport in Chicago and John F. Kennedy Airport in New York. Consisting of multiple screens housed in giant custom frames, they use graphics and touch-screen technology to allow passengers to check the weather at their destination, read the latest news from CNN, or find out how their team scored while they were in flight, simply by touching areas on the screen.

This technology, says Mr Dempski, could have equally valuable back-office applications - presenting vital internal data from back-end enterprise resource planning (ERP) systems to employees in a control room at a utility firm, for example. "The aim is to create a mode of interaction that requires zero training but offers a high degree of interactivity," he says.

At Microsoft, meanwhile, researcher Desney Tan is taking HCI to new levels: muscle-computer interaction. Mr Tan and his colleagues, alongside researchers from the Universities of Washington and Toronto, have developed an armband worn on the forearm that recognises finger movements by monitoring muscle activity. They have called it MUCI, which stands for muscle-computer interface, and its aim is to make controlling computers and gadgets easier in situations where the user is otherwise engaged - for example, when driving a car or in a meeting.

"The human body is a prolific signal-generator," he says. "My work is focusing on the potential of tapping into the electromagnetic signals that the brain sends to muscles, which has the potential to harness a whole range of subtler movements than simply a press or a pinch on an interactive screen."
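To make the idea concrete: decoding a gesture from muscle activity amounts to mapping the pattern of signals across several forearm sensors to a finger movement. The toy sketch below is purely illustrative and is not the actual MUCI system (which uses trained classifiers on real electromyography features); it simply picks whichever simulated sensor channel shows the most energy.

```python
import math

# Hypothetical illustration -- NOT the real MUCI algorithm. A real
# muscle-computer interface trains a classifier on many EMG features;
# this toy version just maps the most active of five simulated forearm
# sensor channels to a finger, to show the basic shape of the problem.

CHANNEL_TO_FINGER = {0: "thumb", 1: "index", 2: "middle", 3: "ring", 4: "pinky"}

def rms(samples):
    """Root-mean-square amplitude of one channel's signal window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def classify_gesture(channels):
    """Pick the finger whose sensor channel shows the most muscle activity."""
    energies = [rms(ch) for ch in channels]
    return CHANNEL_TO_FINGER[energies.index(max(energies))]

# Simulated signal window: channel 1 (index finger) is firing hardest.
window = [
    [0.1, -0.1, 0.2],   # thumb
    [0.9, -1.1, 1.0],   # index
    [0.2, -0.2, 0.1],   # middle
    [0.1, 0.0, -0.1],   # ring
    [0.0, 0.1, -0.1],   # pinky
]
print(classify_gesture(window))  # -> index
```

The hard part the researchers describe - calibration, pressure levels, and 3D gestures - is exactly what this naive energy comparison cannot do.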
MUCI currently works extremely well in situations where major arm movements are constrained and finger gestures are made on a flat surface, he says. Tests on volunteers have shown that, after calibration, the system can recognise the position and pressure of all 10 digits with 95 per cent accuracy. "Where we want to take this research next is capturing gestures made in three-dimensional space," he says, adding that the ability to do that and still recognise gestures with a fair degree of accuracy will start to open the door to a huge range of potential applications - even recognising and translating sign language used by deaf people.

Naturally, applications based on gestural computing place a huge strain on underlying hardware, which is forced to process a larger volume and wider range of more subtle signals. Among chip manufacturers, this is forcing a shift in focus from traditional central processing units (CPUs) to the graphics processing units (GPUs) that, up to now, have primarily been used in gaming and virtual world environments.

In essence, a GPU is a dedicated graphics rendering device for personal computers and games consoles that is very efficient at manipulating and displaying computer graphics. More important, the ability to process information in a highly parallel way makes GPUs far more effective at handling a large range of complex algorithms than CPUs, which process them in a linear, one-at-a-time fashion, explains Richard Huddy, worldwide head of developer relations at chip company Advanced Micro Devices (AMD).

"Say you've got an application that uses a webcam to capture shots of a human subject and analyse their gestures. It will need to figure out the relationship between each frame and its predecessor in, perhaps, one-sixtieth of a second, and there's a lot of maths involved in doing that in a smooth and uninterrupted way. A GPU will do that much, much better," he says.
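Mr Huddy's webcam example hints at why this workload suits a GPU: the simplest way to relate one frame to its predecessor starts with a per-pixel difference, and every pixel can be computed independently of every other. The sketch below (illustrative only - real pipelines use optical flow and run on GPU frameworks, not pure Python) shows that embarrassingly parallel structure.

```python
# Why gesture analysis parallelises so well: each pixel's difference
# depends only on that pixel in the two frames, so a GPU can compute
# thousands of them simultaneously, where a single CPU core works
# through them one at a time.

def frame_diff(prev, curr):
    """Per-pixel absolute difference between two greyscale frames."""
    return [[abs(curr[y][x] - prev[y][x])     # each pixel independent:
             for x in range(len(prev[0]))]    # ideal for parallel hardware
            for y in range(len(prev))]

def motion_score(prev, curr, threshold=30):
    """Fraction of pixels that changed noticeably between two frames."""
    diff = frame_diff(prev, curr)
    changed = sum(1 for row in diff for d in row if d > threshold)
    return changed / (len(diff) * len(diff[0]))

# Toy 2x2 frames: one of the four pixels brightens sharply.
prev = [[10, 10], [10, 10]]
curr = [[10, 200], [10, 10]]
print(motion_score(prev, curr))  # -> 0.25
```

At 60 frames a second on a megapixel image, that inner loop runs 60 million times a second - the kind of uniform, independent arithmetic GPUs are built for.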
In order to get its slice of the gestural computing market, AMD is already locked in a pitched battle with rivals Intel and Nvidia to deliver advanced GPU capabilities to hardware manufacturers as soon as possible, with a slew of new product announcements planned for 2009.

All this means that businesses need to be prepared. No one is predicting the instant demise of the computer mouse, and certainly not of the keyboard as a text-entry tool. "Despite the many disadvantages of a design nearing its centenary, nothing else currently comes close to the functionality of the conventional tactile keyboard," says Mr Prentice of Gartner.

But there will be a "strong and unstoppable" trend towards a control interface for technology that is based on simple human gestures, rather than on indirect manipulation via physical objects such as a mouse, he predicts. He says that revolution is three to five years off for mainstream business, but it's not too soon for business leaders to "suspend their natural scepticism" and start to think about how gestural computing might be used to address their organisations' most intractable user interface issues.

"The phrase 'paradigm shift' is an overused one, but it's not often that such fundamental elements of the computer interface change, and the opportunities for enterprises able to capitalise on these changes will be substantial," he says.

~ will

"Where you innovate, how you innovate, and what you innovate are design problems"
---------------------------------------------------------------------------------------------
Will Evans | User Experience Architect
tel: +1.617.281.1281 | w...@semanticfoundry.com
aim: semanticwill
gtalk: semanticwill
twitter: semanticwill
skype: semanticwill
---------------------------------------------------------------------------------------------