sorry for mutilating that subject line!

Mark Ahlenius wrote:
Hi,

I am curious whether this group has insight or experience into good methods for documenting multimodal designs. Specifically, the type of multimodal designs I am referring to would include speech (voice recognition) along with other modalities such as touch screen, keypad, and of course a GUI. Allowing the end user to select (and switch to) the best type of interaction with a device for their current context can have value IMHO, but documenting such designs is pretty cumbersome. I've been involved with some designs using call flows and state tables plus textual docs, but it quickly becomes quite complex. Any thoughts on this matter?
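
For concreteness, here is a minimal sketch of one way the kind of state table mentioned above can be encoded, so that a single row captures which modality triggers a given transition. All state, modality, and event names below are hypothetical placeholders, not from any actual design:

from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    next_state: str
    action: str

# Key: (current_state, modality, event) -> Transition.
# Modalities and events here are illustrative only.
STATE_TABLE = {
    ("idle",      "voice",  "say_search"):  Transition("listening", "open_mic"),
    ("idle",      "touch",  "tap_search"):  Transition("typing",    "show_keyboard"),
    ("listening", "voice",  "utterance"):   Transition("results",   "run_query"),
    ("listening", "touch",  "tap_cancel"):  Transition("idle",      "close_mic"),
    ("typing",    "keypad", "enter"):       Transition("results",   "run_query"),
}

def step(state: str, modality: str, event: str) -> Transition:
    """Look up the transition for an input arriving on any modality."""
    try:
        return STATE_TABLE[(state, modality, event)]
    except KeyError:
        # Unhandled (state, modality, event) combinations stay put.
        return Transition(state, "ignore")

if __name__ == "__main__":
    # The user starts by voice, then switches to touch mid-task.
    t = step("idle", "voice", "say_search")
    print(t)   # Transition(next_state='listening', action='open_mic')
    t = step(t.next_state, "touch", "tap_cancel")
    print(t)   # Transition(next_state='idle', action='close_mic')

One appeal of keying the table on (state, modality, event) is that modality switching falls out for free: every row already says which input channels are live in a given state, which is exactly the information that gets lost when call flows and state tables are kept in separate documents.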

thanks all,

'mark
________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... disc...@ixda.org
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help

________________________________________________________________
Welcome to the Interaction Design Association (IxDA)!
To post to this list ....... disc...@ixda.org
Unsubscribe ................ http://www.ixda.org/unsubscribe
List Guidelines ............ http://www.ixda.org/guidelines
List Help .................. http://www.ixda.org/help

Reply via email to