At 02:00 AM 12/12/2007 -0800, Mimi Yin wrote:
I see our biggest
challenge as articulating what that mindset is to users in a way they
can understand at a gut-level. I have a feeling that it's not in any
of the ways we've tried to articulate it thus far.

Right - the trick is that a "mindset" isn't something that exists operationally in brains, any more than a program's requirements definition is written into its source code.

In order to use a program -- or any tool, including one's own body parts -- the brain needs to construct a goal-action or "forward/backward prediction" model. That is, "If I want this, these are the actions to take" (backward model) and "If I do this, this will happen" (forward model).
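As a toy sketch (my illustration, not anything from the original post), the two directions can be thought of as a mapping from actions to predicted outcomes, and its inverse from desired outcomes back to actions:

```python
# Hypothetical forward model: "If I do this, this will happen."
forward_model = {
    "press_power_button": "device_on",
    "turn_volume_knob": "louder",
}

# Backward model: "If I want this, these are the actions to take."
# Here it is simply the forward model inverted.
backward_model = {outcome: action for action, outcome in forward_model.items()}

def predict(action):
    """Forward prediction: what will this action do?"""
    return forward_model.get(action, "unknown")

def plan(goal):
    """Backward prediction: which action achieves this goal?"""
    return backward_model.get(goal, "no known action")

print(predict("press_power_button"))  # device_on
print(plan("louder"))                 # turn_volume_knob
```

The action and outcome names are invented for illustration; the point is only that usable "gut-level" knowledge runs in both directions.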

People learn everything, from how to twitch their left big toe to how to be a professional photographer, using this identical mechanism. This is how "gut-level" understanding literally operates.

Unfortunately, these understandings cannot be directly constructed verbally; they have to be unpacked into a series of experiences in order to be communicated -- *unless the user already has the relevant experiences*, in which case an understanding can be *assembled* verbally.

A few ways to convey the relevant experiences or assemble them:

* Encode explicit goal information into the program ("opinionated" - works by making goals explicit and letting behavior be discovered)

* Make the behavior of the program simple enough to be trivially predicted by the user, without them having to think about it ("tool" - leverages experience of similar systems and relies on user for assembling to goals)

* Create a series of structured experiences so the user incrementally discovers behavior and updates their mental model. This can be done via books, tutorials, one-on-one training, etc., but the experiences have to be sequenced and structured if the model isn't clearly discoverable by either naive or highly experienced users.

In other words, if you don't want to either simplify the behavior or make the goal-action links explicit, the only thing left is teaching.

I think it's important to reiterate: conveying only a usage "philosophy" or "mindset" *cannot* give anyone the ability to use the software. They *must* build a 2-way predictive model in their mind, even if the way you give it to them is by doing things with the program and talking about it, or showing pictures and instructions in a book, etc.

By the way, to be clear, when I call something "simple", I mean "predictable using fewer mental variables". The more state-dependent the system behavior, the harder it is to construct a forward/backward model in your head -- or more precisely, in your gut.
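A minimal sketch of that definition (my own hypothetical example, not from the post): a stateless control can be predicted with zero extra mental variables, while a mode-dependent one forces the user to track hidden state before they can predict anything.

```python
def stateless_button(action):
    # "Simple": same input always produces the same output, so the
    # forward model needs no extra mental variables.
    return "window_closed" if action == "click_close" else "no_op"

class ModalButton:
    # State-dependent: the same click does different things depending
    # on a hidden mode, so the user must track that mode to predict
    # the outcome. (Mode names here are invented for illustration.)
    def __init__(self):
        self.mode = "normal"

    def click_close(self):
        if self.mode == "normal":
            return "window_closed"
        else:  # e.g. an "overlay" mode
            return "overlay_dismissed"

b = ModalButton()
print(stateless_button("click_close"))  # window_closed, always
b.mode = "overlay"
print(b.click_close())                  # overlay_dismissed
```

With the stateless version, the gut can build a reliable forward model from one or two experiences; with the modal version, every prediction is conditional on state the user may not even know exists.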

For example, Chandler's overlay mechanism -- and its non-orthogonal interplay with filter/view selection -- is incredibly frustrating to me as a user, because I don't know what the heck anything is going to do. If I stop and think about it, I can -- usually -- figure it out, but if I just try to *use* it, it constantly surprises me. So my attempts at getting into flow are frustrated.

The thing is, you *have* to have, as you say, a "gut-level" understanding. Problem is, guts usually have to learn complex things from simple (and ideally independent) pieces. But Chandler is not "simple" in this sense.

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Open Source Applications Foundation "Design" mailing list
http://lists.osafoundation.org/mailman/listinfo/design
