Users accomplish goals in a series of mental steps. Each step involves a mode of thinking. And each mode of thinking requires certain bits of information, but not others. Our apps must help users accomplish definite goals. But our apps must also take into account the mental steps the user uses, the mental modes these steps involve, and the information required for these modes. These steps should govern the organization of our interfaces.
Each mental mode should correspond to either a screen or a window. Elements (buttons, scroll bars, etc.) and information that go along with a certain mental mode should be gathered together into one screen (or window). And these steps/screens should then be stacked in space (à la Jesse James Garrett). The order in which these steps are stacked hinges upon the overarching goal the user is trying to accomplish.
Let’s give a concrete example. When I use iMovie, I have a definite goal: to awe my family, various film festivals, and, indeed, nothing less than the entire world with my cinematographic genius. I have an idea for an art film about black blocks from space that teach humans how to turn into glowing star-babies. The purpose of iMovie should be to facilitate these delusions of grandeur.
But: this goal requires many steps, each of which involves a distinct mode of thinking. First, I must conjure up some video material. Second, I need to organize this material, clipping and trimming it to perfection. Third, I must fudge with the soundtrack. Fourth, I need to name and save my masterpiece. And, fifth, I must share it with various family members, film fanatics, studio execs, and award committees.
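If you like, you can picture this mode-per-screen idea as a data structure. Here's a minimal sketch in TypeScript — the names (`Screen`, `makeMovie`, `screenFor`) are hypothetical, not from any real framework, and the five entries just mirror the five steps above:

```typescript
// One screen per mental mode: each screen bundles only the
// elements and information that mode of thinking needs.
interface Screen {
  mode: string;       // the mental mode this screen serves
  elements: string[]; // controls relevant to that mode only
}

// A goal is an ordered stack of screens, sequenced by the
// mental steps the user takes toward the overarching goal.
const makeMovie: Screen[] = [
  { mode: "capture",       elements: ["record button", "camera preview"] },
  { mode: "organize",      elements: ["clip bin", "trim handles", "timeline"] },
  { mode: "score",         elements: ["audio tracks", "volume controls"] },
  { mode: "name-and-save", elements: ["title field", "save button"] },
  { mode: "share",         elements: ["export options", "recipient list"] },
];

// Navigation simply walks the stack: one mode, one screen, in goal order.
function screenFor(step: number): Screen {
  return makeMovie[step];
}
```

The point of the sketch is the constraint it makes visible: a record button has no business appearing in the `share` screen, because sharing involves a different mode of thinking.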
Budget constraints in mind, I decide to film my vision using my iPad. iMovie has a button that takes you straight to the iPad’s video camera. Note: the camera has its own mode, its own screen, and only information relevant to its filming function.
I then go to another mode on another screen for editing.