Monday, December 18, 2006

Progressive User Adoption: More thoughts on building an adoption profile--
Before moving off the topic of laying out a suggested progression for user adoption, I would like to discuss the two main progressive adoption dimensions: efficiency improvement and feature adoption. In essence, efficiency improvement says, "There's a better way to do what you're doing." Feature adoption says, "You can do more things than you're doing."

Efficiency Improvement
The biggest challenge you face with efficiency improvement is that you are coming in low on the benefit/effort ratio. By that I mean that the user is already getting the task done and you're trying to get the user to invest immediate time and energy for a longer term gain. This ranks right up there with telling overweight people they need to diet and telling smokers they need to quit. In other words, don't expect the user community to hoist you on their shoulders and carry you in a display of triumphant gratitude. In this dimension, we are going to want to look at strategies that minimize the adoption effort, possibly embedding shortcuts and some degree of functionality within the user assistance itself.

Feature Adoption
Depending on your business model, increased feature adoption could be a real sweet spot for you. Even though there is still an increased effort required on the part of the user, the one thing you have going for you is being able to offer benefits the user is not currently getting. The dominant strategies here will be to show (illustrate or demonstrate) the new state and communicate the ease with which the user can get there (and cancel back out).

Considerations
When laying out your adoption profile, think about which dimension you will be taking the user along. If you are only going to improve efficiency and the user does not do that task very often, you might just want to let that sleeping dog lie, or at most give it a gentle nudge and move on.

The better payoff is along the feature adoption dimension; spend your time and creative energy there.

Wednesday, December 13, 2006

Progressive Adoption Principle 2: Establish an Adoption Profile--
The secret to progressive adoption is to stop thinking of adoption as a Yes/No state on the part of the user and to think of it instead as incremental adoptions over a period of time. Map out the basic core features that would represent minimal adoption and apply principle 1 to those ("don't get in the way"). Next, decide what levels logically lead the user through a comfortable progression pattern over time.

For example, in online bill pay, we decided it was too much to ask a user to start by turning off paper bills and having the system pay electronic bills automatically. They first had to build trust in the system. The best progression seemed to be:
  1. Get the bill in the mail and pay manually online.
  2. Authorize getting the bill electronically but still pay manually online.
  3. Authorize routine bills to be received electronically and paid automatically online.

Two elements you should consider when planning a progression profile are:

  • Level of trust required. Plan a progression that allows the user to build trust with the system. Trust can mean a lot of things: trust you with my data, trust you with my SSN, trust you with my credit card number, and so on. It can also mean I trust that all this work is going to get me what I want. For example, MS Excel's Chart Wizard lets you see how your data will be graphically displayed at each step in the process.
  • Level of skill required. Move the user along incrementally from basic skills to get core value to more advanced skills to leverage greater value. For example, MS Word starts with a default template in place. Using templates should not be an initial requirement, but should be planned as a step that happens after the user has made the initial adoption. Steps along the skill dimension should be sized for easily managed progression. Don't make the user have to learn a lot to get more value. As long as the perceived increase in value is proportional to perceived effort to get there, you have a workable progression profile.
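
To make this concrete, here is a minimal sketch (in Python) of a progression profile captured as data for planning purposes. The level names, trust and skill notes, and benefits simply restate the bill-pay example above; nothing here describes an actual product's implementation.

```python
from dataclasses import dataclass

@dataclass
class AdoptionLevel:
    """One step in a progression profile (illustrative fields only)."""
    name: str
    new_trust_required: str   # what new trust the user must extend at this step
    new_skill_required: str   # what new skill the user must learn at this step
    added_benefit: str        # what the user gets for taking the step

# The online bill-pay progression described above, expressed as a profile.
bill_pay_profile = [
    AdoptionLevel("Receive paper bill, pay manually online",
                  "trust the site with a payment", "schedule a one-time payment",
                  "no stamps, no checks"),
    AdoptionLevel("Receive bill electronically, pay manually online",
                  "trust electronic delivery of the bill", "authorize an e-bill",
                  "no paper to file"),
    AdoptionLevel("Receive routine bills electronically, pay automatically",
                  "trust the system to pay without review", "set up auto-pay rules",
                  "bills pay themselves"),
]

for i, level in enumerate(bill_pay_profile, start=1):
    print(f"{i}. {level.name}: requires '{level.new_skill_required}', "
          f"builds on '{level.new_trust_required}'")
```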

I will discuss concrete user assistance techniques that can be applied to support progressive adoption over my next several blogs.

Stay posted.

Monday, December 11, 2006

Principles of Progressive Adoption--
In my last blog entry I introduced the concept of progressive user adoption, moving a user further along in terms of the frequency of use, number of features used, or the depth of functionality (moving from basic to advanced). This week I will start to explore principles of progressive adoption, especially where user assistance can be involved.

Principle One: Don't interfere with core functionality.
Keep the basic tasks (the prime reason for the user being in your application) easy to do. This could be Clippy's fatal flaw—he intrudes when I don't need him, forcing me to get off task to dismiss him. His lame attempts to be precious do not make me want to kill him more, just kill him more slowly and in imaginative ways.

How do you apply this principle? For one, when the user assistance intervenes, make the intervention easy to ignore without action. If you force the user to dismiss the intervention, you are detracting from the core experience. Microsoft Project does this fairly well. For example, if you add a resource to a task, an icon lets you know there is a tip. If you click the icon, a popup opens asking whether you want to increase the work or shorten the duration. Based upon what mode you are in, it has already made the appropriate decision and marked it as the default choice. If you just plow ahead and keep working, the popup goes away and the default choice stays in effect. So as a user, I get two opportunities to ignore the progressive help. I learn to ignore the tip icon when I know what the tip is about, and I can just keep working when I get the tip without having to select the default choice. I do have to click back into the desktop, however; it would be even better if I did not have to do even that.
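
As a rough sketch of that behavior (not Microsoft Project's actual implementation; the message, choices, and class name are invented for illustration), the essential point is that a default decision is already in effect, so ignoring the tip costs the user nothing:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class IgnorableTip:
    """A non-modal tip: a sensible default is already in effect; ignoring it costs nothing."""
    message: str
    choices: List[str]
    default: str
    selection: Optional[str] = None

    def __post_init__(self):
        # The appropriate decision is pre-made, so the user can just keep working.
        self.selection = self.default

    def open(self) -> str:
        """User clicks the tip icon: show the question and the pre-selected default."""
        return f"{self.message} Options: {self.choices} (default: {self.default})"

    def choose(self, choice: str) -> None:
        """User explicitly overrides the default."""
        self.selection = choice

# Hypothetical scenario, loosely modeled on the resource-assignment tip described above.
tip = IgnorableTip(
    message="You added a resource to this task. Increase total work or shorten duration?",
    choices=["increase work", "shorten duration"],
    default="shorten duration",
)
# The user never opens the tip and just keeps working: the default stays in effect.
print(tip.selection)  # shorten duration
```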

Probably one of the most important dynamics in progressive adoption is "readiness": the user must be in a state where they are ready to accept the change. Until then, coaching or coaxing the user to a new level of product use can detract from the quality of the core experience, and you end up losing the user [insert clever fishing metaphor here—it's early in the morning and I'm too tired to do it myself].

So the bottom line in progressive user adoption is to measure all interventions against the yardstick of "Does this interrupt the core task?" If the answer is yes, change the intervention.

Thursday, December 07, 2006

Progressive User Adoption--
I'd like to start a series of entries about the role that user assistance can play in what I call progressive user adoption. User adoption describes the rate at which users accept a new product or new technology. People who discuss user adoption usually mean it in the sense of the initial decision to accept or reject the technology or product, as well as the ongoing reinforcement of that decision. By progressive user adoption, however, I will be focusing on the tendency (or reluctance) of users to progress more deeply into the features, functionality, or frequency with which they use a new technology or product. Of particular interest is why users' adoption curves typically plateau at a suboptimal level.

Let me start with some concrete examples of what I'm talking about. Whenever any of us starts a new job, one of the first questions we ask about the phone system is, "How do I dial out?" We learn what we have to learn in order to make the initial adoption decision. Fast forward several months (or years) and see if that person has learned to do a 3-way conference call (or in my case, even make a simple transfer). In many cases the answer is NO.

And how often have you had to edit a document someone did in Word, only to find that no style tags have been applied? All layout and typographic effects have been done with tabs, paragraph returns (sometimes one between paragraphs, sometimes two or three), and by manually bolding and resizing text to create headings. Why was this person not using the style tag feature that would have made the process so much easier and the output more consistent?

In short, why do people quit learning before they're done learning what they need to know?

Why Care?
Well, first off, why do we care about this premature leveling of the learning curve? If they've bought the software, why should we care how well they use it?

As is so often the case, the first question you need to ask is, "What's your business model?" More and more, due to e-commerce on the Internet, revenue around a product is transaction based. For example, I worked for a company that provided online bill pay software and the processing of the payment that went on behind the scenes. It made money every time someone paid a bill with its product. The more bills someone paid, the more money the company made. Transaction-based products have a lot of skin in the game around progressive user adoption.

Of course, there is e-commerce, where user activity is directly related to revenue. Do you think Amazon.com wants me to stop shopping on their web site after I've bought my books? Do you think they would like me to progressively adopt them as my music and electronic gadget supplier as well?

And even non-transaction based applications have an interest in my progressive adoption of the features that give their product a competitive advantage or increase my satisfaction and loyalty. Nobody uses WordStar anymore, not because it did not produce good looking documents, but because it was displaced by GUI-based word processors that made it easier to adopt advanced features, such as style tags, automatic headings, etc.

And as we see Google and Microsoft moving into the web app space where revenue will be tied into usage, progressive user adoption will become critical in those kinds of applications as well.

So What's the Problem?
Having been involved in online banking and online bill pay applications, I have been very interested in understanding why users' adoption stops at less than optimal utilization of a product. The following explanation is based on observations made in formal usability tests, focus group research, and contextual studies, and is supported by published research such as Everett Rogers' seminal work (Rogers, E. M. Diffusion of Innovations, 4th ed. New York: Free Press, 1995) and an interesting model called the Technology Acceptance Model (Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. "User Acceptance of Computer Technology: A Comparison of Two Theoretical Models," Management Science, 35, 1989, pp. 982-1003).

In short, people quit learning before they're done learning for the following two reasons:

  • They shift from a learning/exploration mode to a task orientation mode. When users can meet their initial goals, they stop exploring. Instead, they focus on doing what they came to do, e.g., paying bills or writing a report. In other words, they don't look for ways to do what they don't know they could do. I discuss this problem in general, along with why users abandon help procedures, in a proceedings paper called "Procedures: The Sacred Cow Blocking the Road?"
  • A reduced benefit/effort ratio. The benefit/effort ratio is less attractive for incremental improvement than for initial adoption. There is a big difference between “If I don’t learn how to make a phone call, I cannot get in touch with my essential contacts.” and “If I don’t learn how to transfer a call, I can’t pass an outside caller on to someone else in my organization.” The benefit side of the ratio is often diminished in the eyes of the user by existing alternatives that allow the user to reach a goal, although in a less efficient manner. In the call transfer example, the user can always give the outside caller the third party’s extension and ask them to redial that party directly.
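
To make the ratio concrete, here is a back-of-the-envelope sketch; the scores are invented purely for illustration, not measured data:

```python
def benefit_effort_ratio(benefit: float, effort: float) -> float:
    """Crude planning heuristic: perceived benefit divided by perceived effort to learn."""
    return benefit / effort

# Invented, relative scores (1 = trivial, 10 = large); not measured data.
dial_out = benefit_effort_ratio(benefit=10, effort=2)       # can't reach essential contacts without it
transfer_call = benefit_effort_ratio(benefit=3, effort=2)   # workaround exists: read out the extension

print(f"Initial adoption (dial out):    {dial_out:.1f}")
print(f"Incremental feature (transfer): {transfer_call:.1f}")
# The incremental feature scores lower largely because the existing workaround
# shrinks its perceived benefit, which is the point made above.
```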

What's Next?
I think that user assistance can have a positive effect on progressive user adoption if designed to do so. It can also have catastrophic consequences if done poorly. (I'm not making any specific references to Clippy here; I'm only saying.)

My next series of blogs will continue to explore how user assistance can be an asset to a company where progressive adoption advances the business model.

Stay posted.

Wednesday, December 06, 2006

Given-New vs. Analogy--
Today's blog is for die-hard writers who get a buzz from talking about rhetoric. No tools or technology today; I'm going through enough of that on the day job :-)

I was structuring a formal analogy the other day, you know--A:B::C:D (read A is to B the way that C is to D), and wondered what the preferred sequence should be. Should the new relationship be in the AB slot with CD being the relationship the reader is already familiar with, or should AB be the familiar relationship and CD be the one that is new to the reader?

I've always been a big fan of using a Given-New rhetoric when trying to explain complicated material. In that scheme you make the topic (subject) of the sentence some concept the reader is already familiar with, and you introduce the new concept in the predicate. Then the next sentence can take the predicate from the previous sentence and make it the subject, since it has now become a "given." The technique allows you to build up a knowledge base, so to speak, within the reader in small, manageable steps.

For example, let's say you had to explain DITA to a reader base for whom it would be a new concept. Watch how in the following text, the subjects of the sentences are concepts that are already familiar to the reader. Pay particular attention to the dance that ensues from a concept going from the predicate position in one sentence (where it was the "new" concept) to being the subject in the next sentence (because it is now a "given" concept). The following explanation assumes that the concept of structured writing is a familiar one to the reader.


A form of structured writing that has gained much popularity in recent years is DITA. DITA stands for Darwin Information Typing Architecture and is an XML-based approach to authoring. XML is the markup language that enables authors to share content across different platforms and among different documents.
You get the idea. This is a blog and that was a quick example, so don't edit me too critically on it. Like any horse, Given-New can be ridden to death, and its overuse can leave your discourse sounding singsong and feeling mechanical. Nonetheless, I have found that it is often a good technique for first drafts of paragraphs where I feel I have to move the readers across a rather large gap between what they already know and what they need to know.

But that logic didn't "feel" right to me when trying to put an analogy together; the order of New-Given seemed better within that device. For example, let's say I am trying to explain DITA topics to someone who is already familiar with Information Mapping. Which of the following analogies works better?
  • Topics in DITA are similar to maps in Information Mapping.
  • Maps in Information Mapping are similar to topics in DITA.

I think the first works better even though it leads with the new concept and relates it to a given. Maybe that is because, in context, it would appear in a discussion about topics, and, at least in that context, topics would be the given.

But beyond that, I think there is something to be gained in an analogy by posing the strange relationship first and then grounding it in the familiar. It seems to be consistent with a principle I have noticed in instructional design: Students have no way to process a solution until they experience the problem. In other words, it's best to raise the question before providing the answer as an isolated fact.





Thursday, November 30, 2006

Gap Analysis--
Sometimes we operate under the myth that we must write user assistance for the lowest common denominator. Quite frankly, I think this leads to bad help. The better approach is multichannel user assistance, with each channel targeted toward the appropriate level of expertise for that channel.

I'm working on an embedded user assistance model (a dedicated help pane on the application UI), and this principle has suddenly clarified things for me. The issue came up: how far do we go with the embedded user assistance? My answer for embedded user assistance is, "Not too far." This channel is excellent for users who are almost smart enough to not need assistance. If the gap is large, other channels like elearning, tutorials, etc. are the appropriate place to deal with those needy ones.

In other words, it's OK to say, "You have to be this tall to ride this ride."

Once we accept this, then we can focus user assistance at the audience more appropriately.

Example
Let's say you were doing embedded user assistance for a word processor, specifically the part of the application where you do headers and footers. I'd note in the embedded user assistance that running headers can be automated by inserting a StyleRef field. I might add that this helps users find a topic by browsing the document header.

But what if someone doesn't understand style tags? Should we put help about that in the embedded UA? What about principles of document design in general, what constitutes good heading hierarchies, and whether the StyleRef should refer to Heading 1, Heading 2, or what?

Nope, nope, and nope. A snippet of help in a narrow sidebar in the middle of an off-main-page task is neither the time nor the place to educate the user about document design. It is a good place to ooch a fairly competent user to a higher level of efficiency or performance.

Put the training bit somewhere else.

Besides, what are the odds that your lowest common denominator is doing headings anyway?

Wednesday, November 29, 2006

Why I'm Not a Technical Writer--
And as Jerry Seinfeld would say, "Not that there'd be anything wrong with that." But I need to regroup and get my strategist hat back on here at my day job, and I feel the need to articulate and summarize what it is I do as a User Assistance Architect that is different from what I did as a technical writer.

Models
I seem to spend more time building models than producing documents. I do task analysis, just as a technical writer would do, but I seem to be less interested in "what does a user need to do" than in "what does a user need to know?" And beyond that, I abstract one more level to "what kinds of information does a user need?"

I define patterns a lot. We have a department Wiki and I have a published pattern language I follow in posting patterns to our Wiki. By the way, I have an article coming out in the January/February issue of Interactions, the SIGCHI magazine. That issue will be a special topic issue edited by Fred Sampson focusing on User Assistance. My article is entitled "A pattern language approach to user assistance" (so much for coy titles). I hope folks get a chance to read it. I will be doing a presentation on this same topic at the WritersUA conference in Long Beach in March.

I wireframe a lot. I never did that as a technical writer, and frankly, I don't see a lot of technical communicators doing that. Wireframes let me model how the user assistance will behave. One reason we don't do a lot of that as technical communicators is that we are bound by the authoring tools. But that is tied into the model that Help is a separate application. As we get into more interactive models where user assistance is blended into the application, we need to wireframe how that works. Wireframing and use case modeling are two nifty disciplines I picked up while working as a UX designer at my previous job.

But I don't do a lot of use case modeling, and I'm not sure why not. Perhaps the pattern language approach fills the need that use cases did when I was designing UIs. But the other day, I did find myself looking at wireframes and asking about alternate and exception cases, so the discipline is still there and seems to influence me.

Content Management and Publishing Technologies
I spend a lot of time researching how we can author, store, retrieve, compile, and display information. Five years ago I would have been thinking about writing and publishing documents.

And somewhere in all that will eventually come architecture and tools.

Conclusion
Thanks for your patient ear. I'm stoked again.

My job is (1) to understand how our users apply information to their tasks, (2) how best to structure and deliver that information within the contexts of those tasks, and (3) how to author and manage that information so that it can meet 1 and 2.

I gotta get to work!

Monday, November 27, 2006

Research, Periscopes, Cable TV, CSI, and Goldilocks--
In a tryptophan-induced semi-coma this weekend, I experienced a convergence that tied my previous blog in with several seemingly disparate topics. The blog is from November 17, where I bemoan having to wade through so much metadiscourse to get to actual content, e.g., "This chapter is about... This topic is about... This procedure is about..." Over the long holiday weekend, I was reading an article in the current issue of Technical Communication by some researchers in Washington state (BTW, kudos to the research leadership of Jan Spyridakis at the University of Washington) who studied the effects of the frequency of headings in online and print documents. The upshot of the research is that having too many headings is distracting in both print and online, but even more so for online documentation.

In my blog, I noted that the problem seemed more annoying to me when navigating a PDF through the bookmarks (which coincided with the block label headings) instead of scanning the printed manual. The research seems to validate that this was not an isolated reaction. The extra navigation adds cognitive load. But the research also pointed out that too many headings had an aggravated negative effect in online documents even when the headings were not part of the navigation scheme, but appeared as readers scrolled through a multi-heading, monolithic block of text.

Periscopic Focus
My explanation is that reading online is like looking through a periscope, whereas reading print is like looking at the landscape from an open deck. In looking through a periscope, we seem to focus on detail more; therefore, we are more likely to be interrupted by the headings (the speed-bump effect I describe in my earlier blog). The same thing happens to me when I read the program listings for my cable. The movie listing gives the cast first and then the blurb about the movie. Even though I have no interest in the cast, I find myself reading it. I think it is an effect of the periscopic focus from scrolling through the movie list.

Have you noticed on CSI that when the investigators enter the crime scene, they never turn the overhead light on? They use flashlights instead. My theory is that it helps them focus on detail and not be distracted by the broader landscape, so to speak. It forces periscopic focus.

Conclusions
The research reminded me and validated again that the online reader experience is less forgiving than the print experience. We need to get to the point as directly as possible.

As an avid Information Mapper, it also gave me pause to consider the potential downside of chunking at too granular a level, especially where limited screen real estate promotes aligning block labels with the body of the text (as opposed to the marginal outdenting more common in print presentation). In that presentation scheme, headings are more likely to interrupt the flow.

It also raises some interesting questions about structured writing in general where content is written independently of presentation media. Can content be authored with media-agnostic assumptions?

The good news is that the Goldilocks principle still prevails: Although too much is much too much online, just right seems to be just right in both print and online.

Friday, November 17, 2006

Metadiscourse--
[Warning: Taking any of the advice in today's blog could prevent you from winning awards in publication competitions.]

Metadiscourse is talking about the talking or writing about the writing. For example, the beginning of this sentence is metadiscourse; it has no content but tells you that what follows is an example. Metadiscourse can be a useful device to help listeners and readers know how to process what is about to come. Which, as an aside, has always made me doubt the effectiveness of putting them at the end of the discourse, as in this sentence for example.

Metadiscourse exists at the document level as well. For example, a table of contents is a form of metadiscourse.

I sometimes find myself having to wade through layers of metadiscourse to get to the value of a document. This seems most inconvenient when I am trying to navigate a PDF manual using the bookmarks. It seems like it takes me way too many clicks to get to where I find anything of value.
Example:
Ah, the chapter on Painting Widgets, just what I need. Let me click on Introduction:
"Introduction: This chapter is about how to paint widgets."
Hmmm. OK. Let me click on Overview.
"Overview: This chapter has the following topics:
  • All About Widgets
  • All About Paint
  • Procedures"

I'll just click on Procedures:

"This section describes the following procedures:

  • Selecting a color
  • Preparing the widget
  • Painting the widget"

Let's just go to Painting the Widget

"Follow this procedure to paint a widget"

AAAARRRRGGGGHHHHHHHH!!!!!!

We need to get readers to the good stuff quicker. How?

Don't make the TOC (or bookmarks in a PDF) overly detailed. Maybe just a listing of the chapters is all that is needed. An information-mapped document probably only needs chapter titles and map titles. Listing every block label in the bookmarks or TOC is probably excessive.

Stop writing chapters called "About this Guide" where we tell the reader why we italicize some words, do some in courier, some in bold, etc. I don't think the following scenario happens:

Hmmm here's a definition and the word browser is in italics. Let me go to Chapter One and see what's up with that. Oh, apparently browser is the term being defined. Glad I looked up the Conventions Used in this Guide piece.

And let's avoid intros that restate the topic title in sentence format, or stem sentences that restate topic headings, etc. Let's rethink whether every chapter needs a local TOC (or at least not call it Overview in the bookmarks).

It's not that I don't know about or value advance organizers. But they're kind of like speed bumps; they do some good, but too many in short succession make me put the Wrangler in 4-wheel drive and take to the sidewalk.

End of Rant

Thursday, November 16, 2006

The Heisenberg Uncertainty Principle--
Heisenberg says the more accurately we know where an electron is, the less accurately we can know its momentum. And vice versa. John Carroll talks about the Heisenberg Uncertainty Principle of Training: The more complete the training is, the less usable it is; the more usable it is, the less complete it is.

I believe the same goes for user assistance. The weight of being complete is not without its costs. For example, I recently read a user manual that explained the login screen. By the way, this screen has two fields and one button. One field is labeled UserName and the other is labeled Password. The button is labeled Login.

It took a page with a screen shot to document how to log in. It turns out, after reading the manual, that I am supposed to put my UserName in the UserName field and my Password in the Password field. Then, according to the manual, I need to click on the button called Login.

There was some extra information: If I don't know my UserName and Password, I should contact my System Administrator. And to get to the login page, I need to type the IP address of the machine that is hosting this particular application into the URL address of my browser. Well, if I didn't know either of those things and went to the user assistance, I still wouldn't know.

My question of the day is: If the UI is well designed and tells the user everything the user needs to know, do we need to document it at all?

What's the harm?
Why not document even the obvious? I can think of two reasons:
  • It gets in the way. This user guide was 86 pages long. I can very quickly make it 85: Don't document the login screen. Let's assume that somewhere in that document is the one page the user needs. An 86-page document has 85 distractors (wrong or useless pages for that problem). An 85-page document has only 84 distractors. Not a big improvement, but hey, I was only on page two, who knows what I could do if I dug deeper.
  • It fools us (the writers) into thinking we have documented the user's need. Maybe what this guide should have documented is how to read the IP address off of a machine. It was odd that it assumed a user would need help figuring out what to put in the Password field but would be adept at figuring out an IP address.

Wednesday, November 15, 2006

No Wonder Good Help Is Hard to Find--
I attended a delightful presentation at the local STC meeting last night called "Why I Didn't Hire You." Slides were clever, speaker was witty, and the content was a good encapsulation of conventional wisdom and sound advice for technical writers looking to get hired.

And that's what disturbed me.

The only part I liked was the part about using the applicant's resume as an indication of the applicant's document design and information organization skills. Right on!

The disturbing part was the behavioral advice concerning the interview: Hiring managers make their decisions based on criteria that have no correlation to what makes a writer successful.

Speaker's advice: "Dress professionally; who would you hire from the four men in this slide?" The right answer was the older white guy in the suit. One of the wrong answers was the younger African American man well-dressed but wearing a turtleneck shirt. Anyone want to venture a guess as to the speaker's demographic?

Similar question for the slide with four women. The winner was the attractive woman in a dress suit and perky tie. Loser was the slightly overweight woman wearing slacks and a man's tie.

My question was, "Who in these pictures looks like the really good writers and editors I've worked with?" Losers in that category included the older white guy in the suit and the woman wearing the dress suit and perky tie.

OK, bad question. Try this one: "Who in these pictures looks like the development team our writers would work with?" Oops, same answer as before.

Other disturbing advice (disturbing because it really is practical and accurate): Don't ask questions about the work hours or the environment, like cubes versus offices. Yes, God forbid that hiring managers should act like they are recruiting talent, like they have a need to fill and they should try to understand what the candidates would like to know about where they will spend the majority of their conscious hours. The jobs are things the managers have and they will choose who is worthy to receive them.

What is wrong here? We have set up a system that evaluates candidates on criteria unrelated to success on the job, and we encourage candidates to present themselves disingenuously. What makes us think this is a formula for success?

I'd like to change the rules:
Candidates: Dress appropriately for the work environment and people you most likely will interface with. Be clean and neat, but be you.
Hiring Managers: Does the person look and act like someone who would fit in with the writers and SMEs he or she would work with?

Candidates: Ask questions that will help you make your job decision, don't make up stuff to sound good.
Hiring Managers: Answer the candidate's questions and take them at face value. They have skin in the game too and have a right to interview you about how they will be treated by you.


Interviewing and hiring are fraught with subjectivity. Don't make it harder by introducing artificial criteria that at best can only tell you how well someone interviews.

It's bad enough that we practice all this deception when choosing life partners and people to make babies with. Must we muddy up the workplace as well?

IMHO :-)

Monday, November 13, 2006

Getting a Backbone--
I'm working right now on what will be essentially a getting started workbook. It will probably consist of an interactive document that queries the user for configuration-specific information, such as network topology, operating modes, etc., and it will provide specific user assistance for the user's configuration requirements. The latter might be delivered in discrete deployment guides (probably delivered as conventional PDFs), while the interactive document would be used primarily to determine which guide to point the user to.

So the core of the workbook is NOT procedural information; that comes later. The core must be conceptual information and guidance information so that the user can make informed decisions about how to configure the product. The flow of the topics will be determined by the deployment process, with the most important information being conceptual (background about the product) and guidance (considerations, criteria, and consequences of decisions that the user must make). After that, the user can be directed to detailed procedural information.
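
As a rough sketch of the kind of branching such an interactive workbook might perform (the topology values, mode names, and guide file names below are hypothetical), the core is just a small decision structure mapping configuration answers to a deployment guide:

```python
def pick_deployment_guide(topology: str, mode: str) -> str:
    """Map configuration answers to the deployment guide the user should follow.
    Topologies, modes, and guide names are hypothetical examples."""
    guides = {
        ("single-site", "transparent"): "single_site_transparent_deployment.pdf",
        ("single-site", "routing"):     "single_site_routing_deployment.pdf",
        ("multi-site",  "transparent"): "multi_site_transparent_deployment.pdf",
        ("multi-site",  "routing"):     "multi_site_routing_deployment.pdf",
    }
    return guides.get((topology, mode), "contact_support.pdf")

# The interactive workbook would gather these answers from the user;
# here they are hard-coded for the example.
print(pick_deployment_guide(topology="multi-site", mode="routing"))
```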

I've had a tendency in the past to view procedural information as the backbone of a user assistance document. The old P-K analysis approach: Define what procedures the user must do, then analyze what other kinds of knowledge you must impart for them to understand the procedures. I'm certainly not throwing that baby out with the bathwater, but I'm coming to see less and less importance in defining the sequence of steps and more and more importance in imparting expertise to support the user's application-level goals.

In short, if it's that hard to figure out how to work the application, shoot the UI developer. The challenge for the UA should be helping the user figure out how to apply the application to the user's goals.

Conclusion
Make the higher order information (what I've been calling conceptual and guidance) the backbone of the user assistance, and let procedural information branch off and out from that core.

Friday, November 10, 2006

Dumb or Brilliant?--
This one has me pondering. We have a user interface where the user can enter IP addresses. If the user wishes to enter multiple IP addresses, the instruction is to separate them with a comma. The UI displays the addresses as the user has entered them. Very similar to how you see multiple email addresses in the To field of an email. No problem.

On a new interface, when the user types a comma, the display treats it the way it would a carriage return, putting the new IP address on the next line. Huh! Easier to read and see what addresses have been added; different user experience. It raises two questions I find interesting:
  • Should the UI display what the user typed or what the user decided? Using the comma to tell the computer, "This is a new IP address" is easy for the input phase, but should it preclude the computer from acknowledging that input in a way that is easier for the user to process visually?
  • To what degree should innovation be constrained by convention (or consistency)?
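
The display behavior itself is trivial; the interesting part is the questions above, not the code. Purely as an illustration of what the new interface does with the comma:

```python
def format_ip_addresses(raw_input: str) -> str:
    """Show what the user *decided* (one address per line) rather than what they typed."""
    addresses = [addr.strip() for addr in raw_input.split(",") if addr.strip()]
    return "\n".join(addresses)

# The user types commas; the display treats each comma like a carriage return.
print(format_ip_addresses("10.0.0.1, 10.0.0.2,10.0.0.3"))
```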

No answers today. I'm enjoying the questions too much :-)

Thursday, November 09, 2006

An Old Standard Revisited: Flow-Charting--
My earliest exposure to flow charts was as troubleshooting aids. As I watched people use them, they did not seem very effective; users often got lost, and the user experience rarely seemed to end with the trouble getting shot.

This last year, however, I have found myself going to flowcharting as an analysis tool, one to help me understand complex navigations or tasks where logical branching played an important part. For example, in one application, clicking the Done button could take different users to different locations depending on what path they had taken or decisions they had made.

More recently, I have been using flowcharts to understand how a complex task is done (configuring a network security appliance), especially to understand the different contingencies and how the user path is affected.

I use Visio's standard template for flowcharting and sit in design sessions with my laptop projected. The team of SMEs, information architect, technical writer, and I have been mapping the flow and logical branches of a very complicated process in order to create an interactive guide that could query the user about configuration decisions and deliver the appropriate information.

I have also created four new icons in my template, one for each of the main kinds of information:
  • Conceptual
  • Procedural
  • Guidance
  • Reference
I use these icons to annotate the flow chart with the kind of information a user would need at the various steps and phases in the flow.

An interesting pattern is emerging. Where there are decision/branching diamonds, there is often a need for conceptual and guidance information. In other words, the user needs some background about the domain and also needs expert insight into the decision to be made. For example, if a branch requires that the user decide between "transparent" or "routing" mode, the user assistance must make sure the user understands these terms (conceptual information) and also provide guidelines for when to choose one over the other, implications for that choice, etc. (guidance information).

Procedural information icons tend to show up at action blocks in the flow.
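
Here is a sketch of how that annotation might look if the flowchart were captured as data rather than Visio shapes; the node names and the decision-to-information-type mapping below are illustrative observations, not a hard rule:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FlowNode:
    name: str
    kind: str                 # "decision" or "action"
    info_needed: List[str]    # annotations: conceptual, procedural, guidance, reference

def suggest_info_types(node: FlowNode) -> List[str]:
    """Heuristic observed above: decision points tend to need conceptual + guidance
    information; action blocks tend to need procedural information."""
    if node.kind == "decision":
        return ["conceptual", "guidance"]
    return ["procedural"]

flow = [
    FlowNode("Choose transparent or routing mode", "decision", []),
    FlowNode("Assign IP address to interface", "action", []),
]
for node in flow:
    node.info_needed = suggest_info_types(node)
    print(f"{node.name}: {', '.join(node.info_needed)}")
```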

Nothing shocking here, but it's nice to change lenses every now and again and find that the same features you thought were important still show up in the landscape.

So don't discount the value of flow-charting as a collaborative task-analysis tool, and be aware that it can then be easily turned into a contextual information requirements tool.

Wednesday, November 08, 2006

Applications as Tools--
In one of my earliest blogs I pointed out that the term "user" implied something "used." Part of understanding how to craft effective user assistance requires an understanding of different ways things are used.

I see applications as falling into one of (or drifting among) three levels of toolness:
  • Extension tool
  • Cognitive tool
  • Electronic Performance Support System (hat-tip to Gloria Gery)

Extension tools are a lot like simple mechanical tools: They extend a natural physical capacity. Think of a crescent wrench. It grabs a nut much the way our fingers would, just stronger and tighter. The wrench's handle amplifies the natural torque our arm provides. Similar thinking for a hammer: Its head is like a small hard fist and its handle amplifies the power of our arm.

A simple word processor (or one where just the basic functions of text entry and editing are used) is an extension tool. We can type, erase, and print pretty much the way we would write (or talk) manually--just faster and more legibly.

Cognitive tools help us think. What if, instead of just sitting down at the word processor and typing a memo, I started in the outline format and organized my thoughts? Then, as I wrote, I used the outline to evaluate the flow of my argument and dragged elements around until I felt the document flowed better. There is more going on here than using the word processor to make the task easier from the mechanical perspective. The tool is supporting higher order processes of rhetoric, composition, and critical thinking.

An Electronic Performance Support System (EPSS) brings data and domain expertise (guidance) to the user. Today's word processor, with spell check, templates, wizards, and collaboration tools, is very close to acting like an EPSS if not actually doing so.

Implications for User Assistance Architecture
As user assistance matures in a product, it moves the product up the tool hierarchy. When user assistance as a separate help file largely goes away and is replaced by more proactive strategies within the UI, it has elevated the product to an EPSS.

THAT is the value sweet spot.

Sunday, November 05, 2006

What's Your Underlying World View?--
Philosophically, technical writers fall into two epistemological camps (hey! it's my blog; I can use words like that if I want): Positivism and Constructivism.

Positivists
Positivists view reality as singular and rigid: An apprehendable reality is assumed to exist, driven by immutable natural laws and mechanisms. Knowledge of the way things are is conventionally summarized in the form of time- and context-free generalizations. (Guba and Lincoln 1994, p. 109)

Positivist technical communicators tend to define a product by a finite set of features and functions. An accurate and complete cataloging and description of the features and functions will render an accurate and complete description of the product.

Positivists view the relationship between the knower and the thing known as dualistic: There is a distinct separation between the knower and the known. They view reality as being objective: Facts are true or false. They view the role of the technical communicator as being an unbiased describer of a product's functionality.

Constructivists
Constructivists view reality as pluralistic: Reality is expressible in a variety of symbol and language systems. They also see it as plastic: Reality is stretched and shaped to fit purposeful acts of intentional human agents. (Schwandt 1994, p. 125)

Constructivist technical communicators define a product by how people interact with it. No description can ever be complete or totally accurate since the permutations of possible user contexts are too complex.

Constructivists view the relationship between the knower and the thing known as transactional: Meanings are created, negotiated, sustained, and modified within a specific context of human action. The means or process by which the inquirer arrives at this kind of interpretation of human action (as well as the ends or aim of the process) is called Verstehen (understanding). (Schwandt 1994, p. 120).

They also see it as subjective: Facts are deemed viable or not viable within a community of practice.

Constructivist technical communicators interpret product functionality in light of both the user contexts and the developers' intentions.

...

Whereas we take many of our disciplines and values in technical communication from our positivist past, the future of user assistance lies in a constructivist vision.
___

Guba, E. G., and Y. S. Lincoln. 1994. Competing paradigms in qualitative research. In Handbook of qualitative research, ed. N. K. Denzin and Y. S. Lincoln. Thousand Oaks, CA: Sage Publications.

Schwandt, T. A. 1994. Constructivist, interpretivist approaches to human inquiry. In Handbook of qualitative research, ed. N. K. Denzin and Y. S. Lincoln. Thousand Oaks, CA: Sage Publications.

Friday, November 03, 2006

An Information Taxonomy for User Assistance Architects--
[Musical segue into this piece: Paul McCartney singing in the background, "Some people say we've had enough of silly taxonomies"]

Well, I look around me and I say it isn't so.

Actually, I just want to tweak a couple that have been around for a while, just to put them into more of an architectural context.

Two taxonomies that dominate technical writing are Information Mapping® and one whose origins escape me, but which I read most recently in the Wiley Encyclopedia of Electrical and Electronics Engineering.

Information Mapping identifies the following seven types of information:

  • Procedures—steps
  • Process description—explanations
  • Structure—descriptions
  • Concepts—definitions and examples
  • Principles—rules
  • Facts—physical characteristics
  • Classification—types and categories

The Wiley Encyclopedia of Electrical and Electronics Engineering identifies the following four types:

  • Procedural
  • Conceptual
  • Reference
  • Instructional
The Information Mapping Model is most useful when you have a bunch of poorly structured information and you are trying to figure out how to organize it and present it.

As a user assistance architect, however, I am more interested in a taxonomy that lets me analyze the user's information needs, i.e., go through a workflow or screenflow and ask, "What kind of information would the user need here?" Information Mappers will argue that their taxonomy will work fine--and I won't disagree.

But I like the simpler Wiley model, with one tweak. I would replace Instructional with Guidelines. For the kinds of products I support, that makes more sense for me.

Definitions
By and large, I use the first three the way the encyclopedia defines them.

Conceptual, in the sense I want to use it, is broader than the information mapping definition and applies to any background information that the user might need to understand a screen or procedure. In essence, conceptual information is about the product or application domain, but it has no action context.

Procedural is what it has always meant—steps in the right order.

Reference is the look-up detail like specifications, glossaries, and command syntax. It is meant to be dipped into at some particular snippet and not meant to be read like a coherent discourse.

Guidelines is a somewhat different twist on instructional. Guidelines are provided at distinct points in a workflow or screenflow where the user must make a decision, e.g., enter a value or select/deselect a feature. Guidelines coincide somewhat with Information Mapping's Principles. They should be action-oriented and help users understand the following:

  • What should they consider when making the decision?
  • What are typical or recommended starting points or selections?
  • What are the impacts of their selection?
  • How would they monitor the correctness of their decision?
  • How would they adjust or tune their decision?

Example
Let's say that a user is in MS Excel and is using a statistical function to calculate the probability outcome for a t-test of independent means.

Conceptual help would explain what a t-test is and define the required inputs/outputs, i.e., alpha and p.

Procedural help would go through the steps, including navigation to get to the function arguments dialog and how to select the data fields directly from the spreadsheet.

Reference help might give the actual formulas being used in the function.

Guidance help would assist the user in selecting the appropriate value for alpha. And that's the rub! You need a researcher to give you that insight, not the worksheet designer or programmer. But it sure would be helpful for someone to tell you:

Alpha lets you set the level of risk you are willing to take for rejecting a true difference. A typical value of 0.1 is used for many marketing and social science research projects. Where harm could come from accepting a false finding as true, for example in a medical research project or one that would influence a high dollar investment, more conservative values of 0.05 and even 0.01 are often used.

Setting this value too high could result in your claiming there was a real difference between the two samples when in fact there wasn't.

Setting this value too low could result in your rejecting the claim that there was a real difference between the two samples when in fact there was.

Lower alpha values usually require higher sample sizes to be practical.

That information, combined with the conceptual help that would have elaborated a bit more on the definition of alpha, would help the user make a better-informed decision.
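
For readers who want to see the decision that alpha actually drives, here is a small sketch in Python (assuming SciPy is available) rather than Excel, with made-up sample data:

```python
from scipy import stats

# Invented sample data for two independent groups.
group_a = [23.1, 25.4, 22.8, 26.0, 24.3, 25.1]
group_b = [27.2, 26.8, 28.1, 25.9, 27.5, 28.4]

alpha = 0.05  # the guidance above is about choosing this value wisely
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis: treat the difference between the samples as real.")
else:
    print("Do not reject the null hypothesis: the difference could be chance.")
```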

Conclusion
As you plan a user assistance design for an application, look for opportunities for the four types of user assistance described above, and be particularly diligent about identifying the need for Guidance help. It is probably our biggest shortcoming in the user assistance world.

Wednesday, November 01, 2006

The Hokey Pokey, That's What It's All About--
This week I judged some online competition entries for STC, and I reviewed an encyclopedia article on Electronic Documentation. The encyclopedia article talked about the main navigational schemes: Linear, Hierarchical, Web, and Grid. The entries I looked at for STC had classic HTML Help structures of hierarchical TOC with extensive web linking among the topics.

I think we overlook the basic structure that works best in user assistance: The Hub (or its extended model, the Snowflake). A hub has a central page with links off of that page. The navigation is fairly limited, however, between hub and satellite pages. You go to the satellite page and you return to the hub. Kind of like the Hokey Pokey: "You put your right leg in; you take your right leg out." The Snowflake consists of hubs arranged in larger systems of hubs.

Hub structures let users explore safely: step in, step back. They maintain a mental model that is easy to visualize and keep track of.
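
A minimal sketch of what makes a hub safe to explore; the class and topic names are invented, and the point is simply that navigation is constrained to step in and step back:

```python
class HubHelp:
    """A hub-and-satellite help structure: from the hub you step in; from a satellite you step back."""
    def __init__(self, hub: str, satellites: list):
        self.hub = hub
        self.satellites = set(satellites)
        self.current = hub

    def step_in(self, topic: str) -> None:
        if self.current != self.hub or topic not in self.satellites:
            raise ValueError("Can only step in to a satellite from the hub")
        self.current = topic

    def step_back(self) -> None:
        self.current = self.hub  # you always know where "back" goes

help_hub = HubHelp("Configuring the Widget",
                   ["Selecting a color", "Preparing the widget", "Painting the widget"])
help_hub.step_in("Preparing the widget")   # you put your right leg in
help_hub.step_back()                       # you take your right leg out
print(help_hub.current)                    # Configuring the Widget
```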

I'm not recommending pure hubs. It's great to be able to take shortcuts back to the top of the structure, and any good navigation system will be a hybrid. But I think a dominant model must emerge if the user is going to be able to create a mental map of the land. Overall, I think the hub is the easiest model.

Practical Implications for UA Design
Keep your information model simple, and resist the urge to link to topics that do not have direct and immediate impact on the user's context. For example, there is no need to link to a topic on Configuring Reports from a topic on Configuring Work Flows, just because they both deal with "configuring."

If the Hokey Pokey teaches us an important lesson in UA architecture, another childhood lesson can also be relevant. An elaborate trail of breadcrumbs never got anyone out of the woods.

Monday, October 30, 2006

Expanded Text vs. New Topic--
When should you use expanded text (type 2 link) versus going to a new topic (type 3 link)? (See the 10/26 blog for more on types of links.) Normally, we think of expanded text as applying to lists like menus--showing subchoices--or to term definitions--expanding the paragraph to include the definition of a term used in that paragraph.

I have seen expanded text used effectively, however, to open up larger discourse elements, such as procedures or tables. In these cases, the alternative could easily have been to link to a new topic page. Let me give an example where I think expanded text could be the more effective alternative.

Let's say you have a help topic that is about Configuring the Widget. And let's say that there are three distinct procedures that relate to configuring the widget: Procedures A, B, and C. Furthermore, let's say that you're not done configuring the widget unless you've done all three.

You could certainly have an overview page that linked to separate topics for procedures A, B, and C. And there would be NOTHING WRONG with that. But you could also show the three procedure titles as type 2 links, that is, links that expand the text to display the procedural information on the same screen. Some advantages of this approach would be:
  • User stays in the same topic in the help--less chance of cyber-disorientation.
  • User can expand all three topics and read a coherent description of the uber-procedure.
  • User can print the uber-procedure as one document.
  • If an expanded link changes color as a visited link, user gets a visual aid in tracking his progress through the uber-procedure.

Some disadvantages could be:

  • Screen could get unmanageably long.
  • Accessing just a sub-procedure, for example, just procedure B, from a search or index would not be as precise.

This weighing of advantages and disadvantages in determining when to use a specific pattern is called Claims Analysis, and it is an important part of applying pattern language to user assistance design. (See the 10/27 blog for more about pattern language.) It avoids hard-and-fast rules of "do it this way" in favor of more contextual guidance for the user assistance writer, more like: "In these conditions, consider these forces."
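
Here is a sketch of what a recorded pattern and its claims might look like as data. The context/claims structure is my own illustrative shorthand, and the wording of the claims is lifted from the lists above:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Claim:
    statement: str
    kind: str  # "advantage" or "disadvantage"

@dataclass
class Pattern:
    name: str
    context: str          # "in these conditions..."
    claims: List[Claim]   # "...consider these forces"

expanded_text = Pattern(
    name="Expanded text (type 2 link) for related sub-procedures",
    context="Several procedures together make up one larger task",
    claims=[
        Claim("User stays in the same topic--less chance of cyber-disorientation", "advantage"),
        Claim("User can expand all procedures and read the uber-procedure coherently", "advantage"),
        Claim("User can print the uber-procedure as one document", "advantage"),
        Claim("Screen could get unmanageably long", "disadvantage"),
        Claim("Reaching a single sub-procedure from search or index is less precise", "disadvantage"),
    ],
)

print(f"{expanded_text.name} ({expanded_text.context}):")
for claim in expanded_text.claims:
    print(f"  [{claim.kind}] {claim.statement}")
```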

Friday, October 27, 2006

User Assistance Behaviors--
In the upcoming February issue of SIGCHI's Interactions, I have an article entitled "A Pattern Language for User Assistance." The gist of its premise is that we often have style guides for technical communications that describe how information is to be presented, e.g., how elements in the GUI should be referenced, how procedures should be worded, etc., but we don't describe how the user assistance should behave. In the article I say:

Best practices in user assistance can no longer be developed and communicated in terms of "These kinds of words need to be displayed this way;" rather they need to be communicated in terms of "In these scenarios, the user assistance needs to behave this way."

In short, we need to treat the GUI and user interaction aspects of the user assistance itself in the way UI designers treat their GUIs and interactions.

Link Behaviors
In yesterday's blog I identified the following four kinds of links that can appear in user assistance:

  1. Initiate a popup
  2. Expand the text being displayed to reveal additional text
  3. Jump to a new topic and display it in the current pane--replacing the current text
  4. Jump to a new topic and display it in a new window or pane--keeping the current text intact
Guidelines for user assistance writers should include when and how those links should be used. For example:

Definition links occurring within a paragraph or procedure should use a type 1 or type 2 link. If it is likely that the user would print the topic, consider using a type 2 link (expanded text). If the displacement of text could be disruptive or obtrusive for some reason, favor using a type 1 link (popup). If both types of links are used, use affordances or pliancies that differentiate between the two.
You may wish to be more specific in order to maximize consistency across writers, but the point is that the behavior of the user assistance needs to be part of the style guide or defined patterns. And those defined behaviors are not based on principles of composition, but rather on principles of human-computer interaction and usability.
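
For example, the definition-link guideline above could be made concrete enough for consistency across writers as a small decision function. The inputs and defaults below are invented; a real style guide would define its own:

```python
from enum import Enum

class LinkType(Enum):
    POPUP = 1          # initiate a popup
    EXPAND = 2         # expand the text in place
    REPLACE_TOPIC = 3  # jump to a new topic in the current pane
    NEW_WINDOW = 4     # jump to a new topic in a new window or pane

def definition_link_type(likely_to_print: bool, displacement_disruptive: bool) -> LinkType:
    """Apply the guideline for definition links inside a paragraph or procedure."""
    if displacement_disruptive:
        return LinkType.POPUP    # expanding would push important text out of view
    if likely_to_print:
        return LinkType.EXPAND   # expanded text survives printing; a popup does not
    return LinkType.EXPAND       # default preference in this hypothetical guide

print(definition_link_type(likely_to_print=True, displacement_disruptive=False))
```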

The user assistance writer's reference bookshelf needs to look more and more like the UI or UX designer's bookshelf. How does yours look?

Thursday, October 26, 2006

Hyperjacked!!--
There is a thin line between the skillful interconnection of related information in an online help file and the fragmentation of the user's cognitive flow. Some user assistance authors seem seduced by a converse Cartesian credo of sum ergo tripudio: I am, therefore I link. The result can disorient users by forcing a cyber version of attention deficit disorder upon them, jumping them around too casually--what I term hyperjacking them. The following are some guidelines to consider:

Signal what kind of link the user is about to take.
There are four ways the user assistance can respond to a link:

  1. Initiate a popup
  2. Expand the text being displayed to reveal additional text
  3. Jump to a new topic and display it in the current pane--replacing the current text
  4. Jump to a new topic and display it in a new window or pane--keeping the current text intact

I was in a help file yesterday that had links of types 1, 2, and 3, yet used the same presentation for all three. In one instance I clicked on a term in the middle of a paragraph and a definition popup appeared; in another situation, that same scenario jumped me to a new topic page. The solution is to provide different affordances (how the link looks) or pliancy (how it changes when moused over) behaviors to signal to the user what kind of response to expect from clicking on a link.

Don't break chunks into fragments.
In Information Mapping (R) lingo, don't jerk someone out of the middle of a block and send them to another map. Blocks such as paragraphs, tables, and illustrations should use only type 1 and type 2 links (as defined above). Consider the following scenario I experienced recently:

  1. I clicked on a link that was the title of a procedure (out of a list of 3) and it expanded to display the steps.
  2. I then clicked on a term in one of the steps (expecting a definition popup) and was hyperjacked to a new topic. It looked very involved and had lots of other links and I decided it was much more of a commitment to the term than I was willing to make, so I clicked the back button.
  3. I was returned to the page I had just come from, but it was in the default state with all the expanded text collapsed. I wasn't really sure which procedure I had been in when I took the original link.

Breaking up the user assistance in the middle of a unit of discourse (sentence, paragraph, etc.) moves the user into a new thought before you've let them get through the current one. Decide what your smallest unit of discourse is going to be. Info mappers would probably define this as the block. For non-mappers, I recommend the paragraph, procedure, and table as being unbreakable units of discourse.
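
One way to soften the damage when a jump does occur is to remember which sections were expanded and restore them when the user comes back--the problem in step 3 of the scenario above. A rough sketch of my own, assuming a browser-based help pane; the storage key and class name are hypothetical:

  const STORAGE_KEY = "ua-expanded-sections";

  // Record a section as expanded so it can be reopened later.
  function rememberExpanded(sectionId: string): void {
    const open = new Set<string>(JSON.parse(sessionStorage.getItem(STORAGE_KEY) ?? "[]"));
    open.add(sectionId);
    sessionStorage.setItem(STORAGE_KEY, JSON.stringify([...open]));
  }

  // On return (e.g., via the Back button), reopen whatever was expanded.
  function restoreExpanded(): void {
    const open: string[] = JSON.parse(sessionStorage.getItem(STORAGE_KEY) ?? "[]");
    for (const id of open) {
      document.getElementById(id)?.classList.add("expanded");
    }
  }

  window.addEventListener("DOMContentLoaded", restoreExpanded);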

Tomorrow I will discuss guidelines for when to employ which type of link behavior in a help file.

Stay posted!

Tuesday, October 24, 2006

Knowledge Harvesting--
My big aha came several years ago when I was documenting a predictive dialer product (software/PBX switch that automatically makes outbound calls for a call center based on calling lists and operator/outbound line availability). You know, that nasty technology that lets annoying telemarketers and collectors bother more people than if they were looking up numbers and dialing them manually.

I had to document the screen that let the system administrator set the "busy call-back time," i.e., how long the system would wait before it redialed a number that was busy. My online help was flawless: it defined the parameter, gave the minimum and maximum values, and explained how the spinner control changed the value: "click the up arrow to increase the call-back time; click the down arrow to decrease it." Took it to a usability lab and sat poised with victory cigar in hand, just waiting for the positive feedback that would say "Light 'em if you've got 'em."

Interesting turn of events, however. The parameter was pretty well named apparently; no one really needed the definition to figure out what it meant. The whole up arrow and down arrow thing seemed to work pretty well; the users figured that one out without the help. The min and max values became pretty obvious when the numbers quit going down or up.

Even so, users still went to help. What they wanted to know was, "What's a good number?"

You know, there's a reason it's sometimes called the "anything but help" button. Mine certainly fit the mold.

There is a happy ending...
I found one of our customer consultants, the folks who went onsite and helped customers tune their systems, and I asked him what a good number was.

"Ten minutes," he said. "You know there's a warm body there and you don't want to let him or her slip away."

I followed up, "Why would you make it higher or lower?" After all, there had to be a reason for the spinners that allowed the system administrator to change the parameter setting.

"Oh, I check the daily reports," he said. "If the line utilization rates are low, I change the busy call-back time to make it higher--you see, I'm calling the same busy number too many times and I'm wasting an outbound line. If the hit rate starts going low, I decrease the busy call-back time, I'm letting the live bodies get away by not calling back."

OK, there's a lot of fuzziness here, no hard and fast algorithm, but some real meaningful heuristics. That's what the users were looking for: guidance.

The Challenge
Writing up what he told me was easy; any half-dead Information Mapper (R) could do the If/Then table on the way to post-op. Even anticipating the question should have been easy: any time users are asked to set a parameter or make a decision such as enable or disable, they will want some guidance. Even the pattern was obvious: state a typical starting point, describe considerations and impacts for changing it, and point to system outputs that can give feedback on the impact of the decision made.
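
Drawing only on what the consultant told me, that If/Then guidance might read something like this:

  Start with a busy call-back time of about 10 minutes, then watch the daily reports:

  If...                            Then...
  Line utilization rates are low   Increase the busy call-back time; you are redialing
                                   the same busy numbers too often and wasting outbound lines.
  The hit rate starts going low    Decrease the busy call-back time; you are letting
                                   live bodies get away by waiting too long to call back.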

So why didn't I put it in the help from the get-go?

I wasn't tied into the source of application expertise--I worked primarily with developers. I'm not being snotty; it's unfair to expect experts in C++, Java, and database creation to also be experts in running a call center. I had to leave my social and disciplinary comfort zones and seek partial truths from segments of the business I was not used to dealing with.

The challenge, then, is threefold:
  • Identifying the kind of information the user will need
  • Designing how to route that information from the Content Management System
  • Finding the source of the information in order to articulate it and get it into the CMS

The third bullet is why I emphasize that a robust user assistance system has elements of a Knowledge Management System. It involves harvesting useful content from experts, even when that content is loosey-goosey (aka "fuzzy") and tacit.

The upcoming STC conference in May will include a conference-in-a-conference called the Knowledge Management Institute. Larry Todd Wilson will be doing a presentation on Knowledge Harvesting--I highly recommend it for any UA architect or content developer who has to document complex applications where the guidelines users seek cannot be culled from the product technical specification.

Monday, October 23, 2006

Wireframing--
I don't think wireframing user assistance is a common skill among UA writers, since the user interactions are largely defined by the authoring tools. I certainly never routinely did wireframes until I became a UX designer and was designing the user interface for the application. As UA becomes more integrated into the user interface, however, and as user interactions within the UA become more complex or application-like (e.g., Wizards), wireframing should become a routine activity for UA Architects and Information Designers.

The purpose of the wireframe is to show the layout of the elements within the User Assistance Interface, describe their content or presentation rules, define how the user can act on them, and specify how the system should respond to those allowable user actions. The wireframe is, in essence, the blueprint that communicates the following:
  • Tells the UI developer how to present the UA--in all of its possible states
  • Tells the technical writer what kinds of content need to be provided
  • Tells the QA tester how the UA interface behaves

What the wireframe does NOT show is the actual content delivered. It might not even define the source(s) of the information--that might be better described with an information flow diagram.

Tool Talk
Wireframes can be built with a number of tools: Visio, special wireframing software such as Axure (see www.Axure.com for a free 30-day download of a very useful wireframing tool), or even PowerPoint. Using the action buttons and hyperlink tool in PowerPoint allows you to make a fairly robust demo/prototype of how the user interactions would work.

For the project I am doing now, I first used PowerPoint to experiment with user actions and system responses, using sample content to do reality-checking on the user experience. I then did production-level wireframes in Visio, using placeholders and lorem ipsum text to illustrate content. I used the Software/Windows and Dialogs stencil to show the UI and the UA components. For each view or state I used two pages, one to show the UI wireframe and a second one to document the elements and interactions. On that second page I used an object table that has the following columns:

  • The callout number I use on the wireframe for that element
  • The name of the element
  • The user action
  • The system response
  • Comments

I have created a row with those elements and stored it as a widget in my custom stencil. I drag that widget onto the wireframe, document the element, and then cut and paste the row to the object table I am building on the following page. I have found this to work better than what I used to do, which was to document the wireframe with notes on the same page as the wireframe. That just got way too busy and limited the information you could put in a note. The downside is that you have to have two pages in front of you to understand the page's look and interactions.
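
A hypothetical fragment of such an object table--the callout numbers and content here are made up purely for illustration:

  #  Element               User Action             System Response                        Comments
  1  Search field          Enters criteria, Enter  Results list replaces the pane body    Minimum two characters
  2  "Tell me more" link   Single-click            Guideline text expands inline          Collapses on second click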

A tool like Axure has a friendlier approach, allowing you to see an element's notes while still viewing the wireframe, but it has the same downside as above if you want to see the notes of several elements at once. It has the additional limitation of not letting you edit your notes if you are viewing multiple elements (in its Word document output). This is a pain if you are editing your notes for things like consistency, e.g., did I hyphenate single-click in the other notes or not? Hey! We're tech writers; we worry about stuff like that :-)

Thursday, October 19, 2006

Ontology and User Assistance Architecture--
User assistance architecture has more to do with elements--how they are presented, organized, and behave--than with the actual content within a specific instance. Ontology is the inventory, so to speak, of what elements you have at your disposal as an architect.

I am currently working on a wireframe for an embedded help pane and must decide what can go into or be accessed within that pane. The following is a sample ontology for such a pane:
Search: A way for the user to enter search criteria and initiate a search
Search Results: The list that the search returns
Links: Interactive text that navigates through content
Buttons: Command devices that initiate action
Headings: Elements that describe associated content
Multimedia: Elements such as graphics, e-learning, show-me demos
Documents: Large prewritten discourses such as user guides in PDF
Contacts: Tech support or other users who can help solve a problem
Knowledge Base: A database of known problems and recommendations
Information Blocks: Information displayed in small chunks. Blocks can contain several sub-types of information objects. It is useful to identify them, since you may want different types of information to be displayed differently. The following is a breakdown of possible information blocks:

Information Blocks
Definitions: What a term means
Purpose Statements: What a screen or module is intended to do
Guidelines: Higher order information a user needs to know in order to apply the screen or application within a user-goal context--for example, the impacts of enabling or disabling a feature, or what should be considered when choosing among radio buttons A, B, or C.
Procedures: Sequence of steps to accomplish a task
Orientation: What impacts the current screen/task; what is impacted by the current screen/task
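
A rough sketch of how this ontology might be captured as data so an embedded help pane could be populated consistently. All of the type and property names below are my own hypothetical choices, not part of the actual design:

  // Ontology of pane elements, expressed as TypeScript types.
  type InfoBlockKind = "definition" | "purpose" | "guideline" | "procedure" | "orientation";

  interface InfoBlock {
    kind: InfoBlockKind;
    heading?: string;
    body: string;   // the content instance itself would live elsewhere (e.g., in a CMS)
  }

  type PaneElement =
    | { kind: "search"; placeholder: string }
    | { kind: "searchResults"; results: { title: string; topicId: string }[] }
    | { kind: "link"; text: string; targetTopicId: string }
    | { kind: "button"; label: string; command: string }
    | { kind: "heading"; text: string }
    | { kind: "multimedia"; mediaType: "graphic" | "elearning" | "demo"; url: string }
    | { kind: "document"; title: string; pdfUrl: string }
    | { kind: "contact"; name: string; channel: "techSupport" | "forumUser" }
    | { kind: "knowledgeBase"; query: string }
    | { kind: "infoBlock"; block: InfoBlock };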

Food for Thought
As technical communicators, we are often drawn to procedural information as the core of user assistance. The sophistication of user interface design practices, however, often obviates the need for this kind of information. See Procedures: the Sacred Cow Blocking the Road?. Consider this when prioritizing what types of information to present at the highest levels.

Wednesday, October 18, 2006

Progressive Disclosure and Hierarchy of Information
I am currently working on a wireframe for an embedded help pane and wrestling with the real estate constraints and information-prioritization issues that come with megabytes of information vying for a surface area roughly the size of an envelope.

The obvious solution is to not show everything at once in the panel, but to use the pane as a gateway into the content. Allow the user to navigate into and through the content by progressively disclosing greater detail or related topics.

The UA architectural issue is how to categorize and prioritize information so it can be appropriately displayed and disclosed. This requires that a taxonomy of presentation tiers be defined and that an ontology of pane elements be defined and mapped to the presentation tiers.

Presentation Tiers
Start by understanding how many layers of presentation options you will use and rank them into tiers. The following is an example of a presentation tier taxonomy:
Tier 1: Information that is displayed upon initial appearance of the pane without any interaction on the part of the user.
Tier 2: Information that is displayed within the original presentation by expanding the content. In other words, the pane's initial information remains, but new information is inserted. For example, a definition could be tier 2 information; the user clicks on a term in the text and the text expands to include the term's definition.
Tier 3: Information that replaces the information in the original pane. In essence, the pane's content is replaced with new content. One could slice this even finer by having a Tier 3A that replaces an entire pane's content and a Tier 3B that replaces just a section within the pane.
Tier 4: A new window is opened to display the content; the original embedded help pane stays intact.

Ontology of Elements
I just love having the opportunity to use the word ontology :-) I can only hope that I am using it reasonably correctly. I use it in a broad sense to mean the furniture, so to speak, that you can put into a user assistance pane. The ontology will contain interaction devices, such as search criteria entry fields; navigation devices, such as links; and information elements, such as headings, definitions, procedures, and multimedia.

This is the part I am working on now, defining that ontology and mapping the elements to their appropriate presentation tier.
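
As a sketch of that mapping exercise, each element type could be assigned a default presentation tier. The assignments below are illustrative guesses on my part, not the finished design:

  // Presentation tiers from the taxonomy in the previous entry.
  enum Tier {
    Initial = 1,        // displayed when the pane first appears
    ExpandInPlace = 2,  // inserted into the existing presentation
    Replace = 3,        // replaces the pane's (or a section's) content
    NewWindow = 4,      // opens in a separate window; the pane stays intact
  }

  // Default tier per element type (illustrative only).
  const defaultTier: Record<string, Tier> = {
    purpose: Tier.Initial,          // orient the user immediately
    search: Tier.Initial,
    definition: Tier.ExpandInPlace, // small enough to insert inline
    guideline: Tier.ExpandInPlace,
    procedure: Tier.Replace,        // usually too long to insert inline
    document: Tier.NewWindow,       // PDFs open outside the pane
    multimedia: Tier.NewWindow,
  };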

Keep posted.

Tuesday, October 17, 2006

Community-building User Assistance
For a great example of community-based user assistance, sign up for Trillian, a cross-application instant messaging client (www.trillian.cc). I did so and then went to their help to find out how to add a contact. I entered "add contact" in their search engine and was taken to a forum thread where a user had posted instructions for adding a contact. The post included the user's picture!

Granted, it's an instant messaging site--they are all about community--but it's a technique that stodgier apps could also apply.

Why Incorporate Community-based User Assistance?
I'll be real honest: something seems a little topsy-turvy at first about letting users write (or at least contribute to) the user assistance. Aren't we supposed to know more about our applications than our users do? (let it sink in, two...three...four) Says who?

There is an underlying assumption that people who build an application know more about using it than the people who use it. That is not always the case. Users have more contextual knowledge than the inventors do in many cases. Add to that the fact that user assistance is typically written by technical communicators (that would be us) who are often isolated from both the inventors and the users. At best, we typically document as designed whereas users understand as built and as used.

I was at a presentation some work colleagues of mine gave at Georgia Tech last week, where our Chief Information Architect made the point that a successful product should build communities as a way of raising the cost of leaving the product. He offered the metaphor of selling one's home and moving: it's not just the house you would be leaving, it's the community and all you have invested socially in its members and institutions. Products that build communities encourage loyalty. User assistance has a unique perspective to bring to community-building: getting the user in touch with others who have met and solved the same problem the current user is struggling with.

Monday, October 16, 2006

User Assistance as Community Builder
A user assistance architecture designed to act like a knowledge management system would include community-building capability--that is, ways to link users to sources of expertise beyond the static, published documentation set.

Look at how Amazon.com, a retail site, does an excellent job of community building (thus providing dynamic user-to-user user assistance).
  • User reviews
  • People who bought this book also bought these books...
  • User lists of favorites around a given topic being currently viewed

At the WritersUA conference this year in Palm Springs, I sat on the pundits' panel that had to offer up predictions of where UA was heading. One of my predictions was that user assistance would incorporate user-content accommodations like Wikis. This would be one way for complex applications to create communities of practice around their products. Other community-building opportunities to incorporate into the user assistance framework would be WebChat, links to relevant threads within a user forum (based on the topic or search question at hand), and links to external sites, such as professional associations, that could provide information about the topic.

Friday, October 13, 2006

Content Management vs. Knowledge Management
In a meeting yesterday, I noted that I thought user assistance architecture was evolving to a fusion of content management and knowledge management. Later, I wondered what the heck that meant. What exactly is the difference between the two and what would something act like if it were a fusion of the two?

A somewhat simplistic yet illustrative analogy comes to mind--
Content Management : Knowledge Management :: Food Warehouse : Grocery Store

There are many flaws in this analogy, but let's work it for a while as if it answers the question. A content management system (CMS) focuses on storing and retrieving content, predominantly in large boxes known as documents. A CMS is essentially a storage and redistribution channel. In many cases, the main users of a CMS are not the end users of the content, but writers and publishers who create multiple documents by rearranging content or distributing that content in different formats, e.g., PDF, paper, online Help, web pages, etc.

A knowledge management system (KMS) focuses on creating content and delivering it within the context of specific user needs. There is also a stronger social component in a KMS. What does the grocery store have that a food warehouse does not? Experts like butchers and produce managers. Grocery stores also have fellow shoppers who can help. Another big difference is that warehouses lack user context: Cheerios are as likely to be stored next to ketchup as anything else. Context in a warehouse has more to do with storage size, inventory turns, and the like. In a grocery store, things get stored next to like things for ease of price comparison, or next to related things. No food warehouse would ever store bananas and vanilla wafers next to each other, whereas that arrangement is common in grocery stores for obvious reasons.

Next week, I will try to bring this analogy home with a more detailed examination of how it applies to user assistance architecture.

Thursday, October 12, 2006

Defining User Assistance Architecture
I have avoided the job title of architect for several decades now, but it has finally overtaken me. So now I must ask myself: what do I do that's different from being a technical communicator? In the classic sense (as in designing buildings), architecture means defining the structure and form of places that accommodate human activity. OK, that seems like a good starting place. So a User Assistance Architect is someone who defines the structure, organization, and delivery methods for presenting user assistance, i.e., information and instruction that supports user activity within an application.

As an architect, I need to be less concerned with the detail of the content and more concerned with the types of information that content must contain, when to present it, and how to best let the user access it.

So how does one go about doing this? The initial methodology I am using in my current task is based on use cases. Bear in mind that I have been brought into an environment where mature products with lots of user assistance in a variety of forms and locations already exist. My chosen role right now is more that of User Assistance Anthropologist, i.e., discovering and cataloging what exists. Even though this puts me more in a deconstruction mode, I think the approach would work equally well in a construction mode, i.e., where a product or product suite was being built from scratch. I say this because use cases are primarily design tools anyway, and my use of them to analyze an existing product structure is more the exception.

I have chosen a representative product and I am populating a four-column table with:

Use Case * Scenario * Information Requirements * UA Channels/Patterns

I am in the process of reviewing product documentation, interviewing SMEs and writers, and then trying to fit what I am learning into this table. Although the initial purpose is to capture current state, I imagine that the same structure will be useful to define future state as well.
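
To illustrate the shape of the table, a hypothetical row (borrowing the predictive dialer example from the Knowledge Harvesting entry above) might look like this:

  Use Case: Configure outbound dialing
  Scenario: Administrator sets the busy call-back time
  Information Requirements: Definition of the parameter; guidance on a good starting value and when to adjust it
  UA Channels/Patterns: Embedded field-level help; If/Then guideline block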

So far the approach is working well for me--it provides a structure to help me start to unpeel the onion in manageable layers. Best of all, it lets me collect information in the random way information likes to emerge but store it in a way that lets structure start to emerge.

I will discuss this methodology in more detail in later blogs.

Wednesday, October 11, 2006

Modes of User Assistance
One can think of user assistance as falling into one of two modes: pulled UA and pushed UA.

In pulled UA, the user takes action to summon or access the user assistance.
Examples of pulled UA include:

  • Manuals
  • Help files (accessed by clicking on Help in the menu or a Help icon)
  • What's this?
  • Search

In pushed UA, the system detects a situation and provides assistance proactively. Examples of pushed UA include:

  • UI instructional text that is part of the static UI display, e.g., examples of date formats next to date entry fields
  • Field and control labels
  • Default instructions in dropdown lists, e.g., "Select carrier" as the default selection in a dropdown list for selecting cell phone carrier.
  • Rollover techniques such as tool tips (which can turn into pulled UA as the user comes to expect the technique and deliberately mouses over an object to get information about that object)
  • Contextual embedded help
  • Did you know? or Tip of the day messages
  • Context-sensitive help activated by clicking an icon
  • Linked help, e.g., Tell me more... links that expand on information provided in the UI.

Pulled UA requires that the user be in a state of explicit ignorance, that is, the user does not know something BUT is aware that she does not know that something. It also requires that the user be able to articulate, at least in general terms, the question she wants answered. For example, a procedure on How to make an unordered list is going to be accessed only by someone who knows that lists can be bulleted for better effect AND who would know that such a list is called an unordered one.

Pushed UA can be very effective when the user is in a state of tacit ignorance, that is, the user does not know something and is also unaware that she does not know it, or that it even exists to be known.

Designing and writing pulled UA requires attention to content organization, taxonomy creation, indexing, and good search capabilities. Designing and writing pushed UA requires that the user assistance be more integrated with the application and act more like part of the product.
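
As a small illustration of how the two modes can coexist around a single control, here is a sketch of a field-assistance record. The interface and property names are my own hypothetical shorthand, with the pushed/pulled split following the lists above:

  // Pushed vs. pulled UA attached to one field (illustrative only).
  interface FieldAssistance {
    // Pushed UA: delivered without the user summoning it
    label: string;                 // field/control label
    inlineHint?: string;           // e.g., a date-format example next to the field
    defaultPrompt?: string;        // e.g., "Select carrier" in a dropdown
    tooltip?: string;              // rollover tip
    tellMeMoreTopicId?: string;    // linked help that expands on the UI text

    // Pulled UA: the user goes looking for it
    helpTopicId?: string;          // topic reached from the Help menu or search
  }

  // Example, reusing the busy call-back time parameter from an earlier entry:
  const busyCallBackTime: FieldAssistance = {
    label: "Busy call-back time (minutes)",
    inlineHint: "Typical starting point: 10",
    tooltip: "How long the system waits before redialing a busy number",
    helpTopicId: "dialer/busy-callback-guidelines",
  };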

Tuesday, October 10, 2006

Categories of User Assistance
By its very form, the phrase user assistance implies the existence of a tool, i.e., the thing used. The existence of a tool implies an application or goal, i.e., what the user is trying to accomplish with the tool. In other words, the existence of a Word Processor implies the user's need to create a document. So user assistance is help in using a tool to achieve a goal. In this sense, user assistance can be divided into two main categories: (1) How to use the tool and (2) How to use the tool to...

The first is very tool-centric and focuses on the rules and manipulations of the application itself. Good user-experience design minimizes the extent to which this kind of UA is needed by making these manipulations and interactions self-evident, but some degree of this assistance will be required in most applications. How to enter a summation formula in Excel would be an example of How to use the tool user assistance.

The second category of UA, How to use the tool to..., is more user-centric and focuses on goals and problems within the user's context. Showing someone how to use Excel to do a budget would be an example of the How to use the tool to... kind of UA.

Levels of User Assistance
At its most basic level, UA is discourse that explains something and can be delivered through channels that are more or less detached from the tool, e.g., manuals or compiled help files. As it becomes more advanced, it becomes more interactive and integrated with the tool. Embedded help and bubble help are examples of more highly integrated user assistance patterns. At a more advanced level, user assistance acts like a Performance Support System and is highly integrated within the tool, e.g., a Wizard. At its most advanced level, it becomes the tool. For example, is spell-check a feature within a word processor or is it user assistance?

Sunday, October 08, 2006

Metablog
Blogging is an odd activity. On the one hand, it seems absurdly self-centered, as if what the blogger has to say is of such importance it must be shared with the world. On the other hand, it is like the mutterings of a street vagabond: words said in public but listened to by no one. I rather hope it is more like the latter :-)

The Role of This Blog
I am changing jobs and taking on the role of User Assistance Architect, in which I will have the fun task of "identifying tools, methods, and standards to integrate the content and delivery of user assistance, including documentation, help, e-learning, and training." I will use this blog to reflect on and articulate my thoughts about user assistance as I take on this new challenge. I will deliberately avoid the temptation to publish well-thought-out papers, and use this space instead to put emergent thoughts out for my own reflection and for public scrutiny.

So if anyone wanders by this cyber vagabond and would like to comment on my mutterings, all input would be appreciated.

In my next entries, I will be focusing on defining user assistance and exploring its place in the broader field of User Experience.