
A Case Study for Evaluating Interface Design through Communicability

Raquel O. Prates (2,1)   Simone D.J. Barbosa (1)   Clarisse S. de Souza (1)

(1) Informatics Department, PUC-Rio
R. Marquês de São Vicente, 225 – Gávea
Rio de Janeiro – RJ – Brazil – 22453-900

(2) Computer Science and Informatics Department, IME/UERJ
R. São Francisco Xavier, 524 – 6o. andar – Maracanã
Rio de Janeiro – RJ – Brazil – 20550-013

sim@les.inf.puc-rio.br, raquel@les.inf.puc-rio.br, clarisse@inf.puc-rio.br

ABSTRACT
Communicability evaluation is a method based on semiotic engineering that aims at assessing how designers communicate to users their design intents and chosen interactive principles, and thus complements traditional usability evaluation methods.

In this paper, we present a case study in which we evaluate how the communicability tagging of an application changes along users' learning curves. Our main goal was to obtain indications of how communicability evaluation along a learning period helps provide valuable information about interface designs and identify communicative and interactive problems as users become more proficient in the application.

Keywords
Communicability, interface design evaluation, users' learning curves, semiotic engineering.

1. INTRODUCTION
Most of the research in HCI aims at providing users with usable computer systems [5]. Evaluation plays a crucial role here, in that it reveals actual and potential design problems. The majority of well-known evaluation methods focus on usability evaluation. They have been developed to measure whether the system is easy to use and learn, and whether it is useful and pleasant to use [9]. However, usability evaluation does not focus on how designers communicate to users their design intents and chosen interactive principles, and how well users make sense of the application.

Our research is motivated by a semiotic engineering perspective [2], in which user interfaces are perceived as one-shot, higher-order messages sent from designers to users. We claim that the degree to which a user will be able to successfully interact with an application and carry out his tasks is closely related to whether he understands the designers' intentions and the interactive principles that guided the application's design.

Motivated by the need to evaluate how well designers convey their intentions to users through an application's interface, we have developed the communicability evaluation method [8], which can be perceived as complementary to usability evaluation. It aims at measuring whether the software successfully conveys to users the designers' intentions and interactive principles. By evaluating the communicability of their software, designers can appreciate how well users are getting the intended messages across the interface and identify communication breakdowns that may take place during interaction. This information is valuable to designers during either formative or summative evaluation, since it allows them to identify persistent communicative problems in the interface, as well as unexpected use or declination of interface elements.

2. COMMUNICABILITY EVALUATION: THE METHOD
In communicability evaluation, evaluators must also define a set of tasks for users to perform and record their interaction using software that is able to capture mouse-pointer movements and other screen events (e.g., Lotus® ScreenCam™). The evaluation method itself comprises three steps [8]:
1. Tagging
2. Interpretation
3. Semiotic Profiling

1. Tagging
Tagging is the process of relating a sequence of interactive steps to an utterance (from a predefined set), in an attempt to express the user's reaction when a communication breakdown occurs. The tags used in communicability evaluation were identified based on literature about explanatory design [3]. Table 1 presents each one of these tags with a brief description of the context in which they occur.

Table 1 – Tags used in communicability evaluation

Where is? / What now?
The user seems to be searching for a specific function but demonstrates difficulty in locating it. So, he sequentially (worse case) or thematically (better case) browses menus and/or toolbars for that function, without triggering any action. This category includes a special case we have called What now?, which applies when a user is clearly searching for a clue of what to do next, and not for a specific function that he hopes will achieve what he wants to do.

What's this? / Object or action?
The user seems to be exploring the possibilities of interaction to gain more (or some) understanding of what a specific function achieves. He lingers on some symbol waiting for a tool tip and/or explicitly calls for help about that symbol, or he hesitates between what he thinks are equivalent options. This category also includes cases in which users are confused about widgets being associated with objects instead of actions and vice versa (Object or action?).

Oops! / I can't do it this way. / Where am I?
This category accounts for cases in which a user performs some action to achieve a specific state of affairs, but the outcome is not what he expected. The user then either immediately corrects his decision (typically via Undo or by attempting to restore some previous state) or completes the task with an additional sequence of actions. Sometimes the user follows some path of action and then realizes that it's not leading him where he expected. He then cancels the sequence of actions and chooses a different path. In this case the associated utterance is I can't do it this way. This category includes another one, Where am I?, in which the user performs some action that is appropriate in another context but not in the current one.

Why doesn't it? / What happened?
This category involves cases in which the user expects some sort of outcome but does not achieve it. The subsequent scenario is that he then insists on the same path, as if he were so sure that some function should do what he expects that he simply cannot accept the fact that it doesn't. Movies show that users carefully step through the path again and again to check that they are not doing something wrong. The alternative scenario (What happened?) is when they do not get feedback from the system and are apparently unable to assign meaning to the function's outcome (halting for a moment).

Looks fine to me…
The user achieves some result he believes is the expected one. At times he misinterprets feedback from the application and does not realize that the result is not the expected one.

I can't do it.
The user is unable to achieve the proposed goal, either because he does not know how to or because he does not have enough resources (time, will, patience, etc.) to do it.

Thanks, but no, thanks. / I can do otherwise.
The user ignores some preferential intended affordance present in the application's interface and finds another way around task execution to achieve his goal. If the user has successfully used the afforded strategy before and still decides to switch to a different path of action, then it is a case of Thanks, but no, thanks. If the user is not aware of the intended affordance or has not been able to use it effectively, then it is a case of I can do otherwise. Whereas Thanks, but no, thanks is an explicit declination of some affordance, I can do otherwise is a case of missing some intended affordance.

Help!
The user accesses the help system.

Tagging can be perceived as "putting words in the user's mouth", inasmuch as the evaluator selects the "words", one of the utterances from the set shown in Table 1, when he identifies a system-user communication breakdown. Thus, this step can be thought of as an attempt to recreate a verbal protocol [4].

2. Interpretation
In the interpretation step, the evaluator tabulates the data collected during tagging and maps the utterances (communicability tags) onto HCI ontologies of problems or design guidelines. The general classes of problems identified are navigation, meaning assignment, task accomplishment, missing of affordance, and declination of affordance (Table 2). HCI experts may use alternative taxonomies for a more precise diagnosis of user-system interaction. For instance, utterances may be mapped to such distinct taxonomies as Nielsen's discount evaluation guidelines [6], Shneiderman's eight golden rules [11], Norman's gulfs of execution and evaluation [7], or even Sellen and Nicol's taxonomy of contents for building online help [10].

Table 2 – Mapping communicability tags to typical HCI problems
[The table marks, for each tag (Where is?, What now?, What's this?, Object or action?, Why doesn't it?, What happened?, Oops!, I can't do it this way., Where am I?, I can't do it., Looks fine to me…, Thanks, but no, thanks., I can do otherwise.), which of the problem classes it maps to, among: navigation, meaning assignment, task accomplishment, and declination/missing of affordance.]
The first three general classes of problems are well known and dealt with by other evaluation techniques. Declination of affordance occurs when a user knows how to use certain interface elements to carry out a certain task, but takes another course of action he prefers. Missing of affordance is a more serious problem, because the user cannot follow a path predicted by the designer for a given task, possibly because the interface elements are not evident enough, or are inadequately mapped to the corresponding functionality, or are inconsistent with the application as a whole.

In other words, declination of affordance means that the user understood how the designer wanted him to carry out a task, but decided otherwise, whereas missing of affordance indicates that the user did not understand the designer's intentions at all for the corresponding task.

3. Semiotic profiling
In the semiotic profiling step, the evaluators proceed to interpret the tabulation in semiotic terms, in an attempt to retrieve the original designer's meta-communication, that is, the meaning of the overall designer-to-user message. This step should be performed by a semiotic engineering expert, due to the nature of the analysis. For instance, according to the tabulated results, we may find an application to be easy to use for novices, but cumbersome for expert users, indicated by an increase of declination of affordance tags along the evaluated period. This could be due to a design approach that is tutorial but does not provide easy access or shortcuts.

3. APPLICATIONS USED IN OUR CASE STUDY
We have arbitrarily selected two HTML tag-based editors: Arachnophilia [1] and SpiderPad [12]. Figures 1 and 2 show their main application windows. Notice that, when a user creates a new document, SpiderPad presents a blank document, whereas Arachnophilia presents the basic structure of an HTML document as a starting point.

Figure 1 – SpiderPad main application window.

Figure 2 – Arachnophilia main application window.
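For reference, the starting structure Arachnophilia generates is along the following lines (a reconstruction for illustration; the editor's actual default content may differ slightly):

    <html>
    <head>
    <title>New Document</title>
    </head>
    <body>
    <!-- page content goes here -->
    </body>
    </html>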

In a previous case study [3], we evaluated the communicability of both editors for a set of predefined tasks. Participants who had had no previous contact with either editor were split up into two groups: one that worked first with Arachnophilia, and another that worked first with SpiderPad. (The order in which participants interacted with the editors did not play a relevant role in the experiment.) Users were asked to perform the same tasks in both editors.

In that study all three steps of the method were performed for both editors. It is interesting to notice the different semiotic profiles obtained for each of them. In Arachnophilia the designer adopts a tutorial approach, and his discourse to the user is more conversational and verbose. On the other hand, the SpiderPad designer's discourse is more terse and descriptive, with a functional approach, conveying SpiderPad as a toolbox for HTML.

4. CASE STUDY
By observing how communicability tags (utterances) change as users learn to use an application, we wanted to assess how its interface design supports users along their learning curve, i.e., whether users are able to grasp the intended message, and in which ways it succeeds or fails. Although our focus is on communication, the study was also meant to reveal some interaction problems, typically disclosed by traditional evaluation methods. By analyzing communicative and interactive problems together, and in the context where they occur, we wanted to provide designers with some indication of how they could improve their interface design, and thus their communication to users.

The case study was done using SpiderPad [12] and Arachnophilia [1], and it included 4 participants and 3 different tests throughout a semester. The participants were students in an HCI grad course, who had never used either of the editors, and had different levels of knowledge of HTML. In the first test users were asked to do the same tasks in both editors. The participants were then "assigned" to one of the editors (2 participants to each editor). From one test to the next, participants were asked to perform specific tasks using their assigned editor. They were also asked to use it during the semester for any other HTML editing they needed.

1. Description of tests
Each test was composed of a set of tasks. Users only had access to one task at a time, in a given order, and had a maximum amount of time allowed for each task. In the first test, users performed tasks in both editors, whereas in the other two they performed tasks in their assigned editor. Table 3 shows the tasks users had to perform in each test. Notice that in each test at least one new task was included (an HTML sketch of the Test 1 tasks follows the table).

Table 3 – Tasks performed in tests

Test 1 (Apr. 5, 1999) – In both editors, users were asked to:
1. Create nested lists, with different bullet types.
2. Change the page's background color.
3. Create a 2x2 table.
4. Merge the cells of the first line of the table.

Test 2 (Apr. 20, 1999) – In their assigned editor, users were asked to:
1. Create a table containing lists, cells with different background colors, and merged cells.
2. Create a nested list, with different bullet types.
3. Create a page with links to the previous pages (containing the table and the list).

Test 3 (May 11, 1999) – In their assigned editor, users were asked to:
1. Create a title page, containing a logo and a text with a different background color.
2. Create a page containing nested lists with different bullet types, a table, and links to the same page and text.
3. Create a page containing links to their previous page.
4. Create a page using frames and previous pages.
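To make the tasks concrete, the sketch below shows the kind of HTML that Test 1 called for (attribute values are illustrative, in the HTML 3.2/4.0 syntax supported by editors of that period):

    <body bgcolor="#FFFFCC">          <!-- task 2: page background color -->
    <ul type="disc">                  <!-- task 1: nested lists, different bullets -->
      <li>First-level item
        <ul type="square">
          <li>Second-level item</li>
        </ul>
      </li>
    </ul>
    <table border="1">                <!-- task 3: a 2x2 table -->
      <tr><td colspan="2">First line, cells merged</td></tr>  <!-- task 4 -->
      <tr><td>Cell</td><td>Cell</td></tr>
    </table>
    </body>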
2. Results
One of the goals of the case study was to observe how taggings changed as users learned the software. The expected result was that the occurrence of all taggings (except for Thanks, but no, thanks) would decrease. After all, as users interact with the software they learn the functions that are available to them, their corresponding interface elements, and the overall organization and communication code presented by the designer. Thus, one can expect communication breakdowns to decrease. However, this is not what happened in all cases.

As for Thanks, but no, thanks, the expected result was that it would increase. Often, in a software application one function is afforded to users in different ways. Once users become experienced in the application, they develop their own personal ways of and preferences for interacting with it. These preferences don't necessarily include the primary(1) affordance provided by the designer, and can be perceived as a declination of the afforded strategy.

Figure 3 and Figure 4 show the results obtained in the three tests for Arachnophilia and SpiderPad, respectively. We next discuss the results obtained for each of the utterances in both editors.

(1) We consider the primary affordances to be those that are more salient in the interface and typically taught in the "How to" section of the documentation.

[Figure 3 – Tagging results for Arachnophilia: bar chart of the percentage of tagged utterances (0% to 40%) in Tests 1, 2, and 3, for Where is?, What's this?, I can't do it this way., What happened?, I can't do it., and I can do otherwise.]

[Figure 4 – Tagging results for SpiderPad: bar chart of the percentage of tagged utterances (0% to 40%) in Tests 1, 2, and 3, for Where is?, What now?, What's this?, Oops!, I can't do it this way., Why doesn't it?, What happened?, Looks fine to me…, I can't do it., Thanks, but no, thanks., I can do otherwise, and Help!]

5. DISCUSSION
Some of the taggings changed as expected in both editors, namely Where is?, What's this?, Looks fine to me…, and Thanks, but no, thanks. The other taggings changed more erratically over time, varied from one editor to the other, and did not always follow the expected pattern. We discuss below the results obtained for each tagging, as well as some factors that contributed to this outcome.

Where is?
In both editors the number of Where is? utterances decreased, as expected. Although in the long term we would expect it to tend to zero, this was not expected during these tests, since every test presented a new task to the user.

If we compare the occurrences of this tag in both editors, we note that initially SpiderPad presents a higher number of Where is? occurrences than Arachnophilia. However, in the last test this number is much smaller in SpiderPad than in Arachnophilia. This is an indication that interface elements related to more basic functions were easier to find in Arachnophilia than in SpiderPad. However, as tasks became more complex, functions became harder to find in Arachnophilia. This indication is in line with their semiotic profiles (see the previous section). Being more tutorial, Arachnophilia is meant for people who will use only basic HTML, whereas SpiderPad's toolbox approach is aimed at people who are more proficient.

What's this?
In both editors the number of occurrences of What's this? decreased, as expected. Notice that in SpiderPad the number of utterances of this tag is significantly smaller than in Arachnophilia. This indicates that the signs used in SpiderPad were more easily remembered by users than those presented in Arachnophilia. Besides, Arachnophilia supplied users with Command Bars that could be easily turned on and off, depending on their needs. When users turned on a Command Bar they didn't regularly use, they would usually check the meaning of one or more of its buttons, or check the difference between two of the buttons, thus increasing the number of What's this? utterances.

Looks fine to me…
The percentage of Looks fine to me… decreased in both editors. This was an expected result, for users learned to interpret and better understand the feedback presented by the editors as they became more experienced in them.

Thanks, but no, thanks.
As expected, the percentage of Thanks, but no, thanks increased in both editors. Both editors presented distinct paths to perform a task, and we considered dialogs and wizards to be the primary afforded interaction form. In the tests we noticed that users would often choose not to use this primary affordance, or would use it only partially. For instance, the dialogs for editing tables allowed users to define a great number of attributes for the table or its rows or cells. However, users would frequently use them just to create the template (code structure) of the table, and would then edit attributes and values directly in the code, or by modifying the tag afterwards. Another common strategy observed during the tests occurred when users created more than one instance of an object (e.g. a table or a list). In this case, they would create the first one using the dialog, and for the others they would just copy and paste the code directly and then edit it. In the first example users would be declining the opportunity to define (most) attributes during the creation of the table, whereas in the second they would decline the wizard or dialog altogether after creating the first object.
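As an illustration of this second strategy, the markup below (hypothetical, not taken from the recorded sessions) shows a first table generated through the dialog and a second one obtained by copying the first and editing its code by hand:

    <!-- first table: created via the table dialog -->
    <table border="1" width="50%">
      <tr><td>A</td><td>B</td></tr>
    </table>
    <!-- second table: code copied from the first and edited directly -->
    <table border="1" width="75%">
      <tr><td>C</td><td>D</td></tr>
    </table>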
What now?
Although we expected the What now? utterance to decrease as users learned how to interact, it did not, in either editor. In each editor it behaved differently, but in both it decreased in the 2nd test and then increased again. We relate this result to the fact that a What now? breakdown depends not only on the interface, but also on the task and on the user's knowledge of the domain. Frequently users did not know how to achieve a result in HTML, and thus did not know what to look for in the interface. Therefore, the What now? breakdowns could be qualified in two categories:

• Interactive breakdown: the user knows what he wants to do, but does not know how to achieve that next step through the interface.
• Task breakdown: the user didn't know what to do to achieve a result in the domain (in this case, in HTML).

In our experiment, the fact that the 3rd test presented more complex tasks to the user could explain why this utterance increased from the 2nd to the 3rd test in both editors.

Oops!
If we observe the occurrences of Oops! in Figures 3 and 4, we notice that not only did it not decrease, but it also behaved very differently in each editor. In order to understand this result, we analyzed the movies and taggings more thoroughly, trying to identify the possible causes. Our conclusion was that Oops! breakdowns happened for different reasons and could be further qualified. The possible categories we identified are:

• Expectation breakdown: the user expects a result but the outcome is not what he expected.
• Similarity breakdown: two "similar" options are displayed next to each other and the user ends up clicking on the wrong one.
• Exploratory breakdown: the user does not know what a specific option does, so he clicks on it in order to find out.
• Status breakdown: the user opens the correct dialog, and only then realizes the software is not in the necessary state to achieve the desired result.

It seems that the occurring subcategories of Oops! change along users' learning curves. We could expect Expectation breakdowns to happen when the user does not know the software well and is still building his usability model of the application. However, in order to have a Status breakdown the user must know enough to identify that the software's current state is not the one necessary to achieve the desired result.

I can't do it this way.
In both editors, in the 1st test the number of occurrences was very small, and in the 2nd they disappeared. Up to this point the results are in line with what we had expected. However, in the 3rd test in SpiderPad this tagging appeared once again, although the number of occurrences decreased in comparison to the 1st test. This probably resulted from the 3rd test being relatively more complex than the 2nd.

In Arachnophilia this tagging behaved differently in the 3rd test, having a significant increase when compared to the 1st test. By analyzing the movies we noticed that this was caused by an interaction strategy adopted by the users. Arachnophilia does not provide users with any functions to change HTML tags' attributes (this can only be done via code). Furthermore, most of the HTML attributes are not available in the help system. Thus, Arachnophilia only presents users with the more "basic" attributes, and only in the creation dialog. Consequently, when users knew they had to change an attribute, but did not know what the attribute was or its possible values, they would open the tag's creation dialog (observed for lists and tables) to try and find out. Since only a limited number of the possible attributes was presented, they often could not get the desired information, characterizing an I can't do it this way breakdown.
This result in Arachnophilia indicates users' need to have access to HTML tags' attributes and their possible values, which was not fulfilled by the designer's solution.

I can do otherwise.
In both editors this utterance has the highest number of occurrences in the 2nd test. We relate this result to the fact that when users couldn't remember where a desired interface element was, or couldn't find it immediately, they would quickly give up searching and would type in the code. This tendency did not continue in Test 3, since it was a little more complex than Test 2: users frequently didn't know the exact syntax to type in, so they had to find the intended interface element.

This result does not necessarily indicate a problem in the interface, but rather a strategy of the users. If the users knew the complete syntax for an HTML object and couldn't easily find the corresponding interface element, they would perceive the cost of looking for it as higher than its benefit and would choose to type it in.

Help!
In SpiderPad this utterance occurred in the 1st test, and then disappeared in the other two. That is, it behaved as expected. In Arachnophilia, although it behaved as expected in Tests 1 and 2, it increased in Test 3. This result was due to the way the task – to create an HTML document containing frames – is achieved in Arachnophilia.

In Arachnophilia a new page always contains the basic structure (body, head, title and html tags) of an HTML document. However, part of this structure must be removed when the user creates a frame. Although an example was shown in the Help, this point was never emphasized, and it was overlooked by users.

This result points out an inconsistent message sent from designer to users. When the designer creates a new HTML document containing the basic structure, he conveys to the user the idea that these tags should always be part of an HTML document. Nonetheless, this is not the case when creating frames.
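The conflict is visible in the markup itself: in HTML, a document with frames replaces the body element with a frameset, so the body tags inserted by default must be deleted. A minimal sketch (frame names and source files are illustrative):

    <html>
    <head>
    <title>Page with frames</title>
    </head>
    <!-- no <body> here: it must be removed, or browsers ignore the frameset -->
    <frameset cols="30%,70%">
      <frame name="menu" src="menu.html">
      <frame name="main" src="main.html">
    </frameset>
    </html>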
What happened?
The What happened? tag did not behave as expected. This result indicates a problem with the feedback provided by both editors. Frequently, the user would try to achieve a result using a specific function, but "nothing" would happen. There were two main causes that led to this situation:

• the feedback was too subtle and the user was unable to perceive it;
• the application was not in the necessary state to perform an action, but the action was still available to the user; once activated it would not be executed, but the application wouldn't provide the user with an error message.

In both cases there was a communication breakdown between user and system, and both indicate the need for a review of the feedback provided to users. The higher number of occurrences in Test 3 is probably due to it being more complex than Test 2.

Why doesn't it?
In both editors this utterance behaved very similarly to the What happened? tag. This was not a coincidence, but resulted from users not having perceived or received feedback. Users would often try to activate the same function one more time, or even a few more times. This also corroborates the indication that the feedback provided to users should be reviewed.

In this test, we did not notice any cases of Why doesn't it? in which users insisted on a path because they thought it had to be the right path, even though they realized they had not achieved the desired result. Rather, we noticed that in some cases users would not find the function they were looking for, and decided to go back to what they seemed to have considered its most probable location, to check whether they hadn't missed it by any chance.

6. FINAL REMARKS
Although the results presented in this paper were not statistically significant, they provide us with relevant indications of how designers and users can benefit from observing how communicative breakdowns change along users' learning curves. If these changes do not behave as expected, they point out to the designer a problem in his meta-message to the user, and should be looked into, as was indicated by the What happened? and Why doesn't it? tags. Also, the tags that behave according to the expected pattern point to a consistent communicative act from designer to user. This happened with the utterances Where is?, What's this?, and Looks fine to me…. Even then, the rate of change of the tags may provide designers with valuable indications. For instance, if the Where is? utterance took a long period of time to decrease, or did not decrease significantly, it would indicate to the designer that, although users were able to understand and learn how the interface was organized, it was not an easy task for them.

The communicability method not only identifies problems, but also gives the designer hints toward a solution. The utterance category narrows down the possible causes of a problem, and the pieces of tagged movies frequently provide the designer with evidence of the users' intuitions about a solution. Furthermore, this method provides the designer with some information that other evaluation methods do not consider, such as the declination or missing of affordances, and also some of the strategies developed or chosen by the users.

By identifying interface elements whose meaning users were unable to grasp (missing of affordance), the designer may choose to represent those elements differently, that is, to "rephrase" his message to the user. As for declination of affordance, the ability to identify this breakdown allows the designer to evaluate the cost/benefit ratio of offering such an element (especially if identified during formative evaluation), or to decide in favor of other interactive elements, preferred by users. This study has also provided valuable information to our research on the communicability evaluation method and has established the need for further research. It has pointed to the need to look into how and when utterances should be qualified. The qualification of utterances would further refine the evaluation and would provide designers with more specific indications. For instance, if Similarity Oops! breakdowns often occurred when interacting with a specific menu, it could indicate that two or more options of that menu were not distinctive enough and should probably be changed. If instead regular Oops! utterances were identified, further investigation of their causes would be necessary to precisely pinpoint the interactive problem.

The co-occurrence between the What happened? and Why doesn't it? tags indicates that we should investigate not only the tags, but also sequences of tags. That is, we should examine which sequences have an associated meaning, and whether this meaning adds to the meaning of the individual tags.

Furthermore, as we have pointed out, the ability to identify declination of affordance would be particularly interesting if done during formative evaluation. Thus, it would be interesting to investigate how effective the communicability evaluation method would be when applied to prototypes, storyboards and the like during the design process. Clearly, some of the utterances may emerge in tests done without a running prototype.

This case study also justifies carrying out a more extensive study, which would include more participants and a longer period of time. Such a study would provide us with more statistically significant results and, more than providing us with indications, it would allow us to make claims as to the meaning of the changing patterns of communicative breakdowns along users' learning curves.

Moreover, we could ask users to tag movies of their own interaction. In this case, they would provide evaluators with their own report of the conversational breakdowns experienced, as opposed to the evaluator's interpretation. Users' tagging could be done either based on the interaction movie or during the interaction itself. Whereas the former could be considered a case of a controlled(2) post-event protocol, the latter could be considered a case of a controlled think-aloud protocol [9]. The advantage of doing the tagging based on the interaction movie is that users' attention would not be divided between performing the task and doing the tagging.

(2) Controlled in the sense that users would have to express themselves using the set of utterances of the communicability evaluation method.

It would also be interesting to have non-HCI experts tag movies. The comparison of their taggings with those of the users (for the same interaction movie) could give us more insights about communicability and user-developer communication [8].

7. ACKNOWLEDGMENTS
The authors would like to thank the participants of the tests for their time and interest. They would also like to thank CNPq, FAPERJ, TeCGraf and LES for their support.

8. REFERENCES
[1] Arachnophilia 3.9. Paul Lutus. 1996-1998.
[2] de Souza, C.S. (1993). "The Semiotic Engineering of User Interface Languages". International Journal of Man-Machine Studies, 39, 753-773.
[3] de Souza, C.S., Prates, R.O., Barbosa, S.D.J. "A Method for Evaluating Software Communicability". Proceedings of the Second Brazilian Workshop on Human-Computer Interaction (IHC '99), in CD-ROM, Campinas, Brazil, October 1999.
[4] Ericsson, K.A. and Simon, H.A. (1985). Protocol Analysis: Verbal Reports as Data. MIT Press, Cambridge, MA.
[5] Hartson, H.R. (1998). "Human-computer interaction: Interdisciplinary roots and trends." The Journal of Systems and Software, 43, 103-118.
[6] Nielsen, J. Usability Engineering. Academic Press, Boston, MA, 1994.
[7] Norman, D. Cognitive Engineering. In Norman, D.A. and Draper, S.W. (eds.), User-Centered System Design. Lawrence Erlbaum Associates, Hillsdale, NJ, 1986, pp. 411-432.
[8] Prates, R.O., de Souza, C.S., Barbosa, S.D.J. (2000). "A Method for Evaluating the Communicability of User Interfaces". interactions, Jan.-Feb. 2000.
[9] Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S. and Carey, T. (1994). Human-Computer Interaction. Addison-Wesley.
[10] Sellen, A. and Nicol, A. Building User-Centered Online Help. In B. Laurel (ed.), The Art of Human-Computer Interface Design. Addison-Wesley, Reading, MA, 1990.
[11] Shneiderman, B. Designing the User Interface. Addison-Wesley, Reading, MA, 1998.
[12] SpiderPad 1.5.3. Six-Legged Software. 1996-1997.

