
HUMAN COMPUTER INTERACTION

1. What is Human Computer Interaction? (Introduction)

HCI (human-computer interaction) is the study of how people interact with computers
and to what extent computers are or are not developed for successful interaction with human
beings.

As its name implies, HCI consists of three parts: the user, the computer itself, and the ways they
work together.

User
By "user", we may mean an individual user, a group of users working together. An appreciation
of the way people's sensory systems (sight, hearing, touch) relay information is vital. Also,
different users form different conceptions or mental models about their interactions and have
different ways of learning and keeping knowledge and. In addition, cultural and national
differences play a part.

Computer

When we talk about the computer, we're referring to any technology ranging from desktop
computers to large-scale computer systems. For example, if we were discussing the design of a
Website, then the Website itself would be referred to as "the computer". Devices such as
mobile phones or VCRs can also be considered to be “computers”.

Interaction

There are obvious differences between humans and machines. In spite of these, HCI attempts
to ensure that they both get on with each other and interact successfully. In order to achieve a
usable system, you need to apply what you know about humans and computers, and consult
with likely users throughout the design process. In real systems, the schedule and the budget
are important, and it is vital to find a balance between what would be ideal for the users and
what is feasible in reality.

The Goals of HCI

The goals of HCI are to produce usable and safe systems, as well as functional systems. In order
to produce computer systems with good usability, developers must attempt to:

 understand the factors that determine how people use technology


 develop tools and techniques to enable building suitable systems
 achieve efficient, effective, and safe interaction
 put people first

2. What is Stress?

Stress is your body's way of responding to any kind of demand. It can be caused by both good
and bad experiences. When people feel stressed by something going on around them, their
bodies react by releasing chemicals into the blood. These chemicals give people more energy
and strength, which can be a good thing if their stress is caused by physical danger. But this can
also be a bad thing, if their stress is in response to something emotional and there is no outlet
for this extra energy and strength. This class will discuss different causes of stress, how stress
affects you, the difference between 'good' or 'positive' stress and 'bad' or 'negative' stress, and
some common facts about how stress affects people today.

3. What is Stress Mitigation?


Stress Mitigation
To mitigate is to reduce in force or intensity or make less severe. Loss mitigation reduces a
lender's loss, making the impact less severe; stress mitigation reduces the effect of stressors,
making the level of stress less severe.

Stress and the foreclosure epidemic are both major problems in the U.S. and will get worse
before they get better. (What is more stressful than losing your house?)

That's the bad news.


The really bad news, however, is that we don't take stress seriously as a threat to our health.

If a stressor causes stress, then stress mitigation either eliminates or reduces the effect of the
stressor and thus lessens the risk to our health.

Some of our Associates have used quantum biofeedback to mitigate the effects of stress, and it
works so well that some of them got trained and became biofeedback practitioners, with several
others joining them.

A foreclosure is a stressor for sure, but it's not the end of the world.

A foreclosure is a problem with a number of solutions; solve the problem and you remove the
stressor that is the cause of the problem. At Anchor we have many ways to solve the
foreclosure problem.

Since the underlying cause is nothing more than a money problem, an effective stress
mitigation will also deal with the underlying cause and not just treat or eliminate symptoms.

If money (or, more accurately, the lack of money or income) is part of the problem, an effective
permanent mitigation solution will both eliminate or mitigate the effects of stress (quantum
biofeedback) and eliminate or mitigate the underlying cause (the stressor); both are addressed
in our stress mitigation program.

Before we look at the easy problem to solve (money), let's take a look at the technological
solution that has some of our Associates also becoming practitioners: quantum biofeedback
from the L.I.F.E. System.

The L.I.F.E. System is a professional biofeedback technology used by thousands of trained and
qualified practitioners. Acupuncturists, Chiropractors, Dentists, Homeopaths, Medical Doctors,
Nutritionists and Veterinarians are using the L.I.F.E. System to assist in achieving Stress
Management, Muscle Relaxation, and Preventative Healthcare, while supporting the well-being
of their clients.

4. What is the difference between perception and the perceptual process?

Perception is our sensory experience of the world around us and involves both recognizing
environmental stimuli and actions in response to these stimuli. Through the perceptual process,
we gain information about properties and elements of the environment that are critical to our
survival. Perception not only creates our experience of the world around us; it allows us to act
within our environment.

Perception includes the five senses: touch, sight, taste, smell and hearing. It also includes what is
known as proprioception, a set of senses involving the ability to detect changes in body
positions and movements. It also involves the cognitive processes required to process
information, such as recognizing the face of a friend or detecting a familiar scent.

The perceptual process is a sequence of steps that begins with the environment and leads to
our perception of a stimulus and an action in response to the stimulus. This process is
continual, but you do not spend a great deal of time thinking about the actual process that
occurs when you perceive the many stimuli that surround you at any given moment.

The process of transforming the light that falls on your retinas into an actual visual image
happens unconsciously and automatically. The subtle changes in pressure against your skin that
allow you to feel objects occur without a single thought.

5. What is the difference between stimulus and stimulation?


As nouns the difference between stimulus and stimulation is that stimulus is anything that may
have an impact or influence on a system while stimulation is a pushing or goading toward
action.

A stimulus 'stimulates' you and a response is how you respond. If you are stimulated by cold
water your response may be to shiver. If you place your hand on a hot stove the heat will
stimulate your skin (ouch!) and your response will be to withdraw your hand (even without
thinking).

6. What is cognition?
Cognition can simply be defined as the mental processes which assist us to remember, think,
know, judge, solve problems, etc. It basically assists an individual to understand the world
around him or her and to gain knowledge. All human actions are a result of cognitive

procedures. These cognitive abilities can range from being very simple to extremely complex in
nature. Cognition can include both conscious and also unconscious processes. Attention,
memory, visual and spatial processing, motor, perception are some of the mental processes.
This highlights that perception can also be considered as one such cognitive ability. In many
disciplines, cognition is an area of interest for academics as well as scientists. This is mainly
because the capacities and functions of cognition are rather vast and apply to many fields.

Cognition also covers the psychological processes involved in the acquisition and understanding
of knowledge, the formation of beliefs and attitudes, and decision making and problem solving.
These are distinct from the emotional and volitional processes involved in wanting and
intending.

7. Explain Cognitive Architecture?


Cognitive Architecture

The goal of this effort is to develop a sufficiently efficient, functionally elegant, generically
cognitive, grand unified cognitive architecture in support of virtual humans (and hopefully
intelligent agents/robots – and even a new form of unified theory of human cognition – as
well).

A cognitive architecture is a hypothesis about the fixed structures that provide a mind, whether
in natural or artificial systems, and how they work together – in conjunction with knowledge
and skills embodied within the architecture – to yield intelligent behavior in a diversity of
complex environments.

A grand unified architecture integrates across (nominally symbolic) higher-level thought
processes plus any other (nominally subsymbolic) aspects critical for successful behavior in
human-like environments, such as perception, motor control, and emotions. A generically
cognitive architecture spans both the creation of artificial intelligence and the modeling of
natural intelligence, at a suitable level of abstraction. A functionally elegant architecture yields
a broad range of capabilities from the interactions among a small general set of mechanisms –
essentially what can be thought of as a set of cognitive Newton’s laws. A sufficiently efficient
architecture executes quickly enough for its anticipated applications; for example, taking no
more than 50 msec per cognitive cycle for real-time virtual humans.

Sigma is a cognitive architecture that is itself grounded in a well-defined graphical architecture.

Our focus is on the development of the Sigma (∑) architecture, which explores the graphical
architecture hypothesis that progress at this point depends on blending what has been learned
from over three decades' worth of independent development of cognitive architectures
and graphical models, a broadly applicable state-of-the-art formalism for constructing
intelligent mechanisms. The result is
a hybrid (discrete+continuous) mixed (symbolic+probabilistic) approach that has yielded initial
results across memory and learning, problem solving and decision making, mental imagery and
perception, speech and natural language, and emotion and attention.

8. What are Human Errors? Give an example.


Human Error is commonly defined as “a failure of a planned action to achieve a desired
outcome”. Error-inducing factors exist at individual, job, and organisational levels, and when
poorly managed can increase the likelihood of an error occurring in the workplace. When
errors occur in hazardous environments, there is a greater potential for things to go wrong. By
understanding human error, responsible parties can plan for likely error scenarios,
and implement barriers to prevent or mitigate the occurrence of potential errors.

Errors result from a variety of influences, but the underlying mental processes that lead to error
are consistent, allowing for the development of a human error typology. An understanding of
the different error types is critical for the development of effective error prevention and
mitigation tools and strategies. A variety of these tools and strategies must be implemented to
target the full range of error types if they are to be effective.

Errors can occur in both the planning and execution stages of a task. Plans can be adequate or
inadequate, and actions (behaviour) can be intentional or unintentional. If a plan is adequate,
and the intentional action follows that plan, then the desired outcome will be achieved. If a
plan is adequate, but an unintentional action does not follow the plan, then the desired
outcome will not be achieved. Similarly, if a plan is inadequate, and an intentional
action follows the plan, the desired outcome will again not be achieved. These error points are
demonstrated in the figure below and explained in the example that follows.

Human error – failures in planning and execution.
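The plan/action logic described above can be summarized in a short sketch (illustrative only; the conventional labels "slip" and "mistake" are added here for context and do not appear in the excerpt above):

    # Illustrative sketch (Python) of the plan/execution logic described above.
    # plan_adequate and action_follows_plan are the two questions from the text.

    def classify_outcome(plan_adequate: bool, action_follows_plan: bool) -> str:
        """Return the expected result of a task, following the text above."""
        if plan_adequate and action_follows_plan:
            return "desired outcome achieved"
        if plan_adequate and not action_follows_plan:
            return "execution failure (a slip or lapse): outcome not achieved"
        # An inadequate plan fails even when the action follows it faithfully.
        return "planning failure (a mistake): outcome not achieved"

    print(classify_outcome(True, True))    # desired outcome achieved
    print(classify_outcome(True, False))   # execution failure
    print(classify_outcome(False, True))   # planning failure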

Example:
Failures of Plans and Actions: Sam has finished his last task for the day and his desired outcome
is to get to the accommodation module. He knows that his usual path to the accommodation
module has been barricaded off, so he plans a different route to get there. From a human error
perspective, there are three potential alternative scenarios that he may experience when
executing his plan:

Each of the failure types can be further broken down into categories and subcategories. These
will now be explained in greater detail.

9. Discuss the Stress and Cognitive Workload?
The integration of very high-tech equipment into standard operations is a radical change in the
challenges faced by the infantry soldier. In addition, the battlefield, in all its forms, remains a
place of extreme stress. Coupled with this stress comes the new burden of cognitive workload
associated with the operation and management of new technological systems. The Land
Warrior System, as currently conceived, is but one version of a potential family of advanced
systems, each of which may generate its own combination of stresses. In this chapter we
examine these stress-inducing factors, to identify sources of potential problems and to
recommend avenues to solve such problems, either through existing capabilities or by
proposing additional research.

A UNIFYING FRAMEWORK

One of the reasons it is difficult to predict accurately the performance of soldiers under stress is
that the scientific foundation of this area is inadequate. For many decades, we have relied on
the untestable, inverted-U theory of physiological arousal, with both high and low levels of
arousal inhibiting attention (see Yerkes and Dodson, 1908). This theory has been used to
account for a wide pattern of performance, after it has happened; unfortunately, it has little if
anything to say about what will happen in the future under specific circumstances (Hancock,
1987). Consequently, although the theory is very useful to the scientist trying to explain a
confusing pattern of results, it is of little use to the designer of equipment or the leader of a
platoon trying to predict how individuals will react to specific conditions. More recently,
researchers have looked at effects on either a task-by-task basis (e.g., memory versus decision
making) or on a stress-by-stress basis (e.g., heat versus noise) (Hockey and Hamilton, 1983).
However, these approaches provide no unified basis for understanding stress and subsequent
effects on workload.

FIGURE 6-1 A model for the prediction of stress effects.

A unified approach to the prediction of stress effects has been presented recently by Hancock
and Warm (1989). Briefly, the model establishes ranges of adaptability. Violation of these ranges
of adaptability causes progressive failure in performance. Because the model uses both a
physiological and a behavioral index, it can deal with the question of combined physical and
mental demands and is recommended for use in association with the testing of advanced
technologies. In this work, Hancock and Warm (1989) have combined existing physiological and
psychological theories of stress effects into a unified view.

In the center of the figure is a region in which stress exerts little impact in performance
variation. In the center of this comfort zone, individuals can be expected to produce their best
performance. As stress increases, either through a systematic increase or a systematic decrease
of appropriate stimulation, the individual is required to use greater resources to combat the
influence of such demanding conditions. This reduces the resources available for explicit task
performance, and so efficiency is reduced accordingly. It must be emphasized that the demands
of the primary task itself may well act in this depleting fashion.

There are some strategies by which the individual can sustain satisfactory performance, even as
stress increases. One common tactic is to select only those cues relevant to task completion
and to reject or filter out cues that are irrelevant. This allows
some degree of protection from stress effects. However, if the stress should continue to grow
or should be perpetuated over a longer time, then some degree of performance failure is
observed.

Individuals can tolerate a certain level of stress with no disturbance to their capability. This is
the flat portion of the extended-U of the model. However, if the stress persists for a sufficient
period or the severity is increased, the individual begins to lose capability. Since the functions
stay the same, the way in which task performance falls off mirrors the way in which
physiological capacity is impaired. The Army is accustomed to providing physiological support
for its personnel, in terms of energy, resources, and specific equipment to avoid threat. The
Army is accustomed to providing supplementary support when such systems fail or induce
stress. The proposed model argues that new equipment ensembles and associated mission
stresses can be ameliorated in the same fashion, that is, by providing training and
supplementary support.

The model does not indicate where any individual will fail in relation to a specific form or level
of stress. Furthermore, the model does not stipulate what stress alleviation or stress generation
is inherent in the employment of the Land Warrior System. There is much to be learned about
individual reactions under such circumstances, and this consequently represents one area of
needed research. One critical aspect of this research must be the identification of incipient
failure and the reduction of stress under the specific conditions associated with using the Land
Warrior equipment. As is evident from the model, failure in performance, such as errors, will be
evident before failure in physiological functioning. However, since the soldier is exposed to
stress in order to accomplish a task, it is essential for his squad or platoon leader to be aware of
these incipient failures. This implies the need for on-line feedback of individual soldier
performance to be provided throughout the information network; it would then become
possible for the leader to know when specific members of their command are becoming
overloaded and subject to performance error and potential failure. Such information is critical
to ensure mission success, and its transmission could be a central feature of the proposed
helmet-mounted display.

Much more research effort needs to be directed to the critical question of performance
prediction under stress. It is a major requirement for the success of the Land Warrior System on
the battlefield of the 21st century. The Land Warrior System will add new tasks for the soldier to
perform. These tasks are expected to be more information-intensive: they will require more
reading and more cognitive processing than is currently necessary. The following section
describes the existing sources of battlefield stress that provide the psychological and physical
context for the introduction of the proposed system. These context factors must be considered

in examining the implications of the helmet-mounted display for increased workload and, in
turn, for soldier performance.

10. Explain Model Human Processor (MHP)?


Model Human Processor

The MHP offers an integrated description of psychological knowledge about human
performance (relevant to HCI).

It is an oversimplification, but the aim was to offer a model around which discussion can centre.

MHP:-

(1) a set of memories and processors together with (2) a set of principles, the "principles of
operation"

Three interacting subsystems:

(1) perceptual system

(2) the motor system

(3) the cognitive system

each with their own memories and processors

the perceptual processor (consists of sensors and associated buffer memories, the most
important being a Visual Image Store and an Auditory Image Store to hold the output of the
sensory system while it is being symbolically coded)

the cognitive processor (receives symbolically coded information from the sensory image store
into its Working Memory and uses information previously stored in Long Term Memory to
make decisions about how to respond)

the motor processor (carries out the specified response)

For some tasks the human being responds like a serial processor (e.g. pressing a key in response
to a light); for other tasks (like typing, reading, simultaneous translation) integrated, parallel
operation of the three subsystems is possible, in the manner of three pipe-lined processors;

information flows continuously from input to output with a characteristically short time lag
showing that all three processors are working simultaneously.

Memories are described by a few parameters: storage capacity in items (mu), the decay time
of an item (alpha) and the main code type (physical, acoustic, visual, semantic) (gamma). The
most important parameter of a processor is the cycle time (pi).

The analyst generates time predictions by analysing a task into the constituent operations
executed by the three subsystems and from there calculates how long the task will take and
how much processing is involved. This can be done within three bands of performance:
fastman, slowman, and middleman, thus allowing predictions for at least the central and
extreme points along the behavioural continuum.
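As a rough illustration of this kind of analysis (not taken from the source), the sketch below totals one cycle of each processor for the simple key-press-in-response-to-a-light task; the millisecond values are the commonly cited Model Human Processor estimates and should be treated as assumptions:

    # Predicting simple reaction time from Model Human Processor cycle times.
    # The millisecond values are the commonly cited estimates (assumed here).

    CYCLE_TIMES_MS = {
        # processor: (fastman, middleman, slowman)
        "perceptual": (50, 100, 200),
        "cognitive": (25, 70, 170),
        "motor": (30, 70, 100),
    }

    def reaction_time(band: int) -> int:
        """Pressing a key in response to a light: one cycle of each processor."""
        return sum(times[band] for times in CYCLE_TIMES_MS.values())

    for name, band in (("fastman", 0), ("middleman", 1), ("slowman", 2)):
        print(f"{name}: about {reaction_time(band)} msec")

The middleman prediction comes out around 240 msec, bracketed by the fastman and slowman bands, giving the kind of bounded range described above.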

Using the Model Human Processor to make predictions

parameters for each operation offer averaged time to carry out some action

In analysing a task and making time predictions, consider

 storage capacity

 decay time of item

 main code type (i.e. physical, acoustic, semantic)

 processor cycle time (on the order of 1/10th second)

 dealing with various levels of expertise

 fast, middle and slow person

 i.e. worst, best and nominal performance, giving a bounded range

MHP: Example 1: perception, visual


Compute the frame rate at which an animated image on a video display must be refreshed to
give the illusion of movement

Consider: cycle time of the Perceptual Processor: closely related images which appear nearer in
time than the processing time will be fused into a single image. Therefore

frame rate > 1/cycle time of processor = 1/(100 msec/frame)

= 10 frames/second

The frame rate should be faster than this. An upper bound on how fast the rate needs to
be can be found by redoing the calculation for fastman:

max frame rate for fusion = 1/(50 msec/frame)

= 20 frames/sec.
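The same calculation can be written directly from the perceptual processor cycle time (a minimal sketch; the 100 msec and 50 msec values are the middleman and fastman figures used above):

    # Frame rate needed for apparent motion: images that arrive closer together
    # in time than the perceptual processor cycle time fuse into one percept.

    def min_frame_rate(cycle_time_ms: float) -> float:
        """Frames per second required for fusion at a given cycle time."""
        return 1000.0 / cycle_time_ms

    print(min_frame_rate(100))  # middleman: 10 frames/sec
    print(min_frame_rate(50))   # fastman:   20 frames/sec (upper bound)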

11.What is Human memory and its types?

The ability to recall is an essential function of our minds. Its contribution to our evolution is
significant. Other than that, it plays a vital role in the way we learn and adapt to the
environment.

We would say that the ability to remember completes the learning process and is interrelated
with it. If either were missing, then there would be no evolution, no new knowledge, and
no civilization.

The memory process is a collection of three sub-processes.

o Encoding

o Storing

o Retrieval

I think it’s obvious what each one does. Encoding is the initial process when the mind perceives
and registers the information. Storing is to keep the encoded information in good shape so that
it can be remembered easily over time. And lastly, the Retrieval process recovers the stored
information on demand.

Memory Types

There are three basic memory types.

o Sensory Memory

o Short-term or Working Memory

o Long-term Memory

Let’s see them in detail.

Sensory Memory

The term Sensory refers to the initial process of storing information that is perceived through
our senses. It lasts for a very brief period and it is regularly replaced by new data, as our senses
work continuously.

Sensory memory is divided into five memory types, one for each sense. The following example
will clarify how the sensory type of memory works.

Let’s suppose that you browse a magazine quickly but you don’t focus on its pages. If you try to
remember something you saw, most probably you are unable to. You realize that you
cannot remember precisely a whole page, a title or a picture.

As your eyes scan the magazine’s pages, your mind is registering fast and briefly the incoming
information. While you continue doing this, your mind receives new information and replaces
the old one. This type of sensory memory is called iconic and lasts for about one second. Then,
it gets replaced by new sensory data.

This memory type relates not only to the vision but all the human senses. Another kind of
sensory memory which also got a lot of attention is the one called echoic. Obviously, the
sensory organs here are the ears. Echoic type operates just like the iconic, but it lasts a little
longer, about 4 seconds.

As you look at the magazine, something may get your attention. When you stay on the
page longer to read it, the Short-term or Working memory is activated.

Short-term or Working Memory

Short-term memory is the expression scientists used when they first identified this memory type.
Their intention was to describe the ability to store information for a short time.

The new term, working memory, is used now. It is an expanded definition of the short-term
memory, including also the manipulation of the temporary stored information.

We can hold information in working memory by repetition. A classic example is when you
need to remember a phone number until you find a piece of paper to write it down; you
repeat it to yourself. If something distracts you, then you can easily forget the number. A
general characteristic of working memory is that the information fades after a short period.

Another aspect of working memory is that it has a limited capacity. Various tests can easily
demonstrate this. The most common test is to present a long list of items while subjects try to
remember as many items from the list as possible.

The psychologist George Miller concluded that the capacity of the working memory type is
limited to about seven items at a time. Nevertheless, more recent research has tried to separate
the processing capability from the recall ability and found that the capacity is lower than
seven.

Working memory is essential when we are thinking. For example, when you try to solve a
simple mathematical problem in your mind, you need to do some calculations before
reaching the solution. These calculations are stored temporarily in the working memory. It
becomes apparent now why the short-term definition is no longer used for this type of
memory.

Finally, this memory type is closely correlated with intelligence. Some studies have found that the
higher the capacity of the working memory, the more intelligent the individual is.

Long-term Memory

The last of the memory types is a little more complex than the previous ones. Long-term memory
refers to the ability to remember things for a very long time or for an entire lifespan.

Memories such as the movie you watched yesterday, how to play basketball, academic or
encyclopedic knowledge and the date that you got your degree are all stored in long-term
memory.

In contrast with the two previous memory types, this type has no capacity limitations. You can
gain unlimited new knowledge and skills throughout your life. When you train your brain by
acquiring new knowledge and abilities, it is said that you also slow down the brain’s
aging process.

There are some theories about how the mind stores information into the long-term memory.
Initially, the scientists supported the view that information was stored in sequence, passing first
from the working to the long-term memory.

According to another theory, the memory types are independent systems and can function
in parallel. Knowledge seems to be stored in both of them at the same time.

Although the first approaches to long-term memory assumed a single brain
system, scientists now believe that three independent systems constitute long-term
memory. They distinguish the following:

o Episodic

o Semantic

o Procedural

The first system is dependent on time and place. Recollections that are retrieved from the
episodic structure are past events. These are characterized by when and where they
happened. A beautiful trip to the Caribbean when you got married is a good example. These are
episodes of your life, and that’s why this memory system is called episodic.

The semantic system refers to stored information that is related to encyclopedic knowledge
and facts. An example of this kind of information is the fact that the earth is moving around the
sun. It’s obvious that the semantic system is independent of the time and place where you
absorbed the information.

The memory systems that we have seen so far are both declarative, which means that when you
learn or recall something you do it verbally. The third and last system, procedural memory, is
non-declarative.

12. What do you mean by Cognitive Psychology ?


Cognitive psychology studies and analyses the mental processes. This includes how we think,
remember, learn and perceive.

Cognitive psychology is also a system of theories which holds that human mental and physical
activity can be explained in terms of information processing, by analogy with a computer, and it
attempts to investigate how the mind works by focusing on how organisms perceive, remember,
think and learn. As an emerging discipline, it integrates ideas from fields such as artificial
intelligence, epistemology, linguistics, mathematics, neuroscience, and philosophy. It maintains
that behavior can only be understood by studying the underlying mental processes, and that the
interaction between an organism and its environment influences both its behavior and its
knowledge of the environment, which in turn affects its future responses to the environment.
How animals behave may not apply directly to the study of human behavior, but how machines
learn may. The development of learning strategies and the structuring of the learning
environment bring understanding; knowledge is not something that is acquired but something
that is created by a learner based on what he or she already knows. An instructor should
therefore focus on encouraging exploration and knowledge formation, the development of
judgment, and the acquisition and organization of information by the learner.

EXAMPLES

 Attention - Sometimes our cognitive processing systems get overloaded and we have to
select information to process further. This deals with how and why performance
improves with attention.

 Formation of concepts - This aspect studies humans’ ability to organize experiences into
categories. Response to a stimulus is determined by the relevant category and the
knowledge associated with that particular category.

 Judgment and decision - This is the study of decision making. Any behavior, implicit or
explicit, requires judgment and then a decision or choice.

 Language processing - This is the study of how language is acquired, comprehended and
produced. It also focuses on the psychology of reading. This includes processing words,
sentences, concepts, inferences and semantic assumptions.

 Learning - This is the study of new cognitive or conceptual information that is taken in
and how that process occurs. It includes implicit learning that takes into account
previous experience on performance.

 Memory - Studying human memory is a large part of cognitive psychology. It covers the
process of acquiring, storing and retrieving memory, including facts, skills and capacity.

 Perception - This includes the senses and the processing of what we sense. This also
includes what we sense and how it interacts with what we already know.

 Problem solving - Solving problems is a way that humans achieve goals.

 Achieving goals - Moving to a goal can include different kinds of reasoning, perception,
memory, attention and other brain functions.

 Reasoning - This is the process of formulating logical arguments. It involves making
deductions and inferences and why some people value certain deductions over others.
This can be affected by educated intuitive guesses, fallacies or stereotypes.

13. What is the difference between cognition and perception?


Cognition can simply be defined as the mental processes which assist us to remember, think,
know, judge, solve problems, etc. It basically assists an individual to understand the world
around him or her and to gain knowledge. All human actions are a result of cognitive
procedures. These cognitive abilities can range from being very simple to extremely complex in
nature. Cognition can include both conscious and also unconscious processes. Attention,
memory, visual and spatial processing, motor, perception are some of the mental processes.
This highlights that perception can also be considered as one such cognitive ability. In many
disciplines, cognition is an area of interest for academics as well as scientists. This is mainly
because the capacities and functions of cognition are rather vast and apply to many fields.

Perception is the process by which we interpret the things around us through sensory stimuli.
This can be through sight, sound, taste, smell, and touch. When we receive sensory
information, we not only identify it but also respond to the environment accordingly. In daily
life, we rely greatly on this sensory information for even the most minute tasks. Let us understand
this through an example. Before crossing the road at a pedestrian crossing, we usually tend
to look both ways. In such an instance, it is the sensory information
gained through sight and sound that gives the signal for us to cross the road. This can be
considered an instance where people respond to the environment according to the received
information. This highlights that perception can be considered as an essential cognitive skill,
which allows people to function effectively. This skill or ability does not require much effort
from the side of the individual as it is one of the simplest processes of cognition.

Before crossing the road, we gather information through sensory stimuli.

Difference between Cognition and Perception

• Cognition includes a number of mental processes such as attention, memory, reasoning,
problem solving, etc.

• Perception is the process that allows us to use our senses to make sense of the information
around us through organization, identification and interpretation.

• The main difference is that while cognition encompasses a variety of skills and processes,
perception can be defined as one such cognitive skill or ability which assists in enhancing the
quality of cognitive abilities.

14. What is perceptual motor behavior?


Research in Perceptual Motor Behaviour will have implications for the design of robotics,
technology, and novel rehabilitation programs. The findings from experiments conducted in the
Perceptual Motor Behaviour lab increase our understanding of the underlying neural processes
for motor control and learning, and test new and innovative approaches to the assessment and
treatment of new or challenging motor skills. The advancements made will contribute to developing
interventions that are cost effective and track changes in motor performance accurately.

Currently there are three main areas of research:

1. Multisensory-motor integration in the typically developing population

2. Sensorimotor integration in individuals with an Autism Spectrum Disorder

3. Assessment and treatment of individuals with neurological disorders

By studying how people move, we can:

1) Strive to better measure a patient’s clinical progression throughout a course of treatment

2) Quantify the delivery of manual therapies by clinicians, and perceptual factors that influence
performance

Our lab explores the two themes above using three approaches to research:

1) Basic Science

2) Applied Clinical Science

3) Clinical Intervention Studies.

15. Discuss Ergonomics in detail?


Ergonomics (from the Greek word ergon meaning work, and nomoi meaning natural laws), is
the science of refining the design of products to optimize them for human use. Human
characteristics, such as height, weight, and proportions are considered, as well as information
about human hearing, sight, temperature preferences, and so on. Ergonomics is sometimes
known as human factors engineering.

Computers and related products, such as computer desks and chairs, are frequently the focus
of ergonomic design. A great number of people use these products for extended periods of
time -- such as the typical work day. If these products are poorly designed or improperly
adjusted for human use, the person using them may suffer unnecessary fatigue, stress, and
even injury.

The term “ergonomics” can simply be defined as the study of work. It is the science of fitting
jobs to the people who work in them. Adapting the job to fit the worker can help reduce
ergonomic stress and eliminate many potential ergonomic disorders (e.g., carpal tunnel
syndrome, trigger finger, tendonitis). Ergonomics focuses on the work environment and items
such as the design and function of workstations, controls, displays, safety devices, tools and
lighting to fit the employee’s physical requirements, capabilities and limitations to ensure
his/her health and well-being.

16. Define heuristics and explain with an example.


The word heuristic comes from the Greek meaning "to discover." A heuristic is an approach to
problem solving that takes one's personal experience into account.

Ways to Use Heuristics In Everyday Life

Here are some examples of real-life heuristics that people use as a way to solve a problem or to
learn something:

 "Consistency heuristic" is a heuristic where a person responds to a situation in way that


allows them to remain consistent.

 "Educated guess" is a heuristic that allows a person to reach a conclusion without


exhaustive research. With an educated guess a person considers what they have
observed in the past, and applies that history to a situation where a more definite
answer has not yet been decided.

 "Absurdity heuristic" is an approach to a situation that is very atypical and unlikely – in


other words, a situation that is absurd. This particular heuristic is applied when a claim
or a belief seems silly, or seems to defy common sense.

 "Common sense" is a heuristic that is applied to a problem based on an individual’s


observation of a situation. It is a practical and prudent approach that is applied to a
decision where the right and wrong answers seems relatively clear cut.

 "Contagion heuristic" causes an individual to avoid something that is thought to be bad


or contaminated. For example, when eggs are recalled due to a salmonella outbreak,
someone might apply this simple solution and decide to avoid eggs altogether to
prevent sickness.

 "Availability heuristic" allows a person to judge a situation on the basis of the examples
of similar situations that come to mind, allowing a person to extrapolate to the situation
in which they find themselves.

 "Working backward" allows a person to solve a problem by assuming that they have
already solved it, and working backward in their minds to see how such a solution might
have been reached.

 "Familiarity heuristic" allows someone to approach an issue or problem based on the


fact that the situation is one with which the individual is familiar, and so one should act
the same way they acted in the same situation before.

 "Scarcity heuristic" is used when a particular object becomes rare or scarce. This
approach suggests that if something is scarce, then it is more desirable to obtain.

 "Rule of thumb" applies a broad approach to problem solving. It is a simple heuristic
that allows an individual to make an approximation without having to do exhaustive
research.

 "Affect heuristic" is when you make a snap judgment based on a quick impression. This
heuristic views a situation quickly and decides without further research whether a thing
is good or bad. Naturally, this heuristic can be both helpful and hurtful when applied in
the wrong situation.

 "Authority heuristic" occurs when someone believes the opinion of a person of


authority on a subject just because the individual is an authority figure. People apply
this heuristic all the time in matters such as science, politics, and education.

17. Explain Human computer Interaction in Software Process?


 Software engineering provides a means of understanding the structure of the design
process, and that process can be assessed for its effectiveness in interactive system
design.

 Usability engineering promotes the use of explicit criteria to judge the success of a
product in terms of its usability.

 Iterative design practices work to incorporate crucial customer feedback early in the
design process to inform critical decisions which affect usability.

 Design involves making many decisions among numerous alternatives. Design rationale
provides an explicit means of recording those design decisions and the context in which
the decisions were made.

The software engineering life cycle aims to structure design in order to increase the reliability
of the design process. For interactive system design, this would equate to a reliable and
reproducible means of designing predictably usable systems. Because of the special needs of
interactive systems, it is essential to augment the standard life cycle in order to address issues
of HCI. Usability engineering encourages incorporating explicit usability goals within the design
process, providing a means by which the product’s usability can be judged. Iterative design
practices admit that principled design of interactive systems alone cannot maximize product
usability, so the designer must be able to evaluate early prototypes and rapidly correct features
of the prototype which detract from the product usability. The design process is composed of a
series of decisions, which pare down the vast set of potential systems to the one that is actually

delivered to the customer. Design rationale, in its many forms, is aimed at allowing the designer
to manage the information about the decision-making process, in terms of when and why
design decisions were made and what consequences those decisions had for the user in
accomplishing his work.

18. Define Cognitive neuroscience?


Cognitive neuroscience

The field of Cognitive Neuroscience concerns the scientific study of the neural mechanisms
underlying cognition and is a branch of neuroscience.

Cognitive neuroscience overlaps with cognitive psychology, and focuses on the neural
substrates of mental processes and their behavioral manifestations.

The boundaries between psychology, psychiatry and neuroscience have become quite blurred.

Cognitive neuroscientists tend to have a background in experimental psychology, neurobiology,
neurology, physics, and mathematics.

Methods employed in cognitive neuroscience include psychophysical experiments, functional
neuroimaging, electrophysiological studies of neural systems and, increasingly, cognitive
genomics and behavioral genetics.

Clinical studies in psychopathology in patients with cognitive deficits constitute an important
aspect of cognitive neuroscience.

The main theoretical approaches are computational neuroscience and the more traditional,
descriptive cognitive psychology theories such as psychometrics.

19. What do you mean by negative and positive appraisal in HCI? Why is it
important in interface design?
According to the Appraisal Theory of emotions, how we feel about a certain situation is
determined by our Appraisal or evaluation of the event.

Let's say you went on a job interview and you believe that the interview went well - you gave
good answers to the recruiter's questions, and you have all the qualifications needed for the
job; in that case you will have a positive emotion about it. You might feel happy and excited. But if you
perceive that it did not go well, then you will feel negatively about the event. You might feel
sad, dejected, or helpless.

An appraisal is the cognitive evaluation and interpretation of a phenomenon or event. In theories
of emotion, cognitive appraisals are seen as determinants of emotional experience, since they
influence the perception of the event. See cognitive theory.

APPRAISAL: "The person with a negative appraisal of the class felt unhappy, whereas the
person with a positive appraisal of the same class felt satisfied. "

“Appraisal” theories provide much greater predictive power than category or hierarchy-based
schemes by specifying the critical properties of antecedent events that lead to particular
emotions. Ellsworth (1994), for example, described a set of “abstract elicitors” of emotion. In
addition to novelty and valence, Ellsworth contended that the level of certainty/uncertainty in
an event has a significant impact on the emotion experienced. For instance, “uncertainty about
probably positive events leads to interest and curiosity, or to hope,” while “uncertainty about
probably negative events leads to anxiety and fear.” Certainty, on the other hand, can lead to
relief in the positive case and despair in the negative case.

Because slow, unclear, or unusual responses from an interface generally reflect a problem, one
of the most common interface design mistakes, from an affective standpoint, is to leave the
user in a state of uncertainty. Users tend to fear the worst when, for example, an application is
at a standstill, the hourglass remains up longer than usual, or the hard drive simply starts
grinding away unexpectedly. Such uncertainty leads to a state of anxiety that can be easily
avoided with a well-placed, informative message or state indicator. Providing users with
immediate feedback on their actions reduces uncertainty, promoting a more positive affective
state (see Norman, 1990, on visibility and feedback). When an error has actually occurred, the
best approach is to make the user aware of the problem and its possible consequences, but
frame the uncertainty in as positive a light as possible (i.e., “this application has experienced a
problem, but the document should be recoverable”).

According to Ellsworth (1994), obstacles and control also play an important role
in eliciting emotion. High control can lead to a sense of challenge in positive situations, but
stress in negative situations. Lack of control, on the other hand, often results in frustration,
which if sustained can lead to desperation and resignation. In an HCI context, providing an
appropriate level of controllability, given a user’s abilities and the task at hand, is thus critical
for avoiding negative affective consequences. Control need not only be perceived to exist but
must be understandable and visible, otherwise the interface itself is an obstacle (Norman,
1990). Agency is yet another crucial factor determining emotional response. When oneself is
the cause of the situation, shame (negative) and pride (positive) are likely emotions. When
another person or entity is the cause, anger (negative) and love (positive) are more likely.
However, if fate is the agent, one is more likely to experience sorrow (negative) and joy
(positive). An interface often has the opportunity to direct a user’s perception of agency. In any
anomalous situation, for example—be it an error in reading a file, inability to recognize speech
input, or simply a crash—if the user is put in a position encouraging blame of oneself or fate,
the negative emotional repercussions may be more difficult to diffuse than if the computer
explicitly assumes blame (and is apologetic). For example, a voice interface encountering a
recognition error can say, “This system failed to understand your command” (blaming itself),
“The command was not understood” (blaming no one), or “You did not speak clearly enough for
your command to be understood” (blaming the user). Appraisal theories of emotion are useful
not only in understanding the potential affective impacts of design decisions, but also in
creating computer agents that exhibit emotion. Although in some cases scripted emotional
responses are sufficient, in more dynamic or interactive contexts, an agent’s affective state
must be simulated to be believable.
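As a small illustration of the agency point above (a sketch only, not an API from any particular toolkit), an interface might choose its error wording so that the system, rather than the user, assumes the blame:

    # Sketch: choosing error wording by the agency the message implies.
    # The three phrasings are taken from the voice-interface example above.

    MESSAGES = {
        "system": "This system failed to understand your command.",  # blames itself
        "nobody": "The command was not understood.",                 # blames no one
        "user": "You did not speak clearly enough for your command to be understood.",
    }

    def recognition_error_message(blame: str = "system") -> str:
        """Prefer wording in which the computer assumes blame (and stays apologetic)."""
        return MESSAGES.get(blame, MESSAGES["system"])

    print(recognition_error_message())          # system assumes blame
    print(recognition_error_message("nobody"))  # neutral phrasing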

20. What do you know about Haptic Interface?


A haptics interface is a system that allows a human to interact with a computer through bodily
sensations and movements. Haptics refers to a type of human-computer interaction technology
that encompasses tactile feedback or other bodily sensations to perform actions or processes
on a computing device.

A haptics interface is primarily implemented and applied in virtual reality environments, where
an individual can interact with virtual objects and elements. A haptics interface relies on
purpose-built sensors that send an electrical signal to the computer based on different sensory
movements or interactions. Each electrical signal is interpreted by the computer to execute a
process or action. In turn, the haptic interface also sends a signal to the human organ or body.
For example, when playing a racing game using a haptic interface powered data glove, a user
can use his or her hand to steer the car. However, when the car hits a wall or another car, the
haptics interface will send a signal that will imitate the same feeling on the user’s hands in the form
of a vibration or rapid movement.
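A minimal sketch of the racing-game example, assuming a hypothetical DataGlove class with a vibrate() method (no real device driver or API is implied):

    # Sketch of the racing-game example above. DataGlove and vibrate() are
    # hypothetical stand-ins for a real haptic device interface.

    class DataGlove:
        def vibrate(self, intensity: float, duration_ms: int) -> None:
            print(f"glove vibrates at {intensity:.1f} for {duration_ms} ms")

    def on_collision(glove: DataGlove, impact_speed: float) -> None:
        """When the car hits a wall, send a feedback signal back to the hand."""
        intensity = min(1.0, impact_speed / 100.0)  # scale the impact to 0..1
        glove.vibrate(intensity, duration_ms=200)

    on_collision(DataGlove(), impact_speed=60.0)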

21. Discuss how gain or C:D ratio is used to compute the effectiveness of an
input device?

Gain: Also known as control-to-display (CD) gain or C:D ratio, gain is defined as the distance
moved by an input device divided by the distance moved on the display. Gain confounds what
should be two measurements—(a) device size and (b) display size—with one arbitrary metric
and is therefore suspect as a factor for experimental study. In experiments, gain typically has
little or no effect on the time to perform pointing movements, but variable gain functions may
provide some benefit by reducing the required footprint (physical movement space) of a
device. Indirect tablets report the absolute position of a pointer on a sensing surface. Touch
tablets sense a bare finger, whereas graphics tablets or digitizing tablets typically sense a stylus
or other physical intermediary. Tablets can operate in absolute mode, with a fixed CD gain
between the tablet surface and the display, or in relative mode, in which the tablet responds
only to motion of the stylus. If the user touches the stylus to the tablet in relative mode, the
cursor resumes motion from its previous position; in absolute mode, it would jump to the new
position. Absolute mode is generally preferable for tasks such as drawing, handwriting, tracing,
or digitizing, but relative mode may be preferable for typical desktop interaction tasks such as
selecting graphical icons or navigating through menus. Tablets thus allow coverage of many
tasks, whereas mice only operate in relative mode.
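Using the definition above (device distance divided by display distance), a short sketch shows how gain maps device motion to cursor motion in relative mode (illustrative only):

    # Control-to-display gain as defined above:
    #   gain = distance moved by the device / distance moved on the display
    # so, in relative mode, cursor movement = device movement / gain.

    def cursor_delta(device_delta_mm: float, gain: float) -> float:
        """Cursor motion produced by a device motion at a given gain."""
        return device_delta_mm / gain

    # gain = 2: the hand moves twice as far as the cursor (fine control);
    # gain = 0.5: the cursor moves twice as far as the hand (small footprint).
    print(cursor_delta(20.0, gain=2.0))  # 10.0 display units
    print(cursor_delta(20.0, gain=0.5))  # 40.0 display units

In absolute mode a tablet would instead map the stylus position directly to a display position, so the cursor jumps to wherever the stylus lands rather than moving by a gain-scaled amount.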

22. What are Wearable Computers? Explain with an example.


Wearable computing is the study or practice of inventing, designing, building, or using
miniature body-borne computational and sensory devices. Wearable computers may be worn
under, over, or in clothing, or may also be themselves clothes.

Wearable computing as a reciprocal relationship between man and machine

An important distinction between wearable computers and portable computers (handheld and
laptop computers for example) is that the goal of wearable computing is to position or
contextualize the computer in such a way that the human and computer are inextricably
intertwined, so as to achieve Humanistic Intelligence – i.e. intelligence that arises by having the
human being in the feedback loop of the computational process, e.g. Mann 1998.

An example of Humanistic Intelligence is the wearable face recognizer (Mann 1996) in which
the computer takes the form of electric eyeglasses that "see" everything the wearer sees, and
therefore the computer can interact serendipitously. A handheld or laptop computer would not

28 | P a g e
provide the same serendipitous or unexpected interaction, whereas the wearable computer
can pop up virtual nametags if it ever "sees" someone its owner knows or ought to know.

In this sense, wearable computing can be defined as an embodiment of, or an attempt to
embody, Humanistic Intelligence. This definition also allows for the possibility of some or all of
the technology to be implanted inside the body, thus broadening from "wearable computing"
to "bearable computing" (i.e. body-borne computing).

One of the main features of Humanistic Intelligence is constancy of interaction, that the human
and computer are inextricably intertwined. This arises from constancy of interaction between
the human and computer, i.e. there is no need to turn the device on prior to engaging it (thus,
serendipity).

Another feature of Humanistic Intelligence is the ability to multi-task. It is not necessary for a
person to stop what they are doing to use a wearable computer because it is always running in
the background, so as to augment or mediate the human's interactions. Wearable computers
can be incorporated by the user to act like a prosthetic, thus forming a true extension of the
user's mind and body.

It is common in the field of Human-Computer Interaction (HCI) to think of the human and
computer as separate entities. The term "Human-Computer Interaction" emphasizes this
separateness by treating the human and computer as different entities that interact. However,
Humanistic Intelligence theory thinks of the wearer and the computer with its associated input
and output facilities not as separate entities, but regards the computer as a second brain and its
sensory modalities as additional senses, in which synthetic synesthesia merges with the
wearer's senses. When a wearable computer functions as a successful embodiment of
Humanistic Intelligence, the computer uses the human's mind and body as one of its
peripherals, just as the human uses the computer as a peripheral. This reciprocal relationship is
at the heart of Humanistic Intelligence.

Concrete examples of wearable computing

Example : Augmented Reality

Augmented Reality means to superimpose an extra layer on a real-world environment, thereby
augmenting it. An “augmented reality” is thus a view of a physical, real-world environment
whose elements are augmented by computer-generated sensory input such as sound, video,
graphics or GPS data. One example is the Wikitude application for the iPhone, which lets you
point your iPhone’s camera at something, which is then “augmented” with information from
Wikipedia (strictly speaking this is a mediated reality, because the iPhone actually modifies
vision in some ways - even if nothing more than the fact we're seeing with a camera).

Augmented Reality prototype

Photograph of the Head-Up Display taken by a pilot on a McDonnell Douglas F/A-18 Hornet

23. Discuss the catastrophic effect of human error in HCI?
Human capability for interpreting and manipulating information is quite impressive. However,
we do make mistakes. Some are trivial, resulting in no more than temporary inconvenience or
annoyance. Others may be more serious, requiring substantial effort to correct. Occasionally an
error may have catastrophic effects, as we see when ‘human error’ results in a plane crash or
nuclear plant leak. Why do we make mistakes and can we avoid them? In order to answer the
latter part of the question we must first look at what is going on when we make an error. There
are several different types of error. As we saw in the last section some errors result from
changes in the context of skilled behavior. If a pattern of behavior has become automatic and
we change some aspect of it, the more familiar pattern may break through and cause an error.
A familiar example of this is where we intend to stop at the shop on the way home from work
but in fact drive past. Here, the activity of driving home is the more familiar and overrides the
less familiar intention.

Other errors result from an incorrect understanding, or model, of a
situation or system. People build their own theories to understand the causal behavior of
systems. These have been termed mental models. They have a number of characteristics.
Mental models are often partial: the person does not have a full understanding of the working
of the whole system. They are unstable and are subject to change. They can be internally
inconsistent, since the person may not have worked through the logical consequences of their
beliefs. They are often unscientific and may be based on superstition rather than evidence.
Often they are based on an incorrect interpretation of the evidence.

24. Define User Interface Management System?


A User Interface Management System (UIMS) should not be thought of as a system but rather as
a software architecture (a UIMS is also called a User Interface Architecture) "in which the
implementation of an application's user interface is clearly separated from that of the
application's underlying functionality". A large number of software architectures are based on
the assumption that the functionality and the user interface of a software application are two
separate concerns that can be dealt with in isolation. The objective of such a separation is to
increase the maintainability and adaptability of the software. Also, by abstracting the
code generating the user interface from the rest of the application's logic or semantics,
customisation of the interface is better supported. Some examples of such architectures are
Model-View-Controller (fundamental to modern Object Orientation, e.g. used in Java (Swing)),
the linguistic model (Foley 1990), the Seeheim model (first introduced in Green 1985), the
Higgins UIMS (described in Hudson and King 1988), and the Arch model (a specialisation of the
Seeheim model; see Coutaz et al. 1995, Coutaz 1987, and Coutaz 1997).

Such user interface architectures have been proven useful but also introduce problems. In
systems with a high degree of interaction and semantic feedback (e.g. in direct manipulation
interfaces) the boundary between application and user interface is difficult or impossible to
maintain. In direct manipulation interfaces, the user interface displays the 'intestines' or the very
semantics of the application, with which the user interacts in a direct and immediate way. It
thus becomes very problematic to decide if these intestines should be handled by the User
Interface or in the application itself.
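As a rough illustration of the separation a UIMS aims for (not tied to any particular architecture or toolkit), a minimal Model-View-Controller sketch in Python might look like the following; all class and method names are hypothetical.

# Minimal MVC-style separation: the model holds application functionality only,
# the view only renders, and the controller maps user input onto model updates.

class CounterModel:
    """Underlying functionality: state and business logic, no presentation."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1


class CounterView:
    """User interface: renders the model, contains no application logic."""
    def render(self, model):
        print(f"Count: {model.value}")


class CounterController:
    """Maps user commands onto the model, then asks the view to refresh."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle(self, command):
        if command == "increment":
            self.model.increment()
        self.view.render(self.model)


controller = CounterController(CounterModel(), CounterView())
controller.handle("increment")   # Count: 1
controller.handle("increment")   # Count: 2

Because the view and controller can be replaced without touching CounterModel, the interface can be customised independently of the application's underlying functionality, which is exactly the separation of concerns described above.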

25. Write a detailed note on PIE model?


The PIE model

The PIE model is a black-box model. It does not try to represent the internal architecture and
construction of a computer system, but instead describes it purely in terms of its inputs from
the user and outputs to the user. For a simple single-user system, typical inputs would be from
the keyboard and mouse, and outputs would be the computer’s display screen and the
eventual printed output. The difference between the ephemeral display of a system and the
permanent result is central to the PIE model. We will call the set of possible displays D and the
set of possible results R. In order to express principles of observability, we will want to talk
about the relation between display and result. Basically, can we determine the result (what you
will get) from the display (what you see)? For a formal statement of predictability it helps (but is
not essential) to talk about the internal state of the system. This does not counter our claim to
have a black-box model. First, the state we define will be opaque; we will not look at its
structure, merely postulate it is there. Secondly, the state we will be discussing is not the actual
state of the system, but an idealization of it. It will be the minimal state required to account for
the future external behavior. We will call this the effect (E). Functions display and result obtain
the current outputs from this minimal state:

display : E → D
result : E → R

The current display will be literally what is now visible. The current result is actually not what is
available, but what the result would be if the interaction were finished. For example, with a
word processor, it is the pages that would be obtained if one printed the current state of the
document. A single-user action we will call a command (from a set C). The history of all the
user’s commands is called the program (P = seq C), and the current effect can be calculated
from this history using an interpretation function:

I : P → E

Arguably the input history would be better labeled H, but then the PIE model would lose its
acronym! If we put together all the bits, we obtain a diagram of sets and functions, which looks
rather like the original illustration. In principle, one can express all the properties one wants in
terms of the interpretation function, I. However, this often means expressing properties
quantified over all possible past histories. To make some of the properties easier to express, we
will also use a state transition function doit:

doit : E × P → E

The function doit takes the present state e and some user commands p, and gives the new state
after the user has entered the commands doit(e, p). It is related to the interpretation function I
by the following axioms:

doit(I(p), q) = I(p ^ q)
doit(doit(e, p), q) = doit(e, p ^ q)

The PIE diagram can be read at different levels of abstraction. One can take a direct analogy with Figure 17.2. The commands set C is the keystrokes and mouse clicks, the display set D is the physical display, and the result R is the printed output:

C = {'a', 'b', ... , '0', '1', ... , '*', '&', ... }
D = Pixel_coord → RGB_value
R = ink on paper

This is a physical/lexical level of interpretation. One can produce a
similar mapping for any system, in terms of the raw physical inputs and outputs. It is often
more useful to apply the model at the logical level. Here, the user commands are higher-level
actions such as ‘select bold font’, which may be invoked by several keystrokes and/or mouse
actions. Similarly, we can describe the screen at a logical level in terms of windows, buttons,
fields and so on.

Also, for some purposes, rather than dealing with the final physical result, we may regard, say,
the document on disk as the result. The power of the PIE model is that it can be applied at
many levels of abstraction. Some properties may only be valid at one level, but many should be
true at all levels of system description. It is even possible to apply the PIE model just within the
user, in the sense that the commands are the user’s intended actions, and the display, the
perceived response. When applying the PIE model at different levels it is possible to map
between the levels. This leads to level conformance properties, which say, for example, that the
changes one sees at the interface level should correspond to similar changes at the level of
application objects.
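To make the sets and functions concrete, here is a small hypothetical Python sketch of a PIE-style description of a toy one-line editor; the command set, the ten-character display window and the choice of result are invented for illustration and are not part of the PIE model itself.

# Toy PIE model: commands are single characters plus 'DEL'; the effect E is the
# full text typed so far; the display D shows only the last WINDOW characters;
# the result R is the whole text (what would be printed).

WINDOW = 10  # number of characters visible on the display

def doit(effect, commands):
    """doit : E x P -> E  -- apply further commands to an existing effect."""
    for command in commands:
        effect = effect[:-1] if command == "DEL" else effect + command
    return effect

def interpret(program):
    """I : P -> E  -- the effect reached from the empty state by the whole history."""
    return doit("", program)

def display(effect):
    """display : E -> D  -- only the tail of the text is visible."""
    return effect[-WINDOW:]

def result(effect):
    """result : E -> R  -- what you would get if you printed now."""
    return effect

# The doit axioms hold by construction, e.g. doit(I(p), q) == I(p ^ q):
p, q = list("hello world"), ["DEL", "!"]
assert doit(interpret(p), q) == interpret(p + q)
print(display(interpret(p + q)))   # 'ello worl!'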

Predictability and observability

WYSIWYG is clearly related to what can be inferred from the display (what you see). Harold
Thimbleby has pointed out that WYSIWYG can be given two interpretations.

One is what you see is what you will get at the printer. This corresponds to how well you can
determine the result from the display. The second interpretation is what you see is what you
have got in the system. For this we will ask what the display can tell us about the effect. These
can both be thought of as observability principles. A related issue is predictability. Imagine you
have been using a drawing package and in the middle you get thirsty and go to get a cup of tea.
On returning, you are faced with the screen – do you know what to do next? If there are two
shapes one on top of the other, the graphics package may interpret mouse clicks as operating
on the ‘top’ shape. However, there may be no visual indication of which is topmost. The screen
image does not tell you what the effect of your actions will be; you need to remember how you
got there, your command history. This has been called the ‘gone away for a cup of tea
problem’. In fact, the state of the system determines the effects of any future commands, so if
we have a system which is observable in the sense that the display determines the state, it is
also predictable. Predictability is a special case of observability. We will attempt to formalize
these properties. To say that we can determine the result from the display is to say that there
exists a function transparentR from displays to results:

∃ transparentR : D → R • ∀ e ∈ E • transparentR(display(e)) = result(e)

It is no good having any old function from the display to the result; the second half of the above
says that the function gives us exactly the result we would get from the system. We can call this
property result transparency. We can do a similar thing for the effect, that is the system state:

∃ transparentE : D → E • ∀ e ∈ E • transparentE(display(e)) = e

We can call this property simply transparency. What would it
mean for a system to be transparent in one of these senses? If the system were result
transparent, when we come back from our cup of tea, we can look at the display and then work
out in our head (using transparentR) exactly what the printed drawing would look like. Whether
we could do this in our heads is another matter. For most drawing packages the function would
be simply to ignore the menus and ‘photocopy’ the screen. Simple transparency is stronger still.
It would say that there is nothing in the state of the system that cannot be inferred from the
display. If there are any modes, then these must have a visual indication; if there are any
differences in behavior between the displayed shapes, then there must be some corresponding
visual difference. Even forgetting the formal principles, this is a strong and useful design
heuristic. Unfortunately, these principles are both rather too strong. If we imagine a word
processor rather than a drawing package, the contents of the display will be only a bit of the
document.

Clearly, we cannot infer the contents of the rest of the document (and hence the printed
result) from the display. Similarly, to give a visual indication of, say, object grouping within a
complex drawing package might be impossible (and this can cause the user problems). When
faced with a document on a word processor, the user can simply scroll the display up and down
to find out what is there. You cannot see from the current display everything about the system,
but you can find out. The process by which the user explores the current state of the system is
called a strategy. The formalization of a strategy is quite complex, even ignoring cognitive
limitations. These strategies will differ from user to user, but the documentation of a system
should tell the user how to get at pertinent information: for example, how to tell which objects
in a drawing tool are grouped. This will map out a set of effective strategies with which the
user can work. Ideally, a strategy for observing the system should not disrupt the state of the
application objects, that is the strategy should be passive. For example, a strategy for looking at
a document which involved deleting it would not be very useful. This seems almost too obvious,
but consider again grouping in drawing tools. Often the only way to find out how a grouped
object is composed is to ungroup it piece by piece. You then have to remember how to put it
back together. The advantage of a passive strategy becomes apparent. Using such a strategy
then gives one a wider view of the system than the display. This is called the observable effect
(O). In a word processor this would be the complete view of the document obtained by scrolling,
plus any current mode indicators and a quick peek at the state of the cut/paste buffer. The
observable effect contains strictly more information than the display, and hence sits before it in
a functional diagram. We can now reformulate principles in terms of the observable effect. First
of all, the system is result observable if the result can be determined from the observable effect:

∃ predictR : O → R • ∀ e ∈ E • predictR(observe(e)) = result(e)

This says that the observable
effect contains at least as much information as the result. However, it will also contain
additional information about the interactive state of the system. For example, you will observe
the current cursor position, but this has no bearing on the printed document. So you know
what will happen if you hit the print button now. Refreshed from your cup of tea, you return to
work. You press a function key which, unknown to you, is bound to a macro intended for an
entirely different application.

The screen rolls, the disk whirrs and, to your horror, your document and the entire disk
contents are trashed. You leave the computer and go for another drink...not necessarily of tea.
A stronger condition is that the system be fully predictable:

∃ predictE : O → E • ∀ e ∈ E • predictE(observe(e)) = e

This says that you can observe the complete state of the system. You
can then (in theory) predict anything the system will do. If the system were fully predictable,
you would be able to tell what the bindings of the function keys were and hence (again in
theory) would have been able to avoid your disaster. This is as far as this bit of the formal story
goes in this book. However, there are more sophisticated principles of observability and
predictability which take into account aspects of user attention, and issues like keyboard
buffers. Formalisms such as the PIE model have been used to portray other usability principles
discussed in Chapter 7. Principles of predictability do not stand on their own; even if you had
known what was bound to the function key, you might still have hit it by accident, or simply
forgotten. Other protective principles like commensurate effort need to be applied. Also,
although it is difficult to formalize completely, one prefers a system that behaves in most
respects like the transparency principles, rather than requiring complicated searching to
discover information. This is a sort of commensurate effort for observation.
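Continuing the hypothetical toy-editor sketch given earlier (it assumes the display and result functions defined there), the transparency and observability properties can be read as executable checks. Result transparency fails because the display is only a window onto the text, while result observability succeeds once scrolling, modelled here as a passive observe function, is allowed as a strategy. This is only an illustration of the definitions, not part of the original formal development.

# Test the properties over a small, invented sample of effects. The last two
# effects share a display (same final ten characters) but differ as results.

def some_effects():
    return ["", "short", "first version of text", "second version of text"]

def is_result_transparent(effects):
    """Does display(e) determine result(e)?  (does transparentR exist?)"""
    seen = {}
    for e in effects:
        d, r = display(e), result(e)
        if d in seen and seen[d] != r:
            return False          # same display, different results
        seen[d] = r
    return True

def observe(effect):
    """A passive 'scrolling' strategy: the observable effect O is the whole text."""
    return effect

def is_result_observable(effects):
    """Does observe(e) determine result(e)?  (does predictR exist?)"""
    seen = {}
    for e in effects:
        o, r = observe(e), result(e)
        if o in seen and seen[o] != r:
            return False
        seen[o] = r
    return True

print(is_result_transparent(some_effects()))   # False: the display is only a window
print(is_result_observable(some_effects()))    # True: scrolling reveals the whole result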

26. Discuss the design issue of user support and help systems?
USER SUPPORT AND HELP SYSTEMS:

Not every help system will have all of the features discussed below, sometimes for good reason, but they are
useful as benchmarks against which we can test the support tools we design. Then, if our
system does not have these features, it will be by design and not by accident! Some of these
terms have also been used in discussing principles for usability. The use of the terms here is
more constrained but related.

Availability

The user needs to be able to access help at any time during his interaction with the system. In
particular, he should not have to quit the application he is working on in order to open the help
application. Ideally, it should run concurrently with any other application. This is obviously a
problem for non-windowed systems if the help system is independent of the application that is
running. However, in windowed systems there is no reason why a help facility should not be
available constantly, at the press of a button.

Accuracy and completeness

It may seem obvious to state that the assistance provided should be accurate and complete.
But in an age where applications are frequently updated, and different versions may be active
at the same time, it is not a trivial problem. However, if the assistance provided proves not to
match the actual behavior of the system the user will, at best, become disillusioned with the
help facilities, and, at worst, get into difficulties. As well as providing an accurate reflection of
the current state of the system, help should cover the whole system. This completeness is very
important if the help provided is to be used effectively. The designer cannot predict the parts of
the system the user will need help with, and must therefore assume that all parts must be
supported. Finding no help available on a topic of interest is guaranteed to frustrate the user.

Consistency

As we have noted, users require different types of help for different purposes. This implies that
a help system may incorporate a number of parts. The help provided by each of these must be
consistent with all the others and within itself. Online help should also be consistent with paper
documentation. It should be consistent in terms of content, terminology and style of
presentation. This is also an issue where applications have internal user support – these should
be consistent across the system. It is unhelpful if a command is described in one way here and
in another there, or if the way in which help is accessed varies across applications. In fact,
consistency itself can be thought of as a means of supporting the user since it reinforces
learning of system usage.

Robustness

Help systems are often used by people who are in difficulty, perhaps because the
system is behaving unexpectedly or has failed altogether. It is important then that the help
system itself should be robust, both by correct error handling and predictable behavior. The
user should be able to rely on being able to get assistance when required. For these reasons
robustness is even more important for help systems than for any other part of the system.

Flexibility

Many help systems are rigid in that they will produce the same help message
regardless of the expertise of the person seeking help or the context in which they are working.
A flexible help system will allow each user to interact with it in a way appropriate to his needs.
This will range from designing a modularized interactive help system, through context-sensitive
help, to a full-blown adaptive help system, which will infer the user’s expertise and task. We will
look at context-sensitive and adaptive help in more detail later in the chapter. However, any
help system can be designed to allow greater interactivity and flexibility in the level of help
presented. For example, help systems built using hypertext principles allow the user to browse
through the help, expanding topics as required. The top level provides a map of the subjects
covered by the help and the user can get back to this level at any point. Although hypertext
may not be appropriate for all help systems, the principle of flexible access is a useful one.

Unobtrusiveness

The final principle for help system design is unobtrusiveness. The help system
should not prevent the user from continuing with normal work, nor should it interfere with the
user’s application. This is a problem at both ends of the spectrum. At one end the textual help
system on a non-windowed interface may interrupt the user’s work. A possible solution to this
if no alternative is available is to use a split-screen presentation.

ADAPTIVE HELP SYSTEMS

Stereotypes

Another approach to automatic user modeling is to work with stereotypes. Rather than
attempting to build a truly individual model of the user, the system classifies the user as a
member of a known category of users or stereotype. Stereotypes are based on user
characteristics and may be simple, such as making a distinction between novice and expert
users, or more complex, for example building a compound stereotype based on more than one
piece of information. There are several ways of building stereotypes. One is to use information
such as command use and errors to categorize different types of user and then to use rules to
identify the stereotype to which the user belongs. An alternative method is to use a machine
learning approach, such as neural networks, to learn examples of different types of user
behavior (from actual logs) and then to classify users according to their closeness to the
examples previously learned. Stereotypes are useful in that they represent the user at the level
of granularity at which most adaptive help systems work, and do not attempt to produce a
sophisticated model, which will not be fully utilized. After all, if the only information that is
available about the user at any time is how he is interacting with the system, it is not possible
to infer very much about the user himself. However, what can be inferred may be exactly what
is required to provide the necessary level of help.
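As a purely illustrative sketch of the rule-based route to a stereotype (the rules, thresholds and help styles below are invented, not taken from any particular system), classifying a user from simple interaction statistics might look like this:

# Hypothetical rule-based stereotype assignment from observable behaviour only.

def classify_user(commands_used, errors, used_shortcuts):
    if errors > 5 or commands_used < 10:
        return "novice"
    if used_shortcuts and errors == 0:
        return "expert"
    return "intermediate"

# The help system can then pick a depth of explanation per stereotype.
HELP_STYLE = {
    "novice": "step-by-step tutorial",
    "intermediate": "task-oriented help with examples",
    "expert": "terse reference material and shortcuts",
}

stereotype = classify_user(commands_used=8, errors=7, used_shortcuts=False)
print(stereotype, "->", HELP_STYLE[stereotype])   # novice -> step-by-step tutorial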

Overlay models

One of the most common techniques used is the overlay model. Here an idealized model, often
of an expert user, is constructed and the individual user’s behavior compared with it. The
resulting user profile may represent either the commonality between the two models or the
differences. An advantage of this style of modeling is that it allows a certain degree of
diagnostic activity on the part of the system. Not only is the system aware of what the user is
doing, but it also has a representation of optimal behavior. This provides a benchmark against
which to measure the user’s performance, and, if the user does not take the optimal course of
action, gives an indication of the type of help or hint that is required. A similar approach is used
in error-based models where the system holds a record of known user errors and the user’s
actual behavior is compared with these. If this behavior matches an error in the catalog, then
remedial action can be taken. Potential errors may be matched when partially executed and
help given to enable the user to avoid the error, or recover more quickly. These types of
modeling are also useful in intelligent tutoring systems where diagnostic information is
required in order to decide how to proceed with the tutorial.
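A minimal sketch of the overlay idea follows; the "expert" command sequence and the task are invented, and the user profile here is simply the difference between observed and idealized behaviour:

# Overlay model sketch: compare the user's actions against an idealized expert
# sequence and report what is extra or missing, as a basis for hints.

EXPERT_SEQUENCE = ["EDIT file1", "SAVE file1", "COMPILE file1"]

def overlay_difference(user_actions, expert_actions=EXPERT_SEQUENCE):
    missing = [a for a in expert_actions if a not in user_actions]
    extra = [a for a in user_actions if a not in expert_actions]
    return {"missing": missing, "extra": extra}

profile = overlay_difference(["EDIT file1", "COMPILE file1", "COMPILE file1"])
print(profile)   # {'missing': ['SAVE file1'], 'extra': []}
# A missing step suggests the hint to give: remind the user to save before compiling.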

Knowledge representation: domain and task modeling

All adaptive help systems must have
some knowledge of the system itself, in order to provide relevant and appropriate advice. This
knowledge may include command use, common errors and common tasks. However, some help
systems also attempt to build a model of the user’s current task or plan. The motivation behind
this is that the user is engaged in a particular problem-solving task and requires help at that
level. Generic help, even adapted to the expertise and preference of the user, is not enough.
One common approach to this problem is to represent user tasks in terms of the command
sequences that are required to execute them. As the user works, the commands used are
compared with the stored task sequences and matched sequences are recovered. If the user’s
command sequence does not match a recognized task, help is offered. This approach was used
in the PRIAM system. Although an attractive idea, task recognition is problematic. In large
domains it is unlikely that every possible method for reaching every possible user goal could be
represented. Users may reasonably approach a task in a non-standard way, and inferring the
user’s intention from command usage is not a trivial problem. As we saw in Chapter 9, system
logs do not always contain sufficient information for a human expert to ascertain what the user
was trying to do. The problem is far greater for a computer. Assistants and agents use task
recognition at a basic level to monitor user behavior and provide hints and macros when a
familiar or repeated sequence is noticed.
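A rough sketch of this style of task recognition is given below; the command vocabulary and the catalogue of task templates are invented, and real systems such as PRIAM are considerably more sophisticated, so this only shows the matching idea:

# Match the user's recent commands against stored task sequences.

TASK_TEMPLATES = {
    "debug": ["EDIT", "COMPILE", "RUN"],
    "print document": ["OPEN", "FORMAT", "PRINT"],
}

def recognise_task(recent_commands):
    """Return the first stored task whose template occurs, in order, in the log."""
    for task, template in TASK_TEMPLATES.items():
        it = iter(recent_commands)
        if all(step in it for step in template):   # ordered subsequence test
            return task
    return None   # unrecognised: a cue that help could be offered

print(recognise_task(["OPEN", "EDIT", "COMPILE", "RUN"]))   # debug
print(recognise_task(["OPEN", "SAVE"]))                     # None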

Knowledge representation: modeling advisory strategy

A third area of knowledge representation, which is sometimes included in adaptive help, is
modeling advisory or tutorial strategies. Providing a help system with this type of information
allows it not only to select appropriate advice for the user but also to use an appropriate
method of giving advice. As we have already seen, people require different types of help
depending on their knowledge and circumstances. These include reminders, task-specific help
and tutorial help. There is evidence to indicate that human experts follow different strategies
when advising colleagues [293]. These include inferring the intention of the person seeking help
and advising at that level or providing a number of solutions to the person’s problem.
Alternatively they may attempt to place the problem in a context and provide a ‘sample
solution’ in that context. Few adaptive help systems have attempted to model advisory
strategy, and those that do provide a limited choice. Ideally, it would be useful if the help
system had access to a number of alternative strategies and was able to choose an appropriate
style of guidance in each case. However, this is very ambitious – too little is known about what
makes a guidance strategy appropriate in which contexts. Nevertheless, it is important that
designers of adaptive help systems give some thought to advisory strategies, if only to make an
informed choice about the strategy that is to be used. The EuroHelp adaptive help system
adopts a model of teacher–pupil, in which the system is envisaged as a teacher watching the
user (pupil) work and offering advice and suggestions in an ‘over-the-shoulder’ fashion [126,
44]. In this case, instruction
may be high to begin with but will become less obtrusive as the user finds his feet. The user is
able to question the system at any point and responses are given in terms of the current
context. This mixed-initiative dialog is also used in the Activist/Passivist help system, which will
accept requests from the user and actively offer suggestions and hints, particularly about areas
of functionality that it infers the user is unfamiliar with.

Techniques for knowledge representation

All of the modeling approaches described rely heavily on techniques for knowledge
representation from artificial intelligence. This is a whole subject in its own right and there is
only room to outline the methods here (although some of the techniques are based on theories
of memory and problem solving as discussed). The interested reader is also referred to the text
on artificial intelligence in the recommended reading list. There are four main groups of
techniques used in knowledge representation for adaptive help systems: rule based, frame
based, network based and example based. Note that these general techniques are often
combined to produce hybrid systems.

Rule-based techniques

Knowledge is represented as a set of rules and facts, which are interpreted using some
inference mechanism. Predicate logic provides a powerful mechanism for representing
declarative information, while production rules represent procedural information. Rule-based
techniques can be used in relatively large domains and can represent actions to perform as well
as knowledge for inference. A user model implemented using rule-based methods may include
rules of the form:

IF command is EDIT file1
AND last command is COMPILE file1
THEN task is DEBUG
     action is describe automatic debugger
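The same rule can be written as a tiny production over a working memory of session facts; this is a hypothetical Python rendering of the rule above, not an excerpt from a real help system:

# A single production rule applied to a dictionary of facts about the session.

def debug_rule(memory):
    """IF command is EDIT file1 AND last command is COMPILE file1
       THEN task is DEBUG, and the action is to describe the automatic debugger."""
    if memory.get("command") == "EDIT file1" and memory.get("last_command") == "COMPILE file1":
        memory["task"] = "DEBUG"
        return "describe automatic debugger"
    return None

memory = {"command": "EDIT file1", "last_command": "COMPILE file1"}
print(debug_rule(memory))   # describe automatic debugger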

Frame-based techniques

Frame-based systems are used to represent commonly occurring situations and default
knowledge. A frame is a structure that contains labeled slots, representing related features.
Each slot can be assigned a value or alternatively be given a default value. User input is
matched against the frame values and a successful match may cause some action to be taken.
They are useful in small domains. In user modeling the frame may represent the current profile
of the user:

User
  Expertise level: novice
  Command: EDIT file1
  Last command: COMPILE file1
  Errors this session: 6
  Action: describe automatic debugger
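A frame can be approximated as a dictionary of slots with default values; the slot names mirror the profile above, and the trigger condition is an invented example:

# Frame with default slot values; observations fill the slots, and a filled
# frame can set an action when its values match a known situation.

DEFAULT_USER_FRAME = {
    "expertise_level": "novice",   # default value
    "command": None,
    "last_command": None,
    "errors_this_session": 0,
    "action": None,
}

def fill_frame(observations):
    frame = dict(DEFAULT_USER_FRAME)
    frame.update(observations)
    # Hypothetical trigger: editing straight after a compile suggests debugging help.
    if frame["command"] == "EDIT file1" and frame["last_command"] == "COMPILE file1":
        frame["action"] = "describe automatic debugger"
    return frame

print(fill_frame({"command": "EDIT file1",
                  "last_command": "COMPILE file1",
                  "errors_this_session": 6}))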

Network-based techniques

Networks represent knowledge about the user and system in terms of relationships between
facts. One of the most common examples is the semantic network. The network is a hierarchy
and children can inherit properties associated with their parents. This makes it a relatively
efficient representation scheme and is useful for linking information clearly. Networks can also
be used to link frame-based representations. The compile example could be expanded within a
semantic network:

CC is an instance of COMPILE
COMPILE is a command
COMPILE is related to DEBUG
COMPILE is related to EDIT
Automatic debugger facilitates DEBUG
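The same facts can be stored as a small semantic network in which properties are inherited along the hierarchy; the representation below is a simplified, hypothetical one:

# A tiny semantic network: nodes with labelled links; lookups follow
# 'instance_of' / 'is_a' links upwards until the relation is found.

NETWORK = {
    "CC":      {"instance_of": "COMPILE"},
    "COMPILE": {"is_a": "command", "related_to": ["DEBUG", "EDIT"]},
    "DEBUG":   {"facilitated_by": "automatic debugger"},
}

def lookup(node, relation):
    while node in NETWORK:
        links = NETWORK[node]
        if relation in links:
            return links[relation]
        node = links.get("instance_of") or links.get("is_a")
    return None

print(lookup("CC", "related_to"))         # ['DEBUG', 'EDIT']  (inherited from COMPILE)
print(lookup("DEBUG", "facilitated_by"))  # automatic debugger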

Example-based techniques

Example-based techniques represent knowledge implicitly within the decision structure of a
classification system. This may be a decision tree, in the case of an inductive learning approach
such as ID3 [298], or links in a network in the case of neural networks. The decision structure is
constructed automatically based on examples presented to the classifier. The classifiers
effectively detect recurrent features within the examples and are able to use these to classify
other input. An example may be a trace of user activity:

EDIT file1
COMPILE file1

This would be trained as an example of a particular task, for example DEBUG.
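A deliberately simple sketch of the example-based idea follows: labelled traces are stored, and a new trace is classified by its overlap with the stored examples. This stands in for ID3 or a neural network, which the text mentions; the training traces are invented.

# Nearest-example classification of command traces.

TRAINING_EXAMPLES = [
    (["EDIT file1", "COMPILE file1"], "DEBUG"),
    (["OPEN file2", "PRINT file2"], "PRINT DOCUMENT"),
]

def classify_trace(trace):
    """Label a trace with the task of the most similar stored example."""
    def overlap(example):
        return len(set(trace) & set(example))
    _, best_label = max(TRAINING_EXAMPLES, key=lambda pair: overlap(pair[0]))
    return best_label

print(classify_trace(["EDIT file1", "COMPILE file1", "RUN file1"]))   # DEBUG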

Problems with knowledge representation and modeling

Knowledge representation is the central issue in adaptive help systems, but it is not without its
problems. Knowledge is often difficult to elicit, particularly if a domain expert is not available.
This is particularly true of knowledge of user behavior, owing to its variability. It is especially
difficult to ensure completeness and correctness of the knowledge base in these circumstances.
Even if knowledge is available, the amount of knowledge required is substantial, making
adaptive help an expensive option.

A second problem is interpreting the information appropriately. Although the knowledge base
can be provided with detailed knowledge of the expected contexts and the domain in advance,
during the interaction the only information that is available is the system log of the user’s
actions. Interpreting system logs is very difficult because they are stripped of much context and
there is no access to the user's intention or goal (except by inference). However, this data is not
arbitrary and does contain recurrent patterns of activity, which can be used with care to infer
task sequences and the like. It should be realized, though, that these represent
approximations only.

27. Write detailed note on Cognition-Adaptive Multimodal Interface (CAMI) ?

ANS. Cognition-Adaptive Multimodal Interface (CAMI)

Motivation and Objectives

A cognition-adaptive multimodal interface (CAMI) is inspired by, and designed for, a real-world
traffic incident management system. CAMI is motivated by the following factors:

While the idea of cognition-adaptive user interfaces has been around for a long time,
there have not been many case studies or design examples reported in the literature,
especially for large-scale mission-critical applications.

Our industry partner has provided us with an interface design task which is technically
challenging and economically significant, and at the same time provides us with an
ideal research and development platform.
We aim to combine concepts and methods from both CLT (Cognitive Load Theory) and
CSE (Cognitive System Engineering) and develop a new type of design methodology
based on the complementary nature of both.
We intend to apply our prior experiences and expertise in the area of multimodal user
interfaces to the development of CAMI.

The system objective of CAMI is to create an interface that is able to support the work of
traffic control officers so that their cognitive work load is reduced, their use of existing
technical and business resources is “cognitively” facilitated, and their task performance is
improved (e.g. total incident handling time is reduced). The research objective of CAMI is to
combine expertise from CLT, CSE and multimodal interfaces to create a new design
methodology for cognition-adaptive information systems and their user interfaces.

CAMI Design and Analysis

Figure 1 shows the simplified traffic incident management system (TIM) used by traffic control
officers (TCOs) without CAMI. Basically, TIM operations consist of incident detection, incident
verification and incident response. At the traffic management centre (TCC) with which we
collaborate, TCOs use more than fifteen different programs, software and devices to perform
TIM operations on a 24/7 basis. Many of these programs and device controllers have different
user interfaces. TCOs’ cognitive work load can become very high when emergency incidents
occur during peak hours every day. The CAMI project aims to significantly simplify TIM
operations and provide TCOs with a new user interface that can sense TCOs' cognitive workload
and provide cognitive support adaptively. CAMI will not touch the heavy back-end systems of
detection, verification, response, field device and personnel, shown on the left hand side of the
dotted line in Figure 1. CAMI will add in a decision-supporting middleware layer between these
components and the new multimodal user interface, as shown in Figure 2.

CAMI has four main parts: a multimodal interface which facilitates user’s input to the system
and detection of user’s cognitive state; an input and context analyzer (input analyzer, cognitive
load analyzer, user model, and context model) which analyzes user’s intention and cognitive
load level; an output and support part (cognitive aids, adaptive content presentation, incident
response plans, retrieval support) which provides the user with information display and tools based
on user needs and context; and finally an incident management engine which connects TIM
back-end, user input and output modules. A detailed description of these components and
modules is provided below. The term “user” is used interchangeably to mean Traffic Control
Officer (TCO).

A multimodal user interface (MMUI) is an emerging technology that allows a user to interact
with a computer by using his or her natural communication modalities, such as speech, pen,
touch, gestures, eye gaze, facial expression etc. An MMUI consists of devices and sensors, in
addition to a conventional keyboard and mouse, to facilitate the user's intuitive input to the system. In
CAMI the MMUI includes a microphone to capture user’s speech and voice during phone calls
and oral reports, which will be used for the detection and assessment of user cognitive load
levels. Speech can be used as an input mode for users to quickly find a location, or another
piece of information which may be difficult to do using key presses or mouse clicks on the on-
screen electronic forms. We also plan to include a video camera into the MMUI which can
analyze user’s eye-gaze movement and changes to user’s pupil diameter, both of which can also
provide indication of user cognitive load levels. Touch-screen and digital pen can also be part of
the MMUI that provides users with more direct and intuitive input modes. For the output part,
apart from LCD monitors for information display, speakers and alarm devices are included in
the MMUI to provide users with combined audio-visual information delivery.

The user input analyzer (UIA) receives all user inputs including key presses, mouse clicks,
mouse movement, speech and voice. It can also include user eye-gaze movement and pupil
diameter changes. It also keeps track of time and consults Context Model for context and task
related information. The main functionality of the UIA includes:

Filtering and passing user commands and communication (key presses, mouse clicks,
phone calls etc) to the TIM system, as if CAMI does not exist;
Analyzing user’s intention: what is he or she trying to achieve at the moment? What’s
user’s strategy to solve a problem? The UIA does this by consulting Context Model (CM)
and the TIM task model in CM. The idea is that for rule based tasks (e.g. a reported
incident has to be verified before any response actions are taken; if there is casualty in
an incident, ambulance must be called immediately; etc) the CM and TM can predict
and enforce (if necessary) compulsory actions. A task model for current task is built and
compared to the “ideal” task model.

Extracting features from user’s speech and voice signals for cognitive load analysis
purpose.
Calculating time a user has spent in current task or sub-task. The time information can
then be correlated with the CM data and user’s past performance for similar task or sub-
task. This information can also provide indication about whether user is in difficulty or
needs help.

28. Write a note on Failure Modes and Effects Analysis (FMEA), and Human
Factors Process FMEA?
Failure Mode Effects Analysis (FMEA)

Also called: potential failure modes and effects analysis; failure modes, effects and criticality
analysis (FMECA).

Failure modes and effects analysis (FMEA) is a step-by-step approach for identifying all possible
failures in a design, a manufacturing or assembly process, or a product or service.

“Failure modes” means the ways, or modes, in which something might fail. Failures are any
errors or defects, especially ones that affect the customer, and can be potential or actual.

“Effects analysis” refers to studying the consequences of those failures.

Failures are prioritized according to how serious their consequences are, how frequently they
occur and how easily they can be detected. The purpose of the FMEA is to take actions to
eliminate or reduce failures, starting with the highest-priority ones.

Failure modes and effects analysis also documents current knowledge and actions about the
risks of failures, for use in continuous improvement. FMEA is used during design to prevent
failures. Later it’s used for control, before and during ongoing operation of the process. Ideally,
FMEA begins during the earliest conceptual stages of design and continues throughout the life
of the product or service.

Begun in the 1940s by the U.S. military, FMEA was further developed by the aerospace and
automotive industries. Several industries maintain formal FMEA standards.

What follows is an overview and reference. Before undertaking an FMEA process, learn more
about standards and specific methods in your organization and industry through other
references and training.

When to Use FMEA

 When a process, product or service is being designed or redesigned, after quality function
deployment.
 When an existing process, product or service is being applied in a new way.
 Before developing control plans for a new or modified process.
 When improvement goals are planned for an existing process, product or service.
 When analyzing failures of an existing process, product or service.
 Periodically throughout the life of the process, product or service

FMEA Procedure

(Again, this is a general procedure. Specific details may vary with standards of your organization
or industry.)

1. Assemble a cross-functional team of people with diverse knowledge about the process,
product or service and customer needs. Functions often included are: design,
manufacturing, quality, testing, reliability, maintenance, purchasing (and suppliers),
sales, marketing (and customers) and customer service.
2. Identify the scope of the FMEA. Is it for concept, system, design, process or service?
What are the boundaries? How detailed should we be? Use flowcharts to identify the
scope and to make sure every team member understands it in detail. (From here on,
we’ll use the word “scope” to mean the system, design, process or service that is the
subject of your FMEA.)
3. Fill in the identifying information at the top of your FMEA form. Figure 1 shows a typical
format. The remaining steps ask for information that will go into the columns of the
form.

Figure 1 FMEA Example

4. Identify the functions of your scope. Ask, “What is the purpose of this system, design,
process or service? What do our customers expect it to do?” Name it with a verb
followed by a noun. Usually you will break the scope into separate subsystems, items,
parts, assemblies or process steps and identify the function of each.
5. For each function, identify all the ways failure could happen. These are potential failure
modes. If necessary, go back and rewrite the function with more detail to be sure the
failure modes show a loss of that function.

6. For each failure mode, identify all the consequences on the system, related systems,
process, related processes, product, service, customer or regulations. These are
potential effects of failure. Ask, “What does the customer experience because of this
failure? What happens when this failure occurs?”
7. Determine how serious each effect is. This is the severity rating, or S. Severity is usually
rated on a scale from 1 to 10, where 1 is insignificant and 10 is catastrophic. If a failure
mode has more than one effect, write on the FMEA table only the highest severity rating
for that failure mode.
8. For each failure mode, determine all the potential root causes. Use tools classified
as cause analysis tool, as well as the best knowledge and experience of the team. List all
possible causes for each failure mode on the FMEA form.
9. For each cause, determine the occurrence rating, or O. This rating estimates the
probability of failure occurring for that reason during the lifetime of your scope.
Occurrence is usually rated on a scale from 1 to 10, where 1 is extremely unlikely and 10
is inevitable. On the FMEA table, list the occurrence rating for each cause.
10. For each cause, identify current process controls. These are tests, procedures or
mechanisms that you now have in place to keep failures from reaching the customer.
These controls might prevent the cause from happening, reduce the likelihood that it
will happen or detect failure after the cause has already happened but before the
customer is affected.
11. For each control, determine the detection rating, or D. This rating estimates how well
the controls can detect either the cause or its failure mode after they have happened
but before the customer is affected. Detection is usually rated on a scale from 1 to 10,
where 1 means the control is absolutely certain to detect the problem and 10 means the
control is certain not to detect the problem (or no control exists). On the FMEA table,
list the detection rating for each cause.
12. (Optional for most industries) Is this failure mode associated with a critical
characteristic? (Critical characteristics are measurements or indicators that reflect
safety or compliance with government regulations and need special controls.) If so, a
column labeled “Classification” receives a Y or N to show whether special controls are
needed. Usually, critical characteristics have a severity of 9 or 10 and occurrence and
detection ratings above 3.
13. Calculate the risk priority number, or RPN, which equals S × O × D. Also calculate
Criticality by multiplying severity by occurrence, S × O. These numbers provide guidance
for ranking potential failures in the order they should be addressed.

14. Identify recommended actions. These actions may be design or process changes to
lower severity or occurrence. They may be additional controls to improve detection.
Also note who is responsible for the actions and target completion dates.
15. As actions are completed, note results and the date on the FMEA form. Also, note new
S, O or D ratings and new RPNs.
FMEA Example

A bank performed a process FMEA on their ATM system. Figure 1 shows part of it—the function
“dispense cash” and a few of the failure modes for that function. The optional “Classification”
column was not used. Only the headings are shown for the rightmost (action) columns.

Notice that RPN and criticality prioritize causes differently. According to the RPN, “machine
jams” and “heavy computer network traffic” are the first and second highest risks.

One high value for severity or occurrence times a detection rating of 10 generates a high RPN.
Criticality does not include the detection rating, so it rates highest the only cause with medium
to high values for both severity and occurrence: “out of cash.” The team should use their
experience and judgment to determine appropriate priorities for action.
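A short sketch of the two prioritization numbers is given below, using invented severity, occurrence and detection ratings rather than the actual values from the bank's worksheet; it shows why a detection rating of 10 can push a cause to the top of the RPN ranking while criticality, which ignores detection, ranks differently.

# RPN = S x O x D; criticality = S x O. Ratings below are hypothetical (1-10 scale).

causes = [
    {"cause": "machine jams",                   "S": 8, "O": 4, "D": 10},
    {"cause": "heavy computer network traffic", "S": 7, "O": 4, "D": 10},
    {"cause": "out of cash",                    "S": 8, "O": 7, "D": 2},
]

for c in causes:
    c["RPN"] = c["S"] * c["O"] * c["D"]
    c["criticality"] = c["S"] * c["O"]

print(max(causes, key=lambda c: c["RPN"])["cause"])          # machine jams
print(max(causes, key=lambda c: c["criticality"])["cause"])  # out of cash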

HUMAN FACTORS PROCESS FMEA


Methods, computer-readable media, and systems for automatically performing Human Factors
Process Failure Modes and Effects Analysis for a process are provided. At least one task
involved in a process is identified, where the task includes at least one human activity. The
human activity is described using at least one verb. A human error potentially resulting from
the human activity is automatically identified; the error is related to the verb used in
describing the task. The likelihood of occurrence, detection, and correction of the human error is
identified, along with the severity of its effect. From the likelihood of occurrence and the
severity, a risk of potential harm is identified. The risk of potential harm is
compared with a risk threshold to determine whether corrective measures are appropriate.

29. Discuss GOMS in human computer interaction?


GOMS is a modeling technique (more specifically, a family of modeling techniques) that
analyzes the user complexity of interactive systems. It is used by software designers to model

user behavior. The user's behavior is modeled in terms
of Goals, Operators, Methods and Selection rules, which are described below in more detail.
Briefly, a GOMS model consists of Methods that are used to achieve Goals. A Method is a
sequential list of Operators that the user performs and (sub)Goals that must be achieved. If
there is more than one Method which may be employed to achieve a Goal, a Selection rule is
invoked to determine what Method to choose, depending on the context.

Several variations of GOMS have been developed. To distinguish from later variants, the
original GOMS formulation is sometimes referred to as CMN-GOMS.

Scope and Application

A GOMS model provides the designer with a model of a user's behavior while performing well-
known tasks. These models can be used for a variety of purposes, as follows:

Functionality Coverage

If the designer has a list of likely user goals, GOMS models can be used to verify that a method
exists to achieve each of these goals.

Execution time

GOMS models can predict the time it will take for the user to carry out a goal (assuming an
expert user with no mistakes). This allows a designer to profile an application to locate
bottlenecks, as well as compare different UI designs to determine which one allows users to
execute tasks more quickly.

Help systems

Since GOMS models are an explicit representation of expert user activity, they can assist in
designing help systems and tutorials to assist users in achieving goals.

Principles:

Goals
Goals are what the user is trying to accomplish. These can be defined at various levels of
abstraction, from very high-level goals (e.g. WRITE-RESEARCH-PAPER) to low-level goals (e.g.
DELETE-WORD). Higher-level goals are decomposable into subgoals, and are arranged
hierarchically.

Operators
Operators are the elementary perceptual, motor or cognitive actions that are used to
accomplish the goals (e.g. DOUBLE-CLICK-MOUSE, PRESS-INSERT-KEY). Operators are not
decomposable: they are atomic elements in the GOMS model. Furthermore, it is generally
assumed that each operator requires a fixed amount of time for the user to execute, and that
this time interval is independent of context (e.g. CLICK-MOUSE button takes 0.20 seconds to
execute).

Methods
Methods are the procedures that describe how to accomplish goals. A method is essentially an
algorithm that the user has internalized that determines the sequence of subgoals and
operators necessary to achieve the desired goal. For example, one method to accomplish the
goal DELETE-WORD in the Emacs text editor would be to MOVE-MOUSE to the beginning of the
word, and PRESS-ALT-D-KEY-COMBINATION (the use-mouse-delete-word method). Another
method to accomplish the same goal could involve using the arrow keys to reach the beginning
of the word (the use-arrows-delete-word method).

Selection rules
Selection rules specify which method should be used to satisfy a given goal, based on the
context. Since there may be several different ways of achieving the same goal, selection rules
represent the user's knowledge of which method must be applied to achieve the desired goal.
Selection rules generally take the form of a conditional statement, such as "if the word to be
deleted is less than 3 lines away from the current cursor location, then use the use-arrows-
delete-word-method, else use the use-mouse-delete-word method".
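To show how a GOMS description can be turned into a rough execution-time prediction, here is a hypothetical Python sketch of the DELETE-WORD example; the operator timings, the assumption of five arrow presses per line, and the use of the 3-line threshold from the selection rule above are illustrative rather than measured values.

# GOMS sketch for the goal DELETE-WORD: two methods, a selection rule, and
# fixed (assumed) operator times used to predict expert execution time.

OPERATOR_TIME = {                # seconds per operator (illustrative values)
    "MOVE-MOUSE": 1.10,
    "PRESS-ALT-D": 0.40,
    "PRESS-ARROW-KEY": 0.20,
}

def use_mouse_delete_word(lines_away):
    return ["MOVE-MOUSE", "PRESS-ALT-D"]

def use_arrows_delete_word(lines_away):
    # Assume roughly five arrow presses per line to reach the word.
    return ["PRESS-ARROW-KEY"] * (lines_away * 5) + ["PRESS-ALT-D"]

def select_method(lines_away):
    """Selection rule: use the arrow keys when the word is close, else the mouse."""
    return use_arrows_delete_word if lines_away < 3 else use_mouse_delete_word

def predicted_time(lines_away):
    operators = select_method(lines_away)(lines_away)
    return sum(OPERATOR_TIME[op] for op in operators)

print(round(predicted_time(1), 2))   # 1.4 -- close word: arrow-key method
print(round(predicted_time(5), 2))   # 1.5 -- distant word: mouse method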

30. Discuss Effects of affect in detail?

Attention

One of the most important effects of emotion lies in its ability to capture
attention. Emotions have a way of being completely absorbing. Functionally, they direct and
focus our attention on those objects and situations that have been appraised as important to
our needs and goals so that we can deal with them appropriately. Emotion-relevant thoughts
then tend to dominate conscious processing—the more important the situation, the higher the
arousal, and the more forceful the focus. In an HCI context, this attention-getting function can
be used advantageously, as when a sudden beep is used to alert the user, or can be distracting,
as when a struggling user is frustrated and can only think about his or her inability. Emotion can
further influence attention through a secondary process of emotion regulation. Once an
emotion is triggered, higher cognitive processes may determine that the emotion is
undesirable. In such cases, attention is often directed away from the emotion-eliciting stimulus
for the purpose of distraction. For example, becoming angry with an onscreen agent may be
seen as ineffectual (i.e., because it doesn’t recognize your anger) or simply unreasonable. An
angered user may then actively try to ignore the agent, focusing instead on other onscreen or
off-screen stimuli, or even take the next step and completely remove the agent from the
interaction (which could mean leaving an application or Website entirely). Positive emotions
may likewise require regulation at times, such as when amusing stimuli lead to inappropriate
laughter in a work environment. If the emotionally relevant stimulus is too arousing, however,
regulation through selective attention is bound to fail (Wegner, 1994), because users will be
unable to ignore the stimulus. Mood can have a less profound but more enduring effect on
attention. At the most basic level, people tend to pay more attention to thoughts and stimuli
that have some relevance to their current mood state. However, people also often consciously
regulate mood, selecting and attending to stimuli that sustain desired moods or, alternatively,
counteract undesired moods. An interface capable of detecting—or at least predicting—a user’s
emotional or mood state could similarly assume an affect-regulation role, helping to guide
attention away from negative and toward more positive stimuli. For example, a frustrated user
could be encouraged to work on a different task, focus on a different aspect of the problem at
hand, or simply take a break (perhaps by visiting a suggested online entertainment site).

Memory
Emotion’s effect on attention also has implications for memory. Because emotion focuses
thought on the evoking stimulus, emotional stimuli are generally remembered better than
unemotional events .Negative events, which tend to be highly arousing, are typically
remembered better than positive events . In addition, emotionality “improves memory for
central details while undermining memory for background details” Mood also comes into play
in both memory encoding and retrieval. Research has shown that people will remember
“moodcongruent” emotional stimuli better than incongruent stimuli. Bower, Gilligan, and
Monteiro (1981), for example, hypnotized subjects into either a happy or sad mood before
having them read stories about various characters. The next day, subjects were found to
remember more facts about characters whose mood had agreed with their own than about
other characters. Similarly, on the retrieval end, people tend to better recall memories
consistent with their current mood (Ellis & Moore, 1999). However, the reverse effect has also
been shown to occur in certain situations; people will sometimes better recall
mood-incongruent memories (i.e., happy memories while in a sad mood). Parrott and Spackman
(2000) hypothesized that mood regulation is responsible for this inverse effect: When a given
mood is seen as inappropriate or distracting, people will often actively try to evoke memories
or thoughts to modify that mood (see the Affect Infusion Model (AIM) for insight into these
contradictory findings). Finally, there is some evidence for mood-dependent recall: Memories
encoded while in a particular mood are better recalled when in that same mood. This effect is
independent of the emotional content of the memory itself (Ucros, 1989). It should be noted,
however, that the effects of mood on memory are often unreliable and therefore remain
controversial.

Performance
Mood has also been found to affect cognitive style and performance. The most striking finding
is that even mildly positive affective states profoundly affect the flexibility and efficiency of
thinking and problem solving. In one of the best-known experiments, subjects were induced
into a good or bad mood and then asked to solve Duncker’s (1945) candle task. Given only a
box of thumbtacks, the goal of this problem was to attach a lighted candle to the wall, such that
no wax drips on the floor. The solution required the creative insight to thumbtack the box itself
to the wall and then tack the candle to the box. Subjects who were first put into a good mood
were significantly more successful at solving this problem. In another study, medical students
were asked to diagnose patients based on X-rays after first being put into a positive, negative,
or neutral mood. Subjects in the positive-affect condition reached the correct conclusion faster
than did subjects in other conditions. Positive affect has also been shown to increase heuristic
processing, such as reliance on scripts and stereotypes. Though some have argued that such
reliance is at the expense of systematic processing (Schwartz & Bless, 1991), more recent
evidence suggests that heuristic processing and systematic processing are not mutually
exclusive. Keeping a user happy may, therefore, not only affect satisfaction, but may also lead
to efficiency and creativity.

Assessment
Mood has also been shown to influence judgment and decision making. As mentioned earlier,
mood tends to bias thoughts in a mood-consistent direction, while also lowering the thresholds
of mood-consistent emotions. One important consequence of this is that stimuli—even those
unrelated to the current affective state—are judged through the filter of mood. This suggests
that users in a good mood will likely judge both the interface and their work more positively,
regardless of any direct emotional effects. It also suggests that a happy user at an e-commerce
site would be more likely to evaluate the products or services positively. Positive mood also
decreases risk-taking, likely in an effort to preserve the positive mood. That is, although people
in a positive mood are more risk-prone when making hypothetical decisions, when presented
with an actual risk situation, they tend to be more cautious. In an e-commerce purchasing
situation, then, one can predict that a low-risk purchase is more likely during a good mood, due
to a biased judgment in favor of the product, while a high-risk purchase may be more likely in a

less cautious, neutral, or negative mood (consistent with the adage that desperate people
resort to desperate measures). A mood’s effect on judgment, combined with its effect on
memory, can also influence the formation of sentiments. Sentiments are not necessarily
determined during interaction with an object; they often are grounded in reflection. This is
important to consider when conducting user tests, as the mood set by the interaction
immediately prior to a questionnaire may bias like/dislike assessments of earlier interactions.
Thus, varying order of presentation ensures both that later stimuli do not influence the
assessment of earlier stimuli, and that earlier stimuli do not influence the experience of later
stimuli (as discussed earlier).

31. Write a note on Sensor based Interaction?


Originally, sensor technology was developed to measure the environment, and allowed
computers to react to changes in the environment. Earliest uses of sensors were monitoring
activities, such as the thermostat of a central heating system. If the building was too cold the
heating was switched on. Nowadays, sensors are being used in a range of appliances that
interact with users, ranging from toilets to smart homes. Most notable is the use of sensor-
based interactions for turning taps on and off, flushing toilets and switching lights on and off.
Motivated by recent trends towards improving hygiene and saving energy, controls
that require explicit physical contact, such as taps, buttons and switches, are being replaced by
the detection of body movements (e.g., the presence of hands or a body moving) requiring no
physical contact. Ideally, this form of sensor-based controlling should be effortless and intuitive.
Walking into a room should be all that is needed for a light to come on. Likewise, placing one’s
hands in a basin should be all that is needed for the taps to turn on. However, it can often be
the case that we get caught out and frustrated by the novel mode of interaction. For example,
when we enter a new building that has sensor-based controls, we may not know what to do or
where to look to activate a device or change an aspect of the environment. The physical cues
(e.g., switches, taps) we have become accustomed to are no longer available in the environment
to guide our actions. Instead we must learn to move our bodies, or parts of them, in specific
ways to be detected by the sensors. A main effect of replacing physical control with sensor-
based control is to shift the agency of control. With physical-based interactions, the person is
largely in control, deciding when to start, how much and when to stop an operation. With
sensor-based interactions, the system is largely in control, deciding when to start, how much
and when to stop an operation – based on the detection of changes in the environment. The
problem with this shift in control is that, as Shneiderman (1998) has commented, in most
situations humans like to be in control of their actions and interactions. We like to know what is
going on, be involved in the action and have a sense of power over the system we are
interacting with. Issuing instructions to systems – through typing in commands, selecting
options from menus in a window environment or on a touch screen, or pressing buttons – provides us with this sense of control. We also like to be able to readily rectify mistakes, like
turning the tap to slow the flow of water or change its temperature, or pressing the undo button after selecting the wrong menu item. Most of this is currently difficult or impossible to do with sensor-based interfaces.
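
As a concrete illustration of this shift in agency, the sketch below shows how a sensor-driven light might be controlled: the system, not the user, decides when the light comes on and when the hold period expires. The sensor and light classes are hypothetical stand-ins, not a real device API.

# Illustrative sketch only: a sensor-driven light where the system decides
# when to act. PresenceSensor and Light are hypothetical stand-ins.
import time

HOLD_SECONDS = 30  # chosen by the designer/system; the user has no direct control

class PresenceSensor:
    def motion_detected(self) -> bool:
        raise NotImplementedError  # would poll real hardware here

class Light:
    def __init__(self):
        self.on = False

    def switch(self, state: bool):
        if state != self.on:
            self.on = state
            print("light", "on" if state else "off")

def control_loop(sensor: PresenceSensor, light: Light):
    last_motion = 0.0
    while True:
        if sensor.motion_detected():
            last_motion = time.time()
            light.switch(True)       # the system decides to start
        elif time.time() - last_motion > HOLD_SECONDS:
            light.switch(False)      # the system decides to stop
        time.sleep(0.5)              # no undo, no fine-grained adjustment by the user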

32. What is the difference between Emotions and Moods?


In everyday language, people often use the terms 'emotions' and 'moods' interchangeably, but
psychologists actually make distinctions between the two. How do they differ? An emotion is
normally quite short-lived, but intense. Emotions are also likely to have a definite and
identifiable cause. For example, after disagreeing with a friend over politics, you might feel
angry for a short period of time. A mood, on the other hand, is usually much milder than an
emotion, but longer-lasting. In many cases, it can be difficult to identify the specific cause of a
mood. For example, you might find yourself feeling gloomy for several days without any clearly
identifiable reason.

33. Discuss the measurement of affect in human computer interaction?


Measuring user affect can be valuable both as a component of usability testing and as an
interface technique. When evaluating interfaces, affective information provides insight into
what a user is feeling—the fundamental basis of liking and other sentiments. Within an
interface, knowledge of a user’s affect provides useful feedback regarding the degree to which
a user’s goals are being met, enabling dynamic and intelligent adaptation. In particular, social
interfaces (including character-based interfaces) must have the ability to recognize and respond
to emotion in users in order to effectively execute real-world interpersonal interaction strategies.

Neurological Responses

The brain is the most fundamental source of emotion. The most common way to measure
neurological changes is the electroencephalogram (EEG). In a relaxed state, the human brain
exhibits an alpha rhythm, which can be detected by EEG recordings taken through sensors
attached to the scalp. Disruption of this signal (alpha blocking) occurs in response to novelty,
complexity, and unexpectedness, as well as during emotional excitement and anxiety. EEG studies have further shown that positive/approach-related emotions lead to greater activation of the left anterior region of the brain, while negative/avoidance-related emotions lead to greater activation of the right anterior region. Indeed, when one flashes a picture to either the left or the right of where a person is looking, the viewer can identify a smiling face more quickly when it is flashed to the left hemisphere and a frowning face more quickly when it is flashed to the right hemisphere. Current EEG devices, however, are fairly clumsy and obstructive, rendering them impractical for most HCI applications. Recent advances in magnetic resonance imaging (MRI) offer great promise for emotion monitoring, but are currently unrealistic for HCI because of their expense, complexity, and form factor.
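
To give the idea of alpha blocking a rough computational form, the sketch below estimates power in the 8-13 Hz alpha band over sliding windows of an EEG trace and flags windows where it falls well below a resting baseline. The sampling rate and threshold are assumptions chosen for illustration, not values from the studies cited above.

# Rough sketch (assumed sampling rate and threshold): flag "alpha blocking",
# i.e. windows where 8-13 Hz alpha power drops well below a resting baseline.
import numpy as np

FS = 256           # samples per second (assumption)
WINDOW = 2 * FS    # 2-second analysis window

def alpha_power(segment: np.ndarray) -> float:
    """Mean spectral power of an EEG segment within the 8-13 Hz alpha band."""
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / FS)
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(len(segment)))) ** 2
    band = (freqs >= 8) & (freqs <= 13)
    return float(spectrum[band].mean())

def alpha_blocking_times(eeg: np.ndarray, resting_baseline: float, drop: float = 0.5):
    """Yield window start times (seconds) where alpha power is below
    drop * baseline, suggesting novelty, anxiety, or emotional excitement."""
    for start in range(0, len(eeg) - WINDOW + 1, WINDOW):
        if alpha_power(eeg[start:start + WINDOW]) < drop * resting_baseline:
            yield start / FS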

Autonomic Activity

Autonomic activity has received considerable attention in studies of emotion, in part due to
the relative ease in measuring certain components of the autonomic nervous system (ANS),
including heart rate, blood pressure, blood-pulse volume, respiration, temperature, pupil
dilation, skin conductivity, and more recently, muscle tension (as measured by
electromyography (EMG)). However, the extent to which emotions can be distinguished on the
basis of autonomic activity alone remains a hotly debated issue. At one extreme are those, following in the Jamesian tradition (James, 1884), who believe that each emotion has a unique autonomic signature—technology is simply not advanced enough yet to fully detect these differentiators. At the other extreme are those, following Cannon (1927), who contend that all emotions are accompanied by the same state of nonspecific autonomic
(sympathetic) arousal, which varies only in magnitude—most commonly measured by galvanic
skin response (GSR), a measure of skin conductivity (Schachter & Singer, 1962). This controversy
has clear connections to the nature-nurture debate in emotion, described earlier, because
autonomic specificity seems more probable if each emotion has a distinct biological basis, while
nonspecific autonomic (sympathetic) arousal seems more likely if differentiation among
emotions is based mostly on cognition and social learning.
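
A minimal sketch of the nonspecific-arousal view is given below: it reduces a galvanic skin response trace to a slow tonic level plus a count of discrete phasic responses. The sampling rate, smoothing window, and rise threshold are illustrative assumptions rather than standard values.

# Illustrative sketch of a nonspecific-arousal measure from skin conductance
# (GSR): a slow "tonic" level plus a count of discrete "phasic" responses.
import numpy as np

FS = 32  # GSR samples per second (assumption)

def arousal_summary(gsr_microsiemens: np.ndarray, rise_threshold: float = 0.05):
    """Return (mean tonic level, number of phasic responses) for a GSR trace."""
    # Tonic level: moving average over roughly 4 seconds.
    kernel = np.ones(4 * FS) / (4 * FS)
    tonic = np.convolve(gsr_microsiemens, kernel, mode="same")
    # Phasic component: what remains after removing the slow drift.
    phasic = gsr_microsiemens - tonic
    # Count upward threshold crossings as discrete skin-conductance responses.
    above = phasic > rise_threshold
    responses = int(np.count_nonzero(above[1:] & ~above[:-1]))
    return float(tonic.mean()), responses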

Facial Expression

Facial expression provides a fundamental means by which humans detect emotion. Table 4.1 describes characteristic facial features of six basic emotions. Endowing computers with the ability to recognize facial expressions, through pattern recognition of captured images, has proven to be a fertile area of research. Ekman and Friesen’s (1977) Facial Action Coding System (FACS), which identifies a highly specific set of muscular movements for each emotion, is one of the most widely accepted foundations for facial-recognition systems.
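
The sketch below conveys the flavour of a FACS-style recognizer: detected facial Action Units (AUs) are matched against per-emotion prototypes. The AU sets shown are abridged illustrations rather than the full Ekman and Friesen coding scheme, and a real system would first have to score AU activity from captured images rather than take it as input.

# Simplified sketch of the FACS idea: match detected Action Units (AUs)
# against abridged per-emotion prototypes (not the complete coding scheme).
EMOTION_PROTOTYPES = {
    "happiness": {6, 12},             # cheek raiser, lip corner puller
    "sadness":   {1, 4, 15},          # inner brow raiser, brow lowerer, lip corner depressor
    "surprise":  {1, 2, 5, 26},       # brow raisers, upper lid raiser, jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},
}

def classify(detected_aus: set) -> str:
    """Return the emotion whose AU prototype best overlaps the detected AUs."""
    def overlap(emotion):
        prototype = EMOTION_PROTOTYPES[emotion]
        return len(prototype & detected_aus) / len(prototype)
    return max(EMOTION_PROTOTYPES, key=overlap)

print(classify({6, 12, 25}))  # -> happiness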

Voice

Voice presents yet another opportunity for emotion recognition. Emotional arousal is the most
readily discernible aspect of vocal communication, but voice can also provide indications of
valence and specific emotions through acoustic properties such as pitch range, rhythm, and
amplitude or duration changes (Ball & Breese, 2000; Scherer, 1989). A bored or sad user, for
example, will typically exhibit slower, lower-pitched speech, with little high frequency energy,
while a user experiencing fear, anger, or joy will speak faster and louder, with strong high-
frequency energy and more explicit enunciation. Murray and Arnott (1993) provided a detailed
account of the vocal effects associated with several basic emotions. Though few systems have been built for automatic emotion recognition through speech, Banse and Scherer (1996) have
demonstrated the feasibility of such systems. Cowie and Douglas-Cowie’s ACCESS system also
presents promise.
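
As an illustration of the acoustic properties mentioned above, the sketch below extracts a few crude prosodic features (pitch, overall energy, and the share of high-frequency energy) from a mono speech waveform. The feature set and frequency bounds are illustrative assumptions; a practical system would use a dedicated speech-analysis library and a trained classifier.

# Sketch of crude prosodic cues from a mono speech waveform; frequency
# bounds and features are illustrative, not a production recipe.
import numpy as np

def prosodic_features(signal: np.ndarray, fs: int) -> dict:
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2

    # Crude fundamental-frequency estimate: strongest spectral peak in 60-400 Hz.
    voiced = (freqs > 60) & (freqs < 400)
    pitch_hz = float(freqs[voiced][np.argmax(spectrum[voiced])])

    return {
        "pitch_hz": pitch_hz,                    # tends to rise with joy, anger, or fear
        "energy": float(np.mean(signal ** 2)),   # louder speech under high arousal
        "hf_ratio": float(spectrum[freqs > 1000].sum() / spectrum.sum()),  # low when bored or sad
    }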

Affect Recognition by Users

Computers are not the only (potential) affect recognizers in human–computer interactions.
When confronted with an interface—particularly a social or character-based interface— users
constantly monitor cues to the affective state of their interaction partner, the computer
(though often nonconsciously; see Reeves & Nass, 1996). Creating natural and efficient
interfaces requires not only recognizing emotion in users, but also expressing emotion.
Traditional media creators have known for a long time that portrayal of emotion is a
fundamental key to creating the “illusion of life” (Jones, 1990; Thomas & Johnston, 1981; for discussions of believable agents and emotion, see, e.g., Bates, 1994; Maldonado, Picard, &
Hayes-Roth, 1998). Facial expression and gesture are the two most common ways to manifest
emotion in screen-based characters. Though animated expressions lack much of the intricacy
found in human expressions, users are nonetheless capable of distinguishing emotions in
animated characters. As with emotion recognition, Ekman and Friesen’s (1977) Facial Action
Coding System (FACS) is a commonly used and well-developed method for constructing
affective expressions. One common strategy for improving accurate communication with
animated characters is to exaggerate expressions, but whether this leads to corresponding
exaggerated assumptions about the underlying emotion has not been studied.
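
The exaggeration strategy itself can be expressed very simply, as in the sketch below, which scales hypothetical Action Unit intensities driving a character rig and clamps them to their valid range.

# Sketch of the exaggeration strategy: scale hypothetical AU intensities
# (0-1 range) before driving an animated character, clamping to 1.0.
def exaggerate(action_units, factor=1.5):
    """Amplify each AU intensity by `factor`, clamped to the valid range."""
    return {au: min(1.0, intensity * factor) for au, intensity in action_units.items()}

# A mild smile (AU6 cheek raiser at 0.5, AU12 lip corner puller at 0.8)
# becomes a broad one.
print(exaggerate({6: 0.5, 12: 0.8}))  # -> {6: 0.75, 12: 1.0}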

34. What is the definition of cognitive complexity?

Cognitive complexity is the psychological characteristic or variable that shows how complex or
simple the frame and perceptual skill of a person are. It is the extent to which a person
differentiates and integrates an event. A person who measures highly on cognitive complexity
tends to observe gradations and subtle differences, while a person with a less complex cognitive structure for the task does not.

35. Write a note on Consequences of Human Error.


The consequences of human error range from serious accidents through to events with no apparent lasting effect. Serious accidents tend to be investigated, whereas less serious incidents may not even be reported. It should be remembered that any incident has the potential to lead to something more serious. Knox [21] has represented the number of incidents compared to their seriousness as a pyramid:

Time lost injury (top of the pyramid: most serious, least frequent)
Non-disabling injury
Property damage and financial loss
Unreported incidents or near misses (base of the pyramid: least serious, most frequent)

Of course, the consequences of human error depend on who makes the error, and what error
has been made. If an operator makes a mistake or slip, the consequence will either be that the
equipment required will not operate or its operation will be delayed. Latent errors are much
more important. Design errors may mean that the system will not respond to multiple failures
or that a series of events will prevent a safe state from being reached. Design is also important in ensuring that operator stress is kept to a minimum. Fabrication and maintenance errors will
lead to poor reliability. Management errors can lead to a generally poor attitude to safety
which will spread to all areas of the company.
