
Lewis 1

Karyn N. Lewis
0105-440-01 Internet Marketing
Rochester Institute of Technology Winter 2008
Research Paper

Evaluating Web Site Quality: A Benchmarking Approach

Executive Summary
Quality web site design is a critical success factor for corporations with an e-commerce strategy. The
assessment of web site quality is considered a problem of measuring user satisfaction in order to analyze user
perceptions and preferences, requiring a set of guidelines—or criteria—from which corporations can benchmark their
design activities. An effective web site evaluation model is one that makes it possible to pinpoint the web
characteristics that contribute to improving virtual customers’ attitudes and purchasing intentions. Much of the
previous research done on web quality assessment proposes a multivariate approach because of the complexity and
the multi-dimensional nature of the problem. From specified quality features, a set of satisfaction criteria can be
assessed reflecting all aspects of user perceptions about web site quality. This method of data collection and
analysis assumes a multicriteria preference disaggregation approach following an ordinal regression model, which
provides quantitative measures of customer satisfaction considering the qualitative form of customers’ judgments.
The main objective of this method is the aggregation of individual judgments into a collective value function, assuming
the client’s satisfaction depends on a set of criteria. From this data, a series of perceptual maps—comparative
performance diagrams—can be developed to benchmark a set of competing organizations and define possible
improvement actions according to customer satisfaction.

Introduction
In the virtual enterprise, quality web site design is a critical success factor for corporations with an e-
commerce strategy because most contact with customers hinges on their interaction through a web site (Joia &
Barbosa de Oliveira, 2008). Most B2C e-commerce web sites, however, were not developed with this interaction in mind.
Although companies may try to emulate human behavior with technology, the interaction is different due to many
aspects that technology cannot replace—courtesy, friendliness, helpfulness, care, commitment, flexibility, and
cleanliness. With the proliferation of web sites and the commercial interests invested in them, the assessment of web
site quality has evolved as an important activity (Grigoroudis et al., 2008). Business organizations throughout the
world invest a great deal of time and money in order to develop and maintain quality sites to provide effective
communication and information channels to their customers.
The assessment of web site quality is considered a problem of measuring user satisfaction in order to
analyze user perceptions and preferences (Grigoroudis et al., 2008). Avoidance of poor web site design demands a
set of guidelines—or criteria—from which corporations can benchmark their design activities (Kim et al., 2003).
Having this basic set of criteria serves the corporation’s effort when evaluating, comparing, monitoring, managing,
and designing web sites.

The Need for Web Site Quality Assessment


Modern web sites present a wide variety of features, structural complexity, and offered services
(Grigoroudis et al., 2008). Evaluation is an aspect of site development and operation that often contributes to
maximizing invested resources in serving user needs and expectations. A quality-oriented approach to web site
assessment would consider the web site as the product and the user as the customer, focusing on the analysis and
assessment of web site features that affect overall user satisfaction (Zhang & von Dran, 2001). Thus, exploring web
site quality and user expectations is the key to site improvement.
When a consumer accesses a corporation’s web site, the appearance, structure, and maintenance status all
influence the consumer’s perception of both the transaction experience and the corporate image (Kim et al., 2003).
These characteristics of customer and company interaction developed via a web site are linked to the subjective and
objective elements that influence purchase, which makes careful web site planning of paramount importance. An
effective web site evaluation model is one that makes it possible to pinpoint the web characteristics that contribute to
improving virtual customers’ attitudes and purchasing intentions. The goal is to clearly distinguish the external
aspects of web sites—those related to the particular behaviors of the user—from their internal aspects, or
those related to their design (Joia & Barbosa de Oliveira, 2008). In this way, the relationships between the many
different factors affecting a web site’s rating can be found in order to understand their contribution to the closure of an
online purchase.

Existing Web Site Quality Assessment Research


Literature on web site quality assessment includes the perspectives of a broad range of experts in human
factors, cognitive psychology and web development, as well as research addressing issues associated with the
design and usability of web products (Grigoroudis et al., 2008). Traditional research on web site evaluation methods
offers insight into achieving usable web-based interfaces. Several authors have undertaken studies regarding B2C e-
commerce web site evaluation using a variety of conventional methodologies including usability testing (e.g.,
Zimmerman et al., 1998 cited in Hassan & Li, 2005), expert review (e.g., Zhang & von Dran, 2000 cited in Hassan &
Li, 2005), case studies (e.g., Smith, Newman, & Parks, 1997 cited in Hassan & Li, 2005), and automated assessment
(e.g., Tauscher & Greenberg, 1997 cited in Hassan & Li, 2005).
In several cases, web site quality has been related to the level of user expectations fulfillment and quality
standards (Grigoroudis et al., 2008). The SERVQUAL model (Parasuraman et al., 1985, 1988, 1991 cited in
Grigoroudis et al., 2008) has also been used for web site quality assessment, proposing a universal set of quality
dimensions (tangibles, reliability, responsiveness, assurance and empathy). Much of the previous research done on
web quality assessment proposes a multivariate approach because of the complexity and the multi-dimensional
nature of the problem. Several web site quality aspects can be found in the literature, which may be summarized in
the following principal quality dimensions listed by Grigoroudis et al. (2008):
a. Content: Related to the responsiveness of a web site to satisfy user inquiry and trust regarding the information
offered (Beck, 1997 cited in Grigoroudis et al., 2008); described in several dimensions such as utility of content
(Grose et al., 1998 cited in Grigoroudis et al., 2008), content integration (Winkler, 2001 cited in Grigoroudis et al.,
2008), completeness of information, subject specialization (Nielsen, 2002 cited in Grigoroudis et al., 2008), and
content credibility.
b. Personalization: Examined at the following levels: personalization of information (Blankenship, 2001 cited in
Grigoroudis et al., 2008), personalization of interface (Brusilovsky, 2001 cited in Grigoroudis et al., 2008), and
personalization of layout (Winkler, 2001 cited in Grigoroudis et al., 2008).
c. Navigation: Reflects the support provided to users when moving in and around a site, including the following
dimensions: convenience of navigation tools (Vora, 1998 cited in Grigoroudis et al., 2008), means of navigation,
ease of use of navigation tools, and links to other sites.
d. Structure and design: Examined at the following levels: loading speed (Virpi and Kaikkonen, 2003 cited in
Grigoroudis et al., 2008), technical integrity (Shredo, 2001 cited in Grigoroudis et al., 2008), real time
information, software requirements, and browser capability (Vora, 1998 cited in Grigoroudis et al., 2008).
e. Appearance and multimedia: Captures aspects related to a web site’s look and feel, placing emphasis on
state-of-the-art graphics and multimedia artifacts. Examined at the following levels: graphics representation, existence
and usability of images, and voice and video.

In the current era of e-commerce, researchers must consider the addition of business-related evaluation
criteria to the more dominant ones (Kim et al., 2003). Researchers have recently suggested various personalization
schemes that ostensibly make the visitor interested in the site and guide him or her to important pages (Jenamani &
Mohapatra, 2006). Studies have been conducted to provide unique navigation structure for the individual user using
personalization techniques built on user behavior models. According to Zhang and von Dran (2001), however,
designers must first have a thorough understanding of the different quality dimensions that affect user expectations.
They must then be able to relate these quality characteristics to specific design features.

Application & Understanding of the Benchmarking Technique


Benchmarking is a measuring method widely used to improve organizations (Elmuti, 1998). The essence of
benchmarking is the process of identifying the highest standards of excellence for products, services, or processes,
and then making the improvements necessary to reach those standards. It is more than just a means of gathering
data on how well a company performs against others—it is a method of identifying new ways to improve processes.
Benchmarking strategies establish methods of measuring each area in terms of units of output as well as cost,
ultimately helping organizations faithful to the process achieve cost savings. Successful implementation of
benchmarking has been credited with helping to improve quality, cut costs in manufacturing and development time,
increase productivity, and improve overall organizational effectiveness. In web site quality evaluation, benchmarking
can be used to measure the performance of a web site against its competitors’ in order to identify its strengths and
weaknesses.

Table 1
Web site quality assessment criteria

Relevance: Captures the user’s perception of how significant the web site’s content is to the visitor’s inquiry. The web site is relevant to the degree that the information it contains matches the visitor’s interests.
Usefulness: Extends relevance to the nature of the specific visitor’s inquiry. Developers should continuously check the information contained in the web site to assess its usefulness to a wide audience of visitors. Web site administrators often ask visitors to evaluate the information provided, or offer star-based ratings of web site material.
Reliability: Relates to the accuracy of the information contained in the web site. Designers often include a note about the last update of information, helping the visitor form an opinion about the web site’s reliability.
Specialization: Captures the specificity of the information contained in the web site. It contributes to relevance and usefulness, yet places a heavier burden on reliability.
Architecture: Concerns the way content is organized in a web site, focusing in particular on the arrangement of the objects used to convey information to visitors.
Navigability: Reflects both the ease and the convenience of moving in and around the site.
Efficiency: Captures the technical performance characteristics of the web site (e.g., is it fast? Does the visitor get advance notice of the estimated time it may take to retrieve information?).
Layout: Reflects the unique aspects of the web site involved in the presentation of objects. It is related to the site’s architecture, yet differentiates the web site based on its unique design characteristics.
Animation: Concerns the moving objects involved in the presentation of information and in web site–user interaction.

Source: Moustakis et al., 2004, cited in Grigoroudis et al., 2008, p. 1348

In order to analyze the data accrued from a satisfaction survey, statistical methods and multivariate analysis
techniques must be used (Joia & Barbosa de Oliveira, 2008). The former are useful for analyzing each variable
graphically, identifying outliers, and confirming necessary premises. The latter are useful for validating the proposed
model as a whole as well as the relationships among the constructs.
From the aforementioned quality features, a set of satisfaction criteria can be assessed reflecting all aspects
of user perceptions about web site quality, as seen in Table 1. These satisfaction criteria are based on research by
Moustakis et al. (2004), who conducted a wide experimental survey to derive a relevant set of quality
features pertaining to user perceptions and preferences (Grigoroudis et al., 2008). Customer satisfaction surveys
often include both overall and partial satisfaction judgments for a set of criteria, requiring satisfaction judgments to be
included in the qualitative data collected in a benchmarking analysis. Thus, satisfaction-benchmarking analysis is
usually a problem of exploring this ordinal data and evaluating collective measures of satisfaction performance.
This method of data collection and analysis assumes a multicriteria preference disaggregation approach
following an ordinal regression model which provides quantitative measures of customer satisfaction considering the
qualitative form of customers’ judgments (Grigoroudis et al., 2008). The main objective of this method is the
aggregation of individual judgments into a collective value function, assuming the client’s satisfaction depends on a
set of criteria like the aforementioned characteristics used to assess web site quality. This ensures maximum
consistency between value functions and customers’ judgments. The results of the method would be based on
evaluation criteria weights and additive/marginal value functions.
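The additive model described above can be illustrated with a minimal sketch. The criterion weights, the ordinal satisfaction scale, and the marginal value curves below are all illustrative assumptions, not estimates from real survey data; in the actual method these quantities would be produced by the ordinal regression step.

```python
# Hypothetical sketch of an additive value model in the spirit of the
# multicriteria preference disaggregation approach. All numbers are
# illustrative assumptions, not results from any real survey.

# Criterion weights (assumed; in an additive model they sum to 1).
weights = {"relevance": 0.25, "navigability": 0.20,
           "reliability": 0.30, "layout": 0.25}

# Marginal value functions: map each ordinal satisfaction level
# (1 = very dissatisfied ... 5 = very satisfied) to a 0-100 value.
# In the actual method these curves are estimated, not assumed.
marginal = {c: {1: 0, 2: 25, 3: 50, 4: 80, 5: 100} for c in weights}

def global_satisfaction(judgments):
    """Aggregate one customer's ordinal judgments into a global index."""
    return sum(weights[c] * marginal[c][level]
               for c, level in judgments.items())

customer = {"relevance": 4, "navigability": 3,
            "reliability": 5, "layout": 4}
print(global_satisfaction(customer))  # weighted sum of marginal values
```

In this toy example the customer’s global satisfaction index works out to 80 on the 0–100 scale; changing any weight or marginal curve shifts the index accordingly, which is what makes the estimated weights interpretable as criterion importance.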
From this data, a series of perceptual maps—comparative performance diagrams—can be developed to
benchmark a set of competing organizations and define possible improvement actions according to customer
satisfaction (Grigoroudis et al., 2008). Criteria weights and satisfaction indices could be combined to indicate the
strong and weak points of customer satisfaction and define required improvement efforts. These diagrams would
present the average satisfaction indices of a particular company in relation to its competition. They are divided into
four quadrants and can be used as a benchmarking tool in order to assess the performance of different
characteristics of a company’s web site against its competitors, as in Fig. 1 and Fig. 2 below.
Fig. 1. Action diagram.
Fig. 2. Comparative performance diagram.

Source: Grigoroudis et al., 2008
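The four-quadrant logic of such an action diagram can be sketched as follows. The quadrant labels, cut-off points, and example data here are assumptions borrowed from common importance–performance analysis, not taken from the source figures.

```python
# Hypothetical sketch of a four-quadrant action diagram: each criterion
# is placed by its importance (weight) and average satisfaction index.
# Quadrant names, thresholds, and data are illustrative assumptions.

def quadrant(weight, satisfaction, w_cut=0.5, s_cut=50):
    """Classify a criterion relative to the quadrant cut-off points."""
    if weight >= w_cut and satisfaction < s_cut:
        return "action opportunity"      # important but underperforming
    if weight >= w_cut:
        return "leverage opportunity"    # important and performing well
    if satisfaction < s_cut:
        return "status quo"              # less important, low satisfaction
    return "transfer resources"          # less important, high satisfaction

criteria = {"relevance": (0.8, 81), "efficiency": (0.7, 46),
            "animation": (0.2, 31), "layout": (0.3, 67)}
for name, (w, s) in criteria.items():
    print(name, "->", quadrant(w, s))
```

Criteria falling in the "action opportunity" quadrant (high importance, low satisfaction) are the natural targets for improvement effort, while "transfer resources" criteria may be absorbing more effort than their importance warrants.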

Web sites usually contain enormous amounts of information spanning hundreds of pages (Jenamani &
Mohapatra, 2006). Without proper guidance, a visitor often wanders aimlessly without visiting important pages, loses
interest, and leaves the site sooner than expected. Therefore, web sites need efficient designs that aid users in
retrieving the right information at the right time in order to increase their competitiveness. The benchmarking process
makes it easy to identify the gap between where the organization is and where it would like to be, providing a
measurement of the improvement an organization would like to make (Elmuti, 1998).

The satisfaction benchmarking analysis is mainly focused on the performance evaluation of competitive
organizations against the satisfaction criteria, as well as the identification of the competitive advantages of each
company (Grigoroudis et al., 2008). The analysis is based on comparative performance diagrams produced by the
multicriteria preference disaggregation method, which can help each company to locate its position against
competitors, pinpoint weak points and determine which criteria will improve its overall site performance, as in Table 2.
Table 2
Example: Average satisfaction indices (%)
Criteria Company A Company B Company C
Relevance 81 83 73
Usefulness 79 73 70
Reliability 79 82 75
Specialization 77 76 68
Architecture 71 68 72
Navigability 72 72 62
Technical efficiency 46 51 47
Layout 67 74 62
Animation 31 23 21
Global 68 66 61
Source: Grigoroudis et al., 2008, p. 1353
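The competitive positioning described above can be sketched directly from the indices in Table 2: for each criterion, a company’s index is compared against the best competitor’s, so that negative gaps flag weak points and positive gaps flag competitive advantages. The gap metric itself is an illustrative choice, not part of the source method.

```python
# Sketch of a competitive-gap analysis using the average satisfaction
# indices (%) from Table 2. The gap metric (difference from the best
# competitor per criterion) is an illustrative assumption.

indices = {
    "Relevance":            {"A": 81, "B": 83, "C": 73},
    "Usefulness":           {"A": 79, "B": 73, "C": 70},
    "Reliability":          {"A": 79, "B": 82, "C": 75},
    "Specialization":       {"A": 77, "B": 76, "C": 68},
    "Architecture":         {"A": 71, "B": 68, "C": 72},
    "Navigability":         {"A": 72, "B": 72, "C": 62},
    "Technical efficiency": {"A": 46, "B": 51, "C": 47},
    "Layout":               {"A": 67, "B": 74, "C": 62},
    "Animation":            {"A": 31, "B": 23, "C": 21},
}

def gaps(company):
    """Difference between a company's index and its best competitor's."""
    return {crit: scores[company] - max(v for k, v in scores.items()
                                        if k != company)
            for crit, scores in indices.items()}

# List Company A's criteria from largest weakness to largest strength.
for crit, gap in sorted(gaps("A").items(), key=lambda kv: kv[1]):
    print(f"{crit}: {gap:+d}")
```

For Company A this ranking surfaces Layout and Technical efficiency as its largest deficits against the competition, and Animation and Usefulness as its clearest advantages, matching a visual reading of Table 2.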

Conclusion
As e-commerce expands, the design of web sites becomes a critical success factor (Kim et al., 2003). Web
sites are often the main interface between businesses and consumers, making their design as important as a store’s
layout and aesthetics. In order to check the efficiency and effectiveness of a design, good evaluation criteria are
needed. It is critical to consider all aspects of the virtual arena, so as not to miss important explanations about a web
site’s performance (Joia & Barbosa de Oliveira, 2008). Moreover, the user’s characteristics must be established in
order to better understand his/her behavior in a digital environment.
The main objective of a user satisfaction analysis is the identification of customers’ attitudes and
preferences. This type of analysis would include relative importance and level of demand for different satisfaction
dimensions of customers (Grigoroudis et al., 2008). The main advantage of this type of method is that it fully
considers the qualitative form of customer judgements and preferences, as expressed through customer satisfaction
surveys. Furthermore, the method is able to assess an integrated benchmarking system, given the wide range of
results provided. Thus, discussion is not solely focused on the descriptive analysis of customer satisfaction data, but
is able to emphasize customer preferences and expectations.
Customer satisfaction benchmarking analysis is a useful tool for modern business organizations in order to
locate their position against competition (Grigoroudis et al., 2008). It provides an organization with the ability to
identify the most critical improvement actions and adopt the best practices of an industry. This type of coordination
and cooperation for conducting analysis demands a great deal of integration that is normally still difficult to do (Joia &
Barbosa de Oliveira, 2008).


Bibliography

Beck, S., 1997. Evaluation Criteria. The good, the bad and the ugly: Or, why it’s a good idea to evaluate web sources. [Online]
Available at: http://lib.nmsu.edu/instruction/evalcrit.html [Accessed 18 January 2009].

Blankenship, E., 2001. Portal design vs. web design. [Online] Available at:
http://www.sapdesignguild.org/editions/edition3/graphic.asp [Accessed 18 January 2009].

Brusilovsky, P., 2001. Adaptive Hypermedia. User Modeling and User Adaptive Interaction, 11 (1/2), pp. 87-110.

Elmuti, D., 1998. The perceived impact of the benchmarking process on organizational effectiveness. Production and Inventory
Management Journal, Third Quarter, 39 (3), pp. 6-12.

Grigoroudis, E., Litos, C., Moustakis, V.A., Politis, Y. & Tsironis, L., 2008. The assessment of user-perceived web quality:
Application of a satisfaction benchmarking approach. European Journal of Operational Research, 187 (2008), pp.
1346-1357.

Grose, E., Forsythe, C., & Ratner, J., 1998. Using web and traditional style guides to design web interfaces. In: Human Factors
and Web Development. New Jersey: Lawrence Erlbaum Associates, pp. 121-131.

Hassan, S. & Li, F., 2005. Evaluating the Usability and Content Usefulness of Web Sites: A Benchmarking Approach. Journal of
Electronic Commerce in Organizations, Apr.-Jun., 3 (2), pp. 46-68.

Jenamani, M. & Mohapatra, P., 2006. Design benchmarking, user behavior analysis and link-structures personalization in
commercial web sites. Internet Research, 16 (3), pp. 248-266.

Joia, L.A. & Barbosa de Oliveira, L.C., 2008. Development and Testing of an E-Commerce Web Site Evaluation Model. Journal
of Electronic Commerce in Organizations, Jul.-Sep., 6 (3), pp. 37-54.

Kim, S., Shaw, T. & Schneider, H., 2003. Web site design benchmarking within industry groups. Internet Research, 13 (1), pp.
17-26.

Misic, M.M. & Johnson, K., 1999. Benchmarking: A tool for Web Site evaluation and improvement. Internet Research: Electronic
Networking and Policy, 9 (5), pp. 383-392.

Moustakis, V., Litos, C., Dalivigas, A. & Tsironis, L., 2004. Website assessment criteria. In: Proceedings of International
Conference on Information Quality. Boston: MIT, pp. 59-73.

Nielsen, J., 2002. Designing web usability: The practice of simplicity. Indianapolis: New Riders Publishing.

Parasuraman, A., Zeithaml, V.A., & Berry, L.L., 1985. A conceptual model of service quality and its implications for future
research. Journal of Marketing, 49, pp. 41-50.

Parasuraman, A., Zeithaml, V.A., & Berry, L.L., 1988. SERVQUAL: A multiple item scale for measuring consumer perceptions of
service quality. Journal of Retailing, 64 (1), pp. 14-40.

Parasuraman, A., Zeithaml, V.A., & Berry, L.L., 1991. Refinement and reassessment of the SERVQUAL scale. Journal of
Retailing, 67 (4), pp. 420-450.

Smith, P.A., Newman, L.A. & Parks, L.M., 1997. Virtual hierarchy and virtual networks: Some lessons from hypermedia usability
research applied to the World Wide Web, International Journal of Human-Computer Studies, 47 (1), pp. 67-95.

Tauscher, L. & Greenberg, S., 1997. How people revisit web pages. International Journal of Human-Computer Studies, 47 (1),
pp. 97-137.

Virpi, R. & Kaikkonen, A., 2003. Acceptable download times in the mobile internet. In: Proceedings of the 10th International
Conference on Human-Computer Interaction. New Jersey: Lawrence Erlbaum Associates, pp. 1467-1472.

Vora, P., 1998. Human factors methodology for designing web sites. In: Human Factors and Web Development. New Jersey:
Lawrence Erlbaum Associates, pp. 189-198.

Winkler, R., 2001. Portals-The all-in-one web supersites: Features, functions, definition, taxonomy. [Online] Available at:
http://www.sapdesignguild.org/editions/edition3/overview_edition3.asp [Accessed 18 January 2009].

Zhang, P. & von Dran, G.M., 2000. Satisfiers and dissatisfiers: A two-factor model for web site design and evaluation. Journal of
the American Society for Information Science, 57 (14), pp. 1253-1268.

Zhang, P. & von Dran, G.M., 2001. User expectations and rankings of quality factors in different web site domains. International
Journal of Electronic Commerce, 6 (2), 9-33.

Zimmerman, E.D., Muraski, M., Palmquist, M., Estes, M., McClintoch, C. & Bilsing, L., 1998. Examining WWW designs: Lessons
from pilot studies. [Online] Available at: http://www.microsoft.com/usability/webconf/zimmerman.htm [Accessed 13
January 2009].
