
Methodological issues in patient satisfaction surveys


Binshan Lin and Eileen Kelly

Introduction
The nature of quality of care has evolved from its original meaning of conformance to specification[1] towards a systematic view[2] and, more broadly, a concept expressed as the multidimensionality of quality management in health care[3]. The scope of quality measurement has shifted from a bias reflecting professional consensus towards a shared expression that includes the patient's real and perceived expectations of quality[4]. According to Gonnella[5], to measure quality in the health care industry better, one must look at the entire process, including the setting in which care is rendered, the patient receiving the care, and the competence of those delivering the care. This requirement adds to the complexity of measurement, because in health care two outcomes are rarely alike.

Patient satisfaction constitutes a crucial aspect of quality of care[6-8], and Donabedian[9] identified patient satisfaction as a key outcome of care. The earliest studies of patient satisfaction date from the mid-1950s (for example, [10,11]). The depth and richness of this stream of literature provide physicians and their administrators with adequate knowledge of the measurement of quality of care. Recent research has highlighted the importance of understanding and accurately measuring health care quality from a patient-based marketing perspective[12].

Patient satisfaction surveys have been used extensively as the method of assessing patient satisfaction[13]. Such surveys fulfil two needs: they provide a summative evaluation for researchers and a formative evaluation for health care practitioners. A researcher, for example, might want to investigate generalizable relationships between patient satisfaction and other variables, whereas a health care practitioner might be more interested in a patient satisfaction survey as a feedback mechanism for uncovering patients' perceptions of strengths and weaknesses.

A patient satisfaction survey can be a rich source of information for generating continuous quality improvements, but only if it is examined carefully and used within a consistent framework. It is essential to evaluate the reliability of the method, particularly since attitudinal studies using questionnaires are the most common method of measuring the quality of care in hospitals[14] and since there is a lack of standardized instruments for measuring patients' experience of, and satisfaction with, hospital care[15]. The major purpose of this study is to reassert the importance of studying patient satisfaction surveys and to clarify some of the methodological problems. Attention focuses on four aspects of general interest to those conducting surveys: sampling frames; the quality of survey data and instruments; non-response; and the reporting and interpretation of results. These issues form the context for our discussion of current methodologies for patient satisfaction surveys, and their treatment is prescriptive as well as descriptive and evaluative. Implications of these guidelines are presented as they relate to the development and implementation of patient satisfaction surveys. Finally, a research agenda is laid out, extrapolating future requirements for patient satisfaction surveys.

Problems of sampling frames


Two kinds of patient satisfaction survey instrument have been developed: those that focus on general patient satisfaction[16] and, more recently, those that focus on satisfaction with specific facilities[17]. Patient satisfaction surveys usually have a high level of internal validity, since satisfaction is expressed in terms of the patients in the facility rather than all patients in general[4]. Unfortunately, such surveys are vulnerable to haphazard sampling procedures that introduce biases of unknown magnitude into their results. Constructing a proper sampling plan is crucial to eliminating potentially large biases.


The way to evaluate a sample is not by its results (the characteristics of the sample) but by examining the process by which it was selected. The first step in evaluating the quality of a sample is to define the sample frame. Fowler[18] states that there are three characteristics of the sample frame: comprehensiveness, probability of selection, and efficiency. One problem found in patient satisfaction surveys carried out by routine procedure is uncontrolled sampling. There are numerous questions about whether the use of sampling theory is appropriate in this context; many statisticians would even question whether a consecutive series of patients can be regarded as a random sample.

Dealing with collecting large samples
Difficulties in collecting sufficiently large samples for auditing the care of specific patient groups have been found even with a common surgical procedure such as appendicectomy[19].

Dealing with the choice of a target population
The choice of a target population for sampling depends on the survey questions being asked. In most situations the target population will be all enrollees, and a random sample of them may be studied (a brief sketch of this step appears at the end of this section). Other analyses will limit the target population to a subgroup with a particular medical condition (such as diabetes) or characteristic (such as children under age three). Identifying these populations or subpopulations may present a moving-target problem owing to ongoing plan enrolment and disenrolment. Moreover, sampling techniques must be sensitive to statistical considerations.

Dealing with differences in target population
If one is interested in comparing patient satisfaction for a given medical condition, problems can arise if surveys are used without reference to a target population. For example, patient satisfaction surveys are often triggered by use of hospital care only. This may be appropriate when comparing plans in which similar patients are equally likely to seek care for a given condition. However, policy options, such as the benefit package and the degree of copayment, can influence the probability of receiving care for different conditions[20].
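As a rough illustration of the target-population step above (not taken from the article itself), the following Python sketch draws an overall random sample of enrollees and a condition-specific subsample; the table and column names (enrollees, condition) are hypothetical.

import pandas as pd

def draw_samples(enrollees: pd.DataFrame, n_overall: int, n_subgroup: int, seed: int = 1):
    # Fix the frame as of a single enrolment date beforehand to limit the
    # moving-target problem caused by ongoing enrolment and disenrolment.
    overall = enrollees.sample(n=n_overall, random_state=seed)
    diabetics = enrollees[enrollees["condition"] == "diabetes"]
    subgroup = diabetics.sample(n=min(n_subgroup, len(diabetics)), random_state=seed)
    return overall, subgroup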

Improving quality of survey data and instrument
Patient satisfaction surveys are used to assess quality of care. The quality of questionnaire design is generally recognized as an important factor for self-administered instruments[21]. It is becoming clear that greater attention to the quality of patient satisfaction survey data is essential if the field is to evolve into a discipline.

The quality of a patient satisfaction survey based on coded data depends on the quality of the data itself. Redman[22] suggests four basic dimensions of data quality: timeliness, completeness, usability and accuracy. Timeliness seems obvious, but different users may have very different needs. Completeness involves several concepts: strata, sample sizes, coverage and thoroughness of survey design. Usability includes the format of the survey, survey instructions and understandability issues. It is, of course, crucial that the data be accurate and give a true picture of the information that is encoded. It is also important that the data be consistent: the same attributes of the information should invariably be considered in the encoding process.

In the effort to measure any particular attribute, numerous sources of error may be introduced by subtle factors which influence the behaviour of patients, leading to variation in the measurements made. Two types of measurement error can be identified in patient satisfaction surveys: response error and procedure error. Response errors are indicated when responses are recorded in both the interview and the reinterview but differ. Such response errors have been common in patient satisfaction surveys owing to the inability or inconvenience of patients. Errors can also be introduced into survey measures because patients vary from time to time in their willingness to respond to specific questions, and because skip sequences are not always followed properly. It is critical for studies using the survey method to follow guidelines on questionnaire construction and administration[23,24] so that the data collected are relevant and appropriate. Discussions of survey design issues such as question formatting options, graphic layout, integration of interviewer recording tasks, and optimal routeing strategies are usually absent from published works on patient satisfaction surveys.

In general, two approaches are taken to improve the quality of patient satisfaction survey instruments and, as a side benefit, to reduce their length. The first approach is to eliminate permanently those scales which show undesirable psychometric qualities; however, the results can be spurious. The second approach is to eliminate items within scales, reducing the time needed to complete the instrument without losing any of its positive psychometric qualities. If time is a major consideration, a short form of the instrument, which provides only an overall measure of the construct, may be desirable.
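To make the second approach concrete, the following sketch (illustrative only, not the authors' procedure) computes Cronbach's alpha and corrected item-total correlations from a hypothetical respondents-by-items score matrix; items with weak item-total correlations, or whose removal leaves alpha essentially unchanged, are natural candidates for deletion.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # items: respondents x items matrix of scale scores
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items: np.ndarray) -> np.ndarray:
    # correlation of each item with the total of the remaining items
    k = items.shape[1]
    return np.array([
        np.corrcoef(items[:, j], np.delete(items, j, axis=1).sum(axis=1))[0, 1]
        for j in range(k)
    ])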

Problems of non-response
Non-response is a feature of virtually all surveys, and it damages the inferential value of sample survey methods. Two strands of literature dealing with the problem of non-response can be identified.


First, there is a large body of research on methods to increase response rates in surveys, involving advance letters, payments to respondents, persuasive interviewer scripts, and strategies for timing calls to sample members[25]. Second, many researchers focus largely on attempts to reduce error arising from non-response through post-survey adjustments, including weighting cases by estimated probabilities of co-operation and by known population quantities, imputation, and selection bias models[26] (a sketch of such weighting appears at the end of this section). With regard to mail surveys at least, several studies have noted the importance of distinguishing between non-response due to non-compliance and non-response due to inaccessibility[27,28], raising the possibility that different groups of non-respondents to patient satisfaction surveys may have different predispositions towards survey participation. It has been found, for example, that respondents are more likely to be female and married[29]. The difficulty of knowing whether non-response is systematic or not is a common problem. Hospitals frequently administer questionnaires which have not been validated to a non-representative patient sample, and employ data collection methods which produce low response rates[29]. The occurrence of non-response is most likely not a random process but is determined by specific factors varying from study to study[30]. As such, the following provisions should be made to minimize non-response:

• A reply-paid envelope should be enclosed, along with a letter encouraging patients to complete the questionnaire.
• The questionnaires should be administered by the patient's hospital rather than by a distant research team, about three weeks after surgery[31].
• Techniques such as getting a "foot in the door" by involving respondents in some small task, or offering a monetary incentive, can be used[32].

Although these techniques help to boost response rates, they may introduce sample-composition bias. Sample-composition bias occurs when those responding to a survey differ in some important respect from those who do not respond. In other words, the techniques used to increase response rates may appeal to some members of the sample and alienate others, causing the results to be non-representative of the population of interest[33]. Technology-driven advances offer tremendous opportunities for researchers to apply some of these developments in an innovative manner to improve the patient satisfaction survey process. For example, geodemographic systems such as PRIZM and MicroVision can be used for predicting and correcting response-rate problems in surveys, regardless of the mode of survey administration[34].
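As a sketch of the post-survey weighting adjustment mentioned above (illustrative only; the predictor names and the "responded" flag are hypothetical, and a logistic model is just one possible choice), respondents can be weighted by the inverse of an estimated probability of co-operation:

import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_weights(sample: pd.DataFrame, predictors: list) -> pd.Series:
    # sample holds one row per sampled patient, with a 0/1 'responded' column
    model = LogisticRegression(max_iter=1000)
    model.fit(sample[predictors], sample["responded"])
    p_respond = model.predict_proba(sample[predictors])[:, 1]
    weights = pd.Series(1.0 / p_respond, index=sample.index)
    return weights[sample["responded"] == 1]   # weights for respondents only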

Problems of reporting and interpretation


Several difficulties of reporting and interpretation are related to the nature of patient satisfaction surveys. First, traditional survey methodologies frequently relate to a specific transaction[35]. As Iguanzo[17] notes, this approach focuses almost exclusively on patient satisfaction with the service received in the health care institution. While satisfaction with delivered services is important, focusing on it alone fails to address customer needs. Survey instruments must provide specific information on what customers need in order to chart process improvement plans that meet the goals of TQM and CQI[36]. David Gustafson, professor at the University of Wisconsin Hospitals, Madison, is a strong proponent of surveying needs, not satisfaction. Gustafson believes that instead of asking "How do you like our doctors?", the survey should ask "How well did we help you understand what it would be like when you woke up after surgery?"[36]. Second, patient satisfaction involves many determinants. The humaneness of the service and doctor-patient communication, for example, are important determinants of satisfaction in many studies[37,38].

Dealing with complex dimensions
Differences in patient satisfaction are the result of complex dimensions. These factors include the patient's social condition, the patient's physiological reserve, the immediate problem or illness, and provider performance[39]. Ware et al.[40] found patient satisfaction dimensions to consist of access/convenience of care, availability of resources, humaneness, finance and quality of care. Rubin et al.[41] confirmed that patients used nine dimensions in evaluating their health care: admissions, nursing, doctors' care, daily care, ancillary staff, discharge, billing and overall quality. Dolinsky and Caputo[42] found that the satisfaction dimensions of quality, access and cost varied significantly for an HMO and a non-HMO sample, based on the demographic characteristics of age, marital status and race. Patients are not homogeneous, even though many studies report survey results as if they were. Distinctions between males and females, blacks and whites, and young and old need to be measured and reported clearly. To measure the effect of hospital performance on outcome with accuracy, one would have to control for all the other factors; this is clearly very difficult, given existing surveys and measurement instruments.
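One simple discipline that follows from this point is to tabulate results by subgroup rather than reporting a single pooled figure. The sketch below is illustrative only (it is not the authors' method), and the column names are hypothetical.

import pandas as pd

def satisfaction_by_subgroup(responses: pd.DataFrame) -> pd.DataFrame:
    # responses: one row per patient with 'satisfaction' (1-5), 'sex' and 'age_group'
    return (responses
            .groupby(["sex", "age_group"])["satisfaction"]
            .agg(n="count", median_rating="median")
            .reset_index())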


Dealing with item heterogeneity
Patient satisfaction surveys usually employ multiple items for various reasons. First, if a particular question is subject to measurement error owing to a patient's haste or lack of understanding, repeating the same question in different ways decreases the chance of a random answer. Second, if a particular concept is too broad to be captured in one question, multiple items are used to tap each dimension. Finally, simply increasing the number of items in an instrument increases reliability scores[43]. However, interpretation of results from multiple-item surveys requires caution. The empirical support usually offered for totalling the individual items in this way is a high internal-consistency reliability, as reported by Cronbach's alpha. Yet Cronbach's alpha should be used only for homogeneous tests, since the formula assumes item homogeneity; if the survey measures a variety of traits, Cronbach's alpha is not a suitable measure[43].

Dealing with the survey instrument
In designing such a questionnaire there are two minimal requirements: first, questions about specific aspects of care should be included, as they are less ambiguous and more sensitive than general questions; and second, open-ended questions should be included to aid interpretation of the responses to precoded questions[44,45]. The methodology used in survey studies appears to vary in one major respect: some use a free-response technique to determine important factors or causes associated with patients' satisfaction, while others ask patients to check or rank a set of predetermined factors[46]. Moreover, there is usually a time lag between the time of service and the receipt of patients' responses by mail, which makes interpreting the survey results and taking corrective action difficult. A number of institutions have turned to interactive survey methodologies to counter the time-lag problem and to provide centralized, uniform data collection. Through interactive TV services, patients use a menu to browse through hospital services, watch videos on their particular medical procedures, place orders with the gift shop, fill out satisfaction surveys, and so on. In a hospital setting, for example, satisfaction surveys can be completed on a daily basis: patients can indicate their level of satisfaction with the cleanliness of their room, the promptness of the nursing staff, physician issues, and a variety of other areas[47].

The interactive approach to surveying has several advantages. First, the time-lag issue is resolved, since survey information is received literally as the patient responds. Second, based on survey results, the health care provider can respond immediately to the patient's areas of dissatisfaction, rather than becoming aware of problems only after the patient has left; institutions are thus in a position to take a proactive, rather than reactive, stance towards continuous quality improvement. Third, letting patients know that their concerns have been noted and that immediate action is being taken leads to a more satisfied customer. Fourth, data collection is simplified, streamlined, reliable and uniform.

Another variation of the interactive approach is the use of a free-standing, portable computer, such as the RT-2000 Measurement in a Box System, which can literally be wheeled into any area for a patient to respond to a customer satisfaction survey[36].

Dealing with statistical analysis
The final difficulty with the survey instrument is the requirement for non-parametric statistical analysis. According to measurement theory[48], ordinal measures (e.g. the ranking of quality of services or pleasantness of facilities) must not be analysed in the same fashion as interval measures such as body temperature or waiting time. Instead of means, standard deviations or product-moment correlations, the only permissible statistics for ordinal measures are non-parametric ones (medians, percentiles and order correlations)[49]. Many measurements in patient satisfaction surveys are ordinal, yet some researchers use parametric tests and do not acknowledge the potential misfit.
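A minimal sketch of this distinction, using hypothetical data: ordinal ratings are summarized with a median and a Spearman (rank-order) correlation rather than a mean and a Pearson product-moment correlation.

import numpy as np
from scipy.stats import spearmanr

ratings = np.array([5, 4, 4, 3, 5, 2, 4])    # ordinal 1-5 satisfaction ratings
wait_rank = np.array([1, 2, 3, 5, 1, 6, 4])  # ranked waiting-time categories

print("median rating:", np.median(ratings))  # permissible summary for ordinal data
rho, p_value = spearmanr(ratings, wait_rank) # order (rank) correlation
print("Spearman rho = %.2f, p = %.3f" % (rho, p_value))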

Conclusion
This article has several implications for researchers. First, there is still a need for more reliable patient satisfaction survey measures. As noted earlier, while measuring satisfaction is important, measuring customer needs is equally important and is an area often overlooked by researchers.

Second, the real value of establishing a generalized instrument for patient satisfaction surveys cannot be realized unless a mechanism is established to provide a centralized data bank of results. Such a data bank would permit comparison of results across hospitals, regions or states, and across other variables of interest. Given larger samples, further analysis could be performed, for example by functional area or by departmental level. We call on fellow researchers in health care systems to encourage the development of such a data bank. In doing so, however, we caution that grouping patients for comparison purposes is a daunting task: even patients with the same disease can differ in substantial ways, yet for research purposes patient groups need to be relatively cohesive. Making this determination can be time-consuming and difficult[50].

Third, researchers must develop methods to reduce the data collection demands imposed on patients without sacrificing the information needed to assess health care quality. While the development of shorter patient satisfaction surveys would help, it will have to be complemented by smarter procedures that can fill the gaps left by the limited amount of information obtained from respondents[51].


Fourth, if continuous quality improvement is truly a goal for an institution, patients should be involved in the construction of the survey instruments. It is not uncommon for health care providers to develop a list of ideas about patient expectations without the benefit of input from patients[52]. Involving patients in the design process is essential to focusing accurately on areas of patient concern[53].

Patient satisfaction is an important indicator of the quality of medical care and a major determinant in the future choice of a care provider[54]. Accurate and reliable survey information provides the data basis for continuous quality improvement in the delivery of services. By meeting the needs of the patient, the institution will in turn ultimately ensure its competitive position[55].
References

1. Crosby, P.B., Quality Is Free, The New American Library, New York, NY, 1980.
2. Juran, J.M., The quality trilogy, Quality Progress, Vol. 19, August 1986, pp. 19-24.
3. Lin, B. and Schneider, H., A framework for measuring quality in health care, International Journal of Health Care Quality Assurance, Vol. 5 No. 6, 1992, pp. 25-31.
4. Elbeck, M., Patient contribution to the design and meaning of patient satisfaction for quality assurance purposes: the psychiatric case, Health Care Management Review, Vol. 17 No. 1, 1992, pp. 91-5.
5. Gonnella, J.S., Evaluation of competence, performance, and health care, Journal of Medical Education, Vol. 54, October 1979, p. 825.
6. Fottler, M.D., Health care organizational performance: present and future research, Journal of Management, Vol. 13 No. 2, 1987, pp. 367-91.
7. Cleary, P.D. and McNeil, B.J., Patient satisfaction as an indicator of quality care, Inquiry, Vol. 15, 1988, pp. 25-36.
8. Fitzpatrick, R., Surveys of patient satisfaction: important considerations, British Medical Journal, Vol. 302, 1991, p. 887.
9. Donabedian, A., The quality of care: how can it be assessed?, Journal of the American Medical Association, Vol. 260, 1988, pp. 1743-8.
10. Souelem, O., Mental patients' attitudes toward mental hospitals, Journal of Clinical Psychology, Vol. 11 No. 2, 1955, pp. 181-5.
11. Klopfer, W.G., Hillson, J.S. and Wylie, A.A., Attitudes toward mental hospitals, Journal of Clinical Psychology, Vol. 12 No. 4, 1956, pp. 361-5.
12. Woodside, A.G., Frey, L.L. and Daly, R.T., Linking service quality, customer satisfaction and behavioral intention, Journal of Health Care Marketing, Vol. 9, 1989, pp. 5-17.
13. Vuori, H., Patient satisfaction - does it matter?, Quality Assurance in Health Care, Vol. 3, 1991, p. 183.
14. Abramowitz, S., Analyzing patient satisfaction: a multianalytic approach, Quality Review Bulletin, Vol. 13, 1987, pp. 122-30.

15. Meterko, M. and Rubin, H.R., Patient judgement of hospital quality: report of a pilot study, Medical Care, S10, 1990.
16. Findlay, D., The happier the patients, the fewer the empty beds, US News & World Report, Vol. 112, 15 June 1992, p. 63.
17. Iguanzo, J.M., Taking a serious look at patient expectations, Hospitals, Vol. 66, 5 September 1992, p. 68.
18. Fowler, F.J., Survey Research Methods, Sage, Newbury Park, CA, 1993, pp. 11-15.
19. Black, N. and Moore, L., Comparative audit between hospitals: the example of appendicectomy, International Journal of Quality Assurance in Health Care, 1993.
20. Lohr, K.N., Use of medical care in the RAND health insurance experiment: diagnosis- and service-specific analyses in a randomized controlled trial, Medical Care, Vol. 24, 1986, pp. S1-S87.
21. Dillman, D.A., Mail and other self-administered questionnaires, in Rossi, P.H., Wright, J.D. and Anderson, A.B. (Eds), Handbook of Survey Research, Academic Press, New York, NY, 1983, pp. 359-77.
22. Redman, T.C., Data Quality: Management and Technology, Bantam Books, New York, NY, 1992.
23. Cronbach, L.J. and Meehl, P.E., Construct validity in psychological tests, Psychological Bulletin, Vol. 52, 1955, pp. 282-302.
24. Sudman, S. and Bradburn, N.M., Asking Questions: A Practical Guide to Questionnaire Design, Jossey-Bass, San Francisco, CA, 1982.
25. Groves, R.M., Survey Errors and Survey Costs, John Wiley & Sons, New York, NY, 1989.
26. Little, R.J.A. and Rubin, D.B., Statistical Analysis with Missing Data, John Wiley & Sons, New York, NY, 1987.
27. Mayer, C.S. and Pratt, R.W., A note on non-response in a mail survey, Public Opinion Quarterly, Vol. 30, 1966, pp. 639-46.
28. Stinchcombe, A., Jones, C. and Sheatsley, P., Nonresponse bias for attitude questions, Public Opinion Quarterly, Vol. 45, 1981, pp. 359-75.
29. Nelson, E.C., Hays, R.D., Larson, C. and Batalden, P.B., The patient judgment system: reliability and validity, Quality Review Bulletin, Vol. 15 No. 6, 1989, pp. 185-91.
30. Thorslund, M. and Warneryd, B., Surveying the elderly about health, medical care and living conditions: some issues of response inconsistency, Archives of Gerontology and Geriatrics, Vol. 11, 1990, p. 161.
31. Black, N. and Sanderson, C., Day surgery: development of a questionnaire for eliciting patients' experiences, Quality in Health Care, Vol. 2 No. 3, 1993, pp. 157-61.
32. Yammarino, F., Skinner, S.J. and Childers, T., Understanding mail survey response behavior: a meta-analysis, Public Opinion Quarterly, Winter 1991.
33. Parker, C. and McCrohan, K.F., Increasing mail survey response rates: a discussion of methods and induced bias, in Summey, J., Viswanathan, R., Taylor, R. and Glynn, K. (Eds), Marketing: Theories and Concepts for an Era of Change, Southern Marketing Association, 1983, pp. 254-6.


34. Appel, V. and Baim, J., Predicting and correcting response rate problems using geodemography, Marketing Research, Vol. 4, March 1992, pp. 22-8.
35. Oliver, R.L., Measurement and evaluation of satisfaction processes in retail settings, Journal of Retailing, Vol. 57, 1981, pp. 25-48.
36. Koska, M.T., Surveying customer needs, not satisfaction, is crucial to CQI, Hospitals, Vol. 66 No. 5, 1992, pp. 50-3.
37. Greenley, J.R. and Schoenherr, R.A., Organization effects on client satisfaction with humaneness of service, Journal of Health and Social Behavior, Vol. 22, 1981, pp. 2-10.
38. Buller, M.K. and Buller, D.B., Physicians' communication style and patient satisfaction, Journal of Health and Social Behavior, Vol. 28, 1987, pp. 375-80.
39. DesHarnais, S., Chesney, J.D., Wroblewski, R., Fleming, S.T. and McMahon, L., The risk-adjusted mortality index: a new measure of hospital performance, Medical Care, Vol. 26 No. 12, 1988, pp. 23-35.
40. Ware, J.E., Snyder, M.K. and Wright, W.R., Development and Validation of Scales to Measure Patient Satisfaction with Health Care Services, Vol. 1, Final Report: Review of the Literature, Overview of Methods, and Results of Construction of Scales, Southern Illinois University School of Medicine, Carbondale, IL, 1976.
41. Rubin, H.R., Ware, J.E. and Hays, R.D., Exploratory factor analysis and empirical scale construction, Medical Care, Vol. 28 No. 9, 1990, pp. S22-9.
42. Dolinsky, A.L. and Caputo, R.K., The role of health care attributes and demographic characteristics in the determination of health care satisfaction, Journal of Health Care Marketing, Vol. 10 No. 4, 1990, pp. 31-9.
43. Allen, M.J. and Yen, W.M., Introduction to Measurement Theory, Brooks/Cole, Monterey, CA, 1979.
44. Hall, J.A. and Dornan, M.C., What patients like about their medical care and how often they are asked: a meta-analysis of the satisfaction literature, Social Science and Medicine, Vol. 27, 1988, pp. 935-9.
45. Carr-Hill, R.A., The measurement of patient satisfaction, Journal of Public Health Medicine, Vol. 14, 1992, pp. 236-49.
46. Hill, C.J. and Garner, S.J., Factors influencing physician choice, Hospital & Health Services Administration, Vol. 36 No. 4, 1991, pp. 491-503.
47. Lumsdon, K., TV becoming a window on patient satisfaction, Hospitals, Vol. 66, 5 October 1992, p. 70.
48. Stevens, S.S., Mathematics, measurement and psychophysics, in Stevens, S.S. (Ed.), Handbook of Experimental Psychology, John Wiley & Sons, New York, NY, 1966.
49. Borgatta, E.F. and Bohrnstedt, G.W., Level of measurement: once over again, Sociological Methods & Research, Vol. 9 No. 2, 1980, pp. 147-60.
50. Alter, J. and Holzman, D., Many research issues need to be resolved, Business and Health, Vol. 10, special report, September 1992, pp. 21-3.
51. Malhotra, N., Analyzing marketing research data with incomplete information on the dependent variable, Journal of Marketing Research, Vol. 24, February 1987, pp. 74-84.
52. Waggoner, D.M., Application of continuous quality improvement techniques to the treatment of patients with hypertension, Health Care Management Review, Vol. 17 No. 3, 1992, pp. 33-42.
53. Alter, J. and Holzman, D., Interest in outcomes research is growing rapidly, Business and Health, Vol. 10, special report, September 1992, pp. 8-13.
54. Hansagi, H., Carlsson, B. and Brismar, B., The urgency of care need and patient satisfaction at a hospital emergency department, Health Care Management Review, Vol. 17 No. 2, 1992, pp. 71-5.
55. Podolsky, D., America's best hospitals, US News & World Report, Vol. 112, 15 June 1992, pp. 60-3.

Binshan Lin is an Associate Professor of Management and Marketing, College of Business Administration, Louisiana State University, Shreveport, Louisiana, USA. Eileen Kelly is Associate Professor and Chair of the Department of Management, School of Business, Ithaca College, Ithaca, New York, USA.
