
UNIVERSIDAD PONTIFICIA COMILLAS

ESCUELA TÉCNICA SUPERIOR DE INGENIERÍA (ICAI)


INDUSTRIAL ENGINEERING

FINAL DEGREE PROJECT

OPTICAL ABERRATIONS IN
HEAD-UP DISPLAYS

AUTHOR: LUIS SAMPEDRO DÍAZ

MADRID, SEPTEMBER 2005


Technische Universität München

Institut für Produktionstechnik

Lehrstuhl für Ergonomie

Optical Aberrations in Head-up Displays

Luis Sampedro Díaz

In cooperation with:

AUDI AG Ingolstadt

March 2005 - September 2005



Table of contents

TABLE OF CONTENTS ......................................................................................................3

LIST OF FIGURES.............................................................................................................6

LIST OF TABLES ..............................................................................................................9

0 ABSTRACT ............................................................................................................10

1 INTRODUCTION ......................................................................................................11

2 OBJECTIVE OF THE THESIS .....................................................................................12

3 PRELIMINARY CONCEPTS........................................................................................13

3.1 Geometric optics..........................................................................................13

3.1.1 Basic principles.....................................................................................13

3.1.2 Optical aberrations ...............................................................................17

3.1.2.1 Chromatic Aberrations.........................................................................18

3.1.2.1.1 Longitudinal chromatic aberration ................................................18

3.1.2.1.2 Transverse chromatic aberration ..................................................19

3.1.2.2 Monochromatic Aberrations ................................................................19

3.1.2.2.1 Spherical aberration......................................................................20

3.1.2.2.2 Coma ............................................................................................20

3.1.2.2.3 Astigmatism ..................................................................................21

3.1.2.2.4 Field curvature ..............................................................................21

3.1.2.2.5 Distortion.......................................................................................22

3.1.3 Representation of optical imaging performance ...................................23

3.2 Physiology of the human eye.......................................................................25

3.2.1 Introduction...........................................................................................25

3.2.2 Visual Acuity .........................................................................................28

3.2.3 Accommodation and Adaptation ..........................................................29


4 STATE OF THE ART.................................................................................................31

4.1 Information during the driving task...............................................................31

4.2 Displays in automobiles ...............................................................................32

4.3 Head-up Display ..........................................................................................34

4.3.1 Introduction...........................................................................................34

4.3.2 Technical description............................................................................35

4.3.3 Inherent problems in Head-up Displays ...............................................38

5 OPTICAL ABERRATIONS IN HEAD-UP DISPLAYS .........................................................41

5.1 Introduction ..................................................................................................41

5.2 Double image effect.....................................................................................42

5.3 Astigmatism .................................................................................................44

5.4 Distortion......................................................................................................48

5.5 Accommodation of the eye ..........................................................................50

5.6 Stereoscopic discussion ..............................................................................52

5.7 Night/day driving conditions.........................................................................54

5.8 Conclusion: Theoretical definition of a “good-quality” virtual image ............54

6 TEST STATION DESCRIPTION ...................................................................................56

6.1 Introduction ..................................................................................................56

6.2 General description......................................................................................56

6.3 Components ................................................................................................57

6.3.1 Digital camera.......................................................................................57

6.3.2 Head-up Display ...................................................................................65

6.3.3 Windscreen...........................................................................................67

6.3.4 Seat ......................................................................................................68

6.4 Procedure to generate and measure optical aberrations ............................69

6.4.1 General discussion about the generation of test images .....................69


6.4.2 Double image effect..............................................................................71

6.4.3 Astigmatism ..........................................................................................71

6.4.4 Distortion ..............................................................................................75

6.4.5 Accommodation....................................................................................76

6.4.6 Stereoscopic discussion .......................................................................77

6.4.7 Night/day driving conditions..................................................................77

7 EXPERIMENT METHOD ............................................................................................78

7.1 Introduction ..................................................................................................78

7.2 Objective test ...............................................................................................78

7.2.1 Images for objective analysis ...............................................................80

7.2.2 Software ...............................................................................................81

7.3 Subjective test .............................................................................................86

7.4 Evaluation of objective and subjective results .............................................89

8 CONCLUSION ........................................................................................................93

9 OUTLOOK .............................................................................................................94

BIBLIOGRAPHY .............................................................................................................97


List of figures

Figure 1: Reflection law [26].......................................................................................14

Figure 2: Refraction law [26] ......................................................................................14

Figure 3: Positive (converging) lens [27]....................................................................15

Figure 4: Negative (diverging) lens [27] .....................................................................16

Figure 5: Imaging properties of a spherical lens [27] .................................................16

Figure 6: Longitudinal chromatic aberration [28]........................................................19

Figure 7: Transverse chromatic aberration [28] .........................................................19

Figure 8: Spherical aberration [13, 29].......................................................................20

Figure 9: Coma [13, 29] .............................................................................................20

Figure 10: Astigmatism [30] .......................................................................................21

Figure 11: Field curvature [20] ...................................................................................22

Figure 12: Distortion [30]............................................................................................23

Figure 13: Transverse ray plot [8] ..............................................................................24

Figure 14: Spherical aberration for three colors [8]....................................................24

Figure 15: Field plots [8].............................................................................................25

Figure 16: Spectrum of visible light [10] .....................................................................26

Figure 17: The human eye [24] ..................................................................................27

Figure 18: Distant Vision [25] .....................................................................................30

Figure 19: Close Vision [25] .......................................................................................30

Figure 20: Displays in a car [Source: AUDI AG] ........................................................32

Figure 21: Display in the middle console [31] ............................................................33

Figure 22: Dashboard of the AUDI B6 [AUDI AG] .....................................................33

Figure 23: Head-up Display [31] ................................................................................34

Figure 24: Head-up Display system [Siemens VDO] .................................................35

Figure 25: TFT-LCD Display module in a BMW HUD [Siemens VDO] ......................36


Figure 26: BMW Head-up Display [31].......................................................................37

Figure 27: Simulation of a correct-location HUD system [AUDI AG] .........................38

Figure 28: Potential problem sources in a HUD.........................................................39

Figure 29: Double image effect [32] ...........................................................................43

Figure 30: Double image corrected [32].....................................................................44

Figure 31: Spherical lens focus [33]...........................................................................45

Figure 32: Astigmatism example [33].........................................................................46

Figure 33: Astigmatism simulation [27] ......................................................................47

Figure 34: Simulation of distortion impact in the perception of information ...............48

Figure 35: TV-Distortion [34] ......................................................................................49

Figure 36: Accommodation due to movement of the head within the eye-box ..........51

Figure 37: Simulation of the independent perception of both eyes............................53

Figure 38: Simulation of independent perception after accommodation....................53

Figure 39: Test station ...............................................................................................57

Figure 40: Transformation of a point into a straight line.............................................58

Figure 41: Transformation of a point into an irregular line .........................................58

Figure 42: CMOS camera chip [37]............................................................................60

Figure 43: Imaging representation ignoring Nyquist theorem [36] .............................61

Figure 44: Imaging representation applying Nyquist theorem [36] ............................61

Figure 45: Higher sampling rates examples [36] .......................................................61

Figure 46: Optical system of a camera ......................................................................62

Figure 47: Passive auto focus [38].............................................................................65

Figure 48: Box design concept...................................................................................65

Figure 49: Head-up Display Box Design ....................................................................66

Figure 50: Windscreen diagram .................................................................................68

Figure 51: Linear pattern............................................................................................69


Figure 52: Scaled pattern disregarding proportions ...................................................69

Figure 53: Rotation of the windscreen to generate astigmatism................................72

Figure 54: Distance-to-object scale............................................................................73

Figure 55: Measurement of the eye-box ....................................................................74

Figure 56: Position of the eyes of the test person......................................................75

Figure 57: Distortion generated with Adobe Photoshop ............................................75

Figure 58: TV-Distortion measurement method .........................................................76

Figure 59: Accommodation of the eye .......................................................................76

Figure 60: Vertical and horizontal lines for testing astigmatism.................................80

Figure 61: Grid image for testing distortion ................................................................80

Figure 62: Main window of FrameWork 2.7 ...............................................................82

Figure 63: Optimum Net.............................................................................................83

Figure 64: Aberrated Net............................................................................................83

Figure 65: Optimum net results..................................................................................83

Figure 66: Aberrated net results.................................................................................84

Figure 67: Detection of pattern with FrameWork 2.7 .................................................85

Figure 68: Representation of a game-type distraction ...............................................87

Figure 69: Accommodation values represented in a 3D graphic ...............................91

Figure 70: Accommodation values within the eye-box...............................................91

Figure 71: Distribution of astigmatism at y = -5mm....................................................94

Figure 72: Distribution of astigmatism at y = 0mm.....................................................95

Figure 73: Distribution of astigmatism at y = 5mm.....................................................95


List of tables

Table 1: Refractive indices.........................................................................................15

Table 2: Accommodation time [41].............................................................................29

Table 3: Specifications of the HUD in a BMW 5 and 6 Series ...................................37

Table 4: Simulation of astigmatic image in different planes [33]................................46

Table 5: Theoretical values for a “good-quality” virtual image ...................................55

Table 6: Values of focal length...................................................................................63

Table 7: Values of focal length...................................................................................64

Table 8: Box design versus direct installation of a HUD ............................................67

Table 9: Registration of objective test results ............................................................79

Table 10: Iterative method for a subjective test .........................................................88

Table 11: Subjective test by means of positive increments of astigmatism ...............88

Table 12: Subjective test by means of negative increments of astigmatism..............89

Table 13: Values of distance to virtual image ............................................................90

Table 14: Values in matrix form .................................................................................90


0 Abstract

The present Degree Thesis has been carried out at the Technical Development Department I/EE-71 at AUDI AG in Ingolstadt. Its goal is to design a method to reproduce and evaluate optical aberrations in Head-up Displays in automobiles. To achieve this, a test station is to be designed with the capability to produce both objective and subjective results, in order to identify the tolerable limits of such aberrations in typical driving situations. The subjective data will be gathered from a group of test subjects, while a digital camera will provide the objective data.


1 Introduction

The arrival of new technologies has had an extraordinary impact on the development of automobiles. While the first cars were conceived as machines that transported passengers to a desired destination, today a car is marketed as a symbol of progress, high technology, design and comfort.

However, although modern vehicles are equipped with more and more high-technology driving-assistance systems, safety still constitutes a field in need of improvement: today, more than 450 million passenger cars travel the streets and roads of the world [5], and more than 1 million people die in car accidents every year [World Health Organization].

A definitive step towards increasing the safety of the car would be to provide the driver with personalized and useful assistance information, at a high ergonomic level, within the driving field of view, so that the driver maintains eye contact with the road at all times. In order to achieve this, Head-up Displays need to be introduced in the automobile industry. In theory, these optical devices have a tremendous potential to improve the ergonomics and safety of automobiles, although they comprise a complex optical system with corresponding optical errors, which can jeopardize their usefulness. The optical aberrations entailed in these systems are essentially the core of the present thesis.


2 Objective of the thesis

The objective of the present document is to analyze optical aberrations in Head-up Displays conceived for use in automobiles. First, all aberrations and optical effects relevant to Head-up Displays are to be identified and analyzed. Then, a method to accurately and individually reproduce such relevant optical aberrations will be proposed. Finally, a methodology to obtain the driver's tolerance limits for each of these aberrations is to be described. The accomplishment of these three goals will allow a quick and accurate evaluation of the optical quality of any Head-up Display.


3 Preliminary concepts

3.1 Geometric optics

3.1.1 Basic principles

Optics deals with the propagation of light through transparent media and its
interaction with mirrors, lenses, slits, etc. Geometric optics is a simplified and more
approachable theory that can explain a wide range of phenomena, including those
taking place within the optical system of a typical Head-up Display.

In geometric optics, light is treated as a set of rays, emanating from a source, which
propagate through transparent media according to a set of three simple laws:

- Law of rectilinear propagation, which states that light rays in a homogeneous transparent medium travel in straight lines.

- Law of reflection, which governs the interaction of light rays with conducting
surfaces like metallic mirrors.

- Law of refraction, which explains the behavior of light rays as they traverse a
sharp boundary between two different transparent media (air and glass, for
example).

It may be useful for the goal of the present thesis to briefly explain the last two laws, as the first can be easily accepted by the reader. The law of reflection states that the incident ray, the reflected ray and the normal to the surface of the mirror all lie in the same plane (see Figure 1). Furthermore, the angle of reflection r is identical to the angle of incidence i.


Figure 1: Reflection law [26]

Finally, considering a light ray incident on a plane interface between two transparent dielectric media, as shown in Figure 2, the law of refraction, generally known as Snell's law, states that the incident ray, the refracted ray and the normal to the interface all lie in the same plane.

Figure 2: Refraction law [26]

Furthermore, the following equation describes the relationship between θ1 and θ2,
where n1 and n2 are the refractive indices of media 1 and 2, respectively:

\[ n_1 \cdot \sin(\theta_1) = n_2 \cdot \sin(\theta_2) \]


Table 1 shows the refractive indices of common materials for yellow light (λ = 589
nm):

Material    n
Air         1.00029
Water       1.33
Glass       1.58 – 1.89
Diamond     2.42

Table 1: Refractive indices
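As a small illustration of Snell's law using the indices of Table 1, the following Python sketch (added here for clarity; the function name and the 60° example are illustrative choices, not part of the original work) computes the refraction angle of a ray entering glass from air:

```python
import math

def refraction_angle_deg(theta1_deg, n1, n2):
    """Snell's law, n1*sin(theta1) = n2*sin(theta2); returns theta2 in degrees,
    or None when total internal reflection occurs (no refracted ray)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# A ray hitting glass (n = 1.58, lower bound from Table 1) from air at 60 degrees:
print(refraction_angle_deg(60.0, 1.00029, 1.58))  # ~33.3 degrees
```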

These simple laws can be applied to predict the optical behavior of a lens. The most
common type of lens is the spherical lens, which is defined with two radii (R1 and R2)
forming two spherical surfaces (see Figure 3).

Figure 3: Positive (converging) lens [27]

Depending on the values and the signs of R1 and R2, a lens can vary its optical properties. In the case shown in Figure 3, both radii of curvature are positive. This implies that the light rays emitted from the object (on the left-hand side of the lens) will converge at the focal point located on the right of the lens. On the other hand, if both R1 and R2 are negative, the resulting lens will spread the light rays emitted by the object, resulting in a focal point on the same side as the object, as shown in Figure 4:

Figure 4: Negative (diverging) lens [27]

Lenses are typically used to “adapt” the light rays coming from an object, creating an image on a desired plane, such as a film (in cameras, for example) or directly on the retina (telescopes, microscopes, etc.).

Figure 5: Imaging properties of a spherical lens [27]


Figure 5 shows the imaging of one point of an object. S1 is the distance between the object and the lens and S2 represents the distance between the lens and the image. Note that the focal distance f of a lens differs from the lens-to-image distance S2. The focal distance can be calculated in terms of S1 and S2 with the following equality:

\[ \frac{1}{f} = \frac{1}{S_1} + \frac{1}{S_2} \]

Finally, the focal distance also depends on the two curvatures mentioned (R1 and R2) which define the lens, the refractive index of the lens material n, the refractive index of the surrounding medium n' and the distance d between the two surfaces of the lens:

\[ \frac{1}{f} = \left( \frac{n}{n'} - 1 \right) \cdot \left[ \frac{1}{R_1} + \frac{1}{R_2} + \frac{(n-1)\cdot d}{n \cdot R_1 \cdot R_2} \right] \]
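A minimal numerical sketch of the two relations above (the function names and example values are mine, and the sign convention follows the document's, i.e. both radii taken positive for a converging lens):

```python
def focal_length_from_conjugates(s1, s2):
    """Thin-lens relation used above: 1/f = 1/S1 + 1/S2 (distances in metres)."""
    return 1.0 / (1.0 / s1 + 1.0 / s2)

def focal_length_lensmaker(n, n_medium, r1, r2, d):
    """Lensmaker relation quoted above, with both radii taken positive for a
    biconvex lens; n = lens index, n_medium = index of the surrounding medium,
    d = lens thickness."""
    inv_f = (n / n_medium - 1.0) * (1.0 / r1 + 1.0 / r2 + (n - 1.0) * d / (n * r1 * r2))
    return 1.0 / inv_f

# An object 0.5 m in front of the lens imaged 0.4 m behind it:
print(focal_length_from_conjugates(0.5, 0.4))                    # ~0.22 m
# A 5 mm thick biconvex glass lens (n = 1.58) in air with R1 = R2 = 0.20 m:
print(focal_length_lensmaker(1.58, 1.00029, 0.20, 0.20, 0.005))  # ~0.17 m
```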

In the next section it will be shown that, due to the physical characteristics of a lens, the representation of an object with such optical elements is inevitably subject to deviations called optical aberrations.

3.1.2 Optical aberrations

An optical aberration is a distortion in the image formed by an optical system. It occurs when a point in front of the optical system is transformed into a number of points (or into a deformed point) after transmission through the optical system, normally leading to a blurry effect detected by the human eye. Aberrations arise from a number of factors, including imperfections or limitations in optical components such as lenses and mirrors.


A standard classification distinguishes between chromatic and monochromatic aberrations. The former occur when an optical system disperses the various wavelengths of white light; the latter occur even when no such color dispersion takes place. It is important to understand that any real optical system is very likely to contain aberrations. The center of the discussion is then to determine which of these aberrations are tolerable for the user and which must be avoided.

3.1.2.1 Chromatic Aberrations

Since the focal length f of a lens depends on the refractive index n (which in turn depends on the wavelength), an optical element will present chromatic aberration by bringing different wavelengths to a focus at different positions. Chromatic aberrations can be minimized by combining two or more lenses of different chemical composition; the absence of this aberration is called achromatism. The two types of chromatic aberration are described below.

3.1.2.1.1 Longitudinal chromatic aberration

As mentioned before, the refractive index of a lens depends on the wavelength. This
leads to a relation between the focal length and the wavelength of the light that
passes through the lens. The following equations show this fact, in which n is the
refractive index, λ is the wavelength, s is the focal distance and r is the curvature
radius of the optical element:

\[ n = n(\lambda) \]

\[ s = s(\lambda) = \frac{n(\lambda)}{n(\lambda) - 1} \cdot r \]

In other words, the longitudinal aberration is the inability of the lens to focus different
wavelengths of light in the same focal plane (see Figure 6).
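To give a feeling for the magnitude of this effect, the short sketch below evaluates the focal-distance relation above for two wavelengths; the two index values are typical crown-glass figures chosen purely for illustration and are not data from this thesis:

```python
def focal_distance(n_lambda, r):
    """s(lambda) = n(lambda) / (n(lambda) - 1) * r, as in the relation above."""
    return n_lambda / (n_lambda - 1.0) * r

r = 0.10                      # curvature radius in metres (illustrative)
n_blue, n_red = 1.530, 1.515  # rough crown-glass indices near 480 nm and 650 nm

s_blue, s_red = focal_distance(n_blue, r), focal_distance(n_red, r)
print(f"blue focus at {s_blue:.4f} m, red focus at {s_red:.4f} m")
print(f"longitudinal chromatic shift: {(s_red - s_blue) * 1000:.1f} mm")
```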


Figure 6: Longitudinal chromatic aberration [28]

3.1.2.1.2 Transverse chromatic aberration

When the object lies away from the optical axis, that is, when light rays are obliquely incident, transverse chromatic aberration will be present (see Figure 7). In this case, all wavelengths are in focus, but with different magnifications.

Figure 7: Transverse chromatic aberration [28]

3.1.2.2 Monochromatic Aberrations

An ideal optical system focuses a bundle of parallel light rays into a single, well-defined point. In reality, this ideal situation does not occur: flat objects are imaged onto curved focal surfaces, ideally sharp details are rendered as blurred images, and so on. These optical defects are called monochromatic aberrations and include five types: spherical aberration, coma, astigmatism, field curvature and distortion.


3.1.2.2.1 Spherical aberration

This type of aberration can be described as the blurring of an image that occurs
when light from the margin of a lens or mirror with a spherical surface comes to a
shorter focus than light from the central portion (see Figure 8). In imaging systems,
spherical aberration tends to blur the image and reduce the contrast.

Figure 8: Spherical aberration [13, 29]

3.1.2.2.2 Coma

Coma occurs when an object located away from the optical axis of the lens is imaged, so that its rays pass through the lens at an angle θ to the axis (see Figure 9):

Figure 9: Coma [13, 29]


In this case, the lens will present different levels of magnification at different locations
and consequently, an off-axis object point will not produce a sharp image point, but a
characteristic comet-like flare.

3.1.2.2.3 Astigmatism

Astigmatism occurs when the tangential and sagittal foci do not coincide and the system appears to have two focal lengths. Figure 10 illustrates that tangential rays from the object come to a focus closer to the lens than rays in the sagittal plane.

Figure 10: Astigmatism [30]

This aberration, in moderate or severe amounts, is perceived through symptoms such as headache, eyestrain, fatigue and blurred vision.

3.1.2.2.4 Field curvature

All rays traveling through a curved lens come to a focus on a surface called the Petzval surface, which is curved rather than planar (see Figure 11). This aberration varies with the lens curvature and with the square of the image height, which means that by reducing the field angle by one-half, the blur from field curvature is reduced to one quarter.
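The quadratic dependence just quoted can be written compactly; the following is a standard third-order statement added here only for clarity and does not appear in the original text:

\[ \Delta z_{\text{blur}} \;\propto\; h^{2} \qquad\Longrightarrow\qquad h \rightarrow \tfrac{h}{2} \;\;\Rightarrow\;\; \Delta z_{\text{blur}} \rightarrow \tfrac{1}{4}\,\Delta z_{\text{blur}} \]

where h denotes the image height (or, equivalently, the field angle).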


Figure 11: Field curvature [20]

In this figure, the foci of the objects a, b, c and d (which lie on a planar surface) are shown on their corresponding sagittal and tangential (curved) Petzval surfaces.

3.1.2.2.5 Distortion

A pure distortion affects the shape of the image, not its sharpness or its color. As shown in Figure 12, this aberration represents the inability of the lens system to reproduce an image consistent with the original geometry of the subject, producing the well-known pincushion and barrel effects.


Figure 12: Distortion [30]

Distortion happens because the focal length of the lens varies over the Petzval
surface and thus some parts of the image are more magnified than others.

3.1.3 Representation of optical imaging performance

Aberration curves are mainly used by optical designers, although they may be of interest to the reader of the present document; a brief introduction to these plots is therefore given in this section. Aberration curves provide important details about the relative contributions of individual aberrations to lens performance. They can be divided into two types: those expressed in terms of ray errors and those expressed in terms of the optical path difference.

The most common form is the transverse ray aberration curve, which is generated by tracing fans of rays from a specific object point to a linear array of points in the entrance pupil. Figure 13 shows the resulting transverse ray plot for a lens affected by spherical aberration. The abscissa represents the position at which the ray crosses the entrance pupil and the ordinate the resulting transverse ray error in the image plane.


Figure 13: Transverse ray plot [8]

Note that if the evaluation plane coincided with the image plane of a perfect system, there would be no ray error and, thus, the curve in the transverse ray plot would be a straight line coincident with the abscissa. With this type of aberration curve, chromatic aberration is also clearly represented by plotting the different wavelengths independently (see Figure 14). Chromatic aberration is detected if there is a difference in slope between these curves at the origin.
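As an illustration of how such curves are read, the toy sketch below plots a ray fan through a purely third-order model in which the transverse error is a1·p + a3·p³ (a wavelength-dependent defocus term plus a common spherical-aberration term); all coefficients are invented for illustration and are not taken from reference [8]:

```python
import numpy as np
import matplotlib.pyplot as plt

# Normalized entrance-pupil coordinate, from -1 (lower margin) to +1 (upper margin).
p = np.linspace(-1.0, 1.0, 201)

# Toy model: transverse error = a1*p + a3*p**3. The common cubic term a3 represents
# spherical aberration; the wavelength-dependent linear term a1 produces the slope
# difference at the origin that reveals longitudinal chromatic aberration.
curves = {
    "blue (486 nm)": (+0.020, 0.050),
    "green (546 nm)": (0.000, 0.050),
    "red (656 nm)": (-0.015, 0.050),
}

for label, (a1, a3) in curves.items():
    plt.plot(p, a1 * p + a3 * p**3, label=label)

plt.axhline(0.0, linewidth=0.5)   # an aberration-free system would lie on this axis
plt.xlabel("normalized entrance-pupil coordinate")
plt.ylabel("transverse ray error [mm]")
plt.legend()
plt.show()
```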

Figure 14: Spherical aberration for three colors [8]

In field plots, the independent variable is usually the field angle (plotted vertically) and the aberration is plotted horizontally. The three main plots in this case are distortion, field curvature and transverse chromatic aberration. Distortion as a function of field angle is represented in the left diagram of Figure 15; field curvature is displayed in the middle of Figure 15 by the tangential and sagittal foci as a function of object point or field angle; finally, transverse chromatic aberration is plotted in the right diagram of Figure 15 as the difference between the chief ray heights at the red and blue wavelengths as a function of field angle.

Figure 15: Field plots [8]

3.2 Physiology of the human eye

3.2.1 Introduction

The eye is a complex organ composed of many small parts that provides us with vital
information like colors, textures, distance, size, form, movement, etc. The human eye
can only process a small part of the electromagnetic spectrum called visible light,
which includes wavelengths between 380 and 780 nm (see Figure 16).


Figure 16: Spectrum of visible light [10]

Light rays are reflected from all objects and directed into the eye through the cornea,
which is the clear and transparent portion of the coating that surrounds the eyeball.
Then, such light rays pass through an opening in the iris (colored part of the eye),
called the pupil. The iris controls the amount of light entering the eye by dilating or
constricting the pupil (see Figure 17).

Light then reaches the crystalline lens, which focuses light rays onto the retina by
refraction. The lens can change its shape (accommodate) to provide clear vision at
various distances. If an object is close, the ciliary muscles of the eye contract and the
lens becomes rounder. To see a distant object, the same muscles relax and the lens
flattens.


Figure 17: The human eye [24]

Behind the lens and in front of the retina is a chamber called the vitreous body, which
contains a clear and gelatinous fluid called vitreous humor. Light rays pass through
the vitreous humor before reaching the retina, which lines the back two-thirds of the
eye and is responsible for the wide field of vision that most people experience. For
clear vision, light rays must focus directly on the retina. When light focuses in front of
or behind the retina, the result is blurry vision. The retina contains millions of
specialized photoreceptor cells called rods and cones that convert light rays into
electrical signals that are transmitted to the brain through the optic nerve. Rods and
cones provide the ability to see in dim light and to see in color, respectively.

The macula, located in the center of the retina, is where most of the cone cells are
located. The fovea, a small depression in the center of the macula, has the highest
concentration of cone cells. The macula is responsible for central vision, seeing
color, and distinguishing fine detail. The outer portion (peripheral retina) is the
primary location of rod cells and allows for night vision and seeing movement and
objects to the side (i.e., peripheral vision).


The optic nerve, located behind the retina, transmits signals from the photoreceptor
cells to the brain. Each eye transmits signals of a slightly different image, and the
images are inverted. Once they reach the brain, they are corrected and combined
into one image.

Finally, the stabilization of eye movement is accomplished by six extraocular muscles attached to each eyeball, which perform its horizontal and vertical movements and rotation. These muscles are controlled by impulses from the cranial nerves that tell the muscles to contract or relax. When certain muscles contract and others relax, the eye moves. The six muscles and their functions are listed below:

- Lateral rectus: moves the eye outward, away from the nose

- Medial rectus: moves the eye inward, toward the nose

- Superior rectus: moves the eye upward and slightly outward

- Inferior rectus: moves the eye downward and slightly inward

- Superior oblique: moves the eye inward and downward

- Inferior oblique: moves the eye outward and upward

3.2.2 Visual Acuity

Visual acuity is the ability of the eye to see fine detail. It is limited by diffraction, aberrations and the photoreceptor density in the eye, in addition to illumination, contrast, etc. Of all the factors affecting visual acuity, pupil size and illumination are the ones that may play an important role in a typical driving situation.

Large pupils allow more light to enter the eye and stimulate the retina and reduce the effect of diffraction, but the resolution then suffers from the aberrations present in the eye. On the other hand, a small pupil reduces the impact of optical aberrations, but diffraction then limits the resolution.
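This trade-off can be quantified with the Rayleigh criterion for a circular aperture, θ ≈ 1.22·λ/D; the short sketch below, with pupil diameters chosen by me for illustration, is an addition to the original discussion:

```python
import math

def diffraction_limit_arcmin(wavelength_m, pupil_diameter_m):
    """Rayleigh criterion for a circular aperture: theta = 1.22 * lambda / D (radians),
    converted here to minutes of arc."""
    theta_rad = 1.22 * wavelength_m / pupil_diameter_m
    return math.degrees(theta_rad) * 60.0

wavelength = 555e-9           # near the peak of photopic sensitivity
for d_mm in (2.0, 6.0):       # small pupil (bright light) vs. large pupil (dim light)
    limit = diffraction_limit_arcmin(wavelength, d_mm * 1e-3)
    print(f"pupil {d_mm:.0f} mm -> diffraction-limited resolution ~{limit:.2f} arcmin")
```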


For recognition tasks, visual acuity is affected by the level of background luminance. A definitive proof of this statement has not been established, although some theories suggest a possible explanation. Hecht suggests that within the rod population and within the cone population there are different sensitivities, which are distributed randomly [23]. At high luminance, all cells are active, providing a high level of visual acuity. The problem emerges in low-luminance situations, because only the cells that are sensitive to that luminance level are active. Therefore, at low levels of luminance, the number of active cells decreases and, because they are randomly distributed, the achievable level of visual acuity is diminished.

3.2.3 Accommodation and Adaptation

The ability of the eye to adjust its focal length is known as accommodation. The next table shows different accommodation times; they represent the time the eye needs to adjust its optical system to receive a sharp image when refocusing from infinity to the distances listed:

Distance (in cm)    Accommodation time (in ms)
140                 330
50                  450

Table 2: Accommodation time [41]

For distant objects, the ciliary muscles relax, producing a flatter lens shape and thus a maximal focal length. When the ciliary muscles contract and squeeze the lens into a more convex shape, the focal length is reduced, producing a situation better suited to the perception of nearby objects (see Figures 18 and 19).


Light rays from distant objects are nearly parallel and do not require as much refraction to bring them to a focus.

Figure 18: Distant Vision [25]

Light rays from close objects diverge and therefore require more refraction for focusing.

Figure 19: Close Vision [25]

On the other hand, the concept of adaptation comes into play when there is a change in the lighting conditions. Above a certain luminance level (about 0.03 cd/m²), the cone mechanism is involved (photopic vision), whereas the rod mechanism takes over below that level (scotopic or night vision). Therefore, two types of adaptation can be differentiated: dark adaptation and light adaptation.

A combination of the accommodation and adaptation effects takes place in the eyes of the driver when the gaze moves from the horizon to the dashboard. Therefore, it seems attractive to present the information necessary for a safe driving experience farther away from the driver, ideally on the road at the position where the potential danger is located. This is, in general terms, the description of a correct-location Head-up Display (see [19]).


4 State of the art

4.1 Information during the driving task

The main types of information that play a role during driving are visual, acoustic and tactile. Among these three sources of information, visual perception stands out as a decisive support for detecting essential details of the traffic situation. This can be called primary driving data, as it is required to enable a person to drive. Apart from this main source, there is a secondary group of visual information, which includes driving-assistance systems (navigation system, ACC, speed indicator, etc.) and car-status information (fuel level, oil temperature, etc.). It is important to notice that these two channels are spatially mismatched, as the primary data only comes from the road, while the secondary data is produced at several locations inside the car, where different displays have been installed. Thus, in order for the driver to perceive a secondary indication, eye contact with the road must be given up, producing a potentially dangerous situation. Furthermore, the ergonomic quality of the information arrangement plays a fundamental role, because the more complicated the presentation of this secondary data, the more difficult it is to recognize and, therefore, the higher the potential hazard.

Together with the reception of the primary and secondary visual information, the driver simultaneously perceives sounds (music, phone, low-fuel warning, etc.) and tactile feedback from the steering wheel and the pedals. All of these pieces of information must be processed by the driver and therefore affect the driving task in general and visual perception in particular, as a warning sound, for example, instinctively makes the driver look in the direction of the sound source.

It is therefore desirable to provide assistance information with high ergonomic quality in order to reduce look-down times. A suitable step towards this goal would be to combine the driving-assistance information with the outside environment, that is, to generate the assistance data outside of the car, in the field of view of the primary information. This is, in essence, the idea of a Head-up Display (see section 4.3 of this document for a description of this device).


4.2 Displays in automobiles

As mentioned in the last section, automobiles are equipped with more and more
assistance systems, which require an interface to communicate the information they
provide. Displays play a vital role in this exchange of information because they
represent the secondary visual information, that is, the data from the assistance
systems and the car status. There are three main areas where displays are found
(see Figure 20).


Figure 20: Displays in a car [AUDI AG]

Zone I represents the area where secondary information intended to increase driving comfort is displayed. The market tendency is to combine all comfort systems, which are normally installed as separate devices with independent displays and controls, into a central control unit, where such comfort information (air temperature, multimedia, music, etc.) is governed by a relatively large LCD display and a minimum number of keys (see Figure 21). Navigation guidance is also typically shown in this area.


Figure 21: Display in the middle console [31]

Important information on driving and car status is typically displayed in Zone II, which is easier to access than Zone I. This zone is mainly occupied by the dashboard (see Figure 22) and provides information by means of pointers on simple scales, on/off LEDs and a small LCD and/or TFT display, typically presenting basic information from the navigation system as well as car-status details.

Figure 22: Dashboard of the AUDI B6 [AUDI AG]

Finally, Zone III represents the area occupied by the virtual image of a Head-up Display. Note the increase in ergonomic efficiency obtained by displaying useful information in this area, as the need for the driver to lose eye contact with the environment is dramatically reduced (see Figure 23).

Figure 23: Head-up Display [31]

The basic principles of operation of a Head-up Display, together with the main technical problems inherent to such a system, will be described in the next section.

4.3 Head-up Display

4.3.1 Introduction

A Head-up Display (HUD) is an optical system which presents important information in a semi-transparent (see-through) fashion outside of the vehicle. This implies having all important information for the driving task in one place, which decreases the potential risk entailed in losing eye contact with the road.

Head-up Displays have been present in military aviation for over three decades, helping pilots to concentrate on the environment without having to look down to monitor assistance information, and in civil aviation for over one decade, assisting pilots mainly during take-off and landing. Chevrolet was the first car manufacturer to offer this optical system in a car, the Corvette (in 1999). In 2003, BMW became the first European manufacturer to offer its 5 Series customers the option of a HUD system, designed and manufactured in cooperation with Siemens VDO, which generates a virtual semi-transparent image above the engine hood, around 2.2 meters away from the driver.

4.3.2 Technical description

Figure 24 schematically shows the operating principle of a Head-up Display system. An image produced by an image source is reflected by a number of mirrors, then guided through a glass cover and onto the windshield, which reflects the image towards the driver. As a result, the driver perceives a semi-transparent image above the engine hood, at a certain distance. The longer the distance traveled by the light from the image source to the eye, the farther away the virtual image appears.


Figure 24: Head-up Display system [Siemens VDO]


The image source can be a TFT-LCD display with a relatively large backlight module (see Figure 25) or a laser projection display. Due to the amount of energy lost along the optical path, the luminance produced at the LED matrix in the Siemens VDO Head-up Display version for BMW has to be between 500 and 700 kcd/m². A laser projection display presents itself as a more efficient way than an LED matrix to achieve this requirement, although laser displays still have to face problems such as temperature stabilization and construction volume.


Figure 25: TFT-LCD Display module in a BMW HUD [Siemens VDO]

The mirrors included in a HUD are conceived to achieve two main tasks: generating the appropriate distance of observation and cancelling out the optical aberrations introduced by the curvature of the windshield. They are designed and positioned in the optimum arrangement so as to avoid occupying a large volume.

In some designs, a semi-transparent film is applied to the windshield in order to reduce double image effects and to reduce energy loss at the windshield. For some car manufacturers this approach is not acceptable (for aesthetic reasons), and the double image effect is corrected instead by applying an infinitesimal angle to the interlayer of the windshield, as described in section 5.2 of the present document.


In order to better understand the relative complexity of a Head-up Display, the main specifications of the HUD mounted in a BMW 5 Series are shown in the next table. This HUD is also offered in the BMW 6 Series.

Construction volume [liters]      ≈ 4
Display type                      Transmissive color TFT LC-Display
LCD resolution [pixels]           360 x 180
Backlight module                  128 LED matrix (500-700 kcd/m²)
Image luminance [cd/m²]           5000-7000
Colors                            yellow, orange, red and green
Number of mirrors                 4
Projection distance [m]           ≈ 2.20
Size of the eye-box [mm]          180 x 90
Price for customers [€]           1,300

Table 3: Specifications of the HUD in the BMW 5 and 6 Series

Figure 26: BMW Head-up Display [31]


The resulting image of this device is shown in Figure 26. Note that, with such a system, the virtual image is at a constant distance from the driver and still independent of the environment.

Therefore, the integration of the information offered by the Head-up Display into the
real-time incidents taking place around the car is still a step to be achieved. A
simulation of this system is shown in Figure 27. Note that now the information is
represented at the location where the potential threat is situated.

Figure 27: Simulation of a correct-location HUD system [AUDI AG]

4.3.3 Inherent problems in Head-up Displays

After understanding the system, it seems clear that a Head-up Display is a relatively
complex optical system. In this section, an analysis of potential problems inherent to
such systems will be presented.


If we trace rays from the point of light generation to the eye of the driver, we can distinguish four main sources of problems: the display, the optical system contained in the HUD, the windscreen and the driver (see Figure 28).


Figure 28: Potential problem sources in a HUD

If we consider a Head-up Display that uses LCD technology with a backlight to generate the image, then, due to the energy lost in transmission at the windscreen and due to the size of the eye-box of the Head-up Display, less than 10% of the luminance produced at the surface of the backlight module will reach the eyes of the driver. In particular, only about 1% of the light generated by the LED array in a Siemens VDO Head-up Display device (which is the one used in the BMW 5 and 6 Series) is sensed by the driver: between 500 and 700 kcd/m² are produced at the LED array and only between 5000 and 7000 cd/m² make it to the virtual representation. These last values are dictated by the capacity of a person to perceive the virtual image in daylight conditions, so they are specified and fixed. The only value open to change is the amount of luminance needed in the HUD system to generate around 7000 cd/m² at the virtual image. It is important to note that a low efficiency means that a large amount of energy is required at the image generation module, which is directly related to high temperatures at that location, forcing the use of temperature regulation systems and a heat exchanger. In the end, this all leads to a more complex system that grows in construction volume, which is highly undesirable because the HUD will be mounted behind the dashboard, where only a restricted amount of space is available.
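The roughly 1% optical efficiency quoted above follows directly from the luminance figures given in Table 3; the sketch below merely restates that arithmetic (the variable names are mine):

```python
# Luminance at the LED backlight matrix vs. luminance reaching the driver in the
# virtual image (figures from Table 3 and the paragraph above).
backlight_kcd_m2 = (500.0, 700.0)       # produced at the LED array
virtual_image_cd_m2 = (5000.0, 7000.0)  # perceived in the virtual image

for source_kcd, image_cd in zip(backlight_kcd_m2, virtual_image_cd_m2):
    efficiency = image_cd / (source_kcd * 1000.0)
    print(f"{source_kcd:.0f} kcd/m2 -> {image_cd:.0f} cd/m2 (efficiency ~{efficiency:.0%})")
```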

On the other hand, it can be observed in Figure 28 that a small change in the location of any of the optical components contained in the HUD has direct consequences for the final image quality. It is therefore desirable to produce a design that is coherent with production effort (since the higher the production requirements, the higher the final product costs), operational reliability (vibrations present in normal driving conditions should not imply a loss in image quality and legibility) and construction volume (as mentioned, the location in the car where the HUD is installed offers only a limited amount of room).

The main source of potential problems is the windscreen. Due to the gravity-based bending processes used nowadays to manufacture windscreens, the exact curvature of the windscreen remains uncertain. These processes can produce up to ±5 mm of tolerance in the X-direction at the center point of a windscreen. Such results are acceptable by aerodynamic and aesthetic standards, but they lead to aberrations when using a Head-up Display.

Finally, assuming that the three mentioned potential problems are solved, that is,
assuming we have an optimum HUD-windscreen adjustment, we still must take into
account that the driver does not necessarily sit still, but can move the head freely
within the eye-box.


5 Optical aberrations in Head-up Displays

5.1 Introduction

It was noted in the last section that relatively small manufacturing deviations in the production line produce a completely different result in terms of the quality of the resulting HUD image. Even if ideal accuracy is reached, that is, even if the Head-up Display is manufactured with zero errors and positioned in the optimum place with respect to the windscreen, and the windscreen is mounted into the frame exactly as the specifications propose, we must still take into account that the user can freely move the head within the eye-box, drive during the day or at night, etc.

The aim of this section is to provide a description of a theoretical “good-quality” virtual image, by determining which of the optical aberrations play a significant role in a Head-up Display and by analyzing some other optical effects that may compromise the optical reliability of such a system.

It is reasonable to start by mentioning the aberrations that play a less significant role in the quality of the virtual image: the longitudinal and transverse chromatic aberrations, spherical aberration, coma and field curvature. It must be emphasized at this point that these aberrations do play a role in the quality of the image; however, in general, they can and must be solved during the development stage of a Head-up Display, as they are not so directly dependent on the manufacturing tolerances of the windscreen, nor on the position of the head of the driver.

In particular, the chromatic aberrations can be ignored because they are strictly
related to lenses (they are related to refraction in a lens), and the Head-up Display
configurations nowadays do not include any lens within the optical system.

On the other hand, spherical aberration is closely related to the use of spherical lenses or spherical reflective surfaces. These elements are typically used in astronomy and photography because the production of a spherical optical element is dramatically less demanding than that of an aspherical one and, therefore, less expensive. In those fields, spherical aberration can be minimized by using a combination of concave and convex lenses, as well as aspherical lenses. As mentioned before, no lenses take part in the Head-up Displays considered in this thesis, so the only possible cause of this aberration is curved reflective surfaces. Therefore, the design and manufacture of the mirrors contained in the Head-up Display must be tightly controlled. Finally, the results of a simulation carried out with the optical software Zemax and the CAD data of the Siemens VDO HUD mounted in a BMW 5 Series show that a change in the curvature of the windscreen has an insubstantial impact on the spherical aberration coefficient. Furthermore, this simulation showed that neither coma nor field curvature was decisively affected by the tolerances of a windscreen, whereas such tolerances clearly affected the presence of astigmatism in the virtual image.

Therefore, it can be concluded that only astigmatism and distortion are to be analyzed. These are the only two proper “optical aberrations” involved, although not the only aspects affecting the quality of the virtual image. We must add to this list the physical phenomena taking place at the windscreen (reflection and refraction) and the natural physiological limitations of the eyes of a person. Each of these aspects will be analyzed in the following sections of the present document.

5.2 Double image effect

The windscreen plays a fundamental role in the achievement of an error-free image, because it is subject to many manufacturing tolerances and because it naturally produces a double image effect that needs to be corrected in order to offer an acceptable image to the driver. This effect is present due to the two reflecting surfaces that any glass pane has. If we take a normal windscreen that was not designed to help avoid this effect, two versions of the image produced by the HUD will be reflected into the eye of the driver (see Figure 29). One is the image reflected at the inner surface of the windscreen (primary reflection) and the other is produced by another light ray after traveling through the glass and reflecting at the outer surface (secondary reflection). If the positions of these two resulting virtual images do not match, they appear blurry to the observer and produce a very uncomfortable sensation in the driver. The brain will try to make both images match by adapting the eye optics as if the image were out of focus and, consequently, the probability of suffering from headache and eyestrain increases with the time of use.

Figure 29: Double image effect [32]

It is essential to take this effect into consideration when producing the system specifications, although it is of no further interest in the present thesis, because the technology available today is already capable of preventing it with satisfactory performance. By applying an infinitesimal wedge angle, of the order of minutes of arc, to the PVB (polyvinyl butyral) interlayer in the HUD projection zone, a sharper image can be achieved. This interlayer is an integral part of laminated windscreens and its adjustment can lead to a considerable reduction of double image effects (see Figure 30).


Figure 30: Double image corrected [32]

As shown in Figure 30, the PVB interlayer has been modified until the virtual image of the secondary reflection coincides with the virtual image of the primary reflection. Note that the double image effect is still present, that is, there are still two images being reflected to the driver; they now simply coincide, so the driver does not notice. In general, the angle of this interlayer depends on the angle of incidence of the light rays coming from the HUD and on the position of the eyes of the driver (the position of the eye-box). If left uncorrected, this effect will not be accepted by the driver and must therefore be carefully taken into account.

5.3 Astigmatism

Astigmatism is a well-known optical aberration in the human eye that appears when the cornea has a non-spherical shape, that is, when it presents different curvatures in different directions. In general, astigmatism is the aberration that results when an optical system focuses two orthogonal axes of light onto two different planes in space. An ideal spherical lens introduces no astigmatism because the lens is symmetric in all directions and thus its optical power does not vary between light striking the lens in the vertical plane and in the horizontal plane. With regard to focus, if the light emitted from a single point of the object is considered, after being captured by different parts of the lens and redirected by refraction, it will be imaged as a single point (see Figure 31). By focusing on a plane away from the focal plane, a blurred image will be observed.

Figure 31: Spherical lens focus [33]

In order to better understand astigmatism, let us consider a cylindrical lens. Unlike a spherical lens, which is radially symmetric, a cylindrical lens bends light in only one axis, that is, it has optical power along a single axis. Consequently, a cylindrical lens reconverges light from an object to its focus in one direction, while the light in the other direction keeps its divergent path, as if no lens were there. If we consider a grid-shaped object and align one of the axes of the grid with the orientation of the cylindrical lens, only one of the line structures (either the vertical or the horizontal) will be imaged.

By using two cylindrical lenses with different optical power and positioning them orthogonally to each other, we can simulate the real effect of an optical system that suffers from astigmatism. If we take a grid as our object and place the two cylindrical lenses as described, vertical lines and horizontal lines will have different focal planes (see Figure 32).


Figure 32: Astigmatism example [33]

In this figure, the lens closest to the object bends only the horizontal details (the vertical lines of the grid). The remaining vertical details (the horizontal lines of the grid) continue their divergent path until they reach the second cylindrical lens, which bends only those light rays to their focal plane. Since the lenses are placed at a certain distance from each other, the two resulting foci lie in different planes. A simulation of the resulting images is shown in Table 4:

Horizontal Focus plane Middle plane Vertical Focus plane

Table 4: Simulation of astigmatic image in different planes [33]

Notice that in the horizontal focus plane only vertical lines appear sharp, because the horizontal lines have not yet reached their convergence point. The image in the middle plane is blurred in both directions. Finally, at the vertical focus plane, the horizontal lines have reached their focal plane and appear sharp, whereas the vertical lines are already diverging and come out blurry. Note that the image will not be fully sharp in any plane. Figure 33 shows a simulation of the effect of this aberration on the perception of letters:

Figure 33: Astigmatism simulation [27]

An astigmatic image will be received by the brain as a blurry image. Thus, it will
continuously try to adjust the optics of the eyes until a sharper image is formed at the
retina. The final result is a continuous change of the focus plane between the two
limits (vertical and horizontal structures). This effort leads to headache and eyestrain.

In the HUD system, astigmatism will appear due to deviations in the manufacturing process of the curved mirrors and the windscreen, as well as from their installation in the car. These errors lead to an astigmatic HUD eye-box, which means that the driver will not be able to focus vertical and horizontal structures at the same time, producing the uncomfortable sensations mentioned above and therefore posing a threat to driving safety.


The theoretical limit value for astigmatism is 0.25 dpt, calculated from the measured distances to the horizontal (“h”) and vertical (“v”) focus planes as shown in the following equation.

Astigmatism [dpt] = 1/v − 1/h

Note that the dioptre is defined as the reciprocal of a distance in meters (1/m). Therefore, the higher the dioptre value, the stronger the refractive power of the optical system, because the image is formed closer to it. Conversely, a low dioptre value means that the image of the object is formed further away from the optical system, that is, its refractive power is lower.
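As a minimal illustration of this definition (a sketch with hypothetical distances, not measured values), the following Python snippet evaluates the astigmatism value from the two focus-plane distances; the example numbers happen to reproduce the 0.078 dpt case listed later in Table 9:

```python
def astigmatism_dpt(v_m: float, h_m: float) -> float:
    """Astigmatism in dioptres from the distances (in meters) to the
    vertical (v) and horizontal (h) focus planes."""
    return 1.0 / v_m - 1.0 / h_m

# Hypothetical example: vertical focus plane at 1.98 m, horizontal at 2.34 m
print(round(abs(astigmatism_dpt(1.98, 2.34)), 3))  # 0.078 dpt, below the 0.25 dpt limit
```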

5.4 Distortion

Distortion manifests itself as a lack of coherence in the geometrical structure of an image rather than in its sharpness. That is, a purely distorted image is a sharp image that no longer preserves the geometry of the original object. In general, this can affect the perception of the information displayed in the virtual image and, in particular, the recognition of symbols, numbers and letters.

Figure 34: Simulation of distortion impact in the perception of information


In the last figure, all the information is recognizable, although the symbol and the words on the left are clearly identified more easily and quickly than those on the right. This example exaggerates the aberration, but it clearly shows that the more distorted the image is, the more time it takes to extract the information it contains, causing evident risks while driving.

In reality, distortion is most noticeable when the eyes are positioned at the limit of the eye-box of a Head-up Display (because we are then using the edge of the optical components, that is, we are away from the optimum central region of the optical system), although it can also appear to a smaller extent in other areas of the eye-box. A practical and common way to measure distortion is the TV-Distortion, the percentage difference between the values “A” and “B” as shown in the following equations:

TV-Distortion = ((A − B) / B) · 100 [%]

A = (A1 + A2) / 2

The values “A1”, “A2” and “B” needed to calculate the TV-Distortion rate are schematically represented in Figure 35:

Figure 35: TV-Distortion [34]


Using this definition, the theoretical maximum tolerable value for TV-Distortion lies between ±1% and ±1.5%. Note that this value is not a fixed number, as it strongly depends on the nature of the information being perceived. In our case, it depends on the complexity of the symbols, letters and numbers, together with their size in the virtual image.
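As a small worked sketch of this definition (the pixel measurements below are hypothetical, not taken from a real analysis picture), the TV-Distortion can be computed directly from the measured heights A1, A2 and B:

```python
def tv_distortion_percent(a1: float, a2: float, b: float) -> float:
    """TV-Distortion in percent from the corner heights A1, A2 and the
    centre height B (all in the same unit, e.g. pixels)."""
    a = (a1 + a2) / 2.0
    return (a - b) / b * 100.0

# Hypothetical pixel measurements read from an analysis picture
print(round(tv_distortion_percent(182.0, 184.0, 181.0), 2))  # ~1.1 %, close to the 1-1.5 % tolerance limit
```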

5.5 Accommodation of the eye

The concept of accommodation has already been described in section 3.2.3 as the ability of the eye to adjust its focal length. Accommodation takes place every time the driver focuses on the dashboard looking for secondary information such as speed, fuel level, navigation signs, etc. It has been mentioned that the bigger the difference between the two focus distances, the longer it takes for the eye to accommodate, that is, the longer it takes to perceive a sharp image.

On the other hand, a number of studies have made clear that the driver pays attention to the environment in different ways depending on the type of road and, therefore, on the speed of the car. For a driver traveling through the streets of a city at 50 km/h, the word “environment” refers to a domain of 20 to 30 meters at most. The driver, in this case, typically concentrates and focuses on street names, buildings and the cars nearby. In contrast, a car driven on the motorway at more than 100 km/h implies an expansion of the “environment” to a domain of at least 50 to 100 meters.

From the last paragraph it follows that the time it takes to read a piece of information on the dashboard increases with the speed of the car, because driving at high speed means paying attention to elements further away from the car, which increases the difference between the two focus distances.

The last three paragraphs could be used to justify the use of Head-up Displays in the sense that they reduce the distance between the environment and the driving assistance information and, therefore, the accommodation time. However, this is not the only aim of this section. Accommodation not only implies time away from perceiving the environment, it can also cause eyestrain and even headache, as described in the following lines.

Accommodation is inevitably linked to astigmatism, because the continuous attempt of the eyes to find a sharp image between the two limit focus planes, as explained in section 5.3, is nothing but accommodation.

Figure 36: Accommodation due to movement of the head within the eye-box

Accommodation can also be required for construction or optical design reasons. If moving the head within the eye-box in the Y and Z directions entails a movement of the whole virtual image in the X direction (see Figure 36), then the eyes must accommodate to the new distance at which the virtual image lies. This will not be noticed by the driver, although it can lead to eyestrain and headache in the long term. Whether this effect has an impact on the use of Head-up Displays is an open question that needs to be clarified by the experimental procedure suggested later in the present document.


The theoretical limit value for accommodation is commonly placed by the field of eye optics at around ±0.25 dpt, that is, while looking at an image, the maximum absolute change in the power of the eye lens should be 0.25 dpt. This value cannot be automatically accepted in our case and needs to be tested in the particular scenario of a Head-up Display, because the information contained in the virtual image is only consulted at certain moments while driving, and not for sustained periods of time. On the other hand, accommodation tolerances decrease with age, so test individuals of all possible ages must be considered.
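As a minimal sketch of how this ±0.25 dpt budget can be checked (the distances are hypothetical; the conversion simply uses the dioptre definition given in section 5.3), the change in required eye power when the virtual image moves along X can be computed as follows:

```python
def accommodation_change_dpt(x1_m: float, x2_m: float) -> float:
    """Absolute change in required eye power (dioptres) when the virtual
    image moves from distance x1 to distance x2 (both in meters)."""
    return abs(1.0 / x1_m - 1.0 / x2_m)

# Hypothetical example: the virtual image shifts from 2.0 m to 2.4 m
delta = accommodation_change_dpt(2.0, 2.4)
print(round(delta, 3), delta <= 0.25)  # 0.083 dpt -> within the +/-0.25 dpt limit
```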

5.6 Stereoscopic discussion

By accepting that the eye-box does not present a homogeneous light-ray distribution, that is, by admitting that different image qualities will be perceived at different positions within the eye-box, we are forced to analyze the impact on the quality of the image fused by the brain when the two independent images come from two receptors (the eyes) located a certain distance apart (65 mm in humans [35]) within the eye-box.

The size of the eye pupil (around 4 mm) can be on the order of 100 times smaller than the size of the eye-box. This implies that the version of the virtual image collected by one eye can differ considerably from the version collected by the other eye, because each eye samples only a local fraction of the eye-box and because the eyes are considerably far apart.

Let us consider the example shown in Figure 37, which shows the perception of each eye separately at a given position of the head within the eye-box. The virtual image for this simulation is a black point. It can be seen that the left eye perceives a sharp image of the black dot right in the middle of the retina, whereas the right eye perceives an image that suffers from a typical aperture aberration.


Figure 37: Simulation of the independent perception of both eyes

The brain will try to superimpose both images, but a decisive condition for this to succeed is that both images have the same center of gravity, which is not the case in our example. Therefore, the brain will not accept the resulting image as sharp and will make the eyes accommodate until the two perceptions share the same center of gravity, that is, until they can be superimposed. As a result, neither image may end up really sharp, as shown in Figure 38, although the brain takes the resulting combined image as acceptable.


Figure 38: Simulation of independent perception after accommodation

This effect does not have a theoretical limit value applicable to a Head-up Display system; such a value should be obtained through the experiments suggested in the present thesis (section 6.3).


5.7 Night/day driving conditions

The size of the pupil is controlled by the amount of light in the environment and plays a role in visual acuity. It is therefore important to consider the impact of a typical night driving condition on the perception of the virtual image. Less light implies a bigger pupil, making the eye more sensitive to aberrations by amplifying the aberrations actually taking place in the optics of the eye. However, in night conditions the contrast of the virtual image is increased, because the background (the road) is no longer visible or, at least, is less visible.

Thus, it can be stated that in night driving conditions the virtual image will present less noise from the background, although it will be perturbed by aberrations arising in the eyes due to the larger pupil size. As there is no documented statement about the influence of environmental light on the visual acuity for the virtual image of a Head-up Display, a testing procedure is suggested in the present document (section 6.3).

5.8 Conclusion: Theoretical definition of a “good-quality” virtual image

At this point, a theoretical “good-quality” virtual image can be defined as an image that shows no double image effect, presents a maximum astigmatism of 0.25 dpt, is at most 1 to 1.5% distorted according to the TV-Distortion definition, forces the eye to accommodate within a range of less than ±0.25 dpt, and has been checked for the possible consequences of the stereoscopic effect and the night/day driving conditions described in this thesis. These values are summarized in Table 5.


Double image effect Zero

Astigmatism ≤ 0.25 dpt

Distortion ≤ ± 1~1.5 %

Accommodation ≤ ± 0.25 dpt

Stereoscopic aberration Analyzed

Night/day driving conditions Analyzed

Table 5: Theoretical values for a “good-quality” virtual image

It is important to emphasize the need for a system with which these theoretical values can be tested. The limit values above have been taken from disciplines (such as eye optics, astronomy, microscopy, etc.) outside the scope of our particular case. Therefore, a test station capable of reproducing exact aberrations is proposed in the next section of this document. Together with this capability, both objective and subjective measurements will be available, with the clear goal of testing the theoretical limit values in order to produce fair evaluations of any Head-up Display system and, therefore, a reliable comparison between different systems.


6 Test station description

6.1 Introduction

As mentioned in the last section, the theoretical limit values have been taken from disciplines that may lie outside the particular scope of the present document. The use of a Head-up Display presents singular characteristics that those limit values do not take into account, such as the fact that the virtual image is only viewed at certain moments and only for short periods of time. This factor may play an important role and, thus, the theoretical values proposed in the previous sections need to be tested.

In the present section, a test station is described with which the aberrations of different Head-up Displays can be accurately and repeatedly simulated and measured. That is, the test station will be used to simulate the use of Head-up Displays in automobiles and to evaluate optical aberrations through objective and subjective tests, as well as to carry out quality-control studies of devices delivered by manufacturers during the development process of a Head-up Display, and market analyses comparing Head-up Displays from different system suppliers.

6.2 General description

The test station mainly integrates three components: a Head-up Display, a windscreen and an observer (see Figure 39). The main module consists of an aluminum frame that incorporates the windscreen, the Head-up Display and a car seat. A supplementary, attachable module containing a digital camera will also be constructed. The reason for constructing this module separately is to provide the option of measuring the quality of Head-up Displays that are already mounted in a car. This decision anticipates that components of cars designed for foreign markets (or restricted to foreign markets) will not be accessible separately and, therefore, the test must take place in the actual car.



Figure 39: Test station

These components will be installed according to their exact positions within the car, with the capability of adapting their relative positions to different car models. In the following sections, an analysis of the requirements of each component is presented.

6.3 Components

6.3.1 Digital camera

The digital camera simulates one eye of the driver and will provide the objective measurements. Therefore, the choice of device should satisfy a number of requirements regarding resolution, accuracy, flexibility of use, etc.

The first question is whether a video camera is meaningful. Such a choice is clearly needed when a dynamic analysis is to be carried out, that is, when simulating the movement of the driver’s head within the eye-box. However, it may introduce ambiguity between the “real” errors due to optical aberrations and those caused by the mechanical movement of the camera and by the camera itself.

To support this statement, let us consider a point as the test image to be analyzed in dynamic conditions. Because the exposure time of the video camera is greater than zero, in motion conditions (it makes no difference whether the point or the camera moves) the initial point will be recorded as a line whose length is proportional to the exposure time set in the video camera, as shown in the following figure:

Figure 40: Transformation of a point into a straight line

This assumes the movement of the camera to be perfectly linear; if we also consider the vibration due to asymmetries in the motor parts and the mechanical joints, the line registered by the camera will no longer be straight but will present, for example, the shape shown in the following figure:

Figure 41: Transformation of a point into an irregular line

It is therefore necessary to determine the importance of a dynamic analysis, since it involves a great amount of effort for little added value. A reasonable approach, and the one followed in this document, is to treat movement as a set of still pictures, because in the end processing a video file means processing a number of frames separately and then analyzing the relation between them.


Consequently, as a conclusion to this initial discussion, the preference is to use a digital photo camera with the appropriate features. The main parameters that need to be determined are the camera resolution, the optics and the access to the different operation modes (exposure time, focus, zoom, etc.). Note that these attributes must be easy to vary, because our aim is to evaluate different Head-up Displays and each of them has its own specifications (size of the eye-box, size of the virtual image and distance between the eye and the virtual image). However, it is clear that the camera must allow both manual and automatic operation modes; that is, the exposure time must be set by the user and kept constant throughout the test in order to facilitate the evaluation tasks, as must the focus and zoom adjustments (unless the test for astigmatism or accommodation is carried out, which implies varying the focus distance during the test, as described in section 6.3).

Nowadays, camera resolution is simply no longer an issue, as modern digital cameras offer sensors with many megapixels as a standard feature. There are two main sensor types: CCD (Charge-Coupled Device) and CMOS (Complementary Metal Oxide Semiconductor). Both chips have the same sensitivity and convert the light falling on the chip into electrons by means of the same process. The main difference between the two approaches is not so much the chip itself as the connection between the chip and the camera. Whereas a CCD implies that all image processing is done “off-chip”, that is, the data leaving the CCD chip is still in analog form, a CMOS sensor performs both the photon-to-electron and the electron-to-voltage conversions on the chip itself (see Figure 42). The result is that a CMOS chip must devote part of its surface to the conversion circuitry, which also implies the use of micro lenses to redirect the light to the sensitive part of each pixel in order to increase efficiency. Thus, the extra sensing area available on a CCD chip allows it to collect more light, increasing the quality of the image. Consequently, a CCD camera is to be used in our test station.


Figure 42: CMOS camera chip [37]

It has been stated that modern cameras are built with many megapixels. However, we should make sure our sampling is correct. When introducing the concept of sampling, the Nyquist theorem (see [36]) probably comes to the reader’s mind. This theorem was developed in the 1920s and states that, to accurately reproduce an analog signal, the digital sampling rate must be at least twice the frequency of the original (analog) signal. This works well for simple signals like audio (a single amplitude value as a function of time), but cannot be directly applied to more complex signals like images. A pixel is defined by its horizontal and vertical position plus its intensity. Therefore, the digital sampling rate for imaging cannot be just twice the resolution of the original; it must be higher. In Figure 43, a star (a white point which, in this example, is the size of a pixel on the CCD chip) is sampled ignoring the Nyquist theorem. The problem this may cause is clearly visible. If we are lucky, the image of the star will lie exactly on one of the pixels assigned on the CCD chip to sample it. Only then will it be recorded with its true size and intensity (example on the left in Figure 43). If we take the example on the right of this figure, we notice that the image is a faithful representation neither of the size of the star nor of its intensity.


Figure 43: Imaging representation ignoring Nyquist theorem [36]

By applying the Nyquist theorem, that is, by assigning two pixels in each direction (four pixels in total) to sample the star (Figure 44), the size representation becomes more accurate, although the intensity is degraded because only part of each pixel is hit by light. Therefore, as shown in Figure 44, the image of the star seems dimmer than it is in reality.

Figure 44: Imaging representation applying Nyquist theorem [36]

Therefore, the sampling rate should be higher than twice the resolution of the original
image. Examples of applying a higher sampling rate are shown in Figure 45.
Logically, the higher the sampling rate, the more accurate the imaging quality will be.

Figure 45: Higher sampling rates examples [36]

At this point it seems reasonable to analyze the minimum resolution requirements. Say we take a Head-up Display from the BMW 5-Series. The LCD display mounted in the system has a resolution of 360x180 pixels. By applying a sampling factor of three, we need to hit an area of the CCD chip with 1080x540 pixels, far lower than the standard resolution of any camera today. Therefore, the standard high resolution offered by any modern digital camera satisfies the requirements presented in this section.
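As a trivial check of this reasoning (a sketch only; the sampling factor of three is the value used in the text above), the required sensor area can be computed as follows:

```python
def required_sensor_pixels(display_w: int, display_h: int, sampling_factor: int = 3) -> tuple:
    """Minimum sensor pixels needed to sample the HUD display with the
    given per-axis sampling factor (more than 2, per the Nyquist discussion)."""
    return display_w * sampling_factor, display_h * sampling_factor

print(required_sensor_pixels(360, 180))  # (1080, 540) for the 360x180 HUD display
```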

As mentioned before, the camera optics must allow both the focus and the zoom to be changed manually. The following is a calculation of the approximate optical requirements. By reducing the optical system to a single lens, the calculation is simplified as follows (see Figure 46):


Figure 46: Optical system of a camera

In the previous figure, a simple optical system is represented. An object is placed “b” meters away from the lens, which in turn is “a” meters away from the image. The angle formed by the light beam and the optical axis of the lens is “β”. Both the “object” and the “image” lengths in this diagram are their diagonal dimensions. Adapting this diagram to our particular case, the object is the virtual image formed by the Head-up Display, the lens is the camera lens system and the image is formed on the CCD sensor of the digital camera. Of all these quantities, the only unknown is the length “a”, which is approximately the focal length of the objective. It is calculated as follows:


Φ_CCD / a = Φ_VirtualImage / b  ⇔  a = b · Φ_CCD / Φ_VirtualImage

where “Φ” represents the diagonal size. In other words, choosing a camera body determines the CCD sensor size, and choosing a Head-up Display determines both the length “b” and the size of the virtual image. To illustrate this, let us choose the CCD sensor of a standard digital reflex camera (23.7x15.6 mm), a distance “b” between 2 and 3 meters and a virtual image of 180x90 mm. The following table presents various combinations and their corresponding focal lengths. All values in Table 6 are in millimeters:

CCD Diagonal: 28.31

Image Diagonal: 201.25

Distance "b" Focal Length

2000 281.32

2200 309.45

2400 337.58

2600 365.72

2800 393.85

3000 421.98

Table 6: Values of focal length
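As a quick check of these numbers (a sketch using the diagonals quoted in Table 6; small differences of a few hundredths of a millimeter come from rounding), the table can be reproduced as follows:

```python
def focal_length_mm(b_mm: float, ccd_diag_mm: float, image_diag_mm: float) -> float:
    """Approximate focal length a = b * Phi_CCD / Phi_VirtualImage."""
    return b_mm * ccd_diag_mm / image_diag_mm

CCD_DIAG = 28.31      # mm, sensor diagonal used in Table 6
IMAGE_DIAG = 201.25   # mm, diagonal of the 180 x 90 mm virtual image

for b in range(2000, 3001, 200):
    print(b, round(focal_length_mm(b, CCD_DIAG, IMAGE_DIAG), 2))
```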

The focal length obtained is relatively large, and the larger the focal length, the higher the cost of the objective. Note that these results imply that the camera uses the entire CCD sensor to capture 1080x540 pixels, a very low resolution considering the products on the market today. By doubling the resolution of the camera in each direction, the necessary 1080x540 pixels of information require only half of the sensor diagonal, and therefore (see Table 7):


CCD Diagonal: 14.19

Image Diagonal: 201.25

Distance "b" Focal Length

2000 140.99

2200 155.09

2400 169.19

2600 183.29

2800 197.38

3000 211.48

Table 7: Values of focal length

As a result, the right combination of optical zoom and pixel resolution must be found. A good approach is to use a 200 mm zoom and at least 4 megapixels. It might be interesting, for more exhaustive tests in the future, to consider a more powerful zoom in order to analyze a particular region of the virtual image, or even a few pixels within it. This is not a critical decision, because objectives are easily interchangeable and reasonably priced.

Finally, there are two types of autofocus: active and passive. Active autofocus cameras measure the distance to the object by emitting a signal (ultrasound, infrared, etc.) that bounces off the object and is sent back to the camera. Passive systems analyze the actual image electronically, measuring the difference in intensity between adjacent pixels in real time. By changing the focus, the algorithm can detect the focus setting with the maximum intensity difference between adjacent pixels and, therefore, the sharpest image (see Figure 47). Obviously, active autofocus cannot be used in our system because it would always detect the windscreen as the object. Therefore, a camera with passive autofocus is the right choice.
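As a rough illustration of the contrast-detection principle behind passive autofocus (this is only a sketch of the idea, not the camera’s actual algorithm), a focus score can be computed as the summed intensity difference between adjacent pixels; the focus position that maximizes this score corresponds to the sharpest image:

```python
import numpy as np

def contrast_score(gray: np.ndarray) -> float:
    """Sum of absolute intensity differences between horizontally and
    vertically adjacent pixels; a higher score means a sharper image."""
    g = gray.astype(float)
    return float(np.abs(np.diff(g, axis=1)).sum() + np.abs(np.diff(g, axis=0)).sum())

# A passive autofocus loop would sweep the focus position, evaluate
# contrast_score(frame) for each position and keep the maximum.
```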


Figure 47: Passive auto focus [38]

In conclusion, the appropriate choice is a digital reflex camera with a CCD sensor of at least 4 megapixels, passive autofocus and a 55-200 mm objective. After researching the current market of digital cameras and considering the advice of a number of professional photographers, the final choice will be either the Nikon D50 or the Nikon D100.

6.3.2 Head-up Display

Head-up Displays from different suppliers have different volumes as well as diverse approaches to mounting in the car. Furthermore, each has its own constraints with respect to its positioning relative to the windscreen.

Figure 48: Box design concept


It is therefore desirable to have a quick, precise and cost-effective method to integrate different Head-up Displays into the test station. The main idea is to avoid mounting the Head-up Display directly in the test station. Instead, the Head-up Display will be mounted in a “box”, which will in turn be mounted in the test station (see Figure 48). This box will be identical for all Head-up Displays and somewhat bigger than the biggest Head-up Display system on the market. This way, the installation of the box in the test station can be standardized, providing the desired features mentioned above. A schematic representation of the box follows (see Figure 49):

Figure 49: Head-up Display Box Design

Note the two holes on the sides of the box. The box will be fastened onto the HUD platform of the test station with two screws through these holes. As a result, installing a Head-up Display will only require effort and supervision the first time: once it has been correctly mounted in the box, it can be fixed in that position, keeping its location with respect to the screws and, therefore, with respect to the platform and the test station in subsequent installations. That is, the Head-up Display is mounted and fixed once in the box, and the box is then integrated into the test station by using the two fixing points mentioned. Table 8 compares the box design concept with the direct installation of a HUD in the test station:

Directly into the test station:
✓ Particular solution
✗ Not possible to use the test station while mounting a new HUD
✗ More time and effort needed
✗ One solution for one problem
✗ The same problems need to be addressed in every installation

Box design:
✓ One solution to all problems
✓ Use of the test station independent from the HUD set-up
✓ Fast installation
✓ Only adjusted once
✗ More materials needed

Table 8: Box design versus direct installation of a HUD

The Head-up Display platform can be moved accurately in the X, Y and Z directions in order to adapt the position of the Head-up Display relative to the windscreen as the specifications require. After this adjustment, the platform can be easily fixed in order to achieve maximum accuracy in the test measurements. Once the Head-up Display is in the optimum position, aberrations will be generated by means of the methods described in section 6.3. Among these methods, the one used to produce astigmatism deserves mention here, as it involves rotating the Head-up Display system around the Y axis. This rotation will be electronically controlled in order to achieve maximum accuracy and to offer the possibility of repeating the aberrations as many times as needed. In this way, we can ensure the validity of the test results, because the absolute aberration to which a person is exposed in a test will be identical to that experienced by the rest of the test population.

6.3.3 Windscreen

The windscreen frame should be able to accommodate different windscreen sizes and shapes, as one goal of the test station is to adapt to the specifications of different car models. Due to the complexity of the curvature of a windscreen, it has been decided not to make the windscreen a movable part during the test, because an uncontrolled movement would change its curvature to an unknown value, which must be completely avoided since the curvature plays such a critical role in the resulting image quality. Thus, the windscreen will rest on the frame of the test station in the optimum position, as the specifications advise, on a number of supporting points (at least three supporting points at the top and four at the bottom).

Figure 50: Windscreen diagram

The use of any kind of tight clamping system should be avoided, as it would complicate achieving the optimum curvature (see Figure 50). Instead, the windscreen will simply rest on the seven points mentioned. In the test station, the frame holding the windscreen can rotate around the Y axis (see Figure 39) in order to correctly simulate the specifications of many car models.

6.3.4 Seat

A normal car seat is to be mounted in the test station in order to place the test person in the correct position with respect to the windscreen and, therefore, to the Head-up Display. This will be achieved by installing the seat on a platform movable in the X, Y and Z directions. Once the correct position is achieved, the platform will be fixed. After this, only the normal seat adjustment in the X direction will be allowed, in order to adapt the station to the size of the test person.

6.4 Procedure to generate and measure optical aberrations

6.4.1 General discussion about the generation of test images

Different Head-up Displays apply different technologies to produce an image and use different display sizes and resolutions. This section discusses general aspects of the generation of test images. The main concern is to make sure that, when adapting a picture to the resolution of different displays, proportions are respected. To show the importance of this, let us consider the following pattern:

Figure 51: Linear pattern

Figure 52: Scaled pattern disregarding proportions

In this simple example, we have created a linear pattern using CAD software (see Figure 51). In this picture, a single square represents a pixel and two pixels represent one unit of measure. Once the top image (the original) has been created, Figure 52 is produced by scaling the original by a factor of 110%. After this scaling, the two figures are no longer identical; the second one contains errors: compared to the original, it lacks horizontal and vertical symmetry and presents a pronounced non-linearity in the unit divisions. This situation would put the objective results of our test station at risk, because it would force the program to report an anomaly that has no optical cause at all.

If we analyze the problem, we find that the original pattern in Figure 51 was created using 41x5 pixels, while the scaled one turns out to have 45x6 pixels (110% of the original). The number of pixels used to create the divisions is 41 − 1 = 40 in the original image and 45 − 1 = 44 in the scaled one (one is subtracted from the total so that the zero mark is not counted); the number of divisions, on the other hand, is 20 in both cases. This leads to the following results:

Original: 41 − 1 = 40 → 40 / 20 = 2 pixels/division

Scaled: 45 − 1 = 44 → 44 / 20 = 2.2 pixels/division

Looking at these numbers, we notice that the number of pixels per division in the scaled image is not a natural number; consequently, the positions of the divisions in the scaled figure will depend on the algorithm of the converter, because a pixel is the smallest element of an image and cannot be divided. Therefore, the analysis picture should not be assigned a resolution other than the original resolution multiplied by a natural number. That is, the following equations must hold:

Pixels_width_SCALED = N · Pixels_width_ORIGINAL

Pixels_height_SCALED = N · Pixels_height_ORIGINAL

(Pixels_width / Pixels_height)_ORIGINAL = (Pixels_width / Pixels_height)_SCALED


where N is a natural number. In words, these equations state that the proportions within the scaled image must be identical to those in the original, independently of its size. These rules are to be followed when converting test images into the format of each Head-up Display.
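A minimal sketch of this check (the function and the example resolutions are illustrative, directly implementing the equations above) could look as follows:

```python
def valid_scaling(orig_w: int, orig_h: int, target_w: int, target_h: int) -> bool:
    """True if the target resolution equals the original resolution times the
    same natural number N in both directions (proportions preserved exactly)."""
    if target_w % orig_w or target_h % orig_h:
        return False
    return target_w // orig_w == target_h // orig_h

print(valid_scaling(41, 5, 82, 10))  # True: integer factor N = 2
print(valid_scaling(41, 5, 45, 6))   # False: the 110 % scaling of Figure 52
```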

6.4.2 Double image effect

As mentioned earlier in the present document, the double image effect produces a very uncomfortable sensation in the driver and must be completely eliminated. The technology available today, explained in section 5.2, already handles it well. Thus, there is little interest in simulating this effect.

On the other hand, as this effect has a strong impact on the sharpness of the image, tests with people do not provide interesting conclusions, because the only answer that can be obtained from the user is a yes/no judgment between a sharp and a fuzzy image. In other words, it is of little use to produce a scale rating how blurry an image is when even the first blur level is unacceptable. Nevertheless, should the reader need to simulate this effect, it would be enough to use a windscreen with a wrong wedge angle in the interlayer.

6.4.3 Astigmatism

This optical aberration is strongly related to the curvature of the windscreen, which is
not fully defined due to tolerances in the manufacturing process, as described in the
present document.

The goal of this section is to describe a way to generate astigmatism artificially and accurately, as well as how to measure it. By rotating the windscreen around the Y axis as shown in Figure 53, the area where the light rays strike the windscreen changes. This produces a gradual modification of the reflection curvature and, consequently, generates astigmatism.


Figure 53: Rotation of the windscreen to generate astigmatism

At this point, the reader will probably wonder whether other aberrations appear when the windscreen is rotated. A 3D computer simulation carried out at Siemens VDO showed that rotating the windscreen as shown produces only astigmatism, while the other aberrations remain practically unchanged.

As suggested in section 6.3.3, handling the windscreen is complicated. That is why installing the windscreen in the optimal position and curvature, and avoiding any changes during the test, seems the right approach. Therefore, instead of rotating the windscreen, the Head-up Display will be rotated, producing identical results.

Finally, the value of astigmatism will be measured at every desired point within the eye-box by means of the digital camera. It has been explained that astigmatism produces two limit planes in which either the horizontal or the vertical lines are sharp. Therefore, by first displaying horizontal lines in the virtual image and focusing on these lines with the camera, the distance between the camera and the vertical focus plane (horizontal lines are sharp in the vertical focus plane) can be read on the distance-to-object scale provided on the objective (see Figure 54).

Figure 54: Distance-to-object scale [39]

This distance-to-object scale is not linear. The procedure to calculate values between two marks is complicated and not the focus of the present thesis. Reference [39] provides an accurate and simple-to-use calculation of the values between the marks of this scale.

The same procedure will be followed with vertical lines in order to measure the
distance between the camera and the horizontal focus plane. The mathematics
behind the calculation of astigmatism has been presented in section 5.3 as:

Astigmatism [dpt] = 1/v − 1/h

where v is the distance to the vertical focus plane and h is the distance to the
horizontal focus plane. Both distances must be in meters.

Finally, the position of the eyes of the test person within the eye-box should be monitored because, as noted in this document, the aberrations vary within the eye-box. Therefore, the first step is to define the location and size of the eye-box in the test station. For example, the eye-box can be defined with respect to two scales in the Y and Z directions fixed to the frame, as shown in Figure 55. In this example, the eye-box extends from 2 to 34 on the Z scale and from 5 to 62 on the Y scale.

Figure 55: Measurement of the eye-box

Once the eye-box has been defined, we can directly determine the position of the eyes of the test person relative to the eye-box by reading the position of the eyes on the scales (see Figure 56). Therefore, the objective information given by the camera can be used to know beforehand the value of the aberrations present in the region where the eyes are positioned.


Figure 56: Position of the eyes of the test person

6.4.4 Distortion

A controlled, artificial method to generate a distorted image is to deform it electronically. That is, instead of displaying a perfect shape on the display of the HUD, a distorted image can be displayed. Figure 57 shows an example of both possibilities: a perfectly structured image (on the left) and a distorted one (on the right). The distortion effect can be produced with image processing software (for this example, Adobe Photoshop was used).

Figure 57: Distortion generated with Adobe Photoshop


On the other hand, there can also be distortion resulting from the optical design of the Head-up Display. Naturally, this distortion cannot be controlled and should be measured by analyzing pictures taken at different (Y, Z) coordinates in the eye-box and by monitoring the location of the eyes in the subjective test, as described in section 6.4.3. The objective analysis will be done with image processing software (for example, the one presented in section 7.2) and will follow the TV-Distortion measurement concept defined in section 5.4 (see Figure 58):

TV-Distortion = ((A − B) / B) · 100 [%]

A = (A1 + A2) / 2

Figure 58: TV-Distortion measurement method

6.4.5 Accommodation

This effect cannot be directly simulated with a Head-up Display configuration, although it can be measured (objective test) and monitored (subjective test) by storing in a spreadsheet the distance-to-object values of every photo taken at every point of the eye-box.

Figure 59: Accommodation of the eye


It is always difficult to work with and draw conclusions from tables filled with numbers, so a graphical approach should help with the analysis and evaluation of a Head-up Display. The idea is to represent an X value for every (Y, Z) coordinate in our system (see Figure 59), that is, the distance between the driver and the virtual image (X value) corresponding to each location in the eye-box (each (Y, Z) coordinate). This graphical evaluation approach can be applied to any of the other optical aberrations and optical effects analyzed in the present thesis and will be described in section 7.4.
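A minimal plotting sketch of this idea is given below (the measurement grid and the distance values are purely hypothetical placeholders, assuming the spreadsheet has already been read into arrays):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical distance-to-object values X (in mm) on a grid of (Y, Z)
# camera positions within the eye-box.
y = np.linspace(5, 62, 8)
z = np.linspace(2, 34, 5)
Y, Z = np.meshgrid(y, z)
X = 2000 + 0.05 * (Y - 33) ** 2 + 0.03 * (Z - 18) ** 2  # placeholder surface

plt.contourf(Y, Z, X, levels=15)
plt.colorbar(label="Distance to virtual image X [mm]")
plt.xlabel("Y position in the eye-box")
plt.ylabel("Z position in the eye-box")
plt.title("Accommodation map over the eye-box (hypothetical data)")
plt.show()
```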

6.4.6 Stereoscopic discussion

Once again, this effect cannot be artificially simulated with a Head-up Display system. Therefore, the only available option is to analyze it for a given Head-up Display.

The measurement procedure will be to take two pictures for every situation considered. These two pictures will be taken 65 mm apart, which is the average distance between the eyes of a person [35]. The values will be recorded in a spreadsheet to allow the graphical analysis method explained in section 7.4.

On the other hand, the location of the eyes of the test person will be monitored and recorded throughout the subjective test.

6.4.7 Night/day driving conditions

Night and day driving conditions are easily simulated by changing the ambient lighting conditions in the laboratory. The aberrations and optical effects discussed above should be tested under both conditions in order to determine whether the limit values for the aberrations depend on the lighting conditions.


7 Experiment method

7.1 Introduction

The first objective of the experiment is to obtain the threshold values for aberrations specific to a Head-up Display. By definition, these limit values do not depend on the technology or the device used, but on the amount of optical aberration present in the virtual image. Consequently, the first test to be carried out should be an objective test (described further down in this section), performed by simulating the eyes of the driver with a digital camera as described in this thesis. The camera will be able to move in the Y and Z directions in order to simulate different driver sizes and head movements.

After obtaining the objective values of the aberrations at every point of interest in the eye-box, test persons will produce the subjective results. At that point we will know, for example, that rotating the HUD by “β” degrees produces “x” dpt of astigmatism, and we will add to this the degree to which users accept that level of astigmatism. Consequently, we will be able to obtain the definitive aberration-acceptance limit values for people using a Head-up Display.

Once these values have been determined, different Head-up Displays will be easy to evaluate, because the impression of the user in terms of virtual image quality can be predicted by measuring the aberration values at different points in the eye-box and comparing them with our limit values. That is, we will no longer need to ask a person about the quality of the image, because we will have the definition of a good-quality image for any Head-up Display.

7.2 Objective test

The initial goal of the objective test is to establish the relationship between the movement of the parts of the test station and the consequent variation in the value of the aberrations. This relationship will be of great importance when proceeding with the subjective tests, because it represents the link between the evaluation of the virtual image by the driver and the amount and type of aberration present at all times. Therefore, in every test, all parameters of the system should be recorded together with the resulting image file and the data read from the objective of the camera. A spreadsheet seems the best way to record this information. The minimum pieces of information that must be included in the spreadsheet are the name of the original test image, the (x, y, z, β) coordinates of the Head-up Display (β is the angle of rotation of the HUD around the Y axis), the (y', z') coordinates of the digital camera, the distance-to-object (d.t.o.) value (x') taken from the scale on the objective of the camera and the name (or number code) of the file of the resulting photo. These pieces of information will be recorded for every picture taken in the objective test. The spreadsheet is finally completed with the value of the aberration measured in each case. An example of the registration spreadsheet is shown in Table 9, where all distance values are in millimeters:

Original test picture | Head-up Display (x, y, z, β) | Camera (y', z') | d.t.o. (x') | Analysis picture | Aberration tested | Aberration value

Dist001.tga | 100, 90, 57, 0.7° | 20, 44 | 1980 | 0005.jpg | Distortion | 1.1 %
Ver_lines.tga | 100, 90, 57, 0.7° | 20, 44 | 1980 | 0006.jpg | Astigmatism | 0.078 dpt
Hor_lines.tga | 100, 90, 57, 0.7° | 20, 44 | 2340 | 0007.jpg | Astigmatism | 0.078 dpt
Table 9: Registration of objective test results
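A minimal sketch of how such a registration record could be written programmatically is shown below (the field names follow Table 9; the CSV file name and the helper function are assumptions, not part of the actual test-station software):

```python
import csv
import os

FIELDS = ["original_test_picture", "hud_x", "hud_y", "hud_z", "hud_beta",
          "cam_y", "cam_z", "dto_x", "analysis_picture",
          "aberration_tested", "aberration_value"]

def append_record(path: str, record: dict) -> None:
    """Append one measurement row to the registration spreadsheet (CSV)."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(record)

append_record("objective_test_log.csv", {
    "original_test_picture": "Dist001.tga",
    "hud_x": 100, "hud_y": 90, "hud_z": 57, "hud_beta": 0.7,
    "cam_y": 20, "cam_z": 44, "dto_x": 1980,
    "analysis_picture": "0005.jpg",
    "aberration_tested": "Distortion", "aberration_value": "1.1 %",
})
```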

On the other hand, as mentioned in the preceding section, once the description of a
good-quality virtual image is completed, the objective test will provide accurate and
quick results on the evaluation of any Head-up Display.


7.2.1 Images for objective analysis

In theory, the features of a test image depend on the aberration being tested. For instance, when generating and measuring astigmatism, it is practical to represent horizontal and vertical lines separately, as shown in Figure 60.

Figure 60: Vertical and horizontal lines for testing astigmatism

On the other hand, in order to test distortion, both horizontal and vertical lines should
be displayed simultaneously; that is, the suitable image for this test is a grid like the
one represented in Figure 61.

Figure 61: Grid image for testing distortion

In conclusion, a combination of lines and points seems the most suitable way to carry
out objective tests. Note that these images are not suitable for tests with people,
because they are unrealistic images that will never be directly present in a driving
situation.


7.2.2 Software

There is no need for complex software to carry out the required image processing. Conceptually, all we need is to count the number of pixels between two edges in an image. FrameWork 2.7 is a software package capable of performing the objective analysis of the pictures taken with the digital camera. It provides a large number of user-friendly parameters that give comprehensive control over the analysis of the images. Moreover, it is very flexible and intuitive when programming a personalized inspection procedure. This software has been developed by DVT Corporation and is offered to the public at no cost (see [40]).

The software was conceived as the user interface that controls the configuration of the “DVT SmartImage Sensor” family, the digital video cameras manufactured by this company. The user has access to the real-time image coming from the camera, as well as the ability to run inspections on those images.

In our case, no DVT SmartImage Sensor (that is, no DVT digital video camera) is required, because our examination, as described in the present document, is essentially static. For this approach, FrameWork 2.7 offers the possibility to emulate a connection to a DVT SmartImage Sensor and to import into the program a picture in BMP format with the resolution of the chosen emulator. This allows the user to run all the analysis features available in FrameWork on such a BMP file. Therefore, the procedure reduces to converting the pictures taken with our digital camera into BMP format, importing them into FrameWork and performing the required analysis. The one disadvantage of this software is that the maximum resolution allowed is 1280x1024 pixels. In the end, we will only use two features of this program, edge detection and pixel counting, so any other image processing software with these capabilities, and possibly a higher resolution limit, could be used.
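As an illustration of how little is actually required (this is only a sketch of an alternative using OpenCV and NumPy, not the FrameWork tool itself; the file name and the contrast threshold are assumptions), the two operations we rely on, edge detection and pixel counting along a search path, can be reproduced as follows:

```python
import cv2
import numpy as np

# Hypothetical analysis picture loaded in grayscale
gray = cv2.imread("0005_analysis.bmp", cv2.IMREAD_GRAYSCALE)

row = gray[gray.shape[0] // 2, :].astype(int)    # horizontal scan line at mid-height
edges = np.where(np.abs(np.diff(row)) > 40)[0]   # contrast jumps above an assumed threshold

if len(edges) >= 2:
    print("first edge at x =", int(edges[0]))
    print("pixels between the first two edges:", int(edges[1] - edges[0]))
```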

At this point, it may be interesting to show an example of the general analysis procedure. The aforementioned software FrameWork 2.7 will be used to illustrate this example. The program layout is shown in Figure 62:


Figure 62: Main window of FrameWork 2.7

The red box marks the analysis tool types offered by the program. Each of these buttons provides access to different sub-options, depending on the nature of the examination and the camera emulated. The area marked in green contains the image given by the camera (the imported BMP file in our case). When the analysis tools are positioned, this window also shows them overlaid on the image together with the result of the examination. Finally, the results area is marked in yellow.

In the following lines, an example is described to show the potential of this software and of a general analysis. Let us consider the situation represented in Figures 63 and 64:


Figure 63: Optimum net
Figure 64: Aberrated net

A clear way to analyze the situation represented here is to measure the distance between the lines in the image and their angle with respect to a common reference line. In our illustrative example, we will focus on only one line (the vertical line on the left margin). By means of the measurement tool available in FrameWork, these two values are obtained as follows (see Figure 65):

Figure 65: Optimum net results


As shown in the results table, the detected line is 10 pixels away from the reference (X = 10.00), at an angle of zero degrees (Angle = 0.00 deg). The inspection area is drawn on the image in blue. It operates by searching for an edge from left to right, that is, by looking for a change in contrast along a horizontal path. Each pixel detected as an edge is represented in green and the fitted line in yellow. Figure 66 shows the results of the inspection on the aberrated image:

Figure 66: Aberrated net results

Note that the inspection area has not changed. This means that the programming is done once and run as many times as required. By importing the different BMP files containing the aberrated versions of the original net, the user automatically obtains the objective results of the new inspection. It is important to note that all examinations are done with exactly the same reference points and are therefore directly comparable. In this case, the edge found is fitted as a line positioned 9.31 pixels from the reference, at an angle of 6.90 degrees. All of these parameters can be exported to Excel and are therefore easy to manage.

It is vital to mention at this point the role of the background of the image in the analysis process. Not only in this example but in every scenario, the background is as significant as the image itself and must stay constant throughout the objective test.

This was just the detection of a single line, but the programming possibilities are numerous and not limited to that. Instead of looking for a single line, a certain pattern within the pixels that form the original figure can be searched for, producing a more complex program but offering more accurate results. In the next example, the analysis consists of looking for square shapes with certain deformation restrictions (see Figure 67):

Figure 67: Detection of pattern with FrameWork 2.7

In this case, as seen in the results table, the program finds 18 “objects” (squares here) and provides a list of the (x, y) coordinates and the orientation angle of every detected object. Further exploration of this program is left to the reader, as it is beyond the scope of the present thesis.
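
A comparable pattern search can also be sketched outside FrameWork, for instance with OpenCV. The script below is only a rough equivalent of such an inspection: the file name, the size limits and the aspect-ratio “deformation restriction” are assumptions, and the OpenCV 4 return signature of findContours is assumed.

# Sketch: find square-like contours and report (x, y) centre and angle.
import cv2

img = cv2.imread("aberrated_net.bmp", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

objects = []
for c in contours:
    area = cv2.contourArea(c)
    if not (100 < area < 5000):                 # assumed size limits in pixels
        continue
    (cx, cy), (w, h), angle = cv2.minAreaRect(c)
    if w == 0 or h == 0:
        continue
    if 0.8 < w / h < 1.25:                      # "deformation restriction": roughly square
        objects.append((cx, cy, angle))

print(len(objects), "objects found")
for cx, cy, angle in objects:
    print(f"x={cx:.1f}  y={cy:.1f}  angle={angle:.1f} deg")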

7.3 Subjective test

In reality, the user does not devote full attention to a display in the car, but is focused on the road. That is, the Head-up Display is only glanced at during certain moments of the driving task. Therefore, the subjective test should avoid presenting the user with a virtual display alone and should make the user focus on something else. Converting the test station into a driving simulator is obviously the best approach, although it involves a great amount of time, effort and cost.

If a driving simulator is not available, a simple solution is to project an image at least 5 meters away from the user by means of a projector and a computer. This image should force the driver to interact: for example, a figure moved with a joystick by the user that needs to stay within the limits of a path (as shown in Figure 68), or a video game of similarly low complexity, should be good enough. While concentrating on the game, the user is asked to read the information on the virtual image lying around 2 meters away, between the user and the projection surface of the game, and to evaluate whether it is easy to read.
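
If no suitable game is at hand, even a very small script can serve as the distraction task. The following Python sketch is a minimal stand-in, not a requirement of the method: it draws a slowly drifting path and a marker that the test person has to keep inside it. Keyboard arrows are used here but could be replaced by a joystick axis; all sizes, speeds and colours are arbitrary assumptions, and the pygame package is assumed to be available.

# Minimal distraction task: keep the marker inside a drifting "road" band.
import math
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
clock = pygame.time.Clock()

marker_x, road_half_width = 400.0, 80.0
t, misses, running = 0.0, 0, True

while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    keys = pygame.key.get_pressed()             # a joystick axis could be read instead
    marker_x += (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * 5

    t += 0.02
    road_centre = 400 + 150 * math.sin(t)       # the road drifts slowly left and right
    if abs(marker_x - road_centre) > road_half_width:
        misses += 1                             # marker is outside the path this frame

    screen.fill((0, 0, 0))
    pygame.draw.rect(screen, (60, 60, 60),
                     (int(road_centre - road_half_width), 0, int(2 * road_half_width), 600))
    pygame.draw.circle(screen, (255, 255, 255), (int(marker_x), 500), 10)
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
print("frames outside the path:", misses)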

Figure 68: Representation of a game-type distraction

The images used in the subjective test should be the graphics, letters, indications and symbols applied in a real driving situation with a Head-up Display, rather than the abstract combination of lines and points used in the objective test. In theory, this will help both the test person and the examiner and will provide more realistic results.

The procedure to arrive at the tolerance limit value should be iterative. There are many ways to carry out the test: start with the optimum image (zero aberration) and keep increasing the value of an aberration until the user can no longer read it (or it produces an uncomfortable sensation); start with a strongly aberrated image and keep reducing the aberration value until the user can read the information on the virtual image; or jump between high and low aberration values until the user converges on a value. In the following tables, different combinations for testing astigmatism are represented. In these tables, tests run from left to right, ↑ means that the reader could read the information on the virtual image, and ↓ means that the user did not accept the virtual image with the related amount of aberration. Table 9 therefore represents a test for astigmatism which starts with a zero-aberration image. The user reads the information with no problem and agrees to include it in the “good-quality” group (↑). The next step is to produce 0.40 dpt of astigmatism, which is not tolerated by the test person (↓). From there, the procedure is to go back to the previous picture and add one unit of aberration (0.05 dpt of astigmatism in this example) if the previous picture was accepted (↑), or to reduce one unit of aberration if the aberrated image was not accepted by the user (↓). The other tables represent other operating logics that can easily be inferred; a code sketch of this staircase logic is given after the tables. All the tables have been created simulating a user who tolerates up to 0.20 dpt of astigmatism (see Tables 10, 11 and 12).

Astigmatism [dpt]    Responses (test runs left to right)
0.40                 ↓
0.35                 ↓
0.30                 ↓
0.25                 ↓ ↓
0.20                 ↑ ↑
0.15                 ↑
0.10                 ↑
0.05                 ↑
0.00                 ↑
Table 10: Iterative method for a subjective test

Astigmatism [dpt]    Responses (test runs left to right)
0.40
0.35
0.30
0.25                 ↓
0.20                 ↑ ↑
0.15                 ↑
0.10                 ↑
0.05                 ↑
0.00                 ↑
Table 11: Subjective test by means of positive increments of astigmatism

Astigmatism [dpt]    Responses (test runs left to right)
0.40                 ↓
0.35                 ↓
0.30                 ↓
0.25                 ↓ ↓
0.20                 ↑ ↑
0.15
0.10
0.05
0.00
Table 12: Subjective test by means of negative increments of astigmatism
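
The staircase logic illustrated by these tables can also be written down as a small program. The sketch below follows one possible reading of the procedure (accept the zero-aberration image, jump once to the most aberrated one, then step in 0.05-dpt units until the accepted and rejected levels meet); ask_user() is a hypothetical placeholder for presenting the aberrated virtual image and recording the yes/no answer, and here it simply simulates a tolerance of 0.20 dpt.

# Sketch of the iterative (staircase) procedure for one aberration,
# here astigmatism in steps of 0.05 dpt.
STEP = 0.05
MAX_LEVEL = 0.40

def ask_user(level):
    # placeholder: display the image with `level` dpt of astigmatism and
    # return True if the test person can still read it comfortably
    return level <= 0.20

def staircase():
    level = 0.0
    last_accepted = None
    rejected = set()
    while True:
        if ask_user(level):
            last_accepted = level
            # after the zero-aberration image, jump once to the worst image
            level = MAX_LEVEL if level == 0.0 else round(level + STEP, 2)
        else:
            rejected.add(level)
            level = max(0.0, round(level - STEP, 2))
        # converged when the best accepted level lies directly below a rejected one
        if last_accepted is not None and round(last_accepted + STEP, 2) in rejected:
            return last_accepted

print("estimated tolerance:", staircase(), "dpt")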

7.4 Evaluation of objective and subjective results

After both the objective and the subjective tests have been carried out, a large amount of numbers and tables with calculations and results will be available. It is then useful to design a method that allows quick and accurate evaluation of the data. The methods proposed in this document are mainly graphical, because they are very intuitive and especially suitable in this case.

By looking at the results table proposed for the objective test in section 7.2, it can be noted that the objective results can be represented using three variables. For every HUD position there is a camera exploration within the limits of the eye-box. Therefore, the graph should represent the relationship between each of the (y’, z’) coordinates of the camera and the corresponding aberration values. Taking accommodation as an example, a simulation of such a 3D graphical evaluation method is proposed in the following lines.

Table 13 represents a simulation of the values taken in a hypothetical test. The size of the eye-box is 100 x 200 mm and the distance to the virtual image ranges between 1.7 m and 2.4 m:

Eye-box              Distance to virtual image
Y [mm]    Z [mm]     X [m]
0         0          2.2
0         50         2.0
0         100        2.4
50        0          2.3
50        50         1.8
50        100        2.2
100       0          2.0
100       50         1.7
100       100        2.0
150       0          2.4
150       50         1.8
150       100        2.1
200       0          2.3
200       50         2.0
200       100        2.4
Table 13: Values of distance to virtual image

Representing these values in a matrix format results in the following (see Table 14):

Y [mm] \ Z [mm]      0      50     100
0                    1.7    2.0    1.7
50                   1.8    2.2    2.0
100                  2.0    2.4    2.1
150                  1.8    2.0    1.8
200                  1.9    1.8    1.7
Table 14: Values in matrix form
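
If the measured values are exported to a file, the conversion from the list format of Table 13 to the matrix format of Table 14 can be automated. In the short pandas sketch below, the CSV file name and the column names are assumptions.

# Sketch: turn the (Y, Z, X) list of Table 13 into the Y-by-Z matrix of Table 14.
import pandas as pd

data = pd.read_csv("accommodation_eyebox.csv")      # columns: Y, Z, X
matrix = data.pivot(index="Y", columns="Z", values="X")
print(matrix)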

By using Microsoft Excel or any mathematical software package, a 3D representation of these values can be produced. Figure 69 is a 3D representation of the values contained in Table 14:

[3D surface chart “Accommodation”: distance to the virtual image X (1.6–2.4 m) plotted over the eye-box coordinates Y and Z]
Figure 69: Accommodation values represented in a 3D graphic

Note that, depending on the number of test points in the eye-box and the values obtained at those points, this 3D graph can be more or less helpful. When the intelligibility of the graph is compromised, the representation shown in Figure 70 is recommended:

[Color-coded top view “Accommodation”: the same values mapped over the eye-box, Y = 0–200 mm, Z = 0–100 mm, color bands from 1.6–1.7 m to 2.3–2.4 m]
Figure 70: Accommodation values within the eye-box

This last graph, a top view of the previous one, clearly shows the accommodation values within the eye-box. The key on the right side of the graph indicates the amplitude of the aberration and, because it is color-coded, the area of the eye-box with a certain aberration value can easily be identified. Therefore, these graphs can be used to represent the objective quality of a Head-up Display.
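
Besides Excel, both graph types can be generated with any plotting library. The matplotlib sketch below uses the values of Table 14; the color map, the figure layout and the "nearest" shading are arbitrary choices, and matplotlib 3.3 or newer is assumed.

# Sketch: 3D surface (Figure 69 style) and color-coded top view (Figure 70 style)
# of the accommodation values over the eye-box, using the data of Table 14.
import numpy as np
import matplotlib.pyplot as plt

y = np.array([0, 50, 100, 150, 200])            # eye-box Y coordinates [mm]
z = np.array([0, 50, 100])                      # eye-box Z coordinates [mm]
x = np.array([[1.7, 2.0, 1.7],                  # distance to virtual image [m]
              [1.8, 2.2, 2.0],
              [2.0, 2.4, 2.1],
              [1.8, 2.0, 1.8],
              [1.9, 1.8, 1.7]])

Z, Y = np.meshgrid(z, y)                        # grids matching the shape of x

fig = plt.figure(figsize=(10, 4))
ax3d = fig.add_subplot(1, 2, 1, projection="3d")
ax3d.plot_surface(Y, Z, x, cmap="viridis")
ax3d.set_xlabel("Y [mm]"); ax3d.set_ylabel("Z [mm]"); ax3d.set_zlabel("X [m]")

ax2d = fig.add_subplot(1, 2, 2)
im = ax2d.pcolormesh(y, z, x.T, cmap="viridis", shading="nearest")
ax2d.set_xlabel("Y [mm]"); ax2d.set_ylabel("Z [mm]")
fig.colorbar(im, ax=ax2d, label="distance to virtual image [m]")
plt.show()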

On the other hand, the results of the subjective tests should be processed with standard statistical methods. It will be interesting to combine the information supplied by the tests when drawing conclusions, as they will provide findings that are extremely difficult to predict at the present theoretical stage, for example the impact of the driver's age on the tolerance of the aberrations.
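
As one possible way of processing the subjective results, the staircase answers of each test person can be reduced to a single threshold, and the thresholds can then be summarised and related to age. The data structure, the midpoint rule and the use of a plain correlation coefficient in the sketch below are assumptions; the real evaluation may well require more elaborate statistics.

# Sketch: per-subject tolerance thresholds and a first look at the age effect.
# `results` is a hypothetical structure:
# {subject_id: {"age": years, "responses": [(level_dpt, accepted_bool), ...]}}
import numpy as np

def threshold(responses):
    accepted = [lvl for lvl, ok in responses if ok]
    rejected = [lvl for lvl, ok in responses if not ok]
    # midpoint between the highest accepted and the lowest rejected level
    return (max(accepted) + min(rejected)) / 2.0

def summarize(results):
    thresholds = np.array([threshold(r["responses"]) for r in results.values()])
    ages = np.array([r["age"] for r in results.values()])
    mean, std = thresholds.mean(), thresholds.std(ddof=1)
    corr = np.corrcoef(ages, thresholds)[0, 1]   # age vs. tolerated aberration
    return mean, std, corr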


8 Conclusion

The analysis of optical aberrations carried out in this thesis concludes that a theoretically good-quality virtual image contains no double-image effect, a maximum of 0.25 dpt of astigmatism and less than 1.5% of TV-Distortion. Moreover, the accommodation of the eye should be kept within ±0.25 dpt throughout the eye-box, and the stereoscopic considerations and the night/day driving conditions should be taken into account.

A test station has been planned in order to simulate the relevant aberrations mentioned above and to measure the driver's tolerance limits for such optical effects. These threshold values will be of great importance for establishing a global scale to evaluate and compare the optical quality of Head-up Displays.


9 Outlook

The methodology defined in this thesis can be applied in contexts other than the ones set as main goals at the beginning. For example, a direct application of the test station is to predict errors at the assembly line and, therefore, to avoid them. That is, a simulation of the assembly tolerances can be carried out, providing a table that links these tolerance values with the aberrations they entail. To illustrate this, let us consider the position of the Head-up Display as the assembly tolerance. A change in the (x, y, z) coordinates of the HUD with respect to the windscreen will cause certain optical aberrations to appear at certain levels. This variation in (x, y, z) can be simulated and analyzed in the test station presented in this thesis. Therefore, a database linking the position of the HUD with the aberration values entailed by such variations can be generated. This database can be used at the assembly line to determine the optimum (x, y, z) coordinates of the Head-up Display for every individual car, quickly and in real time.

As an illustration, let us consider a change of the “y” coordinate of the Head-up Display within a range of ±5 mm from the optimum position. Let us assume that such a variation implies the appearance of astigmatism in the virtual image as shown in Figures 71, 72 and 73:

[Color-coded eye-box map “Astigmatism (y = -5 mm)”: astigmatism over Y = 0–200 mm, Z = 0–100 mm, color bands from 0–0.05 dpt to 0.45–0.5 dpt]
Figure 71: Distribution of astigmatism at y = -5 mm

[Color-coded eye-box map “Astigmatism (y = 0 mm; optimum position)”: same axes and color bands as Figure 71]
Figure 72: Distribution of astigmatism at y = 0 mm

[Color-coded eye-box map “Astigmatism (y = 5 mm)”: same axes and color bands as Figure 71]
Figure 73: Distribution of astigmatism at y = 5 mm

Logically, the robots at the assembly line will work with numbers, but the graphical representation of those numbers is used in this example to simplify the explanation for the reader. At the assembly line, the real-time eye-box data represented in the previous graphs can be obtained by means of a multi-camera robot arm that positions itself at the eye-box location defined in the specifications. Let us assume that this multi-camera device, which consists of several digital cameras so that the distribution of an aberration over the whole eye-box can be sampled at once, takes a snapshot of the virtual image produced by the HUD mounted in a particular car.

By comparison with the previously created database, it detects that the distribution of astigmatism matches the one represented in Figure 73. This means that the HUD presents an offset of 5 mm along the Y axis. Therefore, a signal can be generated to shift the “y” coordinate of the HUD by 5 mm in the negative direction in order to cancel this offset and position the Head-up Display in its optimum place.
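
Algorithmically, this comparison can be as simple as a nearest-neighbour search over the pre-computed aberration maps. The sketch below is schematic only: the database keys, the file names and the way the multi-camera snapshot is converted into a matrix are assumptions.

# Sketch: find which simulated HUD offset best explains a measured astigmatism
# map and emit the opposite correction. Maps are 2D arrays sampled over the
# eye-box; the database keys are hypothetical y-offsets in mm.
import numpy as np

def best_match(measured, database):
    # smallest sum of squared differences between measured and stored maps
    return min(database, key=lambda offset: np.sum((database[offset] - measured) ** 2))

# hypothetical database built from the test-station simulations (Figures 71-73)
database = {
    -5.0: np.load("astig_y_minus5.npy"),
     0.0: np.load("astig_y_0.npy"),
    +5.0: np.load("astig_y_plus5.npy"),
}
measured = np.load("snapshot_astigmatism.npy")   # from the multi-camera device

offset = best_match(measured, database)
print("estimated y offset:", offset, "mm -> correction:", -offset, "mm")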

This has covered only the adjustment of the HUD in the “y” direction. In theory, every change in position and in angle can be characterized by a unique combination of aberration distributions. Consequently, by measuring these optical effects as described in this section, a real-time adjustment can be provided at the assembly line, not only to obtain the particular optimum assembly position for every car, but also to adapt the HUD to the age of the customer and to any other conclusions drawn from the experiments carried out with the test station described in this thesis.
