1 Introduction
Machine vision is the technology of replacing or complementing manual inspections and measurements with digital cameras and image processing. The technology is used in a variety of different industries to automate production, increase production speed and yield, and improve product quality.
Machine vision in operation can be described by a four-step flow:
1. Imaging: Take an image.
2. Processing and analysis: Analyze the image to obtain a result.
3. Communication: Send the result to the system in control of the process.
4. Action: Take action depending on the vision system's result.
1.1 Objective
This introductory text covers basic theoretical topics that are useful in practical machine vision work, whether your profession is in sales or in engineering. The level is set for the beginner: no special knowledge is required, but a general technical orientation is essential. The contents are chosen with SICK IVP's cameras in mind, but the focus is on understanding terminology and concepts rather than specific products.
The contents are divided into eight chapters:
1. Introduction (this chapter)
2. Imaging
3. Illumination
4. Laser Triangulation
5. Processing and Analysis
6. Communication
7. Vision Solution Principles
8. Appendix
The appendix contains some useful but more technical topics needed for a deeper understanding of the subject.
1.2 Application Types
Machine vision applications can be divided into four types from a technical point of view: locate, measure, inspect, and identify.
1.2.1 Locate
1.2.2 Measure
1.2.3 Inspect
1.2.4 Identify
1.3 Branch Types
Machine vision applications can also be categorized according to branch type, for example:
Automotive
Electronics
Food
Logistics
Manufacturing
Robotics
Packaging
Pharmaceutical
Wood.
The branch categories often overlap, for example when a vision-guided robot
(robotics) is used to improve the quality in car production (automotive).
1.4 Camera Types
Cameras used for machine vision are categorized into vision sensors, smart cameras, and PC-based systems. All camera types are digital, as opposed to analog cameras in traditional photography. Vision sensors and smart cameras analyze the image and produce a result by themselves, whereas a PC-based system needs an external computer to produce a result.
1.4.1 Vision Sensors
1.4.2 Smart Cameras
A smart camera is a camera with a built-in image analysis unit that allows the camera to operate stand-alone without a PC. The flexible built-in image analysis functionality provides inspection possibilities in a vast range of applications. Smart cameras are very common in 2D machine vision. SICK IVP also produces a smart camera for 3D analysis.
Example: 2D Smart Camera
The IVC-2D (Industrial Vision Camera) is a stand-alone vision system for 2D analysis. For example, the system can detect the correct label and its position on a whisky cap. A faulty pattern or a misalignment is reported as a fail.
Example: 3D Smart Camera
The IVC-3D is a stand-alone vision system for 3D analysis. It scans calibrated 3D data in mm, analyzes the image, and outputs the result. For example, the system can detect surface defects, measure height and volume, and inspect shape.
1.4.3 PC-based Systems
In a PC-based system the camera captures the image and transfers it to the PC for processing and analysis. Because of the large amounts of data that need to be transferred to the PC, these cameras are also referred to as streaming devices.
Example: 3D Camera
The Ruler collects calibrated 3D-shape data in mm and sends the image to a PC for analysis. For example, it detects the presence of apples in a fruit box and measures log dimensions to optimize board cutting.
2 Imaging
The term imaging defines the act of creating an image. Imaging has several technical names: acquiring, capturing, or grabbing. Grabbing a high-quality image is the number one goal for a successful vision application.
This chapter covers the most basic concepts that are essential to understand when learning how to grab images.
2.1 Basic Camera Concepts
A simplified camera setup consists of a camera, a lens, lighting, and an object. The lighting illuminates the object and the reflected light is seen by the camera. The object is often referred to as the target.
(Figure: a simplified camera setup with lighting, camera, lens, and object/target.)
2.1.1 Digital Imaging
(Example images: a focused image vs. an unfocused or blurred one.)
2.1.2 Lenses and Focal Length
The main differences between lens types are their angle of view and focal length. The two terms are essentially different ways of describing the same thing.
The angle of view determines how much of the visual scene the camera sees. A wide-angle lens sees a larger part of the scene, whereas the small angle of view of a tele lens allows seeing details from longer distances.
(Figure: wide-angle, normal, and tele lenses; the angle of view decreases from wide-angle to tele.)
The focal length is the distance between the lens and the focal point. When the focal
point is on the sensor, the image is in focus.
(Figure: parallel light beams converging behind a lens; the focal length is the distance from the lens to the focal point.)
Focal length is related to angle of view in that a long focal length corresponds to a small
angle of view, and vice versa.
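This relationship can be sketched with simple trigonometry. A minimal sketch assuming a thin-lens model; the function name and example values are illustrative, not from this text:

```python
import math

def angle_of_view_deg(sensor_size_mm: float, focal_length_mm: float) -> float:
    """Approximate angle of view of a simple lens (thin-lens model).

    sensor_size_mm: sensor height or width in mm (use the same dimension as the FOV).
    """
    return 2 * math.degrees(math.atan(sensor_size_mm / (2 * focal_length_mm)))

# For a 4.8 mm sensor height: a short focal length gives a wide angle of view,
# a long focal length a narrow (tele) one.
print(angle_of_view_deg(4.8, 8))   # ~33 degrees (wide angle)
print(angle_of_view_deg(4.8, 50))  # ~5.5 degrees (tele)
```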
In addition to the standard lenses there are other types for special purposes, described in
further detail in the Appendix.
2.1.3 Field of View in 2D
The FOV (Field of View) in 2D systems is the full area that a camera sees. The FOV is
specified by its width and height. The object distance is the distance between the
lens and the object.
The object distance is also called LTO (lens-to-object) distance or working distance.
2.1.4 Aperture and F-stop
The aperture is the opening in the lens that controls the amount of light that reaches the sensor. In quality lenses, the aperture is adjustable.
The size of the aperture is measured by its F-stop value. A large F-stop value means a small aperture opening, and vice versa. For standard CCTV lenses, the F-stop value is adjustable in the range between F1.4 and F16.
2.1.5 Depth of Field
(Figure: a lens cannot focus on objects closer than the minimum object distance, MOD.)
The focal plane is found at the distance where the focus is as sharp as possible. Objects slightly closer or farther away than the focal plane can also be considered to be in focus. The distance interval where good-enough focus is obtained is called the depth of field.
(Figure: beyond the minimum object distance, objects within the depth of field are in focus; objects closer or farther away are out of focus.)
The depth of field depends on both the focal length and the aperture adjustment (described in the previous section). Theoretically, perfect focus is only obtained in the focal plane at an exact distance from the lens, but for practical purposes the focus is good enough within the depth of field. Rules of thumb (see the sketch below):
1. A long focal length gives a shallow depth of field, and vice versa.
2. A large aperture gives a shallow depth of field, and vice versa.
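Both rules fall out of standard thin-lens optics. A minimal sketch, assuming the usual hyperfocal-distance approximation and an assumed circle of confusion; all names and values are illustrative:

```python
def depth_of_field_mm(focal_mm: float, f_stop: float, distance_mm: float,
                      coc_mm: float = 0.02) -> tuple:
    """Approximate near and far focus limits around a focused distance."""
    hyperfocal = focal_mm**2 / (f_stop * coc_mm) + focal_mm
    near = distance_mm * (hyperfocal - focal_mm) / (hyperfocal + distance_mm - 2 * focal_mm)
    far = (distance_mm * (hyperfocal - focal_mm) / (hyperfocal - distance_mm)
           if distance_mm < hyperfocal else float("inf"))
    return near, far

# Opening the aperture (smaller F-stop) shrinks the depth of field:
print(depth_of_field_mm(16, 1.4, 500))  # ~(475, 528) mm: shallow
print(depth_of_field_mm(16, 16, 500))   # ~(312, 1266) mm: deep
```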
By adding a distance ring between the camera and the lens, the focal plane (and thus the MOD) can be moved closer to the camera. A distance ring is also referred to as a shim, spacer, or extension ring.
A thick distance ring is called an extension tube. It makes it possible to position the camera very close to the object, also known as macro functionality.
Distance rings and extension tubes are used to decrease the minimum object distance. The thicker the ring or tube, the smaller the minimum object distance.
A side-effect of using a distance ring is that a maximum object distance is introduced and that the depth of field range decreases.
(Figure: with a distance ring, focus is only possible between a minimum and a maximum object distance; the depth of field lies in between.)
2.2 Basic Image Concepts
This section treats basic image terminology and concepts that are needed when
working with any vision sensor or system.
2.2.1 Pixels
A pixel is the smallest element in a digital image. Normally, a pixel in the image corresponds directly to a physical pixel on the sensor. Pixel is an abbreviation of 'picture element'.
Normally, the pixels are so small that they only become distinguishable from one another if the image is enlarged. To the right is an example of a very small image with dimensions 8x8 pixels. The dimensions are called x and y, where x corresponds to the image columns and y to the image rows.
Note the direction of the y axis, which is opposite from what is taught in school mathematics. This is explained by the image being treated as a matrix, where the upper-left corner is
the (0,0) element. The purpose of the coordinate system and matrix representation is to
make calculations and programming easier.
The object resolution is the physical dimension on the object that corresponds to one pixel on the sensor. Common units for object resolution are μm (microns) per pixel and mm per pixel. In some measurements the resolution can be smaller than a pixel. This is achieved by interpolation algorithms that extract subpixel information from the pixel data.
Example: Object Resolution Calculation
The following practical method gives a good approximation of the object resolution:
FOV width = 50 mm
Sensor resolution = 640x480 pixels
Object resolution = 50 mm / 640 pixels ≈ 0.08 mm per pixel
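The same approximation is a one-line calculation. A minimal sketch; the function name is illustrative:

```python
def object_resolution_mm_per_pixel(fov_width_mm: float, sensor_width_pixels: int) -> float:
    """Physical size on the object covered by one pixel."""
    return fov_width_mm / sensor_width_pixels

print(object_resolution_mm_per_pixel(50, 640))  # ~0.078 mm per pixel
```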
2.2.2 Intensity
The brightness of a pixel is called intensity. The intensity information is stored for each pixel in the image and can be of different types. Examples:
1. Binary: One bit per pixel.
2. Gray scale: Typically one byte per pixel.
3. Color: Typically one byte per pixel and color. Three bytes are needed to obtain full
color information. One pixel thus contains three components (R, G, B).
When the intensity of a pixel is digitized, the information is quantized into discrete levels. The number of bits used per pixel is called the bit depth. Most often in machine vision, 8 bits per pixel are enough. Deeper bit depths can be used in high-end sensors and sensitive applications.
(Example images: binary, gray scale, and color.)
Because of the different amounts of data needed to store each pixel (e.g. 1, 8, and 24 bits), the image processing time will be longer for color and gray scale than for binary.
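To make the data amounts concrete, the raw sizes of a VGA image for the three intensity types can be compared. A small sketch; the resolution is an assumed example:

```python
WIDTH, HEIGHT = 640, 480
pixels = WIDTH * HEIGHT

for name, bits_per_pixel in [("binary", 1), ("gray scale", 8), ("color", 24)]:
    size_kb = pixels * bits_per_pixel / 8 / 1024
    print(f"{name:>10}: {size_kb:7.1f} KB")  # binary 37.5, gray scale 300.0, color 900.0
```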
2.2.3 Exposure
Exposure is how much light is detected by the photographic film or sensor. The exposure amount is determined by two factors:
1. Exposure time: Duration of the exposure, measured in milliseconds (ms). Also called shutter time, from traditional photography.
2. Aperture size: Controls the amount of light that passes through the lens.
The total exposure is thus the combined result of these two parameters.
If the exposure time is too short for the sensor to capture enough light, the image is said to
be underexposed. If there is too much light and the sensor is saturated, the image is
said to be overexposed.
(Example images: an underexposed image, and an overexposed image with saturated areas in white.)
A topic related to exposure time is motion blur. If the object is moving and the exposure time is too long, the image will be blurred, which threatens application robustness. In applications where a short exposure time is necessary because of object speed, there are three methods to make the image bright enough:
1. Illuminate the object with high-intensity lighting (strobe).
2. Open up the aperture to allow more light into the camera.
3. Electronic gain (described in the next section).
2.2.4 Gain
Exposure time and aperture size are the physical ways to control image intensity. There is also an electronic way, called gain, that amplifies the intensity values after the sensor has been exposed, much like the volume control of a radio (which doesn't actually make the artist sing louder). The tradeoff of compensating for insufficient exposure with a high gain is amplified noise: a grainy image appears.
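A minimal sketch of what gain does to the pixel data (NumPy-based; it illustrates the principle, not any particular camera's implementation):

```python
import numpy as np

def apply_gain(image: np.ndarray, gain: float) -> np.ndarray:
    """Scale 8-bit intensities by a gain factor, clipping at saturation (255)."""
    return np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)

# An underexposed image is brightened, but its sensor noise is amplified too:
noisy_dark = np.clip(np.random.normal(40, 5, size=(480, 640)), 0, 255).astype(np.uint8)
brightened = apply_gain(noisy_dark, gain=4.0)  # noise std grows from ~5 to ~20
```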
2.2.5 Contrast
Contrast is the relative difference between bright and dark areas in an image. Contrast is necessary to see anything at all in an image.
Low contrast.
Normal contrast.
High contrast.
2.2.6 Histogram
A histogram is a diagram where the pixels are sorted in order of increasing intensity values. Below is an example image that only contains six different gray values. All pixels of a specific intensity in the image (left) become one bar in the histogram (right).
Histograms for color images work the same way as for gray scale, where each color channel (R, G, B) is represented by its individual histogram.
Typically a gray scale image contains many more gray levels than those present in the example image above. This gives the histogram a more continuous appearance.
The histogram can now be used to understand the concept of contrast better, as shown in the example below. Notice how a lower contrast translates into a narrower histogram.
(Example histograms: a normal-contrast image spans most of the intensity range 0-255, whereas a low-contrast image occupies a narrow band.)
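Computing a histogram is a one-liner in most environments. A small NumPy sketch, not tied to any particular camera software:

```python
import numpy as np

def histogram(image: np.ndarray) -> np.ndarray:
    """Count how many pixels take each 8-bit intensity value (0..255)."""
    return np.bincount(image.ravel(), minlength=256)

# A low-contrast image concentrates its pixels in a narrow band of bins:
img = np.random.randint(100, 156, size=(480, 640), dtype=np.uint8)
counts = histogram(img)
print(counts.nonzero()[0].min(), counts.nonzero()[0].max())  # 100 155
```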
3 Illumination
Light is of crucial importance in machine vision. The goal of lighting in machine vision is to
obtain a robust application by:
1. Enhancing the features to be inspected.
2. Assuring high repeatability in image quality.
Illumination is the way an object is lit up and lighting is the actual lamp that generates the
illumination.
Light can be ambient, such as normal indoor light or sunlight, or special light that
has been chosen with the particular vision application's needs in mind.
Most machine vision applications are sensitive to lighting variations, which is why ambient light often needs to be eliminated by a cover, called a shroud.
3.1 Illumination Principles
Using different illumination methods on the same object can yield a wide variety of results. To enhance the particular features that need to be inspected, it is important to understand basic illumination principles.
3.1.1 Light and Spectral Response
(Figure: the visible spectrum spans roughly 400-700 nm; wavelengths beyond 700 nm are IR.)
The spectral response of a sensor is its sensitivity curve for different wavelengths. Camera sensors can have a different spectral response than the human eye.
(Example: spectral response of a gray scale CCD sensor, with maximum sensitivity for green, around 500 nm.)
3.1.2 Reflection, Absorption, and Transmission
The optical axis is an imaginary line through the center of the lens, i.e. the direction in which the camera is looking.
The camera sees the object thanks to light that is reflected off its surface. In the figure below, all light is reflected in one direction. This is called direct or specular reflection and is most prevalent when the object is glossy (mirror-like).
(Figure: incident and reflected light on a glossy surface, with the angles measured from the surface normal.)
The angle of incidence and angle of reflection are always equal when measured
from the surface normal.
When the surface is not glossy, i.e. has a matte finish, there is also diffuse reflection.
Light that is not reflected is absorbed in the material.
(Figure: on a matte surface, incident light splits into a main reflection, diffuse reflection, and absorbed light.)
Transparent or semi-transparent materials also transmit light.
(Figure: on a semi-transparent surface, incident light splits into direct reflection, absorbed light, and transmitted light.)
The above principles, reflection, absorption, and transmission, constitute the basis of most
lighting methods in machine vision.
There is a fourth principle, emission, when a material produces light, for example
when molten steel glows red because of its high temperature.
3.2 Lighting Types
There is a large variety of lighting types available for machine vision. The types listed here represent some of the most commonly used techniques. The most accepted type for machine vision is the LED (Light-Emitting Diode), thanks to its even light, long life, and low power consumption.
3.2.1 Ring Light
A ring light is mounted around the optical axis of the lens, either on the camera or somewhere between the camera and the object. The angle of incidence depends on the ring diameter, where the lighting is mounted, and at what angle the LEDs are aimed.
(Figure: a ring light around the optical axis; mainly direct reflections reach the camera.)
Pros: Easy to use. High intensity makes short exposure times possible.
Cons: Direct reflections, called hot spots, on reflective surfaces.
(Example images: ambient light vs. ring light.)
3.2.2 Spot Light
A spot light has all the light emanating from one direction that is different from the optical axis. For flat objects, only diffuse reflections reach the camera.
Pros: No hot spots.
Cons: Uneven illumination.
3.2.3 Backlight
With the backlight principle, the object is illuminated from behind to produce a contour or silhouette. Typically, the backlight is mounted perpendicularly to the optical axis.
(Figure: a backlight mounted behind the object.)
Pros: Produces a high-contrast silhouette.
Cons: Dimensions must be larger than the object.
ple
Ambient light.
contours
3.2.4 Darkfield
Darkfield means that the object is illuminated at a large angle of incidence. Direct reflections only occur where there are edges; light that falls on flat surfaces is reflected away from the camera.
(Figure: a darkfield ring light close to the object surface.)
Pros: Good enhancement of scratches, protruding edges, and dirt on surfaces.
Cons: Mainly works on flat surfaces with small features.
(Example images: ambient light vs. darkfield.)
3.2.5 On-Axis Light
When an object needs to be illuminated parallel to the optical axis, i.e. directly from the front, a semi-transparent mirror is used to create an on-axis light source. On-axis light is also called coaxial light. Since the beams are parallel to the optical axis, direct reflections appear on all surfaces parallel to the focal plane.
(Figure: a light source and a semi-transparent mirror create coaxial light along the optical axis.)
Cons: Low intensity requires long exposure times.
3.2.6 Dome Light
Glossy materials can require a very diffuse illumination without hot spots or shadows. The dome light produces the needed uniform light intensity thanks to LEDs illuminating the bright, matte inside of the dome walls. The middle of the image becomes darker because of the hole in the dome through which the camera is looking.
(Figure: a dome light placed over the object.)
Cons: Low intensity requires long exposure times.
3.2.7 Laser Line
3.3 Lighting Variants and Accessories
3.3.1 Strobe Light
A strobe light is a flashing light. Strobing allows the LED to emit a higher light intensity than what is achieved with a constant light, by turbo charging: the LED is driven with a high current during its on-time, after which it is allowed to cool off during the off-time. The on-time relative to the total cycle time (on-time plus off-time) is referred to as the duty cycle (%).
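A quick sketch of that relationship; the names and values are illustrative:

```python
def duty_cycle_percent(on_time_ms: float, off_time_ms: float) -> float:
    """Duty cycle: on-time as a share of the total strobe cycle."""
    return 100 * on_time_ms / (on_time_ms + off_time_ms)

# A 1 ms flash in a 10 ms cycle is a 10% duty cycle, which is what allows
# driving the LED far above its continuous-current rating.
print(duty_cycle_percent(1, 9))  # 10.0
```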
With a higher intensity, the exposure time can be shortened and motion blur reduced. Also, the life of the lamp is extended. Strobing an LED lighting requires both software and hardware support.
3.3.2 Diffusor Plate
Many lighting types come in two versions, with or without a diffusor plate. The diffusor plate converts direct light into diffuse light. The purpose of a diffusor plate is to avoid bright spots in the image, caused by the direct light's reflections in glossy surfaces.
Rules of thumb:
1. Glossy objects require diffuse light.
2. Diffusor plates steal light intensity.
Typically, 20-40% of the intensity is lost in the diffusor plate, which can be an issue in high-speed applications where short exposure times are needed.
(Image: two identical white bar lights, with a diffusor plate (top) and without (bottom).)
Diffusor plates work well on multi-LED arrays, whereas single LEDs will still give bright hot spots in the image.
3.3.3 LED Color
LED lights come in several colors. Most common are red and green, but there are also LEDs in blue, white, UV, and IR. Red LEDs are the cheapest and have up to 10 times longer life than blue and green LEDs.
(Figure: LED colors ordered by wavelength: ultra-violet (UV, not visible to the eye), blue, green, red, and infra-red (IR, not visible to the eye).)
3.3.4 Optical Filters
3.4 Laser Safety
A laser is a light source that emits parallel light beams of one wavelength (color), which
makes the laser light dangerous for the eye.
Lasers are classified into laser classes, ranging from 1 to 4. Classes 2 and 3 are the most common in machine vision. Below is an overview of the classifications.
Class 1-1M (American class I): Harmless. Lasers of class 1M may become hazardous with the use of optics (magnifying glass, telescope, etc.).
Class 2-2M (American class II): Caution. Not harmful to the eye under normal circumstances; the blink reflex is fast enough to protect the eye from permanent damage. Lasers of class 2M may become hazardous with the use of optics.
Class 3R-3B (American class IIIb): Danger. Harmful at direct exposure of the retina or after reflection on a glossy surface. Usually doesn't produce harmful diffuse reflections.
3.4.2 LEDs
LEDs are not lasers from a technical point of view, but they behave similarly in that they have a small size and emit light in one main direction. Because of this, the intensity can be harmful and a safety classification is needed. There is no system for classifying LEDs specifically, so the laser classification system has been (temporarily?) adopted for LEDs.
LEDs are often used in strobe lights, which at certain frequencies can cause seizures in people with photosensitive epilepsy.
3.4.3 Protective Eyewear
4 Laser Triangulation
Laser Triangulation
Laser triangulation is a technique for acquiring 3D height data by illuminating the object
with a laser from one direction and having the camera look from another. The laser
beam is divided into a laser line by a prism. The view angle makes the camera
see a height profile that corresponds to the object's cross-section.
(Figure: laser, prism, and camera; the camera sees a height profile of the object's cross-section.)
A laser line is projected onto the object so that a height profile can be seen by the camera from the side. The height profile corresponds to the cross-section of the object. A complete 3D image is grabbed by moving the object under the laser line and putting together many consecutive profiles.
4.1 Field of View in 3D
The selected FOV (Field of View) is the rectangular area in which the camera sees the object's cross-section. The selected FOV, also called the defining rectangle, lies within the trapezoid-shaped maximum FOV.
There are several possible camera/laser geometries in laser triangulation. In the basic geometry, the distance between the camera unit and the top of the FOV is called the stand-off. The possible width of the FOV is determined by the focal length of the lens, the laser prism's fan angle, and the stand-off.
(Figure: the laser fan angle and the camera's view angle define a trapezoid-shaped maximum FOV between a minimum and a maximum distance, with the selected FOV inside it.)
The fan angle of the laser line gives the maximum FOV its trapezoidal shape. Within this, the selected FOV defines the cross-section in which the camera looks at the object.
4.2 3D Image and Coordinate System
There are a number of different representations of 3D data. SICK IVP uses intensity-coded height data, where bright is high and dark is low. This can be transformed into a 3D visualization with color-coded height data.
(Figure: an intensity-coded 3D image with x and y axes, and the same data visualized in a 3D viewer with color-coded height data.)
The coordinate system in the 3D image is the same as that of a normal 2D image regarding x and y, with the addition that y now corresponds to time. The additional height dimension is referred to as the z axis or the range axis.
Since the front of the object becomes the first scanned row in the image, the y axis is directed opposite to the conveyor movement direction.
4.3 Scanning Speed
Since laser triangulation is a line-scanning method, where the image is grabbed little by little, it is important that the object moves in a controlled way during the scan. This can be achieved by either:
1. An encoder that gives a signal each time the conveyor has moved a certain distance.
2. A constant conveyor speed.
When an encoder is used, it controls the profile triggering so that the profiles become equidistant. A constant conveyor speed can often not be guaranteed, which is why an encoder is generally recommended.
It is important to note that there is a maximum speed at which the profile grabbing can be done, determined by the maximum profile rate (profiles/second). If this speed is exceeded, some profiles will be lost and the image will be distorted despite the use of an encoder. A distorted image means that the object proportions are wrong. An image can also appear distorted if the x and y resolutions are not equal.
(Figure: 3D image of a circular object; the proportions are correct thanks to the use of an encoder.)
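A back-of-the-envelope check of this speed limit; the function name and numbers are illustrative:

```python
def max_conveyor_speed_mm_s(max_profile_rate_hz: float, profile_spacing_mm: float) -> float:
    """Fastest conveyor speed that still allows one profile per y-step."""
    return max_profile_rate_hz * profile_spacing_mm

# At 5000 profiles/s and a desired 0.2 mm profile spacing, the conveyor must
# stay below 1 m/s, or profiles are skipped and the image gets distorted.
print(max_conveyor_speed_mm_s(5000, 0.2))  # 1000.0 mm/s
```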
4.4 Occlusion and Missing Data
Because of the view angle between the camera and the laser line, the camera cannot see behind object features. This phenomenon is called camera occlusion (shadowing) and results in missing data in the image. As a consequence, laser triangulation is not suitable for scanning parts of an object located behind high features, for example the bottom of a deep hole.
Because of the fan angle of the laser, the laser line itself can also be occluded and result in missing data. This phenomenon is called laser occlusion.
(Figure: camera occlusion and laser occlusion.)
The image below of a roll of scotch tape shows both camera and laser occlusion. The yellow lines show where the camera occlusion starts to dominate over the laser occlusion.
(Figure: intensity-coded 3D image of a roll of scotch tape, showing both camera and laser occlusion.)
4.5 System Components
(Figure: a laser triangulation setup: camera and laser over a conveyor, an encoder on the conveyor, a photo switch for camera enable, and a cable to the PC; the laser line defines the x and z axes of the 3D image.)
In some laser triangulation products, all of the above components are bought and configured separately for maximum flexibility. Others are partially assembled, for example with a fixed geometry (view angle), which makes them more ready to use.
Examples:
1. SICK IVP Ranger: All components are separated.
2. SICK IVP Ruler: Camera and laser are built in to create a fixed geometry.
3. SICK IVP IVC-3D: Camera and laser are built in to create a fixed geometry. In addition, the unit contains both image processing hardware and software for stand-alone use.
4.6 Ambient Light Robustness
The laser emits monochromatic light, meaning that it only contains one wavelength. By using a narrow band-pass filter in front of the sensor, other wavelengths in the ambient light can be suppressed. The result is a system that is rather robust against ambient light. However, when the ambient light contains wavelengths close to that of the laser, these will pass through the filter and appear as disturbing reflections in the image. Then the installation needs to be covered, or shrouded. Typically, problems with reflections occur with sunlight and warm artificial light from spotlights.
5 Processing and Analysis
5.1 Region of Interest
A ROI (Region of Interest) is a selected area of concern within an image. The purpose of using ROIs is to restrict the area of analysis and to allow for different analyses in different areas of the image. An image can contain any number of ROIs. Another term for ROI is AOI (Area of Interest).
A common situation is that the object location is not the same from image to image. In order to still inspect the feature of interest, a dynamic ROI that moves with the object can be created. The dynamic ROI can also be resized using results from a previous analysis.
5.2 Pixel Counting
Pixel counting is the most basic analysis method. The algorithm finds the number of pixels within a ROI that have intensities within a certain gray level interval. Pixel counting is used to measure area and to find deviations from the normal appearance of an object, for example missing pieces, spots, or cracks. A pixel counter gives the pixel sum, or area, as its result.
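A minimal NumPy sketch of a pixel counter; it illustrates the idea and is not any product's API:

```python
import numpy as np

def count_pixels(roi: np.ndarray, gray_low: int, gray_high: int) -> int:
    """Count ROI pixels whose intensity lies within [gray_low, gray_high]."""
    return int(np.count_nonzero((roi >= gray_low) & (roi <= gray_high)))

image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
roi = image[100:200, 150:300]      # a rectangular region of interest
area = count_pixels(roi, 40, 90)   # number of pixels in the gray interval
```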
5.3 Digital Filters and Operators
Digital filters and operators are used for preprocessing the image before the analysis, to remove or enhance features. Examples are removal of noise and edge enhancement.
5.4 Thresholds
(Figure: an example image containing regions A-E, its histogram with threshold levels T1-T4 marked on the intensity axis, and the resulting binarized image.)
In the image below, object B is found by setting gray low to T1 and gray high to T2. The red coloring of the object highlights which pixels fall within the selected gray scale interval.
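A minimal sketch of this kind of interval thresholding (NumPy; the parameter names are illustrative):

```python
import numpy as np

def threshold_interval(image: np.ndarray, gray_low: int, gray_high: int) -> np.ndarray:
    """Binarize: True where the intensity falls inside [gray_low, gray_high]."""
    return (image >= gray_low) & (image <= gray_high)

# Selecting the interval [T1, T2] keeps only the gray levels of object B:
img = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
mask = threshold_interval(img, 60, 110)   # boolean image, True = selected
```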
5.5 Edge Finding
5.6 Blob Analysis
A blob is any area of connected pixels that fulfill one or more criteria, for example having a minimum area and an intensity within a gray value interval. A blob analysis algorithm is used to find and count objects, and to make basic measurements of their characteristics.
Blob analysis tools can yield a variety of results, for example (two of these are illustrated in the sketch below):
1. Center of gravity: Centroid. (Blue cross in the example.)
2. Pixel count: Area. (Green pixels in the example.)
3. Perimeter: Length of the line that encloses the blob area.
4. Orientation: Rotation angle.
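A small sketch of blob extraction using SciPy's connected-component labeling; this illustrates the idea, not the tool the text describes:

```python
import numpy as np
from scipy import ndimage

def find_blobs(mask: np.ndarray, min_area: int = 50) -> list:
    """Label connected pixel regions and keep those above a minimum area."""
    labels, n = ndimage.label(mask)  # connected-component labeling
    blobs = []
    for i in range(1, n + 1):
        blob = labels == i
        area = int(np.count_nonzero(blob))
        if area >= min_area:
            cy, cx = ndimage.center_of_mass(blob)  # centroid (row, column)
            blobs.append({"area": area, "centroid": (cx, cy)})
    return blobs

mask = np.zeros((100, 100), dtype=bool)
mask[20:40, 30:60] = True        # one rectangular "blob"
print(find_blobs(mask))          # [{'area': 600, 'centroid': (44.5, 29.5)}]
```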
5.7 Pattern Matching
5.8 Coordinate Transformation
(Figure: image coordinates (x,y) in pixels are transformed into real-world coordinates (X,Y) in mm; the transformation can also correct for perspective.)
5.9 Code Reading
Codes are printed on products and packages to enable fast and automatic identification
of the contents. The most common types are barcodes and matrix codes.
5.9.1 Barcode
A barcode is a 1D code that contains numbers and consists of black and white vertical line elements. Barcodes are used extensively in packaging and logistics. Examples of common barcode types are:
1. EAN-8 and EAN-13
2. Code 39 and Code 128
3. UPC-A and UPC-E
4. Interleaved 2 of 5
(Image: example of a Code 39 barcode.)
5.9.2 Matrix Code
A matrix code is a 2D array of code elements (squares or dots) that can contain both text and numbers. The size of the matrix depends on how much information the code contains. Matrix codes are also known as 2D codes.
An important feature of matrix codes is the redundancy of information, which means that the code is still fully readable, thanks to a correction scheme, even if parts of the image are destroyed. Examples of matrix codes are:
1. DataMatrix (e.g. with correction scheme ECC200)
2. PDF417
(Image: example of a DataMatrix code.)
5.10 Text Verification and Reading
OCR (Optical Character Recognition) reads text, whereas OCV (Optical Character Verification) verifies that a printed text matches a taught string.
(Example images: OCV teach string; OCR teach and read.)
Example
A blister pack needs inspection before the metal foil is glued on to seal the pack. The task is to inspect each blister for pill presence and correctness. If any of the blisters is faulty, the camera shall conclude a fail, and the result is communicated to reject the blister pack.
A backlight is used to enhance the pill contours.
The application can be solved by pixel counting, pattern matching, blob analysis, or edge finders, depending on accuracy needs and cycle time requirements.
IF number of correct blobs is OK (True)
    Pass: set output to 0
ELSE (False)
    Fail: set output to 1
END IF
Send result
6 Communication
A machine vision system can make measurements and draw conclusions, but it cannot take actions by itself. Therefore its results need to be communicated to the system in control of the process. There are a number of different ways to communicate a result, for example:
1. Digital I/Os
2. Serial bus
3. Ethernet network
In factory automation systems, the result is used to control an action, for example a rejector arm, a sorting mechanism, or the movement of a robot.
The hardware wiring of the communication channel can either be a direct
connection or via a network.
6.1 Digital I/O
A digital I/O (input/output) signal is the simplest form of communicating a result or receiving input information. A single signal can be used to output one of two possible states, for example good/bad or true/false. Multiple I/Os can be used to classify more combinations; n outputs can encode 2^n states (see the sketch below). In industrial environments, the levels of a digital I/O are typically 0 V (GND) and 24 V.
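A small sketch of encoding a result class onto several digital outputs; this hypothetical helper returns the logic levels, whereas a real system would set physical output pins:

```python
def encode_on_outputs(class_index: int, n_outputs: int) -> list:
    """Encode a classification result as binary levels on n digital outputs."""
    if not 0 <= class_index < 2 ** n_outputs:
        raise ValueError("class index does not fit on the available outputs")
    return [(class_index >> bit) & 1 for bit in range(n_outputs)]

# Two outputs distinguish four result classes, e.g. pass/fail plus two error types:
print(encode_on_outputs(3, 2))  # [1, 1]
```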
6.2 Serial Communication
6.3 Protocols
6.4 Networks
Ethernet
An Ethernet LAN (Local Area Network) connects various devices via a switch.
When two or more LANs are connected into a wider network via a router, they
become a WAN (Wide Area Network).
(Figure: devices connected to two LANs via switches; a router joins the LANs into a WAN.)
7 Vision Solution Principles
7.1 Standard Sensors
7.2 Vision Qualifier
When assessing the suitability of an application to be solved by machine vision, there are
certain economic and technical key issues to consider.
7.2.1 Investment Incentive
7.2.2 Application Solvability
A key factor in building a robust vision application is to obtain good repeatability of the object representation regarding:
1. Illumination
2. Object location and rotation
There are methods to deal with variations in these factors, but in general less variability gives a more robust application. The optimal situation is a shrouded inspection station with constant illumination, where the object has a fixed position and rotation relative to the camera.
Image Processing Algorithms
Having a good image addresses the first half of the solution. The next step is to apply image processing algorithms, or tools, to do the actual analysis. Some vision systems are delivered with ready-to-use software or a toolbox, whereas others need third-party algorithms or even custom development of algorithms. This can have a heavy impact on the project budget.
7.3 Vision Project Phases
Once the application has qualified as a viable vision project, the phases that follow are: feasibility study, investment, implementation, and acceptance testing.
7.3.1 Feasibility Study
7.3.2 Investment
Once the feasibility study is complete, it is time for the investment decision and the project definition. The project definition should contain a full description of what the vision system shall do and how it will perform. A procedure for acceptance testing should be included.
7.3.3 Implementation
7.4 Application Solving Method
The general method for solving a vision application consists of the following steps: defining the task, choosing hardware, choosing image processing tools, defining a result output, and testing the application. Some iteration is usually needed before a final solution is reached.
7.4.1 Defining the Task
Defining the task is essentially to describe exactly what the vision system shall do, what performance is expected, and under which circumstances.
It is instrumental to have samples and knowledge about the industrial site where the system will be located. The collection of samples needs to be representative of the full object variation, for example including good, bad, and limit cases.
In defining the task, it is important to decide how the inspected features can be parameterized to reach the desired final result.
7.4.2 Choosing Hardware
7.4.3 Choosing Image Processing Tools
In choosing the processing tools for a certain application, there are often a number of possibilities and combinations. How to choose the right ones? This requires practical experience and knowledge about the available tools for the camera system at hand.
7.4.4 Defining a Result Output
The next step is to define how to communicate the result, for example to a PLC, a database, or a sorting machine. The most common output in machine vision is pass/fail.
7.4.5 Testing the Application
The application is not finished until it has been tested, debugged, and pushed to its limits. This means that the system function must be tested for normal cases as well as a number of less frequent but possible cases, for example:
1. Ambient light fluctuations: reflections, sunshine through a window, etc.
2. Objects close to the acceptance limit between good and bad.
3. The extremes of accepted object movement in the FOV.
7.5 Challenges
7.5.1 Defining Requirements
It can be a challenge to define the task so that all involved parties have the same expectations of system performance. The customer has the perspective and terminology of his or her industry, and so does the vision supplier. Communication between the parties may require that each share their knowledge. Formalizing clear acceptance test conditions is a good way of communicating the expectations on the system.
7.5.2 Performance
The cycle time can become a critical factor in the choice of camera system and
algorithms when objects are inspected at a high frequency. This situation is typical for the
packaging and pharmaceutical industries.
Accuracy is the repeatability of measurements as compared to a reference value or
position (measure applications). Accuracy is described in more detail in the Appendix.
Success rate is the system's reliability in terms of false OKs and false rejects (inspect and identify applications). A false OK is when a faulty object is wrongly classified as OK, and a false reject is when an OK object is wrongly classified as faulty. It is often important to distinguish between the two, since the consequences of each can be totally different in the production line.
7.5.3 System Flexibility
Building a vision system that performs one task in a constant environment can be easy. However, the system's complexity can increase significantly when it shall inspect variable objects in a variable environment.
Worth keeping in mind is that objects that are very similar in the mind of their producer can be totally different from a vision perspective. It is common to expect that since the vision system inspects object A with such success, it must also be able to inspect objects B and C with the same setup, since they are so similar.
7.5.4 Object Presentation
The object presentation is the object's appearance in the image, including position, rotation, and illumination. With high repeatability in the image, the application solving can be easy. On the other hand, the application solving may become difficult or even impossible for the very same object if its presentation is arbitrary.
For example, rotation invariance (360 degree rotation tolerance) in a 2D application
is more demanding on the processing than a non-rotating object. In 3D, rotation
invariance might not even be possible for some objects because of occluded
features.
7.5.5 Mechanics and Environment
Although a vision system can be a technically optimal solution, sometimes there is not enough mounting space. Then one must consider an alternative solution or a redesign of the machine.
Some industries have environments with heat, vibration, dust, and humidity concerns. Such conditions can have undesirable side-effects: reduced hardware performance and life, and deteriorated image quality (blur). Information about the hardware's ability to withstand such conditions is found in its technical specifications.
8 Appendix
A Lens Selection
The focal length required for a given FOV can be estimated from the sensor height, the object distance (OD), and the FOV height:
FocalLength = SensorHeight × OD / FOVheight
(Figure: the lens projects the FOV height onto the sensor height; similar triangles relate the focal length to the object distance.)
The formula can just as well be used with the sensor and FOV widths. The important thing is to be consistent by using only heights or only widths.
The sensor size is often measured diagonally in inches. SensorHeight in the formula above refers to the vertical height, which thus needs to be calculated from the diagonal.
Example
The focal length for a certain application needs to be calculated. Known information:
Camera: IVC-2D VGA (640x480)
Sensor diagonal: given in the camera specification
FOV height: 100 mm
Object distance: 500 mm
To use the above formula, the sensor height needs to be calculated first. The needed calculation principles are the Pythagorean theorem, to calculate the sensor diagonal in pixels, and similar triangles, to find the sensor height in mm.
Calculations:
1. SensorDiagonal = √(640² + 480²) = 800 pixels
2. SensorHeight = (480 / 800) × SensorDiagonal = 0.6 × SensorDiagonal, converted to mm using the diagonal from the specification
3. FocalLength = SensorHeight × 500 mm / 100 mm ≈ 25 mm
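The same estimate as a short script. A sketch only: the 8.3 mm diagonal below is an assumed value, since the example's sensor specification is not preserved here:

```python
import math

def focal_length_mm(sensor_diag_mm: float, width_px: int, height_px: int,
                    fov_height_mm: float, object_distance_mm: float) -> float:
    """Estimate the focal length from sensor geometry, FOV height, and distance."""
    diag_px = math.hypot(width_px, height_px)        # 800 px for 640x480
    sensor_height_mm = sensor_diag_mm * height_px / diag_px
    return sensor_height_mm * object_distance_mm / fov_height_mm

print(focal_length_mm(8.3, 640, 480, 100, 500))  # ~25 mm
```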
Telecentric Lens
Normal lenses give a slightly distorted image because of the view angle. Telecentric lenses can be used to reduce or eliminate such effects. The optics in a telecentric lens make all light beams enter the lens in parallel. This has two practical consequences:
1. The lens diameter needs to be at least as large as the object.
2. The depth of field is deeper than for standard lenses.
(Example images: cylinders as viewed through a normal lens and through a telecentric lens.)
B Lighting Selection
Selecting the best lighting for a particular application can be tricky. The decision guide below is a rough guideline for typical cases.
Large 3D features?
  Yes → Low accuracy requirement?
    Yes → Laser line + 2D
    No → Consider a 3D solution
  No → Small 3D contour features?
    Yes → Silhouette projection possible?
      Yes → Backlight
      No → Features at one specific working distance?
        Yes → Darkfield (engraved characters, imprints, mechanical parts with small edges)
        No → Reflective material?
          Yes → Dome or on-axis (bearings, polished mechanical parts, molded plastic)
          No → Consult an expert
    No → Surface features?
      Yes → Reflective material?
        Yes → Dome or on-axis (CDs, polished metal)
        No → Ring light (printed paper, prints on some plastics, rubber)
      No → Semitransparent features?
        Yes → Fill level?
          Yes → Backlight (bottle content check)
        No → Color features?
          Yes → Feature has one known color?
          No → Consult an expert
C Resolution, Repeatability, and Accuracy
(Diagram: sensor resolution, lens, and object distance give the object resolution (mm/pixel); together with lighting, positioning, and processing it determines the repeatability; calibration against reference measurements gives the accuracy.)
Sensor resolution is the number of pixels on the sensor, for example VGA (640x480). The sensor resolution together with the focal length and object distance (i.e. the FOV) determines the object resolution, which is the physical dimension on the object that corresponds to one pixel on the sensor, for example 0.5 mm/pixel.
The object resolution together with the lighting, analysis method, object presentation, etc. determines the repeatability of a measurement. The repeatability is defined by the standard deviation (σ, 3σ, or 6σ) from the mean value of a result when measured over a number of inspections.
Accuracy is the reliability of a result as compared with a true value, or reference value. Accuracy is defined in terms of the standard deviation (σ, 3σ, or 6σ) from the reference value, usually referred to as the measurement error.
If the repeatability of a measurement is good, there is a fair chance that a calibration procedure will also give good accuracy. In the simplest case the calibration just subtracts an offset, like the zero (tare) button of an electronic scale. In vision applications it can be a much more complex procedure, involving specially designed calibration objects, lookup tables, interpolation algorithms, etc. A consequence of calibrating against a reference is that the vision system won't be more accurate than the reference method.
If measurements of the same features change systematically over time, the system is
drifting and needs recalibration.
(Example: poor repeatability and poor accuracy; good repeatability but poor accuracy; good repeatability and good accuracy.)
Confidence Level
A vision system is claimed to measure the diameter of a steel rod with ±0.1 mm accuracy. What does ±0.1 mm mean? The confidence level at which you can trust the system depends on how many standard deviations (σ) the stated accuracy value refers to:
1σ: You can be 68% sure that the measured value is within ±0.1 mm of the truth.
3σ: You can be 99.7% sure that the measured value is within ±0.1 mm of the truth.
D Motion Blur
Motion blur occurs when the exposure time is long enough to allow the object to move a noticeable distance during the exposure.
(Example images: a sharp image, and an image with motion blur because of too long an exposure time relative to the object speed.)
Motion blur can be calculated as a physical distance b (mm) or as a number of pixels p:
b = t × v
p = b / r
where
p = blur in number of pixels
b = physical blur distance (mm)
r = object resolution (mm/pixel)
t = exposure time (ms)
v = object speed (mm/ms)
Motion blur needs to be considered in applications where the conveyor moves fast or where small features need to be detected accurately. Motion blur is reduced by increasing the light intensity, for example by strobing, and decreasing the exposure time.
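A worked check of the formulas above; the values are illustrative:

```python
def motion_blur_pixels(exposure_ms: float, speed_mm_per_ms: float,
                       resolution_mm_per_px: float) -> float:
    """Blur in pixels: b = t * v (mm), then p = b / r."""
    blur_mm = exposure_ms * speed_mm_per_ms
    return blur_mm / resolution_mm_per_px

# 2 ms exposure, conveyor at 0.5 mm/ms (0.5 m/s), 0.1 mm/pixel resolution:
# the object smears across 10 pixels, so exposure time or speed must come down.
print(motion_blur_pixels(2.0, 0.5, 0.1))  # 10.0
```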
E IP Classification
The second digit of an IP code states the protection against water:
4: Protected against splashing water
5: Protected against water jets
6: Protected against powerful water jets
7: Protected against temporary immersion in water
8: Protected against continuous immersion in water
9: Protected against high-pressure and steam-jet cleaning
It is important to note that the system is not more resistant than its weakest link. It is of no use to buy a camera rated IP67 if the cable connectors are IP53.
Unusually harsh environments, for example where there is an explosion risk, require additional classification systems.