
VISION SENSOR

Subramanya R Prabhu B

Machine Vision

[Figure: block diagram of a machine vision system. Lighting illuminates the scene; the camera output passes through an A/D converter to a frame grabber and on to the computer (processor), which uses stored programs/auxiliary algorithm storage and connects through an I/F to the robot controller, monitor, and keyboard.]

The tasks of a machine vision system, and the hardware functions and techniques under each:

1. Sensing & digitizing image data
- Signal conversion - Sampling - Encoding - Image storage - Lighting

2. Image processing & analysis
a) Data reduction b) Edge detection c) Feature extraction d) Object recognition

3. Applications
- Inspection - Material handling - Safety monitoring

1. Sensing & digitizing image data


Step 1:
i. Capture an image of the scene with the vision camera
ii. Digitize the image
iii. Store the image and perform computation

Device used: the frame grabber, an image storage and computation device that stores the image in the form of a pixel array.

ILLUMINATION

The scene viewed by the vision camera must be well illuminated, and the illumination must be constant over time. The lighting technologies include incandescent lamps, fluorescent lamps, sodium vapor lamps, and lasers. In front lighting the light source is located on the same side of the object as the camera. This produces a reflected light from the object that allows inspection of surface features such as printing on a label and surface patterns such as solder lines on a printed circuit board.

In back lighting the light source is placed behind the object being viewed by the camera. This creates a dark silhouette of the object that contrasts sharply with the light background. This type of lighting can be used for binary vision systems to inspect for part dimensions and to distinguish between different part outlines. Side lighting causes irregularities in an otherwise plane, smooth surface to cast shadows that can be identified by the vision system. This can be used to inspect for defects and flaws in the surface of an object.

Structured lighting involves the projection of a special light pattern onto the object to enhance certain geometric features. [A planar sheet of highly focused light is directed against the surface of the object at a certain known angle.] The sheet of light forms a bright line where the beam intersects the surface. Any variations from the general plane of the part appear as deviations from a straight line. The distance of the deviation can be determined by optical measurement, and the corresponding elevation differences can be calculated using trigonometry.
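A minimal sketch of that trigonometry (the function name, the millimetre units, and the convention of measuring the beam angle from the surface plane are assumptions for illustration):

import math

def elevation_from_deviation(deviation_mm, beam_angle_deg):
    """Elevation of a surface feature from the measured deviation of the
    bright line. With the light sheet striking the surface at beam_angle_deg
    (measured from the surface plane), a feature of height h shifts the line
    sideways by d = h / tan(angle), so h = d * tan(angle)."""
    return deviation_mm * math.tan(math.radians(beam_angle_deg))

# Example: the line deviates 2.0 mm where the sheet is projected at 30 degrees.
print(elevation_from_deviation(2.0, 30.0))  # ~1.15 mm elevation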

In strobe lighting, the scene is illuminated by a short pulse of high-intensity light, which causes a moving object to appear stationary. The moving object might be a part moving past the vision camera on a conveyor. The pulse of light can last 5-500 microseconds. This is sufficient time for the camera to capture the scene, although the camera actuation must be synchronized with that of the strobe light.

ANALOG TO DIGITAL CONVERSION

Analog-to-digital conversion occurs in three phases: (1) sampling, (2) quantization, and (3) encoding. Sampling consists of converting the continuous signal into a series of discrete analog signals at periodic intervals. In quantization, each discrete analog signal is assigned to one of a finite number of previously defined amplitude levels. The amplitude levels are discrete values of voltage ranging over the full scale of the ADC. In the encoding phase, the discrete amplitude levels obtained during quantization are converted into digital code, representing each amplitude level as a sequence of binary digits.
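A minimal sketch of the three phases, assuming a hypothetical digitize helper and an ideal ramp signal:

def digitize(signal, t_end, n_samples, full_scale, n_bits):
    """Sample a continuous signal, quantize it, and encode it in binary."""
    step = full_scale / (2 ** n_bits)               # size of one amplitude level
    codes = []
    for k in range(n_samples):                      # (1) sampling at periodic intervals
        v = signal(k * t_end / n_samples)
        level = min(int(v / step), 2 ** n_bits - 1) # (2) quantization to a defined level
        codes.append(format(level, f"0{n_bits}b"))  # (3) encoding as binary digits
    return codes

# Example: a 0-5 V ramp sampled 10 times by an 8-bit ADC.
print(digitize(lambda t: 5.0 * t, 1.0, 10, 5.0, 8))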

Frame Grabber:
- A video data acquisition device that stores the digitized image, after ADC, in computer memory (the frame buffer)
- Acquires an image in 1/30 s
- Digital frames are quantized to 8 bits/pixel
- Synchronized row & column counters are used, so each position on the screen is uniquely addressed
- A signal sent by the computer to an address captures the image there (grabbing)

2. Image processing & analysis

The techniques are: 1. Image data reduction, 2. Segmentation, 3. Feature extraction, 4. Object recognition.

Image data reduction

The purpose of image data reduction is to reduce the volume of data, either by eliminating some of it or by processing only part of it, leading to the following sub-techniques. Digital conversion reduces the number of gray levels used to represent the image. Windowing processes only a portion of the stored digital image. Function: to eliminate the bottleneck that can occur when large volumes of data are processed.
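A minimal sketch of both sub-techniques on a small pixel array (the function names and the bit-discarding scheme for reducing gray levels are assumptions):

def reduce_gray_levels(image, keep_bits):
    """Collapse 256 gray levels to 2**keep_bits by discarding low-order bits."""
    shift = 8 - keep_bits
    return [[(p >> shift) << shift for p in row] for row in image]

def window(image, top, left, height, width):
    """Process only a rectangular portion of the stored digital image."""
    return [row[left:left + width] for row in image[top:top + height]]

# Example: an 8-bit image reduced to 4 gray levels, then windowed.
img = [[0, 60, 130, 200], [10, 70, 140, 210], [20, 80, 150, 220]]
print(reduce_gray_levels(img, 2))  # values snap to 0, 64, 128, 192
print(window(img, 0, 1, 2, 2))     # 2x2 region starting at row 0, column 1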

Segmentation
Segmentation techniques are intended to define and separate regions of interest within the image. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.

Three techniques: 1. Thresholding, 2. Region growing, 3. Edge detection.

THRESHOLDING

Thresholding involves the conversion of each pixel intensity level into a binary value, representing either white or black. This is done by comparing the intensity value at each pixel with a defined threshold value. If the pixel value is greater than the threshold, it is given the binary bit value of white, say 1; if less than the defined threshold, it is given the bit value of black, say 0. Reducing the image to binary form by means of thresholding usually simplifies the subsequent problem of defining and identifying objects in the image.
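A minimal sketch of this comparison on a gray-scale pixel array (the threshold name and the sample values are illustrative):

def threshold(image, t):
    """Convert each pixel to binary: 1 (white) if above t, else 0 (black)."""
    return [[1 if p > t else 0 for p in row] for row in image]

gray = [[12, 200, 180], [15, 190, 20], [10, 11, 13]]
print(threshold(gray, 128))
# [[0, 1, 1], [0, 1, 0], [0, 0, 0]]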

REGION GROWING

Region growing is a segmentation technique in which grid elements possessing similar attributes are grouped to form a region. Procedure: A pixel on the object is identified and assigned the value 1. Adjacent pixels are checked for a match in attributes. Each matching pixel is assigned 1 and each non-matching pixel 0. These steps are repeated until the complete screen is covered, resulting in the growth and identification of the region.
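A minimal sketch of this procedure as a 4-neighbor flood fill, with a hypothetical match predicate standing in for "similar attributes":

from collections import deque

def grow_region(image, seed, match):
    """Grow a region from a seed pixel, labeling matching neighbors 1."""
    rows, cols = len(image), len(image[0])
    label = [[0] * cols for _ in range(rows)]
    queue = deque([seed])
    label[seed[0]][seed[1]] = 1                 # seed pixel assigned the value 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # adjacent pixels
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not label[nr][nc] \
                    and match(image[nr][nc], image[r][c]):
                label[nr][nc] = 1               # matching pixel joins the region
                queue.append((nr, nc))
    return label

img = [[50, 52, 200], [51, 49, 210], [200, 48, 47]]
# Pixels "match" if their gray levels differ by at most 5.
print(grow_region(img, (0, 0), lambda a, b: abs(a - b) <= 5))
# [[1, 1, 0], [1, 1, 0], [0, 1, 1]]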

EDGE DETECTION

Edge detection is concerned with determining the location of boundaries between an object and its surroundings in an image. This is accomplished by identifying the contrast in light intensity that exists between adjacent pixels at the borders of the object. Edge detection can be based on a follow-the-edge procedure, as shown in the figure: starting from a point outside the boundary, step from pixel to pixel, turning left if the current pixel lies within the region and turning right otherwise. This is continued until the path returns to the starting point.

[Figure: the follow-the-edge procedure for edge detection]
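A minimal sketch of that turn-and-step rule (the follow_edge name and the screen-style row/column convention are assumptions; other formulations of the procedure exist):

def follow_edge(image, start, heading=(0, 1)):
    """Trace a region boundary: inside the region -> turn left and step;
    outside -> turn right and step; stop on returning to the start."""
    def left(d):  return (-d[1], d[0])   # 90-degree left turn (row grows downward)
    def right(d): return (d[1], -d[0])   # 90-degree right turn
    def inside(p):
        r, c = p
        return 0 <= r < len(image) and 0 <= c < len(image[0]) and image[r][c] == 1

    path, pos, d = [], start, heading
    while True:
        d = left(d) if inside(pos) else right(d)
        pos = (pos[0] + d[0], pos[1] + d[1])
        path.append(pos)
        if pos == start:
            return path

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
print(follow_edge(img, (0, 1)))  # walk circles the 2x2 object's boundary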

Feature Extraction
Feature extraction characterizes an object in the image by means of the object's features. Some of the features of an object include its area, length, width, diameter, perimeter, center of gravity, and aspect ratio. Feature extraction methods are designed to determine these features based on the area and boundaries of the object (using thresholding, edge detection, and other segmentation techniques). For example, the area of the object can be determined by counting the number of white (or black) pixels that make up the object. Its length can be found by measuring the distance (in terms of pixels) between the two extreme opposite edges of the part.
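A minimal sketch of extracting the area, length, width, and center of gravity from a binary image (the function name and sample image are illustrative):

def object_features(binary):
    """Area, length/width (bounding extents), and centroid of a binary object."""
    pixels = [(r, c) for r, row in enumerate(binary)
                     for c, p in enumerate(row) if p == 1]
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    area = len(pixels)                               # count of object pixels
    length = max(cols) - min(cols) + 1               # extent between extreme edges
    width = max(rows) - min(rows) + 1
    centroid = (sum(rows) / area, sum(cols) / area)  # center of gravity
    return area, length, width, centroid

obj = [[0, 1, 1, 1], [0, 1, 1, 1], [0, 0, 0, 0]]
print(object_features(obj))  # (6, 3, 2, (0.5, 2.0))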

Interpretation / Object Recognition


For any given application, the image must be interpreted based on the extracted features. The objective in these tasks is to identify the object in the image by comparing it with predefined models or standard values. Template matching is the name given to various methods that attempt to compare one or more features of an image with the corresponding features of a model or template stored in computer memory. The most basic template matching technique is one in which the image is compared pixel by pixel with a corresponding computer model. Within certain statistical tolerances, the computer determines whether the image matches the template. One of the technical difficulties with this method is the problem of aligning the part in the same position and orientation in front of the camera to allow the comparison to be made without complications in image processing.
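A minimal sketch of pixel-by-pixel matching, with a simple mismatch fraction standing in for the statistical tolerance (the names and the 5% default are assumptions):

def template_match(image, template, max_mismatch_fraction=0.05):
    """Compare a binary image pixel by pixel with a stored template.
    The match succeeds if the fraction of disagreeing pixels stays
    within the allowed tolerance."""
    total = mismatches = 0
    for img_row, tpl_row in zip(image, template):
        for a, b in zip(img_row, tpl_row):
            total += 1
            mismatches += (a != b)
    return mismatches / total <= max_mismatch_fraction

part     = [[0, 1, 1], [0, 1, 1], [0, 0, 1]]
template = [[0, 1, 1], [0, 1, 1], [0, 0, 0]]
print(template_match(part, template))        # False: 1 of 9 pixels differs (>5%)
print(template_match(part, template, 0.15))  # True under a looser tolerance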

Feature weighting

Feature weighting is a technique in which several features (e.g., area, length, and perimeter) are combined into a single measure by assigning a weight to each feature according to its relative importance in identifying the object. The score of the object in the image is compared with the score of an ideal object residing in computer memory to achieve proper identification.
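A minimal sketch of feature weighting (the weights, feature values, and tolerance are invented for illustration):

def weighted_score(features, weights):
    """Combine several features into a single score using per-feature weights."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical weights reflecting each feature's importance for identification.
weights = {"area": 0.5, "length": 0.3, "perimeter": 0.2}
ideal   = {"area": 120.0, "length": 18.0, "perimeter": 46.0}  # stored ideal object
seen    = {"area": 118.0, "length": 18.5, "perimeter": 45.0}  # measured object

# Identify the part if its score is close enough to the ideal object's score.
print(abs(weighted_score(seen, weights) - weighted_score(ideal, weights)) <= 2.0)  # True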

Training the system

- Feature values of known objects are extracted and stored in the system
- Unknown objects are then compared with these stored feature values
- Training should take place under conditions close to the operating conditions
- A high-level programming language is used, e.g., C or RAIL (Robot Automated Incorporated Language)

Robotic Applications

Inspection
- Dimensional measurement: determining the size of certain dimensional features of parts or products.
- Dimensional gaging: similar to the preceding, except that a gaging function rather than a measurement is performed.
- Verification of the presence of components in an assembled product.
- Verification of hole location and number of holes in a part: operationally, this task is similar to dimensional measurement and verification of components.
- Detection of surface flaws and defects: flaws and defects on the surface of a part or material often reveal themselves as a change in reflected light.
- Detection of flaws in a printed label: the defect can be in the form of a poorly located label or poorly printed text, numbering, or graphics on the label.

Part identification applications are those in which the vision system is used to recognize and perhaps distinguish parts or other objects so that some action can be taken. The applications include part sorting, counting different types of parts flowing past on a conveyor, inventory monitoring, reading of 2D bar codes, and character recognition.

Visual guidance and control involves applications in which a vision system is teamed with a robot or similar machine to control the movement of the machine. Examples of these applications include seam tracking in continuous arc welding, part positioning and/or reorientation, bin picking, collision avoidance, machining operations, and assembly tasks.
