
An Innovative Approach to Detect and Recognize Skin Diseases

Using an SVM Classifier and Data Mining Techniques

ABSTRACT
Skin diseases are becoming a common phenomenon these days as different types of
allergies increase rapidly. Most skin diseases tend to pass from one person to
another, so it is important to control them at the initial stages to prevent them from
spreading. In this paper, we study the problem of automated skin disease detection
and provide users with advice or treatments based on the results obtained, in a
shorter time period than existing methods. We construct a diagnosis system based on
the techniques of image processing and data mining, using MATLAB software to
perform the pre-processing and processing of the skin images obtained from the
given data set.

CHAPTER 1

INTRODUCTION

1.1 Definition of an image

An image is an array, or a matrix, of square pixels (picture elements)
arranged in rows and columns.

An image is a two-dimensional picture that has a similar appearance to
some subject, usually a physical object or a person. An image may be
two-dimensional, such as a photograph or a screen display, or three-dimensional,
such as a statue. Images may be captured by optical devices such as cameras,
mirrors, lenses, telescopes, and microscopes, or by natural objects and
phenomena, such as the human eye or water surfaces.

Figure 1: Grayscale image

In an 8-bit grayscale image, each picture element has an assigned intensity that
ranges from 0 to 255. A grayscale image is what people normally call a black-and-white
image, but the name emphasizes that such an image also includes many
shades of grey.

1.1.1 Pixels and Pixel values

An image is a rectangular grid of pixels. It has a definite height and a definite
width counted in pixels. Each pixel is square and has a fixed size on a given
display, although different computer monitors may use different pixel sizes. The
pixels that constitute an image are ordered as a grid (columns and rows); each pixel
consists of numbers representing magnitudes of brightness and color.
Figure 2: Pixel values of an Image

Each pixel has a value from 0 (black) to 255 (white). The possible range of pixel
values depends on the color depth of the image; here, 8 bits give 256 tones or grey
scales. A normal grayscale image has 8-bit color depth, i.e. 256 grey scales. A
"true color" image has 24-bit color depth: 8 + 8 + 8 bits give
256 × 256 × 256 colors, roughly 16 million colors.

Each pixel has a color, stored as a 32-bit integer. The first eight bits
determine the redness of the pixel, the next eight bits the greenness, the next eight
bits the blueness, and the remaining eight bits the transparency of the pixel.

Figure 3: Bits representation of pixel colors.
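The 32-bit packing described above can be illustrated with a short sketch. The report itself works in MATLAB; the following is a Python illustration, and the example pixel value is made up for demonstration.

```python
# Sketch: unpacking a pixel stored as a 32-bit integer into its four
# 8-bit channels, in the R, G, B, transparency order described above.
# Real file formats may order the bytes differently.

def unpack_rgba(pixel):
    """Split a 32-bit pixel value into its four 8-bit channels."""
    red   = (pixel >> 24) & 0xFF
    green = (pixel >> 16) & 0xFF
    blue  = (pixel >> 8)  & 0xFF
    alpha = pixel & 0xFF
    return red, green, blue, alpha

# Example: a fully opaque, made-up color value
print(unpack_rgba(0xC83C64FF))  # -> (200, 60, 100, 255)
```

Because each channel occupies exactly one byte, masking with `0xFF` after a shift isolates one channel at a time.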

Some grayscale images have more grayscales, for instance 16 bits = 65,536
grayscales. In principle, three 16-bit grayscale images can be combined to form an
image with 281,474,976,710,656 grayscales.

There are two general groups of images: vector graphics (or line art) and
bitmaps (pixel-based images).

1.1.2 Perception of colors

Images based on radiation from the EM spectrum are the most familiar,
especially images in the X-ray and visual bands of the spectrum. Electromagnetic
waves can be conceptualized as propagating sinusoidal waves of varying
wavelengths, or they can be thought of as a stream of massless particles, each
traveling in a wavelike pattern and moving at the speed of light. Each bundle of
energy is called a photon. If spectral bands are grouped according to energy per
photon, we obtain the spectrum shown in Figure 4, ranging from gamma rays at one
end to radio waves at the other. The bands are shown shaded to convey the fact
that bands of the EM spectrum are not distinct but rather transition smoothly from
one to the other.
Figure 4: The electromagnetic spectrum arranged according to energy per
photon
1.2 Image File Formats
Image file formats are standardized means of organizing and storing images.
This entry is about digital image formats used to store photographic and other
images. Image files are composed of either pixel or vector (geometric) data that are
rasterized to pixels when displayed (with few exceptions) in a vector graphic
display. Including proprietary types, there are hundreds of image file types. The
PNG, JPEG, and GIF formats are most often used to display images on the
Internet.

In addition to straight image formats, metafile formats are portable formats
that can include both raster and vector information. The metafile format is an
intermediate format; most Windows applications open metafiles and then save
them in their own native format.

1.2.1 Raster Formats:

These formats store images as bitmaps (also known as pix-maps).

 JPEG/JFIF:
JPEG (Joint Photographic Experts Group) is a compression method. JPEG
compressed images are usually stored in the JFIF (JPEG File Interchange Format)
file format. JPEG compression is lossy compression. Nearly every digital camera
can save images in the JPEG/JFIF format, which supports 8 bits per color (red,
green, blue) for a 24-bit total, producing relatively small files. Photographic
images may be better stored in a lossless non-JPEG format if they will be re-edited,
or if small "artifacts" are unacceptable. The JPEG/JFIF format also is used as the
image compression algorithm in many Adobe PDF files.

 EXIF:
The EXIF (Exchangeable image file format) format is a file standard similar
to the JFIF format with TIFF extensions. It is incorporated in the JPEG writing
software used in most cameras. Its purpose is to record and to standardize the
exchange of images with image metadata between digital cameras and editing and
viewing software. The metadata are recorded for individual images and include
such things as camera settings, time and date, shutter speed, exposure, image size,
compression, name of camera, color information, etc. When images are viewed or
edited by image editing software, all of this image information can be displayed.

 TIFF:

The TIFF (Tagged Image File Format) format is a flexible format that normally
saves 8 bits or 16 bits per color (red, green, blue) for 24-bit and 48-bit totals,
respectively, usually using either the TIFF or TIF filename extension. TIFF supports
both lossy and lossless compression; some variants offer relatively good lossless
compression for bi-level (black & white) images. Some digital cameras can save in TIFF format, using the
LZW compression algorithm for lossless storage. TIFF image format is not widely
supported by web browsers. TIFF remains widely accepted as a photograph file
standard in the printing business. TIFF can handle device-specific color spaces,
such as the CMYK defined by a particular set of printing press inks.
 PNG:
The PNG (Portable Network Graphics) file format was created as the free,
open-source successor to the GIF. The PNG file format supports true color (16
million colors) while the GIF supports only 256 colors. The PNG file excels when
the image has large, uniformly colored areas. The lossless PNG format is best
suited for editing pictures, and the lossy formats, like JPG, are best for the final
distribution of photographic images, because JPG files are smaller than PNG files.
PNG is an extensible file format for the lossless, portable, well-compressed storage
of raster images. PNG provides a patent-free replacement for GIF and can also
replace many common uses of TIFF. Indexed-color, grayscale, and true color
images are supported, plus an optional alpha channel. PNG is designed to work
well in online viewing applications, such as the World Wide Web. PNG is robust,
providing both full file integrity checking and simple detection of common
transmission errors.

 GIF:
GIF (Graphics Interchange Format) is limited to an 8-bit palette, or 256
colors. This makes the GIF format suitable for storing graphics with relatively few
colors such as simple diagrams, shapes, logos and cartoon style images. The GIF
format supports animation and is still widely used to provide image animation
effects. It also uses a lossless compression that is more effective when large areas
have a single color, and ineffective for detailed images or dithered images.

 BMP:
The BMP file format (Windows bitmap) handles graphics files within the
Microsoft Windows OS. Typically, BMP files are uncompressed, hence they are
large. The advantage is their simplicity and wide acceptance in Windows
programs.

1.2.2 Vector Formats:

As opposed to the raster image formats above (where the data describes the
characteristics of each individual pixel), vector image formats contain a geometric
description which can be rendered smoothly at any desired display size.

At some point, all vector graphics must be rasterized in order to be displayed


on digital monitors. However, vector images can be displayed with analog CRT
technology such as that used in some electronic test equipment, medical monitors,
radar displays, laser shows and early video games. Plotters are printers that use
vector data rather than pixel data to draw graphics.

 CGM:
CGM (Computer Graphics Metafile) is a file format for 2D vector graphics,
raster graphics, and text. All graphical elements can be specified in a textual source
file that can be compiled into a binary file or one of two text representations. CGM
provides a means of graphics data interchange for computer representation of 2D
graphical information independent from any particular application, system,
platform, or device.

 SVG:
SVG (Scalable Vector Graphics) is an open standard created and developed
by the World Wide Web Consortium to address the need for a versatile, scriptable
and all-purpose vector format for the web and otherwise. The SVG format does not
have a compression scheme of its own, but due to the textual nature of XML, an
SVG graphic can be compressed using a program such as gzip.
1.3 Digital images
Suppose we take an image, a photo, say. For the moment, let's make things
easy and suppose the photo is black and white (that is, lots of shades of grey), so no
colour. We may consider this image as being a two-dimensional function whose
functional values give the brightness of the image at any given point. We may
assume that in such an image brightness values can be any real numbers in the
range from 0 (black) to 1 (white).
A digital image differs from such a photo in that its values are all discrete.
Usually they take only integer values, with brightness ranging from 0 (black) to
255 (white). A digital image can be considered as a large array of discrete dots,
each of which has a brightness associated with it. These dots are called picture
elements, or more simply pixels. The pixels surrounding a given pixel constitute its
neighborhood. A neighborhood can be characterized by its shape in the same way
as a matrix: we can speak, for example, of a 3×3 or a 5×7 neighborhood. Except in
very special circumstances, neighborhoods have odd numbers of rows and columns;
this ensures that the current pixel is in the center of the neighborhood.
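The array-of-pixels view and the odd-sized neighborhood can be sketched in a few lines. The report works in MATLAB; this is a Python sketch with a made-up 4×4 image.

```python
# Sketch: a digital image as a grid of integer pixel values, and the
# odd-sized neighborhood of a pixel. A plain list of lists stands in
# for an image matrix; a MATLAB array behaves the same way.

def neighborhood(img, row, col, size=3):
    """Return the size x size block of pixels centred on (row, col)."""
    half = size // 2  # odd sizes keep the current pixel in the centre
    return [r[col - half:col + half + 1]
            for r in img[row - half:row + half + 1]]

image = [[ 10,  20,  30,  40],
         [ 50,  60,  70,  80],
         [ 90, 100, 110, 120],
         [130, 140, 150, 160]]

print(neighborhood(image, 1, 1))
# -> [[10, 20, 30], [50, 60, 70], [90, 100, 110]]
```

For border pixels the slice simply shrinks; real implementations usually pad the image instead.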

1.4 Introduction to Image Preprocessing

Image pre-processing is the term for operations on images at the lowest level of
abstraction. These operations do not increase image information content; they
decrease it, if entropy is taken as the information measure. The aim of pre-processing
is an improvement of the image data that suppresses undesired distortions or enhances
image features relevant for further processing and analysis. Image pre-processing
exploits the redundancy in images: neighboring pixels corresponding to one
real object have the same or similar brightness values, so if a distorted pixel can be
picked out from the image, it can be restored as an average of the neighboring
pixels. Image pre-processing methods can be classified into categories according to
the size of the pixel neighborhood that is used for the calculation of the new pixel
brightness. In this paper, some pixel brightness transformations and local
pre-processing methods realized in MATLAB will be presented.

1.6 IMAGE PROCESSING OPERATIONS


We categorize image processing operations into the following three types.
1. Type 0 operation: If the output intensity level at a certain pixel is strictly
dependent on only the input intensity level at that point, such an operation is
known as type 0 or a point operation. Point operations are quite frequently used in
image segmentation, pixel classification, image summing, differencing, etc.
2. Type 1 Operations: If the output intensity level at a pixel depends on the
input intensity levels of the neighboring pixels as well, then such operations are
termed type 1 or local operations. Examples of local operations are Edge detection,
image filtering, etc.
3. Type 2 operations: If the operations are such that the output level at a
point is dependent on some geometrical transformation, these operations are
termed type 2 or Geometrical operations.
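The difference between a point (type 0) and a local (type 1) operation can be made concrete with a small sketch. This is a Python illustration on a made-up 3×3 image, not code from the project.

```python
# Sketch contrasting a type-0 (point) operation with a type-1 (local)
# operation on a toy grayscale image, using plain Python lists.

def invert(img):
    """Type 0: each output pixel depends only on the same input pixel."""
    return [[255 - p for p in row] for row in img]

def mean3x3(img, r, c):
    """Type 1: the output pixel depends on its 3x3 neighbourhood."""
    vals = [img[r + dr][c + dc]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    return sum(vals) // len(vals)

image = [[  0,  50, 100],
         [ 50, 100, 150],
         [100, 150, 200]]

print(invert(image)[0])      # -> [255, 205, 155]
print(mean3x3(image, 1, 1))  # -> 100
```

A type-2 (geometric) operation would instead move pixels to new coordinates, e.g. a rotation or scaling.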

1.7 Fundamental Steps In DIP:


Figure 5: Block diagram for fundamentals of DIP

1.7.1 Image Acquisition:


Image acquisition is to acquire a digital image. To do so requires an image
sensor and the capability to digitize the signal produced by the sensor. The sensor
could be a monochrome or color TV camera that produces an entire image of the
problem domain every 1/30 second. The image sensor could also be a line-scan
camera that produces a single image line at a time; in this case, the object's motion
past the line scanner produces a two-dimensional image. If the output of the camera
or other imaging sensor is not in digital form, an analog-to-digital converter digitizes
it. The nature of the sensor and the image it produces are determined by the
application.

1.7.2 Image Enhancement:


Image enhancement is among the simplest and most appealing areas of
digital image processing. Basically, the idea behind enhancement techniques is to
bring out detail that is obscured, or simply to highlight certain features of
interest in an image. A familiar example of enhancement is increasing the
contrast of an image because "it looks better." It is important to keep in mind that
enhancement is a very subjective area of image processing.
1.7.3 Image restoration:
Image restoration is an area that also deals with improving the appearance of
an image. However, unlike enhancement, which is subjective, image restoration is
objective, in the sense that restoration techniques tend to be based on mathematical
or probabilistic models of image degradation.
Enhancement, on the other hand, is based on human subjective preferences
regarding what constitutes a “good” enhancement result. For example, contrast
stretching is considered an enhancement technique because it is based primarily on
the pleasing aspects it might present to the viewer, whereas removal of image blur
by applying a deblurring function is considered a restoration technique.
1.7.4 Color image processing:
The use of color in image processing is motivated by two principal factors.
First, color is a powerful descriptor that often simplifies object identification and
extraction from a scene. Second, humans can discern thousands of color shades
and intensities, compared to about only two dozen shades of gray. This second
factor is particularly important in manual image analysis.
1.7.5 Compression:
Compression, as the name implies, deals with techniques for reducing the
storage required to save an image, or the bandwidth required to transmit it.
Although storage technology has improved significantly over the past decade, the
same cannot be said for transmission capacity. This is true particularly in uses of
the Internet, which are characterized by significant pictorial content. Image
compression is familiar to most users of computers in the form of image file
extensions, such as the jpg file extension used in the JPEG (Joint Photographic
Experts Group) image compression standard.
1.7.6 Segmentation:
Segmentation procedures partition an image into its constituent parts or
objects. In general, autonomous segmentation is one of the most difficult tasks in
digital image processing. A rugged segmentation procedure brings the process a
long way toward successful solution of imaging problems that require objects to be
identified individually.
On the other hand, weak or erratic segmentation algorithms almost always
guarantee eventual failure. In general, the more accurate the segmentation, the
more likely recognition is to succeed.
1.7.7 Representation and description:
Representation and description almost always follow the output of a
segmentation stage, which usually is raw pixel data, constituting either the
boundary of a region (i.e., the set of pixels separating one image region from
another) or all the points in the region itself. In either case, converting the data to a
form suitable for computer processing is necessary. The first decision that must be
made is whether the data should be represented as a boundary or as a complete
region. Boundary representation is appropriate when the focus is on external shape
characteristics, such as corners and inflections.
Regional representation is appropriate when the focus is on internal
properties, such as texture or skeletal shape. In some applications, these
representations complement each other. Choosing a representation is only part of
the solution for transforming raw data into a form suitable for subsequent computer
processing. A method must also be specified for describing the data so that features
of interest are highlighted. Description, also called feature selection, deals with
extracting attributes that result in some quantitative information of interest or are
basic for differentiating one class of objects from another.
1.7.8 Classification and recognition:
The last stage involves recognition and interpretation. Recognition is the
process that assigns a label to an object based on the information provided by its
descriptors. Interpretation involves assigning meaning to an ensemble of
recognized objects.

1.7.9 Knowledgebase:

Knowledge about a problem domain is coded into an image processing system
in the form of a knowledge database. This knowledge may be as simple as
detailing regions of an image where the information of interest is known to be
located, thus limiting the search that has to be conducted in seeking that
information. The knowledge base can also be quite complex, such as an interrelated
list of all major possible defects in a materials inspection problem, or an
image database containing high-resolution satellite images of a region in
connection with change-detection applications. In addition to guiding the operation
of each processing module, the knowledge base also controls the interaction
between modules. The system must be endowed with the knowledge to recognize
the significance of the location of a string with respect to other components of an
address field. This knowledge guides not only the operation of each module, but
also aids in feedback operations between modules through the knowledge base. We
implemented the pre-processing techniques using MATLAB.

1.8 Application of Image Processing

Image processing has an enormous range of applications; almost every area
of science and technology can make use of image processing methods. Here is a
short list just to give some indication of the range of image processing
applications.
1.8.1 Medicine

 Inspection and interpretation of images obtained from X-rays, MRI or CAT
scans,
 Analysis of cell images and chromosome karyotypes.

1.8.2 Agriculture

 Satellite/aerial views of land, for example to determine how much land is
being used for different purposes, or to investigate the suitability of different
regions for different crops,
 Inspection of fruit and vegetables, distinguishing good and fresh produce
from old.

1.8.3 Industry

 Automated inspection of items on a production line,


 Inspection of paper samples.

1.8.4 Law Enforcement

 Fingerprint analysis,
 Sharpening or de-blurring of speed-camera images.

Existing method:

The standard diagnosis method for skin cancer detection is the biopsy. A biopsy is a
method of removing a piece of tissue or a sample of cells from the patient's body so
that it can be analyzed in a laboratory. It is an uncomfortable method, and it is
time-consuming for the patient as well as the doctor because testing takes a lot of
time. A biopsy is done by removing skin tissue (skin cells), and that sample then
undergoes a series of laboratory tests. There is also a possibility of the disease
spreading to other parts of the body, which makes the method riskier.
Proposed method:

In this paper we propose a diagnosis system that enables users to detect and
recognize skin diseases with the help of image processing and data mining techniques,
and that provides users with advice or treatments based on the results obtained, in a
shorter time period than existing methods. In this project, we construct a
diagnosis system based on the techniques of image processing, using MATLAB
software to perform the pre-processing and processing of the skin images of
the users.
This processing will be conducted on the different skin patterns and will be analyzed to
obtain the results from which we can identify which skin disease the user is suffering
from. This data will help in early detection of the skin diseases and in providing their
cure. Through this we will be finding a cost effective and feasible test method for the
detection of skin disorders. The results obtained will be classified according to the given
prototype and diagnosis accuracy assessment will be performed to provide users with
efficient and fast results.
In this paper we consider a training set of images obtained from the given
data set; pre-processing and segmentation are performed on each image. After
the image is segmented, we are able to determine whether the skin has been affected
by any disease or not. We take into consideration three diseases, viz. psoriasis,
vitiligo and skin cancer. Once the presence of a disease is detected, the portion of the
skin affected by the disease is highlighted, indicating the exact location of the disease.
From the affected area we perform classification of the disease through data
mining. The segmentation of the image is done using a tolerance value, which is
calculated from the histogram.

Block diagram:
Dataflow diagram:

CHAPTER 2

PROJECT DESCRIPTION
Skin cancer is a deadly disease. Skin has three basic layers, and skin cancer begins in
the outermost layer, which is made up of squamous cells (the first layer), basal
cells (the second layer), and melanocytes (the innermost, third layer). In today's world,
people of different age groups suffer from skin diseases such as eczema, scalp
ringworm, skin fungus, skin cancers of different intensity, psoriasis, etc. These
diseases strike without warning and have been among the major life-threatening
diseases for the past ten years. If skin diseases are not treated at an early stage, they
may lead to complications in the body, including the spreading of the infection from
one individual to another.

Skin diseases can be prevented by investigating the infected region at an early
stage. It is important to control them at the initial stages to prevent them from
spreading. Damage done to the skin by skin diseases can also undermine people's
mental confidence and wellbeing. This has therefore become a huge problem, and it
has become crucial to treat these skin diseases properly at the initial stages to
prevent serious damage. Many skin diseases are very dangerous, particularly if not
treated at an early stage. Skin diseases are becoming common because of increasing
pollution, and they tend to pass from one person to another. People tend to assume
that some skin diseases are not serious problems, and often try to treat these skin
infections using their own methods. However, if those treatments are not suitable for
the particular skin problem, they can make it worse. Sometimes people may also not
be aware of how dangerous their skin disease is, for instance in the case of skin
cancers. With the advance of medical imaging technologies, the acquired image data
is becoming so rich that it goes beyond the human capacity for visual recognition and
efficient use in clinical assessment.

Input image
The input to the proposed system is dermoscopic images, i.e., images taken with a
dermatoscope, a kind of magnifier used to photograph skin lesions. It is a handheld
instrument that makes it much easier to diagnose skin diseases.

Pre-processing

The goal of pre-processing is an improvement of the image data that reduces unwanted
distortions and enhances image features important for further image processing.

Image pre-processing involves three main steps:

1) Grayscale conversion
2) Noise removal
3) Image enhancement

Grayscale conversion

A grayscale image contains only brightness information. Each pixel value in a
grayscale image corresponds to an amount or quantity of light, so graduations of
brightness can be differentiated; a grayscale image measures only light intensity.
An 8-bit image has brightness values from 0 to 255, where 0 represents black and
255 represents white. In grayscale conversion, a colour image is converted into
grayscale. Grayscale images are easier and faster to process than coloured images,
and all the image processing techniques here are applied to grayscale images. In our
proposed system, the coloured (RGB) image is converted into a grayscale image
using a weighted sum of the channels.
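The report does not state which weights its weighted sum uses; a common choice is the ITU-R BT.601 luminance weights, used in this hedged Python sketch (the project itself works in MATLAB).

```python
# Sketch of weighted-sum grayscale conversion for one pixel, using the
# common BT.601 luminance weights (0.299, 0.587, 0.114). These weights
# are an assumption; the report only says "weighted sum".

def rgb_to_gray(r, g, b):
    """Convert one RGB pixel (0-255 channels) to a grayscale value."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(rgb_to_gray(255, 255, 255))  # white     -> 255
print(rgb_to_gray(255, 0, 0))      # pure red  -> 76
```

Applying this function to every pixel of an m-by-n-by-3 color matrix yields the m-by-n intensity matrix described later in the report.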

Noise Removal
The objective of noise removal is to detect and remove unwanted noise from the
digital image. The difficulty lies in deciding which features of an image are real and
which are caused by noise; noise is random variation in pixel values. In our proposed
system we use a median filter to remove unwanted noise. The median filter is a
nonlinear filter that leaves edges invariant. It is implemented by sliding a window of
odd length over the signal: the sample values within the window are sorted by
magnitude, and the centre value, the median of the samples within the window, is
the filter output.
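The sliding-window median described above can be sketched in a few lines. This is a Python illustration on 1-D data with made-up values; the project applies the 2-D analogue to images.

```python
# Sketch of a 1-D sliding-window median filter of odd length, as
# described above; edge samples are left unchanged for simplicity.

def median_filter(signal, window=3):
    half = window // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        # sort the samples in the window; the middle one is the output
        out[i] = sorted(signal[i - half:i + half + 1])[half]
    return out

# An impulse-noise spike (999) is removed while the step edge survives.
print(median_filter([10, 10, 999, 10, 10, 50, 50]))
# -> [10, 10, 10, 10, 10, 50, 50]
```

This is why the median filter "leaves edges invariant": unlike a mean filter, the sorted middle value never blends the two sides of a step.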

Image enhancement

The objective of image enhancement is to process an image to increase the visibility
of the features of interest. Here, contrast enhancement is used to obtain a
better-quality result.

Segmentation
Segmentation is the process of extracting the region of interest from a given image;
the region of interest contains pixels with similar attributes. Here we use maximum
entropy thresholding for segmentation. First we take the gray levels of the original
image, then calculate the histogram of the grayscale image, and then use maximum
entropy to separate the foreground from the background. After maximum entropy
thresholding we obtain a binary image, that is, a black-and-white image.
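Maximum entropy thresholding can be sketched as follows (the Kapur formulation: choose the threshold that maximizes the summed entropies of the background and foreground histograms). This Python version works on a flat list of pixel values and only illustrates the idea; it is not the project's MATLAB code.

```python
import math

# Sketch of maximum-entropy (Kapur) thresholding: for each candidate
# threshold t, compute the entropy of the background (levels < t) and
# foreground (levels >= t) distributions, and keep the t whose summed
# entropy is largest.

def max_entropy_threshold(pixels, levels=256):
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    prob = [h / total for h in hist]

    best_t, best_entropy = 0, -1.0
    for t in range(1, levels):
        p_back = sum(prob[:t])
        p_fore = 1.0 - p_back
        if p_back == 0 or p_fore == 0:
            continue  # one class empty: entropy undefined
        h_back = -sum(p / p_back * math.log(p / p_back)
                      for p in prob[:t] if p > 0)
        h_fore = -sum(p / p_fore * math.log(p / p_fore)
                      for p in prob[t:] if p > 0)
        if h_back + h_fore > best_entropy:
            best_entropy, best_t = h_back + h_fore, t
    return best_t

# Two well-separated pixel populations -> threshold falls between them.
dark = [20, 25, 30] * 10
bright = [200, 210, 220] * 10
t = max_entropy_threshold(dark + bright)
print(30 < t <= 200)  # -> True
```

Thresholding the grayscale image at `t` then yields the binary (black-and-white) image mentioned above.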

Feature extraction
Feature extraction plays an important role in extracting the information present in a
given image. Here we use the gray-level co-occurrence matrix (GLCM) for texture
image analysis. The GLCM captures the spatial dependency between image pixels: it
works on the gray-level image matrix to capture common features such as contrast,
mean, energy and homogeneity.
The purpose of feature extraction with the GLCM is to reduce the original image data
set by measuring certain values or features that help to distinguish different images
from one another.
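A GLCM and two of the features named above can be sketched on a tiny image. This Python illustration uses the horizontal neighbour offset (0, 1) and a made-up 4-level image; the project's MATLAB pipeline would use more levels and offsets.

```python
# Sketch of a gray-level co-occurrence matrix (GLCM) for the
# horizontal offset (0, 1), plus the contrast and energy features.

def glcm(img, levels):
    """Count how often gray level i appears immediately left of j."""
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def glcm_features(m):
    total = sum(sum(row) for row in m)
    p = [[v / total for v in row] for row in m]  # normalize to probabilities
    contrast = sum((i - j) ** 2 * p[i][j]
                   for i in range(len(p)) for j in range(len(p)))
    energy = sum(v ** 2 for row in p for v in row)
    return contrast, energy

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [2, 2, 3, 3],
         [2, 2, 3, 3]]

m = glcm(image, 4)
print(m[0][0], m[0][1])  # -> 2 2
```

Smooth textures concentrate counts near the GLCM diagonal (low contrast, high energy); busy textures spread them out.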

Classifier
The classifier is used to separate cancerous images from other skin diseases. For
simplicity, a Support Vector Machine (SVM) classifier is used here. The SVM takes a
set of images and predicts, for each input image, which of the two categories,
cancerous or non-cancerous, it belongs to. The purpose of the SVM is to create a
hyperplane that separates the two classes with the maximum gap between them. In
our proposed system, the output of the GLCM is given as input to the SVM classifier,
which takes training data, testing data and grouping information, and classifies
whether a given input image is cancerous or non-cancerous.
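The maximum-margin hyperplane idea can be sketched with a minimal linear SVM trained by stochastic sub-gradient descent on the hinge loss (the Pegasos scheme). The 2-D points stand in for GLCM feature vectors and are made up; the bias term is omitted for brevity. A real system would use a library SVM, not this toy trainer.

```python
import random

# Minimal linear SVM (Pegasos-style): minimize the hinge loss plus an
# L2 penalty by stochastic sub-gradient steps with a decaying step size.

def train_svm(xs, ys, lam=0.01, epochs=200, seed=0):
    rng = random.Random(seed)
    w = [0.0, 0.0]
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(xs)), len(xs)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            x, y = xs[i], ys[i]
            if y * (w[0] * x[0] + w[1] * x[1]) < 1:
                # inside the margin: shrink w and step along y * x
                w = [wk * (1 - eta * lam) + eta * y * xk
                     for wk, xk in zip(w, x)]
            else:
                # correctly classified with margin: shrink only
                w = [wk * (1 - eta * lam) for wk in w]
    return w

def predict(w, x):
    return 1 if w[0] * x[0] + w[1] * x[1] >= 0 else -1

# Toy, linearly separable "feature vectors" (centred on the origin).
xs = [(-1.0, -0.8), (-0.8, -1.0), (1.0, 0.8), (0.8, 1.0)]
ys = [-1, -1, 1, 1]
w = train_svm(xs, ys)
print(predict(w, (-0.9, -0.9)), predict(w, (0.9, 0.9)))  # -> -1 1
```

The learned `w` defines the separating hyperplane; the hinge condition `y * (w . x) < 1` is what pushes that hyperplane toward the maximum-margin position.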

CHAPTER 3

SOFTWARE DESCRIPTION

Software Requirement:-

 MATLAB 7.8.0 and above

5.1 History of MATLAB:

MATLAB is short for "MATrix LABoratory". It was invented in the late 1970s
by Cleve Moler, then chairman of the computer science department at the
University of New Mexico. He designed it to give his students access to LINPACK
and EISPACK without having to learn Fortran. It soon spread to other universities
and found a strong audience within the applied mathematics community. Jack
Little, an engineer, was exposed to it during a visit Moler made to Stanford
University in 1983. Recognizing its commercial potential, he joined with Moler and
Steve Bangert. They rewrote MATLAB in C and founded The MathWorks in 1984
to continue its development. These rewritten libraries were known as JACKPAC.
MATLAB was first adopted by control design engineers, Little's specialty, but
quickly spread to many other domains. It is now also used in education, in
particular for teaching linear algebra and numerical analysis, and is a de facto
choice for scientists involved with image processing.

5.2 Syntax:

MATLAB is built around the MATLAB language, sometimes called M-code
or simply M. The simplest way to execute M-code is to type it in at the prompt,
>>, in the Command Window, one of the elements of the MATLAB Desktop. In
this way, MATLAB can be used as an interactive mathematical shell. Sequences of
commands can be saved in a text file, typically using the MATLAB Editor, as a
script, or encapsulated into a function, extending the commands available. In many
other languages the semicolon is required to terminate commands; in MATLAB the
semicolon is optional. If a statement is not terminated with a semicolon, the result
of the statement is displayed.

5.2.1 Variables:

Variables are defined with the assignment operator, =. MATLAB is
dynamically typed, meaning that variables can be assigned without declaring their
type, and that their type can change. Values can come from constants, from
computation involving values of other variables, or from the output of a function.

5.2.2 Vectors / Matrices:

MATLAB is the "Matrix Laboratory", and so provides many convenient
ways of creating matrices of various dimensions. In the MATLAB vernacular, a
vector refers to a one-dimensional (1×N or N×1) matrix, commonly referred to as
an array in other programming languages. A matrix generally refers to a
multi-dimensional matrix, that is, a matrix with more than one dimension: an N×M,
an N×M×L, etc., where N, M, and L are greater than 1. In other languages, such a
matrix might be referred to as an array of arrays, or an array of arrays of arrays,
etc. Matrices can be defined by separating the elements of a row with blank space
or commas and using a semicolon to terminate each row. The list of elements
should be surrounded by square brackets [ ].

5.3 Matlab image processing:

MATLAB provides a suitable environment for image processing.
Although MATLAB is slower than some languages (such as C), its built-in
functions and syntax make it a more versatile and faster programming
environment for image processing. Once an algorithm is finalized in MATLAB,
the programmer can port it to C (or another faster language) to make the
program run faster.

MATLAB does not have the easy-to-use interfaces of Adobe Photoshop;
it is used to test and tweak new image processing techniques and
algorithms. Almost everything in MATLAB is done through programming and
manipulation of raw image data, not through a user interface. The effects and
filters in Photoshop (or any other image editing software) are actually algorithms,
and with MATLAB the user can create the kinds of complex algorithms that are
applied in Photoshop.
5.3.1 Image matrices:

MATLAB handles images as matrices. This involves breaking each pixel
of an image down into the elements of a matrix. MATLAB distinguishes between
color and grayscale images, and their resulting image matrices therefore differ
slightly.

5.3.2 Color images:

A color is a composite of some basic colors. MATLAB therefore breaks
each individual pixel of a color image (termed 'true color') down into Red, Green
and Blue (RGB) values. What we get as a result, for the entire image, are 3
matrices, one representing each color. The three matrices are stacked next to each
other, creating a 3-dimensional m by n by 3 matrix. For an image with a height
of 5 pixels and a width of 10 pixels, the result in MATLAB would be a 5 by 10
by 3 matrix for a true color image.
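To illustrate (the input file name here is hypothetical), the three color planes of a true color image can be pulled apart by indexing the third dimension:

```matlab
% Read an RGB image into an m-by-n-by-3 uint8 matrix
img = imread('skin.jpg');   % hypothetical input file

% Extract each m-by-n color plane
R = img(:, :, 1);   % red channel
G = img(:, :, 2);   % green channel
B = img(:, :, 3);   % blue channel
```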

5.3.3 Gray scale images:

A grayscale image is a mixture of black and white colors. These colors,
or as some may term them, 'shades', are not composed of Red, Green or Blue
colors. Instead they contain various increments of color between white and black.
Therefore, to represent this one range, only one color channel is needed. Thus we
only need a 2-dimensional matrix, m by n by 1. MATLAB terms this type of
matrix an Intensity Matrix, because the values of such a matrix represent
intensities of one color.
5.3.4 Color maps:

MATLAB also allows the use of colormaps. Basically, a colormap is an m
by 3 matrix representation of all the possible colors in the image/matrix.
Modifying, creating or applying one of MATLAB's predefined colormaps will
effectively change the color range of an image.
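As a minimal sketch, using a grayscale demo image that ships with the Image Processing Toolbox, a predefined colormap can be applied to an intensity image like this:

```matlab
I = imread('pout.tif');   % grayscale demo image from the toolbox
imagesc(I);               % display the intensity matrix
colormap(jet);            % remap intensities to the 'jet' color range
colorbar;                 % show which color each intensity maps to
```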

5.4 Pixel values:

MATLAB, by default, will use integer values (which MATLAB terms
uint8) that have a range from 0 to 255 to represent a pixel value. 0 stands for the
darkest color (black) and 255 stands for the lightest color possible (white). This
applies to each of the RGB (Red Green Blue) color channels. In the case of the
Red channel, lower numbers will produce darker (maroon) shades of red, and
higher numbers near 255 will produce the brightest red. For the intensity matrix
(m by n by 1) this scale applies for colors between black and white (or depends
on the colormap being used). The second type of pixel values used for images is
called double: floating point (decimal) numbers between 0 and 1. This range is
proportional to the uint8 range, and therefore multiplying each double pixel
value by 255 will yield a uint8 pixel value. Similarly, conversion from uint8 to
double is done by dividing the uint8 value by 255.

MATLAB does have casting functions uint8() and double(). But these
only change the data type and do not scale the values. Scaling must be done
manually. The reason MATLAB has two formats is that uint8 values take less
storage. But in many older versions of MATLAB (e.g. version 6.0), direct
arithmetic operations on uint8 values are not possible because of accuracy issues.
Therefore, to perform arithmetic operations, the pixel values must be converted
to double first. In version 2006a this is not an issue, as MATLAB simply changes
uint8 to double first, does the operations, and then changes the values back to
uint8.
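A small sketch of the difference between plain casting and manual scaling:

```matlab
u = uint8(200);           % a uint8 pixel value

d_cast = double(u);       % casting only: the value stays 200
d = double(u) / 255;      % manual scaling into the [0, 1] double range

u2 = uint8(d * 255);      % scaling back recovers the uint8 value, 200
```

(The Image Processing Toolbox also provides im2double, which performs this scaling automatically.)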

5.5 Image processing toolbox:

MATLAB does have extensions which contain functions geared towards
image processing. They are grouped under the 'Image Processing Toolbox'. In
some versions it is an optional extra which users have to pay for, while in others it
comes packaged with the software.

This toolbox is especially helpful for applying numerous filters (such as
linear and deblurring filters) and also includes algorithms which can detect lines
and edges in an image. A programmer could write almost all of the features in
this extension by hand. Otherwise, the command 'edit commandName' usually
allows a user to see and modify the lines of the built-in functions that MATLAB
and its Image Processing Toolbox provide.

5.6 Basic tasks:

Converting a color image to grayscale

1. Since we know that an RGB image is an m-by-n-by-3 matrix and a grayscale
image is an m-by-n-by-1 matrix, we can use one of the red, green or blue
channels directly as our new grayscale intensity matrix. In this way we are
accounting for only one channel's values in our grayscale image.
2. Another common way is to combine elements of the red, green and blue
channels with weights, using weightR*RedChannel + weightG*GreenChannel
+ weightB*BlueChannel, where the weights satisfy weightR + weightG +
weightB = 1. This method allows us to mix each channel selectively to get a
grayscale image.
3. More recent versions of MATLAB have a function called rgb2gray(), which
turns an RGB matrix into a grayscale matrix. This function is an
implementation of option 2 above, using standard luminance weights.
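The three options above can be sketched as follows (the input file name is hypothetical; the weights shown are the standard luminance weights that rgb2gray also uses):

```matlab
img = imread('skin.jpg');          % hypothetical RGB input

R = double(img(:, :, 1));
G = double(img(:, :, 2));
B = double(img(:, :, 3));

% Option 1: take a single channel as the intensity matrix
grayRed = uint8(R);

% Option 2: weighted combination; the weights sum to 1
gray = uint8(0.2989*R + 0.5870*G + 0.1140*B);

% Option 3: the built-in function
gray2 = rgb2gray(img);
```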

5.7 Data types in MATLAB

 Double (64-bit double-precision floating point)
 Single (32-bit single-precision floating point)
 Int32 (32-bit signed integer)
 Int16 (16-bit signed integer)
 Int8 (8-bit signed integer)
 Uint32 (32-bit unsigned integer)
 Uint16 (16-bit unsigned integer)
 Uint8 (8-bit unsigned integer)

5.8 Image type conversion

 RGB Image to Intensity Image (rgb2gray)
 RGB Image to Indexed Image (rgb2ind)
 RGB Image to Binary Image (im2bw)
 Indexed Image to RGB Image (ind2rgb)
 Indexed Image to Intensity Image (ind2gray)
 Indexed Image to Binary Image (im2bw)
 Intensity Image to Indexed Image (gray2ind)
 Intensity Image to Binary Image (im2bw)
 Intensity Image to RGB Image (gray2ind, ind2rgb)
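A short sketch chaining some of these conversions, using a demo image shipped with MATLAB:

```matlab
rgb = imread('peppers.png');     % demo RGB image

gray = rgb2gray(rgb);            % RGB -> intensity
bw   = im2bw(gray, 0.5);         % intensity -> binary at threshold 0.5
[ind, map] = rgb2ind(rgb, 32);   % RGB -> indexed with a 32-color map
rgb2 = ind2rgb(ind, map);        % indexed -> (approximate) RGB
```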

5.8.1 Key Features

 High-level language for technical computing
 Development environment for managing code, files and data
 Interactive tools for iterative exploration, design and problem solving
 Mathematical functions for linear algebra, statistics, Fourier analysis,
filtering, optimization and numerical integration
 2-D and 3-D graphics functions for visualizing data
 Tools for building custom graphical user interfaces
 Functions for integrating MATLAB-based algorithms with external
applications and languages such as C, C++, FORTRAN, Java,
COM, and Microsoft Excel

5.9 Uses of MATLAB

 Math and computation
 Algorithm development
 Data acquisition
 Modeling, simulation and Prototyping
 Data Analysis, Exploration and Visualization
 Scientific and engineering Graphics
 Application development, including Graphical User Interface
building

CONCLUSION:

An automated skin disease detection system is proposed which will help the medical
community with the early detection of skin diseases. The diagnosis methodology uses
digital image processing in MATLAB. The unique features of the enhanced images
were segmented using histogram-based segmentation. Based on the results, the
affected area is detected and the skin diseases are classified.

REFERENCES

[1] Mugdha Smanerkar, Shashwata Harsh, Juhi Saxena, Simanta P. Sarma, U.
Snekhalatha, M. Anburajan, "Classification of Skin Disease Using Multi SVM
Classifier," 3rd International Conference on Electrical, Electronics, Engineering
Trends, Communication, Optimization and Sciences (EEECOS), 2016.
[2] G. Ramya, J. Rajeshkumar, "Novel Method for Segmentation of Skin Lesions
from Digital Images," International Research Journal of Engineering and
Technology, Vol. 02, Issue 08, November 2015.
[3] B. Gohila Vani et al., "Segmentation and Classification of Skin Lesions Based on
Texture Features," International Journal of Engineering Research and
Applications, ISSN 2248-9622, Vol. 4, Issue 12 (Part 6), December 2014,
pp. 197-203.
[4] Kawsar Ahmed, Tasnuba Jesmin, Md. Zamilur Rahman, "Early Prevention and
Detection of Skin Cancer Risk using Data Mining," International Journal of
Computer Applications (0975-8887), Volume 62, No. 4, January 2013.
[5] I. Vijaya M. S., "Categorization of Non-Melanoma Skin Lesion Diseases Using
Support Vector Machine and Its Variants," International Journal of Medical
Imaging, Vol. 3, No. 2, 2015, pp. 34-40. doi: 10.11648/j.ijmi.20150302.15.
[6] Y. P. Gowaramma et al., used a marker-controlled watershed segmentation
method and a k-NN classifier along with a curvelet filter.
[7] J. Priyadharshini, "A Classification via Clustering Approach for Enhancing the
Prediction Accuracy of Erythemato-squamous (Dermatology) Diseases," IJSRD -
International Journal for Scientific Research & Development, Vol. 3, Issue 06,
2015, ISSN (online): 2321-0613.
[8] E. Barati et al., "A Survey on Utilization of Data Mining Approach for
Dermatological Skin Diseases Prediction," Journal of Selected Areas in Health
Informatics, March 2011.
[9] A. A. L. C. Amarathunga et al., "Expert System for Diagnosis of Skin Diseases,"
International Journal of Scientific & Technology Research, Volume 4, Issue 01,
January 2015, ISSN 2277-8616.
[10] Madhura Rambhajani, "Classification of Dermatology Diseases through Bayes
Net and Best First Search," International Journal of Advanced Research in
Computer and Communication Engineering, Vol. 4, Issue 5, May 2015.
