
Application of Watershed-based Algorithm for Fragmentation in Rock Blasting

Dr. Debashish Chakravarty
Assistant Professor, Department of Mining Engineering, Indian Institute of Technology Kharagpur, India

Swapan Kumar Khatua
Master of Science Student and Junior Project Assistant, Department of Mining Engineering, Indian Institute of Technology Kharagpur, India

Abstract
The field of rock blasting and the analysis of the resulting fragments has always been of great interest to mining engineers. Aided by the advent of new technologies such as digital image processing and drill-parameter measurement and monitoring systems, it is now possible to combine the different facets of mining into one entity, leading to overall optimisation. The traditional view of the mining process as a set of independent unit operations (drilling, blasting, comminution and, ultimately, transportation), each carrying equal weight, no longer holds good. It is now increasingly realised that a process such as blasting can have a tremendous impact on downstream processes such as crushing and grinding. The present study can be applied to determine crusher performance, to design better blast models and to investigate the effect of explosive energy on rock fragments during primary crushing; the overall study can be used to determine the cost effectiveness of the total process. Rock properties such as strength, nature of deformation, size distribution and fragment shape strongly affect the performance of the primary crusher. Using drill parameters such as specific energy (SE) together with digital image analysis data in blasting models may lead to more accurate results.
Shape and size determination of rock fragments is an increasingly important issue in the mining, comminution, materials handling and construction industries. The size distribution of rock fragments obtained from blasting and crushing in the mining industry has to be monitored for optimal control of a variety of processes before the material reaches the final grinding, milling and froth flotation stages. Whenever feasible, mechanical sieving is the routine procedure used to determine the cumulative rock weight distribution, but it is tedious and very time consuming. The present work concerns the segmentation of rock fragments using different watershed-based algorithms and their comparison. We also compare the fragment size distributions obtained from the computer-based technique with mechanical sieving results. The whole process has been implemented at laboratory scale using 256x256 grey level rock images and a high level programming language.
Keywords: Image Segmentation and Thresholding, Morphological Image Processing, Rock Fragmentation, Particle Size Distribution.

1. INTRODUCTION

Blasting induced rock fragmentation is an art that has been developed and refined for hundreds of years through blasting and recording the results. Good blasting fragmentation practices were thus developed through experience. However, this method requires a history that is typically developed in a mine over a number of years and sometimes at considerable expense. Quantification of fragmentation as a function of rock mass characteristics and blast design is therefore attractive. Developments towards this end have included semi-empirical methods as well as computer codes that explicitly treat shock wave transmission and the subsequent rock breakage. This paper presents predictions of fragmentation resulting from rock blasting using a new state-of-the-art technique and a computer code that analyses digital images taken at laboratory scale.

Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing, i.e., material handling. Over the past two or three decades, significant progress has been made in the development of new technologies for blasting applications. These include the advancement of modern instrumentation for monitoring, increasingly sophisticated computer models for blast design and blast performance prediction, and more versatile explosives and initiation systems.

The first problem one faces when dealing with rock fragmentation by blasting is how to define it. There are various approaches for the determination of rock fragmentation. The size distribution of a blasted bench could be defined by screening, but this is not practical, as it would be far too expensive and time consuming. Therefore, numerous methods for the estimation of the size distribution of rock fragments have been developed. Because rock fragmentation depends on many variables, such as rock properties, site geology, in situ fracturing, moisture content and blasting parameters (both the geometry and the explosive characteristics with the initiation pattern), there is no complete theoretical solution for the prediction of blast size distribution. However, useful empirical models are used to estimate the size distribution; the most commonly used are the Kuz-Ram model and the Rosin-Rammler model. Using the machine vision concept, a window-based computer program may be utilised to estimate the size distributions of the rock fragments easily. The results of the distribution can be used as feedback for the next stages of planning, so that the powder factor may be optimised. It is seen that by increasing the awareness and knowledge of the effects of rock fragmentation by blasting on the mining operation and corresponding costs, the economics of an open pit can be improved.



2. LITERATURE REVIEW

Although the digital size determination of blasted material may seem a simple exercise, there are a number of technical issues that need to be resolved to optimise the performance of the system.

The dissertation [15] has investigated the applicability of various approaches of both image analysis techniques and the definition of digital textures, in terms of laying the foundation stone for the applicability of digital textures to rock image classification. The work has used new and novel techniques from allied applied fields for solving problems in the field of rock mechanics, aided by the analysis of digital images. The research has made the following major contributions, which taken together would allow theoreticians and academicians, as well as professionals, to progress in problem solving in rock mechanics.
1. The relevance of digital image processing techniques to the fragmentation determination problem has been investigated; this would aid the process of blast design at both the management and the operator levels.
2. A new method of analysing the digital images, namely the a priori knowledge based technique, has been found to perform well for complex images of rock fragments.
3. The use of the neural network (NN) technique, not only for rock property prediction but also for rock image classification, would find applicability for both field engineers and academicians [15].

Rock sizing and shape determination are increasingly becoming important issues in the mining, comminution and handling industries, and in various construction industries utilising graded stone products [15]. These factors affect the performance of stone products and contribute considerably to cost, and thus need to be optimized. In order to be optimized, they need to be measured in an efficient and cost effective manner. The timing constraint is also very important in the present day scenario. In the last few years, there has been a proliferation of opto-physics based measurement systems to determine the size distributions [4]. One of the earliest of these systems, originally developed at the University of Waterloo [5] [6], is now marketed under the trade name WipFrag [7].

Constituent shape distributions are also becoming increasingly important, not only in the performance of rigid and flexible pavements but also in the industries dealing with conglomerates. Shape measurement today relies on laborious and tedious manual measurements [8], although it is very rarely practiced. Many image processing systems have the capability of measuring particle shapes on two-dimensional views; however, it is clear that three-dimensional measurements are needed. An image based method for producing three-dimensional views has been proposed by Frost and Lai [8].

A watershed-based segmentation approach that uses multiscale bilateral smoothing in the range direction for pre-filtering has been adopted for the segmentation of rock scenes. The resultant segmented images, however, also contain non-rock watershed regions, which are not desired for measuring rock sizes. A proximity-based classifier is applied for the removal of the latter, using features that can be divided into rock shape, edge strength and region intensity characteristics. Subset feature selection based on Thornton's separability index is used to remove redundant and irrelevant features. The authors achieved a final classification rate of 89.91% using a simple k-nearest neighbor classifier [9].

Size distribution of rock fragments obtained from blasting and crushing in the mining industry has to be monitored for optimal control of a variety of processes before the material reaches the final grinding, milling and froth flotation processes. Whenever feasible, mechanical sieving is the routine procedure to determine the cumulative rock weight distribution on conveyor belts or free falling off the end of transfer chutes. This process is tedious and very time consuming, even more so if a complete set of sieving meshes is used. A computer vision technique has been proposed based on a series of segmentation, filtering and morphological operations specially designed to determine rock fragment sizes from digital images; the final step uses an area-based approach to estimate rock volumes. This segmentation technique was implemented, and the resulting cumulative rock volume distributions were compared to the mechanical fragment distributions. The technique yields rock distribution curves, which represent an alternative to the mechanical sieving distributions [3].

Automated image analysis is a useful tool in analysing the block size of blast fragmentation. The results of the analysis, however, reflect only the size distributions of the blocks in the actual image or images being used. The surface of muck piles is normally fairly horizontal, and it is often difficult to get an orthogonal view. The area of coverage of a single image should be calculated: if too few fragments are photographed, the results may be statistically erratic; if too many fragments are photographed, the image analysis system may have difficulty in identifying individual blocks, and smaller fragments would be lost because of the spatial resolution constraints of the system [10].

The problem of determining the true block size distribution of blast fragmentation is one of measuring, on the surface of an assemblage of blocks, some two dimensional size parameters of the individual blocks and transforming them into a three dimensional block size distribution. Similar problems exist in the fields of biology, metallography and petrography, i.e., obtaining true particle size distributions of grains or bodies embedded in a three-dimensional volume from measurements on a two dimensional section or cut. The size is affected by sampling error when there are very coarse blocks: the presence of a single coarse block could be over-representative, while the absence of the same block could be under-representative. Another problem is simply missing fines, as the undersize blocks tend to fall between and behind the large blocks, or are not resolved by the image analysis system [11].

Control of crushing and grinding circuits ideally requires continuous particle size measurements on both ingoing and outgoing streams. Until recently, size distributions could only be obtained by screening methods, which are neither cost effective, frequent enough, nor timely. Input streams are often composed of large fragments, making screening prohibitive. The accuracy of this method is low, subject to several types of measurement errors [12].

More recently, optical sizing technology has been applied to processing operations such as crushing, grinding and screening. Still, some sampling type errors persist, and matching screening results remains difficult. At the same time, optical systems have limitations: they have been purported to suffer from a lack of accuracy, an inability to measure fines, and other associated errors.



Errors in this context should not be thought of as mistakes but as variability between the measured results and some true value. Often the true size is taken to be the screening result, although that may be debatable as well. These errors come from a variety of sources:
1. Errors related to the method of analysis of the images.
2. Errors related to sample presentation.
3. Errors related to the imaging process.
4. Errors related to the sampling process [13].

Optical granulometry systems are required to measure fragments in situ. That is to say, the fragments are in piles where sorting takes place, where fragments are partially overlapped, and where fines may not be seen because they fall in and behind the coarser fragments, or they are simply too small to be seen. The hidden fines error in optical systems stems from the fact that, in an image of an assemblage of rock fragments, the small pieces, especially in the case of a wide or well-graded distribution, are typically hidden from view in the image. In a narrow or well-sorted distribution, this tends not to be a problem. Optical imaging systems also have associated errors in resolving fines. This is true especially with well-graded distributions, where the optical systems tend to overestimate the central tendency of the distribution and underestimate the variability. These errors are systematic [14].

3. IMAGE ANALYSIS TECHNIQUES

3.1. Basic Concept
Assuming that image objects are connected regions of little grey level variation, one should be able to extract these regions by using some neighborhood properties. Indeed, a high grey scale variation between two adjacent pixels may indicate that these two pixels belong to different objects. This assumption does not hold directly for textured objects, because grey level variations within a textured object may be higher than those occurring at the object boundaries. However, local texture measurements can be performed so as to obtain similar values for pixels belonging to similar textures, and therefore high variations between two neighbor pixels belonging to two different textured regions.

In the case of region growing, homogeneous regions are first located. The growth of these regions is based on similarity measurements combining spatial and spectral attributes. It proceeds until all pixels of the image are assigned to a region. Region boundaries are created when two growing regions meet.

Edge detection techniques proceed the opposite way. As the image objects are assumed to show little grey level variation, their edges are characterized by high grey level variations in their neighborhood. The task of edge detection is to enhance and detect these variations. Local grey level intensity variations are enhanced by a gradient operator. The gradient image is then used to determine an edge map. A basic approach consists in thresholding the gradient image for all values greater than a given threshold level. Unfortunately, the resulting edges are seldom connected, and an additional processing step is then required to obtain closed contours corresponding to object boundaries.

The morphological approach to image segmentation combines region growing and edge detection techniques: it groups the image pixels around the regional minima of the image, and the boundaries of adjacent groupings are precisely located along the crest lines of the gradient image. This is achieved by a transformation called the watershed transformation.

3.2. The watershed transformation
Let us consider the topographic representation of a grey level image. Now, let a drop of water fall on such a topographic surface. According to the law of gravitation, it will flow down along the steepest slope path until it reaches a minimum. The whole set of points of the surface whose steepest slope paths reach a given minimum constitutes the catchment basin associated with this minimum. The watersheds are the zones dividing adjacent catchment basins.

Figure 1: The pixel is hit by a raindrop. The raindrop follows the steepest path toward a local minimum (upper diagram). Afterwards, the basin is flooded with water coming up out of the reached minimum (lower diagram).

Figure 2: The watershed lines and catchment basins.

Before going to the definition of the watershed transformation it is necessary to understand some basic fundamental concepts.

3.3. Graphs
A graph G = (V, E) consists of a set V of vertices (or nodes) and a set E of pairs of vertices, E being a subset of V x V. In an (un)directed graph the set E consists of (un)ordered pairs (v,w). Instead of directed graph we will also write digraph. An unordered pair (v,w) is called an edge, an ordered pair (v,w) an arc. If e = (v,w) is an edge (arc), e is said to be incident with its vertices v and w; conversely, v and w are called incident with e. We also call v and w neighbors. The set of vertices which are neighbors of v is denoted by NG(v). A path of length l in a graph G = (V,E) from vertex p to vertex q is a sequence of vertices (p0, p1, ..., pl-1, pl) such that p0 = p, pl = q, and (pi, pi+1) belongs to E for 0 <= i < l. The length of a path is denoted by length(). A path is called simple if all its vertices are distinct. If there exists a path from a vertex p to a vertex q, then we say that q is reachable from p.

An undirected graph is connected if every vertex is reachable from every other vertex. A graph G' = (V',E') is called a subgraph of G = (V,E) if V' is a subset of V, E' is a subset of E, and the elements of E' are incident with vertices from V' only.


A connected component of a graph is a maximal connected subgraph of G. The connected components partition the vertices of G.

In a digraph, a path (p0, p1, ..., pl-1, pl) forms a cycle if p0 = pl and the path contains at least one edge. If all vertices of the cycle are distinct, we speak of a simple cycle. A self-loop is a cycle of length 1. In an undirected graph, a path (p0, p1, ..., pl-1, pl) forms a cycle if p0 = pl and p1, ..., pl are distinct. A graph with no cycles is acyclic. A forest is an undirected acyclic graph; a tree is a connected undirected acyclic graph. A directed acyclic graph is abbreviated as DAG.

A weighted graph is a triple G = (V, E, w), where w: E -> R is a weight function defined on the edges. A valued graph is a triple G = (V, E, f), where f: V -> R is a value function defined on the vertices. A level component at level h of a valued graph is a connected component of the set of nodes v with the same value f(v) = h. The boundary of a level component P at level h consists of all p in P which have neighbors with value different from h; the lower boundary of P is the set of all p in P which have neighbors with value smaller than h; the interior of P consists of all points of P which are not on the boundary. A descending path is a path along which the value does not increase. By G(p) we denote the set of all descending paths starting in a node p and ending in some node q with f(q) < f(p). A regional minimum (minimum, for short) at level h is a level component P of which no points have neighbors with value lower than h, i.e., G(p) is empty for all p in P. A valued graph is called lower complete when each node which is not in a minimum has a neighboring node of lower value.

3.4. Digital Grids
A digital grid is a special kind of graph. Usually one works with the square grid, a subset D of Z^2, where the vertices are called pixels. When D is finite, the size of D is the number of points in D. The set of pixels D can be endowed with a graph structure G = (V, E) by taking for V the domain D, and for E a certain subset of Z^2 x Z^2 defining the connectivity. Usual choices are 4-connectivity, where each point has edges to its horizontal and vertical neighbors, or 8-connectivity, where a point is connected to its horizontal, vertical and diagonal neighbors. Connected components of a set of pixels are defined by applying the definition for graphs.

Distances between neighboring nodes in a digital grid are introduced by associating a nonnegative weight d(p,q) with each edge (p,q). In this way a weighted graph is obtained. The distance d(p,q) between non-neighboring pixels p and q is defined as the minimum path length among all paths from p to q (this depends on the graph structure of the grid, i.e., the connectivity).

3.5. Digital Images
A digital grey scale image is a triple G = (D, E, f), where (D, E) is a graph (usually a digital grid) and f: D -> N is a function assigning an integer value to each p in D. A binary image f takes only two values, say 1 (foreground) and 0 (background). For p in D, f(p) is called the grey value. For the range of a grey scale image one often takes the set of integers from 0 to 255. A plateau or flat zone at grey value h is a level component of the image, considered as a valued graph, i.e., a connected component of pixels of constant grey value h. The threshold set of f at level h is Th = {p in D : f(p) <= h}.

3.6. Geodesic distance
Let A be a subset of R^d (continuous case) or Z^d (discrete case), and let a and b be two points in A. The geodesic distance dA(a,b) between a and b within A is the minimum path length among all paths within A from a to b (in the continuous case, read infimum instead of minimum). If B is a subset of A, define dA(a,B) = min{dA(a,b) : b in B}. Let B, a subset of A, be partitioned into k connected components Bi, i = 1, ..., k. The geodesic influence zone of the component Bi within A is defined as

izA(Bi) = {p in A : dA(p,Bi) < dA(p,Bj) for all j in {1,...,k}\{i}}.

The set IZA(B) is the union of the geodesic influence zones of the connected components of B, i.e., IZA(B) = izA(B1) U ... U izA(Bk). The complement of the set IZA(B) within A is called the SKIZ (skeleton by influence zones): SKIZA(B) = A \ IZA(B). So the SKIZ consists of all points which are equidistant (in the sense of the geodesic distance) to at least two nearest connected components (for digital grids, there may be no such points). For a binary image f with domain A, the SKIZ can be defined by identifying B with the set of foreground pixels.

3.7. Definition of the Watershed transform
Here we introduce the definition of the watershed transform, which may be viewed as a generalisation of the skeleton by influence zones (SKIZ) to grey value images. We start with the continuous case, followed by the discrete case.

3.7.1. Continuous case
A watershed definition for the continuous case can be based on distance functions; depending on the distance function used, one may arrive at different definitions. Assume that the image f is an element of the space C(D) of real twice continuously differentiable functions on a connected domain D with only isolated critical points. Then the topographical distance between points p and q in D is defined by

Tf(p,q) = inf over paths g of the integral of ||grad f(g(s))|| ds,

where the infimum is over all paths (smooth curves) g inside D with g(0) = p and g(1) = q. The topographical distance between a point p in D and a set A, a subset of D, is defined as Tf(p,A) = min{Tf(p,a) : a in A}. The path with shortest Tf-distance between p and q is a path of steepest slope.
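Collecting these definitions, the continuous watershed can be stated compactly. The following display is our addition, following the standard topographical-distance formulation from the watershed literature (it is not spelled out in the original text); m_i denotes the regional minima of f:

\[ T_f(p,q) \;=\; \inf_{\gamma:\ \gamma(0)=p,\ \gamma(1)=q} \int \lVert \nabla f(\gamma(s)) \rVert \, ds \]
\[ CB(m_i) \;=\; \{\, x \in D \;:\; f(m_i) + T_f(x,m_i) < f(m_j) + T_f(x,m_j) \ \text{for all } j \neq i \,\} \]
\[ \mathrm{Wshed}(f) \;=\; D \setminus \bigcup_i CB(m_i) \]

That is, the catchment basin of a minimum m_i collects the points that are topographically closer to m_i than to any other minimum, and the watershed is the set of points belonging to no basin.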

3.7.2. Discrete case
A problem which arises for digital images is the occurrence of plateaus, i.e., regions of constant grey value, which may extend over large image areas. Such plateaus form a difficulty when trying to extend the continuous watershed definition based on topographical distances to discrete images. This non-local effect is also a major obstacle for the parallel implementation of watershed algorithms. In this case the algorithmic definition automatically takes care of plateaus, because it computes the watershed transform level by level, where each level constitutes a binary image for which a SKIZ is computed.

4. PARTICLE SIZE MEASUREMENT AND DISTRIBUTION

4.1. Particle Shape



Even though we usually assume particles to be spherical in most of our calculations, this is not necessarily true and may contribute error to our experimentation and analyses. The precise shape of the particles used (coal, catalyst, resin, paint pigment, drug powder, etc.) and their dispersion make most particle size analysis a difficult endeavour. Since the only measurement we can easily use to describe a particle of any shape that has increased or decreased during processing is the equivalent sphere concept, we easily fall into the trap of assuming that all particles are spherical in nature.

Figure 3: Different equivalent spheres from one single particle.

A sphere is the only shape that can be described by one number, its diameter D. If we have a particle of any other shape, we can easily convert the volume or the weight of the particle into the volume and weight of an equivalent sphere:

Volume = (4/3) pi (D/2)^3
Weight = (4/3) pi (D/2)^3 rho

where D is the diameter of the equivalent sphere and rho is the particle density. This is called the equivalent sphere concept. It ensures that we do not need to describe the actual shape, which can be quite messy. But all particles with the same equivalent sphere may have very different shapes when we look under the microscope. Thus, we need to define what dimension of the particle we use in calculating the equivalent sphere.
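As a minimal worked illustration of the equivalent sphere concept (our own sketch, not code from the original study), the diameter of the sphere with the same volume as an arbitrarily shaped particle follows by inverting the volume formula above:

    import math

    def equivalent_sphere_diameter(volume_mm3):
        # Invert V = (4/3) * pi * (D/2)^3 to recover the diameter D of the
        # sphere with the same volume as the particle.
        return 2.0 * (3.0 * volume_mm3 / (4.0 * math.pi)) ** (1.0 / 3.0)

    # Example: a plate-like fragment of 2 x 20 x 30 mm has the same volume
    # (1200 mm^3) as a sphere of about 13.2 mm diameter, although a square
    # 13 mm sieve mesh would treat the two very differently.
    print(equivalent_sphere_diameter(2 * 20 * 30))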
4.2. Particle Size Distribution (PSD) and its analysis
The size of particles in a sample can be measured by visual estimation or by the use of a set of sieves. Particle sizes can also be measured individually by optical or electron microscope analysis. The basic principle of the sieving technique is as follows. A representative sample of known weight of particles is passed through a set of sieves of known mesh sizes. The sieves are arranged in order of downward decreasing mesh diameters and are mechanically vibrated for a fixed period of time. The weight of particles retained on each sieve is measured and converted into a percentage of the total sample. This method is quick and sufficiently accurate for most purposes. Essentially it measures the maximum diameter of each particle.

Figure 4: Mechanical size measurement (sieving using a particular mesh size).

Both graphic and statistical methods of data presentation have been developed for the interpretation of sieve data. The percentage of the sample in each class can be shown graphically in a bar chart or histogram. Another method of graphic display is the cumulative curve or cumulative arithmetic curve. Cumulative curves are extremely useful because many sample curves can be plotted on the same graph and differences in sorting are at once apparent. Significant percentages of coarse and fine end members show up as horizontal limbs at the ends of the curve.

Sorting can be expressed by various statistical methods. The simplest of these is the measurement of central tendency, of which there are three commonly used parameters: the median, the mode, and the mean. The median grain size is that which separates 50% of the sample from the other half; the median is the 50th percentile. The mode is the largest class interval. The mean is variously defined, but a common formula is the average of the 25th and 75th percentiles. A second aspect of sieve analysis is sorting, the measure of the degree of scatter. Sorting is the tendency for all the particles to belong to one class of grain size. Several formulae have been used to define this parameter for a sample of particles.

In summary, the main statistical measurements for sieved samples consist of a measure of central tendency (including median, mode, and mean); a measure of the degree of scatter or sorting; kurtosis, the degree of peakedness; and skewness, the lop-sidedness of the curve.

4.3. Mathematical Interpretation
The maximum of useful information is revealed when particle size data can be represented closely by a mathematical expression; a mathematical function also allows ready graphical representation and offers maximum opportunities for interpolation, extrapolation, and comparison among particle systems. Furthermore, still more useful information can be revealed if the parameters of the function can be related to properties of the particle system or the process that produced it. This would allow for closer process and product control and tighter product specifications and quality assurance.

Various two-parameter mathematical models and expressions have been developed, ranging from the well-established normal and log-normal distributions to the Kuz-Ram, Rosin-Rammler (Rosin and Rammler, 1933 [18]) and Gates-Gaudin-Schumann (Gates, 1915; Schumann, 1940) models. The application of the Gates-Gaudin-Schumann model, however, has been limited due to its greater mathematical complexity.

We shall focus mainly on the Rosin-Rammler distribution function described by Djamarani and Clark (1997). It has long been used to describe the particle size distribution of powders of various types and sizes, and it is particularly suited to representing particles generated by grinding, milling and crushing operations. The Rosin-Rammler function is represented by two parameters, the mean particle size (Dm) and the n value (width of distribution), together with a goodness-of-fit factor:

R = 100 exp[-(D/Dm)^n]

where R is the retained weight fraction (%), D is the particle size (mm), Dm is the mean particle size (mm), and n is a measure of the spread of particle sizes. The applicability of the Rosin-Rammler distribution function can be determined by curve fitting the actual sieve size data of particles of a sample. A least squares regression analysis can be carried out to fit the data points, and the correlation coefficient can be used to estimate the goodness of the fit (a code sketch of such a fit follows Table 1).

The value of n determines the shape of the Rosin-Rammler curve. High values indicate uniform sizing; low values, on the other hand, suggest a wide range of sizes, including both oversize and fines.

Table 1: The effect of different blasting parameters on n

    Serial No.   Parameter                      n increases as the parameter
    1            Burden/hole diameter           Decreases
    2            Drilling accuracy              Increases
    3            Charge length/bench height     Increases
    4            Spacing/burden                 Increases
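The least squares fit mentioned above can be sketched as follows, assuming Python with scipy; the sieve data shown are hypothetical placeholders (consistent with n = 1 and Dm = 30 mm), not measurements from this study:

    import numpy as np
    from scipy.optimize import curve_fit

    def rosin_rammler(D, Dm, n):
        # Retained weight fraction (%) for particle size D (mm),
        # mean size Dm (mm) and uniformity index n.
        return 100.0 * np.exp(-(D / Dm) ** n)

    # Placeholder sieve data: size (mm) vs. retained weight (%).
    sizes = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0, 75.0, 100.0])
    retained = np.array([85.0, 72.0, 51.0, 37.0, 26.0, 19.0, 8.0, 3.5])

    (Dm, n), _ = curve_fit(rosin_rammler, sizes, retained, p0=(30.0, 1.0))
    residuals = retained - rosin_rammler(sizes, Dm, n)
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((retained - retained.mean()) ** 2)
    print(f"Dm = {Dm:.1f} mm, n = {n:.2f}, R^2 = {r2:.3f}")

The coefficient of determination printed at the end plays the role of the goodness-of-fit factor discussed above.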



It is normally desired to have uniform fragmentation, so high values of n are preferred. Experience by Cunningham (1987) has suggested that:
- The normal range of n for blasting fragmentation in reasonably competent ground is from 0.75 to 1.5, with the average being around 1.0. More competent rocks have higher values.
- Values of n below 0.75 represent a situation of dust and boulders which, if it occurs on a wide scale in practice, indicates that the rock conditions are not conducive to control of fragmentation through changes in blasting. Typically this arises when stripping overburden in weathered ground.
- For values below 1, variations in the uniformity index (n) are more critical to oversize and fines. For n = 1.5 and higher, muckpile texture does not change much, and errors in judgement are less punitive.

The rock at a given site will tend to break into a particular shape. These shapes may be loosely termed cubes, plates or shards. The shape factor has an important influence on the results of sieving tests, as the mesh used is generally square and will retain the majority of fragments having any dimension greater than the mesh size.

The following points need to be borne in mind:
- Initiation and timing must be arranged so as to reasonably enhance fragmentation and avoid misfires or cut-offs.
- The explosive should yield energy close to its calculated Relative Weight Strength.
- The joint parameters and homogeneity of the ground require careful assessment. Fragmentation is often built into the rock structure, especially when loose joints are more closely spaced than the drilling pattern.

5. PROPOSED ALGORITHM

Figure 5: Flow chart of the proposed algorithm.

5.1. Fragment Size of Rocks
Fragment size distribution depends not only on the blast design but also on the rock mass strength as well as its discontinuous nature prior to blasting, the latter being a function of the natural discontinuity frequency and orientation. Post-blast fragmentation research has been largely directed towards developing mathematical models or equations to forecast fragmentation. Existing empirical relations from the field of mineral processing are commonly known as the laws of comminution. A critical element in fragmentation system optimisation is the development of practical methods for determining the degree of fragmentation. By degree of fragmentation one generally means specifying the average particle size and the distribution of the particles around that mean. Both direct and indirect methods are available for determining the fragmentation. The direct methods include screen analyses, counting boulders, and measuring the pieces directly. The indirect methods include crusher monitoring and the monitoring of secondary breaking/blasting costs. Most recently, image processing and analysis techniques have gained popularity for their accurate measurement. Other parameters, such as the degree of obtainable accuracy with respect to the size range and camera resolution, the effect of digital filters on fine fragment resolution, and the characterisation of errors, also need to be considered separately.

5.2. Image Acquisition
There are many ways in which images can be acquired in the field and scaled. For instance, when acquiring images of muck piles, the angle of the slope relative to the axis of the camera needs to be considered. There are many ways to ensure that muck pile images are scaled correctly; we do not discuss them here. The system also needs to be designed to allow scaling for all of the different image acquisition methods, so that the camera calibration factor can be tested easily.

5.3. Convert to Gray scale image
Before the further implementation we converted the fragmented rock images to 256x256 grey level rock images. All the methods have been implemented on these grey level rock images.

5.4. Image Enhancement (filtering) and Thresholding
After conversion to grey level rock images, it is necessary to enhance (filter) the images and to threshold them using adaptive thresholding. The details regarding these steps have been explained in [2].
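As an illustration only (the actual filters of [2] may differ), the enhancement and adaptive thresholding steps could look as follows in Python with scipy and scikit-image; the file name is hypothetical:

    from scipy import ndimage
    from skimage import filters, io, util

    # Load one laboratory fragment image as a floating-point grey level image.
    gray = util.img_as_float(io.imread('fragments.png', as_gray=True))

    # Enhancement: median filtering suppresses impulsive noise while
    # preserving the fragment edges better than linear smoothing would.
    denoised = ndimage.median_filter(gray, size=3)

    # Adaptive (local) thresholding: the threshold surface follows the local
    # mean of a 35x35 neighbourhood, compensating for uneven illumination.
    local_thresh = filters.threshold_local(denoised, block_size=35, offset=0.01)
    binary = denoised > local_thresh  # assumes fragments brighter than background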

5.5. Convert to binary image
In our scheme the watershed segmentation is implemented on the binary rock images, not directly on the grey level rock images. So it is necessary to convert the grey level rock images to binary ones after enhancement and thresholding.

5.6. Different Approaches to Watershed Segmentation
In the watershed-based segmentation we have used the following three methods and compared the algorithms.

5.6.1. Watershed segmentation using Distance transform
A tool commonly used in conjunction with the watershed transform for segmentation is the distance transform, which has been explained in [16]. The distance transform of a binary image is a relatively simple concept: it is the distance from every pixel to the nearest nonzero valued pixel. Figure 6 illustrates the distance transform. Figure 6(a) shows a small binary image matrix, and Figure 6(b) shows the corresponding distance transform.

Figure 6: (a) Small binary image, (b) Distance transform.
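A small numerical check in the spirit of Figure 6, using scipy (our sketch; note that scipy measures the distance to the nearest zero pixel, so the complement of the image is passed to match the definition used here):

    import numpy as np
    from scipy import ndimage

    a = np.array([[0, 0, 0, 0, 0],
                  [0, 0, 1, 0, 0],
                  [0, 0, 0, 0, 0],
                  [0, 0, 0, 1, 0],
                  [0, 0, 0, 0, 0]])

    # Distance from every pixel to the nearest 1-valued pixel, as in the
    # definition above: complement the image, then take the Euclidean
    # distance to the nearest zero of the complemented array.
    dist = ndimage.distance_transform_edt(1 - a)
    print(np.round(dist, 2))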



(a) First, threshold the grey level fragment rock image and convert the thresholded image to a binary one.
(b) Complement the binary fragment rock image.
(c) Compute the watershed transform of the negative of the distance transform.
(d) Finally, take a logical AND of the original binary image and the complement of the processed watershed image to complete the segmentation process.

In this method oversegmentation is the common problem, and in the next two methods we have tried to overcome this difficulty.

Figure 7: Watershed segmentation by distance transform; (a) the original graylevel fragmented rock sample image, (b) the thresholded binary image, (c) the complement of (b), (d) the distance transform image, (e) watershed transform image (from top left to bottom right).
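Steps (a) to (d) can be sketched in Python with scikit-image and scipy (an illustrative reconstruction under our own function choices; the paper does not list its implementation):

    import numpy as np
    from scipy import ndimage
    from skimage import filters, io, util
    from skimage.segmentation import watershed

    gray = util.img_as_float(io.imread('fragments.png', as_gray=True))  # hypothetical file

    # (a) Threshold the grey level image to a binary fragment mask.
    binary = gray > filters.threshold_otsu(gray)

    # (b)-(c) Distance of every fragment pixel to the nearest background
    # pixel, then the watershed of the negated distance map: each fragment
    # interior becomes a catchment basin around a distance maximum.
    distance = ndimage.distance_transform_edt(binary)
    labels = watershed(-distance, mask=binary)  # unseeded, so oversegmentation is likely

    # (d) Restrict the labelling to the original foreground, the analogue
    # of the logical AND in step (d).
    segmented = np.where(binary, labels, 0)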
5.6.2. Watershed segmentation using Gradients
The gradient magnitude is often used to preprocess a grey scale image prior to using the watershed transform for segmentation. The gradient magnitude image has high pixel values along object edges, and low pixel values everywhere else. Ideally, then, the watershed transform would result in watershed ridge lines along object edges.
(a) Compute the gradient magnitude using either linear filtering methods (i.e., gradient operators) or a morphological gradient (i.e., dilation, erosion, opening, closing, etc.) of the preprocessed image.
(b) To avoid oversegmentation, use a close-opening operation to smooth the gradient image before computing its watershed transform.

In this process some extraneous ridge lines remain in the processed image, and so it is difficult to determine which catchment basins are actually associated with the objects of interest.

Figure 8: Watershed segmentation by gradients; (a) the thresholded binary image, (b) internal marker, (c) extended minima image, (d) watershed transform image (from left to right).
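A sketch of this gradient-based variant, again assuming scikit-image, with the morphological gradient and the close-opening smoothing written out explicitly:

    from skimage import io, util
    from skimage.morphology import closing, dilation, disk, erosion, opening
    from skimage.segmentation import watershed

    gray = util.img_as_float(io.imread('fragments.png', as_gray=True))  # hypothetical file

    # (a) Morphological gradient: dilation minus erosion highlights edges.
    se = disk(2)
    gradient = dilation(gray, se) - erosion(gray, se)

    # (b) Close-opening smoothing of the gradient removes many spurious
    # minima (and hence some oversegmentation) before the watershed runs.
    smoothed = opening(closing(gradient, se), se)

    labels = watershed(smoothed)  # unseeded: some extraneous ridge lines remain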
5.6.3. Marker-controlled watershed segmentation
Direct application of the watershed transform to a gradient image usually leads to oversegmentation due to noise and other local irregularities of the gradient. An approach used to control oversegmentation is based on the concept of markers. A marker is a connected component belonging to an image. We would like to have a set of internal markers, which are inside each of the objects of interest, as well as a set of external markers, which are contained within the background. These markers are then used to modify the gradient image. Various methods have been used for computing the internal and external markers, many of which involve linear filtering, nonlinear filtering and morphological processing. Here we have carried out the following steps:
(a) First compute the watershed transform of the gradient image, without any other processing.
(b) Compute the location of all regional minima of the fragment rock image.
(c) Find the set of internal markers and superimpose them on the original grey level image.
(d) Find the external markers, i.e., pixels that we are confident belong to the background. The approach we follow here is to mark the background by finding pixels that lie exactly midway between the internal markers.
(e) Given both internal and external markers, use them to modify the gradient image using a procedure called minima imposition. The minima imposition technique modifies the grey scale image so that regional minima occur only in the marked locations; other pixel values are pushed up as necessary to remove all other regional minima.
(f) Compute the watershed transform of the marker-modified gradient image.

The point is that using markers brings a priori knowledge to bear on the segmentation problem. Humans often aid segmentation and higher level tasks in everyday vision by using a priori knowledge. Thus, the fact that segmentation by watersheds offers a framework that can make effective use of a priori knowledge is a significant advantage of the method.

Figure 9: Marker-controlled watershed segmentation; (a) the thresholded binary image, (b) imposed minima image, (c) internal marker, (d) extended minima image, (e) regional minima of gradient magnitude, (f) image by distance transform, (g) modified gradient image, (h) final segmentation result (from top left to bottom right).
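Steps (a) to (f) can be approximated as follows (a sketch, not the paper's implementation: scikit-image's watershed accepts a marker image directly, which plays the role of the minima imposition of step (e); internal markers are taken from distance-transform peaks and external markers from the ridge lines of a distance-map watershed, which lie midway between fragments):

    import numpy as np
    from scipy import ndimage
    from skimage import filters, io, util
    from skimage.feature import peak_local_max
    from skimage.morphology import dilation, disk, erosion
    from skimage.segmentation import watershed

    gray = util.img_as_float(io.imread('fragments.png', as_gray=True))  # hypothetical file
    binary = gray > filters.threshold_otsu(gray)

    # (a)-(b) Morphological gradient of the image.
    se = disk(2)
    gradient = dilation(gray, se) - erosion(gray, se)

    # (c) Internal markers: one distance-transform peak per fragment.
    distance = ndimage.distance_transform_edt(binary)
    coords = peak_local_max(distance, min_distance=5, labels=binary)
    internal = np.zeros(gray.shape, dtype=int)
    internal[tuple(coords.T)] = np.arange(1, len(coords) + 1)

    # (d) External markers: ridge lines of a distance-map watershed lie
    # roughly midway between the internal markers and mark the background.
    ridge = watershed(-distance, internal, watershed_line=True)
    markers = internal.copy()
    markers[ridge == 0] = len(coords) + 1  # one shared background label

    # (e)-(f) Handing the markers to the watershed plays the role of minima
    # imposition: catchment basins grow only from the marked locations.
    labels = watershed(gradient, markers)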



5.7. Particle Size Distribution (PSD)
Calculating n and Dm in the Rosin-Rammler equation requires the values of several blast parameters, i.e., burden, hole diameter, spacing, charge length, drilling pattern, bench height, etc. Here the whole experimentation has been done at laboratory scale on samples that were already broken or fragmented, so it is difficult to obtain all the data needed for calculating n and Dm. For our experimentation we therefore assume n = 1.0 and Dm = 30.0, with all size parameters in mm (millimetres). This way, the Rosin-Rammler equation for our experiments is established as

R = 100 exp(-D/30)

To calculate the values of R (% fraction retained) we have used different sizes (D) of fragmented rock samples. In this case the sizes selected are 5 mm, 10 mm, 20 mm, 30 mm, 40 mm, 50 mm, 75 mm and 100 mm. We have also calculated the shape of the fragmented rock samples using digital image processing and compared the results in the size distribution curves.
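A worked check of this equation at the selected sizes (note that D = Dm = 30 mm gives R = 36.8%, the defining property of the mean size):

    import math

    Dm, n = 30.0, 1.0
    for D in [5, 10, 20, 30, 40, 50, 75, 100]:
        R = 100.0 * math.exp(-(D / Dm) ** n)
        print(f"D = {D:5.1f} mm  ->  R = {R:5.1f} % retained")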
out the frequency spectrum of the particular image for
5.8. Final Output

Figure 10: Cumulative size distribution of the fragmented rocks in the image.

In Figure 10 we have plotted the cumulative size distribution of the fragmented rocks in the image. The blue curve shows the size distribution obtained by the sieving method and the red curve that obtained by the image processing method. We have also plotted the orientation of the fragmented rocks in the image against the area of those rocks.

Figure 11: Fragmented rock size distribution of the image.
5.8.1. Determine size distribution of the fragmented rocks in the image
We use granulometry to determine the size distribution of fragmented rocks in the image. Granulometry likens image objects to rocks whose sizes can be determined by sifting them through screens of increasing size and collecting what remains after each pass. Image objects are sifted by opening the image with a structuring element of increasing size and counting the remaining object surface area after each opening. We accomplish these actions with a FOR loop that maintains the following items (see the code sketch at the end of this subsection):
- a counter;
- the image returned by the opening function with the current structuring element;
- the remaining object surface area after each opening, obtained with a counting function.

5.8.2. Calculate first derivative
As we have seen, the surface area of the remaining objects decreases as the size of the opening increases. A significant drop in surface area between two consecutive openings indicates that the image contains objects of a size comparable to the smaller opening. This is equivalent to the first derivative of the surface area array, which contains the size distribution of the fragmented rocks in the image.
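The granulometry loop of Section 5.8.1 and the derivative of Section 5.8.2 can be sketched together as follows (our illustration with scikit-image; the structuring-element radii are arbitrary choices):

    import numpy as np
    from skimage import io, util
    from skimage.morphology import disk, opening

    gray = util.img_as_float(io.imread('fragments.png', as_gray=True))  # hypothetical file

    max_radius = 30
    surface_area = np.zeros(max_radius + 1)
    surface_area[0] = gray.sum()

    # Sift the image through openings of increasing size; the total
    # remaining intensity ("surface area") drops as objects smaller than
    # the structuring element are removed.
    for r in range(1, max_radius + 1):
        surface_area[r] = opening(gray, disk(r)).sum()

    # 5.8.2: the (negative) first derivative of the surface-area array is
    # the granulometric size distribution: a large drop between radii r-1
    # and r means many objects of a size comparable to disk(r).
    size_distribution = -np.diff(surface_area)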



6. DISCUSSIONS AND CONCLUSIONS

Image segmentation is one of the challenging fields in digital image processing, and the image pre-processing that precedes segmentation is the most important task in this area. We acquired the fragmented rock sample images in the laboratory, so it is necessary to eliminate the noise and enhance the images. Choosing the right filter for a particular noise is the most important step, since each kind of noise has a different frequency spectrum; we therefore first determine the frequency spectrum of each image to identify the noise pattern before enhancing the image.

Early attempts revealed that the rock segmentation problem is complex and hence motivated the search for a new practical approach. Objects of similar shape and varying size that are in contact with each other and randomly distributed cannot be easily separated with definite boundaries. The watershed transform embedded in our computing scheme could be a viable candidate to solve this kind of problem. We have applied each of the three watershed segmentation methods to the fragmented rock sample images and noted their respective drawbacks. On this basis, the last method, i.e., the marker-controlled watershed transform, is the best one.

Fragmented rocks travel on conveyor belts at very high speeds (of the order of m/s), and motion artifacts have to be avoided. The WipFrag fragmentation sizing system, developed by Maerz and Franklin at the University of Waterloo, has been in widespread use for several years. WipFrag is the de facto automated standard for estimating rock fragment sizes using digital image processing and state-of-the-art photo analysis; editing of the images avoids fusion and disintegration errors. For an online sieving monitoring system, it will be better to take the digital images at very high shutter speed at predefined time intervals rather than processing in real time. The watershed segmentation method for image analysis may then be speeded up to generate the desired results within a fraction of a second.

Further field implementation is very much necessary for an industrial version. The following parameters would then be included:
- Illumination and brightness effects from different light sources
- Huge muck piles of mixed rock fragment sizes
- Bench preparation
- Pattern layout
- Blast hole drilling
- Blast hole loading procedures
- Post blast data collection
- Poor rock fragmentation
- Large amounts of oversize
- High ground vibration levels
- High air blast levels
- High downstream processing.

7. ACKNOWLEDGEMENTS

The authors are highly grateful for the funding by ISIRD, Indian Institute of Technology Kharagpur, India.

8. REFERENCES

[1] Gonzalez, R C and Woods, R E, 2002. Digital Image Processing, 2nd edition. Prentice-Hall, Inc., New Jersey.
[2] Khatua, S K and Chakravarty, D, 2005. Rock image segmentation by adaptive thresholding, in Proceedings, Technological Advancements and Environmental Challenges in Mining and Allied Industries in the 21st Century (TECMAC-2005), pp. 265-274. NIT Rourkela, India.
[3] Salinas, R A, Raff, U and Farfan, C, 2005. Automated estimation of rock fragment distributions using computer vision and its application in mining, IEE Proceedings - Vision, Image and Signal Processing, Vol. 152, No. 1.
[4] Franklin, J A, Kemeny, J M and Girdner, K K, 1996. Evolution of measuring systems: a review, in Proceedings of the FRAGBLAST 5 Workshop on Measurement of Blast Fragmentation, pp. 47-52. Montreal, Quebec, Canada.
[5] Maerz, N H, Franklin, J A, Rothenburg, L and Coursen, D L, 1987. Measurement of rock fragmentation by digital photoanalysis, in Proceedings, 6th ISRM International Congress on Rock Mechanics, Vol. 1, pp. 687-692. Montreal, Canada.
[6] Maerz, N H, Franklin, J A and Coursen, D L, 1987. Fragmentation measurement for experimental blasting in Virginia, in Proceedings, 3rd S.E.E. Mini-Symposium on Explosives and Blasting Research, pp. 56-70. Miami.
[7] Maerz, N H, Palangio, T C and Franklin, J A, 1996. WipFrag image based granulometry system, in Proceedings of the FRAGBLAST 5 Workshop on Measurement of Blast Fragmentation, pp. 91-99. Montreal, Quebec, Canada.
[8] ASTM, 1995. Standard test method for flat particles, elongated particles, or flat and elongated particles in coarse aggregate, ASTM D4791-95.
[9] Mkwelo, S, De Jager, G and Nicolls, F. Watershed-based segmentation of rock scenes and proximity-based classification of watershed regions under uncontrolled lighting conditions, Department of Electrical Engineering, University of Cape Town, Rondebosch 7700.
[10] Maerz, N H, 1996. Image sampling techniques and requirements for automated image analysis of rock fragmentation, in Proceedings of the FRAGBLAST 5 Workshop on Measurement of Blast Fragmentation, pp. 115-120. Montreal, Quebec, Canada.
[11] Maerz, N H, 1996. Reconstructing 3-D block size distributions from 2-D measurements on sections, in Proceedings of the FRAGBLAST 5 Workshop on Measurement of Blast Fragmentation, pp. 39-43. Montreal, Quebec, Canada.
[12] Maerz, N H and Palangio, T C, 2000. Online fragmentation analysis for grinding and crushing control, in Proceedings, Control 2000 Symposium, SME Annual Meeting, pp. 109-116. Salt Lake City, Utah, SME.
[13] Maerz, N H, 2001. Automated online optical sizing analysis, presented at SAG 2001, pp. 250-269. Rock Mechanics and Explosives Research Center, University of Missouri-Rolla, MO, USA.
[14] Maerz, N H and Zhou, W, 1999. Calibration of optical digital fragmentation measuring systems, in Fragblast 6, Sixth International Symposium for Rock Fragmentation by Blasting, pp. 125-130. Johannesburg, South Africa.
[15] Chakravarty, D, 2001. Image and texture analysis of rocks and their classification using artificial intelligence techniques, PhD dissertation (unpublished), Indian Institute of Technology Kharagpur, India.
[16] Khatua, S K and Chakravarty, D. A generalized algorithm for different types of distance transformations in graylevel rock images, IEEE Transactions on Image Processing (communicated).
[17] Khatua, S K and Chakravarty, D. Study of rock image segmentation and edge detection by adaptive thresholding and Canny edge detector algorithm, Signal Processing: Image Communication, Elsevier (communicated).
[18] Rosin, P and Rammler, E, 1933. The laws governing the fineness of powdered coal, Journal of the Institute of Fuel, Vol. 7, pp. 29-36.
