

Real-Time Markerless Square-ROI Recognition based on Contour-Corner for Breast Augmentation


L. Rechard and A. Bade
Abstract: Augmented reality (AR) can be described as a technology consisting of computer vision and image processing methods used to enhance user vision and perception. Marker detection and recognition are the two most important steps in the field of augmented reality (AR) for successfully placing computer-generated objects into the real world. This paper aims to present a real-time markerless identification and verification technique that combines contour and corner approaches to detect and extract strong interest points from the environment, as the fundamental component in registering virtual imagery with its real object without the need for a conventional printed marker. To enhance and combine current contour and corner detection approaches, we apply smoothing and adaptive thresholding to the input stream and then use subpixel corner detection to obtain better and more accurate interest points. Our method uses a markerless approach to avoid the need to prepare the target environment and to make the approach more flexible. The proposed method starts by capturing input from the real environment through a camera acting as a visual sensor. On receiving an input image, the proposed technique processes the image, detects strong interest points within the ROI and recognizes a marker by applying enhanced contour-corner detection.

Index Terms: Augmented reality, contour detection, corner detection, interest point

1 INTRODUCTION

Augmented reality (AR) can be described as a technology developed using computer vision techniques and image processing methods in order to enhance user vision and perception. Milgram et al. [1] defined AR as a research field in which the user's vision is enhanced by combining the real world scene and computer-generated objects in the same real environment space, called mixed reality (see Fig. 1). Azuma (1997) [2], on the other hand, defines an AR system as one that has the following characteristics: it combines real objects with virtual objects, is interactive in real time, and is registered in three dimensions.
Fig. 1 Milgram's Reality-Virtuality Continuum [1].

Later, in 2002, Mann [3] came up with a two-dimensional reality-virtuality-mediality continuum in an effort to add another axis to Milgram's virtuality-reality continuum, introducing mediated reality and mediated virtuality (see Fig. 2). In [3], a system can change reality in three ways: add something (augmented reality), remove something (diminished reality), or alter reality in some other way (modulated reality).

Fig. 2 Mann's reality-virtuality-mediality continuum [3].

Various studies have been conducted to explore the fundamental concepts and unique potential of augmented reality technology in mainstream applications such as medicine, visualization, maintenance, path planning, entertainment and education, industry, military and aircraft navigation. Yet most of these are marker-based applications, owing to their ease of development and higher rate of accuracy and success, despite their lack of flexibility in unprepared environments. In the field of medicine, the use of computer-generated images such as CT scans or MRI as a vision aid to the surgeon in surgical planning has increased dramatically over the years. With the use of an AR system, we believe that the surgeon's vision will be enhanced, more data can be obtained and proper surgical planning can be ensured, thus avoiding unnecessary cutting in the real surgical operation.

Computer vision (CV), on the other hand, is a field strongly related to the development and success of AR applications. According to Marr and Nishihara (1978) [4], vision is a process that produces, from images of the external world, a description that is useful to the viewer and not cluttered with irrelevant information. In AR applications, CV works hand in hand with scene understanding and analysis. Sub-areas of CV can be seen in motion detection, face detection, object tracking and recognition, etc. The process of correctly combining the real scene with virtual objects and accurately registering the mixed scene depends greatly on the potential of CV to extract and detect features, either naturally from the desired region of interest (ROI) or based on a fiducial marker presented to the camera. Idris et al. (2009) [5] considered the use of CV techniques as a starting point in detecting a fiducial or natural marker in order to solve the registration and tracking issue in AR applications. Most AR systems operate based on a printed square marker (see Fig. 3) or customized pattern which is located physically on top of the target area or workspace.

Fig. 3 Example of printed square markers.

Therefore, in this paper, we propose and describe a real-time markerless identification technique designed to capture the real scene, detect strong interest points from extracted contours, verify a marker and overlay a 2D object. Extracting and detecting features from the real scene is our main concern in recognizing a marker. Our proposed technique uses a combination of contour and corner approaches to find unique natural features in frames captured through a webcam in real time. Generally, feature points [6], edges [7], curves [8], planes and so on are used for registration. Even though extraction of such natural features is more difficult than the artificial printed-marker-based approach, the user's movement range is not limited, real scene augmentation is ensured and no preparation is needed to set up a marker; it is therefore preferable to use this approach over the printed marker. The remainder of this paper discusses the basic setup of an AR system, marker and markerless detection, and the contour-corner approach; results for the presented method are also given and illustrated.


2 RELATED WORKS
2.1 Simple AR system setup
The basic setup of an AR system consists of a mobile computing unit, such as a PDA or mobile phone, and a head-mounted display (HMD) to which a miniaturized camera is attached. Two types of HMD are generally used, namely optical see-through (see Fig. 4) and video see-through (see Fig. 5).

Fig. 4 Optical see-through augmented reality display [9].

Fig. 5 Video see-through augmented reality display [9].

The miniaturized camera therefore captures the real environment from the user's perspective. The AR system processes these live images in real time, extracting features to identify markers. Markers are identified using point and edge detectors. The features can be artificial printed markers, natural markers or natural features.

2.2 Vision-based AR
Researchers in computer vision have proposed and developed several tracking and detection methods that can be applied to AR registration. These methods can be divided into three types based on the equipment used: sensor-based, visual-based and hybrid-based. Since a camera is already part of the AR system, visual-based methods are used together with the vision-based registration technique. Approaches to visual-based tracking are marker-based (see Fig. 6) and markerless-based methods. The use of marker-based methods has been discussed in [10], [11], [12], [13], whereas [14], [15], [16] have established markerless-based methods for unprepared environments. Most researchers opt for the marker approach, since it is more accurate and reduces computational resources [17]. However, for outdoor or unprepared environments, the markerless approach is more suitable, despite being computationally time-consuming [18], [19].


Fig. 6 Marker-based ARToolkit algorithm [20]

Fig. 7 Shi-Tomasi detector: classification of feature points using eigenvalues [24]

In order to detect interesting features from the environment, [21], [22], [23] have proposed and developed corner detection techniques for visual tracking.

2.3 Corner detection
Corner or interest point detection is a processing stage in computer vision. Literally, a corner point is a kind of visual corner where two edges intersect. A good corner detector must be able to distinguish between true and false corners, reliably detect corners in noisy images, accurately determine the locations of corners and be usable in real-time applications (Shi and Tomasi, 1994). Our method is based on the Shi-Tomasi detector. This detector is inspired by the Harris detector (Harris and Stephens, 1988) and uses the smallest eigenvalue. The difference lies in the cornerness measure, or selection criterion, which makes this detector better than the original. The cornerness measure for Harris is calculated as given by Harris and Stephens (1988) in (1):

R = det(M) - k (trace(M))^2 = λ1 λ2 - k (λ1 + λ2)^2    (1)

where λ1 and λ2 are the eigenvalues of the second-moment matrix M and k is an empirical constant.

Fig. 8 Harris detector [24]

As depicted in Fig. 7, if both eigenvalues are large, then the intensity varies significantly in both directions and the point is considered a good feature (corner-like). Fig. 9 shows a general flow chart of the traditional corner detection process.

There are two ways to define a corner:
- Large response: locations x where R(x) is greater than a certain threshold.
- Local maximum: locations x where R(x) is greater than that of their neighbours.
For the Shi-Tomasi detector, the cornerness measure is calculated as in (2):

R = min(λ1, λ2)    (2)

If R(x) is greater than a certain predefined value, the location can be marked as a good feature, or corner.
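To make the two measures concrete, a minimal sketch follows, assuming OpenCV in Python; the window sizes, the constant k and the response thresholds are illustrative assumptions rather than values from this work.

import cv2
import numpy as np

# Load a sample frame in grey-scale; "frame.png" is an assumed file name.
gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
gray_f = np.float32(gray)

# Harris response, eq. (1): R = det(M) - k * trace(M)^2.
harris = cv2.cornerHarris(gray_f, blockSize=2, ksize=3, k=0.04)

# Shi-Tomasi response, eq. (2): R = min(lambda1, lambda2).
shi_tomasi = cv2.cornerMinEigenVal(gray_f, blockSize=3, ksize=3)

# "Large response" rule: keep locations whose R(x) exceeds a threshold,
# here an assumed fraction of the maximum response in the frame.
harris_mask = harris > 0.01 * harris.max()
shi_tomasi_mask = shi_tomasi > 0.01 * shi_tomasi.max()
print(harris_mask.sum(), "Harris corners,", shi_tomasi_mask.sum(), "Shi-Tomasi corners")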

Fig. 9 Corner detection flow chart [25]

2.4 Contour detection
The reason to perform contour detection, in general, is to significantly reduce the amount of data in an image by detecting edges in a robust manner. Edge detection defines the boundaries between detected regions in an image, information which is needed for higher-level image processing. Ahmad and Choi (1999) [26] and Ziou and Tabbone (1998) [27] have discussed different types of edge detection methods. Our method is based on the algorithm developed by Canny (1986) [28], since it produces smooth and continuous edges, despite being sensitive to noise pixels and more expensive in computation time [29]. As depicted in Fig. 10, the algorithm consists of several stages, as stated in [28].

Fig. 10 Traditional Canny edge detection algorithm [30]
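As an illustration of this stage, the sketch below assumes the OpenCV implementations of Canny edge detection and contour extraction in Python; the file name and the hysteresis thresholds are assumptions.

import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# cv2.Canny bundles the stages of Fig. 10 (smoothing, gradient computation,
# non-maximum suppression, hysteresis); the two values are the hysteresis
# thresholds.
edges = cv2.Canny(gray, 50, 150)

# Edge boundaries become contours for the later recognition steps
# (OpenCV 4.x return signature assumed).
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(len(contours), "contours found")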


3 APPROACH TO MARKER-LESS SQUARE-ROI RECOGNITION

As demonstrated in Fig. 6, a marker-based detection algorithm needs to go through stages (d) and (e), respectively, in order to implant an object on the detected marker. These two steps are the main inspiration for our proposed method. By combining the contour-corner approach, we believe we can replace the need for a physical marker in the desired environment. Instead of using a physical marker, we define a region of interest (ROI) as a markerless marker, called the Square-ROI. This Square-ROI is hand-drawn manually by the user (see Fig. 13). We chose a square because a square naturally produces four points, and these four points are needed to calculate the camera pose estimation for visualization rendering purposes. Another reason is that the orientation of the points can be estimated from the intersections of edge lines. To enhance and combine the current contour and corner detection approaches, we apply smoothing and adaptive thresholding to the input stream and then use subpixel corner detection to obtain better and more accurate interest points.
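The enhancement steps named above could be realized along the following lines; this is a sketch assuming OpenCV in Python, and every parameter value is an illustrative assumption rather than a setting used in our experiments.

import cv2

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# 1. Smooth the input stream to suppress noise before contour/corner analysis.
smoothed = cv2.GaussianBlur(gray, (5, 5), 0)

# 2. Adaptive thresholding binarizes the frame robustly under uneven lighting.
binary = cv2.adaptiveThreshold(smoothed, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)

# 3. Detect coarse corners (Shi-Tomasi), then refine them to subpixel accuracy.
corners = cv2.goodFeaturesToTrack(smoothed, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)
if corners is not None:
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    corners = cv2.cornerSubPix(smoothed, corners, (5, 5), (-1, -1), criteria)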
Fig. 11 Framework of the proposed system

As discussed in [31], a markerless recognition system needs to fulfil the criteria stated below:
- Easy offline preparation
- Automatic initialization
- Reliable detection of natural features
- Reasonable computation time
- Accurate augmentation
- Operation in unconstrained environments
- Flexibility and adaptability to application needs

3.1 Algorithm Architecture
As illustrated in Fig. 11, the process starts by capturing input from the real environment through a camera acting as a visual sensor. On receiving an input image, the proposed system finds and detects strong interest points within the ROI by applying enhanced contour-corner detection. From the ROI, features such as the number of corners and vertices are extracted and later used to define a marker.
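A possible realization of this marker-definition step is sketched below, assuming OpenCV in Python: the binarized frame is scanned for a convex four-vertex contour whose vertices become the Square-ROI. The area and approximation tolerances are assumptions.

import cv2

def find_square_roi(binary):
    """Return the four vertices of the first square-like contour, or None."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 500:  # skip small noise blobs (assumed)
            continue
        perimeter = cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)
        # A square naturally yields four vertices, the points later used
        # for camera pose estimation.
        if len(approx) == 4 and cv2.isContourConvex(approx):
            return approx.reshape(4, 2)
    return None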

Fig. 12 Overview of the proposed method: (a) architecture; (b) marker identification illustration: (i) draw detected contours, (ii) draw detected corners, (iii) identify marker, (iv) overlay an object [6]


4 RESULTS
4.1 Testing Setup
We present the results of our proposed method based on a test conducted on a mannequin as the input (real environment) in our experiment.

A square is manually hand-drawn on top of the target area as a region of interest (ROI) in order to place a non-printed marker, or markerless ROI (see Fig. 13 (a)). The rationale for manually hand-drawing a square contour as the ROI is to avoid the need to prepare a printed marker and, at the same time, to make our proposed system more flexible in unprepared environments. It also ensures that only the desired area of the image is searched for features. As visualized in Fig. 13, our method first captures a frame from a video feed, binarizes the captured frame, finds a square-shaped contour, detects interest points and finally identifies a marker. We formulate that the number of vertices detected on the contour (square) must be equal to the number of corners detected in order to identify a marker; upon successful identification, a yellow circle is overlaid on top of the target area (see Fig. 13 (d)). The yellow circle is used as a substitute for a 3D breast object in this test.

Fig. 13 Results of the proposed system: (a) load camera and capture the environment; (b) convert captured frame into grey-scale; (c) display detected corners and square; (d) if marker detected, overlay 2D yellow circle [6]
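The verification rule above can be expressed directly in code. The sketch below, assuming OpenCV in Python, accepts the marker only when the vertex and corner counts match and then overlays the yellow circle; the inputs (vertices from the Square-ROI contour, corners from the corner detector) and the circle sizing are illustrative assumptions.

import cv2
import numpy as np

def verify_and_overlay(frame, vertices, corners):
    """Overlay a yellow circle when vertex and corner counts match."""
    if vertices is None or corners is None:
        return frame
    if len(vertices) == len(corners):  # e.g. 4 vertices == 4 corners
        cx, cy = np.mean(vertices, axis=0)
        radius = int(np.linalg.norm(vertices[0] - vertices[2]) / 4)
        # The yellow circle (BGR (0, 255, 255)) stands in for the 3D
        # breast object in this test.
        cv2.circle(frame, (int(cx), int(cy)), radius, (0, 255, 255), -1)
    return frame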

4.2 Execution
From our test, we found three (3) factors that could influence the success rate of our proposed method: Square-ROI size, Square-ROI line thickness, and Square-ROI distance from the camera (viewpoint).

Table 1 shows that the size of the Square-ROI used, with either single or double smoothing, influences the distance-to-detect features.

TABLE 1
Square-ROI size and Distance-to-detect

Square-ROI size   Distance-to-detect      Smoothing
17 x 16 (cm)      Estimated at 77 (cm)    Single
17 x 16 (cm)      Estimated at 40 (cm)    Double
6 x 5 (cm)        Estimated at 15 (cm)    Single
6 x 5 (cm)        Estimated at 10 (cm)    Double

5 CONCLUSION
Based on our experiment in Section 4, the proposed technique managed to do the following:
- Capture a real scene through a camera.
- Convert the captured scene into a grey-scale image.
- Detect four (4) vertices and four (4) corners.
- Identify and verify a marker based on the extracted features.
- Overlay a 2D object (yellow circle).


At the moment, the proposed technique has a tendency to detect unwanted contours and corners, as shown in Fig. 13 (c), and requires a large amount of memory (RAM), estimated at 16 GB for 7 to 14.1 seconds of real-time execution. In the future, we would like to extend our technique to properly visualize the coexistence of a real and a synthetic 3D breast cancer model sharing the same real environment, with the aid of touchless hand and finger interaction to select the region of interest (ROI) in real time.

ACKNOWLEDGMENT
The authors wish to thank the GRAVSLAB research group for their support and advice. This research is supported by a grant (FRGS0295-SG-1/2011) from the Ministry of Higher Education (MOHE), Malaysia. This paper is an extension of work originally reported in the 2013 2nd International Conference on Medical Information and Bioengineering (ICMIB 2013).



REFERENCES
[1] P. Milgram, H. Takemura, A. Utsumi and F. Kishino, "Augmented Reality: A Class of Displays on the Reality-Virtuality Continuum," Proceedings of SPIE, vol. 2351, Telemanipulator and Telepresence Technologies, pp. 282-292, 1994.
[2] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier and B. MacIntyre, "Recent advances in augmented reality," IEEE Computer Graphics and Applications, 21(6):34-47, 2001.
[3] S. Mann, "Mediated Reality with implementations for everyday life," Presence: Teleoperators and Virtual Environments, 2002.
[4] D. Marr and H. K. Nishihara, "Representation and recognition of the spatial organization of three-dimensional shapes," Proc. R. Soc. Lond. B, 200, pp. 269-294, 1978.
[5] M. Y. I. Idris, H. Arof, E. M. Tamil, N. M. Noor and Z. Razak, "Review of Feature Detection Techniques for Simultaneous Localization and Mapping and System on Chip Approach," Information Technology Journal, 8(3), pp. 250-262, 2009.
[6] Y. Genc, S. Riedel, F. Souvannavong, C. Akinlar and N. Navab, "Markerless tracking for Augmented Reality: A learning-based approach," Proc. of the International Symposium on Mixed and Augmented Reality (ISMAR 02), pp. 295-304, August 2002.
[7] L. Vacchetti, V. Lepetit and P. Fua, "Combining edge and texture information for real-time accurate 3D camera tracking," International Symposium on Mixed and Augmented Reality (ISMAR 04), pp. 48-57, 2004.
[8] A. I. Comport, E. Marchand and F. Chaumette, "A Real-Time Tracker for Markerless Augmented Reality," International Symposium on Mixed and Augmented Reality (ISMAR 03), Tokyo, Japan, September 2003.
[9] J. R. Vallino, "Interactive Augmented Reality," PhD thesis, University of Rochester, Rochester, NY, November 1998.
[10] J. Rekimoto, "Matrix: A realtime object identification and registration method for augmented reality," 3rd Asia Pacific Computer Human Interaction (APCHI 98), pp. 63-68, 1998.
[11] H. Kato and M. Billinghurst, "Marker tracking and HMD calibration for a video-based augmented reality conferencing system," Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR 99), p. 85, Washington, DC, USA, 1999.
[12] M. Fiala, "ARTag, a fiducial marker system using digital techniques," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 05), vol. 2, pp. 590-596, Washington, DC, USA, 2005. doi:10.1109/CVPR.2005.74
[13] S. Siltanen, "Theory and applications of marker-based augmented reality," Espoo 2012, VTT Science 3, 198 p. + app. 43 p.
[14] I. Skrypnyk and D. G. Lowe, "Scene modelling, recognition and tracking with invariant image features," Proceedings of the 3rd IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 04), pp. 110-119, 2004.
[15] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 07), Nara, Japan, 2007.
[16] T. Lee and T. Höllerer, "Hybrid feature tracking and user interaction for markerless augmented reality," Proceedings of the IEEE Virtual Reality Conference (VR 08), pp. 145-152, 2008. doi:10.1109/VR.2008.4480766
[17] F. Zhou, H. Duh and M. Billinghurst, "Trends in augmented reality tracking, interaction and display: A review of ten years of ISMAR," 7th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 08), 2008.
[18] D. Beier, R. Billert, B. Bruderlin, D. Stichling and B. Kleinjohann, "Marker-less vision based tracking for mobile augmented reality," Proc. of ISMAR, Tokyo, Japan, October 2003.
[19] G. Simon and M. Berger, "Reconstructing while registering: a novel approach for markerless augmented reality," Proc. of ISMAR, Darmstadt, Germany, 2002.
[20] ARToolkit Library. Available: http://www.hitl.washington.edu/artoolkit/documentation/vision.htm. Last accessed 10 February 2013.
[21] H. Moravec, "Towards automatic visual obstacle avoidance," Proceedings of the International Joint Conference on Artificial Intelligence, p. 584, 1977.
[22] C. Harris and M. Stephens, "A combined corner and edge detector," Alvey Vision Conference, pp. 147-151, 1988.
[23] C. Tomasi and T. Kanade, "Shape and motion from image streams: a factorization method - part 3: Detection and Tracking of Point Features," Technical Report CMU-CS-91-132, 1991.
[24] D. Frolova and D. Simakov, "Matching with Invariant Features," Advanced Topics in Computer Vision, Weizmann Institute of Science, lecture slides, March 2004.
[25] D. Parks and P. Gravel, "Corner detection," Available: http://kiwi.cs.dal.ca/~dparks/CornerDetection/algorithms.htm. Last accessed 5 March.
[26] M. B. Ahmad and T. S. Choi, "Local Threshold and Boolean Function Based Edge Detection," IEEE Transactions on Consumer Electronics, vol. 45, no. 3, August 1999.
[27] D. Ziou and S. Tabbone, "Edge detection techniques: An overview," International Journal of Pattern Recognition and Image Analysis, 8(4):537-559, 1998.
[28] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-8(6):679-698, November 1986.
[29] S. K. Mahendran, "A Comparative Study on Edge Detection Algorithms for Computer Aided Fracture Detection Systems," International Journal of Engineering and Innovative Technology (IJEIT), vol. 2, no. 5, November 2012.
[30] L. Rechard, A. Bade, S. Sulaiman and S. H. Tanalol, "Using Computer Vision Techniques to Recognize Hand-Drawn Square-ROI as a Marker in Augmented Reality Application," Proceedings of the 10th Annual Seminar on Science and Technology 2012 (SNT 2012), p. 212, 2012.
[31] Y. Chunrong, "A tracking by detection approach for robust markerless tracking," Available: http://campar.in.tum.de/twiki/pub/Chair/IsmarIndustrialAugmentedReality2006/yuan.pdf

Rechard Lee holds a first degree in Computer Science from Universiti Putra Malaysia, completed in 2000. He is currently pursuing his MSc degree by research in Mathematics with Computer Graphics at Universiti Malaysia Sabah. His current research group is GRAVSLAB (www.gravslab.com).

Dr. Abdullah Bade holds an MSc degree by research in Computer Science from Universiti Teknologi Malaysia (2003). He obtained his doctorate in Industrial Computing (Computer Graphics and Visualization) from Universiti Kebangsaan Malaysia in 2008. He has been actively involved in research and has secured several research grants from the Ministry of Higher Education and the Ministry of Science and Technology, Malaysia. His research interests lie particularly in developing optimized algorithms for collision detection between objects, deformable body simulation, serious games simulation, cloth simulation and crowd simulation. He is currently part of the School of Science and Technology, UMS, where he serves as a Senior Lecturer and Head of Programme for Mathematics with Computer Graphics. He spends most of his leisure time listening to soft music, surfing the internet, reading and travelling. His current research group is GRAVSLAB (www.gravslab.com).
