
Department of Computing, Macquarie University
Gesture Analysis (Project Proposal)

Student Name: Saif Samaan
Student ID: 42083850
Supervisor: Dr. Manolya Kavakli

Abstract: This project will describe a 3D hand and arm gesture recognition system that lets a designer sketch by controlling a pointer. A pair of sensor gloves covering both hands and arms will be used to create real-time 3D designs while interacting with a computer-based application. In this project I will investigate the usability and feasibility of gesture recognition as a form of command input to a Virtual Reality (VR) system, by integrating existing technologies in motion tracking, 3D modeling and VR. The model application will demonstrate a simple 3D modeling tool in which 3D objects can be created and modified using gestures as a control mechanism. The suggested interface will recognize the designer's hand gestures, pass commands to a 3D modeling package via a motion recognition system, and produce the 3D model of the sketch on-the-fly.
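To make the intended command pipeline concrete, the following minimal Python sketch shows how recognized gesture labels could be dispatched to modeling commands. All names here (the gesture labels "pinch" and "spread", the handler functions, and the scene list standing in for a 3D modeling package) are illustrative assumptions, not part of any existing system.

```python
# Minimal sketch of a gesture-to-command dispatcher (all names hypothetical).
# A recognized gesture label is mapped to a 3D-modeling command.

def make_dispatcher():
    scene = []  # stands in for the 3D modeling package's scene graph

    def create_cube(params):
        scene.append(("cube", params))
        return "created cube"

    def scale_selection(params):
        scene.append(("scale", params))
        return "scaled selection"

    # Gesture label -> command handler (labels assumed, for illustration).
    handlers = {
        "pinch": create_cube,
        "spread": scale_selection,
    }

    def dispatch(gesture, params=None):
        handler = handlers.get(gesture)
        if handler is None:
            return "unrecognized gesture: " + gesture
        return handler(params)

    return dispatch, scene

dispatch, scene = make_dispatcher()
print(dispatch("pinch", {"size": 1.0}))  # created cube
print(dispatch("wave"))                  # unrecognized gesture: wave
```

A real system would replace the label lookup with output from the motion recognition component, but the separation between recognition and command dispatch would remain the same.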

1. Project Description

1.1 Project Background

Gestures and gesture recognition are terms increasingly encountered in discussions of human-computer interaction. In fact, every physical action involves a gesture of some sort in order to be articulated. Virtual Reality (VR), meanwhile, is a fast-growing scientific area with wide practical applications in the near future. A VR system is expected to provide realistic representations of objects, and this realism applies at least as much to the behavior of the objects as to their appearance. Sutherland's Sketchpad (1965) was based on structural descriptions of objects. A structural description consists of a set of symbolic propositions about a particular configuration. Its disadvantage is that two different projections of the same object look different and have different structural descriptions in the picture domain, although they are equivalent within the object domain of a 3D description. Minsky (1975) suggested linking these different projections together in the form of a frame system for recognition. Winston (1975) illustrated the use of such structural descriptions in object recognition and made use of programs such as Guzman's (1971) to group regions of the picture together. Witkin and Tenenbaum (1983) focused on the structure and the interrelation of parts as dominated by the general character of the whole. Marr (1982) raised questions about the characteristics of shape representation: does the description of the object in the human brain have an internal structure?

Since these first interactive computer graphics systems in the early 70s, sketch recognition remained unexplored for a long time. The difficulty of sketch recognition shifted attention to other areas of object recognition. The importance of freehand sketching in visual discovery was readily accepted in the early 90s (Fish and Scrivener, 1990; Goldschmidt, 1991; Goel, 1995). However, approaches to sketch recognition have generally treated the sketch as an image and discarded information about the underlying cognitive processes and actions. Approaches that used properties of drawing production (e.g., pen pressure, speed of the drawing action, etc.) for recognition include those of Negroponte (1973), Scrivener et al. (1993) with the ROCOCO SketchPAD, and Gross (1994) with the COCKTAIL NAPKIN. Such systems are still rarely used in idea generation in design, since they are not integrated with the demands of visual cognition. Therefore, a need has arisen to generate 3D models of real objects by sketching in VR in real time, and I will address this by implementing a 3D sketchpad system for hand and arm gesture recognition.

1.2 Aims, Significance and Expected Outcomes

1.2.1 Project Aims

The main aim of this project is to design a system capable of recognizing hand gestures in real time. The envisaged environment is one in which a designer can sketch by controlling a pointer using a pair of cyber gloves and can interact with the design product in 3D space. I shall try to find out possible ways of analyzing the output data so that the computer can identify the different hand/arm gestures of the sensor-glove operator. To do this, we are planning a set of experiments with the sensor gloves in order to collect data for analysis. I will also rely on the existing literature on sensor-glove output data analysis and the methods it has used for this purpose. The sensor gloves (worn on both hands) and the pointer incorporate 3D position sensors so that designers can draw and design 3D objects in real time.

1.2.2 Significance

Many methods of hand gesture tracking already exist, but a smooth and reliable system to support 3D design in VR is still needed; this project is directed at developing such a system. Many design professions, including architecture, fashion, and engineering, can benefit from it, since they rely heavily on sketching in the conceptual design process. Other areas, such as films, computer games, and user interface design involving storyboarding and visualization, may also benefit from sketching in virtual reality.

1.2.3 Outcomes

The main outcome will be a designer's 3D sketchpad system for hand and arm gesture recognition. I shall achieve that by collecting and combining the following minor outcomes:
1. A list of the data collected and analyzed during the set of recorded experiments;
2. A list of data from the analyzed questionnaires;
3. A list of the approaches and techniques used for the output data analysis;
4. A description of the problems faced during the project;
5. An exploration of the nature of sketching behavior within VR;
6. A possible structure of the cognitive actions of designers engaged in a design problem;
7. A suggested conceptual model for a 3D sketchpad as a precursor for a computational tool;
8. A suggested development of a specific VR system and sensor gloves.

2. Methodology and Plan

2.1 Approach

The project objective is to investigate and identify a set of technologies in the gesture recognition area, and to integrate existing technologies in motion tracking, 3D modeling and VR. This will be approached by achieving each of the following stages:

Stage 1: I will perform a detailed literature review to find out the methods and technologies previously used in areas related to my project, such as:
- gesture recognition;
- hand motions;
- sketch recognition;
- motion capture;
- behavioral modeling;
- 3D modeling.

Stage 2: Because this project requires detailed research on design and intelligent systems, data will be collected and analyzed from a set of recorded experiments. The collected data will address:
1. Detailed measurements of the user's 3D hand movements, indicating how fast or how slow the hand should move;
2. Definition of the sensor points to be located in the gloves;
3. Questionnaires to establish the users' expectations for the system;
4. An outline of possible techniques and approaches to develop the system.

Stage 3: At this stage I shall form the final collection of tools and techniques for the system, represented by a final report, which will prepare the project for implementation as a programmed user interface.

2.2 Task Plan

2.2.1 Task 1: Literature Review {1/08/2011-18/08/2011, Weeks 1-3}
The aim of this task is to collect information about hand gesture methods, sketching and 3D modeling. It will focus on the following areas:
- Determine technologies: analyze the existing technologies and nominate which will be suitable for this project {Week 1}.
- Documentation: document how to integrate the nominated technologies in a way that serves the project {Week 2}.
- Previous experience: study similar previous systems, especially hand recognition systems, to become familiar with user expectations {Week 3}.

2.2.2 Task 2: Conducting Experiments {20/08/2011-22/09/2011, Weeks 4-8}
Collect experimental data for the project by achieving the following deliverables:
- Recorded experiments: determine specific system needs such as hand movement speed and sensor detection {Weeks 4-7}.
- Questionnaires: find out user expectations for the system {Weeks 5-7}.
- Data collection: produce a final list of the data to be used in the system {Week 8}.

2.2.3 Task 3: Experimental Analysis {24/09/2011-20/10/2011, Weeks 9-12}
Analyze the collected data by:
- Filtering received data: test the collected data in different combinations {Weeks 9-11}.
- Defining possible techniques: after testing, determine specific techniques for the system {Week 12}.

2.2.4 Task 4: Report Outline {30/09/2011-07/10/2011, Week 10}
Develop the report outline that will lead into writing the final report.

2.2.5 Task 5: Report Writing {28/10/2011-11/11/2011, Weeks 13-15}
- Draft report: develop a first draft of the report {Week 13}.
- Review project report: revise the draft report and make the necessary changes {Week 14}.
- Final report: produce the final project report {Week 15}.

2.2.6 Task 6: Presentation Design {28/10/2011-11/11/2011, Weeks 13-15}
Develop the presentation material and prepare for the final project presentation.
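As an illustration of the hand-movement measurements planned for Stage 2 and Task 2, the sketch below estimates hand speed from timestamped 3D positions. The sample format (tuples of time and x, y, z coordinates) is an assumption for illustration; the actual glove output format will depend on the hardware chosen.

```python
import math

def hand_speed(samples):
    """Estimate average hand speed (units/sec) from timestamped 3D positions.

    `samples` is a list of (t, x, y, z) tuples, as might be logged from a
    glove's 3D position sensor (hypothetical format, for illustration).
    """
    speeds = []
    for (t0, x0, y0, z0), (t1, x1, y1, z1) in zip(samples, samples[1:]):
        dist = math.sqrt((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2)
        dt = t1 - t0
        if dt > 0:
            speeds.append(dist / dt)
    return sum(speeds) / len(speeds) if speeds else 0.0

# Example: a hand moving 1 unit along x per second, sampled once a second.
samples = [(i, float(i), 0.0, 0.0) for i in range(5)]
print(hand_speed(samples))  # 1.0
```

Per-segment speeds of this kind could also feed the questionnaire analysis, e.g. by comparing measured speeds against the speeds users report as comfortable.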

References
1. Gulrez, T. & Kavakli, M., 2007: Precision Position Tracking in Virtual Reality Environments Using Sensor Networks, IEEE International Symposium on Industrial Electronics (ISIE 2007), Vigo, Spain, June 4-7, 2007, 1-6.
2. Kavakli, M., 2008: Gesture Recognition in Virtual Reality, Special Issue on "Immersive Virtual, Mixed, or Augmented Reality Art" of The International Journal of Arts and Technology (IJART), ISSN (Online): 1754-8861, ISSN (Print): 1754-8853, Vol. 1, No. 2, 215-229.
3. Kavakli, M. & Kartiko, I., 2007: Avatar Construction for Believable Agents, 3IA'2007, The Tenth International Conference on Computer Graphics and Artificial Intelligence, Athens, Greece, May 30-31, 2007, 1-6.
4. Fish, J. & Scrivener, S.A.R., 1990: Amplifying the Mind's Eye: Sketching and Visual Cognition, Leonardo, 23(1), 117-126.
5. McNeill, D., 1992: Hand and Mind: What Gestures Reveal About Thought. Chicago: University of Chicago Press.
6. http://www.hitl.washington.edu/projects/knowledge_base/virtual-worlds/JOVE/Articles/dsgbjsbb.txt
