
TRANSFORMING A GRAPHICAL USER INTERFACE (GUI)

BASED SCREEN INTO A NATURAL USER INTERFACE


(NUI) BASED SCREEN

A MINI PROJECT REPORT SUBMITTED IN PARTIAL


FULFILLMENT FOR THE AWARD OF THE DEGREE OF
BACHELOR OF TECHNOLOGY
IN
ELECTRONICS AND INSTRUMENTATION ENGINEERING
SUBMITTED BY

P.SRUJAN

12071A10A2

P.MADHUKAR REDDY

12071A10A3

B.GOPI NARESH

13075A1021

Under Esteemed Guidance of


Mrs. Anitha Kulkarni,
Associate Professor,
EIE Dept., VNR VJIET.

DEPARTMENT OF ELECTRONICS AND INSTRUMENTATION ENGINEERING


VNR VIGNANA JYOTHI INSTITUTE OF ENGINEERING & TECHNOLOGY
UGC AUTONOMOUS INSTITUTION
(Approved by AICTE, Accredited by NBA, NAAC, New Delhi)
BACHUPALLY, HYDERABAD-500090

2015

DEPARTMENT OF ELECTRONICS AND INSTRUMENTATION ENGINEERING


VNR VIGNANA JYOTHI INSTITUTE OF ENGINEERING & TECHNOLOGY
BACHUPALLY, HYDERABAD-500090

CERTIFICATE
This is to certify that the mini project entitled "Transforming a Graphical User Interface (GUI) Based Screen into a Natural User Interface (NUI) Based Screen", being submitted by P.Srujan (12071A10A2), P.Madhukar Reddy (12071A10A3) and B.Gopi Naresh (13075A1021) in partial fulfillment of the requirement for the award of the Degree of Bachelor of Technology in Electronics and Instrumentation Engineering, VNR Vignana Jyothi Institute of Engineering and Technology, is a record of bonafide work carried out by them under my guidance and supervision from October 8th, 2015 to November 4th, 2015. The results embodied in this report have not been submitted to any other University or Institute for the award of any Degree or Diploma.

Mrs. Anitha Kulkarni,
Associate Professor,
EIE Dept., VNR VJIET,
Hyderabad.

Mrs. R. Manjula Sri,
Head of the Department,
EIE Dept., VNR VJIET,
Hyderabad.

ACKNOWLEDGEMENT

This is an acknowledgement of the intensive drive and technical competence of the many individuals who contributed to the success of our project.
We express our special gratitude to our Project Guide, Mrs. ANITHA KULKARNI, Associate Professor, Department of Electronics and Instrumentation Engineering, VNR VJIET, for her guidance and help on our present topic, TRANSFORMING A GRAPHICAL USER INTERFACE (GUI) BASED SCREEN INTO A NATURAL USER INTERFACE (NUI) BASED SCREEN. We thank her for her immense cooperation in carrying out this project work.
Our sincere thanks to Mrs. R. MANJULA SRI, Head of the E.I.E. Department, VNR VJIET, and to Prof. S. RAJA RATNAM and Mr. NAGESHWAR V., Mini-Project Coordinators, for their valuable suggestions and guidance.
We express our sincere thanks to Dr. C. KIRANMAI, Principal, VNR VJIET, for the support and motivation provided to us.
We extend our thanks to the rest of the faculty for their encouragement and guidance.

P.SRUJAN (12071A10A2)
P.MADHUKAR REDDY (12071A10A3)
B.GOPI NARESH (13075A1021)

DECLARATION

We hereby declare that the mini project work TRANSFORMING A GRAPHICAL USER INTERFACE (GUI) BASED SCREEN INTO A NATURAL USER INTERFACE (NUI) BASED SCREEN, submitted towards partial fulfillment of the requirements for the Degree of Bachelor of Technology in Electronics and Instrumentation Engineering to VNR Vignana Jyothi Institute of Engineering and Technology, Hyderabad, is authentic work and has not been submitted to any other university or institute for the award of any degree or diploma.

P.SRUJAN (12071A10A2)
P.MADHUKAR REDDY (12071A10A3)
B.GOPI NARESH (13075A1021)

ABSTRACT
In our everyday life, technology is trying to shape itself in the simplest way possible; rapid changes in technology aimed at making human life simpler continue unobstructed. A milestone in this journey is the Natural User Interface, whose paramount attribute is that the human interacts with the natural environment and receives an electronic functionality as the response. In this project we transform a GUI based PC screen into a NUI based screen. This process has a significant role in changing different interfaces into a natural user interface, contributing to a new subject of study called 'Augmented Reality'.

CONTENTS

Chapter 1: INTRODUCTION
    1.1 Our motivation for this project
    1.2 Introduction to 2D Graphical User Interface
    1.3 Introduction to 3D Graphical User Interface
    1.4 Introduction to Natural User Interface
    1.5 Emerging technologies

Chapter 2: REQUIREMENTS SPECIFICATION
    2.1 The Wii Remote
    2.2 Infra-Red Pen
    2.3 Smoothboard Software
    2.4 Microsoft .NET Framework 3.5 or 4

Chapter 3: SETUP
    3.1 Block diagram
    3.2 Flow chart

Chapter 4: OUTPUT SCREENS
    4.1 Calibration
    4.2 PowerPoint presentation
    4.3 Mouse drag operation
    4.4 Click operations

Chapter 5: CONCLUSION
    5.1 Summary

Chapter 6: BIBLIOGRAPHY
    6.1 Bibliography

CHAPTER-I
INTRODUCTION

1.1 OUR MOTIVATION FOR THIS PROJECT


An eagerness to work in the field of sixth sense technology has led us to study one of its most important parts, 'Augmented Reality'. Our growing and undiminished interest in this field has challenged us to explore the depths of augmented reality, where we learned about the Natural User Interface (NUI).
While familiarizing ourselves with the Natural User Interface, we came across a way to actually transform a normal Graphical User Interface (GUI) PC screen into a NUI based screen.
Hence we motivated ourselves to put our efforts into making this project.

GRAPHICAL USER INTERFACE

1.2 INTRODUCTION TO 2D GRAPHICAL USER INTERFACES


Fundamental to GUIs is that the screen is divided into a grid of picture elements (pixels), and by turning the pixels on or off, pictures can be formed.
Similar to character based interfaces, GUIs also use a keyboard as an input device and a display as an output device. With the introduction of the desktop metaphor, a new input device that revolutionized interaction was introduced. The mouse is a small device that moves the cursor relative to its present position and makes it easy to interact with the computer system.
GUIs usually feature windows, icons, menus, and pointers. The window makes it possible to divide the screen into different areas and to use each area for a different task. Each window can contain icons, which are small pictures of system objects representing commands, files or windows. Menus are collections of commands and objects from which a user can select and execute a choice. The pointer is a symbol that appears on the display screen and is moved to select and execute objects and commands.
A GUI often includes well-defined standard formats for representing text and graphics, thus making it possible for different programs using a common GUI to share data. The use of GUIs helps reduce the mental effort required to interact with programs. Instead of remembering sequences of complex command languages, users only need to learn how to interact with the simulated world of objects, e.g. icons and menus.

1.2.1 Desktop Metaphor


One of the most famous of the first graphical user interfaces is the Star user interface, designed by Xerox Corporation's Palo Alto Research Center in the 1970s. Yet it was not until the early 1980s that the development of the Apple Lisa and the Macintosh made graphical user interfaces popular.
The Star interface is based on the physical office and thus designed to resemble the physical world that is already familiar to users. The idea of the interface metaphor was to create electronic counterparts to the physical objects in an office. This involved representing office objects, e.g. papers, folders, filing cabinets, and trashcans, on the screen. The overall organizing metaphor presented on the screen was that of a desktop, resembling the top of an office desk.
Some of the principles developed with the Star computer included direct manipulation, What You See Is What You Get (WYSIWYG), and consistency of commands. Direct manipulation is a communication style in which objects are represented on the screen and can be manipulated by the user in ways analogous to how the user would work with the real objects. WYSIWYG is a way to represent the user's conceptual model in a form that can be displayed; for example, a document is displayed on the screen as it would look in printed form.

1.2.2 Interface Components


Interface components (sometimes known as widgets) are graphical objects with
behaviour that are used to control the application and its objects. The objects, such as buttons,
menus, sliders, texts, labels, droplists, and lists are combined to make up the GUI. They all
allow user interaction via a pointer device and/or a keyboard.

1.3 INTRODUCTION TO 3D GRAPHICAL USER INTERFACES


In the literature, the term 3D interface is used to describe a wide variety of interfaces for displaying and interacting with 3D objects. True 3D interfaces, i.e. interfaces with all of their components in a 3D environment, have not yet had any major impact outside the laboratory. The interfaces nowadays referred to as 3D GUIs are almost exclusively hybrids between 2D and 3D interfaces. In this report we will refer to these "hybrid" interfaces simply as 3D GUIs. In the early days of 3D computing, the only users of 3D graphics were exclusive groups of scientists and engineers (i.e. experts) with access to expensive supercomputers and workstations.
Today, the growth in platform-level support for real-time 3D graphics has increased the interest in adding to the dimensionality of GUIs. The user group has grown from an exclusive group of expert users to a diverse group of researchers and practitioners in such areas as CAD/CAM, medicine and engineering. This imposes high requirements on the user interface, since it is now going to be used by a broad, heterogeneous user group with a wide diversity in knowledge of computing. One possible way to accomplish this is to design the user interface so that users can apply more natural skills in navigating and orienting themselves within applications. The concept of Virtual Reality (VR), also known as Virtual Environment (VE), is an attempt to accomplish this, and it is a field closely coupled to 3D GUIs.

Figure 1.3

1.3.1 Perception of depth


An important topic in the field of 3D graphics is how to visualise objects so that users get a good impression of their shapes as well as their relative positions. To accomplish this it is important to consider how depth and shape are perceived by human beings. How does one tell from a two-dimensional image where objects are located in space? In a normal situation this is nothing we have to worry about, since our brain takes care of it for us using the information it obtains from the retina. However, to be able to create credible 3D graphics it is necessary to have a deeper insight into these matters. The retina receives information in two dimensions, but the brain translates these cues into three-dimensional perceptions. It does this by using both monocular cues (which require only one eye) and binocular cues (which require both eyes).
Monocular Depth Cues
The monocular depth cues are relevant when discussing computer graphics, since these are the cues that have to be used to create the illusion of depth on a normal two-dimensional display. These depth effects are experienced just as powerfully with one eye closed as when both eyes are used.
With clever use of monocular cues it is possible to create images whose individual parts make sense but whose overall organization is impossible in terms of our existing perceptual schemas. The artist Oscar Reutersvärd skillfully used this in the creation of his famous impossible figures.
Imagine standing on a straight railroad track watching the rails recede into the distance. The two rails will seem to get closer together the further away you look. This illustrates an important monocular cue known as linear perspective.
Other, more obvious monocular cues include decreasing size, i.e. objects further away from us produce a smaller image than objects close to us, and the fact that objects further away appear in a higher horizontal plane (i.e. objects in the distance appear closer to the horizon). Texture and clarity also affect the way we perceive depth: we can see objects close to us more clearly and in more detail than objects far away, and the texture of objects appears finer grained in the distance.
Finally, just the effective use of patterns of light and shadow can provide important depth information. All the monocular depth cues are summarized in the figure below.

Figure 1.3.1

Binocular Depth Cues

The monocular depth cues discussed earlier are not as convincing as the much richer depth effect that occurs when both eyes are used. The appearance of depth that results from the use of both eyes is called stereopsis; it arises when the brain combines the two slightly different images it receives from each eye into a single image with depth. The effect of stereopsis can be achieved artificially by using devices like head mounted displays or shutter glasses. As opposed to monocular depth cues, about 10% of people cannot interpret stereopsis, i.e. they cannot detect binocular depth cues, which makes them stereoblind.

Figure 1.3.2

1.3.2 Interface components


The 3D user interfaces of today often take little advantage of the opportunities of the added third dimension, using mostly plain 2D interface components that are mapped to rotation and translation of the 3D objects. A normal computer mouse is also commonly used. The mouse movement is then mapped to rotation/translation, and a selection mechanism is used for selecting which degrees of freedom the device is controlling.
Various approaches have been made to develop new metaphors and interaction techniques to take advantage of the greater possibilities of 3D. One approach that is still in its infancy is the concept of 3D widgets, which involves making the interface components first-class objects in the same 3D environment as the rest of the objects in an application. There are no generally accepted metaphors for 3D interfaces analogous to the well-established metaphors for 2D interfaces, and this lack of metaphors means that essentially no toolkits for 3D interfaces exist.
Selective Dynamic Manipulation is a paradigm for interacting with objects in visualizations. Users control the visualisations using 3D handles that may be attached to objects. To manipulate the objects, the handles are pushed or pulled; the objects move and react continuously to the manipulation of the handles. Use of interactive animation in this way can help users understand and track the changes that occur to an object set.
Other examples of 3D widgets include the Arcball, which is a mouse-driven 2D interface that simulates a physical trackball. The virtual object is shown on the screen, and when the user clicks and drags on the virtual object, the computer interprets these drags as tugging on the simulated trackball; the virtual object rotates correspondingly. To provide the third rotational degree of freedom, a circle is drawn around the object, and when the user clicks and drags in the area outside of the circle, rotation is constrained to be about the axis perpendicular to the computer screen.
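
The Arcball mapping described above can be illustrated with a short sketch. The Python below is illustrative only (no implementation is given in this report): it maps a mouse position inside the window onto a virtual unit sphere, which is the standard first step of the Arcball technique; the window size and names are assumptions.

import math

def arcball_vector(px, py, width, height):
    """Map a 2-D mouse position to a point on a virtual unit sphere."""
    # Normalize pixel coordinates to [-1, 1] with the origin at the window centre.
    x = (2.0 * px - width) / width
    y = (height - 2.0 * py) / height     # flip y so "up" is positive
    d2 = x * x + y * y
    if d2 <= 1.0:
        z = math.sqrt(1.0 - d2)          # inside the ball: project onto the sphere
    else:
        norm = math.sqrt(d2)             # outside: clamp to the sphere's silhouette
        x, y, z = x / norm, y / norm, 0.0
    return (x, y, z)

# Dragging from p0 to p1 then rotates the object about the axis given by the
# cross product p0 x p1, by the angle between the two vectors.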

NATURAL USER INTERFACE

1.4 INTRODUCTION TO NATURAL USER INTERFACE:


Definition:
A natural user interface, or NUI, or natural interface, is the common parlance used by designers and developers of human-machine interfaces to refer to a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned.

Figure 1.4

A NUI relies on a user being able to quickly transition from novice to expert. While the interface requires learning, that learning is eased through design which gives the user the feeling that they are instantly and continuously successful. Thus, "natural" refers to a goal in the user experience, namely that the interaction comes naturally while interacting with the technology, rather than to the interface itself being natural. This is contrasted with the idea of an intuitive interface, which refers to one that can be used without previous learning.

1.4.1 Design considerations for a Natural User Interface (NUI)


The first step toward designing an NUI might actually be considered a step back. The eventual NUI will contribute more effectively to the success of the system in the marketplace if the design team first analyzes the application and determines the type of user experience, largely dictated by the NUI, that will engage and energize users. In many market segments today, the user experience is just as critical as the functionality provided by the system itself.
Cost and size parameters
Not unexpectedly, cost will play a major role in the type of user experience provided
by a system. A leading-edge NUI may drive up the overall cost of the system to the point
where the price of the product will hinder its adoption. Often, the tradeoff NUI designers
must consider will involve developing a differentiated interface that may command a
premium or implementing a more established NUI that is already understood by users in the
marketplace.
The size of the end product also has a significant effect on the type of NUI that can be
implemented. Large, immobile systems like DVD kiosks or industrial factory automation
control panels can have a very different user interface from small portable systems like
smart phones and GPS systems or very small control panels such as those found on certain
white goods like refrigerators and thermostats.
Connectivity and power issues
This is a connected world, and most systems are either connected to the Internet directly or to a local area network, which in turn may be connected to the Internet. The type of connectivity, wired or wireless, will factor into the system's NUI.
Some wireless connectivity technologies like near field communications (NFC) and ZigBee have a fairly low data transfer rate. A large and ponderous NUI might needlessly degrade the responsiveness of the system if it depended on a low-speed wireless connection. Moreover, many systems that rely on a wireless connection are special-purpose, handheld devices, like package tracking devices for shipping companies or point-of-sale devices for car rental check-out. Since the functionality of these sorts of devices is limited to one or a few specific tasks, the NUI can be fairly simple and intuitive; this requires fewer processing cycles and conserves battery power. The requirements of some applications determine the wireless protocol implemented, which, in turn, will affect the look and feel of the NUI. ZigBee-based wireless communication, for example, is often deployed in certain medical systems because it meets the security requirements of the U.S. government's Health Insurance Portability and Accountability Act (HIPAA).
Systems wired into a network are more often wired into a power source too. The power budget for such a system will certainly be larger and more relaxed than the power budget for a handheld portable device.
Some wired systems have significant power needs for certain ancillary processing functionality. An ATM system, for example, will have a card reader to power; other wired systems might require bar code readers for access. Some wired systems combine their network connection with their power source by using technologies like Power-over-Ethernet (PoE) or power-line data communications.

The larger power budgets that are typical of wired systems give NUI design teams more options to consider, since their choice of technologies will not be narrowly limited to just the low-power technologies.
Processing and power issues
The relative complexity of an NUI will have ramifications on the processing resources in the system. Many NUIs will run on their own processor or controller, distinct from the application processor(s). Other NUIs share the system's processor, which may contain multiple processing cores. Regardless of the system's processing architecture, the NUI will have certain processing requirements which must be accounted for. Various factors of the NUI will determine the processing capabilities needed to support it.
For example, the larger the size and the higher the resolution of any display screens, the greater the processing load placed on the NUI's processor. System security measures might be integrated into the NUI; some security subsystems are quite elaborate, involving extensive image or video processing for biometric security measures like fingerprint recognition, retina scanning or facial verification.
The graphic content of the NUI is another consideration that affects the NUI's processing requirements. To provide a more engaging and captivating user interface, many NUIs are migrating from 2-D display graphics to 3-D icons and images. This places an extra burden on the processor, since 3-D graphics are much more computationally intense. Indeed, many designers have found that executing 3-D graphics in software often places too great a processing burden on the system's or the NUI's processing resources. One of the most effective alternatives is to deploy processors that include graphics accelerators in hardware so that the NUI's 2-D or 3-D graphics processing load does not degrade responsiveness.
Some NUIs involve cameras and other sorts of visual recognition equipment. Often, visual technologies are either a function of a security application, such as a surveillance system protecting access to a building, or they provide security to the NUI and the underlying application. Vision-processing-based systems, like gesture recognition, require a very high amount of computational power from multi-core architectures. These multi-core architecture-based systems have to provide adequate processing capacity for image processing, pixel processing, and feature recognition and tracking algorithms.
A strategic decision arises when the manufacturer is delivering multiple systems to
the marketplace within a product line approach. Typically, each member system in a product
line addresses a different price point in the marketplace. As a consequence, each system is
likely based on a processor with the capabilities and cost needed to meet the expectations of
the targeted market segment. Given the range of processing power that will be deployed
throughout the product line, the NUI for the various systems will likely vary according to the
capabilities of the processor. At the same time, the manufacturer will want to reduce
development costs as much as possible by re-using software and firmware across all the
systems that make up the product line.
Therefore, it behooves a manufacturer to consider a platform or generation of
processors that are software compatible and offer software portability, while delivering a
range of processing capabilities at various cost levels. NUI algorithms in general require a
high amount of complex computational decisions, and hence the scalability aspect is very
much needed when building a product pipeline.

Operating systems
Developers of a system's NUI invariably will be affected by the design and architecture of the rest of the system, such as the type of operating system (OS), high-level or real-time, that has been chosen. Requirements of the application usually dictate the choice of an OS. High-level OSs (HLOSs) like Windows, Linux and Android provide an intuitive and easy-to-use layer of abstraction between complex computing or communications technology and users. For developers and designers, most HLOSs also include extensive tools. These can be of great assistance to NUI developers for integrating complex functionality such as multimedia processing. But, because of all their capabilities, HLOSs usually have a large code footprint, which can hinder their real-time responsiveness. The processing algorithms involved are very time-critical and sensitive to latency, requiring faster, lighter-weight processing; also, when multiple cores are involved in processing, the handshake between the cores should not take a long time.
Real-time OSs (RTOSs) are typically implemented for applications where responsiveness and predictability are critical. Many embedded applications use an RTOS because it can ensure a system response within a predetermined period of time or a certain number of clock cycles. To meet these types of requirements, RTOSs like VxWorks, Nucleus, QNX Neutrino, Integrity, SafeRTOS/FreeRTOS, Micrium and RoweBots usually have a small code footprint, and they do not include many of the tools and support mechanisms found in HLOSs. As a consequence, NUI design and development can be more laborious, especially if advanced functionality such as multimedia processing is being incorporated into the NUI. There is a myriad of user interface technologies that can be deployed in an NUI to make it the kind of engaging interface that draws users into the application while increasing their productivity. Each technology comes with its own set of tradeoffs. In the end, the application and the circumstances surrounding its use will determine the optimum NUI technologies to implement.
Touch screens
Touch screens were introduced as a keyboard and mouse replacement almost two decades ago. The first type of touch panel was resistive; this features a flexible outer layer, typically made of a plastic film, which makes electrical contact with an inner layer when it is deformed. Capacitive is a newer type of touch screen which is prominently featured on many smart phones and tablet PCs today; its outer layer is typically rigid glass, and the capacitive effects of a finger touch are sensed by the panel. In both cases, the coordinates of the touch are processed by a touch screen controller and provided to the NUI. Other types of touch panels, such as surface acoustic wave (SAW) and infrared, are also available, but the adoption of these latter two technologies has not been widespread.

Figure 1.4.1

Figure 1.4.2

Voice control
Voice recognition is a well-established technology, but noisy environments are not conducive to its deployment. In addition, the more sophisticated voice recognition algorithms, those that understand a wide range of vocabularies, languages and accents, and which can be trained to many different voices, require significant processing cycles. Implementing an NUI with a sophisticated voice control component could therefore require a more powerful and more costly processor.
Graphic considerations
The graphical composition of on-screen displays will affect the performance requirements of the processor, as well as the amount of memory and the read/write memory bandwidth needed by the NUI. For example, the user interface could be based on 2-D (raster), 2.5-D (vector) or 3-D graphics. Each step up involves improvements in the vibrancy of the graphics, the on-screen colors, the depth of objects, the gradient range of light on the screen and other user perceptions. Each step up also comes with requirements for greater processing power, more memory and faster bandwidth. For higher-order graphics, designers must consider carefully whether the processor which will be running the NUI includes hardware acceleration for graphics. Without hardware-based graphics accelerators, sophisticated graphics processing such as alpha blending, rotation, picture-in-picture and other types of graphics functionality could severely burden the processor. Of course, the size and resolution of the display screen supporting the NUI will also dictate the type of graphics incorporated into the user interface; for example, small, low-resolution screens do not provide an adequate basis for 3-D graphics.
Short-range wireless
Certain short-range wireless communication technologies like NFC, radio frequency identification (RFID) and ZigBee are being incorporated into some NUIs, such as the interfaces on handheld terminals used by shipping and logistics companies. Of course, the limited operational range of these wireless protocols restricts their implementation to a few types of applications, but they do have certain advantages. The limited distance their signals can travel ensures they will not interfere with other wireless communications in their vicinity. They also consume little power, which extends the battery life of the handheld devices where they have been deployed.

1.5 EMERGING NUI TECHNOLOGIES


Development is progressing on several emerging technologies that will eventually be
employed in NUIs.
Stereoscopic-3D is the most recent example of an emerging technology brought to life
in the NUI space.
Stereoscopic-3D (S3D)
Stereoscopic-3D (S3D) is quickly emerging as a prime technology across various
markets, and is proving to be a hot trend that adds an additional dimension of reality to
existing 2-D videos, games, movies and images.
With the advent of 3-D TVs hitting store shelves, consumers are now getting acquainted with large-screen, realistic S3D effects in home entertainment. Today, S3D experiences are migrating from the large screen to mobile devices, providing realistic and glasses-free, personalized viewing experiences on the go.

Figure 1.5
Overall, S3D video and imaging use cases can be categorized in two ways, S3D content creation and S3D viewing, both of which have a unique set of challenges in mobile design and development. Next-generation ARM-based architectures, such as the OMAP processors, address these design challenges and help successfully establish S3D experiences in the mobile world.
Extension of dual-camera-based 3-D leads into more natural 3-D gesturing and can potentially scale to many applications, including interactive signage in shopping malls. Cameras and other sensors are used to recognize the touch-less gestures that users perform in front of the system to control it. In one interactive signage application, shoppers in a mall use 3-D gestures to zoom in or out on a particular portion of the mall's floor plan.
Haptics
Another emerging NUI technology is haptics, which refers to the sense of touch. Specifically, haptics implements vibration feedback through interaction with an NUI device. You'll find haptics in tablets, handsets, control interfaces, white goods, and really any device that involves human interaction through touch. The typical haptics feedback cycle starts when a touch event occurs and the applications processor is notified of the event. From there the applications processor finds the correct feedback to be implemented and sends that information to a haptics driver, which then drives the vibration actuator that generates the vibrations felt by the user. Haptics enhances a typical user experience by stimulating the currently under-utilized sense of touch.
Imagine a typical game of Angry Birds: instead of just watching your bird make impact with the barricaded pigs, you feel the impact; this is achieved with haptics. Researchers predict that haptics will improve task performance, increase user satisfaction and provide a more realistic user experience.
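
The feedback cycle just described can be summarized in a small sketch. The effect table and driver interface below are hypothetical, intended only to illustrate the touch-event-to-actuator flow, not any real haptics API.

# Minimal sketch of the haptics feedback cycle described above.
# The effect table and HapticsDriver interface are hypothetical.

EFFECT_TABLE = {
    "button_press": {"duration_ms": 20, "strength": 0.6},
    "game_impact":  {"duration_ms": 80, "strength": 1.0},
}

class HapticsDriver:
    def play(self, duration_ms, strength):
        # In a real system this would command the vibration actuator.
        print(f"vibrate {duration_ms} ms at strength {strength}")

def on_touch_event(event_name, driver):
    # 1. The applications processor is notified of the touch event.
    # 2. It looks up the feedback to be implemented.
    effect = EFFECT_TABLE.get(event_name)
    if effect:
        # 3. The haptics driver drives the actuator, which the user feels.
        driver.play(**effect)

on_touch_event("game_impact", HapticsDriver())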
Today's and tomorrow's NUIs will certainly involve a broad range of capabilities and technologies, including smart multi-core processors, embedded microcontrollers and microprocessors, power devices like converters, supplies and managers/supervisors, precision analog devices for touch-based interfaces and other purposes, interface components, wireless transceivers and standard linear devices for driving LEDs, as well as a variety of other functions. TI is one of the few technology suppliers with a product portfolio broad enough to meet all of these needs. Moreover, TI supports its technology through hardware development kits specifically for HMI, a wide range of software tools, development kits and already-ported code. Community is also a huge benefit to designers: the TI engineer-to-engineer (E2E) community includes peer collaboration and an ecosystem of complementary third-party suppliers and service providers. All of this is at the disposal of design teams that are building the next, revolutionary NUI.

CHAPTER-II
REQUIREMENTS SPECIFICATION

2.1 The WII REMOTE


Exploration To Enhance The Feeling Of Interactivity In Applications

Figure 2.1

2.1.1 INTRODUCTION:
Despite all the revolutionary advancements in technology, the basic interface between the human and the computer system has received relatively little attention in terms of evolution. Then Nintendo Co., Ltd., a Japanese multinational consumer electronics and software company, released its video gaming console, the Wii. This console is controlled by a wireless Bluetooth controller, the Wiimote. Through its pointing and motion-sensing abilities, Nintendo hoped to make a console accessible to people of all ages and abilities. Nintendo did not expect that users would re-engineer the device to do all sorts of things having nothing to do with playing video games. Our aim is to study the Wiimote's features and possibilities to enhance the feeling of interactivity in visual applications on the personal computer (Windows) platform. To set up a Wiimote with a computer, the following things are needed:
Wiimote
Bluetooth device
Windows operating system
LED sensor bar

2.1.2 INPUT DEVICE:


Wiimote
The Wiimote is a one-handed wireless input device/controller consisting of twelve buttons, four of which are arranged into a directional pad while the rest are spread over the device. Its symmetric design allows it to be used with either the left or the right hand; the figure gives an overview of the layout. The expansion port on the bottom of the unit is used to connect the remote to auxiliary controllers which augment the input options of the Wiimote.

Figure 2.1.1

Figure 2.1.2
Auxiliary controllers use the interface of the Wiimote to communicate with the host. One of the auxiliary controllers is the Nunchuk (see figure); it features an analog stick enabling more control options. The controller uses two AA batteries as a power source, which can power it for 30 hours if used with an auxiliary controller or 60 hours if used alone.
As the figure shows, the Wiimote contains four blue LEDs. During normal use these LEDs indicate that the remote is in Bluetooth discoverable mode (all blinking) or indicate the player number of the controller (only one illuminated). Furthermore the controller is provided with a rumble feature, via a motor with an unbalanced weight attached to it inside the Wiimote, which can be activated to cause the controller to vibrate. It also has a low-quality speaker used for short sound effects during use.

2.1.3 COMMUNICATION:
Communication with the host is accomplished through a Bluetooth wireless link. The Bluetooth controller is a Broadcom 2042 chip, which is designed to be used with devices that follow the Bluetooth Human Interface Device (HID) standard, such as keyboards and mice. Not all Bluetooth devices are compatible with the Wiimote.

2.1.4 MOTION SENSOR:


Internally the Wiimote has an integrated circuit called the ADXL330 accelerometer. This device has the ability to measure acceleration along three axes (see figure). The sensing element is supported by springs built out of silicon, and through the force exerted on these springs the sensor measures the acceleration of the Wiimote. Due to the sign convention used, this quantity is proportional to the net force exerted by the user's hand on the Wiimote when holding it. Besides the motion-sensing abilities provided by the accelerometer, the Wiimote is also augmented with an infrared image sensor on the front. This feature is designed to locate IR beacons within the user's field of view, thus creating position-sensing abilities. These beacons are transmitted by the sensor bar. By tracking the locations of the beacons relative to the Wiimote sensor, the system can derive more accurate pointing information. The Nintendo sensor bar contains 10 LEDs, five on each side; the LEDs closest to the center point inward and the farthest point outward. The LEDs enable the Wiimote to calculate the distance from the bar and even the angle of the device with respect to the ground (tilting and rotating). From these position values, other data (size and pixel value) can be requested. It is not necessary to use a sensor bar containing as many lights as Nintendo's version; a bar containing two LEDs will suffice.

Figure 2.1.3

Data from the 3-axis accelerometer:

- Senses instantaneous acceleration on the device (i.e., force) along each axis, in arbitrary units (+/- 3 g).
- Always sensing gravity:
  - at rest, the measured acceleration is g (upward);
  - in free fall, the measured acceleration is 0.
- Finding position and orientation:
  - at rest, roll and pitch can be calculated easily (see the sketch below);
  - in motion, the math gets more complex;
  - error accumulation causes problems;
  - often not needed, as gestures are sufficient.
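
As a minimal sketch of the "at rest" case mentioned in the list above: when the accelerometer is stationary, the measured vector is just gravity, so roll and pitch can be recovered from its direction. The Python below is illustrative only; the axis convention is an assumption and may not match the Wiimote's actual axes.

import math

def roll_pitch_from_accel(ax, ay, az):
    """Estimate roll and pitch (degrees) from a *stationary* 3-axis accelerometer.

    ax, ay, az are accelerations in units of g; at rest their vector sum is gravity.
    Assumed axis convention: x to the right, y forward, z upward.
    """
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    return math.degrees(roll), math.degrees(pitch)

# Example: lying flat and level the sensor reads roughly (0, 0, +1 g),
# giving roll = 0 and pitch = 0.
print(roll_pitch_from_accel(0.0, 0.0, 1.0))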

2.1.5 IR CAMERA:
The Wii Remote includes a 128x96 monochrome camera with built-in image processing. The camera looks through an infrared pass filter in the remote's plastic casing. The camera's built-in image processing is capable of tracking up to 4 moving objects, and these data are the only data available to the host; raw pixel data is not available, so the camera cannot be used to take a conventional picture. The built-in processor uses 8x subpixel analysis to provide 1024x768 resolution for the tracked points at 100 Hz.

Figure 2.1.4

This is a significant advantage over a web camera, which typically operates between 10 and 30 Hz. The camera has a horizontal field of view of 22 degrees and a vertical field of view of 23 degrees. The Sensor Bar that comes with the Wii includes two IR LED clusters, one at each end, which are tracked by the Wii Remote to provide pointing information. The distance between the centers of the LED clusters is 20 cm (as measured on one unit); a rough estimate of how the Wiimote-to-bar distance follows from the two tracked dots is sketched after the figure below.

Figure 2.1.5
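
The distance estimate mentioned above can be sketched from the numbers quoted in this section (1024-pixel tracking resolution, roughly 22-degree horizontal field of view, 20 cm between the LED clusters). The simple pinhole-camera model and the variable names below are assumptions, not the Wiimote's internal method.

import math

SENSOR_BAR_WIDTH_M = 0.20      # distance between the two LED clusters (m)
CAMERA_RES_X = 1024            # horizontal resolution of the tracked points
CAMERA_FOV_X_DEG = 22.0        # horizontal field of view quoted above

def distance_to_sensor_bar(dot1_x, dot2_x):
    """Estimate the Wiimote-to-sensor-bar range from the two dots' x coordinates."""
    pixel_separation = abs(dot1_x - dot2_x)
    if pixel_separation == 0:
        return None
    # Angle subtended by the sensor bar, assuming a simple pinhole camera model.
    angle = math.radians(CAMERA_FOV_X_DEG) * pixel_separation / CAMERA_RES_X
    # Half the bar width over the tangent of half the subtended angle gives the range.
    return (SENSOR_BAR_WIDTH_M / 2.0) / math.tan(angle / 2.0)

# Example: dots 100 pixels apart -> roughly 5.3 m (with the assumptions above).
print(distance_to_sensor_bar(460, 560))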

2.1.6 WII ACCELEROMETER AND GYRO:


The Wii MotionPlus attachment provides a three-axis gyroscope in addition to the Wiimote's built-in three-axis accelerometer.

Figure 2.1.6

Figure 2.1.7

2.1.7 SCRIPTING:
Many interpreters are available to access the functionality of the Wiimote. One such interpreter is GlovePie, which uses an interface that resembles a text editor. To use GlovePie as an interpreter for Wiimote signals, it is necessary to write scripts that map Wiimote inputs to specific keys or buttons. The syntax used in this program is similar to Java and BASIC.
Key.A = Wiimote.A
With GlovePie it is possible to use multiple Wiimotes, which can be distinguished by using consecutive numbers, with a maximum of 8. For instance, button A of different Wiimotes can be assigned to different keyboard keys:
Key.X = Wiimote1.A
Key.Y = Wiimote2.A
The available inputs of the Wiimote can be categorized into buttons, D-pad (directional pad), acceleration, rotation, sensor bar, LEDs and rumble.
Buttons:
Wiimote.A
Wiimote.B
Wiimote.Plus
Wiimote.Home
Wiimote.One
Wiimote.Two
Dpad:
Wiimote.Up
Wiimote.Down
Wiimote.Left
Wiimote.Right
Acceleration:

Wiimote.RelAccX / Wiimote.RawAccX
Wiimote.RelAccY / Wiimote.RawAccY
Wiimote.RelAccZ / Wiimote.RawAccZ
Rotation
Wiimote.Roll
Wiimote.Pitch
Sensor Bar (# = 1,2,3,4)
Wiimote.Dot#x
Wiimote.Dot#y
Wiimote.Dot#size
Wiimote.Dot#vis
LEDs
Wiimote.Led1
Wiimote.Led2
Wiimote.Led3
Wiimote.Led4
Rumble
Wiimote.Rumble
Speaker
Wiimote.Frequency
Wiimote.Volume
Wiimote.Speaker
Wiimote.Mute
Wiimote.SampleRate

2.1.8 USAGE:
The Wiimote has been re-purposed for a wide range of interactive applications, for example:

Musical interfaces: MIMI (Multi-Instrument Musical Interface) provides musical interfaces using a single device, mimicking real instruments such as guitar, violin/bass, drums, trombone and theremin, using heuristic recognition and exponential smoothing of the motion signals.

Dance interfaces: by attaching Wiimotes to the arms and ankles and heuristically evaluating the motion signals, we can achieve a full-body experience that feels like real dancing; the experience is untethered (no need to stand in one place or position), an increasing number of movements can be recognized, and cheating by inaccurate movements is prevented.

Game navigation: by using a leaning metaphor, using the Wiimote as a camera, and wearing an IR sensor bar, we can reduce interaction complexity in World of Warcraft, improve navigation and help the player concentrate on other tasks.

Exergaming (e.g. Wii soccer): by tracking the player's foot motion with a sensor bar attached to the leg and using the Wiimote as a camera, kicks, passes and player speed can be detected and natural locomotion can be traced.

Figure 2.1.8

Robot control: by using the rest orientation (tilt sensing from the accelerometers) to drive a robot, and gestures with a simple classifier (forward, back, turn left, turn right, stop), we can explore robot control using the Wiimote.

Figure 2.1.9

INFRA RED PEN

2.2 INFRA-RED PEN:


Required components:

A high-power infra-red light-emitting diode (LED). Infra-red light is invisible to the human eye; it is more like radiating heat than light. Take one LED that runs on 1.5 volts. Below is a picture of an LED and how it is symbolized in electronic wiring diagrams:

Figure 2.2

A 1.5 volt AA alkaline battery, together with the electronic symbol for a battery.

A momentary switch. When we press the button on such a switch, the LED turns on, although its infra-red light is not visible; when we release the button, the LED stops emitting IR light. The switch takes the role of the left mouse button on the PC. Here is its symbol:

About 50 centimetres each of thin, black and red coloured, flexible, insulated wire to connect the components.

Heat-shrinkable tube to insulate the connections.

A battery holder. It is not necessary, but it is useful when the battery has to be replaced quickly, for example during a lesson.

The electronic components can be connected in such a way that they form a circuit; here is the wiring diagram. When the contacts of the momentary switch are connected, the circuit is formed and the LED emits infra-red light.
Figure 2.2.1

Figure 2.2.3

Figure 2.2.2
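
In this circuit the single 1.5 V cell is already close to a typical IR LED forward voltage, so no separate current-limiting resistor appears in the diagram; the small voltage excess and the cell's internal resistance are effectively what limit the current. If a higher supply voltage were used instead (for example two AA cells giving about 3 V, which is an assumption and not part of this design), a series resistor sized as R = (V_supply - V_f) / I_f would be needed; with an assumed forward voltage of about 1.35 V and a desired forward current of 100 mA, R = (3 - 1.35) / 0.1 = 16.5 ohms, so a standard 18-ohm resistor would be a safe choice.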

SMOOTHBOARD 2

2.3 SMOOTHBOARD SOFTWARE


2.3.1 INTRODUCTION
Smoothboard allows you to easily transform your flat screen display (projected screen
or flat panel) into an interactive whiteboard with just a Wiimote and IR Pen.
Smoothboard provides an intuitive interface that enables users to interact with their
computer directly at the screen. Smoothboard also allows you to control the computer away
from the screen with the use of the Wiimote. With Smoothboard, you will have impressive
presentations that can be free from the mouse and keyboard.

Figure 2.3

2.3.2 KEY FEATURES


SmoothConnect - quickly learn and connect your Wiimotes automatically while
Smoothboard waits silently in the taskbar.

Multiple Wiimotes Support - allows the usage of a secondary Wiimote for redundancy

Use two Wiimotes for high quality tracking


Activate Whiteboard Mode and Presenter Mode simultaneously

Configurable Screen Area Tracking - calibrate a selected screen area and/or select
another display to be used as an interactive whiteboard

Built-in Annotation Feature - write and draw anywhere on any window

Size, Color, Shape (Line, Arrow, Rectangle, Ellipse, Scribble)


Undo/Redo (15 steps)
Select and Move/Resize
Screen Tools (Background Color (White/Black), Background Lines/Grid, Snapshot
Screen, Snapshot Region, Open Snapshot Folder).

Outside Screen Area Toggles and Floating Toolbar - control your presentations
effortlessly

Trigger Mouse Click - Right Click and Double Click


Trigger Key Press - allows multiple key combinations
Launch or execute any application/file which has a default viewer
Notification Balloon - displays triggered events

Smart Menu - allows quick access to Right Click, Middle Click (for scrolling) and all of the Floating Toolbar's functionality.

Activate Right Click or Smart Menu if the IR pen is held down for more than 1
second.

Calibration Viewer

Viewable calibration setup to allow easier adjustments for greater tracking utilization
Configurable infrared (IR) camera sensitivity to allow greater range or improved
accuracy

Presenter Mode - allows the control of the computer even when away from the screen
Cursor Control - with a stationary IR source

Key Press - using mapped Wiimote buttons


Timers - tool to keep track of timings

Laser Pointer Cursor

Cursor Smoothing - reduces jagged lines when drawing in Whiteboard Mode and
Presenter Mode

2.3.3 REQUIREMENTS
Hardware Requirements

A computer with a Bluetooth adaptor (Most laptops have built-in Bluetooth chipsets).
An infrared pen.
A Wiimote. Smoothboard supports up to two Wiimotes.

Software Requirements

Windows operating system. Smoothboard can be operated on Windows XP, Windows Vista, and Windows 7, in both 32-bit and 64-bit versions.
A Bluetooth stack, which enables the computer to connect to the Wiimote; ensure that the laptop has Bluetooth connectivity.
Microsoft .NET Framework 3.5 installed in the operating system. If it is not installed, an application error appears: "The application failed to initialize properly (0xc0000135)".

2.3.4 INSTALLATION OF SMOOTH BOARD SOFTWARE

Launch the installer and select the language for


the installer.

Figure 2.3.1

Review the license agreement and click on I Agree

Figure 2.3.2

Create the Start Menu shortcut and click on Next.

Figure 2.3.3

Select the destination folder where the


Smoothboard has to be installed. Click on the
Install button to proceed with the installation.

Figure 2.3.4

The Smoothboard installation is now complete.


Click Close to exit the installer.

Figure 2.3.5

Connect the Wiimote as a Bluetooth device to the computer.

Launch the Smoothboard software and press the A button on the Wiimote.
SmoothConnect will automatically learn and connect to the Wii Remote.

Figure 2.3.6

When the connection is complete, the Smoothboard main window will be loaded; this window shows the status of the Wiimote.

Calibration of the device with the Screen:

After loading, Smoothboard will need to be calibrated with an IR pen.
Click on the Quick Calibration button in the main window.
Face the Wiimote towards the centre of the screen.
Using the IR pen, click the calibration markers shown on the screen in the following sequence: top left corner, top right corner, bottom right corner and bottom left corner. After completing the four points, the Calibration Window will disappear.

Figure 2.3.7

Figure 2.3.8

On Smoothboard's main window, check the calibration by looking at the Calibration Viewer and the Tracking Utilization. The white-colored area, which represents the screen, should be within the black-colored area. The tracking utilization should be more than 30% for a screen the size of a whiteboard; this value may need to be higher if the screen is very large.
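
The four-corner calibration just described effectively computes a perspective mapping (a homography) from the Wiimote camera's coordinate frame to screen pixels. The sketch below illustrates that idea with NumPy; it is not Smoothboard's actual implementation, and the sample coordinates are made up.

import numpy as np

def homography_from_corners(cam_pts, scr_pts):
    """Solve for the 3x3 homography mapping 4 camera points to 4 screen points."""
    A, b = [], []
    for (x, y), (u, v) in zip(cam_pts, scr_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def camera_to_screen(H, x, y):
    """Map one IR-pen dot from camera coordinates to screen pixels."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Hypothetical calibration clicks: camera dots seen at the four screen corners.
cam = [(180, 130), (860, 150), (880, 690), (160, 670)]
scr = [(0, 0), (1365, 0), (1365, 767), (0, 767)]
H = homography_from_corners(cam, scr)
print(camera_to_screen(H, 512, 400))   # an IR dot near the middle of the camera's view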

Working with the smoothboard software


Configure the IR pen usage settings from the configuration settings available in the Smoothboard software.
Configure the detection for a mouse event.
Now you will be able to control the mouse cursor by clicking with the IR pen, similar to the operation of an ordinary mouse.
By configuring the mouse events we will be able to perform the following operations (a sketch of how such events could be classified follows this list):

Single Click:

Clicking and releasing the IR pen without moving it generates a single mouse click event.

Double Click:

Clicking, releasing, clicking and releasing the IR pen without moving it implements the double click/selection operation.

Drag:

Clicking and holding while moving the IR pen, then releasing it, implements the drag operation.

Right Click / Smart Menu:

Clicking and holding without moving for more than 1 second triggers the Smart Menu and right-click functionality.
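
The four operations above could, in principle, be distinguished from the IR dot's timing and movement alone. The following Python sketch only illustrates that idea; the thresholds and class structure are assumptions, not Smoothboard's internals.

import time

MOVE_TOLERANCE_PX = 5        # movement below this counts as "not moving"
RIGHT_CLICK_HOLD_S = 1.0     # holding longer than this triggers right click / Smart Menu
DOUBLE_CLICK_GAP_S = 0.4     # two clicks closer together than this form a double click

class PenClassifier:
    def __init__(self):
        self.down_pos = None
        self.down_time = 0.0
        self.last_click_time = -10.0

    def pen_down(self, x, y):
        # IR dot appeared: the pen tip was pressed.
        self.down_pos, self.down_time = (x, y), time.time()

    def pen_up(self, x, y):
        # IR dot disappeared: classify what the user just did.
        held = time.time() - self.down_time
        moved = abs(x - self.down_pos[0]) + abs(y - self.down_pos[1]) > MOVE_TOLERANCE_PX
        if moved:
            event = "drag"
        elif held > RIGHT_CLICK_HOLD_S:
            event = "right_click_or_smart_menu"
        elif time.time() - self.last_click_time < DOUBLE_CLICK_GAP_S:
            event = "double_click"
        else:
            event = "single_click"
        self.last_click_time = time.time()
        return event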

MICROSOFT .NET FRAMEWORK

2.4 MICROSOFT .NET Framework 3.5 or 4:


.NET Framework (pronounced dot net) is a software framework developed by
Microsoft that runs primarily on Microsoft Windows. It includes a large class library known
as Framework Class Library (FCL) and provides language interoperability (each language
can use code written in other languages) across several programming languages. Programs
written for .NET Framework execute in a software environment (as contrasted to hardware
environment), known as Common Language Runtime (CLR), an application virtual machine
that provides services such as security, memory management, and exception handling. FCL
and CLR together constitute .NET Framework.

CHAPTER-III
SETUP

3.1 BLOCK DIAGRAM

Arrange the components as shown in the figure below.
Ensure that the distance between the laptop and the Wiimote is in the range of 0.5 m to 5 m.
The IR camera of the Wiimote should point at the centre of the laptop screen.
Place the laptop screen at an angle of about 45 degrees to the normal.
A rough check of the camera's coverage at a given distance is sketched after the figure.

Figure 3.1
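
A rough way to check whether the Wiimote can see the whole screen at a chosen distance is to compute the camera's coverage from the field-of-view figures quoted in section 2.1.5. The sketch below uses a simple pinhole approximation, and the numbers are assumptions for illustration.

import math

FOV_X_DEG, FOV_Y_DEG = 22.0, 23.0    # field-of-view figures quoted in section 2.1.5

def coverage_at_distance(distance_m):
    """Width and height (m) of the area the IR camera can see at a given range."""
    w = 2.0 * distance_m * math.tan(math.radians(FOV_X_DEG) / 2.0)
    h = 2.0 * distance_m * math.tan(math.radians(FOV_Y_DEG) / 2.0)
    return w, h

# Example: a hypothetical 34 cm x 19 cm laptop screen at 1 m range.
w, h = coverage_at_distance(1.0)
print(f"covered area at 1 m: {w:.2f} m x {h:.2f} m")   # about 0.39 m x 0.41 m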

3.2 FLOW CHART

Figure 3.2

CHAPTER-IV
OUTPUT SCREENS

4.1 CALIBRATION:

Figure 4.1

Figure 4.1.1

4.2 POWER POINT PRESENTATION:

Figure 4.2

4.3 MOUSE DRAG OPERATION:

Figure 4.3

4.4 CLICK OPERATIONS:

Figure 4.4

Figure 4.4.5

CONCLUSION

5.1 SUMMARY:

BIBLIOGRAPHY

6.1 BIBLIOGRAPHY

Smoothboard, http://www.smoothboard.net
Carnegie Mellon School of Computer Science, https://www.cs.cmu.edu/~bam/pub/uimsCRCrevised_C360X_CH48.pdf
University of Leipzig, http://home.unileipzig.de/ehrlich/html/airborne/FLUENT_graphical_user_interface.pdf
www.knowledgegainers.com
Texas Instruments, http://www.ti.com/lit/wp/spry181/spry181.pdf
Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-101.pdf
Graphical user interface - Wikipedia, the free encyclopedia
BrianPeek.com | A Compendium of Random Uselessness
Microsoft, http://www.microsoft.com/en-us/download/details.aspx?id=21
Wigdor and Wixon, Brave NUI World, http://www.gm.fh-koeln.de/~hk/lehre/sgmci/ss2015/Literatur/Wigdor_Wixon__Brave_NUI_World.pdf
Wii Remote physics, http://wiiphysics.site88.net/physics.html
Wiimote digital whiteboard, http://wyxs.net/web/wiimote/digital_whiteboard.html
Google Images, https://www.google.co.in/search?q=wiimote+internal+circuitry&biw=934&bih=563&source=lnms&tbm=isch&sa=X&ved=0CAYQ_AUoAWoVChMIhtyx8KbyQIVlQOOCh2C0wPO#imgdii=OHLRNzQltF9oPM%3A%3BOHLRNzQltF9oPM%3A%3BGI_RydFZj2OWfM%3A&imgrc=SDOm6uf0hxmPNM%3A
