

Touching and Gesturing on the iPhone


July 10th, 2008 at 11:28 pm by Neil Roberts

Everyone who owns an iPhone (or who has been holding out for an iPhone 3G) is bound to be excited about a lot of the new things the device can finally do, particularly the introduction of third-party applications. But those of us in the web development community have been itching for something further still: good web applications on the iPhone. This means we need a suitable replacement for mouse events. And boy did we get them! Though at first the APIs seem a little sketchy, once you’ve learned them you should be able to do amazing things in your application.

I’ll start with how to set up the iPhone console, since I found it invaluable while testing. Under Settings > Safari > Developer, you can turn it on or off. Simple log, error, and warn functions are provided (as part of the console object), all of which accept a single object.
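Since each console function accepts only a single object, several values have to be joined into one string before logging. A minimal helper along those lines might look like this (`logJoined` is a name of my own invention, not part of any Apple API):

```javascript
// Hedged sketch: the iPhone console functions take one argument each,
// so join multiple values into a single string first.
// "logJoined" is a hypothetical helper, not a built-in.
function logJoined(){
  var parts = [];
  for (var i = 0; i < arguments.length; i++){
    parts.push(String(arguments[i]));
  }
  var message = parts.join(" ");
  console.log(message); // a single object, as the console expects
  return message;
}
```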
My quest to understand the API led me to this Apple Developer Connection page that, while providing pretty thorough documentation about what’s available, left me a little confused about the details. Also, if you aren’t a member of ADC, trying to follow this link will leave you even more confused.
Clearing it Up
Apple introduced two new ideas with this API: touches and gestures. Touches are important for keeping track of how many fingers are on the screen, where they are, and what they’re doing. Gestures are important for determining what the user is doing when they have two fingers on the screen and are either pinching, pushing, or rotating them.
Touches
When you put a finger down on the screen, it kicks off the lifecycle of touch events. Each time a new finger touches the screen, a new touchstart event happens. As each finger lifts up, a touchend event happens. If, after touching the screen, you move any of your fingers around, touchmove events happen.

We have the following touch events:


touchstart: Happens every time a finger is placed on the screen

touchend: Happens every time a finger is removed from the screen

touchmove: Happens as a finger already placed on the screen is moved across the screen

touchcancel: The system can cancel events, but I’m not sure how this can happen. I thought it might happen when you receive something like an SMS during a drag, but I tested that with no success
node.ontouchstart = function(evt){
    console.log(evt.pageX + "/" + evt.pageY);
    // OH NO! These values are blank, this must be a bug
}
My first mistake was monitoring these events and trying to get location information from the events (pageX, pageY, etc). After consulting the ADC documentation again, I learned about three event lists that come attached to the object. But I wasn’t sure what they did, so I went back to testing, logging, and experimenting.

It helped when I figured out the problem the Apple developers were trying to solve. With a mouse, you really only have one point of contact: through the cursor. With your hand, you can keep two fingers held down on the left of the screen while you keep tapping the right side of the screen.

Our event object has a list, and this list contains information for every finger that’s currently touching the screen. It also contains two other lists, one which contains only the information for fingers that originated from the same node, and one which contains only the information for fingers that are associated with the current event. These lists are available to every touch event.

We have the following lists:

touches: A list of information for every finger currently touching the screen

targetTouches: Like touches, but is filtered to only the information for finger touches that
started out within the same node

changedTouches: A list of information for every finger involved in the event (see below)

To better understand what might be in these lists, let’s quickly go over some examples:

When I put a finger down, all three lists will have the same information. It will be in
changedTouches because putting the finger down is what caused the event

When I put a second finger down, touches will have two items, one for each finger.
targetTouches will have two items only if the finger was placed in the same node as the first
finger. changedTouches will have the information related to the second finger, because it’s what
caused the event

If I put two fingers down at exactly the same time, it’s possible to have two items in
changedTouches, one for each finger

If I move my fingers, the only list that will change is changedTouches, which will contain information for as many fingers as have moved (at least one).

When I lift a finger, it will be removed from touches, targetTouches and will appear in
changedTouches since it’s what caused the event

Removing my last finger will leave touches and targetTouches empty, and changedTouches
will contain information for the last finger
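To make those relationships concrete, here is a small sketch. The object below only simulates the shape of a real Mobile Safari touch event, and `describeTouches` is a helper name I made up:

```javascript
// Hedged sketch: summarizing the three lists a touch event carries.
// "describeTouches" is a hypothetical helper; "simulatedEvent" only
// mimics the shape of a real touch event object.
function describeTouches(evt){
  return evt.touches.length + " on screen, " +
         evt.targetTouches.length + " on the target node, " +
         evt.changedTouches.length + " in this event";
}

// Two fingers are on screen; the second landed on a different node
// and is what caused this touchstart
var simulatedEvent = {
  touches: [{identifier: 0}, {identifier: 1}],
  targetTouches: [{identifier: 0}],
  changedTouches: [{identifier: 1}]
};
```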

Using these lists, I can keep very close tabs on what the user is doing. Imagine creating a(nother) Super
Mario clone in JavaScript. I’d be able to tell what direction the user currently has his or her thumb on,
while also being able to watch for when the user wants to jump or shoot a fireball.

I’ve been saying that these lists contain information about the fingers touching the screen. These objects are very similar to what you’d normally see in an event object passed to an event handler, but only a limited set of properties is available. Following is the full list of properties for these objects:

clientX: X coordinate of touch relative to the viewport (excludes scroll offset)

clientY: Y coordinate of touch relative to the viewport (excludes scroll offset)

screenX: X coordinate of touch relative to the screen

screenY: Y coordinate of touch relative to the screen

pageX: Relative to the full page (includes scrolling)

pageY: Relative to the full page (includes scrolling)

target: Node the touch event originated from

identifier: An identifying number that stays with a given finger for as long as it touches the screen, letting you match the same finger across events
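Because the identifier stays stable for a given finger, you can follow one finger through a list across events. A sketch of that idea (the helper name is mine, not a built-in):

```javascript
// Hedged sketch: find the entry for a particular finger in a touch list.
// "findTouchById" is a hypothetical helper, not part of the API.
function findTouchById(touchList, id){
  for (var i = 0; i < touchList.length; i++){
    if (touchList[i].identifier === id){
      return touchList[i];
    }
  }
  return null;
}

// Usage idea (assumes a Mobile Safari touch handler and a trackedId
// recorded earlier in touchstart):
// node.ontouchmove = function(evt){
//   var t = findTouchById(evt.changedTouches, trackedId);
//   if (t){ console.log(t.pageX + "/" + t.pageY); }
// };
```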

For those of you coming from the normal web design world, in a normal mousemove event, the node
passed in the target attribute is usually what the mouse is currently over. But in all iPhone touch events,
the target is a reference to the originating node.

One of the annoyances of writing web applications for the iPhone has been that even if you set a viewport
for your application, dragging your finger around will move the page around. Fortunately, the
touchmove’s event object has a preventDefault() function (a standard DOM event function) that will
make the page stay absolutely still while you move your finger around.
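A minimal handler along those lines might look like this (a sketch; on a real page you would attach it to a node, and your own move handling would follow the preventDefault() call):

```javascript
// Hedged sketch: a touchmove handler that stops the page from panning.
function keepPageStill(evt){
  evt.preventDefault(); // standard DOM call; Mobile Safari skips its scroll
  // ...your own move handling would go here...
}

// On an iPhone you would attach it with:
// node.ontouchmove = keepPageStill;
```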

Drag and Drop with the Touch API


We don’t have to worry about keeping track of down/up events as we do with mousemove, since the only way touchmove is triggered is after touchstart.

node.ontouchmove = function(e){
    if(e.touches.length == 1){ // Only deal with one finger
        e.preventDefault(); // Keep the page from scrolling while we drag
        var touch = e.touches[0]; // Get the information for finger #1
        var node = touch.target; // Find the node the drag started from
        node.style.position = "absolute";
        node.style.left = touch.pageX + "px";
        node.style.top = touch.pageY + "px";
    }
}

Gestures
This was much easier to figure out than the touch API. A gesture event occurs any time two fingers are
touching the screen. If either finger lands in the node you’ve connected any of the gesture handlers
(gesturestart, gesturechange, gestureend) to, you’ll start receiving the corresponding events.

scale and rotation are the two important keys of this event object. While scale gives you the
multiplier the user has pinched or pushed in the gesture (relative to 1), rotation gives you the amount
in degrees the user has rotated their fingers.

Resizing and Rotating with the Gestures API


We’ll be using WebKit’s transform property to rotate the node.

var width = 100, height = 200, rotation = 0;

node.ongesturechange = function(e){
var node = e.target;
// scale and rotation are relative values,
// so we wait to change our variables until the gesture ends
node.style.width = (width * e.scale) + "px";
node.style.height = (height * e.scale) + "px";
node.style.webkitTransform = "rotate(" + ((rotation + e.rotation) % 360) + "deg)";
}

node.ongestureend = function(e){
// Update the values for the next time a gesture happens
width *= e.scale;
height *= e.scale;
rotation = (rotation + e.rotation) % 360;
}

Conflicts
Some readers might have noticed that a gesture is just a prettier way of looking at touch events. It’s
completely true, and if you don’t handle things properly, you can end up with some odd behavior.
Remember to keep track of what’s currently happening in a page, as you’ll probably want to let one of
these two operations “win” when they come in conflict.
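One way to arbitrate (my own sketch, not code from the demo) is a small flag that the gesture handlers set, so single-finger dragging backs off while a gesture is running. `makeConflictGuard` is a hypothetical helper name:

```javascript
// Hedged sketch: let gestures "win" over single-finger dragging.
// "makeConflictGuard" is a hypothetical helper of my own invention.
function makeConflictGuard(){
  var gesturing = false;
  return {
    gestureStart: function(){ gesturing = true; },
    gestureEnd: function(){ gesturing = false; },
    mayDrag: function(){ return !gesturing; }
  };
}

// Usage idea on a real node:
// var guard = makeConflictGuard();
// node.ongesturestart = guard.gestureStart;
// node.ongestureend = guard.gestureEnd;
// node.ontouchmove = function(evt){
//   if (!guard.mayDrag()){ return; } // the gesture handlers own this input
//   // ...drag logic...
// };
```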

In Action
I put together a quick demo: a simple application that showcases the incredible flexibility and power of these APIs. It’s a gray square that can have its colors and borders restyled, can be dragged around, and can be resized and rotated.

Load http://tinyurl.com/sp-iphone up on your iPhone and try the following:

Keep a finger over one of the colored squares, and put another finger on one of the border
squares

Try the same thing using two colored squares or two border squares

Use one finger to drag the square around the page

Pinch and rotate the square

Start dragging the square, but put another finger down and turn it into a pinch and rotate. Lift
one of your fingers back up, and resume dragging the square around

Can I Do X?
I’m not sure what sort of APIs we’ll be able to build on top of what Apple has provided for us. What I do
know is that Apple has given us a very well thought out API.

mousedown and mouseup are events we can easily emulate with this new API. mousemove is a beast. First
of all, we only get touch events after the finger has made contact (the equivalent of mousedown) while we
get mousemove events regardless of whether the button is down or not. Also, preventing the page from
jumping around isn’t something we can automate. Attach a handler to the document and the user
wouldn’t be able to scroll at all!
Which brings us to DnD in general. Even though DnD only cares about mousemove in the context of the mouse button being down (the way that touchmove works), we don’t have any way to tell what node the user’s finger is over at the end of the drag (since target refers to the originating node). If a DnD system is to be used, it would have to be for registered drop targets that are aware of their position and size on the page.

11 Responses to “Touching and Gesturing on the iPhone”

Chris says:
Posted July 11th, 2008 at 1:32 am

Cool demo Neil. Thanks for the article.


Are you using the Canvas renderer with gfx in this demo or is it also supporting svg now?

nroberts says:
Posted July 11th, 2008 at 8:05 am

This is all just plain old HTML and CSS with the addition of WebKit’s transform property

Ajaxian » iPhone Web Goodies: Drag and Drop with Touch, Resize and Rotate with Gestures says:
Posted July 11th, 2008 at 8:16 am

[…] The video above shows a simple showcase application that Neil Roberts of SitePen created and
wrote about. […]

Dan Kantor says:


Posted July 11th, 2008 at 2:46 pm

Looks like Apple left out Safari full screen mode. They alluded to it a few months back with a meta
tag:

Gestures combined with SQLite and Full Screen would have allowed us to create web apps very close to native apps. I hope it shows up in a later release.

iPhone Microsites - iPhone Web Development - Gesturing and Touches says:


Posted July 11th, 2008 at 3:46 pm

[…] applications for web based delivery of content to the iPhone. Check out the full article here.
Submit this page to other blogs: These icons link to social bookmarking sites where readers can […]

tlrobinson.net / blog » Blog Archive » Multitouch JavaScript “Virtual Light Table” on iPhone v2.0 says:
Posted July 11th, 2008 at 5:04 pm

[…] a good overview of touch events and gestures, check out this SitePen blog post and Apple’s […]

James says:
Posted July 12th, 2008 at 1:29 am

Wow, this looks amazing. Will be interesting to see how far one can take this. A version of
CanvasPaint that actually works in mobile safari, anyone?

Tom says:
Posted July 12th, 2008 at 6:25 pm

I think this stuff is still under NDA, or have you heard otherwise?

Dylan says:
Posted July 12th, 2008 at 10:48 pm

@Tom: with the iPhone 2.0 software live for a few days now, this is publicly discoverable knowledge
by introspecting on the DOM using standard JavaScript techniques.

coderkind.com » Blog Archive » iPhone touching and gesturing says:


Posted July 13th, 2008 at 7:27 am

[…] interesting article here on some of the things you can achieve with HTML and JavaScript for
iPhone’s Safari browser. […]

Tom says:
Posted July 13th, 2008 at 2:57 pm

@Dylan: good point.

Copyright 2008 SitePen, Inc. All Rights Reserved
