
Fundamentals Of Pervasive Computing Unit-1 Pervasive Architecture

Introduction: Pervasive Computing and its Environment:
The essence of the pervasive computing vision is the creation of environments saturated with computing and communication capability, yet gracefully integrated with human users. The most important characteristics of pervasive environments are:
Heterogeneity: Computing is carried out on a wide spectrum of client devices, each with different configurations and functionalities.
Prevalence of "Small" Devices: Many devices are small, not only in physical size but also in computing power, memory size, etc.
High Mobility: Users can carry devices from one place to another without interrupting services.
User-Oriented: Services are associated with the user rather than with a specific device or a specific location.
Highly Dynamic Environment: Users and devices continually move in and out of a volatile network.

Pervasive Computing (Ubiquitous Computing):
Pervasive computing integrates computation into the environment, rather than treating computers as distinct objects. It encompasses a wide range of research topics, including distributed computing, mobile computing, sensor networks, human-computer interaction, and artificial intelligence. The aim of ubiquitous computing is to design computing infrastructures in such a manner that they integrate seamlessly with the environment and become almost invisible.

Local Area Networks:
What is a network? It is simply two or more devices that communicate with one another over some type of electronic connection. The connection itself can be copper wire, fiber-optic cable, or radio waves. There are all sorts of networks in use today, including the broadcast and cable television networks, the public telephone network, several cellular telephone networks, and the Internet. A local area network (LAN) is a network of computers located physically close to one another. (The Internet, by the way, is a WAN, or wide area network, that connects millions of LANs.)
A LAN consists of two or more computers, each equipped with a communications device called a network interface or network adapter. The network interfaces are connected to one another by some type of communications medium, which provides a pathway for the electrical signals that connect all of the computers on a LAN. The most widely used, cost-effective, and highest-performance network medium in use today is twisted-pair Ethernet cable, often called CAT5 or CAT6 cable. (CAT is short for "category"; there are several grades of cable that can be used for Ethernet LANs.) A relatively new technology called wireless Ethernet uses radio signals instead of copper cable as the communications medium.

Network Topologies:
There are five different types of topologies: a) Bus b) Star c) Ring d) Mesh e) Tree. When a network is designed using multiple topologies, it is called a hybrid network; this approach is usually used in complex networks where a large number of client computers is required.

Bus Topology: Bus topology is one of the easiest topologies to install, and it does not require a lot of cabling. Bus-topology networks work well only with a limited number of devices: they perform fine as long as the computer count remains within about 12 to 15, but problems occur as the number of computers increases. Bus topology uses one common cable (the backbone) to connect all devices in the network in a linear shape.

Ring Topology: Ring topologies are similar to bus topologies, except they transmit in one direction only, from station to station. Typically, a ring architecture will use separate physical ports and wires for transmit and receive. Token Ring is one example of a network technology that uses a ring topology.

Star Topology: This is the most commonly used network topology design you will come across in LAN computer networks. In a star, all computers are connected to a central device, called a hub, router, or switch, using Unshielded Twisted Pair (UTP) or Shielded Twisted Pair cables. Star topology requires more connecting devices (such as routers and cables) than bus topology, where the entire network is supported by a single backbone.

Tree Topology: Just as the name suggests, this network design looks a little confusing and complex at first, but with a good understanding of the Star and Bus topologies, Tree is very simple. Tree topology is basically a mixture of many star topology designs connected together using a bus topology. Devices such as hubs can be directly connected to the tree's bus, and each hub then acts as the root of a tree of network devices. Tree topology is very dynamic in nature, and it offers far better potential for network expansion than topologies like Bus and Star.

Mesh Topology: Mesh topology is designed around the concept of routing. It uses routers to choose the shortest path to the destination. In topologies like star and bus, a message is broadcast to the entire network and only the intended computer accepts it, but in a mesh the message is sent only toward the destination computer, with routers finding the route. The Internet is based on a mesh topology. Routers play an important role in a mesh: they are responsible for routing each message to its destination address or computer. When every device is connected directly to every other device, it is known as a full mesh topology; when devices are connected to each other only indirectly, it is called a partial mesh topology.
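The cost of a full mesh can be made concrete by counting links: n devices need n(n-1)/2 point-to-point connections. A short illustrative sketch (Python):

```python
def full_mesh_links(n: int) -> int:
    """Number of point-to-point links in a full mesh of n devices."""
    return n * (n - 1) // 2

# The link count grows quadratically, which is why full mesh cabling
# is rarely practical for large LANs and partial mesh is used instead.
for n in (4, 8, 16):
    print(n, "devices ->", full_mesh_links(n), "links")
```

For 16 devices a full mesh already needs 120 links, while a star needs only 16.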

Router:
A router is a device that forwards data packets across computer networks. Routers perform the "traffic directing" functions on the Internet. A router is a microprocessor-controlled device that is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table, it directs the packet to the next network on its journey. A data packet is typically passed from router to router through the networks of the Internet until it reaches its destination computer. Routers also perform other tasks, such as translating the data transmission protocol of the packet to the appropriate protocol of the next network, and preventing unauthorized access to a network through the use of a firewall.
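The routing-table lookup described above can be sketched in a few lines. This is an illustrative toy only (the prefixes and next-hop addresses are made up); real routers perform longest-prefix matching in specialized hardware:

```python
import ipaddress

# Hypothetical routing table: (destination prefix, next hop).
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "192.168.1.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "192.168.1.2"),
    (ipaddress.ip_network("0.0.0.0/0"), "192.168.1.254"),  # default route
]

def next_hop(dest: str) -> str:
    """Pick the matching route with the longest prefix (most specific)."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.1.2.3"))   # most specific match wins: 192.168.1.2
print(next_hop("8.8.8.8"))    # falls through to the default route
```

A destination that matches several prefixes is always sent to the most specific one; the 0.0.0.0/0 default catches everything else.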

Bridges:
A bridge device filters data traffic at a network boundary. Bridges reduce the amount of traffic on a LAN by dividing it into two segments. Bridges operate at the data link layer (Layer 2) of the OSI model. Bridges inspect incoming traffic and decide whether to forward or discard it. An Ethernet bridge, for example, inspects each incoming Ethernet frame, including the source and destination MAC addresses and sometimes the frame size, when making individual forwarding decisions. Bridges serve a similar function to switches, which also operate at Layer 2. Traditional bridges, though, support a single network boundary, whereas switches usually offer four or more hardware ports. Switches are sometimes called "multi-port bridges" for this reason.
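The forward-or-discard decision a bridge makes can be illustrated with a toy "learning bridge" that records which segment each source MAC address was last seen on. The MAC addresses and segment names below are hypothetical:

```python
class LearningBridge:
    """Toy Layer-2 bridge: learns which segment each MAC address lives on,
    then forwards or discards frames accordingly (illustrative sketch)."""

    def __init__(self):
        self.table = {}  # MAC address -> segment it was last seen on

    def handle_frame(self, src_mac, dst_mac, arrived_on):
        self.table[src_mac] = arrived_on          # learn the source
        seg = self.table.get(dst_mac)
        if seg is None:
            return "flood"                        # unknown destination
        if seg == arrived_on:
            return "discard"                      # same segment: no need to forward
        return f"forward to {seg}"

bridge = LearningBridge()
bridge.handle_frame("aa:aa", "bb:bb", "segment-1")         # bb:bb unknown -> flood
print(bridge.handle_frame("bb:bb", "aa:aa", "segment-2"))  # forward to segment-1
```

Discarding frames whose destination is on the same segment is exactly how a bridge keeps local traffic from crossing the boundary.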

Hub:
A hub is a small rectangular box, often made of plastic, that receives its power from an ordinary wall outlet. A hub joins multiple computers (or other network devices) together to form a single network segment. On this network segment, all computers can communicate directly with each other. Ethernet hubs are by far the most common type, but hubs for other types of networks, such as USB, also exist. A hub includes a series of ports that each accept a network cable. Small hubs connect four computers: they contain four or sometimes five ports, the fifth port being reserved for an "uplink" connection to another hub or similar device. Larger hubs contain eight, 12, 16, or even 24 ports.

Wireless LANs- Standards and Protocols


Standard    Data rate    Band
802.11a     54 Mbps      5 GHz
802.11b     11 Mbps      2.4 GHz
802.11g     54 Mbps      2.4 GHz

One of the main concerns in the development of wireless technology is standardization. Standardization provides guidelines for the development process, divides the implementations of the technology, and sets rules for these divisions. These sets of rules form what are called protocols, which make it easier for devices from various manufacturers to communicate with each other in a well-defined way. The IEEE took the lead in the standardization of wireless LANs by starting work on the 802.11 family of protocols in 1990, although the standard was released seven years later. This family of protocols contains ten members. The first member is 802.11, the original protocol for wireless LANs, with a data rate of up to 2 Mbps in the 2.4 GHz band. The second is 802.11a, with a data rate of up to 54 Mbps in the 5 GHz band; it is high speed and supports multimedia voice, video and large-image applications in a crowded network better than 802.11b. The next, introduced in 2000, is 802.11b, an extension of the original 802.11 standard with a data rate of up to 11 Mbps in the 2.4 GHz band; it can cover a wider area than 802.11a with fewer access points. The 802.11g standard, which is compatible with 802.11b and expected to replace it, was released in 2003; it has a data rate of up to 54 Mbps in the 2.4 GHz band and improves both the speed of communications and the security of the wireless LAN. More recently, a new standard merged the previous three standards with another five less-used standards (d, e, h, i and j) into one suite called IEEE 802.11-2007. Two advanced standards awaiting release at the time of writing were 802.11n and 802.11s. The main advantage of 802.11n is its support for Multiple-Input Multiple-Output (MIMO), which allows receivers and transmitters to have multiple antennas to increase communication performance; it was predicted to reach data rates of up to 500 Mbps. Work on the 802.11s standard started in 2003; its main purpose is mesh networking, in which nodes find paths to data even when some network devices are missing or broken. [1, 2, 5, 6]

How it works
A Wi-Fi network is a wireless network that uses radio waves. The radios used for Wi-Fi are similar to the radios used for televisions, mobile phones and walkie-talkies: they can send and receive radio waves, and they can convert 1s and 0s into radio waves and back again. Wi-Fi radios, however, transmit at frequencies of 2.4 GHz or 5 GHz, which is higher than the frequencies used for cell phones and walkie-talkies.

Basically, a wireless network is built from two units. The first is the wireless transmitter (wireless adapter), which can be either built in or plugged into a PC card slot or USB port. The second, more important, unit is the wireless router, which contains five parts: a port to connect to your cable or DSL modem, a router, an Ethernet hub, a firewall, and a wireless access point. The network works in the following way. First, the data from the computer is converted into radio signals by the wireless adapter and transmitted to the wireless router via the antenna. The router then receives and decodes the incoming signals. Finally, the information is sent to the Internet over a physical, wired Ethernet connection. The process also operates in reverse: information received by the router from the Internet is translated into radio signals and sent to the wireless adapter, which in turn converts it into data used by the computer. Because each device has its own wireless adapter, many devices can use the same router to connect to the Internet.

As mentioned above, Wi-Fi radios can transmit on either of two frequency bands, 2.4 GHz and 5 GHz. In addition, they can alternate (hop) very quickly between frequencies. This frequency hopping reduces interference and allows multiple devices to use the same connection concurrently.

For security, there are different methods of making a wireless network private so that nobody else can use its signal or network. One important method is MAC (Media Access Control) address filtering, which does not use a password to control access to the network; instead, it uses the MAC address. Every network interface has a unique MAC address, and only machines whose MAC addresses are on the allowed list are permitted to access the network. When installing the router, the allowed addresses must be specified so that those machines can access the network.
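The allow-list behaviour described above can be sketched as follows. The addresses are hypothetical; note also that MAC addresses can be spoofed, so filtering is basic access control rather than strong security:

```python
# Hypothetical allow-list configured on the router at install time.
ALLOWED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def admit(mac: str) -> bool:
    """MAC address filtering: admit only stations whose hardware
    address appears on the configured allow list."""
    return mac.lower() in ALLOWED_MACS  # case-insensitive compare

print(admit("00:1A:2B:3C:4D:5E"))  # True  (listed address)
print(admit("de:ad:be:ef:00:01"))  # False (not on the allow list)
```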

Advantages and Disadvantages


Advantages: flexibility; easy set-up; application transparency.
Disadvantages: wireless security is weaker than wired security; implementation of WEP is time-consuming.

Wireless LAN Configurations


Currently, most wireless LANs (WLANs) are based on the IEEE 802.11b, 802.11a or 802.11g standards. These standards define how to wirelessly connect computers or devices to a network. Wireless-enabled devices send and receive data indoors and out, anywhere within the range of a wireless access point. The choice of standard depends on your requirements, including data communications speed and range, the level of security, noise and interference concerns, compatibility issues, and cost. 802.11b was the first 802.11 standard to be released and to have commercial products available. Also called Wireless Fidelity, or Wi-Fi, it has a range suitable for use in big office spaces. Wi-Fi is currently the most popular and least expensive wireless LAN specification. It operates in the unlicensed 2.4 GHz radio spectrum and can transmit data at speeds up to 11 Mbps within a 30 m range. It can be affected by interference from mobile phones and Bluetooth devices, which can reduce transmission speeds. 802.11a has a couple of advantages over Wi-Fi. It operates in a less-populated (but also unlicensed) frequency band (5.15 GHz to 5.35 GHz) and is therefore less prone to interference. Its bandwidth is much higher than 802.11b's, with a theoretical peak of 54 Mbps; however, actual throughput is typically closer to 25 Mbps. 802.11g is the latest of the three standards and promises to be the most popular format. It combines the speed of 802.11a with backward compatibility with 802.11b. It operates in the same frequency band as 802.11b and consequently can also be affected by interference. The following table provides some comparative communications distances at various data communications speeds for each of the 802.11 standards.

802.11 Authentication & Encryption Security Basics


Like installing locks and keys on a door to control entry, wireless LAN security is designed to control which users can access the wireless LAN. The following table provides a summary of various WLAN security protocols and techniques.

Default Security Settings


To provide basic authentication, most APs support simple MAC address filtering. Default security values are built in and, in most cases, the AP applies these values on power-up. However, you may want to make changes. Typically the following three parameters are configurable:
SSID: The Service Set Identifier will normally default to the manufacturer's name. You can set it to any word or phrase you like.
Channel: The channel setting will normally default to channel 6. However, if a nearby neighbor is also using an access point set to channel 6, there can be interference; choose any other channel between 1 and 11. An easy way to see whether your neighbors have access points is to use the search feature that comes with your wireless card.
WEP Key: WEP is disabled by default. To turn it on, you must enter a WEP key and turn on 128-bit encryption.
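As a small illustration of the channel advice above, here is a sketch that picks a 2.4 GHz channel away from neighbouring access points. The choice of 1, 6 and 11 as non-overlapping channels reflects the common US allocation; the logic is illustrative, not an AP's actual algorithm:

```python
# In the 2.4 GHz band, channels 1, 6 and 11 do not overlap (US allocation).
NON_OVERLAPPING = [1, 6, 11]

def pick_channel(neighbour_channels):
    """Prefer a non-overlapping channel that no neighbouring AP is using."""
    free = [c for c in NON_OVERLAPPING if c not in neighbour_channels]
    return free[0] if free else 1  # all busy: fall back to channel 1

print(pick_channel([6]))      # neighbour on the default channel 6 -> 1
print(pick_channel([1, 6]))   # -> 11
```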

Wired Equivalent Privacy (WEP)


WEP is the original security protocol for WLANs, defined in the 802.11 standard. WEP was the only encryption available on early 802.11 devices, and it is not an industrial-strength security algorithm: although simple to implement, WEP is easily cracked. Significant security improvements can be made simply by enabling two options built into the access point: MAC address filtering and hiding the SSID. These measures will stop accidental intrusion and casual attackers, but they are not sufficient for sensitive data or mission-critical networks.

Lightweight Extensible Authentication Protocol (LEAP)


LEAP is a proprietary authentication solution that is based on 802.1X but adds proprietary security elements. The standard was developed by Cisco and, although implementation is simple, it shares some weaknesses with WEP and should not be used if your configuration requires high security. LEAP helps eliminate security vulnerabilities through the following techniques:
Mutual Authentication: The client must authenticate the network, and the network must authenticate the client.
User-Based Authentication: Through the use of usernames and passwords, LEAP eliminates the possibility of an unauthorized user accessing the network through a pre-authorized piece of equipment.
Dynamic WEP Keys: LEAP uses 802.1X to continually generate unique WEP keys for each user.

Protected Extensible Authentication Protocol (PEAP)


PEAP is a flexible security scheme that creates an encrypted SSL/TLS (Secure Sockets Layer / Transport Layer Security) channel between the client and the authentication server; the channel then protects the subsequent user authentication exchange. To create the secure channel, the PEAP client first authenticates the PEAP authentication server using digital certificate authentication. Once the secure TLS channel has been established, any standard EAP-based user authentication scheme can be used within the channel. After the user is successfully authenticated, dynamically generated keying material is supplied by the authentication server to the wireless AP. From this keying material, the AP creates new encryption keys for data protection.

Temporal Key Integrity Protocol (TKIP)


TKIP is part of the IEEE 802.11i encryption standard for WLANs and is the next generation of WEP. It enhances WEP by adding a per-packet key mixing function, a message integrity check, and a re-keying mechanism. TKIP encryption replaces WEP's small (40-bit) static encryption key, manually entered on wireless APs and client devices, with a 128-bit per-packet key. TKIP significantly mitigates WEP's vulnerabilities but does not completely resolve its weaknesses.

Wi-Fi Protected Access (WPA)


WPA was introduced as a subset of the 802.11i security standard based on TKIP. WPA addresses the weaknesses of WEP with the dynamic encryption scheme provided by TKIP. WPA dynamically generates keys and removes the predictability that intruders rely on to exploit the WEP key. WPA also includes a Message Integrity Check (MIC), designed to prevent an attacker from capturing, altering and resending data packets.

Challenges and Requirements


The above market and service drivers give rise to some unique terabit network challenges and requirements. Chief among these, as described in more detail below, are: network scalability; flexibility, efficiency and transparency; improved network management and operations costs; multi-protocol support; rapid service recovery; and authentication, authorization and accounting.

Network Scalability
Terabit network applications are characterized by unpredictable client traffic demands combined with stringent Quality-of-Service (QoS) requirements. Traditionally, traffic planners could consider capacity growth in three-, five- and ten-year increments. Today, the rapid and explosive growth of web video, mobile messaging, Wi-Fi and WiMAX applications means that time frames as short as six months must also be considered. Thus, graceful scalability is a prime terabit network requirement.

Flexibility, Efficiency and Transparency


From a customer service perspective, terabit network platforms must be very flexible, enabling clients to increase service velocity on demand, at any time and from any location. The networks must also efficiently accommodate a diverse set of differentiated service offerings (e.g., based on priority, resiliency, etc.) and wide-ranging traffic characteristics (e.g., real-time traffic, legacy protocols, high peak traffic, etc.).

Improved Network Management & Operations Costs


Today's users not only want more bandwidth for their money, they also demand simpler, lower-cost network management and operations procedures. Hence terabit network equipment suppliers must offer operational savings (lower power consumption, reduced management complexity, smaller footprint) and support modular deployments and continuous growth.

Multi-Protocol Support
As new services proliferate, terabit network operators are looking to new "de-layered" and transparent network infrastructures to support all customer services across all customer locations, while providing reduced transmission and operations overhead for a variety of protocols.

Rapid Service Recovery


SONET/SDH network providers using Resilient Packet Ring (RPR) technology built to the IEEE 802.17 RPR standard are accustomed to a maximum dual-ring-topology restoration time of 50 ms. Some network equipment vendors offer even faster recovery times. New terabit network technologies must offer at least this level of protection or better.

Authentication, Authorization and Accounting


Authentication, authorization and accounting are known as the "triple A" of network security. A key terabit network infrastructure issue is how to provide the servers and security mechanisms to ensure that no person or resource can gain network access without proper authorization.

Service Network Architecture - Overview


Figure 1 shows a layered architecture model for terabit networks that is emerging for enterprise and public service provider infrastructures alike. The lowest layer supports multi-service access for all types of data, voice, and video over a single packet/cell-based infrastructure. The benefits of multi-service access are reduced OPerating EXpenses (OPEX), higher performance, greater flexibility, integration and control, and faster service deployment. The heart of the architecture is a Core Optical Network (CON), which serves to interconnect the multi-service access points with the service platform. Since per-bit profit margins will still be constrained by aggressive competition, the CON must be designed with minimal complexity to reduce costs, while still flexibly and efficiently supporting multi-service transport.

Figure 1. Layered Terabit Network Service Architecture Overview

CON packet forwarding overhead is greatly reduced through the use of Multi-Protocol Label Switching (MPLS) technology. Internet Protocol (IP) packets have a field in their header containing the address to which the packet is to be routed. Traditional routing networks process this information at every router in a packet's path through the network. Using MPLS, however, when a data packet enters the first router, the header analysis is done just once and a new label is attached to the packet. Subsequent CON MPLS routers can then forward the packet by inspecting only the new label. In MPLS terminology, the CON routers are classified into two categories: high-performance packet classifiers called Edge Routers or Label Edge Routers (LERs), which apply (and remove) the requisite MPLS labels, and core routers that route based only on label switching, called Label Switch Routers (LSRs). MPLS technology supports both traffic prioritization and QoS, and it can be used to carry many different kinds of traffic, including IP packets, ATM, SONET, and Ethernet. IP will likely be the near-universal technology used to implement the service layer, and Dense Wavelength Division Multiplexing (DWDM) will be used to increase bandwidth over existing fiber-optic backbones. Finally, the CON will link to the service platform, which will in turn support execution of a variety of distributed applications, network management processes, signaling and control functions, and access to a diversity of information content types.
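The label-switching idea (classify once at the edge, then forward on labels alone) can be sketched with toy tables; all labels, router names and prefixes below are made up for illustration:

```python
# Toy MPLS forwarding. The LER classifies the packet once and pushes a label;
# each subsequent LSR forwards purely on the label, never re-reading the IP header.

LER_FEC_TABLE = {"10.1.0.0/16": 17}           # classification done once, at the edge

LSR_TABLE = {                                  # label -> (next hop, outgoing label)
    17: ("LSR-B", 42),
    42: ("LER-egress", None),                  # None: pop the label at the egress
}

def ler_ingress(dest_prefix):
    """Edge router: map the packet's forwarding class to an initial label."""
    return LER_FEC_TABLE[dest_prefix]

def lsr_forward(label):
    """Core router: forward using only the label, swapping it per hop."""
    return LSR_TABLE[label]

label = ler_ingress("10.1.0.0/16")
while label is not None:
    next_hop, label = lsr_forward(label)
    print("forwarded to", next_hop)
```

The point of the sketch is that `lsr_forward` never touches the IP header: one edge classification, then cheap label lookups at every core hop.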

Service Network Architecture - Detailed View


Figure 2 shows the layered architecture model for terabit networks in more detail. It consists of the following parts:

Personal Area Networks (PANs)


Areas one to three meters in extent, serviced by wireless technologies such as Bluetooth, ZigBee and Wireless Universal Serial Bus (WUSB).

Local Area Networks (LANs)


Link user premises to the first network node. Next-generation LANs will be optical and support 100 Gb/s; one-terabit Ethernet is being planned for 2010-12.

Metropolitan Area Networks (MANs)


Provide corporate connections within a city. Here fiber optics with Ethernet as the MAC-layer protocol is the favorite, although SONET/SDH is also in use. Terabit network technology will initially have the most impact on MANs and Long Haul Networks (LHNs).

Distribution and Transport Network (LHN)


Is the inter-city equivalent of the "express train" that transports many people over long distances. But if you don't live in a city, you'll need to access a "local light rail" line somewhere close to home. That's the distribution network, which could lie "along the way" of the express route or be a complementary route "orthogonal" to the express line. Ideally, both networks will be planned together and use the same technology.

Regional Area Networks (RANs)


Are useful for localized services from a regional carrier, a local enterprise, or a county or group of cities. RANs are needed for services that exceed geographic boundaries, such as those for international corporations, national services, federal police networks, etc. The main topological design problem in terabit networks is deciding where to locate multi-service access nodes, and how to provision and manage traffic flexibly and efficiently, as described in more detail in the next section.

Figure 2. Layered Terabit Network Service Architecture - Detailed View

Core Optical Network (CON) Traffic Provisioning & Management


Provisioning and management of terabit network traffic must be done simply and efficiently to maximize network throughput, reduce buffer sizes and processing power, and minimize delay due to memory allocation and packet processing at CON nodes. Multi-service access nodes and MAN transport will depend on Ethernet Layer 2 (L2) aggregation techniques, whereby frame labels such as Virtual LAN (VLAN) tags or MPLS Permanent Virtual Circuits (PVCs) support a finer level of granularity than the Long Haul Network (LHN) provides. VLAN tags and PVCs connect customer IP routers to an IP service switch at the CON's edge. Residences, small businesses, and small-to-medium enterprises with links to multi-service access nodes will migrate to Passive Optical Networks (PONs) to-the-curb (or to-the-building), terminated using a variety of "last mile" technologies including copper, wireless and fiber.

Link Capacity Adjustment Scheme (LCAS) with Virtual Concatenation (VC)


The Optical Internetworking Forum (OIF) defines the Optical User-Network Interface (UNI), which provides an interface through which a client may request services from an optical network. The SONET/SDH Link Capacity Adjustment Scheme (LCAS) includes automated traffic provisioning by means of Virtual Concatenation (VC) in a variety of sizes and can automatically adjust the transmission capacity seen by the end user. Automated connection provisioning opens the way to additional services such as intelligent protection and restoration of back-up links without requiring expensive hardware components to achieve redundancy.

Traffic Grooming
Multiplexing frames at network ingress points compromises efficiency when the network has many entry points. Accommodating frames inside faster and longer frames requires a tradeoff between load flexibility and efficient use of link capacity. Newer optical grooming technologies support traffic flows that minimize the number of add/drop operations. Admission control enables client traffic to be controlled based on a mutually agreed-upon Service Level Agreement (SLA). Traffic management depends on queuing and scheduling procedures for the incoming traffic flows that were authorized by admission control. LCAS/VC offers network providers flexibility inside virtual circuits to accommodate client traffic fluctuations and add/drop of circuits without changing the network physical structure.
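Admission control against an SLA is often implemented with a token bucket, which admits traffic up to an agreed sustained rate plus a bounded burst. A minimal sketch with hypothetical parameters follows; the text does not prescribe this particular mechanism, it is one common way to enforce an agreed traffic profile:

```python
class TokenBucket:
    """Illustrative token-bucket admission check for an SLA rate limit."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps          # sustained rate allowed by the SLA
        self.capacity = burst_bits    # maximum burst size
        self.tokens = burst_bits      # bucket starts full
        self.last = 0.0               # time of the previous check

    def admit(self, packet_bits, now):
        elapsed = now - self.last
        self.last = now
        # Refill tokens at the SLA rate, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True               # within the agreed profile
        return False                  # exceeds the SLA: drop or mark

bucket = TokenBucket(rate_bps=1_000_000, burst_bits=10_000)
print(bucket.admit(8_000, now=0.0))   # True: fits inside the burst allowance
print(bucket.admit(8_000, now=0.0))   # False: burst exhausted, no time elapsed
```

A packet is admitted only when the bucket holds enough tokens; waiting lets tokens accumulate again at the contracted rate.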

Distributed and Automated Network Management


Terabit networks require a large number of measurements and a great deal of traffic data that must be processed by Network Management Systems (NMSs) to prevent traffic overloads. The huge volume of this data can result in long delays before network traffic is brought under control. Moreover, a central node or link failure can readily erode the QoS on a large portion of any network. Traditional NMSs have centralized control. However, terabit networks' increased complexity, equipment diversity, need for flexible service provisioning, topology reconfiguration and protocol updates, as well as traffic fluctuations, mandate a distributed and automated approach to network management. Current SONET/SDH networks use manual processes and NMSs to implement optical connections from one location to another. Turn-around time to provision a new connection can take as long as six weeks, and the configuration process can take several hours, especially if more than one carrier is involved. While this may be acceptable for an LHN, where the end nodes are cities and change infrequently, it is by no means responsive enough for MAN solutions, where end nodes are enterprise branches or connections between enterprises. Optical links to support MANs require a dynamic, automated provisioning system that offers short turnaround times, flexible scalability, and fine traffic granularities, and that is amenable to frequent changes. Recently, dynamic provisioning protocols have emerged that let carriers establish connections not only within a single carrier's territory but also across multiple carriers on an end-to-end basis.

Terabit Optical Technologies


Wavelength Division Multiplexing (WDM) has dominated fiber-optic transmission technology since the development of tunable lasers. Two WDM technologies were developed: Dense Wavelength Division Multiplexing (DWDM) for long-haul transmission and Coarse Wavelength Division Multiplexing (CWDM) for metropolitan transmission. The first is very precise and very costly but supports hundreds of optical channels; the second is inexpensive and can be implemented on a variety of physical media but supports only 18 optical channels. CWDM is the appropriate technology for PON local access networks, and DWDM is the right technology for the Distribution and Transport Network inside the LHON (Long Haul Optical Network). CWDM can easily be implemented with point-to-point or point-to-multipoint topologies, but DWDM requires that optical channels be provisioned on specialized nodes. Synchronous Optical Network (SONET) and the Synchronous Digital Hierarchy (SDH) offer similar packet data containers of 155 Mb/s, 622 Mb/s, 2.5 Gb/s, 10 Gb/s, and 40 Gb/s. The next logical step would be the evolution of these protocols to terabit rates as multiples of 1.3 Tb/s. At least 100 channels of 1.3 Tb/s each can be placed inside a fiber-optic cable consisting of 20 fibers. If ten fibers are used to support one direction of transmission, and ten fibers the opposite direction, the resulting fiber cable capacity equals 10 fibers * 100 channels * 1 Tb/s per channel, or 1000 Tb/s (1 petabit per second) per direction. In December 2006, the Ethernet Alliance (www.ethernetalliance.org) delegated the IEEE 802.3 Standards Project to the High Speed Study Group (HSSG). This group forecast that 100 Gb/s Ethernet could become the new IEEE standard in 2010. As bandwidth demands continue to require faster access networks, and hardware manufacturers implement ever-faster chipsets, the next logical step would be a 1 Tb/s Ethernet protocol.
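The cable-capacity figure quoted above is simple arithmetic, reproduced here for checking:

```python
# Back-of-the-envelope cable capacity, using the figures from the text.
fibers_per_direction = 10     # ten of the cable's 20 fibers carry one direction
channels_per_fiber = 100      # DWDM channels per fiber
tbps_per_channel = 1          # Tb/s per channel (the text's round figure)

capacity_tbps = fibers_per_direction * channels_per_fiber * tbps_per_channel
print(capacity_tbps, "Tb/s per direction")   # 1000 Tb/s = 1 Pb/s
```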

Ubiquitous computing
Ubiquitous computing is giving architecture many benefits that we will continue to see embedded in our buildings. Ubiquitous computing is the wave of the future, providing us with many new architectural functions as well as challenges. For now, let's focus on the benefits. The following are the top seven benefits brought about by ubiquitous computing as they impact architecture and occupants in everyday life:

1) INVISIBLE: Smart environments will be embedded with computing technologies that are mostly out of sight. Architecture will gain many more capabilities with less visual clutter.
2) SOCIALIZATION: Interactions with architecture will be more social in nature. Smart buildings will elicit a more social response from occupants as computer user interfaces embed themselves within architecture.
3) DECISION-MAKING: Smart environments will help occupants make better choices as they go about their everyday lives. At key moments within architectural experiences, a good architectural design will make smart environments helpful. Such architecture will be more proactive than passive.
4) EMERGENT BEHAVIOR: Buildings are becoming more and more kinetic in form and function. Their movements and constructed designs come together dynamically to yield behaviors that make them more adaptive. Buildings will learn how to learn in order to run efficiently and aesthetically.
5) INFORMATION PROCESSING: Since architecture will be gaining a type of nervous system, information processing will take on a whole new meaning. Architecture will go from crunching data to making sense of data, thereby eliminating our need to constantly input adjustments.
6) ENHANCING EXPERIENCE: As computers ubiquitously embed themselves in our environments, sensors and actuators will create smart environments in which architectural space is goal oriented. As a result, more occupant needs will be better met.
7) CONVERGENCE: Much of our environment will be supplemented with interconnected digital technologies. Such interconnectivity will allow for a new type of sharing that will serve to eliminate many mundane tasks. Also, fewer errors will occur as systems pull data from shared digital locations (instead of having numerous copies to keep up to date).


What Is Ubiquitous Computing? The word "ubiquitous" can be defined as "existing or being everywhere at the same time," "constantly encountered," and "widespread." When applying this concept to technology, the term ubiquitous implies that technology is everywhere and we use it all the time. Because of the pervasiveness of these technologies, we tend to use them without thinking about the tool. Instead, we focus on the task at hand, making the technology effectively invisible to the user. Ubiquitous technology is often wireless, mobile, and networked, making its users more connected to the world around them and the people in it.

Why Is Ubiquitous Computing Important? Ubiquitous computing is changing our daily activities in a variety of ways. When it comes to using today's digital tools, users tend to:
- communicate in different ways
- be more active
- conceive and use geographical and temporal spaces differently
- have more control

In addition, ubiquitous computing is:
- global and local
- social and personal
- public and private
- invisible and visible
- an aspect of both knowledge creation and information dissemination

Ambient intelligence(computing):
In computing, ambient intelligence (AmI) refers to electronic environments that are sensitive and responsive to the presence of people. Ambient intelligence is a vision of the future of consumer electronics, telecommunications and computing that was originally developed in the late 1990s for the time frame 2010 to 2020. In an ambient intelligence world, devices work in concert to support people in carrying out their everyday life activities, tasks and rituals in an easy, natural way, using information and intelligence that is hidden in the network connecting these devices (see Internet of Things). As these devices grow smaller, more connected and more integrated into our environment, the technology disappears into our surroundings until only the user interface remains perceivable by users. The ambient intelligence paradigm builds upon pervasive computing, ubiquitous computing, profiling practices, and human-centric computer interaction design, and is characterized by systems and technologies that are:
- embedded: many networked devices are integrated into the environment
- context aware: these devices can recognize you and your situational context
- personalized: they can be tailored to your needs
- adaptive: they can change in response to you
- anticipatory: they can anticipate your desires without conscious mediation


Ambient intelligence is closely related to the long-term vision of an intelligent service system in which technologies automate a platform that embeds the devices required to power context-aware, personalized, adaptive and anticipatory services.

Overview
Figure: the (expected) evolution of computing from 1960 to 2010.

More and more people make decisions based on the effect their actions will have on their own inner, mental world. This experience-driven way of acting is a change from the past, when people were primarily concerned about the use value of products and services, and it is the basis for the experience economy. Ambient intelligence addresses this shift in existential view by emphasizing people and user experience. Interest in user experience also grew in importance in the late 1990s because of the overload of products and services in the information society that were difficult to understand and hard to use. A strong call emerged to design things from a user's point of view. Ambient intelligence is influenced by user-centered design, where the user is placed at the center of the design activity and asked to give feedback through specific user evaluations and tests to improve the design, or even to co-create the design together with the designer (participatory design) or with other users (end-user development). In order for AmI to become a reality, a number of key technologies are required:
- Unobtrusive hardware (miniaturisation, nanotechnology, smart devices, sensors, etc.)
- Seamless mobile/fixed communication and computing infrastructure (interoperability, wired and wireless networks, service-oriented architecture, the semantic web, etc.)
- Dynamic and massively distributed device networks that are easy to control and program (e.g. service discovery, auto-configuration, end-user programmable devices and systems)
- Human-centric computer interfaces (intelligent agents, multimodal interaction, context awareness, etc.)
- Dependable and secure systems and devices (self-testing and self-repairing software, privacy-ensuring technology, etc.)

Example scenario
Ellen returns home after a long day's work. At the front door she is recognized by an intelligent surveillance camera, the door alarm is switched off, and the door unlocks and opens. When she enters the hall, the house map indicates that her husband Peter is at an art fair in Paris, and that her daughter Charlotte is in the children's playroom, where she is playing with an interactive screen. The remote child surveillance service is notified that she is at home, and subsequently the on-line connection is switched off. When she enters the kitchen, the family memo frame lights up to indicate that there are new messages. The shopping list that has been composed needs confirmation before it is sent to the supermarket for delivery. There is also a message notifying her that the home information system has found new information on the semantic Web about economical holiday cottages with a sea view in Spain. She briefly connects to the playroom to say hello to Charlotte, and her video picture automatically appears on the flat screen that is currently used by Charlotte. Next, she connects to Peter at the

art fair in Paris. He shows her, through his contact-lens camera, some of the sculptures he intends to buy, and she confirms his choice. In the meantime she selects one of the displayed menus that indicate what can be prepared with the food that is currently available from the pantry and the refrigerator. Next, she switches to the video-on-demand channel to watch the latest news program. Through the follow-me facility she switches over to the flat screen in the bedroom, where she is going to have her personalized workout session. Later that evening, after Peter has returned home, they chat with a friend in the living room with their personalized ambient lighting switched on. They watch the virtual presenter that informs them about the programs and information that have been recorded by the home storage server earlier that day.

Pervasive Web application architecture: Introduction


The problems that application programmers initially faced when implementing Web applications with browser access from PCs have, to a large degree, been resolved. Various technologies are available that allow application programmers to create transactional Web applications in a straightforward manner, supported by a large number of tools. With the advent of pervasive computing, application programmers now face many new challenges. Users have many different devices that look and behave in very different ways. These devices provide different user interfaces, use different markup languages, use different communication protocols, and have different ways of authenticating themselves to servers. Ideally, Web applications that support pervasive computing should adapt to whatever device their users are using. Obviously, applications must provide content in a form that is appropriate for the user's particular device: WML for WAP phones, VoiceXML for voice interaction via a voice browser, HTML for PCs, and so on. However, solely targeting the application's output to devices is not sufficient in most cases. If device capabilities differ significantly, the entire interaction between the user and the Web application has to be tailored to the device's capabilities to provide a good user experience. A good example of this is access to a Web application from a PC versus access to the same Web application from a WAP phone. As a consequence, architectures for pervasive computing applications must not only allow for filtering of unnecessary information, and for output targeted to different devices, but must also be flexible enough to accommodate different flows of interaction depending on the user's device. Another challenge posed by pervasive computing is increased scalability and performance requirements.
Given the ever-increasing number of mobile phone owners, the number of potential clients for a pervasive computing Web application is many times larger than for classical Web applications. In addition, the frequency with which users access the application from mobile phones will be higher than that of PC users, as the phone is always available.
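The device-dependent output described above can be sketched as a simple lookup from a canonical device-type string to the markup language the reply should use. This is only an illustrative sketch; the class and method names (MarkupSelector, markupFor) and the device-type strings are our own, not part of any standard API:

```java
import java.util.Map;

// Sketch: choose a reply markup language based on the client's device class.
public class MarkupSelector {

    public static String markupFor(String deviceType) {
        Map<String, String> markup = Map.of(
                "wap-phone", "WML",       // WAP phones via a WAP gateway
                "voice",     "VoiceXML",  // voice interaction via a voice browser
                "pc",        "HTML");     // conventional PC browsers
        // Fall back to HTML for unknown device types rather than failing.
        return markup.getOrDefault(deviceType, "HTML");
    }

    public static void main(String[] args) {
        System.out.println(markupFor("wap-phone")); // WML
        System.out.println(markupFor("voice"));     // VoiceXML
    }
}
```

In a real system the device type would be derived from request headers by the gateway infrastructure, and the tailoring would extend beyond markup to the whole interaction flow, as the text explains.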

Scalability and availability


Scalability of pervasive computing applications is a very important issue. Large telecommunication companies expect millions of users to subscribe to some applications, for example. Availability is of particular importance in the pervasive computing environment. Unlike PC users, most users of pervasive computing devices and applications

will neither understand nor accept comments like 'server currently down for maintenance' - if a service is not available when they need it, they will assume that it does not work, and will stop using the application or switch to another service provider. Both issues can be resolved by system topologies that employ parallelism and redundancy to guarantee scalability and availability. An example of such a topology is shown in Figure 1.

Figure 1: Scalability and availability can be achieved by running multiple instances of every component that might become a bottleneck.

Typically, the gateways perform tasks that require significant computing power. WAP gateways, for example, may have to execute the WTLS protocol in the direction of the clients, and the SSL protocol in the direction of the servers, for many parallel sessions, requiring computation-intensive decryption and encryption of data. Voice gateways use voice-recognition engines and thus require even more computing power. A scalable system will use a cluster of gateways for each device type, to which additional machines can be added as required. From the various gateways, a potentially large number of requests flows to the servers that host pervasive computing Web applications. Typically, a network dispatcher is used to route incoming requests to the appropriate servers, balancing the load between them. To support efficient handling of HTTPS, the dispatchers support a mode in which requests originating from a particular client are always sent to the same server, to avoid repeating SSL handshakes. To assure high availability, pairs of network dispatchers can be used, in which one is active and a back-up monitors the heartbeat of the active dispatcher to take over if a failure occurs. To allow for central authentication, authorization, and enforcement of access policies, authentication proxies are used, located in the demilitarized zone between two firewalls, so that all incoming requests can flow to the application servers only via the authentication proxies. They check each

incoming request to see whether the client from which it originates is already known, and whether it is allowed to access the desired target function of the Web application according to a centrally defined policy. To do so, they need access to the credentials required for authentication and to the policies for authorization. If a request from a new client arrives, the authentication proxy performs client authentication before letting any request pass through to the application servers. An authentication proxy may consume significant computing power, e.g. when SSL server authentication has to be performed for a large number of sessions. Thus, a cluster of authentication proxies is required for larger systems. Requests initiated by authenticated clients flow from the authentication proxies to the application servers behind the inner firewall. The application code and the presentation functions that make up the Web application front end run on these servers. Here, the requests coming from the clients are received and processed. To implement a scalable Web application, a cluster of application servers is usually used, to which additional machines can be added when the load increases. Typically, the front end of a Web application interacts with a back end that hosts persistent data and/or legacy systems.
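The "same client, same server" dispatching mode described above can be sketched as a deterministic mapping from a client identifier to a server in the cluster. Real network dispatchers track sessions explicitly and handle server failures; this hash-based sketch, with names of our own invention, only illustrates the affinity property:

```java
// Sketch: a dispatcher that always routes a given client to the same
// back-end server, so SSL handshakes are not repeated on every request.
public class StickyDispatcher {

    private final String[] servers;

    public StickyDispatcher(String[] servers) {
        this.servers = servers;
    }

    /** Deterministically maps a client identifier to one server. */
    public String serverFor(String clientId) {
        int index = Math.floorMod(clientId.hashCode(), servers.length);
        return servers[index];
    }

    public static void main(String[] args) {
        StickyDispatcher d =
                new StickyDispatcher(new String[] {"app1", "app2", "app3"});
        // Repeated requests from the same client reach the same server.
        System.out.println(d.serverFor("client-42"));
        System.out.println(d.serverFor("client-42"));
    }
}
```

A trade-off of pure hashing is that adding a server remaps many clients; production dispatchers therefore combine affinity tables with load measurement, as the text's active/backup pair arrangement suggests.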

Development of pervasive computing Web applications


To implement Web applications, four major kinds of role are typically required in a development team:
1. Business logic designers
2. User interface designers
3. Application programmers
4. Experts for existing legacy database and transaction systems
Business logic designers define the functions to be performed and the application flow. User interface designers are responsible for application design, defining the look and feel of the Web application, designing user interaction, and guaranteeing good usability. Web designers work with technologies such as HTML and JSPs, mostly using high-level visual tools. Application developers are responsible for implementing the application logic and connectivity to database and transaction systems in the back end. Java developers work with technologies such as servlets, EJBs, LDAP, JDBC, etc. In teams developing pervasive computing applications, an additional role is usually needed: the pervasive computing specialist, who knows about the capabilities of devices and the infrastructure required to support pervasive computing applications, such as WAP gateways, voice gateways and gateways for PDAs. These people are the experts in technologies such as WML and VoiceXML, which normally cannot be handled well by traditional Web designers.

Pervasive application architecture


The model-view-controller (MVC) pattern is a good choice when implementing Web applications. We presented the standard mapping of the pattern to servlets, JSPs, and EJBs: the controller is implemented as a servlet, the model as a set of EJBs, and the views as JSPs.

Pervasive computing applications, however, add an additional level of complexity. As devices are very different from each other, we cannot assume that one controller will fit all device classes. In the MVC pattern, the controller encapsulates the dialog flow of an application, which will be different for different classes of devices, such as WAP phones, voice-only phones, PCs, or PDAs. Thus, we need different controllers for different classes of devices. To support multiple controllers, we reduce the servlet's role to that of a simple dispatcher that invokes the appropriate controller depending on the type of device being used. To avoid duplicating the code that invokes model functions across controllers, we employ the command pattern. In our case, a command is a bean with input and output properties. An invoker of a command sets the input properties for the command and then executes the command. After the command has been executed, the result can be obtained by getting the command's output properties. Instead of invoking model functions directly, the controllers create and execute commands that encapsulate the code for model invocation. To invoke a view JSP, the controller puts the executed command into the request object or the session object associated with the request, depending on the desired lifetime. As commands are beans, their output can easily be accessed and displayed within a JSP, as shown in Figure 2.

Figure 2
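The command-as-bean idea described above can be sketched as follows. The AccountBalanceCommand and its fixed result are invented purely for illustration; in a real application, execute() would invoke the model, e.g. an EJB:

```java
// Sketch of the command pattern: a command is a bean with input and
// output properties; the controller sets inputs, executes, reads outputs.
interface Command {
    void execute();
}

class AccountBalanceCommand implements Command {
    private String accountId; // input property
    private double balance;   // output property

    public void setAccountId(String accountId) { this.accountId = accountId; }
    public double getBalance() { return balance; }

    @Override
    public void execute() {
        // A real command would call the model (e.g. an EJB) here;
        // we return a fixed value to keep the sketch self-contained.
        this.balance = 100.0;
    }
}

public class CommandDemo {
    public static void main(String[] args) {
        AccountBalanceCommand cmd = new AccountBalanceCommand();
        cmd.setAccountId("4711"); // controller sets input properties
        cmd.execute();            // command encapsulates the model invocation
        // A controller would now put cmd into the request or session object,
        // and a JSP would read the output properties for display.
        System.out.println(cmd.getBalance());
    }
}
```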

Securing pervasive computing applications


Like traditional Web applications, Web applications supporting pervasive devices have to be secured by appropriate encryption, authentication, and authorization mechanisms. The secure pervasive access architecture presented here is designed to process client requests on the application server in a secure and efficient way. It addresses user identification, authentication, and authorization of invocations of application functions depending on configurable security policies. The figure shows an example in which a user accesses a function of a particular Web application from a WAP phone.

Secure Pervasive Access Architecture


All incoming requests originate from the device connectivity infrastructure. This infrastructure may include different kinds of gateways that convert device-specific requests to a canonical form, i.e. an HTTP request that may carry information about the device

type, the desired language, and the desired reply content type, e.g. HTML, WML, or VoiceXML. Examples of gateways in the device connectivity layer are voice gateways with remote VoiceXML browsers, WAP gateways, and gateways for connecting PDAs. An important function that the device connectivity layer must provide is support of session cookies, to allow the application server to associate a session with the device. The secure access component is the only system component allowed to invoke application functions. It checks all incoming requests and calls application functions according to security policies stored in a database or directory. A particular security state (part of the session state) is reached by authentication of the client using user ID and password, public-key client authentication, or authentication with a smart card, for example. If the requirements for permissions defined in the security policy are met by the current security state of a request's session, then the secure access layer invokes the requested application function, e.g. a function that accesses a database and returns a bean. Otherwise, the secure access component can redirect the user to the appropriate authentication page. Typically, the secure access component will be implemented as an authentication proxy within a demilitarized zone, as shown earlier. Finally, the output generated by the application logic is delivered back to the user in a form appropriate for the device he or she is using. In the figure, the information to be displayed is prepared by the application logic and passed to the content-delivery module encapsulated in beans. The content-delivery module then extracts the relevant part of the information from the bean and renders it into content that depends on the device type and desired reply content type, for example by calling appropriate JSPs.
The content-delivery module delivers the content generated in the previous step via the device connectivity infrastructure, which converts canonical responses (HTTP responses) to device-specific responses using appropriate gateways. For example, if a user accesses the system via a telephone, the voice gateway receives the HTTP response with VoiceXML content and leads an appropriate 'conversation' with the user, finally resulting in a new request being sent to the server.
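The policy check performed by the secure access component can be sketched as a lookup of the minimum authentication level each application function requires, compared against the security state of the request's session. All names, functions, and levels below are our own illustrative assumptions, not part of any real product:

```java
import java.util.Map;

// Sketch: the secure access component admits a request only if the
// session's security state satisfies the policy for the target function.
public class SecureAccess {

    // 0 = unauthenticated, 1 = password, 2 = certificate or smart card
    private static final Map<String, Integer> POLICY = Map.of(
            "viewCatalog",   0,
            "placeOrder",    1,
            "transferFunds", 2);

    public static boolean isAllowed(String function, int sessionAuthLevel) {
        Integer required = POLICY.get(function);
        // Unknown functions are denied by default (fail closed).
        return required != null && sessionAuthLevel >= required;
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("placeOrder", 1));    // admitted
        System.out.println(isAllowed("transferFunds", 1)); // redirect to re-authentication
    }
}
```

Denying unknown functions by default mirrors the architecture's rule that the secure access component is the only path to application functions: anything not explicitly permitted never reaches the application servers.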

