Networked Graphics: Building Networked Games and Virtual Environments
Ebook, 973 pages, 10 hours


About this ebook

Networked Graphics equips programmers and designers with a thorough grounding in the techniques used to create truly network-enabled computer graphics and games. Written for graphics/game/VE developers and students, it assumes no prior knowledge of networking. The text offers a broad view of the architectural patterns found in current systems, and readers will learn the tradeoffs in achieving system requirements on the Internet. It explains the foundations of networked graphics, then explores real systems in depth, and finally considers standards and extensions. Numerous case studies and examples with working code are featured throughout the text, covering groundbreaking academic research and military simulation systems, as well as industry-leading game designs.
  • Everything designers need to know when developing networked graphics and games is covered in one volume - no need to consult multiple sources
  • The many examples throughout the text feature real simulation code in C++ and Java that developers can use in their own design experiments
  • Case studies describing real-world systems show how requirements and constraints can be managed
Language: English
Release date: Oct 30, 2009
ISBN: 9780080922232
Author

Anthony Steed

Anthony Steed is a Professor at University College London. His research interests are in collaborative virtual environments, immersive virtual reality, interaction, and human animation. He has over 110 refereed conference and journal papers to date. He was program chair of the 2007, 2008, and 2009 IEEE Virtual Reality conferences. For part of the academic year 2006-2007 he was on sabbatical at Electronic Arts in Guildford. He is also the director of the Engineering Doctorate Centre in Virtual Environments, Imaging, and Visualization.


    Book preview

    Networked Graphics - Anthony Steed

    Table of Contents

    Cover image

    Copyright

    Chapter 1. Introduction

    1.1. What are NVEs and NGs?

    1.2. The Illusion of a Shared Virtual Environment

    1.3. Some History

    1.4. Scoping the Software Architecture

    1.5. Structure

    Chapter 2. One on one (101)

    2.1. Boids

    2.2. Distributed Boids: Concepts

    2.3. Distributed Boids: Implementation

    2.4. Reflection

    Chapter 3. Overview of the Internet

    3.1. The Internet

    3.2. Application Layer

    3.3. Transport Layer

    3.4. Network Layer

    3.5. Link and Physical Layer

    3.6. Further Network Facilities

    3.7. Summary

    Chapter 4. More than two

    4.1. Boids

    4.2. Simple Peer to Peer

    4.3. Peer to Peer with Master

    4.4. Peer to Peer with Rendezvous Server

    4.5. Client/Server

    4.6. Multicast

    4.7. Extensions

    4.8. Conclusions

    Part II. Foundations

    Chapter 5. Issues in networking graphics

    5.1. Architecture of the Individual System

    5.2. Role of the Network

    5.3. Initialization

    5.4. Server and Peer Responsibilities

    5.5. Critical and Noncritical

    5.6. Synchronized or Unsynchronized

    5.7. Ownership and Locking

    5.8. Persistency

    5.9. Latency and Bandwidth

    5.10. Conclusions

    Chapter 6. Sockets and middleware

    6.1. Role of Middleware

    6.2. Low-Level Socket APIs

    6.3. C and C++ Middleware for Networking

    6.4. Conclusion

    Chapter 7. Middleware and message-based systems

    7.1. Message-Based Systems

    7.2. DIS

    7.3. X3D and DIS

    7.4. X3D, HawkNL and DIS

    7.5. Conclusions

    Chapter 8. Middleware and object-sharing systems

    8.1. Object-Sharing Systems

    8.2. RakNet

    8.3. Boids Using Object-Sharing

    8.4. General Object-Sharing

    8.5. Ownership

    8.6. Scene-Graphs, Object-Sharing and Messages

    8.7. Conclusions

    Chapter 9. Other networking components

    9.1. Remote Method Call

    9.2. DIVE

    9.3. System Architectures

    9.4. Conclusions

    Part III. Real Systems

    Chapter 10. Requirements

    10.1. Consistency

    10.2. Latency and Jitter

    10.3. Bandwidth

    10.4. State of the Internet

    10.5. Connectivity

    10.6. Case Study: Burnout™ Paradise

    10.7. Conclusions

    Chapter 11. Latency and consistency

    11.1. Latency Impact

    11.2. Dumb Client and Lockstep Synchronization

    11.3. Conservative Simulations

    11.4. Time

    11.5. Optimistic Algorithms

    11.6. Client Predict Ahead

    11.7. Extrapolation Algorithms

    11.8. Interpolation, Playout Delays and Local Lag

    11.9. Local Perception Filters

    11.10. Revealing Latency

    11.11. Conclusions

    Chapter 12. Scalability

    12.1. Service Architectures

    12.2. Overview of Interest Management

    12.3. Spatial Models

    12.4. Interest Specification and Interest Management

    12.5. Separating Interest Management from Network Architecture

    12.6. Server Partitioning

    12.7. Group Communication Services

    12.8. Peer to Peer

    12.9. Conclusions

    Chapter 13. Application support issues

    13.1. Security and Cheating

    13.2. Binary Protocols and Compression

    13.3. Streaming

    13.4. Revisiting the Protocol Decision

    13.5. Persistent and Tiered Services

    13.6. Clusters

    13.7. Thin Clients

    13.8. Conclusions

    Index

    Copyright

    Morgan Kaufmann Publishers is an imprint of Elsevier.

    30 Corporate Drive, Suite 400, Burlington, MA 01803, USA

    This book is printed on acid-free paper.

    © 2010 Elsevier Inc. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.

    This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).

    Notices

    Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary.

    Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

    To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.

    Library of Congress Cataloging-in-Publication Data

    Application submitted

    British Library Cataloguing-in-Publication Data

    A catalogue record for this book is available from the British Library.

    ISBN: 978-0-12-374423-4

    For information on all Morgan Kaufmann publications, visit our Web site at www.mkp.com or www.elsevierdirect.com

    Printed in the United States of America

    09 10 11 12 13    5 4 3 2 1

    Chapter 1. Introduction

    Over the last four decades the Internet has radically changed many forms of collaborative activity. Email and more recently instant messaging have enabled efficient asynchronous collaboration between remote people. The World-Wide Web (WWW) has enabled a range of different publishing models for individuals as well as large organizations. More recently though, the Internet has enabled new types of real-time synchronous communication between people. Synchronous collaboration tools include video and audio tools, but also network games (NGs) and networked virtual environments (NVEs). Within NVEs and NGs, users can share a virtual space with business partners to brainstorm or they can be immersed¹ in a fantasy world to go exploring with friends.

    ¹Immersive is a term often used to describe computer and video games that are rich, engaging and interactive. In the area of virtual reality systems, the term is used differently to mean displays that encompass and surround the user; see Section 1.3.5.

    This book describes the data communication technologies behind these NGs and NVEs. We focus exclusively on Internet technologies because of their pervasive nature, though we’ll present some of the historical context behind the development of the Internet. The book takes a ground-up approach, from the basics of networking through to strategies for partitioning large numbers of players between servers. Thus we’ve tried to write the book for a few different audiences: students interested in the networking aspects of games and simulations, software engineers getting into the field, game developers involved in the implementation of NGs, hobbyists or games enthusiasts interested in learning more about games technology, and researchers interested in a general background text.

    In this introduction we want to present the outline of the book and give some background history to the topic.

    1.1. What are NVEs and NGs?

    By NVEs, we refer to virtual environment systems that are distributed over the network in some sense. That is, usually, there are several computers, each running a piece of software that communicates with similar software on other computers. Users interact, through devices and software, with the virtual environment, perhaps by moving a player character through it. If this is for recreational purposes then the whole system might be called an NG, though NGs are a subset of the applications that use such technology, which range from medical simulations through education to military training. In effect, there is often little to distinguish between systems that support NVEs and systems that support NGs, other than the type of content that is created for them. Some systems are designed specifically for recreational purposes, being based on fantasy or cartoon styles such as World of Warcraft™ or Disney’s Toontown. Other systems are neutral about their use and leave content creation and its application context to the user. For example, Linden Lab’s Second Life® contains both business centers and nightclubs.

    The common feature of both NVEs and NGs, and an important piece of scope for this book, is that by virtual environment we refer to a rich three-dimensional (3D), or less commonly 2D, space that depicts a real or imaginary place. The client software allows the user to move about this space to get new viewpoints on it. The space is typically displayed to the user at a real-time rate (30 Hz or more), is animated, has interactive elements, and the reality it represents is governed by well-defined rules (e.g. Brownian motion, gravity or refraction). Depending on the type of user interaction supported, the users themselves may be represented in some way.

    Figure 1.1 shows a typical NVE which one of the authors had a small hand in developing. This particular system, the Distributed Interactive Virtual Environment (DIVE), was a research prototype built by the Swedish Institute of Computer Science (SICS) (Frécon et al., 2001). Although the particular version of the software shown in Figure 1.1 was released in 1999, it has many features of the current crop of online social NVEs: it has text chat, large worlds to visit, support for up to around 30 players in a single location, and audio and video streaming. Most importantly for our introduction, the system has avatars, which is the term commonly used in the field to refer to the visual representation of a user in the world.² Avatars indicate your location in the world to other people and provide the user with a reference point for their interaction with the environment. Different systems support different visual styles of avatars, from abstract, such as the DIVE avatars, through cartoony to realistic-looking. In DIVE, audio communication is enabled by having avatars stand near each other in the virtual environment. Users can select and manipulate many of the objects in the world near to them. We discuss DIVE in more detail in Section 9.3.

    ²The term avatar derives from a Sanskrit word used in Hindu texts, where it means incarnation. The first use in a computing context to refer to the representation of a user is not well documented, but it was perhaps coined in Habitat (see later), and popularized in Neal Stephenson’s novel Snow Crash.

    1.2. The Illusion of a Shared Virtual Environment

    The foundation of an NVE is to create the illusion of a virtual environment that is shared amongst all participating users. As illustrated in Figure 1.2, the NVE consists of multiple collaborating NVE client computers (or just clients), where each client typically runs on a single workstation or home computer.³ A client is composed of one or more pieces of software that are responsible for key functions such as generating sensorial outputs (e.g. video), processing inputs (e.g. joysticks) and doing physics simulations. The client computers are connected together by a network infrastructure (the Internet being the most pervasive, with global connectivity) through modems and/or routers. A more detailed overview of the components of the NVE system itself is given in Section 1.4.

    ³We will talk about various types of clusters for graphics in Chapter 13.
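
    To make the division of responsibilities concrete, here is a minimal sketch of the kind of main loop such a client might run. It is illustrative only and is not taken from any system described in this book; the function names (pollInputDevices, applyRemoteUpdates, stepPhysics, renderFrame) are invented placeholders for the components listed above.

    ```cpp
    #include <chrono>
    #include <thread>

    // Invented stand-ins for the client subsystems described in the text:
    // input handling, networking, physics and rendering.
    void pollInputDevices() {}                 // read joysticks, keyboard, trackers, ...
    void applyRemoteUpdates() {}               // fold in state received from other clients or servers
    void stepPhysics(double dt) { (void)dt; }  // advance the local simulation
    void renderFrame() {}                      // draw this user's view of the shared world

    int main() {
        using clock = std::chrono::steady_clock;
        const auto framePeriod = std::chrono::milliseconds(33);  // roughly 30 Hz
        auto previous = clock::now();
        for (int frame = 0; frame < 300; ++frame) {              // ~10 seconds of simulation
            auto now = clock::now();
            double dt = std::chrono::duration<double>(now - previous).count();
            previous = now;

            pollInputDevices();     // local inputs
            applyRemoteUpdates();   // the network keeps the shared illusion consistent
            stepPhysics(dt);        // local simulation step
            renderFrame();          // sensorial output

            std::this_thread::sleep_until(now + framePeriod);
        }
        return 0;
    }
    ```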

    As illustrated by the diagram of Figure 1.2, the aim of an NVE system is not to create a single-user experience, but a shared experience amongst all participating users. Each user in the system sees a different view, but these views should be consistent. From a technical point of view this implies that each client in the system shares a common model of the virtual environment. As we will learn, this is extremely difficult over networks where there are real problems with latency, congestion, etc. Thus each client in the system has a slightly different model, and thus the renderings of the virtual environment at each client are all different. This is why this section is entitled the illusion of a shared virtual environment. Fortunately for us, users are sometimes unaware of, or at least tolerant of, the discrepancies between each client’s view of the virtual environment. Users see avatars moving around the virtual environment and interacting with objects, including other avatars. They can often talk to the other users or at least send text messages. Though the media might seem crude, users nonetheless interact successfully with one another. As long as users can experience the virtual environment in such a way that they can agree on the important features of the world (e.g. the placement of enemies in a game or simulation), then they can have the common understanding of sharing the same virtual environment. It is when things start to get desynchronized that the illusion of a shared space becomes hard to maintain, and users spend a lot of time talking or texting in an attempt to come to a shared understanding of their situation.⁴

    ⁴One of the most fascinating aspects of NVEs, which we will only have space to touch on very briefly in this book, is that users seeing avatars tend to treat them as if they were real people. In social NVEs, avatars tend to form social groups in small circles, as they might do in conversation in the real world. There is a large body of work in the overlap between computer science and sociology (see Schroeder, 2001; Churchill et al., 2001 and Schroeder and Axelsson, 2006 for overviews).

    The network infrastructure provides the necessary support for the sharing of information to keep a consistent perspective amongst all participating users. How exactly this is done will be discussed throughout the remainder of the book, but aside from the technical details of what must be done, we can already posit that there is going to be a bottleneck: the Internet is not sufficiently fast to simply copy all changing information to all participants. The network infrastructure needs to choose carefully what is sent over the Internet. In doing so, it will necessarily take into account both where the users are in the virtual environment and who they are engaging with.

    1.3. Some History

    There are many strands to the history of NVEs and a full history would take a whole series of books. Especially in recent years, there has been an explosion in the number of different NVE and NG systems. Figure 1.3 gives a thematic analysis of some of the related areas.

    The themes are:

    Internet. Initially funded by the U.S. for defense purposes, the Internet has become the main internetworked system in the world. It supports many different applications through the use of open and widely deployed protocols, both in client computers and network infrastructure.

    Simulators. Many real-world tasks are difficult, dangerous or expensive to train for. The word simulation can be applied to everything from paper-based simulations through to multi-participant war game scenarios, but we will focus on the thread of work concerned with electronic simulations.

    Multiuser Dungeons. These text-based systems were probably the first large-scale multiuser systems to reach significant usage. Although not so popular in their text form now, their game play styles are very visible in later game forms such as massively multiplayer online role-playing games (MMORPGs, see below).

    Electronic Games. Originally based on analog electronics, video games are now one of the most important media markets.

    Virtual Reality Systems. We use this term to refer to academic and commercial research that uses spatially immersive systems in highly novel application domains. Although by some definitions virtual reality is a superset of the previous technical themes, we will focus on the novel applications and research systems that have been built.

    MMORPGs. These are a genre of computer games where the players interact in a virtual world to role-play character-based story lines. The theme is often a science fiction universe or fantasy world.

    Social Spaces. This is a class of collaborative world application where there is little imposed theme, but the worlds are designed to facilitate social interaction.

    These themes overlap in many ways and are by no means exhaustive of the labels that are applied to NVEs and NGs. We have had to be selective in our choices, and sometimes we have biased our choice towards systems or games where we have personally spent our leisure or work time. More details about many of the systems described below will be found in later chapters.

    1.3.1. Internet

    It is difficult to do justice to the ingenuity and engineering brilliance that was involved in the development of the Internet. We give a brief overview below, but would recommend two books to the interested reader. For a nontechnical overview of the development of the Internet up to the explosion of the World Wide Web, we would suggest Naughton’s A Brief History of the Future: Origins of the Internet (Naughton, 2000). For a similarly nontechnical but entertaining account of the development of ARPANET specifically, we can recommend Hafner and Lyon’s Where Wizards Stay up Late: The Origins of the Internet (Hafner & Lyon, 1996).

    ARPANET

    In a very brief history of computing, we will note that digital computers started off as room-sized machines designed for dedicated tasks. In the 1960s and 1970s, as machines became more powerful, there was a move to time-sharing computers, where several users could connect through terminals to the computer (commonly called a mainframe), and the computer would dynamically allocate resources to each user. This freed users from the previous batch-mode processing paradigm, where they would submit a computing task and then wait for the result. These mainframes started to spread, but each provided a local resource. If a mainframe was dedicated to a particular task, or connected to a specific instrument, one still needed to connect to that machine. One could do this with a remote terminal, connected over a standard telephone network; however, if you wanted to connect to multiple machines you would need multiple terminals or multiple sessions. You would also need to know the specific syntax and operational capabilities peculiar to each mainframe you connected to.

    The combined motivation of visionary statements about the future of computing and funding pressure to make expensive resources more widely available led to the U.S.’s Advanced Research Projects Agency (ARPA, since renamed DARPA with a D for Defense), which provided much of the funding for computing at the time, proposing to build a packet-switching network to connect mainframes together. Packet-switching was a relatively untested technology. Previously, when dialing into a mainframe, the user used a dedicated analog phone line. However, like a normal phone line, the line would be completely unavailable for anyone else while the user was dialed in, even if no data was being transmitted. Furthermore, when a phone call was set up, a circuit was established from the caller to the receiver, taking up capacity in each exchange along the route. This means that unless the line is constantly busy with data or chatter, the actual physical network is not being used to its full capacity. More importantly, it means that a failure at any exchange along the path from caller to receiver would cause the connection to drop.

    ARPA’s concern was supporting communication in the presence of infrastructure failure. Packet-switching networks promised this reliability, but it was not proven at the time. Packet-switching was designed for data transmission, not voice, and takes advantage of the fact that data (messages) can be broken up into chunks (packets). Packets are dispatched onto the network and independently moved towards the target, where they are reassembled into messages. The network is made up of packet routers, which are interconnected with one another. Packets are injected by senders into the network and reach the first router. This router then sends them towards the destination, where towards means the next router that is closer in network terms to the destination. Routers make this decision based on local knowledge only; they don’t have a map of the whole network to make their decision. Packet-switching networks provide reliability because there are usually several routes to the desired destination. If one router fails, another route to the destination can almost certainly be found.
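
    As a toy illustration of forwarding on local knowledge only, the sketch below (invented names, not any real router’s implementation) gives each router a table mapping a destination to the neighbouring router that is closer to it; a packet simply hops from table to table until it arrives.

    ```cpp
    #include <iostream>
    #include <map>
    #include <string>

    // Toy packet: just a destination and a payload.
    struct Packet {
        std::string destination;
        std::string data;
    };

    // Toy router: knows only its own name and, for each destination,
    // which directly connected neighbour is "closer" to it.
    struct Router {
        std::string name;
        std::map<std::string, std::string> nextHop;  // destination -> neighbour
    };

    int main() {
        // A three-router line: A - B - C, with host "H" attached to C.
        std::map<std::string, Router> network = {
            {"A", {"A", {{"H", "B"}}}},
            {"B", {"B", {{"H", "C"}}}},
            {"C", {"C", {{"H", "H"}}}},   // C delivers locally
        };

        Packet p{"H", "hello"};
        std::string at = "A";
        while (at != p.destination) {
            const Router& r = network.at(at);
            std::string next = r.nextHop.at(p.destination);  // purely local decision
            std::cout << r.name << " forwards packet for " << p.destination
                      << " to " << next << "\n";
            at = next;
        }
        std::cout << "delivered: " << p.data << "\n";
        return 0;
    }
    ```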

    ARPA made a public request for bids to build a packet-switching network in 1968. The winning bid came from Bolt, Beranek and Newman (BBN), based in Cambridge, Massachusetts. Although a small team up against competition from much larger computer manufacturers, they had a very detailed proposal of what needed to be done. The proposal called for small computers known as Interface Message Processors (IMPs) to be installed at each site. The IMPs were connected to each other using modems connected to dedicated phone lines running at 50 kbit/second. At each site, the mainframe would be connected to the IMP using a serial connection. This last connection to the mainframe would necessarily be different for each type of computer, but the connections between IMPs would be homogeneous, and IMPs would have no knowledge of the specifics of the nonlocal hosts at the other ends of the connections. This design decision, that the network is neutral about traffic, simplifies the design of the IMP as it simply has to route traffic. The success of the Internet is partly due to such neutrality.

    The initial installation of IMPs connected the University of California Los Angeles’ Network Measurement Center, the Stanford Research Institute’s Augmentation Research Center, the University of California Santa Barbara and the University of Utah’s Computer Science Department. The first interhost protocol was the 1822 Protocol, named for the BBN Report 1822 in which it was described. The protocol requires the sending machine to create a message with a numeric identifier of the target machine and the actual data that needs to be sent. This is then passed to the IMP. The protocol was designed to be reliable, in that the IMP would deliver it and then confirm delivery. The 1822 protocol was superseded by the Network Control Program (NCP), which provided a standard method to establish reliable communication links between two hosts. NCP was subsequently superseded by the Transmission Control Protocol/Internet Protocol (TCP/IP).

    IMPs were added slowly at first: BBN itself was connected in early 1970, and 13 IMPs were installed by December 1970. In 1973, the first satellite connections were made to Hawaii and Norway. A link was then made to London, U.K.⁵ Figure 1.4 shows a logical map of the ARPANET in March 1977. The circles represent IMPs and the text boxes indicate the hosts connected to IMPs. Note that they cover a number of different models and types of computer: the ARPANET is a network of heterogeneous hosts.

    ⁵In London, the connection came to University College London but the authors of this book were still in short trousers at the time.

    The ARPANET can be considered to comprise several layers. There is the physical layer, where there is a connection: a leased phone line. Then there is a data/network layer, or link layer, which the IMPs provide for message communication. Above this sits NCP, which is called the transport layer as it runs between the hosts. This collection of layered protocols is referred to as a protocol stack. Note that the separation of layers in a protocol stack is somewhat contentious, and we return to the debate about naming when describing protocols on the Internet in more detail in Chapter 3.
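
    The layering can be pictured as successive encapsulation: each layer wraps the data handed down by the layer above with its own header before passing it further down. The struct names below are invented purely for illustration and do not correspond to the real ARPANET message formats.

    ```cpp
    #include <cstdint>
    #include <string>

    // Application data: what the user program wants to send.
    struct ApplicationMessage {
        std::string payload;
    };

    // Transport layer (NCP on the ARPANET, TCP later): host-to-host concerns.
    struct TransportSegment {
        uint16_t sourcePort;
        uint16_t destinationPort;
        ApplicationMessage body;
    };

    // Network/link layer (provided by the IMPs): getting data across the subnet.
    struct SubnetPacket {
        uint32_t destinationHost;
        TransportSegment body;
    };

    // Physical layer: in the ARPANET, bits on a leased 50 kbit/s phone line.
    struct LineFrame {
        SubnetPacket body;
    };

    int main() {
        // Each layer simply wraps the one above it.
        ApplicationMessage msg{"LOGIN user"};
        TransportSegment seg{1024, 23, msg};
        SubnetPacket pkt{42, seg};
        LineFrame frame{pkt};
        (void)frame;  // a receiver would unwrap the layers in the reverse order
        return 0;
    }
    ```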

    TCP/IP

    The success of ARPANET led to further developments of networking technologies. In particular, as computers became smaller and began to be deployed in greater numbers, the number within one organization would increase. Thus several computer manufacturers developed their own technologies for local-area networks (LANs). LANs would typically support a building or small campus-sized network.

    At each layer, different manufacturers made different decisions. Obviously the physical layer might involve existing or new cables, or might involve radio or optical communication. Many different cabling technologies were tried. One now ubiquitous technology, Ethernet, was developed at Xerox in 1975–1976 (Metcalfe & Boggs, 1976). Ethernet has been through several iterations, from its initial 3 Mbit/second through to 1 Gbit/second and beyond, but it defines standards such as cable form and plugs on the cables, as well as the electrical signals that should be sent. Xerox PARC built a complementary link-layer protocol, the PARC Universal Packet (PUP) (Boggs et al., 1980).

    As technologies proliferated, there was a need to connect networks together, or to internetwork them, to make wide-area networks (WANs). ARPANET is the prototypical WAN. Internetworking required some form of standard that could be run on a variety of different hardware. PUP and other similar protocols could perform this internetworking role. However, one set of protocols came to dominate: TCP/IP. Although many other protocols are still supported by vendors, TCP/IP is the protocol suite that supports the Internet.

    Transmission Control Protocol/Internet Protocol (TCP/IP) (also known as the IP Suite) was initially developed by Robert Kahn and Vinton Cerf (Cerf & Kahn, 1974). Originally it was designed as a single protocol, TCP, to replace previous reliable protocols such as NCP that ran on the ARPANET. However, reliability was notoriously difficult to engineer within a network, and this was exacerbated if a protocol had to span networks. Perhaps the main insight of Kahn and Cerf was to split the protocol into two, thus TCP/IP. IP runs on the network. IP was designed to be a subset of, or a good match to, the properties of existing LAN technology. However, different networks had different guarantees on their reliability, ordering and timing of delivery. As there was no consensus on this, IP is not reliable. It is simply a protocol for moving data messages around in packets. If a packet gets lost, no attempt is made to reconstruct it, and the sender is not notified. Reliability comes from TCP, and the insight is that a reliable protocol can be built if only the sender and receiver of the data take care of reliability. The network can lose packets, but the sender and receiver can assess whether this has happened, or is likely to have happened, and compensate by resending the packets. Provided the network is not pathological in some way (e.g. dropping all packets following a particular route or containing the same data!), then as long as packets can eventually be transmitted, a reliable protocol can be constructed.
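
    A minimal sketch of that end-to-end idea, assuming nothing about real TCP’s formats or algorithms: the sender numbers each message and resends it until an acknowledgement with that number comes back, while the receiver accepts each number once and acknowledges whatever arrives. Loss in the network is simulated here with a random drop.

    ```cpp
    #include <iostream>
    #include <optional>
    #include <random>
    #include <string>

    // Simulated unreliable channel: delivers a value most of the time,
    // but sometimes "loses" it, as IP is allowed to do.
    std::mt19937 rng(12345);
    template <typename T>
    std::optional<T> unreliableSend(const T& value, double lossRate = 0.3) {
        std::bernoulli_distribution lost(lossRate);
        if (lost(rng)) return std::nullopt;  // packet dropped somewhere in the network
        return value;
    }

    struct DataPacket { int sequence; std::string data; };

    int main() {
        int expected = 0;  // receiver side: next sequence number it wants
        for (int seq = 0; seq < 3; ++seq) {
            DataPacket pkt{seq, "update " + std::to_string(seq)};
            bool acked = false;
            while (!acked) {
                // Sender transmits and waits for an acknowledgement.
                auto delivered = unreliableSend(pkt);
                if (delivered && delivered->sequence == expected) {
                    std::cout << "receiver got: " << delivered->data << "\n";
                    ++expected;                      // accept each message only once
                }
                // The acknowledgement travels back over the same unreliable network.
                std::optional<int> ack;
                if (delivered) ack = unreliableSend(delivered->sequence);
                if (ack && *ack == pkt.sequence) {
                    acked = true;                    // sender can move on
                } else {
                    std::cout << "no ack for " << pkt.sequence << ", resending\n";
                }
            }
        }
        return 0;
    }
    ```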

    It is worth noting that internetworking based on IP is quite simple. Gateways link dissimilar network types together, but their job is quite simple: they must convert packets from one format to another (i.e. convert between different link-layer formats). Once the conversion has been done, which might involve creating more than one new packet, the gateway can dispatch the packets and forget about them. Thus IP is easy to implement; further, note that the gateway will normally be oblivious to the fact that the IP packets are actually part of a TCP stream. As mentioned, TCP/IP is the backbone of the modern Internet, so we will discuss its workings in much more detail in Chapter 3.

    Motivation for the growth of the Internet

    The same motivation that drove the development of the ARPANET also drove the internetworking of different sites: the sharing of scarce resources, particularly in academia. However, there was a growing ulterior motive: electronic messaging.

    Time-sharing systems often had a system for locally connected users to send messages to each other. Messaging allowed system administrators to post messages of the day when users logged in and also supported real-time chat between users. ARPANET got electronic mail (email) in 1971, when Ray Tomlinson created a program called CPYNET that copied files over the network. This allowed users to place files into the local messaging systems of remote machines. Tomlinson is also responsible for choosing @ to combine the host name and user name to make a network-wide unique identifier for a user. user@host is still the standard for email addressing. Messaging was an increasingly important application, and it drove the development of other networks such as BITNET, a cooperative U.S. university network. In 1973, Stephen Lukasik, the then Director of ARPA, commissioned a study that showed that 75% of the traffic on the ARPANET was email (Hafner & Lyon, 1996, p. 194).

    The U.S.’s National Science Foundation (NSF) had started funding connections to the ARPANET in the late 1970s, but in the mid-1980s, it decided to sponsor the creation of a new network, the NSFNET. Unlike the ARPANET, this was designed from the outset to be openly accessible to academia. NSFNET came online in 1986 using TCP/IP. ARPANET had already converted to using TCP/IP in 1983 and NSFNET interoperated with it. Soon NSFNET was the main backbone of the growing Internet. In July 1988, after an upgrade, the NSFNET connected 13 regional networks and supercomputer centers and transmitted 152 million packets of information per month (NSF, 2008). Usage was increasing on the order of 10% per month and soon the NSFNET backbone service was upgraded from T1 (1.5 megabits/second or Mbps) to T3 (45 Mbps) (Figure 1.5).

    The success of the Internet largely results from establishing a common layer that decouples the networking technologies (e.g. ATM and ISDN) from the upper layers, thus avoiding the need to constantly adapt to changes in the underlying technologies. The fact that the protocols were established by a community and freely accessible, along with default implementations, greatly contributed to the adoption and expansion of the Internet. This openness led to built-in support for the protocols in major operating systems, which in turn meant that it was simple to connect almost any electronic device to the Internet.

    Expansion and the web

    Figure 1.6 shows the growth of traffic on the NSFNET backbone from late 1992 through 1994. It also shows the types of protocol that make up the traffic. The graphs are in terabytes (TB or TByte). Some of these protocols may be familiar, some not. We will return to discuss some of them in Chapter 2 and Chapter 3.

    The Domain Name System (DNS) is a distributed database for mapping names of machines to Internet addresses.

    IRC is Internet Relay Chat, a real-time text-chat system that supports many users communicating through channels. It is thus different in concept from more recent instant messaging systems.

    Telnet is a basic protocol that allows users to connect to services on other machines. Originally designed for logging into other machines, and thus simply supporting text communication both ways, it is now often used to test other high-level protocols (see Chapter 3).

    Simple Mail Transfer Protocol (SMTP) emerged as the principal mechanism for moving email between services.

    Network News Transfer Protocol (NNTP) is the protocol used to manage Usenet articles.

    Gopher was an early protocol for accessing hypermedia documents. It provided a text-menu interface for navigating hierarchical collections of documents on servers. Although still running,⁶ it was rapidly superseded by HTTP.

    ⁶Gopher is supported in Mozilla Firefox (as of version 2.0.0.18), as well as a few other browsers. Try entering gopher://gopher.floodgap.com/1/v2 in the navigation bar.

    HTTP is the Hypertext Transfer Protocol. We will discuss this below (a minimal hand-written HTTP request is also sketched after this list), but we can see that by November 1994, HTTP was the second largest source of NSFNET traffic.

    FTP is the File Transfer Protocol. It is a simple protocol for transferring files between machines. It is still in common use for various purposes. Download sites on the Web often provide both FTP and HTTP downloads. In the figure, FTP accounts for the largest component of traffic.
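
    To give a flavour of how simple these text-based application protocols are, the sketch below opens a raw TCP connection and issues a minimal HTTP/1.0 request by hand, much as one might do interactively with telnet. It assumes a POSIX sockets environment, uses example.com purely as a placeholder host, and omits most error handling.

    ```cpp
    #include <cstdio>
    #include <cstring>
    #include <netdb.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main() {
        const char* host = "example.com";   // placeholder host

        // Resolve the host name to an address (this is DNS at work).
        addrinfo hints{};
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        addrinfo* res = nullptr;
        if (getaddrinfo(host, "80", &hints, &res) != 0) return 1;

        // Open a TCP connection to port 80, the conventional HTTP port.
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;
        freeaddrinfo(res);

        // The request is just lines of text, which is why telnet can be used
        // to poke at protocols like this by hand.
        char request[256];
        std::snprintf(request, sizeof(request),
                      "GET / HTTP/1.0\r\nHost: %s\r\n\r\n", host);
        send(fd, request, std::strlen(request), 0);

        // Print whatever the server sends back (headers followed by the page).
        char buffer[4096];
        ssize_t n;
        while ((n = recv(fd, buffer, sizeof(buffer), 0)) > 0) {
            std::fwrite(buffer, 1, static_cast<size_t>(n), stdout);
        }
        close(fd);
        return 0;
    }
    ```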

    The growth of HTTP is the main story of this figure. In 1989, Tim Berners-Lee, working at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland, proposed building a web of stored hypertext pages that could be accessed by browsers. In 1990, he and Robert Cailliau developed the first browser and web server software, and the Hypertext Markup Language (HTML) for describing pages (Berners-Lee & Fischetti, 1999). The first web server, info.cern.ch, went live by Christmas 1990. Because it was the first web site, the project was publicized through Usenet news.

    The growth of HTTP and HTML was explosive. HTML was easy to write, HTTP was open and ran over the Internet, and the browser software was free. As pages proliferated, there were a few attempts to make hierarchical directories of pages, similar to gopher, but search engines began to become useful and then essential for finding information on the web. Tools made authoring HTML sites very easy, and now there are thousands of pieces of software to help you create and maintain pages.

    Peer-to-Peer

    Although the Internet is constructed from open protocols, and anyone can publish information on an FTP site or web site, it is very much a publishing-focussed system. The information has a location (a URL or Uniform Resource Locator) that indicates a protocol, a server machine and a resource name. Hence the URL http://news.bbc.co.uk/sport/default.stm means “Use the HTTP Protocol to connect to the machine news.bbc.co.uk and fetch the file sport/default.stm”. There are several types of peer-to-peer networks, some of which support NVEs, but the most prevalent are file-sharing networks. Networks such as Gnutella have been perceived to be the bane of the lives of holders of copyrighted digital media; such networks allow users to share media amongst themselves, without resorting to a centralized server. A user queries the peer-to-peer network to find a file and then downloads pieces of the information from various other peers. Importantly, there is no single publisher.

    There are thus two main activities: querying the network to find hosts that have resources, and then downloading those resources. Different peer-to-peer networks work in slightly different ways: some use central repositories or lists (e.g. BitTorrent), others query across those peers that the user knows about. Each Gnutella client connects to a small number of other clients (peers). As a query is made, it is propagated peer to peer for a certain number of hops. Figure 1.7 shows a number of queries being propagated across a local Gnutella network. The operation of Gnutella is described in more detail in Section 12.9.1.
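
    The query flooding can be sketched as follows. This is a toy model rather than Gnutella’s actual wire protocol: each peer knows a handful of neighbours, and a query carries a hop limit (often called a TTL) that is decremented at each step, so it only reaches peers within a few hops of the originator.

    ```cpp
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>
    #include <vector>

    // Toy peer-to-peer overlay: each peer knows a few neighbours and holds some files.
    struct Peer {
        std::vector<std::string> neighbours;
        std::set<std::string> files;
    };

    using Overlay = std::map<std::string, Peer>;

    // Flood a query outwards from 'at', decrementing the hop limit (TTL) at each step.
    // 'seen' stops the same peer from processing the same query twice.
    void query(const Overlay& overlay, const std::string& at, const std::string& file,
               int ttl, std::set<std::string>& seen) {
        if (ttl < 0 || seen.count(at)) return;
        seen.insert(at);
        const Peer& peer = overlay.at(at);
        if (peer.files.count(file)) {
            std::cout << at << " has " << file << "\n";   // a query hit
        }
        for (const std::string& n : peer.neighbours) {
            query(overlay, n, file, ttl - 1, seen);       // forward to neighbours
        }
    }

    int main() {
        Overlay overlay = {
            {"A", {{"B", "C"}, {}}},
            {"B", {{"A", "D"}, {"song.mp3"}}},
            {"C", {{"A"}, {}}},
            {"D", {{"B"}, {"song.mp3"}}},
        };
        std::set<std::string> seen;
        query(overlay, "A", "song.mp3", 2, seen);  // A -> B (1 hop) -> D (2 hops)
        return 0;
    }
    ```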

    1.3.2. Simulators

    The best known type of simulator is the flight simulator (Rolfe & Staples, 1988). Flight simulators were developed because flight is one activity that is very dangerous to train for. Since the First World War there has been a need to train pilots before they take to the air. Training for flight is also expensive, especially if the trainee is learning to fly a commercial airliner. Thus flight simulators can be a key training resource for pilots. Flight simulators are also used for research purposes, especially in the military, for determining pilot performance under the extremes of modern aircraft flight performance. Flight simulators typically consist of a cockpit, sometimes mounted on a motion platform, with real instrument panels. The view out of the cockpit windows is computer generated. Thus, although the simulator might include several participants (pilot, co-pilot, etc.), it is all one self-contained system. There might be others role-playing air-traffic control or providing training inputs, but each simulator is primarily a standalone machine. Although flight simulation is the common example, there are simulators for many types of vehicle, including ships, cars and spacecraft.

    Simulators are thus most commonly used individually for personal or small-group training. Networking of flight or other types of vehicle simulators is most commonly done for military training,⁷ which leads us to SIMulator NETwork (SIMNET).

    ⁷There is a remarkable recreational flight simulation network called Virtual Air Traffic Control Simulation (VATSIM, see www.vatsim.net). VATSIM allows users of consumer simulators such as Microsoft Flight Simulator to connect online to virtual air traffic control. Thus they can interact with, or rather avoid, other users and fly simulations of real routes under various conditions.

    SIMNET

    SIMNET was a network of simulators built for training exercises involving, initially, tanks and helicopters (Neyland, 1997, Chapter 3). It was built in response to the cost and constraints of live training exercises. The project, undertaken by BBN Technologies, who previously developed the IMP for ARPANET, commenced in 1982 and was completed in 1989. The goal was to create a virtual arena for large-scale battle simulations involving both individual vehicles and command and control facilities. The original concept was to link hundreds of tank simulators at Fort Knox, Kentucky, U.S. to a smaller number of aviation simulators at Fort Rucker, Alabama, U.S. Eventually there were 260 simulators at 11 sites in the U.S. and Europe (Cosby, 1999).

    Each individual SIMNET vehicle simulator was relatively cheap for the time, costing approximately $200,000. The graphics were fairly crude as the focus was on the operational behavior within the context of the military command and control structure. See Figure 1.8 for an example of one of the aviation simulators.

    SIMNET was initially implemented over dedicated LANs (Pope, 1989). Later expansion demonstrated the possibility of wide-area simulation exercises. It was by no means a cheap undertaking, but it had proved its worth and it paved the way for the development of Distributed Interactive Simulation (DIS).

    DIS

    DIS refers both to a concept and a set of technologies (Neyland, 1997). The concept is the expansion of SIMNET to more complex distributed simulations. The technologies are standards for interconnection between simulators, and conventions on how simulators will cooperate (IEEE, 1993). We will discuss some of the underlying technologies of DIS in Chapter 7. Here we give an outline of one particular program conducted with DIS, Zen Regard.

    Zen Regard was a program of large-scale exercises involving all the U.S. armed services. Built starting in 1993 on dedicated secure networks, it eventually connected 50 different types of simulators at 20 sites (Neyland, 1997). These ranged from vehicles through static defense systems to command and control systems, and even included live tracking of real aircraft. Each individual vehicle, be it a ship, a tank or an aircraft, could potentially see the others operating within the simulation domain, a part of Southwest Asia. Unlike SIMNET, there was more of an emphasis on higher-quality visual simulation for each operator, such that the experience was similar to actually being in that scenario. Thus individual simulators were often based on the latest simulator technology.

    Flight and dog

    Flight was a flight simulator for Silicon Graphics IRIS workstations, originally built in 1983.⁸ Silicon Graphics were pioneers in the development of graphics workstations, and simulators were an obvious target market. Flight was a standard demonstration on the system for many years. From 1984, networking was added, first using serial cables, then using a suite of networking protocols called Xerox Network Services (XNS) which ran on Xerox’s Ethernet. XNS was an important precursor to the now ubiquitous TCP/IP. The version called dog appeared in 1985 and introduced combat. Dog should perhaps be listed under the network games section, but no doubt it inspired many serious simulators as well as NGs. This version was an early demonstrator of the use of the User Datagram Protocol (UDP) (see Section 2.2), but because it sent information at the graphics frame rate, it caused a lot of network traffic. The initial version worked over the Ethernet using the TCP/IP protocol suite, but it didn’t actually work over a router and thus didn’t support internetworking. Later versions used multicast and so could be used on larger networks.

    ⁸On a current SGI machine, the manual page credits include: Original version by Gary Tarolli. Current version by Rob Mace. Contributors: Barry Brouillette and Marshal Levine. Network communications: Dave “ciemo” Ciemiewicz and Andrew Cherenson.

    NPSNET

    NPSNET-I through NPSNET-V, from the Naval Postgraduate School in Monterey, California, were an influential set of simulation systems (Capps et al., 2000; Macedonia et al., 1994). They were developed in parallel with SIMNET and DIS, interoperating with one or the other in different iterations, but they were designed to scale to larger numbers of participants. SIMNET and DIS stretched network capacity to its limits because every simulator would receive packets from all the others.

    NPSNET-I and NPSNET-II were designed for Ethernet-based local networks and used an ASCII protocol. They did not support WANs. NPSStealth was derived from NPSNET-I and supported interoperation with SIMNET. It used bridges between LANs to support wide-area simulations. NPSNET-IV used the DIS protocols and thus interoperated with a far larger set of simulators. It became a test platform for many experiments in networking for large-scale virtual environments and was used at hundreds of sites. Notably, NPSNET-IV used multicast to support scaling over WANs. Multicast provides a network-level mechanism for sending one packet that can be routed simultaneously by the network to multiple destinations. Previously in DIS, packets were broadcast, so that every machine would receive them. Multicast provides some scalability, but unless different machines need to receive different sets of events, the simulation still needs to relay every event to each machine. Thus, multicast was coupled with a mechanism called area of interest management (AOIM) (Macedonia, 1995), which exploits the fact that in a large arena participants are more likely to be interested in events close to them than in events further away. Thus the virtual space of the NVE can be broken into regions, each assigned to a different multicast group. Multicast and AOIM together provided a basis to scale to very large numbers of players, and thus we discuss them in more detail later in the book (in Chapter 4 and Chapter 12, respectively). Morse, Bic, and Dillencourt (2000) give a good overview of other military simulations and the scale of entities that each supported.
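
    A minimal sketch of the AOIM idea, assuming a simple square-grid partition of the world (the cell size and group naming below are invented; the actual NPSNET scheme is discussed in Chapter 12): each client derives the set of cells near its own position and subscribes only to the multicast groups for those cells.

    ```cpp
    #include <iostream>
    #include <set>
    #include <string>

    // Map a world position to a grid cell, and each cell to a multicast group label.
    // The cell size and the naming scheme are invented for illustration only.
    struct Cell { int x; int z; };

    Cell cellFor(double x, double z, double cellSize = 100.0) {
        return { static_cast<int>(x / cellSize), static_cast<int>(z / cellSize) };
    }

    std::string groupFor(const Cell& c) {
        // In a real system this would be a multicast address rather than a label.
        return "group-" + std::to_string(c.x) + "-" + std::to_string(c.z);
    }

    // Subscribe to the cell we are in plus its eight neighbours, so a client receives
    // events from entities close to it and ignores the rest of the (large) world.
    std::set<std::string> groupsOfInterest(double x, double z) {
        std::set<std::string> groups;
        Cell home = cellFor(x, z);
        for (int dx = -1; dx <= 1; ++dx)
            for (int dz = -1; dz <= 1; ++dz)
                groups.insert(groupFor({home.x + dx, home.z + dz}));
        return groups;
    }

    int main() {
        // An entity at (250, 40) subscribes to the groups covering its neighbourhood.
        for (const std::string& g : groupsOfInterest(250.0, 40.0)) {
            std::cout << "subscribe to " << g << "\n";
        }
        return 0;
    }
    ```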

    DARWARS Ambush!

    The field of networked military simulations is vast. Because of the costs involved, they have often taken advantage of the latest advances in computing. Jumping right to the modern day, there is now a very significant overlap between military simulators and computer games. On the gaming side, the U.S. Army has released a game, America’s Army, based on the Unreal Engine. This is used as an aid to recruitment, and was developed in collaboration with the MOVES Institute at NPS. On the training side, the military has used game engines in a number of training simulations (DODGDC, 2008).

    DARWARS Ambush! is an exemplar of the networked technologies currently used. It is a tactical trainer for personnel in the field. It is based on the PC and Xbox game Operation Flashpoint by Codemasters. Situations simulated include road-convoy-operations training, platoon-level mounted infantry tactics, dismounted infantry operations, rules-of-engagement training and cross-cultural communications training. DARWARS Ambush! was developed by BBN Technologies. Figure 1.9 shows a participant wearing a head-mounted display viewing a scenario that can be seen on the monitor screens in the middle of the picture.

    1.3.3. Multiuser dungeons

    Multiuser dungeons (MUDs) are multiuser text-based adventure games. Although not so popular now, their legacy is still seen within more modern games. Additionally, several conventions for text-based chat were pioneered in MUDs. Indeed, some MUDs were little more than text-chat systems with a convenient room system to split up text channels. For an overview of the history of MUDs we can recommend the Online World Timeline (Koster, 2002).

    MUD1 (1978)

    Roy Trubshaw created MUD in 1978 and developed the early versions in collaboration with Richard Bartle, who subsequently took over development. Developed while they were students at Essex University, MUD1 was inspired by text adventures such as Zork, a single-player game that had been popular for many years. From 1980, MUD1 was remotely accessible over an experimental network.

    Some features of the game are recognizable to anyone who has played a text adventure: players can move between discrete locations, and carry and use objects (Figure 1.10). Players at a certain level of experience, wizards, can edit the game by adding new objects and rooms.

    The original MUD1 was commercially licensed to CompuServe under the name British Legends. This was possible because CompuServe ran consumer services on the same type of computer, a DECSystem-10, as Trubshaw and Bartle had used in Essex.

    A version of MUD1 is still available to play under the British Legends label. Although ported to modern hardware, the game play is the same. Try telnet british-legends.com 27750. For more on what telnet is, see Chapter 3.

    AberMUD and later

    AberMUD was originally written in 1987 by Alan Cox, Richard Acott, Jim Finnis and Leon Thrane at the University of Wales, Aberystwyth. It went through several versions, with several other contributors. Version 3 was a port to the newly popular UNIX operating system and C language. This, and the code being made open source, meant that it had considerable influence over the design of other systems. Installations proliferated at sites across the world, including the university of one of the authors when he was an undergraduate. In the following few years there was a mini Cambrian explosion of MUD implementations, all taking ideas from MUD1 and AberMUD (Keegan, 1997; MGP, 2008).

    One important aspect of the development of MUD engines was their increasing customizability. Early MUDs had the descriptions and behaviors of objects hard-coded into the system. Only arch-wizards, the system installers, had the capability to alter the system, and they would have to restart the system to do so. Later MUDs (e.g. LPMud) became extensible from within, but only by wizards: highly experienced players who had earned their status by playing the game extensively. Some or all of these wizards, depending on the system and the arch-wizards, were allowed access to commands to alter the space by adding objects or commands to the system. Access was restricted not only because the systems were quite fragile, but also because changes would annoy players if they didn’t work. You also needed a certain amount of skill to make interesting content, much as you need skill to be able to successfully gamesmaster (i.e. manage and create content in) face-to-face role-playing games.

    MOOs (MUDs, object-oriented) make in-world artefact creation part of the experience for all players. The original MOO server was authored by Stephen White, based on his experience creating the programmable TinyMUCK system. Putting the tools in the hands of the players meant that MOOs were quickly appropriated as test-beds for all sorts of social and work-related collaborations. The best known MOO, one which is still operational, is LambdaMOO.

    Many MUDs, MOOs, etc. were based on strong role-playing themes; others were more social in nature. They fostered very strong communities (Rheingold, 1993), but they have fallen out of favor in preference to bulletin boards, social networking sites or MMORPGs. Of course, some of these technologies evolved out of the basic technology behind MUDs.

    1.3.4. Electronic games

    Electronic games or video games have a long and colorful history. For a full overview of the history we refer the reader to another text such as DeMaria and Wilson (2003) or Kent (2002). The following are some key examples.

    SpaceWar!

    SpaceWar! is perhaps best described to a modern audience as two-player Asteroids, but with the players firing at each other rather than at asteroids. Although it is two-player, SpaceWar! is not an NG, and thus it doesn’t really deserve a place in this introduction. However, it is often credited as being the first true digital computer game and thus it always features in histories of the area.

    SpaceWar! was programmed initially by Steve Russell for the PDP-1 computer; this was an expensive machine for a game to be running on (Figure 1.11).

    Many computer games today support multiple players simply by having them all represented on the screen simultaneously or in a split-screen mode. These use a single machine, and technically they are little different from single-player games.

    Maze (Maze War)

    Maze or Maze War is important for two main reasons: it was the first first-person shooter, and it was one of the earliest games (alongside SGI’s dog flight simulator, see above) to work over the Internet. It originated in 1973 at NASA Ames Research Center with Steve Colley, Howard Palmer and Greg Thompson (Thompson, 2004). Colley was experimenting with 3D images on an Imlac PDS-1. This evolved into a perspective view of a simple maze, and Palmer and Colley developed this into a single-player game where the player had to find the exit to the maze. Palmer and Thompson extended this to an initial two-player version using two Imlacs connected by a serial cable. The ability to shoot each other naturally followed. Thompson then moved to MIT and extended the Maze code. Dave Lebling wrote a PDP-10/ITS Maze server allowing for eight-player support. Imlacs were popular on the ARPANET at the time, and players at other sites could connect to the MIT server. According to Thompson (2004):

    Legend has it that at one point during that period, MazeWar was banned by DARPA from the Arpanet because half of all the packets in a given month were MazeWar packets flying between Stanford and MIT.

    Subsequent versions included a 1977 dedicated hardware version, and a version at Xerox PARC using their new raster display and Ethernet networking. It is reported that some PARC engineers created a cheat by displaying player positions on a map. This is a common form of cheat in FPS games, and one that is difficult to engineer against. The solution at the time was to encrypt the source code, so that such modifications were not possible. In 1986, Christopher Kent ported Maze War to use UDP, and thus, along with SGI’s dog, it is one of the earliest Internet-enabled games (Figure 1.12).

    BZFlag

    BZFlag was a game started by Chris Schoeneman in 1992 while he was a student at Cornell. It is based on the seminal Battlezone arcade game from Atari, a wire-frame tank driving game. BZFlag takes the same vector-based graphics approach and has a distinctive visual style that is easily recognizable. Although originally developed for SGI workstations, it was completely independent from the game BZ developed by Chris Fouts of SGI. The two games were very similar because of the shared heritage.

    BZFlag is still available for download and is still being developed (BZFlag, 2008) (Figure 1.13). One anecdote from the developer that is worth repeating is that one game play feature, the presence of flags that can be picked up to give players superpowers, was developed in response to a hack from a player who changed the code to unilaterally give himself such a superpower. While from the developer’s description the hacker in this case was completely open, clandestine hacking is the bane of system administrators’ lives, and of course these days it can have an economic impact on a game. We return to issues of security in Chapter 13.

    DOOM

    DOOM was released by id Software in 1993. It had been widely anticipated, and although it was not the first first-person shooter nor the first NG, it did bring the genre to broad public attention, possibly due to the shareware business model where the first nine levels of the game were distributed for free along with the game engine. Similar games were long called DOOM clones, at least until id Software’s Quake came out, from which point they were called Quake clones (Figure 1.14).

    The story is very familiar to almost anyone who has played computer or video games: a lone soldier must battle an increasingly fearsome and deadly enemy horde, equipped, thankfully, with increasingly powerful weapons. The game was split into three episodes of nine levels (maps). These started on a space base orbiting Mars and ended up in Hell. The gory content was controversial at the time, although this was probably due as much to DOOM being one of the first games to reach widespread media awareness as to the actual content. The engine behind DOOM presented 3D graphics based on a novel 2.5D rendering technique that ran on modest machines for the time. The engine, like many in the genre, was modifiable by users. This led to many hundreds of new levels being made and distributed over bulletin board systems, magazine cover disks and the web.

    DOOM could be played multiplayer in a number of ways. First, over null-modem cables connecting two machines or by modem connection between two machines. It also supported the Internetwork Packet Exchange (IPX) protocol over Ethernet. This meant that it could work on many company and university networks, leading to it being banned during work hours at many places. DOOM also made the game mode of deathmatch popular. In this mode, multiple players engage in every-person-for-themselves combat. Every FPS since has included this mode, but balancing the game play for the weapons and power-ups collected, and allowing new players a fun experience still taxes game designers.

    Id Software made the source code available in 1997, and thus DOOM now runs on almost anything with a CPU, and has been ported to various networking technologies. There were some expansions and a new game using the same engine (DOOM 2, 1994), but it was another 10 years (2004) until DOOM 3 was released with a completely new graphics engine.

    Quake and beyond

    Quake, released in 1996, was id Software’s second major FPS technology. Quake supported play over the Internet, rather than just local-area or modem connections. This engine supported true 3D environments and used 3D models for characters, unlike the DOOM engine, which had used sprites (Figure 1.15). As they did with DOOM, id Software released the game as shareware, so some levels were free to play, but players needed to buy the later levels of the game in order to complete it.

    Quake included a server process that could be run on a dedicated machine. Client processes would connect to the servers. Finding a server was a problem unless you were playing against friends or colleagues on a LAN. Thus websites started to spring up with listings of active Quake servers. Software tools then started to emerge, such as QuakeSpy (later GameSpy), which allowed players to find game servers. These tools would access public lists of games, and then contact the servers to find out whether they were full, which game variations they supported, and most importantly, the ping time of the server. Ping time, the time for a packet to reach the server and return, was incredibly important for game play. We’ll discuss this in more detail in Chapter 10 and Chapter 11.

    Like DOOM, Quake was easily modifiable by users. Because it was easier to model 3D structures in Quake than DOOM, modding
