
Pattern-based Architecture for Building Mobile Robotics Remote Laboratories

A. Khamis, D.M. Rivero, F. Rodríguez, M. Salichs

Department of Systems Engineering and Automation, Carlos III University of Madrid, Avda. Universidad 30, 28911 Leganés, Madrid, Spain
{akhamis, mrivero, urbano, salichs}@ing.uc3m.es

Abstract: The building of remote laboratories for experiments in mobile robotics requires expertise in a number of different disciplines, such as Internet programming and telematic and mechatronic systems. Remote laboratories offer students access to complementary experiments, not available at their own university, as support to lectures. An intuitive user interface is required so that inexperienced people can control the robot remotely. This paper describes a design pattern-based architecture for building remote laboratories for mobile robotics. The proposed remote laboratory is currently used to provide remote experiments on indoor mobile robotics, addressing different approaches to the main problems of mobile robotics, such as sensing, motion control, localization, world modeling and planning. These experiments are being used in several mobile robotics and autonomous systems courses at the undergraduate and graduate levels.

I. INTRODUCTION

The Internet has become a major global tool for communication and information sharing. It provides a global, integrated communication infrastructure that enables easy implementation of distributed systems. For this reason, a great deal of attention is now being paid to the World Wide Web as a tool for building remote laboratories for tele-education in mechatronics. Robotics education provides an ideal field for tele-education systems because of its flexibility. Unlike traditional fields, robotics is still an emerging area. Relatively few programs exist at the graduate level, and even fewer at the undergraduate level. The courses in existence are still new and open to rapid change and new approaches. Remote laboratories can be used to provide a superior educational experience compared to a purely in-residence laboratory. These labs are not restricted to synchronized attendance by instructors and students; they have the potential to provide constant access whenever needed by students [1]. Very few remote laboratories for mobile robotics have been commercialized or moved outside the research laboratories to public access. The main problems of remote control robotics have been addressed in [2-3]. The XAVIER system, built from a set of standard components for communication, planning and behavior integration, has been in almost daily use since December 1995 [4]. The RHINO system, developed at Bonn University, is similar to the XAVIER system and has been used as a tour guide in museums [5]. Schilling has presented a model design for remote mobile robots [6]. These systems have been developed to cover certain subjects by providing online experiments or online classes, without providing any kind of generic tools by which remote users can customize the experiment according to their needs. An integrated architecture for a tele-education system in the field of mobile robotics has been proposed in [7], which relies on three main processes: tutoring, authoring and administration. This system proposed the use of assessment tools and collaboration mechanisms.

II. SYSTEM DESCRIPTION

A. Remote Laboratory Overview

A remote laboratory can be defined as a network-based laboratory where the user and the real laboratory equipment are geographically separated, and where telecommunication technologies are used to give users access to the laboratory equipment. By networking many remote laboratories, we can obtain a framework called a virtual laboratory. The IECAT project, in which we are participating, is an example of such a framework in the field of autonomous and teleoperated mechatronic systems [8]. These laboratories represent a coordinated set of experiments for students, with hardware facilities physically spread over different locations but accessible via the Internet. In designing these distance laboratories for robotic systems, a number of challenges must be addressed, particularly the telematics infrastructure which gives access to the experiments, as well as the user interface, which provides the necessary interactivity with the remote hardware and supports the learning process of students through appropriate feedback [9]. To implement a remote laboratory, an Internet-based open-loop teleoperation model is commonly used, as shown in Fig.1.

Fig.1 Internet-based Open-Loop Teleoperation Model

This model is based on the simple protocol commonly used in distributed computation, the request/response protocol. The client interacts with the system using any Web browser to make a request. Client requests are translated to HTTP requests, which are satisfied by the Web server. These requests are converted to high-level control requests that are received by the robot controller, which transmits them as low-level control requests to execute the required task. Sensory feedback is required to give the user information about the remote robot's environment and the consequences of his/her commands. By using the concepts derived from this simple model, a remote laboratory can be developed to provide live performance experimentation.

B. Mobile Robot Description

The robot used is the indoor mobile robot B21 from Real World Interface [10], with Mobility software. The B21 hardware consists of two main sections, the base and the enclosure. The base contains the batteries and the motors (four high-torque, 48 VDC servo motors with four-wheel synchronous drive), which translate (90 cm/s with 1 mm resolution) and rotate (167°/s with 0.35° resolution) the robot, as well as dedicated microprocessors which handle very low-level functions such as dead reckoning. The enclosure contains two main computers and a power distribution system; a camera is mounted on top of the enclosure. In addition, the B21 has several types of sensors, such as sonar, laser, tactile and infrared sensors.
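As an illustration, the translation step from a client HTTP request to a high-level control request can be sketched as follows. The query format and class names here are hypothetical, not taken from the actual system:

```java
// Hypothetical sketch: parsing an HTTP-style query string into a
// high-level control request for the robot controller.
public class RequestTranslator {

    // High-level control request derived from the client's HTTP request.
    public static class Command {
        public final String skill;
        public final double value;
        public Command(String skill, double value) {
            this.skill = skill;
            this.value = value;
        }
    }

    // Parse a query string such as "skill=forward&value=0.5".
    public static Command parse(String query) {
        String skill = null;
        double value = 0.0;
        for (String pair : query.split("&")) {
            String[] kv = pair.split("=", 2);
            if (kv[0].equals("skill")) skill = kv[1];
            else if (kv[0].equals("value")) value = Double.parseDouble(kv[1]);
        }
        return new Command(skill, value);
    }
}
```

In the real laboratory this role is played by Java servlets, which then forward the resulting high-level request to the robot controller.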

III. HARDWARE ARCHITECTURE

Remote laboratories offer students access to complementary experiments, not available at their own university, as support to lectures. Students can thus experiment with remote hardware during a time slot selected according to their own schedule. The implementation of the remote laboratory system should make efficient use of resources in order to account for bandwidth and time delay restrictions. On the other hand, such systems should be implemented in an extendable way, which guarantees that the different software modules need not be rewritten when new hardware is installed to be accessed through the experiments. To obtain a maximum level of portability, an important design decision was that all interactions with the remote laboratory could be accomplished with only a Web browser as an interface; no additional software or plug-ins should be needed to use the laboratory. With Java, we can provide a cross-platform user interface for configuration, testing and visualization of our robot software system in action. Fig.2 shows the remote laboratory hardware architecture, which is multi-layered to facilitate quick start-up and a high level of code reuse and transportability.

Fig.2 Hardware Architecture

A. User Layer

During the implementation phase, an event-based control approach has been used to guarantee system stability in the presence of network time delay. In this approach, a non-time-based motion reference is used. This reference is usually related directly to real-time sensor measurements and the task, so the time delay has no effect on stability [11]. The role of the user in the control loop is just to activate or deactivate a robotic skill, and therefore the control loop is not sensitive to communication delay. The idea of overcoming communication constraints by communicating at a more abstract level and increasing the robot's autonomy is fundamental to remote control over constrained communications.

Users can access the experiment using any Web browser (Netscape or Internet Explorer). User requests are received by Java applets or MIDlets, and events are sent to the corresponding Java servlet in the Web server to activate/deactivate a robotic skill or to invoke sensory data.

B. Middleware Layer

In this layer, there is a PC linked internally to an Ethernet LAN and externally to the university's intranet. It runs the Apache Web server, which hosts the HTML files and two agents implemented as Java servlets. The robot-on agent deals with the real robot when the robot is actually running, providing the information required to process a certain ability selected by the user or required in an experiment step. This agent contains two groups of Java servlets: control servlets, which send control commands to the robot, and sensor servlets, which invoke the sensory data. The other agent forming the middleware layer is the robot-off agent. This agent represents systematic knowledge of mobile robotics by accessing a database using Java Database Connectivity (JDBC). It also contains many other servlets, such as evaluation servlets and login servlets.

C. Robot Layer

The robot layer contains the robot skill servers, which have been implemented based on a two-level architecture called AD (Deliberative and Automatic levels) [12]. A skill represents the robot's ability to perform a particular task. They are all


built-in robot action and perception capacities [13]. In the deliberative level there are skills capable of carrying out high-level tasks, while at the automatic level there are skills in charge of interacting with the environment. The path planner, the environment modeller and the task supervisor are some of the skills included in the deliberative level. The sensorimotor and the sensorial skills are found in the automatic level. The former are in charge of robot motion; the latter detect the events needed to produce the sequencer transitions which manage the task performed by the robot. In the AD architecture, skills are client-server modules. Each skill is implemented as a distributed object, and it is activated by the deliberative-level sequencers [14]. During the period in which the skills remain active, they can connect to the data objects of other skills or to the sensorial servers (odometry, sonar, laser and camera). As a result, the skills can generate motor actions over the robot actuators (through a drive server), in the case of sensorimotor skills, or events, in the case of sensorial skills. The skill's outputs (actions and events) are stored in its data objects.

IV. SOFTWARE ARCHITECTURE
An intuitive user interface is required so that inexperienced people can control the robot remotely. The proposed architecture is a visual proxy pattern-based architecture. A design pattern is a formal description of a problem and its solution. Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice. By using design patterns, a high reuse rate (the ratio of reused code to total code) can be obtained. The visual-proxy pattern is in some ways a specialization of the Presentation/Abstraction/Control (PAC) architecture [15], which can be used to build user interfaces for object-oriented systems and can guarantee a high degree of extensibility and reusability of the software components [16]. The objective of the PAC architecture is to separate the generation of the user interface entirely from the abstraction-layer objects, providing reusability and extensibility facilities. The PAC control object is passive with respect to message flow. The messages go directly from the visual proxy (presentation layer) to the abstraction-layer object that manufactured the proxy. In the visual proxy architecture, encapsulation is still intact in the sense that the implementation of the abstraction-layer object can change without the outside world knowing about it. Fig. 3 is a static model of the proposed architecture using the Unified Modeling Language (UML). This architecture consists of three tiers:

A. Client Tier

In this tier, the ExperimentTool class implements the User_interface interface so it can produce visual proxies when asked. It asks the CommandHandler and DynamicRepository classes for visual proxies as simple JComponents. This class contains a constructor and an implementation of all methods required by the interface, but it is not a God class; object-oriented systems tend to be networks of cooperating objects with no central God class that controls everything from above. It positions the requested proxies within the panel but does absolutely nothing else with them; it is simply a passive vehicle for holding visual proxies.
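As a minimal sketch of this idea (illustrative class names only, not the actual laboratory code), an abstraction-layer object manufactures its own visual proxy as a JComponent, while the holding panel stays passive:

```java
import javax.swing.JComponent;
import javax.swing.JLabel;
import javax.swing.JPanel;

// Sketch of the visual-proxy idea: the abstraction-layer object builds its
// own presentation as a JComponent; the panel that asks for it is a passive
// holder and never touches the object's internal state.
public class VisualProxyDemo {

    public interface UserInterface {
        JComponent getVisualProxy();   // asked for by the panel
    }

    // Abstraction-layer object: owns the state, manufactures its own proxy.
    public static class OdometryModel implements UserInterface {
        private double x, y;
        public void moveTo(double x, double y) { this.x = x; this.y = y; }
        public JComponent getVisualProxy() {
            return new JLabel("pose: (" + x + ", " + y + ")");
        }
    }

    // Passive holder: positions proxies, does nothing else with them.
    public static JPanel buildPanel(UserInterface... models) {
        JPanel panel = new JPanel();
        for (UserInterface m : models) panel.add(m.getVisualProxy());
        return panel;
    }
}
```

The panel never learns how OdometryModel stores its pose; only the proxy does, which is what keeps the coupling between presentation and abstraction minimal.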

Fig. 3 Software Architecture

The proxies communicate directly with the abstraction-level objects that create them, and these abstraction-layer objects can communicate with each other. If the state of an abstraction-level class changes as a result of some user input, it sends a message to another of the abstraction-level classes, which may or may not choose to update its own user interface (its proxy) as a consequence. The aggregation relationship between the ExperimentTools class and the user interface components indicates that the ExperimentTools class may have one or more interface components and is aware of the components it has, but a component is not aware of which experiment interface it belongs to. Every component can be associated with one or more experiment interfaces. ExperimentApplet instantiates ExperimentTools to form the experiment interface seen by the user via any Web browser. The CommandHandler class can be a direct control, automatic skill or deliberative skill class, or a combination of different types of control classes in the same panel. A dependency relationship has been established between CommandHandler and DynamicRepository, which specifies that a change in DynamicRepository may affect the CommandHandler classes that use it, but not necessarily the inverse. CommandHandler may use the DynamicRepository class to invoke sensory data that may be necessary to complete the control task, as in the case of a supervised obstacle avoidance control skill that relies on sensory data. The DynamicRepository class provides information about sensor status and can be used to remotely acquire sensory data. This class is implemented as a thread, by implementing the Runnable interface, to provide updated sensory data in real time. This data may be odometry (the actual robot position and its translational and rotational velocities), ultrasonic sensor data or laser sensor data. Both the CommandHandler and DynamicRepository classes interact with the middleware using the HTTP protocol to send user requests to the robot server.

B. Middleware Tier

The communication between the servlets and the remote robot servers is done via the Object Request Broker (ORB) of the Common Object Request Broker Architecture (CORBA), where the Java servlet acts as a client to the robot skill. The ORB provides the communication via the Interface Definition Language (IDL), based on the Internet Inter-ORB Protocol (IIOP) [17]. The decision to use CORBA as the distributed object architecture of the remote laboratory is based on a qualitative and quantitative comparison between the two most commonly used architectures, CORBA and RMI [18]. This study concluded that CORBA is suitable for large-scale or partially Web-enabled applications where legacy support is needed and good performance under heavy client load is expected. Moreover, CORBA servers can be located at any Internet site.
RMI, on the other hand, is suitable for small-scale, fully Web-enabled applications where legacy support can be managed with custom-built or pre-built bridges, and where ease of learning and ease of use are more critical than performance.

C. Server Tier

The server tier consists of subsystems based on the robot control framework, which can be the commercial Mobility framework [10] or our own AD architecture-based framework. Using these subsystems, the ORB objects are implemented in an encapsulated and modular manner. The object to be implemented may be a sensor object that invokes the sensor data, or a control object that sends control commands to the robot actuators. IDL skeletons provide a language-independent object implementation and communicate with the IDL stubs via the ORB, based on the IIOP protocol.

Fig. 4 Client/Middleware/Server Interaction

D. Client/Middleware/Server Interaction

The client/middleware interaction is based on HTTP requests/responses, and the middleware/server interaction is implemented using the broker pattern (ORB). As shown in Fig. 4, to activate or deactivate a robotic skill, the user has to use the SkillGUI to send HTTP-based commands to the SkillServer through SkillServlet, which serves as an intermediary between client and server. SkillServlet does not call the SkillServer directly, but through an IDL interface (SkillIDL). This class calls methods of a proxy object that implements the SkillIDL interface. Because it calls methods through an interface, it does not need to be aware that it is calling the methods of a stub object that is a proxy for the SkillServer object, rather than the SkillServer object itself. The stub object encapsulates the details of how calls to the SkillServer object are made, and its location; these details are transparent to SkillServlet objects. The stub object also assembles information identifying the SkillServer object, the method being called and the values of the arguments into a message. On the other end of the connection, part of the message is interpreted by a CallDispatcher object and the rest by a skeleton object. The connection classes are responsible for transporting messages between the environment of the remote SkillServlet (the Web server) and the environment of the SkillServer (the robot server). The CallDispatcher receives messages through the connection object from a remote stub object and then passes each message to an instance of an appropriate skeleton class, which it creates. The CallDispatcher object is responsible for identifying the SkillServer object whose method will be called by the skeleton object. The skeleton classes are responsible for calling the methods of SkillServer objects on behalf of remote SkillServlet objects.
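The stub/dispatcher/skeleton call path described here can be sketched in plain Java. This is a deliberately simplified stand-in for a real ORB (no connections, no marshalling); all class bodies below are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified broker-pattern sketch: the stub forwards a call as a message,
// the dispatcher routes it to the right skeleton, and the skeleton invokes
// the real server object.
public class BrokerSketch {

    public interface SkillIDL { String invoke(String command); }

    // Server-side implementation (the paper's SkillServer is written in C++).
    public static class SkillServer implements SkillIDL {
        public String invoke(String command) { return "ack:" + command; }
    }

    // Skeleton: unpacks the message and calls the real object's method.
    public static class Skeleton {
        private final SkillIDL target;
        public Skeleton(SkillIDL target) { this.target = target; }
        public String handle(String message) { return target.invoke(message); }
    }

    // Dispatcher: identifies which skeleton should receive each message.
    public static class CallDispatcher {
        private final Map<String, Skeleton> skeletons = new HashMap<>();
        public void register(String name, Skeleton s) { skeletons.put(name, s); }
        public String dispatch(String name, String message) {
            return skeletons.get(name).handle(message);
        }
    }

    // Stub: implements the same interface as the server, so callers cannot
    // tell a proxy from the real object; forwards through the dispatcher.
    public static class Stub implements SkillIDL {
        private final CallDispatcher dispatcher;
        private final String name;
        public Stub(CallDispatcher d, String name) { this.dispatcher = d; this.name = name; }
        public String invoke(String command) { return dispatcher.dispatch(name, command); }
    }
}
```

Because Stub and SkillServer implement the same SkillIDL interface, the servlet-side code works unchanged whether the skill object is local or remote, which is exactly the transparency the broker pattern provides.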
The skeleton object extracts the argument values from the message passed to it by the CallDispatcher object and then calls the indicated method, passing it the given argument values. If the called method returns a response, the skeleton object is responsible for creating a message that contains the return value and sending it back to the stub, so that the stub object can return it. If the called method throws an exception, the skeleton object is responsible for creating an appropriate message. The SkillServer classes (implemented in C++) implement the SkillIDL interface. Instances of this class can be called locally through the SkillIDL interface, or remotely through a stub object that implements the same skill interface.

E. Error Detection & Recovery

Error detection and recovery is perhaps the most significant challenge in time-delayed telerobotics. In the remote laboratory, errors are handled using a three-stage process: autonomous detection, shared diagnosis and manual recovery. Errors are detected using visualization means such as streaming video, graphical models, sensory data or connection status panels. The diagnosis task is shared by the user and the system. To recover from an error, the user can telecollaborate with a human at the remote site, or can request the privileges needed to telnet into the remote servers and reboot them.

V. REMOTE LABORATORY IMPLEMENTATION

An online laboratory in a field such as mobile robotics must have live performance characteristics, not just virtual reality or simulation programs. The multi-layered architecture described in the previous section has been implemented to reach this goal. During the academic year 2001-2002, the developed laboratory was used to update a postgraduate course on intelligent autonomous robots. Student feedback was gathered using an online questionnaire. The student responses were uniformly positive regarding the proposed teaching activities, especially the use of the remote laboratory. Most of the students felt that the online experiments helped them achieve a deeper and longer-lasting understanding of the subject material. During the laboratory session, the student can communicate with and teleoperate the experiment using a Java applet or MIDlet, as shown in Fig.5.
The task is then executed by a local control system, and the results are displayed on the operator's browser. This client-level interface includes many operations that can be performed on the robot, such as position control, obtaining sensor data, drawing world maps and evaluating errors. The following subsections describe some of the implemented experiments, which are being used in a postgraduate course on intelligent autonomous robots.

A. Direct Motion Control

This experiment aims at familiarizing the user with mobile robot motion control and positioning, using different interaction elements such as a PC or any Mobile Information Device Profile (MIDP)-enabled device, such as a handheld PDA or a cellular phone. The remote user can send direct control commands to move the robot forward or backward, or to turn it left or right. Using a 2D model of the lab and the robot, together with the odometry data (actual location with respect to the initial point, and the translational and rotational robot velocities), the remote user is able to view the effect of the sent commands. A study has been done to measure the response time during the PC-based direct control experiment, i.e. the time elapsed between sending the motion commands and the start of motion. Table I shows the response time when the experiment was run from the local site (Carlos III University of Madrid) and from remote sites (the University of Applied Sciences, Germany, and the University of Reading, England).
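A measurement of this kind can be sketched as follows (illustrative only; the paper does not describe the actual instrumentation of the experiment):

```java
// Illustrative sketch: measuring the elapsed time between sending a motion
// command and observing the start of motion (e.g. the first odometry change).
public class ResponseTimer {
    private long sentAt;

    // Record the instant the motion command is sent.
    public void commandSent() { sentAt = System.nanoTime(); }

    // Elapsed milliseconds when motion start is observed.
    public double motionStartedMs() {
        return (System.nanoTime() - sentAt) / 1e6;
    }
}
```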
TABLE I
RESPONSE TIME VARIATIONS

Site       Min. (ms)   Max. (ms)   Av. (ms)
Spain        121        137.76      127.2
Germany      273        354         293.51
England      394        2444       1157.45
The latency and throughput of the Internet are highly unpredictable and unavoidable. Older qualitative studies [19] show that people seem able to compensate for (learn) small added delays, but cannot learn large ones (>100 ms). The delay will therefore be noticed by the user, but it is acceptable for this type of educational application.

Fig. 5 Screenshot of Experiment Interfaces

B. Remote Activation of Motion Skills

The skills can be activated by execution orders produced by other skills or by a sequencer. They return data and events to the skills or sequencers which activated them. A skill can send a report about its state while it is active or when it is deactivated. For example, the skill called gotogoal can provide information on whether the robot has reached the goal. When this skill is deactivated, it might supply information about the error between the current robot position and the goal [14]. Using the proposed architecture, many interfaces have been implemented to remotely activate or deactivate motion skills such as go to goal, orientation control, wall following and obstacle avoidance. A skill sequencer can be used to generate complex skills by combining simple skills. It is responsible for deciding which skills have to be activated at each moment, to avoid data inconsistency problems.

C. Sensorial Data Acquisition

The objective of this experiment is environment perception using multi-sensor data (sonar and laser). The experiment is divided into two parts: without robot motion and with robot motion. The objective of the first part is to understand the operation of the sonar and laser sensors and to become familiar with their readings. The second part aims at recognizing the real environment using the sensory data.

D. Environment Modelling using Sensorial Data

Building environment maps from sensory data is an important aspect of mobile robot navigation, particularly for applications in which robots must function in unstructured environments. Ultrasonic range sensors are, superficially, an attractive sensor modality for building such maps, due mainly to their low cost, high speed and simple output. Elfes's algorithm [20] is used to generate environment maps from sonar data. In this algorithm, range measurements from multiple viewpoints are combined in a two-dimensional occupancy grid. Each cell in the grid is assigned a value indicating the probability that the cell is occupied. A self-localization algorithm [21] is used to estimate the robot's position by computing sets of poses which provide a maximal-quality match between the current sensor data and the constructed map. The remote user can compare the result of the localization algorithm with the odometry data to determine the error.

E. Free Tour

An unguided tour has been implemented, which does not impose any fixed order of tasks.
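The occupancy-grid update described in subsection D can be sketched as follows. This is a heavily simplified, one-beam log-odds version for illustration only; Elfes's actual algorithm uses a probabilistic sensor model over a full 2D grid:

```java
// Much-simplified sketch of an occupancy-grid update: each range reading
// raises the occupancy log-odds of the cell at the measured range and
// lowers it for the cells the beam passed through before it. The update
// constants (0.4 and 0.9) are arbitrary illustrative values.
public class OccupancyGrid1D {
    private final double[] logOdds;   // one value per cell along a beam
    private final double cellSize;    // cell size in meters

    public OccupancyGrid1D(int cells, double cellSize) {
        this.logOdds = new double[cells];   // log-odds 0 = probability 0.5
        this.cellSize = cellSize;
    }

    // Incorporate one sonar reading (range in meters) taken along this beam.
    public void update(double range) {
        int hit = (int) (range / cellSize);
        for (int i = 0; i < logOdds.length; i++) {
            if (i < hit) logOdds[i] -= 0.4;        // beam passed through: freer
            else if (i == hit) logOdds[i] += 0.9;  // beam ended here: occupied
        }
    }

    // Occupancy probability of a cell, recovered from its log-odds.
    public double probability(int cell) {
        return 1.0 / (1.0 + Math.exp(-logOdds[cell]));
    }
}
```

Repeated readings from multiple viewpoints push each cell's probability toward 0 (free) or 1 (occupied), which is the key property the mapping experiment relies on.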
This tour provides generic tools to the user and lets him/her customize the experiment according to his/her needs. These generic tools include a 2D model of the robot and the lab, an odometry data panel, a sonar data panel, a laser data panel, a motion controller and a low-level tele-programming editor.

VI. CONCLUSION

This paper describes a three-tier architecture for building mobile robotics remote laboratories. The visual proxy pattern, used to build the user interfaces of the remote laboratory, provides flexible user interfaces with minimal coupling between subsystems. The generation of the user interface is entirely separated from the abstraction-layer objects, providing reusability and extensibility facilities.

Based on the described architecture, many interfaces have been implemented for simple automatic movement skills, such as direct control (using a PC, PDA or mobile phone), go to point, orientation control and wall following. A sequencer has been used to combine simple skills into complex skills, such as go to point with obstacle avoidance.

VII. ACKNOWLEDGMENTS

The authors gratefully acknowledge the funds provided by the Spanish Government through the DIP2002-188 project of MCyT (Ministerio de Ciencia y Tecnología).

VIII. REFERENCES
[1] K. Forinash and R. Wisman, "The Viability of Distance Education Science Laboratories," T.H.E. Journal, vol. 29, no. 2, September 2001.
[2] C. Sayers, Remote Control Robotics, 1st edition, Springer Verlag, 1999.
[3] D. Barney and T. Ken, "Distributed Robotics over the Internet," IEEE Robotics and Automation Magazine, vol. 7, no. 2, pp. 22-27, 2000.
[4] R. Simmons, J. Fernandez, R. Goodwin, S. Koenig and J. O'Sullivan, "Lessons Learned from Xavier," IEEE Robotics and Automation Magazine, vol. 7, no. 2, pp. 33-39, June 2000.
[5] D. Schulz, W. Burgard, D. Fox, S. Thrun and A. Cremers, "Web Interfaces for Mobile Robots in Public Places," IEEE Robotics and Automation Magazine, vol. 7, no. 1, pp. 48-56, March 2000.
[6] K. Schilling, "Model Design for Remote Mobile Robots," Proceedings of Tele-Education in Mechatronics Based on Virtual Laboratories, Weingarten, Germany, July 18-21, 2001.
[7] F. Rodríguez, A. Khamis and M. Salichs, "Design of a Remote Laboratory on Mobile Robots," Internet-based Control Education (IBCE'01), UNED, Madrid, Spain, 12-14 December 2001.
[8] Innovative Educational Concepts for Autonomous and Teleoperated Systems (IECAT), http://www.ars.fh-weingarten.de/iecat/index.html
[9] A. Khamis, M. Pérez Vernet and K. Schilling, "A Remote Experiment on Motor Control of Mobile Robots," 10th Mediterranean Conference on Control and Automation (MED 2002), Lisbon, Portugal, July 9-12, 2002.
[10] Real World Interface, http://www.irobot.com/rwi/
[11] N. Fung, W. Lo and Y. Liu, "Improving Efficiency of Internet-based Teleoperation using Network QoS," Proceedings of the 2002 IEEE International Conference on Robotics and Automation, pp. 2707-2712, Washington, DC, May 2002.
[12] R. Barber and M.A. Salichs, "A New Human-Based Architecture for Intelligent Autonomous Robots," Fourth IFAC Symposium on Intelligent Autonomous Vehicles, pp. 85-90, Sapporo, Japan, September 2001.
[13] R. Alami, R. Chatila, S. Fleury, M. Ghallab and F. Ingrand, "An Architecture for Autonomy," The International Journal of Robotics Research, vol. 17, no. 4, pp. 315-337, 1998.
[14] M.A. Salichs, M.J. López and R. Barber, "Visual Approach Skill for a Mobile Robot using Learning and Fusion of Simple Skills," Robotics and Autonomous Systems, vol. 38, pp. 157-170, 2002.
[15] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad and M. Stal, Pattern-Oriented Software Architecture: A System of Patterns, John Wiley & Sons, 1996.
[16] http://www.javaworld.com/javaworld/jw-07-1999/jw-07-toolbox.html
[17] Object Management Group, http://www.omg.org/
[18] M. Juric, I. Rozman and M. Hericko, "Performance Comparison of CORBA and RMI," Information and Software Technology, vol. 42, pp. 915-933, 2000.
[19] R. Murphy and E. Rogers, "Human-Robot Interaction," Final Report for the DARPA/NSF Study on Human-Robot Interaction, http://www.csc.calpoly.edu/~erogers/HRI/HRI-report-final.html
[20] A. Elfes, "Occupancy Grids: A Stochastic Spatial Representation for Active Robot Perception," in Proceedings of the Sixth International Conference on Uncertainty in AI.
[21] R. Brown and B. Donald, "Mobile Robot Self-Localization without Explicit Landmarks," Algorithmica, vol. 26, pp. 515-559, 2000.

