
Human-Robot Interaction in Unstructured Environments

Progress Report
January 27, 2014
Kevin DeMarco
1 Last Week's Development
Last week I spent most of my time preparing the VideoRay for underwater operations with a diver in the
Mechanical Engineering Acoustic Tank. The diver was operating in the tank to install a mechanical platform
at a specific location for later imaging by an airborne LIDAR system. The team consisted of a group of
researchers from GTRI's Electro-Optical Systems Laboratory (EOSL) who were testing an airborne LIDAR
system of their own design. Since the diver had to align the platform with a specific coordinate system
based on the LIDAR's origin, a plumb bob was hung from the LIDAR's origin and submerged to assist the
diver in aligning the platform. However, the actual procedure consisted of a series of steps, each of which
required the diver to surface and inform the surface team that they could proceed to the next phase of the mission.
I postulated that the VideoRay Underwater Robotic Assistant (UWRA) could be used to communicate
information between the surface team and the diver through the use of the VideoRay's camera and lights.
Thus, I developed a state machine diagram for the phases of the mission and provided it to the diver and the
surface team (see attached PDF). Unfortunately, the team informed me too late before diving began to
set up the mission correctly. Still, I think this is an excellent concrete example of how a UWRA could assist a
diver in accomplishing a task. The UWRA provided feedback to the diver by flashing its lights once for a command
"Acknowledged" and flashing its lights twice for a command "Not Acknowledged." To notify the diver that
the UWRA requested the diver's attention, the lights flashed on and off quickly. Finally, to notify the diver
that the current task had been completed, the UWRA's lights gradually increased to full brightness
and then decreased again. A ROS GUI was designed that allowed the surface team to quickly communicate
messages through the UWRA to the diver.
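The light-based signaling described above can be sketched as a simple lookup from message to flash pattern. This is an illustrative reconstruction, not the deployed code; the pattern names, flash counts, and all durations here are assumptions (the real timings were tuned for underwater visibility).

```python
def light_pattern(message):
    """Return a list of (brightness, seconds) steps for a UWRA light signal.

    Brightness is normalized to [0, 1]. Pattern shapes follow the scheme
    used in the tank test: one flash for "Acknowledged", two flashes for
    "Not Acknowledged", rapid flashing to request the diver's attention,
    and a slow brightness ramp up and back down for task completion.
    """
    if message == "ACK":            # one flash: command acknowledged
        return [(1.0, 0.5), (0.0, 0.5)]
    if message == "NACK":           # two flashes: command not acknowledged
        return [(1.0, 0.5), (0.0, 0.5)] * 2
    if message == "ATTENTION":      # rapid flashing to get the diver's attention
        return [(1.0, 0.1), (0.0, 0.1)] * 5
    if message == "TASK_DONE":      # gradual ramp to full brightness, then down
        up = [(b / 10.0, 0.2) for b in range(1, 11)]
        return up + list(reversed(up))
    raise ValueError("unknown message: %s" % message)
```

A surface-side GUI button would map directly to one of these message names, and a driver loop would step the VideoRay's light intensity through the returned schedule.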
To assist in labeling test data, a ROS GUI was created that allowed the user to enter textual data, which
was both published to the ROS system and logged in a text file. For example, after an experimental run
that consisted of the diver swimming away from the sonar head, the user could enter the following phrase in
the ROS GUI: "Diver swimming away from sonar." The textual data was timestamped and associated with
specific sonar and video data files.
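The core of the labeling tool can be sketched in a few lines: timestamp a free-text label and append it to a log file. This is a minimal sketch, assuming a tab-separated line format; the actual GUI also published each label on a ROS topic, which is omitted here, and the function and file names are hypothetical.

```python
import time

def format_label(text, stamp):
    """Format a timestamped label line: seconds since epoch, a tab, the text."""
    return "%.3f\t%s" % (stamp, text)

def log_label(text, path="run_labels.txt"):
    """Timestamp a free-text label and append it to the run's log file.

    Returns the logged line so a caller (e.g. a ROS publisher) can reuse it.
    """
    line = format_label(text, time.time())
    with open(path, "a") as f:
        f.write(line + "\n")
    return line
```

Because the stamps share the same epoch as the sonar and video recordings, a label line can later be matched against the data files covering that time interval.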
One major impediment to previous testing sessions with the VideoRay was that the VideoRay required
two separate tethers to carry the video/control data and the sonar data. The VideoRay company
suggested using the disassembled parts from an Ethernet-to-DSL converter, so that the sonar data could be
transmitted over the existing VideoRay tether. The DSL modems converted the two-twisted-pair (four wires
total) Ethernet physical layer to two single-ended DSL lines (two wires total). The two single-ended
wires were connected to the previously unused auxiliary lines in the VideoRay tether. One modem was
encapsulated in PVC pipe and sealed with cement and epoxy to make the enclosure waterproof. A second
DSL modem was placed at the surface and converted the DSL lines back into Ethernet. The construction was
completely successful, but unfortunately it wasn't used during the tests because the VideoRay was being
used in tests in Colorado.
As a surrogate UWRA, I repaired the electronics in the Aquabotix Hydroview and operated it in the tank
with the diver. However, the Hydroview does not have hovering capability because it has to be moving in
the forward direction to control its position in the vertical water column, similar to how an airplane needs to
be moving to control its altitude. Also, there was significant delay in both the video feed and the teleoperation
of the Hydroview, which made moving the Hydroview into position nearly impossible. I have some interesting
ideas for a nonlinear controller that would give the Hydroview pseudo-hovering performance, but I probably
should not pursue that idea right now.
2 Current Understanding of the CADDY System
Based on our conversation with Nikola Miskovic, the following is my current understanding of the CADDY
system's configuration.
The CADDY system consists of a surface vehicle that can locate the diver with an Ultra Short Baseline
(USBL) because the diver is also equipped with an acoustic communication device. The surface vehicle can
also communicate with and locate an underwater robot using acoustic communications. CADDY will be able
to lead the diver to specific navigation points, follow the diver, and observe the diver for safety concerns.
Anthropologists have been enlisted in the project to try to extract information about the internal state of
the diver from physiological sensors placed on the diver. Information about the diver's internal state will
assist CADDY in mission assessment and planning.
3 My Proposed Contributions to the CADDY Program
I am most interested in the high-level Human-Robot Interaction (HRI) between the diver and the robot.
Specifically, given a high-level mission plan, can the robot infer the diver's intentions and assist the diver
based on previously defined common ground? To solve this problem, the robot has to be able to first
sense the diver's task goal and then act to assist the diver in achieving the predicted goal. It would be ideal
for the solution to be sensor-agnostic, since there is a variety of underwater robotic systems with varying
capabilities. Thus, a major research question is: what type of information and level of information fidelity
is required for the robot to infer the diver's intentions? Also, if diver sensor data is degraded or sparse, what
types of intentions can still be detected?
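One sensor-agnostic way to frame the intent-inference question is as a belief over candidate diver goals, updated from whatever observations happen to be available; degraded or sparse data then simply yields a flatter posterior. The sketch below is a generic Bayesian update, not part of the CADDY design, and the goal names and likelihood values in the usage are illustrative assumptions.

```python
def update_belief(belief, likelihoods, observation):
    """One Bayesian update over diver goals: P(goal | obs) is proportional to
    P(obs | goal) * P(goal).

    belief:      dict mapping goal name -> prior probability
    likelihoods: dict mapping goal name -> {observation: P(obs | goal)}
    Unmodeled observations get a small floor probability, so a sparse or
    uninformative observation leaves the belief nearly unchanged.
    """
    posterior = {g: belief[g] * likelihoods[g].get(observation, 1e-6)
                 for g in belief}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}
```

Because the update only consumes abstract observation labels, the same machinery works whether the observation came from sonar, video, or the physiological sensors on the diver.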
There are tasks that require a direct command from the diver, such as, "Fetch tool X." There are also
actions that the robot can perform without a direct command. For example, the robot can position itself
to illuminate a diver's workspace without the diver specifically commanding the robot or specifying the
workspace location. The development of a robust HRI framework that could handle both explicit commands
and inferred commands that align with the overall mission plan would be a significant contribution to the
CADDY program.
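The distinction between explicit and inferred commands could be captured by a single routing rule: explicit diver commands are always accepted, while inferred ones are executed only when they align with the mission plan. This is a hypothetical sketch of that framework idea; the command names and the ("execute"/"hold") convention are my own assumptions.

```python
def route_command(source, command, mission_plan):
    """Route a command through a combined explicit/inferred HRI framework.

    source:       "explicit" (direct diver command) or "inferred"
    command:      command name, e.g. "fetch_tool"
    mission_plan: collection of commands consistent with the current mission
    Explicit commands execute unconditionally; inferred commands execute only
    when consistent with the mission plan, otherwise they are held for review.
    """
    if source == "explicit":
        return ("execute", command)
    if source == "inferred" and command in mission_plan:
        return ("execute", command)
    return ("hold", command)
```

Held commands could be surfaced to the topside team for confirmation, so an incorrect inference never causes the robot to act against the mission plan.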
