ERL Emergency 2017 — Deep Learning to the rescue

ERL Emergency 2017

Team IMM's approach to the European Robotics League Emergency 2017

The European Robotics League is funded by the European Union's Horizon 2020 Program. It is a continuation of three earlier projects:

  • The RoCKIn@Home (now: ERL Service Robots) tournament focuses on the domain of service robotics for home applications.
  • The RoCKIn@Work (now: ERL Industrial Robots) tournament focuses on the domain of industrial robotics in the Factory of the Future and also deals with modern automation issues.
  • euRathlon (now: ERL Emergency Robots) is a civilian, outdoor robotics competition, with a focus on realistic, multi-domain emergency response scenarios.

ERL Emergency 2017 is a continuation of the euRathlon 2015 robotics competition. This year, 8 teams, each with robots from three domains, cooperated to fulfill the objectives of the scenarios.

Entering this competition, we planned to present an automatic 3D mapping system, autonomous navigation, and artificial intelligence for object detection.

Team IMM — Robdos — IIS Piombino CVP

There were 8 three-domain teams created for the competition. Our team was

Team 4

  • IMM, Poland (Land)
  • Robdos, Spain (Sea)
  • IIS Piombino CVP, Italy (Air)

The Institute of Mathematical Machines (IMM) is a research and development institute. IMM has participated in robotics competitions since 2013. IMM was established in 1957 (the word "computer" did not exist in the Polish language at that time, which is why the old expression "mathematical machines" is still used in its name).

Robdos Team, an underwater robotics association, was founded in 2014 by a group of students specialized in different fields who wanted to put into practice the skills and knowledge gathered during their university studies. Marine, Industrial, and Computer Engineering students from the Technical University of Madrid (Universidad Politécnica de Madrid) work hard on the development of an autonomous underwater platform.

IIS Piombino CVP is a team from a technical institute, a local high school in Piombino, Italy.

Scenarios

In all scenarios, the robots start at the same time and have the same amount of time to finish all the tasks. During the mission, robot operators should communicate with each other and exchange information. Ideally, there should also be robot-to-robot communication.

Land + Sea

The goal of the scenario was for the ground robot to find the leaking pipe on land, recognize the number on that pipe, and go to the machine room. There was only one unobstructed entrance, marked by a green marker. Both the entrance and the machine room should be identified automatically. In the machine room, the robot should identify and close the correct valve autonomously.

At the same time, the underwater robot should start its mission and find the gate: a pair of acoustic and optical buoys. The robot should provide images of the gate and pass through it without touching it. The next task is to detect the plume buoys in real time, recognize the numbers, and provide images. The underwater robot should also identify the leak and exchange this information with the ground robot.

The ground robot should provide maps of the land pipes area and of the building interior, while the underwater robot should provide a 3D reconstruction of the manipulation console. In the end, both (correct!) valves, the underwater one and the one in the building, should be closed simultaneously.

Sea + Air

In this scenario, the underwater robot should identify and pass through the gate as in the first scenario. Additionally, the robot should provide a 2D acoustic or optical map of the debris. The robot localizes the missing worker underwater within a radius of 5 meters, then reports the dimensions and geometrical shape of the object closest to the worker. It should also provide a 3D reconstruction of the worker and the surface within a radius of 2 meters of the worker's position.

The aerial robot starts its mission in the land area, going through waypoints and looking for a missing worker. The robot should detect the leaking pipe, identify damages, build a map, and find the missing worker. After identifying the exact position of the worker, the drone deploys the first-aid kit within a 2 m radius of the worker, outside the building.

The robots start at the same time and report the missing worker underwater and the leaking pipe on land. The aerial robot should fly over and take pictures of the place where the worker is, and the underwater robot receives and decodes the message with the correct land leaking pipe sent by the aerial robot.

Land + Air

In the third scenario, the land robot looks for OPIs (Objects of Potential Interest) and two missing workers, one inside and the other outside the building. The ground robot should deliver the first-aid kit, dropped by the drone, to the missing worker inside the building.

The drone builds a map, finds the worker, and drops the first-aid kit. After that, it drops another kit near the ground robot.

Grand Challenge: Land + Sea + Air

In the Grand Challenge, all domains perform their tasks simultaneously. It combines all the previous tasks into one large search and rescue mission. Operators of robots from all three domains cooperate to achieve the goal.

3D Mapping

Our ground robot participated in EnRicH 2017, a robotic hackathon held in a nuclear power plant in Austria in June 2017. The robot provided a 3D map of the reactor room online, without copying any data off the robot's internal computer. The resulting 3D map is shown below:

The result of 3D mapping in the nuclear power plant during EnRicH 2017. Interactive demo: http://mandalarobotics.com/enrich2017/inside-power-plant.html

And here are the 3D maps from ERL Emergency 2017:

3D point cloud acquired by the ground robot online: 3D view, top view

Navigation through the doors and a model of the machine room

Object detection

Objects of Potential Interest (OPIs) are described in the Rule Book for ERL Emergency 2017. Here are the OPIs that our deep net was trained to detect; sample images are shown below:

Sample images of the OPIs

We used videos from euRathlon 2015 to prepare an annotated dataset. We generated 256 images with 7 classes (a label map sketch for these classes follows the list):

  • unblocked (unblocked entrance; green marker)
  • blocked (blocked entrance; blue marker)
  • pipes
  • worker (missing worker)
  • ericard
  • damage (red marker)
  • valve
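
If you want to reproduce this with the TensorFlow Object Detection API (which we describe below), the classes end up in a label map file. A minimal sketch, assuming this file name and id ordering (ids start at 1; 0 is reserved for the background class); this is an illustration, not our exact file:

    # label_map.pbtxt -- hypothetical label map for the 7 OPI classes
    item { id: 1  name: "unblocked" }
    item { id: 2  name: "blocked" }
    item { id: 3  name: "pipes" }
    item { id: 4  name: "worker" }
    item { id: 5  name: "ericard" }
    item { id: 6  name: "damage" }
    item { id: 7  name: "valve" }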

Roughly 250 images is a very small dataset even for fine-tuning a pretrained network, so we decided to augment the training samples using random translation, brightness changes, and mirroring. This increased the image count to 12,750. The original images were captured by a few different cameras (normal and spherical).
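
This kind of augmentation can be done offline with a few lines of Python. Below is a minimal sketch using OpenCV, with illustrative parameter ranges rather than our exact settings; note that the bounding-box annotations have to be translated and mirrored consistently with the pixels, which is omitted here:

    # augment.py -- offline augmentation sketch: random translation,
    # brightness shift, and horizontal mirroring (illustrative ranges).
    import random
    import cv2
    import numpy as np

    def augment(image):
        h, w = image.shape[:2]

        # Random translation by up to 10% of the image size.
        tx = random.randint(-w // 10, w // 10)
        ty = random.randint(-h // 10, h // 10)
        m = np.float32([[1, 0, tx], [0, 1, ty]])
        image = cv2.warpAffine(image, m, (w, h))

        # Random brightness shift, clipped to the valid 8-bit range.
        shift = random.randint(-40, 40)
        image = np.clip(image.astype(np.int16) + shift, 0, 255).astype(np.uint8)

        # Mirror horizontally half of the time.
        if random.random() < 0.5:
            image = cv2.flip(image, 1)
        return image

    # Example: 50 augmented variants of one frame (hypothetical file name).
    # variants = [augment(cv2.imread("frame_0001.png")) for _ in range(50)]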

Proof of concept of the deep learning object detection

To achieve online detection, which was crucial during the trials, we used the Single Shot MultiBox Detector (SSD) with MobileNet. We started the training process from a model pretrained on the COCO dataset. To train the deep net, we used the TensorFlow Object Detection API available on GitHub.
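
Fine-tuning in that API is driven by a single pipeline config file. Here is a trimmed sketch of the fields relevant to such a setup; the paths and hyperparameters are illustrative assumptions, and a real config needs many more sections (anchor generator, box predictor, losses, and so on):

    # pipeline.config (trimmed) -- SSD with MobileNet, fine-tuned from COCO.
    model {
      ssd {
        num_classes: 7                        # our 7 OPI classes
        feature_extractor { type: "ssd_mobilenet_v1" }
      }
    }
    train_config {
      batch_size: 24                          # illustrative value
      fine_tune_checkpoint: "ssd_mobilenet_v1_coco/model.ckpt"  # COCO weights
    }
    train_input_reader {
      label_map_path: "label_map.pbtxt"
      tf_record_input_reader { input_path: "train.record" }
    }

    # Training entry point of the 2017-era API:
    # python object_detection/train.py \
    #     --pipeline_config_path=pipeline.config --train_dir=training/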

And finally, let's see the AI in action! But wait, no one said we would want to use Point Grey's Ladybug5 spherical camera for detection! To address the issue, we run detection on the 5 distorted side images separately. Yes, we do not expect anything above the robot! And yes, it is 5 times slower than detection from one camera, but since we have a GPU, we can still process everything online!
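
Here is a sketch of how such a per-camera loop can be wired up with a frozen graph exported by the Object Detection API, using the TensorFlow 1.x session API that was current at the time. The tensor names are the standard exported ones; the image source is a stand-in for the real Ladybug driver:

    # detect_per_camera.py -- run the frozen SSD graph on the 5 side
    # images of the spherical camera, one after another (sketch).
    import numpy as np
    import tensorflow as tf

    def grab_side_images():
        # Stand-in for the Ladybug driver: yields 5 dummy HxWx3 uint8
        # frames so the sketch stays self-contained.
        for _ in range(5):
            yield np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)

    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")

    with tf.Session(graph=graph) as sess:
        image_tensor = graph.get_tensor_by_name("image_tensor:0")
        boxes = graph.get_tensor_by_name("detection_boxes:0")
        scores = graph.get_tensor_by_name("detection_scores:0")
        classes = graph.get_tensor_by_name("detection_classes:0")

        for i, image in enumerate(grab_side_images()):
            b, s, c = sess.run(
                [boxes, scores, classes],
                feed_dict={image_tensor: np.expand_dims(image, 0)})
            keep = s[0] > 0.5  # confidence threshold is an assumption
            print("camera %d: %d confident detections" % (i, int(keep.sum())))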

Correctly identified blocked entrance

Automatically detected pipes

And the video from the Grand Challenge:

Awards

We received three awards:

  • 2nd Prize in Grand Challenge: IIS Piombino CVP, Italy (air) + Robdos, Spain (sea) + IMM, Poland (land)
  • 1st Prize in Land and Air: IMM, Poland (land) + IIS Piombino CVP, Italy (air)
  • Best Autonomy Award: IMM, Poland (land) for the best autonomy of land robots (autonomous navigation and automatic object detection).

Our team: IMM, Robdos, and IIS Piombino CVP

Watch our video from the competition!

Our video from the trials!

Links

Heise.de:

Team member links: