
SubT Virtual Tunnel Circuit

Robotika Team

The DARPA Subterranean Challenge Virtual Tunnel Circuit was postponed from August to October 2019, so we decided to split the original Tunnel Circuit report into two. Update 8/9/2019 — Robotika is leading the board! (29 days)

There are a couple of paragraphs related to the Virtual Tunnel Circuit in the older report, in particular:

1st September 2019 — Restart (38 days)

The System Track of Tunnel Circuit is over, but not the virtual:
SubT Challenge Virtual Competition Teams,
We are excited to see your SubT Challenge innovations in the Virtual Tunnel Circuit competition, and know your teams are hard at work testing and developing novel solutions!
In accordance with our previous communications, the closing date for submissions of solutions for the Virtual Tunnel Circuit is Wednesday, 09 October at 11:59PM AoE (Anywhere on Earth).
We highly encourage you to take advantage of the three Tunnel Practice scenarios available for your use to test your entire solution workflow, to include submission through the SubT Virtual Portal. Please be sure to test early and often, and we continue to welcome your feedback to improve the Virtual Competition experience.
Awards and winners will be announced on October 30th.
We appreciate your continued contributions to the SubT Challenge. We welcome any questions you may have and we look forward to seeing your developments.
Sincerely, The SubT Challenge Team
So what to do first? Test first! … it may work without you knowing it. I ran the test before we left for Pittsburgh, but the log file was too big to play with:
-rw-r--r-- 1 gpu gpu 1.6G Aug  9 12:08 /home/gpu/subt-x2-190809_094511.log
so now there is a tool to strip out only the necessary streams (see PR #237). The log filtered to pose2d and lidar scan was only 16 MB (subt-x2-190809_094511-strip.log) … much better. I expected the X2 robot to hit the edges of the start area, but somehow it recovered; even the entrance was visible via the 2D lidar, and the robot entered the tunnel! But see the "map" after a few traveled meters:
Somehow the IMU is no longer working as it used to?! So now I know the answer to my original question "what to do first?" — investigate what happened to the IMU (whether due to the transition from Gazebo 9 to Ignition, or because of recent upgrades from the System Track)…
p.s. there is another rule which says "do not trust a single measurement!", so I re-ran the test:
./run.bash robotika tunnel_circuit_practice.ign robotName1:=X2 robotConfig1:=X2_SENSOR_CONFIG_4
where the robotika docker image is no longer based on nkoenig/subt-virtual-testbed (it is not available now??) but on the new osrf/subt-virtual-testbed described in the updated tutorial. And guess what? The robot did not reach the edge of the starting area this time. Hmm. They said that the positions of the robots are not going to be random … so what changed?
I still cannot understand how people can live (I mean develop robots) without logfiles! What do they do in this situation?
All I need to do is run:
python -m subt replay subt-x2-190901_192059.log
and it fails, because in the feature/virtual-ignition branch I changed the command go straight from 9 meters to 3 meters, as I had noted in the logfile last time … now everything passes fine … it is time to download the large old logfile … done. Are you also curious?
It passes fine too! But …
SubT Challenge Ver2!
None go_straight 3.0 (0, 0, 0)
0:00:00.170243 (0.0 0.0 0.0) []
0:00:00.170243 []
maximum delay overshot: 0:00:00.120148
maximum delay overshot: 0:00:00.109536
maximum delay overshot: 0:00:00.128402
maximum delay overshot: 0:00:00.128848
maximum delay overshot: 0:00:00.107339
maximum delay overshot: 0:00:00.107657
so in the old image there was a delay of more than 0.1 s in the control loop, which is good news — it could be the reason for the strange IMU behavior (not properly synced with the rest of the system), so maybe the newer docker image is better, faster?!
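The "maximum delay overshot" messages appear to come from comparing the age of the processed data against a limit; a minimal sketch of such a check (a hypothetical helper, not the actual OSGAR internals):

```python
from datetime import timedelta

# control loop should react within 100 ms (the limit implied by the log above)
MAX_DELAY = timedelta(seconds=0.1)

def check_delay(data_time, now):
    """Warn when the control loop processes sensor data older than MAX_DELAY.

    data_time, now -- timedelta timestamps relative to log start.
    Hypothetical sketch, not the actual OSGAR implementation.
    """
    delay = now - data_time
    if delay > MAX_DELAY:
        print('maximum delay overshot:', delay)
        return True
    return False
```

With a limit like this, a data sample 120 ms old triggers the warning while one 50 ms old passes silently.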
How long did it take to analyze? 5 minutes? (yes, the slowest part was copying 1.6 GB from one machine to another)
So let's try 4 meters …
Well, it is going to be a "lot of fun" — this time it generated a set of collisions and X2 tried to recover. But it even collided with "tunnel air" when it tried to enter the tunnel. Time to check the accelerometers!

3rd September 2019 — Accelerometer and local planner (36 days)

The progress yesterday was small, but at least the logfile confirmed the source of the false collision detections (see reported issue #166). In short, whenever robot X2 starts or stops, its acceleration (after removal of the G-force) is larger than the original threshold of 12 m/s^2. This is probably a bug, because the robot has an acceleration limit of 7 m/s^2, but in the meantime I increased the threshold to 18 m/s^2, so it probably hits obstacles harder now.
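The collision check described above can be sketched as follows. This is a minimal illustration only (the function name and the exact way gravity is removed are my assumptions, not the actual OSGAR code), using the 18 m/s^2 threshold mentioned in the text:

```python
import math

G = 9.81  # gravity, m/s^2
COLLISION_THRESHOLD = 18.0  # raised from the original 12 m/s^2

def is_collision(acc):
    """Return True when the acceleration magnitude, after removal of
    the G-force, exceeds the threshold.

    acc -- (x, y, z) accelerometer reading in m/s^2.
    Illustrative sketch, not the actual OSGAR implementation.
    """
    magnitude = math.sqrt(sum(a * a for a in acc))
    return abs(magnitude - G) > COLLISION_THRESHOLD
```

Note how the trade-off in the text shows up directly: a lower threshold catches real impacts earlier but fires on every hard start or stop.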
After the post Wolfgang sent me a note: You may already have noticed that the angular velocity of X2 has changed again. The change in the last version of the docker file led to silly dancing (turning right and turning left) of the X2 robot in our tests :-( … hmm, see the already "resolved" issue #161 Rotation Behavior Dependent on Robot Orientation … so the angular speed differs depending on the robot orientation?? Sigh, the System Track was easy.
One of the problems with the current code is the accumulated delay from the local planner, which causes a crash at the end. Practice Tunnel 1 contains many heaps of stones and boards, and not everything is visible on the 2D lidar. Also, only narrow alleys remain.
For convenience I also switched to X2 configuration #3 with a QVGA camera, to speed up the test cycle and reduce log size, but then the first detected artifact is not as nice:
The first detected artifact
Yes, another reason was that even the JPEG images are sometimes very large:
Connected by ('', 54808)
Exception in thread Thread-17:
Traceback (most recent call last):
  File "/usr/lib/python3.6/", line 916, in _bootstrap_inner
  File "/home/developer/subt_ws/osgar/osgar/drivers/", line 267, in run
    self.bus.publish('image', parse_jpeg_image(packet))
  File "/home/developer/subt_ws/osgar/osgar/drivers/", line 90, in parse_jpeg_image
    assert size < 230467, size  # expected size for raw image during experiment
AssertionError: 232056
so maybe ROS and Ignition use different compression factors? Time to remove this old assert (it used to verify that I was receiving compressed images).

5th September 2019 — Stone Age (34 days)

It looks like the organizers like stones — they are everywhere :-(. And sure enough, they are too small to be visible by the 2D lidar:
It is possible to detect collisions from the accelerometers, but at the moment (I mean with the current simulation docker) there are too many false alarms. I tried practice tunnels 01 and 02, and very soon you approach a heap of stones … and if you try the virtual STIX, you hit a "virtual wall" even before entering the tunnel!
0:00:03.057975 go_straight 7.0 (speed: 0.5) (0.0, 0.0, -0.7923794804054256)
0:00:03.151615 Collision! [18.273, 0.004, 18.782] reported: False
0:00:03.176237 Collision! [25.47, 0.0, 19.8] reported: False
0:00:38.168127 Collision! [-19.87, -0.011, 21.035] reported: True
Sigh. There is one piece of good news — so far the robot (only one, so I do not yet face problem #168, reported yesterday, with deploying multiple robots) is placed at approximately (-5, 5, 0), so it is enough to code:
to enter the tunnel. Yeah, I know that it is not very sophisticated, but first I need to get the thing working and score at least one "fake" point in the cloud simulation. Somehow even this simple goal is still far, far "in the clouds".
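The snippet itself is missing above; purely as an illustration (all names are hypothetical, this is not the preserved code), a naive entry maneuver from the reported start position could compute a turn plus a straight drive toward the entrance at the origin:

```python
import math

def entry_commands(start_x, start_y):
    """Compute a naive two-step entry maneuver: heading toward the
    origin (tunnel entrance) and the straight-line distance to it.

    Hypothetical sketch -- the original snippet is not preserved.
    Returns (heading_deg, distance) for a turn + go-straight pair.
    """
    heading = math.degrees(math.atan2(-start_y, -start_x))
    distance = math.hypot(start_x, start_y)
    return heading, distance
```

For the reported start position (-5, 5) this gives a heading of -45 degrees and a distance of about 7 m, consistent with the "go_straight 7.0" command and the yaw of roughly -0.79 rad seen in the log above.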
Maybe one more picture — an attempt to enter the virtual STIX tunnel:

6th September 2019 — Everything under control! (33 days)

p.s. well, maybe I should add a note about what you see and what you should see …
This video was generated from a hacked lidarview with a few extra lines to save the pygame screen ("tmp.jpg") and an OpenCV wrapper to save an AVI file. I even uploaded the subt-x2-190906_185628.log (175 MB) logfile, so everyone can replay it. Note that development currently lives in the git feature/virtual-ignition branch. The lidarview is "universal" and was also used for analysis of OSGAR robots in Pittsburgh — this means the parameters can be a bit complicated, so I also added a simple helper subt/script/virtual-view.bat, which contains only this line:
python -m --pose2d app.pose2d --lidar rosmsg_laser.scan
    --camera rosmsg_image.image --keyframes detector.artf %1
So what do you see? In the top left corner is the ROS image from the camera, the green points are the ROS LIDAR scan points, and finally the orange is the robot 2D pose with orientation and its history/trace. The scale is in the bottom left corner (1 meter). Note the stream name app.pose2d — it is already the fused position from IMU and odometry.
Now what you should see, and definitely do not … is that if you combine the laser scan with the yaw angle from the IMU on a reasonably flat surface (this was worldName:=simple_tunnel_01, added in the latest release 2019-09-05), the directions should be aligned! No wild rotations!
What could be wrong? One obvious possibility is timing — you simply have LIDAR scan and IMU yaw data from different times. That is how I came to issue #170 — Irregular X2/odom: the timing of the IMU gyro and accelerometer is perfect, but the odometry is two to four times slower. The OSGAR log file contains both the internal streams (used in lidarview, for example) and the raw communication with "external devices" (in this case ROS). You can replay the processing of the ROS IMU:
python -m osgar.replay --module rosmsg_imu subt-x2-190906_185628.log
or ROS odometry:
python -m osgar.replay --module rosmsg_odom subt-x2-190906_185628.log
and print the ROS timestamps in nanoseconds. For the IMU you get differences of 4000000 ns, which is 4 ms … so the update rate is 250 Hz? And Addisu Z. Taddese wrote that the odometry should run at 50 Hz, so I should have 5 times more messages from the IMU than from odometry … and I don't! Moreover, the irregularity is really bad:
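The rate arithmetic above (4,000,000 ns between messages is 4 ms, hence 250 Hz) generalizes to a tiny helper; this is an illustration, not part of OSGAR:

```python
def update_rate_hz(timestamps_ns):
    """Estimate the update frequency from consecutive ROS timestamps
    given in nanoseconds (illustrative helper)."""
    diffs = [b - a for a, b in zip(timestamps_ns, timestamps_ns[1:])]
    avg_ns = sum(diffs) / len(diffs)
    return 1e9 / avg_ns

# IMU messages every 4 ms -> 250.0 Hz
print(update_rate_hz([0, 4_000_000, 8_000_000, 12_000_000]))
```

Running the same helper over the odometry timestamps is exactly the kind of quick sanity check that exposed the 2-4x slowdown.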
python -m subt replay e:\osgar\logs\subt2\virtual\subt-x2-190906_161345.log
0:00:10.311060 acc
0:00:10.316274 rot
0:00:10.316339 acc
0:00:10.319772 rot
0:00:10.319824 acc
0:00:10.321231 pose2d
0:00:10.321453 pose2d
0:00:10.321734 pose2d
0:00:10.323033 rot
0:00:10.323096 acc
0:00:10.326389 rot
0:00:10.326485 acc
0:00:10.331349 rot
0:00:10.331419 acc
so here there are 3 pose2d messages in a row with no accelerometer or rotation message in between. Sigh. I think the fix is clear — it is necessary to down-sample the IMU to a "reasonable" 50 Hz and reduce the internal traffic. Ideally I would have a "block update" of all sensors, but from ROS I only have independent TCP connections …
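The down-sampling fix suggested above can be a simple keep-every-Nth filter (250 Hz divided by 5 gives the desired 50 Hz); a minimal sketch with hypothetical names, not the actual OSGAR driver:

```python
class Downsampler:
    """Pass through every `ratio`-th message and drop the rest.

    For the 250 Hz IMU a ratio of 5 yields the desired 50 Hz.
    Hypothetical sketch, not the actual OSGAR code.
    """
    def __init__(self, ratio=5):
        self.ratio = ratio
        self.counter = 0

    def accept(self, msg):
        """Return msg on every ratio-th call, otherwise None."""
        self.counter += 1
        if self.counter >= self.ratio:
            self.counter = 0
            return msg
        return None
```

A filter like this placed right after the TCP reader cuts the internal traffic fivefold before any fusion happens.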

8th September 2019 — 250Hz IMU bottleneck (31 days)

OK, the video from simple_tunnel_01 was not that funny … but it exposed an interesting case (#172): the simulation can be fast enough that the bottleneck becomes the TCP communication! In my case it is the IMU, which sends updates at 250 Hz. The data get delayed, and this is especially visible in the unit-test tunnels, i.e. the ones that are fast to simulate. What happened was that after 2 minutes of simulation my IMU data were 10 seconds old! Somehow pose2d from normal odometry is much more reliable than the position computed from the IMU:
What you see above is that when the robot turned by 180 degrees, the IMU did not notice it for another 10 seconds (according to the IMU the robot continued straight), which is why the map is so bad.
What to do if your orientation is 10 s old? You can stop and wait 10 s. Unfortunately the buffer is not flushed, so the data will still be 10 s old. We can use the "fall-back" to version 0 — turn 180 degrees and follow the other wall for the already traveled distance. With this strategy we could at least return to the tunnel entrance (simple_tunnel_01). It worked at a speed of 0.5 m/s, but did not work at 2 m/s. Maybe if I disable the LocalPlanner again … ???
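A guard for this situation, checking the age of the last IMU message before trusting its orientation, could look like the following sketch (the limit and all names are my assumptions, not the actual code):

```python
from datetime import timedelta

# hypothetical staleness limit; the post observed a 10 s lag
MAX_IMU_AGE = timedelta(seconds=1.0)

def fallback_needed(imu_time, sim_time):
    """True when the IMU orientation is too stale to trust, i.e. it is
    time to switch to 'version 0': turn 180 degrees and wall-follow back
    for the already traveled distance.

    imu_time, sim_time -- timedelta timestamps relative to log start.
    """
    return sim_time - imu_time > MAX_IMU_AGE
```

The key point from the text survives here: waiting does not help, because the buffered backlog keeps the data stale, so the only safe reaction is a strategy that does not depend on the IMU at all.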
An attempt with tunnel_circuit_practice_01 ended with the robot on its roof at a speed of 1 m/s, and it looks like slowing down to 0.5 m/s does not help either.
p.s. well, it is not only the 10 s old orientation, it is also the information that you probably hit something 10 s ago … no wonder the robot ends up upside down :-(

10th September 2019 — Robotika is leading the board! (29 days)

(with zero points, practice tunnel 01)
There is a new design of the SubT Challenge Virtual Testbed Portal, and the Leader Boards are active now! And believe it or not, Robotika is in 1st place! … yeah, just kidding … almost …
I wanted to make some bad jokes about the PR article Autonomní roboti z ČVUT vyhráli v USA soutěž pořádanou agenturou DARPA (Autonomous robots from CTU won a U.S. contest organized by DARPA). I am glad that they are more proud of beating us than of being in 3rd place next to Carnegie Mellon and NASA/JPL/Caltech/MIT. … so we are "winning" in a similar way as we were losing with 8 points in the last position in the Tunnel Qualification. This time it is 0 points, just proof that we tried. There is potentially a long way to go to score at least one point!