ELROB 2024
Nth European Land Robot Trial (donkey blog)
What does ELROB look like after almost twenty years of existence? I don't know, but it takes place in less than two years, so it could be a milestone at which we dust off our mobile robots and algorithms and connect them into something new. Blog update: 29/06/2024 - Pat like a mule
ELROB 2024 - European Land Robot Trial, 12th European Land Robot Trial, 9th Military ELROB, 24 - 28 June 2024, Trier, Germany
This is the first official announcement for the twelfth European Land Robot Trial 2024! The military ELROB, which takes place for the ninth time now, enables you to get a glance at the latest R&D in the area of unmanned outdoor/off-road ground systems. The scenarios have been developed in closest co-operation with the military users and reflect the up-to-date requirements of the forces. This event offers the fantastic opportunity to mingle with the international experts from the user community, the industry and the R&D sector. ELROB is a strictly non-profit activity! The event is organized and carried out only by non-profit organisations. All participation fees are completely redistributed among the participants to cover travel expenses.
Content
- 240311 - Half an hour (105 days left)
- 240403 - Halí, belí (81 days left)
- 240610 - Freezing (fourteen days left)
- 240625 - ELROB Day 1 and 2
- 240626 - C'est la vie
- 240629 - Pat like a mule
Blog
Automatic translation by Google translate
December 22, 2022 - What's it all about? (550 days left) [MD]
Jirka convinced me at yesterday's regular "call" that it would make sense to participate in the ELROB "competition". I was last there in 2006 as a "journalist", but even then I was struck by the difference between the American and European versions of the "competition": while the American DARPA will offer you a million dollars or more for winning, in the European one you will (hopefully) receive a diploma and still pay a 500 EUR registration fee … but maybe it's not like that anymore; we'll check in time.
So why even bother with it? ELROB offers many comparison categories, of which Jirka liked the mule.
In short, the robot first follows you to the target position and then you switch it to "autonomous mode". The robot returns to the start along the same route on its own and then commutes start-goal-start-goal-… and when that becomes too boring, the organizers add an obstacle for you.
A similar task can be used in practice too, and it doesn't matter whether the robot carries a brick (md) or rolls (zw). You simply create a temporary "transport channel".
If we were to start today, Jirka has Freyja "in stock", Jakub has K2 and K3, and actually all the Robotour robots could probably be used. Well, it's such a milestone, or rather a still-distant deadline, so we'll see…
October 5, 2023 - Time to Begin (263 days left) [JI]
Freyja 2.0 is complete except for minor details. PatAMat floats. It's time to write software and drive!
Freyja has a bit of a head start here, because I started writing the software back in the winter when we began talking about the competition. I tested it on Deedee, a smaller home robot that I had already used as a "technology demonstrator" at SubT. Deedee was following me around the apartment and then returning to the start, back and forth, as early as spring. In the meantime, I finished rebuilding Freyja and started "porting" the software. This week Freyja went there and back for the first time. On a 34-meter stretch next to the house, she covered 830 meters in 25 minutes. There, back, there, back, there … simply a thrilling experience.
What did I learn from these experiments?
- The RealSense tracking camera that Deedee uses at home is a powerful weapon: the robot always knows where it is. Outside, Freyja has no such certainty with such precision. It is necessary to keep working on localization and on fusing data from different sensors. Localization relative to the demonstrated path may be sufficient.
- "Follow human" and "avoid obstacles" are mutually exclusive tasks. The human is an obstacle too! I don't see a solution here yet. I tried deleting the tracked person from the local map, but on the one hand, the detection of the leading person is currently not reliable and accurate enough for this, and on the other hand, the robot will then happily run into me. Which I don't like very much. A related problem is when it wipes not only me off the map, but also the wall next to me, and crashes into it.
- When the robot detects obstacles (so far) only with the lidar and the data from the lidar stops coming, it is bad. Who would have thought?
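One common mitigation for the stale-lidar failure mode described above is a watchdog that commands zero speed when scans stop arriving. This is a minimal hypothetical sketch, not code from Freyja:

```python
class LidarWatchdog:
    """Declare the robot unsafe when no lidar scan has arrived recently."""

    def __init__(self, timeout=0.5):
        self.timeout = timeout          # seconds without data before we stop
        self.last_scan_time = None

    def on_scan(self, now):
        """Call whenever a lidar scan is received."""
        self.last_scan_time = now

    def is_safe(self, now):
        """False -> the control loop should command zero speed."""
        if self.last_scan_time is None:
            return False                # no data yet, do not move
        return (now - self.last_scan_time) < self.timeout

wd = LidarWatchdog(timeout=0.5)
wd.on_scan(now=100.0)
assert wd.is_safe(now=100.3)      # fresh data, keep driving
assert not wd.is_safe(now=100.9)  # stale data, stop
```

The control loop can then gate the commanded speed on `is_safe(...)`, so a lidar dropout degrades to a stop rather than blind driving.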
October 18, 2023 - Achilles' Heel (250 days left) [JI]
The robot only drives until a wheel breaks off. Whoever wants to beat a dog will always find a stick. With these and other words, Mr. Jiří announces not the arrival of Aunt Kateřina, but a broken robot. You have also already heard a few "I told you so"s.
What's going on? Freyja is a robot construction of the "built by a programmer" type. And sometimes it shows. The mechanical attachment at the point where power is transmitted from the motor to the wheels is a challenge. On K2, a cousin (or she-cousin?) from SubT, the wheels are held in machined iron blocks. Robík (or was it his successor?) has a bored-out and carefully milled iron block in the same place, with a very similar result. On Freyja, the wheels are held by 3D prints. I can totally hear the "What the heck?" In my defense: "I know." Someday I'll have to learn how to say "I need such and such a bazmek turned on a lathe" in the local dialect, Züritüütsch.
To be honest, after this year's stress tests, in which I let Freyja jump over logs, I expected it to hold. It didn't. When attempting a skid turn almost on the spot - another "I told you so!" - something snapped, the wheel gave way and a cable tore. "Fortunately" - as in previous experience - the thin cable to the Hall sensors always comes loose first, ODrive gets scared and stops.
The team of experts called to the scene of the robot accident, i.e. me, then concluded that the design team, i.e. me, could have done a better job. The printed part is made of carbon-reinforced PLA, which helps, but I left the infill low and the part was more empty than not. In addition, version 2.0 still carried a screw hole left over from version 1.0 that is no longer needed. The part burst right along it.
I still haven't mastered Züritüütsch, so Freyja received a freshly printed version 2.1. Without the extra screw hole, with a higher infill and with a piece of aluminum attached where it gave way. To be sure, on all four wheels. So Freyja drives again. Hopefully it will last longer now.
December 3, 2023 - Not like this? (203 days left) [JI]
Maybe it won't look like this at ELROB in June:
Or will it?
- Water in the air … it can rain.
- Snow on the ground … no, but mud can behave quite similarly.
- Visually poor environment … a sandy area, a concrete area, a mowed area, an undefined path; it will probably be all of that.
So what did Freyja and I learn?
- Snow in the air does not matter for lidar obstacle detection.
- Dirt on the wheels is okay to some extent, but with mud it can get worse.
- Once Freyja sits down on her belly, nothing can save her.
- The robot stayed dry.
- Watch out for the joystick falling out when carrying the robot through a snowdrift. Searching for it is no fun.
January 5, 2024 - Non-Invertible Behavior (171 days left) [JI]
Q: How do I send Freyja a control signal to turn at 70 degrees per second? A: Freyja and turning? Are you serious?
Turning a skid-steered robot in a controlled manner is not easy. Freyja is no exception. Back in 2005 we published a conference paper on this topic, about controlling the tracked robot Ester using neural networks.
The idea was pretty straightforward:
- We "randomly" drive around with the joystick for a while.
- From the log we collect pairs <control, behavior>. Or: "if I do X, Y happens".
- We swap the data to <behavior, control>. Or: "for Y to happen, I have to do X".
- We train a neural network modeling behavior -> control.
- Done. When I need to turn, I ask the neural network what control signal to send.
But it's not that simple!
Let's simulate this on a simplified but realistic enough problem. Let our system be linear, with inaccuracies/noise.
First, we generate sample data:
```python
import numpy

N = 1000000
numpy.random.seed(42)
control = numpy.random.normal(scale=6, size=(N,))
behavior = 0.1 * control + numpy.random.normal(scale=2, size=control.shape)
```
I.e., in response to the control signal ("control"), our simulated robot turns ("behavior") at a rate corresponding to one tenth of the control signal. We can easily imagine that the turn rate is in radians per second. It happens to be, perhaps up to scale, a reasonable simplified model of Freyja's turning. Freyja's turning also depends on the surface, on the current forward and rotational speed, and on a few other things. Our simplified model does not include these explicitly; they fall under the noise/imprecision.
Now let's "learn" the inverse control model (control_model). We have a linear system, so a linear model should do. To be sure, we will also fit a forward behavior model (behavior_model).
```python
import scipy.stats

behavior_model = scipy.stats.linregress(control, behavior)
control_model = scipy.stats.linregress(behavior, control)
print(behavior_model)
```
> LinregressResult(slope=0.100300724814567, intercept=-0.00048361981064025785, rvalue=0.28784673170627245, pvalue=0.0, stderr=0.0003337044737230931, intercept_stderr=0.0020026058962632327)
It looks good. The behavior model understood that the yaw rate is one tenth of the control signal and that the line passes through zero.
Who bets that control_model.slope == 10, i.e., the "opposite" of the forward behavior model? I do! The idea is simple: if behavior = 0.1 * control, then control = 10 * behavior, right?

```python
print(control_model)
```
> LinregressResult(slope=0.8260732024336218, intercept=-0.008403739321622323, rvalue=0.28784673170627245, pvalue=0.0, stderr=0.00274837817757052, intercept_stderr=0.005747146531042411)
Huh? So according to the forward behavior model, if I send a control signal of 10.0, the robot turns at 1 rad/s; but according to the control model, for the robot to turn at 1 rad/s, I should send a control signal of 0.8? With that, the robot turns at only about 0.08 rad/s, i.e., almost not at all! What's going on?
First, it's not a lack of training data. We simulated plenty of it, and according to control_model.stderr, scipy is confident in its output.

Second, it's not that we "drive the joystick badly". The data is simulated, beautifully normally distributed. But we're getting warm here. The distribution of the data matters. And so does what each of those linear models minimizes.
```python
import matplotlib.pyplot as plt

plt.xlabel('Control')
plt.ylabel('Behavior')
plt.plot(control[:500], behavior[:500], 'bo', label='Training data')
plt.plot(control[:1000], control[:1000] * behavior_model.slope + behavior_model.intercept,
         'ro', label='Behavior model: control -> behavior')
plt.plot(behavior[:1000] * control_model.slope + control_model.intercept, behavior[:1000],
         'yo', label='Control model: behavior -> control')
plt.legend()
plt.show()
```
First, the sample data is denser around the zero control signal. On a real robot, this would correspond to the robot driving mostly straight.
Second, the behavior model minimizes the squared difference between expected and observed behavior, while the control model minimizes the squared difference between expected and actual control. The first of those two differences is measured along the Y axis, the second along the X axis! Note that the average value of the control signal that caused a rotation of 1 rad/s is indeed very close to zero. The two linear models solve different tasks, and there is not the slightest reason for the control model to be the inverse of the behavior model.
The two points together mean that the data is pulling the control model towards zero.
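The pull toward zero is not a numerical accident; for least squares it has a closed form. Since slope_fwd = cov(x, y)/var(x) and slope_inv = cov(x, y)/var(y), the product of the two fitted slopes equals the squared correlation r², not 1, so the inverse fit only matches 1/slope_fwd when r = ±1, i.e. with no noise at all. A quick check, re-simulating the same kind of data (smaller N for speed):

```python
import numpy
import scipy.stats

numpy.random.seed(42)
N = 200000
control = numpy.random.normal(scale=6, size=(N,))
behavior = 0.1 * control + numpy.random.normal(scale=2, size=control.shape)

fwd = scipy.stats.linregress(control, behavior)  # behavior from control
inv = scipy.stats.linregress(behavior, control)  # control from behavior

# slope_fwd * slope_inv == r**2 holds exactly (up to floating point).
assert abs(fwd.slope * inv.slope - fwd.rvalue ** 2) < 1e-9
```

With the values printed earlier, 0.1003 * 0.826 ≈ 0.083 ≈ 0.288², exactly the observed discrepancy.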
And indeed, if you "play" with the distribution of the sample data - those two numpy.random.normal calls - you can get quite a wide range of values in control_model.slope. Even distributing the control signal uniformly over a limited interval, instead of normally, won't fix it.
On a real robot, even the noise in the behavior is not symmetrically normally distributed. It is more likely that the robot fails to turn than that it unexpectedly starts spinning violently. This drags the behavior model toward zero as well, but toward zero along a different axis than the one the control model is pulled along. The scissors open even wider.
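That asymmetry can be illustrated with a toy failure model, made up for this sketch rather than measured on Freyja: each commanded turn is scaled by a random effectiveness between 0 (the robot fails to turn) and 1 (full turn).

```python
import numpy
import scipy.stats

numpy.random.seed(0)
N = 100000
control = numpy.random.normal(scale=6, size=(N,))
# One-sided "failures": the achieved turn is never more than commanded.
effectiveness = numpy.random.uniform(0.0, 1.0, size=(N,))
behavior = 0.1 * control * effectiveness

behavior_model = scipy.stats.linregress(control, behavior)
# The fitted slope is roughly 0.1 * E[effectiveness] = 0.05:
# even the forward model is dragged toward zero by one-sided failures.
assert 0.045 < behavior_model.slope < 0.055
```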
If you're more confused now than you were at the beginning, that's okay. At least in your own experiments with machine learning, or even with simpler approximations, you will watch the effect of the data distribution and of the error function on the result more carefully than we did.

And if all this is clear to you, congratulations. You are eighteen years ahead of me. Honestly, I have a nagging feeling that I'm still not getting all the implications.
So what to do about it? Maybe I'll leave that for another time. Unless the margin of this page is too small.
Comments (1)
1/6/24, 12:44 Hi Jirka and Martin, I would like to make a few comments on Non-Invertible Behavior. I apologize in advance. Neural networks don't mean much to me, but common sense (and the bit of control theory I remember from school) tells me that this road leads nowhere.

What you are trying to do here is so-called feedforward control: if I have a model of the robot's behavior, then by inverting it I find the action required to obtain the desired result. This works, but under the crucial assumption that the system is not influenced by other significant effects (what you call noise/disturbance), i.e. that the system is predictable. Even that problem can be compensated for if I am somehow able to "measure" the disturbance (which in this case probably won't work, e.g. determining the current slippage). This is used, for example, in so-called equithermal (weather-compensated) heating control: I measure the outside temperature (the disturbance) and, based on a model of the behavior in response to it, I control the heating to achieve the desired temperature inside. This works as long as the conditions resemble the state in which the model was built (if it is, say, -10 degrees and the sun is shining, it no longer works so nicely, but that doesn't matter much here, because a house has huge inertia). Using a neural network to control rotation feels like wanting to drive the heating from the image of a webcam stuck out of the window (= estimating the outside temperature from the picture and heating based on that).

So in my opinion the only chance is to use proven feedback control, which acts based on measurements of the current state. I.e., I continuously measure the current rotation speed (gyroscope), compare it with the request, and process the difference with "some" controller. This is guaranteed to work, although tuning the controller will not be completely trivial, because the disturbances, and their rate of change, are huge here. Martin

P.S. I am quite tempted to try it.

January 9, 2024 - On, off, on, off … (167 days left) [JI]
Something to lighten up today.
ELROB rules require every robot to be equipped with an orange beacon. The available orange beacons are either for construction sites or for trucks. They hardly fit on a small robot.
But such hipster super-luminous bicycle flashers sure do!
Freyja with lights
March 11, 2024 - Half Hour (105 days left) [JI]
Robotics is an activity for the patient. Or for fools. When a robot snaps somewhere after fifteen years of work, it makes me sad.

But now I was lucky enough to make an interesting observation. I needed to clean up the logs and free up disk space, so I went through the ELROB logs to decide which ones to keep. At the end of July, the robot did not even start. At the end of August, it took off, but in the wrong direction. Yesterday it drove for 29.5 minutes and covered 1720 meters. And then it parked under a van, oh well. That's progress, right?

29.5 minutes is close enough to the half-hour limit at ELROB for me to feel some satisfaction. The excuse for the van is that there is probably a solution. Such a van troubles the robot with the height of its chassis: Freyja's lidar sees underneath it, and the only visible obstacles are the wheels, somewhere far away. The good news is that Freyja also carries four stereo cameras, and one of them did detect the obstacle. For now I am "just" ignoring the detections from the cameras, because I am not yet satisfied with the small side cameras; they produce frequent false alarms. I should probably start using the front and rear cameras to protect the vans in the vicinity. Those are OK. And the side ones, after recalibration and with more conservative settings, are starting to look better too.
So what moved Freyja toward a brighter tomorrow? I got advice from MartinL, and PavelS confirmed that he does something similar. I added one feedback loop for yaw control. In the original solution, the high-level control consumed the compass and gyroscope and produced the required forward and rotational speeds. These I transformed into the required wheel speeds on the left and right side, which, according to a simple mathematical model (the inverse of odometry), should lead to the desired effect. The left and right motor controllers then, with the help of PI control, made sure the wheels turned at the commanded speeds. Unfortunately, this leads to the correct forward speed and a several-times-underestimated turning rate.

The new solution drops that model and the decomposition into left and right speeds. It adds a mid-level PID controller with feedback from the gyroscope and encoders; in plain terms: if the robot isn't turning fast enough, push harder. The low-level speed control of the individual wheels remains. And that's it. The robot is now much more agile.

I just didn't let MartinL convince me that I need a microcontroller for this. With real-time process priority on the PC and communication over USB, I can run the new loop from the PC sufficiently regularly at 100 Hz without the computer choking. That's enough for me.
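The mid-level loop can be sketched roughly like this; the gains and the 100 Hz period are illustrative, and `YawRatePID` is a hypothetical class, not Freyja's actual code. Its output would be a differential correction added to the left/right wheel-speed commands:

```python
class YawRatePID:
    """Feedback loop: desired vs. gyro-measured yaw rate -> differential
    wheel-speed correction. Gains here are made up for illustration."""

    def __init__(self, kp=1.0, ki=0.2, kd=0.0, dt=0.01):  # dt = 1/100 Hz
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, desired_yaw_rate, measured_yaw_rate):
        error = desired_yaw_rate - measured_yaw_rate
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

pid = YawRatePID()
# The robot turns slower than requested -> positive correction, turn harder.
assert pid.update(desired_yaw_rate=1.0, measured_yaw_rate=0.6) > 0
```

Unlike the inverse model, this needs no knowledge of how control maps to behavior; it only needs the gyro to report the actual yaw rate.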
This resolves one of the two motivations - the public one - for the recent lengthy discussion on forward and inverse modeling. The second motivation will become a separate conference paper, so I'll leave it for another time.

The more nimble robot promptly discovered a flaw in a new experimental obstacle avoidance. Somehow I can't remember why I didn't use the older, working solution. I guess I had a clever idea :-/ I have to have fun with something. Anyway, I fixed the bug and that helped a lot too.

Oh, and in the high-level trajectory planning there was a blunder as big as a barn door, too. There was a reason why I said before Christmas 2022 that a year and a half until ELROB 2024 would be just barely enough.
And what else? The second reason for Freyja's trip under the van was that she got lost after that half hour. To my surprise, she knew her position correctly, but her heading was off by sixty degrees, so she headed in the wrong direction. I understand that the two on-board GPS compasses don't always have enough signal, but the CMPS14 combined magnetometer and gyro (BNO080) shouldn't get lost like that, right? Except it does. And not just for me. Greetings, PavelJ! I read on the net that it is very sensitive to its placement. So I'm moving it today. The compass. Not Pavel.
April 3, 2024 - Halí, belí (81 days left) [JI]
So I replaced it. The compass. Not Pavel.
The UM7, another attempt at "I'll just buy an off-the-shelf solution", went wrong. The sensor holds tilts (roll, pitch) well, but it drifts outrageously in rotation around the vertical axis (yaw). In two days of working with it, I couldn't figure out whether it reports the heading from north, from the starting position, from east, or from where, actually. The advice on the forums is: "Install the older firmware." Easier said than done. Both the "current" manufacturer and the original manufacturer have apparently gone bankrupt; in any case, their websites are down. Looking at that compass, I'm not too surprised.
So the time of total control has come. Or: do it yourself, like MartinL and PavelS. Since I'm a suspicious creature and there's not much time left before ELROB, I bought the AltIMU-10 v5, a board with an accelerometer, gyroscope, magnetometer and pressure sensor. The pressure sensor will come in handy later; at SubT it helped a lot with stabilizing the localization. I added an Arduino Nano ESP32 to the AltIMU, which means the first Arduino has also arrived on Freyja. You can tick that box. Not that it was necessary: some USB-I2C converter can be found in the drawer and, as I already showed with the CMPS14, 200 Hz communication can be handled through it, so the AHRS (attitude and heading reference system) could easily be computed on the PC. Maybe one day it will be. Right now I just wanted to play with the ESP32 for educational reasons. Maybe there will even be some MicroPython. With so little time left…
An aside: the Arduino Nano ESP32 includes a dual-core RISC-V processor at 240 MHz, wifi, and bluetooth, and it runs FreeRTOS. It's total overkill for my purpose; this is what should be driving the entire robot. But what amazed me is that, thanks to the Arduino HAL (hardware abstraction layer), all the existing Arduino libraries for working with the accelerometer, gyro, and magnetometer just work on it. RISC-V, two cores or FreeRTOS don't matter! Unfortunately, support in Arduino-Makefile and Arduino Studio is lacking, but I really don't need that. Even in 2000 there were better development environments. And they didn't freeze. Damn. End of aside.
Back to total control. With great power comes great responsibility. Have you ever calibrated a magnetometer? "You just take it and turn it in all directions." OK, I can do that. "And you have to do it with the robot, too, to calibrate out its own interference." Have you ever twirled a 30-pound object that can't be properly grasped, with fragile antennae sticking out, in all directions? Fortunately, what matters for the basic calibration are the extreme values, and those can be reached by pointing each of the three axes north and then south; not all directions are needed. At our geographical latitude, the field lines run downward at between forty and sixty degrees, and you have to take that into account as well. So take the robot, tilt it and, because of course I can't hit the exact north and the exact inclination, swing it around. A woman walked by and asked if I wanted to sing Freyja a lullaby. Nevermind.
That was the calibration on the balcony, which in the end I didn't quite trust. Who knows what iron is around and where some current flows. "You definitely have to go outside!" insists MartinL. So, off to the yard. And since I don't want to sing, it's time to get creative:
With a suspended robot, it's easier to move!
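The extreme-value calibration described above boils down to a few lines: the hard-iron bias on each axis is the midpoint between the minimum and maximum raw reading, assuming the sweep really reached both extremes of every axis. `hard_iron_offset` is a hypothetical helper written for this sketch, not code from Freyja:

```python
import numpy

def hard_iron_offset(samples):
    """Hard-iron bias estimate: midpoint of the per-axis extremes.
    samples: (N, 3) array of raw magnetometer readings collected while
    pointing each axis along and against the field."""
    samples = numpy.asarray(samples, dtype=float)
    return (samples.min(axis=0) + samples.max(axis=0)) / 2.0

# Synthetic check: points on a unit sphere shifted by a known bias.
rng = numpy.random.default_rng(1)
directions = rng.normal(size=(5000, 3))
directions /= numpy.linalg.norm(directions, axis=1, keepdims=True)
bias = numpy.array([12.0, -3.0, 7.5])
readings = directions + bias
assert numpy.allclose(hard_iron_offset(readings), bias, atol=0.05)
```

Subtracting the offset from every reading centers the measurement sphere at the origin; soft-iron (ellipsoid) correction is a separate step that this sketch ignores.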
With the magnetometer calibrated, it's time for software. It is said that the Madgwick filter (pdf) offers a good trade-off between quality of the estimate and computational complexity. PavelS uses it; MartinL is happy with the Mahony filter. A Madgwick implementation for Arduino exists, great! Did someone say something about great responsibility? But hey, it behaves "somewhat weird". So I adjust the gain (the beta parameter), the only thing that can be adjusted. Except that's not possible. Not in that library: beta is a "private" member. Why?? And while I'm digging into it, why does it initialize to the zero direction and not from the first measurement? That makes the first estimates completely useless! Great responsibility, my own fork of the code. Like everyone else. These are exactly the things I didn't want to deal with!
My own fork, initialization, an adjustable gain, and it's still weird. Very. The various copies of the Madgwick filter circulating around the world contain various bugs. E.g., in my copy, [[https://github.com/Ridebeeline/madgwick-investigation/blob/a2b129964542EE4FA4D3787DA608991F25E746C6/MADFILTERS/CPP/MADGWICK_ORIGINALS/MADGWICHARKS_SQRT KAHRS_SQ_FIX.CPP#L108-L109 | a *2 is missing in two places]]. In the copy used by MartinL, who uses Mahony anyway, too. It's not pretty code to read, and neither is the article, but yes, the *2 really belongs there.
Add the *2, readjust the magnetometer bias one more time, and it starts to make sense. Except the environment is always working against you. That "yard" is the roof of an underground garage: reinforced concrete, electrical lines here and there. Definitely go outside! Yeah. In one place, the compass points "north" toward where the sun rises. That's probably not right, is it? At that spot, the robot's new compass even points south.
Interestingly, the copied filters don't seem to make any attempt to ignore obviously invalid measurements. Oh well, it's my responsibility now. One interesting option is to monitor the strength of the magnetic field: an external disturbance will most likely add to or subtract from the Earth's magnetic field. When such an obviously invalid value comes in, Madgwick can hold the heading from the gyro alone for a while.
When I draw a histogram of the magnetic field strength while driving around the house, I see a distinct spike of correct readings, distinct bad too strong and too weak readings, and lots of "sort of in between" readings.
There is no obvious range of correct values, but it can be handled with a little hysteresis. When the field strength exceeds 0.46, it is definitely a biased measurement, and from that moment on I don't trust the magnetometer until the strength drops back below 0.42. Similarly at the low end. With this method, the IMU ignores the magnetometer while driving back and forth repeatedly through the same places, among them the place where the compass is completely off. That looks promising. I have yet to test how Madgwick behaves with just the gyro. I hope there are no more errors.
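The hysteresis band can be sketched like this. The 0.46/0.42 thresholds are the ones from the text; the low-end pair is invented for illustration, and the units are simply whatever the sensor reports:

```python
class MagGate:
    """Hysteresis gate for magnetometer trust, based on field strength.
    Once the strength leaves the plausible band, distrust the sensor
    until it returns comfortably inside."""

    def __init__(self, hi_on=0.46, hi_off=0.42, lo_on=0.30, lo_off=0.34):
        # hi_* values come from the text; lo_* are made-up placeholders.
        self.hi_on, self.hi_off = hi_on, hi_off
        self.lo_on, self.lo_off = lo_on, lo_off
        self.trusted = True

    def update(self, strength):
        if self.trusted:
            if strength > self.hi_on or strength < self.lo_on:
                self.trusted = False   # disturbance detected
        elif self.lo_off < strength < self.hi_off:
            self.trusted = True        # back well inside the band
        return self.trusted

gate = MagGate()
assert gate.update(0.40)        # normal field: trust the magnetometer
assert not gate.update(0.50)    # too strong: switch to gyro-only heading
assert not gate.update(0.44)    # below 0.46 but not yet below 0.42
assert gate.update(0.41)        # well inside again: trust restored
```

While `update` returns False, the AHRS would run on the gyro alone and accept the drift.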
Another way to detect an invalid measurement is by the direction of the magnetic field: it should point its forty to sixty, or however many, degrees downwards. And which way is down I can tell from … wonder of wonders … I have an IMU! I don't do this yet, and I quietly hope I won't have to.
I'm a bit worried that these carefully selected cut-off values and expected field inclinations hold here, but not quite at the other end of Germany, where ELROB will be. Hopefully it's close enough that the magnetic field doesn't change that much.
I'm starting to think that a ready-made solution doesn't exist because it can't exist :-/
June 9, 2024 - Freezing (14 days left) [JI]
With two weeks left before the competition, it's time to stop taking the robot apart. The last month has been tough; both humans and robots crumble under pressure. There was even an escape of the magic smoke without which electronics no longer work. The next two weeks and the competition should not go on like this. So: end of hardware redesign, a "hardware freeze". A "software freeze" as well: end of fundamental changes in the program, no new strategies, no new program modules, no new features. From now on, "only" repairs and tuning of the already existing parts.
What else got into the robot before the slightly shifted deadline?
- New lidars for slope detection and corresponding software.
- Sending messages via XBee on the GPS modules to the base station and, from there, saving the trajectory to the organizers' WebDAV server. This will need to be tested on site, as WebDAV apparently comes in different versions and flavors, and their server and my client may not understand each other.
- Reporting the robot's status via an on-board HTTP server. Together with the pre-existing startup and shutdown, this means I can start, check and shut down the program from the browser on my mobile phone. Anyone who has ever stood in the rain with a laptop will appreciate it.
- Not avoiding tall grass. Who would have thought that NOT detecting an obstacle could be the hard part? In the videos from last year's ELROB I spotted tall grass, and eventually the organizers also warned us that they deliberately leave waist-high grass in several places. For this task, Freyja is a small robot and she disappears completely in such grass. In addition, so that the small robots do not get to follow a path already flattened by the big robots, the small robots go first. So the grass won't even be trampled down. Freyja is possibly the smallest robot competing as a mule this year. Guess who goes first.
Which way is not an obstacle?
June 25, 2024 - ELROB Day 1 and 2 [MD, photo SP]
It all goes by very quickly, and after a whole day of testing at a military base near Trier (not far from Luxembourg), one is happy to take a shower and go to sleep, and the "reporting" falters.
To begin with, I am attaching at least a few photos from Standa from the first day:
Today Jirka tested in the tall grass (crazy), but Freyja kept going until she dropped into some invisible hole. Pat didn't even flinch, even with Standa weighing him down. It has been clear for a long time that Pat actually has no backup, on either the SW or the HW side. The competition is tomorrow, so we'll see. Keep your fingers crossed for Jirka and Freyja!
June 26, 2024 - C'est la vie [JI]
Before the competition, following the example of the Japanese space agency, I quite publicly announced three levels of goals:
- Minimum: Drive off the start.
- For satisfaction: Lead the robot from the starting point to the destination point.
- Excellent: Autonomously then return to the start.
Any further driving to the destination again and possibly back would be a bonus.
We met the minimum goal.
Yesterday's testing, which Martin described with the word "crazy", involved not only tall grass but also a meadow dug up by wild boars. We heard from the organizers that this is exactly what awaits us on the competition course. Freyja did somehow manage to dig herself out of the holes, but she wasn't very good at it. So I faced a difficult choice: do something about it, or let it be? And I expected the competition surface, to which we had no access, to be even worse.
So I raised the maximum motor power. The result was … well, it was a tough choice between not driving at all and barely driving. With the increased power, the controllers hit the current limit every moment during the competition run, the software reset them (which with ODrive means a nice twenty seconds of inactivity), and then the robot drove a bit again.
After hopping along like this for a while, the robot entered a narrow passage where, prepared to pass through meter-high grass, it did not consider the grass on the sides an obstacle. Instead of the path, Freyja cut through the grass. She made good progress for a while, but eventually the grass wrapped around her wheels so thoroughly that I couldn't get it off, and the controllers gave up completely. At that moment, I was convinced I was done with the robot. And Martin was convinced he had an afternoon run with Pat.

Much to our surprise, Freyja woke up after a full reboot. About Pat, we already knew that he can't really handle off-road. So: a second chance for Freyja.
Back to the previous controller settings, plus a slightly more conservative Follow Me configuration so the robot doesn't meander side to side when the route is almost straight. And then the organizers came looking for us, saying we were to start an hour ahead of schedule and should be at the start in four minutes. Quickly shut everything down, swap in charged batteries, rush to the start, power the robot on … and nothing.
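To give an idea of what "more conservative Follow Me" means in practice, here is a minimal sketch of a cross-track correction with a deadband. All names and numbers are hypothetical, not our real settings:

```python
def steering_correction(cross_track_m, gain=0.5, deadband_m=0.3, max_turn=0.4):
    """Turn-rate correction (rad/s) back toward the recorded path.

    A deadband ignores small lateral errors, so on a nearly straight
    route the robot holds its line instead of hunting side to side.
    Gain and limits are made-up illustrative numbers.
    """
    if abs(cross_track_m) < deadband_m:
        return 0.0  # close enough to the path: keep driving straight
    # steer back proportionally, clipped to a conservative turn rate
    correction = -gain * cross_track_m
    return max(-max_turn, min(max_turn, correction))


assert steering_correction(0.1) == 0.0    # inside deadband: no meandering
assert steering_correction(1.0) == -0.4   # large error: clipped correction
assert steering_correction(-0.5) == 0.25  # steer back toward the path
```

Making the deadband wider and the maximum turn rate smaller is the "more conservative" direction: the robot tolerates a bit more lateral error in exchange for a much calmer track.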
Sometime during all that switching off and on, one of the two controllers decided it had really had enough for the day and called it quits. Finding out exactly what happened will be a job for forensic detectives.
To illustrate the weather, I'll add that my 3D-printed holder (PLA) for the GPS antenna stand on the base station melted within about fifteen minutes of the first run. The laptop got so hot while we prepared for the second run that you couldn't touch it. If some of the electronics decided to quit, I'm not surprised.
Now we have to think about what we learned from this, what follows from it, and what to do next. We are probably hitting the limits of our abilities, resources, and motivation somewhere around here.
June 29, 2024 - Pat as a Mule [MD]
"You look like Death!" were my wife's first words when I finally arrived at the cottage on Friday night. And that probably was, and still is, apt. I was sick from the moment we arrived at our destination (a village not far from Trier) until the last rest stop on the way home, just before Rozvadov. I don't know if it was some kind of "geofence" (I'm still sick), but as a preview of old age it definitely was, and is, a warning. So, a motivational photo from the Trier church.
That's one view of the event, and one I didn't really enjoy.
The second was a bit more fun: meeting and talking with the other teams. Teams flew in from Canada and Finland, so by comparison the Czech Republic (and Switzerland) is just around the corner. The guy next door was great:
SWX-Robotics
… he was the first person I've ever met who actively competed in Robot Wars. He said he had also competed in India, where they have a heavyweight robot category … and now that I'm arguing with perplexity.ai about it, it looks like the limit there was 60 kg, which doesn't fit his robot's weight of 115 kg. Maybe different units? Anyway, he had had his robot's wheels shot through to prove it didn't matter. Just a different world. I wonder how he saw us, me in my straw hat? (He wore a military uniform throughout the event, and occasionally put on a helmet and set off "hunting" with the robot at a leisurely 5 m/s, rough estimate.)
Otherwise, it was mainly Jirka who treated it as a "social event", so maybe he will add some more impressions in time.
And thirdly, how did Pat the robot see it? Finally a bigger trip, lots of spectators, and lots of concrete where you can drive at full speed. :) One Polish team probably liked him most, saying he is prettier than the green competition. I did most of the tuning on the spot, so it was quite hectic, but people like that too. He didn't do so well in the task itself. I actually wrote earlier that he needed to carry a load, but the weight of a man was already too much. On the way to the "Scenario Mule" course we picked up some broken concrete, and Standa arranged it into a beautifully saddled mule. Several of the higher-ups even rated it positively. But there was no time for the test itself, because two teams cancelled their second attempt in the afternoon … they just somehow forgot to tell us. :-( So we were the last team, the judges (typically older people) were "cooking" under the shelter, and the original promise that we could deploy both robots in the given time window fell through. My take is that they no longer cared about us and enjoyed it more when they could teleoperate various "tanks". They just didn't value super-autonomy that much.
ELROB is
- not organized as a competition but as a trial,
- a strictly non-profit activity and
- strictly R&D oriented. (source elrob.org)
And that's about all I can sweat out right now. I can still add a link to a video of how the whole ride looked from the robot's point of view: https://youtu.be/7JkO5qXuSZ0
(it's a "raw" recording, so it's long and boring)
P.S. Check out Standa's video from the opening section; it's a bit snappier and shorter.
If you have any additional information or comments, email us.