SparkFun AVC 2011 - Introduction
I went back for another round of autonomous racing at the 2011 SparkFun Autonomous Vehicle Competition - and evidently not much has changed since last year. I again made the fatal mistake of being overly ambitious with my design and not nearly methodical enough in my approach. I spent several months tuning the inner-loop controller performance and only the last few weeks concentrating on obstacle avoidance. That was not enough time to develop anything reasonable, so although I showed up at the event, my vehicle never left the starting line.
The sections below describe my strategy for this year, what parts went wrong, and provide some comments on a few of the (outstanding) competitors.
1 - Competition Overview
The rule changes for the 2011 race included adding four large barrels along the narrowest corridor, east of the SparkFun building, in addition to the large hoop, under which ground robots could earn a 30-second time deduction. The objective remained the same: circumnavigate the building as fast as possible - autonomously. I entered the same vehicle platform as last year, though with the benefit of six months of additional development. As is now evident, much of that development was in areas that did not necessarily suit the competition task. The result was a disappointing series of DNS results, in which I made it to the starting line only once and never actually made it off.
The addition of the obstacles on the course seemed certain to necessitate obstacle avoidance. The rules specifically reference optical identification of both the barrels and the arch. I assumed that the field of 50 competitors, of which 35 or 40 were ground robots, would include some sophisticated obstacle detection and avoidance techniques.
2 - System Approach
My basic approach was to develop a vehicle system that tracked speed and heading angle commands, the latter coming both from waypoint navigation and from obstacle detection. I spent much of my time developing the GPS-only navigation capability, which by early March 2011 demonstrated continuous waypoint orbits at a maximum tested speed of 18MPH. I spent the month preceding the competition working on obstacle sensing - but sadly this month was not enough to develop a reasonable detection and avoidance system.
The obstacle sensing approach was based on the excellent article on robotic laser range finders. Instead of an FPGA, I used a Fujitsu U810 ultra-micro PC running OpenCV for the image processing. Five green laser diodes mounted on the front bumper projected five radially-distributed laser dots, which would intersect obstacles close to the vehicle. An optical filter centered on the laser wavelength rejected all non-green light and helped isolate the bright laser dots in the image. I mounted a USB webcam about 15" above the lasers, pointed about 25 degrees down from horizontal, with a 76 degree field of view. Because of the filter, the webcam was exposed almost exclusively to green light - which enabled relatively simple thresholding in both RGB and HSV space, using the known color of the laser along with a criterion on intensity. The radial distribution of the laser points aligned well with the camera field of view, which isolated each of the five lasers into distinct areas of the image plane. The laser point identification routine was simply a search in each of the five areas for the center of mass of all pixels that met the 'laser point' threshold in color and intensity. The last step in the program was sending the image-plane coordinates of each identified laser point over the serial port to my DataNinja control system.
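To illustrate the dot-finding step, here is a minimal sketch of the thresholding and center-of-mass search in Python with NumPy. The actual program was Visual C++ with OpenCV and also thresholded in HSV space; the threshold values, region bounds, and function names here are placeholders.

```python
import numpy as np

def find_laser_dots(img, regions, g_min=200, rg_ratio=1.5):
    """Locate the brightest green dot in each image-plane region.

    img      : HxWx3 uint8 array in RGB order
    regions  : list of (x0, x1) column bounds, one region per laser
    g_min    : minimum green intensity for a 'laser point' pixel
    rg_ratio : green must exceed red and blue by this factor

    Returns a list of (u, v) centroids, with None where no dot is found.
    """
    r = img[:, :, 0].astype(np.float32)
    g = img[:, :, 1].astype(np.float32)
    b = img[:, :, 2].astype(np.float32)
    # Simple RGB-space 'laser point' test: bright and strongly green
    mask = (g >= g_min) & (g > rg_ratio * r) & (g > rg_ratio * b)

    dots = []
    for x0, x1 in regions:
        ys, xs = np.nonzero(mask[:, x0:x1])
        if len(xs) == 0:
            dots.append(None)
        else:
            # Center of mass of all pixels passing the threshold
            dots.append((float(xs.mean()) + x0, float(ys.mean())))
    return dots
```

Because the optical filter has already rejected almost everything except the laser wavelength, even this crude two-condition test is usually enough to isolate the dots.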
Laser point coordinates were converted to physical range estimates using the known separation between the camera and the lasers, along with the camera parameters. The four outside laser points (two on each side) provided desired-heading augmentation using a simple strategy of steering away from the closest obstacles. The influence of each laser point was weighted so that the inner two of these four had about three times the gain of the outermost pair, so that objects close to the vehicle centerline generated a larger avoidance effort. The three innermost laser points provided longitudinal control augmentation: a detected obstacle inside the far threshold reduced the speed command, and one inside the near threshold commanded braking.
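The range conversion might look like the following sketch, assuming the usual camera/laser triangulation geometry: the camera sits a known height above the (horizontal) laser plane and is pitched down, so a dot's image row fixes its depression angle and therefore its range. The focal length and pixel values are stand-ins for the real calibration, and the function is a reconstruction, not the original code.

```python
import math

def laser_range(v_px, v_center, focal_px, cam_height_m, cam_pitch_rad):
    """Triangulated horizontal range to a laser dot from its image row.

    The camera is cam_height_m above the laser plane, pitched
    cam_pitch_rad below horizontal.  A dot at image row v_px lies at
    depression angle pitch + atan((v_px - v_center) / focal_px) from
    the camera, so range = height / tan(depression).
    """
    depression = cam_pitch_rad + math.atan2(v_px - v_center, focal_px)
    if depression <= 0.0:
        # Dot at or above the camera horizon: beam never intersects
        return float('inf')
    return cam_height_m / math.tan(depression)
```

With the 15" (0.381 m) camera-laser separation and 25-degree pitch described above, a dot at the image center corresponds to roughly 0.8 m of range, and dots lower in the frame map to closer obstacles.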
The DataNinja control system on the vehicle uses an STM32 microcontroller, which interfaces with ST angular rate gyros, an Acroname Robotics magnetometer, and a MediaTek 10Hz GPS (with the DIYDrones output frame). The electronics are mounted on the equipment rack above the chassis of the Traxxas Slash 4x4.
3 - Control Architecture
The control system for AVC 2011 was similar to last year's, although the individual loops and kinematic limits were much more refined. The basic function of the controller remains a heading-tracking architecture, where heading errors generate yaw rate commands, which are limited based on driving speed so that turns stay below 1G of lateral acceleration. Speed tracking was substantially improved by the addition of an integrator term, which drives the steady-state speed error toward zero. My hope was to build a solid inertial control and stabilization inner loop and wrap the navigation and obstacle avoidance functions around it.
The heading tracking controller uses just a proportional term on the heading error to compute a desired yaw rate. The heading error is computed as the difference between the desired heading, which is the bearing from the vehicle position to the waypoint, and the current heading. One of the most significant improvements in the heading tracking controller came from deriving the heading estimate from the magnetometer, the yaw rate gyro, and the GPS course together. The magnetometer is used for all low-speed navigation, whereas the GPS course is used at higher speeds, provided the heading time-history is continuous. The yaw gyro is integrated to augment the heading estimate between GPS updates and during any instances of GPS course discontinuity. A misplaced dependence on GPS heading alone led to poor performance at AVC 2010, particularly during waypoint turns, where the speed was low and the GPS heading was uncertain and undependable.
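The bearing computation and proportional heading loop can be sketched as below. The gain, the (east, north) coordinates, and the compass convention (0 = north, positive clockwise) are illustrative choices, not the tuned values; the one subtlety worth showing is wrapping the error so a turn across the north seam doesn't command a 350-degree turn the wrong way.

```python
import math

def wrap_pi(a):
    """Wrap an angle in radians to the interval (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def yaw_rate_command(pos, wpt, heading_rad, kp=1.5):
    """Proportional yaw-rate command from heading error.

    pos, wpt    : (east, north) positions in meters
    heading_rad : current heading, compass convention
                  (0 = north, positive clockwise)
    """
    # Bearing from vehicle to waypoint in the same compass convention
    bearing = math.atan2(wpt[0] - pos[0], wpt[1] - pos[1])
    err = wrap_pi(bearing - heading_rad)
    return kp * err
```

Without the wrap, a vehicle heading 355 degrees with a waypoint bearing 5 degrees would see a -350 degree error and spin the long way around instead of nudging 10 degrees right.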
The heading command is augmented by both a cross-track term and an obstacle avoidance term. The cross-track heading correction is computed from the perpendicular distance to the line connecting the previous and current waypoints. The correction drives the vehicle toward that line, so that it navigates along the lines between waypoints rather than simply driving toward each waypoint from its current position. This approach helps contain the lateral divergence of the vehicle en route to waypoints and is particularly effective after overshooting a waypoint. Without a cross-track term, an overshot waypoint would leave the vehicle path far off the waypoint-connecting line for the remainder of the leg, since a simple heading error forces a direct-to form of navigation. Additionally, a simple heading error remains small even for large lateral track displacements, especially when the distance to the next waypoint is large.
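A minimal sketch of the cross-track term, using the 2-D cross product to get the signed perpendicular distance and a saturated proportional correction to steer back toward the line. The gain and the 45-degree saturation are placeholder values, not the ones flown.

```python
import math

def cross_track_error(prev_wpt, next_wpt, pos):
    """Signed perpendicular distance from the prev->next waypoint line.

    Points are (east, north) tuples in meters.  Positive means the
    vehicle is left of the track, looking from prev toward next.
    """
    tx = next_wpt[0] - prev_wpt[0]
    ty = next_wpt[1] - prev_wpt[1]
    px = pos[0] - prev_wpt[0]
    py = pos[1] - prev_wpt[1]
    # 2-D cross product of track direction with the position offset
    return (tx * py - ty * px) / math.hypot(tx, ty)

def crosstrack_heading_correction(xte_m, k=0.2, max_corr=math.radians(45)):
    """Heading offset (compass convention, positive clockwise) that
    steers back toward the track line.  Saturated so a large lateral
    displacement never commands more than a 45-degree cut angle, which
    keeps the vehicle converging at a shallow intercept."""
    return max(-max_corr, min(max_corr, -k * xte_m))
```

The saturation matters: far off the line you want a fixed intercept angle, not a command that swings the vehicle perpendicular to the track.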
The desired yaw rate is kinematically limited to result in a maximum specified lateral acceleration, which prevents the vehicle from commanding a steer angle which exceeds the traction capability of the tires. For large heading changes initiated at high speeds, the controller initially permits a small yaw rate, but then increases the yaw rate as the longitudinal controller simultaneously slows the vehicle. The coupling between the yaw rate and speed tracking controllers is generally good, although a more rigorous approach would use a traction or acceleration circle limit which would specify a combined 1G limit, allowing the vehicle to transition smoothly from 1G braking to ~0.7G braking + ~0.7G turning, then finally to 1G turning.
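Since steady-turn lateral acceleration is just a = v * r, the kinematic limit described above reduces to a speed-dependent clamp on the yaw-rate command. A sketch, with the low-speed floor as an illustrative guard rather than the original value:

```python
def limit_yaw_rate(r_cmd, speed_mps, a_lat_max=9.81):
    """Clamp a yaw-rate command (rad/s) so v * r never exceeds a_lat_max.

    In a steady turn the lateral acceleration is v * r, so the
    allowable yaw rate shrinks as speed grows.  The 1G default matches
    the limit described above.
    """
    v = max(speed_mps, 0.5)   # floor avoids divide-by-zero when stopped
    r_max = a_lat_max / v
    return max(-r_max, min(r_max, r_cmd))
```

This is exactly the behavior described above for large heading changes at speed: the clamp initially allows only a small yaw rate, and as the longitudinal controller slows the vehicle, r_max opens up and the turn tightens.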
Longitudinal control was a basic PI controller on speed error with a throttle command output. I initially used a feedforward term on speed command to throttle, which maps reasonably well for slow-to-medium speeds on level ground. Adding the integrator made the response dynamics a bit slower, but eliminated the dependence on the throttle-to-speed model required for the feedforward term.
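The PI speed loop can be sketched as follows. The gains are placeholders rather than the tuned values, and I've added a basic integrator clamp (anti-windup) so a long period of saturated throttle doesn't wind the integral up.

```python
class SpeedPI:
    """PI speed controller: throttle = kp*err + ki*integral(err)."""

    def __init__(self, kp=0.08, ki=0.02, throttle_limit=1.0):
        self.kp, self.ki = kp, ki
        self.limit = throttle_limit
        self.integ = 0.0

    def update(self, v_cmd, v_meas, dt):
        err = v_cmd - v_meas
        self.integ += err * dt
        # Anti-windup: the integral term alone can never exceed
        # full throttle authority
        i_max = self.limit / self.ki
        self.integ = max(-i_max, min(i_max, self.integ))
        u = self.kp * err + self.ki * self.integ
        return max(0.0, min(self.limit, u))
```

The integrator is what removes the need for a throttle-to-speed feedforward model: whatever steady throttle the terrain and drivetrain demand, the integral term finds it, at the cost of slightly slower response dynamics.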
Desired speed was set as a function of navigation condition, which allowed the speed to vary appropriately throughout the course. Far from waypoints and along straight paths, the vehicle tracked a global speed setpoint, which was tested successfully up to 18MPH. As the vehicle approached a waypoint, the speed command reduced proportionally toward a somewhat slower waypoint speed, slowing the vehicle in the last several meters before it entered the waypoint threshold radius. A third speed setpoint was defined for large heading errors, forcing the vehicle to slow if a heading change larger than 30 degrees was required. The slowing coupled favorably with the lateral controller, which permitted larger yaw rates at slower speeds. Between 15 and 30 degrees of heading error, the speed command varied proportionally between the global and turning speeds.
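The heading-error portion of the speed schedule is a simple linear blend between the two setpoints. A sketch, with placeholder speed values (the global setpoint here is roughly the 18MPH tested maximum in m/s):

```python
def speed_setpoint(hdg_err_deg, v_global=8.0, v_turn=2.5,
                   err_lo=15.0, err_hi=30.0):
    """Blend the commanded speed between the global and turning
    setpoints.  Below err_lo degrees of required heading change the
    global speed is used; above err_hi, the turning speed; in between
    the command varies linearly, as described above."""
    e = abs(hdg_err_deg)
    if e <= err_lo:
        return v_global
    if e >= err_hi:
        return v_turn
    frac = (e - err_lo) / (err_hi - err_lo)
    return v_global + frac * (v_turn - v_global)
```

Feeding this speed into the yaw-rate clamp closes the favorable coupling mentioned above: a big heading error slows the vehicle, and the slower speed in turn permits the larger yaw rate the turn needs.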
It turned out to be important to distinguish between the waypoint and turning speeds to handle cases where waypoints were nearly colinear. In these instances, the vehicle merely passes through the intermediate waypoints without commanding much heading change, so the waypoint speed can be set close to the global speed to allow fast transitions. Large turns, however, required a slow vehicle speed to permit heading changes to be made without a large turn radius.
So much of my effort this year went into developing and testing the basic navigation controller - and while my performance at AVC 2011 was dismal - the joy of watching my vehicle navigate both energetically and autonomously for 15 minutes makes it hard to feel bad about losing the race.
4 - Competition Summary
Round 1: Debugging Serial Port
I had joked a week and a half before the competition that I would teach myself computer vision, even if it took me all week. While I'm delighted to report that I was able to put together a functional laser-dot tracking program, my failure was in learning how to send serial data from the Visual C++ program on the micro PC to the microcontroller. I arrived and registered at 8:30am and found my place in the pits tent next to Scott Harris of Team Tobor. The tent provided shelter from the snow, but certainly not from the cold. It's hard enough to debug code at the field, and harder yet with numb fingers and frozen cheeks. At one point, Scott plugged a battery into his vehicle while saying "I hope we don't see smoke!". The moisture from his breath blew a water vapor cloud over his vehicle - which was better than a magic-blue-smoke cloud, but it was still pretty cold.
I continued to debug the serial port issue, which caused the transmitted data to be all NULL characters, rather than any useful information. As I continued troubleshooting, my hardware setup diverged farther away from the actual robot. I tried all varieties of USB-Serial converters, tried all sorts of changes within the transmitting code, various baud rates, level converters, parity bits, and hardware handshaking. I could not consistently get any data transmitted from the Visual C++ program on one computer to the terminal program of another. I *still* have not solved this issue, so I would certainly appreciate any insight if the issue is obvious to any readers.
Team Project 240 was called to line up for Heat 4, and with all my attention diverted in vain to the serial port problem, I had to forfeit my run. Round 1 passed me by without my even stepping out of the pits.
Round 2: Regressing to Ultrasonic Sensors
After my inglorious forfeit of the first run - and with no progress in debugging the serial port issue - I tried to salvage my remaining runs by mounting the ultrasonic sensors I had so hubristically dismissed only a week earlier. I changed the code to use heading augmentation from the ultrasonic sensors and tested it using the standard hand-waving procedure. I walked the vehicle to a neighboring parking lot to test both the waypoint navigation and the obstacle avoidance. The vehicle needed to demonstrate at least a humble level of waypoint capture capability before it could be put on the starting line. I set waypoints using my portable DataNinja groundstation, which took a surprisingly long time to boot, possibly due to the cold temperatures. The sliding potentiometer used to set the waypoint offset distance was marginally responsive, and I had difficulty setting the test waypoints. When I enabled navigation, the vehicle turned to a northwest heading rather than toward the waypoints. The data showed that the waypoints were set 1000 m to the northeast and that the controllers were responding normally. I could not determine the reason for this bogus waypoint placement, but hoped that the pre-programmed SparkFun building waypoints from last year remained valid.
I made it onto the starting line finally, hoping the poorly-tested control system would at least marginally work. Instead, when I enabled navigation, the vehicle stood still - unresponsive. I struggled with it briefly, trying to reset the microcontroller and move the channel 3 knob on the transmitter, but nothing worked. I later realized that I had left the default behavior of the system in the 'Ignore R/C Transmitter' mode, which prevents the system from commanding bogus throttle and steering commands if the transmitter is turned off. I usually change this mode as part of my pre-run checklist, but I must have either forgotten or the controller reset while I was preparing to start.
Round 3: Forfeit
After my difficulties in rounds 1 and 2, I decided to forfeit my final run in favor of watching the ground and air rounds. I had a significant deficiency in practicing navigation around a building prior to the event and it was unlikely that I could get the vehicle running competitively in the hour that remained in AVC 2011.
The decision to switch from competitor to spectator was wonderful - in that I could concentrate on studying the behavior of the robots and simply enjoy the race. I brought my GoPro camera along and asked to mount it to one of the ground vehicles during each of the eight heats. Some people, understandably, were reluctant to have a bulky camera mounted hastily on their vehicle, for fear that it would upset the control performance. Even so, I managed to get onboard footage from the start of most of the heats. The gray 3M double-sided tape used on the GoPro bases is normally fantastically adhesive, but we found it did not stick particularly well to many of the unprepared robot bodies. As a result, the camera frequently tumbled off onto the course - thankfully without incurring damage or scratches.
A tremendous thank-you to all the competitors who allowed me to mount the video camera on their robots. It produced marvelous footage, at the expense of holding up some of the races for a few moments to get the camera connected.
Mass Start: Chaos
I set my vehicle in R/C mode for the Mass Start race and mounted the GoPro (firmly) so I could film the race from a robot's perspective. I made two slow passes in front of the finish line to get a shot of all the robots, then lined up at the left edge of the starting line. My plan was to drive slowly behind the actual robots and film the racing, hopefully without interfering. I captured some of the starting-line carnage, but I really should have lined up in the middle to get a better view of the collisions and chaos.
5 - Ground Robots Comparison
The table below provides a brief comparison of some of the robots in the ground category. It is fascinating to see the different strategies used for estimation and control - particularly since some entrants developed very sophisticated methods in some areas (estimation, for instance), with relatively basic strategies in others, such as control or sensing.
I would appreciate hearing about details from any other competitors or corrections for the data listed here. I would be happy to add further detail on the sensors, estimation, control, and hardware - perhaps including part numbers and links to datasheets or tutorials.
6 - Preparing for AVC 2012
Project 240 Strategy
After every marginal AVC performance, I promise myself that next year will include testing so rigorous and systematic that the folks at NASA will drop by for a short course. Even so, I am sure that I will find myself late next April scurrying to resolve issues brought on by a hasty attempt at salvaging a dysfunctional robot.
Several AVC build blogs feature comments that indicate anything short of 30MPH is not worth doing. This concept is fascinating to me - even if I think it is somewhat ambitious. Assuming the rules don't change drastically and the course remains relatively open, what *would* it take to design a SparkFun HQ-navigating robot that could run at 30+ MPH? What are the actual state estimation requirements and how sophisticated must the controller be to compensate for chassis dynamics at that speed? Is it possible to run a course that fast without obstacle avoidance? Is it even possible to run it *with* obstacle avoidance? What are the requirements on obstacle detection range and latency for such a fast vehicle?
The mature thing for me to do now is to continue development of my robot until I demonstrate, at the very least, some basic building circumnavigation capability. I intend to do this, to be sure, although the very near-term efforts might find me revisiting maximum-performance controllers that allow the vehicle to operate on the lateral/longitudinal ellipsoidal traction/acceleration envelope.
Suggestions to SparkFun Staff for a Better AVC 2012
AVC continues to become a larger event each year, and the increased popularity puts a burden on some aspects of the competition. I cannot fault the SparkFun staff for much this year, given the conditions, 50 entrants, 700+ spectators, and relatively small area. However, I would like to suggest a few changes that might improve next year's event.
Contact: Mujahid Abdulrahim - mujahid [at] ufl [dot] edu