Galactic Search Update

We’ve talked a little bit about the work we’ve been doing for the Galactic Search challenge, and we’d like to share some updates on our progress. The general process we’ve defined for completing the challenge is this: we first select a path and place the robot in its starting position. While the robot is disabled, the operator pushes a button to run the vision pipeline. The robot determines which path to run based on the visible power cells and puts the selection on the dashboard. The operator then confirms that the path is correct before enabling (so that we don’t crash into a wall). Since there are no rules requiring that the selection take place after enabling, we thought that this was a safer option than running the pipeline automatically. The motion profile for each path is defined ahead of time from a known starting position, which means we can manually optimize each one.

We also considered whether continuously tracking the power cells would be a better solution. However, we decided against this for two reasons:

  1. Continuously tracking power cells is a much more complicated vision problem that would likely require the use of our Limelight. Making a single selection can be done directly on the RIO with a USB camera (see below for details). Tracking also becomes difficult or impossible when going around tight turns, where the next power cell may not be visible.
  2. Motion profiles give us much more control over the robot’s path through the courses, meaning we can easily test & optimize with predictable results each time.

The Paths

These are the four paths as they are currently defined:

A/Blue:

[Image: a-blue]

A/Red:

[Image: a-red]

B/Blue:

[Image: b-blue]

B/Red:

[Image: b-red]

These trajectories are defined using cubic splines, which means the intermediate waypoints don’t include headings. This is different from the AutoNav courses, which use quintic splines. For this challenge, we don’t require as tight control over the robot’s path, so the simpler cubic splines are more effective.
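To illustrate, here’s a minimal sketch of generating a cubic-spline trajectory with WPILib’s TrajectoryGenerator (2021 package layout). The interior waypoints are plain Translation2d positions with no headings; all coordinates and limits below are illustrative, not taken from our actual paths.

import java.util.List;
import edu.wpi.first.wpilibj.geometry.Pose2d;
import edu.wpi.first.wpilibj.geometry.Rotation2d;
import edu.wpi.first.wpilibj.geometry.Translation2d;
import edu.wpi.first.wpilibj.trajectory.Trajectory;
import edu.wpi.first.wpilibj.trajectory.TrajectoryConfig;
import edu.wpi.first.wpilibj.trajectory.TrajectoryGenerator;

// Max velocity (m/s) and max acceleration (m/s^2)
TrajectoryConfig config = new TrajectoryConfig(3.0, 2.0);

// The cubic overload takes headings only at the endpoints; the spline
// chooses headings through the interior waypoints on its own.
Trajectory trajectory = TrajectoryGenerator.generateTrajectory(
    new Pose2d(0.0, 0.0, Rotation2d.fromDegrees(0.0)),  // start pose (heading included)
    List.of(new Translation2d(1.5, 0.5),                // interior waypoints (no headings)
        new Translation2d(3.0, -0.5)),
    new Pose2d(4.5, 0.0, Rotation2d.fromDegrees(0.0)),  // end pose (heading included)
    config);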

The starting positions for each trajectory are placed as far forward as possible such that the bumpers break the plane of the starting area. Given the locations of the first power cells, we found that starting the red paths at an angle was beneficial (just make sure to reset the odometry’s starting position accurately :wink:).
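Since the red paths start at an angle, that reset has to carry a nonzero initial heading. A minimal sketch, reusing the imports above and assuming a DifferentialDriveOdometry object plus a gyro (all names here are placeholders):

// Seed odometry from the trajectory's initial pose before following it.
// getInitialPose() includes the angled starting heading, so nothing here
// assumes the robot starts at zero rotation.
Pose2d initialPose = trajectory.getInitialPose();
gyro.reset();
// The negation depends on your gyro's sign convention (WPILib wants CCW-positive)
odometry.resetPosition(initialPose, Rotation2d.fromDegrees(-gyro.getAngle()));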

You may notice that our trajectories don’t perfectly match up with the power cells. This is for two reasons:

  1. The trajectory defines the location of the robot’s center, but we need the intake (which deploys in front of the bumper) to contact the power cells. Often, this means shifting our waypoints 6-12 inches, as sketched below. For example, the path for the second power cell in b/blue tracks to the right of the power cell such that the intake grabs it in the center during the turn.
  2. Our profiles don’t always track perfectly towards the end of the path, meaning we sometimes contact power cells with the sides of our intake. Shifting the trajectory to compensate is a quick fix for those types of problems.
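As a sketch of the first kind of shift (coordinates made up, reusing the geometry imports above), the waypoint targets where the robot’s center should be rather than the power cell itself:

// Hypothetical example: track ~10 inches to the right of the power cell so
// the intake, which extends past the bumper, takes it in the center.
Translation2d powerCell = new Translation2d(3.81, 2.29);                 // power cell position (m)
Translation2d waypoint = powerCell.plus(new Translation2d(0.0, -0.25));  // shifted robot-center waypoint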

The other main issue we had to contend with was speed. The robot’s top speed is ~130 in/s, but our current intake only works effectively up to ~90 in/s. Since most power cells are collected while turning, our centripetal acceleration constraint usually slows the robot down enough that this isn’t an issue. However, we needed to add custom velocity constraints for some straight sections (like the first power cell of b/blue). There were also instances where the intake contacted the power cell before the robot began turning enough to slow down. Through testing, it was fairly easy to spot where the trajectory needed a little extra help.
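Here’s a minimal sketch of how a region-limited velocity cap can be layered onto the trajectory config in WPILib (2021 packages; the region corners and limits are illustrative, not our real numbers):

import edu.wpi.first.wpilibj.geometry.Translation2d;
import edu.wpi.first.wpilibj.trajectory.TrajectoryConfig;
import edu.wpi.first.wpilibj.trajectory.constraint.CentripetalAccelerationConstraint;
import edu.wpi.first.wpilibj.trajectory.constraint.MaxVelocityConstraint;
import edu.wpi.first.wpilibj.trajectory.constraint.RectangularRegionConstraint;

TrajectoryConfig config = new TrajectoryConfig(3.3, 2.0)  // ~130 in/s top speed is about 3.3 m/s
    // Slows the robot through tight turns everywhere on the path
    .addConstraint(new CentripetalAccelerationConstraint(1.5))
    // Caps speed at ~90 in/s (about 2.3 m/s) inside a box around a straight-line pickup
    .addConstraint(new RectangularRegionConstraint(
        new Translation2d(2.0, 1.0),  // bottom-left corner of the region
        new Translation2d(4.0, 2.5),  // top-right corner of the region
        new MaxVelocityConstraint(2.3)));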

Here’s an example of the robot running the B/Red path:

The code for each path is available here:

Vision!

Running the vision pipeline on a single frame, rather than continuously tracking power cells, vastly simplifies our setup. Rather than using any separate hardware, we can plug our existing driver cam directly into the RIO for processing. The filtering for this is simple enough that setting it up on our Limelight would have been overkill (and probably more complicated anyway). Our Limelight is also angled specifically to look for the target, which doesn’t work for finding power cells on the ground. When the operator pushes the button to run the pipeline, the camera captures an image like this:

[Image: b-red-lowres]

Even though we only process a single frame, we quickly realized that scaling down from the camera’s full 1920×1080 resolution was necessary for the RIO to handle it. Our pipeline runs at 640×360, which is plenty to identify the power cells. Using GRIP, we put together a simple HSV filter that processes the above image into this:

[Image: b-red-threshold]
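As a rough sketch, the single-frame capture and filter step can look like this on the RIO (2021 WPILib/cscore packages; GripPipeline is the class GRIP’s Java export generates, and hsvThresholdOutput() follows GRIP’s naming convention for an HSV Threshold step):

import edu.wpi.cscore.CvSink;
import edu.wpi.cscore.UsbCamera;
import edu.wpi.first.cameraserver.CameraServer;
import org.opencv.core.Mat;

UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
camera.setResolution(640, 360);  // downscale from 1920x1080 so the RIO can keep up
CvSink cvSink = CameraServer.getInstance().getVideo();
GripPipeline pipeline = new GripPipeline();  // generated by GRIP's Java export

Mat frame = new Mat();
if (cvSink.grabFrame(frame) != 0) {  // grabFrame() returns 0 on timeout/error
  pipeline.process(frame);
  Mat hsvThreshold = pipeline.hsvThresholdOutput();  // binary image like the one above
  // ...path-selection logic runs on hsvThreshold...
}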

To determine the path, we need to check for the existence of power cells at known positions. Since we don’t need to locate them at arbitrary positions, finding contours is unnecessary. That also means tuning the HSV filter carefully is less critical than with traditional target-tracking (for example, the noise on the left from a nearby power cube has no impact on the logic).

Using the filtered images, our search logic scans rectangular regions for white pixels and calculates if they make up more than ~5-10% of the area. This allows us to distinguish all four paths very reliably. The rectangular scan areas are outlined below, with red & green indicating whether a power cell would be found.

[Image: a-red-overlay]
[Image: b-red-overlay]
[Image: a-blue-overlay]
[Image: b-blue-overlay]

It starts by searching the large area closest to the center, which determines whether the path is red or blue. The second search area shifts slightly based on that result such that a power cell will only appear on one of the two possible paths. This result is then stored and published to NetworkTables so that the operator knows whether to enable.
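The linked code below has our real implementation; as an assumed sketch, a helper like searchArea can be built from OpenCV’s submat and countNonZero:

import org.opencv.core.Core;
import org.opencv.core.Mat;

// Returns true if more than minFraction of the pixels in the rectangle from
// (x1, y1) to (x2, y2) are white in the binary, single-channel image.
private boolean searchArea(Mat image, double minFraction, int x1, int y1, int x2, int y2) {
  // submat takes (rowStart, rowEnd, colStart, colEnd): rows are y, columns are x
  Mat region = image.submat(y1, y2, x1, x2);
  double whiteFraction = (double) Core.countNonZero(region) / (region.rows() * region.cols());
  return whiteFraction > minFraction;
}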

Here’s our code for the vision selection along with the GRIP pipeline.

The “updateVision” method handles the vision pipeline, and the selected command is then scheduled during autonomous. We update the vision pipeline when a button is pressed, though it could also be set to run periodically while disabled.
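Since commands don’t normally run while the robot is disabled, one simple approach is polling the button in Robot.disabledPeriodic(). A sketch with placeholder names (operatorController, kVisionButton, galacticSearch):

// In Robot.java: run the single-frame pipeline whenever the operator presses
// the button while disabled.
@Override
public void disabledPeriodic() {
  if (operatorController.getRawButtonPressed(kVisionButton)) {
    galacticSearch.updateVision();
  }
}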

This command can be easily customized for use with a different camera. Depending on where power cells appear in the image, the selection sequence seen below can be restructured to determine the correct path. Similarly, any four commands will be accepted for the four paths.

// searchArea(image, minFraction, x1, y1, x2, y2) returns true when more than
// minFraction of the rectangle's pixels are white in the thresholded image.
if (searchArea(hsvThreshold, 0.1, 315, 90, 355, 125)) { // Red path
  // Second region distinguishes A/Red from B/Red
  if (searchArea(hsvThreshold, 0.05, 290, 65, 315, 85)) {
    path = GalacticSearchPath.A_RED;
    SmartDashboard.putString("Galactic Search Path", "A/Red");
  } else {
    path = GalacticSearchPath.B_RED;
    SmartDashboard.putString("Galactic Search Path", "B/Red");
  }
} else { // Blue path
  // Second region distinguishes B/Blue from A/Blue
  if (searchArea(hsvThreshold, 0.05, 255, 65, 280, 85)) {
    path = GalacticSearchPath.B_BLUE;
    SmartDashboard.putString("Galactic Search Path", "B/Blue");
  } else {
    path = GalacticSearchPath.A_BLUE;
    SmartDashboard.putString("Galactic Search Path", "A/Blue");
  }
}

Overall, we’ve found this setup with vision + profiles to work quite reliably. We’re happy to answer any questions.
