A new motorized hood requires new software, of course! There’s really a trifecta of new features we needed in order to take advantage of this system.
I’ll do a separate post with updates from our event in the next couple of days.
Part 1: Going Up and Down
To do anything with the hood, we first needed to be able to reliably get and set its angle. All of the angles are measured relative to 0° as horizontal (rather than against the bottom hard stop). This came in handy when that stop moved — all of the other data we collected was still valid.
When the robot is enabled for the first time, it runs the hood down at a low voltage (0.5 volts) to reset it. Once the velocity falls below 1°/s, we know that it is at the hard stop and the measured position is reset to a known angle. Because of backlash in the system, that “reset angle” is actually lower than the minimum angle enforced during normal operation. As soon as the reset finishes, the hood is raised to the normal minimum. Here’s what the position looks like during the reset process for those interested (measured in radians):
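For anyone curious how a reset routine like this can be structured, here’s a minimal sketch of the idea. The HoodIO interface, class names, and constants are hypothetical stand-ins rather than our actual code:

```java
import edu.wpi.first.wpilibj.Timer;

/**
 * Minimal sketch of a hood homing routine. The HoodIO interface, names, and
 * constants here are hypothetical stand-ins, not our actual code.
 */
public class HoodHoming {
  private static final double HOMING_VOLTS = -0.5; // gently drive toward the hard stop (sign depends on the mechanism)
  private static final double VELOCITY_THRESHOLD_DEG_PER_SEC = 1.0;
  private static final double GRACE_PERIOD_SEC = 0.25; // let the hood start moving before checking velocity
  private static final double HARD_STOP_ANGLE_DEG = 2.0; // known angle at the hard stop

  private final HoodIO io;
  private double startTime = Double.NaN;
  private boolean homed = false;

  public HoodHoming(HoodIO io) {
    this.io = io;
  }

  /** Call from the subsystem's periodic method until homing completes. */
  public void periodic() {
    if (homed) {
      return;
    }
    if (Double.isNaN(startTime)) {
      startTime = Timer.getFPGATimestamp();
    }
    io.setVoltage(HOMING_VOLTS);
    boolean pastGracePeriod = Timer.getFPGATimestamp() - startTime > GRACE_PERIOD_SEC;
    if (pastGracePeriod
        && Math.abs(io.getVelocityDegPerSec()) < VELOCITY_THRESHOLD_DEG_PER_SEC) {
      // The hood has stalled against the hard stop; latch the known angle and stop.
      // (The subsystem then raises the hood to its normal minimum angle.)
      io.resetPosition(HARD_STOP_ANGLE_DEG);
      io.setVoltage(0.0);
      homed = true;
    }
  }

  public boolean isHomed() {
    return homed;
  }

  /** Hypothetical hardware interface for the hood motor and encoder. */
  public interface HoodIO {
    void setVoltage(double volts);

    double getVelocityDegPerSec();

    void resetPosition(double angleDeg);
  }
}
```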
After the reset, we use a ProfiledPIDController running on the RIO to set the angle. The velocity and acceleration constraints are only slightly below how quickly the system could theoretically move, and the controller is typically able to maintain the angle within 0.5°. Here’s an example from one of our matches (orange is the goal, purple is the measured angle):
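For reference, a controller set up along these lines might look like the sketch below. The gains and constraints are placeholder values, not our tuning:

```java
import edu.wpi.first.math.controller.ProfiledPIDController;
import edu.wpi.first.math.trajectory.TrapezoidProfile;
import edu.wpi.first.math.util.Units;

/** Sketch of a profiled hood angle controller; gains and constraints are placeholders. */
public class HoodController {
  // The motion constraints are set only slightly below what the mechanism can physically do.
  private final ProfiledPIDController controller =
      new ProfiledPIDController(
          0.1, 0.0, 0.0, // kP, kI, kD (placeholder gains)
          new TrapezoidProfile.Constraints(
              Units.degreesToRadians(300.0), // max velocity (rad/s)
              Units.degreesToRadians(600.0))); // max acceleration (rad/s^2)

  /** Sets the profile goal to a new hood angle. */
  public void setGoal(double goalAngleRad) {
    controller.setGoal(goalAngleRad);
  }

  /** Returns the voltage to apply to the hood motor for this cycle. */
  public double calculateVolts(double measuredAngleRad) {
    return controller.calculate(measuredAngleRad);
  }
}
```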
We were also able to set up a hood physics simulation using WPILib’s SingleJointedArmSim. Here’s an example from our testing:
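For anyone who wants to try something similar, a rough sketch of wiring up the sim is below. It’s written against the 2022 WPILib API (the constructor parameters changed in later releases), and the gearing, length, mass, and angle limits are placeholders rather than our real measurements:

```java
import edu.wpi.first.math.system.plant.DCMotor;
import edu.wpi.first.math.util.Units;
import edu.wpi.first.wpilibj.simulation.SingleJointedArmSim;

/** Sketch of a hood physics sim (2022 WPILib API, placeholder constants). */
public class HoodSim {
  private static final double GEARING = 50.0; // motor rotations per hood rotation
  private static final double LENGTH_METERS = 0.25; // effective arm length
  private static final double MASS_KG = 1.0; // moving mass of the hood

  private final SingleJointedArmSim sim =
      new SingleJointedArmSim(
          DCMotor.getNEO(1),
          GEARING,
          SingleJointedArmSim.estimateMOI(LENGTH_METERS, MASS_KG),
          LENGTH_METERS,
          Units.degreesToRadians(5.0), // min angle
          Units.degreesToRadians(45.0), // max angle
          MASS_KG,
          false); // gravity ignored here for simplicity

  /** Advances the sim by one loop with the commanded voltage and returns the new angle. */
  public double update(double appliedVolts, double dtSeconds) {
    sim.setInput(appliedVolts);
    sim.update(dtSeconds);
    return sim.getAngleRads(); // feed this back in as the simulated encoder reading
  }
}
```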
Part 2: The Moving Camera
Hmm… What a convenient spot for a camera that requires precise calibration…
In all seriousness, mounting the Limelight to the hood gives us a nice expanded range for seeing the target compared to a static mount. We can get a good angle to the hub anywhere from the fender to the alliance wall. However, it means that the hood angle needs to be factored into all of the odometry calculations (since both the camera angle and height are essential for calculating the translation to each target corner).
Like the gyro angle, the hood data needed to be latency compensated to get accurate results. With our original system, that would mean:
- The hood subsystem stores a history of its angle.
- The drive subsystem stores a history of its pose.
- When a frame is received, the vision subsystem asks the hood subsystem for its angle at the correct timestamp.
- The vision subsystem sends a robot-to-target translation to the drive subsystem.
- The drive subsystem uses its pose history to create a field-to-robot translation.
- The drive subsystem uses its pose history again to latency compensate the vision pose and adjust the final pose.
This was starting to look like a bit of a mess, with each subsystem responsible for tracking the history of its own state. To clean up the code, we combined all of the odometry and state tracking into a single class: RobotState (link below). Our drive, vision, and hood subsystems all feed data into this class, which stores the previous second’s worth of robot state. The vision subsystem asks the RobotState class for the interpolated hood and gyro data at a particular timestamp, then feeds a vision pose back to it. That vision pose is inserted at the correct timestamp, and the previous second’s worth of data is reprocessed to calculate the final pose (taking the new vision data into account).
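As a simplified illustration of the idea, a state tracker along these lines keeps timestamped histories and interpolates between samples. This is a hand-rolled sketch that is much smaller than our actual RobotState class, but the hood angle lookup shows the core interpolation logic (the pose lookup works the same way):

```java
import edu.wpi.first.math.geometry.Pose2d;
import java.util.Map;
import java.util.TreeMap;

/**
 * Simplified sketch of a RobotState-style history buffer. The real class also
 * replays vision data, but the interpolation idea is the same.
 */
public class RobotStateSketch {
  private static final double HISTORY_LENGTH_SECS = 1.0;

  private final TreeMap<Double, Pose2d> poseHistory = new TreeMap<>();
  private final TreeMap<Double, Double> hoodAngleHistory = new TreeMap<>();

  public void addDriveData(double timestamp, Pose2d pose) {
    poseHistory.put(timestamp, pose);
    trim(poseHistory, timestamp);
  }

  public void addHoodData(double timestamp, double angleRad) {
    hoodAngleHistory.put(timestamp, angleRad);
    trim(hoodAngleHistory, timestamp);
  }

  /** Returns the hood angle at the given timestamp, interpolating between samples. */
  public double getHoodAngle(double timestamp) {
    Map.Entry<Double, Double> before = hoodAngleHistory.floorEntry(timestamp);
    Map.Entry<Double, Double> after = hoodAngleHistory.ceilingEntry(timestamp);
    if (before == null && after == null) {
      return 0.0; // no data recorded yet
    }
    if (before == null) return after.getValue();
    if (after == null) return before.getValue();
    if (before.getKey().equals(after.getKey())) return before.getValue();
    double t = (timestamp - before.getKey()) / (after.getKey() - before.getKey());
    return before.getValue() + t * (after.getValue() - before.getValue());
  }

  /** Drops samples older than the history window. */
  private static void trim(TreeMap<Double, ?> history, double latestTimestamp) {
    while (!history.isEmpty()
        && history.firstKey() < latestTimestamp - HISTORY_LENGTH_SECS) {
      history.pollFirstEntry();
    }
  }
}
```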
Calculating the camera position based on the hood angle was also a fun exercise in pose transformations:
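Conceptually, it comes down to rotating the camera’s fixed offset around the hood pivot. Here’s a sketch of that idea using WPILib’s 2D geometry classes, with hypothetical pivot and offset values standing in for our real measurements:

```java
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Translation2d;

/**
 * Sketch of computing the camera's position and pitch from the hood angle by
 * rotating the camera's offset around the hood pivot. All constants are
 * hypothetical values, not our real measurements. Work is done in the robot's
 * x-z plane (x forward, z up).
 */
public class CameraPoseSketch {
  // Position of the hood pivot relative to the floor (meters).
  private static final Translation2d PIVOT_POSITION = new Translation2d(0.25, 0.60);
  // Camera position relative to the pivot when the hood is at 0 degrees.
  private static final Translation2d CAMERA_OFFSET = new Translation2d(0.10, 0.15);
  // Camera pitch relative to the hood (radians).
  private static final double CAMERA_PITCH_OFFSET_RAD = Math.toRadians(30.0);

  /** Returns the camera position (x forward, z up) for a given hood angle. */
  public static Translation2d getCameraPosition(double hoodAngleRad) {
    // Rotating the offset by the hood angle moves the camera along the arc
    // traced by the hood, then translating by the pivot position puts it back
    // in the robot frame.
    return PIVOT_POSITION.plus(CAMERA_OFFSET.rotateBy(new Rotation2d(hoodAngleRad)));
  }

  /** Returns the camera pitch above horizontal for a given hood angle. */
  public static double getCameraPitch(double hoodAngleRad) {
    return CAMERA_PITCH_OFFSET_RAD + hoodAngleRad;
  }
}
```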
Part 3: Selecting the Angle
This is the fun part. With the vision system ready to go, we can get the distance to the target just using the odometry system:
```java
robotState.getLatestPose().getTranslation().getDistance(FieldConstants.hubCenter)
```
To set the hood angle and flywheel speed, we manually tuned the shots for a variety of distances. Our goal was to make each shot as flat as possible within a reasonable margin for error, to reduce bounce-outs. Last year, we used the manually tuned data to create quadratic regressions (later cubic and quartic as well). After gathering the data this year, we felt it would be simpler and more reliable to go with a linear interpolation. This means that we adhere more closely to the manual tuning, which is important for shots at the edges of the range like the fender (or anything past the maximum hood angle). Our current tuning looks like this:
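To illustrate the structure, the lookup is just a sorted map with linear interpolation between the two nearest tuned points. The distances, angles, and speeds in this sketch are placeholders, not our actual tuning values:

```java
import java.util.Map;
import java.util.TreeMap;

/**
 * Sketch of a distance-based shot lookup with linear interpolation. The tuning
 * values below are placeholders, not our actual numbers.
 */
public class ShotInterpolator {
  /** Hood angle (degrees) and flywheel speed (RPM) for one tuned distance. */
  public static class ShotParams {
    public final double hoodAngleDeg;
    public final double flywheelRpm;

    public ShotParams(double hoodAngleDeg, double flywheelRpm) {
      this.hoodAngleDeg = hoodAngleDeg;
      this.flywheelRpm = flywheelRpm;
    }

    ShotParams interpolate(ShotParams other, double t) {
      return new ShotParams(
          hoodAngleDeg + t * (other.hoodAngleDeg - hoodAngleDeg),
          flywheelRpm + t * (other.flywheelRpm - flywheelRpm));
    }
  }

  // Keyed by distance to the hub center in meters.
  private final TreeMap<Double, ShotParams> table = new TreeMap<>();

  public ShotInterpolator() {
    table.put(1.2, new ShotParams(10.0, 1500.0)); // fender shot
    table.put(2.5, new ShotParams(25.0, 1800.0));
    table.put(4.0, new ShotParams(32.0, 2100.0));
    table.put(6.0, new ShotParams(38.0, 2500.0));
  }

  /** Returns interpolated shot parameters, clamping outside the tuned range. */
  public ShotParams get(double distanceMeters) {
    Map.Entry<Double, ShotParams> below = table.floorEntry(distanceMeters);
    Map.Entry<Double, ShotParams> above = table.ceilingEntry(distanceMeters);
    if (below == null) return above.getValue(); // closer than the closest tuned shot
    if (above == null) return below.getValue(); // farther than the farthest tuned shot
    if (below.getKey().equals(above.getKey())) return below.getValue();
    double t = (distanceMeters - below.getKey()) / (above.getKey() - below.getKey());
    return below.getValue().interpolate(above.getValue(), t);
  }
}
```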
Here’s a video of the interpolation in action during our early testing:
Was this next visualization excessive? I’d like to think not.
I’ll also include this demonstration of the most important new shot in our arsenal. One day we will find a use for this.