Writing Custom Pipelines
Create custom OpenCV pipelines for object detection
What is a Pipeline?
A pipeline is a class that processes each camera frame. It extends OpenCvPipeline and defines what your camera looks for in the live feed. The pipeline's processFrame method is called for every frame captured by the camera.
Basic Pipeline Structure

Java:
package org.firstinspires.ftc.teamcode;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.openftc.easyopencv.OpenCvPipeline;
public class SimplePipeline extends OpenCvPipeline {
// Class variables for storing results
private String position = "UNKNOWN";
@Override
public Mat processFrame(Mat input) {
// This method is called for every frame
// Process the frame here
// Return the frame (optionally with annotations)
return input;
}
// Getter method to access results from OpMode
public String getPosition() {
return position;
}
}

Kotlin:

package org.firstinspires.ftc.teamcode
import org.opencv.core.Core
import org.opencv.core.Mat
import org.opencv.core.Scalar
import org.opencv.imgproc.Imgproc
import org.openftc.easyopencv.OpenCvPipeline
class SimplePipeline : OpenCvPipeline() {
// Class variables for storing results
private var position = "UNKNOWN"
override fun processFrame(input: Mat): Mat {
// This method is called for every frame
// Process the frame here
// Return the frame (optionally with annotations)
return input
}
// Getter method to access results from OpMode
fun getPosition(): String = position
}

Understanding the Basics
What is a Mat?
A Mat (matrix) is OpenCV's way of representing an image. It's a 2D array where:
- Each position = one pixel
- Each pixel contains color data (RGB, HSV, or grayscale values)
- Size depends on resolution (640x480 = 307,200 pixels)
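To make this concrete, here is a plain-Java sketch (no OpenCV required) of the idea: a grayscale Mat is conceptually just a 2D array with one 0–255 value per pixel, and the pixel count is rows × columns.

```java
public class MatSketch {
    // Total pixels = rows x columns, e.g. 640x480 = 307,200
    public static int pixelCount(int rows, int cols) {
        return rows * cols;
    }

    public static void main(String[] args) {
        // A tiny 3x4 grayscale "image": one 0-255 value per pixel
        int[][] pixels = {
            {0,   0, 255, 255},
            {0, 128, 255, 255},
            {0,   0,   0, 255}
        };
        System.out.println(pixelCount(pixels.length, pixels[0].length)); // 12
        System.out.println(pixelCount(480, 640)); // 307200
    }
}
```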
What is processFrame?
override fun processFrame(input: Mat): Mat

This method is called automatically for every single camera frame (typically 30 times per second).
Parameters:
- input - The current camera frame as a Mat
Returns:
- A Mat that will be displayed on the Driver Station screen
Important: Whatever Mat you return is what appears on your screen. You can return the original input, a processed version, or even a completely different visualization like a mask.
Understanding Color Detection
Before we write a pipeline, let's understand the key OpenCV concepts:
What is a Mask?
A mask is a binary (black and white) image where:
- White pixels (255) = color matches your target
- Black pixels (0) = color doesn't match
Think of it like a stencil that shows only where your target color appears.
What does inRange do?
Core.inRange(hsvMat, lowerBound, upperBound, mask);

This function:
- Takes each pixel in hsvMat
- Checks if it falls between lowerBound and upperBound
- If yes → sets that pixel to white (255) in the mask
- If no → sets that pixel to black (0) in the mask
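The per-pixel check that inRange performs can be sketched in plain Java (single-channel case for simplicity; the real function compares all three HSV channels against the bounds):

```java
public class InRangeSketch {
    // Mimics Core.inRange on one channel: in-range pixels -> 255, others -> 0
    public static int[] inRange(int[] pixels, int lower, int upper) {
        int[] mask = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            mask[i] = (pixels[i] >= lower && pixels[i] <= upper) ? 255 : 0;
        }
        return mask;
    }

    public static void main(String[] args) {
        // Hue values 10 and 200 fall outside [130, 160]; 140 falls inside
        int[] mask = inRange(new int[]{10, 140, 200}, 130, 160);
        System.out.println(java.util.Arrays.toString(mask)); // [0, 255, 0]
    }
}
```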
What is bitwise_or?
Core.bitwise_or(mask1, mask2, combinedMask);

Combines two masks:
- If either mask has white at a pixel → result is white
- Only if both are black → result is black
Useful for detecting multiple color ranges (like red, which wraps around in HSV).
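The same OR logic in plain Java, applied to two small one-row masks (this is why red detection typically OR's a low-hue mask and a high-hue mask together):

```java
public class BitwiseOrSketch {
    // Mimics Core.bitwise_or on binary masks: white if either input is white
    public static int[] bitwiseOr(int[] a, int[] b) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            out[i] = (a[i] == 255 || b[i] == 255) ? 255 : 0;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] lowRedMask  = {255, 0, 0};   // matched hue near 0
        int[] highRedMask = {0, 0, 255};   // matched hue near 180
        System.out.println(java.util.Arrays.toString(bitwiseOr(lowRedMask, highRedMask)));
        // [255, 0, 255]
    }
}
```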
Color Detection Pipeline
This example detects a purple game element and draws circles around it.

Java:
package org.firstinspires.ftc.teamcode;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.openftc.easyopencv.OpenCvPipeline;
import java.util.ArrayList;
import java.util.List;
public class PurpleDetectionPipeline extends OpenCvPipeline {
// Purple HSV thresholds - these values determine what counts as "purple"
// You'll tune these based on your lighting and game element
private Scalar purpleLower = new Scalar(130, 100, 50); // H, S, V minimums
private Scalar purpleUpper = new Scalar(160, 255, 255); // H, S, V maximums
// Minimum area to consider (filters out noise)
private double minContourArea = 100.0;
// Where we found the largest object
private Point detectedCenter = new Point();
private double detectedRadius = 0;
// Working matrices - reuse these to avoid creating new objects every frame
private Mat hsvMat = new Mat();
private Mat mask = new Mat();
@Override
public Mat processFrame(Mat input) {
// Reset so isDetected() reflects the current frame, not a stale detection
detectedRadius = 0;
// Step 1: Convert from RGB to HSV color space
// HSV separates color (hue) from brightness (value), making detection easier
Imgproc.cvtColor(input, hsvMat, Imgproc.COLOR_RGB2HSV);
// Step 2: Create a mask showing only purple pixels
// inRange checks each pixel: if it's between purpleLower and purpleUpper,
// that pixel becomes white (255) in the mask. Otherwise, it's black (0).
Core.inRange(hsvMat, purpleLower, purpleUpper, mask);
// Step 3: Find contours (outlines) of white regions in the mask
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(mask, contours, hierarchy,
Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
hierarchy.release();
// Step 4: Find the largest contour by tracking the maximum area
// We start with maxArea = 0 and check each contour.
// If current contour's area is bigger than maxArea, we update maxArea
// and remember this contour as the largest one found so far.
double maxArea = 0;
MatOfPoint largestContour = null;
for (MatOfPoint contour : contours) {
double area = Imgproc.contourArea(contour);
// Is this contour bigger than any we've seen? And big enough to not be noise?
if (area > maxArea && area > minContourArea) {
maxArea = area; // Update the maximum
largestContour = contour; // This is now the largest
}
}
// Step 5: If we found something, draw a circle around it
if (largestContour != null) {
MatOfPoint2f contour2f = new MatOfPoint2f(largestContour.toArray());
float[] radius = new float[1];
Imgproc.minEnclosingCircle(contour2f, detectedCenter, radius);
detectedRadius = radius[0];
// Draw the circle on the output
Imgproc.circle(input, detectedCenter, (int)detectedRadius,
new Scalar(255, 0, 255), 3); // Purple circle
// Draw center point
Imgproc.circle(input, detectedCenter, 5,
new Scalar(0, 255, 0), -1); // Green dot
// Add label
Imgproc.putText(input,
String.format("Purple (%.0fpx)", detectedRadius),
new Point(detectedCenter.x - 50, detectedCenter.y - detectedRadius - 10),
Imgproc.FONT_HERSHEY_SIMPLEX, 0.6,
new Scalar(255, 255, 255), 2);
contour2f.release();
}
// Release contours
for (MatOfPoint contour : contours) {
contour.release();
}
return input;
}
// Getters for autonomous to use
public Point getDetectedCenter() {
return detectedCenter;
}
public double getDetectedRadius() {
return detectedRadius;
}
public boolean isDetected() {
return detectedRadius > 0;
}
}

Kotlin:

package org.firstinspires.ftc.teamcode
import org.opencv.core.*
import org.opencv.imgproc.Imgproc
import org.openftc.easyopencv.OpenCvPipeline
class PurpleDetectionPipeline : OpenCvPipeline() {
// Purple HSV thresholds - use @JvmField to make these tunable in EOCV-Sim!
@JvmField var purpleLowerH = 130.0 // Hue minimum
@JvmField var purpleUpperH = 160.0 // Hue maximum
@JvmField var purpleLowerS = 100.0 // Saturation minimum
@JvmField var purpleUpperS = 255.0 // Saturation maximum
@JvmField var purpleLowerV = 50.0 // Value/brightness minimum
@JvmField var purpleUpperV = 255.0 // Value/brightness maximum
// Minimum area to consider (filters out noise)
@JvmField var minContourArea = 100.0
// Where we found the largest object
private val detectedCenter = Point()
private var detectedRadius = 0.0
// Working matrices - reuse these to avoid creating new objects every frame
private val hsvMat = Mat()
private val mask = Mat()
override fun processFrame(input: Mat): Mat {
// Reset so isDetected() reflects the current frame, not a stale detection
detectedRadius = 0.0
// Step 1: Convert from RGB to HSV color space
// HSV separates color (hue) from brightness (value), making detection easier
Imgproc.cvtColor(input, hsvMat, Imgproc.COLOR_RGB2HSV)
// Step 2: Create a mask showing only purple pixels
// inRange checks each pixel: if it's between lower and upper bounds,
// that pixel becomes white (255) in the mask. Otherwise, it's black (0).
Core.inRange(
hsvMat,
Scalar(purpleLowerH, purpleLowerS, purpleLowerV),
Scalar(purpleUpperH, purpleUpperS, purpleUpperV),
mask
)
// Step 3: Find contours (outlines) of white regions in the mask
val contours = ArrayList<MatOfPoint>()
val hierarchy = Mat()
Imgproc.findContours(
mask, contours, hierarchy,
Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE
)
hierarchy.release()
// Step 4: Find the largest contour by tracking the maximum area
// We start with maxArea = 0 and check each contour.
// If current contour's area is bigger than maxArea, we update maxArea
// and remember this contour as the largest one found so far.
var maxArea = 0.0
var largestContour: MatOfPoint? = null
for (contour in contours) {
val area = Imgproc.contourArea(contour)
// Is this contour bigger than any we've seen? And big enough to not be noise?
if (area > maxArea && area > minContourArea) {
maxArea = area // Update the maximum
largestContour = contour // This is now the largest
}
}
// Step 5: If we found something, draw a circle around it
if (largestContour != null) {
val contour2f = MatOfPoint2f(*largestContour.toArray())
val radius = FloatArray(1)
Imgproc.minEnclosingCircle(contour2f, detectedCenter, radius)
detectedRadius = radius[0].toDouble()
// Draw the circle on the output
Imgproc.circle(
input, detectedCenter, detectedRadius.toInt(),
Scalar(255.0, 0.0, 255.0), 3 // Purple circle
)
// Draw center point
Imgproc.circle(
input, detectedCenter, 5,
Scalar(0.0, 255.0, 0.0), -1 // Green dot
)
// Add label
Imgproc.putText(
input,
"Purple (${detectedRadius.toInt()}px)",
Point(detectedCenter.x - 50, detectedCenter.y - detectedRadius - 10),
Imgproc.FONT_HERSHEY_SIMPLEX, 0.6,
Scalar(255.0, 255.0, 255.0), 2
)
contour2f.release()
}
// Release contours
for (contour in contours) {
contour.release()
}
return input
}
// Getters for autonomous to use
fun getDetectedCenter(): Point = detectedCenter
fun getDetectedRadius(): Double = detectedRadius
fun isDetected(): Boolean = detectedRadius > 0
// Release matrices when done
fun release() {
hsvMat.release()
mask.release()
}
}

Using Your Pipeline
Once you've created a pipeline, use it in your OpMode:
// Declare at class level
SimplePipeline pipeline = new SimplePipeline();
@Override
public void runOpMode() {
// ... camera initialization code ...
webcam.setPipeline(pipeline);
waitForStart();
while (opModeIsActive()) {
// Access pipeline results
String position = pipeline.getPosition();
telemetry.addData("Position", position);
telemetry.update();
// Use position for autonomous logic
if (position.equals("LEFT")) {
// Do something
}
}
}

Pipeline Best Practices
1. Reuse Mat Objects
Creating new Mat objects in processFrame causes excessive garbage collection. Declare them as class fields:
// Good - reuse Mat objects
private Mat hsvMat = new Mat();
private Mat mask = new Mat();
@Override
public Mat processFrame(Mat input) {
Imgproc.cvtColor(input, hsvMat, Imgproc.COLOR_RGB2HSV);
Core.inRange(hsvMat, lower, upper, mask);
// ...
}

2. Release Submat Crops
Always release submats to prevent memory leaks:
Mat crop = input.submat(roi);
// ... process crop ...
crop.release(); // Important!

3. Avoid Heavy Processing
Keep processFrame fast. If processing takes too long, reduce resolution or simplify your algorithm.
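A simple way to check this is to time your per-frame work yourself. This sketch uses plain Java timing; the 33 ms budget is an assumption based on roughly 30 fps.

```java
public class FrameBudget {
    // At ~30 fps each frame has about a 33 ms budget
    public static boolean withinBudget(long elapsedMillis, long budgetMillis) {
        return elapsedMillis <= budgetMillis;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        // ... your per-frame processing would run here ...
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(withinBudget(elapsedMs, 33));
    }
}
```

If frames routinely blow the budget, the camera view will lag and your detection results will be stale by the time the OpMode reads them.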
4. Thread Safety
If you access pipeline variables from your OpMode:
- Use the volatile keyword for simple values
- Use synchronized blocks for complex objects

private volatile String position = "UNKNOWN";

5. Annotate Your Output
Draw on the input frame to visualize what your pipeline is detecting:
Imgproc.rectangle(input, roi, new Scalar(0, 255, 0), 2);
Imgproc.circle(input, center, 10, new Scalar(255, 0, 0), -1);
Imgproc.putText(input, "Target", point,
Imgproc.FONT_HERSHEY_SIMPLEX, 1, new Scalar(255, 255, 255), 2);

Advanced Pipeline Techniques
Contour Detection
Find the boundaries of detected objects:
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(mask, contours, hierarchy,
Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
// Find largest contour by keeping track of the biggest area seen
// Start at 0, and whenever we see a bigger contour, update our maximum
double maxArea = 0;
Rect largestRect = null;
for (MatOfPoint contour : contours) {
double area = Imgproc.contourArea(contour);
// Is this contour bigger than the previous largest?
if (area > maxArea) {
maxArea = area; // Yes! Update the max
largestRect = Imgproc.boundingRect(contour);
}
}

Multiple Color Detection
Detect multiple colors simultaneously:
Core.inRange(hsvMat, lowerRed, upperRed, redMask);
Core.inRange(hsvMat, lowerBlue, upperBlue, blueMask);
double redPercent = Core.mean(redMask).val[0] / 255.0 * 100;
double bluePercent = Core.mean(blueMask).val[0] / 255.0 * 100;

Morphological Operations
Clean up noisy masks:
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5));
// Remove small noise
Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_OPEN, kernel);
// Fill small holes
Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_CLOSE, kernel);

Distance Estimation
You can estimate how far an object is from your camera using the pinhole camera model. This is useful for autonomous navigation and positioning.
The Pinhole Model
The basic formula is:

Distance = (Real Object Size × Focal Length) / Pixel Size

Where:
- Real Object Size = Physical size of the object (inches/cm)
- Focal Length = Camera's focal length (from calibration)
- Pixel Size = Detected size in pixels
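Plugging in example numbers makes the formula concrete. The 451.07 px focal length and 8.6 cm object width match the example further down; the 97 px detection is an assumed measurement.

```java
public class PinholeSketch {
    // Distance = (real object size * focal length) / detected size in pixels
    public static double distance(double realSize, double focalLengthPx, double detectedPx) {
        return (realSize * focalLengthPx) / detectedPx;
    }

    public static void main(String[] args) {
        // An 8.6 cm object, a 451.07 px focal length, detected 97 px wide
        System.out.printf("%.1f cm%n", distance(8.6, 451.07, 97.0)); // ~40.0 cm
    }
}
```

Note how the units work out: the pixel units cancel, so the result is in whatever unit you used for the real object size (centimeters here).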
Camera Calibration
You'll need your camera's intrinsic parameters (focal length, distortion coefficients). Get these from:
- calibdb.net - Database of pre-calibrated FTC cameras
- Manual calibration using OpenCV's calibration tools
Tip: Search for your camera model on calibdb.net to find pre-calculated calibration values that you can use directly in your code!
Distance Estimation Example

Java:
private double estimateDistance(RotatedRect rect, double focalLength,
double realObjectWidth, double realObjectHeight) {
// Get the larger dimension (width or height) in pixels
double detectedSizePixels = Math.max(rect.size.width, rect.size.height);
// Use the corresponding real-world dimension
double realObjectSize = Math.max(realObjectWidth, realObjectHeight);
// Apply pinhole model formula
return (realObjectSize * focalLength) / detectedSizePixels;
}
// Example usage in processFrame:
public Mat processFrame(Mat input) {
// ... detection code ...
if (largestContour != null) {
MatOfPoint2f contour2f = new MatOfPoint2f(largestContour.toArray());
RotatedRect rotatedRect = Imgproc.minAreaRect(contour2f);
// Camera focal length from calibration (example value)
double focalLength = 451.07;
// Real object dimensions in centimeters
double objectWidth = 8.6;
double objectHeight = 3.7;
// Calculate distance
double distance = estimateDistance(rotatedRect, focalLength,
objectWidth, objectHeight);
// Draw distance on frame
Imgproc.putText(input,
String.format("Distance: %.1f cm", distance),
new Point(rotatedRect.center.x, rotatedRect.center.y),
Imgproc.FONT_HERSHEY_SIMPLEX, 0.7,
new Scalar(255, 255, 255), 2);
contour2f.release();
}
return input;
}

Kotlin:

import kotlin.math.max

private fun estimateDistance(
rect: RotatedRect,
focalLength: Double,
realObjectWidth: Double,
realObjectHeight: Double
): Double {
// Get the larger dimension (width or height) in pixels
val detectedSizePixels = max(rect.size.width, rect.size.height)
// Use the corresponding real-world dimension
val realObjectSize = max(realObjectWidth, realObjectHeight)
// Apply pinhole model formula
return (realObjectSize * focalLength) / detectedSizePixels
}
// Example usage in processFrame:
override fun processFrame(input: Mat): Mat {
// ... detection code ...
if (largestContour != null) {
val contour2f = MatOfPoint2f(*largestContour.toArray())
val rotatedRect = Imgproc.minAreaRect(contour2f)
// Camera focal length from calibration (example value)
val focalLength = 451.07
// Real object dimensions in centimeters
val objectWidth = 8.6
val objectHeight = 3.7
// Calculate distance
val distance = estimateDistance(rotatedRect, focalLength,
objectWidth, objectHeight)
// Draw distance on frame
Imgproc.putText(
input,
"Distance: ${String.format("%.1f", distance)} cm",
Point(rotatedRect.center.x, rotatedRect.center.y),
Imgproc.FONT_HERSHEY_SIMPLEX, 0.7,
Scalar(255.0, 255.0, 255.0), 2
)
contour2f.release()
}
return input
}

Accounting for Camera Angle
If your camera is mounted at an angle, adjust the distance:
import kotlin.math.cos
val cameraMountingAngle = 15.0 // degrees from horizontal
val distanceRaw = estimateDistance(rect, focalLength, width, height)
val distanceAdjusted = distanceRaw * cos(Math.toRadians(cameraMountingAngle))

Best Practices
- Calibrate your specific camera - Calibration values vary between camera units
- Test at actual distances - Verify your distance estimates match reality
- Use consistent units - Keep all measurements in the same unit (inches or cm)
- Account for camera angle - Apply trigonometric corrections if mounted at an angle
- Consider object orientation - Detection accuracy decreases with extreme angles
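As a quick sanity check of the angle correction above, here is the same calculation in Java with illustrative numbers: a 40 cm raw estimate from a camera mounted 15° from horizontal.

```java
public class AngleCheck {
    // Ground distance = slant (line-of-sight) distance * cos(mounting angle)
    public static double adjusted(double rawDistance, double angleDegrees) {
        return rawDistance * Math.cos(Math.toRadians(angleDegrees));
    }

    public static void main(String[] args) {
        // cos(15 degrees) is about 0.966, so 40 cm shrinks to roughly 38.6 cm
        System.out.printf("%.1f cm%n", adjusted(40.0, 15.0));
    }
}
```

The correction is small at shallow angles but grows quickly: at 45° the slant distance overstates the ground distance by over 40%.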
Next Steps
- Learn to test pipelines offline with EOCV-Sim
- Explore advanced color detection techniques
- See real-world pipeline examples
- Integrate with robot subsystems