Using detectHarrisFeatures and detectSURFFeatures essentially returns an object where each property contains relevant information about the interest points detected in the image. To give a reproducible example, let's use the cameraman.tif image that ships with the Image Processing Toolbox, and let's run both feature detection frameworks with their default parameters:
>> im = imread('cameraman.tif');
>> harrisPoints = detectHarrisFeatures(im);
>> surfPoints = detectSURFFeatures(im);
When we display harrisPoints, this is what we get:
harrisPoints =
184x1 cornerPoints array with properties:
Location: [184x2 single]
Metric: [184x1 single]
Count: 184
When we display surfPoints, this is what we get:
surfPoints =
180x1 SURFPoints array with properties:
Scale: [180x1 single]
SignOfLaplacian: [180x1 int8]
Orientation: [180x1 single]
Location: [180x2 single]
Metric: [180x1 single]
Count: 180
As such, both harrisPoints and surfPoints have a property called Location which contains the spatial coordinates of the features you want. This is an N x 2 matrix where each row gives you the location of one feature point. The first column is the x or horizontal coordinate and the second column is the y or vertical coordinate. The origin is at the top-left corner of the image, and the y coordinate increases as you move down the image.
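For instance, here is a quick way to visualize where the detected points land on the image by plotting the Location coordinates directly. This is just an illustrative sketch using the variables created above; the marker styles are arbitrary choices:

>> imshow(im); hold on;
>> plot(harrisPoints.Location(:,1), harrisPoints.Location(:,2), 'r+'); % Harris corners
>> plot(surfPoints.Location(:,1), surfPoints.Location(:,2), 'go');     % SURF points
>> hold off;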
Therefore, if you want to combine both sets of feature points, access the Location property of both objects and concatenate them into a single matrix:
>> Points = [harrisPoints.Location; surfPoints.Location];
Points should now contain a single matrix where each row gives you one feature point.
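With the counts from this example (184 Harris points and 180 SURF points), a quick size check confirms the stacking; the exact numbers will depend on your image and detector parameters:

>> size(Points)

ans =

   364     2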
One small note: the Harris corner detector is just an interest point detection algorithm; all it gives you are the locations of interesting points in the image. SURF is both a detection and a description framework, where not only do you get interest points, but you also get a robust description of each interest point that you can use to perform matching against interest points in other images. Therefore, if you wanted to combine Harris and SURF at the description stage, that isn't possible because Harris does not support describing interest points.
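To illustrate the description side of SURF, you can feed the detected SURF points into extractFeatures and then match the resulting descriptors against those from another image. This is a minimal sketch; the second image im2 is a hypothetical placeholder, not something defined above:

>> % Extract SURF descriptors at the detected SURF point locations
>> [surfFeatures, validSurfPoints] = extractFeatures(im, surfPoints);
>> % Hypothetical second image: detect, describe, then match descriptors
>> surfPoints2 = detectSURFFeatures(im2);
>> [surfFeatures2, validSurfPoints2] = extractFeatures(im2, surfPoints2);
>> indexPairs = matchFeatures(surfFeatures, surfFeatures2);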