Feature Extraction and Descriptors
Below, [] in an argument list means an optional argument.
Types
ImageFeatures.Feature — Type
feature = Feature(keypoint, orientation = 0.0, scale = 0.0)
The Feature type has the keypoint, its orientation and its scale.
ImageFeatures.Features — Type
features = Features(boolean_img)
features = Features(keypoints)
Returns a Vector{Feature} of features generated from the true values in a boolean image or from a list of keypoints.
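A minimal sketch of how these two constructors fit together; the keypoint coordinates below are arbitrary placeholders:
julia> using ImageFeatures
julia> f = Feature(Keypoint(10, 20), 0.0, 1.0);
julia> fs = Features([Keypoint(10, 20), Keypoint(15, 25)]);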
ImageFeatures.Keypoint — Type
keypoint = Keypoint(y, x)
keypoint = Keypoint(feature)
A Keypoint may be created by passing the coordinates of the point or from a feature.
ImageFeatures.Keypoints — Type
keypoints = Keypoints(boolean_img)
keypoints = Keypoints(features)
Creates a Vector{Keypoint} from the true values in a boolean image or from a list of features.
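A small sketch of building keypoints from a boolean mask; the mask contents are arbitrary:
julia> using ImageFeatures
julia> mask = falses(5, 5); mask[2, 3] = true; mask[4, 4] = true;
julia> kps = Keypoints(mask);   # one Keypoint per true entry
julia> kp = Keypoint(2, 3);     # the same point, constructed directly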
ImageFeatures.BRIEF — Type
brief_params = BRIEF([size = 128], [window = 9], [sigma = 2 ^ 0.5], [sampling_type = gaussian], [seed = 123])

| Argument | Type | Description |
|---|---|---|
| size | Int | Size of the descriptor |
| window | Int | Size of the sampling window |
| sigma | Float64 | Value of sigma used for initial gaussian smoothing of the image |
| sampling_type | Function | Type of sampling used for building the descriptor (see BRIEF Sampling Patterns) |
| seed | Int | Random seed used for generating the sampling pairs. For matching two descriptors, the seed used to build both should be the same. |
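A hedged sketch of typical BRIEF usage: keypoints are found separately (here with fastcorners) and then described. The test image and the FAST threshold are illustrative choices, not part of the API:
julia> using ImageFeatures, Images, TestImages
julia> img = Gray.(testimage("cameraman"));
julia> keypoints = Keypoints(fastcorners(img, 12, 0.35));
julia> brief_params = BRIEF(size = 256, seed = 123);
julia> desc, ret_keypoints = create_descriptor(img, keypoints, brief_params);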
ImageFeatures.ORB — Type
orb_params = ORB([num_keypoints = 500], [n_fast = 12], [threshold = 0.25], [harris_factor = 0.04], [downsample = 1.3], [levels = 8], [sigma = 1.2])

| Argument | Type | Description |
|---|---|---|
| num_keypoints | Int | Number of keypoints to extract and size of the descriptor calculated |
| n_fast | Int | Number of consecutive pixels used for finding corners with FAST. See fastcorners. |
| threshold | Float64 | Threshold used to find corners in FAST. See fastcorners. |
| harris_factor | Float64 | Harris factor k used to rank keypoints by Harris response and extract the best ones |
| downsample | Float64 | Downsampling parameter used while building the gaussian pyramid. See gaussian_pyramid in Images.jl. |
| levels | Int | Number of levels in the gaussian pyramid. See gaussian_pyramid in Images.jl. |
| sigma | Float64 | Used for gaussian smoothing in each level of the gaussian pyramid. See gaussian_pyramid in Images.jl. |
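A sketch of ORB usage. Unlike BRIEF, ORB discovers its own keypoints internally, so only the image and the parameters are needed; the parameter values below are illustrative:
julia> using ImageFeatures, Images, TestImages
julia> img = Gray.(testimage("cameraman"));
julia> orb_params = ORB(num_keypoints = 100);
julia> desc, ret_keypoints = create_descriptor(img, orb_params);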
ImageFeatures.FREAK — Type
freak_params = FREAK([pattern_scale = 22.0])

| Argument | Type | Description |
|---|---|---|
| pattern_scale | Float64 | Scaling factor for the sampling window |
ImageFeatures.BRISK — Type
brisk_params = BRISK([pattern_scale = 1.0])

| Argument | Type | Description |
|---|---|---|
| pattern_scale | Float64 | Scaling factor for the sampling window |
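FREAK and BRISK are used like BRIEF: the keypoints are supplied explicitly. A short sketch with FREAK (BRISK is analogous); the test image and FAST threshold are illustrative:
julia> using ImageFeatures, Images, TestImages
julia> img = Gray.(testimage("cameraman"));
julia> keypoints = Keypoints(fastcorners(img, 12, 0.35));
julia> freak_params = FREAK(pattern_scale = 22.0);
julia> desc, ret_keypoints = create_descriptor(img, keypoints, freak_params);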
Corners and edges
ImageFeatures.corner_orientations — Function
orientations = corner_orientations(img)
orientations = corner_orientations(img, corners)
orientations = corner_orientations(img, corners, kernel)
Returns the orientations of corner patches in an image. The orientation of a corner patch is given by the orientation of the vector between the intensity centroid and the corner. The intensity centroid is calculated as C = (m01/m00, m10/m00), where mpq is the image moment defined as the sum of (x^p)(y^q)I(y, x) over the corner patch.
The kernel used for the patch can be given through the kernel argument. The default is a 5x5 gaussian kernel.
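A brief sketch, passing corners found by fastcorners as Keypoints and leaving the kernel at its default; the test image and threshold are illustrative:
julia> using ImageFeatures, Images, TestImages
julia> img = Gray.(testimage("cameraman"));
julia> corners = Keypoints(fastcorners(img, 12, 0.35));
julia> orientations = corner_orientations(img, corners);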
ImageCorners.fastcorners — Function
fastcorners(img, n, threshold) -> corners
Performs FAST corner detection. n is the number of contiguous pixels which need to be greater than intensity + threshold (or less than intensity - threshold) for a pixel to be marked as a corner. The default value for n is 12.
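A minimal sketch on a synthetic image containing a bright square; the result is a boolean image of the same size, which can then be converted to Keypoints:
julia> using ImageFeatures
julia> img = zeros(20, 20); img[6:15, 6:15] .= 1.0;
julia> corners = fastcorners(img, 12, 0.25);
julia> keypoints = Keypoints(corners);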
Images.canny — Function
canny_edges = canny(img, (upper, lower), sigma=1.4)
Performs Canny edge detection on the input image.
Parameters:
(upper, lower) : Bounds for hysteresis thresholding
sigma : Standard deviation of the gaussian filter
Example
imgedg = canny(img, (Percentile(80), Percentile(20)))
Images.phase — Function
phase(grad_x, grad_y) -> p
Calculate the rotation angle of the gradient given by grad_x and grad_y. Equivalent to atan(-grad_y, grad_x), except that when both grad_x and grad_y are effectively zero, the corresponding angle is set to zero.
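A small sketch using imgradients from Images to obtain the gradients, following the same pattern as the circle-detection example later in this section; the random image is a placeholder:
julia> using Images
julia> img = Gray.(rand(50, 50));
julia> grad_x, grad_y = imgradients(img, KernelFactors.ando5);
julia> p = phase(grad_x, grad_y);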
BRIEF Sampling Patterns
ImageFeatures.random_uniform — Function
sample_one, sample_two = random_uniform(size, window, seed)
Builds sampling pairs using random uniform sampling.
ImageFeatures.random_coarse — Function
sample_one, sample_two = random_coarse(size, window, seed)
Builds sampling pairs using random sampling over a coarse grid.
ImageFeatures.gaussian — Function
sample_one, sample_two = gaussian(size, window, seed)
Builds sampling pairs using gaussian sampling.
ImageFeatures.gaussian_local — Function
sample_one, sample_two = gaussian_local(size, window, seed)
Pairs (Xi, Yi) are randomly sampled using a Gaussian distribution: each Xi is sampled with a standard deviation of 0.04 * S^2, and each Yi is then sampled from a Gaussian distribution with mean Xi and standard deviation 0.01 * S^2.
ImageFeatures.center_sample — Function
sample_one, sample_two = center_sample(size, window, seed)
Builds sampling pairs (Xi, Yi) where each Xi is (0, 0) and each Yi is sampled uniformly from the window.
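These functions are not usually called directly; they are passed as the sampling_type argument of BRIEF. A minimal sketch (the seed value is arbitrary):
julia> using ImageFeatures
julia> brief_params = BRIEF(sampling_type = random_uniform, seed = 42);
julia> sample_one, sample_two = random_uniform(128, 9, 42);   # or inspect the sampling pairs directly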
Feature Extraction
Feature Description
ImageFeatures.create_descriptor — Function
desc, keypoints = create_descriptor(img, keypoints, params)
desc, keypoints = create_descriptor(img, params)
Create a descriptor for each entry in keypoints from the image img. params specifies the parameters for any of the descriptors described above (BRIEF, ORB, BRISK, FREAK, HOG). Some descriptors support discovery of the keypoints from fastcorners.
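The two call forms, sketched with BRIEF (explicit keypoints) and ORB (keypoints discovered internally); the image and parameter values are illustrative:
julia> using ImageFeatures, Images, TestImages
julia> img = Gray.(testimage("cameraman"));
julia> desc, kps = create_descriptor(img, Keypoints(fastcorners(img, 12, 0.35)), BRIEF());
julia> desc, kps = create_descriptor(img, ORB(num_keypoints = 100));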
Feature Matching
ImageFeatures.hamming_distance — Function
distance = hamming_distance(desc_1, desc_2)
Calculates the hamming distance between two descriptors.
ImageFeatures.match_keypoints — Function
matches = match_keypoints(keypoints_1, keypoints_2, desc_1, desc_2, threshold = 0.1)
Finds matched keypoints, using the hamming_distance function, whose distance value is less than threshold.
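A sketch of a complete matching pipeline between two images, following the BRIEF pattern above; in practice the second image would be a different view or a transformed copy rather than the same image:
julia> using ImageFeatures, Images, TestImages
julia> img1 = Gray.(testimage("cameraman"));
julia> img2 = Gray.(testimage("cameraman"));   # placeholder for a second view
julia> keypoints_1 = Keypoints(fastcorners(img1, 12, 0.35));
julia> keypoints_2 = Keypoints(fastcorners(img2, 12, 0.35));
julia> brief_params = BRIEF();
julia> desc_1, ret_kps_1 = create_descriptor(img1, keypoints_1, brief_params);
julia> desc_2, ret_kps_2 = create_descriptor(img2, keypoints_2, brief_params);
julia> matches = match_keypoints(ret_kps_1, ret_kps_2, desc_1, desc_2, 0.1);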
Texture Matching
Gray Level Co-occurrence Matrix
ImageFeatures.glcm — Function
glcm = glcm(img, distance, angle, mat_size=16)
glcm = glcm(img, distances, angle, mat_size=16)
glcm = glcm(img, distance, angles, mat_size=16)
glcm = glcm(img, distances, angles, mat_size=16)
Calculates the GLCM (Gray Level Co-occurrence Matrix) of an image. The distances and angles arguments may be a single integer or a vector of integers if multiple GLCMs need to be calculated. The mat_size argument is used to define the granularity of the GLCM.
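A minimal sketch, assuming glcm accepts a grayscale image that it bins into mat_size gray levels; the random image is a placeholder and the angles are in radians:
julia> using ImageFeatures, Images
julia> img = Gray.(rand(50, 50));
julia> g = glcm(img, 1, 0, 16);                 # single distance and angle
julia> gs = glcm(img, [1, 2], [0, pi/2], 16);   # multiple distances and angles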
ImageFeatures.glcm_symmetric — Function
glcm = glcm_symmetric(img, distance, angle, mat_size=16)
glcm = glcm_symmetric(img, distances, angle, mat_size=16)
glcm = glcm_symmetric(img, distance, angles, mat_size=16)
glcm = glcm_symmetric(img, distances, angles, mat_size=16)
Symmetric version of the glcm function.
ImageFeatures.glcm_norm — Function
glcm = glcm_norm(img, distance, angle, mat_size)
glcm = glcm_norm(img, distances, angle, mat_size)
glcm = glcm_norm(img, distance, angles, mat_size)
glcm = glcm_norm(img, distances, angles, mat_size)
Normalised version of the glcm function.
ImageFeatures.glcm_prop — Function
prop = glcm_prop(glcm, property)
prop = glcm_prop(glcm, height, width, property)
Multiple properties of the obtained GLCM can be calculated with the glcm_prop function, which calculates the property for the entire matrix. If grid dimensions are provided, the matrix is divided into a grid and the property is calculated for each cell, resulting in a height x width property matrix.
Various properties can be calculated, such as mean, variance, correlation, contrast, IDM (Inverse Difference Moment), ASM (Angular Second Moment), entropy, max_prob (Max Probability), energy and dissimilarity.
The individual property functions include max_prob, contrast, ASM, IDM, glcm_entropy, energy, dissimilarity, correlation, glcm_mean_ref, glcm_mean_neighbour, glcm_var_ref and glcm_var_neighbour.
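A sketch of computing properties from a GLCM; the property argument is one of the functions listed above (here contrast and energy), and the random image is a placeholder:
julia> using ImageFeatures, Images
julia> img = Gray.(rand(50, 50));
julia> g = glcm(img, 1, 0, 16);
julia> c = glcm_prop(g, contrast);         # property over the whole matrix
julia> e = glcm_prop(g, 2, 2, energy);     # property over a 2x2 grid of cells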
Local Binary Patterns
The available local binary pattern variants include lbp, modified_lbp, direction_coded_lbp, lbp_original, lbp_uniform, lbp_rotation_invariant and multi_block_lbp.
Misc
ImageFeatures.HOG — Type
hog_params = HOG([orientations = 9], [cell_size = 8], [block_size = 2], [block_stride = 1], [norm_method = "L2-norm"])
Histogram of Oriented Gradients (HOG) is a dense feature descriptor usually used for object detection. See "Histograms of Oriented Gradients for Human Detection" by Dalal and Triggs.
Parameters:
- orientations = number of orientation bins
- cell_size = size of a cell is cell_size x cell_size (in pixels)
- block_size = size of a block is block_size x block_size (in terms of cells)
- block_stride = stride of blocks. Controls how much adjacent blocks overlap.
- norm_method = block normalization method. Options: L2-norm, L2-hys, L1-norm, L2-sqrt.
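A sketch of computing a HOG descriptor through create_descriptor. The 128x64 image size is the classic detection-window size and is chosen here so the default cell and block sizes tile it evenly; the random image is a placeholder, and the return value is assumed to be a single descriptor vector for the whole image:
julia> using ImageFeatures, Images
julia> img = Gray.(rand(128, 64));
julia> hog_params = HOG(orientations = 9, cell_size = 8, block_size = 2);
julia> desc = create_descriptor(img, hog_params);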
ImageFeatures.hough_transform_standard — Function
lines = hough_transform_standard(
img_edges::AbstractMatrix;
stepsize=1,
angles=range(0,stop=pi,length=minimum(size(img))),
vote_threshold=minimum(size(img)) / stepsize -1,
max_linecount=typemax(Int))
Returns a vector of tuples corresponding to (r, t) pairs, where r and t are the parameters of the normal form of a line: x * cos(t) + y * sin(t) = r
r = length of the perpendicular from (1,1) to the line
t = angle between the perpendicular from (1,1) to the line and the x-axis
The lines are generated by applying hough transform on the image.
Parameters:
img_edges = image to be transformed (eltype should be Bool)
stepsize = discrete step size for the perpendicular length of the line
angles = list of angles for which the transform is computed
vote_threshold = accumulator threshold for line detection
max_linecount = maximum number of lines to return
Example
julia> using ImageFeatures
julia> img = fill(false,5,5); img[3,:] .= true; img
5×5 Array{Bool,2}:
false false false false false
false false false false false
true true true true true
false false false false false
false false false false false
julia> hough_transform_standard(img)
1-element Array{Tuple{Float64,Float64},1}:
(3.0, 1.5707963267948966)
ImageFeatures.hough_circle_gradient — Function
circle_centers, circle_radius = hough_circle_gradient(img_edges, img_phase, radii; scale=1, min_dist=minimum(radii), vote_threshold)
Returns two vectors, corresponding to circle centers and radii.
The circles are generated using a hough transform variant in which a non-zero point only votes for circle centers perpendicular to the local gradient. In case of concentric circles, only the largest circle is detected.
Parameters:
img_edges = edges of the image
img_phase = phase of the gradient image
radii = circle radius range
scale = relative accumulator resolution factor
min_dist = minimum distance between detected circle centers
vote_threshold = accumulator threshold for circle detection
canny and phase can be used for obtaining img_edges and img_phase respectively.
Example
julia> using Images, ImageFeatures, FileIO, ImageView
julia> img = load(download("http://docs.opencv.org/3.1.0/water_coins.jpg"));
julia> img = Gray.(img);
julia> img_edges = canny(img, (Percentile(99), Percentile(80)));
julia> dx, dy=imgradients(img, KernelFactors.ando5);
julia> img_phase = phase(dx, dy);
julia> centers, radii = hough_circle_gradient(img_edges, img_phase, 20:30);
julia> img_demo = Float64.(img_edges); for c in centers img_demo[c] = 2; end
julia> imshow(img_demo)