Summary and function reference
Below, [] in an argument list means an optional argument.
Image loading and saving
FileIO.jl is an IO frontend that provides save and load to read and write images easily. The currently available backends for image files are:
- ImageMagick.jl covers most image formats and has extra functionality. This can be your first choice if you don't have a preference.
- QuartzImageIO.jl exposes macOS's native image IO functionality to Julia. In some cases it's faster than ImageMagick, but it might not cover all your needs.
- ImageIO.jl is a new image IO backend (requires Julia >= v1.3) that provides optimized performance for PNG files.
- OMETIFF.jl supports OME-TIFF files. If you don't know what it is, then it is likely that you don't need this package.
Standard test images are provided by TestImages.jl
FileIO.load — Function
load(filename)
loads the contents of a formatted file, trying to infer the format from filename and/or magic bytes in the file (see query).
load(strm)
loads from an IOStream or similar object. In this case, there is no filename extension, so we rely on the magic bytes for format identification.
load(File{format"PNG"}(filename))
specifies the format directly, and bypasses the format query.
load(Stream{format"PNG"}(io))
specifies the format directly, and bypasses the format query.
load(f; options...)
passes keyword arguments on to the loader.
FileIO.save — Function
save(filename, data...)
saves the contents of a formatted file, trying to infer the format from filename.
save(Stream{format"PNG"}(io), data...)
specifies the format directly, and bypasses the format query.
save(File{format"PNG"}(filename), data...)
specifies the format directly, and bypasses the format query.
save(f, data...; options...)
passes keyword arguments on to the saver.
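A minimal usage sketch (the filenames here are placeholders):
using FileIO
img = load("input.png")      # format inferred from the extension and magic bytes
save("output.jpg", img)      # format inferred from the extension; keywords go to the saver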
TestImages.testimage — Function
img = testimage(filename; download_only=false, [ops...])
Load a test image that partially matches filename; the first match is used if there is more than one.
If download_only=true, the full filepath is returned. Any other keyword arguments ops will be passed to the image IO backend through FileIO.load.
Example
julia> using TestImages
julia> img = testimage("cameraman.tif"); # fullname
julia> img = testimage("cameraman"); # without extension works
julia> img = testimage("c"); # with only partial name also works
Extended help
The following is a complete list of test images; you can also browse them at https://testimages.juliaimages.org/
"airplaneF16"
"autumn_leaves"
"barbara_color"
"barbara_gray_512"
"bark_512"
"bark_he_512"
"beach_sand_512"
"beach_sand_he_512"
"blobs"
"brick_wall_512"
"brick_wall_he_512"
"calf_leather_512"
"calf_leather_he_512"
"cameraman"
"chelsea"
"coffee"
"earth_apollo17"
"fabio_color_256"
"fabio_color_512"
"fabio_gray_256"
"fabio_gray_512"
"grass_512"
"grass_he_512"
"hela-cells"
"herringbone_weave_512"
"herringbone_weave_he_512"
"house"
"jetplane"
"lake_color"
"lake_gray"
"lena_color_256"
"lena_color_512"
"lena_gray_16bit"
"lena_gray_256"
"lena_gray_512"
"lighthouse"
"lilly"
"livingroom"
"m51"
"mandril_color"
"mandril_gray"
"mandrill"
"monarch_color"
"monarch_color_256"
"moonsurface"
"morphology_test_512"
"mountainstream"
"mri-stack"
"multi-channel-time-series.ome"
"peppers_color"
"peppers_gray"
"pigskin_512"
"pigskin_he_512"
"pirate"
"plastic_bubbles_512"
"plastic_bubbles_he_512"
"raffia_512"
"raffia_he_512"
"resolution_test_1920"
"resolution_test_512"
"simple_3d_ball"
"simple_3d_psf"
"straw_512"
"straw_he_512"
"sudoku"
"toucan"
"walkbridge"
"water_512"
"water_he_512"
"woman_blonde"
"woman_darkhair"
"wood_grain_512"
"wood_grain_he_512"
"woolen_cloth_512"
"woolen_cloth_he_512"
Images.shepp_logan — Function
phantom = shepp_logan(N, [M]; highContrast=true)
Output the N×M Shepp-Logan phantom, a standard test image used for comparing image reconstruction algorithms in the fields of computed tomography (CT) and magnetic resonance imaging (MRI). If the argument M is omitted, the phantom is of size N×N. When the keyword argument highContrast is set to false, the CT version of the phantom is created. Otherwise, the high-contrast MRI version is calculated.
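For example, a minimal sketch:
using Images
p_mri = shepp_logan(256)                      # 256×256 high-contrast MRI phantom
p_ct  = shepp_logan(256; highContrast=false)  # CT version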
Image construction, conversion, and views
Any array can be treated as an Image. In graphical environments, only arrays with Colorant element types (Gray, RGB, ARGB, etc.) are automatically displayed as images.
ImageCore.colorview — Function
colorview(C, A)
returns a view of the numeric array A, interpreting successive elements of A as if they were channels of Colorant C.
Of relevance for types like RGB and BGR, the elements of A are interpreted in constructor-argument order, not memory order (see reinterpretc if you want to use memory order).
Example
A = rand(3, 10, 10)
img = colorview(RGB, A)
See also: channelview
colorview(C, gray1, gray2, ...) -> imgC
Combine numeric/grayscale images gray1, gray2, etc., into the separate color channels of an array imgC with element type C<:Colorant.
As a convenience, the constant zeroarray fills in an array of matched size with all zeros.
Example
imgC = colorview(RGB, r, zeroarray, b)
creates an image with r in the red channel, b in the blue channel, and nothing in the green channel.
See also: StackedView.
colorview(C)
Create a function that is equivalent to (As...) -> colorview(C, As...).
Examples
julia> ones(Float32, 2, 2) |> colorview(Gray)
2×2 reinterpret(reshape, Gray{Float32}, ::Matrix{Float32}) with eltype Gray{Float32}:
Gray{Float32}(1.0) Gray{Float32}(1.0)
Gray{Float32}(1.0) Gray{Float32}(1.0)
This can be convenient when you want to convert a batch of channel data, for example:
julia> Rs, Gs, Bs = ntuple( i -> [randn(2, 2) for _ in 1:4], 3)
julia> map(colorview(RGB), Rs, Gs, Bs)
ImageCore.channelview — Function
channelview(A)
returns a view of A, splitting out (if necessary) the color channels of A into a new first dimension.
Of relevance for types like RGB and BGR, the channels of the returned array will be in constructor-argument order, not memory order (see reinterpretc if you want to use memory order).
Example
img = rand(RGB{N0f8}, 10, 10)
A = channelview(img) # a 3×10×10 array
See also: colorview
ImageCore.normedview — Function
normedview([T], img::AbstractArray{Unsigned})
returns a "view" of img where the values are interpreted in terms of Normed number types. For example, if img is an Array{UInt8}, the view will act like an Array{N0f8}. Supply T if the element type of img is UInt16, to specify whether you want a N6f10, N4f12, N2f14, or N0f16 result.
See also: rawview
ImageCore.rawview — Function
rawview(img::AbstractArray{FixedPoint})
returns a "view" of img where the values are interpreted in terms of their raw underlying storage. For example, if img is an Array{N0f8}, the view will act like an Array{UInt8}.
See also: normedview
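A minimal round-trip sketch of these two views:
using ImageCore
a8 = rand(UInt8, 4, 4)
n  = normedview(a8)         # acts like an Array{N0f8}; values lie in [0, 1]
r  = rawview(n)             # back to the raw UInt8 interpretation
n12 = normedview(N4f12, rand(UInt16, 4, 4))  # choose the interpretation for UInt16 data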
ImageCore.StackedView — Type
StackedView(B, C, ...) -> A
Present arrays B, C, etc., as if they are separate channels along the first dimension of A. In particular,
B == A[1,:,:...]
C == A[2,:,:...]
and so on. Combined with colorview, this allows one to combine two or more grayscale images into a single color image.
See also: colorview.
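A minimal sketch combining StackedView with colorview (the array sizes are arbitrary):
using ImageCore
B, C = rand(8, 8), rand(8, 8)
A = StackedView(B, C)                               # 2×8×8, with A[1,:,:] == B
img = colorview(RGB, StackedView(B, zeroarray, C))  # B in red, C in blue, green left empty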
PaddedViews.PaddedView — Type
datapadded = PaddedView(fillvalue, data, padded_axes)
datapadded = PaddedView(fillvalue, data, padded_axes, data_axes)
datapadded = PaddedView(fillvalue, data, sz)
datapadded = PaddedView(fillvalue, data, sz, first_datum)
datapadded = PaddedView{T}(args...)
Create a padded version of the array data, where any elements within the span of padded_axes not assigned in data will have value fillvalue.
Supply data_axes to specify an alternate set of axes for data, effectively relocating data to a different set of indices. This is shorthand for
offsetdata = OffsetArray(data, data_axes)
datapadded = PaddedView(fillvalue, offsetdata, padded_axes)
using the OffsetArrays package.
Alternatively, the padded array size sz can be specified, in which case datapadded starts indexing at 1. One may optionally specify the location of the [1, 1, ...] element of data with first_datum. Specifically, datapadded[first_datum...] corresponds to data[1, 1, ...]. first_datum defaults to all-1s.
The view eltype T is optional. If not specified, then in most cases T is inferred to be eltype(data). In cases when fillvalue can't be converted to eltype(data), T will be promoted to one that can represent both. For example, when fillvalue == nothing and eltype(data) == Float32, the inferred eltype T will be Union{Nothing, Float32}.
Example
julia> using PaddedViews
julia> a = collect(reshape(1:9, 3, 3))
3×3 Matrix{Int64}:
1 4 7
2 5 8
3 6 9
julia> PaddedView(-1, a, (4, 5))
4×5 PaddedView(-1, ::Matrix{Int64}, (Base.OneTo(4), Base.OneTo(5))) with eltype Int64:
1 4 7 -1 -1
2 5 8 -1 -1
3 6 9 -1 -1
-1 -1 -1 -1 -1
julia> PaddedView(-1, a, (1:5,1:5), (2:4,2:4))
5×5 PaddedView(-1, OffsetArray(::Matrix{Int64}, 2:4, 2:4), (1:5, 1:5)) with eltype Int64 with indices 1:5×1:5:
-1 -1 -1 -1 -1
-1 1 4 7 -1
-1 2 5 8 -1
-1 3 6 9 -1
-1 -1 -1 -1 -1
julia> PaddedView(-1, a, (0:4, 0:4))
5×5 PaddedView(-1, ::Matrix{Int64}, (0:4, 0:4)) with eltype Int64 with indices 0:4×0:4:
-1 -1 -1 -1 -1
-1 1 4 7 -1
-1 2 5 8 -1
-1 3 6 9 -1
-1 -1 -1 -1 -1
julia> PaddedView(-1, a, (5,5), (2,2))
5×5 PaddedView(-1, OffsetArray(::Matrix{Int64}, 2:4, 2:4), (Base.OneTo(5), Base.OneTo(5))) with eltype Int64:
-1 -1 -1 -1 -1
-1 1 4 7 -1
-1 2 5 8 -1
-1 3 6 9 -1
-1 -1 -1 -1 -1
PaddedViews.paddedviews — Function
Aspad = paddedviews(fillvalue, A1, A2, ...; [dims])
Pad the arrays A1, A2, ..., to a common size or set of axes, chosen as the span of axes enclosing all of the input arrays.
The padding is applied in one direction along each dimension in dims. For example, in the two-dimensional case, values are filled toward the bottom-right part of the new array. Use sym_paddedviews if both directions need to be padded.
The axes of the original array A will be preserved in the padded result Ap, hence it holds that Ap[CartesianIndices(A)] == A.
Example:
julia> using PaddedViews
julia> a1 = reshape([1, 2, 3], 3, 1)
3×1 Matrix{Int64}:
1
2
3
julia> a2 = [4 5 6]
1×3 Matrix{Int64}:
4 5 6
julia> a1p, a2p = paddedviews(-1, a1, a2);
julia> a1p
3×3 PaddedView(-1, ::Matrix{Int64}, (Base.OneTo(3), Base.OneTo(3))) with eltype Int64:
1 -1 -1
2 -1 -1
3 -1 -1
julia> a2p
3×3 PaddedView(-1, ::Matrix{Int64}, (Base.OneTo(3), Base.OneTo(3))) with eltype Int64:
4 5 6
-1 -1 -1
-1 -1 -1
julia> a1p[CartesianIndices(a1)]
3×1 Matrix{Int64}:
1
2
3
The dims keyword allows padding only for specified dimensions.
julia> a1 = reshape(collect(1:9), 3, 3)
3×3 Matrix{Int64}:
1 4 7
2 5 8
3 6 9
julia> a2 = [4 5;6 7]
2×2 Matrix{Int64}:
4 5
6 7
julia> a1f, a2f = paddedviews(-1, a1, a2; dims=1);
julia> a2f
3×2 PaddedView(-1, ::Matrix{Int64}, (Base.OneTo(3), Base.OneTo(2))) with eltype Int64:
4 5
6 7
-1 -1
julia> a1f, a2f = paddedviews(-1, a1, a2; dims=(1,2));
julia> a2f
3×3 PaddedView(-1, ::Matrix{Int64}, (Base.OneTo(3), Base.OneTo(3))) with eltype Int64:
4 5 -1
6 7 -1
-1 -1 -1
PaddedViews.sym_paddedviews — Function
Aspad = sym_paddedviews(fillvalue, A1, A2, ...; [dims])
Pad the arrays A1, A2, ..., to a common size or set of axes, chosen as the span of axes enclosing all of the input arrays.
The padding is applied in both directions along each dimension in dims, which means the original array is located at the center of the padded result. Use paddedviews if only one direction needs to be padded.
The axes of the original array A will be preserved in the padded result Ap, hence it holds that Ap[CartesianIndices(A)] == A.
julia> using PaddedViews
julia> a1 = reshape([1, 2, 3], 3, 1)
3×1 Matrix{Int64}:
1
2
3
julia> a2 = [4 5 6]
1×3 Matrix{Int64}:
4 5 6
julia> a1p, a2p = sym_paddedviews(-1, a1, a2);
julia> a1p
3×3 PaddedView(-1, ::Matrix{Int64}, (1:3, 0:2)) with eltype Int64 with indices 1:3×0:2:
-1 1 -1
-1 2 -1
-1 3 -1
julia> a2p
3×3 PaddedView(-1, ::Matrix{Int64}, (0:2, 1:3)) with eltype Int64 with indices 0:2×1:3:
-1 -1 -1
4 5 6
-1 -1 -1
julia> a1p[CartesianIndices(a1)]
3×1 Matrix{Int64}:
1
2
3
The dims keyword allows padding only for specified dimensions.
julia> a1 = reshape(collect(1:9), 3, 3)
3×3 Matrix{Int64}:
 1  4  7
 2  5  8
 3  6  9
julia> a2 = reshape([5, 6], 2, 1)
2×1 Matrix{Int64}:
 5
 6
julia> a1f, a2f = sym_paddedviews(-1, a1, a2; dims=1);
julia> a2f
3×1 PaddedView(-1, ::Matrix{Int64}, (1:3, 1:1)) with eltype Int64 with indices 1:3×1:1:
  5
  6
 -1
julia> a1f, a2f = sym_paddedviews(-1, a1, a2; dims=(1,2));
julia> a2f
3×3 PaddedView(-1, ::Matrix{Int64}, (1:3, 0:2)) with eltype Int64 with indices 1:3×0:2:
 -1   5  -1
 -1   6  -1
 -1  -1  -1
MosaicViews.MosaicView — Type
MosaicView(A::AbstractArray)
Create a two-dimensional "view" of the three- or four-dimensional array A. The resulting MosaicView will display the data in A such that it emulates using vcat for all elements in the third dimension of A, and hcat for all elements in the fourth dimension of A.
For example, if size(A) is (2,3,4), then the resulting MosaicView will have the size (2*4,3), which is (8,3). Alternatively, if size(A) is (2,3,4,5), then the resulting size will be (2*4,3*5), which is (8,15).
Another way to think about this is that MosaicView creates a mosaic of all the individual matrices enumerated in the third (and optionally fourth) dimension of the given 3D or 4D array A. This can be especially useful for creating a single composite image from a set of equally sized images.
julia> using MosaicViews
julia> A = [(k+1)*l-1 for i in 1:2, j in 1:3, k in 1:2, l in 1:2]
2×3×2×2 Array{Int64,4}:
[:, :, 1, 1] =
1 1 1
1 1 1
[:, :, 2, 1] =
2 2 2
2 2 2
[:, :, 1, 2] =
3 3 3
3 3 3
[:, :, 2, 2] =
5 5 5
5 5 5
julia> MosaicView(A)
4×6 MosaicViews.MosaicView{Int64,4,Array{Int64,4}}:
1 1 1 3 3 3
1 1 1 3 3 3
2 2 2 5 5 5
2 2 2 5 5 5
MosaicViews.mosaicview — Function
mosaicview(A::AbstractArray;
           [fillvalue=<zero unit>], [npad=0],
           [nrow], [ncol], [rowmajor=false]) -> MosaicView
mosaicview(As::AbstractArray...; kwargs...)
mosaicview(As::Union{Tuple, AbstractVector}; kwargs...)
Create a two-dimensional "view" from array A.
The resulting MosaicView will display all the matrix slices of the first two dimensions of A arranged as a single large mosaic (in the form of a matrix).
Arguments
In contrast to using the constructor of MosaicView directly, the function mosaicview also allows for a couple of convenience keywords. A typical use case would be to create an image mosaic from a set of input images.
- The parameter fillvalue defines the value that should be used for empty space. This can be padding caused by npad, or empty mosaic tiles in case the number of matrix slices in A is smaller than nrow*ncol.
- The parameter npad defines the empty padding space between adjacent mosaic tiles. This can be especially useful if the individual tiles (i.e. matrix slices in A) are images that should be visually separated by some grid lines.
- The parameters nrow and ncol can be used to choose the number of tiles in the row and/or column direction the mosaic should be arranged in. Note that it suffices to specify one of the two parameters, as the other one can be inferred accordingly. The default in case neither is specified is nrow = size(A,3).
- If rowmajor is set to true, then the slices will be arranged left-to-right-top-to-bottom, instead of top-to-bottom-left-to-right (default). The layout only differs in non-trivial cases, i.e., when nrow != 1 and ncol != 1.
This function is not type stable and should only be used if performance is not a priority. To achieve optimized performance, you need to manually construct a MosaicView.
Examples
The simplest usage is to cat two arrays of the same dimension.
julia> A1 = fill(1, 3, 1)
3×1 Array{Int64,2}:
1
1
1
julia> A2 = fill(2, 1, 3)
1×3 Array{Int64,2}:
2 2 2
julia> mosaicview(A1, A2)
6×3 MosaicView{Int64,4, ...}:
0 1 0
0 1 0
0 1 0
0 0 0
2 2 2
0 0 0
julia> mosaicview(A1, A2; center=false)
6×3 MosaicView{Int64,4, ...}:
1 0 0
1 0 0
1 0 0
2 2 2
0 0 0
0 0 0
Other keyword arguments can be useful to get nice-looking results.
julia> using MosaicViews
julia> A = [k for i in 1:2, j in 1:3, k in 1:5]
2×3×5 Array{Int64,3}:
[:, :, 1] =
1 1 1
1 1 1
[:, :, 2] =
2 2 2
2 2 2
[:, :, 3] =
3 3 3
3 3 3
[:, :, 4] =
4 4 4
4 4 4
[:, :, 5] =
5 5 5
5 5 5
julia> mosaicview(A, ncol=2)
6×6 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 4 4 4
1 1 1 4 4 4
2 2 2 5 5 5
2 2 2 5 5 5
3 3 3 0 0 0
3 3 3 0 0 0
julia> mosaicview(A, nrow=2)
4×9 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 3 3 3 5 5 5
1 1 1 3 3 3 5 5 5
2 2 2 4 4 4 0 0 0
2 2 2 4 4 4 0 0 0
julia> mosaicview(A, nrow=2, rowmajor=true)
4×9 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 2 2 2 3 3 3
1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 0 0 0
4 4 4 5 5 5 0 0 0
julia> mosaicview(A, nrow=2, npad=1, rowmajor=true)
5×11 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 0 2 2 2 0 3 3 3
1 1 1 0 2 2 2 0 3 3 3
0 0 0 0 0 0 0 0 0 0 0
4 4 4 0 5 5 5 0 0 0 0
4 4 4 0 5 5 5 0 0 0 0
julia> mosaicview(A, fillvalue=-1, nrow=2, npad=1, rowmajor=true)
5×11 MosaicViews.MosaicView{Int64,4,...}:
1 1 1 -1 2 2 2 -1 3 3 3
1 1 1 -1 2 2 2 -1 3 3 3
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
4 4 4 -1 5 5 5 -1 -1 -1 -1
4 4 4 -1 5 5 5 -1 -1 -1 -1
ImageAxes.StreamingContainer — Type
A = StreamingContainer{T}(f!, parent, streamingaxes::Axis...)
An array-like object possessing one or more axes for which changing "slices" may be expensive or subject to restrictions. A canonical example would be an AVI stream, where addressing pixels within the same frame is fast but jumping between frames might be slow.
Here's a simple example of dividing by the mean of each slice of an image before returning values.
A = AxisArrays.AxisArray(reshape(1:36, 3, 3, 4))
function f!(buffer, slice)
meanslice = mean(slice)
buffer .= slice./meanslice
end
B = StreamingContainer{Float64}(f!, A, AxisArrays.axes(A)[3])
julia> A[:,:,1]
3×3 AxisArray{Int64,2,Array{Int64,2},Tuple{Axis{:row,Base.OneTo{Int64}},Axis{:col,Base.OneTo{Int64}}}}:
1 4 7
2 5 8
3 6 9
julia> B[:,:,1]
3×3 Array{Float64,2}:
0.2 0.8 1.4
0.4 1.0 1.6
0.6 1.2 1.8
The user-provided f! function should take arguments:
f!(buffer, slice)
where buffer will be an empty array that can hold a slice of your series, and slice will hold the current input slice.
It's worth noting that StreamingContainer is not a subtype of AbstractArray, but that much of the array interface (eltype, ndims, axes, size, getindex, and IndexStyle) is supported. A StreamingContainer A can be built from an AxisArray, but it may also be constructed from other "parent" objects, even non-arrays, as long as they support the same functions. In either case, the parent should also support the standard AxisArray functions axes, axisnames, axisvalues, and axisdim; this support will be extended to the StreamingContainer.
Additionally, a StreamingContainer A supports
getindex!(dest, A, axt::Axis{:time}, ...)
to obtain slices along the streamed axes (here it is assumed that :time is a streamed axis of A). You can implement this directly (dispatching on the parameters of A), or (if the parent is an AbstractArray) rely on the fallback
A.getindex!(dest, view(parent, axs...))
where A.getindex! = f! as passed as an argument at construction. dest should have dimensionality ndims(parent)-length(streamingaxes).
Optionally, define StreamIndexStyle(typeof(parent), typeof(f!)).
Images with defined geometry and axis meaning can be constructed using the AxisArrays package:
using AxisArrays
img = AxisArray(A, (:y, :x, :time), (0.25μm, 0.25μm, 0.125s)) # see Unitful.jl for units
Custom metadata can be added as follows:
img = ImageMeta(A, date=now(), patientID=12345)
Any of these operations may be composed together, e.g., if you have an m×n×3 UInt8 array, you can put it in canonical RGB format and add metadata:
img = ImageMeta(colorview(RGB, normedview(PermutedDimsArray(A, (3,1,2)))), sample="control")
Traits
These functions are the preferred way to access certain types of "internal" data about an image. They can sometimes be useful in allowing you to write generic code.
ImageCore.pixelspacing — Function
pixelspacing(img) -> (sx, sy, ...)
Return a tuple representing the separation between adjacent pixels along each axis of the image. Defaults to (1,1,...). Use ImagesAxes for images with anisotropic spacing or to encode the spacing using physical units.
ImageCore.spacedirections — Function
spacedirections(img) -> (axis1, axis2, ...)
Return a tuple-of-tuples, each axis[i] representing the displacement vector between adjacent pixels along spatial axis i of the image array, relative to some external coordinate system ("physical coordinates").
By default this is computed from pixelspacing, but you can set this manually using ImagesMeta.
spacedirections(img)
Using ImageMetadata, you can set this property manually. For example, you could indicate that a photograph was taken with the camera tilted 30 degrees relative to vertical using
img["spacedirections"] = ((0.866025,-0.5),(0.5,0.866025))
If not specified, it will be computed from pixelspacing(img), placing the spacing along the "diagonal". If desired, you can set this property in terms of physical units, and each axis can have distinct units.
ImageCore.sdims — Function
sdims(img)
Return the number of spatial dimensions in the image. Defaults to the same as ndims, but with ImagesAxes you can specify that some axes correspond to other quantities (e.g., time) and thus are not counted by sdims.
ImageCore.coords_spatial — Function
coords_spatial(img)
Return a tuple listing the spatial dimensions of img.
Note that a better strategy may be to use ImagesAxes and take slices along the time axis.
ImageCore.size_spatial — Function
size_spatial(img)
Return a tuple listing the sizes of the spatial dimensions of the image. Defaults to the same as size, but using ImagesAxes you can mark some axes as being non-spatial.
ImageCore.indices_spatial — Function
indices_spatial(img)
Return a tuple with the indices of the spatial dimensions of the image. Defaults to the same as indices, but using ImagesAxes you can mark some axes as being non-spatial.
ImageCore.nimages — Function
nimages(img)
Return the number of time-points in the image array. Defaults to 1. Use ImagesAxes if you want to use an explicit time dimension.
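A minimal sketch of these traits on an array with an explicit time axis (the sizes are arbitrary):
using ImageAxes, AxisArrays
img = AxisArray(rand(4, 5, 3), :y, :x, :time)
sdims(img)            # 2
coords_spatial(img)   # (1, 2)
size_spatial(img)     # (4, 5)
nimages(img)          # 3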
ImageAxes.timeaxis — Function
timeaxis(A)
Return the time axis, if present, of the array A, and nothing otherwise.
ImageAxes.istimeaxis — Function
istimeaxis(ax)
Test whether the axis ax corresponds to time.
ImageAxes.timedim — Function
timedim(img) -> d::Int
Return the dimension of the array used for encoding time, or 0 if not using an axis for this purpose.
Note: if you want to recover information about the time axis, it is generally better to use timeaxis.
ImageCore.assert_timedim_last — Function
assert_timedim_last(img)
Throw an error if the image has a time dimension that is not the last dimension.
ImageAxes.StreamIndexStyle — Type
style = StreamIndexStyle(A)
A trait that indicates the degree of support for indexing the streaming axes of A. Choices are IndexAny() and IndexIncremental() (for arrays that only permit advancing the time axis, e.g., a video stream from a webcam). The default value is IndexAny().
This should be specialized for the type rather than the instance. For a StreamingContainer S, you can define this trait via
StreamIndexStyle(::Type{P}, ::typeof(f!)) = IndexIncremental()
where P = typeof(parent(S)).
ImageAxes.IndexAny — Type
IndexAny()
Indicates that an axis supports full random-access indexing.
ImageAxes.IndexIncremental — Type
IndexIncremental()
Indicates that an axis supports only incremental indexing, i.e., from i to i+1. This is commonly used for the temporal axis with media streams.
Element transformation and intensity scaling
ImageCore.clamp01 — Function
clamp01(x) -> y
Produce a value y that lies between 0 and 1, and is equal to x when x is already in this range. Equivalent to clamp(x, 0, 1) for numeric values. For colors, this function is applied to each color channel separately.
See also: clamp01!, clamp01nan.
ImageCore.clamp01! — Function
clamp01!(array::AbstractArray)
Restrict values in array to [0, 1], in-place. See also clamp01.
ImageCore.clamp01nan — Function
clamp01nan(x) -> y
Similar to clamp01, except that any NaN values are changed to 0.
See also: clamp01nan!, clamp01.
ImageCore.clamp01nan! — Function
clamp01nan!(array::AbstractArray)
Similar to clamp01!, except that any NaN values are changed to 0.
See also: clamp01!, clamp01nan.
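A minimal sketch of the out-of-place variants:
using ImageCore
v = [-0.2, 0.5, 1.4, NaN]
clamp01.(v)      # [0.0, 0.5, 1.0, NaN]
clamp01nan.(v)   # [0.0, 0.5, 1.0, 0.0]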
ImageCore.scaleminmax — Function
scaleminmax(min, max) -> f
scaleminmax(T, min, max) -> f
Return a function f which maps values less than or equal to min to 0, values greater than or equal to max to 1, and uses a linear scale in between. min and max should be real values.
Optionally specify the return type T. If T is a colorant (e.g., RGB), then scaling is applied to each color channel.
Examples
Example 1
julia> f = scaleminmax(-10, 10)
(::#9) (generic function with 1 method)
julia> f(10)
1.0
julia> f(-10)
0.0
julia> f(5)
0.75
Example 2
julia> c = RGB(255.0,128.0,0.0)
RGB{Float64}(255.0,128.0,0.0)
julia> f = scaleminmax(RGB, 0, 255)
(::#13) (generic function with 1 method)
julia> f(c)
RGB{Float64}(1.0,0.5019607843137255,0.0)
See also: takemap
.
ImageCore.scalesigned — Function
scalesigned(maxabs) -> f
Return a function f which scales values in the range [-maxabs, maxabs] (clamping values that lie outside this range) to the range [-1, 1].
See also: colorsigned.
scalesigned(min, center, max) -> f
Return a function f which scales values in the range [min, center] to [-1,0] and values in [center, max] to [0,1]. Values smaller than min or larger than max get clamped to min or max, respectively.
See also: colorsigned.
ImageCore.colorsigned — Function
colorsigned()
colorsigned(colorneg, colorpos) -> f
colorsigned(colorneg, colorcenter, colorpos) -> f
Define a function that maps negative values (in the range [-1,0]) to the linear colormap between colorneg and colorcenter, and positive values (in the range [0,1]) to the linear colormap between colorcenter and colorpos.
The default colors are:
- colorcenter: white
- colorneg: green1
- colorpos: magenta
See also: scalesigned.
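A minimal sketch combining scalesigned and colorsigned (the range ±10 is arbitrary):
using ImageCore
sc  = scalesigned(10)   # maps [-10, 10] to [-1, 1]
col = colorsigned()     # maps [-1, 1] to the green1–white–magenta colormap
col(sc(-5.0))           # the color corresponding to the value -5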
ImageCore.takemap — Function
takemap(f, A) -> fnew
takemap(f, T, A) -> fnew
Given a value-mapping function f and an array A, return a "concrete" mapping function fnew. When applied to elements of A, fnew should return valid values for storage or display, for example in the range from 0 to 1 (for grayscale) or valid colorants. fnew may be adapted to the actual values present in A, and may not produce valid values for any inputs not in A.
Optionally one can specify the output type T that fnew should produce.
Example:
julia> A = [0, 1, 1000];
julia> f = takemap(scaleminmax, A)
(::#7) (generic function with 1 method)
julia> f.(A)
3-element Array{Float64,1}:
0.0
0.001
1.0
Storage-type transformation
ImageCore.float32 — Function
float32.(img) converts the raw storage type of img to Float32, without changing the color space.
ImageCore.float64 — Function
float64.(img) converts the raw storage type of img to Float64, without changing the color space.
ImageCore.n0f8 — Function
n0f8.(img) converts the raw storage type of img to N0f8, without changing the color space.
ImageCore.n6f10 — Function
n6f10.(img) converts the raw storage type of img to N6f10, without changing the color space.
ImageCore.n4f12 — Function
n4f12.(img) converts the raw storage type of img to N4f12, without changing the color space.
ImageCore.n2f14 — Function
n2f14.(img) converts the raw storage type of img to N2f14, without changing the color space.
ImageCore.n0f16 — Function
n0f16.(img) converts the raw storage type of img to N0f16, without changing the color space.
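For example, a minimal sketch:
using ImageCore
img  = rand(RGB{N0f8}, 4, 4)
imgf = float32.(img)    # eltype RGB{Float32}; the colors themselves are unchanged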
Color channels
You can extract the numeric value of particular color channels:
ColorTypes.gray — Function
gray(c) returns the gray component of a grayscale opaque or transparent color.
ColorTypes.red — Function
red(c) returns the red component of an AbstractRGB opaque or transparent color.
ColorTypes.green — Function
green(c) returns the green component of an AbstractRGB opaque or transparent color.
ColorTypes.blue — Function
blue(c) returns the blue component of an AbstractRGB opaque or transparent color.
ColorTypes.alpha — Function
alpha(p) extracts the alpha component of a color. For a color without an alpha channel, it will always return 1.
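For example, a minimal sketch:
using ColorTypes
c = RGBA(0.2, 0.4, 0.6, 0.8)
red(c), green(c), blue(c), alpha(c)   # (0.2, 0.4, 0.6, 0.8)
gray(Gray(0.5))                       # 0.5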
You can also perform operations on channels:
ColorTypes.mapc — Function
mapc(f, rgb) -> rgbf
mapc(f, rgb1, rgb2) -> rgbf
mapc applies the function f to each color channel of the input color(s), returning an output color in the same colorspace.
Examples:
julia> mapc(x->clamp(x,0,1), RGB(-0.2,0.3,1.2))
RGB{Float64}(0.0,0.3,1.0)
julia> mapc(max, RGB(0.1,0.8,0.3), RGB(0.5,0.5,0.5))
RGB{Float64}(0.5,0.8,0.5)
julia> mapc(+, RGB(0.1,0.8,0.3), RGB(0.5,0.5,0.5))
RGB{Float64}(0.6,1.3,0.8)
ColorTypes.reducec — Function
reducec(op, v0, c)
Reduce across color channels of c with the binary operator op. v0 is the neutral element used to initiate the reduction. For grayscale,
reducec(op, v0, c::Gray) = op(v0, comp1(c))
whereas for RGB
reducec(op, v0, c::RGB) = op(comp3(c), op(comp2(c), op(v0, comp1(c))))
If c has an alpha channel, it is always the last one to be folded into the reduction.
ColorTypes.mapreducec — Function
mapreducec(f, op, v0, c)
Reduce across color channels of c with the binary operator op, first applying f to each channel. v0 is the neutral element used to initiate the reduction. For grayscale,
mapreducec(f, op, v0, c::Gray) = op(v0, f(comp1(c)))
whereas for RGB
mapreducec(f, op, v0, c::RGB) = op(f(comp3(c)), op(f(comp2(c)), op(v0, f(comp1(c)))))
If c has an alpha channel, it is always the last one to be folded into the reduction.
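For example, a minimal sketch:
using ColorTypes
reducec(+, 0.0, RGB(0.2, 0.3, 0.4))   # ≈ 0.9, the sum of the three channels
mapreducec(abs2, +, 0.0, Gray(0.5))   # 0.25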
Color conversion
imgg = Gray.(img)
calculates a grayscale representation of a color image using the Rec 601 luma.
imghsv = HSV.(img)
converts to an HSV representation of the color information.
The ColorTypes package has a rich set of traits that allow you to perform generic operations on color types; see its README for more information.
Image algorithms
Linear filtering
ImageFiltering.imfilter — Function
imfilter([T], img, kernel, [border="replicate"], [alg]) --> imgfilt
imfilter([r], img, kernel, [border="replicate"], [alg]) --> imgfilt
imfilter(r, T, img, kernel, [border="replicate"], [alg]) --> imgfilt
Filter a one-, two- or multidimensional array img with a kernel by computing their correlation.
Details
The term filtering emerges in the context of a Fourier transformation of an image, which maps an image from its canonical spatial domain to its concomitant frequency domain. Manipulating an image in the frequency domain amounts to retaining or discarding particular frequency components—a process analogous to sifting or filtering [1]. Because the Fourier transform establishes a link between the spatial and frequency representation of an image, one can interpret various image manipulations in the spatial domain as filtering operations which accept or reject specific frequencies.
The phrase spatial filtering is often used to emphasise that an operation is, at least conceptually, devised in the context of the spatial domain of an image. One further distinguishes between linear and non-linear spatial filtering. A filter is called linear if the operation performed on the pixels is linear, and is labeled non-linear otherwise.
An image filter can be represented by a function
\[ w: \{s\in \mathbb{Z} \mid -k_1 \le s \le k_1 \} \times \{t \in \mathbb{Z} \mid -k_2 \le t \le k_2 \} \rightarrow \mathbb{R},\]
where $k_i \in \mathbb{N}$ (i = 1,2). It is common to define $k_1 = 2a+1$ and $k_2 = 2b + 1$, where $a$ and $b$ are integers, which ensures that the filter dimensions are of odd size. Typically, $k_1$ equals $k_2$ and so, dropping the subscripts, one speaks of a $k \times k$ filter. Since the domain of the filter represents a grid of spatial coordinates, the filter is often called a mask and is visualized as a grid. For example, a $3 \times 3$ mask can be portrayed as follows:
\[\scriptsize \begin{matrix} \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(-1,-1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(-1,0) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(-1,1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } \\ \\ \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(0,-1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(0,0) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(0,1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } \\ \\ \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(1,-1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(1,0) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(1,1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } \end{matrix}.\]
The values of $w(s,t)$ are referred to as filter coefficients.
Discrete convolution versus correlation
There are two fundamental and closely related operations that one regularly performs on an image with a filter. The operations are called discrete correlation and convolution.
The correlation operation, denoted by the symbol $\star$, is given in two dimensions by the expression
\[\begin{aligned} g(x,y) = w(x,y) \star f(x,y) = \sum_{s = -a}^{a} \sum_{t=-b}^{b} w(s,t) f(x+s, y+t), \end{aligned}\]
whereas the comparable convolution operation, denoted by the symbol $\ast$, is given in two dimensions by
\[\begin{aligned} h(x,y) = w(x,y) \ast f(x,y) = \sum_{s = -a}^{a} \sum_{t=-b}^{b} w(s,t) f(x-s, y-t). \end{aligned}\]
Since a digital image is of finite extent, both of these operations are undefined at the borders of the image. In particular, for an image of size $M \times N$, the function $f(x \pm s, y \pm t)$ is only defined for $1 \le x \pm s \le N$ and $1 \le y \pm t \le M$. In practice one addresses this problem by artificially expanding the domain of the image. For example, one can pad the image with zeros. Other padding strategies are possible, and they are discussed in more detail in the Options section of this documentation.
One-dimensional illustration
The difference between correlation and convolution is best understood with recourse to a one-dimensional example adapted from [1]. Suppose that a filter $w:\{-1,0,1\}\rightarrow \mathbb{R}$ has coefficients
\[\begin{matrix} \boxed{1} & \boxed{2} & \boxed{3} \end{matrix}.\]
Consider a discrete unit impulse function $f: \{x \in \mathbb{Z} \mid 1 \le x \le 7 \} \rightarrow \{0,1\}$ that has been padded with zeros. The function can be visualised as an image
\[\boxed{ \begin{matrix} 0 & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{1} & \boxed{0} & \boxed{0} & \boxed{0} & 0 \end{matrix}}.\]
The correlation operation can be interpreted as sliding $w$ along the image and computing the sum of products at each location. For example,
\[\begin{matrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 2 & 3 & & & & & & \\ & 1 & 2 & 3 & & & & & \\ & & 1 & 2 & 3 & & & & \\ & & & 1 & 2 & 3 & & & \\ & & & & 1 & 2 & 3 & & \\ & & & & & 1 & 2 & 3 & \\ & & & & & & 1 & 2 & 3, \end{matrix}\]
yields the output $g: \{x \in \mathbb{Z} \mid 1 \le x \le 7 \} \rightarrow \mathbb{R}$, which when visualized as a digital image, is equal to
\[\boxed{ \begin{matrix} \boxed{0} & \boxed{0} & \boxed{3} & \boxed{2} & \boxed{1} & \boxed{0} & \boxed{0} \end{matrix}}.\]
The interpretation of the convolution operation is analogous to correlation, except that the filter $w$ has been rotated by 180 degrees. In particular,
\[\begin{matrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 3 & 2 & 1 & & & & & & \\ & 3 & 2 & 1 & & & & & \\ & & 3 & 2 & 1 & & & & \\ & & & 3 & 2 & 1 & & & \\ & & & & 3 & 2 & 1 & & \\ & & & & & 3 & 2 & 1 & \\ & & & & & & 3 & 2 & 1, \end{matrix}\]
yields the output $h: \{x \in \mathbb{Z} \mid 1 \le x \le 7 \} \rightarrow \mathbb{R}$ equal to
\[\boxed{ \begin{matrix} \boxed{0} & \boxed{0} & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{0} & \boxed{0} \end{matrix}}.\]
Instead of rotating the filter mask, one could instead rotate $f$ and still obtain the same convolution result. In fact, the conventional notation for convolution indicates that $f$ is flipped and not $w$. If $w$ is symmetric, then convolution and correlation give the same outcome.
Two-dimensional illustration
For a two-dimensional example, suppose the filter $w:\{-1, 0 ,1\} \times \{-1,0,1\} \rightarrow \mathbb{R}$ has coefficients
\[ \begin{matrix} \boxed{1} & \boxed{2} & \boxed{3} \\ \\ \boxed{4} & \boxed{5} & \boxed{6} \\ \\ \boxed{7} & \boxed{8} & \boxed{9} \end{matrix},\]
and consider a two-dimensional discrete unit impulse function
\[ f:\{x \in \mathbb{Z} \mid 1 \le x \le 7 \} \times \{y \in \mathbb{Z} \mid 1 \le y \le 7 \}\rightarrow \{ 0,1\}\]
that has been padded with zeros:
\[ \boxed{ \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \\ 0 & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & 0 \\ \\ 0 & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & 0 \\ \\ 0 & \boxed{0} & \boxed{0} & \boxed{1} & \boxed{0} & \boxed{0} & 0 \\ \\ 0 & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & 0 \\ \\ 0 & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & 0 \\ \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{matrix}}.\]
The correlation operation $w(x,y) \star f(x,y)$ yields the output
\[ \boxed{ \begin{matrix} \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} \\ \\ \boxed{0} & \boxed{9} & \boxed{8} & \boxed{7} & \boxed{0} \\ \\ \boxed{0} & \boxed{6} & \boxed{5} & \boxed{4} & \boxed{0} \\ \\ \boxed{0} & \boxed{3} & \boxed{2} & \boxed{1} & \boxed{0} \\ \\ \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} \end{matrix}},\]
whereas the convolution operation $w(x,y) \ast f(x,y)$ produces
\[ \boxed{ \begin{matrix} \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} \\ \\ \boxed{0} & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{0}\\ \\ \boxed{0} & \boxed{4} & \boxed{5} & \boxed{6} & \boxed{0} \\ \\ \boxed{0} & \boxed{7} & \boxed{8} & \boxed{9} & \boxed{0} \\ \\ \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} \end{matrix}}.\]
Discrete convolution and correlation as matrix multiplication
Discrete convolution and correlation operations can also be formulated as a matrix multiplication, where one of the inputs is converted to a Toeplitz matrix, and the other is represented as a column vector. For example, consider a function $f:\{x \in \mathbb{N} \mid 1 \le x \le M \} \rightarrow \mathbb{R}$ and a filter $w: \{s \in \mathbb{N} \mid -k_1 \le s \le k_1 \} \rightarrow \mathbb{R}$. Then the matrix multiplication
\[\begin{bmatrix} w(-k_1) & 0 & \ldots & 0 & 0 \\ \vdots & w(-k_1) & \ldots & \vdots & 0 \\ w(k_1) & \vdots & \ldots & 0 & \vdots \\ 0 & w(k_1) & \ldots & w(-k_1) & 0 \\ 0 & 0 & \ldots & \vdots & w(-k_1) \\ \vdots & \vdots & \ldots & w(k_1) & \vdots \\ 0 & 0 & 0 & 0 & w(k_1) \end{bmatrix} \begin{bmatrix} f(1) \\ f(2) \\ f(3) \\ \vdots \\ f(M) \end{bmatrix}\]
is equivalent to the convolution $w(s) \ast f(x)$ assuming that the border of $f(x)$ has been padded with zeros.
To represent multidimensional convolution as matrix multiplication one reshapes the multidimensional arrays into column vectors and proceeds in an analogous manner. Naturally, the result of the matrix multiplication will need to be reshaped into an appropriate multidimensional array.
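To make the correspondence concrete, here is a minimal one-dimensional sketch (zero padding assumed; the variable names are illustrative):
# Build the banded (Toeplitz) matrix T for convolution with w, assuming zero padding.
w = [1, 2, 3]                # filter coefficients, indexed by s in -k:k with k = 1
f = [0, 1, 0, 0]             # signal
M, k = length(f), 1
T = zeros(Int, M, M)
for x in 1:M, s in -k:k
    1 <= x - s <= M && (T[x, x - s] = w[s + k + 1])
end
T * f                        # [1, 2, 3, 0], the zero-padded convolution of w and f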
Options
The following subsections describe valid options for the function arguments in more detail.
Choices for r
You can dispatch to different implementations by passing in a resource r as defined by the ComputationalResources package. For example,
imfilter(ArrayFireLibs(), img, kernel)
would request that the computation be performed on the GPU using the ArrayFire libraries.
Choices for T
Optionally, you can control the element type of the output image by passing in a type T as the first argument.
Choices for img
You can specify a one-, two- or multidimensional array defining your image.
Choices for kernel
The kernel[0,0,..] parameter corresponds to the origin (zero displacement) of the kernel; you can use centered to place the origin at the array center, or use the OffsetArrays package to set kernel's indices manually. For example, to filter with a random centered 3x3 kernel, you could use either of the following:
kernel = centered(rand(3,3))
kernel = OffsetArray(rand(3,3), -1:1, -1:1)
The kernel parameter can be specified as an array or as a "factored kernel", a tuple (filt1, filt2, ...) of filters to apply along each axis of the image. In cases where you know your kernel is separable, this format can speed processing. Each of these should have the same dimensionality as the image itself, and be shaped in a manner that indicates the filtering axis, e.g., a 3x1 filter for filtering the first dimension and a 1x3 filter for filtering the second dimension. In two dimensions, any kernel passed as a single matrix is checked for separability; if you want to eliminate that check, pass the kernel as a single-element tuple, (kernel,).
Choices for border
At the image edge, border is used to specify the padding which will be used to extrapolate the image beyond its original bounds. As an indicative example of each option, the results of the padding are illustrated on an image consisting of a row of six pixels which are specified alphabetically: $\boxed{a \, b \, c \, d \, e \, f}$. We show the effects of padding only on the left and right border, but analogous consequences hold for the top and bottom border.
"replicate" (default)
The border pixels extend beyond the image boundaries.
\[\boxed{ \begin{array}{l|c|r} a\, a\, a\, a & a \, b \, c \, d \, e \, f & f \, f \, f \, f \end{array} }\]
See also: Pad, padarray, Inner, NA and NoPad
"circular"
The border pixels wrap around. For instance, indexing beyond the left border returns values starting from the right border.
\[\boxed{ \begin{array}{l|c|r} c\, d\, e\, f & a \, b \, c \, d \, e \, f & a \, b \, c \, d \end{array} }\]
See also: Pad, padarray, Inner, NA and NoPad
"reflect"
The border pixels reflect relative to a position between pixels. That is, the border pixel is omitted when mirroring.
\[\boxed{ \begin{array}{l|c|r} e\, d\, c\, b & a \, b \, c \, d \, e \, f & e \, d \, c \, b \end{array} }\]
See also: Pad, padarray, Inner, NA and NoPad
"symmetric"
The border pixels reflect relative to the edge itself.
\[\boxed{ \begin{array}{l|c|r} d\, c\, b\, a & a \, b \, c \, d \, e \, f & f \, e \, d \, c \end{array} }\]
See also: Pad, padarray, Inner, NA and NoPad
Fill(m)
The border pixels are filled with a specified value $m$.
\[\boxed{ \begin{array}{l|c|r} m\, m\, m\, m & a \, b \, c \, d \, e \, f & m \, m \, m \, m \end{array} }\]
See also: Pad, padarray, Inner, NA and NoPad
Inner()
Indicate that edges are to be discarded in filtering; only the interior of the result is returned.
See also: Pad, padarray, Inner, NA and NoPad
NA()
Choose filtering using "NA" (Not Available) boundary conditions. This is most appropriate for filters that have only positive weights, such as blurring filters.
See also: Pad, padarray, Inner, NA and NoPad
Choices for alg
The alg parameter allows you to choose the particular algorithm: FIR() (finite impulse response, aka traditional digital filtering) or FFT() (Fourier-based filtering). If no choice is specified, one will be chosen based on the size of the image and kernel in a way that strives to deliver good performance. Alternatively you can use a custom filter type, like KernelFactors.IIRGaussian.
Examples
The following subsections highlight some common use cases.
Convolution versus correlation
# Create a two-dimensional discrete unit impulse function.
f = fill(0,(9,9));
f[5,5] = 1;
# Specify a filter coefficient mask and set the center of the mask as the origin.
w = centered([1 2 3; 4 5 6 ; 7 8 9]);
#=
The default operation of `imfilter` is correlation. By reflecting `w` we
compute the convolution of `f` and `w`. `Fill(0,w)` indicates that we wish to
pad the border of `f` with zeros. The amount of padding is automatically
determined by considering the length of w.
=#
correlation = imfilter(f,w,Fill(0,w))
convolution = imfilter(f,reflect(w),Fill(0,w))
Miscellaneous border padding options
# Example function values f, and filter coefficients w.
f = reshape(1.0:81.0,9,9)
w = centered(reshape(1.0:9.0,3,3))
# You can designate the type of padding by specifying an appropriate string.
imfilter(f,w,"replicate")
imfilter(f,w,"circular")
imfilter(f,w,"symmetric")
imfilter(f,w,"reflect")
# Alternatively, you can explicitly use the Pad type to designate the padding style.
imfilter(f,w,Pad(:replicate))
imfilter(f,w,Pad(:circular))
imfilter(f,w,Pad(:symmetric))
imfilter(f,w,Pad(:reflect))
# If you want to pad with a specific value then use the Fill type.
imfilter(f,w,Fill(0,w))
imfilter(f,w,Fill(1,w))
imfilter(f,w,Fill(-1,w))
#=
Specify 'Inner()' if you want to retrieve the interior sub-array of f for which
the filtering operation is defined without padding.
=#
imfilter(f,w,Inner())
References
1. R. C. Gonzalez and R. E. Woods. Digital Image Processing (3rd Edition). Upper Saddle River, NJ, USA: Prentice-Hall, 2006.
See also: imfilter!, centered, padarray, Pad, Fill, Inner, KernelFactors.IIRGaussian.
ImageFiltering.imfilter! — Function
imfilter!(imgfilt, img, kernel, [border="replicate"], [alg])
imfilter!(r, imgfilt, img, kernel, border::Pad)
imfilter!(r, imgfilt, img, kernel, border::NoPad, [inds=axes(imgfilt)])
Filter an array img with kernel kernel by computing their correlation, storing the result in imgfilt.
The indices of imgfilt determine the region over which the filtered image is computed; you can use this fact to select just a specific region of interest, although be aware that the input img might still get padded. Alternatively, explicitly provide the indices inds of imgfilt that you want to calculate, and use NoPad boundary conditions. In such cases, you are responsible for supplying appropriate padding: img must be indexable for all of the locations needed for calculating the output. This syntax is best-supported for FIR filtering; in particular, IIR filtering can lead to results that are inconsistent with respect to filtering the entire array.
See also: imfilter.
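A minimal in-place sketch (the Gaussian kernel is just an illustration):
using ImageFiltering
img = rand(64, 64)
out = similar(img)
imfilter!(out, img, Kernel.gaussian(2))   # smoothed result written into out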
ImageFiltering.imgradients — Function
imgradients(img, kernelfun=KernelFactors.ando3, border="replicate") -> gimg1, gimg2, ...
Estimate the gradient of img in the direction of the first and second dimension at all points of the image, using a kernel specified by kernelfun.
Output
The gradient is returned as a tuple-of-arrays, one for each dimension of the input; gimg1 corresponds to the derivative with respect to the first dimension, gimg2 to the second, and so on.
Details
To appreciate the difference between various gradient estimation methods it is helpful to distinguish between: (1) a continuous scalar-valued analogue image $f_\textrm{A}(x_1,x_2)$, where $x_1,x_2 \in \mathbb{R}$, and (2) its discrete digital realization $f_\textrm{D}(x_1',x_2')$, where $x_1',x_2' \in \mathbb{N}$, $1 \le x_1' \le M$ and $1 \le x_2' \le N$.
Analogue image
The gradient of a continuous analogue image $f_{\textrm{A}}(x_1,x_2)$ at location $(x_1,x_2)$ is defined as the vector
\[\nabla \mathbf{f}_{\textrm{A}}(x_1,x_2) = \frac{\partial f_{\textrm{A}}(x_1,x_2)}{\partial x_1} \mathbf{e}_{1} + \frac{\partial f_{\textrm{A}}(x_1,x_2)}{\partial x_2} \mathbf{e}_{2},\]
where $\mathbf{e}_{d}$ $(d = 1,2)$ is the unit vector in the $x_d$-direction. The gradient points in the direction of maximum rate of change of $f_{\textrm{A}}$ at the coordinates $(x_1,x_2)$. The gradient can be used to compute the derivative of a function in an arbitrary direction. In particular, the derivative of $f_{\textrm{A}}$ in the direction of a unit vector $\mathbf{u}$ is given by $\nabla_{\mathbf{u}}f_\textrm{A}(x_1,x_2) = \nabla \mathbf{f}_{\textrm{A}}(x_1,x_2) \cdot \mathbf{u}$, where $\cdot$ denotes the dot product.
Digital image
In practice, we acquire a digital image $f_\textrm{D}(x_1',x_2')$ where the light intensity is known only at a discrete set of locations. This means that the required partial derivatives are undefined and need to be approximated using discrete difference formulae [1].
A straightforward way to approximate the partial derivatives is to use central-difference formulae
\[ \frac{\partial f_{\textrm{D}}(x_1',x_2')}{\partial x_1'} \approx \frac{f_{\textrm{D}}(x_1'+1,x_2') - f_{\textrm{D}}(x_1'-1,x_2') }{2}\]
and
\[ \frac{\partial f_{\textrm{D}}(x_1',x_2')}{\partial x_2'} \approx \frac{f_{\textrm{D}}(x_1',x_2'+1) - f_{\textrm{D}}(x_1',x_2'-1)}{2}.\]
However, the central-difference formulae are very sensitive to noise. When working with noisy image data, one can obtain a better approximation of the partial derivatives by using a suitable weighted combination of the neighboring image intensities. The weighted combination can be represented as a discrete convolution operation between the image and a kernel which characterizes the requisite weights. In particular, if $h_{x_d}$ ($d = 1,2)$ represents a $2r+1 \times 2r+1$ kernel, then
\[ \frac{\partial f_{\textrm{D}}(x_1',x_2')}{\partial x_d'} \approx \sum_{i = -r}^r \sum_{j = -r}^r f_\textrm{D}(x_1'-i,x_2'-j) h_{x_d}(i,j).\]
The kernel is frequently also called a mask or convolution matrix.
Weighting schemes and approximation error
The choice of weights determines the magnitude of the approximation error and whether the finite-difference scheme is isotropic. A finite-difference scheme is isotropic if the approximation error does not depend on the orientation of the coordinate system and anisotropic if the approximation error has a directional bias [2]. With a continuous analogue image the magnitude of the gradient would be invariant upon rotation of the coordinate system, but in practice one cannot obtain perfect isotropy with a finite set of discrete points. Hence a finite-difference scheme is typically considered isotropic if the leading error term in the approximation does not have preferred directions.
Most finite-difference schemes that are used in image processing are based on $3 \times 3$ kernels, and as noted by [7], many can also be parametrized by a single parameter $\alpha$ as follows:
\[\mathbf{H}_{x_{1}} = \frac{1}{4 + 2\alpha} \begin{bmatrix} -1 & -\alpha & -1 \\ 0 & 0 & 0 \\ 1 & \alpha & 1 \end{bmatrix} \quad \text{and} \quad \mathbf{H}_{x_{2}} = \frac{1}{2 + 4\alpha} \begin{bmatrix} -1 & 0 & 1 \\ -\alpha & 0 & \alpha \\ -1 & 0 & 1 \end{bmatrix},\]
where
\[\alpha = \begin{cases} 0, & \text{Simple Finite Difference}; \\ 1, & \text{Prewitt}; \\ 2, & \text{Sobel}; \\ 2.4351, & \text{Ando}; \\ \frac{10}{3}, & \text{Scharr}; \\ 4, & \text{Bickley}. \end{cases}\]
Separable kernel
A kernel is called separable if it can be expressed as the convolution of two one-dimensional filters. With a matrix representation of the kernel, separability means that the kernel matrix can be written as an outer product of two vectors. Separable kernels offer computational advantages since instead of performing a two-dimensional convolution one can perform a sequence of one-dimensional convolutions.
Options
You can specify your choice of the finite-difference scheme via the kernelfun parameter. You can also indicate how to deal with the pixels on the border of the image with the border parameter.
Choices for kernelfun
In general kernelfun can be any function which satisfies the following interface:
kernelfun(extended::NTuple{N,Bool}, d) -> kern_d,
where kern_d is the kernel for producing the derivative with respect to the $d$th dimension of an $N$-dimensional array. The parameter extended[i] is true if the image is of size > 1 along dimension $i$. The parameter kern_d may be provided as a dense or factored kernel, with factored representations recommended when the kernel is separable.
Some valid kernelfun options are described below.
KernelFactors.prewitt
With the prewitt option [3] the computation of the gradient is based on the kernels
\[\begin{aligned} \mathbf{H}_{x_1} & = \frac{1}{6} \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} & \mathbf{H}_{x_2} & = \frac{1}{6} \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \\ & = \frac{1}{6} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix} & & = \frac{1}{6} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}. \end{aligned}\]
See also: KernelFactors.prewitt and Kernel.prewitt
KernelFactors.sobel
The sobel option [4] designates the kernels
\[\begin{aligned} \mathbf{H}_{x_1} & = \frac{1}{8} \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} & \mathbf{H}_{x_2} & = \frac{1}{8} \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \\ & = \frac{1}{8} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 1 \end{bmatrix} & & = \frac{1}{8} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}. \end{aligned}\]
See also: KernelFactors.sobel and Kernel.sobel
KernelFactors.ando3
The ando3 option [5] specifies the kernels
\[\begin{aligned} \mathbf{H}_{x_1} & = \begin{bmatrix} -0.112737 & -0.274526 & -0.112737 \\ 0 & 0 & 0 \\ 0.112737 & 0.274526 & 0.112737 \end{bmatrix} & \mathbf{H}_{x_2} & = \begin{bmatrix} -0.112737 & 0 & 0.112737 \\ -0.274526 & 0 & 0.274526 \\ -0.112737 & 0 & 0.112737 \end{bmatrix} \\ & = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 0.112737 & 0.274526 & 0.112737 \end{bmatrix} & & = \begin{bmatrix} 0.112737 \\ 0.274526 \\ 0.112737 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}. \end{aligned}\]
See also: KernelFactors.ando3 and Kernel.ando3; KernelFactors.ando4 and Kernel.ando4; KernelFactors.ando5 and Kernel.ando5
KernelFactors.scharr
The scharr option [6] designates the kernels
\[\begin{aligned} \mathbf{H}_{x_{1}} & = \frac{1}{32} \begin{bmatrix} -3 & -10 & -3 \\ 0 & 0 & 0 \\ 3 & 10 & 3 \end{bmatrix} & \mathbf{H}_{x_{2}} & = \frac{1}{32} \begin{bmatrix} -3 & 0 & 3 \\ -10 & 0 & 10\\ -3 & 0 & 3 \end{bmatrix} \\ & = \frac{1}{32} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 3 & 10 & 3 \end{bmatrix} & & = \frac{1}{32} \begin{bmatrix} 3 \\ 10 \\ 3 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}. \end{aligned}\]
See also: KernelFactors.scharr and Kernel.scharr
KernelFactors.bickley
The bickley option [7,8] designates the kernels
\[\begin{aligned} \mathbf{H}_{x_1} & = \frac{1}{12} \begin{bmatrix} -1 & -4 & -1 \\ 0 & 0 & 0 \\ 1 & 4 & 1 \end{bmatrix} & \mathbf{H}_{x_2} & = \frac{1}{12} \begin{bmatrix} -1 & 0 & 1 \\ -4 & 0 & 4 \\ -1 & 0 & 1 \end{bmatrix} \\ & = \frac{1}{12} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 1 & 4 & 1 \end{bmatrix} & & = \frac{1}{12} \begin{bmatrix} 1 \\ 4 \\ 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}. \end{aligned}\]
See also: KernelFactors.bickley and Kernel.bickley
Choices for border
At the image edge, border is used to specify the padding which will be used to extrapolate the image beyond its original bounds. As an indicative example of each option, the results of the padding are illustrated on an image consisting of a row of six pixels which are specified alphabetically: $\boxed{a \, b \, c \, d \, e \, f}$. We show the effects of padding only on the left and right border, but analogous consequences hold for the top and bottom border.
"replicate"
The border pixels extend beyond the image boundaries.
\[\boxed{ \begin{array}{l|c|r} a\, a\, a\, a & a \, b \, c \, d \, e \, f & f \, f \, f \, f \end{array} }\]
See also: Pad, padarray, Inner and NoPad
"circular"
The border pixels wrap around. For instance, indexing beyond the left border returns values starting from the right border.
\[\boxed{ \begin{array}{l|c|r} c\, d\, e\, f & a \, b \, c \, d \, e \, f & a \, b \, c \, d \end{array} }\]
See also: Pad, padarray, Inner and NoPad
"symmetric"
The border pixels reflect relative to a position between pixels. That is, the border pixel is omitted when mirroring.
\[\boxed{ \begin{array}{l|c|r} e\, d\, c\, b & a \, b \, c \, d \, e \, f & e \, d \, c \, b \end{array} }\]
See also: Pad
, padarray
, Inner
and NoPad
"reflect"
The border pixels reflect relative to the edge itself.
\[\boxed{ \begin{array}{l|c|r} d\, c\, b\, a & a \, b \, c \, d \, e \, f & f \, e \, d \, c \end{array} }\]
See also: Pad
, padarray
, Inner
and NoPad
Example
This example compares the quality of the gradient estimation methods in terms of the accuracy with which the orientation of the gradient is estimated.
using Images
values = LinRange(-1,1,128);
w = 1.6*pi;
# Define a function of a sinusoidal grating, f(x,y) = sin( (w*x)^2 + (w*y)^2 ),
# together with its exact partial derivatives.
I = [sin( (w*x)^2 + (w*y)^2 ) for y in values, x in values];
Ix = [2*w*x*cos( (w*x)^2 + (w*y)^2 ) for y in values, x in values];
Iy = [2*w*y*cos( (w*x)^2 + (w*y)^2 ) for y in values, x in values];
# Determine the exact orientation of the gradients.
direction_true = atan.(Iy./Ix);
for kernelfunc in (KernelFactors.prewitt, KernelFactors.sobel,
KernelFactors.ando3, KernelFactors.scharr,
KernelFactors.bickley)
# Estimate the gradients and their orientations.
Gy, Gx = imgradients(I,kernelfunc, "replicate");
direction_estimated = atan.(Gy./Gx);
# Determine the mean absolute deviation between the estimated and true
# orientation. Ignore the values at the border since we expect them to be
# erroneous.
error = mean(abs.(direction_true[2:end-1,2:end-1] -
direction_estimated[2:end-1,2:end-1]));
error = round(error, digits=5);
println("Using $kernelfunc results in a mean absolute deviation of $error")
end
# output
Using ImageFiltering.KernelFactors.prewitt results in a mean absolute deviation of 0.01069
Using ImageFiltering.KernelFactors.sobel results in a mean absolute deviation of 0.00522
Using ImageFiltering.KernelFactors.ando3 results in a mean absolute deviation of 0.00365
Using ImageFiltering.KernelFactors.scharr results in a mean absolute deviation of 0.00126
Using ImageFiltering.KernelFactors.bickley results in a mean absolute deviation of 0.00038
References
- B. Jähne, Digital Image Processing (5th ed.). Springer Publishing Company, Incorporated, 2005. doi:10.1007/3-540-27563-0
- M. Patra and M. Karttunen, "Stencils with isotropic discretization error for differential operators," Numer. Methods Partial Differential Eq., vol. 22, pp. 936–953, 2006. doi:10.1002/num.20129
- J. M. Prewitt, "Object enhancement and extraction," Picture processing and Psychopictorics, vol. 10, no. 1, pp. 15–19, 1970.
- P.-E. Danielsson and O. Seger, "Generalized and separable sobel operators," in Machine Vision for Three-Dimensional Scenes, H. Freeman, Ed. Academic Press, 1990, pp. 347–379. doi:10.1016/b978-0-12-266722-0.50016-6
- S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
- H. Scharr and J. Weickert, "An anisotropic diffusion algorithm with optimized rotation invariance," Mustererkennung 2000, pp. 460–467, 2000. doi:10.1007/978-3-642-59802-9_58
- A. Belyaev, "Implicit image differentiation and filtering with applications to image sharpening," SIAM Journal on Imaging Sciences, vol. 6, no. 1, pp. 660–679, 2013. doi:10.1137/12087092x
- W. G. Bickley, "Finite difference formulae for the square lattice," The Quarterly Journal of Mechanics and Applied Mathematics, vol. 1, no. 1, pp. 35–42, 1948. doi:10.1093/qjmam/1.1.35
Kernel
ImageFiltering.Kernel.sobel
— Function diff1, diff2 = sobel()
Return $3 \times 3$ correlation kernels for two-dimensional gradient computation using the Sobel operator. The diff1
kernel computes the gradient along the y-axis (first dimension), and the diff2
kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = sobel(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d
using the Sobel operator. If extended[dim]
is false, diff
will have size 1 along that dimension.
Citation
P.-E. Danielsson and O. Seger, "Generalized and separable sobel operators," in Machine Vision for Three-Dimensional Scenes, H. Freeman, Ed. Academic Press, 1990, pp. 347–379. doi:10.1016/b978-0-12-266722-0.50016-6
See also: KernelFactors.sobel, Kernel.prewitt, Kernel.ando3, Kernel.scharr, Kernel.bickley and imgradients.
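As a brief usage sketch (the test image name is only illustrative), the returned kernels can be passed directly to imfilter:
using ImageFiltering, TestImages
img = testimage("cameraman");
ky, kx = Kernel.sobel();  # correlation kernels centered on (0, 0)
gy = imfilter(img, ky);   # gradient along dimension 1 (y)
gx = imfilter(img, kx);   # gradient along dimension 2 (x)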
ImageFiltering.Kernel.prewitt
— Function diff1, diff2 = prewitt()
Return $3 \times 3$ correlation kernels for two-dimensional gradient computation using the Prewitt operator. The diff1
kernel computes the gradient along the y-axis (first dimension), and the diff2
kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = prewitt(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d
using the Prewitt operator. If extended[dim]
is false, diff
will have size 1 along that dimension.
Citation
J. M. Prewitt, "Object enhancement and extraction," Picture processing and Psychopictorics, vol. 10, no. 1, pp. 15–19, 1970.
See also: KernelFactors.prewitt, Kernel.sobel, Kernel.ando3, Kernel.scharr, Kernel.bickley and ImageFiltering.imgradients.
ImageFiltering.Kernel.ando3
— Function diff1, diff2 = ando3()
Return $3 \times 3$ correlation kernels for two-dimensional gradient computation using Ando's "optimal" filters. The diff1
kernel computes the gradient along the y-axis (first dimension), and the diff2
kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = ando3(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d
using Ando's "optimal" filters of size 3. If extended[dim]
is false, diff
will have size 1 along that dimension.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: KernelFactors.ando3, Kernel.ando4, Kernel.ando5 and ImageFiltering.imgradients.
ImageFiltering.Kernel.ando4
— Function diff1, diff2 = ando4()
Return $4 \times 4$ correlation kernels for two-dimensional gradient computation using Ando's "optimal" filters. The diff1
kernel computes the gradient along the y-axis (first dimension), and the diff2
kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = ando4(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d
using Ando's "optimal" filters of size 4. If extended[dim]
is false, diff
will have size 1 along that dimension.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: KernelFactors.ando4, Kernel.ando3, Kernel.ando5 and ImageFiltering.imgradients.
ImageFiltering.Kernel.ando5
— Function diff1, diff2 = ando5()
Return $5 \times 5$ correlation kernels for two-dimensional gradient computation using Ando's "optimal" filters. The diff1
kernel computes the gradient along the y-axis (first dimension), and the diff2
kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = ando5(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d
using Ando's "optimal" filters of size 5. If extended[dim]
is false, diff
will have size 1 along that dimension.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: KernelFactors.ando5, Kernel.ando3, Kernel.ando4 and ImageFiltering.imgradients.
ImageFiltering.Kernel.gaussian
— Function gaussian((σ1, σ2, ...), [(l1, l2, ...)]) -> g
gaussian(σ) -> g
Construct a multidimensional gaussian filter, with standard deviation σd
along dimension d
. Optionally provide the kernel length l
, which must be a tuple of the same length.
If σ
is supplied as a single number, a symmetric 2d kernel is constructed.
See also: KernelFactors.gaussian.
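For example (a minimal sketch of the documented signatures):
using ImageFiltering
Kernel.gaussian(1)                   # symmetric 2d kernel with σ = 1
Kernel.gaussian((1.0, 2.0))          # σ = 1 along dimension 1, σ = 2 along dimension 2
Kernel.gaussian((1.0, 1.0), (5, 5))  # explicit 5×5 kernel; the lengths must be odd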
ImageFiltering.Kernel.DoG
— Function DoG((σp1, σp2, ...), (σm1, σm2, ...), [l1, l2, ...]) -> k
DoG((σ1, σ2, ...)) -> k
DoG(σ::Real) -> k
Construct a multidimensional difference-of-gaussian kernel k
, equal to gaussian(σp, l)-gaussian(σm, l)
. When only a single σ
is supplied, the default is to choose σp = σ, σm = √2 σ
. Optionally provide the kernel length l
; the default is to extend by two max(σp,σm)
in each direction from the center. l
must be odd.
If σ
is provided as a single number, a symmetric 2d DoG kernel is returned.
See also: KernelFactors.IIRGaussian.
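For example (a minimal sketch following the documented signature; the sizes are illustrative):
using ImageFiltering
Kernel.DoG(1.0)                             # σp = 1, σm = √2, default size
Kernel.DoG((1.0, 1.0), (2.0, 2.0), (9, 9))  # explicit inner/outer σ and a 9×9 kernel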
ImageFiltering.Kernel.LoG
— Function LoG((σ1, σ2, ...)) -> k
LoG(σ) -> k
Construct a Laplacian-of-Gaussian kernel k
. σd
is the gaussian width along dimension d
. If σ
is supplied as a single number, a symmetric 2d kernel is returned.
See also: KernelFactors.IIRGaussian and Kernel.Laplacian.
ImageFiltering.Kernel.gabor
— Function gabor(size_x, size_y, σ, θ, λ, γ, ψ) -> (k_real, k_complex)
Returns a two-dimensional complex Gabor kernel as a tuple (k_real, k_complex), where:
- size_x, size_y denote the size of the kernel
- σ denotes the standard deviation of the Gaussian envelope
- θ represents the orientation of the normal to the parallel stripes of a Gabor function
- λ represents the wavelength of the sinusoidal factor
- γ is the spatial aspect ratio, and specifies the ellipticity of the support of the Gabor function
- ψ is the phase offset
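A usage sketch (the parameter values are chosen only for illustration, and img is assumed to be an already-loaded grayscale image):
using ImageFiltering
kern_real, kern_imag = Kernel.gabor(21, 21, 4.0, pi/4, 8.0, 0.5, 0);
response = imfilter(img, centered(kern_real));  # centered() places the origin at the kernel center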
Citation
N. Petkov and P. Kruizinga, "Computational models of visual neurons specialised in the detection of periodic and aperiodic oriented visual stimuli: bar and grating cells," Biological Cybernetics, vol. 76, no. 2, pp. 83–96, Feb. 1997. doi:10.1007/s004220050323
ImageFiltering.Kernel.Laplacian
— Type Laplacian((true,true,false,...))
Laplacian(dims, N)
Laplacian()
Laplacian kernel in N
dimensions, taking derivatives along the directions marked as true
in the supplied tuple. Alternatively, one can pass dims
, a listing of the dimensions for differentiation. (However, this variant is not inferrable.)
Laplacian()
is the 2d laplacian, equivalent to Laplacian((true,true))
.
The kernel is represented as an opaque type, but you can use convert(AbstractArray, L)
to convert it into array format.
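For example, to inspect the coefficients:
using ImageFiltering
L = Kernel.Laplacian();    # 2d Laplacian, same as Laplacian((true, true))
convert(AbstractArray, L)  # materializes the kernel as a centered array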
ImageFiltering.Kernel.bickley
— Function diff1, diff2 = bickley()
Return $3 \times 3$ correlation kernels for two-dimensional gradient computation using the Bickley operator. The diff1
kernel computes the gradient along the y-axis (first dimension), and the diff2
kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = bickley(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d
using the Bickley operator. If extended[dim]
is false, diff
will have size 1 along that dimension.
Citation
W. G. Bickley, "Finite difference formulae for the square lattice," The Quarterly Journal of Mechanics and Applied Mathematics, vol. 1, no. 1, pp. 35–42, 1948. doi:10.1093/qjmam/1.1.35
See also: KernelFactors.bickley, Kernel.prewitt, Kernel.ando3, Kernel.scharr and ImageFiltering.imgradients.
ImageFiltering.Kernel.scharr
— Function diff1, diff2 = scharr()
Return $3 \times 3$ correlation kernels for two-dimensional gradient computation using the Scharr operator. The diff1
kernel computes the gradient along the y-axis (first dimension), and the diff2
kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = scharr(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d
using the Scharr operator. If extended[dim]
is false, diff
will have size 1 along that dimension.
Citation
H. Scharr and J. Weickert, "An anisotropic diffusion algorithm with optimized rotation invariance," Mustererkennung 2000, pp. 460–467, 2000. doi:10.1007/978-3-642-59802-9_58
See also: KernelFactors.scharr, Kernel.prewitt, Kernel.ando3, Kernel.bickley and ImageFiltering.imgradients.
KernelFactors
ImageFiltering.KernelFactors.sobel
— Function kern1, kern2 = sobel()
Return factored Sobel filters for dimensions 1 and 2 of a two-dimensional image. Each is a 2-tuple of one-dimensional filters.
Citation
P.-E. Danielsson and O. Seger, "Generalized and separable sobel operators," in Machine Vision for Three-Dimensional Scenes, H. Freeman, Ed. Academic Press, 1990, pp. 347–379. doi:10.1016/b978-0-12-266722-0.50016-6
See also: Kernel.sobel and ImageFiltering.imgradients.
kern = sobel(extended::NTuple{N,Bool}, d)
Return a factored Sobel filter for computing the gradient in N
dimensions along axis d
. If extended[dim]
is false, kern
will have size 1 along that dimension.
See also: Kernel.sobel and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.prewitt
— Function kern1, kern2 = prewitt()
Return factored Prewitt filters for dimensions 1 and 2 of your image. Each is a 2-tuple of one-dimensional filters.
Citation
J. M. Prewitt, "Object enhancement and extraction," Picture processing and Psychopictorics, vol. 10, no. 1, pp. 15–19, 1970.
See also: Kernel.prewitt and ImageFiltering.imgradients.
kern = prewitt(extended::NTuple{N,Bool}, d)
Return a factored Prewitt filter for computing the gradient in N
dimensions along axis d
. If extended[dim]
is false, kern
will have size 1 along that dimension.
See also: Kernel.prewitt and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.ando3
— Function kern1, kern2 = ando3()
Return a factored form of Ando's "optimal" $3 \times 3$ gradient filters for dimensions 1 and 2 of your image.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: Kernel.ando3, KernelFactors.ando4, KernelFactors.ando5 and ImageFiltering.imgradients.
kern = ando3(extended::NTuple{N,Bool}, d)
Return a factored Ando filter (size 3) for computing the gradient in N
dimensions along axis d
. If extended[dim]
is false, kern
will have size 1 along that dimension.
See also: KernelFactors.ando4, KernelFactors.ando5 and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.ando4
— Function kern1, kern2 = ando4()
Return separable approximations of Ando's "optimal" $4 \times 4$ filters for dimensions 1 and 2 of your image.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: Kernel.ando4 and ImageFiltering.imgradients.
kern = ando4(extended::NTuple{N,Bool}, d)
Return a factored Ando filter (size 4) for computing the gradient in N
dimensions along axis d
. If extended[dim]
is false, kern
will have size 1 along that dimension.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: Kernel.ando4 and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.ando5
— Function kern1, kern2 = ando5()
Return separable approximations of Ando's "optimal" $5 \times 5$ gradient filters for dimensions 1 and 2 of your image.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: Kernel.ando5 and ImageFiltering.imgradients.
kern = ando5(extended::NTuple{N,Bool}, d)
Return a factored Ando filter (size 5) for computing the gradient in N
dimensions along axis d
. If extended[dim]
is false, kern
will have size 1 along that dimension.
ImageFiltering.KernelFactors.gaussian
— Function gaussian(σ::Real, [l]) -> g
Construct a 1d gaussian kernel g
with standard deviation σ
, optionally providing the kernel length l
. The default is to extend by two σ
in each direction from the center. l
must be odd.
gaussian((σ1, σ2, ...), [l]) -> (g1, g2, ...)
Construct a multidimensional gaussian filter as a product of single-dimension factors, with standard deviation σd
along dimension d
. Optionally provide the kernel length l
, which must be a tuple of the same length.
ImageFiltering.KernelFactors.IIRGaussian
— Function IIRGaussian([T], σ; emit_warning::Bool=true)
Construct an infinite impulse response (IIR) approximation to a Gaussian of standard deviation σ
. σ
may either be a single real number or a tuple of numbers; in the latter case, a tuple of such filters will be created, each for filtering a different dimension of an array.
Optionally specify the type T
for the filter coefficients; if not supplied, it will match σ
(unless σ
is not floating-point, in which case Float64
will be chosen).
Citation
I. T. Young, L. J. van Vliet, and M. van Ginkel, "Recursive Gabor Filtering". IEEE Trans. Sig. Proc., 50: 2798-2805 (2002).
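A usage sketch (the test image name is only illustrative):
using ImageFiltering, TestImages
img = testimage("cameraman");
smoothed = imfilter(img, KernelFactors.IIRGaussian((3.0, 3.0)));  # approximates a Gaussian blur with σ = 3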
ImageFiltering.KernelFactors.TriggsSdika
— Type TriggsSdika(a, b, scale, M)
Defines a kernel for one-dimensional infinite impulse response (IIR) filtering. a
is a "forward" filter, b
a "backward" filter, M
is a matrix for matching boundary conditions at the right edge, and scale
is a constant scaling applied to each element at the conclusion of filtering.
Citation
B. Triggs and M. Sdika, "Boundary conditions for Young-van Vliet recursive filtering". IEEE Trans. on Sig. Proc. 54: 2365-2367 (2006).
TriggsSdika(ab, scale)
Create a symmetric Triggs-Sdika filter (with a = b = ab
). M
is calculated for you. Only length 3 filters are currently supported.
ImageFiltering.KernelFactors.bickley
— Function kern1, kern2 = bickley()
Return factored Bickley filters for dimensions 1 and 2 of your image. Each is a 2-tuple of one-dimensional filters.
Citation
W. G. Bickley, "Finite difference formulae for the square lattice," The Quarterly Journal of Mechanics and Applied Mathematics, vol. 1, no. 1, pp. 35–42, 1948. doi:10.1093/qjmam/1.1.35
See also: Kernel.bickley and ImageFiltering.imgradients.
kern = bickley(extended::NTuple{N,Bool}, d)
Return a factored Bickley filter for computing the gradient in N
dimensions along axis d
. If extended[dim]
is false, kern
will have size 1 along that dimension.
See also: Kernel.bickley and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.scharr
— Function kern1, kern2 = scharr()
Return factored Scharr filters for dimensions 1 and 2 of your image. Each is a 2-tuple of one-dimensional filters.
Citation
H. Scharr and J. Weickert, "An anisotropic diffusion algorithm with optimized rotation invariance," Mustererkennung 2000, pp. 460–467, 2000. doi:10.1007/978-3-642-59802-9_58
See also: Kernel.scharr and ImageFiltering.imgradients.
kern = scharr(extended::NTuple{N,Bool}, d)
Return a factored Scharr filter for computing the gradient in N
dimensions along axis d
. If extended[dim]
is false, kern
will have size 1 along that dimension.
See also: Kernel.scharr and ImageFiltering.imgradients.
Kernel utilities
ImageFiltering.centered
— Function shiftedkernel = centered(kernel)
Shift the origin-of-coordinates to the center of kernel
. The center-element of kernel
will be accessed by shiftedkernel[0, 0, ...]
.
This function makes it easy to supply kernels using regular Arrays, and provides compatibility with other languages that do not support arbitrary axes.
See also: imfilter.
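For example:
using ImageFiltering
kernel = centered([0 1 0; 1 -4 1; 0 1 0]);  # axes become -1:1 × -1:1
kernel[0, 0]                                # -4, the center element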
ImageFiltering.KernelFactors.kernelfactors
— Function kernelfactors(factors::Tuple)
Prepare a factored kernel for filtering. If passed a 2-tuple of vectors of lengths m
and n
, this will return a 2-tuple of ReshapedVector
s that are effectively of sizes m×1
and 1×n
. In general, each successive factor
will be reshaped to extend along the corresponding dimension.
If passed a tuple of general arrays, it is assumed that each is shaped appropriately along its "leading" dimensions; the dimensionality of each is "extended" to N = length(factors)
, appending 1s to the size as needed.
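A minimal sketch (using a Sobel-like pair of 1-d factors purely for illustration):
using ImageFiltering
factors = kernelfactors(([-1, 0, 1], [1, 2, 1]));
# imfilter(img, factors) then applies the factors successively along dimensions 1 and 2.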
ImageFiltering.Kernel.reflect
— Function reflect(kernel) --> reflectedkernel
Compute the pointwise reflection around 0, 0, ... of the kernel kernel. Using imfilter with a reflectedkernel performs convolution, rather than correlation, with respect to the original kernel.
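For example (a minimal sketch; the call is qualified with the module path shown in the heading above):
using ImageFiltering
kern = centered([1 2 3; 4 5 6; 7 8 9]);
Kernel.reflect(kern)  # entries flipped through the origin (0, 0)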
Boundaries and padding
ImageFiltering.padarray
— Function padarray([T], img, border) --> imgpadded
Generate a padded image from an array img
and a specification border
of the boundary conditions and amount of padding to add.
Output
An expansion of the input image in which additional pixels are derived from the border of the input image using the extrapolation scheme specified by border
.
Details
The function supports one, two or multi-dimensional images. You can specify the element type T
of the output image.
Options
Valid border
options are described below.
Pad
The type Pad
designates the form of padding which should be used to extrapolate pixels beyond the boundary of an image. Instances must set style
, a Symbol specifying the boundary conditions of the image. The Symbol must be one of:
- :replicate (repeat edge values to infinity)
- :circular (image edges "wrap around")
- :symmetric (the image reflects relative to a position between pixels)
- :reflect (the image reflects relative to the edge itself)
Refer to the documentation of Pad
for more details and examples for each option.
Fill
The type Fill
designates a particular value which will be used to extrapolate pixels beyond the boundary of an image. Refer to the documentation of Fill
for more details and illustrations.
2D Examples
Each example is based on the input array
\[\mathbf{A} = \boxed{ \begin{matrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 4 & 6 & 8 & 10 & 12 \\ 3 & 6 & 9 & 12 & 15 & 18 \\ 4 & 8 & 12 & 16 & 20 & 24 \\ 5 & 10 & 15 & 20 & 25 & 30 \\ 6 & 12 & 18 & 24 & 30 & 36 \end{matrix}}.\]
Examples with Pad
The command padarray(A, Pad(:replicate,4,4))
yields
\[\boxed{ \begin{array}{ccccccccccccc} 1 & 1 & 1 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 6 & 6 & 6 & 6 \\ 1 & 1 & 1 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 6 & 6 & 6 & 6 \\ 1 & 1 & 1 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 6 & 6 & 6 & 6 \\ 1 & 1 & 1 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 6 & 6 & 6 & 6 \\ 1 & 1 & 1 & 1 & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{4} & \boxed{5} & \boxed{6} & 6 & 6 & 6 & 6 \\ 2 & 2 & 2 & 2 & \boxed{2} & \boxed{4} & \boxed{6} & \boxed{8} & \boxed{10} & \boxed{12} & 12 & 12 & 12 & 12 \\ 3 & 3 & 3 & 3 & \boxed{3} & \boxed{6} & \boxed{9} & \boxed{12} & \boxed{15} & \boxed{18} & 18 & 18 & 18 & 18 \\ 4 & 4 & 4 & 4 & \boxed{4} & \boxed{8} & \boxed{12} & \boxed{16} & \boxed{20} & \boxed{24} & 24 & 24 & 24 & 24 \\ 5 & 5 & 5 & 5 & \boxed{5} & \boxed{10} & \boxed{15} & \boxed{20} & \boxed{25} & \boxed{30} & 30 & 30 & 30 & 30 \\ 6 & 6 & 6 & 6 & \boxed{6} & \boxed{12} & \boxed{18} & \boxed{24} & \boxed{30} & \boxed{36} & 36 & 36 & 36 & 36 \\ 6 & 6 & 6 & 6 & 6 & 12 & 18 & 24 & 30 & 36 & 36 & 36 & 36 & 36 \\ 6 & 6 & 6 & 6 & 6 & 12 & 18 & 24 & 30 & 36 & 36 & 36 & 36 & 36 \\ 6 & 6 & 6 & 6 & 6 & 12 & 18 & 24 & 30 & 36 & 36 & 36 & 36 & 36 \\ 6 & 6 & 6 & 6 & 6 & 12 & 18 & 24 & 30 & 36 & 36 & 36 & 36 & 36 \end{array} }.\]
The command padarray(A, Pad(:circular,4,4))
yields
\[\boxed{ \begin{array}{ccccccccccccc} 9 & 12 & 15 & 18 & 3 & 6 & 9 & 12 & 15 & 18 & 3 & 6 & 9 & 12 \\ 12 & 16 & 20 & 24 & 4 & 8 & 12 & 16 & 20 & 24 & 4 & 8 & 12 & 16 \\ 15 & 20 & 25 & 30 & 5 & 10 & 15 & 20 & 25 & 30 & 5 & 10 & 15 & 20 \\ 18 & 24 & 30 & 36 & 6 & 12 & 18 & 24 & 30 & 36 & 6 & 12 & 18 & 24 \\ 3 & 4 & 5 & 6 & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{4} & \boxed{5} & \boxed{6} & 1 & 2 & 3 & 4 \\ 6 & 8 & 10 & 12 & \boxed{2} & \boxed{4} & \boxed{6} & \boxed{8} & \boxed{10} & \boxed{12} & 2 & 4 & 6 & 8 \\ 9 & 12 & 15 & 18 & \boxed{3} & \boxed{6} & \boxed{9} & \boxed{12} & \boxed{15} & \boxed{18} & 3 & 6 & 9 & 12 \\ 12 & 16 & 20 & 24 & \boxed{4} & \boxed{8} & \boxed{12} & \boxed{16} & \boxed{20} & \boxed{24} & 4 & 8 & 12 & 16 \\ 15 & 20 & 25 & 30 & \boxed{5} & \boxed{10} & \boxed{15} & \boxed{20} & \boxed{25} & \boxed{30} & 5 & 10 & 15 & 20 \\ 18 & 24 & 30 & 36 & \boxed{6} & \boxed{12} & \boxed{18} & \boxed{24} & \boxed{30} & \boxed{36} & 6 & 12 & 18 & 24 \\ 3 & 4 & 5 & 6 & 1 & 2 & 3 & 4 & 5 & 6 & 1 & 2 & 3 & 4 \\ 6 & 8 & 10 & 12 & 2 & 4 & 6 & 8 & 10 & 12 & 2 & 4 & 6 & 8 \\ 9 & 12 & 15 & 18 & 3 & 6 & 9 & 12 & 15 & 18 & 3 & 6 & 9 & 12 \\ 12 & 16 & 20 & 24 & 4 & 8 & 12 & 16 & 20 & 24 & 4 & 8 & 12 & 16 \end{array} }.\]
The command padarray(A, Pad(:symmetric,4,4))
yields
\[\boxed{ \begin{array}{ccccccccccccc} 16 & 12 & 8 & 4 & 4 & 8 & 12 & 16 & 20 & 24 & 24 & 20 & 16 & 12 \\ 12 & 9 & 6 & 3 & 3 & 6 & 9 & 12 & 15 & 18 & 18 & 15 & 12 & 9 \\ 8 & 6 & 4 & 2 & 2 & 4 & 6 & 8 & 10 & 12 & 12 & 10 & 8 & 6 \\ 4 & 3 & 2 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 6 & 5 & 4 & 3 \\ 4 & 3 & 2 & 1 & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{4} & \boxed{5} & \boxed{6} & 6 & 5 & 4 & 3 \\ 8 & 6 & 4 & 2 & \boxed{2} & \boxed{4} & \boxed{6} & \boxed{8} & \boxed{10} & \boxed{12} & 12 & 10 & 8 & 6 \\ 12 & 9 & 6 & 3 & \boxed{3} & \boxed{6} & \boxed{9} & \boxed{12} & \boxed{15} & \boxed{18} & 18 & 15 & 12 & 9 \\ 16 & 12 & 8 & 4 & \boxed{4} & \boxed{8} & \boxed{12} & \boxed{16} & \boxed{20} & \boxed{24} & 24 & 20 & 16 & 12 \\ 20 & 15 & 10 & 5 & \boxed{5} & \boxed{10} & \boxed{15} & \boxed{20} & \boxed{25} & \boxed{30} & 30 & 25 & 20 & 15 \\ 24 & 18 & 12 & 6 & \boxed{6} & \boxed{12} & \boxed{18} & \boxed{24} & \boxed{30} & \boxed{36} & 36 & 30 & 24 & 18 \\ 24 & 18 & 12 & 6 & 6 & 12 & 18 & 24 & 30 & 36 & 36 & 30 & 24 & 18 \\ 20 & 15 & 10 & 5 & 5 & 10 & 15 & 20 & 25 & 30 & 30 & 25 & 20 & 15 \\ 16 & 12 & 8 & 4 & 4 & 8 & 12 & 16 & 20 & 24 & 24 & 20 & 16 & 12 \\ 12 & 9 & 6 & 3 & 3 & 6 & 9 & 12 & 15 & 18 & 18 & 15 & 12 & 9 \end{array} }.\]
The command padarray(A, Pad(:reflect,4,4))
yields
\[\boxed{ \begin{array}{ccccccccccccc} 25 & 20 & 15 & 10 & 5 & 10 & 15 & 20 & 25 & 30 & 25 & 20 & 15 & 10 \\ 20 & 16 & 12 & 8 & 4 & 8 & 12 & 16 & 20 & 24 & 20 & 16 & 12 & 8 \\ 15 & 12 & 9 & 6 & 3 & 6 & 9 & 12 & 15 & 18 & 15 & 12 & 9 & 6 \\ 10 & 8 & 6 & 4 & 2 & 4 & 6 & 8 & 10 & 12 & 10 & 8 & 6 & 4 \\ 5 & 4 & 3 & 2 & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{4} & \boxed{5} & \boxed{6} & 5 & 4 & 3 & 2 \\ 10 & 8 & 6 & 4 & \boxed{2} & \boxed{4} & \boxed{6} & \boxed{8} & \boxed{10} & \boxed{12} & 10 & 8 & 6 & 4 \\ 15 & 12 & 9 & 6 & \boxed{3} & \boxed{6} & \boxed{9} & \boxed{12} & \boxed{15} & \boxed{18} & 15 & 12 & 9 & 6 \\ 20 & 16 & 12 & 8 & \boxed{4} & \boxed{8} & \boxed{12} & \boxed{16} & \boxed{20} & \boxed{24} & 20 & 16 & 12 & 8 \\ 25 & 20 & 15 & 10 & \boxed{5} & \boxed{10} & \boxed{15} & \boxed{20} & \boxed{25} & \boxed{30} & 25 & 20 & 15 & 10 \\ 30 & 24 & 18 & 12 & \boxed{6} & \boxed{12} & \boxed{18} & \boxed{24} & \boxed{30} & \boxed{36} & 30 & 24 & 18 & 12 \\ 25 & 20 & 15 & 10 & 5 & 10 & 15 & 20 & 25 & 30 & 25 & 20 & 15 & 10 \\ 20 & 16 & 12 & 8 & 4 & 8 & 12 & 16 & 20 & 24 & 20 & 16 & 12 & 8 \\ 15 & 12 & 9 & 6 & 3 & 6 & 9 & 12 & 15 & 18 & 15 & 12 & 9 & 6 \\ 10 & 8 & 6 & 4 & 2 & 4 & 6 & 8 & 10 & 12 & 10 & 8 & 6 & 4 \end{array} }.\]
Examples with Fill
The command padarray(A, Fill(0,(4,4),(4,4)))
yields
\[\boxed{ \begin{array}{ccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{4} & \boxed{5} & \boxed{6} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{2} & \boxed{4} & \boxed{6} & \boxed{8} & \boxed{10} & \boxed{12} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{3} & \boxed{6} & \boxed{9} & \boxed{12} & \boxed{15} & \boxed{18} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{4} & \boxed{8} & \boxed{12} & \boxed{16} & \boxed{20} & \boxed{24} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{5} & \boxed{10} & \boxed{15} & \boxed{20} & \boxed{25} & \boxed{30} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{6} & \boxed{12} & \boxed{18} & \boxed{24} & \boxed{30} & \boxed{36} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} }.\]
3D Examples
Each example is based on a multi-dimensional array $\mathsf{A} \in\mathbb{R}^{2 \times 2 \times 2}$ given by
\[\mathsf{A}(:,:,1) = \boxed{ \begin{array}{cc} 1 & 2 \\ 3 & 4 \end{array}} \quad \text{and} \quad \mathsf{A}(:,:,2) = \boxed{ \begin{array}{cc} 5 & 6 \\ 7 & 8 \end{array}}.\]
Note that each example will yield a new multi-dimensional array $\mathsf{A}' \in \mathbb{R}^{4 \times 4 \times 4}$ of type OffsetArray
, where prepended dimensions may be negative or start from zero.
Examples with Pad
The command padarray(A,Pad(:replicate,1,1,1))
yields
\[\begin{aligned} \mathsf{A}'(:,:,0) & = \boxed{ \begin{array}{cccc} 1 & 1 & 2 & 2 \\ 1 & 1 & 2 & 2 \\ 3 & 3 & 4 & 4 \\ 3 & 3 & 4 & 4 \end{array}} & \mathsf{A}'(:,:,1) & = \boxed{ \begin{array}{cccc} 1 & 1 & 2 & 2 \\ 1 & \boxed{1} & \boxed{2} & 2 \\ 3 & \boxed{3} & \boxed{4} & 4 \\ 3 & 3 & 4 & 4 \end{array}} \\ \mathsf{A}'(:,:,2) & = \boxed{ \begin{array}{cccc} 5 & 5 & 6 & 6 \\ 5 & \boxed{5} & \boxed{6} & 6 \\ 7 & \boxed{7} & \boxed{8} & 8 \\ 7 & 7 & 8 & 8 \end{array}} & \mathsf{A}'(:,:,3) & = \boxed{ \begin{array}{cccc} 5 & 5 & 6 & 6 \\ 5 & 5 & 6 & 6 \\ 7 & 7 & 8 & 8 \\ 7 & 7 & 8 & 8 \end{array}} \end{aligned} .\]
The command padarray(A,Pad(:circular,1,1,1))
yields
\[\begin{aligned} \mathsf{A}'(:,:,0) & = \boxed{ \begin{array}{cccc} 8 & 7 & 8 & 7 \\ 6 & 5 & 6 & 5 \\ 8 & 7 & 8 & 7 \\ 6 & 5 & 6 & 5 \end{array}} & \mathsf{A}'(:,:,1) & = \boxed{ \begin{array}{cccc} 4 & 3 & 4 & 3 \\ 2 & \boxed{1} & \boxed{2} & 1 \\ 4 & \boxed{3} & \boxed{4} & 3 \\ 2 & 1 & 2 & 1 \end{array}} \\ \mathsf{A}'(:,:,2) & = \boxed{ \begin{array}{cccc} 8 & 7 & 8 & 7 \\ 6 & \boxed{5} & \boxed{6} & 5 \\ 8 & \boxed{7} & \boxed{8} & 7 \\ 6 & 5 & 6 & 5 \end{array}} & \mathsf{A}'(:,:,3) & = \boxed{ \begin{array}{cccc} 4 & 3 & 4 & 3 \\ 2 & 1 & 2 & 1 \\ 4 & 3 & 4 & 3 \\ 2 & 1 & 2 & 1 \end{array}} \end{aligned} .\]
The command padarray(A,Pad(:symmetric,1,1,1))
yields
\[\begin{aligned} \mathsf{A}'(:,:,0) & = \boxed{ \begin{array}{cccc} 1 & 1 & 2 & 2 \\ 1 & 1 & 2 & 2 \\ 3 & 3 & 4 & 4 \\ 3 & 3 & 4 & 4 \end{array}} & \mathsf{A}'(:,:,1) & = \boxed{ \begin{array}{cccc} 1 & 1 & 2 & 2 \\ 1 & \boxed{1} & \boxed{2} & 2 \\ 3 & \boxed{3} & \boxed{4} & 4 \\ 3 & 3 & 4 & 4 \end{array}} \\ \mathsf{A}'(:,:,2) & = \boxed{ \begin{array}{cccc} 5 & 5 & 6 & 6 \\ 5 & \boxed{5} & \boxed{6} & 6 \\ 7 & \boxed{7} & \boxed{8} & 8 \\ 7 & 7 & 8 & 8 \end{array}} & \mathsf{A}'(:,:,3) & = \boxed{ \begin{array}{cccc} 5 & 5 & 6 & 6 \\ 5 & 5 & 6 & 6 \\ 7 & 7 & 8 & 8 \\ 7 & 7 & 8 & 8 \end{array}} \end{aligned} .\]
The command padarray(A,Pad(:reflect,1,1,1))
yields
\[\begin{aligned} \mathsf{A}'(:,:,0) & = \boxed{ \begin{array}{cccc} 8 & 7 & 8 & 7 \\ 6 & 5 & 6 & 5 \\ 8 & 7 & 8 & 7 \\ 6 & 5 & 6 & 5 \end{array}} & \mathsf{A}'(:,:,1) & = \boxed{ \begin{array}{cccc} 4 & 3 & 4 & 3 \\ 2 & \boxed{1} & \boxed{2} & 1 \\ 4 & \boxed{3} & \boxed{4} & 3 \\ 2 & 1 & 2 & 1 \end{array}} \\ \mathsf{A}'(:,:,2) & = \boxed{ \begin{array}{cccc} 8 & 7 & 8 & 7 \\ 6 & \boxed{5} & \boxed{6} & 5 \\ 8 & \boxed{7} & \boxed{8} & 7 \\ 6 & 5 & 6 & 5 \end{array}} & \mathsf{A}'(:,:,3) & = \boxed{ \begin{array}{cccc} 4 & 3 & 4 & 3 \\ 2 & 1 & 2 & 1 \\ 4 & 3 & 4 & 3 \\ 2 & 1 & 2 & 1 \end{array}} \end{aligned} .\]
Examples with Fill
The command padarray(A,Fill(0,(1,1,1)))
yields
\[\begin{aligned} \mathsf{A}'(:,:,0) & = \boxed{ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}} & \mathsf{A}'(:,:,1) & = \boxed{ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & \boxed{1} & \boxed{2} & 0 \\ 0 & \boxed{3} & \boxed{4} & 0 \\ 0 & 0 & 0 & 0 \end{array}} \\ \mathsf{A}'(:,:,2) & = \boxed{ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & \boxed{5} & \boxed{6} & 0 \\ 0 & \boxed{7} & \boxed{8} & 0 \\ 0 & 0 & 0 & 0 \end{array}} & \mathsf{A}'(:,:,3) & = \boxed{ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}} \end{aligned} .\]
ImageFiltering.Pad
— Type struct Pad{N} <: AbstractBorder
style::Symbol
lo::Dims{N} # number to extend by on the lower edge for each dimension
hi::Dims{N} # number to extend by on the upper edge for each dimension
end
Pad
is a type that designates the form of padding which should be used to extrapolate pixels beyond the boundary of an image. Instances must set style
, a Symbol specifying the boundary conditions of the image.
Output
The type Pad
specifying how the boundary of an image should be padded.
Details
When representing a spatial two-dimensional image filtering operation as a discrete convolution between the image and a $D \times D$ filter, the results are undefined for pixels closer than $D$ pixels from the border of the image. To define the operation near and at the border, one needs a scheme for extrapolating pixels beyond the edge. The Pad
type allows one to specify the necessary extrapolation scheme.
The type facilitates the padding of one, two or multi-dimensional images.
You can specify a different amount of padding at the lower and upper borders of each dimension of the image (top, left, bottom and right in two dimensions).
Options
Some valid style
options are described below. As an indicative example of each option the results of the padding are illustrated on an image consisting of a row of six pixels which are specified alphabetically: $\boxed{a \, b \, c \,d \, e \, f}$. We show the effects of padding only on the left and right border, but analogous consequences hold for the top and bottom border.
:replicate (default)
The border pixels extend beyond the image boundaries.
\[\boxed{ \begin{array}{l|c|r} a\, a\, a\, a & a \, b \, c \, d \, e \, f & f \, f \, f \, f \end{array} }\]
See also: Fill, padarray, Inner and NoPad.
:circular
The border pixels wrap around. For instance, indexing beyond the left border returns values starting from the right border.
\[\boxed{ \begin{array}{l|c|r} c\, d\, e\, f & a \, b \, c \, d \, e \, f & a \, b \, c \, d \end{array} }\]
See also: Fill, padarray, Inner and NoPad.
:symmetric
The border pixels reflect relative to a position between pixels. That is, the border pixel is duplicated when mirroring, consistent with the padarray examples above.
\[\boxed{ \begin{array}{l|c|r} d\, c\, b\, a & a \, b \, c \, d \, e \, f & f \, e \, d \, c \end{array} }\]
See also: Fill, padarray, Inner and NoPad.
:reflect
The border pixels reflect relative to the edge itself. That is, the border pixel is omitted when mirroring, consistent with the padarray examples above.
\[\boxed{ \begin{array}{l|c|r} e\, d\, c\, b & a \, b \, c \, d \, e \, f & e \, d \, c \, b \end{array} }\]
See also: Fill, padarray, Inner and NoPad.
ImageFiltering.Fill
— Type struct Fill{T,N} <: AbstractBorder
value::T
lo::Dims{N}
hi::Dims{N}
end
Fill
is a type that designates a particular value which will be used to extrapolate pixels beyond the boundary of an image.
Output
The type Fill
specifying the value with which the boundary of the image should be padded.
Details
When representing a two-dimensional spatial image filtering operation as a discrete convolution between an image and a $D \times D$ filter, the results are undefined for pixels closer than $D$ pixels from the border of the image. To define the operation near and at the border, one needs a scheme for extrapolating pixels beyond the edge. The Fill
type allows one to specify a particular value which will be used in the extrapolation. For more elaborate extrapolation schemes refer to the documentation of Pad
.
The type facilitates the padding of one, two or multi-dimensional images.
You can specify a different amount of padding at the lower and upper borders of each dimension of the image (top, left, bottom and right in two dimensions).
Example
As an indicative illustration consider an image consisting of a row of six pixels which are specified alphabetically: $\boxed{a \, b \, c \, d \, e \, f}$. We show the effects of padding with a constant value $m$ only on the left and right border, but analogous consequences hold for the top and bottom border.
\[\boxed{ \begin{array}{l|c|r} m\, m\, m\, m & a \, b \, c \, d \, e \, f & m \, m \, m \, m \end{array} }\]
See also: Pad, padarray, Inner and NoPad.
ImageFiltering.Inner
— Type Inner()
Inner(lo, hi)
Indicate that edges are to be discarded in filtering, only the interior of the result is to be returned.
Example:
imfilter(img, kernel, Inner())
ImageFiltering.NA
— Type NA(na=isnan)
Choose filtering using "NA" (Not Available) boundary conditions. This is most appropriate for filters that have only positive weights, such as blurring filters. Effectively, the output value is normalized in the following way:
filtered array with Fill(0) boundary conditions
output = -----------------------------------------------
filtered 1 with Fill(0) boundary conditions
Array elements for which na
returns true
are also considered outside array boundaries.
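As an indicative sketch (assuming NA() is accepted as the border argument of imfilter, like the other border types):
using ImageFiltering
img = ones(5, 5); img[3, 3] = NaN;
imfilter(img, Kernel.gaussian(0.5), NA())  # the NaN entry is treated as missing and the result renormalized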
ImageFiltering.NoPad
— Type NoPad()
NoPad(border)
Indicates that no padding should be applied to the input array, or that you have already pre-padded the input image. Passing a border
object allows you to preserve "memory" of a border choice; it can be retrieved by indexing with [].
Example
The commands
np = NoPad(Pad(:replicate))
imfilter!(out, img, kernel, np)
run filtering directly, skipping any padding steps. Every entry of out
must be computable using in-bounds operations on img
and kernel
.
Algorithms
ImageFiltering.Algorithm.FIR
— Type Filter using a direct algorithm
ImageFiltering.Algorithm.FFT
— Type Filter using the Fast Fourier Transform
ImageFiltering.Algorithm.IIR
— Type Filter with an Infinite Impulse Response filter
ImageFiltering.Algorithm.Mixed
— Type Filter with a cascade of mixed types (IIR, FIR)
Internal machinery
ImageFiltering.KernelFactors.ReshapedOneD
— Type ReshapedOneD{N,Npre}(data)
Return an object of dimensionality N
, where data
must have dimensionality 1. The axes are 0:0
for the first Npre
dimensions, have the axes of data
for dimension Npre+1
, and are 0:0
for the remaining dimensions.
data
must support eltype
and ndims
, but does not have to be an AbstractArray.
ReshapedOneDs allow one to specify a "filtering dimension" for a 1-dimensional filter.
Nonlinear filtering and transformation
ImageFiltering.MapWindow.mapwindow
— Function mapwindow(f, img, window; [border="replicate"], [indices=axes(img)]) -> imgf
Apply f
to sliding windows of img
, with window size or axes specified by window
. For example, mapwindow(median!, img, window)
returns an Array
of values similar to img
(median-filtered, of course), whereas mapwindow(extrema, img, window)
returns an Array
of (min,max)
tuples over a window of size window
centered on each point of img
.
The function f
receives a buffer buf
for the window of data surrounding the current point. If window
is specified as a Dims-tuple (tuple-of-integers), then all the integers must be odd and the window is centered around the current image point. For example, if window=(3,3)
, then f
will receive an Array buf
corresponding to offsets (-1:1, -1:1)
from the imgf[i,j]
for which this is currently being computed. Alternatively, window
can be a tuple of AbstractUnitRanges, in which case the specified ranges are used for buf
; this allows you to use asymmetric windows if needed.
border
specifies how the edges of img
should be handled; see imfilter
for details.
Finally, indices allows one to omit unnecessary computations, if you want to do things like mapwindow on a subimage, or a strided variant of mapwindow. It works as follows:
mapwindow(f, img, window, indices=(2:5, 1:2:7)) == mapwindow(f,img,window)[2:5, 1:2:7]
The two sides agree, except that the left-hand side is computed more efficiently because it omits computation of the unused values.
Because the data in the buffer buf
that is received by f
is copied from img
, and the buffer's memory is reused, f
should not return references to buf
. For example, this
f = buf->copy(buf) # as opposed to f = buf->buf
mapwindow(f, img, window, indices=(2:5, 1:2:7))
would work as expected.
For functions that can only take AbstractVector
inputs, you might have to first specialize default_shape:
f = v->quantile(v, 0.75)
ImageFiltering.MapWindow.default_shape(::typeof(f)) = vec
and then mapwindow(f, img, (m,n))
should filter at the 75th quantile.
See also: imfilter.
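A runnable sketch of the basics:
using ImageFiltering, Statistics
A = collect(reshape(1:25, 5, 5));
mapwindow(median!, float.(A), (3, 3))  # 3×3 median filter
mapwindow(extrema, A, (3, 3))          # (min, max) tuple over each 3×3 window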
Images.imROF
— Function imgr = imROF(img, λ, iterations)
Perform Rudin-Osher-Fatemi (ROF) filtering, more commonly known as Total Variation (TV) denoising or TV regularization. λ
is the regularization coefficient for the derivative, and iterations
is the number of relaxation iterations taken. 2d only.
See https://en.wikipedia.org/wiki/Total_variation_denoising and Chambolle, A. (2004). "An algorithm for total variation minimization and applications". Journal of Mathematical Imaging and Vision. 20: 89–97.
Edge detection
Images.magnitude
— Function m = magnitude(grad_x, grad_y)
Calculates the magnitude of the gradient images given by grad_x and grad_y. Equivalent to sqrt.(grad_x.^2 .+ grad_y.^2).
Returns a magnitude image the same size as grad_x and grad_y.
Images.phase
— Function phase(grad_x, grad_y) -> p
Calculate the rotation angle of the gradient given by grad_x
and grad_y
. Equivalent to atan(-grad_y, grad_x)
, except that when both grad_x
and grad_y
are effectively zero, the corresponding angle is set to zero.
Images.orientation
— Function orientation(grad_x, grad_y) -> orient
Calculate the orientation angle of the strongest edge from gradient images given by grad_x
and grad_y
. Equivalent to atan(grad_x, grad_y)
. When both grad_x
and grad_y
are effectively zero, the corresponding angle is set to zero.
Images.magnitude_phase
— Function magnitude_phase(grad_x, grad_y) -> m, p
Convenience function for calculating the magnitude and phase of the gradient images given in grad_x
and grad_y
. Returns a tuple containing the magnitude and phase images. See magnitude
and phase
for details.
Images.imedge
— Function grad_y, grad_x, mag, orient = imedge(img, kernelfun=KernelFactors.ando3, border="replicate")
Edge-detection filtering. kernelfun
is a valid kernel function for imgradients
, defaulting to KernelFactors.ando3
. border
is any of the boundary conditions specified in padarray
.
Returns a tuple (grad_y, grad_x, mag, orient)
, which are the horizontal gradient, vertical gradient, and the magnitude and orientation of the strongest edge, respectively.
Images.thin_edges
— Function thinned = thin_edges(img, gradientangle, [border])
thinned, subpix = thin_edges_subpix(img, gradientangle, [border])
thinned, subpix = thin_edges_nonmaxsup(img, gradientangle, [border]; [radius::Float64=1.35], [theta=pi/180])
thinned, subpix = thin_edges_nonmaxsup_subpix(img, gradientangle, [border]; [radius::Float64=1.35], [theta=pi/180])
Edge thinning for 2D edge images. Currently the only algorithm available is non-maximal suppression, which takes an edge image and its gradient angle, and checks each edge point for local maximality in the direction of the gradient. The returned image is non-zero only at maximal edge locations.
border
is any of the boundary conditions specified in padarray
.
In addition to the maximal edge image, the _subpix
versions of these functions also return an estimate of the subpixel location of each local maximum, as a 2D array or image of Graphics.Point
objects. Additionally, each local maximum is adjusted to the estimated value at the subpixel location.
Currently, the _nonmaxsup
functions are identical to the first two function calls, except that they also accept additional keyword arguments. radius
indicates the step size to use when searching in the direction of the gradient; values between 1.2 and 1.5 are suggested (default 1.35). theta
indicates the step size to use when discretizing angles in the gradientangle
image, in radians (default: 1 degree in radians = pi/180).
Example:
g = Gray.(rgb_image)                          # elementwise conversion to grayscale
gy, gx = imgradients(g, KernelFactors.ando3)
mag, grad_angle = magnitude_phase(gx, gy)
mag[mag .< 0.5] .= 0.0                        # threshold magnitude image
thinned, subpix = thin_edges_subpix(mag, grad_angle)
Images.canny
— Function canny_edges = canny(img, (upper, lower), sigma=1.4)
Performs Canny Edge Detection on the input image.
Parameters:
- (upper, lower): bounds for hysteresis thresholding
- sigma: specifies the standard deviation of the gaussian filter
Example
imgedg = canny(img, (Percentile(80), Percentile(20)))
Images.Percentile
— Type Percentile(x)
Indicate that x
should be interpreted as a percentile rather than an absolute value. For example,
- canny(img, 1.4, (80, 20)) uses absolute thresholds on the edge magnitude image
- canny(img, 1.4, (Percentile(80), Percentile(20))) uses percentiles of the edge magnitude image as thresholds
Corner Detection
Images.imcorner
— Function corners = imcorner(img; [method])
corners = imcorner(img, threshold; [method])
Performs corner detection using one of the following methods:
1. harris
2. shi_tomasi
3. kitchen_rosenfeld
The parameters of the individual methods are described in their documentation. The maxima values of the resultant responses are taken as corners. If a threshold is specified, the values of the responses are thresholded to give the corner pixels. If threshold
is a Percentile
then its type will be preserved.
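A usage sketch of both forms (the test image name and Percentile value are illustrative):
using Images, TestImages
img = testimage("cameraman");
corners = imcorner(img; method = harris);                 # Boolean corner mask
strong = imcorner(img, Percentile(95); method = harris);  # keep only the strongest responses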
Images.harris
— Function harris_response = harris(img; [k], [border], [weights])
Performs Harris corner detection. The covariances can be taken using either a mean weighted filter or a gamma kernel.
Images.shi_tomasi
— Function shi_tomasi_response = shi_tomasi(img; [border], [weights])
Performs Shi-Tomasi corner detection. The covariances can be taken using either a mean weighted filter or a gamma kernel.
Images.kitchen_rosenfeld
— Function kitchen_rosenfeld_response = kitchen_rosenfeld(img; [border])
Performs Kitchen-Rosenfeld corner detection. The covariances can be taken using either a mean weighted filter or a gamma kernel.
Images.fastcorners
— Function fastcorners(img, n, threshold) -> corners
Performs FAST Corner Detection. n
is the number of contiguous pixels which need to be greater (lesser) than intensity + threshold (intensity - threshold) for a pixel to be marked as a corner. The default value for n is 12.
Feature Extraction
See the ImageFeatures package for a much more comprehensive set of tools.
Images.blob_LoG
— Function blob_LoG(img, σscales, [edges], [σshape]) -> Vector{BlobLoG}
Find "blobs" in an N-D image using the negative Laplacian of Gaussian with the specified vector or tuple of σ values. The algorithm searches for places where the filtered image (for a particular σ) is at a peak compared to all spatially- and σ-adjacent voxels, where σ is σscales[i] * σshape
for some i. By default, σshape
is an ntuple of 1s.
The optional edges
argument controls whether peaks on the edges are included. edges
can be true
or false
, or a N+1-tuple in which the first entry controls whether edge-σ values are eligible to serve as peaks, and the remaining N entries control each of the N dimensions of img
.
Citation:
Lindeberg T (1998), "Feature Detection with Automatic Scale Selection", International Journal of Computer Vision, 30(2), 79–116.
See also: BlobLoG.
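A small synthetic sketch (the σ range is illustrative):
using Images
A = zeros(64, 64);
A[25, 30] = 1;                            # a single bright pixel acts as a tiny blob
blobs = blob_LoG(A, 2 .^ (0.0:0.5:2.0));  # Vector{BlobLoG}; each has location, σ and amplitude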
Images.BlobLoG
— Type BlobLoG stores information about the location of peaks as discovered by blob_LoG. It has fields:
- location: the location of a peak in the filtered image (a CartesianIndex)
- σ: the value of σ which led to the largest (-LoG)-filtered amplitude at this location
- amplitude: the value of the (-LoG(σ))-filtered image at the peak
Note that the radius is equal to σ√2.
See also: blob_LoG.
Images.findlocalmaxima
— Function findlocalmaxima(img, [region, edges]) -> Vector{CartesianIndex}
Returns the coordinates of elements whose value is larger than all of their immediate neighbors. region
is a list of dimensions to consider. edges
is a boolean specifying whether to include the first and last elements of each dimension, or a tuple-of-Bool specifying edge behavior for each dimension separately.
Images.findlocalminima
— Function Like findlocalmaxima, but returns the coordinates of the smallest elements.
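For example:
using Images
A = [1 2 1; 2 5 2; 1 2 1]
findlocalmaxima(A)  # [CartesianIndex(2, 2)]
findlocalminima(A)  # with the default edge settings, the four corner indices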
Exposure
ImageContrastAdjustment.build_histogram
— Function edges, count = build_histogram(img) # For 8-bit images only
edges, count = build_histogram(img, nbins)
edges, count = build_histogram(img, nbins; minval, maxval)
edges, count = build_histogram(img, edges)
Generates a histogram for the image over nbins
spread between [minval, maxval]. Color images are automatically converted to grayscale.
Output
Returns edges
which is an AbstractRange
type that specifies how the interval [minval, maxval]
is divided into bins, and an array count
which records the concomitant bin frequencies. In particular, count
has the following properties:
- count[0] is the number satisfying x < edges[1]
- count[i] is the number of values x that satisfy edges[i] <= x < edges[i+1]
- count[end] is the number satisfying x >= edges[end]
- length(count) == length(edges)+1
Details
One can consider a histogram as a piecewise-constant model of a probability density function $f$ [1]. Suppose that $f$ has support on some interval $I = [a,b]$. Let $m$ be an integer and $a = a_1 < a_2 < \ldots < a_m < a_{m+1} = b$ a sequence of real numbers. Construct a sequence of intervals
\[I_1 = [a_1,a_2], I_2 = (a_2, a_3], \ldots, I_{m} = (a_m,a_{m+1}]\]
which partition $I$ into subsets $I_j$ $(j = 1, \ldots, m)$ on which $f$ is constant. These subsets satisfy $I_i \cap I_j = \emptyset, \forall i \neq j$, and are commonly referred to as bins. Together they encompass the entire range of data values such that $\sum_j |I_j | = | I |$. Each bin has width $w_j = |I_j| = a_{j+1} - a_j$ and height $h_j$ which is the constant probability density over the region of the bin. Integrating the constant probability density over the width of the bin $w_j$ yields a probability mass of $\pi_j = h_j w_j$ for the bin.
For a sample $x_1, x_2, \ldots, x_N$, let
\[n_j = \sum_{n = 1}^{N}\mathbf{1}_{(I_j)}(x_n), \quad \text{where} \quad \mathbf{1}_{(I_j)}(x) = \begin{cases} 1 & \text{if} \; x \in I_j,\\ 0 & \text{otherwise}, \end{cases},\]
represent the number of samples falling into the interval $I_j$. An estimate for the probability mass of the $j$th bin is given by the relative frequency $\hat{\pi}_j = \frac{n_j}{N}$, and the histogram estimator of the probability density function is defined as
\[\begin{aligned} \hat{f}_n(x) & = \sum_{j = 1}^{m}\frac{n_j}{Nw_j} \mathbf{1}_{(I_j)}(x) \\ & = \sum_{j = 1}^{m}\frac{\hat{\pi}_j}{w_j} \mathbf{1}_{(I_j)}(x) \\ & = \sum_{j = 1}^{m}\hat{h}_j \mathbf{1}_{(I_j)}(x). \end{aligned}\]
The function $\hat{f}_n(x)$ is a genuine density estimator because $\hat{f}_n(x) \ge 0$ and
\[\begin{aligned} \int_{-\infty}^{\infty}\hat{f}_n(x) \operatorname{d}x & = \sum_{j=1}^{m} \frac{n_j}{Nw_j} w_j \\ & = 1. \end{aligned}\]
Options
Various options for the parameters of this function are described in more detail below.
Choices for nbins
You can specify the number of discrete bins for the histogram. When specifying the number of bins consider the maximum number of graylevels that your image type supports. For example, with an image of type N0f8
there is a maximum of 256 possible graylevels. Hence, if you request more than 256 bins for that type of image you should expect to obtain zero counts for numerous bins.
Choices for minval
You have the option to specify the lower bound of the interval over which the histogram will be computed. If minval
is not specified then the minimum value present in the image is taken as the lower bound.
Choices for maxval
You have the option to specify the upper bound of the interval over which the histogram will be computed. If maxval
is not specified then the maximum value present in the image is taken as the upper bound.
Choices for edges
If you do not designate the number of bins, nor the lower or upper bound of the interval, then you have the option to directly stipulate how the intervals will be divided by specifying an AbstractRange
type.
Example
Compute the histogram of a grayscale image.
using TestImages, FileIO, ImageView
img = testimage("mandril_gray");
edges, counts = build_histogram(img, 256, minval = 0, maxval = 1)
Given a color image, compute the histogram of the red channel.
img = testimage("mandrill")
r = red.(img)
edges, counts = build_histogram(r, 256, minval = 0, maxval = 1)
References
[1] E. Herrholz, "Parsimonious Histograms," Ph.D. dissertation, Inst. of Math. and Comp. Sci., University of Greifswald, Greifswald, Germany, 2011.
ImageContrastAdjustment.HistogramAdjustmentAPI.adjust_histogram
— Function adjust_histogram([T::Type,] img, f::AbstractHistogramAdjustmentAlgorithm, args...; kwargs...)
Adjust histogram of img
using algorithm f
.
Output
The return image img_adjusted
is an Array{T}.
If T
is not specified, then it's inferred.
Examples
Simply pass the input image and algorithm to adjust_histogram:
img_adjusted = adjust_histogram(img, f)
This reads as "adjust_histogram
of image img
using algorithm f
".
You can also explicitly specify the return type:
img_adjusted_float32 = adjust_histogram(Gray{Float32}, img, f)
See also adjust_histogram! for in-place histogram adjustment.
ImageContrastAdjustment.HistogramAdjustmentAPI.adjust_histogram!
— Function adjust_histogram!([out,] img, f::AbstractHistogramAdjustmentAlgorithm, args...; kwargs...)
Adjust histogram of img
using algorithm f
.
Output
If out
is specified, it will be changed in place. Otherwise img
will be changed in place.
Examples
Simply pass an algorithm to adjust_histogram!:
img_adjusted = similar(img)
adjust_histogram!(img_adjusted, img, f)
For cases where you just want to change img
in place, you don't necessarily need to manually allocate img_adjusted
; just use the convenient method:
adjust_histogram!(img, f)
See also: adjust_histogram
ImageContrastAdjustment.AdaptiveEqualization
— Type AdaptiveEqualization <: AbstractHistogramAdjustmentAlgorithm
AdaptiveEqualization(; nbins = 256, minval = 0, maxval = 1, rblocks = 8, cblocks = 8, clip = 0.1)
adjust_histogram([T,] img, f::AdaptiveEqualization)
adjust_histogram!([out,] img, f::AdaptiveEqualization)
Performs Contrast Limited Adaptive Histogram Equalisation (CLAHE) on the input image. It differs from ordinary histogram equalisation in that the adaptive method computes several histograms, each corresponding to a distinct section of the image, and uses them to redistribute the lightness values of the image. It is therefore suitable for improving the local contrast and enhancing the definitions of edges in each region of an image.
Details
Histogram equalisation was initially conceived to improve the contrast in a single-channel grayscale image. The method transforms the distribution of the intensities in an image so that they are as uniform as possible [1]. The natural justification for uniformity is that the image has better contrast if the intensity levels of an image span a wide range on the intensity scale. As it turns out, the necessary transformation is a mapping based on the cumulative histogram; see Equalization for more details.
A natural extension of histogram equalisation is to apply the contrast enhancement locally rather than globally [2]. Conceptually, one can imagine that the process involves partitioning the image into a grid of rectangular regions and applying histogram equalisation based on the local CDF of each contextual region. However, to smooth the transition of the pixels from one contextual region to another, the mapping of a pixel is not necessarily done solely based on the local CDF of its contextual region. Rather, the mapping of a pixel may be interpolated based on the CDF of its contextual region, and the CDFs of the immediate neighbouring regions.
In adaptive histogram equalisation the image $\mathbf{G}$ is partitioned into $P \times Q$ equisized submatrices,
\[\mathbf{G} = \begin{bmatrix} \mathbf{G}_{11} & \mathbf{G}_{12} & \ldots & \mathbf{G}_{1C} \\ \mathbf{G}_{21} & \mathbf{G}_{22} & \ldots & \mathbf{G}_{2C} \\ \vdots & \vdots & \ldots & \vdots \\ \mathbf{G}_{R1} & \mathbf{G}_{R2} & \ldots & \mathbf{G}_{RC} \\ \end{bmatrix}.\]
For each submatrix $\mathbf{G}_{rc}$, one computes a concomitant CDF, which we shall denote by $T_{rc}(G_{i,j})$. In the most general case, we will require four CDFs
\[\begin{aligned} T_1(v) & \triangleq T_{rc}(G_{i,j}) \\ T_2(v) & \triangleq T_{(r+1)c}(G_{i,j}) \\ T_3(v) & \triangleq T_{(r+1)(c+1)}(G_{i,j}) \\ T_4(v) & \triangleq T_{r(c+1)}(G_{i,j}). \end{aligned}\]
In order to determine which particular CDFs will be used in the interpolation step, it is useful to (i) introduce the function
\[\Phi(\mathbf{G}_{rc}) = \left( \phi_{rc}, \phi'_{rc}\right) \triangleq \left(rP - \frac{P}{2}, cQ - \frac{Q}{2} \right),\]
(ii) form the sequences $\left(\phi_{11}, \phi_{21}, \ldots, \phi_{R1} \right)$ and $\left(\phi'_{11}, \phi'_{12}, \ldots, \phi'_{1C} \right)$, and (iii) define
\[\begin{aligned} t & \triangleq \frac{i - \phi_{r1}}{\phi_{(r+1)1} - \phi_{r1} } \\ u & \triangleq \frac{j - \phi'_{1c}}{\phi'_{1(c+1)} - \phi'_{1c} }. \end{aligned}\]
Case I (Interior)
For a pixel $G_{i,j}$ in the range
\[P - \frac{P}{2} \le i \le RP - \frac{P}{2} \quad \text{and} \quad Q - \frac{Q}{2} \le j \le CQ - \frac{Q}{2}.\]
values of $r$ and $c$ are implicitly defined by the solution to the inequalities
\[\phi_{r1} \le i < \phi_{(r+1)1} \quad \text{and} \quad \phi'_{1c} \le j < \phi'_{1(c+1)}.\]
The bilinearly interpolated transformation that maps an intensity $v$ at location $(i,j)$ in the image to an intensity $v'$ is given by [3]
\[v' \triangleq \bar{T}(v) = (1-t) (1-u)T_1(G_{i,j}) + t(1-u)T_2(G_{i,j}) + tuT_3(G_{i,j}) +(1-t)uT_4(G_{i,j}).\]
Case II (Vertical Border)
For a pixel $G_{i,j}$ in the range
\[P - \frac{P}{2} \le i \le RP - \frac{P}{2} \quad \text{and} \quad 1 \le j < Q - \frac{Q}{2} \; \cup \; CQ - \frac{Q}{2} < j \le CQ,\]
$r$ is implicitly defined by the solution to the inequality $\phi_{r1} \le i < \phi_{(r+1)1}$, while
\[c = \begin{cases} 1, & \text{if } \quad 1 \le j < Q - \frac{Q}{2} \\ C, & \text{if } \quad CQ - \frac{Q}{2} < j \le CQ. \end{cases}\]
The linearly interpolated transformation that maps an intensity $v$ at location $(i,j)$ in the image to an intensity $v'$ is given by
\[v' \triangleq \bar{T}(v) = (1-t)T_1(G_{i,j}) + tT_2(G_{i,j}).\]
Case III (Horizontal Border)
For a pixel $G_{i,j}$ in the range
\[1 \le i < P - \frac{P}{2} \;\cup \; RP - \frac{P}{2} < i \le RP \quad \text{and} \quad Q - \frac{Q}{2} \le j \le CQ - \frac{Q}{2},\]
$c$ is implicitly defined by the solution to the inequality $\phi'_{1c} \le j < \phi'_{1(c+1)}$, while
\[r = \begin{cases} 1, & \text{if } \quad 1 \le i < P - \frac{P}{2} \\ R, & \text{if } \quad RP - \frac{P}{2} < i \le RP . \end{cases}\]
The linearly interpolated transformation that maps an intensity $v$ at location $(i,j)$ in the image to an intensity $v'$ is given by
\[v' \triangleq \bar{T}(v) = (1-u)T_1(G_{i,j}) + uT_4(G_{i,j}).\]
Case IV (Corners)
For a pixel $G_{i,j}$ in the range
\[1 \le i < P - \frac{P}{2} \;\cup \; RP - \frac{P}{2} < i \le RP \quad \text{and} \quad 1 \le j < Q - \frac{Q}{2} \; \cup \; CQ - \frac{Q}{2} < j \le CQ ,\]
we have
\[r = \begin{cases} 1, & \text{if } \quad 1 \le i < P - \frac{P}{2} \\ R, & \text{if } \quad RP - \frac{P}{2} < i \le RP \end{cases} \quad \text{and} \quad c = \begin{cases} 1, & \text{if } \quad 1 \le j < Q - \frac{Q}{2} \\ C, & \text{if } \quad CQ - \frac{Q}{2} < j \le CQ. \end{cases}\]
The transformation that maps an intensity $v$ at location $(i,j)$ in the image to an intensity $v'$ is given by
\[v' \triangleq \bar{T}(v) = T_1(G_{i,j}).\]
Limiting Contrast
An unfortunate side-effect of contrast enhancement is that it has a tendency to amplify the level of noise in an image, especially when the magnitude of the contrast enhancement is very high. The magnitude of contrast enhancement is associated with the gradient of $T(\cdot)$, because the gradient determines the extent to which consecutive input intensities are stretched across the grey-level spectrum. One can diminish the level of noise amplification by limiting the magnitude of the contrast enhancement, that is, by limiting the magnitude of the gradient.
Since the derivative of $T(\cdot)$ is the empirical density $\hat{f}_{G}$, the slope of the mapping function at any input intensity is proportional to the height of the histogram $\hat{f}_{G}$ at that intensity. Therefore, limiting the slope of the local mapping function is equivalent to clipping the height of the histogram. A detailed description of the implementation details of the clipping process can be found in [2].
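The following is a minimal sketch of the clip-and-redistribute step, assuming hist is a vector of raw bin counts and limit is an absolute count threshold. It performs a single redistribution pass; real implementations, including the one described in [2], typically iterate until no bin exceeds the limit.
function clip_and_redistribute(hist::Vector{Float64}, limit::Float64)
    clipped = min.(hist, limit)              # cap each bin at the limit
    excess = sum(hist) - sum(clipped)        # total count removed by clipping
    return clipped .+ excess / length(hist)  # spread the excess uniformly over all bins
end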
Options
Various options for the parameters of this function are described in more detail below.
Choices for img
The function can handle a variety of input types. The returned image depends on the input type.
For coloured images, the input is converted to YIQ type and the Y channel is equalised. This is then combined with the I and Q channels and the resulting image converted to the same type as the input.
Choices for nbins
in AdaptiveEqualization
You can specify the total number of bins in the histogram of each local region.
Choices for rblocks
and cblocks
in AdaptiveEqualization
The rblocks
and cblocks
specify the number of blocks into which the input image is divided in each direction. By default both values are set to eight.
Choices for clip
in AdaptiveEqualization
The clip
parameter must be a value between 0 and 1. It defines an implicit threshold at which a histogram is clipped. Counts that exceed the threshold are redistributed as equally as possible so that no bin exceeds the threshold limit. A value of zero means no clipping, whereas a value of one sets the threshold at the smallest feasible bin limit. A bin limit is feasible if all bin counts can be redistributed such that no bin count exceeds the limit. In practice, a clip
value of zero corresponds to maximal contrast enhancement, whereas a clip
value of one corresponds to minimal contrast enhancement. The default value is 0.1
.
Choices for minval
and maxval
in AdaptiveEqualization
If minval
and maxval
are specified then intensities are equalized to the range [minval
, maxval
]. The default values are 0 and 1.
Example
using TestImages, FileIO, ImageView
img = testimage("mandril_gray")
imgeq = adjust_histogram(img, AdaptiveEqualization(nbins = 256, rblocks = 4, cblocks = 4, clip = 0.2))
imshow(img)
imshow(imgeq)
References
- R. C. Gonzalez and R. E. Woods. Digital Image Processing (3rd Edition). Upper Saddle River, NJ, USA: Prentice-Hall, 2006.
- S. M. Pizer, E. P. Amburn, J. D. Austin, R. Cromartie, A. Geselowitz, T. Greer, B. ter Haar Romeny, J. B. Zimmerman and K. Zuiderveld “Adaptive histogram equalization and its variations,” Computer Vision, Graphics, and Image Processing, vol. 38, no. 1, p. 99, Apr. 1987. 10.1016/S0734-189X(87)80186-X
- W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes: The Art of Scientific Computing (3rd Edition). New York, NY, USA: Cambridge University Press, 2007.
ImageContrastAdjustment.Equalization
— Type Equalization <: AbstractHistogramAdjustmentAlgorithm
Equalization(; nbins = 256, minval = 0, maxval = 1)
adjust_histogram([T,] img, f::Equalization)
adjust_histogram!([out,] img, f::Equalization)
Returns a histogram equalized image with a granularity of nbins
number of bins.
Details
Histogram equalization was initially conceived to improve the contrast in a single-channel grayscale image. The method transforms the distribution of the intensities in an image so that they are as uniform as possible [1]. The natural justification for uniformity is that the image has better contrast if the intensity levels of an image span a wide range on the intensity scale. As it turns out, the necessary transformation is a mapping based on the cumulative histogram.
One can consider an $L$-bit single-channel $I \times J$ image with gray values in the set $\{0,1,\ldots,L-1 \}$, as a collection of independent and identically distributed random variables. Specifically, let the sample space $\Omega$ be the set of all $IJ$-tuples $\omega =(\omega_{11},\omega_{12},\ldots,\omega_{1J},\omega_{21},\omega_{22},\ldots,\omega_{2J},\ldots,\omega_{I1},\omega_{I2},\ldots,\omega_{IJ})$, where each $\omega_{ij} \in \{0,1,\ldots, L-1 \}$. Furthermore, impose a probability measure on $\Omega$ such that the functions $\Omega \ni \omega \to \omega_{ij} \in \{0,1,\ldots,L-1\}$ are independent and identically distributed.
One can then regard an image as a matrix of random variables $\mathbf{G} = [G_{i,j}(\omega)]$, where each function $G_{i,j}: \Omega \to \mathbb{R}$ is defined by
\[G_{i,j}(\omega) = \frac{\omega_{ij}}{L-1},\]
and each $G_{i,j}$ is distributed according to some unknown density $f_{G}$. While $f_{G}$ is unknown, one can approximate it with a normalized histogram of gray levels,
\[\hat{f}_{G}(v)= \frac{n_v}{IJ},\]
where
\[n_v = \left | \left\{(i,j)\, |\, G_{i,j}(\omega) = v \right \} \right |\]
represents the number of times a gray level with intensity $v$ occurs in $\mathbf{G}$. To transform the distribution of the intensities so that they are as uniform as possible one needs to find a mapping $T(\cdot)$ such that $T(G_{i,j}) \thicksim U$. The required mapping turns out to be the cumulative distribution function (CDF) of the empirical density $\hat{f}_{G}$,
\[ T(G_{i,j}) = \int_0^{G_{i,j}}\hat{f}_{G}(w)\mathrm{d} w.\]
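As an illustration of this mapping, the following sketch equalizes a grayscale array with values in [0, 1] via its empirical CDF; the function name and binning scheme are illustrative, not the package implementation.
function equalize_sketch(img::AbstractArray{<:Real}; nbins::Int = 256)
    bin(v) = clamp(1 + floor(Int, v * nbins), 1, nbins)
    counts = zeros(Int, nbins)
    for v in img
        counts[bin(v)] += 1                 # build the normalized histogram f̂_G (up to scaling)
    end
    cdf = cumsum(counts) ./ length(img)     # T, the cumulative histogram
    return map(v -> cdf[bin(v)], img)       # apply T(G_ij) pixelwise
end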
Options
Various options for the parameters of the adjust_histogram
function and Equalization
type are described in more detail below.
Choices for img
The adjust_histogram
function can handle a variety of input types. By default, the type of the returned image matches the input type.
For colored images, the input is converted to YIQ type and the Y channel is equalized. This is then combined with the I and Q channels and the resulting image converted to the same type as the input.
Choices for nbins
in Equalization
You can specify the total number of bins in the histogram.
Choices for minval
and maxval
in Equalization
If minval
and maxval
are specified then intensities are equalized to the range [minval
, maxval
]. The default values are 0 and 1.
Example
using TestImages, FileIO, ImageView
img = testimage("mandril_gray")
imgeq = adjust_histogram(img, Equalization(nbins = 256, minval = 0, maxval = 1))
imshow(img)
imshow(imgeq)
References
- R. C. Gonzalez and R. E. Woods. Digital Image Processing (3rd Edition). Upper Saddle River, NJ, USA: Prentice-Hall, 2006.
ImageContrastAdjustment.GammaCorrection
— Type GammaCorrection <: AbstractHistogramAdjustmentAlgorithm
GammaCorrection(; gamma = 1)
adjust_histogram([T,] img, f::GammaCorrection)
adjust_histogram!([out,] img, f::GammaCorrection)
Returns a gamma corrected image.
Details
Gamma correction is a non-linear transformation given by the relation
\[f(x) = x^\gamma \quad \text{for} \; x \in \mathbb{R}, \gamma > 0.\]
It is called a power law transformation because one quantity varies as a power of another quantity.
Gamma correction has historically been used to preprocess an image to compensate for the fact that the intensity of light generated by a physical device is not usually a linear function of the applied signal but instead follows a power law [1]. For example, for many Cathode Ray Tubes (CRTs) the emitted light intensity on the display is approximately equal to the voltage raised to the power of γ, where γ ∈ [1.8, 2.8]. Hence preprocessing a raw image with an exponent of 1/γ would have ensured a linear response to brightness.
Research in psychophysics has also established an empirical power law between light intensity and perceptual brightness. Hence, gamma correction often serves as a useful image enhancement tool.
Options
Various options for the parameters of the adjust_histogram
function and the Gamma
type are described in more detail below.
Choices for img
The function can handle a variety of input types. The returned image depends on the input type.
For colored images, the input is converted to YIQ type and the Y channel is gamma corrected. This is then combined with the I and Q channels and the resulting image converted to the same type as the input.
Choice for gamma
The gamma
value must be a positive number. A gamma
value less than one will yield a brighter image whereas a value greater than one will produce a darker image. If left unspecified, a default value of one is assumed.
Example
using ImageContrastAdjustment, ImageView
# Create an example image consisting of a linear ramp of intensities.
n = 32
intensities = 0.0:(1.0/n):1.0
img = repeat(intensities, inner=(20,20))'
# Brighten the dark tones.
imgadj = adjust_histogram(img, GammaCorrection(gamma = 1/2))
# Display the original and adjusted image.
imshow(img)
imshow(imgadj)
References
- W. Burger and M. J. Burge. Digital Image Processing. Texts in Computer Science, 2016. doi:10.1007/978-1-4471-6684-9
ImageContrastAdjustment.LinearStretching
— Type LinearStretching <: AbstractHistogramAdjustmentAlgorithm
LinearStretching(; [src_minval], [src_maxval],
dst_minval=0, dst_maxval=1,
no_clamp=false)
LinearStretching((src_minval, src_maxval) => (dst_minval, dst_maxval))
LinearStretching((src_minval, src_maxval) => nothing)
LinearStretching(nothing => (dst_minval, dst_maxval))
adjust_histogram([T,] img, f::LinearStretching)
adjust_histogram!([out,] img, f::LinearStretching)
Returns an image where the range of the intensities spans the interval [dst_minval
, dst_maxval
].
Details
Linear stretching (also called normalization) is a contrast enhancing transformation that is used to modify the dynamic range of the image. In particular, suppose that the input image has gray values in the range [A,B] and one wishes to change the dynamic range to [a,b] using a linear mapping, then the necessary transformation is given by the relation
\[f(x) = (x-A) \frac{b-a}{B-A} + a.\]
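Written directly in Julia, the relation reads as follows (a sketch of the formula, not the package internals):
linstretch(x, A, B, a, b) = (x - A) * (b - a) / (B - A) + a
linstretch(1.0, 0.0, 2.0, 0.0, 1.0)  # the midpoint of [0, 2] maps to 0.5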
Options
Various options for the parameters of the adjust_histogram
and LinearStretching
type are described in more detail below.
Choices for img
The function can handle a variety of input types. The returned image depends on the input type.
For colored images, the input is converted to the YIQ type and the intensities of the Y channel are stretched to the specified range. The modified Y channel is then combined with the I and Q channels and the resulting image converted to the same type as the input.
Choices for dst_minval
and dst_maxval
If destination value range dst_minval
and dst_maxval
are specified then intensities are mapped to the range [dst_minval
, dst_maxval
]. The default values are 0 and 1.
Choices for src_minval
and src_maxval
The source value range src_minval
and src_maxval
specify the intensity range of the input image. By default, the values are extrema(img)
(finite). If custom values are provided, the output intensity value will be clamped to the range [dst_minval, dst_maxval]
if it exceeds it.
no_clamp
Set no_clamp=true
to disable the automatic clamping even if the output intensity value exceeds the range [dst_minval, dst_maxval]
. Note that a clamp is still applied for types that have a limited value range; for example, if the input eltype is N0f8
, then the output will be clamped to [0.0N0f8, 1.0N0f8]
even if no_clamp==true
.
Example
using ImageContrastAdjustment, TestImages
img = testimage("mandril_gray")
# Stretches the contrast in `img` so that it spans the unit interval.
imgo = adjust_histogram(img, LinearStretching(dst_minval = 0, dst_maxval = 1))
For convenience, constructing a LinearStretching
object using Pair
is also supported:
# these two constructors are equivalent
LinearStretching(src_minval=0.1, src_maxval=0.9, dst_minval=0.05, dst_maxval=0.95)
LinearStretching((0.1, 0.9) => (0.05, 0.95))
# replace the part with `nothing` to use default values, e.g.,
# specify only destination value range
LinearStretching(nothing => (0.05, 0.95))
# specify only source value range and use default destination value range, i.e., (0, 1)
LinearStretching((0.1, 0.9) => nothing)
References
- W. Burger and M. J. Burge. Digital Image Processing. Texts in Computer Science, 2016. doi:10.1007/978-1-4471-6684-9
ImageContrastAdjustment.Matching
— Type Matching <: AbstractHistogramAdjustmentAlgorithm
Matching(targetimg; nbins = 256, edges = nothing)
adjust_histogram([T,] img, f::Matching)
adjust_histogram!([out,] img, f::Matching)
Returns a histogram matched image with a granularity of nbins
number of bins. The first argument img
is the image to be matched, whereas the argument targetimg
in Matching()
is the image having the desired histogram to be matched to.
Details
The purpose of histogram matching is to transform the intensities in a source image so that the intensities distribute according to the histogram of a specified target image. If one interprets histograms as piecewise-constant models of probability density functions (see build_histogram
), then the histogram matching task can be modelled as the problem of transforming one probability distribution into another [1]. It turns out that the solution to this transformation problem involves the cumulative and inverse cumulative distribution functions of the source and target probability density functions.
In particular, let the random variables $x \thicksim p_{x}$ and $z \thicksim p_{z}$ represent an intensity in the source and target image respectively, and let
\[ S(x) = \int_0^{x}p_{x}(w)\mathrm{d} w \quad \text{and} \quad T(z) = \int_0^{z}p_{z}(w)\mathrm{d} w\]
represent their concomitant cumulative distribution functions. Then the sought-after mapping $Q(\cdot)$ such that $Q(x) \thicksim p_{z}$ is given by
\[Q(x) = T^{-1}\left( S(x) \right),\]
where $T^{-1}(y) = \operatorname{min} \{ x \in \mathbb{R} : y \leq T(x) \}$ is the inverse cumulative distribution function of $T(x)$.
The mapping suggests that one can conceptualize histogram matching as performing histogram equalization on the source and target image and relating the two equalized histograms. Refer to adjust_histogram
for more details on histogram equalization.
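For two single-channel images with the same number of pixels, the mapping Q can be realized by a simple rank-order transfer, as in the following illustrative sketch (not the package implementation, which works on binned histograms):
function match_sketch(source::AbstractVector, target::AbstractVector)
    length(source) == length(target) || throw(ArgumentError("equal lengths required"))
    out = similar(source)
    out[sortperm(source)] = sort(target)  # i-th smallest source value receives i-th smallest target value
    return out
end
For an image, one would apply this to vec(img) and reshape the result back to the original size.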
Options
Various options for the parameters of the adjust_histogram
function and Matching
type are described in more detail below.
Choices for img
and targetimg
The adjust_histogram(img, Matching())
function can handle a variety of input types. The type of the returned image matches the input type.
For colored images, the inputs are converted to YIQ type and the distributions of the Y channels are matched. The modified Y channel is then combined with the I and Q channels and the resulting image converted to the same type as the input.
Choices for nbins
You can specify the total number of bins in the histogram. If you do not specify the number of bins then a default value of 256 bins is utilized.
Choices for edges
If you do not designate the number of bins, then you have the option to directly stipulate how the intervals will be divided by specifying an AbstractRange
type.
Example
using Images, TestImages, ImageView
img_source = testimage("mandril_gray")
img_target = adjust_histogram(img_source, GammaCorrection(gamma = 0.5))
img_transformed = adjust_histogram(img_source, Matching(targetimg = img_target))
#=
A visual inspection confirms that img_transformed resembles img_target
much more closely than img_source.
=#
imshow(img_source)
imshow(img_target)
imshow(img_transformed)
References
- W. Burger and M. J. Burge. Digital Image Processing. Texts in Computer Science, 2016. doi:10.1007/978-1-4471-6684-9
ImageContrastAdjustment.MidwayEqualization
— Type MidwayEqualization <: AbstractHistogramAdjustmentAlgorithm
MidwayEqualization(; nbins = 256, minval = 0, maxval = 1)
adjust_histogram([T,] img_sequence, f::MidwayEqualization(nbins = 256, edges = nothing))
adjust_histogram!([out_sequence,] img_sequence, f::MidwayEqualization(nbins = 256, edges = nothing))
Gives a pair of images the same histogram whilst maintaining as much as possible their previous grey level dynamics.
Details
The purpose of midway histogram equalization is to transform the intensities in a pair of images so that the intensities distribute according to a common "midway" distribution. The histogram representing the common distribution is chosen so that the original gray level dynamics of the images are preserved as much as possible. If one interprets histograms as piecewise-constant models of probability density functions (see build_histogram
), then the midway histogram equalization task can be modeled as the problem of transforming one probability distribution into another (see adjust_histogram
). It turns out that the solution to this transformation problem involves the cumulative and inverse cumulative distribution functions of the source and "midway" probability density functions. In particular, let the random variables $X_i \thicksim p_{x_i} \; (i = 1,2)$, and $Z \thicksim p_{z}$ represent an intensity in the first, second and "midway" image respectively, and let
\[ S_{X_i}(x) = \int_0^{x}p_{x_i}(w)\mathrm{d} w \; \quad \text{and} \quad T_{Z}(x) = \frac{2}{\frac{1}{S_{X_1}(x)} + \frac{1}{S_{X_2}(x)}}\]
represent the cumulative distribution functions of the two input images, and their harmonic mean, respectively. Then the sought-after mapping $Q_{X_i}(\cdot)$ $(i = 1,2)$ such that $Q_{X_i}(x) \thicksim p_{z}$ is given by
\[Q_{X_i}(x) = T_{Z}^{-1}\left( S_{X_i}(x) \right),\]
where $T_{Z}^{-1}(y) = \operatorname{min} \{ x \in \mathbb{R} : y \leq T_{Z}(x) \}$ is the inverse cumulative distribution function of $T_{Z}(x)$.
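The harmonic-mean construction of the midway CDF can be written down directly; here S1 and S2 are assumed to be the two empirical CDFs, represented as functions (an illustrative helper, not the package API):
midway_cdf(S1, S2) = x -> 2 / (1 / S1(x) + 1 / S2(x))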
Options
Various options for the parameters of the adjust_histogram
function and MidwayEqualization
types are described in more detail below.
Choices for img_sequence
The function adjust_histogram
expects a length-2 Vector
of images (the pair of images) and returns a length-2 Vector
of modified images. The function can handle a variety of input types. The type of the returned image matches the input type.
For colored images, the inputs are converted to YIQ type and the distributions of the Y channels are transformed according to a "midway" distribution. The modified Y channel is then combined with the I and Q channels and the resulting image converted to the same type as the input.
Choices for nbins
You can specify the total number of bins in the histogram. If you do not specify the number of bins then a default value of 256 bins is utilized.
Choices for edges
If you do not designate the number of bins, then you have the option to directly stipulate how the intervals will be divided by specifying an AbstractRange
type.
Example
using Images, TestImages, ImageView, ImageContrastAdjustment
img = testimage("mandril_gray")
# The same image but with different intensity distributions
img1 = adjust_histogram(img, GammaCorrection(gamma = 2))
img2 = adjust_histogram(img, GammaCorrection(gamma = 1.2))
# Midway histogram equalization will transform these two images so that their
# intensity distributions are almost identical.
img_sequence = adjust_histogram([img1, img2], MidwayEqualization(nbins = 256))
img1o = first(img_sequence)
img2o = last(img_sequence)
References
- T. Guillemot and J. Delon, “Implementation of the Midway Image Equalization,” Image Processing On Line, vol. 5, pp. 114–129, Jun. 2016. doi:10.5201/ipol.2016.140
Images.cliphist
— Functionclipped_hist = cliphist(hist, clip)
Clips the histogram above a certain value clip
. The excess left in the bins exceeding clip
is redistributed among the remaining bins.
Images.imstretch
— Functionimgs = imstretch(img, m, slope)
enhances or reduces (for slope > 1 or < 1, respectively) the contrast near saturation (0 and 1). This is essentially a symmetric gamma-correction. For a pixel of brightness p
, the new intensity is 1/(1+(m/(p+eps))^slope)
.
This assumes the input img
has intensities between 0 and 1.
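The mapping can be written out directly (an illustrative sketch; eps() guards against division by zero exactly as in the formula above):
imstretch_px(p, m, slope) = 1 / (1 + (m / (p + eps()))^slope)
imstretch_px.(rand(4, 4), 0.5, 2)  # apply elementwise to an image with intensities in [0, 1]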
Images.imadjustintensity
— Functionimadjustintensity(img [, (minval,maxval)]) -> Image
Map intensities over the interval (minval,maxval)
to the interval [0,1]
. This is equivalent to map(ScaleMinMax(eltype(img), minval, maxval), img)
. (minval,maxval) defaults to extrema(img)
.
Images.complement
— Functiony = complement(x)
Take the complement 1-x
of x
. If x
is a color with an alpha channel, the alpha channel is left untouched. Don't forget to add a dot when x
is an array: complement.(x)
Spatial transformations and resizing
ImageTransformations.imresize
— Functionimresize(img, sz) -> imgr
imresize(img, inds) -> imgr
imresize(img; ratio) -> imgr
Change img
to be of size sz
(or to have indices inds
). If ratio
is used, then sz = ceil(Int, size(img).*ratio)
. This interpolates the values at sub-pixel locations. If you are shrinking the image, you risk aliasing unless you low-pass filter img
first.
The keyword method
takes any InterpolationType from Interpolations.jl or a Degree, which is used to define a BSpline interpolation of that degree, in order to set the interpolation method used in the image resizing.
Examples
julia> img = testimage("lena_gray_256") # 256*256
julia> imresize(img, 128, 128) # 128*128
julia> imresize(img, 1:128, 1:128) # 128*128
julia> imresize(img, (128, 128)) # 128*128
julia> imresize(img, (1:128, 1:128)) # 128*128
julia> imresize(img, (1:128, )) # 128*256
julia> imresize(img, 128) # 128*256
julia> imresize(img, ratio = 0.5) #128*128
julia> imresize(img, ratio = (2, 1)) # 256*128
julia> imresize(img, (128,128), method=Linear()) #128*128
julia> imresize(img, (128,128), method=BSpline(Linear())) #128*128
julia> imresize(img, (128,128), method=Lanczos4OpenCV()) #128*128
To low-pass filter before downsizing, blur with a Gaussian kernel whose width scales with the reduction factor, for example:
σ = map((o,n)->0.75*o/n, size(img), sz)
kern = KernelFactors.gaussian(σ) # from ImageFiltering
imgr = imresize(imfilter(img, kern, NA()), sz)
See also restrict
.
ImageTransformations.imrotate
— Functionimrotate(img, θ, [indices], [degree = Linear()], [fill = NaN]) -> imgr
Rotate image img
by θ
∈ [0, 2π) in a clockwise direction around its center point. To rotate the image counterclockwise, specify a negative value for the angle.
By default, rotated image imgr
will not be cropped. Bilinear interpolation will be used and values outside the image are filled with NaN
if possible, otherwise with 0
.
Examples
julia> img = testimage("cameraman")
# rotate with bilinear interpolation but without cropping
julia> imrotate(img, π/4)
# rotate with bilinear interpolation and with cropping
julia> imrotate(img, π/4, axes(img))
# rotate with nearest interpolation but without cropping
julia> imrotate(img, π/4, Constant())
The keyword `method` also takes any InterpolationType from Interpolations.jl or a Degree, which is used to define a BSpline interpolation of that degree, in order to set the interpolation method used during image rotation.
# rotate with Linear interpolation without cropping
julia> imrotate(img, π/4, method = Linear())
# rotate with Lanczos4OpenCV interpolation without cropping
julia> imrotate(img, π/4, method = Lanczos4OpenCV())
See also warp
.
ImageTransformations.restrict
— Functionrestrict(img[, region]) -> imgr
Reduce the size of img
by approximately two-fold along the dimensions listed in region
, or all spatial coordinates if region
is not specified. The term restrict
is taken from the coarsening operation of algebraic multigrid methods; it is the adjoint of "prolongation" (which is essentially interpolation). restrict
anti-aliases the image as it goes, so is better than a naive summation over 2x2 blocks. The implementation of restrict
has been tuned for performance, and should be a fast method for constructing pyramids.
If l
is the size of img
along a particular dimension, restrict
produces an array of size (l+1)÷2
for odd l
, and l÷2 + 1
for even l
. See the example below for an explanation.
See also imresize
.
Example
a_coarse = [0, 1, 0.3]
If we were to interpolate this at the halfway points, we'd get
a_fine = [0, 0.5, 1, 0.65, 0.3]
Note that a_fine
is obtained from a_coarse
via the prolongation operator P
as P*a_coarse
, where
P = [1 0 0; # this line "copies over" the first point
0.5 0.5 0; # this line takes the mean of the first and second point
0 1 0; # copy the second point
0 0.5 0.5; # take the mean of the second and third
0 0 1] # copy the third
restrict
is the adjoint of prolongation. Consequently,
julia> restrict(a_fine)
3-element Array{Float64,1}:
0.125
0.7875
0.3125
julia> (P'*a_fine)/2
3-element Array{Float64,1}:
0.125
0.7875
0.3125
where the division by 2 approximately preserves the mean intensity of the input.
As we see here, for odd-length a_fine
, restriction is the adjoint of interpolation at half-grid points. When length(a_fine)
is even, restriction is the adjoint of interpolation at 1/4 and 3/4-grid points. This turns out to be the origin of the l->l÷2 + 1
behavior.
One consequence of this definition is that the edges move towards zero:
julia> restrict(ones(11))
6-element Array{Float64,1}:
0.75
1.0
1.0
1.0
1.0
0.75
In some applications (e.g., image registration), you may find it useful to trim the edges.
ImageTransformations.warp
— Functionwarp(img, tform, [indices], [degree = Linear()], [fill = NaN]) -> imgw
Transform the coordinates of img
, returning a new imgw
satisfying imgw[I] = img[tform(I)]
. This approach is known as backward mode warping. The transformation tform
must accept a SVector
as input. A useful package to create a wide variety of such transformations is CoordinateTransformations.jl.
Reconstruction scheme
During warping, values for img
must be reconstructed at arbitrary locations tform(I)
which do not lie on the lattice of pixels. How this reconstruction is done depends on the type of img
and the optional parameter degree
.
When img
is a plain array, then on-grid b-spline interpolation will be used. It is possible to configure what degree of b-spline to use with the parameter degree
. For example one can use degree = Linear()
for linear interpolation, degree = Constant()
for nearest neighbor interpolation, or degree = Quadratic(Flat())
for quadratic interpolation.
In the case tform(I)
maps to indices outside the original img
, those locations are set to a value fill
(which defaults to NaN
if the element type supports it, and 0
otherwise). The parameter fill
also accepts extrapolation schemes, such as Flat()
, Periodic()
or Reflect()
.
For more control over the reconstruction scheme, and how beyond-the-edge points are handled, pass img
as an AbstractInterpolation
or AbstractExtrapolation
from Interpolations.jl.
The keyword method
also takes any InterpolationType from Interpolations.jl or a Degree, which is used to define a BSpline interpolation of that degree, in order to set the interpolation method used.
The meaning of the coordinates
The output array imgw
has indices that would result from applying inv(tform)
to the indices of img
. This can be very handy for keeping track of how pixels in imgw
line up with pixels in img
.
If you just want a plain array, you can "strip" the custom indices with parent(imgw)
.
Examples: a 2d rotation (see JuliaImages documentation for pictures)
julia> using Images, CoordinateTransformations, Rotations, TestImages, OffsetArrays
julia> img = testimage("lighthouse");
julia> axes(img)
(Base.OneTo(512),Base.OneTo(768))
# Rotate around the center of `img`
julia> tfm = recenter(RotMatrix(-pi/4), center(img))
AffineMap([0.707107 0.707107; -0.707107 0.707107], [-196.755,293.99])
julia> imgw = warp(img, tfm);
julia> axes(imgw)
(-196:709,-68:837)
# Alternatively, specify the origin in the image itself
julia> img0 = OffsetArray(img, -30:481, -384:383); # origin near top of image
julia> rot = LinearMap(RotMatrix(-pi/4))
LinearMap([0.707107 -0.707107; 0.707107 0.707107])
julia> imgw = warp(img0, rot);
julia> axes(imgw)
(-293:612,-293:611)
julia> imgr = parent(imgw);
using ImageTransformations, CoordinateTransformations, Rotations, TestImages, OffsetArrays
using OffsetArrays: IdOffsetRange
img = testimage("lighthouse")  # axes (1:512, 1:768)
tfm = recenter(RotMatrix(-pi/4), center(img))
imgw = warp(img, tfm)
axes(imgw)
# output
(IdOffsetRange(values=-196:709, indices=-196:709), IdOffsetRange(values=-68:837, indices=-68:837))
ImageTransformations.warpedview
— Functionwarpedview(img, tform, [indices], [degree = Linear()], [fill = NaN]) -> wv
Create a view of img
that lazily transforms any given index I
passed to wv[I]
to correspond to img[tform(I)]
. This approach is known as backward mode warping. The given transformation tform
must accept a SVector
as input. A useful package to create a wide variety of such transformations is CoordinateTransformations.jl.
When invoking wv[I]
, values for img
must be reconstructed at arbitrary locations tform(I)
which do not lie on the lattice of pixels. How this reconstruction is done depends on the type of img
and the optional parameter degree
. When img
is a plain array, then on-grid b-spline interpolation will be used, where the pixels of img
will serve as the coefficients. It is possible to configure what degree of b-spline to use with the parameter degree
. The two possible values are degree = Linear()
for linear interpolation, or degree = Constant()
for nearest neighbor interpolation.
In the case tform(I)
maps to indices outside the domain of img
, those locations are set to a value fill
(which defaults to NaN
if the element type supports it, and 0
otherwise). Additionally, the parameter fill
also accepts extrapolation schemes, such as Flat()
, Periodic()
or Reflect()
.
The optional parameter indices
can be used to specify the domain of the resulting WarpedView
. By default the indices are computed in such a way that the resulting WarpedView
contains all the original pixels in img
. To do this inv(tform)
has to be computed. If the given transformation tform
does not support inv
, then the parameter indices
has to be specified manually.
warpedview
is essentially a non-copying, lazy version of warp
. As such, the two functions share the same interface, with one important difference. warpedview
will insist that the resulting WarpedView
will be a view of img
(i.e. parent(warpedview(img, ...)) === img
). Consequently, warpedview
restricts the parameter degree
to be either Linear()
or Constant()
.
ImageTransformations.invwarpedview
— Functioninvwarpedview(img, tinv, [indices], [degree = Linear()], [fill = NaN]) -> wv
Create a view of img
that lazily transforms any given index I
passed to wv[I]
to correspond to img[inv(tinv)(I)]
. While technically this approach is known as backward mode warping, note that InvWarpedView
is created by supplying the forward transformation. The given transformation tinv
must accept a SVector
as input and support inv(tinv)
. A useful package to create a wide variety of such transformations is CoordinateTransformations.jl.
When invoking wv[I]
, values for img
must be reconstructed at arbitrary locations inv(tinv)(I)
. InvWarpedView
serves as a wrapper around WarpedView
which takes care of interpolation and extrapolation. The parameters degree
and fill
can be used to specify the b-spline degree and the extrapolation scheme respectively.
The optional parameter indices
can be used to specify the domain of the resulting wv
. By default the indices are computed in such a way that wv
contains all the original pixels in img
.
ImageTransformations.WarpedView
— TypeWarpedView(img, tform, [indices]) -> wv
Create a view of img
that lazily transforms any given index I
passed to wv[I]
to correspond to img[tform(I)]
. This approach is known as backward mode warping.
The optional parameter indices
can be used to specify the domain of the resulting wv
. By default the indices are computed in such a way that wv
contains all the original pixels in img
. To do this inv(tform)
has to be computed. If the given transformation tform
does not support inv
, then the parameter indices
has to be specified manually.
see warpedview
for more information.
ImageTransformations.InvWarpedView
— TypeInvWarpedView(img, tinv, [indices]) -> wv
Create a view of img
that lazily transforms any given index I
passed to wv[I]
to correspond to img[inv(tinv)(I)]
. While technically this approach is known as backward mode warping, note that InvWarpedView
is created by supplying the forward transformation.
The conceptual difference to WarpedView
is that InvWarpedView
is intended to be used when reasoning about the image is more convenient than reasoning about the indices. Furthermore, InvWarpedView
allows simple nesting of transformations, in which case the transformations will be composed into a single one.
The optional parameter indices
can be used to specify the domain of the resulting wv
. By default the indices are computed in such a way that wv
contains all the original pixels in img
.
see invwarpedview
for more information.
Image statistics
Functions for image statistics are spread across Images.jl, ImageDistances.jl and ImageQualityIndexes.jl
Images.minfinite
— Functionm = minfinite(A)
calculates the minimum value in A
, ignoring any values that are not finite (Inf or NaN).
Images.maxfinite
— Functionm = maxfinite(A)
calculates the maximum value in A
, ignoring any values that are not finite (Inf or NaN).
Images.maxabsfinite
— Functionm = maxabsfinite(A)
calculates the maximum absolute value in A
, ignoring any values that are not finite (Inf or NaN).
Images.meanfinite
— FunctionM = meanfinite(img, region)
calculates the mean value along the dimensions listed in region
, ignoring any non-finite values.
Images.entropy
— Functionentropy(logᵦ, img)
entropy(img; [kind=:shannon])
Compute the entropy of a grayscale image defined as -sum(p.*logᵦ(p))
. The base β of the logarithm (a.k.a. entropy unit) is one of the following:
- :shannon (log base 2, default), or use logᵦ = log2
- :nat (log base e), or use logᵦ = log
- :hartley (log base 10), or use logᵦ = log10
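For instance, a uniformly random 8-bit grayscale image has Shannon entropy close to the maximum of 8 bits for 256 gray levels (a usage sketch):
using Images
img = rand(Gray{N0f8}, 256, 256)
entropy(img)       # roughly 8, using the default kind = :shannon
entropy(log, img)  # the same entropy measured in nats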
General Distances
type name | convenient syntax | math definition |
---|---|---|
Euclidean | euclidean(x, y) | sqrt(sum((x - y) .^ 2)) |
SqEuclidean | sqeuclidean(x, y) | sum((x - y).^2) |
Cityblock | cityblock(x, y) | sum(abs(x - y)) |
TotalVariation | totalvariation(x, y) | sum(abs(x - y)) / 2 |
Minkowski | minkowski(x, y, p) | sum(abs(x - y).^p) ^ (1/p) |
Hamming | hamming(x, y) | sum(x .!= y) |
SumAbsoluteDifference | sad(x, y) | sum(abs(x - y)) |
SumSquaredDifference | ssd(x, y) | sum((x - y).^2) |
MeanAbsoluteError | mae(x, y) , sadn(x, y) | sum(abs(x - y))/len(x) |
MeanSquaredError | mse(x, y) , ssdn(x, y) | sum((x - y).^2)/len(x) |
RootMeanSquaredError | rmse(x, y) | sqrt(sum((x - y) .^ 2)) |
NCC | ncc(x, y) | dot(x,y)/(norm(x)*norm(y)) |
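These evaluators come from ImageDistances.jl and can be called through the convenient syntax shown in the table; a brief sketch checking two of them against their definitions:
using ImageDistances
x, y = rand(8, 8), rand(8, 8)
sqeuclidean(x, y) ≈ sum((x .- y) .^ 2)      # true
mae(x, y) ≈ sum(abs.(x .- y)) / length(x)   # true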
Image-specific Distances
Distance type | Convenient syntax | References |
---|---|---|
Hausdorff and ModifiedHausdorff | hausdorff(imgA,imgB) and modified_hausdorff(imgA,imgB) | Dubuisson, M-P et al. 1994. A Modified Hausdorff Distance for Object-Matching. |
CIEDE2000 | ciede2000(imgA,imgB) and ciede2000(imgA,imgB; metric=DE_2000()) | Sharma, G., Wu, W., and Dalal, E. N., 2005. The CIEDE2000 color‐difference formula. |
Image metrics
ImageQualityIndexes.PSNR
— TypePSNR <: FullReferenceIQI
assess(PSNR(), x, ref, [, peakval])
assess_psnr(x, ref [, peakval])
Peak signal-to-noise ratio (PSNR) is used to measure the quality of an image in the presence of noise and corruption.
For gray image x
, PSNR (in dB) is calculated by 10log10(peakval^2/mse(x, ref))
, where peakval
is the maximum possible pixel value of image ref
. x
will be converted to the type of ref
when necessary.
Generally, for a non-gray image x
, PSNR is reported against each channel of ref
and outputs a Vector
; in this case peakval
needs to be a vector as well.
Conventionally, an m×n
RGB image is treated as an m×n×3
gray image. To calculate the channelwise PSNR of an RGB image, one could pass peakval
as a vector, e.g., psnr(x, ref, [1.0, 1.0, 1.0])
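Written from the definition, PSNR for a pair of gray floating-point images is simply (an illustrative helper, not the package implementation; ImageQualityIndexes provides assess_psnr):
psnr_gray(x, ref, peakval = 1.0) =
    10 * log10(peakval^2 / (sum(abs2, x .- ref) / length(x)))  # 10log10(peakval²/mse)
psnr_gray(rand(8, 8), rand(8, 8))  # PSNR in dB against a peak value of 1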
ImageQualityIndexes.SSIM
— TypeSSIM([kernel], [(α, β, γ)]; crop=false) <: FullReferenceIQI
assess(iqi::SSIM, img, ref)
assess_ssim(img, ref; crop=false)
Structural similarity (SSIM) index is an image quality assessment method based on degradation of structural information.
The SSIM index is composed of three components: luminance, contrast, and structure; ssim = 𝐿ᵅ * 𝐶ᵝ * 𝑆ᵞ
, where W := (α, β, γ)
controls the relative importance of each component. By default W = (1.0, 1.0, 1.0)
.
In practice, a mean version of SSIM is used. At each pixel, SSIM is calculated locally over neighborhoods weighted by kernel
, returning an ssim map; ssim
is actually mean(ssim_map)
. By default kernel = KernelFactors.gaussian(1.5, 11)
.
The default parameters come from [1]. For benchmark usage, it is recommended not to change the parameters, because most other SSIM implementations follow the same settings. Keyword crop
controls whether the boundary of the ssim map should be dropped; you need to set it to true
to reproduce the result in [1].
Example
assess_ssim(img, ref)
should be sufficient to get a benchmark for algorithms. One could also instantiate a custom SSIM, then pass it to assess
or use it as a function. For example:
iqi = SSIM(KernelFactors.gaussian(2.5, 17), (1.0, 1.0, 2.0))
assess(iqi, img, ref)
iqi(img, ref) # both usages are equivalent
SSIM is defined only for gray images. How RGB
and other Color3
images are handled may vary in different implementations. For RGB
images, channels are handled separately when calculating 𝐿, 𝐶, and 𝑆. Generic Color3
images are converted to RGB
first before calculation.
Reference
[1] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4), 600-612.
[2] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2003). The SSIM Index for Image Quality Assessment. Retrieved May 30, 2019, from http://www.cns.nyu.edu/~lcv/ssim/
ImageQualityIndexes.colorfulness
— Function M = colorfulness(img)
M = colorfulness(HASLER_AND_SUSSTRUNK_M3(), img)
Measures the colorfulness of a natural image. Uses the HASLER_AND_SUSSTRUNK_M3
method by default.
See also: HASLER_AND_SUSSTRUNK_M3
.
ImageQualityIndexes.HASLER_AND_SUSSTRUNK_M3
— TypeHASLER_AND_SUSSTRUNK_M3 <: NoReferenceIQI
M = hasler_and_susstrunk_m3(img)
Calculates the colorfulness of a natural image using method M3 from [1]. As a guide to interpretation of results, the authors suggest:
Attribute | M3 |
---|---|
Not colorful | 0 |
Slightly colorful | 15 |
Moderately colorful | 33 |
Averagely colorful | 45 |
Quite colorful | 59 |
Highly colorful | 82 |
Extremely colorful | 109 |
[1] Hasler, D. and Süsstrunk, S.E., 2003, June. Measuring colorfulness in natural images. In Human vision and electronic imaging VIII (Vol. 5007, pp. 87-96). International Society for Optics and Photonics.
Morphological operations
ImageMorphology.dilate
— Functionimgd = dilate(img, [region])
performs a max-filter over nearest-neighbors. The default is 8-connectivity in 2d, 27-connectivity in 3d, etc. You can specify the list of dimensions that you want to include in the connectivity, e.g., region = [1,2]
would exclude the third dimension from filtering.
Examples
julia> img = zeros(5, 5); img[3, 3] = 1.; img
5×5 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
julia> dilate(img)
5×5 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 1.0 0.0
0.0 1.0 1.0 1.0 0.0
0.0 1.0 1.0 1.0 0.0
0.0 0.0 0.0 0.0 0.0
julia> dilate(img, 1)
5×5 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
ImageMorphology.erode
— Functionimge = erode(img, [region])
performs a min-filter over nearest-neighbors. The default is 8-connectivity in 2d, 27-connectivity in 3d, etc. You can specify the list of dimensions that you want to include in the connectivity, e.g., region = [1,2]
would exclude the third dimension from filtering.
Examples
julia> img = zeros(5, 5); img[2:4, 2:4] .= 1.; img
5×5 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 1.0 0.0
0.0 1.0 1.0 1.0 0.0
0.0 1.0 1.0 1.0 0.0
0.0 0.0 0.0 0.0 0.0
julia> erode(img)
5×5 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
julia> erode(img, 1)
5×5 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 1.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
ImageMorphology.opening
— Functionimgo = opening(img, [region])
performs the opening
morphology operation, equivalent to dilate(erode(img))
. region
allows you to control the dimensions over which this operation is performed.
Examples
julia> img = zeros(5, 5); img[1, 1] = 1.; img[3:5, 3:5] .= 1.; img
5×5 Array{Float64,2}:
1.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0
0.0 0.0 1.0 1.0 1.0
0.0 0.0 1.0 1.0 1.0
julia> opening(img)
5×5 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0
0.0 0.0 1.0 1.0 1.0
0.0 0.0 1.0 1.0 1.0
ImageMorphology.closing
— Functionimgc = closing(img, [region])
performs the closing
morphology operation, equivalent to erode(dilate(img))
. region
allows you to control the dimensions over which this operation is performed.
Examples
julia> img = zeros(7, 7); img[3:5, 3:5] .= 1.; img[4, 4] = 0.; img
7×7 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0 1.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> closing(img)
7×7 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
ImageMorphology.tophat
— Functionimgth = tophat(img, [region])
performs top hat
of an image, which is defined as the image minus its morphological opening. region
allows you to control the dimensions over which this operation is performed.
Examples
julia> img = zeros(5, 5); img[1, 1] = 1.; img[3:5, 3:5] .= 1.; img
5×5 Array{Float64,2}:
1.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0
0.0 0.0 1.0 1.0 1.0
0.0 0.0 1.0 1.0 1.0
julia> tophat(img)
5×5 Array{Float64,2}:
1.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0
ImageMorphology.bothat
— Functionimgbh = bothat(img, [region])
performs bottom hat
of an image, which is defined as its morphological closing minus the original image. region
allows you to control the dimensions over which this operation is performed.
Examples
julia> img = zeros(7, 7); img[3:5, 3:5] .= 1.; img[4, 4] = 0.; img
7×7 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0 1.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> bothat(img)
7×7 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 1.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
ImageMorphology.morphogradient
— Functionimgmg = morphogradient(img, [region])
returns morphological gradient of the image, which is the difference between the dilation and the erosion of a given image. region
allows you to control the dimensions over which this operation is performed.
Examples
julia> img = zeros(7, 7); img[3:5, 3:5] .= 1.; img
7×7 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> morphogradient(img)
7×7 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 1.0 1.0 1.0 0.0
0.0 1.0 1.0 1.0 1.0 1.0 0.0
0.0 1.0 1.0 0.0 1.0 1.0 0.0
0.0 1.0 1.0 1.0 1.0 1.0 0.0
0.0 1.0 1.0 1.0 1.0 1.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
ImageMorphology.morpholaplace
— Functionimgml = morpholaplace(img, [region])
performs Morphological Laplacian
of an image, which is defined as the arithmetic difference between the internal and the external gradient. region
allows you to control the dimensions over which this operation is performed.
Examples
julia> img = zeros(7, 7); img[3:5, 3:5] .= 1.; img[4, 4] = 0.; img
7×7 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 1.0 0.0 1.0 0.0 0.0
0.0 0.0 1.0 1.0 1.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
julia> morpholaplace(img)
7×7 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 1.0 1.0 1.0 1.0 0.0
0.0 1.0 -1.0 -1.0 -1.0 1.0 0.0
0.0 1.0 -1.0 1.0 -1.0 1.0 0.0
0.0 1.0 -1.0 -1.0 -1.0 1.0 0.0
0.0 1.0 1.0 1.0 1.0 1.0 0.0
0.0 0.0 0.0 0.0 0.0 0.0 0.0
ImageMorphology.label_components
— Functionlabel = label_components(tf, [connectivity])
label = label_components(tf, [region])
Find the connected components in a binary array tf
. There are two forms that connectivity
can take:
It can be a boolean array of the same dimensionality as
tf
, of size 1 or 3 along each dimension. Each entry in the array determines whether a given neighbor is used for connectivity analyses. For example,connectivity = trues(3,3)
would use 8-connectivity and test all pixels that touch the current one, even the corners.You can provide a list indicating which dimensions are used to determine connectivity. For example,
region = [1,3]
would not test neighbors along dimension 2 for connectivity. This corresponds to just the nearest neighbors, i.e., 4-connectivity in 2d and 6-connectivity in 3d.
The default is region = 1:ndims(tf)
.
The output label
is an integer array, where 0 is used for background pixels, and each connected region gets a different integer index.
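A small usage sketch with the default connectivity (the exact label numbering is implementation-defined; the comment is indicative):
using Images
tf = Bool[1 1 0; 0 0 0; 0 1 1]
label_components(tf)  # two 4-connected components, labels 1 and 2; 0 marks the background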
ImageMorphology.component_boxes
— Functioncomponent_boxes(labeled_array)
-> an array of bounding boxes for each label, including the background label 0
ImageMorphology.component_lengths
— Functioncomponent_lengths(labeled_array)
-> an array of areas (2D), volumes (3D), etc. for each label, including the background label 0
ImageMorphology.component_indices
— Functioncomponent_indices(labeled_array)
-> an array of pixels for each label, including the background label 0
ImageMorphology.component_subscripts
— Functioncomponent_subscripts(labeled_array)
-> an array of pixels for each label, including the background label 0
ImageMorphology.component_centroids
— Functioncomponent_centroids(labeled_array)
-> an array of centroids for each label, including the background label 0
ImageMorphology.FeatureTransform.feature_transform
— Functionfeature_transform(I::AbstractArray{Bool, N}, [w=nothing]) -> F
Compute the feature transform of a binary image I
, finding the closest "feature" (positions where I
is true
) for each location in I
. Specifically, F[i]
is a CartesianIndex
encoding the position closest to i
for which I[F[i]]
is true
. In cases where two or more features in I
have the same distance from i
, an arbitrary feature is chosen. If I
has no true
values, then all locations are mapped to an index where each coordinate is typemin(Int)
.
Optionally specify the weight w
assigned to each coordinate. For example, if I
corresponds to an image where voxels are anisotropic, w
could be the voxel spacing along each coordinate axis. The default value of nothing
is equivalent to w=(1,1,...)
.
See also: distance_transform
.
Citation
'A Linear Time Algorithm for Computing Exact Euclidean Distance Transforms of Binary Images in Arbitrary Dimensions' Maurer et al., 2003
ImageMorphology.FeatureTransform.distance_transform
— Functiondistance_transform(F::AbstractArray{CartesianIndex}, [w=nothing]) -> D
Compute the distance transform of F
, where each element F[i]
represents a "target" or "feature" location assigned to i
. Specifically, D[i]
is the distance between i
and F[i]
. Optionally specify the weight w
assigned to each coordinate; the default value of nothing
is equivalent to w=(1,1,...)
.
See also: feature_transform
.
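A small sketch combining both functions, with a single feature at the center of a 5×5 image:
using Images
I = falses(5, 5); I[3, 3] = true
F = feature_transform(I)   # every entry is CartesianIndex(3, 3), the lone feature
D = distance_transform(F)  # Euclidean distance from each location to its nearest feature
D[1, 1] ≈ sqrt(8)          # true: the corner is √(2² + 2²) away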
ImageMorphology.convexhull
— Functionchull = convexhull(img)
Computes the convex hull of a binary image and returns the vertices of the convex hull as a CartesianIndex array.
ImageMorphology.GuoAlgo
— Typestruct GuoAlgo <: ThinAlgo end
The Guo algorithm evaluates three conditions in order to determine which pixels of the image should be removed.
The three conditions are explained on page 361 of Guo, Z., & Hall, R. W. (1989). Parallel thinning with two-subiteration algorithms. Communications of the ACM, 32(3), 359-373.
ImageMorphology.thinning
— Functionthinning(img::AbstractArray{Bool}; algo::ThinAlgo=GuoAlgo())
Applies a binary blob thinning operation to achieve a skeletonization of the input image.
See also: GuoAlgo
Interpolation
Images.bilinear_interpolation
— FunctionP = bilinear_interpolation(img, r, c)
Bilinear Interpolation is used to interpolate functions of two variables on a rectilinear 2D grid.
The interpolation is done in one direction first and then the values obtained are used to do the interpolation in the second direction.
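A sketch of the two-pass scheme just described, for a point (r, c) inside a matrix (illustrative, not the package implementation):
function bilerp(A::AbstractMatrix, r::Real, c::Real)
    r0, c0 = floor(Int, r), floor(Int, c)
    r1, c1 = min(r0 + 1, size(A, 1)), min(c0 + 1, size(A, 2))
    t, u = r - r0, c - c0
    top = (1 - u) * A[r0, c0] + u * A[r0, c1]  # first direction: interpolate within row r0
    bot = (1 - u) * A[r1, c0] + u * A[r1, c1]  # and within row r1
    return (1 - t) * top + t * bot             # second direction: interpolate between the rows
end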
Integral Images
Images.integral_image
— Functionintegral_img = integral_image(img)
Returns the integral image of an image. The integral image is calculated by assigning to each pixel the sum of all pixels above it and to its left, i.e. the rectangle from (1, 1) to the pixel. An integral image is a data structure which helps in efficient calculation of sum of pixels in a rectangular subset of an image. See boxdiff
for more information.
Images.boxdiff
— Functionsum = boxdiff(integral_image, ytop:ybot, xtop:xbot)
sum = boxdiff(integral_image, CartesianIndex(tl_y, tl_x), CartesianIndex(br_y, br_x))
sum = boxdiff(integral_image, tl_y, tl_x, br_y, br_x)
An integral image is a data structure which helps in efficient calculation of sum of pixels in a rectangular subset of an image. It stores at each pixel the sum of all pixels above it and to its left. The sum of a window in an image can be directly calculated using four array references of the integral image, irrespective of the size of the window, given the yrange
and xrange
of the window. Given an integral image -
A - - - - - - B -
- * * * * * * * -
- * * * * * * * -
- * * * * * * * -
- * * * * * * * -
- * * * * * * * -
C * * * * * * D -
- - - - - - - - -
The sum of pixels in the area denoted by * is given by S = D + A - B - C.
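A minimal sketch of this identity, building the integral image with nested cumulative sums (helper names are illustrative; the package provides integral_image and boxdiff):
img = rand(10, 10)
II = cumsum(cumsum(img, dims = 1), dims = 2)  # integral image
function boxsum(II, y0, x0, y1, x1)           # S = D + A - B - C
    D = II[y1, x1]
    A = (y0 > 1 && x0 > 1) ? II[y0 - 1, x0 - 1] : 0.0
    B = y0 > 1 ? II[y0 - 1, x1] : 0.0
    C = x0 > 1 ? II[y1, x0 - 1] : 0.0
    return D + A - B - C
end
boxsum(II, 3, 3, 6, 6) ≈ sum(img[3:6, 3:6])   # true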
Pyramids
Images.gaussian_pyramid
— Functionpyramid = gaussian_pyramid(img, n_scales, downsample, sigma)
Returns a Gaussian pyramid with n_scales
levels, each downsampled by a factor downsample
(> 1), using a Gaussian kernel with standard deviation sigma
.
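A usage sketch (assuming the returned vector holds the original image followed by the progressively downsampled levels):
using Images, TestImages
img = testimage("cameraman")
pyr = gaussian_pyramid(img, 3, 2, 1.0)
size.(pyr)  # sizes roughly halve at each of the 3 levels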
Image metadata utilities
ImageMetadata.ImageMeta
— TypeImageMeta
is an AbstractArray that can have metadata, stored in a dictionary.
Construct an image with ImageMeta(A, props)
(for a properties dictionary props
), or with ImageMeta(A, prop1=val1, prop2=val2, ...)
.
ImageAxes.arraydata
— Functionarraydata(img::ImageMeta) -> array
Extract the data from img
, omitting the properties dictionary. array
shares storage with img
, so changes to one affect the other.
See also: properties
.
ImageMetadata.properties
— Functionproperties(imgmeta) -> props
Extract the properties dictionary props
for imgmeta
. props
shares storage with imgmeta
, so changes to one affect the other.
See also: arraydata
.
ImageMetadata.copyproperties
— Functioncopyproperties(img::ImageMeta, data) -> imgnew
Create a new "image," copying the properties dictionary of img
but using the data of the AbstractArray data
. Note that changing the properties of imgnew
does not affect the properties of img
.
See also: shareproperties
.
ImageMetadata.shareproperties
— Functionshareproperties(img::ImageMeta, data) -> imgnew
Create a new "image," reusing the properties dictionary of img
but using the data of the AbstractArray data
. The two images have synchronized properties; modifying one also affects the other.
See also: copyproperties
.
ImageMetadata.spatialproperties
— Functionspatialproperties(img)
Return a vector of strings, containing the names of properties that have been declared "spatial" and hence should be permuted when calling permutedims
. Declare such properties like this:
img[:spatialproperties] = [:spacedirections]
Image segmentation
ImageSegmentation.SegmentedImage
— TypeSegmentedImage
type contains the index-label mapping, assigned labels, segment mean intensity and pixel count of each segment.
ImageSegmentation.ImageEdge
— Typeedge = ImageEdge(index1, index2, weight)
Construct an edge in a Region Adjacency Graph. index1
and index2
are the integers corresponding to individual pixels/voxels (in the sense of linear indexing via sub2ind
), and weight
is the edge weight (measures the dissimilarity between pixels/voxels).
Images.otsu_threshold
— Functionthres = otsu_threshold(img)
thres = otsu_threshold(img, bins)
Computes the threshold for a grayscale image using Otsu's method.
Parameters:
- img = Grayscale input image
- bins = Number of bins used to compute the histogram. Needed for floating-point images.
ImageSegmentation.labels_map
— Functionimg_labeled = labels_map(seg)
Return an array containing the label assigned to each pixel.
ImageSegmentation.segment_labels
— Functionlabels = segment_labels(seg)
Returns the list of assigned labels
ImageSegmentation.segment_pixel_count
— Functionc = segment_pixel_count(seg, l)
Returns the count of pixels that are assigned label l
. If no label is supplied, it returns a Dict(label=>pixel_count) of all the labels.
ImageSegmentation.segment_mean
— Functionm = segment_mean(seg, l)
Returns the mean intensity of label l
. If no label is supplied, it returns a Dict(label=>mean) of all the labels.
ImageSegmentation.seeded_region_growing
— Functionseg_img = seeded_region_growing(img, seeds, [kernel_dim], [diff_fn])
seg_img = seeded_region_growing(img, seeds, [neighbourhood], [diff_fn])
Segments the N-D image img
using the seeded region growing algorithm and returns a SegmentedImage
containing information about the segments.
Arguments:
img
: N-D image to be segmented (arbitrary axes are allowed)seeds
:Vector
containing seeds. Each seed is a Tuple of a CartesianIndex{N} and a label. See below note for more information on labels.kernel_dim
: (Optional)Vector{Int}
having length N or aNTuple{N,Int}
whose ith element is an odd positive integer representing the length of the ith edge of the N-orthotopic neighbourhoodneighbourhood
: (Optional) Function taking CartesianIndex{N} as input and returning the neighbourhood of that point.diff_fn
: (Optional) Function that returns a difference measure(δ) between the mean color of a region and color of a point
The labels attached to points must be positive integers, although multiple points can be assigned the same label. The output includes a labelled array that has the same indexing as the input image. Every index is assigned either one of the labels or the special label 0, indicating that the algorithm was unable to assign that index to a unique label.
Examples
julia> img = zeros(Gray{N0f8},4,4);
julia> img[2:4,2:4] .= 1;
julia> seeds = [(CartesianIndex(3,1),1),(CartesianIndex(2,2),2)];
julia> seg = seeded_region_growing(img, seeds);
julia> labels_map(seg)
4×4 Matrix{Int64}:
1 1 1 1
1 2 2 2
1 2 2 2
1 2 2 2
Citation:
Albert Mehnert, Paul Jackaway (1997), "An improved seeded region growing algorithm", Pattern Recognition Letters 18 (1997), 1065-1071
ImageSegmentation.unseeded_region_growing
— Functionseg_img = unseeded_region_growing(img, threshold, [kernel_dim], [diff_fn])
seg_img = unseeded_region_growing(img, threshold, [neighbourhood], [diff_fn])
Segments the N-D image using automatic (unseeded) region growing algorithm and returns a SegmentedImage
containing information about the segments.
Arguments:
- img : N-D image to be segmented (arbitrary axes are allowed)
- threshold : Upper bound of the difference measure (δ) for considering a pixel to be in the same segment
- kernel_dim : (Optional) Vector{Int} of length N, or an NTuple{N,Int}, whose ith element is an odd positive integer representing the length of the ith edge of the N-orthotopic neighbourhood
- neighbourhood : (Optional) Function taking a CartesianIndex{N} as input and returning the neighbourhood of that point
- diff_fn : (Optional) Function that returns a difference measure (δ) between the mean color of a region and the color of a point
Examples
julia> img = zeros(Gray{N0f8},4,4);
julia> img[2:4,2:4] .= 1;
julia> seg = unseeded_region_growing(img, 0.2);
julia> labels_map(seg)
4×4 Matrix{Int64}:
1 1 1 1
1 2 2 2
1 2 2 2
1 2 2 2
ImageSegmentation.felzenszwalb
— Functionsegments = felzenszwalb(img, k, [min_size])
index_map, num_segments = felzenszwalb(edges, num_vertices, k, [min_size])
Segments an image using Felzenszwalb's graph-based algorithm. The function can be used in either of two ways:
segments = felzenszwalb(img, k, [min_size])
Segments an image using Felzenszwalb's segmentation algorithm and returns the result as a SegmentedImage. The algorithm uses Euclidean distance in color space as edge weights for the region adjacency graph.
Parameters:
- img = input image
- k = Threshold for the region merging step. A larger threshold results in bigger segments.
- min_size = Minimum segment size
index_map, num_segments = felzenszwalb(edges, num_vertices, k, [min_size])
Segments an image represented as a Region Adjacency Graph (RAG) using Felzenszwalb's segmentation algorithm. Each pixel/region corresponds to a node in the graph and the weight on each edge measures the dissimilarity between pixels. The function returns the index mapping from nodes of the RAG to segments, along with the number of segments.
Parameters:
- edges = Array of edges in the RAG. Each edge is represented as an ImageEdge.
- num_vertices = Number of vertices in the RAG
- k = Threshold for the region merging step. A larger threshold results in bigger segments.
- min_size = Minimum segment size
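A typical call on an RGB image (a sketch; the parameter values here are arbitrary):
julia> using ImageSegmentation, TestImages
julia> img = testimage("chelsea");
julia> seg = felzenszwalb(img, 10, 100);  # k = 10, merge segments smaller than 100 pixels
julia> labels_map(seg);                   # per-pixel labels, as with the other segmenters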
ImageSegmentation.fast_scanning
— Functionseg_img = fast_scanning(img, threshold, [diff_fn])
Segments the N-D image using a fast scanning algorithm and returns a SegmentedImage
containing information about the segments.
Arguments:
- img : N-D image to be segmented (arbitrary axes are allowed)
- threshold : Upper bound of the difference measure (δ) for considering a pixel to be in the same segment; an AbstractArray with the same number of dimensions as img can be passed for adaptive thresholding
- diff_fn : (Optional) Function that returns a difference measure (δ) between the mean color of a region and the color of a point
Examples:
julia> img = zeros(Float64, (3,3));
julia> img[2,:] .= 0.5;
julia> img[:,2] .= 0.6;
julia> seg = fast_scanning(img, 0.2);
julia> labels_map(seg)
3×3 Matrix{Int64}:
1 4 5
4 4 4
3 4 6
Citation:
Jian-Jiun Ding, Cheng-Jin Kuo, Wen-Chih Hong, "An efficient image segmentation technique by fast scanning and adaptive merging"
ImageSegmentation.watershed
— Functionsegments = watershed(img, markers; compactness, mask)
Segments the image using the watershed transform. Each basin formed by the watershed transform corresponds to a segment. If you are using local minima of the image as markers, consider using hmin_transform to avoid oversegmentation.
Parameters:
- img = input grayscale image
- markers = An array (same size as img) with each region's marker assigned an index starting from 1. Zero means not a marker. If two markers have the same index, their regions will be merged into a single region. If you have markers as a boolean array, use label_components.
- compactness = Use the compact watershed algorithm with the given compactness parameter. Larger values lead to more regularly shaped watershed basins.[1]
- mask = Only segment pixels where the value of mask is true; used to restrict segmentation to areas of interest
Example
julia> seeds = falses(100, 100); seeds[50, 25] = true; seeds[50, 75] = true;
julia> dists = distance_transform(feature_transform(seeds)); # calculate distances from seeds
julia> markers = label_components(seeds); # give each seed a unique integer id
julia> results = watershed(dists, markers);
julia> labels_map(results); # labels of segmented image
ImageSegmentation.hmin_transform
— Functionout = hmin_transform(img, h)
Suppresses all minima in the grayscale image whose depth is less than h.
The H-minima transform is defined as the reconstruction by erosion of (img + h) by img. See Morphological Image Analysis by Soille, pages 170-172.
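For example, as a preprocessing step for watershed (a sketch; the noise image and the value of h are arbitrary):
julia> using Images, ImageSegmentation
julia> img = rand(Float64, 100, 100);
julia> hm = hmin_transform(img, 0.3);   # minima shallower than 0.3 are filled in
Markers for watershed can then be derived from the minima of hm (for instance by labelling a minima mask with label_components), yielding far fewer basins than the raw image would produce.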
ImageSegmentation.region_adjacency_graph
— FunctionG, vert_map = region_adjacency_graph(seg, weight_fn)
Constructs a region adjacency graph (RAG) from the SegmentedImage
. It returns the RAG along with a Dict(label=>vertex) map. weight_fn
is used to assign weights to the edges.
weight_fn(label1, label2)
Returns a real number corresponding to the weight of the edge between label1 and label2.
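For example, weighting edges by the difference of segment means (a sketch; the toy image is arbitrary):
julia> using ImageSegmentation
julia> img = zeros(Float64, 4, 4); img[3:4, :] .= 1.0;
julia> seg = fast_scanning(img, 0.5);
julia> weight_fn(i, j) = abs(segment_mean(seg, i) - segment_mean(seg, j));
julia> G, vert_map = region_adjacency_graph(seg, weight_fn);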
ImageSegmentation.remove_segment
— Functionnew_seg = remove_segment(seg, label, diff_fn)
Removes the segment having label label
and returns the new SegmentedImage
. For more info, see remove_segment!
ImageSegmentation.remove_segment!
— Functionremove_segment!(seg, label, diff_fn)
In place removal of the segment having label label
, replacing it with the neighboring segment having least diff_fn
value.
d = diff_fn(rem_label, neigh_label)
A difference measure between the label to be removed and its neighbors. isless must be defined for objects of the type of d.
Examples
# This removes the label `l` and replaces it with the label of the
# neighbor having the maximum pixel count.
julia> remove_segment!(seg, l, (i,j)->(-seg.segment_pixel_count[j]))
# This removes the label `l` and replaces it with the label of the
# neighbor having the least value of the Euclidean metric.
julia> remove_segment!(seg, l, (i,j)->sum(abs2, seg.segment_means[i]-seg.segment_means[j]))
ImageSegmentation.prune_segments
— Functionnew_seg = prune_segments(seg, rem_labels, diff_fn)
Removes all segments that have labels in rem_labels, replacing them with the neighbouring segment having the least diff_fn value. rem_labels is a Vector of labels.
new_seg = prune_segments(seg, is_rem, diff_fn)
Removes all segments for which is_rem returns true, replacing them with the neighbouring segment having the least diff_fn value.
is_rem(label) -> Bool
Returns true if label label is to be removed, otherwise false.
d = diff_fn(rem_label, neigh_label)
A difference measure between the label to be removed and its neighbors. isless must be defined for objects of the type of d.
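A sketch, pruning every single-pixel segment produced by the fast_scanning example above into the neighbour with the closest mean intensity:
julia> using ImageSegmentation
julia> img = zeros(Float64, 3, 3); img[2, :] .= 0.5; img[:, 2] .= 0.6;
julia> seg = fast_scanning(img, 0.2);   # produces several single-pixel segments
julia> is_rem(label) = segment_pixel_count(seg, label) < 2;
julia> diff_fn(rem, neigh) = abs(segment_mean(seg, rem) - segment_mean(seg, neigh));
julia> seg2 = prune_segments(seg, is_rem, diff_fn);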
ImageSegmentation.region_tree
— Functiont = region_tree(img, homogeneous)
Creates a region tree from img
by splitting it recursively until all the regions are homogeneous.
b = homogeneous(img)
Returns true if img
is homogeneous.
Examples
julia> img = 0.1*rand(6, 6);
julia> img[4:end, 4:end] .+= 10;
julia> function homogeneous(img)
min, max = extrema(img)
max - min < 0.2
end
homogeneous (generic function with 1 method)
julia> t = region_tree(img, homogeneous);
ImageSegmentation.region_splitting
— Functionseg = region_splitting(img, homogeneous)
Segments img
by recursively splitting it until all the segments are homogeneous.
b = homogeneous(img)
Returns true if img
is homogeneous.
Examples
julia> img = 0.1*rand(6, 6);
julia> img[4:end, 4:end] .+= 10;
julia> function homogeneous(img)
min, max = extrema(img)
max - min < 0.2
end
homogeneous (generic function with 1 method)
julia> seg = region_splitting(img, homogeneous);
ImageFeatures
Geometric features
ImageFeatures.hough_transform_standard
— Functionlines = hough_transform_standard(
img_edges::AbstractMatrix;
stepsize=1,
angles=range(0,stop=pi,length=minimum(size(img))),
vote_threshold=minimum(size(img)) / stepsize -1,
max_linecount=typemax(Int))
Returns a vector of tuples (r, t), where r and t are the parameters of the normal form of a line: x * cos(t) + y * sin(t) = r
- r = length of the perpendicular from (1,1) to the line
- t = angle between that perpendicular and the x-axis

The lines are generated by applying the Hough transform to the image.
Parameters:
- img_edges = Image to be transformed (eltype should be Bool)
- stepsize = Discrete step size for the perpendicular length of the line
- angles = List of angles for which the transform is computed
- vote_threshold = Accumulator threshold for line detection
- max_linecount = Maximum number of lines to return
Example
julia> using ImageFeatures
julia> img = fill(false,5,5); img[3,:] .= true; img
5×5 Array{Bool,2}:
false false false false false
false false false false false
true true true true true
false false false false false
false false false false false
julia> hough_transform_standard(img)
1-element Array{Tuple{Float64,Float64},1}:
(3.0, 1.5707963267948966)
ImageFeatures.hough_circle_gradient
— Functioncircle_centers, circle_radius = hough_circle_gradient(img_edges, img_phase, radii; scale=1, min_dist=minimum(radii), vote_threshold)
Returns two vectors, corresponding to circle centers and radii.
The circles are generated using a Hough transform variant in which a non-zero point only votes for circle centers perpendicular to the local gradient. In the case of concentric circles, only the largest circle is detected.
Parameters:
- img_edges = edges of the image
- img_phase = phase of the gradient image
- radii = circle radius range
- scale = relative accumulator resolution factor
- min_dist = minimum distance between detected circle centers
- vote_threshold = accumulator threshold for circle detection

canny and phase can be used for obtaining img_edges and img_phase respectively.
Example
julia> using Images, ImageFeatures, FileIO, ImageView
julia> img = load(download("http://docs.opencv.org/3.1.0/water_coins.jpg"));
julia> img = Gray.(img);
julia> img_edges = canny(img, (Percentile(99), Percentile(80)));
julia> dx, dy=imgradients(img, KernelFactors.ando5);
julia> img_phase = phase(dx, dy);
julia> centers, radii = hough_circle_gradient(img_edges, img_phase, 20:30);
julia> img_demo = Float64.(img_edges); for c in centers img_demo[c] = 2; end
julia> imshow(img_demo)
Types
ImageFeatures.Feature
— Typefeature = Feature(keypoint, orientation = 0.0, scale = 0.0)
The Feature
type has the keypoint, its orientation and its scale.
ImageFeatures.Features
— Typefeatures = Features(boolean_img)
features = Features(keypoints)
Returns a Vector{Feature}
of features generated from the true
values in a boolean image or from a list of keypoints.
ImageFeatures.Keypoint
— Typekeypoint = Keypoint(y, x)
keypoint = Keypoint(feature)
A Keypoint
may be created by passing the coordinates of the point or from a feature.
ImageFeatures.Keypoints
— Typekeypoints = Keypoints(boolean_img)
keypoints = Keypoints(features)
Creates a Vector{Keypoint}
of the true
values in a boolean image or from a list of features.
ImageFeatures.BRIEF
— Typebrief_params = BRIEF([size = 128], [window = 9], [sigma = 2 ^ 0.5], [sampling_type = gaussian], [seed = 123])
| Argument | Type | Description |
|---|---|---|
| size | Int | Size of the descriptor |
| window | Int | Size of the sampling window |
| sigma | Float64 | Value of sigma used for initial Gaussian smoothing of the image |
| sampling_type | Function | Type of sampling used for building the descriptor (see BRIEF Sampling Patterns) |
| seed | Int | Random seed used for generating the sampling pairs. For matching two descriptors, the seed used to build both should be the same. |
ImageFeatures.ORB
— Typeorb_params = ORB([num_keypoints = 500], [n_fast = 12], [threshold = 0.25], [harris_factor = 0.04], [downsample = 1.3], [levels = 8], [sigma = 1.2])
| Argument | Type | Description |
|---|---|---|
| num_keypoints | Int | Number of keypoints to extract and size of the descriptor calculated |
| n_fast | Int | Number of consecutive pixels used for finding corners with FAST. See fastcorners. |
| threshold | Float64 | Threshold used to find corners in FAST. See fastcorners. |
| harris_factor | Float64 | Harris factor k used to rank keypoints by Harris responses and extract the best ones |
| downsample | Float64 | Downsampling parameter used while building the Gaussian pyramid. See gaussian_pyramid in Images.jl. |
| levels | Int | Number of levels in the Gaussian pyramid. See gaussian_pyramid in Images.jl. |
| sigma | Float64 | Used for Gaussian smoothing in each level of the Gaussian pyramid. See gaussian_pyramid in Images.jl. |
ImageFeatures.FREAK
— Typefreak_params = FREAK([pattern_scale = 22.0])
| Argument | Type | Description |
|---|---|---|
| pattern_scale | Float64 | Scaling factor for the sampling window |
ImageFeatures.BRISK
— Typebrisk_params = BRISK([pattern_scale = 1.0])
| Argument | Type | Description |
|---|---|---|
| pattern_scale | Float64 | Scaling factor for the sampling window |
ImageFeatures.HOG
— Typehog_params = HOG([orientations = 9], [cell_size = 8], [block_size = 2], [block_stride = 1], [norm_method = "L2-norm"])
Histogram of Oriented Gradients (HOG) is a dense feature descriptor usually used for object detection. See "Histograms of Oriented Gradients for Human Detection" by Dalal and Triggs.
Parameters:
- orientations = number of orientation bins
- cell_size = size of a cell is cell_size x cell_size (in pixels)
- block_size = size of a block is block_size x block_size (in terms of cells)
- block_stride = stride of blocks. Controls how much adjacent blocks overlap.
- norm_method = block normalization method. Options: L2-norm, L2-hys, L1-norm, L2-sqrt.
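A sketch of computing a HOG descriptor for a 128×64 window (the crop size follows Dalal and Triggs; that the HOG method of create_descriptor returns only the descriptor vector is an assumption here):
julia> using ImageFeatures, Images, TestImages
julia> img = Gray.(testimage("lighthouse"));
julia> patch = img[1:128, 1:64];   # window dimensions should be multiples of cell_size
julia> hog_params = HOG(orientations = 9, cell_size = 8, block_size = 2);
julia> desc = create_descriptor(patch, hog_params);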
Corners
ImageFeatures.corner_orientations
— Functionorientations = corner_orientations(img)
orientations = corner_orientations(img, corners)
orientations = corner_orientations(img, corners, kernel)
Returns the orientations of corner patches in an image. The orientation of a corner patch is given by the orientation of the vector between the intensity centroid and the corner. The intensity centroid is calculated as C = (m01/m00, m10/m00), where the moment mpq is defined as
`mpq = Σ (x^p)(y^q)I(y, x), summed over each (x, y) in the corner patch`
The kernel used for the patch can be given through the kernel argument. The default kernel is a 5×5 Gaussian.
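For instance, combined with FAST corners (a sketch; the corner-detection parameters are arbitrary):
julia> using ImageFeatures, Images, TestImages
julia> img = Gray.(testimage("cameraman"));
julia> corners = Keypoints(fastcorners(img, 12, 0.4));
julia> orientations = corner_orientations(img, corners);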
BRIEF Sampling Patterns
ImageFeatures.random_uniform
— Functionsample_one, sample_two = random_uniform(size, window, seed)
Builds sampling pairs using random uniform sampling.
ImageFeatures.random_coarse
— Functionsample_one, sample_two = random_coarse(size, window, seed)
Builds sampling pairs using random sampling over a coarse grid.
ImageFeatures.gaussian
— Functionsample_one, sample_two = gaussian(size, window, seed)
Builds sampling pairs using gaussian sampling.
ImageFeatures.gaussian_local
— Functionsample_one, sample_two = gaussian_local(size, window, seed)
Pairs (Xi, Yi) are randomly sampled using a Gaussian distribution: each Xi is sampled with a standard deviation of 0.04*S^2, and each Yi is then sampled from a Gaussian with mean Xi and standard deviation 0.01*S^2.
ImageFeatures.center_sample
— Functionsample_one, sample_two = center_sample(size, window, seed)
Builds sampling pairs (Xi, Yi)
where Xi
is (0, 0)
and Yi
is sampled uniformly from the window.
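Each of these can be plugged into BRIEF through its sampling_type parameter, e.g. (a sketch):
julia> using ImageFeatures
julia> brief_params = BRIEF(size = 256, window = 10, sampling_type = random_uniform, seed = 123);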
Feature Description
ImageFeatures.create_descriptor
— Functiondesc, keypoints = create_descriptor(img, keypoints, params)
desc, keypoints = create_descriptor(img, params)
Create a descriptor for each entry in keypoints from the image img. params specifies the parameters for any of several descriptors; see the descriptor types above (e.g. BRIEF, ORB, BRISK, FREAK).
Some descriptors support discovery of the keypoints from fastcorners.
Feature Matching
ImageFeatures.hamming_distance
— Functiondistance = hamming_distance(desc_1, desc_2)
Calculates the Hamming distance between two descriptors.
ImageFeatures.match_keypoints
— Functionmatches = match_keypoints(keypoints_1, keypoints_2, desc_1, desc_2, threshold = 0.1)
Finds matched keypoints, using the hamming_distance function, that have a distance value less than threshold.
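Putting the pieces together, a minimal BRIEF matching sketch (the corner and threshold parameters are arbitrary, and both descriptors must be built with the same seed):
julia> using ImageFeatures, Images, TestImages
julia> img1 = Gray.(testimage("cameraman"));
julia> img2 = img1[6:end, 6:end];   # a crop of the same scene, for illustration
julia> kps1 = Keypoints(fastcorners(img1, 12, 0.35));
julia> kps2 = Keypoints(fastcorners(img2, 12, 0.35));
julia> brief_params = BRIEF(seed = 123);
julia> desc1, ret1 = create_descriptor(img1, kps1, brief_params);
julia> desc2, ret2 = create_descriptor(img2, kps2, brief_params);
julia> matches = match_keypoints(ret1, ret2, desc1, desc2, 0.1);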
Texture Matching
Gray Level Co-occurrence Matrix
glcm
glcm_symmetric
glcm_norm
glcm_prop
max_prob
contrast
ASM
IDM
glcm_entropy
energy
dissimilarity
correlation
glcm_mean_ref
glcm_mean_neighbour
glcm_var_ref
glcm_var_neighbour
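These functions are listed here without individual docstrings. A heavily hedged sketch of the usual workflow, assuming the glcm(img, distance, angle, mat_size) method and the glcm_prop(gmat, property) form from the ImageFeatures texture documentation:
julia> using ImageFeatures, Images, TestImages
julia> img = Gray.(testimage("cameraman"));
julia> gmat = glcm(img, 1, 0, 16);      # distance 1, angle 0, 16 gray levels (assumed signature)
julia> c = glcm_prop(gmat, contrast);   # scalar texture property
julia> e = glcm_prop(gmat, energy);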
Local Binary Patterns
lbp
modified_lbp
direction_coded_lbp
lbp_original
lbp_uniform
lbp_rotation_invariant
multi_block_lbp
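Likewise undocumented here; a sketch assuming lbp(img) computes a per-pixel matrix of local binary pattern codes (the variants modified_lbp and direction_coded_lbp are drop-in alternatives):
julia> using ImageFeatures, Images, TestImages
julia> img = Gray.(testimage("cameraman"));
julia> patterns = lbp(img);   # assumed signature; one 8-bit pattern code per pixel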
[1] https://www.tu-chemnitz.de/etit/proaut/publications/cwspSLICICPR.pdf