Filtering functions
ImageFiltering.imfilter
— Function imfilter([T], img, kernel, [border="replicate"], [alg]) --> imgfilt
imfilter([r], img, kernel, [border="replicate"], [alg]) --> imgfilt
imfilter(r, T, img, kernel, [border="replicate"], [alg]) --> imgfilt
Filter a one, two or multidimensional array img with a kernel by computing their correlation.
Details
The term filtering emerges in the context of a Fourier transformation of an image, which maps an image from its canonical spatial domain to its concomitant frequency domain. Manipulating an image in the frequency domain amounts to retaining or discarding particular frequency components—a process analogous to sifting or filtering [1]. Because the Fourier transform establishes a link between the spatial and frequency representation of an image, one can interpret various image manipulations in the spatial domain as filtering operations which accept or reject specific frequencies.
The phrase spatial filtering is often used to emphasise that an operation is, at least conceptually, devised in the context of the spatial domain of an image. One further distinguishes between linear and non-linear spatial filtering. A filter is called linear if the operation performed on the pixels is linear, and is labeled non-linear otherwise.
An image filter can be represented by a function
\[ w: \{s\in \mathbb{Z} \mid -k_1 \le s \le k_1 \} \times \{t \in \mathbb{Z} \mid -k_2 \le t \le k_2 \} \rightarrow \mathbb{R},\]
where $k_i \in \mathbb{N}$ (i = 1,2). It is common to define $k_1 = 2a+1$ and $k_2 = 2b + 1$, where $a$ and $b$ are integers, which ensures that the filter dimensions are of odd size. Typically, $k_1$ equals $k_2$ and so, dropping the subscripts, one speaks of a $k \times k$ filter. Since the domain of the filter represents a grid of spatial coordinates, the filter is often called a mask and is visualized as a grid. For example, a $3 \times 3$ mask can be portrayed as follows:
\[\scriptsize \begin{matrix} \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(-1,-1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(-1,0) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(-1,1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } \\ \\ \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(0,-1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(0,0) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(0,1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } \\ \\ \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(1,-1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(1,0) \\ \phantom{w(-9,-9)} \\ \end{matrix} } & \boxed{ \begin{matrix} \phantom{w(-9,-9)} \\ w(1,1) \\ \phantom{w(-9,-9)} \\ \end{matrix} } \end{matrix}.\]
The values of $w(s,t)$ are referred to as filter coefficients.
Discrete convolution versus correlation
There are two fundamental and closely related operations that one regularly performs on an image with a filter. The operations are called discrete correlation and convolution.
The correlation operation, denoted by the symbol $\star$, is given in two dimensions by the expression
\[\begin{aligned} g(x,y) = w(x,y) \star f(x,y) = \sum_{s = -a}^{a} \sum_{t=-b}^{b} w(s,t) f(x+s, y+t), \end{aligned}\]
whereas the comparable convolution operation, denoted by the symbol $\ast$, is given in two dimensions by
\[\begin{aligned} h(x,y) = w(x,y) \ast f(x,y) = \sum_{s = -a}^{a} \sum_{t=-b}^{b} w(s,t) f(x-s, y-t). \end{aligned}\]
Since a digital image is of finite extent, both of these operations are undefined at the borders of the image. In particular, for an image of size $M \times N$, the function $f(x \pm s, y \pm t)$ is only defined for $1 \le x \pm s \le N$ and $1 \le y \pm t \le M$. In practice one addresses this problem by artificially expanding the domain of the image. For example, one can pad the image with zeros. Other padding strategies are possible, and they are discussed in more detail in the Options section of this documentation.
One-dimensional illustration
The difference between correlation and convolution is best understood with recourse to a one-dimensional example adapted from [1]. Suppose that a filter $w:\{-1,0,1\}\rightarrow \mathbb{R}$ has coefficients
\[\begin{matrix} \boxed{1} & \boxed{2} & \boxed{3} \end{matrix}.\]
Consider a discrete unit impulse function $f: \{x \in \mathbb{Z} \mid 1 \le x \le 7 \} \rightarrow \{0,1\}$ that has been padded with zeros. The function can be visualised as an image
\[\boxed{ \begin{matrix} 0 & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{1} & \boxed{0} & \boxed{0} & \boxed{0} & 0 \end{matrix}}.\]
The correlation operation can be interpreted as sliding $w$ along the image and computing the sum of products at each location. For example,
\[\begin{matrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 2 & 3 & & & & & & \\ & 1 & 2 & 3 & & & & & \\ & & 1 & 2 & 3 & & & & \\ & & & 1 & 2 & 3 & & & \\ & & & & 1 & 2 & 3 & & \\ & & & & & 1 & 2 & 3 & \\ & & & & & & 1 & 2 & 3, \end{matrix}\]
yields the output $g: \{x \in \mathbb{Z} \mid 1 \le x \le 7 \} \rightarrow \mathbb{R}$, which when visualized as a digital image, is equal to
\[\boxed{ \begin{matrix} \boxed{0} & \boxed{0} & \boxed{3} & \boxed{2} & \boxed{1} & \boxed{0} & \boxed{0} \end{matrix}}.\]
The interpretation of the convolution operation is analogous to correlation, except that the filter $w$ has been rotated by 180 degrees. In particular,
\[\begin{matrix} 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 3 & 2 & 1 & & & & & & \\ & 3 & 2 & 1 & & & & & \\ & & 3 & 2 & 1 & & & & \\ & & & 3 & 2 & 1 & & & \\ & & & & 3 & 2 & 1 & & \\ & & & & & 3 & 2 & 1 & \\ & & & & & & 3 & 2 & 1, \end{matrix}\]
yields the output $h: \{x \in \mathbb{Z} \mid 1 \le x \le 7 \} \rightarrow \mathbb{R}$ equal to
\[\boxed{ \begin{matrix} \boxed{0} & \boxed{0} & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{0} & \boxed{0} \end{matrix}}.\]
Instead of rotating the filter mask, one could rotate $f$ and still obtain the same convolution result. In fact, the conventional notation for convolution indicates that $f$ is flipped and not $w$. If $w$ is symmetric, then convolution and correlation give the same outcome.
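As a quick check, the one-dimensional illustration above can be reproduced with imfilter itself; this is a minimal sketch (the Fill(0, w) border specification and reflect are described later in this section):
using ImageFiltering
f = [0, 0, 0, 1, 0, 0, 0]              # discrete unit impulse
w = centered([1, 2, 3])                # filter coefficients on indices -1:1
imfilter(f, w, Fill(0, w))             # correlation: [0, 0, 3, 2, 1, 0, 0]
imfilter(f, reflect(w), Fill(0, w))    # convolution: [0, 0, 1, 2, 3, 0, 0]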
Two-dimensional illustration
For a two-dimensional example, suppose the filter $w:\{-1, 0 ,1\} \times \{-1,0,1\} \rightarrow \mathbb{R}$ has coefficients
\[ \begin{matrix} \boxed{1} & \boxed{2} & \boxed{3} \\ \\ \boxed{4} & \boxed{5} & \boxed{6} \\ \\ \boxed{7} & \boxed{8} & \boxed{9} \end{matrix},\]
and consider a two-dimensional discrete unit impulse function
\[ f:\{x \in \mathbb{Z} \mid 1 \le x \le 7 \} \times \{y \in \mathbb{Z} \mid 1 \le y \le 7 \}\rightarrow \{ 0,1\}\]
that has been padded with zeros:
\[ \boxed{ \begin{matrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \\ 0 & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & 0 \\ \\ 0 & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & 0 \\ \\ 0 & \boxed{0} & \boxed{0} & \boxed{1} & \boxed{0} & \boxed{0} & 0 \\ \\ 0 & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & 0 \\ \\ 0 & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & 0 \\ \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{matrix}}.\]
The correlation operation $w(x,y) \star f(x,y)$ yields the output
\[ \boxed{ \begin{matrix} \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} \\ \\ \boxed{0} & \boxed{9} & \boxed{8} & \boxed{7} & \boxed{0} \\ \\ \boxed{0} & \boxed{6} & \boxed{5} & \boxed{4} & \boxed{0} \\ \\ \boxed{0} & \boxed{3} & \boxed{2} & \boxed{1} & \boxed{0} \\ \\ \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} \end{matrix}},\]
whereas the convolution operation $w(x,y) \ast f(x,y)$ produces
\[ \boxed{ \begin{matrix} \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} \\ \\ \boxed{0} & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{0}\\ \\ \boxed{0} & \boxed{4} & \boxed{5} & \boxed{6} & \boxed{0} \\ \\ \boxed{0} & \boxed{7} & \boxed{8} & \boxed{9} & \boxed{0} \\ \\ \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} & \boxed{0} \end{matrix}}.\]
Discrete convolution and correlation as matrix multiplication
Discrete convolution and correlation operations can also be formulated as a matrix multiplication, where one of the inputs is converted to a Toeplitz matrix, and the other is represented as a column vector. For example, consider a function $f:\{x \in \mathbb{N} \mid 1 \le x \le M \} \rightarrow \mathbb{R}$ and a filter $w: \{s \in \mathbb{Z} \mid -k_1 \le s \le k_1 \} \rightarrow \mathbb{R}$. Then the matrix multiplication
\[\begin{bmatrix} w(-k_1) & 0 & \ldots & 0 & 0 \\ \vdots & w(-k_1) & \ldots & \vdots & 0 \\ w(k_1) & \vdots & \ldots & 0 & \vdots \\ 0 & w(k_1) & \ldots & w(-k_1) & 0 \\ 0 & 0 & \ldots & \vdots & w(-k_1) \\ \vdots & \vdots & \ldots & w(k_1) & \vdots \\ 0 & 0 & 0 & 0 & w(k_1) \end{bmatrix} \begin{bmatrix} f(1) \\ f(2) \\ f(3) \\ \vdots \\ f(M) \end{bmatrix}\]
is equivalent to the convolution $w(s) \ast f(x)$ assuming that the border of $f(x)$ has been padded with zeros.
To represent multidimensional convolution as matrix multiplication one reshapes the multidimensional arrays into column vectors and proceeds in an analogous manner. Naturally, the result of the matrix multiplication will need to be reshaped into an appropriate multidimensional array.
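The construction can be sketched in a few lines of Julia. This is an illustrative example, not library code: it builds the banded (Toeplitz) matrix for the zero-padded convolution of the 3-tap filter from the one-dimensional illustration above.
# Build T with T[x, j] = w(x - j), so that (T * f)[x] = Σ_s w(s) f(x - s).
M = 7
f = zeros(M); f[4] = 1                      # discrete unit impulse
w = Dict(-1 => 1.0, 0 => 2.0, 1 => 3.0)     # filter coefficients w(s), s ∈ {-1, 0, 1}
T = zeros(M, M)
for x in 1:M, s in -1:1
    j = x - s                               # contribution of f(j) = f(x - s)
    1 <= j <= M && (T[x, j] = w[s])
end
T * f                                       # matches the convolution: [0, 0, 1, 2, 3, 0, 0]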
Options
The following subsections describe valid options for the function arguments in more detail.
Choices for r
You can dispatch to different implementations by passing in a resource r as defined by the ComputationalResources package. For example,
imfilter(ArrayFireLibs(), img, kernel)
would request that the computation be performed on the GPU using the ArrayFire libraries.
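Similarly, a hedged sketch of requesting multithreaded CPU filtering (assuming your version of ImageFiltering supports this resource and Julia was started with multiple threads; img and kernel as above):
using ComputationalResources
imfilter(CPUThreads(Algorithm.FIR()), img, kernel)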
Choices for T
Optionally, you can control the element type of the output image by passing in a type T as the first argument.
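For instance, a minimal sketch that forces a Float64-valued result regardless of the element types of img and the kernel (the 3x3 box kernel here is arbitrary):
imgfilt = imfilter(Float64, img, centered(fill(1/9, (3, 3))))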
Choices for img
You can specify a one, two or multidimensional array defining your image.
Choices for kernel
The entry kernel[0,0,..] corresponds to the origin (zero displacement) of the kernel; you can use centered to place the origin at the array center, or use the OffsetArrays package to set kernel's indices manually. For example, to filter with a random centered 3x3 kernel, you could use either of the following:
kernel = centered(rand(3,3))
kernel = OffsetArray(rand(3,3), -1:1, -1:1)
The kernel parameter can be specified as an array or as a "factored kernel", a tuple (filt1, filt2, ...) of filters to apply along each axis of the image. In cases where you know your kernel is separable, this format can speed processing. Each of these should have the same dimensionality as the image itself, and be shaped in a manner that indicates the filtering axis, e.g., a 3x1 filter for filtering the first dimension and a 1x3 filter for filtering the second dimension. In two dimensions, any kernel passed as a single matrix is checked for separability; if you want to eliminate that check, pass the kernel as a single-element tuple, (kernel,).
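A minimal sketch of the factored form, assuming a simple separable 3x3 box blur and using the kernelfactors helper documented further below to reshape each vector so that it filters one axis:
v = centered(fill(1/3, 3))                      # 1-d box factor on indices -1:1
imgfilt = imfilter(img, kernelfactors((v, v)))  # two 1-d passes
# should agree with the equivalent dense kernel:
imfilter(img, centered(fill(1/9, (3, 3))))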
Choices for border
At the image edge, border is used to specify the padding which will be used to extrapolate the image beyond its original bounds. As an indicative example of each option the results of the padding are illustrated on an image consisting of a row of six pixels which are specified alphabetically: $\boxed{a \, b \, c \, d \, e \, f}$. We show the effects of padding only on the left and right border, but analogous consequences hold for the top and bottom border.
"replicate"
(default)
The border pixels extend beyond the image boundaries.
\[\boxed{ \begin{array}{l|c|r} a\, a\, a\, a & a \, b \, c \, d \, e \, f & f \, f \, f \, f \end{array} }\]
See also: Pad, padarray, Inner, NA and NoPad
"circular"
The border pixels wrap around. For instance, indexing beyond the left border returns values starting from the right border.
\[\boxed{ \begin{array}{l|c|r} c\, d\, e\, f & a \, b \, c \, d \, e \, f & a \, b \, c \, d \end{array} }\]
See also: Pad, padarray, Inner, NA and NoPad
"reflect"
The border pixels reflect relative to the edge itself. That is, the border pixel is omitted when mirroring.
\[\boxed{ \begin{array}{l|c|r} e\, d\, c\, b & a \, b \, c \, d \, e \, f & e \, d \, c \, b \end{array} }\]
See also: Pad, padarray, Inner, NA and NoPad
"symmetric"
The border pixels reflect relative to a position between pixels. That is, the border pixel is repeated when mirroring.
\[\boxed{ \begin{array}{l|c|r} d\, c\, b\, a & a \, b \, c \, d \, e \, f & f \, e \, d \, c \end{array} }\]
See also: Pad, padarray, Inner, NA and NoPad
Fill(m)
The border pixels are filled with a specified value $m$.
\[\boxed{ \begin{array}{l|c|r} m\, m\, m\, m & a \, b \, c \, d \, e \, f & m \, m \, m \, m \end{array} }\]
See also: Pad, padarray, Inner, NA and NoPad
Inner()
Indicate that edges are to be discarded in filtering; only the interior of the result is returned.
See also: Pad, padarray, Inner, NA and NoPad
NA()
Choose filtering using "NA" (Not Available) boundary conditions. This is most appropriate for filters that have only positive weights, such as blurring filters.
See also: Pad, padarray, Inner, NA and NoPad
Choices for alg
The alg parameter allows you to choose the particular algorithm: FIR() (finite impulse response, aka traditional digital filtering) or FFT() (Fourier-based filtering). If no choice is specified, one will be chosen based on the size of the image and kernel in a way that strives to deliver good performance. Alternatively you can use a custom filter type, like KernelFactors.IIRGaussian.
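A hedged sketch of requesting an algorithm explicitly; depending on your version of ImageFiltering these constructors may need to be qualified as Algorithm.FIR() and Algorithm.FFT():
imfilter(img, kernel, "replicate", Algorithm.FIR())   # direct spatial filtering
imfilter(img, kernel, "replicate", Algorithm.FFT())   # Fourier-based filtering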
Examples
The following subsections highlight some common use cases.
Convolution versus correlation
# Create a two-dimensional discrete unit impulse function.
f = fill(0,(9,9));
f[5,5] = 1;
# Specify a filter coefficient mask and set the center of the mask as the origin.
w = centered([1 2 3; 4 5 6 ; 7 8 9]);
#=
The default operation of `imfilter` is correlation. By reflecting `w` we
compute the convolution of `f` and `w`. `Fill(0,w)` indicates that we wish to
pad the border of `f` with zeros. The amount of padding is automatically
determined by considering the length of w.
=#
correlation = imfilter(f,w,Fill(0,w))
convolution = imfilter(f,reflect(w),Fill(0,w))
Miscellaneous border padding options
# Example function values f, and filter coefficients w.
f = reshape(1.0:81.0,9,9)
w = centered(reshape(1.0:9.0,3,3))
# You can designate the type of padding by specifying an appropriate string.
imfilter(f,w,"replicate")
imfilter(f,w,"circular")
imfilter(f,w,"symmetric")
imfilter(f,w,"reflect")
# Alternatively, you can explicitly use the Pad type to designate the padding style.
imfilter(f,w,Pad(:replicate))
imfilter(f,w,Pad(:circular))
imfilter(f,w,Pad(:symmetric))
imfilter(f,w,Pad(:reflect))
# If you want to pad with a specific value then use the Fill type.
imfilter(f,w,Fill(0,w))
imfilter(f,w,Fill(1,w))
imfilter(f,w,Fill(-1,w))
#=
Specify 'Inner()' if you want to retrieve the interior sub-array of f for which
the filtering operation is defined without padding.
=#
imfilter(f,w,Inner())
References
- R. C. Gonzalez and R. E. Woods. Digital Image Processing (3rd Edition). Upper Saddle River, NJ, USA: Prentice-Hall, 2006.
See also: imfilter!, centered, padarray, Pad, Fill, Inner and KernelFactors.IIRGaussian.
ImageFiltering.imfilter!
— Function imfilter!(imgfilt, img, kernel, [border="replicate"], [alg])
imfilter!(r, imgfilt, img, kernel, border::Pad)
imfilter!(r, imgfilt, img, kernel, border::NoPad, [inds=axes(imgfilt)])
Filter an array img with kernel kernel by computing their correlation, storing the result in imgfilt.
The indices of imgfilt determine the region over which the filtered image is computed; you can use this fact to select just a specific region of interest, although be aware that the input img might still get padded. Alternatively, explicitly provide the indices inds of imgfilt that you want to calculate, and use NoPad boundary conditions. In such cases, you are responsible for supplying appropriate padding: img must be indexable for all of the locations needed for calculating the output. This syntax is best-supported for FIR filtering; in particular, note that IIR filtering can lead to results that are inconsistent with respect to filtering the entire array.
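A minimal sketch of the in-place form, assuming a preallocated output of the same size as the input:
img = rand(128, 128)
kern = centered(fill(1/9, (3, 3)))   # 3x3 box blur
out = similar(img)
imfilter!(out, img, kern, "replicate")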
See also: imfilter.
ImageFiltering.imgradients
— Function imgradients(img, kernelfun=KernelFactors.ando3, border="replicate") -> gimg1, gimg2, ...
Estimate the gradient of img in the direction of the first and second dimension at all points of the image, using a kernel specified by kernelfun.
Output
The gradient is returned as a tuple-of-arrays, one for each dimension of the input; gimg1 corresponds to the derivative with respect to the first dimension, gimg2 to the second, and so on.
Details
To appreciate the difference between various gradient estimation methods it is helpful to distinguish between: (1) a continuous scalar-valued analogue image $f_\textrm{A}(x_1,x_2)$, where $x_1,x_2 \in \mathbb{R}$, and (2) its discrete digital realization $f_\textrm{D}(x_1',x_2')$, where $x_1',x_2' \in \mathbb{N}$, $1 \le x_1' \le M$ and $1 \le x_2' \le N$.
Analogue image
The gradient of a continuous analogue image $f_{\textrm{A}}(x_1,x_2)$ at location $(x_1,x_2)$ is defined as the vector
\[\nabla \mathbf{f}_{\textrm{A}}(x_1,x_2) = \frac{\partial f_{\textrm{A}}(x_1,x_2)}{\partial x_1} \mathbf{e}_{1} + \frac{\partial f_{\textrm{A}}(x_1,x_2)}{\partial x_2} \mathbf{e}_{2},\]
where $\mathbf{e}_{d}$ $(d = 1,2)$ is the unit vector in the $x_d$-direction. The gradient points in the direction of maximum rate of change of $f_{\textrm{A}}$ at the coordinates $(x_1,x_2)$. The gradient can be used to compute the derivative of a function in an arbitrary direction. In particular, the derivative of $f_{\textrm{A}}$ in the direction of a unit vector $\mathbf{u}$ is given by $\nabla_{\mathbf{u}}f_\textrm{A}(x_1,x_2) = \nabla \mathbf{f}_{\textrm{A}}(x_1,x_2) \cdot \mathbf{u}$, where $\cdot$ denotes the dot product.
Digital image
In practice, we acquire a digital image $f_\textrm{D}(x_1',x_2')$ where the light intensity is known only at a discrete set of locations. This means that the required partial derivatives are undefined and need to be approximated using discrete difference formulae [1].
A straightforward way to approximate the partial derivatives is to use central-difference formulae
\[ \frac{\partial f_{\textrm{D}}(x_1',x_2')}{\partial x_1'} \approx \frac{f_{\textrm{D}}(x_1'+1,x_2') - f_{\textrm{D}}(x_1'-1,x_2') }{2}\]
and
\[ \frac{\partial f_{\textrm{D}}(x_1',x_2')}{\partial x_2'} \approx \frac{f_{\textrm{D}}(x_1',x_2'+1) - f_{\textrm{D}}(x_1',x_2'-1)}{2}.\]
However, the central-difference formulae are very sensitive to noise. When working with noisy image data, one can obtain a better approximation of the partial derivatives by using a suitable weighted combination of the neighboring image intensities. The weighted combination can be represented as a discrete convolution operation between the image and a kernel which characterizes the requisite weights. In particular, if $h_{x_d}$ $(d = 1,2)$ represents a $(2r+1) \times (2r+1)$ kernel, then
\[ \frac{\partial f_{\textrm{D}}(x_1',x_2')}{\partial x_d'} \approx \sum_{i = -r}^r \sum_{j = -r}^r f_\textrm{D}(x_1'-i,x_2'-j) h_{x_d}(i,j).\]
The kernel is frequently also called a mask or convolution matrix.
Weighting schemes and approximation error
The choice of weights determines the magnitude of the approximation error and whether the finite-difference scheme is isotropic. A finite-difference scheme is isotropic if the approximation error does not depend on the orientation of the coordinate system and anisotropic if the approximation error has a directional bias [2]. With a continuous analogue image the magnitude of the gradient would be invariant upon rotation of the coordinate system, but in practice one cannot obtain perfect isotropy with a finite set of discrete points. Hence a finite-difference scheme is typically considered isotropic if the leading error term in the approximation does not have preferred directions.
Most finite-difference schemes that are used in image processing are based on $3 \times 3$ kernels, and as noted by [7], many can also be parametrized by a single parameter $\alpha$ as follows:
\[\mathbf{H}_{x_{1}} = \frac{1}{4 + 2\alpha} \begin{bmatrix} -1 & -\alpha & -1 \\ 0 & 0 & 0 \\ 1 & \alpha & 1 \end{bmatrix} \quad \text{and} \quad \mathbf{H}_{x_{2}} = \frac{1}{2 + 4\alpha} \begin{bmatrix} -1 & 0 & 1 \\ -\alpha & 0 & \alpha \\ -1 & 0 & 1 \end{bmatrix},\]
where
\[\alpha = \begin{cases} 0, & \text{Simple Finite Difference}; \\ 1, & \text{Prewitt}; \\ 2, & \text{Sobel}; \\ 2.4351, & \text{Ando}; \\ \frac{10}{3}, & \text{Scharr}; \\ 4, & \text{Bickley}. \end{cases}\]
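For example, substituting $\alpha = 2$ gives the prefactor $1/(4 + 2 \cdot 2) = 1/8$ and reproduces the Sobel kernel listed further below:
\[\mathbf{H}_{x_1}\Big|_{\alpha = 2} = \frac{1}{8} \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}.\]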
Separable kernel
A kernel is called separable if it can be expressed as the convolution of two one-dimensional filters. With a matrix representation of the kernel, separability means that the kernel matrix can be written as an outer product of two vectors. Separable kernels offer computational advantages since instead of performing a two-dimensional convolution one can perform a sequence of one-dimensional convolutions.
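As an illustrative sketch (using the Sobel kernel from the following subsection), separability is simply the statement that the $3 \times 3$ matrix is an outer product of two vectors:
u = [-1, 0, 1] / 8    # 1-d derivative factor
v = [1, 2, 1]         # 1-d smoothing factor
H = u * v'            # outer product reproduces the full Sobel kernel H_x1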
Options
You can specify your choice of the finite-difference scheme via the kernelfun
parameter. You can also indicate how to deal with the pixels on the border of the image with the border
parameter.
Choices for kernelfun
In general kernelfun can be any function which satisfies the following interface:
kernelfun(extended::NTuple{N,Bool}, d) -> kern_d,
where kern_d is the kernel for producing the derivative with respect to the $d$th dimension of an $N$-dimensional array. The parameter extended[i] is true if the image is of size > 1 along dimension $i$. The parameter kern_d may be provided as a dense or factored kernel, with factored representations recommended when the kernel is separable.
Some valid kernelfun options are described below.
KernelFactors.prewitt
With the prewitt option [3] the computation of the gradient is based on the kernels
\[\begin{aligned} \mathbf{H}_{x_1} & = \frac{1}{6} \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} & \mathbf{H}_{x_2} & = \frac{1}{6} \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \\ & = \frac{1}{6} \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix} & & = \frac{1}{6} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}. \end{aligned}\]
See also: KernelFactors.prewitt and Kernel.prewitt
KernelFactors.sobel
The sobel option [4] designates the kernels
\[\begin{aligned} \mathbf{H}_{x_1} & = \frac{1}{8} \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} & \mathbf{H}_{x_2} & = \frac{1}{8} \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \\ & = \frac{1}{8} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 1 \end{bmatrix} & & = \frac{1}{8} \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}. \end{aligned}\]
See also: KernelFactors.sobel and Kernel.sobel
KernelFactors.ando3
The ando3 option [5] specifies the kernels
\[\begin{aligned} \mathbf{H}_{x_1} & = \begin{bmatrix} -0.112737 & -0.274526 & -0.112737 \\ 0 & 0 & 0 \\ 0.112737 & 0.274526 & 0.112737 \end{bmatrix} & \mathbf{H}_{x_2} & = \begin{bmatrix} -0.112737 & 0 & 0.112737 \\ -0.274526 & 0 & 0.274526 \\ -0.112737 & 0 & 0.112737 \end{bmatrix} \\ & = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 0.112737 & 0.274526 & 0.112737 \end{bmatrix} & & = \begin{bmatrix} 0.112737 \\ 0.274526 \\ 0.112737 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}. \end{aligned}\]
See also: KernelFactors.ando3 and Kernel.ando3; KernelFactors.ando4 and Kernel.ando4; KernelFactors.ando5 and Kernel.ando5
KernelFactors.scharr
The scharr option [6] designates the kernels
\[\begin{aligned} \mathbf{H}_{x_{1}} & = \frac{1}{32} \begin{bmatrix} -3 & -10 & -3 \\ 0 & 0 & 0 \\ 3 & 10 & 3 \end{bmatrix} & \mathbf{H}_{x_{2}} & = \frac{1}{32} \begin{bmatrix} -3 & 0 & 3 \\ -10 & 0 & 10\\ -3 & 0 & 3 \end{bmatrix} \\ & = \frac{1}{32} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 3 & 10 & 3 \end{bmatrix} & & = \frac{1}{32} \begin{bmatrix} 3 \\ 10 \\ 3 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}. \end{aligned}\]
See also: KernelFactors.scharr and Kernel.scharr
KernelFactors.bickley
The bickley option [7,8] designates the kernels
\[\begin{aligned} \mathbf{H}_{x_1} & = \frac{1}{12} \begin{bmatrix} -1 & -4 & -1 \\ 0 & 0 & 0 \\ 1 & 4 & 1 \end{bmatrix} & \mathbf{H}_{x_2} & = \frac{1}{12} \begin{bmatrix} -1 & 0 & 1 \\ -4 & 0 & 4 \\ -1 & 0 & 1 \end{bmatrix} \\ & = \frac{1}{12} \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 1 & 4 & 1 \end{bmatrix} & & = \frac{1}{12} \begin{bmatrix} 1 \\ 4 \\ 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}. \end{aligned}\]
See also: KernelFactors.bickley and Kernel.bickley
Choices for border
At the image edge, border is used to specify the padding which will be used to extrapolate the image beyond its original bounds. As an indicative example of each option the results of the padding are illustrated on an image consisting of a row of six pixels which are specified alphabetically: $\boxed{a \, b \, c \, d \, e \, f}$. We show the effects of padding only on the left and right border, but analogous consequences hold for the top and bottom border.
"replicate"
The border pixels extend beyond the image boundaries.
\[\boxed{ \begin{array}{l|c|r} a\, a\, a\, a & a \, b \, c \, d \, e \, f & f \, f \, f \, f \end{array} }\]
See also: Pad, padarray, Inner and NoPad
"circular"
The border pixels wrap around. For instance, indexing beyond the left border returns values starting from the right border.
\[\boxed{ \begin{array}{l|c|r} c\, d\, e\, f & a \, b \, c \, d \, e \, f & a \, b \, c \, d \end{array} }\]
See also: Pad, padarray, Inner and NoPad
"symmetric"
The border pixels reflect relative to a position between pixels. That is, the border pixel is repeated when mirroring.
\[\boxed{ \begin{array}{l|c|r} d\, c\, b\, a & a \, b \, c \, d \, e \, f & f \, e \, d \, c \end{array} }\]
See also: Pad, padarray, Inner and NoPad
"reflect"
The border pixels reflect relative to the edge itself. That is, the border pixel is omitted when mirroring.
\[\boxed{ \begin{array}{l|c|r} e\, d\, c\, b & a \, b \, c \, d \, e \, f & e \, d \, c \, b \end{array} }\]
See also: Pad, padarray, Inner and NoPad
Example
This example compares the quality of the gradient estimation methods in terms of the accuracy with which the orientation of the gradient is estimated.
using Images, Statistics
values = LinRange(-1,1,128);
w = 1.6*pi;
# Define a function of a sinusoidal grating, f(x,y) = sin( (w*x)^2 + (w*y)^2 ),
# together with its exact partial derivatives.
I = [sin( (w*x)^2 + (w*y)^2 ) for y in values, x in values];
Ix = [2*w*x*cos( (w*x)^2 + (w*y)^2 ) for y in values, x in values];
Iy = [2*w*y*cos( (w*x)^2 + (w*y)^2 ) for y in values, x in values];
# Determine the exact orientation of the gradients.
direction_true = atan.(Iy./Ix);
for kernelfunc in (KernelFactors.prewitt, KernelFactors.sobel,
KernelFactors.ando3, KernelFactors.scharr,
KernelFactors.bickley)
# Estimate the gradients and their orientations.
Gy, Gx = imgradients(I,kernelfunc, "replicate");
direction_estimated = atan.(Gy./Gx);
# Determine the mean absolute deviation between the estimated and true
# orientation. Ignore the values at the border since we expect them to be
# erroneous.
error = mean(abs.(direction_true[2:end-1,2:end-1] -
direction_estimated[2:end-1,2:end-1]));
error = round(error, digits=5);
println("Using $kernelfunc results in a mean absolute deviation of $error")
end
# output
Using ImageFiltering.KernelFactors.prewitt results in a mean absolute deviation of 0.01069
Using ImageFiltering.KernelFactors.sobel results in a mean absolute deviation of 0.00522
Using ImageFiltering.KernelFactors.ando3 results in a mean absolute deviation of 0.00365
Using ImageFiltering.KernelFactors.scharr results in a mean absolute deviation of 0.00126
Using ImageFiltering.KernelFactors.bickley results in a mean absolute deviation of 0.00038
References
- B. Jahne, Digital Image Processing (5th ed.). Springer Publishing Company, Incorporated, 2005. 10.1007/3-540-27563-0
- M. Patra and M. Karttunen, "Stencils with isotropic discretization error for differential operators," Numer. Methods Partial Differential Eq., vol. 22, pp. 936–953, 2006. doi:10.1002/num.20129
- J. M. Prewitt, "Object enhancement and extraction," Picture processing and Psychopictorics, vol. 10, no. 1, pp. 15–19, 1970.
- P.-E. Danielsson and O. Seger, "Generalized and separable sobel operators," in Machine Vision for Three-Dimensional Scenes, H. Freeman, Ed. Academic Press, 1990, pp. 347–379. doi:10.1016/b978-0-12-266722-0.50016-6
- S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
- H. Scharr and J. Weickert, "An anisotropic diffusion algorithm with optimized rotation invariance," Mustererkennung 2000, pp. 460–467, 2000. doi:10.1007/978-3-642-59802-9_58
- A. Belyaev, "Implicit image differentiation and filtering with applications to image sharpening," SIAM Journal on Imaging Sciences, vol. 6, no. 1, pp. 660–679, 2013. doi:10.1137/12087092x
- W. G. Bickley, "Finite difference formulae for the square lattice," The Quarterly Journal of Mechanics and Applied Mathematics, vol. 1, no. 1, pp. 35–42, 1948. doi:10.1093/qjmam/1.1.35
ImageFiltering.MapWindow.mapwindow
— Function mapwindow(f, img, window; [border="replicate"], [indices=axes(img)]) -> imgf
Apply f to sliding windows of img, with window size or axes specified by window. For example, mapwindow(median!, img, window) returns an Array of values similar to img (median-filtered, of course), whereas mapwindow(extrema, img, window) returns an Array of (min,max) tuples over a window of size window centered on each point of img.
The function f receives a buffer buf for the window of data surrounding the current point. If window is specified as a Dims-tuple (tuple-of-integers), then all the integers must be odd and the window is centered around the current image point. For example, if window=(3,3), then f will receive an Array buf corresponding to offsets (-1:1, -1:1) from the imgf[i,j] for which this is currently being computed. Alternatively, window can be a tuple of AbstractUnitRanges, in which case the specified ranges are used for buf; this allows you to use asymmetric windows if needed.
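A minimal sketch of the two window forms (the asymmetric ranges here are arbitrary and chosen only for illustration):
mapwindow(median!, img, (3, 3))         # symmetric 3x3 window given as a Dims-tuple
mapwindow(extrema, img, (-1:2, -1:1))   # asymmetric window given as ranges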
border specifies how the edges of img should be handled; see imfilter for details.
Finally, indices allows you to omit unnecessary computations if you want to do things like mapwindow on a subimage, or a strided variant of mapwindow. It works as follows:
mapwindow(f, img, window, indices=(2:5, 1:2:7)) == mapwindow(f,img,window)[2:5, 1:2:7]
except that the left-hand side is computed more efficiently because it omits the unused values.
Because the data in the buffer buf that is received by f is copied from img, and the buffer's memory is reused, f should not return references to buf. For example, this
f = buf->copy(buf) # as opposed to f = buf->buf
mapwindow(f, img, window, indices=(2:5, 1:2:7))
would work as expected.
For functions that can only take AbstractVector inputs, you might have to first specialize default_shape:
f = v->quantile(v, 0.75)
ImageFiltering.MapWindow.default_shape(::typeof(f)) = vec
and then mapwindow(f, img, (m,n)) should filter at the 75th quantile.
See also: imfilter.
ImageFiltering.MapWindow.mapwindow!
— Function mapwindow!(f, out, img, window; border="replicate", indices=axes(img))
Variant of mapwindow, with preallocated output. If out and img have overlapping memory regions, behaviour is undefined.
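A minimal sketch with a preallocated output array of the same axes as the input:
img = rand(64, 64)
out = similar(img)
mapwindow!(median!, out, img, (3, 3))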
Kernel
ImageFiltering.Kernel
— Module
Kernel is a module implementing filtering (correlation) kernels of full dimensionality. The following kernels are supported:
- sobel
- prewitt
- ando3, ando4, and ando5
- scharr
- bickley
- gaussian
- DoG (Difference-of-Gaussian)
- LoG (Laplacian-of-Gaussian)
- Laplacian
- gabor
- moffat
See also: KernelFactors.
ImageFiltering.Kernel.sobel
— Function diff1, diff2 = sobel()
Return $3 \times 3$ correlation kernels for two-dimensional gradient computation using the Sobel operator. The diff1 kernel computes the gradient along the y-axis (first dimension), and the diff2 kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = sobel(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d using the Sobel operator. If extended[dim] is false, diff will have size 1 along that dimension.
Citation
P.-E. Danielsson and O. Seger, "Generalized and separable sobel operators," in Machine Vision for Three-Dimensional Scenes, H. Freeman, Ed. Academic Press, 1990, pp. 347–379. doi:10.1016/b978-0-12-266722-0.50016-6
See also: KernelFactors.sobel, Kernel.prewitt, Kernel.ando3, Kernel.scharr, Kernel.bickley and imgradients.
ImageFiltering.Kernel.prewitt
— Function diff1, diff2 = prewitt()
Return $3 \times 3$ correlation kernels for two-dimensional gradient computation using the Prewitt operator. The diff1 kernel computes the gradient along the y-axis (first dimension), and the diff2 kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = prewitt(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d using the Prewitt operator. If extended[dim] is false, diff will have size 1 along that dimension.
Citation
J. M. Prewitt, "Object enhancement and extraction," Picture processing and Psychopictorics, vol. 10, no. 1, pp. 15–19, 1970.
See also: KernelFactors.prewitt, Kernel.sobel, Kernel.ando3, Kernel.scharr, Kernel.bickley and ImageFiltering.imgradients.
ImageFiltering.Kernel.ando3
— Function diff1, diff2 = ando3()
Return $3 \times 3$ correlation kernels for two-dimensional gradient computation using Ando's "optimal" filters. The diff1 kernel computes the gradient along the y-axis (first dimension), and the diff2 kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = ando3(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d using Ando's "optimal" filters of size 3. If extended[dim] is false, diff will have size 1 along that dimension.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: KernelFactors.ando3, Kernel.ando4, Kernel.ando5 and ImageFiltering.imgradients.
ImageFiltering.Kernel.ando4
— Function diff1, diff2 = ando4()
Return $4 \times 4$ correlation kernels for two-dimensional gradient computation using Ando's "optimal" filters. The diff1 kernel computes the gradient along the y-axis (first dimension), and the diff2 kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = ando4(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d using Ando's "optimal" filters of size 4. If extended[dim] is false, diff will have size 1 along that dimension.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: KernelFactors.ando4, Kernel.ando3, Kernel.ando5 and ImageFiltering.imgradients.
ImageFiltering.Kernel.ando5
— Function diff1, diff2 = ando5()
Return $5 \times 5$ correlation kernels for two-dimensional gradient computation using Ando's "optimal" filters. The diff1 kernel computes the gradient along the y-axis (first dimension), and the diff2 kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = ando5(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d using Ando's "optimal" filters of size 5. If extended[dim] is false, diff will have size 1 along that dimension.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: KernelFactors.ando5, Kernel.ando3, Kernel.ando4 and ImageFiltering.imgradients.
ImageFiltering.Kernel.bickley
— Function diff1, diff2 = bickley()
Return $3 \times 3$ correlation kernels for two-dimensional gradient computation using the Bickley operator. The diff1 kernel computes the gradient along the y-axis (first dimension), and the diff2 kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = bickley(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d using the Bickley operator. If extended[dim] is false, diff will have size 1 along that dimension.
Citation
W. G. Bickley, "Finite difference formulae for the square lattice," The Quarterly Journal of Mechanics and Applied Mathematics, vol. 1, no. 1, pp. 35–42, 1948. doi:10.1093/qjmam/1.1.35
See also: KernelFactors.bickley, Kernel.prewitt, Kernel.ando3, Kernel.scharr and ImageFiltering.imgradients.
ImageFiltering.Kernel.scharr
— Function diff1, diff2 = scharr()
Return $3 \times 3$ correlation kernels for two-dimensional gradient computation using the Scharr operator. The diff1 kernel computes the gradient along the y-axis (first dimension), and the diff2 kernel computes the gradient along the x-axis (second dimension). diff1 == rotr90(diff2)
(diff,) = scharr(extended::NTuple{N,Bool}, d)
Return (a tuple of) the N-dimensional correlation kernel for gradient computation along the dimension d using the Scharr operator. If extended[dim] is false, diff will have size 1 along that dimension.
Citation
H. Scharr and J. Weickert, "An anisotropic diffusion algorithm with optimized rotation invariance," Mustererkennung 2000, pp. 460–467, 2000. doi:10.1007/978-3-642-59802-9_58
See also: KernelFactors.scharr, Kernel.prewitt, Kernel.ando3, Kernel.bickley and ImageFiltering.imgradients.
ImageFiltering.Kernel.gaussian
— Function gaussian((σ1, σ2, ...), [(l1, l2, ...)]) -> g
gaussian(σ) -> g
Construct a multidimensional gaussian filter, with standard deviation σd along dimension d. Optionally provide the kernel length l, which must be a tuple of the same length.
If σ is supplied as a single number, a symmetric 2d kernel is constructed.
See also: KernelFactors.gaussian.
ImageFiltering.Kernel.DoG
— Function DoG((σp1, σp2, ...), (σm1, σm2, ...), [l1, l2, ...]) -> k
DoG((σ1, σ2, ...)) -> k
DoG(σ::Real) -> k
Construct a multidimensional difference-of-gaussian kernel k, equal to gaussian(σp, l)-gaussian(σm, l). When only a single σ is supplied, the default is to choose σp = σ, σm = √2 σ. Optionally provide the kernel length l; the default is to extend by two max(σp,σm) in each direction from the center. l must be odd.
If σ is provided as a single number, a symmetric 2d DoG kernel is returned.
See also: KernelFactors.IIRGaussian.
ImageFiltering.Kernel.LoG
— Function LoG((σ1, σ2, ...)) -> k
LoG(σ) -> k
Construct a Laplacian-of-Gaussian kernel k. σd is the gaussian width along dimension d. If σ is supplied as a single number, a symmetric 2d kernel is returned.
See also: KernelFactors.IIRGaussian and Kernel.Laplacian.
ImageFiltering.Kernel.Laplacian
— Type Laplacian((true,true,false,...))
Laplacian(dims, N)
Laplacian()
Laplacian kernel in N dimensions, taking derivatives along the directions marked as true in the supplied tuple. Alternatively, one can pass dims, a listing of the dimensions for differentiation. (However, this variant is not inferrable.)
Laplacian() is the 2d laplacian, equivalent to Laplacian((true,true)).
The kernel is represented as an opaque type, but you can use convert(AbstractArray, L) to convert it into array format.
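A minimal sketch of materializing the 2d Laplacian as an ordinary array (the resulting stencil is expected to have -4 at the origin and 1 at the four axis-aligned neighbors):
L = Kernel.Laplacian()
convert(AbstractArray, L)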
ImageFiltering.Kernel.gabor
— Function gabor(size_x,size_y,σ,θ,λ,γ,ψ) -> (k_real,k_complex)
Returns a 2-dimensional complex Gabor kernel contained in a tuple where
- size_x, size_y denote the size of the kernel
- σ denotes the standard deviation of the Gaussian envelope
- θ represents the orientation of the normal to the parallel stripes of a Gabor function
- λ represents the wavelength of the sinusoidal factor
- γ is the spatial aspect ratio, and specifies the ellipticity of the support of the Gabor function
- ψ is the phase offset
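A hedged usage sketch with arbitrary parameter values chosen only for illustration:
k_real, k_complex = Kernel.gabor(9, 9, 2.0, 0.0, 5.0, 0.5, 0.0)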
Citation
N. Petkov and P. Kruizinga, "Computational models of visual neurons specialised in the detection of periodic and aperiodic oriented visual stimuli: bar and grating cells," Biological Cybernetics, vol. 76, no. 2, pp. 83–96, Feb. 1997. doi:10.1007/s004220050323
ImageFiltering.Kernel.moffat
— Function moffat(α, β, ls) -> k
Constructs a 2D, symmetric Moffat kernel k with core width α and power β. The size of the kernel defaults to 4 * full-width-half-max or as specified in ls.
Citation
Moffat, A. F. J. "A theoretical investigation of focal stellar images in the photographic emulsion and application to photographic photometry." Astronomy and Astrophysics 3 (1969): 455.
KernelFactors
ImageFiltering.KernelFactors
— Module
KernelFactors is a module implementing separable filtering kernels, each stored in terms of their factors. The following kernels are supported:
- box
- sobel
- prewitt
- ando3, ando4, and ando5 (the latter in 2d only)
- scharr
- bickley
- gaussian
- IIRGaussian (approximate gaussian filtering, fast even for large σ)
See also: Kernel.
ImageFiltering.KernelFactors.sobel
— Function kern1, kern2 = sobel()
Return factored Sobel filters for dimensions 1 and 2 of a two-dimensional image. Each is a 2-tuple of one-dimensional filters.
Citation
P.-E. Danielsson and O. Seger, "Generalized and separable sobel operators," in Machine Vision for Three-Dimensional Scenes, H. Freeman, Ed. Academic Press, 1990, pp. 347–379. doi:10.1016/b978-0-12-266722-0.50016-6
See also: Kernel.sobel and ImageFiltering.imgradients.
kern = sobel(extended::NTuple{N,Bool}, d)
Return a factored Sobel filter for computing the gradient in N dimensions along axis d. If extended[dim] is false, kern will have size 1 along that dimension.
See also: Kernel.sobel and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.prewitt
— Function kern1, kern2 = prewitt()
Return factored Prewitt filters for dimensions 1 and 2 of your image. Each is a 2-tuple of one-dimensional filters.
Citation
J. M. Prewitt, "Object enhancement and extraction," Picture processing and Psychopictorics, vol. 10, no. 1, pp. 15–19, 1970.
See also: Kernel.prewitt and ImageFiltering.imgradients.
kern = prewitt(extended::NTuple{N,Bool}, d)
Return a factored Prewitt filter for computing the gradient in N dimensions along axis d. If extended[dim] is false, kern will have size 1 along that dimension.
See also: Kernel.prewitt and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.bickley
— Function kern1, kern2 = bickley()
Return factored Bickley filters for dimensions 1 and 2 of your image. Each is a 2-tuple of one-dimensional filters.
Citation
W. G. Bickley, "Finite difference formulae for the square lattice," The Quarterly Journal of Mechanics and Applied Mathematics, vol. 1, no. 1, pp. 35–42, 1948. doi:10.1093/qjmam/1.1.35
See also: Kernel.bickley and ImageFiltering.imgradients.
kern = bickley(extended::NTuple{N,Bool}, d)
Return a factored Bickley filter for computing the gradient in N dimensions along axis d. If extended[dim] is false, kern will have size 1 along that dimension.
See also: Kernel.bickley and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.scharr
— Function kern1, kern2 = scharr()
Return factored Scharr filters for dimensions 1 and 2 of your image. Each is a 2-tuple of one-dimensional filters.
Citation
H. Scharr and J. Weickert, "An anisotropic diffusion algorithm with optimized rotation invariance," Mustererkennung 2000, pp. 460–467, 2000. doi:10.1007/978-3-642-59802-9_58
See also: Kernel.scharr and ImageFiltering.imgradients.
kern = scharr(extended::NTuple{N,Bool}, d)
Return a factored Scharr filter for computing the gradient in N dimensions along axis d. If extended[dim] is false, kern will have size 1 along that dimension.
See also: Kernel.scharr and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.ando3
— Function kern1, kern2 = ando3()
Return a factored form of Ando's "optimal" $3 \times 3$ gradient filters for dimensions 1 and 2 of your image.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: Kernel.ando3, KernelFactors.ando4, KernelFactors.ando5 and ImageFiltering.imgradients.
kern = ando3(extended::NTuple{N,Bool}, d)
Return a factored Ando filter (size 3) for computing the gradient in N dimensions along axis d. If extended[dim] is false, kern will have size 1 along that dimension.
See also: KernelFactors.ando4, KernelFactors.ando5 and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.ando4
— Function kern1, kern2 = ando4()
Return separable approximations of Ando's "optimal" 4x4 filters for dimensions 1 and 2 of your image.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: Kernel.ando4 and ImageFiltering.imgradients.
kern = ando4(extended::NTuple{N,Bool}, d)
Return a factored Ando filter (size 4) for computing the gradient in N dimensions along axis d. If extended[dim] is false, kern will have size 1 along that dimension.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: Kernel.ando4 and ImageFiltering.imgradients.
ImageFiltering.KernelFactors.ando5
— Function kern1, kern2 = ando5()
Return separable approximations of Ando's "optimal" 5x5 gradient filters for dimensions 1 and 2 of your image.
Citation
S. Ando, "Consistent gradient operators," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no.3, pp. 252–265, 2000. doi:10.1109/34.841757
See also: Kernel.ando5 and ImageFiltering.imgradients.
kern = ando5(extended::NTuple{N,Bool}, d)
Return a factored Ando filter (size 5) for computing the gradient in N dimensions along axis d. If extended[dim] is false, kern will have size 1 along that dimension.
ImageFiltering.KernelFactors.gaussian
— Function gaussian(σ::Real, [l]) -> g
Construct a 1d gaussian kernel g with standard deviation σ, optionally providing the kernel length l. The default is to extend by two σ in each direction from the center. l must be odd.
gaussian((σ1, σ2, ...), [l]) -> (g1, g2, ...)
Construct a multidimensional gaussian filter as a product of single-dimension factors, with standard deviation σd along dimension d. Optionally provide the kernel length l, which must be a tuple of the same length.
ImageFiltering.KernelFactors.IIRGaussian
— Function IIRGaussian([T], σ; emit_warning::Bool=true)
Construct an infinite impulse response (IIR) approximation to a Gaussian of standard deviation σ. σ may either be a single real number or a tuple of numbers; in the latter case, a tuple of such filters will be created, each for filtering a different dimension of an array.
Optionally specify the type T for the filter coefficients; if not supplied, it will match σ (unless σ is not floating-point, in which case Float64 will be chosen).
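A minimal usage sketch (with img any array): because the filter is recursive, it stays fast even for a large standard deviation.
kern = KernelFactors.IIRGaussian((5.0, 5.0))   # one factor per image dimension
imgblur = imfilter(img, kern)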
Citation
I. T. Young, L. J. van Vliet, and M. van Ginkel, "Recursive Gabor Filtering". IEEE Trans. Sig. Proc., 50: 2798-2805 (2002).
ImageFiltering.KernelFactors.TriggsSdika
— Type TriggsSdika(a, b, scale, M)
Defines a kernel for one-dimensional infinite impulse response (IIR) filtering. a is a "forward" filter, b a "backward" filter, M is a matrix for matching boundary conditions at the right edge, and scale is a constant scaling applied to each element at the conclusion of filtering.
Citation
B. Triggs and M. Sdika, "Boundary conditions for Young-van Vliet recursive filtering". IEEE Trans. on Sig. Proc. 54: 2365-2367 (2006).
TriggsSdika(ab, scale)
Create a symmetric Triggs-Sdika filter (with a = b = ab). M is calculated for you. Only length 3 filters are currently supported.
Kernel utilities
OffsetArrays.center
— Function center(A, [r::RoundingMode=RoundDown])::Dims
Return the center coordinate of the given array A. If size(A, k) is even, a rounding procedure will be applied with mode r.
This method requires at least OffsetArrays 1.9.
Examples
julia> A = reshape(collect(1:9), 3, 3)
3×3 Matrix{Int64}:
1 4 7
2 5 8
3 6 9
julia> c = OffsetArrays.center(A)
(2, 2)
julia> A[c...]
5
julia> Ao = OffsetArray(A, -2, -2); # axes (-1:1, -1:1)
julia> c = OffsetArrays.center(Ao)
(0, 0)
julia> Ao[c...]
5
To shift the center coordinate of the given array to (0, 0, ...), you can use centered.
OffsetArrays.centered
— Function centered(A, cp=center(A)) -> Ao
Shift the center coordinate/point cp of array A to (0, 0, ..., 0). Internally, this is equivalent to OffsetArray(A, .-cp).
This method requires at least OffsetArrays 1.9.
Examples
julia> A = reshape(collect(1:9), 3, 3)
3×3 Matrix{Int64}:
1 4 7
2 5 8
3 6 9
julia> Ao = OffsetArrays.centered(A); # axes (-1:1, -1:1)
julia> Ao[0, 0]
5
julia> Ao = OffsetArray(A, OffsetArrays.Origin(0)); # axes (0:2, 0:2)
julia> Aoo = OffsetArrays.centered(Ao); # axes (-1:1, -1:1)
julia> Aoo[0, 0]
5
Users are allowed to pass cp to change how "center point" is interpreted, but the meaning of the output array should be reinterpreted as well. For instance, if cp = map(last, axes(A)) then this function no longer shifts the center point but instead the bottom-right point to (0, 0, ..., 0). A common usage of cp is to change the rounding behavior when the array is of even size at some dimension:
julia> A = reshape(collect(1:4), 2, 2) # Ideally the center should be (1.5, 1.5) but OffsetArrays only support integer offsets
2×2 Matrix{Int64}:
1 3
2 4
julia> OffsetArrays.centered(A, OffsetArrays.center(A, RoundUp)) # set (2, 2) as the center point
2×2 OffsetArray(::Matrix{Int64}, -1:0, -1:0) with eltype Int64 with indices -1:0×-1:0:
1 3
2 4
julia> OffsetArrays.centered(A, OffsetArrays.center(A, RoundDown)) # set (1, 1) as the center point
2×2 OffsetArray(::Matrix{Int64}, 0:1, 0:1) with eltype Int64 with indices 0:1×0:1:
1 3
2 4
See also center.
ImageFiltering.KernelFactors.kernelfactors
— Function kernelfactors(factors::Tuple)
Prepare a factored kernel for filtering. If passed a 2-tuple of vectors of lengths m and n, this will return a 2-tuple of ReshapedVectors that are effectively of sizes m×1 and 1×n. In general, each successive factor will be reshaped to extend along the corresponding dimension.
If passed a tuple of general arrays, it is assumed that each is shaped appropriately along its "leading" dimensions; the dimensionality of each is "extended" to N = length(factors), appending 1s to the size as needed.
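A minimal sketch: two 1-d vectors become axis-aligned factors whose effective sizes extend along the first and second dimension, respectively.
kf = kernelfactors(([-1, 0, 1], [1, 2, 1]))
size.(kf)   # expected to be ((3, 1), (1, 3))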
ImageFiltering.Kernel.reflect
— Function reflect(kernel) --> reflectedkernel
Compute the pointwise reflection around 0, 0, ... of the kernel kernel. Using imfilter with a reflectedkernel performs convolution, rather than correlation, with respect to the original kernel.
Boundaries and padding
ImageFiltering.padarray
— Function padarray([T], img, border) --> imgpadded
Generate a padded image from an array img and a specification border of the boundary conditions and amount of padding to add.
Output
An expansion of the input image in which additional pixels are derived from the border of the input image using the extrapolation scheme specified by border.
Details
The function supports one, two or multi-dimensional images. You can specify the element type T of the output image.
Options
Valid border options are described below.
Pad
The type Pad designates the form of padding which should be used to extrapolate pixels beyond the boundary of an image. Instances must set style, a Symbol specifying the boundary conditions of the image. The Symbol must be one of:
- :replicate (repeat edge values to infinity)
- :circular (image edges "wrap around")
- :symmetric (the image reflects relative to a position between pixels)
- :reflect (the image reflects relative to the edge itself)
Refer to the documentation of Pad for more details and examples for each option.
Fill
The type Fill designates a particular value which will be used to extrapolate pixels beyond the boundary of an image. Refer to the documentation of Fill for more details and illustrations.
2D Examples
Each example is based on the input array
\[\mathbf{A} = \boxed{ \begin{matrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 4 & 6 & 8 & 10 & 12 \\ 3 & 6 & 9 & 12 & 15 & 18 \\ 4 & 8 & 12 & 16 & 20 & 24 \\ 5 & 10 & 15 & 20 & 25 & 30 \\ 6 & 12 & 18 & 24 & 30 & 36 \end{matrix}}.\]
Examples with Pad
The command padarray(A, Pad(:replicate,4,4))
yields
\[\boxed{ \begin{array}{ccccccccccccc} 1 & 1 & 1 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 6 & 6 & 6 & 6 \\ 1 & 1 & 1 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 6 & 6 & 6 & 6 \\ 1 & 1 & 1 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 6 & 6 & 6 & 6 \\ 1 & 1 & 1 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 6 & 6 & 6 & 6 \\ 1 & 1 & 1 & 1 & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{4} & \boxed{5} & \boxed{6} & 6 & 6 & 6 & 6 \\ 2 & 2 & 2 & 2 & \boxed{2} & \boxed{4} & \boxed{6} & \boxed{8} & \boxed{10} & \boxed{12} & 12 & 12 & 12 & 12 \\ 3 & 3 & 3 & 3 & \boxed{3} & \boxed{6} & \boxed{9} & \boxed{12} & \boxed{15} & \boxed{18} & 18 & 18 & 18 & 18 \\ 4 & 4 & 4 & 4 & \boxed{4} & \boxed{8} & \boxed{12} & \boxed{16} & \boxed{20} & \boxed{24} & 24 & 24 & 24 & 24 \\ 5 & 5 & 5 & 5 & \boxed{5} & \boxed{10} & \boxed{15} & \boxed{20} & \boxed{25} & \boxed{30} & 30 & 30 & 30 & 30 \\ 6 & 6 & 6 & 6 & \boxed{6} & \boxed{12} & \boxed{18} & \boxed{24} & \boxed{30} & \boxed{36} & 36 & 36 & 36 & 36 \\ 6 & 6 & 6 & 6 & 6 & 12 & 18 & 24 & 30 & 36 & 36 & 36 & 36 & 36 \\ 6 & 6 & 6 & 6 & 6 & 12 & 18 & 24 & 30 & 36 & 36 & 36 & 36 & 36 \\ 6 & 6 & 6 & 6 & 6 & 12 & 18 & 24 & 30 & 36 & 36 & 36 & 36 & 36 \\ 6 & 6 & 6 & 6 & 6 & 12 & 18 & 24 & 30 & 36 & 36 & 36 & 36 & 36 \end{array} }.\]
The command padarray(A, Pad(:circular,4,4))
yields
\[\boxed{ \begin{array}{ccccccccccccc} 9 & 12 & 15 & 18 & 3 & 6 & 9 & 12 & 15 & 18 & 3 & 6 & 9 & 12 \\ 12 & 16 & 20 & 24 & 4 & 8 & 12 & 16 & 20 & 24 & 4 & 8 & 12 & 16 \\ 15 & 20 & 25 & 30 & 5 & 10 & 15 & 20 & 25 & 30 & 5 & 10 & 15 & 20 \\ 18 & 24 & 30 & 36 & 6 & 12 & 18 & 24 & 30 & 36 & 6 & 12 & 18 & 24 \\ 3 & 4 & 5 & 6 & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{4} & \boxed{5} & \boxed{6} & 1 & 2 & 3 & 4 \\ 6 & 8 & 10 & 12 & \boxed{2} & \boxed{4} & \boxed{6} & \boxed{8} & \boxed{10} & \boxed{12} & 2 & 4 & 6 & 8 \\ 9 & 12 & 15 & 18 & \boxed{3} & \boxed{6} & \boxed{9} & \boxed{12} & \boxed{15} & \boxed{18} & 3 & 6 & 9 & 12 \\ 12 & 16 & 20 & 24 & \boxed{4} & \boxed{8} & \boxed{12} & \boxed{16} & \boxed{20} & \boxed{24} & 4 & 8 & 12 & 16 \\ 15 & 20 & 25 & 30 & \boxed{5} & \boxed{10} & \boxed{15} & \boxed{20} & \boxed{25} & \boxed{30} & 5 & 10 & 15 & 20 \\ 18 & 24 & 30 & 36 & \boxed{6} & \boxed{12} & \boxed{18} & \boxed{24} & \boxed{30} & \boxed{36} & 6 & 12 & 18 & 24 \\ 3 & 4 & 5 & 6 & 1 & 2 & 3 & 4 & 5 & 6 & 1 & 2 & 3 & 4 \\ 6 & 8 & 10 & 12 & 2 & 4 & 6 & 8 & 10 & 12 & 2 & 4 & 6 & 8 \\ 9 & 12 & 15 & 18 & 3 & 6 & 9 & 12 & 15 & 18 & 3 & 6 & 9 & 12 \\ 12 & 16 & 20 & 24 & 4 & 8 & 12 & 16 & 20 & 24 & 4 & 8 & 12 & 16 \end{array} }.\]
The command padarray(A, Pad(:symmetric,4,4))
yields
\[\boxed{ \begin{array}{ccccccccccccc} 16 & 12 & 8 & 4 & 4 & 8 & 12 & 16 & 20 & 24 & 24 & 20 & 16 & 12 \\ 12 & 9 & 6 & 3 & 3 & 6 & 9 & 12 & 15 & 18 & 18 & 15 & 12 & 9 \\ 8 & 6 & 4 & 2 & 2 & 4 & 6 & 8 & 10 & 12 & 12 & 10 & 8 & 6 \\ 4 & 3 & 2 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 6 & 5 & 4 & 3 \\ 4 & 3 & 2 & 1 & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{4} & \boxed{5} & \boxed{6} & 6 & 5 & 4 & 3 \\ 8 & 6 & 4 & 2 & \boxed{2} & \boxed{4} & \boxed{6} & \boxed{8} & \boxed{10} & \boxed{12} & 12 & 10 & 8 & 6 \\ 12 & 9 & 6 & 3 & \boxed{3} & \boxed{6} & \boxed{9} & \boxed{12} & \boxed{15} & \boxed{18} & 18 & 15 & 12 & 9 \\ 16 & 12 & 8 & 4 & \boxed{4} & \boxed{8} & \boxed{12} & \boxed{16} & \boxed{20} & \boxed{24} & 24 & 20 & 16 & 12 \\ 20 & 15 & 10 & 5 & \boxed{5} & \boxed{10} & \boxed{15} & \boxed{20} & \boxed{25} & \boxed{30} & 30 & 25 & 20 & 15 \\ 24 & 18 & 12 & 6 & \boxed{6} & \boxed{12} & \boxed{18} & \boxed{24} & \boxed{30} & \boxed{36} & 36 & 30 & 24 & 18 \\ 24 & 18 & 12 & 6 & 6 & 12 & 18 & 24 & 30 & 36 & 36 & 30 & 24 & 18 \\ 20 & 15 & 10 & 5 & 5 & 10 & 15 & 20 & 25 & 30 & 30 & 25 & 20 & 15 \\ 16 & 12 & 8 & 4 & 4 & 8 & 12 & 16 & 20 & 24 & 24 & 20 & 16 & 12 \\ 12 & 9 & 6 & 3 & 3 & 6 & 9 & 12 & 15 & 18 & 18 & 15 & 12 & 9 \end{array} }.\]
The command padarray(A, Pad(:reflect,4,4))
yields
\[\boxed{ \begin{array}{ccccccccccccc} 25 & 20 & 15 & 10 & 5 & 10 & 15 & 20 & 25 & 30 & 25 & 20 & 15 & 10 \\ 20 & 16 & 12 & 8 & 4 & 8 & 12 & 16 & 20 & 24 & 20 & 16 & 12 & 8 \\ 15 & 12 & 9 & 6 & 3 & 6 & 9 & 12 & 15 & 18 & 15 & 12 & 9 & 6 \\ 10 & 8 & 6 & 4 & 2 & 4 & 6 & 8 & 10 & 12 & 10 & 8 & 6 & 4 \\ 5 & 4 & 3 & 2 & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{4} & \boxed{5} & \boxed{6} & 5 & 4 & 3 & 2 \\ 10 & 8 & 6 & 4 & \boxed{2} & \boxed{4} & \boxed{6} & \boxed{8} & \boxed{10} & \boxed{12} & 10 & 8 & 6 & 4 \\ 15 & 12 & 9 & 6 & \boxed{3} & \boxed{6} & \boxed{9} & \boxed{12} & \boxed{15} & \boxed{18} & 15 & 12 & 9 & 6 \\ 20 & 16 & 12 & 8 & \boxed{4} & \boxed{8} & \boxed{12} & \boxed{16} & \boxed{20} & \boxed{24} & 20 & 16 & 12 & 8 \\ 25 & 20 & 15 & 10 & \boxed{5} & \boxed{10} & \boxed{15} & \boxed{20} & \boxed{25} & \boxed{30} & 25 & 20 & 15 & 10 \\ 30 & 24 & 18 & 12 & \boxed{6} & \boxed{12} & \boxed{18} & \boxed{24} & \boxed{30} & \boxed{36} & 30 & 24 & 18 & 12 \\ 25 & 20 & 15 & 10 & 5 & 10 & 15 & 20 & 25 & 30 & 25 & 20 & 15 & 10 \\ 20 & 16 & 12 & 8 & 4 & 8 & 12 & 16 & 20 & 24 & 20 & 16 & 12 & 8 \\ 15 & 12 & 9 & 6 & 3 & 6 & 9 & 12 & 15 & 18 & 15 & 12 & 9 & 6 \\ 10 & 8 & 6 & 4 & 2 & 4 & 6 & 8 & 10 & 12 & 10 & 8 & 6 & 4 \end{array} }.\]
Examples with Fill
The command padarray(A, Fill(0,(4,4),(4,4)))
yields
\[\boxed{ \begin{array}{ccccccccccccc} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{1} & \boxed{2} & \boxed{3} & \boxed{4} & \boxed{5} & \boxed{6} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{2} & \boxed{4} & \boxed{6} & \boxed{8} & \boxed{10} & \boxed{12} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{3} & \boxed{6} & \boxed{9} & \boxed{12} & \boxed{15} & \boxed{18} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{4} & \boxed{8} & \boxed{12} & \boxed{16} & \boxed{20} & \boxed{24} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{5} & \boxed{10} & \boxed{15} & \boxed{20} & \boxed{25} & \boxed{30} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \boxed{6} & \boxed{12} & \boxed{18} & \boxed{24} & \boxed{30} & \boxed{36} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} }.\]
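The two-dimensional results above can be reproduced directly with padarray. A minimal sketch follows; the definition of A (entry (i,j) equal to i·j) is inferred from the boxed interior values and is an assumption of this sketch.
using ImageFiltering
# 6×6 base array implied by the boxed interior values above (assumed: A[i,j] = i*j)
A = [i * j for i in 1:6, j in 1:6]
padarray(A, Pad(:replicate, 4, 4))    # replicate padding, as in the examples above
padarray(A, Fill(0, (4, 4), (4, 4)))  # constant fill, as in the Fill example above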
3D Examples
Each example is based on a three-dimensional array $\mathsf{A} \in \mathbb{R}^{2 \times 2 \times 2}$ given by
\[\mathsf{A}(:,:,1) = \boxed{ \begin{array}{cc} 1 & 2 \\ 3 & 4 \end{array}} \quad \text{and} \quad \mathsf{A}(:,:,2) = \boxed{ \begin{array}{cc} 5 & 6 \\ 7 & 8 \end{array}}.\]
Note that each example yields a new three-dimensional array $\mathsf{A}' \in \mathbb{R}^{4 \times 4 \times 4}$ of type OffsetArray
, whose indices along each padded dimension may be negative or start from zero.
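The base array and the padded axes can be checked directly in Julia. A minimal sketch, assuming nothing beyond the padarray and Pad API shown in these examples:
using ImageFiltering
# construct the 2×2×2 example array
A = cat([1 2; 3 4], [5 6; 7 8]; dims=3)
Ap = padarray(A, Pad(:replicate, 1, 1, 1))
axes(Ap)      # (0:3, 0:3, 0:3) — an OffsetArray whose indices start at 0
Ap[:, :, 1]   # the slice shown as A′(:,:,1) in the first example below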
Examples with Pad
The command padarray(A,Pad(:replicate,1,1,1))
yields
\[\begin{aligned} \mathsf{A}'(:,:,0) & = \boxed{ \begin{array}{cccc} 1 & 1 & 2 & 2 \\ 1 & 1 & 2 & 2 \\ 3 & 3 & 4 & 4 \\ 3 & 3 & 4 & 4 \end{array}} & \mathsf{A}'(:,:,1) & = \boxed{ \begin{array}{cccc} 1 & 1 & 2 & 2 \\ 1 & \boxed{1} & \boxed{2} & 2 \\ 3 & \boxed{3} & \boxed{4} & 4 \\ 3 & 3 & 4 & 4 \end{array}} \\ \mathsf{A}'(:,:,2) & = \boxed{ \begin{array}{cccc} 5 & 5 & 6 & 6 \\ 5 & \boxed{5} & \boxed{6} & 6 \\ 7 & \boxed{7} & \boxed{8} & 8 \\ 7 & 7 & 8 & 8 \end{array}} & \mathsf{A}'(:,:,3) & = \boxed{ \begin{array}{cccc} 5 & 5 & 6 & 6 \\ 5 & 5 & 6 & 6 \\ 7 & 7 & 8 & 8 \\ 7 & 7 & 8 & 8 \end{array}} \end{aligned} .\]
The command padarray(A,Pad(:circular,1,1,1))
yields
\[\begin{aligned} \mathsf{A}'(:,:,0) & = \boxed{ \begin{array}{cccc} 8 & 7 & 8 & 7 \\ 6 & 5 & 6 & 5 \\ 8 & 7 & 8 & 7 \\ 6 & 5 & 6 & 5 \end{array}} & \mathsf{A}'(:,:,1) & = \boxed{ \begin{array}{cccc} 4 & 3 & 4 & 3 \\ 2 & \boxed{1} & \boxed{2} & 1 \\ 4 & \boxed{3} & \boxed{4} & 3 \\ 2 & 1 & 2 & 1 \end{array}} \\ \mathsf{A}'(:,:,2) & = \boxed{ \begin{array}{cccc} 8 & 7 & 8 & 7 \\ 6 & \boxed{5} & \boxed{6} & 5 \\ 8 & \boxed{7} & \boxed{8} & 7 \\ 6 & 5 & 6 & 5 \end{array}} & \mathsf{A}'(:,:,3) & = \boxed{ \begin{array}{cccc} 4 & 3 & 4 & 3 \\ 2 & 1 & 2 & 1 \\ 4 & 3 & 4 & 3 \\ 2 & 1 & 2 & 1 \end{array}} \end{aligned} .\]
The command padarray(A,Pad(:symmetric,1,1,1))
yields
\[\begin{aligned} \mathsf{A}'(:,:,0) & = \boxed{ \begin{array}{cccc} 1 & 1 & 2 & 2 \\ 1 & 1 & 2 & 2 \\ 3 & 3 & 4 & 4 \\ 3 & 3 & 4 & 4 \end{array}} & \mathsf{A}'(:,:,1) & = \boxed{ \begin{array}{cccc} 1 & 1 & 2 & 2 \\ 1 & \boxed{1} & \boxed{2} & 2 \\ 3 & \boxed{3} & \boxed{4} & 4 \\ 3 & 3 & 4 & 4 \end{array}} \\ \mathsf{A}'(:,:,2) & = \boxed{ \begin{array}{cccc} 5 & 5 & 6 & 6 \\ 5 & \boxed{5} & \boxed{6} & 6 \\ 7 & \boxed{7} & \boxed{8} & 8 \\ 7 & 7 & 8 & 8 \end{array}} & \mathsf{A}'(:,:,3) & = \boxed{ \begin{array}{cccc} 5 & 5 & 6 & 6 \\ 5 & 5 & 6 & 6 \\ 7 & 7 & 8 & 8 \\ 7 & 7 & 8 & 8 \end{array}} \end{aligned} .\]
The command padarray(A,Pad(:reflect,1,1,1))
yields
\[\begin{aligned} \mathsf{A}'(:,:,0) & = \boxed{ \begin{array}{cccc} 8 & 7 & 8 & 7 \\ 6 & 5 & 6 & 5 \\ 8 & 7 & 8 & 7 \\ 6 & 5 & 6 & 5 \end{array}} & \mathsf{A}'(:,:,1) & = \boxed{ \begin{array}{cccc} 4 & 3 & 4 & 3 \\ 2 & \boxed{1} & \boxed{2} & 1 \\ 4 & \boxed{3} & \boxed{4} & 3 \\ 2 & 1 & 2 & 1 \end{array}} \\ \mathsf{A}'(:,:,2) & = \boxed{ \begin{array}{cccc} 8 & 7 & 8 & 7 \\ 6 & \boxed{5} & \boxed{6} & 5 \\ 8 & \boxed{7} & \boxed{8} & 7 \\ 6 & 5 & 6 & 5 \end{array}} & \mathsf{A}'(:,:,3) & = \boxed{ \begin{array}{cccc} 4 & 3 & 4 & 3 \\ 2 & 1 & 2 & 1 \\ 4 & 3 & 4 & 3 \\ 2 & 1 & 2 & 1 \end{array}} \end{aligned} .\]
Examples with Fill
The command padarray(A,Fill(0,(1,1,1)))
yields
\[\begin{aligned} \mathsf{A}'(:,:,0) & = \boxed{ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}} & \mathsf{A}'(:,:,1) & = \boxed{ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & \boxed{1} & \boxed{2} & 0 \\ 0 & \boxed{3} & \boxed{4} & 0 \\ 0 & 0 & 0 & 0 \end{array}} \\ \mathsf{A}'(:,:,2) & = \boxed{ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & \boxed{5} & \boxed{6} & 0 \\ 0 & \boxed{7} & \boxed{8} & 0 \\ 0 & 0 & 0 & 0 \end{array}} & \mathsf{A}'(:,:,3) & = \boxed{ \begin{array}{cccc} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}} \end{aligned} .\]
ImageFiltering.BorderArray
— TypeBorderArray(inner::AbstractArray, border::AbstractBorder) <: AbstractArray
Construct a thin wrapper around the array inner
, with given border
. No data is copied in the constructor; instead, border values are computed on the fly in getindex
calls. Useful for stencil computations. See also padarray
.
Examples
julia> using ImageFiltering
julia> arr = reshape(1:6, (2,3))
2×3 reshape(::UnitRange{Int64}, 2, 3) with eltype Int64:
1 3 5
2 4 6
julia> BorderArray(arr, Pad((1,1)))
BorderArray{Int64,2,Base.ReshapedArray{Int64,2,UnitRange{Int64},Tuple{}},Pad{2}} with indices 0:3×0:4:
1 1 3 5 5
1 1 3 5 5
2 2 4 6 6
2 2 4 6 6
julia> BorderArray(arr, Fill(10, (2,1)))
BorderArray{Int64,2,Base.ReshapedArray{Int64,2,UnitRange{Int64},Tuple{}},Fill{Int64,2}} with indices -1:4×0:4:
10 10 10 10 10
10 10 10 10 10
10 1 3 5 10
10 2 4 6 10
10 10 10 10 10
10 10 10 10 10
ImageFiltering.Pad
— Type struct Pad{N} <: AbstractBorder
style::Symbol
lo::Dims{N} # number to extend by on the lower edge for each dimension
hi::Dims{N} # number to extend by on the upper edge for each dimension
end
Pad
is a type that designates the form of padding which should be used to extrapolate pixels beyond the boundary of an image. Instances must set style
, a Symbol specifying the boundary conditions of the image.
Output
The type Pad
specifying how the boundary of an image should be padded.
Details
When representing a spatial two-dimensional image filtering operation as a discrete convolution between the image and a $D \times D$ filter, the results are undefined for pixels closer than $D$ pixels from the border of the image. To define the operation near and at the border, one needs a scheme for extrapolating pixels beyond the edge. The Pad
type allows one to specify the necessary extrapolation scheme.
The type facilitates the padding of one, two or multi-dimensional images.
You can specify a different amount of padding at the lower and upper borders of each dimension of the image (top, left, bottom and right in two dimensions); a short sketch of the constructors follows.
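A minimal sketch of these constructors (the three-argument lo/hi tuple form is inferred from the struct fields above, so treat that form as an assumption):
using ImageFiltering
A = reshape(1:9, 3, 3)
# the same amount on both borders of each dimension
padarray(A, Pad(:replicate, 1, 2))           # 1 row and 2 columns on each side
# different lower/upper amounts per dimension (lo, hi tuples)
padarray(A, Pad(:circular, (0, 1), (2, 3)))  # 0/2 extra rows below/above, 1/3 extra columns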
Options
Some valid style
options are described below. To illustrate each option, the padding results are shown for an image consisting of a single row of six pixels, labeled alphabetically: $\boxed{a \, b \, c \, d \, e \, f}$. We show the effects of padding only on the left and right border, but analogous consequences hold for the top and bottom border. A short code sketch reproducing each style follows the last option below.
:replicate
(Default)
The border pixels extend beyond the image boundaries.
\[\boxed{ \begin{array}{l|c|r} a\, a\, a\, a & a \, b \, c \, d \, e \, f & f \, f \, f \, f \end{array} }\]
See also: Fill
, padarray
, Inner
and NoPad
:circular
The border pixels wrap around. For instance, indexing beyond the left border returns values starting from the right border.
\[\boxed{ \begin{array}{l|c|r} c\, d\, e\, f & a \, b \, c \, d \, e \, f & a \, b \, c \, d \end{array} }\]
See also: Fill
, padarray
, Inner
and NoPad
:symmetric
The border pixels reflect relative to a position between pixels; that is, the border pixel itself is repeated in the mirrored padding.
\[\boxed{ \begin{array}{l|c|r} d\, c\, b\, a & a \, b \, c \, d \, e \, f & f \, e \, d \, c \end{array} }\]
See also: Fill
,padarray
, Inner
and NoPad
:reflect
The border pixels reflect relative to the edge itself; the border pixel is not repeated in the padding.
\[\boxed{ \begin{array}{l|c|r} e\, d\, c\, b & a \, b \, c \, d \, e \, f & e \, d \, c \, b \end{array} }\]
See also: Fill
,padarray
, Inner
and NoPad
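The four styles above can be reproduced on a one-dimensional array with padarray. A minimal sketch (the values 1–6 stand in for the pixels a–f):
using ImageFiltering
v = collect(1:6)
padarray(v, Pad(:replicate, 4))  # 1 1 1 1 | 1 2 3 4 5 6 | 6 6 6 6
padarray(v, Pad(:circular, 4))   # 3 4 5 6 | 1 2 3 4 5 6 | 1 2 3 4
padarray(v, Pad(:symmetric, 4))  # 4 3 2 1 | 1 2 3 4 5 6 | 6 5 4 3
padarray(v, Pad(:reflect, 4))    # 5 4 3 2 | 1 2 3 4 5 6 | 5 4 3 2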
ImageFiltering.Fill
— Type struct Fill{T,N} <: AbstractBorder
value::T
lo::Dims{N}
hi::Dims{N}
end
Fill
is a type that designates a particular value which will be used to extrapolate pixels beyond the boundary of an image.
Output
The type Fill
specifying the value with which the boundary of the image should be padded.
Details
When representing a two-dimensional spatial image filtering operation as a discrete convolution between an image and a $D \times D$ filter, the results are undefined for pixels closer than $D$ pixels from the border of the image. To define the operation near and at the border, one needs a scheme for extrapolating pixels beyond the edge. The Fill
type allows one to specify a particular value which will be used in the extrapolation. For more elaborate extrapolation schemes refer to the documentation of Pad
.
The type facilitates the padding of one, two or multi-dimensional images.
You can specify a different amount of padding at the lower and upper borders of each dimension of the image (top, left, bottom and right in two dimensions).
Example
As an indicative illustration consider an image consisting of a row of six pixels which are specified alphabetically: $\boxed{a \, b \, c \, d \, e \, f}$. We show the effects of padding with a constant value $m$ only on the left and right border, but analogous consequences hold for the top and bottom border.
\[\boxed{ \begin{array}{l|c|r} m\, m\, m\, m & a \, b \, c \, d \, e \, f & m \, m \, m \, m \end{array} }\]
See also: Pad
, padarray
, Inner
and NoPad
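A minimal usage sketch (the fill value -1 and the 3×3 averaging kernel are arbitrary choices for illustration):
using ImageFiltering
A = reshape(1:6, 2, 3)
# pad with a constant value, one element on every border
padarray(A, Fill(-1, (1, 1)))
# as a border specification for filtering; the padding extent is then taken
# from the kernel size
imfilter(A, centered(ones(3, 3) / 9), Fill(0))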
ImageFiltering.Inner
— TypeInner()
Inner(lo, hi)
Indicate that edges are to be discarded in filtering; only the interior of the result is returned.
Example:
imfilter(img, kernel, Inner())
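A minimal sketch of the effect on the output axes (image and kernel sizes are arbitrary; the interior-only result is what the description above implies):
using ImageFiltering
img = rand(5, 5)
kernel = centered(ones(3, 3) / 9)
out = imfilter(img, kernel, Inner())
axes(out)  # expected: (2:4, 2:4), i.e. only pixels computable without extrapolation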
ImageFiltering.NA
— TypeNA(na=isnan)
Choose filtering using "NA" (Not Available) boundary conditions. This is most appropriate for filters that have only positive weights, such as blurring filters. Effectively, the output value is normalized in the following way:
\[\text{output} = \frac{\text{filtered array with Fill(0) boundary conditions}}{\text{filtered 1 with Fill(0) boundary conditions}}\]
Array elements for which na
returns true
are also considered outside array boundaries.
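A minimal sketch, assuming only the NA() border form named above:
using ImageFiltering
img = [1.0 2.0 3.0;
       4.0 NaN 6.0;
       7.0 8.0 9.0]
kernel = centered(ones(3, 3) / 9)
# NaN entries (na defaults to isnan) are treated like out-of-bounds pixels, and
# each output value is renormalized by the sum of the weights actually used
imfilter(img, kernel, NA())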
ImageFiltering.NoPad
— TypeNoPad()
NoPad(border)
Indicates that no padding should be applied to the input array, or that you have already pre-padded the input image. Passing a border
object allows you to preserve "memory" of a border choice; it can be retrieved by indexing with []
.
Example
The commands
np = NoPad(Pad(:replicate))
imfilter!(out, img, kernel, np)
run filtering directly, skipping any padding steps. Every entry of out
must be computable using in-bounds operations on img
and kernel
.
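A minimal end-to-end sketch of pre-padding followed by NoPad filtering (array and kernel sizes are arbitrary illustrations):
using ImageFiltering
img = rand(8, 8)
kernel = centered(ones(3, 3) / 9)
padded = padarray(img, Pad(:replicate, 1, 1))   # axes (0:9, 0:9)
out = similar(img)                              # axes (1:8, 1:8)
# every entry of out is computable in-bounds from padded and kernel
imfilter!(out, padded, kernel, NoPad(Pad(:replicate)))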
Algorithms
ImageFiltering.Algorithm.FIR
— TypeFilter using a direct algorithm
ImageFiltering.Algorithm.FFT
— TypeFilter using the Fast Fourier Transform
ImageFiltering.Algorithm.IIR
— TypeFilter with an Infinite Impulse Response filter
ImageFiltering.Algorithm.Mixed
— TypeFilter with a cascade of mixed types (IIR, FIR)
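A minimal sketch of selecting an algorithm explicitly; imfilter normally chooses one automatically, and the particular kernel below is only an illustration:
using ImageFiltering
img = rand(256, 256)
kernel = Kernel.gaussian(3)
imfilter(img, kernel, "replicate", ImageFiltering.Algorithm.FIR())
imfilter(img, kernel, "replicate", ImageFiltering.Algorithm.FFT())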
Solvers for predefined models
ImageFiltering.Models
— ModuleThis submodule provides predefined image-related models and their solvers, which can be reused by many image processing tasks.
- solve the Rudin Osher Fatemi (ROF) model using the primal-dual method:
solve_ROF_PD
and solve_ROF_PD!
ImageFiltering.Models.solve_ROF_PD!
— Methodsolve_ROF_PD!(out, buffer, img, λ, num_iters)
The in-place version of solve_ROF_PD
.
It is not uncommon to use the ROF solver inside a higher-level loop, in which case it makes sense to preallocate the output and intermediate arrays for better performance.
The content and meaning of buffer
might change without notice if the internal implementation changes. Use the preallocate_solve_ROF_PD
helper function to stay insulated from such changes.
Examples
using ImageFiltering.Models: preallocate_solve_ROF_PD
out = similar(img)
buffer = preallocate_solve_ROF_PD(img)
solve_ROF_PD!(out, buffer, img, 0.2, 30)
ImageFiltering.Models.solve_ROF_PD
— Methodsolve_ROF_PD([T], img::AbstractArray, λ; kwargs...)
Return a smoothed version of img
, using Rudin-Osher-Fatemi (ROF) filtering, more commonly known as Total Variation (TV) denoising or TV regularization. This algorithm is based on the primal-dual method.
This function applies to generic N-dimensional colorant arrays and is also CUDA-compatible. See also solve_ROF_PD!
for the in-place version.
Arguments
- T: the output element type. By default it is float32(eltype(img)).
- img: the input image, usually a noisy image.
- λ: the regularization coefficient. A larger λ results in more smoothing.
Parameters
- num_iters::Int: the number of iterations before stopping.
Examples
using ImageFiltering
using ImageFiltering.Models: solve_ROF_PD
using ImageQualityIndexes
using TestImages
img_ori = float.(testimage("cameraman"))
img_noisy = img_ori .+ 0.1 .* randn(size(img_ori))
assess_psnr(img_noisy, img_ori) # ~20 dB
img_smoothed = solve_ROF_PD(img_noisy, 0.015, 50)
assess_psnr(img_smoothed, img_ori) # ~27 dB
# larger λ produces over-smoothed result
img_smoothed = solve_ROF_PD(img_noisy, 5, 50)
assess_psnr(img_smoothed, img_ori) # ~21 dB
Extended help
Mathematically, this function solves the following ROF model using the primal-dual method:
\[\min_u \lVert u - g \rVert^2 + \lambda\lvert\nabla u\rvert\]
References
- [1] Chambolle, A. (2004). "An algorithm for total variation minimization and applications". Journal of Mathematical Imaging and Vision. 20: 89–97
- [2] Wikipedia: Total Variation Denoising
Internal machinery
ImageFiltering.KernelFactors.ReshapedOneD
— TypeReshapedOneD{N,Npre}(data)
Return an object of dimensionality N
, where data
must have dimensionality 1. The axes are 0:0
for the first Npre
dimensions, have the axes of data
for dimension Npre+1
, and are 0:0
for the remaining dimensions.
data
must support eltype
and ndims
, but does not have to be an AbstractArray.
ReshapedOneDs allow one to specify a "filtering dimension" for a 1-dimensional filter.
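A minimal sketch based on the constructor documented above (the axes shown in the comments follow from the description of Npre):
using ImageFiltering
using ImageFiltering.KernelFactors: ReshapedOneD
v = centered([0.25, 0.5, 0.25])   # a 1-dimensional filter with axes -1:1
rv = ReshapedOneD{2,1}(v)         # embed v as dimension 2 of a 2-dimensional kernel
ndims(rv)                         # 2
axes(rv)                          # (0:0, -1:1) — singleton axis, then the data's axis
# kernelfactors builds such wrappers automatically for separable kernels
kf = kernelfactors((v, v))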