ImageEdgeDetection.jl Documentation

A Julia package containing a number of algorithms for detecting edges in images.

Getting started

This package is part of a wider Julia-based image processing ecosystem. If you are starting out, then you may benefit from reading about some fundamental conventions used throughout the ecosystem, which differ markedly from how images are typically represented in OpenCV, MATLAB, ImageJ or Python.

The usage examples in the ImageEdgeDetection.jl package assume that you have already installed some key packages. Notably, the examples assume that you are able to load and display an image. Loading an image is facilitated through the FileIO.jl package, which uses QuartzImageIO.jl if you are on macOS, and ImageMagick.jl otherwise. Depending on your particular system configuration, you might encounter problems installing the image loading packages, in which case you can refer to the troubleshooting guide.
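As a quick sanity check that image loading is working, the round trip below saves a small synthetic image and loads it back. This is a minimal sketch, not part of the package's examples: it assumes an image IO backend (ImageMagick.jl, or QuartzImageIO.jl on macOS) is installed, and the file path is a temporary one created on the fly.

```julia
using FileIO, ImageCore

# Create a small synthetic grayscale image (pixel values in [0, 1]).
img = rand(Gray{N0f8}, 64, 64)

# FileIO dispatches on the file extension to choose a backend.
# The path below is a throwaway temporary file, not a real asset.
path = joinpath(mktempdir(), "demo.png")
save(path, img)

# Load the image back; it should have the same dimensions as the original.
img2 = load(path)
```

If `save` or `load` errors here, the IO backend is the likely culprit, which is exactly the situation the troubleshooting guide addresses.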

Image display is typically handled by the ImageView.jl package. Alternatives include the various plotting packages, including Makie.jl. There is also the ImageShow.jl package which facilitates displaying images in Jupyter notebooks via IJulia.jl. Finally, one can also obtain a useful preview of an image in the REPL using the ImageInTerminal.jl package. However, this package assumes that the terminal uses a monospace font, and tends not to produce adequate results in a Windows environment.
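For example, assuming ImageView.jl is installed, a test image can be displayed in a standalone window as follows. (`imshow` is ImageView's display entry point; in a Jupyter notebook, simply loading ImageShow.jl makes images render automatically, with no explicit call needed.)

```julia
using TestImages, ImageView

img = testimage("mandril_gray")  # a standard 512×512 grayscale test image
imshow(img)                      # opens the image in an interactive window
```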

Another package that is used to illustrate the functionality in ImageEdgeDetection.jl is TestImages.jl, which serves as a repository of many standard image processing test images.

Basic usage

Each edge detection algorithm in ImageEdgeDetection.jl is an AbstractEdgeDetectionAlgorithm.

Suppose one wants to mark the edges in an image. This can be achieved by simply choosing an appropriate algorithm and calling detect_edges or detect_edges! on the image.

Let's see a simple demo using the famous Canny edge detection algorithm:

using TestImages, ImageEdgeDetection, MosaicViews
img = testimage("mandril_gray")
# Detect edges at different scales by adjusting the `spatial_scale` parameter.
img_edges₁ = detect_edges(img, Canny(spatial_scale = 1.4))
img_edges₂ = detect_edges(img, Canny(spatial_scale = 2.8))
img_edges₃ = detect_edges(img, Canny(spatial_scale = 5.6))
demo₁ = mosaicview(img, img_edges₁, img_edges₂, img_edges₃; nrow = 2)
edge detection demo 1 image

You can control the Canny hysteresis thresholds by setting appropriate keyword parameters.

# Control the hysteresis thresholds by specifying the low and high threshold values.
img = testimage("cameraman")
img_edges₄ = detect_edges(img, Canny(spatial_scale = 1.4, low = Percentile(5), high = Percentile(80)))
img_edges₅ = detect_edges(img, Canny(spatial_scale = 1.4, low = Percentile(60), high = Percentile(90)))
img_edges₆ = detect_edges(img, Canny(spatial_scale = 1.4, low = Percentile(70), high = Percentile(95)))
demo₂ = mosaicview(img, img_edges₄, img_edges₅, img_edges₆; nrow = 2)
edge detection demo 2 image

Each edge thinning algorithm in ImageEdgeDetection.jl is an AbstractEdgeThinningAlgorithm.

Suppose one wants to suppress the typical double edge response of an edge detection filter. This can be achieved by simply choosing an appropriate algorithm and calling thin_edges or thin_edges! on the image gradients and gradient magnitudes.

For example, one can suppress undesirable multi-edge responses associated with the Sobel filter:

using TestImages, ImageEdgeDetection, MosaicViews, ImageFiltering, ImageCore
img = Gray.(testimage("lake_gray"))
# Determine the image gradients
g₁, g₂ = imgradients(img, KernelFactors.sobel)
# Determine the gradient magnitude
mag = hypot.(g₁, g₂)
# Suppress the non-maximal gradient magnitudes
nms₁ = thin_edges(mag, g₁, g₂, NonmaximaSuppression())
nms₂ = thin_edges(mag, g₁, g₂, NonmaximaSuppression(threshold = Percentile(95)))
demo₃ = mosaicview(img, Gray.(nms₂), Gray.(mag), Gray.(nms₁); nrow = 2)
edge thinning demo image

One can also determine the gradient orientation in an adjustable manner by defining an OrientationConvention. An OrientationConvention allows you to specify the compass direction against which you intend to measure the angle, and whether you are measuring in a clockwise or counter-clockwise manner.

In the example below, we map the angles [0, 360] to the unit interval to visualise the orientation of the image gradient under different orientation conventions. Note that the angle 360 is used as a sentinel value to demarcate pixels for which the gradient orientation is undefined. The gradient orientation is undefined when the gradient magnitude is effectively zero, which corresponds to regions of constant intensity in the image. In the panel of images, the first image depicts a black circle against a white background. The subsequent images depict the image gradient orientation, where undefined gradient orientations are represented as pure white pixels.

using ImageEdgeDetection, MosaicViews, ImageFiltering, ImageCore

# Create a test image (black circle against a white background).
a = 250
b = 250
r = 150
img = Gray.(ones(500, 500))
for i in CartesianIndices(img)
   y, x = i.I
   img[i] = (x-a)^2 + (y - b)^2 - r^2 < 0 ? 0.0 : 1.0
end

# Determine the image gradients
g₁, g₂ = imgradients(img, KernelFactors.sobel)

orientation_convention₁ = OrientationConvention(in_radians = false, compass_direction = 'S')
orientation_convention₂ = OrientationConvention(in_radians = false, compass_direction = 'N')
orientation_convention₃ = OrientationConvention(in_radians = false, compass_direction = 'E', is_clockwise = true)

angles₁ = detect_gradient_orientation(g₁, g₂, orientation_convention₁) / 360
angles₂ = detect_gradient_orientation(g₁, g₂, orientation_convention₂) / 360
angles₃ = detect_gradient_orientation(g₁, g₂, orientation_convention₃) / 360

demo₄ = mosaicview(img, Gray.(angles₁), Gray.(angles₂), Gray.(angles₃); nrow = 2)
gradient orientation demo image

For more advanced usage, please check the function reference page.