Reference

List of view types

With that as an introduction, let's list all the view types supported by this package. channelview and colorview are opposite transformations, as are rawview and normedview. channelview and colorview typically create objects of type ChannelView and ColorView, respectively, unless they are "undoing" a previous view of the opposite type.

ImageCore.channelview (Function)
channelview(A)

returns a view of A, splitting out (if necessary) the color channels of A into a new first dimension. This is almost identical to ChannelView(A), except that if A is a ColorView, it will simply return the parent of A, or will use reinterpret when appropriate. Consequently, the output may not be a ChannelView array.
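
For illustration, a minimal sketch of channelview applied to a small color image (the array size is arbitrary):

using ImageCore
img = rand(RGB{N0f8}, 4, 5)   # a 4×5 color image
chans = channelview(img)      # a 3×4×5 array of N0f8 values
chans[1, 2, 3]                # the red channel of img[2, 3]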

ChannelView(A)

creates a "view" of the Colorant array A, splitting out (if necessary) the separate color channels of eltype(A) into a new first dimension. For example, if A is a m-by-n RGB{N0f8} array, ChannelView(A) will return a 3-by-m-by-n N0f8 array. Color spaces with a single element (i.e., grayscale) do not add a new first dimension of A.

Of relevance for types like RGB and BGR, the channels of the returned array will be in constructor-argument order, not memory order (see reinterpret if you want to use memory order).

The opposite transformation is implemented by ColorView.

ImageCore.colorview (Function)
colorview(C, A)

returns a view of the numeric array A, interpreting successive elements of A as if they were channels of Colorant C. This is almost identical to ColorView{C}(A), except that if A is a ChannelView, it will simply return the parent of A, or use reinterpret when appropriate. Consequently, the output may not be a ColorView array.

Example

A = rand(3, 10, 10)
img = colorview(RGB, A)

colorview(C, gray1, gray2, ...) -> imgC

Combine numeric/grayscale images gray1, gray2, etc., into the separate color channels of an array imgC with element type C<:Colorant.

As a convenience, the constant zeroarray fills in an array of matched size with all zeros.

Example

imgC = colorview(RGB, r, zeroarray, b)

creates an image with r in the red channel, b in the blue channel, and zeros in the green channel.

See also: StackedView.

ColorView{C}(A)

creates a "view" of the numeric array A, interpreting the first dimension of A as if were the channels of a Colorant C. The first dimension must have the proper number of elements for the constructor of C. For example, if A is a 3-by-m-by-n N0f8 array, ColorView{RGB}(A) will create an m-by-n array with element type RGB{N0f8}. Color spaces with a single element (i.e., grayscale) do not "consume" the first dimension of A.

Of relevance for types like RGB and BGR, the elements of A are interpreted in constructor-argument order, not memory order (see reinterpret if you want to use memory order).

The opposite transformation is implemented by ChannelView.
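
As a sketch mirroring the description above (this assumes a version of ImageCore that exports the ColorView type):

using ImageCore
A = rand(N0f8, 3, 4, 5)       # 3 channels of 4×5 data
img = ColorView{RGB}(A)       # a 4×5 array with element type RGB{N0f8}
img[2, 3]                     # built from A[1, 2, 3], A[2, 2, 3], A[3, 2, 3]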

ImageCore.rawview (Function)
rawview(img::AbstractArray{FixedPoint})

returns a "view" of img where the values are interpreted in terms of their raw underlying storage. For example, if img is an Array{N0f8}, the view will act like an Array{UInt8}.

ImageCore.normedview (Function)
normedview([T], img::AbstractArray{Unsigned})

returns a "view" of img where the values are interpreted in terms of Normed number types. For example, if img is an Array{UInt8}, the view will act like an Array{N0f8}. Supply T if the element type of img is UInt16, to specify whether you want a N6f10, N4f12, N2f14, or N0f16 result.

permuteddimsview(A, perm)

returns a "view" of A with its dimensions permuted as specified by perm. This is like permutedims, except that it produces a view rather than a copy of A; consequently, any manipulations you make to the output will be mirrored in A. Compared to the copy, the view is much faster to create, but generally slower to use.

StackedView(B, C, ...) -> A

Present arrays B, C, etc., as if they were separate channels along the first dimension of A. In particular,

B == A[1,:,:...]
C == A[2,:,:...]

and so on. Combined with colorview, this allows one to combine two or more grayscale images into a single color image.
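
A short sketch of both uses (plain stacking, and stacking as a step toward a color image):

using ImageCore
B, C = rand(3, 3), rand(3, 3)
A = StackedView(B, C)                   # 2×3×3; A[1,:,:] == B and A[2,:,:] == C
img = colorview(RGB, B, zeroarray, C)   # combine into a color image (red from B, blue from C)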

See also: colorview.


List of value-transformations (map functions)

ImageCore.clamp01 (Function)
clamp01(x) -> y

Produce a value y that lies between 0 and 1 and is equal to x when x is already in this range. Equivalent to clamp(x, 0, 1) for numeric values. For colors, this function is applied to each color channel separately.

See also: clamp01nan.

ImageCore.clamp01nan (Function)
clamp01nan(x) -> y

Similar to clamp01, except that any NaN values are changed to 0.
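
A few illustrative calls, contrasting the two functions:

using ImageCore
clamp01(1.2)                   # 1.0
clamp01(RGB(1.2, -0.3, 0.5))   # RGB{Float64}(1.0, 0.0, 0.5)
clamp01nan(NaN)                # 0.0
clamp01nan(0.7)                # 0.7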

See also: clamp01.

ImageCore.scaleminmax (Function)
scaleminmax(min, max) -> f
scaleminmax(T, min, max) -> f

Return a function f which maps values less than or equal to min to 0, values greater than or equal to max to 1, and uses a linear scale in between. min and max should be real values.

Optionally specify the return type T. If T is a colorant (e.g., RGB), then scaling is applied to each color channel.

Examples

Example 1

julia> f = scaleminmax(-10, 10)
(::#9) (generic function with 1 method)

julia> f(10)
1.0

julia> f(-10)
0.0

julia> f(5)
0.75

Example 2

julia> c = RGB(255.0,128.0,0.0)
RGB{Float64}(255.0,128.0,0.0)

julia> f = scaleminmax(RGB, 0, 255)
(::#13) (generic function with 1 method)

julia> f(c)
RGB{Float64}(1.0,0.5019607843137255,0.0)

See also: takemap.

ImageCore.scalesigned (Function)
scalesigned(maxabs) -> f

Return a function f which scales values in the range [-maxabs, maxabs] (clamping values that lie outside this range) to the range [-1, 1].
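
For example (a sketch; the cutoff of 10 is arbitrary):

using ImageCore
f = scalesigned(10)
f(5)      # 0.5
f(-20)    # -1.0 (clamped)
f(10)     # 1.0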

See also: colorsigned.

scalesigned(min, center, max) -> f

Return a function f which scales values in the range [min, center] to [-1,0] and values in the range [center, max] to [0,1]. Values smaller than min or larger than max are clamped to min or max, respectively.
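
For example (a sketch with arbitrary breakpoints):

using ImageCore
f = scalesigned(0, 100, 1000)
f(50)     # -0.5 (halfway from center down toward min)
f(550)    # 0.5 (halfway from center up toward max)
f(-10)    # -1.0 (clamped at min)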

See also: colorsigned.

ImageCore.colorsigned (Function)
colorsigned()
colorsigned(colorneg, colorpos) -> f
colorsigned(colorneg, colorcenter, colorpos) -> f

Define a function that maps negative values (in the range [-1,0]) to the linear colormap between colorneg and colorcenter, and positive values (in the range [0,1]) to the linear colormap between colorcenter and colorpos.

The default colors are:

  • colorcenter: white

  • colorneg: green1

  • colorpos: magenta
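
In practice colorsigned is usually composed with scalesigned, which first maps the data into [-1, 1]. A small sketch (the input values are arbitrary):

using ImageCore
A = [-1.0 0.0; 0.5 1.0]
f = colorsigned()                  # green1 → white → magenta
img = f.(scalesigned(1.0).(A))     # a 2×2 array of colors encoding sign and magnitude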

See also: scalesigned.

ImageCore.takemap (Function)
takemap(f, A) -> fnew
takemap(f, T, A) -> fnew

Given a value-mapping function f and an array A, return a "concrete" mapping function fnew. When applied to elements of A, fnew should return valid values for storage or display, for example in the range from 0 to 1 (for grayscale) or valid colorants. fnew may be adapted to the actual values present in A, and may not produce valid values for any inputs not in A.

Optionally one can specify the output type T that fnew should produce.

Example:

julia> A = [0, 1, 1000];

julia> f = takemap(scaleminmax, A)
(::#7) (generic function with 1 method)

julia> f.(A)
3-element Array{Float64,1}:
 0.0
 0.001
 1.0

List of storage-type transformations

ImageCore.float32 (Function)
float32.(img)

converts the raw storage type of img to Float32, without changing the color space.
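
A brief sketch (the same pattern applies to float64, n0f8, and the other element-type functions below):

using ImageCore
img = rand(RGB{N0f8}, 3, 3)
imgf = float32.(img)
eltype(imgf)                   # RGB{Float32}; the color space is unchanged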

ImageCore.float64 (Function)
float64.(img)

converts the raw storage type of img to Float64, without changing the color space.

ImageCore.n0f8 (Function)
n0f8.(img)

converts the raw storage type of img to N0f8, without changing the color space.

ImageCore.n6f10 (Function)
n6f10.(img)

converts the raw storage type of img to N6f10, without changing the color space.

ImageCore.n4f12 (Function)
n4f12.(img)

converts the raw storage type of img to N4f12, without changing the color space.

ImageCore.n2f14 (Function)
n2f14.(img)

converts the raw storage type of img to N2f14, without changing the color space.

ImageCore.n0f16 (Function)
n0f16.(img)

converts the raw storage type of img to N0f16, without changing the color space.


List of traits

pixelspacing(img) -> (sx, sy, ...)

Return a tuple representing the separation between adjacent pixels along each axis of the image. Defaults to (1,1,...). Use ImagesAxes for images with anisotropic spacing or to encode the spacing using physical units.

spacedirections(img) -> (axis1, axis2, ...)

Return a tuple-of-tuples, each axis[i] representing the displacement vector between adjacent pixels along spatial axis i of the image array, relative to some external coordinate system ("physical coordinates").

By default this is computed from pixelspacing, but you can set this manually using ImagesMeta.

ImageCore.sdims (Function)
sdims(img)

Return the number of spatial dimensions in the image. Defaults to the same as ndims, but with ImagesAxes you can specify that some axes correspond to other quantities (e.g., time) and are therefore not counted by sdims.


coords_spatial(img)

Return a tuple listing the spatial dimensions of img.

Note that a better strategy may be to use ImagesAxes and take slices along the time axis.

size_spatial(img)

Return a tuple listing the sizes of the spatial dimensions of the image. Defaults to the same as size, but using ImagesAxes you can mark some axes as being non-spatial.

indices_spatial(img)

Return a tuple with the indices of the spatial dimensions of the image. Defaults to the same as indices, but using ImagesAxes you can mark some axes as being non-spatial.
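
As an illustrative sketch, the spatial traits applied to a plain array (the values shown assume no ImagesAxes metadata is present):

using ImageCore
img = rand(Gray{N0f8}, 4, 5)
sdims(img)              # 2
coords_spatial(img)     # (1, 2)
size_spatial(img)       # (4, 5)
indices_spatial(img)    # (Base.OneTo(4), Base.OneTo(5))
pixelspacing(img)       # (1, 1)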

ImageCore.nimages (Function)
nimages(img)

Return the number of time-points in the image array. Defaults to 1. Use ImagesAxes if you want to use an explicit time dimension.

assert_timedim_last(img)

Throw an error if the image has a time dimension that is not the last dimension.
