Digital Image Processing Image Filtering


Digital Image Processing Image Filtering O. Le Meur

[email protected] Univ. of Rennes 1

http://www.irisa.fr/temics/staff/lemeur/

January 2, 2011


Table of Contents

1 Introduction
2 Point-to-point transformation
3 Linear filtering (neighborhood operator)
4 Non Linear filtering
5 Conclusion


Introduction


Introduction: image transformation

There exist 3 types of transformation:
Point-to-point transformation: the output value at a specific coordinate depends on only one input value, but not necessarily at the same coordinate;
Local-to-point transformation: the output value at a specific coordinate depends on the input values in the neighborhood of that same coordinate;
Global-to-point transformation: the output value at a specific coordinate depends on all the values in the input image.

Note that the complexity increases with the size of the considered neighborhood...


Point-to-point transformation

Spatial coordinates-based transformations
Pixel values-based transformations

Spatial coordinates-based transformations

Remark: this section is composed of several pictures extracted from http://eeweb.poly.edu/~onur/lectures/lectures.html.

Let im[x, y] be an input image of size N × N. A spatial coordinates-based transformation, also called warping, aims at providing an image IM[k, l] from the input image im[x, y]:

IM[k, l] = im[x(k, l), y(k, l)]

x(k, l) and y(k, l) are the transformations, or pixel warping functions. These functions just modify the spatial coordinates of a pixel, not its value.

Special cases to take into consideration:
the new coordinates [x(k, l), y(k, l)] may be out of the image IM; in this case, IM[k, l] = 0;
the new coordinates must be integers (rounding operation, nearest integer...).
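
As a minimal sketch of the warping principle above (not the author's code; the helper name warp_image and the use of NumPy are assumptions), the function below remaps an image with user-supplied warping functions x(k, l) and y(k, l), using nearest-integer rounding and zero-filling for out-of-image coordinates:

    import numpy as np

    def warp_image(im, x_fn, y_fn):
        """Apply IM[k, l] = im[x(k, l), y(k, l)] with rounding and bounds checking."""
        N = im.shape[0]                        # square N x N image assumed
        IM = np.zeros_like(im)
        for k in range(N):
            for l in range(N):
                x = int(round(x_fn(k, l)))     # nearest-integer coordinates
                y = int(round(y_fn(k, l)))
                if 0 <= x < N and 0 <= y < N:  # out-of-image pixels stay 0
                    IM[k, l] = im[x, y]
        return IM

    # Example: transpose, i.e. x(k, l) = l and y(k, l) = k
    # IM = warp_image(im, lambda k, l: l, lambda k, l: k)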

Transpose

The transpose transformation is given by

x(k, l) = l
y(k, l) = k

Vertical Flip

The vertical flip transformation is given by

x(k, l) = N − k
y(k, l) = l

The horizontal flip transformation is given by

x(k, l) = k
y(k, l) = N − l

Translation

The translation transformation is given by

x(k, l) = k + Tk
y(k, l) = l + Tl

where Tk and Tl are the translation values for the x-axis and the y-axis, respectively. In the example below, we have Tl = −50:

IM[k, l] = im[x(k, l), y(k, l)] = im[k, l + Tl]

Different transformations can be obtained depending on the Tk and Tl values.

Rotation

The rotation transformation is given by

x(k, l) = (k − x0) cos θ − (l − y0) sin θ + x0
y(k, l) = (k − x0) sin θ + (l − y0) cos θ + y0

where [x0, y0] are the spatial coordinates of the center of the rotation and θ the angle.

Extracted from http://eeweb.poly.edu/~onur/lectures/lectures.html.

Wave

A first wave transformation is given by

x(k, l) = k
y(k, l) = l + α × sin(β × l)

and a second one by

x(k, l) = k + α × sin(β × k)
y(k, l) = l

α and β can be used to strengthen the effect.

Warp and swirl

A warp effect can be obtained by

x(k, l) = k
y(k, l) = sign(l − y0) × (l − y0)² / y0 + y0

The swirl effect is a rotation, but the angle of the rotation θ varies with the pixel distance from the center of the image [x0, y0]:

d = sqrt((k − x0)² + (l − y0)²)
θ = (π / 512) × d

If d → 0, θ is small...

Glass effect

A glass effect is obtained by adding a small and random displacement to each pixel:

x(k, l) = k + (RAND(1, 1) − 1/2) × 10
y(k, l) = l + (RAND(1, 1) − 1/2) × 10

Extracted from http://eeweb.poly.edu/~onur/lectures/lectures.html.

Summary

All 2D linear transformations (scale, rotation, mirror...):

[x]   [a  b] [k]
[y] = [c  d] [l]

Properties: origin maps to origin; lines map to lines; ratios are preserved...

Affine transformations (linear transformation + translation):

[x]   [a  b  c] [k]
[y] = [d  e  f] [l]
[w]   [0  0  1] [w]

Properties: origin does not necessarily map to origin; lines map to lines; ratios are preserved...

Translation:

[x]   [1  0  tx] [k]
[y] = [0  1  ty] [l]
[1]   [0  0   1] [1]

Scale:

[x]   [sx   0  0] [k]
[y] = [ 0  sy  0] [l]
[1]   [ 0   0  1] [1]

2D in-plane rotation:

[x]   [cos θ  −sin θ  0] [k]
[y] = [sin θ   cos θ  0] [l]
[1]   [    0       0  1] [1]
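
To make the homogeneous-coordinate form above concrete, here is a small sketch (helper names are illustrative, not from the slides) that builds a 3 × 3 affine matrix and applies it in the spirit of the warp_image sketch shown earlier: each output pixel [k, l, 1] is mapped through the matrix to source coordinates [x, y].

    import numpy as np

    def affine_warp(im, A):
        """Apply IM[k, l] = im[x, y] with [x, y, 1]^T = A @ [k, l, 1]^T."""
        N = im.shape[0]
        IM = np.zeros_like(im)
        for k in range(N):
            for l in range(N):
                x, y, _ = A @ np.array([k, l, 1.0])
                xi, yi = int(round(x)), int(round(y))
                if 0 <= xi < N and 0 <= yi < N:
                    IM[k, l] = im[xi, yi]
        return IM

    def rotation_matrix(theta, x0, y0):
        """Rotation by theta around the center (x0, y0), in homogeneous form."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, -x0 * c + y0 * s + x0],
                         [s,  c, -x0 * s - y0 * c + y0],
                         [0,  0, 1.0]])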

Pixel values-based transformations

Let im[x, y] be an input image of size N × N. A pixel values-based transformation aims at providing an image IM[k, l] from the input image im[x, y]:

IM[k, l] = f(im[k, l])

Note that the spatial coordinates of the pixels are not modified. The function f is used to modify the pixel values. The simplest one is the identity function f(p) = p:

IM[k, l] = f(im[k, l]) = im[k, l]

Histogram

An image histogram is a graphical representation of the tonal distribution in a digital image. It plots the number of pixels (vertical axis) for each tonal or brightness value (horizontal axis), and thus gives information about the global distribution of the image.

(a) Original image; (b) histogram (intensity).
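
As a minimal sketch (assuming an 8-bit grayscale image stored as a NumPy array), the histogram simply counts, for each gray level, the number of pixels having that value:

    import numpy as np

    def histogram(im, levels=256):
        """Return h where h[i] = number of pixels with gray level i."""
        h = np.zeros(levels, dtype=np.int64)
        for v in im.ravel():          # one pass over all pixels
            h[v] += 1
        return h

    # Equivalent one-liner: np.bincount(im.ravel(), minlength=256)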

Histogram

(a) Original image; (b) histogram (intensity); (c) original image; (d) histogram (intensity).

A high dynamic range, as in the last case, provides the best quality.

From R.C. Gonzalez, R.E. Woods, Digital Image Processing.

Histogram equalization

The goal is to increase the global contrast of images, especially when the usable data of the image is represented by close contrast values. Consider a discrete grayscale image {x} and let n_i be the number of occurrences of gray level i. The probability of an occurrence of a pixel of level i in the image is

p_x(i) = p(x = i) = n_i / n,  0 ≤ i < L,

L being the total number of gray levels in the image and n the total number of pixels in the image. Let us also define the cumulative distribution function cdf_x(i) = Σ_{j=0}^{i} p_x(j). We want to produce an image {y} such that cdf_y(i) = i K for some constant K (a linearized cumulative distribution function):

y = cdf_x(x) × (max{x} − min{x}) + min{x}
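
A compact sketch of the equalization mapping above, assuming an 8-bit grayscale NumPy array (names are illustrative, not from the slides):

    import numpy as np

    def equalize(im, levels=256):
        """Histogram equalization: y = cdf_x(x) * (max(x) - min(x)) + min(x)."""
        h = np.bincount(im.ravel(), minlength=levels)
        p = h / im.size                      # p_x(i)
        cdf = np.cumsum(p)                   # cdf_x(i)
        lo, hi = int(im.min()), int(im.max())
        lut = np.round(cdf * (hi - lo) + lo).astype(np.uint8)  # mapping for each gray level
        return lut[im]                       # apply the look-up table pixel-wise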

Negative image

The negative image is obtained with f(p) = 255 − p (pixel values are coded on 8 bits):

IM[k, l] = f(im[k, l]) = 255 − im[k, l]

Piecewise continuous transformation

The objective of such a transformation is:
to compress pre-determined ranges of values: range compression;
to accentuate pre-determined ranges of values: range stretching.

f(p) = α1 × p                                                     for 0 ≤ p < a1
       α2 × (p − a1) + α1 a1                                      for a1 ≤ p < a2
       ...
       αi × (p − a_{i−1}) + Σ_{j=1}^{i−1} αj × (aj − a_{j−1})     for a_{i−1} ≤ p < ai
       ...

Obviously, we have: αi < 1, compression; αi > 1, stretching.

Piecewise continuous transformation

Example of contrast stretching: let f be a function defined on three pieces:

f(p) = α1 p                                    for 0 ≤ p < a1
       α2 (p − a1) + α1 a1                     for a1 ≤ p < a2
       α3 (p − a2) + (α2 (a2 − a1) + α1 a1)    for a2 ≤ p ≤ 255
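
A small sketch of the three-piece contrast stretching above, assuming 8-bit input; the breakpoints and slopes (a1, a2, α1, α2, α3) are illustrative parameters, not values from the slides:

    import numpy as np

    def stretch(im, a1=80, a2=170, alpha1=0.3, alpha2=2.0, alpha3=0.3):
        """Piecewise-linear contrast stretching on an 8-bit grayscale image."""
        p = im.astype(np.float64)
        out = np.where(p < a1, alpha1 * p,
              np.where(p < a2, alpha2 * (p - a1) + alpha1 * a1,
                       alpha3 * (p - a2) + alpha2 * (a2 - a1) + alpha1 * a1))
        return np.clip(np.round(out), 0, 255).astype(np.uint8)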

Piecewise continuous transformation

Example of contrast stretching: let us imagine that α1 and α3 are null. The filtered image then contains only the grey levels belonging to [a1, a2]: we just keep a slice of the image.

Piecewise continuous transformation

Binary thresholding (with T the threshold):

IM[x, y] = 0     if im[x, y] < T
           255   otherwise

Gamma correction:

IM[x, y] = im[x, y]^γ
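
A minimal sketch of both point operations, assuming an 8-bit grayscale NumPy array (the threshold T and gamma values are illustrative):

    import numpy as np

    def binary_threshold(im, T=128):
        """0 below the threshold, 255 otherwise."""
        return np.where(im < T, 0, 255).astype(np.uint8)

    def gamma_correction(im, gamma=0.5):
        """Apply IM = im^gamma, here on intensities normalized to [0, 1]."""
        p = im.astype(np.float64) / 255.0
        return np.round(255.0 * p ** gamma).astype(np.uint8)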

Linear filtering with neighborhood operators

Definition
Low-pass filters in the spatial domain
High-pass filters in the spatial domain
Differentiation filters in the spatial domain
Frequency domain filtering

Definition

Definition of a neighborhood around a pixel of spatial coordinates (x, y); the neighborhood is called V(x, y). (Two example neighborhoods are shown as figures.)

Definition

2D Finite Impulse Response (FIR) filter:

IM[x, y] = (im ∗ h)[x, y]
IM[x, y] = Σ_{k ∈ V(x,y)} Σ_{l ∈ V(x,y)} h(k, l) × im[x − k, y − l]

where IM is the output, im the input, and h(k, l) the filter coefficients. h is the 2D impulse response, also called the Point Spread Function (PSF) or the kernel of the transform; it is composed of the filter coefficients (finite length). The gain of the filter is equal to

g = Σ_{k ∈ V(x,y)} Σ_{l ∈ V(x,y)} h(k, l)

Definition

IM[x, y] = Σ_{k ∈ V(x,y)} Σ_{l ∈ V(x,y)} h(k, l) × im[x − k, y − l]

where H is the convolution kernel. Example: if the filter support is (3 × 5), the convolution kernel is

H = [ h(−2, −1)  h(−1, −1)  h(0, −1)  h(1, −1)  h(2, −1)
      h(−2,  0)  h(−1,  0)  h(0,  0)  h(1,  0)  h(2,  0)
      h(−2,  1)  h(−1,  1)  h(0,  1)  h(1,  1)  h(2,  1) ]

Definition

Example for a neighborhood of size (2N + 1) × (2N + 1):

IM[x, y] = Σ_{k=−N}^{N} Σ_{l=−N}^{N} h(k, l) × im[x − k, y − l]

Number of multiplications per output point: O(N²). (The illustration in the slides uses N = 2.)
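
A direct, unoptimized sketch of the 2D FIR filtering equation above (zero padding at the borders is an assumption; the slides later mention replication as an alternative):

    import numpy as np

    def fir_filter(im, h):
        """IM[x, y] = sum_{k,l} h(k, l) * im[x - k, y - l], with zero padding."""
        N = (h.shape[0] - 1) // 2            # (2N+1) x (2N+1) kernel assumed
        H, W = im.shape
        padded = np.pad(im.astype(np.float64), N, mode="constant")
        IM = np.zeros((H, W), dtype=np.float64)
        for x in range(H):
            for y in range(W):
                acc = 0.0
                for k in range(-N, N + 1):
                    for l in range(-N, N + 1):
                        acc += h[k + N, l + N] * padded[x - k + N, y - l + N]
                IM[x, y] = acc
        return IM

In practice a library routine (e.g. scipy.ndimage.convolve) would be used; the explicit loops are only there to mirror the formula.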

Average filter

The simplest low-pass filter is the local averaging operation. The main effect of a low-pass filter is a blurring. The size of the kernel is (2N + 1) × (2N + 1):

h(k, l) = 1 / (2N + 1)²   if −N ≤ k, l ≤ N
          0               otherwise

For N = 1, the convolution kernel is given by

H = 1/9 × [ 1  1  1
            1  1  1
            1  1  1 ]

Average filter

Three examples of averaging for different kernel sizes, from left to right N = {1, 3, 8}:

The amount of blur increases with the size of the kernel (and so does the number of operations, O(N²)). In order to filter pixels located near the edges of the image, edge pixel values can be replicated to give sufficient data (this is not done here).

Gaussian filter

The kernel h is given by the following function:

h(x, y) = (1 / (2πσ²)) × exp(−(x² + y²) / (2σ²))

Each pixel's new value is set to a weighted average of that pixel's neighborhood. The original pixel's value receives the heaviest weight (having the highest Gaussian value) and neighboring pixels receive smaller weights as their distance to the original pixel increases. This results in a blur that preserves boundaries and edges better than other, more uniform blurring filters. Note that the filter support is truncated...

Gaussian filter

h(x, y) = (1 / (2πσ²)) × exp(−(x² + y²) / (2σ²))

Example of kernel: σ = 0.84089642, N = 3. Note that the center element (at [4, 4]) has the largest value, decreasing symmetrically as distance from the center increases.

H = [ 0.00000067  0.00002292  0.00019117  0.00038771  0.00019117  0.00002292  0.00000067
      0.00002292  0.00078633  0.00655965  0.01330373  0.00655965  0.00078633  0.00002292
      0.00019117  0.00655965  0.05472157  0.11098164  0.05472157  0.00655965  0.00019117
      0.00038771  0.01330373  0.11098164  0.22508352  0.11098164  0.01330373  0.00038771
      0.00019117  0.00655965  0.05472157  0.11098164  0.05472157  0.00655965  0.00019117
      0.00002292  0.00078633  0.00655965  0.01330373  0.00655965  0.00078633  0.00002292
      0.00000067  0.00002292  0.00019117  0.00038771  0.00019117  0.00002292  0.00000067 ]

Note that 0.22508352 (the central value) is 1177 times larger than 0.00019117, which is just outside 3σ.
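
A short sketch for building such a truncated Gaussian kernel (the final renormalization is an assumption, so that the gain stays equal to 1 despite truncation):

    import numpy as np

    def gaussian_kernel(sigma, N):
        """(2N+1) x (2N+1) Gaussian kernel, normalized so its coefficients sum to 1."""
        x, y = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
        h = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
        return h / h.sum()                    # renormalize after truncation

    # gaussian_kernel(0.84089642, 3)[3, 3] is about 0.2251, as on the slide.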

Gaussian filter

h(x, y) = (1 / (2πσ²)) × exp(−(x² + y²) / (2σ²))

Example of kernel: σ = 0.6, N = 1:

H = 1/16 × [ 1  2  1
             2  4  2
             1  2  1 ]

σ = 1, N = 2:

H = 1/1444 × [  1    9   18    9   1
                9   81  162   81   9
               18  162  324  162  18
                9   81  162   81   9
                1    9   18    9   1 ]

Gaussian filter

Example of Gaussian filtering with σ = 2: increasing sigma increases the smoothing.

Other low-pass filters

2D pyramidal filter:

H = 1/81 × [ 1  2  3  2  1
             2  4  6  4  2
             3  6  9  6  3
             2  4  6  4  2
             1  2  3  2  1 ]

Conic filter:

H = 1/25 × [ 0  0  1  0  0
             0  2  2  2  0
             1  2  5  2  1
             0  2  2  2  0
             0  0  1  0  0 ]

High-pass filter

The high-pass filtered image can be thought of as the original image minus the low-pass filtered image:

IM[x, y] = im[x, y] − Σ_{k=−N}^{N} Σ_{l=−N}^{N} h_LP(k, l) × im[x − k, y − l]
         = Σ_{k=−N}^{N} Σ_{l=−N}^{N} h(k, l) × im[x − k, y − l]

with h the (high-pass) convolution kernel. Three classical kernels:

H = [  0  −1   0        H = [ −1  −1  −1        H = [  1  −2   1
      −1   5  −1              −1   9  −1              −2   5  −2
       0  −1   0 ]            −1  −1  −1 ]             1  −2   1 ]

High-pass filter

Three examples of high-pass filtering for different kernel sizes, from left to right N = {1, 3, 8}:

When the kernel size increases, the filtering is stronger and the result is less noisy.

Differentiation filter

f'(x) = lim_{Δx → 0} (f(x + Δx) − f(x)) / Δx

Local variations of intensity are an important source of information in image processing. These local variations are measured by the gradient (the rate of change of the function):

∇im[k, l] = ( ∂im/∂x [k, l], ∂im/∂y [k, l] )

In the illustration below: GX = ∂im/∂x [k, l] and GY = ∂im/∂y [k, l].

Differentiation filter

GX = ∂im/∂x [k, l]:  ∂im/∂x [k, l] ≈ im[k + 1, l] − im[k, l],  kernel hx = [−1 1]
GY = ∂im/∂y [k, l]:  ∂im/∂y [k, l] ≈ im[k, l + 1] − im[k, l],  kernel hy = [−1 1]^T

However, most of the time, we use the kernels [−1 0 1] and [−1 0 1]^T (zero phase). Below, from left to right: original, [−1 1], [−1 0 1].



Differentiation filter

However, these filters are very sensitive to noise. In order to improve the robustness, they are combined with a blurring filter:

hx = [ −1  0  1          hy = [ −1  −2  −1
       −2  0  2                  0   0   0
       −1  0  1 ]                1   2   1 ]

These kernels are the Sobel kernels. The blurring filter is [1 2 1].

IMx[k, l] = (im ∗ hx)[k, l]
IMy[k, l] = (im ∗ hy)[k, l]

In the same vein: the Prewitt and Roberts filters.

Differentiation filter

Norm of the gradient: ||∇IM[k, l]||₂ = sqrt(IMx[k, l]² + IMy[k, l]²);
Its orientation: arg(∇IM[k, l]) = arctan(IMy[k, l] / IMx[k, l]).

From left to right: IMx, IMy and the norm.
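
A sketch of the Sobel gradient and its norm and orientation as defined above (scipy is assumed for the convolutions; names are illustrative):

    import numpy as np
    from scipy.ndimage import convolve

    def sobel_gradient(im):
        """Return (IMx, IMy, norm, orientation) using the Sobel kernels."""
        hx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
        hy = hx.T                                   # [-1 -2 -1; 0 0 0; 1 2 1]
        im = im.astype(np.float64)
        IMx = convolve(im, hx)
        IMy = convolve(im, hy)
        norm = np.sqrt(IMx**2 + IMy**2)
        orientation = np.arctan2(IMy, IMx)          # arctan(IMy / IMx), quadrant-aware
        return IMx, IMy, norm, orientation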

Differentiation filter

The Laplacian of a picture is the second derivative:

∂²im/∂x² [k, l] ≈ im[k + 1, l] + im[k − 1, l] − 2 × im[k, l]
∂²im/∂y² [k, l] ≈ im[k, l + 1] + im[k, l − 1] − 2 × im[k, l]

∇²im[k, l] = ∂²im/∂x² [k, l] + ∂²im/∂y² [k, l]
∇²im[k, l] ≈ im[k + 1, l] + im[k − 1, l] + im[k, l + 1] + im[k, l − 1] − 4 × im[k, l]

For a 4-neighborhood, the kernel is given by

h = [ 0   1  0
      1  −4  1
      0   1  0 ]

We can extend this kernel to compute the Laplacian in all directions (8-neighborhood):

h = [ 1   1  1
      1  −8  1
      1   1  1 ]

Differentiation filter

The second-order derivatives have a stronger response to fine details (e.g. thin lines) than the first-order derivatives.

Frequency domain filtering

IM[x, y] = (im ∗ h)[x, y]

where h is the convolution kernel. The Fourier transform turns convolution into multiplication, and vice versa:

F{im1[x, y] ∗ im2[x, y]} = IM1[u, v] × IM2[u, v]
F{im1[x, y] × im2[x, y]} = IM1[u, v] ∗ IM2[u, v]

When the size of the kernel is large, it is better to apply the filter in the frequency domain. For more information: Digital Image Processing, by R. C. Gonzalez and R. E. Woods, 3rd edition, Pearson Prentice Hall, 2008.

Frequency domain filtering

We can spatially filter an image by Fourier transforming it and applying a frequency filter:

IM[x, y] = im[x, y] ∗ h[x, y]
ĨM[u, v] = IM[u, v] × H[u, v]

where H[u, v] is the filter in the frequency domain.

Ideal low-pass filter

From left to right: ideal low-pass filter transfer function, filter displayed as an image, filter radial cross-section.

H(u, v) = 1   if D(u, v) ≤ D0
          0   if D(u, v) > D0

with D the Euclidean distance from the spectrum center (N/2, N/2).
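
A compact sketch of frequency-domain filtering with this ideal low-pass mask (using np.fft with fftshift is an implementation choice so that the spectrum center sits at (N/2, N/2)):

    import numpy as np

    def ideal_low_pass(im, D0):
        """Keep frequencies within distance D0 of the spectrum center."""
        N, M = im.shape
        F = np.fft.fftshift(np.fft.fft2(im))            # centered spectrum
        u, v = np.meshgrid(np.arange(M), np.arange(N))
        D = np.sqrt((u - M / 2) ** 2 + (v - N / 2) ** 2)
        H = (D <= D0).astype(np.float64)                # ideal low-pass mask
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))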

Ideal low-pass filter

Low-pass filtering results: ringing and blurring.

Butterworth low-pass filter

H(u, v) = 1 / (1 + (D(u, v) / D0)^(2n))

Top: spatial representation of the filter for different orders; bottom: intensity profiles through the center of the filters.

Butterworth low-pass filtering: smooth transition in blurring, no ringing is present.

Gaussian low-pass filter

H(u, v) = exp(−D²(u, v) / (2 D0²))

with D0 = σ.

Gaussian low-pass filtering: smooth transition in blurring, no ringing is present.

Low-pass filters

Comparison between the ideal, the Butterworth and the Gaussian low-pass filters (from left to right: ideal, Butterworth, Gaussian).

High-pass filtering in the frequency domain

H_HP(u, v) = 1 − H_LP(u, v)

Ideal high-pass filters enhance edges but suffer from ringing artefacts, just like the ideal LPF; the two others give smoother results.

Frequency domain filtering

Example of two band-pass filters (see figures).

Laplacian in the frequency domain

We recall (see the previous lecture):

∇²im[k, l] = ∂²im/∂x² [k, l] + ∂²im/∂y² [k, l]

and

F{dⁿx(t)/dtⁿ} = (j2πf)ⁿ X(f).

From this, it follows that

F{∂²im/∂x² [k, l] + ∂²im/∂y² [k, l]} = −(u² + v²) IM[u, v]

The Laplacian filter is then implemented in the frequency domain by

H(u, v) = −(u² + v²)

Laplacian in the frequency domain

Finally, to compute the Laplacian, we need:
1. to compute the Fourier transform of the picture;
2. to multiply the spectrum by −(u² + v²);
3. to compute the inverse Fourier transform.
A sketch of these three steps is given below.
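
A minimal sketch of the three steps above (centering the frequency coordinates with fftshift is an implementation choice; the output is typically rescaled for display):

    import numpy as np

    def laplacian_frequency(im):
        """Laplacian computed in the frequency domain: H(u, v) = -(u^2 + v^2)."""
        N, M = im.shape
        F = np.fft.fftshift(np.fft.fft2(im))                # step 1: Fourier transform
        u, v = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2)
        F_filtered = F * (-(u ** 2 + v ** 2))               # step 2: multiply by -(u^2 + v^2)
        return np.real(np.fft.ifft2(np.fft.ifftshift(F_filtered)))  # step 3: inverse transform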

The cortex transform

The cortex transform was first described by A. Watson as a model of the neural response of retinal cells to visual stimuli. The cortex filter in the frequency domain is

cortex_[bi, θi](ρ, θ) = dom_bi × fan_θi(θ)

where bi and θi represent the frequency band and the index of orientation, respectively, and (ρ, θ) are polar coordinates.

The cortex transform decomposes the input image im[x, y] into a set of subband images B_[bi, θi][k, l]:

B_[bi, θi][k, l] = F⁻¹{ cortex_[bi, θi](ρ, θ) × F{im[x, y]} }

Frequency responses of several cortex filters (brightness represents gain for the given spatial frequency): complete set of cortex filters.

Non Linear filtering

Definition
Rank filtering
Homomorphic filtering
Adaptive filtering: conditional mean, anisotropic Kuwahara filtering, bilateral filtering

Definition

The most important drawback of linear filtering is that all pixels in the image are modified by the filtering process. To overcome this problem, non linear filtering is used. It aims, for instance, to protect some parts of the picture having particular features (edges...) or to remove data without blurring the whole image (impulse noise).

Three kinds of filters: rank filtering; adaptive linear filtering; morphological operators.

Rank filtering

Rank filters are based on three steps:
1. data selection, also called windowing;
2. data ranking (in ascending order);
3. 1D linear weighting of the ordered data list.

Special case of a generalized rank filter

If all weights of the linear filter are null except one in the median position, the filter is called a median filter:

IM[x, y] = MED(im[xi, yi] | [xi, yi] ∈ V[x, y])

V[x, y] is the neighborhood (a set of N samples). If the size of the neighborhood is odd, the output value is the median value; if the size is even, the output value is the average of the two middle values.

The median filter is very efficient for filtering signals corrupted by impulsive noise, but it is not very efficient in a Gaussian noise environment.
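
A straightforward sketch of the median filter over a (2N+1) × (2N+1) window (border pixels are handled by edge replication here, which is an assumption):

    import numpy as np

    def median_filter(im, N=1):
        """Replace each pixel by the median of its (2N+1) x (2N+1) neighborhood."""
        H, W = im.shape
        padded = np.pad(im, N, mode="edge")          # replicate edge pixels
        IM = np.empty_like(im)
        for x in range(H):
            for y in range(W):
                window = padded[x:x + 2 * N + 1, y:y + 2 * N + 1]
                IM[x, y] = np.median(window)         # window size is odd, so this is a sample value
        return IM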

Special case of a generalized rank filter

However, when the number of samples is large, the ordering procedure becomes cumbersome. Idea: the median is taken over the outputs of several FIR substructures, the number of substructures being much smaller than the number of data samples inside the filter window:

IM[x, y] = MED(y(1), . . . , y(m))

where the y(i) are the outputs of the m linear FIR filters.

Homomorphic filtering

Homomorphic filtering is a generalized technique for signal and image processing, involving a nonlinear mapping to a different domain in which linear filter techniques are applied, followed by mapping back to the original domain. In many cases, we want to remove shading effects from an image. The objective is then to enhance high frequencies and to attenuate low frequencies (while preserving fine details). Consider the following model of image formation:

im[x, y] = i(x, y) × r(x, y)

where i is the illumination component and r the reflection component.

The illumination component varies slowly and thus affects mostly the low frequencies; the reflection component varies faster and thus affects mostly the high frequencies.

Homomorphic filtering

What is the solution to separate LF and HF?

im[x, y] = i[x, y] × r[x, y]
IM[u, v] = I[u, v] ∗ R[u, v]   (Fourier transform)

Due to the convolution in the frequency domain, the LF and HF contents of i[x, y] and r[x, y] are mixed together... It is too complex to filter LF and HF in such conditions.

Homomorphic filtering

What is the solution to separate LF and HF? We can take the log!

im[x, y] = i[x, y] × r[x, y]
log(im[x, y]) = log(i[x, y] × r[x, y])
log(im[x, y]) = log(i[x, y]) + log(r[x, y])

1. Take the log and apply the Fourier transform to the new signal:

log(im[x, y]) = log(i[x, y]) + log(r[x, y])
F(log(im[x, y])) = F(log(i[x, y])) + F(log(r[x, y]))
Z[u, v] = Ilog[u, v] + Rlog[u, v]

2. Filtering in the frequency domain with H[u, v]:

Z[u, v] × H[u, v] = Ilog[u, v] × H[u, v] + Rlog[u, v] × H[u, v]

3. Take the inverse Fourier transform and apply the exponential function:

F⁻¹(Z[u, v] × H[u, v]) = F⁻¹(Ilog[u, v] × H[u, v]) + F⁻¹(Rlog[u, v] × H[u, v])
z[x, y] = e_i[x, y] + e_r[x, y]
exp(z[x, y]) = exp(e_i[x, y]) × exp(e_r[x, y])
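
A minimal sketch of this log / filter / exp pipeline, using the high-emphasis transfer function of the next slide (gamma_l, gamma_h, D0 and n are illustrative parameters, and adding 1 before the log avoids log(0)):

    import numpy as np

    def homomorphic_filter(im, gamma_l=0.5, gamma_h=2.0, D0=30.0, n=1):
        """Homomorphic filtering: log -> FFT -> high-emphasis filter -> IFFT -> exp."""
        z = np.log(im.astype(np.float64) + 1.0)              # step 1: take the log
        Z = np.fft.fftshift(np.fft.fft2(z))
        N, M = im.shape
        u, v = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2)
        D = np.sqrt(u ** 2 + v ** 2) + 1e-9
        H = gamma_l + gamma_h / (1.0 + (D0 / D) ** (2 * n))  # step 2: attenuate LF, boost HF
        out = np.real(np.fft.ifft2(np.fft.ifftshift(Z * H)))
        return np.exp(out) - 1.0                             # step 3: back to the image domain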

Homomorphic filtering

H[u, v] = γL + γH / (1 + (D0 / √(u² + v²))^(2n))

with γL a parameter that affects the low frequencies and γH a parameter that affects the high frequencies.

Adaptive filtering

An adaptive filter is a filter that self-adjusts its transfer function according to an optimizing algorithm. The goal is still to smooth the signal; however, we want to preserve edges... Examples: filtering by pixel grouping; conditional mean, bilateral filtering and mean shift filter; diffusion.

Conditional mean

IM[x, y] = Σ_{k ∈ V(x,y)} Σ_{l ∈ V(x,y)} h(k, l) × im[x − k, y − l]

Principle: pixels in a neighbourhood are averaged only if they differ from the central pixel by less than a given threshold:

h(k, l) = 1   if |im[x − k, y − l] − im[x, y]| < TH
          0   otherwise.

Example with a neighbourhood equal to (2 × 3 + 1) × (2 × 3 + 1) and TH = 32.
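
A direct sketch of the conditional mean (the division by the number of selected pixels is implied by the averaging principle; the half-window size and TH follow the example above):

    import numpy as np

    def conditional_mean(im, N=3, TH=32):
        """Average only the neighbours that differ from the central pixel by less than TH."""
        H, W = im.shape
        padded = np.pad(im.astype(np.float64), N, mode="edge")
        IM = np.empty((H, W), dtype=np.float64)
        for x in range(H):
            for y in range(W):
                window = padded[x:x + 2 * N + 1, y:y + 2 * N + 1]
                center = padded[x + N, y + N]
                selected = window[np.abs(window - center) < TH]   # h(k, l) = 1 for these pixels
                IM[x, y] = selected.mean()                        # never empty: the center always qualifies
        return IM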

Anisotropic Kuwahara filtering

Method proposed by Kuwahara and adapted by Nagao in 1980. Example for a 5 × 5 window:
selection of the sub-domain that has the minimum variance (9 windows for Nagao);
replace the value of the central pixel by the average value of the sub-domain having the minimum variance.

Bilateral filtering

The idea is to use a weighted filtering but with an outlier rejection. Pixels that are very different in intensity from the central pixel are weighted less even though they may be in close proximity to the central pixel. This is applied as two Gaussian filters at a localized pixel neighborhood:

IM[x, y] = ( 1 / Σ_{k,l} c_[x,y][k, l] s_[x,y][k, l] ) × Σ_{k,l} im[k, l] × c_[x,y][k, l] s_[x,y][k, l]

(the product c_[x,y][k, l] s_[x,y][k, l] plays the role of h[k, l]).

One filter is in the spatial domain, named the domain filter:

c_[x,y][k, l] = exp( − d([x, y], [k, l]) / (2σd) )

where d is the Euclidean distance. The other is in the intensity space, named the range filter:

s_[x,y][k, l] = exp( − φ(im[x, y], im[k, l]) / (2σr) )

where φ is a suitable measure of distance in intensity space.
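
A compact sketch of the bilateral filter above, with the common squared-distance Gaussian weights as a concrete choice for d and φ (σd, σr and the window size are illustrative parameters):

    import numpy as np

    def bilateral_filter(im, N=3, sigma_d=2.0, sigma_r=30.0):
        """Bilateral filter: domain weight c * range weight s, normalized per pixel."""
        H, W = im.shape
        im = im.astype(np.float64)
        padded = np.pad(im, N, mode="edge")
        k, l = np.meshgrid(np.arange(-N, N + 1), np.arange(-N, N + 1))
        c = np.exp(-(k**2 + l**2) / (2.0 * sigma_d**2))               # domain filter (spatial closeness)
        IM = np.empty_like(im)
        for x in range(H):
            for y in range(W):
                window = padded[x:x + 2 * N + 1, y:y + 2 * N + 1]
                s = np.exp(-(window - im[x, y])**2 / (2.0 * sigma_r**2))  # range filter (intensity similarity)
                w = c * s
                IM[x, y] = np.sum(w * window) / np.sum(w)
        return IM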

Bilateral filtering

Low-contrast texture has been removed and edges are well preserved.

Conclusion

Image transformation: global to point, point to point, local to point;
Histogram;
Linear filtering;
Non linear filtering.