High resolution digital image correlation using proper generalized decomposition: PGD-DIC
J.-C. Passieux⋆ and J.-N. Périé
Université de Toulouse; Institut Clément Ader; INSA, UPS, EMAC, ISAE; ICA
135 avenue de Rangueil, 31077 Toulouse, France

SUMMARY

A new method is proposed to measure a finite element (FE) displacement field from a deformed image with respect to a reference one. In contrast to standard FE approaches, the unknown displacement is sought as a sum of products of one-dimensional functions of the separated variables. Since the problems in each dimension are uncoupled, the method involves only 1D meshes and 1D problems. An algorithm that builds successive best rank-one approximations is proposed and integrated into the nonlinear iterations of the correlation problem. Although the method can be applied to spaces of any dimension, this paper focuses on 2D images. Several synthetic examples are provided to evaluate the performance of the method. In addition, it is shown that, even with this separated representation, the introduction of a regularization operator is convenient. The latter makes it possible to perform a pixel-wise measurement with huge computational savings.

key words: FE-based DIC; full field measurement; model order reduction; PGD; separation of variables

1. Introduction

Full field measurements by digital image correlation (DIC) are now widely used in mechanical engineering and materials science [37]. The technique is especially attractive under special experimental conditions (high temperature), with soft materials (like stone wool [9]), or with complex heterogeneous microstructures (textile reinforced composites [18]), where conventional measurement techniques can be tricky (or simply unfeasible). In such situations, optical full field measurement appears to be the preferred (if not the only) way of obtaining reliable deformation measurements. Moreover, heterogeneous displacement and strain field measurements can be used to identify complex constitutive relations [2, 26, 28].

Most frequently, DIC is based on the determination of the displacement of small regions of interest independently [38, 37]. An alternative is to seek the displacement field in a global manner by a Galerkin approach [4]. It is based on a weak formulation of the gray level conservation equation [10]. It leads to the inversion of a system whose solution yields the

∗ Correspondence to: Université de Toulouse; INSA; ICA; 135 avenue de Rangueil, 31077 Toulouse, FR

displacement over the entire region of interest (ROI). The choice of the approximation subspace in which one seeks the displacement can be varied. Interpolations based on a priori knowledge of the solution are sometimes directly integrated in the correlation algorithm (IDIC [33]). Thus the elastic crack tip asymptotic field has been successfully used to measure relevant mechanical information such as stress intensity factors or crack tip position [33, 34] in addition to the full displacement field. Among the possible choices of interpolation, an especially attractive method is based on the finite element framework [4]. This versatile method has been very effective for many applications in mechanics of materials and structures [4, 29, 34, 16, 9, 27]. Moreover, many numerical tools associated with the finite element method, initially devised for simulation, can be transposed to the measurement. This is the case of the extended finite element method (X-FEM) [19], used to represent a discontinuity that does not conform to the mesh. The X-DIC method [29, 30, 31] has thus greatly simplified the measurement of non-planar crack growth [27].

With these global FE approaches, when the spatial resolution decreases, the number of elements, and consequently the number of degrees of freedom (DOF) required to measure the displacement field within a given ROI, increases. For instance, when a pixel-wise mesh is used, the associated computational cost becomes significant. It is even worse with a voxel-wise mesh for digital volume correlation (DVC) [17]. The complexity is so high that the number of elements is limited: it can, at best, reach a few hundred in each of the three dimensions, provided that an enhanced implementation is used [17]. In this paper, we propose a method based on a tensor product approximation to overcome this barrier.

Initially developed for the simulation of evolution problems [13], the proper generalized decomposition (PGD) consists in seeking an unknown field of many dimensions as a finite sum of products of functions of the separated dimensions. The solution to an evolution problem is, for instance, sought as a sum of products of space functions by time functions. Then, a progressive construction of successive best rank-one approximations is performed, with or without updates [15, 24, 21]. In this example, the method iteratively involves only time-independent spatial problems along with scalar ordinary differential equations. This approach has been applied in many other contexts. For instance, it has been successfully used to solve multidimensional problems whose solution depends on a large number of dimensions and could not be obtained with standard techniques because of the so-called curse of dimensionality [1, 8] (a state of the art review can be found in [6]). The method has also been applied to stochastic problems (generalized spectral decomposition [20]) where stochastic and deterministic dimensions are separated. Finally, the PGD has also been used to separate the different dimensions of space. For example, a 3D plate problem (without plate hypothesis on the kinematics) was solved with the cost of 2D + 1D resolutions [5].

Generally speaking, tensor product approximation is often associated with the Proper Orthogonal Decomposition (POD) (also known as the Singular Value Decomposition (SVD), Principal Component Analysis (PCA), or Karhunen-Loeve Decomposition (KLD)). This tool has been widely used to devise model order reduction techniques [12, 35, 11].
It consists of a first learning phase (i.e. the snapshots) from which a POD is computed a posteriori. The next problems are then solved by projection onto the resulting basis. In contrast with the POD, the PGD does not assume any of the separated functions: they are directly computed by the solver. PGD should therefore rather be considered as an a priori model (or dimension) reduction technique.

In the present paper a PGD-based digital image correlation method is devised. The unknown

displacement field is sought as a sum of products of unidimensional functions (for instance of x, y and z in 3D). At each iteration of the correlation problem, the solution is corrected by a new best rank-one approximation. This new separated approximation requires the resolution of 1D problems in each dimension. As opposed to Q4-DIC (resp. C8-DVC), no surface (resp. volumetric) mesh is needed. One problem of quadratic (2D) or cubic (3D) complexity is replaced by several 1D problems that are smaller and of linear complexity. The linearized problem at one iteration of the correlation problem becomes nonlinear because of the PGD. A dedicated nonlinear solution strategy, which requires a second stage of iterations, is therefore proposed.

With the method proposed herein, the number of elements in each direction has much less impact on the computational cost. The method is therefore a good candidate to go down in resolution, ultimately to the scale of the pixel. But since the digital image correlation problem is ill-posed, a small number of pixels per element can lead to non-convergence of the algorithm. To reach pixel-wise DIC, a regularization is required [17]. In this paper we focus on the feasibility of regularization in the context of the proposed PGD-DIC. We only consider a simple frequency filter based on the Laplacian operator. With this regularization, PGD-DIC reaches pixel-wise digital image correlation at very appealing computational cost.

In section 2, the main ideas of the family of Galerkin-type digital image correlation methods are recalled, then an a posteriori study is proposed to illustrate the interest of separating the space variables. Afterward, the PGD-DIC formulation is presented along with dedicated algorithms for 2D problems. Several 2D synthetic test cases are presented in order to exemplify the intrinsic performance of the proposed method. Finally, a formulation of the aforementioned regularization is described and an example of pixel-scale DIC is studied.

2. Digital image correlation and dimensionality

2.1. Basics of digital image correlation

In this section, we give a brief review of the main aspects of digital image correlation. The reader may refer to Roux et al. [34] for more details. Let us consider two grayscale digital images before (Figure 1(a)) and after (Figure 1(b)) the deformation of a medium. Let f(x) be the image considered as the reference, from which one seeks the displacement vector field u(x) that best matches the deformed image g(x):

f(x) = g(x + u(x))   (1)

where x is the position discretized in pixels.

Remark 1. A sub-pixel displacement, or more generally a non-integer displacement, results in a change of the gray level in each pixel of the image. In order to be able to measure such a displacement, one needs to introduce an interpolation scheme for the gray level. In the literature, several schemes have been used (see, for instance, [36, 4]). In the sequel, a basic linear interpolation is used for the sake of simplicity.
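For concreteness, a minimal sketch of such a gray-level interpolation (here bilinear, in Python/NumPy; the function name and array conventions are illustrative, not taken from any particular DIC code) is given below.

```python
import numpy as np

def interp_gray(img, x, y):
    """Bilinear interpolation of the gray levels of `img` at (possibly
    non-integer) positions x, y given in pixel coordinates."""
    x = np.clip(np.asarray(x, float), 0.0, img.shape[1] - 1.001)
    y = np.clip(np.asarray(y, float), 0.0, img.shape[0] - 1.001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    tx, ty = x - x0, y - y0
    # weighted average of the four neighboring pixels
    return ((1 - tx) * (1 - ty) * img[y0, x0]
            + tx * (1 - ty) * img[y0, x0 + 1]
            + (1 - tx) * ty * img[y0 + 1, x0]
            + tx * ty * img[y0 + 1, x0 + 1])
```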

(a) reference image f(x)   (b) deformed image g(x)

Figure 1. Example of two synthetic grayscale images. f (left) and g (right) correspond to the reference and deformed states respectively. To produce g, the pixel intensities of the reference image f are advected using a known displacement field u. A given gray level located at pixel x in image f can be found in image g at the position x + u(x).

Equation (1) is the gray level conservation assumption [10]. Since this problem is ill-posed, it is written in a least squares sense: find u(x) minimizing the quadratic distance \phi^2:

\phi^2 = \int \left[ f(x) - g(x + u(x)) \right]^2 dx   (2)

This problem is nonlinear and is thus solved through an iterative process. At the current iteration, let us assume that an approximation of the displacement from the previous iteration, u_0(x), is known. The unknown displacement field is sought in the form u(x) = u_0(x) + \delta u(x), where the displacement correction \delta u(x) is assumed to be small enough to allow for a first order Taylor expansion:

g(x + u(x)) \approx g(x + u_0(x)) + \delta u(x)^T \nabla g(x + u_0(x))   (3)

This approximation is inserted into the quadratic distance (2) in order to linearize the formulation:

\phi^2 = \int \left[ (f - g_u) - \delta u^T \nabla g_u \right]^2 dx   (4)

where g_u(x) = g(x + u_0(x)), (f - g_u) being the correlation residual field. If the nonlinear algorithm converges, then g_u(x) is a good approximation of f(x). This is why one chooses to approximate \nabla g_u by \nabla f; it can then be computed once and for all before the first iteration. By differentiating the modified quadratic distance \phi with respect to \delta u, the linear prediction of the correlation problem reads: find \delta u such that, for all \delta u^\star,

\int \left( \delta u^{\star T} \nabla f \right) \left( \delta u^T \nabla f \right) dx = \int \delta u^{\star T} \nabla f \, (f - g_u) \, dx   (5)

A priori, the space of the unknown displacement is of infinite dimension. It is restricted to an approximate finite dimensional subspace. Without loss of generality, the following interpolation of the unknown field is chosen:

\delta u(x) = \sum_{n=1}^{N} q_n \varphi_n(x)   (6)


The introduction of this interpolation (6) in Problem (5) leads to the resolution of the following linear system at iteration n:

M q = b^n   (7)

where q is the DOF vector collecting the values q_n, and where the operator M and right-hand side b of (7) read:

M_{ij} = \int \varphi_i(x)^T \nabla f \, \nabla f^T \varphi_j(x) \, dx

b_i^n = \int \varphi_i(x)^T \nabla f \, \left( f(x) - g(x + u^n) \right) dx
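As an illustration of how M and b^n can be evaluated in practice (the integrals being approximated by sums over the pixels of the ROI), a short Python/NumPy sketch follows; the array layout (`phi`, `gradf`, `res`) is a hypothetical convention, not the authors' implementation.

```python
import numpy as np

def assemble_dic_system(phi, gradf, res):
    """Assemble the FE-DIC operator M and right-hand side b by pixel sums.
       phi:   (N, npix, 2) values of the N vector shape functions at each pixel
       gradf: (npix, 2)    gradient of the reference image f at each pixel
       res:   (npix,)      correlation residual f - g(x + u^n) at each pixel"""
    p = np.einsum('npk,pk->np', phi, gradf)   # (phi_n . grad f) at every pixel
    M = p @ p.T           # M_ij = sum_px (phi_i . grad f)(phi_j . grad f)
    b = p @ res           # b_i  = sum_px (phi_i . grad f)(f - g_u)
    return M, b
```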

So far the choice of the interpolation functions \varphi_n(x) has not been specified. In the literature, most frequently, these functions have a local support and represent a set of piecewise constant or linear functions. When relevant a priori mechanical information is available [33], one can use a basis of global modes based on this physical knowledge [34]. Another choice consists in using the finite element framework for the interpolation of the unknown displacement field [4]. For example, 4-node quadrilateral elements (Q4 [4]) and 8-node hexahedral elements (C8 [9]) were used for the analysis of 2D and 3D images. Non-regular grids have also been used with triangular elements (T3), see for instance [16]. The finite element method has been widely studied in the field of simulation over the past 20 years, and a number of developments are therefore translatable, more or less directly, to full field measurements. In particular, X-FEM, capable of modeling a discontinuity that is not compatible with the mesh, has been successfully applied to digital image correlation in the presence of 2D [31] or 3D cracks [27]. To be fair, one must mention that the relationship between simulation and experiment is not one-way: thanks to this common language, tools originally devised for the measurement can also be transposed to the simulation [32, 23].

Despite its versatility, the finite element method, unlike the local approaches, can lead to the inversion of several large systems whose computational cost can be substantial. For instance, if we consider a 2D image meshed with 1024 × 1024 elements, it corresponds to a nonlinear problem of more than 2 million degrees of freedom. Understandably, the number of elements used will be limited by the processing power of the computer, especially when trying to reach voxel-scale DVC [17]. In this paper, we propose a method that reduces the dimensionality, in order to push those limits.

2.2. Motivating the use of separation of variables

This section aims to assess whether a variable separation technique may or may not be effective for this type of problem. For that, let u(x) be a discrete displacement field corresponding to a certain measurement obtained by the FE-based digital image correlation described above. Each component of the vector field u is naturally a function of the two variables x and y:

u = \begin{pmatrix} u(x, y) \\ v(x, y) \end{pmatrix}

Let us consider the component u alone (the same analysis can be made for v). This known function of two variables is now written as the smallest sum of products of separated functions a_i and b_i:

u(x, y) = \sum_i a_i(x) b_i(y)

A priori, an infinite family of pairs (a_i, b_i) can satisfy this decomposition. One seeks the best orthogonal basis by adding the optimality condition: \forall n \in N, (a_i, b_i)_{1 \le i \le n} minimizes the form

J(a_1, b_1, ..., a_n, b_n) = \left\| u(x, y) - \sum_{i=1}^{n} a_i(x) b_i(y) \right\|^2   (8)

under the orthogonality constraints:

\forall i, j \ne i \quad \langle a_i, a_j \rangle_x \equiv \int_x a_i(x) a_j(x) \, dx = 0 \quad \text{and} \quad \langle b_i, b_j \rangle_y \equiv \int_y b_i(y) b_j(y) \, dy = 0   (9)

where \| \bullet \|^2 is the Frobenius norm defined by \| \bullet \|^2 = \langle \langle \bullet, \bullet \rangle_x \rangle_y. Problem (8) can be reformulated as an unconstrained formulation: find (a_i, b_i)_{1 \le i \le n} minimizing

L(a_1, b_1, ..., a_n, b_n) = J(a_1, b_1, ..., a_n, b_n) + \sum_{i=1}^{n} \sum_{\substack{j=1 \\ j \ne i}}^{n} \left( \lambda_{ij}^x \langle a_i, a_j \rangle_x + \lambda_{ij}^y \langle b_i, b_j \rangle_y \right)
The optimality conditions for maximizing the Lagrangian with respect to the Lagrange multipliers \lambda_{ij}^x and \lambda_{ij}^y yield the orthogonality of the basis (9). The optimality conditions for minimizing the Lagrangian with respect to the basis vectors read:

\frac{\partial L}{\partial a_i} = 0 \quad \rightarrow \quad a_i = \frac{\langle u, b_i \rangle_y}{\langle b_i, b_i \rangle_y}   (10)

\frac{\partial L}{\partial b_i} = 0 \quad \rightarrow \quad b_i = \frac{\langle u, a_i \rangle_x}{\langle a_i, a_i \rangle_x}   (11)

In order to ensure uniqueness of the decomposition, one should normalize one of the two functions a_i or b_i. One can also introduce a series of scalars (\omega_i)_{1 \le i \le n} defined by \forall i, \omega_i^2 = \langle a_i, a_i \rangle_x \langle b_i, b_i \rangle_y, and then normalize both a_i and b_i. Substituting equation (11) into (10) yields the eigenvalue problem [14]:

T(a_i) = \langle u, \langle u, a_i \rangle_x \rangle_y = \omega_i^2 a_i   (12)

which, in the discrete case, corresponds to computing the Proper Orthogonal Decomposition (POD) of u(x, y) (also called the Singular Value Decomposition (SVD), Principal Component Analysis (PCA), or Karhunen-Loeve Decomposition (KLD)), the \omega_i being the singular values. In the discrete case, let U denote the matrix which collects the values of u, where U_{ij} corresponds to the value of u at the position (x_i, y_j). Then Problem (12) consists in computing the m eigenvalues of the operator U^T U. The error made by the truncation after the first l products is quantified by the distance J(a_1, b_1, ..., a_l, b_l), which, in the discrete case, can be shown [12] to be equal to the sum of the truncated singular values:

J(a_1, b_1, ..., a_l, b_l) = \sum_{i=l+1}^{m} \omega_i^2


and a normalized truncation error can therefore be defined as:

e_{svd}(l) = \sqrt{\frac{J(a_1, b_1, ..., a_l, b_l)}{\| u(x, y) \|^2}} = \sqrt{\frac{\sum_{i=l+1}^{m} \omega_i^2}{\sum_{i=1}^{m} \omega_i^2}}   (13)

In practice, as will be shown in the examples, for the solution of a mechanical problem the magnitude of the singular values \omega_i decreases very rapidly. This means that, even when truncated after very few terms, the decomposition provides a very good approximation of u. It has just been shown that the solution of a correlation problem can admit, a posteriori, a compact decomposition in a separated representation. The focus of this paper is to introduce this decomposition a priori in the correlation formulation, in order to reduce the computational burden associated with finite elements.
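The following short Python/NumPy sketch performs this a posteriori separability check (Eq. (13)) on a displacement component sampled on a rectangular grid; the synthetic field at the end is only an arbitrary illustration.

```python
import numpy as np

def svd_truncation_error(U):
    """Normalized truncation error e_svd(l), Eq. (13), for l = 1..m,
    U being a 2D array sampling u(x, y) on a rectangular grid."""
    s = np.linalg.svd(U, compute_uv=False)        # singular values omega_i
    tail = np.cumsum((s**2)[::-1])[::-1]          # tail[k] = sum_{i>=k} omega_i^2
    return np.sqrt(np.append(tail[1:], 0.0) / tail[0])

# illustration on an arbitrary smooth (hence well separable) field
x, y = np.linspace(0, 1, 200), np.linspace(0, 1, 200)
U = np.outer(np.sin(np.pi * x), y**2) + 0.01 * np.outer(x, np.cos(np.pi * y))
print(svd_truncation_error(U)[:5])                # decays very fast here
```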

3. The PGD-DIC method

In this section, the formulation of the proposed PGD-DIC method is presented. Then dedicated algorithms are proposed along with error indicators, in order to control the accuracy of the solution. Sections 3.1 and 3.2 explain how to compute a new best rank-one approximation in the context of the linear prediction of a digital image correlation problem. Section 3.3 presents the proposed resolution strategy that combines PGD and digital image correlation. In the remaining sections, some examples are given.

3.1. Formulation

In the previous section, we saw that the solution to this problem could sometimes be easily separated, especially when it corresponds to a mechanical displacement field. In this paper, similarly to what is done for simulation [5], we propose to seek the unknown displacement directly as a separated vector field:

u(x, y) = \begin{pmatrix} u(x, y) \\ v(x, y) \end{pmatrix} = \sum_{i=1}^{m} \begin{pmatrix} u_i^x(x) \cdot u_i^y(y) \\ v_i^x(x) \cdot v_i^y(y) \end{pmatrix}   (14)

where the quadruplet of functions (u_i^x, u_i^y, v_i^x, v_i^y) is unknown a priori. These unknown functions are calculated iteratively by a progressive construction of successive best rank-one (m = 1) approximations. To calculate the (m+1)-th term of this sum, the m first quadruplets are considered as known and fixed. Thus, at this iteration, the unknown displacement field is decomposed as follows:

u(x, y) = u_0(x, y) + \begin{pmatrix} u_\alpha(x) \cdot u_\gamma(y) \\ v_\alpha(x) \cdot v_\gamma(y) \end{pmatrix}   (15)

where u_0 collects the fixed terms:

u_0(x, y) = \sum_{i=1}^{m} \begin{pmatrix} u_i^x(x) \cdot u_i^y(y) \\ v_i^x(x) \cdot v_i^y(y) \end{pmatrix}   (16)


The best correction u - u_0 of the displacement field minimizes the quadratic distance (2) associated with the gray level conservation equation. After linearization (3) and differentiation (5), we obtain:

\int (u^\star \cdot \nabla f)(\nabla f \cdot u) \, dx = \int (u^\star \cdot \nabla f)(f - g_u) \, dx   (17)

for all test fields assumed of the form:

u^\star(x, y) = \begin{pmatrix} u_\alpha^\star(x) \cdot u_\gamma(y) + u_\alpha(x) \cdot u_\gamma^\star(y) \\ v_\alpha^\star(x) \cdot v_\gamma(y) + v_\alpha(x) \cdot v_\gamma^\star(y) \end{pmatrix}   (18)

One can now insert the above separated expression of the unknown and test displacement fields in the weak formulation of the linearized correlation problem (5). The minimization with respect to each of the four unknown functions (u_\alpha, u_\gamma, v_\alpha, v_\gamma) leads to a system of four coupled equations:

\int_x u_\alpha^\star \, A_{11}(x) \, u_\alpha \, dx + \int_x u_\alpha^\star \, A_{12}(x) \, v_\alpha \, dx = \int_x u_\alpha^\star \, B_1(x) \, dx
\int_x v_\alpha^\star \, A_{21}(x) \, u_\alpha \, dx + \int_x v_\alpha^\star \, A_{22}(x) \, v_\alpha \, dx = \int_x v_\alpha^\star \, B_2(x) \, dx
\int_y u_\gamma^\star \, C_{11}(y) \, u_\gamma \, dy + \int_y u_\gamma^\star \, C_{12}(y) \, v_\gamma \, dy = \int_y u_\gamma^\star \, D_1(y) \, dy
\int_y v_\gamma^\star \, C_{21}(y) \, u_\gamma \, dy + \int_y v_\gamma^\star \, C_{22}(y) \, v_\gamma \, dy = \int_y v_\gamma^\star \, D_2(y) \, dy   (19)

with

A_{11}(x) = \int_y u_\gamma^2 \left( \frac{\partial f}{\partial x} \right)^2 dy, \quad A_{12}(x) = A_{21}(x) = \int_y u_\gamma v_\gamma \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \, dy, \quad A_{22}(x) = \int_y v_\gamma^2 \left( \frac{\partial f}{\partial y} \right)^2 dy

C_{11}(y) = \int_x u_\alpha^2 \left( \frac{\partial f}{\partial x} \right)^2 dx, \quad C_{12}(y) = C_{21}(y) = \int_x u_\alpha v_\alpha \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \, dx, \quad C_{22}(y) = \int_x v_\alpha^2 \left( \frac{\partial f}{\partial y} \right)^2 dx

B_1(x) = \int_y u_\gamma \frac{\partial f}{\partial x} (f - g_u) \, dy, \quad B_2(x) = \int_y v_\gamma \frac{\partial f}{\partial y} (f - g_u) \, dy

D_1(y) = \int_x u_\alpha \frac{\partial f}{\partial x} (f - g_u) \, dx, \quad D_2(y) = \int_x v_\alpha \frac{\partial f}{\partial y} (f - g_u) \, dx

This system is obviously highly nonlinear. In the next section, we propose a suitable algorithm for its resolution.
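Assuming a rectangular ROI so that the integrals reduce to sums over pixel rows and columns, the x-dimension operators could be evaluated as in the following sketch (array names are illustrative, and the quadrature is the simple pixel sum used in the paper). The y-dimension operators C and D are obtained in exactly the same way, exchanging the roles of the two directions and using (u_\alpha, v_\alpha).

```python
import numpy as np

def x_operators(gradf, res, u_gam, v_gam):
    """Pixel-wise evaluation of A11, A12 (= A21), A22, B1, B2 of Eq. (19).
       gradf: (ny, nx, 2) gradient of f;  res: (ny, nx) residual f - g_u;
       u_gam, v_gam: (ny,) current y-functions sampled at every pixel row."""
    fx, fy = gradf[..., 0], gradf[..., 1]
    A11 = np.sum((u_gam**2)[:, None] * fx**2, axis=0)          # functions of x
    A12 = np.sum((u_gam * v_gam)[:, None] * fx * fy, axis=0)
    A22 = np.sum((v_gam**2)[:, None] * fy**2, axis=0)
    B1 = np.sum(u_gam[:, None] * fx * res, axis=0)
    B2 = np.sum(v_gam[:, None] * fy * res, axis=0)
    return A11, A12, A22, B1, B2
```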

3.2. Nonlinear solution algorithm

The formulation by separation of variables makes the linear prediction of the correlation problem (3) become nonlinear. This might seem like a drawback, but we will see in the following that it is actually an advantage, because it turns one problem of quadratic complexity (and even cubic in 3D) into several problems of linear complexity, without sacrificing accuracy. To solve this problem, we choose to combine the equations of system (19) in pairs, as follows:

\int_x \alpha^\star \cdot A(x) \, \alpha \, dx = \int_x \alpha^\star \cdot B(x) \, dx   (20)

\int_y \gamma^\star \cdot C(y) \, \gamma \, dy = \int_y \gamma^\star \cdot D(y) \, dy   (21)

with

\alpha(x) = \begin{pmatrix} u_\alpha(x) \\ v_\alpha(x) \end{pmatrix}, \quad \alpha^\star(x) = \begin{pmatrix} u_\alpha^\star(x) \\ v_\alpha^\star(x) \end{pmatrix}, \quad \gamma(y) = \begin{pmatrix} u_\gamma(y) \\ v_\gamma(y) \end{pmatrix}, \quad \gamma^\star(y) = \begin{pmatrix} u_\gamma^\star(y) \\ v_\gamma^\star(y) \end{pmatrix}

and

A(x) = \begin{pmatrix} A_{11}(x) & A_{12}(x) \\ A_{21}(x) & A_{22}(x) \end{pmatrix}, \quad B(x) = \begin{pmatrix} B_1(x) \\ B_2(x) \end{pmatrix}, \quad C(y) = \begin{pmatrix} C_{11}(y) & C_{12}(y) \\ C_{21}(y) & C_{22}(y) \end{pmatrix}, \quad D(y) = \begin{pmatrix} D_1(y) \\ D_2(y) \end{pmatrix}

Then, we apply an alternating-directions fixed point algorithm to compute the pair of vector functions (\alpha, \gamma).

a) x-dimension problem. First, \gamma is supposed to be fixed, so that \gamma^\star vanishes and system (19) reduces to equation (20). This problem, which only involves the dimension x, can be solved by an appropriate numerical method. For instance, one can use a finite element method which, in this case, relies on a 1D mesh of linear two-node bar elements (Bar2) with N_x nodes. The unknown vector field \alpha(x) is then sought in the following form:

\alpha(x) = \begin{pmatrix} \sum_{j=1}^{N_x} \varphi_j(x) \, a_j \\ \sum_{j=1}^{N_x} \varphi_j(x) \, b_j \end{pmatrix} = N_x \, q_\alpha   (22)

where N_x denotes the matrix that collects all the values of the finite element shape functions and q_\alpha the corresponding DOF vector. The problem of equation (20) then results in the following linear system:

A q_\alpha = B   (23)

with

A = \int_x (N_x)^T A(x) \, N_x \, dx, \qquad B = \int_x (N_x)^T B(x) \, dx
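A sketch of this 1D assembly, with the shape-function values precomputed at every pixel column of the ROI, might look as follows (the DOF ordering and names are an assumption for illustration):

```python
import numpy as np

def assemble_x_problem(shape, A11, A12, A22, B1, B2):
    """Assemble the system A q_alpha = B of Eq. (23) by pixel sums.
       shape: (nx, Nx) values of the 1D FE shape functions at every pixel column;
       DOFs are ordered as (a_1..a_Nx, b_1..b_Nx) for the two components of alpha."""
    Kuu = shape.T @ (A11[:, None] * shape)     # couples the a_j DOFs
    Kuv = shape.T @ (A12[:, None] * shape)     # u-v coupling (A12 = A21)
    Kvv = shape.T @ (A22[:, None] * shape)
    A = np.block([[Kuu, Kuv], [Kuv.T, Kvv]])
    B = np.concatenate([shape.T @ B1, shape.T @ B2])
    return A, B

# q_alpha = np.linalg.solve(A, B); then alpha at the pixel columns is
# shape @ q_alpha.reshape(2, -1).T
```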

b) y-dimension problem. Then, in a second step, one fixes the field \alpha determined above. This reduces system (19) to equation (21). This problem only involves the dimension y. In an analogous way, its resolution with a 1D finite element approach yields the following linear algebraic system:

C q_\gamma = D   (24)

These two steps are repeated until convergence, i.e., when both equations are satisfied simultaneously. We will see in the examples that, in practice, very few iterations are sufficient to make the overall algorithm converge. To quantify the convergence of the fixed point algorithm, the following stagnation criterion is used:

e^2 = \| \alpha_k - \alpha_{k-1} \|^2 + \| \gamma_k - \gamma_{k-1} \|^2 < \varepsilon_e   (25)

where \varepsilon_e is a parameter of the method. This choice will be discussed later.

Remark 2. Let us introduce the two following mappings:
- S_n maps a y-dimension function \lambda to an x-dimension function \alpha = S_n(\lambda) defined by equation (20).
- T_n maps an x-dimension function \alpha to a y-dimension function \lambda = T_n(\alpha) defined by equation (21).

The pair (\alpha, \lambda) is optimal in the sense of (4), and it verifies:
- \alpha is a fixed point of Q_n = S_n \circ T_n, i.e. \alpha = Q_n(\alpha)
- \lambda is a fixed point of Q_n^\star = T_n \circ S_n, i.e. \lambda = Q_n^\star(\lambda)

The previous problems are interpreted as pseudo-eigenvalue problems (see [14, 20, 21]), \alpha and \lambda being the dominant eigenfunctions of Q_n and Q_n^\star respectively. The alternating-directions fixed point algorithm described above can be seen as a power-type method applied to Q_n (or Q_n^\star). However, this problem does not have all the properties of an eigenvalue problem; it is a non-classical mathematical problem which calls for further mathematical investigation (see [20, 21]). In practice, according to our numerical examples and to the bibliography, this fixed point algorithm is relatively insensitive to initialization, and it always converges in practice, with a random initialization, within very few iterations.

Remark 3. The calculation of the above operators (left- and right-hand sides) requires the integration over the x (resp. y) domain of the gradient of f. The latter is a function of the two variables x and y. In such a situation, what one usually does with the PGD is to perform a truncated SVD of all the known unseparated quantities. In this way, any N-dimensional integral can be turned into a sum of products of unidimensional integrals. The image gradient being a vector field, it could be decomposed as a sum of products of separated functions, as is done for the displacement (which is also a vector field). But in our case, since the images are textured, they and their gradients are hardly separable. Namely, their decomposition involves so many terms that it is more efficient to compute the unseparated integrals. For this reason, and also because the SVD of the right-hand side would have to be computed at each iteration, we choose not to separate the known quantities.

3.3. The coupling strategy between DIC and PGD

In the previous sections we saw how to compute a new best rank-one approximation. It remains to define how to integrate the PGD into the nonlinear correlation algorithm. Different approaches could be considered; here we decided to compute one single rank-one approximation at each nonlinear update of g_u. Namely, when an enrichment pair (\alpha, \gamma) is determined as described in Sections 3.1 and 3.2, it is used to correct the displacement approximation u_0, and thus g_u = g(x + u_0(x)), and one can then proceed to the next iteration of the correlation problem.

Initialization. The correlation problem is ill-posed. It is therefore necessary to pay attention to the initialization in order to avoid local minima. In our case, the initialization only consists in the determination of the rigid body translations of the area of interest, because it is very inexpensive and very easy to implement in this context. One might consider, and it will probably be necessary for more complex cases, using more effective techniques to initialize the algorithm. In particular, one could revisit, for the PGD, the multiscale initialization technique proposed in [29].

Convergence. Like any other digital image correlation method, two indicators are available to quantify the convergence of the iterative algorithm. The first one (see [4, 17]) is based on the metric that is minimized. It is, more or less, equivalent to computing a relative norm of the

correlation residual (f - g_u):

\eta_r^2 = \frac{\int_x \left[ f(x) - g(x + u(x)) \right]^2 dx}{| \max(f) - \min(f) |}

This measure has all the attributes of a true error measure, but it also measures the noise and the approximation introduced by the finite element interpolation. In particular, if the field to be measured is not continuous, this error will never be zero (this property can be used advantageously to detect discontinuities, see [30, 34, 27]). Therefore, this metric cannot be used to measure convergence precisely. To do so, a stagnation indicator based on the relative norm of the correction is preferred [4, 16, 17]:

\eta^2 = \frac{\| \delta u_x \|^2}{\| u_x \|^2} + \frac{\| \delta u_y \|^2}{\| u_y \|^2} < \varepsilon_\eta

where \varepsilon_\eta is a parameter of the correlation algorithm discussed in the examples. The PGD-DIC method is summarized in Figure 2:

1: Compute gradients of f
2: Initialization step: u_0
   DIC problem (nonlinear iterations)
3: while \eta > \varepsilon_\eta do
4:    Update residual (f - g_u)
5:    New best rank-one approximation (fixed point). Initialization: \gamma, k = 1
6:    while e > \varepsilon_e AND k < k_max do
7:       x-monodimensional problem, \gamma fixed: A q_\alpha = B
8:       Normalization: \alpha = \alpha \, \| \alpha \|^{-1}
9:       y-monodimensional problem, \alpha fixed: C q_\gamma = D
10:      Fixed point stagnation indicator e (25)
11:      k \leftarrow k + 1
12:   end while
13:   Convergence indicator: \eta
14:   Displacement update: u_0 \leftarrow u_0 + \left( u_\alpha(x) \cdot u_\gamma(y), \; v_\alpha(x) \cdot v_\gamma(y) \right)^T
15: end while

Figure 2. The PGD-DIC method consists of a two-stage nonlinear iteration algorithm involving unidimensional problems only. For each linear prediction of the correlation problem, a new best rank-one approximation is computed iteratively using an alternating fixed point.
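The overall structure of Figure 2 can be summarized by the Python skeleton below. It is a sketch only: the callables `update_residual`, `solve_x` and `solve_y` stand for the operations described in Sections 3.1 and 3.2, and the convergence indicator is a simplified stand-in for the \eta of Section 3.3.

```python
import numpy as np

def pgd_dic(update_residual, solve_x, solve_y, nx, ny,
            eps_eta=1e-3, eps_e=1e-8, k_max=6, max_modes=100):
    """Two-stage PGD-DIC iteration (cf. Figure 2). u0 is stored as a list of
    rank-one modes, each mode being a pair (alpha, gamma) with alpha of shape
    (nx, 2) and gamma of shape (ny, 2)."""
    u0 = []
    for _ in range(max_modes):                       # nonlinear DIC iterations
        res = update_residual(u0)                    # residual f - g(x + u0)
        gamma = np.random.rand(ny, 2)                # random initialization
        alpha = np.zeros((nx, 2))
        for _k in range(k_max):                      # alternating fixed point
            alpha_new = solve_x(res, gamma)          # 1D problem in x, Eq. (23)
            alpha_new = alpha_new / np.linalg.norm(alpha_new)
            gamma_new = solve_y(res, alpha_new)      # 1D problem in y, Eq. (24)
            e2 = (np.linalg.norm(alpha_new - alpha)**2
                  + np.linalg.norm(gamma_new - gamma)**2)
            alpha, gamma = alpha_new, gamma_new
            if e2 < eps_e:                           # stagnation criterion (25)
                break
        u0.append((alpha, gamma))                    # displacement update
        # crude relative-correction indicator standing in for eta
        eta = (np.linalg.norm(alpha) * np.linalg.norm(gamma)
               / sum(np.linalg.norm(a) * np.linalg.norm(g) for a, g in u0))
        if eta < eps_eta:
            break
    return u0
```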

3.4. An artificial numerical example

We first analyse some synthetic cases to validate the approach and investigate different properties. To do so, an artificial image (500×500 pixels) based on Perlin noise is built following the approach described in [22]. This texture is deformed by a field (shown in Figures 5(a) and 5(d)) obtained from a finite element simulation of a homogeneous linear elastic domain. This 2D square domain is clamped on the right hand side and subjected to a uniform stress on the left hand side (the other external boundaries being traction-free). One can notice that this prescribed displacement field is not too simplistic, since its SVD shows that twenty modes are required to approximate it accurately, for both u(x, y) and v(x, y) (see Fig. 3(a)).

(a) Normalized error esvd as a function of the SVD truncation order of the prescribed displacement to quantify the separability of the reference solution.

(b) unidimensional meshes (yellow for the x-dimension and black for the y-dimension) used to study a given ROI (black dashed line)

Figure 3. A first synthetic example: Perlin noise is used for the texture of reference image f . Deformed image g is the advection of f by the solution of a finite element linear elastic simulation.

Then, a centered region of interest (ROI) of 304×304 pixels is considered and endowed with a mesh. Unlike a standard Q4-DIC approach, only two one-dimensional meshes are required here (Fig. 3(b)). In our case, 19 linear bar elements, 16 pixels long, are used in each direction. The quadrature of the shape functions is approximated by a sum over the pixels.

3.4.1. Resolution analysis. A DIC analysis usually involves a non-trivial compromise between accuracy (or uncertainties) and spatial resolution (or element size): the finer the resolution, the larger the displacement uncertainties (less information per element). To help make this choice, an a priori performance analysis can be carried out. In this section, such an analysis is performed with the artificial texture f (Figure 3(b)) used for this synthetic example. Following [4], a series of images g_i is generated by rigid body translations of f in the x-direction. The magnitude u_i^p of the prescribed displacement takes p = 10 values ranging from 0 to 1 pixel. This step also requires interpolating the gray level. In order to be more objective, we used an interpolation scheme different from the one used for the measurement: a scheme based on a shift in Fourier space was used to generate the translated images. Then the proposed method is run on the pairs of images (f, g_i). The quality of the measured displacement u_i^m is assessed by two indicators, the mean displacement error \langle \delta_u \rangle and the standard displacement uncertainty \langle \sigma_u \rangle, averaged over the u_i^p and defined by:

\langle \delta_u \rangle = \frac{1}{p} \sum_i \left| \langle u_i^m \rangle - u_i^p \right|, \qquad \langle \sigma_u \rangle = \frac{1}{p} \sum_i \langle (u_i^m - \langle u_i^m \rangle)^2 \rangle^{1/2}

These indicators are shown in Figures 4(a) and 4(b) as a function of the element size for the proposed PGD-DIC method. The latter is compared to Q4-DIC with an equivalent mesh, images and interpolation scheme.
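A minimal sketch of how such a study can be set up is given below: the sub-pixel translated images are generated with a Fourier-space phase shift (independent of the interpolation used in the measurement), and the two indicators are then computed from the measured fields. Names are illustrative.

```python
import numpy as np

def fourier_shift_x(img, dx):
    """Translate `img` by a sub-pixel amount dx along x using the Fourier
    shift theorem (periodic boundary conditions are implicitly assumed)."""
    kx = np.fft.fftfreq(img.shape[1])
    phase = np.exp(-2j * np.pi * kx * dx)
    return np.real(np.fft.ifft(np.fft.fft(img, axis=1) * phase, axis=1))

def error_and_uncertainty(measured, prescribed):
    """Mean displacement error <delta_u> and standard uncertainty <sigma_u>,
    averaged over the p prescribed translations; measured[i] is the
    displacement field obtained for the prescribed translation prescribed[i]."""
    delta = np.mean([abs(np.mean(m) - p) for m, p in zip(measured, prescribed)])
    sigma = np.mean([np.std(m) for m in measured])
    return delta, sigma
```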

(a) Standard displacement uncertainty \langle \sigma_u \rangle as a function of the element size h in Q4-DIC (◦) and PGD-DIC (+). The dashed line is a power-law fit Ah^\alpha with \alpha = -1.70

(b) Mean displacement error \langle \delta_u \rangle as a function of the element size h in Q4-DIC (◦) and PGD-DIC (+). The dashed line is a power-law fit Ah^\alpha with \alpha = -1.49

Figure 4. A priori performance analysis.

One can observe that the proposed method has almost the same performance as Q4-DIC regarding standard uncertainty and mean error. This result is not surprising, since the displacement interpolations are very similar, even if the solver differs between the two methods. Indeed, in practice, the PGD solution is, after convergence, very close to the equivalent FE solution [1, 5].

3.4.2. Measurement. The displacement field, obtained after 12 iterations of the PGD-DIC method, is presented in Figure 6 (right). There is a very good match between the measured fields (Figs. 5(b) and 5(e)) and the prescribed reference fields (Figs. 5(a) and 5(d)). The relative error between the two is very small (Figs. 5(c) and 5(f)). This is confirmed by the observation of the correlation residual (Fig. 6).

We now analyze some properties of the algorithm. Figure 7 shows the evolution of the stagnation criterion e (25) during the iterations of the fixed point for each of the 12 modes required for the construction of the solution described above. The 4 highest convergence rates correspond to the first 4 modes. Further, we can notice that the fixed point may converge more or less slowly. This may be due to the presence of modes close to each other, which are difficult for the fixed point algorithm to identify separately. As mentioned above, we choose, as was proposed in [15, 21], to stop the fixed point algorithm after a few iterations (in practice fewer than 10). We assume that a few iterations are enough to improve the iterate and make the correlation problem converge. To study the influence of this choice on the algorithm, we compare two stopping criteria for the fixed point algorithm. In both cases the precision is set to \varepsilon_e = 10^{-8}, but the algorithm is, in one case, systematically stopped after k_max = 6 iterations (denoted S1).


(a) prescribed displ. u(x, y) (px)

(b) measured displ. u(x, y) (px)

(c) absolute raw diff. u(x, y) (px)

(d) prescribed displ. v(x, y) (px)

(e) measured displ. v(x, y) (px)

(f) absolute raw diff. v(x, y) (px)

Figure 5. Accuracy of the solution obtained with PGD-DIC.

Figure 6. Characterization of the solution obtained with PGD-DIC: (left) the correlation residual map in percentage of the dynamic range after convergence, (right) the deformed solution field (amplification factor 15).

In the other case, the maximum number of iterations is set to k_max = 200 (denoted S2), so that the precision criterion \varepsilon_e = 10^{-8} is paramount. The results are shown in Figure 8. In Figure 8(a), the indicator \eta is plotted over the iterations of the correlation problem for both settings. One can notice that the curves are almost identical. This means that the precision of the fixed point has almost no influence on the convergence rate of PGD-DIC. However, when one observes (Fig. 8(b)) the number of one-dimensional problems that must be solved with both parameterizations, this number, and therefore the computational cost, is divided by nearly 20 for a given accuracy. Moreover, in this way, the method has a constant computation time per iteration, which can be comfortable for the user. For this example, the PGD-DIC converges very quickly (Fig. 8(a)),


Figure 7. Evolution of the stagnation criterion e (25) of the fixed point algorithm as a function of the sub-iteration number k for the first 12 nonlinear updates of the correlation problem.

(a) Convergence of the residual ηr (blue mixed line) and relative norm of the correction η with stopping criteria S2 (black dashed line) and S1 (red line)

(b) Number of monodimensional problems solved at each nonlinear iteration with the two different stopping parameters: S2 in black and S1 in red

Figure 8. Influence of the stopping criteria, S1 (k_max = 6) or S2 (k_max = 200), on the behavior of the correlation algorithm.

since the error \eta falls below 10^{-3} after only 12 iterations. In Figure 8(a), the evolution of the norm of the residual is also displayed (in blue) over the iterations. One can notice a classical horizontal asymptote, reflecting the fact that this metric takes into account the discretization (FE) errors, the noise and also the quality of the pattern. For information, the first 4 one-dimensional modes (Fig. 9) and their reconstructed graphical representation (Fig. 10) are given for this problem.

3.5. A more complex artificial example

The number of terms in the decomposition depends on the unknown displacement. Since only one rank-one approximation is added at each iteration, the overall number of iterations needed


(a) uα (x)

(b) vα (x)

(c) uγ (y)

(d) vγ (y)

Figure 9. First 4 unidimensional modes: (1) blue, (2) green, (3) red and (4) cyan.

(a) 1st mode

(b) 2nd mode

(c) 3rd mode

(d) 4th mode

Figure 10. Graphical representation of the first 4 corrections after normalization. They are reconstructed from the identified unidimensional modes of Figure 9.

to converge may also depend on the separability of the unknown field. A field which exhibits a diagonal discontinuity is typically hardly separable. This example is presented to evaluate the robustness of the proposed algorithm when measuring such a displacement field. Let us consider another synthetic 500×500-pixel pattern, deformed this time by the analytical mode I opening displacement of an inclined crack from linear elastic fracture mechanics:

u_{prescribed} = \frac{\sqrt{r}}{2} \begin{pmatrix} (\kappa - 0.5) \left( 2\cos(\theta/2) - \cos(3\theta/2) \right) \\ (\kappa + 0.5) \left( 2\sin(\theta/2) + \sin(3\theta/2) \right) \end{pmatrix}

where r and \theta are the local coordinates centered at the crack tip and \kappa is a material parameter. There are more appropriate and very efficient tools for digital image correlation in the presence of cracks [33, 34, 31, 27], and this topic goes far beyond the scope of this work. But this example is nevertheless interesting because it presents an inclined discontinuity. Thus, the SVD of the solution reveals a set of slowly decreasing singular values, as shown in Figure 11. This problem is solved with PGD-DIC with the same parameters/mesh/resolution as the previous example. It takes 81 iterations for PGD-DIC to reach an error \eta below 10^{-3}. The problem is also solved with a more standard Q4-DIC within 25 iterations for the same precision. The prescribed and measured fields are reported in Figure 12. The PGD-DIC requires more iterations than the standard Q4-DIC. This is due to the fact that we choose to add only one rank-one approximation per iteration, and, in this case, a large number of rank-one corrections are needed to represent the solution correctly. In this special case, the separation of variables artificially slows down the convergence of the correlation problem. This comparison of the number of iterations is not fair, however, since one iteration of PGD-DIC is much cheaper than one of Q4-DIC.


Figure 11. SVD of the prescribed displacement corresponding to an inclined crack in mode I

(a) u measured with Q4-DIC

(b) prescribed displ. u

(c) u measured with PGD-DIC

(d) v measured with Q4-DIC

(e) prescribed displ. v

(f) v measured with PGD-DIC

Figure 12. Prescribed and measured displacement of the opening inclined crack in mode I.

The proposed algorithm remains stable and the measured displacement field is very close to the one measured with Q4-DIC. Even in this disadvantageous case, the efficiency and accuracy are very satisfactory.
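To illustrate why this field is hardly separable, one can sample the analytical u component above on a pixel grid and look at the decay of its singular values; in the sketch below the crack inclination and the value of κ are arbitrary illustration choices.

```python
import numpy as np

n, kappa, incl = 500, 1.8, np.deg2rad(30)        # arbitrary illustration values
yy, xx = np.mgrid[0:n, 0:n] - n / 2.0            # crack tip at the image center
xr = xx * np.cos(incl) + yy * np.sin(incl)       # coordinates aligned with the
yr = -xx * np.sin(incl) + yy * np.cos(incl)      # inclined crack plane
r, theta = np.hypot(xr, yr), np.arctan2(yr, xr)
u = 0.5 * np.sqrt(r) * (kappa - 0.5) * (2 * np.cos(theta / 2)
                                        - np.cos(3 * theta / 2))

s = np.linalg.svd(u, compute_uv=False)
print(s[:20] / s[0])     # slow decay: many modes are needed (cf. Figure 11)
```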

3.6. Analysis of a realistic example

To validate the approach on a real test case, we study the example of a biaxial test on a carbon/carbon composite carried out with the multiaxial testing machine ASTREE. This example has already been studied in [25], and more recently in [26] to identify a damage law. The displacement is measured within a ROI of 850×780 pixels. Elements of 24 pixels width are used in each direction. The reference image and the meshes are presented in Figure 13(a). The stopping criterion is set to \eta = 10^{-3} for this example. The problem is solved with the proposed

(a) Reference image and mesh

(b) Displacement magnitude on the deformed domain (ampl. 42)

(c) Correlation residual

Figure 13. Application of the PGD-DIC to a real experiment: cruciform specimen made of C/C composite subjected to in-plane biaxial loading. The B/W painted speckle pattern (a). The measured displacement field (b) and the correlation residual map in percentage of the dynamic range (c).

PGD-DIC. The displacement magnitude is plotted on the deformed domain in Figure 13(b) and the associated correlation residual is given in Figure 13(c). The solution obtained using the PGD is reconstructed and compared to a reference solution (Fig. 14). The latter is obtained by a standard Q4-DIC method on the same ROI with the same number of pixels per element side. The accuracy is quantified by the following measure of the distance between the PGD and Q4

(a) v measured with PGD-DIC (px) (b) v measured with Q4-DIC (px)

(c) absolute raw difference (px)

Figure 14. Comparison of the y component of the displacement measured with PGD-DIC and Q4-DIC. The same accuracy is obtained for u_x.

solutions:

d(u^{pgd}, u^{q4}) = \frac{\sum_{i=1}^{n_{pix}} \left( u_i^{pgd} - u_i^{q4} \right)^2}{\sum_{i=1}^{n_{pix}} \left( u_i^{q4} \right)^2} = 0.09\% \qquad \text{and} \qquad d(v^{pgd}, v^{q4}) = 0.12\%

where n_{pix} is the number of pixels within the ROI. In our numerical tests, and as is the case here, the error due to the PGD approximation is of the same order of magnitude as the stopping criterion \eta. However, this error is much lower than the error due to noise and interpolation, which is equal to \eta_r = 5.70\% for both Q4-DIC and PGD-DIC.
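This distance is a simple relative quadratic norm computed over the pixels of the ROI; a short Python sketch:

```python
import numpy as np

def relative_distance(u_pgd, u_q4):
    """Relative quadratic distance between two displacement maps, in percent."""
    return 100.0 * np.sum((u_pgd - u_q4)**2) / np.sum(u_q4**2)
```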

4. Pixel-scale digital image correlation

The advantage of a formulation by separation of variables lies in the fact that the number of finite elements considered, and therefore the size of the ROI, is much less restricted. It may therefore be tempting to go down in resolution to the pixel level, without being confronted with huge computational costs.

Remark 4. Since the image gray level is interpolated, the spatial interpolation of the displacement is a priori not dependent on the original pixelization of the image. To capture high gradients in the displacement field, a fine mesh is required. Nevertheless, in this case, the gradient of the displacement is filtered by the image acquisition (pixelization). The cutoff frequency of this filter is linked to the size of the pixel. Therefore there is a priori no need to consider elements smaller than one pixel. As a consequence, the smallest elements which can reasonably be used are at the scale of the pixel.

As mentioned in the introduction, the correlation problem being ill-posed, it requires an additional regularization to go down to this scale (see for instance [31, 16, 17]). In this section, we show that taking a regularization into account is very easy in the PGD-DIC method. Here, a simple (but non-physical) frequency filter is used. A regularization with greater physical sense (like [7, 17, 31]) should reduce the uncertainty much further.

4.1. Regularization

The purpose of this part is to show the feasibility of the targeted one-element-per-pixel approach. We therefore choose to regularize the formulation with a frequency-type filter [31, 16, 17]. Specifically, a Tikhonov regularization is used. It corresponds to the addition of the following term to the formulation:

\Delta u = 0   (26)

The associated variational formulation reads:

\int_x u^\star \Delta u \, dx = - \int_x \left( \frac{\partial u^\star}{\partial x} \frac{\partial u}{\partial x} + \frac{\partial u^\star}{\partial y} \frac{\partial u}{\partial y} \right) dx   (27)

since a flux-free condition is imposed on the boundaries. Given a separated displacement field u, it is very easy to compute the different terms of the regularization as follows:

\frac{\partial u(x, y)}{\partial x} = \begin{pmatrix} \dfrac{\partial u_\alpha(x)}{\partial x} \cdot u_\gamma(y) \\ \dfrac{\partial v_\alpha(x)}{\partial x} \cdot v_\gamma(y) \end{pmatrix}   (28)

and thus, for the x-dimension problem, the regularization term writes:

\int_x \int_y \left( \frac{\partial u^\star}{\partial x} \frac{\partial u}{\partial x} + \frac{\partial u^\star}{\partial y} \frac{\partial u}{\partial y} \right) dx \, dy = \left( \int_x \frac{\partial u_\alpha^\star}{\partial x} \frac{\partial u_\alpha}{\partial x} dx \cdot \int_y u_\gamma^2 \, dy + \int_x u_\alpha^\star u_\alpha \, dx \cdot \int_y \left( \frac{\partial u_\gamma}{\partial y} \right)^2 dy \right) + \left( \int_x \frac{\partial v_\alpha^\star}{\partial x} \frac{\partial v_\alpha}{\partial x} dx \cdot \int_y v_\gamma^2 \, dy + \int_x v_\alpha^\star v_\alpha \, dx \cdot \int_y \left( \frac{\partial v_\gamma}{\partial y} \right)^2 dy \right)
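Under the same pixel-sum quadrature assumption as before, the two separated 1D integrals of this term can be evaluated and added, with a penalization weight, to the corresponding block of the x-dimension system; a sketch for the u component (the v component being identical) follows, with illustrative names.

```python
import numpy as np

def x_regularization_u(shape, dshape, u_gam, du_gam, weight):
    """Separated Tikhonov (Laplacian) contribution to the x-problem, u component.
       shape, dshape: (nx, Nx) shape functions and their x-derivatives at pixels
       u_gam, du_gam: (ny,)    current u_gamma and its y-derivative at pixel rows
       weight:        penalization parameter (linked to the cutoff frequency)."""
    int_g2 = np.sum(u_gam**2)             # integral of u_gamma^2 over y
    int_dg2 = np.sum(du_gam**2)           # integral of (d u_gamma / dy)^2 over y
    K = int_g2 * (dshape.T @ dshape) + int_dg2 * (shape.T @ shape)
    return weight * K                     # to be added to the Kuu block of A
```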

This regularization term is associated with a penalization parameter whose value can be linked to the filter cutoff frequency (see for instance [31, 16]). Such a regularization is applied to the example of section 3.6, with the same mesh (which is actually not a pixel-wise mesh) and ROI, to show the effect of regularization. It is illustrated in Figure 15. The study of this parameter (see for instance [7, 17, 31]) is beyond the scope of this paper, whose purpose is to demonstrate the feasibility of such a regularization in the context of separation of variables.

Figure 15. Effect of the regularization for different penalization values (from left to right: 0, 10^4, 10^6 and 10^8) on the x-displacement (u) of the realistic example of section 3.6.

4.2. A numerical example

In this section, we construct a 1024×1024-pixel artificial image deformed by the simulation-based displacement used for Figure 6 (right). The ROI is made of a 710×710-pixel region endowed with two one-dimensional meshes of linear 2-node elements with a size of one single pixel (see Figure 16(a)). With a standard Q4-DIC, such a mesh would lead to the resolution of a nonlinear problem of more than 1 million DOF. The PGD-DIC was used to solve this problem in less than 3 minutes on a general purpose single-processor laptop. The error indicator \eta reached 10^{-3} within 9 iterations. The correlation residual \eta_r was equal to 0.59%. The relative error on the displacement compared to the prescribed reference field is plotted in Figures 16(b) and 16(c). The absolute raw differences 16(b) and 16(c) present some fringes which correspond to gray level interpolation errors. Indeed, the dark areas (low error) correspond to a displacement amplitude which is an integer number of pixels and for which interpolation


(a) zoom on a part of the mesh

(b) ux absolute raw diff. (px)

(c) uy absolute raw diff. (px)

Figure 16. Mesh used and accuracy obtained with the pixel-wise PGD-DIC

is exact. These fringes are particularly visible here because the pair of images is synthetic, interpolation uncertainty being dominant in this case. The proposed method can easily be associated with a regularization operator in order to tackle pixel-wise resolution. It is a fast and efficient tool, which builds the same solution as Q4-DIC (up to the separated approximation). It is especially suitable for very small resolutions, when the computational cost associated with a classical FE approach becomes prohibitive. The computational cost required to solve the nonlinear correlation problem with one pixel per element is plotted in Figure 17 as a function of the size of the ROI. Even if one has to perform sub-iterations to solve the linearized correlation problem, the computational cost of the PGD is much lower than that of a classical finite element approach. For this example, the speed-up is especially impressive (more than one order of magnitude) when a large number of elements is used in each direction. As usual with the PGD, no quantitative conclusions (speed-up or

Figure 17. CPU Time taken by Q4-DIC and PGD-DIC to solve the problem as a function of the number of pixels in each direction, normalized by the time taken by Q4-DIC for a 10×10 pixel ROI.

complexity) can be stated, since the savings strongly depend on the “separability” of the unknown displacement field.

5. Conclusion

In this article, we propose a new digital image correlation (DIC) technique based on the proper generalized decomposition (PGD). The method preserves the advantages of standard “finite-element” DIC approaches (modularity, continuity, common language with simulation, etc.) while avoiding their main drawback, i.e. the computational cost. Indeed, whatever the dimension of the problem (2D or 3D), the complexity of the proposed PGD-DIC is linear. With such a property, if the interest is clear in 2D, it will be even more obvious in the case of digital volume correlation (DVC).

The work presented herein is only a first step and many prospects arise. Among them, a multilevel technique, as developed in [29], is to be devised to give the method the robustness needed to address more realistic problems. The extension to DVC should be straightforward. It is expected to drastically reduce the computational costs associated with, for example, the correlation of tomographic images [9], which can be prohibitive if the expected resolution is small in comparison to the size of the region of interest [17]. Furthermore, a more physical regularization should be used to filter the measurement and decrease the noise sensitivity. For instance, an adaptation of the equilibrium gap method [7, 17, 31] in the context of the PGD should provide even more accurate measurements. Finally, the separation of variables is naturally not restricted to space variables. Indeed, a similar method and similar algorithms could be used to separate time and space in the case of time-dependent correlation problems [3].

REFERENCES
1. A. Ammar, B. Mokdad, F. Chinesta, and R. Keunings. A new family of solvers for some classes of multidimensional partial differential equations encountered in kinetic theory modeling of complex fluids: Part II: Transient simulation using space-time separated representations. Journal of Non-Newtonian Fluid Mechanics, 144(2-3):98–121, 2007.
2. S. Avril, M. Bonnet, A.-S. Bretelle, M. Grédiac, F. Hild, P. Ienny, F. Latourte, D. Lemosse, S. Pagano, E. Pagnacco, and F. Pierron. Overview of identification methods of mechanical parameters based on full-field measurements. Experimental Mechanics, 48(4):381–402, 2008.
3. G. Besnard, S. Guérard, S. Roux, and F. Hild. A space-time approach in digital image correlation: Movie-DIC. Opt. Lasers Eng., 49:71–81, 2011.
4. G. Besnard, F. Hild, and S. Roux. “Finite-element” displacement fields analysis from digital images: Application to Portevin–Le Chatelier bands. Experimental Mechanics, 46(6):789–803, 2006.
5. B. Bognet, A. Leygue, F. Chinesta, and A. Poitou. PGD and separated space variables representation for linear elasticity in plate domains. In AMPT (Advances in Material Processing Technologies) 2010, 2010.
6. F. Chinesta, A. Ammar, and E. Cueto. Recent advances and new challenges in the use of the proper generalized decomposition for solving multidimensional models. Archives of Computational Methods in Engineering: State of the Art Reviews, 17(4):327–350, 2010.
7. D. Claire, F. Hild, and S. Roux. A finite element formulation to identify damage fields: the equilibrium gap method. Int. J. Numer. Meth. Eng., 61(2):189–208, 2004.
8. D. González, A. Ammar, F. Chinesta, and E. Cueto. Recent advances on the use of separated representations. Int. J. Numer. Meth. Eng., 81:637–659, 2010.
9. F. Hild, E. Maire, S. Roux, and J.-F. Witz. Three-dimensional analysis of a compression test on stone wool. Acta Materialia, 57(11):3310–3320, 2009.
10. B.K.P. Horn and G. Schunck. Determining optical flow. Artificial Intelligence, 17:185–203, 1981.
11. P. Kerfriden, P. Gosselet, S. Adhikari, and S.P.A. Bordas. Bridging proper orthogonal decomposition methods and augmented Newton–Krylov algorithms: An adaptive model order reduction for highly nonlinear mechanical problems. Comput. Meth. Appl. Mech. Eng., 200(5–8):850–866, 2011.

12. K. Kunisch and S. Volkwein. Galerkin proper orthogonal decomposition methods for a general equation in fluid dynamics. SIAM Journal on Numerical Analysis, 40(2):492–515, 2002.
13. P. Ladevèze. Sur une famille d'algorithmes en mécanique des structures. Comptes Rendus de l'Académie des Sciences, 300(2):41–44, 1985.
14. P. Ladevèze. Nonlinear Computational Structural Mechanics: New Approaches and Non-Incremental Methods of Calculation. Springer Verlag, 1999.
15. P. Ladevèze, J.-C. Passieux, and D. Néron. The LATIN multiscale computational method and the proper generalized decomposition. Comput. Methods Appl. Mech. Engng., 199(21):1287–1296, 2009.
16. H. Leclerc, J.-N. Périé, S. Roux, and F. Hild. Integrated digital image correlation for the identification of mechanical properties. In A. Gagalowicz and W. Philips (eds), MIRAGE, volume 5496, pages 161–171, 2009.
17. H. Leclerc, J.-N. Périé, S. Roux, and F. Hild. Voxel-scale digital volume correlation. Experimental Mechanics, 51(4):479–490, 2011.
18. S.V. Lomov, P. Boisse, E. De Luycker, F. Morestin, K. Vanclooster, D. Vandepitte, A. Willems, and I. Verpoest. Full-field strain measurements in textile deformability studies. Composites Part A: Applied Science and Manufacturing, 39(8):1232–1244, 2008.
19. N. Moës, J. Dolbow, and T. Belytschko. A finite element method for crack growth without remeshing. International Journal for Numerical Methods in Engineering, 46:131–150, 1999.
20. A. Nouy. Generalized spectral decomposition method for solving stochastic finite element equations: invariant subspace problem and dedicated algorithms. Comput. Meth. Appl. Mech. Eng., 197(51-52):4718–4736, 2008.
21. A. Nouy. A priori model reduction through proper generalized decomposition for solving time-dependent partial differential equations. Comput. Meth. Appl. Mech. Eng., 199(23-24):1603–1626, 2010.
22. J.-J. Orteu, D. Garcia, L. Robert, and F. Bugarin. A speckle-texture image generator. In Speckle'06 International Conference, volume 6341, http://dx.doi.org/10.1117/12.695280, 2006.
23. J.-C. Passieux, A. Gravouil, J. Réthoré, and M.-C. Baietto. Direct estimation of generalized stress intensity factors using a three-scale concurrent multigrid X-FEM. Int. J. Numer. Meth. Eng., 85(13):1648–1666, 2011.
24. J.-C. Passieux, P. Ladevèze, and D. Néron. A scalable time-space multiscale domain decomposition method: adaptive time scale separation. Comput. Mech., 46(4):621–633, 2010.
25. J.-N. Périé, S. Calloch, C. Cluzel, and F. Hild. Analysis of a multiaxial test on a C/C composite by using digital image correlation and a damage model. Experimental Mechanics, 42:318–328, 2002.
26. J.-N. Périé, H. Leclerc, S. Roux, and F. Hild. Digital image correlation and biaxial test on composite material for anisotropic damage law identification. International Journal of Solids and Structures, 46(11-12):2388–2396, 2009.
27. J. Rannou, N. Limodin, J. Réthoré, A. Gravouil, W. Ludwig, M.-C. Baietto-Dubourg, J.-Y. Buffière, A. Combescure, F. Hild, and S. Roux. Three dimensional experimental and numerical multiscale analysis of a fatigue crack. Computer Methods in Applied Mechanics and Engineering, 199:1307–1325, 2010.
28. J. Réthoré. A fully integrated noise robust strategy for the identification of constitutive laws from digital images. Int. J. Numer. Meth. Eng., 84(6):631–660, 2010.
29. J. Réthoré, F. Hild, and S. Roux. Shear-band capturing using a multiscale extended digital image correlation technique. Comput. Methods Appl. Mech. Eng., 196(49–52):5016–5030, 2007.
30. J. Réthoré, F. Hild, and S. Roux. Extended digital image correlation with crack shape optimization. Int. J. Numer. Meth. Eng., 73(2):248–272, 2008.
31. J. Réthoré, S. Roux, and F. Hild. An extended and integrated digital image correlation technique applied to the analysis of fractured samples. Eur. J. Comput. Mech., 18:285–306, 2009.
32. J. Réthoré, S. Roux, and F. Hild. Hybrid analytical and extended finite element method (HAX-FEM): A new enrichment procedure for cracked solids. International Journal for Numerical Methods in Engineering, 81(3):269–285, 2010.
33. S. Roux and F. Hild. Stress intensity factor measurements from digital image correlation: post-processing and integrated approaches. Int. J. Fract., 140:141–157, 2006.
34. S. Roux, J. Réthoré, and F. Hild. Digital image correlation and fracture: an advanced technique for estimating stress intensity factors of 2D and 3D cracks. J. Phys. D: Appl. Phys., 42(214004), 2009.
35. D. Ryckelynck. A priori hyperreduction method: an adaptive approach. Journal of Computational Physics, 202:346–366, 2005.
36. H.W. Schreier, J.R. Braasch, and M.A. Sutton. Systematic errors in digital image correlation caused by intensity interpolation. Optical Engineering, 39(11):2915–2921, 2000.
37. M.A. Sutton, J.-J. Orteu, and H. Schreier. Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications. Springer, New York, NY (USA), 2009.
38. M.A. Sutton, W.J. Wolters, W.H. Peters, W.F. Ranson, and S.R. McNeill. Determination of displacements using an improved digital correlation method. Image and Vision Computing, 1(3):133–139, 1983.
