Adaptive SQP Method for Shape Optimization


P. Morin, R. H. Nochetto, M. S. Pauletti and M. Verani

Abstract We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state equation, update the boundary, and compute the geometric functional. We present a novel algorithm that uses a dynamic tolerance and equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution — a new paradigm in adaptivity.

1 Shape Optimization as Adaptive Sequential Quadratic Programming

Shape optimization problems governed by partial differential equations (PDE) can be formulated as constrained minimization problems with respect to the shape of a domain Ω in R^d. If u = u(Ω) is the solution of a PDE in Ω, the state equation is

\[
A\, u(\Omega) = f, \tag{1}
\]

and J(Ω) = J(Ω, u(Ω)) is a cost functional, then we consider the minimization problem

P. Morin, Instituto de Matemática Aplicada del Litoral, Universidad Nacional del Litoral, CONICET, Santa Fe, Argentina, e-mail: [email protected]. Partially supported by Universidad Nacional del Litoral through Grant CAI+D PI-62-312, and CONICET through Grant PIP 112-200801-02182.

R. H. Nochetto, Department of Mathematics and Institute for Physical Science and Technology, University of Maryland, College Park, USA, e-mail: [email protected]. Partially supported by NSF grants DMS-0505454 and DMS-0807811.

M. S. Pauletti, Department of Mathematics, University of Maryland, College Park, and Department of Mathematics, Texas A&M, USA, e-mail: [email protected]. Partially supported by NSF grants DMS-0505454 and DMS-0807811.

M. Verani, MOX - Dipartimento di Matematica “F. Brioschi”, Politecnico di Milano, Milano, Italy, e-mail: [email protected]. Partially supported by Italian FIRB RBIP06HF8S.


\[
\Omega^* \in U_{ad}: \qquad J(\Omega^*) = \inf_{\Omega \in U_{ad}} J(\Omega), \tag{2}
\]

within the set Uad of admissible domains in R^d. This is a constrained minimization problem for J. In this paper we formulate an Adaptive Sequential Quadratic Programming algorithm (or ASQP) that adaptively builds a sequence of domains {Ωk}k≥0 converging to a local minimizer of the shape optimization problem (1)–(2). To motivate and briefly describe the ideas underlying ASQP, we need the concept of shape derivative dJ(Ω; w) of J(Ω) in the direction of a normal velocity w,

\[
dJ(\Omega; w) = \int_{\Gamma} g(\Omega)\, w, \tag{3}
\]

see [13] for its precise definition. We observe that g(Ω), the Riesz representation of the shape derivative dJ(Ω), depends on u(Ω). We present ASQP in two steps: we first introduce an infinite dimensional Sequential Quadratic Programming (Exact SQP) algorithm, and next we introduce and motivate its adaptive finite dimensional version, responsible for the inexact nature of ASQP.

Exact SQP Algorithm. We let Ωk be the current iterate and Ωk+1 be the new one. We let Γk := ∂Ωk and let V(Γk) be a Hilbert space defined on Γk, with scalar product bΓk(·,·) : V(Γk) × V(Γk) → R and norm ‖·‖_{V(Γk)}. This gives rise to the elliptic selfadjoint operator Bk : V(Γk) → V(Γk)* defined by ⟨Bk v, w⟩_{Γk} = bΓk(v, w). We then consider the following quadratic model Qk : V(Γk) → R of J around Ωk

\[
Q_k(w) := J(\Omega_k) + dJ(\Omega_k; w) + \frac{1}{2} \langle B_k w, w \rangle. \tag{4}
\]

It is easy to check that the unique minimizer vk of Qk(w) satisfies

\[
v_k \in V(\Gamma_k): \qquad b_{\Gamma_k}(v_k, w) = -\langle g_k, w \rangle_{\Gamma_k} \quad \forall\, w \in V(\Gamma_k), \tag{5}
\]
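To make the construction of the descent direction concrete, the following is a minimal sketch, not the authors' ALBERTA implementation [10], of the discrete counterpart of (5) on a closed polygonal curve with piecewise linear elements; the bilinear form is the H¹(Γ) product with weights αb, βb used later in §3, and the array `g` is assumed to hold nodal values of the shape gradient.

```python
import numpy as np

def descent_velocity(x, g, alpha_b=1e-3, beta_b=1.0):
    """Solve b_Gamma(v, w) = -(g, w) for all P1 hat functions w on a closed
    polygonal curve with nodes x (N x 2); g holds nodal values of the shape
    gradient. Returns nodal values of the descent velocity v_k of (5)."""
    N = len(x)
    K = np.zeros((N, N))   # surface stiffness matrix
    M = np.zeros((N, N))   # consistent P1 mass matrix
    for e in range(N):     # element e = segment [x_e, x_{e+1}]
        i, j = e, (e + 1) % N
        h = np.linalg.norm(x[j] - x[i])
        K[np.ix_([i, j], [i, j])] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        M[np.ix_([i, j], [i, j])] += h * np.array([[2.0, 1.0], [1.0, 2.0]]) / 6.0
    B = alpha_b * K + beta_b * M   # discrete operator B_k of (4)-(5)
    return np.linalg.solve(B, -M @ g)
```

Since B is symmetric positive definite, the computed velocity is indeed a descent direction: its discrete pairing with g equals −vᵀBv < 0.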

with gk := g(Ωk); i.e. vk = −Bk⁻¹ gk. Moreover, vk is an admissible descent direction, i.e. dJ(Ωk; vk) < 0, because bΓk(·,·) is a scalar product. Once vk has been found, we need to determine a stepsize that is not too small and guarantees sufficient decrease of the functional J. To accomplish this goal we identify a range of admissible stepsizes by adapting the classical Armijo-Wolfe conditions in R^n: given 0 < α < β < 1, we seek a stepsize µ ∈ R+ satisfying

\[
J(\Omega_k + \mu v_k) \le J(\Omega_k) + \alpha \mu\, dJ(\Omega_k; v_k), \qquad
dJ(\Omega_k + \mu v_k; v_k) \ge \beta\, dJ(\Omega_k; v_k), \tag{6}
\]
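A stepsize satisfying both conditions in (6) can be found by a standard bracketing search; the sketch below is a generic strategy, not the paper's routine, and assumes callables `J(mu)` and `dJ(mu)` (hypothetical names) that return the functional value and the directional derivative on the deformed domain Ωk + µvk.

```python
def armijo_wolfe_step(J, dJ, mu_init=1.0, alpha=1e-4, beta=0.9, max_iter=60):
    """Bracketing search for a stepsize mu satisfying (6): sufficient decrease
    J(mu) <= J(0) + alpha*mu*dJ(0) and curvature dJ(mu) >= beta*dJ(0),
    with 0 < alpha < beta < 1 and dJ(0) < 0. J(mu), dJ(mu) evaluate the
    functional and its derivative in direction v_k on Omega_k + mu*v_k."""
    J0, dJ0 = J(0.0), dJ(0.0)
    assert dJ0 < 0.0, "v_k must be a descent direction"
    lo, hi = 0.0, None           # bracket for an admissible stepsize
    mu = mu_init
    for _ in range(max_iter):
        if J(mu) > J0 + alpha * mu * dJ0:
            hi = mu              # sufficient decrease fails: step too long
        elif dJ(mu) < beta * dJ0:
            lo = mu              # curvature condition fails: step too short
        else:
            return mu            # both conditions of (6) hold
        mu = 0.5 * (lo + hi) if hi is not None else 2.0 * mu
    raise RuntimeError("no admissible stepsize found")
```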

where ∂(Ωk + µvk) := {y ∈ R^d : y = x + µvk(x), x ∈ ∂Ωk} is the updated domain boundary and vk = vk νk is a normal vector field. We are now ready to introduce the Exact Sequential Quadratic Programming algorithm for solving the constrained optimization problem (1)–(2): given the initial domain Ω0, set k = 0 and iterate:

1. Compute uk = u(Ωk) by solving (1);
2. Compute the Riesz representation gk = g(Ωk) of (3);
3. Compute the search direction vk by solving (5);
4. Determine an admissible stepsize µk satisfying (6);
5. Update: Ωk+1 = Ωk + µk vk; k ← k + 1.

This algorithm is not feasible as it stands, because it requires the exact computation of the following quantities at each iteration: the solution uk to the state equation (1); the solution vk to the linear subproblem (5); the values of the functional J and of its derivative dJ in the line search routine. Replacing all of the above non-computable operations by finite approximations yields a practical algorithm.

Adaptive SQP Algorithm (ASQP). This method adjusts the accuracies of the various approximations relative to the energy decrease for each iteration. It is worth noticing that the adaptive procedure driving our algorithm has to deal with two distinct sources of error:

• PDE Error: this hinges on the approximation of (1) and of the values of the functional J and its derivative (3);
• Geometric Error: this relates to the approximation of (5), which yields the new domain.

Since it is wasteful to impose a PDE error finer than the expected geometric error, we have a natural mechanism to balance the computational effort. The ASQP algorithm is an iteration of the form

. . . → Ek → APPROXJ → SOLVE → RIESZ → DIRECTION → LINESEARCH → UPDATE → Ek+1 → . . .

where Ek = Ek(Ωk, Sk, Vk) is the total error incurred at step k, Sk = Sk(Ωk) is the finite element space defined on Ωk, and Vk = Vk(Γk) is the finite element space defined on the boundary Γk; a sketch of this loop is given below. To briefly describe each module along with the philosophy behind ASQP, we let Gk be an approximation to the shape derivative gk = g(Ωk) given by RIESZ, and Vk ∈ Vk(Γk) be an approximation to the exact solution vk ∈ V(Γk) of (5) given by DIRECTION. The discrepancy between vk and Vk leads to the geometric error. Upon using a first order Taylor expansion around Ωk, together with (5) for the exact velocity vk, we obtain

\[
\bigl| J(\Omega_k + \mu_k V_k) - J(\Omega_k + \mu_k v_k) \bigr|
\simeq \mu_k \bigl| dJ(\Omega_k; V_k - v_k) \bigr|
= \mu_k \bigl| b_{\Gamma_k}(v_k, V_k - v_k) \bigr|
\le \mu_k \|v_k\|_{\Gamma_k} \|v_k - V_k\|_{\Gamma_k}.
\]
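The following outline shows one possible organization of this iteration; the module names follow the diagram above, but the signatures, the `modules` dictionary and the stationarity test are illustrative assumptions, not the paper's interfaces.

```python
def asqp(Omega0, modules, theta=0.2, tol=1e-8, max_iter=500):
    """Outline of the ASQP loop. `modules` is a dictionary of callables
    standing in for APPROXJ, SOLVE, RIESZ, DIRECTION, LINESEARCH and UPDATE;
    their signatures are illustrative only."""
    gamma = 0.5 - theta * (1.0 + theta)          # PDE tolerance, gamma = 1/2 - delta
    Omega = Omega0
    for k in range(max_iter):
        U, Z = modules['SOLVE'](Omega)                    # state and adjoint on S_k
        G = modules['RIESZ'](Omega, U, Z)                 # discrete shape gradient G_k
        V, normV = modules['DIRECTION'](Omega, G, theta)  # velocity V_k within tolerance (7)
        Jk = modules['APPROXJ'](Omega, V, gamma)          # functional value within tolerance (9)
        mu = modules['LINESEARCH'](Omega, V, Jk)          # inexact Armijo-Wolfe stepsize (6)
        if mu * normV**2 < tol:                           # stationarity: mu_k ||V_k||^2 -> 0
            break
        Omega = modules['UPDATE'](Omega, mu, V)           # move the boundary, keep mesh quality
    return Omega
```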

Motivated by this expression, we now describe the modules APPROXJ and DIRECTION, in which adaptivity is carried out. These modules are driven by different adaptive strategies and corresponding different tolerances, say a PDE tolerance γ and a geometric tolerance θ. Their relative values allow for different distributions of the computational effort in dealing with the PDE and the geometry. The routine DIRECTION enriches/coarsens the space Vk to control the quality of the descent direction:

\[
\|V_k - v_k\|_{\Gamma_k} \le \theta \|V_k\|_{\Gamma_k}, \tag{7}
\]

where θ ≤ 1/2 guarantees that the angle between Vk and vk is ≤ π/6; in particular ‖vk‖_{Γk} ≤ (1 + θ)‖Vk‖_{Γk}. This implies a geometric error proportional to µk ‖Vk‖²_{Γk}, namely

\[
J(\Omega_k + \mu_k V_k) - J(\Omega_k + \mu_k v_k) \le \delta\, \mu_k \|V_k\|_{\Gamma_k}^2, \tag{8}
\]

with δ := θ(1 + θ) ≤ (3/2)θ. On the other hand, the module APPROXJ enriches/coarsens the space Sk to control the error in the approximate functional value Jk(Ωk + µk Vk) to the prescribed tolerance γ µk ‖Vk‖²_{Γk},

\[
\bigl| J(\Omega_k + \mu_k V_k) - J_k(\Omega_k + \mu_k V_k) \bigr| \le \gamma\, \mu_k \|V_k\|_{\Gamma_k}^2, \tag{9}
\]

where γ = 1/2 − δ ≥ δ prevents excessive numerical resolution relative to the geometric one. This is achieved within the module APPROXJ via the Dual Weighted Residual method (DWR) [2], tailored to the approximation of the functional value J.

The remaining modules perform the following tasks. The module SOLVE finds approximate solutions Uk ∈ Sk of (1) and Zk ∈ Sk of an adjoint equation (necessary for the computation of g(Ωk)), while RIESZ builds on Sk an approximation Gk to the shape derivative gk. Finally, the module LINESEARCH enforces an inexact version of (6).
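A possible realization of the enrich/coarsen loop inside DIRECTION is sketched below; `solve`, `estimate` and `adapt` are placeholders for a discrete solver of (5), the Laplace-Beltrami a posteriori estimator of §3, and a marking/adaptation routine, so this is a schematic sketch rather than the authors' module.

```python
import numpy as np

def direction(Gamma, g, theta, solve, estimate, adapt, max_loops=30):
    """Enrich/coarsen the boundary space until the a posteriori estimate of
    ||V_k - v_k||_{Gamma_k} drops below theta * ||V_k||_{Gamma_k}, cf. (7).
    solve(Gamma, g) -> (V, normV); estimate(Gamma, V, g) -> elementwise
    indicators; adapt(Gamma, eta) -> refined/coarsened mesh (placeholders)."""
    V, normV = None, 0.0
    for _ in range(max_loops):
        V, normV = solve(Gamma, g)                  # discrete solution of (5) on V_k
        eta = estimate(Gamma, V, g)                 # Laplace-Beltrami indicators (Sec. 3)
        if np.sqrt(np.sum(eta**2)) <= theta * normV:
            break                                   # (7) holds: accept V_k
        Gamma = adapt(Gamma, eta)                   # Doerfler-type marking and adaptation
    return Gamma, V
```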


Energy Decrease. The triangle inequality, in conjunction with conditions (8) and (9), yields

\[
J_k(\Omega_k + \mu_k V_k) - J(\Omega_k + \mu_k v_k) \le \tfrac{1}{2}\, \mu_k \|V_k\|_{\Gamma_k}^2, \tag{10}
\]

which is a bound on the local error incurred at step k. On the other hand, the exact energy decrease reads

\[
J(\Omega_k) - J(\Omega_k + \mu_k v_k) \approx -\mu_k\, dJ(\Omega_k; v_k) = \mu_k\, b_{\Gamma_k}(v_k, v_k) = \mu_k \|v_k\|_{\Gamma_k}^2 \ge (1 - \theta)^2 \mu_k \|V_k\|_{\Gamma_k}^2. \tag{11}
\]
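For completeness, here is a short derivation, our reconstruction rather than a statement from the paper, of how (10) and (11) combine (up to the linearization used in (11)):

\begin{align*}
J_k(\Omega_k + \mu_k V_k)
  &= \bigl[ J_k(\Omega_k + \mu_k V_k) - J(\Omega_k + \mu_k v_k) \bigr] + J(\Omega_k + \mu_k v_k) \\
  &\le \tfrac{1}{2}\,\mu_k \|V_k\|_{\Gamma_k}^2 + J(\Omega_k) - (1-\theta)^2 \mu_k \|V_k\|_{\Gamma_k}^2
   = J(\Omega_k) - \Bigl( (1-\theta)^2 - \tfrac{1}{2} \Bigr)\,\mu_k \|V_k\|_{\Gamma_k}^2,
\end{align*}

so the right-hand side falls strictly below J(Ωk) precisely when (1 − θ)² > 1/2.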

Together, (10) and (11) lead to the further constraint (1 − θ)² > 1/2, i.e. θ < 1 − 1/√2 ≈ 0.29, to guarantee the energy decrease Jk(Ωk + µk Vk) < J(Ωk). If ASQP converges to a stationary point, i.e. µk ‖Vk‖²_{Γk} → 0 as k → ∞, then the routines DIRECTION and APPROXJ approximate the exact descent direction vk and the functional J(Ωk) increasingly better as k → ∞, as dictated by (7) and (9). In other words, this imposes a dynamic error tolerance and a progressive improvement in approximating Uk, Zk and Gk as k → ∞. This argument is a consistency check of ASQP. We observe that the tolerance in (9) is proportional to µk ‖Vk‖²_{Γk}, so the test is not very demanding for DWR in the early stages. We therefore expect coarse meshes at the beginning, and a combination of refinement and coarsening later as DWR detects geometric singularities, such as corners, and sorts out whether they are genuine to the problem or just due to lack of numerical resolution. This aspect of our approach is a novel paradigm in adaptivity and is documented in §3.

Prior Work. The idea of coupling FEM, a posteriori error estimators and optimal design error estimators to efficiently solve shape optimization problems is not new. The pioneering work [3] presents an iterative scheme, where the Zienkiewicz-Zhu error indicator and the L² norm of the shape gradient are both used at each iteration to improve the PDE error and the geometric error, respectively. However, the algorithm in [3] does not resort to any dynamically changing tolerance that would allow, as happens for ASQP, producing coarse meshes at the beginning of the iteration and a combination of geometric and PDE refinement/coarsening later on. Moreover, [3] does not distinguish between fake and genuine geometric singularities that may arise on the domain boundary during the iteration process, and does not allow the former to disappear. More recently, adaptive modules for the numerical approximation of PDEs have been employed by several authors [1, 12, 11] to improve the accuracy of the solution of shape optimization problems. However, in these papers the critical issue of linking the adaptive PDE approximation with an adaptive procedure for the numerical treatment of the domain geometry is absent. We address this linkage below.

2 Drag Minimization for Stokes Flow

Let Ω ⊂ R^d, d ≥ 2, be a bounded domain. Let u := u(Ω) and p := p(Ω) solve the Stokes problem

\[
-\operatorname{div} \mathbf{T}(u, p) = 0, \qquad \operatorname{div} u = 0, \qquad \text{in } \Omega, \tag{12}
\]

with Dirichlet boundary condition u = v∞ on Γin, u = 0 on Γs ∪ Γw, and traction-free boundary condition T(u, p) · n = 0 on Γout (see Figure 1). Hereafter, T(u, p) := 2νε(u) − pI is the stress tensor with ε(u) = (∇u + ∇uᵀ)/2, and v∞ = V∞ v̂∞, with v̂∞ being the unit vector directed as the incoming flow and V∞ a scalar function. The drag exerted by the fluid on the obstacle surrounded by Γs is given by the functional

\[
J(\Omega) = J(\Omega, u, p) := -\int_{\Gamma_s} \hat{v}_\infty \cdot \mathbf{T}(u, p)\, n \, d\Gamma. \tag{13}
\]
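As an illustration, here is a minimal quadrature sketch for evaluating (13) from discrete boundary data; the array names and shapes are assumptions, not part of the authors' code.

```python
import numpy as np

def drag(grad_u, p, normals, weights, v_inf_hat, nu):
    """Approximate J(Omega) = -int_{Gamma_s} v_inf_hat . T(u,p) n dGamma by
    quadrature on Gamma_s. grad_u: (Q, d, d) velocity gradients at quadrature
    points, p: (Q,) pressures, normals: (Q, d) unit outer normals, weights:
    (Q,) quadrature weights including the surface measure."""
    d = grad_u.shape[1]
    eps = 0.5 * (grad_u + np.swapaxes(grad_u, 1, 2))      # symmetric gradient eps(u)
    T = 2.0 * nu * eps - p[:, None, None] * np.eye(d)     # stress tensor T(u, p)
    traction = np.einsum('qij,qj->qi', T, normals)        # T(u, p) n
    return -np.sum(weights * (traction @ np.asarray(v_inf_hat)))
```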


We consider the following shape optimization problem: minimize J(Ω) over the set Uad of admissible configurations with given volume, obtained by perturbing only the boundary Γs of the obstacle [9].

Fig. 1 Initial (top) and final (bottom) configuration: Γs is the deformable part of ∂Ω, Γin the left-hand part, Γout the right-hand part and Γw the union of the upper and lower parts. The algorithm obtains the optimal "rugby ball" shape [9]. The mesh refinement takes place mostly around Γs, whereas in the rest of Ω the mesh is rather coarse: this is related to DWR mesh refinement (and coarsening) and the particular expression (13) of the cost functional J(Ω).

It is possible to prove [7] that, for all sufficiently smooth vector fields v which are non-zero only in a neighbourhood of Γs, the shape derivative of J(Ω) in the direction v is given by

\[
dJ(\Omega; v) = -2\nu \int_{\Gamma_s} \varepsilon(u) : \varepsilon(z)\, v \, d\Gamma, \tag{14}
\]

with v = v · n the normal velocity and z the solution to the adjoint problem

\[
-\operatorname{div} \mathbf{T}(z, q) = 0, \qquad \operatorname{div} z = 0, \qquad \text{in } \Omega, \tag{15}
\]

subject to Dirichlet boundary conditions z = −v̂∞ on Γs, z = 0 on Γw ∪ Γin, and traction-free condition T(z, q) · n = 0 on Γout.
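The shape gradient density entering (5), g(Ω) = −2ν ε(u):ε(z) on Γs, can be evaluated pointwise from the discrete state and adjoint; again the array layout below is an assumption.

```python
import numpy as np

def shape_gradient_density(grad_u, grad_z, nu):
    """Pointwise density g = -2 nu eps(u):eps(z) of the shape derivative (14),
    evaluated at boundary nodes or quadrature points on Gamma_s;
    grad_u and grad_z have shape (N, d, d)."""
    eps_u = 0.5 * (grad_u + np.swapaxes(grad_u, 1, 2))
    eps_z = 0.5 * (grad_z + np.swapaxes(grad_z, 1, 2))
    return -2.0 * nu * np.einsum('nij,nij->n', eps_u, eps_z)
```

In the notation of §1, these are the values from which the module RIESZ builds the approximation Gk to gk.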

3 Numerical Experiment: Optimal Shape for Drag Minimization

In this section we briefly describe key aspects of the implementation of ASQP for the successful realization of simulations. A full description of the algorithm can be found in [7]. The implementation of ASQP was done using the toolbox ALBERTA [10], and the graphics were produced with ParaView [6].

Adaptivity. Adaptivity is carried out inside the modules APPROXJ and DIRECTION. In the module APPROXJ, adaptivity is performed using the goal-oriented Dual Weighted Residual estimator (DWR), driven by the approximation of the boundary functional J(Ω) [2]. Briefly, the goal-oriented DWR estimator determines where to refine/coarsen the mesh in Ω in order to improve the functional approximation, without imposing a small error in the global energy norm over the whole domain (see Figures 1 and 3).
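Schematically, goal-oriented marking weights local residuals by dual (adjoint) information and selects elements for refinement and coarsening; the sketch below uses a Dörfler-type bulk criterion and an illustrative coarsening threshold, and is not the DWR implementation of [2].

```python
import numpy as np

def mark_dwr(residuals, dual_weights, refine_frac=0.8, coarsen_frac=0.02):
    """Goal-oriented marking: eta_T = |residual_T| * weight_T approximates the
    contribution of element T to the error in J(Omega). Mark the largest
    contributors for refinement (Doerfler-type bulk criterion) and the
    negligible ones for coarsening."""
    eta = np.abs(residuals) * dual_weights
    order = np.argsort(eta)[::-1]                         # largest first
    cumulative = np.cumsum(eta[order])
    n_ref = int(np.searchsorted(cumulative, refine_frac * cumulative[-1])) + 1
    refine = order[:n_ref]
    coarsen = order[eta[order] < coarsen_frac * eta.max()]
    return refine, coarsen
```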

Fig. 2 Dynamic tolerance for both the geometric and the PDE approximation: the adaptive SQP method produces coarse meshes at the beginning and a combination of geometric and PDE refinement/coarsening later on (see Figure 3). The zig-zag behaviour of the tolerance is due to the combination of refinement/coarsening: coarsening allows the tolerance to increase (see Table 1).

Iteration DWR-ref DWR-coars LB-ref LB-coars

0 1 81 88 128 150 153 160 161 163 173 175 2000 218 523 2428 2112 777 697 1284 1096 174 176 75 44 4 21 5 25 523 29 65 88 22 11 14 49 44

177 179 181 189 196 7625 178 180 2312 56 819 191 35 4 54 104 11

200 202 204 213 222 3786 4566 32372 1051 81657 1994 5355 3305 2234 1379 1002 0 924 57 113 1000

Table 1 Number of marked elements for refinement/coarsening according to Laplace-Beltrami (LB) and Dual Weighted Residual (DWR). The adaptive SQP method, with dynamically changing tolerance, alternates refinement/coarsening for LB and DWR. After the first two iterations, where refinement/coarsening takes place, the algorithm performs 80 iterations of optimization without changing the numerical resolution. Later on, the tolerance is modified by a sequence of DWR and LB refinement/coarsening.

The scalar velocity vk obeys (5) with V(Γk) := H¹(Γk) and the bilinear form

\[
b_{\Gamma_k}(v, w) := \int_{\Gamma_k} \alpha_b\, \nabla_\Gamma v \cdot \nabla_\Gamma w + \beta_b\, v w,
\]

where ∇Γ denotes the surface gradient, and αb = 10⁻³, βb = 1. The module DIRECTION enforces the bound (7) on ‖Vk − vk‖_{Γk} using the a posteriori error estimators for the Laplace-Beltrami (LB) operator ∆Γ developed in [8]. They are of residual type and estimate the energy error when solving ∆Γ u = f on a known surface Γ. They consist of the usual PDE estimator and a new geometric estimator that accounts for the approximation of Γ by piecewise polynomials. Since Γ is unknown in this context, we mimic the W^{1,∞} error between true and discrete surface by properly scaled jumps of the normal vector to the discrete surface. More precisely, the error indicator associated to element T of the k-th surface Γk is given by

\[
\eta_{\Gamma_k}(T)^2 := h_T^2 \|R(V_k)\|_{L^2(T)}^2 + h_T \|J(V_k)\|_{L^2(\partial T)}^2 + \max_{S \subset \partial T} J_{n,S}^2\, \|\nabla_\Gamma V_k\|_{L^2(T)}^2,
\]

where R(Vk) = −αb ∆Γ Vk + βb Vk − gk is the so-called interior residual, J(Vk) is the jump residual, namely the jump of ∇Γ Vk normal to the edge, and Jn,S is the jump of the unit normal vector (to the surface) across the interelement side S (see Figure 2 and Table 1).
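As a rough curve (d = 2) analogue, the indicator can be assembled as follows; constants and scalings are indicative only, and this is not the actual implementation of the estimator from [8].

```python
import numpy as np

def lb_indicators(x, V, g, beta_b=1.0):
    """Curve analogue of eta_{Gamma_k}(T): elements are segments of a closed
    polygon with nodes x (N, 2); V and g are P1 nodal values of the velocity
    and of the shape gradient."""
    N = len(x)
    nxt = lambda e: (e + 1) % N
    h = np.array([np.linalg.norm(x[nxt(e)] - x[e]) for e in range(N)])
    t = np.array([(x[nxt(e)] - x[e]) / h[e] for e in range(N)])   # unit tangents
    n = np.stack([t[:, 1], -t[:, 0]], axis=1)                     # unit normals
    dV = np.array([(V[nxt(e)] - V[e]) / h[e] for e in range(N)])  # surface gradient
    eta2 = np.zeros(N)
    for e in range(N):
        em, ep = (e - 1) % N, nxt(e)
        # interior residual at the midpoint (the LB of a P1 function vanishes on segments)
        R = beta_b * 0.5 * (V[e] + V[ep]) - 0.5 * (g[e] + g[ep])
        # jump of the surface gradient at the two endpoints of the segment
        J2 = (dV[e] - dV[em])**2 + (dV[ep] - dV[e])**2
        # geometric term: largest jump of the discrete normal at the endpoints
        Jn = max(np.linalg.norm(n[e] - n[em]), np.linalg.norm(n[ep] - n[e]))
        eta2[e] = h[e]**3 * R**2 + h[e] * J2 + Jn**2 * (h[e] * dV[e]**2)
    return eta2
```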

Fig. 3 Combination of DWR and LB refinement/coarsening. Evolution of the initial configuration Γs : iterations 0, 44, 184 and 217. The initially refined corners (top) are subsequently smoothed out and coarsened (see Figure 4). The new corners of the rugby ball, instead, are genuine singularities and are preserved and further refined by ASQP (bottom).

Geometrically Consistent Mesh Modification (GCMM). The presence of corners (or kinks) on the deformable boundary Γk is usually problematic. First, the scalar product bΓk(·,·) of (5) includes an LB regularization term (αb > 0) which stabilizes the boundary update but cannot remove kinks, because Vk is smooth (see (14)). Second, DWR regards kinks as true singularities and tries to refine them accordingly. The combination of these two effects leads to numerical artifacts (ear formation) and to a halt of the computations.

Fig. 4 Detection of genuine geometric singularities. Evolution of the initial upper-left corner of Γs (see top of Figures 1 and 3): snapshots of iterations 0, 1, 160 and 190. The adaptive SQP method is able to sort out whether geometric singularities are genuine to the problem or just due to lack of numerical resolution and to coarsen overrefined regions of the computational grid.

The GCMM method of [4] circumvents this issue; see Figure 4. Whenever the boundary mesh Γk is to be modified (refined, coarsened, or smoothed out), the discrete curvature Hk of Γk is interpolated and the new position Xk of the free boundary is determined from the fundamental geometric identity −∆Γk Xk = Hk. This preserves geometric consistency, which is violated by simply interpolating Γk, as well as accuracy [4]. In addition, this computation rounds fake kinks (due to numerics) and preserves genuine kinks (see Figure 5).
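A minimal sketch of the identity underlying GCMM on a closed polygonal curve: the discrete curvature is the (lumped) weak Laplace-Beltrami of the node positions. This is an illustration of the identity, not the GCMM code of [4].

```python
import numpy as np

def discrete_curvature(x):
    """Nodal curvature vector H_k of a closed polygon with nodes x (N, 2),
    from the weak form of -Delta_Gamma X = H: K X = M H with the P1 surface
    stiffness matrix K and a lumped mass matrix M."""
    N = len(x)
    KX = np.zeros_like(x, dtype=float)
    m = np.zeros(N)                       # lumped mass: half of the adjacent lengths
    for e in range(N):
        i, j = e, (e + 1) % N
        h = np.linalg.norm(x[j] - x[i])
        KX[i] += (x[i] - x[j]) / h        # rows i, j of K applied to the positions X
        KX[j] += (x[j] - x[i]) / h
        m[i] += 0.5 * h
        m[j] += 0.5 * h
    return KX / m[:, None]
```

When Γk is refined, coarsened, or smoothed, GCMM interpolates this curvature (rather than the positions) onto the modified mesh and recovers the new node positions Xk from the same linear relation −∆Γk Xk = Hk [4].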


Fig. 5 Detection of genuine geometric singularities. Zoom on the evolution of the left-hand part of the initial configuration Γs (see top of Figure 1 and bottom of Figure 3): snapshots of iterations 140, 160, 180 and 220. The adaptive SQP method is able to recognize the corner of the rugby ball as a genuine singularity of the problem and to refine the mesh (combined use of LB and DWR error estimates) to improve both the PDE and the geometric approximation.

Mesh Quality. The mesh is evolved by a prescribed discrete velocity of its boundary. To avoid mesh deterioration, a mechanism to maintain good quality must be provided. Remeshing at each iteration is expensive and destroys the binary hierarchical data structure used for refinements and coarsenings [10]. Our approach is to use an optimization routine that works on stars and selectively relocates the center node so as to improve the star quality and approximately preserve the local mesh size (see the sketch below). It does not change the mesh topology, so it is compatible with the binary data structure. In each star we minimize the SSU (Simultaneous Smoothing and Untangling) cost functional proposed in [14]. When optimization alone is not sufficient to maintain good quality we remesh the domain. We refer to [7] for the effect of remeshing.

Time Step. Control of the time step is required to satisfy the Armijo conditions (6) as well as to avoid node crossing when evolving the mesh [5]. The latter constraint sometimes dictates the time step, especially when the mesh is fine. We have found that remeshing ameliorates this issue by drastically improving the mesh quality. In [7] we allow remeshing inside the Armijo condition.

Constraints. The area constraint that defines the class of admissible domains is enforced via a Lagrange multiplier. The algorithm, described in [5], guarantees volume conservation to machine precision in each time iteration and is well suited to be utilized inside the Armijo condition loop.
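The following is a sketch of the star-based relocation idea mentioned above; the quality measure is a simple mean-ratio stand-in for the SSU functional of [14], and the random candidate search is only illustrative.

```python
import numpy as np

def relocate_node(center, star_vertices, radius_frac=0.25, samples=64, seed=0):
    """Move the center node of a star to improve the worst triangle quality,
    without changing the mesh topology. star_vertices: (M, 2) boundary
    vertices of the star, ordered counterclockwise."""
    def worst_quality(p):
        q = []
        M = len(star_vertices)
        for i in range(M):
            a, b = star_vertices[i], star_vertices[(i + 1) % M]
            twice_area = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
            edges2 = np.sum((b - a)**2) + np.sum((p - a)**2) + np.sum((p - b)**2)
            q.append(2.0 * np.sqrt(3.0) * twice_area / edges2)   # 1 for an equilateral triangle
        return min(q)                     # negative if the star is tangled
    rng = np.random.default_rng(seed)
    h = radius_frac * np.mean(np.linalg.norm(star_vertices - center, axis=1))
    best_p, best_q = np.asarray(center, float), worst_quality(center)
    for p in best_p + h * rng.standard_normal((samples, 2)):
        if worst_quality(p) > best_q:
            best_p, best_q = p, worst_quality(p)
    return best_p
```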

References

1. P. Alotto, P. Girdinio, P. Molfino and M. Nervi, Mesh adaption and optimization techniques in magnet design, IEEE Trans. Magnetics, 32(4): 2954–2957, 1996.
2. W. Bangerth and R. Rannacher, Adaptive Finite Element Methods for Differential Equations, Birkhäuser, 2003.
3. N.V. Banichuk, A. Falk and E. Stein, Mesh refinement for shape optimization, Structural Optim., 9: 46–51, 1995.
4. A. Bonito, R.H. Nochetto and M.S. Pauletti, Geometrically consistent mesh modification, (submitted).
5. A. Bonito, R.H. Nochetto and M.S. Pauletti, Parametric FEM for geometric biomembranes, (submitted).
6. A. Henderson, ParaView Guide, A Parallel Visualization Application, Kitware Inc., 2007.
7. P. Morin, R.H. Nochetto, M.S. Pauletti and M. Verani, AFEM for shape optimization, (in preparation).
8. K. Mekchay, P. Morin and R.H. Nochetto, AFEM for the Laplace-Beltrami operator on graphs: Design and conditional contraction property, (submitted).
9. O. Pironneau, On optimum profiles in Stokes flow, J. Fluid Mech., 59: 117–128, 1973.
10. A. Schmidt and K.G. Siebert, Design of Adaptive Finite Element Software. The Finite Element Toolbox ALBERTA, Lecture Notes in Computational Science and Engineering 42, Springer, Berlin, 2005.
11. A. Schleupen, K. Maute and E. Ramm, Adaptive FE-procedures in shape optimization, Struct. Multidisc. Optim., 19: 282–302, 2000.
12. J.R. Roche, Adaptive method for shape optimization, 6th World Congress of Structural and Multidisciplinary Optimization, Rio de Janeiro, 2005.
13. J. Sokołowski and J.-P. Zolésio, Introduction to Shape Optimization, Springer-Verlag, Berlin, 1992.
14. J.M. Escobar, E. Rodriguez, R. Montenegro, G. Montero and J.M. Gonzalez-Yuste, Simultaneous untangling and smoothing of tetrahedral meshes, Comput. Methods Appl. Mech. Engrg., 192: 2775–2787, 2003.