JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 112, No. 1, pp. 167–186, January 2002

Gradient Maximum Principle for Minima

C. MARICONDA¹ AND G. TREU²

Communicated by L. D. Berkovitz

Abstract. We state a maximum principle for the gradient of the minima of integral functionals

I(u) = ∫_Ω [f(∇u) + g(u)] dx,   on ū + W_0^{1,1}(Ω),

just assuming that I is strictly convex. We do not require that f, g be smooth, nor that they satisfy growth conditions. As an application, we prove a Lipschitz regularity result for constrained minima.

Key Words. Comparison principle, gradient maximum principle, Lipschitz regularity, maximum principle.

1. Introduction

Most of the results on the regularity of the minima of integral functionals have as a starting point the Euler equation of the functional under consideration. This requires the Lagrangian to be smooth and to satisfy, together with its derivatives, some growth conditions. Giaquinta and Giusti (Ref. 1) and, more recently, Cellina (Refs. 2, 3) have studied the regularity of minima by working directly with the functional instead of using the Euler equation. A classical tool to give an estimate of the gradient of regular solutions to quasilinear elliptic equations is the maximum principle for the gradient (Ref. 4, Theorem 15.1). This can be proved by showing that the derivatives of the solutions satisfy an elliptic equation, obtained by differentiating the original one, and by using the maximum principle for subsolutions/supersolutions. In particular, this result can be applied to the regular minima of integral functionals that satisfy the Euler equation.

¹ Associate Professor, Faculty of Engineering, University of Padova, Padova, Italy.
² Associate Professor, Faculty of Statistical Sciences, University of Padova, Padova, Italy.


In the case where the Lagrangian is nonsmooth and does not satisfy any growth assumption, a maximum principle for the gradient still holds for the minima of functionals of the gradient among the Lipschitz functions (with prescribed boundary data); a survey on the subject is given in Ref. 5. In this situation, the proof is not based on the study of the associated Euler equation, but exploits just the minimality property. In Section 4 of this paper, we extend the techniques that are involved in the latter result to the minima of integral functionals I of the form

I(u) = ∫_Ω [f(∇u) + g(u)] dx

among the functions u in ū + W_0^{1,1}(Ω). We prove that, if I is strictly convex and τ is in ℝ^n, then each minimum w of I satisfies

ess sup_{Ω ∩ (−τ+Ω)} [w(x + τ) − w(x)] ≤ sup_{∂(Ω ∩ (−τ+Ω))} [w(x + τ) − w(x)]^+,

where the latter supremum is intended in the sense of Sobolev functions, without requiring that f, g be smooth or that they satisfy growth conditions. We look at the variations of the form w(x + τ) − w(x) as the difference of two minima of the same functional; we then apply a maximum principle to relate these expressions to the boundary data. Here, neither w(x) nor w(x + τ) is known to be a subsolution or a supersolution of a partial differential equation: the classical maximum principle (Ref. 4, Theorem 10.9) cannot be applied. This motivates a comparison principle for subminima/superminima for the wider class of strictly convex functionals of the form

I(u) = ∫_Ω L(x, ∇u) dx.

In Section 5, we apply the main result to establish that the minima of a strictly convex functional I that lie between two Lipschitz functions (with the same boundary data) are Lipschitz. As a consequence, we prove that the minima of I whose gradient belongs to a prescribed convex set are the minima of I in the set of functions that lie between two suitable functions, extending (in the autonomous case) a result of Ref. 6. Some applications of this result for constrained minima to the study of the existence and regularity of the minima of I will be presented in a forthcoming paper (Ref. 7).

2. Notation

If A is an open bounded subset of ℝ^n, n ≥ 1, we denote by Ā its closure and by ∂A its boundary. For 0 ≤ k ≤ +∞, we denote by C^k(A) [resp. C_c^k(A)] the space of the k-times continuously differentiable functions in A [resp. with compact support in A]. Lip(A) is the space of Lipschitz functions in A, which we consider to be extended to Ā; we recall that Lipschitz functions are differentiable almost everywhere. For u in L^∞(A), we denote by ess sup_A u the essential supremum of u in A and by ||u||_{L^∞(A)} the usual norm of u in L^∞(A). If u is in W^{1,r}(A), the weak derivative of u with respect to the i-th variable is denoted by u_{x_i} and its gradient by ∇u; the directional derivative of u with respect to a vector τ ∈ ℝ^n is D_τ u. If u = (u_1, ..., u_n) is in W^{1,r}(A; ℝ^n), its divergence is denoted by div u. For ū in Lip(A), we set Lip(A, ū) = {u ∈ Lip(A): u = ū on ∂A}. If L: A × ℝ × ℝ^n → ℝ ∪ {+∞}, (x, z, p) ↦ L(x, z, p), is differentiable with respect to z [resp. to p = (p_1, ..., p_n)], we denote by L_z [resp. L_{p_i}, i = 1, ..., n] the partial derivative of L with respect to z [resp. p_i] and by L_x [resp. L_p] the gradient of L with respect to x [resp. p]. In the case where L(x, z, p) is convex in z [resp. p], ∂_z L(x̄, z̄, p̄) [resp. ∂_p L(x̄, z̄, p̄)] is the subdifferential of the map z ↦ L(x̄, z, p̄) at z̄ [resp. of p ↦ L(x̄, z̄, p) at p̄] in the usual sense of convex analysis. Given two vectors a and b in ℝ^n, we denote by a · b their usual scalar product in ℝ^n and by |a| the Euclidean norm of a.

In what follows, Ω is an open bounded subset of ℝ^n and L is a function L: Ω × ℝ × ℝ^n → ℝ ∪ {+∞}, (x, z, p) ↦ L(x, z, p), such that x ↦ L(x, z(x), p(x)) is measurable for every measurable z: Ω → ℝ and p: Ω → ℝ^n; this condition is fulfilled if, for instance, L is a normal integrand (see Ref. 8). We define the functional I on W^{1,1}(Ω) by

∀u ∈ W^{1,1}(Ω),   I(u) = ∫_Ω L(x, u(x), ∇u(x)) dx.

We always assume that there exist a in ℝ and b in L^1(Ω) such that

L(x, z, p) ≥ a|p| + b(x),   for every (x, z, p);

this implies that

I(u) > −∞,   for every u in W^{1,1}(Ω).

3. Subminima/Superminima and Inequalities on ∂Ω

We recall here the basic definitions and results that we will use in the next sections of the paper. For u, v in W^{1,1}(Ω), we set

u∧v = min{u, v},   u∨v = max{u, v},

and the positive part of u is u^+ = u∨0; we recall that these functions still belong to W^{1,1}(Ω). Following Ref. 4, Section 8.1, we first give a precise meaning to inequalities on the boundary of a bounded set for Sobolev functions.

Definition 3.1. For u in W^{1,1}(Ω), we say that u ≤ 0 on ∂Ω if u^+ ∈ W_0^{1,1}(Ω). For u, v in W^{1,1}(Ω), by u ≤ v on ∂Ω we mean that u − v ≤ 0 on ∂Ω.

Some of the well-known properties that we list here will be used in the sequel.

Proposition 3.1. Let u, v ∈ W^{1,1}(Ω). The following statements hold:

(i) if u ∈ C^0(Ω̄) and u(x) ≤ 0 for every x in ∂Ω, then u ≤ 0 on ∂Ω;
(ii) if u ≤ v on ∂Ω, then u∧v ∈ u + W_0^{1,1}(Ω) and u∨v ∈ v + W_0^{1,1}(Ω);
(iii) if u ≤ 0 a.e. on Ω, then u ≤ 0 on ∂A for every open subset A of Ω;
(iv) if (ψ_n)_{n∈ℕ} is a sequence in Lip(Ω) converging to u in W^{1,1}(Ω) such that ψ_n(x) ≤ 0 for every x in ∂Ω and n in ℕ, then u ≤ 0 on ∂Ω.

Proof.
(i) It is straightforward that u^+ is continuous and equal to 0 on ∂Ω; hence u^+ belongs to W_0^{1,1}(Ω).
(ii) Since u − v ≤ 0 on ∂Ω, then (u − v)^+ belongs to W_0^{1,1}(Ω); the identities

u∧v − u = −(u − v)^+,   u∨v − v = (u − v)^+

yield the claim.
(iii) If u ≤ 0 a.e. on Ω, then u^+ is equal to 0 a.e. and thus u^+ ∈ W_0^{1,1}(A) for every open subset A of Ω.
(iv) Let (ψ_n)_{n∈ℕ} be a sequence in Lip(Ω) converging to u in W^{1,1}(Ω) and such that

ψ_n(x) ≤ 0,   for every x in ∂Ω.

Then, ψ_n^+ converges to u^+ in W^{1,1}(Ω) and

ψ_n^+(x) = 0,   for every x in ∂Ω.

By Ref. 9, Theorem 9.17, the functions ψ_n^+ belong to W_0^{1,1}(Ω), proving that u^+ belongs to W_0^{1,1}(Ω). ∎


Definition 3.2. A convex subset X of W^{1,1}(Ω) is said to be a convex sublattice if

∀u, v ∈ X,   u∨v ∈ X,   u∧v ∈ X.

Example 3.1. Let ū, l^1, l^2 ∈ Lip(Ω), and let C be a convex subset of ℝ^n. The function spaces Lip(Ω), Lip(Ω, ū), W^{1,q}(Ω), ū + W_0^{1,q}(Ω) and the sets {u ∈ W^{1,1}(Ω): ∇u ∈ C a.e.}, {u ∈ W^{1,1}(Ω): l^1 ≤ u ≤ l^2 a.e.} are convex sublattices of W^{1,1}(Ω).

Definition 3.3. Let X be a convex sublattice of W^{1,1}(Ω). A function u in W^{1,1}(Ω) is said to be a subminimum [resp. superminimum] for I in X if u belongs to X, I(u) is finite, and

I(u) ≤ I(v),   for every v in X ∩ (u + W_0^{1,1}(Ω)) such that v ≤ u [resp. v ≥ u] a.e. on Ω.

Moreover, the function u is a minimum for I in X whenever

I(u) ≤ I(v),   for every v in X ∩ (u + W_0^{1,1}(Ω)).

Remark 3.1. The notion of subminimum/superminimum was introduced by Giusti in Ref. 5 for functionals depending only on the gradient. We introduce the definition of subminimum/superminimum in a convex sublattice, since we consider the minima of the functional I in different sets of functions with given boundary data. We point out that, following our definition, a function u is a minimum for I in W^{1,q}(Ω) if

I(u) ≤ I(v),   for every v in u + W_0^{1,q}(Ω).

Definition 3.4. We say that u ∈ W^{1,q}(Ω), 1 ≤ q ≤ +∞, is a subsolution [resp. supersolution] of the weak Euler equation associated to I in W^{1,q}(Ω) if there exist k in L^{q′}(Ω, ℝ^n) and h in L^{q′}(Ω) [q′ = q/(q−1) is the conjugate exponent of q] such that k(x) ∈ ∂_p L(x, u(x), ∇u(x)) a.e. and h(x) ∈ ∂_z L(x, u(x), ∇u(x)) a.e., satisfying

∀η ∈ W_0^{1,q}(Ω), η ≥ 0 a.e.,   ∫_Ω [k · ∇η + hη] dx ≤ 0 [resp. ≥ 0].

Remark 3.2. When L is of class C^1, u is a subsolution of the (weak) Euler equation

div L_p(x, v, ∇v) − L_z(x, v, ∇v) = 0

if div L_p(x, u, ∇u) − L_z(x, u, ∇u) ≥ 0; i.e.,

∀η ∈ W_0^{1,q}(Ω), η ≥ 0 a.e.,   ∫_Ω [L_p(x, u, ∇u) · ∇η + L_z(x, u, ∇u)η] dx ≤ 0

(the change of sign comes from an integration by parts: for η ∈ C_c^∞(Ω), ∫_Ω [div L_p − L_z]η dx = −∫_Ω [L_p · ∇η + L_z η] dx).

We show now that the notion of subminimum generalizes that of subsolution.

Proposition 3.2.
(i) Assume that u is a subsolution [resp. supersolution] of the Euler equation associated to I in W^{1,q}(Ω). Then, u is a subminimum [resp. superminimum] for I in W^{1,q}(Ω).
(ii) Assume that L is of class C^1 and that there exists C > 0 such that

|L(x, z, p)| ≤ C(1 + |z|^q + |p|^q),   (1a)

|L_z(x, z, p)| + |L_p(x, z, p)| ≤ C(1 + |z|^{q−1} + |p|^{q−1}),   (1b)

and let u be a subminimum [resp. superminimum] for I in W^{1,q}(Ω). Then, u is a subsolution [resp. supersolution] of the Euler equation div L_p(x, v, ∇v) − L_z(x, v, ∇v) = 0.

Proof.
(i) Let u be a subsolution to the Euler equation associated to I, and let v in u + W_0^{1,q}(Ω) be such that v ≤ u a.e. on Ω. Then, v = u − η for some positive η in W_0^{1,q}(Ω) and thus, if k and h are as in Definition 3.4, by convexity we obtain

I(v) − I(u) = I(u − η) − I(u) ≥ ∫_Ω [k · (−∇η) + h(−η)] dx ≥ 0,

showing that u is a subminimum for I in W^{1,q}(Ω).
(ii) Let u be a subminimum for I in W^{1,q}(Ω), and let ϕ in C_c^∞(Ω) be such that ϕ ≥ 0: for every negative λ, the quotient [I(u + λϕ) − I(u)]/λ is negative. As in the standard proofs of the validity of the Euler equation for minima (see for instance Ref. 10, Section 8.2.3), the growth assumptions (1) imply that the function x ↦ L_p(x, u(x), ∇u(x)) belongs to L^{q′}(Ω, ℝ^n), that the function x ↦ L_z(x, u(x), ∇u(x)) belongs to L^{q′}(Ω), and that

lim_{λ→0} ∫_Ω {[L(x, u + λϕ, ∇u + λ∇ϕ) − L(x, u, ∇u)]/λ} dx = ∫_Ω [L_p(x, u, ∇u) · ∇ϕ + L_z(x, u, ∇u)ϕ] dx,

proving that the latter integral in the above formula is negative; a classical density argument yields the conclusion. ∎
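To make Proposition 3.2(i) concrete, the following small numerical sketch (not part of the paper) checks the subminimum inequality on a model problem; the data Ω = ]0, 1[, L(x, z, p) = p² + z² and u(x) = x² − x are my own choices. Here u″ − u ≥ 0 on Ω, so u is a subsolution of the Euler equation, and the proposition predicts I(u) ≤ I(u − η) for every η ≥ 0 vanishing on ∂Ω.

# Numerical sketch (assumed data, not from the paper): Proposition 3.2(i) on a toy example.
# Omega = (0,1), L(x,z,p) = p^2 + z^2, u(x) = x^2 - x, so u'' - u = 2 - (x^2 - x) >= 0
# and u is a subsolution; the subminimum property of Definition 3.3 then requires
# I(u) <= I(u - eta) for every eta >= 0 with eta = 0 on the boundary.
import numpy as np

def I(u, h):
    """Discretized functional I(u) = int_0^1 [u'^2 + u^2] dx (forward differences)."""
    du = np.diff(u) / h
    return np.sum(du**2) * h + np.sum(u[:-1]**2) * h

n = 200
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
u = x**2 - x

rng = np.random.default_rng(0)
for _ in range(5):
    eta = rng.random(n + 1) * np.sin(np.pi * x)   # eta >= 0, eta(0) = eta(1) = 0
    assert I(u, h) <= I(u - eta, h) + 1e-9        # subminimum inequality
print("I(u) =", I(u, h))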

4. Comparison and Maximum Principles for Subminima/Superminima

Most of the results of this section generalize those obtained for the minima of integral functionals of the gradient among Lipschitz functions. The basic ideas recall the translation method used in the proof of Lemma 10.0 of Ref. 11.

In what follows, we say that the functional I is strictly convex if it is strictly convex in its effective domain, i.e., if

I(λu + (1−λ)v) < λI(u) + (1−λ)I(v),

for every 0 < λ < 1 and every u, v in W^{1,1}(Ω) with u ≠ v such that I(u) and I(v) are finite. We point out that I is strictly convex if, for instance, L(x, z, p) = f(x, p) + g(x, z) and either f is strictly convex in p or g is strictly convex in z.

Theorem 4.1. Comparison Principle for Subminima/Superminima. Let X be a convex sublattice of W^{1,1}(Ω), and let the functional I be strictly convex. Let u be a subminimum, and let v be a superminimum for I in X such that u ≤ v on ∂Ω. Then, u ≤ v a.e. on Ω.

Proof. Since by Proposition 3.1(ii) the function u∧v belongs to (u + W_0^{1,1}(Ω)) ∩ X, and since u∧v ≤ u a.e. on Ω, then

I(u) ≤ I(u∧v),

so that, denoting by {u > v} [resp. {u ≤ v}] the set {x ∈ Ω: u(x) > v(x)} [resp. {x ∈ Ω: u(x) ≤ v(x)}], we obtain

∫_{u≤v} L(x, u, ∇u) dx + ∫_{u>v} L(x, u, ∇u) dx ≤ ∫_{u≤v} L(x, u, ∇u) dx + ∫_{u>v} L(x, v, ∇v) dx,

and therefore,

∫_{u>v} L(x, u, ∇u) dx ≤ ∫_{u>v} L(x, v, ∇v) dx.

Analogously, u∨v belongs to (v + W_0^{1,1}(Ω)) ∩ X and u∨v ≥ v a.e. on Ω; it follows that I(v) ≤ I(u∨v), whence

∫_{u>v} L(x, v, ∇v) dx ≤ ∫_{u>v} L(x, u, ∇u) dx;

therefore, we obtain the equality

∫_{u>v} L(x, v, ∇v) dx = ∫_{u>v} L(x, u, ∇u) dx.   (2)

If v < u on a nonnegligible set, then u∨v ≠ v; by strict convexity, we obtain

I((1/2)(u∨v) + (1/2)v) < (1/2)I(u∨v) + (1/2)I(v).   (3)

Again by Proposition 3.1(ii), the function u∨v belongs to v + W_0^{1,1}(Ω); thus, (1/2)(u∨v) + (1/2)v is in (v + W_0^{1,1}(Ω)) ∩ X and is greater than v a.e. on Ω. It follows that I(v) ≤ I((1/2)(u∨v) + (1/2)v), so that by (3) we obtain

I(v) < (1/2)I(u∨v) + (1/2)I(v),

or equivalently,

∫_{u>v} L(x, v, ∇v) dx < ∫_{u>v} L(x, u, ∇u) dx,

contradicting (2). It follows that u ≤ v a.e. on Ω. ∎

Remark 4.1. Proposition 3.2 shows that the subsolutions/supersolutions of the Euler equation associated to I in W^{1,q}(Ω) are subminima/superminima for I in W^{1,q}(Ω). Therefore, the conclusion of Theorem 4.1 still holds when u is a subsolution and v is a supersolution; thus, in the case where I is strictly convex, it generalizes the classical comparison principle (Ref. 4, Theorem 10.7). In this case, when u or v is a minimum, the conclusion of Theorem 4.1 can be obtained also under some alternative assumptions on the Lagrangian (Ref. 12).

In what follows, we will assume that the Lagrangian L is the sum of two functions, more precisely that L(x, z, p) = f(x, p) + g(x, z), and that X is a convex sublattice of W^{1,1}(Ω). This is motivated by the following lemma, which is a crucial step in the proof of the next weak maximum principle.

Lemma 4.1. Let L(x, z, p) = f(x, p) + g(x, z), and assume that the function z ↦ g(x, z) is convex for almost every x in Ω. Let X be a convex sublattice of W^{1,1}(Ω), and let v be a superminimum for I in X. Then, for every real positive α, the function v + α is a superminimum for I in α + X.

Proof. Let ω in α + X be such that v + α ≤ ω a.e. on Ω and ω ∈ v + α + W_0^{1,1}(Ω). Then, v ≤ ω − α a.e., and ω − α ∈ (v + W_0^{1,1}(Ω)) ∩ X. Since v is a superminimum for I in X and ∇(ω − α) = ∇ω, then

I(v) = ∫_Ω [f(x, ∇v) + g(x, v)] dx ≤ I(ω − α) = ∫_Ω [f(x, ∇ω) + g(x, ω − α)] dx;

therefore, since ∇(v + α) = ∇v, we have

0 ≤ ∫_Ω f(x, ∇ω) dx − ∫_Ω f(x, ∇(v + α)) dx + ∫_Ω g(x, ω − α) dx − ∫_Ω g(x, v) dx.   (4)

Now, for α > 0, the convexity assumption on g yields

[g(x, v + α) − g(x, v)]/α ≤ [g(x, ω) − g(x, ω − α)]/α,

so that

g(x, ω − α) − g(x, v) ≤ g(x, ω) − g(x, v + α).

The inequality (4) then implies

0 ≤ ∫_Ω f(x, ∇ω) dx − ∫_Ω f(x, ∇(v + α)) dx + ∫_Ω g(x, ω) dx − ∫_Ω g(x, v + α) dx = I(ω) − I(v + α),

proving the claim. ∎

Remark 4.2. The last result holds without any convexity assumption on f.

Example 4.1. The conclusion of Lemma 4.1 does not hold in general if α is strictly negative. In fact, let

Ω = ]0, 1[,   f(p) = 0,   g(z) = z².

Then, the function v(x) = x is a supersolution, but v − 1 is not a supersolution, of the equation

D_x f_p(w′) − g_z(w) = 0;

indeed, the supersolution condition of Definition 3.4 here reduces to ∫_Ω 2vη dx ≥ 0 for every η ≥ 0, which holds for v(x) = x ≥ 0 but fails for v(x) − 1 = x − 1 < 0.

Definition 4.1. Let u ∈ W^{1,1}(Ω). The supremum sup_{∂Ω} u of u on ∂Ω is defined by

sup_{∂Ω} u = inf{γ ∈ ℝ: u ≤ γ on ∂Ω}.

Remark 4.3. Again, we notice that, if u ∈ C^0(Ω̄) ∩ W^{1,1}(Ω), then sup_{∂Ω} u is the usual pointwise supremum of u on ∂Ω.

Theorem 4.2. Maximum Principle for Subminima/Superminima. Let X be a convex sublattice of W^{1,1}(Ω), let L(x, z, p) = f(x, p) + g(x, z), and let I be strictly convex. Let u be a subminimum, and let v be a superminimum for I in X. Then,

ess sup_Ω (u − v) ≤ sup_{∂Ω} (u − v)^+.

Proof. Let

α = sup_{∂Ω} (u − v)^+.

For every ε > 0, we have

u − v ≤ α + ε,   on ∂Ω.

By Lemma 4.1, the function v + α + ε is a superminimum for I in X. The comparison principle (Theorem 4.1) then implies that

u ≤ v + α + ε,   a.e. on Ω,

proving the claim. ∎

Example 4.2. The assumptions of Theorem 4.2 do not imply that

ess sup_Ω (u − v) ≤ sup_{∂Ω} (u − v).

For instance, let

Ω = ]0, 1[,   f(p) = 0,   g(z) = z².

Then, u(x) = −(x − 1)² is a subsolution and v(x) = x is a supersolution of the equation D_x f_p(w′) − g_z(w) = 0. However,

u − v = −1,   on ∂Ω,

but

ess sup_Ω (u − v) = −3/4 > −1.
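A quick numerical confirmation of the computation in Example 4.2 (my own check, not part of the paper): the interior supremum −3/4 is attained at x = 1/2, where d/dx[−(x − 1)² − x] = −2(x − 1) − 1 vanishes.

# Quick check of Example 4.2 (assumed grid, not from the paper): u(x) = -(x-1)^2, v(x) = x on (0,1).
import numpy as np

x = np.linspace(0.0, 1.0, 100001)
diff = -(x - 1.0)**2 - x
print("sup of u - v over (0,1):", diff.max())            # ~ -0.75
print("u - v at the boundary  :", diff[0], diff[-1])     # -1.0, -1.0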

Corollary 4.1. Let X be a convex sublattice of W^{1,1}(Ω), let L(x, z, p) = f(x, p) + g(x, z), and let I be strictly convex. Let u, v be two minima for I in X. Then,

||u − v||_{L^∞(Ω)} = sup_{∂Ω} |u − v|.

Proof. The functions u and v are both subminima and superminima for I in X; Theorem 4.2 yields the first part of the claim. Again by Theorem 4.2, we have

ess sup_Ω (u − v) ≤ sup_{∂Ω} (u − v)^+,

ess sup_Ω (v − u) ≤ sup_{∂Ω} (v − u)^+.

Since both of the right-hand sides of the previous inequalities are bounded by sup_{∂Ω} |u − v|, it follows that

||u − v||_{L^∞(Ω)} ≤ sup_{∂Ω} |u − v|.

Moreover, since

|u − v| ≤ ||u − v||_{L^∞(Ω)},   a.e. on Ω,

the opposite inequality follows from Proposition 3.1(iii).
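The following numerical sketch (not from the paper; the boundary data and discretization are my own choices) illustrates Corollary 4.1 on the model functional I(u) = ∫₀¹ [u′² + u²] dx, which is strictly convex and of the form f(∇u) + g(u): two discrete minimizers with different boundary values satisfy ||u − v||_{L^∞(Ω)} = sup_{∂Ω} |u − v|.

# Numerical sketch (assumed data): Corollary 4.1 for I(u) = int_0^1 [u'^2 + u^2] dx.
# Interior optimality condition of the discretized energy:
#   -u_{i-1} + (2 + h^2) u_i - u_{i+1} = 0,  with u_0, u_n prescribed.
import numpy as np

def discrete_minimizer(a, b, n=400):
    """Solve the discretized Euler system with boundary data u(0) = a, u(1) = b."""
    h = 1.0 / n
    A = (np.diag(np.full(n - 1, 2.0 + h**2))
         + np.diag(np.full(n - 2, -1.0), 1)
         + np.diag(np.full(n - 2, -1.0), -1))
    rhs = np.zeros(n - 1)
    rhs[0], rhs[-1] = a, b
    return np.concatenate(([a], np.linalg.solve(A, rhs), [b]))

u = discrete_minimizer(0.0, 1.0)
v = discrete_minimizer(0.5, 0.2)
print("||u - v||_inf on [0,1]     :", np.max(np.abs(u - v)))                # ~ 0.8
print("sup of |u - v| on boundary :", max(abs(0.0 - 0.5), abs(1.0 - 0.2)))  # 0.8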



Remark 4.4. We recall again that the minima in the claim of Corollary 4.1 may have different boundary data; therefore, they are not forced to coincide, even if the functional is strictly convex.

For every τ in ℝ^n and u in W^{1,1}(Ω), we introduce the set Ω_τ and the function u_τ in W^{1,1}(Ω_τ) defined by

Ω_τ = −τ + Ω = {−τ + x: x ∈ Ω},   ∀y ∈ Ω_τ, u_τ(y) = u(y + τ).

For every open subset A of Ω, we define the functional

∀u ∈ W^{1,1}(A),   I_A(u) = ∫_A L(x, u, ∇u) dx,

and for every sublattice X of W^{1,1}(Ω), we define X(A) to be the set of the restrictions to A of the functions in X; the restriction of u ∈ W^{1,1}(Ω) to A will still be denoted by u. We will use the obvious fact that, if w is a minimum for I in X [i.e., I(w) ≤ I(u) for every u ∈ (w + W_0^{1,1}(Ω)) ∩ X], then the restriction of w to A is a minimum for I_A in X(A).

Theorem 4.3. Extended Maximum Principle for the Gradient. Let X be a convex sublattice of W^{1,1}(Ω), let L(x, z, p) = f(p) + g(z), and let I be strictly convex. Let w be a minimum for I in X, and let τ ∈ ℝ^n. Then,

ess sup_{Ω ∩ Ω_τ} (w_τ − w) ≤ sup_{∂(Ω ∩ Ω_τ)} (w_τ − w)^+,

||w_τ − w||_{L^∞(Ω ∩ Ω_τ)} = sup_{∂(Ω ∩ Ω_τ)} |w_τ − w|.

Proof. The function w is a minimum for I_{Ω ∩ Ω_τ} in X(Ω ∩ Ω_τ). Moreover, the fact that L is the sum of two functions which do not depend on x implies that w_τ is a minimum for the functional

I_τ(v) = ∫_{Ω_τ} [f(∇v) + g(v)] dx

in the lattice X_τ = {u_τ: u ∈ X}; i.e.,

I_τ(w_τ) ≤ I_τ(v),   for every v in X_τ such that v − w_τ ∈ W_0^{1,1}(Ω_τ).

In fact, let v ∈ X_τ be such that v − w_τ ∈ W_0^{1,1}(Ω_τ), and let u ∈ X be such that v = u_τ. Then, u − w ∈ W_0^{1,1}(Ω), so that I(w) ≤ I(u); therefore,

I_τ(w_τ) = ∫_{Ω_τ} [f(∇w_τ) + g(w_τ)] dx = ∫_Ω [f(∇w) + g(w)] dx = I(w) ≤ I(u) = I_τ(v).

It follows that the restriction of w_τ to Ω ∩ Ω_τ is a minimum for I_{Ω ∩ Ω_τ} in X(Ω ∩ Ω_τ). Now, w and w_τ are both subminima and superminima for I_{Ω ∩ Ω_τ} in X(Ω ∩ Ω_τ). Since the functional I_{Ω ∩ Ω_τ} is strictly convex, the application of Theorem 4.2 and Corollary 4.1 yields the conclusion. ∎

Corollary 4.2. Gradient Maximum Principle for Minima. Let X be a convex sublattice of W^{1,1}(Ω), let L(x, z, p) = f(p) + g(z), and let I be strictly convex. Let w be a minimum for I in X, and assume that w ∈ C^1(Ω̄). Then,

||∇w||_{L^∞(Ω)} = ||∇w||_{L^∞(∂Ω)}.

Proof. We still denote by w an extension of class C^1 of w to ℝ^n. Let x_0 ∈ Ω̄ be such that

||∇w||_{L^∞(Ω)} = |∇w(x_0)|,

and let τ in ℝ^n, |τ| = 1, be such that

|∇w(x_0)| = |D_τ w(x_0)|.

Let (λ_n)_{n∈ℕ} be a sequence in ℝ\{0} converging to 0; by Theorem 4.3, for every n ∈ ℕ, there exist x_n, y_n in Ω̄ such that

y_n − x_n = λ_n τ,   x_n ∈ ∂Ω or y_n ∈ ∂Ω,

and

|w(x_0 + λ_n τ) − w(x_0)| ≤ |w(y_n) − w(x_n)|.

Now, for every n ∈ ℕ, there exists z_n in the segment joining x_n with y_n that satisfies the equality

w(y_n) − w(x_n) = D_τ w(z_n) λ_n;

therefore, we obtain

|[w(x_0 + λ_n τ) − w(x_0)]/λ_n| ≤ |D_τ w(z_n)|.

We may assume that z_n converges to a point x* ∈ ∂Ω: passing to the limit in the latter inequality, we obtain

||∇w||_{L^∞(Ω)} = |D_τ w(x_0)| ≤ |D_τ w(x*)| ≤ ||∇w||_{L^∞(∂Ω)},

proving the claim. ∎
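The next sketch (again not from the paper, with data of my own choosing) illustrates Corollary 4.2 in dimension one: for the strictly convex model functional I(u) = ∫₀¹ [u′² + u²] dx with boundary data u(0) = 1, u(1) = 0.5, the largest slope of the discrete minimizer occurs in a cell touching the boundary.

# Numerical sketch (assumed data): gradient maximum principle for I(u) = int_0^1 [u'^2 + u^2] dx.
import numpy as np

n = 500
h = 1.0 / n
a, b = 1.0, 0.5

# Interior optimality condition: -u_{i-1} + (2 + h^2) u_i - u_{i+1} = 0, u_0 = a, u_n = b.
A = (np.diag(np.full(n - 1, 2.0 + h**2))
     + np.diag(np.full(n - 2, -1.0), 1)
     + np.diag(np.full(n - 2, -1.0), -1))
rhs = np.zeros(n - 1)
rhs[0], rhs[-1] = a, b
u = np.concatenate(([a], np.linalg.solve(A, rhs), [b]))

slopes = np.abs(np.diff(u)) / h                  # difference quotients ~ |u'|
print("max |u'| over (0,1)           :", slopes.max())
print("|u'| in the two boundary cells:", slopes[0], slopes[-1])
print("argmax cell (0 or", n - 1, "expected):", int(slopes.argmax()))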



Remark 4.5. In the case where L is smooth and w ∈ C²(Ω) satisfies the Euler equation, Corollary 4.2 is a consequence of the classical maximum principle for the gradient (Ref. 4, Theorem 15.1) for the solutions of class C² of elliptic differential equations. We point out here that we allow L to be extended valued and do not require the smoothness of either the Lagrangian or the minimum; moreover, we do not know a priori whether the minimum is a solution to an Euler equation. Theorem 4.3 seems then to be an extended version of a maximum principle for the gradient.

The next example shows that the conclusion of Corollary 4.2 does not hold in general if L depends also on x.

Example 4.3. Let

Ω = ]−1, 1[,   L(x, z, p) = f(p) + g(x, z),

where

f(p) = p²,   g(x, z) = 2 cosh(1) xz + z².

Let X be the lattice of the absolutely continuous functions u satisfying

u(−1) = 1/e,   u(1) = −1/e.

The function w(x) = sinh(x) − x cosh(1) belongs to X and is a solution of the Euler equation

u″ − u = cosh(1) x,

associated to the strictly convex functional

I(u) = ∫_{−1}^{1} L(x, u, u′) dx.

It follows by convexity that w is a minimum for I in X. However,

||w′||_{L^∞(−1,1)} > 0 = max{w′(−1), w′(1)}.
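A direct numerical check of Example 4.3 (not part of the paper): w(x) = sinh(x) − x cosh(1) solves w″ − w = cosh(1) x, takes the boundary values ±1/e, has w′(−1) = w′(1) = 0, and reaches |w′| = cosh(1) − 1 > 0 at the interior point x = 0.

# Quick check of Example 4.3 (assumed grid, not from the paper).
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
w = np.sinh(x) - x * np.cosh(1.0)
dw = np.cosh(x) - np.cosh(1.0)      # w'
ddw = np.sinh(x)                    # w''

print("max |w'' - w - cosh(1) x|:", np.max(np.abs(ddw - w - np.cosh(1.0) * x)))  # ~ 0
print("w(-1), w(1)              :", w[0], w[-1])                                 # 1/e, -1/e
print("w'(-1), w'(1)            :", dw[0], dw[-1])                               # 0, 0
print("max |w'| vs cosh(1) - 1  :", np.max(np.abs(dw)), np.cosh(1.0) - 1.0)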

5. Some Applications

In this section, we apply Theorem 4.3 to prove a regularity result for constrained minima of I in a Sobolev space.

Theorem 5.1. Lipschitz Regularity for Constrained Minima. Let L(x, z, p) = f(p) + g(z), and assume that the functional I is strictly convex. Let ū ∈ Lip(Ω), and let l^1, l^2 be two functions in Lip(Ω, ū). Assume that w is a minimum for I in ū + W_0^{1,q}(Ω), 1 ≤ q ≤ +∞, and that l^1 ≤ w ≤ l^2, a.e. on Ω. Then, w is Lipschitz and

||∇w||_{L^∞(Ω)} ≤ max{||∇l^1||_{L^∞(Ω)}, ||∇l^2||_{L^∞(Ω)}}.

To prove Theorem 5.1, we need the following technical lemma.

Lemma 5.1. Let ū ∈ Lip(Ω), τ ∈ ℝ^n, and let l^1, l^2 be two functions in Lip(Ω, ū). Assume that w ∈ ū + W_0^{1,1}(Ω) is such that l^1 ≤ w ≤ l^2, a.e. on Ω. Then,

sup_{∂(Ω ∩ Ω_τ)} (w_τ − w) ≤ max{ max_{Ω ∩ Ω_τ} (l^1_τ − l^1), max_{Ω ∩ Ω_τ} (l^2_τ − l^2) };   (5)

therefore,

sup_{∂(Ω ∩ Ω_τ)} |w_τ − w| ≤ max{ ||l^1_τ − l^1||_{L^∞(Ω ∩ Ω_τ)}, ||l^2_τ − l^2||_{L^∞(Ω ∩ Ω_τ)} }.   (6)

Proof. We show first that (5) holds true if w ∈ Lip(Ω). Let x ∈ ∂(Ω ∩ Ω_τ): either x ∈ ∂Ω and

w_τ(x) − w(x) = w_τ(x) − l^2(x) ≤ l^2_τ(x) − l^2(x),

or x ∈ ∂Ω_τ, so that x = −τ + y for some y ∈ ∂Ω, and

w_τ(x) − w(x) = w(y) − w_{−τ}(y) = l^1(y) − w_{−τ}(y) ≤ l^1(y) − l^1_{−τ}(y) = l^1_τ(x) − l^1(x),

proving the claim. In the general case, since l^2 − w ≥ 0 on Ω and since l^2 − w belongs to W_0^{1,1}(Ω), there exists a sequence (ϕ_n)_{n∈ℕ} of positive functions in W_0^{1,∞}(Ω) converging to l^2 − w in W^{1,1}(Ω); moreover, since

l^2 − w ≤ l^2 − l^1,   on Ω,

we may assume that

ϕ_n ≤ l^2 − l^1,   on Ω.

Therefore, for every n in ℕ, the Lipschitz function l^2 − ϕ_n satisfies the inequalities

l^1 ≤ l^2 − ϕ_n ≤ l^2,   on Ω;

the first part of the proof then implies that, for every x in ∂(Ω ∩ Ω_τ), we have

(l^2 − ϕ_n)_τ(x) − (l^2 − ϕ_n)(x) ≤ α,

where α is the right-hand side of the inequality (5). Now, the sequence ((l^2 − ϕ_n)_τ − (l^2 − ϕ_n))_{n∈ℕ} converges to w_τ − w in W^{1,1}(Ω ∩ Ω_τ); (5) follows from Proposition 3.1(iv). The application of (5) with −τ instead of τ gives (6). ∎

Proof of Theorem 5.1. Theorem 4.3 states that, for every τ in ℝ^n,

||w_τ − w||_{L^∞(Ω ∩ Ω_τ)} = sup_{∂(Ω ∩ Ω_τ)} |w_τ − w|.

Since l^1 and l^2 are Lipschitz, for every x in Ω ∩ Ω_τ we have

|l^1_τ(x) − l^1(x)| ≤ K|τ|,   |l^2_τ(x) − l^2(x)| ≤ K|τ|,

where K = max{||∇l^1||_{L^∞(Ω)}, ||∇l^2||_{L^∞(Ω)}}; therefore, by Lemma 5.1,

||w_τ − w||_{L^∞(Ω ∩ Ω_τ)} ≤ K|τ|.

It then follows that, for every x ∈ Ω, τ ∈ ℝ^n, and λ ∈ ℝ sufficiently small (in such a way that Ω ∩ Ω_{λτ} ≠ ∅), we have

|[w(x + λτ) − w(x)]/λ| ≤ K|τ|;

thus, the classical partial derivative D_τ w(x) of w with respect to τ at x, whenever it exists, satisfies the inequality

|D_τ w(x)| ≤ K|τ|.

We recall that, since w ∈ W^{1,1}(Ω), then for every τ ∈ ℝ^n the partial derivative D_τ w(x) exists for almost every x ∈ Ω and it coincides with ∇w(x) · τ (Ref. 13). Therefore, if (τ_k)_{k∈ℕ} is a countable dense set in the unit sphere of ℝ^n, then for almost every x in Ω the partial derivatives D_{τ_k} w(x) exist and moreover

|D_{τ_k} w(x)| ≤ K|τ_k| = K,   for every k ∈ ℕ.

Fix such an x and assume that ∇w(x) ≠ 0; let (τ_{n(k)})_{k∈ℕ} be a subsequence of (τ_k)_{k∈ℕ} such that

lim_{k→+∞} τ_{n(k)} = ∇w(x)/|∇w(x)|.

Then,

|∇w(x)| = lim_{k→+∞} |∇w(x) · τ_{n(k)}| = lim_{k→+∞} |D_{τ_{n(k)}} w(x)|,

so that |∇w(x)| ≤ K, and therefore,

||∇w||_{L^∞(Ω)} ≤ K.

Since w ∈ ū + W_0^{1,1}(Ω) and ū ∈ Lip(Ω), it follows that w ∈ ū + W_0^{1,∞}(Ω), proving the claim. ∎
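The following sketch (not from the paper; the functional, barriers, and discretization are assumptions of mine) illustrates the bound of Theorem 5.1 on a one-dimensional model: the discrete minimizer w of I(u) = ∫₀¹ [u′² + (u − 1)²] dx with w(0) = w(1) = 0 happens to satisfy l^1 ≤ w ≤ l^2 for the Lipschitz barriers l^1 = 0 and l^2(x) = 0.5 min(x, 1 − x), both with the same (zero) boundary data, so its Lipschitz constant should not exceed 0.5; the exact value is tanh(1/2) ≈ 0.462.

# Numerical sketch (assumed data): Theorem 5.1 for I(u) = int_0^1 [u'^2 + (u-1)^2] dx, u(0)=u(1)=0.
# Interior optimality condition: -w_{i-1} + (2 + h^2) w_i - w_{i+1} = h^2.
import numpy as np

n = 1000
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

A = (np.diag(np.full(n - 1, 2.0 + h**2))
     + np.diag(np.full(n - 2, -1.0), 1)
     + np.diag(np.full(n - 2, -1.0), -1))
w = np.concatenate(([0.0], np.linalg.solve(A, np.full(n - 1, h**2)), [0.0]))

l1 = np.zeros_like(x)
l2 = 0.5 * np.minimum(x, 1.0 - x)
print("l1 <= w <= l2 everywhere :", bool(np.all(l1 <= w) and np.all(w <= l2)))
print("max |w'| of the minimizer:", np.max(np.abs(np.diff(w))) / h)   # ~ tanh(0.5) ~ 0.462
print("barrier Lipschitz bound  :", 0.5)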

We now state a result on the equivalence of two variational problems. Let C be a convex compact subset of ℝ^n containing the origin in its interior, and let ū in Lip(ℝ^n) be such that ∇ū ∈ C, a.e. Let l^1, l^2 in Lip(Ω, ū) be such that

∫_Ω l^1 dx = min{ ∫_Ω u dx: u ∈ Lip(Ω, ū), ∇u ∈ C a.e. on Ω },

∫_Ω l^2 dx = max{ ∫_Ω u dx: u ∈ Lip(Ω, ū), ∇u ∈ C a.e. on Ω }.

Remark that, if u in Lip(Ω, ū) is such that ∇u ∈ C, a.e. on Ω, then l^1 ≤ u ≤ l^2 in Ω. We introduce the sets

K_C = {u ∈ Lip(Ω, ū): ∇u ∈ C, a.e. on Ω},

K_{l^1,l^2} = {u ∈ ū + W_0^{1,1}(Ω): l^1 ≤ u ≤ l^2, a.e. on Ω},

and we consider the problems

(P_C)   min{I(u): u ∈ K_C},

(P_{l^1,l^2})   min{I(u): u ∈ K_{l^1,l^2}}.

We notice that problem (P_C) always admits a solution, whereas in order to ensure that problem (P_{l^1,l^2}) admits a solution we need some extra assumptions, e.g., some standard growth conditions. The equivalence of problems (P_C) and (P_{l^1,l^2}) was studied by Brezis–Sibony in Ref. 14 (in the case of the elasto-plastic torsion functional) and for a more general class of functionals and constraints by Treu–Vornicescu in Ref. 6. In particular, Theorem 3.3 in Ref. 6 requires that the integrand be of the form L(x, z, p) = f(p) + g(x, z) and that g be sufficiently smooth. Our previous results allow us to prove, using a different technique, the equivalence of the two problems for a nonsmooth class of functionals whose Lagrangians are of the form f(p) + g(z).

Theorem 5.2. Equivalence of Two Variational Problems. Let L(x, z, p) = f(p) + g(z), and assume that the functional I is strictly convex. Let ū in Lip(ℝ^n) be such that ∇ū ∈ C, a.e., and assume that problem (P_{l^1,l^2}) has a solution. Then, problems (P_C) and (P_{l^1,l^2}) have the same (unique) minimum.

Proof. Let w_C be the minimum of I in K_C, and let w be the minimum of I in K_{l^1,l^2}. Since K_C is a subset of K_{l^1,l^2}, then I(w) ≤ I(w_C). By Theorem 5.1, the function w is Lipschitz; we claim that ∇w ∈ C, a.e. In fact, we may extend the functions l^1, l^2 to ℝ^n by setting

l^i(x) = ū(x),   for x ∈ ℝ^n \ Ω,

in such a way that

∇l^i ∈ C,   a.e. in ℝ^n, i = 1, 2.

It then follows by Lemma 2.1 in Ref. 6 that, for every τ ∈ ℝ^n,

l^i(x + τ) − l^i(x) ≤ γ_{C°}(τ),

where γ_{C°}(τ) is the Minkowski function of the polar C° of the set C; see for instance Ref. 15. Since γ_{C°}(τ) is positive, then Lemma 5.1 yields

(w_τ − w)^+ ≤ γ_{C°}(τ),   on ∂(Ω ∩ Ω_τ).

Theorem 4.3 implies that

w_τ − w ≤ γ_{C°}(τ),   on Ω ∩ Ω_τ.

Lemma 2.1 in Ref. 6 then yields that ∇w ∈ C, a.e. on Ω. Thus, w ∈ K_C and therefore,

I(w) ≥ I(w_C),

proving that I(w) = I(w_C). The strict convexity of I yields w = w_C. ∎

Remark 5.1. Theorems 5.1 and 5.2 could also be proved through a nontrivial modification of the proof of Theorem 3.1 in Ref. 6.

References

1. GIAQUINTA, M., and GIUSTI, E., On the Regularity of the Minima of Variational Integrals, Acta Mathematica, Vol. 148, pp. 31–46, 1982.
2. CELLINA, A., On the Bounded Slope Condition and the Validity of the Euler Lagrange Equation, SIAM Journal on Control and Optimization (to appear).
3. CELLINA, A., On the Strong Maximum Principle, Proceedings of the American Mathematical Society (to appear).
4. GILBARG, D., and TRUDINGER, N. S., Elliptic Partial Differential Equations of Second Order, 3rd Edition, Grundlehren der Mathematischen Wissenschaften, Springer Verlag, Berlin, Germany, Vol. 224, 1998.
5. GIUSTI, E., Metodi Diretti nel Calcolo delle Variazioni, Unione Matematica Italiana, Bologna, Italy, 1994.
6. TREU, G., and VORNICESCU, M., On the Equivalence of Two Variational Problems, Calculus of Variations and Partial Differential Equations, Vol. 11, pp. 307–319, 2000.
7. MARICONDA, C., and TREU, G., Existence and Lipschitz Regularity for Minima, Proceedings of the American Mathematical Society (to appear).
8. EKELAND, I., and TEMAM, R., Convex Analysis and Variational Problems, North-Holland Publishing Company, Amsterdam, Holland, 1976.
9. BREZIS, H., Analyse Fonctionnelle: Théorie et Applications, Masson, Paris, France, 1983.
10. EVANS, L. C., Partial Differential Equations, Graduate Studies in Mathematics, American Mathematical Society, Providence, Rhode Island, Vol. 19, 1998.
11. HARTMAN, P., and STAMPACCHIA, G., On Some Nonlinear Elliptic Differential-Functional Equations, Acta Mathematica, Vol. 115, pp. 271–310, 1966.
12. MARICONDA, C., and TREU, G., A Comparison Principle for Minimizers, Comptes Rendus de l'Académie des Sciences de Paris, Série I, Mathématiques, Vol. 330, pp. 681–686, 2000.
13. ZIEMER, W. P., Weakly Differentiable Functions, Graduate Texts in Mathematics, Springer Verlag, New York, NY, Vol. 120, 1989.
14. BREZIS, H., and SIBONY, M., Equivalence de Deux Inéquations Variationnelles et Applications, Archive for Rational Mechanics and Analysis, Vol. 41, pp. 254–265, 1971.
15. ROCKAFELLAR, R. T., Convex Analysis, Princeton University Press, Princeton, New Jersey, 1970.