Elliptic equations with measure data

4. Weak solutions for elliptic equations 14 Chapter 2. Regularity results 21 1. Examples 21 2. Stampacchia’s theorems 24 Chapter 3. Existence via dual...

0 downloads 100 Views 1MB Size
Elliptic equations with measure data Luigi Orsina

Contents Chapter 1. Existence with regular data in the linear case 1. Minimization in Banach spaces 2. Hilbert spaces 3. Sobolev spaces 4. Weak solutions for elliptic equations

5 5 6 10 14

Chapter 2. Regularity results 1. Examples 2. Stampacchia’s theorems

21 21 24

Chapter 3. Existence via duality for measure data 1. Measures 2. Duality solutions for L1 data 3. Duality solutions for measure data 4. Regularity of duality solutions

29 29 32 33 34

Chapter 4. Existence via approximation for measure data

37

Chapter 5. Nonuniqueness for distributional solutions

43

Chapter 6. Entropy solutions

49

Chapter 7. Decomposition of measures using capacity 1. Capacity

61 61

Chapter 8. Renormalized solutions 1. Renormalized solutions

65 66

Bibliography

81

3

CHAPTER 1

Existence with regular data in the linear case Before stating and proving the existence theorem for linear elliptic equations, we need some tools. 1. Minimization in Banach spaces Let E be a Banach space, and let J : E ! R be a functional.

Definition 1.1. A functional J : E ! R is said to be weakly lower semicontinuous if )

un * u

J(u)  lim inf J(un ). n!+1

Definition 1.2. A functional J : E ! R is said to be coercive if lim

kukE !+1

J(u) = +1.

Example 1.3. If E = R, the function J(x) = x2 is an example of a (weakly) lower semicontinuous and coercive functional. Another example is J(u) = kukE . Theorem 1.4. Let E be a reflexive Banach space, and let J : E ! R be a coercive and weakly lower semicontinuous functional (not identically equal to +1). Then J has a minimum on E. Proof. Let m = inf J(v) < +1, v2E

and let {vn } in E be a minimizing sequence, i.e., vn is such that lim J(vn ) = m.

n!+1

We begin by proving that {vn } is bounded. Indeed, if it were not, there would be a subsequence {vnk } such that lim kvnk k = +1.

k!+1

Since J is coercive, we will have m = lim J(vn ) = lim J(vnk ) = +1, n!+1

k!+1

5

6

1. EXISTENCE WITH REGULAR DATA IN THE LINEAR CASE

which is false. Therefore, {vn } is bounded in E and so, being E reflexive, there exists a subsequence {vnk } and an element v of E such that vnk weakly converges to v as k diverges. Since J is weakly lower semicontinuous, we have m  J(v)  lim inf J(vnk ) = lim J(vn ) = m, n!+1

k!+1



so that v is a minimum of J. 2. Hilbert spaces

2.1. Linear forms and dual space. We recall that a Hilbert space H is a vector space where a scalar product (·|·) is defined, which is complete with respect to the distance induced by the scalar product by the formula p d(x, y) = (x y|x y). Examples of Hilbert spaces are R (with (x|y) = x y), RN (with the “standard” scalar product), `2 , and L2 (⌦) with Z (f |g) = f g. ⌦

Theorem 1.5 (Riesz). Let H be a separable Hilbert space, and let T be an element of its dual H 0 , i.e., a linear application T : H ! R such that there exists C 0 such that (1.1)

|hT, xi|  Ckxk,

8x 2 H.

Then there exists a unique y in H such that hT, xi = (y|x),

8x 2 H.

Proof. Denote by {eh } a complete orthonormal system in H, i.e. a sequence of vectors of H such that (eh |ek ) = hk , and such that, for every x in H, one has +1 X x= (x|eh )eh . h=1

It is then well known that there exists a bijective isometry F from H to `2 , defined by F(x) = {(x|eh )}. We claim that {hT, eh i} belongs to `2 . Indeed, if n X yn = hT, eh ieh , h=1

2. HILBERT SPACES

7

we have, by linearity and by (1.1), n X h=1

so that

n X

(hT, eh i)2 = hT, yn i  Ckyn k = C n X h=1

h=1

(hT, eh i)2

! 12

,

(hT, eh i)2  C 2 ,

which yields (letting n tend to infinity) that {hT, eh ii} belongs to `2 . Therefore, one has, again by linearity and by (1.1), hT, xi =

+1 X h=1

(x|eh )hT, eh i,

8x 2 H.

Let now y be the vector of H defined by y=

+1 X h=1

hT, eh ieh .

Then, since hT, eh i = (y|eh ), one has hT, xi =

+1 X

(x|eh )(y|eh ),

h=1

8x 2 H,

and the right hand side is nothing but the scalar product in `2 of F(x) and F(y). Since F is an isometry, we then have hT, xi = (y|x),

8x 2 H,

as desired. Uniqueness follows from the fact that (y|x) = (z|x) for every x in H implies y = z (just take x = y z). ⇤ Corollary 1.6. The map T 7! y is a bijective linear isometry between H 0 and H. Proof. Since hT + S, xi = hT, xi + hS, xi, and h T, xi = hT, xi, it is clear that the map T 7! y is linear. In order to prove that it is an isometry, we have |hT, xi| = |(y|x)|  kykkxk,

which implies kT k  kyk. Furthermore

kyk2 = (y|y) = hT, yi  kT kkyk,

so that kyk  kT k. The map is clearly injective, and it is surjective since the application x 7! (y|x) is linear and continuous on H (by Cauchy-Schwartz inequality). ⇤

8

1. EXISTENCE WITH REGULAR DATA IN THE LINEAR CASE

2.2. Bilinear forms. An application a : H ⇥ H ! R such that a( x + µy, z) = a(x, z) + µa(y, z), and a(z, x + µy) = a(z, x) + µa(z, x), for every x and y in H, and for every and µ in R, is called bilinear form. A bilinear form is said to be continuous if there exists 0 such that |a(x, y)|  kxkkyk, 8x, y 2 H, and is said to be coercive if there exists ↵ > 0 such that a(x, x)

↵kxk2 ,

8x 2 H.

An example of bilinear form on H is the scalar product, which is both continuous (with = 1, thanks to the Cauchy-Schwartz inequality), and coercive (with ↵ = 1, by definition of the norm in H). Theorem 1.7. Let a : H ⇥ H ! R be a continuous bilinear form. Then there exists a linear and continuous map A : H ! H such that a(x, y) = (A(x)|y),

8x, y 2 H.

Proof. Since a is linear in the second argument and continuous, for every fixed x in H the map y 7! a(x, y) is linear and continuous, so that it belongs to H 0 . By Riesz theorem, there exists a unique vector A(x) in H such that a(x, y) = (A(x)|y),

8x, y 2 H.

Since a is linear in the first argument, the map x 7! A(x) is linear. Furthermore, by the continuity of a, kA(x)k2 = (A(x)|A(x)) = a(x, A(x))  kxkkA(x)k, so that kA(x)k  kxk, and the map is continuous.



2.3. Banach-Caccioppoli and Lax-Milgram theorems. Theorem 1.8 (Banach-Caccioppoli). Let (X, d) be a complete metric space, and let S : X ! X be a contraction mapping, i.e., a continuous application such that there exists ✓ in [0, 1) such that d(S(x), S(y))  ✓ d(x, y),

8x, y 2 X.

Then there exists a unique x in X such that S(x) = x.

2. HILBERT SPACES

9

Proof. Let x0 in X be fixed, and define x1 = S(x0 ), x2 = S(x1 ), and, in general, xn = S(xn 1 ). We then have, since S is a contraction mapping, d(xn+1 , xn ) = d(S(xn ), S(xn 1 ))  ✓ d(xn , xn 1 ),

and iterating we obtain

d(xn+1 , xn )  ✓n d(x1 , x0 ).

Therefore, by the triangular inequality, d(xn , xm ) 

n 1 X

h=m

d(xh+1 , xh ) 

n 1 X

✓h d(x1 , x0 ) =

h=m

✓m 1

✓n . ✓

Since {✓ } is a Cauchy sequence in R (being convergent to zero), it then follows that {xn } is a Cauchy sequence in (X, d), which is complete. Therefore, there exists x in X such that xn converges to x. Since S is continuous, on one hand S(xn ) converges to S(x), and on the other hand S(xn ) = xn+1 converges to x so that x is a fixed point for S. If there exist x and y such that S(x) = x and S(y) = y, then, since S is a contraction mapping, h

d(x, y) = d(S(x), S(y))  ✓ d(x, y),



which implies (since ✓ < 1) d(x, y) = 0 and so x = y.

Theorem 1.9 (Lax-Milgram). Let a : H ⇥ H ! R be a continuous and coercive bilinear form, and let T be an element of H 0 . Then there exists a unique x in H such that a(x, z) = hT, zi,

(1.2)

8z 2 H.

Proof. Using the Riesz theorem and Theorem 1.7, solving the equation (1.2) is equivalent to find x such that a(x, z) = (A(x)|z) = (y|z) = hT, zi,

8z 2 H,

i.e., to solve the equation A(x) = y. Given > 0, this equation is equivalent to x = x A(x) + y, which is a fixed point problem for the function S(x) = x A(x) + y. Since, being A linear, one has S(x1 )

S(x2 ) = x1

x2

A(x1 ) + A(x2 ) = x1

x2

A(x1

x2 ),

in order to prove that S is a contraction mapping, it is enough to prove that there exists > 0 such that kx

A(x)k  ✓kxk,

for some ✓ < 1 and for every x in H. We have kx

A(x)k2 = kxk2 +

2

kA(x)k2

2 (A(x)|x).

10

1. EXISTENCE WITH REGULAR DATA IN THE LINEAR CASE

Recalling Theorem 1.7 and the definition of A, we have so that

kA(x)k2 

2

kxk2 ,

(A(x)|x) = a(x, x)

kx A(x)k2  (1 + If 0 < < 2↵2 , we have ✓2 = 1 + contraction mapping.

2 2

↵kxk2 ,

2 ↵)kxk2 . 2 ↵ < 1, so that S is a ⇤

2 2

3. Sobolev spaces The Banach spaces where we will look for solutions are space of functions in Lebesgue spaces “with derivatives in Lebesgue spaces” (whatever this means). 3.1. Definition of Sobolev spaces. Let ⌦ be a bounded, open subset of RN , N 1, and let u be a function in L1 (⌦). We say that u has a weak (or distributional) derivative in the direction xi if there exists a function v in L1 (⌦) such that Z Z @' u = v ', 8' 2 C01 (⌦). @xi ⌦ ⌦

@u In this case we define the weak derivative @x as the function v. If u i has weak derivatives in every direction, we define its (weak, or distributional) gradient as the vector ✓ ◆ @u @u ru = ,..., . @x1 @xN

If p

1, we define the Sobolev space W 1,p (⌦) as

W 1,p (⌦) = u 2 Lp (⌦) : ru 2 (Lp (⌦))N .

The Sobolev space W 1,p (⌦) becomes a Banach space under the norm kukW 1,p (⌦) = kukLp (⌦) + kruk(Lp (⌦))N , and W 1,2 (⌦) is a Hilbert space under the scalar product Z Z (u|v)W 1,2 (⌦) = uv + ru · rv. ⌦



1,2

For historical reasons the space W (⌦) is usually denoted by H 1 (⌦): we will use this notation from now on. Since we will be dealing with elliptic problems with zero boundary conditions, we need to define functions which somehow are “zero” on the boundary of ⌦. Since @⌦ has zero Lebesgue measure, and functions in W 1,p (⌦) are only defined up to almost everywhere equivalence, there

3. SOBOLEV SPACES

11

is no “direct” way of defining the boundary value a function u in some Sobolev space. We then give the following definition. Definition 1.10. We define W01,p (⌦) as the closure of C01 (⌦) in the norm of W 1,p (⌦). If p = 2, we will denote W01,2 (⌦) by H01 (⌦), which is a Hilbert space. From now on we will mainly deal with W01,p (⌦). 3.2. Properties of Sobolev spaces. Since a function in W01,p (⌦) is “zero at the boundary” it is possible to control the norm of u in Lp (⌦) with the norm of its gradient in the same space. This is known as Poincar´e inequality. Theorem 1.11 (Poincar´e inequality). Let p 1; then there exists a constant C, only depending on ⌦, N and p, such that (1.3)

kukLp (⌦)  C kruk(Lp (⌦))N ,

8u 2 W01,p (⌦).

Proof. We only give an idea of the proof in dimension 1. Let u belong to C01 ((0, 1)). Then Z x Z x 0 u(x) = u(0) + u (t) dt = u0 (t) dt, 8x 2 (0, 1). 0

0

Thus, by H¨older inequality Z x p |u(x)| = u0 (t) dt 0

p

x

p p0

Z

x 0

0

p

|u (t)| 

Z

1 0

|u0 (t)|p .

C01 ((0, 1))

Integrating this inequality yields the result for functions. The 1,p result for functions in W0 (⌦) then follows by a density argument. ⇤ As a consequence of Poincar´e inequality, we can define on W01,p (⌦) the equivalent norm built after the norm of ru in (Lp (⌦))N . From now on, we define kukW 1,p (⌦) = kruk(Lp (⌦))N . 0

Even though functions in W01,p (⌦) should only belong to Lp (⌦), the assumptions made on the gradient allow to improve the summability of functions belonging to Sobolev spaces. This is what is stated in the following “embedding” theorem. Theorem 1.12. Let 1  p < N , and let p⇤ = NN pp (p⇤ is called the Sobolev embedding exponent). Then there exists a constant Sp (depending only on N and p) such that (1.4)

kukLp⇤ (⌦)  Sp kukW 1,p (⌦) , 0

8u 2 W01,p (⌦).

12

1. EXISTENCE WITH REGULAR DATA IN THE LINEAR CASE

Remark 1.13. The fact that p⇤ is the correct exponent can be easily recovered by a scaling argument. Indeed, if u belongs to W01,p (RN ), then u( x) belongs to the same space. But then Z Z 1 q |u( x)| dx = N |u(y)|q dy, RN

and

Z

RN

p

RN

|ru( x)| dx =

1 N p

Z

RN

|ru(y)|p dy.

Therefore, if (1.4) holds for some constant C (independent on ) and some exponent q, one should have N N p = , q p which implies q =

Np N p

= p⇤ . ⇤

By (1.4), the embedding of W01,p (⌦) in Lp (⌦) is continuous. We recall that a map T : X ! Y (with X and Y Banach spaces) is said to be compact if the closure of T (B) is compact in Y for every bounded set B in X. To obtain compactness of the embedding of W01,p (⌦) in Lebesgue spaces, we cannot consider exponents up to p⇤ . Theorem 1.14. Let 1  p < N , and let 1  q < p⇤ . Then the embedding of W01,p (⌦) into Lq (⌦) is compact. ⇤

Remark 1.15. The fact that the embedding of W01,p (⌦) into Lp (⌦) is not compact is at the basis for several nonexistence results for equations like u = uq if q is “too large”. But this is another story. . . An important role will be played by the dual of a Sobolev space. We have the following representation theorem. 0

Theorem 1.16. Let p > 1, and let T be an element of (W01,p (⌦)) . 0 Then there exists F in (Lp (⌦))N such that Z hT, ui = F · ru, 8u 2 W01,p (⌦). ⌦

of W01,p (⌦) 1

The dual of H01 (⌦) is H

will be denoted by W

1,p0

(⌦), while the dual

(⌦).

Remark 1.17. The space H01 (⌦) is a Hilbert space. Therefore, by Theorem 1.5, it is isometrically equivalent to its dual H 1 (⌦). Furthermore, by Poincar´e inequality, H01 (⌦) is embedded into L2 (⌦), which is itself a Hilbert space. Since the embedding is continuous and dense,

3. SOBOLEV SPACES

13

we also have that the the dual of L2 (⌦) (which is L2 (⌦)) is embedded into H 1 (⌦). We therefore have H01 (⌦) ⇢ L2 (⌦) ⌘ (L2 (⌦))0 ⇢ (H01 (⌦))0 = H

1

(⌦).

If we identify both L2 (⌦) and its dual, and H01 (⌦) and its dual, we obtain a contradiction (since H01 (⌦) and L2 (⌦) are di↵erent spaces). Therefore, we have to choose which identification to make: which will be that L2 (⌦) is equivalent to its dual. Remark 1.18. Since, by Sobolev embedding, W01,p (⌦) is continu⇤ ⇤ ously embedded in Lp (⌦), we have by duality that (Lp (⌦))0 is contin0 uously embedded in W 1,p (⌦). If we define p⇤ = (p⇤ )0 =

Np , Np N + p

we then have Lp⇤ (⌦) ⇢ W If p = 2, we have 2⇤ =

2N , N +2

1,p0

(⌦).

and the embedding of L2⇤ (⌦) into H

1

(⌦).

The final result on Sobolev spaces will be about composition with regular functions. Theorem 1.19 (Stampacchia). Let G : R ! R be a lipschitz continuous functions such that G(0) = 0. If u belongs to W01,p (⌦), then G(u) belongs to W01,p (⌦) as well, and (1.5)

rG(u) = G0 (u) ru,

almost everywhere in ⌦.

Remark 1.20. Recall that a lipschitz continuous function is only almost everywhere di↵erentiable, so that the right-hand side of (1.5) may not be defined. We have however two possible cases: if k is a value such that G0 (k) does not exist, either the set {u = k} has zero measure (and so, since identity (1.5) only holds almost everywhere, this value does not give any problems), or the set {u = k} has positive measure. In this latter case, however, we have both ru = 0 and rG(u) = 0 almost everywhere, so that (1.5) still holds. Let k > 0; in what follows, we will often use composition of functions in Sobolev spaces with the lipschitz continuous functions (1.6)

Tk (s) = max( k, min(s, k)),

14

1. EXISTENCE WITH REGULAR DATA IN THE LINEAR CASE k k k k

and (1.7)

Gk (s) = s

Tk (s) = (|s|

k)+ sgn(s).

k k

By Theorem 1.19, we have rTk (u) = ru

{|u|k} ,

rGk (u) = ru

{|u| k} ,

almost everywhere in ⌦. 4. Weak solutions for elliptic equations We have now all the tools needed to deal with elliptic equations. 2

4.1. Definition of weak solution. Let A : ⌦ ! RN be a matrixvalued measurable function such that there exist 0 < ↵  such that (1.8)

A(x)⇠ · ⇠

↵|⇠|2 ,

|A(x)|  ,

for almost every x in ⌦, and for every ⇠ in RN . We will consider the following uniformly elliptic equation with Dirichlet boundary conditions ( div(A(x) ru) = f in ⌦, (1.9) u=0 on @⌦, where f is a function defined on ⌦ which satisfies suitable assumptions. If the matrix A is the identity matrix, problem (1.9) becomes ( u = f in ⌦, u=0 on @⌦, i.e., the Dirichlet problem for the laplacian operator.

4. WEAK SOLUTIONS FOR ELLIPTIC EQUATIONS

15

4.2. Classical solutions and weak solutions. Suppose that the matrix A and the functions u and f are sufficiently smooth so that one can “classically” compute div(A(x)ru). If ' is a function in C01 (⌦), we can then multiply the equation in (1.9) by ' and integrate on ⌦. Since div(A(x)ru) ' = we get

Z



A(x)ru · r'

div(A(x)ru ') + A(x)ru · r', Z

div(A(x)ru ') = ⌦

Z

f '. ⌦

By Gauss-Green formula, we have (if ⌫ is the exterior normal to ⌦) Z Z div(A(x)ru ') = A(x)ru · ⌫ ' = 0, ⌦

@⌦

since ' has compact support in ⌦. Therefore, if u is a classical solution of (1.9), we have Z Z A(x)ru · rv = f v, 8v 2 C01 (⌦). ⌦



We now remark that there is no need for A, u, ' and f to be smooth in order for the above identity to be well defined. It is indeed enough that A is a bounded matrix, that u and ' belong to H01 (⌦), and that f is in L2 (⌦) (or in L2⇤ (⌦), thanks to Sobolev embedding, see Remark 1.18). We therefore give the following definition. Definition 1.21. Let f be a function in L2⇤ (⌦). A function u in H01 (⌦) is a weak solution of (1.9) if Z Z (1.10) A(x)ru · rv = f v, 8v 2 H01 (⌦). ⌦



If u is a weak solution of (1.9), and u is sufficiently smooth in order to perform the same calculations as above “going backwards”, then it can be proved that u is a “classical” solution of (1.9). The study of the assumptions on f and A such that a weak solution is also a classical solution goes beyond the purpose of this text (also because we are interested in “bad” data!). 4.3. Existence of solutions (using Lax-Milgram). Theorem 1.22. Let f be a function in L2⇤ (⌦). Then there exists a unique solution u of (1.9) in the sense of (1.10).

16

1. EXISTENCE WITH REGULAR DATA IN THE LINEAR CASE

Proof. We will use Lax-Milgram theorem. Indeed, if we define the bilinear form a : H01 (⌦) ⇥ H01 (⌦) ! R by Z a(u, v) = A(x)ru · rv, ⌦

and the linear and continuos (thanks to Sobolev embedding) functional T : H01 (⌦) ! R by Z hT, vi =

f v,



solving problem (1.9) in the sense of (1.10) amounts to finding u in H01 (⌦) such that a(u, v) = hT, vi,

8v 2 H01 (⌦),

which is exactly the result given by Lax-Milgram theorem. In order to apply the theorem, we have to check that a is continuous and coercive (the fact that it is bilinear being evident). We have, by (1.8), and by H¨older inequality, Z |a(u, v)|  |A(x)||ru||rv|  kukH 1 (⌦) kvkH 1 (⌦) , 0



0

so that a is continuous. Furthermore, again by (1.8), we have Z Z a(u, u) = A(x)ru · ru ↵ |ru|2 = ↵ kuk2H 1 (⌦) , ⌦

so that a is also coercive.



0



4.4. Existence of solutions (using minimization). If the matrix A satisfies (1.8) and is symmetrical, existence and uniqueness of solutions for (1.9) can be proved using minimization of a suitable functional. Theorem 1.23. Let f be a function in L2⇤ (⌦), and let J : H01 (⌦) ! R be defined by Z Z 1 J(v) = A(x)rv · rv f v, 8v 2 H01 (⌦). 2 ⌦ ⌦

Then J has a unique minimum u in H01 (⌦), which is the solution of (1.9) in the sense of (1.10). Proof. We begin by proving that J is coercive and weakly lower semicontinuous on H01 (⌦), so that a minimum will exist by Theorem 1.4. Recalling (1.8) and using H¨older and Sobolev inequalities, we have Z ↵ J(v) |rv|2 kf kL2⇤ (⌦) kvkL2⇤ (⌦) 2 ⌦ ↵ kvk2H 1 (⌦) S2 kf kL2⇤ (⌦) kvkH 1 (⌦) , 0 0 2

4. WEAK SOLUTIONS FOR ELLIPTIC EQUATIONS

17

and the right hand side diverges as the norm of u in H01 (⌦) diverges, so that J is coercive. Let now {vn } be a sequence of functions which is weakly convergent to some v in H01 (⌦). Since f belongs to L2⇤ (⌦), ⇤ and vn converges weakly to v in L2 (⌦), we have Z Z lim f vn = f v, n!+1





so that the weak lower semicontinuity of J is equivalent to the weak lower semicontinuity of Z K(v) = A(x)rv · rv. ⌦

By (1.8) we have K(v

vn ) =

Z



A(x) r(v

vn ) · r(v

vn )

0,

which, together with the symmetry of A, implies Z Z Z (1.11) 2 A(x)rv · rvn A(x)rv · rv  A(x)rvn · rvn . ⌦





2

N

Since rvn converges weakly to rv in (L (⌦)) , and since A(x)rv is fixed in the same space, we have Z Z lim A(x)rv · rvn = A(x)rv · rv, n!+1





so that taking the inferior limit in both sides of (1.11) implies Z Z K(v) = A(x)rv · rv  lim inf A(x)rvn · rvn = lim inf K(vn ), n!+1



n!+1



which means that K is weakly lower semicontinuous on H01 (⌦), as desired. Let now u be a minimum of J on H01 (⌦). We are going to prove that it is unique. Indeed, if u and v are both minima of J, one has ⇣u + v ⌘ ⇣u + v ⌘ J(u)  J , J(v)  J , 2 2 that is, ⇣u + v ⌘ J(u) + J(v)  2J , 2 which becomes (after cancelling equal terms and multiplying by 4) Z Z Z 2 A(x)ru · ru + 2 A(x)ru · ru = A(x)r(u + v) · r(u + v). ⌦





18

1. EXISTENCE WITH REGULAR DATA IN THE LINEAR CASE

Using the fact that A is symmetric, expanding the right hand side, and cancelling equal terms, we arrive at Z Z Z A(x)ru · ru 2 A(x)ru · rv + A(x)rv · rv  0, ⌦





which can be rewritten as Z A(x)r(u ⌦

v) · r(u

v)  0.

Using (1.8) we therefore have

vk2H 1 (⌦)  0,

↵ ku

0

which implies u = v, as desired. We are now going to prove that the minimum u is a solution of (1.9) in the sense of (1.10). Given v in H01 (⌦) and t in R, we have J(u)  J(u + tv), that is Z Z Z Z 1 1 A(x)ru·ru fu  A(x)r(u+tv)·r(u+tv) f (u+tv). 2 ⌦ 2 ⌦ ⌦ ⌦

Expanding the right hand side, cancelling equal terms, and using the fact that A is symmetric, we obtain Z Z Z t2 t A(x)ru · rv + A(x)rv · rv t f v 0. 2 ⌦ ⌦ ⌦ If t > 0, dividing by t and then letting t tend to zero implies Z Z A(x)ru · rv f v 0, ⌦



while if t < 0, dividing by t and then letting t tend to zero implies the reverse inequality. It then follows that Z Z A(x)ru · rv = f v, 8v 2 H01 (⌦), ⌦



and so u solves (1.9) (in the sense of (1.10)). In order to prove that such a solution is unique, we are going to prove that if u solves (1.9), then u is a minimum of J. Indeed, choosing u v as test function in (1.10), we have Z Z Z A(x)ru · ru A(x)ru · rv = f (u v). ⌦

This implies Z 1 J(u) + A(x)ru · ru 2 ⌦



Z





A(x)ru · rv = J(v)

1 2

Z



A(x)rv · rv,

4. WEAK SOLUTIONS FOR ELLIPTIC EQUATIONS

which implies J(u)  J(v) since Z Z Z 1 1 A(x)ru · ru A(x)ru · rv + A(x)rv · rv 2 ⌦ 2 ⌦ ⌦ is nonnegative by (1.8) since it is equal to Z 1 A(x)r(u v) · r(u v). 2 ⌦

19



For Hilbert spaces, Sobolev spaces, and the definition of weak solution for elliptic equations, see the book by H. Brezis ([6]), chapters V, VIII (in dimension 1) and IX (in dimension N ).

CHAPTER 2

Regularity results Warning to the reader: from now, unless explicitly stated, N

3.

Thanks to the results of the previous section, we have existence of solutions for data f in L2⇤ (⌦). The solution u then belongs to ⇤ H01 (⌦) and (thanks to Sobolev embedding) to L2 (⌦). One then wonders whether an increase on the regularity of f will yield more regular solutions. 1. Examples We are going to study a model case, in which the solution of (1.9) can be explictly calculated. This example will give us a hint on what happens in the general case. Example 2.1. Let ⌦ = B 1 (0), let N 2

f (x) =

|x|↵

3, let ↵ < N and define

1 . ( log(|x|))

It is well known that f belongs to Lp (⌦), with p = study the regularity of the solution u of ( u = f in ⌦, u=0 on @⌦,

N . ↵

We are going to

taking advantage of the fact that the solution will be radially symmetric. Recalling the formula for the laplacian in radial coordinates, we have 1 1 . (⇢N 1 u0 (⇢))0 = ↵ N 1 ⇢ ⇢ ( log(⇢)) Multiplying by ⇢N

1

and integrating between 0 and ⇢, we obtain Z ⇢ N 1 ↵ t ⇢N 1 u0 (⇢) = dt. log(t) 0 21

22

2. REGULARITY RESULTS

Dividing by ⇢N 1 and integrating between 12 and ⇢ we then get (recalling that u( 12 ) = 0) ✓Z s N 1 ↵ ◆ Z 1 2 1 t u(⇢) = dt ds. N 1 s log(t) ⇢ 0 We are integrating on the set E = {(s, t) 2 R2 : ⇢  s  12 , 0  t  s}, t 1/2 ⇢

E ⇢

1/2

s

which, after exchanging t with s, becomes E = {(t, s) 2 R2 : 0  t  1 , max(⇢, t)  s  21 }, 2 s 1/2 E ⇢ ⇢

1/2

t

Exchanging the integration order, we then have ! Z 1 N 1 ↵ Z 1 2 t 2 ds u(⇢) = dt N 1 log(t) 0 max(⇢,t) s # Z 1 N 1 ↵ " ✓ ◆2 N 2 t 1 1 = (max(⇢, t))2 N dt N 2 0 log(t) 2 Z 1 N 1 ↵ Z 1 N 1 ↵ 2 t 2 t 2N 2 1 (max(⇢, t))2 = dt N 2 0 log(t) N 2 0 log(t)

N

dt.

Since ↵ < N , the first integral is bounded, so that it is enough to study the behaviour near zero of the function Z 1 N 1 ↵ 2 t (max(⇢, t))2 N v(⇢) = dt log(t) 0 Z ⇢ N 1 ↵ Z 1 1 ↵ 2 t t 2 N =⇢ dt + dt log(t) log(t) 0 ⇢ = ⇢2

N

w(⇢) + z(⇢).

1. EXAMPLES

23

It is easy to see (using the de l’Hopital rule), that if ↵ 6= 2 ⇢N ↵ ⇢2 ↵ , and z(⇢) ⇡ , log(⇢) log(⇢) as ⇢ tends to zero, so that, if ↵ 6= 2, w(⇢) ⇡

u(⇢) ⇡

⇢2 ↵ , log(⇢)

as ⇢ tends to zero. This implies that u belongs to L1 (⌦) if ↵ < 2, while it is in Lm (⌦), with m = ↵N 2 , if 2 < ↵ < N . Recalling that f belongs to Lp (⌦) with p = N↵ , we therefore have that u belongs to L1 (⌦) if f belongs to Lp (⌦), and p > N2 , while it is in Lm (⌦), with m = NN p2p , if f belongs to Lp (⌦), with 1 < p < N2 . If ↵ = 2, then w(⇢) ⇡

⇢N ↵ , log(⇢)

and z(⇢) ⇡ log( log(⇢)),

so that u is in every Lm (⌦), but not in L1 (⌦), if f belongs to Lp (⌦) with p = N2 . In this case (which we will not study in the following), it can be proved that e|u| belongs to L1 (⌦). Observe that if ↵ = N 2+2 , so that f belongs to L2⇤ (⌦), we get ⇤ that u belongs to L2 (⌦), which is exactly the results we already knew by Sobolev embedding. Also remark that the above example gives informations also if f does not belong to L2⇤ (⌦) (i.e., if N 2+2 < ↵ < N ), an assumption under which we do not have any existence results (yet!). If we want to take ↵ = N , we need to change the definition of f . We fix > 1 and define 1 f (x) = N , |x| ( log(|x|)) which is a function belonging to L1 (⌦). Performing the same calculations as above, we obtain Z 1 2 1 dt u(⇢) = , N 1 1 ⇢ t ( log(t)) 1 so that

1 , ( log(⇢)) 1 as ⇢ tends to zero. Observe that in this case f belongs to L1 (⌦) for every > 1, but u belongs to Lm (⌦), with m = NN ·12·1 = NN 2 if and only if > 2 N2 . If 1 <  2 N2 , the solution u belongs “only” to Lm (⌦), for every m < NN 2 . u(⇢) ⇡

⇢N 2

24

2. REGULARITY RESULTS

We leave to the interested reader the study of the case N = 2. 2. Stampacchia’s theorems The regularity results we are going to prove now show that the previous example is not just an example. We begin with a real analysis lemma. : R+ ! R+ be a nonincreasing

Lemma 2.2 (Stampacchia). Let function such that M (k) (2.12) (h)  , (h k) where M > 0,

> 1 and

8h > k > 0,

> 0. Then d = M (0)

(d) = 0, where 1

2

1

Proof. Let n in N and define dn = d(1

2 n

(dn )  (0) 2

(2.13)

.

1

n

). We claim that

.

Indeed, (2.13) is clearly true if n = 0; if we suppose that it is true for some n, then, by (2.12), (dn+1 ) 

M (dn )  M (0) 2 (dn+1 dn )

n 1

2(n+1) d

= (0) 2

(n+1) 1

,

which is (2.13) written for n + 1. Since (2.13) holds for every n, and since is non increasing, we have 0  (d)  lim inf n!+1

(dn )  lim

n!+1

(0)

1

2

n 1

= 0,

as desired. ⇤ The first result (due to Guido Stampacchia, see [13]), deals with bounded solutions for (1.9). Theorem 2.3 (Stampacchia). Let f belong to Lp (⌦), with p > N2 . Then the solution u of (1.9) belongs to L1 (⌦), and there exists a constant C, only depending on N , ⌦, p and ↵, such that kukL1 (⌦)  C kf kLp (⌦) .

(2.14)

Proof. Let k > 0 and choose v = Gk (u) as test function in (1.9) (Gk (s) has been defined in (1.7)). Defining Ak = {x 2 ⌦ : |u(x)| k} one then has, since rv = ru Ak by Theorem 1.19, and using (1.8) Z Z Z Z 2 ↵ |rGk (u)|  A(x)ru · ru Ak = f Gk (u) = f Gk (u). Ak





Ak

2. STAMPACCHIA’S THEOREMS

25

Using Sobolev inequality (in the left hand side), and H¨older inequality (in the right hand side), one has ✓Z ◆ 22⇤ ✓Z ◆ 21 ✓Z ◆ 21⇤ ⇤ ↵ 2⇤ 2⇤ 2⇤ |Gk (u)|  |f | |Gk (u)| . S22 Ak Ak Ak Simplifying equal terms, we thus have ✓ 2 ◆2 ⇤ ✓ Z ◆ 22⇤ Z ⇤ S ⇤ 2 |Gk (u)|2  |f |2⇤ . ↵ Ak Ak

Recalling that f belongs to Lp (⌦), and that p > 2⇤ since p > have (again by H¨older inequality) ✓ 2 ◆⇤ Z S2 kf kLp (⌦) 2 2⇤ 2⇤ 2⇤ |Gk (u)|  m(Ak ) 2⇤ p . ↵ Ak

N , 2

we

We now take h > k, so that Ah ✓ Ak , and Gk (u) h k on Ah . Thus, ✓ 2 ◆⇤ S2 kf kLp (⌦) 2 2⇤ 2⇤ 2⇤ (h k) m(Ah )  m(Ak ) 2⇤ p , ↵ which implies

m(Ah ) 



S22 kf kLp (⌦) ↵

◆2⇤

2⇤

2⇤

m(Ak ) 2⇤ p . (h k)2⇤

We define now (k) = m(Ak ), so that (h)  where M=



S22 kf kLp (⌦) ↵

◆2⇤

The assumption p > N2 implies have that (d) = 0, where

,

M (k) , (h k) =

2⇤ 2⇤

2⇤ , p

= 2⇤ .

> 1, so that applying Lemma 2.2, we



d2 = C(⌦, N, p) M. Since m(Ad ) = 0, we have |u|  d almost everywhere, which implies kukL1 (⌦)  d = C(N, ⌦, p, ↵) kf kLp (⌦) ,

as desired.



Remark 2.4. Observe that, in order to prove the previous theorem, we did not use two of the properties of the equation: that the matrix A is bounded from above (we only used its ellipticity) and, above all,

26

2. REGULARITY RESULTS

the fact that the equation was linear: in other words, the proof above also holds for every uniformly elliptic operator. The second results deals with the case of unbounded solutions. Theorem 2.5 (Stampacchia). Let f belong to Lp (⌦), with 2⇤  p < N2 . Then the solution u of (1.9) belongs to Lm (⌦), with m = p⇤⇤ = Np , and there exists a constant C, only depending on N , ⌦, p and N 2p ↵, such that kukLp⇤⇤ (⌦)  C kf kLp (⌦) .

(2.15)

Proof. We begin by observing that if p = 2⇤ , then p⇤⇤ = 2⇤ , so that the result is true in this limit case by the Sobolev embedding. Therefore, we only have to deal with the case p > 2⇤ . The original proof of Stampacchia used a linear interpolation theorem; i.e., it is typical of a linear framework. We are going to give another proof, following [5], which makes use of a technique that can also be applied in a nonlinear context. Let k > 0 be fixed, let > 1 and choose v = |Tk (u)|2 2 Tk (u) as test function in (1.9) (Tk (s) has been defined in (1.6)). We obtain, by Theorem 1.19, Z Z 2 2 (2 1) A(x)ru · rTk (u) |Tk (u)| = f |Tk (u)|2 2 Tk (u). ⌦



Using (1.8), and observing that ru = rTk (u) where rTk (u) 6= 0, we then have Z Z 2 2 2 ↵ (2 1) |rTk (u)| |Tk (u)|  |f | |Tk (u)|2 1 . ⌦



2

Since, again by Theorem 1.19, |rTk (u)| |Tk (u)|2 2 = we have Z Z ↵ (2 1) 2 |r|T (u)| |  |f | |Tk (u)|2 k 2 ⌦

1 2

|r|Tk (u)| |2 , 1

.



Using Sobolev inequality (in the left hand side), and H¨older inequality (in the right hand one), we obtain ✓Z ◆ 22⇤ ✓Z ◆ 10 p ↵ (2 1) 2⇤ (2 1)p0 |Tk (u)|  kf kLp (⌦) |Tk (u)| . 2 2 S2 ⌦ ⌦ ⇤⇤

We now choose so that 2⇤ = (2 1)p0 , that is = p2⇤ (as it is easily seen). With this choice, > 1 if and only if p > 2⇤ (which is

2. STAMPACCHIA’S THEOREMS

true). Since p < ✓Z ⌦

Observing that

27

N , 2

we also have 22⇤ > p10 , and so ◆ 22⇤ 10 p p⇤⇤ |Tk (u)|  C(N, ⌦, p, ↵) kf kLp (⌦) .

2 2⇤

1 p0

=

1 , p⇤⇤

we have therefore proved that

kTk (u)kLp⇤⇤ (⌦)  C(N, ⌦, p, ↵) kf kLp (⌦) ,

8k > 0.

Letting k tend to infinity, and using Fatou lemma (or the monotone convergence theorem), we obtain the result. ⇤ Remark 2.6. The results of theorems 2.3 and 2.5 are somehow “natural” if we make a mistake. . . Indeed, let u be the solution of u = f , with f in Lp (⌦). Then, if we read the equation, we have that u has two derivatives in Lp (⌦), so that it belongs to W02,p (⌦). By ⇤ Sobolev embedding, u then belongs to W01,p (⌦) and, again by Sobolev ⇤⇤ embedding, to Lp (⌦) (or to L1 (⌦) if p > N2 ). The “mistake” here is to deduce from the fact that the sum of (some) derivatives of u belongs to Lp (⌦), the fact that all derivatives are in the same space. Surprisingly, it turns out that, in the case of the laplacian, the fact that u belongs to Lp (⌦) actually implies that u is in W02,p (⌦) (this is the so-called Calderun-Zygmund theory), so that the “mistake” is not an actual one. . . Summarizing the results of this chapter, we have the following picture.

H01 (⌦)

?

1

H01 (⌦)

⇤⇤

2N N +2

Lp (⌦)

L1 (⌦)

Theorem 2.5

Theorem 2.3 N 2

p

We will deal with the “?” part in the forthcoming chapter (actually, in all the forthcoming chapters).

As stated in the chapter, the results proved here can be found in the paper by Stampacchia ([13]), also for the proof in the case 2⇤  p < N2 ,

28

2. REGULARITY RESULTS

and for p = N2 (for the interested reader), and in the paper by Boccardo and Giachetti ([5], for the “nonlinear” proof of Theorem 2.5.

CHAPTER 3

Existence via duality for measure data We are now going to deal with existence results for data which do not belong to L2⇤ (⌦) (i.e., they are not in H 1 (⌦)), so that neither LaxMilgram theorem nor minimization techniques can be applied. Before going on, we need some definitions. 1. Measures We recall that a nonnegative measure on ⌦ is a set function µ : B(⌦) ! [0, +1] defined on the -algebra B(⌦) of Borel sets of ⌦ (i.e., the smallest -algebra containing the open sets) such that µ(;) = 0 and such that +1 +1 [ X En = µ(En ), µ n=1

n=1

for every sequence {En } of disjoint sets in B(⌦). This latter property is called -additivity. A -additive measure µ is also -subadditive, i.e., one has +1 +1 [ X µ En  µ(En ), n=1

n=1

for every sequence {En } of sets in B(⌦). A nonnegative measure µ is also monotone, i.e., one has that A✓B

implies µ(A)  µ(B).

A measure µ is said to be regular if for every E in B(⌦) and for every " > 0 there exist an open set A" , and a closed set C" , such that C" ✓ E ✓ A" ,

µ(A" \ C" ) < ".

A measure µ is said to be bounded if µ(⌦) < +1. The set of nonnegative, regular, bounded measures on ⌦ will be denoted by M+ (⌦). We define the set of bounded Radon measures on ⌦ as M(⌦) = {µ1

µ2 , µi 2 M+ (⌦)}. 29

30

3. EXISTENCE VIA DUALITY FOR MEASURE DATA

Given a measure µ in M(⌦), there exists a unique pair (µ+ , µ ) in M+ (⌦) ⇥ M+ (⌦) such that µ = µ+

µ ,

and such there exist E + and E in B(⌦), disjoint sets, such that µ± (E) = µ(E \ E ± ),

8E 2 B(⌦).

The measures µ+ and µ are the positive and negative parts of the measure µ. Given a measure µ in M(⌦), the measure |µ| = µ+ + µ is said to be the total variation of the measure µ. If we define kµkM(⌦) = |µ|(⌦), the vector space M(⌦) becomes a Banach space, which turns out to be the dual of C00 (⌦). A bounded Radon measure µ is said to be concentrated on a Borel set E if µ(B) = µ(B \ E) for every Borel set B. In this case, we will write µ E. For example, we have µ± = µ E ± , with E ± as above. Given two Radon measures µ and ⌫, we say that µ is absolutely continuous with respect to ⌫ if ⌫(E) = 0 implies µ(E) = 0. In this case we will write µ << ⌫. Two Radon measures µ and ⌫ are said to be orthogonal if there exists a set E such that µ(E) = 0, and ⌫ = ⌫ E. In this case, we will write µ ? ⌫. For example, given a Radon measure µ, we have µ+ ? µ . Theorem 3.1. Let ⌫ be a nonnegative Radon measure. Given a Radon measure µ, there exists a unique pair (µ0 , µ1 ) of Radon measures such that µ = µ0 + µ1 , µ0 << ⌫, µ1 ? ⌫. Proof. Suppose that µ is nonnegative, and define A = {µ(E) : E 2 B(⌦), ⌫(E) = 0}.

Let ↵ = sup A, and let En be a maximizing sequence, i.e., a sequence of Borel sets such that lim µ(En ) = ↵,

n!+1

⌫(En ) = 0.

If we define E as the union of the En , clearly ⌫(E) = 0 (since ⌫ is subadditive), and µ(E) = ↵ (since µ(E) µ(En ) for every n). Define now µ1 = µ E, µ0 = µ µ1 . Clearly, µ1 ? ⌫ (since ⌫(E) = 0, and since µ1 is concentrated on E by definition). On the other hand, if ⌫(B) = 0, then µ0 (B) = 0; and

1. MEASURES

31

indeed, if it were µ0 (B) > 0 for some B 6= E, then 0 < µ0 (B) = µ(B)

µ(B \ E) = µ(B \ E),

so that B [ E will be such that ⌫(B [ E) = 0, and

µ(B [ E) = µ(E) + µ(B \ E) = ↵ + µ(B \ E) > ↵,

thus contradicting the definition of ↵. As for uniqueness, if µ = µ0 + µ1 = µ00 + µ01 , then µ0 µ00 = µ01 µ1 . If ⌫(B) = 0, we will have (µ1 µ01 )(B) = 0. Since µ1 µ01 is also orthogonal with respect to ⌫, this implies that (µ1 µ01 )(E) = 0 for every Borel set E, so that µ1 = µ01 , hence µ0 = µ00 . If the measure µ has a sign, it is enough to apply the result to µ+ and µ . ⇤ Examples of bounded Radon measures are the Lebesgue measure LN concentrated on a bounded set of RN , or the measure defined by ( 1 if x0 2 E, x0 (E) = 0 if x0 62 E, which is called the Dirac’s delta concentrated at x0 . We clearly have N x0 ? L . Another example of Radon measure is the measure defined by Z µ(E) =

f (x) dx,

E

with f a function in L1 (⌦). In this case µ << LN , and Z Z ± ± µ (E) = f (x) dx, |µ|(E) = |f (x)| dx. E

E

1

Therefore, L (⌦) ⇢ M(⌦). For sequences of measures, we have two notions of convergence: the weak⇤ : Z Z ' dµn ! ' dµ, 8' 2 C00 (⌦), ⌦



and the narrow convergence: Z Z ' dµn ! ' dµ, ⌦



8' 2 Cb0 (⌦).

For positive measures, narrow convergence is equivalent to weak⇤ convergence and convergence of the “masses” (i.e., µn (⌦) converges to µ(⌦)). If xn is a sequence in ⌦ which converges to a point x0 on @⌦, then xn converges to zero for the weak⇤ convergence (since the measure x0 is indeed the zero measure in ⌦), but not for the narrow convergence.

32

3. EXISTENCE VIA DUALITY FOR MEASURE DATA

Measures can be approximated (in either convergence) by sequences of bounded functions. Before dealing with existence results for elliptic equations with measure data, we will begin with a particular case. 2. Duality solutions for L1 data Let f and g be two functions in L1 (⌦), and let u and v be the solutions of ( ( div(A(x) ru) = f in ⌦, div(A⇤ (x) rv) = g in ⌦, u=0 on @⌦, v=0 on @⌦. where A⇤ is the transposed matrix of A (note that A⇤ satisfies (1.8) with the same constants as A). Since both u and v belong to H01 (⌦), u can be chosen as test function in the formulation of weak solution for v, and vice versa. One obtains Z Z Z Z ⇤ ug = A (x)rv · ru = A(x)ru · rv = f v. ⌦



In other words, one has



Z

ug = ⌦

Z



f v, ⌦

for every f and g in L1 (⌦), where u and v solve the corresponding problems with data f and g respectively. Clearly, both u and v belong to L1 (⌦) by Theorem 2.3, but we remark that the two integrals are well-defined also if f only belongs to L1 (⌦), and u only belongs to L1 (⌦) (always maintaining the assumption that g — and so v — is a bounded function). This fact inspired to Guido Stampacchia the following definition of solution for (1.9) if the datum is in L1 (⌦). Definition 3.2. Let f belong to L1 (⌦). A function u in L1 (⌦) is a duality solution of (1.8) with datum f if one has Z Z ug = f v, ⌦



for every g in L1 (⌦), where v is the solution of ( div(A⇤ (x) rv) = g in ⌦, v=0 on @⌦.

Theorem 3.3 (Stampacchia). Let f belong to L1 (⌦). Then there exists a unique duality solution of (1.8) with datum f . Furthermore, u belongs to Lq (⌦), for every q < NN 2 .

3. DUALITY SOLUTIONS FOR MEASURE DATA

Proof. Let p > by

N 2

33

and define the linear functional T : Lp (⌦) ! R hT, gi =

Z

f v. ⌦

By Theorem 2.3, the functional is well-defined; furthermore, since (2.14) holds, there exists C > 0 such that Z |hT, gi|  |f | |v|  kf kL1 (⌦) kvkL1 (⌦)  C kf kL1 (⌦) kgkLp (⌦) , ⌦

so that T is continuous on Lp (⌦). By Riesz representation Theorem 0 for Lp spaces, there exists a unique function up in Lp (⌦) such that Z hT, gi = up g, 8g 2 Lp (⌦). ⌦

Since L1 (⌦) ⇢ Lp (⌦), we have Z Z up g = hT, gi = f v, ⌦



8g 2 L1 (⌦),

so that up is a duality solution of (1.9), as desired. We claim that up does not depend on p; indeed, if for example p > q > N2 , we have Z Z Z up g = fv= uq g, 8g 2 L1 (⌦), ⌦





so that up = uq in L1 (⌦) (and so they are almost everywhere the same function). Therefore, there exists a unique function u which is a 0 duality solution of (1.9), and it belongs to Lp (⌦) for every p > N2 ; i.e., u belongs to Lq (⌦) for every q < NN 2 , as desired. ⇤ N q Remark that the fact that u belongs to L (⌦) for every q < N 2 is consistent with the results of the last part of Example 2.1 (the case ↵ = N ). 3. Duality solutions for measure data The case of L1 (⌦) data is only a particular one, since L1 (⌦) is a subset of M(⌦). However, recalling that M(⌦) is the dual of C 0 (⌦), the proof of Theorem 3.3 could be performed in exactly the same way if one knew that the solution of (1.9) were not only bounded, but also continuous on ⌦ if the datum is in Lp (⌦) with p > N2 . This is exactly the case if the boundary of ⌦ is sufficiently regular.

34

3. EXISTENCE VIA DUALITY FOR MEASURE DATA

Theorem 3.4 (De Giorgi). Let ⌦ be of class C 1 , and let f be in L (⌦), with p > N2 . Then the solution u of (1.9) with datum f belongs to C 0 (⌦), and there exists a constant Cp such that p

kukC 0 (⌦)  Cp kf kLp (⌦) . Thanks to the previous result, we thus have the following existence result. Theorem 3.5. Let µ be a measure in M(⌦). Then there exists a unique duality solution of (1.8) with datum µ, i.e., a function u in L1 (⌦) such that Z Z ug = v dµ, 8g 2 L1 (⌦), ⌦



where v is the solution of (1.9) with datum g and matrix A⇤ . Furthermore, u belongs to Lq (⌦), for every q < NN 2 . 4. Regularity of duality solutions If the datum f belongs to Lp (⌦), with 1 < p < 2⇤ , then the duality solution of (1.9) is more regular. Theorem 3.6. Let f belong to Lp (⌦), 1 < p < 2⇤ . Then the ⇤⇤ duality solution u of (1.8) belongs to Lp (⌦), p⇤⇤ = NN p2p . Proof. Let q = N p NNp+2p , and define T : Lq (⌦) ! R as in the proof of Theorem 3.3. We then have Z |hT, gi|  |f | |v|  kf kLp (⌦) kvkLp0 (⌦) . ⌦

By Theorem 2.5, the norm of v in Lr (⌦) is controlled by a constant times the norm of g in Ls (⌦), with r = s⇤⇤ . Taking r = p0 , this gives s = q; hence, |hT, gi|  C kf kLp (⌦) kgkLq (⌦) , 0

so that the function u which represents T belongs to Lq (⌦); since we have q 0 = NN p2p , the result is proved. ⇤ p⇤⇤ Once again, the fact that u belongs to L (⌦) is consistent with the results of Example 2.1 (the case N 2+2 < ↵ < N ). The picture at the end of Chapter 2 can now be improved as follows.

4. REGULARITY OF DUALITY SOLUTIONS

? N

LN

2

H01 (⌦)

? "

⇤⇤

(⌦)

Theorem 3.3 1

35

H01 (⌦)

⇤⇤

Lp (⌦)

Lp (⌦)

L1 (⌦)

Theorem 3.6

Theorem 2.5

Theorem 2.3

2N N +2

N 2

p

Once again we refer the reader to the paper by Stampacchia ([13]), where not only the existence and uniqueness of duality solutions is stated and proved, but also a representation formula of the kind Z u(x) = G(x, y) dµ(y) , ⌦

is given; here G(x, y) is the duality solution of the adjoint equation div(A⇤ (y)rG(x, y)) =

x

,

with homogeneous Dirichlet boundary conditions. The H¨older regularity paper by De Giorgi is [8].

CHAPTER 4

Existence via approximation for measure data The result of Theorem 3.5 is somewhat unsatisfactory: even though it proves that there exists a unique solution by duality of (1.9) if the datum belongs to M(⌦), it only states that the solution belongs to some Lebesgue space, and does not say anything about the gradient of such a solution. In order to prove gradient estimates on the duality solution we have to proceed in a di↵erent way. Theorem 4.1. Let µ belong to M(⌦). Then the unique duality solution of (1.8) with datum f belongs to W01,q (⌦), for every q < NN 1 . Proof. Let fn be a sequence of L1 (⌦) functions which converges to µ in M(⌦), with the property that kfn kL1 (⌦)  kµkM(⌦) , and let un be the unique solution in H01 (⌦) of ( div(A(x) run ) = fn in ⌦, un = 0 on @⌦. Let k > 0 and choose v = Tk (un ) as test function of the weak formulation for un . We obtain, recalling that run = rTk (un ) where rTk (un ) 6= 0, and using (1.8), Z Z Z ↵ |rTk (un )|2  A(x)run · rTk (un ) = fn Tk (un )  k kµkM(⌦) , ⌦





where in the last passage we have used that |Tk (un )|  k. Using Sobolev embedding in the left hand side, we have ✓Z ◆ 22⇤ ↵ 2⇤ |Tk (un )|  k kµkM(⌦) . S22 ⌦

Observing that |Tk (un )| = k on the set An,k = {x 2 ⌦ : |un (x)| we have 2 ↵ 2 k (m(An,k )) 2⇤  k kµkM(⌦) , 2 S2 which implies ⇣ kµk ⌘ N M(⌦) N 2 m(An,k )  C , k 37

k},

38

4. EXISTENCE VIA APPROXIMATION FOR MEASURE DATA

with C depending only on N and ↵. Now we fix {|run |

} = {|run |

so that

{|run |

Since

m({|run |

, |un | < k} [ {|run |

} ⇢ {|run |

, |un | < k}) 

1 2

we have m({|run |

}) 

Let q < Z ⌦

N N 1

Z



2

N N

})  C

2 1

|rTk (un )|2 

+C

⇣ kµk

= tq m(⌦) +

N N 1

k kµkM(⌦)

M(⌦)

k

1

⇣ kµk

M(⌦)

2

⌘ NN 2

,

,

N

1 1) kf kLN1 (⌦) N N 1

1

⌘ NN 1

.

|run |q

t}

t

C(q

k},

N 1 kµkM(⌦) , the above inequality

be fixed, and let t > 0. Then Z Z q q |run | = |run | + {|run |
, |un |

, |un | < k} [ An,k .

k kµkM(⌦)

for every k > 0. If we choose k = becomes m({|run |

> 0, and we have

m({|run | Z +1

q 1

}) d N N 1

d

t

1) kµkM(⌦) . N q tN 1 q

Choosing t = kµkM(⌦) , we obtain Z (4.16) |run |q  Cq kµkqM(⌦) , ⌦

so that un is bounded in W01,q (⌦), with q < NN 1 . Note that Cq diverges as q tends to NN 1 . Therefore, up to subsequences, un converges to some function uq weakly in W01,q (⌦) and strongly in L1 (⌦). Since un , being a weak solution, is such that Z Z un g = fn v, 8g 2 L1 (⌦), 8n 2 N, ⌦



we can pass to the limit as n tends to infinity to have Z Z uq g = v dµ, 8g 2 L1 (⌦), ⌦



4. EXISTENCE VIA APPROXIMATION FOR MEASURE DATA

39

so that uq (which belongs to W01,q (⌦) for some q < NN 1 ) is the duality solution of (1.9) with datum µ. This fact is true for every q < NN 1 , so that uq does not depend on q. It then follows that the duality solution u of (1.9) belongs to W01,q (⌦) for every q < NN 1 . ⇤ Remark 4.2. If µ = f is a function in L1 (⌦), and fn converges to f strongly in L1 (⌦), we have that fn is a Cauchy sequence in L1 (⌦). Thus, if we repeat the proof of the previous theorem working with un um , using the linearity of the operator, and “keeping track” of fn fm , we find that (4.16) becomes Z |run um |q  Cq kfn fm kqL1 (⌦) , ⌦ N . N 1

for every q < Since {fn } is a Cauchy sequence in L1 (⌦), it then follows that un is a Cauchy sequence in W01,q (⌦), for every q < NN 1 . This implies that un strongly converges to the solution u in W01,q (⌦), for every q < NN 1 , so that (up to subsequences) run converges to ru almost everywhere in ⌦.

Remark 4.3. If µ = f is a function in L1 (⌦), and we repeat the proof of the previous theorem working with un vn , where vn is the solution of (1.9) with a datum gn which converges to f in L1 (⌦), we find as before that Z (4.17) |r(un vn )|q  C kfn gn kqL1 (⌦) , ⌦

for every q < NN 1 . Since {fn gn } tends to zero in L1 (⌦), it then follows that un vn tends to zero in W01,q (⌦), for every q < NN 1 . In other words, the solution u found by approximation does not depend on the sequence we choose to approximate the datum f . We already knew this fact (since every approximating sequence converges to the duality solution which is unique), but this di↵erent proof may be useful if, for example, the di↵erential operator is not linear, but allows to prove (4.17) in some way, so that the concept of duality solution is not available. If the datum f is “more regular”, one expects solutions with an increased regularity. We already know, from Theorem 3.6, that the summability of u increases with the summability of f , but what happens to the gradient? Recall that if the datum f is “regular” (i.e., if it belongs to L2⇤ (⌦)), the summability of u increases with that of f , but the gradient of u always belongs to (L2 (⌦))N . Surprisingly, this is not the case for “bad” solutions, as the following theorem shows.

40

4. EXISTENCE VIA APPROXIMATION FOR MEASURE DATA

Theorem 4.4. Let f be a function in Lm (⌦), 1 < m < 2⇤ . Then ⇤ the duality solution of (1.9) belongs to W01,m (⌦), m⇤ = NN mm . Proof. Let fn = Tn (f ), and let un be the unique solution of ( div(A(x) run ) = fn in ⌦, un = 0 on @⌦. Since we already know that un will converge to the duality solution of (1.9), it is clear that in order to prove the result it will be enough to ⇤ prove an a priori estimate on un in W01,m (⌦). In order to do that, we fix h > 0 and choose 'h (un ) = T1 (Gh (un )) as test function in the weak formulation for un . If we define Bh = {x 2 ⌦ : h  |un |  h + 1}, and Ah = {x 2 ⌦ : |un | h} (for the sake of simplicity, we omit the dependence on n on the sets), we obtain, recalling (1.8), Z Z Z Z 2 ↵ |run |  A(x)run · r'h (un ) = fn 'h (un )  |f |. Bh





Ak

Let now 0 < < 1; we can then write Z Z +1 Z +1 X X |run |2 |run |2 1 =  |run |2 (1 + |u |) (1 + h) n ⌦ (1 + |u|) B B h h h=0 h=0 Z +1 +1 +1 Z X X X 1 1  |f | = |f | ↵(1 + h) ↵(1 + h) A B h k h=0 h=0 k=h +1 Z k X X 1 = |f | ↵(1 + h) B k k=0 h=0 Z Z +1 X 1 C |f | (1 + k) C |f |(1 + |un |)1 k=0

Bk

 C kf kLm (⌦)

✓Z





(1 + |un |)

(1

)m0



1 m0

.

Let now q > 1 be fixed. Then, by Sobolev and H¨older inequality, ✓Z ◆ qq⇤ Z Z q 1 |run |q q⇤ q 2 |u |  |ru | = q (1 + |un |) n n q Sq ⌦ ⌦ (1 + |un |) 2 ✓⌦Z ◆ 2q ✓Z ◆1 2q 2 q |run |  (1 + |un |) 2 q ⌦ (1 + |u|) ⌦ ✓Z ◆ q0 2m 0  C kf kLm (⌦) (1 + |un |)(1 )m ⌦ ✓Z ◆1 2q q ⇥ (1 + |un |) 2 q . ⌦

4. EXISTENCE VIA APPROXIMATION FOR MEASURE DATA

We now choose

41

and q in such a way that )m0 = q ⇤ =

(1

q 2

q

.

This implies N (2 q) Nm , q = m⇤ = . N q N m It is easy to see that 1 < m < 2⇤ implies 0 < < 1, as desired. We thus have q ✓Z ◆ qq⇤ ✓Z ◆1 2m Z ⇤ ⇤ |un |q C |run |q  C kf kLm (⌦) (1 + |un |)q . =





q q⇤



q 2m

Since > 1 is true (being equivalent to m < N2 ), we obtain ⇤ from the first and third term that un is bounded in Lq (⌦) (which ⇤⇤ is again Lm (⌦), see Theorem 2.5) by a constant depending (among other quantities) on the norm of f in Lm (⌦). Once un is bounded, the boundedness of |run | in Lq (⌦) (with q = m⇤ ) then follows comparing the second and the third term. ⇤ We can now draw the complete picture.

1, NN 1 "

W0

N

LN

2

"



W01,p (⌦)

(⌦)

⇤⇤

(⌦)

Theorem 4.1 1

H01 (⌦)

H01 (⌦)

⇤⇤

Lp (⌦)

Lp (⌦)

L1 (⌦)

Theorem 4.4

Theorem 2.5

Theorem 2.3

2N N +2

N 2

p

The proof by approximation of the existence can be found in [2] (or [3]), and the specific technique of obtaining first an estimate in Marcinkiewicz spaces on un , and then on run , can be found in [1], a paper worth reading for the definition of entropy solutions — and the proof of their uniqueness for L1 (⌦) data (the extension of this result to (some) measure data can be found in [4]).

CHAPTER 5

Nonuniqueness for distributional solutions If the datum µ is a measure, we have proved in Theorem 4.1 that the sequence un of approximating solutions is bounded in W01,q (⌦), for every q < NN 1 . Therefore, and up to subsequences, un weakly converges to the solution u in W01,q (⌦), for every q < NN 1 . Choosing a C01 (⌦) test function ' in the formulation (1.10) for un , we obtain Z Z A(x)run · r' = fn ', ⌦



which, passing to the limit, yields Z Z A(x)ru · r' = ' dµ 8' 2 C01 (⌦), ⌦



so that u is a solution in the sense of distributions of (1.9). Since the definition of solution in the sense of distributions can always be given (even when the notion of duality solution is unavailable due for example to the operator being nonlinear), one may wonder whether there is a way of proving uniqueness of distributional solutions (not passing through duality solutions). The following example is due to J. Serrin (see [12]). Let " > 0 and " A (x) be the symmetric matrix defined by xi xj a"ij (x) = ij + (a" 1) 2 . |x| If a" =

N 1 , "(N 2+")

then the function

w" (x) = x1 |x|1

N "

is a solution in the sense of distributions of (5.18)

div(A" (x) rw" ) = 0,

Indeed, if we rewrite w(x) = x1 |x|↵ and aij (x) =

ij

+

simple (but tedious) calculations imply wx1 (x) = |x|↵ + ↵x21 |x|↵ 2 , 43

in RN \ {0}. xi xj , |x|2

wxi (x) = ↵x1 xi |x|↵ 2 ,

44

5. NONUNIQUENESS FOR DISTRIBUTIONAL SOLUTIONS

so that N X

aij (x) wxi (x) =

i=1

1j |x|



+ (↵ + ↵ + )x1 xj |x|↵ 2 .

Therefore,

(A(x) rw)x1 = ↵x1 |x|↵ and

2

+ (↵ + ↵ + )[2x1 |x|↵

(A(x) rw)xj = (↵ + ↵ + )[x1 |x|↵

2

+ (↵

so that

div(A(x)rw) = x1 |x|↵ 2 [↵ + (N

2

+ (↵

2)x31 |x|↵ 4 ],

2)x1 x2j |x|↵ 4 ],

1 + ↵)(↵ + ↵ + )].

1 Given 0 < " < 1, if we choose ↵ = 1 N ", and = "(NN 2+") + 1, we have ↵ + (N 1 + ↵)(↵ + ↵ + ) = 0, so that w is a solution of (5.18) if x 6= 0. To prove that w" is a solution in the sense of distributions in the whole RN , let ' be a function in C01 (⌦), and observe that since |A" (x)rw" | belongs to L1 (⌦), we have Z Z " " A (x)rw · r' = lim+ A" (x)rw" · r'. r!0

RN

RN \Br (0)

Using Gauss-Green formula, and recalling that w" is a solution of the equation outside the origin, we have Z Z " " A (x)rw · r' = lim+ ' A" (x)rw" · ⌫ d , r!0

RN

@Br (0)

where ⌫ is the exterior normal to Br (0), i.e., ⌫ = xr . By a direct computation, x A" (x)rw" · = Qx1 |r|↵ 1 , r with Q = 1 + ↵ + ↵ + = N " 1 . Therefore, recalling the value of ↵, and rescaling to the unit sphere, Z Z N 11 " " ' A (x)rw · ⌫ d = '(ry)x1 d . " r" @B1 (0) @Br (0) Using again the Gauss-Green formula, we have Z Z '(ry)x1 d = r e1 · r'(rx) dx, @B1 (0)

B1 (0)

where e1 = (1, 0, . . . , 0). Therefore, since 0 < " < 1, we have Z Z " " 1 " lim+ ' A (x)rw · ⌫ d = lim+ r e1 · r'(rx) dx = 0, r!0

@Br (0)

r!0

B1 (0)

5. NONUNIQUENESS FOR DISTRIBUTIONAL SOLUTIONS

45

so that w" is a solution in the sense of distributions of div(A" rw" ) = 0 in the whole RN . Let now ⌦ = B1 (0) be the unit ball, and let v" be the unique solution of ( div(A" (x) rv " ) = div(A" (x) rx1 ) in ⌦, v" = 0 on @⌦, which exists since div(A" (x) rx1 ) is a regular function belonging to H 1 (⌦) (as can be easily seen). Therefore, the function z " = v " + x1 is the unique solution in H 1 (⌦) of the problem (

div(A" (x) rz " ) = 0 in ⌦, z " = x1 on @⌦,

so that the function u" = w" z " is a solution in the sense of distributions of ( div(A" (x) ru" ) = 0 in ⌦, u" = 0 on @⌦, which is not identically zero since z " belongs to H 1 (⌦), while w" belongs to W01,q (⌦) for every q < q" = N N1+" . Hence, the problem (

div(A" (x) ru) = f u=0

in ⌦, on @⌦,

has infinitely many solutions in the sense of distributions, which can be written as u = u + t u" , t in R, where u is the duality solution. One may observe that the solution found by approximation belongs to W01,q (⌦) for every q < NN 1 , while the solution of the above example belongs to W01,q (⌦) for some q < NN 1 , and that we are not allowed to take " = 0 since in this case a" diverges. Thus one may hope that there is still uniqueness of the solution obtained by approximation. However, it is possible to modify Serrin’s example in dimension N 3 (see [11]) to find a nonzero solution in the sense of distributions for ( div(B " (x) ru) = 0 in ⌦, u=0 on @⌦,

46

5. NONUNIQUENESS FOR DISTRIBUTIONAL SOLUTIONS

which belongs to W01,q (⌦), for every q < NN 1 . Here 0 1 x21 x1 x2 (a" 1) x2 +x2 0 C B 1 + (a" 1) x21 +x22 1 2 B C 2 B C " x x x 1 2 2 B (x) = B (a" 1) 2 2 C, 1 + (a 1) 0 2 +x2 " x +x x B C 1 2 1 2 @ A 0 0 I

N 2 where I is the identity , and a" is as above, with " fixed p matrix in R " 2 2 " 1 so that w (x) = x1 ( x1 + x2 ) belongs to W 1,q (R2 ) for every q < 2. On the other hand, in dimension N = 2 there is a unique solution in the sense of distributions belonging to W01,q (⌦), for every q < 2. The proof of this fact uses Meyers’ regularity theorem for linear equations with regular data.

Theorem 5.1 (Meyers). Let A be a matrix which satisfies (1.8). Then there exists p > 2 (p depends on the ratio ↵ and becomes larger as ↵ tends to 1) such that if u is a solution of (1.9) with datum f belonging to L1 (⌦), then u belongs to W01,p (⌦). Theorem 5.2. Let N = 2. Then there exists a unique solution in the sense of distributions of (1.9) such that u belongs to W01,q (⌦), for every q < 2. Proof. Since the equation is linear, it is enough to prove that if u is such that Z A(x) ru · r' = 0, 8' 2 C01 (⌦), ⌦

then u = 0. Since u belongs to W01,q (⌦), for every q < 2, it is enough to prove that Z A(x) ru · r' = 0, 8' 2 W01,p (⌦), ⌦

for some p > 2, implies u = 0. Let B be a subset of ⌦, and let vB be the solution of ( div(A⇤ (x) rvB ) = B in ⌦, v=0 on @⌦. By Meyers’ theorem, vB belongs to W01,p (⌦), for some p > 2. Hence Z A(x) ru · rvB = 0, ⌦

5. NONUNIQUENESS FOR DISTRIBUTIONAL SOLUTIONS

47

while, choosing u as test function in the weak formulation for vB (which can be done using a density argument and the regularity of rvB ), we have Z Z ⇤ A (x) rvB · ru = u. ⌦

Therefore,

and this implies u ⌘ 0.

B

Z

u = 0, B

8B ✓ ⌦,



Apart from the papers of Serrin and Prignet already quoted in the chapter, Meyers’ regularity result can be found in [9].

Bibliography [1] P. B´enilan, L. Boccardo, T. Gallou¨et, R. Gariepy, M. Pierre, J.L. Vazquez, An L1 theory of existence and uniqueness of solutions of nonlinear elliptic equations, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 22 (1995), 241–273. [2] L. Boccardo, T. Gallou¨et, Nonlinear elliptic and parabolic equations involving measure data, J. Funct. Anal., 87 (1989), 149–169. [3] L. Boccardo, T. Gallou¨et, Nonlinear elliptic equations with right hand side measures, Comm. Partial Di↵erential Equations, 17 (1992), 641–655. [4] L. Boccardo, T. Gallou¨et, L. Orsina, Existence and uniqueness of entropy solutions for nonlinear elliptic equations with measure data, Ann. Inst. H. Poincar´e Anal. Non Lin´eaire, 13 (1996), 539–551. [5] L. Boccardo, D. Giachetti, Some remarks on the regularity of solutions of strongly nonlinear problems, and applications, Ricerche Mat., 34 (1985), 309– 323. [6] H. Brezis, Analyse fonctionnelle, Masson, Paris, 1987. [7] G. Dal Maso, F. Murat, L. Orsina, A. Prignet, Renormalized solutions of elliptic equations with general measure data, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 28 (1999), 741–808. [8] E. De Giorgi, Sulla di↵erenziabilit` a e l’analiticit` a delle estremali degli integrali multipli regolari, Mem. Accad. Sci. Torino Cl. Sci. Fis. Mat. Natur., 3 (1957), 25–43. [9] N. G. Meyers, An Lp estimate for the gradient of solutions of second order elliptic divergence equations, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 3, (1963), 189–206. [10] M.M. Porzio, A uniqueness result for monotone elliptic problems, C. R. Math. Acad. Sci. Paris, 337 (2003), 313–316. [11] A. Prignet, Remarks on existence and uniqueness of solutions of elliptic problems with right-hand side measures, Rend. Mat., 15 (1995), 321–337. [12] J. Serrin, Pathological solutions of elliptic di↵erential equations, Ann. Scuola Norm. Sup. Pisa Cl. Sci., 18 (1964), 385–387. [13] G. Stampacchia, Le probl`eme de Dirichlet pour les ´equations elliptiques du second ordre `a coefficients discontinus, Ann. Inst. Fourier (Grenoble), 15 (1965), 189–258.


A nonlinear equation

Suppose now that we are given a continuous function a : R → R such that

(1)   α ≤ a(s) ≤ β,   ∀s ∈ R,

with 0 < α ≤ β real constants. We ask whether, given f in L^2(Ω), there exists a weak solution of the equation

(2)   −div(a(u) ∇u) = f in Ω,   u = 0 on ∂Ω,

that is, a function u in H_0^1(Ω) such that

∫_Ω a(u) ∇u · ∇v = ∫_Ω f v,   ∀v ∈ H_0^1(Ω).

Note that, since a(u) belongs to L^∞(Ω) for every measurable u, the integral on the left-hand side is well defined. A first, positive, answer follows from a simple change of variable: setting

A(s) = ∫_0^s a(t) dt,

and v = A(u), we have ∇v = A'(u) ∇u = a(u) ∇u, so that u solves (2) if and only if v solves the equation −Δv = f in H_0^1(Ω). Since the latter equation has one and only one solution, and A is invertible (being injective), u = A^{−1}(R(f)) solves (2). Unfortunately, the change-of-variable "trick" is no longer applicable when the function a also depends on x, which means that in the general case another route has to be followed.
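As a quick side check (this estimate is not spelled out in the original argument), the bounds in (1) show that A is bi-Lipschitz:

α |s − t| ≤ |A(s) − A(t)| = |∫_t^s a(τ) dτ| ≤ β |s − t|,   ∀s, t ∈ R,

so A is strictly increasing, maps R onto R, and A^{−1} is Lipschitz continuous with constant 1/α; this is exactly what makes the inversion u = A^{−1}(R(f)) legitimate.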

A wrong route

A first possible attempt consists, by analogy with the case of the linear operator, in considering the functional

J(u) = (1/2) ∫_Ω a(u) |∇u|^2 − ∫_Ω f u,   u ∈ H_0^1(Ω),

and checking whether Weierstrass' theorem can be applied to it. Since a(s) ≥ α > 0, one easily sees that if ‖u‖_{H_0^1(Ω)} diverges, then J(u) diverges. On the other hand, if u_n converges weakly to u in H_0^1(Ω), the weak lower semicontinuity of J is easily proved by passing to the limit in the identity

0 ≤ ∫_Ω a(u_n) |∇u_n|^2 + ∫_Ω a(u_n) |∇u|^2 − 2 ∫_Ω a(u_n) ∇u_n · ∇u,

and using the continuity (by the Rellich-Kondrachov theorem) of the term ∫_Ω f u. Weierstrass' theorem then yields the existence of a minimum u of J on H_0^1(Ω):

J(u) ≤ J(v),   ∀v ∈ H_0^1(Ω).

So far so good; but which equation can we write (in weak form) for u? Starting from the inequality J(u) ≤ J(u + tv) one arrives, after a few computations, at

0 ≤ (1/2) ∫_Ω [a(u + tv) − a(u)] |∇u|^2 + t ∫_Ω a(u + tv) ∇u · ∇v + (t^2/2) ∫_Ω a(u + tv) |∇v|^2 − t ∫_Ω f v.

Dividing by t > 0 and letting t tend to zero, one would find (if every step were legitimate)

0 ≤ (1/2) ∫_Ω a'(u) |∇u|^2 v + ∫_Ω a(u) ∇u · ∇v − ∫_Ω f v,

and the opposite inequality dividing by t < 0. Hence, u in H_0^1(Ω) would be such that

∫_Ω a(u) ∇u · ∇v + (1/2) ∫_Ω a'(u) |∇u|^2 v = ∫_Ω f v,   ∀v ∈ H_0^1(Ω).

The problem with the identity just written is twofold. First of all, the function a is only continuous, so that a'(s) need not exist. No great harm: let us add the assumption that a be differentiable with continuous derivative (note, however, that this assumption is superfluous for obtaining the existence of the minimum). Next, even if a is differentiable, the term

∫_Ω a'(u) |∇u|^2 v

need not be defined for every v in H_0^1(Ω): the term |∇u|^2 alone is only in L^1(Ω). This means that we have to "shrink" the class of test functions, passing from H_0^1(Ω) to H_0^1(Ω) ∩ L^∞(Ω); and this is not enough: we also have to add the assumption that a'(s) be bounded (the fact that a is bounded does not imply that its derivative is...). Under all these assumptions, every minimum u of J is such that

∫_Ω a(u) ∇u · ∇v + (1/2) ∫_Ω a'(u) |∇u|^2 v = ∫_Ω f v,   ∀v ∈ H_0^1(Ω) ∩ L^∞(Ω),

and is therefore a weak solution of the equation

−div(a(u) ∇u) + (1/2) a'(u) |∇u|^2 = f in Ω,   u = 0 on ∂Ω,

which is quite far from being (2): we took the wrong route, essentially because the derivative of a product is not the product of the derivatives!

Remark 1. Don't you find it strange that the minimum u is in H_0^1(Ω) even though the term a'(u) |∇u|^2 is only in L^1(Ω)? Is there no contradiction with what we saw when studying the problem with L^1(Ω) data?

The right route

Fix v in L^2(Ω). Then, since a(v) is a bounded and strictly positive function, the matrix A(x) = a(v(x)) I is uniformly elliptic and symmetric, and therefore there exists a unique solution u in H_0^1(Ω) of the problem

(3)   −div(a(v) ∇u) = f in Ω,   u = 0 on ∂Ω.

In other words, the map S : L^2(Ω) → H_0^1(Ω) given by S(v) = u is well defined. Moreover, since H_0^1(Ω) embeds into L^2(Ω), S can at this point be seen as a map from L^2(Ω) into itself. It is clear that a solution of (2) is a fixed point of S. Shall we try to apply the contraction mapping theorem? Let v and w be in L^2(Ω), and let u = S(v) and z = S(w). Choosing u − z as test function in the weak formulations for u and z, and subtracting, we obtain

∫_Ω a(v) ∇u · ∇(u − z) − ∫_Ω a(w) ∇z · ∇(u − z) = 0,

whence

∫_Ω a(v) ∇(u − z) · ∇(u − z) = ∫_Ω [a(w) − a(v)] ∇z · ∇(u − z).

Recalling the ellipticity of a, and applying Hölder's inequality, we get

α ‖u − z‖_{H_0^1(Ω)}^2 ≤ ‖a(v) − a(w)‖_{L^∞(Ω)} ‖z‖_{H_0^1(Ω)} ‖u − z‖_{H_0^1(Ω)}.

Recalling that ‖z‖_{H_0^1(Ω)} ≤ R (an a priori bound that will be proved below), we then obtain

α ‖u − z‖_{H_0^1(Ω)} ≤ R ‖a(v) − a(w)‖_{L^∞(Ω)},

and, using Poincaré's inequality,

‖S(v) − S(w)‖_{L^2(Ω)} = ‖u − z‖_{L^2(Ω)} ≤ B ‖a(v) − a(w)‖_{L^∞(Ω)}.

Can we proceed from here without further assumptions? On the right-hand side we would need to control a(v) − a(w) by means of v − w, and without assuming that a is Lipschitz continuous we cannot. No great harm: let us add the assumption that a be Lipschitz continuous. We then obtain

‖S(v) − S(w)‖_{L^2(Ω)} ≤ B L ‖v − w‖_{L^∞(Ω)}.

Are we happy? Again, no: there is no way to control the norm of v − w in L^∞(Ω) by the norm of v − w in L^2(Ω) (indeed, the opposite inequality holds). Can we then replace the L^2(Ω) norm on the left-hand side with the L^∞(Ω) norm? Does the proof of Stampacchia's theorem work? First of all, we have to take f more regular than L^2(Ω) in order to have S(v) in L^∞(Ω) (we need f in L^p(Ω), with p > N/2). Then, choosing G_k(u − z) as test function in the two equations and subtracting, we obtain

∫_Ω a(v) ∇(u − z) · ∇G_k(u − z) = ∫_Ω [a(w) − a(v)] ∇z · ∇G_k(u − z).

Using ellipticity on one side and Hölder's inequality on the other, we arrive at

α ∫_Ω |∇G_k(u − z)|^2 ≤ ( ∫_{A_k} |a(v) − a(w)|^2 |∇z|^2 )^{1/2} ( ∫_Ω |∇G_k(u − z)|^2 )^{1/2},

where A_k = {|u − z| > k}. Simplifying and squaring, we obtain

∫_Ω |∇G_k(u − z)|^2 ≤ C ∫_{A_k} |a(v) − a(w)|^2 |∇z|^2.

Unfortunately, the right-hand side contains a function which, even assuming a Lipschitz continuous, is only in L^1(Ω); in other words, we cannot apply Hölder's inequality a second time to obtain a power of the measure of A_k (which was the key to making Stampacchia's method, and the L^∞(Ω) estimate, work). In short, the contraction method does not work: there is no way to estimate a norm of the solutions by means of the same norm of the "data". Fortunately, the contraction mapping theorem is not the only fixed point theorem available...

Theorem 2 (Schauder). Let K be a closed, bounded, convex subset of a Banach space, and let S : K → K be a continuous map such that S(K) is compact. Then S has at least one fixed point.

In order to apply Schauder's theorem in our case, we first observe that there exists R > 0 such that ‖S(v)‖_{L^2(Ω)} ≤ R for every v in L^2(Ω).

Indeed, choosing u = S(v) as test function in the weak formulation of (3), and using ellipticity, we have

(4)   α ∫_Ω |∇u|^2 ≤ ∫_Ω a(v) |∇u|^2 = ∫_Ω f u ≤ ‖f‖_{L^2(Ω)} ‖u‖_{L^2(Ω)}.

Recalling Poincaré's inequality, we obtain

‖u‖_{L^2(Ω)}^2 ≤ C ‖f‖_{L^2(Ω)} ‖u‖_{L^2(Ω)},

and hence the claim, with R = C ‖f‖_{L^2(Ω)}. In this way, the ball of L^2(Ω) of radius R is a closed, bounded, convex set which is invariant under S. Let us now show that S is continuous. Let {v_n} be a sequence converging to v in L^2(Ω), and let u_n = S(v_n) be the corresponding solutions of (3). From (4), recalling that the norm of u_n in L^2(Ω) is smaller than R, we obtain

α ∫_Ω |∇u_n|^2 ≤ R ‖f‖_{L^2(Ω)},

from which the boundedness of u_n in H_0^1(Ω) follows. We can therefore extract subsequences from v_n and u_n in such a way that v_n converges to v almost everywhere and u_n converges to some u weakly in H_0^1(Ω) and strongly in L^2(Ω). At this point it is possible to pass to the limit in n in the identities

∫_Ω a(v_n) ∇u_n · ∇z = ∫_Ω f z,   ∀z ∈ H_0^1(Ω),

to prove that u is a solution of (3) with coefficient a(v), that is, that u = S(v) (by uniqueness of the solution of (3)). Since the limit u does not depend on the extracted subsequences, the whole sequence u_n = S(v_n) converges to u = S(v), and hence S is continuous. It remains to prove the compactness of S(K). But we have just proved that S(K) is bounded in H_0^1(Ω): by the Rellich-Kondrachov theorem, S(K) is precompact in L^2(Ω), as desired. In conclusion, applying Schauder's theorem to the operator S, one proves that there exists at least one solution of (2).
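To make the construction of the map S concrete, here is a minimal numerical sketch (an illustration added to these notes, not part of the original argument): it discretizes the one-dimensional analogue −(a(v) u')' = f on (0, 1), u(0) = u(1) = 0, by finite differences and simply iterates v ↦ S(v). The specific choices of a, f, grid size and tolerance are arbitrary; Schauder's theorem guarantees that a fixed point exists, while this plain iteration is not guaranteed to converge in general.

import numpy as np

# One-dimensional illustration of the map S: v -> u, where u solves
#   -(a(v) u')' = f  on (0, 1),  u(0) = u(1) = 0,
# discretized by centered finite differences on a uniform grid.

n = 200                       # number of interior grid points
h = 1.0 / (n + 1)

def a(s):
    # a continuous coefficient with 0 < alpha <= a(s) <= beta
    return 1.0 + 0.5 * np.sin(s)

f = np.ones(n)                # datum f

def S(v):
    # coefficient evaluated at the cell interfaces (averages of v at adjacent nodes)
    v_ext = np.concatenate(([0.0], v, [0.0]))
    a_half = a(0.5 * (v_ext[:-1] + v_ext[1:]))          # length n + 1
    # tridiagonal matrix of the operator -(a u')' / h^2
    main = (a_half[:-1] + a_half[1:]) / h**2
    off = -a_half[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)

# fixed-point iteration u_{k+1} = S(u_k)
u = np.zeros(n)
for k in range(50):
    u_new = S(u)
    done = np.max(np.abs(u_new - u)) < 1e-12
    u = u_new
    if done:
        break

print("fixed-point residual:", np.max(np.abs(S(u) - u)))

The coefficient is evaluated at the cell interfaces so that the discrete operator stays symmetric and positive definite, mirroring the ellipticity and symmetry used above.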

Eigenvalue problems

We already know that if Ω is a bounded open subset of R^N (N ≥ 2), and if f belongs to L^2(Ω) (or, more generally, to L^{2N/(N+2)}(Ω)), then there exists a unique solution u in H_0^1(Ω) of the elliptic equation

−Δu = f in Ω,   u = 0 on ∂Ω,

in the sense that

∫_Ω ∇u · ∇v = ∫_Ω f v,   ∀v ∈ H_0^1(Ω).

We also know that this solution is the unique minimum point in H_0^1(Ω) of the Dirichlet functional

I(u) = (1/2) ∫_Ω |∇u|^2 − ∫_Ω f u,

a minimum point which exists since the functional is coercive:

lim_{‖u‖_{H_0^1(Ω)} → +∞} I(u) = +∞,

and weakly lower semicontinuous:

u_n ⇀ u   ⟹   I(u) ≤ lim inf_{n→+∞} I(u_n).

We now ask whether the same minimization technique works to prove the existence of solutions of the problem

(1)   −Δu = λ u + f in Ω,   u = 0 on ∂Ω,

with λ ∈ R. The answer is clearly affirmative if λ < 0: considering the functional

I_λ(u) = (1/2) ∫_Ω |∇u|^2 − (λ/2) ∫_Ω u^2 − ∫_Ω f u,

it is clearly coercive (since, λ being negative, I_λ ≥ I) and weakly lower semicontinuous, since

u_n ⇀ u   ⟹   ∫_Ω u_n^2 → ∫_Ω u^2,

the embedding of H_0^1(Ω) into L^2(Ω) being compact by Rellich's theorem.

Problems arise in the case λ > 0: the functional is unbounded from below if λ is too "large". Indeed, let v be in H_0^1(Ω), v ≠ 0, and let λ* > 0 be such that

λ* ∫_Ω v^2 ≥ 2 ∫_Ω |∇v|^2.

Clearly, such a value of λ* always exists if v ≠ 0. If we compute I_{λ*}(tv), with t ∈ R, we have

I_{λ*}(tv) = (t^2/2) ∫_Ω |∇v|^2 − (λ* t^2/2) ∫_Ω v^2 − t ∫_Ω f v ≤ −(λ* t^2/4) ∫_Ω v^2 − t ∫_Ω f v,

and therefore

lim_{t→+∞} I_{λ*}(tv) = −∞.

On the other hand, recalling Poincaré's inequality,

λ_P ∫_Ω v^2 ≤ ∫_Ω |∇v|^2,   ∀v ∈ H_0^1(Ω),

it is clear that if 0 < λ < λ_P the functional I_λ is both coercive and weakly lower semicontinuous, so that a minimum in H_0^1(Ω) exists. To understand what happens in the general case, let us first tackle the problem with f ≡ 0, that is, the equation

(2)   −Δu = λ u in Ω,   u = 0 on ∂Ω.

Clearly u ≡ 0 is a solution of the problem, but a comparison with the one-dimensional situation shows that there are other possibilities. If indeed we consider the equation

−u'' = λ u in (0, π),   u(0) = u(π) = 0,

we find only the zero solution if λ is not the square of an integer, while if λ = n^2 for some n in N we have the infinitely many solutions A sin(nx), as A varies in R. To understand what happens in any dimension, let us modify the minimization problem, turning it from a "free" problem into a "constrained" one. Let then

m = inf { ∫_Ω |∇u|^2 : u ∈ H_0^1(Ω), ∫_Ω u^2 = 1 }.



We first observe that 0 ≤ m < +∞; let then u_n be such that

∫_Ω |∇u_n|^2 → m,   ∫_Ω u_n^2 = 1.

Clearly u_n is bounded in H_0^1(Ω), so that we can extract a subsequence (still denoted by u_n) weakly convergent in H_0^1(Ω) to a function u. Since the embedding of H_0^1(Ω) into L^2(Ω) is compact, the fact that u_n has norm 1 in L^2(Ω) implies that u also has norm 1 in L^2(Ω). In other words, the weak limit of u_n still satisfies the constraint. Recalling the weak lower semicontinuity of the norm, we then have

m ≤ ∫_Ω |∇u|^2 ≤ lim inf_{n→+∞} ∫_Ω |∇u_n|^2 = m,

from which it follows that m is a minimum, and

m = ∫_Ω |∇u|^2.

Since u ≠ 0 (having norm 1 in L^2(Ω)), we have m > 0. One can moreover prove that the minimum is unique up to the sign. Observe now that if v is in H_0^1(Ω), with v ≠ 0, then v/‖v‖_{L^2(Ω)} has norm 1 in L^2(Ω), and hence

m ≤ ∫_Ω |∇v|^2 / ‖v‖_{L^2(Ω)}^2,

from which we obtain

m ∫_Ω v^2 ≤ ∫_Ω |∇v|^2.

Since this inequality is obviously true also for v ≡ 0, we have

(3)   m ∫_Ω v^2 ≤ ∫_Ω |∇v|^2,   ∀v ∈ H_0^1(Ω).
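As a one-dimensional sanity check (an example added here, not present in the original notes): for Ω = (0, π), inequality (3) reads m ∫_0^π v^2 ≤ ∫_0^π (v')^2; the choice v(x) = sin x, for which ∫_0^π v^2 = π/2 = ∫_0^π (v')^2, shows that m ≤ 1, and in fact m = 1 there, with equality exactly for the multiples of sin x.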



In other words, m = λ_P, the Poincaré constant. Let now u be one of the two functions with norm 1 in L^2(Ω) realizing equality in (3); write u = u_+ − u_−, define v(t) = u_+ − t u_−, and compute

F(t) = ∫_Ω |∇v(t)|^2 − m ∫_Ω v^2(t).

Clearly F(t) ≥ 0 for every t, and F(1) = 0. Since

∫_Ω ∇u_+ · ∇u_− = 0 = ∫_Ω u_+ u_−,

we have

(4)   F(t) = [ ∫_Ω |∇u_+|^2 − m ∫_Ω u_+^2 ] + t^2 [ ∫_Ω |∇u_−|^2 − m ∫_Ω u_−^2 ].

Since

∫_Ω |∇u_−|^2 − m ∫_Ω u_−^2 ≥ 0

by (3), from (4) one easily sees that

min_{t ∈ R} F(t) = F(0) = ∫_Ω |∇u_+|^2 − m ∫_Ω u_+^2,

attained only at t = 0, unless

∫_Ω |∇u_−|^2 − m ∫_Ω u_−^2 = 0,

in which case F is constant. Since F(1) = 0 is another minimum point, it follows that F(t) must be identically zero. Therefore

∫_Ω |∇u_−|^2 − m ∫_Ω u_−^2 = 0,

from which it follows that

F(t) = ∫_Ω |∇u_+|^2 − m ∫_Ω u_+^2 = 0.

In other words, if u has norm 1 in L^2(Ω) and realizes equality in (3), then u_+ and u_− also realize the equality. If u_+ ≠ 0, then normalizing u_+ in L^2(Ω) we obtain a function with norm 1 which realizes equality in (3), and hence (by the uniqueness of u up to the sign) u_+ coincides, up to normalization, with ±u: it follows that u has constant sign in Ω, and we shall therefore assume it positive. Which equation does the minimum u solve? Since we are dealing with constrained minimization, Lagrange multipliers will appear. Carrying out the computations, one finds that u is a solution of

−Δu = m u in Ω,   u ≥ 0, u ≢ 0 in Ω,   u = 0 on ∂Ω.

For "historical" reasons, we define λ_1 = m and φ_1 = u, which we call respectively the first eigenvalue and the first eigenfunction of the Laplacian in Ω. Therefore,

(5)   −Δφ_1 = λ_1 φ_1 in Ω,   φ_1 ≥ 0, φ_1 ≢ 0 in Ω,   φ_1 = 0 on ∂Ω,   ‖φ_1‖_{L^2(Ω)} = 1.

If we drop the normalization in L^2(Ω), then t φ_1 also solves the same equation, for any t in R.

In other words, the solutions of the problem form a vector space of dimension at least 1, which we shall call E_1. By what we said before, the dimension of E_1 is exactly 1 (note the parallel with the case N = 1). We now recall that a theorem due to Stampacchia states that if v solves the equation

−Δv = f in Ω,   v = 0 on ∂Ω,

then v is in L^∞(Ω) if f belongs to L^q(Ω) with q > N/2, while it belongs to L^{q**}(Ω), with q** = Nq/(N − 2q), if f belongs to L^q(Ω) with 2N/(N+2) ≤ q < N/2. Suppose now N = 3: since φ_1 is in H_0^1(Ω), by the Sobolev embedding φ_1 is in L^{2*}(Ω), with 2* = 2N/(N−2) = 6. Since 6 > 3/2 = N/2, Stampacchia's theorem applied to (5) with f = λ_1 φ_1 immediately gives that φ_1 is in L^∞(Ω). If N = 4, then φ_1 is in L^4(Ω), and since 4 > 2 = N/2 we again get φ_1 in L^∞(Ω); the same happens for N = 5. If N = 6, instead, φ_1 is in L^3(Ω), and 3 = N/2. By the second result of Stampacchia (again applied with f = λ_1 φ_1), however, since φ_1 belongs for instance to L^2(Ω), φ_1 belongs to L^{2**}, with 2** = 6 > 3 = N/2. Applying now the first result of Stampacchia, one finds that φ_1 belongs to L^∞(Ω) also for N = 6. In higher dimension one can repeat the same procedure (the higher the dimension, the more times it has to be repeated, each application of the second result raising the exponent according to 1/q** = 1/q − 2/N), finding that, in any dimension, φ_1 is in L^∞(Ω). At this point one uses De Giorgi's theorem to prove that φ_1 is Hölder continuous and, from there, the classical results to prove that φ_1 is C^{2,α}(Ω), then C^{4,α}(Ω), then... then C^∞(Ω), and then analytic. This method, called bootstrap, works every time (or almost...) one deals with an elliptic equation having the solution "on both sides" of the equality sign.

We therefore know how to solve (1) for λ < λ_1 (whatever f is) and for λ = λ_1 (if f ≡ 0). What can we say if λ = λ_1 and f ≠ 0? Suppose that a solution u exists: choosing φ_1 as test function in the equation for u, and u as test function in the equation for φ_1, we have

∫_Ω ∇u · ∇φ_1 = λ_1 ∫_Ω u φ_1 + ∫_Ω f φ_1,

and

∫_Ω ∇u · ∇φ_1 = λ_1 ∫_Ω u φ_1,

from which we deduce that necessarily

∫_Ω f φ_1 = 0.

This condition (of orthogonality in L^2(Ω)) is therefore necessary for the existence of a solution. To show that it is also sufficient, let us first consider

λ_2 = inf { ∫_Ω |∇u|^2 : u ∈ H_0^1(Ω), ∫_Ω u^2 = 1, ∫_Ω u φ_1 = 0 }.





One can prove (as before) that λ_2 is a minimum and that λ_2 > λ_1. Writing the equation for the minimum, one finds a solution φ_2 of the problem

(6)   −Δφ_2 = λ_2 φ_2 in Ω,   φ_2 ≢ 0 in Ω,   φ_2 = 0 on ∂Ω,

a solution which we may think of as normalized in L^2(Ω). Clearly we lose positivity (φ_2, having to be orthogonal to φ_1, must change sign), but not boundedness (the bootstrap technique still applies). The space E_2 of all solutions of (6) can be shown to have finite dimension (but, in general, greater than or equal to 1). Let us now consider the functional

J(u) = (1/2) ∫_Ω |∇u|^2 − (λ_1/2) ∫_Ω u^2 − ∫_Ω f u,   u ∈ H_0^1(Ω), ∫_Ω u φ_1 = 0.

One easily checks that J is weakly lower semicontinuous and that it is coercive on E_1^⊥, since

∫_Ω |∇u|^2 ≥ λ_2 ∫_Ω u^2,   ∀u ∈ E_1^⊥,

and λ_2 > λ_1. It follows that J admits a minimum on E_1^⊥, and that this minimum v solves

−Δv = λ_1 v + f in Ω,   v = 0 on ∂Ω,

in E_1^⊥, in the sense that v belongs to E_1^⊥ and

∫_Ω ∇v · ∇w = λ_1 ∫_Ω v w + ∫_Ω f w,   ∀w ∈ E_1^⊥.

Let now z be in H_0^1(Ω). We can always write z = t φ_1 + w, with w in E_1^⊥ and t in R. Choosing then w = z − t φ_1 as test function, we find

∫_Ω ∇v · ∇(z − t φ_1) = λ_1 ∫_Ω v (z − t φ_1) + ∫_Ω f (z − t φ_1).

Since v and f are in E_1^⊥, we have

∫_Ω ∇v · ∇φ_1 = λ_1 ∫_Ω v φ_1 = ∫_Ω f φ_1 = 0,

and therefore

∫_Ω ∇v · ∇z = λ_1 ∫_Ω v z + ∫_Ω f z,   ∀z ∈ H_0^1(Ω),

so that v solves (1) with λ = λ_1. Obviously v + s φ_1 also solves the same equation, which therefore has infinitely many solutions. The procedure can then continue, defining λ_3 as

λ_3 = min { ∫_Ω |∇u|^2 : u ∈ (E_1 ⊕ E_2)^⊥, ∫_Ω u^2 = 1 },

and determining φ_3 and E_3. In the end one has the following result.

Theorem 1. There exists an increasing and diverging sequence λ_n (the eigenvalues) of positive real numbers such that:
• For every n in N there exists at least one not identically zero solution (an eigenfunction) of

−Δφ_n = λ_n φ_n in Ω,   φ_n = 0 on ∂Ω.

• λ_1 is the Poincaré constant of Ω.
• The set of solutions of the above equation is a finite-dimensional vector space E_n (the eigenspace), made of functions in L^∞(Ω). The space E_1 has dimension 1, and every function of E_1 has constant sign in Ω.
• If φ belongs to E_n and ψ belongs to E_m (with n ≠ m), then φ and ψ are orthogonal.
• One has

H_0^1(Ω) = ⊕_{n=1}^{+∞} E_n.

• If λ ≠ λ_n for every n in N, problem (1) has one and only one solution for every f in L^2(Ω).
• If λ = λ_n for some n in N, problem (1) has a solution if and only if f in L^2(Ω) belongs to E_n^⊥; in that case, there exists a unique solution of (1) in E_n^⊥.

Once it is proved that H_0^1(Ω) can be written as the direct sum of the E_n, the last two statements of the theorem are easy to prove.

Indeed, let f be in L^2(Ω), which we write as

f = Σ_{n=1}^{+∞} f_n φ_n,   f_n = ∫_Ω f φ_n.

Looking for a solution of (1) of the form

u = Σ_{n=1}^{+∞} α_n φ_n,

and recalling that −Δφ_n = λ_n φ_n for every n, one arrives at

Σ_{n=1}^{+∞} ( (λ_n − λ) α_n − f_n ) φ_n = 0,

from which (if λ is not an eigenvalue) one obtains

α_n = f_n / (λ_n − λ),

while if λ = λ_n is an eigenvalue there is a solution, obtained by choosing α_n = 0, if and only if f_n = 0 (and hence f is in E_n^⊥).
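As a concrete illustration of these formulas (an example added here, not present in the original notes), take Ω = (0, π) ⊂ R: then λ_n = n^2 and one may take φ_n(x) = (2/π)^{1/2} sin(nx), so that for λ ≠ n^2 for every n the solution of (1) is

u(x) = Σ_{n=1}^{+∞} [ f_n / (n^2 − λ) ] φ_n(x),   f_n = ∫_0^π f φ_n,

while for λ = k^2 a solution exists only if f_k = 0, and it is then determined up to adding multiples of sin(kx).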

Maximum principle

We already know that if λ = 0 and f ≥ 0, then the solution of (1) is nonnegative. It is easily shown that the same property still holds if λ < 0. What happens if λ > 0? Let us start by assuming λ ≠ λ_1, since if λ = λ_1 no function f ≥ 0, f ≢ 0, can be orthogonal to φ_1 (which is itself positive). As a first case, consider λ < λ_1, and let u be the unique solution of (1). Choosing u_− as test function, we obtain

∫_Ω ∇u · ∇u_− = λ ∫_Ω u u_− + ∫_Ω f u_−.

Since (by the definition of λ_1)

∫_Ω ∇u_− · ∇u_− ≥ λ_1 ∫_Ω u_−^2,

and recalling that ∇u · ∇u_− = −|∇u_−|^2 and u u_− = −u_−^2, we get

0 ≤ ∫_Ω f u_− ≤ (λ − λ_1) ∫_Ω u_−^2 ≤ 0,

from which we obtain u_− = 0, and hence u ≥ 0. Therefore, the maximum principle holds if λ < λ_1. As it turns out, this is the only case in which the maximum principle is valid in general. If λ > λ_1 one can show that it fails and, in fact, if λ is larger than λ_1 but close to λ_1, the so-called anti-maximum principle holds: if f ≥ 0, then the solution u of (1) is negative. That this can indeed happen is easily seen by choosing f = φ_1, to which corresponds the solution u = φ_1/(λ_1 − λ), which is negative if λ > λ_1.

Given the importance of λ_1 for the maximum principle, and since λ_1 depends on Ω (unlike, for instance, the Sobolev constant, which depends only on the dimension), it may be interesting to ask how the first eigenvalue depends on the "size" of Ω. To understand what happens, let us consider the example of balls of radius r. Call λ_r the first eigenvalue in B_r(0) and φ_r ≥ 0 the corresponding first eigenfunction:

−Δφ_r = λ_r φ_r in B_r(0),   φ_r = 0 on ∂B_r(0).

Defining v(x) = φ_r(r x) for x in B_1(0), we have

−Δv(x) = −r^2 (Δφ_r)(rx) = r^2 λ_r φ_r(rx) = r^2 λ_r v(x).

It follows that v(x) is an eigenfunction of the Laplacian in B_1(0), with eigenvalue λ* = r^2 λ_r. Since v is nonnegative, and only the first eigenfunction is nonnegative, we have λ* = λ_1 and v = α φ_1, with α > 0 and φ_1 the first eigenfunction of the Laplacian in B_1(0). We then have λ_1 = r^2 λ_r, and hence

λ_r = λ_1 / r^2.

Therefore, the smaller the radius, the larger the first eigenvalue (and hence the larger the interval of values of λ for which the maximum principle holds). In general, if Ω_1 ⊂ Ω_2 are two open sets, then λ_1(Ω_1) ≥ λ_1(Ω_2). Indeed, the first eigenfunction φ_{1,Ω_1} of Ω_1, extended by zero to Ω_2 \ Ω_1, belongs to H_0^1(Ω_2) and has norm 1 in L^2(Ω_2). Therefore

λ_1(Ω_1) = ∫_{Ω_1} |∇φ_{1,Ω_1}|^2 = ∫_{Ω_2} |∇φ_{1,Ω_1}|^2 ≥ λ_1(Ω_2).

From these results it follows that if Ω ⊂ B_r(0), then λ_1(Ω) ≥ λ_r = λ_1/r^2. If r is small, then the first eigenvalue of Ω is large, and it is "easier" for the maximum principle to hold.
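A quick one-dimensional check of this scaling law (an example added here for illustration): on the interval (−r, r) the first eigenfunction is cos(πx/(2r)), since −(cos(πx/(2r)))'' = (π/(2r))^2 cos(πx/(2r)), the function vanishes at ±r and is positive inside; hence λ_r = π^2/(4r^2) = λ_1/r^2, with λ_1 = π^2/4 the first eigenvalue of (−1, 1).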

Symmetry of solutions

The fact that the maximum principle holds when the domain is small is the first instance we have met in which the "size" of Ω affects the qualitative properties of the solutions. The second one, by far the more famous, is a symmetry result for solutions due to Gidas, Ni and Nirenberg. The theorem is the following.

Theorem 2. Let Ω = B_1(0) and let f : R → R be a Lipschitz continuous function. Let u be a (classical) solution of

−Δu = f(u) in Ω,   u > 0 in Ω,   u = 0 on ∂Ω.

Then u(x) = u(|x|) and u is decreasing (as a function of |x|).

Proof. For simplicity, suppose we are in R^2 (the proof is analogous in dimension greater than 2). Denoting the variables by (x, y), let −1 < λ ≤ 0 and let r_λ be the line x = λ. Define

Ω_λ = { (x, y) ∈ R^2 : x ≥ λ, (2λ − x, y) ∈ Ω },

and define

ũ(x, y) = u(2λ − x, y),   (x, y) ∈ Ω_λ.

[Figure 1: Ω_λ is the region shaded in red.]

Considering u and ũ on Ω_λ, we have u = ũ if x = λ, while on the part of the boundary of Ω_λ "coming" from the boundary of Ω we have u > 0 (by assumption) and ũ = 0 (since u is zero on the boundary of Ω). Since, differentiating ũ twice with respect to x, the sign changes twice, we have

−Δu = f(u) in Ω_λ,   −Δũ = f(ũ) in Ω_λ,

and hence, setting w = u − ũ,

−Δw = f(u) − f(ũ) = [ (f(u) − f(ũ)) / (u − ũ) ] w = g w in Ω_λ,

where

g(x, y) = [ f(u(x, y)) − f(ũ(x, y)) ] / [ u(x, y) − ũ(x, y) ]

is a bounded function, f being Lipschitz continuous. In short,

−Δw = g w in Ω_λ,   w ≥ 0 on ∂Ω_λ.

If λ is sufficiently close to −1, the first eigenvalue of Ω_λ is larger than the Lipschitz constant of f (indeed, Ω_λ can be enclosed in a ball of sufficiently small radius r), so that the maximum principle holds in Ω_λ. It follows that w > 0 in Ω_λ if λ is "close" to −1, and hence

u(x, y) > ũ(x, y) = u(2λ − x, y),   (x, y) ∈ Ω_λ.

Let us then define

E = { −1 < λ ≤ 0 : w_μ(x, y) > 0 in Ω_μ for every −1 < μ ≤ λ }.

Thanks to what we have just said, E is nonempty, and hence −1 < λ* = sup E ≤ 0. If λ* < 0, using the fact that w_μ > 0 for every μ < λ*, one proves (using the maximum principle once again) that w_{λ*+ε} > 0 for every ε such that λ* + ε ≤ 0, thus reaching a contradiction. It follows that λ* = 0 (the same reasoning cannot be repeated beyond 0 because, if λ > 0, Ω_λ is no longer contained in Ω), and hence that

u(x, y) > u(2λ − x, y) in Ω_λ,   for every −1 < λ < 0.

Letting λ tend to zero, we obtain u(x, y) ≥ u(−x, y) for every (x, y) in Ω_0 (and Ω_0 is the half-disc). Repeating the argument with λ > 0 (and reflecting Ω_λ to the left of the line x = λ) we obtain u(x, y) ≤ u(−x, y) for every (x, y) in Ω_0, and hence u(x, y) = u(−x, y) for every (x, y) in Ω. The same technique can be repeated for any direction of the plane: if r is the line through the origin with that direction, then u is symmetric with respect to r (since Ω is divided into two mirror halves). Now let P = (0, r) (with 0 < r < 1), and let Q be another point at distance r from the origin. Considering the line through the origin which bisects the angle between OP and OQ, P and Q are symmetric with respect to it, and hence u(P) = u(Q).

We have thus proved that if x^2 + y^2 = r^2, then u(x, y) = u(0, r), and hence u is radial. The fact that it is decreasing follows from the fact that if r_1 > r_2 then (0, r_1) and (0, r_2) are symmetric with respect to the line y = (r_1 + r_2)/2, and hence u(0, r_2) > u(0, r_1). □

Note that the moving plane method, that is, moving the line x = λ to the right, works as long as the set Ω_λ is contained in Ω, since in this case it is possible to compare u with its reflection. If, therefore, Ω is not a ball but, for instance, an ellipse, one can move the line x = λ up to λ = 0, and the line y = λ up to λ = 0, but no other lines, since the ellipse lacks symmetry in other directions. In any case, one can prove that u is symmetric with respect to the axes, that is, the solution inherits the symmetry of the set. The importance of this theorem is clear: if Ω = B_1(0), the problem of solving the equation

−Δu = f(u) in Ω,   u > 0 in Ω,   u = 0 on ∂Ω,

is reduced to the problem of solving the ordinary differential equation

−(1/r^{N−1}) (r^{N−1} u'(r))' = f(u(r)),   u(1) = 0,   u'(0) = 0.

Sublinear equations

In this and in the next section we shall study the existence of positive solutions for elliptic equations of the form

−Δu = g(u) in Ω,   u > 0 in Ω,   u = 0 on ∂Ω,

in the particular cases g(s) = s^θ (with 0 < θ < 1) and g(s) = s^p (with p > 1). Before tackling the first of the two cases, let us try to understand why it is reasonable to expect solutions. Or, better, why for certain functions g (for instance g(s) = λ e^s with λ large) the problem may have no solution. Exactly as in the case g(s) = λ_1 s + f(x), we choose φ_1 as test function in the problem solved by u, and vice versa. We obtain

∫_Ω g(u) φ_1 = ∫_Ω ∇u · ∇φ_1 = ∫_Ω ∇φ_1 · ∇u = λ_1 ∫_Ω u φ_1,

from which we deduce

∫_Ω [ g(u) − λ_1 u ] φ_1 = 0.

If g(s) > λ_1 s for every s > 0 (or, symmetrically, g(s) < λ_1 s for every s > 0), it is then clear that the previous identity cannot be satisfied: the problem has no positive solution.
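For instance (a quick check added here for illustration): if g(s) = λ e^s with λ ≥ λ_1, then g(s) ≥ λ_1 e^s ≥ λ_1 (1 + s) > λ_1 s for every s ≥ 0, so the integrand g(u) − λ_1 u is bounded from below by λ_1 > 0 and the identity above cannot hold: for such λ there is no positive solution.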

[Figure 2: two functions g for which there is no existence.]

If instead g(s) is s^θ (or s^p), the line λ_1 s cuts the graph of g.

[Figure 3: two functions g for which there could be existence.]

This does not mean that solutions of the problem certainly exist, although it is often true that the number of times the line λ_1 s cuts the graph of g gives information on the number of positive solutions of the corresponding problem.

This is the case for the equation

(7)   −Δu = u^θ in Ω,   u > 0 in Ω,   u = 0 on ∂Ω,

with 0 < θ < 1. To prove that (7) admits at least one solution, let us consider the following functional:

I(u) = (1/2) ∫_Ω |∇u|^2 − (1/(θ+1)) ∫_Ω u_+^{θ+1},   u ∈ H_0^1(Ω).

Observe that I is well defined (since θ + 1 < 2) and weakly lower semicontinuous (again because, θ + 1 being smaller than 2, the embedding of H_0^1(Ω) into L^{θ+1}(Ω) is compact). Moreover, by Hölder's and Poincaré's inequalities,

I(u) ≥ (1/2) ∫_Ω |∇u|^2 − (|Ω|^{(1−θ)/2}/(θ+1)) ( ∫_Ω u_+^2 )^{(θ+1)/2} ≥ C_1 ‖u‖_{H_0^1(Ω)}^2 − C_2 ‖u‖_{H_0^1(Ω)}^{θ+1}.

Since θ + 1 < 2, I(u) diverges to +∞ when the norm of u in H_0^1(Ω) diverges, and hence I is coercive. One thus obtains the existence of a minimum u of I, a minimum which solves

−Δu = u_+^θ in Ω,   u = 0 on ∂Ω.

Since u_+^θ is nonnegative, the maximum principle implies that u is nonnegative. Therefore, u_+ being equal to u, we have

−Δu = u^θ in Ω,   u ≥ 0 in Ω,   u = 0 on ∂Ω.

Note that in order to prove that I is well defined, weakly lower semicontinuous and coercive, we only used the fact that s^{θ+1} grows at infinity less than s^2 (if you wish, that s^θ is smaller than λ_1 s at infinity). Have we found our solution? Not necessarily: the function u ≡ 0 solves the same problem, and we are looking for nontrivial solutions. How can we be sure that the minimum point u of I is not the zero function? Let us compute I(t φ_1) for t > 0: we obtain

I(t φ_1) = (t^2/2) ∫_Ω |∇φ_1|^2 − (t^{θ+1}/(θ+1)) ∫_Ω φ_1^{θ+1} = C_1 t^2 − C_2 t^{θ+1}.

Since θ + 1 < 2, we have C_1 t^2 − C_2 t^{θ+1} < 0 for t sufficiently close to zero. Therefore, the minimum value of I is strictly negative. Since I(0) = 0, u ≡ 0 cannot be the minimum, and hence the solution we found is not identically zero. Note that in order to prove that the minimum is nontrivial we used the fact that s^{θ+1} is larger than s^2 near zero (if you wish, that s^θ is larger than λ_1 s as s tends to zero).

Superlinear equations

Very different from the case g(s) = s^θ (with 0 < θ < 1) is the case g(s) = s^p (with p > 1). In this case, too, the line λ_1 s intersects the graph of g, so that one might expect the existence of solutions. As in the sublinear case, let us try the variational approach and define

I(u) = (1/2) ∫_Ω |∇u|^2 − (1/(p+1)) ∫_Ω u_+^{p+1},   u ∈ H_0^1(Ω).

Clearly, I is not well defined for every value of p: we must indeed have p + 1 ≤ 2* = 2N/(N−2), that is, p ≤ (N+2)/(N−2). If p is larger, a function u in H_0^1(Ω) need not belong to L^{p+1}(Ω). With this restriction, we ask whether I is weakly lower semicontinuous. The answer is affirmative, but p cannot be equal to (N+2)/(N−2), since the embedding of H_0^1(Ω) into L^{2*}(Ω) is not compact. Hence I is weakly lower semicontinuous if and only if p < (N+2)/(N−2). For such values of p, is I coercive? No: indeed, computing I(t φ_1) for t > 0, we have

I(t φ_1) = (t^2/2) ∫_Ω |∇φ_1|^2 − (t^{p+1}/(p+1)) ∫_Ω φ_1^{p+1} = C_1 t^2 − C_2 t^{p+1}.

Since p + 1 > 2, we have

lim_{t→+∞} I(t φ_1) = −∞,

so that I is not only non-coercive, but also unbounded from below. On the other hand, if t < 0 we have

I(t φ_1) = (t^2/2) ∫_Ω |∇φ_1|^2,

and hence I is unbounded from above. What can be done? As already in the linear case, let us try constrained minimization: define

m = inf { ∫_Ω |∇u|^2 : u ∈ H_0^1(Ω), ∫_Ω |u|^{p+1} = 1 }.

If p < (N+2)/(N−2), one easily sees (using the compactness of the embedding of H_0^1(Ω) into L^{p+1}(Ω)) that m is a minimum, attained at some function v ≠ 0. Writing the equation solved by v, we obtain

−Δv = m |v|^{p−1} v in Ω,   v = 0 on ∂Ω.

Setting u = m^{1/(p−1)} v, one easily checks that u satisfies the equation

−Δu = |u|^{p−1} u in Ω,   u = 0 on ∂Ω,

and u ≠ 0. Arguing as in the linear case, one sees that u can be chosen with constant sign, thus obtaining a solution of

(8)   −Δu = u^p in Ω,   u > 0 in Ω,   u = 0 on ∂Ω.
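As a quick verification of the rescaling step (a computation added here, not carried out in the text): with u = m^{1/(p−1)} v one has

−Δu = m^{1/(p−1)} (−Δv) = m^{1/(p−1)} m |v|^{p−1} v = m^{p/(p−1)} |v|^{p−1} v = |u|^{p−1} u,

since |u|^{p−1} u = (m^{1/(p−1)})^p |v|^{p−1} v = m^{p/(p−1)} |v|^{p−1} v.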

What happens in the case p ≥ (N+2)/(N−2)? To prove that in general no solution exists, we need a preliminary result (important in its own right).

Theorem 3 (Pohozaev identity). Let u be a solution of

−Δu = g(u) in Ω,   u ≥ 0 in Ω,   u = 0 on ∂Ω.

Then, denoting by G(s) the primitive of g vanishing at zero, one has

(9)   ∫_Ω [ (1 − N/2) g(u) u + N G(u) ] = (1/2) ∫_∂Ω |∇u|^2 (x · ν) dσ.

Proof. Let us multiply the equation by x · ∇u and integrate (note that this function does not vanish on the boundary of Ω, so boundary terms will appear). We have

(A) = −∫_Ω div(∇u) (x · ∇u) = ∫_Ω g(u) (x · ∇u) = (B).



Using the divergence theorem, we have

(B) = ∫_Ω x · ∇G(u) = ∫_Ω [ div(x G(u)) − div(x) G(u) ] = ∫_∂Ω G(u) (x · ν) dσ − N ∫_Ω G(u) = −N ∫_Ω G(u),

since G(u) = 0 on ∂Ω and div(x) = N. Before working on (A), let us carry out some computations:

div(∇u) (x · ∇u) = div((x · ∇u) ∇u) − ∇(x · ∇u) · ∇u.

Now

∇(x · ∇u) · ∇u = Σ_{i=1}^N [ ∂/∂x_i ( Σ_{j=1}^N x_j ∂u/∂x_j ) ] ∂u/∂x_i
= Σ_{i=1}^N Σ_{j=1}^N ( δ_{ij} ∂u/∂x_j + x_j ∂^2u/∂x_i ∂x_j ) ∂u/∂x_i
= Σ_{i=1}^N (∂u/∂x_i)^2 + (1/2) Σ_{j=1}^N x_j ∂/∂x_j ( Σ_{i=1}^N (∂u/∂x_i)^2 )
= |∇u|^2 + (1/2) x · ∇(|∇u|^2)
= |∇u|^2 + (1/2) div(x |∇u|^2) − (N/2) |∇u|^2
= (1 − N/2) |∇u|^2 + (1/2) div(x |∇u|^2).

Therefore,

div(∇u) (x · ∇u) = div((x · ∇u) ∇u) − (1 − N/2) |∇u|^2 − (1/2) div(x |∇u|^2),

whence

(A) = −∫_∂Ω (x · ∇u)(∇u · ν) dσ + (1/2) ∫_∂Ω |∇u|^2 (x · ν) dσ + (1 − N/2) ∫_Ω |∇u|^2.

On ∂Ω we can write ∇u = (∇_τ u, ∇_ν u), where ∇_τ denotes the derivatives along the tangent plane to ∂Ω and ∇_ν the normal derivative. Since u = 0 on ∂Ω, all its tangential derivatives vanish, and hence

∇u = (∇u · ν) ν on ∂Ω.

Therefore, again on ∂Ω,

|∇u|^2 (x · ν) = |∇u · ν|^2 (x · ν) = (x · ∇u)(∇u · ν),

and hence

(A) = −(1/2) ∫_∂Ω |∇u|^2 (x · ν) dσ + (1 − N/2) ∫_Ω |∇u|^2.

Since

∫_Ω |∇u|^2 = ∫_Ω g(u) u,

and (A) = (B), we then obtain

∫_Ω [ (1 − N/2) g(u) u + N G(u) ] = (1/2) ∫_∂Ω |∇u|^2 (x · ν) dσ,

as desired. □



Suppose now that Ω is star-shaped with respect to the origin (so that x · ν ≥ 0 for every x on ∂Ω) and take g(s) = s^p. Since G(s) = s^{p+1}/(p+1), the Pohozaev identity implies

∫_Ω ( 1 − N/2 + N/(p+1) ) u^{p+1} = (1/2) ∫_∂Ω |∇u|^2 (x · ν) dσ ≥ 0.

If p + 1 > 2N/(N−2), the quantity 1 − N/2 + N/(p+1) is negative, and hence u ≡ 0: there are no positive solutions of (8) if p > (N+2)/(N−2). If p + 1 = 2N/(N−2), then the quantity 1 − N/2 + N/(p+1) is zero, and hence |∇u| ≡ 0 on ∂Ω. This fact (together with u being a solution of (8)) again implies that u ≡ 0: there are no positive solutions of (8) if p = (N+2)/(N−2).
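For the record, here is the exponent arithmetic used twice above (spelled out for convenience): for N > 2,

1 − N/2 + N/(p+1) < 0   ⟺   N/(p+1) < (N−2)/2   ⟺   p + 1 > 2N/(N−2)   ⟺   p > (N+2)/(N−2),

with equality throughout exactly when p = (N+2)/(N−2), which is the critical case treated last.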