Edited by Valentin Fadeev, Wednesday, 15 Sept 2010, 23:37
Moving on with Stepanov's book, I have reached equations of the following form (in 3 variables):
$$P\,dx + Q\,dy + R\,dz = 0,$$
where $P$, $Q$, $R$ are sufficiently differentiable functions of $x, y, z$.
Exercise 205:
Integrating factor:
Will we always be lucky enough to have an appropriate factor to cast the equation into a full differential form? The book gives a negative answer, setting out a very specific condition on the coefficients.
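As an illustration of the integrating-factor device (my own example, not Exercise 205), consider the Pfaffian equation below; multiplying by $\mu = \frac{1}{xyz}$ turns it into a full differential:

```latex
yz\,dx + 2xz\,dy + xy\,dz = 0
\;\xrightarrow{\;\times\,\mu \,=\, 1/(xyz)\;}\;
\frac{dx}{x} + \frac{2\,dy}{y} + \frac{dz}{z}
= d\ln\!\left(x y^{2} z\right) = 0
\quad\Longrightarrow\quad x y^{2} z = C.
```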
Assume the equation does have a solution, and that this solution is a 2-dimensional manifold, i.e. it has the form:
$$F(x, y, z) = 0,$$
or (locally, at least):
$$z = \varphi(x, y).$$
Then
$$dz = \frac{\partial \varphi}{\partial x}\,dx + \frac{\partial \varphi}{\partial y}\,dy.$$
On the other hand, by virtue of the equation (assuming $R$ does not vanish identically):
$$dz = -\frac{P}{R}\,dx - \frac{Q}{R}\,dy.$$
Comparing the coefficients:
$$\frac{\partial \varphi}{\partial x} = -\frac{P}{R}, \qquad \frac{\partial \varphi}{\partial y} = -\frac{Q}{R}.$$
This is an overdetermined system: one unknown function and two equations, which in general has no solution. The integrability condition can be obtained by equating the mixed second derivatives; however, I will quote the geometrical argument, which may also shed light on some facts presented below.
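For completeness, here is a sketch of the mixed-derivative route: since the "full" $y$- and $x$-derivatives must take the dependence $z = \varphi(x, y)$ into account, equality of the mixed second derivatives $\varphi_{xy} = \varphi_{yx}$ reads

```latex
\left(\frac{\partial}{\partial y} - \frac{Q}{R}\frac{\partial}{\partial z}\right)\frac{P}{R}
=
\left(\frac{\partial}{\partial x} - \frac{P}{R}\frac{\partial}{\partial z}\right)\frac{Q}{R},
```

and clearing the denominators (multiplying through by $R^2$) yields the integrability condition $P\left(R_y - Q_z\right) + Q\left(P_z - R_x\right) + R\left(Q_x - P_y\right) = 0$.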
Consider an infinitesimal shift along the manifold from $(x, y)$ to $(x + dx,\, y)$. Then $z$ will take the value:
$$z_1 = z - \frac{P}{R}\,dx.$$
From here we move to the point with coordinates $(x + dx,\, y + dy)$ without leaving the manifold. The new value of $z$:
$$z_{12} = z_1 - \frac{Q}{R}\bigg|_{(x+dx,\;y,\;z_1)}\,dy.$$
Similarly, if we first move along $dy$ and then along $dx$, we arrive at the value:
$$z_{21} = z_2 - \frac{P}{R}\bigg|_{(x,\;y+dy,\;z_2)}\,dx, \qquad z_2 = z - \frac{Q}{R}\,dy.$$
Now we require that whichever route is chosen, it leads to the same point on the manifold (up to terms of the second order): $z_{12} = z_{21}$. Expanding and collecting the $dx\,dy$ terms, this leads to the following equation:
$$P\left(\frac{\partial R}{\partial y} - \frac{\partial Q}{\partial z}\right) + Q\left(\frac{\partial P}{\partial z} - \frac{\partial R}{\partial x}\right) + R\left(\frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y}\right) = 0.$$
Now that was the book, here are some thoughts about this theory.
1) First, the condition definitely points to some categories of vector analysis. Indeed, the factors multiplying $P$, $Q$ and $R$ are the components of the rotor (curl) of the vector field $\mathbf{F} = (P, Q, R)$. Hence, the condition can be rewritten in a more compact form:
$$\mathbf{F} \cdot \operatorname{rot} \mathbf{F} = 0.$$
At first sight this should hold trivially for any $\mathbf{F}$, for the rotor is by definition perpendicular to the plane defined by the field and the tangent direction. However, this would only be true if the solution were indeed a 2-dimensional manifold. If there is no such solution, then the whole derivation becomes invalid.
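The compact condition $\mathbf{F} \cdot \operatorname{rot} \mathbf{F} = 0$ is easy to check mechanically. A minimal sketch with sympy (the two test forms are my own examples, not from the book):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def integrability(P, Q, R):
    """F . rot F for the field F = (P, Q, R) of the Pfaffian equation
    P dx + Q dy + R dz = 0; the equation is integrable iff this vanishes."""
    return sp.simplify(
        P * (sp.diff(R, y) - sp.diff(Q, z))
        + Q * (sp.diff(P, z) - sp.diff(R, x))
        + R * (sp.diff(Q, x) - sp.diff(P, y))
    )

# Integrable: yz dx + 2xz dy + xy dz = 0 has the integral x*y**2*z = C
print(integrability(y*z, 2*x*z, x*y))                    # 0

# Non-integrable: -y dx + dz = 0 (the standard contact form)
print(integrability(-y, sp.Integer(0), sp.Integer(1)))   # 1
```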
2) There is another reason why I prefer the geometric argument over comparing the mixed derivatives. The logic is very similar to that used to derive Cauchy-Riemann conditions for the analytic function. Remarkably enough, we can also apply complex formalism to the above problem. Consider the following operator:
where .
Assuming again that the solution exists in the form and using the above shortcuts for partial derivatives we obtain:
Now apply to both parts, where is complex conjugate:
The notation stands for the "full partial derivative", in which the dependence of $z$ on $x$ and $y$ is taken into account. Replacing the derivatives with their values, we obtain:
where $\Delta$ is the Laplacian. On the other hand:
Hence
which gives the above integrability condition.
So this is another example of how recourse to complex values can reveal deep facts behind otherwise unfamiliar-looking expressions, and formulate them in a nice compact form as well.
In conclusion, here is an example where the integrability condition does not hold:
Edited by Valentin Fadeev, Tuesday, 6 July 2010, 23:04
This one gave me some hard time:
where when
The difference from "ordinary" linear equations is that the coefficients here depend on $z$. At the same time, the right-hand side is identically zero, so this is not strictly an inhomogeneous equation.
I made several false starts trying to find some nice substitution to absorb the coefficients, or alternatively to pull out a missing term to construct an inhomogeneous equation. Finally, I found a hint in the book N.M. Günter, Integration of First-Order Partial Differential Equations, ONTI/GTTI, Leningrad/Moscow (1934). In the solution of an equation of similar structure it was suggested to use the standard method, treating $z$ as a constant when integrating the associated system.
So here we go. Searching for a solution in an implicit form:
The associated system:
In the same book it is hinted that one particular integral of this system follows from the last identity. Another integral can be found using the first identity:
Treating $z$ as a constant, we can simplify the expressions:
Therefore the general solution can be written in the form:
At this point the result is already within reach of sheer guesswork; however, we proceed in a more lengthy, yet rigorous way.
The system of the first integrals is written as follows:
Using the initial condition :
Solving for and :
Now, following the standard method already mentioned above:
Suppressing the solution which does not satisfy the initial conditions, we finally obtain:
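To make the device concrete, here is a sketch on a structurally similar equation of my own choosing (not the book's exercise): $z\,z_x + z_y = 0$, whose coefficients depend on $z$ and whose right-hand side is zero. The associated system $\frac{dx}{z} = \frac{dy}{1} = \frac{dz}{0}$ gives $z = C_1$ from the last identity and, treating $z$ as a constant, $x - zy = C_2$. With the initial condition $z = x$ at $y = 0$ this yields $z = \frac{x}{1+y}$, which sympy confirms:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Candidate solution of z*z_x + z_y = 0 with z(x, 0) = x, obtained from
# the first integrals z = C1 and x - z*y = C2 (the second found by
# treating z as a constant while integrating).
zsol = x / (1 + y)

residual = sp.simplify(zsol * sp.diff(zsol, x) + sp.diff(zsol, y))
print(residual)            # 0  (the PDE is satisfied)
print(zsol.subs(y, 0))     # x  (the initial condition is satisfied)
```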
Edited by Valentin Fadeev, Monday, 21 June 2010, 19:59
Struggling on:
$$(mz - ny)\frac{\partial z}{\partial x} + (nx - lz)\frac{\partial z}{\partial y} = ly - mx.$$
This is an inhomogeneous equation. Following the theory, we try to find the general solution in an implicit form:
It can be proven that a solution found in this form is indeed general, i.e. we are not losing any solutions along the way.
Now we can write the associated system (in the symmetrical form):
$$\frac{dx}{mz - ny} = \frac{dy}{nx - lz} = \frac{dz}{ly - mx}.$$
Or, more conveniently for this example, in the canonical form:
$$\frac{dx}{dt} = mz - ny, \qquad \frac{dy}{dt} = nx - lz, \qquad \frac{dz}{dt} = ly - mx.$$
Multiplying these equations by $l$, $m$ and $n$ respectively and summing, we get:
$$l\frac{dx}{dt} + m\frac{dy}{dt} + n\frac{dz}{dt} = 0 \quad\Longrightarrow\quad lx + my + nz = C_1.$$
This is one of the first integrals of the system. Now, multiplying the equations by $x$, $y$ and $z$ respectively and summing, we obtain:
$$x\frac{dx}{dt} + y\frac{dy}{dt} + z\frac{dz}{dt} = 0 \quad\Longrightarrow\quad x^2 + y^2 + z^2 = C_2.$$
Therefore, the general solution has the following form:
$$\Phi\left(lx + my + nz,\; x^2 + y^2 + z^2\right) = 0.$$
Geometrically, the first integral represents a plane in 3-dimensional space whose normal vector has the direction coefficients $(l, m, n)$. The second integral represents a sphere centered at the origin. Therefore, the characteristics of the equation (the curves resulting from the intersection of these surfaces) are circles centered on the line passing through the origin with the above-mentioned direction coefficients.
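The two first integrals can also be verified mechanically: along the characteristic system $\dot x = mz - ny$, $\dot y = nx - lz$, $\dot z = ly - mx$ (the sign convention is my assumption; either choice gives the same integrals), both $lx + my + nz$ and $x^2 + y^2 + z^2$ have zero time derivative:

```python
import sympy as sp

l, m, n, x, y, z = sp.symbols('l m n x y z')

# Right-hand sides of the characteristic system: dr/dt = a x r, a = (l, m, n)
dx, dy, dz = m*z - n*y, n*x - l*z, l*y - m*x

# d/dt of the plane integral l*x + m*y + n*z
plane_rate = sp.simplify(l*dx + m*dy + n*dz)

# d/dt of the sphere integral x**2 + y**2 + z**2 (up to a factor of 2)
sphere_rate = sp.simplify(x*dx + y*dy + z*dz)

print(plane_rate)    # 0
print(sphere_rate)   # 0
```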
Indeed, another way to look at it is to rewrite the equation in the following form:
$$\boldsymbol{\tau} \cdot \left(\mathbf{a} \times \mathbf{r}\right) = 0,$$
where $\boldsymbol{\tau}$ is a tangential vector to the surface, $\mathbf{a} = (l, m, n)$ is the vector of the axis of revolution and $\mathbf{r}$ is the radius vector of an arbitrary point on the surface. It means that for every point on the surface this vector must lie in the plane passing through the axis of revolution. This is natural, for the surface is obtained by rotating a plane curve about the axis.
Edited by Valentin Fadeev, Friday, 18 Oct 2019, 07:52
Struggling through the early chapters of partial differential equations (from the all-time classic, Stepanov's book), I often run across moves that are not really technically demanding, but whose underlying logic takes time to sink in.
One of them is the trick used to find a particular solution of a first-order linear PDE satisfying an initial condition:
Here is an exercise: find the solution of the following equation
satisfying the condition: when
We start by writing out the associated system of ODEs:
Use the first identity to find a "first integral" of the system:
Then equate the first and the third quotients to obtain the second "first integral":
Obviously, there are no more independent first integrals.
According to the theory, the general solution is given as an arbitrary function of these two integrals:
Now we shall find the particular solution. Let $x = 1$. Following the book, we introduce the new functions into which the two first integrals turn when we set this value of $x$:
Now we solve these equations with respect to $y$ and $z$:
And now comes the difficult part. In order to get the final result, we need to substitute the above results into the expression of the general solution, replacing the specialized functions with the general first integrals:
It took me some time to accept that I have to substitute the more general expression into the equation which was derived after giving $x$ its particular value.
Finally:
The reasoning is clear once it is explained. Both first integrals are solutions, hence a function of them is also a solution. Then, if we let $x = 1$, they turn into the specialized functions. Then, by virtue of the equations (1), we recover the variables and consequently the required expression.
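As a hedged illustration of the whole procedure, here is an assumed example of my own (not the book's): $x z_x + y z_y = z$ with $z = y^2$ when $x = 1$. The first integrals of $\frac{dx}{x} = \frac{dy}{y} = \frac{dz}{z}$ are $u_1 = \frac{y}{x}$ and $u_2 = \frac{z}{x}$; setting $x = 1$ gives $\bar u_1 = y$ and $\bar u_2 = z = y^2$, hence $\bar u_2 = \bar u_1^2$; substituting back the general integrals yields $\frac{z}{x} = \left(\frac{y}{x}\right)^2$, i.e. $z = \frac{y^2}{x}$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# First integrals u1 = y/x, u2 = z/x of dx/x = dy/y = dz/z.
# At x = 1 the condition z = y**2 forces u2 = u1**2; restoring
# the general integrals gives the particular solution:
zsol = (y / x) ** 2 * x        # z = y**2 / x

# Check the PDE x*z_x + y*z_y = z ...
residual = sp.simplify(x*sp.diff(zsol, x) + y*sp.diff(zsol, y) - zsol)
print(residual)                # 0

# ... and the initial condition z(1, y) = y**2
print(zsol.subs(x, 1))         # y**2
```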
In the book the whole argument is presented in its most general form. However, it takes several exercises and hours of thinking to get hands-on experience with this method.