The Jacobi equation for a functional, serving as a test for a weak local extremum, can be derived in quite different ways. The following geometrical approach is found in the book "Calculus of Variations" by L.E. Elsgoltz, 1958. It hinges on the question of whether the stationary path can be included in a one-parametric family of stationary paths (a field of paths). There are two options: either no two paths of the family intersect, or all paths of the family share one common point (but not more) in the given interval.
For example, one and the same family of extremals may be of the first type in one interval and of the second type in another, while in a third interval no such family can be constructed at all.
Suppose we have a one-parametric family of stationary paths $y = y(x, C)$. For example, we can fix one of the boundary points and use the gradient of the paths at this point as the parameter $C$.
The envelope of this family is found by eliminating $C$ from the following system of equations:
$$y = y(x, C), \qquad \frac{\partial y(x, C)}{\partial C} = 0. \qquad (*)$$
Along each path of the family, $y$ is a function of $x$ alone; denote this function by $y(x, C)$ for the given value of $C$. These functions are solutions of the Euler-Lagrange equation by assumption. Therefore:
$$F_y\bigl(x, y(x, C), y'(x, C)\bigr) - \frac{d}{dx} F_{y'}\bigl(x, y(x, C), y'(x, C)\bigr) = 0.$$
Differentiating this equality with respect to $C$ and letting $u = \dfrac{\partial y}{\partial C}$, we obtain:
$$F_{yy}\,u + F_{yy'}\,u' - \frac{d}{dx}\left(F_{y'y}\,u + F_{y'y'}\,u'\right) = 0.$$
Rearranging we get:
$$\left(F_{yy} - \frac{d}{dx}F_{yy'}\right)u - \frac{d}{dx}\left(F_{y'y'}\,u'\right) = 0,$$
which is precisely the Jacobi equation.
Thus, if $u = \partial y/\partial C$ has a zero somewhere in the interval, it follows from (*) above that this zero is a common point of the stationary path and the envelope: a point conjugate to the left end of the interval.
It seemed to me at first that this proof serves only a theoretical purpose, as another way of deriving the Jacobi equation. However, the idea behind it can be used to find the solution of the Jacobi equation without actually solving the equation itself!
Consider the following example.
(we can suppress , assuming it is absorbed by the constant)
Now apply boundary conditions:
If we only use the first condition, that is fix the left boundary, we get the one-parametric family:
$C$ being the gradient at the left boundary, taken with the opposite sign. Now, following the idea described above, we can find $u = \partial y/\partial C$:
Finally, by virtue of the boundary conditions, we set:
Now we move on to derive the Jacobi equation through the coefficients of the second variation:
Inserting the above results into the equation:
Instead of solving the equation, which can be technically demanding, we shall verify that the expression for $u$ found above is a solution. We do not need a general solution in this case: all non-trivial solutions of a homogeneous equation of the second order satisfying the condition $u = 0$ at the left end of the interval differ from each other only by a constant multiplier, and thus have the same zeros.
So we indeed have a solution.
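As a standard illustration of this shortcut (my example, distinct from the one above): for the functional $\int (y'^2 - y^2)\,dx$ the Euler-Lagrange equation is $y'' + y = 0$, and the extremals through the origin form the one-parametric family $y = C\sin x$, so that
$$u = \frac{\partial y}{\partial C} = \sin x.$$
Here $P = F_{y'y'} = 2$ and $Q = F_{yy} - \frac{d}{dx}F_{yy'} = -2$, so the Jacobi equation reads $-2u - \frac{d}{dx}(2u') = 0$, i.e. $u'' + u = 0$: it is indeed satisfied by $u = \sin x$ with $u(0) = 0$, and its first zero $x = \pi$ is the point conjugate to the left end.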
Since the integrand does not depend explicitly on $x$, we can simplify the Jacobi equation by exchanging the roles of the variables:
The Euler-Lagrange equation then has the first integral:
Again we leave one arbitrary constant to form a family:
Since the transformed integrand does not depend explicitly on the dependent variable, $Q$ will vanish and the Jacobi equation has the first integral:
$$P\,u' = \mathrm{const}.$$
Thus, up to the sign, we get the same expression. (I am not sure where I may be losing the sign, but it obviously has little effect on the argument.)
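For reference, a sketch of the exchange of variables in general, assuming the functional has the form $J[y] = \int F(y, y')\,dx$ with $F$ free of $x$: taking $y$ as the independent variable and $x(y)$ as the unknown, with $y' = 1/\dot x$,
$$J = \int F\!\left(y, \frac{1}{\dot x}\right)\dot x\,dy \equiv \int G(y, \dot x)\,dy, \qquad \dot x = \frac{dx}{dy}.$$
Since $G$ does not contain the new dependent variable $x$, we have $Q = G_{xx} - \frac{d}{dy}G_{x\dot x} = 0$, and the Jacobi equation reduces to
$$\frac{d}{dy}\left(G_{\dot x\dot x}\,u'\right) = 0, \qquad G_{\dot x\dot x}\,u' = \mathrm{const},$$
which is solved by a single quadrature.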
Even if you are faced with a plain separable ODE, the process of separation of variables itself implies multiplying both sides by some factor. Thus the integrating factor seems to be one of the most devious tricks of solving equations.
There is a general method to establish its existence, which can be found in many textbooks. I am interested here in some particular cases which give beautiful solutions.
First, for a homogeneous equation it is possible to find a closed formula for the integrating factor.
It can be shown that for the equation
$$M(x, y)\,dx + N(x, y)\,dy = 0,$$
where $M$ and $N$ are homogeneous functions of their arguments (of the same degree), the integrating factor has the form:
$$\mu = \frac{1}{xM + yN}$$
(wherever $xM + yN \not\equiv 0$).
Apply this to the equation:
Multiplying both sides by this expression we obtain:
Rearranging:
And the result becomes obvious.
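A quick illustration of the formula (an example of mine, not the one above): take $M = y$ and $N = y - x$, both homogeneous of degree 1. Then
$$xM + yN = xy + y^2 - xy = y^2, \qquad \mu = \frac{1}{y^2},$$
and the equation $y\,dx + (y - x)\,dy = 0$ multiplied by $\mu$ becomes
$$\frac{y\,dx - x\,dy}{y^2} + \frac{dy}{y} = d\!\left(\frac{x}{y}\right) + d\ln|y| = 0,$$
so the general solution is $x/y + \ln|y| = C$.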
For the next example it is useful to note that if $\mu$ is an integrating factor for the equation $M\,dx + N\,dy = 0$, giving the solution in the form $U(x, y) = C$, then $\mu\,\varphi(U)$, where $\varphi$ is any differentiable function, shall also be an integrating factor. Indeed,
$$\mu\,\varphi(U)(M\,dx + N\,dy) = \varphi(U)\,dU = d\!\left(\int \varphi(U)\,dU\right),$$
giving the total differential of a function of $U$.
This leads to the following practical trick for finding the factor. All terms of the equation are split into two groups, for each of which it is easy to find an integrating factor. Then each factor is written in the most general form, involving an arbitrary function as described above. Finally, we try to find the arbitrary functions that make both factors identical.
Consider the following equation:
Rearranging the terms:
For the second group the integrating factor is now trivial: it is 1; hence, by the remark above, its most general form is an arbitrary function of the corresponding integral.
For the first part it is easy to see that the factor should be giving solution , hence .
To make the two identical we want to be independent of x. Setting gives .
Applying this factor we get:
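To make the trick concrete, here is a parallel example of my own: in
$$(x^2 + y^2)\,dx + (x\,dx + y\,dy) = 0$$
the second group has the trivial factor 1, giving $d\!\left(\frac{x^2 + y^2}{2}\right)$, so its most general factor is $\varphi(x^2 + y^2)$; the first group has the factor $\frac{1}{x^2 + y^2}$, giving $dx$ with solution $x = C$, so its most general factor is $\frac{\psi(x)}{x^2 + y^2}$. Choosing $\psi \equiv 1$ and $\varphi(u) = \frac{1}{u}$ makes both equal to $\frac{1}{x^2 + y^2}$, and
$$dx + \frac{x\,dx + y\,dy}{x^2 + y^2} = d\!\left(x + \tfrac{1}{2}\ln(x^2 + y^2)\right) = 0,$$
hence $x + \tfrac{1}{2}\ln(x^2 + y^2) = C$.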
Both methods can be found in the classic book "A Course on Differential Equations" by V.V. Stepanov.
There are many good articles on O-symbols, some more technical, some more popular. But at the end of the day you often find yourself staring at the exercise, remembering all those definitions and still not knowing what to do next. This had been the case for me until I took some freedom to play around with this device. I think I finally got it down. Here are two examples:
where in the first equality I used the geometric expansion up to the linear term and the fact that for non-zero $k$, $k\,O(x) = O(x)$, hence $-O(x) = O(x)$. Of course, the "=" sign must be understood as "$\in$" through all manipulations with $O$.
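A manipulation of the same kind (an example of my own, not necessarily the original one):
$$\frac{e^x}{1 - x} = \bigl(1 + x + O(x^2)\bigr)\bigl(1 + x + O(x^2)\bigr) = 1 + 2x + O(x^2), \qquad x \to 0,$$
where the geometric series $\frac{1}{1-x} = 1 + x + O(x^2)$ is cut at the linear term, and all higher-order products are absorbed into $O(x^2)$.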
A more demanding one:
Of course, it would be a bad idea in the 5th transition to cancel out the numerator and denominator simply by division: the two $O$'s in this case may stand for different classes of functions.
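For instance (an illustration of mine): $2x = O(x)$ and $x = O(x)$, yet $\frac{2x}{x} = 2 \neq 1$; writing $\frac{O(x)}{O(x)} = 1$ would silently assume that both $O$'s denote the same function.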
Some integrals admit only one type of substitution that really brings them into a convenient form; any other method would make them more complicated. However, in some cases totally different methods can be applied with equal effect. In the case of definite integrals it is, of course, not necessary to come back to the original variable, which makes things even easier. Here is one example.
The most natural way is to apply a trigonometric substitution. We will not consider this method here. Instead, an algebraic trick can be employed:
Alternatively we can use integration by parts:
Or apply an even more exotic treatment:
let
let
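To show the same interplay of tricks on a concrete definite integral (my choice, not necessarily the one above): take $I = \int_0^1 \sqrt{1 - x^2}\,dx$, which a trigonometric substitution would handle directly; the algebraic trick combined with integration by parts avoids it:
$$I = \int_0^1 \frac{1 - x^2}{\sqrt{1 - x^2}}\,dx = \int_0^1 \frac{dx}{\sqrt{1 - x^2}} - \int_0^1 x \cdot \frac{x\,dx}{\sqrt{1 - x^2}}.$$
The first integral is $\arcsin x \big|_0^1 = \frac{\pi}{2}$. In the second, integrating by parts with $u = x$, $dv = \frac{x\,dx}{\sqrt{1 - x^2}}$, $v = -\sqrt{1 - x^2}$:
$$\int_0^1 x \cdot \frac{x\,dx}{\sqrt{1 - x^2}} = \Bigl[-x\sqrt{1 - x^2}\Bigr]_0^1 + \int_0^1 \sqrt{1 - x^2}\,dx = 0 + I,$$
so $I = \frac{\pi}{2} - I$, giving $I = \frac{\pi}{4}$ without ever returning to the original variable.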
This example illustrates the application of the method of the "summing multiplier" to solving certain types of recurrences. It can be found, for instance, in the book "Concrete Mathematics" by Graham, Knuth and Patashnik. In its essence it translates the idea of the integrating factor from the theory of ODEs.
Consider the recurrence:
Rewrite it in the form
then multiply both sides by a factor which is to be determined:
The trick will be done if we find a multiplier satisfying the relation:
Solving for the multiplier and expanding recursively we get:
Substituting into the equation:
The initial term is easily obtained from the original recurrence:
Summing from 1 to $n$, we get the so-called "telescoping sum" on the left side, meaning that only the first and the last terms survive:
Ultimately, solve for the general term:
(Note that the second term on the right side is the solution of the "homogeneous" variant of the equation, i.e. without the free term, suggesting another method borrowed from ODEs.)
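For reference, a sketch of the general scheme in Graham-Knuth-Patashnik's notation ($a_n$, $b_n$, $c_n$, $s_n$ are theirs and may differ from the notation above): the recurrence
$$a_n T_n = b_n T_{n-1} + c_n$$
is multiplied by a summing multiplier $s_n$ chosen so that $s_n b_n = s_{n-1} a_{n-1}$, e.g.
$$s_n = \frac{a_{n-1} a_{n-2} \cdots a_1}{b_n b_{n-1} \cdots b_2}.$$
Then $S_n = s_n a_n T_n$ satisfies $S_n = S_{n-1} + s_n c_n$, and the sum telescopes to
$$T_n = \frac{1}{s_n a_n}\left(s_1 b_1 T_0 + \sum_{k=1}^{n} s_k c_k\right).$$
For example, the Tower-of-Hanoi-type recurrence $T_n = 2T_{n-1} + 1$, $T_0 = 0$ (so $a_n = 1$, $b_n = 2$, $c_n = 1$) gives $s_n = 2^{1-n}$ and
$$T_n = 2^{n-1}\sum_{k=1}^{n} 2^{1-k} = 2^n - 1.$$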
One of the rarely used methods of solving ODEs applies to the so-called generalized homogeneous equations. The word "generalized" means that the terms are not homogeneous in the classic sense, if all variables are assigned the same dimension, but they may be made homogeneous in a wider sense by choosing an appropriate dimension for the dependent variable. Here is one example.
If we assign dimension 1 to $x$ and $dx$, and dimension $m$ to $y$ and $dy$, then the left side has dimension $3 + m - 1 = m + 2$, while on the right side we have $m + 2$ and $2m$. To balance things, let $m + 2 = 2m$, hence $m = 2$, and we get a "generalized homogeneous equation" of the 4th order. The trick is to let:
which in this case gives:
Hence the equation becomes:
letting z=1/y
This method can, of course, be applied to higher-order equations.
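As a further minimal illustration, with an equation of my own choosing: consider
$$y' = \frac{y^2}{x^3} + \frac{y}{x}.$$
With $\dim x = 1$ and $\dim y = m$, the terms have dimensions $m - 1$, $2m - 3$ and $m - 1$; balancing $2m - 3 = m - 1$ gives $m = 2$. Substituting $y = z x^2$:
$$z' x^2 + 2zx = z^2 x + zx \quad\Longrightarrow\quad x\,\frac{dz}{dx} = z^2 - z,$$
which is separable: $\frac{dz}{z(z - 1)} = \frac{dx}{x}$ gives $\frac{z - 1}{z} = Cx$, i.e. $y = \dfrac{x^2}{1 - Cx}$.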