Edited by Valentin Fadeev, Thursday, 27 Mar 2014, 10:06
M832 did not really fit in the "big picture" of my study plan this year, not least because I am notoriously bad at numerical calculations. Having invested a lot of effort and sleepless nights in developing intuition for the behaviour of analytic functions, I was suddenly confronted by cubic splines, which seemed to have all those properties that well-mannered functions would never be allowed to possess.
Nicely, but artificially, glued together from several pieces of cubics, smooth only up to the second derivative, vanishing on entire intervals: now this is what seems really counter-intuitive. Be that as it may, the TMA deadlines had to be met.
The following is the kind of problem I got stuck with for a while. Obviously I am not replicating a TMA question here; instead I am giving an extended solution to one of the problems from the Course Notes. So the task is to express a function, say, in terms of cubic B-splines on the entire real axis. I am omitting a lot of background material, focusing on one particular idea that arises in the solution.
Since has a supporting interval of length 4 outside which it vanishes, we can start by expressing the function in terms of on and then try to extend the result. Calculating the expressions for the splines on :
multiplying by the respective coefficients, summing and equating powers of on each side we arrive at the following system of equations:
It has the solution (guaranteed to exist by the Schoenberg-Whitney theorem):
Now we want to find all coefficients on each of the intervals for the points . From the general expression for the B-spline it can be deduced that
which for leads to the following recurrence relation:
or, after changing the index
Now here is the trick that I came up with and that was not (at least not explicitly) described in the Course Notes or the set book. The last expression can be thought of as a "second-order linear inhomogeneous recurrence relation". The advantage of this approach is that the structure of the solution instantly becomes clear.
The general solution of the corresponding homogeneous relation
is derived in the course notes, using the standard method of solving this type of recurrence, and is given by the following expression:
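For the record, a sketch under the assumption that the relation in question is the standard uniform cubic B-spline system $c_{k-1}+4c_k+c_{k+1}=6f(k)$: substituting $c_k=r^k$ into the homogeneous part gives the characteristic equation $r^2+4r+1=0$ with roots $r=-2\pm\sqrt3$, whence
$$c_k=A\bigl(-2+\sqrt3\bigr)^k+B\bigl(-2-\sqrt3\bigr)^k.$$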
It can also be found using generating functions. Not surprisingly, it depends on 2 arbitrary constants, as it takes 2 initial terms, and , to reconstruct the whole sequence from the three-term recurrence. Applying the general ideas from the theory of linear systems, we deduce that in order to obtain the general solution of the inhomogeneous recurrence we have to add a particular solution to the expression above.
Since the RHS is a quadratic polynomial, it makes sense to look for a particular solution in the form:
Substituting this into the original recurrence and gathering together the powers of , we obtain:
which after equating powers gives the solution
Thus the general solution of the inhomogeneous equation is given by the following formula:
Now we can use the values of and to determine the constants (bearing in mind that ):
Edited by Valentin Fadeev, Sunday, 18 Sept 2011, 23:23
Now I got really fascinated with this topic. The most exciting part of evaluating integrals using residues is constructing the right contour. There are general guidelines for certain types of integrals, but in most cases the contour has to be tailored for a particular problem. Here is another example. Evaluate the following integral:
where are real and are the roots of the integrand.
The approach used in the previous example does not work, as the integral along a large circle centered at the origin does not tend to 0. This, in fact, is a clue to the solution, as it prompts us to have a look at what is actually happening to the function for large . But first, consider the integrand in the neighborhood of the origin (a simple pole):
Now the integrand is regular for large , hence it can be expanded in a Laurent series convergent for , where is large:
The residue at infinity is defined to be the coefficient at with the opposite sign:
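For completeness: if $f$ is analytic for $|z|>R$ with the Laurent expansion $f(z)=\sum_{n=-\infty}^{\infty}c_nz^n$ there, then
$$\operatorname*{Res}_{z=\infty}f=-c_{-1},$$
and the sum of all residues of $f$, the one at infinity included, is zero; this is exactly what makes the trick below work.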
Now we construct the contour C by cutting the real axis along the segment and integrating along the upper edge of the cut in the negative direction, then along a small circle and along the lower edge of the cut. As a result, one of the factors of the integrand increases its argument by . Finally, integrate along a small circle
Integrating along this contour amounts to integrating in the positive direction along a very large circle centered at infinity. Hence the outside of C is the inside of this large circle, which therefore includes both the residue at the origin and the residue at infinity. Therefore, by the residue theorem:
Edited by Valentin Fadeev, Thursday, 27 Mar 2014, 10:11
As a follow-up thought, I realized just how much easier it would have been to calculate the residue by definition, i.e. by expanding the integrand in a Laurent series to get the coefficient . Let , where is small:
Of course, care is needed when choosing the value of the root. It depends on the value of the argument set on the upper edge of the cut. Since I chose it to be 0, the correct value of the root is .
Now
Therefore, near the integrand has the following expansion:
where is the regular part of the expansion which is of no interest in this problem.
Edited by Valentin Fadeev, Thursday, 21 Apr 2011, 23:58
I found this example in a textbook dated 1937 which I use as supplementary material for M828. It gave me a hard time but finally sorted out many fine tricks of contour integration. Some inspiration was provided by the discussion of Pochhammer's extension of the Eulerian integral of the first kind in Whittaker and Watson.
Evaluate the following integral:
Consider the integral in the complex plane:
where the contour C is constructed as follows. Make a cut along the segment of the real axis. Let on the upper edge of the cut. Integrate along the upper edge of the cut in the positive direction. Follow around the point along a small semi-circle in the clockwise direction; the argument of the second factor in the numerator will decrease by . Then proceed along the real axis till and further round the circle in the counter-clockwise direction.
This circle will enclose both branch points of the integrand, and ; however, since the exponents add up to unity, the function will return to its initial value:
This is the reason why we only need to make the cut along the segment and not along the entire positive real semi-axis. Then integrate from in the negative direction, then along the second small semi-circle around the point , where the argument of the second factor will again decrease by . Finally, integrate along the lower edge of the cut and along a small circle around the origin, where the argument of the first factor will decrease by .
As the result of this construction the integral is split into the following parts:
Two integrals around the small circles add up to an integral over a circle:
Similarly, the integral around a small circle around the origin vanishes:
Integrals along the segment cancel out:
The integral over the large circle also tends to 0 as R increases. This can be shown using Jordan's lemma, or by direct calculation:
Finally the only two terms that survive allow us to express the contour integral in terms of the integral along the segment of the real axis:
The contour encloses the only singularity of the integrand which is the pole of the third order at . Hence, by the residue theorem:
The residue can be calculated using the standard formula:
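For a pole of the third order at $z=a$ this standard formula reads
$$\operatorname*{Res}_{z=a}f(z)=\frac{1}{2!}\lim_{z\to a}\frac{d^2}{dz^2}\Bigl[(z-a)^3f(z)\Bigr].$$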
Calculation of the derivative can be facilitated by taking the logarithm first:
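The point of taking the logarithm: for a function given as a product of powers, $g=\prod_k g_k^{\alpha_k}$, logarithmic differentiation turns the product rule into a sum,
$$\frac{g'}{g}=\sum_k\alpha_k\frac{g_k'}{g_k}.$$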
Edited by Valentin Fadeev, Sunday, 18 Sept 2011, 23:23
This is quite a minor trick and, like many things listed here, may seem quite trivial. However, this is one of those few occasions when I had the tool in mind before I actually got the example to use it on. Consider:
which does not really require a great effort to solve. But forget all the standard ways for a moment and add to both parts:
Hope this can be stretched to use in more complicated cases.
Edited by Valentin Fadeev, Sunday, 23 Jan 2011, 21:49
Had to do some revision of vector calculus/analysis before embarking on M828.
One point which I was not really missing, but did not quite get to grips with was the double vector product. I remembered the formula:
,
but nevertheless had difficulties applying it in exercises.
The reason was that the proof I saw used the expression of the vector product in coordinates and a comparison of both sides of the equation. However, I was aware of another, purely "vector" argument with no reference to any coordinate system.
Eventually I was able to reproduce only part of it, consulting one old textbook for some special trick. So here's how it goes.
Since is perpendicular to the plane of and , must lie in this plane; therefore:
Dot-multiply both parts by :
Since , the left-hand side is 0, so:
Now define the vector lying in the plane of and , perpendicular to and directed so that , and form a left-handed system. This guarantees that the angle between and is .
Dot-multiply both parts by :
However,
therefore
and
Hence
can be calculated in a similar manner; however, it is more easily achieved using equation .
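A quick numerical sanity check of this identity, $\mathbf a\times(\mathbf b\times\mathbf c)=\mathbf b(\mathbf a\cdot\mathbf c)-\mathbf c(\mathbf a\cdot\mathbf b)$; a minimal sketch with random vectors:

import numpy as np

rng = np.random.default_rng(42)
a, b, c = rng.standard_normal((3, 3))        # three random 3D vectors

lhs = np.cross(a, np.cross(b, c))            # a x (b x c)
rhs = b * np.dot(a, c) - c * np.dot(a, b)    # b(a.c) - c(a.b)
print(np.allclose(lhs, rhs))                 # True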
Edited by Valentin Fadeev, Sunday, 18 Sept 2011, 23:24
This is problem 1.18 from JS (the M821 course book). The question is to investigate the motion of a bead that slides on a smooth parabolic wire rotating with constant angular velocity about a vertical axis; is the distance from the axis of rotation.
To simplify the calculations we can choose the scale so that the equation of the parabola is .
Yes, I know, as a grown-up man I should write out the expression for the kinetic energy:
where is the tangential velocity of the bead directed along the wire. Then define the potential energy as usual:
bearing in mind that the force of gravity acts in the direction opposite to that of the axis, hence the change of sign.
Construct the Lagrangian:
Define the action:
Use variational principle
to obtain the Euler-Lagrange equation:
Calculating separate terms:
Finally obtain the equation of motion:
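For reference, assuming the scale is chosen so that the parabola is $z=x^2/2$: then $\dot z=x\dot x$, the Lagrangian above becomes $L=\frac{m}{2}\bigl[(1+x^2)\dot x^2+\omega^2x^2\bigr]-\frac{mg}{2}x^2$, and the Euler-Lagrange equation works out to
$$(1+x^2)\ddot x+x\dot x^2+(g-\omega^2)x=0.$$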
However, I am tempted to use an alternative method that I learned in my secondary school physics lessons. It is based on the direct application of Newton's laws and projecting the vector equation onto the coordinate axes. Writing out the second law, we have to bear in mind that the bead is acted upon by the gravitational force and the force of normal reaction, which arises due to Newton's third law and acts along the normal to the wire.
Hence Newton's second law is expressed as follows:
Acceleration is split into the tangential and centripetal parts:
Projecting the equation onto the vertical axis we obtain:
where is the angle at which the tangent crosses the horizontal axis (hence )
Then project on the horizontal axis:
Eliminate :
Now calculate the tangential acceleration:
and the centripetal part:
Plug the above results into the equation:
to obtain the same result.
Further analysis on the phase plane shows that when the wire rotates not very fast (), the bead oscillates around the origin in the vertical plane. If , the bead moves along the wire away from the origin, its velocity tending asymptotically to the value
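A minimal numerical sketch of this behaviour, assuming the equation of motion $(1+x^2)\ddot x+x\dot x^2+(g-\omega^2)x=0$ reconstructed above (the parameter values are arbitrary):

from scipy.integrate import solve_ivp

g = 9.81

def bead(t, y, omega):
    x, v = y
    # (1 + x^2) x'' + x v^2 + (g - omega^2) x = 0, solved for x''
    return [v, -(x * v**2 + (g - omega**2) * x) / (1 + x**2)]

# slow rotation (omega^2 < g): oscillation about the origin
slow = solve_ivp(bead, (0, 20), [1.0, 0.0], args=(1.0,))
# fast rotation (omega^2 > g): the bead runs away along the wire
fast = solve_ivp(bead, (0, 20), [1.0, 0.0], args=(5.0,))
print(slow.y[0].min(), slow.y[0].max())   # stays bounded
print(fast.y[0][-1])                      # grows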
Edited by Valentin Fadeev, Sunday, 18 Sept 2011, 23:24
The first (unguided) steps in dynamical systems. The task is to prove that the phase paths of the following system:
are isochronous spirals, that is, every circuit of every path around the origin takes the same time. It looks a bit scary because it involves not one but two "bad" functions. The easiest way to deal with these is to break the picture into quadrants and examine each part separately.
First quadrant: ,
Second quadrant: ,
Third quadrant: , , the same solution as for the first quadrant.
Fourth quadrant: , , the same solution as for the second quadrant.
Introduce polar coordinates.
I,III:
II,IV:
Let be a closed circuit enclosing the origin. The elapsed time is given by the following formula:
As the functions are periodic, it is sufficient to give the proof for one loop. Due to the axial symmetry of the trajectory we can calculate the transit time only in the first and second quadrants and then double the result. If we integrate counter-clockwise, then we follow the trajectory in the reversed direction. Therefore, we need to put a negative sign before the integrals (oh, how long it took me to realize this...). In polar coordinates this expression has the following form:
Differentiating the expression for by we obtain:
A-ha, that's where we get the independence of the final result on the path choice: the arbitrary constant gets cancelled out.
Let , then ,
I made some false starts, getting a negative answer, which is impossible for a strictly positive integrand. I am still not quite sure where I went wrong. Normally these things happen when a singularity sneaks inside the domain of integration after a change of variable. Anyway, I decided to cheat and shift the scale.
Edited by Valentin Fadeev, Sunday, 16 Jan 2011, 23:18
With the new courses yet to start, hopefully providing fresh material for new posts, I have been spending time going through some exercises from new textbooks.
As integrals have always been my favourite part of calculus, I decided to take down this solution, because it just looks nice. It also illustrates the principle: don't make a substitution until it becomes obvious.
Since we have and , we need to choose the negative sign when taking the square root of the quadratic term.
It is now that the substitution becomes an obvious choice.
Edited by Valentin Fadeev, Wednesday, 15 Sept 2010, 23:37
Moving on with Stepanov's book, I have reached equations which have the following form (in 3 variables):
where P, Q, R are sufficiently differentiable functions of x, y, z.
Exercise 205:
Integrating factor:
Will we always be lucky enough to have an appropriate factor to cast the equation into full differential form? The book gives a negative answer, setting out a very specific condition on the coefficients.
Assume the equation does have a solution and this solution is a 2-dimensional manifold, i.e. has the form:
or (locally, at least):
Then
On the other hand, by virtue of the equation (assuming R does not vanish identically):
Comparing the coefficients:
This is an overdetermined system: one function, two equations, which generally does not have a solution. The integrability condition can be obtained by equating mixed second derivatives; however, I will quote the geometrical argument, which may also shed light on some facts presented below.
Consider an infinitesimal shift along the manifold from to . Then z will take value:
From here we move to the point with coordinates without leaving the manifold. New value of z:
Similarly, if we first move along and then along , we arrive at the following point:
Now we require that whatever route is chosen it leads to the same point on the manifold (up to the terms of the second order). This leads to the following equation:
Now that was the book, here are some thoughts about this theory.
1) First, the equation definitely points to some categories of vector analysis. Indeed, the factors of P, Q and R are the components of the rotor of the vector field . Hence, the condition can be rewritten in a more compact form:
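Writing $\mathbf F=(P,Q,R)$, this compact form is the classical integrability condition
$$\mathbf F\cdot\operatorname{rot}\mathbf F=P\Bigl(\frac{\partial R}{\partial y}-\frac{\partial Q}{\partial z}\Bigr)+Q\Bigl(\frac{\partial P}{\partial z}-\frac{\partial R}{\partial x}\Bigr)+R\Bigl(\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}\Bigr)=0.$$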
At first sight this should hold trivially for any , for the rotor is by definition perpendicular to the plane defined by and the tangent . However, this would only be true if the solution were indeed a 2-dimensional manifold. If there is no such solution, then the whole derivation becomes invalid.
2) There is another reason why I prefer the geometric argument over comparing the mixed derivatives. The logic is very similar to that used to derive the Cauchy-Riemann conditions for an analytic function. Remarkably enough, we can also apply the complex formalism to the above problem. Consider the following operator:
where .
Assuming again that the solution exists in the form and using the above shortcuts for partial derivatives we obtain:
Now apply to both parts, where is complex conjugate:
stands for the "full partial derivative", where the dependence of on is taken into account. Replacing and with their values, we obtain:
where is the Laplacian. On the other hand:
Hence
which gives the above integrability condition.
So this is another example of how recourse to complex values can reveal deep facts behind otherwise unfamiliar-looking expressions, and formulate them in a nice compact form as well.
In conclusion, here is an example where the integrability condition does not hold:
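Since I cannot reproduce the book's example here, a sketch with a textbook non-integrable case of my own choosing, the Pfaffian $dz-y\,dx=0$, checked with sympy:

import sympy as sp

x, y, z = sp.symbols('x y z')
P, Q, R = -y, sp.Integer(0), sp.Integer(1)  # the Pfaffian dz - y dx = 0

# integrability condition F . rot F, with F = (P, Q, R)
cond = (P * (sp.diff(R, y) - sp.diff(Q, z))
        + Q * (sp.diff(P, z) - sp.diff(R, x))
        + R * (sp.diff(Q, x) - sp.diff(P, y)))
print(sp.simplify(cond))  # 1, not 0: no integrating factor can exist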
Edited by Valentin Fadeev, Friday, 24 Sept 2010, 13:25
Discussing one exercise on the forum recently, we disagreed on whether turning to complex numbers makes the solution more or less straightforward.
Here I am digging out an example showing that this technique is not always as obscure as it sounds. And yes, this is another example of inappropriate use of fine methods on a basic school problem:
After LaTeXing the mind-bending plain text of the discussion, it looks like this:
assuming . The author's conjecture was that for large :
Summing by parts is quite a standard device, though, as with integration by parts, the difficult part is often to use it at the right moment. Whittaker and Watson ascribe its systematic introduction to Abel. Probably the best account of it is given in "Concrete Mathematics". The authors introduce "definite sums", which are effectively sums with an omitted last term:
The cryptic is added solely to enhance the analogy with definite integrals.
The general formula is the following:
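If my memory of "Concrete Mathematics" serves, the rule reads
$$\sum\nolimits_a^b u(x)\,\Delta v(x)\,\delta x=\Bigl.u(x)v(x)\Bigr|_a^b-\sum\nolimits_a^b Ev(x)\,\Delta u(x)\,\delta x,\qquad Ev(x)=v(x+1),$$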
where $\Delta$ is the difference operator and $E$ is the shift operator. The formula is easily proved by evaluating
Part of the sum (*) on the right prompts the binomial formula. Hence, it would be good to pull it out of the sum. Let's try:
Edited by Valentin Fadeev, Tuesday, 6 July 2010, 23:04
This one gave me some hard time:
where when
The difference from "ordinary" linear equations is that the coefficients here depend on . At the same time, the right-hand side is identically zero, so this is not strictly an inhomogeneous equation.
I made false starts trying to find some nice substitution to absorb , or alternatively to pull out the missing term to construct an inhomogeneous equation. Finally, I found a hint in the book N.M. Günter, Integration of First-Order Partial Differential Equations, ONTI/GTTI, Leningrad/Moscow (1934). In the solution of an equation of similar structure it was suggested to use the standard method and treat as a constant when integrating the associated system.
So here we go. Searching for a solution in an implicit form:
The associated system:
In the same book it is hinted that one particular integral of this system is which follows from the last identity. Another integral can be found using the first identity:
Treating as constant we can simplify the expressions:
Therefore the general solution can be written in the form:
It is already within reach of sheer guesswork to let to establish the result; however, we proceed in a lengthier, yet rigorous way.
The system of the first integrals is written as follows:
Using the initial condition :
Solving for and :
Now following the standard method already mentioned below:
Suppressing the solution which does not satisfy the initial conditions, we finally obtain:
Edited by Valentin Fadeev, Monday, 21 June 2010, 19:59
Struggling on:
This is an inhomogeneous equation. Following the theory, we try to find the general solution in an implicit form:
It is proven that the solution found in this form is indeed general, i.e. we are not losing any solutions along the way.
Now we can write the associated system (in the symmetrical form):
Or, more conveniently for this example, in the canonical form:
Multiplying these equations by l, m and n respectively and summing, we get:
This is one of the first integrals of the system. Now, multiplying the equations by x, y and z respectively and summing, we obtain:
Therefore, the general solution has the following form:
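Assuming the equation in question is the standard surface-of-revolution equation $(mz-ny)\frac{\partial z}{\partial x}+(nx-lz)\frac{\partial z}{\partial y}=ly-mx$, the two first integrals found above are $lx+my+nz=C_1$ and $x^2+y^2+z^2=C_2$, so the general solution reads
$$\Phi\bigl(lx+my+nz,\;x^2+y^2+z^2\bigr)=0$$
for an arbitrary function $\Phi$.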
Geometrically, the first integral represents a plane in 3D space with the angular coefficients of the normal vector . The second integral represents a sphere centered at the origin. Therefore, the characteristics of the equation (the curves resulting from the intersection of these surfaces) are circles centered on the line passing through the origin with the above-mentioned angular coefficients.
Indeed, another way to look at it is rewrite the equation in the following form:
where is a tangent vector to the surface , is the vector of the axis of revolution and is the radius vector of an arbitrary point on the surface. It means that at every point on the surface the tangent vector must lie in the plane passing through the axis of revolution. This is natural, for the surface is obtained by rotating a plane curve about the axis.
Edited by Valentin Fadeev, Friday, 18 Oct 2019, 07:52
Struggling through the early chapters of partial differential equations (from the all-time classic Stepanov's book), I often run across moves that are not really technically demanding, but whose logic takes time to sink in.
One of them is the trick used to find a particular solution of the first order linear PDE satisfying initial condition:
Here is an exercise: find the solution of the following equation
satisfying the condition: when
We start by writing out the associated system of ODEs:
Use the first identity to find a "first integral" of the system:
Then equate the first and the third quotients to obtain the second "first integral":
Obviously there are no more independent first integrals.
According to the theory, the general solution is given as an arbitrary function of these two integrals:
Now we shall find the particular solution. Let x=1. Following the book, we introduce new functions and , to which and turn when we set the value of :
Now we solve these equations with respect to and :
And now comes the difficult part. In order to get the final result, we need to substitute the above results into the expression for , replacing with :
It took me some time to accept that I have to substitute , which is a more general expression than , into the equation which was derived after giving its particular value.
Finally:
The reasoning is clear once it is explained. and are both solutions, hence a function of them is also a solution. Then, if we let , they turn into and . Then, by virtue of the equations (1), we get the variables and and consequently the expression for , as required.
In the book the whole argument is presented in the most general form. However, it takes several exercises and hours of thinking to get hands-on experience with this method.
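A toy illustration of my own, not the book's: for $xu_x+yu_y=0$ a first integral is $\psi=y/x$, so $u=F(y/x)$. Require $u=y^2$ when $x=1$: setting $x=1$ gives $\bar\psi=y$, so $y=\bar\psi$ and $F(\bar\psi)=\bar\psi^2$; substituting the general $\psi=y/x$ back for $\bar\psi$ yields $u=(y/x)^2$, which indeed satisfies both the equation and the condition.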
Edited by Valentin Fadeev, Thursday, 17 June 2010, 22:02
I am posting this in reply to a question I came across on a social network. There is no original material here; I am just trying to reproduce the argument from one university lecture, as far as I can remember it.
The task is to calculate the optimal stock level at a warehouse. Stock is replenished to a certain value (which is to be determined) immediately at fixed, equally spaced moments in time. The number of stock units consumed during each period is random.
Let be the stock level which is maintained. Assume that stock is replenished over a fixed period of time (take it to be unitary to simplify calculations). Assume that units are used during each time period with probability .
Let be the unitary storage cost and a "penalty" cost assigned when stock theoretically falls below 0, or below some minimal admissible level (lost orders, emergency orders, etc.).
Assume also that the stock is replenished immediately.
Consider two possible situations.
1) , i.e. there is a positive remainder in the warehouse at the end of the period. The storage cost is proportional to the area under the stock graph.
The expectation of this value is:
Therefore the average cost is:
2) . In this case costs of both types are incurred. Again using the geometrical approach, we find that the cost elements are proportional to the areas of the triangles. The same argument as above gives the answer:
Thus the total cost is given by the following expression:
Marginal analysis is used to determine the optimal stock level. Optimal value of minimizing satisfies the relation:
(here my memory is a bit vague, so I am giving it without proof and without guarantee)
Practically this means calculating for a series of values of s until the above double inequality holds.
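Rather than trusting my memory of the marginal condition, one can tabulate the expected cost directly. A brute-force sketch, assuming the triangle-area cost model described above, i.e. an average cost of $h(s-r/2)$ when demand $r\le s$ and $h\frac{s^2}{2r}+p\frac{(r-s)^2}{2r}$ when $r>s$; the demand distribution is made up for illustration:

import numpy as np

h, p = 1.0, 10.0                    # unit storage and penalty costs (made up)
demand = np.arange(0, 11)           # possible consumption r per period
prob = np.ones(11) / 11.0           # made-up demand distribution

def expected_cost(s):
    # triangle-area model: stock declines linearly from s over the period
    cost = 0.0
    for r, pr in zip(demand, prob):
        if r <= s:
            cost += pr * h * (s - r / 2)
        else:
            cost += pr * (h * s**2 / (2 * r) + p * (r - s)**2 / (2 * r))
    return cost

costs = {s: expected_cost(s) for s in demand}
print(min(costs, key=costs.get))    # stock level with the smallest expected cost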
Edited by Valentin Fadeev, Sunday, 18 Sept 2011, 23:25
The Jacobi equation for a functional, serving as a test for a weak local extremum, can be derived in quite different ways. The following geometrical approach is found in the book "Calculus of Variations" by L.E. Elsgoltz, 1958. It hangs upon the question of whether or not the stationary path can be included in a one-parametric family of stationary paths (a field of paths). There can be 2 options: either no 2 paths of the family intersect, or all paths of the family share 1 common point (but not more) in the given interval.
For example, will form a family of the first type in , where , , and a family of the second type in , . In the interval , no such family can be constructed.
Suppose we have a one-parametric family of stationary paths . For example, we can fix one of the boundary points and use the gradient of the paths at this point as the parameter C.
The envelope of this family is found by eliminating C from the following system of equations:
Along each path of the family, is a function of x only. Denote this function as for some given C. Then .
are solutions of the E-L equation by assumption. Therefore:
Differentiating this equality by C and letting , we obtain:
Rearranging we get:
which is obviously a Jacobi equation.
Thus, if has a zero somewhere in the interval, it follows from (*) above that this is a common point of the stationary path and the envelope. This is the point conjugate to the left end of the interval.
It seemed to me at first that this proof serves only a theoretical purpose, as another way of deriving the Jacobi equation. However, the idea behind it can be used to find the solution of the Jacobi equation without actually solving the equation itself!
Consider the following example.
(we can suppress , assuming it is absorbed by the constant)
Now apply boundary conditions:
If we only use the first condition, that is fix the left boundary, we get the one-parametric family:
C being the gradient at with reversed sign. Now, following the idea described above, we can find :
Finally setting by virtue of the boundary conditions:
Now we move on to derive the Jacobi equation through the coefficients of the second variation:
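The standard form, quoted for reference: for the functional with integrand $F(x,y,y')$ the Jacobi equation is
$$\Bigl(F_{yy}-\frac{d}{dx}F_{yy'}\Bigr)u-\frac{d}{dx}\bigl(F_{y'y'}u'\bigr)=0,$$
with the coefficients evaluated along the stationary path.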
Inserting the above results into the equation:
Instead of solving the equation, which can be technically demanding, we shall verify that the expression for found above is a solution. We do not need a general solution in this case: all non-trivial solutions of a homogeneous equation of the second order satisfying the condition differ from each other only by a constant multiplier and thus have the same zeros.
So we indeed have a solution.
Since the integrand does not depend explicitly on , we can simplify the Jacobi equation by exchanging the roles of the variables:
The Euler-Lagrange equation has the first integral:
Again we leave one arbitrary constant to form a family:
Since the transformed integrand does not depend explicitly on the dependent variable, Q will vanish and the Jacobi equation has the first integral:
Thus, up to the sign, we get the same expression. (I am not sure where I may be losing the sign, but it obviously has little effect on the argument.)
Edited by Valentin Fadeev, Sunday, 18 Sept 2011, 23:25
The first time I encountered these weird objects of analysis was probably while surfing the book of E. Kamke, "A Reference of Ordinary Differential Equations", which in turn gave a reference to Whittaker and Watson. I was at that time delving into the methods of analytic geometry and only knew that the lemniscate was an 8-shaped algebraic curve of the 4th order, a particular case of the Cassini ovals. So I was pretty shocked to find out that it could give rise to some "trigonometric" system.
Although these functions were already studied by Gauss (no surprise), most of the original and subsequent research concentrated on different series expansions and the evaluation of particular elliptic integrals.
Being fresh from the first course on calculus, I attempted an investigation of the properties of lemniscate functions by means of only the very basic techniques used to derive similar results for circular functions. I wanted to derive formulas for derivatives and primitives, addition theorems, complementary formulas, etc.
However, looking back at that paper, I see that in the most crucial steps I just took the known relations for Jacobi elliptic functions (from the W&W book) and then reduced them to the particular case of lemniscate functions.
I think that lemniscate functions can be used for changing the variable when evaluating certain integrals, or for converting differential equations into manageable forms. Although in my rather limited practice I have never encountered a case where it would be appropriate, I still want to make a small but independent account of these devices and keep them in my arsenal, waiting for the right moment to come.
So, enough of the words, let's get down to business.
(I am using the original Gaussian (and my own ;)) notation instead of the more lengthy sinlemn and coslemn)
Take the first integral and differentiate both sides by :
The same would hold if we started with the second integral. Hence we obtain the differential equation for the lemniscate functions:
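In Gauss's notation: if $u=\int_0^s\frac{dt}{\sqrt{1-t^4}}$ defines $s=\operatorname{sl}u$, then differentiating the inverse relation gives
$$\Bigl(\frac{d\,\operatorname{sl}u}{du}\Bigr)^2=1-\operatorname{sl}^4u.$$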
Now we shall find an algebraic relation between sl and cl.
Substitute . Then
Inserting it all into the integral and observing the new integration limits, we obtain:
Now, comparing this with the integral defining and looking at the limits, we conclude that:
Conversely:
Now it is easy to establish the expressions for derivatives:
The formula for cl can be obtained similarly, but we can follow a different path, using the complementary formula.
Rewrite the definitions in the following way:
Then:
So
Then we immediately obtain:
The constant value , which is half the length of the "unitary" lemniscate, can be evaluated by substituting in the integral:
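A quick numerical check of this constant, which is half the lemniscate constant $\varpi$; its closed form in terms of the Gamma function is classical:

from math import gamma, pi, sqrt
from scipy.integrate import quad

half_arc, _ = quad(lambda t: 1 / sqrt(1 - t**4), 0, 1)  # the defining integral
closed_form = gamma(0.25)**2 / (4 * sqrt(2 * pi))       # classical closed form
print(half_arc, closed_form)  # both ~ 1.3110287771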
The integral of the lemniscate function is easily calculated:
Now I am only one step away from deriving the addition theorem using the same method due to Euler that works for Jacobi elliptic functions, but I got somewhat lost in the algebra... Hope to post that later on.
Edited by Valentin Fadeev, Thursday, 15 Apr 2010, 00:21
I have always been amazed by the power of recurrent formulas. Although they are likely to cause stack overflow (this was the first occasion when I learned what that actually means), used carefully they deliver beautiful results.
I once amused myself drawing stellar polygons with an arbitrary number of vertices. That required a simple parametrized procedure that produced a sequence of vertices to be connected with line segments.
Starting with the formula for the n-th roots of a complex number (of unitary modulus):
$$z_k=\cos\frac{\varphi+2\pi k}{n}+i\,\sin\frac{\varphi+2\pi k}{n},\qquad k=0,1,\dots,n-1,$$
to get the coordinates of the n vertices, I found the following way to produce the path sequence:
where is the number of the vertex and d () is the number of vertices "skipped" in one step.
So the procedure, recast here in Python so that it actually runs, looks roughly like this:
import matplotlib.pyplot as plt
from math import cos, sin, pi

def star_polygon(n, d, phi=0.0):
    # walk the vertices v -> (v + d) mod n until the path closes at vertex 0
    v = 0
    xs, ys = [cos(phi / n)], [sin(phi / n)]
    while True:
        v = (v + d) % n
        xs.append(cos((phi + 2 * pi * v) / n))
        ys.append(sin((phi + 2 * pi * v) / n))
        if v == 0:
            break
    plt.plot(xs, ys)
    plt.axis("equal")
    plt.show()

star_polygon(7, 2)
I even considered the idea of enumerating the possible outcomes for different pairs (n, d), but put it on the shelf back then...
Edited by Valentin Fadeev, Sunday, 18 Sept 2011, 23:26
Even if you are faced with a plain separable ODE, the process of separating the variables itself implies multiplying both parts by some factor. Thus the integrating factor seems to be one of the most devious tricks of solving equations.
There is a general path to establish its existence. It can be found in many textbooks. I am interested in some particular cases here which give beautiful solutions.
First, for a homogeneous equation it is possible to find a closed formula for the integrating factor.
It can be shown that for equation
,
where M and N are homogeneous functions of their arguments, the integrating factor has the form:
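The formula in question is the classical
$$\mu=\frac{1}{xM+yN},$$
valid wherever $xM+yN\neq0$.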
Apply this to equation:
Multiplying both parts by this expression we obtain:
Rearranging:
And the result becomes obvious.
For the next example it is useful to note that if is an integrating factor for the equation, giving a solution in the form , then , where is any differentiable function, is also an integrating factor. Indeed:
giving the differential of the function
This leads to the following practical trick for finding the factor. All terms of the equation are split into two groups, for each of which it is easy to find an integrating factor. Then each factor is written in the most general form, involving an arbitrary function as described above. Finally, we try to find such functions that make both factors identical.
Consider the following equation:
Rearranging the terms:
For the second term the integrating factor is now trivial: it is 1. Hence the most general form will look like .
For the first part it is easy to see that the factor should be , giving the solution , hence .
To make the two identical, we want to be independent of x. Setting gives .
Applying this one, we get:
Both methods can be found in the classic book "A Course on Differential Equations" by V.V. Stepanov.
Edited by Valentin Fadeev, Sunday, 16 Jan 2011, 23:17
There are many good articles on O symbols, some more technical, some more popular. But at the end of the day you often find yourself staring at the exercise, remembering all those definitions and still not knowing what to do next. This had been the case for me until I took some freedom to play around with this device. I think I finally got it down. Here are two examples:
where in the first equality I used the geometric expansion up to the linear term and the fact that for non-zero k, kO(x)=O(x), hence -O(x)=O(x). Of course, the "=" sign must be understood as through all manipulations with O.
A more demanding one:
Of course, it would be a bad idea in the 5th transition to cancel the numerator and denominator simply by division: the two O-s in this case may stand for different classes of functions.
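A toy illustration of that last point, my own: $\sin x=O(x)$ and $2x=O(x)$ as $x\to0$, but $\frac{\sin x}{2x}\to\frac12\neq1$, so two $O(x)$'s cannot simply be cancelled against each other.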
Edited by Valentin Fadeev, Sunday, 16 Jan 2011, 23:18
Some integrals yield to only one type of substitution that really brings them into a convenient form; any other method would make them more complicated. However, in some cases totally different methods can be applied with equal effect. In the case of definite integrals it is, of course, not necessary to come back to the original variable, which makes things even easier. Here is one example.
The most natural way is to apply a trigonometric substitution. We will not consider this method here. Instead, an algebraic trick can be employed:
Alternatively we can use integration by parts:
Or apply an even more exotic treatment:
let
let
Edited by Valentin Fadeev, Sunday, 16 Jan 2011, 23:19
This example illustrates the application of the method of the "summing multiplier" to solving certain types of recurrences. It can be found, for instance, in the book "Concrete Mathematics" by Graham, Knuth and Patashnik. In its essence it translates the idea of the integrating multiplier from the theory of ODEs.
Consider the recurrence:
Rewrite it in the form
then multiply both sides by , which is to be determined:
The trick will be done if we find satisfying the relation:
Solving for and expanding it recursively, we get:
substituting into the equation:
is easily obtained from the original integral:
Summing from 1 to n, we get the so-called "telescoping sum" on the left side, meaning that only the first and last terms survive:
Ultimately, solve for :
(Note that the second term on the right side is the solution of the "homogeneous" variant of the equation, i.e. without the term, suggesting another method borrowed from ODEs.)
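In general, for a recurrence $a_n=b_na_{n-1}+c_n$ the summing multiplier $s_n$ must satisfy $s_nb_n=s_{n-1}$; multiplying through and telescoping then gives
$$a_n=\frac{1}{s_n}\Bigl(s_1b_1a_0+\sum_{k=1}^ns_kc_k\Bigr),$$
which is the "Concrete Mathematics" recipe in my notation.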
Edited by Valentin Fadeev, Sunday, 16 Jan 2011, 23:20
One of the rarely used methods of solving ODEs applies to the so-called generalized homogeneous equations. The word "generalized" means that the terms are not homogeneous in the classic sense, if all variables are assigned the same dimension, but they may be made homogeneous in a wider sense by choosing an appropriate dimension for the dependent variable. Here is one example.
If we assign dimension 1 to x and dx, and dimension m to y and dy, then the left side has dimension 3+m-1 = m+2; on the right side we have m+2 and 2m. To balance things, let m+2 = 2m, hence m = 2, and we get a "generalized homogeneous equation" of the 4th order. The trick is to let:
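(presumably the standard substitution for generalized homogeneous equations, $x=e^t$, $y=u\,e^{mt}$ with $m=2$ here, which makes the equation autonomous in $t$)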
which in this case gives:
Hence the equation becomes:
letting z=1/y
This method can, of course, be applied to higher-order equations.