SOCIS 2013 - Octave Geometric integrators for Hamiltonian Systems - Roberto Porcu<br /><br /><i>2014-08-27</i><h2><span style="font-size: x-large;">Dense Output</span></h2>Specifying output times for the solution should not affect the internal time steps that the solver uses. The basic idea of Dense Output is to provide the solution at a given time $$s \in [t, t+dt]$$ with the same order of accuracy as the solutions computed at the internal time points, by using suitable interpolation methods.<br />Up to now only linear interpolation was performed, which significantly lowered the accuracy whenever a higher-order solver was used.<br />I therefore implemented a series of interpolation functions:<br /><br /><ul><li><b>linear_interpolation</b>:<br /><pre class="prettyprint linenums"><code class="lang-matlab">x_out = linear_interpolation (t, x, t_out)</code></pre>Given the time span $$t=[t_0, t_1]$$ and the function values $$x=[x_0, x_1]$$, it returns the linear interpolation value $$x_{out}$$ at the point $$t_{out}$$.</li><li><b>quadratic_interpolation</b>:<br /><pre class="prettyprint linenums"><code class="lang-matlab">x_out = quadratic_interpolation (t, x, der, t_out)</code></pre>Given the time span $$t=[t_0, t_1]$$, the function values $$x=[x_0, x_1]$$ and the derivative of the function at the point $$x_0$$, it returns the quadratic interpolation value $$x_{out}$$ at the point $$t_{out}$$.</li><li><b>hermite_cubic_interpolation</b>:<br /><pre class="prettyprint linenums"><code class="lang-matlab">x_out = hermite_cubic_interpolation (t, x, der, t_out)</code></pre>Given the time span $$t=[t_0, t_1]$$, the function values $$x=[x_0, x_1]$$ and the derivatives of the function at both points $$x_0$$
and $$x_1$$, it returns the 3rd order approximation $$x_{out}$$ at the point $$t_{out}$$ by performing Hermite interpolation.</li><li><b>hermite_quartic_interpolation</b>:<br /><pre class="prettyprint linenums"><code class="lang-matlab">x_out = hermite_quartic_interpolation (t, x, der, t_out)</code></pre>Given the time span $$t=[t_0, t_1]$$, the function values $$x=[x_0, x_{1/2}, x_1]$$ (where $$x_{1/2}$$ is the value of the function at the time $$t_0+dt/2$$) and the derivatives of the function at the extremes $$x_0$$ and $$x_1$$, it returns the 4th order approximation $$x_{out}$$ at the point $$t_{out}$$ by performing Hermite interpolation.</li><li><b>dorpri_interpolation</b>:<br /><pre class="prettyprint linenums"><code class="lang-matlab">x_out = dorpri_interpolation (t, x, k, t_out)</code></pre>This interpolation method is specific to the Dormand-Prince Runge-Kutta scheme. Given the time span $$t=[t_0, t_1]$$, the function value $$x=x_0$$ and the vector $$k$$ with the function evaluations required by the Dormand-Prince method, it returns the 4th order approximation $$x_{out}$$ at the point $$t_{out}$$. For more information on the method have a look at <i>Hairer, Nørsett, Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems (pp. 191-193)</i>.</li><li><b>hermite_quintic_interpolation</b>:<br /><pre class="prettyprint linenums"><code class="lang-matlab">x_out = hermite_quintic_interpolation (t, x, der, t_out)</code></pre>Given the time span $$t=[t_0, t_1]$$, the function values $$x=[x_0, x_{1/2}, x_1]$$ and the derivatives of the function at each point, it returns the 5th order approximation $$x_{out}$$ at the point $$t_{out}$$ by performing Hermite interpolation.</li></ul>These methods are then used to perform the Dense Output according to the order of the chosen solver.
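To make the interpolation formulas concrete, here is a small self-contained sketch of the cubic Hermite case. It is written in Python purely for illustration and only mirrors the call signature of the Octave <b>hermite_cubic_interpolation</b> above, for scalar values:

```python
def hermite_cubic_interpolation(t, x, der, t_out):
    """Cubic Hermite interpolation on the interval [t[0], t[1]].

    t   -- (t0, t1): the time span
    x   -- (x0, x1): function values at t0 and t1
    der -- (d0, d1): derivatives at t0 and t1
    Returns the interpolated value at t_out.
    """
    t0, t1 = t
    x0, x1 = x
    d0, d1 = der
    h = t1 - t0
    s = (t_out - t0) / h                 # normalized abscissa in [0, 1]
    # standard cubic Hermite basis polynomials
    h00 = 2*s**3 - 3*s**2 + 1
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*x0 + h*h10*d0 + h01*x1 + h*h11*d1
```

A cubic Hermite interpolant reproduces any cubic polynomial exactly, so interpolating $$f(t)=t^3$$ on $$[0,2]$$ with exact endpoint values and derivatives returns $$t_{out}^3$$ at every point; that is a quick sanity check for an implementation.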
This is the piece of code in <b>integrate_adaptive.m</b> that performs the interpolation: <br /><pre class="prettyprint linenums"><code class="lang-matlab"><br />% if next tspan value is caught, update counter<br />if( (z(end) == tspan(counter)) || (abs (z(end) - tspan(counter)) / ...<br /> (max (abs (z(end)), abs (tspan(counter)))) &lt; 8*eps) )<br /> counter++;<br /><br />% if there is an element in time vector at which the solution is required<br />% the program must compute this solution before going on with next steps<br />elseif( vdirection*z(end) &gt; vdirection*tspan(counter) )<br />% initializing counter for the following cycle<br /> i = 2;<br /> while ( i &lt;= length (z) )<br /><br /> % if next tspan value is caught, update counter<br /> if( (counter &lt;= k) &amp;&amp; ...<br /> ( (z(i) == tspan(counter)) || (abs (z(i) - tspan(counter)) / ...<br /> (max (abs (z(i)), abs (tspan(counter)))) &lt; 8*eps)) )<br /> counter++;<br /> endif<br /> <br /> % else, loop until there are requested values inside this subinterval<br /> while((counter &lt;= k) &amp;&amp; (vdirection*z(i) &gt; vdirection*tspan(counter)))<br /> % choose interpolation scheme according to order of the solver<br /> switch order<br /> case 1<br /> u_interp = linear_interpolation ([z(i-1) z(i)], [u(:,i-1) u(:,i)], tspan(counter));<br /> case 2<br /> if (~isempty (k_vals))<br /> der = k_vals(:,1);<br /> else<br /> der = feval (func, z(i-1) , u(:,i-1), options.vfunarguments{:});<br /> endif<br /> u_interp = quadratic_interpolation ([z(i-1) z(i)], [u(:,i-1) u(:,i)], der, tspan(counter));<br /> case 3<br /> % only ode23 - use k_vals<br /> u_interp = hermite_cubic_interpolation ([z(i-1) z(i)], [u(:,i-1) u(:,i)], [k_vals(:,1) k_vals(:,end)], tspan(counter));<br /> case 4<br /> % if ode45 is used without local extrapolation this function doesn't require a new function evaluation.<br /> u_interp = dorpri_interpolation ([z(i-1) z(i)], [u(:,i-1) u(:,i)], k_vals, tspan(counter));<br /> case 5<br /> % ode45
with Dormand-Prince scheme:<br /> % 4th order approximation of y in t+dt/2 as proposed by Shampine in L.F. Shampine, "Some Practical Runge-Kutta Formulas", 1986.<br /> u_half = u(:,i-1) + 1/2*dt*((6025192743/30085553152)*k_vals(:,1) + (51252292925/65400821598)*k_vals(:,3) - (2691868925/45128329728)*k_vals(:,4) + (187940372067/1594534317056)*k_vals(:,5) - (1776094331/19743644256)*k_vals(:,6) + (11237099/235043384)*k_vals(:,7));<br /> u_interp = hermite_quartic_interpolation ([z(i-1) z(i)], [u(:,i-1) u_half u(:,i)], [k_vals(:,1) k_vals(:,end)], tspan(counter));<br /><br /> % it is also possible to do a new function evaluation and use the quintic Hermite interpolator<br /> %f_half = feval (func, t+1/2*dt, u_half, options.vfunarguments{:});<br /> %u_interp = hermite_quintic_interpolation ([z(i-1) z(i)], [u(:,i-1) u_half u(:,i)], [k_vals(:,1) f_half k_vals(:,end)], tspan(counter));<br /> otherwise<br /> warning ('high order interpolation not yet implemented: using cubic interpolation instead');<br /> der(:,1) = feval (func, z(i-1) , u(:,i-1), options.vfunarguments{:});<br /> der(:,2) = feval (func, z(i) , u(:,i), options.vfunarguments{:});<br /> u_interp = hermite_cubic_interpolation ([z(i-1) z(i)], [u(:,i-1) u(:,i)], der, tspan(counter));<br /> end<br /><br /> % add the interpolated value of the solution<br /> u = [u(:,1:i-1), u_interp, u(:,i:end)];<br /> <br /> % add the time requested<br /> z = [z(1:i-1);tspan(counter);z(i:end)];<br /><br /> % update counters<br /> counter++;<br /> i++;<br /> endwhile<br /><br /> % if new time requested is not out of this interval<br /> if ((counter &lt;= k) &amp;&amp; (vdirection*z(end) &gt; vdirection*tspan(counter)))<br /> % update the counter<br /> i++;<br /> else<br /> % else, stop the cycle and go on with the next iteration<br /> i = length (z) + 1;<br /> endif<br /><br /> endwhile<br /><br />endif<br /></code></pre><br />It is important to notice that:<br /><br /><ul><li>The 1st order approximation doesn't require any
additional function evaluation.</li><li>The 2nd order approximation may require the evaluation of the function at the current time. This can be avoided if the stepper already returns that value.</li><li>The only 3rd order solver implemented is <b>ode23</b>. The 3rd order approximation exploits the Runge-Kutta $$k$$ values to avoid further function evaluations.</li><li>No 4th order scheme is implemented yet. However, if one were to use <b>ode45</b> without local extrapolation, then the <b>dorpri_interpolation</b> function can be used to obtain a 4th order approximation without any additional function evaluation. For any other 4th order scheme the <b>hermite_quartic_interpolation</b> function can be used.</li><li>For the 5th order method <b>ode45</b>, Shampine proposes to obtain a 4th order approximation at the middle point and to use quartic interpolation. It is also possible to do quintic interpolation directly, but this requires an additional function evaluation without (according to Shampine) a significant improvement.</li><li>For the higher order solvers (<b>ode78</b>), a suitable interpolator has not yet been implemented.</li></ul>Finally, since I wrote the interpolation functions in such a way that they are independent of the number of output points requested, a possible improvement would be to compute all the values of the dense output between $$t$$ and $$t+dt$$ at once, instead of one value at a time.<br /><br /><i>2014-08-18</i><h2><span style="font-size: x-large;">FSAL - new stepper implementation</span></h2>As stated in the previous post, the previous implementation of the steppers did not allow exploiting the FSAL (First Same As Last) property of the Bogacki-Shampine algorithm (<b>ode23</b>) and of the Dormand-Prince
algorithm (<b>ode45</b>).<br />The input and output arguments of the steppers have therefore been modified. As an example, here is the <b>runge_kutta_23</b> stepper:<br /><br /><pre class="prettyprint linenums"><code class="lang-matlab">function varargout = runge_kutta_23 (f, t, x, dt, varargin)<br /> options = varargin{1};<br /> k = zeros (size (x, 1), 4);<br /><br /> if (nargin == 5) % only the options are passed<br /> k(:,1) = feval (f, t , x, options.vfunarguments{:});<br /> elseif (nargin == 6) % both the options and the k values are passed<br /> k(:,1) = varargin{2}(:,end); % FSAL property<br /> endif<br /> k(:,2) = feval (f, t + (1/2)*dt, x + dt*(1/2)*k(:,1), options.vfunarguments{:});<br /> k(:,3) = feval (f, t + (3/4)*dt, x + dt*(3/4)*k(:,2), options.vfunarguments{:});<br /><br /> %# computing new time and new values for the unknowns<br /> varargout{1} = t + dt; %t_next<br /> varargout{2} = x + dt.*((2/9)*k(:,1) + (1/3)*k(:,2) + (4/9)*k(:,3)); % return the 3rd order approximation x_next<br /><br /> %# if the estimation of the error is required<br /> if (nargout &gt;= 3)<br /> %# new solution to be compared with the previous one<br /> k(:,4) = feval (f, t + dt, varargout{2}, options.vfunarguments{:});<br /> varargout{3} = x + dt.*((7/24)*k(:,1) + (1/4)*k(:,2) + (1/3)*k(:,3) + (1/8)*k(:,4)); %x_est<br /> varargout{4} = k;<br /> endif<br /><br />endfunction<br /></code></pre><br />And the call within the solver becomes: <br /><pre class="prettyprint linenums"><code class="lang-matlab">[s, y, y_est, k_vals] = stepper (func, z(end), u(:,end), dt, options, k_vals);</code></pre><br />where <i>k_vals</i> has to be initialized for the first iteration as <i>f(t, x)</i>.<br />This implementation reduces the number of function evaluations per step.<br /><br />Furthermore, after some tests in MATLAB, the return values for the solution and the estimate in the runge_kutta_23 and runge_kutta_45 steppers have been swapped to automatically perform local
extrapolation. The MATLAB functions are in fact of order 3 and 5, respectively.<br /><br /><i>2014-08-12</i><h2><span style="font-size: x-large;">Status of the code: bugfixes and new issues</span></h2><br /><h3>ODESET and ODEGET</h3><br /><ul><li>The <b>odeset</b> and <b>odeget</b> functions have been slightly modified to be compliant with MATLAB. Every MATLAB option is present and all the options are tested. The coding style has been adapted to the GNU Octave standard.</li><li><b>ode_struct_value_check</b>: this function was introduced by Roberto in addition to <b>odepkg_structure_check</b>. The relation between the two functions has to be clarified: in particular, it is necessary to understand whether two different functions are really needed or one is sufficient.</li></ul><br /><br /><h3>CHANGES TO THE STEPPERS</h3><br /><ul><li>The <b>runge_kutta_78</b> stepper has been implemented.</li><li>Two 4th order steppers have been implemented: <b>runge_kutta_45_dopri</b> (<i>Dormand-Prince</i> coefficients) and <b>runge_kutta_45_fehlberg</b> (<i>Fehlberg</i> coefficients).</li></ul><br /><br /><h3>CHANGES TO THE SOLVERS</h3><br /><ul><li>The <b>ode78</b> solver has been updated to the new structure. It now exploits the <b>runge_kutta_78</b> stepper.</li><li>A series of tests has been added to each solver to check all the functionality and all the options. This made it possible to detect and correct some bugs. In particular, the adaptive timestep evaluation had some issues that led to the use of too short timesteps. This has been corrected, and the algorithm proposed in the reference at the end of this post is now used.</li><li>Furthermore, the current implementation uses linear interpolation to evaluate the solution at the user-specified times.
This leads to a considerable loss in accuracy and is not consistent with MATLAB (which guarantees the same order of accuracy as the scheme used). In the reference at the end of this post different methods are proposed for the <i>dense output</i>: these will be used as a reference for the implementation of a better solution.</li><li>In the released version of <b>odepkg</b> some of the solvers perform <i>local extrapolation</i>, that is, the higher-order estimate is chosen as the solution. With the new stepper structure, as it is now, this choice affects all the solvers. It has to be decided whether to perform it or not (MATLAB doesn't seem to do it, thus I suggest avoiding it).</li><li>The MATLAB implementation of <b>ode45</b> uses the <i>Dormand-Prince</i> (DP) coefficients. In the released version of <b>odepkg</b> there exist two solvers: <b>ode45</b>, which uses the <i>Fehlberg</i> coefficients, and <b>ode54</b>, which uses the DP coefficients. To be consistent with MATLAB, <b>ode45</b> now uses the DP method. This makes the <b>runge_kutta_45_fehlberg</b> stepper and the <b>ode54</b> solver, as they stand, redundant. Either their elimination or a change of the solver might be considered. However, one of the advantages of the DP coefficients is the possibility of reusing the last function evaluation of a given step as the first evaluation of the subsequent one.
This is not easily done with the stepper structure introduced by Roberto.</li></ul><br /><br /><h3>CHANGES TO THE OPTIONS</h3><br /><ul><li>The <b>InitialStep</b> option has been modified to be MATLAB-compatible (it must be a <u>positive</u> scalar).</li><li>The <b>RelTol</b> default value has been changed to <b>1e-3</b> instead of <b>1e-6</b> to be MATLAB-compatible.</li><li>The <b>MaxStep</b> option has been implemented.</li><li>The <b>NormControl</b> option has been implemented.</li></ul><br /><br /><h3>TODO</h3>In addition to the general plan, a couple of issues need to be addressed:<br /><br /><ul><li>Clarify the relation between <b>ode_struct_value_check</b> and <b>odepkg_structure_check</b>.</li><li>Decide whether <i>local extrapolation</i> has to be used or not. My opinion (and the current implementation) is to avoid it, to be compliant with what MATLAB seems to be doing.</li><li>Solve the <b>dense output</b> problem in a way that guarantees consistency with MATLAB.</li><li>Consider whether it is possible to reduce the number of function evaluations for the Dormand-Prince stepper (<b>ode45</b>) and the Bogacki-Shampine stepper (<b>ode23</b>) by exploiting the FSAL (First Same As Last) property.</li><li>Decide whether in future releases of <b>odepkg</b> the <b>ode54</b> solver has to be removed or maybe changed to become a <i>Fehlberg</i> solver.</li></ul><br /><br /><br />E. Hairer, S.P. Nørsett, G.
Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, Springer, 1993.<br /><div><br /></div><i>2014-07-29</i><h2 style="line-height: 100%; margin-bottom: 0in;"><span style="font-size: x-large;">SoCiS 2014 - New timeline</span></h2><div style="line-height: 100%; margin-bottom: 0in;"><br /></div><div style="line-height: 100%; margin-bottom: 0in;">On the occasion of SoCiS 2014 I will take over the work that was done last year on <b>odepkg</b> and continue it. The final goal is to release a new stable version of <b>odepkg</b> and to insert the most common solvers into <b>core-Octave</b> in such a way that everything is MATLAB-compatible.</div><div style="line-height: 100%; margin-bottom: 0in;"><br /></div><div style="line-height: 100%; margin-bottom: 0in;">The following list explains the main points of the timeline for my SOCIS project:</div><div style="line-height: 100%; margin-bottom: 0in;"><br /></div><div style="line-height: 100%; margin-bottom: 0in;"></div><ol><li><span style="line-height: 100%;">Check the current status of the code, in particular with respect to the current release on SourceForge. The two repositories will be merged so that every test present in the old version is added to the new code. Verify whether there are missing features and add them if necessary.</span></li><li><span style="line-height: 100%;">Compare the performance of the old and the new structure. In particular, we expect the introduction of the Levenshtein algorithm for string comparisons to be a critical issue. If necessary, implement <i>levenshtein.m</i> and <i>fuzzy_compare.m</i> in C++.</span></li><li><span style="line-height: 100%;">Verify that the functions <b>odeset</b> and <b>odeget</b> are MATLAB-compatible and compliant with Octave core.
Add the two functions to the core.</span></li><li><span style="line-height: 100%;">Move <b>ode45</b>, <b>ode23</b> and <b>ode23s</b> to Octave core.</span></li><li><span style="line-height: 100%;">Implement the <b>ode15s</b> solver. This solver is still missing in odepkg but is highly suitable for stiff problems.</span></li><li><span style="line-height: 100%;">Move <b>ode15s</b> to Octave core.</span></li><li><span style="line-height: 100%;">New release of <b>odepkg</b>.</span></li></ol><i>2014-01-05</i><h2><span style="font-size: x-large;">Documentation</span></h2>This is the link for the documentation file:<br /><a href="http://www.scribd.com/doc/196151318/SOCIS-2013-Documentation" target="_blank">SOCIS 2013 - Documentation for new functions added to odepkg</a><br /><br />This is the link for the implemented code:<br /><a href="https://bitbucket.org/dashboard/overview" target="_blank">OdePkg with my added functions</a><br /><br /><i>2013-12-15</i><h2><span style="font-size: x-large;">RATTLE algorithm</span></h2><h3> RATTLE </h3> <p>Constrained mechanical systems form an important class of differential equations on manifolds. For all the theory presented here and used for the implementation, I refer to "Geometric Numerical Integration" by Hairer, Lubich and Wanner.</p><p>Consider a mechanical system described by position coordinates $$q_1,\dots,q_d$$ and suppose that the motion is constrained to satisfy $$g(\mathbf q) = \mathbf 0$$ where $$g:\mathbb R^d \rightarrow \mathbb R^m$$, with $$m \lt d$$.
The equations of motion governing the system then become: $$\left\{ \begin{array}{l} \dot{\mathbf q} = \frac{\partial H}{\partial \mathbf p} \\ \dot{\mathbf p} = -\frac{\partial H}{\partial \mathbf q} - G(\mathbf q)^T\lambda \\ g(\mathbf q) = \mathbf 0 \end{array}\right.$$ where $$G(\mathbf q) = \dfrac{\partial g(\mathbf q)}{\partial \mathbf q}$$.</p><p>The symplectic Euler method can be extended to constrained systems, but here we focus on the <b>SHAKE</b> and <b>RATTLE</b> algorithms. SHAKE is a two-step algorithm; since I'm implementing only one-step algorithms and the overall structure of solvers and steppers is designed for one-step methods, I implemented just the <b>RATTLE</b> algorithm.</p> <h3> odeRATTLE </h3> <p>The RATTLE algorithm implemented works with any general Hamiltonian $$H(\mathbf q,\mathbf p)$$ and is defined as follows: $$\left\{\begin{array}{l} \mathbf p_{n+\frac{1}{2}} = \mathbf p_n -\frac{h}{2}\big(H_{\mathbf q}(\mathbf q_n,\mathbf p_{n+\frac{1}{2}}) + G(\mathbf q_n)^T \mathbf {\lambda}_n \big) \\ \mathbf q_{n+1} = \mathbf q_n +\frac{h}{2} \big( H_{\mathbf p}(\mathbf q_n,\mathbf p_{n+\frac{1}{2}}) + H_{\mathbf p}(\mathbf q_{n+1},\mathbf p_{n+\frac{1}{2}}) \big) \\ g(\mathbf q_{n+1}) = \mathbf 0 \\ \mathbf p_{n+1} = \mathbf p_{n+\frac{1}{2}} -\frac{h}{2}\big(H_{\mathbf q}(\mathbf q_{n+1},\mathbf p_{n+\frac{1}{2}}) + G(\mathbf q_{n+1})^T \mathbf{\mu}_n \big) \\ G(\mathbf q_{n+1}) H_{\mathbf p}(\mathbf q_{n+1},\mathbf p_{n+1}) = \mathbf 0 \end{array}\right.$$ where $$h=\Delta t=t_{k+1}-t_k$$ and $$\mathbf{\lambda}_n$$ and $$\mathbf{\mu}_n$$ are Lagrange multipliers needed to impose the constraints.
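</p><p>As an illustration of how one RATTLE update works in practice, here is a minimal stand-alone sketch (in Python, purely for illustration; this is not the odepkg implementation) for the special case of a free particle, $$H=\frac{1}{2}|\mathbf p|^2$$, constrained to the unit circle $$g(\mathbf q)=\mathbf q\cdot\mathbf q-1$$. For this Hamiltonian the position update is explicit and both multipliers have closed-form expressions, so no general nonlinear solver is needed:</p>

```python
import math

def rattle_step_circle(q, p, h):
    """One RATTLE step for a free particle (H = |p|^2 / 2) constrained
    to the unit circle g(q) = q.q - 1 = 0, in two dimensions."""
    qx, qy = q
    px, py = p
    # Position half: q1 = q + h*p - lam*h^2*q must satisfy |q1| = 1,
    # which is a quadratic equation in the multiplier lam.
    ax, ay = qx + h*px, qy + h*py        # unconstrained position update
    bb = h**4                            # |h^2 q|^2, using |q| = 1
    ab = h**2 * (ax*qx + ay*qy)
    aa = ax*ax + ay*ay
    lam = (ab - math.sqrt(ab*ab - bb*(aa - 1.0))) / bb  # root nearest 0
    q1x, q1y = ax - lam*h**2*qx, ay - lam*h**2*qy
    # Momentum half: the hidden constraint G(q1) p1 = 0 says p1 must be
    # tangent to the circle at q1, i.e. project p_{n+1/2} onto the
    # tangent space at q1.
    phx, phy = px - lam*h*qx, py - lam*h*qy             # p_{n+1/2}
    dot = q1x*phx + q1y*phy
    p1x, p1y = phx - dot*q1x, phy - dot*q1y
    return (q1x, q1y), (p1x, p1y)
```

<p>Iterating this map keeps the constraint satisfied at every step up to round-off, and the energy shows no secular drift, which is the typical behaviour of a symmetric symplectic scheme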
</p><p>It can be demonstrated that this numerical method is symmetric, symplectic and convergent of order 2.</p> <p> The following code represents its implementation: <pre class="prettyprint linenums"><code class="lang-matlab">function [t_next,x_next,err]=rattle(f,t,x,dt,options)<br /> H = odeget(options,'HamiltonianHessFcn',[],'fast');<br /> GG = odeget(options,'ConstraintHessFcn',[],'fast');<br /> if( ~isempty(H) && ~isempty(GG) )<br /> fsolve_opts = optimset('Jacobian','on');<br /> else<br /> fsolve_opts = optimset('Jacobian','off');<br /> end<br /> g = odeget(options,'ConstraintFcn',[],'fast');<br /> G = odeget(options,'ConstraintGradFcn',[],'fast');<br /> c_nb = odeget(options,'ConstraintsNb',[],'fast');<br /><br /> dim = length(x)/2;<br /> q0 = x(1:dim);<br /> p0 = x(dim+1:end);<br /><br /> RATTLE = @(y)constr_sys(y,dim,c_nb,f,H,g,G,GG,t,q0,p0,dt);<br /> y0 = [q0;p0;p0;zeros(2*c_nb,1)];<br /> y0 = fsolve(RATTLE,y0,fsolve_opts);<br /> t_next = t+dt;<br /> x_next = [y0(1:dim);y0(2*dim+1:3*dim)];<br /><br /> if(nargout==3)<br /> dt = dt/2;<br /> RATTLE = @(y)constr_sys(y,dim,c_nb,f,H,g,G,GG,t,q0,p0,dt);<br /> y0 = [q0;p0;p0;zeros(2*c_nb,1)];<br /> y0 = fsolve(RATTLE,y0,fsolve_opts);<br /><br /> q0 = y0(1:dim);<br /> p0 = y0(2*dim+1:3*dim);<br /> t = t+dt;<br /><br /> RATTLE = @(y)constr_sys(y,dim,c_nb,f,H,g,G,GG,t,q0,p0,dt);<br /> y0 = [q0;p0;p0;zeros(2*c_nb,1)];<br /> y0 = fsolve(RATTLE,y0,fsolve_opts);<br /><br /> x_est = [y0(1:dim);y0(2*dim+1:3*dim)];<br /> err = norm(x_est-x_next,2);<br /> end<br />end<br /><br />function [F,J] = constr_sys(y,dim,c_nb,f,H,g,G,GG,t,q0,p0,dt)<br /> F = zeros(3*dim+2*c_nb,1);<br /> F(1:dim) = y(1:dim) - q0 - (dt/2).*(f(t,[q0; ...<br /> y(dim+1:2*dim)])(1:dim) + f(t+dt,y(1:2*dim))(1:dim));<br /> F(dim+1:2*dim) = y(dim+1:2*dim) - p0 - (dt/2).*(f(t,[q0; ...<br /> y(dim+1:2*dim)])(dim+1:end) - G(q0)'*y(3*dim+1:3*dim+c_nb));<br /> F(2*dim+1:3*dim) = y(2*dim+1:3*dim) - y(dim+1:2*dim) - ...<br /> (dt/2)*(f(t+dt,y(1:2*dim))(dim+1:end)
- ...<br /> G(y(1:dim))'*y(3*dim+c_nb+1:end));<br /> F(3*dim+1:3*dim+c_nb) = g(y(1:dim));<br /> F(3*dim+c_nb+1:end) = G(y(1:dim))*(f(t+dt,[y(1:dim); ...<br /> y(2*dim+1:3*dim)])(1:dim));<br /><br /> if( nargout==2 )<br /> J = zeros(3*dim+2*c_nb,3*dim+2*c_nb);<br /> J(1:dim,1:dim) = eye(dim) - ...<br /> (dt/2)*(H(t+dt,y(1:2*dim))(dim+1:end,1:dim));<br /> J(1:dim,dim+1:2*dim) = -(dt/2)*(H(t,[q0; ...<br /> y(dim+1:2*dim)])(dim+1:end,dim+1:end) + ...<br /> H(t+dt,y(1:2*dim))(dim+1:end,dim+1:end));<br /> J(dim+1:2*dim,dim+1:2*dim) = eye(dim) + ...<br /> (dt/2)*(H(t,[q0;y(dim+1:2*dim)])(1:dim,dim+1:end));<br /> J(dim+1:2*dim,3*dim+1:3*dim+c_nb) = (dt/2)*G(q0)';<br /> J(2*dim+1:3*dim,1:dim) = (dt/2)*(H(t+dt, ...<br /> y(1:2*dim))(1:dim,1:dim));<br /> for k = 1:1:c_nb<br /> J(2*dim+1:3*dim,1:dim) = J(2*dim+1:3*dim,1:dim) - ...<br /> (dt/2)*(y(3*dim+c_nb+k)*(GG(y(1:dim))(:,:,k)));<br /> end<br /> J(2*dim+1:3*dim,dim+1:2*dim) = -eye(dim) + ...<br /> (dt/2)*(H(t+dt,y(1:2*dim))(1:dim,dim+1:end));<br /> J(2*dim+1:3*dim,2*dim+1:3*dim) = eye(dim) + ...<br /> (dt/2)*(H(t+dt,y(1:2*dim))(1:dim,dim+1:end));<br /> J(2*dim+1:3*dim,3*dim+c_nb+1:end) = (dt/2)*G(y(1:dim))';<br /> J(3*dim+1:3*dim+c_nb,1:dim) = G(y(1:dim));<br /> J(3*dim+c_nb+1:end,1:dim) = G(y(1:dim))* ...<br /> (H(t+dt,[y(1:dim);y(2*dim+1:3*dim)])(dim+1:end,1:dim));<br /> for k = 1:1:c_nb<br /> J(3*dim+c_nb+k,1:dim) = J(3*dim+c_nb+k,1:dim) + ...<br /> ((GG(y(1:dim))(:,:,k))*(f(t+dt,[y(1:dim); ...<br /> y(2*dim+1:3*dim)])(1:dim)))';<br /> end<br /> J(3*dim+c_nb+1:end,2*dim+1:3*dim) = G(y(1:dim))* ...<br /> (H(t+dt,[y(1:dim);y(2*dim+1:3*dim)]) ...<br /> (dim+1:end,dim+1:end));<br /> end<br />end</code></pre>It works with any number of constraints, provided their number is smaller than the dimension of the system. As usual, all the source code is available at my public repository <a href="https://robs88@bitbucket.org/robs88/octave-odepkg" target="_blank">octave-odepkg</a>.
</p><br /><br /><i>2013-12-09</i><h2><span style="font-size: x-large;">Spectral Variational Integrators</span></h2><h3> Variational integrators </h3> <p> <b>Variational integrators</b> are a class of numerical methods for mechanical systems which comes from the discrete formulation of <b>Hamilton's principle of stationary action</b>. They can be applied to ODEs, PDEs and to both conservative and forced systems. In the absence of forcing terms these methods preserve momenta related to symmetries of the problem and do not dissipate energy, so they exhibit long-term stability and good long-term behaviour. </p> <p>Given a configuration manifold $$V$$, the <b>discrete Lagrangian</b> is a function $$L_d: V\times V\rightarrow\mathbb R$$ which represents an approximation of the action between two fixed configurations: $$L_d(\mathbf q_k,\mathbf q_{k+1}) \approx \int_{t_k}^{t_{k+1}} L(\mathbf q,\dot{\mathbf q};t) dt \hspace{0.5cm}\text{with}\hspace{0.4cm}\mathbf q_{k},\mathbf q_{k+1}\hspace{0.3cm}\text{fixed.}$$ From here, applying Hamilton's principle, we get the <b>discrete Euler-Lagrange equations</b>: $$D_2L_d(\mathbf q_{k-1},\mathbf q_k) + D_1 L_d(\mathbf q_k,\mathbf q_{k+1}) = 0\ ,$$ and thanks to the <b>discrete Legendre transforms</b> we get the <b>discrete Hamilton's equations of motion</b>: $$\left\{\begin{array}{l} \mathbf p_k = -D_1 L_d(\mathbf q_k,\mathbf q_{k+1}) \\ \mathbf p_{k+1} = D_2 L_d(\mathbf q_k,\mathbf q_{k+1}) \end{array}\right.\ ,$$ so that we pass from a 2-step system of order N to a 1-step system of order 2N.
This system gives the updating map: $$(\mathbf q_k,\mathbf p_k)\rightarrow(\mathbf q_{k+1},\mathbf p_{k+1})\ .$$ For all the theory behind this I refer to "<a href="http://www.google.it/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CDUQFjAA&url=http%3A%2F%2Fwww3.nd.edu%2F~izaguirr%2Fpapers%2Facta_numerica.pdf&ei=OKulUtefM4iCzAPwi4KYDg&usg=AFQjCNEAJuQ1fOeQ9-8fj5qjg1i9JmN9Cw&sig2=f2A1JKMZq0dI4bRFelPV2g&bvm=bv.57752919,d.bGE" target="_blank">Discrete mechanics and variational integrators</a>" by Marsden and West.</p> <h3>Spectral variational integrators</h3> <p> To create a <b>spectral variational integrator</b> I considered a discretization of the configuration manifold on an n-dimensional functional space spanned by the orthogonal basis of <a href="http://en.wikipedia.org/wiki/Legendre_polynomials" target="_blank">Legendre polynomials</a>. After rescaling the problem from $$[t_k,t_{k+1}]$$ (with $$t_{k+1}-t_k=h$$) to $$[-1,1]$$, we get: $$\begin{array}{l} \mathbf q_k(z) = \sum_{i=0}^{n-1}\mathbf q_k^i l_i(z)\\ \dot{\mathbf q}_k(z) = \frac{2}{h} \sum_{i=0}^{n-1}\mathbf q_k^i \dot l_i(z) \end{array} \ .$$ Then I approximate the action with the <a href="http://en.wikipedia.org/wiki/Gaussian_quadrature" target="_blank">Gaussian quadrature rule</a> with $$m$$ internal nodes; putting it all together, we have: $$\int_{t_k}^{t_{k+1}} L(\mathbf q,\dot{\mathbf q};t)dt\hspace{0.5cm} \approx\hspace{0.5cm} \frac{h}{2}\sum_{j=0}^{m-1} \omega_j L\big( \sum_{i=0}^{n-1}\mathbf q_k^i l_i , \frac{2}{h} \sum_{i=0}^{n-1}\mathbf q_k^i \dot l_i \big)$$ Now, imposing Hamilton's principle of stationary action under the constraints: $$\mathbf q_k = \sum_{i=0}^{n-1}\mathbf q_k^i l_i(-1) \hspace{1.5cm} \mathbf q_{k+1} = \sum_{i=0}^{n-1}\mathbf q_k^i l_i(1)\ ,$$ we obtain the system: $$\left\{ \begin{array}{l} \sum_{j=0}^{m-1}\omega_j \bigg[ \mathbf p_j \dot{l}_s(z_j) - \frac{h}{2} l_s(z_j) \frac{\partial H}{\partial \mathbf q} \bigg ( \sum_{i=0}^{n-1}\mathbf q_k^i
l_i(z_j),\mathbf p_j \bigg ) \bigg ] + l_s(-1)\mathbf p_k - l_s(1)\mathbf p_{k+1} = 0 \hspace{1cm} \forall s=0,\dots,n-1\\ \frac{\partial H}{\partial \mathbf p}\bigg (\sum_{i=0}^{n-1}\mathbf q_k^i l_i(z_r),\mathbf p_r\bigg ) -\frac{2}{h}\ \sum_{i=0}^{n-1} \mathbf q_k^i \dot{l}_i(z_r)=0 \hspace{1cm} \forall r=0,\dots,m-1 \\ \sum_{i=0}^{n-1} \mathbf q_k^i l_i(-1) - \mathbf q_k = 0\\ \sum_{i=0}^{n-1} \mathbf q_k^i l_i(1) - \mathbf q_{k+1} = 0 \end{array}\right.$$ </p> <h3>odeSPVI</h3> <p>Within <b>odeSPVI</b> I implemented a geometric integrator for Hamiltonian systems, like odeSE and odeSV, which uses spectral variational integrators with basis polynomials of any order and any number of internal quadrature nodes, for both one-dimensional and multidimensional problems. <pre class="prettyprint"><code class="lang-matlab">[T,Y]=odeSPVI(@hamilton_equations,time,initial_conditions,options)</code></pre>This solver just uses the stepper <b>spectral_var_int.m</b>, which is the function where I implemented the resolution of the system displayed above; <tt>hamilton_equations</tt> must be a function_handle or an inline function in which the user implements Hamilton's equations of motion: $$\left\{\begin{array}{l} \dot{\mathbf q} = \frac{\partial H}{\partial \mathbf p}(\mathbf q,\mathbf p)\\ \dot{\mathbf p} = - \frac{\partial H}{\partial \mathbf q}(\mathbf q,\mathbf p) + \mathbf f(\mathbf q,\mathbf p)\end{array}\right.$$ where $$H(\mathbf q,\mathbf p)$$ is the Hamiltonian of the system and $$\mathbf f(\mathbf q,\mathbf p)$$ is an optional forcing term.
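</p><p>The two ingredients of the discretization above, the Legendre basis $$l_i$$ and the Gauss quadrature nodes $$z_j$$ and weights $$\omega_j$$, can be computed with a short stand-alone routine. The sketch below is in Python purely for illustration (in odepkg these quantities are precomputed and passed to the stepper through the options); it evaluates $$P_n$$ by the three-term recurrence and finds the Gauss-Legendre nodes by Newton iteration on the roots of $$P_m$$:</p>

```python
import math

def legendre(n, z):
    """Value and derivative of the Legendre polynomial P_n at z (|z| < 1)."""
    if n == 0:
        return 1.0, 0.0
    p_prev, p = 1.0, z
    for k in range(1, n):
        # three-term recurrence: (k+1) P_{k+1} = (2k+1) z P_k - k P_{k-1}
        p_prev, p = p, ((2*k + 1)*z*p - k*p_prev) / (k + 1)
    dp = n * (z*p - p_prev) / (z*z - 1.0)   # derivative identity
    return p, dp

def gauss_legendre(m):
    """Nodes and weights of the m-point Gauss-Legendre rule on [-1, 1]."""
    nodes, weights = [], []
    for j in range(m):
        # Chebyshev-like initial guess for the j-th root of P_m
        z = math.cos(math.pi * (j + 0.75) / (m + 0.5))
        for _ in range(100):
            p, dp = legendre(m, z)
            dz = p / dp
            z -= dz
            if abs(dz) < 1e-15:
                break
        p, dp = legendre(m, z)
        nodes.append(z)
        weights.append(2.0 / ((1.0 - z*z) * dp*dp))
    return nodes, weights
```

<p>An m-point rule integrates polynomials up to degree 2m-1 exactly, which is what makes the quadrature approximation of the action accurate enough for the variational derivation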
</p><p> Below is the implementation of the stepper: <pre class="prettyprint linenums"><code class="lang-matlab">function [t_next,x_next,err]=spectral_var_int(f,t,x,dt,options)<br /> fsolve_opts = optimset('Jacobian','on');<br /><br /> q_dofs = odeget(options,'Q_DoFs',[],'fast');<br /> p_dofs = odeget(options,'P_DoFs',[],'fast');<br /> nodes = odeget(options,'Nodes',[],'fast');<br /> weights = odeget(options,'Weights',[],'fast');<br /> leg = odeget(options,'Legendre',[],'fast');<br /> deriv = odeget(options,'Derivatives',[],'fast');<br /> extremes = odeget(options,'Extremes',[],'fast');<br /><br /> dim = length(x)/2;<br /> N = q_dofs+p_dofs+2;<br /> q0 = x(1:dim)(:)';<br /> p0 = x(dim+1:end)(:)';<br /><br /> SPVI = @(y)svi_system(y,q_dofs,p_dofs,weights,leg, ...<br /> deriv,extremes,dim,N,f,t,q0,p0,dt);<br /> y0 = [q0;zeros(q_dofs-1,dim);ones(p_dofs,1)*p0;q0;p0];<br /> y0 = reshape(y0,dim*N,1);<br /> z = fsolve(SPVI,y0,fsolve_opts);<br /> %z = inexact_newton(SPVI,y0,new_opts) %new_opts must be set <br /> z = reshape(z,N,dim);<br /><br /> y = [leg*z(1:q_dofs,1:dim),z((q_dofs+1):(end-2),1:dim)];<br /> x_next = [y;z(end-1,:),z(end,:)]';<br /> t_next = [t+(dt/2)*(nodes+1), t+dt]';<br /><br /> if(nargout==3)<br /> q_dofs_err = odeget(options,'Q_DoFs_err',[],'fast');<br /> p_dofs_err = odeget(options,'P_DoFs_err',[],'fast');<br /> nodes_err = odeget(options,'Nodes_err',[],'fast');<br /> weights_err = odeget(options,'Weights_err',[],'fast');<br /> leg_err = odeget(options,'Legendre_err',[],'fast');<br /> deriv_err = odeget(options,'Derivatives_err',[],'fast');<br /> extreme_err = odeget(options,'Extremes_err',[],'fast');<br /><br /> N_err = q_dofs_err+p_dofs_err+2;<br /> q1 = x_next(1:dim,end)(:)';<br /> p1 = x_next(dim+1:end,end)(:)';<br /><br /> SPVI = @(y)svi_system(y,q_dofs_err,p_dofs_err, ...<br /> weights_err,leg_err,deriv_err,extreme_err, ...<br /> dim,N_err,f,t,q0,p0,dt);<br /> p_interp = zeros(p_dofs_err,dim);<br /><br /> p_lower_order =
[p0;z(q_dofs+1:q_dofs+p_dofs,:);p1];<br /> for i=1:1:p_dofs_err<br /> p_interp(i,:) = .5*(p_lower_order(i,:) + ...<br /> p_lower_order(i+1,:));<br /> end<br /><br /> y0 = [z(1:q_dofs,:);zeros(1,dim);p_interp;q1;p1];<br /> y0 = reshape(y0,dim*N_err,1);<br /> z = fsolve(SPVI,y0,fsolve_opts);<br /> %z = inexact_newton(SPVI,y0,new_opts) %new_opts must be set <br /> z = reshape(z,N_err,dim);<br /><br /> x_est = [z(end-1,:),z(end,:)]';<br /> err = norm(x_est-x_next(:,end),2);<br /> end<br />end</code></pre> At lines 22 and 56 I put a comment line to show that it is also possible to solve the implicit system with <b>inexact_newton</b> (the solver explained in the previous post on <b>backward Euler</b>), so it is not mandatory to use <tt>fsolve</tt>. I will make this an option in order to let the user choose which solver to use.</p><p>Another important aspect to point out is that, in the case of an adaptive timestep, the error is estimated by computing a new solution on the same timestep with polynomials one order higher and one more internal node for the quadrature rule. Furthermore, for this last solution, the <b>starting guess</b> for <tt>fsolve</tt> is chosen in a non-trivial way: at line 53 we see that <tt>y0</tt> has in its first <tt>q_dofs_err-1</tt> rows the same modal values calculated before for the new solution <tt>x_next</tt>, with a row of zeros just below. Then the starting nodal values for the momenta are set (lines 47-51) as a <b>trivial average</b> of the nodal values of the new solution. This may seem a poor choice, but I found empirically that fsolve performs the same number of iterations as with more sophisticated reinitializations of the nodal values, so it is <b>less computationally expensive</b> to do a trivial average. 
</p><p>The following code shows the implementation of the system to be solved: <pre class="prettyprint linenums"><code class="lang-matlab">function [F,Jac] = svi_system(y,q_dofs,p_dofs,w,L,D,C, ...<br /> dim,N,f,t,q0,p0,dt)<br /> F = zeros(N*dim,1);<br /> V = zeros(p_dofs,dim*2);<br /> X = zeros(dim*2,1);<br /> W = reshape(y,N,dim);<br /> for i = 1:1:p_dofs<br /> X = [L(i,:)*W(1:q_dofs,:),W(i+q_dofs,:)]';<br /> V(i,:) = f(t,X);<br /> end<br /> for i = 1:1:dim<br /> F((1+N*(i-1)):(q_dofs+N*(i-1)),1) = (ones(1,p_dofs)* ...<br /> (((w.*y((q_dofs+1+N*(i-1)):(q_dofs+p_dofs+N*(i-1))))* ...<br /> ones(1,q_dofs)).*D + (((dt/2).*w.*V(:,i+dim))* ...<br /> ones(1,q_dofs)).*L) + (p0(i)*ones(1,q_dofs)).*C(1,:) - ...<br /> (y(N*i)*ones(1,q_dofs)).*C(2,:))'; <br /> F(1+N*(i-1)+q_dofs:N*(i-1)+q_dofs+p_dofs,1) = V(:,i) - ...<br /> (2.0/dt)*(D*y((1+N*(i-1)):(q_dofs+N*(i-1))));<br /> F(N*i-1) = C(2,:)*y((1+N*(i-1)):(q_dofs+N*(i-1)))-y(N*i-1);<br /> F(N*i) = C(1,:)*y((1+N*(i-1)):(q_dofs+N*(i-1))) - q0(i);<br /> end<br /> if(nargout>1)<br /> warning('off','Octave:broadcast');<br /> flag = 0;<br /> try<br /> [~,H]=f(0,zeros(2*dim,1));<br /> catch<br /> flag = 1;<br /> warning();<br /> end<br /> DV = zeros((dim*2)^2,p_dofs);<br /> if( flag==1 )<br /> for i = 1:1:p_dofs<br /> X = [L(i,:)*W(1:q_dofs,:),W(i+q_dofs,:)]';<br /> DV(:,i) = differential(f,t,X);<br /> end<br /> else<br /> for i = 1:1:p_dofs<br /> X = [L(i,:)*W(1:q_dofs,:),W(i+q_dofs,:)]';<br /> [~,DV(:,i)] = f(t,X);<br /> end<br /> end<br /> DV = DV';<br /> Jac=zeros(N*dim,N*dim);<br /> for u=1:1:dim<br /> [...]<br /> end<br /> end<br />end </code></pre> Here, at line 46, I don't show the implementation of the Jacobian of the implicit system because, with this visualization style, it would appear very chaotic. The important aspect is that the user can exploit the implemented Jacobian to <b>speed up the computation</b>: I verified that it really does speed up the computation when <tt>fsolve</tt> is used as the solver. 
Furthermore, the code checks whether the user has also implemented, in the function defining Hamilton's equations, the <b>Hessian of the Hamiltonian</b> (which is needed in the computation of the Jacobian of the system). If the function returns the Hessian as its second output argument, then the program uses it and the computation is faster; otherwise it approximates the Hessian with the following function and the computation will be slower: <pre class="prettyprint linenums"><code class="lang-matlab">function Hamilt = differential(f,t,x)<br /> f_x = f(t,x);<br /> dim_f = length(f_x);<br /> dim_x = length(x);<br /> if( dim_f ~= dim_x )<br /> error('not implemented yet');<br /> end<br /> Hamilt = zeros(dim_f*dim_x,1);<br /> delta = sqrt(eps);<br /> for i = 1:1:dim_f<br /> for j = i:1:dim_x<br /> Hamilt(i+(j-1)*dim_f) = (1/delta)*(f(t,x+delta* ...<br /> [zeros(j-1,1);1;zeros(dim_x-j,1)])(i) - f_x(i));<br /> Hamilt(j+(i-1)*dim_x) = Hamilt(i+(j-1)*dim_f);<br /> end<br /> end<br />end</code></pre> </p><p> The Hessian of the Hamiltonian must be stored in a particular way (which I still have to optimize, though the current one works fine), shown in the following example defining Hamilton's equations for the harmonic oscillator: <pre class="prettyprint linenums"><code class="lang-matlab">function [dy,ddy] = armonic_oscillator(t,y)<br /> dy = zeros(size(y));<br /> d = length(y)/2;<br /> q = y(1:d);<br /> p = y(d+1:end);<br /><br /> dy(1:d) = p;<br /> dy(d+1:end) = -q;<br /><br /> H = eye(2*d);<br /> ddy = transf(H);<br />end<br /><br />function v = transf(H)<br /> [r,c] = size(H);<br /> v = zeros(r*c,1);<br /> for i=1:1:r<br /> for j=1:1:c<br /> v(i+(j-1)*r) = H(i,j);<br /> end<br /> end<br />end</code></pre> </p> Roberto Porcuhttp://www.blogger.com/profile/18363005778421070960noreply@blogger.com0tag:blogger.com,1999:blog-5605512590493422168.post-69544480394635187832013-12-02T05:15:00.000-08:002013-12-02T05:15:13.298-08:00Symplectic Euler 
(semi-implicit Euler), Velocity-Verlet and Stormer-Verlet methods<h3> Symplectic Euler method </h3> <p>The <b>symplectic Euler</b> method is a modification of the Euler method, useful to solve Hamilton's equations of motion, that is, a system of ODEs where the unknowns are the generalized coordinates <b>q</b> and the generalized momenta <b>p</b>. It is of first order but, being a symplectic integrator, it yields better results than the classical Euler method. </p><p>Given a <a href="http://en.wikipedia.org/wiki/Hamiltonian_system" target="_blank">Hamiltonian system</a> with Hamiltonian <b>H=H(t;q,p)</b>, the system of ODEs to solve writes: $$\left\{\begin{array}{l} \dot{q} = \frac{\partial H}{\partial p}(t;q,p) = f(t;q,p)\\ \dot{p}=-\frac{\partial H}{\partial q}(t;q,p) = g(t;q,p) \end{array}\right.$$ </p><p>From E. Hairer, C. Lubich and G. Wanner, "Geometric Numerical Integration", the semi-implicit Euler scheme for the previous ODEs writes: $$\left\{\begin{array}{l} p_{k+1} = p_k + h g(t_k;q_k,p_{k+1}) \\ q_{k+1}=q_k + h f(t_k;q_k,p_{k+1}) \end{array}\right.$$ If g(t;q,p) does not depend on p, then this scheme is totally explicit, otherwise the first equation is implicit and the second explicit.</p><p>The function that I created to make use of this integrator is <b>odeSE.m</b> and it passes to the integrate functions the stepper defined in the function <b>symplectic_euler.m</b>. 
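As a concrete instance of the scheme above (an illustrative example, not taken from the package documentation), consider the harmonic oscillator with Hamiltonian $$H(q,p) = \frac{1}{2}(p^2+q^2)\ ,$$ so that $$f(t;q,p)=p$$ and $$g(t;q,p)=-q\ .$$ Since g does not depend on p, each step is fully explicit: $$\left\{\begin{array}{l} p_{k+1} = p_k - h q_k \\ q_{k+1} = q_k + h p_{k+1} \end{array}\right.$$ 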
The signature of <b>odeSE</b> is similar to that of the other ode solvers: <pre class="prettyprint"><code class="lang-matlab">[t,y]=odeSE(@ode_fcn,tspan,y0)<br />[t,y]=odeSE(@ode_fcn,tspan,y0,options)</code></pre>It's important to know that <b>y0</b> must be a vector containing the initial values of the coordinates in its first half and the initial values of the momenta in its second half; the function handle (or inline function) <b>ode_fcn</b> must take exactly two input arguments (the first is the time and the second is the vector containing coordinates and momenta, ordered as in <b>y0</b>) and must return a vector containing dq/dt in its first half and dp/dt in the second half.</p><p>The <b>options</b> variable can be set with <b>odeset</b> and, if the system of ODEs is explicit, the field options.Explicit can be set to 'yes' in order to speed up the computation. If <b>tspan</b> has only one element, then options.TimeStepNumber and options.TimeStepSize must not be empty, in which case the integrate function <b>integrate_n_steps</b> will be used. 
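A short usage sketch (a hypothetical call, assuming odeSE is on the Octave path) for the harmonic oscillator, whose equations of motion are dq/dt = p and dp/dt = -q: <pre class="prettyprint"><code class="lang-matlab">ode_fcn = @(t,y) [y(2); -y(1)];   % [dq/dt; dp/dt]<br />y0 = [1; 0];                      % q(0) = 1, p(0) = 0<br />opt = odeset ('Explicit', 'yes'); % dp/dt does not depend on p<br />[t, y] = odeSE (ode_fcn, [0, 2*pi], y0, opt);</code></pre>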
</p> <p>This is the code of the stepper <b>symplectic_euler.m</b>:<br><pre class="prettyprint linenums"><code class="lang-matlab">function [x_next,err] = symplectic_euler(f,t,x,dt,options)<br /> dim = length(x)/2;<br /> q = x(1:dim);<br /> p = x(dim+1:end);<br /> if( strcmp(options.Explicit,'yes') )<br /> p_next = p + dt*(f(t,[q; p])(dim+1:end));<br /> q_next = q + dt*f(t,[q; p_next])(1:dim);<br /> x_next = [q_next; p_next];<br /> if(nargout == 2)<br /> dt_new = dt/2;<br /> p_next = p + dt_new*(f(t,[q; p])(dim+1:end));<br /> q_next = q + dt_new*f(t,[q; p_next])(1:dim);<br /> q = q_next;<br /> p = p_next;<br /> t = t+dt_new;<br /> p_next = p + dt_new*(f(t,[q; p])(dim+1:end));<br /> q_next = q + dt_new*f(t,[q; p_next])(1:dim);<br /> x_est = [q_next;p_next];<br /> err = norm(x_next-x_est,2);<br /> end<br /> else<br /> SE1 = @(y) (y-p-dt*(f(t,[q; y])(dim+1:end)));<br /> p_next = fsolve(SE1,zeros(size(p)));<br /> q_next = q + dt*f(t,[q; p_next])(1:dim);<br /> x_next = [q_next; p_next];<br /> if(nargout == 2)<br /> dt_new = dt/2;<br /> SE1 = @(y) (y-p-dt_new*(f(t,[q; y])(dim+1:end)));<br /> p_next = fsolve(SE1,zeros(size(p)));<br /> q_next = q + dt_new*f(t,[q; p_next])(1:dim);<br /> q = q_next;<br /> p = p_next;<br /> t = t+dt_new;<br /> SE1 = @(y) (y-p-dt_new*(f(t,[q; y])(dim+1:end)));<br /> p_next = fsolve(SE1,zeros(size(p)));<br /> q_next = q + dt_new*f(t,[q; p_next])(1:dim);<br /> x_est = [q_next;p_next];<br /> err = norm(x_next-x_est,2);<br /> end<br /> end<br />end</code></pre></p> <h3> Velocity-Verlet method </h3> <p>The <b>velocity-Verlet</b> method is a numerical method used to integrate Newton's equations of motion. The Verlet integrator offers greater stability, as well as other properties that are important in physical systems such as preservation of the symplectic form on phase space, at no significant additional cost over the simple Euler method. 
</p><p>If we call <b>x</b> the coordinate, <b>v</b> the velocity and <b>a</b> the acceleration, then the equations of motion write: $$\left\{ \begin{array}{l} \frac{dx}{dt} = v(t,x)\\ \frac{dv}{dt} = a(t,x) \end{array} \right.$$</p><p>So that, given the initial conditions (coordinates and velocities), the <a href="http://en.wikipedia.org/wiki/Verlet_integration" target="_blank">velocity-Verlet</a> scheme writes: $$\left\{ \begin{array}{l} x_{k+1} = x_k + h v_k + 0.5 h^2 a_k\\ v_{k+1} = v_k + 0.5 h (a_k + a_{k+1}) \end{array}\right.$$ where $$a_{k+1} = a(t_{k+1},x_{k+1})\ .$$</p><p>This method is one order more accurate than the <b>symplectic Euler</b> method: its global error is of order two. Furthermore, if the acceleration indeed results from the forces in a conservative mechanical or Hamiltonian system, the energy of the approximation essentially oscillates around the constant energy of the exactly solved system, with a global error bound again of order two.</p><p> The function that uses the <b>velocity-Verlet</b> scheme is <b>odeVV.m</b> and the stepper called at each iteration is <b>velocity_verlet.m</b>. 
The signature of <b>odeVV</b> is the same as that of the other ODE solvers:<br><pre class="prettyprint"><code class="lang-matlab">[t,y]=odeVV(@ode_fcn,tspan,y0)<br />[t,y]=odeVV(@ode_fcn,tspan,y0,options)</code></pre></p><p>The documentation of the input arguments is the same as described in the previous <b>symplectic Euler</b> section, with the difference that now the function <b>ode_fcn</b> must return a vector containing the velocities in its first half and the expression of the acceleration in the second half.</p> <p>This is the code of the stepper <b>velocity_verlet.m</b>:<br><pre class="prettyprint linenums"><code class="lang-matlab">function [x_next,err] = velocity_verlet(f,t,x,dt,options)<br /> dim = length(x)/2;<br /> q = x(1:dim);<br /> v = x(dim+1:end);<br /> a = f(t,x);<br /> q_next = q + dt*v + .5*dt*dt*a(dim+1:end);<br /> v_next = v + .5*dt*((a+f(t+dt,[q_next; v]))(dim+1:end));<br /> x_next = [q_next; v_next];<br /> if(nargout == 2)<br /> dt_new = dt/2;<br /> q_next = q + dt_new*v + .5*dt_new*dt_new*a(dim+1:end);<br /> v_next = v + .5*dt_new*((a + ...<br /> f(t+dt_new,[q_next;v]))(dim+1:end));<br /> t = t+dt_new;<br /> q = q_next;<br /> v = v_next;<br /> a = f(t,[q; v]);<br /> q_next = q + dt_new*v + .5*dt_new*dt_new*a(dim+1:end);<br /> v_next = v + .5*dt_new*((a + ...<br /> f(t+dt_new,[q_next;v]))(dim+1:end));<br /> x_est = [q_next; v_next];<br /> err = norm(x_next - x_est,2);<br /> end<br />end</code></pre></p> <h3> Stormer-Verlet method </h3> <p>The <b>Stormer-Verlet</b> scheme is a symplectic integrator of order two, useful to integrate Hamiltonian systems in the form described in the previous <b>symplectic Euler</b> section. It is characterized by the approximation of the integral defining the discrete Lagrangian $$L_d(q_k,q_{k+1})\approx\int_{t_k}^{t_{k+1}}L(t;q,\dot{q})dt$$ with the <a href="http://en.wikipedia.org/wiki/Trapezoidal_rule" target="_blank">trapezoidal rule</a>. 
So that $$L_d(q_k,q_{k+1}) = \frac{h}{2}\bigg( L\Big(q_k,\frac{q_{k+1}-q_k}{h}\Big) + L\Big(q_{k+1},\frac{q_{k+1}-q_k}{h}\Big) \bigg)\ .$$</p> <p>Considering again the system: $$\left\{\begin{array}{l} \dot{q} = \frac{\partial H}{\partial p}(t;q,p) = f(t;q,p)\\ \dot{p}=-\frac{\partial H}{\partial q}(t;q,p) = g(t;q,p) \end{array}\right. \ ,$$ the scheme implemented for this integrator (described in E. Hairer, C. Lubich and G. Wanner, "Geometric Numerical Integration") writes as follows: $$\left\{\begin{array}{l} p_{k+\frac{1}{2}} = p_k + \frac{h}{2}g(t_k;q_k,p_{k+\frac{1}{2}}) \\ q_{k+1}=q_k+\frac{h}{2}\Big( f(t_k;q_k,p_{k+\frac{1}{2}}) + f(t_{k+1};q_{k+1},p_{k+\frac{1}{2}})\Big) \\ p_{k+1} = p_{k+\frac{1}{2}} + \frac{h}{2} g(t_{k+1};q_{k+1},p_{k+\frac{1}{2}}) \end{array}\right.$$</p> <p> The function in which this method is implemented is <b>odeSV.m</b> and it calls the stepper <b>stormer_verlet.m</b>. The documentation is the same as that of <b>odeSE.m</b>. Its implementation is the following:<br><pre class="prettyprint linenums"><code class="lang-matlab">function [x_next,err] = stormer_verlet(f,t,x,dt,options)<br /> dim = length(x)/2;<br /> q = x(1:dim);<br /> p = x(dim+1:end);<br /> if( strcmp(options.Explicit,'yes') )<br /> p_mid = p + .5*dt*(f(t,[q; p])(dim+1:end));<br /> q_next = q + .5*dt*((f(t,[q; p_mid])(1:dim))+ ...<br /> (f(t+dt,[q;p_mid])(1:dim)));<br /> p_next = p_mid +.5*dt*(f(t+dt,[q_next;p_mid])(dim+1:end));<br /> x_next = [q_next; p_next];<br /> if(nargout == 2)<br /> dt_new = dt/2;<br /> p_mid = p + .5*dt_new*(f(t,[q; p])(dim+1:end));<br /> q_next = q + .5*dt_new*((f(t,[q; p_mid])(1:dim))+ ...<br /> (f(t+dt_new,[q;p_mid])(1:dim)));<br /> p_next = p_mid + .5*dt_new* ...<br /> (f(t+dt_new,[q_next;p_mid])(dim+1:end));<br /> q = q_next;<br /> p = p_next;<br /> t = t+dt_new;<br /> p_mid = p + .5*dt_new*(f(t,[q; p])(dim+1:end));<br /> q_next = q + .5*dt_new*((f(t,[q; p_mid])(1:dim))+ ...<br /> (f(t+dt_new,[q;p_mid])(1:dim)));<br /> p_next = p_mid + .5*dt_new* ...<br /> 
(f(t+dt_new,[q_next;p_mid])(dim+1:end));<br /> x_est = [q_next; p_next];<br /> err = norm(x_est - x_next,2);<br /> end<br /> else<br /> SV1 = @(y) (y - p - .5*dt*(f(t,[q;y])(dim+1:end)));<br /> p_mid = fsolve(SV1,zeros(size(p)));<br /> SV2 = @(y) (y - q - .5*dt*((f(t,[q;p_mid])(1:dim))+ ...<br /> (f(t+dt,[y;p_mid])(1:dim))));<br /> q_next = fsolve(SV2,q);<br /> p_next = p_mid + .5*dt* ...<br /> f(t+dt,[q_next;p_mid])(dim+1:end);<br /> x_next = [q_next;p_next];<br /> if(nargout == 2)<br /> dt_new = dt/2;<br /> SV1 = @(y) (y - p - .5*dt_new*(f(t,[q;y])(dim+1:end)));<br /> p_mid = fsolve(SV1,zeros(size(p)));<br /> SV2 = @(y) (y - q - .5*dt_new* ...<br /> ((f(t,[q;p_mid])(1:dim))+(f(t+dt_new,[y;p_mid])(1:dim))));<br /> q_next = fsolve(SV2,q);<br /> p_next = p_mid + .5*dt_new* ...<br /> f(t+dt_new,[q_next;p_mid])(dim+1:end);<br /> q = q_next;<br /> p = p_next;<br /> t = t+dt_new;<br /> SV1 = @(y) (y - p - .5*dt_new*(f(t,[q;y])(dim+1:end)));<br /> p_mid = fsolve(SV1,zeros(size(p)));<br /> SV2 = @(y) (y - q - .5*dt_new* ...<br /> ((f(t,[q;p_mid])(1:dim))+(f(t+dt_new,[y;p_mid])(1:dim))));<br /> q_next = fsolve(SV2,q);<br /> p_next = p_mid + .5*dt_new* ...<br /> f(t+dt_new,[q_next;p_mid])(dim+1:end);<br /> x_est = [q_next; p_next];<br /> err = norm(x_next-x_est,2); <br /> end<br /> end<br />end</code></pre></p> Roberto Porcuhttp://www.blogger.com/profile/18363005778421070960noreply@blogger.com0tag:blogger.com,1999:blog-5605512590493422168.post-91382008198002348812013-11-24T01:48:00.000-08:002013-12-09T08:28:10.391-08:00Backward Euler with Inexact Newton solver<h3> Backward Euler </h3> <p>The <b>backward Euler method</b> is one of the most common methods used to solve ODEs. 
It is an implicit method of order one but its strength lies in its <a href="http://en.wikipedia.org/wiki/A-stability#A-stability" target="_blank">A-stability</a> property.</p><p>Given a system of ODEs of the form: $$\frac{d\mathbf{y}}{dt} = \mathbf{f}(t,\mathbf{y}) \\ \mathbf{y}(t_0) = \mathbf{y}_0\ ,$$ the backward Euler method is written as: $$\mathbf{y}_{k+1} = \mathbf{y}_k + \Delta t\ \mathbf{f}(t_{k+1},\mathbf{y}_{k+1})\ .$$ </p><p>If we define <b>u</b> to be the exact solution then we have that: $$\frac{d\mathbf{u}}{dt} = \mathbf{f}(t,\mathbf{u}) \\ \mathbf{u}(t_k) = \mathbf{y}_k$$ So that $$\mathbf u(t_{k+1}) = \mathbf u(t_k) + \int_{t_k}^{t_{k+1}} \mathbf{f}(t,\mathbf u) dt\ ,$$ which, under some regularity conditions, can be rewritten as $$\mathbf u(t_{k+1}) = \mathbf u(t_k) + \Delta t\ \mathbf f(t_{k+1},\mathbf u(t_{k+1})) - \frac{{\Delta t}^2}{2} \mathbf f'(\xi)\ .$$ </p><p>From these results it can be derived that the <a href="http://en.wikipedia.org/wiki/Truncation_error_(numerical_integration)" target="_blank">local truncation error</a> (<b>LTE</b>) is $$\mathbf e_{k+1} = \mathbf y_{k+1} - \mathbf u(t_{k+1}) = \mathcal O({\Delta t}^2)\ .$$</p> <h3> Inexact Newton </h3> <p>Now we have to solve the non-linear system built up with the backward Euler method, and we want to use an <b>inexact Newton solver</b>. The classical Newton method reads: given a system of non-linear equations <b>F</b>(<b>x</b>) = <b>0</b> and an initial guess, solve the linearized system: $$\mathbf F'(\mathbf x_k)\mathbf s_k + \mathbf F(\mathbf x_k) = \mathbf 0 \\ \mathbf x_{k+1} = \mathbf x_k + \mathbf s_k\ ,$$ iterating until a given tolerance is reached.</p><p>In many cases, especially when we don't have an explicit expression for the Jacobian or it can't be inverted, we can use an inexact Newton method: $$|| \mathbf F'(\mathbf x_k)\mathbf s_k + \mathbf F(\mathbf x_k) || \leq \eta_k || \mathbf F(\mathbf x_k)||\ ,$$ where $$\eta_k$$ is said to be the forcing term. 
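Putting the two pieces together, one backward Euler step amounts to solving a non-linear system for $$\mathbf y_{k+1}\ ;$$ a minimal sketch (illustrative only: <tt>backward_euler_step</tt> is a hypothetical helper, and <tt>fsolve</tt> stands in for the inexact Newton solver): <pre class="prettyprint"><code class="lang-matlab">function x_next = backward_euler_step (f, t, x, dt)<br />  % solve x_next = x + dt*f(t+dt, x_next) for x_next<br />  BE = @(y) (y - x - dt*f(t+dt, y));<br />  x_next = fsolve (BE, x); % the current solution is the initial guess<br />end</code></pre>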
The linearized system can be solved with an iterative method such as <b>GMRES</b> or the <b>preconditioned conjugate gradient</b>.</p><p>There are two possible choices for the forcing term: <ol><li> first choice $$\eta_k = \frac{|| \mathbf F(\mathbf x_k)-\mathbf F(\mathbf x_{k-1}) -\mathbf F'(\mathbf x_{k-1})\mathbf s_{k-1} ||}{||\mathbf F(\mathbf x_{k-1}) ||}\ ;$$</li><li> second choice $$\eta_k = \gamma \Big(\frac{|| \mathbf F(\mathbf x_k) ||}{|| \mathbf F(\mathbf x_{k-1}) ||}\Big)^{\alpha}\ .$$ </li></ol></p><p>Choosing the forcing term in one of the two previous ways, rather than imposing a fixed tolerance, makes it possible to avoid the phenomenon of <b>oversolving</b>. Another advantage is that if we use GMRES as the linear solver it is not necessary to know or determine the Jacobian, because we can approximate the term $$\mathbf F'(\mathbf x_{k})\mathbf s_k$$ with a first-order finite difference formula: $$\mathbf F'(\mathbf x_k)\mathbf s_k \approx \frac{\mathbf F(\mathbf x_k + \delta\mathbf s_k) - \mathbf F(\mathbf x_k)}{\delta} \ .$$ </p><p>The signature of <b>inexact_newton</b> is the same as that of <tt>fsolve</tt>: <pre class="prettyprint"><code class="lang-matlab">[x,fval,exitflag,output,jacobian]=inexact_newton(fun,x0,options)</code></pre> </p> Roberto Porcuhttp://www.blogger.com/profile/18363005778421070960noreply@blogger.com1tag:blogger.com,1999:blog-5605512590493422168.post-86912513659081875472013-11-15T08:12:00.000-08:002013-11-15T08:13:24.264-08:00Ode solvers options<h3>Ode solvers options</h3> <p>What I want to do at this moment of the project is to add to all the solvers written so far the ability to manage all the options of the ODE structure (defined previously in the <b>odeset</b> structure) that are already managed by the corresponding MATLAB ode solvers, so that the solvers I've written will be fully MATLAB-compatible; the same will hold for the new solvers I'm going to implement.</p>Roberto 
Porcuhttp://www.blogger.com/profile/18363005778421070960noreply@blogger.com0tag:blogger.com,1999:blog-5605512590493422168.post-80075999630023013712013-11-06T23:23:00.000-08:002013-11-11T14:17:04.275-08:00Timeline<h3>Timeline</h3> <p>The following list explains the main points of the timeline for my SOCIS project: <ol><li> adapt <b>odeset</b> and <b>odeget</b> from Octave odepkg to be completely MATLAB-compatible: all option names supported by MATLAB should be available; furthermore we want the possibility to add new option names that will be useful, for example, for symplectic integrators;</li><li> implement a new code structure for one-step integrators <b>ode12</b>, <b>ode23</b>, <b>ode45</b>, <b>odefwe</b> (forward Euler), <b>odebwe</b> (backward Euler) and <b>inexact solvers</b>, subdividing the execution into 'blocks' in order to allow easier optimization and easier extension with new functionalities; in particular, implement the <b>steppers</b>, which will be the functions called to do one integration step, and then implement the <b>integrate functions</b>, which will integrate, in different ways, over the whole time interval, calling the respective steppers. Afterwards adapt the interface of <tt>ode45</tt> to be completely MATLAB-compatible;</li><li> implement a MATLAB-compatible version of <b>deval</b>;</li><li> implement the following geometric integrators as new steppers: <ul> <li> symplectic Euler;</li> <li> Stormer-Verlet;</li> <li> velocity-Verlet;</li> <li> spectral variational integrators;</li> <li> SHAKE;</li> <li> RATTLE.</li> </ul></li><li> optimization, error checking and documentation of all the code written.</li></ol></p><br />At the moment point 1 is completed; in point 2 I still have to adapt the interface of <tt>ode45</tt> and to refine the implemented code. Point 3 still has to be done. 
In point 4 I have implemented symplectic Euler, Stormer-Verlet and spectral variational integrators, but these need to be adapted to the 'steppers' / 'integrate functions' code structure; the other integrators must still be implemented. Finally, during the implementation I will start to write a bit of documentation, and at the end I will do optimization and debugging.Roberto Porcuhttp://www.blogger.com/profile/18363005778421070960noreply@blogger.com0tag:blogger.com,1999:blog-5605512590493422168.post-89897934778378390862013-11-02T10:48:00.000-07:002013-12-09T08:32:15.328-08:00Odeset & odeget<h3>Odeset & odeget</h3> <p>The <b>odeset</b> and <b>odeget</b> functions allow building a valid ODE options structure. They are already available in Octave <a href="http://octave.sourceforge.net/odepkg/" target="_blank">odepkg</a> but they are not perfectly compatible with the MATLAB odeset and odeget functions. Furthermore, for geometric integrators like spectral variational integrators,<br /> I will need new options in the ODE options structure, which are currently not admitted in Octave odepkg.</p> <p>Therefore I have written my own versions of <b>odeget.m</b>, <b>odeset.m</b> and the new function <b>ode_struct_value_check.m</b> in order to have full compatibility with MATLAB <a href="http://www.mathworks.it/it/help/matlab/ref/odeset.html" target="_blank">odeset</a> and <a href="http://www.mathworks.it/it/help/matlab/ref/odeget.html" target="_blank">odeget</a> (the same behaviour on the basis of their official documentation) and also the possibility to add the new field names which I will need for the future solvers of this project.</p><p>The new fields are the union of the MATLAB ODE options and the Octave ODE options, plus my new options (default values are in square brackets): <pre class="prettyprint"><code class="lang-matlab">' AbsTol: scalar or vector, >0, [1.e-6]'<br />' BDF: binary, {on, [off]}'<br />' Events: '<br />' Explicit: binary, {yes, no, []}'<br />' InexactSolver: switch, {inexact_newton, 
[]}'<br />' InitialSlope: vector, []'<br />' InitialStep: scalar, >0, []'<br />' Jacobian: matrix or function_handle, []'<br />' JConstant: binary, {on, [off]}'<br />' JPattern: sparse matrix, []'<br />' Mass: matrix or function_handle, []'<br />' MassConstant: binary, {on, [off]}'<br />' MassSingular: switch, {yes, [maybe], no}'<br />'MaxNewtonIterations: scalar, integer, >0, [1.e3]'<br />' MaxOrder: switch, {1, 2, 3, 4, [5]}'<br />' MaxStep: scalar, >0, []'<br />' MStateDependence: switch, {none, [weak], strong}'<br />' MvPattern: sparse matrix, []'<br />' NewtonTol: scalar or vector, >0, []'<br />' NonNegative: vector of integers, []'<br />' NormControl: binary, {on, [off]}'<br />' OutputFcn: function_handle, []'<br />' OutputSave: scalar, integer, >0, []'<br />' OutputSel: scalar or vector, []'<br />' PolynomialDegree: scalar or vector, integer, >0, []'<br />' QuadratureOrder: scalar or vector, integer, >0, []'<br />' Refine: scalar, integer, >0, []'<br />' RelTol: scalar, >0, [1.e-3]'<br />' Stats: binary, {on, [off]}'<br />' TimeStepNumber: scalar, integer, >0, []'<br />' TimeStepSize: scalar, >0, []'<br />' Vectorized: binary, {on, [off]}'</code></pre> </p><p>The usage of <b>odeset</b> will be one of the following: <pre class="prettyprint"><code class="lang-matlab">odeset<br />opt = odeset()<br />opt = odeset(old_ODE_options,new_ODE_options)<br />opt = odeset(old_ODE_options,'field1',value1,'field2',value2,...)<br />opt = odeset('field1',value1,'field2',value2,'field3',value3,...)</code></pre> </p><p>The usage of <b>odeget</b> will be one of the following: <pre class="prettyprint"><code class="lang-matlab">option_value = odeget(ODE_structure,'field')<br />option_value = odeget(ODE_structure,'field',default_value)<br />option_value = odeget(any_struct,'field',default_value,'fast')</code></pre> The last usage is needed for MATLAB compatibility and provides fast access to the given structure, with no error checking on option names.</p> <h3>Fuzzy string 
comparison</h3> <p>Users may make mistakes while typing ODE option names, or they may want to write everything in lower case for faster coding. Therefore my <b>odeset</b> and <b>odeget</b> functions behave in a user-friendly way, calling the <b>fuzzy_compare</b> function, which determines the <b>distance</b> of each structure option name from the given word and returns the indices of those options whose distance is under a given tolerance.</p> <p>The idea is that word cases are not relevant; if we want an exact matching then the tolerance will be 0; if no index is returned then it is definitely a typing error; if one index is returned but the words don't match exactly, a warning will be displayed saying that the program goes on assuming that the user meant the closest option name; otherwise all the fields whose distance is under the tolerance will be displayed and the user will be asked to insert the name again.</p><p>The signature of this function follows. <pre class="prettyprint"><code class="lang-matlab">res = fuzzy_compare(string1,string_set,correctness)</code></pre><ul><li><b>string1</b> must be a string and is the option name we're looking for;</li><li><b>string_set</b> must be a cell array of strings or a column vector of strings; it represents the set of option names;</li><li><b>correctness</b> is the tolerance we want. It is an optional input argument;</li></ul>We may set <tt>correctness</tt> equal to 'exact' or 0 and in this case <tt>fuzzy_compare</tt> will look only for exact matching; if we set <tt>correctness</tt> to any positive integer it will be the tolerance of the method (that is the maximum distance, from the given string, accepted). 
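As a hypothetical usage sketch (the option-name set is abbreviated for illustration): <pre class="prettyprint"><code class="lang-matlab">names = {'AbsTol'; 'RelTol'; 'InitialStep'};<br />idx = fuzzy_compare ('reltol', names);    % matches 'RelTol' despite the case difference<br />idx = fuzzy_compare ('RelTol', names, 0); % exact matching only</code></pre>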
If we don't specify anything for <tt>correctness</tt> then the tolerance will be calculated as a function of the minimum distance and of the length of the given string: the smaller the distance, the smaller the tolerance; the greater the length, the greater the tolerance.<br /><pre class="prettyprint"><code class="lang-matlab">tolerance = minimus + floor(length(string1)/4)*floor(minimus/3)</code></pre> There exist many definitions of distance between strings; I've chosen the Levenshtein distance.</p> <h3>Levenshtein distance</h3> <p>The <a href="http://en.wikipedia.org/wiki/Levenshtein_distance" target="_blank">Levenshtein distance</a> is a string metric equal to the minimum number of single-character edits (insertions, deletions and substitutions) required to change one word into the other. This is the main algorithm of the <b>levenshtein</b> function:<br /><pre class="prettyprint"><code class="lang-matlab">function [dist,d] = levenshtein(string1,string2)<br /> m = length(string1);<br /> n = length(string2);<br /><br /> d = zeros(m+1,n+1);<br /> d(2:m+1,1) = [1:1:m];<br /> d(1,2:n+1) = [1:1:n];<br /><br /> for j=2:1:n+1<br /> for k=2:1:m+1<br /> if(string1(k-1)==string2(j-1))<br /> d(k,j)=d(k-1,j-1);<br /> else<br /> d(k,j)=min(d(k-1,j)+1,min(d(k,j-1)+1,d(k-1,j-1)+1));<br /> end<br /> end<br /> end<br /><br /> dist = d(m+1,n+1);<br />end<br /></code></pre></p> All the code is available under the terms of the GNU General Public License at <a href="https://robs88@bitbucket.org/robs88/octave-odepkg" target="_blank">my_octave-odepkg</a>. Roberto Porcuhttp://www.blogger.com/profile/18363005778421070960noreply@blogger.com0tag:blogger.com,1999:blog-5605512590493422168.post-18114211278112153002013-10-25T07:26:00.001-07:002013-10-25T07:26:45.312-07:00Starting Stepsize<h3>Starting Step Size</h3><p>The implementation of the <b>integrate_adaptive</b> function takes (for now) the first timestep as its last input argument. 
Obviously the choice of the first timestep made by the user may not be the best, even though the adaptive technique will adjust the step at the second iteration. Therefore many techniques exist to find a good timestep for the first iteration.</p> <p>Here I present the one I implemented, which was proposed by Gladwell, Shampine and Brankin in 1987. It is based on the hypothesis that:<br />$$\text{local error}\approx Ch^{p+1}y^{(p+1)}(x_0)$$ The algorithm is the following, and usually gives a good guess for the initial timestep.<br /><pre class="prettyprint linenums"><code class="lang-matlab">function h = starting_stepsize(f,x0,t0,p)<br /> d0 = norm(x0,2);<br /> d1 = norm(f(t0,x0),2);<br /> if( d0<1.e-5 || d1<1.e-5 )<br /> h0 = 1.e-6;<br /> else<br /> h0 = .01*(d0/d1);<br /> end<br /> x1 = x0 + h0.*f(t0,x0);<br /> d2 = (1.0/h0)*norm(f(t0+h0,x1)-f(t0,x0),2);<br /> if( max(d1,d2)<= 1.e-15 )<br /> h1 = max(1.e-6,h0*1.e-3);<br /> else<br /> h1 = (1.e-2 / max(d1,d2))^(1/(p+1));<br /> end<br /> h = min(100*h0,h1);<br />end<br /></code></pre> </p><p>What I will do is include this algorithm in the <b>integrate_adaptive</b> function, which will also have a new signature, because the initial timestep is no longer needed:<br /><pre class="prettyprint"><code class="lang-matlab">function [x,t] = integrate_adaptive(stepper,order,f,x0,t0,t1)</code></pre> </p>Roberto Porcuhttp://www.blogger.com/profile/18363005778421070960noreply@blogger.com0tag:blogger.com,1999:blog-5605512590493422168.post-69030029879554525362013-10-16T06:38:00.001-07:002013-12-09T08:37:58.137-08:00Steppers and Integrate functions<p>In this first post I want to explain the organization of the code that I'm going to implement. The main idea is to have a structured organization and to subdivide the code according to the most important operations which must be executed. 
This way it should be easier to optimize the bottlenecks and to extend the code with new functionality.</p> <p>We will have two main actors: the <b>steppers</b> and the <b>integrate functions</b>.</p> <h3>Steppers</h3> <p>A <b>stepper</b> will be a function and will represent the numerical method used for the integration. Its job is to execute just one integration step and its signature will be:</p> <pre class="prettyprint"><code class="lang-matlab">[t_next,x_next,err] = stepper(f,t,x,dt,options)</code></pre> <p><code class="prettyprint lang-matlab">t_next</code> is the next time instant, <code class="prettyprint lang-matlab">x_next</code> is the solution at the next step, <code class="prettyprint lang-matlab">err</code> is an estimation of the error (obtainable for example with <a href="http://en.wikipedia.org/wiki/Richardson_extrapolation" target="_blank">Richardson Extrapolation</a>), <code class="prettyprint lang-matlab">f</code> is a function handle representing the equations to be integrated, <code class="prettyprint lang-matlab">x</code> is the solution at the current time <code class="prettyprint lang-matlab">t</code>, <code class="prettyprint lang-matlab">dt</code> is the time step and <code class="prettyprint lang-matlab">options</code> collects the parameters of the method.</p> <p>Inside this function I'll check whether <code class="prettyprint lang-matlab">err</code> is really requested by means of the <code class="prettyprint lang-matlab">nargout</code> keyword. The estimation of the error will be useful to determine the optimal <code class="prettyprint lang-matlab">dt</code> in adaptive integrators.</p> <p>To set the parameters for both the numerical method and the solver I'll use <code class="prettyprint lang-matlab">odeset</code> and <code class="prettyprint lang-matlab">odeget</code>, which are not embedded into <a href="http://www.octave.org/" target="_blank">Octave</a> but are functions of <a href="http://octave.sourceforge.net/odepkg/" target="_blank">Odepkg</a>.
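For example, a typical call could look as follows (<code class="prettyprint lang-matlab">RelTol</code> and <code class="prettyprint lang-matlab">InitialStep</code> are option names that <code class="prettyprint lang-matlab">odeset</code> already accepts; this is only a sketch of the intended usage, not the final interface):<br /><pre class="prettyprint"><code class="lang-matlab">% build an options structure, overriding two defaults
options = odeset ('RelTol', 1.e-4, 'InitialStep', 1.e-2);
% read an option back, with [] returned as default if it was not set
rtol = odeget (options, 'RelTol', []);</code></pre>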
My intent is to edit these functions in order to make them suitable for my parameter settings.</p> <h3>Integrate Functions</h3> <p>An <b>integrate function</b> will be the function executing the integration algorithm over multiple steps. We will have different <b>integrate functions</b> depending on how we want to obtain the solution. For example we can have:</p> <ul><li><code class="prettyprint lang-matlab">integrate_const(stepper,f,x0,t0,t1,dt)</code> integrating from t0 to t&lt;=t1 with a fixed <code class="prettyprint lang-matlab">dt</code>;</li><li><code class="prettyprint lang-matlab">integrate_n_steps(stepper,f,x0,t0,dt,n)</code> integrating from t0 for n integration steps with fixed <code class="prettyprint lang-matlab">dt</code>;</li><li><code class="prettyprint lang-matlab">integrate_adaptive(stepper,p,f,x0,t0,t1,dt)</code> integrating from t0 to t1 with an adaptive timestep, where p is the order of the stepper.</li></ul> <h3>A simple example: Forward Euler</h3> <p>In this part I'll show you an example of two simple steppers (<code class="prettyprint lang-matlab">fe_richardson</code> and <code class="prettyprint lang-matlab">fe_heun</code>).
Both the steppers make use of the <a href="http://en.wikipedia.org/wiki/Euler_method" target="_blank">Forward Euler method</a> to find the new solution: $$x_{k+1} = x_{k} + h f(t_{k},x_{k})\ .$$ They differ in the way the error is estimated: <code class="prettyprint lang-matlab">fe_richardson</code> makes use of <a href="http://en.wikipedia.org/wiki/Richardson_extrapolation" target="_blank">Richardson Extrapolation</a> while <code class="prettyprint lang-matlab">fe_heun</code> makes use of the <a href="http://en.wikipedia.org/wiki/List_of_Runge%E2%80%93Kutta_methods#Heun.E2.80.93Euler" target="_blank">Heun-Euler method</a>.</p> <p>This is the implementation of the <code class="prettyprint lang-matlab">fe_richardson</code> stepper: <pre class="prettyprint linenums"><code class="lang-matlab">function [t_next,x_next,err] = fe_richardson(f,t,x,dt,options)<br /> x_next = x + dt.*f(t,x);<br /> t_next = t+dt;<br /> if(nargout == 3)<br /> x1 = x + (.5*dt).*f(t,x);<br /> x_est = x1 + (.5*dt).*f(t+ .5*dt,x1);<br /> err = norm(x_next - x_est,2);<br /> end<br />end</code></pre></p> <p>The <b>Heun-Euler method</b> combines the <a href="http://en.wikipedia.org/wiki/Heun's_method" target="_blank">Heun method</a>, which is of order 2, with the Euler method, which is of order 1. This combination allows one to obtain an estimate of the local error.
We can write the Heun method as $$\begin{aligned} \tilde{x}_{k+1} &= x_{k} + h f(t,x_{k}) \\ x_{k+1}^{*} &= x_{k} + \frac{h}{2} (f(t,x_{k}) + f(t+h,\tilde{x}_{k+1})) \end{aligned}$$ so the error estimate is given by: $$e_{k+1} = x^*_{k+1} - \tilde{x}_{k+1} = \frac{h}{2}(f(t+h,\tilde{x}_{k+1}) - f(t,x_{k}))$$ This is the implementation of the <code class="prettyprint lang-matlab">fe_heun</code> method: <br /><pre class="prettyprint linenums"><code class="lang-matlab">function [t_next,x_next,err]=fe_heun(f,t,x,dt,options)<br /> x_next = x + dt.*f(t,x);<br /> t_next = t+dt;<br /> if(nargout == 3)<br /> err = .5 * dt * norm(f(t+dt,x_next) - f(t,x),2);<br /> end<br />end</code></pre></p> <h3>The adaptive integrator</h3> <p>Finally I'll describe how <code class="prettyprint lang-matlab">integrate_adaptive</code> is implemented. As I said, this function does the integration on the whole interval <code class="prettyprint lang-matlab">[t0,t1]</code>, adapting the timestep at each iteration by means of the error estimation returned by the stepper. This function takes as input: <ul><li><code class="prettyprint lang-matlab">stepper</code>: the stepper;</li><li><code class="prettyprint lang-matlab">p</code>: the order of the stepper;</li><li><code class="prettyprint lang-matlab">f</code>: the function to be integrated;</li><li><code class="prettyprint lang-matlab">x0</code>: the initial condition;</li><li><code class="prettyprint lang-matlab">t0, t1</code>: the extremes of the time interval;</li><li><code class="prettyprint lang-matlab">dt</code>: the first timestep.</li></ul> While the current time is less than or equal to the final instant, this function repeatedly calls the stepper and then updates the solution, the time and the next timestep.
Finally, since the very last instant may be greater than <code class="prettyprint lang-matlab">t1</code>, the function substitutes the last element of the solution with a linear interpolation evaluated at <code class="prettyprint lang-matlab">t1</code> between the last two elements of the solution.</p> <pre class="prettyprint linenums"><code class="lang-matlab">function [t,x] = integrate_adaptive(stepper,order, ...<br /> func,tspan,x0,options)<br /> t = tspan(1);<br /> x = x0(:);<br /> dt = odeget(options,'InitialStep',[],'fast');<br /> tau = 1.e-6;<br /> counter = 2;<br /> while( t(end) < (tspan(end)-1.e-13) )<br /> [s,y,err] = stepper(func,t(end),x(:,end),dt,options);<br /> if( s(end) > (tspan(counter)+1.e-13) )<br /> z = [t(end);s];<br /> u = [x(:,end),y];<br /> i = 1;<br /> while( i<=length(z) )<br /> if( abs(z(i)-tspan(counter)) < 1.e-13 )<br /> x = [x,y];<br /> t = [t;s];<br /> i_old = i;<br /> i = length(z)+1;<br /> elseif( z(i) > (tspan(counter)+1.e-13) )<br /> x = [x(:,1:end-1),u(:,1:i-1),u(:,i-1)+ ...<br /> ((tspan(counter)-z(i-1))/(z(i)-z(i-1)))* ...<br /> (u(:,i)-u(:,i-1)),u(:,i:end)];<br /> t = [t(1:end-1);z(1:i-1);tspan(counter);z(i:end)];<br /> i_old = i;<br /> i = length(z)+1;<br /> end<br /> i++;<br /> end<br /> counter++;<br /> else<br /> x = [x,y];<br /> t = [t;s];<br /> end<br /> err += eps;<br /> dt = dt.*(tau/abs(err))^(1.0/(order+1));<br /> end<br /> if( counter > max(size(tspan)) )<br /> k = (length(z)-i_old)+1;<br /> x = x(:,1:end-k);<br /> t = t(1:end-k);<br /> end<br /> x = x';<br />end</code></pre> <p>The method used to calculate the next timestep is taken from the book of <a href="http://www.cambridge.org/it/academic/subjects/mathematics/numerical-analysis/solving-odes-matlab" target="_blank">Shampine, Gladwell and Thompson</a>.
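The timestep update used in the loop above can be sketched in isolation as follows (the numerical values here are purely illustrative):<br /><pre class="prettyprint"><code class="lang-matlab">% dt_new = dt * (tau/|err|)^(1/(order+1))
tau = 1.e-6;   % target tolerance (fixed for now)
order = 1;     % order of the stepper, e.g. Forward Euler
err = 1.e-3;   % error estimate returned by the stepper
dt = 0.1;      % current timestep
dt_new = dt * (tau / abs (err))^(1.0 / (order + 1));
% dt_new = 0.1 * sqrt(1.e-3), so the step shrinks by a factor of about 31.6</code></pre>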
<code class="prettyprint lang-matlab">tau</code> is a parameter which is fixed for now but in the future will be set by means of the <code class="prettyprint lang-matlab">odeset</code> and <code class="prettyprint lang-matlab">odeget</code> functions.</p> <h3>How it will be at the end</h3> <p>At the end, functions like <code class="prettyprint lang-matlab">ode45</code> will be just aliases. For example, we define a function called <code class="prettyprint lang-matlab">odefwe</code> which does the integration by means of the Forward Euler method, adapting the timestep with Richardson Extrapolation. The call to this function will just hide a call to the <code class="prettyprint lang-matlab">integrate_adaptive</code> function in this way: <pre class="prettyprint"><code class="lang-matlab">odefwe(f,x0,t0,t1,dt) = ...<br /> integrate_adaptive(fe_richardson,1,f,x0,t0,t1,dt)</code></pre> In the same way we'll define a function called <code class="prettyprint lang-matlab">ode12</code> which does the integration in an adaptive way using the Heun-Euler method. Thus we will have: <pre class="prettyprint"><code class="lang-matlab">ode12(f,x0,t0,t1,dt) = integrate_adaptive(fe_heun,1,f,x0,t0,t1,dt)</code></pre></p>Roberto Porcuhttp://www.blogger.com/profile/18363005778421070960noreply@blogger.com0