Some calculus-related methods waiting to find a better place in the SymPy modules tree.
Find the Euler-Lagrange equations [R1] for a given Lagrangian.
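For context (a standard statement added here, not part of the original docstring): for a Lagrangian L depending on functions f_i of the variables x_j and on their first derivatives, the returned equations are of the form

    \frac{\partial L}{\partial f_i} - \sum_j \frac{d}{d x_j} \frac{\partial L}{\partial (\partial f_i / \partial x_j)} = 0,

with the corresponding higher-order terms appearing when L contains higher derivatives of the f_i.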
Parameters:
    L : Expr
    funcs : Function or an iterable of Functions
    vars : Symbol or an iterable of Symbols

Returns:
    eqns : list of Eq
References
[R1] http://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation
Examples
>>> from sympy import Symbol, Function
>>> from sympy.calculus.euler import euler_equations
>>> x = Function('x')
>>> t = Symbol('t')
>>> L = (x(t).diff(t))**2/2 - x(t)**2/2
>>> euler_equations(L, x(t), t)
[-x(t) - Derivative(x(t), t, t) == 0]
>>> u = Function('u')
>>> x = Symbol('x')
>>> L = (u(t, x).diff(t))**2/2 - (u(t, x).diff(x))**2/2
>>> euler_equations(L, u(t, x), [t, x])
[-Derivative(u(t, x), t, t) + Derivative(u(t, x), x, x) == 0]
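funcs may also be an iterable, in which case one equation is returned per function. A hedged sketch (the coupled Lagrangian below is an illustrative choice, not from the original docstring):
>>> from sympy import Symbol, Function
>>> from sympy.calculus.euler import euler_equations
>>> t = Symbol('t')
>>> q1, q2 = Function('q1'), Function('q2')
>>> L = (q1(t).diff(t)**2 + q2(t).diff(t)**2)/2 - q1(t)*q2(t)
>>> eqns = euler_equations(L, [q1(t), q2(t)], t)
>>> len(eqns)
2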
Finds singularities for a function. Currently supported functions are:
- univariate real rational functions
References
[R2] http://en.wikipedia.org/wiki/Mathematical_singularity
Examples
>>> from sympy.calculus.singularities import singularities
>>> from sympy import Symbol
>>> x = Symbol('x', real=True)
>>> singularities(x**2 + x + 1, x)
()
>>> singularities(1/(x + 1), x)
(-1,)
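Continuing the session above, an additional hedged sketch (not from the original docstring): a repeated pole is reported once, without multiplicity:
>>> singularities(1/(x - 3)**2, x)
(3,)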
This module implements an algorithm for efficient generation of finite difference weights for ordinary derivatives of functions, for derivative orders from 0 (interpolation) up to arbitrary order.
The core algorithm is provided in the finite difference weight generating function (finite_diff_weights), and two convenience functions are provided for:
- estimating a derivative (or interpolating) directly from a series of points (apply_finite_diff),
- differentiating directly on the SymPy expression level (as_finite_diff).
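As a rough sketch of how these pieces fit together (an illustration added here, not part of the original text; the quadratic sample values are an arbitrary choice), the weights produced by finite_diff_weights are exactly what apply_finite_diff combines with the function values:
>>> from sympy import S
>>> from sympy.calculus import finite_diff_weights, apply_finite_diff
>>> x_list = [S(0), S(1), S(2)]
>>> y_list = [S(1), S(4), S(9)]   # samples of (x + 1)**2
>>> w = finite_diff_weights(1, x_list, S(1))[1][-1]  # 1st derivative, all points
>>> w
[-1/2, 0, 1/2]
>>> sum(wi*yi for wi, yi in zip(w, y_list))
4
>>> apply_finite_diff(1, x_list, y_list, S(1))
4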
Calculates the finite difference approximation of the derivative of requested order at x0 from points provided in x_list and y_list.
Parameters:
    order : int
    x_list : sequence
    y_list : sequence
    x0 : Number or Symbol

Returns:
    sympy.core.add.Add or sympy.core.numbers.Number
Notes
Order = 0 corresponds to interpolation. Only supply as many points as you think make sense around x0 when extracting the derivative (the function needs to be well behaved within that region). Also beware of Runge's phenomenon.
References
Fortran 90 implementation with Python interface for numerics: finitediff
Examples
>>> from sympy.calculus import apply_finite_diff
>>> cube = lambda arg: (1.0*arg)**3
>>> xlist = range(-3,3+1)
>>> apply_finite_diff(2, xlist, list(map(cube, xlist)), 2) - 12
-3.55271367880050e-15
We see that the example above only contains rounding errors. apply_finite_diff can also be used on more abstract objects:
>>> from sympy import IndexedBase, Idx
>>> from sympy.calculus import apply_finite_diff
>>> x, y = map(IndexedBase, 'xy')
>>> i = Idx('i')
>>> x_list, y_list = zip(*[(x[i+j], y[i+j]) for j in range(-1,2)])
>>> apply_finite_diff(1, x_list, y_list, x[i])
(-1 + (x[i + 1] - x[i])/(-x[i - 1] + x[i]))*y[i]/(x[i + 1] - x[i]) + (-x[i - 1] + x[i])*y[i + 1]/((-x[i - 1] + x[i + 1])*(x[i + 1] - x[i])) - (x[i + 1] - x[i])*y[i - 1]/((-x[i - 1] + x[i + 1])*(-x[i - 1] + x[i]))
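As a small check of the note above that order 0 corresponds to interpolation (an added sketch, not part of the original examples), interpolating half-way between two points:
>>> from sympy import S
>>> from sympy.calculus import apply_finite_diff
>>> apply_finite_diff(0, [S(0), S(1)], [S(1), S(3)], S(1)/2)
2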
Returns an approximation of a derivative of a function in the form of a finite difference formula. The expression is a weighted sum of the function at a number of discrete values of (one of) the independent variable(s).
Parameters:
    derivative : a Derivative instance (needs to have a variables and expr attribute)
    points : sequence or coefficient, optional
    x0 : Number or Symbol, optional
    wrt : Symbol, optional
See also
sympy.calculus.finite_diff.apply_finite_diff, sympy.calculus.finite_diff.finite_diff_weights
Examples
>>> from sympy import symbols, Function, exp, sqrt, Symbol, as_finite_diff
>>> x, h = symbols('x h')
>>> f = Function('f')
>>> as_finite_diff(f(x).diff(x))
-f(x - 1/2) + f(x + 1/2)
The default step size and number of points are 1 and order + 1 respectively. We can change the step size by passing a symbol as a parameter:
>>> as_finite_diff(f(x).diff(x), h)
-f(-h/2 + x)/h + f(h/2 + x)/h
We can also specify the discretized values to be used in a sequence:
>>> as_finite_diff(f(x).diff(x), [x, x+h, x+2*h])
-3*f(x)/(2*h) + 2*f(h + x)/h - f(2*h + x)/(2*h)
The algorithm is not restricted to equidistant spacing, nor does the evaluation point need to coincide with one of the sampled points; we can get an expression estimating the derivative at an offset:
>>> e, sq2 = exp(1), sqrt(2)
>>> xl = [x-h, x+h, x+e*h]
>>> as_finite_diff(f(x).diff(x, 1), xl, x+h*sq2)
2*h*((h + sqrt(2)*h)/(2*h) - (-sqrt(2)*h + h)/(2*h))*f(E*h + x)/((-h + E*h)*(h + E*h)) + (-(-sqrt(2)*h + h)/(2*h) - (-sqrt(2)*h + E*h)/(2*h))*f(-h + x)/(h + E*h) + (-(h + sqrt(2)*h)/(2*h) + (-sqrt(2)*h + E*h)/(2*h))*f(h + x)/(-h + E*h)
Partial derivatives are also supported:
>>> y = Symbol('y')
>>> d2fdxdy = f(x, y).diff(x, y)
>>> as_finite_diff(d2fdxdy, wrt=x)
-f(x - 1/2, y) + f(x + 1/2, y)
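As an added consistency check (a sketch, not from the original docstring), the default three-point formula for a second derivative agrees with the standard central difference:
>>> from sympy import symbols, Function, simplify, as_finite_diff
>>> x, h = symbols('x h')
>>> f = Function('f')
>>> approx = as_finite_diff(f(x).diff(x, 2), h)
>>> simplify(approx - (f(x - h) - 2*f(x) + f(x + h))/h**2)
0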
Calculates the finite difference weights for an arbitrarily spaced one-dimensional grid (x_list) for derivatives at ‘x0’ of order 0, 1, ..., up to ‘order’ using a recursive formula.
Parameters:
    order : int
    x_list : sequence
    x0 : Number or Symbol

Returns:
    list
Notes
If weights for a finite difference approximation of the 3rd order derivative are wanted, weights for the 0th, 1st and 2nd order derivatives are calculated “for free”, as are formulae using fewer and fewer of the points. This is something one can take advantage of to save computational cost.
References
[R3] Bengt Fornberg, Generation of Finite Difference Formulas on Arbitrarily Spaced Grids; Mathematics of Computation; 51, 184 (1988); 699-706; doi:10.1090/S0025-5718-1988-0935077-0
Examples
>>> from sympy import S
>>> from sympy.calculus import finite_diff_weights
>>> finite_diff_weights(1, [-S(1)/2, S(1)/2, S(3)/2, S(5)/2], 0)
[[[1, 0, 0, 0],
[1/2, 1/2, 0, 0],
[3/8, 3/4, -1/8, 0],
[5/16, 15/16, -5/16, 1/16]],
[[0, 0, 0, 0], [-1, 1, 0, 0], [-1, 1, 0, 0], [-23/24, 7/8, 1/8, -1/24]]]
The result is two sublists: the first is for the 0th derivative (interpolation) and the second for the first derivative (we gave 1 as the order parameter, which is why no list for a higher order derivative is returned). Each sublist contains the most accurate formula at the end (using all points).
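For instance (an added illustration, continuing the call above), the most accurate first-derivative formula can be picked out by indexing:
>>> res = finite_diff_weights(1, [-S(1)/2, S(1)/2, S(3)/2, S(5)/2], 0)
>>> res[1][-1]
[-23/24, 7/8, 1/8, -1/24]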
Beware of the offset in the lower accuracy formulae when looking at a centered difference:
>>> from sympy import S
>>> from sympy.calculus import finite_diff_weights
>>> finite_diff_weights(1, [-S(5)/2, -S(3)/2, -S(1)/2, S(1)/2,
... S(3)/2, S(5)/2], 0)
[[[1, 0, 0, 0, 0, 0],
[-3/2, 5/2, 0, 0, 0, 0],
[3/8, -5/4, 15/8, 0, 0, 0],
[1/16, -5/16, 15/16, 5/16, 0, 0],
[3/128, -5/32, 45/64, 15/32, -5/128, 0],
[3/256, -25/256, 75/128, 75/128, -25/256, 3/256]],
[[0, 0, 0, 0, 0, 0],
[-1, 1, 0, 0, 0, 0],
[1, -3, 2, 0, 0, 0],
[1/24, -1/8, -7/8, 23/24, 0, 0],
[0, 1/24, -9/8, 9/8, -1/24, 0],
[-3/640, 25/384, -75/64, 75/64, -25/384, 3/640]]]
The capability to generate weights at arbitrary points can be used e.g. to minimize Runge’s phenomenon by using Chebyshev nodes:
>>> from sympy import cos, symbols, pi, simplify
>>> from sympy.calculus import finite_diff_weights
>>> N, (h, x) = 4, symbols('h x')
>>> x_list = [x+h*cos(i*pi/(N)) for i in range(N,-1,-1)] # chebyshev nodes
>>> print(x_list)
[-h + x, -sqrt(2)*h/2 + x, x, sqrt(2)*h/2 + x, h + x]
>>> mycoeffs = finite_diff_weights(1, x_list, 0)[1][4]
>>> [simplify(c) for c in mycoeffs]
[(h**3/2 + h**2*x - 3*h*x**2 - 4*x**3)/h**4,
(-sqrt(2)*h**3 - 4*h**2*x + 3*sqrt(2)*h*x**2 + 8*x**3)/h**4,
6*x/h**2 - 8*x**3/h**4,
(sqrt(2)*h**3 - 4*h**2*x - 3*sqrt(2)*h*x**2 + 8*x**3)/h**4,
(-h**3/2 + h**2*x + 3*h*x**2 - 4*x**3)/h**4]
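Finally, as a small sketch of the note above that lower-order weights come “for free” (the three-point grid is an arbitrary illustrative choice, not from the original examples), a single call yields interpolation, first and second derivative weights at once:
>>> from sympy import S
>>> from sympy.calculus import finite_diff_weights
>>> res = finite_diff_weights(2, [-S(1), S(0), S(1)], 0)
>>> res[0][-1]  # interpolation (0th derivative) weights using all points
[0, 1, 0]
>>> res[1][-1]  # first derivative weights
[-1/2, 0, 1/2]
>>> res[2][-1]  # second derivative weights
[1, -2, 1]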