Iterative Methods

itMet: Iterative Methods

This module contains iterative solvers for linear systems

\[Ax = b\]

It also contains a routine to generate the discretized Laplace operator in 1, 2 and 3 dimensions, which can be used to build large test matrices.
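
For example, the test matrices used throughout the examples below are built with FDLaplacian() (a sketch; we assume the (m, d) call signature shown in the examples, where m is the number of grid points per dimension and d the spatial dimension):

>>> from oppy.tests.itMet import FDLaplacian
>>> A_1d = FDLaplacian(1000, 1)  # 1000 x 1000 matrix
>>> A_2d = FDLaplacian(100, 2)   # 10000 x 10000 matrix
>>> A_3d = FDLaplacian(20, 3)    # 8000 x 8000 matrix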

Here you can use either stationary methods such as

  1. Jacobi
  2. Gauß-Seidel
  3. SOR

or Krylov subspace methods such as

  1. Steepest Descent
  2. CG
  3. GMRES

The stationary methods are based on splittings

\[A = D + L + U\]
\[A = M - N\]

where D corresponds to the diagonal of A, L to the strictly lower triangular part of A, and U to the strictly upper triangular part of A.
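
As a sketch in code, the individual parts of the splitting can be extracted with standard scipy.sparse routines (just for illustration; this is not part of the module):

>>> import scipy.sparse as sp
>>> from oppy.tests.itMet import FDLaplacian
>>> A = FDLaplacian(100, 2)
>>> D = sp.diags(A.diagonal())  # diagonal part of A
>>> L = sp.tril(A, k=-1)        # strictly lower triangular part of A
>>> U = sp.triu(A, k=1)         # strictly upper triangular part of A
>>> abs(A - (D + L + U)).max()  # the parts add up to A again
0.0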

For a future release we are planning to add preconditioning to the Krylov methods; the stationary methods will then also be usable as preconditioners.

Documentation is available in the docstrings and online here.

Methods

oppy.itMet.cg module

Conjugate gradient method.

This module contains the function cg() to solve a system

\[Ax = f\]

of linear equations using the conjugate gradient (CG) method.
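
To illustrate what the iteration does, here is a minimal plain-NumPy sketch of the classical CG recurrence for a symmetric positive definite matrix (for illustration only; this is not the implementation behind cg() and omits the relative/absolute stopping logic described below):

>>> import numpy as np
>>> def cg_sketch(A, f, tol=1e-7, nmax=10**5):
...     x = np.zeros_like(f)            # initial guess u_0 = 0
...     r = f - A.dot(x)                # initial residual
...     p = r.copy()                    # first search direction
...     rs_old = r.dot(r)
...     for _ in range(int(nmax)):
...         Ap = A.dot(p)
...         alpha = rs_old / p.dot(Ap)  # exact line search along p
...         x += alpha * p
...         r -= alpha * Ap
...         rs_new = r.dot(r)
...         if np.sqrt(rs_new) < tol:
...             break
...         p = r + (rs_new / rs_old) * p  # A-conjugate update of the direction
...         rs_old = rs_new
...     return x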

oppy.itMet.cg.cg(A, f, options=None)

Solve the linear system \(Ax=f\) with the conjugate gradient method.

Input matrix A can be a 2-D-array (numpy.ndarray), scipy.sparse matrix or a linear operator, e.g. via a scipy.sparse.linalg.LinearOperator object (matrix free is possible).

Parameters:
  • A (numpy.ndarray, scipy.sparse matrix or operator, shape (n,n) or output (,n)) – The square matrix A or operator for solving the linear system \(Ax = f\).
  • f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
  • options (Options, optional) –

    Options for the cg solver, represented as an Options object. The default is None. Possible options are

    • u_0 : numpy.ndarray, shape (n,)
      Initial guess, represented as an ndarray with shape (n,). The default is None, in which case the zero vector is used.
    • tol_abs : float
      Absolute tolerance for the termination condition, initialized with \(10^{-7}\).
    • tol_rel : float
      Relative tolerance for the termination condition, initialized with \(10^{-7}\).

    Note

    The algorithm stops if the norm of the current residual res[k] satisfies the inequality

    \[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]

    where stopping_criteria is given by

    \[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]

    and res[0] is the norm of the initial residual. If the value of stopping_criteria is larger than \(0.1\) (e.g. when the norm of the initial residual is large and the relative tolerance is small), it is set to \(0.1\).

    • nmax : int or float
      Maximum number of iterations, initialized with \(10^{5}\).
    • results : str or list
      String or list to specify the attributes for the Results object, initialized with 'normal'.
Returns:

results – Instance of the class Results, representing the results of the solver. Which attributes the object has depends on the results entry passed via the options argument. With the default choice the object has

  • x : numpy.ndarray, shape (n,)

    Solution vector of the linear system.

  • iterations : int

    Number of iterations the algorithm needed.

  • res : list

    List with the norms of the residuals.

Return type:

Results

Warns:
  • default warning – If the input A is neither a scipy.sparse matrix nor a linear operator (e.g. via scipy.sparse.linalg.LinearOperator), a warning is raised and None is returned.
  • default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.

Examples

Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().

>>> from oppy.tests.itMet import FDLaplacian

As solver we choose the conjugate gradient method

>>> from oppy.itMet import cg

To construct a test case we use NumPy arrays (numpy.ndarray)

>>> import numpy as np

Set up test case

The following computes the 2 dimensional discrete Laplacian

>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m,d)

Notice that in this case A is a scipy.sparse matrix. We choose a random solution and compute the right-hand side

>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)

Now, we can solve our system

>>> cg_results = cg(A,f)

and compare the solution to the exact one

>>> print(np.linalg.norm(cg_results.x - x_sol))
0.00015858787538540118

Notice that, depending on your system, this number could be slightly different, but with default options the error should be on the order of \(10^{-4}\).
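
Tolerances and the iteration limit can be adjusted through the Options class in the same way (a sketch, assuming tol_abs, tol_rel and nmax are accepted as keyword arguments as listed above):

>>> from oppy.options import Options
>>> cg_options = Options(tol_abs=1e-10, tol_rel=1e-10, nmax=10**4)
>>> cg_results = cg(A, f, cg_options)
>>> # the solver stops once res[k] < tol_rel * res[0] + tol_abs
>>> print(np.linalg.norm(cg_results.x - x_sol))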

oppy.itMet.gauss_seidel module

Gauss-Seidel method.

This module contains the stationary gauss_seidel() solver to solve a linear system

\[Ax = f\]

with the stationary iteration induced by the splitting

\[A = M - N\]

where \(M = \mathrm{diag}(A) + \mathrm{LowerDiag}(A)\), i.e. the lower triangular part of A including the diagonal.
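
Component-wise, one Gauss-Seidel sweep solves this triangular system by forward substitution, reusing already updated entries. A minimal plain-NumPy sketch for a dense matrix (for illustration only; not the implementation behind gauss_seidel() below):

>>> import numpy as np
>>> def gauss_seidel_sweep(A, f, x):
...     # one in-place sweep: new values x[:i] are used immediately
...     for i in range(len(f)):
...         sigma = A[i, :i].dot(x[:i]) + A[i, i+1:].dot(x[i+1:])
...         x[i] = (f[i] - sigma) / A[i, i]
...     return x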

oppy.itMet.gauss_seidel.gauss_seidel(A, f, options=None)

Solve the linear system \(Ax = f\) with the Gauss-Seidel method.

The input matrix A can be a 2-D-array (numpy.ndarray object) or a scipy.sparse matrix.

Parameters:
  • A (numpy.ndarray or scipy.sparse matrix, shape (n,n)) – The square matrix A for solving the linear system \(Ax = f\).
  • f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
  • options (Options, optional) –

    Options for the gauss_seidel solver, represented as an Options object. The default is None. Possible options are

    • u_0 : numpy.ndarray, shape (n,)
      Initial guess, represented as an ndarray with shape (n,). The default is None, in which case the zero vector is used.
    • tol_abs : float
      Absolute tolerance for the termination condition, initialized with \(10^{-7}\).

      Note

      The absolute tolerance tol_abs is also used to check whether the diagonal elements of the matrix A are greater than this tolerance. If this is not the case, a warning is raised and None is returned (see the section Warns for more information).

    • tol_rel : float
      Relative tolerance for the termination condition, initialized with \(10^{-7}\).

    Note

    The algorithm stops if the norm of the current residual res[k] satisfies the inequality

    \[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]

    where stopping_criteria is given by

    \[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]

    and res[0] is the norm of the initial residual. If the value of stopping_criteria is larger than \(0.1\) (e.g. when the norm of the initial residual is large and the relative tolerance is small), it is set to \(0.1\).

    • nmax : int or float
      Maximum number of iterations, initialized with \(10^{5}\).
    • results : str or list
      String or list to specify the attributes for the Results object, initialized with 'normal'.
Returns:

results – Instance of the class Results, representing the results of the solver. Which attributes the object has depends on the results entry passed via the options argument. With the default choice the object has

  • x : numpy.ndarray, shape (n,)

    Solution vector of the linear system.

  • iterations : int

    Number of iterations the algorithm needed.

  • res : list

    List with the norms of the residuals.

Return type:

Results

Warns:
  • default warning – If one of the diagonal entries of the matrix A is smaller in absolute value than the absolute tolerance tol_abs, None is returned and a warning is raised.
  • default warning – If A is not a square matrix, a warning is raised and None is returned.
  • default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.

Examples

Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().

>>> from oppy.tests.itMet import FDLaplacian

As solver we choose the gauss_seidel() method

>>> from oppy.itMet import gauss_seidel

To construct a test case we use NumPy arrays (numpy.ndarray)

>>> import numpy as np

Set up test case

The following computes the 2 dimensional discrete Laplacian

>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m,d)

Notice that in this case A is a scipy.sparse matrix. We choose a random solution and compute the right-hand side

>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)

Now, we can solve our system

>>> gs_results = gauss_seidel(A,f)

and compare the solution to the exact one

>>> print(np.linalg.norm(gs_results.x - x_sol))
0.00667744566323901

Notice that, depending on your system, this number could be slightly different, but with default options the error should be on the order of \(10^{-3}\).

oppy.itMet.gmres module

Generalized minimal residual method.

This module contains the function gmres() to solve a system

\[Ax = f\]

of linear equations using the generalized minimal residual (GMRES) method.

oppy.itMet.gmres.gmres(A, f, options=None)

Solve the linear system \(Ax=f\) with the gmres() method.

Input matrix A can be a 2-D-array (numpy.ndarray object), scipy.sparse matrix or a linear operator, e.g. via a scipy.sparse.linalg.LinearOperator object (matrix free is possible).

Parameters:
  • A (numpy.ndarray, scipy.sparse matrix or operator, shape (n,n) or output (,n)) – The square matrix A or operator for solving the linear system \(Ax = f\).
  • f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
  • options (Options, optional) –

    Options for the gmres solver, represented as an Options object. The default is None. Possible options are

    • u_0 : numpy.ndarray, shape (n,)
      Initial guess, represented as an ndarray with shape (n,). The default is None, in which case the zero vector is used.
    • tol_abs : float
      Absolute tolerance for the termination condition, initialized with \(10^{-7}\).
    • tol_rel : float
      Relative tolerance for the termination condition, initialized with \(10^{-7}\).

    Note

    The algorithm stops if the norm of the current residual res[k] satisfies the inequality

    \[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]

    where stopping_criteria is given by

    \[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]

    and res[0] is the norm of the initial residual. If the value of stopping_criteria is larger than \(0.1\) (e.g. when the norm of the initial residual is large and the relative tolerance is small), it is set to \(0.1\).

    • nmax : int or float
      Maximum number of iterations, initialized with \(10^{5}\).
    • results : str or list
      String or list to specify the attributes for the Results object, initialized with 'normal'.
Returns:

results – Instance of the class Results, representing the results of the solver. Which attributes the object has depends on the results entry passed via the options argument. With the default choice the object has

  • x : numpy.ndarray, shape (n,)

    Solution vector of the linear system.

  • iterations : int

    Number of iterations the algorithm needed.

  • res : list

    List with the norms of the residuals.

Return type:

Results

Warns:
  • default warning – If the input A is neither a scipy.sparse matrix nor a linear operator (e.g. via scipy.sparse.linalg.LinearOperator), a warning is raised and None is returned.
  • default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.

Examples

Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().

>>> from oppy.tests.itMet import FDLaplacian

As solver we choose the gmres() method

>>> from oppy.itMet import gmres

To construct a test case we use NumPy arrays (numpy.ndarray)

>>> import numpy as np

Set up test case

The following computes the 2 dimensional discrete Laplacian

>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m,d)

Notice that in this case A is a scipy.sparse matrix. We choose a random solution and compute the right-hand side

>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)

Now, we can solve our system

>>> gmres_results = gmres(A,f)

and compare the solution to the exact one

>>> print(np.linalg.norm(gmres_results.x - x_sol))
0.00155800676948457

Notice that, depending on your system, this number could be slightly different, but with default options the error should be on the order of \(10^{-3}\).
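
As an optional sanity check, the result can be compared against SciPy's own GMRES implementation (a sketch; scipy.sparse.linalg.gmres has its own restart and tolerance conventions, so the iteration counts will differ):

>>> from scipy.sparse.linalg import gmres as scipy_gmres
>>> x_scipy, info = scipy_gmres(A, f, atol=1e-7)
>>> print(np.linalg.norm(x_scipy - x_sol))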

oppy.itMet.jacobi module

Jacobi method.

This module contains the stationary jacobi() solver to solve a linear system

\[Ax = f\]

with the stationary iteration induced by the splitting

\[A = M - N\]

where \(M = \mathrm{diag}(A)\).
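
Written out, one Jacobi step is \(x^{k+1} = D^{-1}\bigl(f - (A - D)x^k\bigr)\) with \(D = \mathrm{diag}(A)\). A minimal plain-NumPy sketch for a dense matrix (for illustration only; not the implementation behind jacobi() below):

>>> import numpy as np
>>> def jacobi_step(A, f, x):
...     d = np.diag(A)                        # diagonal of A
...     return (f - (A.dot(x) - d * x)) / d   # x_new = D^{-1} (f - (A - D) x)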

oppy.itMet.jacobi.jacobi(A, f, options=None)

Solve the linear system \(Ax = f\) with the jacobi() method.

The input matrix A can be a 2-D-array (numpy.ndarray object) or a scipy.sparse matrix.

Parameters:
  • A (numpy.ndarray or scipy.sparse matrix, shape (n,n)) – The square matrix A for solving the linear system \(Ax = f\).
  • f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
  • options (Options, optional) –

    Options for the jacobi solver, represented as an Options object. The default is None. Possible options are

    • u_0 : numpy.ndarray, shape (n,)
      Initial guess, represented as an ndarray with shape (n,). The default is None, in which case the zero vector is used.
    • tol_abs : float
      Absolute tolerance for the termination condition, initialized with \(10^{-7}\).

      Note

      The absolute tolerance tol_abs is also used to check whether the diagonal elements of the matrix A are greater than this tolerance. If this is not the case, a warning is raised and None is returned (see the section Warns for more information).

    • tol_rel : float
      Relative tolerance for the termination condition, initialized with \(10^{-7}\).

    Note

    The algorithm stops if the norm of the current residual res[k] satisfies the inequality

    \[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]

    where stopping_criteria is given by

    \[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]

    and res[0] is the norm of the initial residual. If the value of stopping_criteria is larger than \(0.1\) (e.g. when the norm of the initial residual is large and the relative tolerance is small), it is set to \(0.1\).

    • nmax : int or float
      Maximum number of iterations, initialized with \(10^{5}\).
    • results : str or list
      String or list to specify the attributes for the Results object, initialized with 'normal'.
Returns:

results – Instance of the class Results, representing the results of the solver. Which attributes the object has depends on the results entry passed via the options argument. With the default choice the object has

  • x : numpy.ndarray, shape (n,)

    Solution vector of the linear system.

  • iterations : int

    Number of iterations the algorithm needed.

  • res : list

    List with the norms of the residuals.

Return type:

Results

Warns:
  • default warning – If one of the diagonal entries of the matrix A is smaller in absolute value than the absolute tolerance tol_abs, None is returned and a warning is raised.
  • default warning – If A is not a square matrix, a warning is raised and None is returned.
  • default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.

Examples

Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().

>>> from oppy.tests.itMet import FDLaplacian

As solver we choose the jacobi() method

>>> from oppy.itMet import jacobi

To construct a test case we use NumPy arrays (numpy.ndarray)

>>> import numpy as np

Set up test case. The following computes the 2 dimensional discrete Laplacian

>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m,d)

Notice that in this case A is a scipy.sparse matrix. We choose a random solution and compute the right-hand side

>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)

Now, we can solve our system

>>> jacobi_results = jacobi(A, f)

and compare the solution to the exact one

>>> print(np.linalg.norm(jacobi_results.x - x_sol))
0.006659729108731856

Notice that, depending on your system, this number could be slightly different, but the error should be on the order of \(10^{-3}\).

oppy.itMet.sor module

Successive Over-relaxation (SOR) method.

This module contains the stationary sor() solver to solve a linear system

\[Ax = f\]

with the stationary iteration induced by the splitting

\[A = M - N\]

where \(M = \frac{1}{\omega}\cdot \mathrm{diag}(A) + \mathrm{LowerDiag}(A)\).
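Component-wise, SOR computes the Gauss-Seidel value and then blends it with the previous iterate using the relaxation parameter \(\omega\). A minimal plain-NumPy sketch for a dense matrix (for illustration only; not the implementation behind sor() below):

>>> import numpy as np
>>> def sor_sweep(A, f, x, omega):
...     # one in-place sweep; omega = 1 recovers Gauss-Seidel
...     for i in range(len(f)):
...         sigma = A[i, :i].dot(x[:i]) + A[i, i+1:].dot(x[i+1:])
...         x_gs = (f[i] - sigma) / A[i, i]      # Gauss-Seidel value
...         x[i] = (1 - omega) * x[i] + omega * x_gs
...     return x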

oppy.itMet.sor.sor(A, f, options=None)

Solve the linear system \(Ax = f\) with the sor() method.

The input matrix A can be a 2-D-array (numpy.ndarray object) or a scipy.sparse matrix.

Parameters:
  • A (numpy.ndarray or scipy.sparse matrix, shape (n,n)) – The square matrix A for solving the linear system \(Ax = f\).
  • f (numpy.ndarray, shape (n,)) – The vector representing the right-hand side of the equation.
  • options (Options, optional) –

    Options for the sor solver, represented as an Options object. The default is None. Possible options are

    • u_0 : numpy.ndarray, shape (n,)
      Initial guess, represented as an ndarray with shape (n,). The default is None, in which case the zero vector is used.
    • tol_abs : float
      Absolute tolerance for the termination condition, initialized with \(10^{-7}\).

      Note

      The absolute tolerance tol_abs is also used to check whether the diagonal elements of the matrix A are greater than this tolerance. If this is not the case, a warning is raised and None is returned (see the section Warns for more information).

    • tol_rel : float
      Relative tolerance for the termination condition, initialized with \(10^{-7}\).

    Note

    The algorithm stops if the norm of the current residual res[k] satisfies the inequality

    \[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]

    where stopping_criteria is given by

    \[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]

    and res[0] is the norm of the initial residual. If the value of stopping_criteria is larger than \(0.1\) (e.g. when the norm of the initial residual is large and the relative tolerance is small), it is set to \(0.1\).

    • nmax : int or float
      Maximum number of iterations, initialized with \(10^{5}\).
    • omega : float
      Value for the over-relaxation parameter. The default value is \(\frac{1}{2}\).

      Note

      Please ensure that

      \[\omega \in (0,2)\]

      holds, otherwise the method will definitely not converge. For more information take a look at Wikipedia.

    • results : str or list
      String or list to specify the attributes for the Results object, initialized with 'normal'.
Returns:

results – Instance of the class Results, representing the results of the solver. Which attributes the object has depends on the results entry passed via the options argument. With the default choice the object has

  • x : numpy.ndarray, shape (n,)

    Solution vector of the linear system.

  • iterations : int

    Number of iterations the algorithm needed.

  • res : list

    List with the norms of the residuals.

Return type:

Results

Warns:
  • default warning – If one of the diagonal entries of the matrix A is smaller in absolute value than the absolute tolerance tol_abs, None is returned and a warning is raised.
  • default warning – If A is not a square matrix, a warning is raised and None is returned.
  • default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.

Examples

Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().

>>> from oppy.tests.itMet import FDLaplacian

As solver we choose the sor() method

>>> from oppy.itMet import sor

To construct a test case we use NumPy arrays (numpy.ndarray)

>>> import numpy as np

Set up test case

The following computes the 2 dimensional discrete Laplacian

>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m,d)

Notice that in this case A is a scipy.sparse matrix. We choose a random solution and compute the right-hand side

>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)

Now, we can solve our system. First we need a parameter \(\omega\). For this matrix one can show that

>>> omega = 2/(1+np.sin(np.pi/(m+1)))

is the best choice. We import the Options class to set the value for \(\omega\) manually.

>>> from oppy.options import Options
>>> sor_options = Options(omega=omega)
>>> sor_results = sor(A, f, sor_options)

and compare the solution to the exact one

>>> print(np.linalg.norm(sor_results.x - x_sol))
3.329043927513322e-05

Notice that, depending on your system, this number could be slightly different, but the error should be on the order of \(10^{-5}\).

oppy.itMet.steepest_descent module

Steepest descent method.

This module contains the function steepest_descent() to solve a system

\[Ax = f\]

of linear equations using the steepest_descent() method.
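
The iteration is short: each step moves along the current residual (the negative gradient of the associated quadratic) with the exact line-search step length \(\alpha_k = \frac{r_k^\top r_k}{r_k^\top A r_k}\). A minimal plain-NumPy sketch for a symmetric positive definite matrix (for illustration only; not the implementation behind steepest_descent() below):

>>> import numpy as np
>>> def steepest_descent_sketch(A, f, tol=1e-7, nmax=10**5):
...     x = np.zeros_like(f)                     # initial guess u_0 = 0
...     for _ in range(int(nmax)):
...         r = f - A.dot(x)                     # current residual
...         if np.linalg.norm(r) < tol:
...             break
...         alpha = r.dot(r) / r.dot(A.dot(r))   # exact line search
...         x += alpha * r
...     return x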

oppy.itMet.steepest_descent.steepest_descent(A, f, options=None)

Solve the linear system \(Ax = f\) with the steepest_descent() method.

The input matrix A can be a 2-D-array (numpy.ndarray object) or a scipy.sparse matrix.

Parameters:
  • A (numpy.ndarray or scipy.sparse matrix, shape (n,n)) – The square matrix A for solving the linear system \(Ax = f\).
  • f (numpy.ndarray, shape (n,)) – The vector representing the right hand side of the equation.
  • options (Options, optional) –

    Options for the steepest_descent solver, represented as an Options object. The default is None. Possible options are

    • u_0 : numpy.ndarray, shape (n,)
      Initial guess, represented as an ndarray with shape (n,). The default is None, in which case the zero vector is used.
    • tol_abs : float
      Absolute tolerance for the termination condition, initialized with \(10^{-7}\).
    • tol_rel : float
      Relative tolerance for the termination condition, initialized with \(10^{-7}\).

    Note

    The algorithm stops if the norm of the current residual res[k] satisfies the inequality

    \[\mathrm{res}[k] < \mathrm{stopping\_criteria},\]

    where stopping_criteria is given by

    \[\mathrm{stopping\_criteria} = \mathrm{tol\_rel} \cdot \mathrm{res}[0] + \mathrm{tol\_abs},\]

    and res[0] is the norm of the initial residual. If the value of stopping_criteria is larger than \(0.1\) (e.g. when the norm of the initial residual is large and the relative tolerance is small), it is set to \(0.1\).

    • nmax : int or float
      Maximum number of iterations, initialized with \(10^{5}\).
    • results : str or list
      String or list to specify the attributes for the Results object, initialized with 'normal'.
Returns:

results – Instance of the class Results, representing the results of the solver. Which attributes the object has depends on the results entry passed via the options argument. With the default choice the object has

  • x : numpy.ndarray, shape (n,)

    Solution vector of the linear system.

  • iterations : int

    Number of iterations the algorithm needed.

  • res : list

    List with the norms of the residuals.

Return type:

Results

Warns:
  • default warning – If A is not a square matrix, a warning is raised and None is returned.
  • default warning – If the initial guess \(u_0\) is set manually, the algorithm checks whether the sizes of \(u_0\) and the right-hand side f are the same. If not, a warning is raised and \(u_0\) is set to the zero vector.

Examples

Solving a linear system where A is given as the discrete Laplacian operator computed with finite differences. For tests, this is implemented in FDLaplacian().

>>> from oppy.tests.itMet import FDLaplacian

As solver we choose the steepest_descent() method

>>> from oppy.itMet import steepest_descent

To construct a test case we use NumPy arrays (numpy.ndarray)

>>> import numpy as np

Set up test case

The following computes the 2 dimensional discrete Laplacian

>>> m = 100
>>> d = 2
>>> A = FDLaplacian(m,d)

Notice that in this case A is a scipy.sparse matrix. We choose a random solution and compute the right-hand side

>>> x_sol = np.random.rand(m**d)
>>> f = A.dot(x_sol)

Now, we can solve our system

>>> sd_results = steepest_descent(A, f)

and compare the solution to the exact one

>>> print(np.linalg.norm(sd_results.x-x_sol))
0.0013979296735099711

Notice that, depending on your system, this number could be slightly different, but the error should be on the order of \(10^{-3}\).