harold module
The MIT License (MIT)
Copyright (c) 2015 Ilhan Polat
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
class harold.Transfer(num, den=None, dt=False)
Bases: object
Transfer is one of the two main system classes in harold (together with State()).
Instantiation of this class depends on whether the user wants to create a Single Input/Single Output (SISO) or a Multiple Input/Multiple Output (MIMO) model.
For SISO system creation, 1D lists or 1D numpy arrays are expected, e.g.:
>>> G = Transfer(1, [1, 2, 1])
For MIMO systems, depending on the shared denominators, there are two distinct ways of entering a MIMO transfer function:
1. Entering “list of lists of lists” such that every element of the inner lists is numpy array-able (explicitly checked) for the numerator, and entering a 1D list or 1D numpy array for the denominator (and similarly for the numerator):
>>> G = Transfer([[[1, 3, 2], [1, 3]], [[1], [1, 0]]], [1, 4, 5, 2])
>>> G.shape
(2, 2)
2. Entering the denominator also as a list of lists for individual entries, which becomes a bracket nightmare (thanks to Python’s non-native support for arrays and its tedious array syntax):
>>> G = Transfer([[[1, 3, 2], [1, 3]], [[1], [1, 0]]],                 # num
...              [[[1, 2, 1], [1, 3, 3]], [[1, 0, 0], [1, 2, 3, 4]]])  # den
>>> G.shape
(2, 2)
There is a very involved validator, and if you would like to know why or how a given input is handled, provide the same numerator and denominator to the static method below with the ‘verbose=True’ keyword argument, e.g.:
>>> n, d, shape, is_it_static = Transfer.validate_arguments(
...     [1, 3, 2],                                            # common numerator
...     [[[1, 2, 1], [1, 3, 3]], [[1, 0, 0], [1, 2, 3, 4]]],  # explicit denominator
...     verbose=True)                                         # print the logic it followed
would give information about the context together with the regularized numerator, denominator, the resulting system shape, and a boolean indicating whether or not the system has dynamics.
However, the preferred way is to make everything a numpy array inside the list of lists. That skips many compatibility checks. Once created, the shape of the numerator and denominator cannot be changed, but compatibly sized arrays can be supplied and the pole/zero locations and other properties are recalculated automatically.
The sampling period can be given as the last argument, via the ‘dt’ keyword, or changed later through property access:
>>> G = Transfer([1], [1, 4, 4], 0.5)
>>> G.SamplingSet
'Z'
>>> G.SamplingPeriod
0.5
>>> F = Transfer([1], [1, 2])
>>> F.SamplingSet
'R'
>>> F.SamplingPeriod = 0.5
>>> F.SamplingSet
'Z'
>>> F.SamplingPeriod
0.5
Setting the SamplingPeriod property to False makes the system continuous time again, and the relevant properties are reset to their continuous-time values.
Warning
Unlike MATLAB or other tools, a discrete time system needs a specified sampling period (and possibly a discretization method, if applicable) because a model without a sampling period doesn’t make sense for analysis. If you don’t care, then make up a number, say, a million, since you don’t care.
SamplingSet
If this property is called as G.SamplingSet, it returns the set 'Z' or 'R' for discrete and continuous models respectively. This is a read-only property and cannot be set; instead, an appropriate value should be given to the SamplingPeriod property.
NumberOfInputs
A read-only property that holds the number of inputs.
NumberOfOutputs
A read-only property that holds the number of outputs.
shape
A read-only property that holds the shape of the system as a tuple such that the result is (# of inputs, # of outputs).
polynomials
A read-only property that returns the model numerator and denominator as the outputs.
SamplingPeriod
If this property is called as G.SamplingPeriod, it returns the sampling period data. If this property is set to False, the model is assumed to be continuous. Otherwise, a discrete time model is assumed. Upon changing this value, relevant system properties are recalculated.
num
If this property is called as G.num, it returns the numerator data. Alternatively, if this property is set, the provided value is first validated against the existing denominator shape and causality.
den
If this property is called as G.den, it returns the denominator data. Alternatively, if this property is set, the provided value is first validated against the existing numerator shape and causality.
DiscretizedWith
This property is used internally to keep track of (if applicable) the original method used for discretization. It is used by the undiscretize() function to reach back to the continuous model, which would hopefully minimize the discretization errors. It is also possible to set this property manually so that undiscretize uses the provided method.
DiscretizationMatrix
This matrix, denoted \(Q\), is used internally to represent the upper linear fractional transformation of the operation \(\frac{1}{s} I = \frac{1}{z} I \star Q\). For example, the typical Tustin and forward/backward difference methods can be represented with

\[\begin{split}Q = \begin{bmatrix} I & \sqrt{T}I \\ \sqrt{T}I & aTI \end{bmatrix}\end{split}\]

where different values of \(a\) correspond to the following transformations:

\(a = 0\): backward difference (Euler)
\(a = 0.5\): Tustin
\(a = 1\): forward difference (Euler)

This operation is usually motivated with a Riemann sum argument; however, for control theoretical purposes, a proper mapping argument immediately suggests more precise control over the domain to which the left half plane is mapped. For this reason, a discretization matrix option is provided to the user.
The available methods (and their aliases) can be accessed via the internal _KnownDiscretizationMethods variable.

Note
The common discretization techniques can be selected with a keyword argument, and this matrix business can safely be avoided. This is a rather technical issue and is best used sparingly. For the experts, I have to note that the transformation is currently not tested for well-posedness.

Note
SciPy actually uses a variant of this LFT representation, as given in the paper of Zhang et al.
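As a concrete illustration of the \(a = 0.5\) (Tustin) case above, the bilinear transform can be written directly in state-space form. The following is a minimal numpy sketch of the textbook formulas, not harold's internal LFT machinery; the function name is made up for the example.

```python
import numpy as np

def tustin_discretize(A, B, C, D, T):
    """Bilinear (Tustin) discretization; corresponds to a = 0.5 above."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    n = A.shape[0]
    M = np.linalg.inv(np.eye(n) - (T / 2) * A)   # (I - T/2 A)^{-1}
    Ad = M @ (np.eye(n) + (T / 2) * A)
    Bd = T * (M @ B)
    Cd = C @ M
    Dd = D + (T / 2) * (C @ M @ B)
    return Ad, Bd, Cd, Dd

# G(s) = 1/(s+1); Tustin maps s = 0 to z = 1, so the DC gain is preserved
Ad, Bd, Cd, Dd = tustin_discretize(-1.0, 1.0, 1.0, 0.0, 0.1)
dc_gain = (Cd @ np.linalg.inv(np.eye(1) - Ad) @ Bd + Dd)[0, 0]
```

Since Tustin maps the imaginary axis onto the unit circle, the continuous and discrete responses agree exactly at \(s = 0\), which is easy to verify on this scalar example.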
PrewarpFrequency
If the discretization method is tustin, a frequency warping correction might be required to match the discrete time system response in the frequency band of interest. Via this property, the prewarp frequency can be provided.
static validate_arguments(num, den, verbose=False)
A helper function to validate whether given arguments to a Transfer instance are valid and compatible for instantiation.
Since there are many cases that might lead to a valid Transfer instance, the Pythonic “try, except” machinery is not very helpful for checking every possibility, and it is equally challenging to branch off. A few examples of such issues that need to be addressed are static gains, a single entry for a MIMO system with common denominators, and so on.
Thus, this function provides a front-end to the laborious size and type checking which would make the Transfer object itself seemingly compatible with duck-typing while keeping the nasty branching implementation internal.
The resulting output is compatible with the main harold Transfer class convention such that
- If the recognized context is MIMO, the resulting outputs are lists of lists with numpy arrays as the polynomial coefficient entries.
- If the recognized context is SISO, the entries are numpy arrays with any list structure stripped off.
Parameters: - num –
The polynomial coefficient containers. Either of them (but not both) can be None, in which case the context is derived from the other for static gains. Otherwise both are expected to be one of np.array, int, float, list, list of lists of lists, or numpy arrays.
For the MIMO context, element counts and causality checks are performed such that each inner array of the numerator list of lists has length less than or equal to that of the corresponding denominator entry.
For SISO context, causality check is performed between numerator and denominator arrays.
- den – Same as num
- verbose (boolean) – A boolean switch to print out what this method thinks about the argument context.
Returns: - num (List of lists or numpy array (MIMO/SISO))
- den (List of lists or numpy array (MIMO/SISO))
- shape (2-tuple) – Returns the recognized shape of the system
- Gain_flag (Boolean) – Returns True if the system is recognized as a static gain, False otherwise (for both SISO and MIMO)
class harold.State(a, b=None, c=None, d=None, dt=False)
Bases: object
State() is one of the two main system classes in harold (together with Transfer()).
A State object can be instantiated in a straightforward manner by entering 2D arrays, floats, 1D arrays for row vectors, and so on:
>>> G = State([[0, 1], [-4, -5]], [[0], [1]], [[1, 0]], 1)
However, the preferred way is to make everything a numpy array. That skips many compatibility checks. Once created, the shape of the system matrices cannot be changed, but compatibly sized arrays can be supplied and the pole/zero locations and other properties are recalculated automatically.
The sampling period can be given as the last argument, via the ‘dt’ keyword, or changed later through property access:
>>> G = State([[0, 1], [-4, -5]], [[0], [1]], [[1, 0]], [1], 0.5)
>>> G.SamplingSet
'Z'
>>> G.SamplingPeriod
0.5
>>> F = State(1, 2, 3, 4)
>>> F.SamplingSet
'R'
>>> F.SamplingPeriod = 0.5
>>> F.SamplingSet
'Z'
>>> F.SamplingPeriod
0.5
Setting the SamplingPeriod property to False makes the system continuous time again, and the relevant properties are reset to their continuous-time values.
Warning: Unlike MATLAB or other tools, a discrete time system needs a specified sampling period (and possibly a discretization method, if applicable) because a model without a sampling period doesn’t make sense for analysis. If you don’t care, then make up a number, say, a million, since you don’t care.
SamplingSet

NumberOfStates

NumberOfInputs

NumberOfOutputs

shape

matrices

a

b

c

d

SamplingPeriod

DiscretizedWith

DiscretizationMatrix

PrewarpFrequency
static validate_arguments(a, b, c, d, verbose=False)
An internal command to validate whether given arguments to a State() instance are valid and compatible.
It also checks whether the lists are 2D numpy.array’able entries.
harold.statetotransfer(*state_or_abcd, output='system')
Given a State() object or a tuple of A,B,C,D array-likes, converts the argument into the transfer representation. The output can be selected as a Transfer() object, or as the numerator and denominator if the ‘output’ keyword is given the option ‘polynomials’.
If the input is a Transfer() object it returns the argument with no modifications.
The algorithm first obtains the minimal realization of the State() representation, then implements the conversion à la Varga, Sima 1981, which can be summarized as iterating over every row/column of B and C to get SISO Transfer representations via \(c(sI-A)^{-1}b+d\).
Parameters: - state_or_abcd (State() or a tuple of A,B,C,D matrices.) –
- output ({‘system’,’polynomials’}) – Selects whether a State() object or individual numerator, denominator will be returned.
Returns: - G (Transfer()) – If the ‘output’ keyword is set to ‘system’
- num, den (List of lists of 2D numpy arrays for the MIMO case, 2D numpy arrays for the SISO case) – If the ‘output’ keyword is set to ‘polynomials’
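The per-channel step above can be illustrated for one SISO channel with plain numpy via the determinant identity \(\det(sI-A+bc) = \det(sI-A)\left(1 + c(sI-A)^{-1}b\right)\). This is a sketch of the idea, not harold's implementation, and the function name is hypothetical.

```python
import numpy as np

def siso_ss_to_tf(A, b, c, d=0.0):
    """Numerator/denominator of c (sI - A)^{-1} b + d, using
    det(sI - A + b c) = det(sI - A) * (1 + c (sI - A)^{-1} b)."""
    A = np.atleast_2d(A)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    c = np.asarray(c, dtype=float).reshape(1, -1)
    den = np.poly(A)                 # det(sI - A) coefficients
    num = np.poly(A - b @ c) - den   # c adj(sI - A) b coefficients
    return num + d * den, den

# Companion-form realization of 1/(s^2 + 3s + 2)
num, den = siso_ss_to_tf([[0, 1], [-2, -3]], [0, 1], [1, 0])
```

Here `num` comes out zero-padded to the denominator length, which mirrors the coefficient containers described in the Transfer docstring.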
harold.transfertostate(*tf_or_numden, output='system')
Given a Transfer() object or a tuple of numerator and denominator, converts the argument into the state representation. The output can be selected as a State() object, or as the A,B,C,D matrices if the ‘output’ keyword is given the option ‘matrices’.
If the input is a State() object it returns the argument with no modifications.
For SISO systems, the algorithm returns the controllable companion form.
For MIMO systems, a variant of the algorithm given in Section 4.4 of W.A. Wolovich, Linear Multivariable Systems (1974) is used. The denominators are equalized with the haroldlcm() least common multiple function.
Parameters: - tf_or_numden (Transfer() or a tuple of numerator and denominator.) – For MIMO numerator and denominator arguments see Transfer() docstring.
- output ({‘system’,’matrices’}) – Selects whether a State() object or individual state matrices will be returned.
Returns: - G (State()) – If ‘output’ keyword is set to ‘system’
- A,B,C,D ({(nxn),(nxm),(p,n),(p,m)} 2D Numpy-arrays) – If the ‘output’ keyword is set to ‘matrices’
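For the SISO branch, the controllable companion form mentioned above can be sketched with numpy, assuming a monic denominator and a strictly proper numerator (the helper name is made up for this illustration):

```python
import numpy as np

def companion_realization(num, den):
    """Controllable companion form of num/den with monic den and
    len(num) <= len(den) - 1 (strictly proper)."""
    num = np.asarray(num, dtype=float)
    den = np.asarray(den, dtype=float)
    n = den.size - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # shift structure on the superdiagonal
    A[-1, :] = -den[:0:-1]            # last row: -a_0, -a_1, ..., -a_{n-1}
    B = np.zeros((n, 1)); B[-1, 0] = 1.0
    C = np.zeros((1, n))
    C[0, :num.size] = num[::-1]       # numerator coefficients, low to high power
    return A, B, C

A, B, C = companion_realization([1], [1, 3, 2])   # 1/(s^2 + 3s + 2)
```

By construction, the characteristic polynomial of `A` equals the supplied denominator, which is a quick sanity check on the realization.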
harold.transmission_zeros(A, B, C, D)
Computes the transmission zeros of an (A,B,C,D) system matrix quartet.
Note
This is a straightforward implementation of the algorithm of Misra, van Dooren, Varga 1994, but skipping the descriptor matrix, which in turn reduces it to Emami-Naeini, van Dooren 1979. I don’t know if anyone actually uses descriptor systems in practice, so I removed the descriptor parts to reduce the clutter. Hence, it is possible to directly row/column compress the matrices without caring about the upper Hessenberg structure of the E matrix.
Parameters: A,B,C,D ({(nxn),(nxm),(p,n),(p,m)} 2D numpy arrays)
Returns: z (1D numpy array) – The array of computed transmission zeros. The array is returned empty if no transmission zeros are found.
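For small, well-behaved quartets, the same zeros can be illustrated as the finite generalized eigenvalues of the Rosenbrock pencil. This sketch is not the compression algorithm referenced above, just a way to see what is being computed; scipy is assumed available.

```python
import numpy as np
from scipy.linalg import eig

def pencil_zeros(A, B, C, D):
    """Finite generalized eigenvalues of ([A B; C D], blkdiag(I, 0)),
    i.e. the points where the Rosenbrock system matrix drops rank."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    n = A.shape[0]
    M = np.block([[A, B], [C, D]])
    E = np.zeros_like(M)
    E[:n, :n] = np.eye(n)
    w = eig(M, E, right=False)       # infinite eigenvalues come out as inf/nan
    return w[np.isfinite(w)]

# (s+3)/(s^2 + 3s + 2) has a single transmission zero at s = -3
z = pencil_zeros([[0, 1], [-2, -3]], [[0], [1]], [[3, 1]], [[0]])
```

The pencil approach is fine for demonstration, but the compression algorithm above is preferred numerically since it avoids the infinite eigenvalues altogether.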
harold.discretize(G, dt, method='tustin', PrewarpAt=0.0, q=None)
harold.undiscretize(G, OverrideWith=None)
Converts a discrete time system model to a continuous time system model. If the model has its discretization method set, that method is used to reach back to the continuous system model.
harold.rediscretize(G, dt, method='tustin', alpha=0.5)
Todo: Not implemented yet!
harold.kalman_controllability(G, compress=False)
Computes the Kalman controllability related quantities. The algorithm is the literal computation of the controllability matrix with increasing powers of A. Numerically, this test is not robust and is prone to errors if the A matrix is not well-conditioned or its entries have varying orders of magnitude, since at each additional power of A the entries blow up or converge to zero rapidly.
Parameters: - G (State() or tuple of {(n,n),(n,m)} array_like matrices) – System or matrices to be tested
- compress (Boolean) – If set to True, then the returned controllability matrix is row compressed, and in case of uncontrollable modes, has that many zero rows.
Returns: - Cc ({(n,nxm)} 2D numpy array) – Kalman Controllability Matrix
- T ((n,n) 2D numpy arrays) – The transformation matrix such that T^T * Cc is row compressed and the number of zero rows at the bottom corresponds to the number of uncontrollable modes.
- r (integer) – Numerical rank of the controllability matrix
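The literal computation described above amounts to stacking \(B, AB, A^2B, \ldots\) and checking the rank; a minimal numpy sketch (the function name is illustrative, not harold's API):

```python
import numpy as np

def ctrb(A, B):
    """Kalman controllability matrix [B, AB, ..., A^{n-1} B] and its rank."""
    A = np.atleast_2d(A)
    B = np.atleast_2d(B)
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])    # next power of A applied to B
    Cc = np.hstack(blocks)
    return Cc, np.linalg.matrix_rank(Cc)

Cc, r = ctrb([[0, 1], [-4, -5]], [[0], [1]])   # a controllable pair: rank == n
```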
harold.kalman_observability(G, compress=False)
Computes the Kalman observability related objects. The algorithm is the literal computation of the observability matrix with increasing powers of A. Numerically, this test is not robust and is prone to errors if the A matrix is not well-conditioned or too big, since at each additional power of A the entries blow up or converge to zero rapidly.
Parameters: - G (State() or tuple of {(n,n),(p,n)} array_like matrices) – System or matrices to be tested
- compress (Boolean) – If set to True, then the returned observability matrix is row compressed, and in case of unobservable modes, has that many zero rows.
Returns: - Co ({(nxp,n)} 2D numpy array) – Kalman observability matrix
- T ((n,n) 2D numpy array) – The transformation matrix such that T^T * Co is row compressed and the number of zero rows corresponds to the number of unobservable modes.
- r (integer) – Numerical rank of the observability matrix
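The observability counterpart stacks \(C, CA, CA^2, \ldots\) row-wise, which is the dual of the controllability construction (obsv(A, C) equals ctrb(Aᵀ, Cᵀ)ᵀ); again a hedged numpy sketch, not harold's API:

```python
import numpy as np

def obsv(A, C):
    """Kalman observability matrix [C; CA; ...; CA^{n-1}] and its rank."""
    A = np.atleast_2d(A)
    C = np.atleast_2d(C)
    blocks = [C]
    for _ in range(A.shape[0] - 1):
        blocks.append(blocks[-1] @ A)    # C A^k, stacked row-wise
    Co = np.vstack(blocks)
    return Co, np.linalg.matrix_rank(Co)

Co, r = obsv([[0, 1], [-4, -5]], [[1, 0]])   # an observable pair: rank == n
```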
harold.kalman_decomposition(G, compute_T=False, output='system', cleanup_threshold=1e-09)
(G, compute_T=False, output='system', cleanup_threshold=1e-09)¶ By performing a sequence of similarity transformations the State representation is transformed into a special structure such that if the system has uncontrollable/unobservable modes, the corresponding rows/columns of the B/C matrices have zero blocks and the modes are isolated in the A matrix. That is to say, there is no contribution of the controllable/observable states on the dynamics of these modes.
Note that Kalman operations are numerically not robust, hence the resulting decomposition might miss some ‘almost’ pole-zero cancellations. Thus, this should be used as a rough assessment tool, not as an actual minimality check, or perhaps to demonstrate the concepts for academic purposes and to show the modal decomposition. Use the canceldistance() and minimal_realization() functions instead, which have better numerical properties.
Example usage and verification :
G = State([[2,1,1],[5,3,6],[-5,-1,-4]], [[1],[0],[0]], [[1,0,0]], 0)
print("Is it Kalman Cont'ble ? ", is_kalman_controllable(G))
print("Is it Kalman Obsv'ble ? ", is_kalman_observable(G))
F = kalman_decomposition(G)
print(F.a, F.b, F.c)
H = minimal_realization(F.a, F.b, F.c)
print('The minimal system matrices are:', *H)
Expected output :
Is it Kalman Cont'ble ?  False
Is it Kalman Obsv'ble ?  False
[[ 2.          0.         -1.41421356]
 [ 7.07106781 -3.         -7.        ]
 [ 0.          0.          2.        ]]
[[-1.]
 [ 0.]
 [ 0.]]
[[-1.  0.  0.]]
The minimal system matrices are: [[ 2.]] [[ 1.]] [[ 1.]]
Warning
Kalman decomposition is often described in an ambiguous fashion in the literature. I would like to thank Joaquin Carrasco for his generous help on this matter and for his lucid argument as to why this is probably happening. This is going to be reimplemented with better tests on pathological models.
Parameters: - G (State()) – The state representation that is to be converted into the block triangular form such that unobservable/uncontrollable modes corresponds to zero blocks in B/C matrices
- compute_T (boolean) – Selects whether the similarity transformation matrix will be returned.
- output ({‘system’,’matrices’}) – Selects whether a State() object or individual state matrices will be returned.
- cleanup_threshold (float) – After the similarity transformation, the matrix entries smaller than this threshold in absolute value would be zeroed. Setting this value to zero turns this behavior off.
Returns: Gk (State() or if output = ‘matrices’ is selected (A,B,C,D) tuple) – Returns a state representation or its matrices as a tuple
T ((nxn) 2D-numpy array) – If compute_T is True, returns the similarity transform matrix that brings the state representation in the resulting decomposed form such that
Gk.a = inv(T)*G.a*T
Gk.b = inv(T)*G.b
Gk.c = G.c*T
Gk.d = G.d
harold.is_kalman_controllable(G)
(G)¶ Tests the rank of the Kalman controllability matrix and compares it with the A matrix size, returns a boolean depending on the outcome.
Parameters: G (State() or tuple of {(nxn),(nxm)} array_like matrices) – The system or the (A,B) matrix tuple
Returns: test_bool (Boolean) – Returns True if the input is Kalman controllable
harold.is_kalman_observable(G)
(G)¶ Tests the rank of the Kalman observability matrix and compares it with the A matrix size, returns a boolean depending on the outcome.
Parameters: G (State() or tuple of {(nxn),(pxn)} array_like matrices) – The system or the (A,C) matrix tuple
Returns: test_bool (Boolean) – Returns True if the input is Kalman observable
harold.staircase(A, B, C, compute_T=False, form='c', invert=False)
(A, B, C, compute_T=False, form='c', invert=False)¶ The staircase form is used very often to assess system properties. Given a state system matrix triplet A,B,C, this function computes the so-called controller/observer-Hessenberg form such that the resulting system matrices have the block-form (x denoting the nonzero blocks)
\[\begin{split}\begin{array}{c|c} \begin{bmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & \times \end{bmatrix} & \begin{bmatrix} \times \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \\ \hline \begin{bmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \end{bmatrix} \end{array}\end{split}\]

For controllability and observability, the existence of zero-rank subdiagonal blocks can be checked, as opposed to forming the Kalman matrix and checking the rank. The staircase method can be numerically more stable, since for certain matrices A^n computations can introduce large errors (for some A that have entries with varying orders of magnitude). But it is also prone to numerical rank guessing mismatches.
Notice that if we use the pertransposed data, we obtain the observer form; this is why tools usually ask the user to supply the data as \(A,B,C \Rightarrow A^T,C^T,B^T\) and then transpose back the result. It is just silly to ask the user to do that, hence the additional form option denoting whether the observer or the controller form is requested.

Parameters: - A,B,C ({(n,n),(n,m),(p,n)} array_like) – System matrices to be converted
- compute_T (bool, optional) – Whether the transformation matrix T should be computed or not
- form ({ ‘c’ , ‘o’ }, optional) – Determines whether the controller- or observer-Hessenberg form will be computed.
- invert (bool, optional) – Selects which of the B or C matrices will be compressed. For example, the default case returns the B matrix with (if any) zero rows at the bottom; the invert option flips this choice in either the B or C matrix, depending on the form switch.
Returns: Ah,Bh,Ch ({(n,n),(n,m),(p,n)} 2D numpy arrays) – Converted system matrices
T ((n,n) 2D numpy array) – If the boolean compute_T is True, returns the transformation matrix such that

\[\begin{split}\left[\begin{array}{c|c} T^{-1}AT & T^{-1}B \\ \hline CT & D \end{array}\right]\end{split}\]

is in the desired staircase form.
k (Numpy array) – Array of controllable block sizes identified during block diagonalization
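For a single-input system, the controller-Hessenberg form described above can be sketched by compressing b with one Householder reflector and then relying on the fact that LAPACK's Hessenberg reduction leaves the first basis vector fixed. This is an illustrative single-input sketch under the assumption b ≠ 0 (harold handles the general block case); scipy is assumed available and the function name is made up.

```python
import numpy as np
from scipy.linalg import hessenberg

def controller_hessenberg_si(A, b):
    """Orthogonal T with T.T @ b proportional to e1 and T.T @ A @ T
    upper Hessenberg (single-input controller-Hessenberg sketch)."""
    A = np.atleast_2d(A)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[0]
    # Householder reflector sending b to a multiple of e1 (assumes b != 0)
    v = b.copy()
    v[0, 0] += (1.0 if v[0, 0] >= 0 else -1.0) * np.linalg.norm(b)
    H = np.eye(n) - 2.0 * (v @ v.T) / (v.T @ v)[0, 0]
    # The Hessenberg reduction only touches rows/cols 2..n, so Q @ e1 = e1
    Ah, Q = hessenberg(H @ A @ H.T, calc_q=True)
    T = H.T @ Q
    return Ah, T.T @ b, T

Ah, bh, T = controller_hessenberg_si([[2, 1, 1], [5, 3, 6], [-5, -1, -4]],
                                     [1, 0, 0])
```

A vanishing subdiagonal entry of `Ah` would flag an uncontrollable block, exactly the zero-rank subdiagonal check mentioned above.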
harold.canceldistance(F, G)
(F, G)¶ Given matrices \(F,G\), computes the upper and lower bounds of the perturbation needed to render the pencil \(\left[ \begin{array}{c|c}F-pI & G\end{array}\right]\) rank deficient. It is used for assessing the controllability/observability degenerate distance and hence for minimality assessment.
Implements the algorithm given in D.Boley SIMAX vol.11(4) 1990.
Parameters: F,G (2D arrays) – Pencil matrices to be checked for rank deficiency distance
Returns: - upper2 (float) – Upper bound on the norm of the perturbation \(\left[\begin{array}{c|c}dF & dG\end{array}\right]\) such that \(\left[\begin{array}{c|c}F+dF-pI & G+dG \end{array} \right]\) is rank deficient.
- upper1 (float) – A theoretically softer upper bound than the upper2 for the same quantity.
- lower0 (float) – Lower bound on the same quantity given in upper2
- e_f (complex) – Indicates the eigenvalue that renders [F + dF - pI | G + dG ] rank deficient, i.e., equals the p value at the closest rank deficiency.
- radius (float) – The perturbation with the norm bound “upper2” is located within a disk in the complex plane whose center is on “e_f” and whose radius is bounded by this output.
harold.minimal_realization(A, B, C, mu_tol=1e-09)
Given state matrices A,B,C, computes minimal state matrices such that the system is controllable and observable within the given tolerance mu.
Implements a basic two-pass algorithm: first the distance to mode cancellation is computed, then the Hessenberg form is obtained with the identified observable/controllable block numbers. If the staircase form reports that there are no cancellations but the distance is less than the tolerance, the distance wins and the respective mode is removed.
Uses canceldistance(), and staircase() for the aforementioned checks.
Parameters: - A,B,C ({(n,n), (n,m), (pxn)} array_like) – System matrices to be checked for minimality
- mu_tol (float, default 1e-9) – The sensitivity threshold for the cancellation, compared with the first default output of the canceldistance() function.
Returns: A,B,C – System matrices that are identified as minimal with k states instead of the original n where (k <= n)
Return type: {(k,k), (k,m), (pxk)} array_like
harold.haroldsvd(D, also_rank=False, rank_tol=None)
This is a wrapper/container function for both the SVD decomposition and the rank computation. Since the regular rank computation is implemented via SVD, it doesn’t make much sense to recompute the SVD if we already have the rank information. Thus, instead of typing two commands back to back for both the SVD and the rank, we return both. To reduce the clutter, the rank information is suppressed by default.
numpy’s svd is a bit strange because it compresses and loses the S matrix structure. From the manual, it is advised to use u.dot(np.diag(s).dot(v)) for recovering the original matrix, but that won’t work for rectangular matrices. Hence this function recreates the rectangular S matrix of the U,S,V triplet.
Parameters: - D ((m,n) array_like) – Matrix to be decomposed
- also_rank (bool, optional) – Whether the rank of the matrix should also be reported or not. The returned rank is computed via the definition taken from the official numpy.linalg.matrix_rank and appended here.
- rank_tol ({None,float} optional) – The tolerance used for deciding the numerical rank. The default is set to None and uses the default definition of matrix_rank() from numpy.
Returns: - U,S,V ({(m,m),(m,n),(n,n)} 2D numpy arrays) – Decomposed-form matrices
- r (integer) – If the boolean “also_rank” is true, this variable is the numerical rank of the matrix D
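Rebuilding the rectangular S described above takes only a few lines of numpy; a sketch of the idea (the function name is illustrative):

```python
import numpy as np

def full_svd(D):
    """SVD with a rectangular S restored so that U @ S @ V reproduces D
    (numpy's svd returns only the 1D vector of singular values)."""
    D = np.atleast_2d(D)
    u, s, v = np.linalg.svd(D)           # v is already the V^H factor
    S = np.zeros(D.shape)
    S[:s.size, :s.size] = np.diag(s)     # embed the singular values in (m,n)
    return u, S, v

D = np.array([[1., 2., 3.], [4., 5., 6.]])  # rectangular test matrix
u, S, v = full_svd(D)
```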
harold.ssconcat(G)

harold.ssslice(H, n)

harold.matrixslice(M, M11shape)

harold.blockdiag(*args)

harold.eyecolumn(width, nth=0)
harold.system_norm(state_or_transfer, p=inf, validate=False, verbose=False, max_iter_limit=100, hinf_tolerance=1e-10, eig_tolerance=1e-12)
Computes the system p-norm. Currently, no balancing is done on the system; however, in the future a scaling of some sort will be introduced. Another shortcoming is that, while sounding general, only the H2 and Hinf norms are understood.
For the H2 norm, the standard Gramian definition via the controllability Gramian, which can be found elsewhere, is used.
Currently, the Hinf norm is computed via the so-called Boyd-Balakrishnan-Bruinsma-Steinbuch algorithm (see e.g. [2]).
However, (with kind and generous help of Melina Freitag) the algorithm given in [1] is being implemented and depending on the speed benefit might be replaced as the default.
[1] M.A. Freitag, A. Spence, P. Van Dooren: Calculating the \(H_\infty\)-norm using the implicit determinant method. SIAM J. Matrix Anal. Appl., 35(2), 619-635, 2014
[2] N.A. Bruinsma, M. Steinbuch: Fast computation of the \(H_\infty\)-norm of a transfer function. Systems & Control Letters, 14, 1990
Parameters: - state_or_transfer ({State,Transfer}) – System for which the norm is computed
- p ({int, Inf}) – The norm type to be computed; currently only 2 and Inf are understood.
- validate (boolean) – If applicable and if the resulting norm is finite, the result is validated via other means.
- verbose (boolean) – If True, the (some) internal progress is printed out.
- max_iter_limit (int) – Stops the iteration after max_iter_limit many times spinning the loop. Very unlikely but might be required for pathological examples.
- hinf_tolerance (float) – Convergence check value such that when the progress is below this tolerance the result is accepted as converged.
- eig_tolerance (float) – The algorithm relies on checking the eigenvalues of the Hamiltonian being on the imaginary axis or not. This value is the threshold such that the absolute real value of the eigenvalues smaller than this value will be accepted as pure imaginary eigenvalues.
Returns: - n (float) – Computed norm. In NumPy, infinity is also float-type
- omega (float) – For Hinf norm, omega is the frequency where the maximum is attained (technically this is a numerical approximation of the supremum).
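The H2 computation via the controllability Gramian mentioned above can be sketched with scipy's Lyapunov solver. This mirrors the definition, not harold's internal code, and assumes a stable, strictly proper system; the function name is made up.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm via the controllability Gramian P:
    A P + P A^T + B B^T = 0,  ||G||_2 = sqrt(trace(C P C^T))."""
    A = np.atleast_2d(A)
    B = np.atleast_2d(B)
    C = np.atleast_2d(C)
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# G(s) = 1/(s+1) has H2 norm 1/sqrt(2)
n2 = h2_norm([[-1.0]], [[1.0]], [[1.0]])
```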
harold.haroldlcm(*args, compute_multipliers=True, cleanup_threshold=1e-09)
Takes n-many 1D numpy arrays and computes the numerical least common multiple polynomial. The polynomials are assumed to be in decreasing powers, e.g., s^2 + 5 should be given as numpy.array([1, 0, 5]).
Returns a numpy array holding the polynomial coefficients of the LCM, and a list whose entries are the polynomial multipliers that take each input element to the LCM.
Example:
>>> a, b = haroldlcm(*map(np.array, ([1,3,0,-4], [1,-4,-3,18], [1,-4,3], [1,-2,-8])))
>>> a
array([   1.,   -7.,    3.,   59.,  -68., -132.,  144.])
>>> b
[array([  1., -10.,  33., -36.]),
 array([  1.,  -3.,  -6.,   8.]),
 array([  1.,  -3., -12.,  20.,  48.]),
 array([  1.,  -5.,   1.,  21., -18.])]
>>> np.convolve([1,3,0,-4], b[0])  # or haroldpolymul() for polynomial multiplication
array([   1.,   -7.,    3.,   59.,  -68., -132.,  144.])
harold.haroldgcd(*args)
Takes *args-many 1D numpy arrays and computes the numerical greatest common divisor polynomial. The polynomials are assumed to be in decreasing powers, e.g., s^2 + 5 should be given as numpy.array([1, 0, 5]).
Returns a numpy array holding the polynomial coefficients of the GCD. The GCD does not cancel scalars but returns only monic results; in other words, the GCD of the polynomials 2 and 2s+4 is computed as 1.
Example:
>>> a = haroldgcd(*map(haroldpoly, ([-1,-1,-2,-1j,1j], [-2,-3,-4,-5], [-2]*10)))
>>> print(a)
array([ 1.,  2.])
Warning
It uses the LU factorization of the Sylvester matrix. Use responsibly. It does not check any certificate of success by any means (maybe it will in the future). I’ve tried the recent ERES method too; when there is a nontrivial GCD it performed satisfactorily, however it did not perform as well when GCD = 1 (maybe due to my implementation). Hence I’ve switched to matrix-based methods.
harold.haroldcompanion(somearray)
Takes a 1D numpy array or list and returns the companion matrix of the monic polynomial of somearray. Hence [0.5, 1, 2] will first be converted to [1, 2, 4].
Example:
>>> haroldcompanion([2,4,6])
array([[ 0.,  1.],
       [-3., -2.]])
>>> haroldcompanion([1,3])
array([[-3.]])
>>> haroldcompanion([1])
array([], dtype=float64)
harold.haroldtrimleftzeros(somearray)

harold.haroldpoly(rootlist)
harold.haroldpolyadd(*args, trimzeros=True)
Similar to the official polyadd from numpy, but allows for multiple args and doesn’t invert the order.
harold.haroldpolymul(*args, trimzeros=True)
Simple wrapper around the scipy convolve function for polynomial multiplication with multiple args. The arguments are passed through the left zero trimming function first.
Example:
>>> haroldpolymul([0,2,0], [0,0,0,1,3,3,1], [0,0.5,0.5])
array([ 1.,  4.,  6.,  4.,  1.,  0.])
harold.haroldpolydiv(dividend, divisor)
Polynomial division wrapped around the scipy deconvolve function. Takes two arguments and divides the first by the second.
Returns two arguments: the factor and the remainder, both passed through a left zero trimming function.
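The quotient-plus-trimmed-remainder behavior described above matches what numpy's own polydiv produces; a small illustration of what a deconvolution-based division returns (this uses numpy's function, not haroldpolydiv itself):

```python
import numpy as np

# (s^3 + 3s^2 + 3s + 1) / (s + 1) = s^2 + 2s + 1 with zero remainder
q, r = np.polydiv([1.0, 3.0, 3.0, 1.0], [1.0, 1.0])
```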
harold.frequency_response(G, custom_grid=None, high=None, low=None, samples=None, custom_logspace=None, input_freq_unit='Hz', output_freq_unit='Hz')
Computes the frequency response matrix of a State() or Transfer() object. Transfer matrices are converted to state representations before the computations. The system representations are always checked for minimality and, if any, unobservable/uncontrollable modes are removed.
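The underlying computation can be sketched by evaluating \(G(j\omega) = C(j\omega I - A)^{-1}B + D\) on a frequency grid; a plain numpy illustration (not harold's minimality-checked implementation, and the function name is made up):

```python
import numpy as np

def freq_resp(A, B, C, D, w):
    """Evaluate G(jw) = C (jw I - A)^{-1} B + D over a grid w in rad/s."""
    A, B, C, D = map(np.atleast_2d, (A, B, C, D))
    n = A.shape[0]
    return np.array([C @ np.linalg.solve(1j * wk * np.eye(n) - A, B) + D
                     for wk in w])

# G(s) = 1/(s+1): unit DC gain, -3 dB at 1 rad/s
H = freq_resp([[-1.0]], [[1.0]], [[1.0]], [[0.0]], [0.0, 1.0])
```

Solving the linear system at each grid point (rather than inverting) keeps the sketch closer to numerically sensible practice.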
(G, custom_grid=None, high=None, low=None, samples=None, custom_logspace=None, input_freq_unit='Hz', output_freq_unit='Hz')¶ Computes the frequency response matrix of a State() or Transfer() object. Transfer matrices are converted to state representations before the computations. The system representations are always checked for minimality and, if any, unobservable/uncontrollable modes are removed.