
Subsection B.3 Frequently used Python commands

If some code that you write isn’t working, check whether you are missing one of the required import statements. The following will be used frequently.
Less frequently we will use some of the following:
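The import chunks themselves are not reproduced here; a set consistent with the commands used later in this section would be:

```python
# A representative set of imports (the book's exact chunks are not shown here);
# these match the commands used throughout this section.
import numpy as np
from scipy import linalg
import sympy
```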
To make things a bit easier in this book, we will often include code chunks near the top of a section, like the ones above, to load the packages we require. In an interactive document, these will be auto-evaluated. You can tell that a cell has already been evaluated by the green output box below the code and the lack of a button to execute the code.
Creating matrices
There are a couple of ways to create matrices. For instance, the matrix
\begin{equation*} \begin{bmatrix} -2 & 3 & 0 & 4 \\ 1 & -2 & 1 & -3 \\ 0 & 2 & 3 & 0 \end{bmatrix} \end{equation*}
can be created in either of the two following ways.
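The original code cells are not shown here; two common constructions (which may or may not be the exact two the book intends) are a NumPy array and a SymPy matrix:

```python
import numpy as np
import sympy

# Row-by-row as a NumPy array
A = np.array([[-2, 3, 0, 4],
              [1, -2, 1, -3],
              [0, 2, 3, 0]])

# The same matrix as a SymPy matrix (exact integer entries)
B = sympy.Matrix([[-2, 3, 0, 4],
                  [1, -2, 1, -3],
                  [0, 2, 3, 0]])
```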
Be aware that Python can treat mathematically equivalent matrices in different ways depending on how they are entered. For instance, the matrix
has integer entries while
has floating point entries. If any of the entries in a matrix are provided as floating point numbers, then all of the entries will be converted to floating point values.
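The matrices from the original cells are not shown; the dtype behavior can be illustrated with a small example:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])     # all entries integers -> integer dtype
B = np.array([[1.0, 2], [3, 4]])   # one float entry -> all entries become floats

print(A.dtype.kind)   # 'i' (an integer dtype)
print(B.dtype)        # float64
```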
Special matrices
The \(4\times 4\) identity matrix can be created with the curiously named function np.eye() (or sympy.eye()). This has nothing to do with eyes, but is a play on the pronunciation of \(\boldsymbol I\text{.}\) Notice that sympy creates a matrix with integer entries, but np produces a matrix with floating point entries.
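For instance (a sketch of the two calls just described):

```python
import numpy as np
import sympy

I_np = np.eye(4)      # floating point entries
I_sp = sympy.eye(4)   # exact integer entries
```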
A diagonal matrix can be created from a list of its diagonal entries. For instance,
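The original cell is not shown; a typical construction uses np.diag() (or sympy.diag()):

```python
import numpy as np
import sympy

D = np.diag([1, 2, 3])    # 3x3 matrix with 1, 2, 3 on the diagonal
E = sympy.diag(1, 2, 3)   # the sympy equivalent
```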
Reduced row echelon form
The reduced row echelon form of a matrix can be obtained using the rref() method after converting our matrix to a sympy.Matrix. For instance,
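A sketch of this workflow (the book's example matrix is not shown, so a small one is used here):

```python
import numpy as np
import sympy

A = np.array([[1, 2, 3],
              [2, 4, 8]])

# rref() returns the reduced matrix and the indices of the pivot columns
R, pivots = sympy.Matrix(A).rref()
print(R)        # Matrix([[1, 2, 0], [0, 0, 1]])
print(pivots)   # (0, 2)
```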
Vectors
NumPy arrays do not need to be 2-dimensional. A vector is defined by listing its components. Notice that the shape of v is a 1-tuple.
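For example (the book's vector is not shown, so an arbitrary one is used):

```python
import numpy as np

v = np.array([1, 2, 3])
print(v.shape)   # (3,) -- a 1-tuple, not (3, 1) or (1, 3)
```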
Addition
The + operator performs vector and matrix addition.
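For instance (illustrative vectors and matrices, not the book's own):

```python
import numpy as np

v = np.array([1, 2, 3])
w = np.array([10, 20, 30])
print(v + w)     # [11 22 33]

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(A + B)     # [[ 6  8] [10 12]]
```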
Multiplication
From a mathematical perspective, the * operator performs scalar multiplication of vectors and matrices.
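For instance (illustrative values):

```python
import numpy as np

v = np.array([1, 2, 3])
A = np.array([[1, 2], [3, 4]])

print(3 * v)   # [3 6 9]
print(2 * A)   # [[2 4] [6 8]]
```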
Computationally, numpy is using something called broadcasting, which is more general than scalar multiplication. When two numpy arrays have different shapes, numpy starts from the end of the shape tuples and compares their values. If they match, that’s good. But it is also OK if one of them is 1 (or non-existent, which amounts to nearly the same thing) and the other is not. In this case, we can imagine duplicating the array with 1 in that dimension to fill out its shape to match the other. (NumPy does not actually do this duplication, since that would be inefficient, but it is a good mental model for how the operation behaves.) Working from back to front, each axis is considered, and in the end, if the shapes are compatible, they can be treated as if they had the same shape. At that point, the operation proceeds element by element in the expanded arrays.
You can find out much more about broadcasting at numpy.org/doc/stable/user/basics.broadcasting.html.
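A small sketch of the shape-matching rule described above:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # shape (2, 3)
r = np.array([10, 20, 30])       # shape (3,)

# Comparing shapes from the right: 3 matches 3, and r's missing leading
# axis is treated as 1, so r behaves as if copied onto each row of A.
print(A * r)   # [[ 10  40  90] [ 40 100 180]]
```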
Broadcasting means that * cannot be used for matrix-vector and matrix-matrix multiplication in the linear algebra sense. Instead we use @.
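For instance (illustrative matrices):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])
v = np.array([1, 1])

print(A @ v)   # matrix-vector product: [3 7]
print(A @ B)   # matrix-matrix product: [[2 1] [4 3]]
```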
Operations on vectors
  1. The length of a vector v is found using scipy.linalg.norm().
    In fact, norm() (available in both scipy.linalg and np.linalg) can compute many different norms of both vectors and matrices.
  2. The dot product of two vectors v and w is also computed using @
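The two operations above can be sketched as follows (illustrative vectors):

```python
import numpy as np
from scipy import linalg

v = np.array([3.0, 4.0])
w = np.array([1.0, 2.0])

print(linalg.norm(v))   # length of v: 5.0
print(v @ w)            # dot product: 3*1 + 4*2 = 11.0
```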
Operations on matrices
  1. The transpose of a matrix A is obtained using np.transpose()
  2. The inverse of a matrix A is obtained using linalg.inv(A). But for serious computational work, there is almost always something better than explicitly computing an inverse this way.
  3. The determinant of A is linalg.det(A).
  4. An orthonormal basis for the null space \(\nul(A)\) is found with linalg.null_space(A).
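The operations in the list above can be sketched as follows (illustrative matrices; note that scipy.linalg names the null space function null_space):

```python
import numpy as np
from scipy import linalg

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

print(np.transpose(A))   # transpose (equivalently A.T)
print(linalg.inv(A))     # inverse; prefer linalg.solve() for solving systems
print(linalg.det(A))     # determinant: -2.0

B = np.array([[1.0, 2.0, 3.0]])   # 1x3, so its null space is 2-dimensional
N = linalg.null_space(B)          # columns form an orthonormal basis
print(N.shape)                    # (3, 2)
```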
Eigenvectors and eigenvalues
  1. The eigenvalues of a matrix A can be found with linalg.eigvals(A). The number of times that an eigenvalue appears in the list equals its algebraic multiplicity.
  2. The eigenvectors of a matrix A can be found with linalg.eig(A).
  3. If \(A\) can be diagonalized as \(A=PDP^{-1}\text{,}\) then
    provides the matrices D and P. Recall that sympy does symbolic computation. We can use evalf() to compute a numerical approximation.
  4. The characteristic polynomial of sympy.Matrix A is A.charpoly('x').
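The commands in the list above can be sketched as follows (an illustrative diagonal matrix keeps the output easy to check; the diagonalization step assumes sympy's diagonalize() method, which returns P and D with \(A=PDP^{-1}\)):

```python
import numpy as np
from scipy import linalg
import sympy

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

print(linalg.eigvals(A))      # eigenvalues (returned as complex numbers)
vals, vecs = linalg.eig(A)    # eigenvalues and eigenvectors (as columns)

M = sympy.Matrix([[2, 0], [0, 3]])
P, D = M.diagonalize()        # exact: M = P*D*P**-1
print(D)                      # Matrix([[2, 0], [0, 3]])
print(D.evalf())              # numerical approximation of the exact result
print(M.charpoly('x'))        # characteristic polynomial in x
```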
Matrix factorizations
  1. The \(LU\) factorization of a matrix
    gives matrices so that \(A = PLU\text{,}\) where \(P\) is a permutation matrix, \(L\) is lower triangular, and \(U\) is upper triangular.
  2. A singular value decomposition is obtained with
    Note that s is not a matrix but a 1-d array of singular values. If the matrix is needed, we can construct it with
  3. The \(QR\) factorization of A is obtained with linalg.qr(A).
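The three factorizations above can be sketched as follows (an illustrative matrix; each call is verified by reassembling A):

```python
import numpy as np
from scipy import linalg

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])

# LU factorization: A = P @ L @ U
P, L, U = linalg.lu(A)
print(np.allclose(P @ L @ U, A))    # True

# Singular value decomposition; s is a 1-d array of singular values
U2, s, Vt = linalg.svd(A)
S = np.diag(s)                      # rebuild the matrix (square case;
                                    # use linalg.diagsvd(s, *A.shape) otherwise)
print(np.allclose(U2 @ S @ Vt, A))  # True

# QR factorization: A = Q @ R
Q, R = linalg.qr(A)
print(np.allclose(Q @ R, A))        # True
```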