Subsection B.3 Frequently used Python commands
If some code that you write isn’t working, check whether you are missing one of the required import statements. The packages below will be used frequently; a few others will be used less often.
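The book’s exact import chunks are not reproduced here; a representative set of imports, matching the names used throughout this section, might look like the following.

```python
import numpy as np        # numerical arrays, vectors, and matrices
import sympy              # symbolic computation (exact arithmetic, rref, charpoly, ...)
from scipy import linalg  # numerical linear algebra (inv, det, eig, lu, svd, qr, ...)
```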
To make things a bit easier in this book, we will often include code chunks near the top of a section, like the ones above, to load the packages we require. In an interactive document, these will be auto-evaluated. You can tell that a cell has already been evaluated by the green output box below the code and the lack of a button to execute the code.
- Creating matrices
- There are a couple of ways to create matrices. For instance, the matrix
\begin{equation*}
\begin{bmatrix}
-2 \amp 3 \amp 0 \amp 4 \\
1 \amp -2 \amp 1 \amp -3 \\
0 \amp 2 \amp 3 \amp 0 \\
\end{bmatrix}
\end{equation*}
can be created in either of the two ways shown below. Be aware that Python can treat mathematically equivalent matrices in different ways depending on how they are entered: if any of the entries in a matrix are provided as floating point numbers, then all of the entries will be converted to floating point values, while a matrix entered entirely with integers keeps integer entries.
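The original code cells are not preserved in this extract. As a sketch, here are two ways of entering this matrix with np.array(); they also illustrate the integer versus floating point behavior described above.

```python
import numpy as np

# All entries entered as integers: the array keeps integer entries.
A = np.array([[-2, 3, 0, 4],
              [1, -2, 1, -3],
              [0, 2, 3, 0]])

# A single entry written as a floating point number converts
# every entry of the array to floating point.
B = np.array([[-2.0, 3, 0, 4],
              [1, -2, 1, -3],
              [0, 2, 3, 0]])
```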
- Special matrices
- The \(4\by 4\) identity matrix can be created with the curiously named function np.eye() (or sympy.eye()). This has nothing to do with eyes, but is a play on the pronunciation of \(\boldsymbol I\text{.}\) Notice that sympy creates a matrix with integer entries, but np produces a matrix with floating point entries. A diagonal matrix can be created from a list of its diagonal entries, as shown below.
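For instance (a sketch; the diagonal entries below are arbitrary):

```python
import numpy as np
import sympy

np.eye(4)             # 4 x 4 identity matrix with floating point entries
sympy.eye(4)          # 4 x 4 identity matrix with integer entries

np.diag([2, 5, -1])   # diagonal matrix built from a list of diagonal entries
sympy.diag(2, 5, -1)  # the sympy version takes the entries as separate arguments
```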
- Reduced row echelon form
- The reduced row echelon form of a matrix can be obtained using the rref() method after converting our matrix to a sympy.Matrix, for instance as shown below.
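A sketch reusing the matrix entered earlier:

```python
import numpy as np
import sympy

A = np.array([[-2, 3, 0, 4],
              [1, -2, 1, -3],
              [0, 2, 3, 0]])

# rref() returns the reduced row echelon form together with
# a tuple containing the indices of the pivot columns.
R, pivots = sympy.Matrix(A).rref()
```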
- Vectors
- NumPy arrays do not need to be 2-dimensional. A vector is defined by listing its components, as in the example below. Notice that the shape of v is a 1-tuple.
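A sketch with arbitrary components:

```python
import numpy as np

v = np.array([2, -1, 3])  # a vector is a 1-dimensional NumPy array
v.shape                   # (3,) -- a 1-tuple, not a pair of dimensions
```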
- Addition
- The + operator performs vector and matrix addition, as illustrated below.
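For instance (entries chosen only for illustration):

```python
import numpy as np

v = np.array([1, 2, 3])
w = np.array([4, 5, 6])
v + w                          # vector addition: array([5, 7, 9])

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
A + B                          # matrix addition, entry by entry
```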
- From a mathematical perspective, the * operator performs scalar multiplication of vectors and matrices. Computationally, NumPy is using something called broadcasting, which is more general than scalar multiplication. When two NumPy arrays have different shapes, NumPy starts from the end of the shape tuples and compares their values. If they match, that’s good. But it is also OK if one of them is 1 (or nonexistent, which amounts to nearly the same thing) and the other is not. In this case, we can imagine duplicating the array with 1 in that dimension to fill out its shape to match the other. (NumPy does not actually do this duplication, since that would be inefficient, but it is a good mental model for how the operation behaves.) Working from back to front, each axis is considered, and in the end, if the shapes are compatible, they can be treated as if they had the same shape. At that point, the operation proceeds element by element in the expanded arrays. You can find out much more about broadcasting at numpy.org/doc/stable/user/basics.broadcasting.html. Broadcasting means that * cannot be used for matrix-vector and matrix-matrix multiplication in the linear algebra sense. Instead we use @, as illustrated below.
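A sketch contrasting the two operators (entries chosen only for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([5.0, 6.0])

3 * v    # scalar multiplication: array([15., 18.])
A * v    # broadcasting: each row of A is multiplied elementwise by v
A @ v    # matrix-vector product in the linear algebra sense
A @ A    # matrix-matrix product
```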
- Operations on vectors
- The length of a vector v is found using scipy.linalg.norm(). Actually, np.linalg.norm() can compute many different norms of both vectors and matrices.
- The dot product of two vectors v and w is also computed using @. Both are shown in the example below.
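A short sketch of both operations:

```python
import numpy as np
from scipy import linalg

v = np.array([3.0, 4.0])
w = np.array([1.0, 2.0])

linalg.norm(v)     # length of v: 5.0
np.linalg.norm(v)  # the NumPy version handles many vector and matrix norms too
v @ w              # dot product: 11.0
```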
- Operations on matrices
- The transpose of a matrix A is obtained using np.transpose().
- The inverse of a matrix A is obtained using linalg.inv(A). But for serious computational work, there is almost always something better than explicitly computing an inverse this way.
- The determinant of A is linalg.det(A).
- An orthonormal basis for the null space \(\nul(A)\) is found with linalg.null_space(A). These operations are illustrated below.
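A sketch of these operations, assuming linalg refers to scipy.linalg as imported above:

```python
import numpy as np
from scipy import linalg

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

np.transpose(A)       # transpose of A (A.T is equivalent)
linalg.inv(A)         # inverse of A; rarely needed in serious numerical work
linalg.det(A)         # determinant of A: -2.0
linalg.null_space(A)  # orthonormal basis for the null space (empty here, since A is invertible)
```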
- Eigenvectors and eigenvalues
- The eigenvalues of a matrix A can be found with linalg.eigvals(A). The number of times that an eigenvalue appears in the list equals its multiplicity.
- The eigenvectors of a matrix A can be found with linalg.eig(A).
- If \(A\) can be diagonalized as \(A=PDP^{-1}\text{,}\) then the matrices D and P can be computed as in the example after this list. Recall that sympy does symbolic computation; we can use evalf() to compute a numerical approximation.
to compute a numerical approximation. - The characteristic polynomial of
sympy.Matrix
A
isA.charpoly('x')
.
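A sketch pulling these together; the diagonalization uses sympy’s diagonalize() method, which returns P and D in that order:

```python
import numpy as np
import sympy
from scipy import linalg

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

linalg.eigvals(A)             # eigenvalues, repeated according to multiplicity
evals, evecs = linalg.eig(A)  # eigenvalues and a matrix whose columns are eigenvectors

M = sympy.Matrix([[2, 1], [0, 3]])
P, D = M.diagonalize()        # M = P * D * P**(-1), computed symbolically
P.evalf()                     # numerical approximation of the symbolic result
M.charpoly('x')               # characteristic polynomial in the variable x
```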
- Matrix factorizations
- The \(LU\) factorization of a matrix \(A\) gives matrices \(P\text{,}\) \(L\text{,}\) and \(U\) so that \(A = PLU\text{,}\) where \(P\) is a permutation matrix, \(L\) is lower triangular, and \(U\) is upper triangular.
- A singular value decomposition is obtained with linalg.svd(A). Note that s is not a matrix but a 1-d array of singular values. If the matrix of singular values is needed, we can construct it from s.
- The \(QR\) factorization of \(A\) is obtained with linalg.qr(A). All three factorizations are sketched below.
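A sketch of all three factorizations using scipy.linalg; rebuilding the matrix of singular values with np.diag assumes a square matrix (for rectangular matrices, a zero matrix of the right shape must be filled in instead):

```python
import numpy as np
from scipy import linalg

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])

P, L, U = linalg.lu(A)     # A = P @ L @ U

U2, s, Vt = linalg.svd(A)  # s is a 1-d array of singular values
Sigma = np.diag(s)         # rebuild the diagonal matrix (square case); A = U2 @ Sigma @ Vt

Q, R = linalg.qr(A)        # A = Q @ R
```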