Linear Algebra with SageMath Learn math with open-source software

Section 7.2 Cramer’s Rule

Cramer’s rule is a theorem in linear algebra that provides an explicit formula for solving a system of linear equations with as many equations as unknowns, provided that the system’s coefficient matrix is non-singular.
Consider a system of \(n\) linear equations with \(n\) unknowns represented in matrix form as \(Ax = b\text{,}\) where \(A\) is the coefficient matrix, \(x\) is the column vector of unknowns, and \(b\) is the column vector of constants. According to Cramer’s rule, if \(A\) is non-singular, then the system has a unique solution given by \(x_i = \frac{\det(A_i)}{\det(A)}\text{,}\) where \(A_i\) is the matrix formed by replacing the \(i\)-th column of \(A\) with the vector \(b\text{.}\)
To illustrate Cramer’s rule with an example, let’s consider the system of equations from Subsection 4.3.1. Let’s begin by checking whether the coefficient matrix \(A\) is non-singular:
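Since the interactive Sage cells are not reproduced in this excerpt (and the actual system of Subsection 4.3.1 is not shown), here is a plain-Python sketch using a hypothetical \(3\times 3\) system as a stand-in. In a Sage session one would instead build `A = matrix(QQ, ...)` and call `A.is_singular()`.

```python
# Hypothetical stand-in for the system of Subsection 4.3.1 (not shown here):
#   2*x1 +   x2 -   x3 =  1
#     x1 + 3*x2 + 2*x3 = 13
#     x1         +  x3 =  4
A = [[2, 1, -1], [1, 3, 2], [1, 0, 1]]
b = [1, 13, 4]

def det3(m):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    (a, b_, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b_*(d*i - f*g) + c*(d*h - e*g)

# A square matrix is non-singular exactly when its determinant is nonzero.
print(det3(A) != 0)  # → True
```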
As it turns out, the matrix \(A\) is non-singular, so let’s compute its determinant:
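Continuing the plain-Python sketch with the same hypothetical matrix (in Sage this would simply be `A.det()`):

```python
A = [[2, 1, -1], [1, 3, 2], [1, 0, 1]]  # hypothetical matrix from above

def det3(m):
    # Cofactor expansion along the first row.
    (a, b_, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b_*(d*i - f*g) + c*(d*h - e*g)

print(det3(A))  # → 10
```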
Next, we compute the determinants of the matrices formed by replacing each column of \(A\) with the vector \(b\text{,}\) and then find the values of the unknowns. For the first variable \(x_1\text{,}\) we replace the first column of \(A\) with \(b\) to form \(A_1\text{,}\) compute its determinant, and then find \(x_1=\frac{\det(A_1)}{\det(A)}\text{.}\)
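For the hypothetical system above, this first step might look like the following sketch, using exact rational arithmetic via the standard-library `fractions` module:

```python
from fractions import Fraction

# The same hypothetical stand-in system as above.
A = [[2, 1, -1], [1, 3, 2], [1, 0, 1]]
b = [1, 13, 4]

def det3(m):
    # Cofactor expansion along the first row.
    (a, b_, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b_*(d*i - f*g) + c*(d*h - e*g)

# A1: A with its first column replaced by b.
A1 = [[b[i]] + A[i][1:] for i in range(3)]

x1 = Fraction(det3(A1), det3(A))
print(x1)  # → 1
```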
We repeat this process for \(x_2\) by replacing the second column:
We repeat the same process for \(x_3\) by replacing the third column as follows:
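The remaining two unknowns of the hypothetical system follow the same pattern; a small helper that replaces a column keeps the sketch compact:

```python
from fractions import Fraction

# The same hypothetical system as before.
A = [[2, 1, -1], [1, 3, 2], [1, 0, 1]]
b = [1, 13, 4]

def det3(m):
    # Cofactor expansion along the first row.
    (a, b_, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b_*(d*i - f*g) + c*(d*h - e*g)

def replace_col(M, j, v):
    # Return a copy of M with column j replaced by the vector v.
    return [[v[i] if k == j else M[i][k] for k in range(len(M))]
            for i in range(len(M))]

x2 = Fraction(det3(replace_col(A, 1, b)), det3(A))
x3 = Fraction(det3(replace_col(A, 2, b)), det3(A))
print(x2, x3)  # → 2 3
```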
In the above example, we solved the system of equations of Subsection 4.3.1 using Cramer’s rule. To verify the solution, we can substitute the values of \(x_1\text{,}\) \(x_2\text{,}\) and \(x_3\) back into the original equation \(Ax = b\text{:}\)
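For the hypothetical system used in this sketch, the solution worked out to \((x_1, x_2, x_3) = (1, 2, 3)\text{,}\) and the check is a direct matrix-vector multiplication:

```python
# Substituting the solution of the hypothetical system back into A x = b.
A = [[2, 1, -1], [1, 3, 2], [1, 0, 1]]
b = [1, 13, 4]
x = [1, 2, 3]

Ax = [sum(A[i][k] * x[k] for k in range(3)) for i in range(3)]
print(Ax == b)  # → True
```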
Note that Cramer’s rule repeats the same computation for each unknown; hence, it can be automated with a loop and packaged as a reusable function.
Let’s create a function solve_system that takes as input a coefficient matrix \(A\) and a constant vector \(b\text{,}\) and returns the solution vector \(x\) using Cramer’s rule. The function will first check if the matrix \(A\) is non-singular, and if so, it will compute the determinant of \(A\) and the determinants of the matrices formed by replacing each column of \(A\) with \(b\text{.}\) Finally, it will return \(x\text{,}\) the solution vector.
This function can then be used as follows to solve the given system of equations.
Once again, we can verify the solution by substituting the values of \(x\) back into the original equation:
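The three steps just described (defining `solve_system`, calling it, and verifying the result) might be sketched in plain Python as follows, again on the hypothetical system; in Sage one would work with `matrix(QQ, ...)` objects and their `det()` method instead:

```python
from fractions import Fraction

def det(M):
    # Determinant by Laplace (cofactor) expansion along the first row;
    # fine for the small matrices used here.
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def solve_system(A, b):
    # Solve A x = b by Cramer's rule, rejecting singular matrices.
    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular; Cramer's rule does not apply")
    n = len(A)
    return [Fraction(det([[b[i] if k == j else A[i][k] for k in range(n)]
                          for i in range(n)]), d)
            for j in range(n)]

# Usage on the hypothetical system from the earlier steps:
A = [[2, 1, -1], [1, 3, 2], [1, 0, 1]]
b = [1, 13, 4]
x = solve_system(A, b)
print(x)  # → [Fraction(1, 1), Fraction(2, 1), Fraction(3, 1)]

# Verification: substitute x back into A x = b.
Ax = [sum(A[i][k] * x[k] for k in range(3)) for i in range(3)]
print(Ax == b)  # → True
```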
Here is another example for a singular coefficient matrix where Cramer’s rule cannot be applied:
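A hypothetical singular matrix illustrates the failure mode: with linearly dependent rows the determinant vanishes, so the division in Cramer’s rule is undefined, and a function like the `solve_system` described above would reject the input.

```python
# A singular coefficient matrix: its second row is twice its first,
# so the rows are linearly dependent and the determinant is zero.
S = [[1, 2, 3],
     [2, 4, 6],
     [1, 1, 1]]

def det3(m):
    # Cofactor expansion along the first row.
    (a, b_, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b_*(d*i - f*g) + c*(d*h - e*g)

print(det3(S))  # → 0
```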

Note 7.2.1. Floating-point Equality.

Because computers represent real numbers using finite binary precision, the result of a floating-point computation is almost never exact. For this reason, testing whether two floating-point expressions are “exactly equal” will often return False even when the values should be mathematically identical. For instance, recomputing the solutions of the previous \(5\times 5\) linear-system example in \(\mathbb{R}\) would yield a solution vector with floating-point entries:
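A minimal plain-Python illustration of the underlying effect (the same behaviour appears in Sage when working with floating-point reals):

```python
# Decimal fractions rarely have exact binary representations,
# so even this simple sum is slightly off:
print(0.1 + 0.2)         # → 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # → False
```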
However, the previous equality comparison A*x == b would typically fail because of tiny rounding errors.
For numerical purposes, instead of testing for exact equality, the correct approach is to compare the difference to a small tolerance (for example, \(10^{-12}\)). If the absolute error is below this threshold, the values can be considered equal.
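In plain Python the tolerance check looks like this; the standard-library `math.isclose` offers the same idea with a relative tolerance:

```python
import math

tol = 1e-12
a = 0.1 + 0.2   # floating-point result, slightly off from 0.3
b = 0.3

# Compare the absolute error against the tolerance instead of using ==.
print(abs(a - b) < tol)  # → True

# The same comparison via the standard library:
print(math.isclose(a, b, rel_tol=1e-12))  # → True
```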