In the spirit of Ryan’s most recent post, I will discuss a fundamental lesson from numerical linear algebra: you can often speed up a computation, at no cost, simply by *not* doing part of it. In our everyday lives, we often come across theoretical expressions that involve matrix inverses stapled to vectors, such as $x = A^{-1} b$. When we proceed to code this up, it is very tempting to first compute $A^{-1}$. Resist doing this! There are several reasons why there is no point in actually forming an explicit, tangible inverse.
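As a minimal sketch of the two options in NumPy (the matrix here is a random, well-conditioned one purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)  # illustrative, well-conditioned matrix
b = rng.standard_normal(n)

# Preferred: solve the linear system A x = b directly.
x_solve = np.linalg.solve(A, b)

# Tempting but worse: form the explicit inverse, then multiply.
x_inv = np.linalg.inv(A) @ b
```

Both produce $x$, but the first never materializes $A^{-1}$.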
To start off, solving the system $A x = b$ for $x$—using, for example, an LU decomposition—is faster and more numerically stable than inverting $A$ and multiplying. Moreover, the real world often bestows us with sparse matrices. Sparsity speeds up the solution of our linear system, as it expedites operations such as matrix-vector multiplication. Matrix inverses, on the other hand, tend to be dense, and as such do not enjoy these privileges.
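A small SciPy sketch of this fill-in phenomenon, using a tridiagonal matrix (a 1-D Laplacian, chosen purely as an illustrative example): the matrix has about $3n$ nonzeros, while its inverse is completely dense.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# Sparse tridiagonal matrix (1-D Laplacian): roughly 3n nonzero entries.
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# A sparse direct solver exploits the sparsity pattern.
x = spla.spsolve(A, b)

# The explicit inverse of this sparse matrix has essentially no zeros at all.
A_inv = np.linalg.inv(A.toarray())
density = np.count_nonzero(A_inv) / n**2
```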
Secondly, as the dimension $n$ gets large, our available memory starts to become an issue. Storing the inverse in its full $\mathcal{O}(n^2)$ memory-requirement glory is not a task for the faint-hearted. We would much rather deal with matrix-vector products of the form $A b$, whose results consume only $\mathcal{O}(n)$ memory. Furthermore, even if we can’t store an $n \times n$ matrix in its entirety, in the case of a sparse matrix there is hope of storing just its nonzero elements. However, again, due to its density, we don’t have such hope for the inverse.
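Some back-of-the-envelope arithmetic makes the point, again with an illustrative tridiagonal matrix (the size is arbitrary):

```python
import scipy.sparse as sp

n = 100_000
# Tridiagonal matrix: about 3n nonzeros, stored in compressed sparse row form.
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# Actual storage for the sparse matrix: values plus index arrays.
sparse_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes

# A dense n-by-n float64 inverse would need n^2 * 8 bytes.
dense_bytes = n * n * 8

print(f"sparse A:      {sparse_bytes / 1e6:.1f} MB")  # a few MB
print(f"dense inverse: {dense_bytes / 1e9:.1f} GB")   # 80.0 GB
```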
Alright, so if we need a single matrix-vector product, it is better to solve a linear system. What if we are now interested in a number of such products? Even now, it’s still better (under a reasonable assumption) to solve a linear system. Once the system is solved for the first time via some decomposition, we have in our hands a factorization of $A$. Hence, each future product needs an order of magnitude fewer operations: $\mathcal{O}(n^2)$ for a pair of triangular solves, rather than the $\mathcal{O}(n^3)$ of factorizing from scratch. Thus, we get a good deal if we need to evaluate fewer than on the order of $n$ such products.
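With SciPy, for example, the factorization can be computed once and then reused for each new right-hand side (the matrix and the number of right-hand sides are illustrative):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
n = 400
A = rng.standard_normal((n, n)) + n * np.eye(n)  # illustrative, well-conditioned matrix

# O(n^3) work, paid once: factor A = P L U.
lu, piv = lu_factor(A)

# O(n^2) per right-hand side thereafter: just two triangular solves each.
bs = rng.standard_normal((10, n))
xs = [lu_solve((lu, piv), b) for b in bs]
```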