I don't think I ever learned this method for ODE systems; I did other things related to the matrix exponent like finding generalized eigenvectors and so forth. Is there an intuition behind using [imath](sI - A)^{-1}[/imath]? It looks a lot like [imath]A - \lambda I[/imath]. Is this some sort of [imath]s[/imath] domain eigenspace computation?
If you have studied the Laplace transform before, this will all make sense.
It all began when a bald man observed: if [imath]x = ce^{at}[/imath] is a solution to [imath]x' = ax[/imath], then [imath]\bold{X} = e^{\bold{A}t}\bold{C}[/imath] must be a solution to [imath]\bold{X}' = \bold{A}\bold{X}[/imath].
He wanted to write [imath]e^{\bold{A}t}[/imath] in a different form, so he tried to solve the following initial value problem by Laplace transform:
[imath]\bold{X}' = \bold{A}\bold{X}, \ \ \ \bold{X}(0) = \bold{I}[/imath]
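You can check numerically that the solution of this matrix initial value problem really is the matrix exponential. Here is a sketch using SciPy, with a small example matrix of my own choosing (not from the derivation above): we integrate [imath]\bold{X}' = \bold{A}\bold{X}[/imath] from [imath]\bold{X}(0) = \bold{I}[/imath] and compare against `scipy.linalg.expm`.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Example 2x2 matrix (my own choice, eigenvalues -1 and -2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

def rhs(t, x):
    # X' = A X, with X flattened into a length-4 vector for the solver
    return (A @ x.reshape(2, 2)).ravel()

# Integrate from t = 0 to t = 1 with X(0) = I
sol = solve_ivp(rhs, (0.0, 1.0), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
X1 = sol.y[:, -1].reshape(2, 2)

# The integrated solution should match e^{A t} at t = 1
print(np.max(np.abs(X1 - expm(A))))  # small numerical error
```

The agreement (up to integration tolerance) confirms that [imath]e^{\bold{A}t}[/imath] is exactly the solution whose Laplace transform we are about to compute.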
Take the Laplace transform of both sides.
[imath]s\bold{x}(s) - \bold{X}(0) = \bold{A}\bold{x}(s)[/imath]
Rearrange.
[imath](s - \bold{A})\bold{x}(s) = \bold{I}[/imath]
This arrangement is wrong because [imath]s[/imath] is a scalar and [imath]\bold{A}[/imath] is a matrix, so the subtraction is not defined. We have to multiply [imath]s[/imath] by [imath]\bold{I}[/imath] so that both terms are matrices of the same size, which applies [imath]s[/imath] to each component of [imath]\bold{x}(s)[/imath].
Therefore, the correct arrangement is:
[imath](s\bold{I} - \bold{A})\bold{x}(s) = \bold{I}[/imath]
Multiply both sides on the left by [imath](s\bold{I} - \bold{A})^{-1}[/imath].
[imath]\bold{x}(s) = (s\bold{I} - \bold{A})^{-1}\bold{I} = (s\bold{I} - \bold{A})^{-1}[/imath]
We already know, from the ansatz above with [imath]\bold{C} = \bold{X}(0) = \bold{I}[/imath], that:
[imath]\bold{X}(t) = e^{\bold{A}t}[/imath]
Then
[imath]e^{\bold{A}t} = \mathcal{L}^{-1}\{\bold{x}(s)\} = \mathcal{L}^{-1}\{(s\bold{I} - \bold{A})^{-1}\}[/imath]