I was introduced to linear algebra as an undergraduate, and I barely passed the course; at some point the concepts became too abstract for me. Still, the takeaway was that linear algebra offers shortcuts that augment regular algebra by reducing the workload. It can find useful solutions to certain problems that cannot be solved in any practical way otherwise. In other cases, linear algebra is effectively the only route to a solution: not because other methods don't exist, but because computers can't yet handle the task using regular algebra.
I remember a statement (paraphrased): the importance of linear algebra grows in direct proportion to computing power.
A large airline uses the solution of a linear system to guide certain management decisions. Such a system may contain over 10,000 equations and variables. Without linear algebra, the fastest computers available to the airline might need 17 hours to solve the system, likely too late to be of use, whereas linear-algebra techniques can produce the solution in 23 minutes. If the airline pays a third party for CPU time, linear algebra saves money in two ways: lower CPU-time expense and timely management decisions. (All numbers in this example are fictional.)
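To make the idea concrete, here is a minimal sketch in Python with NumPy. It is my own illustration, not the airline's actual software, and the size and data are made up; a system with 10,000 equations would simply mean a larger `A` and `b`.

```python
import numpy as np

# Hypothetical example: a dense linear system A x = b.
# A real scheduling model would have many more equations and
# variables, but the structure of the call is the same.
rng = np.random.default_rng(0)
n = 1000                      # stand-in for a much larger system
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Solve via LU factorization rather than computing an inverse.
x = np.linalg.solve(A, b)

# Check the residual to confirm the solution is accurate.
print(np.linalg.norm(A @ x - b))
```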
Also, some real matrices today are too big to fit in a computer's high-speed memory. Linear algebra provides ways to partition such matrices so that current hardware and software can work on the original matrix two or three blocks at a time. I found a reference to such a system, albeit from 1992. The original problem contained 837 equations involving over 12,750,000 variables, so the coefficient matrix held more than 10 billion entries, of which nearly 10 million were non-zero. After the matrix was partitioned into 51 block columns, each containing about 250,000 individual columns, a Cray supercomputer solved the problem in four minutes. Without linear algebra, the solution could not have been found (in 1992, at least).
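As a toy illustration of why partitioning and sparsity matter, the sketch below (Python with SciPy; all sizes are hypothetical stand-ins, scaled down to fit comfortably in memory) builds a very sparse matrix, compares sparse versus dense storage, and slices off one block of columns the way a block-column partition would.

```python
import numpy as np
from scipy import sparse

# Hypothetical example: a wide, very sparse matrix, loosely in the
# spirit of the 1992 problem but far smaller.
rows, cols = 837, 100_000        # the real problem had 12,750,000+ columns
density = 0.001                  # only about 0.1% of entries are non-zero

A = sparse.random(rows, cols, density=density, format="csc", random_state=0)

# Dense storage would need one 8-byte float per entry; sparse storage
# only pays for the non-zero entries plus index bookkeeping.
dense_bytes = rows * cols * 8
sparse_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
print(f"dense storage:  {dense_bytes / 1e6:.1f} MB")
print(f"sparse storage: {sparse_bytes / 1e6:.1f} MB")

# Block-column partitioning: operate on one slice of columns at a time,
# the way the original matrix was handled in blocks.
block = A[:, :10_000]
print(block.shape)
```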