It's not just computationally intensive. It's also numerically unstable. Observe this:
$$A = P D P^{-1}, \quad P = \begin{bmatrix} 1 & 1 \\ 1 & 1 + \varepsilon \end{bmatrix}, \quad D = \begin{bmatrix} 0 & 0 \\ 0 & \varepsilon \end{bmatrix}.$$
For $\varepsilon = 10^{-9}$, for example, the Mathematica code
\[Epsilon] = 10^-9;
P = {{1, 1}, {1, 1 + \[Epsilon]}} // N;
mD = DiagonalMatrix[{0, \[Epsilon]}] // N;
Print["A = ", (A = P.mD.Inverse[P]) // MatrixForm]
Print["A^17 = ", (P1 = MatrixPower[A, 17]) // MatrixForm]
Print["P D^17 P^-1 = ", (P2 = P.MatrixPower[mD, 17].Inverse[P]) // MatrixForm]
Print["A^17 - P D^17 P^-1 = ", P1 - P2 // MatrixForm]
will output
A = (
-1. 1.
-1. 1.
)
A^17 = (
0. 0.
0. 0.
)
P D^17 P^-1 = (
-1.*10^-144 1.*10^-144
-1.*10^-144 1.*10^-144
)
A^17 - P D^17 P^-1 = (
1.*10^-144 -1.*10^-144
1.*10^-144 -1.*10^-144
)
This is, of course, a fabricated example, but with larger matrices such errors creep in much more easily.
Also, unless you know your eigenvalues to be real, the above approach (via Jordan decomposition) would involve complex arithmetic.
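To illustrate that point with a small NumPy sketch (the rotation matrix here is a made-up example, not from the computation above): even a perfectly tame real matrix can force the detour through complex arithmetic.

```python
import numpy as np

# A real 2x2 rotation by 90 degrees: its eigenvalues are +-i.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

w, V = np.linalg.eig(A)       # w and V come out complex
A5 = V @ np.diag(w**5) @ np.linalg.inv(V)   # complex intermediates throughout

# The result agrees with the direct real computation only after the tiny
# imaginary roundoff parts are discarded.
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))
```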
Unless you know $A$ to have some nice properties (e.g., well-separated eigenvalues), it's better to use the Schur decomposition. If $A$ is real, this can be done in real arithmetic (at the small price of a quasi-triangular factor instead of a triangular one), and there are quite nice ways to compute it, for example the QR algorithm.
The drawback: you still need to implement the matrix power of a (quasi)triangular matrix.
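A sketch of the Schur route in NumPy/SciPy terms (the matrix is a made-up example; `scipy.linalg.schur` computes the real Schur form, and `np.linalg.matrix_power` merely stands in for a dedicated quasi-triangular power routine):

```python
import numpy as np
from scipy.linalg import schur

# A real matrix with a complex eigenvalue pair, so the real Schur
# form is only quasi-triangular (it has a 2x2 diagonal block).
A = np.array([[0.0, -1.0, 2.0],
              [1.0,  0.0, 1.0],
              [0.0,  0.0, 0.5]])

T, Z = schur(A, output='real')   # A = Z @ T @ Z.T, with Z orthogonal

# Power via the Schur factors; a real implementation would exploit
# the (quasi)triangular structure of T instead of a generic power.
A17 = Z @ np.linalg.matrix_power(T, 17) @ Z.T

print(np.allclose(A17, np.linalg.matrix_power(A, 17)))
```

Since $Z$ is orthogonal, the similarity transform is perfectly conditioned, which is exactly what the eigenvector matrix $P$ in the example above is not.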