So, this question asks how useful computational tricks are to mathematics research, and several people responded along the lines of "well, computational tricks are often super cool theorems in disguise." So what "computational tricks," "easy theorems," or "fun patterns" turn out to be important theorems?
The ideal answer to this question would be a topic that can be understood at two different levels separated by a great gulf of sophistication, although the simpler example doesn't have to be "trivial."
For example, the unique prime factorization theorem is often proven from the division algorithm via Bézout's lemma and the fact that $p \mid ab \implies p \mid a$ or $p \mid b$. A virtually identical proof establishes that every Euclidean domain is a unique factorization domain, and the problem as a whole, once properly abstracted, gives rise to the notion of ideals and a significant amount of ring theory.
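To make the first step concrete, here is a minimal sketch (in Python; the function and variable names are just illustrative, not from any particular source) of the extended Euclidean algorithm, which witnesses Bézout's lemma by actually producing the coefficients:

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    # Returns (g, x, y) with g = gcd(a, b) and a*x + b*y = g.
    # Loop invariants: old_r = a*old_x + b*old_y and r = a*x + b*y.
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(240, 46)
assert g == 2 and 240 * x + 46 * y == g  # Bezout: 240*(-9) + 46*47 = 2
```

From here, Euclid's lemma is one line: if $p \nmid a$, then $\gcd(p, a) = 1 = px + ay$ for some $x, y$, and multiplying through by $b$ shows that $p \mid ab$ forces $p \mid b$.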
For another example, it's well known that finite-dimensional vector spaces are uniquely determined by their base field and their dimension. However, a far more general theorem in model theory basically lets you say that, given a class of objects with a dimension-like invariant situated in the right manner, every object is uniquely determined by a minimal example together with its "dimension." I don't quite remember the precise statement of this theorem, so if someone wants to explain in detail how vector spaces are a particular example of a theory that is $\kappa$-categorical for every uncountable cardinal $\kappa$, that would be great.
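As best I can reconstruct it, the statement is something like the following (a hedged sketch, so corrections are welcome):

> **Theorem (sketch).** Let $K$ be a countable field and $T$ the theory of infinite $K$-vector spaces. Any two models of $T$ of the same uncountable cardinality $\kappa$ have bases of cardinality $\kappa$ and are therefore isomorphic, so $T$ is $\kappa$-categorical for every uncountable $\kappa$; by the Łoś–Vaught test, $T$ is complete.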
From the comments: In a certain sense, I'm interested in the inverse of this Math Overflow post. Instead of deep mathematics that produces horribly complicated proofs of simple ideas, I want simple ideas that contain within them, or generalize to, mathematics of startling depth.
