So, this question asks how useful computational tricks are to mathematics research, and several people's response was "well, computational tricks are often super cool theorems in disguise." So what "computational tricks," "easy theorems," or "fun patterns" turn out to be important theorems?

The ideal answer to this question would be a topic that can be understood at two different levels that have a great gulf in terms of sophistication between them, although the simplistic example doesn't have to be "trivial."

For example, the unique prime factorization theorem is often proven from the division algorithm through Bézout's lemma and the fact that $p\mid ab$ implies $p\mid a$ or $p\mid b$. A virtually identical proof establishes that every Euclidean domain is a unique factorization domain, and the problem as a whole - once properly abstracted - gives rise to the notion of ideals and a significant amount of ring theory.
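The computational heart of that proof is the extended Euclidean algorithm, which produces the Bézout coefficients directly from repeated division. A minimal sketch (the function name is my own choice):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g (Bezout's lemma)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Bezout coefficients for 240 and 46, whose gcd is 2
g, x, y = extended_gcd(240, 46)
assert g == 2 and 240 * x + 46 * y == 2
```

The same few lines work verbatim in any Euclidean domain, because the only operation used is division with remainder.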

For another example, it's well known that finite dimensional vector spaces are uniquely determined by their base field and their dimension. However, a far more general theorem in model theory basically lets you say: given a set of objects with a dimension-like parameter that are situated in the right manner, every object of finite "dimension" is uniquely determined by its minimal example and the "dimension." I don't quite remember the precise statement of this theorem, so if someone wants to explain in detail how vector spaces are a particular example of $\kappa$-categorical theories for every uncountable $\kappa$, that would be great.

From the comments: In a certain sense I'm interested in the inverse of the question in this Math Overflow post. Instead of deep mathematics that produces horribly complicated proofs of simple ideas, I want simple ideas that contain within them, or generalize to, mathematics of startling depth.

This question is similar to mathoverflow.net/questions/42512/… . – Oscar Cunningham yesterday

My favorite example I heard once: a question from a multivariable calculus textbook, but this guy gives a solution using jet bundles. – GEdgar yesterday

@OscarCunningham In a certain sense, I'm asking about the inverse idea: easy and everyday theorems that are secretly reflective of deep mathematics, rather than deep mathematics appearing and flattening simple problems in needlessly complicated ways. – Stella Biderman yesterday

Fundamental theorem of algebra? – Simply Beautiful Art yesterday
16 Answers

In school they teach us that

$$\int\frac 1x\;\mathrm dx=\log\left|x\right|+C$$

But as Tom Leinster points out, this is an incomplete solution. The function $x\mapsto 1/x$ has more antiderivatives than just the ones of the above form. This is because the constant $C$ could be different on the positive and negative portions of the axis. So really we should write:

$$\int\frac 1x\;\mathrm dx=\log\left|x\right|+C\cdot1_{x>0}+D\cdot1_{x<0}$$

where $1_{x>0}$ and $1_{x<0}$ are the indicator functions for the positive and negative reals.

This means that the space of antiderivatives of the function $x\mapsto 1/x$ is two dimensional. Really what we have done is to calculate the zeroth de Rham cohomology of the manifold $\mathbb R-\{0\}$ (the domain on which $x\mapsto 1/x$ is defined). The fact that $H^0_{dR}\left(\mathbb R-\{0\}\right)=\mathbb R^2$ results from the fact that $\mathbb R-\{0\}$ has two connected components.
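A quick numerical sanity check (a sketch, with helper names of my own choosing): pick different constants on the two components and confirm the derivative is still $1/x$ everywhere on the domain.

```python
import math

def F(x, C=5.0, D=-3.0):
    # log|x| with a different constant on each component of R \ {0}
    return math.log(abs(x)) + (C if x > 0 else D)

def deriv(f, x, h=1e-6):
    """Central-difference numerical derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

# F'(x) = 1/x on both components, regardless of C and D
for x in [-2.0, -0.5, 0.5, 2.0]:
    assert abs(deriv(F, x) - 1.0 / x) < 1e-5
```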

You just have to know what the indefinite integral really stands for. After all it doesn't make sense to compute $\int_{-1}^1 (1/x) dx$... – Thompson yesterday
@Thompson Sure. But note that the answer $\log\left|x\right|+C$ that I was taught in school is never correct. Either you are doing a definite integral, in which case you have to stick to the positives or the negatives and the answer is $\log x-\log a$ or $\log -x-\log -a$, or you want an antiderivative, in which case $\log\left|x\right|$ is fine, or you want all the antiderivatives, in which case the answer is $\log\left|x\right|+C\cdot1_{x>0}+D\cdot1_{x<0}$. No reasonable question has the answer $\log\left|x\right|+C$. – Oscar Cunningham yesterday
Yep. And the official solution to the 2015 Math Methods Examination 1 gets this wrong; see Question 2. I think it's pretty sad that the official solution to an official exam that thousands of students take is simply mistaken. This is what we get for requiring our math teachers to study teaching when really we should require them to study math. – goblin yesterday
Every calculus book I've read makes it a specific point to say that by convention they will only ever consider one-sided antiderivatives of $1/x$ (they will never have a situation where sometimes you want one, sometimes the other), which makes the specification of a second constant an error. The function 1/x in this context is implicitly assumed to have domain in $x<0$ (exclusively) or $x>0$; the domain is never split into two pieces. – zibadawa timmy 14 hours ago
Every calculus book I've read explicitly mentions the de Rham cohomology! (I've never read any calculus books.) – Oscar Cunningham 12 hours ago

I'm not sure if this answer really fits the question. But the nice question prompted me to write down some thoughts I've been mulling for a while.

I think the simple distributive law is essentially deep mathematics that comes up early in school.

I hang out in K-3 classrooms these days. I'm struck by how often understanding a kid's problem turns out to hinge on showing how the distributive law applies. For example to explain $20+30=50$ (sometimes necessary) - you start with "2 apples + 3 apples = 5 apples" and then $$ 20 + 30 = 2 \text{ tens} + 3 \text{ tens} = (2+3)\text{ tens} = 5 \text{ tens} = 50. $$ So the distributive law is behind positional notation, and the idea that you "can't add apples to oranges" (unless you generalize to "fruits"). You even get to discuss a little etymology: "fifty" was literally once "five tens".
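The "tens" argument is a one-line computation once the distributive law is made explicit; a minimal sketch in Python (the helper name is mine):

```python
def from_digits(digits, base=10):
    """Interpret a digit list [d0, d1, ...] as d0 + d1*base + d2*base**2 + ..."""
    return sum(d * base**i for i, d in enumerate(digits))

# 20 + 30 = 2 tens + 3 tens = (2 + 3) tens = 50, by distributivity of * over +
assert 2 * 10 + 3 * 10 == (2 + 3) * 10 == 50

# the same fact in terms of positional notation: digits add place by place
assert from_digits([0, 2]) + from_digits([0, 3]) == from_digits([0, 5])
```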

The distributive law is behind lots of algebra exercises in multiplying and factoring. If it were more explicit, I think kids would understand FOIL rather than just memorizing the rule. I think they'd also enjoy seeing how Euclid demonstrated it as a computation of areas. (I should find the proposition in the Elements and link it here.)

Later on you wish they'd stop thinking everything distributes, leading to algebra errors with square roots (and squares), logarithms (and powers).

All of this before you study linear transformations, abstract algebra, rings, and ring-like structures where you explore the consequences when distributivity fails.

This is related to the fact that place-value arithmetic is actually a specific instance of polynomial arithmetic. If I know that $11^2=121$ then I know that $(x+1)^2=x^2+2x+1$. Of course, in high school this is never explained to anyone because that would be unreasonable. I remember having an argument with a student when they insisted that they understood long division but not polynomial long division and I refused to teach them a technique for "polynomial long division" and instead started talking about the nature of the symbols we use to represent numbers. – Stella Biderman yesterday
+1 Personally, I strongly dislike "FOIL", as once learned, the majority of students stop making any effort to understand how to multiply sums by distributing, and thus are at a loss about what to do with more complex problems. – Paul Sinclair yesterday
I just sit down the 8th grader and ask him what's $117 \times 277 - 116 \times 277$. A surprising number will compute it the long way. I don't even point it out to them; they often still don't see it. Then I give them progressively bigger numbers, like $13754 \times 347 - 13654 \times 347$ (and they get annoyed at being asked to do this without a calculator) until they suddenly get it. Then we go from there to trickier problems, like $97 \times 103$ without a calculator, then $498 \times 502$, and so on. – Wildcard yesterday
For those who, like me, wonder what FOIL is: en.wikipedia.org/wiki/FOIL_method – Oliphaunt yesterday
@Wildcard My saddest experience akin to yours occurred in a sophomore number theory class. I pointed out that it was easy to factor $2491 = 2500-9$ since it was a difference of squares. One student said "I didn't know $a^2-b^2 = (a-b)(a+b)$ works for numbers too." – Ethan Bolker 20 hours ago

Let's get the obvious example out of the way - almost all representation theorems are shadows of the Yoneda lemma. In particular, all of the following facts, some of which are elementary, follow from the (enriched) Yoneda lemma.

  • That every group is isomorphic to a subgroup of a permutation group. (Cayley's theorem)
  • That every partially ordered set embeds into some power set ordered by inclusion.
  • That every graph is the intersection graph of some sets.
  • That every ring has a faithful module.
  • That for every proposition or truth value $p$ we have $p\Rightarrow \top$.

School arithmetic is a particular case of cohomology: carrying is a cocycle. Reference: A Cohomological Viewpoint on Elementary School Arithmetic by Daniel C. Isaksen.


Everyone knows: There are even numbers and odd numbers. And there are rules when doing arithmetic with them: Even plus even is even, as is odd plus odd. Even plus odd gives odd. Also, odd times odd is odd, even times odd is even, as is even times even.

Of course when saying this in school, it is considered an abbreviation of "an even number plus an even number is an even number" etc. But those formulations make sense on their own, and are just a special case of a more general structure, the ring of integers modulo $n$, which is even a field if $n$ is prime. Even and odd are just the integers modulo $2$ (and as $2$ is prime, even and odd actually form a field). The set of even numbers and the set of odd numbers are the congruence classes modulo $2$.
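In code, the parity rules are literally arithmetic in $\mathbb Z/2\mathbb Z$; a small sketch (the names are mine):

```python
# the ring Z/2Z, with 0 playing "even" and 1 playing "odd"
EVEN, ODD = 0, 1

def add(a, b):
    return (a + b) % 2

def mul(a, b):
    return (a * b) % 2

assert add(EVEN, EVEN) == EVEN  # even + even = even
assert add(ODD, ODD) == EVEN    # odd + odd = even
assert add(EVEN, ODD) == ODD    # even + odd = odd
assert mul(ODD, ODD) == ODD     # odd * odd = odd
assert mul(EVEN, ODD) == EVEN   # even * odd = even
```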

But there's more to it: The concept generalises from numbers to more general rings. For example it generalizes to polynomials. And then one way to define the complex numbers is to take the real polynomials modulo $x^2+1$.
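The "real polynomials modulo $x^2+1$" construction can be checked directly: represent $a+bx$ as a pair and reduce $x^2$ to $-1$, and complex multiplication falls out. A sketch, with a helper name of my own:

```python
def mul_mod_x2_plus_1(p, q):
    """Multiply p = (a, b) ~ a + b*x and q = (c, d) ~ c + d*x modulo x^2 + 1."""
    a, b = p
    c, d = q
    # (a + b x)(c + d x) = ac + (ad + bc) x + bd x^2, and x^2 = -1 mod x^2 + 1
    return (a * c - b * d, a * d + b * c)

# x * x = -1: the residue class of x behaves exactly like i
assert mul_mod_x2_plus_1((0, 1), (0, 1)) == (-1, 0)

# matches complex multiplication: (1 + 2i)(3 + 4i) = -5 + 10i
assert mul_mod_x2_plus_1((1, 2), (3, 4)) == (-5, 10)
```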

But the concept of congruence can be defined much more generally. In all above examples, congruence classes are equivalence classes under the specific equivalence relation $a\equiv b \pmod n$ iff $n$ divides $a-b$. But there is no need to have the equivalence relation defined this way; one can use any equivalence relation that's compatible with the structure one considers.

This concept of congruence can for example be used to define the tensor product from the free product of vector spaces, and the exterior and symmetric algebras from the tensor product. It also, in the form of quotient groups, is an important concept in group theory.

But you can also go in a different direction: Given a prime $p$, an integer $k$ is completely determined by the sequence of its congruence classes modulo $p$, modulo $p^2$, modulo $p^3$ etc., but not all consistent sequences correspond to an integer. It is a natural question whether one can make sense of the other sequences, and indeed one can; the result is the $p$-adic integers, which can then be extended to the field of $p$-adic numbers.
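The sequences in question are easy to compute; a sketch (the helper name is mine) showing the tower of residues and the consistency condition that defines a $p$-adic integer:

```python
def residue_tower(k, p, depth):
    """Successive congruence classes of k modulo p, p^2, ..., p^depth."""
    return [k % p**i for i in range(1, depth + 1)]

# the 7-adic picture of -1: residues p^i - 1, a perfectly consistent sequence
assert residue_tower(-1, 7, 4) == [6, 48, 342, 2400]

# consistency: each residue reduces to the previous one modulo the smaller power
tower = residue_tower(10**6, 7, 5)
assert all(tower[i + 1] % 7**(i + 1) == tower[i] for i in range(4))
```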

Riffing off of the even/odd numbers, then congruence modulo a polynomial, I've always meant to sit down and figure out if there's some nice algebraic stuff happening with even and odd functions. I guess since the even/oddness really gets at the degrees of monomial terms, they behave nicely when we multiply, not when we add (and so have more in common with the $(\{\pm 1\}, \cdot)$ version of the group with two elements, rather than the $(\{0, 1\}, +_{\text{mod }2})$ version). – pjs36 19 hours ago
Even and odd functions are subspaces of the vector space of all functions on $\Bbb R$. In fact, each is the other's orthogonal complement. (I think this means that quotienting out by the even functions gives the odd functions and vice versa.) – Akiva Weinberger 10 hours ago

An easy-to-state theorem is quadratic reciprocity from elementary number theory. However, it reflects deep mathematics: reciprocity is a very deep principle within number theory and mathematics. There is a nice article by Richard Taylor on Reciprocity Laws and Density Theorems, where he explains what the related ideas of reciprocity laws (such as quadratic reciprocity and the Shimura-Taniyama conjecture) and of density theorems (such as Dirichlet's theorem and the Sato-Tate conjecture) are.

Indeed, I learned this theorem in the context of elementary number theory in high school, and about halfway through my first algebraic number theory course this connection struck me suddenly. – Stella Biderman yesterday
Relatedly, ramification is an abstraction of some pretty straight forward elementary number theory. Given how algebraic number theory evolved, historically speaking, it seems like a place rife with examples. – Stella Biderman yesterday

The fundamental theorem of calculus is familiar to many: $\int_a^bf'(x)dx=f(b)-f(a)$ for suitable functions $f\colon[a,b]\to\mathbb R$. Here are some ideas stemming from it:

  • The usual fundamental theorem of calculus is very one-dimensional. How might one generalize that to several variables? There are different kinds of derivatives (gradients, curls, divergences and whatnot), but how do they all fit in? One natural generalization is Stokes' theorem for differential forms, which indeed contains the familiar theorem (and several higher dimensional results) as a special case.

  • The fundamental theorem of calculus implies that if the derivative of a nice function $\mathbb R\to\mathbb R$ vanishes, the function has to be constant. If the derivative is small (in absolute value), the function is almost constant. In some sense, it means that you can control the amount of change in the function by its derivative. This might not sound surprising, given the definition of a derivative, but certain generalizations of this idea are immensely useful in analysis. Perhaps the best known result of this kind is the Poincaré inequality, and it is indispensable in the study of partial differential equations.

  • Consider a function $f\colon M\to\mathbb R$ on a Riemannian manifold. Its differential $\alpha=df$ is a one-form, which satisfies $\int_\gamma\alpha=\gamma(b)-\gamma(a)$ for any geodesic $\gamma\colon[a,b]\to M$. Proving this is nothing but the good old one-dimensional theorem applied along the geodesic. If $M$ is a Riemannian manifold with boundary (simple example: closed ball in Euclidean space) and $f\colon M\to\mathbb R$ vanishes at the boundary, then $df$ integrates to zero over every maximal geodesic. You can ask the reverse question1: If a one-form $\alpha$ on $M$ integrates to zero over all maximal geodesics, is there necessarily a function $f\colon M\to\mathbb R$ vanishing at the boundary so that $\alpha=df$? This turns out to be true in some cases, for example when the manifold is "simple". (This is a not-so-simple technical condition that I will not discuss here. The Euclidean ball is simple.) You can also ask similar questions for symmetric covariant tensor fields of higher order. Questions of this kind have, perhaps surprisingly, applications in real-world indirect measurement problems. Problems of this kind are known as tensor tomography, and I refer you to this review for details.


1 Asking reverse questions of certain kinds is its own field of mathematics, known as inverse problems. Tensor tomography is only one of many kinds of inverse problems one could study, but surprisingly many are related to some version of it.
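The one-dimensional theorem underlying all of the above is easy to check numerically; a minimal sketch using a midpoint rule (the helper name is mine):

```python
import math

def integrate(fprime, a, b, n=100000):
    """Midpoint-rule approximation of the integral of fprime over [a, b]."""
    h = (b - a) / n
    return sum(fprime(a + (i + 0.5) * h) for i in range(n)) * h

# fundamental theorem of calculus: integral of f' over [a, b] = f(b) - f(a)
f, fprime = math.sin, math.cos
a, b = 0.3, 2.1
assert abs(integrate(fprime, a, b) - (f(b) - f(a))) < 1e-8
```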


The chain rule in calculus is pretty intuitive to students learning it for the first time. "If you get 3 y per x, and 4 z per y, how many z per x?" $$\frac{dz}{dy}\frac{dy}{dx} = (4)(3) = 12 = \frac{dz}{dx}$$ But the chain rule and its extensions and related theorems are pretty fundamental to all of calculus.
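The "3 y per x, 4 z per y" picture can be verified with finite differences; a small sketch (helper names mine):

```python
import math

def deriv(f, x, h=1e-6):
    """Central-difference numerical derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

y = lambda x: 3 * x         # dy/dx = 3 ("3 y per x")
z = lambda t: 4 * t         # dz/dy = 4 ("4 z per y")
z_of_x = lambda x: z(y(x))  # so dz/dx should be 12

assert abs(deriv(z_of_x, 1.7) - 12) < 1e-6

# the rule also holds for nonlinear compositions, e.g. sin(x)^2
g = lambda x: math.sin(x) ** 2
assert abs(deriv(g, 0.5) - 2 * math.sin(0.5) * math.cos(0.5)) < 1e-6
```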

I also think that a lot of probability theory people can intuitively reason out when given very concrete problems, but the underlying math necessary to make rigorous what is going on is amazingly deep. Results about "probability" predate measure theory, so it's clear that the difficult rigor lagged behind the simple intuition. "What are the odds?" a little kid intuitively asks about an unlikely situation... "What are odds?" asks a mathematician who dedicates his life to laying groundwork for measure theory.


Schur's lemma (in its various incarnations) is my go-to example for this sort of question. It is quite simple to prove - Serre does it in a matter of two short paragraphs in ''Linear Representations of Finite Groups'' - yet is the backbone for many foundational results in basic representation theory, including the usual orthogonality relations for characters.

It is also a very useful result in the setting of basic noncommutative algebra, where it is similarly simple to prove (Lam does it in two lines in ''A First Course in Noncommutative Rings''!), and has a host of interesting and important consequences. For instance, in ''A First Course in Noncommutative Rings'', Lam uses it in his proof of the Artin-Wedderburn classification of left semisimple rings, a major result in basic noncommutative ring theory.

I should add that Wikipedia notes that Schur's lemma has generalizations to Lie Groups and Lie Algebras, though I am less familiar with these results.


A planimeter is a rather simple mechanical computer. You can call its job a "computational trick." The theorem is as simple as:

The area of the shape is proportional to the number of turns through which the measuring wheel rotates.

Still the explanation of why it works starts with

The operation of a linear planimeter can be justified by applying Green's theorem onto the components of the vector field $N$ […]

and then it gets deeper.
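The planimeter's theorem is the $P=-y/2$, $Q=x/2$ case of Green's theorem, i.e. area $=\tfrac12\oint(x\,\mathrm dy - y\,\mathrm dx)$. A discrete sketch over a polygonal path (the function name is mine):

```python
import math

def area_by_line_integral(path):
    """Signed area enclosed by a closed polyline, via Green's theorem:
    area = (1/2) * closed line integral of (x dy - y dx)."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:] + path[:1]):
        total += x0 * y1 - x1 * y0
    return total / 2

# unit square, traversed counterclockwise
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert abs(area_by_line_integral(square) - 1.0) < 1e-12

# a fine polygonal approximation of the unit circle gives area close to pi
n = 100000
circle = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
          for k in range(n)]
assert abs(area_by_line_integral(circle) - math.pi) < 1e-6
```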


The Brouwer fixed point theorem is highly nontrivial, but the 1D case is an easy consequence of Bolzano's theorem (the intermediate value theorem).


If you allow conjectures, then I'm gonna throw the Collatz Conjecture into the mix:

[xkcd comic on the Collatz conjecture]

A problem simple enough to describe to just about anyone, but, as Paul Erdős said, "mathematics is simply not ready for such problems."
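The map itself is trivial to implement, which is part of the charm; a sketch (note the loop terminates for every input only if the conjecture is true):

```python
def collatz_steps(n):
    """Number of steps for n to reach 1 under the 3n+1 map (assumes it halts!)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

assert collatz_steps(6) == 8     # 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
assert collatz_steps(27) == 111  # famously long orbit for such a small start
```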

+1. I've watched fifth graders stumble on the Collatz conjecture, do lots of ariithmetic and just "know" that it's true, even if they're not sure they've solved it. This is an instance of a generic problem that might be the best general answer to the OPs general question: playing with easy/elementary numerical examples can suggest fiendishly difficult problems. Goldbach? Fermat? Edit that into your answer? – Ethan Bolker 19 hours ago
@EthanBolker: Those who think it is true based on their measly evidence should go and learn some logic, besides checking out the big list of conjectures that have extremely large counter-examples. – user21820 19 hours ago
@user21820 I think the fifth graders' problems are as much psychological as logical. It's hard to grasp the fact that any finite initial segment of the integers is essentially 0% of them all. You can use this discussion to distinguish between proof and pattern. (The first place for that is often proving odd + odd = even abstractly, not just by example.) Thanks for the link. – Ethan Bolker 16 hours ago
Really? There is still no one who has bothered to write up a formal proof? – g------ 9 hours ago
@g------ It's not a matter of bothering. I don't think anyone knows how to prove it. The problem is so famous that it's safe to assume a proof is published if one is found. – Joonas Ilmavirta 5 hours ago

Thinking about the words that the OP wrote, "simple ideas that contain within them, or generalize to, mathematics of startling depth", the special case of Euler's formula known as Euler's identity comes to mind. It is indeed (excerpt from Wikipedia) "often cited as an example of deep mathematical beauty".

$$e^{i \pi}+1=0$$

A short and simple formulation, but the result rests on the development of several fields: the study of the periodicity of the trigonometric functions, complex logarithms, and series expansions of the exponential and trigonometric functions by Bernoulli, Euler, and others.
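The identity is easy to confirm to machine precision, a nice contrast with the depth behind it; a quick check:

```python
import cmath
import math

# e^{i*pi} + 1 is zero up to floating-point error
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12

# it is the theta = pi case of Euler's formula e^{i*theta} = cos(theta) + i*sin(theta)
theta = 0.73
assert abs(cmath.exp(1j * theta) - complex(math.cos(theta), math.sin(theta))) < 1e-12
```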


The case $n = 4$ of Fermat's Last Theorem can be proved by elementary means. But the proof of the general case

[...] stimulated the development of algebraic number theory in the 19th century and the proof of the modularity theorem in the 20th century.

$\nexists BAC\iff\pi=ABC+BCA$

The mathematics behind Euclid's parallel postulate is so profound that it took two thousand years for us to deduce that it is not, in fact, self-evident. The consequences of this fact are fundamental to our laws of geometry, and the fact that it is not self-evident suggested that other geometries, such as those underlying special and general relativity, might be required to understand the Universe, 2,000 years before the invention of Newtonian mechanics.


Take $\sin$ and $\cos$. At first you define them geometrically. You draw triangles and you can find formulas for $\sin(\frac \alpha 2)$, $ \cos(\beta + \gamma)$, $\frac {{\rm d} \sin (\alpha)} {{\rm d} \alpha}$, etc.

And then you learn and understand the concept of ${\rm e}^{i x}$, you can express $\sin(x)$ and $\cos(x)$ with it. Suddenly all those triangle-based formulas hook up to algebra and you can derive them relatively easily without drawing triangles.
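For instance, the angle-addition formulas fall out of $e^{i(a+b)}=e^{ia}e^{ib}$ with no triangles in sight; a small numerical sketch (the helper name is mine):

```python
import cmath

def cos_sin(x):
    """Read cos(x) and sin(x) off the real and imaginary parts of e^{ix}."""
    z = cmath.exp(1j * x)
    return z.real, z.imag

a, b = 0.7, 1.1
ca, sa = cos_sin(a)
cb, sb = cos_sin(b)
cab, sab = cos_sin(a + b)

# multiplying e^{ia} * e^{ib} and comparing parts gives the addition formulas
assert abs(cab - (ca * cb - sa * sb)) < 1e-12  # cos(a+b) = cos a cos b - sin a sin b
assert abs(sab - (sa * cb + ca * sb)) < 1e-12  # sin(a+b) = sin a cos b + cos a sin b
```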

