What is so wrong with thinking of real numbers as infinite decimals?

One of the early objectives of almost any university mathematics course is to teach people to stop thinking of the real numbers as infinite decimals and to regard them instead as elements of the unique complete ordered field, which can be shown to exist by means of Dedekind cuts or Cauchy sequences of rationals.

Indeed, many of the traditional arguments of analysis become more intuitive when one thinks of real numbers as infinite decimals, even if they are less neat. Neatness is of course a great advantage, and I do not wish to suggest that universities should change the way they teach the real numbers. However, isn't it good to see how the conventional treatment is connected to, and grows out of, more `naive' ideas?

I can think of lots of university mathematics courses that don't have that objective... – Robert Israel 7 hours ago
In any case, the uniqueness that the question asserts is not so simple... – paul garrett 7 hours ago
A real number is...a real number. A decimal representation is only a representation of a real number. You are confusing a thing with its representation. – Paul 6 hours ago
@Paul: I disagree. A real number isn't...a real number until you define what you mean by "real number". You can define real numbers as Dedekind cuts, or as equivalence classes of Cauchy sequences, or as infinite sequences of decimal digits. It can be shown that each of these three definitions leads to a complete ordered field, and that they are therefore equivalent; but they are different definitions. – TonyK 5 hours ago
@TonyK How do you know that decimal form is the most natural for normal humans? Apart from seeing it written, I don't think I've ever observed an infinite decimal I recognized, or could see in nature intuitively.... – floorcat 3 hours ago

There is nothing wrong with sometimes thinking of real numbers as infinite decimals, and indeed this perspective is useful in some contexts. There are a few reasons that introductory real analysis courses tend to push students to not think of real numbers this way.

First, students are typically already familiar with this perspective on real numbers, but are not familiar with other perspectives that are more useful and natural most of the time in advanced mathematics. So it is not especially necessary to teach students about real numbers as infinite decimals, but it is necessary to teach other perspectives, and to teach students to not exclusively (or even primarily) think about real numbers as infinite decimals.

Second, a major goal of many such courses is to rigorously develop the theory of the real numbers from "first principles" (e.g., in a naive set theory framework). Students who are familiar with real numbers as infinite decimals are almost never familiar with them in a truly rigorous way. For instance, do they really know how to rigorously define how to multiply two infinite decimals? Almost certainly not, and most of them would have a lot of difficulty doing so even if they tried to. It is possible to give a completely rigorous construction of the real numbers as infinite decimals, but it is not particularly easy or enlightening to do so (in comparison with other constructions of the real numbers). In any case, if you are constructing the real numbers rigorously from scratch, that means you need to "forget" everything you already "knew" about real numbers. So students need to be told to not assume facts about real numbers based on whatever naive understanding they might have had previously.
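One way to see the difficulty with multiplication: the natural rigorous definition multiplies truncations and passes to the limit. Here is a minimal sketch of that idea in Python (the helper `trunc` and the sample values are mine, not from any particular construction; exact rationals stand in for the decimals):

```python
from fractions import Fraction

def trunc(x: Fraction, n: int) -> Fraction:
    """Keep only the first n decimal digits of x (rounding toward zero)."""
    s = 10 ** n
    return Fraction(int(x * s), s)

# For x, y in [0, 1) the error satisfies
# |x*y - trunc(x,n)*trunc(y,n)| < 2 * 10**-n, so the products converge.
x, y = Fraction(1, 3), Fraction(2, 7)
for n in (2, 4, 8):
    print(n, float(trunc(x, n) * trunc(y, n)))   # approaches 2/21 = 0.0952...
```

Even this leaves real work: one must still prove the limit exists, is independent of the representation chosen, and makes the field axioms hold.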

Third, it is misleading to describe infinite decimals as the basic "naive" understanding of the real numbers. It is unfortunately often the main understanding that is taught in grade school, but this emphasis obscures the fact that ultimately the motivation for real numbers is the intuitive idea of measuring non-discrete quantities, such as geometric lengths. When you think about real numbers this way, they are much more closely related to the concept of a "complete ordered field" than they are to the concept of infinite decimals. Ancient mathematicians reasoned about numbers in this way for centuries without the modern decimal notation for them. So actually the idea of representing numbers by infinite decimals is not at all a simple "naive" idea but a complicated and quite clever idea (which has some important subtleties, such as the fact that two different decimal expansions can represent the same number). It's kind of just an accident that nowadays students are taught about this perspective on real numbers long before any others.


For the same reason that it is incorrect to think of matrices as linear transformations, or to think of real $n$-dimensional vector spaces as just $\mathbb{R}^{n}.$

What's so special about base $10$? Why not binary numbers or ternary numbers? In particular, ternary numbers would come in handy for understanding Cantor ternary sets. But apparently we should think in decimals?

This non-canonical choice is unnecessary, aesthetically displeasing, and distracts from certain intuitions to be gained from, say, the geometric view of the reals as points on a line. It's better to think of the real numbers as what they are: A system of things (what are those things? Equivalence classes of sequences of rationals? Points on a line? It doesn't matter) that have certain very nice properties. Just like vectors are just elements of vector spaces: Objects that obey certain rules. Shoving extra stuff in there distracts from the mathematics.

    
Good answer; it is just a question of how well the model or interpretation suits a particular aspect of the mathematical object. – z100 5 hours ago
    
By coincidence, this is a hot question at the same time I answered this question. – Will R 5 hours ago
    
I think in the first sentence it should be "to think of linear transformations as matrices"? – Paŭlo Ebermann 5 hours ago
    
@PauloEbermann: What's the difference? Given a matrix, you don't know the transformation without being given the bases. Given a transformation, you can't write down a matrix unless you choose two bases. Am I mistaken? – Will R 5 hours ago
    
I really wish either I had waited until you posted this answer, or that you had posted this answer an hour earlier than you did. I really like how you described this! – floorcat 3 hours ago

I think that teaching reals as infinite decimals in the first place is a mistake arising out of the lack of a better description.

Here is how I present the problem of infinite decimals. How, for example, does one define the sum of two infinite decimals? One is supposed to start adding decimals at the right and work to the left, as in elementary school. But there is no rightmost digit. So how can we define addition? Well, to salvage the idea, one takes truncations, adds them, and then proves that the results stabilize; that is, we must prove that they converge. This can be done, but it is kind of messy. How do we then define multiplication? Same thing, but worse. Imagine proving the distributive law. In any case, you have arrived by necessity at the concept of a Cauchy sequence.
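The truncate-and-add idea can be illustrated in a few lines (a toy sketch in Python; `truncation` is a hypothetical helper, and exact rationals stand in for the infinite decimals):

```python
from fractions import Fraction

def truncation(x: Fraction, n: int) -> Fraction:
    """The first n decimal digits of x, i.e. x rounded down to n places."""
    scale = 10 ** n
    return Fraction(int(x * scale), scale)

# 0.333... + 0.1666... = 0.5, but look at the truncated sums:
x, y = Fraction(1, 3), Fraction(1, 6)
for n in (3, 6, 9):
    print(n, float(truncation(x, n) + truncation(y, n)))
# 0.499, 0.499999, 0.499999999, ...: the values converge to 0.5,
# yet the digit strings look nothing like "0.5";
# the final carry arrives from infinitely far to the right.
```

Proving that these truncated sums converge is already a Cauchy-sequence argument in disguise, which is the point being made above.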

The other reason is that infinite decimals lack the geometric intuition necessary for understanding the topology of the reals.

    
I'm curious, and forgive me if this is a simple question....but couldn't a series representation be used in place of a truncated form, and allow showing convergence to be more straightforward? And if a series representation is possible, wouldn't the relative summation and product spaces be more or less straightforward to express? I guess this could just as easily lead to Cauchy sequence though... – floorcat 6 hours ago
    
But what would be a better description? As infinite decimals do seem a bad way -- look at how many people ask, over and over "does 0.999999... = 1?" "why does 0.999999... = 1?" "isn't 0.999999.... just a 'little bit less than' 1?" and so forth... – mike4ty4 6 hours ago
    
@floorcat A series is the same thing as an infinite decimal. – Rene Schipperus 5 hours ago
    
I was confused because most of the comments/answers use the actual non-terminating decimal as opposed to what I consider a series representation form, e.g. summation or product space notation. – floorcat 5 hours ago

The infinite decimal interpretation of $\mathbb{R}$ leads to problems: $$ 0.49999\dots = 0.5 $$ If you are trying to find a bijection $\left[0,1\right[ \to \left[0,1\right[ \times \left[0,1\right[$, you might try: $$ 0.ababababab\dots \mapsto (0.aaaaaaa\dots,0.bbbbbbb\dots) $$ which does not work, because representations such as $0.4999\dots = 0.5000\dots$ make the map ill-defined.
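A few lines of code make the failure visible (a sketch in Python; digit strings stand in for the infinite expansions, and `split` is a hypothetical helper):

```python
def split(digits: str) -> tuple[str, str]:
    """Send the digit string d1 d2 d3 d4 ... to its odd- and even-indexed halves."""
    return digits[0::2], digits[1::2]

# Two decimal representations of the same number, 0.5:
print(split("50000000"))   # ('5000', '0000')  i.e. (0.5000..., 0.0000...)
print(split("49999999"))   # ('4999', '9999')  i.e. (0.4999..., 0.9999...)
# Same input number, different output pairs -- and 0.999... equals 1,
# which is not even in [0,1[.  The map is not well-defined on numbers.
```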

How is this a "problem", exactly? – Eric Wofsey 7 hours ago
@EricWofsey It gives the incorrect intuition that $0.4999\dots$ is different from $0.5$. – Henry W. 7 hours ago
    
Intuition is neither fixed nor objective. It's certainly not intuitive that an operation can be noncommutative, for instance, until one has learned enough that it becomes intuitive. Further, in this particular case, it arises from incorrect understanding, so of course there's a problem, but it's not caused by what you imply it to be. – Nij 4 hours ago
    
I think this may be one of the key reasons why you don't think of reals as infinite decimals. Infinite decimals give more than one representation for some numbers (such as .4999... and .5 given here). This can lead to false proofs. For example, if you need to pick a number smaller than .5, it is not immediately obvious that .499... is not such a number. By the time you understand enough to understand why this is the case, you might as well have learned to think of the reals properly. Thinking of them as infinite decimals didn't help. – Cort Ammon 4 hours ago
    
This distinction is necessary when considering the proof that the real numbers between 0 and 1 are uncountable. – robert bristow-johnson 2 hours ago

One objection to thinking of real numbers as "infinite decimals" is that it lends itself to thinking of the distinction between rational and irrational numbers as being primarily about whether the decimals repeat or not. This in turn leads to some very problematic misunderstandings, such as the one in the image below:

[image: an excerpt from a textbook asserting that $8/23$ is irrational]

Yes, you read it right: the author of this book thinks that $8/23$ is an irrational number, because there is no pattern to the digits.

Now it is easy to dismiss this as simple ignorance: of course the digits do repeat, but you have to go further out in the decimal sequence before this becomes visible. But once you notice this, you start to recognize all kinds of problems with the "non-repeating decimal" notion of irrational number: How can you ever tell, by looking at a decimal representation of a real number, whether it is rational or irrational? After all, the most we can ever see is finitely many digits; maybe the repeating portion starts after the part we are looking at? Or maybe what looks like a repeating decimal (say $0.6666\dots$) turns out to have a 3 in the thirty-fifth decimal place, and thereafter is non-repeating?
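In fact the repetition in $8/23$ is easy to exhibit by long division: the digits must cycle as soon as a remainder repeats, and there are at most $22$ nonzero remainders mod $23$. A short sketch (Python; `repetend_length` is a name made up for this illustration):

```python
def repetend_length(p: int, q: int) -> int:
    """Length of the repeating block in the decimal expansion of p/q,
    via long division: the digits cycle once a remainder repeats."""
    seen = {}
    r, pos = p % q, 0
    while r != 0 and r not in seen:
        seen[r] = pos
        r = (r * 10) % q
        pos += 1
    return 0 if r == 0 else pos - seen[r]

print(repetend_length(8, 23))   # 22: it does repeat, just with period 22
print(repetend_length(1, 7))    # 6:  0.142857 142857 ...
```

So the decimal expansion of $8/23$ only reveals its rationality once you have looked past the first $22$ digits, which is exactly the trap the book's author fell into.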

Now obviously these problems can be circumvented by a more precise notion of "rational": a rational number is one that can be expressed as a ratio of integers; an irrational number is one that cannot. But you would be surprised how resilient the misconception shown in the image above can be. Related errors are pervasive: I am sure I am not the only one who has seen students use a calculator to get a decimal approximation of some irrational number (say for example $\log 2$) and several steps later use a "convert to fraction" command on their graphing calculator to express a string of digits as some close-but-not-equal rational number.

If you really want to get students away from these kinds of mistakes, at some point you have to provide them with a notion of "number" that is independent of the decimal representation of that number.


One answer to

What is so wrong with thinking of real numbers as infinite decimals?

is that it's somewhat old-fashioned. Another is that it doesn't work really well in more advanced courses in analysis. But Courant's Calculus, one of the best texts ever, shows this in Section 2 of the first chapter:

[screenshot of the relevant passage from Courant]

That's a screenshot from page 8; you can see it at

https://archive.org/stream/DifferentialIntegralCalculusVolI/Courant-DifferentialIntegralCalculusVolI#page/n23/mode/2up

    
@amWhy I think it's an interesting bit of history that addresses the OP's question, which is why I posted it. Of course one can link to the full text of the book - if you know that it contains something relevant. Feel free to downvote. – Ethan Bolker 7 hours ago
    
I don't question that. But it remains little more than a "link-only" answer. – amWhy 7 hours ago
    
I'm not trying to be mean. And I haven't flagged the answer, nor voted to delete it as a link-only answer. I'm trying to inform you, that's all. – amWhy 7 hours ago
    
@amWhy I don't find your comments mean (and shouldn't have been snarky about downvote). I too dislike link only answers. I agree that this one's a close call. We just came down on different sides. – Ethan Bolker 6 hours ago

Decimal notation for general real numbers is not universally intuitive. It has a number of problems:

  • Arithmetic with nonterminating decimals has unfamiliar complications, because people are used to doing arithmetic from the right end.
  • Some people have great difficulty accepting that the representation is not unique.
  • Some people don't grasp the infinite nature, and instead imagine a decimal as simply having a large but finite number of digits, sometimes with the belief that decimals are inherently approximate and cannot represent a number exactly.
  • Some people fail to grasp the infinite nature in a different way, only being able to conceptualize a sequence of terminating decimals.

Decimal notation even has severe philosophical problems: for example, infinite decimals are ill-suited to various constructive approaches to mathematics. You can't compute even the first digit of $.333\ldots + .666\ldots$ unless you can prove that a carry does or does not propagate in from the right (or you can prove it is a special case, such as the numbers actually being $1/3 + 2/3$, rather than some unknown sequences whose digits you actually have to generate).
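The carry problem can be made quantitative: after seeing $n$ digits of each summand, the sum is only pinned down to an interval that always straddles $1$. A sketch (Python; `sum_interval` is a hypothetical helper, with exact rationals for clarity):

```python
from fractions import Fraction

def sum_interval(n: int):
    """Bounds on x + y knowing only that x starts 0.333...3 (n threes)
    and y starts 0.666...6 (n sixes), each tail unknown in [0, 10**-n)."""
    scale = 10 ** n
    lo = Fraction(int("3" * n) + int("6" * n), scale)
    return lo, lo + Fraction(2, scale)

for n in (1, 5, 20):
    lo, hi = sum_interval(n)
    print(n, lo < 1 < hi)   # always True: no finite prefix decides
                            # whether the sum starts "0.9..." or "1.0..."
```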

There are pedagogical problems as well; the theory behind decimals arises from that of infinite summations, but calculus and analysis courses tend to prefer to start with limits, continuity, and similar topological notions.


Incidentally, I believe I have seen at least one text whose proof that a complete ordered field exists proceeds by showing that the decimal numbers, with arithmetic defined on them, form one.

