I am using an int type to store a value. By the semantics of the program, the value always varies in a very small range (0 - 36), and an int (rather than a char) is used only because of CPU efficiency.
It seems that many special arithmetic optimizations could be performed on such a small range of integers. Many function calls on those integers might be reduced to a small set of "magical" operations, and some functions might even be optimized into table look-ups.
So, is it possible to tell the compiler that this int is always within that small range, and can the compiler then perform those optimizations?
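For illustration, this is roughly the kind of hint I have in mind. It is just a sketch using GCC/Clang's `__builtin_unreachable()`; the function `lookup()` and its arithmetic are made up for the example:

```c
/* A minimal sketch: promising the compiler (GCC/Clang) that the
   value stays in 0..36, in the hope that it can drop range checks
   or turn the arithmetic into a table look-up. */
static int lookup(int v)
{
    /* The promise: if v is ever outside 0..36, behavior is
       undefined, so the compiler may assume it never happens. */
    if (v < 0 || v > 36)
        __builtin_unreachable();

    /* Some arithmetic the optimizer might now simplify. */
    return (v * v) % 37;
}

int main(void)
{
    return lookup(5);
}
```

(I've also seen MSVC's `__assume()` and C++23's `[[assume]]` mentioned as similar mechanisms, if a portable answer depends on those.)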