# 2.4 — Integers

An integer (sometimes called an integral) type is a variable type that can only hold non-fractional numbers (e.g. -2, -1, 0, 1, 2). C++ has five different fundamental integer types available for use:

| Category | Type | Minimum Size | Note |
|---|---|---|---|
| character | char | 1 byte | |
| integer | short | 2 bytes | |
| integer | int | 2 bytes | Typically 4 bytes on modern architectures |
| integer | long | 4 bytes | |
| integer | long long | 8 bytes | C99/C++11 type |

Char is a special case, in that it falls into both the character and integer categories. We’ll talk about the special properties of char later. In this lesson, you can treat it as a normal integer.

The key difference between the various integer types is that they have varying sizes -- the larger integers can hold bigger numbers. Note that C++ only guarantees that integers will have a certain minimum size, not that they will have a specific size. See lesson 2.3 -- variable sizes and the sizeof operator for information on how to determine how large each type is on your machine.

## Defining integers

Defining some integers:

While short int, long int, and long long int are valid, the shorthand versions short, long, and long long should be preferred. Besides requiring more typing, the int suffix makes the type harder to distinguish from variables of type int, which can lead to mistakes if the short or long modifier is inadvertently missed.

## Identifying integers by size

Because the size of char, short, int, and long can vary depending on the compiler and/or computer architecture, it can be instructive to refer to integers by their size rather than name. We often refer to integers by the number of bits a variable of that type is allocated (e.g. “32-bit integer” instead of “long”).

## Integer ranges and sign

As you learned in the last section, a variable with n bits can store 2^n different values. But which specific values? We call the set of specific values that a data type can hold its range. The range of an integer variable is determined by two factors: its size (in bits), and its sign, which can be “signed” or “unsigned”.

A signed integer is a variable that can hold both negative and positive numbers. To explicitly declare a variable as signed, you can use the signed keyword:

By convention, the keyword “signed” is placed before the variable’s data type.

A 1-byte signed integer has a range of -128 to 127. Any value between -128 and 127 (inclusive) can be put in a 1-byte signed integer safely.

Sometimes, we know in advance that we are not going to need negative numbers. This is common when using a variable to store the quantity or size of something (such as your height -- it doesn’t make sense to have a negative height!). An unsigned integer is one that can only hold positive values. To explicitly declare a variable as unsigned, use the unsigned keyword:

A 1-byte unsigned integer has a range of 0 to 255.

Note that declaring a variable as unsigned means that it cannot store negative numbers, but it can store positive numbers that are twice as large.

Now that you understand the difference between signed and unsigned, let’s take a look at the ranges for different sized signed and unsigned variables:

| Size/Type | Range |
|---|---|
| 1 byte signed | -128 to 127 |
| 1 byte unsigned | 0 to 255 |
| 2 byte signed | -32,768 to 32,767 |
| 2 byte unsigned | 0 to 65,535 |
| 4 byte signed | -2,147,483,648 to 2,147,483,647 |
| 4 byte unsigned | 0 to 4,294,967,295 |
| 8 byte signed | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 |
| 8 byte unsigned | 0 to 18,446,744,073,709,551,615 |

For the math inclined, an n-bit signed variable has a range of -(2^(n-1)) to 2^(n-1) - 1. An n-bit unsigned variable has a range of 0 to (2^n) - 1. For the non-math inclined… use the table. 🙂

New programmers sometimes get signed and unsigned mixed up. The following is a simple way to remember the difference: in order to differentiate negative numbers from positive ones, we typically use a negative sign. If a sign is not provided, we assume a number is positive. Consequently, an integer with a sign (a signed integer) can tell the difference between positive and negative. An integer without a sign (an unsigned integer) assumes all values are positive.

## Default signs and integer best practices

So what happens if we do not declare a variable as signed or unsigned?

| Category | Type | Default Sign | Note |
|---|---|---|---|
| character | char | Signed or unsigned | Usually signed |
| integer | short | Signed | |
| integer | int | Signed | |
| integer | long | Signed | |
| integer | long long | Signed | |

All integer variables except char are signed by default. Char can be either signed or unsigned by default (but is usually signed for conformity).

Generally, the signed keyword is not used (since it’s redundant), except on chars (when necessary to ensure they are signed).

Best practice is to avoid use of unsigned integers unless you have a specific need for them, as unsigned integers are more prone to unexpected bugs and behaviors than signed integers.

Rule: Favor signed integers over unsigned integers

## Overflow

What happens if we try to put a number outside of the data type’s range into our variable? Overflow occurs when bits are lost because a variable has not been allocated enough memory to store them.

In lesson 2.1 -- Fundamental variable definition, initialization, and assignment, we mentioned that data is stored in binary format.

In binary (base 2), each digit can only have 2 possible values (0 or 1). We count from 0 to 15 like this:

| Decimal Value | Binary Value |
|---|---:|
| 0 | 0 |
| 1 | 1 |
| 2 | 10 |
| 3 | 11 |
| 4 | 100 |
| 5 | 101 |
| 6 | 110 |
| 7 | 111 |
| 8 | 1000 |
| 9 | 1001 |
| 10 | 1010 |
| 11 | 1011 |
| 12 | 1100 |
| 13 | 1101 |
| 14 | 1110 |
| 15 | 1111 |

As you can see, the larger numbers require more bits to represent. Because our variables have a fixed number of bits, this puts a limit on how much data they can hold.

## Overflow examples

Consider a hypothetical unsigned variable that can only hold 4 bits. Any of the binary numbers enumerated in the table above would fit comfortably inside this variable (because none of them are larger than 4 bits).

But what happens if we try to assign a value that takes more than 4 bits to our variable? We get overflow: our variable will only store the 4 least significant (rightmost) bits, and the excess bits are lost.

For example, if we tried to put the decimal value 21 in our 4-bit variable:

| Decimal Value | Binary Value |
|---|---:|
| 21 | 10101 |

21 takes 5 bits (`10101`) to represent. The 4 rightmost bits (`0101`) go into the variable, and the leftmost (`1`) is simply lost. Our variable now holds `0101`, which is the decimal value 5.

Note: At this point in the tutorials, you’re not expected to know how to convert decimal to binary or vice-versa. We’ll discuss that in more detail in section 3.7 -- Converting between binary and decimal.

Now, let’s take a look at an example using actual code, assuming a short is 16 bits:

What do you think the result of this program will be?

```
x was: 65535
x is now: 0
```

What happened? We overflowed the variable by trying to put a number that was too big into it (65536), and the result is that our value “wrapped around” back to the beginning of the range.

 For advanced readers, here’s what’s actually happening behind the scenes: the number 65,535 is represented by the bit pattern `1111 1111 1111 1111` in binary. 65,535 is the largest number an unsigned 2 byte (16-bit) integer can hold, as it uses all 16 bits. When we add 1 to the value, the new value should be 65,536. However, the bit pattern of 65,536 is represented in binary as `1 0000 0000 0000 0000`, which is 17 bits! Consequently, the highest bit (which is the 1) is lost, and the low 16 bits are all that is left. The bit pattern `0000 0000 0000 0000` corresponds to the number 0, which is our result.

Similarly, we can overflow the bottom end of our range as well, resulting in “wrapping around” to the top of the range.

```
x was: 0
x is now: 65535
```

Overflow results in information being lost, which is almost never desirable. If there is any suspicion that a variable might need to store a value that falls outside its range, use a larger variable!

Also note that the results of overflow are only predictable for unsigned integers. Overflowing signed integers or non-integers (e.g. floating point numbers) may produce different results on different systems.

Rule: Do not depend on the results of overflow in your program.

## Integer division

When dividing two integers, C++ works like you’d expect when the result is a whole number:

This produces the expected result:

```
5
```

But let’s look at what happens when integer division causes a fractional result:

This produces a possibly unexpected result:

```
1
```

When doing division with two integers, C++ produces an integer result. Since integers can’t hold fractional values, any fractional portion is simply dropped (not rounded!).

Taking a closer look at the above example, 8 / 5 produces the value 1.6. The fractional part (0.6) is dropped, and the result of 1 remains.

Rule: Be careful when using integer division, as you will lose any fractional parts of the result

## What is size_t?

Consider the following code:

Pretty simple, right? We can infer that the sizeof operator returns an integer value -- but what type of integer is that value? An int? A short? The answer is that sizeof (and many functions that return a size or length value) returns a value of type size_t. size_t is an unsigned integral type that is typically used to represent the size or length of objects.

Amusingly, we can use sizeof (which returns a value of type size_t) to ask for the size of size_t itself:

Compiled as a 32-bit (4 byte) console app on the author’s system, this prints:

```
4
```

Much like an integer can vary in size depending on the system, size_t also varies in size. size_t is guaranteed to be unsigned and at least 16 bits, but on most systems will be equivalent to the address-width of the application. That is, for 32-bit applications, size_t will typically be a 32 bit unsigned integer, and for a 64-bit application, size_t will typically be a 64-bit unsigned integer. size_t is defined to be big enough to hold the size of the largest object creatable on your system (in bytes). For example, if size_t is 4 bytes, the largest object creatable on your system can’t be larger than the largest number representable by a 4 byte unsigned integer (per the table above, 4,294,967,295 bytes).

By definition, any object larger than the largest value size_t can hold is considered ill-formed (and will cause a compile error), as the sizeof operator would not be able to return the size without overflow.

Incidentally, the _t suffix means “type”, and it is common to see this naming convention applied to the newly defined types from newer iterations of C and C++.

### 172 comments to 2.4 — Integers

• Jim

Alex,

I came back here from the bitwise lesson. I think I missed something about storing integers that you may have written somewhere.  But I don't recall seeing.

How does C++ or the compiler handle all the leading zeros when your system and mine need 4 bytes (32 bits) to store small integers like 1 (one)? Certainly other data types or situations require throwing out leading or trailing zeros.

Char also comes to mind here. What happens if someone uses a long long integer and then stores a small number there?

Can you give us some insight? Thanks

• Alex

The compiler or CPU's instruction set should handle the padding as appropriate. For example, if you assign the integer value 1 to a 32-bit memory location, it will assign the value 00000000 00000000 00000000 00000001. It's not something you need to worry about as a programmer.

• Amandeep

Why do we need to write "return 0;" in the last line of this program?

#include <iostream>

int main()
{
using namespace std;
unsigned short x = 0; // smallest 2-byte unsigned value possible
cout << "x was: " << x << endl;
x = x - 1; // overflow!
cout << "x is now: " << x << endl;
return 0;
}

• Todd

Typos.

"See lesson 23 (2.3) -- variable sizes and the sizeof operator"

"In lesson 21 (2.1) -- Basic addressing and variable declaration"

"If there is any doubt (suspicion) that a variable might need to store a value that falls outside its range, use a larger variable!" Either say 'suspicion that it won't work' or 'doubt that it will work', not 'doubt that it won't work' (double negative). You probably just started saying one these and switched when writing.

• Alex

Fixed! Many thanks.

• Ariel Cabib

sizeof(long) on machine is 8 and not 4 as mentioned above. (64bit Ubuntu).

• Alex

4 bytes is the minimum size that C++ guarantees a long will be. However, it can be more on some architectures, such as the one you are using.

The first line calls even the negative numbers whole numbers, which is mathematically incorrect.

• Alex

You are correct. I've updated the terminology.

• Catreece

Minor thing I noticed while reading through:

The decimal / binary value table would make a bit more sense if the binary values were right-aligned on the table instead of left aligned. This would imply visually that each new digit added appears on the left, rather than the right as new bits are added.

It'd technically also be useful to highlight the new digit with a bold font each time it goes up a bit for much the same reason.

It's not a big deal, but it'd likely make it easier for people to visualize what's going on with the overflow example immediately after. =3

• Alex

Good idea to use right-alignment. It does make the table more comprehensible. Thanks for the suggestion.

• Hi again,
around the example of overflow of 65535 + / - 1, the quotation marks are misplaced in the codes.
It should be:

Also, some redirections have names like "24-integers" instead of "2.4-integers," although it doesn't really matter.

One final question:
I understand how 65535 + 1 becomes zero, but I can't understand how 0 - 1 becomes 65535 for unsigned shorts.

Matthew

• Alex

Thanks, I fixed the quotes.

Regarding the last question, I just answered that one: here.

• Simon

Why specify that an integer is signed? I understand why we'd specify it's unsigned, but just declaring an int without adding "signed" still lets you input negative numbers. So why?

• Alex

The primary use of the signed keyword is to explicitly specify whether char is signed or unsigned (since it could be either by default). Although you can use it with the other integer types, it is completely redundant.

• malhar

hey im new to programming . i have one doubt.

Isnt 'char' used to denote characters ?
how come youre telling its an integer data type??

• rameye

It is an integral data type, in the fact that it can only represent an integer, the same limitation that the other integral types (short, int, long) have.

The special handling of char traces back to the early days, when there were only 8 bits available to assign for text characters on a display screen.

You can do arithmetic with char types, you just have to be careful when you want to look at the results.

```cpp
char a = 65, b = 66;
cout << a << b << endl;
cout << a + b << endl;
```

Notice how cout implicitly cast the a+b evaluation to int type so you see the actual sum of 65+66 displayed, but when simple char variables are inserted into cout then the ASCII characters are output back to you.

If you are using char to write or read text, as most uses of it are, everything is cool as can be, in fact strings are simply of type char*

• cmastah

Ummm....please ignore the program I put up above (it's displayed incorrectly). Although I did use the code from the comprehensive quiz from 1.11 (using an unsigned int) to try adding up two negatives, -1 and -1 and still got a proper answer: -2. When I tried an unsigned short, I got issues (I got the number 131070, which is way more than what an unsigned short is supposed to give based on the table above (seeing as a short is an unsigned 2 bytes integer variable)).

What I also wanted to know was (and I think I might know the answer after thinking back on games), is the 0-255 supposed to be LITERALLY the number 255, or as in 255 digits?

• cmastah

Sorry for triple posting, but I think I found the problem. Please correct me on these assumptions (for which I'm using the comprehensive quiz question from 1.11):

1. The integer variable within the "int readnumber()" function limits the maximum/minimum number it can reach (hence with an unsigned short, it's 65,535) while the "void writeanswer(int x)" allows for a maximum/minimum of what int is capable of (which is a larger number). Hence, if the integer variable within the readnumber function was capable of int size while writenumber is only (writenumber(short x)), then the maximum/minimum achievable is only what short is (which is 65,535).

2. Doing an unsigned short within the "int readnumber()" function, if we input -1 for the first number and -1 for the second, we get 131070 because they each take a step back from 0 and arrive at 65,535 each (hence a total of 131,070). This is still strange for me because unsigned int still gives the proper -2 after inputting -1 (for the first) and -1 (for the second). Unsigned short is NOT capable of adding up -1 and -1, but unsigned int is.

3. I think I had one more assumption, but it escapes me right now.

• cmastah

oh...my...god....I can actually feel my brain cells evaporating. I have NO idea what this article is talking about and can't understand it no matter how many times I try to read it.

Using code::blocks and the program from the comprehensive quiz from chapter 1:

----------------------------

main.cpp:
#include <iostream>
#include "io.h"

using namespace std;

int main()
{
cout << "short: " << sizeof(short) << endl;
int b = ReadNumber();
int z = ReadNumber();
return 0;
}

io.cpp:
#include <iostream>

using namespace std;

{
cout <> x;
return x;
}

{
cout << "Total: " << a << endl;
}

io.h:
#ifndef IO_H
#define IO_H

#endif // IO_H

-----------------------------

When I run the program I can still calculate -10 + -10 to a total of -20, am I (grossly) misunderstanding this lesson? I'm trying to understand the whole bytes thing and such but I'm not getting it at all. When I use unsigned int, I can calculate -1 + -1, but for some reason with unsigned short, -1 + -1 gives me what I'm assuming is an overflow issue (I get 131070 as a total). Please help 🙁

• Maverick95

* missed the last 1 off the 2nd to last line

• Maverick95

i can understand how integer overflow happens when you increase an unsigned integer -

i.e. 65535 = 1111 1111 1111 1111

and 65536 = [1] 0000 0000 0000 0000

so only given 2 bytes of data this reverts back to meaning 0.

However, I can't understand how this happens in reverse?

i.e. 0 = 0000 0000 0000 0000

So when you subtract one to get -1, how does this revert back to 1111 1111 1111 111

in terms of how memory is stored?

• rameye

You are working with an unsigned int, there is no negative value.

So you cannot ever reach a value of -1, 0 is as low as you can go before overflow.

Counting down: 5 4 3 2 1 0 65535 65534 65533 ...

• Alex

You guys ask hard questions.

The C99 spec says: "the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type".

So in this case, -1 is converted to an unsigned value by adding (UINT_MAX + 1) to it. The resultant value (UINT_MAX) is between 0 and UINT_MAX. UINT_MAX is the maximum unsigned value (e.g. 65535 for a 16-bit unsigned int). So -1 maps to 65535.

The C++ spec essentially says the same thing, only in a much more complicated way.

• Alex

As an aside, it turns out that using two's complement as the underlying representation makes this trivial.

Consider: -1 in two's complement (using 16 bits):
binary representation for 1: 0000 0000 0000 0001
flip the bits: 1111 1111 1111 1110
add 1: 1111 1111 1111 1111 in two's complement
1111 1111 1111 1111 as an unsigned = 65535

Consider: -2 in two's complement:
binary representation for 2: 0000 0000 0000 0010
flip the bits: 1111 1111 1111 1101
add 1: 1111 1111 1111 1110 in two's complement
1111 1111 1111 1110 as an unsigned = 65534

Consider: -65535 in two's complement (yes, this is outside the range for a 16-bit signed number, and should take 17 bits to represent properly in two's complement, but we only have 16 bits, so let's use them and see what happens)
binary representation for 65535: 1111 1111 1111 1111
flip the bits: 0000 0000 0000 0000
add 1: 0000 0000 0000 0001 in two's complement
0000 0000 0000 0001 as an unsigned = 1 (it still works!)

So if the compiler is using two's complement binary representation for signed numbers (which many do), then all that's needed is to interpret the number as an unsigned number.

when you say, an int variable has a size of 2 bytes or 4 bytes, what do u mean? does it dynamically change size from 2 to 4 bytes as the number get larger?

• Alex

No, it's up to your compiler, which generally picks an appropriate value based on your computer's architecture.

This means int will always have the same fixed size on a given system.

• joe

What is the difference between the "long" and "int" variable types? (They both have the same size).

• rameye

They are same size on your platform. That does not guarantee they are the same size on another platform.

In your particular case there is no difference. But never assume it's also the same for anyone else.

A long is guaranteed to be the same size or larger than an int on the same machine. That's as far as the contract goes.

• Alex

I've updated the tutorials to indicate that different variables have a guaranteed minimum size. For int, it's 2 bytes. For long, it's 4 bytes.

• mfz

For "1 byte signed" the range is -128 to 127. How is -128 represented in 1 byte -- doesn't it cause an overflow?
127 => 01111111
-128 => 111111111 (the right most 1 represents - negative)

Also why cant we represent 128 in 1 byte (why is the range only till 127?)
128 => 10000000

Thx

• rameye

A signed char -128 in binary format is 10000000

Now you may question why is this so if the left most bit is the sign bit and the other seven bits are the value?

Well you don't want to ever use -0, that's kind of useless.

So the last 7 bits are actually a twos-complement of the absolute value of the negative value.

See http://en.wikipedia.org/wiki/Two%27s_complement for an explanation of twos-complement.

• Subhasis Rout

Can someone tell me why this does not work.

As per the above lesson, char is an integer type, so this should work.

• rameye

This is because when you insert a char into std::cout (which is actually `basic_ostream<char>`) it doesn't display on the console as the literal value in base 10, it displays as a single ASCII character.

If it worked as you assumed then `cout << "Hello";` would print `72101108108111` (if iomanipulator flag set to std::dec)

Cast char to int before insertion:

```cpp
unsigned char ch1 = 5;
std::cout << "The value of ch1 is : " << (int)ch1 << endl;
```

Then you will see the value of the char displayed as you wished it to be.

• Alex

Correct, std::cout prints characters as ASCII values instead of integer values because that is what they are more often used for.

(int)ch1 is an old-school C-style cast. In C++ you should do int(ch1).

We discuss casting in lesson 4.4 -- Type conversion and casting.

• Yzak

oops...
12 - 1100
13 - 1101
14 - 1110
15 - 1111

• Yzak

In the overflow section, I noticed you only have 14 numbers. Shouldn't there be 16 since you are starting from 0. Wouldn't it be:
0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1101, 1111
0 - 0
1 - 1
2 - 10
3 - 11
4 - 100
5 - 101
6 - 110
7 - 111
8 -1000
9 -1001
10-1010
11-1011
12-1101
13-1110
14-1111
15-10000
?

• joha

I think it is 0-1-10-11-100-101-110-111-1000-1001-1010-1011-1100-1101-1110 to make 15, and to make 16, just add 1111......

Let me know if I am mistaken.....great tutorials...

• zsb

• upk

Concrete basics of binary counting:
If you understand that in decimal system 248 stands for
2 x 10^2 +
4 x 10^1 +
8 x 10^0

and 13 for
1 x 10^1 +
3 x 10^0

then it is quite easy to convert in your head small decimals to binary and vice versa. For example 1011 is
1 x 2^3 = 8 +
0 x 2^2 = 0 +
1 x 2^1 = 2 +
1 x 2^0 = 1
= 11.

When you go one integer (1) up in whatever the base (binary, octal, decimal, etc.), you increase the lowest digit by one; if it is already at its maximum, you reset it and carry into the next digit, and so on. For example in the octal system, next from 4677 is 4700, because you can't get higher than 7 (and after that of course 4701). In binary, next from 1011 is 1100 (because from right to left the first two 1's can't get higher). After 1100 -> 1101, 1110, 1111. And you can verify it by counting:

1100 = 8 + 4 + 0 + 0 = 12
1101 = 8 + 4 + 0 + 1 = 13
1110 = 8 + 4 + 2 + 0 = 14
1111 = 8 + 4 + 2 + 1 = 15.

• Yes, fixed. 🙂 Thanks for pointing that out.

• techsavvy....aye

The simplest method of converting binary to decimal is to write 2 raised to successive powers below each binary digit and strike out the powers below the 0 digits

for e.g-
the number is 1101001

1    1    0    1    0    0    1

2^6  2^5  2^4  2^3  2^2  2^1  2^0

Now cancel all powers below a 0, i.e. cancel the powers 1, 2, 4

add the others = 1+8+32+64
= 105

• lampamp

65535 is 0011011000110101001101010011001100110101
65536 is 0011011000110101001101010011001100110110

how is short able to maintain the first but not the second?
can u please explain?

• Alex

I don't think you have your binary right.

• rameye

That's called BCD (Binary Coded Decimal)

• How do you prevent an integer overflow?

My program prevents users from entering a number higher than a billion, and that works just fine. However, if a user enters a number that exceeds the integer range, my program gets stuck in a loop. Is there a way to prevent a user from entering a number that causes an overflow?

• Alex

There are (at least) a few possible ways to do this:

1) Read the user's input as a string, validate that the user entered something that fits in your variable, and then convert the string to your numeric value.
2) Read in the user's input character by character and validate that input as they enter it (stop them from entering any character that would overflow your variable).

Neither of these is easy.

• William

Thanks again for your excellent tutorials Alex.

Just 2 quests;

1. in the 'Range' table above, 4 byte unsigned 0 to 4,294,967,296: is it 4,294,967,296 or 4,294,967,295?

2. when Stroustrup says “The unsigned integer types are ideal for uses that treat storage as a bit array.", does he mean when you are using the bits within a variable to check if they are on or off?

• Alex

1) 4,294,967,295. I fixed the error.
2) A bit array is typically used when you have a bunch of independent bit-size variables (booleans) and want to store them in a compact format. So yes, using the individual bits within a variable. An unsigned variable would be better for this purpose than a signed one is because the underlying (binary) representation is well defined. The underlying (binary) representation for signed variables can vary from system to system.

• John

What do you do if 4/8/16 bits isn't big enough? For instance, number theorists like to do arithmetic on very big numbers, ~ 100 to 200 digits large.

• On modern architectures, generally longs are 32 bits. Most modern compilers also give you access to a 64-bit integer type (often called a long long, but sometimes it has other names, like __int64).

However, if you need even larger integers, then you will have to write your own data type. You will learn how to do this in the section on classes (chapter 8).

• Godel

Great tutorial! Knew nothing this morning, now already something.

Detail: the math behind the table with unsigned/signed range:
doesn't the n-bit unsigned variable have a range of 0 to 2^n instead of 0 to 2^n-1?

As it was mentioned above:

As you learned in the last section, a variable with n bits can store 2^n different values...

As 0 (zero) is also a value, the maximum number is (2^n)-1 and range becomes 0 ... (2^n)-1 (inclusive).

• CuView

When overflow occurs, is it dangerous? Can it change other memory bits (that may be used by other variables/applications)?

• CuView

OK, I understand after reading forward.
Dangerous because it could change the other variables.

Thank you

• Actually, overflow will just result in the most significant bits being lost. It won't overflow into other variables.


It's just because mathematical operations do not work with memory directly. The operand is put into a CPU register (mostly EAX (on x86 machines), or a part of it, as the usual register for integer math) for processing. The result (which can also occupy the EDX register) is then taken from the initial place (the EAX register), with the higher bits lost.

• PReinie

However, if you're dealing with putting the contents of the EAX register back into memory and the memory isn't large enough to hold the register's value (putting it into a char variable) that might cause problems.

Back to the original "is it dangerous?" if your plane altimeter value overflows and the auto-pilot now thinks you're at 0 above ground and says CLIMB - NOW when you're really way up in the air, who knows what could happen? Dangerous all depends on the application.

A lot of computer exploits occur when something is overflowed or underflowed and the OS switches into protected modes for recovery, and next thing you know your PC is compromised and sending out spam to thousands of people... or you get a blue screen of death or an eternal spinny (roulette) wheel of death.


• CuView

Thanks for your tutorial,
now i'm able to make my own String, Array, etc classes similar to std::string and vector after 4 months learning c++

Thank you very much.

• PoisonedV

I have a question- if long and int are both the same amount of bytes, do they hold the same amount?

• Yes.

• C++ Student

Another great set of examples, thanks.

• Fun fact: The old Final Fantasy games on the NES only allowed your stats to go up to 255 because they used a 1 byte unsigned variable to store stats. It would be neat to see someone allude to this in a modern game... especially when memory isn't a huge issue anymore. 🙂

• Frederik

Memory is ALWAYS a huge issue. Modern games usually have more stats or more monsters to keep track of, so doubling or quadrupling the memory needs, just because you can, is never a good idea. Designing good rules usually allows to reduce the height of stats, rather than extend it.

• That is actually a really difficult question. 🙂 I have seen various programmers argue it either way, and there's no clear answer.

Bjarne Stroustrup (who designed C++) says, "The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules."

I think Bjarne is correct on this one.

Using signed instead of unsigned integers even when you don't expect negative numbers gives you a few benefits:

1) Many programmers use signed integers even when only dealing with positive numbers, because negative numbers can then be used as "error conditions". For example, it's pretty common to write a function that is expected to return a positive number. However, you can have it return a negative number if something goes wrong. That way, the caller has a way of detecting something went astray. (Note: You can also use exception handling as an alternative mechanism for returning errors)

2) What happens in this case:

int foo(unsigned int nValue)
{
    // something
    return 0;
}

// caller:
int nValue = foo(-1);

The -1 gets silently converted into an unsigned integer (which would be a large positive number), and the function has no way of detecting that an invalid input was given to it.

3) If you expect a number to be positive, and your signed variable suddenly has a negative value, that's a good indication your algorithm is wrong.

In short, as a rule of thumb, unless you have a good reason not to, it is better to use signed integers.
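The error-code idiom from point 1) can be sketched like this (findIndex and the search task are illustrative assumptions, not from the original comment):

```cpp
// Returns the index of value in arr, or -1 if it isn't present.
// Valid results are always >= 0, so a negative return signals failure.
int findIndex(const int* arr, int size, int value)
{
    for (int i = 0; i < size; ++i)
    {
        if (arr[i] == value)
            return i;
    }
    return -1; // error condition: "not found"
}
```

The caller can then test `if (findIndex(arr, size, x) < 0)` to detect that something went astray.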

• From the line "Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea.", what I understood is that unsigned integers save 1 bit in memory, because their value is always positive and there is no need for an extra bit to indicate whether the value is positive or negative. Am I right? Forgive me if my English is not so good.

One comment for this tutorial in my language (Hindi):
Mast hai bhai...

That means...
This tutorial is awesome bro.

• Alex

Unsigned integers don't really "save 1 bit in memory", they just put their bits to use in a different way.

If you look at the range for an unsigned 8-bit number, you'll see that it's 0 to 255.
If you look at the range for a signed 8-bit number, you'll see that it's -128 to 127.

Both signed and unsigned numbers use all 8 bits to represent 256 possible unique values. It's just that the range of numbers they can represent is slightly different.

• It's better to use unsigned rather than signed, and here's why.  Small processors like the 8051 don't have signed multiply or divide instructions, so signed arithmetic has to be done with a library routine.  When multiplying (or dividing) signed values, the function has to do the following:

1) Extract the sign bits for both the multiplicand and multiplier
2) Convert both values to unsigned
3) Perform the multiplication
4) Use the saved sign bits to find the sign of the product
5) Apply the sign to the product

When multiplying (or dividing) unsigned values, the function has to do the following:

1) Perform the multiplication

If the application is time critical, it's better to use unsigned to eliminate a bunch of unnecessary steps.  Even if your processor has the signed multiply and divide, it's a good habit to not use something if you don't need it.
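The five steps above can be sketched in C++ (a rough illustration of what such a library routine does, not actual 8051 library code; the final conversion back to signed assumes two's-complement behavior, which C++20 guarantees):

```cpp
#include <cstdint>

// Multiply two signed values using only an unsigned multiply,
// following the five steps described above.
std::int32_t signedMultiply(std::int32_t a, std::int32_t b)
{
    // 1) Extract the signs of the multiplicand and multiplier
    bool negative = (a < 0) != (b < 0);

    // 2) Convert both values to unsigned magnitudes
    std::uint32_t ua = (a < 0) ? 0u - static_cast<std::uint32_t>(a)
                               : static_cast<std::uint32_t>(a);
    std::uint32_t ub = (b < 0) ? 0u - static_cast<std::uint32_t>(b)
                               : static_cast<std::uint32_t>(b);

    // 3) Perform the unsigned multiplication
    std::uint32_t product = ua * ub;

    // 4) + 5) Use the saved sign to set the sign of the product
    return static_cast<std::int32_t>(negative ? 0u - product : product);
}
```

The all-unsigned version skips everything except step 3, which is the commenter's point about the extra work signed values cost on such hardware.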

As for "Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules.": Using signed variables to sidestep the implicit conversion rules is just bad programming.  You should know the rules and know what will happen if you mix unsigned and signed variables in an expression.  And you should always use explicit casts when mixing them.

I work with an embedded 8051 project that has about 50,000 lines of code.  We have hundreds of variables, mostly unsigned.  My estimate is we have no more than a couple dozen signed variables, and they are used for incrementing (1) and decrementing (-1) when using stepper motors.  Timing is critical so using signed variables is out of the question because we use a lot of multiplication and division.  I've seen the generated assembly code and unsigned is clearly the winner.

• Alex

> It’s better to use unsigned rather than signed

No, it's not. For your very specific performance-critical use case on a processor that has crippled handling of signed values, you might make the call to favor unsigned over signed for performance reasons.

But for general computing, the best minds in the field have decided that using signed is safer than using unsigned. Most modern processors support both signed and unsigned arithmetic operations natively, so the performance difference between the two is negligible. Even knowing the rules about how signed and unsigned values interact, it's easy to get into trouble, especially if you mix them (which can happen inadvertently). It's better to program defensively and optimize later where needed.

The C++ style guidelines from Google explicitly state, "You should not use the unsigned integer types such as uint32_t, unless there is a valid reason such as representing a bit pattern rather than a number, or you need defined overflow modulo 2^N. In particular, do not use unsigned types to say a number will never be negative. Instead, use assertions for this.". Those Google guys are pretty smart -- they must have had a good reason to include this.

There are actually fewer error states for unsigned numbers than signed; overflow on signed is actually UNDEFINED.  That means anything at all could happen.  Plus, using the negation operator (-) on a minimum-value signed number is also undefined.  I've seen it both ignored entirely and cause sudden, unexplained program termination.

gcc silently ignores it with default settings (at least my obsolete 4.4.5 version does: it prints the resulting x as -2147483648, whereas it prints the negation of -5 as 5), whereas I was actually getting program termination out of MSVC compiling a Windows binary.  Neither is an acceptable answer.  There actually ISN'T an acceptable answer, as the correct number is 2147483648, which cannot be represented by a signed 32-bit integer.  Keep in mind an abs() macro is probably using -x in it.

Undefined behavior is especially nasty because "modern" compiler writers will often optimize away undefined behavior, even if it's actually VALID on the target platform.  (And "modern" goes in quotes: MSVC and gcc struggle to show you where an error happened, whereas SAS/C usually points directly at the offending place in most cases.  Modern != better, necessarily.)

The only real error state for an unsigned value (assuming you aren't mixing and matching with floating point and signed, which brings in all the error states of those two nasty number systems) is dividing by zero... which is present in all other representations too.

Plus unsigned values have some cases where you can skip tests
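The example this comment refers to seems to have been lost; a sketch of the kind of bounds check being described (the buffer size and function names are assumptions):

```cpp
#include <cstddef>

const std::size_t kBufferSize = 10; // assumed size

// With an unsigned index, one comparison is enough
bool inBoundsUnsigned(unsigned int u)
{
    return u < kBufferSize; // no need to also test u >= 0: it always is
}

// With a signed index, a second comparison is required
bool inBoundsSigned(int i)
{
    return i >= 0 && i < static_cast<int>(kBufferSize);
}
```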

In the example above, there's no need to test whether u >= 0, as it always is.  Note that the value may not be a correct buffer index in either case, as there are no error checks in these functions... a user might have typed 99999999999999999999, for instance.  However, the stack won't be smashed in either case, and the unsigned version involves one fewer comparison (comparisons and branches will NEVER get cheaper).

I personally like to use an unsigned integer to represent values in the range 0.0 <= x < 1.0.  This works especially well for angles that have to be constrained to a 360-degree circle, as I can literally just add or subtract from the unsigned value and let overflow handle the wrapping.  I don't have to do crap like x = x % 360 or other modf/fmod performance-eating, error-prone nonsense.

A 32-bit uint has a higher resolution than a single-precision float here (it has four effective bytes of mantissa, as opposed to three for the float in this case).  Plus, the rest of the math tends to be fp-heavy and the integer execution units/ports are idle anyway, so bonus performance.  Doubly so when I can do something like SineTable[ Angle >> 16 ] instead of sin(blah)... that's actually faster AND getting faster (memory is improving faster than CPU performance these days: it was only two times faster in the Core 2 era, and for my i7-3820, it's six times faster).

All of that would be invalid/undefined for a signed integer.  The only real drawback is that 15 degrees is 178956970, but then again, the standard math library works in radians, which would be 0.261799 (in single precision... approximately).
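The binary-angle trick described above can be sketched like this (the table size, names, and 16-bit shift are assumptions for illustration):

```cpp
#include <cmath>
#include <cstdint>

// A full circle maps to the full 0..2^32-1 range of a 32-bit unsigned
// integer, so natural wraparound on overflow replaces x = x % 360.
const int kTableSize = 1 << 16; // 65536 entries, indexed by the top 16 bits
float SineTable[kTableSize];

void initSineTable()
{
    const double kTwoPi = 6.28318530717958647692;
    for (int i = 0; i < kTableSize; ++i)
        SineTable[i] = static_cast<float>(std::sin(kTwoPi * i / kTableSize));
}

float fastSin(std::uint32_t angle)
{
    return SineTable[angle >> 16]; // top 16 bits index the table
}
```

With this representation, 90 degrees is 0x40000000, and adding a per-frame increment to the angle wraps automatically at a full circle; no modulo is ever needed.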

There are some gotchas with unsigned in some common uses for novice programmers, but that's why they're NOVICES.  Stuff like for(u=9;u>=0;u--) -- u>=0 is always true (using the "u" from the example above, naturally).  That would have to be rewritten to use a signed integer, an offset, or a do/while(u!=0) sort of construct... assuming that 0 was intended to be included in the first place.  The novice programmer likely included zero by mistake there anyhow 😉

BTW, just because 'smart people' do something, doesn't mean it's right.  My Galaxy S7's software (Android -- google *cough*) is a good example of that.  Uses over 2.1 gigabytes of RAM (out of 4), and is less responsive than a significantly hardware-inferior iPhone.

• Anyhow, I've been otherwise enjoying your article, even if it is aimed at new users.  It's pretty clear and well written.  A current project requires some integration with C++ code, and this has been helping me update my C++ skill set from the old pre-namespace era.  When I first started on that project, I thought that Perl had somehow attacked the source 🙂

• Would you say it is a good idea to use signed integers even if it is unlikely the number will be negative, just as a precaution? Or maybe allow unsigned but print a warning to the screen?