An integer is an integral type that can represent positive and negative whole numbers, including 0 (e.g. -2, -1, 0, 1, 2). C++ has *4* different fundamental integer types available for use:

Type | Minimum Size | Note |
---|---|---|
short | 16 bits | |
int | 16 bits | Typically 32 bits on modern architectures |
long | 32 bits | |
long long | 64 bits | |

The key difference between the various integer types is that they have varying sizes -- the larger integers can hold bigger numbers.

A reminder

C++ only guarantees that integers will have a certain minimum size, not that they will have a specific size. See lesson 4.3 -- Object sizes and the sizeof operator for information on how to determine how large each type is on your machine.

Signed integers

When writing negative numbers in everyday life, we use a negative sign. For example, *-3* means “negative 3”. We’d also typically recognize *+3* as “positive 3” (though common convention dictates that we typically omit plus prefixes). This attribute of being positive, negative, or zero is called the number’s sign.

By default, integers are signed, which means the number’s sign is preserved. Therefore, a signed integer can hold both positive and negative numbers (and 0).

In this lesson, we’ll focus on signed integers. We’ll discuss unsigned integers (which can only hold non-negative numbers) in the next lesson.

Defining signed integers

Here is the preferred way to define the four types of signed integers:

```cpp
short s;
int i;
long l;
long long ll;
```

All of the integers (except int) can take an optional *int* suffix:

```cpp
short int si;
long int li;
long long int lli;
```

This suffix should not be used. In addition to being more typing, adding the *int* suffix makes the type harder to distinguish from variables of type *int*. This can lead to mistakes if the short or long modifier is inadvertently missed.

The integer types can also take an optional *signed* keyword, which by convention is typically placed before the type name:

```cpp
signed short ss;
signed int si;
signed long sl;
signed long long sll;
```

However, this keyword should not be used, as it is redundant, since integers are signed by default.

Best practice

Prefer the shorthand types that do not use the *int* suffix or signed prefix.

Signed integer ranges

As you learned in the last section, a variable with *n* bits can hold 2^{n} different values. But which specific values? We call the set of specific values that a data type can hold its range. The range of an integer variable is determined by two factors: its size (in bits), and whether it is signed or not.

By definition, an 8-bit signed integer has a range of -128 to 127. This means a signed integer can store any integer value between -128 and 127 (inclusive) safely.

As an aside...

Math time: an 8-bit integer contains 8 bits. 2^{8} is 256, so an 8-bit integer can hold 256 different values. There are exactly 256 different values from -128 to 127, inclusive.

Here’s a table containing the range of signed integers of different sizes:

Size/Type | Range |
---|---|
8 bit signed | -128 to 127 |
16 bit signed | -32,768 to 32,767 |
32 bit signed | -2,147,483,648 to 2,147,483,647 |
64 bit signed | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 |

For the math inclined, an n-bit signed variable has a range of -(2^{n-1}) to 2^{n-1}-1.

For the non-math inclined… use the table. :)

Integer overflow

What happens if we try to assign the value *280* to an 8-bit signed integer? This number is outside the range that an 8-bit signed integer can hold. The number 280 requires 9 bits (plus 1 sign bit) to be represented, but we only have 7 bits (plus 1 sign bit) available in an 8-bit signed integer.

Integer overflow (often called *overflow* for short) occurs when we try to store a value that is outside the range of the type. Essentially, the number we are trying to store requires more bits to represent than the object has available. In such a case, data is lost because the object doesn’t have enough memory to store everything.

In the case of signed integers, which bits are lost is not well defined, thus signed integer overflow leads to undefined behavior.

Warning

Signed integer overflow will result in undefined behavior.

In general, overflow results in information being lost, which is almost never desirable. If there is *any* suspicion that an object might need to store a value that falls outside its range, use a type with a bigger range!

Integer division

When dividing two integers, C++ works like you’d expect when the quotient is a whole number:

```cpp
#include <iostream>

int main()
{
    std::cout << 20 / 4;

    return 0;
}
```

This produces the expected result:

5

But let’s look at what happens when integer division causes a fractional result:

```cpp
#include <iostream>

int main()
{
    std::cout << 8 / 5;

    return 0;
}
```

This produces a possibly unexpected result:

1

When doing division with two integers (called integer division), C++ always produces an integer result. Since integers can’t hold fractional values, any fractional portion is simply dropped (not rounded!).

Taking a closer look at the above example, 8 / 5 produces the value 1.6. The fractional part (0.6) is dropped, and the result of 1 remains.

Similarly, -8 / 5 results in the value -1.

Warning

Be careful when using integer division, as you will lose any fractional parts of the quotient. However, if it’s what you want, integer division is safe to use, as the results are predictable.

If fractional results are desired, we show a method to do this in lesson 5.2 -- Arithmetic operators.


Alex,

I came back here from the bitwise lesson. I think I missed something about storing integers that you may have written somewhere, but I don't recall seeing it.

How does C++ or the compiler handle all the leading zeros when your system and mine need 4 bytes (32 bits) to store small integers like 1 (one)? Certainly other data types or situations require throwing out leading or trailing zeros.

char also comes to mind here. What happens if someone uses a long long integer and then stores a small number there?

Can you give us some insight? Thanks

The compiler or CPU's instruction set should handle the padding as appropriate. For example, if you assign the integer value 1 to a 32-bit memory location, it will assign the value 00000000 00000000 00000000 00000001. It's not something you need to worry about as a programmer.

Why do we need to write "return 0;" in the last line of this program?

```cpp
#include <iostream>

int main()
{
    using namespace std;

    unsigned short x = 0; // smallest 2-byte unsigned value possible
    cout << "x was: " << x << endl;

    x = x - 1; // overflow!
    cout << "x is now: " << x << endl;

    return 0;
}
```

Typos.

"See lesson 23 (2.3) -- variable sizes and the sizeof operator"

"In lesson 21 (2.1) -- Basic addressing and variable declaration"

"If there is any doubt (suspicion) that a variable might need to store a value that falls outside its range, use a larger variable!" Either say 'suspicion that it won't work' or 'doubt that it will work', not 'doubt that it won't work' (double negative). You probably just started saying one these and switched when writing.

Fixed! Many thanks.

sizeof(long) on my machine is 8 and not 4 as mentioned above (64-bit Ubuntu).

4 bytes is the minimum size that C++ guarantees a long will be. However, it can be more on some architectures, such as the one you are using.

The first line calls even the negative numbers whole numbers, which is mathematically incorrect.

You are correct. I've updated the terminology.

Minor thing I noticed while reading through:

The decimal / binary value table would make a bit more sense if the binary values were right-aligned on the table instead of left aligned. This would imply visually that each new digit added appears on the left, rather than the right as new bits are added.

It'd technically also be useful to highlight the new digit with a bold font each time it goes up a bit for much the same reason.

It's not a big deal, but it'd likely make it easier for people to visualize what's going on with the overflow example immediately after. =3

Good idea to use right-alignment. It does make the table more comprehensible. Thanks for the suggestion.

Hi again,

around the example of overflow of 65535 + / - 1, the quotation marks are misplaced in the codes.

It should be:

instead of:

Also, some redirections have names like "24-integers" instead of "2.4-integers," although it doesn't really matter.

One final question:

I understand how 65535 + 1 becomes zero, but I can't understand how 0 - 1 becomes 65535 for unsigned shorts.

Matthew

Thanks, I fixed the quotes.

Regarding the last question, I just answered that one: here.

Why specify that an integer is signed? I understand why we'd specify it's unsigned, but just declaring an int without adding "signed" still lets you input negative numbers. So why?

The primary use of the signed keyword is to explicitly specify whether char is signed or unsigned (since it could be either by default). Although you can use it with the other integer types, it is completely redundant.

hey im new to programming . i have one doubt.

Isnt 'char' used to denote characters ?

how come youre telling its an integer data type??

It is an integral data type, in that it can only represent integers, the same limitation that the other integral types (short, int, long) have.

The special handling of char traces back to the early days, when there were only 8 bits available to assign for text characters on a display screen.

You can do arithmetic with char types, you just have to be careful when you want to look at the results.

```cpp
char a = 65, b = 66;
cout << a << b << endl;  // prints the ASCII characters "AB"
cout << a + b << endl;   // prints 131, the sum of 65 + 66
```

Notice how the a+b evaluation is promoted to int type, so you see the actual sum of 65+66 displayed, but when plain char variables are inserted into cout, the ASCII characters are output back to you.

If you are using char to write or read text, as most uses of it are, everything is cool as can be; in fact, C-style strings are simply of type char*.

Ummm... please ignore the program I put up above (it's displayed incorrectly). Although I did use the code from the comprehensive quiz from 1.11 (using an unsigned int) to try adding up two negatives, -1 and -1, and still got a proper answer: -2. When I tried an unsigned short, I got issues (I got the number 131070, which is way more than what an unsigned short is supposed to give based on the table above, seeing as a short is an unsigned 2-byte integer variable).

What I also wanted to know was (and I think I might know the answer after thinking back on games), is the 0-255 supposed to be LITERALLY the number 255, or as in 255 digits?

Sorry for triple posting, but I think I found the problem. Please correct me on these assumptions (for which I'm using the comprehensive quiz question from 1.11):

1. The integer variable within the "int readnumber()" function limits the maximum/minimum number it can reach (hence with an unsigned short, it's 65,535) while the "void writeanswer(int x)" allows for a maximum/minimum of what int is capable of (which is a larger number). Hence, if the integer variable within the readnumber function was capable of int size while writenumber is only (writenumber(short x)), then the maximum/minimum achievable is only what short is (which is 65,535).

2. Doing an unsigned short within the "int readnumber()" function, if we input -1 for the first number and -1 for the second, we get 131070 because they each take a step back from 0 and arrive at 65,535 each (hence a total of 131,070). This is still strange for me because unsigned int still gives the proper -2 after inputting -1 (for the first) and -1 (for the second). Unsigned short is NOT capable of adding up -1 and -1, but unsigned int is.

3. I think I had one more assumption, but it escapes me right now.

oh...my...god....I can actually feel my brain cells evaporating. I have NO idea what this article is talking about and can't understand it no matter how many times I try to read it.

Using code::blocks and the program from the comprehensive quiz from chapter 1:

----------------------------

main.cpp:

```cpp
#include <iostream>
#include "io.h"

using namespace std;

int main()
{
    cout << "short: " << sizeof(short) << endl;

    int b = ReadNumber();
    int z = ReadNumber();
    WriteAnswer(b + z);

    return 0;
}
```

io.cpp:

```cpp
#include <iostream>

using namespace std;

int ReadNumber()
{
    cout << "Enter a number: ";
    int x;
    cin >> x;
    return x;
}

void WriteAnswer(int a)
{
    cout << "Total: " << a << endl;
}
```

io.h:

```cpp
#ifndef IO_H
#define IO_H

int ReadNumber();
void WriteAnswer(int a);

#endif // IO_H
```

-----------------------------

When I run the program I can still calculate -10 + -10 to a total of -20, am I (grossly) misunderstanding this lesson? I'm trying to understand the whole bytes thing and such but I'm not getting it at all. When I use unsigned int, I can calculate -1 + -1, but for some reason with unsigned short, -1 + -1 gives me what I'm assuming is an overflow issue (I get 131070 as a total). Please help :(


i can understand how integer overflow happens when you increase an unsigned integer -

i.e. 65535 = 1111 1111 1111 1111

and 65536 = [1] 0000 0000 0000 0000

so only given 2 bytes of data this reverts back to meaning 0.

However, I can't understand how this happens in reverse?

i.e. 0 = 0000 0000 0000 0000

So when you subtract one to get -1, how does this revert back to 1111 1111 1111 1111

in terms of how memory is stored?

You are working with an unsigned int, there is no negative value.

So you cannot ever reach a value of -1, 0 is as low as you can go before overflow.

Counting down: 5 4 3 2 1 0 65535 65534 65533 ...

You guys ask hard questions.

The C99 spec says: "the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the newtype until the value is in the range of the newtype".

So in this case, -1 is converted to an unsigned int by adding (UINT_MAX + 1) to the value. The resultant value (UINT_MAX) is between 0 and UINT_MAX. UINT_MAX is the maximum unsigned int (e.g. 65535 for a 16-bit unsigned). So -1 maps to 65535.

The C++ spec essentially says the same thing, only in a much more complicated way.

As an aside, it turns out that using two's complement as the underlying representation makes this trivial.

Consider: -1 in two's complement (using 16 bits):

binary representation for 1: 0000 0000 0000 0001

flip the bits: 1111 1111 1111 1110

add 1: 1111 1111 1111 1111 in two's complement

1111 1111 1111 1111 as an unsigned = 65535

Consider: -2 in two's complement:

binary representation for 2: 0000 0000 0000 0010

flip the bits: 1111 1111 1111 1101

add 1: 1111 1111 1111 1110 in two's complement

1111 1111 1111 1110 as an unsigned = 65534

Consider: -65535 in two's complement (yes, this is outside the range for a 16-bit signed number, and should take 17 bits to represent properly in two's complement, but we only have 16 bits, so let's use them and see what happens)

binary representation for 65535: 1111 1111 1111 1111

flip the bits: 0000 0000 0000 0000

add 1: 0000 0000 0000 0001 in two's complement

0000 0000 0000 0001 as an unsigned = 1 (it still works!)

So if the compiler is using two's complement binary representation for signed numbers (which many do), then all that's needed is to interpret the number as an unsigned number.

When you say an int variable has a size of 2 bytes or 4 bytes, what do you mean? Does it dynamically change size from 2 to 4 bytes as the number gets larger?

No, it's up to your compiler, which generally picks an appropriate value based on your computer's architecture.

This means int will always have the same fixed size on a given system.

What is the difference between the "long" and "int" variable types? (They both have the same size).

They are same size on your platform. That does not guarantee they are the same size on another platform.

In your particular case there is no difference. But never assume it's also the same for anyone else.

A long is guaranteed to be the same size or larger than an int on the same machine. That's as far as the contract goes.

I've updated the tutorials to indicate that different variables have a guaranteed minimum size. For int, it's 2 bytes. For long, it's 4 bytes.

For "1 byte signed" the range is -128 to 127. How is -128 represented in 1 byte? Doesn't it cause an overflow?

127 => 01111111

-128 => 111111111 (the right most 1 represents - negative)

Also, why can't we represent 128 in 1 byte (why is the range only up to 127)?

128 => 10000000

Thx

A signed char -128 in binary format is 10000000

Now you may question why is this so if the left most bit is the sign bit and the other seven bits are the value?

Well you don't want to ever use -0, that's kind of useless.

So the last 7 bits are actually a twos-complement of the absolute value of the negative value.

See http://en.wikipedia.org/wiki/Two%27s_complement for an explanation of twos-complement.

I discuss two's complement in section 3.7 -- converting between binary and decimal

Can someone tell me why this does not work?

As per the above lesson, char is an integral type, so this should work.

This is because when you insert a char into std::cout (which is actually `basic_ostream<char>`), it doesn't display on the console as the literal value in base 10; it displays as a single ASCII character. If it worked as you assumed, then

`cout << "Hello";`

would print `72101108108111` (if the iomanipulator flag were set to std::dec). Cast the char to int before insertion:

```cpp
unsigned char ch1 = 5;
std::cout << "The value of ch1 is : " << (int)ch1 << endl;
```

Then you will see the value of the char displayed as you wished it to be.

Correct, std::cout prints characters as ASCII values instead of integer values because that is what they are more often used for.

(int)ch1 is an old-school C-style cast. In C++ you should prefer static_cast<int>(ch1).

We discuss casting in lesson 4.4 -- Type conversion and casting.

oops...

12 - 1100

13 - 1101

14 - 1110

15 - 1111

Adam,

In the overflow section, I noticed you only have 14 numbers. Shouldn't there be 16 since you are starting from 0. Wouldn't it be:

0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1101, 1111

0 - 0

1 - 1

2 - 10

3 - 11

4 - 100

5 - 101

6 - 110

7 - 111

8 -1000

9 -1001

10-1010

11-1011

12-1101

13-1110

14-1111

15-10000

?

I think it is 0-1-10-11-100-101-110-111-1000-1001-1010-1011-1100-1101-1110 to make a 15, and to make a 16, just add a 1111......

Let me know if I am mistaken.....great tutorials...

Concrete basics of binary counting:

If you understand that in decimal system 248 stands for

2 x 10^2 +

4 x 10^1 +

8 x 10^0

and 13 for

1 x 10^1 +

3 x 10^0

then it is quite easy to convert in your head small decimals to binary and vice versa. For example 1011 is

1 x 2^3 = 8 +

0 x 2^2 = 0 +

1 x 2^1 = 2 +

1 x 2^0 = 1

= 11.

When you go one integer (1) up in whatever the base (binary, octal, decimal etc.) you increase the lowest nominator to the highest possible until it reaches maximum, then you increase the second to the lowest if possible, if not, the third to the lowest etc. For example in octal system next from 4677 is 4700, because you can't get higher than 7 (and after that of course 4701). In binary next from 1011 is 1100 (because from right to left first two 1's can't get higher). After 1100 -> 1101, 1110, 1111. And you can assure it by counting:

1100 = 8 + 4 + 0 + 0 = 12

1101 = 8 + 4 + 0 + 1 = 13

1110 = 8 + 4 + 2 + 0 = 14

1111 = 8 + 4 + 2 + 1 = 15.

Yes, fixed. :) Thanks for pointing that out.

The simplest method of converting binary to decimal is to write successive powers of 2 below each binary digit and strike out the powers below the 0 digits.

For example, take the number 1101001:

1 1 0 1 0 0 1

2^6 2^5 2^4 2^3 2^2 2^1 2^0

Now cancel all the powers below a 0 digit, i.e. cancel 2^1, 2^2, and 2^4.

Add the others: 1 + 8 + 32 + 64

= 105

65535 is 0011011000110101001101010011001100110101

65536 is 0011011000110101001101010011001100110110

how is short able to maintain the first but not the second?

can u please explain?

I don't think you have your binary right,

That's called BCD (Binary Coded Decimal)

How do you prevent an integer overflow?

My program prevents users entering a number higher than a billion, that works just fine. But, however, if a user enters a number that exceeds the integer-range, my program gets stuck in a loop. Is there a way to prevent a user from entering a number that causes an overflow?

There are (at least) a few possible ways to do this:

1) Read the user's input as a string, validate that the user entered something that fits in your variable, and then convert the string to your numeric value.

2) Read in the user's input character by character and validate that input as they enter it (stop them from entering any character that would overflow your variable).

Neither of these is easy.

Thanks again for your excellent tutorials Alex.

Just 2 questions:

1. in the 'Range' table above, 4 byte unsigned 0 to 4,294,967,296: is it 4,294,967,296 or 4,294,967,295?

2. when Stroustrup says “The unsigned integer types are ideal for uses that treat storage as a bit array.", does he mean when you are using the bits within a variable to check if they are on or off?

1) 4,294,967,295. I fixed the error.

2) A bit array is typically used when you have a bunch of independent bit-size variables (booleans) and want to store them in a compact format. So yes, using the individual bits within a variable. An unsigned variable would be better for this purpose than a signed one is because the underlying (binary) representation is well defined. The underlying (binary) representation for signed variables can vary from system to system.

What do you do if 4/8/16 bits isn't big enough? For instance, number theorists like to do arithmetic on very big numbers, ~ 100 to 200 digits large.

On modern architectures, generally longs are 32 bits. Most modern compilers also give you access to a 64-bit integer type (often called a long long, but sometimes it has other names, like __int64).

However, if you need even larger integers, then you will have to write your own data type. You will learn how to do this in the section on classes (chapter 8).

Great tutorial! Knew nothing this morning, now already something.

Detail: the math behind the table with unsigned/signed range:

doesn't the n-bit unsigned variable have a range of 0 to 2^n instead of 0 to 2^n-1?

As it was mentioned above:

As you learned in the last section, a variable with n bits can store 2^n different values...

As 0 (zero) is also a value, the maximum number is (2^n)-1 and range becomes 0 ... (2^n)-1 (inclusive).

When overflow happens, is it dangerous? Can it change other memory bits (that may be used by other variables/applications)?

OK, I understand after reading forward.

Dangerous because it could change the other variables.

Thank you

Actually, overflow will just result in the most significant bits being lost. It won't overflow into other variables.

> Actually, overflow will just result in the most significant bits being lost. It won't overflow into other variables.

It's just because mathematical operations do not work with memory directly. The operand is put into CPU register (mostly EAX (on x86 machines) or its part - as the only register for integer mathematical purposes) for processing. The result (which can also occupy EDX register) is then taken from the initial place (EAX register) leading to higher bits lose.

However, if you're dealing with putting the contents of the EAX register back into memory and the memory isn't large enough to hold the register's value (putting it into a char variable) that might cause problems.

Back to the original "is it dangerous?" if your plane altimeter value overflows and the auto-pilot now thinks you're at 0 above ground and says CLIMB - NOW when you're really way up in the air, who knows what could happen? Dangerous all depends on the application.

A lot of computer exploits occur when something is overflowed or underflowed and the OS switches into protected modes for recovery, and next thing you know your PC is compromised and sending out spam to thousands of people... or you get a blue screen of death or an eternal spinny (roulette) wheel of death.


Thanks for your tutorial,

now I'm able to make my own String, Array, etc. classes similar to std::string and std::vector after 4 months of learning C++

Thank you very much.

I have a question- if long and int are both the same amount of bytes, do they hold the same amount?

Yes.

Another great set of examples, thanks.

Fun fact: The old Final Fantasy games on the NES only allowed your stats to go up to 255 because they used a 1-byte unsigned variable to store stats. It would be neat to see someone allude to this in a modern game... especially when memory isn't a huge issue anymore. :)

Memory is ALWAYS a huge issue. Modern games usually have more stats or more monsters to keep track of, so doubling or quadrupling the memory needs just because you can is never a good idea. Designing good rules usually allows you to reduce the range of stats rather than extend it.

That is actually a really difficult question. :) I have seen various programmers argue it either way, and there's no clear answer.

Bjarne Stroustrup (who designed C++) says, "The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules."

I think Bjarne is correct on this one.

Using signed instead of unsigned integers even when you don't expect negative numbers gives you a few benefits:

1) Many programmers use signed integers even when only dealing with positive numbers, because negative numbers can then be used as "error conditions". For example, it's pretty common to write a function that is expected to return a positive number. However, you can have it return a negative number if something goes wrong. That way, the caller has a way of detecting something went astray. (Note: You can also use exception handling as an alternative mechanism for returning errors)

2) What happens in this case:

```cpp
int foo(unsigned int nValue)
{
    // something
}
```

caller:

```cpp
int nValue = foo(-1);
```

3) If you expect a number to be positive, and your signed variable suddenly has a negative value, that's a good indication your algorithm is wrong.

In short, as a rule of thumb, unless you have a good reason not to, it is better to use signed integers.

From the line "Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea." what I understood is that unsigned integers save 1 bit in memory because their value is always positive and there is no need to use an extra bit to decide whether the value is going to be positive or negative. Am I right? Forgive me, if my English is not so good.

One comment for this tutorial in my language (Hindi):

Mast hai bhai...

That means...

This tutorial is awesome bro.

Unsigned integers don't really "save 1 bit in memory", they just put their bits to use in a different way.

If you look at the range for an unsigned 8-bit number, you'll see that it's 0 to 255.

If you look at the range for a signed 8-bit number, you'll see that it's -128 to 127.

Both signed and unsigned numbers use all 8 bits to represent 256 possible unique values. It's just that the range of numbers they can represent is slightly different.

It's better to use unsigned rather than signed, and here's why. RISC processors like the 8051 don't have multiply or divide instructions so it has to be done with a library. When multiplying (or dividing) signed values, the function has to do the following:

1) Extract the sign bits for both the multiplicand and multiplier

2) Convert both values to unsigned

3) Perform the multiplication

4) Use the saved sign bits to find the sign of the product

5) Apply the sign to the product

When multiplying (or dividing) unsigned values, the function has to do the following:

1) Perform the multiplication

If the application is time critical, it's better to use unsigned to eliminate a bunch of unnecessary steps. Even if your processor has the signed multiply and divide, it's a good habit to not use something if you don't need it.

As for "Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules": Using signed variables to circumvent implicit conversion rules is just bad programming. You should know the rules and know what will happen if you mix unsigned and signed variables in an expression. And you should always use explicit casts when mixing variables.

I work with an embedded 8051 project that has about 50,000 lines of code. We have hundreds of variables, mostly unsigned. My estimate is we have no more than a couple dozen signed variables, and they are used for incrementing (1) and decrementing (-1) when using stepper motors. Timing is critical so using signed variables is out of the question because we use a lot of multiplication and division. I've seen the generated assembly code and unsigned is clearly the winner.

> It’s better to use unsigned rather than signed

No, it's not. For your very specific performance-critical use case on a processor that has crippled handling of signed values, you might make the call to favor unsigned over signed for performance reasons.

But for general computing, the best minds in the field have decided that using signed is safer than using unsigned. Most modern processors support both signed and unsigned arithmetic operations natively, so the performance difference between the two is negligible. Even knowing the rules about how signed and unsigned values interact, it's easy to get into trouble, especially if you mix them (which can happen inadvertently). It's better to program defensively and optimize later where needed.

The C++ style guidelines from Google explicitly state, "You should not use the unsigned integer types such as uint32_t, unless there is a valid reason such as representing a bit pattern rather than a number, or you need defined overflow modulo 2^N. In particular, do not use unsigned types to say a number will never be negative. Instead, use assertions for this.". Those Google guys are pretty smart -- they must have had a good reason to include this.

There are actually fewer error states for unsigned numbers than signed; overflow on signed is actually UNDEFINED. That means anything could happen at all. Plus, using the negation operator (-) on a minimum-value signed number is also undefined. I've seen it both ignored entirely, AND also sudden and unexplained program termination as a result.

gcc silently ignores it with default settings (at least my obsolete 4.4.5 version does -- it prints the resulting x as -2147483648, whereas negating -5 correctly yields 5), while I was actually getting program termination out of MSVC when compiling a Windows binary. Neither is an acceptable answer. In fact, there ISN'T an acceptable answer, since the correct result, 2147483648, cannot be represented in a signed 32-bit integer. Keep in mind that an abs() macro probably uses -x internally.

Undefined behavior is especially nasty because "modern" compiler writers will often optimize it away entirely, even when it's actually VALID on the target platform. (And "modern" goes in quotes: MSVC and gcc struggle to show you where an error happened, whereas SAS/C usually points directly at the offending place. Modern != necessarily better.)

The only real error state for an unsigned value (assuming you aren't mixing and matching with floating-point and signed values, which brings in all the error states of those two nasty number systems) is dividing by zero... which is present in every other representation too.

Plus, unsigned values have some cases where you can skip tests entirely.

In the example above, there's no need to test whether u >= 0, because it always is. Note that it may not be the correct buffer in either case, since the functions have no error states -- a user might have typed 99999999999999999999, for instance. The stack won't be smashed either way, but the unsigned case involves one fewer comparison (and comparisons and branches will NEVER get cheaper).

I personally like to use an unsigned integer to represent values in the range 0.0 <= x < 1.0. This works especially well for angles that have to be constrained to a 360-degree circle: I can literally just add to or subtract from the unsigned value and let overflow handle the wrapping. I don't have to do crap like x = x % 360 or other modf/fmod performance-eating, error-prone nonsense.

A 32-bit uint also has higher resolution than a single-precision float (four effective bytes of mantissa versus three for the float in this case). Plus, the rest of the math tends to be fp-heavy while the integer execution units/ports sit idle anyway, so bonus performance. Doubly so when I can do something like SineTable[Angle >> 16] instead of sin(blah) -- that's actually faster, AND getting faster (memory is improving faster than CPU performance these days: it was only two times faster in the Core 2 era, but on my i7-3820 it's six times faster). All of that would be invalid/undefined with a signed integer. The only real drawback is that 15 degrees is 178956970 -- but then again, the standard math library works in radians, where it would be approximately 0.261799 (in single precision).

There are some gotchas with unsigned in common novice uses, but that's why they're NOVICES. Stuff like for(u=9;u>=0;u--) -- here u>=0 is always true (using the "u" from the example above, naturally). That would have to be rewritten to use a signed integer, an offset, or a do/while(u!=0) sort of construct... assuming 0 was intended to be included in the first place. The novice programmer likely included zero by mistake there anyhow ;)

BTW, just because "smart people" do something doesn't mean it's right. My Galaxy S7's software (Android -- Google, *cough*) is a good example: it uses over 2.1 gigabytes of RAM (out of 4) and is less responsive than a significantly hardware-inferior iPhone.

Anyhow, I've otherwise been enjoying your article, even if it is biased towards new users. It's pretty clear and well-written. A current project requires some integration with C++ code, and this has been helping me update my C++ skill set from the old pre-namespace era. When I first started on that project, I thought Perl had somehow attacked the source :)

Would you say it's a good idea to work with signed integers even when it's unlikely the number will ever be negative, just as a precaution? Or maybe allow unsigned but have a warning print to the screen?