An integer type (sometimes called an integral type) variable is a variable that can only hold non-fractional numbers (e.g. -2, -1, 0, 1, 2). C++ has five different fundamental integer types available for use:
Category   Type        Minimum Size   Note
character  char        1 byte
integer    short       2 bytes
           int         2 bytes        Typically 4 bytes on modern architectures
           long        4 bytes
           long long   8 bytes        C99/C++11 type
Char is a special case, in that it falls into both the character and integer categories. We’ll talk about the special properties of char later. In this lesson, you can treat it as a normal integer.
The key difference between the various integer types is that they have varying sizes: the larger integers can hold bigger numbers. Note that C++ only guarantees that integers will have a certain minimum size, not that they will have a specific size. See lesson 2.3 (Variable sizes and the sizeof operator) for information on how to determine how large each type is on your machine.
Defining integers
Defining some integers:
char c;
short int si;      // valid
short s;           // preferred
int i;
long int li;       // valid
long l;            // preferred
long long int lli; // valid
long long ll;      // preferred
While short int, long int, and long long int are valid, the shorthand versions short, long, and long long should be preferred. In addition to being less typing, the int suffix makes the type harder to distinguish from variables of type int. This can lead to mistakes if the short or long modifier is inadvertently missed.
Identifying integers by size
Because the size of char, short, int, and long can vary depending on the compiler and/or computer architecture, it can be instructive to refer to integers by their size rather than name. We often refer to integers by the number of bits a variable of that type is allocated (e.g. “32-bit integer” instead of “long”).
Integer ranges and sign
As you learned in the last section, a variable with n bits can store 2^n different values. But which specific values? We call the set of specific values that a data type can hold its range. The range of an integer variable is determined by two factors: its size (in bits), and its sign, which can be “signed” or “unsigned”.
A signed integer is a variable that can hold both negative and positive numbers. To explicitly declare a variable as signed, you can use the signed keyword:
signed char c;
signed short s;
signed int i;
signed long l;
signed long long ll;
By convention, the keyword “signed” is placed before the variable’s data type.
A 1-byte signed integer has a range of -128 to 127. Any value between -128 and 127 (inclusive) can be put in a 1-byte signed integer safely.
Sometimes, we know in advance that we are not going to need negative numbers. This is common when using a variable to store the quantity or size of something (such as your height: it doesn’t make sense to have a negative height!). An unsigned integer is one that can only hold non-negative values. To explicitly declare a variable as unsigned, use the unsigned keyword:
unsigned char c;
unsigned short s;
unsigned int i;
unsigned long l;
unsigned long long ll;
A 1-byte unsigned integer has a range of 0 to 255.
Note that declaring a variable as unsigned means that it cannot store negative numbers, but it can store positive numbers that are roughly twice as large.
Now that you understand the difference between signed and unsigned, let’s take a look at the ranges for different sized signed and unsigned variables:
Size/Type         Range
1 byte signed     -128 to 127
1 byte unsigned   0 to 255
2 byte signed     -32,768 to 32,767
2 byte unsigned   0 to 65,535
4 byte signed     -2,147,483,648 to 2,147,483,647
4 byte unsigned   0 to 4,294,967,295
8 byte signed     -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
8 byte unsigned   0 to 18,446,744,073,709,551,615
For the math inclined, an n-bit signed variable has a range of -(2^(n-1)) to (2^(n-1))-1. An n-bit unsigned variable has a range of 0 to (2^n)-1. For the non-math inclined… use the table. 🙂
New programmers sometimes get signed and unsigned mixed up. The following is a simple way to remember the difference: in order to differentiate negative numbers from positive ones, we typically use a negative sign. If a sign is not provided, we assume a number is positive. Consequently, an integer with a sign (a signed integer) can tell the difference between positive and negative. An integer without a sign (an unsigned integer) assumes all values are positive.
Default signs and integer best practices
So what happens if we do not declare a variable as signed or unsigned?
Category   Type        Default Sign         Note
character  char        Signed or Unsigned   Usually signed
integer    short       Signed
           int         Signed
           long        Signed
           long long   Signed
All integer variables except char are signed by default. Char can be either signed or unsigned by default (but is usually signed for conformity).
Generally, the signed keyword is not used (since it’s redundant), except on chars (when necessary to ensure they are signed).
Best practice is to avoid use of unsigned integers unless you have a specific need for them, as unsigned integers are more prone to unexpected bugs and behaviors than signed integers.
Rule: Favor signed integers over unsigned integers
Overflow
What happens if we try to put a number outside of the data type’s range into our variable? Overflow occurs when bits are lost because a variable has not been allocated enough memory to store them.
In lesson 2.1 (Basic addressing and variable declaration), we mentioned that data is stored in binary format.
In binary (base 2), each digit can only have 2 possible values (0 or 1). We count from 0 to 15 like this:
Decimal Value   Binary Value
0                  0
1                  1
2                 10
3                 11
4                100
5                101
6                110
7                111
8               1000
9               1001
10              1010
11              1011
12              1100
13              1101
14              1110
15              1111
As you can see, the larger numbers require more bits to represent. Because our variables have a fixed number of bits, this puts a limit on how much data they can hold.
Overflow examples
Consider a hypothetical unsigned variable that can only hold 4 bits. Any of the binary numbers enumerated in the table above would fit comfortably inside this variable (because none of them are larger than 4 bits).
But what happens if we try to assign a value that takes more than 4 bits to our variable? We get overflow: our variable will only store the 4 least significant (rightmost) bits, and the excess bits are lost.
For example, if we tried to put the decimal value 21 in our 4-bit variable:
Decimal Value   Binary Value
21                 10101

21 takes 5 bits (10101) to represent. The 4 rightmost bits (0101) go into the variable, and the leftmost bit (1) is simply lost. Our variable now holds 0101, which is the decimal value 5.
Note: At this point in the tutorials, you’re not expected to know how to convert decimal to binary or vice versa. We’ll discuss that in more detail in section 3.7 (Converting between binary and decimal).
Now, let’s take a look at an example using actual code, assuming a short is 16 bits:
#include <iostream>

int main()
{
    unsigned short x = 65535; // largest 16-bit unsigned value possible
    std::cout << "x was: " << x << std::endl;
    x = x + 1; // 65536 is out of our range; we get overflow because x can't hold 17 bits
    std::cout << "x is now: " << x << std::endl;
    return 0;
}
What do you think the result of this program will be?
x was: 65535
x is now: 0
What happened? We overflowed the variable by trying to put a number that was too big into it (65536), and the result is that our value “wrapped around” back to the beginning of the range.
For advanced readers, here’s what’s actually happening behind the scenes: the number 65,535 is represented by the bit pattern 1111 1111 1111 1111 in binary. 65,535 is the largest number an unsigned 2-byte (16-bit) integer can hold, as it uses all 16 bits. When we add 1 to the value, the new value should be 65,536. However, the bit pattern of 65,536 is represented in binary as 1 0000 0000 0000 0000, which is 17 bits! Consequently, the highest bit (which is the 1) is lost, and the low 16 bits are all that is left. The bit pattern 0000 0000 0000 0000 corresponds to the number 0, which is our result.

Similarly, we can overflow the bottom end of our range as well, resulting in “wrapping around” to the top of the range.
#include <iostream>

int main()
{
    unsigned short x = 0; // smallest 2-byte unsigned value possible
    std::cout << "x was: " << x << std::endl;
    x = x - 1; // overflow!
    std::cout << "x is now: " << x << std::endl;
    return 0;
}
x was: 0
x is now: 65535
Overflow results in information being lost, which is almost never desirable. If there is any suspicion that a variable might need to store a value that falls outside its range, use a larger variable!
Also note that the results of overflow are predictable only for unsigned integers. Overflowing signed integers or non-integers (e.g. floating point numbers) may produce different results on different systems.
Rule: Do not depend on the results of overflow in your program.
Integer division
When dividing two integers, C++ works like you’d expect when the result is a whole number:
#include <iostream>

int main()
{
    std::cout << 20 / 4;
    return 0;
}
This produces the expected result:
5
But let’s look at what happens when integer division causes a fractional result:
#include <iostream>

int main()
{
    std::cout << 8 / 5;
    return 0;
}
This produces a possibly unexpected result:
1
When doing division with two integers, C++ produces an integer result. Since integers can’t hold fractional values, any fractional portion is simply dropped (not rounded!).
Taking a closer look at the above example, 8 / 5 produces the value 1.6. The fractional part (0.6) is dropped, and the result of 1 remains.
Rule: Be careful when using integer division, as you will lose any fractional parts of the result.
Would you say it is a good idea to work with signed integers even if it is unlikely the number would be negative, just as a precaution? Or maybe allow it but have a warning print to the screen?
That is actually a really difficult question. 🙂 I have seen various programmers argue it either way, and there’s no clear answer.
Bjarne Stroustrup (who designed C++) says, “The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules.”
I think Bjarne is correct on this one.
Using signed instead of unsigned integers even when you don’t expect negative numbers gives you a few benefits:
1) Many programmers use signed integers even when only dealing with positive numbers, because negative numbers can then be used as “error conditions”. For example, it’s pretty common to write a function that is expected to return a positive number. However, you can have it return a negative number if something goes wrong. That way, the caller has a way of detecting something went astray. (Note: You can also use exception handling as an alternative mechanism for returning errors)
2) What happens in this case:
int foo(unsigned int nValue)
{
    // something
}

caller:

int nValue = foo(-1);
The -1 gets silently converted into an unsigned integer (which would be a large positive number), and the function has no way of detecting that an invalid input was given to it.
3) If you expect a number to be positive, and your signed variable suddenly has a negative value, that’s a good indication your algorithm is wrong.
In short, as a rule of thumb, unless you have a good reason not to, it is better to use signed integers.
From the line "Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea." what I understood is that unsigned integers save 1 bit in memory because their value is always positive and there is no need to use an extra bit to decide whether the value is going to be positive or negative. Am I right? Forgive me, if my English is not so good.
One comment for this tutorial in my language (Hindi):
Mast hai bhai…
That means…
This tutorial is awesome bro.
Unsigned integers don’t really “save 1 bit in memory”, they just put their bits to use in a different way.
If you look at the range for an unsigned 8-bit number, you’ll see that it’s 0 to 255.
If you look at the range for a signed 8-bit number, you’ll see that it’s -128 to 127.
Both signed and unsigned numbers use all 8 bits to represent 256 possible unique values. It’s just that the range of numbers they can represent is slightly different.
It’s better to use unsigned rather than signed, and here’s why. RISC processors like the 8051 don’t have multiply or divide instructions so it has to be done with a library. When multiplying (or dividing) signed values, the function has to do the following:
1) Extract the sign bits for both the multiplicand and multiplier
2) Convert both values to unsigned
3) Perform the multiplication
4) Use the saved sign bits to find the sign of the product
5) Apply the sign to the product
When multiplying (or dividing) unsigned values, the function has to do the following:
1) Perform the multiplication
If the application is time critical, it’s better to use unsigned to eliminate a bunch of unnecessary steps. Even if your processor has the signed multiply and divide, it’s a good habit to not use something if you don’t need it.
As for "Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules.”: Using signed variables to circumvent implicit conversion rules is just bad programming. You should know the rules and know what will happen if you mix unsigned and signed variables in an expression. And you should always use explicit casts when mixing variables.
I work with an embedded 8051 project that has about 50,000 lines of code. We have hundreds of variables, mostly unsigned. My estimate is we have no more than a couple dozen signed variables, and they are used for incrementing (+1) and decrementing (-1) when using stepper motors. Timing is critical, so using signed variables is out of the question because we use a lot of multiplication and division. I’ve seen the generated assembly code and unsigned is clearly the winner.
> It’s better to use unsigned rather than signed
No, it’s not. For your very specific performance-critical use case on a processor that has crippled handling of signed values, you might make the call to favor unsigned over signed for performance reasons.
But for general computing, the best minds in the field have decided that using signed is safer than using unsigned. Most modern processors support both signed and unsigned arithmetic operations natively, so the performance difference between the two is negligible. Even knowing the rules about how signed and unsigned values interact, it’s easy to get into trouble, especially if you mix them (which can happen inadvertently). It’s better to program defensively and optimize later where needed.
The C++ style guidelines from Google explicitly state, “You should not use the unsigned integer types such as uint32_t, unless there is a valid reason such as representing a bit pattern rather than a number, or you need defined overflow modulo 2^N. In particular, do not use unsigned types to say a number will never be negative. Instead, use assertions for this.” Those Google guys are pretty smart; they must have had a good reason to include this.
Fun fact: The old Final Fantasy games on the NES only allowed your stats to go up to 255 because they used a 1-byte unsigned variable to store stats. It would be neat to see someone allude to this in a modern game… especially when memory isn’t a huge issue anymore. 🙂
Memory is ALWAYS a huge issue. Modern games usually have more stats or more monsters to keep track of, so doubling or quadrupling the memory needs, just because you can, is never a good idea. Designing good rules usually allows you to reduce the height of stats, rather than extend it.
Another great set of examples, thanks.
I have a question: if long and int are both the same number of bytes, do they hold the same amount?
Yes.
When overflow occurs, is it dangerous? Can it change other memory bits (that may be used by other variables/applications)?
OK, I understand after reading forward.
It would be dangerous if it could change the other variables.
Thank you
Actually, overflow will just result in the most significant bits being lost. It won’t overflow into other variables.
It’s just because mathematical operations do not work with memory directly. The operand is put into a CPU register (mostly EAX on x86 machines, or part of it, as the only register for integer mathematical purposes) for processing. The result (which can also occupy the EDX register) is then taken from the initial place (the EAX register), leading to the higher bits being lost.
However, if you’re dealing with putting the contents of the EAX register back into memory and the memory isn’t large enough to hold the register’s value (putting it into a char variable) that might cause problems.
Back to the original “is it dangerous?”: if your plane altimeter value overflows and the autopilot now thinks you’re at 0 above ground and says CLIMB NOW when you’re really way up in the air, who knows what could happen? Dangerous all depends on the application.
A lot of computer exploits occur when something is overflowed or underflowed and the OS switches into protected modes for recovery, and the next thing you know your PC is compromised and sending out spam to thousands of people… or you get a blue screen of death or an eternal spinny (roulette) wheel of death.
Thanks for your tutorial,
now I’m able to make my own String, Array, etc. classes similar to std::string and vector after 4 months of learning C++
Thank you very much.
Great tutorial! Knew nothing this morning, now already something.
Detail: the math behind the table with unsigned/signed range:
doesn’t the n-bit unsigned variable have a range of 0 to 2^n instead of 0 to (2^n)-1?
As it was mentioned above:
As you learned in the last section, a variable with n bits can store 2^n different values…
As 0 (zero) is also a value, the maximum number is (2^n)-1 and the range becomes 0 to (2^n)-1 (inclusive).
What do you do if 4/8/16 bits isn’t big enough? For instance, number theorists like to do arithmetic on very big numbers, ~ 100 to 200 digits large.
On modern architectures, longs are generally 32 bits. Most modern compilers also give you access to a 64-bit integer type (often called a long long, but sometimes it has other names, like __int64).
However, if you need even larger integers, then you will have to write your own data type. You will learn how to do this in the section on classes (chapter 8).
Thanks again for your excellent tutorials Alex.
Just 2 quests;
1. in the ‘Range’ table above, 4 byte unsigned 0 to 4,294,967,296: is it 4,294,967,296 or 4,294,967,295?
2. when Stroustrup says “The unsigned integer types are ideal for uses that treat storage as a bit array.”, does he mean when you are using the bits within a variable to check if they are on or off?
1) 4,294,967,295. I fixed the error.
2) A bit array is typically used when you have a bunch of independent bit-size variables (booleans) and want to store them in a compact format. So yes, using the individual bits within a variable. An unsigned variable is better for this purpose than a signed one, because the underlying (binary) representation is well defined. The underlying (binary) representation for signed variables can vary from system to system.
How do you prevent an integer overflow?
My program prevents users entering a number higher than a billion, that works just fine. But, however, if a user enters a number that exceeds the integerrange, my program gets stuck in a loop. Is there a way to prevent a user from entering a number that causes an overflow?
There are (at least) a few possible ways to do this:
1) Read the user’s input as a string, validate that the user entered something that fits in your variable, and then convert the string to your numeric value.
2) Read in the user’s input character by character and validate that input as they enter it (stop them from entering any character that would overflow your variable).
Neither of these is easy.
65535 is 0011011000110101001101010011001100110101
65536 is 0011011000110101001101010011001100110110
how is short able to maintain the first but not the second?
Can you please explain?
I don’t think you have your binary right. That’s called BCD (Binary Coded Decimal).
Adam,
In the overflow section, I noticed you only have 14 numbers. Shouldn’t there be 16, since you are starting from 0? Wouldn’t it be:
0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1101, 1111
0 - 0
1 - 1
2 - 10
3 - 11
4 - 100
5 - 101
6 - 110
7 - 111
8 - 1000
9 - 1001
10 - 1010
11 - 1011
12 - 1101
13 - 1110
14 - 1111
15 - 10000
?
I think it is 0110111001011101111000100110101011110011011110 to make a 15 and to make a 16, just add a 1111……
Let me know if I am mistaken…..great tutorials…
Concrete basics of binary counting:
If you understand that in decimal system 248 stands for
2 x 10^2 +
4 x 10^1 +
8 x 10^0
and 13 for
1 x 10^1 +
3 x 10^0
then it is quite easy to convert in your head small decimals to binary and vice versa. For example 1011 is
1 x 2^3 = 8 +
0 x 2^2 = 0 +
1 x 2^1 = 2 +
1 x 2^0 = 1
= 11.
When you go one integer (1) up in whatever the base (binary, octal, decimal etc.) you increase the lowest digit until it reaches its maximum, then you increase the second lowest if possible, if not, the third lowest, etc. For example in the octal system next from 4677 is 4700, because you can’t get higher than 7 (and after that of course 4701). In binary next from 1011 is 1100 (because from right to left the first two 1’s can’t get higher). After 1100 > 1101, 1110, 1111. And you can assure it by counting:
1100 = 8 + 4 + 0 + 0 = 12
1101 = 8 + 4 + 0 + 1 = 13
1110 = 8 + 4 + 2 + 0 = 14
1111 = 8 + 4 + 2 + 1 = 15.
Yes, fixed. 🙂 Thanks for pointing that out.
The simplest method of converting binary to decimal is to write 2 raised to increasing powers below each binary digit and strike out the powers below the 0 digits.
For e.g., the number is 1101001:

1    1    0    1    0    0    1
2^6  2^5  2^4  2^3  2^2  2^1  2^0

Now cancel all the powers below the 0s (i.e. cancel 2^4, 2^2, 2^1)
and add the others = 1 + 8 + 32 + 64
= 105
oops…
12 - 1100
13 - 1101
14 - 1110
15 - 1111
Can someone tell me why this does not work?
As per the above lesson, char is a type of int, so this should work.
This is because when you insert a char into std::cout (which is actually basic_ostream<char>), it doesn’t display on the console as the literal value in base 10; it displays as a single ASCII character. If it worked as you assumed, then

cout << "Hello";

would print 72101108108111 (if the iomanipulator flag is set to std::dec). Cast the char to int before insertion:

unsigned char ch1 = 5;
std::cout << "The value of ch1 is: " << (int)ch1 << endl;

Then you will see the value of the char displayed as you wished it to be.
Correct, std::cout prints characters as ASCII values instead of integer values because that is what they are more often used for.
(int)ch1 is an old-school C-style cast. In C++ you should do int(ch1).
We discuss casting in lesson 4.4 (Type conversion and casting).
For “1 byte signed” the range is -128 to 127. How is -128 represented in 1 byte? Doesn’t it cause an overflow?
127 => 01111111
-128 => 111111111 (the rightmost 1 represents negative)
Also, why can’t we represent 128 in 1 byte (why is the range only up to 127)?
128 => 10000000
Thx
A signed char -128 in binary format is 10000000.
Now you may question why this is so, if the leftmost bit is the sign bit and the other seven bits are the value.
Well, you don’t want to ever use -0, that’s kind of useless.
So the last 7 bits are actually the two’s complement of the absolute value of the negative value.
See http://en.wikipedia.org/wiki/Two%27s_complement for an explanation of two’s complement.
I discuss two’s complement in section 3.7 (Converting between binary and decimal).
What is the difference between the “long” and “int” variable types? (They both have the same size).
They are same size on your platform. That does not guarantee they are the same size on another platform.
In your particular case there is no difference. But never assume it’s also the same for anyone else.
A long is guaranteed to be the same size or larger than an int on the same machine. That’s as far as the contract goes.
I’ve updated the tutorials to indicate that different variables have a guaranteed minimum size. For int, it’s 2 bytes. For long, it’s 4 bytes.
When you say an int variable has a size of 2 bytes or 4 bytes, what do you mean? Does it dynamically change size from 2 to 4 bytes as the number gets larger?
No, it’s up to your compiler, which generally picks an appropriate value based on your computer’s architecture.
This means int will always have the same fixed size on a given system.
I can understand how integer overflow happens when you increase an unsigned integer,
i.e. 65535 = 1111 1111 1111 1111
and 65536 = [1] 0000 0000 0000 0000
so given only 2 bytes of data this reverts back to meaning 0.
However, I can’t understand how this happens in reverse?
i.e. 0 = 0000 0000 0000 0000
So when you subtract one to get -1, how does this revert back to 1111 1111 1111 111
in terms of how memory is stored?
You are working with an unsigned int, there is no negative value.
So you cannot ever reach a value of -1; 0 is as low as you can go before overflow.
Counting down: 5 4 3 2 1 0 65535 65534 65533 …
You guys ask hard questions.
The C99 spec says: “the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type”.
So in this case, -1 is converted to an unsigned int by adding (UINT_MAX + 1) to the value. The resultant value (UINT_MAX) is between 0 and UINT_MAX. UINT_MAX is the maximum unsigned int (e.g. 65535). So -1 maps to 65535.
The C++ spec essentially says the same thing, only in a much more complicated way.
As an aside, it turns out that using two’s complement as the underlying representation makes this trivial.
Consider: -1 in two’s complement:
binary representation for 1: 0000 0000 0000 0001
flip the bits: 1111 1111 1111 1110
add 1: 1111 1111 1111 1111 in two’s complement
1111 1111 1111 1111 as an unsigned = 65535

Consider: -2 in two’s complement:
binary representation for 2: 0000 0000 0000 0010
flip the bits: 1111 1111 1111 1101
add 1: 1111 1111 1111 1110 in two’s complement
1111 1111 1111 1110 as an unsigned = 65534

Consider: -65535 in two’s complement (yes, this is outside the range for a 16-bit signed number, and should take 17 bits to represent properly in two’s complement, but we only have 16 bits, so let’s use them and see what happens):
binary representation for 65535: 1111 1111 1111 1111
flip the bits: 0000 0000 0000 0000
add 1: 0000 0000 0000 0001 in two’s complement
0000 0000 0000 0001 as an unsigned = 1 (it still works!)
So if the compiler is using two’s complement binary representation for signed numbers (which many do), then all that’s needed is to interpret the number as an unsigned number.
* missed the last 1 off the 2nd to last line
oh…my…god….I can actually feel my brain cells evaporating. I have NO idea what this article is talking about and can’t understand it no matter how many times I try to read it.
Using code::blocks and the program from the comprehensive quiz from chapter 1:

main.cpp:
#include <iostream>
#include “io.h”
using namespace std;
int main()
{
cout << "short: " << sizeof(short) << endl;
int b = ReadNumber();
int z = ReadNumber();
WriteAnswer(b+z);
return 0;
}
io.cpp:
#include <iostream>
using namespace std;
int ReadNumber()
{
cout << "Enter a number: ";
int x;
cin >> x;
return x;
}
void WriteAnswer(int a)
{
cout << "Total: " << a << endl;
}
io.h:
#ifndef IO_H
#define IO_H
int ReadNumber();
void WriteAnswer(int a);
#endif // IO_H

When I run the program I can still calculate 10 + 10 to a total of 20, am I (grossly) misunderstanding this lesson? I'm trying to understand the whole bytes thing and such but I'm not getting it at all. When I use unsigned int, I can calculate -1 + -1, but for some reason with unsigned short, -1 + -1 gives me what I'm assuming is an overflow issue (I get 131070 as a total). Please help 🙁
Ummm… please ignore the program I put up above (it’s displayed incorrectly). Although I did use the code from the comprehensive quiz from 1.11 (using an unsigned int) to try adding up two negatives, -1 and -1, and still got a proper answer: -2. When I tried an unsigned short, I got issues (I got the number 131070, which is way more than what an unsigned short is supposed to give based on the table above (seeing as a short is an unsigned 2-byte integer variable)).
What I also wanted to know was (and I think I might know the answer after thinking back on games), is the 0-255 supposed to be LITERALLY the number 255, or as in 255 digits?
Sorry for triple posting, but I think I found the problem. Please correct me on these assumptions (for which I’m using the comprehensive quiz question from 1.11):
1. The integer variable within the “int readnumber()” function limits the maximum/minimum number it can reach (hence with an unsigned short, it’s 65,535) while the “void writeanswer(int x)” allows for a maximum/minimum of what int is capable of (which is a larger number). Hence, if the integer variable within the readnumber function was capable of int size while writenumber is only (writenumber(short x)), then the maximum/minimum achievable is only what short is (which is 65,535).
2. Doing an unsigned short within the “int readnumber()” function, if we input -1 for the first number and -1 for the second, we get 131070 because they each take a step back from 0 and arrive at 65,535 each (hence a total of 131,070). This is still strange for me because unsigned int still gives the proper -2 after inputting -1 (for the first) and -1 (for the second). Unsigned short is NOT capable of adding up -1 and -1, but unsigned int is.
3. I think I had one more assumption, but it escapes me right now.
Hey, I'm new to programming. I have one doubt.
Isn't 'char' used to denote characters?
How come you're telling us it's an integer data type?
It is an integral data type, in the fact that it can only represent an integer, the same limitation that the other integral types (short, int, long) have.
The special handling of char traces back to the early days, when there were only 8 bits available to assign for text characters on a display screen.
You can do arithmetic with char types, you just have to be careful when you want to look at the results.
char a = 65, b = 66;
cout << a << b << endl;
cout << a + b << endl;

Notice how cout implicitly casts the a+b evaluation to int type, so you see the actual sum of 65+66 displayed, but when simple char variables are inserted into cout, the ASCII characters are output back to you.
If you are using char to write or read text, as most uses of it are, everything is cool as can be. In fact, strings are simply of type char*.
Why specify that an integer is signed? I understand why we’d specify it’s unsigned, but just declaring an int without adding “signed” still lets you input negative numbers. So why?
The primary use of the signed keyword is to explicitly specify whether char is signed or unsigned (since it could be either by default). Although you can use it with the other integer types, it is completely redundant.
Hi again,
around the example of overflow of 65535 +/- 1, the quotation marks are misplaced in the codes.
It should be:
instead of:
Also, some redirections have names like "24-integers" instead of "2.4-integers," although it doesn't really matter.
One final question:
I understand how 65535 + 1 becomes zero, but I can't understand how 0 - 1 becomes 65535 for unsigned shorts.
Matthew
Thanks, I fixed the quotes.
Regarding the last question, I just answered that one: here.
Minor thing I noticed while reading through:
The decimal / binary value table would make a bit more sense if the binary values were right-aligned on the table instead of left-aligned. This would imply visually that each new digit added appears on the left, rather than the right as new bits are added.
It'd technically also be useful to highlight the new digit with a bold font each time it goes up a bit for much the same reason.
It's not a big deal, but it'd likely make it easier for people to visualize what's going on with the overflow example immediately after. =3
Good idea to use right-alignment. It does make the table more comprehensible. Thanks for the suggestion.
The first line calls even the negative numbers whole numbers, which is mathematically incorrect.
You are correct. I’ve updated the terminology.
sizeof(long) on my machine is 8 and not 4 as mentioned above (64-bit Ubuntu).
4 bytes is the minimum size that C++ guarantees a long will be. However, it can be more on some architectures, such as the one you are using.
Typos.
"See lesson 23 (2.3) - variable sizes and the sizeof operator"
"In lesson 21 (2.1) - Basic addressing and variable declaration"
"If there is any doubt (suspicion) that a variable might need to store a value that falls outside its range, use a larger variable!" Either say ‘suspicion that it won’t work’ or ‘doubt that it will work’, not ‘doubt that it won’t work’ (double negative). You probably just started saying one of these and switched when writing.
Fixed! Many thanks.
Why do we need to write "return 0;" in the last line of this program?
#include <iostream>
int main()
{
using namespace std;
unsigned short x = 0; // smallest 2-byte unsigned value possible
cout << "x was: " << x << endl;
x = x - 1; // overflow!
cout << "x is now: " << x << endl;
return 0;
}
Alex,
I came back here from the bitwise lesson. I think I missed something about storing integers that you may have written somewhere. But I don’t recall seeing.
How does C++ or the compiler handle all the leading zeros when your system and mine need 4 bytes (32 bits) to store small integers like 1 (one)? Certainly other data types or situations require throwing out leading or trailing zeros.
Char also comes to mind here too. What happens if someone uses a long long integer and then stores a small number there?
Can you give us some insight, Thanks
The compiler or CPU’s instruction set should handle the padding as appropriate. For example, if you assign the integer value 1 to a 32bit memory location, it will assign the value 00000000 00000000 00000000 00000001. It’s not something you need to worry about as a programmer.
"-2^n-1" should be written as (-2)^n-1. Otherwise it implies that the first number in the range can be positive should the exponent be positive, which is impossible. Just a little syntax error :p
Good point. However, your solution is equally incorrect. 🙂 It should be -(2^n-1), so the 2^n-1 part evaluates first, and then the negative is applied.
I’ve updated the lesson accordingly.
Hi, I have a question about the last example under "Overflow Examples". A wraparound occurred which gave a result of
x was: 0
x is now: 65535
I understand the wraparound result in the previous example, but not quite sure why this wraparound results in 65535? Trying to understand via the binary, but I can’t figure it out. Can anyone help??
Note to Alex: Thanks for this tutorial! You’re awesome.
In the previous example, we showed that 65535 + 1 = 0. If we subtract one from each side, we get 65535 = -1. So symmetrically, this makes sense.
Same with the binary version. 0 is 0000 0000 0000 0000 in binary. If we subtract 1, then we get binary 1111 1111 1111 1111, which is 65535. We talk more about how integers convert to binary in chapter 3.
If i get this right, signed short can take values from -256 to 255 (2^8 bits) right? But
#include "stdafx.h"
#include <iostream>
int main()
{
using namespace std;
short x = 255; // largest 16-bit unsigned value possible
cout << "x was: " << sizeof(x) << endl;
x = x + 1; // 65536 is out of our range - we get overflow because x can't hold 17 bits
cout << "x is now: " << x << endl;
return 0;
}
prints 255, 256 normally. What am i missing?
Shorts are normally 2 bytes, so the range of a signed short is usually -32768 to 32767.
Yeeea i just got it… 15 bits used for numbers and the 16th used for +/-.
Keep up the amazing work. Thanks for replying so quickly.
Alex, I’ve been trying to get this for a long long time… in 2 articles (2.3 and 2.4) you managed to explain it to me in such a simple way. Thank you for your work and thank you for sharing it.
When will you add more hindi chapters….?
My Hindi translator went on hiatus, so I’m not sure. 🙁
Great tutorial, i have read many pdf tutorials but i gave up along the way because i wasn't understanding. i thank God i have found this online, and now am back again with C++ with better understanding. May Almighty Allah (God) bless you abundantly, great ALEX.
char data type doesn't store any numeric value, and signed/unsigned just tells us whether the data type would have -ve values or not.
then what is the point of char data type being signed or unsigned
Char does store numeric values, and you can do integer math with them. They’re just not _typically_ used for that purpose.
can u give me an example code explaining how char can be used to do integer maths. 🙂
btw alex, i must say that u have got an awesome site (which i rarely say for any site).. so thanx 4 sharing all that stuff.. nd i too know hindi.. so i would be happy to contribute for ur hindi site =D
Hello and many thanks for putting up such a well written tutorial. It’s perfect for people like me who want to start from scratch (or almost).
I seem to have a small problem related to this lesson, and I suspect it’s because of the compiler (codeblocks 16.01, archlinux x64). This is my quick test program:
My sizeof(unsigned short)=2, so it should show 65535, yet it shows up -1. I tried then with x{65535} and x + 1, but it outputs 65536. Then I tried x{65536} and, using
(to go outside of codeblocks), it shows the warning:
I am very confused right now. Could you please shed some light?
It’s not your compiler. If you do this:
You’ll get 65535.
But if you do this:
You get -1. Why? Because 1 is a signed number, whereas x is unsigned. Because you’re mixing types, the compiler does an implicit conversion, and x is converted to a signed integer (the rules for this are covered in lesson 4.4). Thus, x - 1 is signed, and so you get -1 instead of 65535.
To get the answer you expect, do this:
This avoids the type conversion.
In general, you should not mix signed and unsigned numbers, because the results are often unexpected.
Thank you for the clarification. And, again, for the whole tutorial, even though I’m only at bitwise operations right now. I have to say that these 3 days (4 this one) since I started reading this, made me realize I *understood*.
There are 10 people in the world who understand binary. Those who do and those who don’t.
Better phrasing: There are 10 types of people in this world, those who understand binary and those who don’t.
i’m using VS2015
i tried the signed int x = -1; and got an error:
error C4430: missing type specifier  int assumed. Note: C++ does not support defaultint
this happened when i don't use “signed” too.
Something else must be going on, because that line is fine.
This should print -1:
Somewhere in this article, you talk about how an integer variable can be called a 32-bit integer instead of long. Is that because, in some machines, long takes 32 bits? What am I not understanding correctly?
On 32-bit systems, long is often 32 bits. On 64-bit systems, long is often 64 bits. So yes, we typically use the number of bits when discussing integers because the names (int, long) can vary in size on different machines/architectures.
Ok… I mixed bytes and bits…
Thank you for your answer you’re really helpful. Are you the author of this website?
Yes, I am.
Dear Alex
How can I learn easily the C++?
PLEASE!
I NEED SOME SUGGESTIONS.
Step 1: Open tutorial
Step 2: Read
Hi Alex,
#include <iostream>
using namespace std;
int main ()
{
unsigned int a = 7;
int b = 2;
int result;
result = a - b;
cout << result;
return 0;
}
==========================================
An unsigned integer is one that can only hold positive values.
When I assigned "-7" to "unsigned int a", I expected something wrong in the output but it works! The output is -9.
Could you please let me know why?
Thanks, Have a great day.
This one is a little challenging to explain - it has to do with the way signed and unsigned numbers are stored in binary, and the way they are interpreted based on whether the variables are signed or unsigned. Try this:
You’ll see that it prints 4294967289. So even though you assigned -7, it stored that huge number. However, if you cast that huge number back to a signed number, you’ll see it prints -7:
Essentially, -7 and 4294967289 are stored as the same number in binary - the difference is in how they are interpreted based on type. Because your result variable is signed, your result is being interpreted as a signed number (-9) instead of a large unsigned number (4294967287 I think).
I cover related topics in chapter 3, so if the above seems a bit incomprehensible at this point, keep reading.
Hi Alex,
I’ve finished Chapter 3. Please correct me if I am wrong.
-7 is represented in binary as 1111 1111 1111 1111 1111 1111 1111 1001.
2 is represented in binary as 0000 0000 0000 0000 0000 0000 0000 0010.
When “result = a – b;” statement is executed, something like this happens:
result = 1111 1111 1111 1111 1111 1111 1111 1001 – 0000 0000 0000 0000 0000 0000 0000 0010
result = 1111 1111 1111 1111 1111 1111 1111 0111.
Because result is signed, it is being interpreted as a signed variable that can hold both negative & positive numbers (-9 & 4294967287). Both numbers (-9 & 4294967287) are represented in binary as 1111 1111 1111 1111 1111 1111 1111 0111.
Now I need to know which number (-9 or 4294967287) should be printed.
From what I’ve learned in Chapter 3 *Signed numbers and two’s complement, I am able to tell (-9) should be printed.
*Signed numbers and two’s complement: Signed integers are typically stored using a method known as two’s complement. In two’s complement, the leftmost (most significant) bit is used as the sign bit. A 0 sign bit means the number is positive, and a 1 sign bit means the number is negative.
To convert a two’s complement binary number back into decimal, first look at the sign bit.
If the sign bit is 0, just convert the bits for unsigned numbers (I modified this sentence for my example).
If the sign bit is 1, then we invert the bits, add 1, then convert to decimal, then make that decimal number negative (because the sign bit was originally negative).
So, looking at “1”111 1111 1111 1111 1111 1111 1111 0111, I can tell the number is negative (a 1 sign bit).
The next steps are to invert the bits, add 1, then convert to decimal, then make that decimal number negative (because the sign bit was originally negative):
Invert the bits: 0000 0000 0000 0000 0000 0000 0000 1000
Add 1: 0000 0000 0000 0000 0000 0000 0000 1001
Convert to decimal: 0000 0000 0000 0000 0000 0000 0000 1001 = 8 + 1 = 9.
Since the original sign bit (1) was negative, the final value is -9.
Yup!
That said, generally you should just avoid unsigned numbers altogether and not worry about conversions like these. 🙂
hey guys i have a small problem. Why does this program give no value? Is it because the +1 is out of range and one gets an overflow of the bit?
Here is the program.
numeric_limits<int>::max() + 1
That’s not a program, it’s an expression. If it’s giving no value, it may be because you’re not sending the result of the expression to std::cout?
Hi Alex,
I played with numbers after going through this part and noticed an odd thing. If I use the code below, and add a decimal number to X, like say, 0.5. The program goes wild and starts spamming "Add to X: X is 1". What happens here and why does it completely break the program like that?
It has to do with how cin and operator>> process input. If you type 0.5, the 0 gets extracted to x, but the ‘.’ gets left in the input stream. Next iteration, it tries to extract the ‘.’, which fails because character ‘.’ can’t be extracted to an integer. This causes cin to go into failure mode. I talk about this in more detail in lesson 5.10, if you want to read ahead.