An integer type (sometimes called an integral type) variable is a variable that can only hold non-fractional numbers (e.g. -2, -1, 0, 1, 2). C++ has five different fundamental integer types available for use:
Category  | Type      | Minimum Size | Note
character | char      | 1 byte       |
integer   | short     | 2 bytes      |
          | int       | 2 bytes      | Typically 4 bytes on modern architectures
          | long      | 4 bytes      |
          | long long | 8 bytes      | C99/C++11 type
Char is a special case, in that it falls into both the character and integer categories. We’ll talk about the special properties of char later. In this lesson, you can treat it as a normal integer.
The key difference between the various integer types is that they have varying sizes: the larger integers can hold bigger numbers. Note that C++ only guarantees that integers will have a certain minimum size, not that they will have a specific size. See lesson 2.3 - Variable sizes and the sizeof operator for information on how to determine how large each type is on your machine.
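For example, here is a small program (ours, not part of the original lesson) that prints the sizes on your machine; the exact numbers depend on your compiler and architecture:

#include <iostream>

int main()
{
    // sizes are reported in bytes and vary by compiler/architecture
    std::cout << "char:      " << sizeof(char) << '\n';
    std::cout << "short:     " << sizeof(short) << '\n';
    std::cout << "int:       " << sizeof(int) << '\n';
    std::cout << "long:      " << sizeof(long) << '\n';
    std::cout << "long long: " << sizeof(long long) << '\n';
    return 0;
}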
Defining integers
Defining some integers:
char c;
short int si; // valid
short s;      // preferred
int i;
long int li; // valid
long l;      // preferred
long long int lli; // valid
long long ll;      // preferred
While short int, long int, and long long int are valid, the shorthand versions short, long, and long long should be preferred. Besides being less typing, adding the int suffix makes the type harder to distinguish from a variable of type int, which can lead to mistakes if the short or long modifier is inadvertently missed.
Identifying integers
Because the size of char, short, int, and long can vary depending on the compiler and/or computer architecture, it can be instructive to refer to integers by their size rather than name. We often refer to integers by the number of bits a variable of that type is allocated (e.g. “32-bit integer” instead of “long”).
Integer ranges and sign
As you learned in the last section, a variable with n bits can store 2^{n} different values. But which specific values? We call the set of specific values that a data type can hold its range. The range of an integer variable is determined by two factors: its size (in bits), and its sign, which can be “signed” or “unsigned”.
A signed integer is a variable that can hold both negative and positive numbers. To explicitly declare a variable as signed, you can use the signed keyword:
signed char c;
signed short s;
signed int i;
signed long l;
signed long long ll;
By convention, the keyword “signed” is placed before the variable’s data type.
A 1-byte signed integer has a range of -128 to 127. Any value between -128 and 127 (inclusive) can be put in a 1-byte signed integer safely.
Sometimes, we know in advance that we are not going to need negative numbers. This is common when using a variable to store the quantity or size of something (such as your height - it doesn’t make sense to have a negative height!). An unsigned integer is one that can only hold positive values. To explicitly declare a variable as unsigned, use the unsigned keyword:
unsigned char c;
unsigned short s;
unsigned int i;
unsigned long l;
unsigned long long ll;
A 1-byte unsigned integer has a range of 0 to 255.
Note that declaring a variable as unsigned means that it can not store negative numbers, but it can store positive numbers that are twice as large.
Now that you understand the difference between signed and unsigned, let’s take a look at the ranges for different sized signed and unsigned variables:
Size/Type       | Range
1 byte signed   | -128 to 127
1 byte unsigned | 0 to 255
2 byte signed   | -32,768 to 32,767
2 byte unsigned | 0 to 65,535
4 byte signed   | -2,147,483,648 to 2,147,483,647
4 byte unsigned | 0 to 4,294,967,295
8 byte signed   | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
8 byte unsigned | 0 to 18,446,744,073,709,551,615
For the math inclined, an n-bit signed variable has a range of -(2^(n-1)) to 2^(n-1) - 1. An n-bit unsigned variable has a range of 0 to (2^n) - 1. For the non-math inclined… use the table. 🙂
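To sanity-check these formulas on your own machine, you can query the limits directly. The snippet below is illustrative (not from the lesson) and assumes a 2-byte short, which is typical:

#include <iostream>
#include <limits>

int main()
{
    // for a 2-byte (16-bit) signed short: -(2^15) = -32,768 and 2^15 - 1 = 32,767
    std::cout << std::numeric_limits<short>::min() << '\n';
    std::cout << std::numeric_limits<short>::max() << '\n';

    // for a 2-byte unsigned short: 0 to (2^16) - 1 = 65,535
    std::cout << std::numeric_limits<unsigned short>::max() << '\n';
    return 0;
}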
New programmers sometimes get signed and unsigned mixed up. The following is a simple way to remember the difference: in order to differentiate negative numbers from positive ones, we typically use a negative sign. If a sign is not provided, we assume a number is positive. Consequently, an integer with a sign (a signed integer) can tell the difference between positive and negative. An integer without a sign (an unsigned integer) assumes all values are positive.
Default signs and integer best practices
So what happens if we do not declare a variable as signed or unsigned?
Category  | Type      | Default Sign       | Note
character | char      | Signed or unsigned | Usually signed
integer   | short     | Signed             |
          | int       | Signed             |
          | long      | Signed             |
          | long long | Signed             |
All integer variables except char are signed by default. Char can be either signed or unsigned by default (but is usually signed for conformity).
Generally, the signed keyword is not used (since it’s redundant), except on chars (when necessary to ensure they are signed).
Best practice is to avoid use of unsigned integers unless you have a specific need for them, as unsigned integers are more prone to unexpected bugs and behaviors than signed integers.
Rule: Favor signed integers over unsigned integers
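As a quick illustration of the kind of bug this rule guards against (this example is ours, not from the lesson), subtracting past zero on an unsigned type wraps around to a huge value:

#include <iostream>

int main()
{
    unsigned int balance{ 5 };
    balance -= 10; // 5 - 10 can't be represented by an unsigned int, so it wraps around

    std::cout << balance << '\n'; // prints 4294967291 on a machine with 4-byte ints
    return 0;
}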
Overflow
What happens if we try to put a number outside of the data type’s range into our variable? Overflow occurs when bits are lost because a variable has not been allocated enough memory to store them.
In lesson 2.1 - Fundamental variable definition, initialization, and assignment, we mentioned that data is stored in binary format.
In binary (base 2), each digit can only have 2 possible values (0 or 1). We count from 0 to 15 like this:
Decimal Value | Binary Value
0             | 0
1             | 1
2             | 10
3             | 11
4             | 100
5             | 101
6             | 110
7             | 111
8             | 1000
9             | 1001
10            | 1010
11            | 1011
12            | 1100
13            | 1101
14            | 1110
15            | 1111
As you can see, the larger numbers require more bits to represent. Because our variables have a fixed number of bits, this puts a limit on how much data they can hold.
Overflow examples
Consider a hypothetical unsigned variable that can only hold 4 bits. Any of the binary numbers enumerated in the table above would fit comfortably inside this variable (because none of them are larger than 4 bits).
But what happens if we try to assign a value that takes more than 4 bits to our variable? We get overflow: our variable will only store the 4 least significant (rightmost) bits, and the excess bits are lost.
For example, if we tried to put the decimal value 21 in our 4-bit variable:
Decimal Value | Binary Value
21            | 10101

21 takes 5 bits (10101) to represent. The 4 rightmost bits (0101) go into the variable, and the leftmost bit (1) is simply lost. Our variable now holds 0101, which is the decimal value 5.
Note: At this point in the tutorials, you’re not expected to know how to convert decimal to binary or vice versa. We’ll discuss that in more detail in section 3.7 - Converting between binary and decimal.
Now, let’s take a look at an example using actual code, assuming a short is 16 bits:
#include <iostream>

int main()
{
    unsigned short x = 65535; // largest 16-bit unsigned value possible
    std::cout << "x was: " << x << std::endl;
    x = x + 1; // 65536 is out of our range; we get overflow because x can't hold 17 bits
    std::cout << "x is now: " << x << std::endl;
    return 0;
}
What do you think the result of this program will be?
x was: 65535
x is now: 0
What happened? We overflowed the variable by trying to put a number that was too big into it (65536), and the result is that our value “wrapped around” back to the beginning of the range.
For advanced readers, here’s what’s actually happening behind the scenes: the number 65,535 is represented by the bit pattern 1111 1111 1111 1111 in binary. 65,535 is the largest number an unsigned 2-byte (16-bit) integer can hold, as it uses all 16 bits. When we add 1 to the value, the new value should be 65,536. However, the bit pattern of 65,536 is represented in binary as 1 0000 0000 0000 0000, which is 17 bits! Consequently, the highest bit (which is the 1) is lost, and the low 16 bits are all that is left. The bit pattern 0000 0000 0000 0000 corresponds to the number 0, which is our result.

Similarly, we can overflow the bottom end of our range as well, resulting in “wrapping around” to the top of the range.
#include <iostream>

int main()
{
    unsigned short x = 0; // smallest 2-byte unsigned value possible
    std::cout << "x was: " << x << std::endl;
    x = x - 1; // overflow!
    std::cout << "x is now: " << x << std::endl;
    return 0;
}
x was: 0
x is now: 65535
Overflow results in information being lost, which is almost never desirable. If there is any suspicion that a variable might need to store a value that falls outside its range, use a larger variable!
Also note that the results of overflow are only predictable for unsigned integers. Overflowing signed integers or non-integer types (e.g. floating point numbers) may produce different results on different systems.
Rule: Do not depend on the results of overflow in your program.
Integer division
When dividing two integers, C++ works like you’d expect when the result is a whole number:
#include <iostream>

int main()
{
    std::cout << 20 / 4;
    return 0;
}
This produces the expected result:
5
But let’s look at what happens when integer division causes a fractional result:
#include <iostream>

int main()
{
    std::cout << 8 / 5;
    return 0;
}
This produces a possibly unexpected result:
1
When doing division with two integers, C++ produces an integer result. Since integers can’t hold fractional values, any fractional portion is simply dropped (not rounded!).
Taking a closer look at the above example, 8 / 5 produces the value 1.6. The fractional part (0.6) is dropped, and the result of 1 remains.
Rule: Be careful when using integer division, as you will lose any fractional parts of the result
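If you actually want the fractional result, one option (shown here as an illustration; type conversions are covered in a later lesson) is to convert one of the operands to a floating point type first:

#include <iostream>

int main()
{
    std::cout << 8 / 5 << '\n';                      // integer division: prints 1
    std::cout << static_cast<double>(8) / 5 << '\n'; // floating point division: prints 1.6
    return 0;
}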
Ah, the overflow issue reminds me of this glitch from the original Civilization game. Gandhi's aggression rating starts at 1, and researching democracy reduces aggression by 2... lo and behold, nuclear-armed Gandhi with 255 aggression in the late game. Normally, leaders are given a rating from 1 to 10, but I guess they had an unsigned integer there.
https://kotaku.com/whygandhiissuchanassholeincivilization1653818245
Yes, this is a great example of a famous overflow glitch! And one that wouldn't have happened if they'd used a signed integer instead of an unsigned char!
Thanks for bringing this up, undoubtedly other readers will find this interesting.
how can a char be signed?
Hi Anushka!
A char is an 8-bit integer. Your compiler/console just knows how to read/display it as a character.
excuse me mr nascardriver sir but um what does this have to do with bernie sanders?
Bernie Sanders is an AI written in c++
"Note that C++ only guarantees that integers will have a certain minimum size, not that they will have a specific size."
Does it mean that on some architecture the int data type can have a size of 4 bytes, but on some other architecture it can have a size of 8 bytes? If the answer is yes, does it affect the range of int (on the architecture with a 4-byte int, a range of -2,147,483,648 to 2,147,483,647, and on the architecture with an 8-byte int, a range of -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807)? Thanks!
Yes, though most often an int will be 2 or 4 bytes. I'm not aware of any architectures that have an int size of 8 bytes, but that doesn't mean there aren't or won't ever be.
Thank you! I'm still wondering if it does affect the range. I don't think the maximum value of a 4-byte int can fit in a 2-byte int. Basically, a 2-byte one is the same thing as the short data type. Right?
Hi Cosmin!
The maximum of a 4-byte integral data type cannot fit in a 2-byte one. If it could, there wouldn't be a need for anything bigger than 2 bytes.
Here are the sizes and maximum values of some common integral data types:
Produced by
Compiled with gcc version 7.1.1 20170622 (Red Hat 7.1.13) (GCC)
Thank you, nascardriver!
Hello sir,
While I was executing this program, I was getting the first 255 integers output as characters, like 65 = "A" ... 97 = "a" ... 256 = a "smiley", etc. Plus my program continuously executes countless times with the computer warning sound "beep".
what is the problem in this program?
#include <iostream>

int main()
{
    unsigned char c;
    for (c = 0; c < 256; c++)
    {
        std::cout << c << std::endl;
    }
    return 0;
}
When you print a char using std::cout, it prints as the character with that code. So your program is printing all of the character codes from 0 to 255. However, not all of those codes are printable, and the ones above 127 aren't well defined. If you try to print these, the results are indeterminate. In your case, one of those unprintable characters is translated by your console as a beep. The smiley is your console's font showing a smiley for one of the characters (probably one of the ones above 127).
Dear Alex,
Quick (and perhaps stupidly obsessive) question: Is overflow the same (or does it at least work the same) as "narrowing"?
I started this question on StackOverflow, but as usual, it was met with dubious ire: https://stackoverflow.com/questions/44895350/whywouldanoverflowonanobjectofbuiltintypecauseanexceptionundefined
I would also assume that overflow occurs on all the arithmetic types, since it is usually referred to as either "arithmetic overflow" or "arithmetic underflow," but is that even true?
Thank you Alex for your help. I'm 13 and I took a course on Openclassrooms, but the course was obsolete. I looked for a book on the web and saw C++ Primer, and a little while after I found your site Learncpp.com. Since I have these two resources, I don't waste time. I have to say that you made the layout for the homepage very attractive. I'm very happy to have found a site like this and a book like C++ Primer, which is a good resource too. I'm moving along very well. God bless you, you deserve it.
McSteven
IN THE LAST SAMPLE PROGRAM:
FIRST
WHO'S THERE TO TELL THE COMPILER TO PRODUCE AN INTEGER RESULT?
SECOND
WILL BOTH THE INTEGERS NOT BE TYPECASTED TO FLOAT BY THE COMPILER ITSELF TO PRODUCE A FLOAT RESULT?
The compiler produces an integer result if both of the operands are integers, which they are in this case. They will not be typecast to a float. If either or both of the operands are float, then you'll get a float result.
In future lessons we talk more about how type conversions happen and more about how operators work, which should help make this clearer.
Hi Alex, I want to say I'm enjoying the site and the C++ lessons are outstanding. I just have 1 concern for this topic about integers. That being, I really don't understand the purpose of these integer types and their limits. Like char, short, long, and long long: I don't understand why they would be used in code and what having a limit on their size is about. Keep in mind I'm a beginner with absolutely no experience, just the lessons leading up to this one.
Integers are used when you want to store whole numbers (e.g. 1, 2, 3). For example, someone's age in years, or the number of times something has occurred. The types (char, short, int, long, etc...) determine how large of an integer that type can store.
Here's a sample program where we use an integer to store an age:
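(The sample program itself wasn't preserved in this comment; it was presumably something along these lines:)

#include <iostream>

int main()
{
    int age{ 25 }; // an int holds a whole number, such as an age in years
    std::cout << "Your age is: " << age << '\n';
    return 0;
}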
What else can I help clarify for you?
That helps me all I needed, thanks a lot.
I was curious to know how the example with unsigned short x = 0; x = x - 1; got the result of 65535, but I was able to work out why using the complement method as detailed on this page here: http://www.wikihow.com/SubtractBinaryNumbers.
You can get the same result by initializing an unsigned short to -1.
I talk more about binary representation and two's complement in chapter 3.
OK, great, thanks!
How do I find the absolute difference between 2 integers?
Is there any built-in function for it?
Yes, you can use the abs() function that lives in the cstdlib header.
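For example (a sketch of typical usage, not code from the original reply):

#include <cstdlib>  // for std::abs
#include <iostream>

int main()
{
    int a{ 7 };
    int b{ 12 };
    std::cout << std::abs(a - b) << '\n'; // absolute difference: prints 5
    return 0;
}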
Hi Alex,
I played with numbers after going through this part and noticed an odd thing. If I use the code below and add a decimal number to X, say 0.5, the program goes wild and starts spamming "Add to X: X is 1". What happens here, and why does it completely break the program like that?
It has to do with how cin and operator>> process input. If you type 0.5, the 0 gets extracted to x, but the '.' gets left in the input stream. Next iteration, it tries to extract the '.', which fails because character '.' can't be extracted to an integer. This causes cin to go into failure mode. I talk about this in more detail in lesson 5.10, if you want to read ahead.
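(The commenter's program wasn't preserved; a minimal sketch that reproduces the symptom, assuming a loop that reads a number and adds it to x:)

#include <iostream>

int main()
{
    int x{ 0 };
    while (true)
    {
        std::cout << "Add to X: ";
        int n{};
        std::cin >> n; // entering 0.5 extracts 0, leaves '.' in the stream, and puts cin into failure mode
        x += n;        // once cin has failed, every later extraction fails immediately, so the loop spins forever
        std::cout << "X is " << x << '\n';
    }
    return 0;
}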
Hey guys, I have a small problem. Why does this program give no value? Is it because the +1 is out of the range and one gets an overflow of the bit?
Here is the program.
numeric_limits<int>::max() + 1
That's not a program, it's an expression. If it's giving no value, it may be because you're not sending the result of the expression to std::cout?
Hi Alex,
#include <iostream>
using namespace std;

int main()
{
    unsigned int a = -7;
    int b = 2;
    int result;
    result = a - b;
    cout << result;
    return 0;
}
==========================================
An unsigned integer is one that can only hold positive values.
When I assigned "-7" to "unsigned int a", I expected something wrong in the output, but it works! The output is -9.
Could you please let me know why?
Thanks, Have a great day.
This one is a little challenging to explain - it has to do with the way signed and unsigned numbers are stored in binary, and the way they are interpreted based on whether the variables are signed or unsigned. Try this:
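(The snippet isn't preserved here; presumably something like this, which prints the raw unsigned value of a:)

#include <iostream>

int main()
{
    unsigned int a = -7; // -7 can't be stored directly; it wraps around to a large unsigned value
    std::cout << a;      // prints 4294967289 on a machine with 4-byte ints
    return 0;
}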
You'll see that it prints 4294967289. So even though you assigned -7, it stored that huge number. However, if you cast that huge number back to a signed number, you'll see it prints -7:
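(Again, the original snippet is missing; likely something along these lines:)

#include <iostream>

int main()
{
    unsigned int a = -7;
    std::cout << static_cast<int>(a); // reinterpreted as signed: prints -7 on typical systems
    return 0;
}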
Essentially, -7 and 4294967289 are stored as the same number in binary - the difference is in how they are interpreted based on type. Because your result variable is signed, your result is being interpreted as a signed number (-9) instead of a large unsigned number (4294967287 I think).
I cover related topics in chapter 3, so if the above seems a bit incomprehensible at this point, keep reading.
Hi Alex,
I’ve finished Chapter 3. Please correct me if I am wrong.
-7 is represented in binary as 1111 1111 1111 1111 1111 1111 1111 1001.
2 is represented in binary as 0000 0000 0000 0000 0000 0000 0000 0010.
When “result = a – b;” statement is executed, something like this happens:
result = 1111 1111 1111 1111 1111 1111 1111 1001 – 0000 0000 0000 0000 0000 0000 0000 0010
result = 1111 1111 1111 1111 1111 1111 1111 0111.
Because result is signed, it is being interpreted as a signed variable that can hold both negative & positive numbers (-9 & 4294967287). Both numbers (-9 & 4294967287) are represented in binary as 1111 1111 1111 1111 1111 1111 1111 0111.
Now I need to know which number (-9 or 4294967287) should be printed.
From what I’ve learned in Chapter 3 *Signed numbers and two’s complement, I am able to tell (-9) should be printed.
*Signed numbers and two’s complement: Signed integers are typically stored using a method known as two’s complement. In two’s complement, the leftmost (most significant) bit is used as the sign bit. A 0 sign bit means the number is positive, and a 1 sign bit means the number is negative.
To convert a two’s complement binary number back into decimal, first look at the sign bit.
If the sign bit is 0, just convert the bits for unsigned numbers (I modified this sentence for my example).
If the sign bit is 1, then we invert the bits, add 1, then convert to decimal, then make that decimal number negative (because the sign bit was originally negative).
So, looking at “1”111 1111 1111 1111 1111 1111 1111 0111, I can tell the number is negative (a 1 sign bit).
The next steps are to invert the bits, add 1, then convert to decimal, then make that decimal number negative (because the sign bit was originally negative):
Invert the bits: 0000 0000 0000 0000 0000 0000 0000 1000
Add 1: 0000 0000 0000 0000 0000 0000 0000 1001
Convert to decimal: 0000 0000 0000 0000 0000 0000 0000 1001 = 8 + 1 = 9.
Since the original sign bit (1) was negative, the final value is -9.
Yup!
That said, generally you should just avoid unsigned numbers altogether and not worry about conversions like these. 🙂
Dear Alex
How can I learn C++ easily?
PLEASE!
I NEED SOME SUGGESTIONS.
Step 1: Open tutorial
Step 2: Read
I don't often read the comments, but I'm glad I did so I could see this gem.
Somewhere in this article, you talk about how an integer variable can be referred to as a 32-bit integer instead of a long. Is that because, on some machines, long takes 32 bits? What am I not understanding correctly?
On 32-bit systems, long is often 32 bits. On 64-bit systems, long is often 64 bits. So yes, we typically use the number of bits when discussing integers because the names (int, long) can vary in size on different machines/architectures.
Ok... I mixed bytes and bits...
Thank you for your answer you're really helpful. Are you the author of this website?
Yes, I am.
i'm using VS2015
I tried signed int x = -1; and got an error:
error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
This happened when I don't use "signed" too.
Something else must be going on, because that line is fine.
This should print -1:
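(The snippet that followed isn't preserved; presumably something like:)

#include <iostream>

int main()
{
    signed int x = -1; // the "signed" keyword is redundant here, but valid
    std::cout << x;    // prints -1
    return 0;
}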
There are 10 people in the world who understand binary. Those who do and those who don't.
Better phrasing: There are 10 types of people in this world, those who understand binary and those who don't.
Hello and many thanks for putting up such a well written tutorial. It's perfect for people like me who want to start from scratch (or almost).
I seem to have a small problem related to this lesson, and I suspect it's because of the compiler (codeblocks 16.01, archlinux x64). This is my quick test program:
My sizeof(unsigned short) = 2, so it should show 65535, yet it shows up as -1. I then tried with x{65535} and x+1, but it outputs 65536. Then I tried x{65536} and, using
(to go outside of codeblocks), it shows the warning:
I am very confused right now. Could you please shed some light?
It's not your compiler. If you do this:
You'll get 65535.
But if you do this:
You get -1. Why? Because 1 is a signed number, whereas x is unsigned. Because you're mixing types, the compiler does an implicit conversion, and x is converted to a signed integer (the rules for this are covered in lesson 4.4). Thus, x - 1 is signed, and so you get -1 instead of 65535.
To get the answer you expect, do this:
This avoids the type conversion.
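(The three snippets referenced by "If you do this", "But if you do this", and "To get the answer you expect" weren't preserved; a plausible reconstruction, assuming x is an unsigned short holding 0:)

#include <iostream>

int main()
{
    unsigned short x{ 0 };

    x = x - 1;              // the int result -1 is converted back to unsigned short on assignment
    std::cout << x << '\n'; // prints 65535

    unsigned short y{ 0 };
    std::cout << y - 1 << '\n'; // y is promoted to int before subtracting, so this prints -1

    std::cout << static_cast<unsigned short>(y - 1) << '\n'; // prints 65535; avoids printing the signed result
    return 0;
}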
In general, you should not mix signed and unsigned numbers, because the results are often unexpected.
Thank you for the clarification. And, again, for the whole tutorial, even though I'm only at bitwise operations right now. I have to say that these 3 days (4 this one) since I started reading this, made me realize I *understood*.
The char data type doesn't store any numeric value, and signed/unsigned just tells us whether the data type can hold -ve (negative) values or not.
Then what is the point of the char data type being signed or unsigned?
Char does store numeric values, and you can do integer math with them. They're just not _typically_ used for that purpose.
can u give me an example code explaining how char can be used to do integer maths. 🙂
btw alex, i must say that u have got an awesome site (which i rarely say for any site).. so thanx 4 sharing all that stuff.. nd i too know hindi.. so i would be happy to contribute for ur hindi site =D
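(No code reply is preserved in this thread; a small sketch of doing integer math with chars, for illustration only:)

#include <iostream>

int main()
{
    char a{ 65 }; // 65 is the code for 'A'
    char b{ 2 };
    char sum = a + b; // chars are small integers, so arithmetic works: 65 + 2 is 67

    std::cout << sum << '\n';                   // printed as a character: C
    std::cout << static_cast<int>(sum) << '\n'; // printed as a number: 67
    return 0;
}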
Great tutorial. I have read many PDF tutorials but I gave up along the way because I wasn't understanding. I thank God I have found this online, and now am back again with C++ with better understanding. May Almighty Allah (God) bless you abundantly, great ALEX.
When will you add more Hindi chapters?
My Hindi translator went on hiatus, so I'm not sure. 🙁
Alex, I've been trying to get this for a long long time... in 2 articles (2.3 and 2.4) you managed to explain it to me in such a simple way. Thank you for your work and thank you for sharing it.
If I get this right, a signed short can take values from -256 to 255 (2^8 bits), right? But
#include "stdafx.h"
#include <iostream>
int main()
{
using namespace std;
short x = 255; // largest 16bit unsigned value possible
cout << "x was: " <<sizeof (x) << endl;
x = x + 1; // 65536 is out of our range  we get overflow because x can't hold 17 bits
cout << "x is now: " << x << endl;
return 0;
}
prints 255, 256 normally. What am i missing?
Shorts are normally 2 bytes, so the range of a signed short is usually -32,768 to 32,767.
Yeeea, I just got it... 15 bits used for the number and the 16th used for +/-.
Keep up the amazing work. Thanks for replying so quickly.
Hi, I have a question about the last example under "Overflow Examples". A wraparound occurred which gave a result of
x was: 0
x is now: 65535
I understand the wraparound result in the previous example, but not quite sure why this wraparound results in 65535? Trying to understand via the binary, but I can't figure it out. Can anyone help??
Note to Alex: Thanks for this tutorial! You're awesome.
In the previous example, we showed that 65535 + 1 = 0. If we subtract one from each side, we get 65535 = -1. So symmetrically, this makes sense.
Same with the binary version. 0 is 0000 0000 0000 0000 in binary. If we subtract 1, then we get binary 1111 1111 1111 1111, which is 65535. We talk more about how integers convert to binary in chapter 3.
"2^n1" should be written as (2)^n1. Otherwise it implies that the first number in the range can be positive should the exponent be positive, which is impossible. Just a little syntax error :p
Good point. However, your solution is equally incorrect. 🙂 It should be -(2^(n-1)), so the 2^(n-1) part evaluates first, and then the negative is applied.
I've updated the lesson accordingly.