11.8 — The stack and the heap

The memory that a program uses is typically divided into a few different areas, called segments:

  • The code segment (also called a text segment), where the compiled program sits in memory. The code segment is typically read-only.
  • The bss segment (also called the uninitialized data segment), where zero-initialized global and static variables are stored.
  • The data segment (also called the initialized data segment), where initialized global and static variables are stored.
  • The heap, where dynamically allocated variables are allocated from.
  • The call stack, where function parameters, local variables, and other function-related information are stored.

For this lesson, we’ll focus primarily on the heap and the stack, as that is where most of the interesting stuff takes place.

The heap segment

The heap segment (also known as the “free store”) keeps track of memory used for dynamic memory allocation. We talked about the heap a bit already in lesson 10.13 -- Dynamic memory allocation with new and delete, so this will be a recap.

In C++, when you use the new operator to allocate memory, this memory is allocated in the application’s heap segment.

The address of this memory is passed back by operator new, and can then be stored in a pointer. You do not have to worry about the mechanics behind the process of how free memory is located and allocated to the user. However, it is worth knowing that sequential memory requests may not result in sequential memory addresses being allocated!

When a dynamically allocated variable is deleted, the memory is “returned” to the heap and can then be reassigned as future allocation requests are received. Remember that deleting a pointer does not delete the pointer variable itself; it just returns the memory at the associated address back to the heap for reuse.

The heap has advantages and disadvantages:

  • Allocating memory on the heap is comparatively slow.
  • Allocated memory stays allocated until it is specifically deallocated (beware memory leaks) or the application ends (at which point the OS should clean it up).
  • Dynamically allocated memory must be accessed through a pointer. Dereferencing a pointer is slower than accessing a variable directly.
  • Because the heap is a big pool of memory, large arrays, structures, or classes can be allocated here.

The call stack

The call stack (usually referred to as “the stack”) has a much more interesting role to play. The call stack keeps track of all the active functions (those that have been called but have not yet terminated) from the start of the program to the current point of execution, and handles allocation of all function parameters and local variables.

The call stack is implemented as a stack data structure. So before we can talk about how the call stack works, we need to understand what a stack data structure is.

The stack data structure

A data structure is a programming mechanism for organizing data so that it can be used efficiently. You’ve already seen several types of data structures, such as arrays and structs. Both of these data structures provide mechanisms for storing data and accessing that data in an efficient way. There are many additional data structures that are commonly used in programming, quite a few of which are implemented in the standard library, and a stack is one of those.

Consider a stack of plates in a cafeteria. Because each plate is heavy and they are stacked, you can really only do one of three things:
1) Look at the surface of the top plate
2) Take the top plate off the stack (exposing the one underneath, if it exists)
3) Put a new plate on top of the stack (hiding the one underneath, if it exists)

In computer programming, a stack is a container data structure that holds multiple variables (much like an array). However, whereas an array lets you access and modify elements in any order you wish (called random access), a stack is more limited. The operations that can be performed on a stack correspond to the three things mentioned above:

1) Look at the top item on the stack (usually done via a function called top(), but sometimes called peek())
2) Take the top item off of the stack (done via a function called pop())
3) Put a new item on top of the stack (done via a function called push())

A stack is a last-in, first-out (LIFO) structure. The last item pushed onto the stack will be the first item popped off. If you put a new plate on top of the stack, the first plate removed from the stack will be the plate you just pushed on last. Last on, first off. As items are pushed onto a stack, the stack grows larger -- as items are popped off, the stack grows smaller.

For example, here’s a short sequence showing how pushing and popping on a stack works:

Stack: empty
Push 1
Stack: 1
Push 2
Stack: 1 2
Push 3
Stack: 1 2 3
Pop
Stack: 1 2
Pop
Stack: 1

The plate analogy is a pretty good analogy as to how the call stack works, but we can make a better analogy. Consider a bunch of mailboxes, all stacked on top of each other. Each mailbox can only hold one item, and all mailboxes start out empty. Furthermore, each mailbox is nailed to the mailbox below it, so the number of mailboxes can not be changed. If we can’t change the number of mailboxes, how do we get a stack-like behavior?

First, we use a marker (like a post-it note) to keep track of where the bottom-most empty mailbox is. In the beginning, this will be the lowest mailbox (on the bottom of the stack). When we push an item onto our mailbox stack, we put it in the mailbox that is marked (which is the first empty mailbox), and move the marker up one mailbox. When we pop an item off the stack, we move the marker down one mailbox (so it’s pointed at the top non-empty mailbox) and remove the item from that mailbox. Anything below the marker is considered “on the stack”. Anything at the marker or above the marker is not on the stack.

The call stack segment

The call stack segment holds the memory used for the call stack. When the application starts, the main() function is pushed on the call stack by the operating system. Then the program begins executing.

When a function call is encountered, the function is pushed onto the call stack. When the current function ends, that function is popped off the call stack. Thus, by looking at the functions pushed on the call stack, we can see all of the functions that were called to get to the current point of execution.

Our mailbox analogy above is fairly analogous to how the call stack works. The stack itself is a fixed-size chunk of memory addresses. The mailboxes are memory addresses, and the “items” we’re pushing and popping on the stack are called stack frames. A stack frame keeps track of all of the data associated with one function call. We’ll talk more about stack frames in a bit. The “marker” is a register (a small piece of memory in the CPU) known as the stack pointer (sometimes abbreviated “SP”). The stack pointer keeps track of where the top of the call stack currently is.

We can make one further optimization: When we pop an item off the call stack, we only have to move the stack pointer down -- we don’t have to clean up or zero the memory used by the popped stack frame (the equivalent of emptying the mailbox). This memory is no longer considered to be “on the stack” (the stack pointer will be at or below this address), so it won’t be accessed. If we later push a new stack frame to this same memory, it will overwrite the old value we never cleaned up.

The call stack in action

Let’s examine in more detail how the call stack works. Here is the sequence of steps that takes place when a function is called:

  1. The program encounters a function call.
  2. A stack frame is constructed and pushed on the stack. The stack frame consists of:
    • The address of the instruction beyond the function call (called the return address). This is how the CPU remembers where to return to after the called function exits.
    • All function arguments.
    • Memory for any local variables.
    • Saved copies of any registers modified by the function that need to be restored when the function returns.
  3. The CPU jumps to the function’s start point.
  4. The instructions inside of the function begin executing.

When the function terminates, the following steps happen:

  1. Registers are restored from the call stack.
  2. The stack frame is popped off the stack. This frees the memory for all local variables and arguments.
  3. The return value is handled.
  4. The CPU resumes execution at the return address.

Return values can be handled in a number of different ways, depending on the computer’s architecture. Some architectures include the return value as part of the stack frame. Others use CPU registers.

Typically, it is not important to know all the details about how the call stack works. However, understanding that functions are effectively pushed on the stack when they are called and popped off when they return gives you the fundamentals needed to understand recursion, as well as some other concepts that are useful when debugging.

A technical note: on some architectures, the call stack grows away from memory address 0. On others, it grows towards memory address 0. As a consequence, newly pushed stack frames may have a higher or a lower memory address than the previous ones.

A quick and dirty call stack example

Consider the following simple application:

The call stack looks like the following at the labeled points:

At point 1 (before foo() is called), only the initial stack frame is present:

main()

At point 2 (while foo() is executing), a frame for foo() sits on top:

foo() (including parameter x)
main()

At point 3 (after foo() returns), foo()'s frame has been popped off:

main()

Stack overflow

The stack has a limited size, and consequently can only hold a limited amount of information. On Windows, the default stack size is 1MB. On some unix machines, it can be as large as 8MB. If the program tries to put too much information on the stack, stack overflow will result. Stack overflow happens when all the memory in the stack has been allocated -- in that case, further allocations begin overflowing into other sections of memory.

Stack overflow is generally the result of allocating too many variables on the stack and/or making too many nested function calls (where function A calls function B calls function C calls function D, etc.). On modern operating systems, overflowing the stack will generally cause your OS to issue an access violation and terminate the program.

Here is an example program that will likely cause a stack overflow. You can run it on your system and watch it crash:
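The code listing did not survive in this copy; based on the surrounding text (an array of roughly 40MB, and an unreached "hi"), the program was likely along these lines. Be warned that running it should crash:

```cpp
#include <iostream>

int main()
{
    int stack[10000000]{};         // ~40MB on the stack (assuming a 4-byte int)
    std::cout << "hi" << stack[0]; // use the array so the compiler doesn't optimize it away

    return 0;
}
```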

This program tries to allocate a huge (likely 40MB) array on the stack. Because the stack is not large enough to handle this array, the array allocation overflows into portions of memory the program is not allowed to use.

On Windows (Visual Studio), this program produces the result:

HelloWorld.exe (process 15916) exited with code -1073741571.

-1073741571 is C00000FD in hex, which is the Windows OS code for a stack overflow. Note that “hi” is never printed because the program is terminated prior to that point.

Here’s another program that will cause a stack overflow for a different reason:
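This listing was also not preserved; a program along these lines (the counter printout is an assumption, suggested by a commenter below observing the recursion depth) reproduces the behavior described. Again, running it should crash:

```cpp
#include <iostream>

int g_counter{ 0 };

void foo()
{
    std::cout << ++g_counter << ' '; // watch how deep the recursion gets
    foo();                           // foo() calls itself with no termination condition
}

int main()
{
    foo(); // eventually exhausts the stack and crashes

    return 0;
}
```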

In the above program, a stack frame is pushed on the stack every time function foo() is called. Since foo() calls itself infinitely, eventually the stack will run out of memory and cause an overflow.

The stack has advantages and disadvantages:

  • Allocating memory on the stack is comparatively fast.
  • Memory allocated on the stack stays in scope as long as it is on the stack. It is destroyed when it is popped off the stack.
  • All memory allocated on the stack is known at compile time. Consequently, this memory can be accessed directly through a variable.
  • Because the stack is relatively small, it is generally not a good idea to do anything that eats up lots of stack space. This includes passing by value or creating local variables of large arrays or other memory-intensive structures.


247 comments to 11.8 — The stack and the heap

  • yeokaiwei

    Hi Alex,
    I took this from Quora, as I noticed the lesson did not mention any real-life examples of how Abstract Data Structures (ADTs) are used.

    Perhaps it would help to understand data structures if students knew what their application was for.

    "Hash Table - used for fast data lookup - symbol table for compilers, database indexing, caches,Unique data representation.

    Trie - dictionary, such as one found on a mobile telephone for autocompletion and spell-checking.

    Suffix tree - fast full text searches used in most word processors.

    Stack - undo\redo operation in word processors, Expression evaluation and syntax parsing, many virtual machines like JVM are stack oriented.

    Queues - Transport and operations research where various entities are stored and held to be processed later ie the queue performs the function of a buffer.

    Priority queues - process scheduling in the kernel

    Trees - Parsers, Filesystem

    Radix tree - IP routing table

    BSP tree - 3D computer graphics

    Graphs - Connections/relations in social networking sites, routing, networks of communication, data organization, etc.

    Heap - Dynamic memory allocation in Lisp"

  • Waldo Lemmer

    - > A technical note: on some architectures, the call stack grows away from memory address 0

      Where's memory address 0? Is it a starting point given by the OS to the program?
    - The following program runs forever in (VS 2019 x64) Release configuration but ends at ~3500 in Debug. What gives?
    - Your analogies are always so fun and effective, nice job :)

    • Alex

      All memory is given an address. Let's say a machine with 16GB of memory has addresses numbered 0 - 16GB. When allocating stack memory, an architecture can count up (e.g. the first bit of memory used is 0, the second is 1, etc...), or down (the first is 16GB, the second is 16GB - 1, etc...)

      The effective stack space may be smaller on debug builds because the compiler may add additional information into the stack frames to help with debugging.

      • Waldo Lemmer

        Thanks! I also think it runs forever in Release config because VS 2019 optimizes tail-recursive functions (I discovered this yesterday)

  • Now I know where stackoverflow got its name

  • Szymon :)

    For me (Windows, VS 2019) this program results in just a warning and does not crash (prints hi):

    The warning:

    • DreiK

      Visual Studio prevents you from causing an obvious stack overflow like this: it checks the size of your array during compilation, and if it's too large, it won't compile.

  • EternalSkid

    I actually really liked this chapter, I was reading up on x86 assembly and how these work some time ago. After going back to the video, i understood much more ;). Thanks Alex and nascardriver for this journey, I still have a long way to go!

  • Forhad Rahman

    LOL. Running the stack overflow program at first blocks me from executing (unused variable 'stack'). But using 'stack' in console output crashes the program as expected.

  • Jono

    Good article thanks

  • Patrick

    For this paragraph -
    The only difference between our hypothetical mailbox stack and the call stack is that when we pop an item off the call stack, we don’t have to erase the memory (the equivalent of emptying the mailbox). We can just leave it to be overwritten by the next item pushed to that piece of memory. Because the stack pointer will be below that memory location, we know that memory location is not on the stack.

    Shouldn't it be "Because the stack pointer IS ON that memory location, we know that memory location is not on the stack."?

  • Sahil

    I know std::vector would be on the heap since its dynamically allocated under the hood, but what about std::arrays? They are similar to fixed arrays but much safer, so do they take up memory on the stack or the heap?

    • nascardriver

      `std::array` is a wrapper around a built-in array. They have the exact same memory footprint.

      • VicDiesel

        The question was whether `std::array` goes on the stack or heap. I'm interested in the answer.

        • nascardriver

          `std::array` doesn't perform any dynamic allocations. It's only on the heap if you put it there.

          • VicDiesel

            How do you put it there?

            • nascardriver

              By dynamically allocating it

              • VicDiesel

                And you just said std::array does not do dynamic allocations. I really don't get it. Maybe you can include a line or two of code?

                • nascardriver

                  `std::array` doesn't allocate anything internally. Its location in memory is identical to that of a built-in array or any other plain old data.

                  A `std::vector` is also on the stack unless you put it on the heap. The data contained in a `std::vector` is always on the heap (When ignoring optimizations).

                  There's some magic that creates `m_data` when `DumbedDownVector` gets created, you'll see that in the next chapter. `v` is on the stack, but the data that `m_data` is pointing to is on the heap.

  • yeokaiwei

    Severity    Code    Description    File    Line
    Warning    C6262    Function uses '40000000' bytes of stack: exceeds /analyze:stacksize '16384'. Consider moving some data to heap.    StackOverflow.cpp    3
    Error    C2220    the following warning is treated as an error    StackOverflow.cpp    5
    Warning    C4101    'stack': unreferenced local variable    StackOverflow.cpp    5

    These are the errors I got from Visual Studio 2019 for the stack overflow example.

  • TonyCheeze

    So, I recall a few sections back when you introduced Enumerations that the definition of the enumeration is "not allocated any memory" and it's only once you create an instance of the enum that memory is allocated for that object itself. This always confused me, but now I'm assuming you mean that the enumeration definition simply sits in the program binary, which sits in the code segment of the memory, right? And does this also apply to struct and class definitions? Thanks for any help, btw, this tutorial is amazing!

    • nascardriver

      An `enum` (Or `enum class`) definition does not occupy _any_ memory. All uses of an enum are substituted for their underlying type (Some integer type) at compile-time. C++ does not, with some exceptions, have type information at run-time.

      The same is true for data-only `class` and `struct` definitions. If you add a member function, the function takes up some memory (Independent of instances). If you add virtual member functions (covered later), the class' virtual table takes up some memory as well.

      You can think of type declarations as blueprints. When an architect draws a house on a paper, he's not using up any concrete or wood. That only happens when he builds a house based on the blueprint.

  • yeokaiwei

    Can we have a diagram of this?

    "The code segment (also called a text segment), where the compiled program sits in memory. The code segment is typically read-only.
        The bss segment (also called the uninitialized data segment), where zero-initialized global and static variables are stored.
        The data segment (also called the initialized data segment), where initialized global and static variables are stored.
        The heap, where dynamically allocated variables are allocated from.
        The call stack, where function parameters, local variables, and other function-related information are stored. "

    Like this?

  • srt1104

    Can you explain further on "All memory allocated on the stack is known at compile time. Consequently, this memory can be accessed directly through a variable." Does that mean that if you call a function recursively 1000 times, the compiler at compile-time itself, would calculate all the 1000 + 1 (for main()) locations where the stack frames will be placed and you could access those addresses when compilation is done?
    What if the user enters a number and the function recurses for that number of times? That cannot be calculated at compile time.

    • nascardriver

      The compiler knows where a function's local variables are stored relative to the top of the stack. That relative location is always the same no matter how often the function is called.


  • Kunal Mukherjee

    What is the difference between the uninitialized data segment and the initialized data segment? Both seem to hold initialized global and static variables. Please elaborate

    • rayT

      From the stackoverflow pages:
      The .bss section is guaranteed to be all zeros when the program is loaded into memory. So any global data that is uninitialized, or initialized to zero, is placed in the .bss section.

      The thing is, since this is all zeroes, the space for these variables doesn't need to be stored on disk (smaller executable file); the loader just knows how much space needs to be allocated when the program runs.

  • Yolo

    You said: "Memory allocated on the stack stays in scope as long as it is on the stack. It is destroyed when it is popped off the stack." But what exactly do u mean with destroyed? Memory cant get destroyed, right?
