A **recursive function** in C++ is a function that calls itself. Here is an example of a poorly-written recursive function:

```cpp
#include <iostream>

void countDown(int count)
{
    std::cout << "push " << count << '\n';
    countDown(count-1); // countDown() calls itself recursively
}

int main()
{
    countDown(5);

    return 0;
}
```

When countDown(5) is called, “push 5” is printed, and countDown(4) is called. countDown(4) prints “push 4” and calls countDown(3). countDown(3) prints “push 3” and calls countDown(2). The sequence of countDown(n) calling countDown(n-1) is repeated indefinitely, effectively forming the recursive equivalent of an infinite loop.

In lesson 7.9 -- The stack and the heap, you learned that every function call causes data to be placed on the call stack. Because the countDown() function never returns (it just calls countDown() again), this information is never popped off the stack! Consequently, at some point the computer will run out of stack memory, a stack overflow will result, and the program will crash or terminate. On the author’s machine, this program counted down to -11732 before terminating!

**Recursive termination conditions**

Recursive function calls generally work just like normal function calls. However, the program above illustrates the most important difference with recursive functions: you must include a recursive termination condition, or they will run “forever” (actually, until the call stack runs out of memory). A **recursive termination** is a condition that, when met, will cause the recursive function to stop calling itself.

Recursive termination generally involves using an if statement. Here is our function redesigned with a termination condition (and some extra output):

```cpp
#include <iostream>

void countDown(int count)
{
    std::cout << "push " << count << '\n';

    if (count > 1) // termination condition
        countDown(count-1);

    std::cout << "pop " << count << '\n';
}

int main()
{
    countDown(5);

    return 0;
}
```

Now when we run our program, countDown() will start by outputting the following:

```
push 5
push 4
push 3
push 2
push 1
```

If you were to look at the call stack at this point, you would see the following:

```
countDown(1)
countDown(2)
countDown(3)
countDown(4)
countDown(5)
main()
```

Because of the termination condition, countDown(1) does not call countDown(0) -- instead, the if statement’s condition is false, so the recursive call is skipped. countDown(1) prints “pop 1” and then returns. At this point, countDown(1) is popped off the stack, and control returns to countDown(2). countDown(2) resumes execution at the point after countDown(1) was called, so it prints “pop 2” and then returns. The recursive function calls are subsequently popped off the stack until all instances of countDown have been removed.

Thus, this program in total outputs:

```
push 5
push 4
push 3
push 2
push 1
pop 1
pop 2
pop 3
pop 4
pop 5
```

It’s worth noting that the “push” outputs happen in forward order since they occur before the recursive function call. The “pop” outputs occur in reverse order because they occur after the recursive function call, as the functions are being popped off the stack (which happens in the reverse order that they were put on).

**A more useful example**

Now that we’ve discussed the basic mechanics of recursive function calls, let’s take a look at another recursive function that is slightly more typical:

```cpp
// Returns the sum of all the integers between 1 and value (inclusive)
int sumTo(int value)
{
    if (value <= 0)
        return 0; // base case (termination condition)
    else if (value == 1)
        return 1; // base case (termination condition)
    else
        return sumTo(value - 1) + value; // recursive function call
}
```

Recursive programs are often hard to figure out just by looking at them. It’s often instructive to see what happens when we call a recursive function with a particular value. So let’s see what happens when we call this function with value = 5.

```
sumTo(5) called, 5 <= 1 is false, so we return sumTo(4) + 5.
sumTo(4) called, 4 <= 1 is false, so we return sumTo(3) + 4.
sumTo(3) called, 3 <= 1 is false, so we return sumTo(2) + 3.
sumTo(2) called, 2 <= 1 is false, so we return sumTo(1) + 2.
sumTo(1) called, 1 <= 1 is true, so we return 1. This is the termination condition.
```

Now we unwind the call stack (popping each function off the call stack as it returns):

```
sumTo(1) returns 1.
sumTo(2) returns sumTo(1) + 2, which is 1 + 2 = 3.
sumTo(3) returns sumTo(2) + 3, which is 3 + 3 = 6.
sumTo(4) returns sumTo(3) + 4, which is 6 + 4 = 10.
sumTo(5) returns sumTo(4) + 5, which is 10 + 5 = 15.
```

At this point, it's easier to see that we're adding numbers between 1 and the value passed in.

Because recursive functions can be hard to understand by looking at them, good comments are particularly important.

**Recursive algorithms**

Recursive functions typically solve a problem by first finding the solution to a subset of the problem (recursively), and then modifying that sub-solution to get to a solution. In the above algorithm, sumTo(value) first solves sumTo(value-1), and then adds the value of variable value to find the solution for sumTo(value).

In many recursive algorithms, some inputs produce trivial outputs. For example, sumTo(1) has the trivial output 1 (you can calculate this in your head), and does not benefit from further recursion. Inputs for which an algorithm trivially produces an output are called **base cases**. Base cases act as termination conditions for the algorithm. Base cases can often be identified by considering the output for an input of 0, 1, "", '', or null.
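As a quick sketch of spotting a base case by considering the trivial input (this example and the name `length` are ours, not from the lesson), a recursive string-length function uses the empty string "" as its base case:

```cpp
#include <string_view>

// Counts characters recursively; the empty string "" is the base case,
// because its length is trivially 0 (no recursion needed).
int length(std::string_view s)
{
    if (s.empty())
        return 0; // base case (termination condition)
    return 1 + length(s.substr(1)); // recurse on the rest of the string
}
```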

**Fibonacci numbers**

One of the most famous mathematical recursive algorithms is the Fibonacci sequence. Fibonacci sequences appear in many places in nature, such as branching of trees, the spiral of shells, the fruitlets of a pineapple, an uncurling fern frond, and the arrangement of a pine cone.

Here is a picture of a Fibonacci spiral:

Each of the Fibonacci numbers is the length of the side of the square that the number appears in.

Fibonacci numbers are defined mathematically as:

```
F(n) = 0                  if n = 0
       1                  if n = 1
       F(n-1) + F(n-2)    if n > 1
```

Consequently, it's rather simple to write a recursive function to calculate the nth Fibonacci number:

```cpp
#include <iostream>

int fibonacci(int number)
{
    if (number == 0)
        return 0; // base case (termination condition)
    if (number == 1)
        return 1; // base case (termination condition)
    return fibonacci(number-1) + fibonacci(number-2);
}

// And a main program to display the first 13 Fibonacci numbers
int main()
{
    for (int count = 0; count < 13; ++count)
        std::cout << fibonacci(count) << " ";

    return 0;
}
```

Running the program produces the following result:

0 1 1 2 3 5 8 13 21 34 55 89 144

Which you will note are exactly the numbers that appear in the Fibonacci spiral diagram.

**Recursive vs iterative**

One question that is often asked about recursive functions is, "Why use a recursive function if you can do many of the same tasks iteratively (using a *for loop* or *while loop*)?". It turns out that you can always solve a recursive problem iteratively -- however, for non-trivial problems, the recursive version is often much simpler to write (and read). For example, while it's possible to write the Fibonacci function iteratively, it's a little more difficult! (Try it!)

Iterative functions (those using a for-loop or while-loop) are almost always more efficient than their recursive counterparts. This is because every time you call a function there is some amount of overhead that takes place in pushing and popping stack frames. Iterative functions avoid this overhead.

That’s not to say iterative functions are always a better choice. Sometimes the recursive implementation of a function is so much cleaner and easier to follow that incurring a little extra overhead is more than worth it for the benefit in maintainability, particularly if the algorithm doesn't need to recurse too many times to find a solution.

In general, recursion is a good choice when most of the following are true:

- The recursive code is much simpler to implement.
- The recursion depth can be limited (e.g. there’s no way to provide an input that will cause it to recurse down 100,000 levels).
- The iterative version of the algorithm requires managing a stack of data.
- This isn’t a performance-critical section of code.

However, if the recursive algorithm is simpler to implement, it may make sense to start recursively and then optimize to an iterative algorithm later.

*Rule: Generally favor iteration over recursion, except when recursion really makes sense*.

**Quiz time**

1) A factorial of an integer N (written N!) is defined as the product (multiplication) of all the numbers between 1 and N (0! = 1). Write a recursive function called factorial that returns the factorial of the input. Test it with the first 7 factorials.

Hint: Remember that x * y = y * x, so the product of all the numbers between 1 and N is the same as the product of all the numbers between N and 1.

2) Write a recursive function that takes an integer as input and returns the sum of the digits in the integer (e.g. for 357, 3 + 5 + 7 = 15). Print the answer for input 93427 (which is 25).

3) This one is slightly trickier. Write a program that asks the user to enter an integer, and then uses a recursive function to print out the binary representation for that number. Use method 1 from lesson 3.7 -- Converting between binary and decimal.

Hint: Using method 1, we want to print the bits from the "bottom up", which means in reverse order. This means your print statement should be _after_ the recursive call.


Recursion seems like it makes sense when you are iterating through a tree:

```
Root
|_tag
| |_tag
| | |_dataPoint
| | |_dataPoint
| |_dataPoint
|_tag
| |_dataPoint
| |_dataPoint
| |_dataPoint
|_dataPoint
|_dataPoint
```

Recursion helps with walking through every "dataPoint" in the tree. The pseudo code I use for iterating through a tree is like so:

```
analyzeTag(Tag tag):
    analyze data points in tag
    check for sub-tags
    on sub-tag(s):
        iterate through sub-tag list
        call tag analyze method on each sub-tag   <- recursion
```

Is there an easy way to do this using an iterator, or does the recursive way of doing this suffice?

Yes, one of the most common uses of recursion is to iterate through a tree. If you do your own recursion you can define how you want to traverse the tree (depth first or breadth first).

If your tree class has an iterator, you can use that instead, but that will probably come with a predefined traversal method (likely depth first).

Although the C++ standard library doesn’t contain any tree containers, std::map is usually implemented as a red/black tree, and does have an iterator to traverse through it (depth first, since this produces sorted results).

Do I need a multidimensional array?

Yes, if your recursive function has multiple parameters.

Hello, how can I use a recursive method for a function like the Ackermann function, which depends on 2 variables, by storing the results for later use? For example, if you want to calculate A(3,20) by using previously calculated results, to avoid many recursive calculations.

You can always stash the results of a function call in an array for lookup later. That way if you need them, you don’t need to recalculate them, you can just pull the value out of the array.

Not sure if that answered your question.
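The caching idea Alex describes (often called memoization) might be sketched like this. We use Fibonacci for brevity; a two-argument function like Ackermann would instead key the cache on a pair of both arguments. The name `fibonacciMemo` is ours:

```cpp
#include <map>

// Stash each computed result in a cache; later calls with the same
// argument look it up instead of recursing again.
long long fibonacciMemo(int n)
{
    static std::map<int, long long> cache;

    if (n <= 1)
        return n; // base cases

    auto found{ cache.find(n) };
    if (found != cache.end())
        return found->second; // already computed: pull it from the cache

    long long result{ fibonacciMemo(n - 1) + fibonacciMemo(n - 2) };
    cache[n] = result; // stash for later use
    return result;
}
```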

In the section "A more useful example" you had the following comment:

I found this confusing. It might be clearer to say something like "Sums the integers from 1 to value"

PS: Your tutorials are excellent. I’m learning more here than I learned in the class I took in college.

Thanks for the feedback. I’ve updated the comment to be more clear.

I wrote my quiz 3 function like this :

That’s much more elegant than what I came up with. Nice work. I’ve adopted your solution (with a minor tweak).

Thank you Alex , I am feeling honored 😀

Hi Alex,

First, Thank you for the wonderful tutorials. I have a doubt regarding the place of variable declaration. Lets say I have to declare a variable of non-fundamental type. This variable should be used to populate a collection in an iteration using for/while loop. Which way is more efficient

1. to declare this non-fundamental type variable immediately above the for/while loop only once

or

2. to declare this non-fundamental type variable with in for/while loop so that it will be declared and destroyed for each iteration

As per previous lessons, this should be done just before the variable usage. If that is the case, 2nd approach is recommended. My question is doesn’t it need more CPU time to allocate and de-allocate memory from stack for this variable for each and every iteration? If so, then 1st approach for variable declaration seems good. Can you please suggest and explain which one is recommended?

Option 1 is better. Creating and destroying non-fundamental variables takes time, and you don’t want to do that over and over if you can avoid it.

Generally, you want to declare your variables as close to the first use as reasonable. However, there may be specific reasons you don’t want to do so immediately prior to first use, such as the case above (to prevent repeated creation and destruction) or to prevent the variable from going out of scope (e.g. if the first use is in a nested block but you need the variable later outside that nested block).
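A rough sketch of option 1, as Alex describes it (the function name, `std::string` as the non-fundamental type, and the data are our assumptions):

```cpp
#include <string>
#include <vector>

std::vector<std::string> buildRows(int count)
{
    std::vector<std::string> rows;

    std::string row; // non-fundamental variable constructed once, outside the loop
    for (int i = 0; i < count; ++i)
    {
        row = "row " + std::to_string(i); // reuse the same object each iteration
        rows.push_back(row);
    }

    return rows;
}
```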

Thank you Alex. My doubt is cleared… 🙂

Great lesson! I found a way to avoid the use of "base cases" in my try to solve the Fibonacci sequence by iteration…

Hey there,

when Alex wrote ‘Try it!’, I could not resist writing my own iterative version of the fibonacci generator. I ended up with the following, which might be slightly more compact; it gets along with no if statement and only 2 variables. I wanted to share my solution, since I did not see any without case differentiation in the comments before (hope I did not just miss it). I would be happy if you let me know how you like it!

PS: Thanks again for all these awesome lessons!

Your function is good for calculating individual fibonacci values, but not good for calculating fibonacci sequences, because it does a lot of redundant work (e.g. to calculate the 5th fibonacci number, it does so from the base case rather than leveraging the fact that you just calculated the 3rd and 4th fibonacci numbers).

My example has the same issue though.

Otherwise, looks good.

Okay, but isn’t starting from the first elements of the sequence kind of the basic idea of recursive and iterating functions?

Anyway, I have one more question about the efficiency of a program: can I just count how many operations must be done until the return is called, and assume that every operation costs the same amount of time? Or are there some operations that take more time than others? For example, is adding faster than comparing? And if so, is there a rule for how many "+ operations" equal one "?: operation"?

e.g. in the function fibo() above: I have some fixed operations (assigning 3 variables in the beginning + the return at the end), which are executed only one time, no matter how often we iterate. I guess these operations are relatively unimportant for the overall performance. In the loop the function does:

line 13: assign

line 14: add, assign

line 15: assign

that means: 4+4*n calculations if the loop runs n times.

If I use this function instead:

this fibo also has 4 fixed operations (assigning 2 variables in the beginning, comparing and return in the end), but the loop gets along with 3 operations (comparing, adding, assigning):

So overall: 4 + 3*n operations

or do I rather have to count 4 operations (comparing, jump, adding, assigning)?

> Okay, but isnt starting from the first elements of the sequence kind of the basic idea of recursive and iterating functions?

Not always, you can start from the last element and iterate backwards if that makes sense. The larger point I was trying to get across is that your recursive function (and mine) is great for calculating single values, but not so much for sequences (which is what our sample programs are using it for). If I wanted to calculate fibonacci sequences, I’d write a second fibonacci function to do so, not reuse this one (this one does too much redundant work).

You can’t really make any assumptions about how long something will take to run, especially since compilers will optimize things. The only way to really tell is to put in some timing code and measure how long it takes to run. There’s also Big-O notation, which is used to talk about performance more broadly. I plan to write a lesson on that one at some point in the near future, but you can do your own reading in the meantime if interested.

I gave up on quiz 2, although I understood how to do it the iterative way. Looking at the solution, I realized that the scope of the recursive function stays preserved. Am I right? If yes, it would be useful to mention it in the main text. Thanks for a great tutorial!

Recursive functions work exactly like normal functions in terms of scope and duration. They just have the added complexity of needing a termination condition so you don’t recurse all your stack space away.

You articulated what I was trying to say, maybe better.

I thought it very interesting that a lot of variables never went out of scope as the recursion went forward! Am I right? Product would be destroyed in a normal function when it went out of scope. I think here something different is going on?

No, you’re mixing up scope and duration.

Scope determines when a variable can be seen/used. Duration determines when it is destroyed.

Consider:

In this program, variable x goes out of scope when function a() is called (you can’t use it inside function a), comes back into scope when function a() returns, and is destroyed at the end of function main().

Can you give an example of variable duration? Good scope explanation.

I talk about duration in Lesson 4.1a -- Local variables, scope, and duration. Duration just controls when variables are created and destroyed. For example, local variables are created at the start of the block they are defined in, and destroyed at the end of that same block. Global variables are created at the start of the program and destroyed at the end.

With recursion though the functions are LIFO on the stack. If I have created variables in my first function and go to a depth of 9 for example, the variable duration of those vars created in the first function will at least persist through 9 function returns. So technically block 1 in this example stays "persists" through at least 9 returns before block is destroyed. Hopefully this thinking is correct. I didn’t see anything on recursion in 4.1a, unless the principles of nesting and scope/duration are similar. I may have imprecise terminology here, please pardon. Thx in advance.

What you say is true, but it’s a strange way of thinking about things. To me, it’s more intuitive to say that as long as the block the variable was created on is still on the stack (even if it’s not at the top), the local variables inside that block still exist.

Please make some tutorials on Time Complexity of programs and how we can reduce it.

I am planning on writing a lesson on big-O notation soon.

I will make those changes. Thanks for your comments.

Hello Alex,

You did a great job explaining the Fibonacci sequence. I attempted to come up with an iterative solution for the Fibonacci sequence in your tutorial. I would like to get your comments on my code, and how I could improve it. Thanks.

I’d suggest two minor changes:

1) Put your Fibonacci generator in a function

2) Pick better names than sum, sum1, and sum2. Maybe something like current, prev1, and prev2?

Great lesson, Alex! I haven’t seen a better explanation on recursion. 🙂

Personally I love recursion. It exercises your brain like no other. Reading through the comments, I found some readers pointing out the inefficiency of the Fibonacci-series generator. I dug up an old function I wrote some two years ago in school and examined it (there were no comments 😀 ). Figured it out; here it is with proper documentation:

It just takes n+1 calls to print n terms (n calls to print n terms + 1 call in which the base case is met). Can an iterative version still get much more efficient than this?

Also one thing I’d like to point out, in quiz #1 you asked us to ‘test it with the first 5 factorials’, but the solution prints 7 factorials.

Thanks for the lesson, again!

> Can an iterative version still get much more efficient than this?

If by efficient you mean in terms of lines of code or variables used, maybe not. But we typically measure efficiency in terms of time required to run, and if a function call is expensive on a given architecture, then the iterative version would likely be more efficient.

Also, fixed the quiz question. Thanks for pointing that out.

I see… Is that what reader Kerrick was talking about? That your Fibonacci function is "O(exp(n)) in both time and space". Is that how we measure efficiency? Then my function would be O(n+1), correct?

Your algorithm would be O(n) (the +1 doesn’t matter), which is much better than O(exp(n)). The iterative version is also O(n).

That said, Big-O notation only really tells you how an algorithm scales with the number of elements it has to operate on. It doesn’t tell you anything about how efficient it is compared to other algorithms in the same efficiency class. So while a recursive and iterative version of an algorithm might both be O(n), one still might be significantly faster than the other.

I’ve been itching to write a tutorial on Big-O notation. It’s a fun topic, and a good general programming tool to have in your toolkit.

It’d be great! I hope it’s already in your to-do list.

Thanks for clearing it up! 🙂

Hi Alex,

With the factorial function, it is clear that the output grows quite rapidly as a function of its input. So I was wondering…

1) If I wanted to return a long value, what would be the correct way to do this? I noticed that, at least with my compiler, doing

works, but the original way I did it was replacing return 1 with

my thought process being that the 1l would end up converting the last product to a long, and then the second-to-last product to a long, etc… , since long had a higher priority when it comes to coercion (please correct me if I’m wrong, I’m just trying to remember what you taught in the implicit conversion tutorial!).

So which one is preferred? I’m actually a little bit surprised the first way works actually but I tested it with large enough values and it does.

2) Let’s say I wanted to test the value of the argument factorial(n) to see whether I should return an int or a long. This would require specifying the return value beforehand however, such as

versus

int factorial(n) {…}


The only workaround I can think of is passing n to another function which has the appropriate return type, such as

Would this be the correct approach?

Thanks!

Good questions.

1) Returning 1 and 1l should produce the same result, as 1 would be implicitly converted to a long before being returned. The real challenge here is that “n*factorial(n-1);” only contains integers, and will produce an integer result (which could overflow). This (possibly overflowed) result will then be converted to a long for returning to the caller. What we actually should do is convert one of the operands to a long, so that all of the other operands get implicitly converted to longs as well: static_cast<long>(n)*factorial(n-1);

2) Functions can’t differ only by return type, so your functions would need different names. While you could do as you suggest, is there any reason not to just always return a long?
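Alex's suggestion in point 1 might be sketched like this (assuming a factorial similar to the quiz solution; the exact function is ours):

```cpp
// Returns long, with the cast done inside the multiplication so the
// product is computed in long rather than in (possibly overflowing) int
long factorial(int n)
{
    if (n <= 1)
        return 1; // base case; implicitly converted to long
    return static_cast<long>(n) * factorial(n - 1);
}
```

Keep in mind that on platforms where long is only 32 bits this still overflows fairly quickly (13! no longer fits); long long would be the safer choice there.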

Thanks for the quick response! I assume you mean static_cast<long>(n) right? I guess I could also just read in n as a long, such as

and then avoid the static casting.

And upon further inspection of my code for my second question, I actually don’t think it would work, because the call from int factorial(n) would be at the bottom of the stack, and so it would end up being converted to an int anyway (I believe). I know for this example simply using a long return value would be fine, but I was thinking ahead to possible other scenarios where one may wish to return a different type than the return type of the calling function. I don’t know what these other scenarios may be, and perhaps a simple cast may resolve the issue 😛

Yes, the commenting system interpreted my <long> as an HTML code. Oops.

But upon reflection, I think my previous analysis was incorrect, as factorial() will be interpreted as a long (since that’s what it returns), which should force everything else to be implicitly converted to a long. So your recursive line should work as intended. And as noted above, the base case of 1 will get implicitly converted if it’s not a long, so no worries there.

Is there a way to extend the recursion depth of a function? (just involving code, without changing any configuration in the OS)

I tried to extend it by using just pointers when calling a function, because I thought the stack would hold the reference to the variable as well as the information itself, so I tried to make the pointers the only thing on the stack, with the information dynamically allocated. I don’t know if I explained myself well (or if what I’m saying is even true).

The only way I can think of to allow your function to recurse deeper in a code-only manner would be to do something that reduced the amount of stack memory each function call took (which would allow it to recurse more times before running out of memory). That could be using pointers and dynamically allocating anything larger than the size of your pointer on the heap, removing variables or parameters, etc…

Related to that, is it true that what a variable actually takes up is the reference to the object plus the information of that object on the stack, all together? In that case, for example, if I declare an integer variable, which has 4 bytes, but the implicit pointer is 8 bytes, then the real size is 12 bytes? Or how does that work?

I’m not sure what you’re asking.

All variables allocated on the stack take up an amount of stack memory equal to the size of that variable.

Memory allocated on the heap does not take up stack memory, since that comes from a different pool.

So if your pointer size is 8 bytes, and you want to minimize your stack footprint, you’d want to allocate all memory above 8 bytes on the heap. For anything 8 bytes or less, the size of the variable is equal to or less than the size of the pointer, so you might as well just instantiate it directly.

I don’t know why you’d do any of this though.

That’s because some competitive programming problems involve recursion, and it’s important to know the depth that a recursion can reach. In Python there’s something similar, but the limit is a finite number rather than being tied explicitly to the size of the stack. What I’m trying to do is squeeze the recursion limit in C++.

"It turns out that you can always solve a recursive problem iteratively…"

Are you sure about this? To be honest, I’m not entirely sure why, but I think I’ve heard that recursive functions are sometimes impossible to use iteratively because of forward recursion and tail recursion.

based on my notes for a proficiency exam:

Tail Recursion: A form of recursion where the recursive call is the final thing that you do in the recursive case of your algorithm

These sorts of algorithms can be changed into loops because, when the recursive call is to be made, there is no more information that is still needed from the previous recursive instance

_____

Forward Recursion: A form of recursion where there is still work to be done after you make another recursive call

These sorts of algorithms cannot be changed into loops because there is still information to be used when the algorithm wants to make the recursive call

Comments? This class was based around the Java language, so I don’t know if that has anything to do with it, but I doubt it.

Yes, I’m sure. However, that says nothing about how pretty/effective doing so would be.

Tail recursion functions are usually pretty easy to convert. Forward recursion functions are usually not easy to convert -- doing so would likely involve your function maintaining its own stack! And why do that yourself when the language provides that functionality for you as part of function calls? But you could.
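A minimal sketch of that conversion (our own example, not from the discussion): a forward-recursive function that does its work after the recursive call, next to an iterative version that maintains its own stack:

```cpp
#include <string>
#include <vector>

// Forward recursion: the append happens AFTER the recursive call,
// so the values come out as "1 2 3 ... n "
std::string collectRecursive(int n)
{
    if (n <= 0)
        return "";
    return collectRecursive(n - 1) + std::to_string(n) + " ";
}

// The iterative equivalent maintains the stack itself
std::string collectIterative(int n)
{
    std::vector<int> stack;
    for (int i = n; i >= 1; --i)
        stack.push_back(i); // mimic the call frames being pushed

    std::string out;
    while (!stack.empty())
    {
        out += std::to_string(stack.back()) + " "; // deferred work, done as frames "pop"
        stack.pop_back();
    }
    return out;
}
```

Popping from the back of the vector reproduces the order in which the recursive frames would have unwound, which is exactly the "stack of data" the iterative version has to manage.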

What does "int main(void)" mean?

Why the "void" parameter?

including "void" as a parameter type just tells the compiler that there are no parameters included in the function. It is recommended by Alex (In another chapter) that you do not use void when creating function prototypes or defining functions in general.

It means that main is not taking any parameters. It’s old school usage, and deprecated in C++. In C++ we’re supposed to use empty parenthesis (e.g. int main()). I’ve fixed the examples.

Alex, you haven’t fixed the second example. 🙂

Meh. 🙂 Thanks for letting me know. It should be fixed now.

Question 2 took me about 15 minutes to figure out, it was quite hard for me.

I felt really accomplished once I did figure it out though, thank you for the tough question!

Quiz 1 solution:

Since we know that 1! == 1, it seems logical to add it to the base case, changing n < 1 to n < 2 (in which case, 1 and lower, including 0, will return 1). (Otherwise every call to factorial(x) will do an extra useless «iteration» over 0.)

(Also, the solution is missing a return value in main().)

Hi Alex.

The very first code example in this article won’t compile because there should be << ‘\n’ instead of << \n’ at the end of line 5.

The last code example (recursive Fibonacci) won’t compile either because the function is defined as int fibonacci(int number), and its return statement calls Fibonacci (capital F), which is undefined.

Thanks! Both issues have been fixed.

Sorry for my typo in the first line of my comment. Count will be greater than zero.

Hi Alex,

In your second example program, you have mentioned that countDown(1) does not call countDown(0). But countDown(1) calls countDown(0) because count will be greater than 1 and if statement executes. So your program is printing

push 5

push 4

push 3

push 2

push 1

push 0

pop 0

pop 1

pop 2

pop 3

pop 4

pop 5

Thank You very much for your tutorials. I am learning a lot from this site. Love your tutorials. You are like my master (My Guru in Indian culture). Waiting eagerly for your updates in Object Oriented Programming section.

I’ve updated the example so it doesn’t call countDown(0) as intended. Thanks for pointing out the mistake.

Hey Alex,

I was wondering if you could tell me how you would implement a fibonacci sequence function considering the limit of the stack. From what I understood from the previous tutorial, the stack has less memory than the heap and since we are calling the same function every time, the return values start to accumulate and are never popped until the stack overflows.

My question is: when would you use recursion and when would you use an iterative function? Is there really any use in using recursive functions? Before reading this I always used recursive functions, but now that I see that they are less efficient I am starting to wonder if I should even use them.

This is a great question.

Personally, I only use recursion when:

* The recursive code is _much_ simpler to implement.

* The recursion depth can be limited (e.g. there’s no way to provide an input that will cause it to recurse down 100,000 levels).

* The iterative version of the algorithm requires managing a stack of data.

* This isn’t a performance-critical section of code.

Personally, I’d write a Fibonacci sequence function iteratively because it’s relatively simple to do, and will be more performant. But that said, because of the way that Fibonacci numbers grow, you’d overflow your integers before hitting the stack limit with a recursive implementation.

There are some algorithms that are much easier to write recursively than iteratively, such as quick-sort or merge-sort and some tree-walking algorithms. I’d start with a recursive version of those, and then optimize to an iterative version if performance demands required it.

In short, I use recursion for the few things where it really makes sense, and iteration for everything else.

Note: In C/C++, iteration is generally quite a bit faster than recursion. However, in other languages, this is not necessarily (as) true, so the above may not be generalizable.

Thanks Alex! That’s really useful to know, thanks for the quick reply!

I would also like to thank you for taking time to write these tutorials and even answer the questions. This is hands down the best C++ tutorial I have ever seen and I keep referring all my friends who are trying to get into coding. Keep up the great work 🙂

I found this chapter very hard to get my head around, but I think I managed it.

This youtube video also helps: https://www.youtube.com/watch?v=Mv9NEXX1VHc

Alex, here is my iterative fibonacci generator:

I want to remove the if statement inside the for loop because it looks ugly. But I can’t, because y needs to be incremented once. How can I do that?

Start your count at 2 instead of 1 and print the first numbers outside of the loop.

I try to include Fibonacci in my diet every morning.

Although the Fibonacci recursive implementation tends to be how the idea of recursion is introduced, the recursive implementation has extremely poor run-time efficiency compared to the iterative approach - think big-O. Many of the necessary calculations are computed repeatedly and it is therefore redundant. Formulate a recursion tree for the algorithm and prove it to yourself.

Here is my iterative Fibonacci program =) :

```cpp
#include <iostream>
using namespace std;

// Prints the first x Fibonacci numbers
void Fibonacci(int x)
{
    int fibBack2 = 0;
    int fibBack1 = 1;
    int fibNow = 0;

    for (int iii = 0; iii < x; iii++)
    {
        if (iii == 0)
            fibNow = 0;
        else if (iii == 1)
            fibNow = 1;
        else
            fibNow = fibBack1 + fibBack2;

        cout << fibNow << " ";

        fibBack2 = fibBack1;
        fibBack1 = fibNow;
    }
}

int main()
{
    int x;
    cin >> x;
    Fibonacci(x);
    cout << endl;

    return 0;
}
```

Hey there,

Very useful post! I would like to invite you to visit my blog as well, and read my latest post about sequence points in C and C++.

http://blog.panqnik.pl/dev/sequence-points-in-c-cpp/

Best regards,

panqnik

Check out my Fibonacci algorithm. I can barely figure out how I did this!

```cpp
#include <iostream>
using namespace std;

int nFibarray[3]; // global, so zero-initialized
int nLimit;

int main()
{
    cout << "Display Fibonacci numbers up to what number? ";
    cin >> nLimit;

    for (*(nFibarray + 2) = 1; *(nFibarray + 1) <= nLimit;
         *(nFibarray + 2) = *nFibarray + *(nFibarray + 1))
    {
        cout << *(nFibarray + 1) << " ";
        *nFibarray = *(nFibarray + 1);
        *(nFibarray + 1) = *(nFibarray + 2);
    }

    return 0;
}
```

My PC stopped at -130146. Fascinating!

“On the author’s machine, this program counted down to -11732 before terminating!”

Then you must have quite some stack memory, my computer stopped counting at -4607 xD

“Hahaha… apparently Linux doesn’t believe in stack overflows. I ran your stack overflowing program and it just kept going all the way past -4,000,000, when I stopped it.”

Lmao, apparently you have infinite stack memory on your PC 😛

Probably related to the compiler and your RAM, or maybe even whether your operating system is 32- or 64-bit (I hit -130,146 before crashing, just like clementl below me, and I’ve got 12 GB of RAM using Dev-C++ on Windows 7 64-bit). Perhaps the compiler Quinn used had some kind of fancy dynamic/large stack, unless he has a few TB of RAM.

Fun to test, though. Someone should make a program using this principle that pops up error messages instead of cout’s to simulate the Windows experience for Mac/Linux users!

Hey there.

I’m just another guy who’s been reading through your tutorials (not always in a linear fashion) and I liked this one - especially coming from the previous article dealing with stack memory.

I noticed something - when I write recursive statements, I don’t use a return statement until I break the recursion. I noticed that you use return statements as part of your recursion. For example:

When I’ve written code like this, I wouldn’t have thought to put the function call after a return statement in the else block. (In practice, it yields the same results.) You may have stated this, but I may have missed it: does using a return statement in front of the recursive call prevent the stack from filling up? Based on how you said stacks worked in the previous tutorial, this seems to make sense. By issuing a return statement, the function’s variables and whatnot are removed from the stack each time it recurses, so the stack doesn’t build up. Am I right, or am I missing something? It seems to make sense, but something isn’t quite right.

As a side note, I made my own fibonacci sequence before reading this article (as practice for command-line arguments) but I used a recursive sequence that added the last and second-to-last numbers to get the next number. 1+1=2, 2+1=3, 3+2=5, 5+3=8, and so on. Seems like this method takes less overhead?

Thank you for your tutorials!

Putting the function call on the same line as the return statement in the example was done just to avoid having to store the return value from the recursive function call in a temporary variable before returning it. It doesn’t keep the stack from filling up: the caller’s stack frame stays on the stack until the recursive call it is waiting on returns.

Is there actually a use of fibonacci numbers in C++? Apart from in a maths program!

Here’s an extension of the fibonacci sequence to negative numbers. The results are a little counter-intuitive for negative numbers, cf. http://en.wikipedia.org/wiki/Negafibonacci for details.

Thank you very much, I understand recursion now, it is easy and fine. Thanks again, I feel happy.

Hahaha… apparently Linux doesn’t believe in stack overflows. I ran your stack overflowing program and it just kept going all the way past -4,000,000, when I stopped it. It didn’t even need to allocate any additional memory, I’m disappointed!

Never tried doing that before, now I’m gonna see if I can fork bomb this beast!

Keep in mind, I am pretty sure stacks on Linux are defaulted to be much larger than on windows environments.

Same thing happened to me. It goes on for millions and I don’t think stacks are ever that big. Seems more likely the compiler is doing tail recursion optimization, where it automatically converts to iteration for you.

http://stackoverflow.com/questions/34125/which-if-any-c-compilers-do-tail-recursion-optimization

Note that the recursive implementation of a Fibonacci number generator is very inefficient. For example, to calculate Fibonacci(5), it calculates Fibonacci(4) and Fibonacci(3). To calculate Fibonacci(4), it recalculates Fibonacci(3), and so forth. In general, it will take an exponential number of calls to calculate the number; that is, it is O(exp(n)) in both time and space. See this code for a demonstration.

That is very true. It’s much more efficient to write an iterative version. I often see this question come up on job applications. They’ll ask you to write a Fibonacci function, but the thing they are really interested in is whether you understand that the recursive version of Fibonacci is too inefficient to use for practical purposes.

Iterative function that does the same. It’s quite simple to do in a loop as you can just start adding the numbers up as you work up the sequence. It’s easy to figure out when you look at the sequences working from left to right.

0 1 1 2 3 5 8 13 21 34 55 89 144

It is correct

hey,

I wanted to ask about the efficiency of recursion versus using a for or while loop.

By the way, your articles are great! I enjoy reading them very much.

thanks,

khen.

Iterative functions (those using a for or while loop) are almost always more efficient than their recursive counterparts. The reason for this is because every time you call a function there is some amount of overhead that takes place. Iterative functions avoid this overhead.

That’s not to say iterative functions are always a better choice. Sometimes the recursive implementation of a function is so much cleaner and easier to follow that incurring a little extra overhead is more than worth it for the benefit in maintainability.

This makes me wonder, what would happen if I made the countDown function inline?

Your compiler would probably just ignore the inline request.

```cpp
void quiz(int i)
{
    if (i > 1)
    {
        quiz(i / 2);
        quiz(i / 2);
    }
    cout << "*";
}
```

If quiz(5) is called, how many times is the star printed? I got confused by this one. How would you solve it step by step, like you solved the first example?

Hey, this function is going to print * 15 times.

7 times…

Hey Alex, thanks a lot man. I knew about recursive functions, but after reading this article recursion is so easy to work through. Now I completely understand recursive functions. Thanks a lot.

cheers

mind-blowing!

Hi, nice post, I enjoyed it.