A **recursive function** in C++ is a function that calls itself. Here is an example of a poorly-written recursive function:

```cpp
#include <iostream>

void countDown(int count)
{
    std::cout << "push " << count << '\n';
    countDown(count-1); // countDown() calls itself recursively
}

int main()
{
    countDown(5);

    return 0;
}
```

When countDown(5) is called, “push 5” is printed, and countDown(4) is called. countDown(4) prints “push 4” and calls countDown(3). countDown(3) prints “push 3” and calls countDown(2). The sequence of countDown(n) calling countDown(n-1) is repeated indefinitely, effectively forming the recursive equivalent of an infinite loop.

In lesson 7.9 -- The stack and the heap, you learned that every function call causes data to be placed on the call stack. Because the countDown() function never returns (it just calls countDown() again), this information is never being popped off the stack! Consequently, at some point, the computer will run out of stack memory, stack overflow will result, and the program will crash or terminate. On the author’s machine, this program counted down to -11732 before terminating!

**Recursive termination conditions**

Recursive function calls generally work just like normal function calls. However, the program above illustrates the most important difference with recursive functions: you must include a recursive termination condition, or they will run “forever” (actually, until the call stack runs out of memory). A **recursive termination condition** is a condition that, when met, will cause the recursive function to stop calling itself.

Recursive termination generally involves using an if statement. Here is our function redesigned with a termination condition (and some extra output):

```cpp
#include <iostream>

void countDown(int count)
{
    std::cout << "push " << count << '\n';

    if (count > 1) // termination condition
        countDown(count-1);

    std::cout << "pop " << count << '\n';
}

int main()
{
    countDown(5);

    return 0;
}
```

Now when we run our program, countDown() will start by outputting the following:

```
push 5
push 4
push 3
push 2
push 1
```

If you were to look at the call stack at this point, you would see the following:

```
countDown(1)
countDown(2)
countDown(3)
countDown(4)
countDown(5)
main()
```

Because of the termination condition, countDown(1) does not call countDown(0) -- instead, the if statement’s condition is false, so the recursive call is skipped; countDown(1) prints “pop 1” and then returns. At this point, countDown(1) is popped off the stack, and control returns to countDown(2). countDown(2) resumes execution at the point after countDown(1) was called, so it prints “pop 2” and then returns. The recursive function calls are subsequently popped off the stack until all instances of countDown have been removed.

Thus, this program in total outputs:

```
push 5
push 4
push 3
push 2
push 1
pop 1
pop 2
pop 3
pop 4
pop 5
```

It’s worth noting that the “push” outputs happen in forward order since they occur before the recursive function call. The “pop” outputs occur in reverse order because they occur after the recursive function call, as the functions are being popped off the stack (which happens in the reverse order that they were put on).

**A more useful example**

Now that we’ve discussed the basic mechanics of recursive function calls, let’s take a look at another recursive function that is slightly more typical:

```cpp
// return the sum of all the numbers between 1 and value (inclusive)
int sumTo(int value)
{
    if (value <= 0)
        return 0; // base case (termination condition)
    else if (value == 1)
        return 1; // base case (termination condition)
    else
        return sumTo(value - 1) + value; // recursive function call
}
```

Recursive programs are often hard to figure out just by looking at them. It’s often instructive to see what happens when we call a recursive function with a particular value. So let’s see what happens when we call this function with value = 5.

```
sumTo(5) called, 5 <= 1 is false, so we return sumTo(4) + 5.
sumTo(4) called, 4 <= 1 is false, so we return sumTo(3) + 4.
sumTo(3) called, 3 <= 1 is false, so we return sumTo(2) + 3.
sumTo(2) called, 2 <= 1 is false, so we return sumTo(1) + 2.
sumTo(1) called, 1 <= 1 is true, so we return 1. This is the termination condition.
```

Now we unwind the call stack (popping each function off the call stack as it returns):

```
sumTo(1) returns 1.
sumTo(2) returns sumTo(1) + 2, which is 1 + 2 = 3.
sumTo(3) returns sumTo(2) + 3, which is 3 + 3 = 6.
sumTo(4) returns sumTo(3) + 4, which is 6 + 4 = 10.
sumTo(5) returns sumTo(4) + 5, which is 10 + 5 = 15.
```

At this point, it's easier to see that we're adding numbers between 1 and the value passed in.

Because recursive functions can be hard to understand by looking at them, good comments are particularly important.

**Recursive algorithms**

Recursive functions typically solve a problem by first finding the solution to a subset of the problem (recursively), and then modifying that sub-solution to get to a solution. In the above algorithm, sumTo(value) first solves sumTo(value-1), and then adds the value of variable value to find the solution for sumTo(value).

In many recursive algorithms, some inputs produce trivial outputs. For example, sumTo(1) has the trivial output 1 (you can calculate this in your head), and does not benefit from further recursion. An input for which an algorithm trivially produces an output is called a **base case**. Base cases act as termination conditions for the algorithm. Base cases can often be identified by considering the output for an input of 0, 1, "", '', or null.
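To illustrate, here is a small sketch (the function name is ours, not from the lesson) where the empty string is the base case:

```cpp
#include <string>

// Reverse a string recursively.
// Base case: the empty string trivially reverses to itself.
std::string reverseString(const std::string& s)
{
    if (s.empty())
        return ""; // base case (termination condition)

    // Recursive case: reverse everything after the first character,
    // then append the first character to the end.
    return reverseString(s.substr(1)) + s[0];
}
```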

**Fibonacci numbers**

One of the most famous mathematical recursive algorithms is the Fibonacci sequence. Fibonacci sequences appear in many places in nature, such as branching of trees, the spiral of shells, the fruitlets of a pineapple, an uncurling fern frond, and the arrangement of a pine cone.

Here is a picture of a Fibonacci spiral:

Each of the Fibonacci numbers is the length of the side of the square that the number appears in.

Fibonacci numbers are defined mathematically as:

F(n) =
- 0, if n = 0
- 1, if n = 1
- F(n-1) + F(n-2), if n > 1

Consequently, it's rather simple to write a recursive function to calculate the nth Fibonacci number:

```cpp
#include <iostream>

int fibonacci(int number)
{
    if (number == 0)
        return 0; // base case (termination condition)
    if (number == 1)
        return 1; // base case (termination condition)
    return fibonacci(number-1) + fibonacci(number-2);
}

// And a main program to display the first 13 Fibonacci numbers
int main()
{
    for (int count=0; count < 13; ++count)
        std::cout << fibonacci(count) << " ";

    return 0;
}
```

Running the program produces the following result:

0 1 1 2 3 5 8 13 21 34 55 89 144

You will note that these are exactly the numbers that appear in the Fibonacci spiral diagram.

**Recursive vs iterative**

One question that is often asked about recursive functions is, "Why use a recursive function if you can do many of the same tasks iteratively (using a *for loop* or *while loop*)?". It turns out that you can always solve a recursive problem iteratively -- however, for non-trivial problems, the recursive version is often much simpler to write (and read). For example, while it's possible to write the Fibonacci function iteratively, it's a little more difficult! (Try it!)
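If you'd like to check your attempt, here is one possible iterative version (a sketch, not the only way to write it; the function name is ours):

```cpp
// One possible iterative Fibonacci: keep the previous two values
// and walk forward, instead of recursing backward.
int fibonacciIterative(int number)
{
    int previous = 0; // F(0)
    int current = 1;  // F(1)

    for (int i = 0; i < number; ++i)
    {
        int next = previous + current; // the next Fibonacci number
        previous = current;
        current = next;
    }

    return previous; // after n steps, previous holds F(n)
}
```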

Iterative functions (those using a for-loop or while-loop) are almost always more efficient than their recursive counterparts. This is because every time you call a function there is some amount of overhead that takes place in pushing and popping stack frames. Iterative functions avoid this overhead.

That’s not to say iterative functions are always a better choice. Sometimes the recursive implementation of a function is so much cleaner and easier to follow that incurring a little extra overhead is more than worth it for the benefit in maintainability, particularly if the algorithm doesn't need to recurse too many times to find a solution.

In general, recursion is a good choice when most of the following are true:

- The recursive code is much simpler to implement.
- The recursion depth can be limited (e.g. there’s no way to provide an input that will cause it to recurse down 100,000 levels).
- The iterative version of the algorithm requires managing a stack of data.
- This isn’t a performance-critical section of code.

However, if the recursive algorithm is simpler to implement, it may make sense to start recursively and then optimize to an iterative algorithm later.

*Rule: Generally favor iteration over recursion, except when recursion really makes sense*.

**Quiz time**

1) A factorial of an integer N (written N!) is defined as the product (multiplication) of all the numbers between 1 and N (0! = 1). Write a recursive function called factorial that returns the factorial of the input. Test it with the first 7 factorials.

Hint: Remember that x * y = y * x, so the product of all the numbers between 1 and N is the same as the product of all the numbers between N and 1.
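If you get stuck, here is one possible sketch (not necessarily the quiz's official solution):

```cpp
// Returns n!, the product of all the numbers between 1 and n.
// Base case: 0! is defined as 1.
int factorial(int n)
{
    if (n <= 0)
        return 1; // base case (termination condition)
    return factorial(n - 1) * n; // recursive case
}
```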

2) Write a recursive function that takes an integer as input and returns the sum of all the numbers in the integer (e.g. 357 = 15). Print the answer for input 93427 (which is 25).
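Again, if you get stuck, one possible sketch (it assumes a non-negative input):

```cpp
// Returns the sum of the decimal digits of x (assumes x >= 0).
// Base case: a single-digit number sums to itself.
int sumDigits(int x)
{
    if (x < 10)
        return x; // base case (termination condition)
    return sumDigits(x / 10) + x % 10; // sum the leading digits, add the last
}
```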

3a) This one is slightly trickier. Write a program that asks the user to enter an integer, and then uses a recursive function to print out the binary representation for that number. Use method 1 from lesson 3.7 -- Converting between binary and decimal. Assume the number the user enters is positive.

Hint: Using method 1, we want to print the bits from the "bottom up", which means in reverse order. This means your print statement should be _after_ the recursive call.
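One possible sketch of that shape -- here building a string rather than printing, but with the same ordering idea (append the current bit after the recursive call):

```cpp
#include <string>

// Returns the binary representation of n as a string (assumes n > 0).
// Recurse on the higher bits first, then append this bit, so the bits
// come out "bottom up" (most-significant bit first).
std::string toBinary(int n)
{
    if (n == 0)
        return ""; // base case: no bits left
    return toBinary(n / 2) + std::to_string(n % 2);
}
```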

3b) Update your code from 3a to handle the case where the user may enter 0 or a negative number.

Here's a sample output (assuming 32-bit integers):

```
Enter an integer: -15
11111111111111111111111111110001
```

Hint: You can turn a negative integer into a positive one by static casting it to an unsigned integer. These have identical bit representations (the type is used to determine how to interpret the number).
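Following that hint, one possible sketch (again building a string rather than printing; the unsigned parameter lets the caller static_cast a negative int and see its two's-complement bit pattern):

```cpp
#include <string>

// Returns the binary representation of n as a string.
// Taking unsigned int means a caller can pass
// static_cast<unsigned int>(x) for a negative x.
std::string toBinary(unsigned int n)
{
    if (n > 1) // recurse while higher bits remain; n <= 1 is the base case
        return toBinary(n / 2) + std::to_string(n % 2);
    return std::to_string(n); // handles 0 and 1, so input 0 still yields "0"
}
```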


I think mine is simpler for quiz 3.

Edit : I see I can't handle high values.

Hi, for the quiz number 2 adding the integer, I got this

And it did work, I am surprised it works. But I saw your version is much simpler, May I ask the one I have does it considered a bad code? because it's not efficient? thanks

The word "bad" is judgmental, and you'll probably get different answers to that question. So rather than give my opinion about whether it's "bad", instead I'll say the following:

Your code is definitely inefficient, but the limitations of the size of the integer and the nature of the recursion prevent this function from recursing too deep, which helps mask the inefficiency. But the larger sin here is that it's not obvious what the code is doing. At the very least, I'd want to see a comment explaining how the else statement works.

I'm struggling with this concept, and I'm finding very hard to do the second exercise, there is any cue you could give to be able to see more clearly how to code a recursive function?

Never mind, I forgot that integer division rounds down in C++. I think you should put that as a hint for the second question in the quiz.

Not really. Perhaps if you tell me in more detail what part of recursion you're struggling with in particular, I can help more.

My difficulty in the quiz was only that I forgot that integers, when divided, have a remainder if the division is not exact, and it truncates the tenths, and so on. But another thing I found a little hard is knowing how to write a recursive function: finding the base case and knowing the result it will return.

Yup, recursive functions can be pretty non-intuitive. It definitely takes a bit of practice to figure them out.

I don't get it. It seems that the only difference between solutions 3a and 3b is that if one enters a negative number, it is static_cast to a positive number and you run the same code as before.

How does all the 1s get outputted? I can't follow what's going on here.

Remember that the type is used to determine how to interpret a number from the sequence of bits. So when you convert a number from a signed to an unsigned (or vice-versa) you're not actually changing the underlying bits, you're just changing the type. Since our code from 3a can print the binary for positive numbers, we can use this conversion trick to change a negative number to a positive one so we can print its binary.

On my system, the last quiz program doesn't work using static_cast<unsigned int>(n) with n < 0. Sample output (I added some llamas):

Still works fine for positive inputs:

Then I realized I hadn't made the recursive function's parameter unsigned. Fixing that, we now get this result for inputs < 0:

So, I changed it to simply multiply n by -1 if n < 0 before calling the recursive function (instead of trying to use static_cast<unsigned int>(n)) and that works fine for both positive and negative inputs.

I'm compiling with clang++ 8.1.0 on macOS 10.12 with these options:

There are no warnings using this command for the build.

is correct. That's the 32-bit representation of integer -15 in two's complement.

I realized that earlier. I guess I wasn't expecting it, though, with a cast to unsigned, and in the context of the examples you were giving there. But now it sounds like you were expecting me to get such a result.

FYI .. I'm recalling liking the quizzes that give a main() function and show the expected output, and ask us to write the function(s) and/or class(es), etc. to fill out the rest of the program. Perhaps there could be something more on this one along those lines about expected output?

Thank you for the feedback .. keep up the good work.

Alex,

Does it make sense to use a local static variable as a limit on recursion depth (to prevent crashes)? I've just read about Big-O notation... Cannot wait to read your thoughts about it. As always, a great thanks for your tutorial.

How are you thinking you'd use a local static in this case?

Big-O is a fun topic. I'm hoping to get around to writing that tutorial soon, but life keeps getting in the way.

Local static numOfcalls is initialized to 1 only once and will be incremented by 1 on each function call. If numOfcalls exceeds MaxNumberOfCalls, then recursion has an error (for debugging purpose only).

That might work if you decremented the static value as the stack unwound (so it ends up in the same state it started).

An alternative way to do this would be to have a function parameter representing recursion depth. It can be the rightmost parameter and default to 0. Each recursive call increments by 1. At some arbitrary point, the recursion can either stop recursing or throw an exception. That feels like a better solution to me as all the data is local.
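A sketch of that idea (the names and the limit are arbitrary):

```cpp
#include <stdexcept>

// Like sumTo, but with a rightmost depth parameter that defaults to 0
// and increments on each recursive call. If it passes an arbitrary
// limit, we assume the recursion has gone wrong and throw.
int sumToChecked(int value, int depth = 0)
{
    if (depth > 10000) // arbitrary safety limit
        throw std::runtime_error("sumToChecked: recursion too deep");

    if (value <= 0)
        return 0; // base case (termination condition)
    return sumToChecked(value - 1, depth + 1) + value;
}
```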

I like your solution.

For debugging purposes, I'm OK with the idea of an "extra" optional parameter for a recursion counter. I know that at this point in the lesson series, you have yet to cover "private:", but I generally like the notion of keeping the internals "private" from the function call (consider it a "best practice" in my house style), so I would probably favor the static var approach.

For a recursive function going into a "library" (internal or published) for reuse, I would strongly favor not using the optional parameter approach, unless doing so adds a needed "feature" to the function. Better to keep the internals away from other users. Plus, there's that principle of writing code (and comments) so as to treat yourself (in the future) just like everyone else who might use your code. I like keeping myself safe, too.

You're right, in that the recursive function using a static var to track recursion count needs to also decrement it after every return. However, it would also use less memory on the stack than passing another parameter would.

Another case where I've used a static value in a recursive function is when I want to know what input started this "sleigh ride". There are times when that value is useful in the algorithm. The tricky part is deciding that this is the first time in and setting it properly. Sure, one could nest the recursive function under a wrapper function and then put the original input value in as an extra last parameter (and maybe that's the safest way to keep the keyboard monkey from messing it up while writing the code?).

One minor note: the static variable version is not multi-threading friendly. That won't be an issue unless your application is multi-threaded. But that's a good reason to use the parameter version in a library -- you don't know whether the calling program will be or not.

You're right; excellent point. I wasn't thinking about threading (not there yet in the lessons :), but I always like to see things be thread-safe, as one never knows when you'll need multi-threading and also you want others to be able to utilize your library in any way they need to.

It would suck to have to go back through to fix thread-safety later. I would think that it really shouldn't be all that hard to keep things thread-safe from the get-go, if you're willing to keep it in mind?

I don't cover multi-threading in these tutorials. I'd like to at some point, now that the standard library contains functionality to support it, but my writing time has been mostly limited to answering comments recently.

Making functions thread-safe is fairly simple for most cases (just make sure everything is a local variable or function parameter). It's when you get into access of shared resources (e.g. files, databases, dynamic memory) that things become more interesting and challenging.

As far as I can see, the solution for quiz 3) has two issues:

1) it doesn’t print the two's complement if x < 0

2) it doesn’t print anything for x == 0

I used a similar approach with bitwise operators and solved the first of the two issues:

With two functions, one can solve both issues and make the code more readable:

P.S.: Sorry for the double post, but one can’t edit older posts…

Good call, and nice solution. I've split this out into a secondary quiz question. Hat tip to you.

Hello Alex,

in your tutorial you stated that use of recursive functions is limited by their function overhead. Is it true that all their "call by value" parameters (and all local variables) are put on the stack each time the function is called, while for "call by reference/address" parameters only a pointer is put on the stack, just like for "normal" non-recursive function calls?

If so, then one should try to use mostly "(const) references" in recursive functions, except for base types that have the same or smaller size than a pointer, right?

Many thanks for the insight on recursive functions!

You should follow the same parameter passing rules for recursive functions as for non-recursive functions. Generally you won't need to recurse so deep that you run out of stack space. If that's a possibility, then you might want to look for an iterative approach.

I confused myself making a distinction between recursive and "normal" functions calls... thanks for making the point clear again that only the underlying logic differs, but the calls to both types of functions work the same, as do the "rules" for how one should pass parameters to them!

Hi Alex!

First of all, big thanks for those tutorials. Thanks to you I discovered that coding isn't something incomprehensible and is surprisingly easy!

Regarding the 3rd task from the quiz, although you presented much easier and shorter solution, I believe it's interesting to solve it using std::vector functionality that was covered in the previous lesson.

In the call stack window the function has the same address every time it is called. I know that this is because it has its memory allocated on the stack, but then in the inline functions chapter, what did you mean by "performance overhead"? Please make this clear to me.

The function's code is at the same memory address, but every time you call the function, that function call (along with any parameters) has to be pushed onto the stack, and then popped back off the stack when the function call is done. That function call and return is the overhead that doesn't exist for iterative functions.

In some cases (particularly with tail recursion), the compiler can optimize away the function call. But in other cases this isn't possible.
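For example, sumTo can be rewritten in a tail-recursive form (an accumulator-style sketch; the names are ours), where the recursive call is the last thing the function does, which makes it a candidate for that optimization:

```cpp
// Tail-recursive sum of 1..value. The running total is carried in
// the accumulator parameter, so the recursive call is in tail
// position and an optimizing compiler can often turn it into a loop.
int sumToTail(int value, int accumulator = 0)
{
    if (value <= 0)
        return accumulator; // base case: nothing left to add
    return sumToTail(value - 1, accumulator + value); // tail call
}
```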

iterative Fibonacci function

can you say what is wrong in this code?

The inner loop is unnecessary.

Hi!

Firstly, thank you very much for these beneficial tutorials. Actually, there is one point in the recursive example that I couldn't get, which is in the first part. I understood how pushing happens from 5 to 1, and that after that, number 1 will be popped since this number isn't accepted by the termination condition. BUT what I couldn't get is:

HOW could the count number (when popping) change from 1 to 2... and so on until it reaches 5??

And you said: "At this point, countDown(1) is popped off the stack, and control returns to countDown(2)."

But truly, I couldn't understand when you said that control returns to countDown(2). On what basis or logic could this happen?

Imagine that instead of function calls, we had plates with numbers on them. When we call a function, we put a plate on the stack of plates. When a function ends, we pull the plate off.

The first function call is countDown(5). So we put a plate with number 5 on the stack. However, before we get to the end of the function (and remove the plate from the stack), we call countDown(4). So we put a plate with number 4 on the stack. That calls countDown(3), so we put a plate with number 3 on the stack. Then a plate with 2, and a plate with 1. At this point, we have 5 plates on the stack, with number 1 on top and 5 on the bottom. Now we hit our termination condition, so the call to countDown(1) is allowed to print "pop 1" and return to the caller. When it returns, we pop that plate off the stack, so the top plate is numbered 2. Execution continues from the line after the function call to countDown(2). This prints "pop 2", and returns to the caller. So we pop another plate off, and execution continues from beyond the function call. This prints "pop 3". And so on, until we've printed "pop 4" and "pop 5".

In general, stuff that happens before a recursive call happens in forwards order (as the plates are put on the stack). Stuff that happens after a recursive call happens in reverse order (as the plates are popped off the stack).

Sincerely, thank you very much for your effort and interactivity! I really got it, this was very useful! 🙂

Hi Alex!

I have a theoretical question which says:

"Analyze the data structure of the recursive function"

my question is

"Does the question wants to know about the data structure used in the recursion function? If yes, It is the Stack! right?!"

OR

"is it wanting us to analyze the recursion itself as a data structure?"

I have a question on the third question of the quiz: the return is used inside the void function, but I'm not sure how it works. In an int function, wouldn't the return just tell the CPU to go back into main, or wherever the function was called from? But inside the void function it does not return anything back. Does it have to do something with the stack?

When a function returns, it always returns back to the caller. The only difference between an int function and a void function is that the int function returns an integer back to the caller (who can use it, or ignore it), whereas the void function does not return a value back to the caller.

Just for fun & practice, I took the factorial recursive function and changed it from calculating # of permutations (order matters), to calculating # of combinations (order doesn't matter). Also added a parameter for choosing how many of the total items can be chosen. Here's the code for anyone interested:

Recursion seems like it makes sense when you are iterating through a tree:

Root

|_tag

| |_tag

| | |_dataPoint

| | |_dataPoint

| |_dataPoint

|_tag

| |_dataPoint

| |_dataPoint

| |_dataPoint

|_dataPoint

|_dataPoint

Recursion helps with walking through every "dataPoint" in the tree. The pseudo code I use for iterating through a tree is like so:

analyzeTag(Tag tag)

analyze data points in tag.

check for sub-tags.

On sub-tag(s):

iterate through sub-tag list.

call tag analyze method on each sub-tag. <- recursion

Is there an easy way to do this using an iterator, or does the recursive way of doing this suffice?

Yes, one of the most common uses of recursion is to iterate through a tree. If you do your own recursion you can define how you want to traverse the tree (depth first or breadth first).

If your tree class has an iterator, you can use that instead, but that will probably come with a predefined traversal method (likely depth first).

Although the C++ standard library doesn't contain any tree containers, std::map is usually implemented as a red/black tree, and does have an iterator to traverse through it (depth first, since this produces sorted results).
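For example (a sketch; the map contents and function name are ours), iterating a std::map visits the keys in sorted order with no explicit recursion by the caller:

```cpp
#include <map>
#include <string>

// Concatenate the mapped values in key order. The range-for uses
// std::map's iterator, which performs an in-order (depth first)
// traversal of the underlying tree, yielding sorted keys.
std::string joinValues(const std::map<int, std::string>& m)
{
    std::string result;
    for (const auto& pair : m)
        result += pair.second;
    return result;
}
```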

Do I need a multidimensional array?

Yes, if your recursive function has multiple parameters.

Hello, how do I use recursion in a function like the Ackermann function, which depends on 2 variables, while storing the results for later use? For example, if you want to calculate A(3,20) by using previous results already calculated, to avoid many recursive calculations.

You can always stash the results of a function call in an array for lookup later. That way if you need them, you don't need to recalculate them, you can just pull the value out of the array.

Not sure if that answered your question.
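A sketch of that stashing idea (memoization), using Fibonacci as a simpler stand-in for Ackermann (a full Ackermann cache would need to be keyed on both arguments; all names here are ours):

```cpp
#include <vector>

// Memoized Fibonacci: results[n] caches fibonacci(n) so each value
// is computed only once instead of exponentially many times.
int fibonacciMemo(int n, std::vector<int>& results)
{
    if (n < 2)
        return n; // base cases: F(0) = 0, F(1) = 1
    if (results[n] != -1)
        return results[n]; // already computed: just look it up
    results[n] = fibonacciMemo(n - 1, results) + fibonacciMemo(n - 2, results);
    return results[n];
}

int fibonacciMemo(int n)
{
    std::vector<int> results(n + 1, -1); // -1 marks "not yet computed"
    return fibonacciMemo(n, results);
}
```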

In the section "A more useful example" you had the following comment:

I found this confusing. It might be clearer to say something like "Sums the integers from 1 to value"

PS: Your tutorials are excellent. I'm learning more here than I learned in the class I took in college.

Thanks for the feedback. I've updated the comment to be more clear.

I wrote my quiz 3 function like this :

That's much more elegant than what I came up with. Nice work. I've adopted your solution (with a minor tweak).

Thank you Alex , I am feeling honored 😀

Amazing solution! I cannot do it.

I think that I do not have a deep understanding of the recursive function.

This is what I've achieved:

It can only print the binary with a regular sequence.

Gapo's solution is great!

I'm trying to understand what's going on in his code but have the following difficulties:

1. What does it mean when a void type function has a return statement in the code?

void printBinary(int x)

something

return;

What is the thing it returns?

2. I can not exactly follow the data flow of this magic piece of code. Particularly the sequence of the code within the function printBinary:

After the input becomes 0, the function "return;", but I do not understand what it returns to...

Oh, my poor logic...

Hi Ran!

> What does it mean when a void type function has a return statement in the code?

It means "Stop execution here"

Your code can be minimized (because you have repeated lines), making it look more like Gapo's submission:

Hi nascardiver!

Nice to see you here. Thanks for always sharing your thoughts on cpp.

Yes! I think the code I wrote yesterday was terribly bad...

I rewrote the code according to your suggestion but it still prints the binary with a regular sequence.

I think the "return;" in a void function might have still have some magic that I do not understand.

Then, I try to use Geno's solution to print the binary data in a regular format.

However, I am very surprised by the result. I just switched the print-out line with the recursive call line.

38 in the binary format using my code, Geno's code, and a revised Geno's code

011001 // good

100110 // good

010011 // not good

So, just putting the output command after the recursive call does not make sense to me.

Hi Alex,

First, thank you for the wonderful tutorials. I have a doubt regarding the place of variable declaration. Let's say I have to declare a variable of non-fundamental type. This variable should be used to populate a collection in an iteration using a for/while loop. Which way is more efficient:

1. to declare this non-fundamental type variable immediately above the for/while loop only once

or

2. to declare this non-fundamental type variable with in for/while loop so that it will be declared and destroyed for each iteration

As per previous lessons, this should be done just before the variable usage. If that is the case, 2nd approach is recommended. My question is doesn't it need more CPU time to allocate and de-allocate memory from stack for this variable for each and every iteration? If so, then 1st approach for variable declaration seems good. Can you please suggest and explain which one is recommended?

Option 1 is better. Creating and destroying non-fundamental variables takes time, and you don't want to do that over and over if you can avoid it.

Generally, you want to declare your variables as close to the first use as reasonable. However, there may be specific reasons you don't want to do so immediately prior to first use, such as the case above (to prevent repeated creation and destruction) or to prevent the variable from going out of scope (e.g. if the first use is in a nested block but you need the variable later outside that nested block).
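A sketch of option 1 (the names are ours): the non-fundamental variable is declared once, before the loop, and reused each iteration.

```cpp
#include <string>
#include <vector>

// Build a list of labels. The std::string "label" is declared once
// before the loop and reused, avoiding repeated construction and
// destruction on every iteration.
std::vector<std::string> makeLabels(int count)
{
    std::vector<std::string> labels;
    std::string label; // declared once, outside the loop

    for (int i = 0; i < count; ++i)
    {
        label = "item " + std::to_string(i); // reassigned each iteration
        labels.push_back(label);
    }

    return labels;
}
```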

Thank you Alex. My doubt is cleared... 🙂

Great lesson! I found a way to avoid the use of "base cases" in my attempt to solve the Fibonacci sequence by iteration...

Hey there,

when Alex wrote 'Try it!', I could not resist finding an iterative version of the Fibonacci generator myself. I ended up with the following, which might be slightly more compact, and it gets along with no if statement and only 2 variables. I wanted to share my solution, since I did not see one with no case differentiation in the comments before (hope I did not just miss it). I would be happy if you let me know how you like it!

PS: Thanks again for all these awesome lessons!

Your function is good for calculating individual fibonacci values, but not good for calculating fibonacci sequences, because it does a lot of redundant work (e.g. to calculate the 5th fibonacci number, it does so from the base case rather than leveraging the fact that you just calculated the 3rd and 4th fibonacci numbers).

My example has the same issue though.

Otherwise, looks good.

Okay, but isn't starting from the first elements of the sequence kind of the basic idea of recursive and iterating functions?

Anyway, I got one more question about the efficiency of a program: Can I just count how many operations must be done until the return is called, and assume that every operation costs the same time? Or are there some operations taking more time than others... for example, is adding faster than comparing? And if so, is there a rule for how many "+ operations" equal one "?: operation"?

e.g. in the function fibo() above: I have some fixed operations (assigning 3 variables in the beginning + return at the end), which are executed only one time, no matter how often we iterate. I guess these operations are relatively unimportant for the overall performance. In the loop the function does:

line 13: assign

line 14: add, assign

line 15: assign

that means: 4+4*n calculations if the loop runs n times.

If I use this function instead:

this fibo also has 4 fixed operations (assigning 2 variables in the beginning, comparing and return in the end), but the loop gets along with 3 operations (comparing, adding, assigning):

So overall: 4 + 3*n operations

or do I rather have to count 4 operations (comparing, jump, adding, assigning)?

> Okay, but isnt starting from the first elements of the sequence kind of the basic idea of recursive and iterating functions?

Not always, you can start from the last element and iterate backwards if that makes sense. The larger point I was trying to get across is that your recursive function (and mine) is great for calculating single values, but not so much for sequences (which is what our sample programs are using it for). If I wanted to calculate fibonacci sequences, I'd write a second fibonacci function to do so, not reuse this one (this one does too much redundant work).

You can't really make any assumptions about how long something will take to run, especially since compilers will optimize things. The only way to really tell is to put in some timing code and measure how long it takes to run. There's also Big-O notation, which is used to talk about performance more broadly. I plan to write a lesson on that one at some point in the near future, but you can do your own reading in the meantime if interested.

I gave up on quiz 2, although I understood how to do it in an iterative way. Looking at the solution, I realized that the scope of the recursive function stays preserved. Am I right? If yes, it would be useful to mention it in the main text. Thanks for a great tutorial!

Recursive functions work exactly like normal functions in terms of scope and duration. They just have the added complexity of needing a termination condition so you don't recurse all your stack space away.

You articulated what I was trying to say, maybe better.

I thought it very interesting that a lot of variables never went out of scope as the recursion went forward! Am I right? Product would be destroyed in a normal function when it went out of scope. I think something different is going on here?

No, you're mixing up scope and duration.

Scope determines when a variable can be seen/used. Duration determines when it is destroyed.

Consider:

In this program, variable x goes out of scope when function a() is called (you can't use it inside function a), comes back into scope when function a() returns, and is destroyed at the end of function main().

Can you give an example of variable duration? Good scope explanation.

I talk about duration in Lesson 4.1a -- Local variables, scope, and duration. Duration just controls when variables are created and destroyed. For example, local variables are created at the start of the block they are defined in, and destroyed at the end of that same block. Global variables are created at the start of the program and destroyed at the end.

With recursion, though, the functions are LIFO on the stack. If I have created variables in my first call and go to a depth of 9, for example, the duration of those variables created in the first call will persist through at least 9 function returns. So technically, block 1 in this example "persists" through at least 9 returns before the block is destroyed. Hopefully this thinking is correct. I didn't see anything on recursion in 4.1a, unless the principles of nesting and scope/duration are similar. I may have imprecise terminology here, please pardon. Thanks in advance.

What you say is true, but it's a strange way of thinking about things. To me, it's more intuitive to say that as long as the block the variable was created on is still on the stack (even if it's not at the top), the local variables inside that block still exist.

Please make some tutorials on Time Complexity of programs and how we can reduce it.

I am planning on writing a lesson on big-O notation soon.

I will make those changes. Thanks for your comments.

Hello Alex,

You did a great job explaining the Fibonacci sequence. I attempted to come up with an iterative solution for the Fibonacci sequence in your tutorial. I would like to get your comments on my code, and how I could improve it. Thanks.

I'd suggest two minor changes:

1) Put your Fibonacci generator in a function

2) Pick better names than sum, sum1, and sum2. Maybe something like current, prev1, and prev2?

Great lesson, Alex! I haven't seen a better explanation on recursion. 🙂

Personally, I love recursion. It exercises your brain like no other. Reading through the comments, I found some readers pointing out the inefficiency of the Fibonacci-series generator. I dug up an old function I wrote some two years ago in school and examined it (there were no comments 😀 ). Figured it out; here it is with proper documentation.

It takes just n+1 calls to print n terms (n calls to print n terms + 1 call in which the base case is met). Can an iterative version still get much more efficient than this?

Also one thing I'd like to point out, in quiz #1 you asked us to 'test it with the first 5 factorials', but the solution prints 7 factorials.

Thanks for the lesson, again!

> Can an iterative version still get much more efficient than this?

If by efficient you mean in terms of lines of code or variables used, maybe not. But we typically measure efficiency in terms of time required to run, and if a function call is expensive on a given architecture, then the iterative version would likely be more efficient.

Also, fixed the quiz question. Thanks for pointing that out.

I see... Is that what reader Kerrick was talking about? That your Fibonacci function is "O(exp(n)) in both time and space"? Is that how we measure efficiency? Then my function would be O(n+1), correct?

Your algorithm would be O(n) (the +1 doesn't matter), which is much better than O(exp(n)). The iterative version is also O(n).

That said, Big-O notation only really tells you how an algorithm scales with the number of elements it has to operate on. It doesn't tell you anything about how efficient it is compared to other algorithms in the same efficiency class. So while a recursive and iterative version of an algorithm might both be O(n), one still might be significantly faster than the other.

I've been itching to write a tutorial on Big-O notation. It's a fun topic, and a good general programming tool to have in your toolkit.

It'd be great! I hope it's already on your to-do list.

Thanks for clearing it up! 🙂