
posted 13 years ago

I am having a great deal of trouble wrapping my mind around big O. When the professor explained it in class it seemed to make sense and not be difficult, but when I look at my assignment problems they don't seem to make any sense at all. I have been trying to think about it the way he told us to: consider what doubling the number of elements does, and count how many times each function is going to be called. For simple for loops and simple statements that seems to work, but once I get into recursion or nested for loops, all the big O formulae I calculate predict wildly inaccurate outcomes. I need someone to take a look at my methods and then my predictions, and try to help me understand where I'm going wrong here.

I calculated this to be n^3, figuring that each time the method is called it must go through the inner calls twice. But this isn't working out for me. I was wondering if anyone had a more systematic way of determining the big O of a method, since my teacher's rather vague approach isn't working for me. Thanks for any help.

posted 13 years ago

Ummm...I know big-O notation well enough to use it, but in attempting to frame an answer for you, I realized I do **not** know it well enough to teach it! Here's a link that may help, though: Algorithmic Efficiency: Beating a Dead Horse Faster

Now, for your specific example - ummm...isn't that O(2^n)?

As a brute-force approach to solving it, assume that entering at n=0 takes 1 time unit, denoted t(0)=1. At t(1), we get units (1+2*t(0))=3. t(2)=(1+2*t(1))=7. t(3)=15, t(4)=31, and t(5)=63. For a given "n", t(n) = 2^(n+1) - 1 - which is equivalent to 2^n for big-O purposes.
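That tally can be sketched in code. This is a minimal sketch, assuming (as the hand computation does) that the method costs one unit of constant work plus two recursive calls on n-1; the class and method names are made up for illustration, since the original method isn't shown here.

```java
// Minimal sketch of the brute-force tally, assuming the posted method does
// constant work plus two recursive calls on n - 1. Names are hypothetical.
public class TimeUnits {

    // t(0) = 1; t(n) = 1 + 2 * t(n - 1)
    static long t(int n) {
        if (n == 0) {
            return 1;                // entering at n = 0 costs one time unit
        }
        return 1 + 2 * t(n - 1);     // constant work + two recursive calls
    }

    public static void main(String[] args) {
        // Prints 1, 3, 7, 15, 31, 63 - matching the hand computation,
        // and each value equals 2^(n+1) - 1.
        for (int n = 0; n <= 5; n++) {
            System.out.println("t(" + n + ") = " + t(n));
        }
    }
}
```

A quick sanity check in the spirit of the "doubling" heuristic: going from n to n+1 roughly doubles t(n), which is exactly the signature of O(2^n) growth.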

I think.

Anybody with more clues to rub together care to critique? It's been a Very Long Time since I did this kind of analysis...

Grant

In Theory, there is no difference between theory and practice.
In Practice, there is no relationship between theory and practice.

posted 13 years ago

Do you know what a recurrence relation is? Basically, it is a function that is defined in terms of itself. Recurrence relations are the basic tool for working out the exact time complexity (expressed in Big-Oh notation) of recursive functions. I don't know if your instructor intends to get this specific, although I think you should.

Grant is correct that this algorithm is O(2^n). To solve this, we can write a mathematical function that models the return value of the method:

f(n) = 2, if n = 0

f(n) = 3 + 2*f(n-1), if n > 0

Note that the recursive case is defined in terms of the function itself (with a smaller value as the "argument"). This is the basic idea of a recurrence relation. In fact, recurrence relations are exactly like recursive methods in programming. (This is no coincidence.)

To solve the problem, first note that the base case is basically irrelevant since it is a constant. The interesting part happens with the recursive case. We manipulate the recursive case to get an equivalent equation:

f(n) - 2*f(n-1) = 3

Then, you create what is called the "characteristic polynomial". From the left, we get (x - 2) by using the coefficients of each f(n) term with decreasing values of n replaced by decreasing powers of x. In other words, f(n) is replaced by x^1 = x and -2*f(n-1) is replaced by -2*x^0 = -2. If we had f(n-2), then f(n) would be replaced by x^2 and on down.

The right hand side can be written as 3*1^n (because 1^n = 1). Notice that 3 is a polynomial of degree 0. So this means that we get (x - 1)^1 = (x - 1). The 1 inside the parentheses comes from 1^n and the 1 in the exponent comes from the degree of the polynomial plus 1 (i.e. 0 + 1 = 1).

Now we multiply (x - 2) and (x - 1) and set it equal to zero:

(x - 2)(x - 1) = 0

This means that x = 2 or x = 1. Now we can write a solution of f(n) in terms of n only, and not in terms of the function itself:

f(n) = c1 * 2^n + c2 * 1^n

The 2 and 1 come from the solutions that we found for x in the characteristic polynomial. In problems like this, you always put it in the form x^n where x is a solution to the characteristic polynomial. Also, c1 and c2 are arbitrary constants that depend on the initial value (i.e. f(0) = 2). If you want, you can solve for c1 and c2, but it isn't necessary if you are only interested in the Big-Oh.

First we should simplify f(n) to

f(n) = c1 * 2^n + c2

since 1^n = 1 for all n.

Now you should be able to see that f(n) is O(2^n) as Grant said.
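To tie the algebra back to code: if you do solve for the constants (using f(0) = 2 and f(1) = 3 + 2*2 = 7), you get c1 = 5 and c2 = -3, i.e. f(n) = 5*2^n - 3. The sketch below checks that closed form against direct evaluation of the recurrence; the class and method names are made up for illustration, since the original method from the question isn't shown here.

```java
// Hypothetical sketch: direct evaluation of the recurrence vs. its closed
// form. f(n) = 5 * 2^n - 3, which grows as O(2^n).
public class Recurrence {

    // f(0) = 2; f(n) = 3 + 2 * f(n - 1)
    static long f(int n) {
        if (n == 0) {
            return 2;
        }
        return 3 + 2 * f(n - 1);
    }

    // Closed form from the characteristic-polynomial method:
    // f(n) = c1 * 2^n + c2, with c1 = 5 and c2 = -3.
    static long closedForm(int n) {
        return 5L * (1L << n) - 3L;
    }

    public static void main(String[] args) {
        // The two columns agree for every n: 2, 7, 17, 37, 77, ...
        for (int n = 0; n <= 10; n++) {
            System.out.println(n + ": recurrence = " + f(n)
                    + ", closed form = " + closedForm(n));
        }
    }
}
```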

I bet this is clear as mud. I will be glad to work on a clearer explanation. Feel free to contact me privately if you want me to go over this with you some more.

Layne

Grant Gainey

posted 13 years ago

Very nice, Layne. As I said, it's been A Very Long Time since I did this formally - a few of those neurons woke up while reading your response. Not enough, alas; I clearly need to go dig out my books again...

Grant

posted 13 years ago

Heck, I was going to point you to an adults-only forum website.

Mark

posted 13 years ago

Wow!

What do you guys hold back for the intermediate forum?

What do you guys hold back for the intermediate forum?

Spot false dilemmas now, ask me how!

(If you're not on the edge, you're taking up too much room.)

posted 13 years ago

Good question. I thought that intermediate-level Java stuff was things like how to decompile JIT machine code from an in-memory process (using two matchsticks, a Swiss Army knife, and a pair of dentures), manually seize control of the garbage collector, and discuss the size in bits of a boolean.

But I guess that this qualifies.

The Babel fish is small, yellow, and leech-like, and is probably the most unusual rarity in the whole world.

Grant Gainey

posted 13 years ago

Heh - but there's no intermediate **Java** in this a'tall, a'tall; just theory. We were just moseyin' down the trail, shootin' the Big-O Bull.

(There's an idea - maybe we need a Big-O Ranch section, devoted to Computing Science topics?)

On a marginally more-serious note - does this perhaps belong in "General Computing"? "Programming Diversions"? "Performance"? I'm happy to follow trails like this one through whatever ravine (and into whatever box canyon) they may lead...

Grant

(who drove his son screaming from the room last night by trying to explain, at 10PM, what Big-O notation was and why it was useful. I Are Geek - hear me theorize!)

Layne Lund

posted 13 years ago

I agree. This isn't Java, but it sure is interesting nonetheless. Is there an appropriate forum for purely theoretical Computer Science topics such as this? I guess there's a design patterns forum, but this doesn't fit there...

Layne

[ October 21, 2005: Message edited by: Layne Lund ]
