Big O notation is a theoretical measure of the complexity of an algorithm. It usually depicts the amount of resources needed for its completion, either in terms of memory or computing time. The n in the formula depicts the size of the input that determines the complexity. So a formula containing only n is said to have a complexity of n, denoted O(n); n squared has O(n^2); log n has O(log n). So O(n^2) is more complex than O(n).

You need to study analysis and design of algorithms to understand this in depth. However, I hope my reply helps for now. Thank you.

Originally posted by Christy John: Big O notation is a theoretical measure of the complexity of an algorithm. ...

Christy, I am not sure if I agree with this.

I always thought that the "big O", or the "order of", is the measure of the growth in time to complete the task in relation to the number of items in the task.

For example, if I have to run a task with 1000 items, and it takes 5 minutes to do it, then running the task with 2000 items, should take 10 minutes to do it, if it has an order of N.

You can have a really complex algorithm with an order of N, and you can have a really simple algorithm with an order of N squared.
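Henry's distinction can be made concrete with a quick counting sketch (Python used here for brevity; the function names and step counters are mine, not from the thread). The point is that the order measures how the step count grows with n, not how complicated the source code looks:

```python
def steps_simple_but_quadratic(n):
    # Two trivial nested loops: very simple code, yet O(n^2),
    # because the inner body runs n * n times.
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

def steps_linear(n):
    # A single pass over the items: O(n), however elaborate the body might be.
    steps = 0
    for i in range(n):
        steps += 1
    return steps

# Doubling n doubles the linear count but quadruples the quadratic one.
print(steps_linear(2000) / steps_linear(1000))                              # 2.0
print(steps_simple_but_quadratic(2000) / steps_simple_but_quadratic(1000))  # 4.0
```

This also matches Henry's 1000-items-in-5-minutes example: for an order-N task, doubling the items doubles the steps.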

Big O measures the number of steps in an algorithm; in many cases that is the number of comparisons. It is completely neutral with regard to hardware, language, operating system, and so on.

I don't think you can say that doubling the number of items doubles the time to complete it. For one thing, the operating environment is not considered in big-O. There are many variables in program performance that are outside the control of the program being tested. Also, I don't think many recursive algorithms double in time as the number doubles; recursive Fibonacci comes to mind.

"Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration."- Stan Kelly-Bootle
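David's recursive-Fibonacci example is easy to verify by counting calls (a Python sketch; the call-counting helper is mine, not from the thread). Naive recursive Fibonacci is exponential, so going from n to 2n multiplies the work by far more than 2:

```python
def fib_count(n):
    # Naive recursive Fibonacci, returning (value, number of calls made).
    if n < 2:
        return n, 1
    a, calls_a = fib_count(n - 1)
    b, calls_b = fib_count(n - 2)
    return a + b, calls_a + calls_b + 1

_, calls_10 = fib_count(10)
_, calls_20 = fib_count(20)
print(calls_10)  # 177
print(calls_20)  # 21891 - doubling n made it roughly 124x more work, not 2x
```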

Originally posted by Henry Wong: ... the "big O", or the "order of", is the measure of the growth in time to complete the task in relation to the number of items in the task.

Most often it's used for time complexity, but as Christy notes, sometimes it's applied to memory complexity.

John Lockheart
Ranch Hand

Joined: Oct 13, 2006
Posts: 115

Thanks, but I still have no idea how to answer these questions, even after all my reading on the subject. I pulled them straight from a textbook; unfortunately, the textbook doesn't contain the answers. I am trying to answer those questions in preparation for an upcoming test. I might understand better if I saw the solution to the problem; then I could figure out how I was supposed to get there.

John Lockheart

Here are the answers I jotted down. If they need correcting, someone please tell me what I did wrong and post the correct solution. For some reason this is going right over my head.

John - in math, have you studied sequences and series? That is helpful background for this discussion.

Generally, with big-O problems, the formal process to solve the problem would be:

1. Get a formula for the amount of time required to complete a given algorithm or code snippet. Generally, this means a formula for the number of times that the innermost loop of the code will execute.
2. Simplify that formula by removing all but the most significant term, and remove any constant multiplier as well. So a formula like 10n^2 + 7n + 113 would simplify to 10n^2 and then to n^2 (where n^2 means n squared - not the Java meaning).

Knowing that the final formula will be simplified allows you to take many shortcuts in deriving the formula. E.g. you didn't need to worry about the 10, or the 7n, or the 113. You could skip those details in the first place if you know what you're doing. With practice, big-O problems can usually be solved with quick mathematical intuition rather than formal proof, but this is difficult to develop except through practice.
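The simplification step can be checked numerically (a small Python sketch using the example formula from above; the function name is mine). As n grows, the 7n and 113 terms become negligible: the ratio to the simplified term n^2 settles toward the constant 10, and doubling n multiplies the count by about 4 - both signatures of O(n^2):

```python
def exact_steps(n):
    # The example formula from the discussion: 10n^2 + 7n + 113.
    return 10 * n**2 + 7 * n + 113

for n in (10, 1000, 100000):
    # Columns: n, exact/n^2 (heads toward 10), exact(2n)/exact(n) (heads toward 4)
    print(n, exact_steps(n) / n**2, exact_steps(2 * n) / exact_steps(n))
```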

I suspect that questions 1 and 2 were supposed to relate only to the second step above - how to simplify the formula. That is, if we assume that 10n + 3n2 + log n and n(n3 + 8) are formulas for the time to execute, rather than code to be executed, then you just need to simplify them. However, this assumption was not stated in the problem. Also, I'm guessing 3n2 is supposed to be 3n^2 - is that right?

"I'm not back." - Bill Harding, Twister

Jim Yingst
Wanderer
Sheriff

Joined: Jan 30, 2000
Posts: 18671

Ah, good, looks like you actually do have some idea how to do these.

Your answers look correct except for the third one. Specifically, look more carefully at the inner loop. How many times will it execute if i = 10? What if i = 100? Or i = 1000? How can you generalize this behavior?

Jim Yingst

[David McCombs]: I don't think you can say that doubling the number of items doubles the time to complete it. For one thing, the operating environment is not considered in big-O. There are many variables in program performance that are outside the control of the program being tested.

I think there's an implied "approximately" and/or "for large values of N" in most big-O discussions. If algorithmic complexity is not the most significant factor in determining execution time, then it makes little sense to worry about the big-O performance of the algorithm.
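The "for large values of N" caveat can be illustrated with made-up cost functions (the constant factors below are mine, purely for illustration): an O(n) algorithm with a large constant loses to an O(n^2) algorithm on small inputs, but must win eventually, and by an ever-growing margin:

```python
def cost_linear(n):
    # Hypothetical O(n) algorithm with a large constant factor.
    return 1000 * n

def cost_quadratic(n):
    # Hypothetical O(n^2) algorithm with constant factor 1.
    return n * n

# Below the crossover point (n = 1000) the "worse" algorithm is cheaper...
print(cost_quadratic(100) < cost_linear(100))      # True
# ...but beyond it, the O(n) algorithm pulls ahead for good.
print(cost_linear(10**6) < cost_quadratic(10**6))  # True
```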

[David McCombs]: Also, I don't think many recursive algorithms double in time as the number doubles; recursive Fibonacci comes to mind.

Note that Henry did say "if it has an order of N."

John Lockheart

In the third one, the inner loop is described as n/2, right? I mean, as long as i > 1 is one of the conditions, it executes n/2 times - unlike a case such as i < 1, where it would only execute once. Can you explain?

It doesn't execute n/2 times. It may help to step through the code one line at a time to see what happens on each iteration. Or write a test program if you prefer, and print the value of j each time through the inner loop. It's pretty easy to establish that the behavior is not n/2 (or anything directly proportional to n). Figuring out what it is may depend on your mathematical background.
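The textbook snippet itself is never reproduced in this thread, so the following is only a hypothetical reconstruction consistent with the hints given (j starts at i - 1, and I assume it halves on each pass while j > 1); the point is the experiment of counting iterations for growing i rather than guessing:

```python
def inner_loop_iterations(i):
    # Hypothetical inner loop shape - the real code is not shown in the thread.
    # Assumption: j starts at i - 1 and is halved while j > 1.
    count = 0
    j = i - 1
    while j > 1:
        count += 1
        j //= 2
    return count

for i in (10, 100, 1000):
    print(i, inner_loop_iterations(i))  # 3, 6, 9: growing like log i, not i/2
```

Under that assumption, multiplying i by 10 adds a constant number of iterations instead of multiplying them, which is the characteristic behavior of a logarithm.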

John Lockheart

It has to execute in proportion to n, though. It doesn't start off at j = 0, 1, 90, or 100; it starts off at j = i-1, so it's dependent on i, which is dependent on the size of the array (meaning both execute n times). Now, the equation of n for the inner loop might not be n/2, but it should still be an O(n^2) algorithm.

Jim Yingst

[John]: it has to execute in proportion to n though. It doesn't start off at j = 0, 1, 90, 100. it starts off as j = i-1, so it's dependent on i, which is dependent on the size of an array

All true so far...

[John] (meaning both execute n times).

No. Merely being dependent on n does not automatically mean executing n times. Sure, that's the most common case (for a loop from 0 to n-1) but it's not what we have here.

[John]: Now the equation of n for the inner loop might not be n/2. But it should still be an O(n^2) algorithm.

And yet, it's not. Again, I recommend that you write some code to see what happens. It doesn't matter what you think the answer should be - you need to discover what it is.

Note: it will be simpler if you focus on the inner loop only, at first:

How many times would this execute for i = 10? i = 100? Or i = 1000? Don't guess. Find out.