
Big-O Notation??? (Don't understand)

John Lockheart
Ranch Hand

Joined: Oct 13, 2006
Posts: 115
Give the order of growth: (what does that mean?)

1. 10n + 3n2 + log n
2. n(n3 + 8)

Give the worst case order of growth: (what does that mean?)

3.

for (int i = 0; i < a.length; i++) {
    for (int j = i - 1; j > 1; j = j / 2) {
        // Do something with a[i], a[j]
    }
}

4.

for (int i = 0; i < a.length; i = i + 2) {
    for (int j = 0; j < 5; j++) {
        for (int k = i; k > i / 2; k--) {
            // Do something with a[i], a[j], a[k]
        }
    }
}
pete stein
Bartender

Joined: Feb 23, 2007
Posts: 1561
http://en.wikipedia.org/wiki/Big_o_notation
Christy John
Greenhorn

Joined: Mar 26, 2007
Posts: 7
Big O notation is a theoretical measure of the complexity of an algorithm. It usually depicts the amount of resources needed for its completion, either in terms of memory or computing time.
The n in the expression represents the size of the input that determines the complexity.
So an expression containing only n is said to have a complexity of n, denoted O(n); n squared has O(n^2); log n has O(log n).
So O(n^2) is more complex than O(n).

You need to study analysis and design of algorithms to understand this in more depth. However, I hope my reply helps for now.
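
To make that a bit more concrete, here is a rough Java sketch (the array and variable names are just placeholders): the first loop does work proportional to n, so O(n); the nested pair does roughly n * n steps, so O(n^2); and repeatedly halving, as in binary search, takes about log2(n) steps, so O(log n).

int[] a = new int[1000];              // n = a.length

// O(n): the body runs once per element
long sum = 0;
for (int i = 0; i < a.length; i++) {
    sum += a[i];
}

// O(n^2): for each element the inner body runs n times, about n * n steps in total
long pairs = 0;
for (int i = 0; i < a.length; i++) {
    for (int j = 0; j < a.length; j++) {
        pairs++;
    }
}

// O(log n): the remaining range is halved on every pass
int steps = 0;
for (int size = a.length; size > 1; size = size / 2) {
    steps++;
}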
Thank you.
Henry Wong
author
Sheriff

Joined: Sep 28, 2004
Posts: 18876
    

Originally posted by Christy John:
Big O notation is a theoretical measure of the complexity of an algorithm. It usually depicts the amount of resources needed for its completion, either in terms of memory or computing time.
The n in the expression represents the size of the input that determines the complexity.
So an expression containing only n is said to have a complexity of n, denoted O(n); n squared has O(n^2); log n has O(log n).
So O(n^2) is more complex than O(n).


Christy, I am not sure if I agree with this.

I always thought that the "big O", or the "order of", is the measure of the growth in time to complete the task in relation to the number of items in the task.

For example, if running a task with 1000 items takes 5 minutes, then running the same task with 2000 items should take about 10 minutes, if it has an order of N.
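
In other words, an order-N task costs roughly a constant amount of time per item, so with those (made-up) numbers the arithmetic is just:

// Rough model of an O(N) task: time is roughly costPerItem * N
double costPerItem = 5.0 / 1000;               // 5 minutes for 1000 items -> about 0.005 minutes per item
double estimateFor2000 = costPerItem * 2000;   // about 10 minutes
System.out.println(estimateFor2000 + " minutes");   // prints 10.0 minutes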

You can have a really complex algorithm, with an order of N. And you can have a really simple algorithm, with an order of N squared.

Henry


Books: Java Threads, 3rd Edition, Jini in a Nutshell, and Java Gems (contributor)
Christy John
Greenhorn

Joined: Mar 26, 2007
Posts: 7
Sorry if I made a mistake here. This is what I understood. Let me check.
Thank you.
David McCombs
Ranch Hand

Joined: Oct 17, 2006
Posts: 212
Big O measures the number of steps in an algorithm; in many cases that is the number of comparisons. It is completely neutral with regard to hardware, language, operating system, etc.

I don't think you can say that doubling the number of items doubles the time to complete it. For one thing, the operating environment is not considered in Big-O. There are many variables in program performance that are outside the control of the program being tested. Also, I don't think many recursive algorithms double in time as the number doubles; recursive Fibonacci comes to mind.
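
For reference, the naive doubly recursive version looks something like this; each call spawns two more, so the running time grows roughly exponentially (about O(1.6^n)), and doubling n multiplies the time by far more than two:

// Naive recursive Fibonacci: exponential time, since fib(n) recomputes
// the same subproblems over and over.
static long fib(int n) {
    if (n <= 1) {
        return n;
    }
    return fib(n - 1) + fib(n - 2);
}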


"Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration."- Stan Kelly-Bootle
Ulf Dittmer
Marshal

Joined: Mar 22, 2005
Posts: 42047
    
Originally posted by Henry Wong:
... the "big O", or the "order of", is the measure of the growth in time to complete the task in relation to the number of items in the task.


Most often it's used for time complexity, but as Christy notes, sometimes it's applied to memory complexity.


Ping & DNS - my free Android networking tools app
John Lockheart
Ranch Hand

Joined: Oct 13, 2006
Posts: 115
Thanks, but I still have no idea how to answer these questions, even after all my reading on the subject. I pulled them straight from a textbook; unfortunately, the textbook doesn't contain the answers. I am trying to answer these questions in preparation for a test coming up. I might understand better if I saw the solution to the problem; then I could figure out how I was supposed to get there.
John Lockheart
Ranch Hand

Joined: Oct 13, 2006
Posts: 115
Here are the answers I jotted down. If anything needs correcting, please tell me what I did wrong and post the correct solution. For some reason this is going right over my head.

Jim Yingst
Wanderer
Sheriff

Joined: Jan 30, 2000
Posts: 18671
[written without seeing the last post]

John - in math, have you studied sequences and series? That is helpful background for this discussion.

Generally, with big-O problems, the process to solve the problem formally would be:

1. Get a formula for the amount of time required to complete a given algorithm or code snippet. Generally, this means a formula for the number of times that the innermost loop of the code will execute.
2. Simplify that formula by removing all but the most significant term, and remove any constant multiplier as well. So a formula like 10n^2 + 7n + 113 would simplify to 10n^2 and then to n^2 (where n^2 means n squared - not the Java meaning).

Knowing that the final formula will be simplified allows you to take many shortcuts in deriving it. E.g. you didn't need to worry about the 10, the 7n, or the 113. You could skip those details in the first place if you know what you're doing. With practice, big-O problems can usually be solved with some quick mathematical intuition rather than a formal proof. But that intuition is difficult to develop except through practice.
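
A quick way to convince yourself that the dropped terms don't matter: print the pieces of 10n^2 + 7n + 113 for growing n and watch the n^2 term swallow everything else (just a throwaway check, not part of any assignment):

// Compare the terms of 10n^2 + 7n + 113 as n grows
for (int n = 10; n <= 1000000; n *= 10) {
    long quadratic = 10L * n * n;
    long linear = 7L * n;
    long constant = 113;
    long total = quadratic + linear + constant;
    System.out.println("n=" + n + "  total=" + total
            + "  10n^2 alone=" + quadratic
            + "  share=" + (double) quadratic / total);
}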

I suspect that questions 1 and 2 were supposed to relate only to the second step above - how to simplify the formula. That is, if we assume that 10n + 3n2 + log n and n(n3 + 8) are formulas for the time to execute, rather than code to be executed, then you just need to simplify them. However, this assumption was not stated in the problem. Also, I'm guessing 3n2 is supposed to be 3n^2 - is that right?
[ March 28, 2007: Message edited by: Jim Yingst ]

"I'm not back." - Bill Harding, Twister
Jim Yingst
Wanderer
Sheriff

Joined: Jan 30, 2000
Posts: 18671
Ah, good, looks like you actually do have some idea how to do these.

Your answers look correct except for the third one. Specifically, look more carefully at the inner loop. How many times will it execute if i = 10? What if i = 100? Or i = 1000? How can you generalize this behavior?
[ March 28, 2007: Message edited by: Jim Yingst ]
Jim Yingst
Wanderer
Sheriff

Joined: Jan 30, 2000
Posts: 18671
[David McCombs]: I don't think you can say that doubling the number of items doubles the time to complete it. For one thing, the operating environment is not considered in Big-O. There are many variables in program performance that are outside the control of the program being tested.

I think there's an implied "approximately" and/or "for large values of N" in most big-O discussions. If algorithmic complexity is not the most significant factor in determining execution time, then it makes little sense to worry about the big-O performance of the algorithm.

[David McCombs]: Also, I don't think many recursive algorithms double in time as the number doubles; recursive Fibonacci comes to mind.

Note that Henry did say "if it has an order of N."
John Lockheart
Ranch Hand

Joined: Oct 13, 2006
Posts: 115
In the third one, the inner loop is described as n/2, right? I mean, as long as i > 1 is one of the conditions, it executes n/2 times, not in a case like i < 1, where it would only execute once. Explain?
Campbell Ritchie
Sheriff

Joined: Oct 13, 2005
Posts: 39084
    
Does it execute n/2 times? Not convinced.
Jim Yingst
Wanderer
Sheriff

Joined: Jan 30, 2000
Posts: 18671
It doesn't execute n/2 times. It may help to step through the code one line at a time to see what happens on each iteration. Or write a test program if you prefer, and print the value of j each time through the inner loop. It's pretty easy to establish that the behavior is not n/2 (or anything else directly proportional to n). Figuring out what it is may depend on your mathematical background.
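
A throwaway test along those lines might look like this (the particular values of i are arbitrary):

// Count how many times the inner loop from question 3 runs for a fixed i
for (int i : new int[] {10, 100, 1000, 1000000}) {
    int count = 0;
    for (int j = i - 1; j > 1; j = j / 2) {
        count++;
    }
    System.out.println("i = " + i + "  ->  inner loop ran " + count + " times");
}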
John Lockheart
Ranch Hand

Joined: Oct 13, 2006
Posts: 115
It has to execute in proportion to n though. It doesn't start off at j = 0, 1, 90, 100; it starts off at j = i - 1, so it's dependent on i, which is dependent on the size of the array (meaning both execute n times). Now, the formula for the inner loop might not be n/2, but it should still be an O(n^2) algorithm.
Jim Yingst
Wanderer
Sheriff

Joined: Jan 30, 2000
Posts: 18671
[John]: It has to execute in proportion to n though. It doesn't start off at j = 0, 1, 90, 100; it starts off at j = i - 1, so it's dependent on i, which is dependent on the size of the array

All true so far...

[John] (meaning both execute n times).

No. Merely being dependent on n does not automatically mean executing n times. Sure, that's the most common case (for a loop from 0 to n-1) but it's not what we have here.

[John]: Now, the formula for the inner loop might not be n/2, but it should still be an O(n^2) algorithm.

And yet, it's not. Again, I recommend that you write some code to see what happens. It doesn't matter what you think the answer should be - you need to discover what it is.

Note: it will be simpler if you focus on the inner loop only, at first:
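for (int j = i - 1; j > 1; j = j / 2) {
    // Do something with a[i], a[j]
}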

How many times would this execute for i = 10? i = 100? Or i = 1000? Don't guess. Find out.
 