big O notation

 
Ranch Hand
Can someone explain Big O notation?

Is there any software to analyze it when you are working with Java code?
 
Jeanne Boyarsky
author & internet detective
Read this first: Big_O_notation

Now, do you have a specific question about Big O notation?

I don't know of a program that figures out big O for you. It's usually done on algorithms before you code. Once you have code, you can just throw different-sized datasets at it and see how the actual running time grows.
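For example, here is a minimal sketch of that idea (the class name and the routine being timed are placeholders I made up): double the input size a few times, measure the elapsed time, and see whether it roughly doubles, quadruples, and so on. A real benchmark would also need JVM warm-up and repeated runs, but this shows the shape of the approach.

import java.util.Random;

// Hypothetical scaling check: run the same routine at several input sizes
// and watch how the elapsed time grows as n doubles.
public class ScalingCheck {

    // Stand-in routine under test: summing an array, which is O(n).
    static long sum(int[] data) {
        long total = 0;
        for (int value : data) {
            total += value;
        }
        return total;
    }

    public static void main(String[] args) {
        Random random = new Random(42);
        for (int n = 1_000_000; n <= 8_000_000; n *= 2) {
            int[] data = new int[n];
            for (int i = 0; i < n; i++) {
                data[i] = random.nextInt();
            }
            long start = System.nanoTime();
            long result = sum(data);
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            System.out.println("n = " + n + "  sum = " + result + "  time = " + elapsedMillis + " ms");
        }
    }
}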
 
Marshal
As Jeanne says: most well-known algorithms will have their big-O values in their descriptions.
 
abalfazl hossein
Ranch Hand
Is there any software or IDE that can draw a graph for a program?
 
Bartender

abalfazl hossein wrote:Is there any software or IDE that can draw a graph for a program?

Yes.
 
Ranch Hand
It is better practice to work out the Big O notation of an algorithm by hand using the mathematical technique. In most cases that is easier than getting software to work it out for you, and by doing it yourself you can quickly get the best, worst, and average cases of the algorithm.
 
abalfazl hossein
Ranch Hand


Could you tell me about these software tools?
 
Pat Farrell
Rancher

abalfazl hossein wrote:Could you tell me about these software tools?

What are you really asking about?

Big-O is just an analysis tool. When you read up on it, you will see that there are always at least two constants that the analysis ignores, but which can and do have significant real-world performance implications.
 
abalfazl hossein
Ranch Hand
I read this somewhere:


2. linear for loops: O(n)

k = 0;
for (i = 0; i < n; i++)
    k++;

For loops are considered n time units because they will repeat a programming statement n times.
The term linear means the for loop increments or decrements by 1.



cstutoring.com

Does it mean it takes 3 seconds? 3 milliseconds?

Is it possible to estimate the maximum time that is needed for an algorithm from big O? For example, to say that this code takes 20 seconds?
 
Pat Farrell
Rancher

abalfazl hossein wrote:Is it possible to estimate the maximum time that is needed for an algorithm from big O? For example, to say that this code takes 20 seconds?



No, not even a little bit.
Big-Oh is about the relative speed of different approaches at large values of N. It says nothing about the specific execution time for any particular value of N, and strictly speaking it applies only to single-core applications.

By definition, Big-Oh is calculated from the limit as N -> infinity.

There are always a number of constant values that become unimportant when N is really large, but that are very important in real-world timings.

The timing function is really more like t(N) = A + B*N + C*N^2 + D*N^3 + ...

For small values of N, the constants can be big. If an algorithm has a lot of setup time, the A constant may dominate until N is well up into the thousands or even millions.
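As a made-up illustration of that formula (the constants below are invented purely to show the shape of t(N) = A + B*N): an approach with a big fixed setup cost A loses to a "slower" approach with no setup until N passes the crossover point.

// Invented constants, purely to illustrate t(N) = A + B*N for two approaches:
// one with a large fixed setup cost A but cheap per-item work, and a naive one
// with no setup but more expensive per-item work.
public class SetupCostDemo {

    static final long SETUP_COST = 1_000_000; // the "A" term (e.g. building an index)
    static final long CHEAP_STEP = 1;         // per-item cost after setup
    static final long SIMPLE_STEP = 50;       // per-item cost of the naive approach

    static long withSetup(long n)    { return SETUP_COST + CHEAP_STEP * n; }
    static long withoutSetup(long n) { return SIMPLE_STEP * n; }

    public static void main(String[] args) {
        // Both approaches are O(N), but the naive one wins until roughly
        // N = SETUP_COST / (SIMPLE_STEP - CHEAP_STEP), about 20,000 here.
        for (long n = 1_000; n <= 100_000_000; n *= 10) {
            System.out.println("N = " + n
                    + "  with setup: " + withSetup(n)
                    + "  without setup: " + withoutSetup(n));
        }
    }
}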
 
Jesper de Jong
Java Cowboy
The only thing that Big O notation tells you is how the computational cost of an algorithm grows as the size of the input gets larger.

For example, if you put a list of input values that is twice as long into the algorithm, will it take twice as long, or four times as long, or maybe only one and a half times as long to run the algorithm?

The time an O(n) algorithm needs is proportional to the size of the input: twice as many input values = it takes twice as long.

The time an O(n^2) algorithm needs is proportional to the square of the size of the input: twice as many input values = it takes four times as long.

Big O notation is a theoretical framework to estimate the complexity of algorithms. It doesn't say anything about absolute running time. Absolute running time of course depends on the specific implementation of an algorithm and the computer that you're running it on.
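As a concrete sketch (the task and the method names here are just an example I'm adding, not anything from the thread): checking an array for duplicates with two nested loops does work proportional to n^2, while doing the same job with a HashSet does work proportional to n, so doubling the input roughly quadruples the first and only doubles the second.

import java.util.HashSet;
import java.util.Set;

public class DuplicateCheck {

    // O(n^2): every element is compared against every later element.
    static boolean hasDuplicateQuadratic(int[] values) {
        for (int i = 0; i < values.length; i++) {
            for (int j = i + 1; j < values.length; j++) {
                if (values[i] == values[j]) {
                    return true;
                }
            }
        }
        return false;
    }

    // O(n): each element is looked up and inserted into a hash set once.
    static boolean hasDuplicateLinear(int[] values) {
        Set<Integer> seen = new HashSet<>();
        for (int value : values) {
            if (!seen.add(value)) {   // add() returns false if the value was already present
                return true;
            }
        }
        return false;
    }
}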
 
abalfazl hossein
Ranch Hand

k = 0;
for (i = 0; i < n; i++)
    k++;

For loops are considered n time units because they will repeat a programming statement n times.
The term linear means the for loop increments or decrements by 1.



I read this in an ebook.

Is this a correct definition of a linear for loop? Could someone explain more?
 
Pat Farrell
Rancher

abalfazl hossein wrote:Is this a correct definition of a linear for loop?


It's an example, not a definition.
You need to read the real definitions of the Big Oh and Big Omega terminology.
 
abalfazl hossein
Ranch Hand

O(N) = as you increase the size of input, the time taken to complete operations scales linearly with that size



Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.



When I look at the images on this site:

http://nerdgerl.wordpress.com/2009/11/18/an-intro-to-big-oh-notation-with-java/

I see they use big O to describe the relationship between the size of the input and the time. In fact, I think big O is a function that represents how much the running time of the algorithm will increase if the size of the input is increased. Is that right?

But then there is another question: if it is about the size of the input, then the size of the input (for example, a number) is unlimited, so there is always a worse case after every worst case. Then we cannot say "this is the worst case", because there is always another, worse case.

I am really confused!


Another question:



It is said the worst case here is O(n).

Now my question is: if that is the worst case, then what is the average case for this?
 
Ranch Hand

abalfazl hossein wrote:
Another question:

It is said the worst case here is O(n).

Now my question is: if that is the worst case, then what is the average case for this?



It will always be O(n), because that for loop always executes exactly n times, so the best, average, and worst cases are all the same.
 
Pat Farrell
Rancher

abalfazl hossein wrote:I see they use big O to describe the relationship between the size of the input and the time. In fact, I think big O is a function that represents how much the running time of the algorithm will increase if the size of the input is increased. Is that right?

But then there is another question: if it is about the size of the input, then the size of the input (for example, a number) is unlimited, so there is always a worse case after every worst case. Then we cannot say "this is the worst case", because there is always another, worse case.



Big Oh is about design analysis, not details. Look at the real definitions; there are lots of constants that can be very big.

O(n^2) means that when you double the size of the input, it runs four times longer, plus or minus some.

Most folks do not understand how fast polynomial functions grow. Code that works fine with small N and is O(n^4) becomes unusable with N ~= 1000 and is totally dead with N > 10K.

Worse, some algorithms are exponential, something like O(2^n); they get insane fast.
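To put rough numbers on that growth, here is a throwaway sketch (purely illustrative) that prints n^2, n^4, and 2^n for a few values of n:

import java.math.BigInteger;

// Purely illustrative: how quickly n^2, n^4 and 2^n blow up.
public class GrowthTable {
    public static void main(String[] args) {
        for (int n : new int[] {10, 20, 50, 100}) {
            BigInteger big = BigInteger.valueOf(n);
            System.out.println("n = " + n
                    + "  n^2 = " + big.pow(2)
                    + "  n^4 = " + big.pow(4)
                    + "  2^n = " + BigInteger.valueOf(2).pow(n));
        }
    }
}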
 
Jesper de Jong
Java Cowboy

Pat Farrell wrote:Worse, some algorithms are exponential, something like O(2^2) they get insane fast.


You probably meant O(2^N), which means that the running time grows like 2^N: every time you add one more element to the input, the running time doubles.
 
Sheriff
Right; O(2^2) is O(1).
 
Pat Farrell
Rancher
Good catch of the typo.

 
Christoph Czon
Greenhorn
The runtime of algorithms has nothing to do with exact times like seconds, minutes, or hours. It is used to compare two given algorithms to each other.
Say you have algorithms to sort a list of integers. Selection sort and insertion sort have worst and average cases of O(n^2), whereas heapsort, quicksort, and mergesort have an average case of O(n*log(n)). This means that for high values of n, the last three algorithms are faster. The first two are only faster in very specific cases, for example when the list is almost sorted beforehand. Another important thing is that a constant amount of work is always O(1), no matter how large the constant might be.
Use this as an example: a single loop that runs n times is O(n). A loop nested inside another loop is O(n^2), because the second loop runs inside the first, so the runtimes are multiplied.

Suppose instead you have a loop with O(n) and, outside of it, another loop with O(n*log(n)). The bigger one dominates the overall runtime of the code, so the program in total has O(n*log(n)).
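Here is a minimal sketch of those cases (the loop bodies are just placeholders):

// Minimal loop skeletons illustrating the cases above (bodies are placeholders).
public class LoopShapes {

    // O(n): a single loop over n elements.
    static int single(int n) {
        int k = 0;
        for (int i = 0; i < n; i++) {
            k++;
        }
        return k;
    }

    // O(n^2): a loop nested inside another loop; the runtimes multiply.
    static int nested(int n) {
        int k = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                k++;
            }
        }
        return k;
    }

    // O(n) + O(n^2) = O(n^2): with sequential blocks, the bigger term dominates.
    static int sequential(int n) {
        return single(n) + nested(n);
    }
}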
I hope you get the idea.
 
lowercase baba
Big-O notation is useless when you only have one way to do something. If there is only one way to code your needs, then it doesn't matter whether it's O(1), O(n), or O(n!); you don't have a choice.

Where it comes in handy is when you DO have a choice. Sorting is a prime example. There are many, many different ways you can write a sort algorithm. Here is a comparison showing the average and worst-case times and the memory usage of all the sorts.

So, by looking at this chart, I can balance speed against memory against the need for a stable sort. Perhaps I know I will NEVER need to sort more than 30 things. I can pick a bubble sort, because it doesn't matter that it grows in n^2 time.

However, if I'm going to sort 10 million items, that won't work. A heapsort might be better, but only if I don't need it to be a stable sort.

Big-O notation is a tool, and one of several, that is used to weigh different algorithms. That's all. And depending on your specific needs, it may not be the most important one.
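For instance, a minimal bubble sort sketch (my own illustration, not from the chart): perfectly reasonable for a handful of items, hopeless for millions, where something like Arrays.sort or a heapsort is the better choice.

import java.util.Arrays;

// Illustrative only: bubble sort is O(n^2), which is fine for ~30 items
// but hopeless for millions; java.util.Arrays.sort is the sensible default.
public class TinySort {

    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] small = {5, 3, 8, 1, 9, 2};
        bubbleSort(small);                       // fine: only 6 items
        System.out.println(Arrays.toString(small));
    }
}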

 
abalfazl hossein
Ranch Hand

k = 0;
for (i = 0; i < n; i++)
    k++;
k = 0;
for (j = 0; j < n; j++)
    k++;

Sequential for loops are not related and loop independently of each other. The first loop will execute n times. The second loop will execute n times after the first loop has finished executing. The worst case timing will be:

O(n) + O(n) = 2 * O(n) = O(n)

We drop the constant because constants represent one time unit. The worst case timing is O(n).



I read this in a tutorial.

Could you explain more about this:

O(n) + O(n) = 2 * O(n) = O(n)

Another question:



In this situation we calculate the worst case timing using both loops. For each iteration of the outer i loop, the inner j loop will run n-1, n-2, n-3, ... times:

O(1) + O(2) + O(3) + O(4) + ...

which is the binomial series:

n(n-1)/2 = (n^2 - n)/2 ≈ n^2/2 = O(n^2)




Could you explain more about this?


Third question:

k = 0;
for (i = 1; i <= n; i = i * 2)
    for (j = 1; j <= i; j++)
        k++;

What is Big O for this code?
 
Christoph Czon
Greenhorn

Could you explain more about this:
O(n) + O(n) = 2 * O(n) = O(n)


Instead of giving examples, it might be better to give a little introduction to computing theory.

The runtime of algorithms is only interesting for high values of n. High values of n means that n grows towards infinity (lim n -> infinity). The limit of a constant is always the constant itself. Therefore, no matter how high a constant is, it is always ignored when determining the runtime of an algorithm. This is because we want to measure the runtime of algorithms machine-independently.
Say computer A needs 10 seconds to execute something 100 times (100 is a constant). Computer B needs 20 seconds to execute the same thing the same number of times. In 10 years' time maybe a computer will be invented that can do the same task in 1 millisecond. Therefore it is absolutely useless to compare the execution time of constant values, because it depends only on the underlying hardware, not on the efficiency of the algorithm.

This is not the case when the execution time depends on n (which is a variable and can therefore, in theory, grow without bound). Here you can and want to compare the efficiency of the algorithms. As the example with sorting algorithms already showed, different algorithms implement the same task differently. Each algorithm has its own strengths and is therefore best suited to a specific situation. Thus it is important for programmers to know which situation asks for which algorithm. This is the reason the O(...) notation was invented: to provide a kind of measurement that allows algorithms to be compared.

Now back to your question:
As already posted above, 2 algorithms executed after each other are independent of each other. Also, in determining the overall runtime of a program, the algorithm with the highest runtime dominates the program runtime.
Independent runtimes are added up. Runtimes within runtimes are multiplied.

In your example you have 2 independent algorithms with O(n). Adding them up makes 2*O(n) = O(2*n). Now you can view this in two ways:
1) constants factors are ignored -> 2*O(n) = O(2*n) = O(n)
2) the highest (independent) runtime dominates the program runtime -> both are the same -> O(n)

Please note that constants are only ignored as factors (see above) or as base of logarithm. O(log_2(n)) = O(log_10(n)) = O(log(n)). They are not ignored as powers of exponential functions: O(n^2) != O(n^3)

I will skip your second question and move on to the third:

Although your code doesn't say where the k++ belongs (please use braces in loops, so it is clear what belongs to which loop body), it doesn't matter, because k is not used in the condition of either loop. So we can ignore k completely.
The first loop starts with i = 1 and runs while i <= n, multiplying i by 2 each time, so this outer loop executes O(log(n)) times.

The second loop, however (I assume it is inside the body of the first loop, because otherwise the problem would be trivial), starts with j = 1 and runs while j <= i, increasing j by 1 each time.
This means the inner loop body executes once on the first outer iteration. Then i is multiplied by 2 -> i = 2, so j goes from 1 to 2, and so on, up to at most n iterations on the last pass. Adding these up, the inner body executes about 1 + 2 + 4 + ... + n = 2n - 1 times in total.

Total runtime = O(n). (An upper bound of O(n*log(n)), log(n) outer iterations times at most n inner iterations each, is also valid, but it is not tight.) (Please correct me if I'm wrong)
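A quick, throwaway check of that count (my own sketch, using the loop bounds above; for n a power of two, k comes out as exactly 2n - 1, and in general it stays between n and 2n - 1):

// Throwaway check of the iteration count for the loops above.
public class PowerLoopCount {
    public static void main(String[] args) {
        for (int n : new int[] {16, 1024, 1_048_576}) {
            long k = 0;
            for (long i = 1; i <= n; i = i * 2) {
                for (long j = 1; j <= i; j++) {
                    k++;
                }
            }
            System.out.println("n = " + n + "  k = " + k + "  2n - 1 = " + (2L * n - 1));
        }
    }
}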

I hope I could explain it well enough!
 
abalfazl hossein
Ranch Hand
First of all, thank you very much, Christoph!

Here is exactly what I read in that tutorial:


8. power loops: O(2n)

k = 0;
for (i = 1; i <= n; i = i * 2)
    for (j = 1; j <= i; j++)
        k++;

To calculate the worst case timing we need to combine the results of both loops. On every iteration, the loop counter i is multiplied by 2. The values of i will be 1, 2, 4, 8, 16 (for n = 16), and k will be the sum of these numbers, 31, which is 2n - 1.



Here is my second question again:



In this situation we calculate the worst case timing using both loops. For each iteration of the outer i loop, the inner j loop will run n-1, n-2, n-3, ... times:

O(1) + O(2) + O(3) + O(4) + ...

which is the binomial series:

n(n-1)/2 = (n^2 - n)/2 ≈ n^2/2 = O(n^2)



I can't understand it at all!
 