Don't know anything about "Big OOOooooo" notation, but writing something like O(N) is often called "Big O" notation, so I'll assume that's what we're talking about.
When you do an estimate like that, you just take the worst case. The worst case is that you compute a random number every time through the inner loop, so you compute N^2/2 or O(N^2) random numbers. Am I missing something?
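The code under discussion is never quoted in the thread, so here is a minimal sketch of the kind of loop being analyzed (all names here — MIN, MAX, N, fillUnique — are my assumptions, not the original code): each array slot keeps drawing random candidates until one is not already in the array.

```java
import java.util.Random;

public class UniqueRandoms {
    static final int MIN = 0;
    static final int MAX = 100;
    static final int N = 100;   // distinct values wanted; must be <= MAX - MIN

    // Fill an array with N distinct random values from [MIN, MAX).
    static int[] fillUnique(Random rng) {
        int[] a = new int[N];
        for (int i = 0; i < N; i++) {                  // outer loop: one slot at a time
            boolean duplicate;
            do {
                a[i] = MIN + rng.nextInt(MAX - MIN);   // draw a candidate
                duplicate = false;
                for (int j = 0; j < i; j++) {          // inner loop: scan earlier slots
                    if (a[j] == a[i]) {
                        duplicate = true;              // collision: draw again
                        break;
                    }
                }
            } while (duplicate);
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(fillUnique(new Random())));
    }
}
```

The inner scan is what the O(N^2) estimate counts; the do/while retry is what makes the literal worst case unbounded.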
Originally posted by Ernest Friedman-Hill: Am I missing something?
which means that in the worst case scenario, it will run endlessly: O(infinity)
Ernest Friedman-Hill and Ilja Preuss, thank you very much.
it will run endlessly: O(infinity)
I agree that in the worst-case scenario it will run endlessly. It is a probability question: as n --> infinity, the probability of generating the last unique random number to fill a[n] is 1/infinity. I do not know how to write that in Big O notation.
The first for loop runs N times. The second for loop runs about N times. But within the second for loop, the if condition can repeat without bound. So I do not know what the Big O notation would be: O( )? Ummm... can I write it as O(infinity)?
Without being an expert, I remember reading somewhere (hey, Mr. Sedgewick, nice book!) that O(n) notation is also used to estimate average or typical cases, which is very reasonable.
If you're not good at theoretical analysis, you could try empirical analysis.
(Perhaps we can agree to avoid #define, at least on a Java page?)
Then use a counter to count the inner loop. Increase MAX to 1,000, 10,000, 100,000, 1,000,000 and record the values. Put them into a diagram to see whether the growth looks logarithmic, linear, or exponential.
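As a sketch of that experiment (the fill routine is my reconstruction, and I use smaller sizes than suggested — a run with MAX = 1,000,000 in the worst regime would take very long), one can count the inner-loop comparisons for growing sizes and print them:

```java
import java.util.Random;

public class InnerLoopCounter {
    // Fill an array with n distinct values from [0, range) and return how
    // many inner-loop comparisons (duplicate checks) were performed.
    static long countComparisons(int n, int range, Random rng) {
        int[] a = new int[n];
        long comparisons = 0;
        for (int i = 0; i < n; i++) {
            boolean duplicate;
            do {
                a[i] = rng.nextInt(range);
                duplicate = false;
                for (int j = 0; j < i; j++) {
                    comparisons++;                 // the counter suggested above
                    if (a[j] == a[i]) { duplicate = true; break; }
                }
            } while (duplicate);
        }
        return comparisons;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        // n == range is the worst regime (N equal to MAX - MIN)
        for (int n : new int[] {100, 200, 400, 800}) {
            System.out.println(n + " -> " + countComparisons(n, n, rng));
        }
    }
}
```

Doubling n and comparing the printed counts (or plotting them) shows growth clearly faster than linear.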
I'm not sure whether I understood the purpose of your code snippet: you generate a random value, and if it is already among the values generated so far, you generate a new one?
So you have to ensure N is smaller than or equal to (MAX-MIN)! But is N perhaps always equal to (MAX-MIN)?
For an array size of 100 and (MAX-MIN) = 100, the first value is found for sure in one guess (100%), and the last with a probability of only 1%, so the inner loop runs an average of 100 times to find that last value. Element i is found after an average of roughly i runs: 1 + 2 + 3 + ... + 100 = (1 + 100) + (2 + 99) + (3 + 98) + ... = 101 + 101 + 101 + ... = about 50 * 101 = 5050.
For a thousand elements it would be 1 + ... + 1000 = 1001 + 1001 + ... = about 500 * 1001 = 500,500. It's increasing faster than linear.
It's about N*N/2.
But how about the array size being smaller than the random range? Assume 100,000 random candidates and an array of 100. The first value is free again; after that, the expected number of guesses per element is:
element 1: 100,000/100,000
element 2: 100,000/99,999
element 3: 100,000/99,998
...
element 100: 100,000/99,901
each of them barely more than one guess.
It would only take an average of somewhere between 100 and 101 runs to fill the array. For 200 values, perhaps 203 runs.
If the array size is small compared to (MAX-MIN), the algorithm is quite good. If the array size is nearly or exactly (MAX-MIN), it's poor.
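A quick way to see both regimes side by side (the sizes here are illustrative assumptions, and I count only the random draws, not the duplicate-check scans) is to collect the same number of distinct values from a sparse and from a dense range:

```java
import java.util.Random;

public class RegimeDemo {
    // Count how many random draws it takes to collect n distinct
    // values from [0, range).
    static long countDraws(int n, int range, Random rng) {
        java.util.Set<Integer> seen = new java.util.HashSet<>();
        long draws = 0;
        while (seen.size() < n) {
            draws++;
            seen.add(rng.nextInt(range));
        }
        return draws;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        long sparse = countDraws(100, 100_000, rng); // array small vs. range: collisions are rare
        long dense  = countDraws(100, 100, rng);     // array == range: many repeated draws
        System.out.println("sparse: " + sparse + ", dense: " + dense);
    }
}
```

The sparse case finishes in just over 100 draws, while the dense case needs several times more, matching the conclusion above.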
You generate a random value, and if it is already among the values generated so far, you generate a new one?
Yes.
But is N perhaps always equal to (MAX-MIN)?
Yes. You have answered the question: I need to compute the expected values for the program and then estimate the worst case. [ September 25, 2004: Message edited by: siu chung man ]