That is cute! If you had a very precise timer, you could do one pass over the data to compute the smallest workable base interval, and then proceed as above. Linear-time sort with no space overhead!

Yeah, there's no benefit in computing different time intervals anyway -- why wouldn't you just always use the smallest one you can? I should have thought a little more before posting.
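A hypothetical sketch of that "smallest base interval" idea in shell -- the tick value and the awk-based scaling here are my own assumptions for illustration, not part of the original script:

```shell
#!/bin/bash
# Hypothetical sketch: instead of sleeping n seconds per value, sleep
# n * tick seconds, where tick is an assumed interval the system timer
# can still resolve reliably. Smaller tick = faster sort, less margin.
scaled_sleepsort() {
    local tick=0.2   # assumed smallest reliable granularity on this system
    local n
    for n in "$@"; do
        ( sleep "$(awk -v n="$n" -v t="$tick" 'BEGIN { printf "%f", n * t }')"
          echo "$n" ) &
    done
    wait 2>/dev/null
}
scaled_sleepsort 5 3 1 4 2
```

With a 0.2-second tick the whole sort above finishes in about a second instead of five, at the cost of a smaller safety margin between wake-ups.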

It doesn't look like any algorithm. It seems to be just an obfuscated way of using an existing sorting algorithm implemented in the OS. You might as well call a sort function from a library, no?
Furthermore, if you have a smaller number at the end of the list, it might be printed out of order.

Ernest Friedman-Hill wrote:That is cute! If you had a very precise timer, you could do one pass over the data to compute the smallest workable base interval, and then proceed as above. Linear-time sort with no space overhead!

Well, I'm thinking all those forked processes take up some system resources somewhere. If you try to sort a million numbers with this, you might find that there's a bit more space overhead than originally considered.

Paul Anilprem wrote:It doesn't look like any algorithm. It seems to be just an obfuscated way of using an existing sorting algorithm implemented in the OS.

Hm, I don't see how. What part is a sorting algorithm implemented in the OS? The call to sleep?

Paul Anilprem wrote:You might as well call a sort function from a library, no?

Sure, for any practical application, there are plenty of more reliable and efficient ways to do this. The point here is just that it's clever and different, just for fun.

Paul Anilprem wrote:Furthermore, if you have a smaller number at the end of the list, it might be printed out of order.

True, if we have a large enough number of arguments, and/or if we reduce the time interval enough. Also, if the user passes in floating-point numbers that are sufficiently close to each other in value.

Mike Simmons wrote:Well, I'm thinking all those forked processes take up some system resources somewhere. If you try to sort a million numbers with this, you might find that there's a bit more space overhead than originally considered.

You think? If this was NCIS, Gibbs would smack the OP on the head.

Forking a process is cheap compared to having the user shoot the computer because your app is unresponsive, but it's not free, and it's only cheap when you look at Big-O notation. The two constants are fairly big. I could easily believe that each process takes, say, 100 words of memory. If N is big, this becomes a serious problem.

Plus the overhead of creating and joining back a thread is far higher than that of comparing two integers or even two small strings.

But for MD, it's perfect.

Mike Simmons
Ranch Hand

Joined: Mar 05, 2008
Posts: 3025


Mike Simmons wrote:

Paul Anilprem wrote:It doesn't look like any algorithm. It seems to be just an obfuscated way of using an existing sorting algorithm implemented in the OS.

Hm, I don't see how. What part is a sorting algorithm implemented in the OS? The call to sleep?

Never mind - I thought about it some more and I think I agree. The part of the OS that determines when to resume each process is probably doing some sorting to schedule the wake-up calls. Oh well.


And I agree with Pat on all points, especially the last.

I wonder how you'd express this algorithm in Big-O notation...

O(magnitude of the largest element being sorted)???

and in my 30 seconds of testing, it does not work for floats, strings, or even negative integers - although you could probably come up with some hash function to compensate for some/most/all of those cases...
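For negative integers specifically, one sketch of that "compensate" idea (my own workaround, not from the script in this thread) is to shift every value up by the minimum, so all sleep times are non-negative, while still printing the original values:

```shell
#!/bin/bash
# Hypothetical fix for negative inputs: find the minimum in one pass,
# then sleep (n - min) seconds so every delay is >= 0, but echo the
# original value. As written this handles integers only.
neg_sleepsort() {
    local min=$1 n
    for n in "$@"; do
        [ "$n" -lt "$min" ] && min=$n
    done
    for n in "$@"; do
        ( sleep "$(( n - min ))"; echo "$n" ) &
    done
    wait 2>/dev/null
}
neg_sleepsort 2 -1 0 1
```

Floats and strings would need more work - something like the hash function suggested above that maps each input to a non-negative delay while preserving order.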


fred rosenberger wrote:I wonder how you'd express this algorithm in Big-O notation...

O(magnitude of the largest element being sorted)???

Yep.

fred rosenberger wrote:and in my 30 seconds of testing, it does not work for floats, strings or even negative integers - although you could probably come up with some hash function to compensate for some/most/all those cases...

Hmmm, this works fine with floats on my system (MacBook Pro running OS X 10.6.7) - other than being vulnerable to error if two floats are very close to each other in value, as noted in my last post.

I did replace "wait" with "wait > /dev/null", to get rid of some annoying extra output.
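For anyone following along, the trick under discussion is essentially one background job per value - this is my reconstruction of the classic version, with the `wait > /dev/null` tweak mentioned above:

```shell
#!/bin/bash
# Sleep sort: fork one background job per argument; each job sleeps for
# its own value and then prints it, so smaller numbers wake up (and
# print) first.
sleepsort() {
    local n
    for n in "$@"; do
        ( sleep "$n"; echo "$n" ) &
    done
    wait > /dev/null   # redirect, to suppress the extra output noted above
}
sleepsort 3 1 2
```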

fred rosenberger wrote:I wonder how you'd express this algorithm in Big-O notation...
O(magnitude of the largest element being sorted)???

That is still O(1), since you have arbitrarily large constants A and B in the formula: time ~ A + B * N,
where N is the number of elements you are processing. Big-O is all about limits, so the actual values of A and B disappear.

Saying something is O(1) does not mean it's faster than something that is O(N^2) for all values of N - just that as N becomes really large, it is safe to say T(N) < T(N^2).

I don't know... it's been a while since I've done any Big-O notation calculations, but I would say this sort doesn't really work with it. I mean, assuming you ignore hardware and CPU limits - i.e. a machine with infinite memory and CPU cycles - you could sort one element or one trillion elements in 1 second, assuming all your input values were '1'.

However, sorting one element may take one second or 2^64 - 1 seconds, if your single input is 2^64 - 1.

The time is not dependent on the number of elements in any way I can see.

Arun Giridharan wrote:I don't think this should be in the Meaningless Drivel.

I think this is EXACTLY where this should be.

;-)


It all depends on what parts you consider constant and what parts you consider variables. Big-O analysis is typically carried out considering only N, the size of the input, as the thing that varies - but that needn't be the only variable considered. And in this case, the max value of the input is far more significant than anything else, so I have no problem saying the algorithm is O(max value) - that's far more useful and informative than saying O(1), IMO.

Actually, I believe there is also an O(N) component to the execution time; it's just currently insignificant next to the O(1) component. But still, there's a loop in there that has to visit every element of the input once, parse it as a number, and start a new process for that input. If N were sufficiently large, and the wait time intervals were small enough, the O(N) could dominate over the O(1). Of course, the algorithm would be failing badly at that point.


Arun Giridharan wrote:I don't think this should be in the Meaningless Drivel.

I think this is EXACTLY where this should be.

;-)

Since it deals with an algorithm, I don't feel it fits in Meaningless Drivel.


Well, it's a very silly and inefficient algorithm, for fun only. I wouldn't put it in any of the "serious" forums. I suppose it might fit fine in Programming Diversions though.

That's a good one. By the way, it's called sleep sort, and it started off on 4chan. See this epic thread for a few laughs. It's also on Wikipedia: Sleep_sort