1.) "differences" isn't really the way to think about this - it's really an issue of "timer resolution". System.currentTimeMillis() just returns the current count of milliseconds since midnight, January 1, 1970 - it gets this from the underlying OS. In some OSes (the versions of Windows you mentioned) the OS doesn't update the millisecond count every millisecond, but every 20 or 40 milliseconds. So if you had code like this:
On a system that updates every 1 millisecond, you'd end up with something like this:
0
0
0
1
0
0
1
.
.
.
Mostly 0s and 1s (maybe some higher numbers every so often if you have lots of other programs running or heavy processing going on in the background). The 0s are there because most lines of code run in well under 1ms; the 1s show up whenever the system millisecond counter updates.
On a computer that has a ~15ms timer resolution you'd end up with output like this:
0
0
0
.
.
.
0
15
0
0
.
.
.
*Lots* more 0s and only rarely a 15 (again, usually around 15 - you might see 16, 17, etc. when other things are running). Again, the 0s show up both because operations take less than 1ms, but also because even though more than 1ms has passed in real time, the OS timer keeps returning the same value until its next tick.
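You can actually measure the tick size on your own machine. The trick (a sketch of a common technique, not code from the question) is to spin until the counter changes, then time how long it stays at the new value:

```java
public class TimerResolution {
    // Spin until the millisecond counter changes, then measure
    // how long one full timer tick lasts.
    static long measureTickMillis() {
        long t0 = System.currentTimeMillis();
        long t1;
        while ((t1 = System.currentTimeMillis()) == t0) { /* spin */ }
        long t2;
        while ((t2 = System.currentTimeMillis()) == t1) { /* spin */ }
        return t2 - t1; // size of one timer tick in ms
    }

    public static void main(String[] args) {
        System.out.println("approx tick: " + measureTickMillis() + " ms");
    }
}
```

On a 1 ms-resolution system this prints 1; on an older Windows box you'd expect something around 15.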
2.) To understand this code plug in some fake values that are easy to work with - let's say DEMO_TIME is 5 (which means we want it to run a duration of 5ms), and assume a 1ms timer resolution (System.currentTimeMillis() returns a new value every 1ms).
The code starts, System.currentTimeMillis() is called and returns 100. This value is saved in the variable startTime and copied into currTime.
We enter the loop and test the condition: currTime (100) - startTime (100) is 0, which is less than DEMO_TIME (5), so we enter the loop.
We call System.currentTimeMillis() again, this time we get 102. We subtract the value in currTime (100), so we get 2 and save it into elapsedTime (2ms have passed since we set currTime).
We then add elapsedTime (2) to currTime (100) and save 102 into currTime.
The loop tests the values again - currTime (102) - startTime (100) is 2, which is less than DEMO_TIME (5), so we start the loop code again.
We call System.currentTimeMillis() again and get 105. We subtract the value in currTime (102), get 3, and save it into elapsedTime (3ms have passed since we last set currTime, on the previous pass through the loop).
We then add elapsedTime (3) to currTime (102) and save 105 into currTime.
The loop tests the values - currTime (105) - startTime (100) is 5 - this isn't less than DEMO_TIME (5), so the loop ends and exits.
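Put together, the loop the walkthrough is describing presumably looks something like this (startTime, currTime, and elapsedTime are the names used above; DEMO_TIME becomes a parameter here, and the loop body is a placeholder):

```java
public class DemoLoop {
    // Run the loop from the walkthrough for demoTime ms and
    // return the total elapsed time it measured.
    static long run(long demoTime) {
        long startTime = System.currentTimeMillis();
        long currTime = startTime;
        while (currTime - startTime < demoTime) {
            long elapsedTime = System.currentTimeMillis() - currTime;
            currTime += elapsedTime;
            // ... a real game would update the animation by elapsedTime here ...
        }
        return currTime - startTime;
    }

    public static void main(String[] args) {
        System.out.println("ran for " + run(5) + " ms");
    }
}
```

Note that on a coarse timer the loop can overshoot: it exits the first time currTime - startTime reaches *or passes* DEMO_TIME.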
I don't completely understand this myself. All I know is that it gives the correct amount of time to put a thread in a game to sleep. If the sleep doesn't use this value, a character might animate too fast on a fast computer.
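For what it's worth, the usual reason a game loop sleeps is frame pacing: after doing a frame's work, you sleep for whatever is left of the frame's time budget, so a fast computer doesn't animate faster than a slow one. A minimal sketch (the 16 ms budget and all the names here are my own, not from the code in the question):

```java
public class FramePacing {
    // ~60 fps time budget per frame (an assumption, not from the original code).
    static final long FRAME_MS = 16;

    // How long to sleep so the frame takes frameMs in total;
    // never negative, in case the frame's work overran the budget.
    static long remainingBudget(long frameMs, long elapsed) {
        return Math.max(0, frameMs - elapsed);
    }

    public static void main(String[] args) throws InterruptedException {
        for (int frame = 0; frame < 5; frame++) {
            long frameStart = System.currentTimeMillis();
            // ... update and draw the frame here ...
            long elapsed = System.currentTimeMillis() - frameStart;
            Thread.sleep(remainingBudget(FRAME_MS, elapsed));
        }
    }
}
```

Keep in mind that Thread.sleep() is only as accurate as the OS timer - on a 15 ms-tick system a sleep of 16 ms can easily last 30 ms.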
Try reading about Time-Based vs. Frame-Based Animation - the example on the site is in ActionScript, but the concepts are the same no matter what language you're using (and ActionScript is pretty close to Java syntax-wise anyway).