The result of x * y should be a 32-bit int, but the cast tells the compiler to make the result an 8-bit byte instead. Narrowing primitive casts work by lopping off the higher-order bits. So, to demonstrate, 320 (the int result of x * y) would be represented as:
00000000 00000000 00000001 01000000
But casting to byte gets rid of the first 24 bits, leaving

01000000

which is 64 in decimal.
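Here's a minimal sketch of the whole thing in code (the original post doesn't give the values of x and y, so this assumes any pair of bytes whose product is 320, e.g. 20 and 16):

public class NarrowingCastDemo {
    public static void main(String[] args) {
        byte x = 20;   // assumed example values; any pair multiplying to 320 works
        byte y = 16;
        int product = x * y;        // byte operands are promoted to int: 320
        byte b = (byte) (x * y);    // explicit cast keeps only the low 8 bits
        System.out.println(product); // prints 320
        System.out.println(b);       // prints 64
    }
}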
When x and y are multiplied, both are promoted to type int, which is the type of the result. As an int, this is 320. To understand what happens to this value when cast to type byte, you need to look at the binary representation. As a 32-bit int, 320 is...
0000 0000 0000 0000 0000 0001 0100 0000
When this is cast to an 8-bit byte, only the 8 bits to the right are retained...

0100 0000
And this represents 64.
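If you want to see those bit patterns for yourself, here's a small sketch using Integer.toBinaryString (masking with 0xFF keeps the same low 8 bits the cast does):

public class BitViewDemo {
    public static void main(String[] args) {
        int product = 320;
        // prints 101000000 (leading zeros are not shown)
        System.out.println(Integer.toBinaryString(product));
        int low8 = product & 0xFF;   // same 8 bits the byte cast retains
        System.out.println(Integer.toBinaryString(low8)); // prints 1000000
        System.out.println(low8);                         // prints 64
    }
}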
"We're kind of on the level of crossword puzzle writers... And no one ever goes to them and gives them an award." ~Joe Strummer sscce.org