A byte has just 8 bits; an int has 32. When a byte is cast to an int, the missing 24 bits are filled with copies of the byte's sign bit (this is called sign extension). For instance:
1111 0000 (-16) is turned into 1111 1111 1111 1111 1111 1111 1111 0000, because the sign bit (the left one) is 1.
0111 0000 (112) is turned into 0000 0000 0000 0000 0000 0000 0111 0000, because the sign bit (the left one) is 0.
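The sign extension above can be seen directly in Java (class and variable names here are just for illustration):

```java
public class SignExtensionDemo {
    public static void main(String[] args) {
        byte negative = (byte) 0b11110000; // -16 as a signed byte
        byte positive = 0b01110000;        // 112

        // Widening a byte to an int copies the sign bit into the upper 24 bits.
        int widenedNeg = negative;
        int widenedPos = positive;

        System.out.println(Integer.toBinaryString(widenedNeg));
        // prints 11111111111111111111111111110000
        System.out.println(widenedNeg); // prints -16 (value is preserved)
        System.out.println(widenedPos); // prints 112
    }
}
```

Note that the numeric value is unchanged by the cast; only the bit pattern grows, which is exactly why a negative byte ends up with 24 leading ones.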

0xFF is the same as 0000 0000 0000 0000 0000 0000 1111 1111. When & 0xFF is applied to the sign-extended value, the upper 24 bits (the copies of the sign bit that the cast added) are cleared to zero, while the low 8 bits of the original byte are kept. The result equals the byte's value if the byte was non-negative, and the byte's value + 256 if it was negative. Either way, the result is a number between 0 and 255 (inclusive), instead of a number between -128 and 127 (inclusive).
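A small sketch showing the masking in action (names are illustrative):

```java
public class ByteMaskingDemo {
    public static void main(String[] args) {
        byte negative = (byte) 0b11110000; // -16 as a signed byte
        byte positive = 0b01110000;        // 112

        // & 0xFF clears the 24 sign-extension bits, keeping only the low 8 bits.
        int unsignedNeg = negative & 0xFF;
        int unsignedPos = positive & 0xFF;

        System.out.println(unsignedNeg); // prints 240, i.e. -16 + 256
        System.out.println(unsignedPos); // prints 112, non-negative bytes are unchanged
    }
}
```

This is the standard idiom for treating a Java byte as an unsigned value in the range 0 to 255; since Java 8, `Byte.toUnsignedInt(b)` does the same thing.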