In the byte case, you perform an addition. Because of the plus, both bytes are promoted to int (language specification, 5.6.2 Binary Numeric Promotion), and since both operands are compile-time constants, the result (128) is computed at compile time. The compiler sees that 128 does not fit into a byte, so it reports a compile error.
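A minimal sketch of that behavior, assuming the operands were byte-range constants like 64 + 64 (the exact values from the question may differ):

```java
public class BytePromotion {
    public static void main(String[] args) {
        byte ok = 60 + 60;               // fine: the constant 120 fits into a byte,
                                         // so implicit narrowing is allowed (JLS 5.2)
        // byte bad = 64 + 64;           // compile error: the constant 128 is out of byte range
        byte wrapped = (byte) (64 + 64); // an explicit cast forces the narrowing anyway
        System.out.println(ok);          // prints 120
        System.out.println(wrapped);     // prints -128: 128 wraps around in a byte
    }
}
```

Note that with non-constant byte variables (`byte c = a + b;`) the compiler rejects the assignment regardless of the actual values, because the result of the promoted addition is an int.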
With the ints, everything stays an int, so the compiler does not flag anything. The calculation is done entirely in int arithmetic, where a value can never be "too big": on overflow it simply wraps around, so the addition yields -2147483648. And that value fits into the variable k.
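The wrap-around can be seen directly (the variable name `k` is taken from the answer; `Math.addExact` is mentioned as the standard-library alternative that refuses to overflow silently):

```java
public class IntOverflow {
    public static void main(String[] args) {
        int k = 2147483647 + 1;    // compiles fine: the constant int addition wraps
        System.out.println(k);     // prints -2147483648 (Integer.MIN_VALUE)

        // If silent wrap-around is not acceptable, Math.addExact
        // throws an ArithmeticException at runtime instead:
        // int m = Math.addExact(2147483647, 1);
    }
}
```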