The output of the program is 1, but in C the output of the same program is 2. Can anyone explain why this happens?
My other question: in Java the size of both int and float is 32 bits, so going by size alone a float value should be assignable to an int variable, just as in C. But in Java this only works with an explicit typecast. Why is that?
For your first question, I think what happens is Java evaluates i before it's incremented (1), then increments it (making it 2), then assigns the saved value back to i (making it 1 again). In C, an expression that modifies the same variable twice like that is undefined behavior, so the result can genuinely depend on which compiler you use. Ultimately, though, the advice is the same in both C and Java: don't write code like that.
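You didn't post the program, so this is a guess at what it looks like based on the behavior you describe. A minimal sketch:

```java
public class PostIncrement {
    public static void main(String[] args) {
        int i = 1;
        // Java's evaluation order here is fully specified:
        // 1. the current value of i (1) is saved as the value of i++
        // 2. i is incremented to 2
        // 3. the saved value (1) is assigned back to i
        i = i++;
        System.out.println(i); // prints 1 in Java; the C equivalent is undefined behavior
    }
}
```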
Java is stricter about conversions than C: when you cast a float to an int, you are explicitly telling it to truncate the fractional part. C is a lot more flexible. If you just assign a float to an int variable in C, the compiler inserts the truncating conversion implicitly, and if you access the same memory through pointers it's possible to interpret the raw bits of the float as an int, in which case the numeric value is only vaguely related to the original.
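To illustrate the difference in Java: the assignment won't compile without a cast, and if you actually want the raw-bit reinterpretation that C pointer aliasing gives you, there's a separate API for it (Float.floatToIntBits). A small sketch:

```java
public class FloatToInt {
    public static void main(String[] args) {
        float f = 3.75f;

        // int n = f;         // compile error in Java: possible lossy conversion
        int n = (int) f;      // explicit cast: truncates the fractional part
        System.out.println(n);    // 3

        // Reinterpreting the same 32 bits as an int (roughly what C pointer
        // tricks would do) gives a number only vaguely related to 3.75:
        int bits = Float.floatToIntBits(f);
        System.out.println(bits); // 1081081856 (the IEEE 754 bit pattern)
    }
}
```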