0, 0L and 0.0 are all the same thing: zero. The difference comes when you assign them.
You can assign "0" to any int, long, float or double (among other things). More precisely, "0" is a literal for the int value zero, and the compiler will implicitly widen that int to a long, float, or double as needed, so it behaves like 0L, 0.0f or 0.0d in those contexts.
When you write "0.0", you're defining a double zero (note: a double, not a generic floating-point value). When you write "0L", you're defining a long zero. Where a widening conversion exists, the compiler applies it automatically. Going the other way is narrowing: assigning the double 0.0 to a float, for example, has no implicit conversion, and the compiler will complain unless you cast explicitly.
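A minimal sketch illustrating these literals and which assignments compile (class name is mine, for illustration only):

```java
public class ZeroLiterals {
    public static void main(String[] args) {
        int i = 0;        // int literal
        long l = 0;       // int 0 widens implicitly to long
        float f = 0;      // int 0 widens implicitly to float
        double d = 0;     // int 0 widens implicitly to double

        long l2 = 0L;     // long literal
        float f2 = 0.0f;  // float literal
        double d2 = 0.0;  // double literal (0.0d is equivalent)

        // float f3 = 0.0;  // compile error: 0.0 is a double, no implicit narrowing
        // int i2 = 0L;     // compile error: long does not narrow to int implicitly

        System.out.println(i + " " + l + " " + f + " " + d);
    }
}
```

Uncommenting either of the last two lines makes the point concrete: the compiler rejects them until you add an explicit cast such as `(float) 0.0`.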
A simple question indeed; but of course, a conceptual one.
Well, a data type is a particular kind of data item, defined by the values it can take, the operations that can be performed on it, and the rules of the programming language used. Examples: integer, character, float, double, etc.
Now, in Java, the default values of float and double fields are as below. (Note that only fields get defaults; local variables must be initialized before use, or the code won't compile.)
Float - 0.0f
Double - 0.0d
You can verify the same by running the below program.
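The original program isn't shown here, but a minimal sketch that demonstrates the defaults might look like this (class and field names are mine):

```java
public class DefaultValues {
    // Uninitialized fields receive Java's default values.
    static float f;
    static double d;

    public static void main(String[] args) {
        System.out.println("Float default: " + f);   // prints 0.0
        System.out.println("Double default: " + d);  // prints 0.0
    }
}
```

Both print as "0.0" because Java's string conversion renders the float 0.0f and the double 0.0d the same way, even though their underlying types differ.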