I read in Dennis Ritchie's book The C Programming Language that a variable meant to hold the result of getchar() must be declared int, not char, so that it is large enough to hold the EOF value. But the following code works fine:

When there is no more input, getchar returns EOF, and in the above program the variable c, declared as char, holds it successfully.

Why does this work? According to the book's explanation, it shouldn't.
You should be able to use the same code tags for C as you do for Java, and it would make the code look a lot better.
I think we should pay Mr Unicode a visit. The character at the very top of that code page, U+00FF, turns out to be ÿ. I am sure any Dutchman will tell me off for this, but it looks a lot like the ij in a Dutch word such as dolfijn, which you occasionally see rendered as Dolfÿn.
It says here that the return type of getchar() is int. Now, if char is an 8-bit type and int is 16 or 32 bits, the -1 returned for EOF is narrowed by the implicit conversion from 0xFFFFFFFF down to 0xFF when you store it in a char. If your plain char is signed rather than unsigned, that 0xFF is then interpreted as (char)(-1). When you compare it against EOF, the char is promoted back to int, giving -1 again, so if EOF is -1 the comparison succeeds and your loop terminates.
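To make that round trip concrete, here is a small sketch (the function name looks_like_eof is mine, purely for illustration; it assumes an 8-bit char, as on typical platforms):

```c
#include <stdio.h>   /* for EOF */
#include <limits.h>  /* for CHAR_MIN */

/* Model what happens when getchar()'s int result is stored in a char.
 * Returns nonzero if `byte` becomes indistinguishable from EOF after
 * the round trip through a plain char. */
int looks_like_eof(int byte)
{
    char c = (char)byte;  /* narrowing: only the low 8 bits survive */
    return c == EOF;      /* c is promoted back to int for the comparison */
}
```

Where plain char is signed, looks_like_eof(0xFF) is true: the byte 0xFF and genuine EOF collapse to the same value. Where plain char is unsigned, you get the opposite problem: c == EOF is never true and the loop never terminates.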
This is a real problem, and it comes from C's implicit conversions not being type-safe. I suggest you copy and paste the ÿ character I showed you into some test input that you read from your stdin, and see what happens. I suspect, but am not certain, that your char will be indistinguishable from -1 and your loop will terminate prematurely.
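Here is a sketch of that experiment done with an in-memory stream instead of stdin (count_with_char_loop and the test byte are mine, not from the original post; fmemopen is POSIX, so this assumes a POSIX system):

```c
#define _POSIX_C_SOURCE 200809L  /* for fmemopen on glibc */

#include <stdio.h>
#include <limits.h>

/* Count how many bytes the buggy char-based loop reads before it
 * thinks it has hit EOF. The feof() check is a safety net so the
 * function also terminates on platforms where plain char is unsigned
 * and the comparison against EOF can never succeed. */
int count_with_char_loop(FILE *in)
{
    char c;
    int n = 0;

    while ((c = fgetc(in)) != EOF) {
        if (feof(in))
            break;   /* genuine end of input (unsigned-char case) */
        n++;
    }
    return n;
}
```

Fed the four bytes 'a', 'b', 0xFF (ÿ in Latin-1), 'c', the function stops after only two bytes on a signed-char platform: the 0xFF byte is stored as (char)(-1) and compares equal to EOF, exactly the premature termination described above.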