posted 20 years ago
The streams deal in "uninterpreted" bytes. If a file contains six bytes, then you'll read six bytes out of it.
Readers and Writers deal in character data, which is interpreted according to a particular character set encoding (either the platform default, or one you specify). If that encoding uses 16 bits per character, the Reader will return one character for every 2 bytes in the file. The same applies to other encodings: whether it's UTF-8, which uses a variable number of bytes per character, or Latin-1, which always uses exactly one byte per character, the Readers and Writers will do the right thing.
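To make the byte-vs-character distinction concrete, here's a small sketch (the example string is my own, not from the original discussion) showing how the same four characters occupy a different number of bytes depending on the encoding:

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        // "café" is 4 characters, but the byte count depends on the encoding:
        // 'é' (U+00E9) takes 2 bytes in UTF-8, 1 byte in Latin-1, 2 in UTF-16.
        String text = "café";

        System.out.println("characters:    " + text.length());                                        // 4
        System.out.println("UTF-8 bytes:   " + text.getBytes(StandardCharsets.UTF_8).length);         // 5
        System.out.println("Latin-1 bytes: " + text.getBytes(StandardCharsets.ISO_8859_1).length);    // 4
        System.out.println("UTF-16 bytes:  " + text.getBytes(StandardCharsets.UTF_16BE).length);      // 8
    }
}
```

An `InputStream` over that UTF-8 data would hand you 5 raw bytes; an `InputStreamReader` constructed with `StandardCharsets.UTF_8` would decode them back into the 4 characters.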
So your summary is basically correct, except that the reality is a little more complicated.