Hi, I'm writing a program to help me learn the Nigerian language Hausa. It uses lots of non-standard characters (hooked-k, etc.), and although I can display them in a GUI alright, it all goes wrong when I write them to and read them from a file. The non-standard letters just get displayed as ?, so I get lots of words that look like ?ar?ashin, which isn't very helpful. I assumed that since Java uses Unicode internally, it would also use it when writing to a file. Is that wrong, or have I messed up my reader and writer classes?
When you write strings to a character stream, by default the Unicode characters are converted to the host machine's default character encoding, and those encoded bytes are what get written to the stream. When you read a string back, the reverse conversion is applied.
Check the Java documentation for the character sets (java.io) you can specify on your readers and writers; otherwise the default is converting between Unicode and whatever encoding your machine uses.
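To make that concrete, here's a minimal sketch of the round trip. The bridge classes InputStreamReader and OutputStreamWriter let you name the charset explicitly (UTF-8 here) instead of relying on the platform default; the file name and the sample word are just for illustration (ƙarƙashin uses the hooked-k, U+0199):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class HausaFileDemo {
    public static void main(String[] args) throws IOException {
        File file = new File("hausa.txt");           // illustrative file name
        String word = "\u0199ar\u0199ashin";         // "ƙarƙashin", with hooked-k (U+0199)

        // Write with an explicit UTF-8 charset rather than the platform default
        try (Writer out = new OutputStreamWriter(
                new FileOutputStream(file), StandardCharsets.UTF_8)) {
            out.write(word);
        }

        // Read it back, specifying the same charset
        StringBuilder sb = new StringBuilder();
        try (Reader in = new InputStreamReader(
                new FileInputStream(file), StandardCharsets.UTF_8)) {
            int c;
            while ((c = in.read()) != -1) {
                sb.append((char) c);
            }
        }

        System.out.println(word.equals(sb.toString())); // prints true
        file.delete();
    }
}
```

If both sides agree on the charset, the hooked letters survive the round trip; the ? characters appear when the default encoding on your machine can't represent them.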
Thanks for pointing me in the right direction. In the end I used a reader constructed with an explicit character set, and similar for the writer class. It seems to work, so it'll do fine until someone points out how terrible it is. I was really pleased with the way my program was coming along while I was just testing the input/output stuff on the console - then I hooked it up to the GUI and got very depressed. Now I'm all optimistic again: so thanks for cheering me up!