I am currently working on information retrieval and processing of Sanskrit documents, and I plan to use Hadoop in the near future.
I went through the documentation and found that the chararray type supports only UTF-8 strings.
When will Pig support UTF-16 characters?
Does it help in information retrieval and processing?
(As of now, I have very limited knowledge about Hadoop.)
You just need to read strings that are UTF-16 encoded? You should be able to write a loader (details are in the book and the online documentation) that assumes the input strings are UTF-16 encoded. Once they have been converted to a Java String, Pig will handle them just fine, even though internally it will convert them to UTF-8 when it stores them.
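As a sketch of what the decoding step in such a loader would do (the class and method names here are hypothetical, not part of Pig's API): the loader reads the raw bytes of each record and decodes them as UTF-16 before handing the resulting Java String to Pig.

```java
import java.nio.charset.StandardCharsets;

public class Utf16Decode {
    // Decode a UTF-16 encoded byte record into a Java String.
    // Once it is a String, Pig's chararray can hold it; Pig will
    // re-encode it as UTF-8 internally when it stores the value.
    public static String decodeRecord(byte[] raw) {
        // The UTF-16 charset honors a byte-order mark if one is
        // present, so both big- and little-endian input work.
        return new String(raw, StandardCharsets.UTF_16);
    }

    public static void main(String[] args) {
        // Round-trip a Sanskrit sample: encode to UTF-16 bytes,
        // then decode them back the way a loader would.
        String original = "संस्कृतम्";
        byte[] utf16Bytes = original.getBytes(StandardCharsets.UTF_16);
        String decoded = decodeRecord(utf16Bytes);
        System.out.println(decoded.equals(original)); // true
    }
}
```

In a real Pig load function you would do this decoding inside the method that turns each input record into a tuple; everything downstream then sees ordinary chararray values.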
Regarding your second question, "does it help in information retrieval and processing": is "it" Pig or UTF-16 support? I'm guessing Pig. Pig can be used to load data into Hadoop (again, via custom load functions), but its main focus is processing data once it is on Hadoop. Many people use other tools to get their data into Hadoop first, and then process it with Pig.