Dealing with kanji character in oracle 11g with put_line function of utl_file in pl sql script

Vinod Vijay
Ranch Hand

Joined: Sep 13, 2011
Posts: 141

Hi, I have an Oracle 11g database in my project.

I have two questions. First, how can I find out whether my database supports multilingual / UTF-8 data? I know that it does in practice, but I would like to confirm it myself. Is there a SQL command that returns this information?

Second, I have a PL/SQL procedure that reads data from a specific table of this Oracle 11g database and writes it out with the PUT_LINE method of UTL_FILE. At the end it produces a pipe (|) separated file with lines like this:
U|2012-01-06 20:03:27|34238|4530703|æ| ªå¼~Oä¼~Z社KDDI&BTã~B°ã~C­ã~C¼ã~C~Pã~C«ã~B½ã~Cªã~C¥ã~C¼ã~B·ã~C§ã~C³ã~Bº|JP|N|

If you look at the data above, the garbled field is a single value (it is stored as Japanese kanji characters in the table), but the file was rejected anyway because an extra pipe appears inside that field.
In the PL/SQL script, CHR(9) is used to get tab-separated lines in the .dat file, and the PUT_LINE method of UTL_FILE finally writes the lines out one by one.
I think the PUT_LINE method has some problem, or maybe it does not support UTF-8 characters. Is there another method in the same UTL_FILE package that supports UTF-8 characters? Any idea what is happening here, and how the extra pipe got in?
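To give an idea of the kind of code involved, here is a simplified sketch (the real directory object, file name, table and column names are different):

DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- 'EXPORT_DIR' and 'export.dat' are placeholders for the real directory object and file
  l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'export.dat', 'w', 32767);

  FOR rec IN (SELECT flag, updated_at, id1, id2, company_name, country_code, deleted_flag
                FROM some_table)   -- hypothetical table and column names
  LOOP
    UTL_FILE.PUT_LINE(l_file,
         rec.flag
      || '|' || TO_CHAR(rec.updated_at, 'YYYY-MM-DD HH24:MI:SS')
      || '|' || rec.id1
      || '|' || rec.id2
      || '|' || rec.company_name      -- the kanji text; this is the field that arrives garbled
      || '|' || rec.country_code
      || '|' || rec.deleted_flag
      || '|');
  END LOOP;

  UTL_FILE.FCLOSE(l_file);
END;
/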
Thanks


Vinod Vijay Nair
Ireneusz Kordal
Ranch Hand

Joined: Jun 21, 2008
Posts: 423
Vinod Vijay wrote:Hi, I have an Oracle 11g database in my project.

I have two questions. First, how can I find out whether my database supports multilingual / UTF-8 data? I know that it does in practice, but I would like to confirm it myself. Is there a SQL command that returns this information?


Hello,

If you are struggling with i18n issues on Oracle, please provide us with the following:
- the output of queries against the NLS views (for example the queries sketched after this list)
- the value of the NLS_LANG environment variable on the server side
- the value of the NLS_LANG environment variable on the client side
These must be the values from the environment under which your server or client is actually running (they could be defined somewhere in startup scripts).
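For example, something along these lines against the standard NLS views (the exact queries from the original post are not shown here):

SELECT * FROM nls_database_parameters;   -- settings fixed when the database was created
SELECT * FROM nls_instance_parameters;   -- settings taken from the instance (init.ora/spfile)
SELECT * FROM nls_session_parameters;    -- settings of the current session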

On a Windows box, if NLS_LANG is not explicitly set in the environment, run regedit, then use Ctrl-F to find NLS_LANG in the registry.

For details, look for the "Database Globalization Support Guide" in the Oracle documentation.


One more question - is your PL/SQL procedure using the FOPEN_NCHAR and PUT_LINE_NCHAR functions from the UTL_FILE package?
Or the plain old FOPEN and PUT_LINE?
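(For reference, the NCHAR variants are used the same way as the plain ones; a rough sketch with placeholder directory, file and table names:)

DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- FOPEN_NCHAR opens the file in national character set mode,
  -- so the file contents are handled as Unicode
  l_file := UTL_FILE.FOPEN_NCHAR('EXPORT_DIR', 'export.dat', 'w', 32767);

  FOR rec IN (SELECT company_name FROM some_table) LOOP   -- placeholder names
    UTL_FILE.PUT_LINE_NCHAR(l_file, TO_NCHAR(rec.company_name || '|'));
  END LOOP;

  UTL_FILE.FCLOSE(l_file);
END;
/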
Vinod Vijay
Ranch Hand

Joined: Sep 13, 2011
Posts: 141

Ireneusz Kordal wrote:
If you are struggling with i18n issues on Oracle, please provide us with the following:
- the output of queries against the NLS views
- the value of the NLS_LANG environment variable on the server side
- the value of the NLS_LANG environment variable on the client side

I tried this and found that my database supports multilingual / UTF-8 characters.
Output:
PROPERTY_VALUE -> UTF8
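(A query like the following against DATABASE_PROPERTIES returns that value; the exact statement used is not shown above.)

SELECT property_value
  FROM database_properties
 WHERE property_name = 'NLS_CHARACTERSET';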



Ireneusz Kordal wrote:
One more question - is your PL/SQL procedure using the FOPEN_NCHAR and PUT_LINE_NCHAR functions from the UTL_FILE package?
Or the plain old FOPEN and PUT_LINE?

For that question, my answer is that in my script we are still using the plain old FOPEN and PUT_LINE, like
UTL_FILE.FOPEN ();
Now please advise: what should I do?
Ireneusz Kordal
Ranch Hand

Joined: Jun 21, 2008
Posts: 423
Please post ALL your settings, or at least the most important ones.
Below is a list; I mark the important parameters with <------
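(The original listing is missing here; it was presumably the output of a query like this, with the character set parameters being the ones marked as important:)

SELECT parameter, value
  FROM nls_database_parameters
 ORDER BY parameter;

-- NLS_CHARACTERSET          <------ character set of CHAR/VARCHAR2 columns
-- NLS_NCHAR_CHARACTERSET    <------ character set of NCHAR/NVARCHAR2 columns
-- NLS_LANGUAGE
-- NLS_TERRITORY
-- ... (remaining NLS_* parameters)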






and also this:
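(Also missing; most likely the session-level NLS settings negotiated through NLS_LANG, e.g.:)

SELECT parameter, value
  FROM nls_session_parameters
 ORDER BY parameter;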



The parameter NLS_CHARACTERSET = UTF8
says that your database uses UTF-8 encoding for internally storing CHARs and VARCHARs in the database files (but not NCHARs and NVARCHAR2s).
However, this is only half the truth.
CHAR and VARCHAR values are automatically converted to other encodings depending on the encoding settings of the operating system of the CLIENT,
and (for UTL_FILE) on the encoding used by the operating system of the SERVER.
So we must know these settings.
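One way to sidestep the server-side OS conversion entirely (not something discussed above, just a sketch of a common workaround) is to convert the text to UTF-8 yourself and write raw bytes:

DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  -- 'wb' opens the file in byte mode, so UTL_FILE does not convert
  -- the data to the server's OS character set
  l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'export_utf8.dat', 'wb', 32767);

  FOR rec IN (SELECT company_name FROM some_table) LOOP   -- placeholder names
    UTL_FILE.PUT_RAW(l_file,
        UTL_RAW.CAST_TO_RAW(CONVERT(rec.company_name || CHR(10), 'AL32UTF8')));
  END LOOP;

  UTL_FILE.FCLOSE(l_file);
END;
/

The file then contains UTF-8 bytes regardless of the server's NLS environment.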

 
 