I don't know that this is the right forum to ask that question in, but maybe this will help.
java.lang.String is defined as Unicode; it is the native way to handle text in Java apps. Because of that, it's better NOT to work with EBCDIC internally in an application. So the only times you should normally convert to EBCDIC are at the periphery, unless you're doing EBCDIC-specific character/byte manipulations. In other words, when you read and write disk files and do similar tasks. Not, however, necessarily when you talk to databases, because there the actual periphery is inside the JDBC driver layer, so JDBC prefers Unicode on the application side as well.
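In Java, that conversion at the periphery is just a Charset lookup. A minimal sketch, assuming the data uses the common IBM037 (US/Canada EBCDIC) code page and that your JDK ships the extended charsets (standard OpenJDK builds do, via the jdk.charsets module):

```java
import java.nio.charset.Charset;

public class EbcdicEdge {
    // IBM037 is an assumption: the common US/Canada EBCDIC code page.
    // Substitute the code page your files were actually written with.
    static final Charset EBCDIC = Charset.forName("IBM037");

    // Decode at the edge: raw EBCDIC bytes in, Unicode String out.
    static String fromEbcdic(byte[] raw) {
        return new String(raw, EBCDIC);
    }

    // Encode at the edge: Unicode String in, EBCDIC bytes out.
    static byte[] toEbcdic(String text) {
        return text.getBytes(EBCDIC);
    }

    public static void main(String[] args) {
        // 'A' is 0xC1 in EBCDIC, not ASCII's 0x41.
        System.out.printf("0x%02X%n", toEbcdic("A")[0] & 0xFF);
        System.out.println(fromEbcdic(toEbcdic("HELLO")));
    }
}
```

Everything between the two edge calls stays a plain Unicode String, which is exactly the point.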
The Internet was designed in the days when it was common to link IBM mainframes (EBCDIC) with minicomputers and non-IBM mainframes (ASCII). That is why many of its protocols are text-based, including HTTP, FTP (except binary mode), SMTP/POP/IMAP, and TN3270. When using these protocols, you normally use a client app that works in the native character set of the machine it's running on. To facilitate transport, MIME uses character-mode encodings (such as Base64) that ensure binary data passes through text-oriented transports without being corrupted. So an email could potentially be translated between ASCII and EBCDIC multiple times as it hops between servers, yet neither the sender nor the recipient would realize that the native character set of the other was not the same.
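A quick sketch with java.util.Base64's MIME encoder shows why that armor works: the wire form is plain letters and digits, which any text transport, ASCII- or EBCDIC-based, can carry and re-translate without damaging the payload.

```java
import java.util.Arrays;
import java.util.Base64;

public class MimeArmor {
    public static void main(String[] args) {
        // Arbitrary binary payload, including bytes (NUL, CR, LF) that
        // text-oriented transports would otherwise mangle.
        byte[] binary = { 0x00, (byte) 0xFF, 0x7F, 0x0D, 0x0A };

        // Encode into the restricted character set MIME treats as text.
        String wire = Base64.getMimeEncoder().encodeToString(binary);
        System.out.println(wire); // prints AP9/DQo=

        // Any number of ASCII<->EBCDIC hops later, decoding restores the bytes.
        byte[] restored = Base64.getMimeDecoder().decode(wire);
        System.out.println(Arrays.equals(binary, restored)); // prints true
    }
}
```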
Thanks to the above mechanisms, your actual need to work with EBCDIC, as opposed to abstract text (in Unicode), is fairly limited. The main exception, as I mentioned, is when you open text files (or data streams) in binary mode. Most commonly you'll see this when, for example, you open files written by COBOL apps that mix text fields with packed decimal.
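Decoding one of those packed-decimal fields (COBOL COMP-3) can be sketched like this, assuming the usual layout: two BCD digits per byte, with the low nibble of the last byte holding the sign (0xD meaning negative).

```java
import java.math.BigDecimal;

public class Comp3 {
    // Unpack a COMP-3 field: two decimal digits per byte, except the
    // last byte, whose low nibble is the sign (0xD = negative).
    static BigDecimal unpack(byte[] packed, int scale) {
        StringBuilder digits = new StringBuilder();
        for (int i = 0; i < packed.length; i++) {
            digits.append((packed[i] >> 4) & 0x0F);      // high nibble
            if (i < packed.length - 1) {
                digits.append(packed[i] & 0x0F);         // low nibble is a digit
            } else if ((packed[i] & 0x0F) == 0x0D) {
                digits.insert(0, '-');                   // low nibble is the sign
            }
        }
        return new BigDecimal(digits.toString()).movePointLeft(scale);
    }

    public static void main(String[] args) {
        // 0x12 0x3D packs the digits 1 2 3 with a negative sign;
        // with two implied decimal places that is -1.23.
        System.out.println(unpack(new byte[] { 0x12, 0x3D }, 2)); // prints -1.23
    }
}
```

The scale (number of implied decimal places) isn't stored in the data at all; it comes from the COBOL copybook, which is one reason you can't decode these files without their record layout.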
Of course, converting packed decimal, or even worse, the unusual base-16 ("hexadecimal") floating-point format originally developed for the IBM System/360, has its own headaches.
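For the curious, here is a sketch of converting the single-precision form of that System/360 format to a Java double, assuming the standard layout: a sign bit, a 7-bit excess-64 exponent that is a power of 16 (not 2), and a 24-bit fraction with no hidden bit.

```java
public class HexFloat {
    // Convert a 4-byte IBM System/360 single-precision hexadecimal
    // float to a double. Layout: 1 sign bit, 7-bit excess-64 exponent
    // (a power of 16), 24-bit fraction with no implicit leading bit.
    static double fromIbm32(byte[] b) {
        int bits = ((b[0] & 0xFF) << 24) | ((b[1] & 0xFF) << 16)
                 | ((b[2] & 0xFF) << 8)  |  (b[3] & 0xFF);
        int sign = (bits >>> 31) == 0 ? 1 : -1;
        int exponent = ((bits >>> 24) & 0x7F) - 64;   // power of 16
        int fraction = bits & 0x00FFFFFF;             // 24 fraction bits
        return sign * (fraction / (double) (1 << 24)) * Math.pow(16, exponent);
    }

    public static void main(String[] args) {
        // 0x41100000 is 1.0 in this format: fraction 1/16 times 16^1.
        System.out.println(fromIbm32(new byte[] { 0x41, 0x10, 0x00, 0x00 })); // prints 1.0
    }
}
```

Because the exponent steps in powers of 16, normalization can leave up to three leading zero bits in the fraction, which is why these floats have less effective precision than IEEE 754 floats of the same width.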