Jiafan Zhou

Ranch Hand
since Sep 28, 2005
Jiafan likes ...
Mac OS X Fedora Linux

Recent posts by Jiafan Zhou

From Oracle's official website:
Oracle Certified Java Programmer, Silver SE 8 (Oracle Certified Associate, Java SE 8 Programmer - available only in Japan), Oracle Certified Associate, Java SE 8 Programmer

Is that real? Can I not take this exam outside Japan?

Wendy Gibbons wrote:I guess as nobody has commented in 5 days the answer is no, nobody else cares



Yep. Then how about the duration of a database transaction?
Hi all,

I am wondering whether it is normal to be able to know the duration of a connected database session, from the time a database connection was established to the time it was terminated, especially considering that a connection pool is in use.

I thought this was a common use case, but after googling for a while I was surprised to find that most database products do not expose this information.

I am not interested in the timeout option, because that is a different use case.


Thanks.
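For what it's worth, one way to get this number without support from the database itself is to record it in application code, around the point where the pooled connection is checked out and returned. A minimal sketch (the class name and DataSource wiring are made up for illustration):

import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;

// Minimal sketch: time how long a pooled connection is held by the application,
// from checkout (getConnection) to return (close). "TimedConnectionHelper" is a
// made-up name for illustration.
public class TimedConnectionHelper {

    private final DataSource dataSource;

    public TimedConnectionHelper(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void doWork() throws SQLException {
        long start = System.currentTimeMillis();
        try (Connection connection = dataSource.getConnection()) {
            // ... run queries with the connection ...
        } finally {
            long durationMillis = System.currentTimeMillis() - start;
            System.out.println("Session (checkout to close) lasted " + durationMillis + " ms");
        }
    }
}

Note that with a connection pool, close() normally returns the connection to the pool rather than tearing down the physical database session, so this measures checkout-to-return time, not the lifetime of the underlying connection.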
A top-level class can be declared either public or with no modifier (package-private). Thus a protected top-level class is not valid.
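For example (a quick illustration; the class names are made up):

public class Example { }            // valid: public top-level class

class PackagePrivateExample { }     // valid: no modifier means package-private access

// protected class Invalid { }      // does not compile: protected is not allowed on a top-level class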
11 years ago
Why use underscores?
11 years ago
Finally I can decrement my own counter
--counter

counter=23
I'd like to take this opportunity to thank everyone in this forum who helped me to pass this exam.
In spite of the long waiting period, I enjoyed this exam a lot, across Parts 1, 2 and 3.

The following is the "notorious" email from Oracle:

"Hi Jiafan,

Congratulations! You have passed the Oracle Certified Master, Java EE 5 Enterprise Architect certification. You are among the elite 1% of certified Java professionals who have gone on to achieve the Java Enterprise Architect certification.

You have received a passing score. Passing grades do not receive a score, just the pass. A failing grade will receive section level scoring so that a candidate may isolate the area of failure and resubmit their assignment. Numeric scores are only provided on failures.

Kindly allow two weeks time to view the Oracle Certified Master, Java EE 5 Enterprise Architect certification status on your CertView profile.

Thank you for all your efforts with this certification, and good luck in your future endeavors with Java.

Please email us your mailing address and phone number in the following format, so that we can process your Oracle Certified Master, Java EE 5 Enterprise Architect certification success kit."
11 years ago
I am not a TCP/IP expert, but this is what I understand.

Suppose a connection is established and the socket at one end has no incoming data, while at the same time we use the Java InputStream.read() API to read data from that socket. Since read() is a blocking call, the thread executing it will block until data from the socket becomes available. This state is different from the one where data is available to read from the socket. As I have mentioned, what I am trying to do is very simple: measure the idle time when there is no data to read versus the time when there is data to read.

Paul Clapham wrote:There is no way to know whether the delay between your code asking for bytes and your code receiving bytes is caused by the operating system waiting to receive bytes for your code, or by the operating system paging out your code, or by the operating system delaying your code because it's serving a higher-priority thread, or any number of other reasons I haven't thought of.


Is there a way to know this wait (idle) time at a level above the operating system, such as the JVM?
Surely when a Java input stream (of any kind, such as BufferedInputStream) reads from a socket and the socket has no incoming data, the waiting or idle time is recorded somewhere by the thread?
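For reference, the only straightforward way to approximate this from application code is to timestamp around the blocking call itself; a minimal sketch (note that the measured interval mixes genuine waiting with the actual byte transfer):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

// Minimal sketch: timestamp around a single blocking read from a socket.
// (end - start) covers everything from calling read() until it returns, so it
// mixes genuine idle (waiting) time with the time spent transferring bytes.
public class ReadTimer {

    static int timedRead(Socket socket, byte[] buffer) throws IOException {
        InputStream in = socket.getInputStream();

        long start = System.currentTimeMillis();
        int bytesRead = in.read(buffer);        // blocks until data is available (or EOF)
        long end = System.currentTimeMillis();

        System.out.println("read() returned " + bytesRead
                + " byte(s) after " + (end - start) + " ms");
        return bytesRead;
    }
}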

Paul Clapham wrote:

Jiafan Zhou wrote:I don't totally agree with this. The network read from the socket can be slow in itself when reading a large byte stream. I want to know what percentage of the total read is taken up by idle time.

And since I don't know how long the idle time is, it could be long as well. I want to know how long the idle time is, and what and where generates it.



Why would the length of the stream being read affect the idle time in any way?

As for what generates the idle time, it's the operating system waiting for packets to come over a network connection. It doesn't look like you really understand the relative speeds of computers versus networks, so you ought to get out your pencil and do a bit of arithmetic. How fast is your computer? Let's suppose you have a 4GB machine -- that means it does 4 billion operations per second. And how fast is your network? Let's suppose you have a 10MB network -- that means it transmits 10 million bytes per second. Think about this for a while so you get some idea of what's important and what's trivial.



Thanks, Paul. The length of the stream should not affect the idle time; a long byte stream only makes the overall performance worse, on top of the idle time.
Hold on, though: I am not particularly interested in the idle time generated by the operating system. I am more interested in the idle time generated by the application. For example, if the server (the sender of the packets) takes a nap while sending packets, while the receiver uses a blocking read() and waits to receive them, idle time is generated. This is what I am interested in knowing, in particular the percentage of this idle time versus the actual read time.

Henry Wong wrote:

Jiafan Zhou wrote:Plus, I don't think (end - start) is just the idle time; it also includes the time of the actual read.



True, but keep in mind that the actual operation should be relatively fast compared to the idle time.

If this is *not* the case, then this measurement would serve little purpose.... ie. what would be the purpose of knowing that you have 10ms of idle time for an operation that takes a second? If the network / operating system is keeping up, meaning no queuing up somewhere, there isn't really a problem.



I don't totally agree with this. The network read from the socket can be slow in itself when reading a large byte stream. I want to know what percentage of the total read is taken up by idle time.

And since I don't know how long the idle time is, it could be long as well. I want to know how long the idle time is, and what and where generates it.



Rob Spoor wrote:You can use System.currentTimeMillis(). In pseudo code:
To get the idle time between each byte you would need non-buffered (so no BufferedInputStream) single byte reads using the read() method.



Plus, I don't think (end - start) is just the idle time; it also includes the time of the actual read.

Rob Spoor wrote:You can use System.currentTimeMillis(). In pseudo code:
To get the idle time between each byte you would need non-buffered (so no BufferedInputStream) single byte reads using the read() method.



Hi, why a non-buffered stream? What happens if I use a BufferedInputStream?
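For reference, the reason for the non-buffered stream is that a BufferedInputStream fills its internal buffer with one large read from the socket, so most subsequent read() calls return immediately from memory and the per-byte gaps would mostly measure as zero. A rough sketch of the per-byte timing idea described above (class and method names are made up):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

// Rough sketch: read the raw socket stream directly (no BufferedInputStream),
// one byte at a time, and record the gap before each byte arrives. With a
// BufferedInputStream, one underlying read fills the whole buffer, so most
// later read() calls return instantly from memory and show no gap.
public class PerByteTimer {

    static void timeEachByte(Socket socket) throws IOException {
        InputStream in = socket.getInputStream();    // unbuffered stream
        long before = System.currentTimeMillis();
        int b;
        while ((b = in.read()) != -1) {              // single-byte blocking read
            long after = System.currentTimeMillis();
            System.out.println("waited " + (after - before) + " ms for byte " + b);
            before = after;
        }
    }
}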
Is a client that creates a SocketChannel (NIO) able to connect to a server that does *not* create a socket channel, i.e. a server that uses a standard ServerSocket?
Will this work?

Or should the server also create a socket channel?
I have difficulty modifying the server socket code, so I want to make sure of this.
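For what it's worth, both a SocketChannel and a classic ServerSocket speak plain TCP, so they should interoperate without any change to the server. A minimal self-contained sketch (the port number is made up):

import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

// Sketch: an NIO SocketChannel client talking to a classic ServerSocket server.
// Both ends are ordinary TCP sockets, so no change to the server is needed.
public class MixedSocketDemo {

    public static void main(String[] args) throws Exception {
        try (ServerSocket serverSocket = new ServerSocket(9000)) {   // made-up port
            // Classic blocking server: accept one connection on a background thread.
            Thread server = new Thread(() -> {
                try (Socket client = serverSocket.accept();
                     InputStream in = client.getInputStream()) {
                    byte[] buffer = new byte[64];
                    int n = in.read(buffer);
                    System.out.println("Server received: "
                            + new String(buffer, 0, n, StandardCharsets.UTF_8));
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            server.start();

            // NIO client using a SocketChannel (blocking mode by default).
            try (SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 9000))) {
                channel.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));
            }
            server.join();
        }
    }
}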
We know that reading a byte from a Java client socket is a blocking read, i.e. it will wait idly until bytes are available on the socket. Is there any way to measure the idle time of such a read?