
Dmitry Zhuravlev

Ranch Hand
since Apr 14, 2010
Moscow, Russia

Recent posts by Dmitry Zhuravlev

There are several protocols that allow sending files over a network: FTP, SSH, HTTP, or plain TCP. A file sent over a network can be corrupted in transit, so programmers want to verify that the received copy is correct. This can be done with an MD5 hash, or in some other way.

My question is: are there network protocols that perform this check for the programmer, so that we don't need to implement it manually and can rely on the protocol client?

For example, SSH does some hash checking of the file, and FTP probably has something similar. Does TCP verify a checksum for each transferred packet? Which protocols have these checks? Is that enough, or should we always verify the file ourselves?
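Whatever the protocol provides, an application-level check is easy to do yourself. A minimal sketch of the usual approach, using SHA-256 (the "hello" payload just stands in for a file's bytes):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ChecksumDemo {
    // Returns a hex-encoded SHA-256 digest of the given bytes.
    static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] fileContents = "hello".getBytes(StandardCharsets.UTF_8);
        // The sender publishes this digest alongside the file; the receiver
        // recomputes it over the received bytes and compares the two values.
        System.out.println(sha256Hex(fileContents));
    }
}
```

If the recomputed digest matches the published one, the transfer was intact end to end, regardless of what the transport protocol checked along the way.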

Greetings, Ladies and Gentlemen!

I need some guidelines on how to start with Hadoop. I need two things: a book and an installation, and their versions should match. Can you give me some advice on this?

I have 32-bit Windows 7 with Debian 7 inside VirtualBox.


Is there a big difference between Hadoop releases, i.e. between 1.x and 2.x?
There is a Hadoop download on the Apache site, but I have seen many opinions saying that a direct installation is a big pain. Is this correct? I just want to set up a single-node cluster to play with.
There are the Cloudera packages, but unfortunately they are for 64-bit machines, as far as I understand.
There is the Hortonworks sandbox, which I have been downloading for the last 30 minutes.

Is there something else?
What would you recommend?

I also need a book that describes, more or less precisely, the version I am going to install. Hadoop: The Definitive Guide is from 2012 - is it still up to date? I cannot figure out which version of Hadoop it describes.
5 years ago
Greetings, Ladies and Gentlemen!

I have coded a simple setup: a client connects to a server, the server starts sending bytes, and the client reads them, but with a one-second delay. I have two implementations of this: one using Java IO and one using Java NIO. By printing the number of bytes sent by the server and the number of bytes received by the client, I try to measure the size of the client buffer on my system. And of course, I double-check it via the socket.getReceiveBufferSize() method.

This has no practical purpose; it is just an exercise while studying Java IO.

1) When I run the old IO version, the difference between sent and received bytes stays around 8000 bytes all the time. I.e. I see logs like "sent 33000 bytes" and "received 25000 bytes" at the same time in the two consoles. Both the receive and send buffers, checked via the corresponding getters, are around 8000 bytes, i.e. their sum is 16000.
2) When I run the NIO code, the delta is always around 27000 bytes, while the receive and send buffer sizes remain the same.

The questions are:
Is my setup correct for the purpose of measuring the socket buffers?
Where do the bytes that are in flight between "sent" and "received" reside?
And if they are held in the socket buffers - why do the deltas between sent and received data differ from the buffer sizes?

Windows, Java 6, client and server located on the same machine.
Here is the code for the NIO setup:

Client, which reads data once per second.

Server, which sends data all the time.
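The full client and server listings aside, the buffer-size check itself can be sketched in isolation. A minimal self-connecting example (loopback only, ephemeral port) that asks the OS what buffer sizes it actually allocated; note that in-flight bytes can also sit in the sender's user-space ByteBuffer and in the loopback driver, not only in the two socket buffers:

```java
import java.net.ServerSocket;
import java.net.Socket;

public class SocketBufferSizes {
    public static void main(String[] args) throws Exception {
        // Bind to an ephemeral port on loopback and connect to ourselves,
        // then query the buffer sizes the OS actually granted.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            System.out.println("client receive buffer: " + client.getReceiveBufferSize());
            System.out.println("server send buffer: " + accepted.getSendBufferSize());
        }
    }
}
```

The reported sizes are hints the OS may round up or down, so the sum of the two getters is a lower bound, not an exact cap, on how many bytes can be outstanding.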


286000 bytes written.
287000 bytes written.
288000 bytes written.
289000 bytes written.
290000 bytes written.
291000 bytes written.
has read 253000
has read 254000
has read 255000
has read 256000
has read 257000
has read 258000
has read 259000
has read 260000
has read 261000
has read 262000

My application is deployed on Tomcat. It uses several Apache libraries, such as commons-logging and some others. I observe the following situation:

1) on my own machine everything works ok.

2) when I deploy my application on my hosting provider's Tomcat, it cannot find the LogFactory class from apache.commons.logging and some other Apache classes from other commons libraries (StringUtils, for example), and also some Google libraries. A NoClassDefFoundError is thrown.

I asked hosting support for help, and they said that if I use some jars in Tomcat listeners, I should place those jars on the Tomcat classpath, not on the standard application classpath, i.e. in a lib folder inside Tomcat's root. Is this correct?
Indeed, they placed the commons-logging jar into Tomcat's own lib folder and the problem disappeared. But I cannot really place all my libs into Tomcat's own lib; that makes no sense.

The strange thing is that before I replaced one library with a newer version, everything worked. Is there some cache that needs to be cleaned or refreshed when I change the libraries of my webapp?

Note: I probably have several Apache commons libraries on my classpath. It worked before, but now I have this problem. Can several similar jars, or different versions of the same jar, be the reason for this error?
7 years ago
I want to understand what's going on in the Internet industry. I want to know the trends, new successful projects, new services from well-known providers, etc. Are there places where I can find all this?
7 years ago

I need help with a set of questions on Maven!

1) As far as I understand, Maven always takes all artifacts from the local repository, but the repository itself is not updated when I run a build: Maven does not check whether the downloaded version of an artifact is the freshest. The only way to correct this is to build with the -U flag. Is that correct?

2) My build failed during the install phase. I corrected the error and tried to repeat the command with the -rf parameter, but Maven said there was an error in the command. How can I force Maven to continue the build not from some module, but from some phase?

3) One of the artifacts in my repository is called 1.x-Snapshot-... plus "lastUpdated", i.e. Maven has appended "lastUpdated" to my artifact. Why?

4) What is the difference (or the relationship) between the mvn deploy phase and the Maven Release Plugin? When should I use that plugin?
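For reference on (1) and (2), the relevant command-line forms look like this (`:my-module` is a placeholder for a real module's artifactId; `-rf` takes a module, not a phase):

```shell
# Force Maven to re-check remote repositories for newer snapshot versions
# (-U is short for --update-snapshots):
mvn clean install -U

# Resume a failed multi-module build from a given module:
mvn install -rf :my-module
```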
7 years ago

Jeanne Boyarsky wrote:It feels like there is a bigger question here. Like how to debug production code. Because that's the only reason I can think of why changing code would not be allowed.

Thank you Jeanne!

You are right.

The thing is that I need to debug code that is not mine; I get it as a dependency from a repository. If I want to change that code, I'll have to edit it, build it, and somehow supply it to my program without installing it into the main remote repository. Perhaps I can change the Maven POM and specify an exact path to the dependency instead of a repository group/artifact. That's why I am looking for something simpler, i.e. using debugging tools without changing the code.
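The POM change mentioned here can be sketched like this (coordinates and paths are hypothetical; system scope is discouraged for anything but local experiments like this):

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>patched-lib</artifactId>
  <version>1.0-DEBUG</version>
  <scope>system</scope>
  <!-- Points at a locally built jar instead of the repository copy -->
  <systemPath>${project.basedir}/lib/patched-lib-1.0-DEBUG.jar</systemPath>
</dependency>
```

Another option that avoids touching the remote repository is installing the rebuilt jar only into the local repository with `mvn install:install-file`.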
7 years ago
1) Is it possible to track the number of exceptions thrown by an application? Can I write a test that will print me such a number? An uncaught exception stops execution, so that case is easy, but what about caught exceptions?

2) How do I write a test that tells me the number of activities completed per second? For example, the average number of exceptions thrown per second, the average number of entities saved per second, the average number of users logging in to the site per second, etc. For example, I want to put some load on my webapp and find out how many entities it can save per second under that load, where the limit is, etc.

3) I want to know how many times some line is executed. Can I do that without changing the code?

4) I want to see the value of an expression during the execution of the code, without changing the code (*).

5) I want to stop the execution of my application not at a plain breakpoint, but only if some expression or variable equals some value (for example, stop the app if a variable is null). Without changing the code, of course.

(*) I can probably use watches for this. The IDEA debugger has this feature, and I can see the value of some expression during execution. But when execution reaches that line a second time, the old value is cleared. How can I watch an expression's value dynamically?
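Counting truly without touching the code usually means bytecode instrumentation or profiler support, but question (1) can at least be sketched at the application level with a shared counter (doWork and its failure rule are made up for illustration):

```java
import java.util.concurrent.atomic.AtomicLong;

public class CaughtExceptionCounter {
    static final AtomicLong caught = new AtomicLong();

    // Hypothetical workload: throws for every third input.
    static void doWork(int i) {
        if (i % 3 == 0) throw new IllegalStateException("simulated failure");
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        int calls = 9;
        for (int i = 1; i <= calls; i++) {
            try {
                doWork(i);
            } catch (IllegalStateException e) {
                caught.incrementAndGet(); // count it, then swallow as the app would
            }
        }
        double elapsedSeconds = (System.nanoTime() - start) / 1e9;
        System.out.println("caught exceptions: " + caught.get());
        // Throughput for question (2) is just caught.get() / elapsedSeconds.
    }
}
```

The same counter-plus-elapsed-time pattern covers the "entities saved per second" style of measurement: count the events, divide by the wall-clock duration of the load run.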
7 years ago

Gentlemen, please help me understand one GWT feature.

I have the following code:

I thought this code should produce a simple Ajax GET request to the specified URL, and that if the domain of the target URL did not match my app's domain, it would return an error or nothing.

But in fact, it works in the following way:
- if the target URL returns JSON, I get that JSON back:
url example:*%20from%20csv%20where%20url%3D''%20and%20columns%3D'symbol%2Cprice%2Cdate%2Ctime%2Cchange%2Ccol1%2Chigh%2Clow%2Ccol2'&format=json
A MessageBox with the JSON response is shown.
The request is marked as OK in the Network section of Chrome Developer Tools.

- if the target URL is a plain site URL (for example: - it returns nothing: no response at all, and no error.
An empty MessageBox is shown in the OnSuccess handler.
The request is marked as Canceled in Chrome.

So what is going on?
I thought I could reach other sites only by using JsonRequestBuilder to create script elements on my web page.
Does a plain RequestBuilder also create script elements all the time?
If not, then why does the first request work?
If so, then why is the second request cancelled?

GWT 2.4, Chrome 19.0
8 years ago
Greetings Gentlemen!

Here is a problem I faced:

I have a GWT servlet and I want to use it as the server side for SEVERAL clients.
For this purpose my servlet should implement the client interface and its asynchronous version.
I don't want the server-side servlet to depend on the client-side modules.
Thus, I need to place the servlet interface in each of the modules.

How can I do it?

As far as I know there are two ways:

1) Make a reusable GWT module, effectively a jar, containing a gwt.xml file and the two servlet interfaces, and add it to the Maven classpath.
- OK, but in this case I have to create two modules for the server side: the servlet itself in a WAR, and the interfaces it implements in a reusable GWT module JAR; and that JAR will exist only to hold those two interfaces (MyServletInterface and its asynchronous version). A bit verbose, in my opinion.
2) Place the servlet and the GWT client interfaces inside one WAR and add it to the classpath of my webapp as a dependency.
- If I add one WAR to the classpath of another WAR, nothing good will happen: the files will get mixed. Or maybe I can overcome this somehow? Maven can probably exclude something from a provided dependency, but this looks a bit ugly.

So how would you go about it?
Is there something better?
8 years ago
Greetings, Gentlemen!

I have just finished reading the SVN book and want to clear up several questions.

1) What can I do if I need to switch to the HEAD revision for some time and fix a bug immediately? It looks like in SVN the only way is to have a separate checkout of the repository for this kind of task. In Git, for example, you can create several local branches or stash your changes.

2) Which svn commands, besides the standard checkout/merge/update/delete/add, make your life with SVN happier?

3) Is it possible to set permissions for particular repository folders? For example, to give each developer access only to his own project folder.

4) Sometimes after calling svn status I see a reply like:

svn status
M FolderName

What does this mean? The folder name is unmodified, and no changes inside the folder took place. Why is the folder considered modified?

5) It looks like I cannot cancel a merge once a conflict is discovered, can I?

I am trying to implement a web-service provider using Spring 3 annotations. I have an interface marked with the @WebService annotation, and its implementation with the same annotation, service name, and port. But this doesn't work. When I try to create the corresponding bean, I get an exception:

(exactly this way, with the misprint)

Has anyone encountered this?
Any suggestions?
8 years ago

Hebert Coelho wrote:Could you put as non required? You will find this parameter at @Column annotation.

Hebert, what do you mean?

I checked the @Column annotation here and did not find any required-looking parameters.
OK, the problem was that Hibernate created a link table with 3 columns (manager_id, action_id, reaction_id) and was trying to insert a row with only two values. I used the @JoinTable annotation to provide two join table names, creating two separate join tables, and that fixed the problem. I guess another way would be to force Hibernate to add a default value to the reaction_id column; I tried to do this with @Column(columnDefinition="int(10) default 1"), but after that Hibernate stopped creating my schema in the database, and I don't know why. It can probably be done manually in the database.
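The two-join-table fix described here can be sketched roughly like this (entity, field, and column names are hypothetical; Action is assumed to be a mapped @Entity, and this fragment needs a JPA provider on the classpath to run):

```java
import javax.persistence.*;
import java.util.List;

@Entity
public class Manager {
    @Id @GeneratedValue
    private Long id;

    // Each collection gets its own join table, so Hibernate no longer
    // generates a single three-column link table shared by both.
    @OneToMany
    @JoinTable(name = "manager_actions",
               joinColumns = @JoinColumn(name = "manager_id"),
               inverseJoinColumns = @JoinColumn(name = "action_id"))
    private List<Action> actions;

    @OneToMany
    @JoinTable(name = "manager_reactions",
               joinColumns = @JoinColumn(name = "manager_id"),
               inverseJoinColumns = @JoinColumn(name = "reaction_id"))
    private List<Action> reactions;
}
```

Without an explicit @JoinTable on each collection, the provider picks default join table names, which is how the two collections ended up colliding in one table.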

Gentlemen, here is a problem I have encountered.

I have this mapped Entity class:

and also I have an Action class:

These mappings look very simple to me. But they don't work!

When I try to save a Manager instance, this exception is thrown:

org.springframework.jdbc.UncategorizedSQLException: Hibernate operation: Could not execute JDBC batch update; uncategorized SQLException for SQL [insert into Manager_Action (Manager_id, actions_id) values (?, ?)]; SQL state [HY000]; error code [1364]; Field 'reactions_id' doesn't have a default value; nested exception is java.sql.BatchUpdateException: Field 'reactions_id' doesn't have a default value

Hibernate creates the following schema from my mappings: a managers table, an actions table, and a manager_action table consisting of 3 columns: manager_id, action_id, reaction_id. So the problem is somewhere here... But I have specified the GeneratorType from the Action entity, and it should use it for reaction_id. What am I doing wrong?