Yohan Weerasinghe

Ranch Hand
since Oct 07, 2010
Yohan likes: NetBeans IDE, Oracle, Java
Biography
I am 21 years old, currently pursuing a software engineering degree...
Sri Lanka
Cows: 0 received (0 in the last 30 days), 0 given
Likes: 3 received (0 in the last 30 days), 27 given (0 in the last 30 days)

Recent posts by Yohan Weerasinghe

Please have a look at the code below.



The code is used for auto-completing a text field with the jQuery UI autocomplete plugin. You can find the plugin here: https://jqueryui.com/autocomplete/#remote-jsonp

However, I have a `REST` link from which I would like to retrieve the information. I tried replacing the link in the above example, but it didn't work. A sample REST link can be found here: http://www.thomas-bayer.com/sqlrest/CUSTOMER/ (I am not using this one anyway).

So, how can I use this same jQuery auto-completion with a REST API? I also posted the same question here, but there seems to be no response.
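For reference, a minimal sketch of wiring the jQuery UI autocomplete widget to a REST source (the endpoint path, query parameter, and response field names here are illustrative assumptions, not the actual service):

```javascript
// A sketch only: endpoint, parameter, and field names are assumptions.
$(function () {
    $("#search").autocomplete({
        minLength: 2,
        source: function (request, response) {
            // Call the REST endpoint with the typed term as a query parameter
            $.getJSON("/api/customers", { q: request.term }, function (data) {
                // Map each REST record into the {label, value} pairs the widget expects
                response($.map(data, function (item) {
                    return { label: item.name, value: item.name };
                }));
            });
        }
    });
});
```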
8 years ago
It doesn't matter that I am not writing back to the browser, because I can see what is happening in the console, and I do have console messages set. Regarding writing the S3 code in another class and calling it via the servlet: I have tried that too, and it behaves exactly the same as what you can see here. I am wondering whether this is related to Apache. I think this relates to either Apache, localhost, or XAMPP.
8 years ago
First, I made another post about this on SO, but it ended with no results: http://stackoverflow.com/questions/31176998/amazon-s3-issue-in-web-applications


Please have a look at the code below.


I am using this code to list the buckets I have in Amazon S3. However, this process is extremely slow; it takes almost a minute or more. Not only that, it takes a lot of memory, almost freezing Google Chrome. My bucket is in the US Standard region, and my application is running on a PC located in South Asia.
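For reference, a minimal sketch of the kind of bucket-listing call in question (this assumes the AWS SDK for Java 1.x; the credentials and class name are placeholders, not the actual code):

```java
import java.util.List;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.Bucket;

public class ListBucketsSketch {
    public static void main(String[] args) {
        // Placeholder credentials; in a servlet these would come from configuration
        AmazonS3 s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // One round trip to S3 per call
        List<Bucket> buckets = s3.listBuckets();
        for (Bucket bucket : buckets) {
            System.out.println(bucket.getName());
        }
    }
}
```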

The delay is in the connection configuration. I noticed there is no such delay when it runs in a non-web Java application: I ran this in a simple `Main` class, which is not web based, and it worked fine and fast.

I am using NetBeans IDE, and the above code is part of a servlet. The server I am using is Apache Tomcat 7.

Any ideas why this is taking so much time? I added all the JAR files that came with the AWS SDK for Java as well. It is of no use to me if it takes this long.
8 years ago
First, I asked this question here, but with no good results.

I have a JSP, servlet, and Hibernate application, and in it I have a very weird problem: if the session has expired (in other words, timed out) and the user clicks on a link, the page is redirected to the index page, but after that the user is not able to log in and access the last link he clicked. I will describe it step by step below.

1. User logs into the application. A session gets created.
2. He accesses the path /Passport.
3. User is now idle; the session expires.
4. User comes back and clicks a link to access /Visa. Since the session has expired, the user is redirected to the index page.
5. User logs in.
6. User clicks the link to access /Visa (from anywhere the link is available). The link is an <a href> pointing to a path like

Visa?idEmployee=1

7. Now the problem: the user is redirected back to the index page.

I have a `Filter` to check whether the session is `null` and whether the required session attributes are not `null`. If the request does not fulfill those two conditions, it is sent back to the index.

The filter code is below.
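A minimal sketch of a session-checking filter along the lines described (the session attribute name and index path are illustrative assumptions, not the actual code):

```java
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class SessionFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // false: do not create a new session if none exists
        HttpSession session = request.getSession(false);

        // "user" is an illustrative attribute name
        if (session == null || session.getAttribute("user") == null) {
            response.sendRedirect(request.getContextPath() + "/index.jsp");
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig config) throws ServletException { }

    @Override
    public void destroy() { }
}
```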


In web.xml, I have mapped the filter to each servlet, like below.
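The per-servlet mappings would take the standard web.xml form below (the filter name, class, and URL patterns are illustrative):

```xml
<filter>
    <filter-name>SessionFilter</filter-name>
    <filter-class>com.example.filter.SessionFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>SessionFilter</filter-name>
    <url-pattern>/Passport</url-pattern>
</filter-mapping>
<filter-mapping>
    <filter-name>SessionFilter</filter-name>
    <url-pattern>/Visa</url-pattern>
</filter-mapping>
```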



The session timeout is configured as below.
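In standard web.xml form, that setting looks like this (the timeout is in minutes; the value is illustrative):

```xml
<session-config>
    <session-timeout>30</session-timeout>
</session-config>
```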



So, what is happening here?

Update

When the above error happens, the URL actually looks like `http://localhost:8080/xxx/Visa?idEmployee=1` even though it is redirected!
8 years ago
First of all, please note that I have posted the question below on Stack Overflow, and I have one answer there, but I am not sure about it because it seems the member who answered is also not very sure about his answer. Below is my question.

I have a JSP and servlet application (pure JSP and servlets) which uses `Hibernate`. Below is a `Hibernate` implementation class for one table.

DesignationImpl.java




Below is the service class which calls the above class.

DesignationService.java




And the servlets call them as shown below.



As you can see, there is a bad thing happening there: the servlets are creating new `SessionFactory` instances every time they execute. I am having `Driver#Connect` issues, and I guess this might be the reason for them.

I read Stack Overflow posts, and some seem to suggest using only one `SessionFactory` for the entire application. If that is suitable, how can I do it? Maybe make a singleton class like below and use it in my implementation classes?
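For illustration, the classic singleton holder for this would be a sketch like the following (assuming Hibernate 3-style `Configuration` API and a hibernate.cfg.xml on the classpath; the class name is an arbitrary choice):

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateUtil {

    // Built exactly once, when the class is first loaded
    private static final SessionFactory sessionFactory = buildSessionFactory();

    private static SessionFactory buildSessionFactory() {
        try {
            // Reads hibernate.cfg.xml from the classpath
            return new Configuration().configure().buildSessionFactory();
        } catch (Throwable ex) {
            throw new ExceptionInInitializerError(ex);
        }
    }

    public static SessionFactory getSessionFactory() {
        return sessionFactory;
    }
}
```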



But then what about threads? Servlets are multi-threaded, aren't they?
Please have a look at the code below.



Here I created the `deleteJob()` method to delete and re-open or re-schedule the jobs listed in `passportReminder1()`. However, I have no idea how to do it. How can I delete or re-schedule a Quartz job? I am using the Quartz 2 API.
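For reference, a minimal sketch of the Quartz 2 calls involved (the job and trigger names and the cron expression are illustrative assumptions):

```java
import static org.quartz.CronScheduleBuilder.cronSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerKey;

public class JobControlSketch {

    // Remove the job and all of its triggers
    public void deleteJob(Scheduler scheduler) throws SchedulerException {
        scheduler.deleteJob(JobKey.jobKey("passportReminderJob", "reminders"));
    }

    // Replace the job's existing trigger with a new schedule
    public void rescheduleJob(Scheduler scheduler) throws SchedulerException {
        Trigger newTrigger = newTrigger()
                .withIdentity("passportReminderTrigger", "reminders")
                .withSchedule(cronSchedule("0 0 9 * * ?")) // daily at 9 AM
                .build();
        scheduler.rescheduleJob(
                TriggerKey.triggerKey("passportReminderTrigger", "reminders"),
                newTrigger);
    }
}
```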

Note: I posted this here, but I am not happy with the answer at all.
9 years ago
The answer in SO really worked. Thanks for the help guys.
9 years ago
First of all, I have posted this question here, but I am not satisfied with the answers.

I am having an issue with retrieving "grouped" data from an HTML form in a servlet. I will describe the scenario below.

In companies, the salaries of the employees are recorded once a month. When they record them, they do not do it by visiting each and every employee's personal "profile" (or whatever the system provides). Instead, what they do is apply the salaries of all of them in one page.

To do this, they prefer Excel-like tabular sheets.

Now, I have an HTML form whose content is a table. One row is dedicated to one employee.

Below is my form.




As you can see, I have wrapped every row with a `<tbody>`. The `value` attribute of the `<tbody>` will contain the employee id.

Once the form is submitted, the servlet below will capture it.




What I was trying to do is get the `value` attribute of the `<tbody>` (so I can identify the employee's id) and then get the data inside that `<tbody>`. However, this didn't work: I ended up with a `NullPointerException` because it failed to read the `<tbody>` value.

So, how can I pass the data from the table to the servlet so that it clearly understands that one row represents the data belonging to one employee? If this is not the way to do it, I am also open to other methods.
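For what it's worth, a common alternative is to give every row a hidden id input plus identically named fields, then read the parallel arrays in the servlet; `<tbody>` has no `value` attribute in HTML, which would explain the null. A sketch, with all parameter and class names as illustrative assumptions:

```java
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SalarySheetServlet extends HttpServlet {

    // Assumes each <tr> contains <input type="hidden" name="empId"> and
    // <input name="salary">, so the arrays line up row by row.
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String[] empIds = request.getParameterValues("empId");
        String[] salaries = request.getParameterValues("salary");

        if (empIds != null && salaries != null && empIds.length == salaries.length) {
            for (int i = 0; i < empIds.length; i++) {
                int employeeId = Integer.parseInt(empIds[i]);
                double salary = Double.parseDouble(salaries[i]);
                // persist one (employeeId, salary) pair per row here
            }
        }
    }
}
```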
9 years ago
Hi,

Please have a look at the code below.

index.html



styles.css


My code above generates the web page below.



You can see how the video container and the logo above it on the right side are out of alignment (when posting to SO, I hid the logo by painting over it in black, so the logo is above the video and appears as a black box). It is even worse if someone zooms the web page in or out, because the video container and logo get more and more out of alignment.

Please have a look at the image below.



The above image shows my expectation: the video and logo are properly aligned, and they do not go out of alignment when the web page is zoomed in or out.

The DIV which contains the logo is named apDiv1, and you can find it on line 24 of the HTML code. The DIV which contains the video is named video_container2, and you can find it on line 40.

I have shortened the code as much as possible so it is easy for you to read.

So how can I fix this alignment issue?
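For what it's worth, a common way to keep an overlay pinned to its container at any zoom level is to position it relative to the container rather than the page; a sketch, assuming the logo div is moved inside the video container in the HTML (the offsets are illustrative):

```css
/* Make the container the positioning context for its children */
#video_container2 {
    position: relative;
}

/* The logo is now offset from #video_container2, not from <body>,
   so the two scale and move together when the page zooms */
#apDiv1 {
    position: absolute;
    top: -60px;   /* illustrative offsets */
    right: 0;
}
```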

PS: I have posted this at this link too, but there is no answer yet, so I am seeking your help.
**I have posted this question here, but there is still no good answer, so I am posting it in this forum.**

I am kind of new to SQL. I have two MySQL tables. Below are their structures.

**`Key_Hash` Table**


--


**`Key_Word` Table**




Now, below is my query


When you run the above query, you will get an output like the one below.



The important thing to note here is that `indexVal` in the `key_word` table holds the same set of data as `primary_key` in the `key_hash` table (I think it could be a foreign key?). In other words, `primary_key` data in the `key_hash` table appears as `indexVal` in the `key_word` table. But please note that `indexVal` can appear any number of times in the table, because it is not a primary key in `key_word`.

OK, so this is not exactly the query I need. I need to count how many times each unique `indexVal` appears in the above search, and divide that by the appropriate value in `key_hash.totalNumberOfWords`.

***I am providing a few examples below.***

Imagine I ran the above query and the result has been generated. It says:

- `indexVal` 0 appeared 10 times in search
- `indexVal` 1 appeared 20 times in search

- `indexVal` 300 appeared 20,000 times in search

Now keep in mind that `key_hash.primary_key` = `key_word.indexVal`. First I look up the `key_hash.primary_key` matching each `key_word.indexVal` and get the associated `key_hash.numberOfWords`. Then I divide the `count()` produced by the above-mentioned query by this `key_hash.numberOfWords` and multiply the answer by 100 (to get the value as a percentage). Below is a query I tried, but it has errors.


How can I do this job?
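For reference, a sketch of the general shape such a query could take (the `word` column and the `LIKE` condition are illustrative assumptions, since the original query is not shown):

```sql
-- Count how often each indexVal matches the search, then express the
-- count as a percentage of that row's word total from key_hash.
SELECT kw.indexVal,
       COUNT(*) / kh.totalNumberOfWords * 100 AS matchPercentage
FROM key_word kw
JOIN key_hash kh
  ON kh.primary_key = kw.indexVal
WHERE kw.word LIKE '%search term%'   -- illustrative search condition
GROUP BY kw.indexVal, kh.totalNumberOfWords;
```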

**EDIT**

This is what the `key_hash` table looks like:




This is what the `key_word` table looks like:




I have a JSON file which is 1 terabyte in size. Each JSON object contains text with 500-600 words. There are 50 million JSON objects.

Now, this is what I have to do with this JSON file. I need to enter 200-300 words and a percentage value into a web page. Once this is done, the web application will read the entire JSON file, checking whether the entered words are available in any JSON object and what the percentage of availability is. If the availability percentage is higher than the percentage I entered, then the application will also keep track of the words available in the JSON object compared to the input list, and the words missing from the JSON object compared to the input list.

I felt that reading 1 TB is too much, so I did a trick. I converted the text in every JSON object into a hash (this hash can represent any word with 3 characters) and saved it to a text file. Now the hash on each line of this text file represents the text in that particular JSON object. This text file is 120 GB: 50 million lines.

My problem is that reading the file and performing the above job is still hard. It takes hours to complete! Why? Because the application reads "every" line in this hash file and searches for which words are available and which are not. So this "checking" algorithm runs 50 million times!
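To make the shape of that inner check concrete, here is a sketch of the per-line test (this assumes the query words are hashed into a `HashSet` first and that the hashes on each line are space-separated; the file name, hash values, and threshold are illustrative):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

public class HashScanSketch {
    public static void main(String[] args) throws IOException {
        // Hashes of the 200-300 input words (illustrative values)
        Set<String> queryHashes = new HashSet<>();
        queryHashes.add("a1b");
        queryHashes.add("c2d");

        try (BufferedReader reader = new BufferedReader(new FileReader("hashes.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                int found = 0;
                // Each line holds the 3-character hashes of one JSON object's text
                for (String hash : line.split(" ")) {
                    if (queryHashes.contains(hash)) {
                        found++;
                    }
                }
                double percentage = 100.0 * found / queryHashes.size();
                if (percentage >= 75.0) { // illustrative threshold
                    System.out.println("match: " + percentage + "%");
                }
            }
        }
    }
}
```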

Is there any way I can reduce the time of this operation and do it within a few seconds? I know applications in chemistry and genetic medicine do much the same thing within seconds! I am open to all solutions, whether a big data solution, data mining, or a simple fix. Please help.

PS: I thought of a Hadoop-based solution, but that means purchasing a lot of computers! That is a huge cost! Even running it on Amazon costs double! I don't have the cash to buy multiple computers either!

PS: I had a suggestion to use GPU computing. The argument was that Hadoop uses a lot of cores to run the app, and GPU computing does the same (note I am not saying Hadoop can be run on a GPU). It is also said that GPUs like the NVIDIA Tesla are built for running massive loops. But I have simple loops, just running a lot of times.

PS: Please note I have posted the same question here, but I did not find the answer I am looking for.
9 years ago

Winston Gutkowski wrote:

Yohan Weerasinghe wrote: Thanks for the reply. What did you mean by a 'profiler'?


It's a program that can analyse your program while it's running, and give you some idea of what is being executed most.

An alternative to the above suggestion: Ask someone WHY you're trying to parse 500 megabytes of text on an ad hoc basis.

This (I suspect) is the actual source of your problem.

Winston



Thank you for the reply. I have 2 questions to ask.

1. If the way I am reading files is not the best way, what is the best way? How should it be handled? The 500 MB file is a test case; the real file is 4 terabytes.
2. Do you think there is "any" place in this code where I can use multi-threading?
10 years ago

fred rosenberger wrote:Hi Yohan,

I edited your post to split up some REALLY long lines...it just makes your whole post easier to read.

I can't give you any specific advice, but I will tell you what we always say in the performance forum.

Get a profiler, and look and see where your program is REALLY spending its time. You can guess all you want, but you will inevitably be wrong. There is no reason to try and speed up some part that isn't really taking any time.



Thanks for the reply. What did you mean by a 'profiler'?

10 years ago
**Please note that I have posted this question here, but I am posting it in this forum as well because I still have no answer.**

I am going to ask kind of a serious question. I have a file with "sentences", and the file size is 500 MB. Since it takes a long time to read, I created a *hash* version of it and saved it to another file (I first gathered the list of words which will be in my program, then created hashes for them, then added them to a `HashMap` where the 'key' is the word and the 'value' is the hash; using this `HashMap` I converted the entire 500 MB into a separate hash file). Now this hash file is 77 MB. The hash can represent any word using 3 characters, and it creates a unique hash for each word. One line in this hash file corresponds to one sentence in the real file.

Now, I am going to enter a list of words into the program. The program will convert these words to *hashes* too. Then it will go through the hash file I explained before (77 MB) and find whether the words I entered are present (I am comparing hashes). If they are present, I take the word (the hash representation of the word) and convert it back to the real word. Below is my program.



I tried my best to reduce the code so I could post something short. The above code is still big, but without all of its parts you will not understand it.
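For orientation, a minimal sketch of the word-to-hash scheme described above (the words, 3-character codes, and class name are illustrative assumptions):

```java
import java.util.HashMap;
import java.util.Map;

public class WordHashSketch {
    public static void main(String[] args) {
        // 'key' is the word, 'value' is its fixed 3-character hash
        Map<String, String> wordToHash = new HashMap<>();
        wordToHash.put("passport", "a1b");
        wordToHash.put("visa", "c2d");

        // Reverse map used to turn a matched hash back into the real word
        Map<String, String> hashToWord = new HashMap<>();
        for (Map.Entry<String, String> e : wordToHash.entrySet()) {
            hashToWord.put(e.getValue(), e.getKey());
        }

        // Convert one sentence into its hash form, as done for the 500 MB file
        String sentence = "passport visa";
        StringBuilder hashed = new StringBuilder();
        for (String word : sentence.split(" ")) {
            hashed.append(wordToHash.get(word));
        }
        System.out.println(hashed); // prints "a1bc2d"
    }
}
```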

Now, my question is: my program is very, very slow. If I enter 50 words into the application, it takes more than 1 hour to do the work I explained before. I have tried for 2 weeks to find a solution, but I could not. Can someone please help me speed this up? FYI, it takes no longer than 12 seconds to read the 77 MB hash file. Something else is wrong. Please help.
10 years ago

Ulf Dittmer wrote:

Yohan Weerasinghe wrote:The problem here is there are too many root level elements in the XML.


I would phrase it slightly differently: there aren't too many root elements, there is no root element. JSON has no concept of a root element, so the XML class can't provide one. As the javadocs of the XML.toString method say, it creates an XML string, not an XML document. You have to provide the rest.



:O :O :O How can I do that? There is also another issue: I have only one root element in the JSON, and that is an array, but this XML conversion has generated multiple arrays!!
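For reference, a sketch of what "providing the rest" could look like with org.json (the root element name and sample data are illustrative):

```java
import org.json.JSONArray;
import org.json.XML;

public class JsonToXmlSketch {
    public static void main(String[] args) {
        JSONArray array = new JSONArray("[{\"name\":\"a\"},{\"name\":\"b\"}]");

        // XML.toString produces an XML fragment, not a document,
        // so wrap it in a single root element of our own choosing.
        String xml = "<root>" + XML.toString(array) + "</root>";
        System.out.println(xml);
    }
}
```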
10 years ago