
nitinram agarwal

Ranch Hand
since Jan 29, 2009

Recent posts by nitinram agarwal

I have a requirement to highlight rows with a specific color based on one of their attributes. For example, say I am displaying employee records: every employee whose highest-education column is not null should be shown in green.

My existing code in the corresponding employee.component.html file is as follows:



So as of now only a specific column is colored, but the requirement is to color the whole row. I am using Angular 10.x. I have been trying to apply some kind of conditional logic on the opening "tr" tag, but it is not working.
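A minimal sketch of one way to do this, assuming an employees array rendered with *ngFor plus a hypothetical highestEducation field and a highlight-green CSS class (names invented for illustration): bind [ngClass] on the <tr> itself so the condition styles the whole row rather than one cell.

<!-- employee.component.html (sketch) -->
<table>
  <tr *ngFor="let emp of employees"
      [ngClass]="{'highlight-green': emp.highestEducation != null}">
    <td>{{ emp.name }}</td>
    <td>{{ emp.highestEducation }}</td>
  </tr>
</table>

/* employee.component.css (sketch) */
.highlight-green {
  background-color: #c8e6c9;
}

Because the class binding sits on the row element, every cell inside it picks up the background, which is what moving the logic from the <td> to the <tr> buys you.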

I am trying to find common elements between 2 lists where elements are of type Object.

For example


Similarly:


I need to find the entries in the employees list that have the same first name as an entry in the students list.

I did the following




This works; however, I am trying to find a solution where the matching attributes are not limited to a single field (for example, both first and last name).

The lists can contain many elements, but irrespective of their size I am wondering whether a better solution exists using FP.
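A minimal sketch of one FP-style option, assuming hypothetical Employee and Student types with first- and last-name accessors (records are used only to keep the sketch short): build a set of composite keys from one list and filter the other against it, so the match criteria can cover as many fields as needed.

import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical stand-ins for the real Employee and Student classes.
record Employee(String firstName, String lastName) {}
record Student(String firstName, String lastName) {}

public class CommonElements {
    public static void main(String[] args) {
        List<Employee> employees = List.of(
                new Employee("John", "Doe"), new Employee("Jane", "Roe"));
        List<Student> students = List.of(
                new Student("John", "Doe"), new Student("Amit", "Shah"));

        // Composite key built from every field that should take part in the match.
        Set<String> studentKeys = students.stream()
                .map(s -> s.firstName() + "|" + s.lastName())
                .collect(Collectors.toSet());

        List<Employee> common = employees.stream()
                .filter(e -> studentKeys.contains(e.firstName() + "|" + e.lastName()))
                .collect(Collectors.toList());

        common.forEach(System.out::println);   // Employee[firstName=John, lastName=Doe]
    }
}

Because the student keys sit in a HashSet, each employee is checked in constant time, so the comparison stays linear rather than turning into a nested loop, and adding another field to the match is just a matter of extending the key.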
6 months ago

Paul Clapham wrote: There is only one alternative: process the data in parts instead of loading it all into memory at once. How you would do that depends strongly on the structure of the data, which you haven't said much about. You would like to have an array but it's too big for memory, so you can't have that array. Instead you have to process the data in parts. Again, what those parts might be depend on what the data is.


nitinram agarwal wrote: I have updated my question and provided some details. Please see if it helps.

1 year ago
Hello friends,
I hope you are keeping well in this time of pandemic.

I would like your opinion on two questions I was asked in a technical discussion.
While I can think of some possible options, I thought of putting them here so as to get more details. If the question is inappropriate for this forum, please let me know and I will delete it.
The questions asked in the technical discussion were (I am not supposed to use any standard Java library that handles such situations; I have to implement something on my own):
1. How do you process an array that does not fit in the memory available to the JVM?
2. How do you process a big file, say 20 GB, that does not fit in available memory?

I answered the following for 1:
a. Get the length of the array and process the array in parts of that length (for example, 4 iterations over chunks of length/4).

For 2, I said that the original file can be split into multiple parts using the split command (or something similar in the respective OS environment), the individual smaller files processed to generate intermediate results (for example, a data aggregation), and then, depending on the final size of the intermediate result files, either all the result files processed in one go or iterative processing applied again. A rough sketch of this idea is at the end of this post.

I think the approach for 2 made sense, but I am not sure about the approach for 1. If there are better alternatives, I would like to know them; I am generally curious what the answer should be.
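For reference, a minimal sketch of the streaming idea behind question 2 (the file path and the comma-separated line format are assumptions for illustration): read the file line by line and carry only the running aggregate in memory, never the whole file.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class BigFileAggregator {
    public static void main(String[] args) throws IOException {
        // Hypothetical 20 GB line-oriented file; only one line is held at a time.
        Path bigFile = Path.of("/data/big-input.csv");

        Map<String, Long> countsByKey = new HashMap<>();
        try (BufferedReader reader = Files.newBufferedReader(bigFile)) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Aggregate incrementally instead of collecting all lines first.
                String key = line.split(",", 2)[0];
                countsByKey.merge(key, 1L, Long::sum);
            }
        }
        System.out.println("Distinct keys: " + countsByKey.size());
    }
}

The same chunking idea is really the only answer to question 1 as well: since the full array can never exist in memory, whatever produces its elements has to be consumed in fixed-size pieces, with only the partial result carried forward between pieces.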
1 year ago
Thanks for the details. I will try to use the standard product, as I think doing something in-house will not be practical in the long term.
1 year ago
Hello
I have been asked to design a system that monitors a log file in real time and reports issues, if any (like application failure, a threshold breach for a specific exception, etc.). I know that there are standard tools available for such functionality, like Filewatcher from AWS, but my firm does not want to invest in any tool and has asked me to develop something in-house with some basic features. My languages of choice are Java and shell scripting. Can you please advise what the design approach should be? The challenges I can think of are the following:
1. Actively monitoring the log file: this means running a process in parallel with the application being monitored that constantly reads the log file. I am not sure of the best way to read a log file that is constantly being written to.
2. Passively monitoring the log file: possibly run a program every 30 seconds (a rough sketch of this approach follows below) that does the following:
   2.1. Take a snapshot of the log file.
   2.2. Compare the line count against the previously stored snapshot (taken 30 seconds earlier).
   2.3. Read the contents of the newly added lines and determine whether anything happened. Also possibly maintain state for exception counts in secondary storage.
I am open to suggestions for a better design, and I can also choose Python for this work if any such functionality is easier to implement there.
Please advise.
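A rough sketch of option 2 in Java, assuming a hypothetical log path and a simple "ERROR" keyword check (both invented for illustration): remember the byte offset reached on the previous poll and scan only what has been appended since.

import java.io.IOException;
import java.io.RandomAccessFile;

public class LogPoller {
    private long lastOffset = 0;

    public void poll(String logPath) throws IOException {
        try (RandomAccessFile log = new RandomAccessFile(logPath, "r")) {
            if (log.length() < lastOffset) {
                lastOffset = 0;                    // file was rotated or truncated; start over
            }
            log.seek(lastOffset);
            String line;
            while ((line = log.readLine()) != null) {
                if (line.contains("ERROR")) {
                    System.out.println("Alert: " + line);   // placeholder for real alerting
                }
            }
            lastOffset = log.getFilePointer();     // resume from here on the next poll
        }
    }

    public static void main(String[] args) throws Exception {
        LogPoller poller = new LogPoller();
        while (true) {
            poller.poll("/var/log/myapp/application.log");  // hypothetical path
            Thread.sleep(30_000);                  // poll every 30 seconds
        }
    }
}

Tracking a byte offset rather than a line count avoids re-reading the whole file on every cycle, and resetting the offset when the file shrinks gives at least crude handling of log rotation.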

1 year ago
Hello,
It is a legacy application that does the following:
1. Invokes the stored procedure with some passed-in criteria.
2. Passes the returned data on to another application.

I am not fully aware of the end-to-end flow, and re-engineering the application is definitely not an option as of now (due to the overall complexity of the application plus the effort required for re-engineering, which would need budget approval, etc.).

For this, I am looking for a solution in the existing application.
Hello Everyone,
Thanks a lot for your suggestions. I went ahead with object caching and saw an improvement of around 30% in the application (the application has an uptime of 6 days, starting on Sunday morning and stopping on Saturday morning).

Regards,
6 years ago
Hello,
I have legacy code that uses JDBC for the database connection and calls a procedure that returns a data set. Each row consists of around 30 columns, and when the data set reaches around 300K rows, my program fails with an out-of-memory error.

After some analysis, I found that the current program cannot handle more than 150K records (it runs out of memory). With some basic performance tuning and code cleanup, I pushed this to 160K (meaning the program now fails after fetching 160K rows).

However, I am not able to tune it further. The code is conventional in that it uses a ResultSet object to fetch rows.
I am trying to find a way to nullify a specific ResultSet row once it has been processed, but I did not find anything in the API. I am still looking for anything else that can be done, but I feel that if I could set the current ResultSet position to null after processing it, there should be some further improvement.

Can anyone please suggest whether there is a way to do this? In parallel, I am checking whether there is an alternative using Spring JDBC (but that requires more development effort and some code re-engineering, so getting approval for that approach will be time consuming).
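For comparison, a minimal sketch of the streaming pattern usually suggested for this situation, with a hypothetical connection URL and query standing in for the real procedure call (exact fetch-size behavior varies by JDBC driver): ask the driver for rows in modest batches and process each row as it arrives instead of accumulating all of them in a collection.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LargeResultSetReader {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details and query; the real code would obtain
        // its ResultSet from the existing procedure call just as it does today.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "user", "password");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT * FROM employee_data WHERE region = ?")) {

            stmt.setString(1, "APAC");      // hypothetical bind parameter
            stmt.setFetchSize(500);         // ask the driver for rows in batches of 500
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    processRow(rs);         // handle one row, then let it go
                }
            }
        }
    }

    private static void processRow(ResultSet rs) throws SQLException {
        // Placeholder: write the row out, aggregate it, etc. The key point is
        // never copying all 300K rows into an in-memory List.
        System.out.println(rs.getString(1));
    }
}

A ResultSet normally holds only the current fetch batch; the out-of-memory failures usually come from copying every row into a growing collection, so removing that accumulation tends to matter far more than trying to null out individual rows.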

Regards
Hello,
Thanks for your reply. Object creation takes around 10-20 milliseconds, but the class design will not allow my approach, since there are some instance variables.
6 years ago
Hello,
I have a situation where there are a number of classes to handle different scenarios (for example, downloading a file from some host, creating a new file in a specific location, renaming a file, etc.).

This is the situation:
1. A number of rules run on a near-real-time basis, and corresponding to each rule there is a class.
2. Each class has an execute method (similar to what we do for the Command pattern), and the execute method performs the specific task (for example, rename).
3. The execute method has no global variables. All the details are passed in as arguments, and the method extracts the values it needs from them.
4. There are a number of threads (say 10) that keep running all the rules. While running, based on some runtime parameter, a thread determines which rule to run, creates an object of that rule, and calls its execute method.

As a performance improvement, I am planning to keep a cache of the rule objects (a kind of Singleton pattern, whereby a rule object is created only once and can be used by multiple threads); a rough sketch of this is below. This way there is no need to create a rule object every time one is executed (there are around 20-25 rules, and each rule is executed more than 10,000 times a day, so caching avoids 10,000+ object creations per rule).

Though I don't see any issue with doing this, I am trying to understand whether there is anything more I can do to improve performance. If someone can provide pointers, it would be helpful.
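A minimal sketch of the cache, assuming a hypothetical Rule interface whose execute method takes all its inputs as arguments (so instances hold no mutable state and are safe to share across the worker threads):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical interface matching the description: all inputs arrive as arguments.
interface Rule {
    void execute(Map<String, Object> args);
}

class RenameRule implements Rule {                   // hypothetical example rule
    @Override
    public void execute(Map<String, Object> args) {
        System.out.println("renaming " + args.get("file"));
    }
}

class RuleCache {
    private final ConcurrentHashMap<String, Rule> cache = new ConcurrentHashMap<>();

    // computeIfAbsent creates each rule at most once, even when several of the
    // worker threads ask for the same rule name at the same time.
    Rule get(String ruleName, Supplier<Rule> factory) {
        return cache.computeIfAbsent(ruleName, name -> factory.get());
    }
}

public class RuleCacheDemo {
    public static void main(String[] args) {
        RuleCache cache = new RuleCache();
        Rule rule = cache.get("rename", RenameRule::new);
        rule.execute(Map.of("file", "report.csv"));
        // Later calls with "rename" return the same cached instance.
    }
}

The sharing is only safe because the rules keep no mutable instance state; if any rule gains instance variables later, it has to drop out of the cache or become thread-safe itself.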

Regards,


6 years ago
It's a jar file. Sorry for the typo.
The difference is that there are environment-specific settings in some of the source files as well as in the property file contents.

For example:

ORACLE_HOME is different in each environment, and we have some shell scripts that require this variable.

Please let me know if you need any other details.
6 years ago
Hi,
As of now, I have a Maven setup in which I create one tar file for each environment (dev, QA, or prod). I have a requirement whereby I need to create the tars for all three environments in one go. I have been trying to find the details online but have not had much success so far. Can someone please provide some pointers?

Regards,
6 years ago
I have no intention of calling the Java layer from SQL. The idea is to move the business logic away from the DB side. For this, I am trying to pick the units that can be migrated easily. A UDF and the references to it is the first point I want to start with. For example, if a UDF is called from two PL/SQL routines and the logic in the UDF can be replaced by Java code, then modify the PL/SQL routines to remove the reference to the UDF and write some Java (a kind of util) that mimics the behavior of the UDF. As a second stage, check whether we can replace the PL/SQL routines themselves with Java code. Please let me know if you need any other details.