Virag Saksena

Ranch Hand
since Nov 27, 2005

Recent posts by Virag Saksena

Tanka,
What application server are you using?
Download Auptyma's Java Application Monitor, deploy it, and observe the JVMs in question at 5 PM. What do you see happening? Do you see a lot of requests waiting for a lock? Or are threads stuck in database or network waits?

It will tell you where the bottleneck is happening.

Regards,

Virag
17 years ago
Why don't you try disabling jConsole and running with Auptyma JAM instead? Enable monitoring and see if the heap usage is still going up and down. Since JAM does not create any extra objects in the JVM, you'll be able to figure out whether jConsole was creating the objects.
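
If you just want a quick sanity check without attaching any external client at all, a minimal sketch along these lines (plain java.lang.management, not JAM; the class name and polling interval are made up for illustration) will log heap usage from inside the JVM:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Minimal sketch: log heap usage from inside the JVM so you can watch the
// sawtooth pattern without attaching jConsole. Interval and output are placeholders.
public class HeapWatcher implements Runnable {
    public void run() {
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        while (!Thread.currentThread().isInterrupted()) {
            MemoryUsage heap = memoryBean.getHeapMemoryUsage();
            System.out.println("heap used=" + heap.getUsed()
                    + " committed=" + heap.getCommitted()
                    + " max=" + heap.getMax());
            try {
                Thread.sleep(5000);   // poll every 5 seconds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        new Thread(new HeapWatcher(), "heap-watcher").start();
    }
}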
17 years ago
Best case scenario ...
you read the 3200 bytes into jchars (2 bytes each) = 6400 bytes
you bind it to a VARCHAR2/CHAR bind value = another 6400 bytes
Assume another 1K for other cursor-related structures
You consume < 16K/message

Worst case scenario ...
you'll parse about 400-800 fields/message (4-8 chars per field)
Each field might go into an object with an average size of 24-80 bytes
You end up with an additional overhead of about 64K/message.

Assuming you are committing data with a batch size of 20-100, you'll be caching 20-100 rows at any given time. Depending on the type of bind variables and their size, you'll need to scale the estimate up further.

Of course this is an absurd answer to an absurd question, because there will surely be other activity happening besides message processing. If you are currently looking at adding some functionality to an existing server and want to estimate the additional memory overhead, this will give you an idea. The best thing to do is to get a similar system and run a tool like JAM to get the size and number of objects involved.
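
For illustration only, here is the back-of-the-envelope arithmetic above as a small Java sketch; the message size, field counts and per-field overheads are the assumed figures from this post, not measurements:

// Rough estimate of per-message memory using the assumptions above.
public class MessageMemoryEstimate {
    public static void main(String[] args) {
        int messageBytes = 3200;

        // Best case: the message as jchars (2 bytes each), plus a copy bound
        // to a VARCHAR2/CHAR bind value, plus ~1K of cursor-related overhead.
        long bestCase = (messageBytes * 2L)   // 6400 bytes as jchars
                      + (messageBytes * 2L)   // 6400 bytes for the bind value
                      + 1024;                 // misc cursor-related overhead
        System.out.println("Best case  ~" + bestCase + " bytes/message (< 16K)");

        // Worst case: up to ~800 parsed fields at up to ~80 bytes each adds
        // roughly another 64K on top of the best case.
        long worstCaseExtra = 800L * 80L;
        System.out.println("Worst case ~" + (bestCase + worstCaseExtra) + " bytes/message");

        // With a commit batch of 20-100 messages cached at once (upper bound shown):
        int batchSize = 100;
        System.out.println("Batch of " + batchSize + " ~"
                + (batchSize * (bestCase + worstCaseExtra) / 1024) + " KB");
    }
}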
17 years ago
Dave,
With good design, you should have a clearly identified natural primary key. However, finding a good natural key (one which will never change) is non-trivial, so people often use a generated key instead.

The advantage of using a generated ID is that you can change any of the columns without having to cascade the update to the children.

The disadvantage is performance: if you keep ZIP code as a column in the original table (rather than a foreign key), then queries like
zip code between 12345 and 12378
can be efficiently processed using an index.

The ? is the bind variable/query parameter. It does work in Oracle and PostgreSQL. What exception are you getting?
#1. If you are not doing any validation on the zip code or storing any dependent fields like city/state, you don't really need to move it to another table. Let it be an attribute of the table.

#2. There is a big debate about surrogate keys: why not let the ZIP code be the primary key of the zipcodes table? It is a natural primary key, so why generate an artificial one?

#3. If you still want an auto-generated primary key, I am assuming your table will have the columns SOME_PK and ZIP_CODE,

so here is some sample code; take care to close your PreparedStatements and ResultSets in a finally block and to handle exceptions properly.
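
A minimal sketch along those lines (the ZIPCODES table name and the generated-key handling are assumptions for illustration; on Oracle you might populate SOME_PK from a sequence instead) could look like this:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ZipCodeDao {

    // Inserts a zip code and returns the generated SOME_PK value, closing the
    // PreparedStatement and ResultSet in a finally block as recommended above.
    public long insertZip(Connection conn, String zipCode) throws SQLException {
        PreparedStatement ps = null;
        ResultSet keys = null;
        try {
            ps = conn.prepareStatement(
                    "INSERT INTO ZIPCODES (ZIP_CODE) VALUES (?)",
                    new String[] { "SOME_PK" });   // ask the driver to return SOME_PK
            ps.setString(1, zipCode);              // the ? bind variable
            ps.executeUpdate();
            keys = ps.getGeneratedKeys();
            if (keys.next()) {
                return keys.getLong(1);
            }
            throw new SQLException("No generated key returned");
        } finally {
            if (keys != null) try { keys.close(); } catch (SQLException ignore) { }
            if (ps != null) try { ps.close(); } catch (SQLException ignore) { }
        }
    }
}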
Even without active web users, you can have other threads in the JVM creating temporary objects. As long as heap usage comes back down to a reasonable level after a GC, you should not worry.
17 years ago
Can you post the results of executing this query?
This is a very wide-scope question... I guess what you could start with is how much additional memory your request processing will take.

1. If you look at the memory consumed per request, it should be a multiple of your message size.
2. If you decide to parse the message into tokens, then you'll end up multiplying your memory footprint
3. If you cache messages, then multiply the memory by the number of messages

These are only rough approximations. A good thing to do would be to run a tool like Auptyma's JAM on your test system and do some memory analysis, so you can see the amount of memory being used by different objects, caches and threads.

Regards,

Virag
17 years ago
When designing for performance, you need to consider not only the insert/update requirement but also the query requirements

What people often forget is that 99% of the time you'll be querying the data, and less than 1% of the time you'll be inserting or updating it.

The approach outlined by Ilja is a good normalized design which eliminates repeated groups and will work with any number of parameters.

However, very often you'll want to run a query which says: give me all widgets with colour = red, visibility = true and nested = false.

Now if you have a fully normalized schema, you'll have to use correlated subqueries (with an EXISTS clause) or IN clauses, and either way query optimization can be a challenge. If you have a limited number of attributes, you can put them as columns in the same table, so you end up with a widgets table that carries COLOUR, VISIBILITY and NESTED directly (see the sketch at the end of this post).
If you don't know what additional attributes you might have, create holder attributes ATTRIBUTE1, ATTRIBUTE2, ... (a common practice in packaged apps where user-defined attributes are needed), and create or update a view which maps the attributes to meaningful names.

Now you can run a simple query directly against those attribute columns (again, see the sketch below).
With the right indexes on your common attribute choices, you can have fast queries.
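
As a rough illustration of that layout (table, column and index names are invented for this sketch, and visibility/nested are modelled as Y/N flags):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class WidgetQueries {

    // Assumed denormalized layout (illustrative only):
    //   CREATE TABLE WIDGETS (
    //       WIDGET_ID  NUMBER PRIMARY KEY,
    //       NAME       VARCHAR2(100),
    //       COLOUR     VARCHAR2(20),
    //       VISIBILITY CHAR(1),
    //       NESTED     CHAR(1)
    //   );
    // With an index on (COLOUR, VISIBILITY, NESTED) the query below avoids
    // correlated subqueries and IN lists entirely.
    public void findRedVisibleNotNested(Connection conn) throws SQLException {
        String sql = "SELECT WIDGET_ID, NAME FROM WIDGETS "
                   + "WHERE COLOUR = ? AND VISIBILITY = ? AND NESTED = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "red");
            ps.setString(2, "Y");   // visibility = true
            ps.setString(3, "N");   // nested = false
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + " " + rs.getString(2));
                }
            }
        }
    }
}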
17 years ago
If you are trying to track down a memory leak on a high-transaction system where you can't afford the overhead of instrumentation or JVMTI/JVMPI, you can try Auptyma's Java Application Monitor from http://www.auptyma.com

It takes a snapshot of your heap and loads it into a built-in database to provide you with a tree representation of memory reachability.

It has been successfully used by quite a few customers to find memory leaks on their production systems where they could not afford the overhead of other tools (because sometimes you want to find and fix problems before they crash your system).

Regards,

Virag
17 years ago
Yes, since the instances a, b, c, d and e are children of instance xyz, they'll be eligible for garbage collection when xyz is orphaned by going out of scope (as long as they have no external references, as in your example).

However they might not get collected until a full GC happens if ...

1. they end up in the tenured/old generation. In this case, if xyz is in the young generation it will get collected in a minor GC, but the other instances (even though they are garbage) will not be collected until a full GC.

2. they are in the young generation, but xyz is in the tenured/old generation. xyz will not get collected until a full GC (because it is in the old generation), and since a, b, ... are referenced from the old generation, they will also not get collected until a full GC happens.
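
To make the reachability part concrete, here is a small made-up example (class and field names are invented); once the only reference to xyz is dropped, a through e become garbage as well, and which collection actually reclaims them depends on the generations described above:

public class ReachabilityExample {

    static class Child { }

    static class Parent {
        Child a = new Child();
        Child b = new Child();
        Child c = new Child();
        Child d = new Child();
        Child e = new Child();
    }

    public static void main(String[] args) {
        Parent xyz = new Parent();   // xyz holds the only references to a..e
        // ... use xyz ...
        xyz = null;                  // xyz is unreachable, so a..e are garbage too
        System.gc();                 // only a hint; a full GC is not guaranteed
    }
}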
17 years ago
<BEGIN RANT>
I disagree. You should find and tune the actual bottlenecks, rather than using a general rule of thumb which might not even apply in your specific case.

There is no substitute for root cause analysis.
<END RANT>

Regards,

Virag
The Peak of Performance
17 years ago
Steve,
Thanks for the clarification. In the development environment you'll probably run in single-user mode and can afford to take the hit of JVMPI/JVMTI. Something which integrates well with your IDE, like the NetBeans IDE Profiler, would work well.

When running in a test environment, the overhead of JVMPI/AOP-based profilers becomes significant, especially when running a multi-user load test. If you are willing to live with 20-30% overhead, you can try bytecode instrumentation. For true scalability tests, where you want the overhead of the analysis to be below 1%, you could consider Auptyma's Java Application Monitor.

With bytecode instrumentation you can get the overhead lower, but you'll need experts focused solely on optimizing the instrumentation points without sacrificing visibility.

Hope that helps.
17 years ago
Steve,
Are you looking to do a first-cut optimization in a development environment, or looking for visibility into a high-transaction production environment?

Regards,

Virag

The Peak of Performance
17 years ago
You should be able to register and download a version from http://www.auptyma.com
17 years ago