Jenny and bind variables

 
Trailboss
Posts: 23780
Currently, Jenny uses bind variables only for the fields where the developer specifies that's what they want.
It has been suggested that I should make the default be to use bind variables.
What do you guys think?
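
To make the choice concrete, here is the difference between the two styles of SQL a generator could emit (the table and column names here are made up for illustration):

```java
public class BindDemo {
    // Inline-literal style: the value is baked into the SQL text, so quotes
    // must be escaped by hand and every distinct value yields distinct SQL.
    static String literalSql(String name) {
        return "SELECT id FROM users WHERE name = '" + name.replace("'", "''") + "'";
    }

    // Bind-variable style: the SQL text is constant; the value would be
    // supplied separately with PreparedStatement.setString(1, name).
    static String boundSql() {
        return "SELECT id FROM users WHERE name = ?";
    }

    public static void main(String[] args) {
        System.out.println(literalSql("O'Brien")); // SELECT id FROM users WHERE name = 'O''Brien'
        System.out.println(boundSql());            // SELECT id FROM users WHERE name = ?
    }
}
```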
 
Ranch Hand
Posts: 1923
Who or what is Jenny?
 
author & internet detective
Posts: 41878
Stefan,
Jenny is a db code generator.
Paul, what would be the advantage of always using them by default?
 
paul wheaton
Trailboss
Posts: 23780
I've been told that if you use bind variables for everything, Oracle can do some optimizations that it cannot do otherwise. An interesting point. I can't come up with a substantial reason not to make them the default, other than that inlined values look more like 'normal' SQL (possibly easier to read?).
 
Ranch Hand
Posts: 54
Hi guys,

Not based on any practical experience, but reasoning purely theoretically: using bind variables makes it impossible to take the detailed distribution of values in a given column into account when building the execution plan. That distribution is one of the main advantages claimed for Oracle's Cost Based Optimizer.

I think you'll find an advantage in using bind variables in the lion's share of cases anyway. It's high on my list of interesting points to investigate, even without a business justification.

Good riding,

Rudy.
 
Ranch Hand
Posts: 156
I suppose if you've got very unevenly distributed data, there could be some advantage to making a new query plan every time. But more usually there's greater gain to be had by re-using old query plans, and you get more re-use if you use bind variables.
 
Sheriff
Posts: 3341
I have heard the hoopla about the performance of PreparedStatements over Statements but have never seen any benchmarks showing the enhancement. If someone has documents they can point me at, it would be appreciated. My own limited tests against an Oracle database showed virtually no difference in times between the two.

However, having said that, PreparedStatements make some things easier: they remove the need to escape special characters in Varchar2 data, there's no need to convert Dates and Timestamps to Strings and use a database function to convert them in the SQL, and they make it easy to insert and update binary data.

Reading PreparedStatements raw can be tricky; it is easy to have the ?s run together. However, if the PreparedStatement is logged after all the bind variables are set, it is fairly easy to read, depending on the JDBC driver. Oracle's driver puts [<position number>, <value>] beside each ?.
 
Sheriff
Posts: 67747
I use PreparedStatement almost exclusively, and always when there are parameters to bind.
 
ranger
Posts: 17347
The reason you should use bind variables is that they will be faster. Here is why.

When Oracle receives a query, it looks in the Library Cache in the SGA, its in-memory shared area. It tries to find an exact copy of the statement; if it does, it can reuse the already-built execution plan. If it does not find an exact copy, it has to parse the query and create an execution plan, which takes a small amount of time; this is called a hard parse. An exact copy means exact, down to the count of spaces.

By using bind variables, Oracle can keep one copy of the statement in memory and reuse its execution plan every time. Whatever values you put in the bind variables, it will use that plan without doing another hard parse.
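
You can see the library-cache effect without a database by counting the distinct statement texts each style produces (the SQL here is made up):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.IntFunction;

public class CacheDemo {
    // Statement text with the value inlined: different for every id.
    static String literal(int id) {
        return "SELECT * FROM orders WHERE id = " + id;
    }

    // Statement text with a bind variable: identical for every id.
    static String bound(int id) {
        return "SELECT * FROM orders WHERE id = ?";
    }

    // Count how many distinct texts the shared pool would have to hard-parse.
    static int distinctTexts(IntFunction<String> style, int... ids) {
        Set<String> texts = new HashSet<>();
        for (int id : ids) {
            texts.add(style.apply(id));
        }
        return texts.size();
    }

    public static void main(String[] args) {
        System.out.println(distinctTexts(CacheDemo::literal, 1, 2, 3, 4, 5)); // 5 -> five hard parses
        System.out.println(distinctTexts(CacheDemo::bound, 1, 2, 3, 4, 5));   // 1 -> one parse, plan reused
    }
}
```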

In his book "Expert One-on-One Oracle", Thomas Kyte specifically states that you should use bind variables all the time. Mr. Kyte is a VP at Oracle and the answer guy at Oracle's Ask Tom website, which is a must-have bookmark for any database/query person using Oracle. He has all the answers.

Good luck

Mark
 
Ranch Hand
Posts: 2713
From a developer perspective, PreparedStatements are much nicer and make portability between databases that much easier, since the JDBC driver takes care of character escaping, date formatting, and other such mundane details. So even ignoring performance characteristics, it makes a whole lot of sense to go with PreparedStatements. Like Bear, I use them exclusively on those occasions when I have to drop down and write JDBC code (which happens less and less frequently).
 
Saloon Keeper
Posts: 27807
How much performance boost you get from PreparedStatements depends on the DBMS and driver. I've seen some systems where all the PreparedStatement handler did was convert the incoming data into plain old SQL internally, but at the other extreme you might get all sorts of optimizations. Mileage may vary depending on driver (urp!) and road conditions.

Like Mark said, the extra portability and elimination of the need to throw together Yet Another Set of SQL-Escape Utilities is worth something in and of itself.

Of course, the more intense the optimization is, the more overhead setting up a PreparedStatement can incur. I've seen a particular problem in webapps, where I'd prepare a statement, use it in a request, then have to throw it away because I returned the connection it was based on to the pool.

Lately, I've been playing with JDO. One of the neat things about the Kodo implementation (hey, do I get "plug money" for that? ) is that they CACHE their prepared statements and the connections are managed external to the in-and-out-flow of the web requests, which can potentially make things flow more efficiently.

Bottom line: Do your homework and see which tools match your project's needs. And, of course, keep things abstract in case you need to tune later!
 
Chris Mathews
Ranch Hand
Posts: 2713

Originally posted by Tim Holloway:
Lately, I've been playing with JDO. One of the neat things about the Kodo implementation (hey, do I get "plug money" for that? ) is that they CACHE their prepared statements and the connections are managed external to the in-and-out-flow of the web requests, which can potentially make things flow more efficiently.


Actually, many Application Servers and other Connection Pooling products do this as well. See this thread for a recent discussion.
 
paul wheaton
Trailboss
Posts: 23780
What about when things go wrong? An exception is thrown. Currently, the SQL is dumped and you can try the SQL in a SQL client.

Could this be a case of Oracle optimization (and do you really gain that much here? It seems that the lookup and the network call would far outweigh whatever you gained) vs. error-handling readability?
 
Wanderer
Posts: 18671
Originally posted by paul wheaton:
What about when things go wrong? An exception is thrown. Currently, the SQL is dumped and you can try the SQL in a SQL client.

Naked PreparedStatements are generally harder to debug. JavaWorld addressed this problem in an article a while ago, wrapping the PreparedStatement in a decorator to provide additional debugging info. Seems like their approach could be applied in Jenny.

Originally posted by paul wheaton:
Could this be a case of Oracle optimization (and do you really gain that much here? It seems that the lookup and the network call would far outweigh whatever you gained) vs. error-handling readability?

I don't really know (and would be interested in hearing from anyone who does know). But I'd think that the lookup at least could be a relatively simple hash table lookup. The network call seems the more significant factor here. So I guess the question is, how long does it take to build a query plan (compared to network overhead)? Dunno.
[ May 27, 2004: Message edited by: Jim Yingst ]
 
Chris Mathews
Ranch Hand
Posts: 2713

Originally posted by Jim Yingst:
Naked PreparedStatements are generally harder to debug. JavaWorld addressed this problem in an article awhile ago, wrapping the PreparedStatement in a decorator to provide additional debugging info. Seems like their approach could be applied in Jenny.


I actually implemented this using Dynamic Proxies (so it would work with any version of JDBC) if anyone is interested... maybe something we could add to our JavaRanch class library.
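
Roughly, the idea looks like this (class and method names here are illustrative, not my actual implementation): an InvocationHandler intercepts every setXxx(index, value) call, records the binding, and forwards to the real statement.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;
import java.sql.PreparedStatement;
import java.util.Map;

public class LoggingStatementProxy {

    /**
     * Wrap a PreparedStatement so every setXxx(index, value) call is
     * recorded in 'bindings' before being forwarded to the real statement.
     */
    public static PreparedStatement wrap(PreparedStatement target,
                                         Map<Integer, Object> bindings) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().startsWith("set")
                    && args != null && args.length >= 2
                    && args[0] instanceof Integer) {
                bindings.put((Integer) args[0], args[1]);  // remember the bound value
            }
            try {
                return method.invoke(target, args);        // forward to the real statement
            } catch (InvocationTargetException e) {
                throw e.getCause();                        // unwrap the original exception
            }
        };
        return (PreparedStatement) Proxy.newProxyInstance(
                PreparedStatement.class.getClassLoader(),
                new Class<?>[]{PreparedStatement.class},
                handler);
    }

    /** Render "sql {1=..., 2=...}" for logging when something goes wrong. */
    public static String describe(String sql, Map<Integer, Object> bindings) {
        return sql + " " + bindings;
    }
}
```

Logging describe(sql, bindings) when an exception is thrown gets you something close to runnable SQL for debugging.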
 
Sheriff
Posts: 9109
Is your stuff available, Chris?
 
Greenhorn
Posts: 23
Pros and cons of using "bind variables" (PreparedStatements in JDBC):

When executing a PreparedStatement, drivers will usually "prepare" the statement itself first (either by making a server call, or within the driver), then pass just the data in subsequent calls, already encoded in the proper datatype. Normal SQL statements must be repeatedly parsed, and the data types converted on each call.

So if the same call is going to be made repeatedly, the PreparedStatement should be faster.

In practice, when using connection pooling, you can't reuse a prepared statement yourself, since you aren't keeping a connection open for more than one request. However, in app server containers (WebLogic, for example) there are options for caching prepared statements at the container level, so this restores the performance gains.
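
That container-level cache amounts to something like this deliberately naive sketch (real pools also handle sizing, eviction, and per-connection scoping):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

/**
 * Naive sketch of a pool-level statement cache: key the prepared
 * statement by its SQL text so repeat requests skip the prepare step.
 */
public class StatementCache {
    private final Connection conn;
    private final Map<String, PreparedStatement> cache = new HashMap<>();

    public StatementCache(Connection conn) {
        this.conn = conn;
    }

    public PreparedStatement prepare(String sql) throws SQLException {
        PreparedStatement ps = cache.get(sql);
        if (ps == null) {                      // miss: pay the prepare cost once
            ps = conn.prepareStatement(sql);
            cache.put(sql, ps);
        }
        return ps;                             // hit: reuse without re-preparing
    }
}
```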

If for no other reason, as stated above, it does free you from all of the syntactic issues associated with creating valid SQL, escaping quotes, etc.

It's a good investment, but you will need to account for database and driver differences. For example, if you're trying to bind decimal values, bits, booleans, etc., Oracle drivers might behave slightly differently from SQL Server drivers. Unfortunately, that's exactly the complexity you were trying to avoid in the first place (or at least hide from developers).

This is why some people use Hibernate, TopLink, EJB CMP, and other ORM solutions: they have already accounted for variations in SQL syntax, datatypes, etc.

So my advice is to try it out on a few databases before committing to the internal implementation choice. Watch out for "odd" datatypes like big decimal, byte, bit, and boolean. I assume BLOB support is out of scope for Jenny.
 