Hello David & Craig,

Having read about half of the first, and only, chapter available (still hoping to win the rest), I would take very slight objection to the characterization that "most" Java programmers, left on their own, would fall into procedural coding when faced with the large, but not so indomitable, wall of object-relational mapping. I think you could have said beginning programmers or less-experienced programmers. But that is just a nit.

So JDO basically takes the whole persistence mechanism and makes it something that is done to the objects to be persisted, instead of the objects doing it to themselves. When the persistence interface was first mentioned, I thought, OK, so all classes will have to implement PersistenceInterface. Not so: it is all done behind the scenes at compile time. This seems similar to how "static SQL" (meaning pre-compiled SQL) was done with other, earlier languages.

It would be fun to sit and play with this to see if you truly can have objects that could persist to:
1. A filestore
2. An XML-based filestore
3. A relational DB
all just by changing the implementation properties.

Thanks for the food for thought. I can see why some of the members of the "Expert Group" were those with experience in object databases or object datastores.

Cheers,
Mike
In my own surveys when I give talks, 95% of the people in the room do NOT define an object model for their persistent data when they use JDBC. It is a major development undertaking to define the object model AND implement the mapping. Under tight schedule pressure, they end up just writing Java classes that represent transactions and writing code directly to JDBC. That is what is meant in Chapter 1. This is to be expected, because there is a considerable amount of work to map classes, relationships among classes, etc. to the database via JDBC.

I would be very interested in hearing from others reading this forum whether they have done their own mapping of their Java object models to the database using JDBC, with references, collections, inheritance, private data members, etc. It is a LOT of work, work that is not part of the actual application being developed. With JDO, all this work is done for you.

I have not seen any XML-based JDO implementations yet. But I have some fairly involved JDO application software that has been run using two object databases, three JDO/relational databases, and the JDO reference implementation, and the only things that were different were the command to enhance the classes and the properties file, which contained things like the name of the PersistenceManagerFactory class. The same code works across all these different database architectures.

There is no smoke and mirrors here. The interfaces that make all this happen are the PersistenceCapable and StateManager interfaces, but these are interfaces you as a user of JDO do not need to be concerned with. The enhancer adds code directly to your persistent classes, which allows everything to happen with such a high degree of transparency. So yes, a lot of work happens at build time with the enhancement process. This includes generating some optimized metadata that is MUCH faster than using reflection.
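To make the "only the properties file changed" point concrete, here is a rough sketch of what such a file might look like. The javax.jdo.PersistenceManagerFactoryClass and javax.jdo.option.* keys are the standard JDO bootstrap properties, but the vendor class name and connection URL below are hypothetical placeholders, not real products:

```properties
# Standard JDO bootstrap property: names the vendor's factory class.
# Swapping this line (plus the connection settings) is, in principle,
# all it takes to move the same application between datastores.
javax.jdo.PersistenceManagerFactoryClass=com.example.SomeVendorPMF
# Hypothetical connection settings for one particular datastore:
javax.jdo.option.ConnectionURL=jdbc:somevendor://localhost/mydb
javax.jdo.option.ConnectionUserName=app
javax.jdo.option.ConnectionPassword=secret
```

JDOHelper reads the factory class name from this file and instantiates it reflectively, which is why the application code itself never mentions a vendor class.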
There are some characteristics of this that could be considered similar to static SQL. The emphasis here, though, is on adding standard, vendor-neutral interfaces and code to your classes, and on providing a means for a StateManager object specific to a JDO implementation to peek and poke at your object's internal state in a secure way.
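David notes that the generated metadata and accessors are much faster than reflection. As a rough, stdlib-only illustration of the difference (this is not JDO code; the Point class, its field, and the jdoGetX accessor name are all made up to mimic the style of enhancer-generated code):

```java
import java.lang.reflect.Field;

public class AccessDemo {
    // A stand-in persistent class (hypothetical; not a real JDO example).
    static class Point {
        int x;
        Point(int x) { this.x = x; }
        // The kind of accessor an enhancer would generate directly into
        // the class, so state can be read without any reflection.
        int jdoGetX() { return x; }
    }

    public static void main(String[] args) throws Exception {
        Point p = new Point(42);

        // Reflective access: field lookup, access checks, per-call overhead.
        Field f = Point.class.getDeclaredField("x");
        f.setAccessible(true);
        int viaReflection = f.getInt(p);

        // Direct (enhancer-style) access: a plain method call the JIT can inline.
        int viaAccessor = p.jdoGetX();

        System.out.println(viaReflection + " " + viaAccessor); // prints "42 42"
    }
}
```

Both reads return the same value; the generated-accessor path simply skips the reflective machinery on every access, which is where the speed difference comes from.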
David is quite correct with regard to performance pressures on corporate "projects". A while back I was going to implement the "perfect" object-to-JDBC interface for the rest of the guys on the project, and 'boy' what a huge handful that turned out to be. Few managers today manage by dollar profiles directly, but rather by performance relative to expected schedule. So some 'free' nights and weekends could compensate, and no one would ever know....

In my case, however, it was the raw performance hit that I was taking simply for the sake of formalism that caused me to abandon the effort. After I had some core classes up, I started to benchmark against direct JDBC implementations, and I concluded that, at that time, even with the JDBC drivers tightly coupled into the Sybase TDS stream, I would take a lot of gas if I continued to layer in objects in a very time-sensitive application (radar). So my perfect interface degenerated quickly into a lib of basically container classes with very efficient methods that produced and consumed arrays, ArrayList, Vector, etc. objects as required by the application. The lib was only interesting in that it didn't care where in the world the Sybase target existed, as long as connectivity existed. Even then, some low-volume items (user prefs, etc.) were cached until the links were restored.

As I'm currently up to my eyeballs in .NET for a particular customer, I drop by the "ranch" every now and then to see what's happening with my first love. I guess I'm going to have to jump in and get my feet wet all over again, this time with JDO, to get a feel for what's involved, the benefits, and the bummers. Dave, thanks for all the time and thoughtful input I see you have put into the ranch. I'll go buy your book.

- Dan
"I would be very interested in hearing from others reading this forum whether they have done their own mapping of their Java object models to the database using JDBC"

I just finished a small web application, where we had only a simple servlet container to deploy to, in which we did this very thing. Developing this layer was a lot of work; I'd guess that half of our development time in the first 1.5 to 2 iterations was spent coding it.

It was worth it. It was really nice to be able to just send an object to the persistence manager (as we called it) for storage, or to get one out of storage, using simple OO messages. Real JDO is better, though; we'd probably have saved many tens of hours with it.
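For readers who haven't built such a layer, the "send an object to the persistence manager" style described above might look roughly like this facade. This is a stand-in sketch, not the poster's actual code: the real layer sat on JDBC, but here an in-memory map plays the role of the database table so the example is self-contained, and all class and method names are invented (deliberately echoing JDO's makePersistent/getObjectById vocabulary):

```java
import java.util.HashMap;
import java.util.Map;

public class PersistenceDemo {
    // Invented example domain class.
    static class Customer {
        final String id;
        String name;
        Customer(String id, String name) { this.id = id; this.name = name; }
    }

    // A minimal hand-rolled "persistence manager" facade. In a real JDBC
    // version each method would build SQL and run it through a Connection;
    // here a map stands in for the database so the sketch runs anywhere.
    static class PersistenceManager {
        private final Map<String, Customer> store = new HashMap<>();

        void makePersistent(Customer c) { store.put(c.id, c); }    // INSERT/UPDATE
        Customer getObjectById(String id) { return store.get(id); } // SELECT by key
        void deletePersistent(String id) { store.remove(id); }      // DELETE
    }

    public static void main(String[] args) {
        PersistenceManager pm = new PersistenceManager();
        pm.makePersistent(new Customer("c1", "Ada"));
        System.out.println(pm.getObjectById("c1").name); // prints "Ada"
    }
}
```

The appeal is exactly what the poster describes: application code deals in objects and simple messages, never in result sets. JDO offers the same message style but also generates the mapping underneath, which is where the saved hours come from.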