JavaRanch » Java Forums » Java » EJB and other Java EE Technologies
Architectural Evolution with Entity Beans

David Harkness
Ranch Hand

Joined: Aug 07, 2003
Posts: 1646
Hello all,
I have taken a small but complex portion of my domain and am trying to evolve a better architecture. The first step went nicely, but the second step is getting out of hand, and I was hoping to get a little advice. I've reached the point where I suspect I've turned down a blind alley but can't tell yet.
The first design was very simplistic: entity beans simply map to database rows with no functionality (Entity Beans as Data Gateways), with session beans performing the business logic and transaction demarcation. The main business method ended up being several pages long and involved a few queries and a couple of inserts/updates. Since the domain has to handle some fairly involved logic (mimicking singleton behavior within a cluster), and since the first design was only a prototype, I redesigned it.
The second design follows Entity Beans as Domain Objects: all domain logic was moved from the session beans to the entity beans. Now the huge procedural method above has been broken apart into several small, reusable methods living in the appropriate entity beans. This works fairly well, except that it will not work completely in a clustered environment. So again I hit the drawing board.
The third and current design is much more involved. I created POJO domain objects and domain stores (find/load entity beans and create domain objects from them). For each domain object there are six classes. For example, the main object Source has
Source [intf], BaseSource, and EjbSource
SourceStore [intf], BaseSourceStore, and EjbSourceStore
The base domain objects perform as much EJB-independent business logic as possible. The Ejb<foo> subclasses handle transactions and mirror the logic in the entity beans.
I built the domain such that -- within a single cluster node -- there is exactly one instance of any single domain object. Work is performed by synchronizing access to the domain POJOs (via Syncs from Doug Lea's concurrent package), starting a transaction, modifying the entity beans, committing the transaction, and if it succeeds, performing similar logic on the domain objects themselves.
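To make the sequence concrete, here is a plain-Java sketch of that lock-then-commit-then-mirror pattern. All names (SourceState, TicketService, PersistenceTx) are illustrative stand-ins; the real code would use Doug Lea's Sync classes and a JTA transaction over the entity beans.

```java
// Sketch of the "synchronize, write DB, then mirror in memory" pattern.
// PersistenceTx stands in for a JTA transaction over the entity beans.
class SourceState {
    private int ticketsIssued;
    int getTicketsIssued() { return ticketsIssued; }

    // Called only while the caller holds this object's lock.
    void applyIssue() { ticketsIssued++; }
}

class TicketService {
    private final SourceState state = new SourceState();

    SourceState getState() { return state; }

    // Acquire the in-memory lock first, commit to persistence,
    // and only on success mirror the change into the domain POJO.
    void issueTicket(PersistenceTx tx) {
        synchronized (state) {
            tx.begin();
            try {
                tx.writeIssue();   // entity-bean update in the real system
                tx.commit();
            } catch (RuntimeException e) {
                tx.rollback();
                throw e;           // domain object left untouched
            }
            state.applyIssue();    // mirror the committed change
        }
    }
}

// Minimal stand-in for the persistence layer's transaction.
class PersistenceTx {
    boolean failOnWrite;
    boolean committed;
    void begin() {}
    void writeIssue() { if (failOnWrite) throw new RuntimeException("db error"); }
    void commit() { committed = true; }
    void rollback() {}
}
```

Note the asymmetry this creates: a failed commit leaves the POJO untouched, but nothing rolls the POJO back if a later in-memory step fails, which is exactly the consistency worry raised below.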
Now, there are patterns for mapping domain objects to entity beans and vice versa. However, if a problem occurs during the write to the persistence layer, the domain objects will then be in an inconsistent state. As well, without locking the domain objects, how do you let two threads see different values (and much more difficult, different *relationships*) since each is in their own transaction?
Am I barking up the wrong tree? Or is this a good design if I have enough time or tools to build out the same services the container provides for entity beans? I'm at the point now where time is short, and I need to decide whether I should go back to the second design or find a better pattern. Either way, I think this new design is going to fall flat or be much too complicated to finish in time or support later.
I've been reading like mad (books, patterns, etc), but so far I haven't found a complete and compelling architecture that uses entity beans -- only various pieces to the puzzle to be used along the way. I would appreciate any help or pointers or even more things to read.
Thank you in advance. Here's hoping I've got some karma to cash in!
David Harkness
Ranch Hand

Joined: Aug 07, 2003
Posts: 1646
If you read that whole post, maybe a brief explanation of the domain will help.
The domain manages a few very large (1 million or more) sets of Tickets grouped into Sources. To use a Source, a User must first be issued a permanent Ticket from that Source. New Tickets will be imported into the system while it is live. To allow for this, Tickets are grouped into Blocks so Block A can be issuing Tickets while Block B is importing new ones.
I find metaphors help a lot. The system manages several ski resorts (Sources). To enter a resort, you must have or purchase a season pass (Ticket), which you get to keep forever. Throughout the season, more passes are created to handle more visitors. Each resort has multiple sales points, each with multiple lines, and must be able to sell passes from each one. Thus each sales point is given a few Blocks of passes.
While it seems quite simple and could be handled with a simple sequence table, it is complicated by the performance requirements. It is expected that we'll need to issue several hundred thousand tickets in the first week. If we assume a linear distribution (best case but not reality), that's 25-100 transactions per minute. The bigger issue, as usual, is dealing with running within a clustered environment.
My expectation is that I cannot depend on the database to handle concurrency and still meet the performance goals. We're using WebLogic Server 7.0sp4 and Oracle 9.2.0.1 running on Solaris and have configured the other entity beans to use optimistic concurrency with a version number column. My fear is that if I depend on simply retrying the operation until it succeeds, there will be too much contention on Blocks and the transaction time will increase too much.
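For reference, the retry-on-conflict behaviour being worried about here looks roughly like this sketch. The in-memory BlockRow stands in for the Oracle table, and all names are illustrative; the point is that every conflict costs a full re-read and re-update.

```java
// Sketch of optimistic concurrency with a version column: the update
// succeeds only if the row's version still matches the one we read.
class BlockRow {
    int version;
    int nextTicket;
}

class OptimisticIssuer {
    private final BlockRow row = new BlockRow();

    // Mimics "UPDATE block SET next_ticket = ?, version = version + 1
    //         WHERE id = ? AND version = ?" -- returns false on conflict.
    private synchronized boolean tryUpdate(int expectedVersion, int newNext) {
        if (row.version != expectedVersion) return false;
        row.nextTicket = newNext;
        row.version++;
        return true;
    }

    private synchronized int[] read() {
        return new int[] { row.version, row.nextTicket };
    }

    // Retry until the versioned update wins; under heavy contention on a
    // single Block, this loop is where transaction time would balloon.
    int issueTicket() {
        while (true) {
            int[] snapshot = read();
            int ticket = snapshot[1];
            if (tryUpdate(snapshot[0], ticket + 1)) return ticket;
        }
    }
}
```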
Again, thanks to all who take the time to even read this. I'd appreciate any thoughts on this, and I'm sure others would find the discussion useful.
Stan James
(instanceof Sidekick)
Ranch Hand

Joined: Jan 29, 2003
Posts: 8791
Hmmm, the second post sounded like a problem generating keys? I've seen several systems with a key vendor. I ask it for "n" keys, it returns the current sequence number and increments current by "n". I can increment the number it gives me very quickly in memory up to "n" times before I have to go back for more keys. If I crash or shut down before using all "n" keys, some are never used, so the sequence number has to be big enough to allow for lots of missing keys.
You can have multi-level key vendors that concatenate sequence numbers together. I'm having trouble making up an example for that right now. Scott Ambler had a paper on this a few years ago - you might try searching some of his sites like http://www.agiledata.org/
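A minimal version of the key-vendor idea described above might look like this. The SequenceStore stands in for the database sequence table, and the names are made up; the point is that only one synchronized round trip is needed per block of "n" keys, and keys left unused at shutdown are simply lost.

```java
// Sketch of a key vendor: fetch a block of "n" keys in one trip to
// the sequence store, then hand them out cheaply from memory.
class SequenceStore {
    private long current;

    // One synchronized round trip reserves a whole block of keys.
    synchronized long reserve(int blockSize) {
        long first = current;
        current += blockSize;
        return first;
    }
}

class KeyVendor {
    private final SequenceStore store;
    private final int blockSize;
    private long next;
    private long limit;   // first key NOT in our reserved block

    KeyVendor(SequenceStore store, int blockSize) {
        this.store = store;
        this.blockSize = blockSize;
    }

    // Cheap in-memory increment; only refills when the block runs out.
    // A crash before the block is exhausted leaves a gap in the sequence.
    synchronized long nextKey() {
        if (next == limit) {
            next = store.reserve(blockSize);
            limit = next + blockSize;
        }
        return next++;
    }
}
```

Two vendors sharing one store (e.g. two cluster nodes) each get disjoint blocks, which is what makes this safe without per-key contention.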


A good question is never answered. It is not a bolt to be tightened into place but a seed to be planted and to bear more seed toward the hope of greening the landscape of the idea. John Ciardi
Ben Dover
Ranch Hand

Joined: Jan 30, 2004
Posts: 91

The first design was very simplistic: entity beans simply map to database rows with no functionality (Entity Beans as Data Gateways), with session beans performing the business logic and transaction demarcation. The main business method ended up being several pages long and involved a few queries and a couple of inserts/updates. Since the domain has to handle some fairly involved logic (mimicking singleton behavior within a cluster), and since the first design was only a prototype, I redesigned it.

The second design follows Entity Beans as Domain Objects: all domain logic was moved from the session beans to the entity beans. Now the huge procedural method above has been broken apart into several small, reusable methods living in the appropriate entity beans. This works fairly well, except that it will not work completely in a clustered environment. So again I hit the drawing board.

Hi David, this is a little larger project than my usual experience, but I will give it a go. From reading so far, I feel a lot more comfortable with design 1 as a starting point than design 2, in an architectural sense, although it may be an idea to refactor some of the business logic out into helper classes or other local session beans for handling fine-grained transaction control. Business logic in entity beans, as you probably know, is not highly recommended; you don't want your entity EJBs so busy that they can't do what they're best at. Remember you have the option of business methods in the Home object of entity beans, which is useful for queries and can improve performance for large result sets.

The third and current design is much more involved. I created POJO domain objects and domain stores (find/load entity beans and create domain objects from them). For each domain object there are six classes. For example, the main object Source has
Source [intf], BaseSource, and EjbSource
SourceStore [intf], BaseSourceStore, and EjbSourceStore
The base domain objects perform as much EJB-independent business logic as possible. The Ejb<foo> subclasses handle transactions and mirror the logic in the entity beans.

OK, so you have a version of the Business Delegate pattern going here (passing value objects?), with a kind of Service Locator for when you need calls to the EJB layer, is that correct? Is transaction control still with the session EJBs, or now in the BD POJO layer?

I built the domain such that -- within a single cluster node -- there is exactly one instance of any single domain object. Work is performed by synchronizing access to the domain POJOs (via Syncs from Doug Lea's concurrent package), starting a transaction, modifying the entity beans, committing the transaction, and if it succeeds, performing similar logic on the domain objects themselves.

Sounds like you're mimicking the behaviour of the container here: one POJO per entity EJB?
Another immediate issue that springs to mind is your transactional approach. Although I am unsure where you are now demarcating your transactions in design 3, by adding synchronisation (sic, I'm an Aussie, no z's) to the methods in your POJOs you are effectively serialising access to your EJB layer. This could be a bottleneck you might avoid if you assign transaction control to session EJBs alone. They are, after all, optimised for scalability and have declarative control for rollback, as I'm sure you know. And calls to the entity EJBs remain local.

Now, there are patterns for mapping domain objects to entity beans and vice versa. However, if a problem occurs during the write to the persistence layer, the domain objects will then be in an inconsistent state. As well, without locking the domain objects, how do you let two threads see different values (and much more difficult, different *relationships*) since each is in their own transaction?

What if you used DTOs passed between the POJOs and session EJBs? Whether or not the transaction succeeds, the value object you hold will retain its original data. This leaves the hard work of transaction control and shared object state up to the container.
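One way to read that suggestion is an immutable snapshot DTO, sketched below with made-up names: whatever happens inside the session bean's transaction, the caller's copy cannot change under it.

```java
// Sketch of an immutable DTO snapshot of Block state. Whatever happens
// inside the session bean's transaction, this copy stays as read.
final class BlockDto implements java.io.Serializable {
    private final long blockId;
    private final int nextTicket;

    BlockDto(long blockId, int nextTicket) {
        this.blockId = blockId;
        this.nextTicket = nextTicket;
    }

    long getBlockId()   { return blockId; }
    int getNextTicket() { return nextTicket; }

    // State changes produce a new snapshot instead of mutating this one.
    BlockDto withNextTicket(int n) { return new BlockDto(blockId, n); }
}
```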

Am I barking up the wrong tree? Or is this a good design if I have enough time or tools to build out the same services the container provides for entity beans? I'm at the point now where time is short, and I need to decide whether I should go back to the second design or find a better pattern. Either way, I think this new design is going to fall flat or be much too complicated to finish in time or support later.

If you are short of time, duplicating logic in different layers as per design 3 will possibly complicate things too much, and quite possibly you will end up with a design that is less maintainable, and maybe one that performs worse than design 2. I realise performance is an issue; have you attempted any load/stress testing on your designs so far? I know this sounds idealistic, but designing for a clean implementation should take precedence over designing for performance. With a simpler, cleaner design, separation of responsibility, and a coarse-grained EJB layer that lets the container do what it was designed to do, I think you will be better off. If performance then becomes an issue, reversing course and redesigning will be simpler. Maybe you could gather performance data on similar systems, so that you at least have a more clearly defined performance target. You can then add on vendor performance tools, such as EJB caching, if required.
Caching and optimistic locking can achieve enormous performance gains, as will a cluster, but if you choose to synchronise at the POJO level, you will render the optimistic locking capabilities of the EJBs ineffective, and duplicating container behaviour might be a redundancy that falls short of what you hoped for. Also, consult the WebLogic docs: they have special provision for read-only entities and multiple deployment of an entity bean, which can improve performance under certain conditions. And if your entity EJBs prove too cumbersome, you can switch to an alternative persistence framework (and compare performance) yet still benefit from CMT. This would require a flexible DAO approach to the design.
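For the read-only entity provision, the relevant knob lives in weblogic-ejb-jar.xml. The fragment below is a sketch from memory of the WebLogic 7.0 descriptor shape (bean name and timeout value are made up), so verify the element names against the docs for your exact version:

```xml
<!-- Sketch: mark an entity bean read-only so the container can cache
     it per-node without exclusive locking. -->
<weblogic-enterprise-bean>
  <ejb-name>SourceBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <concurrency-strategy>ReadOnly</concurrency-strategy>
      <read-timeout-seconds>600</read-timeout-seconds>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>
```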
Interested to hear your further thoughts. Good luck.