I have a question about actual implementations of the J2EE spec, application servers basically. Specifically, the way they handle EJBs.

My application loads about 1,000 database rows into memory per user. These would of course be my entity beans. Assuming I implement this in J2EE, if a bean is already in memory and another user requests that row, they will get a reference to the bean that is already loaded. Correct?

My question is then: if I sign on to my application and request another range of 1,000 rows, and that range overlaps the range already loaded into memory, say 500 records are the same in both requests, how does J2EE figure out which of the 1,000 rows I request are already loaded? And when it discovers that 500 of them are not yet loaded, does it load each one with a database call, making 500 database calls?

My current implementation gets a 1,000-record result set for each user from the database. This seems to work well, but I question its scalability vs. J2EE, where each record is only loaded once.
That's one of the things that makes EJBs so useful. An entity EJB has a unique ID: its primary key. That means the EJB container knows each bean individually, which it MUST do, since the EJB spec mandates that bean integrity be maintained, and you would obviously have problems if the beans weren't unique.

Notice that at no time above did I mention database rows! Although the backing storage for EJBs is commonly a database, there's no absolute requirement for it.

As far as overlapping requests go, since you are NOT directly making a database request, the container has the ability to do some optimizations. This is, in fact, one of the reasons EJBs are so good for large-scale work. Most significant is caching: a finder method returns the primary keys of the EJBs, not the actual EJBs, so only the rows actually being used need be realized as beans. When a bean is referenced, the container first checks the cache. Beans in the cache can simply be passed on; only out-of-cache beans require hits against the backing store.
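The cache-first lookup described above can be sketched in plain Java. This is only an illustration of the idea, not actual container code; the class and method names are invented, and a real container's cache is far more sophisticated (passivation, transactions, concurrency):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: finder returns primary keys only; beans are
// materialized lazily, hitting the backing store only on a cache miss.
public class ContainerCacheSketch {
    static class Bean {
        final int primaryKey;
        Bean(int pk) { this.primaryKey = pk; }
    }

    private final Map<Integer, Bean> cache = new HashMap<>();
    private int storeHits = 0;  // counts simulated database calls

    // Finder: returns keys only; no beans are created yet.
    List<Integer> findByRange(int first, int last) {
        List<Integer> keys = new ArrayList<>();
        for (int k = first; k <= last; k++) keys.add(k);
        return keys;
    }

    // Materialize a bean: check the cache first, go to the backing
    // store only on a miss.
    Bean activate(int pk) {
        Bean cached = cache.get(pk);
        if (cached != null) return cached;  // cache hit: no DB call
        storeHits++;                        // cache miss: one DB call
        Bean loaded = new Bean(pk);
        cache.put(pk, loaded);
        return loaded;
    }

    int getStoreHits() { return storeHits; }

    public static void main(String[] args) {
        ContainerCacheSketch c = new ContainerCacheSketch();
        for (int pk : c.findByRange(1, 1000)) c.activate(pk);   // all 1000 miss
        for (int pk : c.findByRange(501, 1500)) c.activate(pk); // 500 hit the cache
        System.out.println(c.getStoreHits()); // 1500: only new rows hit the store
    }
}
```

In the overlapping-request case from the question, the second range of 1,000 keys causes only 500 new loads, because the 500 cached beans are handed out directly.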
Thank you for the reply. My question then is: is this an appropriate use of EJBs? My application currently returns a 1,000-row list of transactions into memory. It then displays 100 to the user and allows them to page through more from memory. This way a single stored procedure call retrieves 1,000 rows instead of making 1,000 individual calls for EJB content if none of the rows were already in memory.

So, if our application involves thousands of database rows and our users will need to look at various sets of these rows, sometimes overlapping, sometimes not, is this still a good case for EJBs?

Another question I have: if I try to load a 1,000-row range and, say, 200 rows are already in memory, will it take long to figure out which transactions are already in EJBs in memory? And won't it then take quite a while to make 800 individual database calls to load the other 800 rows into EJBs?
This is probably not the best use of entity beans, because of the bean overhead. I think you are better off using JDBC calls from a session bean. Remember that every call to an entity bean is a remote call and carries network overhead.

------------------
Tom
Sun Certified Programmer for the Java™ 2 Platform
Moderator of the forums: J2EE and EJB, Other Java APIs
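A minimal sketch of the JDBC-from-a-session-bean approach might look like the following. The table and column names (TRANSACTIONS, TXN_ID, AMOUNT) are invented for illustration; in a real deployment this code would live inside a stateless session bean method and obtain its Connection from a container-managed DataSource:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: one bulk JDBC query instead of ~1000 per-row entity
// bean loads. Table/column names are hypothetical.
public class TransactionFetcher {
    // The range query is built separately so it can be inspected
    // (and tested) without a live database.
    static String rangeQuery() {
        return "SELECT TXN_ID, AMOUNT FROM TRANSACTIONS "
             + "WHERE TXN_ID BETWEEN ? AND ? ORDER BY TXN_ID";
    }

    // One database round trip fetches the whole range.
    static List<Object[]> fetchRange(Connection con, int first, int last)
            throws SQLException {
        List<Object[]> rows = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(rangeQuery())) {
            ps.setInt(1, first);
            ps.setInt(2, last);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows.add(new Object[] { rs.getInt(1), rs.getBigDecimal(2) });
                }
            }
        }
        return rows;
    }
}
```

The session bean keeps the single-round-trip behavior of the stored-procedure approach while still giving you container-managed transactions and pooled connections.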
This is the kind of thing that's usually best prototyped and benchmarked to make sure that what you "know" is going to happen is, in fact, what's happening. As Paul mentioned, bulk retrieval of data is pretty expensive, and there's not only RMI overhead: entity EJBs are going to be created one row at a time (= one SQL request each) unless the container has a pretty clever prefetch mechanism.

However, I have to wonder if you REALLY are going to need all 1,000 rows right at your fingertips in full detail, or merely accessible. Fetching 1,000 rows and making them available to Java is a sufficiently large task that if you try to do it in response to a web browser request, you run the risk of browser timeouts regardless of how you manage them. Fetching 1,000 keys and accessing detail data only on rows of particular interest is a different matter. Statistical clustering of record accesses can play into (or against) the efficiency of caching mechanisms.

You can also play various tricks, such as pairing a JavaBean with an EJB and passing it as a unit to the EJB for set/get purposes, thus reducing the RMI overhead. If you need fine-grained transactional support, that too is an attribute of EJBs that you'd otherwise need to code yourself.

Only one thing can I guarantee: no matter how rigorous the theoretical analysis is, actual measured results will contain surprises!
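The "pair a JavaBean with an EJB" trick above can be sketched as follows. This is a simplified illustration of the idea, with invented class and field names, using a plain object to stand in for the remote EJB; the point is that one coarse-grained call replaces many fine-grained remote getter/setter calls:

```java
import java.io.Serializable;

// Hedged sketch of passing a JavaBean to/from an EJB as a single unit.
public class TransferObjectSketch {
    // The value object that crosses the wire once. Field names are
    // hypothetical.
    public static class TransactionVO implements Serializable {
        public int id;
        public String description;
        public double amount;
        public TransactionVO(int id, String description, double amount) {
            this.id = id;
            this.description = description;
            this.amount = amount;
        }
    }

    // Stand-in for the remote EJB: one getData() call instead of three
    // remote getters, one setData() instead of three remote setters.
    public static class TransactionBean {
        private TransactionVO state = new TransactionVO(0, "", 0.0);
        private int remoteCalls = 0;  // counts simulated RMI round trips

        public TransactionVO getData() { remoteCalls++; return state; }
        public void setData(TransactionVO vo) { remoteCalls++; state = vo; }
        public int getRemoteCalls() { return remoteCalls; }
    }

    public static void main(String[] args) {
        TransactionBean bean = new TransactionBean();
        bean.setData(new TransactionVO(42, "wire transfer", 99.50));
        TransactionVO copy = bean.getData();
        // Two round trips carried six field accesses.
        System.out.println(bean.getRemoteCalls()); // 2
    }
}
```

With per-field remote accessors, reading and writing those three fields would cost six RMI round trips instead of two.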