Some clarification required on a few topics

 
Ranch Hand
Posts: 125
Hi,

I need some clarification on a few topics:

1) In most application servers, apart from the main server JVM, there are additional JVM nodes that are used mainly for batch processing, and running threads in those nodes already serves the purpose of batching. I think most enterprise applications have batching support today, for example startNodeManager in the case of the WebLogic Application Server, and similar facilities are available in other application servers as well. What are the benefits provided by the new batch processing support in Java EE 7?

2) In the case of WebSockets, how scalable will an application be if it supports WebSocket, at least on GlassFish 4? The way I understand it, if 10,000 users are connected to the server and open WebSocket conversations, then 10,000 HTTP upgrades to ws or wss connections have to be maintained. Will there be any issues if all 10,000 users are accessing the same information, for example retrieving the latest stock price of a commodity? Will the server maintain client-specific conversations between two clients, as in a chat application? With 10,000 users, if every client is interacting with one other client rather than with all of them, the server may end up having to store 5,000 conversation-specific pieces of state for those 10,000 users.

3) In the case of an asynchronous servlet, if a client request is handed off to a thread that waits for an update from the server, will another thread process the response for that client once the update is available? In other words, does the server push updates when they become available and send them to the registered clients, regardless of which thread handled the original request?

4) Regarding the second-level cache for JPA: does it imply keeping a good amount of RAM available on the application server? Or would it be better to keep that memory in a separate JVM on the same machine? A remote invocation is hardly better than a database fetch, is it?

A suggestion for the Java EE Tutorial: if every chapter were followed by a few examples or use cases showing how the features are used, and then by some Q&A the way the Java SE tutorial does it, that would be greatly appreciated.

Thanks,
N. Naresh Kumar.
 
Hi Naresh,

1) I don't fully understand what you mean on this item, but the node manager used by WebLogic is purely for administration of a WebLogic instance. It's just a monitoring process and is not capable of any batch or application processing at all. That said, the new batch processing API has the benefit of being part of the platform, without external dependencies or frameworks. Spring Batch, for example, is an alternative for Spring shops, but for purely Java shops there is now a native option; a minimal sketch of what that looks like follows below.
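To make that concrete, here is a minimal sketch of a JSR 352 Batchlet, the task-oriented step type. The class name and the work it pretends to do are invented for illustration, and the step would be referenced from a job descriptor under META-INF/batch-jobs:

import javax.batch.api.AbstractBatchlet;
import javax.inject.Named;

// Hypothetical example of a task-oriented batch step. The name and the
// "archive old orders" task are invented purely for illustration.
@Named
public class ArchiveOrdersBatchlet extends AbstractBatchlet {

    @Override
    public String process() throws Exception {
        // Do the unit of work here, e.g. move closed orders to an archive table.
        // The returned value becomes the step's exit status.
        return "COMPLETED";
    }
}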

2) It's really hard to tell, as it depends on other considerations such as the payload size and the frequency of calls. But my understanding is the same as yours, and a good idea here is to use load balancing and clustering to spread the load across multiple servers for such large systems. 10k simultaneous connections is really a lot.

3) Although I know the answer, I'd prefer to have you simply print the thread name in an async servlet method so you can see for yourself exactly how it works; something along the lines of the sketch below will do. But to give you a hint: most application servers have internal thread pools so that thread creation is more efficient and has limited impact on other applications or admin tasks.
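A sketch of such an experiment (the URL pattern and the messages are made up) could look like this:

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical demo servlet: prints the name of the thread that received the
// request and the name of the thread that writes the response.
@WebServlet(urlPatterns = "/async-demo", asyncSupported = true)
public class AsyncDemoServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        System.out.println("Request thread:  " + Thread.currentThread().getName());

        final AsyncContext ctx = req.startAsync();
        ctx.start(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("Response thread: " + Thread.currentThread().getName());
                    ctx.getResponse().getWriter().println("done");
                } catch (IOException e) {
                    e.printStackTrace();
                } finally {
                    ctx.complete();
                }
            }
        });
    }
}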

4) There are multiple ways to implement JPA caching. But in general, depending on the size of the system, you may want to store results on external JVM nodes, leveraging that as a mechanism to distribute the data. For example: let's say you are inside the app server (web app) and make a JPA request, which will check the cache first and then hit the DB; the response then populates the cache. Using a solution like Pivotal GemFire or Oracle Coherence for this caching, the data can be distributed among multiple nodes, so when a second request arrives at a web app instance running on a different node than the first one, the system already has that data in memory and doesn't require another DB query. The fact that this data lives in a different JVM process does have some cost, but it's still cheaper than a network hop to the database plus the query processing time there. What we do recommend is to have a separate JVM for this cache, but to keep it co-located on the same host; that also avoids out-of-memory errors on the app server JVM, which in this setup holds only web application objects and no JPA cache data.
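As a small illustration of that read path, the standard javax.persistence.Cache API lets you ask the provider whether an entity is already in the shared cache; the entity class and id below are hypothetical, and where that cache physically lives (local heap, a co-located JVM, a Coherence/GemFire grid) is purely provider configuration:

import javax.persistence.Cache;
import javax.persistence.EntityManagerFactory;

// Hypothetical helper: asks the JPA provider whether a given entity instance
// is already present in the second-level (shared) cache. StockPrice is a
// made-up entity used only for this sketch.
public class CacheProbe {

    public static boolean isCached(EntityManagerFactory emf, Object stockPriceId) {
        Cache cache = emf.getCache();
        return cache.contains(StockPrice.class, stockPriceId);
    }
}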

Hope that helps.

PS: Thanks for the suggestion.
 
Author
Posts: 6
I'll try to answer some of your questions.

1) The primary benefit provided by the Batch Processing APIs (JSR 352) in Java EE 7 is standardization. As you pointed out, many application servers have been providing batch processing capabilities but each did so in its own way. Java EE 7 standardizes this capability so applications that use this feature can be run on another application server with minimal changes.
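To illustrate that portability, here is a hedged sketch of starting a job through the standard JobOperator; the job name and parameter are invented, and the job itself would be defined in META-INF/batch-jobs/simple-job.xml on any compliant server:

import java.util.Properties;
import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchRuntime;

// Hypothetical launcher: starts the job described in
// META-INF/batch-jobs/simple-job.xml and returns its execution id.
public class JobLauncher {

    public long launch() {
        JobOperator operator = BatchRuntime.getJobOperator();
        Properties params = new Properties();
        params.setProperty("cutoffDays", "90"); // invented job parameter
        return operator.start("simple-job", params);
    }
}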

2) GlassFish 4 is not a production-quality application server, so it is not optimized for massively scaled applications such as the case you mention. Scalability is a production-quality feature that would be supported by commercial application servers, not by an open-source one like GlassFish. Without quoting numbers, in general, WebSocket applications do run with higher efficiency and lower latency in cases involving many small messages, such as chat and stock ticker updates. An application could be written in a manner that stores conversations between clients, but we don't have an example of this to show you.
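For the stock ticker case specifically, a broadcast endpoint is enough, since every client receives the same message and no per-client conversation state needs to be kept. This is only a sketch: the path is made up, and the broadcast method would be called by whatever produces the new price (a timer, a JMS listener, etc.):

import java.io.IOException;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Hypothetical stock ticker endpoint: tracks open sessions and pushes the
// same text message to every connected client.
@ServerEndpoint("/ticker")
public class StockTickerEndpoint {

    private static final Set<Session> sessions =
            Collections.synchronizedSet(new HashSet<Session>());

    @OnOpen
    public void onOpen(Session session) {
        sessions.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        sessions.remove(session);
    }

    // Invoked by the price producer; sends the update to all open sessions.
    public static void broadcast(String priceUpdate) throws IOException {
        synchronized (sessions) {
            for (Session s : sessions) {
                if (s.isOpen()) {
                    s.getBasicRemote().sendText(priceUpdate);
                }
            }
        }
    }
}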

3) In the case of an asynchronous servlet, you described the behaviour correctly. Asynchronous servlets allow you to handle the request and write the response in a thread different from the one that received the request.

4) When a second-level cache is used with JPA 2.1, it helps improve performance by avoiding expensive database calls, keeping the entity data local to the application. How memory is maintained and used is entirely implementation specific.
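For completeness, this is roughly what opting an entity into the second-level cache looks like in JPA 2.1. The entity is hypothetical, and it assumes <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode> in persistence.xml; sizing and eviction remain entirely up to the provider:

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical entity opted into the shared (second-level) cache.
@Entity
@Cacheable
public class StockPrice {

    @Id
    private Long id;

    private String symbol;
    private double price;

    // getters and setters omitted for brevity
}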

Thank you for your suggestions about organizing the information in the chapters.
 