Apart from EJB (which I'm not very fond of, for various reasons), there is some consensus that the reference C implementation of Ruby (MRI) is not as rock-solid as the JVM for long-lived processes. I've never had any problems personally, but it's widely reported that Ruby's garbage collector doesn't perform as well as Java's. Also, threads in MRI don't map onto native OS threads the way Java threads do, so multithreading in Ruby is less reliable than multithreading in Java. I can't speak for alternative implementations, like JRuby.
On the other hand, this turns out not to be a problem in practice. The Ruby approach is simply
not to write long-lived multithreaded processes. I'll give you an example: a web application.
The typical approach with Java is a Java Enterprise (or Spring-based) multithreaded application. Each request is served by a separate
thread, and the JVM process is supposed to stay up indefinitely.
With Ruby, you'll probably adopt a "shared-nothing" architecture instead: all web requests go through a load balancer that distributes them across a number of Ruby
processes (not threads). The processes can be spread over multiple servers, and each process serves a single request at a time. Crucially, the processes are stateless: once a request has been served, you can kill the process without losing any data, because all user state is kept either in the database or in some kind of external cache (such as a memcached server).
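To make the idea concrete, here's a minimal sketch of such a stateless handler. The names (`handle_request`, `STORE`) are hypothetical, and a plain hash stands in for the external cache; in a real deployment `STORE` would be a memcached or database client, which is exactly what lets you kill and restart the process between requests.

```ruby
require "json"

# Stand-in for an external cache like memcached: all user state lives
# OUTSIDE the Ruby process, so the process itself is disposable.
STORE = {}

# Each call represents one request, handled by one short-lived process.
# The handler reads state from the external store, updates it, and
# writes it back before returning -- nothing survives in the process.
def handle_request(user_id, store)
  state = store.fetch(user_id, { "visits" => 0 })
  state["visits"] += 1
  store[user_id] = state # persist externally before the process could die
  { status: 200, body: JSON.generate(state) }
end

# Two requests, possibly served by two different processes on two
# different servers: the second still sees the first's state.
handle_request("alice", STORE)
second = handle_request("alice", STORE)
puts second[:body] # => {"visits":2}
```

Because the handler touches no process-local state, the load balancer is free to route each request to any process on any server.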
My opinion is that a "shared-nothing" application makes long-lived processes and multithreading far less important, and is overall easier to both develop and operate. If you want to update your app, you just take a server off the load balancer, update it, restart it, and put it back behind the load balancer (it's not always that easy, but in many cases it is). In my experience, a stateful application is generally more complex to deploy and administer.
In general, what I've found is that if you stick with the Ruby "philosophy", many of the problems that Java is designed to solve simply become irrelevant.