1.) Batch processing jobs - Let's say you have a website that allows photo uploads, and those photos must be resampled to two "thumbnail" sizes in addition to storing the original. Rather than having your front-end application make the user wait for their upload to "finish" while it does the resampling, your front-end app could simply publish a message into RabbitMQ containing a reference to the image's ID in the database and then immediately return to the user. Your backend "resampler" app would then consume from the queue that receives resampling requests and generate the thumbnails asynchronously from the actual front-end upload of the image. Your user gets a faster upload experience, and if you need more "resampling" horsepower you simply attach more instances of the "resampler" app to the RabbitMQ queue that receives the requests. Your front end is no longer tightly coupled to the resampling process...it simply stores the original image, fires off the resampling message and goes back to letting your users interact with your site. RabbitMQ is great for any batch-oriented job like this, where you receive "work orders" and want to be able both to process them asynchronously from receiving them and to easily scale processing power up or down.
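The shape of that work-queue pattern can be sketched with Python's standard library standing in for the broker (the queue name, image IDs, worker count, and the `resample_image` helper are all made up for illustration; against a real RabbitMQ server you'd publish and consume through a client library such as pika instead):

```python
import queue
import threading

# Stand-in for the RabbitMQ queue that receives resampling "work orders".
resample_requests = queue.Queue()

def resample_image(image_id):
    # Placeholder for the real thumbnail generation.
    return f"thumbnails generated for image {image_id}"

results = []
results_lock = threading.Lock()

def resampler_worker():
    # One instance of the "resampler" app. To scale up, just start more
    # of these against the same queue -- no front-end changes needed.
    while True:
        image_id = resample_requests.get()
        if image_id is None:  # shutdown signal
            break
        with results_lock:
            results.append(resample_image(image_id))
        resample_requests.task_done()

# The front end "publishes" each work order and returns immediately.
for image_id in (1, 2, 3, 4):
    resample_requests.put(image_id)

# Two worker instances drain the queue concurrently.
workers = [threading.Thread(target=resampler_worker) for _ in range(2)]
for w in workers:
    w.start()
resample_requests.join()       # wait until every work order is processed
for _ in workers:
    resample_requests.put(None)
for w in workers:
    w.join()

print(len(results))  # 4 -- all four work orders processed
```

The key property is the same one the paragraph describes: the publisher never waits on the work, and capacity is adjusted purely by changing how many consumers are attached.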
2.) Logging and Decoupling - Let's say you built an instant messaging website, where you want to both log every IM and display realtime summary statistics (IMs/sec, number of IMs sent to particular recipients, etc). Also, let's say you want to store the actual IMs in Cassandra, but the summary statistics you want in a fast in-memory store like Redis so they can be updated in realtime. With RabbitMQ you could have your front-end publish each IM just once into Rabbit with a tag of "im_log". Then you could create two queues (permanent_log, summary_log) that both subscribe to copies of messages tagged with "im_log". You'd write a consumer app that attaches to the permanent_log queue and writes all log entries to Cassandra, and then you could write a second consumer app that attaches to the summary_log queue and updates summary statistics in Redis in realtime. Now let's say a week goes by and your biggest user wants the ability to send a copy of every IM on their account via XMPP to an outside IM server. Since you're using Rabbit already, all you have to do is create a new queue called "external_log" that also subscribes to messages with the "im_log" tag and then attach a consumer to that queue that forwards the IMs on. Bingo presto, no changes to your front-end or other loggers and you've got the new feature! This is a huge gain over writing a monolithic logging app that would have to be rewritten to add the XMPP forwarding feature...or, even worse, adding the same logic directly into your front-end code and breaking who knows what. Also, like the last example, you can scale up any of the 3 logger apps easily by attaching more instances as needed to the appropriate queues.
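The fanout behavior that makes this work - publish once, and every bound queue gets its own copy - can be sketched like this (queue names are from the example above; the exchange is simulated here with plain Python dicts and queues rather than a live RabbitMQ broker, where a fanout or topic exchange would do the copying for you):

```python
from collections import defaultdict
import queue

# A toy exchange: every queue bound to a tag receives its own copy
# of each message published with that tag.
bindings = defaultdict(list)  # tag -> list of bound queues

def bind(tag, q):
    bindings[tag].append(q)

def publish(tag, message):
    # The front end publishes exactly once; the exchange fans it out.
    for q in bindings[tag]:
        q.put(message)

permanent_log = queue.Queue()  # consumer writes these to Cassandra
summary_log = queue.Queue()    # consumer updates Redis counters
bind("im_log", permanent_log)
bind("im_log", summary_log)

publish("im_log", {"from": "alice", "to": "bob", "body": "hi"})

# A week later: the new XMPP feature is just one more binding.
# No front-end changes, no changes to the other two consumers.
external_log = queue.Queue()
bind("im_log", external_log)

publish("im_log", {"from": "alice", "to": "bob", "body": "lunch?"})

# permanent_log and summary_log saw both IMs; external_log only the
# one published after it was bound.
print(permanent_log.qsize(), summary_log.qsize(), external_log.qsize())
```

Note how the publisher's code never changed when the third queue appeared - that is the decoupling the paragraph is describing.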
Does that help?
(There are lots of other patterns that are good fits for Rabbit too, since you can actually reply to messages in Rabbit...and use it for RPC or the like.)
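That reply-to pattern works roughly like this: the client sends a request carrying a private reply queue and a correlation ID, and the server posts its answer back on that queue. Again this is a stdlib sketch of the pattern, not real broker code, and the "remote procedure" (squaring a number) is a made-up stand-in:

```python
import queue
import threading
import uuid

request_queue = queue.Queue()

def rpc_server():
    # Reads requests, does the work, and replies on whatever queue
    # the client named in the request.
    while True:
        req = request_queue.get()
        if req is None:  # shutdown signal
            break
        result = req["n"] * req["n"]  # stand-in for the real procedure
        req["reply_to"].put({"correlation_id": req["correlation_id"],
                             "result": result})

def rpc_call(n):
    # Client side: publish a request with a reply queue and a fresh
    # correlation ID, then block until the matching reply arrives.
    reply_to = queue.Queue()
    correlation_id = str(uuid.uuid4())
    request_queue.put({"n": n, "reply_to": reply_to,
                       "correlation_id": correlation_id})
    reply = reply_to.get()
    assert reply["correlation_id"] == correlation_id
    return reply["result"]

server = threading.Thread(target=rpc_server)
server.start()
answer = rpc_call(7)
request_queue.put(None)
server.join()
print(answer)  # 49
```

The correlation ID is what lets a client with several requests in flight match each reply to the call that produced it.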