We do briefly discuss integration with NoSQL systems in Chapter 11. And, as Bill mentions, Optiq and other intermediary layers are necessary, since Mondrian needs to talk to "something" that speaks SQL. Most NoSQL systems these days are SQL-esque (functionally if not semantically similar) but don't speak SQL directly, which is where Optiq fits in.
The techniques Bill mentions with PDI will work functionally, but to my knowledge they will not currently leverage any optimizations in the source data based on the Mondrian query (for example, filtering before aggregation in MongoDB). Until something like Optiq, with its inherent support for optimization rules, is working well across many systems, Mondrian's performance directly on NoSQL systems will be a bit disappointing.
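To see why that filter-before-aggregation pushdown matters, here is a conceptual sketch in plain Python (no MongoDB required, and none of these names come from Mondrian or Optiq): the aggregator does far less work when the source applies the predicate first, which is exactly what a $match-before-$group MongoDB pipeline achieves.

```python
# Toy data set standing in for a MongoDB collection.
rows = [{"region": "EMEA" if i % 4 else "AMER", "sales": i} for i in range(1000)]

def aggregate(docs):
    """Sum sales per region, counting how many rows were scanned."""
    totals, scanned = {}, 0
    for d in docs:
        scanned += 1
        totals[d["region"]] = totals.get(d["region"], 0) + d["sales"]
    return totals, scanned

# Without pushdown: aggregate everything, then keep the region we wanted.
all_totals, scanned_all = aggregate(rows)
naive = {"AMER": all_totals["AMER"]}

# With pushdown: the source filters first, so the aggregator only ever
# sees matching rows.
pushed, scanned_pushed = aggregate(d for d in rows if d["region"] == "AMER")

assert naive == pushed               # same answer either way
assert scanned_pushed < scanned_all  # far fewer rows touched with pushdown
```

The answer is identical either way; the difference is how many rows the aggregation layer has to touch, which is where the real performance gap on large collections comes from.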
Good luck - and if you get the book, jump straight to Chapter 11 so that you get the context of where Mondrian fits into the NoSQL/Big Data world.
As Nick said, my Optiq project is allowing us to put Mondrian on top of databases that don't speak SQL.
Optiq has adapters for Splunk, MongoDB, and CSV files.
An adapter for Spark is under development. That is particularly exciting, because we will be able to use Spark as a distributed in-memory database that works on cached copies (or subsets, or aggregates materialized in memory) of other databases. This will be useful for operational databases that do not have great performance for scanning/aggregating large numbers of records.
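The "aggregates materialized in memory" idea can be sketched in a few lines of plain Python (all names here are hypothetical, not Spark or Optiq API): pay for one full scan of the slow operational store up front, then answer repeated queries from the cached rollup.

```python
# Stand-in for an operational store that is slow to scan.
operational = [{"day": d % 7, "clicks": d} for d in range(10_000)]

def build_rollup(rows):
    """One full scan to materialize clicks-per-day, as a cache layer would."""
    rollup = {}
    for r in rows:
        rollup[r["day"]] = rollup.get(r["day"], 0) + r["clicks"]
    return rollup

cache = build_rollup(operational)  # the one expensive scan, paid once

def clicks_on(day):
    # Served from the materialized aggregate: no scan of the
    # operational store per query.
    return cache[day]

assert clicks_on(3) == sum(r["clicks"] for r in operational if r["day"] == 3)
```

A distributed cache like Spark generalizes this: the rollup lives in memory across a cluster and is kept for many queries, while the operational database is only scanned when the cache is (re)built.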
Optiq also has a JDBC adapter, which allows it to push queries down to an underlying database. Why is that useful? Optiq can be used to combine multiple databases (maybe all relational, or maybe a mixture of relational and non-relational), or add a distributed cache on top of a database.
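A toy federation layer in plain Python illustrates the shape of this (hypothetical names throughout, not the Optiq or JDBC API): one source accepts a pushed-down predicate, the other is a plain in-memory lookup, and the federator joins the survivors.

```python
# "Relational" source: pretend this table sits behind a JDBC connection.
orders = [
    {"cust_id": 1, "amount": 250},
    {"cust_id": 2, "amount": 75},
    {"cust_id": 1, "amount": 40},
]
# "Non-relational" source: a simple key-value lookup.
customers = {1: "Acme", 2: "Globex"}

def query_orders(predicate):
    # Stand-in for pushing a WHERE clause down through JDBC:
    # the source evaluates the predicate, not the federation layer.
    return [o for o in orders if predicate(o)]

def federated_big_orders(min_amount):
    # Push the filter into the orders source, then join the surviving
    # rows against the second source in the federation layer.
    big = query_orders(lambda o: o["amount"] >= min_amount)
    return [(customers[o["cust_id"]], o["amount"]) for o in big]

print(federated_big_orders(100))  # → [('Acme', 250)]
```

The point of the pushdown is that only one order row ever crosses the boundary between the sources and the federator; a real JDBC adapter does the same thing by translating the predicate into SQL for the underlying database.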
Lastly, I hear reports from the lab that Mondrian works on top of HBase using the Phoenix JDBC driver. I'll be trying Mondrian on Optiq on HBase shortly.