I'm very interested in hearing Pierre-Yves's response to this question but in the meantime I'd like to offer some thoughts of my own. I studied FP back in the early 80's; my early jobs were assembler, COBOL, and C programming. I picked up C++ in '92, Java in '97, and then over the last decade I've worked with Groovy, then Scala, and for the last nearly six years I've been doing Clojure in production. Along the way I'd been keeping my fingers in the FP pool, always hoping it would go mainstream (in the early 90's I hoped Haskell would be the "One True Language"... sigh...).
OOP's great promise to the industry was reusability. We were supposed to be able to build models of things in our businesses and then reuse them across multiple programs. We were supposed to get flexible, reusable abstractions. I think we all know how that turned out. We've seen a lot of "No True Scotsman" arguments around OOP -- "if it doesn't work, you're not doing it right" -- but I view this more as Stockholm syndrome: we've invested so much as an industry that it's really hard to admit we've made a big, expensive mistake. Tools, training,
patterns, consultants -- a huge, unstoppable machine based on helping you get OOP "right". Not all of OOP is bad, of course. It's really good for modeling certain things. But if you look at what
Alan Kay had in mind, it's not what we got. He had in mind a coarse-grained model of objects that communicated via "messages". It wasn't really about reusability; it was more about autonomy.
Simon posits that OOP should be good for systems where we have well-defined "entities" and perhaps less well-defined "processes". When I look at that, I see data and functions. In OOP, those are blended together. If your process changes, you need to modify your objects. If your data changes, you need to modify your objects. Either kind of change forces you to alter your fundamental building blocks, because the two concerns are intertwined.
With FP, your data and your functions are separated. Alan Perlis said "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures." (Epigrams on Programming, 1982). In Clojure, you have a small number of core abstractions and, for each abstraction, a large set of functions that apply to it. Mostly your data is a basic hash map, and you can apply any number of generic hash map functions to it. You can have a sequence of your data elements and apply any number of generic sequence functions to them. 100 functions, one data structure. One of the problems with OOP is that, in general, each entity has a specific class type and therefore a specific set of functions that can operate on it. It becomes hard to compose those functions because they are specific rather than generic: they aren't abstract enough.
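To make "100 functions, one data structure" concrete, here's a minimal Clojure sketch. The "user" entity and its fields are hypothetical, but every function used is an ordinary clojure.core function that works on any map or sequence:

```clojure
;; A hypothetical "user" entity: a plain hash map, not a class.
(def user {:id 1 :name "Ada" :email "ada@example.com" :active? true})

;; Generic map functions apply directly:
(get user :name)                ;=> "Ada"
(assoc user :role :admin)       ;=> a new map with an extra key
(select-keys user [:id :email]) ;=> {:id 1, :email "ada@example.com"}

;; A collection of users is just a sequence, so every generic
;; sequence function applies as well:
(def users [user {:id 2 :name "Grace" :active? false}])
(filter :active? users)   ;=> only the active users
(map :name users)         ;=> ("Ada" "Grace")
(group-by :active? users) ;=> users grouped by their active status
```

None of these functions knows anything about "users"; they'd work just as well on orders, invoices, or configuration data, which is exactly why they compose so freely.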
My experience, spanning both FP and OOP, is that you can create extremely flexible systems with FP because functions are composable and you have unlimited "plug'n'play" if you have the right level of data abstraction. Your design can also be flexible at the data level if you ensure your entities are accretive: either you just add new fields, or you add composable functions that present a compatible API for your data. Since we switched from an OOP approach to an FP approach at work, our ability to pivot and incorporate new requirements has improved dramatically. We can extend our data model easily too (at least accretively -- removing things takes more work).
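As a sketch of what "accretive" looks like in practice (the fields and functions here are hypothetical, not our actual data model):

```clojure
;; New requirements add keys; old code keeps working because it
;; only reads the keys it already knows about.
(def user-v1 {:id 1 :name "Ada"})
(def user-v2 (assoc user-v1 :email "ada@example.com")) ; new field

;; A function written against v1 is untouched by the new field:
(defn display-name [user] (:name user))
(display-name user-v2) ;=> "Ada"

;; When the shape of the data does change, a small composable
;; function can present a compatible API over old and new shapes:
(defn full-name [user]
  (or (:full-name user) (:name user)))
```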