In our tests and in work on real-world applications, we generally see a linear performance improvement. A key advantage of pipelines, however, is that you can predict scalability up front from your application's specific characteristics (much of this book is dedicated to that topic). For example, if distributing transactions to the pipelines is itself complex and CPU-intensive, you will see far less scalability. On the other hand, a highly CPU- or I/O-intensive process whose work is easy to distribute can achieve near-maximum scalability. As with all parallel computing models there are limits, but with pipelines you can "do the math" up front in the design/research phase with ease, and know whether it is the right approach for your situation. Doing the math early also helps you optimize during design, fixing bottlenecks before they appear in your implementation.
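As a rough sketch of what "doing the math" can look like, the following Amdahl-style estimate treats the per-transaction distribution cost as the serial portion and the per-transaction work as the parallelizable portion. The function name and the timing figures are hypothetical, not from the text; the point is only to show how a cheap distribution step predicts near-linear scaling while an expensive one caps the achievable speedup.

```python
def predicted_speedup(t_distribute, t_work, n_pipelines):
    """Estimate speedup over a single pipeline.

    t_distribute: serial cost per transaction to route it to a pipeline
    t_work:       parallelizable cost per transaction inside a pipeline
    n_pipelines:  number of parallel pipelines
    """
    # Serial baseline: one pipeline does everything.
    baseline = t_distribute + t_work
    # With N pipelines, distribution stays serial; the work divides by N.
    parallel = t_distribute + t_work / n_pipelines
    return baseline / parallel


# Cheap distribution (hypothetical: 0.1 ms to route, 10 ms of work):
# scaling is close to linear across 8 pipelines.
print(predicted_speedup(0.1, 10.0, 8))   # near 8x

# Expensive, CPU-intensive distribution (hypothetical: 2 ms to route):
# the serial routing step caps the speedup well below 8x.
print(predicted_speedup(2.0, 10.0, 8))
```

Running the numbers this way in the design phase shows immediately whether the distribution step will become the bottleneck, before any code is written.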