We do cover automated acceptance tests quite thoroughly, but perhaps not from the perspective you would expect. I'm the project coordinator for Fit, a popular acceptance testing tool, and working with teams that use Fit has given me a lot of experience with automated acceptance tests.
As I worked with these teams, I consistently noticed several things:
The acceptance tests were a maintenance burden, leading to slow and frequently broken builds.
The tests were difficult to read and understand.
Customers weren't participating deeply (or at all) in the creation of acceptance tests.
For the last four years, I've been experimenting with alternatives to acceptance tests, and I've settled on the following combination of practices as an alternative to traditional acceptance tests:
Test-driven development (for its comprehensive regression suite)
Customer tests with Fit (for examples/tests of business logic)
On-site customer reviews (for communicating details and "done done")
Exploratory testing (for verifying that the process is working correctly)
The "customer tests" practice is the closest to traditional automated acceptance tests, but we've intentionally constrained it to increase rigor and reduce maintenance costs.
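To make the "customer tests" idea concrete: business rules are expressed as tables of examples supplied by the customer, and a thin fixture checks each row against the production code. The sketch below illustrates that style in plain Python; it is not Fit's actual API, and the discount rule is a made-up example:

```python
# Sketch of the "customer tests" style: a business rule checked against
# a table of customer-supplied examples. Illustration only -- not Fit's API.

def discount(order_total):
    """Hypothetical business rule: 5% off orders of $100 or more."""
    return round(order_total * 0.05, 2) if order_total >= 100 else 0.0

# The customer writes the examples as a table: (order total, expected discount).
examples = [
    (50.00, 0.00),
    (100.00, 5.00),
    (250.00, 12.50),
]

# A thin fixture runs each row and reports pass/fail,
# much as Fit colors table cells green or red.
for total, expected in examples:
    actual = discount(total)
    status = "pass" if actual == expected else f"FAIL (got {actual})"
    print(f"total={total:7.2f}  expected={expected:5.2f}  -> {status}")
```

The point of the constraint is that the table stays readable by the customer, while the fixture code stays small enough that maintaining it doesn't dominate the build.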
James Shore, coauthor of <a href="http://www.amazon.com/Art-Agile-Development-James-Shore/dp/0596527675" target="_blank" rel="nofollow">The Art of Agile Development</a>. Website and blog at <a href="http://www.jamesshore.com" target="_blank" rel="nofollow">http://www.jamesshore.com</a>.