What I am trying to do is read multiple (100) files into a Map in parallel, rather than reading one file at a time.
I have read that there may be restrictions on this due to I/O processing, but is there a way to use an ExecutorService and threads to read multiple files in parallel and load their contents into a Map of DTOs?
The limit usually comes from the throughput and read heads of the disk(s) involved. If the files are all stored on a single disk, and that disk has only a single read head, then reading with multiple threads will likely take longer than reading with one, because the read head spends extra time seeking to whichever file a particular thread needs next. If the files are spread across multiple hard disks, or stored on a RAID with multiple disks and a controller that supports parallel reads, then you can take advantage of multiple threads. So depending on your scenario, you might consider making the size of the thread pool handed to the Executor configurable, so you can tune the system to the available hardware.
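To make that concrete, here is a minimal sketch of the approach from the question: a fixed-size (and therefore easily configurable) thread pool reads each file and puts the result into a ConcurrentHashMap. The `FileDto` record and the `loadAll` method name are made up for illustration; swap in your real DTO and whatever parsing you need.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelFileLoader {

    // Hypothetical DTO holding one file's contents; replace with your own.
    record FileDto(String name, String contents) {}

    // Reads every file in 'paths' on a pool of 'poolSize' threads and
    // returns a map keyed by file name. Pool size is a parameter so it
    // can be tuned to the available hardware, as suggested above.
    static Map<String, FileDto> loadAll(List<Path> paths, int poolSize)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            Map<String, FileDto> result = new ConcurrentHashMap<>();
            List<Future<?>> futures = new ArrayList<>();
            for (Path p : paths) {
                futures.add(pool.submit(() -> {
                    String text = Files.readString(p); // blocks on disk I/O
                    String name = p.getFileName().toString();
                    result.put(name, new FileDto(name, text));
                    return null;
                }));
            }
            // Wait for all reads; any IOException surfaces here wrapped
            // in an ExecutionException.
            for (Future<?> f : futures) {
                f.get();
            }
            return result;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // Demo: write a few temporary files, then load them with 4 threads.
        Path dir = Files.createTempDirectory("demo");
        List<Path> paths = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            paths.add(Files.writeString(dir.resolve("file" + i + ".txt"), "contents " + i));
        }
        Map<String, FileDto> map = loadAll(paths, 4);
        System.out.println(map.size()); // prints 5
    }
}
```

Note that correctness here does not depend on the disk layout at all; only the speed-up (if any) does, which is why the pool size is left configurable.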
The number of CPUs doesn't really matter, because from the point of view of a CPU, file access consists almost entirely of waiting for the disk to spin to the right position. So depending on what you do with the data from the file, it's quite likely that even if you could process 100 files from 100 different disks simultaneously you still wouldn't be CPU-bound.
However, what Steve said still applies: reading multiple files in multiple threads tends to be less useful than you might think. You really won't know whether multiple threads speed things up until you try it.
Also note that operating system buffering and disk drive electronics buffering sit between your program and the physical disk. Therefore, it is time to experiment! Why not write up your results and let us know what happened so future readers can learn.