I guess where you head depends in part on why you are doing this and what your goals are. You have provided a little more detail in your second post (indicating around 30 machines are involved). To me this sounds more like a business/real-world application than something you are doing for a class? I am not saying
you should "slack off" on a class assignment, just that a real-world solution needs to address a number of issues, or you may end up with some headaches.
Some points to consider:
1) How often will you be adding/removing servers from the list?
2) How will you address failures (mainly thinking of the case where the remote server is offline when you try to push an update)?
3) How many files are you monitoring and what is the complexity (1 directory, many directories, etc)?
4) Is file update your only concern, or do you need to trigger other activities like a cache refresh (reload files); bounce web servers; etc?
5) What is the implication of partial updates/updates in progress - i.e. if 15 machines have a new file and 15 have the old, does this create problems?
6) Is there a time window within which you are required to have the updates pushed?
7) Are you considering "re-send" or "re-sync" options for when a machine is offline and does not get an update? (This could mean re-syncing all files, or just re-sending the one that changed.)
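To make points 2 and 7 concrete, here is a minimal sketch of a push loop that retries transient failures and keeps a re-send queue for machines that stay unreachable. The `push_file` callable is a hypothetical stand-in -- in a real setup it might wrap rsync, scp, or an HTTP PUT:

```python
from typing import Callable, Dict, List

def push_to_all(hosts: List[str],
                push_file: Callable[[str], bool],
                retries: int = 2) -> Dict[str, List[str]]:
    """Try each host up to `retries + 1` times; report who got the file.

    Hosts that never succeed land on a re-send queue, so a later pass
    (e.g. a cron job) can retry them when they come back online.
    """
    succeeded: List[str] = []
    failed: List[str] = []
    for host in hosts:
        ok = False
        for _ in range(retries + 1):
            if push_file(host):  # stand-in for the real transfer
                ok = True
                break
        (succeeded if ok else failed).append(host)
    return {"ok": succeeded, "resend_queue": failed}
```

The important part is not the loop itself but that the failure list is persisted somewhere, so an offline machine eventually catches up instead of silently running old files.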
With 30 machines, I think you do want to look a little more into some sort of publish/subscribe concept. I am not trying to push this approach (no pun intended), but publish/subscribe lets each remote machine take responsibility for getting the update itself, which resolves some of the issues above, though it also adds a different angle of complexity.
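A hedged sketch of the "subscribe" side of that idea: each machine periodically compares a published manifest (file name to version number) against what it holds locally and fetches only what changed. The `fetch_file` callable is a hypothetical stand-in for the actual transfer:

```python
from typing import Callable, Dict, List

def sync_from_manifest(published: Dict[str, int],
                       local: Dict[str, int],
                       fetch_file: Callable[[str], None]) -> List[str]:
    """Fetch every file whose published version is newer than ours.

    Because the remote drives the comparison, a machine that was
    offline simply catches up on its next poll -- no server-side
    re-send bookkeeping is needed.
    """
    fetched: List[str] = []
    for name, version in published.items():
        if local.get(name, -1) < version:
            fetch_file(name)        # stand-in for HTTP GET, rsync, etc.
            local[name] = version   # record the version we now hold
            fetched.append(name)
    return fetched
```

This is where the "different angle of complexity" shows up: you now need somewhere to publish the manifest, and you trade push latency for polling frequency.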
If this is a real-world application, I'll also ask whether it is an option to look at existing solutions that may already do this for you.