So, if I understand correctly, the program publishes some number of different webpages at a fixed interval, then proceeds to screen-scrape those same pages to read the information back for processing?
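Just so we're picturing the same thing, the round trip being described would amount to something like the sketch below (purely hypothetical; the `publish`/`scrape` helpers and the page format are invented for illustration):

```python
import os
import re
import tempfile

def publish(data, path):
    """'Publish' the data as a trivial webpage on disk."""
    with open(path, "w") as f:
        f.write("<html><body><p id='value'>%s</p></body></html>" % data)

def scrape(path):
    """Read the value back by scraping the very page we just wrote."""
    with open(path) as f:
        html = f.read()
    match = re.search(r"<p id='value'>(.*?)</p>", html)
    return match.group(1) if match else None

page = os.path.join(tempfile.gettempdir(), "roundtrip.html")
publish("42", page)
print(scrape(page))  # the data, back again, the long way round
```

In other words: serialize to HTML, then parse the HTML to get back what you started with.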
That would certainly be a ... novel approach.
Somehow I don't think I'm getting a clear picture here.
Build a man a fire, and he'll be warm for a day. Set a man on fire, and he'll be warm for the rest of his life.