Thanks for the tips. I feel much more confident about the situation now. I have set my sessions to never expire and instead use a session manager to invalidate them. This lets me manage HTTP and non-HTTP sessions in the same place, and gives me greater flexibility in monitoring and logging sessions. I don't see any problems with this arrangement.
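For anyone curious what I mean by managing invalidation in the application rather than via a container timeout, here is a minimal sketch. All the names (`SessionManager`, `touch`, `sweep`) are made up for illustration; the real code is more involved:

```python
import time

class SessionManager:
    """Illustrative central session manager: HTTP and non-HTTP sessions
    are registered here, and the application (not a container timeout)
    decides when to invalidate them."""

    def __init__(self, idle_limit_seconds):
        self.idle_limit = idle_limit_seconds
        self.sessions = {}  # session id -> last-activity timestamp

    def touch(self, session_id):
        # Record activity; called on every request or message.
        self.sessions[session_id] = time.time()

    def invalidate(self, session_id):
        # Explicit invalidation, e.g. on logout.
        self.sessions.pop(session_id, None)

    def sweep(self, now=None):
        # Invalidate sessions idle longer than the limit; return the
        # removed ids so they can be logged or monitored.
        now = time.time() if now is None else now
        stale = [sid for sid, last in self.sessions.items()
                 if now - last > self.idle_limit]
        for sid in stale:
            del self.sessions[sid]
        return stale
```

Because both session types go through the same `sweep`, monitoring and logging live in one place instead of being split between the web container and the back end.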
I hadn't considered that sessions might be written to the hard disk, which certainly changes the situation. Things look quite rosy now that I quantify the memory requirements: the bulk of the users are customers, who only access one type of listing page and have short sessions, which limits the main volume of session attributes. The non-customers have very long sessions, up to a week, but perhaps only half a dozen of them are logged in at any time. I have made the maximum number of records configurable and expect I'll set it to something like one thousand. This limit is enforced by an SQL helper called by the iterator, so it doesn't restrict back-end usage that must access all applicable records. Based on this, I can estimate the expected memory use as:
(customers logged in at any time) × (iterators per customer held in memory) × (max records per iterator) × (record size)
+ (non-customers logged in at any time) × (iterators per non-customer held in memory) × (max records per iterator) × (record size)
= 100 × 1 × 1,000 × 200 bytes + 6 × 5 × 1,000 × 200 bytes
= 20,000,000 bytes + 6,000,000 bytes
= 26 MB (using 1 MB == 1,000,000 B)
No worries there.
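The arithmetic above can be checked with a short script. The figures are the ones from the estimate; the helper function is just my way of writing it down:

```python
def expected_session_memory(user_classes):
    """Sum, over each user class, logged-in count * iterators held per
    user * max records per iterator * record size in bytes."""
    return sum(logged_in * iterators * max_records * record_bytes
               for logged_in, iterators, max_records, record_bytes
               in user_classes)

# (logged in at any time, iterators each, max records, record size in bytes)
customers = (100, 1, 1000, 200)
non_customers = (6, 5, 1000, 200)

total = expected_session_memory([customers, non_customers])
print(total)              # 26000000 bytes
print(total / 1_000_000)  # 26.0 MB (using 1 MB == 1,000,000 B)
```

Plugging in different values for the configurable record limit is then a one-line change, which is handy for sizing the configuration before deployment.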
Hopefully this example calculation can help other programmers out there.
Scalability should not be a major issue as long as we maintain high standards of efficient code and patterns. Managing sessions in the application will also help if we ever need to cluster it.