If a file takes 100,321 bytes on disk and you load it into RAM, it will take roughly 100,321 bytes of RAM. Actual mileage will vary, since there is overhead in both cases: on disk, the directory information; in RAM, storage-management overhead. But for a file that big, the overhead is comparatively small on most systems and can thus be ignored. Usually.
A file on disk will always use complete sectors, and the sector size may vary. Thus, a file that is only 1 or 2 bytes long still occupies a full sector and uses proportionally much more space on disk.
A short bash session:
1. Echoing "a" into a file puts a newline ('\n') at the end, leading to two bytes.
2. Checking the size with wc (word count) shows: two characters.
3. du (disk usage) reports 4.0 K = 4096 bytes. Oops.
4. stat shows 8 blocks are used, each of 512 bytes.
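The steps above might look like this in a shell. This is a sketch assuming GNU coreutils on Linux (the stat and du flags differ on BSD/macOS), and the exact du figure depends on the filesystem's block size:

```shell
cd "$(mktemp -d)"        # work in a scratch directory

echo "a" > test.txt      # echo appends a newline, so the content is "a\n"
wc -c test.txt           # reports 2 bytes

du -h test.txt           # rounded up to a whole filesystem block, e.g. 4.0K
stat -c '%s bytes, %b blocks of %B bytes' test.txt
                         # e.g. "2 bytes, 8 blocks of 512 bytes" = 4096 bytes allocated
```

The gap between wc (logical size) and du/stat (allocated size) is exactly the sector/block rounding described above.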
A file of 4096 bytes would use the same amount of space on the disk.
Now, does a file need additional space for its directory entry? I don't know for filesystems other than mine (reiserfs 3.6), where I can test it; bash session 2:
1. Check free disk space (df).
2. Create an empty file (which needs no space for content).
3. Check free space again.
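Those three steps can be sketched like this (again assuming GNU coreutils; on many filesystems the directory entry and inode live in preallocated metadata, so df shows no change, but other filesystems may behave differently):

```shell
cd "$(mktemp -d)"

df -k .                  # note the "Used" column before
touch empty.txt          # create a zero-byte file
df -k .                  # "Used" is unchanged: no data blocks were allocated

stat -c '%s bytes, %b blocks' empty.txt   # 0 bytes, 0 blocks allocated
```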
We see: unchanged. A file of 100,321 bytes would use 102,400 bytes on my disk.
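The 102,400 figure is just the file size rounded up to whole 4096-byte blocks, which shell arithmetic can verify:

```shell
size=100321
block=4096

# ceiling division: how many whole blocks are needed
blocks=$(( (size + block - 1) / block ))   # 25 blocks

echo $(( blocks * block ))                 # 102400 bytes allocated on disk
```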
Loading a file into memory can be done in several ways. I can imagine storing it in a "HashMap<BigInteger, Character>", and I would expect that to be a very RAM-consuming way. [ May 22, 2007: Message edited by: Stefan Wagner ]