When dealing with very large number calculations (e.g. the Fibonacci series), which of the two options below is the better design, balancing memory usage against speed?
1. A memoization method calling a method that does the actual calculation with recursion, or
2. A memoization method calling a method that does the actual calculation with iteration?
P.S. I forgot to mention that in my memoization method, I'm just using a HashMap with cache.get() and cache.put().
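For context, a minimal sketch of what I mean (not my exact code): a HashMap cache consulted with cache.get() and filled with cache.put(), wrapped around a recursive calculation. The class and method names here are just for illustration, and I'm assuming long is wide enough for the values in question.

```java
import java.util.HashMap;
import java.util.Map;

public class Fib {
    private static final Map<Integer, Long> cache = new HashMap<>();

    // Memoized recursive Fibonacci: check the cache first,
    // compute and store on a miss.
    static long fib(int n) {
        if (n <= 1) return n;
        Long cached = cache.get(n);
        if (cached != null) return cached;
        long result = fib(n - 1) + fib(n - 2);
        cache.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // 12586269025
    }
}
```

The cache turns the naive exponential recursion into linear time, at the cost of one map entry per computed index.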
Tail recursion is what I used. When I timed the runs using System.nanoTime(), with 2 billion as the number limit in the Fibonacci series, the difference was about 5% in favor of memoization. So I posted the question to ask: for a 5% speed gain, are there memory-usage considerations that should be taken into account?
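To show what I'm comparing, here is a sketch of the tail-recursive and iterative versions timed with System.nanoTime() (again, not my exact code; the names and the input 90 are illustrative, chosen so the result still fits in a long):

```java
public class FibCompare {
    // Tail-recursive version: the accumulators a and b carry the state.
    // Note the JVM does not perform tail-call elimination, so this still
    // consumes one stack frame per step.
    static long fibTail(int n, long a, long b) {
        if (n == 0) return a;
        return fibTail(n - 1, b, a + b);
    }

    // Iterative version: same recurrence, explicit loop.
    static long fibIter(int n) {
        long a = 0, b = 1;
        for (int i = 0; i < n; i++) {
            long t = a + b;
            a = b;
            b = t;
        }
        return a;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long r1 = fibTail(90, 0, 1);
        long t1 = System.nanoTime();
        long r2 = fibIter(90);
        long t2 = System.nanoTime();
        System.out.printf("tail: %d ns, iter: %d ns, equal: %b%n",
                t1 - t0, t2 - t1, r1 == r2);
    }
}
```

A single nanoTime() pair like this is a very rough measurement; JIT warm-up alone can easily account for a few percent either way.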
Aren't there methods in the Runtime class that let you see memory consumption? Have you tried them?
Is it worth all this effort for a 5% speed improvement? Is a 5% difference even significant in the first place? I have seen greater differences appear apparently at random.
Yes, I added Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory() to both the tail-recursion and iteration versions, and the results were identical. I suppose that for certain applications the numbers may differ. But you're right: given a time saving of only about 5% and identical memory usage, it would not be worth spending much time trying to strike a balance between time and memory in this case.
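For anyone else reading, this is roughly how the measurement looks (a sketch, not my exact harness; the gc() call is only a hint to the JVM, so the reading is approximate):

```java
public class MemCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        rt.gc(); // request a collection so the snapshot is less noisy (hint only)
        // Approximate heap currently in use by this JVM
        long used = rt.totalMemory() - rt.freeMemory();
        System.out.println("Approx. used heap: " + used + " bytes");
    }
}
```

Bear in mind this measures the whole heap, not just the Fibonacci cache, so small per-algorithm differences can be lost in the noise.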
But in general, is incorporating memoization as a matter of course, to achieve any improvement at all, good design?