Stephan van Hulst wrote:I don't see why any compilers would need more than two passes though.
Maybe the writer of the compiler got paid by the line?
It has been 15 years since I had my compilers class, and quite frankly, I stopped paying a lot of attention once I got a job. The only reason I was taking classes was to improve my chances of getting hired.
Even though I never turned in a complete, final project, I still got a 'B'.
I think historically, multiple passes were often performed because machines didn't have as much memory available as they do now, and it was expedient to break the task of compilation into smaller sub-tasks that required less total memory per task. Nowadays that's usually not an issue. But optimizing compilers may make multiple passes; after each optimization, the new code may be worth another pass to see what additional optimizations have been opened up.
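That cascading effect can be shown with a toy example. This is just a sketch of the idea, not anyone's actual compiler: a made-up three-address IR where pass 1 (constant folding) turns "if 2 - 2 goto L1" into "if 0 goto L1", and only then can pass 2 (dead-branch elimination) see that the branch is never taken.

```java
import java.util.*;

public class CascadingPasses {
    static boolean isInt(String s) { return s.matches("-?\\d+"); }

    // Pass 1: constant folding -- "if 2 - 2 goto L1" becomes "if 0 goto L1".
    static List<String> foldConstants(List<String> ir) {
        List<String> out = new ArrayList<>();
        for (String line : ir) {
            String[] t = line.split(" ");
            if (t.length == 6 && t[0].equals("if") && isInt(t[1]) && isInt(t[3])) {
                int a = Integer.parseInt(t[1]), b = Integer.parseInt(t[3]);
                int v = t[2].equals("+") ? a + b : t[2].equals("-") ? a - b : a * b;
                out.add("if " + v + " goto " + t[5]);
            } else {
                out.add(line);
            }
        }
        return out;
    }

    // Pass 2: dead-branch elimination -- "if 0 goto L" is never taken,
    // but only the folded output from pass 1 makes that visible.
    static List<String> pruneDeadBranches(List<String> ir) {
        List<String> out = new ArrayList<>();
        for (String line : ir) {
            if (!line.matches("if 0 goto \\S+")) out.add(line);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> ir = List.of("a = b + c", "if 2 - 2 goto L1", "d = a * 2");
        System.out.println(pruneDeadBranches(foldConstants(ir)));
        // [a = b + c, d = a * 2]
    }
}
```

Neither pass alone removes the branch; it's the second look at the first pass's output that does it, which is exactly the "another pass opens up more optimizations" point.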
I don't think that's quite right, Mike. Multi-pass compilers need more memory, because for each task they perform, they add more "decorations" to the abstract syntax tree, which is always in memory. Everything needs to be in memory because optimizations may be done at a later stage. With one-pass compilation, the compiler immediately emits the final target code, and forgets about it as it moves on to the next piece of code.
One-pass compilers were popular because they were simple and fast, and didn't require much memory. Now it's all about multi-pass because memory and speed aren't problems anymore, and they allow for more expressive languages (compare having to declare variables at the top of your function/program to declaring them at the point you need them).
Stephan is correct. The only way the compiler can fully resolve forward references is via two (or more) passes.
Anyway, why bother with this thread? It compiles, it runs.
FYI, I also had a compilers class in university. Aced it and finished top of the class, so there!
Stephan van Hulst wrote:I don't think that's quite right Mike. Multi-pass compilers need more memory, because for each task they perform, they add more "decorations" to the abstract syntax tree, which is always in memory. Everything needs to be in memory because optimizations may be done at a later stage.
I did say "historically" at the beginning of that comment. There was a time when they simply couldn't keep everything in memory for a single pass, so they wrote intermediate results to file or tape (!!!). For example, Algol 60, described here on p. 48:
Pass 1 would input a Cobol program from punched cards, perform a partial compilation and output intermediate code on one of the scratch tapes. Pass 2 would then input the intermediate code from tape, perform another partial compilation, and output slightly more detailed code on the other scratch tape, and so on. In this way, the compiler passes would be loaded, one at a time, from the system tape, while the compiled code would move back and forth between the scratch tapes, being gradually refined. The last pass would leave final code on a scratch tape, from which it could be loaded and executed. Since every pass performed a single scan of the original Cobol program (or the intermediate code), this scheme was known as multipass compilation. Multipass compilation made it possible to use a compiler that was much larger than the available core memory.
Well, if you really must have an answer to that question, then I suggest you search for a Java compiler online. I'm pretty sure you'll find one. Then download the source and your answer will be there.