Giving Streams The Least Work To Do

 
Jesse Silverman
Saloon Keeper
I realize that super-optimizing every line of code we write isn't the end-all and be-all of programming.

I further realize that illustrative examples are meant to teach us concepts, and not to provide us with production-ready code.

Yet, for the most part, when I write code, whether to illustrate something for an example on this site, an interview problem, stuff at work...I try to ensure that it isn't doing significant extra work.

Too many times I have seen pretty badly de-optimized code running in production for thousands of customers, millions of times a day, and when you want to "fix it" you need to get a note from your priest or rabbi, win an essay contest, notify your congress-person, and file forms in triplicate with three departments.  I'd see the same de-optimized code thousands of times, until I was lucky enough for a bug to manifest close enough to it that it could be cleaned up "as part of" the reluctantly required bug fix.  But I digress.

Stuart Marks is another one of these great programmers I watch carefully and try to learn from.  I might not emulate his hair style or diction, but his Java is another story.

In his Master Class at Devoxx:
https://www.youtube.com/watch?v=2c_KNH3s2S0

(about 11:58) He gives the following toy example:
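(The code isn't captured in this post; from memory it was something along these lines, where the specific word list and the length cut-off are my own stand-ins:)

import java.util.List;

public class StreamToyExample {
    public static void main(String[] args) {
        List<String> words = List.of("the", "quick", "brown", "fox",
                                     "jumps", "over", "lazy", "dog");
        words.stream()
             .map(String::toUpperCase)        // upper-case every word first...
             .filter(w -> w.length() > 3)     // ...then keep only the longer ones
             .forEach(System.out::println);
    }
}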



Now, when you are coding anything, for a contest, real work, an interview, etc. -- first make sure it works for all valid cases, that it handles invalid cases reasonably, then, if there is a reason, optimize it in terms of "looking for unnecessary work".  Let's pretend we were up to that phase already for this thread.

As a human programmer, I know that the .map() will not change the results of the .filter() step, at least for English.  I can imagine that in some language an upper-cased String might have a different Java length than its mixed-case counterpart, though it seems rather unlikely.

Therefore, in my procedural/object-oriented brain, I see "Why are you mapping ALL of them, we only need to .map() the ones that pass our filter??"
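In other words, my instinct is to swap the two stages (same hypothetical word list and cut-off as above):

words.stream()
     .filter(w -> w.length() > 3)     // winnow first...
     .map(String::toUpperCase)        // ...so only the survivors get upper-cased
     .forEach(System.out::println);

For plain English words the printed output is identical; what changes is how many temporary upper-cased Strings get created.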

I imagine a bunch of extra unnecessary pressure on the Garbage Collector, with lots of upper-cased short-lived unreferenced String objects being generated "for nothing".

I am (almost) constitutionally incapable of ignoring this, even on a first pass where I am just trying to formulate a valid Stream implementation of the desired code.

Is there some valid reason to place the .map() before the .filter() in this case that I am not thinking about?

I know that in the interesting parts of the JavaDocs for Stream, the implementation reserves the right to elide steps that can be shown not to affect the result of the Stream. I take that as a warning not to read the apparent detailed flow of a Stream pipeline too literally if Java notices "unnecessary work".

I think I understand that warning.

It seems unlikely, given the vagaries of various languages, that Java would decide String::toUpperCase can't change the length of a String and re-order the steps for us under the covers.  I could be wrong about THAT.

As stated at the opening, I realize this is a cute, fun, toy example for a tutorial presentation, not an attempt to optimize code that runs ten million times per second at Google.

Is it just that the ordering here probably doesn't make a big difference, or is having the .map() before the .filter() somehow "more correct"?

I have seen optimization attempts break working code way too many times to count, often on weird corner cases that eluded actual (but not ideal) automated testing gauntlets, so I understand why there was resistance to attempts to "optimize" production code not identified by customers as a performance bottleneck.  This actually makes me MORE concerned with writing reasonably optimal code the first time.  Sadly, it often applied to pretty obvious bug fixes as well, but that is another story.

Thanks for your opinions.

In another presentation by someone who teaches Java to many, I saw something like the following.
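(Again a reconstruction from memory rather than the exact code; the word list is mine, but the shape was roughly this:)

List<String> words = List.of("Apple", "cherry", "apple", "Banana");
words.stream()
     .sorted()                        // natural (code-point) order: Apple, Banana, apple, cherry
     .map(String::toUpperCase)        // ...so the two APPLEs end up separated by BANANA
     .forEach(System.out::println);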


"Dude!!" I said, "That isn't quite right.  If you are telling people that their output is sorted, and they see some identical WORDS far away from each other in the output, because they differed originally by case ONLY, and are now the same after you UPPERCASE, the Users are going to be Very Annoyed."

My wife does QA, and will often open up tickets for things like that.  I like to guess how they happened without seeing the code.  She just wants them fixed.

So I feel like I have been seeing a bunch of examples that act like ordering barely matters in Stream usage.
Some are just differences that might be considered "Premature Optimization" to care about.
But others affect the actual output, and seem to me unlikely to be making end users of the code happier.

Streams are allowing us a declarative style of coding that I appreciate, and hide some annoying details inside the pipes just like our plumbing systems do in our bathrooms.

I feel there is a little too much ignoring the ordering of steps tho in various code samples I am seeing.

Either I can't turn off my hyper-optimization brain in situations where that isn't appropriate (we aren't supposed to stare at our pipelines so closely, since the optimizer may re-order or skip steps; just write some clear code and leave it at that), or some people are being charmed into thinking less about the logic flow, in ways that can affect the correctness of even relatively small outputs.

I'm not sure which is going on.  Maybe both?
 
Master Rancher

Jesse Silverman wrote:I realize that super-optimizing every line of code we write isn't the end-all and be-all of programming.




You are dead to me.



...

 
Campbell Ritchie
Marshal

Jesse Silverman wrote:. . . in some language an UpperCase String might have a different Java length than its mixed-case counterpart . . .

In traditional German spelling, there was no upper case equivalent to ß, which changed to SS, adding one letter to the word's length.

. . . a bunch of extra unnecessary pressure on the Garbage Collector . . .

Those extra objects are probably created as local variables on some sort of stack, where they might not need garbage collection.

Is there some valid reason to place the .map() before the .filter() in this case that I am not thinking about?

It might not apply in this case, but map(...)...filter(...) is often semantically different from filter(...)...map(...).
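A minimal sketch of both points, using ß and an arbitrary length cut-off just for illustration:

String s = "straße";                                   // length 6
System.out.println(s.toUpperCase());                   // STRASSE, length 7
System.out.println(s.length() > 6);                    // false: filter before map would reject it
System.out.println(s.toUpperCase().length() > 6);      // true:  filter after map would keep it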

. . .  tutorial presentation, not an attempt to optimize code that runs ten million times per second . . .

Which is presumably why the slowest part of the whole process, viz. System.out.println(...), isn't optimised.

. . . annoyed . . .

I personally think that the concept of Strings implementing Comparable is dubious. I think that Comparable shouldn't only imply a total ordering, but also a single total ordering. Strings have two total orderings, viz. alphabetical (case insensitive) and so‑called ASCIIbetical, so I think Strings don't really have a total ordering. I realise I am in a minority holding that opinion.
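For example, the two orderings side by side (the two words are chosen just to show the difference):

List<String> list = new ArrayList<>(List.of("Zebra", "apple"));
list.sort(Comparator.naturalOrder());            // "ASCIIbetical": [Zebra, apple]
System.out.println(list);
list.sort(String.CASE_INSENSITIVE_ORDER);        // alphabetical:   [apple, Zebra]
System.out.println(list);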
 
Marshal

Campbell Ritchie wrote:I personally think that the concept of Strings implementing Comparable is dubious. I think that Comparable shouldn't only imply a total ordering, but also a single total ordering. Strings have two total orderings, viz. alphabetical (case insensitive) and so‑called ASCIIbetical, so I think Strings don't really have a total ordering. I realise I am in a minority holding that opinion.



Yes, String's Comparable implementation is simply the naive implementation which orders based on the Unicode code points in the String.

If you want language-sensitive ordering then a Collator is what you want. (Note that it implements Comparator and not Comparable.) Standard Collators can ignore accents (e.g. treating "c" and "ç" as not significantly different) and capitalization (e.g. treating "P" and "p" as not significantly different). And for other language-specific features (such as Norwegian having "å" at the end of the alphabet, which makes the Norwegian translation of "A to Z" be "A til Å") you can use a RuleBasedCollator. The sky seems to be the limit with those things; you can have your Norwegian collator treat "aa" the same as "å", for example.

Those collators of course don't provide a total order for all Strings, but that's a necessary consequence of them being language-sensitive. One wouldn't expect the Norwegians to worry about how Cyrillic characters are going to be ordered, for example.
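A rough sketch of the standard-Collator part (the locale and strength here are just for illustration):

import java.text.Collator;
import java.util.Locale;

public class CollatorDemo {
    public static void main(String[] args) {
        Collator fr = Collator.getInstance(Locale.FRENCH);
        fr.setStrength(Collator.PRIMARY);            // compare base letters only: accents and case ignored
        System.out.println(fr.compare("c", "ç"));    // 0 -> not significantly different
        System.out.println(fr.compare("P", "p"));    // 0 -> case ignored as well
        System.out.println("P".compareTo("p"));      // negative -> plain code-point comparison does see a difference
    }
}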
 
Tim Holloway
Saloon Keeper
The #1 rule for optimization is that you don't waste time optimizing things you don't need to. As I once pointed out to a "clever" programmer, you cannot save nanoseconds in a jar for later use.

And that, mind you, was in an era where programmers were (relatively) cheap and hardware was expensive. These days, spending time optimizing code can get you chewed out by the boss for not doing something more "productive". Like that Git-er-Dun project over there that Sales promised to the customer to be in production last Thursday.

But it strikes me that if you do get called to optimize a stream flow that there's a lot in common with SQL. If you run a SQL EXPLAIN, it will show you the different layers of operations and the cost of each. You strive to winnow as much bulk out of the data at the bottom levels so that you have less work to do on the higher ones. So, likewise would you do with the different stages of a Stream.
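Applied to a stream, the same idea looks roughly like this (a sketch; the word list and the counter are just for illustration, with peek() standing in for EXPLAIN's per-stage numbers):

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class StageCounts {
    public static void main(String[] args) {
        List<String> words = List.of("the", "quick", "brown", "fox",
                                     "jumps", "over", "lazy", "dog");
        AtomicInteger mapped = new AtomicInteger();

        words.stream()
             .filter(w -> w.length() > 3)            // winnow at the "bottom"...
             .map(String::toUpperCase)
             .peek(w -> mapped.incrementAndGet())    // ...and count what the map stage actually produced
             .forEach(System.out::println);

        System.out.println(mapped.get() + " of " + words.size() + " words were mapped");
        // With the map() placed before the filter(), the count would be all 8.
    }
}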
 
Jesse Silverman
Saloon Keeper

Tim Holloway wrote:The #1 rule for optimization is that you don't waste time optimizing things you don't need to. As I once pointed out to a "clever" programmer, you cannot save nanoseconds in a jar for later use.

And that, mind you, was in an era where programmers were (relatively) cheap and hardware was expensive. These days, spending time optimizing code can get you chewed out by the boss for not doing something more "productive". Like that Git-er-Dun project over there that Sales promised to the customer to be in production last Thursday.

But it strikes me that if you do get called to optimize a stream flow that there's a lot in common with SQL. If you run a SQL EXPLAIN, it will show you the different layers of operations and the cost of each. You strive to winnow as much bulk out of the data at the bottom levels so that you have less work to do on the higher ones. So, likewise would you do with the different stages of a Stream.



After working for 11 years developing a platform, where people would actually ask whether a customer had complained about the speed of something that got called in 1000 places (and that no customer even knew existed, because it wasn't a documented API), I spent about a year at customer sites, where literally every single person I asked said they would never complain that any part of the system was too fast, and that any places I could have sped things up would have been appreciated.  It was funny, because when trying to defend performance projects I would always say "Nobody is complaining it is too fast!", and again, the question was unfair because... hmmm... now that I think about it, I should have actually looked up everywhere something was called, and looked through the Bugzilla equivalent to see if anyone had ever complained that anything in its path was too pokey.  I could have cited those ticket numbers... Too late now...

Anyway, interesting.  I am filing this level of stuff away for later, but is there anything remotely analogous to SQL EXPLAIN for Java Streams?  I love that the same code can be used sequentially or with parallel streams, but have a sense that one needs to be more careful about constructing a stream flow to not totally undermine potential benefits of parallelism by disregarding its special needs...
 
Rob Spoor
Sheriff

Tim Holloway wrote:The #1 rule for optimization is that you don't waste time optimizing things you don't need to. As I once pointed out to a "clever" programmer, you cannot save nanoseconds in a jar for later use.


I fully agree. If I have to make a trade-off between readability and performance, I choose readability unless the performance gain is really significant.

I saw a great example last Thursday: https://thedailywtf.com/articles/wearing-a-mask. The provided code snippet works, and is probably quite efficient, but it looks like magic to me. I'm with the article's author:

Remy Porter wrote:I'm a simple person, so I'd be tempted to just do this with a loop and a bit-shift operation.

 
Campbell Ritchie
Marshal
I also agree with whoever said to document it, if you actually knew what you were writing. Or if you have gone to the trouble of finding Gaudet's algorithm, say where you found it.
In this case, accessing a device (the terminal) is probably slower than anything else, so concatenating the Strings into one and using one print call will probably make more difference to the performance than anything else. But nobody will notice how long it takes to print an eight‑element List anyway. Another thing is that Streams run on lazy execution, so it is possible that elements failing the filter() could be removed before the map(). Or that HotSpot will optimise that stage for a large collection; who cares about eight?
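For instance (a sketch, reusing the hypothetical word list from the first post):

String output = words.stream()
                     .filter(w -> w.length() > 3)
                     .map(String::toUpperCase)
                     .collect(java.util.stream.Collectors.joining(System.lineSeparator()));
System.out.println(output);          // one trip to the slow device instead of one per element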
 
Jesse Silverman
Saloon Keeper
Hi Rob:

I completely agree, but I'm not sure that was the best example.

The comments section pointed out potential portability issues in C on machines where int is 16 bits, which concerned me the most.

At least one other commenter recognized it as something from Hacker's Delight, a book that 99% of applications programmers will never look at but one that I believe I have seen referenced in the Javadocs.

Just found one reference, I vaguely remember there are others:
https://docs.oracle.com/en/java/javase/16/docs/api/java.base/java/lang/Integer.html

Should "normal" developers be doing that stuff in regular code that people are required to read to get their work done?  No.

But there are some things, like "convert an integral type of whatever size to/from big-endian/little-endian", that may get called zillions of times a day and are looked at by literally 0.0% of the programmers working on the system.  Once someone gets them right (and it is okay for a small piece of code that never changes to have unit tests that run for two hours), nobody reads them again, and they are called SO much that even a 10% difference means a 10% difference in all of the networking code.  The place I worked at for 16 years had code that did exactly that, and it ran on 20 different platforms, not counting different compiler option flags.
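The JDK itself ships that sort of routine; a couple of examples from Integer, the class whose docs carry the Hacker's Delight reference linked above:

int value = 0x12345678;
System.out.println(Integer.toHexString(Integer.reverseBytes(value)));   // 78563412 -> big/little-endian swap
System.out.println(Integer.bitCount(0xFF00));                           // 8 -> branch-free population count
System.out.println(Integer.numberOfTrailingZeros(0b101000));            // 3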

During intense debugging where Windows code was crashing deep in Microsoft call stacks I was completely blown away by how often WideCharToMultiByte and back was getting called when working with Strings.  Like, ZILLIONS of times.  I never tried to find and look at the code (I think it was proprietary, at least then) but the docs mentioned somewhere "Don't worry about this getting called zillions of times, it is ultra-hyper-optimized".  So everyone calls it zillions of times a day, like 3 or 4 people on Earth look at it.  That code is probably not easy to read.

Are there some Clever Idiots out there somewhere who liberally sprinkle their code with stuff from "Hacker's Delight" just to look smart?
I guess there must be.

Am I sorry that the people maintaining the Java Class Libraries read and borrowed from it?
Certainly not.
In fact, I am hoping they just never had occasion to update the reference to that of the 2nd edition:
https://www.amazon.com/Hackers-Delight-2nd-Henry-Warren/dp/0321842685/ref=sr_1_1?dchild=1&keywords=%22Hacker%27s+Delight%22&qid=1629549334&s=books&sr=1-1

There are great reasons that people are coding in Java or Python rather than C++ or Assembler.  Only people who've spent a lot of time doing those can fully appreciate those reasons.

So no, we shouldn't normally be writing code like that, but some of the underlying code we do call and never look at may be exactly that tortured.

This detracts from my original point, something I did to myself by opening with "Hey, I know that..."

I have seen, and continue to see, TONS of code that is less efficient by virtue of doing lots of extra work and not being at all the more readable!

Even really dumb things like repeatedly sorting something in a long loop that is only REMOVING things from it, in single-threaded code.
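The kind of thing I mean (a made-up sketch; tasks and process() are hypothetical):

// Wasteful: the list only ever shrinks, so its order never changes after the first sort
while (!tasks.isEmpty()) {
    Collections.sort(tasks);                      // re-sorted on every pass for nothing
    process(tasks.remove(tasks.size() - 1));
}

// No harder to read, far less work: sort once, then just keep removing
Collections.sort(tasks);
while (!tasks.isEmpty()) {
    process(tasks.remove(tasks.size() - 1));
}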

So writing apparently obfuscated code with no comments is simply unjustifiable.
Optimizing the heck out of code that isn't even a bottleneck for anyone, and tripping up the many people who will need to debug or modify it is unjustifiable.

But writing poorly performing expensive code that isn't actually any more readable than one that avoids lots of unnecessary work isn't so great either.

It was that actual phenomenon I was trying to get at.

Now I am reminded of something I watched about move semantics and return-value and other compiler optimizations in C++.
The hour-plus long example showed some code that appeared to do some ridiculous number of object creations and destructions, like 16 or 20.
They showed how in all modern C++ implementations that gets boiled down to not 12, or 8 or 5, but 1, I think even without compiler switches for optimizations.

I used to think such stuff wasn't happening much in Java, but I am less sure about that now.

Good discussion, but the rules for common code that everyone is working on daily together may not be the same as the rules for code that is called by a million times as many programmers as will ever read it, zillions of times a day.
 
Tim Holloway
Saloon Keeper
It's been over 30 years since I read a book (probably Ed Yourdon) which related a conversation between 2 programmers:

Programmer 1: "My code runs 100 microseconds faster than yours"

Programmer 2: "Yes, but mine works".

Too many people think code efficiency comes from gnarly code and clever tricks. In my experience, cleaner code often runs faster. And choosing the right algorithm makes all the difference.

As I've said before, one of the most powerful optimizations I've ever done was in using a Shell-Metzner sort on data that was almost but not quite already in order. Bad case for bubble sort, worst-case for Heap and Quick sorts. This was worthwhile because it was at the core of the master application control program for our shop and we ran that program hundreds of times per day.

And then there's the time when we stopped a mainframe from crashing at 4 pm every afternoon just by changing a single program option. Do you have any idea how hard it is to crash an IBM mainframe to begin with?

Here's the thing. My speciality is high-performance high-reliability stuff. No one has ever faulted me on that. But I have been repeatedly ragged for not being "productive" enough. Managers aren't looking at my code, they're looking at projects "done". Clients may appreciate more pep, but they appreciate more functionality a lot more.

Most code is NOT run thousands of times a day, and for many apps, more computing resources have actually been measured during the build-and-test phase than they actually consume in production.

So again. You oil the parts that squeak. You can polish the rest in your free time. Except, of course that these days, if you have free time, they'll lay someone off and give their job to you since you obviously aren't being efficiently utilized.
 
Jesse Silverman
Saloon Keeper
It's hard to crash a Mainframe, but at least in the 1990's it was not too hard to drag one to its knees.

There was some horribly hard-to-find bug in a big MVS program that came to me because the first programmer or two looking into it lost patience and gave up.  When I realized that the "data breakpoint" trick I had learned on PCs was indeed available in the (non-symbolic) debugger we were using, I felt Very Clever: whatever the bug was, I knew all hell broke loose the moment someone unknown stepped on a particular value, and I knew where that value was.  It should hardly ever have been touched.  I just needed to set that one data breakpoint and go!

I ran the use case and sat back to wait for Victory and Laurels.

About 40 minutes later, I got a furious, frantic call from the Lead Sysop.  "WHAT WAS I DOING??!!"

I told him and heard that for whatever reasons, using a data breakpoint in that version of that debugger on our version of MVS just totally, completely ate the whole machine.  Nobody had warned me not to do that, because nobody even realized it was something we could do.  (I guess the debugger was running stuff in some super-high privileged way)

The tools they used to monitor who was gobbling resources couldn't even squeeze in for upwards of half an hour to see it was me...nobody could get any work done!!

Data Breakpoints are super-useful, but I never used one in that debugger on MVS again...I eventually found the bug without a data breakpoint, taking many, many hours to find the needle in the haystack, which nobody in the world noticed this time except my manager...

Yikes, I still shudder when I remember that sudden transition from feeling "Very Clever" to "Very Embarrassed" so quickly.

On the other hand, I have come over to senior developers who had spent six to ten hours trying to track down a bug to no avail, like, nothing to show for it but frustration and disgust.

One well-chosen data breakpoint in Visual Studio and we had the answer in ten minutes or less.

That answer was almost always something that would make someone want to switch to Java or C# (someone colored outside the lines in a way that compiled, linked and ran, trashing totally unrelated memory).

I think your Yourdon quote addressed a recent question from someone else "Why should I be using Streams in Java, I think they are slower instead of faster?"

But putting some care and thought in the ordering of steps in a Stream of operations doesn't make for less readable code.

I'm going to take home the following two parts of your response:

Too many people think code efficiency comes from gnarly code and clever tricks. In my experience, cleaner code often runs faster. And choosing the right algorithm makes all the difference.



Most code is NOT run thousands of times a day, and for many apps, more computing resources have actually been measured during the build-and-test phase than they actually consume in production.


That applied to very little of the code that I worked on historically, but I am open to taking jobs where that would be true...so they are words to live by in those areas.

And the part that really sucks but has plainly become quite true:

You can polish the rest in your free time. Except, of course that these days, if you have free time, they'll lay someone off and give their job to you since you obviously aren't being efficiently utilized.

 
Jesse Silverman
Saloon Keeper

Campbell Ritchie wrote:
In this case, accessing a device (the terminal) is probably slower than anything else, so concatenating the Strings into one and using one print call will probably make more difference to the performance than anything else. But nobody will notice how long it takes to print an eight‑element List anyway. Another thing is that Streams run on lazy execution, so it is possible that elements failing the filter() could be removed before the map(). Or that HotSpot will optimise that stage for a large collection; who cares about eight?



Thanks, Campbell!

All the things you said about "Maybe this and that" are the kinds of things I was thinking about.

This is the "Lambdas and Streams" forum, but something I haven't done yet but am preparing to do is to start using Java Streams for HackerRank problems.

Which ones?  I don't know, but almost all of them have big, big BIG test cases to run against and are fairly stingy on CPU seconds and memory limits.

I believe there may be many problems I will be able to code correctly faster using Streams, I'll see how well they actually run when I get there.
 
Tim Holloway
Saloon Keeper
The #1 way to halt a mainframe while debugging is to physically put  it into single-step mode, but I think that actually required flipping a switch on the operator console.

The #2 way would be if you were indeed running at a high privilege level, although I can't imagine where, since most of the really sensitive stuff would be in places where debuggers don't work. I'd actually lay higher odds on a high-priority debugger holding back the lower-priority processes. Again, I can't think of a likely way to do that. Offhand, in fact, I don't recall hardware breakpoint support in VMs, and we weren't a VM shop.

I don't think I had much access to real-time debuggers myself. Mostly dead tree core dumps and later IPCS online dump analysis. All Post-mortem stuff.
 
Jesse Silverman
Saloon Keeper

Tim Holloway wrote:The #1 way to halt a mainframe while debugging is to physically put  it into single-step mode, but I think that actually required flipping a switch on the operator console.

The #2 way would be if you were indeed running at a high privilege level, although I can't imagine where, since most of the really sensitive stuff would be in places where debuggers don't work. I'd actually lay higher odds on a high-priority debugger holding back the lower-priority processes. Again, I can't think of a likely way to do that. Offhand, in fact, I don't recall hardware breakpoint support in VMs, and we weren't a VM shop.

I don't think I had much access to real-time debuggers myself. Mostly dead tree core dumps and later IPCS online dump analysis. All Post-mortem stuff.



It didn't quite *HALT* it; "ground things to a halt" is apt, but it's just a phrase.

I remembered while walking the dog that it was in a context that had multiple address spaces visible, whatever that implied, but I only brought it up because it was the closest I came to crashing a Mainframe.

Now you give me a flashback to 11th grade.  Mrs. Burke had a husband who worked at IBM in 1984, and was teaching us FORTRAN.

Somehow she got a hold of a decommissioned NYC attendance computer, that was about 20 years or so old at that time.
Our textbooks were targeted for FORTRAN 77, but that dang thing didn't even have a full FORTRAN 66, I think it pre-dated it.

I remember it had a whopping 4kbytes of memory, on a board between the size of an ironing board and a surfboard.

The FORTRAN didn't support nested DO LOOPS or a million other things literally every other FORTRAN I used had.

We had a great Printer for it, so I decided my next program would be a version of BANNER, because printing cute messages on a BIG PRINTER still seemed very cool to me before I turned 14.

"But Jesse, you can't have nested DO LOOPS, how are you going to code that??"

I came up with the coolest plan.

Sure, it didn't have NESTED DO LOOPS (and only had computed IF, not IF THEN ELSE etc. etc.) but it had those stupid implicit do loops in print statements.  I'll get the nesting by cheating!!

It took me only a couple of minutes to write it up, then I got the whole class to go follow me to the computer room to turn it into actual punch cards.

Boy was I excited!

It compiled and linked just fine!!  I was a genius.  Put in some cool //DATA cards for a nice message and....

It just DIED.  No errors, no normal crashing behavior, the whole thing just locked up...normal remedies did nothing, we had to cold power the whole thing down...REBOOT, RINSE, REREAD CARDS....CRASH!!

What a let-down.

Then not long after that, the whole thing just died.  She tried to use her husband's connection to get us some kind of deal from IBM on fixing it, but it was just too old, it would cost ten times more to fix than anyone had budget for, we were screwed.

I worked in the Library and the Librarians loved me.  The class was so small (I forget, between 5 and 8 people) that the Librarians let us go in the back and use the Library's Apple IIe as long as we were quiet enough.  I taught everyone how to graph math functions in AppleSoft basic (mostly the cool ones using polar co-ordinates and compass roses and such), and that was it for FORTRAN for me until the summer of '85 when I got to use nice, standard FORTRAN 77 on the PDP-11, with an awesome Tektronix plotter that could do like 6' x 30' plots...I spent $149 for Prospero FORTRAN 77 on my Atari 520 ST and used it constantly thru-out Engineering School for everything.  I printed out horribly low-resolution plots on my stupid Star SG-10 which at least used cheap standard typewriter ribbons so ink costs were under control.

So until I started working after I graduated with an Engineering Degree, the only mainframe I touched was for 7 credits of PL/C and PL/I, where you'd miss one closing quote and receive 300 pages of error output.  Except for that, PL/C had great error messages, PL/I not so much...I made tons of friends teaching students in danger of failing the class how to code and debug...we were the last semester to use Punch Cards, the summer Data Structures in PL/I course was all on terminals and Wylbur...I remember the machine had 4MB of memory, 1000 times as much as the IBM minicomputer I'd used earlier that year, but would have like 1000 users sharing it from all across the city...

The Mainframe and FORTRAN experience I had may have helped get me my first job.  I was horrified to see that none of the FORTRAN was FORTRAN-77, all "Old-Style" FORTRAN, with Strings packed into REAL types, it was awful.  I had also been using ANSI-style prototypes and some other extensions in C since like, 1986, so I was horrified to be working in K & R C for the first time...also, it was a very long time until I had source-level debugging again, but it wasn't just all dumps, I had live debugging with single-stepping and everything, and if you chose the right compile options to get the equivalent of the .obj output it was quite productive, even if your BAL wasn't very strong.   OK, that isn't true, if you couldn't read BAL you were hosed, but you could usually tell what line of C source you were at...

How is this even relevant?  Maybe it isn't, but it seems like it is.  It is like 16384 times easier to program stuff now than when I started.  For me and for everyone else.

From all the incompatible varieties of BASIC/FORTRAN/many assemblers, through PL/I (which became my favorite language for a few months), then Pascal (which definitely became my favorite language that year), to C, which I really never totally stopped using since 1986...

Yeah, the flashback you triggered was inspiring...our languages and environments are like 16384 or 32768 times more powerful and productive, and they only want us to do 8192 times as much work per day.

That means it should be two to four times easier...I'm going to continue plowing thru this Streams stuff, this is groovy!
 