JavaRanch » Java Forums » Java » Java in General

Oh my gawd, Java is doomed

Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

Real Soon Now. or not.

I've been writing for a while that Java's multi-thread support is too hard to implement. It's OK for a simple UI thread and a background worker, but it doesn't scale to use all the cores in a modern computer system.
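In code, the easy case looks something like this (class name and workload are made up for illustration): one background worker, one thread that stays free and collects the result later.

```java
import java.util.concurrent.*;

public class BackgroundWorker {
    // The easy case: hand one task to a worker thread, keep the "UI" thread
    // free, and block only at the moment the answer is actually needed.
    static int computeInBackground() throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        Future<Integer> future = worker.submit(new Callable<Integer>() {
            public Integer call() {
                int sum = 0;
                for (int i = 1; i <= 100; i++) sum += i;  // stand-in for real work
                return sum;
            }
        });
        int result = future.get();  // blocks here, not before
        worker.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(computeInBackground());  // 5050
    }
}
```

Two threads, one hand-off: easy. The trouble starts when the work has to be cut into pieces that depend on each other.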

My last two desktop machines were both quad core.

Now we get to 12 in a consumer product:
http://www.engadget.com/2010/07/27/apple-mac-pro-line-overhauled-with-new-design-and-12-core-proces/

Apple Mac Pro line overhauled with 12 processing cores, arriving in August for $4,999

Well, it's a box for serious video editors and the like, not the mass market. But 12 processors is a lot.

Sixteen is just around the corner, perhaps by the end of this year?
Henry Wong
author
Sheriff

Joined: Sep 28, 2004
Posts: 18840
    

Pat Farrell wrote:
I've been writing for a while that Java's multi-thread support is too hard to implement. It's OK for a simple UI thread and a background worker, but it doesn't scale to use all the cores in a modern computer system.


Pat, what do you mean by "doesn't scale"? You have mentioned this in the past, and I always understood it as it gets really complicated -- which I don't really agree with, but understand the issues.

With this post, however, you are implying that you can't keep the processors busy. In that regard, I completely and totally disagree. Multi-core processors may be coming online in the personal computer market, but they have been available for a very long time. Even as early as last year, you could get a JVM running on hardware with over 800 processor cores.

Personally, I have worked on a project that was able to keep 300 processor cores busy. And it ran 300 times faster than on one core (linear scalability, all processors pegged). Now, true, it initially broke a ton of stuff, including the database -- but once all the external resources were scaled, it worked fine. In fact, the JVM wasn't even the hardest part to scale.
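For CPU-bound, independent work, "keeping the cores busy" is exactly what a fixed-size pool is for. A minimal sketch (class name and workload invented for illustration; the point is one pool thread per core, each task owning its own range):

```java
import java.util.*;
import java.util.concurrent.*;

public class BusyCores {
    // Trial division -- deliberately CPU-bound, with no shared state.
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; (long) d * d <= n; d++)
            if (n % d == 0) return false;
        return true;
    }

    static int countPrimes(final int upTo) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Integer>> parts = new ArrayList<Future<Integer>>();
        int chunk = upTo / cores + 1;
        // Each task gets a disjoint range: nothing to lock, nothing to share.
        for (int start = 2; start <= upTo; start += chunk) {
            final int from = start, to = Math.min(start + chunk - 1, upTo);
            parts.add(pool.submit(new Callable<Integer>() {
                public Integer call() {
                    int count = 0;
                    for (int n = from; n <= to; n++) if (isPrime(n)) count++;
                    return count;
                }
            }));
        }
        int total = 0;
        for (Future<Integer> p : parts) total += p.get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countPrimes(100));  // 25
    }
}
```

The hard part in real projects is rarely this loop; it's the external resources (the database, the queue brokers) that stop scaling first, as described above.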

Henry

Books: Java Threads, 3rd Edition, Jini in a Nutshell, and Java Gems (contributor)
Bert Bates
author
Sheriff

Joined: Oct 14, 2002
Posts: 8815
    
Yet another technology mismatch. It would seem the hardware guys are ahead of the software guys. If you use Amazon as a way to track technology trends, you don't see Erlang taking over the world...

My own "ad hoc reckoning" is that it's hard enough to write event-driven code. Is it possible that we just haven't come up with a software metaphor for parallel processing that's widely grasp-able by human brains?


Spot false dilemmas now, ask me how!
(If you're not on the edge, you're taking up too much room.)
Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

Henry Wong wrote:what do you mean by "doesn't scale"? You have mentioned this in the past, and I always understood it as it gets really complicated -- which I don't really agree with, but understand the issues.

With this post, however, you are implying that you can't keep the processors busy. In that regard, I completely and totally disagree. Multi-core processors may be coming online in the personal computer market, but they have been available for a very long time. Even as early as last year, you could get a JVM running on hardware with over 800 processor cores.


You have what I mean by "scale" right. Things like the Azul 800+ core processors are very cool. But I believe that a standard Java app, written by most of the programmers on sites like this (and I think we are smarter than the average bear), is just not going to keep dozens of cores working.

There are really two very different kinds of scaling. One is supporting a massive commercial website. Say Squarespace (which I hear is written in Scala). There are millions of hits per day, and the servlet container can spawn off hundreds of thousands of threads to process them. There is actually not a lot of parallel processing in them.

The second is dealing with large sets of data, processing complex algorithms against it. What you want to do here is partition the data and the algorithm and spray them across hundreds if not thousands of essentially identical portions. Not SIMD parallel, but real parallel. You have to partition the data, synchronize across the work units, transfer data between processors, do the next step, etc.

I posit that we need better tools to address this second kind of scaling.

To make this work, your functional statement has to be "invert this matrix", not "for (i : xSize) for (j : ySize) do...".
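For instance, stated as "sum this array" and handed to the fork/join framework (the JSR 166 work slated for Java 7), the splitting becomes the framework's job rather than a hand-written i/j loop. A sketch (class name and threshold invented for illustration):

```java
import java.util.concurrent.*;

public class MatrixStyle {
    // You say WHAT ("sum this array"); the framework decides HOW to split it.
    static class SumTask extends RecursiveTask<Long> {
        final long[] data; final int from, to;
        SumTask(long[] data, int from, int to) {
            this.data = data; this.from = from; this.to = to;
        }
        protected Long compute() {
            if (to - from <= 1000) {        // small enough: just loop
                long s = 0;
                for (int i = from; i < to; i++) s += data[i];
                return s;
            }
            int mid = (from + to) / 2;      // otherwise split and recurse
            SumTask left = new SumTask(data, from, mid);
            SumTask right = new SumTask(data, mid, to);
            left.fork();                    // left half runs on another core
            return right.compute() + left.join();
        }
    }

    static long sum(long[] data) {
        return new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
    }

    public static void main(String[] args) {
        long[] data = new long[10000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data));  // 50005000
    }
}
```

The programmer still chooses the decomposition and the threshold, which is exactly the "too hard for most of us" part.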
Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

Bert Bates wrote:Yet another technology mismatch. It would seem the hardware guys are ahead of the software guys. Is it possible that we just haven't come up with a software metaphor for parallel processing that's widely grasp-able by human brains?


The hardware guys have had their hands forced. For nearly a decade, they haven't been able to make a single core much faster. Or they could, but the power and heat requirements would take us to water and then freon cooling. Not gonna work in the mass market. So as they shrink the die size, all they can do is put more cores on the die. The Moore's Law result will be twice as many cores every couple of years.

There are really, really hard problems getting access to "main memory" from lots of cores, and most solutions lead to serious cache coherency issues. But for sure, the hardware folks are ahead of software folks like us.

I think you may be on to something. All programming since Lady Lovelace has been sequential. It's how we think. And we need to stop that. Humans actually do a lot of parallel processing: vision is massively parallel. Hearing music is parallel.

I've been in doctoral-level seminars, where to get in you have to have passed your comps for a PhD, where most of the parallel programming efforts failed to achieve useful speedups. It's just hard.
Henry Wong
author
Sheriff

Joined: Sep 28, 2004
Posts: 18840
    

Pat Farrell wrote:
You have what I mean by "scale" right. Things like the Azul 800+ core processors are very cool. But I believe that a standard Java app, written by most of the programmers on sites like this (and I think we are smarter than the average bear), is just not going to keep dozens of cores working.


Well, now I am not sure of what you are saying... If you are saying that it is hard to get an average program running with dozens of processors, I only mildly disagree. I don't completely agree, but don't disagree either.

If you are saying that you can't get anything to scale, then I completely disagree. And I disagree because I have done it. There are implementations of Java that do scale. True, certain techniques will break. Thread pools need to be cranked up. Message queues may need more than one broker. Bigger caches will be needed to mitigate database reads. Etc. But it has been done.

Henry

Henry Wong
author
Sheriff

Joined: Sep 28, 2004
Posts: 18840
    

Pat Farrell wrote:
There are really two very different kinds of scaling. One is supporting a massive commercial website. Say Squarespace (which I hear is in Scala). There are millions of hits per day, and the servlet container can spawn off hundreds of thousands of threads to process it. There is actually not a lot of parallel processing in them.

The second is dealing with large sets of data, processing complex algorithms against it. What you want to do here is partition the data and the algorithm and spray them across hundreds if not thousands of essentially identical portions. Not SIMD parallel, but real parallel. You have to partition the data, synchronize across the work units, transfer data between processors, do the next step, etc.


Again, I have worked on dozens of projects over the last few years, on projects that fit both of these descriptions. And achieving scaling at a grand scale is possible. Now... true, for some projects we only got about 30 processors busy. But these were IO-bound applications, which originally couldn't keep even one processor busy, so a 100-fold increase isn't bad.

Could we have accomplished more? Maybe. At a certain point, when all SLAs are met and external resources (such as the DB) become more difficult to scale, there is just less incentive to do it.

Henry
Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

Henry Wong wrote:Well, now I am not sure of what you are saying... If you are saying that it is hard to get an average program running with dozens of processors, I only mildly disagree. I don't completely agree, but don't disagree either.

If you are saying that you can't get anything to scale, then I completely disagree.

I'm not saying that really smart, dedicated folks can't make anything scale. They can do anything. I'm concerned about the 80% of folks who are not really smart, dedicated, motivated and hard working.

What I am saying is that it is way too hard for the average professional programmer. You have to break the algorithms into thread pools and manage them. Sure, I bought the first edition of @Henry's book, and have been using it ever since. It can be done. And it's easier with later JDK support. But it's still based on very simple primitives that require the programmer to understand how all the sharing works. And when the programmer forgets something, you get subtle race conditions that can be nearly impossible to replicate and eradicate.
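A race of exactly that kind fits in a dozen lines (class name invented for illustration; how many increments get lost varies from run to run, and on a lightly loaded machine the bug may not show at all, which is what makes it so hard to eradicate):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int unsafeCount = 0;
    static final AtomicInteger safeCount = new AtomicInteger();

    // count++ is read-modify-write: two threads can read the same value,
    // both add one, and one increment is silently lost.
    static void run() throws InterruptedException {
        unsafeCount = 0;
        safeCount.set(0);
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    unsafeCount++;                // racy
                    safeCount.incrementAndGet();  // atomic
                }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // safeCount is always 200000; unsafeCount usually is not.
    }

    public static void main(String[] args) throws Exception {
        run();
        System.out.println(unsafeCount + " vs " + safeCount.get());
    }
}
```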
Henry Wong
author
Sheriff

Joined: Sep 28, 2004
Posts: 18840
    


Pat Farrell wrote:
There are really, really hard problems getting access to "main memory" from lots of cores, and most solutions lead to serious cache coherency issues. But for sure, the hardware folks are ahead of software folks like us.


This is a valid argument, until, of course, someone actually does it. And then the argument becomes moot. And since you brought up Azul: the high-end Azul JVM, with 800+ processors, has access to almost as many GB of memory, and is fully an SMP. Every core can access every part of memory, and with the same access times.

Pat Farrell wrote:What you want to do here is partition the data and the algorithm and spray them across hundreds if not thousands of essentially identical portions. Not SIMD parallel, but real parallel. You have to partition the data, synchronize across the work units, transfer data between processors, do the next step, etc.


With Azul, since it is a full SMP, you don't need to partition the data for the processors. You just assign the processors to different work, and let the data stay where it is. You could synchronize at a lower granularity, but you don't have to either. The hardware supports "speculative locking". This means that if two threads try to grab the same lock, they will both be granted it. And as long as they don't touch the same memory (no memory cache collisions), they will both be synced atomically. If they do collide, one will be rolled back (the processor returned to the earlier instruction pointer, and the memory changes undone).
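Azul's hardware speculation is proprietary, but a rough software cousin of the same "proceed optimistically, detect the collision, retry" idea is the compare-and-set loop in java.util.concurrent.atomic (class name invented for illustration):

```java
import java.util.concurrent.atomic.AtomicLong;

public class OptimisticCounter {
    static final AtomicLong value = new AtomicLong();

    // No lock is taken. Proceed on the assumption nobody collided; if the
    // compareAndSet fails, another thread got there first, and the "rollback"
    // is simply retrying with fresh state.
    static long addOptimistically(long delta) {
        while (true) {
            long seen = value.get();          // speculate: read current state
            long next = seen + delta;
            if (value.compareAndSet(seen, next))
                return next;                  // commit succeeded
            // collision detected: loop and retry
        }
    }

    public static void main(String[] args) throws Exception {
        value.set(0);  // reset so the demo is repeatable
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < 50000; j++) addOptimistically(1);
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(value.get());  // 200000
    }
}
```

The difference is that the hardware version covers arbitrary memory inside the lock, not a single word; the retry loop here only works because the whole update is one CAS.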

Henry


* full disclosure -- I used to be a performance engineer, threading expert, for Azul. Worked on a lot of threaded projects for Azul's customers.
Henry Wong
author
Sheriff

Joined: Sep 28, 2004
Posts: 18840
    

Pat Farrell wrote:
I'm not saying that really smart, dedicated folks can't make anything scale. They can do anything. I'm concerned about the 80% of folks who are not really smart, dedicated, motivated and hard working.

What I am saying is that it is way too hard for the average professional programmer. You have to break the algorithms into thread pools and manage them. Sure, I bought the first edition of @Henry's book, and have been using it ever since. It can be done. And it's easier with later JDK support. But it's still based on very simple primitives that require the programmer to understand how all the sharing works. And when the programmer forgets something, you get subtle race conditions that can be nearly impossible to replicate and eradicate.


That's the point that I am trying to make here... For a large part, I agree with you. Threading is hard. And quite frankly, I think that you are being very generous with the 80% remark. Even a lot of very smart people that I know have issues thinking concurrently.

The issue was with the premise. The premise is that Java won't scale, and hence, is doomed. If the premise was that Java is very difficult to scale, and hence, will have issues in the future, then I would not have disagreed (at least, not as strongly as I did).

Henry
Bert Bates
author
Sheriff

Joined: Oct 14, 2002
Posts: 8815
    
Hey Henry,

Stepping back a bit, I would surmise that when you were doing this work, your brain was in a very special place. I imagine it took you and the other members of your team several months to really get the correct mindset. In other words it wasn't the kind of stuff that you could just walk into cold. In other, other words, you had the problem domain, but then you also had the "solution domain", the "thinking in terms of parallel processing" domain.

I remember when I was doing expert systems it took me a while to develop in myself an "expert system way" of looking at problems.
Paul Clapham
Bartender

Joined: Oct 14, 2005
Posts: 18570
    

Pat Farrell wrote:I'm not saying that really smart, dedicated folks can't make anything scale. They can do anything. I'm concerned about the 80% of folks who are not really smart, dedicated, motivated and hard working.


That's why those 20% wrote the database servers and the web application servers and the object-relational-mapping systems, so that the 80% don't continually have to reinvent those wheels. And from the threading point of view, that's why the java.util.concurrent package exists -- so the 80% don't have to continually blunder through the wait-notify jungle.

Only problem is, the wait-notify jungle is still there. What needs to be done next -- back to your original point, I think, Pat -- is that all of those low-level C-type classes need to be deprecated or just plain discarded. Have the language provide high-level tools and implement those tools in the best possible way. Don't provide alternative low-level tools for people to seize upon because they must be "more efficient" or something like that.
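Paul's point in code: with a BlockingQueue, two threads hand work off without a wait/notify in sight (class name and the poison-pill convention are invented for illustration):

```java
import java.util.concurrent.*;

public class NoWaitNotify {
    // Producer-consumer with no wait(), no notify(), no synchronized blocks:
    // the queue does all the blocking and signaling internally.
    static int sumViaQueue(int items) throws InterruptedException {
        final BlockingQueue<Integer> queue = new ArrayBlockingQueue<Integer>(16);
        final int[] total = new int[1];

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        int item = queue.take();  // blocks until work arrives
                        if (item < 0) return;     // poison pill: stop
                        total[0] += item;
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();
        for (int i = 1; i <= items; i++) queue.put(i);  // blocks when full
        queue.put(-1);                                  // tell consumer to finish
        consumer.join();                                // join gives visibility of total
        return total[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumViaQueue(100));  // 5050
    }
}
```

The wait-notify jungle is still underneath, but the 80% never have to walk into it.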

Unfortunately what I'm describing here is a thorough overhaul of Java. That's something which I don't think has ever been done in the history of programming languages. And I don't see anybody, or at least not anybody with sufficient drive, who would be willing to attempt that fork.
Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

Paul Clapham wrote:Unfortunately what I'm describing here is a thorough overhaul of Java. That's something which I don't think has ever been done in the history of programming languages. And I don't see anybody, or at least not anybody with sufficient drive, who would be willing to attempt that fork.


I believe, and this is just personal opinion, that you are right, and we have to use Kbwb, where we add one to each letter of Java. Existing languages have to be incrementally evolved. Which is why Fortran had zero-trip DO loops for decades after everyone agreed that they should be guarded like Algol, C, Java, ....

You simply can't remove all the cruft that has to be removed from Java and still call it Java.

My belief is that the sooner we stop writing Java and start writing in a more modern language that uses the JVM, the sooner we will be able to use the 12 core systems that Apple announced this week.

Whether the real number for "too hard" is 80% or 95% or 99% makes no difference to my argument. It's way too hard for most professional developers. That in and of itself doesn't scale. Changing the number changes the area under the tail of the curve. We need to address the heart of the bell curve.

Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

Henry Wong wrote: And quite frankly, I think that you are being very generous with the 80% remark. Even a lot of very smart people that I know have issues thinking concurrently.

Henry, you literally wrote the book on the topic. You are smarter than the average bear.

Henry Wong wrote:The issue was with the premise. The premise is that Java won't scale, and hence, is doomed. If the premise was Java is very difficult to scale, and hence, will have issues in the future, then I would not have disagreed (at least, not as strong as I did).

Then it was sloppy wording on my part. My premise is that Java is doomed because programmers and Java won't scale to keep up with both the problem sizes and the tools we have. Java scaling is too hard, takes too many smart people, you can't hire them for love nor money.

I was durn'd good with Macro Assembly in the olden days. But those times have passed. By the 80s, the claim that you had to write in assembly/macro to get speed was starting to crack; in the 90s it was blown up. Delayed branches, speculative execution, etc. made it so that you had to have the compiler do the optimization. We are at a similar turning point today. Soon cell phones will have quad-core processors. I'm sure that my next desktop will have 16 or 32 cores.

Oh, and by the way, all the interesting CPU-ish work is being done these days by ATI and NVIDIA, so whatever language we use will have to talk in that space as well. Write once, run anywhere sound good?
William Brogden
Author and all-around good cowpoke
Rancher

Joined: Mar 22, 2000
Posts: 12785
    
Seems to me the real successful approach to using multiple cores would be closer to grid computing and map-reduce, where you don't really expect shared memory but use reallllly fast message passing.

Somewhere around here I have a Transputer chip-set - what we hoped would solve the multi core problem back in the 80s. OCCAM was the language required to make full use of it - how time flies.
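That shared-nothing style, in miniature (names invented for illustration; a real grid passes messages over a network, not an in-process queue): each worker owns a private slice of the data, mails its partial result to a mailbox, and one reducer folds the partials together.

```java
import java.util.concurrent.*;

public class ShareNothing {
    // "Map": each worker sums its own slice, touching no shared state.
    // "Reduce": the caller combines the partial results from the mailbox.
    static long mapReduceSum(final long[] data, int workers)
            throws InterruptedException {
        final BlockingQueue<Long> mailbox = new LinkedBlockingQueue<Long>();
        int chunk = data.length / workers;
        for (int w = 0; w < workers; w++) {
            final int from = w * chunk;
            final int to = (w == workers - 1) ? data.length : from + chunk;
            new Thread(new Runnable() {
                public void run() {
                    long partial = 0;
                    for (int i = from; i < to; i++) partial += data[i]; // private slice
                    mailbox.add(partial);  // a message, not shared memory
                }
            }).start();
        }
        long total = 0;
        for (int w = 0; w < workers; w++) total += mailbox.take();  // reduce
        return total;
    }

    public static void main(String[] args) throws Exception {
        long[] data = new long[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(mapReduceSum(data, 4));  // 500500
    }
}
```

No locks anywhere, because no two threads ever write the same data; the only coordination is the mailbox.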

Bill
Henry Wong
author
Sheriff

Joined: Sep 28, 2004
Posts: 18840
    

Pat Farrell wrote:
I was durn'd good with Macro Assembly in the olden days. But those times have passed. By the 80s, the claim that you had to write in assembly/macro to get speed was starting to crack; in the 90s it was blown up. Delayed branches, speculative execution, etc. made it so that you had to have the compiler do the optimization. We are at a similar turning point today. Soon cell phones will have quad-core processors. I'm sure that my next desktop will have 16 or 32 cores.


I don't disagree, but you have to understand... I am a bit jaded. My multiprocessor days go back to the early 90s, so I have seen this "turning point" argument many times before.

From the "parallelizing compilers" of the early 90s, which were supposed to take threading concerns away from the programmer completely -- just code like it's a fast single processor and the compiler will take care of it. To the object-oriented approach of the mid 90s, which was supposed to make it easy. In fact, that was one of the selling points of Java. To the frameworks, like servlets, EJBs, and web services, that were supposed to take care of everything, so that all you have to do is code single-threaded for one transaction. To these modern frameworks and languages that are supposed to ... etc. etc. etc.

Henry
Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

Henry Wong wrote:My multiprocessor days go back to the early 90s, so I have seen this "turning point" argument many times before.

From the "parallelizing compilers" of the early 90s, which were supposed to take threading concerns away from the programmer completely -- just code like it's a fast single processor and the compiler will take care of it. To the object-oriented approach of the mid 90s, which was supposed to make it easy. In fact, that was one of the selling points of Java. To the frameworks, like servlets, EJBs, and web services, that were supposed to take care of everything, so that all you have to do is code single-threaded for one transaction. To these modern frameworks and languages that are supposed to ... etc. etc. etc.


To quote Fred Brooks, "There is no silver bullet."

You must be a young'n. I first used a dual-CPU system in '71 or so. It was a PDP-10 KA. They expected to get a 70% bump from the second CPU. In reality, they got about 40%. By the time they got the OS/monitor to do a decent job, the CPU itself was completely obsolete. I think we are seeing déjà vu all over again.

OO was much earlier than the 90s; Smalltalk was hot in 1980. For ages (still?) the ParcPlace Smalltalk system was called Smalltalk-80. I did a lot of C++ for Windows 3.0 and 3.1. C++ is a classic example of why you should kill the old language, C, rather than building a ton of cruft on top of it for some claim of upward compatibility.

Frameworks like EJB are demon spawn. Not nice daemons, but evil smoking, acid-tongued, fire-breathing demons. WSDL is another crock. As are RMI and applets.

David Newton
Author
Rancher

Joined: Sep 29, 2008
Posts: 12617

William Brogden wrote:Seems to me the real successful approach to using multiple cores would be closer to grid computing and map-reduce, where you don't really expect shared memory but use reallllly fast message passing.

Somewhere around here I have a Transputer chip-set - what we hoped would solve the multi core problem back in the 80s. OCCAM was the language required to make full use of it - how time flies.

I loved the Transputer; had a graphics board with it. Of course, back then, I was all over symmetrical multiprocessing, and did various parallel algorithms etc. (including a one-time shot at running on a Connection Machine!) and a few other decent machines (decent back then), but most of that was using languages specifically designed for parallelization.

I also did a fair amount of work on embedded systems in the late 80s/90s with multiple processors, but only a few had real shared memory--most were message-passing through RAM queues and interrupts.
Jeanne Boyarsky
internet detective
Marshal

Joined: May 26, 2003
Posts: 30537
    

Paul Clapham wrote:
Pat Farrell wrote:I'm not saying that really smart, dedicated folks can't make anything scale. They can do anything. I'm concerned about the 80% of folks who are not really smart, dedicated, motivated and hard working.


That's why those 20% wrote the database servers and the web application servers and the object-relational-mapping systems, so that the 80% don't continually have to reinvent those wheels. And from the threading point of view, that's why the java.util.concurrent package exists -- so the 80% don't have to continually blunder through the wait-notify jungle.

I think this is key. One of the big uses for Java is for web apps. I don't *need* to parallelize much of the work done in a web app. What I need is for the application server to parallelize well so I can handle 300 users in parallel.

Similarly for queues, another enterprise Java concept. If I can process more messages off the queue at once, I can dump "work bits" on the queue and not think of parallelization too much there either. (A little so I don't have database locks, but if I'm working on different data, I should be ok.)


[Blog] [JavaRanch FAQ] [How To Ask Questions The Smart Way] [Book Promos]
Blogging on Certs: SCEA Part 1, Part 2 & 3, Core Spring 3, OCAJP, OCPJP beta, TOGAF part 1 and part 2
Martijn Verburg
author
Bartender

Joined: Jun 24, 2003
Posts: 3274
    

Some of the new Java 7 features (such as NIO.2) are trying to keep up with hardware/software advances; could be worth checking out!


Cheers, Martijn - Blog,
Twitter, PCGen, Ikasan, My The Well-Grounded Java Developer book!,
My start-up.
Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

Martijn Verburg wrote:Some of the new Java 7 features (such as NIO.2) are trying to keep up with hardware/software advances; could be worth checking out!


Back when NIO came out, I spent a lot of time trying to use it. I probably wasted a month on it. What a disaster.

If I was in charge, I would invent a new name, as NIO is tarnished by its first version.
Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

Interesting "intro to Javascript" from the Google Tech Talks.
Intro to Javascript

Of course, Javascript has nothing to do with Java, other than it was invented about the same time that Java became popular.

Key concepts mentioned in the talk:

1) Javascript is functional, not OO
2) you can do OO style things if you want, but it has no inheritance.

For years, the cross-browser incompatibilities caused me to run away whenever Javascript was mentioned, but some smart folks are saying that Javascript is a viable language for server-side stuff. Sounds far out to me, but I'm willing to listen to the arguments.
David Newton
Author
Rancher

Joined: Sep 29, 2008
Posts: 12617

JavaScript *IS* OO, and I don't understand why anyone ever claims otherwise. Just because it's not class-based doesn't mean it's not OO, nor that it doesn't have inheritance--just not the same kind. (And it's only "functional" in the sense that functions are first-class objects, which is a subset of what functional languages offer.)

Modulo a few irritating warts, JavaScript is a *strong*, powerful, expressive programming language. I don't see how it'd be any less viable for server-side development than any other similar language; without knowing any arguments *against* it, I wouldn't know how to advocate for it any differently than similar languages.
Henry Wong
author
Sheriff

Joined: Sep 28, 2004
Posts: 18840
    

Pat Farrell wrote:
1) Javascript is functional, not OO
2) you can do OO style things if you want, but it has no inheritance.


Had a pretty heated discussion about this earlier this year. I came down on the side of "Javascript is *not* OO". It has OO features with which you can simulate OO, but it is not OO.

It was probably not a good idea to get that heated, as it was part of a job interview. But hey, you can only let a certain number of disagreeable claims get by.

Henry
David Newton
Author
Rancher

Joined: Sep 29, 2008
Posts: 12617

How did this thread turn to JavaScript?
Henry Wong
author
Sheriff

Joined: Sep 28, 2004
Posts: 18840
    

Pat Farrell wrote:
For years, the cross-browser incompatibilities caused me to run away whenever Javascript was mentioned, but some smart folks are saying that Javascript is a viable language for server-side stuff. Sounds far out to me, but I'm willing to listen to the arguments.


I have to admit that I ran into a ton of cross-browser issues. But it was probably my fault -- I was so fearful of it that I stayed in Firefox way too long, and wound up with a lot of code that was untested on other browsers. I should have cross-browser tested from the beginning.

Once I got past it, and understood the quirks, it was fine. Javascript is probably one of my favorite languages.

David Newton wrote:JavaScript *IS* OO, and I don't understand why anyone ever claims otherwise. Just because it's not class-based doesn't mean it's not OO, nor that it doesn't have inheritance--just not the same kind. (And it's only "functional" in the sense that functions are first-class objects, which is a subset of what functional languages offer.)


And no. My heated debate was not with David.

Henry
David Newton
Author
Rancher

Joined: Sep 29, 2008
Posts: 12617

:)

I'd have to disagree, though--check out Self.
Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

David Newton wrote:How did this thread turn to JavaScript?

What, they are not the same?

Javascript's functional, non-OO take on the world is interesting. And the first letters of Javascript are the same as Java.

I personally hate the lack of compile-time type safety. Perhaps I'm just too old. But I do like that Javascript doesn't just throw an NPE and die for a value that can't be resolved.

The world of programmers has not embraced the functional languages, and how Javascript does closures is sure foreign to my brain. But then, Scala looks very foreign to me as well.
Debbie Waltz
Greenhorn

Joined: May 12, 2008
Posts: 13
Pat Farrell wrote:Real Soon Now. or not.

I've been writing for a while that Java's multi-thread support is too hard to implement.
...


For what it's worth, it is far simpler than doing it in C/C++, and that is what is mostly used elsewhere.
If you want to preserve your Java classes and get something that makes it easier to implement parallelism, have a look at the Scala language: it has features, like the Erlang language that someone mentioned, that make it simple to create "workers" (look in the Scala By Example guide, section 17.9).

The point anyway is that to get real advantages you have to change the way you think about your code: abolish "shared state" and your code will become easier to parallelize. Have a look at Van Roy's Programming Paradigms for Dummies.
David Newton
Author
Rancher

Joined: Sep 29, 2008
Posts: 12617

Or make it easier yet and use Clojure--then you get a Lisp, too. (IMO Scala is quasi-functional, but also haven't worked with it for some time now, so I'm quite behind. Clojure is my non-Java JVM language of choice.)
Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

Debbie Waltz wrote:For what it matters, it is far simpler than doing it in C/C++.....have a look at Van Roy's Programming Paradigms for Dummies.


Being far better, simpler, etc. than C/C++ is not much of an endorsement. In the 70s, C was impressive. C++ was never better than C--, even Objective C did a better job of grafting OO onto C.

Thanks for the link.
David Newton
Author
Rancher

Joined: Sep 29, 2008
Posts: 12617

Pat Farrell wrote:how Javascript does closures is sure foreign to my brain.

What do you see as the significant differences compared to other languages' closures?
Pat Farrell
Rancher

Joined: Aug 11, 2007
Posts: 4658
    

I can't claim that I've written enough serious Javascript to fully grok its closures. And I think I have an immune reaction against the word "closure" itself. When I was writing Smalltalk, blocks were natural. So closures should be natural as well. Yet for some reason, they don't seem natural to me. Perhaps it's the kind of mental impedance mismatch that lots of folks have with parallel algorithms.

I am finding myself forced to do more and more serious Javascript, which I do not enjoy (jQuery, etc. make it more acceptable). I'd much rather be working in Java++ or Scala. But as a pro, I do what the guys with money want.

I find it interesting that Java and C have strong walls between functions and objects, while the future seems to be weaker (or no) distinctions. Long ago, pointers to functions were common idioms in C and BLISS; until they added reflection, it was impossible to do that in Java.
David Newton
Author
Rancher

Joined: Sep 29, 2008
Posts: 12617

Okay, so what are the differences between Smalltalk blocks and JavaScript closures? (You can talk Smalltalk to me; my first job was Smalltalk, and I'm getting back into it with Pharo/Cog -- it's good.)

I don't see them as being conceptually different on a meaningful level, although early implementations (IMO) were a bit broken with how block parameters were re-used. But that was pretty early in Smalltalk's history, IIRC.
João Bispo
Greenhorn

Joined: Mar 05, 2009
Posts: 5
Pat Farrell wrote:I think you may be on to something. All programming since Lady Lovelace has been sequential. It's how we think. And we need to stop that. Humans actually do a lot of parallel processing: vision is massively parallel. Hearing music is parallel.


The brain works as a massively parallel processor, but our mind, on top of that "parallel hardware", is mostly sequential. We will not stop thinking "sequentially".

Many-core will hardly be the answer for the future of programming. Companies will not admit it, but they are already looking into heterogeneous systems.

Future Microprocessors: Multi-core, Mega-nonsense, and What We Must Do Differently Moving Forward
Raul Guerrero
Greenhorn

Joined: Apr 13, 2009
Posts: 7
João Bispo wrote:
Pat Farrell wrote:I think you may be on to something. All programming since Lady Lovelace has been sequential. It's how we think, and we need to stop that. Humans actually do a lot of parallel processing: vision is massively parallel, and hearing music is parallel.


The brain works as a massively parallel processor, but our mind, running on top of that "parallel hardware", is mostly sequential. We will not stop thinking sequentially.

Many-core will hardly be the answer for the future of programming. Companies will not admit it, but they are already looking into heterogeneous systems.

Future Microprocessors: Multi-core, Mega-nonsense, and What We Must Do Differently Moving Forward


Yes, and speaking of how the brain works, it's the perfect example of the current hardware/software problem. The brain is a massively parallel computer, but because our minds think sequentially, we are only able to use between 5-15% of our "hardware" capacity at most. If we were able to consciously think in parallel, we could use our brains to full capacity, which is pretty much what is being proposed here; but, just as in programming, that comes at a big price. Imagine we were able to think in parallel and use all of our brain: then comes another major issue, the heartbeat, breathing, and all the other life-critical functions that the brain handles automagically for safety reasons. What would happen if we had to control them manually? If we had an accident and the brain went unconscious, we would be dead for sure.

The same goes for computers: if we move from sequential to parallel software programming, we will be able to fully exploit all the new hardware arriving in the coming years. But we would have to handle everything by hand, all the sequential goodness that the other 80% of programmers rely on would be gone, and we would have to start over creating libraries, re-educating programmers, and so on. Most programming languages built around sequential programming would also have to be rewritten.

But because we think sequentially, if we consider the usability of a programming language, the best option is a sequential language that gets translated and optimized into parallel execution, just as we do now. So I think Java is going to be okay; what the 20% of genius programmers have to do is focus on optimizing the compiler and the JVM, so that the other 80% can fully exploit the new hardware.

And while we think sequentially, remember that, like software, the brain is evolving as well. In nature, though, optimization takes hundreds or thousands of years; our brains perform better now than they did a hundred years ago (and we continue to think sequentially as usual). The good thing about software is that it evolves in a matter of months, or a couple of years at most, not centuries.

Pat Farrell wrote:
Frameworks like EJB are demon spawn. Not nice Daemon, but evil smoking, acid tongue, fire breathing demons. WSDL is another crock. As is RMI and applets.


Well, these "demons" are what help the 80% quickly write sequential enterprise apps that nonetheless deploy on high-end servers. If we had to do all our enterprise programming by hand, it would be back to the dark ages: releasing stovepipe enterprise software every five years, relying on programming gurus (assuming things don't get messy, which is almost impossible), and at a big cost money-wise as well. Thinking those old days were better for software development is like saying medicine is evil and it was better when people were cured by shamans: yes, that is where medicine started, but that doesn't make it better.
And those frameworks don't stay as bloated as when they started; just compare J2EE 1.4 to Java EE 6 and there's a huge difference. They get better, more complete, and lighter, so calling them evil sounds like a pretty dated opinion.
Ajeeth Kumar
Ranch Hand

Joined: Mar 30, 2005
Posts: 56
Seriously guys... I hate Java... I don't know why I am even here. I have worked in Java for 6 years now and I can never say confidently that I know Java, because the documentation sucks. I find writing "printf" easier than "S.o.p", and c'mon, can't Sun come up with meaningful examples in their Javadocs like MSDN does? I have to spend hours piecing together how reflection works from blogs and articles on the net, when it could be explained simply with a few examples, just like MSDN.

I think I entered a "private drive" and all the Java gurus are going to come after me. I wish I understood Java.
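For what it's worth, C-style formatting has been in Java's standard library since Java 5, so the "printf" complaint has a direct answer. A small sketch (class name illustrative):

```java
public class PrintfDemo {
    public static void main(String[] args) {
        // Since Java 5, System.out.printf offers C-style format strings...
        System.out.printf("%s has %d cores%n", "Mac Pro", 12);
        // ...as a convenience over String.format + println:
        System.out.println(String.format("%s has %d cores", "Mac Pro", 12));
    }
}
```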
David Newton
Author
Rancher

Joined: Sep 29, 2008
Posts: 12617

Raul Guerrero wrote:we are only able to use between 5-15% of our "hardware" capacity at most

This is a complete myth.
David Newton
Author
Rancher

Joined: Sep 29, 2008
Posts: 12617

Ajeeth Kumar wrote:Seriously guys... I hate Java... I don't know why I am even here. I have worked in Java for 6 years now and I can never say confidently that I know Java, because the documentation sucks. I find writing "printf" easier than "S.o.p", and c'mon, can't Sun come up with meaningful examples in their Javadocs like MSDN does? I have to spend hours piecing together how reflection works from blogs and articles on the net, when it could be explained simply with a few examples, just like MSDN.

It's not that anyone is going to "come after you", it's just that we'll probably disagree with you. And so far all you've really said is that you hate the "inadequate" documentation and that you don't know Java--that's different from having a reason to hate Java itself.
Randall Twede
Ranch Hand

Joined: Oct 21, 2000
Posts: 4347
    
    2

I find this very hard to believe. When I was an active programmer, it was MSDN I was always bitching about: can't find what I want, etc. By contrast, Java's documentation seemed impeccable. Maybe this guy is just pulling our leg?


SCJP
Visit my download page
 
 