Question on Decoupling Vs Performance

 
Tony Evans
Ranch Hand
Posts: 681
I have been working on a project where the client layer was very tightly coupled to the server layer. We were exposing server objects directly to the client, and those objects were very large and cumbersome to work with.

The rationale was performance: we didn't want to create a mapping layer between client and server. I wanted to separate the client rendering (in this case, rendering XML) from all the rules about how the data was to be rendered.

So I wanted a service that creates the XML from a client object, with that client object created from the server object. But I was told there could be a performance impact from mapping server objects to client objects.

I can't see there being much of a performance impact in mapping the data to an object and then mapping that object to XML.

As it is, we have an unwieldy object, and since the server object is constantly changing, every change breaks our code and our JUnit tests.

So we have broken separation of concerns and the single responsibility principle, and I don't think a straight mapping (no reflection) from a server object to a client object will have much of a performance impact.
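
To make it concrete, here's a rough sketch of the kind of straight, hand-written mapping I have in mind. The class and field names are just placeholders, not our real objects:

// Hypothetical server object: large, owned by the server team, and
// constantly changing.
public class ServerOrder {
    private String id;
    private String customerName;
    // ... dozens of other fields the XML rendering never needs
    public String getId() { return id; }
    public String getCustomerName() { return customerName; }
}

// Small client-facing object holding only what the XML rendering needs.
public class ClientOrder {
    private final String id;
    private final String customerName;

    public ClientOrder(String id, String customerName) {
        this.id = id;
        this.customerName = customerName;
    }

    public String getId() { return id; }
    public String getCustomerName() { return customerName; }
}

// Straight mapping: a handful of getter calls, no reflection.
public final class OrderMapper {
    private OrderMapper() {}

    public static ClientOrder toClient(ServerOrder server) {
        return new ClientOrder(server.getId(), server.getCustomerName());
    }
}

When the server object changes, only OrderMapper has to change; the XML rendering keeps working against ClientOrder.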

But I am open to other views on this; software development is always a learning process, and I understand that patterns are there to guide us and are not written in stone.
 
Saloon Keeper
Posts: 27752
One of the most basic rules of system design is NOT to prematurely "optimize".

I've worked with systems large and small for decades, and one thing has always been clear when performance was an issue: the part of the system where the bottleneck actually was wasn't the part where the bottleneck was expected.

So warping your design for "efficiency" has two strikes against it:

1. You're almost certainly wasting time that could be better spent elsewhere.

2. The complexity introduced by the false optimization is likely to make it much harder to optimize where the actual problems occur. You'll often end up having to undo some of the earlier work, and at a minimum the system you're working on will be harder to understand.

So go with loose coupling until you're sure that only tighter coupling will resolve an issue. And even then, consider whether a different approach might be better. Some of the worst problems I've seen evaporated when a more suitable algorithm was used. I had one system where the data was nearly, but not fully, in sort order. The "fast" sorts (heapsort and quicksort) deliver their worst performance on input like that, but a shellsort screamed.
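
For reference, here's what I mean by shellsort: a generic textbook version with a simple halving gap sequence, not the code from that system:

// Shellsort with a halving gap sequence. On nearly-sorted input the
// inner insertion loop barely has to move anything, so it finishes fast.
static void shellSort(int[] a) {
    for (int gap = a.length / 2; gap > 0; gap /= 2) {
        for (int i = gap; i < a.length; i++) {
            int value = a[i];
            int j = i;
            // Shift larger elements right, gap positions at a time.
            while (j >= gap && a[j - gap] > value) {
                a[j] = a[j - gap];
                j -= gap;
            }
            a[j] = value;
        }
    }
}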

Sometimes just flipping a single application option switch can make the difference (we had a mainframe crashing before we tracked that one down!).
 
Sheriff
Posts: 17644
+1 what Tim said.

To add to that, developers are notoriously bad at optimizing based on gut feelings and intuition. If performance is really a big consideration, use a profiler!

Besides, what's more expensive in the long run:

1. Loss of performance due to a properly layered design
2. Loss of customer goodwill because your system responds in 50ms (maybe?) instead of 20ms.

or

1. More time spent by developers in maintenance and trying to work around design problems
2. More time spent debugging problems caused by workarounds and design problems
3. More time spent trying to introduce new features to a code base that has design problems
4. More time bringing new developers up to speed to replace developers who got fed up with the design problems and left
5. Loss of goodwill from customers because your system has bugs caused by design problems

---
Will the performance hit of doing the mapping even be more than a few microseconds? Even if it were a couple of hundred microseconds, is it worth a poorly factored design and all the problems that brings?
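
If you want an actual number instead of a gut feeling, measure it. Here's a crude timing sketch; the mapping call is just a stand-in for whatever your real mapping does, and for trustworthy numbers you'd use a profiler or a benchmark harness like JMH, since naive timings like this are easily skewed by JIT warm-up and GC:

public class MappingTimer {
    static long sink; // accumulate a result so the JIT can't discard the work

    public static void main(String[] args) {
        final int runs = 1_000_000;
        for (int i = 0; i < runs; i++) sink += mapOnce(i); // warm-up pass
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) sink += mapOnce(i);
        long perCall = (System.nanoTime() - start) / runs;
        System.out.println("~" + perCall + " ns per call (sink=" + sink + ")");
    }

    // Stand-in for the real server-to-client mapping under test.
    static long mapOnce(int i) {
        return ("order-" + i).length();
    }
}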
 
Tony Evans
Ranch Hand
Posts: 681
Thanks guys, and sorry for the late reply; I've been fixing bugs and trying to hack enhancements in. Every problem you can hit from tight coupling, we hit. Over Christmas I will prepare a report on what went wrong, so that hopefully we won't make the same mistakes on the next project.

I agree with everything you wrote. Sometimes I think a certain stubbornness creeps in with developers, and they won't change the design.
 