Reid M. Pinchback

Ranch Hand
since Jan 25, 2002

Recent posts by Reid M. Pinchback

Given the name of the magazine, FYI there are two forums here that might be of use to you: this one for performance, and another one for testing. Re: questions, journalists are people too, so I don't see why you shouldn't have the same opportunities as anybody else around here.
13 years ago

Originally posted by Bert Bates:
Unfortunately, I'm not in a position to be a great advocate for SDLC-esque approaches, but I'd really like to hear about situations when an SDLC-esque approach really makes the most sense.



Probably the trick is assigning the most context-appropriate interpretation to the bit "makes the most sense". You'll definitely see SDLC used now in life-and-limb situations like Aerospace and non-R&D Pharma. SDLC inefficiencies tend not to matter to people because the software cost is often a minuscule part of total project cost, and the extra cost of the inefficiency is minuscule in comparison to the risks of whatever bad events scare people in that particular industry. Good news if you are an SDLC consultant or tool vendor, not so much for anybody else.

Add to that, in life-and-limb situations I think what you'll really find is SDLC on steroids. Lots of process controls, basically CMM level something-more-than-1 plus SDLC. Definitely that tends to be the case in Pharma; don't know about Aerospace.
[ March 19, 2006: Message edited by: Reid M. Pinchback ]
Right about now I've got that Alice-in-the-looking-glass feeling...
13 years ago

Originally posted by Bert Bates:
Such a foundation often requires a fair amount of upfront analysis and prototyping to build a development model. The development model often relies upon specific design patterns and entity relationship diagrams (ERD). Without this upfront foundation, Iterative development can create long term challenges that are significant in terms of cost and quality.



I also think this isn't truly an Agile vs non-Agile issue, but there is a kernel in here that has some traction in terms of up-front versus iterative, though I think it is really something else in disguise. I'll get to that something later. Long post, bear with me.

I don't personally object to the up-front model, but I object to the notion that the up-front work is invariably near-flawless. That is the fundamental difference I think between the up-front versus the iterative approaches, and it isn't an issue of textbook definition of their tasks, but of the behaviour you see in people (at least in large organizations) when something goes wrong.

When you work iteratively, it isn't a big surprise if you hit a problem and go "oops, we goofed, we have to really rethink this widget implementation before adding the next feature; time to haul out the heavy-duty refactoring plugin...". The problem may be unwanted, but in general why would people freak out over the need to change current code when changing current code is a routine situation anyways?

The up-front situation is different, and not necessarily because of the flesh-and-blood human being (i.e. another developer, so we should sympathize) wearing the architect hat. Anybody truly skilled enough to be an architect should be equally capable of coping with technical problems when it turns out the up-front work missed an issue. The true problem is often going to be with the people around the architect... even with developers around the architect. The unstated problem for up-front design is that some organizational dynamics make it near impossible to identify and take corrective action for big problems (sometimes even for small problems).

For companies *without* that kind of blame-averse/risk-averse dynamic, they could probably make solid cases for why up-front design works for them. Even if they don't talk about iterating, when the design breaks down, they'll end up with an iteration (which they may call "the next release"). Alas, for companies *with* the negative blame-averse dynamic, they'll stick with the design. Yes, the design now known not to work. The implementation code will duck and weave and contort until eventually, if everybody is really lucky, somehow something approximating the original design is achieved. It may be slow as a pig, cost 10 times as much to write the code and QA it, and require 10 times as much hardware to run it, but it will be done.

As a result of all that, Agile people can make a very legitimate case that for blame-averse organizations, the risk of up-front design is even higher than it is for iterative development, which tends to hit larger numbers of small (low anxiety) issues, not small numbers of show-stopping (high-anxiety) issues.

And now for the something in disguise, something I learned from a really smart team lead/boss I had once upon a time. Developers who spend a lot of their career working in the I/T and systems integration space think differently than those who work primarily in the application development space. Not good versus bad, just different. That was his view, and I've come to find that he was correct, and that the issue matters for some classes of project.

If all your coders on a project come from the application space, iteration can burn a lot of time before somebody realizes that the project objectives are impossible because of constraints elsewhere in the organization. This is particularly true when nobody does even a quick first pass at a data model to figure out who *outside* of your project will depend on your data, or vice versa. This isn't an Agile vs non-Agile issue, this is a "your skills reflect your experience" issue.

However, since these problems do come up, the up-front argument now seems to have legs. Now you can make the case: "but if you did an up-front design and it went through all the organizational review processes, we would have obviously spotted that this project was impossible". It is a correct statement, because such reviews often can spot problems. However, I don't think this indicates a fundamental limitation in the Agile/iterative approach, because reviews aren't the *only* way to spot such problems. I think it just means that in such situations an Agile team should have somebody working with them who knows about issues the other team members haven't encountered before or aren't in the habit of caring about.

If you include an integration-savvy person who has a hot list of issues they always watch out for, they'll rush to spew them out in the very first meeting of the Agile team. Those issues should end up on some Scrum list somewhere, and the nitty-gritty details dug into as appropriate. Problem solved.
[ March 19, 2006: Message edited by: Reid M. Pinchback ]

Originally posted by Scott Ambler:
Sounds like another HR problem to me.



They definitely had a few of those. :roll:

Originally posted by Bert Bates:
but have the drawback from a business perspective that they do not provide long-term planning capability



I'm with Ilja on point #1. I don't think this is intrinsically an Agile versus non-Agile issue, particularly for the part I quoted. I'd argue that this has more to do with the quality of project management and the executive-level incorporation of the project information, particularly with how effectively you create an information feedback mechanism between developers and whoever is stuck wearing the project manager hat and the levels above them.

One of the hardest messages to get through to people as a PM is that plans and schedule estimates aren't random. They may not be perfect, but with statistical regression towards the mean they can get pretty good. If the process you have for creating them is broken, particularly if those plans are created in a way that is *independent* of what the developers know or are experiencing, then any notion of long-term planning and estimating is completely a game of smoke, mirrors, politics, and self-deception... never anything even close to accuracy or consistency.

However the planning starts, for it to continue to have value it has to evolve sanely, which means it needs data, and Agile approaches like Scrum are big on getting exactly the kind of data you want to have as a PM. The difficulty isn't with having Agile people gather the data and the PM massaging it into something meaningful. The difficulty is with getting people who don't understand projects or project management to *listen* to what they are being told when the plan or estimate becomes more accurate. Accurate software project estimates scare the stuffing out of the higher-ups.
[ March 19, 2006: Message edited by: Reid M. Pinchback ]
Roger, wondering if you might know something... I'm guessing that all of this is a Corba leftover. Something like requiring (skeleton?) activation for POA. I never kicked around the Corba tires enough to get a lot of depth, so I'm wondering if that sounds about right? If so, then the spec'd behaviour might have less to do with being nice to EJB coders, more to do with the bits of Corba compliance Sun tried to allow.
Still not getting it. If algorithms using symmetric keys are faster, then for the same amount of time you could afford to use a *longer* key. Also, why can you have public key systems that use DES or DES-like transforms as the core of the message encryption? If DES-style bit manipulations are fast enough for a symmetric key, what made it all of a sudden hugely slow for an asymmetric key? An extra mod'ing of the bits at the right point between the symbol swaps isn't that big of a deal.
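That hybrid pattern, a public-key algorithm protecting a symmetric session key that does the bulk message encryption, can be sketched with the JDK's javax.crypto classes. This is a minimal illustration of the idea; the algorithm and key-size choices (AES-128, RSA-2048) are mine, not from the thread:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class HybridDemo {
    /** Round-trips a message: AES encrypts the payload, RSA wraps only the AES key. */
    static String roundTrip(String message) throws Exception {
        // Asymmetric pair: used only to protect the short session key.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair pair = kpg.generateKeyPair();

        // Symmetric session key: does the bulk message encryption.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey session = kg.generateKey();

        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, session);
        byte[] cipherText = aes.doFinal(message.getBytes("UTF-8"));

        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] wrappedKey = rsa.doFinal(session.getEncoded());

        // Receiver side: unwrap the session key, then decrypt the payload.
        rsa.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsa.doFinal(wrappedKey), "AES");
        aes.init(Cipher.DECRYPT_MODE, recovered);
        return new String(aes.doFinal(cipherText), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("the actual message"));
    }
}
```

The asymmetric cipher only ever touches a handful of key bytes, which is why the symmetric cipher's speed dominates for large messages.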

In shifting from symmetric to asymmetric keys, aren't you simply shifting from encrypting with a key that provides its own inverse to instead using a key whose inverse is determined by a harder-to-compute congruence relationship? You only compute that relationship once when generating the keys, so for a fixed key length and known keys, the performance difference should be on the order of single-digit percentages (as in, communications/IO bandwidth variability should be more of a factor, e.g. when using spread spectrum in a noisy environment). Creating the key pairs is very slow, but using them isn't intrinsically slow, only particular choices of public key infrastructure implementation are.

This position only makes sense if you believe that asymmetric approaches to encryption have to be intrinsically slower than DES in order to encrypt the bits. I worked on one of these beasties once, and that definitely wasn't the case. There are very strong encryption algorithms you may choose to use that are slow in software, and I vaguely recall one family of them out of UoWaterloo, non-RSA based, where the asymmetric keys were strong but slow; but holding all other variables in the problem equal, simply choosing symmetric versus asymmetric keys doesn't universally change performance in a huge way. The schemes for managing keys and the schemes for encrypting the messages themselves influence performance... but even for DES with private keys that has been true for over 20 years. Different DES variants have tradeoffs between performance, encryption strength (via controlling the number of iterations), and recovery from communications noise bursts. The extra features influence the performance, the length of key influences the performance, but the choice of symmetric vs asymmetric key for a specified length doesn't have to change it materially.
[ March 19, 2006: Message edited by: Reid M. Pinchback ]
13 years ago
And somehow all of that never addressed the original question... isn't it the case that in typical usages of asymmetric approaches you see larger keys? I'm with the poster; I've seen the same thing over the last 20-ish years of their use, particularly in public-key infrastructures, which are particularly amenable to attacks based on the key distribution system.
[ March 19, 2006: Message edited by: Reid M. Pinchback ]
13 years ago
Sounds like a character set issue. If this were Oracle I'd say you needed to use NVARCHAR and configure the database to allow 16-bit characters. Not sure how MySql deals with such issues, don't use it that often.
13 years ago
The client does not have a reference to the EJBObject. The client has a stub that communicates with the EJBObject. The EJBObject is a server-side artifact. Since entity changes are transactional, I think the point of the spec statement is that whatever existed and was committed before the crash will still exist after the recovery. Any remote stubs referring to those already-existing entities can still be used.

This doesn't ensure you won't get some kind of exception in your client code when the JVM crashes and recovers; after all, your transaction was killed during the crash. It just means that you should be able to write recovery code that re-uses the existing entity bean home and instance client-side stubs instead of having the overhead of getting them all over again from JNDI.
I've always used type, but maybe that is because I'm mostly defining complexTypes and only the root node is actually an element (pretty much a necessity with JAXB unless you want it to generate twice as many Java interface files). Out of curiosity I did some digging and I think I figured out the issue. I don't think it is an issue of "ref" versus "type" so much as a case of using each where they get you what you want. The distinction might be a bit easier with an example that doesn't use XmlSchema, but uses a DTD instead.



If we were using DTDs instead of XmlSchema, the former example would be the result of us typing in two element definitions that, except for element name, looked identical:



The second example shown would have a DTD like:



The first example is a case of using "type". At any level of the grammar you are saying that the "type" tells you about the children one level down.

The second example is a case of using "ref". With ref you've ALREADY dropped one level down and you are using the "ref" to describe the CURRENT node, not its children. You could use "ref" to also construct the equivalent of the first DTD, but you'd be working at a different level of the tree; you would use "ref" to bring in the Price and Currency individually, instead of using "type" to bring them both in as part of a consistently-used type structure.

The distinction tends to get fuzzy because when you use "type" you are simultaneously specifying the name of the node you are defining AND using the type to declare (by re-use) the content model for the node. With "ref" you are yanking in the node directly, and that node already has its own name and its own content model. When dealing with a single node it isn't a big deal one way or the other, but when dealing with multiple nodes "type" provides more enforcement of the grammar for allowable child combinations than "ref" does for that exact same level of the tree.

If you tend to mostly define types and create very few elements, then I think you'll mostly be using the "type" attribute during composition of types. A good test (at least for me) is to see what kind of code JAXB generates for your schema. If you get a Java representation that you think accurately reflects your intended content model then you are ok; if not, you have to look at how you are using element and type definitions.
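To make the distinction concrete, here is a minimal reconstruction, not the schemas from the original post (those didn't survive archiving); the Price/Currency names come from the discussion and everything else is illustrative. Both styles accept the same document, but in the "type" style the content model lives in a reusable complexType, while in the "ref" style Price and Currency are global elements pulled in one level down:

```java
import java.io.StringReader;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.SchemaFactory;

public class RefVsType {
    // "type" style: the named complexType carries the content model,
    // and the Cost element re-uses it via the type attribute.
    static final String TYPE_STYLE =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>" +
        "  <xs:complexType name='MoneyType'>" +
        "    <xs:sequence>" +
        "      <xs:element name='Price' type='xs:string'/>" +
        "      <xs:element name='Currency' type='xs:string'/>" +
        "    </xs:sequence>" +
        "  </xs:complexType>" +
        "  <xs:element name='Cost' type='MoneyType'/>" +
        "</xs:schema>";

    // "ref" style: Price and Currency are global elements with their own
    // names and content models, and Cost yanks them in individually.
    static final String REF_STYLE =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>" +
        "  <xs:element name='Price' type='xs:string'/>" +
        "  <xs:element name='Currency' type='xs:string'/>" +
        "  <xs:element name='Cost'>" +
        "    <xs:complexType>" +
        "      <xs:sequence>" +
        "        <xs:element ref='Price'/>" +
        "        <xs:element ref='Currency'/>" +
        "      </xs:sequence>" +
        "    </xs:complexType>" +
        "  </xs:element>" +
        "</xs:schema>";

    static final String DOC =
        "<Cost><Price>10.00</Price><Currency>USD</Currency></Cost>";

    /** Returns true if docXml validates against schemaXml. */
    static boolean validates(String schemaXml, String docXml) {
        try {
            SchemaFactory sf =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            sf.newSchema(new StreamSource(new StringReader(schemaXml)))
              .newValidator()
              .validate(new StreamSource(new StringReader(docXml)));
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(validates(TYPE_STYLE, DOC));
        System.out.println(validates(REF_STYLE, DOC));
    }
}
```

The practical difference shows up when you need the same Price/Currency pairing in several places: the "type" style lets you enforce that pairing by re-using MoneyType, while the "ref" style leaves the combination up to each parent element.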
[ March 19, 2006: Message edited by: Reid M. Pinchback ]
I'm confused. How did HTML enter into the equation? Isn't this a Swing issue (question was about JTextArea)?

I don't know why the widget ends up blank; maybe it is something to do with needing to put it in a JScrollPane and specify rows/columns/wrapping when the String is big, or maybe you need to use a Document to have a more powerful model for working with smaller pieces of the text.
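For what it's worth, a minimal sketch of the JScrollPane guess above. The rows/columns and wrapping choices are illustrative, not a known fix for the original poster's problem:

```java
import javax.swing.JScrollPane;
import javax.swing.JTextArea;

public class BigTextDemo {
    /** Wraps a large String in a JTextArea inside a JScrollPane. */
    static JScrollPane buildView(String bigString) {
        JTextArea area = new JTextArea(bigString, 25, 80); // rows, columns
        area.setLineWrap(true);        // wrap long lines...
        area.setWrapStyleWord(true);   // ...on word boundaries
        // The scroll pane, not the text area, handles the overflow.
        return new JScrollPane(area);
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10000; i++) {
            sb.append("line ").append(i).append('\n');
        }
        JScrollPane pane = buildView(sb.toString());
        JTextArea area = (JTextArea) pane.getViewport().getView();
        System.out.println(area.getLineCount() + " lines held");
    }
}
```

If the text needs to be manipulated piecemeal rather than just displayed, area.getDocument() gives you the underlying Document model mentioned above.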
13 years ago
I think the basic issue is that asymmetric keys involve sharing one half of a key pair, and you don't want somebody to be able to crack the second key just because they already have the first key. For example, in public key systems that involve large prime/pseudo-prime pairs you always want to be able to give away one key without it compromising the matching member. More bits increases the search space for the second key.

None of this is an issue for symmetric keys (= private keys). There an attacker doesn't have the key, just the encrypted data. Cracking the key means attacking the encrypted data, not analyzing the key itself (because the attacker doesn't have a copy of it unless they are one of the parties exchanging messages).
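A quick sketch of that size asymmetry using the JDK's own key generators. The 128 and 2048 figures are common choices of mine for illustration, not numbers from the thread:

```java
import javax.crypto.KeyGenerator;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

public class KeySizeDemo {
    /** Bits of secret material in a symmetric key: the whole key is secret. */
    static int aesKeyBits(int requestedBits) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(requestedBits);
        return kg.generateKey().getEncoded().length * 8;
    }

    /** Bits in an RSA modulus: the public half is handed out, so the
     *  search space for recovering the private half has to be much larger. */
    static int rsaModulusBits(int requestedBits) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(requestedBits);
        RSAPublicKey pub = (RSAPublicKey) kpg.generateKeyPair().getPublic();
        return pub.getModulus().bitLength();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("AES key bits:     " + aesKeyBits(128));
        System.out.println("RSA modulus bits: " + rsaModulusBits(2048));
    }
}
```

The order-of-magnitude gap between the two numbers is exactly the "more bits for the search space" point: the symmetric key never leaves the communicating parties, so it can stay short.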
[ March 18, 2006: Message edited by: Reid M. Pinchback ]
13 years ago