discarding numeric primitives and using BigDecimal

Mohit Sinha
Ranch Hand

Joined: Nov 29, 2004
Posts: 125
Hi All,

Need to know your thoughts on the following. For a new engagement we plan to discontinue the use of primitives as a programming practice and strictly use BigDecimal instead. I just want to know from you all whether there are any perils we may encounter in this process. I believe normal/advanced mathematical calculations can be handled with BigDecimal.

The reason we are resorting to this approach is a typical one found quite frequently in this and other Java forums: primitives don't support null checks and assume a default value. That default gets transparently set in the Hibernate domain objects when they are persisted to the database, so instead of a null indicator in the database we end up with the primitive's default value, as in the sketch below.
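
For illustration, here is a rough sketch of the kind of mapping that causes this (the entity and column names are made up, not from our real code base):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical entity, just to show the problem.
@Entity
public class Invoice {

    @Id
    private Long id;

    // Primitive field: defaults to 0, so Hibernate writes 0 to the
    // DISCOUNT column even when no discount was ever set.
    @Column(name = "DISCOUNT")
    private int discount;

    // Wrapper field: defaults to null, so Hibernate writes NULL,
    // preserving the "no value" indicator in the database.
    @Column(name = "SURCHARGE")
    private Integer surcharge;
}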

Let me know your thoughts.

Campbell Ritchie
Sheriff

Joined: Oct 13, 2005
Posts: 37890
Primitives only assume default values when they are fields.
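
For example:

public class Defaults {

    private int count;          // field: gets the default value 0
    private Integer boxedCount; // reference-type field: default is null

    void demo() {
        int local;
        // System.out.println(local); // compile-time error: local variables have no default
        System.out.println(count);      // prints 0
        System.out.println(boxedCount); // prints null
    }
}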

Horses for courses. The average engineer uses floating-point arithmetic because the imprecision is tolerable. The average banker should avoid floating-point arithmetic because the imprecision is not tolerable. There will be a performance overhead from not using primitives, but I expect that will be tolerable.

And remember that null behaves differently in SQL than it does in Java. And remember always to specify a rounding mode when dividing.
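
For example, BigDecimal refuses to guess when an exact quotient cannot be represented (a small sketch):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class Divide {
    public static void main(String[] args) {
        BigDecimal one = new BigDecimal("1");
        BigDecimal three = new BigDecimal("3");

        // one.divide(three) throws ArithmeticException: the exact quotient
        // 0.333... has a non-terminating decimal expansion.

        // Supplying a scale and a rounding mode makes the intent explicit.
        BigDecimal third = one.divide(three, 10, RoundingMode.HALF_UP);
        System.out.println(third); // 0.3333333333
    }
}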
Mohit Sinha
Ranch Hand

Joined: Nov 29, 2004
Posts: 125
Thanks Campbell.

You mention that nulls behave differently in SQL than in Java. What's your point here? My main concern was about not setting the persistent object's value when persisting to the database, when I know beforehand that its corresponding equivalent in the Java space is null.
Campbell Ritchie
Sheriff

Joined: Oct 13, 2005
Posts: 37890
If you try to use a null in a comparison in Java (by unboxing it, for example), you get an exception.
If you try null comparisons in SQL you get null as a result. So SQL uses three-valued logic: true, false and null, where null is rather like "don't know". Java uses two-valued logic: true and false.
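
A small sketch of the contrast (the table and column names are invented):

public class NullLogic {
    public static void main(String[] args) {
        // In SQL, comparing with NULL yields NULL ("don't know"), so this
        // WHERE clause matches no rows at all:
        //   SELECT * FROM account WHERE balance = NULL;   -- always empty
        //   SELECT * FROM account WHERE balance IS NULL;  -- what you actually want

        Integer balance = null;

        // In Java, comparing the reference itself is fine:
        System.out.println(balance == null); // true

        // ...but using the null in a numeric comparison forces unboxing,
        // which throws a NullPointerException at runtime:
        System.out.println(balance > 0);
    }
}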
Mike Simmons
Ranch Hand

Joined: Mar 05, 2008
Posts: 2982
I think there are two issues here: primitives vs. reference types, and floating-point vs. BigDecimal. The issues overlap in some ways, but I think they should be considered separately nonetheless.

1. Primitives vs. reference types: If you need to be able to represent a null which is different from a default value, then yes, you probably need to use reference types for the fields of persistent objects - at least, for all fields that map to nullable columns. However, this does not necessarily imply BigDecimal. A common alternative is to use the wrapper classes: Integer, Long, Double, Float, etc. If nullability is your only concern, there's no need to switch to BigDecimal.

Note also that if a column is declared NOT NULL, then there's no reason to allow nulls in the model field either. In this case I usually prefer to use primitives, specifically because they don't allow nulls to slip in where they shouldn't. Disallowing nulls then makes other coding easier, as you don't have to insert null checks in your code later on. You get fewer NullPointerException bugs if nulls aren't possible in the first place.

So yes, if a column must be nullable, use a reference type to represent that column in the model class. But if the column isn't nullable, I encourage you to use a primitive type. Unless you have some other reason to avoid a primitive. Such as...

2. Floating-point vs. BigDecimal. If you're dealing with money, you probably want to use BigDecimal. Well, mathematically literate programmers could also just as easily use int or long, transposing the decimal place. (E.g. record $1.23 as 123 cents.) But that sort of thinking seems to be out of vogue these days; oh well. So sure, use BigDecimal for monetary quantities. However, for many other types of things, primitives are just fine, even superior. I discussed this recently here. Also, many quantities are inherently integers, with no decimal point needed - using int, long, or even BigInteger communicates this much more clearly than using BigDecimal does.
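
A minimal sketch of both approaches (the amounts are made up):

import java.math.BigDecimal;

public class Money {
    public static void main(String[] args) {
        // BigDecimal with an explicit scale of two decimal places.
        BigDecimal price = new BigDecimal("1.23");
        BigDecimal total = price.multiply(new BigDecimal(3));
        System.out.println(total); // 3.69

        // The same amount as a long holding cents: $1.23 is stored as 123.
        long priceInCents = 123;
        long totalInCents = priceInCents * 3; // 369
        System.out.printf("%d.%02d%n", totalInCents / 100, totalInCents % 100); // 3.69
    }
}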

So, I suggest you choose your data types first by ignoring nullability concerns. Just consider whether you need precisely-rounded decimals, or whether readability, speed, and/or storage space are more important to you. Then consider nullability - if you were going to use an int, but need it to be nullable, use an Integer instead.
 