Hi,
My design follows these conventions: I expose Data (as the interface
DBMain) to the client, and the client's business methods call lock() and
unlock() as appropriate.
I was reviewing and augmenting my JUnit tests of Data when I noticed a bad smell
concerning my preliminary understanding of what the DuplicateKeyException implied.
Before coming to my final conclusions, I also wanted to review a post which I felt,
based on intuition, might be related:
https://coderanch.com/t/184840/java-developer-SCJD/certification/NX-Questions-deleted-records
Here are my current ideas. They are not meant as a response to the above link; I
investigated that link only to be sure I hadn't left anything out. Of course, if there
are flaws in the following ideas, please let me know.
Because I expose Data to the client, I design and test the database independently of
my particular project (URLyBird). The database I design and test, and the Data object
given to the client, would work for any project (URLyBird, contractors, and projects
that are not yet defined).
Although my specific implementation may vary, the general concepts concerning keys
are outlined below. (I present these ideas for two reasons: they may be useful to
others, and they may be flawed and thus may be corrected.)
There are two concepts: PrimaryKey and ImmutableKey.
Each defines zero or more fields. PrimaryKey and ImmutableKey are equivalent in that
whichever fields one defines are identically defined in the other. [Note: a PrimaryKey
additionally has a state, on or off; if the PrimaryKey is off, then it is not enforced,
and a DuplicateKeyException is never raised within the create() method.]
Example: if PrimaryKey.toString() = fields 0 and 2, then ImmutableKey.toString() is
also fields 0 and 2; if PrimaryKey.toString() = no fields, then ImmutableKey.toString()
is also no fields.
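
To make this concrete, here is a minimal sketch of how such a key might be represented,
assuming each key is just a list of field indices into the String[] records that DBMain
deals in. The class name KeyDefinition and its methods are my own illustration, not
anything mandated by the assignment:

import java.util.Arrays;

public class KeyDefinition {

    private final int[] keyFields;  // e.g. {0, 2}, or {} for "no fields"
    private final boolean enforced; // corresponds to the PrimaryKey's on/off state

    public KeyDefinition(int[] keyFields, boolean enforced) {
        this.keyFields = keyFields.clone();
        this.enforced = enforced;
    }

    public boolean isEnforced() {
        return enforced;
    }

    public int[] getKeyFields() {
        return keyFields.clone();
    }

    // Concatenates the key fields of a record, e.g. for duplicate checks.
    public String keyValueOf(String[] record) {
        StringBuilder key = new StringBuilder();
        for (int field : keyFields) {
            key.append(record[field]).append('\u0000'); // unlikely separator
        }
        return key.toString();
    }

    public String toString() {
        return keyFields.length == 0 ? "no fields" : "fields " + Arrays.toString(keyFields);
    }
}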
Functions:
The PrimaryKey is used during the create() method.
The ImmutableKey is used during the update() method.
PrimaryKey
---------
If the PrimaryKey is not trivially defined (i.e., if it defines at least one field),
then its job is what you would expect in a standard relational database: before a
record is created, every extant record must be read to verify that the PrimaryKey
fields are not duplicated. A DuplicateKeyException is thrown if the primary key
(defined as the concatenation of the fields which comprise it) already exists within
the database. This search might be optimized by refining the reading methods to read
only the primary key fields.
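
As a rough illustration (not a definitive implementation), the check inside create()
might look like the sketch below. I am assuming the DBMain-style signatures
create(String[]) throws DuplicateKeyException and read(int) throws
RecordNotFoundException, a DuplicateKeyException with a String constructor, the
KeyDefinition sketch from above, and two hypothetical helpers, getAllRecordNumbers()
and writeNewRecord():

public int create(String[] data) throws DuplicateKeyException {
    if (primaryKey.isEnforced()) {
        String candidateKey = primaryKey.keyValueOf(data);
        // Every extant (active) record must be inspected.
        for (int recNo : getAllRecordNumbers()) {
            try {
                String[] existing = read(recNo);
                if (primaryKey.keyValueOf(existing).equals(candidateKey)) {
                    throw new DuplicateKeyException(
                            "primary key already exists in record " + recNo);
                }
            } catch (RecordNotFoundException e) {
                // The record was deleted in the meantime; it cannot be a duplicate.
            }
        }
    }
    return writeNewRecord(data); // appends a record (or, under plan B, reuses a deleted slot)
}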
While a PrimaryKey has no particular function in the URLyBird project, it is easy to
imagine a database containing names and addresses, where you might define the name
field as a primary key that you would not want duplicated (this lessens the user's
confusion if the user's aunt is not listed five times in the database).
ImmutableKey
------------
The ImmutableKey defines fields which cannot be modified during an update operation.
So, if a person books a hotel room that contains 20 beds for a convention, the number
of beds must be immutable; otherwise the contract with the person who made the booking
could potentially be broken.
Interestingly enough, even though the URLyBird project has no need for a PrimaryKey,
it definitely needs an ImmutableKey, which consists of the hotel, its location, and
all the attributes of the room being rented; that is, all the fields but the last
(which contains the ID of the person who booked the room).
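
For example, reusing the hypothetical KeyDefinition sketch from above, and assuming a
record layout in which the last field (here index 6) holds the customer ID (the exact
field count depends on your data file's schema, so treat these indices as placeholders),
the URLyBird keys might be set up as:

// Both keys define the same fields (everything but the customer-ID field),
// but the PrimaryKey is switched off, so create() never enforces it.
int[] keyFields = {0, 1, 2, 3, 4, 5};
KeyDefinition primaryKey   = new KeyDefinition(keyFields, false); // off: no DuplicateKeyException
KeyDefinition immutableKey = new KeyDefinition(keyFields, true);  // enforced during update()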
Notice that when I say that an ImmutableKey is required, I don't mean
that you must implement an ImmutableKey object; I only mean that
your implementation must have the same effect as if you had an
ImmutableKey object as defined next.
Assuming that an ImmutableKey object is implemented, here are the
two possible implementation strategies: A and B.
A
-----
Rule: Your database is not allowed to make an inactive record become active in place.
That is, if record 100 is deleted (it is inactive), you can never make it active again
(until the server shuts down, at which time you can compact the database). The reason
for this is that the algorithm ignores immutable fields, so if the immutable fields
changed, the algorithm would be updating a completely different record and would have
no way of knowing it.
It is my understanding that one is not required to make an inactive record active in
place: "Creates a new record in the database (possibly reusing a deleted entry)."
Though perhaps I am misreading this Sun directive? Perhaps it is saying, "if you can
re-use a deleted entry, then do so"?
This algorithm ignores all fields of the incoming record which are immutable; any
incoming field which is immutable could potentially be passed in as null or as an
empty string. The algorithm focuses only on the incoming fields which are mutable,
and it writes out only, and all of, the mutable fields to the record it is updating
(assuming that the mutable fields are not null, empty, and the like; otherwise this
would cause an IllegalArgumentException).
In short: this algorithm does not read the record it is updating; it only writes the
mutable fields to the record it is updating.
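
A minimal sketch of what plan A's update() might look like, assuming the DBMain-style
signature update(int, String[]), a field mutableFields holding the indices that are
not part of the ImmutableKey, and a hypothetical helper writeField(recNo, field, value)
that seeks to and writes a single field:

public void update(int recNo, String[] data) throws RecordNotFoundException {
    // No read of the stored record: under plan A a deleted slot is never
    // reactivated in place, so the immutable fields cannot have changed.
    // (A check of the record's deleted flag could still be added here to
    // justify the RecordNotFoundException without reading the data fields.)
    for (int field : mutableFields) {
        String value = data[field];
        if (value == null || value.trim().length() == 0) {
            throw new IllegalArgumentException("mutable field " + field + " must be supplied");
        }
        writeField(recNo, field, value);
    }
    // Immutable fields in 'data' are simply ignored; the caller may pass
    // them in as null or as empty strings.
}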
B
-----
This algorithm allows the create() method to make a deleted (inactive) record become
active in place (i.e., write a new record in the same location where a previous,
deleted record once existed).
This algorithm must read the record before updating it, because the fields of the
record are no longer strictly immutable if you allow a previously deleted record to
be made active in place.
The algorithm reads only the immutable fields of the record and compares them to the
corresponding fields of the incoming record. If they match in a manner that you
consider equivalent, then and only then is it appropriate to continue; otherwise,
1. If the database is designed so that in-place recreation of records is allowed,
throw a RecordNotFoundException (since the record you were looking for, characterized
by its immutable fields, no longer exists).
2. If the database is designed so that in-place recreation of records is not
allowed, ...? (not sure where point 2 might lead).
Assuming that all the immutable fields comprising the ImmutableKey match the
corresponding fields of the incoming record, then and only then is the record updated.
A first attempt may simply write the complete record out anew; more subtle, subsequent
implementations might write out only the mutable fields (to save time).
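
Here is a rough sketch of plan B's update(), again assuming the DBMain-style
signatures, a RecordNotFoundException with a String constructor, a field
immutableFields holding the ImmutableKey's field indices, a hypothetical helper
writeRecord(recNo, data), and fieldsMatch() standing in for whatever comparison you
decide counts as "equivalent" (exact, trimmed, case-insensitive, etc.):

public void update(int recNo, String[] data) throws RecordNotFoundException {
    String[] stored = read(recNo); // also fails if the slot is currently deleted
    for (int field : immutableFields) {
        if (!fieldsMatch(stored[field], data[field])) {
            // The slot has been recreated in place for a different logical record,
            // so the record we were asked to update no longer exists.
            throw new RecordNotFoundException(
                    "record " + recNo + " no longer carries the expected immutable key");
        }
    }
    // First attempt: write the whole record out anew; a later refinement
    // could write only the mutable fields to save time.
    writeRecord(recNo, data);
}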
Thanks,
Javini Javono
P.S.
After writing the above, it came to me that, yes, I shut down my server all the time.
But a production server, perhaps handling clients around the globe, is running all
the time. Thus, I should design my server to stay running as long as possible and to
create new records in place where previously deleted records existed. Given that I
will carry out this strategy, I will be compelled to follow plan B as outlined above.