Yes, it is expensive, and #1 is bad programming, although surprisingly I have seen #1 actually done in practice. It's usually discovered when long numeric keys have reached their overflow limit (which in most systems should be impossible) or when performance grinds to a halt (due to constant record locking). And depending on the size of the table and the number of indexes, it could be extremely expensive.
While it may be far easier to implement #1, since it's a lot closer to saving/loading a data file, it's not maintainable in the long term from a debugging perspective (how do you debug a problem with record 491 if record 491 was deleted?), nor does it even allow concurrency (what if one user was reading data while another was erasing all the records?).
Again, if your system has data access patterns similar to those of a file system, then it doesn't make sense to even use a database, other than to let users find lots of bugs when more than one of them connects at once.
What you're referring to is trying to store a Java object precisely in a database. The thing is, there's no 1-to-1 mapping. Generally with a database you never have all of the data in memory at once (imagine constantly keeping track of millions of records in a single array), so the comparison you gave isn't really applicable to databases.
You did mention inserting at random parts of the list, which in a way relates to primary key generation. Most of the time, a database assigns a primary key (think of it as your array index in this situation) to every new record created. The thing about databases, though, is that the only requirement is usually that the key be unique; order is not important. So you may have records 1-5 (6 was deleted), 7-10, 194, 3958, etc. The common practice, though, is to use ever-increasing keys, so that a higher number implies (but never guarantees) that at least that many records have been created by the system over its lifetime.
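To make that concrete, here's a minimal JDBC sketch of letting the database hand out the key and reading it back after the insert. The in-memory H2 URL, the users table, and its identity column are all hypothetical (the identity syntax also varies by vendor), but Statement.RETURN_GENERATED_KEYS and getGeneratedKeys() are standard JDBC:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InsertWithGeneratedKey {
    public static void main(String[] args) throws SQLException {
        // Hypothetical in-memory H2 database; adjust URL/driver for your own setup.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo");
             Statement ddl = conn.createStatement()) {
            ddl.execute("CREATE TABLE users (id BIGINT GENERATED ALWAYS AS IDENTITY PRIMARY KEY, name VARCHAR(50))");

            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO users (name) VALUES (?)", Statement.RETURN_GENERATED_KEYS)) {
                ps.setString(1, "alice");
                ps.executeUpdate();
                try (ResultSet keys = ps.getGeneratedKeys()) {
                    if (keys.next()) {
                        // The database chose the key; we never computed an "array index" ourselves.
                        System.out.println("New record got key " + keys.getLong(1));
                    }
                }
            }
        }
    }
}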
And yes, users can try to insert two records with the same primary key (or array index, in your case). That's a non-trivial scenario with a number of solutions. The simplest is to let the database enforce the constraint (the db will never allow two records with the same key, so one user will fail and the other will succeed). Another solution is to manage the primary key on the application server and have a static synchronized object or service that gives out only one key at a time.
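That second approach might look like this hypothetical KeyService class: a static synchronized method hands out one key at a time, and an AtomicLong gives the same effect with less lock contention. In a real system the counter would have to be seeded from the database on startup:

import java.util.concurrent.atomic.AtomicLong;

// Hypothetical application-server key service: every caller gets a distinct,
// ever-increasing key, so two users can never insert the same primary key.
public final class KeyService {
    private static long nextKey = 1;

    // synchronized guarantees only one thread at a time gets a key.
    public static synchronized long nextSynchronized() {
        return nextKey++;
    }

    // Equivalent effect without blocking: compare-and-swap under the hood.
    private static final AtomicLong counter = new AtomicLong(1);

    public static long nextAtomic() {
        return counter.getAndIncrement();
    }
}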
Good primary key management can be hard, especially in gigantic systems. Often, recovery is the key (if one user fails and one succeeds... well, then let's try the user that failed 2-3 more times), but then you have starvation and livelock issues best left to the performance forum...
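A bounded retry along those lines could be sketched like this. The insertWithRetry helper is hypothetical and the insert itself is just a callback; the point is the limited attempt count, which is what keeps one unlucky user from retrying (and starving) forever:

import java.sql.SQLException;
import java.util.concurrent.Callable;

public final class RetryingInsert {
    // Try the insert a few times; give up after maxAttempts.
    public static <T> T insertWithRetry(Callable<T> insert, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return insert.call();
            } catch (SQLException e) {
                if (attempt >= maxAttempts) {
                    throw e; // lost the race every time; give up
                }
                // Likely a duplicate-key collision: loop around, pick a new key, retry.
            }
        }
    }
}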