In SQL you can set the transaction isolation level explicitly before running a piece of SQL.
This is not a good thing to do for the vast majority of normal database application operations; you'd need a fairly compelling special case before it's worth considering. And if you're not careful you'll end up with weird bugs caused by dirty reads, phantom reads, and the like.
What it actually does is DB-specific. Remember that not all RDBMS implementations support all four transaction isolation levels, so by doing this you restrict which databases your code can run against in a less-than-obvious way. Remember also that calling setTransactionIsolation() is only ever a *request* to change the isolation level; what the driver does with that request is, again, implementation-specific.
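In JDBC terms, that "attempt, then verify" point matters: a driver is free to substitute a different level, so you should read the level back after setting it. A minimal sketch (class and method names here are illustrative, and the connection setup is elided since it depends on your driver):

```java
import java.sql.Connection;
import java.sql.SQLException;

public class IsolationDemo {

    // Try to raise the isolation level, then check what the driver actually
    // gave us -- setTransactionIsolation() is only a request, not a guarantee.
    static void raiseToSerializable(Connection conn) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        if (conn.getTransactionIsolation() != Connection.TRANSACTION_SERIALIZABLE) {
            throw new SQLException("driver did not honour SERIALIZABLE");
        }
    }

    public static void main(String[] args) {
        // The four standard levels as JDBC constants on java.sql.Connection:
        System.out.println("READ_UNCOMMITTED=" + Connection.TRANSACTION_READ_UNCOMMITTED);
        System.out.println("READ_COMMITTED=" + Connection.TRANSACTION_READ_COMMITTED);
        System.out.println("REPEATABLE_READ=" + Connection.TRANSACTION_REPEATABLE_READ);
        System.out.println("SERIALIZABLE=" + Connection.TRANSACTION_SERIALIZABLE);
    }
}
```

The check via getTransactionIsolation() is the portable way to find out whether the change actually took effect on the database you happen to be talking to.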
The big question you have to ask is why you think you need to do this. Changing isolation levels up, so to speak (e.g. from READ_COMMITTED to SERIALIZABLE), should be safe enough, but changing them down will probably result in buggy behaviour, since you start to allow dirty reads and so on. Perhaps you could post what you're trying to achieve, and someone can suggest a better alternative?