Performance Issues with Huge Blobs in DB

 
anupa oru
Ranch Hand
Posts: 118
Hey All,

Recently we discovered that, under high traffic volumes on an Oracle 10g database, we were running into data fragmentation issues because we are storing Blobs that average 140k. The way the application works is this: whenever a user comes to the site and browses through it, we insert/update their session with the pertinent info, and after 20 minutes of idle time the Blob is deleted.

The issue, as I understood it, is that when we delete, Oracle removes the Blobs and leaves holes, and the next insert/update scans the whole table until it finds an appropriate hole, which is what causes the slowness. It turned out to be an internal Oracle bug, so we are looking at other alternatives for storing this huge Blob. I have heard of JCS and of in-memory caching mechanisms, and we are evaluating which one to use, or whether to change the whole application to minimize the Blob.

My question is: how do you deal with session management for huge Blobs? I'm hoping someone can shed light on the best practice.

Thanks,
Anu
 
William Brogden
Author and all-around good cowpoke
Posts: 13078
Seems to me that the easiest thing to do would be to use either Java's temporary file mechanism or an in-memory cache, depending on the volume requirements.

See java.io.File for the createTempFile() method, or the Whirlycache toolkit. By defining a blob storage class with a simple interface, you could experiment with both without changing your main application. A blob storage class that switches methods depending on available memory would not be hard to write either.
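To make this concrete, here is a minimal sketch of such a blob storage interface with a temp-file-backed and an in-memory implementation. The interface, class, and method names are hypothetical, and it assumes the session payload is available as a byte[]:

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical interface: the rest of the application only talks to this.
interface BlobStore {
    void put(String sessionId, byte[] blob) throws IOException;
    byte[] get(String sessionId) throws IOException;
    void remove(String sessionId) throws IOException;
}

// Implementation backed by File.createTempFile(), as suggested above.
class TempFileBlobStore implements BlobStore {
    private final Map<String, File> files = new ConcurrentHashMap<String, File>();

    public void put(String sessionId, byte[] blob) throws IOException {
        File f = File.createTempFile("session-" + sessionId, ".blob");
        f.deleteOnExit();                         // best-effort cleanup on JVM exit
        Files.write(f.toPath(), blob);
        files.put(sessionId, f);
    }

    public byte[] get(String sessionId) throws IOException {
        File f = files.get(sessionId);
        return (f == null) ? null : Files.readAllBytes(f.toPath());
    }

    public void remove(String sessionId) {
        File f = files.remove(sessionId);
        if (f != null) {
            f.delete();
        }
    }
}

// In-memory alternative: same interface, so the two are interchangeable.
class InMemoryBlobStore implements BlobStore {
    private final Map<String, byte[]> blobs = new ConcurrentHashMap<String, byte[]>();

    public void put(String sessionId, byte[] blob) { blobs.put(sessionId, blob); }
    public byte[] get(String sessionId)            { return blobs.get(sessionId); }
    public void remove(String sessionId)           { blobs.remove(sessionId); }
}

Swapping TempFileBlobStore for InMemoryBlobStore (or a Whirlycache-backed version) would then require no change to the calling code.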

Bill
 
anupa oru
Ranch Hand
Posts: 118
Thanks William for your awesome feedback.

I will try this cache and let you know how it works. Do you think this is the best caching mechanism out there for storing large Blobs? Keep me posted if you find anything else interesting; since we are still in the evaluation phase, we have time to experiment with different caches and come to a consensus.

Thanks again.
Anu
 
anupa oru
Ranch Hand
Posts: 118
The thing you said about java.io.File: I read somewhere else, too, that storing a Blob in a file would be more efficient than saving it in the database. Do you think that's a valid statement, though? I would think writing to a file would be more time-consuming, since you need to deal with file handles, reading the file, and so on. If you were to use java.io.File, how would you manage the files? Do you write the Blob to a file and vice versa? Are there API methods for this, or do we need to write custom logic?
 
William Brogden
Author and all-around good cowpoke
Posts: 13078

anupa oru wrote: I read somewhere else, too, that storing a Blob in a file would be more efficient than saving it in the database. Do you think that's a valid statement, though?
Absolutely! There is nothing magic going on in a database: it will either store the blob in a file or in a memory cache, and you will have the extra overhead of the database connection on top of that.
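As a minimal illustration (the file name and location here are made up), writing and reading a blob with plain java.nio.file calls needs no custom logic beyond choosing a path:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BlobFileDemo {
    public static void main(String[] args) throws Exception {
        byte[] blob = new byte[140 * 1024];   // roughly the 140k session blob in question
        Path path = Paths.get(System.getProperty("java.io.tmpdir"), "session-42.blob");

        Files.write(path, blob);                        // store the blob
        byte[] back = Files.readAllBytes(path);         // load it again
        System.out.println("round-tripped " + back.length + " bytes");

        Files.deleteIfExists(path);                     // delete when the session expires
    }
}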

Bill
 
Ranch Hand
Posts: 1970
Yes, but presumably a database is being used to obtain some benefit. As soon as some of the data moves outside the database, you lose many of those benefits. In particular, you suddenly have to code your own file-management routines to ensure that the files are written and deleted at the right times.
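For instance, one such routine might be a background sweep that deletes blob files idle longer than the 20-minute session timeout mentioned above. A minimal sketch, with all names hypothetical:

import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sweeps a directory once a minute and deletes session blob files
// that have not been touched for 20 minutes.
public class BlobFileReaper {
    private static final long IDLE_MS = 20L * 60 * 1000;

    public static void start(final File dir) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(new Runnable() {
            public void run() {
                long cutoff = System.currentTimeMillis() - IDLE_MS;
                File[] files = dir.listFiles();
                if (files == null) {
                    return;                      // directory vanished; try again next sweep
                }
                for (File f : files) {
                    if (f.lastModified() < cutoff) {
                        f.delete();              // ignore failures; retried next sweep
                    }
                }
            }
        }, 1, 1, TimeUnit.MINUTES);
    }
}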
 
Rancher
Posts: 4804
For serious volume, websites tend to use memcached on dedicated servers with lots of memory, say 64GB or more.

But mere mortals don't typically need to frequently access a large number of big blobs. If you were hosting a video-sharing site, that would be different.
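For what it's worth, a put/get against memcached from Java is only a few lines. This sketch assumes the spymemcached client and a server on localhost:11211; the key is made up:

import java.net.InetSocketAddress;
import net.spy.memcached.MemcachedClient;

public class MemcachedBlobDemo {
    public static void main(String[] args) throws Exception {
        MemcachedClient client =
                new MemcachedClient(new InetSocketAddress("localhost", 11211));

        byte[] blob = new byte[140 * 1024];
        client.set("session:42", 20 * 60, blob);            // TTL of 20 minutes
        byte[] back = (byte[]) client.get("session:42");
        System.out.println(back.length + " bytes back from the cache");

        client.shutdown();
    }
}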
 
Ranch Hand
Posts: 126
Anupa,
A couple of points:

1. In Oracle you can store BLOBs "out of line". This makes the BLOBs get stored outside of the table, so in the event of a table scan, it will not scan the BLOBs (see the sketch after this list).

2. The second point is on caching: we are using an Oracle (previously Tangosol) product called Coherence for distributed caching. Take a look at www.tangosol.com. We are using Tangosol for similar-sized storage, and our response time is around 2 milliseconds.
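For point 1, a minimal sketch of the DDL issued over JDBC. The table, column, and connection details are made up, but DISABLE STORAGE IN ROW is the Oracle clause that forces the BLOB out of line, into its own segment:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateSessionTable {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
        Statement stmt = con.createStatement();

        // The LOB clause moves session_data out of the table rows,
        // so table scans no longer wade through the blob data.
        stmt.execute(
            "CREATE TABLE user_session ("
          + "  session_id   VARCHAR2(64) PRIMARY KEY,"
          + "  session_data BLOB)"
          + " LOB (session_data) STORE AS (DISABLE STORAGE IN ROW)");

        stmt.close();
        con.close();
    }
}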

Shoot me a mail if you need more info on any of the above.
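And for point 2, storing a session blob in Coherence is a plain put/get against a NamedCache. A minimal sketch; the cache name and key are made up:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CoherenceBlobDemo {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("session-blobs");

        byte[] blob = new byte[140 * 1024];
        cache.put("session:42", blob);                      // distributed across the cluster
        byte[] back = (byte[]) cache.get("session:42");
        System.out.println(back.length + " bytes from the cache");

        CacheFactory.shutdown();
    }
}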

Thanks
raees
 