I need to store a binary file larger than the max_allowed_packet size into the database, say 2 GB. How can I do that? As you probably know, if you try the straightforward JDBC Blob approach, you get an error saying that max_allowed_packet was exceeded.
-- Is there a way I can open a stream to a BLOB in a table's row and write the data chunk by chunk (each chunk smaller than max_allowed_packet)?
-- Or is there any other approach you can suggest?
Note: Please avoid suggesting that I increase max_allowed_packet. What if I want to store a file larger than the maximum value max_allowed_packet allows? Also, even if I only have to store a file of, say, 200 MB, holding all 200 MB in some in-memory object means the JVM will run out of memory sooner or later.
max_allowed_packet
Command-Line Format     --max_allowed_packet=#
Config-File Format      max_allowed_packet
Option Sets Variable    Yes, max_allowed_packet
Variable Name           max_allowed_packet
Variable Scope          Both
Dynamic Variable        Yes
The maximum size of one packet or any generated/intermediate string.
The packet message buffer is initialized to net_buffer_length bytes, but can grow up to max_allowed_packet bytes when needed. This value by default is small, to catch large (possibly incorrect) packets.
You must increase this value if you are using large BLOB columns or long strings. It should be as big as the largest BLOB you want to use. The protocol limit for max_allowed_packet is 1GB. The value should be a multiple of 1024; nonmultiples are rounded down to the nearest multiple.
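For reference, the documented way to raise the limit looks like this (I know the original poster asked to avoid relying on it, but it's worth having the knob on record). The 256 MB value is just an illustration:

```sql
-- Raise the limit on the running server (requires the SUPER privilege);
-- the value must be a multiple of 1024 and is capped at 1 GB by the protocol.
SET GLOBAL max_allowed_packet = 256 * 1024 * 1024;

-- Or persistently, in my.cnf under the [mysqld] section:
-- max_allowed_packet = 256M
```

Note that the client side (Connector/J) is subject to the same limit, and a running connection picks up a GLOBAL change only after reconnecting.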
You might have to break your object/file up into pieces and store it as multiple BLOBs.
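As a rough sketch of that chunking idea: split the incoming stream into pieces smaller than max_allowed_packet and store each piece as its own row, keyed by a file id and a chunk number. The table name `file_chunks` and the 4 MB chunk size below are my own assumptions, not anything from this thread:

```java
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Arrays;

public class ChunkedBlobWriter {

    // Hypothetical chunk size, comfortably below a typical max_allowed_packet.
    static final int CHUNK_SIZE = 4 * 1024 * 1024;

    // Reads the stream CHUNK_SIZE bytes at a time and inserts each piece as a
    // row in a hypothetical table:
    //   CREATE TABLE file_chunks (file_id BIGINT, chunk_no INT, data MEDIUMBLOB)
    // Only one chunk is ever held in memory at a time.
    public static int storeChunked(Connection conn, long fileId, InputStream in)
            throws SQLException, IOException {
        String sql = "INSERT INTO file_chunks (file_id, chunk_no, data) VALUES (?, ?, ?)";
        byte[] buf = new byte[CHUNK_SIZE];
        int chunkNo = 0;
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int read;
            while ((read = readFully(in, buf)) > 0) {
                ps.setLong(1, fileId);
                ps.setInt(2, chunkNo++);
                // Copy the short final chunk so we don't insert trailing garbage.
                ps.setBytes(3, read == buf.length ? buf : Arrays.copyOf(buf, read));
                ps.executeUpdate();
            }
        }
        return chunkNo; // number of chunk rows written
    }

    // Fills buf as far as the stream allows; returns bytes read (0 at end of stream).
    static int readFully(InputStream in, byte[] buf) throws IOException {
        int total = 0;
        while (total < buf.length) {
            int n = in.read(buf, total, buf.length - total);
            if (n < 0) break;
            total += n;
        }
        return total;
    }
}
```

Reassembly is the same loop in reverse: `SELECT data FROM file_chunks WHERE file_id = ? ORDER BY chunk_no` and write each chunk to an output stream.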
I'm in the camp that believes files should *NEVER* be stored in a database... but even if I were in the other camp, storing files greater than 1 GB in a database? That's a really bad idea... really bad.
-- Ours is a product that runs in a clustered environment, and the processing of a file can be delegated to any node. So a file might be received by one node, which then posts a JMS notification for processing. These files can range from 500 KB to 1 GB in size. I want to store them in the DB so that they are available to EJBs on any node and we can take advantage of the HA features of the app server.
-- Even if I increase max_allowed_packet, I would still need to hold all the bytes in RAM, which is definitely not a good idea.
But then you have an extra network transfer of a 1 GB file. This takes a significant amount of time. I recommend you revisit this design. Maybe route the jobs to the server where the file actually exists. Or use a SAN/network drive so all the app servers can access the file.
Databases are designed to handle thousands (often millions or more) of records, so if any one record can contain 1 GB, you could potentially have hundreds (or thousands) of records each holding a gigabyte. I seriously doubt your system is designed to handle that kind of load. Do not store large files in the database... it's a really bad idea.
Rajesh k Kumar
Joined: Jan 21, 2010
Thanks for your views,
I agree on the point that large files should not be stored into DB.
So let me drop this hypothetical angle of storing a 1 GB file.
What about the second problem, where if you want to store a 30 MB file (for example), you apparently must hold it all in memory before you can write it to the MySQL server? Do we have any solutions there?
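For that case there is a standard JDBC answer: bind the file as a stream with PreparedStatement.setBinaryStream, so the driver reads from the file while sending instead of requiring you to build a 30 MB byte[] first. The server-side max_allowed_packet limit still applies to the row as a whole, but your heap no longer has to hold the whole file. The `documents` table below is my own placeholder:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class StreamingBlobWriter {

    // Streams a file into a BLOB column without materializing it in memory.
    // Hypothetical table: CREATE TABLE documents (id BIGINT, content LONGBLOB)
    public static void storeFile(Connection conn, long id, File file)
            throws SQLException, IOException {
        String sql = "INSERT INTO documents (id, content) VALUES (?, ?)";
        try (InputStream in = new FileInputStream(file);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, id);
            // The driver pulls from the stream in buffer-sized pieces, so the
            // application never holds more than one buffer's worth at a time.
            ps.setBinaryStream(2, in, file.length());
            ps.executeUpdate();
        }
    }

    // Illustrates the constant-memory pattern the driver uses internally:
    // drain a stream through one small reusable buffer, returning total bytes.
    static long countBytes(InputStream in, int bufSize) throws IOException {
        byte[] buf = new byte[bufSize];
        long total = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            total += n;
        }
        return total;
    }
}
```

Whether the driver truly streams (rather than buffering the packet client-side) depends on the Connector/J version and settings, so it's worth verifying heap usage with your actual driver before relying on this for the larger files.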