Hi. I'm reading about 10,000 records from an MS Access database and inserting them into a MySQL database on a server, and I'd like to know the fastest way to do the inserts. I used a PreparedStatement both for reading from MS Access and for inserting into MySQL, but it takes a long time, so I need some help.
I'm reading and writing simultaneously: I read from MS Access into a ResultSet, loop with while (rs.next()), copy each column into a separate String, and then call setString on the insert PreparedStatement to write the row into MySQL.
The fastest way to load the data may well be to forget about INSERT queries completely and investigate what bulk-upload capabilities your database has. These differ between DBMSs, and I've never used them with MySQL, but I'm pretty sure it has some. A bit of googling for "MySQL bulk upload" suggests you might want to look at http://dev.mysql.com/doc/refman/5.1/en/mysqlimport.html
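To make the bulk-load route concrete, here is a minimal sketch of the client side: dump the rows to a CSV file that mysqlimport (or a LOAD DATA statement) can consume. The table name `person`, its columns, and the file name are made-up examples, and the LOAD DATA statement at the end is only printed, not executed — running it through JDBC would need a live connection with local-infile enabled.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// Sketch: write rows to a CSV file, then load it with MySQL's bulk loader.
// Table name "person" and its columns are assumed examples.
public class BulkLoadSketch {

    // Quote a field if it contains a comma, quote, or newline (RFC 4180 style).
    static String csvField(String s) {
        if (s.contains(",") || s.contains("\"") || s.contains("\n")) {
            return "\"" + s.replace("\"", "\"\"") + "\"";
        }
        return s;
    }

    static String toCsvLine(String... fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(csvField(fields[i]));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // In the real program these rows would come from the MS Access ResultSet.
        List<String[]> rows = List.of(new String[]{"1", "Smith, John"},
                                      new String[]{"2", "O'Brien"});
        Path csv = Paths.get("person.csv");
        try (PrintWriter out = new PrintWriter(
                Files.newBufferedWriter(csv, StandardCharsets.UTF_8))) {
            for (String[] r : rows) out.println(toCsvLine(r));
        }
        // Then either run mysqlimport on the file, or issue this through JDBC
        // (needs allowLoadLocalInfile=true / local-infile enabled):
        String sql = "LOAD DATA LOCAL INFILE 'person.csv' INTO TABLE person "
                   + "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' "
                   + "(id, name)";
        System.out.println(sql);
    }
}
```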
@Matthew: Yes, you're right. When I fetch records from MS Access and try to insert them into MySQL, my frame hangs completely; I can't move it or even close it, and I have to use Ctrl+Alt+Del to kill it. Thanks for the link; please guide me further if possible.
1) Like Matthew said, using the bulk-loading utility provided by the database is fast... but let me add a caveat: most of the time. To use a bulk-load utility, you have to export the data from MS Access as CSV and then bulk load that file into your MySQL database. This will usually be faster, except remember that you incur disk IO while saving the CSV file, and disk IO is probably the slowest thing on your system; the way you are doing it now, everything stays in memory. The advantage of using a bulk loader is that the file gets transmitted to the database server, then parsed and loaded there. Usually the database is much faster at parsing and loading data, simply because the database developers have written highly optimized code, and also the database server is typically a bigger machine, so it can load the data in parallel across multiple threads. I would definitely explore this option, especially if your client machine is small and the database server is big. It is also relatively simple to implement.
2) The way you have it now, you call executeBatch once at the end. I'm not sure how the MySQL driver works, but usually when you use addBatch/executeBatch, the records stay in memory on the client while you call addBatch and only go to the server when you call executeBatch. This can cause high GC usage on the client. It's unlikely that only 10K records will cause a problem, but depending on how much memory you gave Java, it might. I would profile your memory usage on the client. It might be better to call executeBatch every N records to minimize memory usage.
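A minimal sketch of flushing the batch every N records rather than once at the end — the table and column names are assumed, and the connections would come from your existing MS Access and MySQL setup:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch: send the batch to the server every BATCH_SIZE rows instead of
// holding all 10K rows in client memory. Table/column names are assumed.
public class ChunkedBatchInsert {

    static final int BATCH_SIZE = 500; // tune this; 100-1000 is a common range

    // How many executeBatch calls the loop below makes for a given row count.
    static int flushCount(int totalRows, int batchSize) {
        return totalRows / batchSize + (totalRows % batchSize == 0 ? 0 : 1);
    }

    static void copy(Connection access, Connection mysql) throws SQLException {
        mysql.setAutoCommit(false); // one commit at the end is much faster
        String select = "SELECT id, name FROM person";                 // assumed source
        String insert = "INSERT INTO person (id, name) VALUES (?, ?)"; // assumed target
        try (PreparedStatement read = access.prepareStatement(select);
             PreparedStatement write = mysql.prepareStatement(insert);
             ResultSet rs = read.executeQuery()) {
            int count = 0;
            while (rs.next()) {
                write.setInt(1, rs.getInt(1));
                write.setString(2, rs.getString(2));
                write.addBatch();
                if (++count % BATCH_SIZE == 0) {
                    write.executeBatch(); // ship this chunk, freeing client memory
                }
            }
            if (count % BATCH_SIZE != 0) {
                write.executeBatch(); // flush the final partial chunk
            }
            mysql.commit();
        }
    }
}
```

Turning off auto-commit and committing once at the end avoids a round-trip and a transaction per row, which on its own often gives a large speedup.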
3) If you have a big "client" machine, you might want to consider executing the load in multiple threads. However, I would keep this as a last option, since it's much more complicated to implement.
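For completeness, a rough sketch of what the multi-threaded option could look like: split the row range into contiguous chunks and hand each chunk to a worker. The row count, thread count, and the `insertRows` step are assumptions; each worker would have to open its own JDBC connections, since connections are not thread-safe.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: split the key range into chunks and load each chunk from its own
// thread. Each worker opens its own connections (JDBC connections are not
// thread-safe). Row count and pool size are assumed examples.
public class ParallelLoadSketch {

    // Split [0, total) into at most `parts` contiguous [start, end) ranges.
    static List<int[]> partition(int total, int parts) {
        List<int[]> ranges = new ArrayList<>();
        int chunk = (total + parts - 1) / parts; // ceiling division
        for (int start = 0; start < total; start += chunk) {
            ranges.add(new int[]{start, Math.min(start + chunk, total)});
        }
        return ranges;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int[] range : partition(10_000, 4)) {
            pool.submit(() -> {
                // A real worker would read rows range[0]..range[1] from
                // MS Access and batch-insert them into MySQL here.
                System.out.println("would load rows " + range[0] + ".." + range[1]);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```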
Regarding your screen "freezing": that's expected if you run a long-running process on the event thread. What you should do is run it in a background thread; that will unfreeze your UI. However, the user will then be able to interact with the UI while the data load is going on. If you don't want that, you might have to do something like disabling the buttons while the thread is running.
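In Swing, SwingWorker is the standard way to do this. A minimal sketch, assuming your frame has a load button and a status label (those names, the row count, and the progress cadence are made up; the JDBC loop would replace the loop in doInBackground):

```java
import javax.swing.JButton;
import javax.swing.JLabel;
import javax.swing.SwingWorker;
import java.util.List;

// Sketch: run the copy off the Event Dispatch Thread so the frame stays
// responsive. Button, label, and row count are assumed examples.
public class LoadWorkerSketch {

    static final int TOTAL = 10_000; // assumed row count

    // Text shown in a status label as rows are inserted.
    static String progressText(int done, int total) {
        return "Inserted " + done + " / " + total + " rows";
    }

    static void startLoad(JButton loadButton, JLabel status) {
        loadButton.setEnabled(false); // block re-entry while the load runs
        SwingWorker<Void, Integer> worker = new SwingWorker<>() {
            @Override
            protected Void doInBackground() throws Exception {
                // The JDBC read/insert loop goes here (background thread).
                for (int done = 0; done <= TOTAL; done += 500) {
                    publish(done); // report progress to the EDT
                }
                return null;
            }

            @Override
            protected void process(List<Integer> chunks) {
                // Runs on the EDT: safe to touch Swing components here.
                status.setText(progressText(chunks.get(chunks.size() - 1), TOTAL));
            }

            @Override
            protected void done() {
                loadButton.setEnabled(true); // re-enable when finished
            }
        };
        worker.execute();
    }
}
```

Never touch Swing components from doInBackground; that's what publish/process and done are for.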
Jayesh A Lalwani wrote:The advantage of using a bulk loader is that the file gets transmitted to the database server, then parsed and loaded there. Usually the database is much faster at parsing and loading data, simply because the database developers have written highly optimized code, and also the database server is typically a bigger machine, so it can load the data in parallel across multiple threads. I would definitely explore this option, especially if your client machine is small and the database server is big. It is also relatively simple to implement.
All this is true. I'd just add that in some databases (I don't know about MySQL, though) bulk loading is faster due to other factors as well. For example, space is allocated in bigger chunks. Database blocks can be formatted and written to disk directly, avoiding the locking needed for standard processing. Undo doesn't have to be generated at all. Indexes might be updated in bulk too. So even if the database is on the local machine, a bulk load can still be (sometimes significantly) faster than a bunch of INSERTs.