Slow performance of bulk inserts into large MongoDB collection

 
I have data in JSON format containing millions of records that I want to insert into a MongoDB database. I wrote a Java program that reads the JSON file, parses it, and bulk inserts the documents into a MongoDB collection using the insertMany() method. Each bulk insert contains 10,000 documents, and the average document size is 13 kB. After roughly 300,000 documents have been inserted into the collection, the inserts progressively slow down. There are no indexes on the collection apart from the default _id index provided by MongoDB.
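
The insert loop in my program is essentially the following simplified sketch (using the sync Java driver; the file name and the one-document-per-line assumption are placeholders for the real parsing logic):

public class BulkLoader {

    private static final int BATCH_SIZE = 10_000;

    public static void main(String[] args) throws java.io.IOException {
        try (com.mongodb.client.MongoClient client =
                     com.mongodb.client.MongoClients.create("mongodb://localhost:27017");
             java.io.BufferedReader reader =
                     java.nio.file.Files.newBufferedReader(java.nio.file.Paths.get("patents.json"))) {

            com.mongodb.client.MongoCollection<org.bson.Document> collection =
                    client.getDatabase("diploma").getCollection("patent");

            java.util.List<org.bson.Document> batch = new java.util.ArrayList<>(BATCH_SIZE);
            String line;
            while ((line = reader.readLine()) != null) {
                batch.add(org.bson.Document.parse(line)); // one JSON document per line (placeholder for real parsing)
                if (batch.size() == BATCH_SIZE) {
                    collection.insertMany(batch);         // bulk insert of 10,000 documents
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                collection.insertMany(batch);             // insert the remaining documents
            }
        }
    }
}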

I looked into mongod.log to diagnose the problem, and it appears that once the collection contains about 300,000 documents, every subsequent bulk insert triggers an aggregate command that performs a COLLSCAN over the entire collection. By the time the collection contained 3,000,000 documents, this COLLSCAN took about 30 seconds. The duration of the bulk insert operation itself does not change, staying at an average of 200 ms per 10,000 documents.

The complete log file from MongoDB can be found here: https://pastebin.com/STDZTJJU

The following log entry, extracted from the mongod.log file, is an example of the aggregate command that is executed after every bulk insert. In this case the COLLSCAN took more than 6 seconds.

Is there anything I can do to avoid the collection scans after every bulk insert?

I COMMAND  [conn2] command diploma.patent command: aggregate {
   aggregate: "patent",
   pipeline: [
       { $match: {} },
       { $group: { _id: null, n: { $sum: 1 } } }
   ],
   cursor: {},
   $db: "diploma",
   $readPreference: { mode: "primaryPreferred" }
}
planSummary: COLLSCAN
keysExamined: 0
docsExamined: 2453599
cursorExhausted: 1
numYields: 19422
nreturned: 1
reslen: 123
locks: {
   Global: {
       acquireCount: {
           r: 19424
       }
   },
   Database: {
       acquireCount: {
           r: 19424
       }
   },
   Collection: {
       acquireCount: {
           r: 19424
       }
   }
} protocol:op_msg 6274ms
 