Slow performance of bulk inserts into large MongoDB collection

 
Greenhorn
Posts: 1
I have data in JSON format containing millions of records that I want to insert into a MongoDB database. I wrote a Java program that reads the JSON file, parses it, and bulk inserts the documents into a MongoDB collection using the insertMany() method, 10 000 documents per batch. The average document size is 13 kB. After roughly 300 000 documents have been inserted, insert performance progressively degrades. There are no indexes on the collection apart from the default _id index.
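
For reference, here is a condensed sketch of my loader. It is simplified: the connection string and file path are placeholders, and I am assuming one JSON document per line here, while my real parsing is more involved. The database and collection names (diploma, patent) match the log below.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class PatentLoader {
    private static final int BATCH_SIZE = 10_000;

    public static void main(String[] args) throws IOException {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017");
             BufferedReader reader = Files.newBufferedReader(Paths.get("patents.json"))) {
            MongoCollection<Document> patents =
                    client.getDatabase("diploma").getCollection("patent");

            List<Document> batch = new ArrayList<>(BATCH_SIZE);
            String line;
            while ((line = reader.readLine()) != null) {
                batch.add(Document.parse(line));   // one JSON document per line (simplification)
                if (batch.size() == BATCH_SIZE) {
                    patents.insertMany(batch);     // bulk insert of 10 000 documents
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                patents.insertMany(batch);         // flush the final partial batch
            }
        }
    }
}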

I looked into mongod.log to diagnose the problem. It appears that once the collection contains about 300 000 documents, every subsequent bulk insert triggers an aggregate command that performs a COLLSCAN over the entire collection. By the time the collection holds 3 000 000 documents, that COLLSCAN takes about 30 seconds. The bulk insert operation itself does not slow down, averaging about 200 ms per 10 000 documents.

The complete log file from MongoDB can be found here: https://pastebin.com/STDZTJJU

The following excerpt from the mongod.log file shows an example of the aggregate command that is executed after every bulk insert; in this case the COLLSCAN took more than 6 seconds.

Is there anything I can do to avoid these collection scans after every bulk insert?

I COMMAND  [conn2] command diploma.patent command: aggregate {
   aggregate: "patent",
   pipeline: [
       { $match: {}
       },
       { $group: {
           _id: null,
           n: { $sum: 1
               }
           }
       }
   ], cursor: {},
   $db: "diploma",
   $readPreference: { mode: "primaryPreferred" }
}
planSummary: COLLSCAN
keysExamined: 0
docsExamined: 2453599
cursorExhausted: 1
numYields: 19422
nreturned: 1
reslen: 123
locks: {
   Global: {
       acquireCount: {
           r: 19424
       }
   },
   Database: {
       acquireCount: {
           r: 19424
       }
   },
   Collection: {
       acquireCount: {
           r: 19424
       }
   }
} protocol:op_msg 6274ms
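
For what it's worth, the logged pipeline is simply a full-collection count: a $match over everything followed by a $group with { $sum: 1 }. In the Java driver, a call such as countDocuments() is translated into an aggregate of the same shape, so my suspicion is that something outside the insert loop (perhaps a GUI or monitoring tool connected to the database) counts the collection after each batch. Below is a minimal, purely hypothetical illustration of the kind of call that produces this plan; my loader does not knowingly make it.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;

public class CountCheck {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            // Hypothetical: shown only because countDocuments() is translated into a
            // $match + $group { $sum: 1 } aggregate, the same shape as the logged command.
            long total = client.getDatabase("diploma")
                               .getCollection("patent")
                               .countDocuments();
            System.out.println("documents: " + total);
        }
    }
}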
 