com.mongodb.MongoBulkWriteException: Bulk write operation error on server 192.168.56.1:27017. Write errors: [BulkWriteError{index=6, code=11000, message='E11000 duplicate key error collection: dbname.prodAgg index: id dup key: { : { prodCategory: "xxx", prodId: "yyyyy", location: "US-EAST" } }', details={ }}].

Stack Overflow | Vamsi | 2 months ago
  1. Duplicate key error when trying to write a $group aggregation to MongoDB from Spark using Scala (see the upsert sketch after this list)

    Stack Overflow | 2 months ago | Vamsi
    com.mongodb.MongoBulkWriteException: Bulk write operation error on server 192.168.56.1:27017. Write errors: [BulkWriteError{index=6, code=11000, message='E11000 duplicate key error collection: dbname.prodAgg index: id dup key: { : { prodCategory: "xxx", prodId: "yyyyy", location: "US-EAST" } }', details={ }}].
  2. MongoDB (WiredTiger): upsert errors with "duplicate key error" - multi-threaded

    Stack Overflow | 1 year ago | svs teja
    com.mongodb.MongoBulkWriteException: Bulk write operation error on server Write errors: [BulkWriteError{index=0, code=11000, message='E11000 duplicate key error collection: index: _id_ dup key: { :
  3. Bulk execution with failure throws DuplicateKey instead of BulkWriteException

    GitHub | 2 years ago | Tolsi
    com.mongodb.casbah.BulkWriteException: Bulk write operation error on server test:27017. Write errors: [BulkWriteError{index=0, code=11000, message='E11000 duplicate key error collection: test.segments index: date_val_idx dup key: { : "2015-08-21", : "8" }', details={ }}, BulkWriteError{index=1, code=11000, message='E11000 duplicate key error collection: test.segments index: date_val_idx dup key: { : "2015-08-21", : "4" }', details={ }}, BulkWriteError{index=2, code=11000, message='E11000 duplicate key error collection: test.segments index: date_val_idx dup key: { : "2015-08-21", : "9" }', details={ }}, BulkWriteError{index=3, code=11000, message='E11000 duplicate key error collection: test.segments index: date_val_idx dup key: { : "2015-08-21", : "5" }', details={ }}, BulkWriteError{index=4, code=11000, message='E11000 duplicate key error collection: test.segments index: date_val_idx dup key: { : "2015-08-21", : "10" }', details={ }}, BulkWriteError{index=5, code=11000, message='E11000 duplicate key error collection: test.segments index: date_val_idx dup key: { : "2015-08-21", : "6" }', details={ }}, BulkWriteError{index=6, code=11000, message='E11000 duplicate key error collection: test.segments index: date_val_idx dup key: { : "2015-08-21", : "1" }', details={ }}, BulkWriteError{index=7, code=11000, message='E11000 duplicate key error collection: test.segments index: date_val_idx dup key: { : "2015-08-21", : "2" }', details={ }}, BulkWriteError{index=8, code=11000, message='E11000 duplicate key error collection: test.segments index: date_val_idx dup key: { : "2015-08-21", : "7" }', details={ }}, BulkWriteError{index=9, code=11000, message='E11000 duplicate key error collection: test.segments index: date_val_idx dup key: { : "2015-08-21", : "3" }', details={ }}].

  4. I've noticed the application frequently getting WriteConflict errors when doing batch saves of logs to a capped collection.

     Application error:
     {noformat}
     [java] 2014-12-01T19:08:34.253+0000 [AutomationAgentLogs-2] gid: WARN com.xgen.svc.atm.svc.AutomationAgentLogSvc$LogWriter [bulkInsertBatch:219] - Failure on insert of 24 automation agent logs
     [java] com.mongodb.BulkWriteException: Bulk write operation error on server ip-10-169-132-210.ec2.internal:27018. Write errors: [BulkWriteError{index=0, code=112, message='WriteConflict', details={ }}].
     [java] at com.mongodb.BulkWriteBatchCombiner.throwOnError(BulkWriteBatchCombiner.java:125) ~[mongo.jar:na]
     [java] at com.mongodb.BulkWriteBatchCombiner.getResult(BulkWriteBatchCombiner.java:115) ~[mongo.jar:na]
     [java] at com.mongodb.DBCollectionImpl.executeBulkWriteOperation(DBCollectionImpl.java:160) ~[mongo.jar:na]
     {noformat}

     Primary logs:
     {noformat}
     2014-12-01T19:08:31.712+0000 I QUERY [conn17673] command mmsdbautomationlog.$cmd command: insert { $msg: "query not recording (too large)" } keyUpdates:0 reslen:80 2345ms
     2014-12-01T19:08:31.846+0000 I WRITE [conn17871] insert mmsdbautomationlog.agentLogs query: { _id: ObjectId('547cbcafe4b087d54e96dba1'), groupId: ObjectId('5479fde1e4b0e88a8f89cf91'), timestamp: 1417460902000, level: "info", thread: "main/components/agent.go:352", logger: "", message: "All 1 Mongo processes are in goal state, Monitoring agent in goal state, Backup agent in goal state", hostname: "green-2.cluste.5479ff83e4b0e88a8f89e619.mongodbdns.com", process: null, threw: "" } ninserted:0 keyUpdates:0 exception: WriteConflict code:112 194ms
     2014-12-01T19:08:31.848+0000 I QUERY [conn18043] command backupjobs.$cmd command: findAndModify { findandmodify: "blockstore_jobs", query: { finished: false, workingOn: false, type: "groom", $or: [ { priorities: { $elemMatch: { lastResort: true, runEligibleTs: { $lt: 1417460911720 } } } } ] }, sort: { submittedAt: 1 }, update: { $set: { workingOn: { heartbeat: 1417460911723, machine: { machine: "mms-dev-daemon-2", head: "/data2/backups/daemon/" } } } } } keyUpdates:0 reslen:44 123ms
     2014-12-01T19:08:31.848+0000 I QUERY [conn18039] command backupjobs.$cmd command: findAndModify { findandmodify: "blockstore_jobs", query: { finished: false, workingOn: false, type: "groom", $or: [ { priorities: { $elemMatch: { lastResort: true, runEligibleTs: { $lt: 1417460911722 } } } } ] }, sort: { submittedAt: 1 }, update: { $set: { workingOn: { heartbeat: 1417460911723, machine: { machine: "mms-dev-daemon-2", head: "/data/backups/daemon/" } } } } } keyUpdates:0 reslen:44 119ms
     2014-12-01T19:08:31.967+0000 I WRITE [conn17871] insert mmsdbautomationlog.agentLogs query: { _id: ObjectId('547cbcafe4b087d54e96dba2'), groupId: ObjectId('5479fde1e4b0e88a8f89cf91'), timestamp: 1417460902000, level: "info", thread: "main/components/agent.go:354", logger: "", message: "All 1 Mongo processes are in goal state, Monitoring agent in goal state, Backup agent in goal state", hostname: "green-2.cluste.5479ff83e4b0e88a8f89e619.mongodbdns.com", process: null, threw: "" } ninserted:1 keyUpdates:0 121ms
     {noformat}

     Primary: 2.8.0rc1 (wiredtiger), secondaries: 2.8.0rc1 (mmapv1). Same replica set as SERVER-16366.

    JIRA | 2 years ago | Cailin Anne Nelson
    com.mongodb.BulkWriteException: Bulk write operation error on server ip-10-169-132-210.ec2.internal:27018. Write errors: [BulkWriteError{index=0, code=112, message='WriteConflict', details={ }}] .
  5. I'm putting this under the Java driver as I'm seeing it there - only on MDB 3.0 and WiredTiger, BUT I see a similar issue in Python with a different set of code - bulk updates failing and no handy info. I'm getting the following error when doing multi-threaded bulk upserts.

     {noformat}
     com.mongodb.BulkWriteException: Bulk write operation error on server 172.31.12.171:27017. Write errors: [BulkWriteError{index=97, code=1, message=' Update query failed -- RUNNER_DEAD', details={ }}].
     at com.mongodb.BulkWriteBatchCombiner.throwOnError(BulkWriteBatchCombiner.java:127)
     at com.mongodb.BulkWriteBatchCombiner.getResult(BulkWriteBatchCombiner.java:115)
     at com.mongodb.DBCollectionImpl.executeBulkWriteOperation(DBCollectionImpl.java:160)
     at com.mongodb.DBCollection.executeBulkWriteOperation(DBCollection.java:1737)
     at com.mongodb.DBCollection.executeBulkWriteOperation(DBCollection.java:1733)
     at com.mongodb.BulkWriteOperation.execute(BulkWriteOperation.java:93)
     at MongoWorker.run(MongoWorker.java:86)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
     at java.lang.Thread.run(Thread.java:745)
     {noformat}

     Windows server (but see the Python issue on Linux servers); can share a jar that reproduces easily.

    JIRA | 2 years ago | John Page
    com.mongodb.BulkWriteException: Bulk write operation error on server 172.31.12.171:27017. Write errors: [BulkWriteError{index=97, code=1, message=' Update query failed -- RUNNER_DEAD', details={ }}].
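
A common way around the duplicate-key failure in item 1 is to upsert the aggregated rows on the fields of the unique index instead of inserting them, so re-running the job replaces existing documents rather than colliding with them. The sketch below is not the original poster's code: it assumes the MongoDB Java sync driver 3.7+ on the executor classpath, a DataFrame `aggregatedDf` holding the grouped output, and a hypothetical aggregate column `total`; the server address, database, collection, and key fields (dbname.prodAgg, prodCategory, prodId, location) are taken from the error message above.

    import com.mongodb.client.MongoClients
    import com.mongodb.client.model.{BulkWriteOptions, ReplaceOneModel, ReplaceOptions, WriteModel}
    import org.apache.spark.sql.Row
    import org.bson.Document
    import scala.collection.JavaConverters._

    aggregatedDf.foreachPartition { (rows: Iterator[Row]) =>
      // One client per partition: executors cannot reuse a driver-side connection.
      val client = MongoClients.create("mongodb://192.168.56.1:27017")
      try {
        val coll = client.getDatabase("dbname").getCollection("prodAgg")
        val models: List[WriteModel[Document]] = rows.map { r =>
          val prodCategory = r.getAs[String]("prodCategory")
          val prodId       = r.getAs[String]("prodId")
          val location     = r.getAs[String]("location")
          // Filter on the unique-index fields so a re-run replaces the existing document.
          val filter = new Document("prodCategory", prodCategory)
            .append("prodId", prodId)
            .append("location", location)
          val replacement = new Document("prodCategory", prodCategory)
            .append("prodId", prodId)
            .append("location", location)
            .append("total", r.getAs[Long]("total")) // hypothetical aggregated value
          new ReplaceOneModel[Document](filter, replacement, new ReplaceOptions().upsert(true))
        }.toList
        if (models.nonEmpty)
          coll.bulkWrite(models.asJava, new BulkWriteOptions().ordered(false))
      } finally {
        client.close()
      }
    }

Depending on the Spark connector version, storing the three key fields as a compound _id and letting the connector's save/replace behaviour handle it may achieve the same effect, but the plain-driver upsert above avoids relying on connector-version specifics. ordered(false) lets the rest of a batch proceed even if one operation still fails.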

    Root Cause Analysis

    1. com.mongodb.MongoBulkWriteException

      Bulk write operation error on server 192.168.56.1:27017. Write errors: [BulkWriteError{index=6, code=11000, message='E11000 duplicate key error collection: dbname.prodAgg index: id dup key: { : { prodCategory: "xxx", prodId: "yyyyy", location: "US-EAST" } }', details={ }}].

      at com.mongodb.connection.BulkWriteBatchCombiner.getError(BulkWriteBatchCombiner.java:176)
    2. MongoDB Java Driver
      com.mongodb.connection.BulkWriteBatchCombiner.getError (1 frame)
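
When this exception is caught, the per-operation details are available on it: getWriteErrors() returns one BulkWriteError per failed write, carrying the batch index, the error code (11000 is a duplicate key), and the message shown above, while getWriteResult() reports what was applied before the failure. Below is a minimal Scala sketch of separating duplicate-key errors from everything else; runBulkWrite() is a hypothetical stand-in for whatever code ends in collection.bulkWrite(...).

    import com.mongodb.MongoBulkWriteException
    import scala.collection.JavaConverters._

    try {
      runBulkWrite() // hypothetical: the call that performs collection.bulkWrite(...)
    } catch {
      case e: MongoBulkWriteException =>
        // Code 11000 marks a duplicate-key violation; anything else is unexpected here.
        val (dupKeys, others) = e.getWriteErrors.asScala.partition(_.getCode == 11000)
        dupKeys.foreach { err =>
          println(s"duplicate key at batch index ${err.getIndex}: ${err.getMessage}")
        }
        if (others.nonEmpty) throw e
    }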