java.sql.BatchUpdateException: Batch entry 0 DELETE FROM "TBLS" WHERE "TBL_ID"='1621' was aborted. Call getNextException to see the cause.

Apache's JIRA Issue Tracker | Alexander Behm | 2 years ago
  1.

    It appears that the Hive Metastore does not properly migrate column statistics when renaming a table across databases. While renaming across databases is not supported in HiveQL, it can be done via the Metastore Thrift API. The problem is that such a newly renamed table cannot be dropped (unless it is renamed back to its original database/name). Here are the steps for reproducing the issue.

    1. From the Hive shell/beeline:
    {code}
    create database db1;
    create database db2;
    create table db1.mv (i int);
    use db1;
    analyze table mv compute statistics for columns i;
    {code}

    2. From a Java program:
    {code}
    public static void main(String[] args) throws Exception {
      HiveConf conf = new HiveConf(MetaStoreClientPool.class);
      HiveMetaStoreClient hiveClient = new HiveMetaStoreClient(conf);
      Table t = hiveClient.getTable("db1", "mv");
      t.setDbName("db2");
      t.setTableName("mv2");
      hiveClient.alter_table("db1", "mv", t);
    }
    {code}

    3. From the Hive shell/beeline:
    {code}
    drop table db2.mv2;
    {code}

    Stack shown when running step 3:
    {code}
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
    MetaException(message:javax.jdo.JDODataStoreException: Exception thrown flushing changes to datastore
      at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
      at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
      at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
      at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
      at com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown Source)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
      at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
      at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
      at org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
      at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:724)
    NestedThrowablesStackTrace:
    java.sql.BatchUpdateException: Batch entry 0 DELETE FROM "TBLS" WHERE "TBL_ID"='1621' was aborted. Call getNextException to see the cause.
      at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)
      at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1836)
      at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407)
      at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2737)
      at com.jolbox.bonecp.StatementHandle.executeBatch(StatementHandle.java:424)
      at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:372)
      at org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:628)
      at org.datanucleus.store.rdbms.SQLController.processStatementsForConnection(SQLController.java:596)
      at org.datanucleus.store.rdbms.SQLController$1.transactionFlushed(SQLController.java:683)
      at org.datanucleus.store.connection.AbstractManagedConnection.transactionFlushed(AbstractManagedConnection.java:86)
      at org.datanucleus.store.connection.ConnectionManagerImpl$2.transactionFlushed(ConnectionManagerImpl.java:454)
      at org.datanucleus.TransactionImpl.flush(TransactionImpl.java:203)
      at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:267)
      at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:98)
      at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
      at com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
      at com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown Source)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
      at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
      at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
      at org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
      at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:724)
    )
    {code}

    Apache's JIRA Issue Tracker | 2 years ago | Alexander Behm
    java.sql.BatchUpdateException: Batch entry 0 DELETE FROM "TBLS" WHERE "TBL_ID"='1621' was aborted. Call getNextException to see the cause.
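    The aborted-batch message above never contains the underlying constraint violation; the PostgreSQL JDBC driver chains the server's real error behind getNextException(). A minimal, self-contained sketch of walking that chain (the exceptions here are constructed by hand purely for illustration, not taken from the report):

    ```java
    import java.sql.BatchUpdateException;
    import java.sql.SQLException;

    public class NextExceptionDemo {
        // Walk the SQLException chain; the server-reported cause is the last link.
        static String rootMessage(SQLException e) {
            SQLException cur = e;
            while (cur.getNextException() != null) {
                cur = cur.getNextException();
            }
            return cur.getMessage();
        }

        public static void main(String[] args) {
            // Simulate what the driver produces for an aborted batch entry.
            BatchUpdateException batch = new BatchUpdateException(
                    "Batch entry 0 DELETE FROM \"TBLS\" WHERE \"TBL_ID\"='1621' was aborted.",
                    new int[0]);
            // Hypothetical server-side message, attached the way the driver does it.
            batch.setNextException(new SQLException(
                    "ERROR: update or delete on table \"TBLS\" violates a foreign key constraint"));
            System.out.println(rootMessage(batch));
        }
    }
    ```

    Applied to the real BatchUpdateException caught around the failing JDBC call, the same walk surfaces the PostgreSQL error that the stack traces above hide.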
  2.

    [HIVE-9720] Metastore does not properly migrate column stats when renaming a table across databases. - ASF JIRA

    apache.org | 1 year ago
    java.sql.BatchUpdateException: Batch entry 0 DELETE FROM "TBLS" WHERE "TBL_ID"='1621' was aborted. Call getNextException to see the cause.
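    A plausible reading of the aborted DELETE FROM "TBLS" is that the column-statistics rows left under the old database name still reference the table's TBL_ID, so the batched parent delete violates a foreign key. The toy below models only that ordering constraint; the identifiers are invented, not the real metastore schema:

    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class FkDeleteDemo {
        // Maps a parent row id to the child rows that still reference it.
        static final Map<String, List<String>> referencingRows = new HashMap<>();

        // Reject the delete while referencing rows exist, mirroring how the
        // database aborts the batched DELETE FROM "TBLS".
        static void deleteParent(String id) {
            List<String> refs = referencingRows.getOrDefault(id, List.of());
            if (!refs.isEmpty()) {
                throw new IllegalStateException(
                        "DELETE aborted: " + refs.size() + " row(s) still reference " + id);
            }
            referencingRows.remove(id);
        }

        public static void main(String[] args) {
            referencingRows.put("TBL_ID=1621",
                    new ArrayList<>(List.of("stale column-stats row")));
            try {
                deleteParent("TBL_ID=1621"); // fails, like the drop in the report
            } catch (IllegalStateException e) {
                System.out.println(e.getMessage());
            }
            referencingRows.get("TBL_ID=1621").clear(); // referencing rows removed first
            deleteParent("TBL_ID=1621");                // now succeeds
        }
    }
    ```

    This also matches the reported workaround: renaming the table back restores the state the statistics rows point at, after which the drop proceeds.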
  3.

    Issues with the Hive plugins:

    1. The plugin uses a very specific HCatalog version. We don't know the behavior against other versions of HCatalog.
    2. They don't work with avro tables because the etl apps expose avro. The apps should not expose avro.
    3. Even if you change the app, they don't work if you are reading from one avro table and writing to another. The combination of HCatalogInputFormat and AvroSerDe has a bug where it reads the avro schema from a property that both the input and the output set. This only happens for map-only jobs.
    4. Even if you change the AvroSerDe to ignore the schema in that property, it will be able to read correctly, but there is some issue writing to partitioned avro tables:

    {code}
    Job commit failed: org.apache.hive.hcatalog.common.HCatException : 2006 : Error adding partition to metastore. Cause : MetaException(message:javax.jdo.JDODataStoreException: Add request failed : INSERT INTO "COLUMNS_V2" ("CD_ID","COMMENT","COLUMN_NAME","TYPE_NAME","INTEGER_IDX") VALUES (?,?,?,?,?)
      at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
      at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:252)
      at org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:3048)
      at org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2992)
      at org.apache.hadoop.hive.metastore.ObjectStore.copyMSD(ObjectStore.java:2958)
      at org.apache.hadoop.hive.metastore.ObjectStore.alterTable(ObjectStore.java:2813)
      at sun.reflect.GeneratedMethodAccessor71.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
      at com.sun.proxy.$Proxy0.alterTable(Unknown Source)
      at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterTable(HiveAlterHandler.java:241)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_core(HiveMetaStore.java:3345)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_with_environment_context(HiveMetaStore.java:3325)
      at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
      at com.sun.proxy.$Proxy5.alter_table_with_environment_context(Unknown Source)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_table_with_environment_context.getResult(ThriftHiveMetastore.java:9105)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_table_with_environment_context.getResult(ThriftHiveMetastore.java:9089)
      at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
      at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)
    NestedThrowablesStackTrace:
    java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "COLUMNS_V2" ("CD_ID","COMMENT","COLUMN_NAME","TYPE_NAME","INTEGER_IDX") VALUES ('2649',NULL,'user','string','2') was aborted. Call getNextException to see the cause.
      at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)
      at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1836)
      at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407)
      at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2737)
      at com.jolbox.bonecp.StatementHandle.executeBatch(StatementHandle.java:424)
      at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:372)
      at org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:628)
      at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:409)
      at org.datanucleus.store.rdbms.scostore.JoinListStore.internalAdd(JoinListStore.java:304)
      at org.datanucleus.store.rdbms.scostore.AbstractListStore.addAll(AbstractListStore.java:136)
      at org.datanucleus.store.rdbms.mapping.java.CollectionMapping.postInsert(CollectionMapping.java:136)
      at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:519)
      at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:167)
      at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:143)
      at org.datanucleus.state.JDOStateManager.internalMakePersistent(JDOStateManager.java:3784)
      at org.datanucleus.state.JDOStateManager.makePersistent(JDOStateManager.java:3760)
      at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2219)
      at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2314)
      at org.datanucleus.store.rdbms.mapping.java.PersistableMapping.setObjectAsValue(PersistableMapping.java:567)
      at org.datanucleus.store.rdbms.mapping.java.PersistableMapping.setObject(PersistableMapping.java:326)
      at org.datanucleus.store.rdbms.fieldmanager.ParameterSetter.storeObjectField(ParameterSetter.java:193)
      at org.datanucleus.state.JDOStateManager.providedObjectField(JDOStateManager.java:1269)
      at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoProvideField(MStorageDescriptor.java)
      at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoProvideFields(MStorageDescriptor.java)
      at org.datanucleus.state.JDOStateManager.provideFields(JDOStateManager.java:1346)
      at org.datanucleus.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:305)
      at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateTable(RDBMSPersistenceHandler.java:417)
      at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateObject(RDBMSPersistenceHandler.java:390)
      at org.datanucleus.state.JDOStateManager.flush(JDOStateManager.java:5027)
      at org.datanucleus.flush.FlushOrdered.execute(FlushOrdered.java:106)
      at org.datanucleus.ExecutionContextImpl.flushInternal(ExecutionContextImpl.java:4119)
      at org.datanucleus.ExecutionContextThreadedImpl.flushInternal(ExecutionContextThreadedImpl.java:450)
      at org.datanucleus.store.query.Query.prepareDatastore(Query.java:1575)
      at org.datanucleus.store.query.Query.executeQuery(Query.java:1760)
      at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
      at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:243)
      at org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:3048)
      at org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2992)
      at org.apache.hadoop.hive.metastore.ObjectStore.copyMSD(ObjectStore.java:2958)
      at org.apache.hadoop.hive.metastore.ObjectStore.alterTable(ObjectStore.java:2813)
      at sun.reflect.GeneratedMethodAccessor71.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
      at com.sun.proxy.$Proxy0.alterTable(Unknown Source)
      at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterTable(HiveAlterHandler.java:241)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_core(HiveMetaStore.java:3345)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_with_environment_context(HiveMetaStore.java:3325)
      at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
      at com.sun.proxy.$Proxy5.alter_table_with_environment_context(Unknown Source)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_table_with_environment_context.getResult(ThriftHiveMetastore.java:9105)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_table_with_environment_context.getResult(ThriftHiveMetastore.java:9089)
      at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
      at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)
    )
      at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.registerPartitions(FileOutputCommitterContainer.java:969)
      at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.commitJob(FileOutputCommitterContainer.java:249)
      at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:274)
      at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)
    Caused by: MetaException(message:javax.jdo.JDODataStoreException: Add request failed : INSERT INTO "COLUMNS_V2" ("CD_ID","COMMENT","COLUMN_NAME","TYPE_NAME","INTEGER_IDX") VALUES (?,?,?,?,?)
      at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
      at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:252)
      at org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:3048)
      at org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2992)
      at org.apache.hadoop.hive.metastore.ObjectStore.copyMSD(ObjectStore.java:2958)
      at org.apache.hadoop.hive.metastore.ObjectStore.alterTable(ObjectStore.java:2813)
      at sun.reflect.GeneratedMethodAccessor71.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
      at com.sun.proxy.$Proxy0.alterTable(Unknown Source)
      at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterTable(HiveAlterHandler.java:241)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_core(HiveMetaStore.java:3345)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_with_environment_context(HiveMetaStore.java:3325)
      at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
      at com.sun.proxy.$Proxy5.alter_table_with_environment_context(Unknown Source)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_table_with_environment_context.getResult(ThriftHiveMetastore.java:9105)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_table_with_environment_context.getResult(ThriftHiveMetastore.java:9089)
      at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
      at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)
    NestedThrowablesStackTrace:
    java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "COLUMNS_V2" ("CD_ID","COMMENT","COLUMN_NAME","TYPE_NAME","INTEGER_IDX") VALUES ('2649',NULL,'user','string','2') was aborted. Call getNextException to see the cause.
      at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)
      at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1836)
      at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407)
      at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2737)
      at com.jolbox.bonecp.StatementHandle.executeBatch(StatementHandle.java:424)
      at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:372)
      at org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:628)
      at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:409)
      at org.datanucleus.store.rdbms.scostore.JoinListStore.internalAdd(JoinListStore.java:304)
      at org.datanucleus.store.rdbms.scostore.AbstractListStore.addAll(AbstractListStore.java:136)
      at org.datanucleus.store.rdbms.mapping.java.CollectionMapping.postInsert(CollectionMapping.java:136)
      at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:519)
      at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertTable(RDBMSPersistenceHandler.java:167)
      at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:143)
      at org.datanucleus.state.JDOStateManager.internalMakePersistent(JDOStateManager.java:3784)
      at org.datanucleus.state.JDOStateManager.makePersistent(JDOStateManager.java:3760)
      at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2219)
      at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2314)
      at org.datanucleus.store.rdbms.mapping.java.PersistableMapping.setObjectAsValue(PersistableMapping.java:567)
      at org.datanucleus.store.rdbms.mapping.java.PersistableMapping.setObject(PersistableMapping.java:326)
      at org.datanucleus.store.rdbms.fieldmanager.ParameterSetter.storeObjectField(ParameterSetter.java:193)
      at org.datanucleus.state.JDOStateManager.providedObjectField(JDOStateManager.java:1269)
      at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoProvideField(MStorageDescriptor.java)
      at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoProvideFields(MStorageDescriptor.java)
      at org.datanucleus.state.JDOStateManager.provideFields(JDOStateManager.java:1346)
      at org.datanucleus.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:305)
      at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateTable(RDBMSPersistenceHandler.java:417)
      at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateObject(RDBMSPersistenceHandler.java:390)
      at org.datanucleus.state.JDOStateManager.flush(JDOStateManager.java:5027)
      at org.datanucleus.flush.FlushOrdered.execute(FlushOrdered.java:106)
      at org.datanucleus.ExecutionContextImpl.flushInternal(ExecutionContextImpl.java:4119)
      at org.datanucleus.ExecutionContextThreadedImpl.flushInternal(ExecutionContextThreadedImpl.java:450)
      at org.datanucleus.store.query.Query.prepareDatastore(Query.java:1575)
      at org.datanucleus.store.query.Query.executeQuery(Query.java:1760)
      at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
      at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:243)
      at org.apache.hadoop.hive.metastore.ObjectStore.listStorageDescriptorsWithCD(ObjectStore.java:3048)
      at org.apache.hadoop.hive.metastore.ObjectStore.removeUnusedColumnDescriptor(ObjectStore.java:2992)
      at org.apache.hadoop.hive.metastore.ObjectStore.copyMSD(ObjectStore.java:2958)
      at org.apache.hadoop.hive.metastore.ObjectStore.alterTable(ObjectStore.java:2813)
      at sun.reflect.GeneratedMethodAccessor71.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
      at com.sun.proxy.$Proxy0.alterTable(Unknown Source)
      at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterTable(HiveAlterHandler.java:241)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_core(HiveMetaStore.java:3345)
      at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_with_environment_context(HiveMetaStore.java:3325)
      at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      at java.lang.reflect.Method.invoke(Method.java:606)
      at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
      at com.sun.proxy.$Proxy5.alter_table_with_environment_context(Unknown Source)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_table_with_environment_context.getResult(ThriftHiveMetastore.java:9105)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_table_with_environment_context.getResult(ThriftHiveMetastore.java:9089)
      at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
      at java.security.AccessController.doPrivileged(Native Method)
      at javax.security.auth.Subject.doAs(Subject.java:415)
      at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
      at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
      at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      at java.lang.Thread.run(Thread.java:745)
    )
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_with_environment_context_result$alter_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:36822)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_with_environment_context_result$alter_table_with_environment_context_resultStandardScheme.read(ThriftHiveMetastore.java:36799)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$alter_table_with_environment_context_result.read(ThriftHiveMetastore.java:36741)
      at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_alter_table_with_environment_context(ThriftHiveMetastore.java:1261)
      at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.alter_table_with_environment_context(ThriftHiveMetastore.java:1245)
      at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:338)
      at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:327)
      at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.updateTableSchema(FileOutputCommitterContainer.java:481)
      at org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.registerPartitions(FileOutputCommitterContainer.java:874)
      ... 6 more
    {code}

    Cask Community Issue Tracker | 10 months ago | Albert Shau
    java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "COLUMNS_V2" ("CD_ID","COMMENT","COLUMN_NAME","TYPE_NAME","INTEGER_IDX") VALUES ('2649',NULL,'user','string','2') was aborted. Call getNextException to see the cause.

  5. 0

    Some of the slaves in PDB-1124 (e.g. 2) seem to hit this every time they attempt to gc the master database.
{code}
2015-01-04 05:23:24,952 INFO [c.p.p.command] [b9d19827-c06b-4396-ac07-0afb6707e89f] [replace catalog]
2015-01-04 05:23:32,427 INFO [c.p.p.c.services] Starting sweep of stale reports (threshold: 14 days)
2015-01-04 05:23:32,439 INFO [c.p.p.c.services] Finished sweep of stale reports (threshold: 14 days)
2015-01-04 05:23:32,439 INFO [c.p.p.c.services] Starting database garbage collection
2015-01-04 05:23:35,208 ERROR [c.p.p.c.services] Error during garbage collection
java.sql.BatchUpdateException: Batch entry 0 DELETE FROM resource_params_cache WHERE NOT EXISTS (SELECT * FROM catalog_resources cr WHERE cr.resource=resource_params_cache.resource) was aborted. Call getNextException to see the cause.
at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2746) ~[puppetdb.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:457) ~[puppetdb.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1887) ~[puppetdb.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:405) ~[puppetdb.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2893) ~[puppetdb.jar:na]
at com.jolbox.bonecp.StatementHandle.executeBatch(StatementHandle.java:469) ~[puppetdb.jar:na]
at clojure.java.jdbc.internal$do_prepared_STAR_$fn__6350.invoke(internal.clj:356) ~[na:na]
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:223) ~[na:na]
at clojure.java.jdbc.internal$do_prepared_STAR_.doInvoke(internal.clj:356) ~[na:na]
at clojure.lang.RestFn.invoke(RestFn.java:423) [puppetdb.jar:na]
at clojure.java.jdbc$delete_rows.invoke(jdbc.clj:297) ~[na:na]
at com.puppetlabs.puppetdb.scf.storage$delete_unassociated_params_BANG_$fn__9696.invoke(storage.clj:740) ~[na:na]
at com.puppetlabs.puppetdb.scf.storage.proxy$java.lang.Object$Callable$7da976d4.call(Unknown Source) ~[na:na]
at com.yammer.metrics.core.Timer.time(Timer.java:91) ~[puppetdb.jar:na]
at com.puppetlabs.puppetdb.scf.storage$delete_unassociated_params_BANG_.invoke(storage.clj:739) ~[na:na]
at com.puppetlabs.puppetdb.scf.storage$garbage_collect_BANG_$fn__9708$fn__9709.invoke(storage.clj:767) ~[na:na]
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:223) ~[na:na]
at com.puppetlabs.puppetdb.scf.storage$garbage_collect_BANG_$fn__9708.invoke(storage.clj:766) ~[na:na]
at com.puppetlabs.puppetdb.scf.storage.proxy$java.lang.Object$Callable$7da976d4.call(Unknown Source) ~[na:na]
at com.yammer.metrics.core.Timer.time(Timer.java:91) ~[puppetdb.jar:na]
at com.puppetlabs.puppetdb.scf.storage$garbage_collect_BANG_.invoke(storage.clj:765) ~[na:na]
at com.puppetlabs.puppetdb.cli.services$garbage_collect_BANG_$fn__19190.invoke(services.clj:165) ~[na:na]
at com.puppetlabs.jdbc$with_transacted_connection_fn$fn__6761$fn__6762$fn__6763.invoke(jdbc.clj:290) ~[na:na]
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:204) ~[na:na]
at com.puppetlabs.jdbc$with_transacted_connection_fn$fn__6761$fn__6762.invoke(jdbc.clj:290) ~[na:na]
at clojure.java.jdbc.internal$with_connection_STAR_.invoke(internal.clj:186) ~[na:na]
at com.puppetlabs.jdbc$with_transacted_connection_fn$fn__6761.invoke(jdbc.clj:287) ~[na:na]
at com.puppetlabs.jdbc$eval6739$retry_sql_STAR___6740$fn__6741$fn__6742.invoke(jdbc.clj:259) ~[na:na]
at com.puppetlabs.jdbc$eval6739$retry_sql_STAR___6740$fn__6741.invoke(jdbc.clj:258) ~[na:na]
at com.puppetlabs.jdbc$eval6739$retry_sql_STAR___6740.invoke(jdbc.clj:250) ~[na:na]
at com.puppetlabs.jdbc$with_transacted_connection_fn.invoke(jdbc.clj:286) ~[na:na]
at com.puppetlabs.puppetdb.cli.services$garbage_collect_BANG_.invoke(services.clj:164) ~[na:na]
at com.puppetlabs.puppetdb.cli.services$perform_db_maintenance_BANG_.doInvoke(services.clj:177) [na:na]
at clojure.lang.RestFn.applyTo(RestFn.java:139) [puppetdb.jar:na]
at clojure.core$apply.invoke(core.clj:626) [puppetdb.jar:na]
at com.puppetlabs.puppetdb.cli.services$start_puppetdb$fn__19276.invoke(services.clj:347) [na:na]
at clojure.lang.AFn.run(AFn.java:22) [puppetdb.jar:na]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [na:1.7.0_71]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) [na:1.7.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) [na:1.7.0_71]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [na:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_71]
{code}
    That is from host 2's puppetdb.log-20150111, which is in the ticket. This is also present in the master puppetdb logs for PE-7392 on 2014-12-25, but that is the only time it occurs for them.

    JIRA | 2 years ago | Wyatt Alt
    java.sql.BatchUpdateException: Batch entry 0 DELETE FROM resource_params_cache WHERE NOT EXISTS (SELECT * FROM catalog_resources cr WHERE cr.resource=resource_params_cache.resource) was aborted. Call getNextException to see the cause.
  6. 0

    Hi Gang, I just upgraded to PuppetDB 2.3.1 from 2.3.0 in my development environment and PuppetDB crashed on restart. If you need more information, let me know. The resulting log messages:
{code}
2015-04-01 08:44:51,294 INFO [o.e.j.u.log] Logging initialized @24441ms
2015-04-01 08:44:52,110 INFO [p.t.s.w.jetty9-core] Removing buggy security provider SunPKCS11-NSS version 1.7
2015-04-01 08:44:52,636 INFO [p.t.s.w.jetty9-service] Initializing web server(s).
2015-04-01 08:44:52,642 INFO [p.t.s.w.jetty9-service] Starting web server(s).
2015-04-01 08:44:52,818 INFO [p.t.s.w.jetty9-core] Starting web server.
2015-04-01 08:44:52,822 INFO [o.e.j.s.Server] jetty-9.2.z-SNAPSHOT
2015-04-01 08:44:52,874 INFO [o.e.j.s.ServerConnector] Started ServerConnector@66ace155{HTTP/1.1}{localhost:8080}
2015-04-01 08:44:53,150 INFO [o.e.j.s.ServerConnector] Started ServerConnector@7f17e429{SSL-HTTP/1.1}{0.0.0.0:8081}
2015-04-01 08:44:53,151 INFO [o.e.j.s.Server] Started @26300ms
2015-04-01 08:44:53,228 INFO [c.p.p.c.services] PuppetDB version 2.3.1
2015-04-01 08:44:53,362 INFO [c.p.p.s.migrate] Applying database migration version 28
2015-04-01 08:44:53,396 ERROR [c.p.p.s.migrate] Caught SQLException during migration
java.sql.BatchUpdateException: Batch entry 5 DELETE FROM fact_paths t1 WHERE t1.id <> (SELECT MIN(t2.id) FROM fact_paths t2 WHERE t1.path = t2.path) was aborted. Call getNextException to see the cause.
at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2746) ~[puppetdb.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:457) ~[puppetdb.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1887) ~[puppetdb.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:405) ~[puppetdb.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2893) ~[puppetdb.jar:na]
at com.jolbox.bonecp.StatementHandle.executeBatch(StatementHandle.java:469) ~[puppetdb.jar:na]
at clojure.java.jdbc$do_commands$fn__7301.invoke(jdbc.clj:188) ~[na:na]
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:223) [na:na]
at clojure.java.jdbc$do_commands.doInvoke(jdbc.clj:187) ~[na:na]
at clojure.lang.RestFn.invoke(RestFn.java:3894) [puppetdb.jar:na]
at com.puppetlabs.puppetdb.scf.migrate$lift_fact_paths_into_facts.invoke(migrate.clj:968) ~[na:na]
at com.puppetlabs.puppetdb.scf.migrate$migrate_BANG_$fn__20902$fn__20915.invoke(migrate.clj:1063) ~[na:na]
at com.puppetlabs.puppetdb.scf.migrate$migrate_BANG_$fn__20902.invoke(migrate.clj:1062) [na:na]
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:204) [na:na]
at com.puppetlabs.puppetdb.scf.migrate$migrate_BANG_.invoke(migrate.clj:1059) [na:na]
at com.puppetlabs.puppetdb.cli.services$start_puppetdb$fn__21109.invoke(services.clj:292) [na:na]
at clojure.java.jdbc.internal$with_connection_STAR_.invoke(internal.clj:186) [na:na]
at com.puppetlabs.puppetdb.cli.services$start_puppetdb.invoke(services.clj:290) [na:na]
at com.puppetlabs.puppetdb.cli.services$reify__21157$service_fnk__18232__auto___positional$reify__21168.start(services.clj:366) [na:na]
at puppetlabs.trapperkeeper.services$eval18068$fn__18082$G__18058__18085.invoke(services.clj:10) [na:na]
at puppetlabs.trapperkeeper.services$eval18068$fn__18082$G__18057__18089.invoke(services.clj:10) [na:na]
at puppetlabs.trapperkeeper.internal$run_lifecycle_fn_BANG_.invoke(internal.clj:154) [na:na]
at puppetlabs.trapperkeeper.internal$run_lifecycle_fns.invoke(internal.clj:182) [na:na]
at puppetlabs.trapperkeeper.internal$build_app_STAR_$reify__18905.start(internal.clj:449) [na:na]
at puppetlabs.trapperkeeper.internal$boot_services_STAR_$fn__18917.invoke(internal.clj:473) [na:na]
at puppetlabs.trapperkeeper.internal$boot_services_STAR_.invoke(internal.clj:471) [na:na]
at puppetlabs.trapperkeeper.core$boot_with_cli_data.invoke(core.clj:113) [na:na]
at puppetlabs.trapperkeeper.core$run.invoke(core.clj:144) [na:na]
at puppetlabs.trapperkeeper.core$main.doInvoke(core.clj:159) [na:na]
at clojure.lang.RestFn.applyTo(RestFn.java:137) [puppetdb.jar:na]
at clojure.core$apply.invoke(core.clj:624) [puppetdb.jar:na]
at com.puppetlabs.puppetdb.cli.services$_main.doInvoke(services.clj:373) [na:na]
at clojure.lang.RestFn.invoke(RestFn.java:421) [puppetdb.jar:na]
at clojure.lang.Var.invoke(Var.java:383) [puppetdb.jar:na]
at clojure.lang.AFn.applyToHelper(AFn.java:156) [puppetdb.jar:na]
at clojure.lang.Var.applyTo(Var.java:700) [puppetdb.jar:na]
at clojure.core$apply.invoke(core.clj:624) [puppetdb.jar:na]
at com.puppetlabs.puppetdb.core$run_command.invoke(core.clj:87) [na:na]
at com.puppetlabs.puppetdb.core$_main.doInvoke(core.clj:95) [na:na]
at clojure.lang.RestFn.invoke(RestFn.java:436) [puppetdb.jar:na]
at clojure.lang.Var.invoke(Var.java:388) [puppetdb.jar:na]
at clojure.lang.AFn.applyToHelper(AFn.java:160) [puppetdb.jar:na]
at clojure.lang.Var.applyTo(Var.java:700) [puppetdb.jar:na]
at clojure.core$apply.invoke(core.clj:624) [puppetdb.jar:na]
at clojure.main$main_opt.invoke(main.clj:315) [puppetdb.jar:na]
at clojure.main$main.doInvoke(main.clj:420) [puppetdb.jar:na]
at clojure.lang.RestFn.invoke(RestFn.java:482) [puppetdb.jar:na]
at clojure.lang.Var.invoke(Var.java:401) [puppetdb.jar:na]
at clojure.lang.AFn.applyToHelper(AFn.java:171) [puppetdb.jar:na]
at clojure.lang.Var.applyTo(Var.java:700) [puppetdb.jar:na]
at clojure.main.main(main.java:37) [puppetdb.jar:na]
2015-04-01 08:44:53,398 ERROR [c.p.p.s.migrate] Unravelled exception
org.postgresql.util.PSQLException: ERROR: update or delete on table "fact_paths" violates foreign key constraint "fact_values_path_id_fk" on table "fact_values"
Detail: Key (id)=(452) is still referenced from table "fact_values".
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157) ~[puppetdb.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886) ~[puppetdb.jar:na]
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:405) ~[puppetdb.jar:na]
at org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2893) ~[puppetdb.jar:na]
at com.jolbox.bonecp.StatementHandle.executeBatch(StatementHandle.java:469) ~[puppetdb.jar:na]
at clojure.java.jdbc$do_commands$fn__7301.invoke(jdbc.clj:188) ~[na:na]
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:223) [na:na]
at clojure.java.jdbc$do_commands.doInvoke(jdbc.clj:187) ~[na:na]
at clojure.lang.RestFn.invoke(RestFn.java:3894) [puppetdb.jar:na]
at com.puppetlabs.puppetdb.scf.migrate$lift_fact_paths_into_facts.invoke(migrate.clj:968) ~[na:na]
at com.puppetlabs.puppetdb.scf.migrate$migrate_BANG_$fn__20902$fn__20915.invoke(migrate.clj:1063) ~[na:na]
at com.puppetlabs.puppetdb.scf.migrate$migrate_BANG_$fn__20902.invoke(migrate.clj:1062) [na:na]
at clojure.java.jdbc.internal$transaction_STAR_.invoke(internal.clj:204) [na:na]
at com.puppetlabs.puppetdb.scf.migrate$migrate_BANG_.invoke(migrate.clj:1059) [na:na]
at com.puppetlabs.puppetdb.cli.services$start_puppetdb$fn__21109.invoke(services.clj:292) [na:na]
at clojure.java.jdbc.internal$with_connection_STAR_.invoke(internal.clj:186) [na:na]
at com.puppetlabs.puppetdb.cli.services$start_puppetdb.invoke(services.clj:290) [na:na]
at com.puppetlabs.puppetdb.cli.services$reify__21157$service_fnk__18232__auto___positional$reify__21168.start(services.clj:366) [na:na]
at puppetlabs.trapperkeeper.services$eval18068$fn__18082$G__18058__18085.invoke(services.clj:10) [na:na]
at puppetlabs.trapperkeeper.services$eval18068$fn__18082$G__18057__18089.invoke(services.clj:10) [na:na]
at puppetlabs.trapperkeeper.internal$run_lifecycle_fn_BANG_.invoke(internal.clj:154) [na:na]
at puppetlabs.trapperkeeper.internal$run_lifecycle_fns.invoke(internal.clj:182) [na:na]
at puppetlabs.trapperkeeper.internal$build_app_STAR_$reify__18905.start(internal.clj:449) [na:na]
at puppetlabs.trapperkeeper.internal$boot_services_STAR_$fn__18917.invoke(internal.clj:473) [na:na]
at puppetlabs.trapperkeeper.internal$boot_services_STAR_.invoke(internal.clj:471) [na:na]
at puppetlabs.trapperkeeper.core$boot_with_cli_data.invoke(core.clj:113) [na:na]
at puppetlabs.trapperkeeper.core$run.invoke(core.clj:144) [na:na]
at puppetlabs.trapperkeeper.core$main.doInvoke(core.clj:159) [na:na]
at clojure.lang.RestFn.applyTo(RestFn.java:137) [puppetdb.jar:na]
at clojure.core$apply.invoke(core.clj:624) [puppetdb.jar:na]
at com.puppetlabs.puppetdb.cli.services$_main.doInvoke(services.clj:373) [na:na]
at clojure.lang.RestFn.invoke(RestFn.java:421) [puppetdb.jar:na]
at clojure.lang.Var.invoke(Var.java:383) [puppetdb.jar:na]
at clojure.lang.AFn.applyToHelper(AFn.java:156) [puppetdb.jar:na]
at clojure.lang.Var.applyTo(Var.java:700) [puppetdb.jar:na]
at clojure.core$apply.invoke(core.clj:624) [puppetdb.jar:na]
at com.puppetlabs.puppetdb.core$run_command.invoke(core.clj:87) [na:na]
at com.puppetlabs.puppetdb.core$_main.doInvoke(core.clj:95) [na:na]
at clojure.lang.RestFn.invoke(RestFn.java:436) [puppetdb.jar:na]
at clojure.lang.Var.invoke(Var.java:388) [puppetdb.jar:na]
at clojure.lang.AFn.applyToHelper(AFn.java:160) [puppetdb.jar:na]
at clojure.lang.Var.applyTo(Var.java:700) [puppetdb.jar:na]
at clojure.core$apply.invoke(core.clj:624) [puppetdb.jar:na]
at clojure.main$main_opt.invoke(main.clj:315) [puppetdb.jar:na]
at clojure.main$main.doInvoke(main.clj:420) [puppetdb.jar:na]
at clojure.lang.RestFn.invoke(RestFn.java:482) [puppetdb.jar:na]
at clojure.lang.Var.invoke(Var.java:401) [puppetdb.jar:na]
at clojure.lang.AFn.applyToHelper(AFn.java:171) [puppetdb.jar:na]
at clojure.lang.Var.applyTo(Var.java:700) [puppetdb.jar:na]
at clojure.main.main(main.java:37) [puppetdb.jar:na]
2015-04-01 08:44:53,403 INFO [p.t.internal] Shutting down due to JVM shutdown hook.
2015-04-01 08:44:53,404 INFO [p.t.internal] Beginning shutdown sequence
2015-04-01 08:44:53,406 INFO [c.p.p.c.services] Shutdown request received; puppetdb exiting.
2015-04-01 08:44:53,406 INFO [p.t.s.w.jetty9-service] Shutting down web server(s).
2015-04-01 08:44:53,407 INFO [p.t.s.w.jetty9-core] Shutting down web server.
2015-04-01 08:44:53,414 INFO [o.e.j.s.ServerConnector] Stopped ServerConnector@66ace155{HTTP/1.1}{localhost:8080}
2015-04-01 08:44:53,420 INFO [o.e.j.s.ServerConnector] Stopped ServerConnector@7f17e429{SSL-HTTP/1.1}{0.0.0.0:8081}
2015-04-01 08:44:53,422 INFO [p.t.s.w.jetty9-core] Web server shutdown
2015-04-01 08:44:53,423 INFO [p.t.internal] Finished shutdown sequence
{code}
    ----
    h3. QA Risk Analysis
    | Probability | Low |
    | Severity | Med (puppetdb crash) |
    | Risk Level | Low |
    | Test Level | Unit |

    JIRA | 2 years ago | Pete Brown
    java.sql.BatchUpdateException: Batch entry 5 DELETE FROM fact_paths t1 WHERE t1.id <> (SELECT MIN(t2.id) FROM fact_paths t2 WHERE t1.path = t2.path) was aborted. Call getNextException to see the cause.
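    Every exception above ends with the same hint: "Call getNextException to see the cause." The PostgreSQL JDBC driver reports a failed batch as a bare `BatchUpdateException` and chains the underlying server error onto it via the standard `SQLException` linked list, so the real failure (here, a foreign-key violation) only surfaces by walking that chain. A minimal sketch of doing so (hypothetical class name; the chain is built by hand, so no database is required):

```java
import java.sql.BatchUpdateException;
import java.sql.SQLException;

public class SqlExceptionChain {

    // Follow SQLException.getNextException() to the last chained exception,
    // which for the PostgreSQL driver holds the actual server-side error.
    public static String rootCauseMessage(SQLException e) {
        SQLException last = e;
        while (last.getNextException() != null) {
            last = last.getNextException();
        }
        return last.getMessage();
    }

    public static void main(String[] args) {
        // Simulate what the driver does: the batch failure wraps the real
        // error. Both messages here are illustrative, not real driver output.
        BatchUpdateException batch = new BatchUpdateException(
                "Batch entry 0 DELETE ... was aborted. Call getNextException to see the cause.",
                new int[0]);
        batch.setNextException(new SQLException(
                "ERROR: update or delete violates foreign key constraint"));
        System.out.println(rootCauseMessage(batch));
    }
}
```

    The "Unravelled exception" lines in the PuppetDB logs above are the result of exactly this unwrapping: logging the chained `PSQLException` rather than only the `BatchUpdateException` is what exposes the violated constraint and the offending key.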


    Root Cause Analysis

    1. java.sql.BatchUpdateException

      Batch entry 0 DELETE FROM "TBLS" WHERE "TBL_ID"='1621' was aborted. Call getNextException to see the cause.

      at org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError()
    2. PostgreSQL JDBC Driver
      AbstractJdbc2Statement.executeBatch
      1. org.postgresql.jdbc2.AbstractJdbc2Statement$BatchResultHandler.handleError(AbstractJdbc2Statement.java:2598)
      2. org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1836)
      3. org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:407)
      4. org.postgresql.jdbc2.AbstractJdbc2Statement.executeBatch(AbstractJdbc2Statement.java:2737)
      4 frames
    3. BoneCP :: Core Library
      StatementHandle.executeBatch
      1. com.jolbox.bonecp.StatementHandle.executeBatch(StatementHandle.java:424)
      1 frame
    4. DataNucleus RDBMS plugin
      SQLController$1.transactionFlushed
      1. org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:372)
      2. org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:628)
      3. org.datanucleus.store.rdbms.SQLController.processStatementsForConnection(SQLController.java:596)
      4. org.datanucleus.store.rdbms.SQLController$1.transactionFlushed(SQLController.java:683)
      4 frames
    5. DataNucleus Core
      TransactionImpl.commit
      1. org.datanucleus.store.connection.AbstractManagedConnection.transactionFlushed(AbstractManagedConnection.java:86)
      2. org.datanucleus.store.connection.ConnectionManagerImpl$2.transactionFlushed(ConnectionManagerImpl.java:454)
      3. org.datanucleus.TransactionImpl.flush(TransactionImpl.java:203)
      4. org.datanucleus.TransactionImpl.commit(TransactionImpl.java:267)
      4 frames
    6. DataNucleus JDO API plugin
      JDOTransaction.commit
      1. org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:98)
      1 frame
    7. Hive Metastore
      ObjectStore.commitTransaction
      1. org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:411)
      1 frame
    8. Java RT
      Method.invoke
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      4. java.lang.reflect.Method.invoke(Method.java:606)
      4 frames
    9. Hive Metastore
      RawStoreProxy.invoke
      1. org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
      1 frame
    10. com.sun.proxy
      $Proxy0.commitTransaction
      1. com.sun.proxy.$Proxy0.commitTransaction(Unknown Source)
      1 frame
    11. Hive Metastore
      HiveMetaStore$HMSHandler.drop_table_with_environment_context
      1. org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1389)
      2. org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1525)
      2 frames
    12. Java RT
      Method.invoke
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      4. java.lang.reflect.Method.invoke(Method.java:606)
      4 frames
    13. Hive Metastore
      RetryingHMSHandler.invoke
      1. org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:106)
      1 frame
    14. com.sun.proxy
      $Proxy1.drop_table_with_environment_context
      1. com.sun.proxy.$Proxy1.drop_table_with_environment_context(Unknown Source)
      1 frame
    15. Hive Metastore
      ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult
      1. org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8072)
      2. org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:8056)
      2 frames
    16. Apache Thrift
      TBaseProcessor.process
      1. org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
      2. org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
      2 frames
    17. Hive Metastore
      TSetIpAddressProcessor.process
      1. org.apache.hadoop.hive.metastore.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:48)
      1 frame
    18. Apache Thrift
      TThreadPoolServer$WorkerProcess.run
      1. org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
      1 frame
    19. Java RT
      Thread.run
      1. java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      2. java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      3. java.lang.Thread.run(Thread.java:724)
      3 frames