org.apache.metamodel.MetaModelException: Could not get schema names: The statement is closed.

GitHub | ClaudiaPHI | 2 months ago
Related reports:

  1. GitHub comment 1201#249551178
     GitHub | 2 months ago | ClaudiaPHI
     org.apache.metamodel.MetaModelException: Could not get schema names: The statement is closed.
  2. GitHub comment 1201#249551178
     GitHub | 2 months ago | ClaudiaPHI
     org.apache.metamodel.MetaModelException: Could not rollback transaction: rollback() should not be called while in auto-commit mode.
  3. GitHub comment 1486#244173235
     GitHub | 3 months ago | chmandrade
     org.apache.metamodel.MetaModelException: java.io.IOException: Mkdirs failed to create /results (exists=false, cwd=file:/Users/henriqueandrade/Documents/App/spark/DataCleaner/engine/env/spark/target)
  4. Running DataCleaner on Spark as a local file and S3
     GitHub | 3 months ago | chmandrade
     org.apache.metamodel.MetaModelException: java.io.IOException: Mkdirs failed to create /results (exists=false, cwd=file:/Users/henriqueandrade/Documents/App/spark/DataCleaner/engine/env/spark/target)
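The second related report above fails because JDBC forbids calling Connection.rollback() while auto-commit is enabled. Below is a minimal sketch of that contract; TinyConnection is a hypothetical stand-in, not a MetaModel or driver class:

```java
// Toy model of java.sql.Connection's auto-commit/rollback contract.
// TinyConnection is hypothetical; a real driver enforces the same rule.
public class AutoCommitSketch {
    static class TinyConnection {
        private boolean autoCommit = true; // JDBC connections start in auto-commit mode

        void setAutoCommit(boolean autoCommit) { this.autoCommit = autoCommit; }

        void rollback() {
            if (autoCommit) {
                // Mirrors the error in the report above
                throw new IllegalStateException(
                        "rollback() should not be called while in auto-commit mode.");
            }
            // A real driver would undo the in-flight transaction here.
        }
    }

    /** Returns true when rollback() is rejected while auto-commit is on. */
    static boolean rollbackRejectedInAutoCommit() {
        try {
            new TinyConnection().rollback();
            return false;
        } catch (IllegalStateException expected) {
            return true;
        }
    }

    /** Returns true when rollback() succeeds after disabling auto-commit. */
    static boolean rollbackAllowedAfterDisablingAutoCommit() {
        TinyConnection conn = new TinyConnection();
        conn.setAutoCommit(false); // open an explicit transaction first
        conn.rollback();           // now legal
        return true;
    }

    public static void main(String[] args) {
        System.out.println("rejected in auto-commit: "
                + rollbackRejectedInAutoCommit());
        System.out.println("allowed after setAutoCommit(false): "
                + rollbackAllowedAfterDisablingAutoCommit());
    }
}
```

The fix has the same shape in real code: call setAutoCommit(false) before starting the transaction, and only roll back while that explicit transaction is open.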

Root Cause Analysis

org.apache.metamodel.MetaModelException: Could not get schema names: The statement is closed.
    at org.apache.metamodel.jdbc.JdbcUtils.wrapException(JdbcUtils.java:61)
    at org.apache.metamodel.jdbc.JdbcDataContext.getSchemaNamesInternal(JdbcDataContext.java:805)
    at org.apache.metamodel.AbstractDataContext.getSchemaNames(AbstractDataContext.java:110)
    at org.apache.metamodel.AbstractDataContext.getSchemaByName(AbstractDataContext.java:203)
    at org.datacleaner.connection.SchemaNavigator.getSchemaByName(SchemaNavigator.java:60)
    at org.datacleaner.connection.SchemaNavigator.convertToTable(SchemaNavigator.java:68)
    at org.datacleaner.connection.SchemaNavigator.convertToColumns(SchemaNavigator.java:106)
    at org.datacleaner.beans.writers.InsertIntoTableAnalyzer.run(InsertIntoTableAnalyzer.java:407)
    at org.datacleaner.beans.writers.InsertIntoTableAnalyzer.run(InsertIntoTableAnalyzer.java:76)
    at org.datacleaner.util.WriteBuffer.flushBuffer(WriteBuffer.java:84)
    at org.datacleaner.util.WriteBuffer.addToBuffer(WriteBuffer.java:60)
    at org.datacleaner.beans.writers.InsertIntoTableAnalyzer.run(InsertIntoTableAnalyzer.java:377)
    at org.datacleaner.job.runner.AnalyzerConsumer.consumeInternal(AnalyzerConsumer.java:71)
    at org.datacleaner.job.runner.AbstractRowProcessingConsumer.consume(AbstractRowProcessingConsumer.java:159)
    at org.datacleaner.job.runner.ConsumeRowHandlerDelegate.consume(ConsumeRowHandlerDelegate.java:64)
    at org.datacleaner.job.runner.ConsumeRowHandler.consumeRow(ConsumeRowHandler.java:146)
    at org.datacleaner.job.tasks.ConsumeRowTask.execute(ConsumeRowTask.java:51)
    at org.datacleaner.job.concurrent.TaskRunnable.run(TaskRunnable.java:61)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
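The trace bottoms out in JdbcUtils.wrapException, which wraps the driver's "The statement is closed" error. The underlying JDBC contract is simple: once a java.sql.Statement is closed, every further call on it must fail. A minimal sketch of that lifecycle, using a hypothetical TinyStatement rather than a real driver class (the pooled WriteBuffer flush in the trace suggests one worker thread reused a statement that had already been closed elsewhere):

```java
// Toy model of the statement lifecycle behind the root cause.
// TinyStatement is hypothetical, not the real JDBC driver class.
public class ClosedStatementSketch {
    static class TinyStatement implements AutoCloseable {
        private boolean closed;

        String executeQuery(String sql) {
            if (closed) {
                // Mirrors the driver error wrapped by JdbcUtils.wrapException
                throw new IllegalStateException("The statement is closed.");
            }
            return "rows for: " + sql; // a real driver would return a ResultSet
        }

        @Override
        public void close() { closed = true; }
    }

    /** Returns true when the statement works while open but fails after close(). */
    static boolean failsOnlyAfterClose() {
        TinyStatement stmt = new TinyStatement();
        stmt.executeQuery("SELECT 1"); // fine while the statement is open
        stmt.close();
        try {
            stmt.executeQuery("SELECT 1"); // e.g. a worker thread reusing it
            return false;
        } catch (IllegalStateException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("fails only after close: " + failsOnlyAfterClose());
    }
}
```

In a multi-threaded writer like the one in this trace, the usual remedies are to scope each statement to the thread that uses it, or to guard shared connections so a flush cannot race with a close.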