java.lang.RuntimeException: Failed to acquire the lock in 60 seconds. Possible reasons:
- The lock was not released by the previous holder. If you use the contexts API, for example map.queryContext(key), use the context in a try-with-resources block so it is always closed.
- This Chronicle Map (or Set) instance is persisted to disk, and the previous process (or one of the parallel accessing processes) crashed while holding this lock. In this case, use the ChronicleMapBuilder.recoverPersistedTo() procedure to access the Chronicle Map instance.
- A concurrent thread or process currently holding this lock is spending an unexpectedly long time (more than 60 seconds) in the context (the try-with-resources block) or in one of the overridden interceptor methods (MapMethods, MapEntryOperations, or MapRemoteOperations) while performing an ordinary Map operation or replication. Either redesign your logic to spend less time in critical sections (recommended) or acquire this lock with a tryLock(time, timeUnit) call, specifying sufficient time.
- Segment(s) in your Chronicle Map are very large, and iterating over them takes more than 60 seconds. In this case, acquire this lock with a tryLock(time, timeUnit) call, specifying a longer timeout.
- This is a deadlock. If you perform multi-key queries, ensure you acquire segment locks in ascending segmentIndex() order; an example: https://github.com/OpenHFT/Chronicle-Map#multi-key-queries
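The remedies in this message reduce to two disciplines: always release the lock on exit from the critical section (Chronicle's query contexts are closeable precisely so that a try-with-resources block does this), and bound the wait with tryLock(time, timeUnit) instead of relying on the default 60-second acquire. A minimal sketch of both patterns, using a plain java.util.concurrent ReentrantLock rather than Chronicle's segment locks so that it runs without the chronicle-map dependency:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock segmentLock = new ReentrantLock();

        // Pattern 1: release deterministically. The try-with-resources block
        // recommended for map.queryContext(key) serves the same purpose as this
        // finally: the lock is released even if the body throws.
        segmentLock.lock();
        try {
            // ... work inside the critical section, kept as short as possible ...
        } finally {
            segmentLock.unlock();
        }

        // Pattern 2: acquire with an explicit timeout, mirroring the
        // tryLock(time, timeUnit) advice in the message.
        if (segmentLock.tryLock(120, TimeUnit.SECONDS)) {
            try {
                System.out.println("lock acquired");
            } finally {
                segmentLock.unlock();
            }
        } else {
            System.out.println("could not acquire lock in 120 seconds");
        }
    }
}
```

With Chronicle Map itself, the same discipline applies to the locks exposed through the context (the stack trace below shows iteration taking an update lock per segment internally); the snippet only illustrates the acquire/release and timeout patterns, not the Chronicle API.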

Stack Overflow | vijar | 3 months ago
  1.

    Iterating over ChronicleMap results in Exception

    Stack Overflow | 3 months ago | vijar
    java.lang.RuntimeException: Failed to acquire the lock in 60 seconds. … (full message quoted at the top of the page)
  2.

    Iterating over ChronicleMap results in Exception

    GitHub | 3 months ago | leventov
    java.lang.RuntimeException: Failed to acquire the lock in 60 seconds. … (full message quoted at the top of the page)
  3.

    Re: Cube Build Failed at Last Step//RE: Error while making cube & Measure option is not responding on GUI

    kylin-dev | 2 years ago | Santosh Akhilesh
    java.lang.RuntimeException: Can't get cube segment size.
        at com.kylinolap.job.flow.JobFlowListener.updateCubeSegmentInfoOnSucceed(JobFlowListener.java:247)
        at com.kylinolap.job.flow.JobFlowListener.jobWasExecuted(JobFlowListener.java:101)
        at org.quartz.core.QuartzScheduler.notifyJobListenersWasExecuted(QuartzScheduler.java:1985)
        at org.quartz.core.JobRunShell.notifyJobListenersComplete(JobRunShell.java:340)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:224)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
    The hbase shell lists the expected 11 tables (KYLIN_K27LDMX63W, kylin_metadata_qa, and its _acl, _cube, _dict, _invertedindex, _job, _job_output, _proj, _table_snapshot, and _user companions). Earlier in the thread, step 2 (finding the fact table's distinct columns via MapReduce) had failed with java.lang.NoClassDefFoundError: com/kylinolap/common/mr/KylinMapper even though kylin.properties set kylin.hdfs.working.dir=/tmp and kylin.job.jar=/tmp/kylin/kylin-job-latest.jar; manually copying the jar to /tmp/kylin fixed that step. Shaofeng Shi notes that 0.6.x packages are named com.kylinolap.xxx while 0.7 renamed them to org.apache.kylin.xxx, so downgrading to 0.6 also requires pointing the jar locations in kylin.properties at the 0.6 jars. The kylin.log additionally shows one exception connecting from linux/10.19.93.68 to 0.0.0.0:10020.
  5.

    Re: Cube Build Failed at Last Step//RE: Error while making cube & Measure option is not responding on GUI

    apache.org | 1 year ago
    java.lang.RuntimeException: Can't get cube segment size. … (same kylin-dev thread as item 3, mirrored on apache.org)
  6.

    Exception is: org.gradle.api.tasks.TaskExecutionException: Execution failed for task :app:compileDebugJavaWithJavac

    Stack Overflow | 12 months ago | angel
    java.lang.RuntimeException: java.lang.ClassCastException: com.sun.tools.javac.code.Type cannot be cast to javax.lang.model.type.DeclaredType. Possible causes for this unexpected error include:
    - Gradle's dependency cache may be corrupt (this sometimes occurs after a network connection timeout). Re-download dependencies and sync the project (requires network).
    - The state of a Gradle build process (daemon) may be corrupt. Stopping all Gradle daemons may solve this problem. Stop Gradle build processes (requires restart).
    - Your project may be using a third-party plugin which is not compatible with the other plugins in the project or with the version of Gradle requested by the project.
    In the case of corrupt Gradle processes, you can also try closing the IDE and then killing all Java processes.


    Root Cause Analysis

    1. java.lang.RuntimeException

      Failed to acquire the lock in 60 seconds. … (full message quoted at the top of the page)

      at net.openhft.chronicle.hash.impl.BigSegmentHeader.deadLock()
    2. net.openhft.chronicle
      ChronicleMapIterator.hasNext
      1. net.openhft.chronicle.hash.impl.BigSegmentHeader.deadLock(BigSegmentHeader.java:59)
      2. net.openhft.chronicle.hash.impl.BigSegmentHeader.updateLock(BigSegmentHeader.java:231)
      3. net.openhft.chronicle.map.impl.CompiledMapIterationContext$UpdateLock.lock(CompiledMapIterationContext.java:768)
      4. net.openhft.chronicle.map.impl.CompiledMapIterationContext.forEachSegmentEntryWhile(CompiledMapIterationContext.java:3810)
      5. net.openhft.chronicle.map.impl.CompiledMapIterationContext.forEachSegmentEntry(CompiledMapIterationContext.java:3816)
      6. net.openhft.chronicle.map.ChronicleMapIterator.fillEntryBuffer(ChronicleMapIterator.java:61)
      7. net.openhft.chronicle.map.ChronicleMapIterator.hasNext(ChronicleMapIterator.java:77)
      7 frames
    3. Java RT
      Iterable.forEach
      1. java.lang.Iterable.forEach(Iterable.java:74)
      1 frame
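The deadlock bullet in the message above prescribes ordered acquisition: take segment locks in ascending segmentIndex() order, so that no two threads can hold one segment's lock while waiting on the other's in a cycle. A minimal stdlib sketch of that discipline, with a hypothetical Segment class standing in for Chronicle's real segment contexts:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLocking {
    // Hypothetical stand-in for a Chronicle segment: an index plus its lock.
    static final class Segment {
        final int segmentIndex;
        final ReentrantLock lock = new ReentrantLock();
        Segment(int segmentIndex) { this.segmentIndex = segmentIndex; }
    }

    // Acquire every lock in ascending segmentIndex order. Because all threads
    // follow the same global order, a hold-and-wait cycle cannot form.
    static void lockAll(Segment... segments) {
        Segment[] ordered = segments.clone();
        Arrays.sort(ordered, Comparator.comparingInt(s -> s.segmentIndex));
        for (Segment s : ordered) s.lock.lock();
    }

    static void unlockAll(Segment... segments) {
        for (Segment s : segments) s.lock.unlock();
    }

    public static void main(String[] args) {
        Segment a = new Segment(3), b = new Segment(1);
        lockAll(a, b);          // locks b (index 1) first, then a (index 3)
        try {
            System.out.println("both segments locked");
        } finally {
            unlockAll(a, b);
        }
    }
}
```

Sorting before locking is what establishes the global acquisition order; the sort key here is the hypothetical segmentIndex field, playing the role of Chronicle's segmentIndex() in the linked multi-key-queries example.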