
  • Iterating over ChronicleMap results in Exception
    via Stack Overflow by vijar
    • java.lang.RuntimeException: Failed to acquire the lock in 60 seconds. Possible reasons:
      - The lock was not released by the previous holder. If you use the contexts API, for example map.queryContext(key), use it in a try-with-resources block.
      - This Chronicle Map (or Set) instance is persisted to disk, and the previous process (or one of several parallel accessing processes) has crashed while holding this lock. In this case you should use the ChronicleMapBuilder.recoverPersistedTo() procedure to access the Chronicle Map instance.
      - A concurrent thread or process, currently holding this lock, spends an unexpectedly long time (more than 60 seconds) in the context (try-with-resources block) or in one of the overridden interceptor methods (MapMethods, MapEntryOperations, or MapRemoteOperations) while performing an ordinary Map operation or replication. You should either redesign your logic to spend less time in critical sections (recommended) or acquire this lock with a tryLock(time, timeUnit) call, with sufficient time specified.
      - Segment(s) in your Chronicle Map are very large, and iteration over them takes more than 60 seconds. In this case you should acquire this lock with a tryLock(time, timeUnit) call, with a longer timeout specified.
      - This is a deadlock. If you perform multi-key queries, ensure you acquire segment locks in order (ascending by segmentIndex()); you can find an example here:
        at net.openhft.chronicle.hash.impl.BigSegmentHeader.deadLock(
        at net.openhft.chronicle.hash.impl.BigSegmentHeader.updateLock(
        at$UpdateLock.lock(
        at
        at
        at
        at
        at java.lang.Iterable.forEach(
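    The message's advice to acquire the lock with tryLock(time, timeUnit) follows the standard bounded-wait locking pattern. A minimal sketch of that pattern using only the JDK's java.util.concurrent.locks (a plain ReentrantLock stands in here for ChronicleMap's segment lock, purely for illustration; ChronicleMap exposes its own lock objects via the contexts API):

    ```java
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.locks.ReentrantLock;

    public class TryLockDemo {
        public static void main(String[] args) throws InterruptedException {
            // Stand-in for a segment lock held by another thread/process.
            ReentrantLock segmentLock = new ReentrantLock();

            // Simulate a slow holder, e.g. a long iteration over a large segment.
            Thread holder = new Thread(() -> {
                segmentLock.lock();
                try {
                    Thread.sleep(200); // holds the lock briefly
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    segmentLock.unlock();
                }
            });
            holder.start();
            Thread.sleep(50); // give the holder a chance to acquire first

            // Instead of failing hard after a fixed internal timeout, wait a
            // bounded but generous time, and handle the timeout explicitly.
            if (segmentLock.tryLock(5, TimeUnit.SECONDS)) {
                try {
                    System.out.println("acquired");
                } finally {
                    segmentLock.unlock();
                }
            } else {
                System.out.println("timed out");
            }
            holder.join();
        }
    }
    ```

    The same idea applies to the causes listed above: a lock held slightly too long is tolerated by the longer timeout, while a genuine deadlock or crashed holder still surfaces as an explicit timeout you can act on (for persisted maps, that is the point at which recoverPersistedTo() becomes relevant).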