water.DException$DistributedException: from /192.168.1.164:54357; java.lang.AssertionError: Job should be always in DKV!

JIRA | Kevin Normoyle | 3 years ago
  1.

    I was removing keys manually, then using RemoveAll to clean up any locked keys. RemoveAll failed when it made the job list sync against cancelled jobs, because the job keys were missing.

    import_only: /home4/jenkins/jobs/h2o_release_tests_164/workspace/py/testdir_release/c1/test_c1_fvec.py uses put://home4/jenkins/jobs/h2o_release_tests_164/workspace/smalldata/iris/iris2.csv
    Local path to file that will be uploaded: /home4/jenkins/jobs/h2o_release_tests_164/workspace/smalldata/iris/iris2.csv
    That path resolves as: /home4/jenkins/jobs/h2o_release_tests_164/workspace/smalldata/iris/iris2.csv
    parse parameters: {'header': None, 'destination_key': None, 'separator': None, 'preview': None, 'exclude': None, 'header_from_file': None, 'parser_type': None, 'blocking': None, 'single_quotes': None, 'source_key': 'iris2.csv'}
    redirect http://192.168.1.164:54361/2/Progress2.json?job_key=%240301c0a801a45ad4ffffffff%24_b52db68d21c1e5c4d8513564f8cd4c7b&destination_key=iris2.hex

    11:18:00.634 # Session ERRR WATER:
    + water.DException$DistributedException: from /192.168.1.164:54357; java.lang.AssertionError: Job should be always in DKV!
    +   at water.Job.isRunning(Job.java:251)
    +   at water.Job.isEnded(Job.java:261)
    +   at water.Job.waitUntilJobEnded(Job.java:374)
    +   at water.Job.waitUntilJobEnded(Job.java:388)
    +   at water.util.RemoveAllKeysTask.lcompute(RemoveAllKeysTask.java:17)
    +   at water.DRemoteTask.compute2(DRemoteTask.java:91)
    +   at water.H2O$H2OCountedCompleter.compute(H2O.java:712)
    +   at jsr166y.CountedCompleter.exec(CountedCompleter.java:429)
    +   at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
    +   at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
    +   at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
    +   at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)

    JIRA | 3 years ago | Kevin Normoyle
    water.DException$DistributedException: from /192.168.1.164:54357; java.lang.AssertionError: Job should be always in DKV!
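    The cleanup pattern this report describes (remove keys one by one, then fall back to a bulk RemoveAll for anything left locked by a cancelled or crashed job) can be sketched against a stub key-value store. `StubKV`, `remove_key`, `remove_all`, and the `force` flag below are hypothetical illustrations of the pattern, not the actual H2O API:

```python
# Hypothetical in-memory stand-in for a distributed key-value store (DKV).
class StubKV:
    def __init__(self):
        self.keys = {}       # key -> value
        self.locked = set()  # keys currently write-locked by some job

    def remove_key(self, key, force=False):
        """Remove a single key; locked keys are skipped unless force=True."""
        if key in self.locked and not force:
            return False
        self.locked.discard(key)
        return self.keys.pop(key, None) is not None

    def remove_all(self):
        """Force-remove everything, ignoring locks, instead of syncing
        against job keys that may no longer exist (the failure above)."""
        self.locked.clear()
        self.keys.clear()


def cleanup(kv):
    # First pass: polite per-key removal.
    for key in list(kv.keys):
        kv.remove_key(key)
    # Fallback: anything still present (e.g. left locked by a failed
    # parse) is swept by the forced bulk removal.
    if kv.keys:
        kv.remove_all()


kv = StubKV()
kv.keys = {"iris2.hex": object(), "prostate.hex": object()}
kv.locked = {"prostate.hex"}  # e.g. left locked by a failed parse
cleanup(kv)                   # per-key pass, then forced sweep
```

    The point of the sketch is that the fallback sweep must not depend on job bookkeeping being consistent, which is exactly where RemoveAllKeysTask asserted.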
  3.

    What I did: running on 163, 16 nodes (on one host) with -Xmx10g; import airlines_all.csv, go to glm2, autoframe the source key.

    08:12:55.219 main INFO WATER: ----- H2O started -----
    08:12:55.221 main INFO WATER: Build git branch: master
    08:12:55.221 main INFO WATER: Build git hash: 85e743cca1bb124399e18dc0359f6d8f221d2711
    08:12:55.222 main INFO WATER: Build git describe: 85e743c
    08:12:55.222 main INFO WATER: Build project version: 1.7.0.1066
    08:12:55.222 main INFO WATER: Built by: 'jenkins'
    08:12:55.222 main INFO WATER: Built on: 'Wed Oct 9 22:36:27 PDT 2013'
    08:12:55.222 main INFO WATER: Java availableProcessors: 32
    08:12:55.228 main INFO WATER: Java heap totalMemory: 1.92 gb
    08:12:55.229 main INFO WATER: Java heap maxMemory: 8.89 gb
    08:12:55.229 main INFO WATER: ICE root: '/home/tomk/ice_root'
    08:12:55.262 main WARN WATER: Multiple local IPs detected:
    + /10.0.12.1 /10.0.1.1 /10.0.4.1 /192.168.1.163
    + Attempting to determine correct address...
    + Using /192.168.1.163
    08:12:55.298 main INFO WATER: Internal communication uses port: 54322
    + Listening for HTTP and REST traffic on http://192.168.1.163:54321/
    08:12:55.336 main INFO WATER: H2O cloud name: 'tomtest'
    08:12:55.336 main INFO WATER: (v1.7.0.1066) 'tomtest' on /192.168.1.163:54321, discovery address /239.243.12.228:61427
    08:12:55.339 main INFO WATER: Cloud of size 1 formed [/192.168.1.163:54321]
    08:12:55.591 main INFO WATER: Log dir: '/home/tomk/ice_root/h2ologs'
    08:12:59.056 FJ-10-1 INFO WATER: Cloud of size 16 formed [/192.168.1.163:54321, /192.168.1.163:54323, /192.168.1.163:54325, /192.168.1.163:54327, /192.168.1.163:54329, /192.168.1.163:54331, /192.168.1.163:54333, /192.168.1.163:54335, /192.168.1.163:54337, /192.168.1.163:54339, /192.168.1.163:54341, /192.168.1.163:54343, /192.168.1.163:54345, /192.168.1.163:54347, /192.168.1.163:54349, /192.168.1.163:54351]
    08:13:37.146 #:54325-0 INFO WATER: Start remote task#14 class water.parser.DParseTask from /192.168.1.163:54325
    08:13:56.159 FJ-9-23 INFO WATER: Done remote task#14 class water.parser.DParseTask to /192.168.1.163:54325
    08:14:36.887 # Session INFO WATER: Converting ValueArray to Frame: node(/192.168.1.163:54321) convNum(0) key(nfs://home/tomk/h2o-1.7.0.1066/../airlines_all.hex.autoframe)...

    java.lang.RuntimeException: water.DException$DistributedException: from /192.168.1.163:54331; java.lang.AssertionError: null while mapping key $0000000080ed00000000$nfs://home/tomk/h2o-1.7.0.1066/../airlines_all.hex
    08:15:07.060 # Session INFO WATER: at water.Request2.set(Request2.java:422)
    08:15:07.061 # Session INFO WATER: at water.api.RequestArguments$Argument.check(RequestArguments.java:539)
    08:15:07.062 # Session INFO WATER: at water.api.RequestQueries.buildQuery(RequestQueries.java:148)
    08:15:07.062 # Session INFO WATER: at water.api.Request.serve(Request.java:95)
    08:15:07.063 # Session INFO WATER: at water.api.RequestServer.serve(RequestServer.java:220)
    08:15:07.063 # Session INFO WATER: at water.NanoHTTPD$HTTPSession.run(NanoHTTPD.java:391)
    08:15:07.064 # Session INFO WATER: at java.lang.Thread.run(Thread.java:662)
    08:15:07.094 # Session INFO WATER: Caused by: water.DException$DistributedException: from /192.168.1.163:54331; java.lang.AssertionError: null while mapping key $0000000080ed00000000$nfs://home/tomk/h2o-1.7.0.1066/../airlines_all.hex
    08:15:07.120 # Session INFO WATER: at water.nbhm.NonBlockingHashMap.putIfMatchUnlocked(NonBlockingHashMap.java:369)
    08:15:07.120 # Session INFO WATER: at water.H2O.putIfMatch(H2O.java:439)
    08:15:07.120 # Session INFO WATER: at water.DKV.DputIfMatch(DKV.java:66)
    08:15:07.121 # Session INFO WATER: at water.DKV.put(DKV.java:22)
    08:15:07.121 # Session INFO WATER: at water.DKV.put(DKV.java:16)
    08:15:07.121 # Session INFO WATER: at water.UKV.put(UKV.java:27)
    08:15:07.121 # Session INFO WATER: at water.UKV.put(UKV.java:23)
    08:15:07.122 # Session INFO WATER: at water.UKV.put(UKV.java:84)
    08:15:07.122 # Session INFO WATER: at water.fvec.AppendableVec.closeChunk(AppendableVec.java:55)
    08:15:07.122 # Session INFO WATER: at water.fvec.NewChunk.close(NewChunk.java:103)
    08:15:07.122 # Session INFO WATER: at water.fvec.Chunk.close(Chunk.java:145)
    08:15:07.122 # Session INFO WATER: at water.ValueArray$Converter.map(ValueArray.java:627)
    08:15:07.123 # Session INFO WATER: at water.MRTask.lcompute(MRTask.java:66)
    08:15:07.123 # Session INFO WATER: at water.DRemoteTask.compute2(DRemoteTask.java:75)
    08:15:07.123 # Session INFO WATER: at water.MRTask.lcompute(MRTask.java:62)
    08:15:07.123 # Session INFO WATER: at water.DRemoteTask.compute2(DRemoteTask.java:75)
    08:15:07.123 # Session INFO WATER: at water.MRTask.lcompute(MRTask.java:62)
    08:15:07.124 # Session INFO WATER: at water.DRemoteTask.compute2(DRemoteTask.java:75)
    08:15:07.124 # Session INFO WATER: at water.MRTask.lcompute(MRTask.java:62)
    08:15:07.124 # Session INFO WATER: at water.DRemoteTask.compute2(DRemoteTask.java:75)
    08:15:07.124 # Session INFO WATER: at water.H2O$H2OCountedCompleter.compute(H2O.java:599)
    08:15:07.124 # Session INFO WATER: at jsr166y.CountedCompleter.exec(CountedCompleter.java:429)
    08:15:07.125 # Session INFO WATER: at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
    08:15:07.125 # Session INFO WATER: at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
    08:15:07.129 # Session INFO WATER: at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
    08:15:07.129 # Session INFO WATER: at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)

    JIRA | 3 years ago | Tom Kraljevic
    java.lang.RuntimeException: water.DException$DistributedException: from /192.168.1.163:54331; java.lang.AssertionError: null while mapping key $0000000080ed00000000$nfs://home/tomk/h2o-1.7.0.1066/../airlines_all.hex
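    The trace above fails inside a put-if-match path (`water.DKV.DputIfMatch` via `NonBlockingHashMap.putIfMatchUnlocked`). The compare-and-swap semantics of that primitive can be sketched with a plain dict; this stand-alone `put_if_match` is an illustrative stand-in, not H2O's actual NonBlockingHashMap implementation:

```python
def put_if_match(store, key, new_val, expected_old):
    """Replace store[key] only if it currently equals expected_old.

    Returns the value actually found; the caller knows the swap
    succeeded iff the return value equals expected_old. Putting None
    acts as a remove, mirroring the remove-via-put style of a DKV.
    """
    current = store.get(key)
    if current == expected_old:
        if new_val is None:
            store.pop(key, None)
        else:
            store[key] = new_val
    return current
```

    With real CAS semantics a failed match is a normal outcome the caller retries; the assertion in the trace suggests the conversion task instead hit a state it considered impossible.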
  5.

    http://172.16.2.161:8080/job/h2o_master_DEV_gradle_build/28042/testReport/junit/hex.deeplearning/DeepLearningTest/testCreditProstateTanh/

    {code}
    12-09 15:45:42.144 172.16.2.179:44008 32224 FJ-0-17 INFO: Building H2O DeepLearning model with these parameters:
    12-09 15:45:42.144 172.16.2.179:44008 32224 FJ-0-17 INFO: {"_model_id":{"name":"_9483eb6fab215e8e8ba27ab8d5d4c7d","type":"Key"},"_train":{"name":"_9211ec74219ab28deb4d9f0f42ac4192","type":"Key"},"_valid":null,"_nfolds":0,"_keep_cross_validation_predictions":false,"_fold_assignment":"AUTO","_distribution":"poisson","_tweedie_power":1.5,"_ignored_columns":null,"_ignore_const_cols":true,"_weights_column":null,"_offset_column":null,"_fold_column":null,"_score_each_iteration":false,"_stopping_rounds":5,"_stopping_metric":"AUTO","_stopping_tolerance":0.0,"_response_column":"Cost","_balance_classes":false,"_max_after_balance_size":5.0,"_class_sampling_factors":null,"_max_hit_ratio_k":10,"_max_confusion_matrix_size":20,"_checkpoint":null,"_overwrite_with_best_model":true,"_autoencoder":false,"_use_all_factor_levels":true,"_activation":"Rectifier","_hidden":[10,10,10],"_epochs":100.0,"_train_samples_per_iteration":-2,"_target_ratio_comm_to_comp":0.05,"_seed":11185083,"_adaptive_rate":false,"_rho":0.99,"_epsilon":1.0E-8,"_rate":1.0E-4,"_rate_annealing":1.0E-6,"_rate_decay":1.0,"_momentum_start":0.9,"_momentum_ramp":1000000.0,"_momentum_stable":0.99,"_nesterov_accelerated_gradient":true,"_input_dropout_ratio":0.0,"_hidden_dropout_ratios":null,"_l1":0.0,"_l2":0.0,"_max_w2":10.0,"_initial_weight_distribution":"UniformAdaptive","_initial_weight_scale":1.0,"_loss":"Automatic","_score_interval":5.0,"_score_training_samples":10000,"_score_validation_samples":0,"_score_duty_cycle":0.1,"_classification_stop":0.0,"_regression_stop":1.0E-6,"_quiet_mode":false,"_score_validation_sampling":"Uniform","_diagnostics":true,"_variable_importances":false,"_fast_mode":false,"_force_load_balance":true,"_replicate_training_data":true,"_single_node_mode":false,"_shuffle_training_data":false,"_missing_values_handling":"MeanImputation","_sparse":false,"_col_major":false,"_average_activation":0.0,"_sparsity_beta":0.0,"_max_categorical_features":2147483647,"_reproducible":true,"_export_weights_and_biases":false,"_elastic_averaging":false,"_elastic_averaging_moving_rate":0.9,"_elastic_averaging_regularization":0.001,"_mini_batch_size":1}
    12-09 15:45:42.147 172.16.2.179:44008 32224 FJ-0-17 INFO: _adaptive_rate: Using manual learning rate. Ignoring the following input parameters: rho, epsilon.
    12-09 15:45:42.147 172.16.2.179:44008 32224 FJ-0-17 INFO: _reproducibility: Automatically enabling force_load_balancing, disabling single_node_mode and replicate_training_data
    12-09 15:45:42.147 172.16.2.179:44008 32224 FJ-0-17 INFO: and setting train_samples_per_iteration to -1 to enforce reproducibility.
    12-09 15:45:42.148 172.16.2.179:44008 32224 FJ-0-17 INFO: Model category: Regression
    12-09 15:45:42.148 172.16.2.179:44008 32224 FJ-0-17 INFO: Number of model parameters (weights/biases): 391
    12-09 15:45:42.148 172.16.2.179:44008 32224 FJ-0-17 WARN: Reproducibility enforced - using only 1 thread - can be slow.
    12-09 15:45:42.148 172.16.2.179:44008 32224 FJ-0-17 INFO: ReBalancing dataset into (at least) 1 chunks.
    12-09 15:45:42.157 172.16.2.179:44008 32224 FJ-0-17 INFO: Number of chunks of the training data: 1
    12-09 15:45:42.157 172.16.2.179:44008 32224 FJ-0-17 INFO: Setting train_samples_per_iteration (-1) to one epoch: #rows (20).
    12-09 15:45:42.157 172.16.2.179:44008 32224 FJ-0-17 INFO: Enabling training data shuffling to avoid training rows in the same order over and over (no Hogwild since there's only 1 chunk).
    12-09 15:45:42.157 172.16.2.179:44008 32224 FJ-0-17 INFO: Starting to train the Deep Learning model.
    12-09 15:45:42.162 172.16.2.179:44008 32224 FJ-0-17 INFO: Scoring the model.
    12-09 15:45:42.163 172.16.2.179:44008 32224 FJ-0-17 INFO: Status of Neuron Layers (predicting Cost, regression, poisson distribution, Automatic loss, 391 weights/biases, 5.1 KB, 20 training samples, mini-batch size 1):
    12-09 15:45:42.163 172.16.2.179:44008 32224 FJ-0-17 INFO: Layer Units Type Dropout L1 L2 Mean Rate Rate RMS Momentum Mean Weight Weight RMS Mean Bias Bias RMS
    12-09 15:45:42.163 172.16.2.179:44008 32224 FJ-0-17 INFO: 1 15 Input 0.00 %
    12-09 15:45:42.163 172.16.2.179:44008 32224 FJ-0-17 INFO: 2 10 Rectifier 0.00 % 0.000000 0.000000 0.000100 0.000000 0.900002 -0.108586 0.815557 -1991477697279010.500000 3812537241960448.000000
    12-09 15:45:42.163 172.16.2.179:44008 32224 FJ-0-17 INFO: 3 10 Rectifier 0.00 % 0.000000 0.000000 0.000100 0.000000 0.900002 -21504996.238843 145055296.000000 -2059190118721077.500000 1609435914960896.000000
    12-09 15:45:42.163 172.16.2.179:44008 32224 FJ-0-17 INFO: 4 10 Rectifier 0.00 % 0.000000 0.000000 0.000100 0.000000 0.900002 -29014930.585342 133162176.000000 -578534089719839.800000 601639018823680.000000
    12-09 15:45:42.163 172.16.2.179:44008 32224 FJ-0-17 INFO: 5 1 Linear 0.000000 0.000000 0.000100 0.000000 0.900002 -3167488038.400004 6322765824.000000 -1644014701750027.800000 0.000000

    onExCompletion for hex.Model$BigScore@2e3a87f5
    water.DException$DistributedException: from /172.16.2.179:44000; by class hex.Model$BigScore; class java.lang.UnsupportedOperationException: Trying to predict with an unstable model. Job was aborted due to observed numerical instability (exponential growth). Either the weights or the bias values are unreasonably large or lead to large activation values. Try a different initial distribution, a bounded activation function (Tanh), adding regularization (via max_w2, l1, l2, dropout) or learning rate (either enable adaptive_rate or use a smaller learning rate or faster annealing). For more information visit: http://jira.h2o.ai/browse/TN-4
      at hex.deeplearning.DeepLearningModel.score0(DeepLearningModel.java:831)
      at hex.Model.score0(Model.java:852)
      at hex.Model$BigScore.map(Model.java:820)
      at water.MRTask.compute2(MRTask.java:678)
      at water.H2O$H2OCountedCompleter.compute1(H2O.java:1060)
      at hex.Model$BigScore$Icer.compute1(Model$BigScore$Icer.java)
      at water.H2O$H2OCountedCompleter.compute(H2O.java:1056)
      at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
      at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
      at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
      at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
      at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
    java.lang.RuntimeException: water.DException$DistributedException: from /172.16.2.179:44008; by class hex.Model$BigScore; class water.DException$DistributedException: from /172.16.2.179:44000; by class hex.Model$BigScore; class java.lang.UnsupportedOperationException:
    {code}

    JIRA | 1 year ago | Arno Candel
    water.DException$DistributedException: from /172.16.2.179:44000; by class hex.Model$BigScore; class java.lang.UnsupportedOperationException: Trying to predict with an unstable model. Job was aborted due to observed numerical instability (exponential growth). Either the weights or the bias values are unreasonably large or lead to large activation values. Try a different initial distribution, a bounded activation function (Tanh), adding regularization (via max_w2, l1, l2, dropout) or learning rate (either enable adaptive_rate or use a smaller learning rate or faster annealing). For more information visit: http://jira.h2o.ai/browse/TN-4
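    The "unstable model" abort fires once weights or biases explode (the log above shows biases around 1e15 before scoring is refused). A minimal sketch of that kind of stability gate is below; `is_unstable`, `score_guard`, and the threshold constant are illustrative assumptions, not H2O's actual internals:

```python
import math

# Illustrative cutoff, not H2O's actual constant: a model is flagged
# unstable once any weight or bias is non-finite or absurdly large.
UNSTABLE_THRESHOLD = 1e10


def is_unstable(weights, biases, threshold=UNSTABLE_THRESHOLD):
    """True if any parameter is NaN/Inf or exceeds the magnitude cutoff."""
    for v in list(weights) + list(biases):
        if not math.isfinite(v) or abs(v) > threshold:
            return True
    return False


def score_guard(weights, biases):
    """Refuse to score with an unstable model, as the exception above does."""
    if is_unstable(weights, biases):
        raise RuntimeError("Trying to predict with an unstable model.")
    return "ok to score"
```

    The remedies the message lists all attack the same divergence: a bounded activation (Tanh) caps activations directly, max_w2/l1/l2/dropout cap or shrink the weights, and adaptive_rate or a smaller/annealed learning rate shrinks the update steps.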
  6.

    (Not shown here, but a prior parse got a stack trace and apparently left a key locked.) I'm transitioning to using remove_all(), so I'm just noting this in case we want remove_key to remove even if a key is locked (say, due to an error), or whether we want a -force param.

    Details: the test suite is running multiple tests on a single cloud, so I'm not sure what's going on now with the new behavior around locked keys. I used to remove all keys between tests. I probably have to update to the remove_all that forces key removal? This is an interesting message, though. We probably don't want to cause an exception on this? Does this just send exception info to the browser, with no bad side effect on H2O (no crash)?

    from /192.168.1.172:54355; java.lang.IllegalArgumentException: Dataset nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz/file_100.dat.gz is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:83)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:72)

    /var/lib/jenkins/jobs/h2o_release_tests/workspace/py/testdir_release/c7/test_c7_rel.py check_sandbox_for_errors: Errors in sandbox stdout or stderr (or R stdout/stderr). Could have occurred at any prior time.

    10:01:12.429 # Session ERRR WATER:
    + water.DException$DistributedException: from /192.168.1.177:54355; java.lang.NullPointerException
    10:01:12.435 # Session INFO HTTPD: GET /Remove.json key=nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz/file_100.dat.gz
    10:01:12.437 # Session ERRR WATER:
    + water.DException$DistributedException: from /192.168.1.172:54355; java.lang.IllegalArgumentException: Dataset nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz/file_100.dat.gz is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:83)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:72)
    + at water.TAtomic.atomic(TAtomic.java:19)
    + at water.Atomic.compute2(Atomic.java:57)
    + at water.DTask.dinvoke(DTask.java:78)
    + at water.RPC$RPCCall.compute2(RPC.java:276)
    + at water.H2O$H2OCountedCompleter.compute(H2O.java:712)
    + at jsr166y.CountedCompleter.exec(CountedCompleter.java:429)
    + at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
    + at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
    + at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
    + at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
    10:01:12.444 # Session INFO HTTPD: GET /Remove.json key=nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz

    JIRA | 3 years ago | Kevin Normoyle
    water.DException$DistributedException: from /192.168.1.172:54355; java.lang.IllegalArgumentException: Dataset nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz/file_100.dat.gz is already in use. Unable to use it now. Consider using a different destination name.
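    The exception itself suggests the workaround: "Consider using a different destination name." One simple way to avoid colliding with a key left locked by an earlier failed parse is to make each destination key unique. The helper below is an illustrative sketch, not part of any H2O client API:

```python
import uuid


def unique_destination_key(base):
    """Append a short random suffix to a destination key, e.g.
    'file_100.hex' -> 'file_100_ab12cd34.hex', so a retried parse
    never contends for a key still write-locked by a failed run."""
    stem, dot, ext = base.rpartition(".")
    suffix = uuid.uuid4().hex[:8]
    if dot:
        return f"{stem}_{suffix}.{ext}"
    return f"{base}_{suffix}"
```

    This sidesteps the lock rather than clearing it; stale locked keys still need a forced removal (or a cloud restart) to reclaim their memory.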


    Root Cause Analysis

    1. water.DException$DistributedException

      from /192.168.1.164:54357; java.lang.AssertionError: Job should be always in DKV!

      at water.Job.isRunning()
    2. water
      Job.waitUntilJobEnded
      1. water.Job.isRunning(Job.java:251)
      2. water.Job.isEnded(Job.java:261)
      3. water.Job.waitUntilJobEnded(Job.java:374)
      4. water.Job.waitUntilJobEnded(Job.java:388)
    3. water.util
      RemoveAllKeysTask.lcompute
      1. water.util.RemoveAllKeysTask.lcompute(RemoveAllKeysTask.java:17)
    4. water
      H2O$H2OCountedCompleter.compute
      1. water.DRemoteTask.compute2(DRemoteTask.java:91)
      2. water.H2O$H2OCountedCompleter.compute(H2O.java:712)
    5. jsr166y
      ForkJoinWorkerThread.run
      1. jsr166y.CountedCompleter.exec(CountedCompleter.java:429)
      2. jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
      3. jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
      4. jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
      5. jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)