java.lang.IllegalArgumentException

There are no available Samebug tips for this exception. Do you have an idea how to solve this issue? A short tip would help users who saw this issue last week.

  • (Not shown here, but a prior parse got a stack trace and apparently left a key locked.) I'm transitioning to using remove_all(), so I'm just noting this in case we want remove_key to remove a key even if it is locked (say, due to an error), or whether we want a -force parameter instead. Details: the test suite runs multiple tests on a single cloud, so I'm not sure what's going on now with the new behavior around locked keys. I used to remove all keys between tests; I probably have to update to the remove_all that forces key removal. This is an interesting message, though. We probably don't want to cause an exception on this? Does this just send exception info to the browser, with no bad side effect on h2o (no crash)?

    from /192.168.1.172:54355; java.lang.IllegalArgumentException: Dataset nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz/file_100.dat.gz is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:83)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:72)

    /var/lib/jenkins/jobs/h2o_release_tests/workspace/py/testdir_release/c7/test_c7_rel.py check_sandbox_for_errors: Errors in sandbox stdout or stderr (or R stdout/stderr). Could have occurred at any prior time.

    10:01:12.429 # Session ERRR WATER: + water.DException$DistributedException: from /192.168.1.177:54355; java.lang.NullPointerException
    10:01:12.435 # Session INFO HTTPD: GET /Remove.json key=nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz/file_100.dat.gz
    10:01:12.437 # Session ERRR WATER: + water.DException$DistributedException: from /192.168.1.172:54355; java.lang.IllegalArgumentException: Dataset nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz/file_100.dat.gz is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:83)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:72)
    + at water.TAtomic.atomic(TAtomic.java:19)
    + at water.Atomic.compute2(Atomic.java:57)
    + at water.DTask.dinvoke(DTask.java:78)
    + at water.RPC$RPCCall.compute2(RPC.java:276)
    + at water.H2O$H2OCountedCompleter.compute(H2O.java:712)
    + at jsr166y.CountedCompleter.exec(CountedCompleter.java:429)
    + at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
    + at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
    + at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
    + at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
    10:01:12.444 # Session INFO HTTPD: GET /Remove.json key=nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz
    via Kevin Normoyle
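A tiny sketch of the per-key removal the harness does between tests, against the Remove.json endpoint visible in the log above. The force parameter is purely hypothetical, standing in for the -force idea the reporter raises; the real harness would call remove_all() instead:

```python
from urllib.parse import urlencode

def remove_key_url(base, key, force=False):
    """Build a Remove.json request URL for one key.

    `force` is hypothetical: it stands in for a would-be -force
    parameter that would remove the key even if it is locked
    (e.g. left locked by a failed parse). It is not a real h2o flag.
    """
    params = {"key": key}
    if force:
        params["force"] = "1"  # hypothetical flag, not a real h2o parameter
    return "%s/Remove.json?%s" % (base.rstrip("/"), urlencode(params))

# Example: the key the log above tried (and failed) to remove
url = remove_key_url(
    "http://192.168.1.172:54321",
    "nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz/file_100.dat.gz")
```

A cleanup loop would issue one such GET per key between tests; the open question in the report is whether the server should honor removal of a locked key at all.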
  • cd testdir_multi_jvm; python test_GBM_cancel_model_reuse.py: starts 5 GBM jobs, then uses the jobs list to get ids to cancel all of them, then repeats. The 2nd (and later) passes reuse the model keys from the first pass. After the cancel, the job keys should be reusable, but they get the exception below (stack trace on the 2nd pass):

    2014-02-06 20:59:36.453011 -- Start http://192.168.0.7:54323/2/GBM.json?learn_rate=0.1&destination_key=GBMBad0&classification=0&min_rows=1&ntrees=2&response=C379&ignored_cols_by_name=C4,C5,C6,C7,C8,C9,C10,C11,C12,C15,C17,C18,C19,C20,C21,C425,C426,C427,C541,C542,C379&source=c.hex&grid_parallelism=4&max_depth=8
    2014-02-06 20:59:36.463305 -- Start http://192.168.0.7:54323/2/GBM.json?learn_rate=0.1&destination_key=GBMBad1&classification=0&min_rows=1&ntrees=2&response=C379&ignored_cols_by_name=C4,C5,C6,C7,C8,C9,C10,C11,C12,C15,C17,C18,C19,C20,C21,C425,C426,C427,C541,C542,C379&source=c.hex&grid_parallelism=4&max_depth=8
    2014-02-06 20:59:36.471699 -- Start http://192.168.0.7:54323/2/GBM.json?learn_rate=0.1&destination_key=GBMBad2&classification=0&min_rows=1&ntrees=2&response=C379&ignored_cols_by_name=C4,C5,C6,C7,C8,C9,C10,C11,C12,C15,C17,C18,C19,C20,C21,C425,C426,C427,C541,C542,C379&source=c.hex&grid_parallelism=4&max_depth=8
    2014-02-06 20:59:36.491283 -- Start http://192.168.0.7:54323/2/GBM.json?learn_rate=0.1&destination_key=GBMBad3&classification=0&min_rows=1&ntrees=2&response=C379&ignored_cols_by_name=C4,C5,C6,C7,C8,C9,C10,C11,C12,C15,C17,C18,C19,C20,C21,C425,C426,C427,C541,C542,C379&source=c.hex&grid_parallelism=4&max_depth=8
    2014-02-06 20:59:36.510315 -- Start http://192.168.0.7:54323/2/GBM.json?learn_rate=0.1&destination_key=GBMBad4&classification=0&min_rows=1&ntrees=2&response=C379&ignored_cols_by_name=C4,C5,C6,C7,C8,C9,C10,C11,C12,C15,C17,C18,C19,C20,C21,C425,C426,C427,C541,C542,C379&source=c.hex&grid_parallelism=4&max_depth=8
    2014-02-06 20:59:38.561355 -- Start http://192.168.0.7:54323/Jobs.json?
    2014-02-06 20:59:38.645897 -- Start http://192.168.0.7:54323/Cancel.json?key=$0301c0a8000734d4ffffffff$_b879788d0e0cb142329256355414c77
    2014-02-06 20:59:38.783900 -- Start http://192.168.0.7:54323/Cancel.json?key=$0301c0a8000734d4ffffffff$_ab576bde271a67c190b53ecf532d443d
    2014-02-06 20:59:38.907229 -- Start http://192.168.0.7:54323/Cancel.json?key=$0301c0a8000734d4ffffffff$_aa38007d2111319a601e97c838b44be3
    2014-02-06 20:59:38.991482 -- Start http://192.168.0.7:54323/Cancel.json?key=$0301c0a8000734d4ffffffff$_a46e367dbedd026e6a6d2d6805174a7a
    2014-02-06 20:59:39.041373 -- Start http://192.168.0.7:54323/Cancel.json?key=$0301c0a8000734d4ffffffff$_b732df72604ff457a9766af25b354e36
    2014-02-06 20:59:39.087883 -- Start http://192.168.0.7:54323/Jobs.json?
    2014-02-06 20:59:39.141825 -- Start http://192.168.0.7:54323/2/GBM.json?learn_rate=0.1&destination_key=GBMBad0&classification=0&min_rows=1&ntrees=2&response=C379&ignored_cols_by_name=C4,C5,C6,C7,C8,C9,C10,C11,C12,C15,C17,C18,C19,C20,C21,C425,C426,C427,C541,C542,C379&source=c.hex&grid_parallelism=4&max_depth=8
    2014-02-06 20:59:39.200337 -- Start http://192.168.0.7:54323/2/GBM.json?learn_rate=0.1&destination_key=GBMBad1&classification=0&min_rows=1&ntrees=2&response=C379&ignored_cols_by_name=C4,C5,C6,C7,C8,C9,C10,C11,C12,C15,C17,C18,C19,C20,C21,C425,C426,C427,C541,C542,C379&source=c.hex&grid_parallelism=4&max_depth=8
    2014-02-06 20:59:39.236226 -- Start http://192.168.0.7:54323/2/GBM.json?learn_rate=0.1&destination_key=GBMBad2&classification=0&min_rows=1&ntrees=2&response=C379&ignored_cols_by_name=C4,C5,C6,C7,C8,C9,C10,C11,C12,C15,C17,C18,C19,C20,C21,C425,C426,C427,C541,C542,C379&source=c.hex&grid_parallelism=4&max_depth=8

    08:59:39.289 FJ-9-9 ERRR WATER: + water.DException$DistributedException: from /192.168.0.7:54321; java.lang.IllegalArgumentException: Model GBMBad2 is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:84)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:73)
    + at water.TAtomic.atomic(TAtomic.java:19)
    + at water.Atomic.compute2(Atomic.java:57)
    + at water.DTask.dinvoke(DTask.java:78)
    + at water.RPC$RPCCall.compute2(RPC.java:276)
    + at water.H2O$H2OCountedCompleter.compute(H2O.java:712)
    + at jsr166y.CountedCompleter.exec(CountedCompleter.java:429)
    + at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
    + at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
    + at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
    + at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
    via Kevin Normoyle
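Until cancelled jobs reliably free their keys, one test-side workaround (consistent with the exception's own advice to use a different destination name) is to never reuse a destination_key across passes. A minimal sketch; the naming scheme is made up for illustration and is not what the test actually does:

```python
import uuid

def fresh_destination_key(base, pass_num):
    """Return a destination_key unique to this pass, so a rerun after
    Cancel cannot collide with a model key left locked by an earlier pass."""
    return "%s_pass%d_%s" % (base, pass_num, uuid.uuid4().hex[:8])

# The second pass would request e.g. GBMBad0_pass2_<random> instead of GBMBad0
key = fresh_destination_key("GBMBad0", 2)
```

This sidesteps the lock rather than fixing it; the underlying bug (cancelled jobs leaving keys write-locked) remains.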
  • Here I do back-to-back PostFiles (actually overlapped, on different nodes) of the same file (iris), with a different key on the PostFile for the source and a different destination_key for the parsed result. I had thought that was enough to guarantee they were independent, since the visible created keys have different names. Here's the commands.log showing what I'm doing:

    2014-03-29 15:16:51.917781 -- Start http://10.71.0.101:54321/2/PostFile.json?key=iris2.csv_1 #/home/0xdiag/h2o/smalldata/iris/iris2.csv
    2014-03-29 15:16:51.917919 -- Start http://10.71.0.100:54321/2/PostFile.json?key=iris2.csv_0 #/home/0xdiag/h2o/smalldata/iris/iris2.csv
    2014-03-29 15:16:51.918169 -- Start http://10.71.0.100:54321/2/PostFile.json?key=iris2.csv_2 #/home/0xdiag/h2o/smalldata/iris/iris2.csv
    2014-03-29 15:16:51.970297 -- Start http://10.71.0.101:54321/2/Parse2.json?destination_key=iris2.csv_2.hex&source_key=iris2.csv_2
    2014-03-29 15:16:51.985916 -- Start http://10.71.0.100:54321/2/Parse2.json?destination_key=iris2.csv_1.hex&source_key=iris2.csv_1
    2014-03-29 15:16:51.991497 -- Start http://10.71.0.101:54321/2/Parse2.json?destination_key=iris2.csv_0.hex&source_key=iris2.csv_0

    I get this in stdout. It's possible h2o has some internal state it's protecting that's locked to the actual filename. So either there's a real lock, or this is incorrectly detecting a collision when there shouldn't be one.

    java.lang.IllegalArgumentException: Dataset iris2.csv_0.hex is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:84)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:73)
    + at water.TAtomic.atomic(TAtomic.java:19)
    + at water.Atomic.compute2(Atomic.java:57)
    + at water.Atomic.fork(Atomic.java:42)
    + at water.Atomic.invoke(Atomic.java:34)
    + at water.Lockable.write_lock(Lockable.java:59)
    + at water.Lockable.delete_and_lock(Lockable.java:63)
    + at water.fvec.ParseDataset2.forkParseDataset(ParseDataset2.java:58)
    + at water.api.Parse2.serve(Parse2.java:43)
    + at water.api.Request.serveGrid(Request.java:133)
    + at water.api.Request.serve(Request.java:110)
    + at water.api.RequestServer.serve(RequestServer.java:327)
    + at water.NanoHTTPD$HTTPSession.run(NanoHTTPD.java:421)
    + at java.lang.Thread.run(Thread.java:744)

    If I create hard links with different names (pointing to the same file), I can run this test successfully (with covtype.data and covtype20x.data also), with up to 10 parallel uploads/parses overlapping, all with different key names as above and with the links providing different source file names as well.
    via Kevin Normoyle
  • There is an error in unlocking the dataset after the previous model fails. The error in the previous model is a Java OutOfMemory, and the following model is successfully created despite the locked-job error. The only issue is that the R unit test would fail overall. The test is too long to run, so the script for repro is attached. Output from logs:

    05:22:21.748 # Session INFO HTTPD: POST /2/Exec2.json str=arcene.train.full = arcene.train.full
    05:22:21.879 # Session ERRR WATER: env.remove_and_unlock() failed
    + java.lang.IllegalArgumentException: Dataset arcene.train.full is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:85)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:74)
    + at water.TAtomic.atomic(TAtomic.java:19)
    + at water.Atomic.compute2(Atomic.java:58)
    + at water.Atomic.fork(Atomic.java:42)
    + at water.Atomic.invoke(Atomic.java:34)
    + at water.Lockable.write_lock(Lockable.java:60)
    + at water.exec.Env.remove_and_unlock(Env.java:349)
    + at water.api.Exec2.serve(Exec2.java:71)
    + at water.api.Request.serveGrid(Request.java:165)
    + at water.Request2.superServeGrid(Request2.java:482)
    + at water.api.Exec2.serveGrid(Exec2.java:78)
    + at water.api.Request.serve(Request.java:142)
    + at water.api.RequestServer.serve(RequestServer.java:484)
    + at water.NanoHTTPD$HTTPSession.run(NanoHTTPD.java:424)
    + at java.lang.Thread.run(Thread.java:745)

    Output from test:

    [2014-08-14 17:37:37] [ERROR] : Error: Test failed: 'Testing memory performance of Strong Rules'
    Not expected: http://127.0.0.1:54321/2/Exec2.json returned the following error: Frame is already locked by job null.
    1: withWarnings(test(conn))
    2: withCallingHandlers(expr, warning = wHandler)
    3: test(conn)
    4: h2o.glm(x = c(1:7000), y = "arcene.train.label", data = arcene.train.full, family = "binomial", lambda_search = T, alpha = 1, nfolds = 0, use_all_factor_levels = T)
    5: .h2o.get.glm(data@h2o, as.character(res$destination_key), return_all_lambda)
    6: h2o.getFrame(h2o, pre$json$glm_model$dataKey)
    7: .h2o.exec2(expr = key, h2o = h2o, dest_key = key)
    8: .h2o.__exec2_dest_key(h2o, expr, dest_key)
    9: .h2o.__remoteSend(client, .h2o.__PAGE_EXEC2, str = expr)
    10: stop(paste(myURL, " returned the following error:\n", .h2o.__formatError(res$error)))
    11: .handleSimpleError(function (e) { e$calls <- head(sys.calls()[-seq_len(frame + 7)], -2) signalCondition(e) }, "http://127.0.0.1:54321/2/Exec2.json returned the following error:\n Frame is already locked by job null.\n", quote(.h2o.__remoteSend(client, .h2o.__PAGE_EXEC2, str = expr)))
    via Ariel Rao
  • This looks like a test failure. The test will randomly have a name collision on model names. Or is this a cleanup problem? ./h2o-py/tests/testdir_algos/kmeans/pyunit_random_attack_medium.py
    Failure example: http://172.16.2.161:8080/job/h2o_master_DEV_win8_pyunit_medium_large/1248/artifact/h2o-py/tests/results/java_0_0.out.txt

    11-01 09:37:46.444 172.17.6.15:56789 2404 FJ-0-5 INFO: {"_model_id":{"name":"my_model","type":"Key"},"_train":{"name":"ozone.hex","type":"Key"},"_valid":null,"_nfolds":0,"_keep_cross_validation_predictions":false,"_fold_assignment":"AUTO","_distribution":"AUTO","_tweedie_power":1.5,"_ignored_columns":["wind"],"_ignore_const_cols":true,"_weights_column":null,"_offset_column":null,"_fold_column":null,"_score_each_iteration":false,"_stopping_rounds":0,"_stopping_metric":"AUTO","_stopping_tolerance":0.001,"_response_column":null,"_balance_classes":false,"_max_after_balance_size":5.0,"_class_sampling_factors":null,"_max_hit_ratio_k":10,"_max_confusion_matrix_size":20,"_checkpoint":null,"_k":20,"_max_iterations":1000,"_standardize":false,"_seed":8718,"_init":"Furthest","_user_points":null,"_pred_indicator":false}
    11-01 09:37:46.444 172.17.6.15:56789 2404 FJ-0-5 INFO: Dropping ignored columns: [wind]

    java.lang.IllegalArgumentException: class hex.kmeans.KMeansModel my_model is already in use. Unable to use it now. Consider using a different destination name.
    at water.Lockable$PriorWriteLock.atomic(Lockable.java:109)
    at water.Lockable$PriorWriteLock.atomic(Lockable.java:98)
    at water.TAtomic.atomic(TAtomic.java:17)
    at water.Atomic.compute2(Atomic.java:55)
    at water.Atomic.fork(Atomic.java:39)
    at water.Atomic.invoke(Atomic.java:31)
    at water.Lockable.write_lock(Lockable.java:59)
    at water.Lockable.delete_and_lock(Lockable.java:66)
    at hex.kmeans.KMeans$KMeansDriver.compute2(KMeans.java:269)
    at water.H2O$H2OCountedCompleter.compute(H2O.java:1065)
    at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
    at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
    at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
    at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
    at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)

    barrier onExCompletion for hex.kmeans.KMeans$KMeansDriver@60442d4a
    java.lang.AssertionError: Can't unlock: Not locked!
    at water.Lockable$Unlock.atomic(Lockable.java:187)
    at water.Lockable$Unlock.atomic(Lockable.java:177)
    at water.TAtomic.atomic(TAtomic.java:17)
    at water.Atomic.compute2(Atomic.java:55)
    at water.Atomic.fork(Atomic.java:39)
    at water.Atomic.invoke(Atomic.java:31)
    at water.Lockable.unlock(Lockable.java:172)
    at water.Lockable.unlock(Lockable.java:168)
    at hex.kmeans.KMeans$KMeansDriver.compute2(KMeans.java:332)
    at water.H2O$H2OCountedCompleter.compute(H2O.java:1065)
    at jsr166y.CountedCompleter.exec(CountedCompleter.java:468)
    at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
    at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
    at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
    at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
    via Brandon Hill
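If the failure really is a random model-name collision, the pyunit could guard each build with a retry that switches to a fresh name whenever the server reports the key in use. A sketch only: `build` is a stand-in callable, and matching on the error text is an assumption about how the client surfaces the IllegalArgumentException:

```python
import uuid

def build_with_fresh_name(build, base_name, attempts=3):
    """Call build(model_id); on an 'already in use' rejection,
    retry with a uuid-suffixed model id instead of failing the test."""
    name = base_name
    for _ in range(attempts):
        try:
            return build(name)
        except RuntimeError as err:
            if "already in use" not in str(err):
                raise  # unrelated failure: let it propagate
            name = "%s_%s" % (base_name, uuid.uuid4().hex[:8])
    raise RuntimeError("no free model name found for %r" % base_name)
```

This only papers over the collision; it would not help with the follow-on "Can't unlock: Not locked!" assertion, which points at a genuine cleanup bug.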
  • It's not necessarily a good thing to do, but we should support it. [Load a dataset; call it d.]

    d$col2000 <- min(d$col2000)
    Error in .h2o.__remoteSend(client, .h2o.__PAGE_EXEC2, str = expr) :
      http://127.0.0.1:54321/2/Exec2.json returned the following error: 2

    Better yet, once you've done this, stuff breaks everywhere:

    > d$col2000 <- min(d$col2000) + d$col2000
    Error in .h2o.__remoteSend(client, .h2o.__PAGE_EXEC2, str = expr) :
      http://127.0.0.1:54321/2/Exec2.json returned the following error: Dataset Last.value.4 is already in use. Unable to use it now. Consider using a different destination name.
    > head(d)
    Error in data.frame(col1 = 0.542521227151155, col2 = 0.267292166827247, : arguments imply differing number of rows: 1, 0

    04:23:10.668 # Session INFO HTTPD: POST /2/Exec2.json str=Last.value.8 = bad_2k.hex[,2000] = c(0.000574313336983323)
    04:23:10.747 # Session ERRR WATER:
    + java.lang.ArrayIndexOutOfBoundsException: 2
    + at water.fvec.Vec.elem2ChunkIdx(Vec.java:407)
    + at water.fvec.Vec.chunk(Vec.java:505)
    + at water.fvec.Vec.at(Vec.java:513)
    + at water.fvec.Frame.toString(Frame.java:475)
    + at water.api.Exec2.serve(Exec2.java:50)
    + at water.api.Request.serveGrid(Request.java:129)
    + at water.Request2.superServeGrid(Request2.java:472)
    + at water.api.Exec2.serveGrid(Exec2.java:71)
    + at water.api.Request.serve(Request.java:108)
    + at water.api.RequestServer.serve(RequestServer.java:315)
    + at water.NanoHTTPD$HTTPSession.run(NanoHTTPD.java:421)
    + at java.lang.Thread.run(Thread.java:724)
    04:23:10.748 # Session ERRR WATER:
    + java.lang.ArrayIndexOutOfBoundsException: 2
    + at water.fvec.Vec.elem2ChunkIdx(Vec.java:407)
    + at water.fvec.Vec.chunk(Vec.java:505)
    + at water.fvec.Vec.at(Vec.java:513)
    + at water.fvec.Frame.toString(Frame.java:475)
    + at water.api.Exec2.serve(Exec2.java:50)
    + at water.api.Request.serveGrid(Request.java:129)
    + at water.Request2.superServeGrid(Request2.java:472)
    + at water.api.Exec2.serveGrid(Exec2.java:71)
    + at water.api.Request.serve(Request.java:108)
    + at water.api.RequestServer.serve(RequestServer.java:315)
    + at water.NanoHTTPD$HTTPSession.run(NanoHTTPD.java:421)
    + at java.lang.Thread.run(Thread.java:724)
    04:23:51.514 # Session INFO HTTPD: POST /Inspect.json key=bad_2k.hex
    04:23:51.633 # Session INFO HTTPD: POST /Inspect.json key=bad_2k.hex max_column_display=2147483647
    04:23:51.753 # Session INFO HTTPD: POST /2/Exec2.json str=Last.value.9 = bad_2k.hex[,2000]
    04:23:51.768 # Session ERRR WATER:
    + java.lang.IllegalArgumentException: Dataset Last.value.4 is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:84)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:73)
    + at water.TAtomic.atomic(TAtomic.java:19)
    + at water.Atomic.compute2(Atomic.java:57)
    + at water.Atomic.fork(Atomic.java:42)
    + at water.Atomic.invoke(Atomic.java:34)
    + at water.Lockable.write_lock(Lockable.java:59)
    + at water.exec.Env.remove_and_unlock(Env.java:326)
    + at water.api.Exec2.serve(Exec2.java:62)
    + at water.api.Request.serveGrid(Request.java:129)
    + at water.Request2.superServeGrid(Request2.java:472)
    + at water.api.Exec2.serveGrid(Exec2.java:71)
    + at water.api.Request.serve(Request.java:108)
    + at water.api.RequestServer.serve(RequestServer.java:315)
    + at water.NanoHTTPD$HTTPSession.run(NanoHTTPD.java:421)
    + at java.lang.Thread.run(Thread.java:724)
    via Earl Hathaway
  • Say you load a dataset and it didn't load correctly because of a bad column name. I edited the file, then reran the same R code to load the data:

    d <- h2o.importFile(l, '/Users/earl/work/bad.2k.csv')

    Now a number of operations emit the exception below. Edit: it gets worse. Even if I reload the data and specify a different key, e.g.

    d <- h2o.importFile(l, '/Users/earl/work/bad.2k.csv', 'fml')

    the key is set properly, but operations such as

    > min(d$col2000)
    Error in .h2o.__remoteSend(client, .h2o.__PAGE_EXEC2, str = expr) :
      http://127.0.0.1:54321/2/Exec2.json returned the following error: Dataset Last.value.38 is already in use. Unable to use it now. Consider using a different destination name.

    create the exception below as well. To get Exec2 to work again you have to bounce the whole cluster.

    + java.lang.IllegalArgumentException: Dataset bad_2k.hex is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:84)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:73)
    + at water.TAtomic.atomic(TAtomic.java:19)
    + at water.Atomic.compute2(Atomic.java:57)
    + at water.Atomic.fork(Atomic.java:42)
    + at water.Atomic.invoke(Atomic.java:34)
    + at water.Lockable.write_lock(Lockable.java:59)
    + at water.exec.Env.remove_and_unlock(Env.java:326)
    + at water.api.Exec2.serve(Exec2.java:62)
    + at water.api.Request.serveGrid(Request.java:129)
    + at water.Request2.superServeGrid(Request2.java:472)
    + at water.api.Exec2.serveGrid(Exec2.java:71)
    + at water.api.Request.serve(Request.java:108)
    + at water.api.RequestServer.serve(RequestServer.java:315)
    + at water.NanoHTTPD$HTTPSession.run(NanoHTTPD.java:421)
    + at java.lang.Thread.run(Thread.java:724)

    04:29:39.990 # Session INFO HTTPD: POST /Inspect.json key=bad_2k2.hex
    04:29:40.230 # Session INFO HTTPD: POST /Inspect.json key=bad_2k2.hex max_column_display=2147483647
    04:29:40.350 # Session INFO HTTPD: POST /2/Exec2.json str=Last.value.38 = bad_2k2.hex[,2000]
    04:29:40.366 # Session ERRR WATER:
    + java.lang.IllegalArgumentException: Dataset bad_2k.hex is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:84)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:73)
    + at water.TAtomic.atomic(TAtomic.java:19)
    + at water.Atomic.compute2(Atomic.java:57)
    + at water.Atomic.fork(Atomic.java:42)
    + at water.Atomic.invoke(Atomic.java:34)
    + at water.Lockable.write_lock(Lockable.java:59)
    + at water.exec.Env.remove_and_unlock(Env.java:326)
    + at water.api.Exec2.serve(Exec2.java:62)
    + at water.api.Request.serveGrid(Request.java:129)
    + at water.Request2.superServeGrid(Request2.java:472)
    + at water.api.Exec2.serveGrid(Exec2.java:71)
    + at water.api.Request.serve(Request.java:108)
    + at water.api.RequestServer.serve(RequestServer.java:315)
    + at water.NanoHTTPD$HTTPSession.run(NanoHTTPD.java:421)
    + at java.lang.Thread.run(Thread.java:724)

    ================ new exception

    04:33:21.503 # Session ERRR WATER:
    + java.lang.IllegalArgumentException: Dataset Last.value.38 is already in use. Unable to use it now. Consider using a different destination name.
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:84)
    + at water.Lockable$PriorWriteLock.atomic(Lockable.java:73)
    + at water.TAtomic.atomic(TAtomic.java:19)
    + at water.Atomic.compute2(Atomic.java:57)
    + at water.Atomic.fork(Atomic.java:42)
    + at water.Atomic.invoke(Atomic.java:34)
    + at water.Lockable.write_lock(Lockable.java:59)
    + at water.exec.Env.remove_and_unlock(Env.java:326)
    + at water.api.Exec2.serve(Exec2.java:62)
    + at water.api.Request.serveGrid(Request.java:129)
    + at water.Request2.superServeGrid(Request2.java:472)
    + at water.api.Exec2.serveGrid(Exec2.java:71)
    + at water.api.Request.serve(Request.java:108)
    + at water.api.RequestServer.serve(RequestServer.java:315)
    + at water.NanoHTTPD$HTTPSession.run(NanoHTTPD.java:421)
    + at java.lang.Thread.run(Thread.java:724)
    via Earl Hathaway
    • java.lang.IllegalArgumentException: Dataset nfs://home/0xcustomer/home-0xdiag-datasets/manyfiles-nflx-gz/file_100.dat.gz is already in use. Unable to use it now. Consider using a different destination name.
      at water.Lockable$PriorWriteLock.atomic(Lockable.java:83)
      at water.Lockable$PriorWriteLock.atomic(Lockable.java:72)
      at water.TAtomic.atomic(TAtomic.java:19)
      at water.Atomic.compute2(Atomic.java:57)
      at water.DTask.dinvoke(DTask.java:78)
      at water.RPC$RPCCall.compute2(RPC.java:276)
      at water.H2O$H2OCountedCompleter.compute(H2O.java:712)
      at jsr166y.CountedCompleter.exec(CountedCompleter.java:429)
      at jsr166y.ForkJoinTask.doExec(ForkJoinTask.java:263)
      at jsr166y.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:974)
      at jsr166y.ForkJoinPool.runWorker(ForkJoinPool.java:1477)
      at jsr166y.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
    No Bugmate found.