
  • Quite a number of different manifestations of this have been observed by a number of our customers using different cloud providers. What they have in common is the use of a "single-shot" style retention strategy, though with great care the root cause can be observed when using any retention strategy other than Always. The basic issue is that you cannot determine whether a node is idle unless you hold the Queue lock, as that is the only way to ensure that the Queue is not in the process of assigning work to the node you are removing. Symptoms include:
    * Build logs that claim the job was executed on "master" even though the job is tied to a specific label that master does not have. The build log will have been "unable to be determined".
    * Build logs where the node is gone just as soon as the job starts:
    {code}
    2015-03-05 13:27:55.101 Started by upstream project "____" build number ___
    2015-03-05 13:27:55.102 originally caused by:
    2015-03-05 13:27:55.103 Started by user ____
    2015-03-05 13:27:55.437 FATAL: no longer a configured node for ____
    2015-03-05 13:27:55.440 java.lang.IllegalStateException: no longer a configured node for ____
    2015-03-05 13:27:55.440 at hudson.model.AbstractBuild$AbstractBuildExecution.getCurrentNode(
    2015-03-05 13:27:55.440 at hudson.model.AbstractBuild$
    2015-03-05 13:27:55.441 at hudson.model.Run.execute(
    2015-03-05 13:27:55.441 at
    2015-03-05 13:27:55.441 at hudson.model.ResourceController.execute(
    2015-03-05 13:27:55.441 at
    {code}
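The locking discipline described above can be sketched with a toy model. All names here are illustrative, not the real Jenkins API: a `ReentrantLock` stands in for the Queue lock, the scheduler's "assign a build" and the reaper's "check idle, then remove" both take that same lock, so the idle check and the removal form one atomic step and no build can be assigned in between.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.locks.ReentrantLock;

// Toy model of the race described above (illustrative names only, not the
// real Jenkins API): a scheduler assigns builds to nodes while a reaper
// removes idle nodes. Both hold the same lock, so "is the node idle?" and
// "remove the node" cannot interleave with an assignment.
public class NodeReaperSketch {
    private final ReentrantLock queueLock = new ReentrantLock(); // stands in for the Queue lock
    private final Set<String> nodes = new HashSet<>();
    private final Set<String> busyNodes = new HashSet<>();

    public void addNode(String name) {
        queueLock.lock();
        try { nodes.add(name); } finally { queueLock.unlock(); }
    }

    /** Scheduler side: mark a node busy, but only if it still exists. */
    public boolean assignBuild(String name) {
        queueLock.lock();
        try {
            if (!nodes.contains(name)) return false; // node already reaped
            return busyNodes.add(name);
        } finally { queueLock.unlock(); }
    }

    /** Reaper side: the idle check and the removal happen under the same
     *  lock, so a build cannot be assigned between them. */
    public boolean removeIfIdle(String name) {
        queueLock.lock();
        try {
            if (busyNodes.contains(name)) return false; // not idle
            return nodes.remove(name);
        } finally { queueLock.unlock(); }
    }

    public static void main(String[] args) {
        NodeReaperSketch s = new NodeReaperSketch();
        s.addNode("ci-node-1");
        s.assignBuild("ci-node-1");
        System.out.println(s.removeIfIdle("ci-node-1")); // prints false: build in flight
    }
}
```

Doing the idle check outside the lock and the removal inside it (or vice versa) reintroduces the check-then-act window that produces "no longer a configured node".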
    reported by stephenconnolly
  • I have:
    * Zuul
    * Gearman plugin 0.1.3 (with )
    * Jenkins 1.625.3
    * Nodepool 0.1.1 (yeah, it is old)
    Today I added a new job that runs a test suite. On build completion I have a few publishers:
    * Archive the artifacts ( logs/* ). Note the build produces no log, but the archiver is set to not fail.
    * PostBuildScript, to trigger another project (named castor-save).
    The archiver fails because the node went offline while it was executing:
    {{
    ✓ retrieve en.wp main page via mobile-sections (364ms)
    ✓ retrieve lead section of en.wp main page via mobile-sections-lead (306ms)
    FATAL: no longer a configured node for ci-jessie-wikimedia-33866
    java.lang.IllegalStateException: no longer a configured node for ci-jessie-wikimedia-33866
    at hudson.model.AbstractBuild$AbstractBuildExecution.getCurrentNode(
    at hudson.model.AbstractBuild$AbstractBuildExecution.reportBrokenChannel(
    at hudson.model.AbstractBuild$AbstractBuildExecution.perform(
    at hudson.model.Build$
    at hudson.model.Build$BuildExecution.doRun(
    at hudson.model.AbstractBuild$
    at hudson.model.Run.execute(
    at
    at hudson.model.ResourceController.execute(
    at
    ERROR: Step ‘Archive the artifacts’ failed: no workspace for mobileapps-deploy-npm-node-4.3 #1
    [PostBuildScript] - Execution post build scripts.
    [PostBuildScript] Build is not success : do not execute script
    Finished: FAILURE
    }}
    I am apparently not the only one impacted. From a recent IRC log:
    > Thelo greghaynes: once in a while I get this error: FATAL: no longer a configured node for d-p-c-local_01-769 in my job's console
    JENKINS-26665 "Complete lack of correct synchronization or concern for thread safety in mansion cloud plugin" has a similar stack trace.
    Job page: (hopefully Jenkins will keep it). I have attached the XML configuration. It ran on node ci-jessie-wikimedia-33866. The job failure occurred on Feb 15th 2016 at 17:39:02. In my case I had two different jobs running on the same node.
    The nodepool log goes something like:
    {{
    2016-02-15 17:31:37,287 INFO nodepool.NodeLauncher: Node id: 33866 is ready
    2016-02-15 17:31:41,056 INFO nodepool.NodeLauncher: Node id: 33866 added to jenkins
    2016-02-15 17:37:21,325 DEBUG nodepool.NodeUpdateListener: Received: onStarted {"name":"integration-config-tox-py27-jessie" ... "node_name":"ci-jessie-wikimedia-33866"
    2016-02-15 17:38:01,808 DEBUG nodepool.NodeUpdateListener: Received: onFinalized {"name":"integration-config-tox-py27-jessie" ... "node_name":"ci-jessie-wikimedia-33866"
    }}
    And half a minute later, a different job is assigned to the same node:
    {{
    2016-02-15 17:38:33,867 DEBUG nodepool.NodeUpdateListener: Received: onStarted {"name":"mobileapps-deploy-npm-node-4.3" ... "node_name":"ci-jessie-wikimedia-33866"
    2016-02-15 17:38:33,871 INFO nodepool.NodeUpdateListener: Setting node id: 33866 to USED
    2016-02-15 17:39:01,875 DEBUG nodepool.NodePool: Deleting node id: 33866 which has been in used state for 0.00802109248108 hours
    2016-02-15 17:39:02,942 DEBUG nodepool.NodeUpdateListener: Received: onCompleted {"name":"mobileapps-deploy-npm-node-4.3" ... "node_name":"ci-jessie-wikimedia-33866" FAILURE
    a0ab290726d747608dcac63b1f1a33b5","ZUUL_VOTING":"1"},"node_name":"ci-jessie-wikimedia-33866" FAILURE
    2016-02-15 17:39:06,763 INFO nodepool.NodePool: Deleted jenkins node id: 33866
    }}
    reported by Antoine Musso
    • {code}
      java.lang.IllegalStateException: /var/jenkins_home/jobs/v3_flash_matrix_delphi_1/configurations/axis-BROWSER/chrome7/axis-label/win8.1_64bit/builds/211 already existed; will not overwrite with v3_flash_matrix_delphi_1/BROWSER=chrome7,label=win8.1_64bit #211
      at hudson.model.RunMap.put(
      at hudson.matrix.MatrixConfiguration.newBuild(
      at hudson.matrix.MatrixConfiguration.newBuild(
      at hudson.model.AbstractProject.createExecutable(
      at hudson.model.AbstractProject.createExecutable(
      at hudson.model.Executor$
      at hudson.model.Executor$
      at hudson.model.Queue._withLock(
      at hudson.model.Queue.withLock(
      at
      {code}
