$NoMatchingRule: No rules applied to yarn/localhost@LOCALREALM

Apache's JIRA Issue Tracker | Roman Shaposhnik | 4 years ago
  1. 0

    Here's what I'm observing on a fully distributed cluster deployed via Bigtop from the RC0 2.0.3-alpha tarball:
    {noformat}
    528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009:$NoMatchingRule: No rules applied to yarn/localhost@LOCALREALM
    at<init>(
    at org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(
    at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(
    at org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(
    at org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$
    at org.apache.hadoop.ipc.RPC$
    at org.apache.hadoop.ipc.Server$Handler$
    at org.apache.hadoop.ipc.Server$Handler$
    at Method)
    at org.apache.hadoop.ipc.Server$
    Caused by:$NoMatchingRule: No rules applied to yarn/localhost@LOCALREALM
    at<init>(
    ... 12 more ]
    {noformat}
    This is submitting a MapReduce job via Oozie 3.3.1. The reason I think this is a Hadoop issue rather than an Oozie one is that when I hack /etc/krb5.conf to be:
    {noformat}
    [libdefaults]
    ticket_lifetime = 600
    default_realm = LOCALHOST
    default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
    default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc

    [realms]
    LOCALHOST = {
        kdc = localhost:88
        default_domain = .local
    }

    [domain_realm]
    .local = LOCALHOST

    [logging]
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmin.log
    default = FILE:/var/log/krb5lib.log
    {noformat}
    the issue goes away. Now, once again -- Kerberos auth is NOT configured for Hadoop, hence it should NOT pay attention to /etc/krb5.conf to begin with.

    Apache's JIRA Issue Tracker | 4 years ago | Roman Shaposhnik$NoMatchingRule: No rules applied to yarn/localhost@LOCALREALM
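    The behavior in item 1 comes down to Hadoop's principal short-name mapping: even when Hadoop security is off, the DEFAULT auth_to_local rule only shortens user@REALM (or service/host@REALM) when REALM matches the default realm from /etc/krb5.conf, and otherwise no rule matches at all. A minimal sketch of that behavior (simplified and illustrative only, not the real logic in

    ```python
    # Simplified, illustrative model of Hadoop's DEFAULT auth_to_local rule --
    # not the actual implementation in
    def default_rule(principal: str, default_realm: str) -> str:
        """Shorten user@REALM or service/host@REALM to its first component,
        but only when REALM equals the default realm from /etc/krb5.conf."""
        name, _, realm = principal.partition("@")
        if realm != default_realm:
            # mirrors the KerberosName$NoMatchingRule failure quoted above
            raise ValueError(f"No rules applied to {principal}")
        return name.split("/")[0]
    ```

    With a matching default realm, `default_rule("yarn/localhost@LOCALREALM", "LOCALREALM")` yields the short name "yarn"; with any other default realm the lookup fails, which is consistent with edits to /etc/krb5.conf changing the outcome even though Kerberos auth is nominally not configured for Hadoop.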
  2. 0

    This might not be a bug. Here is the description. Any workarounds are appreciated.

    I am only able to execute hadoop commands using principals which are in the default realm. seems to be ignored. Attached is a log of everything done. Here is an overview of the configuration and some troubleshooting tests:

    # created and tested a principal using the KDC instead of AD and confirmed all OK
    hadoop george@EC2.INTERNAL
    Name: george@EC2.INTERNAL to george

    # fails to use with principal from AD, seems to ignore rules in
    hadoop george@CLOUDSECURE.LOCAL
    Exception in thread "main"$NoMatchingRule: No rules applied to george@CLOUDSECURE.LOCAL
    at
    at

    # note: ip-10-151-51-135.ec2.internal has Win 2008 R2 + AD DS with 1 forest, and defines all user accounts used for authentication

    /etc/krb5.conf:
    [logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
    default_realm = EC2.INTERNAL
    dns_lookup_realm = false
    dns_lookup_kdc = false
    max_life = 1d
    max_renewable_life = 7d
    ticket_lifetime = 24h
    renew_lifetime = 7d
    forwardable = true
    default_tgs_enctypes = aes256-cts aes128-cts arcfour-hmac des3-hmac-sha1 des-hmac-sha1 des-cbc-md5 des-cbc-crc
    default_tkt_enctypes = aes256-cts aes128-cts arcfour-hmac des3-hmac-sha1 des-hmac-sha1 des-cbc-md5 des-cbc-crc

    [realms]
    EC2.INTERNAL = {
        kdc = ip-10-191-70-81.ec2.internal
        admin_server = ip-10-191-70-81.ec2.internal
        default_domain = EC2.INTERNAL
    }
    CLOUDSECURE.LOCAL = {
        kdc = ip-10-151-51-135.ec2.internal:88
        admin_server = ip-10-151-51-135.ec2.internal:749
        default_domain = EC2.INTERNAL
    }

    [domain_realm]
    .ec2.internal = EC2.INTERNAL
    ec2.internal = EC2.INTERNAL

    cat /etc/hadoop/conf.cloudera.hdfs1/core-site.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!--Autogenerated by Cloudera CM on 2013-10-06T10:16:50.792Z-->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ip-10-191-70-81.ec2.internal:8020</value>
      </property>
      <property>
        <name>fs.trash.interval</name>
        <value>1</value>
      </property>
      <property>
        <name></name>
        <value>kerberos</value>
      </property>
      <property>
        <name></name>
        <value>authentication</value>
      </property>
      <property>
        <name></name>
        <value>RULE:[1:$1@$0](.*@\QEC2.INTERNAL\E$)s/@\QEC2.INTERNAL\E$//
    RULE:[2:$1@$0](.*@\QEC2.INTERNAL\E$)s/@\QEC2.INTERNAL\E$//
    RULE:[1:$1@$0](.*@\QCLOUDSECURE.LOCAL\E$)s/@\QCLOUDSECURE.LOCAL\E$//
    RULE:[2:$1@$0](.*@\QCLOUDSECURE.LOCAL\E$)s/@\QCLOUDSECURE.LOCAL\E$//
    DEFAULT</value>
      </property>
    </configuration>

    Cloudera Open Source | 3 years ago | Daniel Rule$NoMatchingRule: No rules applied to george@CLOUDSECURE.LOCAL
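    The rules quoted in that core-site.xml follow Hadoop's auth_to_local syntax: a RULE:[n:fmt](match)s/find/repl/ entry fires only for principals with n components, builds a candidate name from fmt ($0 is the realm, $1 and $2 the components), tests it against match, then rewrites it. A rough sketch of that evaluation in Python (simplified and illustrative only; the real evaluator lives in and uses Java regex semantics such as \Q...\E quoting, which the helper below translates):

    ```python
    import re

    def _unquote_java(pattern: str) -> str:
        # Translate Java regex \Q...\E literal quoting into Python-style escapes.
        return re.sub(r"\\Q(.*?)\\E", lambda m: re.escape(, pattern)

    def apply_auth_to_local_rule(rule: str, principal: str):
        """Evaluate one RULE:[n:fmt](match)s/find/repl/ entry against a principal.
        Returns the short name, or None when the rule does not apply."""
        parsed = re.fullmatch(
            r"RULE:\[(\d+):([^\]]+)\]\(([^)]+)\)s/([^/]*)/([^/]*)/", rule)
        if parsed is None:
            return None
        n, fmt, match, find, repl = parsed.groups()
        name, _, realm = principal.partition("@")
        parts = name.split("/")
        if len(parts) != int(n):        # rule only fires for n-component principals
            return None
        # $0 = realm, $1 = first component, $2 = second component
        candidate = (fmt.replace("$0", realm)
                        .replace("$1", parts[0])
                        .replace("$2", parts[1] if len(parts) > 1 else ""))
        if re.fullmatch(_unquote_java(match), candidate) is None:
            return None
        return re.sub(_unquote_java(find), repl, candidate)
    ```

    Under this model, the quoted CLOUDSECURE.LOCAL rules would map george@CLOUDSECURE.LOCAL to george, so the reported NoMatchingRule failure suggests the rules in that file never reached the evaluator, consistent with the reporter's suspicion that they were being ignored.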

  4. 0

    run dmlc yarn error, "failure to login"

    GitHub | 1 year ago | robbine failure to login
  5. 0

    Kerberos Authentication Error - When loading Hadoop Config Files from SharedPath

    Stack Overflow | 8 months ago | Padmanabhan Vijendran
    Login failure for name@XX.XX.COM from keytab \\NASdrive\name.keytab: java.lang.IllegalArgumentException: Illegal principal name name@XX.XX.COM:$NoMatchingRule: No rules applied to name@XX.XX.COM


    Root Cause Analysis


    1. No rules applied to yarn/localhost@LOCALREALM

    2. Apache Hadoop Auth
      1 frame
    3. Hadoop
      1 frame
    4. hadoop-mapreduce-client-common
      1. org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(
      1 frame
    5. hadoop-mapreduce-client-hs
      1. org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(
      1 frame
    6. hadoop-mapreduce-client-common
      1. org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(
      1 frame
    7. hadoop-yarn-api
      1. org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(
      1 frame
    8. Hadoop
      1. org.apache.hadoop.ipc.ProtobufRpcEngine$Server$
      2. org.apache.hadoop.ipc.RPC$
      3. org.apache.hadoop.ipc.Server$Handler$
      4. org.apache.hadoop.ipc.Server$Handler$
      4 frames
    9. Java RT
      1. Method)
      2 frames
    10. Hadoop
      2. org.apache.hadoop.ipc.Server$
      2 frames