failure to login

Talend Open Integration Solution | Wes Fox | 3 years ago
  1. run dmlc yarn error, "failure to login"

    GitHub | 11 months ago | robbine | failure to login
  2. Kerberos Authentication Error - When loading Hadoop Config Files from SharedPath

    Stack Overflow | 4 months ago | Padmanabhan Vijendran | Login failure for name@XX.XX.COM from keytab \\NASdrive\name.keytab: java.lang.IllegalArgumentException: Illegal principal name name@XX.XX.COM:$NoMatchingRule: No rules applied to name@XX.XX.COM
  3. This might not be a bug. Here is the description; any workarounds are appreciated. I am only able to execute hadoop commands using principals which are in the default realm; hadoop.security.auth_to_local seems to be ignored. Attached is a log of everything done. Here is an overview of the configuration and some troubleshooting tests:

     # created and tested a principal using the KDC instead of AD and confirmed all OK
     hadoop george@EC2.INTERNAL
     Name: george@EC2.INTERNAL to george

     # fails with a principal from AD; seems to ignore the auth_to_local rules
     hadoop george@CLOUDSECURE.LOCAL
     Exception in thread "main"$NoMatchingRule: No rules applied to george@CLOUDSECURE.LOCAL

     # note: ip-10-151-51-135.ec2.internal has Win 2008 R2 + AD DS with 1 forest, and defines all user accounts used for authentication

     /etc/krb5.conf:

     [logging]
      default = FILE:/var/log/krb5libs.log
      kdc = FILE:/var/log/krb5kdc.log
      admin_server = FILE:/var/log/kadmind.log

     [libdefaults]
      default_realm = EC2.INTERNAL
      dns_lookup_realm = false
      dns_lookup_kdc = false
      max_life = 1d
      max_renewable_life = 7d
      ticket_lifetime = 24h
      renew_lifetime = 7d
      forwardable = true
      default_tgs_enctypes = aes256-cts aes128-cts arcfour-hmac des3-hmac-sha1 des-hmac-sha1 des-cbc-md5 des-cbc-crc
      default_tkt_enctypes = aes256-cts aes128-cts arcfour-hmac des3-hmac-sha1 des-hmac-sha1 des-cbc-md5 des-cbc-crc

     [realms]
      EC2.INTERNAL = {
       kdc = ip-10-191-70-81.ec2.internal
       admin_server = ip-10-191-70-81.ec2.internal
       default_domain = EC2.INTERNAL
      }
      CLOUDSECURE.LOCAL = {
       kdc = ip-10-151-51-135.ec2.internal:88
       admin_server = ip-10-151-51-135.ec2.internal:749
       default_domain = EC2.INTERNAL
      }

     [domain_realm]
      .ec2.internal = EC2.INTERNAL
      ec2.internal = EC2.INTERNAL

     cat /etc/hadoop/conf.cloudera.hdfs1/core-site.xml
     <?xml version="1.0" encoding="UTF-8"?>
     <!--Autogenerated by Cloudera CM on 2013-10-06T10:16:50.792Z-->
     <configuration>
       <property>
         <name>fs.defaultFS</name>
         <value>hdfs://ip-10-191-70-81.ec2.internal:8020</value>
       </property>
       <property>
         <name>fs.trash.interval</name>
         <value>1</value>
       </property>
       <property>
         <name>hadoop.security.authentication</name>
         <value>kerberos</value>
       </property>
       <property>
         <name>hadoop.rpc.protection</name>
         <value>authentication</value>
       </property>
       <property>
         <name>hadoop.security.auth_to_local</name>
         <value>RULE:[1:$1@$0](.*@\QEC2.INTERNAL\E$)s/@\QEC2.INTERNAL\E$//
     RULE:[2:$1@$0](.*@\QEC2.INTERNAL\E$)s/@\QEC2.INTERNAL\E$//
     RULE:[1:$1@$0](.*@\QCLOUDSECURE.LOCAL\E$)s/@\QCLOUDSECURE.LOCAL\E$//
     RULE:[2:$1@$0](.*@\QCLOUDSECURE.LOCAL\E$)s/@\QCLOUDSECURE.LOCAL\E$//
     DEFAULT</value>
       </property>
     </configuration>

     (A standalone check of these mapping rules is sketched after this list.)

    Cloudera Open Source | 3 years ago | Daniel Rule | failure to login
  4. Here's what I'm observing on a fully distributed cluster deployed via Bigtop from the RC0 2.0.3-alpha tarball:

     {noformat}
     528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType [TRANSIENT], ErrorCode [JA009], Message [JA009:$NoMatchingRule: No rules applied to yarn/localhost@LOCALREALM
         at <init>(
         at org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(
         at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(
         at org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(
         at org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(
         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$
         at org.apache.hadoop.ipc.RPC$
         at org.apache.hadoop.ipc.Server$Handler$
         at org.apache.hadoop.ipc.Server$Handler$
         at Method)
         at ...
         at ...
         at org.apache.hadoop.ipc.Server$
     Caused by:$NoMatchingRule: No rules applied to yarn/localhost@LOCALREALM
         at <init>(
         ... 12 more ]
     {noformat}

     This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this is a Hadoop issue rather than an Oozie one is that when I hack /etc/krb5.conf to be:

     {noformat}
     [libdefaults]
      ticket_lifetime = 600
      default_realm = LOCALHOST
      default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
      default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc

     [realms]
      LOCALHOST = {
       kdc = localhost:88
       default_domain = .local
      }

     [domain_realm]
      .local = LOCALHOST

     [logging]
      kdc = FILE:/var/log/krb5kdc.log
      admin_server = FILE:/var/log/kadmin.log
      default = FILE:/var/log/krb5lib.log
     {noformat}

     the issue goes away. Now, once again -- Kerberos auth is NOT configured for Hadoop, hence it should NOT pay attention to /etc/krb5.conf to begin with.

    Apache's JIRA Issue Tracker | 4 years ago | Roman Shaposhnik |$NoMatchingRule: No rules applied to yarn/localhost@LOCALREALM
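
    Both of the cluster reports above (entries 3 and 4) come down to how the hadoop.security.auth_to_local rules map a Kerberos principal to a short name. That mapping can be exercised outside a cluster with the KerberosName class from Hadoop's hadoop-auth module; the following is only a minimal sketch (the class name AuthToLocalRuleCheck and the sample principals are illustrative, and the rule string is the one quoted in entry 3):

        import org.apache.hadoop.security.authentication.util.KerberosName;

        public class AuthToLocalRuleCheck {
            public static void main(String[] args) throws Exception {
                // Rules copied from the core-site.xml in entry 3. DEFAULT only matches
                // principals in the default realm, so the CLOUDSECURE.LOCAL rules must
                // actually be picked up, or getShortName() throws NoMatchingRule.
                KerberosName.setRules(
                    "RULE:[1:$1@$0](.*@\\QEC2.INTERNAL\\E$)s/@\\QEC2.INTERNAL\\E$//\n" +
                    "RULE:[2:$1@$0](.*@\\QEC2.INTERNAL\\E$)s/@\\QEC2.INTERNAL\\E$//\n" +
                    "RULE:[1:$1@$0](.*@\\QCLOUDSECURE.LOCAL\\E$)s/@\\QCLOUDSECURE.LOCAL\\E$//\n" +
                    "RULE:[2:$1@$0](.*@\\QCLOUDSECURE.LOCAL\\E$)s/@\\QCLOUDSECURE.LOCAL\\E$//\n" +
                    "DEFAULT");

                for (String principal : new String[] {"george@EC2.INTERNAL", "george@CLOUDSECURE.LOCAL"}) {
                    // getShortName() applies the rules in order; "No rules applied to ..."
                    // means none of them (including DEFAULT) matched the principal's realm.
                    System.out.println(principal + " -> " + new KerberosName(principal).getShortName());
                }
            }
        }

    On a host where the cluster's core-site.xml and krb5.conf are already in place, the equivalent command-line check is "hadoop <principal>", which is where the "Name: george@EC2.INTERNAL to george" output quoted in entry 3 comes from.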


    Root Cause Analysis


    1.$NoMatchingRule

      No rules applied to user@FOOBAR.COM

    2. Apache Hadoop Auth
      1 frame
    3. Hadoop
      3 frames
    4. Java RT
      1. sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      2. sun.reflect.NativeMethodAccessorImpl.invoke(
      3. sun.reflect.DelegatingMethodAccessorImpl.invoke(
      4. java.lang.reflect.Method.invoke(
      9. Method)
      11 frames
    5. Hadoop
      3. org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(
      4. org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(
      5. org.apache.hadoop.fs.FileSystem$Cache.get(
      6. org.apache.hadoop.fs.FileSystem.get(
      7. org.apache.hadoop.fs.FileSystem.get(
      7 frames
    6. visa.poc_01_serialwriteseq_kerberos_0_1
      1. visa.poc_01_serialwriteseq_kerberos_0_1.POC_01_SerialWriteSEQ_Kerberos.tFileInputDelimited_1Process(
      2. visa.poc_01_serialwriteseq_kerberos_0_1.POC_01_SerialWriteSEQ_Kerberos.tLibraryLoad_1Process(
      3. visa.poc_01_serialwriteseq_kerberos_0_1.POC_01_SerialWriteSEQ_Kerberos.runJobInTOS(
      4. visa.poc_01_serialwriteseq_kerberos_0_1.POC_01_SerialWriteSEQ_Kerberos.main(
      4 frames
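
    Pulling the trace together: the Talend-generated job logs in from a keytab and then calls FileSystem.get, and the NoMatchingRule error surfaces there because the logged-in principal's realm is not covered by any hadoop.security.auth_to_local rule. A minimal standalone sketch of that call path follows; it is not the generated Talend code, and the config paths, principal, and keytab location are placeholders:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.security.UserGroupInformation;

        public class KeytabLoginCheck {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // Placeholder config files; a Talend job typically points these at the
                // shared Hadoop configuration path.
                conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
                conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

                // If hadoop.security.auth_to_local has no rule (and DEFAULT does not
                // match) for the realm of the login principal, the keytab login or the
                // later FileSystem.get() call fails with KerberosName$NoMatchingRule,
                // which is what shows up here as "failure to login".
                UserGroupInformation.setConfiguration(conf);
                UserGroupInformation.loginUserFromKeytab(
                        "name@XX.XX.COM",          // placeholder principal
                        "/path/to/name.keytab");   // placeholder keytab path

                FileSystem fs = FileSystem.get(conf);
                System.out.println("Logged in as: " + UserGroupInformation.getCurrentUser());
                System.out.println("Home directory: " + fs.getHomeDirectory());
            }
        }

    Note that loginUserFromKeytab is a no-op unless hadoop.security.authentication is set to kerberos in the loaded configuration, so reports like the ones above are usually resolved in the auth_to_local rules or in krb5.conf rather than in the job code itself.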