Searched on Google with the first line of a Java stack trace?

We can recommend more relevant solutions and speed up debugging when you paste the entire stack trace, including the exception message.

Recommended solutions based on your search

Solutions on the web

via Stack Overflow by sawai singh, 1 year ago
Output directory hdfs://localhost:9000/sawai.txt already exists

via Unknown author, 2 years ago
Output directory file:/home/Hamid/output.txt already exists

via Stack Overflow by Prasat Madesh, 11 months ago
Output directory file:/home/deevita/MapReduceTutorial/mapreduce_output_sales already exists

via Stack Overflow by Frank Su, 2 years ago
Output directory hdfs://dv-db.machines:8020/tmp/xxxx/cluster/97916 already exists

via incubator-hcatalog-user by Rohini Palaniswamy, 2 years ago
Output directory hdfs://cluster:54310/user/hive8/warehouse/db/table_1/_DYN0.4448079902737385/load_date=20120515/repo_name=testRepo already exists
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory file:/home/hp/output already exists
	at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(
	at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(
	at org.apache.hadoop.mapreduce.Job$
	at org.apache.hadoop.mapreduce.Job$
	at Method)
	at
	at
	at org.apache.hadoop.mapreduce.Job.submit(
	at org.apache.hadoop.mapred.JobClient$
	at org.apache.hadoop.mapred.JobClient$
	at Method)
	at
	at
	at org.apache.hadoop.mapred.JobClient.submitJobInternal(
	at org.apache.hadoop.mapred.JobClient.submitJob(
	at org.apache.hadoop.mapred.JobClient.runJob(
	at
	at
	at WordCount.WordCount.main(
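The trace shows FileOutputFormat.checkOutputSpecs rejecting a job because its output directory already exists; MapReduce refuses to overwrite previous results. A common workaround is to remove the old output directory before resubmitting the job. The sketch below is not from this page: it uses only the JDK's java.nio.file API to cover the local file:/ path seen in the trace (for an hdfs:// path you would instead call Hadoop's FileSystem.delete with recursive=true); the class and helper names are illustrative.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import java.util.stream.Stream;

// Hypothetical helper: delete a leftover job output directory so that
// FileOutputFormat.checkOutputSpecs does not throw FileAlreadyExistsException
// on the next run. Works for local file:/ output; HDFS paths need the
// Hadoop FileSystem API instead.
public class CleanOutputDir {
    static void deleteRecursively(Path dir) throws IOException {
        if (!Files.exists(dir)) {
            return; // nothing to clean up
        }
        try (Stream<Path> walk = Files.walk(dir)) {
            // Reverse order deletes children before their parent directories.
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> p.toFile().delete());
        }
    }

    public static void main(String[] args) throws IOException {
        // Path taken from the exception message above; adjust for your job.
        Path output = Paths.get("/home/hp/output");
        deleteRecursively(output); // safe even if the directory is absent
        // ...then configure and submit the MapReduce job as usual.
    }
}
```

Deleting up front is simplest for tutorials like WordCount; in production jobs it is often safer to write to a fresh, timestamped output directory instead, so a rerun never silently destroys earlier results.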