apache.hadoop.mapred.JobClient.runJob failure

#java #ubuntu #hadoop #mapreduce

Question:

I ran a test with just the standard MapReduce example job on Ubuntu, but it produced the following error and then failed.

conf.myconf was created as a copy of conf.empty.

 ubuntu@ip-172-31-20-2:/etc/hadoop/conf.myconf$ hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples-2.0.0-mr1-cdh4.7.0.jar grep ~/hadoop/input ~/hadoop/output '吾輩'

16/10/16 09:56:45 WARN conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
16/10/16 09:56:45 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
16/10/16 09:56:45 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
16/10/16 09:56:45 INFO mapred.FileInputFormat: Total input paths to process : 2
16/10/16 09:56:46 INFO mapred.JobClient: Running job: job_local1816293773_0001
16/10/16 09:56:46 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/10/16 09:56:46 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
16/10/16 09:56:46 ERROR mapred.FileOutputCommitter: Mkdirs failed to create file:/etc/hadoop/conf.myconf/grep-temp-607671240/_temporary
16/10/16 09:56:46 INFO mapred.LocalJobRunner: Waiting for map tasks
16/10/16 09:56:46 INFO mapred.LocalJobRunner: Starting task: attempt_local1816293773_0001_m_000000_0
16/10/16 09:56:46 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/10/16 09:56:46 INFO util.ProcessTree: setsid exited with exit code 0
16/10/16 09:56:46 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@1704878c
16/10/16 09:56:46 INFO mapred.MapTask: Processing split: file:/home/ubuntu/hadoop/input/wagahaiwa_nekodearu.utf8.txt:0 1120636
16/10/16 09:56:46 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as counter name instead
16/10/16 09:56:46 INFO mapred.MapTask: numReduceTasks: 1
16/10/16 09:56:46 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/16 09:56:46 INFO mapred.MapTask: io.sort.mb = 100
16/10/16 09:56:46 INFO mapred.MapTask: data buffer = 79691776/99614720
16/10/16 09:56:46 INFO mapred.MapTask: record buffer = 262144/327680
16/10/16 09:56:46 INFO mapred.MapTask: Starting flush of map output
16/10/16 09:56:46 INFO mapred.MapTask: Finished spill 0
16/10/16 09:56:46 INFO mapred.Task: Task:attempt_local1816293773_0001_m_000000_0 is done. And is in the process of commiting
16/10/16 09:56:46 INFO mapred.LocalJobRunner: file:/home/ubuntu/hadoop/input/wagahaiwa_nekodearu.utf8.txt:0 1120636
16/10/16 09:56:46 INFO mapred.Task: Task 'attempt_local1816293773_0001_m_000000_0' done.
16/10/16 09:56:46 INFO mapred.LocalJobRunner: Finishing task: attempt_local1816293773_0001_m_000000_0
16/10/16 09:56:46 INFO mapred.LocalJobRunner: Starting task: attempt_local1816293773_0001_m_000001_0
16/10/16 09:56:46 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/10/16 09:56:46 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@76ae327a
16/10/16 09:56:46 INFO mapred.MapTask: Processing split: file:/home/ubuntu/hadoop/input/wagahaiwa_nekodearu.txt:0 748959
16/10/16 09:56:46 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as counter name instead
16/10/16 09:56:46 INFO mapred.MapTask: numReduceTasks: 1
16/10/16 09:56:46 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/16 09:56:46 INFO mapred.MapTask: io.sort.mb = 100
16/10/16 09:56:46 INFO mapred.MapTask: data buffer = 79691776/99614720
16/10/16 09:56:46 INFO mapred.MapTask: record buffer = 262144/327680
16/10/16 09:56:47 INFO mapred.MapTask: Starting flush of map output
16/10/16 09:56:47 INFO mapred.Task: Task:attempt_local1816293773_0001_m_000001_0 is done. And is in the process of commiting
16/10/16 09:56:47 INFO mapred.LocalJobRunner: file:/home/ubuntu/hadoop/input/wagahaiwa_nekodearu.txt:0 748959
16/10/16 09:56:47 INFO mapred.Task: Task 'attempt_local1816293773_0001_m_000001_0' done.
16/10/16 09:56:47 INFO mapred.LocalJobRunner: Finishing task: attempt_local1816293773_0001_m_000001_0
16/10/16 09:56:47 INFO mapred.LocalJobRunner: Map task executor complete.
16/10/16 09:56:47 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/10/16 09:56:47 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@778b3dc2
16/10/16 09:56:47 INFO mapred.LocalJobRunner:
16/10/16 09:56:47 INFO mapred.Merger: Merging 2 sorted segments
16/10/16 09:56:47 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 19 bytes
16/10/16 09:56:47 INFO mapred.LocalJobRunner:
16/10/16 09:56:47 WARN mapred.LocalJobRunner: job_local1816293773_0001
java.io.IOException: The temporary job-output directory file:/etc/hadoop/conf.myconf/grep-temp-607671240/_temporary doesn't exist!
    at org.apache.hadoop.mapred.FileOutputCommitter.getWorkPath(FileOutputCommitter.java:250)
    at org.apache.hadoop.mapred.FileOutputFormat.getTaskOutputPath(FileOutputFormat.java:240)
    at org.apache.hadoop.mapred.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:44)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:476)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:449)
16/10/16 09:56:47 INFO mapred.JobClient:  map 100% reduce 0%
16/10/16 09:56:47 INFO mapred.JobClient: Job complete: job_local1816293773_0001
16/10/16 09:56:47 INFO mapred.JobClient: Counters: 21
16/10/16 09:56:47 INFO mapred.JobClient:   File System Counters
16/10/16 09:56:47 INFO mapred.JobClient:     FILE: Number of bytes read=3276711
16/10/16 09:56:47 INFO mapred.JobClient:     FILE: Number of bytes written=479842
16/10/16 09:56:47 INFO mapred.JobClient:     FILE: Number of read operations=0
16/10/16 09:56:47 INFO mapred.JobClient:     FILE: Number of large read operations=0
16/10/16 09:56:47 INFO mapred.JobClient:     FILE: Number of write operations=0
16/10/16 09:56:47 INFO mapred.JobClient:   Map-Reduce Framework
16/10/16 09:56:47 INFO mapred.JobClient:     Map input records=4748
16/10/16 09:56:47 INFO mapred.JobClient:     Map output records=483
16/10/16 09:56:47 INFO mapred.JobClient:     Map output bytes=7245
16/10/16 09:56:47 INFO mapred.JobClient:     Input split bytes=219
16/10/16 09:56:47 INFO mapred.JobClient:     Combine input records=483
16/10/16 09:56:47 INFO mapred.JobClient:     Combine output records=1
16/10/16 09:56:47 INFO mapred.JobClient:     Reduce input groups=0
16/10/16 09:56:47 INFO mapred.JobClient:     Reduce shuffle bytes=0
16/10/16 09:56:47 INFO mapred.JobClient:     Reduce input records=0
16/10/16 09:56:47 INFO mapred.JobClient:     Reduce output records=0
16/10/16 09:56:47 INFO mapred.JobClient:     Spilled Records=1
16/10/16 09:56:47 INFO mapred.JobClient:     CPU time spent (ms)=0
16/10/16 09:56:47 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
16/10/16 09:56:47 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
16/10/16 09:56:47 INFO mapred.JobClient:     Total committed heap usage (bytes)=538443776
16/10/16 09:56:47 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
16/10/16 09:56:47 INFO mapred.JobClient:     BYTES_READ=1869595
16/10/16 09:56:47 INFO mapred.JobClient: Job Failed: NA
java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1416)
    at org.apache.hadoop.examples.Grep.run(Grep.java:69)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.Grep.main(Grep.java:93)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
 

However, when I switch from the ubuntu user to the hdfs user, it runs fine.
Why is that?

 hdfs@ip-172-31-20-2:~$ hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples-2.0.0-mr1-cdh4.7.0.jar grep /home/ubuntu/hadoop/input /home/ubuntu/hadoop/output '吾輩'

16/10/16 10:15:04 WARN conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
16/10/16 10:15:04 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
16/10/16 10:15:04 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
16/10/16 10:15:04 INFO mapred.FileInputFormat: Total input paths to process : 2
16/10/16 10:15:04 INFO mapred.JobClient: Running job: job_local783267739_0001
16/10/16 10:15:04 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/10/16 10:15:04 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
16/10/16 10:15:05 INFO mapred.LocalJobRunner: Waiting for map tasks
16/10/16 10:15:05 INFO mapred.LocalJobRunner: Starting task: attempt_local783267739_0001_m_000000_0
16/10/16 10:15:05 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/10/16 10:15:05 INFO util.ProcessTree: setsid exited with exit code 0
16/10/16 10:15:05 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@56683a8d
16/10/16 10:15:05 INFO mapred.MapTask: Processing split: file:/home/ubuntu/hadoop/input/wagahaiwa_nekodearu.utf8.txt:0 1120636
16/10/16 10:15:05 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as counter name instead
16/10/16 10:15:05 INFO mapred.MapTask: numReduceTasks: 1
16/10/16 10:15:05 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/16 10:15:05 INFO mapred.MapTask: io.sort.mb = 100
16/10/16 10:15:05 INFO mapred.MapTask: data buffer = 79691776/99614720
16/10/16 10:15:05 INFO mapred.MapTask: record buffer = 262144/327680
16/10/16 10:15:05 INFO mapred.MapTask: Starting flush of map output
16/10/16 10:15:05 INFO mapred.MapTask: Finished spill 0
16/10/16 10:15:05 INFO mapred.Task: Task:attempt_local783267739_0001_m_000000_0 is done. And is in the process of commiting
16/10/16 10:15:05 INFO mapred.LocalJobRunner: file:/home/ubuntu/hadoop/input/wagahaiwa_nekodearu.utf8.txt:0 1120636
16/10/16 10:15:05 INFO mapred.Task: Task 'attempt_local783267739_0001_m_000000_0' done.
16/10/16 10:15:05 INFO mapred.LocalJobRunner: Finishing task: attempt_local783267739_0001_m_000000_0
16/10/16 10:15:05 INFO mapred.LocalJobRunner: Starting task: attempt_local783267739_0001_m_000001_0
16/10/16 10:15:05 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/10/16 10:15:05 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@2a801853
16/10/16 10:15:05 INFO mapred.MapTask: Processing split: file:/home/ubuntu/hadoop/input/wagahaiwa_nekodearu.txt:0 748959
16/10/16 10:15:05 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as counter name instead
16/10/16 10:15:05 INFO mapred.MapTask: numReduceTasks: 1
16/10/16 10:15:05 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/16 10:15:05 INFO mapred.MapTask: io.sort.mb = 100
16/10/16 10:15:05 INFO mapred.MapTask: data buffer = 79691776/99614720
16/10/16 10:15:05 INFO mapred.MapTask: record buffer = 262144/327680
16/10/16 10:15:05 INFO mapred.MapTask: Starting flush of map output
16/10/16 10:15:05 INFO mapred.Task: Task:attempt_local783267739_0001_m_000001_0 is done. And is in the process of commiting
16/10/16 10:15:05 INFO mapred.LocalJobRunner: file:/home/ubuntu/hadoop/input/wagahaiwa_nekodearu.txt:0 748959
16/10/16 10:15:05 INFO mapred.Task: Task 'attempt_local783267739_0001_m_000001_0' done.
16/10/16 10:15:05 INFO mapred.LocalJobRunner: Finishing task: attempt_local783267739_0001_m_000001_0
16/10/16 10:15:05 INFO mapred.LocalJobRunner: Map task executor complete.
16/10/16 10:15:05 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/10/16 10:15:05 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@6b9918ca
16/10/16 10:15:05 INFO mapred.LocalJobRunner:
16/10/16 10:15:05 INFO mapred.Merger: Merging 2 sorted segments
16/10/16 10:15:05 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 19 bytes
16/10/16 10:15:05 INFO mapred.LocalJobRunner:
16/10/16 10:15:05 INFO mapred.JobClient:  map 100% reduce 0%
16/10/16 10:15:05 INFO mapred.Task: Task:attempt_local783267739_0001_r_000000_0 is done. And is in the process of commiting
16/10/16 10:15:05 INFO mapred.LocalJobRunner:
16/10/16 10:15:05 INFO mapred.Task: Task attempt_local783267739_0001_r_000000_0 is allowed to commit now
16/10/16 10:15:06 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local783267739_0001_r_000000_0' to file:/var/lib/hadoop-hdfs/grep-temp-640163485
16/10/16 10:15:06 INFO mapred.LocalJobRunner: reduce > reduce
16/10/16 10:15:06 INFO mapred.Task: Task 'attempt_local783267739_0001_r_000000_0' done.
16/10/16 10:15:06 INFO mapred.JobClient:  map 100% reduce 100%
16/10/16 10:15:06 INFO mapred.JobClient: Job complete: job_local783267739_0001
16/10/16 10:15:07 INFO mapred.JobClient: Counters: 21
16/10/16 10:15:07 INFO mapred.JobClient:   File System Counters
16/10/16 10:15:07 INFO mapred.JobClient:     FILE: Number of bytes read=5289694
16/10/16 10:15:07 INFO mapred.JobClient:     FILE: Number of bytes written=719795
16/10/16 10:15:07 INFO mapred.JobClient:     FILE: Number of read operations=0
16/10/16 10:15:07 INFO mapred.JobClient:     FILE: Number of large read operations=0
16/10/16 10:15:07 INFO mapred.JobClient:     FILE: Number of write operations=0
16/10/16 10:15:07 INFO mapred.JobClient:   Map-Reduce Framework
16/10/16 10:15:07 INFO mapred.JobClient:     Map input records=4748
16/10/16 10:15:07 INFO mapred.JobClient:     Map output records=483
16/10/16 10:15:07 INFO mapred.JobClient:     Map output bytes=7245
16/10/16 10:15:07 INFO mapred.JobClient:     Input split bytes=219
16/10/16 10:15:07 INFO mapred.JobClient:     Combine input records=483
16/10/16 10:15:07 INFO mapred.JobClient:     Combine output records=1
16/10/16 10:15:07 INFO mapred.JobClient:     Reduce input groups=1
16/10/16 10:15:07 INFO mapred.JobClient:     Reduce shuffle bytes=0
16/10/16 10:15:07 INFO mapred.JobClient:     Reduce input records=1
16/10/16 10:15:07 INFO mapred.JobClient:     Reduce output records=1
16/10/16 10:15:07 INFO mapred.JobClient:     Spilled Records=2
16/10/16 10:15:07 INFO mapred.JobClient:     CPU time spent (ms)=0
16/10/16 10:15:07 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
16/10/16 10:15:07 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
16/10/16 10:15:07 INFO mapred.JobClient:     Total committed heap usage (bytes)=853016576
16/10/16 10:15:07 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
16/10/16 10:15:07 INFO mapred.JobClient:     BYTES_READ=1869595
16/10/16 10:15:07 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with processName=JobTracker, sessionId= - already initialized
16/10/16 10:15:07 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
16/10/16 10:15:07 INFO mapred.FileInputFormat: Total input paths to process : 1
16/10/16 10:15:07 INFO mapred.JobClient: Running job: job_local196240886_0002
16/10/16 10:15:07 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/10/16 10:15:07 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapred.FileOutputCommitter
16/10/16 10:15:07 INFO mapred.LocalJobRunner: Waiting for map tasks
16/10/16 10:15:07 INFO mapred.LocalJobRunner: Starting task: attempt_local196240886_0002_m_000000_0
16/10/16 10:15:07 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/10/16 10:15:07 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5ad979dd
16/10/16 10:15:07 INFO mapred.MapTask: Processing split: file:/var/lib/hadoop-hdfs/grep-temp-640163485/part-00000:0 109
16/10/16 10:15:07 WARN mapreduce.Counters: Counter name MAP_INPUT_BYTES is deprecated. Use FileInputFormatCounters as group name and  BYTES_READ as counter name instead
16/10/16 10:15:07 INFO mapred.MapTask: numReduceTasks: 1
16/10/16 10:15:07 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
16/10/16 10:15:07 INFO mapred.MapTask: io.sort.mb = 100
16/10/16 10:15:07 INFO mapred.MapTask: data buffer = 79691776/99614720
16/10/16 10:15:07 INFO mapred.MapTask: record buffer = 262144/327680
16/10/16 10:15:07 INFO mapred.MapTask: Starting flush of map output
16/10/16 10:15:07 INFO mapred.MapTask: Finished spill 0
16/10/16 10:15:07 INFO mapred.Task: Task:attempt_local196240886_0002_m_000000_0 is done. And is in the process of commiting
16/10/16 10:15:07 INFO mapred.LocalJobRunner: file:/var/lib/hadoop-hdfs/grep-temp-640163485/part-00000:0 109
16/10/16 10:15:07 INFO mapred.Task: Task 'attempt_local196240886_0002_m_000000_0' done.
16/10/16 10:15:07 INFO mapred.LocalJobRunner: Finishing task: attempt_local196240886_0002_m_000000_0
16/10/16 10:15:07 INFO mapred.LocalJobRunner: Map task executor complete.
16/10/16 10:15:07 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
16/10/16 10:15:07 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7ea3e75d
16/10/16 10:15:07 INFO mapred.LocalJobRunner:
16/10/16 10:15:07 INFO mapred.Merger: Merging 1 sorted segments
16/10/16 10:15:07 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 19 bytes
16/10/16 10:15:07 INFO mapred.LocalJobRunner:
16/10/16 10:15:07 INFO mapred.Task: Task:attempt_local196240886_0002_r_000000_0 is done. And is in the process of commiting
16/10/16 10:15:07 INFO mapred.LocalJobRunner:
16/10/16 10:15:07 INFO mapred.Task: Task attempt_local196240886_0002_r_000000_0 is allowed to commit now
16/10/16 10:15:07 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local196240886_0002_r_000000_0' to file:/home/ubuntu/hadoop/output
16/10/16 10:15:07 INFO mapred.LocalJobRunner: reduce > reduce
16/10/16 10:15:07 INFO mapred.Task: Task 'attempt_local196240886_0002_r_000000_0' done.
16/10/16 10:15:08 INFO mapred.JobClient:  map 100% reduce 100%
16/10/16 10:15:08 INFO mapred.JobClient: Job complete: job_local196240886_0002
16/10/16 10:15:08 INFO mapred.JobClient: Counters: 21
16/10/16 10:15:08 INFO mapred.JobClient:   File System Counters
16/10/16 10:15:08 INFO mapred.JobClient:     FILE: Number of bytes read=4312215
16/10/16 10:15:08 INFO mapred.JobClient:     FILE: Number of bytes written=957027
16/10/16 10:15:08 INFO mapred.JobClient:     FILE: Number of read operations=0
16/10/16 10:15:08 INFO mapred.JobClient:     FILE: Number of large read operations=0
16/10/16 10:15:08 INFO mapred.JobClient:     FILE: Number of write operations=0
16/10/16 10:15:08 INFO mapred.JobClient:   Map-Reduce Framework
16/10/16 10:15:08 INFO mapred.JobClient:     Map input records=1
16/10/16 10:15:08 INFO mapred.JobClient:     Map output records=1
16/10/16 10:15:08 INFO mapred.JobClient:     Map output bytes=15
16/10/16 10:15:08 INFO mapred.JobClient:     Input split bytes=109
16/10/16 10:15:08 INFO mapred.JobClient:     Combine input records=0
16/10/16 10:15:08 INFO mapred.JobClient:     Combine output records=0
16/10/16 10:15:08 INFO mapred.JobClient:     Reduce input groups=1
16/10/16 10:15:08 INFO mapred.JobClient:     Reduce shuffle bytes=0
16/10/16 10:15:08 INFO mapred.JobClient:     Reduce input records=1
16/10/16 10:15:08 INFO mapred.JobClient:     Reduce output records=1
16/10/16 10:15:08 INFO mapred.JobClient:     Spilled Records=2
16/10/16 10:15:08 INFO mapred.JobClient:     CPU time spent (ms)=0
16/10/16 10:15:08 INFO mapred.JobClient:     Physical memory (bytes) snapshot=0
16/10/16 10:15:08 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
16/10/16 10:15:08 INFO mapred.JobClient:     Total committed heap usage (bytes)=898629632
16/10/16 10:15:08 INFO mapred.JobClient:   org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
16/10/16 10:15:08 INFO mapred.JobClient:     BYTES_READ=23
 

Answer #1:

 The temporary job-output directory file:/etc/hadoop/conf.myconf/grep-temp-607671240/_temporary doesn't exist!
 

I would say the ubuntu user does not have write access there. Running in local mode, the grep example writes its temporary grep-temp-* output into the current working directory: your failing run was started from /etc/hadoop/conf.myconf, which the ubuntu user cannot create directories in (note the "Mkdirs failed to create file:/etc/hadoop/conf.myconf/grep-temp-.../_temporary" line). The hdfs run succeeded because it was started from that user's writable home directory, /var/lib/hadoop-hdfs.
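
As a quick way to confirm this, a minimal sketch (assuming local mode writes the temp dir into the cwd; `check_writable` is a hypothetical helper, not part of Hadoop) that tests whether the current user can actually create a directory where the job is started:

```shell
#!/bin/sh
# Hypothetical helper: check whether this user can mkdir inside a directory,
# which is exactly what FileOutputCommitter needs for its _temporary dir.
check_writable() {
    dir="$1"
    if mkdir "$dir/_writetest" 2>/dev/null; then
        rmdir "$dir/_writetest"
        echo "$dir: writable"
    else
        echo "$dir: NOT writable - cd to a writable directory before running the job"
    fi
}

# The failing run was started from a root-owned config directory:
check_writable /etc/hadoop/conf.myconf
# The ubuntu user's home directory should pass:
check_writable "$HOME"
```

If the check fails, simply `cd` somewhere the ubuntu user owns (e.g. `cd ~`) before invoking `hadoop jar ... grep ...`; no user switch to hdfs should be needed.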