I just deleted those six corrupted files.
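For reference, a minimal sketch of how such a cleanup can be done (the path below is a hypothetical placeholder, and -skipTrash makes the removal irreversible, so this assumes the corrupt data is expendable):

# List the corrupt files the NameNode knows about:
hdfs fsck / -list-corruptfileblocks

# Remove a reported file by hand:
hdfs dfs -rm -skipTrash /path/to/corrupt/file

# Or let fsck delete every corrupted file it finds under a path:
hdfs fsck / -delete

Afterwards, the full health check looks clean: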
hadoop fsck / -openforwrite
Total size: 172988470041 B
Total dirs: 2601
Total files: 3086
Total symlinks: 0
Total blocks (validated): 4729 (avg. block size 36580348 B)
Minimally replicated blocks: 4729 (100.0 %)
Over-replicated blocks: 40 (0.8458448 %)
Under-replicated blocks: 0 (0.0 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 3.0148022
Corrupt blocks: 0
Missing replicas: 0 (0.0 %)
Number of data-nodes: 6
Number of racks: 2
FSCK ended at Tue Oct 14 23:13:26 EEST 2014 in 51 milliseconds
The filesystem under path '/' is HEALTHY
But still:
hdfs fsck / -list-corruptfileblocks reports corrupt blocks:
The filesystem under path '/' has 201 CORRUPT files
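One way to dig into the discrepancy (a sketch; the path below stands in for one of the files that -list-corruptfileblocks actually prints) is to run fsck against a reported file directly and inspect its blocks and replica locations:

hdfs fsck /path/reported/as/corrupt -files -blocks -locations

If the blocks show up there with live replicas, one possible explanation is that the NameNode's corrupt-file list is stale relative to the path scan that just reported '/' as HEALTHY.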