Thank you for your answer.
Yes, I am indeed asking about disk space.
I could try on a brand-new HDFS cluster with only a single 1 GB file, but hdfs dfsadmin -report
reports the total disk usage of the datanodes, which includes the HDFS users' folders, logs, tmp…
I was wondering whether a replication factor of 3 implies 3 times the data size on disk, or whether there is some formula to estimate the disk space required.
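To make my question concrete, this is the kind of back-of-the-envelope estimate I am after. It is only a sketch: it assumes raw data size times the replication factor, plus the per-block checksum metadata HDFS stores alongside each replica (the 4-byte CRC per 512 bytes is the default dfs.bytes-per-checksum, and I am not accounting for non-DFS usage like logs and tmp):

```python
def hdfs_space_estimate(file_size_bytes, replication=3,
                        checksum_bytes=4, bytes_per_checksum=512):
    """Rough on-disk footprint of a file in HDFS.

    Each block replica is stored with a .meta file holding a CRC
    checksum for every `bytes_per_checksum` bytes of data, so the
    total is slightly more than size * replication.
    """
    meta = (file_size_bytes // bytes_per_checksum + 1) * checksum_bytes
    return replication * (file_size_bytes + meta)

one_gb = 1024 ** 3
print(hdfs_space_estimate(one_gb) / one_gb)  # a bit over 3.0
```

So for my 1 GB test file I would expect roughly 3 GB of actual disk consumption, plus whatever the cluster itself uses outside HDFS — which is exactly the part dfsadmin -report does not separate out for me.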