I have 9 nodes and have started installing HDP 2.3 using Ambari 2.1.0.
Objective: use ONE disk for the NameNode, metadata, etc., and the rest of the disks for storing the HDFS data blocks.
Node-1: 7 disks (1 for root, /opt, etc.; 6 empty)
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-root 16G 2.8G 13G 19% /
tmpfs 24G 0 24G 0% /dev/shm
/dev/sdb1 194M 58M 127M 32% /boot
/dev/mapper/vg00-home 16G 11M 15G 1% /home
/dev/mapper/vg00-nsr 16G 2.4G 13G 16% /nsr
/dev/mapper/vg00-opt 16G 260M 15G 2% /opt
/dev/mapper/vg00-itm 434M 191M 222M 47% /opt/IBM/ITM
/dev/mapper/vg00-tmp 16G 70M 15G 1% /tmp
/dev/mapper/vg00-usr 16G 2.0G 13G 14% /usr
/dev/mapper/vg00-usr_local 248M 231M 4.4M 99% /usr/local
/dev/mapper/vg00-var 16G 4.6G 11G 31% /var
/dev/mapper/vg00-tq 3.0G 974M 1.9G 34% /opt/teamquest
AFS 8.6G 0 8.6G 0% /afs
/dev/sdc1 551G 198M 522G 1% /opt/dev/sdc
/dev/sdd1 551G 198M 522G 1% /opt/dev/sdd
/dev/sde1 551G 198M 522G 1% /opt/dev/sde
/dev/sdf1 551G 198M 522G 1% /opt/dev/sdf
/dev/sdg1 551G 198M 522G 1% /opt/dev/sdg
/dev/sdh1 551G 198M 522G 1% /opt/dev/sdh
Node-2 to Node-9: 8 disks (1 for root, /opt, etc.; 7 empty)
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg00-root 16G 405M 15G 3% /
tmpfs 24G 0 24G 0% /dev/shm
/dev/sda1 194M 58M 126M 32% /boot
/dev/mapper/vg00-home 16G 11M 15G 1% /home
/dev/mapper/vg00-nsr 16G 2.4G 13G 17% /nsr
/dev/mapper/vg00-opt 16G 35M 15G 1% /opt
/dev/mapper/vg00-itm 434M 191M 221M 47% /opt/IBM/ITM
/dev/mapper/vg00-tmp 16G 70M 15G 1% /tmp
/dev/mapper/vg00-usr 16G 1.9G 14G 13% /usr
/dev/mapper/vg00-usr_local 248M 11M 226M 5% /usr/local
/dev/mapper/vg00-var 16G 1.8G 14G 12% /var
/dev/mapper/vg00-tq 3.0G 946M 1.9G 33% /opt/teamquest
AFS 8.6G 0 8.6G 0% /afs
/dev/sdb1 551G 215M 522G 1% /opt/dev/sdb
/dev/sdc1 551G 328M 522G 1% /opt/dev/sdc
/dev/sdd1 551G 215M 522G 1% /opt/dev/sdd
/dev/sde1 551G 198M 522G 1% /opt/dev/sde
/dev/sdf1 551G 198M 522G 1% /opt/dev/sdf
/dev/sdg1 551G 327M 522G 1% /opt/dev/sdg
/dev/sdh1 551G 243M 522G 1% /opt/dev/sdh
In ‘Assign Masters’:
Node-1: NameNode, ZooKeeper Server
Node-2: SNN (Secondary NameNode), RM (ResourceManager), ZooKeeper Server, History Server
Node-3: WebHCat Server, HiveServer2, Hive Metastore, HBase Master, Oozie Server, ZooKeeper Server
Node-4: Kafka Broker, Accumulo Master, etc.
Node-5: Falcon, Knox, etc.
In ‘Assign Slaves and Clients’, nothing is selected for Node-1; the remaining 8 nodes uniformly have the clients, NodeManager, RegionServer, etc.
I am stuck at the ‘Customize Services’ step. For example:
Under ‘HDFS / NameNode directories’, the defaults are:
/nsr/hadoop/hdfs/namenode
/opt/hadoop/hdfs/namenode
/opt/IBM/ITM/hadoop/hdfs/namenode
/tmp/hadoop/hdfs/namenode
/usr/hadoop/hdfs/namenode
/usr/local/hadoop/hdfs/namenode
/var/hadoop/hdfs/namenode
/opt/teamquest/hadoop/hdfs/namenode
/dev/isilon/hadoop/hdfs/namenode
/opt/dev/sdc/hadoop/hdfs/namenode
/opt/dev/sdd/hadoop/hdfs/namenode
/opt/dev/sde/hadoop/hdfs/namenode
/opt/dev/sdf/hadoop/hdfs/namenode
/opt/dev/sdg/hadoop/hdfs/namenode
/opt/dev/sdh/hadoop/hdfs/namenode
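For context, my understanding is that this field maps to the dfs.namenode.name.dir property in hdfs-site.xml. If only one dedicated disk is to hold the metadata, the list would presumably collapse to a single entry, something like this (the /opt/dev/sdc mount is just one candidate disk from Node-1 above, not a recommendation):

```xml
<!-- Sketch of hdfs-site.xml with a single NameNode metadata directory.
     The /opt/dev/sdc path is an example mount from my Node-1 listing. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/opt/dev/sdc/hadoop/hdfs/namenode</value>
</property>
```

As I understand it, multiple entries in dfs.namenode.name.dir make the NameNode mirror its metadata into each listed directory, which would explain why the defaults above scatter directories across every mount point.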
Under ‘DataNode directories’, the defaults are:
/nsr/hadoop/hdfs/data
/opt/hadoop/hdfs/data
/opt/IBM/ITM/hadoop/hdfs/data
/tmp/hadoop/hdfs/data
/usr/hadoop/hdfs/data
/usr/local/hadoop/hdfs/data
/var/hadoop/hdfs/data
/opt/teamquest/hadoop/hdfs/data
/dev/isilon/hadoop/hdfs/data
/opt/dev/sdb/hadoop/hdfs/data
/opt/dev/sdc/hadoop/hdfs/data
/opt/dev/sdd/hadoop/hdfs/data
/opt/dev/sde/hadoop/hdfs/data
/opt/dev/sdf/hadoop/hdfs/data
/opt/dev/sdg/hadoop/hdfs/data
/opt/dev/sdh/hadoop/hdfs/data
To achieve the objective I stated at the beginning, what values should I put, and in which places?
I thought of replacing the ‘DataNode directories’ defaults with:
/opt/dev/sdb
/opt/dev/sdc
/opt/dev/sdd
/opt/dev/sde
/opt/dev/sdf
/opt/dev/sdg
/opt/dev/sdh
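To make my question concrete: my understanding is that Ambari joins each line of this field into a comma-separated value for dfs.datanode.data.dir in hdfs-site.xml. A small sketch of the value I think would result (I am assuming a hadoop/hdfs/data subdirectory on each disk purely as a convention, rather than the bare mount points listed above):

```shell
# Build the comma-separated value that dfs.datanode.data.dir would hold,
# using a dedicated subdirectory on each of the 7 data-disk mounts.
dirs=""
for disk in sdb sdc sdd sde sdf sdg sdh; do
  dirs="${dirs:+$dirs,}/opt/dev/$disk/hadoop/hdfs/data"
done
echo "$dirs"
```

I also wonder whether pointing the property at a subdirectory such as /opt/dev/sdb/hadoop/hdfs/data, rather than the bare mount point /opt/dev/sdb, is preferable, since (as I understand it) if a disk ever fails to mount, blocks would otherwise silently land on the root filesystem.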