Hello Subhankar,
I recommend following this recipe for writing a Hadoop application that needs to be sensitive to the environment’s configuration (like service addresses).
1. Declare your main class to implement the Tool interface. This gives your class the standard lifecycle that the Hadoop framework expects: the configuration is set on your class before its run method is invoked.
https://hadoop.apache.org/docs/current/api/org/apache/hadoop/util/Tool.html
2. Declare your main class to extend the Configured base class. This gives your class ready-made implementations of the getConf and setConf methods that the Tool interface requires. This step is optional, but it is usually worthwhile, because it saves you from implementing these two methods yourself.
https://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configured.html
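Putting steps 1 and 2 together, the class declaration will look roughly like the sketch below. It assumes your main class is called Main; the body of run is where your own job setup goes.

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;

public class Main extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    // getConf() returns the Configuration that was set on this class before
    // run was called, populated from core-site.xml and related config files.
    // ... build and submit your job here using getConf() ...
    return 0; // return non-zero to signal failure
  }
}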
3. Write the static main method of your class so that it passes an instance of the class to ToolRunner. It will look something like this:
public static void main(String[] args) throws Exception {
  System.exit(ToolRunner.run(new Main(), args));
}
https://hadoop.apache.org/docs/current/api/org/apache/hadoop/util/ToolRunner.html
4. Run your application via the “hadoop jar” command line interface.
hadoop jar your-app.jar org.yourorg.yourapp.MainClass
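Because ToolRunner parses Hadoop's generic options before handing the remaining arguments to your run method, you can also override individual configuration properties from the command line. For example (the namenode hostname here is just a placeholder):

hadoop jar your-app.jar org.yourorg.yourapp.MainClass -D fs.defaultFS=hdfs://namenode.example.com:8020 <your application arguments>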
By following this recipe, you can run your application in any Hadoop cluster, and it will respect the configuration in the deployment environment. Most importantly, it will respect the fs.defaultFS property in core-site.xml, which defines the default FileSystem URI.
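For example, inside your run method you can obtain the cluster's default FileSystem directly from the injected configuration. A minimal sketch (the path below is purely illustrative):

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// inside run(String[] args):
FileSystem fs = FileSystem.get(getConf()); // resolves fs.defaultFS from the deployment environment
boolean exists = fs.exists(new Path("/user/subhankar/input")); // hypothetical path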
I hope this helps.