I solved it!
The cause was the way Hadoop resolves the FileSystem implementation. When you start from a blank configuration and set everything up programmatically, FileSystem.get asks the class loader for the FileSystem class to use for the URI scheme. Running from Eclipse or the unit test, where the hadoop-hdfs jar is explicitly on the class path, the DistributedFileSystem class is found this way; but once the classes are “shaded” into the “Über-Jar”, i.e. placed directly in the final output jar, this mechanism fails.
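To make the failure mode concrete, the lookup is triggered by a call along these lines (a minimal sketch; the class name and namenode URI are placeholders, not taken from my project):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class FsLookupSketch {
    public static void main(String[] args) throws Exception {
        // Blank configuration, everything configured programmatically.
        Configuration conf = new Configuration();

        // FileSystem.get has to resolve an implementation for the "hdfs" scheme.
        // With no explicit "fs.hdfs.impl" set, it asks the class loader, and
        // inside the shaded jar that lookup fails.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
    }
}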
So I had to set the “fs.hdfs.impl” property explicitly in the configuration, like so:
conf.setClass("fs.hdfs.impl", DistributedFileSystem.class, FileSystem.class);
After this change, HDFS is found and usable.
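For completeness, here is a minimal sketch of the fix in context, assuming a standalone main class and a placeholder namenode address (neither is from my actual project):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class HdfsFromShadedJar {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Pin the HDFS implementation explicitly so the shaded jar does not
        // depend on the class-loader lookup that fails inside the Über-Jar.
        conf.setClass("fs.hdfs.impl", DistributedFileSystem.class, FileSystem.class);

        // Placeholder address; substitute your namenode.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020/"), conf);
        System.out.println(fs.exists(new Path("/"))); // simple sanity check
    }
}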