When a Spark job is submitted, it uploads files to HDFS, so the account submitting the job needs permission to write to HDFS.
In an HDP 2.2.4 (or higher) cluster, a spark user is created during installation, and a corresponding /user/spark directory is created in HDFS where the spark user has full permissions.
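You can check whether that directory already exists by listing the HDFS user directories (the path is from the text above; the exact listing output depends on your cluster):

```shell
# List HDFS user home directories; look for an entry owned by spark
hdfs dfs -ls /user
```

If /user/spark is missing, follow the steps below.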
If the spark user does not exist in your OS, do the following as root:
create the OS user with "useradd spark -g hadoop", then create the /user/spark directory in HDFS, make the spark user the owner of the directory, and give it full permissions.
Then switch to the spark user ("su spark") and run your Spark jobs.
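The steps above can be sketched as the following commands, run as root. The hadoop group and the use of the hdfs superuser account for the HDFS commands are assumptions based on a standard HDP install; adjust names to match your cluster:

```shell
# Create the OS user in the hadoop group (assumes the group already exists)
useradd spark -g hadoop

# Create the HDFS home directory and give the spark user ownership and
# full permissions. HDFS commands are run as the HDFS superuser
# (typically 'hdfs' on an HDP cluster -- an assumption here).
su - hdfs -c "hdfs dfs -mkdir -p /user/spark"
su - hdfs -c "hdfs dfs -chown spark:hadoop /user/spark"
su - hdfs -c "hdfs dfs -chmod 755 /user/spark"
```

After this, "su spark" and submit jobs as usual; the job's uploaded files will land under /user/spark.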