Hi,
how can recurring Spark jobs be scheduled on a cluster? This includes restarting failed jobs as well as submitting and configuring new ones.
I found this job server: https://github.com/spark-jobserver
It seems to work only with Scala jobs, though.
Thanks for any help.
Cheers,
Alem