I have a Solr index that customers use for fast searches. I currently update it record by record, which is not terribly fast and is a nightmare when I want to rebuild everything from scratch.
Ideally, I want to use Spark (running on Hadoop YARN) as my processing framework.
So my questions are:
a) How can I update Solr from Spark? Do I just have to use plain Java (e.g. SolrJ)? See the first sketch below for what I have in mind.
b) How can I create a whole new Solr index from a Spark DataFrame as quickly as possible? We used to be able to build Lucene index files offline and hand them to Solr. Is that still true? Is SolrCloud much different? See the second sketch below.
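
For (a), here's roughly what I'm imagining: plain SolrJ called from inside a Spark job, one client per partition so nothing non-serializable crosses from driver to executors. I haven't verified this end to end; the ZooKeeper ensemble, collection name, and column names ("id", "title") are placeholders for my actual setup:

```scala
import java.util.{Collections, Optional}
import scala.collection.JavaConverters._

import org.apache.solr.client.solrj.impl.CloudSolrClient
import org.apache.solr.common.SolrInputDocument
import org.apache.spark.sql.{DataFrame, Row}

def indexToSolr(df: DataFrame): Unit = {
  df.rdd.foreachPartition { rows =>
    // Each executor partition builds its own client against the cluster.
    val client = new CloudSolrClient.Builder(
      Collections.singletonList("zk1:2181,zk2:2181"), // placeholder ZK ensemble
      Optional.empty[String]()
    ).build()
    client.setDefaultCollection("my_collection")      // placeholder collection

    // Batch documents instead of sending one update request per record.
    rows.grouped(1000).foreach { batch =>
      val docs = batch.map { row =>
        val doc = new SolrInputDocument()
        doc.addField("id",    row.getAs[String]("id"))
        doc.addField("title", row.getAs[String]("title"))
        doc
      }
      client.add(docs.asJava)
    }

    // Commit once per partition, not per document.
    client.commit()
    client.close()
  }
}
```

My suspicion is that committing once per partition (or leaving commits to Solr's autoCommit settings) is a big part of the speedup; per-record commits are probably what makes my current updates so slow.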
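For (b), I've come across the open-source spark-solr connector (com.lucidworks.spark:spark-solr), which apparently can write a DataFrame straight into a SolrCloud collection. A minimal sketch, assuming that connector is on the classpath; the ZooKeeper string, collection name, input path, and option values are placeholders:

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

object BulkIndex {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("solr-bulk-index").getOrCreate()

    // Placeholder input; any DataFrame whose columns match the Solr schema works.
    val df = spark.read.parquet("hdfs:///data/products.parquet")

    df.write
      .format("solr")
      .option("zkhost", "zk1:2181,zk2:2181") // placeholder ZooKeeper ensemble
      .option("collection", "products")      // collection must exist up front
      .option("batch_size", "5000")          // documents per update request
      .option("commit_within", "10000")      // soft-commit within 10 seconds
      .mode(SaveMode.Overwrite)
      .save()

    spark.stop()
  }
}
```

The other route I've heard of for full rebuilds is generating the Lucene index files offline (e.g. with the MapReduceIndexerTool) and merging them in, which sounds like what we used to do, but I don't know how well that carries over to SolrCloud.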
Feel free to point me at stuff I should read.
thanks