This describes a workflow to run [eggNOG-mapper on a large dataset](https://github.com/eggnogdb/eggnog-mapper/wiki/eggNOG-mapper-v2#setting-up-large-annotation-jobs).
In brief, it consists of three steps (sketched below):
* Split the input FASTA files into chunks
* Run the diamond search from eggNOG-mapper on each chunk
* Merge all per-chunk results into a single `emapper.seed_orthologs` file
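The sketch below illustrates the three steps in Python, assuming the `emapper.py` flags described on the linked wiki page (`-m diamond --no_annot --no_file_comments`); the input file name, chunk size, and output paths are illustrative placeholders, and in practice the per-chunk searches would be submitted in parallel to a cluster rather than run in a serial loop.

```python
"""Minimal sketch of the split / search / merge workflow; names and
chunk size are illustrative, flags follow the linked eggNOG-mapper wiki."""
import glob
import os
import subprocess

INPUT_FASTA = "proteins.faa"   # hypothetical input file
OUT_DIR = "chunks"
CHUNK_SIZE = 10000             # sequences per chunk (illustrative)


def split_fasta(path, out_dir, chunk_size):
    """Step 1: split the input FASTA into chunks of chunk_size sequences."""
    os.makedirs(out_dir, exist_ok=True)
    records, record = [], []
    with open(path) as handle:
        for line in handle:
            if line.startswith(">") and record:
                records.append("".join(record))
                record = []
            record.append(line)
        if record:
            records.append("".join(record))
    chunk_paths = []
    for idx, start in enumerate(range(0, len(records), chunk_size)):
        chunk_path = os.path.join(out_dir, f"chunk_{idx:04d}.faa")
        with open(chunk_path, "w") as out:
            out.writelines(records[start:start + chunk_size])
        chunk_paths.append(chunk_path)
    return chunk_paths


def run_diamond(chunk_path):
    """Step 2: run the diamond search stage of eggNOG-mapper on one chunk
    (flags taken from the 'large annotation jobs' wiki page)."""
    prefix = os.path.splitext(chunk_path)[0]
    subprocess.run(
        ["emapper.py", "-m", "diamond", "--no_annot", "--no_file_comments",
         "-i", chunk_path, "-o", prefix],
        check=True,
    )


def merge_seed_orthologs(out_dir, merged_path):
    """Step 3: concatenate every per-chunk seed_orthologs file into one."""
    with open(merged_path, "w") as merged:
        pattern = os.path.join(out_dir, "*.emapper.seed_orthologs")
        for part in sorted(glob.glob(pattern)):
            with open(part) as handle:
                merged.write(handle.read())


if __name__ == "__main__":
    chunks = split_fasta(INPUT_FASTA, OUT_DIR, CHUNK_SIZE)
    for chunk in chunks:  # serially here; submit as separate jobs in practice
        run_diamond(chunk)
    merge_seed_orthologs(OUT_DIR, "all.emapper.seed_orthologs")
```

The merged `emapper.seed_orthologs` file can then be fed to the annotation stage as described on the linked wiki page.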