bioinfo_utils: commit 1aa9d765

Authored 11 months ago by Blaise Li
Parent commit: bd2e9a4d

Commit message: Example command to run on the cluster.
Showing 3 changed files, with 25 additions and 4 deletions:

- README.md: 19 additions, 1 deletion
- singularity/run_pipeline.sh: 3 additions, 1 deletion
- singularity/workflows_shell.sh: 3 additions, 2 deletions
README.md (+19 −1)

```diff
@@ -92,6 +92,24 @@ workflow, as well as a shell wrapper and symbolic links that can be used in the
 same way as described in the previous section.
 
+#### Running on a cluster with slurm workload manager
+
+In July 2024, attempts were made to make it possible to run the containerized
+pipeline on a computing cluster using the slurm workload manager. The
+above-mentioned shell wrapper (and symbolic links to it) will, by default,
+attempt to run the pipeline using sbatch and can use some environment variables
+in order to specify quality of service (`QOS`), partition (`PART`), location of
+gene lists (`GENE_LISTS_DIR`) and installed genomes (`GENOME_DIR`).
+
+Example command to run on a computing cluster:
+
+    QOS="hubbioit" PART="hubbioit" GENOME_DIR="/pasteur/appa/homes/bli/test/cecere_pipelines_tests/Genomes" GENE_LISTS_DIR="/pasteur/appa/homes/bli/test/cecere_pipelines_tests/Gene_lists" run_sRNA-seq_pipeline 20221219_FS10001183_sRNA-seq_2cells_embryos_testprotocol_purif_pippinprep.yaml --cores 20 -j 300
+
+To run the pipeline on a non-cluster machine, set the `DEFAULT_HOSTNAME`
+variable to this machine's hostname (either on the command-line, as for the
+other variables above, or by editing the wrapper script).
+
 ### Genome preparation
 
 A genome preparation workflow is available at
@@ -115,6 +133,6 @@ This upload upon error can be inactivated by adding `--config upload_on_err=Fals
 
 ## Citing
 
-If you use these tools, please cite the following papers:
+If you use these tools, please cite the following paper:
 
 > Barucci et al, 2020 (doi: [10.1038/s41556-020-0462-7](https://doi.org/10.1038/s41556-020-0462-7))
```
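As a complement to the README addition above, here is a minimal sketch of a non-cluster invocation; the config file name and the `GENOME_DIR`/`GENE_LISTS_DIR` paths are placeholders, and `DEFAULT_HOSTNAME` is the variable the new wrapper-script code checks:

```bash
# Declare the current machine as the "default" host so the wrapper
# skips sbatch and runs the container directly.
# (my_config.yaml and the two directory paths are hypothetical.)
DEFAULT_HOSTNAME="$(hostname)" \
GENOME_DIR="/path/to/Genomes" \
GENE_LISTS_DIR="/path/to/Gene_lists" \
run_sRNA-seq_pipeline my_config.yaml --cores 8
```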
singularity/run_pipeline.sh (+3 −1)

```diff
@@ -56,6 +56,8 @@ BASEDIR=$(dirname "${SCRIPT}")
 container="${BASEDIR}/run_pipeline"
 wrapper="${BASEDIR}/wrap_in_container.sh"
 cluster_config="${BASEDIR}/cluster_config.json"
+# If we are on this machine, then the pipeline will be run without sbatch
+[[ ${DEFAULT_HOSTNAME} ]] || DEFAULT_HOSTNAME="pisa"
 # Do we have singularity?
@@ -128,6 +130,6 @@ cmd="APPTAINERENV_USER=${USER} apptainer run -B /opt/hpc/slurm -B /var/run/munge
 # that are expected to be in a specific location there.
 # singularity run -B /pasteur -B /run/shm:/run/shm ${container} ${PROGNAME} $@
 #[[ $(hostname) = "pisa" ]] && SINGULARITYENV_USER=${USER} singularity run --cleanenv -B /pasteur -B /run/shm:/run/shm ${container} ${PROGNAME} $@ || sbatch --qos=${QOS} --part=${PART} --wrap="${cmd}"
-[[ $(hostname) = "pisa" ]] && SINGULARITYENV_USER=${USER} singularity run -B /pasteur -B /local -B /run/shm:/run/shm ${container} ${PROGNAME} $@ || sbatch --qos=${QOS} --part=${PART} --wrap="${cmd}"
+[[ $(hostname) = ${DEFAULT_HOSTNAME} ]] && SINGULARITYENV_USER=${USER} singularity run -B /pasteur -B /local -B /run/shm:/run/shm ${container} ${PROGNAME} $@ || sbatch --qos=${QOS} --part=${PART} --wrap="${cmd}"
 exit 0
```
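The changed line uses the shell `A && B || C` pattern rather than a true `if`/`else`: the `sbatch` branch also fires if the local `singularity run` itself exits non-zero, not only when the hostname test fails. A minimal standalone bash sketch of the intended branching, with placeholder commands in place of the real container and sbatch calls:

```bash
#!/usr/bin/env bash
# Fall back to "pisa" only when DEFAULT_HOSTNAME is unset or empty,
# mirroring the idiom added in this commit.
[[ ${DEFAULT_HOSTNAME} ]] || DEFAULT_HOSTNAME="pisa"

if [[ $(hostname) = "${DEFAULT_HOSTNAME}" ]]; then
    echo "local run: the container is launched directly, without sbatch"
else
    echo "cluster run: the container command is submitted via sbatch --wrap"
fi
```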
singularity/workflows_shell.sh (+3 −2)

```diff
@@ -37,7 +37,8 @@ SCRIPT=$(readlink -f "${0}")
 # Absolute path this script is in
 BASEDIR=$(dirname "${SCRIPT}")
 container="${BASEDIR}/run_pipeline"
+# If we are on this machine, then the pipeline will be run without sbatch
+[[ ${DEFAULT_HOSTNAME} ]] || DEFAULT_HOSTNAME="pisa"
 # Do we have singularity?
 singularity --version 2> /dev/null && have_singularity=1
@@ -77,6 +78,6 @@ case ${1} in
         # -B /pasteur will mount /pasteur in the container
         # so that it finds the Genome configuration and gene lists
         # that are expected to be in a specific location there.
-        [[ $(hostname) = "pisa" ]] && SINGULARITYENV_USER=${USER} singularity shell -B /pasteur -B /local -B /run/shm:/run/shm ${container} $@ || APPTAINERENV_USER=${USER} apptainer shell -B /opt/hpc/slurm -B /var/run/munge -B /pasteur -B /local ${container} $@
+        [[ $(hostname) = ${DEFAULT_HOSTNAME} ]] && SINGULARITYENV_USER=${USER} singularity shell -B /pasteur -B /local -B /run/shm:/run/shm ${container} $@ || APPTAINERENV_USER=${USER} apptainer shell -B /opt/hpc/slurm -B /var/run/munge -B /pasteur -B /local ${container} $@
         ;;
 esac
```
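For context on the bind mounts in the `apptainer shell` branch: the script's own comments explain `/pasteur` (genome configuration and gene lists at fixed locations), while the roles of the other mounts below are inferred from common slurm setups, not stated in the commit:

```bash
# Sketch of the apptainer branch, with the assumed role of each bind mount:
# -B /opt/hpc/slurm  : slurm client tools, so jobs can be submitted from
#                      inside the container (assumed site-specific path)
# -B /var/run/munge  : munge socket, which slurm commands need in order
#                      to authenticate against the cluster
# -B /pasteur        : genome configuration and gene lists (per the script)
# -B /local          : node-local storage (assumption)
APPTAINERENV_USER=${USER} apptainer shell \
    -B /opt/hpc/slurm -B /var/run/munge -B /pasteur -B /local \
    "${container}"
```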