Batch scripts
=============

As seen earlier (:ref:`job_script`), the typical way of creating a job is to write a job submission script: a shell script (e.g. a Bash script) whose **first comments**, if they are prefixed with ``#SBATCH``, are interpreted by :term:`Slurm` as parameters describing resource requests and submission options [#sbatch]_.

* See the ``sbatch`` manual page [#sbatch]_ for all available directives,
* See our `repository of examples scripts `_.

For instance, the following script requests one task with one :term:`CPU` for 10 minutes (the default allowed time), along with 2 GiB of :term:`RAM` memory, in the default partition (E5):

.. code-block:: bash

   #!/bin/bash
   #
   #SBATCH --job-name=test
   #SBATCH --ntasks=1
   #SBATCH --cpus-per-task=1
   #SBATCH --mem-per-cpu=2G

   hostname -s
   sleep 60s

When started, this job will run the ``hostname`` command on the node where the requested :term:`CPU` was allocated, and will then run the ``sleep`` command.

You can create this job submission script on any login node (see :doc:`login_nodes`) using a text editor such as ``nano`` or ``vim`` (see :doc:`../environment_and_tools/editors`), save it as ``submit.sh``, and submit it with ``sbatch submit.sh`` (a short submission example is given at the end of this page).

.. TIP::

   **You can submit jobs from any login node to any partition. Login nodes are only segregated by build environment (CPU µarch) and scratch access.**

.. caution::

   ``#SBATCH`` **directives must be at the top of the script**, with no empty line(s) in between.

   :term:`Slurm` will ignore all ``#SBATCH`` directives after the first *non-comment line* (that is, the first line in the script that doesn't start with a ``#`` character). Always put your ``#SBATCH`` parameters at the top of your batch script. For example, the following script (note the empty line after the ``--job-name`` directive):

   .. code-block:: bash

      #!/bin/bash
      #
      #SBATCH --job-name=big_job

      #SBATCH --mem=16G
      #SBATCH --time=0-00:15:00
      #SBATCH --partition=E5

   will cause *--mem* and all following ``#SBATCH`` directives to be ignored and the job to be submitted with the default parameters.

   **Spaces in parameters will cause** ``#SBATCH`` **directives to be ignored.**

   Slurm will ignore all ``#SBATCH`` directives after the first white space. For instance, directives like these:

   .. code-block:: bash

      #SBATCH --job-name=big job

   .. code-block:: bash

      #SBATCH --mem=16 G

   .. code-block:: bash

      #SBATCH --partition = Cascade

   will cause all following ``#SBATCH`` directives to be ignored and the job to be submitted with the default parameters.

We provide a `repository of examples scripts `_, covering various use cases, that may help you write your own.

.. _preemption:

Preemption mode
---------------

Normal-priority jobs running on partitions with **preemption mode** (see :doc:`partitions_overview`) will be **requeued** when a high-priority job takes over:

- if the job script (or the software it runs) handles **restart**, the job will resume from the last saved restart point,
- if the job script (or software) does not handle restart, the job will start over from the beginning.

Most commonly used HPC software does handle restart points (VASP, GROMACS, ASPECT, RAMSES, Nextflow, etc.). A minimal restart-aware script is sketched at the end of this page.

.. [#sbatch] You can get the complete list of parameters by referring to the ``sbatch`` manual page (``man sbatch``).
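
For reference, here is a minimal sketch of how the ``submit.sh`` script above can be submitted and monitored. It only uses standard Slurm commands; the actual job ID and output file name will differ on your runs.

.. code-block:: bash

   # Submit the batch script; Slurm replies with "Submitted batch job <jobid>".
   sbatch submit.sh

   # List your pending and running jobs.
   squeue -u $USER

   # By default, standard output and error are written to slurm-<jobid>.out
   # in the directory from which sbatch was run.
   cat slurm-<jobid>.out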
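
To illustrate the preemption behaviour described above, here is a minimal sketch of a restart-aware batch script. It assumes a hypothetical ``my_solver`` program with ``--restart`` and ``--checkpoint-file`` options; the actual checkpoint/restart mechanism depends entirely on the software you run, so refer to its documentation.

.. code-block:: bash

   #!/bin/bash
   #
   #SBATCH --job-name=restartable
   #SBATCH --ntasks=1
   #SBATCH --cpus-per-task=1
   #SBATCH --mem-per-cpu=2G

   # Hypothetical checkpoint file written by the (hypothetical) solver.
   CHECKPOINT="state.chk"

   if [ -f "$CHECKPOINT" ]; then
       # The job was requeued after preemption: resume from the last checkpoint.
       my_solver --restart "$CHECKPOINT"
   else
       # First run: start from scratch and write checkpoints regularly.
       my_solver --checkpoint-file "$CHECKPOINT"
   fi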