Login nodes

Important

Login nodes are not for computing

Login nodes are shared among many users and therefore must not be used to run computationally intensive tasks. Such tasks should be submitted to the scheduler, which will dispatch them to compute nodes.

Note

Dos and Don’ts

  • Avoid running calculations with heavy I/O (large IOPS) on the home disk,

  • Always use the queueing system,

  • Login nodes are for editing files, transferring files, and submitting jobs,

  • Login nodes should be used to build specific binaries,

  • Do not run calculations interactively on the login nodes,

  • Offending processes will be killed without warning!

The key principle of a shared computing environment is that resources are shared among users and must be scheduled. On PSMN’s clusters, it is mandatory to schedule work by submitting jobs to the scheduler. Since login nodes are themselves shared resources, they must not be used to execute computing tasks.
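As a sketch, a minimal batch script might look as follows (this assumes the scheduler is Slurm; the job name, partition name, and executable are illustrative placeholders to adapt to your case):

```shell
#!/bin/bash
# Minimal job script sketch (assuming Slurm; all names are placeholders).
#SBATCH --job-name=my_job
#SBATCH --partition=Lake       # one of the partitions listed below
#SBATCH --ntasks=1
#SBATCH --time=01:00:00        # walltime limit

# The computation itself runs on a compute node, never on the login node.
srun ./my_program
```

Submitted with `sbatch my_job.sh` from a login node, the script is queued and dispatched to a compute node by the scheduler.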

Acceptable uses of login nodes include:

  • file transfers and file manipulations (compression, decompression, etc),

  • script and configuration file editing,

  • binary building, compilation,

  • short binary tests with small input/output (~10 min CPU time),

  • lightweight workflow tests on small datasets (~10 min CPU time),

  • job submission and monitoring.
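A typical session from a login node, staying within the acceptable uses above, might look like this (assuming Slurm and standard OpenSSH tools; the hostname, username, and file names are placeholders):

```shell
# File transfer: copy input data to the cluster (hostname is a placeholder)
scp input.tar.gz mylogin@login-node:~/

# File manipulation on the login node: decompress the input
tar xzf input.tar.gz

# Job submission: hand the actual computation to the scheduler
sbatch my_job.sh

# Job monitoring: check the state of your queued and running jobs
squeue -u $USER
```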

Tip

You can submit jobs from any login node to any partition. Login nodes are only segregated for build (CPU microarchitecture) and scratch access.

Here is the list of login nodes:

| Partition | Login/build nodes                                                 | IB scratch       |
|-----------|-------------------------------------------------------------------|------------------|
| None      | x5570comp[1-2]                                                    | None             |
| E5        | e5-2667v4comp[1-2], c8220node1                                    | /scratch/E5N     |
| E5-GPU    | r730gpu01                                                         | /scratch/Lake    |
| Lake      | m6142comp[1-2], cl5218comp[1-2], cl6242comp[1-2], cl6226comp[1-2] | /scratch/Lake    |
| Epyc      | None                                                              | /scratch/Lake    |
| Cascade   | s92node01                                                         | /scratch/Cascade |

For example, to access /scratch/E5N/, log in to either e5-2667v4comp1, e5-2667v4comp2, or c8220node1.
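Such a session might look like this (the username is a placeholder, and the short hostname assumes PSMN’s SSH naming; use the fully qualified hostnames provided by PSMN if needed):

```shell
# Log in to one of the E5 login nodes (username is a placeholder)
ssh mylogin@e5-2667v4comp1

# The E5 scratch is then directly accessible
ls /scratch/E5N/
```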

Login nodes without partition

| Login nodes    | CPU model        | Cores | RAM    | Ratio      | InfiniBand | GPU | Local scratch |
|----------------|------------------|-------|--------|------------|------------|-----|---------------|
| x5570comp[1-2] | X5570 @ 2.93 GHz | 8     | 24 GiB | 3 GiB/core | N/A        | N/A | N/A           |

These login nodes are the oldest. They are not connected to any partition (or cluster). They are usable for anything BUT builds: job monitoring, file operations (editing, compression/decompression, copy/move, etc.), short tests, and so on. See also Clusters/Partitions overview.

Binaries built on these nodes, with system defaults (no loaded modules), should run on all partitions. Minimal GCC tuning: -mtune=generic -O2 -msse4a.


Visualization nodes

See Using X2Go for data visualization for connection manual.

| Host          | CPU family      | RAM     | Network | Main scratch  | GPU               |
|---------------|-----------------|---------|---------|---------------|-------------------|
| r740visu      | E5 (8 cores)    | 192 GiB | 56 Gb/s | /scratch/E5N  | Quadro P4000 8 GB |
| r740gpu0[6-9] | Lake (16 cores) | 192 GiB | 56 Gb/s | /scratch/Lake | T1000 8 GB        |

  • Nodes r740gpu0[6-9] have a local scratch on /scratch/local/ (files have a 120-day lifetime),

  • Some visualization servers are not listed here due to restricted access. Contact PSMN’s staff or your PSMN group correspondent for more information.