Multi-Core Machines in URCS

Introduction

The department has a number of powerful servers with N cores (N >= 4). Not all of them share the NFS file system with the desktops, so you may need to transfer files to and from them. In exchange, some of these machines allow privileged experiments, such as replacing the Linux kernel or changing the physical memory size. To prevent conflicts, check which tasks are already running, and who owns them, before launching your own.

For ordinary user-space experimentation on staff-administered machines, just run 'who' or 'uptime' to see whether anybody else is around. If you discover that someone else is often on the machine, send a note (or better, drop by his or her office) to coordinate, so you don't disturb each other's performance experiments. If this sort of informal coordination doesn't seem to be working, talk to your advisor about setting up something more formal (this has been done a few times in the past, e.g. around crunch time for a looming conference deadline).
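
As a minimal sketch of such a check (standard Linux commands; how much concurrent activity is acceptable is an informal judgment call), you might run:

    who                                              # list users currently logged in
    uptime                                           # load averages hint at jobs already running
    ps -eo user,pcpu,pmem,comm --sort=-pcpu | head   # top CPU consumers and their owners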

If you need a machine for a big or long-running user-space job, you may be best off reserving a node from the Torque cluster. The specifications of these nodes vary somewhat; you can state constraints such as number of CPUs and amount of memory when requesting a machine, e.g. if you need one of the more powerful nodes or a lot of memory. All of the machines in the Torque cluster are on the NFS file system.
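
As a hedged example of such a request (the queue names, defaults, and available resource limits depend on the local Torque configuration, and the numbers below are only illustrative), a reservation might look like:

    # interactive session on one node with at least 8 cores and 16 GB of memory for 24 hours
    qsub -I -l nodes=1:ppn=8,mem=16gb,walltime=24:00:00

    # or submit a batch script with the same constraints
    qsub -l nodes=1:ppn=8,mem=16gb,walltime=24:00:00 myjob.sh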

Naming Scheme

The machines follow a naming scheme that reveals their computational capacity:


node [NUMBER OF DIES] x [NUMBER OF CORES PER DIE] x [NUMBER OF HYPER-THREADS PER CORE] [a-z]

The hyper-threads component was added to the convention mainly when our Nehalem processors arrived, so it does not appear in every name. For example, node1x4x2d has 1 socket, 4 cores per socket, and 2 hyper-threads per core, while node2x10a has 2 sockets and 10 cores per socket, with the 2 hyper-threads per core left out of the name.
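
If you want to verify a node's actual topology rather than trusting its name, standard Linux tools report it; a quick check might be:

    lscpu | egrep 'Socket|Core|Thread|Model name'   # sockets, cores per socket, threads per core
    nproc                                           # total hardware threads visible to Linux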

Active Machines

Node Name | CPUs | Details | Memory | NFS | CS Login | OS | Primary Use | Primary PI
node1x4b | 1 | Intel Xeon E3-1220 (Ivy Bridge), 3.1 GHz | 4 GB | N | Y | Fedora 13, custom kernel (3.14.29-zchen+) | |
node1x4c | 1 | Intel i7-3820 (Ivy Bridge), 3.6 GHz | 32 GB | N | Y | Fedora 19 | GPU computing - Jiebo Luo Lab | jluo
node1x4x2b | 1 | Intel Xeon E5520 (Nehalem), 2.26 GHz, 4 cores x 2 threads | 4 GB | N | N | Ubuntu 12.04.4 LTS | no staff access to machine |
node1x4x2d | 1 | Intel i7-4770 (Haswell), 3.4 GHz | 8 GB | Y | Y | Fedora 22 | Transactional memory research |
node1x4x2e | 1 | Intel i7-4770 (Haswell), 3.4 GHz | 8 GB | N | Y | Fedora 17, custom kernel (3.10.0-rc3+) | |
node1x4x2f | 1 | Intel Xeon E3-1280 (Haswell), 3.6 GHz, 4 cores x 2 hyper-threads per core (SMT) | 16 GB | N | Y | Fedora 19 | Operating system research |
node1x6x2a | 1 | Intel i7-5930K (Haswell-E), 3.5 GHz | 48 GB | N | Y | Fedora 20 | GPU computing - Jiebo Luo Lab | jluo
node1x8x1a | 1 | AMD Opteron 4284 Valencia (Bulldozer, c. 2011), 3.0 GHz | 8 GB | N | Y | Fedora 22 | |
node2x4x2a | 2 | Intel Xeon E5520 (Nehalem), 2.27 GHz, 4 cores x 2 threads | 8 GB | N | Y | Fedora 15 | Compiler research |
node2x6x2b | 2 | Intel Xeon E5-2620 | 8 GB | N | Y | Fedora 20, custom kernel | |
node2x6x2i | 2 | Intel Xeon L5640 (Westmere-EP), 2.26 GHz, 2 sockets, 6 cores per socket, 2 hyper-threads per core (SMT) | 12 GB | N | Y | Fedora 22?, unstable kernel | Operating system research |
node2x10a | 2 | Intel Xeon E5-2660 (Ivy Bridge-EP), 2.2 GHz, 2 sockets, 10 cores per socket x 2 hyper-threads per core (SMT) | 16 GB | N | Y | Fedora 19, custom kernel (3.14.8) | Operating system research |
node2x12x1a | 2 | AMD Opteron 6172, 2.1 GHz, 12 cores x 1 thread | 16 GB | N | Y | Fedora 19, unstable kernel (3.11.10-200.fc19.x86_64) | Operating system research |
node2x18a | 2 | Intel Xeon E5-2699 v3 (Haswell), 2.30 GHz, 18 cores x 2 threads | 198 GB | Y | Y | Fedora 20 | Transactional memory research (more information) |
node-ibm-822 | 2 | IBM S822L (POWER8), 4.1 GHz, 10 cores x 8 threads | 64 GB | Y | Y | Ubuntu 15.04 | Compiler research |

-- James Roche - 2016-06-13
