Multicore Machines in URCS


Introduction


This page is deprecated and hasn't been updated in a while. Please see the similarly-named MultiCoreMachines, which is being kept up to date.

-- Ethan Johnson - 2017-06-23

There are several powerful servers with N cores (N >= 4) in the department. Not all of them share the NFS file system with the desktops, so you may need to transfer files to and from them. However, on these machines you can run privileged experiments, such as replacing the Linux kernel or changing the physical memory size. To prevent conflicts, check what tasks are running, and who owns them, before launching your own.

For ordinary user-space experimentation on staff-administered machines, just run 'who' or 'uptime' to see if anybody else is around. If you discover that someone else is often on the machine, send a note (or better, drop by his or her office) to coordinate, so you don't mess up each other's performance experiments. If this sort of informal coordination doesn't seem to be working, talk to your advisor to have something more formal set up (this has been done a few times in the past - e.g. around crunch time for a looming conference deadline).
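For a quick check, the standard commands below (run on the machine in question; 'top' is just a common extra beyond the 'who'/'uptime' mentioned above) show who is logged in and how loaded the machine is:

    # Who is logged in, and from where?
    who

    # Load averages over the last 1, 5, and 15 minutes; values near or
    # above the core count suggest an experiment is already running
    uptime

    # One-shot snapshot of what is consuming CPU right now
    top -b -n 1 | head -n 20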

If you need a machine for a big/long-running user-space job, you may be best off reserving a node from the torque cluster. The specifications for these vary somewhat; you can specify constraints such as # of CPUs, memory, etc. when requesting a machine, e.g. if you need one of the more powerful machines or a lot of memory. All of the machines in the torque cluster are in the NFS file system.
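As a rough sketch of a torque request - the resource syntax (nodes, ppn, mem, walltime) is standard torque/PBS, but the specific values and the job script name below are illustrative assumptions, so check the cluster's own documentation:

    # Interactive session: 1 node, 8 cores, 16GB of memory
    qsub -I -l nodes=1:ppn=8,mem=16gb

    # Batch job: 1 node, 4 cores, 8GB, 12-hour wall-clock limit
    # (myjob.sh stands in for your own job script)
    qsub -l nodes=1:ppn=4,mem=8gb,walltime=12:00:00 myjob.sh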

Naming scheme

Most of the machines follow a naming scheme that encodes their computational capacity:

node [NUMBER OF DIES] x [NUMBER OF CORES PER DIE] x [NUMBER OF HYPER-THREADS PER CORE] [a-z]

The hyper-thread count was added to the scheme when our hyper-threaded Nehalem machines arrived, so it does not appear in every name.
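For example, node2x6x2i has 2 dies x 6 cores per die x 2 hyper-threads per core (24 logical CPUs in total), while node2x10a (2 dies x 10 cores per die) leaves the hyper-thread count out of its name.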

Active Machines

| Node name | CPU(s) | Details | Memory | NFS | CS login* | OS | Primary Use |
| node1x4b | 1 Intel Xeon E3-1220 | Ivy Bridge, 3.1GHz | 4GB | N | Y | Fedora 13, custom kernel (3.14.29-zchen+) | |
| node1x4c | 1 Intel i7-3820 | Sandy Bridge-E, 3.6GHz | 32GB | N | Y | Fedora 19 | GPU Computing - Jiebo Luo lab |
| node1x4x2b | 1 Intel Xeon E5520 | Nehalem, 2.27GHz, 4-Cores x 2-"Threads" | 4GB | ? | N | Ubuntu 12.04.4 LTS | no staff access to machine |
| node1x4x2d | 1 Intel i7-4770 | Haswell, 3.4GHz | 8GB | Y | Y | Fedora 22 | Transactional memory research |
| node1x4x2e | 1 Intel i7-4770 | Haswell, 3.4GHz | 8GB | N | Y | Fedora 17, custom kernel (3.10.0-rc3+) | |
| node1x4x2f | 1 Intel Xeon E3-1280 | Haswell, 3.6GHz, 4-cores x 2-hyperthreads per core (SMT) | 16GB | N | Y | Fedora 19 | Operating system research |
| node1x6x2a | 1 Intel i7-5930K | Haswell-E, 3.5GHz | 48GB | N | Y | Fedora 20 | GPU Computing - Jiebo Luo lab |
| node1x8x1a | 1 AMD Opteron 4284 | Valencia (Bulldozer, c. 2011), 3.0GHz | 8GB | N | Y | Fedora 15 | |
| node2x4x2a | 2 Intel Xeon E5520 | Nehalem, 2.27GHz, 4-Cores x 2-"Threads" | 8GB | N | Y | Fedora 15 | Compiler research |
| node2x6x2b | | | 8GB | ? | N | Fedora 15; custom kernel | |
| node2x6x2i | 2 Intel Xeon L5640 | Westmere-EP, 2.26GHz, 2-sockets, 6-cores per socket, 2-hyperthreads per core (SMT) | 12GB | N | Y | Fedora 22?, unstable kernel | Operating system research |
| node2x10a | 2 Intel Xeon E5-2660 | Ivy Bridge-EP, 2.2GHz, 2-sockets, 10-cores per socket, 2-hyperthreads per core (SMT) | 16GB | N | Y | Fedora 19, custom kernel (3.14.8) | Operating system research |
| node2x12x1a | 2 AMD Opteron 6172 | Magny-Cours, 2.1GHz, 12-Cores x 1-"Thread" | 16GB | N | Y | Fedora 19, unstable kernel (3.11.10-200.fc19.x86_64) | Operating system research |
| node2x18a | 2 Intel Xeon E5-2699 v3 | Haswell, 2.30GHz, 18-Cores x 2-"Threads" | 198GB | Y | Y | Fedora 20 | Transactional memory research (more information) |
| node73-node78 | 2 Intel Xeon E5520 | Nehalem, 2.27GHz, 4-Cores x 1-"Thread" | 16GB | Y | Y | Fedora (version?) | 6 new cluster nodes; jobs should be submitted through the qsub system |
| node-ibm-822 | 2 IBM POWER8 (S822L) | 4.1GHz, 10-Cores x 8-"Threads" | 64GB | Y | Y | Ubuntu 15.04 | Compiler research |
* Use your CS department credentials (same as on cycle[1-3], etc.) to log in.
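To log in, ssh with those credentials; for example (the fully-qualified hostname form below is an assumption - adjust to however the department resolves node names):

    # YOUR_CS_USERNAME is a placeholder for your own CS account name
    ssh YOUR_CS_USERNAME@node2x10a.cs.rochester.edu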

Inactive/Former Machines

| Node name | CPU | Details | Memory | Primary Use |
| node1x4a | 1 Intel Xeon X3230 | Kentsfield, 2.66GHz, 4-Cores x 1-"Thread" | 4GB | Compiler Research, no NFS (permanently retired) |
| node1x2x2a | | | | (down) |
| node1x4x2a | 1 Intel Xeon E5520 | Nehalem, 2.27GHz, 4-Cores x 2-"Threads" | 3GB | GPU Computing - was Konstantinos Menychtas (down) |
| node1x4x2c | 1 Intel Xeon X3470 | Lynnfield, 2.93GHz, 4-Cores x 2-"Threads" | 4GB | Moved to undergrad network (down) |
| node1x6x2b | | | | (down) |
| node2x2a | 2 Intel Xeon 5160 | Woodcrest, 3.00GHz, 2-Cores x 1-"Thread" | 4GB | (permanently retired) |
| node2x2b | 2 | Status unknown | | |
| node2x6x2a | 2 Intel Xeon E5649 | Westmere, 2.53GHz, 6-Cores x 2-"Threads" | 12GB | Compiler research, no NFS (down; needs Linux reinstall) |
| node4x2a | 4 Intel Xeon | 3.4GHz | 4GB | (permanently retired) |