GPU Computing in URCS
Introduction
GPU Computing (GPUc) refers to the use of Graphics Processing Units (GPUs) for general-purpose computations, i.e. computations not necessarily tied to some fixed graphics-programming API. Alternatively, and perhaps more often, GPU Computing is referred to as GPGPU. Its main targets are heavy, mostly embarrassingly data-parallel applications, which can efficiently exploit the massively parallel architecture of GPUs. An impressive number of applications falls in this category, and an impressive number of those intersects with the research interests of URCS.
node1x4x2a (node1x4x2a.cs.rochester.edu) is our specialized, GPU-Computing-capable server, accessible through the grad network (NFS mount), and the target of (at least) systems research on GPU Computing. This page is a temporary description of the hardware and software installed, and of the steps you need to take before you can use the system for your GPUc applications. Although the system is administered by JamesRoche, it is the focus of systems research and may be frequently rebooted or heavily in use, so you should consult Jim and/or KonstantinosMenychtas before you try to use it for your own research purposes.
Hardware
node1x4x2a is a Dell T7500n Workstation, with the following characteristics:
In more detail, the two graphics cards compare as follows:
| Characteristic / Graphics Card | NVIDIA Quadro NVS 295 | NVIDIA GeForce GTX 275 |
| CUDA Driver Version | 2.30 | 2.30 |
| CUDA Runtime Version | 2.30 | 2.30 |
| CUDA Capability Major revision number | 1 | 1 |
| CUDA Capability Minor revision number | 1 | 3 |
| Total amount of global memory | 267714560 bytes | 939261952 bytes |
| Number of multiprocessors | 1 | 30 |
| Number of cores | 8 | 240 |
| Total amount of constant memory | 65536 bytes | 65536 bytes |
| Total amount of shared memory per block | 16384 bytes | 16384 bytes |
| Total number of registers available per block | 8192 | 16384 |
| Warp size | 32 | 32 |
| Maximum number of threads per block | 512 | 512 |
| Maximum sizes of each dimension of a block | 512 x 512 x 64 | 512 x 512 x 64 |
| Maximum sizes of each dimension of a grid | 65535 x 65535 x 1 | 65535 x 65535 x 1 |
| Maximum memory pitch | 262144 bytes | 262144 bytes |
| Texture alignment | 256 bytes | 256 bytes |
| Clock rate | 1.30 GHz | 1.46 GHz |
| Concurrent copy and execution | No | Yes |
| Run time limit on kernels | Yes | No |
| Integrated | No | No |
| Support host page-locked memory mapping | No | Yes |
| Compute mode | Default* | Default* |
* Default compute mode: multiple host threads can use the device simultaneously.
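The characteristics above match the fields reported by the SDK's deviceQuery sample; once the toolkit and SDK are installed (see Additional Software below), you can regenerate them yourself with something like the following (the exact SDK path depends on where you install it):
$HOME/Documents/workspace/NVIDIA_GPU_Computing_SDK/C/bin/linux/release/deviceQuery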
Software
In order to use the graphics devices for GPU Computing, you need to be familiar with NVIDIA's Compute Unified Device Architecture (CUDA). Notice that with NVIDIA CUDA 2.3 you should be able to write code for the GPUs using both CUDA and OpenCL. To make use of either API, you should read the respective documentation (a short list is given under Further resources).
node1x4x2a has the following software installed:
Notice that Fedora 11 is not officially supported by NVIDIA CUDA 2.3. However, this should not cause any trouble when using the devices for GPU Computing.
At any point in time on node1x4x2a:
- The only software that is guaranteed to be installed is the NVIDIA GPU driver (2.3 or later).
- Similarly, the only configuration that is guaranteed to have taken place is the configuration needed for the graphics cards to be usable for GPU computing.
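If you want to verify that the driver is indeed installed and loaded before you start, a quick check is the following (the exact version string reported will vary over time):
# confirm the NVIDIA kernel module is loaded and print the installed driver version
lsmod | grep nvidia
cat /proc/driver/nvidia/version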
Additional Software
In order to make actual use of the GPUs for GPU Computing, you will have to install additional software. You can do this in your home directory (NFS mount).
Install the NVIDIA CUDA Toolkit.
- Grab the NVIDIA CUDA Toolkit version 2.3 for Fedora 10 x86_64 from here. This package is absolutely necessary, as it includes the compiler (nvcc) and the run-time system needed to run your CUDA programs.
- Install the NVIDIA Toolkit under some directory in your home directory (you will be prompted during installation). For the following, the toolkit is assumed to be installed under $HOME/Applications/gpu_computing/cuda_x86_64. This is the CUDA_INSTALL_PATH.
- Add the following lines to your .bashrc file, or do the equivalent if you are using csh or any other shell (in the following we assume you use bash and that .bashrc is properly loaded). These make nvcc, the libraries, and the header files ready to use.
############################ CUDA START ############################
# Get the machine name, stripping the .cs.rochester.edu domain
host=`uname -n | sed 's/\..*//'`
if [[ $host = "node1x4x2a" ]]
then
export CUDA_INSTALL_PATH=$HOME/Applications/gpu_computing/cuda_x86_64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CUDA_INSTALL_PATH/lib64
export PATH=$PATH:$CUDA_INSTALL_PATH/bin
export MANPATH=$MANPATH:$CUDA_INSTALL_PATH/man
export C_INCLUDE_PATH=$C_INCLUDE_PATH:$CUDA_INSTALL_PATH/include
export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:$CUDA_INSTALL_PATH/include
fi
############################ CUDA STOP #############################
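After re-sourcing your .bashrc (or logging in to node1x4x2a again), a quick sanity check, assuming the paths above, is:
# nvcc should resolve under $CUDA_INSTALL_PATH/bin and report release 2.3
which nvcc
nvcc --version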
Because of the lack of support for Fedora 11, which is really a lack of support for gcc 4.4 (the default gcc in F11), we will be using the "compatibility" version of the gcc compiler, 3.4, which is already installed on node1x4x2a. Any .c/.cpp files compiled with gcc/g++ 4.4 and linked against .cu files compiled with nvcc will not work. To make this compiler change apply only to nvcc, do the following:
- Create a new directory - we'll call it gcc_compat in this example - inside the CUDA Toolkit directory (the aforementioned CUDA_INSTALL_PATH). For example: mkdir $CUDA_INSTALL_PATH/gcc_compat
- Create symbolic links to gcc34 and g++34 in this directory. A valid set of links looks as follows:
g++ -> /usr/bin/x86_64-redhat-linux-g++34
g++34 -> /usr/bin/x86_64-redhat-linux-g++34
gcc -> /usr/bin/x86_64-redhat-linux-gcc34
gcc34 -> /usr/bin/x86_64-redhat-linux-gcc34
Do not make the symbolic links point to /usr/bin/gcc34 or /usr/bin/g++34: because of NFS and ccache, those links will not be resolved properly and you will still be using gcc/g++ 4.4. An example command sequence is given below.
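For example, assuming the paths above, the links can be created and then exercised with a throwaway compile (the test.cu file below is just a placeholder used to confirm that nvcc picks up the gcc 3.4 toolchain):
mkdir -p $CUDA_INSTALL_PATH/gcc_compat
cd $CUDA_INSTALL_PATH/gcc_compat
ln -s /usr/bin/x86_64-redhat-linux-gcc34 gcc
ln -s /usr/bin/x86_64-redhat-linux-gcc34 gcc34
ln -s /usr/bin/x86_64-redhat-linux-g++34 g++
ln -s /usr/bin/x86_64-redhat-linux-g++34 g++34
# throwaway kernel file, only used to verify that the toolchain works
echo '__global__ void noop(void) { } int main(void) { return 0; }' > /tmp/test.cu
nvcc --compiler-bindir $CUDA_INSTALL_PATH/gcc_compat -o /tmp/test /tmp/test.cu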
Install the NVIDIA CUDA SDK.
This step is optional, unless you want to read or use the SDK's examples and libraries in your code. To make use of the SDK you need to have completed the previous steps. Then:
- Grab the NVIDIA SDK Version 2.3 for Fedora 10 from here.
- Install it somewhere in your home directory. For the following, the SDK is assumed to be installed under $HOME/Documents/workspace.
- Install the following packages
- freeglut-2.4.0-16.fc11.x86_64.rpm
- freeglut-devel-2.4.0-16.fc11.x86_64.rpm
- mesa-libGL-7.6-0.1.fc11.x86_64.rpm
- mesa-libGL-devel-7.6-0.1.fc11.x86_64.rpm
- mesa-libGLU-7.6-0.1.fc11.x86_64.rpm
- mesa-libGLU-devel-7.6-0.1.fc11.x86_64.rpm
To do this, you can just download the aforementioned rpms to your $CUDA_INSTALL_PATH and then manually extract them under the same directory. One way to do this, while inside $CUDA_INSTALL_PATH where the *.rpm files also lie, is:
for file in *.rpm
do
rpm2cpio $file | cpio -idv
done
This puts the libraries and include files under the directories you have already added to your .bashrc, hence they are usable without any further configuration.
- Go back to $HOME/Documents/workspace. Make the following changes to the file C/common/common.mk in the SDK (below is a diff; the lines marked with < are the modified version, the lines marked with > are the stock SDK version):
41c41
< #CUDA_INSTALL_PATH ?= /usr/local/cuda
---
> CUDA_INSTALL_PATH ?= /usr/local/cuda
43,45c43,45
< #ifdef cuda-install
< # CUDA_INSTALL_PATH := $(cuda-install)
< #endif
---
> ifdef cuda-install
> CUDA_INSTALL_PATH := $(cuda-install)
> endif
73,75c73,75
< CXX := $(CUDA_INSTALL_PATH)/gcc_compat/g++
< CC := $(CUDA_INSTALL_PATH)/gcc_compat/gcc
< LINK := $(CUDA_INSTALL_PATH)/gcc_compat/g++ -fPIC
---
> CXX := g++
> CC := gcc
> LINK := g++ -fPIC
81c81
< CUBIN_ARCH_FLAG := -m64
---
> CUBIN_ARCH_FLAG :=
107c107
< NVCCFLAGS := --compiler-bindir $(CUDA_INSTALL_PATH)/gcc_compat
---
> NVCCFLAGS :=
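If you save the diff above to a file (say, common.mk.diff), note that the < lines are the desired (modified) contents and the > lines are the stock SDK contents, so the diff has to be applied in reverse, for example:
# run from the SDK's C/common directory, wherever you installed the SDK
patch -R common.mk < common.mk.diff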
- The only SDK project that cannot be built under this configuration is scanLargeArray, which you should temporarily move outside of the SDK projects directory (C/src/).
- After invoking make under the C directory in the SDK, you should be able to build all the remaining projects.
- If some projects, such as those with OpenGL visualizations, do not work, do not panic; that is expected because OpenGL is not (yet) supported through VNC/ssh -X. The same holds for any project that uses libraries/software that are not installed. However, a few that we have tested and that should work include the following (under C/bin/linux/release/; see the example invocation after the list):
- deviceQuery
- bandwidthTest
- transpose
- radixSort
- matrixMul
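For example, a typical build-and-test session might look like the following (a sketch; the SDK is assumed to live under $HOME/Documents/workspace, and the exact SDK directory name may differ):
cd $HOME/Documents/workspace/NVIDIA_GPU_Computing_SDK/C
make
./bin/linux/release/deviceQuery
./bin/linux/release/bandwidthTest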
Alternatively
This is only temporary: log in to node1x4x2a and take a look under /localdisk/NVIDIA. You will find the CUDA Toolkit (under gpu_computing), the SDK, and a .bashrc sample. Do not make any changes in this directory.
If you are just curious to see some GPU applications, take a look at the NVIDIA SDK under /localdisk/NVIDIA/NVIDIA_GPU_Computing_SDK/. Under C/src/ you will find the source code and under C/bin/linux/release/ the binaries of the sample projects. To run the binaries, fix your LD_LIBRARY_PATH first, with something like this:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/localdisk/NVIDIA/gpu_computing/cuda_x86_64/lib64/
Some projects do not work yet.
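For example, with the path above added to your LD_LIBRARY_PATH, the deviceQuery sample (one of the projects known to work) can be run directly:
/localdisk/NVIDIA/NVIDIA_GPU_Computing_SDK/C/bin/linux/release/deviceQuery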
If you want to make changes, make a copy of /localdisk/NVIDIA either locally (under /localdisk) or in your home directory, and adjust your shell init file, taking into account at least the CUDA-related paths of the sample.bashrc file under /localdisk/NVIDIA.
Further resources
For questions and help, contact KonstantinosMenychtas, provided that you have first tried the following resources and have not found the answer you needed.
If you are just starting with CUDA and want a couple of hands-on resources, you might want to try the UIUC ECE 498 (Programming Massively Parallel Processors) class notes and the Dr. Dobb's "Supercomputing for the Masses" tutorials (7 parts, starting here). Note, though, that there is no better manual for this rapidly changing software/hardware platform than the CUDA Programming Guide itself - all other resources can and probably will be slightly outdated.