

NSF AWARD: CNS-08555220


 

GPU Cluster Infrastructure

 

  1. GPU Cluster at CCR

    The GPU cluster at CCR (Center for Computational Research, Buffalo, NY) is available only through the dedicated gpu queue; contact CCR to request access. The details of the Fermi GPU cluster are as follows (a quick check of the GPU configuration is sketched after the list):

    • Number of nodes = 32
      • PowerEdge C6100 - dual hex-core Compute Nodes
      • Vendor = DELL
      • Number of Processor Cores = 12
      • Processor Description:
        • 12x2.66GHz Intel Xeon X5650 "Westmere" (Nehalem-EP) Processor Cores
        • Main memory size: 48GB
        • Instruction cache size: 128 Kbytes
        • Data cache size: 128 Kbytes
        • Secondary unified instruction/data cache size: 12 MBytes
      • Local Hard Drives: 2x500GB SATA (/scratch), 1x100GB SSD (/ss_scratch)
      • Local scratch is approximately 0.9TB total
      • Two Nvidia M2050 "Fermi" Tesla GPUs (3GB memory per card)
    • Number of nodes = 1
      • PowerEdge R910 - quad socket, oct-core Compute Node
      • Vendor = DELL
      • Number of Processor Cores = 32
      • Processor Description:
        • 32x2.0GHz Intel Xeon X7550 "Beckton" (Nehalem-EX) Processor Cores
        • Main memory size: 256GB
        • Instruction cache size: 128 Kbytes
        • Data cache size: 128 Kbytes
        • Secondary unified instruction/data cache size: 18 MBytes
      • Local Hard Drives: 2x500GB SATA (/scratch), 14x100GB SSD (/ss_scratch)
      • Local scratch is approximately 1.9TB total
    • QDR InfiniBand 40Gb/s
    • Operating System: Linux (RedHat Enterprise Linux 5.5, 2.6.18 Kernel)
    • InfiniBand Mellanox Technologies MT26428 Network Card
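
    The per-node GPU configuration listed above can be verified after logging in to a compute node. The commands below are a minimal sketch, assuming the NVIDIA driver utilities and the CUDA toolkit are on the node's default PATH:

      nvidia-smi -L    # should list the Tesla GPUs installed in the node
      nvcc -V          # confirms the CUDA compiler is available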

  2. GPU Cluster at ACL

    • GPU nodes

      We have GPU nodes in the ACL (Accelerated Computing Labs) that graduate students can use for research work. The nodes are named "acl-cadi-xeon-10", "acl-cadi-opteron-2", and "acl-cadi-opteron-1". "acl-primary.cse.buffalo.edu" is the head node (a non-GPU node) from which the GPUs can be accessed. Please contact Dr. Vipin Chaudhary regarding creating an account on acl-primary.
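
      Once an account exists, the head node is reached with a standard SSH login; the username below is a placeholder:

      ssh username@acl-primary.cse.buffalo.edu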

    • Setting up the environment for GPUs

      After your account has been created, log in to acl-primary and set the environment variables with export PATH=$PATH:/usr/local/cuda/bin and export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib. Check for the CUDA driver and compiler with the command nvcc -V, as shown below.


      [username@acl-primary ~]$ nvcc -V
      nvcc: NVIDIA (R) Cuda compiler driver
      Copyright (c) 2005-2007 NVIDIA Corporation
      Built on Thu_Jun_19_03:38:28_PDT_2008
      Cuda compilation tools, release 2.0, V0.2.1221
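
      To avoid retyping these exports in every session, they can be appended to your shell startup file. A minimal sketch, assuming a bash login shell on acl-primary:

      # Make the CUDA paths persistent for future logins (bash assumed).
      echo 'export PATH=$PATH:/usr/local/cuda/bin' >> ~/.bashrc
      echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib' >> ~/.bashrc
      source ~/.bashrc
      which nvcc    # should print /usr/local/cuda/bin/nvcc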

    • Accessing the nodes

      The GPUs can be accessed from the head node either interactively using the command "qsub -q cuda -I" or by submitting a PBS script using "qsub -q cuda cudaScript".
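
      A typical interactive session might look like the sketch below; the project directory, source file, and compute-node prompt are placeholders rather than part of the cluster setup:

      [username@acl-primary ~]$ qsub -q cuda -I            # request an interactive session on a GPU node
      [username@gpu-node ~]$ cd CUDAproject                # hypothetical project directory
      [username@gpu-node CUDAproject]$ nvcc -o hello hello.cu
      [username@gpu-node CUDAproject]$ ./hello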

      Example PBS script for a CUDA job:

      #!/bin/bash
      #PBS -S /bin/bash
      #PBS -l nodes=1:ppn=1
      #PBS -j oe
      #PBS -N hello
      #PBS -l walltime=00:15:00
      # Move to the directory the job was submitted from.
      cd $PBS_O_WORKDIR
      echo $PBS_O_WORKDIR
      # Count the processors allocated to this job.
      NN=`cat $PBS_NODEFILE | wc -l`
      # Initialize the environment-modules system for this shell.
      . $MODULESHOME/init/bash
      echo "Running on nodes:"
      cat $PBS_NODEFILE
      # Create a per-job temporary directory on the node's local disk.
      mkdir /tmp/$PBS_JOBID
      echo "NN = "$NN
      date
      # Run the CUDA executable (replace the path, name, and parameters with your own).
      cd /home/csgrad/CUDAproject
      ./executable <param1> <param2>
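
      If the script above is saved as cudaScript, it can be submitted and monitored from the head node with standard PBS commands; the job ID below is illustrative:

      [username@acl-primary ~]$ qsub -q cuda cudaScript
      12345.acl-primary                                    # PBS prints the job ID
      [username@acl-primary ~]$ qstat -u $USER             # check the job's status
      [username@acl-primary ~]$ cat hello.o12345           # merged stdout/stderr (from "#PBS -j oe" and "#PBS -N hello")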