This class wiki serves as both a blackboard for technical discussion and a billboard for information sharing. Its organization is kept as simple as possible while, hopefully, remaining effective for discussion and Q&A. In general, postings go in one of the four sections below. Announcements is used mainly to remind students of the class schedule and homework due dates. Questions about class administration, for instance about grading criteria, go under on Administration. The other two sections are for technical discussion: specific questions and issues about the assignments go under on Assignments, and general questions and comments about the course content go under on Technical Content.

Following good TWiki style, every posting should be tagged with the author's name and the posting time.

-- TongxinBai - 23 Jan 2008

Access Control

Class Website

http://www.cs.rochester.edu/u/scott/458/

Announcements

  • Assignment 1 due date: 2/1. Please use the turnin script on the graduate network to turn in your homework.
    Note: For technical reasons I haven't been granted access to the cs458 account yet. To turn in the first part of assignment 1, which is due on 1/28, please email it to me directly. Sorry about the inconvenience.
    -- TongxinBai 27 Jan 2008
    The turnin script is up now.
    -- TongxinBai 28 Jan 2008

  • For all guest lectures this semester (beginning with Chen Ding's talk on Feb. 20), you are required to write a half-page summary and email it to the TA within two days. -- MichaelScott - 18 Feb 2008

on Administration

on Assignments

I recommend taking a look at /usr/grads/share when you're setting up your account on sync. I've installed, and am technically the maintainer of, up-to-date development tools for sync and swym. You should set your paths to use the current versions of applications as installed. There's no need to reinstall them yourself.

As a side note, the assignment 1 page recommends lung as a single-processor SPARC to test your code on. Keep in mind that lung is Solaris 8 (probably in v8plus mode) and sync is Solaris 10 (I think it's in v9 mode), so you're likely to need different compiler flags on the different architectures if you're doing anything interesting. Lung also mounts the NFS file system so you need to make sure that your .bashrc and .cshrc don't try and do anything funny with a non-SPARC binary when you log in, or you'll wind up with errors.

The installed tools are probably different versions as well, come to think of it. Maybe it's not such a great idea to use the modern tools on sync if you're going to be developing on lung... anyone else's comments are welcome.

-- LukeDalessandro - 24 Jan 2008

When turning in your programming assignments, please always remember to include a README, a Makefile, and, if applicable, a PDF file with your performance evaluation.
-- TongxinBai 28 Jan 2008

In class last time, Michael said to include a "README.pdf" with our submission. Do you (you're grading, right, Tongxin?) want our README file to be a PDF, or does it matter?

-- PaulArdis - 30 Jan 2008

I guess Michael's intention was to use a PDF to show performance data in graphical form. In that sense a PDF README is preferred; otherwise you have to include a separate PDF file for your performance results.

-- TongxinBai 30 Jan 2008

Another useful trick that can help with building in different environments is OS = $(shell gcc -dumpmachine | sed -e 's/.*-//' -e 's/[0-9].*//'). If you stick this in your Makefile, you can figure out which operating system you are building on and set paths and options appropriately.
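
For example, you can branch on the result to pick platform-specific flags or paths. A rough sketch (the strings gcc reports and the flags shown are only illustrative; check what gcc -dumpmachine prints on each machine yourself):

OS = $(shell gcc -dumpmachine | sed -e 's/.*-//' -e 's/[0-9].*//')

ifeq ($(OS),solaris)
    CFLAGS += -DSOLARIS    # flags for sync/lung go here
else
    CFLAGS += -DLINUX      # flags for the Linux machines go here
endif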

-- LukeDalessandro - 31 Jan 2008

It has been noticed that last night someone used the machine for an enormous amount of time, forcing other people to wait overnight before they could start their tests. Since our resources are limited and shared, please be considerate and keep careful control of how much time you take whenever you launch any testing.

-- TongxinBai 1 Feb 2008

Minor question: When it says, "have each thread run a loop in which it repeatedly enqueues data to, or dequeues from (with 50-50 probability), a shared queue", does that mean that each thread should (for every iteration) flip a coin and decide whether to enqueue/dequeue this time, or should there be a 50% chance on spawning each thread that it is either an always-enqueue or always-dequeue thread? I wasn't sure how to read that "50-50". Thanks.

-- PaulArdis 5 Feb 2008

Based on the paragraph that follows the list of queue types it's the coin-flip style (otherwise there would be no reason to "preallocate" random numbers, and the "preallocation" of nodes would also be odd if a thread is always dequeueing).

-- LukeDalessandro - 06 Feb 2008

An issue that arises is how to deal with an empty queue. The easiest solution is to just fail the dequeue and go on to do whatever your next coin flip tells you to do. The problem with this approach is that, in the long run, you're going to be enqueueing slightly more often than you dequeue, which leads to an overall growth trend in the queue. You could precisely compute the probability of dequeueing from an empty queue for any particular operation and adjust your probabilities accordingly. You could have one "enqueue only" thread, but that's going to change the kind of performance that you see. Or you could just take the easy approach, which is what I'm doing.

As a side note, an alternative to preallocating random bits, which can take up a lot of space and cause strange cache behavior, is to use rand_r, a reentrant, thread-safe version of rand. Also, in this case it's possible for a thread to run out of local nodes; it makes sense to allocate another block of local nodes rather than waiting for one of your own dequeues to return a node you can reuse.
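
For instance, the per-iteration coin flip might look roughly like this; queue_t, enqueue, dequeue, and alloc_node are just placeholders for whatever your own implementation provides:

#include <stdlib.h>

/* Sketch only: queue_t, enqueue, dequeue, and alloc_node stand in for your own code. */
void worker(queue_t *q, int iterations, unsigned int id)
{
    unsigned int seed = id + 1;          /* private seed, so rand_r is thread safe */
    int i;
    for (i = 0; i < iterations; i++) {
        if (rand_r(&seed) % 2) {         /* coin flip */
            enqueue(q, alloc_node(i));   /* alloc_node: grab a node from your local pool */
        } else {
            dequeue(q);                  /* on an empty queue, just fail and move on */
        }
    }
}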

-- LukeDalessandro - 07 Feb 2008

A note on unions. The easiest way to implement a counted pointer is with a union. You wind up with something like (in C++):

union counted_ptr {
    unsigned long long ull_for_casx_or_mvx;
    struct {
        Node* ptr;
        unsigned long count;
    };
};

or, for a templated version:

template <typename T>
struct counted_ptr {
    union {
        unsigned long long ull_for_casx_or_mvx;
        struct {
            T* ptr;
            unsigned long count;
        };
    };
};

So if you have counted_ptr<Node> head; you can say if (head.ptr == NULL) and you can also say casX(&head.ull_for_casx_or_mvx, ...). This avoids explicit casting and makes code easier to figure out. If you're using C++ you can also give the union or struct members like T* operator->() or nice constructors to make it easier and safer to use.

As a side note, you should be able to template the union version directly, but I can't seem to get g++ to like it when I do. Hope this helps someone.

-- LukeDalessandro - 08 Feb 2008

Assignment 2

A question has been raised about the input format. For the convenience of testing and grading, your program is supposed to take its input from stdin, to which I can redirect my test input sets when I test your program. The input format is described on the assignment web page: the first line gives the number of cities, and the following lines give their coordinates.
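
For illustration, reading this format in C might look something like the sketch below (it assumes integer coordinates, two per line, and keeps error checking minimal):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int n, i, *x, *y;
    if (scanf("%d", &n) != 1)                    /* first line: number of cities */
        return 1;
    x = malloc(n * sizeof(int));
    y = malloc(n * sizeof(int));
    for (i = 0; i < n; i++)
        if (scanf("%d %d", &x[i], &y[i]) != 2)   /* one city per line: x y */
            return 1;
    /* ... solve the TSP instance on the n cities ... */
    return 0;
}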

-- TongxinBai - 14 Feb 2008

How to run Display.java on linux?

-- XiaomingGu - 15 Feb 2008

javac Display.java
java Display <args>

-- MichaelScott - 18 Feb 2008

I've saved a couple of randomly generated inputs to ~cproscia/tsp100.txt and tsp1000.txt on sync. The average length of a random solution to the 1000 vertex problem should be about 5,000,000. I've put my best solutions so far in .sol text files, which should be viewable with Display.java.

-- CoreyProscia - 21 Feb 2008

Isn't the average length dependent on the scale? Otherwise the absolute value is meaningless. This is just my understanding and has nothing to do with grading. :-)

-- TongxinBai

The average length of a random tour should vary with the number of vertices. The range of coordinates should be from 0 to 10,000. I was thinking that the average distance between two vertices would be exactly 5,000, but that is not correct, though it is close: the expected distance between two uniform random points in a square is roughly 0.52 times the side length, or about 5,200 here, which is consistent with a total of about 5,000,000 for a 1,000-edge random tour.

-- CoreyProscia - 22 Feb 2008

MPI and OpenMP

In your last programming assignment you'll be working with MPI and OpenMP. Beyond the difference between message passing and shared memory, MPI is a library interface, whereas OpenMP consists of a set of directives that need special compiler support. Although both are language independent, we'll mostly be targeting C. Your working environment for this assignment will be both the department's Linux cluster and the Sun machine (sync). The following guide can serve as a starting point.

MPI

1. How to program?
Take a look at the MPI tutorial from Lawrence Livermore National Lab.
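
To get a feel for the overall structure, a minimal MPI program in C looks roughly like this (the file name is just a placeholder):

/* hello_mpi.c -- minimal MPI example */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);                   /* set up the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);     /* how many processes total? */
    printf("hello from process %d of %d\n", rank, size);
    MPI_Finalize();                           /* clean up before exiting */
    return 0;
}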

2. Before building and running, you might need to set proper values for some environment variables.
A quick solution is to save the following script to ~/.cshrc.mpi

foreach n (`seq -s " " 33 64`)
    if (`hostname` == node$n.cs.rochester.edu) then
        setenv MPI_HOME /u/cs458/mpich-1.2.7-install
    endif
end

if ( `hostname` == sync.cs.rochester.edu )  then
   setenv MPI_HOME /u/cs458/mpich-1.2.7-sync
endif

setenv PATH ${MPI_HOME}/bin:${PATH}
setenv LD_LIBRARY_PATH ${MPI_HOME}/lib:${LD_LIBRARY_PATH}

and then copy the following line to the end of your .cshrc

source ~/.cshrc.mpi 

3. How to build?
Basically, like gcc, mpicc handles both compilation and linking. A detailed Makefile can be found at /u/cs458/test_mpi. You may use that directory as a template and plug in your own programs.
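
For example, to compile and link a single source file by hand (the file names are just placeholders):

mpicc -O2 -o hello_mpi hello_mpi.c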

4. How to run?
mpirun is the script for launching your mpi tasks. The syntax is

mpirun -np <np> -machinefile <machine-file-name> <program name and arguments> 
where -np specifies the number of processors to use and the machine file lists the processors to be selected from.

NOTE: All jobs submitted to the cluster nodes (node33 through node64) are subject to centralized management. For detailed instructions, please refer to ClusterJobs. In short, you need to wrap your job in a shell script and then pass it as an argument to qsub.

Here is a script, run.sh, that can be used to submit your MPI jobs.

 
#!/bin/csh
source ~/.cshrc.mpi

cd $PBS_O_WORKDIR
#save the list of reserved nodes to use as the machinefile
cat $PBS_NODEFILE > ./cluster_nodes

#run mpi on these nodes; `wc -l` counts how many nodes you reserved
mpirun -np `cat ./cluster_nodes | wc -l` -machinefile ./cluster_nodes ./linux/<your-mpi-program>   > ./mpi.out

Now ssh to node64.cs.rochester.edu, go to your MPI working directory, and copy and adapt run.sh. To run your MPI program, just type

qsub -l nodes=4 run.sh

CAUTION: Since the cluster is a shared computation environment used for real tasks, be careful not to leave any running or zombie processes behind.

-- TongxinBai - 23 Feb 2008

OpenMP

GCC supports OpenMP starting with version 4.2.0. On sync, gcc-4.2.0 is installed. To use it as your default gcc, you may save the following three lines to .cshrc.gcc42 and source that file before you build and run your OpenMP programs.
 
setenv GCC_HOME /usr/grads/share/gcc-4.2.0
setenv PATH $GCC_HOME/bin:$PATH
setenv LD_LIBRARY_PATH $GCC_HOME/lib:$LD_LIBRARY_PATH

GCC 4.1.2, as shipped with Fedora Linux, also supports OpenMP.

For a programming guide, you may refer to this tutorial.
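
To give a flavor, a minimal OpenMP program in C looks roughly like this (build it with gcc's -fopenmp flag):

/* hello_omp.c -- minimal OpenMP example */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    int i;
    #pragma omp parallel for
    for (i = 0; i < 8; i++)
        printf("iteration %d handled by thread %d\n", i, omp_get_thread_num());
    return 0;
}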

-- TongxinBai - 23 Feb 2008

Compiling for OpenMP on sync

In addition to using gcc 4.2, you need to compile with the -fopenmp command-line option:
gcc -fopenmp -o foo foo.c

-- MichaelScott - 27 Feb 2008

Perl script for city generation

Here's a simple script that I've been using to generate random sets of cities. You can use it with the command line parameters -p for the number of points (cities), and -s to change the random seed. Output is deterministically the same when run with the same parameters.

#!/usr/bin/perl

use strict;
use warnings;

# command line arguments
my $points = 5;
my $seed = 0;

{
    # usage string
    my $usage = q(
usage: rgen.pl [p,s,h]
    
    -h              help
    -p              number of points to generate
    -s              random seed for point generation  
);

    use Getopt::Long;  
    my $help = '';
      
    die $usage if !GetOptions('p=i'   => \$points, 
                              's=i'   => \$seed,
                              'h|?' => \$help);
    if ($help) {
        print $usage;
        exit 0;
    }
}

srand $seed;

{
    print $points."\n";
    for (my $i = 0; $i < $points; $i++) {
        print int(rand(10000))." ".int(rand(10000))."\n";
    }
}

This lets you test your program with a command line like: ./rgen.pl -p 100 | ./tsp | java Display, if the perl script is named rgen.pl and your executable is named tsp.

-- Main.LukeDalessandro - 28 Feb 2008

On RSTM

RSTM can be confusing and frustrating to use at first. Here are a few points that may help.

  • The transactional system only protects objects that inherit from the stm::Object class and that are accessed through smart pointers. Your implementation needs to be object-oriented, at least to a first approximation.
  • Shared class definitions shouldn't define any non-static-constant fields. All fields need to be generated using the GENERATE_FIELD/GENERATE_ARRAY macros, and accessed through the resulting accessors as defined in the "Programming with RSTM" documentation.
  • Shared arrays are currently treated as a single block object, so a thread reading one element of an array while a different thread writes a disjoint element still counts as a read-write conflict and will (most likely) result in the reader aborting.
  • If you throw an exception from inside a transaction that is not handled within the transaction it will abort the transaction and then continue to propagate. This seems like the obvious implementation, but you need to remember that anything you did to shared data during the transaction has been "undone" by the abort mechanism. Do not embed any transactionally shared information inside an exception, or you could wind up segfaulting if you try and use it.
  • Be really careful with constructors, and the "this" pointer. Read the relevant sections in the "Programming with RSTM" documentation about them.

I'm sure there will be other issues that you run into. The point of the assignment (I think) isn't for us to struggle with the vagaries of the RSTM package, but to try and come up with an interesting transactional solution to TSP. As such, feel free to email me if you find yourself struggling with something that you think is RSTM specific.

-- Main.LukeDalessandro - 19 Mar 2008

I had some trouble compiling and linking my code even with the benchmark, mesh, and swarm examples. Here's my attempt at creating a simple Makefile.

STMPATH    = ../
include $(STMPATH)Makefile.inc

CXX = g++
CXXFLAGS += -I$(STMPATH)
LDFLAGS  += -L$(STMPATH)stm/obj -lstm

all: solve

solve: solve.cpp
	$(CXX) -o solve $(CXXFLAGS) solve.cpp $(LDFLAGS)

clean:
	rm -f *~ *.o solve core

If you modify STMPATH, you can put your code anywhere relative to the rstm directory. In my C++ code, I included <stm/stm.hpp> and was able to call stm::init(/*cm_type*/ "Polka", /*validation*/ "invis-eager", /*use_static_cm*/ true);

-- Main.CoreyProscia - 19 Mar 2008

There's a slight bug in the mesh code on Solaris (sync/swym). On line 85 of queues.hpp in the mesh directory, #elif defined(SPARC) should be #else, and delete line 88 entirely (#error Please indicate your endianness here).

Sorry for any confusion.

-- Main.LukeDalessandro - 25 Mar 2008

on Technical Content
