Difference: CS255Spring10ProjectHome (13 vs. 14)

Revision 14 (2010-04-05) - XiaoyaXiang

Line: 1 to 1
 
META TOPICPARENT name="CS255Spring10"
Line: 194 to 194
 

Part 4: Group Competition (55%) (Due: Apr. 16, 11:59pm)

In this project, you are required to work in teams and develop your compiler to parallelize a sequential program using OpenMP directives, including parallel sections/loops and private (or other) variable clauses. There are several steps you should follow to complete the project, described in detail below.
 
step #1. Build your own repository
Build your own repository using Mercurial and share your project directory with all your teammates, as well as with the instructor and TA. All the shared files, such as the urcc/ast folder, should be included in your repository. Do NOT put the whole gcc directory in your repository. This step should be done by Wednesday (Apr. 7th, 11:59pm).
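If you are new to Mercurial, the usual sequence is hg init in your project directory, then hg add and hg commit for the shared files, after which your teammates can hg clone the repository; these are the standard Mercurial commands, so adapt them to however your group chooses to host the repository.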
 
step #2. Generate OpenMP code for shared-memory parallel execution.
Based on Projects 2 and 3, your compiler should by now be able to identify all the loops and perform conservative dependence checking for simple loops. For the loops your compiler determines have no loop-carried dependence, you must parallelize the loop iterations using OpenMP directives. A simple example follows.
 
    // there is no loop-carried dependence in this example
Line: 222 to 222
  Please familiarize yourself with OpenMP and study more OpenMP examples before you start coding. For loops that have loop-carried dependences that prevent automatic parallelization, your compiler should never output incorrectly parallelized code.
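For a loop with no loop-carried dependence, the generated OpenMP code might look roughly like the sketch below; the array names and loop bound here are placeholders for illustration, not the actual test case.

    // every iteration is independent, so the whole loop can run in parallel
    #pragma omp parallel for shared(a,b,c) private(i)
    for (i = 0; i < N; i++){
        c[i] = a[i] + b[i];
    }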
step #3. Advanced transformations for parallelism (Competition Part).

By now you have a fairly complete grasp of the theory and techniques for dependence elimination at the loop level, as well as a working knowledge of the URCC compiler. In the last phase of the project you have the freedom to develop creative and effective solutions to parallelize sequential code aggressively. The results from the test programs and (possibly) hidden benchmarks will become part of the growing record of the annual compiler competition, so your compiler competes not only with the compilers of your classmates but also with the best and brightest from future years. The following example gives you a general idea of how advanced transformations work for parallelism.

    // there is a loop-carried dependence (on the scalar t) in this example
    for (i = 0; i < N; i++){
        t = a[i] + b[i];
        c[i] = t;
    }

    // you can use scalar expansion to eliminate the dependence:
    // t becomes an array with one element per iteration, so iterations no longer conflict
    for (i = 0; i < N; i++){
        t[i] = a[i] + b[i];
        c[i] = t[i];
    }
    t = t[N-1];   // restore the final value of the scalar t after the loop

    // then parallelize it as below

    int chunk = 10; // here, 10 is just an example value; you do not have to use it
    #pragma omp parallel shared(a,b,c,t,chunk) private(i)
    {
        #pragma omp for schedule(dynamic,chunk) nowait
        for (i = 0; i < N; i++){
            t[i] = a[i] + b[i];
            c[i] = t[i];
        }
    }  // end of parallel region
    t = t[N-1];

step #4. Test your transformed program.

I will commit the test cases into your working repository. To run the generated OpenMP programs, you should have the following commands at hand.

    gcc -fopenmp loop_urcc.c -o loop
    setenv OMP_NUM_THREADS 4
    ./loop
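Note that setenv assumes a csh-style shell; if your login shell is bash, the equivalent command is export OMP_NUM_THREADS=4.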

You can use node4x2a (node4x2a.cs.rochester.edu) as your test machine. It has four CPU chips, each containing two cores, each of which in turn supports two hardware hyperthreads, so the machine has 16 logical processors in total. Because it is the only machine of its kind available to the class, it can be heavily used at times. Be courteous to others who also use it: always check whether the machine is busy before running your program, and since it was purchased primarily for research purposes, give priority to those who use it for research.
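For example, the standard who and top commands show who is logged in and what is currently running, which is usually enough to tell whether the machine is already busy before you start your own runs.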

Project milestones

  • Monday, April 05, final groups formed and roles assigned.
  • Wednesday, April 07, repository created and initial version of compiler source code committed.
  • Friday, April 09, design and implementation strategy reviewed.
  • Monday, April 12, first feedback from team leaders.
  • Wednesday, April 14, second feedback from team leaders.
  • Friday, April 16, 11:59pm, final code submission.
 * Set ALLOWTOPICCHANGE = ChenDing, XiaoyaXiang
 