Teaching
 
 
 

CS300N: SuperComputing for Engineering Applications

Lecture Schedule: Slot "D": Tue (9:00-10:00), Wed (8:00-9:00), Thu (9:00-10:00)

TA: Rishi Kumar: e-mail: csd00393@cse.iitd.ernet.in

Objective:

This course aims to give students a deep knowledge of the techniques and tools needed to understand today's and tomorrow's supercomputers, and to program them efficiently.
Today's supercomputers range from expensive, highly parallel shared- and distributed-memory platforms down to cheap local networks of standard workstations. But the problems of software development are the same on all architectures: the user must recast his or her algorithm or application in terms of parallel entities (tasks, processes, threads, or whatever) that will execute concurrently. Parallelism is difficult to detect automatically because of data dependencies. In many cases, some form of algorithm restructuring is needed to expose the parallelism. Finally, realizing the restructured algorithm in software on a specific architecture may be quite complicated.
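As a small illustration of why data dependencies matter (a C sketch for this page, not part of the course materials; the function names are made up): the first loop below has independent iterations and can be distributed across parallel entities, while the second carries a dependency from one iteration to the next and cannot be parallelized as written.

```c
#include <stddef.h>

/* Independent iterations: y[i] depends only on x[i], so the
   iterations can be split across threads or processes freely. */
void scale(const double *x, double *y, size_t n, double a) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i];
}

/* Loop-carried dependency: each s[i] needs s[i-1], so this loop
   cannot be parallelized without restructuring the algorithm
   (for example, as a parallel prefix sum). */
void prefix_sum(const double *x, double *s, size_t n) {
    if (n == 0) return;
    s[0] = x[0];
    for (size_t i = 1; i < n; i++)
        s[i] = s[i - 1] + x[i];
}
```

The second loop is exactly the kind of algorithm that needs restructuring before it exposes any parallelism.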


In this course we plan to cover and understand the nuts and bolts of developing engineering applications: shared-memory parallel architectures and programming with OpenMP and Pthreads; distributed-memory message-passing parallel architectures and programming; and portable parallel message-passing programming using MPI. This will also include the design and implementation of parallel numerical and non-numerical algorithms for engineering applications. In addition, we will study performance evaluation and benchmarking on today's supercomputers.
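As a taste of the OpenMP programming style covered in the course, the sketch below (illustrative only; `estimate_pi` is a made-up name) approximates π by midpoint integration with a parallel reduction. The pragma shares the loop iterations among threads and combines their partial sums; compiled without OpenMP support, the pragma is ignored and the same code runs correctly serially.

```c
#ifdef _OPENMP
#include <omp.h>
#endif

/* Estimate pi by midpoint integration of 4/(1+x^2) on [0,1].
   With OpenMP enabled, iterations are divided among threads and
   the per-thread sums are combined by the reduction clause. */
double estimate_pi(int n) {
    double h = 1.0 / n;
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++) {
        double x = h * (i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    return h * sum;
}
```

The reduction clause is what makes the shared accumulator safe: each thread sums privately, and OpenMP combines the results at the end of the loop.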

Topics:

Introduction to supercomputing. Various supercomputer architectures: shared-memory parallel architectures and programming with OpenMP and Pthreads; distributed-memory message-passing parallel architectures and programming; portable parallel message-passing programming using MPI. Design and implementation of parallel numerical and non-numerical algorithms for scientific and engineering applications.
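A minimal Pthreads sketch of the shared-memory style listed above (again illustrative, with made-up names, not course code): each thread sums a contiguous slice of an array, and the main thread joins the workers and combines their partial results.

```c
#include <pthread.h>
#include <stddef.h>

#define NTHREADS 4

struct slice {
    const double *x;    /* shared input array              */
    size_t begin, end;  /* this thread's range [begin,end) */
    double partial;     /* this thread's partial sum       */
};

/* Thread body: sum one slice of the array into s->partial. */
static void *sum_slice(void *arg) {
    struct slice *s = arg;
    s->partial = 0.0;
    for (size_t i = s->begin; i < s->end; i++)
        s->partial += s->x[i];
    return NULL;
}

/* Split x[0..n) into NTHREADS slices, sum them concurrently,
   then combine the partial sums after joining all threads. */
double parallel_sum(const double *x, size_t n) {
    pthread_t tid[NTHREADS];
    struct slice sl[NTHREADS];
    size_t chunk = n / NTHREADS;
    for (int t = 0; t < NTHREADS; t++) {
        sl[t].x = x;
        sl[t].begin = (size_t)t * chunk;
        sl[t].end = (t == NTHREADS - 1) ? n : sl[t].begin + chunk;
        pthread_create(&tid[t], NULL, sum_slice, &sl[t]);
    }
    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += sl[t].partial;
    }
    return total;
}
```

Because each thread writes only to its own `struct slice`, no locking is needed; the only synchronization is the join before the partial sums are read.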


PRE-REQUISITES

Knowledge of C/C++/Fortran/Java programming. Understanding of the Unix operating system.

Text Books

MAIN READING

Kai Hwang and Zhiwei Xu, Scalable Parallel Computing, McGraw-Hill, New York, 1997.

Vipin Kumar, Ananth Grama, Anshul Gupta, and George Karypis, Introduction to Parallel Computing: Design and Analysis of Algorithms, Benjamin/Cummings, Redwood City, CA, 1994.

Barry Wilkinson and Michael Allen, Parallel Programming, Pearson Education Asia, 1999.

SUPPLEMENTARY READING

Ian T. Foster, Designing and Building Parallel Programs: Concepts and Tools for Parallel Software Engineering, Addison-Wesley, 1995.

W. Gropp, E. Lusk, and A. Skjellum, Using MPI: Portable Parallel Programming with the Message-Passing Interface, MIT Press, 1994.

Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, and Vaidy Sunderam, PVM: Parallel Virtual Machine - A Users' Guide and Tutorial for Networked Parallel Computing, MIT Press, 1994.


Rohit Chandra, Ramesh Menon, Leo Dagum, David Kohr, Dror Maydan, and Jeff McDonald, Parallel Programming in OpenMP, Morgan Kaufmann, 2000.

Gene H. Golub and Charles F. Van Loan, Matrix Computations, Johns Hopkins University Press.

Computing Systems to be used:

PARAM

IBM RS/6000

SUN Fire V6800

Pentium - Linux Cluster

Lecture Notes 

MPI-Part-I

MPI-Part-II

OpenMP Notes (PPT) 

OpenMP Fortran Spec

OpenMP C Spec

Hands-On

MPI

OpenMP

Pthreads

Assignments

This course will have three assignments of 10 marks each.

Links

For information about the Sun Performance Library, please see SUN Workshop

Unix Tutorials

Stanford Unix Programming Tools Manual (pdf file)
GNU Bash Manual (html file)
GNU Make Manual (html file)



Other Parallel Information Sites


Related On-line Textbooks


Send mail to dheerajb@cse.iitd.ernet.in with questions or comments about this web site.