Modern Parallel Programming
This page will be updated regularly. Please visit (and reload) often.

Parallel Programming
Course# COL 730 (Sem I, 2017-2018)

Mon, Tue, Fri 12:00-12:50 (Room IIA-106); Tue, Thu 3:00-4:30 (Room IIA-425)


Subodh Kumar <subodh@cse.*>
Office hour: Wed, Thu 12-1


Anmol Mahajan <Anmol.Mahajan.cs513@cse.*>
Office hour: Mon & Wed 3-4:30pm, Cloud Lab (Room No. 411, SIT)

Replace * with

News & Announcements

Course Information

This is a first course in parallel programming and does not require any previous parallel-computing experience. Data Structures and Operating Systems are prerequisites. L-T-P: 3-0-2.

With the growing number of cores on a chip, programming them efficiently has become an indispensable skill. Modern Parallel Programming is a hands-on course involving significant parallel programming on compute clusters, multi-core CPUs, and massive-core GPUs.

Contents: Parallel performance metrics, Models of parallel computation, Parallel computer organization, Parallel programming environments, Load distribution, Throughput, Latency and Latency hiding, Memory and Data Organizations, Inter-process communication, Distributed memory architecture, Interconnection network and routing, Shared memory architecture, Memory consistency, Non-uniform memory, Parallel Algorithm techniques: Searching, Sorting, Prefix operations, Pointer Jumping, Divide-and-Conquer, Partitioning, Pipelining, Accelerated Cascading, Symmetry Breaking, Synchronization (Locked/Lock-free).

Tentative outline

Part I

From serial to parallel thinking: common gotchas
A history of parallel computers and lessons learned from them.
Performance metrics - speedup, utilization, efficiency, scalability
Models of Parallel Computation
How useful are these models for modern machines
Parallel Computer Organization
Pipelining and Throughput
Latency and Latency hiding
Memory Organization
Inter-process communication
Interconnection network
Message passing
Shared/Distributed memory
Basic Parallel Algorithmic Techniques
Pointer Jumping, Divide-and-Conquer, Partitioning, Pipelining, Accelerated Cascading, Symmetry Breaking, Synchronization (Locked, Lock-free)
Parallel Algorithms
Data organization for shared/distributed memory
Searching, Merging, Sorting, Prefix operations
Example applications

Part II

Writing Parallel Programs
GPU-Compute Architecture, CUDA, Memory organization in CUDA
Multi-Core CPU programming, OpenMP, MPI
Performance evaluation and scalability

Academic Integrity Code

Academic honesty is required in all your work. You must solve all programming assignments entirely on your own, except where group work is explicitly authorised. This means you must neither take others' work nor show, give, or otherwise allow others to take your program code, problem solutions, or other work.

This means you must protect your code from access by others. Do not leave it where others can find it. Do not give it to someone for submission on your behalf. Do not use any fragment of code obtained online or from someone else, except what is explicitly authorised as a part of the course. When authorised, any non-original code that you do use must be clearly identified with due reference to the source. Falsifying program output or results is also cheating.

Please see your professor if there are any questions about what is permissible.

Students who are caught cheating will receive a zero on the work in question and a letter-grade penalty. A second violation will result in summary failure of the course.

Assignments & Grading

Work          Points   Schedule
Assignment 1  10       Due Sep 3, 11:55pm
Assignment 2  10       Due Sep 24, 11:55pm
Assignment 3  10       Due Oct 6, 11:59pm
Project       20       Due Nov 17, 11:59pm
Minor 1       13
Minor 2       13
Major         24

Late Policy: A total of six days of late submission is allowed for the three assignments. Use them where you need them. Beyond six days, you lose 1 mark per day of delay.

Attendance Policy: 100% attendance is required. Prior permission must be obtained for any absence.

Audit Policy: A grade of C or better is required for Audit Pass.

Recommended Text

Parallel Programming in C with MPI and OpenMP by M. J. Quinn

Introduction to Parallel Computing by Ananth Grama, George Karypis, Vipin Kumar, and Anshul Gupta

Programming Massively Parallel Processors by D. Kirk and W. Hwu

Further Reading