Tuesday/Thursday, 5:00p - 6:20p, On-Line

The focus of this course is experimental (hands-on) parallel computing. Each student is responsible for a semester-long project. Grading is based on the project, as well as on two formal talks, delivered with presentation software (e.g., PowerPoint), that cover the project: a definition and justification of the problem, sequential and parallel solution strategies, and a significant set of running times on large parallel systems that allows for an analysis and explanation of Amdahl's and Gustafson's speedups. The first talk gives a brief explanation of the proposed project, its goals and expectations, and a timeline of the work to be performed. The second talk summarizes accomplishments. Students are encouraged to look at the final talks from previous semesters, available below. Note that a successfully completed project satisfies the project requirement for the M.S. program. (A student who completes the project successfully is responsible for filling out the proper paperwork and presenting it to Dr. Miller for a signature.) NB: There will be a cap on the number of students allowed to enroll in the course, so that those who are enrolled will have a full experience and educational opportunity.

Attendance is required. This course is offered remotely via Zoom; please see the LMS (UBLearns) for the Zoom information. The course is listed as an HE course, so it will satisfy the requirement for a course to be taken on-campus/in person.

Grading is subjective, based on the quality of the following:

- Class Attendance and Participation
- Project chosen with respect to the key parameters of projects discussed in class (ability to demonstrate both speedup and scaled speedup)
- Midterm Presentation
- Final Presentation

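For students new to the two speedup measures named above, a minimal sketch of how they are typically computed (the function and parameter names here are illustrative, not course-mandated notation):

```python
def amdahl_speedup(p, f):
    """Amdahl's law: speedup on p processors when a fraction f of the
    work is inherently serial and the problem size is held fixed."""
    return 1.0 / (f + (1.0 - f) / p)

def gustafson_speedup(p, f):
    """Gustafson's law: scaled speedup on p processors when the problem
    size grows with p and a fraction f of the run time is serial."""
    return p - f * (p - 1)

# Example: a 5% serial fraction on 64 processors.
# Fixed-size (Amdahl) speedup plateaus well below 64,
# while scaled (Gustafson) speedup remains near-linear.
print(amdahl_speedup(64, 0.05))     # about 15.4
print(gustafson_speedup(64, 0.05))  # 60.85
```

In a project talk, the measured running times would replace these closed-form estimates: speedup is typically reported as the sequential time divided by the parallel time at each processor count.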
**Presentations:**

- Dr. Matt Jones (CCR) presented material covering an introduction to CCR and its systems, logging into and submitting jobs to CCR's clusters, MPI programming, OpenMP programming, and debugging, among other topics. Please see the presentations on MPI, OpenMP, and SLURM at CCR.
- Parallel Implementation of Dijkstra's Single Source Shortest Path Algorithm, Kartik Sehgal.
- KMP Parallel Algorithm for Pattern Matching, Rajesh Bammidi.
- Solving N-Body Problem using Parallel Approach, Sakshi Rakesh Singhal.
- Parallel Fast Fourier Transform, Aaqib Wadood Syed.
- Closest Pair of Points Problem, Sai Praveen Mylavarapu.
- Sieve Parallel Algorithm, Shivangi Mishra.
- Breadth First Search Using 1-D Partition, Shalini Agarwal.
- Matrix-Matrix Multiplication, Parth Anand.
- Parallel Breadth-First Search Using MPI, Sumanth Thota.
- Prime Factorization, Shubham Sunita Ambavale.
- K-Nearest Neighbors, Abel Jacob.
- Subset Sum Count (0-1 Knapsack Variant), Kunal Chand.
- Prime Number Generation, Sayli Umesh Nadkar.
- Longest Common Subsequence, Saema Nadim.
- Parallel Image Upscaling using Bilinear Interpolation, Siddhant Gupta.
- Generating Super Magic Hashes: A Parallel Approach, Bhargav Srinivasamurthy Vasist.
- Parallel Logistic Regression, Varun Bhatt.
- Parallel Implementation of Bellman Ford Algorithm, Shreya Reddy Gouru.
- Parallel Matrix Multiplication, Cole Severance.
- Disease Spread Simulation, Sumanth Reddy Dasi.
- Parallel Union-Find using MPI, Shubham Prasad Pednekar.
- Parallelizing the Floyd-Warshall Algorithm, Prashant Godhwani.
- Parallelized Logistic Regression using Gradient Descent, Anuja Chandrashekhar Wani.
- 0/1 Knapsack Problem, Lakshya Rawal.
- Parallel Fast Fourier Transform, Jessica Grogan.