CSCI-B 424 Parallel and Distributed Programming
3 credits
- Prerequisite(s): CSCI-C 310 or CSCI-C 343
- Delivery:
Description
Overview of parallel computers, shared memory, message passing, MIMD, and SIMD classifications. Understanding and use of message passing and synchronization facilities such as MPI. Study of parallel programming models such as master-slave, client-server, task-farming, divide-and-conquer, and pipeline. Performance analysis of parallel systems, execution time, time complexity, load balancing, and scalability.
Learning Outcomes
- Explain the fundamental concepts and principles of parallel processing, including the motivation, terminology, and history of parallel processing. CS 5

- Explain various parallel computer architectures and their trade-offs, including shared memory, distributed memory, and hybrid parallel architectures. CS 3
- Apply different parallel programming models, such as the Message Passing Interface (MPI), OpenMP, and CUDA, and select the appropriate model for a given problem. CS 5
- Design and analyze parallel algorithms for various problems, such as sorting, searching, graph algorithms, linear algebra, and optimization. CS 1
- Design and analyze distributed algorithms for various problems such as consensus, replication, and fault tolerance. CS 1
- Describe various parallel and distributed computing platforms, including clusters, grids, clouds, P2P networks, client-server architectures, and web services, and select the appropriate platform for a given problem. CS 5
- Analyze parallel and distributed performance using appropriate metrics, profiling, and tracing tools. CS 2
- Apply parallel and distributed processing techniques to high-performance computing applications, such as scientific simulations and data analytics. CS 4
Policies and Procedures
Please be aware of the following linked policies and procedures. Note that individual instructors may add stipulations specific to their course.