CSE 160: Introduction to High Performance Parallel Computation
Spring 2005
Lecture: Tues/Thurs 5-6:20pm, WLH 2207
Discussion: Mon 10-10:50am, HSS 1330
Professor: Andrew A. Chien -- achien@ucsd.edu -- AP&M 4808
Office Hours:
- Tues/Thurs 2-2:30pm, AP&M 4808
- 15-20 minutes after class
- Via email
Teaching Assistant: Sagnik Nandy -- snandy@cs.ucsd.edu -- AP&M 4438
Office Hours:
Course Administrative Support: Jenine Combs -- jcombs@ucsd.edu
CSE 160 Materials:
- Textbook: Doug Lea, Concurrent Programming in Java: Design Principles and Patterns, Second Edition, 1999.
- Course Schedule/Lecture Slides
- Laboratory Exercises/Homework
- Reserve Readings
- Culler and Singh, Parallel Computer Architecture: A Hardware/Software Approach
- Almasi and Gottlieb, Highly Parallel Computing
- Grama et al., Introduction to Parallel Computing
Coursework and Grading:
- Students will be expected to complete four homework/laboratory assignments, as well as a midterm and final exam.
- Course Grade Breakdown:
- 50% homework and laboratory assignments
- 50% midterm and final exams
Course Overview and Outline:
- Parallel computation has become a critical element of performance in all computational systems. While courses in parallel computing have traditionally focused on scientific applications, this course covers non-numeric applications drawn from the Internet, such as web search (e.g., Google and crawling), large-scale data handling systems, and data mining, as well as scientific computation.
- The class takes an integrated view of parallelism across many scales, covering both the expression and management of parallelism for performance in small- and large-scale systems.
- Topics include fundamental aspects of concurrency, including synchronization, parallelism, parallel algorithms, communication, and scalability. The programming language will be an extension of Java (called ProActive Java), chosen for its accessibility; a small illustrative example follows this outline.
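To give a flavor of the thread-level parallelism and synchronization covered in the course, below is a minimal sketch in plain Java (standard threads and a synchronized method, in the style of the Lea text), not ProActive Java: several workers sum slices of an array in parallel and combine their partial results under a lock. The class name ParallelSum and the constants are illustrative only.

    public class ParallelSum {
        private long total = 0;                        // shared accumulator

        // Combine a partial result; synchronized so concurrent updates do not race.
        private synchronized void add(long partial) { total += partial; }

        public static void main(String[] args) throws InterruptedException {
            final int N = 1000000, WORKERS = 4;
            final long[] data = new long[N];
            for (int i = 0; i < N; i++) data[i] = i;

            final ParallelSum sum = new ParallelSum();
            Thread[] threads = new Thread[WORKERS];
            int chunk = N / WORKERS;

            for (int w = 0; w < WORKERS; w++) {
                final int lo = w * chunk;
                final int hi = (w == WORKERS - 1) ? N : lo + chunk;
                threads[w] = new Thread(new Runnable() {    // one worker per slice
                    public void run() {
                        long partial = 0;
                        for (int i = lo; i < hi; i++) partial += data[i];
                        sum.add(partial);                   // synchronized combine
                    }
                });
                threads[w].start();
            }
            for (Thread t : threads) t.join();              // wait for all workers
            System.out.println("sum = " + sum.total);
        }
    }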
Parallelism is a key facet of nearly every computing system today. Internet services such as Google, Yahoo, MSN, and AskJeeves use thousands of machines in parallel to service your requests in seconds. Instruction-level parallelism (ILP) has been a key contributor to processor performance since the 1960s and has appeared in every microprocessor since the 1980s. Single-chip multi-core systems are bringing multiprocessor parallelism (multiple CPUs) into everyday applications. Wireless equipment, including WiFi and mobile telephony such as CDMA, uses significant parallelism in signal processing to achieve noise resistance and efficient transmission. Graphics for video games, movies, and HDTV/multimedia applications, for both encoding and decoding, make use of high degrees of parallelism.
For more information, email Professor Andrew Chien