The project will pursue a vertically integrated approach, where a set of optimizations in the compiler, network, and operating system can enable legacy parallel applications to scale to a much larger number of CPUs, even if they were written without any knowledge of our techniques. An experimental prototype and preliminary experiments with real scientific applications show that significant performance improvements are possible when knowledge of the context of communication operations is joined with knowledge of the network and cluster details to provide a fine-grained strategy for overlapping communication and computation. Based on these initial promising results, the overall goal of this proposed research is to create a means for scalable cluster computing by enabling integrated knowledge and cooperation among the source optimizer, operating system, and network technology of the cluster, without relying on the programmer to learn the low-level details of the cluster communication system.
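The overlap strategy described above can be sketched generically: initiate a communication operation, do independent computation while it is in flight, then wait for completion. The following is a minimal illustrative sketch only, using plain Python threads as a stand-in for non-blocking MPI operations (all names here are hypothetical; the project's actual transformations operate on MPI calls such as ''MPI_Isend''/''MPI_Wait''):

```python
import threading
import time

def overlapped_step():
    """Overlap a (simulated) communication with independent computation.

    Illustrative only: the thread + Event stand in for a non-blocking
    send and its completion handle in a real MPI program.
    """
    done = threading.Event()

    def fake_communication():
        time.sleep(0.05)   # stand-in for network transfer time
        done.set()

    t = threading.Thread(target=fake_communication)
    start = time.perf_counter()
    t.start()                                    # initiate ("MPI_Isend")
    result = sum(i * i for i in range(200_000))  # independent computation
    done.wait()                                  # complete ("MPI_Wait")
    t.join()
    return result, time.perf_counter() - start

result, elapsed = overlapped_step()
```

Because the computation runs while the transfer is in flight, the total time is close to the larger of the two costs rather than their sum, which is the source of the performance gains reported above.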
  
==== NSF 0720712 Collaborative: CSR-AES: System Support for Auto-tuning MPI Applications ====

//PI: Martin Swany; CoPI: Lori Pollock//
The AToMS (Automatic Tuning of MPI Software) project is investigating a software system that can automatically improve the performance of large-scale scientific applications. Scientific codes that demand more and more computing resources are critical to modern science, but too often scientists must spend time constructing programs that run fast at the expense of doing their primary research. As computers contain an increasing number of computing elements, the problem worsens. The goal of the AToMS project is to begin to address this issue by applying automatic application tuning.
An optimizing compiler transforms programs into semantically equivalent ones that perform better. On any complicated architecture it is difficult to know in advance which transformations will improve performance. Auto-tuning instead tries many transformations and empirically evaluates the resulting versions. AToMS performs this auto-tuning with a combination of a static-analysis-based code transformation engine (called ASPhALT) and runtime support in the OpenMPI library. The combination of compile-time and run-time support allows code restructuring to overlap computation and communication and the creation of optimized data-packing routines. In addition, code can be generated to take advantage of multicore processor architectures.
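The empirical core of auto-tuning, trying semantically equivalent variants and timing them, can be sketched in a few lines. This is a minimal illustration, not the ASPhALT engine itself; the kernel, variant names, and tuner are all hypothetical:

```python
import timeit

# Three semantically equivalent variants of the same (hypothetical) kernel.
def dot_loop(a, b):
    s = 0.0
    for i in range(len(a)):
        s += a[i] * b[i]
    return s

def dot_zip(a, b):
    s = 0.0
    for x, y in zip(a, b):
        s += x * y
    return s

def dot_sum(a, b):
    return sum(x * y for x, y in zip(a, b))

def autotune(variants, a, b, repeats=5):
    """Check equivalence, time each variant, return the fastest."""
    reference = variants[0](a, b)
    best, best_t = None, float("inf")
    for f in variants:
        assert abs(f(a, b) - reference) < 1e-9  # semantic-equivalence check
        t = min(timeit.repeat(lambda: f(a, b), number=50, repeat=repeats))
        if t < best_t:
            best, best_t = f, t
    return best

a = [float(i) for i in range(1000)]
b = [float(i % 7) for i in range(1000)]
fastest = autotune([dot_loop, dot_zip, dot_sum], a, b)
```

The point of the empirical approach is that which variant wins depends on the machine; the tuner measures rather than predicts, exactly as the paragraph above describes for MPI transformations.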
**Intellectual Merit:** The merit of the proposed project is in gaining understanding about what is required to support automatically tunable MPI programs.

**Broader Impacts:** This project will impact the high-performance and scientific computing community and users of parallel computers by making it easier to achieve good performance.
    
  
funding.txt · Last modified: 2013/05/03 14:41 by pollock