====== Funding Overview ======
  
The research in the Software Analysis and Compilation Lab at the University of Delaware
is funded through research grants and student fellowships. Throughout the years, funding has been provided by:
  
  * [[http://www.nsf.gov|NSF - The National Science Foundation]]
  * [[http://www.arl.army.mil/www/default.cfm?Action=93|CTA - Army Research Lab Collaborative Technology Alliance]]
  * [[http://www.arl.army.mil/www/default.htm|ARL - The Army Research Laboratory]]
  * [[http://www.cra-w.org/dmp|CRA-W - The Computing Research Association's Committee on the Status of Women in Computing Distributed Mentoring Program (DMP, DREU)]]
  
  
====== Current Research Grants ======
  
==== NSF 0702401 Applying and Integrating Natural Language Processing Analysis of Programs to Aid in Software Maintenance and Evolution ====
**Broader Impacts.** The proposed research will lead to the advancement of our theory and the development of practical tools that provide automatic or semi-automatic assistance in software development and maintenance. This, in turn, should decrease maintenance time and help to increase the quality of software. The results, the collection of concern benchmarks, and the developed tools will be disseminated through publications and a website. A new unit on code maintenance using our techniques will be developed for the undergraduate Java course. Open-ended projects with our tools will also be incorporated into the software engineering course. Undergraduates will be trained in research through Science and Engineering Scholars, Honors theses, independent studies, summer research, and the CRA Distributed Mentor project for women, in all of which Pollock has a long track record of participation.
  
=== REU Supplements ===

  * 03/05/2009: 2 undergraduates
  * 11/06/2007: 2 undergraduates

==== NSF 0509170 CSR-AES: An Integrated Approach to Improving Communication Performance in Clusters ====

//PI: Martin Swany; Co-PI: Lori Pollock//

The project will develop an integrated approach to improving communication performance in clusters. Cluster computing has become a common, cost-effective means of parallel computing. Although adding more CPUs increases the cluster's maximum processing power, real applications often cannot efficiently use very large numbers of CPUs, due to lack of scalability. In regular codes, the main impediment to achieving scalability is the communication overhead, which increases as the number of CPUs increases. Most of the optimization methods proposed so far target specialized hardware or programming languages, require specialized knowledge from the domain scientist, are not enough to provide a comprehensive solution on their own, and do not adequately address the challenges of the layers of communication software between the sender processes and the receiver processes. Although overall performance for these applications could be improved, this potential remains largely untapped due to (1) the need for knowledge of the context of the communication operations to fully exploit sophisticated network technology, and (2) the low-level nature of the programming needed within the application program context to achieve that potential. In particular, performance can often be improved by increasing the use of lightweight asynchronous communication. Unfortunately, programming with asynchronous communication is difficult and error prone, even for the most experienced programmers.

The project will pursue a vertically integrated approach, in which a set of optimizations in the compiler, network, and operating system can enable legacy parallel applications to scale to a much larger number of CPUs, even if they were written without any knowledge of our techniques. An experimental prototype and preliminary experiments with real scientific applications show that significant performance improvements are possible with a vertically integrated approach in which knowledge of the context of communication operations is joined with knowledge of the network and cluster details to provide a fine-grained strategy for overlapping communication and computation. Based on these initial promising results, the overall goal of this research is to create a means for scalable cluster computing by enabling integrated knowledge and cooperation among the source optimizer, operating system, and network technology of the cluster, without relying on the programmer to learn the low-level details of the cluster communication system.

==== NSF 0720712 Collaborative: CSR-AES: System Support for Auto-tuning MPI Applications ====

//PI: Martin Swany; Co-PI: Lori Pollock//

The AToMS (Automatic Tuning of MPI Software) project is investigating a software system that can automatically improve the performance of large-scale scientific applications. Scientific codes that demand more and more computing resources are critical to modern science, but too often scientists must spend time constructing programs that run fast at the expense of doing their primary research. As computers contain an increasing number of computing elements, the problem worsens. The goal of the AToMS project is to begin to address this issue by applying automatic application tuning.

An optimizing compiler transforms programs into semantically equivalent ones that perform better. When dealing with any complicated architecture, it is difficult to know which transformations will improve performance. Auto-tuning takes the approach of trying many transformations and empirically evaluating the resulting versions. AToMS performs this auto-tuning with a combination of a static-analysis-based code transformation engine (called ASPhALT) and runtime support in the Open MPI library. The combination of compile-time and run-time support allows for code restructuring to overlap computation and communication and for the creation of optimized data-packing routines. In addition, code can be generated to take advantage of multicore processor architectures.

**Intellectual Merit:** The merit of the project is in gaining understanding about what is required to support automatically tunable MPI programs. **Broader Impacts:** This project will impact the high-performance and scientific computing community and users of parallel computers by making it easier to achieve good performance.
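The core idea of auto-tuning described above can be illustrated with a minimal sketch: generate or hand-write several semantically equivalent variants of a computation, time each one empirically, and keep the fastest. This is only an illustration of the general technique in Python; the function and variant names are hypothetical, and it is not the actual ASPhALT/AToMS implementation, which transforms MPI codes.

```python
import timeit

# Two semantically equivalent variants of the same computation; an
# auto-tuner does not know a priori which will run faster on a given machine.
def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_builtin(n):
    return sum(i * i for i in range(n))

def autotune(variants, arg, repeats=5):
    """Empirically time each variant on the given input and return the fastest."""
    best, best_time = None, float("inf")
    for fn in variants:
        t = min(timeit.repeat(lambda: fn(arg), number=3, repeat=repeats))
        if t < best_time:
            best, best_time = fn, t
    return best

if __name__ == "__main__":
    fastest = autotune([sum_squares_loop, sum_squares_builtin], 10_000)
    print(fastest.__name__)
```

Real auto-tuners search a far larger space (loop transformations, data packing, message scheduling) and must verify that every variant preserves the original semantics, but the select-by-measurement loop is the same.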
    
  
funding.txt · Last modified: 2013/05/03 14:41 by pollock