Deputy CIO for Cyberinfrastructure and Research IT, Division of Information Technology, University of Maryland
Deepthought2, the University of Maryland's central cluster, was launched in July 2014 and will be used by the workshop participants. This talk will provide an introduction to Deepthought2 and to other research support activities of the Division of Information Technology.
Senior Director of Research, NVIDIA
In this talk, I will discuss the broad and exciting subject of GPU-powered computational displays – a new approach that combines the computational power of the GPU with novel optics to produce new displays suitable for augmented and virtual reality.
Director, University of Maryland Institute for Advanced Computer Studies (UMIACS)
Augmented reality is the next logical leap forward in the ever-expanding information revolution. By overlaying, or augmenting, digital information on top of real-world settings, augmented reality allows people from all walks of life—physicians, educators, industrial workers, artists and everyday citizens—to see and to use the information that matters most to them. In this talk, I will discuss some of the computational challenges in the augmented reality pipeline that GPUs are well-positioned to address.
Moderator: Terry Yoo
Office of High Performance Computing and Communications, National Library of Medicine, National Institutes of Health
This panel discussion will include Peter Bajcsy (computer scientist, Information Technology Laboratory, NIST), Raj Shekhar (founder, IGI Technologies), and Oleg Kuybeda (image processing scientist, Office of High Performance Computing and Communications, National Library of Medicine, National Institutes of Health). The discussion will center on how GPUs are enabling and improving a host of medical applications, including image-guided interventions and surgical planning, cancer detection and diagnosis, single-particle microscopy, and cardiac stress-testing.
Alumni Centennial Professor, Department of Physics and Astronomy, Johns Hopkins University
In this talk, I will present an overview of how data is changing science. This shift also has profound implications for the computational architectures used to perform computations and analyses. We will discuss several challenging science cases and show how a combination of high-speed I/O and GPUs can achieve remarkable results.
Associate Professor, Department of Biology and UMIACS, University of Maryland
Phylogenetic inference is fundamental to our understanding of the tree of life, the evolutionary relationships of life on Earth. Interest has recently concentrated on statistical approaches, such as maximum likelihood estimation and Bayesian inference. Yet for large datasets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems with serial computing often necessitates non-statistical or approximate approaches. The emergence of GPUs provides an opportunity to leverage their excellent floating-point performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation allows existing software packages to make more effective use of available computer hardware, including GPUs. We have developed BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference, which provides a uniform interface for performing phylogenetic likelihood calculations on a variety of hardware platforms.
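The core computation that such a library accelerates is the phylogenetic likelihood. As a minimal sketch (this is not BEAGLE's API), the following computes a single-site likelihood on a two-taxon tree under the Jukes-Cantor model using Felsenstein's pruning recursion, the per-site, per-state kernel that maps naturally onto GPU threads:

```python
import math

def jc69_p(t):
    """Jukes-Cantor transition probability matrix for branch length t."""
    same = 0.25 + 0.75 * math.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * math.exp(-4.0 * t / 3.0)
    return [[same if i == j else diff for j in range(4)] for i in range(4)]

def tip_partial(state):
    """Partial likelihood vector for an observed tip state (A=0, C=1, G=2, T=3)."""
    return [1.0 if i == state else 0.0 for i in range(4)]

def combine(children):
    """Felsenstein pruning: partial likelihoods at a node from its children.
    Each child is a (partial, branch_length) pair; this per-state loop is
    the kind of kernel a likelihood library parallelizes on the GPU."""
    out = [1.0] * 4
    for partial, t in children:
        P = jc69_p(t)
        for i in range(4):
            out[i] *= sum(P[i][j] * partial[j] for j in range(4))
    return out

# Site likelihood for a two-taxon tree (states A and G) joined at the root,
# branch lengths 0.1 each, uniform root frequencies.
root = combine([(tip_partial(0), 0.1), (tip_partial(2), 0.1)])
site_likelihood = 0.25 * sum(root)
```

Real inference repeats this over thousands of sites and internal nodes, which is why the arithmetic throughput of GPUs pays off.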
Professor, Department of Computer Science and UMIACS, University of Maryland
Getting the most performance at the lowest power consumption is a challenging and often time-consuming problem. Historically, such tuning required extensive manual effort and had to be repeated each time new hardware was released. In this talk, I will describe our work on the Active Harmony system and its NEMO algorithm, which automatically tunes complex multi-objective criteria, such as obtaining the best possible performance subject to a power constraint. In addition to describing our approach, I will also present results from running the tool on benchmark codes on actual GPUs.
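To illustrate the kind of multi-objective criterion described above, here is a hypothetical sketch (the configurations and measurements are invented, and this is not the Active Harmony API): choose the best-performing kernel configuration whose measured power stays under a cap.

```python
# Hypothetical (GFLOP/s, watts) measurements per (block_size, unroll) config;
# an autotuner would gather these online rather than from a fixed table.
measurements = {
    (64, 1): (120.0, 95.0),
    (64, 4): (180.0, 130.0),
    (128, 1): (150.0, 105.0),
    (128, 4): (210.0, 150.0),
    (256, 1): (160.0, 115.0),
    (256, 4): (200.0, 160.0),
}

def best_under_power_cap(measurements, cap_watts):
    """Highest-performance configuration whose power is within the cap:
    the constrained-optimization criterion, solved here by brute force
    over a tiny search space."""
    feasible = {cfg: m for cfg, m in measurements.items() if m[1] <= cap_watts}
    if not feasible:
        return None
    return max(feasible, key=lambda cfg: feasible[cfg][0])

best = best_under_power_cap(measurements, cap_watts=120.0)
```

Real autotuners search much larger spaces with far fewer measurements, which is where algorithms like NEMO come in.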
Department of Mechanical Engineering and Institute for Systems Research, University of Maryland
Robotics and advanced manufacturing applications rely on extensive geometric and physical simulations to enable automated planning and optimization. These applications demand both high simulation fidelity and high simulation speed, so that planning and optimization problems can be solved in a reasonable amount of time. GPUs can accelerate the computations needed for high-fidelity, high-speed simulations, and hence significantly improve the performance of automated planning and optimization. This presentation will describe how GPU-enabled computing is being used in planning for autonomous boats, automated mold design, and automated optical micromanipulation.
Professor, Department of Electrical and Computer Engineering and UMIACS, University of Maryland
GPUs are currently playing a major role in driving advances in high performance computing due to their advantages in performance/cost ratio, energy consumption, and programmability. Our work has aimed at developing optimization techniques for mapping algorithms and applications onto GPUs and heterogeneous CPU-GPU platforms. During the past year, we have focused on biomedical applications including: (i) agent-based modeling of vocal fold inflammation and wound healing; (ii) development of connectivity-based brain parcellations using diffusion tensor imaging; and (iii) construction and analysis of brain networks using resting-state fMRI. In this presentation, we will give an overview of our recent work, focusing on the optimization methods used for core scientific computations.
Department Chair, Minta Martin Professor of Aerospace Engineering, University of Maryland
Recent work on crashworthy crew seats for helicopters, blast-resistant crew seats for ground vehicles, and other shock and impact problems has focused on the use of magnetorheological fluids (MRFs) in novel adaptive shock absorber designs. MRFs change their viscosity in response to an applied field: the magnetic particles align their dipoles parallel to the field lines and form chain structures. By varying the magnetic field, the shock absorber can adjust its stroking load without an active valve system, and so is more reliable in impact and shock mitigation systems. A key problem is the formation of microstructures in the valve of the shock absorber, and this talk shows how a GPU code allows simulation of magnetorheological flows at device scales.
Associate Professor, Department of Chemical and Biomolecular Engineering, A. James Clark School of Engineering, University of Maryland
Advances in computational resources have allowed researchers focused on molecular-scale simulations to begin to fully probe the timescales of lipid membrane phase changes, protein folding, and ligand binding. In this talk, I will discuss how GPUs have enabled improved molecular dynamics (MD) simulations of proteins and membranes. The speedup on even a single node allows us to explore parameter space efficiently. Sample scaling results will be shown using the NAMD simulation package. Then, sample applications will be presented on bilayer phase changes and protein-ligand interaction.
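At the heart of any MD package, including NAMD, is a time-stepping loop. A one-particle velocity Verlet sketch (illustrative only, far simpler than any production MD code) shows the arithmetic that a GPU repeats for millions of atoms every step:

```python
def velocity_verlet(x, v, force, dt, mass=1.0, steps=1):
    """Velocity Verlet integration for a single 1-D particle. MD packages
    run this update in parallel over every atom; the expensive part in
    practice is evaluating force(), which is what GPUs accelerate."""
    f = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * (f / mass) * dt * dt
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / mass * dt
        f = f_new
    return x, v

# Harmonic spring (k = 1) as a stand-in potential: total energy should be
# approximately conserved, a standard sanity check for the integrator.
k = 1.0
x, v = velocity_verlet(1.0, 0.0, lambda x: -k * x, dt=0.01, steps=1000)
energy = 0.5 * v * v + 0.5 * k * x * x
```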
Associate Professor, Department of Chemical and Biomolecular Engineering, A. James Clark School of Engineering, University of Maryland
As the variety of off-the-shelf processors expands, traditional methods of implementing systems for digital signal processing and communication are no longer adequate to achieve design objectives in a timely manner. Designers need to easily track changes in computing platforms and exploit them efficiently, while reusing legacy code and optimized libraries that target specialized features of individual processing units. In this context, we propose a new system design workflow to schedule and implement Software Defined Radio (SDR) applications that are developed in the GNU Radio environment and targeted to GPU platforms. We present a design flow that extends the popular GNU Radio environment, lays the foundation for rigorous analysis based on formal dataflow models, and provides a standalone library of GPU-accelerated actors that can be integrated efficiently into existing applications.
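The formal dataflow models mentioned above treat a flowgraph as actors connected by edges. A minimal sketch of the basic scheduling discipline (actor names are illustrative, not actual GNU Radio blocks) fires each actor only after all of its producers:

```python
def schedule(actors, edges):
    """Return a firing order in which every actor runs after all of its
    producers: a topological sort of the dataflow graph, the starting
    point for the more rigorous schedulability analysis dataflow models
    enable."""
    indegree = {a: 0 for a in actors}
    for src, dst in edges:
        indegree[dst] += 1
    ready = [a for a in actors if indegree[a] == 0]
    order = []
    while ready:
        a = ready.pop()
        order.append(a)
        for src, dst in edges:
            if src == a:
                indegree[dst] -= 1
                if indegree[dst] == 0:
                    ready.append(dst)
    return order

# A toy SDR pipeline: source -> filter -> demod -> sink.
order = schedule(["source", "filter", "demod", "sink"],
                 [("source", "filter"), ("filter", "demod"),
                  ("demod", "sink")])
```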
Associate Professor, School of Engineering and Applied Science, George Washington University
PyGBe is a code that uses Python, GPUs and boundary elements to solve problems in protein electrostatics. We released it last year, showing how it compares with a well-known finite-difference code to compute protein solvation energies. This is a quantity used by biologists in various situations governed by protein electrostatics. This year, we've worked on an extension of PyGBe to study the preferred orientation of proteins near charged surfaces. The target application is computational modeling of biosensors.
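As a toy version of the orientation question (not PyGBe's boundary-element formulation), one can model the protein as a point dipole in the uniform field of a charged plane and scan tilt angles for the energy minimum:

```python
import math

def dipole_energy(p, E, theta):
    """Energy of a point dipole of moment p tilted at angle theta to a
    uniform field E: U = -p E cos(theta). A stand-in for the full
    boundary-integral electrostatics a code like PyGBe solves."""
    return -p * E * math.cos(theta)

p, E = 1.0, 2.0  # arbitrary units
angles = [i * math.pi / 180 for i in range(0, 181, 5)]
preferred = min(angles, key=lambda th: dipole_energy(p, E, th))
```

In this caricature the dipole simply aligns with the field; the interest of the full calculation is that real proteins have complicated charge distributions and dielectric boundaries, so the preferred orientation is not obvious in advance.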
Professor, Computer Science and UMIACS, University of Maryland
We describe two applications of GPU-based heterogeneous computing developed in my group. The first enabled a real-time computational acoustical imaging device, since spun out as a company, VisiSonics. This device combines a spherical microphone array with a co-located array of HD video cameras, and runs algorithms implemented on a GPU-enabled laptop computer: the GPU performs acoustical beamforming and video image stitching, while the CPUs provide control. The second uses heterogeneous architectures to speed up the Fast Multipole Method (FMM). The FMM is an approximation algorithm that allows fast computation, to specified accuracy, of the dense matrix-vector products that arise in fluid mechanics, acoustical and EM wave scattering, molecular dynamics, statistics, and other fields. The algorithm holds particular promise on distributed parallel architectures because of its favorable communication complexity.
Project leader, ASC Production Visualization Project, Los Alamos National Laboratory
As Moore’s Law fades and new computing architectures, such as the GPU, come into wide use, we will correspondingly need to develop new ways to exploit them. This includes not only meeting programming challenges, but also addressing power and resiliency issues. Probabilistic methods underlie many new and emerging paradigms, and may be useful in addressing challenges that arise. In this talk, we discuss aspects of the GPU that align well with probabilistic paradigms, and present a case study illustrating probabilistic computing on a GPU.
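As a toy illustration of a probabilistic paradigm on data-parallel hardware, consider Monte Carlo estimation of pi: every sample is independent, so the loop below maps directly onto thousands of GPU threads (shown serially in Python for clarity):

```python
import random

def monte_carlo_pi(n, seed=0):
    """Estimate pi by sampling points in the unit square and counting how
    many land inside the quarter circle. Each sample is independent and
    the result degrades gracefully if a few samples are lost, the
    error-tolerance property that makes probabilistic methods attractive
    for resiliency as well as parallelism."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

estimate = monte_carlo_pi(100_000)
```

On a GPU the samples would be distributed across threads with per-thread random streams and combined by a parallel reduction.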
Moderator: Jimmy Lin
Associate Professor, College of Information Studies and UMIACS, University of Maryland
This panel will include Raju Namburu (Computational Sciences Division, Army Research Laboratory), George Stantchev (Naval Research Laboratory), R. Jacob Vogelstein (ODNI/IARPA), and others. GPU characteristics such as numerous simple yet energy-efficient computational cores, thousands of simultaneously active fine-grained threads, and large off-chip memory bandwidth have motivated their deployment in a range of high-performance computing systems. This discussion will center on applications of these units in high-throughput computing.
High-level overview of GPU hardware and software with an emphasis on how to use GPUs via applications, libraries and programming languages. [1hr]
Overview of the OpenACC programming model with some introductory hands-on exercises. [2hrs]
Introduction to programming in CUDA C/C++ with hands-on exercises. [1hr]
Optimization techniques for global memory and shared memory access, including using profiling tools to identify program hot spots and how to optimize them. [3hrs]
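The shared-memory material in this session centers on tiling: staging blocks of data in fast on-chip memory so that each global-memory element is loaded once per tile rather than once per output element. The access pattern can be sketched in plain Python (illustrative only; an actual kernel would be written in CUDA C/C++):

```python
def tiled_matmul(A, B, tile=2):
    """Blocked multiply of square matrices given as lists of lists. The
    three outer loops walk over tiles; on a GPU, each (ii, jj) tile is a
    thread block and the kk-tiles of A and B are staged in shared memory
    before the inner products run."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for jj in range(0, n, tile):
            for kk in range(0, n, tile):
                # On a GPU: load the (ii, kk) tile of A and (kk, jj) tile
                # of B into shared memory here, then synchronize.
                for i in range(ii, min(ii + tile, n)):
                    for j in range(jj, min(jj + tile, n)):
                        s = 0.0
                        for k in range(kk, min(kk + tile, n)):
                            s += A[i][k] * B[k][j]
                        C[i][j] += s
    return C
```

Profiling tools of the kind covered in the session reveal exactly this kind of opportunity: a kernel whose hot spot is redundant global-memory traffic that tiling removes.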