Dr. Miller's research interests include
Parallel Algorithms and Architectures
and Image Analysis.
Google Scholar. Dr. Miller has an h-index of 45 and Google Scholar rankings by area of Optimization Algorithms (#1), Parallel Algorithms (#3), Computational Geometry (#10), Crystallography (#22), and Algorithms (#51).
Dr. Miller and colleagues (most notably Prof. Quentin
F. Stout, University of Michigan) developed a suite of
efficient and optimal computational paradigms, solution strategies,
and data movement operations for a wide variety of
parallel architectures, including the mesh, pyramid, hypercube, and PRAM,
as well as innovative architectures, such as the reconfigurable mesh (a mesh
with a reconfigurable bus) that they introduced.
Miller has also applied this toolbox of techniques to create efficient and
optimal algorithms to solve fundamental problems in areas including, but
not limited to,
computational geometry, image analysis, graph theory, data movement
operations, and NP-hard problems.
Highlights and impact of this work include the following.
- Several of our papers covering the reconfigurable mesh have been cited
200+ times each, according to Google Scholar.
- Google Scholar lists more than 14,000 papers that present results for
the reconfigurable mesh.
- Seminal papers of ours covering pyramid computer algorithms have been cited
100+ times each.
- Major papers of ours covering mesh algorithms and parallel
algorithms to solve problems in computational geometry have been
cited ~100 times each.
We introduced a paradigm of successively using fewer processors to arrive
at faster (often optimal) algorithms.
This technique is applied within the context of a
divide-and-conquer strategy or an iterative process
whereby the number of processors is reduced at each stage of the
algorithm by an architecture-dependent amount
in order to arrive at an optimal solution to a problem.
This notion was in dramatic contrast to the wisdom of the time, which
focused on keeping as many processors as possible busy in order to arrive at
efficient algorithms.
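The stage-by-stage reduction of processors can be pictured with a toy sequential simulation (an illustrative sketch, not an algorithm from the papers; `staged_reduce` and its structure are hypothetical): at each stage, half of the active processors retire after handing their values to the survivors.

```python
def staged_reduce(values, op):
    """Simulate the fewer-processors paradigm for a reduction.

    Each "stage" halves the number of active processors: processor i
    combines its own value with that of processor i + half, and the
    upper half of the processors then retires.  After O(log n) stages,
    a single processor holds the result.
    """
    vals = list(values)
    while len(vals) > 1:
        half = (len(vals) + 1) // 2
        nxt = []
        for i in range(half):                   # the surviving processors
            if i + half < len(vals):
                nxt.append(op(vals[i], vals[i + half]))
            else:
                nxt.append(vals[i])             # no partner this stage
        vals = nxt                              # upper half retires
    return vals[0]
```

For example, `staged_reduce([3, 1, 4, 1, 5], max)` finishes the reduction in three stages rather than keeping all five processors busy throughout.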
- We showed that for a large class of problems, the top of the pyramid
computer is unnecessary. In fact, we proved that for many problems,
a lower bound on the running time of a solution on the pyramid
is proportional to the square root of
the number of processors on an edge of the mesh at the base of the pyramid.
This result reverberated in the image processing
community, disproving the generally held belief that virtually all
image-based problems could be solved in polylogarithmic time on a pyramid.
- We introduced a generic divide-and-conquer paradigm showing that, by matching
certain characteristics of an architecture to the algorithm, efficient, and
often optimal, divide-and-conquer solutions could be obtained for a wide set of
problems on a large set of parallel architectures.
- We introduced fundamental, architecture-independent data movement operations
that could be ported to a wide variety of parallel architectures
and could then be used in conjunction
with some of our algorithmic strategies as part of an extensive toolkit to
efficiently solve higher-level problems.
- We (including V.K. Prasanna Kumar, USC), and simultaneously several other
groups, introduced a reconfigurable mesh (a mesh with a reconfigurable bus)
that would allow the mesh connections to be enabled or disabled on the fly and
assumed that every connected set of processors was connected via a bus rather
than a series of mesh links. We were able to show that this model was
actually more powerful than the shared-memory PRAM model for certain
problems, disproving a generally held belief of the time that the PRAM
was the most powerful parallel model of computation.
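To give a flavor of the model's power, consider the textbook constant-time OR of n bits on a 1 x n reconfigurable row. The function below is a sequential simulation under an assumed bus model (illustrative only, not taken from the papers): a processor holding 0 fuses its left and right ports to extend the bus, while a processor holding 1 splits the bus and writes a signal onto the segment reaching back toward processor 0, which reads its segment in a single bus step.

```python
def mesh_row_or(bits):
    """Sequentially simulate O(1)-time OR on a 1 x n reconfigurable row.

    Bus configuration (done in parallel on the real model):
      * a processor holding 0 fuses its ports, extending the bus;
      * a processor holding 1 splits the bus and writes a signal onto
        the segment on its left.
    Read step: processor 0 reads its bus segment; a signal arrives iff
    some processor on that segment holds a 1.
    """
    for b in bits:        # walk processor 0's bus segment left to right
        if b == 1:        # a splitter wrote a signal onto this segment
            return 1
    return 0              # the bus ran to the end with no writer
```

In the model, `mesh_row_or([0, 0, 1, 0])` corresponds to a constant number of bus steps, independent of n.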
Highlights of algorithms for the various models include the following.
From the perspective of problem domains, we were able to utilize our
suite of efficient operations and paradigms to devise efficient
(and often optimal) solutions as follows.
- Mesh: We introduced asymptotically optimal data movement
operations and paradigms. We used these procedures to introduce
asymptotically optimal solutions to
fundamental problems in areas that included computational geometry,
image analysis, and graph theory.
- Pyramid: We introduced asymptotically efficient/optimal data
movement operations. We used these operations to introduce
asymptotically optimal solutions to
fundamental problems in areas that included graph theory and image analysis.
- Hypercube: We introduced asymptotically efficient (and often
optimal) solutions to fundamental problems in areas that included
computational geometry, image analysis, and graph theory.
- PRAM: We provided asymptotically optimal solutions to problems
in computational geometry.
- Reconfigurable Mesh: We introduced asymptotically optimal
data movement operations. We used these data movement operations to
provide asymptotically optimal algorithms for fundamental problems involving
graphs and images.
- Fixed-Degree Network: We introduced the first asymptotically optimal
algorithm to solve the convex hull problem for planar point input on a
fixed-degree network (a modified AKS network).
- Computational Geometry: We introduced efficient solutions to fundamental problems in
computational geometry for the mesh, hypercube, and PRAM.
- Image Processing: We introduced efficient solutions to fundamental problems in
image processing for the mesh, hypercube, pyramid, and reconfigurable mesh.
- Graph Theory: We introduced efficient solutions to fundamental problems in
graph theory for the mesh, pyramid, hypercube, and reconfigurable mesh.
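The mesh results above rest on the fact that nearest-neighbor communication on an n x n mesh needs on the order of n steps to cross the machine, which is why Θ(n)-time (that is, Θ(√P)-time for P processors) algorithms are optimal for problems requiring global data movement. A small breadth-first simulation (illustrative only; `mesh_broadcast` is a hypothetical helper) counts the parallel steps for one value to reach every processor:

```python
from collections import deque

def mesh_broadcast(n, source=(0, 0)):
    """Count parallel steps for a value at `source` to reach every
    processor of an n x n mesh using only nearest-neighbor links.

    Each step, every informed processor forwards the value to its
    neighbors; the answer is the mesh distance to the farthest
    processor from the source.
    """
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < n and 0 <= nc < n and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                frontier.append((nr, nc))
    return max(dist.values())
```

From a corner of a 16 x 16 mesh this takes 30 steps (2(n-1) in general): linear in the edge length even though the mesh holds n^2 processors.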
Molecular Structure Determination.
Dr. Miller worked on a large project in direct methods of crystal structure
determination with a world-class group of scientists at the
Hauptman-Woodward Medical Research Institute, including Nobel Laureate Dr. Herbert A. Hauptman, Dr. George DeTitta,
and Dr. Charles M. Weeks.
This project can be held up as an exemplar of computational science.
That is, the project involved a computer scientist (Miller),
working closely with a mathematician (Hauptman), as well as theoretical and
experimental scientists in a disciplinary area
(structural biologists DeTitta and Weeks).
The results of this project include the Shake-and-Bake
algorithm, the SnB program, optimization strategies,
and worldwide community codes based on Shake-and-Bake.
Our Shake-and-Bake algorithm and SnB computer program have
increased by several orders of magnitude the size
of molecular structures that can be solved by a direct methods approach.
Further, our algorithm has also been shown to be applicable to much larger
proteins, which were never considered to be candidates for a direct-methods
approach. This project has had significant impact worldwide.
Highlights of this project include the following.
- Our Shake-and-Bake algorithm was listed
on the IEEE Poster "Top Algorithms of the 20th Century".
- Approximately 10,000 users worldwide have downloaded
direct-methods programs based on our Shake-and-Bake algorithm.
- A paper describing our
Shake-and-Bake breakthrough has been cited ~60,000 times,
according to Google Scholar.
- Our initial set of papers introducing Shake-and-Bake
have been cited 300+ times each.
- Minimal Principle: The development of the minimal principle,
which serves as the critical target function for optimization.
- Design, Analysis, and Implementation:
The design, analysis, implementation, refinement, and deployment of the
Shake-and-Bake method of structure determination in accessible computer
programs that run efficiently over a wide-variety of platforms.
- Solution to the Phase Problem: For all
intents and purposes, this algorithm "solved" the century-old phase problem.
Our Shake-and-Bake optimization algorithm
introduced a new dual-space solution strategy coupled with a new optimization
function to solve the phase problem of crystallography. Previous
approaches coupled the traditional tangent formula
with standard optimization strategies (genetic algorithms,
simulated annealing, and so forth).
- Real-World Outreach: This program has been used to determine
numerous critical structures,
including vancomycin, the "antibiotic of last resort".
- Dual-Space Refinement: The development of a cyclical approach that automatically alternates
phase refinement in reciprocal space with the imposition of phase constraints
in real space until the structure is determined.
- Parallel Computing: Using parallel computers (e.g., Intel iPSC/2, Intel iPSC/860, TMC CM-2,
and TMC CM-5) to provide the compute cycles necessary to
experiment with solution strategies that would eventually be employed on
a standard PC.
- Portable Program: Providing an implementation of the solution strategy for a wide variety
of parallel and sequential machines.
- Structure Solutions: The program has been able to solve
all structures in appropriate
domains, provided that reasonable-quality data were supplied as input.
- Peer-Reviewed Funding: Due to the comprehensive nature of this project, funding was provided
by three consecutive NIH program-project grants and a variety of NSF
grants, as well as by foundation funds and appropriations.
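The dual-space refinement cycle described above can be sketched schematically. This is a toy illustration of the alternation itself, not the SnB program: `refine` and `constrain` are hypothetical callables standing in for reciprocal-space phase refinement and real-space constraints, and the FFTs carry the trial structure between the two spaces.

```python
import numpy as np

def dual_space_cycle(phases, magnitudes, n_cycles, refine, constrain):
    """Toy sketch of a Shake-and-Bake-style dual-space loop.

    Each cycle: refine the phases in reciprocal space, transform the
    trial structure factors to a real-space density map, impose
    real-space constraints on the density, and transform back to
    recover an updated set of phases.
    """
    for _ in range(n_cycles):
        phases = refine(phases, magnitudes)                   # reciprocal space
        density = np.fft.ifftn(magnitudes * np.exp(1j * phases))
        density = constrain(density)                          # real space
        phases = np.angle(np.fft.fftn(density))               # back again
    return phases
```

In the real method the refinement targets the minimal principle described earlier and the constraints enforce atomicity; both are deliberately left abstract here.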
Cyberinfrastructure Laboratory.
The focus of Dr. Miller's Cyberinfrastructure Laboratory is on providing the
infrastructure necessary to perform high-end simulation and modeling.
- Grid Deployment: We developed and deployed the ACDC-Grid
as an experimental grid. This grid served as a platform
for developing an integrated data and compute grid. It also provided
a platform for work on a grid monitor, grid portal, node swapping system,
predictive scheduling environment, and resource management system.
Our follow-on Western New York Grid was
designed and deployed as a heterogeneous grid in Western New York.
Our work on the Western New York Grid led to the design and deployment
of the New York State Grid
(NYS Grid), a production-level grid based on the OSG software stack.
This work was used by Grid3+, the Open Science Grid, and the IBM NE BioGrid,
and served as the cornerstone of a New York State cyberinfrastructure initiative.
- Grid Monitoring: We developed and deployed a lightweight
grid monitoring system on production grids that allowed
one to assess the health and status of a grid.
- Predictive Scheduling: We designed and installed an
efficient predictive scheduler on experimental grids. Our scheduler
maintained a database of historical jobs to profile user/group/package usage.
- Data Migration: We developed and deployed a data migration tool
on experimental grids that allowed for the transparent
migration of data between various resources on a grid.
- Dynamic Resource Allocation: We designed, developed, and installed
a dynamic resource allocation tool on a variety of grids.
- Grid-Enabling Template: We developed and deployed a
grid-enabling template framework
that dramatically reduced the time to migrate software to a grid.
- Management Tools: We developed and deployed a management tool
for heterogeneous grids.
- Software Migration: We migrated numerous software packages
to a variety of grids, including applications in areas
of structural biology, bioinformatics,
ground water modeling, earthquake engineering, and computational chemistry,
to name a few.
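The predictive scheduler bullet above can be illustrated with a minimal history-based estimator (a hypothetical sketch, not CCR's actual scheduler; the field names and fallback order are assumptions): given a database of completed jobs, predict a new job's runtime from the most specific matching profile.

```python
from statistics import mean

def predict_runtime(history, user, group, package, default=3600.0):
    """Estimate a job's runtime (seconds) from historical jobs.

    Tries increasingly general profiles -- (user, package), then user,
    then group, then package -- and falls back to a default when no
    history matches.  `history` is a list of dicts with keys
    "user", "group", "package", and "runtime".
    """
    query = {"user": user, "group": group, "package": package}
    for keys in (("user", "package"), ("user",), ("group",), ("package",)):
        sample = [job["runtime"] for job in history
                  if all(job[k] == query[k] for k in keys)]
        if sample:
            return mean(sample)     # average runtime of the matched profile
    return default                  # no history: use a fixed estimate
```

A production scheduler would feed such estimates into its queueing and backfill decisions; this sketch shows only the profiling step.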
Center for Computational Research (CCR).
Following his vision that in the 21st century,
leading academic institutions would
i) embrace the digital data-driven society that we live in and
ii) work to empower their students to compete in our knowledge-based economy,
Dr. Miller founded the Center for Computational Research (CCR).
During his tenure as (Founding) Director from 1998-2006, CCR was
continuously ranked as one of the leading
supercomputing centers worldwide, earning Miller the recognition listed below.
Typically, CCR supported hundreds of projects simultaneously
from a variety of local and national colleges,
universities, non-profit organizations, government agencies,
and the private sector.
Miller's vision to 1) enable Research and Scholarship,
2) provide Education, Outreach, and Training, and 3) effect Technology
Transfer to local industry was successfully served by CCR during his tenure.
- HPC Wire's Top People and Organizations to Watch, 2003.
- SGI Innovator Award, one of six in the inaugural class, 2003.
- Best Practices Award, Bio-IT World, 2003.
- Dell Center in Research Excellence; Michael Dell personally presented the
first such award in Buffalo.
Outreach (CCR): One key to building a successful center was Miller's ability to
work with elected officials, local, regional, and national leaders,
as well as a wide variety of vendors including Dell, SGI, IBM, EMC, and
HP, to name a few. In fact, Miller's efforts with Dell culminated with
the installation of the first Dell cluster to make the top500 list (#22).
Miller also worked with companies in order to generate animated videos for
MTV, create traffic accident reconstructions, create traffic flow simulations,
and generate medical and scientific visualizations. This work was performed
with a variety of local, regional, and national companies and government
agencies.
More details on Miller's contributions to CCR can be found on
his Biography page.
Center of Excellence in Bioinformatics.
The success of CCR and Miller's ability to work with elected officials were
instrumental in establishing the $290M New York State Center of Excellence in
Bioinformatics, a major economic driver for Western New York.
In fact, in establishing the Center of Excellence in January of 2001,
New York State Gov. George E. Pataki stated that
"This Center [of Excellence in Bioinformatics] will, through the
University of Buffalo's Center for Computational Research, create
academic and industrial partnerships...."
Education, Outreach, and Training (EOT).
Prof. Miller has taught undergraduate and graduate courses throughout his
career. He has given numerous presentations to elected officials, defense
contractors, scout troops, high-school students, prospective students and
student-athletes, and citizens concerned with local construction plans.
He has produced reports on parallel processing
education, organized panel discussions on EOT, serves on the editorial board
of a journal devoted to teaching and case studies, and has introduced
10 courses in areas including cyberinfrastructure and high-end computing.
Under Miller's direction, workshops in computational science and high-end
computing have been offered for high-school teachers, a year-long
bioinformatics course has been developed and offered to high-school students,
and training sessions in multiprocessor computing have been offered to
faculty, students, and staff from area colleges and organizations.
Dr. Miller was instrumental in the formation of the
National Science Foundation EOT-PACI program and set up
similar programs locally.
Finally, Miller was accessible to the media, being quoted in some 1,500 news
articles and appearing regularly on television and radio.
Funding. Including personal peer-reviewed funding,
appropriations, contracts, and additional funds that CCR enabled during
his tenure as Director, Dr. Miller has helped bring
approximately $0.5 billion to Western New York.
Additional Information. Additional information is available
on the Biography page of this web site.