
Making Effective Use of (Partial) Data Dependencies for Parallelization


Data dependencies have strong connections with parallelism. The fundamental observation, going back (at least) 30 years, is that two code blocks that have no (transitive) data dependencies can be executed in parallel, resulting in the same final state as running them sequentially. This has been the basis and precondition for sophisticated research on parallelizing compilers for many years. Unfortunately, this precondition is only rarely met: the candidate code blocks are often dependent, and even when they are not, the compiler's (static) dependence analysis is typically too conservative to prove independence, failing due to spurious dependencies.
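
To make the observation concrete, here is a minimal sketch (an illustration added here, not taken from the talk): the two loops below write to disjoint arrays and share no data, so they have no data dependencies, and running them on separate threads produces the same final state as running them one after the other.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class IndependentBlocks {
        public static void main(String[] args) throws Exception {
            int[] a = new int[1000];
            int[] b = new int[1000];

            // Block 1 touches only 'a'; block 2 touches only 'b'.
            // With no shared reads or writes there are no data dependencies,
            // so parallel execution is equivalent to sequential execution.
            ExecutorService pool = Executors.newFixedThreadPool(2);
            Future<?> f1 = pool.submit(() -> { for (int i = 0; i < a.length; i++) a[i] = 2 * i; });
            Future<?> f2 = pool.submit(() -> { for (int i = 0; i < b.length; i++) b[i] = i + 1; });
            f1.get();
            f2.get();
            pool.shutdown();

            System.out.println(a[10] + " " + b[10]);  // prints "20 11" under either schedule
        }
    }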


Tripp will propose a new view of program dependencies, utilizing accurate -- yet potentially partial -- dependence information to tune/specialize a baseline synchronization algorithm while preserving its correctness (i.e., its serializability guarantees). This can be done in more than one way, including (i) building specialized, client-specific conflict-detection oracles, (ii) synthesizing concurrency monitors that predict the available parallelism for a given input and/or computation phase, and (iii) finding the true, semantic dependencies that limit parallelism. He will survey several techniques for leveraging dependence information along these lines, all of which make safe use of dynamic (rather than static) dependencies, backed by user-provided data abstractions, to obtain precise dependence analysis.
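
As one possible illustration of item (i), the following hypothetical sketch (the SetOp/conflict names are illustrative, not from the talk) shows a conflict-detection oracle specialized for a set client: at the level of the data abstraction, two operations conflict only if they touch the same key and at least one of them mutates the set, so operations on distinct keys, or two lookups of the same key, may proceed in parallel even when a memory-level analysis would report a dependency.

    // Hypothetical illustration: a client-specific conflict-detection oracle
    // for a set abstraction. Operation kinds: ADD, REMOVE, CONTAINS.
    enum Kind { ADD, REMOVE, CONTAINS }

    record SetOp(Kind kind, int key) {
        boolean mutates() { return kind == Kind.ADD || kind == Kind.REMOVE; }
    }

    final class SetConflictOracle {
        // Reports only semantic conflicts: same key and at least one mutation.
        // Everything else commutes, so a runtime consulting this oracle can
        // admit more parallelism than raw memory-level dependence tracking.
        static boolean conflict(SetOp p, SetOp q) {
            return p.key() == q.key() && (p.mutates() || q.mutates());
        }
    }

Under such an oracle, add(3) and contains(7) commute and may be scheduled concurrently, while add(3) and remove(3) must still be synchronized.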

