Author: Benedikt Bleimhofer

Title: Hierarchical Arrays for Efficient and Productive Data-Intensive Parallel Computing

Abstract: Today's high performance computing (HPC) systems are characterized by massive parallelism and multi-level memory hierarchies. Currently, most parallel programs are written using either multithreading (e.g., OpenMP) or message passing (e.g., MPI). While threads are easier to program than message passing and provide implicit communication via shared memory, their effectiveness is limited on large systems. Message passing, on the other hand, scales well on current clusters and provides explicit locality control, but programming it is complex and error-prone because all communication must be expressed explicitly. Partitioned Global Address Space (PGAS) approaches are a relatively recent way to combine the advantages of these two models. Although some of the most popular PGAS languages were introduced as early as 2004, awareness and acceptance of these approaches remain low. We believe this is mainly because most current PGAS approaches are implemented as new programming languages and therefore require a complete rewrite of existing application code. Furthermore, most current PGAS languages do not support multi-level memory hierarchies and thus do not match current hardware. In this thesis, Hierarchical Arrays (HA), a data-structure-oriented C++ template library providing hierarchical PGAS-like abstractions for essential data types such as multidimensional arrays and distributed lists, is designed and prototypically implemented. The design rests on three principles: explicit control of the hierarchical data layout by the programmer, the use of one-sided message primitives, and the ability of the library to coexist with current parallel programming models such as MPI, in order to support incremental adoption.
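
To make the abstract's design principles concrete, the following is a minimal sketch of the kind of abstraction it describes; it is not the HA library's actual interface. The class name HierArray and all of its methods are assumptions made purely for illustration. The sketch shows a block-distributed array whose data layout is explicit to the programmer, which is read with one-sided MPI RMA primitives, and which can be used alongside ordinary MPI code. A truly hierarchical version would subdivide each block again per memory level (node, socket, core); that level is omitted here for brevity.

// Illustrative sketch only -- assumed names, not the HA API from the thesis.
#include <mpi.h>
#include <cstdio>
#include <vector>

class HierArray {                                    // hypothetical name
public:
    HierArray(MPI_Comm comm, long global_len)
        : comm_(comm), global_len_(global_len) {
        MPI_Comm_rank(comm_, &rank_);
        MPI_Comm_size(comm_, &size_);
        block_ = (global_len_ + size_ - 1) / size_;  // explicit block layout
        local_.assign(block_, 0.0);
        MPI_Win_create(local_.data(), block_ * sizeof(double),
                       sizeof(double), MPI_INFO_NULL, comm_, &win_);
    }
    HierArray(const HierArray&) = delete;            // the RMA window is not copyable
    ~HierArray() { MPI_Win_free(&win_); }

    // The mapping from global index to (owner, offset) is visible to the
    // programmer -- layout control stays explicit, as the abstract requires.
    int  owner(long i)  const { return static_cast<int>(i / block_); }
    long offset(long i) const { return i % block_; }
    long block_size()   const { return block_; }

    // Local store, wrapped in an exclusive self-lock so it becomes visible
    // to later one-sided reads under either RMA memory model.
    void set_local(long off, double v) {
        MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank_, 0, win_);
        local_[off] = v;
        MPI_Win_unlock(rank_, win_);
    }

    // One-sided remote read: the owning process does not post a receive.
    double get(long i) const {
        double v;
        MPI_Win_lock(MPI_LOCK_SHARED, owner(i), 0, win_);
        MPI_Get(&v, 1, MPI_DOUBLE, owner(i), offset(i), 1, MPI_DOUBLE, win_);
        MPI_Win_unlock(owner(i), win_);
        return v;
    }

private:
    MPI_Comm comm_;
    MPI_Win  win_;
    std::vector<double> local_;
    long global_len_, block_;
    int rank_, size_;
};

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);                          // coexists with plain MPI code
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    HierArray a(MPI_COMM_WORLD, 1000);
    a.set_local(0, 100.0 * rank);                    // each rank fills its own block
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank == 0) {                                 // read a remote element one-sidedly
        long idx = a.block_size() * (size - 1);      // first element of the last rank's block
        std::printf("a[%ld] = %f (owned by rank %d)\n", idx, a.get(idx), a.owner(idx));
    }
    MPI_Finalize();
    return 0;
}

In this sketch the one-sided MPI_Get stands in for the "one-sided message primitives" the abstract mentions: the reading process fetches remote data without the owner participating, which is the communication style that lets such a library coexist incrementally with existing MPI applications.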