TU Wien Informatics

About

Parallel Computing deals with the efficient utilization of parallel processing resources for the solution of computational problems. This sounds dry, but since all modern, general-purpose computing devices are in one way or another parallel computers, parallel computing is ubiquitous and inevitable.

Since not all computational problems are easily amenable to being solved in parallel, the field is fascinating and challenging, and it abounds with issues and problems that still need to be resolved better. Parallel Computing at TU Wien focuses on the efficient utilization and modeling of real, existing architectures and systems (shared-memory multi-cores, distributed-memory systems, hybrid and accelerated systems); on algorithms, interfaces, libraries and, to some extent, applications; and on idealized models of parallel computation to explore the limits of parallelization.

The research area has specific expertise and interest in message-passing parallel computing, interfaces like MPI, benchmarking of parallel algorithms, scheduling, shared-memory algorithms and data structures, and parallel algorithms in general. All of these topics are covered extensively in lectures offered by the research division.
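
To give a flavor of the message-passing style of programming these topics revolve around, the following minimal MPI micro-benchmark sketch times an MPI_Allreduce and reports the average time observed by the slowest process. It is an illustrative sketch only, not code from the research unit; the buffer size, repetition count, and the naive barrier-plus-Wtime timing scheme are assumptions made for brevity.

    /* Minimal, illustrative MPI micro-benchmark sketch (not code from the
     * research unit). Times MPI_Allreduce over a fixed number of repetitions
     * and reports the average time of the slowest process. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int count = 1 << 20;   /* doubles per process (assumed for illustration) */
        const int reps  = 50;        /* repetitions to average over */
        double *sendbuf = malloc(count * sizeof(double));
        double *recvbuf = malloc(count * sizeof(double));
        for (int i = 0; i < count; i++) sendbuf[i] = (double)rank;

        MPI_Barrier(MPI_COMM_WORLD);  /* crude synchronization before timing */
        double start = MPI_Wtime();
        for (int r = 0; r < reps; r++)
            MPI_Allreduce(sendbuf, recvbuf, count, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        double local = (MPI_Wtime() - start) / reps;

        /* A collective is only as fast as its slowest participant. */
        double maxtime;
        MPI_Reduce(&local, &maxtime, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("MPI_Allreduce, %d doubles, %d processes: %.6f s (avg over %d reps)\n",
                   count, size, maxtime, reps);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }

Even this tiny sketch raises the questions the research unit studies: how to synchronize clocks across processes, how many repetitions suffice, and how to make such measurements reproducible (see, e.g., the publications on MPI benchmarking and clock synchronization below).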

The Research Unit Parallel Computing is part of the Institute of Computer Engineering.

Sascha Hunold

Associate Professor
Assoc.Prof. Dipl.-Inf. Dr.

Jesper Larsson Träff

Head of Research Unit
Univ.Prof. Dr. / MSc PhD

Majid Salimibeni

PostDoc Researcher
PhD

Ioannis Vardas

PreDoc Researcher
MSc

Ulrike Weisz

Office Services
Mag. Dr.

2021

  • A Doubly-pipelined, Dual-root Reduction-to-all Algorithm and Implementation / Träff, J. L. (2021). A Doubly-pipelined, Dual-root Reduction-to-all Algorithm and Implementation. arXiv. https://doi.org/10.48550/arXiv.2109.12626
  • A more pragmatic implementation of the lock-free, ordered, linked list / Träff, J. L., & Pöter, M. (2021). A more pragmatic implementation of the lock-free, ordered, linked list. In J. Lee & E. Petrank (Eds.), Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. ACM. https://doi.org/10.1145/3437801.3441579
  • MicroBench Maker: Reproduce, Reuse, Improve / Hunold, S., Ajanohoun, J. I., & Carpen-Amarie, A. (2021). MicroBench Maker: Reproduce, Reuse, Improve. In 2021 International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS). 12th IEEE International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS 2021) in conjunction with SC 2021, St. Louis, Missouri, USA. IEEE. https://doi.org/10.1109/pmbs54543.2021.00013
    Project: Autotune (2021–2025)
  • Teaching Complex Scheduling Algorithms / Hunold, S., & Przybylski, B. (2021). Teaching Complex Scheduling Algorithms. In 2021 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). 11th NSF/TCPP Workshop on Parallel and Distributed Computing Education (EduPar 2021) in conjunction with 35th IEEE IPDPS 2021 - Online Conference, Portland, Oregon, USA. IEEE. https://doi.org/10.1109/ipdpsw52791.2021.00058
  • MPI collective communication through a single set of interfaces: A case for orthogonality / Träff, J. L., Hunold, S., Mercier, G., & Holmes, D. J. (2021). MPI collective communication through a single set of interfaces: A case for orthogonality. Parallel Computing: Systems & Applications, 107(102826), 102826. https://doi.org/10.1016/j.parco.2021.102826
    Project: Process Mapping (2019–2024)

2020

  • Special issue: Selected papers from EuroMPI 2019 / Träff, J. L., & Hoefler, T. (2020). Special issue: Selected papers from EuroMPI 2019. Parallel Computing, 99, Article 102695. https://doi.org/10.1016/j.parco.2020.102695
  • High-Quality Hierarchical Process Mapping / Faraj, M. F., van der Grinten, A., Meyerhenke, H., Träff, J. L., & Schulz, C. (2020). High-Quality Hierarchical Process Mapping. arXiv. https://doi.org/10.48550/arXiv.2001.07134
  • k-ported vs. k-lane Broadcast, Scatter, and Alltoall Algorithms / Träff, J. L. (2020). k-ported vs. k-lane Broadcast, Scatter, and Alltoall Algorithms. arXiv. https://doi.org/10.48550/arXiv.2008.12144
  • Efficient Process-to-Node Mapping Algorithms for Stencil Computations / Hunold, S., von Kirchbach, K., Lehr, M., Schulz, C., & Träff, J. L. (2020). Efficient Process-to-Node Mapping Algorithms for Stencil Computations. arXiv. https://doi.org/10.48550/arXiv.2005.09521
    Project: Process Mapping (2019–2024)
  • Decomposing MPI Collectives for Exploiting Multi-lane Communication / Träff, J. L. (2020). Decomposing MPI Collectives for Exploiting Multi-lane Communication. SPCL_Bcast, ETH Zürich, Zürich, Switzerland. http://hdl.handle.net/20.500.12708/87082
  • High-Quality Hierarchical Process Mapping / Faraj, M. F., van der Grinten, A., Meyerhenke, H., Träff, J. L., & Schulz, C. (2020). High-Quality Hierarchical Process Mapping. In S. Faro & D. Cantone (Eds.), 18th International Symposium on Experimental Algorithms, SEA 2020 (pp. 4:1-4:15). Schloss Dagstuhl - Leibniz-Zentrum für Informatik. https://doi.org/10.4230/LIPIcs.SEA.2020.4
    Project: Process Mapping (2019–2024)
  • Decomposing MPI Collectives for Exploiting Multi-lane Communication / Träff, J. L., & Hunold, S. (2020). Decomposing MPI Collectives for Exploiting Multi-lane Communication. In 2020 IEEE International Conference on Cluster Computing (CLUSTER). IEEE International Conference on Cluster Computing (IEEE Cluster 2020) - Online Conference, Kobe, Japan. IEEE. https://doi.org/10.1109/cluster49012.2020.00037
  • Predicting MPI Collective Communication Performance Using Machine Learning / Hunold, S., Bhatele, A., Bosilca, G., & Knees, P. (2020). Predicting MPI Collective Communication Performance Using Machine Learning. In 2020 IEEE International Conference on Cluster Computing (CLUSTER). IEEE International Conference on Cluster Computing (IEEE Cluster 2020) - Online Conference, Kobe, Japan. IEEE. https://doi.org/10.1109/cluster49012.2020.00036
  • Signature Datatypes for Type Correct Collective Operations, Revisited / Träff, J. L. (2020). Signature Datatypes for Type Correct Collective Operations, Revisited. In 27th European MPI Users’ Group Meeting. 27th European MPI Users’ Group Meeting (EuroMPI/USA 2020) - Online Conference, Austin, USA. IEEE. https://doi.org/10.1145/3416315.3416324
  • Collectives and Communicators: A Case for Orthogonality / Träff, J. L., Hunold, S., Mercier, G., & Holmes, D. J. (2020). Collectives and Communicators: A Case for Orthogonality. In 27th European MPI Users’ Group Meeting. 27th European MPI Users’ Group Meeting (EuroMPI/USA 2020) - Online Conference, Austin, USA. IEEE. https://doi.org/10.1145/3416315.3416319
  • Efficient Process-to-Node Mapping Algorithms for Stencil Computations / von Kirchbach, K., Lehr, M., Hunold, S., Schulz, C., & Träff, J. L. (2020). Efficient Process-to-Node Mapping Algorithms for Stencil Computations. In 2020 IEEE International Conference on Cluster Computing (CLUSTER). IEEE International Conference on Cluster Computing (IEEE Cluster 2020) - Online Conference, Kobe, Japan. IEEE. https://doi.org/10.1109/cluster49012.2020.00011
    Project: Process Mapping (2019–2024)
  • Optimizing Memory Access in TCF Processors with Compute-Update Operations / Forsell, M., Roivainen, J., & Träff, J. L. (2020). Optimizing Memory Access in TCF Processors with Compute-Update Operations. In 2020 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). 22nd Workshop on Advances in Parallel and Distributed Computational Models (APDCM 2020) in conjunction with IPDPS 2020 - Online Conference, New Orleans, USA. IEEE. https://doi.org/10.1109/ipdpsw50202.2020.00100
  • Classical and pipelined preconditioned conjugate gradient methods with node-failure resilience / Pachajoa, C., Levonyak, M., Pacher, C., Träff, J. L., & Gansterer, W. (2020). Classical and pipelined preconditioned conjugate gradient methods with node-failure resilience. In A. Schlögl, J. Kiss, & S. Elefante (Eds.), Austrian High-Performance-Computing Meeting (AHPC 2020) (p. 13). IST Austria. https://doi.org/10.15479/AT:ISTA:7474
  • A more Pragmatic Implementation of the Lock-free, Ordered, Linked List / Träff, J. L., & Pöter, M. (2020). A more Pragmatic Implementation of the Lock-free, Ordered, Linked List. arXiv. https://doi.org/10.48550/arXiv.2010.15755
  • Scheduling.jl - Collaborative and Reproducible Scheduling Research with Julia / Hunold, S., & Przybylski, B. (2020). Scheduling.jl - Collaborative and Reproducible Scheduling Research with Julia. arXiv. https://doi.org/10.48550/arXiv.2003.05217
  • Better Process Mapping and Sparse Quadratic Assignment / Kirchbach, K. V., Schulz, C., & Träff, J. L. (2020). Better Process Mapping and Sparse Quadratic Assignment. ACM Journal on Experimental Algorithmics, 25, 1–19. https://doi.org/10.1145/3409667
    Project: Process Mapping (2019–2024)
  • Improved Cartesian Topology Mapping in MPI / Lehr, M., & von Kirchbach, K. (2020). Improved Cartesian Topology Mapping in MPI. In A. Schlögl, J. Kiss, & S. Elefante (Eds.), Austrian High-Performance-Computing Meeting (AHPC 2020) (p. 27). IST Austria. https://doi.org/10.15479/AT:ISTA:7474
  • Exploiting Multi-lane Communication in MPI Collectives / Träff, J. L. (2020). Exploiting Multi-lane Communication in MPI Collectives. In A. Schlögl, J. Kiss, & S. Elefante (Eds.), Austrian High-Performance-Computing Meeting (AHPC 2020) (p. 30). IST Austria. https://doi.org/10.15479/AT:ISTA:7474

2018

  • Stamp-it: A more Thread-efficient, Concurrent Memory Reclamation Scheme in the C++ Memory Model / Pöter, M., & Träff, J. L. (2018). Stamp-it: A more Thread-efficient, Concurrent Memory Reclamation Scheme in the C++ Memory Model. arXiv. https://doi.org/10.48550/arXiv.1805.08639
  • Memory Models for C/C++ Programmers / Pöter, M., & Träff, J. L. (2018). Memory Models for C/C++ Programmers. arXiv. https://doi.org/10.48550/arXiv.1803.04432
  • Parallel Quicksort without Pairwise Element Exchange / Träff, J. L. (2018). Parallel Quicksort without Pairwise Element Exchange. arXiv. https://doi.org/10.48550/arXiv.1804.07494
  • Hierarchical Clock Synchronization in MPI / Hunold, S., & Carpen-Amarie, A. (2018). Hierarchical Clock Synchronization in MPI. In 2018 IEEE International Conference on Cluster Computing (CLUSTER). IEEE International Conference on Cluster Computing, CLUSTER 2018, Belfast, United Kingdom, EU. IEEE. https://doi.org/10.1109/cluster.2018.00050
  • Algorithm Selection of MPI Collectives Using Machine Learning Techniques / Hunold, S., & Carpen-Amarie, A. (2018). Algorithm Selection of MPI Collectives Using Machine Learning Techniques. In 2018 IEEE/ACM Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS). 9th IEEE International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS 2018) in conjunction with SC 2018, Dallas, Texas, USA, Non-EU. IEEE. https://doi.org/10.1109/pmbs.2018.8641622
  • Brief Announcement / Pöter, M., & Träff, J. L. (2018). Brief Announcement. In Proceedings of the 30th on Symposium on Parallelism in Algorithms and Architectures. 30th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2018), Vienna, Austria, Austria. ACM. https://doi.org/10.1145/3210377.3210661
  • Stamp-it, amortized constant-time memory reclamation in comparison to five other schemes / Pöter, M., & Träff, J. L. (2018). Stamp-it, amortized constant-time memory reclamation in comparison to five other schemes. In Proceedings of the 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. 23rd Symposium on Principles and Practice of Parallel Programming (PPoPP 2018), Vienna, Austria. ACM. https://doi.org/10.1145/3178487.3178532
  • Practical, distributed, low overhead algorithms for irregular gather and scatter collectives / Träff, J. L. (2018). Practical, distributed, low overhead algorithms for irregular gather and scatter collectives. Parallel Computing: Systems & Applications, 75, 100–117. https://doi.org/10.1016/j.parco.2018.04.003
    Project: MPI (2013–2018)
  • Supporting concurrent memory access in TCF processor architectures / Forsell, M., Roivainen, J., Leppänen, V., & Träff, J. L. (2018). Supporting concurrent memory access in TCF processor architectures. Microprocessors and Microsystems, 63, 226–236. https://doi.org/10.1016/j.micpro.2018.09.013
  • On Optimal trees for Irregular Gather and Scatter Collectives / Träff, J. L. (2018). On Optimal trees for Irregular Gather and Scatter Collectives. Uppsala University, Uppsala, Sweden, EU. http://hdl.handle.net/20.500.12708/86726
  • Full-Duplex Inter-Group All-to-All Broadcast Algorithms with Optimal Bandwidth / Kang, Q., Träff, J. L., Al-Bahrani, R., Agrawal, A., Choudhary, A., & Liao, W. (2018). Full-Duplex Inter-Group All-to-All Broadcast Algorithms with Optimal Bandwidth. In Proceedings of the 25th European MPI Users’ Group Meeting. 25th European MPI Users’ Group Meeting (EuroMPI 2018), Barcelona, Spain, EU. ACM. https://doi.org/10.1145/3236367.3236374
  • Implementation of Multioperations in Thick Control Flow Processors / Forsell, M., Roivainen, J., Leppänen, V., & Träff, J. L. (2018). Implementation of Multioperations in Thick Control Flow Processors. In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). 20th Workshop on Advances in Parallel and Distributed Computational Models (APDCM 2018) in conjunction with IPDPS 2018, Vancouver, British Columbia, Canada, Non-EU. IEEE. https://doi.org/10.1109/ipdpsw.2018.00121
  • Autotuning MPI Collectives using Performance Guidelines / Hunold, S., & Carpen-Amarie, A. (2018). Autotuning MPI Collectives using Performance Guidelines. In Proceedings of the International Conference on High Performance Computing in Asia-Pacific Region. International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2018), Tokyo, Japan, Non-EU. ACM. https://doi.org/10.1145/3149457.3149461

2017

  • A new and five older Concurrent Memory Reclamation Schemes in Comparison (Stamp-it) / Pöter, M., & Träff, J. L. (2017). A new and five older Concurrent Memory Reclamation Schemes in Comparison (Stamp-it). arXiv. https://doi.org/10.48550/arXiv.1712.06134
  • On Optimal Trees for Irregular Gather and Scatter Collectives / Träff, J. L. (2017). On Optimal Trees for Irregular Gather and Scatter Collectives. arXiv. https://doi.org/10.48550/arXiv.1711.08731
  • Better Process Mapping and Sparse Quadratic Assignment / Schulz, C., & Träff, J. L. (2017). Better Process Mapping and Sparse Quadratic Assignment. arXiv. https://doi.org/10.48550/arXiv.1702.04164
  • Practical, Linear-time, Fully Distributed Algorithms for Irregular Gather and Scatter / Träff, J. L. (2017). Practical, Linear-time, Fully Distributed Algorithms for Irregular Gather and Scatter (1702.05967). arXiv. https://doi.org/10.48550/arXiv.1702.05967
    Project: MPI (2013–2018)
  • VieM v1.00 - Vienna Mapping and Sparse Quadratic Assignment User Guide / Schulz, C., & Träff, J. L. (2017). VieM v1.00 - Vienna Mapping and Sparse Quadratic Assignment User Guide. arXiv. https://doi.org/10.48550/arXiv.1703.05509
  • Micro-benchmarking MPI Neighborhood Collective Operations / Lübbe, F. D. (2017). Micro-benchmarking MPI Neighborhood Collective Operations. In F. F. Rivera, T. F. Pena, & J. C. Cabaleiro (Eds.), Euro-Par 2017: Parallel Processing 23rd International Conference on Parallel and Distributed Computing, Santiago de Compostela, Spain, August 28 – September 1, 2017, Proceedings (pp. 65–78). Springer. https://doi.org/10.1007/978-3-319-64203-1_5
    Project: MPI (2013–2018)
  • Tuning MPI Collectives by Verifying Performance Guidelines / Hunold, S., & Carpen-Amarie, A. (2017). Tuning MPI Collectives by Verifying Performance Guidelines. arXiv. https://doi.org/10.48550/arXiv.1707.09965
  • Better Process Mapping and Sparse Quadratic Assignment / Schulz, C., & Träff, J. L. (2017). Better Process Mapping and Sparse Quadratic Assignment. In C. S. Iliopoulos, S. P. Pissis, S. J. Puglisi, & R. Raman (Eds.), 16th International Symposium on Experimental Algorithms, SEA 2017 (pp. 4:1-4:15). Schloss Dagstuhl - Leibniz-Zentrum für Informatik GmbH. https://doi.org/10.4230/LIPIcs.SEA.2017.4
  • On expected and observed communication performance with MPI derived datatypes / Carpen-Amarie, A., Hunold, S., & Träff, J. L. (2017). On expected and observed communication performance with MPI derived datatypes. Parallel Computing: Systems & Applications, 69, 98–117. https://doi.org/10.1016/j.parco.2017.08.006
    Projects: EPiGRAM (2013–2016) / MPI (2013–2018)
  • Scheduling Independent Moldable Tasks on Multi-Cores with GPUs / Bleuse, R., Hunold, S., Kedad-Sidhoum, S., Monna, F., Mounie, G., & Trystram, D. (2017). Scheduling Independent Moldable Tasks on Multi-Cores with GPUs. IEEE Transactions on Parallel and Distributed Systems, 28(9), 2689–2702. https://doi.org/10.1109/tpds.2017.2675891
  • MPI Is 25 Years Old! / Lusk, E., & Träff, J. L. (2017). MPI Is 25 Years Old! HPCwire, MAY 1. http://hdl.handle.net/20.500.12708/146783
  • Autotuning MPI Collectives using Performance Guidelines / Hunold, S., & Carpen-Amarie, A. (2017). Autotuning MPI Collectives using Performance Guidelines. LIG - Bâtiment IMAG, St Martin d’Hères, France, EU. http://hdl.handle.net/20.500.12708/86599
  • The past 25 years of MPI / Träff, J. L. (2017). The past 25 years of MPI. Panel at ISC High Performance Conference 2017 - The HPC Event, Intel booth, Frankfurt, Germany, EU. http://hdl.handle.net/20.500.12708/86517
  • Fast Processing of MPI Derived Datatypes? / Träff, J. L. (2017). Fast Processing of MPI Derived Datatypes? Mini Workshop Algorithms Engineering, Uni Wien, Vienna, Austria, Austria. http://hdl.handle.net/20.500.12708/86518
  • High Performance Expectations for MPI / Träff, J. L. (2017). High Performance Expectations for MPI. Friedrich-Alexander-Universität Erlangen-Nürnberg, Prof. Dr. Gerhard Wellein, Erlangen, Germany, EU. http://hdl.handle.net/20.500.12708/86505
  • Is Gossip-inspired reduction competitive in high performance computing? / Wimmer, E. (2017). Is Gossip-inspired reduction competitive in high performance computing? International Workshop on Parallel Numerics (PARNUM 2017), Waischenfeld, Germany, EU. http://hdl.handle.net/20.500.12708/86504
  • Euro-Par 2016: Parallel Processing Workshops / Desprez, F., Dutot, P.-F., Kaklamanis, C., Marchal, L., Molitorisz, K., Ricci, L., Scarano, V., Vega-Rodriguez, M. A., Varbanescu, A. L., Hunold, S., Scott, S. L., Lankes, S., & Weidendorfer, J. (Eds.). (2017). Euro-Par 2016: Parallel Processing Workshops. Springer Nature Switzerland AG 2021. https://doi.org/10.1007/978-3-319-58943-5
  • Exploiting Common Neighborhoods to Optimize MPI Neighborhood Collectives / Mirsadeghi, S. H., Träff, J. L., Balaji, P., & Afsahi, A. (2017). Exploiting Common Neighborhoods to Optimize MPI Neighborhood Collectives. In 2017 IEEE 24th International Conference on High Performance Computing (HiPC). 24th IEEE International Conference on High Performance Computing (HiPC 2017), Jaipur, India, Non-EU. IEEE. https://doi.org/10.1109/hipc.2017.00047
  • Supporting concurrent memory access in TCF-aware processor architectures / Forsell, M., Roivainen, J., Leppänen, V., & Träff, J. L. (2017). Supporting concurrent memory access in TCF-aware processor architectures. In J. Nurmi, M. Vesterbacka, J. J. Wikner, A. Alvandpour, M. Nielsen-Lönn, & I. R. Nielsen (Eds.), 2017 IEEE Nordic Circuits and Systems Conference (NORCAS): NORCHIP and International Symposium of System-on-Chip (SoC). IEEE. https://doi.org/10.1109/norchip.2017.8124962
  • Predicting the Energy-Consumption of MPI Applications at Scale Using Only a Single Node / Heinrich, F. C., Cornebize, T., Degomme, A., Legrand, A., Carpen-Amarie, A., Hunold, S., Orgerie, A.-C., & Quinson, M. (2017). Predicting the Energy-Consumption of MPI Applications at Scale Using Only a Single Node. In 2017 IEEE International Conference on Cluster Computing (CLUSTER). IEEE International Conference on Cluster Computing (CLUSTER 2017), Honolulu, Hawaii, USA, Non-EU. IEEE. https://doi.org/10.1109/cluster.2017.66
  • Practical, linear-time, fully distributed algorithms for irregular gather and scatter / Träff, J. L. (2017). Practical, linear-time, fully distributed algorithms for irregular gather and scatter. In Proceedings of the 24th European MPI Users’ Group Meeting on - EuroMPI ’17. 24th European MPI Users’ Group Meeting (EuroMPI/USA 2017), Chicago, IL, USA, Non-EU. ACM. https://doi.org/10.1145/3127024.3127025
    Project: MPI (2013–2018)
  • Introduction to REPPAR Workshop / Hunold, S., Legrand, A., & Nussbaum, L. (2017). Introduction to REPPAR Workshop. In 2017 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW). IEEE. https://doi.org/10.1109/ipdpsw.2017.221
  • High Performance Expectations for MPI / Träff, J. L. (2017). High Performance Expectations for MPI. In G. Baumgartner & J. Courian (Eds.), AHPC 2017, Austrian HPC Meeting 2017 (p. 33). FSP Scientific Computing, University of Innsbruck. http://hdl.handle.net/20.500.12708/56920

2016

  • Message-Combining Algorithms for Isomorphic, Sparse Collective Communication / Träff, J. L., Carpen-Amarie, A., Hunold, S., & Rougier, A. (2016). Message-Combining Algorithms for Isomorphic, Sparse Collective Communication. arXiv. https://doi.org/10.48550/arXiv.1606.07676
  • Benchmarking Concurrent Priority Queues: Performance of k-LSM and Related Data Structures / Gruber, J., Träff, J. L., & Wimmer, M. (2016). Benchmarking Concurrent Priority Queues: Performance of k-LSM and Related Data Structures. arXiv. https://doi.org/10.48550/arXiv.1603.05047
  • PGMPI: Automatically Verifying Self-Consistent MPI Performance Guidelines / Hunold, S., Carpen-Amarie, A., Lübbe, F. D., & Träff, J. L. (2016). PGMPI: Automatically Verifying Self-Consistent MPI Performance Guidelines. arXiv. https://doi.org/10.48550/arXiv.1606.00215
    Projects: MPI (2013–2018) / ReproPC (2013–2016)
  • MPI Derived Datatypes: Performance Expectations and Status Quo / Carpen-Amarie, A., Hunold, S., & Träff, J. L. (2016). MPI Derived Datatypes: Performance Expectations and Status Quo. arXiv. https://doi.org/10.48550/arXiv.1607.00178
    Projects: EPiGRAM (2013–2016) / MPI (2013–2018)
  • The EPiGRAM Project: Preparing Parallel Programming Models for Exascale / Markidis, S., Peng, I. B., Larsson Träff, J., Rougier, A., Bartsch, V., Machado, R., Rahn, M., Hart, A., Holmes, D., Bull, M., & Laure, E. (2016). The EPiGRAM Project: Preparing Parallel Programming Models for Exascale. In M. Taufer, B. Mohr, & J. M. Kunkel (Eds.), High Performance Computing : ISC High Performance 2016 International Workshops, ExaComm, E-MuCoCoS, HPC-IODC, IXPUG, IWOPH, P^3MA, VHPC, WOPSSS, Frankfurt, Germany, June 19–23, 2016, Revised Selected Papers (pp. 56–68). Springer International Publishing. https://doi.org/10.1007/978-3-319-46079-6_5
    Project: EPiGRAM (2013–2016)
  • The art of benchmarking MPI libraries / Hunold, S. (2016). The art of benchmarking MPI libraries. Austrian HPC Meeting 2016 - AHPC16, Grundlsee, Austria. http://hdl.handle.net/20.500.12708/86269
  • Brief Announcement: Benchmarking Concurrent Priority Queues: / Gruber, J., Träff, J. L., & Wimmer, M. (2016). Brief Announcement: Benchmarking Concurrent Priority Queues: In SPAA ’16: Proceedings of the 28th ACM Symposium on Parallelism in Algorithms and Architectures (pp. 361–362). ACM. https://doi.org/10.1145/2935764.2935803
  • Viewpoint: (Mis)Managing Parallel Computing Research through EU Project Funding / Träff, J. L. (2016). Viewpoint: (Mis)Managing Parallel Computing Research through EU Project Funding. Communications of the ACM, 59(12), 46–48. https://doi.org/10.1145/2948893
  • Governing energy consumption in Hadoop through CPU frequency scaling: An analysis / Ibrahim, S., Phan, T.-D., Carpen-Amarie, A., Chihoub, H.-E., Moise, D., & Antoniu, G. (2016). Governing energy consumption in Hadoop through CPU frequency scaling: An analysis. Future Generation Computer Systems: The International Journal of EScience, 54, 219–232. http://hdl.handle.net/20.500.12708/148922
  • Editorial: Special Issue: Euro-Par 2015 / Lengauer, C., Bougé, L., & Träff, J. L. (2016). Editorial: Special Issue: Euro-Par 2015. Concurrency and Computation: Practice and Experience, 28(12), 3445–3446. http://hdl.handle.net/20.500.12708/148865
  • Polynomial-Time Construction of Optimal MPI Derived Datatype Trees / Träff, J. L. (2016). Polynomial-Time Construction of Optimal MPI Derived Datatype Trees. Leibniz-Rechenzentrum (LRZ), Garching bei München, Germany, EU. http://hdl.handle.net/20.500.12708/86364
  • On The Power of Structured Data in MPI / Träff, J. L. (2016). On The Power of Structured Data in MPI. Guest Lecture of the course: Parallel and High Performance Computing, LMU Munich, Munich, Germany, EU. http://hdl.handle.net/20.500.12708/86357
    Project: MPI (2013–2018)
  • The Art of MPI Benchmarking / Hunold, S. (2016). The Art of MPI Benchmarking. 45th SPEEDUP Workshop on High-Performance Computing, Basel, Switzerland, Non-EU. http://hdl.handle.net/20.500.12708/86310
  • Tutorial: Effective MPI Programming: concepts, advanced features, do's and dont's / Träff, J. L. (2016). Tutorial: Effective MPI Programming: concepts, advanced features, do’s and dont’s. Tutorial on MPI at the 22nd International European Conference on Parallel and Distributed Computing (Euro-Par 2016), Grenoble, France, EU. http://hdl.handle.net/20.500.12708/86292
  • The Art of MPI Benchmarking / Hunold, S. (2016). The Art of MPI Benchmarking. Lunchtime Seminar, Department of Computer Science, University of Innsbruck, Innsbruck, Austria, Austria. http://hdl.handle.net/20.500.12708/86282
  • Clock Synchronization Algorithms and SimGrid / Hunold, S. (2016). Clock Synchronization Algorithms and SimGrid. SimGrid User Days, CNRS center Villa Clythia, Fréjus, France, EU. http://hdl.handle.net/20.500.12708/86260
  • Effective MPI Programming: Concepts, Advanced Features, Do's and Don'ts / Träff, J. L. (2016). Effective MPI Programming: Concepts, Advanced Features, Do’s and Don’ts. Vienna Scientific Cluster: VSC School Seminar, TU Wien, Vienna, Austria, Austria. http://hdl.handle.net/20.500.12708/86253
  • Polynomial-Time Construction of Optimal MPI Derived Datatype Trees / Ganian, R., Kalany, M., Szeider, S., & Träff, J. L. (2016). Polynomial-Time Construction of Optimal MPI Derived Datatype Trees. In 2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS). IEEE 30th International Parallel and Distributed Processing Symposium (IPDPS 2016), Chicago, Illinois, USA, Non-EU. IEEE Computer Society. https://doi.org/10.1109/ipdps.2016.13
    Project: EPiGRAM (2013–2016)
  • The art of benchmarking MPI libraries / Hunold, S., Carpen-Amarie, A., & Träff, J. L. (2016). The art of benchmarking MPI libraries. In I. Reichl, C. Blaas-Schenner, & J. Zabloudil (Eds.), Austrian HPC Meeting 2016 - AHPC 2016 (p. 45). Vienna Scientific Cluster (VSC). http://hdl.handle.net/20.500.12708/56921
  • On the Expected and Observed Communication Performance with MPI Derived Datatypes / Carpen-Amarie, A., Hunold, S., & Träff, J. L. (2016). On the Expected and Observed Communication Performance with MPI Derived Datatypes. In D. Holmes, A. Collis, J. L. Träff, & L. Smith (Eds.), Proceedings of the 23rd European MPI Users’ Group Meeting. ACM. https://doi.org/10.1145/2966884.2966905
    Projects: EPiGRAM (2013–2016) / MPI (2013–2018)
  • A Library for Advanced Datatype Programming / Träff, J. L. (2016). A Library for Advanced Datatype Programming. In D. Holmes, A. Collis, J. L. Träff, & L. Smith (Eds.), Proceedings of the 23rd European MPI Users’ Group Meeting. ACM. https://doi.org/10.1145/2966884.2966904
    Project: EPiGRAM (2013–2016)
  • High Performance Parallel Summed-Area Table Kernels for Multi-core and Many-core Systems / Papatriantafyllou, A., & Sacharidis, D. (2016). High Performance Parallel Summed-Area Table Kernels for Multi-core and Many-core Systems. In P.-F. Dutot & D. Trystram (Eds.), Euro-Par 2016: Parallel Processing (pp. 306–318). Springer International Publishing. https://doi.org/10.1007/978-3-319-43659-3_23
  • Automatic Verification of Self-consistent MPI Performance Guidelines / Hunold, S., Carpen-Amarie, A., Lübbe, F. D., & Träff, J. L. (2016). Automatic Verification of Self-consistent MPI Performance Guidelines. In P.-F. Dutot & D. Trystram (Eds.), Euro-Par 2016: Parallel Processing (pp. 433–446). Springer International Publishing. https://doi.org/10.1007/978-3-319-43659-3_32
    Projects: MPI (2013–2018) / ReproPC (2013–2016)
  • Proceedings of the 23rd European MPI Users' Group Meeting, EuroMPI 2016 / Holmes, D., Collis, A., Träff, J. L., & Smith, L. (Eds.). (2016). Proceedings of the 23rd European MPI Users’ Group Meeting, EuroMPI 2016. ACM. http://hdl.handle.net/20.500.12708/24173

2014

  • Preface: Selected Papers from EuroMPI 2012 / Träff, J. L., & Benkner, S. (2014). Preface: Selected Papers from EuroMPI 2012. Computing, 96(4), 259–261. https://doi.org/10.1007/s00607-013-0335-z
  • An improved, easily computable combinatorial lower bound for weighted graph bipartitioning / Träff, J. L., & Wimmer, M. (2014). An improved, easily computable combinatorial lower bound for weighted graph bipartitioning. arXiv. https://doi.org/10.48550/arXiv.1410.0462
  • Stepping Stones to Reproducible Research: A Study of Current Practices in Parallel Computing / Carpen-Amarie, A., Rougier, A., & Lübbe, F. D. (2014). Stepping Stones to Reproducible Research: A Study of Current Practices in Parallel Computing. In L. Lopes, J. Zilinskas, A. Costan, R. G. Cascella, G. Kecskemeti, E. Jeannot, M. Cannataro, L. Ricci, S. Benkner, S. Petit, V. Scarano, J. Gracia, S. Hunold, S. L. Scott, S. Lankes, C. Lengauer, J. Carretero, J. Breitbart, & M. Alexander (Eds.), Euro-Par 2014: Parallel Processing Workshops Euro-Par 2014 International Workshops, Porto, Portugal, August 25-26, 2014, Revised Selected Papers, Part I (pp. 499–510). Springer International Publishing. https://doi.org/10.1007/978-3-319-14325-5_43
  • Pheet meets C++11 / Pöter, M. (2014). Pheet meets C++11. arXiv. https://doi.org/10.48550/arXiv.1411.1951
  • Perfectly Load-Balanced, Stable, Synchronization-Free Parallel Merge / Siebert, C., & Träff, J. L. (2014). Perfectly Load-Balanced, Stable, Synchronization-Free Parallel Merge. Parallel Processing Letters, 24(01), 1450005. https://doi.org/10.1142/s0129626414500054
  • Reproducible MPI Micro-Benchmarking Isn't As Easy As You Think / Hunold, S., Carpen-Amarie, A., & Träff, J. L. (2014). Reproducible MPI Micro-Benchmarking Isn’t As Easy As You Think. Research Group Theory and Applications of Algorithms, University of Vienna, Vienna, Austria, Austria. http://hdl.handle.net/20.500.12708/85872
    Projects: MPI (2013–2018) / ReproPC (2013–2016)
  • One Step towards Bridging the Gap between Theory and Practice in Moldable Task Scheduling with Precedence Constraints / Hunold, S. (2014). One Step towards Bridging the Gap between Theory and Practice in Moldable Task Scheduling with Precedence Constraints. AIT Austrian Institute of Technology, Seibersdorf, Austria, Austria. http://hdl.handle.net/20.500.12708/85871
    Project: ReproPC (2013–2016)
  • The Power of Structured Data in MPI / Träff, J. L. (2014). The Power of Structured Data in MPI. Compiler Technology and Computer Architecure Group at the University of Hertfordshire, Hertfordshire, United Kingdom, EU. http://hdl.handle.net/20.500.12708/85832
  • The Power of Structured Data in MPI / Träff, J. L. (2014). The Power of Structured Data in MPI. Research Group Theory and Applications of Algorithms and Research Group Scientific Computing, University of Vienna, Vienna, Austria, Austria. http://hdl.handle.net/20.500.12708/85825
  • Moldable Task Scheduling: Theory and Practice / Hunold, S. (2014). Moldable Task Scheduling: Theory and Practice. Workshop on New Challenges in Scheduling Theory, Aussois, France, EU. http://hdl.handle.net/20.500.12708/85817
  • Reproducibility of Experiments: It's about the WHO and less the HOW / Hunold, S. (2014). Reproducibility of Experiments: It’s about the WHO and less the HOW. Panel on reproducible research methodologies and new publication models, 4th International Workshop on Adaptive Self-tuning Computing Systems (ADAPT 2014) co-located with HiPEAC 2014, Vienna, Austria, Austria. http://hdl.handle.net/20.500.12708/85814
    Project: ReproPC (2013–2016)
  • One Step towards Bridging the Gap between Theory and Practice in Moldable Task Scheduling with Precedence Constraints / Hunold, S. (2014). One Step towards Bridging the Gap between Theory and Practice in Moldable Task Scheduling with Precedence Constraints. 9th Scheduling for Large Scale Systems Workshop, Lyon, France, EU. http://hdl.handle.net/20.500.12708/85812
  • The Power of Structured Data in MPI / Träff, J. L. (2014). The Power of Structured Data in MPI. I3MS Seminar Series, Aachen GRS, RWTH Aachen, Aachen, Germany, EU. http://hdl.handle.net/20.500.12708/85805
    Projects: EPiGRAM (2013–2016) / MPI (2013–2018)
  • Datatypes in Exascale message-passing / Rougier, A. (2014). Datatypes in Exascale message-passing. 3rd Vienna Scientific Cluster User Workshop, Neusiedl am See, Austria. http://hdl.handle.net/20.500.12708/85788
  • Multi-core prefix-sums / Papatriantafyllou, A. (2014). Multi-core prefix-sums. 3rd Vienna Scientific Cluster User Workshop, Neusiedl am See, Austria. http://hdl.handle.net/20.500.12708/85787
  • Implementing a classic: zero-copy all-to-all communication with MPI datatypes / Träff, J. L. (2014). Implementing a classic: zero-copy all-to-all communication with MPI datatypes. Department of Computer Science, University of Copenhagen, Copenhagen, Denmark, EU. http://hdl.handle.net/20.500.12708/85783
  • Euro-Par 2014: Parallel Processing Workshops / Lopes, L., Zilinskas, J., Costan, A., Cascella, R. G., Kecskemeti, G., Jeannot, E., Cannataro, M., Ricci, L., Benkner, S., Petit, S., Scarano, V., Gracia, J., Hunold, S., Scott, S. L., Lankes, S., Lengauer, C., Carretero, J., Breitbart, J., & Alexander, M. (Eds.). (2014). Euro-Par 2014: Parallel Processing Workshops. Springer. https://doi.org/10.1007/978-3-319-14313-2
  • Euro-Par 2014: Parallel Processing Workshops / Lopes, L., Zilinskas, J., Costan, A., Cascella, R. G., Kecskemeti, G., Jeannot, E., Cannataro, M., Ricci, L., Benkner, S., Petit, S., Scarano, V., Gracia, J., Hunold, S., Scott, S. L., Lankes, S., Lengauer, C., Carretero, J., Breitbart, J., & Alexander, M. (Eds.). (2014). Euro-Par 2014: Parallel Processing Workshops. Springer. https://doi.org/10.1007/978-3-319-14325-5
  • Reproducible MPI Micro-Benchmarking Isn't As Easy As You Think / Hunold, S., Carpen-Amarie, A., & Träff, J. L. (2014). Reproducible MPI Micro-Benchmarking Isn’t As Easy As You Think. In J. Dongarra, Y. Ishikawa, & A. Hori (Eds.), Proceedings of the 21st European MPI Users’ Group Meeting. ACM. https://doi.org/10.1145/2642769.2642785
    Projects: MPI (2013–2018) / ReproPC (2013–2016)
  • Optimal MPI Datatype Normalization for Vector and Index-block Types / Träff, J. L. (2014). Optimal MPI Datatype Normalization for Vector and Index-block Types. In J. Dongarra, Y. Ishikawa, & A. Hori (Eds.), Proceedings of the 21st European MPI Users’ Group Meeting. ACM. https://doi.org/10.1145/2642769.2642771
    Project: EPiGRAM (2013–2016)
  • Zero-copy, Hierarchical Gather is not possible with MPI Datatypes and Collectives / Träff, J. L., & Rougier, A. (2014). Zero-copy, Hierarchical Gather is not possible with MPI Datatypes and Collectives. In J. Dongarra, Y. Ishikawa, & A. Hori (Eds.), Proceedings of the 21st European MPI Users’ Group Meeting. ACM. https://doi.org/10.1145/2642769.2642772
  • MPI Collectives and Datatypes for Hierarchical All-to-all Communication / Träff, J. L., & Rougier, A. (2014). MPI Collectives and Datatypes for Hierarchical All-to-all Communication. In J. Dongarra, Y. Ishikawa, & A. Hori (Eds.), Proceedings of the 21st European MPI Users’ Group Meeting. ACM. https://doi.org/10.1145/2642769.2642770
  • Scheduling Moldable Tasks with Precedence Constraints and Arbitrary Speedup Functions on Multiprocessors / Hunold, S. (2014). Scheduling Moldable Tasks with Precedence Constraints and Arbitrary Speedup Functions on Multiprocessors. In R. Wyrzykowski, J. Dongarra, K. Karczewski, & J. Wasniewski (Eds.), Parallel Processing and Applied Mathematics (pp. 13–25). Springer. https://doi.org/10.1007/978-3-642-55195-6_2
  • Towards Efficient Power Management in MapReduce: Investigation of CPU-Frequencies Scaling on Power Efficiency in Hadoop / Ibrahim, S., Moise, D., Chihoub, H.-E., Carpen-Amarie, A., Bougé, L., & Antoniu, G. (2014). Towards Efficient Power Management in MapReduce: Investigation of CPU-Frequencies Scaling on Power Efficiency in Hadoop. In F. Pop & M. Potop-Butucaru (Eds.), Adaptive Resource Management and Scheduling for Cloud Computing (pp. 147–164). Springer International Publishing. https://doi.org/10.1007/978-3-319-13464-2_11
  • Implementing a classic / Träff, J. L., Rougier, A., & Hunold, S. (2014). Implementing a classic. In M. Gerndt, P. Stenström, L. Rauchwerger, B. Miller, & M. Schulz (Eds.), Proceedings of the 28th ACM international conference on Supercomputing - ICS ’14. ACM. https://doi.org/10.1145/2597652.2597662
  • Data structures for task-based priority scheduling / Wimmer, M., Versaci, F., Träff, J. L., Cederman, D., & Tsigas, P. (2014). Data structures for task-based priority scheduling. In Proceedings of the 19th ACM SIGPLAN symposium on Principles and practice of parallel programming - PPoPP ’14. 19th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP 2014, Orlando, Florida, USA, Non-EU. ACM. https://doi.org/10.1145/2555243.2555278

  • Sascha Hunold: Best Short Paper / PMBS@Supercomputing
    2022 / USA
  • Sascha Hunold: Best Paper Award IEEE CLUSTER 2020
    2020 / Japan
  • Jesper Larsson Träff: Innovation Radar: Innovation Title: PGAS-based MPI with interoperability; Innovation Category: Exploration; FP 7 project EPiGRAM
    2018 / Project
  • Sascha Hunold: Best Paper Award EuroMPI/Asia
    2014 / Japan
  • Jesper Larsson Träff: Best Paper Award: "Reproducible MPI Micro-Benchmarking Isn't As Easy As You Think", S. Hunold, A. Carpen-Amarie, J. Träff, 21st European MPI Users' Group Meeting, EuroMPI/ASIA 2014, Kyoto, Japan, September 9-12, 2014
    2014 / Program Chairs of EuroMPI/ASIA 2014 / Japan

Soon, this page will include additional information such as reference projects, conferences, events, and other research activities.

Until then, please visit Parallel Computing’s research profile in TISS.