Bergen Language Design Laboratory (BLDL)

BLDL has an internal meeting series. Some of the meetings have content that may be of interest to a larger audience; their program is announced here.

Contact Magne Haveraaen for more information.

Seminars 2011

  • Wednesday, 2011-04-06 1315, room 2144
    Eva Burrows (BLDL): Code refactoring, code rejuvenation, code transformation and IDEs
    (Trial lecture for PhD).

  • Monday, 2011-05-23 1315, Auditorium at VilVite, Thormøhlensgt 51
    Eva Burrows (BLDL): Programming with Explicit Dependencies: A Framework for Portable Parallel Programming
    (PhD defence).

  • Tuesday, 2011-05-24 1415, room 2144
    Keshav Pingali (University of Texas at Austin, USA): Parallel Programming Needs New Foundations
    When parallel programming started in the 1970s and 1980s, it was mostly art: functional and logic programming languages were designed and appreciated mainly for their elegance and beauty. More recently, parallel programming has become engineering: imperative languages like FORTRAN and C++ have been extended with parallel constructs, and we now spend our time benchmarking and tweaking large programs no one understands to obtain performance improvements of 5-10%. In spite of all this activity, we have few insights into how to write parallel programs that exploit the performance potential of multicore processors.
    In this talk, I will argue that these problems arise largely from the limitations of program-centric abstractions like dependence graphs that we currently use to think about parallelism. I will then propose a novel data-centric abstraction called the operator formulation of algorithms, which reveals that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous in diverse applications ranging from mesh generation/refinement/partitioning to SAT solvers, maxflow algorithms, stencil computations and event-driven simulation. I will also show that the operator formulation can be used to perform a structural analysis of algorithms that can be exploited for efficient implementations of these algorithms. Finally, I will describe a system based on these ideas called Galois for programming multicore processors.
    These considerations suggest that the operator formulation of algorithms might provide a useful foundation for a new theory of parallel programming. (A small code sketch of the worklist pattern behind the operator formulation is given after the seminar list.)

  • Tuesday, 2011-05-24 1515, room 2144
    Lawrence Rauchwerger (Texas A&M University, USA): STAPL: A High Productivity Parallel Programming Environment
    The Standard Template Adaptive Parallel Library (STAPL) is a collection of generic data structures and algorithms that provides a high-productivity parallel programming infrastructure whose design draws heavily on the C++ Standard Template Library (STL). By abstracting much of the complexity of parallelism from the end user, STAPL provides a platform for high productivity by enabling the user to focus on algorithmic design instead of lower-level parallel implementation issues. In this talk, we provide an overview of the major STAPL components, discuss its framework for adaptive algorithm selection, and show that several important scientific applications can be written with relative ease in STAPL and still have scalable performance. (A small STL-style sketch illustrating this division of labour is given after the seminar list.)

  • Thursday, 2011-11-03 1415, room 2142
    Michael Löwe (Department of Computer Science, University of Applied Sciences Hannover, DE): Rule-based Refactoring of Software Systems - A Graph Transformation Approach.
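
A rough illustration of the operator formulation mentioned in Keshav Pingali's abstract above: an algorithm is expressed as a local operator applied to "active" elements held on a worklist, and applying the operator may activate further elements. The sketch below is a minimal, sequential C++ rendering of that pattern with a toy label-propagation operator; all names are illustrative, and it is not the Galois API.

    // Sequential sketch of the worklist pattern behind the "operator
    // formulation": an operator is applied to active nodes, touches only a
    // local neighbourhood, and may create new active nodes.
    // Illustrative code only; this is not the Galois API.
    #include <cstddef>
    #include <deque>
    #include <iostream>
    #include <vector>

    struct Graph {
        std::vector<std::vector<std::size_t>> adj;  // adjacency lists
        std::vector<int> label;                     // per-node data
    };

    // Toy operator: push the minimum label to neighbours; any node whose
    // label changes becomes active again.
    void apply_operator(Graph& g, std::size_t n, std::deque<std::size_t>& worklist) {
        for (std::size_t m : g.adj[n]) {
            if (g.label[n] < g.label[m]) {
                g.label[m] = g.label[n];
                worklist.push_back(m);              // newly activated node
            }
        }
    }

    int main() {
        Graph g;
        g.adj   = {{1, 2}, {0, 2}, {0, 1, 3}, {2}};
        g.label = {0, 5, 7, 9};

        // Initially every node is active. A parallel runtime could process
        // neighbourhood-disjoint active nodes simultaneously; here we simply
        // drain the worklist sequentially.
        std::deque<std::size_t> worklist = {0, 1, 2, 3};
        while (!worklist.empty()) {
            std::size_t n = worklist.front();
            worklist.pop_front();
            apply_operator(g, n, worklist);
        }

        for (int l : g.label) std::cout << l << ' ';  // prints: 0 0 0 0
        std::cout << '\n';
    }

The amorphous data-parallelism of the abstract shows up here as the freedom to process several active nodes at once whenever their neighbourhoods do not overlap.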
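
In the same spirit, the STL-style division of labour described in the STAPL abstract (the programmer composes generic algorithms over containers, while the library decides how to execute them in parallel) can be illustrated with the standard C++17 parallel algorithms. This is only an analogy: STAPL provides its own parallel containers and algorithms, and none of its actual interfaces are shown here.

    // Analogy for the STL-style approach STAPL builds on: generic algorithms
    // over containers, with parallel execution delegated to the library.
    // Uses the standard C++17 parallel algorithms, not STAPL.
    #include <algorithm>
    #include <execution>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<double> v(100'000, 1.0);

        // Element-wise update; the execution policy lets the implementation
        // parallelise the loop without changing the algorithmic code.
        std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
                       [](double x) { return 2.0 * x + 1.0; });

        // Parallel reduction over the same container.
        double sum = std::reduce(std::execution::par, v.begin(), v.end(), 0.0);

        std::cout << sum << '\n';  // prints: 300000
    }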

Rooms 2142 (lille auditorium, the small auditorium) and 2144 (stort auditorium, the large auditorium) are in Høyteknologisenteret, Thormøhlensgt 55.

Previous years