The Call for Workshop Papers is now open!

The following workshops are co-located with Grid 2010.

Please see the individual workshop websites for participation details.

3rd Workshop on Service Level Agreements in Grids


Organisers: Philipp Wieder, Ramin Yahyapour, Wolfgang Ziegler

As the Grid evolves to become an infrastructure for providing and consuming services in research and commercial environments, mechanisms are needed to agree on the objectives and the quality of such service provision. This could be facilitated by means of electronic contracts between service consumers and one or more service providers, in order to achieve the necessary reliability and commitment on both sides. Such contracts help to establish a well-defined relationship between a service provider and a client in the context of a particular service provision. This is especially important if the services or resources to be used come from different administrative domains, or if commercial service provision needs to be supported. In recent years, Service Level Agreements (SLAs) have increasingly been used to establish these kinds of guarantees and relationships. This workshop will provide a forum to present current research and up-to-date solutions from research and business communities.
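The idea of an SLA as an electronic contract between a consumer and a provider can be made concrete with a small sketch. The names and fields below are purely illustrative assumptions, not any standard schema such as WS-Agreement: an SLA bundles service-level objectives, each pairing a measurable target with a penalty owed when it is missed.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: field names and structure are illustrative
# assumptions, not drawn from any real SLA specification.
@dataclass
class ServiceLevelObjective:
    metric: str      # measured quantity, e.g. "availability"
    target: float    # agreed threshold the provider must meet
    penalty: float   # cost to the provider if the target is missed

    def violated(self, measured: float) -> bool:
        return measured < self.target

@dataclass
class SLA:
    provider: str
    consumer: str
    objectives: list = field(default_factory=list)

    def total_penalty(self, measurements: dict) -> float:
        """Sum the penalties of every objective the measurements violate."""
        return sum(o.penalty for o in self.objectives
                   if o.violated(measurements.get(o.metric, 0.0)))

sla = SLA("compute-provider.example", "research-vo.example",
          [ServiceLevelObjective("availability", 0.99, 100.0)])
print(sla.total_penalty({"availability": 0.95}))  # target missed, penalty due
```

In a real deployment the contract would be expressed in an agreed interchange format and monitored by a third party, but the core relationship, objectives plus consequences binding two administrative domains, is the same.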

Topics of interest for the Service Level Agreements in Grids workshop include, but are not limited to:

  • Areas benefiting from SLAs
  • Languages to express SLAs (e.g. SLOs, BLOs, QoS, Guarantees, Penalties)
  • Protocols to conclude and negotiate SLAs
  • Technologies for management and observation of SLAs
  • Static vs. dynamic SLAs
  • Business models & Grid economy
  • Validation techniques for SLA parameters
  • Comparison and matchmaking of SLA and policy descriptions
  • Managing user expectations via SLAs
  • Multi-party SLAs and SLA chaining
  • SLA-based resource discovery, co-scheduling and resource reservation
  • Registry Services and repositories for SLA
  • SLA-based trust management including VOs
  • Non-technical aspects of SLA-based systems like laws or trust
  • SLAs for bursting into the Cloud

Energy Efficient Grids, Clouds and Clusters Workshop 2010 (E2GC2-2010)


Organisers: Hiroshi Nakamura, Jean-Marc Pierson, Jean-Patrick Gelas

The question of energy savings has long been a concern in mobile distributed systems. For large-scale non-mobile distributed systems, however, which nowadays reach impressive sizes, the energy dimension is only beginning to be taken into account. The E2GC2 workshop will focus on green and energy-efficient approaches, ideas, practical solutions, experiments and frameworks dedicated to medium- and large-scale distributed infrastructures such as Grids, clouds and clusters.

Topics of interest addressed by the E2GC2 workshop include, but are not limited to:

  • Green architectures for Grids, Clouds and clusters
  • Large scale energy monitoring systems
  • Energy efficient infrastructures
  • Energy efficient scheduling in Grids and clouds
  • Virtualization impact for energy reduction
  • Energy savings and QoS
  • Reporting and exposing carbon and energy impact
  • Energy efficient large scale applications
  • Energy efficiency benchmarking
  • Energy consumption modeling and prediction
  • Real life experiments
  • Green standards

Workshop on Autonomic Computational Science



Organiser: Shantenu Jha

Strategic investments coupled with technological advances are rapidly realizing a pervasive cyberinfrastructure, both nationally and globally, that integrates computers, networks, data archives, instruments, observatories, experiments, and embedded sensors and actuators. Such a computational ecosystem has the potential to catalyze new thinking in virtually all areas of computational science and engineering, which can lead to unprecedented insights into natural, engineered and human systems.

For example, application formulations can holistically investigate phenomena of interest by combining computations, experiments, observations, and real-time information to understand and manage natural and engineered systems. The emerging computational paradigms and practices enabled by this cyber-ecosystem are naturally distributed and collaborative, and fundamentally data intensive and data driven, as they explore coupled multi-physics and multi-scale formulations and end-to-end application workflows.

Autonomic computing techniques can address various aspects of system behaviour in the context of such infrastructure. Central to the autonomic paradigm are three fundamental separations: (1) a separation of computations from coordination and interactions; (2) a separation of non-functional aspects (e.g. resource requirements, performance) from functional behaviours; and (3) a separation of policy and mechanism: policies in the form of rules are used to orchestrate a repertoire of mechanisms to achieve context-aware, adaptive runtime computational behaviours, and coordination and interaction relationships based on functional, performance, and QoS requirements. For instance, autonomic computing techniques could provide:

  • New and more robust application formulations;
  • Management of unpredictable system behaviour and unforeseen user behaviour and abuse;
  • Better management of energy consumption; and
  • More effective resource management to support scalability, so that resources behave “elastically” at higher usage levels.
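The separation of policy and mechanism described above can be sketched in a few lines. This is a minimal illustration under assumed names, not any particular autonomic framework: mechanisms are the actions the system can perform, while policies are declarative condition-to-mechanism rules that can be changed without touching the actions themselves.

```python
# Illustrative sketch of the policy/mechanism separation; all names
# are hypothetical, not taken from a real autonomic computing system.
def scale_out(state):
    state["nodes"] += 1          # acquire one more resource

def throttle(state):
    state["rate"] *= 0.5         # halve the request rate

# Mechanisms: the repertoire of actions the system knows how to perform.
MECHANISMS = {"scale_out": scale_out, "throttle": throttle}

# Policies: rules mapping observed conditions to mechanism names, kept
# separate from the mechanisms so either side can evolve independently.
POLICIES = [
    (lambda s: s["load"] > 0.8, "scale_out"),
    (lambda s: s["errors"] > 10, "throttle"),
]

def autonomic_step(state):
    """Fire every policy whose condition holds on the current state."""
    for condition, mechanism in POLICIES:
        if condition(state):
            MECHANISMS[mechanism](state)
    return state

state = autonomic_step({"load": 0.9, "errors": 3, "nodes": 2, "rate": 1.0})
print(state["nodes"])  # scale_out fired: 3
```

Because the rules are data rather than code, a running system could swap in a new policy set, for instance one that prioritises energy savings over throughput, without redeploying the mechanisms.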

The 2010 Workshop on Component-Based High Performance Computing (CBHPC 2010)


Organisers: Gabrielle Allen, Thilo Kielmann

Component and framework technology is mainstream for desktop environments, but has lagged in the high-performance computing (HPC) community. The reasons for this stem partly from a general lack of awareness of component concepts in the community, but mostly from the fact that desktop component models sacrifice performance for ease of use. In addition, HPC uniquely requires component-based support for patterns specific to parallel computing, such as the massively parallel single-program multiple-data (SPMD) pattern. Beyond the special requirements of HPC, component concepts promise the same benefits as they provide in the mainstream: participation by tens or hundreds of developers, and the ability to manage the software complexity that the simulation of natural phenomena demands. Likewise, as multi-core architectures become the norm and cloud computing gains popularity, understanding the requirements unique to HPC will enable a new class of commercial HPC applications.

Following the success of past HPC-GECO and CompFrame workshop series, the fifth installment of the workshop, CBHPC 2010, aims to bring together the developers and users of such technologies, and to build an international research community around these issues. This year’s workshop focuses on the role of component and framework technologies in high-performance and scientific computing, and on high-level, component-based and innovative programming tools and environments to efficiently develop high performance applications and exploit them both on individual massively parallel systems and on the Grid.

CBHPC welcomes submissions dealing with high-level and component-based approaches to HPC and Grid Computing:

  • Component models and frameworks
  • Component-based platforms for Grid, Clouds and large-scale facilities
  • Programming environments and paradigms
  • Analysis and comparison of existing programming approaches
  • Design patterns for HPC and Grid applications
  • Integration of different distributed/Grid/HPC programming frameworks
  • Tools and Environments for Coupling of Parallel Application codes
  • Application-level and support-level management of performance, QoS, faults, dynamicity, architecture heterogeneity
  • Application-level QoS contract description and enforcement
  • Advanced middleware systems as a device to efficiently exploit Grid resources (e.g. high-bandwidth, innovative networks) in high-level programming environments
  • Case studies and experiments of large and geographic scale high-level HPC applications, large-scale data/analysis
  • Applicability of software engineering techniques for restructuring and integration
  • High-level approaches for emerging HPC architectures, including clusters of reconfigurable computing units, multicore processors, and other hybrid hardware-accelerator technologies such as GPGPUs, Cell processors, and FPGAs
  • Approaches to component composition, development, deployment, repositories, debugging, and testing for components in HPC environments

Keynote Speakers

Mr. Satoshi Sekiguchi, AIST, Japan
“Grids to Clouds – Common Evolution or yet another Jalapagos?”

Prof. Ignacio M. Llorente, Universidad Complutense de Madrid, Spain
“OpenNebula: Leading Innovation in Cloud Computing Management”

Dan Reed, Microsoft
“Technical Clouds: Seeding Discovery”

Dr. Shantenu Jha, Louisiana State University, USA
“A Fresh Perspective on Distributed Scientific Applications and Cyberinfrastructure”

Important Dates

Feb. 1, 2010 - Technical Paper Submission Open
May 3, 2010 - Workshop Submissions Due
May 12, 2010 - Technical Paper Submission Due - deadline extended!
Jul. 1, 2010 - Paper Acceptance Notifications
Aug. 13, 2010 - Full Papers Due - deadline extended!
Sep. 12, 2010 - Poster Submissions Due
Sep. 25, 2010 - Poster Acceptance Notifications
Oct. 25-28, 2010 - Grid 2010, Brussels, Belgium