This section contains the outputs produced by the ETF so far. These include reports produced under previous incarnations of the ETF that were uploaded to the NeSC digital library.
This document discusses the issues surrounding the use of multiple clouds in a research context, proposing a broker-based approach and discussing the Zeel/i broker used in the JISC FleSSR project as an exemplar implementation.
This document sets out a system for integrating Condor-based Campus Grids with the mathematical packages Matlab and R. It also addresses setting up the necessary prerequisites for the University of Reading Campus Grid: BLCR (checkpointing), Matlab and R. The work has enabled a number of researchers within the University of Reading to use these two valuable tools within a distributed computing environment.
Zeel is a software framework developed by Belfast e-Science (BeSC) to support dynamic network-centric infrastructures and is capable of software deployment and resource management. It has been in active development for more than 5 years and supports the large-scale commercial projects that BeSC have been involved in over this period. We call Zeel a framework because it acts as an integration layer to enable the use of other technologies within our systems without the use of those technologies being directly visible in our software. In essence, Zeel is an integrated collection of (abstract) technology interfaces and associated implementation technology instantiations for those interfaces.
This note describes the way in which a large-scale visualization of 3D heart data was ported to the video wall in the Oxford e-Research Centre (OeRC). The original version of the visualization program was written by Dr Christopher Goodyer for the powerwall at the University of Leeds. It displays isosurfaces of MRI data stacks obtained from a high-resolution scan of a rabbit heart; this data was supplied by Dr Peter Kohl at the University of Oxford. Porting the program was requested by OeRC in order to facilitate wider dissemination of the visualization work.
The Matlab distributed computing engine is tested and analysed on the White Rose Grid. Its installation and use are described with reference to possible deployment onto the National Grid Service. Results from performance and scalability tests are presented and demonstrate the cases in which point-to-point messaging and collective messaging are most beneficial. For computation using large matrices, the transport tests demonstrated that collective messaging scales much better than point-to-point messaging.
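The scaling difference the tests observed can be illustrated with a standard communication cost model. The following sketch is illustrative only: the latency and bandwidth constants are assumptions, not measurements from the White Rose Grid evaluation, and the model simply contrasts a root sending to every worker in turn with a binomial-tree collective broadcast.

```python
import math

# Hedged sketch: an alpha-beta (latency + per-byte) cost model showing why
# a tree-based collective broadcast outperforms naive point-to-point sends
# as the worker count grows. Constants are illustrative assumptions.

def p2p_broadcast_cost(p, n, alpha=1e-4, beta=1e-8):
    """Root sends the n-byte message to each of the p-1 workers in turn."""
    return (p - 1) * (alpha + n * beta)

def tree_broadcast_cost(p, n, alpha=1e-4, beta=1e-8):
    """Binomial-tree broadcast: ceil(log2(p)) communication rounds."""
    return math.ceil(math.log2(p)) * (alpha + n * beta)

if __name__ == "__main__":
    # For a large matrix (8 MB) on 64 workers, the tree needs 6 rounds
    # instead of 63 sequential sends.
    p, n = 64, 8 * 1024 * 1024
    print(p2p_broadcast_cost(p, n), tree_broadcast_cost(p, n))
```

Under this model the point-to-point cost grows linearly in the number of workers while the collective cost grows only logarithmically, consistent with the large-matrix transport results reported above.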
The Application Hosting Environment (AHE) is a lightweight hosting environment that allows scientists to run applications on grid resources in a quick, transparent manner. The AHE provides resource selection, application launching, workflow execution, provenance and data recovery, exposing a WSRF-compatible interface for job management on remote grid resources, using WSRF::Lite as its middleware. The review is based on versions 1.0.1 and 1.0.2 of the AHE Server. Version 1.0.1 has a stand-alone installation process, while version 1.0.2 is included in the OMII stack.
This report focuses on the GridSAM system developed by the London e-Science Centre and distributed by the Open Middleware Infrastructure Institute (OMII) at Southampton. The system manages job submission to a variety of resources via a web service interface. The main purpose of this evaluation is to examine the strengths and weaknesses of the system and to identify the issues that need to be considered before deploying it in a production environment. GridSAM appears to be an extremely useful tool for large-scale Grid service providers, if only because it provides a standardised interface to the many different platforms and middleware that can exist on infrastructures such as the National Grid Service (NGS). It provides an alternative to the portal-style approach, using web service standards to implement a standardised interface; portals could, however, be written to submit jobs via GridSAM. Ongoing work to modularise and simplify the implementation should make GridSAM an invaluable part of the NGS infrastructure. The Belfast e-Science Centre is currently providing a production deployment of GridSAM instances for all of the NGS resources. In addition, GridSAM instances are being deployed at other sites to provide local job submission to NGS resources, which should provide a good testing ground for GridSAM.
The Exludus Replicator file-sharing software is tested and analysed on the White Rose Grid. Its installation and use are described with reference to possible deployment onto the National Grid Service. A small set of performance tests against a simple file-serving system (NFS 3) and a more expensive cluster file system (IBM GPFS) is performed. The Replicator's aggregate file transfer speed is found to scale linearly with the number of nodes involved in the tests, giving far superior performance to NFS in cases where large numbers of nodes require read access to a single file from the file server.
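The linear scaling result has a simple intuition: with a single NFS server, aggregate read bandwidth is capped by the server's network link, whereas a replicator-style scheme lets every node holding a copy re-serve it to others. The toy model below makes this concrete; the link capacities are illustrative assumptions, not figures from the tests reported above.

```python
# Hedged sketch: a toy throughput model contrasting a single file server
# (aggregate reads capped by one server link) with replicator-style
# peer distribution (each node re-serves the file). Link speeds are
# illustrative assumptions, not measurements from the evaluation.

SERVER_LINK_MBPS = 1000.0  # assumed server NIC capacity
NODE_LINK_MBPS = 1000.0    # assumed per-node NIC capacity

def nfs_aggregate_mbps(readers):
    """All readers share the single server link."""
    return min(SERVER_LINK_MBPS, readers * NODE_LINK_MBPS)

def replicator_aggregate_mbps(readers):
    """Nodes re-serve the file, so aggregate bandwidth grows roughly
    linearly with the number of participating nodes."""
    return readers * NODE_LINK_MBPS

if __name__ == "__main__":
    for n in (1, 8, 64):
        print(n, nfs_aggregate_mbps(n), replicator_aggregate_mbps(n))
```

In this model the server-based aggregate flatlines at one link's capacity while the replicator aggregate keeps growing with node count, matching the shape of the result reported in the evaluation.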
The Lightweight SRM Evaluation is a project operated by the UK Grid Engineering Taskforce (ETF). Its purpose is to evaluate a lightweight Storage Resource Manager (SRM) implementation, namely the Disk Pool Manager (DPM) software developed by CERN, for suitability for production deployments on the UK National Grid Service infrastructure. DPM is lightweight in that it implements SRM protocol services in front of pools of disk-based file systems, rather than the more “heavyweight” Mass Storage Systems (MSS), which might include tape archive systems as well as disk pools. It is also lightweight in that it is intended to be low maintenance and easy to run with good performance. This evaluation finds that deployment of the software in the NGS environment should be straightforward, and recommends that the NGS avail itself of the expertise that exists within the GridPP Storage group in planning, executing and supporting the rollout of SRM within the NGS. The NGS DPM services could be “plugged into” the GridPP Storage group's testing infrastructure and publish themselves to appropriate Index Information servers; storage accounting work has already been done and should be adopted by the NGS. The only successful deployments of DPM were via RPMs on Red Hat Enterprise Linux/Scientific Linux installations. The best approach for NGS sites might be to provision additional servers on which to deploy DPM services in front of the disk resources to be managed as pools.
CROWN 1.0 is a Web Service hosting environment based on an earlier release of the Globus Toolkit. It allows services to be deployed, undeployed, and redeployed remotely without a container restart (hot deployment). It contains a security system that allows fine-grained authentication and authorisation control at both Node and Service levels. CROWN nodes can be federated into a Grid, and there is also an Eclipse plug-in development tool. The evaluation team installed the software on a variety of machines and successfully tested remote deployment, interoperability, and access control. In general we conclude that CROWN is suitable for deployment on NGS machines.
The ETF Service Registries Workpackage was to establish a pilot services Registry for the UK e-Science Programme which was:
• geographically distributed;
• load-balanced;
• fully redundant/replicated.
For this purpose the group evaluated the use of UDDI, with UDDI nodes deployed at four locations: Daresbury, NeSC, OeSC and WeSC. The evaluation work took place between June 2004 and December 2004.
This report provides an evaluation of the Resource Aware Visualisation Environment (RAVE) as developed by the Welsh e-Science Centre (WeSC) in Cardiff. It reports on a series of operations carried out by members of the Grid Engineering Task Force (ETF), including the installation, deployment and use of RAVE services. The report is written with a view to assessing the suitability of RAVE for facilitating a national visualisation service. The criteria reported on for the purposes of this evaluation include: installation and deployment for clients, installation and deployment for servers, documentation, ease of use, functionality and future scalability. All technical issues encountered by the evaluators have been documented. The RAVE evaluation is part of a series of middleware evaluations conducted by the ETF. It stands alone and does not compare the application to other visualisation packages.
Report summarising the outputs of the EU-IndiaGrid Workshop, March 2011
This document is a revised report of the UK Engineering Task Force (ETF) Globus Toolkit 4 (GT4) Middleware Evaluation team and reports on the current state of the GT4 toolkit. The GT4 middleware evaluation was initiated to assess the suitability of GT4 for future ETF and UK e-Science Projects. The evaluation began in late November 2004 and was suspended in February 2005; the evaluation was re-activated in May 2005 to evaluate the first release of GT4 and to consider interoperability of GT2 software and the pre-WS components in the GT4 release.
The UK’s Engineering Task Force (ETF) is evaluating several Grid middleware solutions in order to take a view on their deployability on the resources within the National Grid Service (NGS) and those of the wider UK e-Science community. These systems include those from GridSystems, the Open Middleware Infrastructure Institute (OMII), the EGEE project and the Globus Alliance. This report presents the results of the evaluation of the OMII system.
An evaluation of the ARC middleware by the University of Leeds
A report detailing the integration of SARoNGS with the portal infrastructure for the National e-Infrastructure for Social Simulation.
Materials and report from the ETF-sponsored "Integrating Multi-Touch and Interactive Surfaces into the Research Environment" workshop.
Key to broadening participation in grid computing is the provision of easy-to-use access mechanisms and user interfaces, allowing a wide range of users with different skill sets to access the computational and data resources on offer. The Application Hosting Environment is one such middleware tool, hiding much of the complexity of dealing with grid resources from the user and allowing them to interact with applications rather than machines. The nature of the AHE means that it can be used as a single interface to a wide variety of resources, ranging from those provided at a departmental or institutional level to international federated grids of supercomputers. The AHE has users in a number of academic institutions in the UK and beyond, mostly using it to run simulations on resources such as the UK's National Grid Service, including the high-end machine HECToR, as well as the US TeraGrid and the EU DEISA grid. In addition, a number of groups and projects worldwide are either deploying or evaluating the AHE. These include both scientific projects looking to deploy one or two applications for use by project members and grid resource providers aiming to host a number of different applications for use by a wide community. In this white paper we give a brief overview of the Application Hosting Environment software, and then look at several case studies of groups and projects that are currently making use of the AHE.
OeRC, in conjunction with the University of Reading, has developed the ability to deploy Linux-based Condor instances on Windows machines using the lightweight CoLinux kernel for Windows. The novel MSI installer automatically detects the memory and network configuration of the host and configures the guest OS accordingly. This configuration allows the whole Linux system to appear as a Windows service that requires only a handful of ports and no extra dedicated network interfaces. When combined with a standard Windows Condor install, this allows the creation of very flexible Condor pools.
OpenStack is a set of components for implementing an Infrastructure as a Service cloud. The project was founded by Rackspace Hosting and NASA in July 2010 and is supported by many other companies. It allows a user to manage a large number of virtual machines, including associated storage and networking. In addition, an object store is provided for long-term storage of static or rarely changing data. It is at least partly compatible with Amazon's Elastic Compute Cloud (EC2) and Simple Storage Service (S3). A modest deployment of the major components was carried out for this evaluation. OpenStack provides a rich set of features for implementing an Infrastructure as a Service cloud, but not all documented features work as advertised, and it would be challenging to deploy and maintain a production-quality system at present. However, this is likely to change in the near future, given that OpenStack development is progressing at a rapid pace.