H.J.Burckhart, 13/2/2003

ATLAS DCS

  1. Overview

The ATLAS DCS is composed of a Back-End (BE) system, which will be implemented with the commercial SCADA package PVSS-II running on PCs under both Windows and Linux, and of several Front-End (FE) systems. The BE will be distributed over the control room on the surface and several electronics rooms underground, see Fig 1. The FE equipment is mainly situated in the experimental cavern.

Fig 1: Organization of ATLAS DCS

The BE is organized in three levels (Fig 2). The top level comprises the Global Control Stations (GCS), providing functions such as the operator interface for commands, data display, the alarm and status system, and a Web interface. Via an information server it also connects to systems external to ATLAS, such as the LHC accelerator and the CERN services.

The next level down consists of the Subdetector Control Stations (SCS), which allow the standalone operation of the individual subdetectors. There is also one SCS for the Common Infrastructure Control (CIC). The data acquisition system (DAQ) interacts mainly with these SCS via a dedicated PVSS application.

The Local Control Stations (LCS) at the lowest level connect to the FE; they read out the data, analyze them and send them to mass storage.

Fig 2: Hierarchical organization of the Back-End system
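The hierarchy shown in Fig 2 will be realized with PVSS-II and the JCOP framework tools. Purely to illustrate the command and state flow in such a tree (this is not PVSS-II code), the following Python sketch models control stations that propagate commands downwards and summarize states upwards; all class, state and station names are invented for the example.

    # Minimal sketch (not PVSS-II code) of the BE hierarchy: commands flow from
    # the GCS down through SCS to LCS; each node reports a summary state upwards.
    # Names and states are illustrative only.

    class ControlStation:
        def __init__(self, name, children=None):
            self.name = name
            self.children = children or []
            self.state = "READY"

        def command(self, cmd):
            """Propagate a command (e.g. 'START', 'STOP') to all children."""
            print(f"{self.name}: executing {cmd}")
            for child in self.children:
                child.command(cmd)
            self.state = "RUNNING" if cmd == "START" else "READY"

        def summary_state(self):
            """Summarize own and children's states ('ERROR' dominates)."""
            states = [self.state] + [c.summary_state() for c in self.children]
            return "ERROR" if "ERROR" in states else states[0]


    # Example tree: one GCS, one subdetector SCS, two LCS reading front-end data.
    lcs1 = ControlStation("LCS-Barrel")
    lcs2 = ControlStation("LCS-Endcap")
    scs = ControlStation("SCS-Pixel", [lcs1, lcs2])
    gcs = ControlStation("GCS", [scs])

    gcs.command("START")
    print("Overall state:", gcs.summary_state())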

The FE is the responsibility of the subdetectors. It comprises both commercial devices, like power supplies, and purpose-built systems. The connection mechanism to PVSS is standardized, with OPC being the most widely used one.

Because of ionizing radiation and a strong magnetic field in the experimental cavern the choice of commercial systems is very limited. For standard analog and digital I/O, ATLAS has developed an intelligent I/O concentrator called the Embedded Local Monitor Board (ELMB). It comprises a multiplexed 16-bit ADC with 64 channels and 24 digital I/O ports. A microprocessor provides local data processing and reduction. The ELMB communicates with PVSS via an OPC server, using the CERN-recommended protocol CANopen on the industrial fieldbus CAN. All ATLAS subdetectors use the ELMB for their standard I/O. High density and very low cost are important additional advantages, as about 5000 such units will be needed in ATLAS.
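In practice the ELMB is read out through its CANopen OPC server, so users do not write low-level code themselves. Purely to illustrate what a CANopen read of one ADC channel involves, the following Python sketch builds a standard SDO upload request frame and converts a raw reading to a voltage; the node ID, the object dictionary index/sub-index and the calibration constants are assumptions made for the example, not the actual ELMB dictionary.

    import struct

    def sdo_upload_request(node_id, index, subindex):
        """Build a CANopen 'initiate SDO upload' (read) request.

        Returns (cob_id, 8-byte payload).  COB-ID 0x600 + node ID is the
        standard receive-SDO identifier; command byte 0x40 requests an upload
        of the given object dictionary entry.
        """
        cob_id = 0x600 + node_id
        payload = struct.pack("<BHB4x", 0x40, index, subindex)
        return cob_id, payload

    def decode_adc_counts(raw, vref=5.0, bits=16):
        """Convert raw ADC counts from the reply into volts.

        The reference voltage and resolution are assumptions for the
        example, not the actual ELMB calibration."""
        return raw * vref / (2**bits - 1)

    # Hypothetical example: read ADC channel 3 of ELMB node 21.
    # Index 0x2404 / sub-index = channel number is an assumption for illustration.
    cob_id, frame = sdo_upload_request(node_id=21, index=0x2404, subindex=3)
    print(f"COB-ID 0x{cob_id:X}, frame {frame.hex()}")
    print("42.1% of full scale ->", round(decode_adc_counts(int(0.421 * 65535)), 3), "V")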

  2. ATLAS and JCOP

ATLAS intends to use the products and services of JCOP wherever possible in the implementation of its control system. The detailed needs are listed in the appendix. The DCS team of the experiment is responsible for the implementation and operation of the control system, using the building blocks provided and maintained by the controls group IT/CO and by industry. The ELMB is also available to users outside of ATLAS.

Appendix: Work packages needed by ATLAS DCS

  1. Introduction

The Joint Controls Project (JCOP) is a collaborative effort between the 4 LHC experiments and the controls group IT/CO. Its purpose is “To develop a common framework and components for detector control of the LHC experiments and to define the required long-term support.” While the experiments are responsible for an adequate control system to operate their detector, it seems natural that IT/CO should be responsible for commonly used building blocks (e.g. tools, libraries) and for applications which all experiments need (e.g. gas, connection to common services). The aim of this document is to define work packages with time scales as needed by ATLAS.

  2. Work packages

The commercial controls software PVSS-II has been chosen by JCOP as the basis for the Back-End DCS system. It has proven to be well suited, but it lacks some features needed for controlling experiments, and dedicated interfaces to the experiment-specific environment are also needed. In the following, details of the work to be done are given together with their respective time scales.

2.1 WP1: General support of PVSS

This is the first-line user support dealing with technical consultancy, administrative aspects like licences and distribution, and training. It also includes both commercial and technical contacts to the company ETM, which provides PVSS.

Status: This is already in place and it will continue to be needed during the operation of ATLAS.

2.2 WP2: Framework tools and libraries (Chapter 2.1 in JCOP Program Plan)

Develop, or support in the case of a commercial product, the building blocks which are necessary to build the control system. This is handled in the JCOP sub-project called ‘Framework’, which is driven by a working group and is monitored by the JCOP Executive Board. Priorities, time scales and deliverables are discussed in these bodies. It contains many items; the more important ones are:

  • Tools for the controls hierarchy

Time scale: Prototype needed in 2Q2003 for ID in SR building

  • Configuration tools

Time scale: Prototype needed in 2Q2003, final tool mid 2004

  • Connection to Data Bases

This is a very wide field and has many aspects, which are largely subject to external influence. There are several possibilities:

  • Possibly the best solution would be that PVSS uses a commercial DB to which the experiment’s configuration and conditions data bases are interfaced. The core work would need to be done by ETM, the interfacing by JCOP.
  • Connection to Configuration DB

In the (likely) case that the DAQ systems of the different experiments use different configuration DBs, only an interface tool can be made within JCOP.

Time scale: Prototype 2Q2003, final mid 2004

  • Connection to Conditions DB

If, as hoped, a CERN-wide DB is decided upon and implemented, a full connection to DCS should be made (a minimal sketch of the kind of data such a connection would transfer is given at the end of this work package).

Time scale: would be very useful already in 2003, realistic only for 2004 or even later

  • Data visualisation

The functionality of the PVSS tools is quite limited. Either a substantial upgrade is needed (to be done by ETM) or a well-performing connection to the package(s) used by the experiments (e.g. ROOT) has to be provided.

Time scale: would be very useful already in 2003. An upgrade is realistic only for 2004 or even later; a (prototype) interfacing could already be done for 2Q2003.

  • Alarm system

An application package tailored using standard PVSS tools.

Time scale: 2004

As stated above, this WP contains many other smaller sub-points.
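As announced under ‘Connection to Conditions DB’ above, the following is a minimal, purely illustrative Python sketch of the kind of data a DCS-to-conditions-DB connection has to handle: datapoint values archived with a validity interval and queried by time. It uses a local SQLite table as a stand-in; the table layout, datapoint names and values are invented, and the real implementation would target the CERN-wide conditions DB.

    import sqlite3
    import time

    # Illustrative only: a conditions-style table keyed by datapoint name and
    # validity start ("since"); the real implementation would target the
    # CERN-wide conditions DB, not SQLite.
    conn = sqlite3.connect(":memory:")
    conn.execute(
        """CREATE TABLE conditions (
               datapoint TEXT NOT NULL,   -- e.g. 'Pixel/HV/channel03.voltage'
               since     REAL NOT NULL,   -- start of validity (unix time)
               value     REAL NOT NULL,
               PRIMARY KEY (datapoint, since)
           )"""
    )

    def store_value(datapoint, value, since=None):
        """Archive one datapoint value; it stays valid until the next entry."""
        conn.execute("INSERT INTO conditions VALUES (?, ?, ?)",
                     (datapoint, since if since is not None else time.time(), value))

    def value_at(datapoint, when):
        """Return the value valid at time 'when' (latest entry not after it)."""
        row = conn.execute(
            "SELECT value FROM conditions WHERE datapoint = ? AND since <= ? "
            "ORDER BY since DESC LIMIT 1", (datapoint, when)).fetchone()
        return row[0] if row else None

    # Hypothetical datapoint names and values.
    store_value("Pixel/HV/channel03.voltage", 150.0, since=1000.0)
    store_value("Pixel/HV/channel03.voltage", 152.5, since=2000.0)
    print(value_at("Pixel/HV/channel03.voltage", 1500.0))   # -> 150.0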

2.3 WP3: Connection to commonly used Front-End devices and systems

The devices identified so far are listed below. Each point has in general two components: communication SW and (generic) application SW to control the device; a sketch of the latter is given after the list.

  • HT systems (JCOP PP 2.6)
      • CAEN 1527, existing, upgrade to FE modules needed
      • ISEG, being developed
  • LT systems (JCOP PP 2.6)
      • Wiener (2004)
  • Crates (e.g. Wiener) (2004) (JCOP PP 2.8)
  • Rack Control (together with ESS) (2004) (JCOP PP 2.7)
  • Gas systems (together with GWG) (2004) (JCOP PP 2.4)
  • Cooling (together with JCOV) (2004) (JCOP PP 2.9)
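To make the split between communication SW and generic application SW concrete, the sketch below (referred to in the introduction of this work package) shows, in Python, what the generic application layer for one HV channel could look like: it only talks to an abstract communication back-end, which in the real system would be an OPC client. All class names, item names and limits are invented for the example.

    # Illustrative split between communication SW (the 'backend', OPC in the
    # real system) and generic application SW (HVChannel).  Names are invented.

    class DummyBackend:
        """Stand-in for the communication layer (e.g. an OPC client)."""
        def __init__(self):
            self.items = {}
        def write(self, item, value):
            self.items[item] = value
        def read(self, item):
            return self.items.get(item, 0.0)

    class HVChannel:
        """Generic application-level control of one high-voltage channel."""
        def __init__(self, backend, name, v_max=500.0):
            self.backend, self.name, self.v_max = backend, name, v_max

        def set_voltage(self, volts):
            if volts > self.v_max:                       # generic protection
                raise ValueError(f"{volts} V above limit {self.v_max} V")
            self.backend.write(f"{self.name}.setpoint", volts)

        def switch(self, on):
            self.backend.write(f"{self.name}.power", 1 if on else 0)

        def readback(self):
            return self.backend.read(f"{self.name}.vMon")

    # Usage with the dummy backend; the real system would pass an OPC client.
    ch = HVChannel(DummyBackend(), "caen/board00/ch03", v_max=400.0)
    ch.switch(True)
    ch.set_voltage(350.0)
    print("requested:", 350.0, "monitored:", ch.readback())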

2.4 WP4: Connection SW to external systems, called “Data Interchange Protocol” (DIP) (JCOP PP 2.2)

This is a CERN-wide communication protocol (with implementation) to be used by accelerators, CERN services, TCR, and the LHC experiments.

Time scale: 2005
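The protocol itself will be provided centrally; independently of its actual API, the following Python sketch only illustrates the publish/subscribe pattern that such a data interchange protocol offers, with a publisher (e.g. the LHC accelerator) and a subscriber (e.g. the ATLAS DCS). The broker class and the publication names are invented for the example.

    # Illustration of the publish/subscribe pattern a data interchange protocol
    # such as DIP provides; this is NOT the DIP API, just the concept.

    class Broker:
        """In-process stand-in for the protocol layer."""
        def __init__(self):
            self.subscribers = {}       # publication name -> list of callbacks

        def subscribe(self, name, callback):
            self.subscribers.setdefault(name, []).append(callback)

        def publish(self, name, value):
            for callback in self.subscribers.get(name, []):
                callback(name, value)

    broker = Broker()

    # Subscriber side (e.g. the ATLAS DCS reacting to accelerator data).
    def on_update(name, value):
        print(f"DCS received {name} = {value}")

    broker.subscribe("dip/acc/LHC/beamEnergy", on_update)   # hypothetical name

    # Publisher side (e.g. the accelerator publishing a value, in GeV).
    broker.publish("dip/acc/LHC/beamEnergy", 7000.0)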

2.5 WP5: Applications in PVSS using DIP (JCOP PP 2.3)

Each LHC experiment needs generic applications, configurable to exchange information in both directions with:

  • LHC accelerator (2006)
  • Cooling/Ventilation (2004)
  • Electricity (2005)
      • Information from the electricity distribution system
      • Controls of the lowest level (e.g. racks) by the experiment

2.6 WP6: Detector Safety System (DSS) (JCOP PP 2.10)

This has to be defined together with the GLIMOS, once the prototype is finished.

2.7 WP7: Direct involvement with users

IT/CO personnel should work directly with subdetector controls experts during commissioning, in order to help them with their work and also to learn about the usage (and possible problems) of the building blocks they provided. This work includes:

  • Install JCOP products in ATLAS prototype applications
  • Help with final user applications (e.g. configuring, troubleshooting, debugging)