The TINE Control System Protocol: Status Report

Philip Duval, DESY, Hamburg Germany

Abstract

The dominant control system at the HERA accelerator is TINE. The demands placed on and by its control system components have made TINE extremely flexible and adaptable to a wide variety of operating systems and platforms. In particular, several noteworthy features have been added to TINE over the past year. In this report, we briefly review the TINE control system and highlight its distinguishing features.

1 INTRODUCTION

For the most part, TINE defines a data-exchange protocol, which can be used as the basis for a control system. As a control system proper, it is “do-it-yourself” oriented. That is, it is assumed that a data acquisition system close to the hardware is in place on the relevant front-end devices. At DESY, for instance, the in-house fieldbus SEDAC is in common use, as are the CAN bus and GPIB. Well-defined drivers and APIs exist for each of these, providing the hardware I/O layer beneath TINE. Continuing the “do-it-yourself” theme, all redundancy and fault tolerance should be added where needed. Furthermore, as Ethernet is the data-exchange medium, mission-critical devices should never depend absolutely on communication with other elements on the network. Nonetheless, TINE has been shown to scale to machines the size of HERA without difficulty.

TINE follows the traditional client-server dichotomy, where the control system elements play either the role of (and have the characteristics of) a server or a client. Servers can be attached to hardware (Front End Computers, or FECs) or can provide middle-layer services. They have unique names and addresses and are known on the network via a database or name server (i.e. they do not broadcast their services). Clients are “anonymous” and can exist in multiple instances anywhere on the network; that is, clients are nowhere entered in a database, and several clients can appear bearing the same name. (Each computer on the Ethernet must of course have a unique address.) Clients find servers by querying the equipment name server. Finally, TINE is not database-driven and does not require any central database for configuration.

2 THREE-FOLD INTEGRATION

Perhaps the most distinguishing feature of TINE is its integration of client and server components from vastly different networking environments. To begin with, TINE is a multi-platform system, running on legacy platforms such as MS-DOS, Win16 (Windows 3.x), and VAX and ALPHA VMS, as well as on Win32 (Windows 95, 98, NT, 2000), most UNIX machines, and VxWorks. TINE is also a multi-protocol system, to the extent that both IP and IPX are supported as data-exchange protocols. In this day of the internet, IPX should be regarded as a ‘legacy’ protocol, but it is nonetheless supported wherever an IPX stack exists. Finally, and perhaps most interestingly, TINE is a multi-architecture system, allowing client-server, publisher-subscriber, and producer-consumer data exchange in any combination. A recent innovation in the TINE data-exchange package is a hybrid between publisher-subscriber and producer-consumer, which might be called producer-subscriber and which brings to light the idea of a ‘network subscription’.

2.1 Multi-Platform

TINE runs on a number of platforms, as shown in Table 1. At HERA, the de facto console platform is Windows NT (Windows 3.1 in earlier years). All manner of front-end platforms are in use; however, the individual sub-system laboratories (e.g. the RF group) generally stick to their preferred platform for all sub-system components. Note also that by allowing a heterogeneous system, expensive front-end hardware can be used where warranted (for mission-critical devices) and inexpensive hardware elsewhere. Furthermore, a systematic, piecemeal upgrade of a control system is possible, since TINE runs equally well on older systems such as VAX-VMS and MS-DOS as on more modern systems such as Windows NT, Solaris, and VxWorks.

OS          IP Stack           IPX Stack
DOS         LWP or Client32    NOVELL
Win16       WINSOCK            NOVELL
Win32       WINSOCK            WINSOCK
Linux       Native BSD         Native BSD
Solaris     Native BSD         -
HP-UX       Native BSD         -
SGI         Native BSD         -
OSF         Native BSD         -
VAX-VMS     UCX                -
ALPHA-VMS   MULTI-NET          -
VxWorks     Native BSD         -

Table 1: TINE Platform and Protocol support

2.2 Multi-Protocol

TINE supports both the IP and IPX Ethernet protocols. As seen in Table 1, IP is generally available across all platforms, whereas IPX is available primarily on PC systems (DOS, Windows, Linux). Where necessary, IPX could also be ported to the other platforms; however, an IPX stack must then be obtained and installed separately, as it is not part of the standard system kernels in those cases.

2.3 Multi-Architecture

TINE now supports four modes of data exchange, each of which could be used to define the control system architecture.

Client-Server: A traditional data-exchange mechanism available in most control systems is pure, synchronous client-server data exchange, where a client makes a request and waits for its completion. This is generally used, for instance, to change hardware settings. However, polling front ends to obtain display data becomes very inefficient when the number of clients is large (say, more than 10).

Publisher-Subscriber: For many cases, a much better approach is publisher-subscriber data exchange. Here a client (the subscriber) communicates its request to a server (the publisher) and does not wait for a response; instead it expects to receive a notification within the timeout period. The request can be a single command, or, for regular data acquisition, a request for data at periodic intervals or upon change of the data contents. In this mode, the server maintains a list of its clients and of what they are interested in. This is harder to program than simple client-server communication, but the payoff is large: as all data exchange is scheduled by the server instead of the client, the load incurred by a large number of clients is reduced by an order of magnitude or more. When the number of clients is very large (~100), one can consider yet another mechanism: Producer-Consumer.
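To make the pattern concrete, a client-side subscription in C might look like the following minimal sketch. The function and constant names (tine_attach, tine_close, CM_POLL) and the device address are hypothetical stand-ins, not the literal TINE API; only the pattern is the point.

    /* Minimal sketch of a publisher-subscriber link; all names here are
     * illustrative assumptions standing in for the TINE client library. */
    #include <stdio.h>

    typedef void (*tine_callback)(int link_id, int status);

    /* assumed to be supplied by the client library: */
    extern int  tine_attach(const char *device, const char *property,
                            void *buf, int len, int interval_ms, int mode,
                            tine_callback cb);
    extern void tine_close(int link_id);

    #define CM_POLL 1        /* hypothetical mode: notify at fixed intervals */

    static float orbit[300]; /* filled by the library before each callback   */

    static void on_orbit(int link_id, int status)
    {
        if (status == 0)
            printf("new orbit; first BPM at %.3f\n", orbit[0]);
    }

    int main(void)
    {
        /* a single request (the "contract"); the server then schedules
         * all further updates without being polled */
        int id = tine_attach("/HERA/BPM/WL197", "ORBIT", orbit,
                             sizeof(orbit), 1000, CM_POLL, on_orbit);
        /* ... application event loop ... */
        tine_close(id);
        return 0;
    }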
Producer-Consumer: A third alternative for data exchange is the producer-consumer model. In this case a server is the producer: it transmits its data via broadcast or multicast on the control system network, and clients (the consumers) simply listen for the data. This is frequently the appropriate transfer mechanism when the number of clients is large. For most control systems, there are certain parameters of system-wide interest; at HERA, for instance, the electron and proton beam energies, beam currents, beam lifetimes, etc. are made available via system broadcast at 1 Hz. Note that in this case a server has no knowledge of its clients whatsoever. Furthermore, clients no longer get what they want via a contract; instead they get what has been produced on the network.

Producer-Subscriber: A very attractive alternative to the above mechanisms is the producer-subscriber mode. On the surface this looks like the publisher-subscriber model, except that the calling subscriber can request that the contract be sent, for instance, to its entire subnet or its multicast group. If such a client program is then started on many stations, the server ‘sees’ only one client, namely the network. This reduces the load on the server (marginally) and can dramatically reduce the load on the network. As always, one should use this mode of operation where it makes sense: if all control system data were transferred this way, then all applications would see all control system traffic at the application layer (whether the data are to be thrown away or not), which could place an unnecessary load on the client side.
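At the transport level, the consumer side of producer-consumer (and of a network subscription) reduces to listening on a datagram socket. The following sketch uses plain BSD sockets to show only the pattern; the TINE wire format is not reproduced, and the port number is an arbitrary assumption. For the producer-subscriber case, the consumer would additionally join a multicast group with an IP_ADD_MEMBERSHIP setsockopt() call.

    /* Sketch of a consumer: anonymous, never contacts the producer, and
     * simply listens for broadcast data on an assumed port. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        struct sockaddr_in addr;
        char buf[1500];
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(9999);   /* assumed broadcast port */
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        for (;;) {
            ssize_t n = recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);
            if (n > 0)
                printf("received %ld bytes of machine data\n", (long)n);
        }
        close(s);   /* not reached */
        return 0;
    }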

3 MECHANICS

A detailed description of the functionality and operability of TINE is given in references [1] and [2]. Below we present a synopsis of some of its features.

3.1 Equipment Modules

TINE is object-based: TINE servers contain one or more equipment modules, which are designed to present an object view of the equipment being controlled. These equipment modules have system-wide unique export names and contain one or more instances of the equipment, identified by device names. Furthermore, equipment modules support device properties, which reflect the operation of the equipment.

TINE clients contact a particular device by specifying the equipment module’s export name and the device name. The nature of the request is specified by the device property.

In Producer-Consumer mode, on the other hand, a server makes registered data available via broadcast. In this case a client simply listens.

3.2 API

TINE offers a common, intuitive Application Programmer’s Interface (API) in C (across all supported platforms), in Visual Basic (on Windows platforms), and in Java (client-side only). Recently, the C++ interface of DOOCS [3] has also become available.
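As an example of the synchronous (client-server) side of the API, and of the export-name/device-name/property addressing of section 3.1, consider the following sketch. tine_exec and the address used are invented for illustration and are not the literal C API.

    /* Hypothetical one-shot synchronous call: set a magnet current and
     * read it back in the same full-duplex transaction. */
    extern int tine_exec(const char *device, const char *property,
                         const void *in, int ilen, void *out, int olen);

    void set_current(void)
    {
        float setpoint = 123.4f, readback = 0.0f;

        if (tine_exec("/HERA/MAGNETS/QUAD27", "CURRENT",
                      &setpoint, sizeof setpoint,
                      &readback, sizeof readback) != 0) {
            /* handle timeout, access denied, ... */
        }
    }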

3.3 Plug-and-Play

A TINE server registers its address, server name, and equipment module names with the equipment name server upon startup. A TINE client can then query the equipment name server for names and addresses (and cache the results). As persistent timeouts force a fresh address resolution, one can take a TINE server down and bring it up on another machine (and protocol!) without having to restart any client.
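The client-side logic implied here can be pictured as in the sketch below; the types and helper names are assumptions, shown only to make clear where the re-resolution happens.

    /* Sketch of plug-and-play address handling (all names hypothetical).
     * Persistent timeouts invalidate the cached address and force a new
     * query to the equipment name server, so a server can reappear on
     * another machine or protocol without the client restarting. */
    #define MAX_TIMEOUTS 3

    struct link {
        const char *export_name; /* server's registered name      */
        int address_valid;       /* is the cached address usable? */
        int timeouts;            /* consecutive timeouts seen     */
        /* ... cached address, contract data, etc. ... */
    };

    extern int resolve_address(struct link *l); /* asks the name server */
    extern int transmit(struct link *l);        /* sends the request    */

    int send_request(struct link *l)
    {
        if (!l->address_valid || l->timeouts > MAX_TIMEOUTS) {
            if (resolve_address(l) != 0)
                return -1;
            l->address_valid = 1;
            l->timeouts = 0;
        }
        return transmit(l);
    }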

3.4 Data Exchange

TINE data transfer is full-duplex and can consist of up to (currently) 64 kilobytes of data in either direction, in both synchronous and asynchronous transfer modes. There are currently some 30 defined system data formats, and it is also possible to specify user-defined structures. The latter must be registered at both the client and the server if automatic byte-swapping and alignment are to take place. TINE servers frequently allow data-size and data-type overloading, whereby the data for a particular device and property can be packaged differently from the default size and type. For instance, ORBIT data can default to 300 float values representing the BPM positions; nonetheless, a request for data type FLTINT might return 300 position-status pairs, and type NAMEFI might return 300 location-position-status triplets. Likewise, a client could ask for only a section of the orbit near an experiment, and so on.
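Registration of a user-defined structure might look like the sketch below; the call name and tag are assumed for illustration. The essential point is that the same tagged layout is registered at both client and server, so that byte order and alignment can be corrected automatically between platforms.

    /* Sketch: a user-defined structure registered under a tag at both
     * ends; tine_register_struct and the tag "BPMTRIPLET" are
     * hypothetical stand-ins. */
    typedef struct {
        char  location[16]; /* BPM location name */
        float position;     /* orbit position    */
        int   status;       /* BPM status word   */
    } BpmTriplet;

    extern int tine_register_struct(const char *tag, int size, int count);

    void register_formats(void)
    {
        /* the same call is made on the client and the server */
        tine_register_struct("BPMTRIPLET", sizeof(BpmTriplet), 300);
    }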

3.5 Security

As noted above, TINE servers are registered with the equipment name server, which constitutes the control system database; TINE clients, on the other hand, are anonymous in this respect. A TINE server is fundamentally open to access from any client anywhere on the Ethernet. A specific data request can, however, carry a WRITE-access bit, which can in turn be filtered against at the server side: a TINE server can grant WRITE access only to specific users, and/or only from specific networks or network addresses. A client can also request an ACCESSLOCK token, which allows one and only one client process to have WRITE access.
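Server-side filtering of the WRITE bit can be pictured as in the following sketch; the list contents and function name are illustrative assumptions. A request arriving with the WRITE bit set would be rejected unless the check succeeds (the ACCESSLOCK token adds the further condition that the caller currently holds the lock).

    /* Sketch of WRITE-access filtering at the server (names assumed). */
    #include <string.h>

    static const char *allowed_users[] = { "operator", "rfexpert", 0 };

    static int write_allowed(const char *user, unsigned caller_addr,
                             unsigned allowed_net, unsigned net_mask)
    {
        int i;
        if ((caller_addr & net_mask) != (allowed_net & net_mask))
            return 0;                 /* request from the wrong network */
        for (i = 0; allowed_users[i]; i++)
            if (strcmp(user, allowed_users[i]) == 0)
                return 1;             /* user is on the WRITE list */
        return 0;
    }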

3.6 Alarm System

As most alarms ultimately originate during hardware or service I/O operations, it is most natural to locate the first layer of alarm processing directly on the front end. A TINE server maintains a local alarm list containing all relevant information about each alarm. Clients can then at any time query a server, i.e. the Local Alarm Server (LAS), as to its alarm state. The primary client for all local alarm servers is the Central Alarm Server (CAS), where the next stage of alarm processing takes place. See reference [4].
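An entry in such a local alarm list might contain fields along the following lines; the field names are assumptions, but they reflect the "all relevant information" kept at the front end for the CAS and other clients to query.

    /* Sketch of a local alarm list entry (field names assumed). */
    #include <time.h>

    typedef struct {
        char   device[32];  /* device instance raising the alarm */
        int    code;        /* alarm code                        */
        int    severity;    /* severity level                    */
        time_t timestamp;   /* when the alarm was (last) set     */
        int    count;       /* how often it has re-occurred      */
        char   data[64];    /* alarm-specific payload            */
    } LocalAlarm;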

3.7 Archive System

TINE has a central archive system, which acquires and provides both machine data and event-driven post-mortem data based on the TINE protocol. Storing data centrally guarantees that the data belong together, are filtered according to the same rules, and are maintained and stored according to the same rules. At the same time, all TINE servers can keep local histories (both long-term and short-term) of selected properties. Such local histories are managed locally and produce no extra network traffic in themselves; they are sometimes very useful for display purposes. Post-mortem triggers can also launch a request to obtain and store a local history buffer and associate it with a particular event.
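Configuring a local history might reduce to a single registration call, sketched below with an assumed name: the server is asked to keep, say, a short-term ring buffer of one property, sampled once per second.

    /* Hypothetical registration of a server-side (local) history. */
    extern int tine_register_local_history(const char *property,
                                           int depth, int interval_ms);

    void setup_histories(void)
    {
        /* keep the last hour of ORBIT data at 1 Hz, locally */
        tine_register_local_history("ORBIT", 3600, 1000);
    }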

4 INTEGRATION WITH OTHER CONTROL SYSTEMS

4.1 DOOCS

Over the past year, great strides have been made in integrating the TINE protocol into DOOCS. At the client side, TINE is already fully implemented, meaning that the DOOCS client API can communicate seamlessly with any TINE server; in other words, TINE can run in a DOOCS context. This turned out to be a non-trivial matter, but it has been achieved and shown to be stable. At the server side, a “doocs2tine” gateway was written, which if nothing else demonstrates that a simple repackaging can transform a TINE request to a DOOCS server into a DOOCS request. The next step in this direction is to implement this repackaging directly at the DOOCS server.

4.2 EPICS

Recently an epics2tine translator was developed, which can turn an EPICS IOC into a TINE server. By querying the EPICS database at startup, a TINE server running on the EPICS IOC can export all relevant information to the TINE world. Incoming requests are then translated into the appropriate put and get calls on the EPICS database. In this sense, an EPICS IOC becomes “bi-lingual”: both a Channel Access server and a TINE server are ready and waiting to process data requests. For details, please see reference [5].
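The heart of such a translation can be sketched as follows. dbNameToAddr and dbGetField are real EPICS iocCore routines; the surrounding dispatch (the handler signature, and how a TINE device/property pair is mapped to a record name) is an assumption.

    /* Sketch: serving a translated TINE GET request from the EPICS
     * database on the IOC. */
    #include "dbAccess.h"

    long serve_tine_get(const char *record_name, double *value)
    {
        DBADDR addr;
        long   n = 1;

        if (dbNameToAddr(record_name, &addr) != 0)
            return -1;                /* no such record */
        /* read one value, converted to double, from the database */
        return dbGetField(&addr, DBR_DOUBLE, value, NULL, &n, NULL);
    }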

4.3 MKI3

A fourth important control system in use at DESY is the MKI3 system, described in reference [6]. This system runs only on Windows, but since Windows is a supported TINE platform, it is likewise straightforward to make MKI3 servers “bi-lingual”. Making a bi-lingual server is not as easy a matter as with DOOCS or EPICS, since no systematic database queries are available in the MKI3 system. Nonetheless, it has proven easy enough to upgrade an MKI3 server on a case-by-case basis, where warranted.

5 CONCLUSIONS

The flexibility of TINE has been invaluable in integrating the HERA front ends into a working system. Just as important, it has demonstrated a transparent way to progressively upgrade existing hardware. Where for practical reasons the latter must remain on “older” platforms and operating systems, TINE servers can nonetheless be installed and maintained. When it becomes practical to “modernize” front-end elements, this can be achieved piecemeal, without any blanket restructuring. The implementation of TINE at DESY is PC-dominated: although TINE works fine in, say, a pure UNIX world, the number of GUI tools developed for PC consoles running Win32 makes the latter (currently) the most attractive platform on the client side.

Where are the areas of applicability for TINE? Since TINE is based on sockets, and not on something ‘modern’ such as DCOM or CORBA, one might be inclined to dismiss it as a relic which found its niche at HERA. Nothing could be further from the truth. At the developer’s end, the tools offered are as modern as anywhere (ActiveX, Java, etc.); the transport layer is of course not seen by the developer. Furthermore, neither DCOM nor CORBA runs on the number of platforms supported by TINE: a switch to either would immediately dispense with several legacy platforms such as MS-DOS, Win16, and VMS. Finally, both DCOM and CORBA work in a unicast world; the ability to multicast or to have ‘network’ subscriptions would be completely missing. When any of these considerations (legacy platform support, multicasting) is important, TINE should be considered one of the best alternatives around.

REFERENCES

[1] P. Duval, “TINE: An Integrated Control System for HERA”, Proceedings PCaPAC’99, 1999.

[2]

[3] G. Grygiel, O. Hensler, and K. Rehlich, “Experience with an Object Oriented Control System at DESY”, Proceedings PCaPAC’99, 1999.

[4] M. Bieler et al., “PC Based Alarm System for the HERA machine”, Proceedings PCaPAC’99, 1999.

[5] Z. Kakucs, P. Duval, M. Clausen, “An EPICS to TINE Translator”, these Proceedings.

[6] R. Schmitz, “A Control System for the DESY Accelerator Chains”, Proceedings PCaPAC’99, 1999.