Monitoring and control systems for experiments at ELI-NP. Technical design report
M. O. Cernaianu1, B. de Boisdeffre1, D. Ursescu1, F. Negoita1, C. A. Ur1, O. Tesileanu1, D. Balabanski1, T. Ivanoaica1, M. Ciubancan1, C. Savlovschi1, I. Dancus1 and S. Gales1
1National Institute for Nuclear Physics and Engineering, P.O.Box MG-6, RO-077125 Bucharest-Magurele, Romania
Abstract. [anonimizat] E1-E8, specifying input/[anonimizat], [anonimizat], [anonimizat], maintenance, integration with subsystems and other transverse needs. [anonimizat] a [anonimizat].
Key words: ELI-NP, [anonimizat], synchronization, storage, modular architecture.
1. ELI-[anonimizat].
[anonimizat]: the High Power Laser System (HPLS) and the Gamma Beam System (GBS). The HPLS will be able to deliver six optical laser outputs: 2×0.1PW@10Hz, 2×1PW@1Hz and 2×10PW@1/minute. The GBS will deliver high- and low-energy gamma beams produced by a two-stage warm LINAC electron accelerator, providing gamma rays with energies up to 3.5 MeV (low energy) and 19.5 MeV (high energy), respectively. Each machine will be delivered with its own HPLS/GBS Control System. They will also include their own Machine Protection System (HPLS/GBS MPS), in order to avoid any damage caused by abnormal modes of operation, and Safety systems. These MPS will be mostly based on beam permit (configuration checking before the shot) and beam interlock (beam stop with shutters).
To transport the laser outputs to one of the 5 experimental areas (E1, E6, E4, E5, E8), a Laser Beam Transport System (LBTS), including its own control system, will be provided. A Gamma Beam Delivery and Diagnostics (GBDD) system will play the equivalent role for the gamma beam. The LBTS/GBDD will handle the configuration for the transport of the laser/gamma beam depending on the experiment.
Three main types of experiments are envisaged:
laser driven, gamma driven and combined laser & gamma driven experiments.
For these experiments, dedicated Monitoring & Control Systems are mandatory (EXPs MCS). These EXPs MCS shall integrate the control and monitoring of the devices (actuators, sensors and experimental detectors/instruments with their power supplies) that will permit the setup and running of the experiment in the experimental areas. These devices require slow control signals.
The acquisition of data coming from the experimental detectors will benefit from dedicated transmission lines for the highest data throughput. The DAQs will be built with equipment specific to each experiment. These DAQs require fast communication for data output.
A Personnel Safety System (PSS) will be implemented for the whole facility. This PSS has to master all the other systems and to interlock their operation. In that sense, a Radioprotection system will be implemented (radiation levels will be accessible through detectors installed in the different areas). However, the Personnel Safety System is not the subject of the current TDR, which only covers the experiments control and the interfaces with the other systems.
Machine Protection Systems for the components of the LBTS/GBDD/EXPs MCS will be developed in order to guarantee their integrity. This will maximize operational availability by minimizing downtime (quench, repairs) and avoid expensive repairs of the equipment.
A key network within the ELI-NP facility is the Timing/Synchronization and TAG Network. This network includes:
the HPLS Timing System (HPLS TS) that ensures the triggering and/or synchronization of the different laser driven experiments with the HPLS.
the GBS Timing System (GBS TS) that ensures the triggering and/or synchronization of the different gamma driven experiments with the GBS.
All the equipment described above will be installed within the laser building and the gamma & experiments building.
Three other buildings (Office, Laboratories & Workshop and Guest) complete the installation. A Building Management System (BMS) will manage these five buildings for general purposes. The BMS shall be interlocked with the Personnel Safety System, as the former commands the position and state of the experimental area doors while the latter shall integrate the procedures for operation and logic. The BMS will handle building-specific information such as CO2 and O2 monitoring, fire alarms, access levels, etc. See Annex 1 for more details. Work is being done to define the access modes in the experimental areas and the associated procedures. Emergency/fire situation plans already exist and the Building Management System is responsible for integrating and controlling these parameters and the associated actions.
Regarding emergency situations, an evacuation plan and the necessary procedures are being developed by the designer of the facility. In the event of an electricity blackout, the facility benefits from UPS systems and a power generator in order to keep the critical or sensitive equipment running.
The scheme below summarizes the general systems in the facility:
Fig. 1 General overview of the ELI-NP systems
The current Monitoring and Control System technical proposal describes the architecture and the design approach for the experiments' slow signals monitoring and control, the equipment synchronization with the machines and the data storage. By slow signals we refer to parameters/configurations that do not need deterministic communication. All the parameters and actions that need a fast response or feedback must be implemented in real-time systems or with embedded logic.
The synchronization between the HPLS and the Gamma beam will be addressed at a later stage of the project and is therefore not the subject of the present technical proposal. The link between the GBS and the HPLS is envisaged to be made exclusively by hardware means for synchronization and by a native software TANGO-EPICS interface for system state communications.
The proposed architecture for the ELI-NP experiments monitoring and control systems is based on local control of the experiment, with internal distributed monitoring and control and remote management through a Supervision control for each experimental area. Each of the eight ELI-NP experimental areas may house a different number of experiments, each with its own apparatus for detection, monitoring and control. Each experiment will have a data storage system slot where the detectors/monitoring systems or other similar apparatus will save the necessary information and data related to the experiment through a very high speed, fiber-optics-based data bus (e.g. Gigabit Ethernet).
Several software frameworks will be developed in TANGO/EPICS for the implementation of the standardized communication bus, the data packet exchange and the communication protocol between the devices in the experimental areas and the users who will drive the experiments.
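As an illustration of the standardized device communication described above, the sketch below models the attribute read/write pattern that frameworks such as TANGO and EPICS expose to clients. All class names, device names and attribute names here are illustrative assumptions, not part of any real framework API.

```python
from dataclasses import dataclass, field

@dataclass
class DeviceServer:
    """Toy stand-in for a framework device server (names are hypothetical)."""
    name: str
    attributes: dict = field(default_factory=dict)

    def read(self, attr):
        # A read request returns the last known value of the attribute.
        return self.attributes[attr]

    def write(self, attr, value):
        # A write request updates the attribute; a real framework would also
        # push the change to subscribed clients over the communication bus.
        self.attributes[attr] = value
        return {"device": self.name, "attr": attr, "value": value}

# A client driving a (hypothetical) target motor in an experimental area:
motor = DeviceServer("e1/target/motor_x", {"position_um": 0.0})
motor.write("position_um", 12.5)
print(motor.read("position_um"))  # 12.5
```

The point of the pattern is that every device, whatever its hardware interface, is reached through the same read/write/subscribe protocol, which is what makes a standardized communication bus possible.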
The following chapters will focus only on the EXPs MCS and their interfaces with the other systems described above. Moreover, a specific section in the “System Implementation” chapter will describe the Timing/Synchronization and TAG Network that will ensure the timing distribution within the facility.
A more detailed description of the LBTS/GBDD systems and of the Safety System approach is also presented in the chapters below.
2. Introduction
The ELI-NP facility is anticipated to have a modern implementation of a distributed monitoring and control system (MCS), as used in modern research facilities across Europe. The entire monitoring and control system will be implemented on a standardized architecture (TANGO/EPICS), with hardware ranging from commercially available devices (PLCs, PCs, etc.) to specific apparatus that will be designed to fit ELI-NP research purposes. Specific monitoring, control and graphical user interfaces are foreseen to be implemented rapidly in NI LabVIEW and Matlab, as these tools permit integration with the above-mentioned standards. Also, different controls in the experiments may be developed in LabVIEW due to the ease it provides for non-programmers and its short development time. The backbone of the MCS infrastructure will be fiber optics, both for slow control and for the large data fluxes intended for storage and long distance communication. CAT6 copper cable is also envisaged. To protect the stable MCS network from web clients and other LAN work groups, servers and gateways will be used to provide data access between the internal network and the outside ones, as also suggested in [1].
The ELI-NP facility will have a dedicated major control system for the HPLS, developed in TANGO, and a second dedicated control system for the Gamma Beam, developed in EPICS. For the building management (door access, HVAC, safety, fire, etc.), the Building Management System (BMS) will integrate, acquire, log and control all the necessary signals (Fig. 1). The hardware controllers delivered with the BMS of the building, which will handle the door interlocks, shall be driven by the Safety system of the ELI-NP facility in order to implement the personnel safety.
Three separate control rooms will exist, one each for the BMS, the HPLS and the Gamma Beam, where the proper equipment will be installed for monitoring and controlling the building, the HPLS and the Gamma Beam system. Additional Safety system access panels will be available in the HPLS control room and the Gamma control room to integrate the safety procedures and signals (interlocks, radiation, etc.).
The proposed experiments will have specific detection, monitoring and control equipment that must be remotely controlled from two areas of the building: the “Data Acquisition Room” (DAqR) and the “Users Room” (UsR). Moreover, the DAqR and UsR must contain displays that provide the user with the necessary information on the major equipment (HPLS and Gamma Beam) and from the BMS. This architecture will be client-server based, with the monitoring/control clients and displays located in the DAqR and UsR.
The proposed architecture for the ELI-NP experiments monitoring and control systems is based on local control of the experiment, with internal distributed monitoring and control and an additional client to control/supervise each experiment. Each of the eight ELI-NP experimental areas may house a different number of experiments, each with its own apparatus for detection, monitoring and control. Each experiment will have a data storage system slot where the detectors/monitoring systems or other similar apparatus will save the necessary information and data related to the experiment through a very high speed data bus (e.g. Gigabit Ethernet). The experimental data storage and transmission shall in general benefit from dedicated data busses, as depicted in Fig. 2, separated from the MCS bus that controls and monitors the equipment itself, in order to achieve the highest data throughput. These data will be available from the DAqR and UsR clients for various uses, as presented in Fig. 2.
For each experiment, a set of detection equipment is envisaged that needs to be monitored and controlled remotely from the DAqR and UsR, both of which lie more than 100 m away from the experimental area where the experiment takes place. For this reason, localized control of the equipment associated with an experiment is necessary for calibrations, adjustments, maintenance, etc., through different HMIs. While an experiment is running, due to radioprotection issues and the functionality of the infrastructure, the users will control and monitor the experiment only from the DAqR or UsR. In this case, a client-server architecture is envisaged. The same GUI may be used to access the equipment; however, multiple access levels will be implemented.
For each experiment, distributed monitoring and control of the apparatus will be needed. The user, based on different security access levels (e.g. SuperUser, User, Guest), must have the ability to locally configure the devices in the experiment for calibration, maintenance and any other required parameters. The local control will in general be an industrial PC connected by various interfaces (e.g. Ethernet) to all the other equipment, such as detector electronics, monitoring, control or other hardware necessary in the experiment. Each piece of equipment shall have its own HMI that must communicate with the local control by software API or hardware means. From the software point of view, this implies that the API of the equipment must be available and able to communicate with the local control HMI. The MCS framework will be based on TANGO or EPICS depending on the controlled area, as described later.
Depending on the chosen standard (TANGO or EPICS), the framework will afterwards be ported to all the experiments to obtain the client-server communication. This approach will allow each experiment to be managed remotely from at least one client.
For the High Power Laser based experiments, TANGO shall be used, while for the Gamma Beam based experiments, EPICS. For the HPLS based experiments, multiple TANGO frameworks will be developed, one for each experiment, for redundancy, safety and better maintenance. In the case of the GBS based experiments, one EPICS framework will handle all the experiments, as its native architecture already implements redundancy.
The motivation for choosing this approach lies in the characteristics of the architectures:
TANGO characteristics:
A central database must exist to handle all the devices. Redundancy should therefore be implemented to ensure that if the database crashes, another one takes its place. The database must be active at the start-up of the entire system to map each device server (each device in the network).
If maintenance is operated in the database, the system cannot be used (the devices cannot be accessed through the TANGO standard).
The multiple ported frameworks have the following advantages:
From the safety point of view, each experiment has its own server-client communication that does not interfere with the others: in the eventuality of a failure, only one area is under maintenance while the others can operate. The same applies to upgrade procedures.
The use of the same framework for all the experiments will ease the implementation and maintenance processes, but with increased costs.
EPICS characteristics:
Each device has its own database and therefore redundancy is not needed. A central database can also be implemented, but this will not affect the functioning of the devices in the network.
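The redundancy concern raised for the TANGO central database above can be sketched as a simple failover scheme: a client tries a primary database endpoint and falls back to a hot standby when the primary is unreachable. The endpoint names and the `lookup` behaviour are illustrative assumptions, not real TANGO API.

```python
class DatabaseEndpoint:
    """Hypothetical stand-in for one instance of the central device database."""
    def __init__(self, host, alive=True):
        self.host, self.alive = host, alive

    def lookup(self, device):
        if not self.alive:
            raise ConnectionError(self.host)
        return f"{self.host}/{device}"

def resolve(device, endpoints):
    # Return the address from the first database instance that responds.
    for ep in endpoints:
        try:
            return ep.lookup(device)
        except ConnectionError:
            continue
    raise RuntimeError("no database instance reachable")

primary = DatabaseEndpoint("tango-db-1", alive=False)  # simulated crash
standby = DatabaseEndpoint("tango-db-2")
print(resolve("e1/daq/digitizer", [primary, standby]))  # tango-db-2/e1/daq/digitizer
```

This is the behaviour the text asks for: the system keeps mapping device servers even while one database instance is down for maintenance.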
For each experiment, there will be at least one PC client and at least one display in the UsR and DAqR to show the information of interest to the user. In general, multiple clients and multiple displays shall be available for every experiment.
The UsR and DAqR will also house clients that will show the information of interest from the BMS and from the two major machines: the HPLS and the Gamma Beam.
The LBTS technologies (e.g. motorized mirrors, alignment, polarization control, adaptive optics) shall be controlled from the HPLS control room by the operators. This is because these technologies affect the pulse parameters on the target. The scientists shall not be allowed to directly control these parameters, as improper handling can cause malfunction of the entire system. Requests to change the laser pulse parameters will be issued to the HPLS control room, which fires the laser. Furthermore, in the current approach, the physical placement of the LBTS technologies is designed in such a way that they can ensure the same laser pulse parameters in any of the areas E1, E6 and E7 for the two 10PW beams.
Because the LBTS is related to the HPLS parameters and shall be controlled from the same area, its control and monitoring architecture should be implemented similarly to the HPLS one, which is why TANGO shall be used for the LBTS control. This is described in Figure 3.
When equipment associated with an experiment needs to be synchronized with one of the two major machines, the HPLS or the Gamma Beam, the synchronization will always be hardware based, in order to ensure speed and determinism and to avoid software glitches or malfunctions. However, in this case the synchronization hardware must feed the UsR or DAqR with information regarding the synchronization status or whatever other information is considered necessary. This shall be implemented using the same adopted standard (EPICS/TANGO).
The experimental doors system that is connected to the BMS shall be mastered by the Personnel Safety System and interlocked in such a way as to provide safe operation of the facility. The central supervision system of the BMS should be programmable to acquire/log the information from all the detectors/door states and send audio-visual panic signals to the users, or even intervene in case of necessity (e.g. automatic fire extinguishing). A detailed description of the ELI-NP Building Management System (BMS) is given at the end of the document in Annex 1.
The vacuum system for the HPLS (for the compressors and beam transport of the 0.1PW and 1PW) will be in the tender’s assignment.
The vacuum system for the Gamma system (accelerator section) will be in the tender’s assignment.
The vacuum system for the LBTS shall be in the tender's assignment and controlled via the LBTS control system. The central Supervision post of the LBTS will be placed inside the control room of the HPLS.
The vacuum system for the interaction chambers shall be part of the interaction chambers themselves, but also controlled from the control room of the HPLS, for safety reasons. This will be presented in detail in the “System Implementation” chapter. The proposed architecture for the vacuum control is based on PLC control of the valves and pumps and monitoring of the parameters, with a master-slave connection for ensuring supervision [2]. This will allow the implementation of safety procedures that will not permit the users to operate the pumps or valves in a way that can damage the devices.
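The kind of safety procedure mentioned above, implemented on the PLC, can be illustrated with a simple permit function: a gate valve between the interaction chamber and the beamline may only open when both sides are evacuated. The pressure threshold below is an assumption for illustration, not an ELI-NP specification.

```python
ROUGH_VACUUM_MBAR = 1e-2  # assumed maximum pressure allowing valve opening

def valve_open_permit(chamber_mbar, beamline_mbar, pumps_running):
    """Return True only if opening the gate valve is safe.

    Mirrors the interlock idea: the user command is ignored unless the
    pumps are running and both volumes are below the rough-vacuum threshold.
    """
    return (pumps_running
            and chamber_mbar < ROUGH_VACUUM_MBAR
            and beamline_mbar < ROUGH_VACUUM_MBAR)

print(valve_open_permit(1e-5, 1e-6, True))    # True: both sides evacuated
print(valve_open_permit(1013.0, 1e-6, True))  # False: chamber still at air
```

On a real PLC this logic would run in ladder/structured text, but the permit condition is the same: the control system, not the user, decides whether the action is allowed.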
Fig. 2 General Architecture for Monitoring and Control of experiments in the ELI-NP facility
Fig 3. Plan of the TANGO and EPICS usage in ELI-NP
The Contents and Components that will be addressed in the following document are presented below. On the right side is presented the main equipment linked to the experiments and on the left side the building systems (BMS, Safety, Radioprotection). The experiments requirements, design and implementation are presented in the center and will be described in the following chapters.
Fig. 4 Contents and Components
3. General System Requirements
3.1 Architecture
The defining requirement of the experiments architecture is scalability, which can be achieved through distributed computing and control models. Decentralizing the operation center (i.e. the brain) eliminates one of the most common single point-of-failure challenges, removing the need for a centralized control server [3]. Such a server would have limited system resources (i.e. processing power, hard-disk space and memory), and running into those limitations in the long term is almost unavoidable. In addition, adding more resources to a centralized server is much more costly than adding the same amount of resources to distinct computing units. Scaling the whole system is as easy as adding new control computers to the network along with new network switches [3]. However, because the experiments often change, on the Supervision side that controls and manages the entire experiment, various centralized state machines can be implemented that shall coordinate the equipment actions in an autonomous and predefined way.
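A centralized supervision state machine of the kind mentioned above can be sketched as a table of predefined transitions; the states and events below are illustrative assumptions, not an ELI-NP specification.

```python
# Allowed transitions for one experiment's supervisor (hypothetical states).
TRANSITIONS = {
    "IDLE":       {"configure": "CONFIGURED"},
    "CONFIGURED": {"arm": "ARMED", "reset": "IDLE"},
    "ARMED":      {"shot": "READOUT", "reset": "IDLE"},
    "READOUT":    {"done": "IDLE"},
}

class Supervisor:
    def __init__(self):
        self.state = "IDLE"

    def dispatch(self, event):
        # Only predefined transitions are accepted, so equipment actions
        # are coordinated in an autonomous, predictable way.
        allowed = TRANSITIONS[self.state]
        if event not in allowed:
            raise ValueError(f"'{event}' not allowed in state {self.state}")
        self.state = allowed[event]
        return self.state

sup = Supervisor()
for ev in ("configure", "arm", "shot", "done"):
    sup.dispatch(ev)
print(sup.state)  # IDLE, back where it started after a full shot cycle
```

Because each experiment gets its own supervisor instance, the scheme scales by adding nodes rather than by growing one central server.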
3.2 Reliability
Reliability is paramount. Considering that “nuclear research” is part of the project definition, high availability is of the essence. Even though there are secondary and tertiary machine and personnel protection measures, the primary control system must be in charge at every step of the way. Because of this, redundancy of the chosen control hardware is crucial. Choosing computers with redundant power supplies, fans and hard disks in RAID operation mode is vital [3]. Given these aspects, each experimental area is envisaged to use industrial PCs with redundant power supplies. The advantages of these PCs in comparison to standard desktop computers are: better shock/vibration, temperature, power supply and dust tolerances, better expandability and durability, and easier access to the inside. In addition, distributing the system to distinct control nodes is important for reliability as well as scalability. If a local control node for an experimental area fails, it will not affect the operation of the rest of the areas. For this reason, each experimental area is envisaged to have its own control system equipment for controlling/monitoring the devices, along with a distributed control system inside the experiment.
3.3 Performance
Performance bottlenecks can bring a system to a halt, which contradicts reliability [3]. Ideally, in the case of an unexpected node failure, the system should have spare resources to rapidly shift the applications from the failed node. In this situation, distributed control shows an advantage, also in correlation with virtualization servers.
3.4 Security of the system
Both hardware and software level security is important for a control system that is intended to be operational 24/7. Any control network should be safe from hacker intrusion or virus infections. This can certainly be achieved through separating the control network from LAN workgroups. Software security needs to be provided through code access security, VPN, permission tables or role-based security implementations [1]. For web access to the experiment, access from the outside world to the protected control system network shall be made using VPN.
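The role-based security mentioned above can be implemented with a permission table. The sketch below uses the SuperUser/User/Guest levels named elsewhere in this document; the operation names themselves are illustrative assumptions.

```python
# Permission table: each role maps to the set of operations it may perform.
PERMISSIONS = {
    "SuperUser": {"read", "write", "configure", "calibrate"},
    "User":      {"read", "write"},
    "Guest":     {"read"},
}

def authorize(role, operation):
    """Return True if the given role is allowed to perform the operation."""
    return operation in PERMISSIONS.get(role, set())

print(authorize("User", "read"))        # True
print(authorize("Guest", "configure"))  # False
```

In a deployed system the role would come from an authenticated session (e.g. over VPN), and every client request would pass through such a check before reaching the device layer.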
3.5 Protection of the equipment
Due to environmental constraints (various pipes, radiation, EMP, etc.), not every control hardware can be fitted in any desired area.
This means that the control nodes to be used within the experimental areas need to be compact and reliable, and should fit in places protected as much as possible from the radiation and EMP sources. In addition, high energy radiation is known to deteriorate the transparency of fiber-optic cables, so fiber cables used in high radiation areas should be replaced every 2 to 3 years, or special shielded cables should be opted for in specific sectors [3].
3.6 Safety interfaces
In order to operate in safe conditions for all the personnel, a facility must have a dedicated Safety system. This system must use dedicated hardware, follow the safety standards and be interfaced with the other systems and equipment in the facility. All protected areas (laser rooms, gamma rooms) and experimental areas must have a radiation protection system linked with a safety system with dedicated hardware (panic buttons, safety PLCs, checkboxes, interlocks, etc.) that is able to allow or block access to the hazardous areas based on clear procedures. The personnel safety system of the facility must interface with all the other subsystems in the facility that protect the personnel, and the entire system must be built in a unitary way.
3.7 Integration
Whenever possible, any control hardware or software library used in the implementation of the system should be compliant with third party tools. This way, preexisting equipment can easily be integrated into the control system. Translators between TANGO, EPICS and LabVIEW are envisaged, as a multitude of equipment already has drivers implemented for one of the above-mentioned control system software packages.
3.8 Maintenance
Because the ELI-NP facility will house 8 experimental areas, each shall have its own control system equipment in order to ensure good management and low maintenance. The hardware systems inside each experimental area (computers, racks, storage) that integrate the specific detection systems are envisaged to be standardized and similar, in order to provide quick replacement and require less training. Moreover, with separate equipment for each experimental area, during system upgrades one experimental area can enter maintenance while the others can still function.
4. High power laser based experiments requirements
In the following schematic, the areas of the building layout where the laser experiments take place (E1, E6, E4, E5 and E7) are highlighted in red.
Fig. 5 Building layout with laser and gamma based experiments
In the following subchapters, the requirements in terms of experimental equipment are presented for each area.
4.1 Experimental area E1
In the E1 area, multiple experiments will be implemented:
Laser driven nuclear physics
Gamma induced experiments
Beam Parameters
10PW, 1 & 2 beam configurations, short focal, circularly polarized laser beams.
Target Systems
For the targets of the LDNP, the following are necessary:
solid (x10 nm for RPA, x10 um for TNSA; dimensions can be 100x100 mm; x100 nm for BOA), gas as a secondary (plasma) target, and liquid, but later in the development schedule.
For RPA, thin targets: SiN wafer technology, circular, D = 4 inch. Alternative solution: metallic deposition, e.g. gold on Si. DLC foils, polymer foils.
Thorium targets TBD.
Holders for solid targets need to be designed. These should have a standardized form.
Manipulation drive systems are required, both in-chamber and outside, for alignment: 0.25 um accuracy positioning in the X, Y and Z axes and 1 mrad in angular rotation. High resolution microscope optics are required in-situ within the chamber for accurate positioning of targets.
A target alignment station with high resolution microscope optics for off-line positioning in X, Y, Z and theta
Target alignment system
Target exchanging system with load lock is compulsory for the repetition rate.
Target synchronization for positioning, cleaning before laser shot.
Timing/Synchronization Systems
For the LDNP experiments, the CCD cameras from the detectors need to be synchronized with the HPLS pulse. This requires gated signals. Delay generators are needed to tune the value of the delay. The HPLS trigger signal should arrive ~100 ms before the laser pulse.
The HPLS needs to provide the burst mode (selectable number of pulses) to run the experiments.
The data generated by the detectors and stored in the data storage must be correlated with the laser shot in a unique way.
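One way to correlate stored detector data with the laser shot "in a unique way", as required above, is to tag every record with a shot identifier built from the run name, a UTC timestamp and a monotonically increasing shot counter. The ID format and the run name below are illustrative assumptions.

```python
import itertools
from datetime import datetime, timezone

_counter = itertools.count(1)  # monotonically increasing shot counter

def next_shot_id(run="E1-LDNP"):
    """Return a unique shot ID, e.g. 'E1-LDNP-20250101T120000-000001'."""
    n = next(_counter)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    return f"{run}-{stamp}-{n:06d}"

# Every detector record written to storage carries the same shot ID,
# so data from different detectors can be joined offline per shot.
record = {"shot_id": next_shot_id(), "detector": "LaBr3", "data": None}
```

The counter guarantees uniqueness within a run even at high repetition rates, while the timestamp lets the record be matched against the beam diagnostics reports for the same shot.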
Detector + DAQ
Laser driven experiment:
LaBr3, plastic scintillators and Ge detectors will need to be developed.
The CCD cameras will need a fast trigger signal (correlated with the laser shot). A delay generator will therefore be needed, synchronized with the laser pulse, to adjust the gating signal. The delay generator parameters need to be remotely controllable in order to fine-adjust the settings from the UserRoom/DataAcqRoom.
For the LaBr3, plastic scintillator and Ge detectors, the trigger delay is not an issue (on the order of 100 ns +/- the jitter value is reasonable, if the acquisition is based on digitizers). In this case, the digitizers need to be triggered.
Each shot with the HPLS will need to be correlated with the beam diagnostics reports and other relevant information, data that need to be available in the UserRoom/DataAcqRoom. Additionally, a hardware signal may be available from the accelerator (Gamma beam) in order to determine if perturbations/background occurred and whether the data must be analyzed differently.
The mass spectrometer and the recoil spectrometer need to be designed (Table 1). They will employ high current sources with control, slits, diagnostics and cooling water.
The gas needed for the gas-filled recoil spectrometer needs to be monitored/controlled from the UsersRoom/DataAcqRoom.
A tape transport system from the decay station will also be available and has to be controlled and monitored from the UsersRoom/DataAcqRoom.
Motorized stages (of the order of tens) will need to be controlled and monitored from the UsersRoom/DataAcqRoom.
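The delay-generator tuning described above reduces to a small calculation: given that the HPLS trigger arrives ~100 ms before the laser pulse, the programmed delay must absorb that advance minus the cable delays, so the CCD gate opens centered on the pulse. The gate width and cable delay values below are illustrative assumptions.

```python
TRIGGER_ADVANCE_S = 100e-3  # HPLS trigger leads the laser pulse by ~100 ms
GATE_WIDTH_S      = 1e-6    # assumed CCD gate width
CABLE_DELAY_S     = 250e-9  # assumed signal propagation delay (cables, electronics)

def gate_delay(trigger_advance=TRIGGER_ADVANCE_S,
               gate_width=GATE_WIDTH_S, cable_delay=CABLE_DELAY_S):
    """Delay to program so the gate opens centered on the laser pulse arrival."""
    return trigger_advance - cable_delay - gate_width / 2

print(f"{gate_delay():.9f} s")  # just under 100 ms
```

In practice this value would be written remotely to the delay generator from the UserRoom/DataAcqRoom and trimmed shot by shot against the measured jitter.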
Table 1
Status of the development of the detectors and associated requests
For the photo-fission experiment that will be performed in E8/E1, a gas cell (ion guide) needs to be monitored and controlled. Moreover, differential pumping from 0.1 atm to 10^-7 atm is needed.
The multireflection trap must have a monitoring/control client in the UsersRoom/DataAcqRoom.
For beam diagnostics, there will be a detector that monitors the time structure of the beam and produces logic signals (NIM/TTL) that will be distributed to the experimental setups.
Interaction Chamber (IC) and Vacuum system
The vacuum pumps, gauges and valves shall be part of the Laser Beam Transport System and of the Interaction Chamber. The Interaction Chamber must have a Safety system to secure the access to the chamber and the safety of the personnel. The IC must be interlocked with the LBTS for the same reasons. The IC must be controlled during the operation of the laser from the HPLS control room, to prevent any accident and because of the system’s complexity.
Storage and Data flows
For the Laser driven experiments in E1, the following data flows are expected:
Table 2
Data flows expected for LDNP
The data output from the detectors must be stored either on a dedicated storage server or on a local PC near the experimental area. Part of the data must be sent to the UsersRoom/DataAcqRoom for online experiment control.
A storage capacity of 70 TB must be available for the first experiments and for short term storage of about 6 months.
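The 70 TB / 6 months requirement above implies a modest sustained ingest rate, which the back of the envelope below makes explicit (assuming, for the estimate, 30-day months and continuous writing):

```python
TOTAL_TB = 70
SECONDS = 6 * 30 * 24 * 3600  # six 30-day months of continuous operation

# Average sustained rate the storage system must absorb (TB -> MB).
rate_mb_s = TOTAL_TB * 1e6 / SECONDS
print(f"{rate_mb_s:.1f} MB/s average")  # ~4.5 MB/s
```

The peak rates per shot (Table 2) will be far higher, which is why the dedicated high speed data busses are specified separately from this long-term capacity figure.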
EMP
For proper operation of the equipment and to prevent malfunction of the apparatus while the experiments are running, the electronics and other sensitive equipment must be electrically shielded and placed as far as possible from the EMP source. The electrical signals must be filtered to the largest possible extent and fiber optics (FO) used to minimize the EMP background generated during the experiment.
HMI, Clients and Supervision
User access from the client monitoring systems (UsersRoom/DataAcqRoom) to the experiment supervision & control (on the E1 area side) will be based on various security levels. Other clients (configuration or control) may have the right to configure specific parameters in depth.
For the equipment that cannot provide interfaces with the general control system architecture, Remote Desktop will be used to remotely access the parameters of the apparatus from the UsersRoom.
Safety system interfaces
An experimental area safety system must exist to handle the access door interlocks, other interlocks, panic buttons, check boxes, shutters, etc., and this must be interfaced with the ELI-NP general personnel safety system.
4.2 Experimental area E5
In the E5 area, multiple experiments are planned to be implemented:
Materials irradiation
Biology
Space science
Beam parameters
1PW, 1&2 beams, linear and circular polarized laser beam, long and short focal configurations.
Target Systems
For the targets in the materials irradiation experiment, the following are necessary:
solid (x10 nm for RPA, x10 um for TNSA; dimensions can be 100x100 mm; x100 nm for BOA), gas as a secondary (plasma) target, and liquid, but later.
For RPA, thin targets: SiN wafer technology, circular, D = 4 inch. Alternative solution: metallic deposition, e.g. gold on Si. DLC foils, polymer foils.
Holders for solid targets need to be designed. These should have a standardized form.
Manipulation drive systems are required, both in-chamber and outside, for alignment: 0.25 um accuracy positioning in the X, Y and Z axes and 1 mrad in angular rotation. High resolution microscope optics are required in-situ within the chamber for accurate positioning of targets.
A target alignment station with high resolution microscope optics for off-line positioning in X, Y, Z and theta
Target alignment system
Target exchanging system with load lock compulsory for the repetition rate.
Target synchronization for positioning, cleaning before laser shot.
Gas target (Nozzle + holder + positioning + alignment)
Timing/Synchronization Systems
For the experiments, the CCD cameras of the detectors need to be synchronized with the HPLS pulse. This requires gated signals. Delay generators are needed to tune the value of the delay. The HPLS trigger signal should arrive ~100 ms before the laser pulse.
The HPLS needs to provide the burst mode (selectable number of pulses) to run the experiments.
The data generated by the detectors and stored in the data storage must be correlated with the laser shot diagnostics in a unique way.
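One possible way to meet the requirement of uniquely correlating detector data with the laser shot diagnostics is to issue a shot identifier at each HPLS trigger and tag every stored record with it. The sketch below is purely illustrative; the class and field names (ShotCorrelator, shot_id, etc.) are assumptions, not part of the actual control system.

```python
import time
import uuid

class ShotCorrelator:
    """Tag every detector record and laser-diagnostic record with the
    same unique shot identifier so they can be joined later in storage."""

    def __init__(self):
        self.records = []

    def new_shot(self):
        # One ID per HPLS trigger; a UUID plus timestamp avoids collisions
        # across DAQ nodes without needing a central counter.
        return {"shot_id": uuid.uuid4().hex, "timestamp": time.time()}

    def tag(self, shot, source, payload):
        # Attach the shot ID and timestamp to a data record from any source.
        record = {"source": source, "data": payload, **shot}
        self.records.append(record)
        return record

    def by_shot(self, shot_id):
        # Retrieve everything belonging to one laser shot.
        return [r for r in self.records if r["shot_id"] == shot_id]

corr = ShotCorrelator()
shot = corr.new_shot()
corr.tag(shot, "laser_diagnostics", {"energy_J": 24.0})
corr.tag(shot, "ion_spectrometer", {"frames": 1})
assert len(corr.by_shot(shot["shot_id"])) == 2
```

In practice the identifier would be distributed together with the hardware trigger, so that each DAQ node tags its data locally without extra network traffic at shot time.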
Detector + DAQ
The equipment envisaged for the Materials irradiation experiments are described below.
Table 3
Equipment requirements for the Materials irradiation experiments
The detection is accomplished with the pyrometer, streak camera, VISAR, ion and electron spectrometers and with the probe beam. The data from these devices has to be stored on a local PC with a timestamp. For the devices that have their own software application that cannot be interfaced with TANGO/EPICS/LabVIEW, a remote desktop connection will be used to monitor the data and interface with the equipment.
The same equipment as above can be used.
The same equipment as above can be used.
Interaction Chamber (IC) and Vacuum system
The interaction chamber has no specific requirements beyond the compulsory vacuum systems.
Storage and Data flows
A storage capacity of 70 TB must be available for the first experiments, with short-term storage of about 6 months.
EMP
For proper operation of the equipment and to prevent the malfunction of the apparatus when the experiments are running, the electronics and other sensitive equipment must be electrically shielded and placed as far as possible from the EMP source. The electrical signals must be filtered to the largest possible extent and fibre optics (FO) used to minimize the EMP background generated during the experiment.
HMI, Clients and Supervision
The following devices have to be controlled remotely from the UserRoom during the experiment:
1. motorized stages
2. the manipulator for the radiation generator target
3. the manipulator for the irradiated target
4. CCD camera
5. cryostat
6. heating stages
7. gas jet system
For devices 1, 3-4 – Motion&Vision, a custom application or multiple applications need to be built to handle all the parameters for the local control of the equipment (near the experimental area).
For devices 5-6 – TargetTemperature, a custom application must also be built to handle the parameters for the local control of the equipment (near the experimental area).
For devices 2, 7 – ParticlesProduction, a custom application must also be built to handle the parameters for the local control of the equipment (near the experimental area). The application must permit selection between the two systems that generate accelerated particles (gaseous or solid) and of their parameters. These parameters must be available via the TANGO/EPICS bus in the UserRoom/DataAcqRoom on a second GUI interface. TBD
The applications should permit logging of the state of the selected parameters.
For the detection equipment, a software trigger application must exist to start the data acquisition procedure. Additionally, a hardware trigger must be implemented, as required, to synchronize the equipment with the laser shot. TBD
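As an illustration of how such a software trigger application could coordinate per-device delays against the early HPLS trigger (which arrives about 100 ms before the pulse), the minimal sketch below registers hypothetical devices with their delay-generator settings. All names and values are assumptions for illustration only.

```python
class SoftwareTrigger:
    """Hypothetical sketch: on the early HPLS trigger, produce the gate
    schedule that real hardware delay generators would be programmed with."""

    def __init__(self):
        self.devices = {}   # device name -> delay in ms after HPLS trigger

    def register(self, name, delay_ms):
        # Each detector (CCD gate, streak camera, ...) gets its own delay.
        self.devices[name] = delay_ms

    def on_hpls_trigger(self):
        # Return the firing order; earliest gate first.
        return sorted(self.devices.items(), key=lambda kv: kv[1])

trig = SoftwareTrigger()
trig.register("streak_camera", 99.5)   # hypothetical delay values
trig.register("ccd_gate", 100.0)
schedule = trig.on_hpls_trigger()
assert schedule[0][0] == "streak_camera"
```

The actual implementation would program physical delay generators rather than return a list, but the bookkeeping (one tunable delay per device, referenced to the common HPLS trigger) is the same.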
For the equipment that cannot provide interfaces with the general control system architecture, Remote Desktop will be used to remotely access the parameters of the apparatus from the UsersRoom.
Safety system interfaces
An experimental area safety system must exist to handle the access door interlocks, other interlocks, panic buttons, check boxes, shutters, etc., and it must be interfaced with the ELI-NP general personnel safety system.
4.3 Experimental area E6
In the E6 area, QED experiments will be developed. The requirements are as follows:
Beam parameters
10 PW, 2 beams, compressed, long and short focal, circularly polarized laser beam.
Target Systems
For the targets of the QED, the following are necessary:
solid (tens of nm for RPA; tens of µm for TNSA, with lateral dimensions up to 100×100 mm; hundreds of nm for BOA)
For RPA, thin targets: SiN wafer technology, circular, D = 4 inch. Alternative solutions: metallic deposition (e.g. gold on Si), DLC foils, polymer foils.
Foam target TBD
Holders for solid targets need to be designed. These should have a standardized form.
Manipulation drive systems are required, both in-chamber and outside, for alignment: 0.25 µm positioning accuracy in the X, Y and Z axes and 1 mrad in angular rotation. High-resolution microscope optics are required in situ within the chamber for accurate positioning of targets.
A target alignment station with high resolution microscope optics for off-line positioning in X, Y, Z and theta
Target alignment system
Target exchanging system with load lock compulsory for the repetition rate.
Target synchronization for positioning, cleaning before laser shot.
Gas target: Gas cell or preformed plasma capillary waveguide
Injector (LWFA) gas jet + possible counter propagating beam
X-Y-Z-theta-phi adjustment for the capillary
Two types of first experiments: Gas and solid to be defined and detailed in the TDR
Timing/Synchronization Systems
For the experiments, the CCD cameras of the detectors need to be synchronized with the HPLS pulse. This requires gated signals. Delay generators are needed to tune the value of the delay. The HPLS trigger signal should arrive ~100 ms before the laser pulse.
The HPLS needs to provide the burst mode (selectable number of pulses) to run the experiments.
The data generated by the detectors and stored in the data storage must be correlated with the laser shot diagnostics in a unique way.
Detector + DAQ
Specifications of the required diagnostics for X-ray, electron and ion beam spectral and spatial measurements:
High resolution dispersion spectrometers required for >GeV electrons and ions (spectral changes expected due to the onset of high-field QED processes)
High energy (hundreds of MeV) γ-ray spectral measurements
Spatially / angularly-resolved measurement of high energy γ-rays
Low background detectors required for measurement of positron production
Required diagnostics of the laser-plasma interaction:
Optical probing using a small portion of the main beam split off, frequency doubled and directed along the target surface to characterize density gradients
Measurement of the back-scattered and absorbed laser pulse energy (optical isolation is required to prevent back reflections causing damage to laser components upstream)
Nuclear activation to characterize plasma temperature
Required detectors:
Active detectors, e.g. high dynamic range CCD cameras to image scintillator or phosphor radiation in the dispersion plane of the electron, ion and γ-ray spectrometers
All detection systems employed must be characterized for their EMP sensitivity in a high energy laser-plasma environment
Passive detectors based on dosimetry film, track detectors, imaging plates, etc., used in single-shot operation mode to cross-reference results obtained using the active detectors
Data acquisition systems:
‘On-line’/Real time analysis of data is required to guide decisions on the next laser shots to be taken
Data upload via a central data management system – to enable quick extraction of data over a range of laser, plasma and beam diagnostics
Interaction Chamber (IC) and Vacuum system
The vacuum pumps, gauges and valves shall be part of the Laser Beam Transport System and of the Interaction Chamber. The Interaction Chamber must have a Safety system to secure the access to the chamber and the safety of the personnel. The IC must be interlocked with the LBTS for the same reasons. The IC must be controlled during the operation of the laser from the HPLS control room, to prevent any accident and because of the system’s complexity.
Storage and Data flows
The above-mentioned detectors, DAQs and diagnostics must output their data to a data storage system. A storage capacity of 70 TB must be available for the first experiments, with short-term storage of about 6 months.
EMP
For proper operation of the equipment and to prevent the malfunction of the apparatus when the experiments are running, the electronics and other sensitive equipment must be electrically shielded and placed as far as possible from the EMP source. The electrical signals must be filtered to the largest possible extent and fibre optics (FO) used to minimize the EMP background generated during the experiment.
HMI, Clients and Supervision
User access through the client monitoring systems (UsersRoom/DataAcqRoom) to the experiment supervision & control (on the E6 area side) will be based on various security levels. Other clients (configuration or control) may have the right to configure specific parameters in depth.
For the equipment that cannot provide interfaces with the general control system architecture, Remote Desktop will be used to remotely access the parameters of the apparatus from the UsersRoom.
Safety system interfaces
An experimental area safety system must exist to handle the access door interlocks, other interlocks, panic buttons, check boxes, shutters, etc., and it must be interfaced with the ELI-NP general personnel safety system.
5. Gamma beam based experiments requirements
In the following schematic, the parts of the building layout where the gamma experiments take place (E2, E3, E7, E8, Gamma Source Recovery area) are highlighted in violet. Except for E3, all the areas are inaccessible during an experiment run, due to radioprotection issues.
Fig. 6 Building layout with laser and gamma based experiments
In the following subchapters, the requirements in terms of experimental equipment are presented for each area.
5.1 Experimental area E2
The E2 area will host the NRF experiments using the low energy gamma beam.
In this case, the detector is an array of segmented HPGe crystals and LaBr3 detectors, named ELIADE.
Beam parameters
The list of the gamma beam parameters is detailed in the following table:
Table 4
Gamma Beam parameters
This list is valid for all the gamma-driven experiments. However, the first-day gamma-based experiment, performed in the E2 area, will use the low-energy beam with the following characteristics:
gamma rays of ≤ 3.5 MeV with BW ≤ 0.5%
beam spot diameter FWHM ≤ 1 cm
Target systems
The Target system comprises three main sub-systems.
The first module consists of a pipe, referred to as the CCD BLACK BOX, that contains a scintillator, a lens and a mirror, all fixed on the internal side of the pipe. The CCD BLACK BOX is installed on a stage, referred to as the CCD STAGE, that can be moved by stage motors; a 3-axis movement (x, y and z) will be achieved. This system is named the CCD system and belongs to the GBDD system (for further details, see the TDR Gamma Beam Delivery and Diagnostics).
Fig. 7 CCD Black box
The control of this system must achieve:
The Trigger of the CCD camera for Image Acquisition (external Trigger signal is required)
The Image acquisition from the CCD camera
The Storage of the image on a computer that will perform the image processing.
The Display of the processed image on the HMI located in the User Room and possibility to interact with the image (cursors, spatial profile of the beam, statistics, etc.)
The Storage of the processed image in a dedicated data storage location
A Remote control of the motor that performs the movement of the CCD STAGE, via a HMI located in the User Room.
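The control requirements above (trigger, acquisition, raw storage, processing, display, processed-image storage) could be organized as a simple pipeline. The sketch below is a hypothetical illustration; the class name, storage keys and the per-row-sum "processing" are placeholders for the real image-processing and storage back-end.

```python
class CCDPipeline:
    """Sketch of the CCD control chain: on an external trigger, acquire an
    image, store the raw frame, process it, store and return the result
    for the User Room HMI. All names are hypothetical."""

    def __init__(self, storage):
        self.storage = storage  # maps a storage key -> image data

    def on_external_trigger(self, camera_read):
        raw = camera_read()                     # image acquisition
        self.storage["raw/latest"] = raw        # raw image storage
        processed = self.process(raw)           # image processing step
        self.storage["processed/latest"] = processed
        return processed                        # handed to the HMI display

    @staticmethod
    def process(raw):
        # Placeholder processing: per-row sums as a crude spatial profile.
        return [sum(row) for row in raw]

store = {}
pipeline = CCDPipeline(store)
profile = pipeline.on_external_trigger(lambda: [[1, 2], [3, 4]])
assert profile == [3, 7]
assert "raw/latest" in store and "processed/latest" in store
```

Keeping raw and processed frames under separate keys mirrors the requirement that both the acquired image and the processed image go to dedicated storage locations.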
The second system is directly related to the ELIADE Interaction Chamber (ELIADE IC). The ELIADE IC will always contain one pipe used for the transport of the beam, referred to as the COLLIMATOR PIPE. Two collimators will enable/disable the beam transport inside it. This pipe will be movable by manual or electrical actuators. This system is referred to as the Collimator system.
In terms of control, if the movements of the COLLIMATOR PIPE are performed by motors, the requirements are the following:
Remote control of the position (OPEN/CLOSE) of the two collimators (stepper motors are proposed)
Remote control of the motors that perform the movement of the COLLIMATOR PIPE, via an HMI located in the User Room.
The last system is the TARGET PIPE that contains the target and that is manually inserted in the COLLIMATOR PIPE. It is assumed that this TARGET PIPE will be automatically aligned with the COLLIMATOR PIPE.
Fig. 8 Collimator pipe
Timing/Synchronization system
The time structure of the gamma beam is as follows:
Fig. 9 Gamma beam time structure
32 micro-bunches of 1 ps each, separated from each other by 16 ns, form a 512 ns macro-bunch. These macro-bunches are generated @ 100 Hz; this frequency is that of the laser shots in the Low-energy IP.
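The macro-bunch arithmetic can be checked directly, and the resulting duty cycle makes clear why continuous read-out is unnecessary:

```python
# Gamma beam time structure, values from the text.
micro_bunches = 32
spacing_ns = 16
macro_bunch_ns = micro_bunches * spacing_ns
assert macro_bunch_ns == 512      # matches the 512 ns macro-bunch

rate_hz = 100
period_ns = 1e9 / rate_hz         # time between macro-bunches
assert period_ns == 10_000_000    # 10 ms between macro-bunches

duty_cycle = macro_bunch_ns / period_ns
# duty_cycle ≈ 5.12e-5: the beam is active only a tiny fraction of the
# time, which is why triggered (rather than continuous) read-out is used.
```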
Because the sampling rates of the digitizers used for the experiment are of the order of 100 MS/s, continuous data sampling and read-out is impossible. Most of the digitizers used for nuclear experiments involving gamma rays provide numerous features in terms of trigger and data analysis (threshold trigger, waveform/pulse analysis, zero suppression, etc.).
Besides these internal features, a timing system must be implemented. It must deliver several global triggers to the DAQ system (in this case the ELIADE DAQ) in order to decide when the data should be read out.
The basic requirements are summarized by the two following figures:
Fig. 10 ELIADE DAQ triggering system sketch
All the data acquired for one macro-bunch by the Ge/LaBr3 digitizers will be read out only during an "Acquisition Time Frame" fixed by the type of detector (50 µs for the HPGe, 10 µs for the LaBr3).
Fig. 11 Macrobunch trigger timing structure
Moreover, another requirement is an absolute timestamp for the macro-bunches generated in the GBS, to be correlated with the machine parameters and afterwards to correlate the parameters with the experimental data.
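The "Acquisition Time Frame" gating described above reduces to a per-detector window check after the macro-bunch trigger. The sketch below is illustrative; the 50 µs/10 µs values come from the text, while the function name is an assumption.

```python
# Acquisition Time Frames per detector type (µs), values from the text.
FRAMES_US = {"HPGe": 50, "LaBr3": 10}

def in_frame(detector, t_after_trigger_us):
    """True if data at time t (µs after the macro-bunch trigger) falls
    inside the read-out window for this detector type."""
    return 0 <= t_after_trigger_us <= FRAMES_US[detector]

assert in_frame("HPGe", 30)       # inside the 50 µs HPGe window
assert not in_frame("LaBr3", 30)  # outside the 10 µs LaBr3 window
```

In the real system this decision is made in the digitizer firmware against the global trigger; the absolute macro-bunch timestamp then lets each accepted window be correlated with the GBS machine parameters.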
Detector ELIADE + DAQ
The ELIADE detector is the most complex piece of equipment involved in the experiment. It comprises:
Two mechanical supports, one for the CLOVER detectors (ELIADE MEC 1) and one for the LaBr3 detectors (ELIADE MEC 2)
An array of 8 CLOVER detectors fixed on ELIADE MEC 1
An array of 4 LaBr3 detectors fixed on ELIADE MEC 2
An Interaction Chamber (ELIADE IC) under vacuum fixed on ELIADE MEC 1
ELIADE detectors
A CLOVER detector is composed of 4 High Purity Germanium (HPGe) crystals, each of them having 8 segments. The crystals are installed in a common vacuum cryostat, as shown in the figure below:
Fig. 12 Schematic of the Clover detector
The HPGe crystals require the control of two parameters:
Temperature (via 2 Pt-100 sensors mounted in the CLOVER detector)
Bias Voltage (via the HV Power supplies)
To control the temperature of each HPGe crystal, an automated LN2 filling system is foreseen. It has to keep the HPGe detectors at the temperature of the liquid nitrogen without any external action. The system should allow for the monitoring of critical parameters, allow users to make a minimum set of operations and give the administrator access to the configuration of the system.
Regarding the bias voltage, the crystals are assembled close together with electrical insulation between them, which allows them to be operated at different bias voltages. The separate bias voltage for each crystal improves noise immunity and allows operation at a voltage lower than the nominal operational voltage, should it be needed. The HV power supply shall provide a voltage of up to 5000 V, as required by the specifications of the HPGe crystals.
The current HV power supplies are already delivered with their own controllers and provide safety interlock signals. However, the control of the voltages will be ensured, along with the control of the LN2 filling system, by a National Instruments CompactRIO running LabVIEW.
From the User point of view, the control requirements are the following:
Local and partial control of the CompactRIO via a local computer running LabVIEW. This should display the temperature of each crystal and allow the HV power supply to be started only if the temperature of the crystals permits it.
Remote monitoring of the Temperature of each CLOVER cryostat via an HMI in the User Room
Remote monitoring of the Bias Voltage of each crystal, via an HMI in the User Room
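The condition that the HV power supply may only be started when the crystal temperatures allow it amounts to a simple interlock predicate, which in the real system runs on the CompactRIO. The sketch below is illustrative only; the temperature threshold is an assumption, not a specified value.

```python
# Minimal sketch of the temperature interlock: HV may be enabled only
# while every HPGe crystal is at LN2 temperature.
LN2_MAX_OK_K = 100.0   # hypothetical upper limit for "cold enough" (K)

def hv_permitted(crystal_temps_k):
    """True only if every crystal temperature is at or below the limit;
    a single warm crystal vetoes the HV start."""
    return all(t <= LN2_MAX_OK_K for t in crystal_temps_k)

assert hv_permitted([77.0, 78.2, 77.5, 77.9])        # all crystals cold
assert not hv_permitted([77.0, 78.2, 295.0, 77.9])   # one warm crystal
```

The veto-on-any-failure shape is the important point: biasing a warm HPGe crystal can damage it, so the interlock must be conjunctive over all crystals, never a majority vote.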
ELIADE DAQ
The DAQ envisaged for the HPGe crystals follows these steps:
Each HPGe crystal is read out by a charge-sensitive preamplifier (front-end electronics) that requires a Low Voltage (LV) power supply.
After the preamplifier, a digital readout system (DAQ Front-End) able to cope with the signals from the Ge crystals has to be developed.
The samples acquired by the digitizers shall be read-out by fast multi-core PCs that will process the data.
The processed data shall be stored in the DAQ Room in two separate storage disks in order to have a back-up/recovery feature.
For the LaBr3 detectors, the steps are similar. The Front-end detector is in this case the Photomultiplier (PMT) attached to the detector while the front-end electronics is specific to this type of detectors.
If possible, a remote monitoring of the LV Power supplies shall be done by the User, via an HMI.
Assuming that the ELIADE DAQ Front-End will be based on dedicated crates (VME/PXI/CompactPCI, etc.) with digitizer boards that will be accessed by a PC, several parameters are envisaged to be measured and compared with acceptable values. These parameters can be divided into two categories:
Physical parameters
Temperature
Voltage and Current
Logical parameters (not exhaustive)
Single board CPU status (run, failure, reset, etc.)
Watchdog Timer
Boards memory (registers, FIFO, etc.)
At a first stage, the requirement in terms of control is to remotely control the physical parameters via an HMI in the User Room.
In a second stage of development, the control of the logical parameters will be achieved.
ELIADE IC
The ELIADE IC vacuum system will fit the following requirements:
The nominal vacuum level is better than 10⁻³ mbar.
The vacuum inside the ELIADE IC will be achieved by different pump units (to be defined).
One or several vacuum gauges (VG), along with their controllers, will enable monitoring of the pressure inside the ELIADE IC.
At a first stage, the pumping-down will be manual: the user shall go to the experimental area to start the vacuum pump units.
In terms of control, two requirements have been expressed for the first stage:
A remote monitoring of the ELIADE IC vacuum in the User Room through an HMI
A hardware interface shall provide the effective value of the ELIADE IC vacuum to the Vacuum Control system of the GBDD (see chapter 8, GBDD, Vacuum).
Data Storage
To evaluate the data storage, the following requirements have to be taken into consideration:
Table 5
First Data Flux estimation for NRF experiment
These tables do not take into account the possibility of performing off-line/on-line processing.
For the high-resolution mode, 200/70 kbit of raw data are acquired by the HPGe/LaBr3 digitizers, assuming an Acquisition Time Frame of 50/10 µs. At the 100 Hz macro-bunch rate, this leads to a raw data flux per channel of up to 20 Mbit/s for HPGe and 7 Mbit/s for LaBr3.
After processing, the total bandwidth is evaluated at 100 Mbit/s.
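The raw flux figures follow directly from the per-macro-bunch data sizes and the 100 Hz macro-bunch rate:

```python
# Per-channel raw data flux, values from the text.
rate_hz = 100                            # macro-bunch rate
raw_kbit = {"HPGe": 200, "LaBr3": 70}    # raw data per channel, per macro-bunch

# kbit per macro-bunch * 100 macro-bunches/s = 100x kbit/s = x/10 Mbit/s * ...
flux_mbit_s = {det: kb * rate_hz / 1000 for det, kb in raw_kbit.items()}
assert flux_mbit_s == {"HPGe": 20.0, "LaBr3": 7.0}   # matches the text
```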
Table 6
Estimation of bandwidth requirement for NRF experiment
The table below summarizes the storage needs:
Table 7
Estimation of storage required for NRF experiment
In summary, 350 TB of storage are needed for this experiment.
GBDD Diagnostics Low energy
The GBDD Diagnostics Low energy tools are the ones described in the TDR Control Systems, chapter 8.
The diagnostics will provide 5 key beam parameters, referred to as Diagnostics Parameters (DP):
Energy spread DP provided by the Attenuator system with HPGe crystals
Polarization DP provided by a NRF polarimeter
Flux (Intensity) DP provided by the Attenuator system with the LaBr3 scintillators
Time Structure DP provided by the Attenuator system with the LaBr3 scintillators
Spatial profile DP provided by the CCD system
Each of these DPs shall be provided to the User during the experiment via two types of HMI.
Moreover, the user will be able to control remotely certain components of this system (see TDR Gamma Beam Delivery and Diagnostics).
GBS
The Beam parameters set or measured in the gamma beam system shall be accessible to the User in the User room. This transmission over the two control networks should be implemented in an "easy way" because both frameworks are EPICS based.
HMI, clients
A local HMI shall provide local monitoring/control/configuration of the systems listed above.
A supervision HMI in the User room, with fewer features, shall enable a global supervision of the experiment.
A last HMI in the User room shall offer the possibility to test, validate and integrate new equipment.
For the equipment that cannot provide interfaces with the general control system architecture, Remote Desktop will be used to remotely access the parameters of the apparatus from the UsersRoom.
Safety system interfaces
An experimental area safety system must exist to handle the access door interlocks, other interlocks, panic buttons, check boxes, shutters, etc., and it must be interfaced with the ELI-NP general personnel safety system.
5.2 Experimental area E3 and Accelerator Bay 1
Accelerator Bay 1 and the E3 area will be used for positron production experiments. Two types of production shall be available:
A converter chamber
An isotope source (22Na).
Each of these sources will produce positron beams that will be transported to three experimental setup locations:
Coincidence Doppler Broadening Spectroscopy (CDBS)
Positron Annihilation Lifetime Spectroscopy (PALS)
Positron induced Auger Electron Spectroscopy (PAES).
The Accelerator Bay 1 will host one additional type of experiment:
Gamma-induced positron spectroscopy (GIPS)
The following scheme presents the different components of the Positron production experiments.
Fig. 13 Positron production area E3 and Accelerator Bay 1
Beam parameters
See Table 4.
Target systems
Converter chamber source
No specific requirements.
Isotope Source
Two types of target manipulators under vacuum are envisaged:
2 × long (1 m) linear manipulators for the isotope source
2 × short (5 cm) linear + rotation manipulators for the moderator
In total, 4 manipulators are foreseen because two chambers will be used (see Isotope source Interaction chambers and vacuum).
The control requirements are:
Local control of the position (1 axis + rotation) of the isotope source
Local control of the position (1 axis) of the moderator
Local control of the two systems above via a User HMI in the E3 area.
Remote control and monitoring of the two systems via a User HMI in the User room.
CDBS
No specific requirements.
PALS
No specific requirements.
PAES
Currently, no specific requirements.
Positron source interaction chambers and vacuum
Converter interaction chamber
Details will be updated progressively based on the experimental specific requests.
Isotope interaction chamber
The isotope IC is a 5-way cross DN220CF, model CX5-200S.
The power supply for the converter chamber has to be remotely monitored via an HMI in the User room.
Vacuum is also needed.
Positron beam transport technologies and vacuum
The beam transport can be divided between sections in Accelerator Bay 1 and in E3.
Accelerator Bay 1: focus lens system
This system focuses the positron beams via six electrostatic lenses. The voltage of each lens has to be monitored. This requires:
Local control of the lens power supplies
Remote monitoring of the voltages, through the remote monitoring of the lens power supplies, via a User HMI in the User room.
E3: Magnetic switches
Two magnetic adiabatic switches are foreseen. The current approach is the superposition of the longitudinal main field, generated by a solenoidal coil, with a transverse switching field generated by a magnetic dipole.
The control requirements are the control of the magnetic fields of the solenoidal coils and dipole magnets. This means:
Local control of the power supplies connected to each device (generally the power supply comes with its own controller)
Remote monitoring of the Power supplies through an HMI in the User room
E3: Beam Profile
The beam profile will be measured by an MCP coupled with a phosphor screen, whose image will be recorded through a view port by a CCD camera with the help of a 45° mirror.
The control requirements are:
The Trigger of the CCD camera for Image Acquisition (external Trigger signal is required)
The Image acquisition from the CCD camera
The Storage of the image on a computer that will perform the image processing.
The Display of the processed image on the HMI located in the User Room/Experimental area
The Storage of the processed image in a dedicated data storage location
Vacuum
Fore-vacuum and Ultra-High Vacuum (UHV) pumps and gauges shall ensure the vacuum in the different sections of the beam transport. Five pneumatic gate valves (GV) are envisaged:
1 after the focus lens system in Accelerator Bay 1 : GV 1
1 before the Isotope IC in E3 : GV 2
1 before the CDBS IC in E3 : GV3
1 before the PALS IC in E3 : GV4
1 before the PAES ICs in E3 : GV5
The vacuum inside the different sections shall be controlled by a vacuum control system. At a first stage, these systems shall provide:
A local control of each vacuum pump unit associated with one section of the beam transport (Start, Stop, etc.)
A local control of all the valves (open/close)
A PLC based controller that shall ensure:
The automatic sequence of the pumping-down in order to reach the nominal vacuum level inside each section of the beam transport
The control and monitoring of the valves and of the gauges distributed along the pipes, with the possibility to read the pressure values.
The logic between the vacuum pump control units, valves control units and vacuum gauges in order to avoid any unsafe vacuum conditions.
A hardware-based safety system that should prevent:
A differential pressure between the effective vacuum and the nominal vacuum higher than X % at beam shot.
Opening/closing of the valves in unsafe vacuum conditions.
Local configuration and monitoring of each vacuum control system through an HMI (all the pertinent signals related to the local controls and machine protection interlocks shall be displayed on this HMI)
Software and Hardware interfaces with the ELI-NP Personal Safety System and ELI-NP Machine Protection system.
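The valve-related part of the safety logic above (never open or close a valve in unsafe vacuum conditions) reduces to a pressure-matching check across each valve, which in the real system runs in the PLC. The sketch below is illustrative only; the tolerance is an assumption, since the actual "X %" threshold is not specified in the text.

```python
# Hypothetical interlock: a gate valve may change state only when the
# pressures on its two sides are safely matched.
MAX_RATIO = 10.0   # assumed maximum allowed pressure ratio across a valve

def valve_move_allowed(p_upstream_mbar, p_downstream_mbar):
    """True if opening/closing the valve is safe, i.e. the pressure
    difference across it is within the assumed tolerance."""
    hi = max(p_upstream_mbar, p_downstream_mbar)
    lo = min(p_upstream_mbar, p_downstream_mbar)
    return hi / lo <= MAX_RATIO

assert valve_move_allowed(1e-5, 2e-5)        # both sides at high vacuum
assert not valve_move_allowed(1000.0, 1e-5)  # atmosphere against vacuum
```

In the PLC implementation the same predicate would combine the gauge readings of the two adjacent sections, and its output would also feed the pump/valve sequencing logic so the automatic pumping-down cannot create an unsafe differential pressure.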
At a second stage, these two PLC control systems must be integrated with the general EXP MCS based on EPICS in order to provide:
A remote control of the vacuum by the Operator, via an HMI located in the Gamma Control room.
A remote monitoring of the vacuum by the User, via an HMI located in the User room.
Timing/Synchronization
The requirements are similar to those for the NRF low-energy experiments. In summary:
One signal, @ 100Hz correlated to the Macro-bunch generation.
An absolute timestamp for the macro-bunches generated in the GBS, to be correlated with the machine parameters and afterwards to correlate the parameters with the experimental data
Detector + DAQ
The table below summarizes the detectors used for each of the 4 experiments.
Table 8
List of detectors for E3 and Accelerator Bay 1 areas
HPGe crystals
The control requirements for the HPGe crystals are the same as the ones expressed for the HPGe crystals used in ELIADE. The system used for ELIADE (LN2 gas filling system and HV supply, both controlled by a NI CompactRIO) could be shared with the GIPS experiment. This is motivated by the fact that only one experiment can be performed at the same time and the rack that will host the NI Compact RIO will be movable.
BaF2 PMT
The BaF2 PMTs require High Voltage power supplies (3 kV). From the User's point of view, the control requirements are:
Local control of the HV power supply (generally ensured by an embedded controller)
Remote monitoring of the HV Power supply, via an HMI, in the User Room.
DAQ
Details will be updated progressively based on the specific experimental requests.
Experimental Interaction Chamber and vacuum
The experimental interaction chambers (EXP IC) are located only in E3. The following table presents them.
Table 9
List of interactions chambers in E3 and Accelerator Bay 1 areas
For the CDBS IC and PALS IC, the vacuum system should fit the following requirements:
The nominal vacuum level must be defined.
The vacuum inside the IC will be achieved by different pump units (to be defined).
One or several vacuum gauges (VG), along with their controllers, will enable monitoring of the pressure inside the CDBS/PALS IC.
At a first stage, the pumping-down will be manual: the user shall go to the experimental area to start the vacuum pump units.
In terms of control, two requirements have been expressed for the first stage:
a remote monitoring of the CDBS/PALS IC vacuum in the User Room through a HMI
A hardware interface shall provide the effective value of the CDBS/PALS IC vacuum to the Vacuum Control system of the Positron Beam transport.
For the PAES assembly, the vacuum system should fit the following requirements:
The nominal vacuum level must be 10⁻⁹ mbar.
The vacuum inside the ICs will be achieved by 4 dry backing pumps and 4 turbo pumps (2 of 400 l/s for the deposition chambers, one of ~685 l/s for the analyzing chamber, one of ~265 l/s for the load-lock chamber); one ion getter pump might be required for the analyzing chamber to reach a background pressure of ~5×10⁻¹⁰ mbar.
One or several vacuum gauges (VG), along with their controllers, will enable monitoring of the pressure inside the ICs.
At a first stage, the pumping-down will be manual: the user shall go to the experimental area to start the vacuum pump units.
Because of the complexity of the assembly, a specific vacuum system should be implemented.
GBS
The Beam parameters set or measured in the gamma beam system shall be accessible to the User in the User room. This transmission over the two control networks should be implemented in an "easy way" because both frameworks are EPICS based.
HMI, clients
A local HMI shall provide local monitoring/control/configuration of the systems listed above.
A supervision HMI in the User room, with fewer features, shall enable a global supervision of the experiment.
A last HMI in the User room shall offer the possibility to test, validate and integrate new equipment.
For the equipment that cannot provide interfaces with the general control system architecture, Remote Desktop will be used to remotely access the parameters of the apparatus from the UsersRoom.
Safety system interfaces
An experimental area safety system must exist to handle the access door interlocks, other interlocks, panic buttons, check boxes, shutters, etc., and it must be interfaced with the ELI-NP general personnel safety system. Moreover, because Accelerator Bay 1 will be covered by the GBS Personal Safety System, a clear interface between the ELI-NP PSS, the experimental safety system and the GBS Personal Safety System has to be defined.
5.3 Experimental area E7
Only one gamma-driven experiment is envisaged at this moment: a "gamma above threshold" experiment using the GANT array. This array is composed of:
A fast detection system consisting of 30-60 LaBr3:Ce detectors for gamma rays
60 liquid scintillation detectors for neutrons; the latter should be of NE213 type.
Beam parameters
See Table 4.
Target systems
The target should be aligned with the GANT detector. The system that shall implement this feature is not yet designed. The CCD system described in the GBDD chapter and used for the NRF experiment is one envisaged solution.
Timing/Synchronization systems
The requirements are similar to those for the NRF low-energy experiments. In summary:
One signal, @ 100Hz correlated to the Macro-bunch generation.
An absolute timestamp for the macro-bunches generated in the GBS, to be correlated with the machine parameters and afterwards to correlate the parameters with the experimental data.
Detector + DAQ
The NE213 detectors composing the GANT array require high voltage (6-10 kV).
The main requirement is the control of the high voltage of each detector:
Locally ensured by a VME/PXI, etc. board delivered with the Power Supply unit.
Remotely monitored by the User in the User Room.
Interaction Chamber vacuum
The experiment using this detector might need a vacuum chamber, but this is not clarified at this moment.
Data Storage
Details will be updated progressively based on the specific requests of the experiments.
GBDD Diagnostics High-Energy
The GBDD Diagnostics tools are the ones described in the TDR Control Systems, chapter 8.
Energy spread DP: the system has to be designed.
Polarization DP provided by the Fission chamber
Time Structure DP provided by one LaBr3 scintillator
Spatial profile DP provided by the CCD system
Each of these DPs shall be provided to the User during the experiment via two types of HMI.
Moreover, the user will be able to control remotely certain components of this system (see TDR Gamma Beam Delivery and Diagnostics).
GBS
Same requirements as the ones for the NRF low energy experiments.
HMI clients
A local HMI shall provide local monitoring/control/configuration of the systems listed above.
A supervision HMI in the User Room, with fewer features, shall enable a global supervision of the experiment.
A third HMI in the User Room shall offer the possibility to test, validate and integrate new equipment.
For the equipment that cannot provide interfaces with the general control system architecture, Remote Desktop will be used to remotely access the parameters of the apparatus from the UsersRoom.
Safety system interfaces
An experimental area safety system must exist to handle the access door interlocks, other interlocks, panic buttons, check boxes, shutters, etc., and it must be interfaced with the ELI-NP general personnel safety system.
5.4 Experimental area E8
In the E8 area, four types of experiments are foreseen.
Photo-fission experiments based on the following types of detectors:
High Efficiency Ionization Chamber (BIC) and Si DSSD Detector System (BIC array + DSSDs)
Thick Gas Electron Multiplier (THGEM)
ELIADE
Gamma above threshold experiments with a 4π neutron detector
Nuclear Resonance Fluorescence
Charged particles experiments based on the following types of detectors:
Large-area Silicon Strip Detectors (SSDs)
Gas Time Projection Chamber read by an electronic readout (eTPC)
Bubble Chamber (BD)
Beam parameters
See Table 4.
Target systems
Photo-fission
No specific requirements have been addressed. The target must be mounted and fixed in laboratories.
Gamma above threshold
No specific requirements have been addressed for the 4π neutron detector.
NRF High Energy
Same requirements as the ones expressed for the NRF low energy experiments.
Charged Particles
For eTPC and BD, the target is the active medium (gas) inside the chamber. The control of this medium is addressed below.
For the SSDs, the target has to be placed inside a reaction chamber with a target ladder. In addition, it is foreseen that the vertical position (height) and the rotation of the target can be set before and during the experiment.
Timing/Synchronization system
Similar requirements as previously:
One signal at 100 Hz, correlated to the macro-bunch generation.
An absolute timestamp for the macro-bunches generated in the GBS, to be correlated with the machine parameters and, afterwards, used to correlate these parameters with the experimental data.
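As an illustration of this timestamp-based correlation, the sketch below matches DAQ events to the nearest macro-bunch by absolute timestamp. This is a minimal sketch only: the function, data layout and tolerance value are illustrative assumptions, not part of the TDR.

```python
# Hypothetical sketch: correlating experimental events with GBS macro-bunch
# parameters by absolute timestamp (names and tolerance are invented).
from bisect import bisect_left

def correlate(events, bunches, tolerance=0.005):
    """Match each event to the closest macro-bunch timestamp.

    events  -- list of (timestamp, payload) tuples from the DAQ
    bunches -- time-sorted list of (timestamp, machine_params) tuples
    Events with no macro-bunch within `tolerance` seconds are dropped.
    """
    times = [t for t, _ in bunches]
    matched = []
    for t_evt, payload in events:
        i = bisect_left(times, t_evt)
        # candidates: the bunch just before and just after the event time
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        if not candidates:
            continue
        best = min(candidates, key=lambda j: abs(times[j] - t_evt))
        if abs(times[best] - t_evt) <= tolerance:
            matched.append((payload, bunches[best][1]))
    return matched

# Macro-bunches at 100 Hz -> one every 10 ms
bunches = [(n * 0.010, {"bunch_id": n}) for n in range(5)]
events = [(0.0101, "hit A"), (0.0302, "hit B"), (0.5000, "orphan")]
print(correlate(events, bunches))
```

With the simulated data above, the two in-window events are paired with bunches 1 and 3, while the "orphan" event is discarded.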
Detectors + DAQ
The table below summarizes all the detectors used for the experiments.
Table 10
List of detectors for gamma in E8 area
BIC array + DSSDs detectors:
The BIC is an ionization chamber containing an active medium (gas). The foreseen characteristics are a mixture of 90% Ar + 10% CH4 at a gas pressure of 1 bar.
A local system for gas-recycling is envisaged. This shall ensure:
Local control of the flow, temperature and pressure of the gas in the chamber: the filling command and the temperature and pressure sensor values will form a closed loop in order to maintain a very high degree of gas purity.
Local feedback
The High Voltage applied between the cathode and the anode of the chamber shall also be controlled locally. A remote control must be available for the User in the control room.
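The closed loop described for the BIC gas system can be sketched as a simple proportional feedback between the pressure reading and the filling command. This is purely illustrative: the setpoint, gain and convergence model below are invented, not taken from the TDR.

```python
# Illustrative sketch only (setpoint and gain are assumptions): a closed loop
# keeping the BIC gas pressure at its setpoint by adjusting the filling command.
class GasLoop:
    def __init__(self, setpoint_bar=1.0, gain=0.5):
        self.setpoint = setpoint_bar
        self.gain = gain

    def filling_command(self, measured_bar):
        """Proportional correction: positive -> inject gas, negative -> vent."""
        return self.gain * (self.setpoint - measured_bar)

loop = GasLoop()
pressure = 0.90          # simulated gauge reading, in bar
for _ in range(20):      # simulated control steps
    pressure += loop.filling_command(pressure)  # actuator applies correction
```

In a real deployment the correction would drive the filling valve and the gauge would be re-read each cycle; here the actuator response is idealized so the simulated pressure simply converges to the setpoint.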
THGEM array detector
This type of detector is also an ionization chamber containing an active medium (gas). The active gas flow envisaged is 5 mbar of isobutane. A system similar to the one described for the previous detector will be used.
4π neutron detectors
The neutron detectors require High Voltage (~ 2 kV). The main requirement is the control of the High Voltage of each detector:
Locally ensured by a VME/PXI, etc. board delivered with the Power Supply unit.
Remotely monitored by the User in the User Room.
ELIADE detector
Same requirements as the ones for NRF low energy experiments.
SSDs
Each silicon detector requires:
Bias supply:
Locally controlled via Mesytec MHV-4 type hardware
Remotely monitored/controlled via a User HMI in the User Room
Amplifier:
Locally controlled via a Mesytec MSCF-16 F hardware type
Remotely monitored/controlled via a User HMI in the User Room
eTPC detector
This chamber works at low pressure (~ 100 mbar), the target being a special TPC-compatible gas. A local system for gas-recycling is envisaged, with local feedback and the possibility to monitor the key values and control the operation remotely from the control room. The gas system and the temperature and pressure monitors will form a closed loop in order to maintain a very high degree of purity of the TPC gas.
Moreover, the drift velocity has to be monitored. Finally, the generation of the drift field in the eTPC requires HV power supplies (tens or hundreds of kV). This system has to be controlled remotely. The following list summarizes these items:
Temperature and pressure monitor implemented into the DAQ system by the use of a local controller
Drift-velocity monitoring detector implemented into the DAQ system by the use of a local controller
A gas control system with a gas-recycling feature for both non-rare and rare gases, locally controlled
HV power supply control with its own local controller (CAEN-type HV power supplies are envisaged)
Each of the items listed above shall be remotely monitored and/or controlled via a User HMI in the User Room.
BD
Two types of systems will be used in terms of control:
A pressure control system including hydraulic and pneumatic systems, along with a temperature system that includes cooling and heating options using liquid nitrogen. The proposed control unit is a National Instruments chassis composed of a controller module and analog/digital interface modules for the hydraulic, pneumatic and temperature systems.
An acoustic trigger system: due to its complexity, this system itself has to be controlled: laser, power supply, acoustic sensors and piezoelectrics. The proposed control unit is based on a National Instruments CPU.
Interaction Chamber (IC) vacuum
BIC array + DSSDs
No specific requirements besides the control of the gas medium of each BIC.
THGEM array
No specific requirements besides the control of the gas medium in each of the THGEM.
4π neutron detectors
Two configurations are envisaged at this moment. In the first one, a pipe ends at the limit of the detector (partial vacuum). In the second configuration, the evacuated pipe goes through the detector (full vacuum).
Regardless of the configuration, the following requirements have been expressed:
The nominal vacuum level must be defined.
A vacuum gauge (VG), or several, will enable monitoring of the pressure inside the 4π neutron detector IC.
The interface between the GBDD pipe and the 4π neutron detector IC consists of a small gap of air.
At a first stage, the pumping-down will be manual: the operator shall go to the experimental area to start the vacuum pump units in charge of the 4π detector IC vacuum.
At a second stage, this pumping-down could be automatic by the use of a PLC based system.
In terms of control, two requirements have been expressed:
A remote monitoring of 4π detector IC vacuum in the User Room through an HMI
A hardware interface that shall provide the effective value of the 4π neutron detector IC vacuum to the PLC Control system of the GBDD.
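The envisaged PLC-based automatic pumping-down can be sketched as a small supervisory routine that starts the pump units and waits for the gauge to reach a target pressure. This is a hedged sketch: the target threshold, the gauge model and the state names are assumptions for illustration, not TDR requirements.

```python
# Hedged sketch of the envisaged automatic pumping-down (thresholds and state
# names are assumptions): start the pumps, poll the gauge, report the outcome.
def pump_down(read_pressure_mbar, start_pump, target_mbar=1e-5, max_steps=100):
    """Run the pump until the gauge reads at or below target; return final state."""
    start_pump()
    for _ in range(max_steps):
        if read_pressure_mbar() <= target_mbar:
            return "VACUUM_OK"
    return "TIMEOUT"

# Simulated gauge: pressure drops by a factor of 10 per step from 1000 mbar.
readings = iter([1000 / 10**n for n in range(12)])
state = pump_down(lambda: next(readings), start_pump=lambda: None)
print(state)
```

On a PLC the same logic would run as a cyclic program with interlocks (e.g. refusing to open gate valves until "VACUUM_OK"); the Python form above only illustrates the sequencing.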
ELIADE detector
Same as the ones expressed for the NRF Low Energy.
SSDs detectors
The SSDs based experiments require a target located inside an interaction chamber (IC) under vacuum.
The following requirements have been expressed.
The nominal vacuum level must be defined.
A vacuum gauge (VG), or several, will enable monitoring of the pressure inside the IC.
The interface between the GBDD pipe and the IC consists of a small gap of air.
At a first stage, the pumping-down will be manual: the operator shall go to the experimental area to start the vacuum pump units in charge of the IC vacuum.
At a second stage, this pumping-down could be automatic by the use of a PLC based system.
In terms of control, two requirements have been expressed:
A remote monitoring of the IC vacuum in the User Room through an HMI
A hardware interface shall provide the effective value of the IC vacuum to the Vacuum Control system of the GBDD.
eTPC detector
The eTPC itself has to be placed inside a chamber under vacuum.
The following requirements have been expressed.
The nominal vacuum level must be defined.
A vacuum gauge (VG), or several, will enable monitoring of the pressure inside the IC.
The interface between the GBDD pipe and the IC consists of a small gap of air.
At a first stage, the pumping-down will be manual: the operator shall go to the experimental area to start the vacuum pump units in charge of the eTPC chamber vacuum.
At a second stage, this pumping-down could be automatic by the use of a PLC based system.
In terms of control, two requirements have been expressed:
A remote monitoring of the IC vacuum in the User Room through an HMI
A hardware interface shall provide the effective value of the eTPC chamber vacuum to the Vacuum Control system of the GBDD.
BD
No specific requirements besides the control of the gas medium inside the bubble chamber.
The experiment using this detector might need a vacuum chamber, but this is not clarified at this moment.
Data Storage
No specific requirements were formulated. The data storage amount shall follow the general implementation.
GBDD Diagnostics High-Energy
The GBDD Diagnostics tools are the ones described in the TDR Control Systems, chapter 8.
Energy spread DP: the system has to be designed.
Polarization DP provided by the Fission chamber
Time Structure DP provided by one LaBr3 scintillator
Spatial profile DP provided by the CCD system
Each of these DPs shall be provided to the User during the experiment via two types of HMI.
Moreover, the user will be able to control remotely certain components of this system (see TDR Gamma Beam Delivery and Diagnostics).
GBS
Same requirements as the ones for the NRF low energy experiments.
HMI clients
A local HMI shall provide local monitoring/control/configuration of the systems listed above.
A supervision HMI in the User Room, with fewer features, shall enable a global supervision of the experiment.
A third HMI in the User Room shall offer the possibility to test, validate and integrate new equipment.
For the equipment that cannot provide interfaces with the general control system architecture, Remote Desktop will be used to remotely access the parameters of the apparatus from the UsersRoom.
Safety system interfaces
An experimental area safety system must exist to handle the access door interlocks, other interlocks, panic buttons, check boxes, shutters, etc and this must be interfaced with the ELI-NP general personnel safety system.
6. Combined laser & gamma based experiments requirements
6.1 Experimental area E7
6.1.1 Beam parameters
Details will be updated based on experimental specific requirements.
Target systems
The experiments in this area will either focus the laser beams in vacuum, or use a primary gas target for electron acceleration to two energy ranges: 50 – 100 MeV and 2 – 2.5 GeV.
Timing/Synchronization systems
In order to perform combined experiments with HPLS 10 PW pulses and GBS gamma/electron bunches, the experiments will need a way of correlating the experimental data with the parameters of the beams themselves, which are measured by the large equipment or by additional setups. For this, a unique way of identifying the parameters of the pulses, or of other machine parameters that are important for the experiment, is necessary.
Detectors + DAQ
Electron spectrometer for energies of hundreds MeV-GeV
Gamma radiation detector for high energy gammas (hundreds MeV) – using convertors for pair creation
Gamma radiation detectors for MeV to tens-of-MeV gammas
7. System design
7.1 General Architecture Model
The EXPs MCS will be based on the following layers:
Supervision Layer, composed of:
The Remote Human Machine Interfaces (HMI) sub-layer groups general-purpose PCs and monitors that will be used in the UsersRoom/DaqRoom. These HMIs will provide high-level supervision of the equipment and state machines inside the experiment.
The Central Services sub-layer will include several types of central services: archiving, logging, alarm handling, common network services (DNS, etc.) or, possibly, specific services for Distributed Control System (DCS) needs. In general, they must run continuously during the experiment, which is why they are referred to as "central". From a hardware point of view, industrial PCs or high-performance virtualization servers are the solutions taken into account.
Control Layer, composed of:
The Local Control Unit sub-layer, including local rack-mount industrial PCs. If required, PLCs or other dedicated computers (e.g. National Instruments) will be used as local controllers for specific purposes (vacuum, machine protection, delay, etc.). All these control units will manage the hardware equipment or its interfaces.
The Local Human Machine Interfaces sub-layer will provide HMIs for operators who want to access the equipment locally, for maintenance or configuration purposes. This feature will be provided through dedicated Keyboard Video Mouse (KVM) switches and consoles in the case of industrial PCs. In other cases, specific HMIs or other solutions will be provided.
Equipment Layer, composed of:
The Equipment Hardware sub-layer, which consists of the sensors, detectors and actuators used during the experiment.
The Equipment Interface sub-layer, when the experiment requires it (I/O-specific controllers, switches, etc.), which refers to intermediary equipment that can be controlled by the Control Layer.
To summarize, the hardware architecture can be seen as a three-tier layer structure.
The connection between the different modules contained in each of these layers shall be done using several types of networks/connections:
Equipment connections: Equipment – Control Unit connections. The type of connection depends on the equipment. It can be a dedicated network for PLCs, a single cable between the equipment and the control unit using Ethernet, RS-232, GPIB, etc.
Control System Network: Control Unit – Supervision connections. An Ethernet network will be used, based on fiber-optic links to the largest possible extent.
Moreover, dedicated networks physically separated from the others, such as a Video Network, are envisaged if the amount of data transmitted by specific equipment, such as cameras, over the control system network is too large.
Finally, electromagnetic perturbations/constraints, such as the Electromagnetic Pulse (EMP) that will occur in the areas used for laser experiments, shall be taken into consideration in the choice of cables (Cat 6, fiber optic, etc.). The protection/isolation of electrical hardware modules shall also be ensured.
The scheme below illustrates the model detailed above:
Fig. 14 General Architecture Model with three-tier layer
7.2 Software Architecture Model
As stated at the beginning of this document, the EXPs MCS are split in two categories: the EXPs MCS for laser driven experiments, referred to as Laser EXP MCS, and the one for the gamma driven experiments, the Gamma EXP MCS. This distinction is clear from a software point of view: the Laser EXP MCS will use TANGO, whereas the Gamma EXP MCS will be developed on EPICS. For the combined experiments, a decision shall be taken based on the equipment involved and on the synchronization level needed.
EPICS and TANGO are two Distributed Control System (DCS) frameworks that were first developed by the research community, in the 1990s in the USA for EPICS [4] and in the 2000s in France for TANGO [5]. Since then, both systems have been used in different research facilities all over the world. The two systems have the same goal: allowing the control and monitoring of the variety of devices that are mandatory in a research facility.
In the following, a brief description of each framework is given; their differences and similarities are also presented.
Fig. 15 General Software architecture model
EPICS
The Experimental Physics and Industrial Control System (EPICS) is a hardware-oriented DCS that allows data exchange between the different components of a system through a client-server communication model. EPICS is based on a hardware hierarchy composed of three main components:
An Operator Interface (OPI): a workstation which can run the various EPICS client tools.
An Input/Output Controller (IOC): any platform that can support the EPICS run-time databases (iocCore) together with the other software components included in an IOC. It can be a VME crate (VxWorks), a desktop (Linux, Windows, macOS), an embedded controller (RTEMS), etc. The IOC interfacing a piece of equipment enables its control.
A Local Area Network (LAN): the network that allows the communication between the IOCs and the OPIs.
The following part describes the EPICS software components associated with this hardware architecture.
EPICS Servers
From a software point of view, the key component of EPICS is the process variable (PV). A PV represents a physical quantity that is measured by a sensor or controlled by an actuator. A PV is modelled as a record in the EPICS real-time database running on an IOC. This IOC generally acts as an EPICS server, even if it can also be an EPICS client. An IOC encapsulates a specific state machine (Sequencer), the database describing the PVs it owns, and the device support modules for interfacing the hardware. The scheme below summarizes these ideas.
Fig. 16 EPICS IOC Overview
The concept of the EPICS IOC was strongly related to the I/O hardware on which it runs. One could perfectly control one hardware device and even define its state machine; however, this was not sufficient in terms of sequencing/coordination of several devices, i.e. a sequence involving several hardware devices, or software processes that involve several devices. This gap has been filled by the concept of the soft IOC.
EPICS Communication Protocol
The Channel Access part depicted in Figure 16 is related to the communication between the EPICS clients and the EPICS servers. Each client, respectively server, using Channel Access is referred to as a Channel Access Client (CAC), respectively a Channel Access Server (CAS). Channel Access is the protocol enabling the transmission of the data related to PVs [6]. In order to avoid network conflicts, each PV must have a unique name. A CAC can set (write), get (read) or monitor the data of PVs via the Channel Access protocol. The protocol is based on a UDP request broadcast to the servers listed in EPICS_CA_ADDR_LIST. The CAS handling the requested PV then replies, and a TCP connection is established between the CAC and the selected CAS.
Fig. 17 Channel Access Overview
Figure 17 shows the example of two PVs: SA1:T1:temp, reading a temperature value, and S1:G1:pressure, reading the value of the pressure in a vacuum chamber. These PVs are implemented inside the IOC database. CACs are able to display the values of these PVs by the use of different types of Graphical User Interfaces provided with the EPICS software.
Fig. 18 EPICS principles
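The PV and Channel Access concepts can be illustrated with a toy, in-process model. This is not the real Channel Access implementation; the caget/caput/camonitor names merely mimic the standard EPICS command-line tools, and the PV names are the ones from Figure 17.

```python
# Toy model of an IOC record database with caget/caput/camonitor-style access
# (conceptual sketch only; real Channel Access works over UDP/TCP).
class ToyIOC:
    def __init__(self):
        self.records = {}            # PV name -> current value
        self.monitors = {}           # PV name -> list of client callbacks

    def caput(self, name, value):
        self.records[name] = value
        for cb in self.monitors.get(name, []):
            cb(name, value)          # notify monitoring clients, as CA does

    def caget(self, name):
        return self.records[name]

    def camonitor(self, name, callback):
        self.monitors.setdefault(name, []).append(callback)

ioc = ToyIOC()
updates = []
ioc.camonitor("SA1:T1:temp", lambda n, v: updates.append((n, v)))
ioc.caput("SA1:T1:temp", 21.5)       # temperature PV from Figure 17
ioc.caput("S1:G1:pressure", 1.2e-6)  # pressure PV from Figure 17
print(ioc.caget("SA1:T1:temp"), updates)
```

The monitor callback fires on every write, which is the essence of the set/get/monitor triad described above; PV name uniqueness is what makes the flat `records` lookup possible.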
EPICS Clients
The previous part already detailed the features offered by the EPICS clients (CAC). Most of them consist of Graphical User Interfaces (GUIs) displayed on the OPI. These GUIs are based on EPICS client extensions that have been developed over more than two decades. Some of them use Channel Access:
Operator Display Manager (MEDM, [7]): different display managers for monitoring purposes have been developed, the most recent being MEDM.
Sequencer system (Sequencer, [8]): a tool for running an IOC sequencer, consisting of the execution of state programs on the I/O controller.
Alarm Handling System (ALH, [9]): an interactive graphical application displaying and monitoring EPICS database alarm states. It serves as an interface between an operator and the EPICS database, with which it communicates using Channel Access function calls. The user interface of the Alarm Handler contains a hierarchical display of an alarm configuration structure, allowing both high-level and detailed views of the alarm configuration in one window.
Archiving system (CA Archiver): an archiving toolset for EPICS that can archive the values of Process Variables via the EPICS Channel Access protocol.
Logging System (iocLog): a system-wide error logger supplied with EPICS Base. It writes all messages to a system-wide file. iocLogServer is provided with the EPICS base software, whereas iocLogClient ensures the configuration of the system.
Other tools are based on different environments, the best example being the database management tool VDCT: an IOC database configuration tool, based on Java, developed and maintained by Cosylab [10].
It must be noted that some of these tools can be used with other central servers or databases for achieving their goals (archiving service, alarm handling, etc.). They can be integrated into the EPICS environment; nowadays, Control System Studio [11] is widely used for the integration of these central services. Moreover, many users prefer to develop in and use other Integrated Development Environments (IDEs), which is why bindings between EPICS and other software (LabVIEW, C/C++, Java, MATLAB, Perl, Python, etc.) have been created and maintained. Finally, EPICS provides an access security system that limits access from the CACs to the IOC databases.
EPICS Software Architecture Model
To summarize the ideas detailed above, the following scheme presents the EPICS backbone software architecture:
Fig. 19 EPICS Architecture
The IOC servers (in orange) provide generic services for the control of the system. They can be divided into two categories: hard IOCs for the control of one hardware device, and soft IOCs for the control of processes involving several hardware devices. The generic services delivered by the IOC servers are accessed by several EPICS clients using Channel Access ("CAC" label). Each CAC can be implemented with the access security system. The CACs in blue are used for generic monitoring. The CAC Sequencer client is specifically used for the IOC Sequencer. The red CACs are used for the central services (Archive, Logging, Alarms) and can be used in conjunction with additional central servers. The Java client (VDCT) is used for the configuration and browsing of the IOC databases. EPICS provides bindings with other IDEs such as LabVIEW and MATLAB (in yellow). These bindings are implemented on both the client and server sides.
TANGO
TANGO is an object-oriented distributed control system based on CORBA (Common Object Request Broker Architecture) [12]. The main goal of CORBA is to provide network communication between the TANGO clients and servers through a common Interface Description Language (IDL). To facilitate the implementation of the control system, TANGO hides all the details concerning CORBA. TANGO can run on Linux and Windows.
The following parts describe the major concepts related to the TANGO servers, the communication protocol and the TANGO clients.
TANGO Device Server Model
The philosophy of TANGO is that all TANGO server components can be seen as objects within a model named the TANGO Device Server Model (TDSM). This model includes three basic concepts [13]:
TANGO Class
TANGO Device
TANGO Device Server
The key object is the TANGO Device. It can be an equipment (e.g. a motor), a set of equipment (e.g. several motors driven by the same controller), a software process (e.g. data processing) or a group of devices representing a subsystem. Each Device:
Belongs to a class, as an instantiation of a TANGO Class, and gives access to the services of this class.
Has a unique name that identifies it in the network name space.
Has a CORBA-type interface giving the possibility to:
Execute Commands (generic/specific) that perform some actions.
Read/Write Attributes (generic/specific) that describe a physical quantity produced or administered by the device.
Has one State/Status (ON/OFF, STANDBY, INIT, etc.) defining a device state machine.
Has some Properties.
If the device is of physical type, a hardware control code has to be written for it.
Fig. 20 TANGO Device
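The Device concept listed above can be sketched as a toy motor device in plain Python. This is not a real PyTango server: the class, device name, attribute and property values are invented for illustration of the name/state/commands/attributes/properties structure only.

```python
# Minimal, hypothetical sketch of a TANGO-style Device (not real PyTango code).
class ToyMotorDevice:
    def __init__(self, name):
        self.name = name             # unique name in the network name space
        self.state = "OFF"           # device state machine (ON/OFF/STANDBY/...)
        self.attributes = {"position": 0.0}           # physical quantities
        self.properties = {"address": "/dev/ttyUSB0"} # assumed configuration

    # Commands perform actions and may change the device state
    def On(self):
        self.state = "ON"

    # Attributes describe quantities produced/administered by the device
    def write_attribute(self, name, value):
        if self.state != "ON":
            raise RuntimeError("device is not ON")
        self.attributes[name] = value

    def read_attribute(self, name):
        return self.attributes[name]

motor = ToyMotorDevice("e8/ssd/target_motor")  # invented device name
motor.On()
motor.write_attribute("position", 12.5)
print(motor.state, motor.read_attribute("position"))
```

Note how the state machine gates the attribute write: in a real Device Server, the TANGO layer enforces such allowed-state checks for commands and attributes.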
TANGO Classes define the interface and implement the device control or a software treatment. All the code related to the interface can be automatically generated with POGO [14]. All classes are derived from one root class, thus allowing some common behavior for all devices.
Finally, each Device is hosted within a server process named a TANGO Device Server, whose main task is to offer one or more services to one or more clients.
Fig. 21 TANGO Device Server
TANGO Database
In order to define and parametrize (address, min/max values, alarms, etc.) the Devices inside the system, TANGO uses the concept of Properties. Properties permit the configuration of a device without changing the TANGO Class code. To facilitate the configuration of an entire system composed of several devices, a MySQL relational database, named the TANGO Database or TANGO Property Database, stores all these properties. Moreover, the TANGO Database is also used for the network communication (see TANGO Communication Protocol).
TANGO Communication Protocol
TANGO Device Servers and TANGO clients can use synchronous, asynchronous and event communication modes with CORBA, and an event communication mode with ZeroMQ [15].
In synchronous mode, the client sends a request and waits until it receives the answer sent by the server or the timeout is reached. While waiting, the client is blocked.
In asynchronous mode, the client sends a request and does not wait for the answer in a blocked mode. The answer sent by the server can be retrieved by an API-specific call or by requesting the execution of a call-back method when the answer from the server arrives.
The event mode is now based on the ZeroMQ library, which implements several well-known communication patterns, including the Publish/Subscribe pattern that is the basis of the new TANGO event system.
The TANGO Database is also used for the network configuration; in that sense, all clients and servers have to access it during startup. Moreover, the database ensures the uniqueness of device names and aliases, and links each device name to its list of aliases.
The two schemes below give an overview of the TANGO communication protocols:
Fig. 22 TANGO synchronous/ asynchronous communication mode
Fig. 23 TANGO events communication mode
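The Publish/Subscribe pattern underlying the event mode can be illustrated with a bare-bones, in-process sketch (no ZeroMQ involved; the event name is invented). The point is that the server pushes data to subscribers when a value changes, instead of clients polling for it.

```python
# Conceptual Publish/Subscribe sketch (not the TANGO/ZeroMQ implementation).
class EventChannel:
    def __init__(self):
        self.subscribers = {}        # event name -> list of callbacks

    def subscribe(self, event, callback):
        self.subscribers.setdefault(event, []).append(callback)

    def publish(self, event, value):
        # The publisher pushes data; subscribers are never blocked polling.
        for cb in self.subscribers.get(event, []):
            cb(value)

channel = EventChannel()
received = []
channel.subscribe("vacuum/gauge1/pressure_change", received.append)
channel.publish("vacuum/gauge1/pressure_change", 3.0e-6)
channel.publish("other/event", 0)    # no subscriber -> silently ignored
print(received)
```

In real TANGO, subscription and delivery cross process and network boundaries via ZeroMQ sockets, but the decoupling between publisher and subscribers is the same.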
TANGO Clients
The client applications consist of GUIs that require different services. Here are some of them:
Configuration tool (Jive, [16]): a client for browsing and editing the TANGO Database.
Display Management tool (Java ATK Panel, [17]): a generic application which displays panels allowing the execution of any device command and the reading/writing of any device attribute.
Global Administration Tool (Astor/Starter, [18]): on each host to be controlled, a Device Server (called Starter) takes care of all the Device Servers running (or supposed to be running) on that computer. The controlled server list is read from the TANGO Database. A graphical client (called Astor) is connected to all the Starter servers and is able to:
Display the control system status and component status using coloured icons.
Execute actions on components (start, stop, test, configure, display information, etc.)
Execute diagnostics on components.
Execute global analysis on a large number of crates or databases.
Logging System (LogViewer): TANGO implements a TANGO Logging Service enabling the display of messages related to the control system status. The LogViewer client enables control over how much of the information coming from the Devices is actually generated and where it goes. It can be used in conjunction with a database for the storage of the messages.
Archiving System (Mambo): TANGO itself includes methods for archiving the values of Device attributes. Mambo allows the user to define configurations that describe the archiving and data exploitation for a group of attributes. It requires two databases: one Historical Database (HDB) for "infinite" storage and one Temporary Database (TDB) from which the oldest attribute values are erased from time to time [19].
Alarm System (PANIC [20], Elettra Alarm System [21]): several clients have been developed for the configuration, management and display of the alarms of the different devices composing the system. In general, this also requires the use of a MySQL database dedicated to this central service.
It must be noted that some of these client tools can be used along with dedicated central servers and databases for achieving their goals (logging system, archiving system, alarm system, etc.). They can be integrated into the TANGO environment. Moreover, many users prefer to develop in and use other Integrated Development Environments (IDEs), which is why bindings between TANGO and other software (LabVIEW, MATLAB, etc.) have been created and maintained. Finally, TANGO provides a Control Access service that allows access with different rights, e.g. defining which users can execute certain commands (or write attributes) on a device and from which host.
TANGO Software Architecture
Figure 24 summarizes the ideas presented above.
The main server components are the Device Servers distributed within the system. They can be implemented for the control of one hardware device (level 1), the control of a set of hardware devices (level 2), the implementation of software processes (level 3) or the representation of an entire subsystem (level 4). One central database server, the TANGO Database, is the key component making possible the communication between the Device Servers and the Clients. These Clients are used for general monitoring/administration purposes (in blue) and for the configuration of the TANGO Database (in purple), whereas the red ones are used for managing the central services (Logging, Archiving, Alarm Handling). These central services can require additional central servers and databases included in the Server Layer. Client bindings exist between TANGO and other IDEs such as LabVIEW and MATLAB.
Fig. 24 TANGO Software Architecture
Remote Desktop
For the equipment that cannot be interfaced with the EPICS or TANGO CS architectures, the remote desktop connection shall be used to remotely control the devices from the UsersRoom.
7.3 Laser Based Experiments Architecture Model
The HPLS based experiments architecture is presented below.
Fig. 25 Laser based experiments architecture
Inside the experimental areas, local HMIs shall exist to control the equipment of the experiment and make all necessary configurations. The hardware for the local HMIs shall be based on industrial PC racks placed inside EMP protected cages. These PCs shall also hold the drivers for the equipment controlled or linked to the PCs and to the TANGO bus.
On the Supervision layer, inside the control room of the experiment, desktop PCs shall be used. These PCs shall hold the HMI Supervision that will control the experiment remotely.
For each experiment, a separate TANGO client-server architecture shall be implemented, each with its own database, physical link, clients and servers. This is for maintenance reasons and to maximize the operational time of the experiments (if one experimental area is in upgrade/maintenance mode, the others can still function).
7.4 Gamma Based Experiments Architecture Model
The GBS based experiments architecture is presented below.
Fig. 26 Gamma based experiments architecture
Inside the experimental areas, local HMIs shall exist to control the equipment of the experiment and make all necessary configurations. The hardware for the local HMIs shall be based on industrial PC racks. These PCs shall also host the drivers for the equipment they control and shall be linked to the TANGO bus.
On the Supervision layer, inside the control room of the experiment, desktop PCs shall be used. These PCs shall hold the HMI Supervision that will control the experiment remotely.
For the experiments, a single EPICS client-server architecture shall be implemented.
7.5 Hardware Interfaces
The hardware interfaces between systems shall use standardized communication as much as possible (e.g. Ethernet, RS-232, RS-485, Modbus, USB). The link between the experimental area equipment (industrial PCs) and the UsersRoom/DataAcqRoom shall be made using fiber optics (FO). Where this is not possible, CAT 6 cable will be the alternative.
7.6 Software Interfaces
The software interfaces between equipment shall be implemented using the native TANGO and EPICS means as much as possible. TANGO and EPICS bindings to Matlab and LabVIEW shall also be used in order to achieve the best performance in the shortest time and to take advantage of the mathematical and processing packages already existing in Matlab and LabVIEW applications.
A TANGO – LabVIEW binding that allows any LabVIEW application to behave like a TANGO device server represents an opportunity, as it will ease the integration of already existing equipment into the TANGO bus.
A TANGO – EPICS bidirectional binding that allows any EPICS IOC to be used as a TANGO device server (and vice versa) represents an opportunity, as it will ease the interfacing between the two control architectures and reduce code rewriting.
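The gateway idea behind such a binding can be sketched as follows. The two stub classes below merely stand in for the real Channel Access and TANGO transports, and all PV names, class names and values are invented for illustration; they are not the API of any existing binding.

```python
# Stdlib-only sketch of the bridging idea: an adapter exposes an EPICS-style
# process variable as a TANGO-style attribute. The stub IOC stands in for a
# real Channel Access server; all identifiers here are hypothetical.

class EpicsIocStub:
    """Minimal stand-in for an EPICS IOC holding process variables."""
    def __init__(self, pvs):
        self._pvs = dict(pvs)

    def caget(self, pv_name):
        return self._pvs[pv_name]

    def caput(self, pv_name, value):
        self._pvs[pv_name] = value

class EpicsToTangoAdapter:
    """Presents one EPICS PV as a read/write TANGO-style attribute."""
    def __init__(self, ioc, pv_name):
        self._ioc, self._pv = ioc, pv_name

    def read_attribute(self):
        return self._ioc.caget(self._pv)      # TANGO read -> CA get

    def write_attribute(self, value):
        self._ioc.caput(self._pv, value)      # TANGO write -> CA put

ioc = EpicsIocStub({"GBS:E8:SHUTTER": 0})
attr = EpicsToTangoAdapter(ioc, "GBS:E8:SHUTTER")
attr.write_attribute(1)   # a TANGO client writes; the EPICS PV is updated
```

The same adapter pattern works in the opposite direction (a TANGO attribute presented as a PV), which is why a bidirectional binding reduces code rewriting on both sides.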
7.7 IT Systems
The implementation of the Experiments Control Systems shall be made using industrial PCs and any other equipment that permits integration into the EPICS or TANGO architectures (e.g. NI CompactPCI devices with intrinsic integration into the EPICS architecture via the LabVIEW core).
Inside the experimental area, a crate will be used to house a number of industrial PCs with EMP housing protection. The physical output of the crate shall be fiber optics, and this link shall pass through the penetrations in the antivibration floor and onwards to the control rooms of the experiment – the Users Room and the Data Acquisition Room.
Additional crates will exist on the corridors, connected to the FO network, hosting the Database of the TANGO control system and its services. In these crates, one database shall handle one experimental area, yielding 5 industrial PC units for the HPLS based experiments (E1, E6, E7, E4 and E5). An alternative solution is to use virtualization servers inside the UsersRoom/DaqRoom. The two solutions shall be compared based on cost estimates, space, performance, maintainability and the other equipment in the building.
In the Users Room/Daq Room, a Supervision unit shall exist for each experimental area.
7.8 Data Storage and Data Processing/Analysis
In ELI-NP, the data storage and data analysis infrastructure will be designed to offer scalability and reliability for the experiments. The infrastructure will serve as a research tool for data transfer and data processing, and the system will be designed as a hub between the user and the data sets that need to be analyzed.
The data flow architecture is presented for two cases:
Experiment side – related to the experiment itself, user dependent
Local facility side – related to the facility and the features available to the user
The experiment side data flow presents the data path from the front-end electronics (digitizers, cameras, etc.) through real time preprocessing (if necessary) up to the event builder and on to the data management.
The real time processing shall be made using FPGAs or DSPs in order to reduce the data volume where high fluxes are generated by the detectors (e.g. the ELIADE array).
The event builder shall merge the byte streams that satisfy the desired conditions, together with the timestamp, into a single data structure passed to the data management system, which will generate the metadata for each file.
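A minimal sketch of this merging step, assuming tag-stamped byte streams per detector source (the detector names and metadata fields are illustrative, not a defined ELI-NP format):

```python
# Illustrative event builder: merge per-detector byte streams that share the
# same timestamp/tag into one event structure, then attach minimal metadata.
# Detector names and metadata fields are hypothetical.

def build_events(streams):
    """streams: {detector_name: [(tag, payload_bytes), ...]}"""
    events = {}
    for detector, records in streams.items():
        for tag, payload in records:
            events.setdefault(tag, {})[detector] = payload
    # One data structure per tag, handed to the data management system,
    # which would generate metadata (here just simple counts) per file.
    return [
        {"tag": tag,
         "data": parts,
         "metadata": {"n_sources": len(parts),
                      "n_bytes": sum(len(p) for p in parts.values())}}
        for tag, parts in sorted(events.items())
    ]

events = build_events({
    "digitizer": [(100, b"\x01\x02"), (101, b"\x03")],
    "ccd":       [(100, b"\xaa\xbb\xcc")],
})
```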
The local facility side depicts the data flow for storage (short term – disk and long term – tape storage), online/offline processing and analysis and simulation.
This architecture is under development; various solutions are being evaluated and a separate, detailed document shall be prepared.
Fig. 27 Data flow for ELI-NP experimental data
A place shall be dedicated in the UsersRoom/DaqRoom to the short term data storage center (no more than 6 months), dedicated to the experiments' data. A capacity of 1 PB is considered sufficient for the first stage experiments (from the implementation phase). Redundancy shall be implemented in order to protect the data, and special care shall be taken for protection against power failures.
7.9 Safety System Interfaces
For the experimental areas, a safety system is required that is linked to the BMS of the building in order to command door access and interlocks, and that is connected to panic buttons, shutters, check boxes, etc. This safety system shall be interfaced with the general ELI-NP safety system.
The HPLS and GBS will have dedicated Safety systems that will be interfaced with the general ELI-NP safety system.
The LBTS and GBDD will have dedicated Safety systems that will be interfaced with the general ELI-NP safety system.
The 10PW interaction chambers will have dedicated safety systems in order to protect the personnel when operating the chamber, and these systems will be interfaced with the general ELI-NP safety system.
The general architecture overview for the laser related safety systems is presented below:
Fig. 28 General architecture of the Safety and Control System of Laser beam delivery in connection with the rest of the ELI-NP facility.
The general overview of the Interaction Chamber control system and safety system is presented below.
Fig. 29 General overview of the E1 interaction chamber safety system and control system in connection with the rest of the ELI-NP safety system.
8. System Implementation
8.1 Building Interfaces
The experiments control systems and interfacing shall need:
space for positioning the equipment inside the experimental areas, hallways and UsersRooms/DaqRoom
power outlets to connect the equipment in the areas where they are placed
cables and cable ducts for FO and copper cable between experimental areas and UsersRoom/DaqRoom, HPLS room, GBS room
8.2 LBTS Technologies and Vacuum
The LBTS will have two systems controlling its functionality: the Safety System and the Control System.
The Safety System is a dedicated part of the LBTS that deals with human safety when working in the areas where the LBTS operates. The LBTS safety system shall exchange information with the general ELI-NP safety system regarding the state of the LBTS, which shall be used to properly operate the facility (experimental areas personnel clearance, laser interlocks, radiation safety, etc.).
The Control System of the LBTS is a dedicated part of the LBTS that shall control the alignment, diagnostics, focusing and routing of the two 10PW laser beams to the seven outputs corresponding to the possible experimental areas (E1, E6, E7), in the specific configurations required by the experiments. It shall also provide monitoring, control, alarm management and logging of the status of all the LBTS subsystems via hardware and software means.
The Control System of LBTS shall provide the following functionality:
Laser beam configuration routing (for each arm)
Local (inside of the E1, E6, E7 area) and Remote control (outside of the E1, E6, E7 area)
Gate valves and Vacuum control via hardware and software from the Control System application
Beam alignment and associated procedures control and monitoring
Status reporting with history, logging and alarm signaling
Beam diagnosis
Unitary control and status feedback for all the LBTS subsystems:
LBTS configuration – Routing Mirrors
Alignment system
Gate Valves and Vacuum system
Machine protection system
via the Supervision software.
The Control System of the LBTS shall provide a Supervision software. The Supervision of the LBTS shall provide the following features and functionality:
Sequence and configure the LBTS system (vacuum configuration, mirrors configuration, beam alignment, diagnosis)
Human-Machine Interface
Synoptic display
Alarm management
History of configurations management
Logging management
User profiles configuration
For each subsystem at least one dedicated HMI shall exist in the Supervision software
The HMI of the Supervision for LBTS and the associated HMI of the LBTS Safety system shall be located in the HPLS Control Room.
Additional Clients with Human-Machine Interfaces shall exist to individually access the parameters of the LBTS subsystems described below. The access to the Clients shall be made such as to avoid concurrent access to the same parameter.
A separate Client and HMI shall display only the video from the cameras; this shall be implemented on a dedicated network due to bandwidth constraints.
For the implementation of the clients and of the Supervision software, TANGO based applications are considered for compatibility and maintenance reasons.
The routing of the laser beams shall be made in manual and automatic modes.
In manual mode, the routing mirrors shall be remotely and individually driven by the operator
In automatic mode, the routing mirrors shall be remotely and automatically driven by the system into predefined configurations (detailed below) set by the operator. In this mode, the alignment will be automatically made after the routing is complete.
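The automatic mode can be sketched as a table of predefined configurations driving the mirrors and the gate valves for one beam. All identifiers and setpoint values below are hypothetical, invented purely for illustration:

```python
# Hypothetical sketch of the automatic routing mode: each predefined
# configuration names the target area, the mirror setpoints and the gate
# valves to open for one beam. Names and values are illustrative only.

CONFIGURATIONS = {
    "RED_TO_E1": {"area": "E1",
                  "mirrors": {"M1": 12.5, "M2": 7.0},
                  "open_valves": {"VR1", "VR2"}},
    "RED_TO_E6": {"area": "E6",
                  "mirrors": {"M1": 3.1, "M3": 9.4},
                  "open_valves": {"VR1", "VR3"}},
}

def apply_configuration(name, all_valves):
    """Return the commands the Control System would issue for one beam."""
    cfg = CONFIGURATIONS[name]
    return {
        "mirror_setpoints": cfg["mirrors"],
        # Open only the valves on the routed path; close every other one.
        "valves": {v: ("OPEN" if v in cfg["open_valves"] else "CLOSED")
                   for v in sorted(all_valves)},
        "then": "run automatic alignment",   # alignment follows routing
    }

plan = apply_configuration("RED_TO_E1", {"VR1", "VR2", "VR3"})
```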
The naming convention for the Gate Valves (VR and VB) refers to the red beam 1 and the blue beam 2. The red beam 1 denotes the left arm of the HPLS, from the West side of the building, while the blue beam 2 denotes the right arm of the HPLS, from the East side.
Fig. 30 LBTS configuration
The possible single beam configurations (one beam functioning at a time) are as follows:
Table 11
Possible configurations for one running beam:
For all the configurations described, all combinations between the two beams shall exist (e.g. 2 beams in E6, 2 beams in E1, 2 beams in E7, 1 beam in E1 and 1 beam in E6, etc.), yielding 19 configurations in total. The corresponding Gate Valves shall be opened or closed depending on where the beam is routed when the Control System of the LBTS works in the automatic configuration mode. In the automatic mode, the 7 configurations described before shall be implemented (1 beam in one experimental area). In manual mode, the system shall provide alarms and conditionings that will ensure the safety of the personnel and the security of the LBTS.
8.3 GBDD Technologies and Vacuum
GBDD Overview
The Gamma Beam Delivery and Diagnostics (GBDD) system consists of two "lines" (GBDD Low energy and GBDD High energy) that cross the ELI-NP building from the beam-lines starting at the Low Energy Interaction Point and at the High Energy Interaction Point, respectively, to the E8 experimental area.
These lines are not continuous, because various pieces of equipment have to be inserted into/removed from the beam for experimental purposes.
The diagnostics modules of the GBDD system shall provide the measurement of 5 key beam parameters, referred to as Diagnostics Parameters (DP):
Energy spread
Polarization
Flux (Intensity)
Time Structure
Spatial profile
The system used depends on the energy of the beam. The envisaged systems are listed below:
Low energy beam (Eγ < 3.5 MeV)
Attenuator system using HPGe detectors for energy spread measurements and LaBr3 for flux monitoring and time structure
CCD system for spatial profile
NRF polarimeter based on ELIADE with the use of four HPGe crystals mounted at 90 degrees
High energy beam (Eγ < 19.5 MeV)
D2O system composed of a deuterium target and four neutron detectors for flux monitoring and polarization measurements (for this last parameter, the energy of the beam has to be above 3 MeV)
CCD system for spatial profile (the same as the one used for the low energy beam)
Fission chamber for Flux monitoring with beam energy below 3 MeV
Stand-alone LaBr3 detector for time structure
The energy spread system used for High energy is not yet defined.
Two other features are achieved through the GBDD:
Collimation of the beam via several collimators that are located in the walls between the Gamma Source Recovery and E8, and between E7 and E8 areas (2 collimators per beam).
Vacuum inside each GBDD section: the effective level of vacuum should be the same in all the sections of the Low energy line, respectively High energy line.
The GBDD Control system shall implement the control of all the modules listed above. Moreover, a dedicated GBDD Safety system interfaced with the ELI-NP PSS shall handle the safety of the people working in the areas.
GBDD Control System
The GBDD control system shall provide the following functionalities:
Local control of the low energy and high energy Diagnostics modules
Local control of the beam collimators
Local control of the vacuum inside each GBDD line section
Local supervision control via User HMIs in E2, Gamma Source recovery, E7 and E8 areas
Remote supervision via an HMI in the User room
Remote monitoring via a dedicated User HMI in the User Room
For further details, please refer to the Controls chapter of the TDR GBDD.
GBDD Safety system
The GBDD Safety System is a dedicated part of the GBDD that deals with human safety when working in the areas where the GBDD operates. The GBDD safety system shall exchange information with the general ELI-NP safety system and the experiment safety systems regarding the state of the GBDD (collimators position, vacuum level, etc.), which shall be used to properly operate the facility (experimental areas personnel clearance, gamma interlocks, radiation safety, etc.) and the experiments.
8.4 Interaction Chambers for 10 PW experiments (E1, E6)
The interaction chambers of the E1 and E6 areas, for the 10PW laser based experiments, will have two systems controlling their functionality: the Safety System and the Control System.
The Safety System is a dedicated part of the Interaction Chamber (IC) that deals with human safety when working with the Interaction Chamber. The IC safety system shall exchange information with the general ELI-NP safety system and the experiment safety systems regarding the state of the IC, which shall be used to properly operate the facility (experimental areas personnel clearance, laser interlocks, radiation safety, etc.) and the experiments. The IC safety system shall be a part of the E1 experimental area safety system.
The Control System of the IC shall control and monitor the vacuum state in the chamber, shall monitor the CCD cameras attached to the interaction chamber and shall be interfaced with the LBTS control system for exchanging information regarding the gate valves status and vacuum level status from the LBTS. The gate valves are part of the LBTS and are controlled by the LBTS control system.
The Control System of IC shall provide the following functionality:
Vacuum control via hardware and software from the Control System application
Video cameras monitoring of the mirrors inside the interaction chamber
Status reporting with history, logging and alarm signaling
via the Supervision software.
The Control System of the IC shall provide a Supervision software. The Supervision of the IC shall provide the following features and functionality:
Human-Machine Interface
Synoptic display
Alarm management
History of configurations management
Logging management
User profiles configuration
The HMI of the Supervision for IC and the associated HMI of the IC Safety system shall be located in the HPLS Control Room.
Additional Clients with Human-Machine Interfaces shall exist to individually access the parameters of the IC (vacuum parameters, CCDs, etc.). The access to the Clients shall be made such as to avoid concurrent access to the same parameter. The Clients shall run on separate machines from the one that hosts the Supervision software.
Separate Clients and HMIs shall display only the video from the cameras and the vacuum status for the Users when they are in the interaction area. The same information shall also be available in the UsersRoom.
For the clients and Supervision software implementation, Tango based applications shall be considered for compatibility and maintenance reasons.
8.5 Timing/Synchronization Network for HPLS experiments
Two solutions shall be taken into account:
A system that generates a tag synchronized with the beam shot can be implemented [as existing at SPring-8, SACLA]. The solution must correlate the pulse by pulse HPLS beam properties (energy, pulse duration, spectrum, spatial profile, contrast) with the diagnostics data. The tag shall be transmitted in hardware to all the experimental/unprotected areas, where it will be read by the equipment required to perform the data encapsulation [timestamp/tag, data], e.g. a PC. This PC should be able to run the TAG reading in real time and to perform the TAG association with the data acquired from other equipment (e.g. a CCD camera). The TTL trigger for the camera shall also be sent to these computers in order to know when the acquisition was triggered.
The idea of TAG identifier correlation is presented below:
Fig. 31 Laser – experiments correlation approach
For the correlation of the DAQ data with the TAG, the serial TAG data can be sampled with one of the Analog Inputs and reconverted to the 32-bit identifier; in this way, both the data and the TAG will be extracted at the same time.
Together with the acquired data, these signals should be enough to determine what data corresponds to what timestamp/tag.
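As a sketch of this reconversion, the snippet below thresholds the sampled analog trace and reassembles the 32-bit identifier, assuming a known threshold, a fixed number of samples per bit and MSB-first transmission. The real framing (start bits, line code) would have to match the actual TAG generator, which is not yet defined.

```python
# Sketch: reconvert a serially transmitted TAG, sampled on a DAQ analog
# input, back into its 32-bit identifier. Threshold, samples-per-bit and
# MSB-first ordering are assumptions made for this illustration.

def decode_tag(samples, samples_per_bit, threshold, n_bits=32):
    bits = []
    for i in range(n_bits):
        # Sample in the middle of each bit period and threshold it.
        mid = i * samples_per_bit + samples_per_bit // 2
        bits.append(1 if samples[mid] > threshold else 0)
    tag = 0
    for b in bits:                      # MSB first
        tag = (tag << 1) | b
    return tag

def encode_tag(tag, samples_per_bit, high=5.0, low=0.0, n_bits=32):
    """Helper for testing: generate the analog trace for a given TAG."""
    trace = []
    for i in reversed(range(n_bits)):   # emit MSB first
        level = high if (tag >> i) & 1 else low
        trace.extend([level] * samples_per_bit)
    return trace

tag_in = 0xA5A51234
trace = encode_tag(tag_in, samples_per_bit=8)
tag_out = decode_tag(trace, samples_per_bit=8, threshold=2.5)
```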
A requirement is that the TAG generation system and the HPLS database that logs the beam parameters be able to run and log the unique TAG and the diagnostics data in real time (@ 10 Hz).
An example of machine-experiment correlation is provided below:
Fig. 32 Proposed solution for HPLS beam properties – data experiment correlation
The second solution is to have all computers synchronized with the same NTP server, linked to a GPS. The synchronization shall allow all PCs (of the HPLS and on the experiment side) to have the same timestamp, enabling an easy, although not deterministic, correlation. This solution can provide the correlation if the experiments are made with low repetition rate pulses (less than 1 shot/s) and if the applications are carefully treated.
8.6 Timing/Synchronization Network for GBS experiments
The requirements of the experiments are:
Global trigger correlated with the macro-bunches for reading-out the data acquired by the digitizers
If possible, a global timestamp for all the data produced and acquired during the experiments, or at least for the data that will be used in the same processing after one experiment (in our case, data acquired during the NRF experiment).
A system that generates a tag, synchronized with the beam, that identifies some GBS parameters at a certain moment can be implemented. This solution could correlate the Gamma beam macro-bunch diagnostics data with the Gamma beam machine parameters and with the experimental data. The tag shall be transmitted in hardware to all the experimental/unprotected areas, where it will be read by the equipment required to perform the data encapsulation (the Fast read-out PC in this case).
Two proposed solutions are presented, both based on the GBS Timing system.
For the synchronization of the devices over the machine, the GBS uses a system based on the Micro Research Finland (MRF) Timing system [22], referred to as the Gamma Picosecond Timing System. This system distributes a timing sequence composed of several data packets transmitted at a frequency referred to as the Event Clock. Each data packet is composed of one Event Code (1 Byte) along with 1 Byte of data (Distributed Bus, DBus). The transmission (Tx) uses the 8b10b protocol and is physically achieved over optical fiber.
Fig. 33 GBS synchronization system based on event generators
The generation of the events is ensured by the Event Generator (EVG), which is phase locked with an external clock (in this case the RF Clock of the GBS, @ 62.08 MHz) that therefore sets the Event Clock. The timing sequence of events is generally distributed with Fan-Outs. In the Gamma beam machine, one timing sequence will broadcast its events to Event Receivers (EVR) at a frequency referred to as the Repetition frequency, Fr, equal to 100 Hz (the macro-bunch frequency).
The EVR can be configured to perform actions when it receives a specific event code: generation of signals (e.g. TTL) with/without delays, selectable signal width, etc. Moreover, a global timestamp ("second counter", 32-bit unsigned) is generated by the EVG and distributed over the MRF timing network. Each EVR also implements an "event" counter (32-bit unsigned) that allows a high precision timestamp, down to 1/Event Clock = 16.1 ns. Because all the EVRs are phase locked with the same EVG, all the EVRs are synchronized with a precision of 16.1 ns. The jitter between two outputs located on different EVRs is assumed to be less than 25 ps rms with a 125 MHz reference clock [23].
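The composition of the resulting timestamp from the two counters can be sketched as follows. This is a simplified model (it ignores counter roll-over), but it shows how the ~16.1 ns resolution follows from the 62.08 MHz Event Clock:

```python
# Sketch: compose the high precision EVR timestamp from the two MRF
# counters described above - the 32-bit "second counter" distributed by
# the EVG and the local 32-bit event counter ticking at the Event Clock.
# Roll-over handling is omitted for clarity.

EVENT_CLOCK_HZ = 62.08e6            # GBS RF-derived Event Clock
TICK_NS = 1e9 / EVENT_CLOCK_HZ      # one event-counter tick, ~16.1 ns

def evr_timestamp_ns(second_counter, event_counter):
    """Absolute timestamp in nanoseconds since the EVG's epoch."""
    return second_counter * 1_000_000_000 + event_counter * TICK_NS

t = evr_timestamp_ns(second_counter=10, event_counter=3)
```

Because every EVR derives its event counter from the same phase-locked Event Clock, timestamps built this way on different EVRs agree to within one tick.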
Some EVRs could be used for the Timing/Synchronization Network for Gamma Experiments and connected to the Event Generator of the Gamma Picosecond Timing System. Two solutions are proposed. In each solution, the EVRs receive an Event Code associated with a macro-bunch generation. A macro-bunch trigger signal for the different DAQ crates is also necessary to start the data acquisition. The delay and width of each macro-bunch trigger can be programmed on the EVR. Moreover, each EVR generates a TAG that identifies the trigger sent (i.e. the macro-bunch).
The TAG could consist of an event counter, e.g. on 24 bits, that is sent by the GBS machine when an event is generated.
This TAG is then sent to the Fast read-out PC connected to the digitizers, by optical link, parallel lines (e.g. 24 bits – 24 lines) over copper cable or serially over copper cable. This PC performs the data processing. Its output consists of the encapsulated data [TAG, PROCESSED DATA], or [TAG, RAW DATA] if raw data replication is required.
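The encapsulation step can be sketched as below. The header layout (TAG padded to 4 bytes plus a payload length field) is a choice made for this illustration only, not a defined ELI-NP format:

```python
# Sketch of the encapsulation performed by the Fast read-out PC: pack the
# 24-bit TAG together with the (raw or processed) payload into one record,
# and unpack it again on the receiving side. The header layout here is a
# hypothetical choice for illustration.
import struct

def encapsulate(tag, payload):
    assert 0 <= tag < (1 << 24)          # TAG fits in 24 bits
    # Header: TAG in 4 bytes (big-endian, top byte zero) + payload length.
    return struct.pack(">II", tag, len(payload)) + payload

def decapsulate(record):
    tag, length = struct.unpack(">II", record[:8])
    return tag, record[8:8 + length]

record = encapsulate(0x00ABCD, b"processed-macro-bunch-data")
tag, payload = decapsulate(record)
```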
The only differences between the two architectures proposed are:
In the first architecture, one EVR is located on each DAQ crate
In the second one, all the EVRs are mounted on the same board, referred to as the Timing system, which needs to be implemented.
The second solution reduces the number of EVRs, because each EVR can send several triggers to different crates: e.g. the EVR – EXP in the figures could generate another macro-bunch trigger that could be sent to another crate. However, if a problem occurs on the board, the entire system could be down.
Fig. 34 Proposed solution for Synchronization and TAG signals, EVR on each DAQ
Fig. 35 Proposed solution for Synchronization and TAG signals, 1 Timing unit and afterwards TAG and Trigger signals distributed to all equipment.
The third solution is to use an NTP server synchronized with a GPS, with all PCs synchronized to the NTP server. In this way, all systems shall have the same timestamp. This solution can be implemented only if the requirements do not request a deterministic correlation. Moreover, this solution relies only on the clock of each PC and on the tasks running on it under various OSs, which have to be carefully treated in order to achieve a reliable correlation.
8.7 Experimental Area Control
Description of the DaqRoom/UsersRoom will be added progressively based on the equipment details.
8.8 Interfaces between Systems
A detailed description will be added based on the defined equipment.
8.9 Toolset evaluation and coordination between similar facilities
Various tests have been performed at ELI-NP in order to determine the best choice of toolset to be used for the development of the described architecture. Ideally, the choice shall minimize the number of programming languages and allow similar GUI development for both EPICS and TANGO. The following aspects are under evaluation:
TANGO DCS toolset evaluation:
In terms of programming languages, C++ and Java are envisaged.
Advantages: simplifies the development and the maintenance of the system
Drawbacks: reduces the usage of external resources (servers developed by other facilities, specific GUIs)
The ATK package developed at SOLEIL [24] and the QTango package [25], developed at ELETTRA, are the candidates for the local supervision client. Only one of them shall be implemented in the first stage of development.
Advantages: GUIs already exist, the packages fit the language recommendations (ATK is written in Java, QTango in C++) and they benefit from the experience of other facilities
Drawbacks: the widgets are limited.
Rapid client application prototyping can be done using the TANGO – LabVIEW binding, which allows the creation of client GUIs. This feature is under evaluation. Control loops on Windows PCs shall be tested.
Development or usage of a common error handling system is envisaged. This topic has been presented by ELI-ALPS [26] and is of interest to ELI-NP.
Advantages: eases the debugging of errors and imposes some rules on the development of new device servers
Drawbacks: integration of the existing device servers
A repository tool should be used for hardware and software management. The CCDB tool [27], developed at ALBA, is a candidate.
Advantages: eases the management of all the equipment integrated into the control system, with a possible benefit from external resources (CCDB)
Drawbacks: a priori none.
EPICS DCS toolset evaluation:
The selection of software tools that will be used to develop, test and deploy the integrated control system for GBS and related experiments is based on the following constraints and assumptions:
The GBS machine will be delivered by an external supplier with its own control system software and hardware (reunited in an abstract modular concept called ‘vertical column’), which includes:
CODAC software suite [28] version 4.1. CODAC is a mature package, with wide usage in the scientific community, including:
EPICS version: 3.14.12.3
an extensive list of EPICS hardware and software support packages
Control System Studio: a tool for creating EPICS client interfaces, which contains: a GUI design tool (BOY), an archiving system (BEAUTY) and an alarm handling system (BEAST)
support for standardized control system development and deployment
Compact PCI hardware platforms for running the IOC servers
Desktop PCs having Scientific Linux (release 6.3) operating system for running EPICS clients
EPICS and its extensions are open source, supported by a large scientific community, and are flexible enough to be installed on a wide range of hardware platforms and operating systems (e.g. vxWorks, RTEMS, Linux, Windows). They are able to interoperate by means of the EPICS Channel Access network protocol.
A basic EPICS control system has already been demonstrated in ELI-NP using out-of-the-box open-source code:
EPICS IOC running on Linux (Ubuntu 14.04) to control a DG645 delay generator
EPICS client represented by a CSS instance running a BOY *.opi interface on a Windows 8 machine.
There are also bindings available between EPICS and commercial applications (e.g. LabVIEW, Matlab), which may allow rapid prototyping of control system clients (not yet demonstrated at ELI-NP). This feature is under evaluation.
Additional control software will need to be developed in-house to control the gamma beam experiments and correlate the data acquisition with the beam parameters.
This control software will need to be interoperable with the one installed on the GBS machine, but also with legacy 3rd party control interfaces. Finally, it may also need to be compatible with the HPLS control system (which is built on TANGO).
It will need to support additional hardware devices (e.g. sensors, motors, power sources, cameras etc) which may function on different operating systems (e.g. Windows or Linux)
It may also require the implementation of specialized control algorithms and loops
Based on the above, we foresee a heterogeneous solution having EPICS framework at its core, but with the following particularities:
different types of computing platforms (Desktop/industrial) and operating systems (Windows, Linux or real-time).
Different types of clients. The CSS IDE has only a limited capability of integrating complex control algorithms on the client side, so in the first stages of development Matlab or LabVIEW are envisaged. A staged approach is intended to be adopted, as described in section 8.11.
In terms of software engineering, this solution would require the following skill set:
Linux & Windows operating knowledge for installing IOCs already available in the community
C and C++ programming for development of new or modified IOCs
Matlab/ LabVIEW for prototyping of new control client interfaces.
Information exchange with the other ELI pillars is taken into account with the topics:
Common issues/problems shared by the facilities
Common control system tools shared by the facilities
The coordination between the three ELI sites is at an early stage. Several meetings have already been organized and common subjects of interest have been pointed out. To go further, the requirements have to be precisely specified. This work is in progress.
The common issues faced by the ELI sites should obviously call for common solutions. However, each facility has its own particularities and experimental programme, and some solutions will be specific, driven by the experimental requests. ELI-NP is currently evaluating which toolset shall be used for the development of the experiments CS: DCS framework (TANGO, EPICS, etc.), operating system (Windows, Linux, RTOS), languages (C++, Python, Java, C, etc.), dedicated DCS tools (database browser, GUIs) and IDE. In this sense, a couple of prototype set-ups are being developed (motion stages, camera and delay generator) in both TANGO and EPICS environments.
8.10 Configuration and Simulation for Laser Driven Nuclear Physics Experiment (E1)
In the following, the equipment involved in the LDNP experiment is presented, together with the hardware and software architecture for performing the experiment, the system design and the implementation. In this architecture, TANGO shall be used for compatibility with the HPLS and LBTS, for maintenance reasons, for development effort and for the scalability of the solutions.
Equipment involved in the experiment:
Target insertion system with loadlock – used to insert a target wafer or RCF plate inside the interaction chamber without breaking the vacuum inside the Interaction Chamber (IC)
Target manipulation system – inside the IC, with at least 5 axes of movement, used to position the target in the required focal spot of the HPLS.
Target alignment based on CCDs – inside the IC, multiple CCDs placed so as to monitor the position of the target and align it correspondingly with the target manipulation system
Delay generators for synchronization & TAG system
High Power Laser System (HPLS)
Laser Beams Transport System (LBTS)
Interaction Chamber (IC)
Data storage system
Detection specific for LDNP:
Thomson parabola
HV ctrl (offline)
CCD camera gated
MCP – HV control (offline)
Scintillators
Gated HV
DAQ
General experiment systems, machine setup and running steps
In the following, the envisaged setups are described for:
The Synchronization and TAG system
the Target insertion system
the Target alignment system
and the experiment flow regarding:
the Safety systems involved in the HPLS based experiments
the LBTS setup for experiment run
the HPLS setup for experiment run
the Data storage setup
the experiment itself (in the interaction area)
The Synchronization and TAG system:
The solution, as presented in the previous chapters, shall be able to synchronize the information from the detectors with the beam parameters provided by the HPLS system.
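As a minimal sketch of this tag-based correlation (the field names and values below are illustrative assumptions, not the ELI-NP data model), detector records could be joined with the HPLS shot parameters carrying the same unique TAG:

```python
# Illustrative sketch, not the ELI-NP implementation: each HPLS shot record
# and each detector record carries the hardware TAG; a join on the TAG
# reunites detector data with the beam parameters of the corresponding shot.

def correlate_by_tag(hpls_shots, detector_records):
    """Join detector records with the beam parameters sharing the same TAG."""
    shots_by_tag = {shot["tag"]: shot for shot in hpls_shots}
    correlated = []
    for rec in detector_records:
        shot = shots_by_tag.get(rec["tag"])
        if shot is not None:  # keep only records with a matching laser shot
            correlated.append({**shot, **rec})
    return correlated

# Hypothetical example data (names and numbers are assumptions)
hpls_shots = [{"tag": 101, "energy_J": 230.5}, {"tag": 102, "energy_J": 228.1}]
detector_records = [{"tag": 102, "proton_counts": 5400}]

print(correlate_by_tag(hpls_shots, detector_records))
```

The same join works regardless of whether the TAG is distributed electrically or embedded in the data stream, as long as both databases store it per shot.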
Target insertion system:
A possible implementation for the TIS is presented below. The insertion of the target shall be made from above the IC, using a combination of the motion controllers of the TIS itself (2 axes) and of the Target manipulator inside the IC. The systems are depicted below. The TIS shall have a loadlock system and separate vacuum pumps. An ongoing research contract with a Romanian institute will deliver a prototype for a Target insertion system with a three-position holder magazine.
Fig. 36 Target insertion system and Target manipulation system example
Target alignment system description:
In the following we shall refer to the Target Alignment System as composed of a Target manipulator and CCD cameras for alignment. The system is suited for wafer-like solid targets but can also be adapted to gaseous targets.
The target wafer shall have at least four markers in order to precisely align the wafer in the focus of the laser beam. The focus can be determined using a microscope and a camera to establish the needed focal spot on target. The four markers shall be placed in such a way as to ensure the reference planarity of the wafer. Afterwards, an in situ characterization of the wafer in the holder shall be made. For this, an interferometer can be placed in the loadlock system that will be able to characterize the irregularities of the wafer plane with respect to the markers, which are considered coplanar. This information will be used afterwards, together with each target's characterization coordinates, to achieve the best alignment of the target in the desired focal spot of the HPLS.
After the focal spot set-up, the alignment system shall place the wafer, sequentially, with the markers on the focus of the alignment laser beam. After this procedure, the reference wafer plane shall be perpendicular to the laser beam.
In order to reach the required Angle of Incidence (AOI) with respect to the laser beam axis, the rotational stage beneath the target holder shall be used. The AOI is considered in the plane that maintains the incident P polarization of the laser beam.
The target wafer shall be characterized before its insertion in the interaction chamber, and a predefined-format file will contain the X, Y and Z position of each target with respect to one of the alignment markers. Each target wafer holder shall have a 5-digit number engraved on the frame for recognizing the target and selecting the corresponding alignment file.
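As an illustration of how the predefined file could be used, the sketch below looks up a wafer by its 5-digit holder number and converts the per-target offsets into absolute stage coordinates; the file layout, column names and numbers are assumptions for the sketch, not the actual ELI-NP format:

```python
import csv
import io

# Hypothetical alignment file: one row per target, with X, Y, Z offsets (mm)
# relative to a reference alignment marker (layout is an assumption).
ALIGNMENT_FILE = """\
wafer_id,target,dx_mm,dy_mm,dz_mm
40213,1,1.50,2.00,0.02
40213,2,3.50,2.00,0.05
"""

def load_alignment(text, wafer_id):
    """Return {target: (dx, dy, dz)} for the wafer with the given holder number."""
    table = {}
    for row in csv.DictReader(io.StringIO(text)):
        if row["wafer_id"] == wafer_id:
            table[int(row["target"])] = (
                float(row["dx_mm"]), float(row["dy_mm"]), float(row["dz_mm"]))
    return table

def stage_position(marker_zero, offsets):
    """Absolute stage coordinates = recorded marker zero + per-target offset."""
    return tuple(z + d for z, d in zip(marker_zero, offsets))

targets = load_alignment(ALIGNMENT_FILE, "40213")
print(stage_position((10.0, 5.0, 0.0), targets[2]))  # -> (13.5, 7.0, 0.05)
```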
Three solutions are proposed for the wafer movement and alignment of each target:
In the first solution, the alignment system shall place the target wafer in such a way that the required laser focal spot is obtained on the markers, and the zero position (Z axis, along the laser beam axis) will be recorded. This means that the plane of the wafer is perpendicular to the laser beam axis and the required focal spot dimension is set on the face of the wafer. Afterwards, the zero position will also be recorded for the X and Y axes from one of the predetermined markers. For each of the targets inside the target wafer, the alignment system will adjust its position using the X, Y and Z coordinates stored in the alignment file.
The alignment of each target is considered to be maintained from one target to the next based on the mechanical setup, which has to be precisely aligned with the laser beam. In this way, only the X, Y and Z coordinates of each target are needed in order to precisely align each target of the wafer.
The second solution assumes that the mechanical system is not precisely aligned and that the mechanical joints have high tolerances, which may lead to unpredictable movements of the alignment system. This implies an online correction of the system.
The proposed solution is an online heuristic control algorithm that monitors the wafer position for each target before the laser shot and controls the alignment system motor positions to correct any misalignment. A possibility is to have a mirror on the side of the alignment system of the wafer (Fig. 37), aligned in the beginning to lie in a plane perpendicular to the wafer. When the wafer is moved to another target position, the far-field measurement will show whether the wafer is moving on the correct axis or whether some compensation has to be made.
Fig. 37 Proposed alignment system
The heuristic algorithm can be a genetic algorithm that controls the motorized stages of the alignment system so as to align the wafer plane with respect to the laser beam axis (solution 1) or, afterwards, to adjust the wafer for each of the targets in order to maintain the correct focal spot on the target and the AOI. The benefit of the algorithm is that it will automatically find, through iterative steps, the minimum error between the chosen criterion and the actual solution, without having to deal with the individual control of several motorized axes.
The third solution is to have for each target an additional marker on the wafer. TBD
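The iterative-minimization idea behind the heuristic of the second solution can be sketched as follows. This is a toy illustration only: the synthetic quadratic error function stands in for the real far-field measurement, and all names, step sizes and values are assumptions:

```python
import random

# Toy sketch of the heuristic idea: randomly perturb the motor positions and
# keep only the changes that reduce the misalignment error, shrinking the
# search radius as the system converges. In the real system the error would
# come from the far-field measurement; here a synthetic quadratic error with
# a known optimum replaces it (pure assumption for illustration).

def misalignment_error(pos, optimum=(1.2, -0.4, 0.7)):
    """Synthetic stand-in for the measured misalignment criterion."""
    return sum((p - o) ** 2 for p, o in zip(pos, optimum))

def align(pos, step=0.5, iterations=200, seed=0):
    rng = random.Random(seed)
    err = misalignment_error(pos)
    for _ in range(iterations):
        candidate = [p + rng.uniform(-step, step) for p in pos]
        cand_err = misalignment_error(candidate)
        if cand_err < err:          # accept only improvements
            pos, err = candidate, cand_err
            step *= 0.95            # shrink the search radius on each accept
    return pos, err

pos, err = align([0.0, 0.0, 0.0])
print(round(err, 4))
```

A genetic algorithm would replace the single candidate with a population and add crossover, but the accept-if-better loop above captures the same minimum-error search over the motorized axes.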
Experiment flow:
In general, for each experiment, the Safety system, the LBTS and the HPLS have to be prepared to run the experiment. The following presents the design of the general ELI-NP Safety system and how the user shall interact with the LBTS and HPLS to reach the desired beam parameter configuration for the experiment.
Safety setup (interaction area)
In the ELI-NP facility, the LBTS and HPLS shall each have their own safety systems. Each of them shall take care of the personnel safety when operating in the interaction area and in the HPLS area, respectively.
Furthermore, each experimental area shall have a safety system that shall handle the radioprotection doors position, search boxes, panic buttons, etc. The Safety system shall interact with the Building Management System (BMS) that drives the doors and which is interconnected with the fire extinguishing system, alarms, gas monitoring system, etc.
Fig. 38 ELI-NP safety systems general overview
In the interaction area, the Interaction Chambers shall have their own safety system, as depicted below. This safety system of the IC will act on the state of the IC doors, the search boxes and the panic buttons. Due to its size, the IC access doors shall have hardware position-locking systems, search boxes and panic buttons, to protect the personnel from accidents.
Fig. 39 ELI-NP IC safety system and integration with the other ELI-NP safety
LBTS setup for experiment run
Users will be able to extract the beam information of the machine into their predefined storage area. A server will be provided, together with all the software interfaces needed to extract the necessary machine information.
The user will not be allowed to modify the parameters of the Beam transport system. The beam transport system shall have its control room in the HPLS control room. Its operation will be interlocked with the HPLS operation and with the ELI-NP safety system.
In order to modify the beam transport parameters (deviate the beam, change the polarization level, adjust the wavefront with adaptive optics), a request shall be made to the LBTS operator to change the desired parameters.
HPLS setup for experiment run
Users can extract information from the intermediary database and store it in their predefined storage area. A script shall be developed for this action. Both the intermediary and the storage databases are expected to reside on a data storage virtualization server in the DAQ room.
A server will be provided, together with the software interfaces needed to extract the information from the HPLS database into the User's database. The HPLS database accessible to the user shall contain only information regarding the output beam properties of the HPLS.
The HPLS database shall also include a tag, unique for each laser beam shot, sent via hardware means to the experimental areas and synchronized with the laser pulses. The tag will also be stored in the HPLS database, to identify the parameters of each shot.
The HPLS Supervision shall provide a burst mode function that will allow the operator to fire a given number of laser pulses. This is necessary for experiment set-up.
The user will not be allowed to modify the HPLS database where the beam information resides.
The user will not be allowed to modify any of the beam parameters or fire shots, but only to request this from the HPLS operators.
Data storage setup
The data storage system shall be located in a dedicated space in the DAQ room. A virtualization system shall be implemented, in which each experiment shall have an allocated slot inside the data storage system where the user will be allowed to store the data from the experiment. The parameters from the HPLS shall be available on the same DataStorageSystem. The amount of space for each experiment shall be a function of the user's request and can vary from experiment to experiment. The data storage in the DAQ room shall be used for short-term storage of the experimental data. The architecture of the DAQ room data storage system is presented below:
Fig. 40 Data storage solution with dynamic slot assignment for each experiment
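A minimal sketch of the dynamic slot assignment idea is given below; the class, its interface and the capacities are illustrative assumptions, not the actual virtualization system:

```python
# Sketch (assumed interface, not the ELI-NP implementation) of dynamic slot
# assignment: each experiment requests a storage quota on the shared
# short-term system and releases it when the retention period ends.

class DataStorage:
    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.slots = {}                      # experiment name -> allocated TB

    def allocate(self, experiment, size_tb):
        """Reserve a slot sized per the user's request, if capacity allows."""
        if size_tb > self.free_tb():
            raise RuntimeError(f"not enough free space for {experiment}")
        self.slots[experiment] = size_tb

    def release(self, experiment):
        """Free the slot after the short-term retention period expires."""
        self.slots.pop(experiment, None)

    def free_tb(self):
        return self.capacity_tb - sum(self.slots.values())

storage = DataStorage(capacity_tb=100)
storage.allocate("E1-LDNP", 40)
storage.allocate("E8-NRF", 20)
print(storage.free_tb())  # -> 40
storage.release("E8-NRF")
print(storage.free_tb())  # -> 60
```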
Experiment run
The setup is first prepared inside the interaction area. Before exiting the experimental area, an assigned person shall actuate the search-box button, and the search procedure shall be carried out inside the area before exiting. After the search is completed, the radioprotection door is closed.
Afterwards, the user shall control the entire setup from the UsersRoom and the DAQroom.
Within the experiment flow, the user shall control the Target insertion system and the Target alignment system for targets. The Delay generators that trigger the equipment for acquisition shall also be available to the user for setting the correct gating signals.
The user shall request the correct configuration from the HPLS and LBTS operators before the experiment run, and then, during the experiment, shall request laser shots.
The following presents the two stages of the experiment preparation, the preconfiguration that takes place inside the interaction area and the remote operation of the equipment that permits this feature, along with the experiment running steps.
Preconfiguration (when inside the interaction area)
Laser configuration in exp chamber (mirrors pos, etc)
Diagnostics assembly installation,
Target system placing
TIS alignment, etc
Offline equipment setting up
Request for allocation of data storage space, testing
Adjustments of remote controlled equipment
Prealignment and testing of the target, TIS, focusing
Chamber closing and pumping, testing if OK
Realignment in vacuum
HV setup
Interaction area closing procedure
Remote configuration (when outside the interaction area)
UserRoom control:
TIS with target movement test with cameras
Beam request – test pulses
Online visualization of DAQ and CCD data
DAQ and CCD readjustment remote
If needed – access the experimental area for fine readjustment of equipment without remote control
Beam request – run mode
Online visualization
Target movement and automatic alignment is working
Target exchange procedure
Start from c) again
A diagram with steps will be added based on detailed procedures.
Experiment IT systems:
Fig. 41 Example of E1 experimental area IT systems general overview
Outside interaction area
The space outside the interaction area shall be assigned to equipment that can be placed at a large distance from the experiment (e.g. motor drivers, TANGO services). However, for the position of the motor drivers, a solution shall be chosen based on the interfaces with the LBTS equipment, vacuum-related equipment, cost effectiveness and performance.
Two solutions are considered and shall be tested for best performance and cost effectiveness regarding the TANGO architecture:
1x PC rack with 2x industrial PCs to run the TANGO based services and database.
Virtualization server inside the DAQ Room, with a separate slot for each experimental area [as implemented at Soleil].
Inside interaction area
PC rack with EMP protection, with industrial PCs and Fiber Optics (FO) output, to minimize the penetration holes for cables.
PC rack with EMP protection, with 5 industrial PCs for E1 and FO output.
1 PC for Target insertion system control (to be moved outside?)
1 PC for Target manipulation system control and delay generator control (to be moved outside?)
1 PC for alignment (CCD cameras)
2 PC for DAQ and CCD diagnostics
Rack units:
1x Rack unit for digitizers, oscilloscopes, EMP sensitive equipment that will output the data via FO. The rack shall be EMP protected.
1x Rack unit for analog signals and other signals for equipment (motor drivers connection to the motor itself, triggers, etc). The rack shall be EMP protected.
Rack 1 will have 1x patch panel with general-purpose connectors; 50 connectors to be available per experimental area.
Additionally, the TAG HW signal and a TTL signal shall be available in the racks.
Rack 2 will have 1x patch panel with BNC connectors (general purpose); 50 wires to be available per experimental area.
Additionally, the TAG HW signal and a TTL trigger signal shall be available in the rack. Other analog cables linking the interaction area with the non-protected area are TBD.
Digital signals:
The digital I/O communication shall use fiber-optic Ethernet, to prevent data corruption and equipment malfunction due to EMP.
The proposed solution uses 1x industrial PC rack system, with its slots filled with general-purpose industrial PCs that shall convert the data passed through FO into serial/parallel communication as needed.
For the signal filtering and EMP protection of the sensitive equipment, the complexity of the solution should be decided after the first tests, when a realistic estimate of the EMP value will become available and can be measured progressively. In this sense, the first approach shall use EMP protection crates and EMP filters at the entrance of the cables into the crates (if this is permitted by the bandwidth/frequency of the signals to be transmitted). Since these signals usually come from the detection, a high EMP level is in any case not acceptable, as it would mean the detection had already been affected by the pulse.
The proposed approach is depicted below:
Fig. 42 Proposed solution for E1 experimental area EMP protected rack
For the FO passthrough, a similar solution exists. The FO shall also come from a crate like the one described above.
Fig. 43 E1 experimental area EMP solution for Fiber optics cable
All cables from the experimental room will go into the racks where the equipment for detection and the IT systems that will be part of the control systems shall be placed. The industrial PCs will have a fiber uplink in the switch that will provide fiber optics links between the experimental room's racks or from them to the non-protected area.
Multimode fiber optics (62.5 µm core) shall be used, and it is mandatory that each industrial PC has a fiber optic interface (at least 1 Gbps) in order to transmit real-time data without the need to stop data acquisition.
In the proposed topology the computers in the racks can communicate horizontally or vertically according to the requirements.
Experiment Design
The alarm, services and database in the TANGO architecture shall be placed on separate computers. Two solutions are considered and shall be tested for best performance and cost effectiveness as presented in chapter 3.
Approach 1) One solution shall be to place the services and the TANGO-related database on industrial PCs placed in protected racks on the corridors.
Approach 2) The second choice is to have a virtualization server inside the DAQ room.
For each Experimental area, a separate slot shall be dedicated in the virtualization server and the necessary software will be installed accordingly.
The solution shall be chosen based on price estimation, scalability, maintenance and occupied space also in conjunction with the experimental apparatus and other equipment for the LBTS, vacuum, etc.
For the experiment, the IT systems in the interaction area shall house the TANGO device servers for the equipment. In the first stage of the development, the TANGO device servers shall only implement the access to the parameters of the HW device and not the logic (e.g. only the means to send commands to and receive the status from a motor, and not the logic to perform an automatic alignment). Because a long period of testing, development and adjustments will be needed for the implementation of the logic, this shall be made at the HMI layer and not inside the device server. LabVIEW and Matlab are envisaged to implement the GUI and the logic at the Users level.
The machines housing the TANGO drivers (device servers) for the equipment shall also have additional clients through which the user will be able to control each hardware device in particular.
In the following we will refer to the Client as the hardware and the HMI inside the interaction area, where the user has access to set up his experiment. The Client shall have an HMI where all the parameters of a device that can be controlled or monitored are available.
The Supervision shall be the hardware and software that allows the user to remotely access the parameters of the devices. Usually, the Supervision shall allow less access to the devices than the Client, for security reasons.
In the current configuration, the user shall also be able to access the Supervision SW from the interaction area.
Devices and their associated implementation:
Target manipulation system – device servers for motor drivers API in TANGO, Supervision SW and logic in LabVIEW and afterwards in TANGO specific application.
Alignment based on CCD cameras – drivers for camera in TANGO, Supervision SW and mathematics for image recognition, etc in LabVIEW and Matlab.
Target insertion system (2 possibilities to be tested for the best performance)
complete CS in LabVIEW and integration with TANGO as a device server. This requires a TANGO-LabVIEW translator. Useful to see any LabVIEW app as a device server.
TANGO device server for motor drivers API, logic and Supervision in LabVIEW or TANGO specific application.
Delay generator – Drivers in TANGO, Supervision in TANGO or LabVIEW
Controls and drivers for DAQ system in LabVIEW, remote desktop to be used in the beginning
CCD from the detection – (2 possibilities)
Driver in TANGO; Supervision and image processing, logic in LabVIEW and Matlab
Driver and logic in TANGO; Supervision in LabVIEW
As the development advances and satisfactory working solutions that need no further development are obtained, the mathematics and logic from the Supervision HMI and Client HMI can be progressively shifted from LabVIEW and Matlab to the device servers themselves, as presented below.
Fig. 44 Staged development approach for TANGO drivers and logic/mathematics for experiment control
The other specific detection equipment shall first be controlled via remote desktop access if no TANGO interface is already implemented for the device.
For the data output from the detection system (CCDs, DAQs, etc.) no specific file format exists. Dedicated high-throughput links shall exist, either connected directly to the Data storage system, or intermediate PCs shall be added by the user to encapsulate, manipulate or process the data and put it into the correct format for storage in the ELI-NP Data storage system, as presented in Fig. 1.
For the web access to the data from the experiment, the figure below presents the proposed solution.
A VPN connection shall be used to secure the PC that accesses the data. The VPN shall be installed beforehand on the PC of the user who is granted access to the data of interest.
Fig. 45 Proposed external access to the data from the experiment
Experiment implementation
The system implementation is presented below. The Supervision layer refers to the equipment and services placed inside the DAQ Room and the Users Room. For the data storage, a virtualization server will be used to assign a data slot for the experiment's needs. This data storage will only accommodate the short-term storage of the experimental data (e.g. 6 months), after which the data shall be moved to a larger data center belonging to IFIN-HH. A tape system is envisaged as a solution for long-term backup.
In the UsersRoom and Data Acq Room, the user shall have three computing stations (general PCs) with multiple monitors. A total of 5 monitors is considered sufficient to display the information related to the LDNP experiment, as detailed in the first stage of development. Three computers are considered sufficient to perform the logic and mathematics in the HMI (LabVIEW or Matlab at this stage).
For the Target Insertion System, a PC with a Supervision HMI that also holds the logic is considered sufficient to perform all the required tasks.
For the Target Manipulation System, an industrial PC shall hold the drivers to control the manipulator and the Delay Generator used in the experiments. The PC shall also host the Client HMIs to access the parameters of the above-mentioned devices.
For the CCD alignment of the Target manipulator, an industrial PC shall hold the drivers to control the cameras, as well as the generic Client HMI to access the parameters of the CCDs and display the images.
For the Target Alignment System, a PC with a Supervision HMI will be used to integrate the dedicated control of the Target manipulation system and the CCD camera logic that together determine the alignment of the target in the focal point of the HPLS beam.
Two solutions shall be tested for the best performance:
The logic for the manipulation system and the logic for the CCD cameras processing can be implemented each in separate HMI LabVIEW or Matlab clients, on the Local Control Layer, and the processed parameters accessed in the Supervision of the Target Alignment System, another HMI LabVIEW or Matlab, at the Remote Operator level. The approach shall be tested for processing speed and reliability. Additional development is needed to interface a LabVIEW application as a TANGO device server.
The entire control is implemented directly in the Supervision HMI, at the Remote Operator level. This solution can burden the CPU of the Supervision HMI at the Remote Operator level.
Fig 46 represents the experimental setup and associated detection and control. The Hardware architecture and equipment is presented in Fig. 47.
Fig. 46 E1 experimental setup example
Fig. 47 Proposed E1 experiment Hardware Architecture Model
A TANGO driver example for a motion controller is presented below:
In the experimental setup presented above, there exist several motion controllers for the different movements necessary in the experiment, e.g. target manipulation inside the Interaction Chamber. The following subsection presents a detailed example of how a motion controller shall be integrated into the TANGO architecture of the experiment control, as described in the design of the present TDR.
The main equipment to be controlled via the TANGO architecture is the 8MT173V-30-VSS42 Standa motorized translation stage, a single-axis, vacuum-compatible stage.
Fig. 48 Standa 8MT173V-30-VSS42 picture
The translation of the stage is ensured by the VSS42 stepper motor. The specifications particular to this motor are listed below.
Fig. 49 VSS42 technical specifications
Finally, this translation stage requires a controller for driving its movements. The controller currently used is the 8SMC1-USBhF. This type of controller can handle from 1 to 4 axes: the 8SMC1-USBhF-B1 series for 1 axis and the 8SMC1-USBhF-B2-i series for multiple-axis control, 'i' being the number of axes (2 ≤ i ≤ 4). The one used in the prototype is the 8SMC1-USBhF-B2-4.
Fig 50 Standa 8SMC1-USBhF-B2-4 picture
The motion controller prototype follows the general design models (hardware and software) presented in the sections above.
Hardware architecture design
The general hardware model is composed of three layers (Equipment, Local Control and Supervision) divided into different sub-layers. Following this model, the motion controller prototype will be composed of:
1 equipment hardware, the 8MT173V-30-VSS42, and its equipment interface, the 8SMC1-USBhF-B2-4 controller
1 local control unit consisting of an HP Z220 desktop along with its Human Machine Interface for local supervision, an HP ZR2330w monitor
1 supervision computer consisting of a Lenovo ThinkPad W540.
The connections between the different components are the following:
"Equipment Hardware – Equipment interface" connection : Stepper motor connector 15 SUBF (Female)
"Equipment Interface – Local control unit" connection : USB 2.0
Control system network: Ethernet cat6
In the EXP MCS that will be implemented at ELI-NP, the local control units and supervision computers will differ from the ones used in this prototype.
Software architecture design
The general software model is composed of two types of servers, generic and central, that provide services to different clients. The generic servers consist of DSs accessed by Tango/Supervision clients. The central servers are used for alarm handling, archiving or logging management.
In this prototype, the central servers are not taken into consideration. Moreover, there is no Supervision software on top of Tango. This decision is related to the fact that this first prototype focuses on the implementation of Tango itself.
Tango also requires a database, referred to as the Tango Db, used for enabling the network communications between clients and servers and for storing the different motor and controller properties. The list below summarizes the different software modules that will compose the Tango MCS prototype:
Table 12
Hardware and Software association
Tango Device classes hierarchy
Tango uses three basic concepts:
Tango Device Classes (or Tango classes): the backbone of the system
Tango Devices: one instance of a class
Tango Device Servers: processes hosting one or several devices. Tango clients connect to Device Servers through a specific API enabling the monitoring and the control of the system.
A Device hierarchy, similar to the one proposed in the TANGO DeviceServers – Design & Implementation Guidelines Revision 6, has been defined.
Table 13
Device server hierarchy
Tango Abstract Device pattern
The Abstract Device pattern is derived from the Abstract Factory pattern. This idea has been applied to Tango classes [29] and consists of providing a common interface to a wide range of concrete implementations. The abstract factory returns an abstract interface to a concrete implementation of the desired object.
The main motivations of the Abstract Device pattern are:
That the same client applications can communicate with multiple concrete implementations of related devices
To define standard interfaces to families of devices
To avoid the duplication of similar but incompatible interfaces for related devices, e.g. two power supplies from different suppliers
To guide new programmers on which essential methods need to be implemented for a family of devices
Up to now, the abstract device pattern has been used for the classes related to hardware devices.
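The pattern can be sketched in plain Python as follows; the class and method names are illustrative assumptions, and a real Tango class would inherit from the Tango device base class rather than from ABC:

```python
from abc import ABC, abstractmethod

# Sketch of the Abstract Device pattern: the abstract "Motor" fixes the
# common interface, while each concrete class wraps one vendor's controller
# protocol. Client code is written once against the abstract interface.

class Motor(ABC):
    """Abstract interface shared by all motor implementations."""
    @abstractmethod
    def move_to(self, position_mm): ...
    @abstractmethod
    def position(self): ...

class StandaMotorAxis(Motor):
    """Concrete implementation for the Standa stage (communication stubbed)."""
    def __init__(self):
        self._pos = 0.0
    def move_to(self, position_mm):
        self._pos = position_mm   # a real class would talk to the 8SMC1-USBhF
    def position(self):
        return self._pos

def jog(motor: Motor, delta_mm):
    """Client code depending only on the abstract interface."""
    motor.move_to(motor.position() + delta_mm)

axis = StandaMotorAxis()
jog(axis, 2.5)
print(axis.position())  # -> 2.5
```

A motor from another supplier would be supported by adding one more subclass of `Motor`, without touching `jog` or any other client.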
Tango Device servers design
Taking into consideration the hierarchy and the abstract device pattern presented, the Tango MCS prototype will be based on the following Tango classes:
1 "Motor" abstract Device class, Level 1.
1 "StandaMotorAxis" Device class, Level 1, representing the 8MT173V-30-VSS42 motorized stage, that inherits from the "Motor" abstract class.
1 "StandaMotorCtr" Device class, Level 1, representing the 8SMC1-USBhF-B2-4 controller
1 "USBCom" Device class, Level 0, representing the communication between the local control unit and the controller
1 "StandaMotorsGroup" Device class, Level 2, representing the motor controller along with the motor(s) controlled
All the Devices that will instantiate these classes will be hosted in the same Device Server.
The properties of each Device will be stored in the Tango Db.
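The composition of the hierarchy levels listed above can be illustrated with plain-Python stand-ins; the classes below only mirror the names in the list and are not Tango code, and the command strings are invented for the sketch:

```python
# Illustrative composition of the hierarchy levels: Level 0 handles the
# transport, Level 1 wraps the controller, and the Level 2 group coordinates
# several axes through one controller.

class USBCom:                               # Level 0: communication link
    def send(self, command):
        return f"OK:{command}"              # stub; a real class would use USB

class StandaMotorCtr:                       # Level 1: the 8SMC1-USBhF controller
    def __init__(self, link):
        self.link = link
    def move_axis(self, axis, position):
        return self.link.send(f"MOVE {axis} {position}")

class StandaMotorsGroup:                    # Level 2: controller + its motors
    def __init__(self, controller, axes):
        self.controller = controller
        self.axes = axes
    def move_all(self, position):
        return [self.controller.move_axis(a, position) for a in self.axes]

group = StandaMotorsGroup(StandaMotorCtr(USBCom()), axes=[1, 2])
print(group.move_all(10.0))  # -> ['OK:MOVE 1 10.0', 'OK:MOVE 2 10.0']
```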
Tango Clients design
The Tango clients will use the ATK suite, based on the Model-View-Controller design pattern also used in the Java Swing package. The Tango basic objects, such as device attributes and device commands, provide the model for the ATK Swing-based components called viewers. The models and the viewers are grouped into two separate packages, ATK Core and ATK Widget respectively.
Fig. 51 Tango ATK suite
ATK Core is used to create and initialize the "model" part of the design pattern (commands, attributes, etc.) directly related to the device servers that will be accessed by Tango ATK. The communication between the device servers and ATK Core is made via the Tango Java API. ATK Widget provides the Graphical User Interfaces (GUIs) based on Java Swing. In order to connect ATK Core with ATK Widget, a specific method is used: "setModel()".
Development approach
First stage
In the first stage of development, the supervision computer, the Lenovo ThinkPad W540, will not be taken into consideration. The effective control of the motor will be ensured by the DSs installed on the Local control unit and accessed by ATK clients installed on the same machine.
Regarding the device server implementation, only the "Motor" and "StandaMotorAxis" classes will be developed. The USB connection will rely directly on the Windows driver. The "StandaMotorCtr" class will not be implemented because only one motor will be used.
This first phase shall validate the implementation of the device servers for the monitoring and control of the 8MT173V-30-VSS42 motor through one 8SMC1-USBhF controller.
The following scheme provides a simplified diagram representing the first phase of development.
Fig. 52 Tango MCS prototype design – 1st stage
Second stage
Major modifications:
"USBCom" Device class will be implemented
A second "StandaMotorAxis" Device will be used
"StandaMotorCtr" Device class will be implemented
Tango ATK clients will be installed on the Supervision computer
This second stage shall validate a real distributed and remote control of several motors.
Tango Device Servers implementation
Tango comes with a suite of tools that facilitate the development of the DS. The most important ones are POGO (Program Obviously used to Generate tango Object) and Jive.
Tango Devices classes – POGO
POGO is a code generator that provides the backbone of the code needed to implement a Tango Device class. After the code is completed and compiled, an application is created.
Once the application has been created, its execution must follow this syntax:
{Device Server Exec Name} {Instance Name}
where {Device Server Exec Name} is the name of the application created by POGO
and {Instance Name} is the instance of the Device class (usually "test", or '1', '2')
Tango Device servers – Jive
The execution details above will not work if the Tango Device Server is not registered in the Tango Db. This registration is done through Jive.
Costs estimates
The equipment listed in Table 14 shall be considered in order to implement the first setup, which shall:
Control remotely the Target Insertion system,
Control remotely the Target alignment system
Provide a short term storage (6 months) to save the experimental data
Control remotely the delay generators for the various equipment that needs to be triggered at a defined moment with respect to the HPLS pulses.
Control the parameters of the detection: acquisition boards, CCDs and view the results in the DaqRoom and UsersRoom.
Table 14
Estimated IT equipment required first day LDNP experiment
8.11 Configuration and Simulation for Nuclear Resonance Fluorescence Experiment (E8)
The general layout of the experiment is the following:
Fig. 53 NRF low energy experiment position map
The Gamma Beam System in Accelerator Bay 1 is composed of one Linear Accelerator that will provide an electron beam. This beam interacts with lasers at the Low Energy Interaction Point (IP) at a frequency of 100 Hz in order to produce a gamma beam with an energy of 3.5 MeV. The gamma beam is transported to the E2 experimental area via the Gamma Beam Delivery and Diagnostics system. Finally, the ELIADE detector and its associated systems (Target Alignment Systems, ELIADE detectors and the ELIADE DAQ with its Trigger system) form the setup used during a Nuclear Resonance Fluorescence experiment.
Equipment involved in the experiment description
Besides the GBS and the GBDD systems, an NRF experiment with a low-energy beam will include the following modules:
Target alignment systems
ELIADE detectors
ELIADE DAQ
Timing system
Target Alignment systems
The Target Alignment systems regroup three main sub-systems.
The first module consists of a pipe, referred to as the CCD BLACK BOX, that contains a scintillator, a lens and a mirror, all fixed on the internal side of the pipe. The CCD BLACK BOX is installed on a stage, referred to as the CCD STAGE, that can be moved by stage motors; a 3-axis movement (x, y and z) will be achieved. This system is named the CCD system and belongs to the GBDD system.
Fig. 54 CCD black box for the GBDD
The control of this system must achieve:
The Trigger of the CCD camera for Image Acquisition (external Trigger signal is required)
The Image acquisition from the CCD camera
The Storage of the image on a computer that will perform the image processing.
The Display of the processed image on the HMI located in the User Room and possibility to interact with the image (cursors, spatial profile of the beam, statistics, etc.)
The Storage of the processed image in a dedicated data storage location
A Remote control of the motor that performs the movement of the CCD STAGE, via a HMI located in the User Room.
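The image-processing step listed above (storage, display and beam statistics) can be sketched as follows. This is a minimal illustration only, assuming a simple intensity-weighted centroid of the scintillator image; the function name and the weighting scheme are assumptions, not the actual GBDD image-processing code.

```python
# Sketch: intensity-weighted centroid of the CCD image, a typical first
# step when aligning the beam on the scintillator. Illustrative only.

def beam_centroid(image):
    """Return the intensity-weighted (x, y) centroid of a 2-D image,
    given as a list of rows of pixel intensities."""
    total = 0.0
    sum_x = 0.0
    sum_y = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            total += value
            sum_x += x * value
            sum_y += y * value
    if total == 0:
        raise ValueError("empty image: no signal on the scintillator")
    return sum_x / total, sum_y / total

# A 3x3 frame with all the intensity in the centre pixel:
cx, cy = beam_centroid([[0, 0, 0], [0, 10, 0], [0, 0, 0]])
# cx == 1.0 and cy == 1.0
```

In practice the processed frame would come from the CCD camera after the external trigger, and the centroid would be overlaid on the HMI display together with the spatial profile and statistics.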
The second system is directly related to the ELIADE Interaction Chamber (ELIADE IC). The ELIADE IC will always contain one pipe used for the transport of the beam, referred to as the COLLIMATOR PIPE. Two collimators will enable/disable the beam transport inside it. This pipe will be movable by manual or electrical actuators. This system is referred to as the Collimator system.
In terms of control, and if the movements of the COLLIMATOR PIPE are performed by motors, the requirements are the following:
Remote control of the position (OPEN/CLOSE) of the two collimators (stepper motors are proposed)
Remote control of the motors that perform the movement of the COLLIMATOR PIPE, via a HMI located in the User Room.
The last system is the TARGET PIPE that contains the target and that is manually inserted in the COLLIMATOR PIPE. It is assumed that this TARGET PIPE will be automatically aligned with the COLLIMATOR PIPE.
ELIADE detector
The ELIADE detector is the most complex equipment involved in the experiment. It comprises:
Two mechanical supports, one for the CLOVER detectors (ELIADE MEC 1) and one for the LaBr3 detectors (ELIADE MEC 2)
An array of 8 CLOVER detectors fixed on ELIADE MEC 1
An array of 4 LaBr3 detectors fixed on ELIADE MEC 2
An Interaction Chamber (ELIADE IC) under vacuum. This chamber contains a circular pipe where the target shall be inserted. Moreover, the ELIADE IC is movable relative to the mechanical support ELIADE MEC 1 by the use of actuators.
A CLOVER detector is composed of 4 High-Purity Germanium (HPGe) crystals, each of them having 8 segments.
ELIADE IC
In terms of control, the ELIADE IC vacuum level has to be remotely monitored. In the following it is assumed that the nominal vacuum level is better than 10⁻³ mbar.
The vacuum inside the ELIADE IC will be produced by several pump units (still to be defined).
One or several vacuum gauges (VG) will enable the monitoring of the pressure inside the ELIADE IC.
The interface between the GBDD pipe and the ELIADE IC is only air.
In a first stage, the pump-down will be manual: the user shall go to the experimental area to start the vacuum pump units.
Two requirements have been expressed:
A remote monitoring of ELIADE IC vacuum in the User Room through a HMI
A hardware interface shall provide the effective value of the ELIADE IC vacuum to the PLC Control system of the GBDD.
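The vacuum check exposed to the GBDD can be sketched as a simple permit condition. The 10⁻³ mbar threshold comes from the nominal level quoted above; the function and signal names are illustrative assumptions, not the actual PLC interface.

```python
# Sketch: vacuum beam-permit condition toward the GBDD PLC.
# Threshold from the nominal level stated above; names are assumptions.

NOMINAL_VACUUM_MBAR = 1e-3  # vacuum is "good" below this pressure

def vacuum_permit(gauge_readings_mbar):
    """Return True when every gauge reads better (lower) than nominal,
    i.e. the permit condition toward the GBDD is satisfied."""
    return all(p < NOMINAL_VACUUM_MBAR for p in gauge_readings_mbar)

# vacuum_permit([5e-4, 8e-4]) -> True; vacuum_permit([5e-4, 2e-3]) -> False
```

In the real system this boolean would be provided to the GBDD PLC as a hardware signal, while the raw pressure values are displayed on the User Room HMI.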
ELIADE HPGe crystals
The HPGe crystals require the control of two parameters:
Temperature
Bias Voltage (High Voltage type)
To control the temperature of each HPGe crystal, an automated LN2 filling system is foreseen. It has to keep the Ge detectors at the temperature of liquid nitrogen without any external action. The system should allow the monitoring of critical parameters, allow users to perform a minimum set of operations and give the administrator access to the complete configuration of the system.
Regarding the bias voltage, the crystals are assembled close together with an electrical insulation between them, which allows them to be operated at different bias voltages. A separate bias voltage for each crystal improves noise immunity and allows operation at a voltage lower than the nominal one should it be needed. The HV power supply shall provide a voltage of up to 5000 V, per the specifications of the HPGe crystals.
The current HV power supplies are already delivered with their own controllers and provide safety interlock signals. However, the control of the voltages will be ensured, along with the control of the LN2 filling system, by a National Instruments compactRIO running LabVIEW software. The value of the voltage applied to each crystal shall be accessible to the User.
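The per-crystal bias handling can be sketched as follows. The 5000 V ceiling is from the HPGe specification above; the data structure and function names are illustrative assumptions, not the compactRIO/LabVIEW implementation.

```python
# Sketch: clamp each crystal's requested bias to the HPGe-rated range
# before it is sent to the HV power supply controller. Names assumed.

HV_MAX_VOLTS = 5000  # ceiling from the HPGe crystal specification

def set_bias_voltages(requested):
    """Clamp each crystal's requested bias to 0..5000 V and return the
    setpoints actually applied, keyed by crystal identifier."""
    applied = {}
    for crystal, volts in requested.items():
        applied[crystal] = max(0, min(volts, HV_MAX_VOLTS))
    return applied

# set_bias_voltages({"c1": 4500, "c2": 6000}) -> {"c1": 4500, "c2": 5000}
```

The applied setpoints, one per crystal, are what the User would see on the HMI, satisfying the requirement that the voltage applied to each crystal be accessible.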
ELIADE DAQ
The DAQ envisaged for the HPGe crystals follows these steps:
Each HPGe crystal is read out by a charge-sensitive preamplifier (front-end electronics) that requires a Low-Voltage (LV) power supply.
After the preamplifier, a digital readout system (DAQ Front-End) able to cope with the signals from the Ge crystals has to be developed.
The samples acquired by the digitizers shall be read-out by fast multi-core PCs that will process the data.
The processed data shall be stored in the DAQ Room in two separate storage disks in order to have a back-up/recovery feature.
For the LaBr3 detectors, the steps are similar. The front-end in this case is the photomultiplier tube (PMT) attached to the detector, while the front-end electronics is specific to this type of detector.
If possible, a remote monitoring of the LV Power supplies shall be done by the User, via an HMI.
Assuming that the ELIADE DAQ Front-End will be based on dedicated crates (VME/PXI/CompactPCI, etc.) with digitizer boards that will be accessed by a PC, several parameters are envisaged to be measured and compared to acceptable values. These parameters can be divided into two categories:
Physical parameters
Temperature
Voltage and Current
Logical parameters (not exhaustive)
Single board CPU status (run, failure, reset, etc.)
Watchdog Timer
Boards memory (registers, FIFO, etc.)
The remote control of the physical parameters is nowadays achievable by the use of power supply units and fan trays containing a microcontroller and providing CAN, Ethernet or RS-232 ports. This enables the remote control of:
Power Supply parameters: Voltage, Current, limits set-up, etc.
Fan tray parameters: voltage, current, power, temperature, speed, etc.
Regarding the logical parameters, two major solutions have been developed so far:
Local control of the single board by another one: the VME System Monitor Board developed at the Argonne National Laboratory [30] is one example.
Remote control of the boards of the crate by the use of a PC running a server that maps the memory locations of every board installed in the crate. This enables access to the registers and FIFOs of each board. This system has been developed at the Canadian Light Source [31].
The two solutions detailed for the logical parameters can also be used for the physical ones.
In a first stage, the requirement in terms of control is to remotely control the physical parameters, via a HMI in the User Room.
In a second stage of development, the control of the logical parameters will be achieved.
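The first-stage remote monitoring of the crate's physical parameters amounts to comparing each reading against an acceptable window and raising an alarm on the HMI when it falls outside. The parameter names and limit values below are illustrative assumptions, not the actual crate configuration.

```python
# Sketch: compare crate physical parameters against (low, high) limits
# and report out-of-range parameters as HMI alarms. Limits are assumed.

LIMITS = {
    "temperature_C": (10.0, 45.0),
    "voltage_V": (11.4, 12.6),
    "current_A": (0.0, 20.0),
}

def check_crate(readings):
    """Return the list of parameters outside their acceptable window,
    in the order the readings are supplied."""
    alarms = []
    for name, value in readings.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            alarms.append(name)
    return alarms

# An over-temperature crate produces one alarm:
alarms = check_crate({"temperature_C": 50.0, "voltage_V": 12.0, "current_A": 5.0})
# alarms == ["temperature_C"]
```

The same comparison loop applies whether the readings arrive over CAN, Ethernet or RS-232 from the power supply and fan-tray microcontrollers.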
Timing/Synchronization system
The first solution, as presented before, can be based on a Timing system that outputs a trigger signal and a TAG identifier. Inside the GBS machine, each TAG identifier shall be logged with the corresponding machine beam-diagnosis data or other data of importance (e.g. flash lamp pulse number).
The second solution is to use an NTP server and synchronize all PCs against a GPS reference in order to have the same timestamp.
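The TAG-based option can be sketched as a monotonically increasing identifier issued with every trigger and logged alongside the machine data, so that experiment events can be correlated offline. The class and field names are illustrative assumptions, not the GBS timing implementation.

```python
# Sketch: each trigger carries the next TAG identifier, logged together
# with the beam-diagnosis data it corresponds to. Names are assumptions.

import itertools

class TagGenerator:
    def __init__(self):
        self._counter = itertools.count(1)
        self.log = []  # (tag, machine_data) records; a file/DB in practice

    def trigger(self, machine_data):
        """Issue the next TAG and log it with the machine data."""
        tag = next(self._counter)
        self.log.append((tag, machine_data))
        return tag

tags = TagGenerator()
t1 = tags.trigger({"flash_lamp_pulse": 17})
t2 = tags.trigger({"flash_lamp_pulse": 18})
# t1 == 1, t2 == 2
```

On the experiment side, the same TAG attached to each acquired event is what allows the event to be matched to the corresponding machine shot during analysis.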
General experiment systems, machine setup and running steps
GBS
Experiment safety
Experiment flow
GBS
The beam parameters set or measured in the Gamma Beam System shall be accessible to the User. This transmission of information over the two control networks should be straightforward to implement because both frameworks use EPICS.
Experiment Safety
The safety of the persons working in the facility will be handled by the ELI Personal Safety System (ELI PSS). The scope of the ELI PSS is:
The access control to the areas where a risk exists (this supposes that all the risks have been evaluated)
The monitoring of, and alarming on, the radiation levels in the facility areas
The respect of safety procedures related to the use of dedicated components/systems for the experiment (Liquid Nitrogen, High Voltage, Vacuum, etc.)
The following are not in the scope of the ELI-PSS:
The general safety risks such as fire (detection & extinction), CO2 & O2, temperature, etc. These risks are controlled by the different systems composing the Building Management System (BMS) and could be interfaced with the ELI-PSS. The Door Interlock and the Access Control systems are also in the scope of the BMS; however, these will be directly integrated within the ELI-PSS.
The physical protection such as shields, clothes and glasses that shall be worn, etc.
Fig. 55 GBS Safety system in correlation with other ELI-NP safety systems
To summarize, the ELI-PSS has to handle all the risks induced directly by the different phases of all the experiments (Set-up, Start, Run and Maintenance).
The ELI-PSS will master several safety sub-systems distributed in the facility. All the safety sub-systems should work independently and be able to take a decision (open a mechanical contact, activate a shutter, etc.) whenever it is needed. In the case of the NRF experiments, these sub-systems are:
GBS Safety System: this GBS Safety system will handle the access control to the GBS machine areas and implement a radiation alarm system in these areas [32]
GBDD PSS: the GBDD Safety system will collect interlock signals that should prevent unsafe beam delivery.
Experimental PSS, which should cover all the risks that could occur during the different phases of an experiment (calibration, set-up, run, maintenance).
The BMS Door Interlock
The Radiation Monitoring System, which shall monitor the radiation levels in the entire facility, along with alarms, flashing lamps, etc.
Experimental Flow
Access to the experimental area:
In this part, the set-up involves the presence of the Users in the experimental area. An Access Safety Procedure shall be followed, ensuring that:
The radiation level in E2 is under the threshold given by the radiation safety sub-system.
Access to the E2 area is impossible if the accelerator is ON.
For E2 access, the User shall use the Door Interlock system based on the key panels and shall also have the rights (code & card) to enter this area.
The presence of one person in E2 shall be continuously indicated to the ELI-PSS.
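The access conditions listed above reduce to a simple boolean permit. The predicate names below are illustrative assumptions; the real logic is implemented in the ELI-PSS safety hardware, not in software like this.

```python
# Sketch: E2 access permit combining the conditions of the Access
# Safety Procedure. Predicate names are assumptions.

def e2_access_permitted(radiation_below_threshold, accelerator_on,
                        has_code_and_card):
    """Access is granted only with the radiation level under threshold,
    the accelerator OFF, and valid credentials at the key-panel door
    interlock."""
    return (radiation_below_threshold
            and not accelerator_on
            and has_code_and_card)
```

For instance, a User with valid credentials is still denied access while the accelerator is ON, which is exactly the second condition above.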
Experiment run with the Gamma Beam:
A Safety Start Experiment request should be followed each time a User asks to run an experiment using the Gamma Beam.
The experimental area hosting the ELIADE detector for NRF experiment using the low-energy beam will be E2.
Preliminary safety procedures:
The access safety procedure has to be done.
The steps of this set-up are the following:
CCD camera Alignment with the Beam
After mounting the {ELIADE TARGET PIPE + CCD CAMERA} in E2, the User has to exit the experimental area and go to the control room. He has to request the use of the Gamma Beam and should follow the Safety Start Experiment request. Even if there is no real experiment at this time, the use of the gamma beam inside an experimental area shall be considered as an experiment. Then the User performs, via an HMI installed in the User Room, the alignment process detailed in Chapter 2.
Target insertion in the ELIADE IC
This step requires access to the experimental area E2, so the Safety Access procedure has to be followed. If everything is in order, the User can access the room. This insertion does not require any control or monitoring yet, although this might change in the future. The insertion could be done inside the target laboratory if mandatory.
Target alignment with the beam
Once the target has been inserted and fixed on the ELIADE IC, the target has to be aligned with the beam. Again, the User must leave the area and follow the Safety Start Experiment request. The alignment is then achieved remotely from the User Room.
ELIADE IC vacuum
Access Safety procedure and then manual start of the pump units of ELIADE IC.
ELIADE IC HPGe crystals set-up (Temperature and HV)
Each Dewar has to be connected to the transfer lines of the LN2 filling system.
The HV Power supply controller has to be connected to the compactRIO. All the temperature channels of the HPGe crystals (Pt-100 sensors) must be connected to the compactRIO.
Then, the configuration of the system has to be done locally via a local computer running LabVIEW.
Once this step is finished, the User can exit the room and go to the User Room.
Data Storage
The data storage depends strongly on the DAQ and Timing systems implemented. The following features are envisaged:
Processing of the raw data by the use of fast read-out PCs, e.g. one read-out PC with several fiber ports or with a PCI-Express card linked to one or several digitizers.
Replication of Raw Data on dedicated storage disks for backup/recovery.
Storage of the processed data.
Fig. 56 Data storage for experiments, sketch
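The backup/recovery step above (replication of each file to two independent storage locations) can be sketched as follows. The paths here use throw-away temporary directories; in the DAQ Room these would be the two separate storage disks. Function and file names are illustrative assumptions.

```python
# Sketch: write every processed file to two independent storage
# locations so one copy survives a disk failure. Names are assumptions.

import shutil
import tempfile
from pathlib import Path

def store_with_replica(src, primary_dir, replica_dir):
    """Copy src into both storage locations and return the two paths."""
    dests = []
    for d in (primary_dir, replica_dir):
        dest = Path(d) / Path(src).name
        shutil.copy2(src, dest)
        dests.append(dest)
    return dests

# Demonstration with temporary directories standing in for the disks:
with tempfile.TemporaryDirectory() as work:
    src = Path(work) / "run0001.dat"
    src.write_bytes(b"processed event data")
    d1 = Path(work) / "disk1"; d1.mkdir()
    d2 = Path(work) / "disk2"; d2.mkdir()
    copies = store_with_replica(src, d1, d2)
    ok = all(p.read_bytes() == b"processed event data" for p in copies)
```

A production system would replicate asynchronously (e.g. at the filesystem or RAID level) rather than with synchronous copies, but the recovery guarantee is the same: two physically separate copies of every processed file.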
Pre-configuration
If the User wants to integrate new equipment into the experiment, he will use local EPICS clients or other IDEs (bindings with EPICS/TANGO will be provided). Moreover, he will decide which values of interest must be archived, what the alarm settings are and which error logs have to be taken into account. To summarize, he will configure the Central Services. He should also configure a part of the timing system.
Experiment run
Before starting the safety process, the User will start the EPICS/Top Supervision clients that will be provided in the User Room. The main client that he/she will use will be a synoptic of the experiment with all the main equipment and its associated alarms.
Then the Safety Start Experiment process is launched. If it ends with an OK signal, the User can start his experiment.
During the experiment, the Users will visualize on a dedicated monitor:
The Diagnostics Parameters provided by the GBDD
The Beams parameters set by the GBS, after User request (see below)
The user will monitor:
The compactRIO that runs the auto-filling system. This monitoring includes the surveillance of both the hardware status and the crystals' critical parameters: temperature and voltage.
The Low-Voltage power supply
The vacuum level inside the ELIADE IC.
The user will control:
The timing system
The physical parameters of the ELIADE DAQ Front-Ends (the control of the logical parameters will not be achieved in a first phase)
In addition, alarm statuses, error logs and the archiving of values of interest will also be provided. These features are referred to as central services.
During the experiment, the User can request new beam parameters (e.g. energy): the gamma-driven experiments, and NRF in particular, require the use of different beam energies. When the User wants to modify this parameter, he has to send it to the GBS Operator via the Control System Network (EPICS CA) or by phone. In either case, the request has to be logged and archived in the central services.
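The logging of such a request can be sketched as below. The record fields and function name are illustrative assumptions; in production the record would go through the EPICS archiver and the central services, not a Python list.

```python
# Sketch: archive a User's beam-parameter request before it reaches the
# GBS Operator, whichever channel (EPICS CA or phone) carried it.

import json
import time

REQUEST_LOG = []  # stands in for the central-services archive

def request_beam_parameter(user, parameter, value, channel="EPICS CA"):
    """Archive the request and return the serialized record that would
    be stored by the central services."""
    record = {
        "timestamp": time.time(),
        "user": user,
        "parameter": parameter,
        "value": value,
        "channel": channel,  # "EPICS CA" or "phone"
    }
    REQUEST_LOG.append(record)
    return json.dumps(record)

request_beam_parameter("user1", "gamma_energy_MeV", 2.4)
# REQUEST_LOG now holds one archived request
```

The key point of the requirement is that phone requests are archived by the same mechanism as network requests, so the `channel` field is recorded either way.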
Experiment Design
The design follows the models presented in Chapter 7.
Experiment Implementation
The following describes the hardware and software implementation associated with the NRF experiment. However, hardware and software from the GBDD MCS will also be used.
The Central Services implementation is common to all the gamma-based experiments and will also be used for the GBDD MCS. These are described in the Control System TDR [Ref. TDR Control].
Hardware
The goal is to purchase the same type of hardware in the entire facility, to the largest possible extent.
Local Racks
Three racks are envisaged.
Inside or outside E2/E8: NRF rack, with:
Control Units
1 industrial PC for the COLLIMATOR PIPE motors
1 industrial PC for the ELIADE IC Vacuum
1 industrial PC for the control of ELIADE DAQ FRONT-ENDS
1 compactRIO for the control of the LN2 Filling system and the High Voltage Power supplies.
Local HMI:
1 KVM switch with 4 ports or 1 HMI Touch Panel
1 Power Distribution Unit (PDU)
Control Unit – Equipment interface : 1 Switch 48 Ethernet ports + 1 patch panel
This rack shall be movable from E2 to E8.
Rack – Control system network interface: 1 Ethernet Switch 48 ports
Supervision computers
1 desktop PC with 4 monitors inside the User Room (NRF Supervision PC): monitoring of the ELIADE IC vacuum, control of the ELIADE DAQ & Timing, monitoring of the temperature and voltage of the HPGe crystals, and control and monitoring of the GBDD diagnostics components.
1 additional PC (Dev PC) will be used for tests and development before and after the complete commissioning of the control system.
Data Processing computers
Inside the interaction area: a real-time industrial PC rack for DAQ Front-End read-out, with 4 industrial PCs for E2: multi-core PCs with four 1/10 Gb ports (optical link). These PCs shall also have the possibility to integrate PCI-e cards.
Data Storage servers
In the DAQ room: to be defined.
Software
The operating system used will be Scientific Linux (version to be defined), except for the platforms that require a real-time operating system (e.g. the Timing platform).
On all the computers described above, the EPICS Base will be installed. Other EPICS extensions and modules will be added, depending on the IOCs implemented.
EPICS IOCs
Target Alignment Systems
Implemented on the Collimator Industrial PC (1st stage):
A Hard IOC will be developed for interfacing the COLLIMATOR PIPE motor controller that will be delivered along with the two stepper motors: the COLLIMATOR PIPE MOTOR Hard IOC.
An IOC will be developed for interfacing the collimator motor controller delivered with the two 2-axis motors: the COLLIMATOR MOTOR CONTROL Hard IOC.
ELIADE detectors
Implemented on the ELIADE IC Vacuum Industrial PC: a Hard IOC will be used for interfacing the vacuum pressure controller of the vacuum gauges implemented for the ELIADE IC.
In a second stage of development, a Hard IOC might be developed for the monitoring of the vacuum pump units.
Implemented directly on the compactRIO: one Hard IOC for monitoring the LN2 filling system and the temperature and voltage of the HPGe crystals. It will work along with LabVIEW. This feature has already been implemented [Ref. National Instruments].
ELIADE Timing
To be done.
ELIADE DAQ
Implemented on the ELIADE DAQ: one soft IOC for the control of the physical parameters from the ELIADE DAQ Front-Ends
EPICS Clients
The generic clients shall be used
CA Clients: MEDM, CLT, iocLog. All of them will use the EPICS access security feature. MEDM is used for synoptics and monitoring, CLT are the generic command-line tools provided with the EPICS Base, and iocLog is used for the configuration and display of the errors that occur on each IOC.
Java Client : VDCT
These clients will be installed on all the industrial PCs listed above (and possibly on all the Supervision PCs). The local clients will provide easy access to the hardware equipment or its interface via the EPICS IOCs.
Supervision Software
The Supervision Software shall be the software installed on Supervision computers allowing the user to remotely access the parameters of the devices. Usually, the Supervision shall allow less access to the devices than the Client, for security and bandwidth reasons.
The Supervision Software will also implement, along with the central servers and databases, the central services detailed previously (Alarm Handling, Archiving system, Log System). Monitoring of the health of the EPICS IOCs is also envisaged.
Data Processing Software
Not in the scope of this document.
Data Storage Software
Not in the scope of this document.
The general architecture model is presented below. The data processing and data storage components do not appear because their design is not yet fixed.
Fig. 57 NRF experiment Hardware architecture model
Costs estimation of equipment
Each area shall have a rack of 26-42 U form factor meeting the following minimum requirements:
– each rack has its own air-cooling unit
– each rack will provide space for BNC and connector patch panels
– each rack provides at least enough UPS power to support the Industrial PC and the other installed equipment, respecting ESD protection requirements.
All cables from the experimental room will go into the rack, to the control systems; from this point the industrial PCs will have a fiber uplink into the switch, which will provide fiber-optic links between the experimental rooms' racks or from them to the central monitoring rack. The infrastructure shall be deployed in a star or mesh topology, with the central point of the star positioned in a safe area with a controlled environment.
The identically equipped racks are distributed near the experimental rooms and will collect the slow-control data from the sensors; the data is stored on the computer's disk and replicated to a central storage server (which will provide RAID and backup for the data). The storage server will be located in a dedicated computing room that will be a data center respecting all the necessary regulations for storing/processing the data according to the researchers' needs.
From each rack, multimode fiber optics (62.5 µm core) will go to the central server, or to other racks collecting the replicated sensor data; it is mandatory that each industrial PC has a fiber-optic interface (at least 1 Gbps) in order to transmit real-time data without the need to stop the data acquisition.
In the proposed topology the computers in the racks will be communicating horizontally or vertically according to the required architecture.
The minimum requirements can be slightly changed according to each experiment if the sensors require a certain type of management interface or a certain type of communications.
Table 15
Estimated IT equipment required first day NRF experiment
This table does not include the compactRIO and the LabVIEW software that are envisaged.
Costs and planning
The following table presents the estimated IT equipment for the first day experiments, taking into account the experimental areas. The IT equipment described is intended to remotely control the devices/detectors/equipment of the experiment from the User Room/Data Acquisition Room and to provide at most 6 months of data storage space for the experimental data. The estimated equipment costs presented in the following represent a compilation of the requirements of the 8 experimental areas, for first day experiments.
Table 16
Estimated IT equipment required for the first day experiments
The planning for the implementation for Day 0, and what is envisaged to be developed and available for the Users, is as follows:
Laser driven experiments:
HPLS parameters stored in users data storage, to be accessible by the User.
Functional Data storage with virtual slots dedicated to each experimental area accessible by the User
Data processing solution
NTP server for synchronization of all PCs internal timing, for correlation of experiment data with HPLS parameters and other equipment.
Programmable triggering system for equipment in the experimental areas, synchronized with the HPLS. The trigger delay will be able to be controlled by the User.
Target insertion system, able to be controlled by the User
Target manipulation system inside experimental areas, able to be controlled by the User.
Gamma driven experiments:
GBS parameters stored in users data storage, to be accessible by the User.
Functional Data storage with virtual slots dedicated to each experimental area accessible by the User
Data processing solution
NTP server for synchronization of all PCs internal timing, for correlation of experiment data with GBS parameters and other equipment
Programmable triggering system for equipment in the experimental areas, able to be controlled by the User.
REFERENCES