OPERATION-2024 logbook, Contact: F.Wolff-Fabris, A.Galler, H.Sinn
ID   Date   Author   Group   Subject
205   23 Nov 2024, 10:27   Jan Gruenert   PRC   Issue

SA1 Beckhoff issue

SA1 blocked to max. 2 bunches and SA3 limited to max. 29 bunches
At 9h46, SA1 goes down to single bunch. PRC is called because SA3 (SCS) cannot get the required >300 bunches anymore (at 9h36, SA3 was limited to 29 bunches/train).

FXE, SCS, and DOC are informed; DOC and PRC identify this as an SA1 Beckhoff problem.
Many MDLs in SA1 are red across the board, e.g. SRA, FLT, and the mirror motors. The actual hardware state cannot be known and no movement is possible.
Therefore, EEE-OCD is called in. They check and identify out-of-order PLC crates of the SASE1 EPS and MOV loops.
The vacuum system and VAC PLC seem unaffected.

EEE-OCD needs access to tunnel XTD2. It could be a fast access if it is only a blown fuse in an EPS crate.
This will, however, require decoupling the North branch and stopping all beam operation for SA1 and SA3.
In any case, SCS confirms to PRC that 29 bunches are not enough for them, and FXE also cannot continue with only 1 bunch; effectively this is downtime for both instruments.

Additional oddity: one pulse per train is still delivered to SA1 / FXE, but there is no pulse energy in it! The XGM detects one bunch, but with <10 uJ.
It is unclear why; PRC, FXE, and BKR are looking into it until EEE goes into the tunnel.

In order not to ground the magnets in XTD2, a person from MPC / DESY has to accompany the EEE-OCD person.
This is organized and they will meet at XHE3 to enter XTD2 from there.

VAC-OCD is also aware, is checking their systems, and is on standby to accompany the access to the tunnel if required.
For the moment it looks as if the SA1 vacuum system and its Beckhoff controls are ok and not affected.

11h22: SA3 beam delivery is stopped.
In preparation for the access, the North branch is decoupled. SA2 is not affected, normal operation.

11h45: Details on the SA1 control status. The following (Beckhoff-related) devices are in ERROR (see the sketch after this list):

  • SA1_XTD2_VAC/SWITCH/SA1_PNACT_CLOSE and SA1_XTD2_VAC/SWITCH/SA1_PNACT_OPEN (this limits bunch numbers in SA1 and SA3 by EPS interlock)
  • SA1_XTD2_MIRR-1/MOTOR/HMTX and HMTY and HMRY, but not HMRX and HMRZ
  • SA1_XTD2_MIRR-2/MOTOR/* (all of them)
  • SA1_XTD2_FLT/MOTOR/
  • SA1_XTD2_IMGTR/SWITCH/*
  • SA1_XTD2_PSLIT/TSENS/* but not SA1_XTD2_PSLIT/MOTOR/*
  • more ...
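
For illustration only, a minimal Python sketch of how such an ERROR overview could be assembled. The snapshot dictionary stands in for a real control-system query; the device names are taken from the list above, but the hard-coded states and the listed axis names are illustrative, not the actual Karabo API or the full device set.

```python
# Minimal sketch (illustration only): list which SA1 XTD2 devices report ERROR.
# In reality these states would come from a control-system query, not a dict.

SNAPSHOT = {
    "SA1_XTD2_VAC/SWITCH/SA1_PNACT_CLOSE": "ERROR",
    "SA1_XTD2_VAC/SWITCH/SA1_PNACT_OPEN": "ERROR",
    "SA1_XTD2_MIRR-1/MOTOR/HMTX": "ERROR",
    "SA1_XTD2_MIRR-1/MOTOR/HMRX": "ON",   # not affected, per the list above
    "SA1_XTD2_FLT/MOTOR/FLT_Y": "ERROR",  # hypothetical axis name
}

def devices_in_error(snapshot):
    """Return the device IDs whose reported state is ERROR, sorted."""
    return sorted(dev for dev, state in snapshot.items() if state == "ERROR")

if __name__ == "__main__":
    for dev in devices_in_error(SNAPSHOT):
        print(dev)
```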

12h03: Actual physical access to XTD2 has begun (team entering).

12h25: the EPS loop errors are resolved, FLT, SRA, IMGTR, PNACT all ok. Motors still in error.

12h27: the MOTORs are now also ok.

The root causes and failures will be described in detail by the EEE experts; here only in brief:
Two PLC crates lost communication to the PLC system. The fuses were ok. These crates had to be power-cycled locally.
Now the communication is re-established, and the devices on the EPS as well as the MOV loop have recovered and are out of ERROR.

12h35: PRC and DOC checked the previously affected SA1 devices and all looks good; the team is leaving the tunnel.

13h00: Another (or still the same) problem: FLT motor issues (related to EPS). This component is now blocking the EPS. EEE and device experts are working on it in the control system. They find that they need to go to the tunnel again.

13h40: EEE-OCD is at SA1 XTD2 rack 1.01, and it smells burnt. They are checking fuses and power supplies of the Beckhoff crates.

14h: The SRA motors and the FLT both depend on this Beckhoff control.
EEE decides to remove the defective Beckhoff motor terminal because voltage is still being delivered and there is a danger that it will start burning.

With the help of the colleagues in the tunnel and the device expert, we manually move the FLT manipulator to the graphite position;
looking at the photon energies in the operation plan, it can stay there until the WMP.

At the same time we manually check the SRA slits; the motors are running ok. However, removing that Beckhoff terminal also disables the SRA slits.
Replacing it would require a spare controller from the labs, so in the interest of returning to operation we decide to move on without being able to move the SRA slits.

14h22: Beam is back on. We immediately test that SA3 can go to 100 bunches - OK. At 14h28 they go to 384 bunches - OK. Handover to the SCS team.

14h30: Realignment of the SA1 beam. When inserting IMGFEL to check whether the SRA slits are clipping the beam, it is found after some tests with the beamline attenuators and the other scintillators that there is a damage spot on the YAG! See the next logbook entry and ATT#3. The damage is not on the graphite filter or the attenuators, nor in the beam itself as we initially suspected.
The OCD colleagues from DESY-MPC and EEE are released.
The SRA slits are not clipping now. The beam is aligned to the handover position by RC/BKR. Beam handover to the FXE team around 15h.

Attachment 1: JG_2024-11-23_um_12.06.35.png
Attachment 2: JG_2024-11-23_um_12.10.23.png
Attachment 3: JG_2024-11-23_um_14.33.06.png
206   23 Nov 2024, 15:11   Jan Gruenert   PRC   Issue

Damage on YAG screen of SA1_XTD2_IMGFEL

A damage spot was detected during the restart of SA1 beam operation at 14h30.

To confirm that it is not something in the beam itself or damage on another component (graphite filter, attenuators, ...), the screen is switched from the YAG to another scintillator.

ATT#1: shows the damage spot on the YAG (lower left dark spot; the upper right dark spot is the actual beam). Several attenuators are inserted.
ATT#2: shows the YAG when it is just moved away a bit (while moving to another scintillator, but with the beam still on the YAG): only the beam spot is now seen. Attenuators unchanged.
ATT#3: shows the situation with the BN scintillator (and all attenuators removed).

The XPD expert will be informed about this newly detected damage in order to take care of it.

Attachment 1: JG_2024-11-23_um_14.33.06.png
Attachment 2: JG_2024-11-23_um_14.40.23.png
Attachment 3: JG_2024-11-23_um_15.14.33.png
207   24 Nov 2024, 13:47   Jan Gruenert   PRC   Issue

Beckhoff communication loss SA1-EPS

12h19: BKR informs PRC about a limitation to 30 bunches in total for the North branch.
The property "safety_CRL_SA1" is limiting in the MPS.
DOC and EEE-OCD and PRC work on the issue.

Just before this, DOC had been called and informed EEE-OCD about a problem with the SA1-EPS loop (many devices red), which was very quickly resolved.
However, "safety_CRL_SA1" had remained limiting, and SA3 could only do 29 bunches but needed 384 bunches.

EEE-OCD together with PRC followed the procedure outlined in the radiation protection document "Procedures: Troubleshooting SEPS interlocks":
https://docs.xfel.eu/share/page/site/radiation-protection/document-details?nodeRef=workspace://SpacesStore/6d7374eb-f0e6-426d-a804-1cbc8a2cfddb

The device SA1_XTD2_CRL/DCTRL/ALL_LENSES_OUT was in the OFF state although PRC and EEE confirmed that it should be ON - an aftermath of the EPS-loop communication loss.
EEE-OCD had to restart / reboot this device at the Beckhoff PLC level (no configuration changes); it then correctly loaded the configuration values and came back in the ON state.
This resolved the limitation in "safety_CRL_SA1".
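
As a rough illustration of the kind of consistency check involved here (reported state vs. expected state), a minimal sketch follows; read_state() is a hypothetical placeholder and not the actual PLC or Karabo interface.

```python
# Minimal sketch (illustration only): compare a device's reported state with
# the value it is expected to have. read_state() is a hypothetical stand-in
# for the real control-system call; here it simply replays the OFF state
# observed after the EPS-loop communication loss.

EXPECTED_STATES = {"SA1_XTD2_CRL/DCTRL/ALL_LENSES_OUT": "ON"}

def read_state(device_id):
    """Hypothetical placeholder for querying a device's current state."""
    return "OFF"

def find_mismatches(expected):
    """Return messages for devices whose state differs from the expected one."""
    messages = []
    for device, want in expected.items():
        got = read_state(device)
        if got != want:
            messages.append(f"{device}: reported {got}, expected {want}")
    return messages

if __name__ == "__main__":
    for msg in find_mismatches(EXPECTED_STATES):
        print(msg)  # such a mismatch is what pointed to the PLC-level restart
```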

All beamlines are operating normally. The intervention ended around 13h30.

208   24 Nov 2024, 14:09   Jan Gruenert   PRC   Issue

Beckhoff communication loss SA1-EPS (again)

14h00: Same issue as before, but now EEE cannot clear the problem remotely anymore.
Immediate access is required to repair the SA1-EPS loop and regain communication.
EEE-OCD and VAC-OCD are coming in for the access and repair.

 

PRC update at 16h30 on this issue

One person each from EEE-OCD, VAC-OCD, and the DESY-MPC shift crew made a ZZ access to XTD2 to resolve the imminent Beckhoff EPS-loop issue.
The communication coupler for the fiber-to-EtherCAT connection of the EPS loop was replaced in SA1 rack 1.01.
Also, a new motor power terminal (replacing the one that had been smoking and was removed yesterday) was inserted to close the loop and regain redundancy.
All fuses were checked. However, the functionality of the motor controller in the MOV loop could not yet be restored, so another ZZ access
after the end of this beam delivery (e.g. next Tuesday) is required to make the SRA slits and the FLT movable again. EEE-OCD will submit the request.

Moving the FLT is not crucial now; given the planned photon energies shown in the operation plan, it can wait for repair until the WMP.
Moving the SA1 SRA slits is also not strictly necessary at this moment (until Tuesday), but it will be needed for further beam delivery in the coming weeks.

Conclusions: 

In total, SA1+SA3 were limited to max. 30 bunches for about 3 hours this afternoon.
The ZZ access lasted less than one hour; SA1/3 thus didn't have any beam from 15h18 to 16h12 (54 min).
Beam operation SA1 and SA3 is now fully restored, SCS and FXE experiments are ongoing.
The SA1-EPS-loop communication failure is cured and should thus not come back again.
At 16h30, PRC and SCS tested that 384 bunches can be delivered to SA3 / SCS - OK.
Big thanks to all colleagues who were involved in resolving this issue: EEE-OCD, VAC-OCD, DOC, BKR, RC, PRC, XPD, DESY-MPC.

 

Timeline + statistics of safety_CRL_SA1 interlock trips (see ATT#1)
(these are essentially the times during which SA1+SA3 together could not get more than 30 bunches)

12h16 interlock NOT OK
13h03 interlock OK
duration 47 min

13h39 interlock NOT OK
13h42 interlock OK
duration 3 min

13h51 interlock NOT OK
13h53 interlock OK
duration 2 min

13h42 interlock NOT OK
15h45 interlock OK
duration 2h 3min

SA3 had beam with 29 or 30 pulses until the start of the ZZ access, while SA1 had already lost beam for good from 13h48 until 16h12 (2h 24 min).
SA3/SCS needed 386 pulses, which they had until 11h55; this afternoon they only got / took 386 pulses/train from 13h04-13h34, from 13h42-13h47, and after the ZZ access.
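
As a cross-check of the roughly 3 hours quoted above, the four NOT OK / OK intervals can simply be summed; the sketch below does this with plain Python datetime arithmetic on the times copied from the timeline (they add up to 2 h 55 min).

```python
# Cross-check of the interlock timeline above: sum the NOT OK -> OK intervals.
from datetime import datetime, timedelta

INTERVALS = [          # (interlock NOT OK, interlock OK), copied from the log
    ("12:16", "13:03"),
    ("13:39", "13:42"),
    ("13:51", "13:53"),
    ("13:42", "15:45"),
]

FMT = "%H:%M"

def duration(start, end):
    """Duration between two same-day HH:MM timestamps."""
    return datetime.strptime(end, FMT) - datetime.strptime(start, FMT)

if __name__ == "__main__":
    total = timedelta()
    for start, end in INTERVALS:
        d = duration(start, end)
        total += d
        print(f"{start} -> {end}: {d}")
    print("total limited time:", total)   # about 3 hours, as stated above
```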

Attachment 1: JG_2024-11-24_um_16.54.02.png
209   24 Nov 2024, 17:23   Jan Gruenert   PRC   Status

Status

All beamlines are in operation.
Attachment ATT#1 shows the beam delivery of this afternoon, with the pulse energies on the left and the number of bunches on the right.
SA2 was not affected; SA1 was essentially without beam from 13h45 to 16h15 (2.5 h), and SA3 did not get enough (386) pulses during about the same period.
Beam delivery to all experiments has been ok since ~16h15; SCS is simply not taking more pulses for other (internal) reasons.

Attachment 1: JG_2024-11-24_um_17.22.58.png
214   04 Dec 2024, 15:34   Jan Gruenert   PRC   Status

Beam delivery status well after tuning finished (which went faster than anticipated).

Attachment 1: JG_2024-12-04_um_15.33.09.png
11   03 Feb 2024, 23:51   Jörg Hallmann   MID   Shift summary

goals achieved:
alignment of the beam in the tunnel and hutches, including refinement of the beam in the center of rotation
check of cameras and optical components
initial check of the mirror chamber (successful)

issues:
SmarAct motors in the DES are tricky - some are in an error state and the filter wheel kept moving and would not stop - we positioned it and afterwards removed the cable - an EEE intervention next week is necessary
the camera of Pop-in imager II45-2 in XTD6 went into error - DOC solved the issue

12   04 Feb 2024, 23:36   Wonhyuk Jo   MID   Shift summary

Goals achieved:
- Finished flat-field measurements for AGIPD in different scenarios for upcoming user experiments.
- Aligned the spectrometer at DES and obtained the energy spectrum of 9 keV SASE.
- To investigate the sample-to-detector distance of AGIPD, we collected SAXS patterns from three different types of SiO2 NPs at different sample positions.
- Finished preparations for tomorrow.
 

13   05 Feb 2024, 23:10   Jörg Hallmann   MID   Shift summary

check of the optimized focussing options at 10 keV using only one arm in CRL2 - results still to be evaluated
successful check of the mirror chamber in the MID experiment hutch - it was possible to reflect the direct beam down and use it at a shallow angle at the sample position

40   29 Feb 2024, 02:41   Jörg Hallmann   MID   Shift summary

Stable seeding with 1mJ + 400mJ SASE
HIREX triggered an interlock when all attenuators were removed
successful test of nano focusing setup
successful test and alignment of x-ray microscopes
Alignment of split and delay line
Laser x-ray timing at sample position

42   01 Mar 2024, 06:30   Jörg Hallmann   MID   Shift summary

Shift summary (Feb 29th):
Stable seeding with 1mJ + 400mJ SASE
Alignment of Split and Delay Line
successful test of the nano focusing setup together with the SDL lower branch
optimisation of the x-ray microscope
timing with the SDL was not successful at the first attempt: we found some clipping of the beam at the DPS and the diamond window
first test of laser irradiation on a sample to test the pump conditions for the user experiment
-> found time zero at the end, at 1100 ps.

88   25 Apr 2024, 01:50   Jörg Hallmann   MID   Shift summary

instrument alignment including NaFos, DES spectrometer, diamond detector, PUMA & test samples
seeding makes a good impression based on the DES spectrometer

90   27 Apr 2024, 02:11   Jörg Hallmann   MID   Shift summary
  • optimized the FEL energy on the absorption edge of Iridium
  • measurements on the SDW of Chromium with max magnetic discharges of the PUMA at different attenuations
  • changed the sample from Chromium to user samples

issues:

  • energy scan not working efficiently
  • poor seeding quality in long FEL trains
    ---> both issues were finally solved by the machine
103   13 May 2024, 08:48   Ulrike Boesenberg   MID   Shift summary

Weekend summary:

Overall stable beam delivery with 300 bunches @ 18 keV and 750 uJ

- issue with the XGM on Saturday night/morning

- The intensity in SA2 over the train drops off when SA3 changes some parameters. BKR can fix it within a couple of minutes.

- The users found suitable working conditions to overcome the limitations from the 10 bar He supply line.

125   13 Jun 2024, 01:46   Jörg Hallmann   MID   Shift summary

alignment of the instrument including collimation and beam shaping
test of the tempus detector using standard samples with different rep rates, intensities & # of pulses
start commissioning of the PhotonArrivalMonitor (temporal & spatial overlap, white light spectrum)

126   14 Jun 2024, 01:53   Jörg Hallmann   MID   Shift summary

continue with Tempus tests
continue with PAM commissioning (first results of the working device at MID)

comments:
changed to 10 keV in order to use only one focussing lens in CRL2 and generate a smaller focus

issues:
vacuum issues in the MPC (reason: open roughing valve)
FSSS (Hall sensor broken - exchange of the entire device planned for tomorrow)

139   08 Aug 2024, 00:18   Jörg Hallmann   MID   Shift summary

optimization of seeding (still not as good as needed)

set position for the two-color mode (difference 3-4 eV)

timing of the direct beam with diode and laser
timing of the direct beam with YAG and laser
alignment of the lower branch of the SDL
timing of the lower branch beam with YAG and laser

issues:
intensity and spectral purity of the seeded beam not really good
alignment of the mirrors has been tricky (previous values did not work)
 

140   09 Aug 2024, 01:59   Jörg Hallmann   MID   Shift summary

optimized the SDL alignment

found timing of the upper (-520 ps) and lower branch (-514.5 ps) of the SDL

moved the upper branch by 3 ps (positive) and lost the beam on the merger crystal - difficult to get it back

141   10 Aug 2024, 01:17   Jörg Hallmann   MID   Shift summary

Found the delay times of the upper and lower branches of the SDL with a frosted YAG.
The upper branch is at -512.8 ps and the lower branch at -513.28 ps.
The delay time difference is about 500 fs.
The MID SDL delay tuner should be investigated further.
The next step is to find the spatial overlap on the Suna microscope and measure the fringes.
 

144   11 Aug 2024, 02:15   Wonhyuk Jo   MID   Shift summary

Found the spatial overlap on the Suna microscope with the Zyla.
Found the SDL motor position giving a 200 fs time difference between the upper and lower branches.
The encoder and current values of DC2_CASZ and DC1_CASZ are not 100% correlated.
The SDL delay scan using the ScanTool has an issue due to incorrect motor movement.
A manual delay scan was attempted but was not successful.
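
For context on what such a manual scan involves, here is a minimal sketch of the loop: step the delay axis, wait, read back a signal. move_motor() and read_signal() are hypothetical placeholders (not the actual ScanTool or Karabo API), and the axis name and positions used in the example are purely illustrative.

```python
# Illustrative manual delay scan (not the actual ScanTool): step a delay axis
# over a range of positions and record a monitor signal at each point.
# move_motor() and read_signal() are hypothetical placeholders for the real
# control-system calls; axis name and positions are example values only.
import time

def move_motor(axis, position):
    """Hypothetical stand-in for an absolute motor move."""
    print(f"moving {axis} to {position:+.4f}")

def read_signal():
    """Hypothetical stand-in for reading the monitored signal (e.g. fringe contrast)."""
    return 0.0

def manual_delay_scan(axis, start, stop, steps, settle_s=1.0):
    """Step 'axis' from start to stop in 'steps' points and collect (pos, signal)."""
    step_size = (stop - start) / (steps - 1)   # assumes steps >= 2
    results = []
    for i in range(steps):
        pos = start + i * step_size
        move_motor(axis, pos)
        time.sleep(settle_s)                   # let motor and readout settle
        results.append((pos, read_signal()))
    return results

if __name__ == "__main__":
    for pos, sig in manual_delay_scan("DC2_CASZ", start=-0.05, stop=0.05, steps=11):
        print(f"{pos:+.4f} -> {sig:.3f}")
```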
 
