OPERATION-2024 logbook, Contact: F.Wolff-Fabris, A.Galler, H.Sinn
ID | Date | Author | Group | Subject
110 | 25 May 2024, 02:38 | Rebecca Boll | SQS | Issue

This afternoon, SASE1 was tuned (a description of the activities is in attachment 4). We now realize that this has somehow changed the SQS pulse parameters.

Attachment 1 shows ion spectra recorded under nominally the same photon parameters before (run 177) and after (run 247) the tuning.

It is also visible in the history of the SASE viewer that the behavior of the SASE3 pulse energy is different before and after 15:45, which is exactly when SASE1 was tuned; it clearly started fluctuating more. However, we don't think that the pulse energy fluctuations themselves are the reason for the large change in our data; we rather suspect a change in the pulse duration and/or the source point. Unfortunately, we have no means to characterize this now, after the fact.

It is particularly unfortunate that the tuning happened exactly BEFORE we started recording a set of reference spectra on the SASE3 spectrometer. These were supposed to serve as a calibration for the spectral correlation analysis to determine the pulse duration, as well as a training data set for using the PES with machine learning as a virtual spectrometer for the large data set taken in the preceding shifts.

Attachment 1: Run_177_vs_247.png
Attachment 2: Screenshot_from_2024-05-25_00-44-46.png
Attachment 3: Screenshot_from_2024-05-25_01-03-29.png
Attachment 4: Screenshot_from_2024-05-25_01-36-54.png
131 | 24 Jul 2024, 10:26 | Harald Sinn | PRC | Issue

In SASE2, undulator cell 10 has issues. Suren K. will do a ZZ access at 11:30 to fix it (if possible). Estimated duration: 1-3 hours.

Update 12:30: The problem is more severe than anticipated. Suren needs to get a replacement driver from the lab and then go back into the tunnel at about 14:00 

In principle, the beam could be used by HED until 14:00.

Update 13:30: Beam was not put back, because energizing and de-energizing the magnets would take too much time. Whenever Suren is ready, he will go back into XTD1 and try to fix the problem. 

Update 17:30: Suren and Mikhail were able to complete the repair and have now finished the ZZ access. A hardware piece (motor driver) had to be exchanged, and it turned out that the Beckhoff software on it was outdated. Getting the required feedback from Beckhoff took considerable time, but now everything looks good.

132 | 24 Jul 2024, 11:35 | Harald Sinn | PRC | Issue

The control rack of the soft mono at SASE3 has a failure. Leandro from EEE will enter XTD10 at 12:00.

Beam will be interrupted for SASE1 and SASE3.

The estimated time to fix it (if we are lucky) is one hour.

Update 12:30: The problem is solved (a fuse was broken and has been replaced, motors tested).

Update 13:30: The beam is still not back because BKR has problems with their sequencer.

134 | 27 Jul 2024, 09:10 | Harald Sinn | PRC | Issue

SASE1: Solid attenuator att5 is in error. When trying to move it out, the arm lifts but does not reach the final position before the software time-out. We increased the software time-out from 4 seconds to 7 seconds, which didn't help. We suspect a mechanical problem that prevents the att5 actuator from reaching the OUT position.
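For illustration only (this is not the actual att5 control code; the function and parameter names below are hypothetical): a move with a software time-out typically just polls the reported position until the time-out expires, which is why raising the time-out from 4 s to 7 s cannot help if the arm is mechanically blocked short of the OUT switch.

import time

def move_out(start_move, read_position, target="OUT", timeout_s=7.0, poll_s=0.1):
    # Command the arm out, then wait up to timeout_s for it to report the target.
    start_move(target)
    t0 = time.monotonic()
    while time.monotonic() - t0 < timeout_s:
        if read_position() == target:
            return True   # arm reached the OUT position in time
        time.sleep(poll_s)
    return False          # software time-out: arm lifted but never reported OUT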

SPB states that it is ok to leave the attenuator arm in over the weekend, because they are mostly running detector tests and don't require full intensity.

142 | 10 Aug 2024, 07:19 | Romain Letrun | SPB | Issue

The SASE1 HIREX interlock is not working as expected.

At the 'OUT' position (step 2 in the HIREX scene), and while none of the motors are moving, the interlock SA1_XTD9_HIREX/VDCTRL/HIREX_MOVING is triggered, which prevents the use of more than two pulses.

SA1_XTD9_HIREX/VDCTRL/HIREX_MOVING remains in INTERLOCKED state while moving HIREX in (expected behaviour) and finally goes to ON state when HIREX is inserted, thus releasing the MODES and MODEM interlocks and allowing normal beam operation.
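A minimal sketch of the expected behaviour described above (illustrative only, not the real VDCTRL implementation; the states are simplified to two values):

from enum import Enum

class HirexMoving(Enum):
    ON = "ON"                    # interlock released, normal beam operation
    INTERLOCKED = "INTERLOCKED"  # MODES/MODEM held, bunch number limited

def expected_hirex_moving(position: str, any_motor_moving: bool) -> HirexMoving:
    # Expectation: only limit the bunch number while HIREX is actually moving;
    # at rest, both 'IN' and 'OUT' should release the interlock.
    if any_motor_moving:
        return HirexMoving.INTERLOCKED
    return HirexMoving.ON

# Observed fault on 2024-08-10: position 'OUT', no motors moving, yet the device
# reported INTERLOCKED instead of ON, limiting operation to two pulses.
assert expected_hirex_moving("OUT", any_motor_moving=False) is HirexMoving.ON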

Attachment 1: Screenshot_from_2024-08-10_07-17-38.png
159 | 08 Sep 2024, 20:30 | Peter Zalden | FXE | Issue

We observe a continuous horizontal beam drift in SA1 after the M1 incident. It does not seem to converge, so one can wonder where this will go...

The attached screenshot shows the M2 piezo actuator RY that keeps the beam at the same position on the M3 PBLM; these are about 130 m apart. The drift is 1 V per 12 hours, and 0.01 V corresponds to a beam motion of 35 um, so the drift velocity is about 0.3 mm/hour. In the past days this has at least once led to strong leakage around M2, but the two feedbacks in the FXE path are keeping the beam usable most of the time.
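As a quick cross-check of the quoted drift velocity (plain Python, using only the numbers stated above):

volts_per_hour = 1.0 / 12.0    # RY drift: 1 V per 12 hours
um_per_volt = 35.0 / 0.01      # 0.01 V corresponds to 35 um of beam motion
drift = volts_per_hour * um_per_volt
print(f"{drift:.0f} um/hour, about {drift / 1000:.1f} mm/hour")  # ~292 um/hour, i.e. ~0.3 mm/hour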

Attachment 1: 2024-09-08-194936_755701995_exflqr51474.png
161 | 09 Sep 2024, 23:19 | Peter Zalden | FXE | Issue

Finally, the drift velocity of the pointing has decreased by approximately a factor of 2 compared to yesterday, see attached. Possibly it will still converge...

Attachment 1: 2024-09-09-231922_609331430_exflqr51474.png
173 | 25 Sep 2024, 09:47 | Jan Gruenert | PRC | Issue

Issue with SA3 Gas attenuator (SA3_XTD10_GATT)

9h06: SQS informs PRC that the argon line in GATT is not reacting to pressure change commands, so no GATT operation is possible; this is required to reduce the pulse energies for alignment. VAC-OCD is informed.

ZZ access is required to check hardware in the tunnel XTD10.
SA1 / SPB are working on other problems in their hutch (water leaks) and are not taking beam.
SA3 / SQS will work with the optical laser.
SA2 / MID will not be affected: there is also an issue in the hutch and MID is currently not taking beam into the hutch, and would anyhow only be briefly affected during the moment of decoupling the beamline branches.

9h45: BKR decouples the North Branch (SA1/SA3) to enable ZZ access for the VAC colleagues.

ZZ access of VAC colleagues.

10h12: The problem is solved. A manual intervention in the tunnel was indeed necessary. VAC colleagues are leaving the tunnel.

  • Beam down SASE1: 9h43 until 10h45 (1 hour no beam), tuning done at 11h45, then 1.7mJ at 6keV single bunch.
  • Beam down SASE3: 9h43 until 10h56 (1.25 hours no beam). Before intervention 4.2mJ average with 200 bunches, afterwards 2.5mJ at 400 eV / 3nm single bunch.
Attachment 1: JG_2024-09-29_um_12.06.37.png
174 | 25 Sep 2024, 10:12 | Raúl Villanueva-Guerrero | other | Issue

Issue with SA3 Gas attenuator (SA3_XTD10_GATT) - SOLVED

Dear all,

After the issue reported by the SQS colleagues regarding the impossibility of injecting argon, and after a first remote evaluation, we (VAC) entered XTD10 to assess the status and verify the most probable working hypothesis: a manual valve in the supply line was closed (everything else was as expected). Joshua and Benoit confirmed it and the showstopper is now gone. The system is back to normal and standard operation can be resumed at any moment.

A small remark: as argon (or any gas other than nitrogen) is not the default operation mode, it is important that, if needed, an explicit request is made during the preparation of the operation tables with XO. This would help to avoid this kind of situation; sending a short-notice e-mail just before the next run is clearly not recommended.

With best regards.

175 | 25 Sep 2024, 16:53 | Chan Kim | SPB | Issue

600um thick CVD diamond (XTD2 attenuator) seems damaged

 

Attachment 1: 600um thick CVD diamond (Arm 4) OUT

Attachment 2: 600um thick CVD diamond (Arm 4) IN

Attachment 1: Screenshot_from_2024-09-25_16-50-57.png
Attachment 2: Screenshot_from_2024-09-25_16-51-21.png
185 | 29 Sep 2024, 11:42 | Jan Gruenert | PRC | Issue

A2 failure - downtime

The machine / RC informs (11h22) that there is an issue with the accelerator module A2 (first module in L1, right after the injector).
This causes a downtime of the accelerator of 1 hour (from 10h51 to 11h51), affecting all photon beamlines.

191 | 07 Oct 2024, 15:15 | Naresh Kujala | PRC | Issue

14:40hr

FXE and HED reported a laser synchronization issue. The Machine RC was informed.

The LbSync team couldn't fix this issue remotely and is on their way to Schenefeld to fix it on site.

Naresh 

PRC wk 41

199 | 12 Nov 2024, 10:07 | Frederik Wolff-Fabris | PRC | Issue
- While the FEL IMG with YAG screen was inserted in SA1, EPS/MPS was not limited to the expected 2 bunches, but to 30 bunches;
- It was found that condition #3 ("SA1_XTD2_VAC/Switch/SA1_PNACT_Close") of the EPS Mode-S was disabled, although it should always be enabled;
- Condition #3 has been enabled again, and SA1 is now limited to 2 bunches, as expected when the YAG screen is inserted.

Further investigation will be done to find out why and when it was disabled.
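For illustration only (not the actual EPS/MPS implementation; the function name and the no-screen maximum are assumptions), the effect of the disabled condition #3 on the SA1 bunch limit can be sketched as:

def max_bunches_sa1(yag_inserted: bool, condition3_enabled: bool) -> int:
    # Bunch limits as reported in this entry; the no-screen value is illustrative only.
    if yag_inserted and condition3_enabled:
        return 2     # expected protection with the YAG screen inserted
    if yag_inserted:
        return 30    # limit observed while condition #3 was disabled
    return 2700      # assumed nominal maximum without the screen

# Observed: YAG inserted but condition #3 disabled -> EPS allowed 30 bunches.
assert max_bunches_sa1(yag_inserted=True, condition3_enabled=False) == 30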
Attachment 1: Screenshot_2024-11-12_100032.png
200 | 15 Nov 2024, 10:50 | Frederik Wolff-Fabris | PRC | Issue

The SA2 cell #5 undulator segment moved to 25 mm at ~09:45 on Friday without apparent reason, and the undulator experts need a ZZ access to exchange the power supply; the FEL intensity dropped from 230 uJ to 60 uJ as a consequence.

The RCs are preparing the south branch for the imminent ZZ access. HED is informed.

Update: ZZ finished by 12:10; performance recovered and delivery re-started by 13:30.

201 | 23 Nov 2024, 00:36 | Jan Gruenert | PRC | Issue

20h16: beam down in all beamlines due to a gun problem.
20h50: X-ray beams are back.
Shortly after, an additional 5 min of downtime (20h55 - 21h00) for all beamlines; then the gun issue is solved.

After that, SA1 and SA3 are back to about the same performance, but SA2 HXRSS is down from 130 uJ before to 100 uJ after the downtime.
Additionally, MID informs at 21h07 that no X-rays at all arrive in their hutch anymore. Investigation with PRC.

At 21h50, all issues are mostly settled, MID has beam, normal operation continues.
Further HXRSS tuning and preparation for limited range photon energy scanning will happen in the night.

Attachment 1: JG_2024-11-23_um_00.47.04.png
202 | 23 Nov 2024, 00:48 | Jan Gruenert | PRC | Issue

Broken Bridges

Around 21h20 the XPD group notices that several Karabo-to-DOOCS bridges are broken.
The immediate effect is that in DOOCS all XGMs in all beamlines show warnings
(because they no longer receive important status information from Karabo, for instance the insertion status of the graphite filters required at this photon energy).

PRC contacts Tim Wilksen / DESY to repair these bridges, after PRC checked with DOC, who confirmed that there were no issues on the Karabo side.
Tim finds that ALL bridges are down. The root cause is a failing computer that hosts all these bridges, see his logbook entry:
https://ttfinfo.desy.de/XFELelog/show.jsp?dir=/2024/47/22.11_a&pos=2024-11-22T21:47:00
That also explains why HIREX data was no longer available in DOOCS for the accelerator team tuning SA2 HXRSS.

The issue was resolved by Tim and his colleagues at 21h56 and had persisted since 20h27 when that computer crashed (thus 1.5h downtime of the bridges).
Tim will organize with BKR to put a status indicator for this important computer hosting all bridges on a BKR diagnostic panel.

204 | 23 Nov 2024, 10:23 | Jan Gruenert | PRC | Issue

Broken Radiation Interlock

Beam was down for all beamlines due to a broken radiation interlock in the SPB/SFX experimental hutch.
The interlock was broken accidentally with the shutter open, see https://in.xfel.eu/elog/OPERATION-2024/203
Beam was down for 42 min (from 3h00 to 3h42), but the machine came back with the same performance.

Attachment 1: JG_2024-11-23_um_10.22.43.png
205 | 23 Nov 2024, 10:27 | Jan Gruenert | PRC | Issue

SA1 Beckhoff issue

SA1 blocked at max. 2 bunches and SA3 limited to max. 29 bunches.
At 9h46, SA1 goes down to single bunch. PRC is called because SA3 (SCS) cannot get the required >300 bunches anymore (at 9h36 SA3 was limited to 29 bunches/train).

FXE, SCS, and DOC are informed; DOC and PRC identify this as an SA1 Beckhoff problem.
Many MDLs in SA1 are red across the board, e.g. SRA, FLT, and mirror motors. The actual hardware state cannot be known and no movement is possible.
Therefore, EEE-OCD is called in. They check and identify out-of-order PLC crates of the SASE1 EPS and MOV loops.
The vacuum system and VAC PLC seem unaffected.

EEE-OCD needs access to tunnel XTD2. It could be a fast access if it is only a blown fuse in an EPS crate.
This will, however, require decoupling the North branch and stopping all beam operation for SA1 and SA3.
In any case, SCS confirms to PRC that 29 bunches are not enough for them, and FXE also cannot continue with only 1 bunch, so effectively it is downtime for both instruments.

Additional oddity: there is still one pulse per train delivered to SA1 / FXE, but there is no pulse energy in it! The XGM detects one bunch, but with <10uJ.
It is unclear why; PRC, FXE, and BKR are looking into it until EEE goes into the tunnel.

In order not to ground the magnets in XTD2, a person from MPC / DESY has to accompany the EEE-OCD person.
This is organized and they will meet at XHE3 to enter XTD2 from there.

VAC-OCD is also aware, checking their systems, and on standby to accompany to the tunnel if required.
For the moment it looks like the SA1 vacuum system and its Beckhoff controls are ok and not affected.

11h22: SA3 beam delivery is stopped.
In preparation of the access the North branch is decoupled. SA2 is not affected, normal operation.

11h45: Details on the SA1 control status. The following (Beckhoff-related) devices are in ERROR:

  • SA1_XTD2_VAC/SWITCH/SA1_PNACT_CLOSE and SA1_XTD2_VAC/SWITCH/SA1_PNACT_OPEN (this limits bunch numbers in SA1 and SA3 by EPS interlock)
  • SA1_XTD2_MIRR-1/MOTOR/HMTX and HMTY and HMRY, but not HMRX and HMRZ
  • SA1_XTD2_MIRR-2/MOTOR/* (all of them)
  • SA1_XTD2_FLT/MOTOR/
  • SA1_XTD2_IMGTR/SWITCH/*
  • SA1_XTD2_PSLIT/TSENS/* but not SA1_XTD2_PSLIT/MOTOR/*
  • more ...

12h03 Actual physical access to XTD2 has begun (team entering).

12h25: the EPS loop errors are resolved, FLT, SRA, IMGTR, PNACT all ok. Motors still in error.

12h27: the MOTORs are now also ok.

The root causes and failures will be described in detail by the EEE experts, here only in brief:
Two PLC crates lost communication with the PLC system. The fuses were ok. These crates had to be power-cycled locally.
Now the communication is re-established, and the devices on the EPS as well as the MOV loop have recovered and are out of ERROR.

12h35: PRC and DOC checked the previously affected SA1 devices and all looks good, team is leaving the tunnel.

13h00: Another / still ongoing problem: FLT motor issues (related to EPS). This component is now blocking the EPS. EEE and the device experts are working on it in the control system. They find that they need to go into the tunnel again.

13h40 EEE-OCD is at the SA1 XTD2 rack 1.01 and it smells burnt. Checking fuses and power supplies of the Beckhoff crates.

14h: The SRA motors and the FLT both depend on this Beckhoff control.
EEE decides to remove the defective Beckhoff motor terminal because voltage is still being delivered and there is a danger that it will start burning.

With the help of the colleagues in the tunnel and the device expert, we manually move the FLT manipulator to the graphite position;
looking at the photon energies in the operation plan, it can stay there until the WMP.

At the same time we manually check the SRA slits; the motors are running ok. However, removing that Beckhoff terminal also disables the SRA slits.
Repairing this would require a spare controller from the labs, so in the interest of returning to operation we decide to move on without being able to move the SRA slits.

14h22 Beam is back on. Immediately we test that SA3 can go to 100 bunches - OK. At 14h28 they go to 384 bunches. OK. Handover to SCS team.

14h30 Realignment of the SA1 beam. When inserting IMGFEL to check whether the SRA slits are clipping the beam, it is found after some tests with the beamline attenuators and the other scintillators that there is a damage spot on the YAG! See the next logbook entry and ATT#3. It is not on the graphite filter or on the attenuators or in the beam, as we had initially suspected.
The OCD colleagues from DESY-MPC and EEE are released.
The SRA slits are not clipping now. The beam is aligned to the handover position by RC/BKR. Beam handover to FXE team around 15h.

Attachment 1: JG_2024-11-23_um_12.06.35.png
Attachment 2: JG_2024-11-23_um_12.10.23.png
Attachment 3: JG_2024-11-23_um_14.33.06.png
206 | 23 Nov 2024, 15:11 | Jan Gruenert | PRC | Issue

Damage on YAG screen of SA1_XTD2_IMGFEL

A damage spot was detected during the restart of SA1 beam operation at 14h30.

To confirm that it is not something in the beam itself or damage on another component (graphite filter, attenuators, ...), the scintillator is changed from the YAG to another scintillator.

ATT#1: shows the damage spot on the YAG (lower left dark spot, the upper right dark spot is the actual beam). Several attenuators are inserted.
ATT#2: shows the YAG when it is just moved away a bit (while moving to another scintillator, but beam still on YAG): only the beam spot is now seen. Attenuators unchanged.
ATT#3: shows the situation with BN scintillator (and all attenuators removed).

The XPD expert will be informed about this newly detected damage in order to take care of it.

Attachment 1: JG_2024-11-23_um_14.33.06.png
Attachment 2: JG_2024-11-23_um_14.40.23.png
Attachment 3: JG_2024-11-23_um_15.14.33.png
109 | 19 May 2024, 15:20 | Antje Trapp | XRO | HED 6keV after Euler MDL
Attachment 1: Screenshot_from_2024-05-19_15-18-51.png