OPERATION-2024 logbook. Contact: F.Wolff-Fabris, A.Galler, H.Sinn
Entry   Issue, posted by Jan Gruenert on 25 Sep 2024, 09:47 JG_2024-09-29_um_12.06.37.png

Issue with SA3 Gas attenuator (SA3_XTD10_GATT)

9h06: SQS informs PRC that the Argon line in GATT is not reacting to pressure change commands, so no GATT operation is possible; GATT is required to reduce pulse energies for alignment. VAC-OCD is informed.

ZZ access is required to check hardware in the tunnel XTD10.
SA1 / SPB is working on other problems in their hutch (water leaks) and is not taking beam.
SA3 / SQS will work with the optical laser.
SA2 / MID will not be affected: there is also an issue in their hutch, MID currently takes no beam into the hutch, and it would anyhow be affected only briefly while the beamline branches are decoupled.

9h45: BKR decouples the North Branch (SA1/SA3) to enable ZZ access for the VAC colleagues.

ZZ access of VAC colleagues.

10h12: The problem is solved. A manual intervention in the tunnel was indeed necessary. VAC colleagues are leaving the tunnel.

  • Beam down SASE1: 9h43 until 10h45 (1 hour no beam), tuning done at 11h45, then 1.7mJ at 6keV single bunch.
  • Beam down SASE3: 9h43 until 10h56 (1.25 hours no beam). Before intervention 4.2mJ average with 200 bunches, afterwards 2.5mJ at 400 eV / 3nm single bunch.
Entry   Issue, posted by Raúl Villanueva-Guerrero on 25 Sep 2024, 10:12

Issue with SA3 Gas attenuator (SA3_XTD10_GATT) - SOLVED

Dear all,

After the issue reported by SQS colleagues regarding the impossibility to inject Argon, and after a first remote evaluation, we (VAC) entered XTD10 to assess the status and verify the most probable working hypothesis: a manual valve in the supply line was closed (contrary to expectation). Joshua and Benoit confirmed it and the showstopper is now gone. The system is back to normal and standard operation can be resumed at any moment.

A small remark: as Argon (or any gas other than Nitrogen) is not the default operation mode, it should, if needed, be requested explicitly during the preparation of the operation tables with XO. This would help to avoid this kind of situation; sending a short-notice e-mail just before the next run is clearly not recommended.

With best regards.

Entry   Issue, posted by Chan Kim on 25 Sep 2024, 16:53 Screenshot_from_2024-09-25_16-50-57.png Screenshot_from_2024-09-25_16-51-21.png

600um thick CVD diamond (XTD2 attenuator) seems damaged

 

Attachment 1: 600um thick CVD diamond (Arm 4) OUT

Attachment 2: 600um thick CVD diamond (Arm 4) IN

Entry   Issue, posted by Jan Gruenert on 29 Sep 2024, 11:42 

A2 failure - downtime

The machine / RC informs (11h22) that there is an issue with the accelerator module A2 (first module in L1, right after the injector).
This causes a downtime of the accelerator of 1 hour (from 10h51 to 11h51), affecting all photon beamlines.

Entry   Issue, posted by Naresh Kujala on 07 Oct 2024, 15:15 

14:40hr

FXE and HED reported a laser synchronization issue. The machine RC was informed.

The LbSync team couldn't fix the issue remotely and is on the way to Schenefeld to fix it on site.

Naresh 

PRC wk 41

Entry   Issue, posted by Frederik Wolff-Fabris on 12 Nov 2024, 10:07 Screenshot_2024-11-12_100032.png
- While the FEL IMG with YAG screen was inserted in SA1, EPS/MPS was not limiting to the expected 2 bunches, but to 30 bunches;
- It was found that condition #3 ("SA1_XTD2_VAC/Switch/SA1_PNACT_Close") of the EPS Mode-S was disabled, although it should always be enabled;
- Condition #3 has been enabled again and SA1 is now limited to 2 bunches, as expected when the YAG screen is inserted.

Further investigations will be done to find why and when it was disabled. 
Entry   Issue, posted by Frederik Wolff-Fabris on 15 Nov 2024, 10:50 

The SA2 cell #5 undulator segment moved to 25mm at ~09:45AM Friday without apparent reason, and the undulator experts need a ZZ to exchange the power supply; FEL intensity dropped from 230uJ to 60uJ as a consequence.

RCs are preparing the south branch for the imminent ZZ. HED is informed.

Update: ZZ finished by 12:10; performance recovered and delivery re-started by 13:30.

Entry   Issue, posted by Jan Gruenert on 23 Nov 2024, 00:36 JG_2024-11-23_um_00.47.04.png

20h16: beam down in all beamlines due to a gun problem
20h50: x-ray beams are back
Shortly afterwards another 5 min down (20h55 - 21h00) for all beamlines, then the gun issue is solved.

After that, SA1 and SA3 are back to about the same performance, but SA2 HXRSS is down from 130 uJ before to 100 uJ after the downtime.
Additionally, MID informs at 21h07 that no X-rays at all arrive at their hutch anymore. Investigation with PRC.

At 21h50, all issues are mostly settled, MID has beam, normal operation continues.
Further HXRSS tuning and preparation for limited range photon energy scanning will happen in the night.

Entry   Issue, posted by Jan Gruenert on 23 Nov 2024, 00:48 

Broken Bridges

Around 21h20 the XPD group notices that several karabo-to-DOOCS bridges are broken.
The immediate effect is that in DOOCS all XGMs in all beamlines show warnings
(because they no longer receive important status information from karabo, for instance the insertion status of the graphite filters required at this photon energy).

After checking with DOC, who confirmed that there were no issues on the karabo side, PRC contacts Tim Wilksen / DESY to repair these bridges.
Tim finds that ALL bridges are down. The root cause is a failing computer that hosts all these bridges, see his logbook entry:
https://ttfinfo.desy.de/XFELelog/show.jsp?dir=/2024/47/22.11_a&pos=2024-11-22T21:47:00
That also explains why HIREX data was no longer available in DOOCS for the accelerator team tuning SA2-HXRSS.

The issue was resolved by Tim and his colleagues at 21h56 and had persisted since 20h27 when that computer crashed (thus 1.5h downtime of the bridges).
Tim will organize with BKR to put a status indicator for this important computer hosting all bridges on a BKR diagnostic panel.
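
Until such an indicator exists, the same check can be scripted. Below is a minimal watchdog sketch, assuming pydoocs (DESY's Python DOOCS client) is available and that each bridge keeps at least one DOOCS property freshly timestamped; the addresses are hypothetical placeholders, not the real bridge properties.

    import time
    import pydoocs  # DESY Python DOOCS client (assumed available)

    # HYPOTHETICAL placeholders: one frequently-updated property per bridge
    BRIDGE_PROPERTIES = [
        "XFEL.FEL/XGM/SA1.XGM/STATUS",
        "XFEL.FEL/XGM/SA2.XGM/STATUS",
        "XFEL.FEL/XGM/SA3.XGM/STATUS",
    ]
    MAX_AGE_S = 60  # flag a bridge whose data is older than this

    def stale_bridges():
        """Return addresses whose data is stale or unreachable."""
        dead = []
        for addr in BRIDGE_PROPERTIES:
            try:
                reading = pydoocs.read(addr)  # dict incl. a 'timestamp' field
                if time.time() - reading["timestamp"] > MAX_AGE_S:
                    dead.append(addr)
            except Exception:
                dead.append(addr)  # bridge host down -> property unreachable
        return dead

    for addr in stale_bridges():
        print("WARNING: bridge data stale or unreachable:", addr)

Run periodically (e.g. from cron), this would have flagged all bridges at 20h27 when the host computer crashed.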

Entry   Issue, posted by Jan Gruenert on 23 Nov 2024, 10:23 JG_2024-11-23_um_10.22.43.png

Broken Radiation Interlock

Beam was down for all beamlines due to a broken radiation interlock in the SPB/SFX experimental hutch.
The interlock was broken accidentally with the shutter open, see https://in.xfel.eu/elog/OPERATION-2024/203
Beam was down for 42 min (from 3h00 to 3h42) but the machine came back with the same performance.

Entry   Issue, posted by Jan Gruenert on 23 Nov 2024, 10:27 JG_2024-11-23_um_12.06.35.png JG_2024-11-23_um_12.10.23.png JG_2024-11-23_um_14.33.06.png

SA1 Beckhoff issue

SA1 blocked to max. 2 bunches and SA3 limited to max. 29 bunches.
At 9h46, SA1 goes down to single bunch. PRC is called because SA3 (SCS) cannot get the required >300 bunches anymore (at 9h36 SA3 was limited to 29 bunches/train).

FXE, SCS, DOC are informed, DOC and PRC identify that this is a SA1 Beckhoff problem.
Many MDLs in SA1 are red across the board, e.g. SRA, FLT, mirror motors. The actual hardware state cannot be known and no movement is possible.
Therefore, EEE-OCD is called in. They check and identify out-of-order PLC crates of the SASE1 EPS and MOV loops.
The vacuum system and VAC PLC seem unaffected.

EEE-OCD needs access to tunnel XTD2. It could be a fast access if it is only a blown fuse in an EPS crate.
This will however require decoupling the North branch and stopping beam operation for SA1 and SA3 entirely.
In any case, SCS confirms to PRC that 29 bunches are not enough for them, and FXE also cannot go on with only 1 bunch; effectively it is a downtime for both instruments.

Additional oddity: there is still one pulse per train delivered to SA1 / FXE, but there is no pulse energy in it! The XGM detects one bunch but with <10uJ.
Unclear why; PRC, FXE, and BKR are looking into it until EEE goes into the tunnel.

In order not to ground the magnets in XTD2, a person from MPC / DESY has to accompany the EEE-OCD person.
This is organized and they will meet at XHE3 to enter XTD2 from there.

VAC-OCD is also aware, checking their systems, and on standby to come along to the tunnel if required.
For the moment it looks that the SA1 vacuum system and its Beckhoff controls are ok and not affected.

11h22: SA3 beam delivery is stopped.
In preparation for the access the North branch is decoupled. SA2 is not affected, normal operation.

11h45: Details on SA1 control status. The following (Beckhoff-related) devices are in ERROR:

  • SA1_XTD2_VAC/SWITCH/SA1_PNACT_CLOSE and SA1_XTD2_VAC/SWITCH/SA1_PNACT_OPEN (this limits bunch numbers in SA1 and SA3 by EPS interlock)
  • SA1_XTD2_MIRR-1/MOTOR/HMTX and HMTY and HMRY, but not HMRX and HMRZ
  • SA1_XTD2_MIRR-2/MOTOR/* (all of them)
  • SA1_XTD2_FLT/MOTOR/
  • SA1_XTD2_IMGTR/SWITCH/*
  • SA1_XTD2_PSLIT/TSENS/* but not SA1_XTD2_PSLIT/MOTOR/*
  • more ...

12h03 Actual physical access to XTD2 has begun (team entering).

12h25: the EPS loop errors are resolved, FLT, SRA, IMGTR, PNACT all ok. Motors still in error.

12h27: the MOTORs are now also ok.

The root causes and failures will be described in detail by the EEE experts, here only in brief:
Two PLC crates lost communication to the PLC system. Fuses were ok. These crates had to be power cycled locally.
Now the communication is re-established, and the devices on EPS as well as MOV loop have recovered and are out of ERROR.

12h35: PRC and DOC checked the previously affected SA1 devices and all looks good, team is leaving the tunnel.

13h00: Another (or still the same) problem: FLT motor issues (related to EPS). This component is now blocking EPS. EEE and the device experts are working on it in the control system. They find that they again need to go to the tunnel.

13h40 EEE-OCD is at the SA1 XTD2 rack 1.01 and it smells burnt. Checking fuses and power supplies of the Beckhoff crates.

14h: The SRA motors and the FLT both depend on this Beckhoff control.
EEE decides to remove the defective Beckhoff motor terminal because voltage is still being delivered and there is a danger that it will start burning.

With the help of the colleagues in the tunnel and the device expert, we manually move the FLT manipulator to the graphite position;
given the photon energies in the operation plan, it can stay there until the WMP.

At the same time we manually check the SRA slits; the motors run ok. However, removing that Beckhoff terminal also disables the SRA slits.
A replacement would require a spare controller from the labs, so in the interest of returning to operation we decide to move on without being able to move the SRA slits.

14h22 Beam is back on. We immediately test that SA3 can go to 100 bunches - OK. At 14h28 they go to 384 bunches - OK. Handover to SCS team.

14h30 Realignment of the SA1 beam. When inserting IMGFEL to check whether the SRA slits are clipping the beam, tests with the beamline attenuators and the other scintillators reveal a damage spot on the YAG! See the next logbook entry and ATT#3. The damage is not on the graphite filter or on the attenuators, nor in the beam itself as we initially suspected.
The OCD colleagues from DESY-MPC and EEE are released.
The SRA slits are not clipping now. The beam is aligned to the handover position by RC/BKR. Beam handover to FXE team around 15h.

Entry   Issue, posted by Jan Gruenert on 23 Nov 2024, 15:11 JG_2024-11-23_um_14.33.06.png JG_2024-11-23_um_14.40.23.png JG_2024-11-23_um_15.14.33.png

Damage on YAG screen of SA1_XTD2_IMGFEL

A damage spot was detected during the restart of SA1 beam operation at 14h30.

To confirm that it is not something in the beam itself or damage on another component (graphite filter, attenuators, ...), the imager is switched from the YAG to another scintillator.

ATT#1: shows the damage spot on the YAG (lower left dark spot, the upper right dark spot is the actual beam). Several attenuators are inserted.
ATT#2: shows the YAG just after it is moved away a bit (on the way to another scintillator, with the beam still on the YAG): only the beam spot is now seen. Attenuators unchanged.
ATT#3: shows the situation with BN scintillator (and all attenuators removed).

The XPD experts will be informed about this newly detected damage so that they can take care of it.

Entry   Issue, posted by Jan Gruenert on 24 Nov 2024, 13:47 

Beckhoff communication loss SA1-EPS

12h19: BKR informs PRC about a limitation to 30 bunches in total for the north branch.
The property "safety_CRL_SA1" is limiting in MPS.
DOC and EEE-OCD and PRC work on the issue.

Just before this, DOC had been called and informed EEE-OCD about a problem with the SA1-EPS loop (many devices red), which was very quickly resolved.
However, "safety_CRL_SA1" had remained limiting, and SA3 could only do 29 bunches but needed 384 bunches.

EEE-OCD together with PRC followed the procedure outlined in the radiation protection document 
Procedures: Troubleshooting SEPS interlocks
https://docs.xfel.eu/share/page/site/radiation-protection/document-details?nodeRef=workspace://SpacesStore/6d7374eb-f0e6-426d-a804-1cbc8a2cfddb

The device SA1_XTD2_CRL/DCTRL/ALL_LENSES_OUT was in OFF state although PRC and EEE confirmed that it should be ON, an after-effect of the EPS-loop communication loss.
EEE-OCD had to restart / reboot this device at the Beckhoff PLC level (no configuration changes); it then correctly loaded the configuration values and came back in ON state.
This resolved the limitation in "safety_CRL_SA1".

All beamlines are operating normally. The intervention ended around 13h30.

Entry   Issue, posted by Jan Gruenert on 24 Nov 2024, 14:09 JG_2024-11-24_um_16.54.02.png

Beckhoff communication loss SA1-EPS (again)

14h00 Same issue as before, but now EEE cannot clear the problem anymore remotely.
An access is required immediately to repair the SA1-EPS loop to regain communication.
EEE-OCD and VAC-OCD are coming in for the access and repair.

 

PRC update at 16h30 on this issue

One person each from EEE-OCD, VAC-OCD, and the DESY-MPC shift crew made a ZZ access to XTD2 to resolve the imminent Beckhoff EPS-loop issue.
The communication coupler for the fiber-to-EtherCAT connection of the EPS loop was replaced in SA1 rack 1.01.
Also, a new motor power terminal (replacing the one which had been smoky and was removed yesterday) was inserted to close the loop and regain redundancy.
All fuses were checked. However, the functionality of the motor controller in the MOV loop could not yet be repaired, thus another ZZ access
after the end of this beam delivery (e.g. next Tuesday) is required to make the SRA slits and FLT movable again. EEE-OCD will submit the request.

Moving the FLT is not crucial now, and given the planned photon energies shown in the operation plan, it can wait for repair until the WMP.
Moving the SA1 SRA slits is also not critical at this very moment (until Tuesday), but it will be needed for further beam delivery in the next weeks.

Conclusions: 

In total, this afternoon SA1+SA3 were limited to max. 30 bunches for about 3 hours.
The ZZ-access lasted less than one hour, and SA1/3 thus didn't have any beam from 15h18 to 16h12 (54 min).
Beam operation SA1 and SA3 is now fully restored, SCS and FXE experiments are ongoing.
The SA1-EPS-loop communication failure is cured and should thus not come back again.
At 16h30, PRC and SCS made a test that 384 bunches can be delivered to SA3 / SCS. OK.
Big thanks to all colleagues who were involved to resolve this issue: EEE-OCD, VAC-OCD, DOC, BKR, RC, PRC, XPD, DESY-MPC

 

Timeline + statistics of safety_CRL_SA1 interlockings (see ATT#1)
(these are essentially the times during which SA1+SA3 together could not get more than 30 bunches)

12h16 interlock NOT OK
13h03 interlock OK
duration 47 min

13h39 interlock NOT OK
13h42 interlock OK
duration 3 min

13h51 interlock NOT OK
13h53 interlock OK
duration 2 min

13h42 interlock NOT OK
15h45 interlock OK
duration 2h 3min

SA3 had beam with 29 or 30 pulses until the start of the ZZ, while SA1 had already lost beam for good at 13h48, until 16h12 (2h 24 min).
SA3/SCS needed 386 pulses, which they had until 11h55; this afternoon they only got 386 pulses/train from 13h04-13h34, from 13h42-13h47, and after the ZZ.
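
As a cross-check of the totals above, the NOT-OK intervals from the timeline can be summed directly; a minimal Python sketch using the times listed above:

    from datetime import datetime

    # (start, end) pairs of "interlock NOT OK" periods, from the timeline above
    intervals = [("12:16", "13:03"), ("13:39", "13:42"),
                 ("13:51", "13:53"), ("13:42", "15:45")]

    fmt = "%H:%M"
    total_min = sum(
        (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).seconds // 60
        for start, end in intervals
    )
    print(f"total limited time: {total_min} min (~{total_min / 60:.1f} h)")
    # -> 175 min, ~2.9 h, i.e. the "about 3 hours" quoted in the conclusions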

Entry   Issue, posted by Jan Gruenert on 05 Dec 2024, 07:38 JG_2024-12-05_um_07.30.17.png

Accelerator module issue

7h20: BKR/RC inform that there was an issue with the A12 klystron filament, which had caused a short downtime (6h15-6h22); A12 was taken off beam.
Beam was recovered in all beamlines, but the pulse energy came back lower, especially in SA1 (before 920uJ, after about 500 uJ), and with higher fluctuations.

All experiments were contacted to find a convenient time for the required 15-30min without beam to recover A12, and all agreed to perform this task immediately.
The intervention actually lasted only a few minutes (7h38-7h42) and restored all intensities to the original levels (averages over 5min):

  • SA1 (18 keV 1 bunch): MEAN=943.8 uJ, SD=56.08 uJ
  • SA2 (17 keV 128 bunches): MEAN=1071 uJ, SD=148.3 uJ
  • SA3 (1 keV 410 bunches): MEAN=347.5 uJ, SD=78.06 uJ

The issue seems resolved at 7h42.
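
For reference, the MEAN/SD values quoted above are plain statistics over the per-pulse XGM energies of a 5-minute window; a minimal sketch, with a made-up array standing in for the real XGM record:

    import numpy as np

    # stand-in for ~5 minutes of per-train XGM pulse energies in uJ (dummy data,
    # drawn here around the SA1 values quoted above)
    energies_uJ = np.random.normal(943.8, 56.08, size=3000)

    print(f"MEAN={energies_uJ.mean():.1f} uJ, SD={energies_uJ.std():.2f} uJ")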

Entry   Issue, posted by Johannes Moeller on 05 Dec 2024, 09:18 

MID operation is limited to 55 bunches by SA2_XTD1_OPTICS_PROTECTION/MDL/ATT1 because of the inserted 0.5mm Si. This seems a bit too tight for 1mJ @ 17keV, since diamond is not so useful at this photon energy anymore.

Entry   Issue, posted by Jan Gruenert on 05 Dec 2024, 15:58 

additional comment to MESSAGE ID: 217

PRC was informed by MID at 8h50 about a newly appeared limitation to about 60 pulses/train (MID previously operated at 120 pulses/train).
It turned out that this limitation came from the protection of the SA2_XTD1_ATT attenuator, in particular the Si attenuator.
MID had probably not noticed this before because, after A12 was brought back online, the pulse energy for MID was a bit higher and MID changed the attenuation settings.

MID could continue working with the full required 120 pulses / train by inserting more diamond attenuators in XTD1 instead of Silicon and using additional attenuators downstream.

Entry   HED 6keV after Euler MDL, posted by Antje Trapp on 19 May 2024, 15:20 Screenshot_from_2024-05-19_15-18-51.png
 
Entry   Experiment summary, posted by Naresh Kujala on 28 Jan 2024, 19:55 Screenshot_from_2024-01-28_20-42-59.png

X-ray delivery

  • 0.25nC, 14 GeV, rep rate 2.25 MHz, 9300 eV, 1 to 100 pulses, ~2000 uJ
  • M1 and M2 mirrors are aligned and the beam trajectory is set to SPB.

Goal

  • Commissioning of the Gotthard-II 25um MHz detector on SA1-HIREX.

Achievements

  • Aligned HIREX using the bent diamond crystal C110; no gratings used.
  • 2D camera (Bragg / Photonic Science) aligned and spectral data collected.
  • Trigger delay scan of the Gotthard-II detector. The best trigger delay setting was found at 1952637.
  • Calibration pipeline is established and can load/inject the constants.
  • Online preview of Gotthard-II corrected data is successfully tested and commissioned.
  • Successfully loaded calibration constants and dark runs in myMDC.
  • Successfully started calibration of data after completion of runs in myMDC.
  • Calibration/corrected and dark data reports are successfully generated in the proposals.
  • New filter stage commissioned and tested. Updated the positions in the MDL.
  • Intensity scan output for strips 2150 and 2151, obtained by varying the SA1 diamond attenuators. I_0 is calculated by integrating the output from strip 2000 to strip 2300 (see the sketch after this list).
  • Different gain setting data was collected.
  • Gotthard-II tested and commissioned with different rep. rates: 4.5 MHz, 2.25 MHz, 1.125 MHz, 0.5 MHz. The test was successful and we can record spectra at the different rep. rates.
  • There are no visible artifacts around the gain switching region, and the correction seems to work well.
  • Spectral data collected at 2.25 MHz with 2, 30 and 100 pulses.
  • Attachment 1: First results from Gotthard-II with spectral data at MHz rep rate.
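
A minimal sketch of the I_0 normalization mentioned in the list above, assuming the corrected Gotthard-II output arrives as a 1D array with one value per strip (dummy data below; 2560 strips is an assumption for the 25um detector):

    import numpy as np

    def i0(spectrum):
        """Integrate the strip output over strips 2000..2300 to get I_0."""
        return float(np.asarray(spectrum)[2000:2301].sum())

    # dummy spectrum standing in for one corrected Gotthard-II readout
    spectrum = np.random.poisson(100, size=2560).astype(float)

    # per-strip intensities normalized by I_0, e.g. for strips 2150 and 2151
    print(spectrum[2150] / i0(spectrum), spectrum[2151] / i0(spectrum))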

Issues:

  • The calibration pipeline for pulses/train (ASSEMBLE_STREAK) could not be instantiated. DRC and the DA OCD group helped to fix it (thank you).
  • Ad-hoc data sources were added in SA1 COMM with support from DRC and CTRL (thank you).
  • Beam lost at ~12:30hr today due to magnet issues, and no beam since then. Unable to continue the rest of the program planned for the commissioning of the Gotthard-II.

 

Thanks to Marco, Jiaghuo, DRC (Monica), CTRL OCD (Andrea), DA OCD (Karim) for their support and help.

 

Entry   Experiment summary, posted by Konopkova Zuzana on 13 Feb 2024, 08:55 

* Achievements:
- Generally rather successful, with very high quality beam parameters.
- We had a nearly steady 800 uJ+ HXRSS flux during the user run and ~4mJ SASE during the setup at HED.
- Beam pointing was rather stable throughout the HXRSS operation.
- Users took several shots (>2600 FEL and FEL+ReLAX shots) on target. The beamline setup was very quick prior to that, and users were able to get preliminary data at prototype parameters already a day before the formal handover, so they could go straight into harvest mode on Saturday morning.
-----------------------------------
* Issues:
- Users still found large Bremsstrahlung fluxes in their background, but this is rather an issue on our side, to be addressed in conjunction with the users.
- On the last day, we had issues with the JF calibrations due to the high run numbers in the proposal. DOC promptly helped out.
- Initially, we observed fluctuations in the FEL flux. BKR was very prompt to correct for this and user operation was not seriously impacted. Slow drifts in pulse energy were also observed, but again could be compensated by them from time to time.
-----------------------------------
