OPERATION-2024 logbook. Contact: F.Wolff-Fabris, A.Galler, H.Sinn. Page 1 of 11
ID   Date   Author   Group   Subject
  209   24 Nov 2024, 17:23   Jan Gruenert   PRC   Status

Status

All beamlines in operation.
ATT#1 shows this afternoon's beam delivery, with the pulse energies on the left and the number of bunches on the right.
SA2 was not affected, SA1 was essentially without beam from 13h45 to 16h15 (2.5h), and SA3 did not get the required number of pulses (386) during roughly the same period.
Beam delivery to all experiments has been ok since ~16h15; SCS is simply not taking more pulses for other (internal) reasons.

Attachment 1: JG_2024-11-24_um_17.22.58.png
  208   24 Nov 2024, 14:09   Jan Gruenert   PRC   Issue

Beckhoff communication loss SA1-EPS (again)

14h00 Same issue as before, but now EEE cannot clear the problem anymore remotely.
An access is required immediately to repair the SA1-EPS loop to regain communication.
EEE-OCD and VAC-OCD are coming in for the access and repair.

 

PRC update at 16h30 on this issue

One person each from EEE-OCD, VAC-OCD, and the DESY-MPC shift crew made a ZZ access to XTD2 to resolve the imminent Beckhoff EPS-loop issue.
The communication coupler for the fiber-to-EtherCAT connection of the EPS loop was replaced in SA1 rack 1.01.
Also, a new motor power terminal (replacing the one that had been smoking and was removed yesterday) was inserted to close the loop and regain redundancy.
All fuses were checked. However, the functionality of the motor controller in the MOV loop could not yet be repaired, so another ZZ access
after the end of this beam delivery (e.g. next Tuesday) is required to make the SRA slits and the FLT movable again. EEE-OCD will submit the request.

Moving the FLT is not crucial now; given the planned photon energies in the operation plan, it can wait for repair until the WMP.
Moving the SA1 SRA slits is also not critical at this moment (until Tuesday), but it will be needed for further beam delivery in the coming weeks.

Conclusions: 

In total, SA1+SA3 were limited to max. 30 bunches for about 3 hours this afternoon.
The ZZ access lasted less than one hour, so SA1/SA3 had no beam only from 15h18 to 16h12 (54 min).
Beam operation in SA1 and SA3 is now fully restored; the SCS and FXE experiments are ongoing.
The SA1-EPS-loop communication failure has been fixed and should not recur.
At 16h30, PRC and SCS verified that 384 bunches can be delivered to SA3 / SCS. OK.
Big thanks to all colleagues involved in resolving this issue: EEE-OCD, VAC-OCD, DOC, BKR, RC, PRC, XPD, DESY-MPC

 

Timeline + statistics of safety_CRL_SA1 interlockings (see ATT#1)
(these are essentially the times during which SA1+SA3 together could not get more than 30 bunches)

12h16 interlock NOT OK
13h03 interlock OK
duration 47 min

13h39 interlock NOT OK
13h42 interlock OK
duration 3 min

13h51 interlock NOT OK
13h53 interlock OK
duration 2 min

13h42 interlock NOT OK
15h45 interlock OK
duration 2h 3min

In total, SA1+SA3 were limited to max. 30 bunches for about 3 hours this afternoon.
The ZZ access lasted less than one hour, so SA1/SA3 together had no beam at all only from 15h18 to 16h12 (54 min).
SA3 had beam with 29 or 30 pulses until the start of the ZZ, while SA1 had already lost beam at 13h48 and had none until 16h12 (2h 24 min).
SA3/SCS needed 386 pulses, which they had until 11h55; this afternoon they only got/took 386 pulses/train from 13h04-13h34, from 13h42-13h47, and after the ZZ.
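The interlock statistics above can be cross-checked with a short script. A minimal sketch (the intervals are taken from this timeline; the parsing helper is illustrative, not part of any facility tooling):

```python
from datetime import datetime

def minutes_between(start: str, end: str) -> int:
    """Duration in minutes between two 'HHhMM' timestamps on the same day."""
    fmt = "%Hh%M"
    t0 = datetime.strptime(start, fmt)
    t1 = datetime.strptime(end, fmt)
    return int((t1 - t0).total_seconds() // 60)

# Interlock NOT OK -> OK intervals from the timeline above
intervals = [("12h16", "13h03"), ("13h39", "13h42"),
             ("13h51", "13h53"), ("13h42", "15h45")]

durations = [minutes_between(a, b) for a, b in intervals]
total = sum(durations)
print(durations, total)  # [47, 3, 2, 123] 175
```

The sum, 175 minutes, matches the "about 3 hours" of limited bunch numbers quoted above.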

Attachment 1: JG_2024-11-24_um_16.54.02.png
  207   24 Nov 2024, 13:47   Jan Gruenert   PRC   Issue

Beckhoff communication loss SA1-EPS

12h19: BKR informs PRC about a limitation to 30 bunches in total for the north branch.
The property "safety_CRL_SA1" is limiting in MPS.
DOC and EEE-OCD and PRC work on the issue.

Just before this, DOC had been called and informed EEE-OCD about a problem with the SA1-EPS loop (many devices red), which was very quickly resolved.
However, "safety_CRL_SA1" had remained limiting, and SA3 could only do 29 bunches but needed 384 bunches.

EEE-OCD together with PRC followed the procedure outlined in the radiation protection document 
Procedures: Troubleshooting SEPS interlocks
https://docs.xfel.eu/share/page/site/radiation-protection/document-details?nodeRef=workspace://SpacesStore/6d7374eb-f0e6-426d-a804-1cbc8a2cfddb

The device SA1_XTD2_CRL/DCTRL/ALL_LENSES_OUT was in OFF state although PRC and EEE confirmed that it should be ON, an aftermath of the EPS-loop communication loss.
EEE-OCD had to restart / reboot this device at the Beckhoff PLC level (no configuration changes); it then correctly loaded the configuration values and came back in ON state.
This resolved the limitation in "safety_CRL_SA1".

All beamlines are operating normally. The intervention ended around 13h30.

  206   23 Nov 2024, 15:11   Jan Gruenert   PRC   Issue

Damage on YAG screen of SA1_XTD2_IMGFEL

A damage spot was detected during the restart of SA1 beam operation at 14h30.

To confirm that it is not something in the beam itself or damage on another component (graphite filter, attenuators, ...), the screen is switched from the YAG to another scintillator.

ATT#1: shows the damage spot on the YAG (lower left dark spot, the upper right dark spot is the actual beam). Several attenuators are inserted.
ATT#2: shows the YAG when it is just moved away a bit (while moving to another scintillator, but beam still on YAG): only the beam spot is now seen. Attenuators unchanged.
ATT#3: shows the situation with BN scintillator (and all attenuators removed).

XPD expert will be informed about this new detected damage in order to take care.

Attachment 1: JG_2024-11-23_um_14.33.06.png
Attachment 2: JG_2024-11-23_um_14.40.23.png
Attachment 3: JG_2024-11-23_um_15.14.33.png
  205   23 Nov 2024, 10:27   Jan Gruenert   PRC   Issue

SA1 Beckhoff issue

SA1 blocked to max. 2 bunches and SA3 limited to max 29 bunches
At 9h46, SA1 goes down to single bunch. PRC is called because SA3 (SCS) cannot get the required >300 bunches anymore. (at 9h36 SA3 was limited to 29 bunches/train)

FXE, SCS, DOC are informed, DOC and PRC identify that this is a SA1 Beckhoff problem.
Many MDLs in SA1 are red across the board, e.g. SRA, FLT, mirror motors. The actual hardware state cannot be determined and no movement is possible.
Therefore, EEE-OCD is called in. They check and identify out-of-order PLC crates of the SASE1 EPS and MOV loops.
The vacuum system and VAC PLC seem unaffected.

EEE-OCD needs access to tunnel XTD2. It could be a fast access if it is only a blown fuse in an EPS crate.
This will however require decoupling the North branch and stopping all beam operation for SA1 and SA3.
In any case, SCS confirms to PRC that 29 bunches are not enough for them, and FXE also cannot continue with only 1 bunch; effectively it is downtime for both instruments.

Additional oddity: there is still one pulse per train delivered to SA1 / FXE, but there is no pulse energy in it! The XGM detects one bunch, but with <10 uJ.
It is unclear why; PRC, FXE, and BKR are looking into it until EEE goes into the tunnel.

In order not to ground the magnets in XTD2, a person from MPC / DESY has to accompany the EEE-OCD person.
This is organized and they will meet at XHE3 to enter XTD2 from there.

VAC-OCD is also aware and checking their systems and on standby to accompany to the tunnel if required.
For the moment it looks that the SA1 vacuum system and its Beckhoff controls are ok and not affected.

11h22: SA3 beam delivery is stopped.
In preparation of the access the North branch is decoupled. SA2 is not affected, normal operation.

11h45 : Details on SA1 control status. Following (Beckhoff-related) devices are in ERROR:

  • SA1_XTD2_VAC/SWITCH/SA1_PNACT_CLOSE and SA1_XTD2_VAC/SWITCH/SA1_PNACT_OPEN (this limits bunch numbers in SA1 and SA3 by EPS interlock)
  • SA1_XTD2_MIRR-1/MOTOR/HMTX and HMTY and HMRY, but not HMRX and HMRZ
  • SA1_XTD2_MIRR-2/MOTOR/* (all of them)
  • SA1_XTD2_FLT/MOTOR/
  • SA1_XTD2_IMGTR/SWITCH/*
  • SA1_XTD2_PSLIT/TSENS/* but not SA1_XTD2_PSLIT/MOTOR/*
  • more ...

12h03 Actual physical access to XTD2 has begun (team entering).

12h25: the EPS loop errors are resolved, FLT, SRA, IMGTR, PNACT all ok. Motors still in error.

12h27: the MOTORs are now also ok.

The root causes and failures will be described in detail by the EEE experts, here only in brief:
Two PLC crates lost communication to the PLC system. Fuses were ok. These crates had to be power cycled locally.
Now the communication is re-established, and the devices on EPS as well as MOV loop have recovered and are out of ERROR.

12h35: PRC and DOC checked the previously affected SA1 devices and all looks good, team is leaving the tunnel.

13h00: Another problem (or still the same one): FLT motor issues (related to EPS). This component is now blocking the EPS. EEE and the device experts work on it in the control system and find that they again need to go to the tunnel.

13h40 EEE-OCD is at the SA1 XTD2 rack 1.01 and it smells burnt. Checking fuses and power supplies of the Beckhoff crates.

14h: The SRA motors and the FLT both depend on this Beckhoff control.
EEE decides to remove the defective Beckhoff motor terminal because voltage is still delivered and there is a danger that it will start burning.

With the help of the colleagues in the tunnel and the device expert, we manually move the FLT manipulator to the graphite position;
given the photon energies in the operation plan, it can stay there until the WMP.

At the same time we manually check the SRA slits; the motors run ok. However, removing that Beckhoff terminal also disables the SRA slits.
A spare controller from the labs would be required, so in the interest of returning to operation we decide to move on without being able to move the SRA slits.

14h22 Beam is back on. Immediately we test that SA3 can go to 100 bunches - OK. At 14h28 they go to 384 bunches. OK. Handover to SCS team.

14h30 Realignment of SA1 beam. When inserting IMGFEL to check whether the SRA slits are clipping the beam, it is found after some tests with the beamline attenuators and the other scintillators that there is a damage spot on the YAG! See the next logbook entry and ATT#3. This is not on the graphite filter or on the attenuators or in the beam itself, as we initially suspected.
The OCD colleagues from DESY-MPC and EEE are released.
The SRA slits are not clipping now. The beam is aligned to the handover position by RC/BKR. Beam handover to FXE team around 15h.

Attachment 1: JG_2024-11-23_um_12.06.35.png
Attachment 2: JG_2024-11-23_um_12.10.23.png
Attachment 3: JG_2024-11-23_um_14.33.06.png
  204   23 Nov 2024, 10:23   Jan Gruenert   PRC   Issue

Broken Radiation Interlock

Beam was down for all beamlines due to a broken radiation interlock in the SPB/SFX experimental hutch.
The interlock was accidentally broken with the shutter open, see https://in.xfel.eu/elog/OPERATION-2024/203
Beam was down for 42 min (from 3h00 to 3h42), but the machine came back with the same performance.

Attachment 1: JG_2024-11-23_um_10.22.43.png
  203   23 Nov 2024, 07:09   Raphael de Wijn   SPB   Shift end

End of in-house 8496

X-ray delivery

15.13 keV 30 pulses @0.5 MHz and 64 pulses (customized pattern), 0.85-1.0 mJ stable delivery

Achievements/Observations

  • We used the customized pattern and it worked well, thanks to everyone involved
  • One fuse blew; TS OCD had to come on site
  • The interlock was accidentally broken with the shutter open, which tripped the beam in all SASEs
  202   23 Nov 2024, 00:48   Jan Gruenert   PRC   Issue

Broken Bridges

Around 21h20 the XPD group notices that several karabo-to-DOOCS bridges are broken.
The immediate effect is that in DOOCS all XGMs in all beamlines show warnings
(because they no longer receive important status information from karabo, for instance the insertion status of the graphite filters required at this photon energy).

PRC contacts Tim Wilksen / DESY to repair these bridges, after checking with DOC, who confirmed that there were no issues on the karabo side.
Tim finds that ALL bridges are down. The root cause is a failing computer that hosts all these bridges; see his logbook entry:
https://ttfinfo.desy.de/XFELelog/show.jsp?dir=/2024/47/22.11_a&pos=2024-11-22T21:47:00
That also explains that HIREX data was no longer available in DOOCS for the accelerator team tuning SA2-HXRSS.

The issue was resolved by Tim and his colleagues at 21h56 and had persisted since 20h27 when that computer crashed (thus 1.5h downtime of the bridges).
Tim will organize with BKR to put a status indicator for this important computer hosting all bridges on a BKR diagnostic panel.
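A status indicator of the kind proposed here usually boils down to a periodic reachability check of the bridge host. A minimal sketch, assuming a hypothetical host name and port (neither is given in this entry, and this is not the actual BKR panel mechanism):

```python
import socket

def host_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, DNS failures
        return False

# Example: poll the (hypothetical) bridge host and report its state
state = "OK" if host_reachable("bridge-host.example", 22) else "DOWN"
print(state)
```

In practice such a check would run periodically and feed a panel widget or alarm, so a crash like the one at 20h27 would be flagged within one polling interval instead of being noticed via downstream XGM warnings.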

  201   23 Nov 2024, 00:36   Jan Gruenert   PRC   Issue

20h16: beam down in all beamlines due to a gun problem
20h50: x-ray beams are back
Shortly afterwards, another 5 min of downtime (20h55 - 21h00) for all beamlines; then the gun issue is solved.

After that, SA1 and SA3 are back to about the same performance, but SA2 HXRSS is down from 130 uJ before the downtime to 100 uJ after it.
Additionally, MID informs at 21h07 that no X-rays at all arrive to their hutch anymore. Investigation with PRC.

At 21h50, all issues are mostly settled, MID has beam, normal operation continues.
Further HXRSS tuning and preparation for limited range photon energy scanning will happen in the night.

Attachment 1: JG_2024-11-23_um_00.47.04.png
  200   15 Nov 2024, 10:50   Frederik Wolff-Fabris   PRC   Issue

The SA2 cell #5 undulator segment moved to 25 mm at ~09:45 on Friday without apparent reason, and the undulator experts need a ZZ to exchange the power supply; the FEL intensity dropped from 230 uJ to 60 uJ as a consequence.

The RCs are preparing the south branch for the imminent ZZ. HED is informed.

**Update: ZZ finished by 12:10; recovered performance and re-started delivery by 13:30.

  199   12 Nov 2024, 10:07   Frederik Wolff-Fabris   PRC   Issue
- While the FEL IMG with YAG screen was inserted in SA1, EPS/MPS did not limit to the expected 2 bunches, but to 30 bunches;
- It was found that condition #3 ("SA1_XTD2_VAC/Switch/SA1_PNACT_Close") at the EPS Mode-S was disabled, although it should always be enabled;
- Condition #3 has been enabled again and SA1 is now limited to 2 bunches, as expected when the YAG screen is inserted.

Further investigations will be done to find why and when it was disabled. 
Attachment 1: Screenshot_2024-11-12_100032.png
  198   04 Nov 2024, 15:07   Frederik Wolff-Fabris   XO   SA2 status

Dear MID colleagues, Svitozar (PRC), Bolko and Shan (RCs),
This morning we faced the situation where MID was limited to 400 pulses (mode M) via the EPS signals. The current RF window set at the machine allows the requested 450 pulses.
The limiting conditions were the inserted graphite filter (#2) and the diamond screen at the MID Pop-in (#17). After a discussion with experts (thanks to Naresh, Wolfgang and Andreas), both conditions in EPS mode F (see attached) are now disabled, as the current SA2 beam offers safe conditions for both devices.
This special situation with disabled EPS signals in mode F is in place until Tuesday 05.Nov, as MID will stay at ~12.28 keV and with HXRSS (500-600 uJ). Please note that in case of either a change in photon energy, a substantial increase of photon flux, a longer SA RF window, or the wish to insert the YAG screen at the SA2 Pop-in, these two conditions have to be enabled again at the SA2 EPS Mode F before these actions take place.
After the beamtime ends on Tuesday at 7AM, either Naresh, the PRC, or I will make sure these two EPS conditions are re-enabled.
In case of questions please do not hesitate to contact me.
Best wishes,
Frederik

**Update Mon 15:00: The beamtime delivery to MID is extended up to Tuesday 9AM. At this moment the EPS signals will be enabled again by Frederik W.-F.

 

  197   03 Nov 2024, 00:00   Raphael de Wijn   SPB   Shift summary

X-ray delivery

12.5 keV, ~1.6 mJ

Achievements/Observations:
Collected almost 40 runs of the priority sample with 7 pulses per train using the Polypico injection system.

Issues:
Jungfrau controller issue when changing back to single-cell adaptive mode to collect powder data with the new geometry (solved together with DOC).
Pipeline issue after requesting single-cell correction: still the same bug in the calibration pipeline; the pipeline has to be re-instantiated after the change.

  196   31 Oct 2024, 02:43   Ulrike Boesenberg   MID   Shift summary

12.389keV, HXRSS, 1-400+ bunches, 450uJ

  • Seeding alignment
  • Beamline alignment (mirror + focusing)
  • Measured the photon energy and moved it to the requested value
  • Cryos for sample and monochromator are prepared
  • Aligned DES spectrometer
  • Started with DCCM characterization (440)
  195   28 Oct 2024, 09:58   Peter Zalden   FXE   SA1 status

This is to report some findings during operation that should be looked into (mostly to avoid them in the future, because both negatively affect the data quality of the user experiment at FXE):

  • At the moment, the 99:1 mode affects 4 to 5 pulses out of 100 in SA1; see the first screenshot, which shows in the top right the IPM reading, with each blue point denoting the average pulse energy in a train. This was not always the case: after the initial setup on Thursday it affected only a single train out of 100.
  • On 27.10., the switching of SA2 between 1 and multiple bunches affected SA1 beam heavily, see the jumps in SA1 pulse energy in the second screenshot shown here:
    • The pulse energy dropped by 15% and
    • the pointing changed vertically by 150 um on the FXE PI (data not shown, but available from karabo trendlines)

Edit 8.11.: Added screenshot (grafana/ctrend) to show that the vertical displacement was permanent (not corrected by the feedback)

Attachment 1: 2024-10-28-095823_156722903_exflqr37199.png
Attachment 2: 2024-10-28-100227_584317851_exflqr37199.png
Attachment 3: Screenshot_2024-11-08_093316.png
  194   25 Oct 2024, 05:51   Chan Kim   SPB   Shift summary

X-ray delivery

12.4 keV, ~1.4 mJ

 

Achievements/Observations

Beam was focused on a YAG, alignment finished up to the JF4M

  193   24 Oct 2024, 06:51   Chan Kim   SPB   Shift summary
X-ray delivery

12.4 keV, ~1.4 mJ

Achievements/Observations

  • Beam is through the inline mic
  • Flight tube vacuum optimized
  • Beam trajectory up to JF4M manually optimized
  • EHC CRLs aligned
  192   07 Oct 2024, 16:34   Naresh Kujala   PRC   Status

The DESY laser group has fixed the laser synchronization issue.

https://ttfinfo.desy.de/XFELelog/show.jsp?dir=/2024/41/07.10_M&pos=2024-10-07T14:51:12

Naresh
PRC wk41

  191   07 Oct 2024, 15:15   Naresh Kujala   PRC   Issue

14:40hr

FXE and HED reported a laser synchronization issue. Informed the machine RC.

The LbSync team couldn't fix this issue remotely and is on their way to Schenefeld to fix it on site.

Naresh 

PRC wk 41

  190   01 Oct 2024, 02:39   Tokushi Sato   SPB   Shift summary

X-ray delivery

    6 keV, ~2.8 mJ, continuous beam drift in both directions

Optical laser delivery

    ns laser for particle visualization

Achievements/Observations

    Data collection for CDI
    Julabo chiller (AGIPD) is still on target (-32 degrees)

Issues

    Continuous beam drift in both directions: the NKB focus was re-optimized roughly every 2 hours

ELOG V3.1.4-7c3fd00