OPERATION-2023 logbook, Contact: F.Wolff-Fabris, A.Galler, L.Samoylova, H.Sinn
ID   Date   Author   Group   Subject
  349   20 Sep 2023, 19:42   Peter Zalden   FXE   Issue

Concerning the phase shifter adjustment after the undulator gap scan via Karabo:

At 19:30 FXE tested the scan change via Karabo with BKR. When the energy was changed with a step size of 100 eV, the X-ray intensity stayed unchanged. This means the phase shifter adjustment is correct and working now.

Many thanks for your help.

  356   23 Sep 2023, 07:31   Jan Gruenert   PRC   Issue

7:23: RC info:
An electrical power glitch interrupted beam operation; the machine is down. DESY has power back
and the accelerator is recovering beam towards the TLD, but no beam is possible into the TLDs, apparently due to power issues in Schenefeld.

7:32 SCS info:
SA1+SA2 have electricity, SA3 does not. TS is informed and on the way. Power went away around 6h50.
Lights in the hall remained on, but everything in SA3 is off, even the regular power sockets. In Karabo everything is red.
The hutch search system indicates a FAULT state.

7:45 FXE info:
FXE was not affected by any power loss; all systems are operational, but of course there is no beam due to SA3.
The only unusual thing was a (fire?) alarm audible at 6h53. FXE is ready and waiting for beam.

7:50 HED info:
all normal, just XFEL beam went away. RELAX laser not affected, all ok, waiting for beam.

7:55 FXE info:
The interlock fault lamp on the FXE interlock panel is RED. The interlock cannot be set. FXE will contact SRP radiation protection.

8:00 RC:
MPS on-call is present at BKR. The machine will try to restart the south branch, but there also seems to be an interlock from an XTD8 (sic!) shutter (?!).

08:03 FXE / SRP:
SRP informs that they cannot help in this case; DESY MPS-OCD must solve this.

08:13 BKR:
BKR shift leader Dennis Haupt will try to contact MPS team.

Further updates in next entry https://in.xfel.eu/elog/OPERATION-2023/359

  358   23 Sep 2023, 08:14   Peter Zalden   FXE   Issue

The FXE interlock is in a fault state. See the attached photo with the original error message from 6:53. It cannot be reset by acknowledging. BKR is informed and is contacting MPS OCD.

Attachment 1: 20230923_074956.jpg
  359   23 Sep 2023, 08:50   Jan Gruenert   PRC   Issue

Updates on power glitch

8h50: No feedback from TS-OCD (still not arrived at SASE3 / SCS?). Their phone is busy. Apparently they are busy working on it in the hall (confirmed at 9h00).

There is a PANDORA radiation monitor which has no power, but it is needed to resolve the interlock issue in FXE + SCS.

Important:
The regular SCS phone cannot be reached anymore (no power). Please use this number instead to call the SCS hutch: 86448

Update 9h15
- SXP/SQS informed by email, SQS informed / confirmed by phone (Michael Meyer), they will check their hutch and equipment. No phone contact to SXP.
- FXE info: power back at PANDORA but interlock error persists

Beam has been back in SA2 / XTD6 since 7h35.

9h20 Info by TS-OCD / Christian Holz: power at SASE3 area in hall is now back.

9h25 FXE:
The affected PANDORA without power was PANDORA X05 in the SASE3 SXP hutch.
That PANDORA now has power back, but the FXE hutch interlock error still persists.
D3 Wolfgang Clement is informed and said he might have to come in to take care of the alarm.

9h40 info RC:
A "new" problem, now for SA2: communication lost to the MPS crate in the SA1 balcony room ("DIXHEXPS1.3 server - MPS condenser").

09:55 info FXE:
Wolfgang Clement was at FXE and has reset "all the burn-through monitors", in total 5 units, "including also SXP and SPB optics". At least at FXE the X-ray interlock is now fully functional, confirmed.

10:05 Info SCS:
Their interlock panel still shows an error, which probably just has to be acknowledged. FXE / SCS will communicate and solve it.

10h35 BKR info:
AIBS in all hutches of SA1/3 are blocking beam. Also, DIXHEXPS1.3 is blocking beam to SA2. See att#1 and att#2.

11h10
EEE-OCD is working on AIBS / MPS from PLC/karabo side.

11h25 EEE-OCD:
PLC errors cleared for all AIBS.

11h30
Beam to SA1/3 should be possible now. SA2 is still blocked. The MTCA crate apparently needs to be power-cycled.
EEE-Fast Electronics OCD is informed through DOC.

12h00
EEE-PLC-OCD has succeeded in clearing the errors on the Karabo/PLC side of the AIBS. Now beam to SA1/3 is re-established.
However, the MTCA crates XFELcpuDIXHEXPS2 (and XFELcpuSYNCPPL7) might have power but are not operational (MCH unreachable) and must be locally power-cycled.
These are "DESY-operated" crates, not "EuXFEL-MTCA-crates".
Nevertheless, DOC staff, with online instruction by EEE-FE, will go to the balcony room and reboot the crates locally as required/requested by RC. Hopefully this will bring back beam permission to SA2.

12h10
Another new issue: SA2 Karabo is down. DOC is working on this. No control of anything in the SASE2 tunnel is possible via Karabo.
PRC instructs BKR to close the shutter between XTD2 and XTD6 (as also agreed with HED) for safety reasons, so that the beam won't be uncontrolled once SA2 gets beam permission back.

12h45 info from SCS:
a) The SCS pump-probe laser has been down since the morning (and they absolutely need it for the experiment). They are in contact with LAS-OCD, but LAS-OCD is waiting for DOC support to recover motor positions etc.
b) SCS received XFEL beam at 11h30, but since 12h33 the beam has been intermittently interrupted (see att#3) or completely off. BKR is informed but the interruptions are not yet understood.

12h50
The crate XFELcpuDIXHEXPS2 appeared to be revived. It reads OK on the DOOCS panel Controls --> MTCA crates --> XHEXP, but after some moments it is again in error (device offline).
Info RC: Tim Wilksen / DESY and team are working on this remotely and might come in if broken hardware needs to be exchanged.

13h40
SCS had reported at 12h45 that the beam is repeatedly set to zero. SCS and PRC investigated but couldn't find any EPS / Karabo item that would do this.
We suspect that some Karabo macro is asking to set the beam to 0 bunches, but cannot find anything.
We then disabled the user control of the number of bunches from BKR. Now SCS receives beam but has to call BKR if they want to change the number of bunches. To be followed up by DOC once they have time.

14h20
I see now that the crate is suddenly OK, which probably means that somebody finally fixed it. Beam permission to SA2 is back.
 

14h40
Beam is back in XTD1 (SA2) up to the XTD1/XTD6 shutter and has been checked. Currently 300 uJ.
Beam monitoring without Karabo is possible in DOOCS with the XGM (unless vacuum problems come up) and the Transmissive Imager.
The HED shift team is now leaving until DOC has recovered Karabo. Info will be circulated to HED@xfel.eu when this is achieved.

 

Attachment 1: JG_2023-09-23_um_10.28.37.png
Attachment 2: JG_2023-09-23_um_10.43.59.png
Attachment 3: JG_2023-09-23_um_12.46.35.png
  361   23 Sep 2023, 15:35   Jan Gruenert   PRC   Issue

SASE2:
The unavailability of Karabo and possibly missing cooling water is a problem for the tunnel components as well.
XGM in XTD6: server error since about 11am. We no longer know the status of this device, neither from DOOCS nor Karabo.
VAC-OCD is informed and will work with EEE-PLC-OCD to check and secure the SA2 tunnel systems, possibly via PLC.
Main concerns: the XGM and cryo-cooler systems, and overheating of the racks which contain the XGM and monochromator electronics.

16h00 SA2 karabo is back online (partially) !

More good news:
Together with VAC-OCD we see that the vacuum system of the SA2 tunnel is fine, unaffected by the power cut and the Karabo outage. Pressures look fine.

Janusz Malka found that, for the SA3 GPFS servers, the redundant cooling water in the balcony room didn't start, and no failure was reported.

16h25 DOC info
Cooling in rack rooms ok. GPFS servers are up, ITDM still checking. DAQ servers up.

16h30 XTD6-XGM is down.
Vacuum is ok, but the MTCA crate cannot be reached / none of the RS232 connections are responding. XPD and Fini are checking,
but it looks like an access is required. Raimund Kammering is also checking the crate.

16h30 FXE:
The optical laser shutter is in an error state. LAS-OCD is informed. Therefore FXE is not taking beam now.
Specifically, the optical laser safety shutter between the FXE optical laser hutch and the experimental hutch has a problem and LAS-OCD cannot help. SRP is to be contacted.

16h40 info ITDM:
Recovery of the SA2 GPFS hardware after the cooling failure is completed. Everything seems to be working. Only a few power supplies are damaged, but there is redundancy.

16h40 info RC
The SA2 beam is now blocked by Big Brother since it receives an EPS remote power limit of 0 W. This info is masked until all SA2 Karabo devices are back up (e.g. the bunchpattern MDL is still down).

17h30 info by VAC-OCD
Vacuum systems SA1+SA2+SA3 are checked and OK. Only some MDLs needed to be restarted.
There are some remaining problems with the cryo compressors on the monochromators: two at HED and one at MID. The vacuum pressure is ok, but if
users need the mono it will not be available. VAC-OCD will check back with HED when they are back in the hutch.

  376   30 Sep 2023, 10:11   Romain Letrun   SPB   Issue

On multiple occasions this morning, the number of bunches was reduced to 1 without any action from our side.

Looking at the history of SA1_XTD2_BUNCHPATTERN/MDL/CONFIGURATOR, we noticed the changes were applied by this device, presumably originating from one of the watchdogs in SA1. This kind of silent change is particularly inconvenient and easy to miss unless one constantly monitors that the number of bunches remains the same. Given that there are dozens of watchdogs in SA1, tracking the origin is at best cumbersome and time-consuming. Finding the origin of the restriction needs to be made easier.
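As an illustration of the kind of tooling that could help, here is a minimal sketch that scans an exported property history of the configurator for bunch-number reductions and reports which device applied each change. The export file name, column layout, and property name are assumptions for illustration only; the actual history export from Karabo may look different.

```python
# Minimal sketch, not an official tool: assumes the CONFIGURATOR property history
# has been exported to a CSV with hypothetical columns
# "timestamp,property,value,source_device".
import csv

HISTORY_FILE = "bunchpattern_history.csv"   # hypothetical export file
BUNCH_PROPERTY = "nPulses"                  # hypothetical property name

def find_silent_reductions(path):
    """Print every history entry where the bunch number decreased, and its source."""
    previous = None
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["property"] != BUNCH_PROPERTY:
                continue
            value = int(row["value"])
            if previous is not None and value < previous:
                print(f'{row["timestamp"]}: {previous} -> {value} '
                      f'set by {row["source_device"]}')
            previous = value

if __name__ == "__main__":
    find_silent_reductions(HISTORY_FILE)
```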

Attachment 1: Screenshot_from_2023-09-30_10-00-26.png
  387   13 Oct 2023, 10:19   Theophilos Maltezopoulos   XPD   Issue

An XGM has two single-shot detectors, XGMD and HAMP.

The XTD2 XGMD amplifiers are damaged, so I am operating the XTD2 XGM with HAMP, which I usually use at > 18 keV.

In summary, the XTD2 XGM is fully operational, but I no longer have a spare detector! Thus, a ZZ access is organized for Monday 16.10 around 12:00, and Joakim and I will exchange the damaged amplifiers.

  409   24 Oct 2023, 19:26   Thomas Preston   HED   Issue

Vacuum issue at HED

At 18:26 the vacuum failed in the HED CRL3 chamber in the OPT hutch. The failure tripped the valves in the OPT hutch; the pressure went from 1e-8 mbar to 5 mbar in seconds. The leak is confined to the CRL chamber -- the sections upstream and downstream are not compromised and still have good vacuum. Therefore we believe the leak is in one of the bellows of the CRL arms. This needs to be investigated tomorrow by the technicians.

The late shift is cancelled since we cannot continue with lens alignment. The night shift may continue with work on the DiPOLE laser - up to them.

BKR is informed that they may use the time for tuning.

Attachment 1: Vac_issue_2023-10-24-18-50-16.png
  410   25 Oct 2023, 04:48   Chan Kim   SPB   Issue

SA3 background is very strong and it affects our NKB (nano KB, Ru coating) focus.

 

Attachment 1: optimized NKB focus with 1 pulse @SA3

Attachment 2: optimized NKB focus + SA3 BG with 375 pulses @SA3

Attachment 3: optimized NKB focus with 375 pulses @SA3

Attachment 4: optimized NKB focus with 375 pulses @SA3, but after changing number of pulses to 1 @SA3

Attachment 1: Screenshot_from_2023-10-25_02-37-38.png
Attachment 2: Screenshot_from_2023-10-25_02-42-21.png
Attachment 3: Screenshot_from_2023-10-25_02-35-35.png
Attachment 4: Screenshot_from_2023-10-25_02-36-16.png
  421   27 Oct 2023, 03:19   Chan Kim   SPB   Issue

We observed automatic changes of the number of X-ray pulses last night and tonight.

It is related to the 500 um thick Si attenuator in XTD2_ATT.

Even with the 2.4 mm thick CVD (XTD2_ATT) IN, the number of pulses @SA1 changes to 28 whenever we insert the 500 um thick Si attenuator.

A watchdog triggered this process. It has to be updated.
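For illustration only, and not the deployed watchdog code: a rule that keys the pulse limit to the 0.5 mm Si arm position alone, regardless of the 2.4 mm CVD absorber already being IN, would reproduce the behaviour described above. All names and values below are assumptions.

```python
# Hypothetical sketch of the suspected watchdog rule -- NOT the actual deployed code.
# Names, values, and structure are invented for illustration only.

MAX_PULSES_WITH_SI = 28   # limit observed whenever the 0.5 mm Si arm is IN

def suspected_pulse_limit(si_05mm_in: bool, cvd_24mm_in: bool, requested: int) -> int:
    """Suspected current behaviour: only the Si arm state is checked, so the limit
    is applied even when the 2.4 mm CVD absorber already attenuates the beam."""
    if si_05mm_in:
        return min(requested, MAX_PULSES_WITH_SI)
    return requested

# Example: with both CVD and Si inserted, the request is still clipped to 28 pulses,
# matching the observation in this entry.
print(suspected_pulse_limit(si_05mm_in=True, cvd_24mm_in=True, requested=100))  # -> 28
```

A rule that also takes the CVD state (i.e. the total inserted attenuation) into account would avoid the unintended reduction, which is presumably what "it has to be updated" refers to.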

 

First row in the attachment: The number of X-ray pulses @ SA1

Second row in the attachment: Status of 2.4 mm thick CVD in XTD2_ATT

Third row in the attachment: Status of 0.5 mm thick Si in XTD2_ATT

Attachment 1: Screenshot_from_2023-10-27_03-09-25.png
  425   30 Oct 2023, 14:36   Jan Gruenert   PRC   Issue

The EPD reading system at the turnstiles of the XTD10 entrance was not working this morning.
A necessary ZZ access to XTD10 could not be performed in the morning and had to be postponed.
Resetting the system by Michael Prollius was not sufficient.
At 14h20 XO informed that the EPD system was fixed by D3 (an issue with Windows passwords) and is operational again.
The ZZ access by XRO concerning VSLIT is therefore proceeding now.

  435   04 Nov 2023, 08:15   Jan Gruenert   PRC   Issue

Beam down for SASE1/3: SASE1 XTD2 vacuum issue

The vacuum interlock of the SA1 / XTD2 solid attenuator area triggered, closing the beamline gate valves.
VAC-OCD is investigating. No beam has been possible for SA1/SA3 since 7h18.

The vacuum interlock trip is a real vacuum problem. It could be temporarily circumvented to re-establish beam operation in SA1/SA3, BUT IT IS NOT FIXED.
Beam was re-established at 8h50.

Currently all attenuator arms are OUT. The vacuum pressure is still elevated but has returned below the critical level; however, any attenuator movement can increase the pressure from the leak.
Further assessment by VAC in the next hour(s) will decide whether a tunnel intervention must happen today or can be delayed until after the weekend.

Attachment 1: JG_2023-11-04_um_09.34.41.png
  436   04 Nov 2023, 13:49   Jan Gruenert   PRC   Issue

(Temporary) solution for the vacuum incident in SA1

This morning at 7h18 the vacuum interlock of SA1 closed the beamline valves and thus stopped XFEL beam delivery for SA1 and SA3.
The reason was a real vacuum problem, with the vacuum pressure rising critically in the SA1_XTD2_ATT solid attenuator.
Thanks to VAC-OCD and a workaround it was possible to re-establish beam operation in SA1/SA3 already by 8h50, but the problem is not fixed!

History data show that movements of ATT have progressively deteriorated the vacuum since Friday morning, 3.11.2023.
At the moment it has stabilized at a high level, but any further significant deterioration of the vacuum level in ATT would make a tunnel intervention unavoidable.

A vacuum intervention in the tunnel will cause a beam delivery downtime for SA1/SA3 of at least 5 hours,
as we know from similar previous cases(*). This includes an extensive XTD2 undulator tunnel search,
which is required because the intervention cannot happen as a ZZ access; the interlock must be broken.

In order to protect the user beamtimes in SA1 (SPB/SFX) and SA3 (SQS),
the following is necessary and agreed by the coordinators (PRC+RC) and VAC:

1) It is not allowed to move ATT until VAC has repaired the leak (PRC has locked the device)
2) The VAC group will enter XTD2 after the user beamtimes end on Monday and will work on removing the vacuum leak. This will affect the FD studies scheduled for Monday.
3) SPB confirms that they can work in the current (OUT) state of the XTD2 attenuator
4) If FXE (in-house beamtime) cannot work with the same conditions as SPB/SFX (ATT out), then the FXE beamtime is cancelled
5) FXE is requested to contact PRC by phone

  438   05 Nov 2023, 12:18   Jan Gruenert   PRC   Issue

Accelerator down 9h23 - 12h

RC / BKR informs (10am) that, due to a quadrupole power supply failure in A21, a tunnel access to XTL is required.
After the XTL intervention finished, BKR restarted beam in SA2 at 12h09.

However, when sending beam to SA1/3, a new problem occurred with the magnets in that branch, which at the moment makes it impossible to deliver beam to SA1/SA3.
The experts are on their way; we will update when there is new information.

  439   05 Nov 2023, 12:47   Jan Gruenert   PRC   Issue

North branch down : another e-beam magnets problem

RC informs at 12h45 that the linac and the south branch are back in operation and performance is as it was before the issues with the A21 quadrupole.
However, during the access to XTL, additional magnet problems occurred in the T4 and SA3 sections.
Several magnet power supplies failed several times; MPS is working on this issue.
Additionally, the SA3 burn-through radiation safety monitor was triggered, which requires an on-site intervention by D3.
The downtime for SA1/SA3 is expected to last at least one hour.

  453   22 Nov 2023, 10:19   Johannes Moeller   MID   Issue

The Si 0.5 mm absorber of the XTD1 attenuator seems to be damaged. The first screenshot is with that one inserted. In the second screenshot, it is replaced with a Si 0.5 mm absorber from MID_XTD6_ATT.

 

Attachment 1: Screenshot_from_2023-11-22_10-15-34.png
Attachment 2: Screenshot_from_2023-11-22_10-14-47.png
  469   18 Apr 2024, 23:19   Jan Gruenert   PRC   Issue

22h55:
HED informs that they have issues with the HED pulse picker in XTD6. It affects the ongoing experiment, but not to an extent that prevents them from continuing the beamtime.
It will be more of an issue for the next beamtime starting this weekend; therefore they are looking into solutions, working together with DOC and EEE.
They might need a ZZ access to XTD6 to check the pulse picker mechanics outside vacuum.

  108   25 Mar 2023, 21:01   Lisa Randolph   HED   HED summary 25.3.

This day we had lots of issues.

  • The night shift noticed that the mirror 2 feedback had not been running since about 22h; the PBLM2 camera was in an unknown state; power-cycling recovered the camera, and the feedback is operational again.
  • Continued shooting until ~9:00
  • At ~9:30 the whole Karabo and online cluster were dead. No control was possible anymore. The whole of SASE2 was affected.
  • In the meantime, BKR called us: EPS was triggered and the bunch number went to zero. However, we didn't (couldn't) change anything due to the Karabo issue. It may be a side effect of the Karabo issue.
  • Karabo slowly came back at ~15:15
  • DOC asked us to take a test DAQ run. JF4 control is stuck in the INIT state and cannot take DAQ. DA08 went to error. The HED Playground server cannot be restarted... These issues need to be solved one by one.
  • We spent the rest of the shift mostly restarting servers and motors and reloading configurations.
  • We could not start Jungfrau 4 - at some point the controller was started and then hung in the INIT state for the last few hours. It did not respond to the shutdown commands needed to reinitialise it in the correct order. We called DOC, and they were able to restart it.
  106   24 Mar 2023, 21:46   Lisa Randolph   HED   HED shift summary 23.-24.3.

8.2 keV, 1-2 bunches, 600 uJ

23.3.

  • did timing runs and optimized the beamline alignment
  • Karabo was working very slowly
  • continued with sample alignment
  • performed several xfel+laser shots
  • opened the chamber and replaced samples

24.3.

  • aligned xfel and Relax to TCC
  • sample alignment
  • taking shots
  98   22 Mar 2023, 23:57   Lisa Randolph   HED   HED shift summary 22.03.

During the night we worked on the temporal overlap between Relax laser and XFEL.

In the morning we noticed that the beam jitter is quite large (in and out of the aperture of the CRL4 lenses). There seems to be a correlation with the HED popin. We adjusted the TCC with X-rays and laser, and performed transmission scans.

We managed to see the direct beam on the JF. We continued with sample alignment (grazing incidence) and also aligned the beam blocks to block the direct and the reflected beam.
