OPERATION-2024 logbook, Contact: F.Wolff-Fabris, A.Galler, H.Sinn
ID   Date   Author   Group   Subject
  108   19 May 2024, 14:49   Antje Trapp   XRO   M1 - M2 - M3 Euler fine tuning of geometry parameters

We fine-tuned the geometry parameters for linear effects on the X motion (parallel offset) for all three mirrors.

dTz was corrected for M2 and M3 to -6.5 mm (same geometry).

dTz was corrected for M1 to +6 mm.

For M2 we were able to do a parabola scan of the Y motion to check for lrTz (see the sketch at the end of this entry).

lrTz was then corrected to -433 mm.

A correction in the same direction for M3 did not yield better results.

Final mirror positions at HED - see picture
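
As an illustration of the parabola-scan step, a minimal sketch in Python, assuming hypothetical scan arrays (trial lrTz values vs. the residual parallel offset; the real scan data and tooling are not shown in this entry): the scan is fit with a quadratic and the vertex gives the lrTz estimate.

  import numpy as np

  # Hypothetical scan data: trial lrTz values vs. residual parallel offset.
  # (Illustrative numbers only, not the measured values.)
  lrtz_trials = np.array([-500.0, -475.0, -450.0, -425.0, -400.0])  # mm
  offset = np.array([0.42, 0.18, 0.05, 0.03, 0.21])                 # mm

  # Fit a parabola and take its vertex as the best lrTz estimate.
  a, b, c = np.polyfit(lrtz_trials, offset, 2)
  print(f"best lrTz ~ {-b / (2 * a):.0f} mm")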

  1   28 Jan 2024, 13:50   Jan Gruenert   PRC   Issue

RC / BKR informs: beam is down (since 12h10), all beamlines affected.

The reason is the failure of a quadrupole (power supply) downstream of the dogleg.
The MPS crew is working on recovery; access to XTL is ongoing.
RC will inform us when new estimates come in on when beam operation can restart.

Currently affected on Photon side:
- SA1: Naresh (XPD) and Marco (DET) working on Gotthard-II commissioning. Naresh is present at BKR.
- SA2/SA3: Antje + Mikako (XRO) working on SA2 mirror flipping tests and the SA3 exit slit check. Both are located at the SXP control hutch.
Other XRO members will come in for the afternoon/evening shift work. Antje will inform them via chat.

  7   01 Feb 2024, 21:56   Romain Letrun   SPB   Issue

At 21:29:20 today, the number of bunches in SASE1 was increased from 1 to 352 without intervention by the SPB/SFX shift crew. Thirty seconds later, the number of bunches was reduced to 1, again without intervention by the shift crew (see attachments 1 and 2).

We were in the middle of a measurement at that time and the sudden increase from 10 mW to 3.5 W average power resulted in the destruction of the sample we were using.
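
The jump is consistent with the average power scaling linearly with the bunch count; a quick sanity check with the figures from this entry:

  # Average power scales linearly with the number of bunches per train:
  p_single = 10e-3             # W, average power measured with 1 bunch
  n_bunches = 352
  print(n_bunches * p_single)  # ~3.5 W, matching the observed jump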

AGIPD and JUNGFRAU are safe and were not at risk at the time of the incident, but this is merely by luck.

The origin of the change in the number of bunches is unclear to us and needs to be investigated. It goes without saying that this is extremely dangerous for equipment, and the damage could have been even worse.

  24   13 Feb 2024, 08:52   Romain Letrun   SPB   Issue

As after the previous winter maintenance period, the FXE shutter appears as open in the DOOCS panels even though it is closed (https://in.xfel.eu/elog/OPERATION-2023/58).

  27   17 Feb 2024, 08:15   Tommaso Mazza   SQS   Issue

The cross on the FEL imager, which we use as a reference for the BKR crew, is not centred on the GATT aperture. When was this changed, and why?

Att. 1 and 2 show the current position (the saturated image aids the positioning of the GATT aperture).

Att. 3 is from two weeks ago; back then, the cross was centred on the GATT aperture.

  37   27 Feb 2024, 09:56   Romain Letrun   SPB   Issue

The status of the XTD2 attenuators was checked during last week's experiment, and many were found to cause wavefront disturbances ranging from mild to severe.

Attachment 1: No attenuator
Attachment 2: 75 um CVD - Fringes visible in beam
Attachment 3: 150 um CVD - Strong fringes in beam
Attachment 4: 300 um CVD - Looks more or less ok
Attachment 5: 600 um CVD - Weak distortion in the beam profile
Attachment 6: 1200 um CVD - Ok
Attachment 7: 2400 um CVD - Ok
Attachment 8: 500 um Si - Not ok

  38   28 Feb 2024, 06:57   Peter Zalden   FXE   Issue

Not sure if this is the cause of the fringes reported before, but at 5.6 keV there are clear cracks visible on the XTD2 attenuators 75 um and 150 um CVD, see attached. There is also a little damage on the 600 um CVD, but not as strong as on the others.

At 5.6 keV we will need to use the thin attenuators. I will mention it again in case this causes problems downstream.

Edit 8:50h: We do not observe additional interference effects in the downstream beam profile (see attachment 4).

  46   04 Mar 2024, 09:42   Romain Letrun   SPB   Issue

The number of bunches during the last night shift kept being reduced to 24 by the device SA1_XTD2_WATCHDOG/MDL/ATT1_MONITOR020 acting on SA1_XTD2_BUNCHPATTERN/MDL/CONFIGURATOR. This was not the case during the previous shifts, even though the pulse energy was comparable. Looking at the history of SA1_XTD2_WATCHDOG/MDL/ATT1_MONITOR020, it appears this device was started at 16:26:25 on Friday, 1st March.
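
For illustration only, a minimal sketch of the kind of clamping rule such an attenuator watchdog applies; the function, budget, and numbers below are assumptions, not the actual device logic:

  # Hypothetical protection rule: keep the average power reaching the
  # attenuator within a budget by limiting the number of bunches per train.
  def allowed_bunches(pulse_energy_j, power_budget_w, train_rate_hz=10.0):
      if pulse_energy_j <= 0:
          return 0
      return int(power_budget_w / (pulse_energy_j * train_rate_hz))

  # Example: 1 mJ pulses and a 0.5 W budget -> at most 50 bunches per train.
  print(allowed_bunches(1e-3, 0.5))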

  50   15 Mar 2024, 00:07   Harald Sinn   PRC   Issue

SASE1 bunch number was reduced to 10 pulses for no obvious reason. 

Traced it back to a malfunction of the watchdog for the solid attenuator (see attached error message).

Deactivated the watchdog for the SASE1 solid attenuator, then re-activated it.

Issue: SASE1 (SPB?) was requesting 200 pulses at 6 keV at full power with all solid attenuators in the beam. This was a potentially dangerous situation for the solid attenuator (!)

After that the team left to go home; nobody is available at FXE. Set the pulse number to 1.

Checked the status of the solid attenuator. It seems that plates 1 & 2 have some vertical cracks (see attached picture).

  52   15 Mar 2024, 07:08   Romain Letrun   SPB   Issue

There are now two watchdogs for the XTD2 attenuator running at the same time

  • SA1_XTD2_WATCHDOG/MDL/ATT1_MONITOR020
  • SA1_XTD2_OPTICS_PROTECTION/MDL/ATT1

that are configured differently, which results in different pulse limits (see attachment).

  55   16 Mar 2024, 12:12   Harald Sinn   PRC   Issue

11:15 SASE1/SPB called: they observed an issue with the SASE1 XGM/XTD2: about every 35 seconds the power reading jumps from 4 W to zero and back. It seems to be related to some automated re-scaling of the Keithleys, which coincides with the jumps. The fast signal and the downstream XGMs do not show this behavior. The concern is that this may affect the data collection later today.

12:15 The XGM expert was informed via email; however, there is no regular on-call service today. If this cannot be fixed in the next few hours, the PRC advises using in addition the reading of the downstream XGM for data normalisation.

13:15 Theo fixed the problem by disabling auto-range. Needs follow-up on Monday, but for this weekend it should be ok.
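
For reference, disabling auto-range on a Keithley electrometer of the kind used with the XGMs can be done with standard SCPI commands; a minimal pyvisa sketch, where the VISA address, instrument model, and fixed range are placeholder assumptions:

  import pyvisa

  rm = pyvisa.ResourceManager()
  keithley = rm.open_resource("GPIB0::14::INSTR")  # placeholder address

  keithley.write("SENS:CURR:RANG:AUTO OFF")  # disable auto-range
  keithley.write("SENS:CURR:RANG 2e-6")      # pin a fixed current range (example value)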

  56   20 Mar 2024, 12:41   Frederik Wolff-Fabris   PRC   Issue

BKR called PRC around 06:10 to inform about vacuum issues at the SA1 beamline leading to lost beam permission for the North Branch.

VAC-OCD found that an ion pump in the PBLM / XTD9 XGM showed a pressure spike that triggered the valves in the section to close.

After reopening the valves, beam permission was restored; further investigation and monitoring will continue over the coming days. Currently all systems work normally.

https://in.xfel.eu/elog/Vacuum-OCD/252

  65   28 Mar 2024, 14:56   Chan Kim   SPB   Issue

Pulse energy fluctuations observed after full-train operation started in SA2.

  67   30 Mar 2024, 06:59   Romain Letrun   SPB   Issue

DOOCS panels are again incorrectly reporting the FXE shutter as open when it is in fact closed. This had been fixed after it was last reported in February (elog:24), but has now reappeared.

  69   30 Mar 2024, 22:02   Chan Kim   SPB   Issue

Continuous pointing drift in the horizontal direction.

It may have triggered a vacuum spike at SPB OH (MKB_PSLIT section) at 21:07.

  91   01 May 2024, 16:39   Andreas Galler   PRC   Issue

840 pulses were not possible at SXP due to the big brother settings for the EPS Power Limit rule. The minimum pulse energy was assumed to be 6 mJ, while the actual XGM pulse-energy reading is 550 uJ.
Solution: the minimum assumed pulse energy has been lowered from 6 mJ to 2 mJ. This shall be reverted on Tuesday next week.
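
A rough illustration of why the assumed pulse energy dominates this rule; the EPS power budget below is a hypothetical value chosen for the example, not the configured one:

  # EPS Power Limit sketch: max bunches = budget / (assumed pulse energy * 10 Hz trains).
  budget_w = 50.0  # hypothetical power budget
  for e_assumed in (6e-3, 2e-3):  # J: old and new assumed minimum pulse energy
      n_max = budget_w / (e_assumed * 10.0)
      print(f"assumed {e_assumed * 1e3:.0f} mJ -> max {n_max:.0f} bunches per train")
  # With 6 mJ assumed, this budget caps the train just below 840 bunches;
  # with 2 mJ assumed, 840 bunches fit easily.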

  96   08 May 2024, 13:51   Raúl Villanueva-Guerrero   PRC   Issue

Dear all,

Due to a probable issue with the SA1 transmissive imager, the "OUT" and "YAG" position indication signals seem to be unresponsive.

After the FXE team noticed that MODE S was interlocked, the root cause was narrowed down to this component. Later, further investigation with A. Koch & J. Gruenert (XPD) confirmed this. Recabling work performed during yesterday's ZZ access may have something to do with this issue.

Once the complete retraction of all the screens had been jointly confirmed via DOOCS and Karabo, the affected interlock condition (see attached picture) was disabled for the rest of the week to allow normal operation of SA1 (release of MODE S and MODE M).

Further investigation and the eventual resolution of the issue are expected to happen on Tuesday next week via ZZ access.

With best regards,

Raul [PRC@CW19]

  99   10 May 2024, 00:04   Rebecca Boll   SQS   Issue

Our pp laser experienced timing jumps of 18 ns today. This happened for the first time at ~7:30 in the morning without us noticing, and a couple more times during the morning. We realized it at 15:30 when the signal on our pulse arrival monitor (X-ray/laser cross-correlator downstream of the experiment) disappeared. We spent some time investigating, until we also saw the jump in two different photodiodes on different triggers (oscilloscope and digitizer), so it was clear that it is indeed the laser timing jumping. We also see the laser spectrum and intensity change for the trains that are offset in timing.

We called laser OCD at ~16:30. They tried to fix the problem by readjusting the instability zone multiple times and by adjusting the Pockels cell, but this did not change the timing jumps. At 21:30 they concluded that there was nothing more they could do tonight and that a meeting should happen in the morning to discuss how to proceed.

Effectively, this means that we lost the entire late and night shifts today for the pump-probe beamtime, and some data from the early shift are of questionable quality as well.

  105   14 May 2024, 09:56   Andreas Koch   XPD   Issue

EPS issue for the SA1 transmissive imager (OTRC.2615.T9), w.r.t. Operation eLog #96

The motor screen positions w.r.t. the EPS switches were realigned (last Wednesday, 8th May) and tested today (14th May), and the EPS is again enabled for the different beam modes. All is back to normal.

  110   25 May 2024, 02:38   Rebecca Boll   SQS   Issue

Today in the afternoon, SASE1 was tuned (a description of the activities is in att. 4). We now realize that this has in some form changed the SQS pulse parameters.

Att. 1 shows ion spectra recorded under nominally the same photon parameters before (run 177) and after (run 247).

It is also visible in the history of the SASE viewer that the behavior of the SASE3 pulse energy differs before and after 15:45, which is right when SASE1 was tuned; it clearly started fluctuating more. However, we don't think the pulse energy fluctuations themselves are the reason for the large change in our data; we rather suspect a change in the pulse duration and/or the source point. Unfortunately, we have no means to characterize this now, after the fact.

It is particularly unfortunate that the tuning happened exactly BEFORE we started recording a set of reference spectra on the SASE3 spectrometer, which were supposed to serve as a calibration for the spectral correlation analysis to determine the pulse duration, as well as a training data set for using the PES with machine learning as a virtual spectrometer for the large data set taken in the shifts before.
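
For context, a minimal sketch of the spectral correlation idea those reference spectra were meant to feed, assuming a stack of single-shot spectra as a NumPy array (shots x spectral pixels); this is a generic illustration, not the actual analysis code:

  import numpy as np

  def spectral_g2(spectra):
      """Second-order spectral correlation vs. pixel separation, shot-averaged.

      For SASE pulses, the width of g2 - 1 scales inversely with the pulse
      duration, which is what the reference spectra were meant to calibrate.
      (Simplified normalisation, for illustration only.)
      """
      mean = spectra.mean(axis=0)
      fluct = spectra - mean
      n = spectra.shape[1]
      g2 = np.empty(n)
      for lag in range(n):
          num = (fluct[:, : n - lag] * fluct[:, lag:]).mean()
          den = mean[: n - lag].mean() * mean[lag:].mean()
          g2[lag] = 1.0 + num / den
      return g2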
