OPERATION-2023 logbook, Contact: F.Wolff-Fabris, A.Galler, L.Samoylova, H.Sinn
ID   Date   Author   Group   Subject
  350   21 Sep 2023, 09:24   Jan Gruenert   PRC   SA1 status

Info by R.Kammering (mail 20.9.23 18h25), related to the SA1 intensity drops during large wavelength scans:

"We found some problematic part in the code of our wavelength control server.
After a lengthy debugging session we (Lars F., Olaf H. and me) did find a workaround which seems to have fixed the problem.
We are still not 100% sure if this will work under all circumstances, but this was the best we could achieve for today.
So finally we have to see how it behaves with a real scan from your side and, if needed, follow up on this at a later point in time."

Together with the findings of the test by FXE, this seems to have resolved the issue.

  351   22 Sep 2023, 00:56   Jan Gruenert   PRC   SA1 status

As agreed beforehand, PRC and SPB/SFX re-assessed the SA1 EPS power limit based on actual performance, required number of pulses and rep rate, at the agreed SPB photon energy of 7 keV.
The actual lasing performance is on average 1.3 mJ / pulse at the current 7 keV.
Before the change, the assumed minimum pulse energy per requested bunch was set to 3.0 mJ in the Big Brother EPS power limit SA1, where we had set the remote power limit to 5 W.
To allow the required 202 bunches at 0.5 MHz for SPB, I changed the assumed minimum pulse energy per requested bunch to 2.2 mJ while maintaining the 5 W limit (rather than the inverse, i.e. keeping 3.0 mJ and raising the limit).
Maintaining the 5 W limitation in this way ensures that even if, by human error during the shift change, a high number of pulses were requested at 5.5 keV, this could by no means damage the CRLs.
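
A minimal cross-check of the arithmetic behind these settings (a sketch only; the 10 Hz train repetition rate and the simple average-power formula are assumptions for illustration, not quoted from the actual EPS / Big Brother configuration):

# Hedged sketch: average-power check behind the EPS limit change described above.
# Assumption: EPS power = bunches_per_train * assumed_pulse_energy * train_rate (10 Hz).
BUNCHES_PER_TRAIN = 202      # requested bunches per train at 0.5 MHz intra-train rate
TRAIN_RATE_HZ = 10           # assumed train repetition rate

def eps_average_power_w(assumed_pulse_energy_mj):
    """Average power in W implied by the assumed per-bunch pulse energy."""
    return BUNCHES_PER_TRAIN * assumed_pulse_energy_mj * 1e-3 * TRAIN_RATE_HZ

print(eps_average_power_w(3.0))  # ~6.1 W -> 202 bunches would exceed the 5 W remote limit
print(eps_average_power_w(2.2))  # ~4.4 W -> 202 bunches stay below the 5 W remote limit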

Additionally the SPB/SFX and the BKR operators were reminded about the safe procedure for the shift handover:
a) first the number of bunches has to be set to a single bunch per train
b) then the wavelength can be changed
c) then the respective instrument makes sure that the beamline is in a safe state (e.g. CRL1 retracted) before a higher number of bunches is requested (in particular at 5.5 keV).

  354   22 Sep 2023, 17:46   Jan Gruenert   PRC   Status

SASE3 gas supply alarm

SCS called the PRC at 17h04 due to a loud audible alarm in XHEXP1 and also contacted TS-OCD.
It turned out to be the gas supply system for SASE3, an electrical cabinet on the balcony level in the SE-corner of XHEXP1.

Actions taken by PRC:
a) contacted TS-OCD (Christian Holz, Alexander Kondraschew)
b) terminated the audible alarm
c) the source of the alarm was: "Stroemungssensor -BF15.3+GSS3" (flow sensor)
d) from this point on, TS is taking over

1) The instruments need to receive a number to call to reach the hall crew.
2) The alarm was likely triggered by the SCS team since there was a problem with the Krypton gas supply to their XGM around the same time.

  356   23 Sep 2023, 07:31   Jan Gruenert   PRC   Issue

7:23: RC info:
an electrical power glitch interrupted beam operation; the machine is down. DESY has power back
and the accelerator is recovering beam towards the TLD, but no beam is possible into the TLDs; apparently there are power issues in Schenefeld.

7:32 SCS info:
SA1+SA2 have electricity, SA3 does not. TS is informed and on the way. Power went away around 6h50.
Lights in the hall remained on, but everything in SA3 is off, even regular power sockets. In karabo everything is red.
The hutch search system indicates a FAULT state.

7:45 FXE info:
FXE was not affected by any power loss, all systems are operational, but of course there is no beam due to SA3.
The only unusual thing was a (fire?) alarm audible at 6h53. FXE is ready and waiting for beam.

7:50 HED info:
all normal, just XFEL beam went away. RELAX laser not affected, all ok, waiting for beam.

7:55 FXE info:
The interlock fault lamp on the FXE interlock panel is RED. The interlock cannot be set. FXE will contact SRP (radiation protection).

8:00 RC :
MPS on-call is present at BKR. The machine will try to restart the south branch, but there also seems to be an interlock by an XTD8 (sic!) shutter (?!).

08:03 FXE / SRP:
SRP informs that they cannot help in this case; DESY MPS-OCD must solve this.

08:13 BKR:
BKR shift leader Dennis Haupt will try to contact MPS team.

Further updates in next entry https://in.xfel.eu/elog/OPERATION-2023/359

  359   23 Sep 2023, 08:50   Jan Gruenert   PRC   Issue

Updates on power glitch

8h50 No feedback from TS-OCD (still not arrived at SASE3 / SCS ?). Phone is busy. Apparently they are busy working on it in the hall (confirmed at 9h00).

There is a PANDORA radiation monitor which has no power, but it is needed to resolve the interlock issue in FXE + SCS.

Important:
The regular SCS phone cannot be reached anymore (no power). Please use this number instead to call the SCS hutch: 86448

Update 9h15
- SXP/SQS informed by email, SQS informed / confirmed by phone (Michael Meyer), they will check their hutch and equipment. No phone contact to SXP.
- FXE info: power back at PANDORA but interlock error persists

Beam has been back to SA2 / XTD6 since 7h35.

9h20 Info by TS-OCD / Christian Holz: power at SASE3 area in hall is now back.

9h25 FXE:
The affected PANDORA without power was PANDORA X05 in the SASE3 SXP hutch.
That PANDORA now has power back, but the FXE hutch interlock error still persists.
D3 Wolfgang Clement is informed and said he might have to come in to take care of the alarm.

9h40 info RC: 
"new" problem, now SA2 : Balcony MPS crate SA1 balcony room , communication lost to "DIXHEXPS1.3 server  - MPS condenser"

09:55 info FXE:
Wolfgang Clement was at FXE and has reset "all the burnthrough monitors", in total 5 units, "including also SXP and SPB optics". At least at FXE the X-ray interlock is now fully functional, confirmed.

10:05 Info SCS:
Their interlock panel shows an error, which probably just has to be acknowledged. FXE / SCS will communicate and solve it.

10h35 BKR info:
AIBS in all hutches of SA1/3 are blocking beam. Also DIXHEXPS1.3 is blocking beam to SA2. See att#1 and att#2.

11h10
EEE-OCD is working on AIBS / MPS from the PLC/karabo side.

11h25 EEE-OCD:
PLC errors cleared for all AIBS.

11h30
Beam to SA1/3 should be possible now. SA2 is still blocked. The MTCA crate apparently needs to be power cycled.
EEE-Fast Electronics OCD is informed through DOC.

12h00
EEE-PLC-OCD has succeeded in clearing the errors on the karabo/PLC side of the AIBS. Beam to SA1/3 is now re-established.
However, the MTCA crates XFELcpuDIXHEXPS2 (and XFELcpuSYNCPPL7) might have power but are not operational (MCH unreachable) and must be locally power-cycled.
These are "DESY-operated" crates, not "EuXFEL-MTCA-crates".
Nevertheless, DOC staff, with online instruction by EEE-FE, will go to the balcony room and reboot the crates locally as required/requested by RC. Hopefully this will bring back beam permission to SA2.

12h10
Another new issue: SA2 karabo is down. DOC is working on this. No control possible of anything in SASE2 tunnel via karabo. 
PRC instructs BKR to close the shutter between XTD2 and XTD6 (as agreed also with HED) for safety reasons, so that the beam won't be uncontrolled once SA2 gets beam permission back.

12h45 info from SCS:
a) the SCS pump-probe laser has been down since the morning (and they absolutely need it for the experiment); they are in contact with LAS-OCD, but LAS-OCD is waiting for DOC support to recover motor positions etc.
b) SCS received XFEL beam at 11h30 but since 12h33 the beam is intermittently interrupted (see att#3) or completely off. BKR informed but interruptions not yet understood.

12h50
The crate XFELcpuDIXHEXPS2 appeared to be revived. It reads OK on the DOOCS panel Controls --> MTCA crates --> XHEXP, but after some moments it is again in error (device offline).
Info RC: Tim Wilksen / DESY and team are working on this from remote and might come in if broken hardware needs to be exchanged.

13h40
SCS had reported at 12h45 that the beam is always set to zero. SCS and PRC investigated but couldn't find any EPS / karabo item that would do this.
We suspect that some karabo macro is asking to set the beam to 0 bunches, but cannot find anything.
We then disabled the user control of the number of bunches from BKR. Now SCS receives beam but has to call BKR if they want to change the number of bunches. To be followed up by DOC once they have time.

14h20
I see now that the crate is suddenly OK, which probably means that it was finally fixed by somebody. Beam permission to SA2 is back.
 

14h40
Beam is back in XTD1 (SA2) up to the shutter XTD1/XTD6 and checked. Currently 300 uJ.
Beam monitoring without karabo is possible in DOOCS with the XGM (unless vacuum problems come up) and the Transmissive Imager.
The HED shift team is now leaving until DOC has recovered karabo. Info will be circulated to HED@xfel.eu when this is achieved.

 

  360   23 Sep 2023, 14:49   Jan Gruenert   PRC   Status

Update of power outage aftermath

Achieved by now / status

  • SASE1
    • beam delivery and FXE operating fine (beam back since about 11h30)
  • SASE3
    • SA3 getting beam since about 11h30, again difficulties / no beam between 12h30 - 13h34, currently ok.
    • SASE3 Optical PP laser issues since the morning. DOC support was required by LAS-OCD (task completed by DOC at 14h30)
    • LAS-OCD is working on the SA3 PP laser and estimates to have it operational around 16h.
  • SASE2
    • the non-operational MTCA crate for MPS was successfully restarted and the AIBS alarms were cancelled in the PLC
    • beam permission was recovered and XFEL is lasing in XTD1 until shutter
    • karabo of SA2 is DOWN, DOC working on it (see below)

Main remaining issues:

  1. A complete restart of karabo for SASE2 is required. This is a long operation lasting several hours. DOC is working on it.
  2. SASE2 Online-GPFS is down, probably due to lack of cooling. Possible hardware damage, ITDM assessing and working on it.
  3. SASE3 optical PP laser to be brought back

  361   23 Sep 2023, 15:35   Jan Gruenert   PRC   Issue

SASE2 :
The unavailability of karabo and possibly missing cooling water is a problem for the tunnel components as well.
XGM in XTD6: server error since about 11am. We don't know the status of this device anymore, neither from DOOCS nor from karabo.
VAC-OCD is informed and will work with EEE-PLC-OCD to check and secure SA2 tunnel systems possibly via PLC.
Main concerns: XGM and cryo-cooler systems. Overheating of racks which contain XGM and monochromator electronics.

16h00 SA2 karabo is back online (partially)!

More good news:
together with VAC-OCD we see that the vacuum system of the SA2 tunnel is fine, unaffected by the power cut and the karabo outage. Pressures seem fine.

Janusz Malka found that, for the SA3 GPFS servers, the balcony room redundancy cooling water didn't start and no failure was reported.

16h25 DOC info
Cooling in rack rooms ok. GPFS servers are up, ITDM still checking. DAQ servers up.

16h30 XTD6-XGM is down.
Vacuum is ok but the MTCA crate cannot be reached / all RS232 connections are not responding. XPD and Fini are checking,
but it looks like an access is required. Raimund Kammering is also checking the crate.

16h30 FXE:
The optical laser shutter is in an error state. LAS-OCD is informed. Therefore FXE is not taking beam now.
Actually, the optical laser safety shutter between the FXE optical laser hutch and the experimental hutch has a problem and LAS-OCD cannot help. SRP to be contacted.

16h40 info ITDM:
recovery of the SA2 GPFS hardware after the cooling failure is completed. Everything seems to be working. Only a few power supplies are damaged, but there is redundancy.

16h40 info RC
SA2 beam is now blocked by Big Brother since it receives an EPS remote power limit of 0 W. This info is now masked until all SA2 karabo devices are back up (e.g. the bunchpattern-MDL is still down).

17h30 info by VAC-OCD
Vacuum systems SA1+SA2+SA3 are checked OK. Only some MDLs needed to be restarted.
There are some more problems with the cryo compressors on the monochromators: 2 at HED and 1 at MID. The vacuum pressure is ok, but if
users need the mono it will not be possible. VAC-OCD will check back with HED when they are back in the hutch.

  363   24 Sep 2023, 07:19   Jan Gruenert   PRC   Attention

Again: power outage!

As yesterday, at around 7am, power was lost in Schenefeld at SASE3. No beam permission, accelerator in TLD mode.
SASE3 has no power, TS-OCD informed and on the way. BKR informs MKK to check cooling water pumps.

PRC is requesting all people involved yesterday to take action according to what happened yesterday.

  365   24 Sep 2023, 08:05   Jan Gruenert   PRC   Status

8h05 Beam is switched back on in SASE2. HED ready for taking beam.

8h45 BKR is having problems turning on SA2 beam. Not due to any

Beam actually went down in all beamlines today at 7:02 (for the record).

9h00: Electricity comes back to SASE3.
We can see the SCS-XGM crate coming back on (that's how we can tell that power is back).

9h15 BKR struggles with SA2

9h20: Info from TS-OCD: Problem with the USV (uninterruptible power supply) at SASE3, all needs to be switched

9h30 BKR is working on making beam in SA1/3. SA1 beam on IMGFEL, switching to FXE conditions (5.6 keV).

9h08 FXE info:
D3 has reset the alarms for SA1 beamstop monitor 9.1 (burnthrough monitor) and SA3. FXE is ready to receive beam.

9h04 EEE-OCD informs:
Power has been restored to the LA3 PPL PLC.

9h08 DOC info:
Restoring settings for the LA3 PPL to their state from before today's power cut.

 

  389   14 Oct 2023, 09:48   Frederik Wolff-Fabris   PRC   Status

The machine went down overnight (at 4AM and 6AM) and could not be properly restarted by BKR.

Ongoing tuning is proceeding with the RC onsite since 6AM; instruments are informed and in direct contact with BKR.

Status 09:45AM: SA1 ~3 mJ; SA2 ~500 uJ; SA3 being restarted

Status 11:10AM: Beam restored in all SASEs; SA1 delivery from 10AM; SA2 delivery from 11:10; SA3 delivery from 10:45

  425   30 Oct 2023, 14:36   Jan Gruenert   PRC   Issue

The EPD reading system at the turnstiles of the XTD10 entrance was not working this morning.
A necessary ZZ access to XTD10 could not be performed in the morning and had to be postponed.
Resetting the system by Michael Prollius was not sufficient.
At 14h20 XO informed that the EPD system was fixed by D3 (an issue with Windows passwords) and is again operational.
The ZZ access by XRO concerning VSLIT is therefore proceeding now.

  426   30 Oct 2023, 23:18   Jan Gruenert   PRC   Status

At the end of the MON late shift, the tuning status and obtained lasing pulse energies (ATT#1) are as follows:

  • SA1 at 19 keV : 660 uJ +/- 23 uJ
  • SA2 at 18 keV : 1073 uJ +/- 55 uJ
  • SA3 at 1.2 keV : 4330 uJ +/- 122 uJ

Values are mean ± SD over a stable 2 min interval, using the fast XGM train average (INTENSITY.RAW.TRAIN); see the sketch at the end of this entry.

More tuning will happen overnight, and for SA3 the photon energies 2.5 keV and 3.0 keV will also be prepared.
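
A minimal sketch of how such numbers can be reduced offline (assumption: the fast-XGM train averages for the quoted interval have already been exported to a plain text file; the file name and data-access path are hypothetical and not part of this entry):

import numpy as np

# Hypothetical input: one INTENSITY.RAW.TRAIN value (in uJ) per 10 Hz train,
# exported for a stable ~2 min window (~1200 samples).
train_energy_uj = np.loadtxt("sa1_xgm_train_average_uj.txt")  # placeholder file name

mean_uj = train_energy_uj.mean()
sd_uj = train_energy_uj.std(ddof=1)  # sample standard deviation, quoted as "+/-"
print(f"{mean_uj:.0f} uJ +/- {sd_uj:.0f} uJ")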

  431   02 Nov 2023, 19:37   Jan Gruenert   PRC   Status

Typical daytime delivery status during the week:

  • SA1 at 19 keV : 841 uJ +/- 41 uJ
  • SA2 at 18 keV : 990 uJ +/- 80 uJ
  • SA3 at 3.0 keV : 2900 uJ +/- 138 uJ

Values are mean ± SD over a stable 2-hour interval, using the fast XGM train average (INTENSITY.RAW.TRAIN).

  435   04 Nov 2023, 08:15   Jan Gruenert   PRC   Issue

Beam down for SASE1/3: SASE1 XTD2 vacuum issue

The vacuum interlock of the SA1 / XTD2 solid attenuator area triggered, closing the beamline gate valves.
VAC-OCD is investigating. No beam possible for SA1/SA3 since 7h18.

The vacuum interlock trip is a real vacuum problem. It could be temporarily circumvented to re-establish beam operation in SA1/SA3, BUT IT IS NOT FIXED.
Beam re-established at 8h50.

Currently all attenuator arms are OUT. The vacuum pressure is still high but has returned below critical; however, any attenuator movement can increase the vacuum pressure from the leak.
Further assessment by VAC in the next hour(s) will decide if a tunnel intervention must happen today or can be delayed until after the weekend.

  436   04 Nov 2023, 13:49   Jan Gruenert   PRC   Issue

(Temporary) solution for the vacuum incident in SA1

This morning at 7h18 the vacuum interlock of SA1 closed the beamline valves and thus stopped the XFEL beam delivery for SA1 and SA3.
The reason was a real vacuum problem, with vacuum pressure critically rising in the SA1_XTD2_ATT solid attenuator.
Thanks to VAC-OCD and a workaround it was possible to re-establish beam operation for SA1/SA3 already by 8h50, but the problem is not fixed!

History data shows that movements of ATT progressively deteriorated the vacuum since FRI 3.11.2023 morning.
At the moment it is stabilized at a high level, but any further significant deterioration of the vacuum level in ATT would make a tunnel intervention unavoidable.

A vacuum work intervention in the tunnel will cause a beam delivery downtime for SA1/SA3 of at least 5 hours,
as we know from similar previous cases(*). This includes an extensive XTD2 undulator tunnel search,
which is required because this cannot happen as a ZZ access; the interlock must be broken.

In order to protect the user beamtimes in SA1 (SPB/SFX) and SA3 (SQS),
the following is necessary and agreed by the coordinators (PRC+RC) and VAC:

1) It is not allowed to move ATT until VAC has repaired the leak (PRC has locked the device)
2) The VAC group will enter XTD2 after the user beamtimes end on Monday, and will work on removing the vacuum leak. This will affect the FD studies scheduled for Monday.
3) SPB confirms that they can work in the current (OUT) state of the XTD2 attenuator
4) If FXE (in-house beamtime) cannot work with these same conditions as SPB/SFX (ATT out), then the FXE beamtime is cancelled
5) FXE is requested to contact PRC by phone

  438   05 Nov 2023, 12:18   Jan Gruenert   PRC   Issue

Accelerator down 9h23 - 12h

RC / BKR informs (10am) that due to a quadrupole power supply failure in A21 a tunnel access to XTL is required.
After the XTL intervention finished, BKR started beam in SA2 at 12h09.

However, when sending beam to SA1/3 a new problem occurred with magnets in that branch, which at the moment makes it impossible to deliver beam to SA1/SA3.
The experts are on their way, we will update when there is new information.

  439   05 Nov 2023, 12:47   Jan Gruenert   PRC   Issue

North branch down: another e-beam magnet problem

RC informs 12h45 that the linac and the south branch are back in operation and performance is as before the issues with the A21 quadrupole.
However, during the access to XTL, additional magnet problems occurred in the T4 and SA3 section.
Several magnet power supplies failed several times; MPS is working on this issue.
Additionally the SA3 burn-through radiation safety monitor was triggered which requires an onsite intervention by D3.
The downtime for SA1/SA3 is expected to last at least one hour.

  440   05 Nov 2023, 16:14   Jan Gruenert   PRC   Status

Finally, we're completely back in operation,
and the performance is recovered in all beamlines.

See also ttfinfo, e.g. https://ttfinfo.desy.de/XFELelog/show.jsp?dir=/2023/44/05.11_a&pos=2023-11-05T16:01:25

  469   18 Apr 2024, 23:19   Jan Gruenert   PRC   Issue

22h55:
HED informs that they have issues with the HED pulse picker in XTD6. It affects the ongoing experiment, but not to an extent that they cannot continue the beamtime.
It will be more an issue for the next beamtime starting this weekend, therefore they are looking into solutions, working together with DOC and EEE.
They might need a ZZ access to XTD6 to check the pulse picker mechanics outside vacuum.

  24   11 Feb 2023, 01:09   Jörg Hallmann   MID   Shift summary

Successful data taking for user samples.

Beam loss for ~15 min and a small issue with scannerX of the FSSS (could be fixed without support), but otherwise smooth operation.

Contact with ITDM since the data collection speed could be too high for the transfer speed from SDD to HDD - should be kept under investigation.

 

 
