Ticket 9248 - slurmctld hangs (deadlock?)
Summary: slurmctld hangs (deadlock?)
Status: RESOLVED FIXED
Product: Slurm
Classification: Unclassified
Component: slurmctld
Version: 20.02.3
Hardware: Linux / OS: Linux
Severity: 3 - Medium Impact
Assignee: Marshall Garey
Duplicates: 9255 9464 10936
 
Reported: 2020-06-17 08:44 MDT by Kilian Cavalotti
Modified: 2021-02-24 10:50 MST
CC List: 6 users

Site: Stanford
Version Fixed: 20.02.4 20.11.0pre1


Attachments
gdb output and slurmctld logs (5.72 MB, application/x-bzip)
2020-06-17 08:44 MDT, Kilian Cavalotti
Live process bt output (36.44 KB, application/gzip)
2020-06-17 09:12 MDT, Kilian Cavalotti
Ignore plane distribution (1.64 KB, patch)
2020-06-17 12:50 MDT, Marshall Garey

Description Kilian Cavalotti 2020-06-17 08:44:35 MDT
Created attachment 14701 [details]
gdb output and slurmctld logs

Hi,

Last night (06/16, 23:05), our slurmctld stopped responding to commands with the usual: "slurm_receive_msgs: Socket timed out on send/recv operation"

The slurmctld process was still running, though; it didn't abort or crash. Killing and restarting it made it work for a few minutes before it hung again (around 6:44). So right now, our scheduler is down (hence the Sev1 level).

From what I can see, the slurmctld process seems to be stuck on a pthread lock:

(gdb) bt
#0  0x00007fcf61bd32ce in pthread_rwlock_wrlock () from /lib64/libpthread.so.0
#1  0x0000000000472cac in lock_slurmctld (lock_levels=...) at locks.c:125
#2  0x000000000042f461 in _slurmctld_background (no_data=0x0) at controller.c:2144
#3  main (argc=<optimized out>, argv=<optimized out>) at controller.c:787

I'm attaching the output of a "thread apply all bt full" on the stuck process as well as the controller logs for 06/16 and 06/17.

Of course, any assistance to get the scheduler back up and running again would be very much appreciated.

Thanks!
-- 
Kilian
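(The attached output was captured with gdb's "thread apply all bt full", which prints a backtrace with local variables for every thread in the process.)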
Comment 2 Dominik Bartkiewicz 2020-06-17 08:59:37 MDT
Hi

Can you send me the output of these gdb commands?
I'm hoping thread 426 is still in _compute_plane_dist():

t 426
f 2
p *job_ptr

Dominik
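(For context: in gdb, "t 426" switches to thread 426, "f 2" selects stack frame 2 within that thread, and "p *job_ptr" prints the job record visible in that frame, identifying the job the scheduler was working on.)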
Comment 3 Kilian Cavalotti 2020-06-17 09:01:20 MDT
FWIW, the "hang" propagates from the primary controller to the backups: when we manually kill slurmctld on the primary host, the 1st backup controller takes over, and it hangs after a few minutes too, with the same backtrace.

Cheers,
-- 
Kilian
Comment 4 Kilian Cavalotti 2020-06-17 09:03:36 MDT
(In reply to Dominik Bartkiewicz from comment #2)
> Hi
> 
> Can you send me the output of these gdb commands?
> I'm hoping thread 426 is still in _compute_plane_dist():
> 
> t 426
> f 2
> p *job_ptr

I restarted slurmctld after sending the core dump, so this is from the saved core, not the current live instance.


(gdb) t 426
[Switching to thread 426 (Thread 0x7fcb45ada700 (LWP 30003))]
#0  0x00007fcf61bd32ce in pthread_rwlock_wrlock () from /lib64/libpthread.so.0


(gdb) f 2
#2  0x0000000000493260 in _slurm_rpc_node_registration (msg=0x7fcb45ad9e50, running_composite=<optimized out>) at proc_req.c:3178
3178    proc_req.c: No such file or directory.

(gdb) p *job_ptr
No symbol "job_ptr" in current context.


Thanks!
-- 
Kilian
Comment 7 Kilian Cavalotti 2020-06-17 09:12:10 MDT
Created attachment 14702 [details]
Live process bt output

Here's the gdb output from the currently running process.

I assume we're interested in thread #308 here, right?

(gdb) t 308
[Switching to thread 308 (Thread 0x7f46feaf7700 (LWP 5158))]
#0  0x00007f4701e3dc2a in dist_tasks_tres_tasks_avail (gres_task_limit=gres_task_limit@entry=0x0, job_res=job_res@entry=0x7f46f424f220, node_offset=node_offset@entry=0) at dist_tasks.c:1285
1285    dist_tasks.c: No such file or directory.

(gdb) f 3
#3  0x00007f4701e43512 in _job_test (job_ptr=job_ptr@entry=0x76beb00, node_bitmap=node_bitmap@entry=0x7f46f42b4c90, min_nodes=min_nodes@entry=1, max_nodes=max_nodes@entry=500000, req_nodes=req_nodes@entry=1, mode=mode@entry=2, cr_type=cr_type@entry=20,
    job_node_req=job_node_req@entry=NODE_CR_ONE_ROW, cr_part_ptr=0x7bd00e0, node_usage=0x7c01160, exc_cores=<optimized out>, exc_cores@entry=0x0, prefer_alloc_nodes=prefer_alloc_nodes@entry=false, qos_preemptor=qos_preemptor@entry=false,
    preempt_mode=preempt_mode@entry=false) at job_test.c:1569
1569    job_test.c: No such file or directory.

(gdb) p *job_ptr
$1 = {magic = 4038539564, account = 0x76befc0 "tpd", admin_comment = 0x0, alias_list = 0x0, alloc_node = 0x76bef90 "sh01-ln03", alloc_resp_port = 0, alloc_sid = 135066, array_job_id = 0, array_task_id = 4294967294, array_recs = 0x0, assoc_id = 26304,
  assoc_ptr = 0x1f485a0, batch_features = 0x0, batch_flag = 1, batch_host = 0x0, billable_tres = 4294967294, bit_flags = 301989928, burst_buffer = 0x0, burst_buffer_state = 0x0, clusters = 0x0, comment = 0x0, cpu_cnt = 0, cpus_per_tres = 0x0, cr_enabled = 0,
  db_flags = 0, db_index = 1082804796, deadline = 0, delay_boot = 0, derived_ec = 0, details = 0x76be800, direct_set_prio = 0, end_time = 0, end_time_exp = 0, epilog_running = false, exit_code = 0, fed_details = 0x0, front_end_ptr = 0x0, gres_list = 0x0,
  gres_alloc = 0x0, gres_detail_cnt = 0, gres_detail_str = 0x0, gres_req = 0x0, gres_used = 0x0, group_id = 57118, het_details = 0x0, het_job_id = 0, het_job_id_set = 0x0, het_job_offset = 0, het_job_list = 0x0, job_id = 2525674, job_next = 0x0,
  job_array_next_j = 0x0, job_array_next_t = 0x0, job_preempt_comp = 0x0, job_resrcs = 0x7f46f424f220, job_state = 0, kill_on_node_fail = 1, last_sched_eval = 1592405701, licenses = 0x0, license_list = 0x0, limit_set = {qos = 0, time = 0, tres = 0x76bead0},
  mail_type = 0, mail_user = 0x0, mem_per_tres = 0x0, mcs_label = 0x0, name = 0x76bef70 "data", network = 0x0, next_step_id = 0, nodes = 0x0, node_addr = 0x0, node_bitmap = 0x0, node_bitmap_cg = 0x0, node_cnt = 0, node_cnt_wag = 1, nodes_completing = 0x0,
  origin_cluster = 0x0, other_port = 0, partition = 0x76bef50 "iric", part_ptr_list = 0x0, part_nodes_missing = false, part_ptr = 0x2b1b670, power_flags = 0 '\000', pre_sus_time = 0, preempt_time = 0, preempt_in_progress = false, prep_epilog_cnt = 0,
  prep_prolog_cnt = 0, prep_prolog_failed = false, priority = 102666, priority_array = 0x0, prio_factors = 0x76be9c0, profile = 0, qos_id = 3, qos_ptr = 0x1b2a2b0, qos_blocking_ptr = 0x0, reboot = 0 '\000', restart_cnt = 0, resize_time = 0, resv_id = 0,
  resv_name = 0x0, resv_ptr = 0x0, requid = 4294967295, resp_host = 0x0, sched_nodes = 0x0, select_jobinfo = 0x76befe0, site_factor = 2147483648, spank_job_env = 0x0, spank_job_env_size = 0, start_protocol_ver = 8960, start_time = 0, state_desc = 0x0,
  state_reason = 1, state_reason_prev = 1, state_reason_prev_db = 0, step_list = 0x76bea60, suspend_time = 0, system_comment = 0x0, time_last_active = 1592405648, time_limit = 10, time_min = 0, tot_sus_time = 0, total_cpus = 1, total_nodes = 0, tres_bind = 0x0,
  tres_freq = 0x0, tres_per_job = 0x0, tres_per_node = 0x0, tres_per_socket = 0x0, tres_per_task = 0x0, tres_req_cnt = 0x76bf1a0, tres_req_str = 0x76bf130 "1=1,2=4000,4=1,5=1", tres_fmt_req_str = 0x76bf160 "cpu=1,mem=4000M,node=1,billing=1", tres_alloc_cnt = 0x0,
  tres_alloc_str = 0x0, tres_fmt_alloc_str = 0x0, user_id = 356908, user_name = 0x0, wait_all_nodes = 0, warn_flags = 0, warn_signal = 0, warn_time = 0, wckey = 0x0, req_switch = 0, wait4switch = 0, best_switch = true, wait4switch_start = 0}

Cheers,
--
Kilian
Comment 9 Dominik Bartkiewicz 2020-06-17 09:18:00 MDT
Hi

Can you try to scancel the problematic job 2525674?


Dominik
Comment 10 Kilian Cavalotti 2020-06-17 09:28:17 MDT
(In reply to Dominik Bartkiewicz from comment #9)
> Hi
> 
> Can you try to scancel the problematic job 2525674?

I checked the backup controller's process, which is also stuck the same way with the same backtrace. Using the same procedure (getting the thread in _compute_plane_dist() and finding the job being tested in _job_test()), I found the same job id: 2525674.

So it looks like things are all pointing towards that same job, indeed. I restarted slurmctld and scancel'd 2525674 right away, and it seems to have worked. 

Our controllers are back up and running now, and I'm lowering the severity of the bug. Thanks a lot!


Now, it'd be great to understand why that job caused that issue. Here are the details about it:

JobId=2525674 JobName=data
   UserId=fvcalver(356908) GroupId=tpd(57118) MCS_label=N/A
   Priority=102666 Nice=0 Account=tpd QOS=normal
   JobState=CANCELLED Reason=Priority Dependency=(null)
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0
   RunTime=00:00:00 TimeLimit=00:10:00 TimeMin=N/A
   SubmitTime=2020-06-16T23:04:05 EligibleTime=2020-06-16T23:04:05
   AccrueTime=Unknown
   StartTime=2020-06-17T08:21:06 EndTime=2020-06-17T08:21:06 Deadline=N/A
   PreemptEligibleTime=2020-06-17T08:21:06 PreemptTime=None
   SuspendTime=None SecsPreSuspend=0 LastSchedEval=2020-06-17T08:21:04
   Partition=iric AllocNode:Sid=sh01-ln03:135066
   ReqNodeList=(null) ExcNodeList=(null)
   NodeList=(null)
   NumNodes=1 NumCPUs=1 NumTasks=0 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
   TRES=cpu=1,mem=4000M,node=1,billing=1
   Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
   MinCPUsNode=1 MinMemoryCPU=4000M MinTmpDiskNode=0
   Features=(null) DelayBoot=00:00:00
   OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)
   Command=/home/users/fvcalver/dqmc/scripts/submit_waffle.sh
   WorkDir=/home/users/fvcalver/dqmc/scripts
   StdErr=/home/users/fvcalver/dqmc/scripts/slurm-2525674.out
   StdIn=/dev/null
   StdOut=/home/users/fvcalver/dqmc/scripts/slurm-2525674.out
   Power=
   MailUser=(null) MailType=NONE


I don't see anything obviously suspicious about it, but its submit time (2020-06-16T23:04:05) is really close to when the controller first hung (06/16 23:05), which seems to indicate that the issue started right when that job was submitted.


Cheers,
-- 
Kilian
Comment 12 Marshall Garey 2020-06-17 11:04:33 MDT
The problem is with the "plane" distribution. I reproduced this with a job that requests the plane distribution. slurmctld gets stuck in an infinite loop. It works in 19.05, though, so it's clearly a regression. There was a bit of new logic added in and the old logic reworked.

Before we figure out a patch, I recommend using a job submit plugin to block jobs requesting the plane distribution so this doesn't happen again. Here's a sample job_submit.lua plugin that does this. Is this an acceptable workaround for now?


function slurm_job_submit(job_desc, part_list, submit_uid)
	if job_desc.task_dist == SLURM_DIST_PLANE then
		slurm.log_error("Job requests plane distribution; this doesn't work in 20.02; denying job.")
		return slurm.FAILURE
	end
	return slurm.SUCCESS
end
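
(For reference: deploying this requires JobSubmitPlugins=lua in slurm.conf, with the script saved as job_submit.lua in the same directory as slurm.conf on the controller.)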
Comment 13 Kilian Cavalotti 2020-06-17 11:17:26 MDT
(In reply to Marshall Garey from comment #12)
> The problem is with the "plane" distribution. I reproduced this with a job
> that requests the plane distribution. slurmctld gets stuck in an infinite
> loop. It works in 19.05, though, so it's clearly a regression. There was a
> bit of new logic added in and the old logic reworked.
> 
> Before we figure out a patch, I recommend using a job submit plugin to block
> jobs requesting the plane distribution so this doesn't happen again. Here's
> a sample job_submit.lua plugin that does this. Is this an acceptable
> workaround for now?
> 
> 
> function slurm_job_submit(job_desc, part_list, submit_uid)
> 	if job_desc.task_dist == SLURM_DIST_PLANE then
> 		slurm.log_error("Job requests plane distribution; this doesn't work in
> 20.02; denying job.")
> 		return slurm.FAILURE
> 	end
> 	return slurm.SUCCESS
> end


That's perfectly acceptable, yes. Thank you very much for the suggestion; I'll implement it right away and follow up with the user to understand why they requested that option.

Well, actually, it looks like the -m option was mistakenly used instead of --mem. Here's the job:

-- 8< -----------------------------------
#!/bin/sh
#SBATCH --job-name=data
#SBATCH --time=00:10:00
#SBATCH -m=8gb
#SBATCH --qos=normal
#SBATCH -p iric
#SBATCH -c 1

python waffle_process_data.py
-- 8< -----------------------------------




Thanks!
-- 
Kilian
Comment 14 Marshall Garey 2020-06-17 11:25:52 MDT
Hmm, that's interesting. I wouldn't expect that syntax to request a plane distribution. I'll look into that and see if that's intended.

Also, I found that my little job submit plugin rejects jobs that request any distribution method, not just plane. I'm not sure why. I tried using plane_size but that's not set with that syntax "-m=8gb"; so for now you might have to just reject all jobs that request a distribution method until this is solved.
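
In hindsight, a plausible explanation for the blanket rejections (borne out by comment 20 below): job_desc.task_dist is not exposed to Lua in 20.02, so it evaluates to nil, and the bare global SLURM_DIST_PLANE in the sample plugin is likewise nil, making the nil == nil comparison true for every job. A defensive variant of the check, sketched below under those assumptions (slurm.SLURM_DIST_PLANE is an assumed constant name, not a confirmed part of the 20.02 Lua API), would at least avoid matching on an unexposed field:

function slurm_job_submit(job_desc, part_list, submit_uid)
	-- Hypothetical, defensive variant of the plugin from comment 12:
	-- only compare when both values are actually defined, so an
	-- unexposed field can never match by accident.
	local dist = job_desc.task_dist      -- nil in 20.02 (field not exposed)
	local plane = slurm.SLURM_DIST_PLANE -- assumed name; nil if not exported
	if dist ~= nil and plane ~= nil and dist == plane then
		slurm.log_error("Job requests plane distribution; this doesn't work in 20.02; denying job.")
		return slurm.FAILURE
	end
	return slurm.SUCCESS
end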
Comment 15 Kilian Cavalotti 2020-06-17 11:28:30 MDT
(In reply to Marshall Garey from comment #14)
> Hmm, that's interesting. I wouldn't expect that syntax to request a plane
> distribution. I'll look into that and see if that's intended.

Yes, maybe the job could simply be rejected if its argument is not among the expected "arbitrary|block|cyclic|plane" keywords?

> Also, I found that my little job submit plugin rejects jobs that request any
> distribution method, not just plane. I'm not sure why. I tried using
> plane_size but that's not set with that syntax "-m=8gb"; so for now you
> might have to just reject all jobs that request a distribution method until
> this is solved.

That's fine, I don't think any of our users care about the distribution method.

By the way, is there a way to list all the jobs that may have already been submitted with -m / --distribution? I mean, other than waiting for the next deadlock? :)
I didn't find any obvious way with squeue.

Thanks!
-- 
Kilian
Comment 16 Kilian Cavalotti 2020-06-17 11:35:44 MDT
About the parsing of the -m option, I tried these:

$ srun -m foobar -p test --pty bash
srun: error: Invalid --distribution specification

$ srun -m 8gb -p test --pty bash
srun: error: Invalid --distribution specification

$ srun -m:8gb -p test --pty bash
srun: error: Invalid --distribution specification

$ srun -m=8gb -p test --pty bash
srun: job 2531626 queued and waiting for resources

And I locked up slurmctld :)

Cheers,
-- 
Kilian
Comment 17 Marshall Garey 2020-06-17 11:43:27 MDT
Thanks for the extra data points. Just to be clear, when you locked up slurmctld you didn't have the code in the job submit plugin yet, correct?

I haven't yet found an easy way to find all jobs that have requested distribution; perhaps there's a way with the REST API or just the normal C API. I'll look into it. However, this code is executed whenever the scheduler tries to run the job, which happens on job submission if you don't have SchedulerParameters=defer. Do you have defer in place? If not, I don't think you have to worry about it, but I'll still look at this.
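
(SchedulerParameters=defer, set in slurm.conf, tells the controller not to attempt to schedule each job individually at submit time, deferring scheduling to the periodic runs of the scheduler.)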
Comment 18 Marshall Garey 2020-06-17 11:45:07 MDT
Actually I'm not sure about that section of code being called on job submission, so forget about the defer bit.
Comment 19 Kilian Cavalotti 2020-06-17 11:55:48 MDT
(In reply to Marshall Garey from comment #17)
> Thanks for the extra data points. Just to be clear, when you locked up
> slurmctld you didn't have the code in the job submit plugin yet, correct?

Correct, yes, that was before I added the check in the job submit plugin.

> I haven't yet found an easy way to find all jobs that have requested
> distribution; perhaps there's a way with the REST API or just the normal C
> API. I'll look into it. However, this code is executed whenever the
> scheduler tries to run the job, which happens on job submission if you don't
> have SchedulerParameters=defer. Do you have defer in place? If not, I don't
> think you have to worry about it, but I'll still look at this.

Gotcha. We don't have "defer", so I guess if any other job had the same malformed option, we would already know about it.

> Actually I'm not sure about that section of code being called on job submission, so forget about the defer bit.

Given the initial lockup happened a few seconds after that job submission, and also that my test job hung the controller almost immediately, I think we can safely assume that code path is hit pretty early in a job's life.


Anyway, we're good with the job submit plugin filtering for now. Thank you!

Cheers,
-- 
Kilian
Comment 20 Kilian Cavalotti 2020-06-17 12:06:53 MDT
Actually, I have to take that back: the job submit plugin bit seems to be rejecting all jobs. I'm not sure the task_dist job property is exposed in the job_submit plugin, is it? There's no mention of job_desc->task_dist in https://github.com/SchedMD/slurm/blob/master/src/plugins/job_submit/lua/job_submit_lua.c

I tried to log job_desc.task_dist for each job in the job_submit plugin, and even those submitted with "--distribution cyclic" show "task_dist: n/a".

Cheers,
-- 
Kilian
Comment 21 Kilian Cavalotti 2020-06-17 12:46:33 MDT
Looks like task_dist is part of the slurm_step_layout_req_t structure, so it's a step property, and I'm not sure it even exists yet when the job is submitted and goes through the job submit plugin. :\

Any other idea about how to filter those jobs? 

Cheers,
-- 
Kilian
Comment 22 Marshall Garey 2020-06-17 12:50:26 MDT
Created attachment 14705 [details]
Ignore plane distribution

Darn it, you're right. That's another thing I'll look at, then: potentially adding those to the job submit lua plugin. Can you apply this patch instead? It just completely ignores the plane distribution method. I commented out the part of the code that calls the function with the infinite loop and forced the code to take the other branch (cyclic distribution). And just for good measure I log an error and return an error in that bad function, in case I missed any spots that might call it.


With the patch, these jobs are running fine:


$ srun -m plane=1 whereami
0000 n1-1 - Cpus_allowed:       0101    Cpus_allowed_list:      0,8
$ srun -m=8gb whereami
0000 n1-1 - Cpus_allowed:       0101    Cpus_allowed_list:      0,8
Comment 23 Kilian Cavalotti 2020-06-17 13:03:14 MDT
(In reply to Marshall Garey from comment #22)
> Created attachment 14705 [details]
> Ignore plane distribution
> 
> Darn it, you're right. That's another thing I'll look at, then: potentially
> adding those to the job submit lua plugin. Can you apply this patch instead?
> It just completely ignores the plane distribution method. I commented out
> the part of the code that calls the function with the infinite loop and forced
> the code to take the other branch (cyclic distribution). And just for good
> measure I log an error and return an error in that bad function just in case
> I missed any spots that might call it.
> 
> 
> With the patch, these jobs are running fine:
> 
> 
> $ srun -m plane=1 whereami
> 0000 n1-1 - Cpus_allowed:       0101    Cpus_allowed_list:      0,8
> $ srun -m=8gb whereami
> 0000 n1-1 - Cpus_allowed:       0101    Cpus_allowed_list:      0,8

Thank you, I'm applying it now.

I guess in addition to fixing the deadlock/infinite loop that is triggered when a plane distribution is requested, making sure that "-m=8gb" is rejected as an invalid option would be helpful too.

Cheers,
-- 
Kilian
Comment 24 Marshall Garey 2020-06-17 13:04:33 MDT
(In reply to Kilian Cavalotti from comment #23)
> Thank you, I'm applying it now.

Let me know if there are any issues with applying the patch or issues afterward.


> I guess in addition to fixing the deadlock/infinite loop that is triggered
> when a plane distribution is requested, making sure that "-m=8gb" is
> rejected as an invalid option would be helpful too.

Absolutely, that's on my to-do list.
Comment 25 Kilian Cavalotti 2020-06-17 13:12:57 MDT
(In reply to Marshall Garey from comment #24)
> Let me know if there are any issues with applying the patch or issues
> afterward.

The patch applied cleanly, and things seem to be working fine. I tried to submit a job with "-m=8gb", and that didn't lock up the controller. So that's good :)

> > I guess in addition to fixing the deadlock/infinite loop that is triggered
> > when a plane distribution is requested, making sure that "-m=8gb" is
> > rejected as an invalid option would be helpful too.
> 
> Absolutely, that's on my to-do list.

Excellent, thank you!

Cheers,
-- 
Kilian
Comment 26 Brigitte May 2020-06-19 08:16:02 MDT
The -m option was mistakenly used instead of --mem. We installed the necessary code and can confirm that slurmctld is no longer getting stuck.
Comment 27 Marshall Garey 2020-06-19 08:24:46 MDT
*** Ticket 9255 has been marked as a duplicate of this ticket. ***
Comment 39 Marshall Garey 2020-07-10 11:57:56 MDT
Hi Kilian,

The following range of commits fixes this issue and a few other distribution-related issues in 20.02. They'll be in 20.02.4:

eb79918f810..af21d0103237

Specifically, commit eb79918f810 fixes the regression that caused the infinite loop, and commits 6beb50bc81fdaf and 88d142f185576597 fix issues with parsing the -m,--distribution CLI option.
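
(The full series can be inspected with "git log eb79918f810..af21d0103237" in the Slurm source repository.)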

I'm closing this as resolved/fixed. Let us know if you run into any more problems.
Comment 40 Kilian Cavalotti 2020-07-10 11:59:32 MDT
Very cool, thanks for the update!

Cheers,
Comment 41 Marshall Garey 2020-07-27 11:07:59 MDT
*** Ticket 9464 has been marked as a duplicate of this ticket. ***
Comment 42 Simon Raffeiner 2020-07-27 11:08:15 MDT
Dear sender,


I am on vacation until August 2, 2020.


Please redirect technical questions about the HPC systems to the bwSupportPortal [1].


All other questions, especially about procurements, administrative topics or public relations should be directed to the head of the department, Dr. Jennifer Schröter (jennifer.schroeter@kit.edu).


E-Mails are not being forwarded during my absence.


kind regards,

Simon Raffeiner



[1] https://bw-support.scc.kit.edu/
Comment 43 Marshall Garey 2021-02-24 10:50:15 MST
*** Ticket 10936 has been marked as a duplicate of this ticket. ***