Dear SchedMD,

Happy New Year. I am trying to find the appropriate job_descriptor struct element representing the "--mem" option specified in a job submission. I would like to check whether this option was specified by the user in the job submit plugin (lua), and take appropriate actions based on that. I found pn_min_memory, but that seems to be --mem-per-cpu and not --mem. Any help here is greatly appreciated.

Thank you,
Amit
I just happened to read that since mem and mem-per-cpu are mutually exclusive, this option (pn_min_memory) would indicate either mem-per-node or mem-per-cpu. If this is true then no worries on this issue; please let me know otherwise.

Thank you,
Amit
Also curious whether I have the ability to selectively redirect a message from the job submit plugin to the user's stdout or stderr? And whether this is recommended or not?

Thank you,
Amit
(In reply to Amit Kumar from comment #1)
> I just happened to read that since mem and mem-per-cpu are mutually
> exclusive, this option (pn_min_memory) would indicate either mem-per-node
> or mem-per-cpu. If this is true then no worries on this issue; please let
> me know otherwise.
>
> Thank you,
> Amit

The information is in the same field. You can use the flag "SLURM.MEM_PER_CPU" in the LUA plugin to determine how to interpret the field. If that flag (it's just the top bit) is set, then subtract the value and you'll have the --mem-per-cpu value. If it's not set, then it's the --mem value.
(In reply to Amit Kumar from comment #2)
> Also curious whether I have the ability to selectively redirect a message
> from the job submit plugin to the user's stdout or stderr? And whether
> this is recommended or not?
>
> Thank you,
> Amit

Write to "slurm.log_info". There is an example distributed with Slurm here:

https://github.com/SchedMD/slurm/blob/master/contribs/lua/job_submit.lua

  slurm.log_info("slurm_job_submit: job from uid %u, setting default partition value: %s",
                 job_desc.user_id, new_partition)
(In reply to Moe Jette from comment #3)
> The information is in the same field. You can use the flag
> "SLURM.MEM_PER_CPU" in the LUA plugin to determine how to interpret the
> field. If that flag (it's just the top bit) is set, then subtract the
> value and you'll have the --mem-per-cpu value. If it's not set, then it's
> the --mem value.

Sorry, that should be "slurm.MEM_PER_CPU"
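In job_submit.lua, the subtract-the-flag check described above might look like the sketch below. The MEM_PER_CPU constant is defined locally (as 2^63, on the assumption that the flag is the top bit of a 64-bit field) only so the snippet runs outside slurmctld; inside the plugin you would use slurm.MEM_PER_CPU and test job_desc.pn_min_memory:

```lua
-- Sketch of decoding pn_min_memory, assuming MEM_PER_CPU is the top bit.
-- Inside job_submit.lua, use slurm.MEM_PER_CPU instead of this local.
local MEM_PER_CPU = 2^63

local function decode_memory(pn_min_memory)
    if pn_min_memory >= MEM_PER_CPU then
        -- Flag set: the user gave --mem-per-cpu; subtract the flag bit.
        return "per_cpu", pn_min_memory - MEM_PER_CPU
    end
    -- Flag clear: the user gave --mem (memory per node).
    return "per_node", pn_min_memory
end
```

For example, a job submitted with --mem-per-cpu=4096 would carry pn_min_memory equal to 4096 plus the flag bit, and decode_memory would return "per_cpu", 4096.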
Dear Moe,

Thank you for the quick response. This (slurm.MEM_PER_CPU) is helpful, and I will use it.

As far as writing to the log: using "slurm.log_info" works fine if I want the messages logged to the default syslog (slurmctld logs). I was wondering if I could redirect specific messages to the job's stdout and stderr?

Thank you,
Amit
(In reply to Amit Kumar from comment #6)
> Dear Moe,
>
> Thank you for the quick response. This (slurm.MEM_PER_CPU) is helpful, and
> I will use it.
>
> As far as writing to the log: using "slurm.log_info" works fine if I want
> the messages logged to the default syslog (slurmctld logs).

Correct. You also have log_error(), log_verbose(), log_debug(), etc.

> I was wondering if I could redirect specific messages to the job's stdout
> and stderr?

user_msg() will write to the user's stdout:

  slurm.user_msg("Whatever...")
I tried slurm.user_msg("Whatever...") but with no success. I tried slurm.log_debug4 etc., but no success either. I also restarted slurmctld, but no luck.

I don't see any errors in the slurmctld logs when I use slurm.user_msg(...), but at the same time I don't see any output in the user's stdout or stderr files.

Am I missing something?

Thank you,
Amit
(In reply to Amit Kumar from comment #8)
> I tried slurm.user_msg("Whatever...") but with no success. I tried
> slurm.log_debug4 etc., but no success either. I also restarted slurmctld,
> but no luck.
>
> I don't see any errors in the slurmctld logs when I use
> slurm.user_msg(...), but

slurm.user_msg and slurm.log_user are used to write to the user's stderr, not to the logs.

> at the same time I don't see any output in the user's stdout or stderr
> files.
>
> Am I missing something?

Hi Amit,

slurm.user_msg and slurm.log_user messages are only shown when job_submit.lua ends with a return value other than slurm.SUCCESS. Bug 4038 is open regarding this issue and *it is solved in 17.11.0* for the lua slurm_job_submit function. Work is in progress for the slurm_job_modify function.
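As a minimal sketch of the behavior described above: before 17.11, the slurm.log_user message only reaches the user because the function returns something other than slurm.SUCCESS. The 65536 MB per-node cap and the message text below are made up for illustration, not a Slurm default:

```lua
-- Sketch for job_submit.lua: surface a friendly message to the user.
-- The 65536 MB cap is a hypothetical site policy, purely illustrative.
function slurm_job_submit(job_desc, part_list, submit_uid)
    local limit_mb = 65536
    if job_desc.min_mem_per_node and job_desc.min_mem_per_node > limit_mb then
        slurm.log_user("Requested --mem=%dM exceeds the %dM per-node limit",
                       job_desc.min_mem_per_node, limit_mb)
        -- Pre-17.11, the message above is only delivered because we reject
        -- the job here instead of returning slurm.SUCCESS.
        return slurm.ERROR
    end
    return slurm.SUCCESS
end
```

This fragment only runs inside slurmctld, which provides the slurm table and job_desc fields.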
(In reply to Moe Jette from comment #7)
> Correct. You also have log_error(), log_verbose(), log_debug(), etc.
>
> user_msg() will write to the user's stdout:
>
>   slurm.user_msg("Whatever...")

Hi Amit,

Just FYI, you have these two fields in a lua script that contain the values you are looking for:

  job_desc.min_mem_per_node
  job_desc.min_mem_per_cpu

Find all the available fields here:

  grep -r xstrcmp src/plugins/job_submit/lua/job_submit_lua.c

Example:

  function slurm_job_submit( job_desc, part_list, submit_uid )
      if (job_desc.min_mem_per_cpu) then
          slurm.log_info("min_mem_per_cpu:%d", job_desc.min_mem_per_cpu)
      end
      if (job_desc.min_mem_per_node) then
          slurm.log_info("min_mem_per_node:%d", job_desc.min_mem_per_node)
      end
      return slurm.SUCCESS
  end

At the C level you can also see how these bits are set by looking at the _job_rec_field function in job_submit_lua.c. There you will find what Moe was talking about: the AND of job_ptr->details->pn_min_memory & MEM_PER_CPU determines whether min_mem_per_cpu or min_mem_per_node is set:

  } else if (!xstrcmp(name, "min_mem_per_node")) {
      if (job_ptr->details &&
          !(job_ptr->details->pn_min_memory & MEM_PER_CPU))
          lua_pushnumber(L, job_ptr->details->pn_min_memory);
      else
          lua_pushnil(L);
  } else if (!xstrcmp(name, "min_mem_per_cpu")) {
      if (job_ptr->details &&
          (job_ptr->details->pn_min_memory & MEM_PER_CPU))
          lua_pushnumber(L, job_ptr->details->pn_min_memory & ~MEM_PER_CPU);
      else
          lua_pushnil(L);

Let me know if all your questions are solved now, also regarding comment 9.

Thanks,
Felip
Amit, can we close this bug? Are all your concerns about this resolved?
Closing this bug as we didn't get a response in 2 weeks and all required info has been given.
Hi Felip,

Apologies for not responding earlier; I had to be off work for a family emergency for over 2 weeks.

My original intention in digging through the pn_min_memory option was to be able to display a friendly message in the user's stdout, based on the choices they have made for the specific options and queues.

Thank you for your information and for clarifying that bug 4038 was being worked on so that slurm.user_msg and slurm.log_user will work normally. I see 4038 has been part of the 17.11 release. Would there be a patch available for 17.09 if I wanted to give this a try?

Thank you,
Amit