Ticket 15378 - Can a limit be set on number of jobs per user if accountingstorageenforce is not set?
Status: RESOLVED INFOGIVEN
Alias: None
Product: Slurm
Classification: Unclassified
Component: Scheduling
Version: 22.05.2
Hardware: Linux
Severity: 4 - Minor Issue
Assignee: Ben Roberts
QA Contact:
URL:
Depends on:
Blocks:
 
Reported: 2022-11-08 20:25 MST by Renata Dart
Modified: 2023-01-05 12:20 MST

See Also:
Site: SLAC
Slinky Site: ---
Alineos Sites: ---
Atos/Eviden Sites: ---
Confidential Site: ---
Coreweave sites: ---
Cray Sites: ---
DS9 clusters: ---
Google sites: ---
HPCnow Sites: ---
HPE Sites: ---
IBM Sites: ---
NOAA Site: ---
NoveTech Sites: ---
Nvidia HWinf-CS Sites: ---
OCF Sites: ---
Recursion Pharma Sites: ---
SFW Sites: ---
SNIC sites: ---
Tzag Elita Sites: ---
Linux Distro: ---
Machine Name:
CLE Version:
Version Fixed:
Target Release: ---
DevPrio: ---
Emory-Cloud Sites: ---


Attachments
slurm.conf (5.64 KB, text/plain)
2022-11-08 20:25 MST, Renata Dart
Details

Description Renata Dart 2022-11-08 20:25:47 MST
Created attachment 27661 [details]
slurm.conf

Hi SchedMD, at the moment we don't have AccountingStorageEnforce set, so that any user can submit jobs without having a user entry or account set up.  There is only one partition that everyone runs in.  Given this setup, is there a way to set a limit on the number of jobs users can have queued/pending?  Also, all jobs currently get the same priority; is there a way to implement some kind of weighting factor based on the number of jobs a user has queued or running?

Thanks,
Renata
Comment 1 Jason Booth 2022-11-09 11:36:44 MST
> Can a limit be set on number of jobs per user if accountingstorageenforce is not set?

Unfortunately, no. These limits are enabled through the database and accounting, and enforcing limits will implicitly enable association enforcement as well:

> AccountingStorageEnforce = associations,limits

If you want this behavior, you will need the MaxSubmitJobs option, which is gated behind AccountingStorageEnforce:
 
https://slurm.schedmd.com/sacctmgr.html#OPT_MaxSubmitJobs
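For reference, once accounting is in place the pieces would fit together roughly like this (a sketch only; `someuser` and the limit value are placeholders, and the sacctmgr command assumes a working slurmdbd):

```shell
# slurm.conf -- enforcing limits also implies association enforcement:
#   AccountingStorageEnforce=associations,limits
# With that in place, cap total queued+running jobs per user, e.g.:
sacctmgr modify user someuser set MaxSubmitJobs=100
```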
Comment 2 Renata Dart 2022-11-09 11:58:36 MST
Hi Jason, are there any limits that can be applied through a partition
qos?  Also, all jobs get the same priority, is there a way to get
priority working without the AccountingStorageEnforce, maybe a
weighting factor based on the number of jobs they have queued or running?

Thanks,
Renata
Comment 3 Jason Booth 2022-11-09 13:03:06 MST
> Hi Jason, are there any limits that can be applied through a partition
> qos? 

Partition QOSes are enforced through AccountingStorageEnforce, just like normal QOSes and limits.


> Also, all jobs get the same priority, is there a way to get
> priority working without the AccountingStorageEnforce, maybe a
> weighting factor based on the number of jobs they have queued or running?

Priority is enabled by setting PriorityType=priority/multifactor, not through AccountingStorageEnforce, so you can make use of priority separately, although some aspects of this plugin do require the Slurm accounting database to be set up.
Comment 4 Renata Dart 2022-11-09 13:38:47 MST
Hi Jason, I hate to bother you with these questions because I am
really trying to switch over to AccountingStorageEnforce, but in
the meantime I am trying to get this to work better.  For the 
priority question, our jobs look like:

         JOBID PARTITION   PRIORITY       SITE        AGE  FAIRSHARE    JOBSIZE
         776048 roma               1          0          0          0         20
         776049 roma               1          0          0          0         20
         776050 roma               1          0          0          0         20
         776051 roma               1          0          0          0         20

We have this in slurm.conf:

PriorityType=priority/multifactor
PriorityDecayHalfLife=14-0
PriorityWeightAge=100
PriorityWeightFairshare=1000000
PriorityWeightJobSize=1000


Is there something I can change or set to get priority "working"?

Thanks,
Renata
Comment 5 Renata Dart 2022-11-09 14:02:50 MST
Hi again Jason, a few questions about what will happen to jobs when I
turn on AccountingStorageEnforce=associations,limits,qos.  An "scontrol
show job" for users in our current no-association-state in some cases
shows that they have an account of null:

   UserId=smau(17417) GroupId=ki(1092) MCS_label=N/A
   Priority=22 Nice=0 Account=(null) QOS=normal

While others specify shared (an account I think I created
when I thought we were going to create associations and have accounts):

   UserId=lsstsvc1(17951) GroupId=gu(1126) MCS_label=N/A
   Priority=1 Nice=50 Account=shared QOS=normal

However, I did not create a user lsstsvc1, so it has no association
with the account shared:

[renata@sdfrome004 matlab]$ sacctmgr show account shared withassoc
   Account                Descr                  Org    Cluster ParentName       User     Share   Priority GrpJobs GrpNodes  GrpCPUs  GrpMem GrpSubmit     GrpWall  GrpCPUMins MaxJobs MaxNodes  MaxCPUs MaxSubmit     MaxWall  MaxCPUMins                  QOS   Def QOS 
---------- -------------------- -------------------- ---------- ---------- ---------- --------- ---------- ------- -------- -------- ------- --------- ----------- ----------- ------- -------- -------- --------- ----------- ----------- -------------------- --------- 
    shared               shared               shared       s3df       root                    1                                                                                                                                                          normal           
    shared               shared               shared       s3df                   ytl         1                                                                                                                                                          normal           
    shared               shared               shared       s3df                renata         1                                                                                                                                                          normal           

1.  Why is lsstsvc1 able to specify the account shared?  If the user
hasn't been created with sacctmgr, can they use any account that is
created?

2.  What will happen to running jobs in these 2 cases if I create a
user entry for each user and a default account that in the case of
lsstsvc1 using "shared" is changed to something else?  Will their
running jobs fail?

3.  If I add a default account for lsstsvc1 and don't explicitly add
them to shared, and they continue to submit jobs specifying account=shared,
will their job submissions start failing?

Thanks,
Renata
Comment 6 Jason Booth 2022-11-10 15:18:21 MST
Renata, my apologies for my delay in getting back to you with a reply.


> what will happen to jobs when I turn on AccountingStorageEnforce=associations,limits,qos. 
Jobs submitted without the appropriate associations will need to be re-submitted. 

Enabling AccountingStorageEnforce is not a change to make lightly; it should be done
only when all users have an association, or after the cluster has been drained of jobs
that do not have a valid association linked to them.

> Is there something I can change or set to get priority "working"?

For the priority plugin to work correctly, each user will need to be in the accounting database.
You do not need to enforce limits or associations; however, associations are tied to the correct
operation of the priority/multifactor plugin.



> 1.  Why is lsstsvc1 able to specify the account shared?  If the user
> hasn't been created with sacctmgr, can they use any account that is
> created?

There is no enforcement, so a user can specify any account even if they do not have an association with that account.


> 2.  What will happen to running jobs in these 2 cases if I create a
> user entry for each user and a default account that in the case of
> lsstsvc1 using "shared" is changed to something else?  Will their
> running jobs fail?

Running jobs will continue to run. Changing this will only impact future jobs and jobs in the queue.

> 3.  If I add a default account for lsstsvc1 and don't explicitly add
> them to shared, and they continue to submit jobs specifying account=shared,
> will their job submissions start failing?

This depends on whether you are enforcing associations or not. Leaving things as they are, with association and limits enforcement disabled, the user can use any account,
even if you create an association and define a default account. Adding an association to the current cluster now will not cause jobs to fail.
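To make that concrete, the setup described above would be created along these lines (a sketch; `defacct` is a placeholder default-account name, and these commands assume a working slurmdbd):

```shell
# Give lsstsvc1 a user record with a default account; without
# association enforcement this changes nothing for running jobs:
sacctmgr add user lsstsvc1 Account=defacct DefaultAccount=defacct

# Optionally also associate the user with "shared" so that
# --account=shared keeps working once enforcement is enabled:
sacctmgr add user lsstsvc1 Account=shared
```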
Comment 7 Renata Dart 2022-11-13 10:34:34 MST
Hi Jason, thanks for this information.  Just to verify that I
understand, without "AccountingStorageEnforce=associations,limits,qos":

1.  we cannot have a partition qos or any other kind of qos that
defines the grptres allowed with that qos, or flags like denyonlimit
or overpart qos

2.  jobs will not have priorities

3.  limits like maxjobspu cannot be set

Is there any documentation that spells out the full situation of what
you don't get to do unless you have AccountingStorageEnforce?

Another question, in our other cluster which does have
"AccountingStorageEnforce=associations,limits,qos" when a user dumps
in say 500 jobs at once, they all get the same priority.  Is there
some way to maybe weight the jobs?

Thanks,
Renata
Comment 8 Ben Roberts 2022-11-14 13:14:34 MST
Hi Renata,

Jason is tied up in other work and asked if I could get back to you on this.  You are asking about the behavior without "AccountingStorageEnforce=associations,limits,qos":

> 1.  we cannot have a partition qos or any other kind of qos that
> defines the grptres allowed with that qos, or flags like denyonlimit
> or overpart qos

You can define a Partition QOS or other QOS with limits, but any limits you define will not be enforced.

> 2.  jobs will not have priorities

Jobs will still have priorities without AccountingStorageEnforce enabled, but there will be certain priority factors that will not be reliably enforced.  For example, you can have different priorities defined for your partitions (with PriorityWeightPartition enabled) and your jobs will get different priorities based on the partition they request.  But if you are trying to use Fairshare as a priority factor and users don't have an association created in the database or request Accounts they don't have access to, then there is no way to make sure their usage is tracked in the correct Account and the Fairshare value will become meaningless.  

> 3.  limits like maxjobspu cannot be set

Similar to the above answers, you can define a maxjobspu limit, but it won't be enforced.  


> Is there any documentation that spells out the full situation of what
> you don't get to do unless you have AccountingStorageEnforce?

The documentation for the AccountingStorageEnforce parameter has a breakdown of the behavior with different options set.
https://slurm.schedmd.com/slurm.conf.html#OPT_AccountingStorageEnforce

The accounting page also has a section that talks about using AccountingStorageEnforce:
https://slurm.schedmd.com/accounting.html#slurm-accounting-configuration-after-build


> Another question, in our other cluster which does have
> "AccountingStorageEnforce=associations,limits,qos" when a user dumps
> in say 500 jobs at once, they all get the same priority.  Is there
> some way to maybe weight the jobs?

If a block of jobs is submitted at once and they are similar in things like size, partition, account, and QOS, then there isn't really anything you can do to make the priorities of those jobs differ.  If you are enforcing limits and the jobs accumulate usage in the user/account association, then the fairshare value for the jobs that haven't run yet will be affected by the usage of the prior jobs.

If you are trying to prevent a single user from using all the resources on the cluster, there are things you can do.  With limits enforced, you can set a limit on the number of jobs or resources a particular user or association can have in use at once.  You can also limit the number of jobs evaluated from a particular user in each iteration of the backfill scheduler (https://slurm.schedmd.com/slurm.conf.html#OPT_bf_max_job_user).  This will allow you to make sure that jobs from all users are given a chance to start each scheduling cycle.
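As a concrete sketch of that last suggestion (the value 10 is only an example; tune it for your workload):

```shell
# slurm.conf -- let backfill consider at most N jobs per user per cycle:
SchedulerParameters=bf_max_job_user=10
```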

I hope this helps.  Let me know if you need any further clarification about any of this.

Thanks,
Ben
Comment 9 Renata Dart 2022-12-06 12:38:51 MST
Hi Ben, in the current state of "no AccountingStorageEnforce", how are
priorities set?  It seems to be according to how many jobs a user has
been running.  We have one user that has run 300,000+ jobs in the last
6 days, and all of their pending jobs get a priority of 1, while a user
who has only run 400 jobs in that same time period gets a priority of
~60000 for his jobs.  We only have a few users running at this point,
so the priority 1 jobs do run, but the concern is that once more
users jump in who have not run any jobs, they will get a
high priority and the priority 1 jobs will never be able to run.  We
are still planning to transition to AccountingStorageEnforce, but not
until the new year, so in the meantime we'd like to avoid any priority
problems if possible.

Thanks,
Renata
Comment 10 Ben Roberts 2022-12-06 14:20:37 MST
Hi Renata,

The way to see how a job's priority is calculated is to run 'sprio'.  It will give you a breakdown of the different priority factors and how much each factor contributes to the overall priority.

From the slurm.conf you uploaded in November, I see that you have the following priority weight factors:
PriorityWeightAge=100
PriorityWeightFairshare=1000000
PriorityWeightJobSize=1000

With that in mind, I'll speculate about what is probably happening.  Since you say that the jobs from the user who has the most usage have a priority of 1, my guess would be that they are running small jobs and they haven't been on the system for very long (to accumulate age based priority).  Since they have run so many jobs compared to the other user, their fairshare value is probably small enough to effectively be 0.  The fairshare of the user who has run fewer jobs is probably much higher, and since your PriorityWeightFairshare is so high, this equates to that user getting much higher priority values for their jobs.

If more users come onto the system, then it is true that their fairshare values will at first be much higher than those of any user who has been running jobs.  As usage accumulates for all users, the fairshare values should begin to even out, and the user who had so much usage initially should start getting a higher fairshare value again.

If you don't want the priorities to be so heavily weighted to what the fairshare value is for the user, you can adjust the PriorityWeight values.  You could either give more weight to the factors you are currently using (age and job size), or you could add other factors into the mix, such as the association, partition or QOS.  
https://slurm.schedmd.com/slurm.conf.html#OPT_PriorityWeightAssoc
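To see why the weights dominate, recall that the multifactor plugin computes a job's priority as the sum of each configured weight times a normalized factor in [0.0, 1.0].  A rough standalone sketch of that arithmetic (not Slurm's actual code; the factor values are illustrative):

```python
def job_priority(weights, factors):
    """Sum of weight * normalized factor (each factor in [0.0, 1.0])."""
    return int(sum(weights[name] * factors.get(name, 0.0) for name in weights))

# Weights from the slurm.conf attached to this ticket:
weights = {"age": 100, "fairshare": 1_000_000, "jobsize": 1000}

# Heavy user: fairshare decayed to ~0, fresh small jobs.
heavy = job_priority(weights, {"age": 0.0, "fairshare": 0.0, "jobsize": 0.02})

# Light user: modest fairshare, but the large fairshare weight dominates.
light = job_priority(weights, {"age": 0.1, "fairshare": 0.06, "jobsize": 0.02})

print(heavy, light)  # 20 60030
```

With PriorityWeightFairshare four orders of magnitude above the other weights, even a small fairshare difference swamps age and job size, which matches the ~1 vs ~60000 priorities observed here.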

I hope that helps.  Let me know if you have any problems getting the information you need from the sprio command.  

Thanks,
Ben
Comment 11 Renata Dart 2022-12-06 15:03:51 MST
Hi Ben, just to be clear, we don't have any associations in this
cluster yet.  The user, lsstsvc1, that has run the 300,000+ jobs in
the last 6 days has actually been running since we first opened this
cluster to users and is by far the heaviest user of the cluster.  This
is what sprio looks like, the jobs all belong to lsstsvc1:

[renata@sdfrome001 ~]$ sprio
          JOBID PARTITION   PRIORITY       SITE        AGE  FAIRSHARE    JOBSIZE
        1749798 roma               1          0          0          0          7
        1749799 roma               1          0          0          0          7
        1749800 roma               1          0          0          0          7
        1749802 roma               1          0          0          0          7
        1749803 roma               1          0          0          0          7
        1749804 roma               1          0          0          0          7
        1749805 roma               1          0          0          0          7
        1749806 roma               1          0          0          0          7
        1749807 roma               1          0          0          0          7
        1749808 roma               1          0          0          0          7
        1749809 roma               1          0          0          0          7

So the things we could use to control priority in this case are age and jobsize?
We don't have multiple partitions yet.  Is there anything else that can be tweaked
to control priority in this kind of setup with no associations?

Thanks,
Renata
Comment 12 Ben Roberts 2022-12-07 13:32:54 MST
Hi Renata,

It does limit your options without user associations created yet.  If there isn't a user association for the user with the most usage, then that usage at least won't count against them in the fairshare calculation.  Once you start creating accounts and users, and forcing users to use those accounts, their usage would begin to count against them.

Without multiple partitions, enforced user associations, or QOSes, the only other option available to modify the priority would be PriorityWeightTRES.  This will probably end up being similar to the priority you get from the job size, though.
https://slurm.schedmd.com/slurm.conf.html#OPT_PriorityWeightTRES
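For completeness, a PriorityWeightTRES line would look something like this (the TRES names follow the documented format, but the values are illustrative only):

```shell
# slurm.conf -- weight priority by the trackable resources a job requests:
PriorityWeightTRES=CPU=1000,Mem=2000,GRES/gpu=3000
```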

My recommendation would be to configure Accounts and user associations soon and begin enforcing them to make different priority factors available.

Thanks,
Ben
Comment 13 Ben Roberts 2023-01-03 10:49:15 MST
Hi Renata,

I wanted to follow up with you and see if you have had a chance to create user associations with sacctmgr.  If so, do you have plans to begin requiring users to use these user/account associations?

Thanks,
Ben
Comment 14 Renata Dart 2023-01-05 11:38:49 MST
Hi Ben, we are in the middle of creating a coact service so that users
can create accounts.  Once that is complete we will be switching over
to accounting in Slurm, but for now we are still running without it.
They think coact will be ready in a few weeks.

Thanks,
Renata
Comment 15 Ben Roberts 2023-01-05 12:20:07 MST
Hi Renata,

Thanks for the update.  Since it sounds like you have plans in place to make this happen, I'll go ahead and close this ticket.  If anything comes up once you start enforcing user associations, please let us know.

Thanks,
Ben