Description
Ahmed Essam ElMazaty
2019-07-11 08:28:01 MDT
Hi Ahmed,

Did you change UsageFactor without restarting the slurmctld? If so, that is a known issue that could possibly cause this. The solution is to restart the slurmctld, which will recalculate usage with the new UsageFactor and get rid of those underflow errors.

Thanks,
-Michael

(In reply to Michael Hinton from comment #2)

Hello Michael,

No, UsageFactor wasn't changed; only some new QoS were added. As for the slurmctld service, we restarted it 3 weeks ago, but I can see similar errors in our logs both before and after the restart.

Best regards,
Ahmed

(In reply to Ahmed Essam ElMazaty from comment #4)

Ok. In that case, I'll look into it deeper and get back to you.

Thanks,
Michael

Hi Ahmed,

Which cluster is this on? Could you attach the slurm.conf?

What QOSs did you add? What QOSs did job 5265892 have? What QOSs did the partition have? What is the output of `scontrol show assoc`?

Thanks,
Michael

Could you also grep the logs for everything related to `5265892`? Thanks.
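(One way to collect that, assuming the log directory that appears in Ahmed's reply below, is a recursive grep over the controller's log directory, which also catches the rotated files:)

```
grep -r '5265892' /var/spool/slurm/log/
```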
Created attachment 11026 [details]
slurm.conf
(In reply to Michael Hinton from comment #6)

Hello Michael,

> Which cluster is this on? Could you attach the slurm.conf?
The cluster is Ibex. I've attached the slurm.conf file.

> What QOSs did you add?
A QoS called "mse394" was added recently.

> What QOSs did job 5265892 have?
5265892 had the "normal" QoS.

> What QOSs did the partition have?
Only the "normal" QoS is allowed on this partition.

> What is the output of `scontrol show assoc`?
The output of `scontrol show assoc` has more than 80k lines.

Best regards,
Ahmed

(In reply to Michael Hinton from comment #7)

Here is everything related to this job from the logs:

/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:34:52.541] sched: Allocate JobId=5217525_1999(5265892) NodeList=dbn306-13-l #CPUs=10 Partition=group-stsda
/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:35:21.635] _job_complete: JobId=5217525_1999(5265892) WEXITSTATUS 0
/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:35:21.637] error: _handle_assoc_tres_run_secs: job 5265892: assoc 3994 TRES cpu grp_used_tres_run_secs underflow, tried to remove 1728000 seconds when only 1727250 remained.
/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:35:21.638] error: _handle_assoc_tres_run_secs: job 5265892: assoc 3994 TRES mem grp_used_tres_run_secs underflow, tried to remove 8847360000 seconds when only 8843520000 remained.
/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:35:21.640] error: _handle_assoc_tres_run_secs: job 5265892: assoc 3994 TRES node grp_used_tres_run_secs underflow, tried to remove 172800 seconds when only 172725 remained.
/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:35:21.641] error: _handle_assoc_tres_run_secs: job 5265892: assoc 3994 TRES billing grp_used_tres_run_secs underflow, tried to remove 1728000 seconds when only 1727250 remained.
/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:35:21.643] error: _handle_assoc_tres_run_secs: job 5265892: assoc 2125 TRES cpu grp_used_tres_run_secs underflow, tried to remove 1728000 seconds when only 1727250 remained.
/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:35:21.644] error: _handle_assoc_tres_run_secs: job 5265892: assoc 2125 TRES mem grp_used_tres_run_secs underflow, tried to remove 8847360000 seconds when only 8843520000 remained.
/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:35:21.646] error: _handle_assoc_tres_run_secs: job 5265892: assoc 2125 TRES node grp_used_tres_run_secs underflow, tried to remove 172800 seconds when only 172725 remained.
/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:35:21.647] error: _handle_assoc_tres_run_secs: job 5265892: assoc 2125 TRES billing grp_used_tres_run_secs underflow, tried to remove 1728000 seconds when only 1727250 remained.
/var/spool/slurm/log/slurmctld.log-20190711:[2019-07-11T01:35:21.649] _job_complete: JobId=5217525_1999(5265892) done
/var/spool/slurm/log/slurmsched.log-20190711:sched: [2019-07-11T01:34:52.539] JobId=5217525_1999(5265892) initiated
/var/spool/slurm/log/slurmsched.log-20190711:sched: [2019-07-11T01:34:52.540] Allocate JobId=5217525_1999(5265892) NodeList=dbn306-13-l #CPUs=10 Partition=group-stsda
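(To unpack what this message is reporting: roughly speaking, the controller keeps, per association and per TRES, a running counter of the TRES-seconds still committed to that association's running jobs, adds to it when a job starts, and subtracts from it as jobs tick down and finish; the error fires when it tries to subtract more than the counter holds. The figures above are internally consistent: job 5265892 had 10 CPUs, 1728000 CPU-seconds is 10 × 172800 seconds, i.e. a 48-hour allocation, and 1727250 is 10 × 172725, so every TRES in the list is short by the same 75 seconds of node time. Below is a minimal sketch of that guard with simplified names and types; it is modeled on the message text, not copied from the slurmctld accounting-policy source:)

```
#include <stdint.h>
#include <stdio.h>

/* Sketch of the clamp-and-log guard behind the log message above. The
 * real implementation tracks one counter per association and per TRES
 * (cpu, mem, node, billing, ...). */
static void remove_tres_run_secs(const char *tres_name, uint32_t job_id,
                                 uint32_t assoc_id, uint64_t *grp_used,
                                 uint64_t delta)
{
    if (delta > *grp_used) {
        /* More run-seconds handed back than the counter holds: log the
         * underflow and clamp to zero instead of wrapping the unsigned
         * counter around. */
        fprintf(stderr,
                "error: _handle_assoc_tres_run_secs: job %u: assoc %u "
                "TRES %s grp_used_tres_run_secs underflow, tried to "
                "remove %llu seconds when only %llu remained.\n",
                job_id, assoc_id, tres_name,
                (unsigned long long) delta, (unsigned long long) *grp_used);
        *grp_used = 0;
    } else {
        *grp_used -= delta;
    }
}

int main(void)
{
    uint64_t grp_used = 1727250; /* counter value from the ticket */
    remove_tres_run_secs("cpu", 5265892, 3994, &grp_used, 1728000);
    return 0;
}
```

Because the guard clamps to zero rather than letting the counter wrap, a single occurrence is self-limiting; the open question in this ticket is why the add and subtract sides of the bookkeeping ever disagree.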
Hey Ahmed,

How frequently is this issue occurring? Does the error cause any real harm, or is it just the error message? Are there any commonalities between each job with this error?

Can you attach the output of `sacctmgr show assoc id=3994,2125`?

Can you also attach the output of `scontrol show job X`, where X is the most recent job with the error? Job 5265892 is probably no longer in the controller, but there might be newer jobs we can look at. I want to see what the TRES specification and time duration are.

I noticed that you do not have `safe` set in AccountingStorageEnforce, but you do in the slurm.conf we have on file for the Shaheen cluster. Is there a reason you don't want safe for this cluster? I'm curious to see if setting this could avoid the error (safe rejects jobs unless they are calculated to run within their GrpTRESMins limit).

Thanks,
-Michael
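(For reference, `safe` is one of the comma-separated flags on slurm.conf's AccountingStorageEnforce line. The flag list below is illustrative only, since Ibex's actual line is not shown in this ticket:)

```
# Illustrative only: the flags before 'safe' are placeholders, not
# Ibex's actual configuration. 'safe' only lets a job start if it can
# run to its time limit within the GrpTRESMins limits of its
# association and QOS.
AccountingStorageEnforce=associations,limits,qos,safe
```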
Also, even though it's 80k lines, if you could attach the output of `scontrol show assoc`, that would be helpful to see what the GrpTRESMins limits are. If you don't want to attach the entire thing for security concerns, then you can either send it privately to my email or share it with my Google Drive, or you can attach a redacted snippet showing each QOS and association for the job.

(In reply to Michael Hinton from comment #11)

Hi Michael,

> How frequently is this issue occurring? Does the error cause any real harm, or is it just the error message? Are there any commonalities between each job with this error?
For example, it happened for 25 jobs since the beginning of August. No, they don't cause harm, but I'm not sure that they won't, and I don't understand why they appear; that's why I've opened this ticket. There is nothing common between these jobs: they have different QoS, run on different partitions, and were submitted from different accounts.

> Can you attach the output of `sacctmgr show assoc id=3994,2125`?
# sacctmgr show assoc id=3994,2125 (empty columns omitted)
Cluster  Account  User    Partition   Share  GrpTRES  MaxTRES  QOS
dragon   stsda                        1      node=39  cpu=780  normal
dragon   stsda    lenzia  group-sts+  1               cpu=780  normal

> Can you also attach the output of `scontrol show job X`, where X is the most recent job with the error?
Unfortunately I can't; all of them have completed. I'll try to do it whenever I see a new job with the same issue.

> I noticed that you do not have `safe` set in AccountingStorageEnforce. Is there a reason you don't want safe for this cluster?
We don't have GrpTRESMins set for any of the QoS. Shaheen uses this for billing; we don't.

(In reply to Michael Hinton from comment #12)

Here is assoc_id=3994:

ClusterName=dragon Account=stsda UserName=lenzia(160919) Partition=group-stsda ID=3994
SharesRaw/Norm/Level/Factor=1/0.03/40/0.94
UsageRaw/Norm/Efctv=1107410020.92/0.02/0.77
ParentAccount= Lft=147 DefAssoc=No
GrpJobs=N(0) GrpJobsAccrue=N(0) GrpSubmitJobs=N(0) GrpWall=N(1234998.28)
GrpTRES=cpu=N(0),mem=N(0),energy=N(0),node=N(0),billing=N(0),fs/disk=N(0),vmem=N(0),pages=N(0)
GrpTRESMins=cpu=N(18456833),mem=N(74064302270),energy=N(0),node=N(1234998),billing=N(18456833),fs/disk=N(0),vmem=N(0),pages=N(0)
GrpTRESRunMins=cpu=N(0),mem=N(0),energy=N(0),node=N(0),billing=N(0),fs/disk=N(0),vmem=N(0),pages=N(0)
MaxJobs= MaxJobsAccrue= MaxSubmitJobs= MaxWallPJ=
MaxTRESPJ=cpu=780 MaxTRESPN= MaxTRESMinsPJ= MinPrioThresh=

And here is 2125:

ClusterName=dragon Account=stsda UserName= Partition= ID=2125
SharesRaw/Norm/Level/Factor=1/0.04/27/0.00
UsageRaw/Norm/Efctv=1442125157.03/0.02/0.02
ParentAccount=root(1) Lft=108 DefAssoc=No
GrpJobs=N(0) GrpJobsAccrue=N(0) GrpSubmitJobs=N(0) GrpWall=N(1487600.97)
GrpTRES=cpu=N(0),mem=N(0),energy=N(0),node=39(0),billing=N(0),fs/disk=N(0),vmem=N(0),pages=N(0)
GrpTRESMins=cpu=N(24035419),mem=N(93600818635),energy=N(0),node=N(1531179),billing=N(24035419),fs/disk=N(0),vmem=N(0),pages=N(0)
GrpTRESRunMins=cpu=N(0),mem=N(0),energy=N(0),node=N(0),billing=N(0),fs/disk=N(0),vmem=N(0),pages=N(0)
MaxJobs= MaxJobsAccrue= MaxSubmitJobs= MaxWallPJ=
MaxTRESPJ=cpu=780 MaxTRESPN= MaxTRESMinsPJ= MinPrioThresh=

Best regards,
Ahmed

*** Ticket 7704 has been marked as a duplicate of this ticket. ***

Hello Ahmed,

Are you still seeing this error? If so, how frequently?

Thanks,
-Michael

Another question for both Ahmed and Matt: what CPU architectures are you running on? Intel x86-64?

(In reply to Michael Hinton from comment #17)

Hello Michael,

Yes, even after upgrading to 19.05 I can still see those errors. They appear for a couple of jobs every day.

Best regards,
Ahmed

(In reply to Michael Hinton from comment #20)

We have Intel and AMD nodes; errors appear for jobs running on both.

We are running on Intel Skylake on our primary cluster, and today we have 64 log lines with that error across 6 jobs. I haven't yet seen any commonality to the jobs, although I haven't seen this on our other clusters (which have both a smaller node and job count).

- Matt

Both of your configurations have long accounting reset values: KAUST set PriorityDecayHalfLife to 56 days, and Michigan set PriorityDecayHalfLife to 0 with PriorityUsageResetPeriod=NONE, which disables any accounting half life or reset.

My current theory is that there may be some kind of race condition with the decay thread in the multifactor priority plugin. I think it's rare, but your accounting resets are so large that Slurm can't 'heal' itself after it hits the error, so you continually see errors thereafter.

I would recommend setting PriorityDecayHalfLife to no longer than 14 days, like most other sites. Or, set it to 0 and set PriorityUsageResetPeriod to WEEKLY or less. That would likely just limit the damage of the bug rather than fix it, but it's a workaround you can try for now. I'll keep looking into it to see if I can fix the actual bug.
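(In slurm.conf terms, the two variants of that workaround would look something like the following; the values are illustrative, taken from the numbers suggested above:)

```
# Variant 1: let usage decay with a 14-day half life
PriorityDecayHalfLife=14-0

# Variant 2: disable decay and instead clear usage weekly
#PriorityDecayHalfLife=0
#PriorityUsageResetPeriod=WEEKLY
```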
Hi Ahmed, Matt,

I wanted to follow up to see if you all are still seeing this issue, and if so, whether those errors are causing any real problems you can identify in your system. I have been unsuccessful in reproducing the error, so there might not be much I can do at this point unless you can provide me with a way to reproduce the issue.

Thanks,
-Michael

Good evening Michael. Sorry for the delay in responding. I'll do some more digging, but I know we were curious whether the issue would disappear after we upgraded to version 19. I just verified we are still seeing the error. Somebody from our team will update the ticket later this week.

- Matt

Hello,

Sorry for the lack of progress on this. I'm still not sure that this is indicative of a real problem or has any real-world negative effects, and until we can find a solid reproducer for it, it's unlikely there is much we can do at this point to track it down. If anyone is able to create a reproducer, please feel free to reopen this ticket and we can get to the bottom of it.

Thanks,
-Michael

Hello,

Is there any status on this? We are still seeing the issue, even today, with 20.02.6. We are starting to wonder if it has an impact on billing issues we've been seeing for some time now.

Thanks,
David
Michael is out of the office today; however, I will have him reply on Monday when he returns.
(In reply to ARCTS Admins from comment #34)

Hi David, I'm sorry I did not reply back to you. What version of Slurm are you on, and are you still seeing this issue today? Do you have any idea how to reproduce the issue, or any insight into what conditions may cause it?

Thanks,
Michael

(In reply to Michael Hinton from comment #36)

Hi Michael,

Yes, we are still seeing this. I believe since this bug was opened we have upgraded to Slurm 20.11.8. Here is just a snippet from the slurmctld.log:

```
[2021-09-15T11:41:43.380] error: _handle_assoc_tres_run_secs: job 25415082: assoc 12764 TRES mem grp_used_tres_run_secs underflow, tried to remove 792576 seconds when only 116736 remained.
[2021-09-15T11:41:43.380] error: _handle_assoc_tres_run_secs: job 25415082: assoc 12764 TRES node grp_used_tres_run_secs underflow, tried to remove 129 seconds when only 19 remained.
[2021-09-15T11:41:43.380] error: _handle_assoc_tres_run_secs: job 25415082: assoc 12764 TRES billing grp_used_tres_run_secs underflow, tried to remove 323016 seconds when only 47576 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12767 TRES cpu grp_used_tres_run_secs underflow, tried to remove 190 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12767 TRES mem grp_used_tres_run_secs underflow, tried to remove 1167360 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12767 TRES node grp_used_tres_run_secs underflow, tried to remove 190 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12767 TRES billing grp_used_tres_run_secs underflow, tried to remove 475760 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12765 TRES cpu grp_used_tres_run_secs underflow, tried to remove 190 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12765 TRES mem grp_used_tres_run_secs underflow, tried to remove 1167360 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12765 TRES node grp_used_tres_run_secs underflow, tried to remove 190 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12765 TRES billing grp_used_tres_run_secs underflow, tried to remove 475760 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12764 TRES cpu grp_used_tres_run_secs underflow, tried to remove 190 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12764 TRES mem grp_used_tres_run_secs underflow, tried to remove 1167360 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12764 TRES node grp_used_tres_run_secs underflow, tried to remove 190 seconds when only 0 remained.
[2021-09-15T11:41:55.517] error: _handle_assoc_tres_run_secs: job 25415008: assoc 12764 TRES billing grp_used_tres_run_secs underflow, tried to remove 475760 seconds when only 0 remained.
```

We aren't sure what causes this. No user has complained of anything to my knowledge. At least, not yet.

David

(In reply to ARCTS Admins from comment #37)

Let's get more information about the jobs that are being complained about. Could you run the following command for some of the jobs with the error?

sacct -D --jobs=<job-ids> -o jobid,jobidraw,jobname,user,account,partition,start,state,exitcode,derivedexitcode,nodelist,reqtres,alloctres,tresusageinmin,tresusageinmintask -p

Of course, feel free to remove sensitive output fields as needed, or add other fields as you see fit. Mostly I just want to see the TRES-related fields and whether anything suspicious is occurring with these jobs.

Thanks!
-Michael
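(As an aside, one hypothetical way to build the <job-ids> list for that command is to pull the job numbers straight out of the underflow messages. This sketch assumes GNU grep and the controller log path used earlier in this ticket; adjust both for the local setup:)

```
grep -hoP '_handle_assoc_tres_run_secs: job \K[0-9]+' \
    /var/spool/slurm/log/slurmctld.log* | sort -un | paste -sd, -
```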
(In reply to Michael Hinton from comment #38)

Here you go:

JobID|JobIDRaw|JobName|User|Account|Partition|Start|State|ExitCode|DerivedExitCode|NodeList|ReqTRES|AllocTRES|TRESUsageInMin|TRESUsageInMinTask|
25415008_0|25415009|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3031|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_0.batch|25415009.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3031||cpu=1,mem=6G,node=1|cpu=00:07:40,energy=0,fs/disk=61027,mem=327500K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_0.extern|25415009.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3031||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_1|25415010|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3037|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_1.batch|25415010.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3037||cpu=1,mem=6G,node=1|cpu=00:05:13,energy=0,fs/disk=61027,mem=323744K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_1.extern|25415010.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3037||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_2|25415011|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3161|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_2.batch|25415011.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3161||cpu=1,mem=6G,node=1|cpu=00:05:11,energy=0,fs/disk=61027,mem=325976K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_2.extern|25415011.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3161||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=7768447414786248K,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=2319670954477955108,mem=0,pages=0,vmem=0|
25415008_3|25415012|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3233|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_3.batch|25415012.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3233||cpu=1,mem=6G,node=1|cpu=00:12:56,energy=0,fs/disk=61027,mem=326112K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_3.extern|25415012.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3233||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=140516578630064,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=104586048,mem=0,pages=0,vmem=0|
25415008_4|25415013|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3316|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_4.batch|25415013.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3316||cpu=1,mem=6G,node=1|cpu=00:10:32,energy=0,fs/disk=61027,mem=325020K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_4.extern|25415013.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3316||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=68719198201,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_5|25415014|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3360|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_5.batch|25415014.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3360||cpu=1,mem=6G,node=1|cpu=00:09:51,energy=0,fs/disk=61027,mem=324952K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_5.extern|25415014.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3360||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_6|25415015|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3166|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_6.batch|25415015.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3166||cpu=1,mem=6G,node=1|cpu=00:23:18,energy=0,fs/disk=61027,mem=326764K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_6.extern|25415015.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3166||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=8193119758310621K,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=3414407380873276269,mem=0,pages=0,vmem=0|
25415008_7|25415016|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3166|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_7.batch|25415016.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3166||cpu=1,mem=6G,node=1|cpu=00:23:18,energy=0,fs/disk=61027,mem=327400K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_7.extern|25415016.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3166||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=8193119758310621K,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=3415255057966263343,mem=0,pages=0,vmem=0|
25415008_8|25415017|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3171|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_8.batch|25415017.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3171||cpu=1,mem=6G,node=1|cpu=00:21:58,energy=0,fs/disk=61027,mem=327048K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_8.extern|25415017.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3171||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_9|25415018|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3171|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_9.batch|25415018.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3171||cpu=1,mem=6G,node=1|cpu=00:05:35,energy=0,fs/disk=61027,mem=324764K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_9.extern|25415018.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3171||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_10|25415019|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3198|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_10.batch|25415019.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3198||cpu=1,mem=6G,node=1|cpu=00:05:01,energy=0,fs/disk=61027,mem=324876K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_10.extern|25415019.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3198||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_11|25415020|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3199|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_11.batch|25415020.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3199||cpu=1,mem=6G,node=1|cpu=00:05:37,energy=0,fs/disk=61027,mem=325100K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_11.extern|25415020.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3199||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=0,mem=0,pages=0,vmem=0|
25415008_12|25415021|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3307|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_12.batch|25415021.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3307||cpu=1,mem=6G,node=1|cpu=00:12:06,energy=0,fs/disk=61027,mem=325828K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_12.extern|25415021.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3307||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_13|25415022|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3307|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_13.batch|25415022.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3307||cpu=1,mem=6G,node=1|cpu=00:12:06,energy=0,fs/disk=61027,mem=326456K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_13.extern|25415022.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3307||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_14|25415023|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3318|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_14.batch|25415023.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3318||cpu=1,mem=6G,node=1|cpu=00:10:21,energy=0,fs/disk=61027,mem=325984K,pages=0,vmem=142864K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_14.extern|25415023.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3318||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_15|25415024|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3318|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_15.batch|25415024.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3318||cpu=1,mem=6G,node=1|cpu=00:22:45,energy=0,fs/disk=61027,mem=327344K,pages=0,vmem=142864K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_15.extern|25415024.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3318||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=68002713591,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=25673596927,mem=0,pages=0,vmem=0|
25415008_16|25415025|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3027|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_16.batch|25415025.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3027||cpu=1,mem=6G,node=1|cpu=00:20:06,energy=0,fs/disk=61027,mem=328112K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_16.extern|25415025.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3027||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=3673797170893400.50K,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=58216939519,mem=0,pages=0,vmem=0|
25415008_17|25415026|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3026|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_17.batch|25415026.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3026||cpu=1,mem=6G,node=1|cpu=00:20:58,energy=0,fs/disk=61027,mem=327276K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_17.extern|25415026.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3026||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=140514150055940,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=4,mem=0,pages=0,vmem=0|
25415008_18|25415027|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3026|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_18.batch|25415027.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3026||cpu=1,mem=6G,node=1|cpu=00:04:47,energy=0,fs/disk=61027,mem=326400K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_18.extern|25415027.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3026||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_19|25415028|op-sim|smaity|yuekai1|standard|2021-09-15T10:38:16|COMPLETED|0:0|0:0|gl3202|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_19.batch|25415028.batch|batch||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3202||cpu=1,mem=6G,node=1|cpu=00:05:05,energy=0,fs/disk=61027,mem=325016K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_19.extern|25415028.extern|extern||yuekai1||2021-09-15T10:38:16|COMPLETED|0:0||gl3202||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_20|25415029|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3198|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_20.batch|25415029.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3198||cpu=1,mem=6G,node=1|cpu=00:05:00,energy=0,fs/disk=61027,mem=324940K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_20.extern|25415029.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3198||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_21|25415030|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3202|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_21.batch|25415030.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3202||cpu=1,mem=6G,node=1|cpu=00:10:47,energy=0,fs/disk=61027,mem=325360K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_21.extern|25415030.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3202||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=140517375551712,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=140517400442128,mem=0,pages=0,vmem=0|
25415008_22|25415031|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3202|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_22.batch|25415031.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3202||cpu=1,mem=6G,node=1|cpu=00:10:41,energy=0,fs/disk=61027,mem=325820K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_22.extern|25415031.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3202||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_23|25415032|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3202|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_23.batch|25415032.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3202||cpu=1,mem=6G,node=1|cpu=00:10:41,energy=0,fs/disk=61027,mem=325796K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_23.extern|25415032.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3202||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_24|25415033|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3219|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_24.batch|25415033.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3219||cpu=1,mem=6G,node=1|cpu=00:25:51,energy=0,fs/disk=61027,mem=326236K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_24.extern|25415033.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3219||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_25|25415034|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3219|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_25.batch|25415034.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3219||cpu=1,mem=6G,node=1|cpu=00:25:51,energy=0,fs/disk=61027,mem=326928K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_25.extern|25415034.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3219||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_26|25415035|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3220|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_26.batch|25415035.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3220||cpu=1,mem=6G,node=1|cpu=00:23:30,energy=0,fs/disk=61027,mem=327092K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_26.extern|25415035.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3220||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_27|25415036|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3220|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_27.batch|25415036.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3220||cpu=1,mem=6G,node=1|cpu=00:11:10,energy=0,fs/disk=61027,mem=324912K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_27.extern|25415036.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3220||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=68719476735,mem=0,pages=0,vmem=0|
25415008_28|25415037|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3378|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_28.batch|25415037.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3378||cpu=1,mem=6G,node=1|cpu=00:10:43,energy=0,fs/disk=61027,mem=325308K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_28.extern|25415037.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3378||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_29|25415038|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3378|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_29.batch|25415038.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3378||cpu=1,mem=6G,node=1|cpu=00:10:44,energy=0,fs/disk=61027,mem=325764K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_29.extern|25415038.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3378||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_30|25415039|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3379|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_30.batch|25415039.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3379||cpu=1,mem=6G,node=1|cpu=00:20:33,energy=0,fs/disk=61027,mem=326200K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_30.extern|25415039.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3379||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_31|25415040|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3379|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_31.batch|25415040.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3379||cpu=1,mem=6G,node=1|cpu=00:20:30,energy=0,fs/disk=61027,mem=326840K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_31.extern|25415040.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3379||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=0,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=140516843654016,mem=0,pages=0,vmem=0|
25415008_32|25415041|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3034|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_32.batch|25415041.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3034||cpu=1,mem=6G,node=1|cpu=00:17:53,energy=0,fs/disk=61027,mem=325348K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_32.extern|25415041.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3034||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_33|25415042|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3034|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_33.batch|25415042.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3034||cpu=1,mem=6G,node=1|cpu=00:35:38,energy=0,fs/disk=61027,mem=326368K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_33.extern|25415042.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3034||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=8122212047199385K,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=6155685625454414970,mem=0,pages=0,vmem=0|
25415008_34|25415043|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3035|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_34.batch|25415043.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3035||cpu=1,mem=6G,node=1|cpu=00:35:13,energy=0,fs/disk=61027,mem=325968K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_34.extern|25415043.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3035||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=6292028754840982K,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=8593213633223087437,mem=0,pages=0,vmem=0|
25415008_35|25415044|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3035|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_35.batch|25415044.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3035||cpu=1,mem=6G,node=1|cpu=00:39:46,energy=0,fs/disk=61027,mem=325908K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_35.extern|25415044.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3035||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=0,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=6140719996918069080,mem=0,pages=0,vmem=0|
25415008_36|25415045|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3035|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_36.batch|25415045.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3035||cpu=1,mem=6G,node=1|cpu=00:09:30,energy=0,fs/disk=61027,mem=326128K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_36.extern|25415045.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3035||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_37|25415046|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3035|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_37.batch|25415046.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3035||cpu=1,mem=6G,node=1|cpu=00:10:38,energy=0,fs/disk=61027,mem=326568K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_37.extern|25415046.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3035||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_38|25415047|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3104|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_38.batch|25415047.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3104||cpu=1,mem=6G,node=1|cpu=00:14:13,energy=0,fs/disk=61027,mem=325004K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_38.extern|25415047.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3104||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_39|25415048|op-sim|smaity|yuekai1|standard|2021-09-15T10:39:17|COMPLETED|0:0|0:0|gl3105|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_39.batch|25415048.batch|batch||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3105||cpu=1,mem=6G,node=1|cpu=00:23:30,energy=0,fs/disk=61027,mem=326292K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_39.extern|25415048.extern|extern||yuekai1||2021-09-15T10:39:17|COMPLETED|0:0||gl3105||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_40|25415050|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:17|COMPLETED|0:0|0:0|gl3031|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_40.batch|25415050.batch|batch||yuekai1||2021-09-15T10:40:17|COMPLETED|0:0||gl3031||cpu=1,mem=6G,node=1|cpu=00:26:49,energy=0,fs/disk=61027,mem=325436K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_40.extern|25415050.extern|extern||yuekai1||2021-09-15T10:40:17|COMPLETED|0:0||gl3031||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_41|25415051|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:17|COMPLETED|0:0|0:0|gl3035|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_41.batch|25415051.batch|batch||yuekai1||2021-09-15T10:40:17|COMPLETED|0:0||gl3035||cpu=1,mem=6G,node=1|cpu=00:21:20,energy=0,fs/disk=61027,mem=324632K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_41.extern|25415051.extern|extern||yuekai1||2021-09-15T10:40:17|COMPLETED|0:0||gl3035||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_42|25415052|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:17|COMPLETED|0:0|0:0|gl3026|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_42.batch|25415052.batch|batch||yuekai1||2021-09-15T10:40:17|COMPLETED|0:0||gl3026||cpu=1,mem=6G,node=1|cpu=00:35:23,energy=0,fs/disk=61027,mem=326620K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_42.extern|25415052.extern|extern||yuekai1||2021-09-15T10:40:17|COMPLETED|0:0||gl3026||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=140512409700416,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=0,mem=0,pages=0,vmem=0|
25415008_43|25415053|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:17|COMPLETED|0:0|0:0|gl3026|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_43.batch|25415053.batch|batch||yuekai1||2021-09-15T10:40:17|COMPLETED|0:0||gl3026||cpu=1,mem=6G,node=1|cpu=00:35:29,energy=0,fs/disk=61027,mem=326972K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_43.extern|25415053.extern|extern||yuekai1||2021-09-15T10:40:17|COMPLETED|0:0||gl3026||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=0,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=72057594054705200,mem=0,pages=0,vmem=0|
25415008_44|25415054|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:17|COMPLETED|0:0|0:0|gl3105|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_44.batch|25415054.batch|batch||yuekai1||2021-09-15T10:40:17|COMPLETED|0:0||gl3105||cpu=1,mem=6G,node=1|cpu=00:45:27,energy=0,fs/disk=61027,mem=326416K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_44.extern|25415054.extern|extern||yuekai1||2021-09-15T10:40:17|COMPLETED|0:0||gl3105||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=2260606816652808.50K,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=8314610790814727508,mem=0,pages=0,vmem=0|
25415008_45|25415055|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3105|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_45.batch|25415055.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3105||cpu=1,mem=6G,node=1|cpu=00:11:37,energy=0,fs/disk=61027,mem=324996K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_45.extern|25415055.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3105||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=17112758271,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=68719220432,mem=0,pages=0,vmem=0|
25415008_46|25415056|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3105|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_46.batch|25415056.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3105||cpu=1,mem=6G,node=1|cpu=00:11:35,energy=0,fs/disk=61027,mem=327844K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_46.extern|25415056.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3105||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=17112758271,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=68719220432,mem=0,pages=0,vmem=0|
25415008_47|25415057|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3105|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_47.batch|25415057.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3105||cpu=1,mem=6G,node=1|cpu=00:11:34,energy=0,fs/disk=61027,mem=327152K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_47.extern|25415057.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3105||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_48|25415058|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3105|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_48.batch|25415058.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3105||cpu=1,mem=6G,node=1|cpu=00:22:51,energy=0,fs/disk=61027,mem=327328K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_48.extern|25415058.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3105||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=140517713904064,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=140517708605808,mem=0,pages=0,vmem=0|
25415008_49|25415059|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3098|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_49.batch|25415059.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3098||cpu=1,mem=6G,node=1|cpu=00:20:33,energy=0,fs/disk=61027,mem=327236K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_49.extern|25415059.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3098||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=0,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=0,mem=0,pages=0,vmem=0|
25415008_50|25415060|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3098|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_50.batch|25415060.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3098||cpu=1,mem=6G,node=1|cpu=00:20:31,energy=0,fs/disk=61027,mem=325360K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_50.extern|25415060.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3098||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=6994868796447709K,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=7669474517394731364,mem=0,pages=0,vmem=0|
25415008_51|25415061|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3099|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_51.batch|25415061.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3099||cpu=1,mem=6G,node=1|cpu=00:38:47,energy=0,fs/disk=61027,mem=327328K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_51.extern|25415061.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3099||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=140514015854624,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=72057594054705200,mem=0,pages=0,vmem=0|
25415008_52|25415062|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3099|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_52.batch|25415062.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3099||cpu=1,mem=6G,node=1|cpu=00:37:47,energy=0,fs/disk=61027,mem=327400K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_52.extern|25415062.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3099||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=140513884469664,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=140513883601488,mem=0,pages=0,vmem=0|
25415008_53|25415063|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3097|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_53.batch|25415063.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3097||cpu=1,mem=6G,node=1|cpu=00:41:53,energy=0,fs/disk=61027,mem=326236K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_53.extern|25415063.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3097||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=140516771420288,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=140516771420288,mem=0,pages=0,vmem=0|
25415008_54|25415064|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3097|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_54.batch|25415064.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3097||cpu=1,mem=6G,node=1|cpu=00:14:42,energy=0,fs/disk=61027,mem=324456K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_54.extern|25415064.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3097||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_55|25415065|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3097|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_55.batch|25415065.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3097||cpu=1,mem=6G,node=1|cpu=00:12:33,energy=0,fs/disk=61027,mem=325728K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_55.extern|25415065.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3097||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_56|25415066|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|COMPLETED|0:0|0:0|gl3097|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_56.batch|25415066.batch|batch||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3097||cpu=1,mem=6G,node=1|cpu=00:12:56,energy=0,fs/disk=61027,mem=325476K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_56.extern|25415066.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3097||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|
25415008_57|25415067|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|TIMEOUT|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_57.batch|25415067.batch|batch||yuekai1||2021-09-15T10:40:18|CANCELLED|0:15||gl3325||cpu=1,mem=6G,node=1|cpu=01:00:14,energy=0,fs/disk=61027,mem=327280K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_57.extern|25415067.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=0,mem=0,pages=0,vmem=0|
25415008_58|25415068|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|TIMEOUT|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_58.batch|25415068.batch|batch||yuekai1||2021-09-15T10:40:18|CANCELLED|0:15||gl3325||cpu=1,mem=6G,node=1|cpu=01:00:13,energy=0,fs/disk=61027,mem=326452K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_58.extern|25415068.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=0,mem=0,pages=0,vmem=0|
25415008_59|25415069|op-sim|smaity|yuekai1|standard|2021-09-15T10:40:18|TIMEOUT|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_59.batch|25415069.batch|batch||yuekai1||2021-09-15T10:40:18|CANCELLED|0:15||gl3325||cpu=1,mem=6G,node=1|cpu=01:00:12,energy=0,fs/disk=61027,mem=326592K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_59.extern|25415069.extern|extern||yuekai1||2021-09-15T10:40:18|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=0,mem=0,pages=0,vmem=0|
25415008_60|25415071|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3099|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1|||
25415008_60.batch|25415071.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3099||cpu=1,mem=6G,node=1|cpu=00:56:36,energy=0,fs/disk=61027,mem=326960K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0|
25415008_60.extern|25415071.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3099||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=0,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=140515163143024,mem=0,pages=0,vmem=0| 25415008_61|25415072|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3200|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_61.batch|25415072.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3200||cpu=1,mem=6G,node=1|cpu=00:57:30,energy=0,fs/disk=61027,mem=325792K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_61.extern|25415072.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3200||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_62|25415073|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3026|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_62.batch|25415073.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3026||cpu=1,mem=6G,node=1|cpu=00:59:46,energy=0,fs/disk=61027,mem=325424K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_62.extern|25415073.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3026||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_63|25415074|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3026|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_63.batch|25415074.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3026||cpu=1,mem=6G,node=1|cpu=00:12:49,energy=0,fs/disk=61027,mem=326048K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_63.extern|25415074.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3026||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_64|25415075|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_64.batch|25415075.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3325||cpu=1,mem=6G,node=1|cpu=00:35:17,energy=0,fs/disk=61027,mem=325340K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_64.extern|25415075.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_65|25415076|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_65.batch|25415076.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3325||cpu=1,mem=6G,node=1|cpu=00:35:26,energy=0,fs/disk=61027,mem=325088K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 
25415008_65.extern|25415076.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_66|25415077|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|TIMEOUT|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_66.batch|25415077.batch|batch||yuekai1||2021-09-15T10:41:19|CANCELLED|0:15||gl3325||cpu=1,mem=6G,node=1|cpu=01:00:19,energy=0,fs/disk=61027,mem=325604K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_66.extern|25415077.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_67|25415078|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|TIMEOUT|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_67.batch|25415078.batch|batch||yuekai1||2021-09-15T10:41:19|CANCELLED|0:15||gl3325||cpu=1,mem=6G,node=1|cpu=01:00:19,energy=0,fs/disk=61027,mem=325596K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_67.extern|25415078.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_68|25415079|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|TIMEOUT|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_68.batch|25415079.batch|batch||yuekai1||2021-09-15T10:41:19|CANCELLED|0:15||gl3325||cpu=1,mem=6G,node=1|cpu=01:00:19,energy=0,fs/disk=61027,mem=325712K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_68.extern|25415079.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_69|25415080|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|TIMEOUT|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_69.batch|25415080.batch|batch||yuekai1||2021-09-15T10:41:19|CANCELLED|0:15||gl3325||cpu=1,mem=6G,node=1|cpu=01:00:19,energy=0,fs/disk=61027,mem=326460K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_69.extern|25415080.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_70|25415081|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|TIMEOUT|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_70.batch|25415081.batch|batch||yuekai1||2021-09-15T10:41:19|CANCELLED|0:15||gl3325||cpu=1,mem=6G,node=1|cpu=01:00:19,energy=0,fs/disk=61027,mem=326488K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 
25415008_70.extern|25415081.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_71|25415082|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|TIMEOUT|0:0|0:0|gl3325|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_71.batch|25415082.batch|batch||yuekai1||2021-09-15T10:41:19|CANCELLED|0:15||gl3325||cpu=1,mem=6G,node=1|cpu=01:00:19,energy=0,fs/disk=61027,mem=326036K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_71.extern|25415082.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3325||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=17179607032G,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=3402314966033655139,mem=0,pages=0,vmem=0| 25415008_72|25415083|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3136|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_72.batch|25415083.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||cpu=1,mem=6G,node=1|cpu=00:13:36,energy=0,fs/disk=61027,mem=324568K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_72.extern|25415083.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=0,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=1631717697,mem=0,pages=0,vmem=0| 25415008_73|25415084|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3136|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_73.batch|25415084.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||cpu=1,mem=6G,node=1|cpu=00:13:37,energy=0,fs/disk=61027,mem=327140K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_73.extern|25415084.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_74|25415085|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3136|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_74.batch|25415085.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||cpu=1,mem=6G,node=1|cpu=00:13:37,energy=0,fs/disk=61027,mem=325520K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_74.extern|25415085.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_75|25415086|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3136|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_75.batch|25415086.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||cpu=1,mem=6G,node=1|cpu=00:26:57,energy=0,fs/disk=61027,mem=324604K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 
25415008_75.extern|25415086.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=3884152950363731.50K,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=8811632893973918773,mem=0,pages=0,vmem=0| 25415008_76|25415087|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3136|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_76.batch|25415087.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||cpu=1,mem=6G,node=1|cpu=00:27:17,energy=0,fs/disk=61027,mem=326588K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_76.extern|25415087.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_77|25415088|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3136|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_77.batch|25415088.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||cpu=1,mem=6G,node=1|cpu=00:26:20,energy=0,fs/disk=61027,mem=327344K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_77.extern|25415088.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3136||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0| 25415008_78|25415089|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3137|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_78.batch|25415089.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3137||cpu=1,mem=6G,node=1|cpu=00:50:39,energy=0,fs/disk=61027,mem=325652K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_78.extern|25415089.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3137||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=140514898084784,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=140514888486096,mem=0,pages=0,vmem=0| 25415008_79|25415090|op-sim|smaity|yuekai1|standard|2021-09-15T10:41:19|COMPLETED|0:0|0:0|gl3137|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_79.batch|25415090.batch|batch||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3137||cpu=1,mem=6G,node=1|cpu=00:50:22,energy=0,fs/disk=61027,mem=327040K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 25415008_79.extern|25415090.extern|extern||yuekai1||2021-09-15T10:41:19|COMPLETED|0:0||gl3137||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=140514893755456,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=40176640,mem=0,pages=0,vmem=0| 25415008_80|25415008|op-sim|smaity|yuekai1|standard|2021-09-15T10:42:20|COMPLETED|0:0|0:0|gl3026|billing=2504,cpu=1,mem=6G,node=1|billing=2504,cpu=1,mem=6G,node=1||| 25415008_80.batch|25415008.batch|batch||yuekai1||2021-09-15T10:42:20|COMPLETED|0:0||gl3026||cpu=1,mem=6G,node=1|cpu=00:59:29,energy=0,fs/disk=61027,mem=326856K,pages=0,vmem=142872K|cpu=0,fs/disk=0,mem=0,pages=0,vmem=0| 
25415008_80.extern|25415008.extern|extern||yuekai1||2021-09-15T10:42:20|COMPLETED|0:0||gl3026||billing=2504,cpu=1,mem=6G,node=1|cpu=00:00:00,energy=0,fs/disk=2012,license/stata@slurmdb=240,mem=0,pages=0,vmem=108052K|cpu=0,fs/disk=0,license/stata@slurmdb=240,mem=0,pages=0,vmem=0|

David

(In reply to ARCTS Admins from comment #34)
> We are starting to wonder if it has impact on billing issues we've
> been seeing for some time now.

Can you explain some of the billing issues you are having, or link to any open bugs you have on them?

For array job 25415008, the only curious thing I see is that some of the array job tasks are in state TIMEOUT or CANCELLED instead of COMPLETED. Can you attach the portion of slurmctld.log related to array job 25415008, along with the surrounding log context from while that job was running? I would like to see what was going on in the controller at that time.

Thanks!
-Michael

Currently, we don't know the conditions needed to reproduce this error, so if anyone knows how to do so consistently, feel free to respond.
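A minimal sketch of one way to pull the requested fragment. The log path is an assumption; adjust it to the site's SlurmctldLogFile:

```
# Everything the controller logged about the array job, with a little
# surrounding context per hit (log path is an assumption):
grep -B 2 -A 2 '25415008' /var/log/slurm/slurmctld.log
```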
Created attachment 22285 [details]
logs from 2021-11-16
These are logs from today, where we are seeing excessive numbers of messages like:
[2021-11-16T13:46:22.443] error: gres/gpu: job 27893398 dealloc node gl1005 type v100 gres count underflow (0 1)
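To gauge just how excessive, a rough counting sketch (the log path is an assumption; adjust to your SlurmctldLogFile):

```
# Count the gres underflow errors and list the distinct jobs involved.
# The job ID is the fifth whitespace-separated field in these messages.
grep -c 'gres count underflow' /var/log/slurm/slurmctld.log
grep 'gres count underflow' /var/log/slurm/slurmctld.log | awk '{print $5}' | sort -u
```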
Hello,

I uploaded logs that pertain to underflow issues, though not exactly the ones reported here. I apologize if these aren't helpful.

David

Hi David,

Michael is off for some days, so I am taking care of this bug now.

One question: do you know whether your users resize their jobs? I suspect that modifying the number of nodes of a running job, be it manually or because a node fails, may lead to an error like this. Both errors could be related, the "gres count underflow" and the "underflow, tried to remove...". I am investigating this at the moment.

(In reply to Felip Moll from comment #44)
Hi, Felip!

> One question: do you know whether your users resize their jobs? I suspect that
> modifying the number of nodes of a running job, be it manually or because a
> node fails, may lead to an error like this.

I am not entirely sure whether our users do or don't resize their jobs. My hunch would be that they might *try* resizing when using a highly utilized, limited-resource partition like our GPU partition, in order to get their job to "run sooner".

Are there any characteristics/indicators I could search the logs for that would indicate resizing?

David

(In reply to ARCTS Admins from comment #45)
> Are there any characteristics/indicators I could search the logs for that
> would indicate resizing?

That should appear in the sched log with the string "_update_job", but I don't really think this is the case; I am more inclined to suspect a node failure within a running job. I am looking more deeply into the logs to try to correlate a job, the error you see, and some issue with the nodes.

For now I am seeing this happening in array jobs. Does it happen in jobs that are not array jobs?

Michael saw this in comment 40:

> for array job 25415008, the only curious thing I see is that some of the array job tasks are in state TIMEOUT or CANCELLED instead of COMPLETED.

(In reply to Felip Moll from comment #46)
Morning, Felip!

> For now I am seeing this happening in array jobs. Does it happen in jobs
> that are not array jobs?

Here's what I am seeing today:

```
[2021-12-10T08:16:45.933] error: gres/gpu: job 29802851 dealloc node gl1016 type v100 gres count underflow (0 1)
[2021-12-10T08:16:45.934] error: gres/gpu: job 29942929 dealloc node gl1018 type v100 gres count underflow (0 1)
[2021-12-10T08:16:45.934] error: gres/gpu: job 29651558 dealloc node gl1014 type v100 gres count underflow (0 1)
[2021-12-10T08:16:45.936] error: gres/gpu: job 29802851 dealloc node gl1016 type v100 gres count underflow (0 1)
[2021-12-10T08:16:45.938] error: gres/gpu: job 29942929 dealloc node gl1018 type v100 gres count underflow (0 1)
[2021-12-10T08:16:45.938] error: gres/gpu: job 29651558 dealloc node gl1014 type v100 gres count underflow (0 1)
```

Most of what I see resembles that output, and these aren't array jobs.
However, that doesn't mean there _aren't_ array jobs in there. The output scrolls past rather quickly at times.

David

(In reply to ARCTS Admins from comment #47)
> Most of what I see resembles that output, and these aren't array jobs. However,
> that doesn't mean there _aren't_ array jobs in there.

Can you upload the fragment of the log from the start of job 29802851 up to at least this error, and also the slurmd log of gl1016? Please paste the output of 'scontrol show job' here too.

Thanks!

Created attachment 22621 [details]
slurmd log of node
Created attachment 22622 [details]
slurmctld log for 2021-12-10
Created attachment 22623 [details]
slurmsched log for 2021-12-10
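A sketch of one way such fragments can be pulled, assuming default log locations (both paths are assumptions, and rotated logs are caught by the glob):

```
# Controller-side history for the job, across rotated logs:
grep '29802851' /var/log/slurm/slurmctld.log*
# Node-side view from the node that ran it:
ssh gl1016 "grep '29802851' /var/log/slurm/slurmd.log"
```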
Can you upload your latest gres.conf, nodes*.conf (the one containing gl1016), and, just in case you have it, the cgroup_allowed_devices_file.conf?

Another thing I see is that node gl1016 crashed, so that effectively corresponds to a job 'shrink' or modification. Now, this error, which is different from the TRES one, could be due to bug 11881:

> [2021-12-10T08:16:45.933] error: gres/gpu: job 29802851 dealloc node gl1016

Hello,

Here are the contents (they're both small):

```
[root@gl1016 etc]# cat gres.conf
#
# This file is managed by Ansible.
#
# template: /etc/ansible/roles/slurm-compute/templates/gres.conf.j2
#

NodeName=gl[1000-1019] Name=gpu Type=v100 File=/dev/nvidia[0-1]
NodeName=gl[1020-1023] Name=gpu Type=v100 File=/dev/nvidia[0-2]
NodeName=gl[1500-1519] Name=gpu Type=sp File=/dev/nvidia[0-7]
#NodeName=gl[2002-2003] Name=viz Type=p40 File=/dev/nvidia0

[root@gl1016 etc]# cat cgroup_allowed_devices_file.conf
/dev/*
```

David

(In reply to ARCTS Admins from comment #54)
> [root@gl1016 etc]# cat cgroup_allowed_devices_file.conf
> /dev/*

Actually, the cgroup_allowed_devices_file is not used. Only the devices in gres.conf are denied when not requested on the command line (--gres), so adding "/dev/*" does nothing and could be the cause of:

[2021-12-08T03:26:39.457] [29802851.batch] error: _file_write_content: unable to write 5 bytes to cgroup /sys/fs/cgroup/devices/slurm/uid_114159461/job_29802851/devices.allow: Invalid argument

I suggest you remove the cgroup_allowed_devices_file.conf file entirely. Can you do that and check whether this error shows up again?

The other error is related to a failure on a node, and is probably bug 11881. That bug is fixed in the 21.08.2 release, commits: https://github.com/SchedMD/slurm/compare/5a0a5c331285...5d6b93a1e1e8. Do you have any plans to upgrade? We could study the possibility of preparing a patch for 20.11 exclusively for you with these commits, but they don't apply cleanly and some work would be needed.

Hello David,

Please remove your cgroup_allowed_devices_file.conf. I can confirm you do not need it, and also that it is conflicting with the devices defined in gres.conf.

In any case, the error is harmless. I will work on another internal bug to make this better.

(In reply to Felip Moll from comment #56)
> Please remove your cgroup_allowed_devices_file.conf.

Hello David,

Did the errors disappear? Are you still having these or other issues?

Thanks!

(In reply to Felip Moll from comment #57)
> Did the errors disappear? Are you still having these or other issues?

Hi, Felip!

We are planning on removing the file per your suggestion (and the suggestion of Michael Hinton in bug 12537).

Best,

David
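For reference, a minimal sketch of the removal on one node. The config path is an assumption, and since the file is Ansible-managed it should also be dropped from the role that deploys it so it does not come back:

```
# Back up and remove the file on each compute node (path is an assumption),
# then restart slurmd to be safe -- the file may only be consulted at step
# launch, but a restart guarantees a clean state:
mv /etc/slurm/cgroup_allowed_devices_file.conf /root/cgroup_allowed_devices_file.conf.bak
systemctl restart slurmd
```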
(In reply to ARCTS Admins from comment #58)
> We are planning on removing the file per your suggestion (and the suggestion
> of Michael Hinton in bug 12537).

Okay, thanks.

I will mark this bug as infogiven for the moment. Please reopen if you still have issues after the file removal.

Regards

(In reply to Felip Moll from comment #59)
> I will mark this bug as infogiven for the moment. Please reopen if you still
> have issues after the file removal.

Hi, Felip,

We removed the file, but the messages initially reported persist:

[2022-01-20T19:56:07.602] error: _handle_assoc_tres_run_secs: job 31244213: assoc 14999 TRES billing grp_used_tres_run_secs underflow, tried to remove 901656000 seconds when only 894217338 remained.

I recall that in the other ticket, 12537, removing the file was meant to stop the "cgroup 5 bytes" error. Was the removal of the file here meant to stop the "underflow" messages?

David

(In reply to ARCTS Admins from comment #60)
> Was the removal of the file here meant to stop the "underflow" messages?

Hi David. Removal of this file was only meant to stop the cgroup log issues. For the underflow issues we need more investigation.

(In reply to Felip Moll from comment #61)
> Hi David. Removal of this file was only meant to stop the cgroup log issues.
> For the underflow issues we need more investigation.

Thanks. Just wanted to be sure. Shall we leave this ticket open for that investigation, then?

David

(In reply to ARCTS Admins from comment #62)
> Shall we leave this ticket open for that investigation, then?

Yes, let's keep it open for the moment. I will continue with it.
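For context, a sketch of one way to check whether these underflows have left the controller's live counters skewed is to dump the association manager state and look at the run-time usage for the association named in the error (assoc 14999 above). The grep filter is an assumption, and the exact output format of `scontrol show assoc_mgr` varies by Slurm version:

```
# Dump the controller's in-memory association records and pick out the
# run-minutes counters related to the underflow message (filter is an
# assumption; adjust to the account behind the assoc in the error):
scontrol show assoc_mgr flags=assoc | grep -E 'Account=|GrpTRESRunMins='
```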
Hi, this is just to inform you that I have partly reproduced the issue by storming my controller with thousands of job submissions and completions (an array of a very short job) and eventually cancelling the job. Running this and cancelling it after a while is enough to make the log message appear:

]$ sbatch --array 1-10000 --mem=10M --wrap hostname

I will keep you informed of what I find.

I have started to reproduce the issue consistently. This is not because of cancelling a job but due to the decay thread. When the decay is happening (in your case every 5 minutes), if it coincides with a job from an array that ends/starts (I am still determining how exactly), the issue is triggered. It seems there's some missing lock.

I've found that this issue is related to PriorityCalcPeriod. I managed to reproduce it by lowering this setting to get a higher frequency of decay thread runs. It doesn't happen all the time, but it seems that after some point there are seconds missing from the accounting, which then triggers this error.

Could you tell me, for a job where you've seen this error, what was the duration of the job?

Can you send us an update on this and reply to comment #74?

(In reply to Felip Moll from comment #74)
> Could you tell me, for a job where you've seen this error, what was the
> duration of the job?

Hi, Felip!

I found a job that exhibited the issue earlier this morning:

```
/var/log/slurm/slurmctld.log-20220915.gz:[2022-09-15T01:16:31.370] error: _handle_assoc_tres_run_secs: job 42356863: assoc 786 TRES billing grp_used_tres_run_secs underflow, tried to remove 1802880000 seconds when only 1802849952 remained.
```

Here's all of the time info I can grab for the job:

```
[root@glctld ~]# date; sacct -Pj 42356863 --format=JobId,Submit,Start,End,TimeLimit,TimeLimitRaw,Elapsed,ElapsedRaw,State
Thu Sep 15 08:36:52 EDT 2022
JobID|Submit|Start|End|Timelimit|TimelimitRaw|Elapsed|ElapsedRaw|State
42356863|2022-09-15T01:16:17|2022-09-15T01:16:19|2022-09-15T01:16:31|8-08:00:00|12000|00:00:12|12|COMPLETED
42356863.batch|2022-09-15T01:16:19|2022-09-15T01:16:19|2022-09-15T01:16:31|||00:00:12|12|COMPLETED
42356863.extern|2022-09-15T01:16:19|2022-09-15T01:16:19|2022-09-15T01:16:31|||00:00:12|12|COMPLETED
```

Looks like they requested 8 days and 8 hours, but the job ran in 12 seconds.

David

Sorry for not having responded before. I continued with the tests, but with the latest version of Slurm I am again having trouble reproducing the issue. I will continue and inform you if we make any progress.

Hello,

This bug has been sitting in my queue for a long time because I am not able to reproduce it. Is the issue still happening? There's a possibility that a bug fixed in 22.05.7 in commit 5c3a7f6aaf could help with this problem. Please let me know whether you still see this issue and which version you are on now.

Thanks

Good afternoon,

David is currently away, but yes, we are still receiving these errors (81 today so far):

[snippet]
[2023-07-07T14:28:09.979] error: _handle_assoc_tres_run_secs: job 55450642: assoc 36191 TRES cpu grp_used_tres_run_secs underflow, tried to remove 18 seconds when only 0 remained.
[2023-07-07T14:28:09.979] error: _handle_assoc_tres_run_secs: job 55450642: assoc 36191 TRES mem grp_used_tres_run_secs underflow, tried to remove 36864 seconds when only 0 remained.
[2023-07-07T14:28:09.979] error: _handle_assoc_tres_run_secs: job 55450642: assoc 36191 TRES node grp_used_tres_run_secs underflow, tried to remove 18 seconds when only 0 remained.
[2023-07-07T14:28:09.979] error: _handle_assoc_tres_run_secs: job 55450642: assoc 36191 TRES billing grp_used_tres_run_secs underflow, tried to remove 45072 seconds when only 0 remained.

During our maintenance we upgraded Slurm:

# sacct --version
slurm 23.02.1

Thanks,
- Matt

Matt, I was wrong; the patch was included in 23.02.2:

commit 5c3a7f6aaf0e584cedf3b8596907f61259cb1ce5
Author:     Scott Hilton <scott@schedmd.com>
AuthorDate: Tue Apr 25 15:33:55 2023 -0600
Commit:     Marshall Garey <marshall@schedmd.com>
CommitDate: Fri Apr 28 15:19:03 2023 -0600

    Delete gres_list_alloc on requeue.

    Since gres_list_alloc is created during gres allocation it should be
    deleted before the job is reallocated. Currently, because the gres is
    cleared but not deleted, when it gets reallocated in _job_alloc(),
    new_alloc is set to false. This activates goto already_allocated;
    skiping the reallocation.

    Bug 16121

I will try to reproduce the issue again with the information in this other bug and let you know. It is possible that this fix helps your situation.

Thanks

Felip - that is great to know. We require a maintenance window soon for other reasons, so do let us know what you discover, as we can probably update Slurm at that time.

Thanks again,
Matt

*** Ticket 21275 has been marked as a duplicate of this ticket. ***

*** Ticket 22325 has been marked as a duplicate of this ticket. ***

Hello,

After several further tests, we want to inform you that unfortunately this no longer seems to be reproducible on our side with 25.05. Since a lot has changed since the opening of this ticket, we have decided to close the issue.

The original logs just indicate a situation where some seconds are not added to the accounting at the moment the job is terminating. This happened only for very short jobs, so it is possible that there is a race between the job ending and the handling of the run seconds at that very moment, but in any case it does not noticeably impact the accounting.

If the issue happens too often in the future and clearly impacts your workflow, we suggest opening a new ticket so we can investigate it again.

Thank you for your understanding.