While we worked with you (bugs 157 and 166) to obtain the correct behavior for srun task options, there is now an inconsistency in how salloc handles them. A plain srun -n64 correctly allocates 4 nodes, but salloc -n64 allocates 64 nodes. Similarly, srun -N1 -n64 correctly complains "This isn't a valid request without --overcommit", yet salloc -N1 -n64 succeeds and allocates 64 nodes. Is there a rationale for this discrepancy, or is it a bug?
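For reference, the behavior above can be reproduced as follows. This is a sketch assuming a cluster whose nodes have 16 cores each (so 64 tasks fit on 4 nodes); the exact node counts will differ on other hardware.

```shell
# srun derives the node count from the task count:
srun -n64 hostname       # allocates 4 nodes and runs 64 tasks

# salloc should do the same, but instead allocates one node per task:
salloc -n64              # allocates 64 nodes

# srun rejects more tasks than one node can hold without --overcommit:
srun -N1 -n64 hostname   # "This isn't a valid request without --overcommit"

# salloc accepts the equivalent request and again allocates 64 nodes:
salloc -N1 -n64
```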
You shouldn't get 64 nodes there. I'll see what I can find. I am guessing this was always the case with salloc and is not directly related to anything we did in the two bugs you mention here.
This is fixed in 2.5. salloc was referencing code that only applied to an L or P system; sbatch was affected in the same way. If you want to backport the fix to 2.4, the patch is here: https://github.com/SchedMD/slurm/commit/3e89da1164312ab8a0d049cb70931347942340fa