Hi SchedMD Team: We have multiple plates spinning: new hardware requires a newer OS, SLES 11 SP3, while the remainder of the cluster remains on SP1. We've built the new image with Slurm 14.03.10 and have added lua, hwloc, and cgroups to it. We'd like to upgrade Slurm to 14.03.10 (or .11, if it will soon be available) on the controller nodes, but their image will remain at SP1, *without* lua, hwloc, and cgroups. We have no intention of actually using these packages until all nodes have them installed and Slurm is rebuilt with them everywhere. Will the lack of those packages on the controller nodes cause an issue when the new compute nodes come online? Thanks much, Lyn
We'll probably tag 14.03.11 later in December. There is no problem mixing and matching hardware, OS, even 32- and 64-bit systems in the same Slurm cluster. You may well need different Slurm builds for the different distros.

On a related note, we discovered today that Slurm Perl APIs built with Perl version 5.20 will not run on a system having Perl version 5.18 (and vice versa). We have never observed this problem in the past, so it might be specific to certain versions of Perl. In any case, a Perl update may necessitate that Slurm's Perl APIs and our torque command wrappers be rebuilt.
Hi Lyn, did this answer your question? David
Hi David, Yes, Moe's confirmation gave us the warm-n-fuzzy feeling we needed. :) Please feel free to close this ticket. Thanks and Best, Lyn
Information provided. David