Summary: | Undefined symbols in many plugins prevent the first launch of slurmctld and lead to sinfo and sacctmgr failures | ||
---|---|---|---|
Product: | Slurm | Reporter: | Regine Gaudin <regine.gaudin> |
Component: | Build System and Packaging | Assignee: | Felip Moll <felip.moll> |
Status: | RESOLVED DUPLICATE | QA Contact: | |
Severity: | 4 - Minor Issue | ||
Priority: | --- | CC: | cinek, felip.moll, matthieu.hautreux |
Version: | 19.05.5 | ||
Hardware: | Linux | ||
OS: | Linux | ||
See Also: | https://bugs.schedmd.com/show_bug.cgi?id=7806 | ||
Site: | CEA | Alineos Sites: | --- |
Atos/Eviden Sites: | --- | Confidential Site: | --- |
Coreweave sites: | --- | Cray Sites: | --- |
DS9 clusters: | --- | Google sites: | --- |
HPCnow Sites: | --- | HPE Sites: | --- |
IBM Sites: | --- | NOAA Site: | --- |
NoveTech Sites: | --- | Nvidia HWinf-CS Sites: | --- |
OCF Sites: | --- | Recursion Pharma Sites: | --- |
SFW Sites: | --- | SNIC sites: | --- |
Tzag Elita Sites: | --- | Linux Distro: | --- |
Machine Name: | CLE Version: | ||
Version Fixed: | Target Release: | --- | |
DevPrio: | --- | Emory-Cloud Sites: | --- |
Attachments: | build.log obtained by rpmbuild -ba slurm.spec |tee build.log |
Description
Regine Gaudin
2020-02-03 07:37:48 MST
Hi Regine,

- Can you please show me your ./configure log (config.log) and the output of make install? Can you tell me exactly all the steps of how you installed the software? There seems to be a wrong path in your include install dir.
- Can you look at /usr/lib64/slurm/ and see if it is empty?
- Show me an 'echo $PATH' from your console.
- Finally, an 'ldd slurmctld' should give you some hint.

It seems that the missing symbols are contained in /root/rpmbuild/BUILD/slurm-19.05.5/src/slurmctld/.libs/slurmctld:

nm slurmctld |grep unlock_slurmctld
000000000007a8a0 T unlock_slurmctld
[root@vm0 .libs]# nm slurmctld |grep slurm_job_preempt_mode
000000000009673d T slurm_job_preempt_mode
[root@vm0 .libs]# nm slurmctld |grep powercap_get_cluster_current_cap
0000000000095216 T powercap_get_cluster_current_cap

generated with:

libtool: link: gcc -DNUMA_VERSION1_COMPATIBILITY -O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -pthread -ggdb3 -Wall -g -O1 -fno-strict-aliasing -Wl,-z -Wl,relro -Wl,-z -Wl,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -o .libs/slurmctld acct_policy.o agent.o backup.o burst_buffer.o controller.o fed_mgr.o front_end.o gang.o groups.o heartbeat.o job_mgr.o job_scheduler.o job_submit.o licenses.o locks.o node_mgr.o node_scheduler.o partition_mgr.o ping_nodes.o port_mgr.o power_save.o powercapping.o preempt.o proc_req.o read_config.o reservation.o sched_plugin.o slurmctld_plugstack.o srun_comm.o state_save.o statistics.o step_mgr.o trigger_mgr.o -Wl,-rpath=/usr/lib64/slurm -Wl,--export-dynamic ../../src/common/.libs/libdaemonize.a -L../../src/api/.libs /root/rpmbuild/BUILD/slurm-19.05.5/src/api/.libs/libslurmfull.so -ldl -pthread -Wl,-rpath -Wl,/usr/lib64/slurm

But I do not see any corresponding library installation for .libs/slurmctld, only for slurmctld_nonstop:

libtool: install: /usr/bin/install -c .libs/slurmctld_nonstop.so /root/rpmbuild/BUILDROOT/slurm-19.05.5-1.ocean1.el8.x86_64/usr/lib64/slurm/slurmctld_nonstop.so
libtool: install: /usr/bin/install -c .libs/slurmctld_nonstop.lai /root/rpmbuild/BUILDROOT/slurm-19.05.5-1.ocean1.el8.x86_64/usr/lib64/slurm/slurmctld_nonstop.la
libtool: install: /usr/bin/install -c .libs/slurmctld_nonstop.a /root/rpmbuild/BUILDROOT/slurm-19.05.5-1.ocean1.el8.x86_64/usr/lib64/slurm/slurmctld_nonstop.a

while

libtool: install: /usr/bin/install -c .libs/slurmctld /root/rpmbuild/BUILDROOT/slurm-19.05.5-1.ocean1.el8.x86_64/usr/sbin/slurmctld

Created attachment 12923 [details]
build.log obtained by rpmbuild -ba slurm.spec |tee build.log
rpmbuild -ba slurm.spec |tee build.log (see attachment)

virtual machine

echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin

ls /usr/lib64/slurm
accounting_storage_filetxt.so     cli_filter_none.so        job_container_cncu.so            mpi_pmi2.so                   select_cons_tres.so
accounting_storage_mysql.so       core_spec_cray_aries.so   job_container_none.so            mpi_pmix.so                   select_cray_aries.so
accounting_storage_none.so        core_spec_none.so         job_submit_all_partitions.so     mpi_pmix_v3.so                select_linear.so
accounting_storage_slurmdbd.so    cred_munge.so             job_submit_cray_aries.so         node_features_knl_generic.so  site_factor_none.so
acct_gather_energy_cray_aries.so  cred_none.so              job_submit_lua.so                power_none.so                 slurmctld_nonstop.so
acct_gather_energy_ibmaem.so      ext_sensors_none.so       job_submit_pbs.so                preempt_none.so               spank_pbs.so
acct_gather_energy_ipmi.so        gpu_generic.so            job_submit_require_timelimit.so  preempt_partition_prio.so     src
acct_gather_energy_none.so        gres_gpu.so               job_submit_throttle.so           preempt_qos.so                switch_cray_aries.so
acct_gather_energy_rapl.so        gres_mic.so               launch_slurm.so                  priority_basic.so             switch_generic.so
acct_gather_energy_xcc.so         gres_mps.so               layouts_power_cpufreq.so         priority_multifactor.so       switch_none.so
acct_gather_filesystem_lustre.so  gres_nic.so               layouts_power_default.so         proctrack_cgroup.so           task_affinity.so
acct_gather_filesystem_none.so    jobacct_gather_cgroup.so  layouts_unit_default.so          proctrack_linuxproc.so        task_cgroup.so
acct_gather_interconnect_none.so  jobacct_gather_linux.so   libslurmfull.so                  proctrack_pgid.so             task_cray_aries.so
acct_gather_profile_influxdb.so   jobacct_gather_none.so    mcs_account.so                   route_default.so              task_none.so
acct_gather_profile_none.so       jobcomp_elasticsearch.so  mcs_group.so                     route_topology.so             topology_3d_torus.so
auth_munge.so                     jobcomp_filetxt.so        mcs_none.so                      sched_backfill.so             topology_hypercube.so
burst_buffer_generic.so           jobcomp_mysql.so          mcs_user.so                      sched_builtin.so              topology_node_rank.so
checkpoint_none.so                jobcomp_none.so           mpi_none.so                      sched_hold.so                 topology_none.so
checkpoint_ompi.so                jobcomp_script.so         mpi_openmpi.so                   select_cons_res.so            topology_tree.so

[root@vm0 SPECS]# ldd /usr/sbin/slurmctld
    linux-vdso.so.1 (0x00007ffc5d1ed000)
    libslurmfull.so => /usr/lib64/slurm/libslurmfull.so (0x00007fced9860000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00007fced965c000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fced943c000)
    libc.so.6 => /lib64/libc.so.6 (0x00007fced9078000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fced9f6c000)

[root@vm0 slurm]# pwd
/usr/lib64/slurm
[root@vm0 slurm]# for i in `ls *`;do nm $i |grep unlock_slurmctld; done
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
nm: 'src:': No such file
nm: 'sattach': No such file
nm: 'srun': No such file

After the rpms were obtained I just did yum install slurm*19.05.5*

rpm -qa|grep slurm
slurm-pam_slurm-19.05.5-1.ocean1.el8.x86_64
slurm-contribs-19.05.5-1.ocean1.el8.x86_64
slurm-perlapi-19.05.5-1.ocean1.el8.x86_64
slurm-devel-19.05.5-1.ocean1.el8.x86_64
slurm-slurmd-19.05.5-1.ocean1.el8.x86_64
slurm-openlava-19.05.5-1.ocean1.el8.x86_64
slurm-libpmi-19.05.5-1.ocean1.el8.x86_64
slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64
slurm-torque-19.05.5-1.ocean1.el8.x86_64
slurm-example-configs-19.05.5-1.ocean1.el8.x86_64
slurm-19.05.5-1.ocean1.el8.x86_64
slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64

(In reply to Regine Gaudin from comment #5)
> After rpms obtained I just do yum install slurm*19.05.5*
>
> rpm -qa|grep slurm
> slurm-pam_slurm-19.05.5-1.ocean1.el8.x86_64
> slurm-contribs-19.05.5-1.ocean1.el8.x86_64
> slurm-perlapi-19.05.5-1.ocean1.el8.x86_64
> slurm-devel-19.05.5-1.ocean1.el8.x86_64
> slurm-slurmd-19.05.5-1.ocean1.el8.x86_64
> slurm-openlava-19.05.5-1.ocean1.el8.x86_64
> slurm-libpmi-19.05.5-1.ocean1.el8.x86_64
> slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64
> slurm-torque-19.05.5-1.ocean1.el8.x86_64
> slurm-example-configs-19.05.5-1.ocean1.el8.x86_64
> slurm-19.05.5-1.ocean1.el8.x86_64
> slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64

If you do a complete uninstall, a cleanup of /usr/lib64/slurm/, then remove /root/rpmbuild/, and start again, does it still happen?

Yes, it does happen again. Once again, no library in /usr/lib64/slurm contains the missing symbols, which is supported by the build log, where you can see that no library is built with these symbols: the temporary .libs/slurmctld (which contains the missing symbols) is in the end only used for

libtool: install: /usr/bin/install -c .libs/slurmctld /root/rpmbuild/BUILDROOT/slurm-19.05.5-1.ocean1.el8.x86_64/usr/sbin/slurmctld

and not for building a library. Here is what you've asked:

1) complete uninstall

# yum remove slurm
Dependencies resolved.
======================================================================
 Package          Arch    Version               Repository      Size
======================================================================
Removing:
 slurm            x86_64  19.05.5-1.ocean1.el8  @@commandline   64 M
Removing dependent packages:
 slurm-contribs   x86_64  19.05.5-1.ocean1.el8  @@commandline   31 k
 slurm-devel      x86_64  19.05.5-1.ocean1.el8  @@commandline  366 k
 slurm-libpmi     x86_64  19.05.5-1.ocean1.el8  @@commandline  490 k
 slurm-openlava   x86_64  19.05.5-1.ocean1.el8  @@commandline   24 k
 slurm-pam_slurm  x86_64  19.05.5-1.ocean1.el8  @@commandline  487 k
 slurm-slurmctld  x86_64  19.05.5-1.ocean1.el8  @@commandline  4.6 M
 slurm-slurmd     x86_64  19.05.5-1.ocean1.el8  @@commandline  3.0 M
 slurm-slurmdbd   x86_64  19.05.5-1.ocean1.el8  @@commandline  3.1 M
 slurm-torque     x86_64  19.05.5-1.ocean1.el8  @@commandline  394 k
Removing unused dependencies:
 slurm-perlapi    x86_64  19.05.5-1.ocean1.el8  @@commandline  3.7 M

Transaction Summary
======================================================================
Remove  11 Packages

Freed space: 80 M
Is this ok [y/N]: y
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                               1/1
  Running scriptlet: slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64    1/1
  Running scriptlet: slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64    1/11
  Erasing          : slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64    1/11
  Running scriptlet: slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64    1/11
  Running scriptlet: slurm-slurmd-19.05.5-1.ocean1.el8.x86_64      2/11
  Erasing          : slurm-slurmd-19.05.5-1.ocean1.el8.x86_64      2/11
  Running scriptlet: slurm-slurmd-19.05.5-1.ocean1.el8.x86_64      2/11
  Running scriptlet: slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64   3/11
  Erasing          : slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64   3/11
  Running scriptlet: slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64   3/11
  Erasing          : slurm-pam_slurm-19.05.5-1.ocean1.el8.x86_64   4/11
  Erasing          : slurm-libpmi-19.05.5-1.ocean1.el8.x86_64      5/11
  Erasing          : slurm-torque-19.05.5-1.ocean1.el8.x86_64      6/11
  Erasing          : slurm-openlava-19.05.5-1.ocean1.el8.x86_64    7/11
  Erasing          : slurm-perlapi-19.05.5-1.ocean1.el8.x86_64     8/11
  Erasing          : slurm-devel-19.05.5-1.ocean1.el8.x86_64       9/11
  Erasing          : slurm-contribs-19.05.5-1.ocean1.el8.x86_64   10/11
  Running scriptlet: slurm-19.05.5-1.ocean1.el8.x86_64            11/11
  Erasing          : slurm-19.05.5-1.ocean1.el8.x86_64            11/11
  Running scriptlet: slurm-19.05.5-1.ocean1.el8.x86_64            11/11
  Verifying        : slurm-19.05.5-1.ocean1.el8.x86_64             1/11
  Verifying        : slurm-contribs-19.05.5-1.ocean1.el8.x86_64    2/11
  Verifying        : slurm-devel-19.05.5-1.ocean1.el8.x86_64       3/11
  Verifying        : slurm-libpmi-19.05.5-1.ocean1.el8.x86_64      4/11
  Verifying        : slurm-openlava-19.05.5-1.ocean1.el8.x86_64    5/11
  Verifying        : slurm-pam_slurm-19.05.5-1.ocean1.el8.x86_64   6/11
  Verifying        : slurm-perlapi-19.05.5-1.ocean1.el8.x86_64     7/11
  Verifying        : slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64   8/11
  Verifying        : slurm-slurmd-19.05.5-1.ocean1.el8.x86_64      9/11
  Verifying        : slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64   10/11
  Verifying        : slurm-torque-19.05.5-1.ocean1.el8.x86_64     11/11

Removed:
  slurm-19.05.5-1.ocean1.el8.x86_64
  slurm-contribs-19.05.5-1.ocean1.el8.x86_64
  slurm-devel-19.05.5-1.ocean1.el8.x86_64
  slurm-libpmi-19.05.5-1.ocean1.el8.x86_64
  slurm-openlava-19.05.5-1.ocean1.el8.x86_64
  slurm-pam_slurm-19.05.5-1.ocean1.el8.x86_64
  slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64
  slurm-slurmd-19.05.5-1.ocean1.el8.x86_64
  slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64
  slurm-torque-19.05.5-1.ocean1.el8.x86_64
  slurm-perlapi-19.05.5-1.ocean1.el8.x86_64
Complete!

2) cleanup of /usr/lib64/slurm/

root@vm0 ~]# ls -altr /usr/lib64/slurm/
total 72
drwxr-xr-x   2 root root     6 Feb  4 09:12 .
dr-xr-xr-x. 74 root root 49152 Feb  4 09:12 ..

3) remove /root/rpmbuild/

root@vm0 ~]# rm -rf /root/rpmbuild
[root@vm0 ~]# ls -altr /root/rpmbuild
ls: cannot access '/root/rpmbuild': No such file or directory

4) then restart (so need to reinstall)

yum localinstall slurm*
Last metadata expiration check: 2:43:10 ago on Tue 04 Feb 2020 06:31:11 AM CET.
Package slurm-example-configs-19.05.5-1.ocean1.el8.x86_64 is already installed.
Dependencies resolved.
======================================================================
 Package          Arch    Version               Repository     Size
======================================================================
Installing:
 slurm            x86_64  19.05.5-1.ocean1.el8  @commandline   14 M
 slurm-contribs   x86_64  19.05.5-1.ocean1.el8  @commandline   21 k
 slurm-devel      x86_64  19.05.5-1.ocean1.el8  @commandline   82 k
 slurm-libpmi     x86_64  19.05.5-1.ocean1.el8  @commandline  154 k
 slurm-openlava   x86_64  19.05.5-1.ocean1.el8  @commandline   13 k
 slurm-pam_slurm  x86_64  19.05.5-1.ocean1.el8  @commandline  148 k
 slurm-perlapi    x86_64  19.05.5-1.ocean1.el8  @commandline  812 k
 slurm-slurmctld  x86_64  19.05.5-1.ocean1.el8  @commandline  1.3 M
 slurm-slurmd     x86_64  19.05.5-1.ocean1.el8  @commandline  765 k
 slurm-slurmdbd   x86_64  19.05.5-1.ocean1.el8  @commandline  822 k
 slurm-torque     x86_64  19.05.5-1.ocean1.el8  @commandline  129 k

Transaction Summary
======================================================================
Install  11 Packages

Total size: 18 M
Installed size: 76 M
Is this ok [y/N]: y
Downloading Packages:
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                               1/1
  Running scriptlet: slurm-19.05.5-1.ocean1.el8.x86_64             1/11
  Installing       : slurm-19.05.5-1.ocean1.el8.x86_64             1/11
  Running scriptlet: slurm-19.05.5-1.ocean1.el8.x86_64             1/11
  Installing       : slurm-perlapi-19.05.5-1.ocean1.el8.x86_64     2/11
  Installing       : slurm-openlava-19.05.5-1.ocean1.el8.x86_64    3/11
  Installing       : slurm-torque-19.05.5-1.ocean1.el8.x86_64      4/11
  Installing       : slurm-contribs-19.05.5-1.ocean1.el8.x86_64    5/11
  Installing       : slurm-devel-19.05.5-1.ocean1.el8.x86_64       6/11
  Installing       : slurm-libpmi-19.05.5-1.ocean1.el8.x86_64      7/11
  Installing       : slurm-pam_slurm-19.05.5-1.ocean1.el8.x86_64   8/11
  Installing       : slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64   9/11
  Running scriptlet: slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64   9/11
  Installing       : slurm-slurmd-19.05.5-1.ocean1.el8.x86_64     10/11
  Running scriptlet: slurm-slurmd-19.05.5-1.ocean1.el8.x86_64     10/11
  Installing       : slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64   11/11
  Running scriptlet: slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64   11/11
  Verifying        : slurm-19.05.5-1.ocean1.el8.x86_64             1/11
  Verifying        : slurm-contribs-19.05.5-1.ocean1.el8.x86_64    2/11
  Verifying        : slurm-devel-19.05.5-1.ocean1.el8.x86_64       3/11
  Verifying        : slurm-libpmi-19.05.5-1.ocean1.el8.x86_64      4/11
  Verifying        : slurm-openlava-19.05.5-1.ocean1.el8.x86_64    5/11
  Verifying        : slurm-pam_slurm-19.05.5-1.ocean1.el8.x86_64   6/11
  Verifying        : slurm-perlapi-19.05.5-1.ocean1.el8.x86_64     7/11
  Verifying        : slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64   8/11
  Verifying        : slurm-slurmd-19.05.5-1.ocean1.el8.x86_64      9/11
  Verifying        : slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64   10/11
  Verifying        : slurm-torque-19.05.5-1.ocean1.el8.x86_64     11/11

Installed:
  slurm-19.05.5-1.ocean1.el8.x86_64
  slurm-contribs-19.05.5-1.ocean1.el8.x86_64
  slurm-devel-19.05.5-1.ocean1.el8.x86_64
  slurm-libpmi-19.05.5-1.ocean1.el8.x86_64
  slurm-openlava-19.05.5-1.ocean1.el8.x86_64
  slurm-pam_slurm-19.05.5-1.ocean1.el8.x86_64
  slurm-perlapi-19.05.5-1.ocean1.el8.x86_64
  slurm-slurmctld-19.05.5-1.ocean1.el8.x86_64
  slurm-slurmd-19.05.5-1.ocean1.el8.x86_64
  slurm-slurmdbd-19.05.5-1.ocean1.el8.x86_64
  slurm-torque-19.05.5-1.ocean1.el8.x86_64
Complete!

5) slurm libraries are then there again:

ls -altr /usr/lib64/slurm/
total 39580
-rwxr-xr-x 1 root root 10447208 Feb 3 16:03 libslurmfull.so
-rwxr-xr-x 1 root root 141872 Feb 3 16:03 layouts_unit_default.so
-rwxr-xr-x 1 root root 136304 Feb 3 16:03 layouts_power_default.so
-rwxr-xr-x 1 root root 137200 Feb 3 16:03 layouts_power_cpufreq.so
-rwxr-xr-x 1 root root 243496 Feb 3 16:03 acct_gather_energy_rapl.so
-rwxr-xr-x 1 root root 318576 Feb 3 16:03 acct_gather_energy_ipmi.so
-rwxr-xr-x 1 root root 222096 Feb 3 16:03 acct_gather_energy_ibmaem.so
-rwxr-xr-x 1 root root 222056 Feb 3 16:03 acct_gather_energy_cray_aries.so
-rwxr-xr-x 1 root root 494544 Feb 3 16:03 accounting_storage_slurmdbd.so
-rwxr-xr-x 1 root root 255000 Feb 3 16:03 accounting_storage_none.so
-rwxr-xr-x 1 root root 2459568 Feb 3 16:03 accounting_storage_mysql.so
-rwxr-xr-x 1 root root 395712 Feb 3 16:03 accounting_storage_filetxt.so
-rwxr-xr-x 1 root root 175632 Feb 3 16:03 cred_munge.so
-rwxr-xr-x 1 root root 207200 Feb 3 16:03 core_spec_none.so
-rwxr-xr-x 1 root root 208648 Feb 3 16:03 core_spec_cray_aries.so
-rwxr-xr-x 1 root root 229992 Feb 3 16:03 cli_filter_none.so
-rwxr-xr-x 1 root root 220984 Feb 3 16:03 checkpoint_ompi.so
-rwxr-xr-x 1 root root 210008 Feb 3 16:03 checkpoint_none.so
-rwxr-xr-x 1 root root 330752 Feb 3 16:03 burst_buffer_generic.so
-rwxr-xr-x 1 root root 237000 Feb 3 16:03 auth_munge.so
-rwxr-xr-x 1 root root 225824 Feb 3 16:03 acct_gather_profile_none.so
-rwxr-xr-x 1 root root 301520 Feb 3 16:03 acct_gather_profile_influxdb.so
-rwxr-xr-x 1 root root 208736 Feb 3 16:03 acct_gather_interconnect_none.so
-rwxr-xr-x 1 root root 208704 Feb 3 16:03 acct_gather_filesystem_none.so
-rwxr-xr-x 1 root root 236968 Feb 3 16:03 acct_gather_filesystem_lustre.so
-rwxr-xr-x 1 root root 1012256 Feb 3 16:03 acct_gather_energy_xcc.so
-rwxr-xr-x 1 root root 208976 Feb 3 16:03 acct_gather_energy_none.so
-rwxr-xr-x 1 root root 241392 Feb 3 16:03 jobcomp_script.so
-rwxr-xr-x 1 root root 195632 Feb 3 16:03 jobcomp_none.so
-rwxr-xr-x 1 root root 559712 Feb 3 16:03 jobcomp_mysql.so
-rwxr-xr-x 1 root root 270136 Feb 3 16:03 jobcomp_filetxt.so
-rwxr-xr-x 1 root root 310056 Feb 3 16:03 jobcomp_elasticsearch.so
-rwxr-xr-x 1 root root 225320 Feb 3 16:03 jobacct_gather_none.so
-rwxr-xr-x 1 root root 316424 Feb 3 16:03 jobacct_gather_linux.so
-rwxr-xr-x 1 root root 415072 Feb 3 16:03 jobacct_gather_cgroup.so
-rwxr-xr-x 1 root root 287328 Feb 3 16:03 gres_nic.so
-rwxr-xr-x 1 root root 343016 Feb 3 16:03 gres_mps.so
-rwxr-xr-x 1 root root 287328 Feb 3 16:03 gres_mic.so
-rwxr-xr-x 1 root root 336344 Feb 3 16:03 gres_gpu.so
-rwxr-xr-x 1 root root 221432 Feb 3 16:03 gpu_generic.so
-rwxr-xr-x 1 root root 235888 Feb 3 16:03 ext_sensors_none.so
-rwxr-xr-x 1 root root 163704 Feb 3 16:03 cred_none.so
-rwxr-xr-x 1 root root 70368 Feb 3 16:03 spank_pbs.so
-rwxr-xr-x 1 root root 219736 Feb 3 16:03 mpi_none.so
-rwxr-xr-x 1 root root 197544 Feb 3 16:03 mcs_user.so
-rwxr-xr-x 1 root root 194616 Feb 3 16:03 mcs_none.so
-rwxr-xr-x 1 root root 208224 Feb 3 16:03 mcs_group.so
-rwxr-xr-x 1 root root 198696 Feb 3 16:03 mcs_account.so
-rwxr-xr-x 1 root root 335200 Feb 3 16:03 launch_slurm.so
-rwxr-xr-x 1 root root 232248 Feb 3 16:03 job_submit_throttle.so
-rwxr-xr-x 1 root root 204248 Feb 3 16:03 job_submit_require_timelimit.so
-rwxr-xr-x 1 root root 241776 Feb 3 16:03 job_submit_pbs.so
-rwxr-xr-x 1 root root 340016 Feb 3 16:03 job_submit_lua.so
-rwxr-xr-x 1 root root 218312 Feb 3 16:03 job_submit_cray_aries.so
-rwxr-xr-x 1 root root 218136 Feb 3 16:03 job_submit_all_partitions.so
-rwxr-xr-x 1 root root 208672 Feb 3 16:03 job_container_none.so
-rwxr-xr-x 1 root root 242776 Feb 3 16:03 job_container_cncu.so
-rwxr-xr-x 1 root root 224120 Feb 3 16:03 route_topology.so
-rwxr-xr-x 1 root root 208144 Feb 3 16:03 route_default.so
-rwxr-xr-x 1 root root 191608 Feb 3 16:03 proctrack_pgid.so
-rwxr-xr-x 1 root root 216824 Feb 3 16:03 proctrack_linuxproc.so
-rwxr-xr-x 1 root root 216136 Feb 3 16:03 proctrack_cgroup.so
-rwxr-xr-x 1 root root 362472 Feb 3 16:03 priority_multifactor.so
-rwxr-xr-x 1 root root 211256 Feb 3 16:03 priority_basic.so
-rwxr-xr-x 1 root root 208552 Feb 3 16:03 preempt_qos.so
-rwxr-xr-x 1 root root 208376 Feb 3 16:03 preempt_partition_prio.so
-rwxr-xr-x 1 root root 197704 Feb 3 16:03 preempt_none.so
-rwxr-xr-x 1 root root 264976 Feb 3 16:03 power_none.so
-rwxr-xr-x 1 root root 329912 Feb 3 16:03 node_features_knl_generic.so
-rwxr-xr-x 1 root root 1438456 Feb 3 16:03 mpi_pmix_v3.so
lrwxrwxrwx 1 root root 16 Feb 3 16:03 mpi_pmix.so -> ./mpi_pmix_v3.so
-rwxr-xr-x 1 root root 851112 Feb 3 16:03 mpi_pmi2.so
-rwxr-xr-x 1 root root 219744 Feb 3 16:03 mpi_openmpi.so
-rwxr-xr-x 1 root root 225992 Feb 3 16:03 task_none.so
-rwxr-xr-x 1 root root 247776 Feb 3 16:03 task_cray_aries.so
-rwxr-xr-x 1 root root 537480 Feb 3 16:03 task_affinity.so
-rwxr-xr-x 1 root root 230280 Feb 3 16:03 switch_none.so
-rwxr-xr-x 1 root root 282184 Feb 3 16:03 switch_generic.so
-rwxr-xr-x 1 root root 406856 Feb 3 16:03 switch_cray_aries.so
-rwxr-xr-x 1 root root 474128 Feb 3 16:03 slurmctld_nonstop.so
-rwxr-xr-x 1 root root 227712 Feb 3 16:03 site_factor_none.so
-rwxr-xr-x 1 root root 448576 Feb 3 16:03 select_linear.so
-rwxr-xr-x 1 root root 473920 Feb 3 16:03 select_cray_aries.so
-rwxr-xr-x 1 root root 803440 Feb 3 16:03 select_cons_tres.so
-rwxr-xr-x 1 root root 700568 Feb 3 16:03 select_cons_res.so
-rwxr-xr-x 1 root root 200976 Feb 3 16:03 sched_hold.so
-rwxr-xr-x 1 root root 252880 Feb 3 16:03 sched_builtin.so
-rwxr-xr-x 1 root root 436104 Feb 3 16:03 sched_backfill.so
-rwxr-xr-x 1 root root 219160 Feb 3 16:03 topology_tree.so
-rwxr-xr-x 1 root root 153280 Feb 3 16:03 topology_none.so
-rwxr-xr-x 1 root root 188752 Feb 3 16:03 topology_node_rank.so
-rwxr-xr-x 1 root root 252168 Feb 3 16:03 topology_hypercube.so
-rwxr-xr-x 1 root root 217336 Feb 3 16:03 topology_3d_torus.so
-rwxr-xr-x 1 root root 493104 Feb 3 16:03 task_cgroup.so
drwxr-xr-x 4 root root 33 Feb 4 09:14 src
dr-xr-xr-x. 74 root root 49152 Feb 4 09:14 ..
drwxr-xr-x 3 root root 8192 Feb 4 09:14 .
[root@vm0 slurm-19.05.5-1]#

but again missing symbols:

[root@vm0 slurm]# pwd
/usr/lib64/slurm
[root@vm0 slurm]# for i in `ls *`;do nm $i |grep unlock_slurmctld; done
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld
U unlock_slurmctld

6) I restart (conf using the plugin with missing symbols)

[root@vm0 slurm-19.05.5-1]# systemctl start slurmdbd
[root@vm0 slurm-19.05.5-1]# systemctl start slurmctld
[root@vm0 slurm-19.05.5-1]# sinfo
sinfo: error: plugin_load_from_file: dlopen(/usr/lib64/slurm/select_cray_aries.so): /usr/lib64/slurm/select_cray_aries.so: undefined symbol: unlock_slurmctld
sinfo: error: Couldn't load specified plugin name for select/cray_aries: Dlopen of plugin file failed
sinfo: error: plugin_load_from_file: dlopen(/usr/lib64/slurm/select_linear.so): /usr/lib64/slurm/select_linear.so: undefined symbol: slurm_job_preempt_mode
sinfo: error: Couldn't load specified plugin name for select/linear: Dlopen of plugin file failed
sinfo: error: plugin_load_from_file: dlopen(/usr/lib64/slurm/select_cons_res.so): /usr/lib64/slurm/select_cons_res.so: undefined symbol: powercap_get_cluster_current_cap
sinfo: error: Couldn't load specified plugin name for select/cons_res: Dlopen of plugin file failed
sinfo: error: plugin_load_from_file: dlopen(/usr/lib64/slurm/select_cons_tres.so): /usr/lib64/slurm/select_cons_tres.so: undefined symbol: powercap_get_cluster_current_cap
sinfo: error: Couldn't load specified plugin name for select/cons_tres: Dlopen of plugin file failed
sinfo: fatal: Can't find plugin for select/cons_res

Please note the VMs are on rhel8 (dlopen scope might have changed). I'll try on rhel7.

(In reply to Regine Gaudin from comment #8)
> Please note the vm are in rhel8 (dlopen scope might have changed).
> I'll try in rhel7

One step you missed in my instructions when I said "start again" is the need to recompile again with rpmbuild. I was wondering if anything in your build directory was causing issues; that's the reason I asked you to remove the rpmbuild directories and do an uninstall, just to recompile again from scratch. In any case, let me install a RHEL8 VM and I will do the proper checks.

OK, I have tried the same build and installation (rpmbuild -ba slurm.spec, yum localinstall) on rhel7 instead of rhel8, and there is no such problem:
One difference between the two, though I don't think it is the origin of the problem, is the mandatory use of python3 on rhel8 instead of python on rhel7,
so we had to modify your slurm.spec a little to build on rhel8.
dlopen behavior might have changed between rhel7 and rhel8, which could explain why plugin loading is failing.
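The dlopen hypothesis is directly testable. The plugins resolve these symbols from the slurmctld executable, so the symbols must be present in its dynamic symbol table (nm -D), not merely in the regular symbol table (plain nm) that the earlier checks inspected. A hedged diagnostic sketch, not a confirmed root cause, comparing the rhel7 and rhel8 builds of /usr/sbin/slurmctld:

```shell
# Check whether the installed slurmctld exports the failing symbols to
# dlopen()ed plugins. Plain nm can show them as 'T' while nm -D shows
# nothing, if the dynamic export was lost at link time.
for sym in unlock_slurmctld slurm_job_preempt_mode powercap_get_cluster_current_cap; do
    printf '%-35s' "$sym"
    if nm -D /usr/sbin/slurmctld 2>/dev/null | grep -qw "$sym"; then
        echo " exported (dlopen can resolve it)"
    else
        echo " NOT in the dynamic symbol table -- matches the sinfo failures"
    fi
done
```

If the rhel7 binary lists all three symbols and the rhel8 binary lists none, the problem is in the link step (e.g. around -Wl,--export-dynamic) rather than in dlopen itself.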
[root@vm0 SPECS]# diff slurm.spec slurm.spec.19.05.5.orig
65c65
< BuildRequires: python3
---
> BuildRequires: python
179,180d178
< Patch0: slurm-19.05.3-shebang-for-python3.patch
<
187d184
<
311,312d307
<
< %patch0 -p1
with
cat slurm-19.05.3-shebang-for-python3.patch
diff -rau slurm-19.05.3/contribs/cray/csm/slurmconfgen_smw.py slurm-19.05.3.work/contribs/cray/csm/slurmconfgen_smw.py
--- slurm-19.05.3/contribs/cray/csm/slurmconfgen_smw.py 2019-10-04 02:55:33.000000000 +0200
+++ slurm-19.05.3.work/contribs/cray/csm/slurmconfgen_smw.py 2019-11-25 11:53:48.977657319 +0100
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
#
# Copyright 2015-2016 Cray Inc. All Rights Reserved.
""" A script to generate slurm.conf and gres.conf for a
diff -rau slurm-19.05.3/contribs/cray/slurmconfgen.py.in slurm-19.05.3.work/contribs/cray/slurmconfgen.py.in
--- slurm-19.05.3/contribs/cray/slurmconfgen.py.in 2019-10-04 02:55:33.000000000 +0200
+++ slurm-19.05.3.work/contribs/cray/slurmconfgen.py.in 2019-11-25 11:54:10.319657319 +0100
@@ -1,4 +1,4 @@
-#!/usr/bin/python
+#!/usr/bin/env python3
#
# (c) Copyright 2013 Cray Inc. All Rights Reserved.
#
diff -rau slurm-19.05.3/contribs/slurm.spec-legacy slurm-19.05.3.work/contribs/slurm.spec-legacy
--- slurm-19.05.3/contribs/slurm.spec-legacy 2019-10-04 02:55:33.000000000 +0200
+++ slurm-19.05.3.work/contribs/slurm.spec-legacy 2019-11-25 11:53:34.865657319 +0100
@@ -88,7 +88,7 @@
Requires: slurm-plugins
%ifos linux
-BuildRequires: python
+BuildRequires: python3
%endif
# not sure if this is always an actual rpm or not so leaving the requirement out
diff -rau slurm-19.05.3/doc/html/shtml2html.py slurm-19.05.3.work/doc/html/shtml2html.py
--- slurm-19.05.3/doc/html/shtml2html.py 2019-10-04 02:55:33.000000000 +0200
+++ slurm-19.05.3.work/doc/html/shtml2html.py 2019-11-25 11:54:58.941657319 +0100
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
import re
import sys
diff -rau slurm-19.05.3/doc/man/man2html.py slurm-19.05.3.work/doc/man/man2html.py
--- slurm-19.05.3/doc/man/man2html.py 2019-10-04 02:55:33.000000000 +0200
+++ slurm-19.05.3.work/doc/man/man2html.py 2019-11-25 11:53:15.345657319 +0100
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
import re
import sys
diff -rau slurm-19.05.3/testsuite/expect/regression.py slurm-19.05.3.work/testsuite/expect/regression.py
--- slurm-19.05.3/testsuite/expect/regression.py 2019-10-04 02:55:33.000000000 +0200
+++ slurm-19.05.3.work/testsuite/expect/regression.py 2019-11-25 11:54:35.998657319 +0100
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
############################################################################
# Copyright (C) 2006 The Regents of the University of California.
# Produced at Lawrence Livermore National Laboratory (cf, DISCLAIMER).
Hi, I see the problem. It turns out to be a duplicate of bug 2443. In RHEL 8 (and since Fedora 23) there are some cflags enabled by default for security, trying to avoid lazy linking in applications. Slurm is using lazy linking. Please, see bug 2443 comment 13: https://bugs.schedmd.com/show_bug.cgi?id=2443#c13 See + info about hardening: https://fedoraproject.org/wiki/Changes/Harden_All_Packages One quick workaround should be to add this to your .rpmmacros: %undefine _hardened_build Or something like this in the spec file: %undefine _hardened_build %global _hardened_cflags "-Wl,-z,lazy" %global _hardened_ldflags "-Wl,-z,lazy" Important, see also: https://bugzilla.redhat.com/show_bug.cgi?id=1211296 https://fedoraproject.org/wiki/Changes/Harden_All_Packages#Troubleshooting_steps_for_package_maintainers By default I think to remember that these variables are set like: %_hardening_cflags -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 %_hardening_ldflags -specs=/usr/lib/rpm/redhat/redhat-hardened-ld %_hardened_build 1 You can see you are affected by the gcc line you passed, where it has: ... -Wl,relro -Wl,-z -Wl,now ... This issue is tracked in 2443, and if the workaround works for you I will close it as a dup. Let me know about your progress. An off-topic comment. I understand you are working on a non-production system, so in a test machine. Therefore I also understand this is not really a sev-2 issue. We take severity seriously because it helps us to really prioritize bugs which really need immediate assistance and cannot wait. In this situation, a severity 2 means that a system which is in production is having problems which are interfering with users normal workflow and is decreasing productivity considerably: Severity 2 — High Impact A Severity 2 issue is a high-impact problem that is causing sporadic outages or is consistently encountered by end users with adverse impact to end user interaction with the system. 
https://www.schedmd.com/support.php

Therefore I am lowering the severity of this issue and tagging it as a sev-4. I would appreciate it if, in future issues, you could mark the correct severity level according to our definitions shown in the table. Note that a decrease in severity doesn't mean we forget about the bug; rather, as I mentioned, it helps us to differentiate the really urgent issues. Thanks for your understanding.

I have added these lines to the spec file and it wars the problem of missing symbols:

%undefine _hardened_build
%global _hardened_cflags "-Wl,-z,lazy"
%global _hardened_ldflags "-Wl,-z,lazy"

Severity notes have been understood (we are building Slurm packages on a virtual machine as a reference for uniformization of packages, and the packages will be installed on production machines after validation). Thanks.

(In reply to Regine Gaudin from comment #14)
> I have added these lines to the spec file and it wars the problem

Sorry, does it mean that it 'workarounds' the problem? :)

> Sorry, does it mean that it 'workarounds' the problem? :)

Yes, it does. Thank you.

Ok, I am marking as a dup then.

*** This ticket has been marked as a duplicate of ticket 2443 ***
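For anyone verifying the workaround afterwards: a .so linked with the hardened flags carries the BIND_NOW dynamic flag, which readelf can show. This is a hedged sketch (gcc and binutils assumed); the toy objects stand in for Slurm's plugins, so on a real node you would point readelf -d at e.g. /usr/lib64/slurm/select_linear.so instead.

```shell
# A .so linked with -z now carries BIND_NOW (and FLAGS_1 NOW) in its
# dynamic section; one linked with -z lazy does not. If BIND_NOW is gone
# after rebuilding with the workaround, the lazy flags took effect.
tmp=$(mktemp -d)
echo 'void f(void) {}' > "$tmp/t.c"
gcc -shared -fPIC -Wl,-z,now  -o "$tmp/t_now.so"  "$tmp/t.c"
gcc -shared -fPIC -Wl,-z,lazy -o "$tmp/t_lazy.so" "$tmp/t.c"

readelf -d "$tmp/t_now.so"  | grep -E 'BIND_NOW|FLAGS_1'
readelf -d "$tmp/t_lazy.so" | grep -E 'BIND_NOW|FLAGS_1' || echo 'lazy: no BIND_NOW'
```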