Ticket 3534 - getting roughly 1/2 performance on knl after upgrade to 17.02
Summary: getting roughly 1/2 performance on knl after upgrade to 17.02
Status: RESOLVED INVALID
Alias: None
Product: Slurm
Classification: Unclassified
Component: Other
Version: 17.02.1
Hardware: Cray XC Linux
Severity: 2 - High Impact
Assignee: Moe Jette
QA Contact:
URL:
Depends on:
Blocks:
 
Reported: 2017-03-03 13:29 MST by Doug Jacobsen
Modified: 2017-03-03 14:24 MST

See Also:
Site: NERSC
Alineos Sites: ---
Atos/Eviden Sites: ---
Confidential Site: ---
Coreweave sites: ---
Cray Sites: ---
DS9 clusters: ---
HPCnow Sites: ---
HPE Sites: ---
IBM Sites: ---
NOAA Site: ---
OCF Sites: ---
Recursion Pharma Sites: ---
SFW Sites: ---
SNIC sites: ---
Linux Distro: ---
Machine Name:
CLE Version:
Version Fixed:
Target Release: ---
DevPrio: ---
Emory-Cloud Sites: ---


Attachments

Description Doug Jacobsen 2017-03-03 13:29:17 MST
Hello,

Early testing didn't surface this, but in my checkout procedures a DGEMM we run gets roughly half the performance using 17.02.1 relative to 16.05.9.


Last night it was run like this:

time srun --ntasks-per-node=1 -c 272 numactl -m 2,3 /tmp/healthcheck.ex > cori.9686.snc2.all_1proc_3

and we got numbers like this from it:
dmj@cori01:/global/cscratch1/sd/dmj/systemcheckout/build> head cori.9686.snc2.all_1proc_3
nid02304|2065.614358|192.952455
nid02305|2230.694438|193.732286
nid02306|2210.416525|194.330687
nid02307|2203.832001|191.703097
nid02308|2185.428939|194.480866
nid02309|2222.560802|192.178877
nid02310|2173.965848|193.063475
nid02311|2178.690278|193.063475
nid02312|2221.724849|192.178877
nid02313|2210.212779|193.248792


where the first number is flops (DGEMM), and the second number is memory bandwidth (STREAM)


Running today with the same srun, but with Slurm 17.02 and the same configuration, I get:

time srun --ntasks-per-node=1 -c 272 numactl -m 2,3 /tmp/healthcheck.ex > cori.9686.snc2.all_1proc_17.02_1

dmj@cori01:/global/cscratch1/sd/dmj/systemcheckout/build> head cori.9686.snc2.all_1proc_17.02_1
nid02304|1216.107482|194.781920
nid02305|1214.279635|194.180741
nid02306|1216.797868|195.424764
nid02307|1214.325967|193.248792
nid02308|1217.271447|199.530815
nid02309|1214.879969|196.569607
nid02310|1215.290458|192.952455
nid02311|1214.108662|197.031309
nid02312|1217.670284|195.919221
nid02313|1216.926733|196.416187



We get roughly half the flops.
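To quantify the gap, a short script like the following (a sketch, assuming only the pipe-delimited `node|GFLOPS|bandwidth` format shown in the output above) can compare the two runs:

```python
# Compare DGEMM results between two healthcheck runs. Each result line is
# pipe-delimited: nodename|flops (DGEMM)|memory bandwidth (STREAM).
from statistics import mean

def mean_flops(lines):
    """Mean of the second field (DGEMM GFLOPS) across non-empty result lines."""
    return mean(float(line.split("|")[1]) for line in lines if line.strip())

# Sample rows taken from the two runs quoted above:
old = ["nid02304|2065.614358|192.952455", "nid02305|2230.694438|193.732286"]
new = ["nid02304|1216.107482|194.781920", "nid02305|1214.279635|194.180741"]

ratio = mean_flops(new) / mean_flops(old)
print(f"new/old flops ratio: {ratio:.2f}")  # prints: new/old flops ratio: 0.57
```

In practice the `old`/`new` lists would come from `open(path).readlines()` on the two output files named in the srun commands.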

I'm trying to report this quickly as I'm in a very short window before I need to start unwinding and reverting to 16.05 if we can't find a solution.

Please let me know what you need to help debug this.

-Doug

dmj@cori01:/global/cscratch1/sd/dmj/systemcheckout/build> scontrol show config
Configuration data as of 2017-03-03T12:28:55
AccountingStorageBackupHost = corique02-144
AccountingStorageEnforce = associations,limits,qos,safe
AccountingStorageHost   = corique01-144
AccountingStorageLoc    = N/A
AccountingStoragePort   = 6819
AccountingStorageTRES   = cpu,mem,energy,node,bb/cray
AccountingStorageType   = accounting_storage/slurmdbd
AccountingStorageUser   = N/A
AccountingStoreJobComment = Yes
AcctGatherEnergyType    = acct_gather_energy/cray
AcctGatherFilesystemType = acct_gather_filesystem/none
AcctGatherInfinibandType = acct_gather_infiniband/none
AcctGatherNodeFreq      = 0 sec
AcctGatherProfileType   = acct_gather_profile/none
AllowSpecResourcesUsage = 1
AuthInfo                = (null)
AuthType                = auth/munge
BackupAddr              = 128.55.144.220
BackupController        = corique01
BatchStartTimeout       = 10 sec
BOOT_TIME               = 2017-03-03T11:47:19
BurstBufferType         = burst_buffer/cray
CacheGroups             = 0
CheckpointType          = checkpoint/none
ChosLoc                 = (null)
ClusterName             = cori
CompleteWait            = 300 sec
ControlAddr             = ctl1
ControlMachine          = ctl1
CoreSpecPlugin          = core_spec/cray
CpuFreqDef              = Unknown
CpuFreqGovernors        = Performance,OnDemand
CryptoType              = crypto/munge
DebugFlags              = Backfill,BurstBuffer
DefMemPerNode           = UNLIMITED
DisableRootJobs         = Yes
EioTimeout              = 60
EnforcePartLimits       = ANY
Epilog                  = /etc/slurm/nodeepilog.sh
EpilogMsgTime           = 2000 usec
EpilogSlurmctld         = (null)
ExtSensorsType          = ext_sensors/none
ExtSensorsFreq          = 0 sec
FairShareDampeningFactor = 1
FastSchedule            = 1
FirstJobId              = 1
GetEnvTimeout           = 2 sec
GresTypes               = craynetwork,hbm
GroupUpdateForce        = 1
GroupUpdateTime         = 600 sec
HASH_VAL                = Different Ours=0x3592a7d2 Slurmctld=0xa44ecd8e
HealthCheckInterval     = 0 sec
HealthCheckNodeState    = ANY
HealthCheckProgram      = (null)
InactiveLimit           = 600 sec
JobAcctGatherFrequency  = 0
JobAcctGatherType       = jobacct_gather/cgroup
JobAcctGatherParams     = (null)
JobCheckpointDir        = /var/slurm/checkpoint
JobCompHost             = localhost
JobCompLoc              = /etc/slurm/jobcomplete.sh
JobCompPort             = 0
JobCompType             = jobcomp/nersc
JobCompUser             = root
JobContainerType        = job_container/cncu
JobCredentialPrivateKey = (null)
JobCredentialPublicCertificate = (null)
JobFileAppend           = 0
JobRequeue              = 0
JobSubmitPlugins        = cray,lua
KeepAliveTime           = SYSTEM_DEFAULT
KillOnBadExit           = 1
KillWait                = 30 sec
LaunchParameters        = test_exec
LaunchType              = launch/slurm
Layouts                 = 
Licenses                = SCRATCH:1000000,cscratch1:1000000,project:1000000,projecta:1000000,projectb:1000000,dna:1000000
LicensesUsed            = dna:0/1000000,projectb:0/1000000,projecta:0/1000000,project:0/1000000,cscratch1:0/1000000,SCRATCH:0/1000000
MailDomain              = (null)
MailProg                = /bin/mail
MaxArraySize            = 65000
MaxJobCount             = 500000
MaxJobId                = 67043328
MaxMemPerNode           = UNLIMITED
MaxStepCount            = 40000
MaxTasksPerNode         = 512
MCSPlugin               = mcs/none
MCSParameters           = (null)
MemLimitEnforce         = Yes
MessageTimeout          = 100 sec
MinJobAge               = 300 sec
MpiDefault              = openmpi
MpiParams               = ports=63001-64000
MsgAggregationParams    = (null)
NEXT_JOB_ID             = 3939824
NodeFeaturesPlugins     = knl_cray
OverTimeLimit           = 0 min
PluginDir               = /usr/lib64/slurm
PlugStackConfig         = /etc/slurm/plugstack.conf
PowerParameters         = (null)
PowerPlugin             = 
PreemptMode             = REQUEUE
PreemptType             = preempt/qos
PriorityParameters      = (null)
PriorityDecayHalfLife   = 7-00:00:00
PriorityCalcPeriod      = 00:05:00
PriorityFavorSmall      = No
PriorityFlags           = 
PriorityMaxAge          = 128-00:00:00
PriorityUsageResetPeriod = NONE
PriorityType            = priority/multifactor
PriorityWeightAge       = 184320
PriorityWeightFairShare = 1
PriorityWeightJobSize   = 0
PriorityWeightPartition = 0
PriorityWeightQOS       = 253440
PriorityWeightTRES      = (null)
PrivateData             = none
ProctrackType           = proctrack/cray
Prolog                  = /etc/slurm/nodeprolog.sh
PrologEpilogTimeout     = 65534
PrologSlurmctld         = (null)
PrologFlags             = Alloc,Contain
PropagatePrioProcess    = 0
PropagateResourceLimits = ALL
PropagateResourceLimitsExcept = (null)
RebootProgram           = (null)
ReconfigFlags           = (null)
RequeueExit             = (null)
RequeueExitHold         = (null)
ResumeProgram           = /usr/lib/nersc-slurm-plugins/nersc_capmc_resume.py
ResumeRate              = 300 nodes/min
ResumeTimeout           = 7200 sec
ResvEpilog              = (null)
ResvOverRun             = 0 min
ResvProlog              = (null)
ReturnToService         = 1
RoutePlugin             = route/default
SallocDefaultCommand    = srun -n1 -N1 --mem-per-cpu=0 --pty --preserve-env --gres=craynetwork:0 --mpi=none $SHELL
SbcastParameters        = (null)
SchedulerParameters     = no_backup_scheduling,bf_window=7200,bf_resolution=120,bf_max_job_array_resv=20,default_queue_depth=400,bf_max_job_test=1000000,bf_continue,kill_invalid_depend,sched_min_interval=2,bf_interval=120,bf_min_age_reserve=600,bf_max_job_user=30,bf_min_prio_reserve=69120,sbatch_wait_all_nodes,salloc_wait_all_nodes,bf_min_prio_reserve_maxjobs=800,bf_userpart_resvprio_jobcnt=2,max_switch_wait=345600
SchedulerTimeSlice      = 30 sec
SchedulerType           = sched/backfill
SelectType              = select/cray
SelectTypeParameters    = CR_SOCKET_MEMORY,OTHER_CONS_RES,NHC_ABSOLUTELY_NO
SlurmUser               = root(0)
SlurmctldDebug          = debug
SlurmctldLogFile        = /var/tmp/slurm/slurmctld.log
SlurmctldPort           = 6817
SlurmctldTimeout        = 120 sec
SlurmdDebug             = info
SlurmdLogFile           = /var/spool/slurmd/%h.log
SlurmdPidFile           = /var/run/slurmd.pid
SlurmdPlugstack         = (null)
SlurmdPort              = 6818
SlurmdSpoolDir          = /var/spool/slurmd
SlurmdTimeout           = 300 sec
SlurmdUser              = root(0)
SlurmSchedLogFile       = (null)
SlurmSchedLogLevel      = 0
SlurmctldPidFile        = /var/run/slurmctld.pid
SlurmctldPlugstack      = (null)
SLURM_CONF              = /etc/slurm/slurm.conf
SLURM_VERSION           = 17.02.1-2
SrunEpilog              = (null)
SrunPortRange           = 60001-63000
SrunProlog              = (null)
StateSaveLocation       = /global/syscom/cori/sc/nsg/var/cori-slurm-state
SuspendExcNodes         = (null)
SuspendExcParts         = (null)
SuspendProgram          = /usr/sbin/capmc_suspend
SuspendRate             = 60 nodes/min
SuspendTime             = 30000000 sec
SuspendTimeout          = 30 sec
SwitchType              = switch/cray
TaskEpilog              = (null)
TaskPlugin              = affinity,cray,cgroup
TaskPluginParam         = (null type)
TaskProlog              = (null)
TCPTimeout              = 60 sec
TmpFS                   = /tmp
TopologyParam           = NoInAddrAny
TopologyPlugin          = topology/tree
TrackWCKey              = No
TreeWidth               = 22
UsePam                  = 0
UnkillableStepProgram   = /etc/slurm/unkillable_step.sh
UnkillableStepTimeout   = 900 sec
VSizeFactor             = 0 percent
WaitTime                = 0 sec
Comment 2 Doug Jacobsen 2017-03-03 14:04:35 MST
I just found that we had (before as well) TaskParam=sched; the description of it sounds like perhaps not what we want. I just commented it out and am trying to restart all slurmd now.

----
Doug Jacobsen, Ph.D.
NERSC Computer Systems Engineer
National Energy Research Scientific Computing Center <http://www.nersc.gov>
dmjacobsen@lbl.gov

------------- __o
---------- _ '\<,_
----------(_)/  (_)__________________________


Comment 3 Moe Jette 2017-03-03 14:07:20 MST
There were some recent changes related to CPU frequency management.
Could you check the governor and CPU frequency?
Comment 4 Doug Jacobsen 2017-03-03 14:07:37 MST
that didn't seem to help

Comment 5 Doug Jacobsen 2017-03-03 14:09:42 MST
To me, these look OK

dmj@nid10372:/global/cscratch1/sd/dmj/systemcheckout/build> lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                272
On-line CPU(s) list:   0-271
Thread(s) per core:    4
Core(s) per socket:    68
Socket(s):             1
NUMA node(s):          4
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 87
Model name:            Intel(R) Xeon Phi(TM) CPU 7250 @ 1.40GHz
Stepping:              1
CPU MHz:               1401.000
CPU max MHz:           1401.0000
CPU min MHz:           1000.0000
BogoMIPS:              2800.00
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
NUMA node0 CPU(s):     0-33,68-101,136-169,204-237
NUMA node1 CPU(s):     34-67,102-135,170-203,238-271
NUMA node2 CPU(s):
NUMA node3 CPU(s):
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
ds_cpl est tm2 ssse3 fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt
tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch ida
arat epb xsaveopt pln pts dtherm fsgsbase tsc_adjust bmi1 avx2 smep bmi2
erms avx512f rdseed adx avx512pf avx512er avx512cd
dmj@nid10372:/global/cscratch1/sd/dmj/systemcheckout/build> srun lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                272
On-line CPU(s) list:   0-271
Thread(s) per core:    4
Core(s) per socket:    68
Socket(s):             1
NUMA node(s):          4
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 87
Model name:            Intel(R) Xeon Phi(TM) CPU 7250 @ 1.40GHz
Stepping:              1
CPU MHz:               1401.000
CPU max MHz:           1401.0000
CPU min MHz:           1000.0000
BogoMIPS:              2800.00
L1d cache:             32K
L1i cache:             32K
L2 cache:              1024K
NUMA node0 CPU(s):     0-33,68-101,136-169,204-237
NUMA node1 CPU(s):     34-67,102-135,170-203,238-271
NUMA node2 CPU(s):
NUMA node3 CPU(s):
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
ds_cpl est tm2 ssse3 fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt
tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch ida
arat epb xsaveopt pln pts dtherm fsgsbase tsc_adjust bmi1 avx2 smep bmi2
erms avx512f rdseed adx avx512pf avx512er avx512cd
dmj@nid10372:/global/cscratch1/sd/dmj/systemcheckout/build>




I did find that if I run 2 tasks per node, binding to sockets (snc2), I get
expected performance. The issue seems to be somehow related to full-mask
assignments.
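Not part of the ticket, but a generic way to see what mask a step actually receives is to run a small affinity check under srun (Linux-only; the script name `check_affinity.py` is hypothetical):

```python
# Print how many CPUs this process is allowed to run on. Launched under
# srun (e.g. `srun -n1 python3 check_affinity.py`), the count reflects the
# affinity mask Slurm assigned; on a full KNL node it should be 272.
import os

cpus = os.sched_getaffinity(0)  # set of CPU ids this process may use
print(f"{len(cpus)} CPUs in affinity mask")
```

Comparing the count inside and outside the step would show whether the launcher is constraining the mask.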

Comment 6 Doug Jacobsen 2017-03-03 14:22:55 MST
AHA, I found the problem.

It turns out that if you run half the threads, you get half the performance!

This was operator error.

Sorry about that!
Please resolve this.

Comment 7 Moe Jette 2017-03-03 14:24:16 MST
(In reply to Doug Jacobsen from comment #6)
> AHA   I found the problem.
> 
> It turns out that if you run half the threads, you get half the
> performance!
> 
> This was operator error.
> 
> Sorry about that!
> Please resolve this.

Thanks for the update.

Have a great weekend!