Ticket 8873 - Slurmdbd Archiving issues
Summary: Slurmdbd Archiving issues
Status: RESOLVED TIMEDOUT
Alias: None
Product: Slurm
Classification: Unclassified
Component: slurmdbd
Version: - Unsupported Older Versions
Hardware: Linux
Severity: 4 - Minor Issue
Assignee: Nate Rini
 
Reported: 2020-04-16 03:23 MDT by Hjalti Sveinsson
Modified: 2021-02-19 17:12 MST

See Also:
Site: deCODE


Description Hjalti Sveinsson 2020-04-16 03:23:13 MDT
Hello,

I was just trying to re-enable archiving in my slurmdbd, but it seems to time out and MySQL throws an error. I also found 5 runaway jobs after this.

root@ru-lhpc-head:~# sacctmgr show runawayjobs
NOTE: Runaway jobs are jobs that don't exist in the controller but have a start time and no end time in the database
          ID       Name  Partition    Cluster      State           TimeStart             TimeEnd
------------ ---------- ---------- ---------- ---------- ------------------- -------------------
17158194     snakejob.+    cpu_hog       lhpc    RUNNING 2020-04-14T10:29:52             Unknown
17158197     snakejob.+    cpu_hog       lhpc    RUNNING 2020-04-14T10:29:52             Unknown
17158213     snakejob.+    cpu_hog       lhpc    RUNNING 2020-04-14T10:29:52             Unknown
17158218     snakejob.+    cpu_hog       lhpc    RUNNING 2020-04-14T10:29:52             Unknown
17158239     snakejob.+    cpu_hog       lhpc    RUNNING 2020-04-14T10:29:52             Unknown

They all had error status 0, so I have fixed them.
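As an aside, runaway jobs can be listed and cleaned up with sacctmgr itself; a minimal sketch (run from a host that can reach slurmdbd):

```shell
# List jobs recorded in the database but unknown to slurmctld.
# sacctmgr then offers to "fix" the listed jobs (setting an end time
# and a terminal state for each) -- answer the prompt to apply.
sacctmgr show runawayjobs

# With -i (immediate), sacctmgr should apply the fix without prompting.
sacctmgr -i show runawayjobs
```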

However, I would like to know how I can fix this issue with the archiving. The database is quite large (over 300 GB), and I am planning to upgrade to a newer release of Slurm (19.05.5, to be exact), but I was thinking of trying to archive beforehand.
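For reference, the archive/purge behaviour that produces the queries in the log below is controlled in slurmdbd.conf; a hedged sketch with placeholder values (the path and retention periods are illustrative, not this site's actual settings):

```
# Illustrative slurmdbd.conf fragment -- ArchiveDir and the retention
# values are placeholders, not taken from the attached configuration.
ArchiveDir=/var/spool/slurmdbd/archive
ArchiveJobs=yes
ArchiveSteps=yes
ArchiveEvents=yes
PurgeJobAfter=24months
PurgeStepAfter=24months
PurgeEventAfter=12months
```

On a 300 GB database, the first purge after re-enabling archiving has to work through years of backlogged records, which is what drives the long-running `delete ... LIMIT 50000` transactions seen in the log.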

[2020-04-16T00:00:00.678] 0(as_mysql_archive.c:3240) query
select time_submit from "lhpc_job_table" where time_submit <= 1520035199 && time_end != 0 order by time_submit asc LIMIT 1
[2020-04-16T00:20:14.734] debug:  Purging lhpc_job_table before 1520035199
[2020-04-16T00:20:14.734] 0(as_mysql_archive.c:3446) query
delete from "lhpc_job_table" where time_submit <= 1520035199 && time_end != 0 LIMIT 50000
[2020-04-16T01:06:17.635] error: mysql_query failed: 1213 Deadlock found when trying to get lock; try restarting transaction
update "lhpc_job_table" set nodelist='ru-hpc-0325', account='bioprod', `partition`='cpu_hog', node_inx='185', gres_req='', gres_alloc='', array_task_str=NULL, array_task_pending=0, tres_alloc='1=1,2=4096,3=18446744073709551614,4=1,5=1', tres_req='1=1,2=4096,4=1,5=1', work_dir='/nfs/odinn/datavault/damp/tmp/vol101/ont_build38_v3/ONTBase/vol01/200403_PAE63869', time_start=1586996455, job_name='fetch-rp', state=greatest(state, 1), nodes_alloc=1, id_qos=1, id_assoc=5, id_resv=0, timelimit=180, mem_req=4096, id_array_job=0, id_array_task=4294967294, pack_job_id=0, pack_job_offset=4294967294, time_eligible=1586996450, mod_time=UNIX_TIMESTAMP() where job_db_inx=285589560
[2020-04-16T01:06:17.785] Warning: Note very large processing time from daily_rollup for lhpc: usec=3977122786 began=00:00:00.662
[2020-04-16T01:06:17.793] debug:  Problem sending response to connection 10(172.17.147.210) uid(598)
[2020-04-16T01:06:17.793] debug:  cluster lhpc has disconnected
[2020-04-16T01:36:18.055] error: mysql_query failed: 1205 Lock wait timeout exceeded; try restarting transaction
update cluster_table set mod_time=1586999177, control_host='', control_port=0 where name='lhpc' && control_host='172.17.147.210' && control_port=6817;
[2020-04-16T01:36:18.055] fatal: mysql gave ER_LOCK_WAIT_TIMEOUT as an error. The only way to fix this is restart the calling program
[2020-04-16T01:36:18.056] error: mysql_query failed: 1205 Lock wait timeout exceeded; try restarting transaction
insert into "lhpc_job_table" (id_job, mod_time, id_array_job, id_array_task, pack_job_id, pack_job_offset, id_assoc, id_qos, id_user, id_group, nodelist, id_resv, timelimit, time_eligible, time_submit, time_start, job_name, track_steps, state, priority, cpus_req, nodes_alloc, mem_req, account, `partition`, node_inx, gres_req, gres_alloc, array_task_str, array_task_pending, tres_alloc, tres_req, work_dir) values (17910920, UNIX_TIMESTAMP(), 0, 4294967294, 0, 4294967294, 5, 1, 1065, 1042, 'ru-hpc-0362', 0, 1440, 1586996447, 1586996446, 1586996455, 'ADD_Alcohol_Dependence_13042020_complete', 0, 1, 1, 1, 1, 4096, 'bioprod', 'cpu_hog', '222', '', '', NULL, 0, '1=1,2=4096,3=18446744073709551614,4=1,5=1', '1=1,2=4096,4=1,5=1', '/nfs/odinn/datavault/damp/tmp/vol101/assocAsterMale/AssocpCCM/ADD_Alcohol_Dependence_13042020') on duplicate key update job_db_inx=LAST_INSERT_ID(job_db_inx), id_assoc=5, id_user=1065, id_group=1042, nodelist='ru-hpc-0362', id_resv=0, timelimit=1440, time_submit=1586996446, time_eligible=1586996447, time_start=1586996455, mod_time=UNIX_TIMESTAMP(), job_name='ADD_Alcohol_Dependence_13042020_complete', track_steps=0, id_qos=1, state=greatest(state, 1), priority=1, cpus_req=1, nodes_alloc=1, mem_req=4096, id_array_job=0, id_array_task=4294967294, pack_job_id=0, pack_job_offset=4294967294, account='bioprod', `partition`='cpu_hog', node_inx='222', gres_req='', gres_alloc='', array_task_str=NULL, array_task_pending=0, tres_alloc='1=1,2=4096,3=18446744073709551614,4=1,5=1', tres_req='1=1,2=4096,4=1,5=1', work_dir='/nfs/odinn/datavault/damp/tmp/vol101/assocAsterMale/AssocpCCM/ADD_Alcohol_Dependence_13042020'
[2020-04-16T01:36:18.056] fatal: mysql gave ER_LOCK_WAIT_TIMEOUT as an error. The only way to fix this is restart the calling program
[2020-04-16T01:37:11.317] debug:  Log file re-opened
[2020-04-16T01:37:11.321] debug:  Munge authentication plugin loaded
[2020-04-16T01:37:11.877] Accounting storage MYSQL plugin loaded
[2020-04-16T01:37:12.186] debug:  post user: couldn't get a uid for user charlottea
[2020-04-16T01:37:12.332] debug:  post user: couldn't get a uid for user gudrunja
[2020-04-16T01:37:12.357] debug:  post user: couldn't get a uid for user hakong
[2020-04-16T01:37:12.378] debug:  post user: couldn't get a uid for user heidab
[2020-04-16T01:37:12.530] debug:  post user: couldn't get a uid for user lucasw
[2020-04-16T01:37:12.598] debug:  post user: couldn't get a uid for user olafurd
[2020-04-16T01:37:12.644] debug:  post user: couldn't get a uid for user pauli
[2020-04-16T01:37:12.649] debug:  post user: couldn't get a uid for user pauln
[2020-04-16T01:37:12.697] debug:  post user: couldn't get a uid for user sebastianr
[2020-04-16T01:37:12.919] debug:  post user: couldn't get a uid for user yingh
[2020-04-16T01:37:12.919] slurmdbd version 18.08.7 started
Comment 1 Nate Rini 2020-04-16 11:36:32 MDT
Please provide a copy of your slurm.conf and this:
> slurmdbd -V
Comment 2 Hjalti Sveinsson 2020-04-16 16:14:02 MDT
Created attachment 13842 [details]
slurmdbd.conf
Comment 3 Hjalti Sveinsson 2020-04-16 16:14:51 MDT
[root@ru-lhpc-head-db-01 ~]# slurmdbd -V
slurm 18.08.7
Comment 4 Nate Rini 2020-04-16 16:55:44 MDT
(In reply to Hjalti Sveinsson from comment #2)
> Created attachment 13842 [details]
> slurmdbd.conf

Please make sure to change your StoragePass and please also attach your slurm.conf.
Comment 5 Hjalti Sveinsson 2020-04-16 17:00:16 MDT
I will.
Comment 6 Nate Rini 2020-04-17 10:39:31 MDT
(In reply to Hjalti Sveinsson from comment #0)
> I was just trying to re-enable archiving in my Slurmdbd but it seems to time
> out and mysql throws an error. Also I found 5 runaway jobs after this.
>
> [2020-04-16T01:36:18.055] fatal: mysql gave ER_LOCK_WAIT_TIMEOUT as an
> error. The only way to fix this is restart the calling program

Can you please try it again, then run this in MySQL and attach the output:
> show engine innodb status;

Before adjusting any timeouts, I want to make sure this isn't a deadlock.
Comment 7 Nate Rini 2020-04-23 10:17:46 MDT
Hjalti,

I'm going to time this ticket out. Please respond when convenient and we can continue.

Thanks,
--Nate
Comment 8 Hjalti Sveinsson 2020-04-24 05:07:50 MDT
| InnoDB |      |
=====================================
200424 11:06:58 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 43 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 1048408 1_second, 1048405 sleeps, 104086 10_second, 12317 background, 12316 flush
srv_master_thread log flush and writes: 865019
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 1764053, signal count 7282768
Mutex spin waits 4749214, rounds 48071554, OS waits 1215282
RW-shared spins 1614044, rounds 13461842, OS waits 313571
RW-excl spins 1249758, rounds 21438952, OS waits 208690
Spin rounds per wait: 10.12 mutex, 8.34 RW-shared, 17.15 RW-excl
--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: 0 [0, 0, 0, 0] , aio writes: 0 [0, 0, 0, 0] ,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 0; buffer pool: 0
74331766 OS file reads, 18497075 OS file writes, 1987288 OS fsyncs
0.16 reads/s, 16384 avg bytes/read, 18.26 writes/s, 2.14 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 26, seg size 28, 121972 merges
merged operations:
 insert 174865, delete mark 2597239, delete 406
discarded operations:
 insert 0, delete mark 0, delete 0
Hash table size 21249841, node heap has 18042 buffer(s)
27.72 hash searches/s, 161.69 non-hash searches/s
---
LOG
---
Log sequence number 1439115807620
Log flushed up to   1439115807620
Last checkpoint at  1439115739150
Max checkpoint age    216721613
Checkpoint age target 209949063
Modified age          68470
Checkpoint age        68470
0 pending log writes, 0 pending chkp writes
1552979 log i/o's done, 1.72 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 11020533760; in additional pool allocated 0
Total memory allocated by read views 512
Internal hash tables (constant factor + variable factor)
    Adaptive hash index 465603072       (169998728 + 295604344)
    Page hash           10625704 (buffer pool 0 only)
    Dictionary cache    42717790        (42501104 + 216686)
    File system         83536   (82672 + 864)
    Lock system         26564144        (26563016 + 1128)
    Recovery system     0       (0 + 0)
Dictionary memory allocated 216686
Buffer pool size        655359
Buffer pool size, bytes 10737401856
Free buffers            0
Database pages          637317
Old database pages      235239
Modified db pages       96
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 73676797, not young 0
0.16 youngs/s, 0.00 non-youngs/s
Pages read 74331755, created 535567, written 16591582
0.16 reads/s, 0.65 creates/s, 16.21 writes/s
Buffer pool hit rate 1000 / 1000, young-making rate 0 / 1000 not 0 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 637317, unzip_LRU len: 0
I/O sum[911]:cur[0], unzip sum[0]:cur[0]
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
0 transactions active inside InnoDB
0 out of 1000 descriptors used
---OLDEST VIEW---
Normal read view
Read view low limit trx n:o A247CF9
Read view up limit trx id A247CF9
Read view low limit trx id A247CF9
Read view individually stored trx ids:
-----------------
Main thread process no. 16729, id 140233853875968, state: sleeping
Number of rows inserted 6903086, updated 9638638, deleted 2586808, read 2347320525
7.12 inserts/s, 12.60 updates/s, 0.00 deletes/s, 16.86 reads/s
------------------------
LATEST DETECTED DEADLOCK
------------------------
200416  1:06:15
*** (1) TRANSACTION:
TRANSACTION A09DBF8, ACTIVE 1825 sec updating or deleting
mysql tables in use 1, locked 1
LOCK WAIT 28 lock struct(s), heap size 3112, 43 row lock(s), undo log entries 33
MySQL thread id 149241, OS thread handle 0x7f8ab79a3700, query id 6427683 localhost slurm Updating
update "lhpc_job_table" set nodelist='ru-hpc-0325', account='bioprod', `partition`='cpu_hog', node_inx='185', gres_req='', gres_alloc='', array_task_str=NULL, array_task_pending=0, tres_alloc='1=1,2=4096,3=18446744073709551614,4=1,5=1', tres_req='1=1,2=4096,4=1,5=1', work_dir='/nfs/odinn/datavault/damp/tmp/vol101/ont_build38_v3/ONTBase/vol01/200403_PAE63869', time_start=1586996455, job_name='fetch-rp', state=greatest(state, 1), nodes_alloc=1, id_qos=1, id_assoc=5, id_resv=0, timelimit=180, mem_req=4096, id_array_job=0, id_array_task=4294967294, pack_job_id=0, pack_job_offset=4294967294, time_eligible=1586996450, mod_time=UNIX_TIMESTAMP() where job_db_inx=285589560
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 11113417 n bits 624 index "rollup2" of table "slurm_acc"."lhpc_job_table" trx id A09DBF8 lock_mode X insert intention waiting
*** (2) TRANSACTION:
TRANSACTION A09E3C7, ACTIVE 1322 sec fetching rows
mysql tables in use 1, locked 1
6107735 lock struct(s), heap size 572340664, 364869929 row lock(s)
MySQL thread id 145576, OS thread handle 0x7f8ab7c3d700, query id 6427273 localhost slurm updating
delete from "lhpc_job_table" where time_submit <= 1520035199 && time_end != 0 LIMIT 50000
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 0 page no 11113417 n bits 624 index "rollup2" of table "slurm_acc"."lhpc_job_table" trx id A09E3C7 lock_mode X
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 0 page no 11113420 n bits 96 index "rollup2" of table "slurm_acc"."lhpc_job_table" trx id A09E3C7 lock_mode X waiting
*** WE ROLL BACK TRANSACTION (1)
------------
TRANSACTIONS
------------
Trx id counter A247CFB
Purge done for trx's n:o < A247CF8 undo n:o < 0
History list length 819
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 711289, OS thread handle 0x7f8ab7c3d700, query id 26619908 localhost root
show engine innodb status
---TRANSACTION A247CF9, not started
MySQL thread id 230217, OS thread handle 0x7f8ab7fb5700, query id 26619907 localhost slurm
---TRANSACTION A0CE39C, not started
MySQL thread id 230215, OS thread handle 0x7f8abc06f700, query id 26609410 localhost slurm
----------------------------
END OF INNODB MONITOR OUTPUT
============================
Comment 9 Hjalti Sveinsson 2020-04-24 05:08:35 MDT
Sorry for late reply.
Comment 10 Nate Rini 2020-04-24 12:39:33 MDT
(In reply to Hjalti Sveinsson from comment #9)
> Sorry for late reply.

We are happy to work on your schedule. We time out tickets to avoid spending time checking for updates on them.

(In reply to Hjalti Sveinsson from comment #8)
> TRANSACTION A09DBF8, ACTIVE 1825 sec updating or deleting
1825 sec -> ~30 minutes

> TRANSACTION A09E3C7, ACTIVE 1322 sec fetching rows
1322 sec -> ~22 minutes

Looks like either lock starvation is the issue, or the database is just very slow.

Please call this in SQL:
> show variables like 'innodb%';
Comment 11 Nate Rini 2020-05-04 11:36:40 MDT
(In reply to Hjalti Sveinsson from comment #9)
> Sorry for late reply.

Reducing severity of this ticket while waiting for data. Please attach the data when convenient.
Comment 12 Hjalti Sveinsson 2020-05-06 06:09:55 MDT
MariaDB [(none)]> show variables like 'innodb%';
+-------------------------------------------+------------------------+
| Variable_name                             | Value                  |
+-------------------------------------------+------------------------+
| innodb_adaptive_flushing                  | ON                     |
| innodb_adaptive_flushing_method           | estimate               |
| innodb_adaptive_hash_index                | ON                     |
| innodb_adaptive_hash_index_partitions     | 1                      |
| innodb_additional_mem_pool_size           | 8388608                |
| innodb_autoextend_increment               | 8                      |
| innodb_autoinc_lock_mode                  | 2                      |
| innodb_blocking_buffer_pool_restore       | OFF                    |
| innodb_buffer_pool_instances              | 1                      |
| innodb_buffer_pool_populate               | OFF                    |
| innodb_buffer_pool_restore_at_startup     | 0                      |
| innodb_buffer_pool_shm_checksum           | ON                     |
| innodb_buffer_pool_shm_key                | 0                      |
| innodb_buffer_pool_size                   | 10737418240            |
| innodb_change_buffering                   | all                    |
| innodb_checkpoint_age_target              | 0                      |
| innodb_checksums                          | ON                     |
| innodb_commit_concurrency                 | 0                      |
| innodb_concurrency_tickets                | 500                    |
| innodb_corrupt_table_action               | assert                 |
| innodb_data_file_path                     | ibdata1:10M:autoextend |
| innodb_data_home_dir                      |                        |
| innodb_dict_size_limit                    | 0                      |
| innodb_doublewrite                        | ON                     |
| innodb_doublewrite_file                   |                        |
| innodb_fake_changes                       | OFF                    |
| innodb_fast_checksum                      | OFF                    |
| innodb_fast_shutdown                      | 1                      |
| innodb_file_format                        | Antelope               |
| innodb_file_format_check                  | ON                     |
| innodb_file_format_max                    | Antelope               |
| innodb_file_per_table                     | OFF                    |
| innodb_flush_log_at_trx_commit            | 1                      |
| innodb_flush_method                       |                        |
| innodb_flush_neighbor_pages               | area                   |
| innodb_force_load_corrupted               | OFF                    |
| innodb_force_recovery                     | 0                      |
| innodb_ibuf_accel_rate                    | 100                    |
| innodb_ibuf_active_contract               | 1                      |
| innodb_ibuf_max_size                      | 5368692736             |
| innodb_import_table_from_xtrabackup       | 0                      |
| innodb_io_capacity                        | 200                    |
| innodb_kill_idle_transaction              | 0                      |
| innodb_large_prefix                       | OFF                    |
| innodb_lazy_drop_table                    | 0                      |
| innodb_lock_wait_timeout                  | 1800                   |
| innodb_locking_fake_changes               | ON                     |
| innodb_locks_unsafe_for_binlog            | OFF                    |
| innodb_log_block_size                     | 512                    |
| innodb_log_buffer_size                    | 8388608                |
| innodb_log_file_size                      | 134217728              |
| innodb_log_files_in_group                 | 2                      |
| innodb_log_group_home_dir                 | ./                     |
| innodb_max_bitmap_file_size               | 104857600              |
| innodb_max_changed_pages                  | 1000000                |
| innodb_max_dirty_pages_pct                | 75                     |
| innodb_max_purge_lag                      | 0                      |
| innodb_merge_sort_block_size              | 1048576                |
| innodb_mirrored_log_groups                | 1                      |
| innodb_old_blocks_pct                     | 37                     |
| innodb_old_blocks_time                    | 0                      |
| innodb_open_files                         | 300                    |
| innodb_page_size                          | 16384                  |
| innodb_print_all_deadlocks                | OFF                    |
| innodb_purge_batch_size                   | 20                     |
| innodb_purge_threads                      | 1                      |
| innodb_random_read_ahead                  | OFF                    |
| innodb_read_ahead                         | linear                 |
| innodb_read_ahead_threshold               | 56                     |
| innodb_read_io_threads                    | 4                      |
| innodb_recovery_stats                     | OFF                    |
| innodb_recovery_update_relay_log          | OFF                    |
| innodb_replication_delay                  | 0                      |
| innodb_rollback_on_timeout                | OFF                    |
| innodb_rollback_segments                  | 128                    |
| innodb_show_locks_held                    | 10                     |
| innodb_show_verbose_locks                 | 0                      |
| innodb_simulate_comp_failures             | 0                      |
| innodb_spin_wait_delay                    | 6                      |
| innodb_stats_auto_update                  | 1                      |
| innodb_stats_method                       | nulls_equal            |
| innodb_stats_modified_counter             | 0                      |
| innodb_stats_on_metadata                  | ON                     |
| innodb_stats_sample_pages                 | 8                      |
| innodb_stats_traditional                  | ON                     |
| innodb_stats_update_need_lock             | 1                      |
| innodb_strict_mode                        | OFF                    |
| innodb_support_xa                         | ON                     |
| innodb_sync_spin_loops                    | 30                     |
| innodb_table_locks                        | ON                     |
| innodb_thread_concurrency                 | 0                      |
| innodb_thread_concurrency_timer_based     | OFF                    |
| innodb_thread_sleep_delay                 | 10000                  |
| innodb_track_changed_pages                | OFF                    |
| innodb_use_atomic_writes                  | OFF                    |
| innodb_use_fallocate                      | OFF                    |
| innodb_use_global_flush_log_at_trx_commit | ON                     |
| innodb_use_native_aio                     | ON                     |
| innodb_use_stacktrace                     | OFF                    |
| innodb_use_sys_malloc                     | ON                     |
| innodb_use_sys_stats_table                | OFF                    |
| innodb_version                            | 5.5.61-MariaDB-38.13   |
| innodb_write_io_threads                   | 4                      |
+-------------------------------------------+------------------------+
103 rows in set (0.00 sec)
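For comparison with the values above (10 GB buffer pool, 1800 s lock wait timeout, 128 MB log files on MariaDB 5.5), the Slurm accounting documentation suggests innodb settings along these lines in my.cnf; treat the numbers as starting points to be sized against the host's RAM and database size, not as this site's prescribed values:

```
[mysqld]
# Starting points from the Slurm accounting docs; tune to the host.
innodb_buffer_pool_size=4096M   # larger is better for a 300 GB database
innodb_log_file_size=64M
innodb_lock_wait_timeout=900
```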
Comment 13 Nate Rini 2020-05-19 11:31:53 MDT
(In reply to Nate Rini from comment #11)
> (In reply to Hjalti Sveinsson from comment #9)
> > Sorry for late reply.
> 
> Reducing severity of this ticket while waiting for data. Please attach the
> data when convenient.

What kind of storage is the database on? SSD or spinning-disk RAID?
Comment 14 Nate Rini 2020-06-08 10:12:02 MDT
Timing this ticket out. Please respond when convenient and we can continue.