Derek Zhou <derek@shannon-data.com> wrote on 14.12.2018 at 07:27 in message:
On Thursday, December 13, 2018 05:24:53 PM Derek Zhou wrote:
Howard and others:
Again on deleting lots of entries. I ran two experiments:
1. In a fresh db, insert 10 million entries; call this state A. Then delete 9 million entries overnight; call this state B.
2. In a fresh db, insert 1 million entries; call this state B'.
In B and B', even the remaining 1 million entries are the same, so from the user's
perspective B and B' are indistinguishable. However, deleting entries from B is much slower than deleting entries from B', roughly 10x slower. It seems the deletion speed depends on the peak db size and on how full the db currently is.
My question is: is such a wide deletion performance gap expected?
Further testing shows that this deletion slowdown only happens in writemap mode; the default mode is much faster.
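(For reference, a minimal repro along these lines, using the LMDB C API, could look like the sketch below; this is not Derek's actual test program. The ./testdb path, the 16 GiB map size, the entry counts and the "writemap" command-line toggle are all placeholders; adjust them to your setup, and note that the ./testdb directory must already exist.)

#include <lmdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define CHECK(rc) do { int _rc = (rc); if (_rc) { \
    fprintf(stderr, "lmdb error: %s\n", mdb_strerror(_rc)); exit(1); } } while (0)

int main(int argc, char **argv)
{
    /* Pass "writemap" as the first argument to open the env with MDB_WRITEMAP. */
    int use_writemap = (argc > 1 && strcmp(argv[1], "writemap") == 0);
    size_t n_insert = 10000000;   /* experiment 1: insert 10 million entries ... */
    size_t n_delete = 9000000;    /* ... then delete 9 million of them */

    MDB_env *env;
    CHECK(mdb_env_create(&env));
    CHECK(mdb_env_set_mapsize(env, (size_t)16 << 30));   /* 16 GiB map; adjust as needed */
    /* ./testdb must already exist as a directory. */
    CHECK(mdb_env_open(env, "./testdb", use_writemap ? MDB_WRITEMAP : 0, 0664));

    MDB_txn *txn;
    MDB_dbi dbi;

    /* Insert phase: fixed-size integer keys 0..n_insert-1, value == key.
     * For very large runs you may want to commit in batches to avoid MDB_TXN_FULL. */
    CHECK(mdb_txn_begin(env, NULL, 0, &txn));
    CHECK(mdb_dbi_open(txn, NULL, 0, &dbi));
    for (size_t i = 0; i < n_insert; i++) {
        MDB_val key = { sizeof(i), &i }, val = { sizeof(i), &i };
        CHECK(mdb_put(txn, dbi, &key, &val, 0));
    }
    CHECK(mdb_txn_commit(txn));

    /* Delete phase: remove the first n_delete keys and time it (wall clock). */
    time_t t0 = time(NULL);
    CHECK(mdb_txn_begin(env, NULL, 0, &txn));
    for (size_t i = 0; i < n_delete; i++) {
        MDB_val key = { sizeof(i), &i };
        CHECK(mdb_del(txn, dbi, &key, NULL));
    }
    CHECK(mdb_txn_commit(txn));
    printf("deleted %zu entries in %ld s (%s mode)\n",
           n_delete, (long)(time(NULL) - t0),
           use_writemap ? "writemap" : "default");

    mdb_env_close(env);
    return 0;
}

Running the same binary once with and once without the "writemap" argument (against a fresh ./testdb each time) should show whether the gap reproduces on your hardware.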
When you are that deep into analyzing (assuming you use Linux), it might be interesting to run blktrace to see how your filesystem is accessed. (Once I did that for our database server, and you could clearly see the accesses to the redo logs as well as the effects of fsync.)
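For example, something along these lines might do; the device name, trace basename and capture window below are only placeholders:

  sudo blktrace -d /dev/sda -o lmdb-trace -w 60   # capture 60 s of block I/O on the device holding the LMDB data dir
  blkparse -i lmdb-trace | less                   # decode the per-CPU traces into readable read/write/sync events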
[...]
Regards, Ulrich