Hi,

Peter Mogensen <apm@mutex.dk> writes:

> Hi,
> I have a database with close to 11 million entries, and lately deletes have started to get painfully slow. I've set up a new server with a lot of improvements, but if anyone has an idea about what the deciding factor for the performance difference is, I would be grateful.
> On the old server:
>  - 16 cores, 42 GB RAM, entire database in memory
>  - XFS filesystem on (hw) RAID-1
>  - database and BerkeleyDB log on the same filesystem
>  - some, but not much, load (~35 read waiters)
>  - time to delete 157 entries: 9 minutes
> New server:
>  - 16 cores, 48 GB RAM, entire database in memory
>  - ext3 filesystem on (hw) RAID-10
>  - database and log on different disks
>  - no load
>  - time to delete the same 157 entries: 6.2 seconds
> I'm aware that the new server has all the advantages, but even under low load the old server only manages about one delete every 3 seconds, and 540 seconds versus 6.2 seconds for the same 157 entries is an orders-of-magnitude difference.
> My suspicion is that one of the above factors (XFS? database and log on the same filesystem?) becomes very pronounced once the database grows beyond a certain size, since the slowdown for deletes seems to have accelerated as the database has grown over the last few months.
In my opinion there are three factors which influence performance:
- keeping the DB transaction logs on a different disk reduces writes on the database disk (see the sketch below),
- ext3 vs. XFS: XFS is known to be slower than ext3 at handling small files,
- RAID-10 provides a small gain in performance, if disk caching is configured properly.
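
For the first point, here is a minimal sketch in C against the Berkeley DB environment API of what "log on a different disk" means in practice. The directory names are made up for illustration; with OpenLDAP's bdb/hdb backends the usual way to get the same effect is a set_lg_dir line in the DB_CONFIG file in the database directory.

#include <stdio.h>
#include <stdlib.h>
#include <db.h>

int main(void)
{
    DB_ENV *env;
    int ret;

    if ((ret = db_env_create(&env, 0)) != 0) {
        fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
        return EXIT_FAILURE;
    }

    /* Put the write-ahead log on its own spindle so log flushes do not
     * compete with database page writes (paths are hypothetical). */
    env->set_lg_dir(env, "/bdblog");

    /* 4 GB cache in one region, enough to keep the working set in memory. */
    env->set_cachesize(env, 4, 0, 1);

    ret = env->open(env, "/var/lib/ldap",
                    DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
                    DB_INIT_LOG | DB_INIT_TXN | DB_RECOVER, 0);
    if (ret != 0) {
        fprintf(stderr, "env->open: %s\n", db_strerror(ret));
        env->close(env, 0);
        return EXIT_FAILURE;
    }

    /* ... normal database operations would go here ... */

    env->close(env, 0);
    return EXIT_SUCCESS;
}

Compile with -ldb. With slapd you would not call this yourself; the equivalent DB_CONFIG lines (set_lg_dir /bdblog, set_cachesize 4 0 1) are picked up when the environment is opened.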
-Dieter