--On Wednesday, June 26, 2013 04:19:27 AM -0700 Howard Chu <hyc@symas.com> wrote:

> Bill MacAllister wrote:
>> --On Tuesday, June 25, 2013 03:10:17 PM -0700 Howard Chu <hyc@symas.com> wrote:
>>> Probably bad default FS settings, and changed from your previous OS revision.
>>>
>>> Also, you should watch vmstat while it runs to get a better idea of how much time the system is spending in I/O wait.
>>
>> I have just re-mkfs'ed the new, slow system to make it look like the old, fast system. Just to make sure nothing else changed, I have started a load on the older system. Things look fine.
>
> I meant mount options; mkfs should have very little impact.
>
> ext3 journaling is pretty awful. For ext4 you probably want data=writeback mode, and you should compare the commit= value, which defaults to 5 seconds, and also barrier; I believe the barrier default changed between kernel revisions.

I started with mkfs because I wanted to see if I could make things better by putting the ext4 journal on a different disk from the database. With the Debian default of data=ordered, the load time was awful even with the journal on a separate disk. I killed it after about 20 minutes, when the ETA topped two hours and was still climbing.
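For reference, the external-journal setup looked roughly like this (device names and mount point are placeholders, not my actual layout):

  # journal device on the second disk, block size matching the data filesystem
  mke2fs -b 4096 -O journal_dev /dev/sdc1
  # ext4 filesystem for the database, pointed at the external journal
  mkfs.ext4 -b 4096 -J device=/dev/sdc1 /dev/sdb1
  # mounted with the Debian default journaling mode
  mount -o rw,noatime,data=ordered /dev/sdb1 /var/lib/ldap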
My next attempt was to do away with journaling altogether and create the database on an ext2 file system. Not surprisingly, the load time was great: just a bit over 21 minutes. That is the benchmark I am using, i.e. the best I can expect.
I tried a load on an ext4 system with options 'rw,noatime,user_xattr,barrier=1,data=writeback' and got a load time of 01h40m06s. This is the best time I have gotten so far loading on ext4.
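For the record, that was nothing more than an fstab entry along these lines (device and mount point are placeholders):

  /dev/sdb1  /var/lib/ldap  ext4  rw,noatime,user_xattr,barrier=1,data=writeback  0  2

Per your note, raising commit= from the 5-second default (e.g. commit=30) is another knob I could still try on top of that.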
I ended up writing a script that creates an ext2 file system, loads the backend, unmounts the partition, adds a journal, and then remounts the partition as ext4. This will allow me to continue with the server rebuilds, but it is a pretty ugly hack.
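The script boils down to something like this (device, mount point, and the slapadd invocation are illustrative; the real script has more error handling):

  #!/bin/sh
  set -e
  DEV=/dev/sdb1        # database partition (placeholder)
  MNT=/var/lib/ldap    # mount point (placeholder)

  # 1. Plain ext2 for the bulk load: no journal to slow things down.
  mkfs.ext2 -q "$DEV"
  mount -o rw,noatime "$DEV" "$MNT"

  # 2. Load the backend (example slapadd invocation).
  slapadd -q -F /etc/ldap/slapd.d -l /var/backups/ldap.ldif

  # 3. Add a journal to the existing filesystem and remount it with the
  #    ext4 driver and the options I settled on above.
  umount "$MNT"
  tune2fs -j "$DEV"
  mount -t ext4 -o rw,noatime,user_xattr,barrier=1,data=writeback "$DEV" "$MNT"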
Bill