ITS#8038 (syncrepl hanging onto its presentlist) only came to my attention due to the amount of memory involved. On a refresh of a DB with 2.8M entries I saw the consumer using about 320MB just for the presentlist. This list consists solely of 16-byte entryUUIDs; 2.8M items should have used no more than 48MB. An AVL node itself is 28 bytes on a 64-bit platform, plus 16 bytes for the struct berval wrapped around the UUID.
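For illustration, here's a rough back-of-the-envelope check of that overhead. The struct layouts below are simplified stand-ins rather than the real libavl/liblber definitions, and the per-allocation malloc overhead is only a guess, but under those assumptions the total lands in the same ballpark as the 320MB observed:

#include <stdio.h>

/* Simplified stand-ins for the real structures; only the sizes matter here. */
struct avlnode_like {
    void *left, *right;   /* child pointers */
    void *data;           /* points at the struct berval */
    int   balance;        /* balance factor */
};                        /* ~28 bytes of fields, padded to 32 on 64-bit */

struct berval_like {
    unsigned long bv_len; /* 16 for an entryUUID */
    char *bv_val;         /* points at the 16 UUID bytes */
};                        /* 16 bytes on 64-bit */

int main(void)
{
    double entries = 2.8e6;
    double raw = 16;                               /* the UUID itself */
    double per_entry = sizeof(struct avlnode_like)
                     + sizeof(struct berval_like)
                     + raw
                     + 3 * 16;                     /* assumed: three separate mallocs
                                                      per entry, ~16 bytes of allocator
                                                      overhead each */
    printf("raw UUID data:  %.0f MB\n", entries * raw / 1e6);
    printf("AVL-based list: %.0f MB (estimate)\n", entries * per_entry / 1e6);
    return 0;
}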
I'm looking into adding an in-memory B+tree library to liblutil. For the kind of fixed-size records we usually store in AVL trees, a B+tree will be much more compact and higher-performance, since it needs rebalancing far less frequently.
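To make the compactness point concrete, here's a sketch of what a leaf node for fixed 16-byte keys could look like. This is only an assumption about the design (node size, layout and names are invented), not the actual liblutil API, which doesn't exist yet:

#include <stdint.h>
#include <stdio.h>

#define NODE_SIZE 4096                    /* assumed page-sized node */
#define KEY_SIZE  16                      /* an entryUUID */
#define FANOUT   ((NODE_SIZE - 16) / KEY_SIZE)   /* keys per leaf after the header */

typedef struct leaf {
    uint16_t     nkeys;                   /* keys currently in use */
    struct leaf *next;                    /* leaf chaining for in-order scans */
    uint8_t      keys[FANOUT][KEY_SIZE];  /* keys stored inline, kept sorted */
} leaf_t;

int main(void)
{
    printf("sizeof(leaf_t)   = %zu bytes\n", sizeof(leaf_t));
    printf("keys per leaf    = %d\n", FANOUT);
    printf("overhead per key = %.2f bytes\n",
           (double)(NODE_SIZE - FANOUT * KEY_SIZE) / FANOUT);
    return 0;
}

With a 4KB node the bookkeeping overhead is a fraction of a byte per key, versus tens of bytes per key for an AVL node plus its berval, and a node only splits once it is full, which is where the "rebalancing far less frequently" comes from.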
On 29/01/15 04:20, Howard Chu wrote:
> I'm looking into adding an in-memory B+tree library to liblutil. For the kind of fixed-size records we usually store in AVL trees, a B+tree will be much more compact and higher-performance, since it needs rebalancing far less frequently.
Why use a B+tree? Wouldn't a hash map be a more appropriate data structure? Ordering by entryUUID seems like overkill...
Emmanuel Lécharny wrote:
> Why use a B+tree? Wouldn't a hash map be a more appropriate data structure? Ordering by entryUUID seems like overkill...
I'm not fond of hashes; they're always cache-unfriendly and most of them have very poor dynamic growth behavior. Since we don't know in advance how many IDs will be stored, growth/resizing is a major concern. Tree structures are generally preferred because they have very good incremental growth performance, and B+trees have the best CPU cache behavior.
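As a rough illustration of the cache argument (illustrative arithmetic, not a measurement; the fanout figure assumes the 4KB-node sketch from my earlier message):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double n = 2.8e6;       /* entries in the presentlist from the earlier example */
    double fanout = 255;    /* 16-byte keys per 4KB node, as in the leaf sketch */

    /* An AVL lookup follows one pointer per level, and every level is a
     * separately malloc'd node, i.e. a likely cache miss. */
    printf("AVL:    ~%.0f node visits per lookup\n", ceil(log2(n)));

    /* A B+tree lookup touches only a few large nodes, each of which is
     * scanned sequentially (cache- and prefetch-friendly). */
    printf("B+tree: ~%.0f node visits per lookup\n", ceil(log(n) / log(fanout)));
    return 0;
}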
On 29/01/15 08:25, Howard Chu wrote:
> I'm not fond of hashes; they're always cache-unfriendly and most of them have very poor dynamic growth behavior. Since we don't know in advance how many IDs will be stored, growth/resizing is a major concern. Tree structures are generally preferred because they have very good incremental growth performance, and B+trees have the best CPU cache behavior.
Hashes have three problems:
- first, as you say, growing a hash is (most of the time) a matter of rehashing and copying the whole table;
- second, they can degenerate when too many keys collide into the same buckets;
- third, they run with an average emptiness of roughly 30%, since the table must be resized before the load factor gets close to 1.
Now, on average, with well-distributed data, they have some major advantages:
- they are faster than any other data structure, with an O(1) average lookup cost;
- the memory they use is minimal, as they are generally backed by an array containing the data plus a flag that indicates a follow link when a bucket is shared by more than one element (see the sketch below);
- adding and deleting elements in a hash map is generally not expensive.
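For illustration, here's a minimal sketch of such a table, keyed directly by the 16-byte entryUUID with a one-byte occupancy flag per slot. It uses linear probing rather than the follow-link chaining described above, just to keep the sketch short; names and sizes are invented for the example, not anything in OpenLDAP:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NSLOTS (1u << 20)            /* fixed power-of-two capacity for the sketch */

typedef struct {
    uint8_t uuid[16];                /* the entryUUID itself, stored inline */
    uint8_t used;                    /* the per-slot flag */
} slot_t;

static slot_t table[NSLOTS];

/* FNV-1a over the raw UUID bytes; any reasonable byte hash would do. */
static uint64_t hash_uuid(const uint8_t *u)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (int i = 0; i < 16; i++) { h ^= u[i]; h *= 0x100000001b3ULL; }
    return h;
}

static int insert_uuid(const uint8_t *u)
{
    uint64_t i = hash_uuid(u) & (NSLOTS - 1);
    while (table[i].used) {                          /* linear probing */
        if (memcmp(table[i].uuid, u, 16) == 0)
            return 0;                                /* already present */
        i = (i + 1) & (NSLOTS - 1);
    }
    memcpy(table[i].uuid, u, 16);
    table[i].used = 1;
    return 1;
}

static int lookup_uuid(const uint8_t *u)
{
    uint64_t i = hash_uuid(u) & (NSLOTS - 1);
    while (table[i].used) {
        if (memcmp(table[i].uuid, u, 16) == 0)
            return 1;
        i = (i + 1) & (NSLOTS - 1);
    }
    return 0;
}

int main(void)
{
    uint8_t u[16] = { 0xde, 0xad, 0xbe, 0xef };      /* a fake entryUUID */
    insert_uuid(u);
    printf("present: %d\n", lookup_uuid(u));
    return 0;
}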
Compared with a B+tree, which is stable at O(log N), a hash is faster, uses less memory, and is easier to implement. The most critical point is that addition or removal in a hash is far less expensive than in a B+tree. You can also protect a hash against concurrent access much more effectively than a B+tree, by splitting the buckets into blocks of sub-buckets, with a lock set on each separate block (sketched below).
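Here's a sketch of that lock-striping idea: the bucket array is split into fixed blocks and each block gets its own mutex, so inserts on different blocks never contend. Again purely illustrative, with invented names, and chained buckets this time:

#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 4096
#define NSTRIPES 64                      /* NBUCKETS / NSTRIPES buckets share one lock */

struct entry {
    uint8_t uuid[16];
    struct entry *next;                  /* chaining within a bucket */
};

static struct entry   *buckets[NBUCKETS];
static pthread_mutex_t stripe_lock[NSTRIPES];

static uint64_t hash_uuid(const uint8_t *u)
{
    uint64_t h = 0xcbf29ce484222325ULL;  /* FNV-1a again */
    for (int i = 0; i < 16; i++) { h ^= u[i]; h *= 0x100000001b3ULL; }
    return h;
}

void present_init(void)
{
    for (int i = 0; i < NSTRIPES; i++)
        pthread_mutex_init(&stripe_lock[i], NULL);
}

/* Only the lock guarding this bucket's block is taken. */
void present_add(const uint8_t *uuid)
{
    unsigned b = hash_uuid(uuid) % NBUCKETS;
    pthread_mutex_t *lk = &stripe_lock[b / (NBUCKETS / NSTRIPES)];

    struct entry *e = malloc(sizeof(*e));
    if (!e)
        return;
    memcpy(e->uuid, uuid, 16);

    pthread_mutex_lock(lk);
    e->next = buckets[b];
    buckets[b] = e;
    pthread_mutex_unlock(lk);
}

int main(void)
{
    uint8_t u[16] = { 0x01, 0x02, 0x03, 0x04 };
    present_init();
    present_add(u);
    return 0;
}

Contention is then limited to writers whose UUIDs happen to land in the same block of buckets.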
At this point, some real-world experimentation is needed to validate these approaches.