On 10/4/07, Howard Chu <hyc@symas.com> wrote:
Pierangelo Masarati wrote:
> Suhel Momin wrote:
>> I have made changes to keep the log in memory using the DB_LOG_INMEMORY flag.
>> The log buffer size is set to 10MB.
>> Now when I try to add 64K entries, 65526 entries get added but the 65527th
>> entry always fails.
>> The error I get is "Index Generation Failed".
>> I tried to debug the issue and found that the cursor->c_del function in
>> bdb_idl_insert_key fails, returning the error code DB_LOG_BUFFER_FULL.
>> Am I missing something, or is this a known problem?
>> Do I need to do anything more while keeping logs in memory?
> Have you read the Berkeley DB documentation about DB_LOG_INMEMORY?  This is
> the expected result of creating an in-memory log buffer smaller than the
> size of the largest transaction you want to create.  Either use a larger
> buffer, or write smaller chunks.  And read the docs
> <http://www.oracle.com/technology/documentation/berkeley-db/db/articles/inmemory/C/index.html>.

In this case they cannot write smaller chunks; it's purely a function of how
we generate indexes. When an index slot reaches about 65536 elements, we delete
that list and replace it with a 3-element range. This delete operation
consumes a great deal of log space.
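
A rough sketch of that mechanism, paraphrasing rather than quoting the
actual IDL code (the real logic lives in servers/slapd/back-bdb/idl.c,
and the names here are simplified):

/* Paraphrased sketch, not the actual OpenLDAP code.  Once the ID
 * list (IDL) stored under an index key grows past BDB_IDL_DB_MAX
 * entries, back-bdb stops tracking individual IDs and collapses
 * the slot into a range: a NOID marker followed by the lowest
 * and highest IDs, i.e. 3 elements. */
typedef unsigned long ID;
#define NOID            ((ID)~0)
#define BDB_IDL_DB_MAX  65535   /* (1 << BDB_IDL_LOGN) - 1, LOGN = 16 */

static void idl_collapse_to_range(ID *ids)
{
    ID lo = ids[1];         /* ids[0] holds the element count */
    ID hi = ids[ids[0]];    /* last (highest) stored ID */

    /* In the real code, every one of the ~64K individual IDs is
     * first deleted from the index database through a cursor;
     * those deletes are what flood the transaction log. */
    ids[0] = NOID;          /* NOID in slot 0 marks a range */
    ids[1] = lo;
    ids[2] = hi;
}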

The solution, of course, is to use a larger buffer.
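
For illustration, a minimal sketch of that configuration with the Berkeley
DB 4.x C API of this era (later releases moved DB_LOG_INMEMORY over to
log_set_config); the 32MB buffer size is only an example value, not a
recommendation from this thread:

#include <db.h>

/* Sketch: open an environment with in-memory logging and an
 * enlarged log buffer.  Error handling is abbreviated. */
int open_env(const char *home)
{
    DB_ENV *env;
    int rc;

    if ((rc = db_env_create(&env, 0)) != 0)
        return rc;

    /* Keep transaction logs in memory instead of on disk. */
    if ((rc = env->set_flags(env, DB_LOG_INMEMORY, 1)) != 0)
        goto fail;

    /* With in-memory logs the buffer must hold all log records
     * of the largest single transaction, or operations fail
     * with DB_LOG_BUFFER_FULL. */
    if ((rc = env->set_lg_bsize(env, 32 * 1024 * 1024)) != 0)
        goto fail;

    rc = env->open(env, home,
        DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
        DB_INIT_MPOOL | DB_INIT_TXN, 0);
    if (rc == 0)
        return 0;
fail:
    env->close(env, 0);
    return rc;
}

With slapd's back-bdb the same settings would normally go in the DB_CONFIG
file in the database directory (set_flags DB_LOG_INMEMORY and set_lg_bsize
with a byte count), which Berkeley DB reads when the environment is opened.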

I tried changing the value of BDB_IDL_LOGN from 16 to 20, which changes the value of
BDB_IDL_DB_MAX. This change appears to have solved my problem of "index generation failed" while keeping logs in memory.
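
For reference, the definitions involved look roughly like this (paraphrased
from back-bdb's idl.h; the exact file contents may differ):

/* Sketch of the relevant macros; stock values shown, with the
 * effect of the 16 -> 20 change noted alongside. */
#define BDB_IDL_LOGN     16                     /* changed to 20 here */
#define BDB_IDL_DB_SIZE  (1 << BDB_IDL_LOGN)    /* 65536 -> 1048576   */
#define BDB_IDL_DB_MAX   (BDB_IDL_DB_SIZE - 1)  /* 65535 -> 1048575   */

Raising BDB_IDL_LOGN defers the list-to-range conversion, and the burst of
delete log records that comes with it, until an index slot holds roughly a
million IDs; the trade-off is that IDL buffers sized from the same constant
grow accordingly, so memory use goes up.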

Is this the right way to go about it, or should I still increase the log buffer size?