I have made changes to keep the log in memory using the DB_LOG_INMEMORY flag. The log size is kept at 10MB. Now, when I try to add 64K entries, 65526 entries get added but the 65527th entry always fails. The error I get is "Index Generation Failed". I tried to debug the issue and found that the cursor->c_del function in bdb_idl_insert_key fails, returning the error code DB_LOG_BUFFER_FULL.
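For reference, this is roughly how the environment is being set up (a minimal sketch against the BDB 4.x C API; the home path, flag set, and error handling here are illustrative, not the exact code):

    #include <db.h>

    /* Open a transactional environment with the log kept entirely in
     * memory, as described above.  Illustrative sketch only. */
    static int open_inmem_env(DB_ENV **envp, const char *home)
    {
        DB_ENV *env;
        int ret;

        if ((ret = db_env_create(&env, 0)) != 0)
            return ret;

        /* Keep the transaction log in memory instead of on disk. */
        if ((ret = env->set_flags(env, DB_LOG_INMEMORY, 1)) != 0)
            goto err;

        /* 10MB log buffer, as above.  With in-memory logging this
         * buffer must hold every record of the largest transaction. */
        if ((ret = env->set_lg_bsize(env, 10 * 1024 * 1024)) != 0)
            goto err;

        if ((ret = env->open(env, home,
                DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG |
                DB_INIT_MPOOL | DB_INIT_TXN, 0)) != 0)
            goto err;

        *envp = env;
        return 0;

    err:
        env->close(env, 0);
        return ret;
    }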
Am I missing something, or is this a known problem? Do I need to do anything more while keeping logs in memory?
Regards, Suhel
--On Wednesday, October 03, 2007 6:59 PM +0530 Suhel Momin <suhelmomin@gmail.com> wrote:
> I have made changes to keep the log in memory using the DB_LOG_INMEMORY flag. The log size is kept at 10MB. Now, when I try to add 64K entries, 65526 entries get added but the 65527th entry always fails. The error I get is "Index Generation Failed". I tried to debug the issue and found that the cursor->c_del function in bdb_idl_insert_key fails, returning the error code DB_LOG_BUFFER_FULL.
>
> Am I missing something, or is this a known problem? Do I need to do anything more while keeping logs in memory?
Why are you keeping the logs in memory? Overall, this sounds like a BDB issue and not an OpenLDAP issue.
--Quanah
--
Quanah Gibson-Mount
Principal Software Engineer
Zimbra, Inc
--------------------
Zimbra :: the leader in open source messaging and collaboration
Suhel Momin wrote:
> I have made changes to keep the log in memory using the DB_LOG_INMEMORY flag. The log size is kept at 10MB. Now, when I try to add 64K entries, 65526 entries get added but the 65527th entry always fails. The error I get is "Index Generation Failed". I tried to debug the issue and found that the cursor->c_del function in bdb_idl_insert_key fails, returning the error code DB_LOG_BUFFER_FULL.
>
> Am I missing something, or is this a known problem? Do I need to do anything more while keeping logs in memory?
Have you read the Berkeley DB documentation about DB_LOG_INMEMORY? This is the expected result of creating an in-memory log buffer smaller than the largest transaction you want to run. Either use a larger buffer, or write smaller chunks. And read the docs: http://www.oracle.com/technology/documentation/berkeley-db/db/articles/inmemory/C/index.html
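A larger buffer is a one-line change made before the environment is opened; sketch only, and the 32MB figure below is just an example, not a recommendation (size it to your largest transaction):

    #include <db.h>

    /* Illustrative: enlarge the in-memory log buffer so it can hold
     * the largest single transaction.  Must be called before
     * env->open(); 32MB is an arbitrary example value. */
    static int enlarge_log_buffer(DB_ENV *env)
    {
        return env->set_lg_bsize(env, 32 * 1024 * 1024);
    }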
p.
Ing. Pierangelo Masarati
OpenLDAP Core Team

SysNet s.r.l.
via Dossi, 8 - 27100 Pavia - ITALIA
http://www.sys-net.it
---------------------------------------
Office:  +39 02 23998309
Mobile:  +39 333 4963172
Email:   pierangelo.masarati@sys-net.it
---------------------------------------
Pierangelo Masarati wrote:
> Suhel Momin wrote:
>> I have made changes to keep the log in memory using the DB_LOG_INMEMORY flag. The log size is kept at 10MB. Now, when I try to add 64K entries, 65526 entries get added but the 65527th entry always fails. The error I get is "Index Generation Failed". I tried to debug the issue and found that the cursor->c_del function in bdb_idl_insert_key fails, returning the error code DB_LOG_BUFFER_FULL.
>>
>> Am I missing something, or is this a known problem? Do I need to do anything more while keeping logs in memory?
> Have you read the Berkeley DB documentation about DB_LOG_INMEMORY? This is the expected result of creating an in-memory log buffer smaller than the largest transaction you want to run. Either use a larger buffer, or write smaller chunks. And read the docs: http://www.oracle.com/technology/documentation/berkeley-db/db/articles/inmemory/C/index.html
In this case they cannot write smaller chunks; it's purely a function of how we generate indexes. When an index slot reaches about 65536 elements, we delete that list and replace it with a 3-element range. This delete operation consumes a great deal of log space.
The solution, of course, is to use a larger buffer.
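With slapd's back-bdb, that doesn't require a rebuild: the buffer size can be raised through the DB_CONFIG file in the database directory, which BDB reads when the environment is opened. A sketch, assuming the 32MB value is large enough for your biggest index rewrite:

    # DB_CONFIG in the back-bdb database directory, read at
    # environment open.  Keep the transaction log in memory and make
    # the buffer big enough to hold the largest single transaction;
    # the 32MB value is only a guess.
    set_flags DB_LOG_INMEMORY
    set_lg_bsize 33554432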