Hello, I want to import more than 3,000,000 entries into OpenLDAP in the shortest possible time, so I have a few questions for you.

The import is done with ldapadd and not slapadd, because my data does not contain the operational attributes (timestamps, UUIDs, etc.). How could I do this differently? (If slapadd can cope without them, see the sketch after my settings below.)

I noticed that indexing after the LDIF import is faster, but I have the impression that objectClass gets indexed during the import anyway, even when `index objectClass eq' is not specified in slapd.conf. Does that mean objectClass is indexed again by slapindex, which would be a waste of time? How can I do this differently?

To optimize the import I have set up the following.

slapd.conf:
- backend bdb
- loglevel 0
- only the 3 schemas needed for the import (core/cosine/private)
- schemacheck off
- dbnosync
- dbnolocking
- replogfile commented out
- disabled the monitor backend and the indexes

DB_CONFIG:
- set_flags DB_TXN_NOSYNC (but it seems to me this is equivalent to dbnosync in slapd.conf)
- set_cachesize 0 268435456 1 (for 1 GB of memory; 512 MB was slower)

And I did not move the BDB transaction logs to another directory, because I gained nothing from it, which I found odd by the way.
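If slapadd really does not need those operational attributes (I have read that recent releases generate entryUUID/createTimestamp themselves, but I am not sure about my version), this is roughly what I would run offline with slapd stopped, indexes commented out during the load and rebuilt afterwards. The config path, suffix and LDIF file name are just examples:

    # load the LDIF directly into the bdb files, slapd stopped
    slapadd -f /etc/openldap/slapd.conf -b "dc=example,dc=com" -l big.ldif

    # then put the index directives back into slapd.conf, e.g.
    #   index objectClass  eq
    #   index uid          eq
    #   index cn,sn        eq,sub
    # and rebuild them in one pass
    slapindex -f /etc/openldap/slapd.conf -b "dc=example,dc=com"

Is that the right way to go about it?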
Also, since I run db_archive -d to remove all of the logs, is there not a way to avoid creating them in the first place? For instance: do the import with a backend that is faster, run slapcat, then bring up a bdb backend and import that LDIF directly into the DIB without going through the DSA? A sketch of the DB_CONFIG I have in mind follows.
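In case it helps to see what I mean, this is the kind of DB_CONFIG I am considering to keep the transaction logs from piling up. DB_LOG_AUTOREMOVE requires Berkeley DB 4.2 or later, so treat that as an assumption, and the sizes are only examples:

    # DB_CONFIG in the bdb database directory
    set_cachesize   0 268435456 1
    # no synchronous flush of the transaction logs
    set_flags       DB_TXN_NOSYNC
    # let Berkeley DB delete log files as soon as they are no longer needed
    # (Berkeley DB 4.2+ only), instead of running db_archive -d by hand
    set_flags       DB_LOG_AUTOREMOVE
    # larger in-memory log buffer, so fewer log writes hit the disk
    set_lg_bsize    2097152

As far as I understand, that does not stop the logs from being written, it only removes the old ones automatically, so it would just save me the db_archive step. Is there a better way?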
Thank you for your advice.
And sorry for the translation.