I have an environment with one database:
alecm@db0 /fusionio1/lmdb> mdb_stat -fear dbgraph-to-compress/vx_to_edg
Environment Info
Map address: (nil)
Map size: 1200000000000
Page size: 4096
Max pages: 292968750
Number of pages used: 24631912
Last transaction ID: 69821
Max readers: 126
Number of readers used: 0
Reader Table Status
(no active readers)
Freelist Status
Tree depth: 2
Branch pages: 1
Leaf pages: 13
Overflow pages: 26338
Entries: 1801
Free pages: 13028375
Status of Main DB
Tree depth: 1
Branch pages: 0
Leaf pages: 1
Overflow pages: 0
Entries: 1
Status of vx_to_edg
Tree depth: 4
Branch pages: 45690
Leaf pages: 7845337
Overflow pages: 0
Entries: 3993304504
I compacted it (i.e., got rid of the free pages) with
mdb_copy -c dbgraph-to-compress/vx_to_edg/ dbgraph-compressed/vx_to_edg
This command exited normally with return status 0.
Running mdb_stat -fear on the compacted database then produced a SIGSEGV:
alecm@db0 /fusionio1/lmdb> gdb --args mdb_stat -fear
dbgraph-compressed/vx_to_edg
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.3) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
<http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from mdb_stat...done.
(gdb) r
Starting program: /usr/local/bin/mdb_stat -fear dbgraph-compressed/vx_to_edg
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Environment Info
Map address: (nil)
Map size: 1200000000000
Page size: 4096
Max pages: 292968750
Number of pages used: 11577185
Last transaction ID: 1
Max readers: 126
Number of readers used: 0
Reader Table Status
(no active readers)
Freelist Status
Tree depth: 0
Branch pages: 0
Leaf pages: 0
Overflow pages: 0
Entries: 0
Free pages: 0
Status of Main DB
Tree depth: 1
Branch pages: 0
Leaf pages: 1
Overflow pages: 0
Entries: 1
Program received signal SIGSEGV, Segmentation fault.
mdb_xcursor_init1 (node=node@entry=0x7ee891ecdfc4, mc=0x6154a0, mc=0x6154a0)
at mdb.c:8556
8556 mx->mx_cursor.mc_flags &= C_SUB|C_ORIG_RDONLY|C_WRITEMAP;
(gdb) bt
#0 mdb_xcursor_init1 (node=node@entry=0x7ee891ecdfc4, mc=0x6154a0,
mc=0x6154a0) at mdb.c:8556
#1 0x0000000000405f10 in mdb_cursor_first (mc=0x6154a0, key=0x7fffffffe400,
data=0x0) at mdb.c:7279
#2 0x00000000004060fc in mdb_cursor_next (mc=<optimized out>,
key=<optimized out>, data=<optimized out>, op=<optimized out>) at mdb.c:6886
#3 0x0000000000404e35 in mdb_cursor_get (mc=0x6154a0,
key=key@entry=0x7fffffffe400, data=data@entry=0x0,
op=op@entry=MDB_NEXT_NODUP) at mdb.c:7466
#4 0x000000000040222f in main (argc=<optimized out>, argv=<optimized out>)
at mdb_stat.c:230
(gdb)
#mdb_copy -V
LMDB 0.9.70: (December 19, 2015)
Linux db0 3.13.0-95-generic
XFS filesystem
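For what it's worth, the page counts in the two mdb_stat runs above are self-consistent: the compacting copy drops the 13028375 free pages and also the old freelist's own pages (its 1 branch + 13 leaf + 26338 overflow pages), which is why the copy shows an empty freelist. A quick arithmetic check:

```python
# Page accounting from the two mdb_stat runs above.
pages_before = 24631912   # "Number of pages used" in the original env
free_pages   = 13028375   # "Free pages" in the original freelist
# The freelist's own pages are also dropped by the compacting copy:
freelist_own = 1 + 13 + 26338  # its branch + leaf + overflow pages

pages_after = pages_before - free_pages - freelist_own
print(pages_after)  # 11577185, matching "Number of pages used" in the copy
```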
Ok, thanks.
Regards
Chandan
On Fri, Mar 12, 2021, 23:55 Quanah Gibson-Mount <quanah(a)symas.com> wrote:
>
>
> --On Friday, March 12, 2021 11:38 PM +0530 chandan jain
> <chandandevops(a)gmail.com> wrote:
>
> >
> >
> > Quanah I'm waiting for your take on this as you are aware of the whole
> > thread from start.
>
> I don't have anything to add at this point. I don't know what sort of
> writes your application does, and I don't know what its requirements are
> as far as doing reads after writes, etc. In general, I'd expect a
> well-written application that needs to do thousands of reads to be
> isolated from the write side of data processing, which doesn't seem to
> be the case here.
>
> --Quanah
>
> --
>
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
> <http://www.symas.com>
>
More servers behind the load balancer means better distribution of traffic
and less load on any single node. A sudden spike in traffic won't choke the
setup.
Regards
Chandan
On Fri, Mar 12, 2021, 00:18 Quanah Gibson-Mount <quanah(a)symas.com> wrote:
>
>
> --On Friday, March 12, 2021 12:12 AM +0530 chandan jain
> <chandandevops(a)gmail.com> wrote:
>
> >
> > Quanah, I am already having this setup, but business wants to horizontal
> > scale the setup.
>
> If you only have a single application using LDAP, how does horizontal
> scaling help in any way?
>
> --Quanah
>
>
>
Quanah, I already have this setup, but the business wants to scale it
horizontally.
As far as I understand, horizontal scaling is for read replicas only, since
writes would go to a single node with a sticky session.
Your earlier solution was perfect for my use case; I am just confused about
how to bifurcate read and write connections. Shall I use two separate
connection strings from the single application, for read and write traffic,
pointing to the same set of servers via different pools?
Regards
Chandan
On Thu, Mar 11, 2021, 21:30 Quanah Gibson-Mount <quanah(a)symas.com> wrote:
>
>
> --On Thursday, March 11, 2021 10:56 AM +0530 chandan jain
> <chandandevops(a)gmail.com> wrote:
>
> >
> >
> > OK, but if I put a single connection string, how below setup suggested by
> > quanah will work.
> >
> >
> > I have a single application which read as well as write to ldap.
>
> If you have only a single application using LDAP, just set up two nodes
> with sticky failover and a single pool, since nothing else is using LDAP.
> As has been said repeatedly, in general, an application that does writes
> should use the same connection for reads.
>
> --Quanah
>
>
Understood, Quanah, but I have a single app which performs both reads and
writes. The app uses a single connection string for binding with LDAP. So
shall I use two separate connection strings, one for reads and one for
writes, in the application code?
Also, per the configuration you suggested, how does replication need to be
set up? I mean mirror mode across the write pool members, and another
mirror for the read pool members from one of the write pool members?
Regards
Chandan
On Wed, Mar 10, 2021, 21:41 Quanah Gibson-Mount <quanah(a)symas.com> wrote:
>
>
> --On Wednesday, March 10, 2021 6:43 PM +0530 chandan jain
> <chandandevops(a)gmail.com> wrote:
>
> >
> >
> > A load-balancer that is not doing round-robin (but some other policy,
> > like response time or throughput) probably would be OK
> >
> >
> >
> > OK agreed with you, but above loadbalancer config doesn't solve problem
> > of horizontal scaling and load balancing.
> >
> >
> > In other words, is it possible to achieve a horizontally scalable, highly
> > available and load balanced setup.
>
> You set up two pools in the load balancer
>
> Pool 1 -> For apps that only do reads, and handles load distribution in
> whatever method you feel best. Example DNS: ldap.example.com
>
> Pool 2 -> For apps that write directly. Sticky session to a single
> provider unless it goes offline. Example DNS: ldap-provider.example.com
>
> --Quanah
>
>
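As an illustration of the two-pool layout Quanah describes, a minimal HAProxy sketch might look like the following. All server names, addresses, and ports here are hypothetical examples, not taken from the thread:

```haproxy
# Hypothetical sketch only; names and addresses are placeholders.

# Pool 1: reads (ldap.example.com) - distribute however you like.
frontend ldap_read
    mode tcp
    bind 10.0.0.100:389
    default_backend ldap_read_pool

backend ldap_read_pool
    mode tcp
    balance roundrobin
    server a 10.0.0.1:389 check
    server b 10.0.0.2:389 check

# Pool 2: writes (ldap-provider.example.com) - sticky to one provider,
# failing over only when it goes offline.
frontend ldap_write
    mode tcp
    bind 10.0.0.101:389
    default_backend ldap_write_pool

backend ldap_write_pool
    mode tcp
    server a 10.0.0.1:389 check
    server b 10.0.0.2:389 check backup
```

The "backup" keyword keeps node b out of the write pool unless node a is down, which matches the sticky-provider behaviour described above.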
Thanks, Quanah, for the detailed explanation; you have sorted out the
confusion.
One last doubt: the read/write bifurcation of the LDAP connection string is
to be handled in application code, and has nothing to do with the LDAP end.
That is, I need to define ldap.example.com as the connection string for
reads and ldap-provider.example.com for writes in the application code.
Regards
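In application code, that bifurcation reduces to choosing one of the two DNS names per operation. A minimal, hypothetical Python sketch (the hostnames follow the earlier example; the helper itself is illustrative, not a real library API):

```python
# Route each LDAP operation to one of the two load-balancer pools.
# Hostnames follow the earlier example; everything here is illustrative.
READ_POOL = "ldap://ldap.example.com"            # round-robin read pool
WRITE_POOL = "ldap://ldap-provider.example.com"  # sticky write pool

WRITE_OPS = {"add", "modify", "modrdn", "delete"}

def endpoint_for(op: str, read_after_write: bool = False) -> str:
    """Writes go to the sticky provider pool; so do reads that must
    immediately see their own writes (to avoid replication delay)."""
    if op in WRITE_OPS or read_after_write:
        return WRITE_POOL
    return READ_POOL
```

Note the read_after_write escape hatch: as Quanah points out later in the thread, an app that reads back its own writes should use the write pool for those reads too.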
On Wed, Mar 10, 2021, 23:19 Quanah Gibson-Mount <quanah(a)symas.com> wrote:
>
>
> --On Wednesday, March 10, 2021 10:38 PM +0530 chandan jain
> <chandandevops(a)gmail.com> wrote:
>
> >
> >
> > Understood quanah, but I have a single app which performs both read and
> > write. App is using single connection string for binding with ldap. So
> > shall I use two separate connection string, one for read and one for
> > write in the application code ?
>
> If you look closely at my response, I noted that apps that do writes
> should use the same pool for reads. This is generally because most apps
> I've run across do a read after write and may hit problems if the change
> is not there (i.e., due to replication delays).
>
> > Also, as per the configuration setup suggested by you, how the
> > replication need to be setup, I mean mirror mode across write pool
> > members and another mirroring for read pool members from one of write
> > pool member.
>
> I don't understand this question. There's a single set of servers, say A,
> B, C, D. There are two pools configured in the load balancer. The first
> pool uses a sticky setting, and always points to a single server for write
> ops (say A) unless it's down, at which point it will fail over to the first
> available server (say B). The second pool is for reads, and does whatever
> algorithm you think best (say round robin), and bounces between A, B, C, D.
>
> What replication mechanism is in use has nothing to do with the load
> balancer configuration. I would generally advise using delta-syncrepl
> between nodes A, B, C, D, all of which connect directly to one another and
> don't interact directly with the load balancer at all.
>
> Regards,
> Quanah
>
>
>
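For the replication side, a delta-syncrepl consumer stanza (slapd.conf style) looks roughly like the following. The DNs, hostname, and credentials are placeholders, not values from the thread:

```
# Sketch of a delta-syncrepl consumer; placeholders throughout.
syncrepl rid=001
    provider=ldap://ldap-a.example.com
    type=refreshAndPersist
    searchbase="dc=example,dc=com"
    bindmethod=simple
    binddn="cn=replicator,dc=example,dc=com"
    credentials=secret
    logbase="cn=accesslog"
    logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
    syncdata=accesslog
    retry="60 +"
mirrormode on
```

Each node carries an equivalent stanza pointing at its peers; as Quanah notes, the replication topology is independent of the load-balancer pools.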
Thanks,
As I understand it, a mirror-mode configuration cannot be horizontally
scaled, since writes go to one node and the other acts as an active standby.
I want two or more nodes behind a load balancer which can share read/write
load; a kind of active-active setup.
Regards
Chandan Jain
On Mon, Mar 8, 2021, 23:44 Quanah Gibson-Mount <quanah(a)symas.com> wrote:
>
>
> --On Sunday, March 7, 2021 8:39 PM +0530 chandan jain
> <chandandevops(a)gmail.com> wrote:
>
> >
> >
> > Thanks, Quanah
> >
> >
> > Is it possible to direct upgrade from 2.4.32 to latest version.
>
> If you (temporarily) stick with the same backend, and in this case, if
> that same backend is linked to the exact same version of BDB, yes. I.e.,
> compile the back-bdb/hdb backends against the same version of BDB,
> upgrade, and then migrate to back-mdb.
>
> > Also, can we horizontal scale a 2 node mirror mode setup? I am confused
> > after seeing suggestions on different sites.
>
> I don't understand the question here. Mirror mode is just a configuration
> of MMR with a load balancer in front.
>
> --Quanah
>
>
OK, but how do I spread out read traffic?
I mean, how can I bifurcate reads and writes to different nodes? How does
the application decide which node to write to and which node to read from?
Regards
Chandan
On Wed, Mar 10, 2021, 00:02 Quanah Gibson-Mount <quanah(a)symas.com> wrote:
>
>
> --On Tuesday, March 9, 2021 3:28 PM +0530 chandan jain
> <chandandevops(a)gmail.com> wrote:
>
> >
> >
> > Thanks,
> >
> >
> > Mirror mode configuration cannot be horizontal scaled what I understood
> > as writes are going to one of the node, and other act as an active
> > standby.
> >
> >
> > I want 2 or more nodes behind a load balancer which can share read/write
> > load. A kind of active active setup.
>
> The point of mirror mode is that only one server in the pool gets writes.
> You can horizontally scale that as much as you want, whether there are 2
> servers in the pool or 5000. I.e., as long as write traffic only goes to
> one of those servers, you have mirror mode.
>
> Generally I would advise against distributing write traffic (i.e., do
> exactly what mirror mode does, direct all write traffic to a single active
> node unless it goes down and fail over is necessary). Spread out read
> traffic as desired.
>
> --Quanah
>
>
> --
>
> Quanah Gibson-Mount
> Product Architect
> Symas Corporation
> Packaged, certified, and supported LDAP solutions powered by OpenLDAP:
> <http://www.symas.com>
>
Thanks, Quanah
Is it possible to upgrade directly from 2.4.32 to the latest version?
Also, can we horizontally scale a two-node mirror-mode setup? I am confused
after seeing suggestions on different sites.
Regards
Chandan Jain
On Fri, Mar 5, 2021, 00:28 Quanah Gibson-Mount <quanah(a)symas.com> wrote:
>
>
> --On Thursday, March 4, 2021 11:49 PM +0530 chandan jain
> <chandandevops(a)gmail.com> wrote:
>
> >
> > It is openldap-2.4.32, i don't see any mdb support option while
> > compiling .
> > It is compiled with below options:
> >
> >
> > tar -xzf db-5.3.21.tar.gz
> > tar -zxf openldap-2.4.32.tgz
> > cd db-5.3.21
> > cd build_unix/
> > ../dist/configure --enable-compat185 --enable-dbm --disable-static
> > --enable-cxx && make && make install
> > db_verify -V
> > cd ../..
> > cd openldap-2.4.32
> > ./configure --prefix=/usr/local/OpenLDAP --with-tls=no
> > --enable-modules=yes --enable-overlays=yes --enable-ppolicy=yes && make
> > depend && make && make install
>
> 2.4.32 is over 8 years old. As I said, use a current release (2.4.57).
>
> --Quanah
>
>