Howard Chu wrote:
With the recent hubbub about it on the -software list, I decided to keep quiet for a bit. Talking about a feature before it's done smacks of vaporware; it needs more testing before we start talking about it more broadly.
And on that note... Here are some scenarios that need to be tested:
single DB with multiple consumers: this isn't what most people think of as multimaster. The server is a pure slave; all of its data comes from other providers. Since it is not a master, local writes are disallowed, the same as in the old single DB / single consumer scenario.
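For concreteness, a minimal slapd.conf sketch of this scenario might look like the following. The hostnames, suffix, and credentials are made up for illustration; the point is multiple syncrepl statements in one database, with no mirrormode, so local writes get referred away via updateref:

```
# pure slave: one DB, two consumers, no local writes
database        hdb
suffix          "dc=example,dc=com"

# one syncrepl statement per provider
syncrepl rid=001
         provider=ldap://server1.example.com
         type=refreshAndPersist
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret
syncrepl rid=002
         provider=ldap://server2.example.com
         type=refreshAndPersist
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret

# no "mirrormode on" here, so writes are referred to a provider
updateref ldap://server1.example.com
```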
pair of peer servers: this is 2-way multimaster, like mirrormode. Each server has a provider, and a consumer pointed at the other. Local writes are allowed. This is what test050 sets up, but the script doesn't actually test anything after the DBs are populated; I've been poking and prodding it manually. We'll probably want to use slapd-tester here.
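A sketch of one peer's config (again with made-up names; the other peer is the mirror image with the serverID and provider URL swapped):

```
# peer A (serverID 1); peer B is identical except serverID 2
# and provider=ldap://peerA.example.com
serverID 1

database        hdb
suffix          "dc=example,dc=com"

syncrepl rid=001
         provider=ldap://peerB.example.com
         type=refreshAndPersist
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret

# local writes are allowed on both peers
mirrormode on
```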
N-way multimaster: this could take two basic approaches: sparse connections, or full connections. Currently the code does not support the sparse approach; we would need to add support for multiple URLs in a single consumer.
For the fully connected approach, each server has a separate consumer pointed at each of the other servers. The provider would only propagate locally generated changes; changes received via a consumer would stop there, because it's assumed that all of the servers are receiving the changes at the same time.
At the moment it doesn't work that way - any change is always propagated, so in Persist mode there's some excess traffic.
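For a three-server mesh, each server's config would carry a consumer for each of the other two. A sketch for one of them (hostnames and suffix invented for illustration):

```
# server1 of {server1,server2,server3}: one consumer per other server
serverID 1

database        hdb
suffix          "dc=example,dc=com"

syncrepl rid=002
         provider=ldap://server2.example.com
         type=refreshAndPersist
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret
syncrepl rid=003
         provider=ldap://server3.example.com
         type=refreshAndPersist
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret

mirrormode on
```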
For the sparse approach, each server would have a single consumer configured with multiple URLs. This means there would only be one active consumer session at a time, and it would only switch away from the active URL if that server went down. The most obvious implementation would be to treat the list of URLs as a ring. Each server would find its own listenerURL in the list, and connect to the next server in the list first. (Again, I'm assuming that all of the servers are replicating their configs as well as their main data, thus a single consumer config would list the URLs of all of the participating servers.) Any change that arrived anywhere would have to be propagated onward. The advantage here is fewer open connections; a disadvantage is the long propagation delay for updates traversing the ring.
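Since multiple URLs in one consumer aren't supported yet, any syntax for this is purely hypothetical, but the ring idea might look something like this, with the identical config replicated to all three servers:

```
# HYPOTHETICAL -- multiple provider URLs per consumer do not exist yet.
# Each server would find its own listener URL in the list and connect
# to the next entry first, treating the list as a ring.
syncrepl rid=001
         provider="ldap://server1.example.com ldap://server2.example.com ldap://server3.example.com"
         type=refreshAndPersist
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret

mirrormode on
```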
Of course you could do some combination of the two approaches - configure more than one consumer, but fewer than the total number of servers. In each consumer you would configure URLs for a subset of all the servers.
... For any of the multimaster configurations, we should also set up a couple of regular consumers to verify that cascading still works.
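Such a regular downstream consumer would just be an ordinary slave pointed at one of the multimaster peers, e.g. (names invented as before):

```
# plain cascaded consumer, pointed at one of the multimaster peers
database        hdb
suffix          "dc=example,dc=com"

syncrepl rid=101
         provider=ldap://server1.example.com
         type=refreshAndPersist
         searchbase="dc=example,dc=com"
         bindmethod=simple
         binddn="cn=replicator,dc=example,dc=com"
         credentials=secret

updateref ldap://server1.example.com
```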