When the processing in "def synchronize" (let me call it P(a)) breaks unexpectedly, the "last_change" value in domain.tld.db is set to the last_change date of the LDAP filter result item (the filter I call F(a), its result R(a)) that was processed just before the item/entry whose processing failed.
The effect is this: we change or add some users matching the LDAP filter, and for whatever reason the processing P(a) breaks after adding only one of, or at least not all of, the results [R(a)] of filter [F(a)] to the Kolab system. All new users that were not processed within the crashed "synchronize" run [P(a)] will then never be processed again.
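
To make the failure mode concrete, here is a purely illustrative sketch with invented names, timestamps and structures (not the actual kolab code), showing the state that is left behind when P(a) breaks half way through R(a):

    # Purely illustrative: invented entries standing in for the results R(a)
    # of filter F(a); the loop stands in for whatever P(a) does per entry.
    results = [
        {"uid": "alice", "last_change": "20240102120000Z"},  # processed fine
        {"uid": "bob",   "last_change": "20240102130000Z"},  # processing breaks here
        {"uid": "carol", "last_change": "20240102140000Z"},  # never reached
    ]

    last_change = "20240101000000Z"  # value stored in domain.tld.db before the run

    try:
        for entry in results:
            if entry["uid"] == "bob":
                raise RuntimeError("P(a) breaks unexpectedly")
            print("synchronized", entry["uid"])
            last_change = entry["last_change"]  # advanced after each processed entry
    except RuntimeError:
        pass

    # last_change now holds alice's timestamp; bob and carol were changed in
    # LDAP but never made it into Kolab, and, as described above, no later
    # "synchronize" run picks them up again.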
It would be possible to just run "kolab sync --resync" regularly, but then the process itself, even "kolab sync --resync", is still not reliable.
We discussed the problem here and think it would be a good idea to switch the initiation of query result item processing to a more reliable queuing mechanism.
For this, domain.tld.db could carry two different "last_change" fields, differentiating [ldap] last_change and [kolab] last_change.
With this it would be possible to:
- process sync on all domain.tld.db entries where [ldap] last_change >= [kolab] last_change and update "[kolab] last_change = now" entry by entry after processing
- request LDAP user records based on max([kolab] last_change) and the additional regular filter settings, and update and integrate them into domain.tld.db
- process as described in the first point (a sketch of this flow follows below)
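
To make the proposed flow concrete, here is a minimal sketch, assuming sqlite3 for domain.tld.db and an invented schema; the table/column names as well as process_into_kolab() and ldap_search_changed_since() are hypothetical placeholders, not the actual kolab sync code:

    import sqlite3
    from datetime import datetime, timezone

    # Hypothetical schema for domain.tld.db: one row per LDAP entry, with a
    # separate [ldap] and [kolab] last_change column (names are invented).
    SCHEMA = """CREATE TABLE IF NOT EXISTS entries (
        uid TEXT PRIMARY KEY,
        ldap_last_change  TEXT NOT NULL,
        kolab_last_change TEXT NOT NULL DEFAULT '19700101000000Z'
    )"""

    def _now():
        # LDAP-style generalized time, e.g. 20240102130000Z
        return datetime.now(timezone.utc).strftime("%Y%m%d%H%M%SZ")

    def process_pending(db, process_into_kolab):
        """Steps 1 and 3: process every entry whose LDAP change is newer than
        its last successful Kolab processing, committing entry by entry."""
        rows = db.execute(
            "SELECT uid FROM entries WHERE ldap_last_change >= kolab_last_change"
        ).fetchall()
        for (uid,) in rows:
            process_into_kolab(uid)  # may still crash, but pending rows stay pending
            db.execute("UPDATE entries SET kolab_last_change = ? WHERE uid = ?",
                       (_now(), uid))
            db.commit()

    def refresh_from_ldap(db, ldap_search_changed_since):
        """Step 2: fetch LDAP records changed since max([kolab] last_change),
        using the regular filter, and update/integrate them into domain.tld.db."""
        (newest,) = db.execute(
            "SELECT COALESCE(MAX(kolab_last_change), '19700101000000Z') FROM entries"
        ).fetchone()
        for rec in ldap_search_changed_since(newest):  # hypothetical LDAP helper
            db.execute(
                "INSERT INTO entries (uid, ldap_last_change) VALUES (?, ?) "
                "ON CONFLICT(uid) DO UPDATE SET ldap_last_change = excluded.ldap_last_change",
                (rec["uid"], rec["modifytimestamp"]))
        db.commit()

    def sync_once(db, ldap_search_changed_since, process_into_kolab):
        db.execute(SCHEMA)
        process_pending(db, process_into_kolab)           # 1. drain entries a crashed run left behind
        refresh_from_ldap(db, ldap_search_changed_since)  # 2. queue newly changed LDAP entries
        process_pending(db, process_into_kolab)           # 3. process the freshly queued entries

The point of the per-entry commit is that a crash in process_into_kolab() leaves the unfinished entries with [ldap] last_change >= [kolab] last_change, so the next run simply picks them up again instead of losing them.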
I know that this would change the design of sync massively, but it would give much more reliability.