e synced. Even
> removing their .conversations and .counters files doesn't help.
>
> Can I, and how can I, get rid of those conversation indexes so as to have
> my mailboxes be "as if conversations had never existed"?
After removing "conversations: 1" option
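One tool to look at for the question above is ctl_conversationsdb. A hedged sketch, based on my reading of ctl_conversationsdb(8) — verify the `-z` flag against your version's man page before running, and note the usernames are placeholders:

```shell
# In my reading of ctl_conversationsdb(8), '-z user' zeroes that user's
# conversations DB. Printed as a dry run: pipe the output to sh as the
# cyrus user (after removing "conversations: 1") to actually execute.
users="alice bob"                       # placeholder usernames
cmds=$(for u in $users; do echo "ctl_conversationsdb -z $u"; done)
echo "$cmds"
```

The dry-run form is deliberate: it lets you eyeball the generated commands before touching any user's data.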
ending on your deployment. I think you'll probably want to upgrade your
> 3.0 systems in place as far forward as you can (while staying 3.0), and then
> use the replication strategy to upgrade to 3.2 after that.
I just did that. My test server is now using 3.0.14 (self-built Debian
Within-series,
an in-place upgrade ought to be safe -- but please check the release notes
carefully for extra steps/considerations you may need to make, depending on
your deployment. I think you'll probably want to upgrade your 3.0 systems in
place as far forward as you can (while staying 3.
Jean Charles Delépine wrote:
> Hello,
>
> I'm on the way to migrate one quite big murder config with Cyrus IMAP
> 3.0.8-Debian-3.0.8-6+deb10u4
> to Cyrus IMAP 3.2.3-Debian-3.2.3-1~bpo10+1.
>
> My plan is to replicate 3.0.8's backends onto 3.2.3 ones. This plan has worked
> before for 2.
Hello,
I'm on the way to migrate one quite big murder config with Cyrus IMAP
3.0.8-Debian-3.0.8-6+deb10u4
to Cyrus IMAP 3.2.3-Debian-3.2.3-1~bpo10+1.
My plan is to replicate 3.0.8's backends onto 3.2.3 ones. This plan has
worked before for the 2.5
to 3.0 migration.
I can replicate empty mailbox
Hi!
I was writing some code for automating the server fail-over and was trying to
see how I should or could handle not-yet-run sync log files from sync_client. With
a clean shutdown it's pretty easy to know how to manage, because the
replication is up to date… so almost no problem there
On Thu 04 Jun 2020 at 18:57:37, Michael Menge
(michael.me...@zdv.uni-tuebingen.de) wrote:
>
you also need to run cyr_expire on the "new_server" to remove the old
expunged mails and deleted folders.
Obvious when you try it! Thanks so much.
Expired 23 and expunged 7617 out of 289060 me
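For reference, a sketch of the cyr_expire run being discussed for the new server. The retention windows below are made-up examples, not recommendations — pick values matching your policy:

```shell
# cyr_expire(8): -E prunes duplicate-delivery DB entries, -X removes
# expunged messages, -D removes deleted (DELETED.*) mailboxes -- each
# older than the given number of days. Shown as a dry run; drop the
# surrounding quoting/echo and run as the cyrus user to execute.
cmd="cyr_expire -E 1 -X 7 -D 7 -v"
echo "$cmd"
```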
Hi,
Quoting Ian Batten via Info-cyrus :
Hi, long-time Cyrus user (25 years, I think), but stumped on this one…
I have an ancient Cyrus 2.5.11 on Solaris 11 installation I am
trying to migrate off. The strategy is to run rolling replication
onto the new server (3.0.8-6+deb10u4 on Debian
Hi, long-time Cyrus user (25 years, I think), but stumped on this one…
I have an ancient Cyrus 2.5.11 on Solaris 11 installation I am trying to
migrate off. The strategy is to run rolling replication onto the new server
(3.0.8-6+deb10u4 on Debian 10.4), and then point the DNS record at the
ing a hardware
entropy generator, e.g. on a PCIe card. They aren't super cheap but also
not too expensive. I think someone with good cryptography knowledge
could help you with this topic. I suppose the storage I/O can be a
bigger issue here.
I've also noticed that replication documen
On 22.04.2020 at 10:19, Olaf Frączyk wrote:
> On 2020-04-22 09:16, Andrzej Kwiatkowski wrote:
>> On 20.04.2020 at 16:11, Olaf Frączyk wrote:
>>> Hi,
>>>
>>> I'm running 3.0.5.
>>>
>>> I want to migrate to a new machine. I set up
On 2020-04-22 09:16, Andrzej Kwiatkowski wrote:
On 20.04.2020 at 16:11, Olaf Frączyk wrote:
Hi,
I'm running 3.0.5.
I want to migrate to a new machine. I set up cyrus-imapd 3.0.13.
The replication started but it didn't transfer all mails.
The store isn't big, 44 GB; transferr
s got this pretty much covered -- you need to disable the
rolling replication for now, and then use sync_client -u (or if you're brave,
sync_client -A) to get an initial sync of everything. These two options work
entire-user-at-a-time, so they should detect and fix the problems introduced
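A sketch of the per-user approach described above. The replica hostname and usernames are placeholders; in practice the user list would come from your own inventory (e.g. dumped from the mailboxes DB):

```shell
REPLICA=replica.example.com             # placeholder replica hostname
# Dry run: prints one sync_client invocation per user; pipe the output
# to sh (as the cyrus user) to perform the initial entire-user sync.
cmds=$(while read -r user; do
    echo "sync_client -S $REPLICA -v -u $user"
done <<'EOF'
alice
bob
EOF
)
echo "$cmds"
```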
Since you use replication - are sieve scripts replicated as well?
There is an -s option, called sieve mode, but it requires specifying which
users' files are to be replicated, and it is documented as being mostly
for debugging.
Yes, sieve scripts are replicated.
The way the rolling replication work
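As the thread describes it, sieve mode takes an explicit userid whose Sieve files should be pushed. A sketch with placeholder hostname and userid — normally unnecessary, since sieve scripts ride along with ordinary user replication, and per the thread this mode is mostly for debugging:

```shell
# Dry run of the '-s' sieve-mode push discussed above; drop the quoting
# and run as the cyrus user only if you need to debug sieve replication.
cmd="sync_client -S replica.example.com -v -s alice"
echo "$cmd"
```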
On 20.04.2020 at 16:11, Olaf Frączyk wrote:
> Hi,
>
> I'm running 3.0.5.
>
> I want to migrate to a new machine. I set up cyrus-imapd 3.0.13.
>
> The replication started but it didn't transfer all mails.
>
> The store isn't big, 44 GB; transferr
I think Michael's got this pretty much covered -- you need to disable the
rolling replication for now, and then use sync_client -u (or if you're brave,
sync_client -A) to get an initial sync of everything. These two options work
entire-user-at-a-time, so they should detect and fix th
Quoting Olaf Frączyk :
Yes, at the beginning I was also thinking if initial sync is
necessary, but there was nothing in docs about it, something started
replicating and I simply assumed it does initial resync. I'll try it
this evening. :)
Since you use replication - are sieve sc
On 2020-04-21 16:00, Michael Menge wrote:
Hi,
Quoting Olaf Frączyk :
I managed to get strace on both sides, however it doesn't make me
wiser - there is nothing obvious for me.
Additionally I see that replication works more or less for new
messages, but older are not processed.
I
Hi,
Quoting Olaf Frączyk :
I managed to get strace on both sides, however it doesn't make me
wiser - there is nothing obvious for me.
Additionally I see that replication works more or less for new
messages, but older are not processed.
I have several subfolders in my mailbox, so
I managed to get strace on both sides, however it doesn't make me wiser
- there is nothing obvious for me.
Additionally I see that replication works more or less for new messages,
but older are not processed.
I have several subfolders in my mailbox, some of them unreplicated. If I
c
Quoting Olaf Frączyk :
Thank you for the telemetry hint :)
I don't use the syncserver - the replication is done via IMAP port
on the replica side. I have no idea how to have strace spawned by
cyrus master process. When I attach later to imapd using strace -p
I'm afraid some in
Thank you for the telemetry hint :)
I don't use the syncserver; the replication is done via the IMAP port on
the replica side. I have no idea how to have strace spawned by the cyrus
master process. When I attach to imapd later using strace -p, I'm afraid
some info already will be l
oting Olaf Frączyk :
Hi,
I upgraded to 3.0.13 but it didn't help.
This time it copied about 18GB
in the logs I still see:
1 - inefficient replication
2 - IOERROR: zero length response to MAILBOXES (idle for too long)
IOERROR: zero length response to RESTART (idle for too long)
Error
from START section to SERVICES,
it seems that it is not automatically restarted
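If the goal is a rolling sync_client that the master restarts automatically when it dies, one option on Cyrus 3.0 and later is the DAEMON section of cyrus.conf rather than START or SERVICES. A sketch (the entry name is arbitrary):

```
DAEMON {
    # entries here are restarted by master when they exit (Cyrus 3.0+),
    # unlike START entries, which run once at startup and are never respawned
    syncclient cmd="sync_client -r"
}
```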
On 2020-04-21 08:47, Michael Menge wrote:
Hi Olaf
Quoting Olaf Frączyk :
Hi,
I upgraded to 3.0.13 but it didn't help.
This time it copied about 18GB
in the logs I still see:
1 - inefficient replication
2
SERVICES, it
seems that it is not automatically restarted
On 2020-04-21 08:47, Michael Menge wrote:
Hi Olaf
Quoting Olaf Frączyk :
Hi,
I upgraded to 3.0.13 but it didn't help.
This time it copied about 18GB
in the logs I still see:
1 - inefficient replication
2 - IOERROR: zero l
B
in the logs I still see:
1 - inefficient replication
2 - IOERROR: zero length response to MAILBOXES (idle for too long)
IOERROR: zero length response to RESTART (idle for too long)
Error in do_sync(): bailing out! Bad protocol
But I have no idea what can I do next and why it fails
Apr 21 02:
Hi Olaf
Quoting Olaf Frączyk :
Hi,
I upgraded to 3.0.13 but it didn't help.
This time it copied about 18GB
in the logs I still see:
1 - inefficient replication
2 - IOERROR: zero length response to MAILBOXES (idle for too long)
IOERROR: zero length response to RESTART (idle for too
Hi,
I upgraded to 3.0.13 but it didn't help.
This time it copied about 18GB
in the logs I still see:
1 - inefficient replication
2 - IOERROR: zero length response to MAILBOXES (idle for too long)
IOERROR: zero length response to RESTART (idle for too long)
Error in do_sync(): bailin
Hi,
I'm running 3.0.5.
I want to migrate to a new machine. I set up cyrus-imapd 3.0.13.
The replication started but it didn't transfer all mails.
The store isn't big, 44 GB; about 24 GB was transferred.
In the logs I see:
Apr 20 14:54:03 ifs sync_client[24239]: couldn
On Sun, Apr 5, 2020, at 00:45, Olaf Frączyk wrote:
> Hello,
>
> 1. Is master-master replication currently possible (maybe in 3.2)? Is it OK
> to sync them two-way?
No, not really. It'll mostly be fine, but it doesn't (yet) handle folder
create/rename/delete safely.
> If
Hello,
1. Is master-master replication currently possible (maybe in 3.2)? Is it OK
to sync them two-way?
If yes - how to set up such config?
2. If master-master is impossible, is there any guide on how to set up
failover from master to slave and possibly back? If split-brain happens
- is there an
On Wed, Nov 20, 2019, at 4:41 PM, Deborah Pickett wrote:
> > I'm curious how these are working for you, or what sort of configuration
> > and workflows leads to having #calendars and #addressbooks as top-level
> > shared mailboxes? I've only very recently started learning how our DAV bits
> > work
> I'm curious how these are working for you, or what sort of configuration
and workflows leads to having #calendars and #addressbooks as top-level
shared mailboxes? I've only very recently started learning how our DAV bits
work (they have previously been black-boxes for me), and so far have only
s
On Wed, Nov 20, 2019, at 11:06 AM, Deborah Pickett wrote:
> On 2019-11-20 10:03, ellie timoney wrote:
>>> foo also includes "#calendars" and "#addressbooks" on my server so there
are weird characters to deal with.
>>>
>> Now that's an interesting detail to consider.
>>
> I should restate my ori
On 2019-11-20 10:03, ellie timoney wrote:
foo also includes "#calendars" and "#addressbooks" on my server so there
are weird characters to deal with.
Now that's an interesting detail to consider.
I should restate my original message because I'm being fast and loose
with the meaning of "contain
On Tue, Nov 19, 2019, at 9:38 AM, Deborah Pickett wrote:
> > Food for thought. Maybe instead of having one "%SHARED" backup, having one
> > "%SHARED.foo" backup per top-level shared folder would be a better
> > implementation? I haven't seen shared folders used much in practice, so
> > it's in
e a mailbox from a partition (a rename to a
different partition) to another one… we usually do:
- stop replication between master/slave (as a safety measure, to have a very
last "fall back" if the rename goes wrong). You know, promoting the slave to
master would have the mailbox of the failed renaming
Food for thought. Maybe instead of having one "%SHARED" backup, having one
"%SHARED.foo" backup per top-level shared folder would be a better implementation? I
haven't seen shared folders used much in practice, so it's interesting to hear about it.
Looking at your own data, if you had one "%S
> Today's Topics:
>
>1. Cyrus doesn't preserve hard-links on replication
> (Adrien Remillieux)
replication!
On Sun, 17 Nov 2019 at 18:00, wrote:
> Related: I had to apply the patch described in
> (https://www.mail-archive.com/info-cyrus@lists.andrew.cmu.edu/msg47320.html),
> "backupd IOERROR reading backup files larger than 2GB", because during
> initial population of my backup, chunks tended to by multiple GB in size
> (my %SHARED user ba
Hello,
I set up replication between two cyrus servers (master runs 2.5.10 and
slave 3.0.8) with plans to decommission the old server once everything is
working. I noticed that the mail spool takes 950GB instead of ~300GB on the
old server. I suspected the hardlinks for message deduplication
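One quick way to gauge how much hardlink deduplication is in play is to count spool files with a link count above 1. A sketch demonstrated on a scratch directory — point SPOOL at the real spool (e.g. /var/spool/cyrus, path may differ on your system) for an actual measurement:

```shell
# Demo on a throwaway directory; substitute your real spool path to
# measure how many message files share storage via hardlinks.
SPOOL=$(mktemp -d)
echo msg > "$SPOOL/1."
ln "$SPOOL/1." "$SPOOL/2."              # simulate a deduplicated delivery
linked=$(find "$SPOOL" -type f -links +1 | wc -l)
echo "files sharing storage via hardlinks: $linked"
rm -r "$SPOOL"
```

Comparing this count on old and new servers shows how much of the size difference is lost deduplication rather than extra data.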
Further progress report: with small chunks, compaction takes about 15
times longer. It's almost as if there is an O(n^2) complexity
somewhere, looking at the rate that the disk file grows. (Running perf
on a compaction suggests that 90% of the time ctl_backups is doing
compression, decompress
On 2019-11-11 11:10, ellie timoney wrote:
This setting might be helpful:
Thanks, I saw that setting but didn't really think through how it would
help me. I'll experiment with it and report back.
That would be great, thanks!
Progress report: I started with very large chunks (minimum 64 MB,
m
artition, reconstruct, and it comes back as a new message (unread, no flags,
etc)?
You would need to be careful of the window between delivery of a message,
replication to the replica, and deletion of the message (and replication of the
deletion), to ensure you get a backup of the state where th
On 2019-11-08 09:13, ellie timoney wrote:
I'm not sure if I'm just not understanding, but if the chunk offsets were to
remain the same, then there's no benefit to compaction? A (say) 2gb file full
of zeroes between small chunks is still the same 2gb on disk as one that's
never been compacted a
ff-site server over a much
> slower link. The off-site server doesn't speak the Cyrus sync
> protocol. What it does do well is block-level backups: if only a part
> of a file has changed, only that part needs to be transferred over the
> slow link. [I haven't decided wh
ly a part
of a file has changed, only that part needs to be transferred over the
slow link. [I haven't decided whether my technology will be the rsync
--checksum protocol, or Synology NAS XFS replication, or Microsoft
Server VFS snapshots. They all do block-level backups well.]
Since Cyru
Thanks! I'll look into it.
On Mon, 16 Sep 2019 at 18:01, wrote:
> Date: Sun, 15 Sep 2019 13:04:31 -0600
> From: Scott Lambert
> Subject: Re: Possible issue when upgrading to cyrus 3.0.8 using
> replication ?
If you can create the mailboxes on the new server, without replication,
perhaps it would be safer/less downtime to use IMAPsync to move the data
to the new server. It will be slow, but I don't mind slow while the
source server is still online and users are happy.
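A sketch of the IMAPsync route suggested above. Hostnames, the user, and the password-file paths are placeholders; imapsync has many more options worth reading about before a real migration:

```shell
# Dry run of one account copy; drop the quoting/echo to execute, and
# loop over your user list as needed. --syncinternaldates keeps the
# original message dates on the destination.
cmd="imapsync --host1 old.example.com --user1 alice --passfile1 /etc/imapsync/p1 \
--host2 new.example.com --user2 alice --passfile2 /etc/imapsync/p2 --syncinternaldates"
echo "$cmd"
```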
On 9/14/19 5:12 PM, A
Thank you for your answer !
Considering what you said I'll try to enable replication on the new server.
If it doesn't work I'll just schedule some downtime, copy the
/var/spool/cyrus folder to the new server, install cyrus 3.0.11 from the
backports and then upgrade the mailboxes i
Hi Adrien,
The replication upgrade path should be okay. In-place upgrades (that would use
the affected reconstruct to bring mailboxes up to the same version as the
server) would get bitten. Whereas if you replicate to a newer version server,
the mailboxes on the replica will be created at the
Hello,
I have a server that I can't update running cyrus 2.5.10 which contain
mailboxes that have existed from 2.3 and earlier (around 300Gb total). My
plan is to update by enabling replication with a new server running Debian
Buster (so cyrus 3.0.8) and then shutting down the old server.
> On 10 Jul 2019, at 10:03, Egoitz Aurrekoetxea wrote:
>
> The subject of this email is not properly set… it should be: Issues in
> replication with
It would be better to just talk to the daemons, IMHO… That should work… but
sometimes other things could be implied… so the clearest way would be to
load all the source scripts into Cyrus as a normal client, for instance with
an expect script… using sieveshell… and not using files d
The subject of this email is not properly set… it should be: Issues in
replication with folder subscription and Sieve.
As I think I discovered something, I am reopening a new thread with the title properly
Egoitz Aurrekoetxea
Dpto. de sistemas
944 209 470
Parque Tecnológico. Edificio 103
48170
On 09/07/2019 at 22:49:01+0200, Egoitz Aurrekoetxea wrote:
> By the way, for your case I would recommend doing a script that does a get
> from
> dovecot and a put to Cyrus instead of copying Sieve files directly… it's a
> much
> cleaner way…
Yes, that is what I did; before I tried the sync I eve
On 09/07/2019 at 22:44:19+0200, Egoitz Aurrekoetxea wrote:
Hi,
>
> If instead of -A you used -u for each of your users, did it work? Or did it
If I remember correctly (but I'm not sure), this is how I found out the
problem:
first try -A and notice it crashes,
then try -u first_user and notice it works
By the way, for your case I would recommend doing a script that does a get from
dovecot and a put to Cyrus instead of copying Sieve files directly… it's a much
cleaner way…
Cheers!
Egoitz Aurrekoetxea
Hi Albert,
If instead of -A you used -u for each of your users, did it work? Or did it
crash on the same user as with -A? Which Cyrus version were you running?
Cheers,
Egoitz Aurrekoetxea
On 09/07/2019 at 14:10:49+0200, Egoitz Aurrekoetxea wrote:
> Good morning,
>
>
> After we upgraded to Cyrus 3.0.8, we saw that some users in the replicas
> didn't
> have some folders (or all of them) subscribed the same way they had in the
> previous Cyrus 2.3 environment. The same happened for some users with Sieve s
Could some of this perhaps have something to do with having intermediate folders
subscribed/unsubscribed in the middle of the tree? And could that cause something
that perhaps does not happen when the mailbox is being accessed by the
user instead of being replicated (when the change is applied b
Good morning,
After we upgraded to Cyrus 3.0.8, we saw that some users in the replicas didn't
have some folders (or all of them) subscribed the same way they had in the
previous Cyrus 2.3 environment. The same happened for some users with Sieve
scripts. The content itself seemed to be perfectly copied. It was lik
663]: CRC failure on sync for
, trying full update
Feb 14 15:11:20 mx7c sync_client[62663]: SYNCNOTICE: highestmodseq
higher on replica , updating 8758 => 8778
Perhaps all these situations are properly handled by the replication itself,
and I should just
Hi mates,
I have been taking a look at the code and trying to debug this...
There seems to be no replication, apparently, for either
squatter or cyr_expire. For instance, ipurge seems to be replicated
properly, but these two commands are not. I could launch
them in the
Hi!
Previously (in 2.3 and older versions), cyr_expire and ipurge actions,
for instance, were not replicated to the slave, so you needed to launch
them on both the master and the slave. My question is: are they now
replicated as mailbox replication commands? What about commands like
squatter -F, for
On 16/01/2019 at 17:10:30+0100, Egoitz Aurrekoetxea wrote:
> Good afternoon,
>
>
> I would try doing it user by user (with -u). This way you would have all
> synced
> except the problematic mailbox.
Hi, thanks for the help.
I got some progress in my problem :
> [root@imap-mirror-p /bal
On 16-01-2019 16:15, Albert Shih wrote:
> Hi everyone.
>
> I've got some big issue with replication.
>
> I've
>
> master --- replica ---> slave_1 --- replica ---> slave_2
Hi everyone.
I've got some big issue with replication.
I've
master --- replica ---> slave_1 --- replica ---> slave_2
The replication between master and slave_1 works nicely.
Between slave_1 and slave_2 I've got some issues (log too big after a network
failure and wo
Thanks a lot Bron!!! :) :)
On 08-01-2019 12:02, Bron Gondwana
Yep, that's totally safe. Even doing the same user twice at the same time
should be safe, though it may do extra work.
Bron.
On Tue, Jan 8, 2019, at 05:05, Egoitz Aurrekoetxea wrote:
> Good afternoon,
>
> I know it seems a pretty stupid question, but some time ago you could not have
> a Cyru
Good afternoon,
I know it seems a pretty stupid question, but some time ago you could not
have a Cyrus server acting, for instance, as both a master and a slave... it
was not supported... it worked... but was not supported... so having multiple
sync_client instances... could perhaps damage something (although I
Good afternoon,
Is it possible to launch several instances of
"/usr/local/cyrus/bin/sync_client -S DEST-HOST -v -u EMAIL" in
parallel? Doing it just one mailbox at a time takes ages. Parallelizing
would help me a lot, provided it caused no disk bottleneck
issues.
I think it should b
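Bron's answer elsewhere in the thread is that parallel per-user runs are safe. A sketch of fanning out with xargs — the replica hostname and user list are placeholders:

```shell
REPLICA=replica.example.com             # placeholder replica hostname
# Dry run: 'echo' prints each generated command (sorted, since parallel
# output order varies). Remove the 'echo' after 'xargs -n1 -P4' to run
# four sync_client instances at a time as the cyrus user.
cmds=$(printf '%s\n' alice bob carol dave |
    xargs -n1 -P4 echo sync_client -S "$REPLICA" -u | sort)
echo "$cmds"
```

`-P4` caps concurrency at four; tune it to what your disks can sustain.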
On 11/15/18 2:16 AM, Zorg wrote:
I've one Cyrus IMAP server and I want to create a replicated one.
I have read the documentation, but nothing explains how to start the
first replication.
If my slave server is empty, how can I synchronise them the first time?
Once you've got replication confi
Hello
I've one Cyrus IMAP server and I want to create a replicated one.
I have read the documentation, but nothing explains how to start the
first replication.
If my slave server is empty, how can I synchronise them the first time?
Thanks
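The usual pattern for seeding an empty replica is a one-shot full sync followed by rolling mode. A sketch with a placeholder hostname (the `-S` target would normally come from the sync_host setting in imapd.conf):

```shell
# Dry run: the first command seeds the empty replica with every user,
# the second is the rolling-mode invocation that cyrus.conf would then
# keep running. Drop the quoting and run as the cyrus user to execute.
seed="sync_client -S replica.example.com -v -A"
roll="sync_client -r"
printf '%s\n' "$seed" "$roll"
```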
Cyrus Home Page: http://www.cyrusimap.org/
List
ores each on
>two locations. We have a total of ~44000 accounts, ~457000 Mailboxes,
>and 2x6.5 TB Mails
>
>Each server is running 3-4 instances. One frontend, two backend/replic
>and on one of the servers the cyrus mupdate master. Each Server on one
>location is paired with one serve
total of ~44000 accounts, ~457000 Mailboxes,
and 2x6.5 TB Mails
Each server is running 3-4 instances. One frontend, two backend/replic
and on one of the servers the cyrus mupdate master. Each Server on one
location is paired with one server on the other location for replication
so in normal operat
Best regards.
>Thursday, 13 September 2018, 13:22 +05:00 from Michael Menge
>:
>
>Hi,
>
>This setup is NOT SUPPORTED and WILL BREAK if the replication process
>is triggered
>from the wrong server (user is active on both servers, user switched
>from one server
>to the o
Hi,
This setup is NOT SUPPORTED and WILL BREAK if the replication process
is triggered
from the wrong server (user is active on both servers, user switched
from one server
to the other while the sync-log file is still processed, after split
brain) and
some mailboxes have been subscribed
Sorry! Previous message was sent by mistake.
For example, I can configure both servers as follows.
Server A.
-
/etc/cyrus.conf
START {
...
syncclient cmd="sync_client -r"
...
}
SERVICES {
...
syncserver cmd="sync_server" listen="csync"
...
}
/etc/imapd.conf
...
sync_h
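For contrast with the two-way setup sketched above (which the thread warns is unsupported), a minimal one-way master → replica configuration might look like the following. Hostname and credentials are placeholders:

```
/etc/imapd.conf on the master:
    sync_host: replica.example.com
    sync_authname: syncuser
    sync_password: secret
    sync_log: 1

/etc/cyrus.conf on the master:
    START {
        syncclient cmd="sync_client -r"
    }

/etc/cyrus.conf on the replica:
    SERVICES {
        syncserver cmd="sync_server" listen="csync"
    }
```

Only one side runs sync_client; the other only listens. That asymmetry is what keeps the setup within what replication supports.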
For example, on server A
--
Evgeniy Kononov
wait, but if I create a folder on the master, it perfectly syncs to the
replica. Also when I delete the folder on the master, it is also deleted on the
replica. It means that information about subscriptions to folders is
transmitted when synchronizing. But it works only if the client is the ser
Yes! This is on our roadmap, and I really hope to land it before we release
3.2.
The subscriptions are a particularly tricky part of it, because there's
currently no change information in the subscriptions database, but I'll make
sure that gets added so we can tell if it's a subscription add
pport master-master replication. Because of
CONDSTORE (https://tools.ietf.org/html/rfc4551) Cyrus is able
to handle messages on a master-master setup, but the information
about folder operations is not tracked, and so Cyrus is unable to
distinguish if a folder was subscribed on one server, or if a f
Hello!
I have two servers with cyrus-imapd
cyrus-imapd-2.5.8-13.3.el7.centos.kolab_16.x86_64
One server as master and second as replica.
All worked fine when users logged in on the master server, but when I temporarily
moved users to the replica I found some trouble.
Message synchronisation from replica to mas
* 27/06/2018, Bron Gondwana wrote :
>Yep, that will be enough. The only thing it might not catch is if
>there are users on the replica which aren't present on the master (for
>whatever reason)... in that case, they will remain on the replica
>still.
OK, I'll check, but I don't think I
> I have to switch a master/replica in rolling replication.
>
> How can I be sure the replication is totally done? Is it
> enough to run
>
> /usr/lib/cyrus/sync_client -v -A
>
> from the master? And then, which operations do I have to do?
>
> thanks in advance
hi all,
I have to switch a master/replica in rolling replication.
How can I be sure the replication is totally done? Is it
enough to run
/usr/lib/cyrus/sync_client -v -A
from the master? And then, which operations do I have to do?
thanks in advance
--
Never try to teach a pig to sing.
It w
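One way to make "totally done" checkable is to quiesce the master first and then test the exit status of the full pass. A sketch — the ordering and the service-stopping details are assumptions about a typical deployment, not a documented procedure:

```shell
# After stopping imap/lmtp on the master and draining the rolling sync
# log (then stopping the rolling sync_client), run the final full pass:
final="/usr/lib/cyrus/sync_client -v -A"
echo "final pass: $final"
echo "a zero exit status suggests the replica matches; then repoint clients"
```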
Hi Albert,
The main logical difference between ordinary replication and the experimental
backup system in Cyrus 3.0 is that in a replicated system, the replica is a
copy of the account's current state (as of the last replication). The backup
system is a historical record, not just a cu
Hi everyone,
I'm not sure I really understand the benefit of backup (Cyrus ≥ 3.x) vs
replication.
Is the main goal to save disk space with compression? Fewer inodes (with
large files)?
I believe adding the backup feature to cyrus-imapd was/still is a lot of
work. So what
anyplace in the 2.5 code that replicates folder annotations.
>
> Annotation replication does work in rolling replication mode.
>
> Or have I busted it with other mods I make?
>
> Patch attached that fixes it for me.
>
> John Capo
>
Replicating annotations when sync_client -u is used to move mailboxes to a
different
server does not work in 2.4.20 and probably not in 2.5.X either. At least I
can't find
any place in the 2.5 code that replicates folder annotations.
Annotation replication does work in rolling replication
Dear Members,
I am already running Cyrus-IMAP for storing mailboxes with quota and sieve
filter features on RHEL 7.
I have the following requirement.
1. The same mailboxes also should be accessible from another site and that
site also should run Cyrus-IMAP ( a kind of replication).
2. The
On 07/26/2017 04:54 PM, Michael Sofka wrote:
A while back there was some discussion of supporting Master-Master
replication in Cyrus. I'm busy updating from 2.4.17 to 3.0.2. What is
the state of Master-Master, as opposed to Master-Replica replications?
My current configuration is a M
A while back there was some discussion of supporting Master-Master
replication in Cyrus. I'm busy updating from 2.4.17 to 3.0.2. What is
the state of Master-Master, as opposed to Master-Replica replications?
My current configuration is a Murder cluster with three front-end
servers, two
ou'll get a new user with a different uniqueid. You
shouldn't be creating users on replicas.
Bron.
On Tue, 23 Aug 2016, at 10:38, Tod A. Sandman via Info-cyrus wrote:
> I resorted to deleting the mailbox on the replication slave and trying to
> start from scratch, but I get now
I resorted to deleting the mailbox on the replication slave and trying to start
from scratch, but I get nowhere.
On slave:
cyrus@cyrus2c:~> cyradm --user mailadmin `hostname`
cyrus2c.mail.rice.edu> dm user/lamemm7
cyrus2c.mail.rice.edu> cm --partition cyrus2g use
What do you see in syslog? (both for the reconstruct and later when the
sync_client runs)
On Tue, 23 Aug 2016, at 03:49, Tod A. Sandman via Info-cyrus wrote:
> I'm using rolling replication with cyrus-imapd-2.5.9. sync_client died and I
> am not able to get replication working a
I'm using rolling replication with cyrus-imapd-2.5.9. sync_client died and I
am not able to get replication working again. I've narrowed it down to one
mailbox, user.lamemm7, and I've successfully reconstructed the mailbox on both
replication partners with various o
usimap.org/imap/admin/sop/replication.html
>
> But how, in this case, do I restore replication for shared folders? Run sync
> for each folder separately?
>
> On Wed, Mar 2, 2016, 6:01 PM Konstantin Udalov via Info-cyrus <cy...@lists.andrew.cmu.edu> wrote:
>> Hello.
>>
>>
I