Ldirector and IPVS can "stick" the same IP to the same server, so the ocfs2 cache
is still good.
We are trying to separate the DLM network to see if there is any performance difference!
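For anyone trying the same experiment: the o2cb stack (and with it the DLM) talks over
whatever addresses are declared in /etc/ocfs2/cluster.conf, so moving DLM traffic to a
dedicated link is mostly a matter of pointing the node entries at a private subnet and
restarting o2cb on every node. A minimal sketch, with invented node names and addresses:

    cluster:
        node_count = 2
        name = mailcluster

    node:
        ip_port = 7777
        ip_address = 10.255.0.11
        number = 0
        name = worker1
        cluster = mailcluster

    node:
        ip_port = 7777
        ip_address = 10.255.0.12
        number = 1
        name = worker2
        cluster = mailcluster

Here 10.255.0.0/24 stands in for the dedicated interconnect; the file has to be
identical on all nodes, and the o2cb service needs a restart for the change to stick.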
[]'sf.rique
On Wed, Jan 26, 2011 at 6:42 PM, Stan Hoeppner wrote:
> Luben Karavelov put forth on 1/26/2011 1:21 PM:
>
> > Finally we s
Luben Karavelov put forth on 1/26/2011 1:21 PM:
> Finally we scrapped the ocfs2 setup and moved to a less advanced setup:
> We created distinct volumes for every worker on the SAN and formatted them with
> XFS. The volumes got mounted on different mountpoints on the workers. We set up
> Pacemaker as clus
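A rough idea of what such a per-worker mount can look like as a Pacemaker resource,
with hypothetical device, mountpoint and node names (crm shell syntax):

    primitive p_fs_worker1 ocf:heartbeat:Filesystem \
        params device="/dev/mapper/san_worker1" directory="/srv/mail/worker1" fstype="xfs" \
        op monitor interval="20s" timeout="40s"
    location l_fs_worker1 p_fs_worker1 100: worker1

One such primitive per worker keeps each XFS volume mounted on exactly one node,
which is the whole point of dropping the cluster filesystem.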
On Thu, 13 Jan 2011 10:33:34 -0200, Henrique Fernandes
wrote:
I use ocfs2 with 3 dovecots, one only for mailman.
We have problems with IO. We have about 4k active users.
We are now testing more ocfs2 clusters, because one of your theories
is that
if all mail resides in only one ocfs2 cluster,
On Sun, Jan 23, 2011 at 09:54:56AM +0000, John Moorhouse wrote:
>
> > http://www.sogo.nu/
> > http://www.sogo.nu/english/tour/online_demo.html
> >
>
> Have a look at roundcube
>
> http://roundcube.net/
>
Yes, roundcube is looking good, but AFAIK it's missing an integrated
calendar.
Thanks to all!
I have already been considering sogo. Gonna test it some day!
[]'sf.rique
On Sun, Jan 23, 2011 at 9:52 AM, Patrick Ben Koetter
wrote:
> * Jan-Frode Myklebust :
> > On Sun, Jan 23, 2011 at 02:01:49AM -0200, Henrique Fernandes wrote:
> > >
> > > It is better, because now we have a de
* Jan-Frode Myklebust :
> On Sun, Jan 23, 2011 at 02:01:49AM -0200, Henrique Fernandes wrote:
> >
> > It is better, because now we have a decent webmail (horde with dimp
> > enabled; before it was just imp), and most people used to have pop configured
> > because of the quota of 200mb, and little use
On 23 Jan 2011, at 08:51, Jan-Frode Myklebust wrote:
> On Sun, Jan 23, 2011 at 02:01:49AM -0200, Henrique Fernandes wrote:
>>
>> It is better, because now we have a decent webmail (horde with dimp
>> enabled; before it was just imp), and most people used to have pop configured
>> because of quo
On Sun, Jan 23, 2011 at 02:01:49AM -0200, Henrique Fernandes wrote:
>
> It is better, because now we have a decent webmail (horde with dimp
> enabled; before it was just imp), and most people used to have pop configured
> because of the quota of 200mb, and few users used webmail. Now many more peopl
[]'sf.rique
On Sun, Jan 23, 2011 at 1:20 AM, Stan Hoeppner wrote:
> Henrique Fernandes put forth on 1/22/2011 2:59 PM:
>
> > About changing the EMC to raid 10, we can not do it because other people are
> > using it. So we can not change anything on the storage. Those other luns I talk
> > about, they are m
Henrique Fernandes put forth on 1/22/2011 2:59 PM:
> About changing the EMC to raid 10, we can not do it because other people are
> using it. So we can not change anything on the storage. Those other luns I talk
> about, they are meant to be for WEB, but as we are testing we are allowed to
> use it.
You need
[]'sf.rique
On Fri, Jan 21, 2011 at 8:06 PM, Stan Hoeppner wrote:
> Henrique Fernandes put forth on 1/21/2011 12:53 PM:
>
> > We think it is the ocfs2 and the size of the partition, because: we can
> > write a big file at an acceptable speed. But if we try to delete or create
> or
> > read lots o
So I read all the emails and stuff.
But I am sorry to say, many of the things you said we are not able to do.
About changing the EMC to raid 10, we can not do it because other people are
using it. So we can not change anything on the storage. Those other luns I talk
about, they are meant to be for WEB but as we
Henrique Fernandes put forth on 1/21/2011 12:53 PM:
> We think it is the ocfs2 and the size of the partition, because: we can
> write a big file at an acceptable speed. But if we try to delete or create or
> read lots of small files the speed is horrible. We think it is a DLM problem
> in propagate
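If you want to put a number on the small-file problem before and after any change,
a crude timing loop on the mounted filesystem is usually enough; the path below is
hypothetical, and the same loop run on a local disk gives the baseline:

    # create, list and delete 10000 small files on the ocfs2 mount (test path is made up)
    mkdir -p /A/benchtest && cd /A/benchtest
    time sh -c 'for i in $(seq 1 10000); do head -c 4096 /dev/zero > msg.$i; done'
    time ls -l > /dev/null
    time rm -f msg.*

If create/delete times are an order of magnitude worse on ocfs2 than on a local
filesystem while a big sequential write is comparable, that points at lock and
metadata traffic rather than the array itself.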
Henrique Fernandes put forth on 1/21/2011 12:53 PM:
> But you asked before about hardware.
I asked about the hardware.
> It is an EMC CX4, linked with ONE 1GbE to ONE dlink (I am not sure but I
> guess it is full Gbit) and from this dlink it connects to 4 XEN machines at
> 1Gbit, and in the virtua
[]'sf.rique
On Fri, Jan 21, 2011 at 4:31 PM, Ed W wrote:
> On 21/01/2011 17:50, Henrique Fernandes wrote:
>
>> I don't know if I got your question right, but before, while using mbox,
>> we had fewer users and much less quota; it was only 200MB, now it is about 1GB.
>> And before we did not have a
On 21/01/2011 17:50, Henrique Fernandes wrote:
I don't know if I got your question right, but before, while using
mbox, we had fewer users and much less quota; it was only 200MB, now it is
about 1GB. And before we did not have a good backup system, and had many
problems.
We pretty much changed to maildi
Henrique Fernandes put forth on 1/21/2011 9:50 AM:
> Let me try to explain better.
>
> We have 3 virtual machines with this set up:
>
> /dev/sda1 3.6T 2.4T 1.3T 66% /A
> /dev/sdb1 1.0T 36G 989G 4% /B
> /dev/sdc1 1.0T 3.3G 1021G 1% /C
>
> /dev/sda1 on
[]'sf.rique
On Fri, Jan 21, 2011 at 3:29 PM, Ed W wrote:
> Hi
>
>
> I have considered the idea, but we just changed from mbox to maildir about
>> 4 months ago, and we had many problems with some accounts. We were using
>> dsync to migrate.
>>
>
> Out of curiosity - how did the backup times cha
Hi
I have considered the idea, but we just changed from mbox to maildir
about 4 months ago, and we had many problems with some accounts. We
were using dsync to migrate.
Out of curiosity - how did the backup times change between mbox vs
maildir? I would suggest that this gives you a baseline
I have considered the idea, but we just changed from mbox to maildir about 4
months ago, and we had many problems with some accounts. We were using dsync
to migrate.
But once we choose mdbox we are stuck with dovecot, or we are going to have to
migrate all users again if we choose to use another imap server.
On 20/01/2011 16:20, Henrique Fernandes wrote:
Same question!
I have about 1TB used and it takes 22 hrs to back up the maildirs!
I have problems with ocfs2 in finding the files!
Just an idea, but have you evaluated performance of mdbox (new dovecot
format) on your storage devices? It appears to
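For anyone wanting to try that comparison, a sketch of how a single test account
could be switched to mdbox with Dovecot 2.0's dsync; the account name and path are
made up, so check the wiki for the exact options of your version:

    # dovecot.conf: storage location for the test, mdbox instead of maildir
    mail_location = mdbox:~/mdbox

    # one-off conversion of one user from the currently configured location
    dsync -u testuser@example.com mirror mdbox:~/mdbox

mdbox packs many messages per file, so the number of files the cluster filesystem
has to create, stat and delete drops dramatically; whether that helps enough is
exactly what a test like this would show.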
[]'sf.rique
On Fri, Jan 21, 2011 at 5:59 AM, Stan Hoeppner wrote:
> Henrique Fernandes put forth on 1/21/2011 1:38 AM:
>
> > We are out of ideas to make it faster. We only came up with making more ocfs2
> > clusters with smaller disks. With this we are getting better performance.
> We
> > have now 2 c
Jan-Frode Myklebust put forth on 1/21/2011 5:49 AM:
> On Thu, Jan 20, 2011 at 10:14:42PM -0600, Stan Hoeppner wrote:
>>
>> Have you considered SGI CXFS? It's the fastest cluster FS on the planet by
>> an
>> order of magnitude. It uses dedicated metadata servers instead of a DLM,
>> which
>> is
On Thu, Jan 20, 2011 at 10:14:42PM -0600, Stan Hoeppner wrote:
>
> Have you considered SGI CXFS? It's the fastest cluster FS on the planet by an
> order of magnitude. It uses dedicated metadata servers instead of a DLM,
> which
> is why it's so fast. Directory traversal operations would be ord
Henrique Fernandes put forth on 1/21/2011 1:38 AM:
> We are out of ideas to make it faster. We only came up with making more ocfs2
> clusters with smaller disks. With this we are getting better performance. We
> have now 2 clusters, one with 4 TB, the other with 1 TB, and are migrating some of
> the emails from the 4TB
[]'sf.rique
On Fri, Jan 21, 2011 at 2:14 AM, Stan Hoeppner wrote:
> Henrique Fernandes put forth on 1/20/2011 11:55 AM:
>
> > Even though the storage system is not SUN, those ocfs2 servers are connected via
> iSCSI
> > to the storage, with ocfs2 in virtual machines
>
> Storage is an EMC CX4 (don't have
Henrique Fernandes put forth on 1/20/2011 11:55 AM:
> Even though the storage system is not SUN, those ocfs2 servers are connected via iSCSI
> to the storage, with ocfs2 in virtual machines
Please provide a web link to the iSCSI storage array product you are using, and
tell us how many 1GbE ports you are l
Stan!
Sorry I did not explain it well!
FULL
Spool to disk: ~24h TransferRate: 6MB/s
Despool to tape: ~7h TransferRate: 16MB/s
INCREMENTAL
Spool to disk: ~11h TransferRate: 300KB/s
Despool to tape: ~12m TransferRate: 16MB/s
When doing a backup, we turn on another machine in the ocf
--- On Thu, 20/1/11, Henrique Fernandes wrote:
> From: Henrique Fernandes
> Subject: Re: [Dovecot] Best Cluster Storage
> To: "alex handle"
> Cc: dovecot@dovecot.org
> Date: Thursday, 20 January, 2011, 18:20
> []'sf.rique
>
>
> On Thu, Jan
On Thu, Jan 20, 2011 at 5:20 PM, Henrique Fernandes wrote:
>> > Not all, if this counts as large:
>> >
>> > Filesystem Size Used Avail Use% Mounted on
>> > /dev/gpfsmail 9.9T 8.7T 1.2T 88% /maildirs
>> >
>> > Filesystem Inodes IUsed IFree IU
Henrique Fernandes put forth on 1/20/2011 10:20 AM:
> I have about 1TB used and it takes 22 hrs to back up maildirs!
To tape library or D2D? Are you doing differential backup or full backup each
time?
4/8Gb fiber channel or 1 GbE iSCSI based SAN array?
--
Stan
[]'sf.rique
On Thu, Jan 20, 2011 at 12:10 PM, alex handle wrote:
> On Mon, Jan 17, 2011 at 7:32 AM, Jan-Frode Myklebust
> wrote:
> > On Fri, Jan 14, 2011 at 05:16:50PM -0800, Brad Davidson wrote:
> >>
> >> Don't give up on the simplest solution too easily - lots of us run NFS
> >> with quite l
On Mon, Jan 17, 2011 at 7:32 AM, Jan-Frode Myklebust wrote:
> On Fri, Jan 14, 2011 at 05:16:50PM -0800, Brad Davidson wrote:
>>
>> Don't give up on the simplest solution too easily - lots of us run NFS
>> with quite large installs. As a matter of fact, I think all of the large
>> installs run NFS;
On Fri, Jan 14, 2011 at 05:16:50PM -0800, Brad Davidson wrote:
>
> Don't give up on the simplest solution too easily - lots of us run NFS
> with quite large installs. As a matter of fact, I think all of the large
> installs run NFS; hence the need for the Director in 2.0.
Not all, if this counts
Quoting Jonathan Tripathy :
Generally, I would give an LVM LV to each of my Xen guests, which
according to the DRBD site, is ok:
http://www.drbd.org/users-guide/s-lvm-lv-as-drbd-backing-dev.html
I do not use img files with loopback devices
Is this a bit better now?
There are implications
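For reference, the DRBD-on-LV layout being described boils down to a resource stanza
whose backing disk is the LV; everything here (resource name, hostnames, addresses,
ports) is invented for the sketch:

    resource r_vmmail {
        protocol C;
        device    /dev/drbd0;
        disk      /dev/vg_xen/lv_mail;   # the LVM LV used as backing device
        meta-disk internal;

        on xen-node-a {
            address 192.168.10.1:7789;
        }
        on xen-node-b {
            address 192.168.10.2:7789;
        }
    }

The names after "on" have to match uname -n on each host, and the guest then sees
/dev/drbd0 (or a device built on top of it) rather than the raw LV.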
Quoting Stan Hoeppner :
DRBD is for mirroring physical devices over a network. You might be
able to do
DRBD inside a VM guest, but to what end? What sense does it make to do so?
It doesn't really make sense, and it can cause problems... What problems
depends on your VM implementation (Xen
On 15/01/11 01:14, Brad Davidson wrote:
-Original Message-
I'm sorry I don't follow this. It would be appreciated if you could
include a simpler example. The way I see it, a VM disk is just a
small
chunck "LVM LV in my case" of a real disk.
Perhaps if you were to compare and contrast
Jonathan,
> -Original Message-
>
> I really wish NFS didn't have the caching issue, as it's the most
simple
> to set up
Don't give up on the simplest solution too easily - lots of us run NFS
with quite large installs. As a matter of fact, I think all of the large
installs run NFS; hence
On 15/01/11 00:59, Eric Shubert wrote:
On 01/14/2011 03:58 PM, Jonathan Tripathy wrote:
On 14/01/11 19:00, Stan Hoeppner wrote:
Jonathan Tripathy put forth on 1/13/2011 4:17 PM:
Regarding the servers, I was thinking of having a 2 node drbd cluster
(in
active+standby), which would export a s
> -Original Message-
> >>
> > I'm sorry I don't follow this. It would be appreciated if you could
> > include a simpler example. The way I see it, a VM disk is just a
small
> > chunk "LVM LV in my case" of a real disk.
>
> Perhaps if you were to compare and contrast a virtual disk to a ra
On 01/14/2011 03:58 PM, Jonathan Tripathy wrote:
On 14/01/11 19:00, Stan Hoeppner wrote:
Jonathan Tripathy put forth on 1/13/2011 4:17 PM:
Regarding the servers, I was thinking of having a 2 node drbd cluster
(in
active+standby), which would export a single iSCSI LUN. Then, I would
have a 2
n
Jonathan Tripathy put forth on 1/14/2011 4:58 PM:
> I'm sorry I don't follow this. It would be appreciated if you could include a
> simpler example. The way I see it, a VM disk is just a small chunk "LVM LV in
> my case" of a real disk.
We can't teach you everything on a mailing list. You need
On 14/01/11 19:00, Stan Hoeppner wrote:
Jonathan Tripathy put forth on 1/13/2011 4:17 PM:
Regarding the servers, I was thinking of having a 2 node drbd cluster (in
active+standby), which would export a single iSCSI LUN. Then, I would have a 2
node dovecot+postfix cluster (in active-active), wh
On 14/01/11 20:07, Eric Rostetter wrote:
Quoting Patrick Westenberg :
just to get it right:
DRBD for shared storage replication is OK?
Yes, but only if done correctly. ;) There is some concern on Stan's part
(and mine) that you might do it wrong (e.g., in a vm guest rather than
at the vm ho
On 14.01.2011 20:16, Patrick Westenberg wrote:
> Hello,
>
> just to get it right:
> DRBD for shared storage replication is OK?
>
> Patrick
using it already
--
Best Regards
MfG Robert Schetterer
Germany/Munich/Bavaria
Quoting Patrick Westenberg :
Eric Rostetter schrieb:
Quoting Patrick Westenberg :
just to get it right:
DRBD for shared storage replication is OK?
Yes, but only if done correctly. ;) There is some concern on Stan's part
(and mine) that you might do it wrong (e.g., in a vm guest rather than
Eric Rostetter schrieb:
Quoting Patrick Westenberg :
just to get it right:
DRBD for shared storage replication is OK?
Yes, but only if done correctly. ;) There is some concern on Stan's part
(and mine) that you might do it wrong (e.g., in a vm guest rather than
at the vm host, etc).
My stor
Quoting Patrick Westenberg :
just to get it right:
DRBD for shared storage replication is OK?
Yes, but only if done correctly. ;) There is some concern on Stan's part
(and mine) that you might do it wrong (e.g., in a vm guest rather than
at the vm host, etc).
--
Eric Rostetter
The Department
Hello,
just to get it right:
DRBD for shared storage replication is OK?
Patrick
Stan Hoeppner put forth on 1/14/2011 1:00 PM:
> You have a consolidated Xen cluster of two 24 core AMD Magny Cours servers
> each
> with 128GB RAM, an LSI MegaRAID SAS controller with dual SFF8087 ports backed
> by
> 32 SAS drives in external jbod enclosures setup as a single hardware RAID 10.
>
Jonathan Tripathy put forth on 1/13/2011 4:17 PM:
> Regarding the servers, I was thinking of having a 2 node drbd cluster (in
> active+standby), which would export a single iSCSI LUN. Then, I would have a 2
> node dovecot+postfix cluster (in active-active), where each node would mount
> the
> sam
>
> I've actually been reading up on ocfs2 and it looks quite promising. According
> to this presentation:
>
>
> http://www.gpaterno.com/publications/2010/dublin_ossbarcamp_2010_fs_comparison.pdf
>
> ocfs2 seems to work quite well with lots of small files (typical of
> maildir). I'm guessing that sinc
On 14 January 2011 17:06, Ed W wrote:
> Where are all the glusterfs users in this thread... There are at least a
> couple of folks here using such a system? Any comments on how it's working
> out for you?
Not on production systems yet. Still getting bugs fixed.
-Naresh V.
Where are all the glusterfs users in this thread... There are at least
a couple of folks here using such a system? Any comments on how it's
working out for you?
Ed W
On Fri, 2011-01-14 at 03:48 +0000, Jonathan Tripathy wrote:
> ocfs2 seems to work quite well with lots of small files (typical of
> maildir). I'm guessing that since ocfs2 reboots a system automatically,
> it doesn't require any additional fencing?
We have a two-node active-active DRBD+OCFS2 Dove
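A two-node active-active DRBD+OCFS2 setup like that one hinges on DRBD being allowed
to run primary on both nodes before OCFS2 mounts it. Roughly, with an invented
resource name and mountpoint (the disk/address stanzas are the same as in any
single-primary resource and are left out here):

    resource r_mailfs {
        net {
            allow-two-primaries;
        }
        startup {
            become-primary-on both;
        }
    }

    # once /dev/drbd0 is Primary/Primary on both nodes:
    mkfs.ocfs2 -L mailfs /dev/drbd0        # run on one node only
    mount -t ocfs2 /dev/drbd0 /var/vmail   # run on both nodes

Dual-primary without working fencing is exactly the split-brain scenario argued about
elsewhere in this thread, so it only makes sense together with DRBD fencing hooks and
ocfs2's self-reset behaviour.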
On 13.01.2011 23:17, Jonathan Tripathy wrote:
>
> On 13/01/11 21:34, Stan Hoeppner wrote:
>> Jonathan Tripathy put forth on 1/13/2011 7:11 AM:
>>
>>> Would DRBD + GFS2 work better than NFS? While NFS is simple, I don't
>>> mind
>>> experimenting with DRBD and GFS2 if it means fewer problems?
>>
Quoting Jonathan Tripathy :
Linux kernel bonding, mode=4 (IEEE 802.3ad Dynamic link aggregation).
I'm guessing that since you're using a crossover cable, by just
setting up the bond0 interfaces as usual (As per this article
http://www.cyberciti.biz/tips/linux-bond-or-team-multiple-network-
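For what it's worth, on a RHEL/CentOS-style box a two-port bond over crossover cables
is just a bond master plus two slaves; the interface names, addresses and mode below
are only an example (802.3ad needs LACP speakers on both ends, so for a back-to-back
DRBD link balance-rr is a common alternative):

    # /etc/modprobe.d/bonding.conf
    alias bond1 bonding
    options bond1 mode=balance-rr miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond1
    DEVICE=bond1
    IPADDR=192.168.10.1
    NETMASK=255.255.255.0
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth2   (and the same for eth3)
    DEVICE=eth2
    MASTER=bond1
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

On Debian-style systems the same thing goes into /etc/network/interfaces instead,
but the bonding module options are equivalent.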
On 14/01/11 03:26, Eric Rostetter wrote:
Quoting Jonathan Tripathy :
Either way, I would probably use a crossover cable for the DRBD
cluster.
I use 2 1Gb links bonded together, over crossover cables...
Could maybe even bond 2 cables together if I'm feeling adventurous!
Yes, recommended.
On 14/01/11 03:39, Eric Rostetter wrote:
Quoting Henrique Fernandes :
for drbd you only need a heartbeat I guess.
Fencing is not needed for drbd, though recommended.
But to use gfs2 you need a fence device; ocfs2 does not require one,
since the
ocfs2 driver takes care of it: it reboots if it thin
Quoting Henrique Fernandes :
for drbd you only need a heartbeat I guess.
Fencing is not needed for drbd, though recommended.
But to use gfs2 you need a fence device; ocfs2 does not require one, since the
ocfs2 driver takes care of it: it reboots if it thinks it is desynchronized.
gfs2 technically re
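DRBD itself can at least refuse to promote a stale peer if its fencing hooks are wired
into the cluster manager. A sketch of the usual Pacemaker handlers, with an invented
resource name (the script paths are the ones commonly shipped with the drbd packages,
but verify them on your distro):

    resource r_mailfs {
        disk {
            fencing resource-only;
        }
        handlers {
            fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
    }

That only protects the replication layer; GFS2 still wants a real fence device, and
ocfs2 falls back to the self-fencing reboot behaviour mentioned above.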
Quoting Jonathan Tripathy :
Either way, I would probably use a crossover cable for the DRBD cluster.
I use 2 1Gb links bonded together, over crossover cables...
Could maybe even bond 2 cables together if I'm feeling adventurous!
Yes, recommended. That is what I do on all my clusters.
How
for drbd you only need a heartbeat I guess.
But to use gfs2 you need a fence device; ocfs2 does not require one, since the
ocfs2 driver takes care of it: it reboots if it thinks it is desynchronized.
[]'sf.rique
On Thu, Jan 13, 2011 at 9:04 PM, Jonathan Tripathy wrote:
>
>
>> Does gfs2 guarantee integr
Does gfs2 guarantee integrity without any fence device?
You make a fair point. Would I need any hardware fencing for DRBD (and
GFS2)?
As you are thinking, you will have 2 servers with drbd active/standby; you
could test both setups, exporting over NFS or over iscsi + gfs2.
Does gfs2 guarantee integrity without any fence device?
Where I work I guess we chose ocfs2 because of this little problem: we could
not have a fence device
Either way, I would probably use a crossover cable for the DRBD cluster.
I use 2 1Gb links bonded together, over crossover cables...
Could maybe even bond 2 cables together if I'm feeling adventurous!
Yes, recommended. That is what I do on all my clusters.
How do you bond the connection
Quoting Jonathan Tripathy :
I'm hearing different things on whether dovecot works well or not with GFS2.
Dovecot works fine with GFS2. The question is performance of
Dovecot on GFS2. I do dovecot on GFS2 (with mbox instead of maildir)
and it works fine for my user load... Your userload may
On 13/01/11 21:34, Stan Hoeppner wrote:
Jonathan Tripathy put forth on 1/13/2011 7:11 AM:
Would DRBD + GFS2 work better than NFS? While NFS is simple, I don't mind
experimenting with DRBD and GFS2 if it means fewer problems?
Depends on your definition of "better". If you do two dovecot+drbd
Jonathan Tripathy put forth on 1/13/2011 7:11 AM:
> Would DRBD + GFS2 work better than NFS? While NFS is simple, I don't mind
> experimenting with DRBD and GFS2 if it means fewer problems?
Depends on your definition of "better". If you do two dovecot+drbd nodes you
have only two nodes. If you d
On 13/01/11 10:57, Stan Hoeppner wrote:
Jonathan Tripathy put forth on 1/13/2011 2:24 AM:
Ok so this is interesting. As long as I use Postfix native delivery,
along with
Dovecot director, NFS should work ok?
One has nothing to do with the other. Director doesn't touch smtp
(afaik), only
ima
On 13/01/11 10:57, Stan Hoeppner wrote:
Jonathan Tripathy put forth on 1/13/2011 2:24 AM:
Ok so this is interesting. As long as I use Postfix native delivery, along with
Dovecot director, NFS should work ok?
One has nothing to do with the other. Director doesn't touch smtp (afaik), only
imap
I use ocfs2 with 3 dovecots, one only for mailman.
We have problems with IO. We have about 4k active users.
We are now testing more ocfs2 clusters, because one of your theories is that
if all mail resides in only one ocfs2 cluster, it takes too long to find
the file. ocfs2 I guess does not support
On Thu, Jan 13, 2011 at 04:57:20AM -0600, Stan Hoeppner wrote:
>
> One has nothing to do with the other. Director doesn't touch smtp (afaik),
> only
> imap.
The director can do lmtp proxying, but I haven't seen much documentation
on it except the few lines at:
http://wiki2.dovecot.org/
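The relevant knobs on the director instance, going by the wiki2 Director page for 2.0
(the addresses are placeholders and listener details can differ between 2.0.x releases):

    director_servers = 10.0.0.10
    director_mail_servers = 10.0.0.21 10.0.0.22

    service director {
      unix_listener login/director {
        mode = 0666
      }
      fifo_listener login/proxy-notify {
        mode = 0666
      }
      unix_listener director-userdb {
        mode = 0600
      }
      inet_listener {
        port = 9090
      }
    }

    service imap-login {
      executable = imap-login director
    }
    service pop3-login {
      executable = pop3-login director
    }

    # LMTP deliveries can be proxied through the same ring:
    protocol lmtp {
      auth_socket_path = director-userdb
    }
    service lmtp {
      inet_listener lmtp {
        port = 24
      }
    }

With that in place the MTA hands mail to the director over LMTP and the director
forwards each user to the same backend it picked for IMAP/POP3.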
Jonathan Tripathy put forth on 1/13/2011 2:24 AM:
> Ok so this is interesting. As long as I use Postfix native delivery, along
> with
> Dovecot director, NFS should work ok?
One has nothing to do with the other. Director doesn't touch smtp (afaik), only
imap. The reason for having Postfix use
On 13.01.2011 08:22, Jonathan Tripathy wrote:
> Hi Everyone,
>
> I wish to create a Postfix/Dovecot active-active cluster (each node will
> run Postfix *and* Dovecot), which will obviously have to use central
> storage. I'm looking for ideas to see what's the best out there. All of
> this will b
In this Xen setup, I think the best way to accomplish your goals is to create 6
guests:
2 x Linux Postfix
2 x Linux Dovecot
1 x Linux NFS server
1 x Linux Dovecot director
Each of these can be painfully small, stripped-down Linux instances. Configure
each Postfix and Dovecot server to access t
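Filling in that picture a little: the NFS guest exports one tree and every Postfix and
Dovecot guest mounts it, and with the director pinning each user to one backend the
usual NFS cache headaches largely go away. A sketch with invented paths and addresses
(the dovecot settings are the commonly recommended NFS ones for 2.0, worth
double-checking against the wiki):

    # /etc/exports on the NFS guest
    /srv/vmail  10.0.0.0/24(rw,sync,no_subtree_check)

    # dovecot.conf on each Dovecot guest
    mail_location = maildir:/srv/vmail/%d/%n/Maildir
    mmap_disable = yes
    dotlock_use_excl = yes
    mail_fsync = always
    # with the director ensuring one server per user these can stay off:
    mail_nfs_storage = no
    mail_nfs_index = no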
Jonathan Tripathy put forth on 1/13/2011 1:22 AM:
> I wish to create a Postfix/Dovecot active-active cluster (each node will run
> Postfix *and* Dovecot), which will obviously have to use central storage. I'm
> looking for ideas to see what's the best out there. All of this will be
> running
> on
Hi Everyone,
I wish to create a Postfix/Dovecot active-active cluster (each node will
run Postfix *and* Dovecot), which will obviously have to use central
storage. I'm looking for ideas to see what's the best out there. All of
this will be running on multiple Xen hosts, however I don't think t