Sérgio Surkamp 2011-12-09 11:17:
Hi.
Why are you using OCFS2 version 1.5.0 in production?
As far as I know, the 1.5 series is for developers only.
I think that's just the version tag they give on the mainline
kernel. It's not just for developers; it just may not be as well
supported by
I'm pretty sure you can have multiple devices for a single cluster.
Brian
Richard Pickett 2011-06-27 23:26:
We need 3 clusters concurrently because each one is only 1T; the
underlying infrastructure won't allow us to combine the 3 into one device
without shelling out more money than
Patrick J. LoPresti 2010-06-13 19:14:
> Hello. I am experimenting with OCFS2 on Suse Linux Enterprise Server
> 11 Service Pack 1.
>
> I am performing various stress tests. My current exercise involves
> writing to files using a shared-writable mmap() from two nodes. (Each
> node mmaps and writ
Michael Austin 2010-05-24 13:32:
>I would like to get some feedback on the overall perception on the support
>and stability of OCFS2 (latest). This tool looks like a perfect fit for
>a production system I am planning, but, due to its open source roots,
>there are some concerns ab
Brian Kroth 2010-04-26 09:17:
> Hello all,
>
> I've got a moderately active mail system running on OCFS2. It's been on a
> fresh volume for about 9 months now, however only ever with one node
> active at a time. For the most part it's been very happy, however
>
lenny-backports has a 2.6.32 based kernel that might already have the
free space fix in it. I haven't checked yet.
Also you don't really explain what you're trying to use the data store
for (eg: lots of small files, video files, heavy writes, heavy reads,
random, sequential, etc.). It may impact
I believe that due to the way the fencing works you only need a single
cluster to have multiple volumes. Just make sure that all of the hosts
involved are specified in the same cluster.conf file.
For example, nodes a, b, c could mount volume1, while b, c, d mount
volume2, and e, f, g mount volume
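To make that concrete, a single O2CB cluster.conf can list every host even if each one only mounts a subset of the volumes. A minimal sketch, assuming hypothetical node names and addresses (the cluster name, IPs, and node numbers below are made up; port 7777 is the o2net default):

```
cluster:
        node_count = 3
        name = mycluster

node:
        ip_port = 7777
        ip_address = 192.168.1.101
        number = 0
        name = nodea
        cluster = mycluster

node:
        ip_port = 7777
        ip_address = 192.168.1.102
        number = 1
        name = nodeb
        cluster = mycluster

node:
        ip_port = 7777
        ip_address = 192.168.1.103
        number = 2
        name = nodec
        cluster = mycluster
```

The same file would be copied to every host; which node mounts which volume is then just a matter of what each host's fstab or mount commands say.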
Joel Becker 2010-03-05 13:48:
> On Fri, Mar 05, 2010 at 08:33:34AM -0600, Brian Kroth wrote:
> > As mentioned in the bug (didn't think it was a proper place for
> > discussion) I'm also curious more generally about backporting these
> > fixes to the 2.6.32 kernel
I also have a mail volume hosted on OCFS2 and I'm somewhat concerned
about /when/ we will run into this problem and what we can do to help
avoid too much hurt when it happens.
Are there any tips on reading the output of stat_sysdir.sh? The man
page wasn't especially helpful, but I'm guessing I'm
That seems unwise.
Presumably the connection to the disk or other network nodes was lost
due to some failure in which case you don't want nodes operating on the
disk unless they can agree on what's safe.
If there was a planned outage of the disk or network connection, then
the related volumes s
definitely missing this patch.
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=a1b08e75dff3dc18a88444803753e667bb1d126e
>
>
> Brian Kroth wrote:
>> We've gotten a couple of dumps like this in the last couple of days
>> while migrating some
We've gotten a couple of dumps like this in the last couple of days
while migrating some new users to our mail store which involves
untarring/moving large quantities of files. We've gracefully rebooted
the node after every instance and it seems to do fine with normal mail
operations. I'm wonderi
kernels that had a bug
>> in cfq. That bug was fixed years ago.
>>
>> I am unsure how using noop in guest will trigger starvation. Not that
>> I am recommending it. I have not thought about this much.
>>
>> On Jan 15, 2010, at 9:55 AM, Brian Kroth wrote:
>>
http://lonesysadmin.net/2008/02/21/elevatornoop/
I ran across this recently which describes, when operating in a virtual
environment with shared storage, how to try and let the storage and
hypervisor deal with arranging disk write operations in a more globally
optimal way rather than having all th
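The idea in that article can be applied either at boot or at runtime. A minimal sketch, assuming the guest's shared disk shows up as sda (the device name and GRUB file path are assumptions for your setup):

```shell
# Boot time: append elevator=noop to the guest's kernel command line,
# e.g. in /boot/grub/menu.lst on the distros of that era.

# Runtime, per block device:
cat /sys/block/sda/queue/scheduler    # current scheduler shown in [brackets]
echo noop > /sys/block/sda/queue/scheduler
```

With noop in the guest, write reordering is left to the hypervisor and the storage array, which have the global view of all guests' I/O.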
Luis Freitas 2009-12-11 05:40:
> Patrick,
>
>Depending on what you are using, you could use the volume manager
>to do the striping, but you need to use CLVM. So if you can, go for
>Heartbeat2+CLVM+OCFS2, all integrated.
>
>Not sure but I think Heartbeat2+OCFS2 is only available o
Late to the party...
Here's what I did to get OCFS2 going with RDMs _and_ VMotion (with some
exceptions). Almost surely not supported, but it works:
First, create the RDM as a physical passthru using either the gui or the
cli. It need not be on a separate virtual controller. However, to have
V
test this out some more on our test setup.
Any thoughts?
Thanks,
Brian
Brian Kroth 2009-08-25 08:52:
> Sunil Mushran 2009-08-24 18:12:
> > So a delete was called for some inodes that had not been orphaned.
> > The pre-checks detected the same and correctly aborted the deletes.
Angelo McComis 2009-09-29 11:19:
> I'm sorry -- it's lvm2, and yes. :-)
>
> On Tue, Sep 29, 2009 at 10:41 AM, Charlie Sharkey
> wrote:
> >
> > It was mentioned:
> >
> > - Checked our lvm configuration - seems to be good as well.
> >
> > Is lvm supported by ocfs2 ?
I didn't think this part wa
You can do what Sunil mentioned using heartbeat [1]. However, MySQL
also has replication built into it and you can also use heartbeat to
automatically turn a slave into a master very quickly without any need
for shared storage. This way you could also use the slave to do load
balancing of reads a
on that file. Anything. It may help us narrow down the
> issue.
>
> Sunil
Will do.
Thanks again,
Brian
> Brian Kroth wrote:
> > I recently brought up a mail server with two ocfs2 volumes on it, one
> > large one for the user maildirs, and one small one for queue/spool
>
I recently brought up a mail server with two ocfs2 volumes on it, one
large one for the user maildirs, and one small one for queue/spool
directories. More information on the specifics below. When flushing
the queues from the MXs I saw the messages listed below fly by, but
since then nothing.
A c
I didn't see this in the bug list. Which mainline release is this fixed
in?
Thanks,
Brian
Sunil Mushran 2009-08-20 17:46:
> Yes, this is a known issue in OCFS2 1.4.1 and 1.4.2. That is assuming
> no process in the cluster has that file open. We have the fix. It will be
> available with 1.4.3 wh
I just did this:
mkfs.ocfs2 -v -L ocfs2mail -N 8 -T mail /dev/sdb1
The tools happen to choose "-b 4096 -C 4096" for you at that point.
Brian
Sérgio Surkamp 2009-08-12 12:03:
> Em Wed, 12 Aug 2009 15:05:44 +0800
> "Thomas G. Lau" escreveu:
>
> > Dear all,
> >
> > Anyone using OCFS2 on email s
Sunil Mushran 2009-06-16 16:38:
> LOOKING AHEAD
>
> We are aiming to release OCFS2 1.6 later this year. This release will
> include the features that we have worked on over the past year. These are:
>
> 1. Extended Attributes (unlimited number of attributes)
> 2. POSIX ACLs
> 3. Security Attribu
Sunil Mushran 2009-05-20 15:36:
> Well, as long as the LVM mappings remain consistent on all nodes,
> it will work. The problem is that if someone changes the setup on a
> node, you will encounter the problem you just did. The only safe way
> is to have the lvm clustered too. Whereas clvm is clust
Luis Freitas 2009-05-20 10:46:
>I am not aware of any filesystem that can withstand an online fsck.
>Sun ZFS can do online correction, but it doesn't have a fsck tool.
I hear btrfs will support this. It may be a feature that's easier to
accomplish with copy on write.
Brian
__
We've used Veritas NetBackup before without problems but are currently
toying with rsyncing to ZFS (running on OpenSolaris) with fs compression
and daily (ZFS) snapshots and then possibly dumping to tape. It's
working really well so far.
All of this actually happens from a SAN based snapshot of t
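A minimal sketch of that backup flow, assuming hypothetical host, mount, and dataset names (the SAN snapshot is presented to the backup host at a read-only mount; rsync then feeds a compressed ZFS dataset that gets its own daily snapshot):

```shell
# Pull the night's data from the mounted SAN snapshot into the ZFS dataset.
rsync -aH --delete backupnode:/mnt/san-snap/shared/ /tank/shared/

# Take the daily ZFS snapshot, named by date.
zfs snapshot tank/shared@$(date +%Y-%m-%d)
```

Because rsync only transfers changed files and ZFS snapshots are copy-on-write, the daily cost stays proportional to churn rather than volume size.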
Uwe Schuerkamp 2009-03-13 10:42:
> Hi folks,
>
> I was wondering what is a good backup strategy for ocfs2 based
> clusters.
>
>
> Background: We're running a cluster of 8 SLES 10 sp2 machines sharing
> a common SAN-based FS (/shared) which is about 350g in size at the
> moment. We've already t
I'm doing some research on the possibility of using OCFS2 to serve
users' home directories and other shared space. I noticed that quota
and posix acl support was added in 2.6.29 but the tools are not there
yet. When can we expect that?
Also, are the quotas implemented on a directory or volume le
Attempting this again since I got a DSN earlier.
Brian Kroth 2009-02-25 10:25:
> I'm doing some research on the possibility of using OCFS2 to serve
> users' home directories and other shared space. I noticed that quota
> and posix acl support was added in 2.6.29 but the
I've run a web cluster with OCFS2 for almost two years now and found the
local log files option to work just fine. You can use various tools to
merge them, though the things that come with awstats have suited my
tastes.
As for monitoring multiple nodes logs during troubleshooting, I've used
swatc
Our iSCSI SAN (EqualLogic) does block level replication so we were
thinking of trying to set something up soon so that we could have some
nodes in another building connected via fiber to provide site level
failover. I'll report back our experiences when we do that, but I
imagine it would b
Thanks again,
Brian
On Jan 15, 2009, at 11:58 AM, Coly Li wrote:
>
>
> Brian Kroth Wrote:
>> I've been working on creating a mail cluster using ocfs2. Dovecot
>> was
>> configured to use flock since the kernel we're running is debian
>> based
>>
I've been working on creating a mail cluster using ocfs2. Dovecot was
configured to use flock since the kernel we're running is debian based
2.6.26 which supports cluster aware flock. User space is 1.4.1. During
testing everything seemed fine, but when we got a real load on things we
got a whole
Add one for --srcport as well and I think you'll be ok. Actually, since
my cluster traffic all goes over a separate switch I usually just allow
all traffic in/out of eth1.
Brian
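For reference, o2net's interconnect defaults to TCP port 7777 (the ip_port value in cluster.conf), so the per-port rules look roughly like this. A sketch, assuming a hypothetical 192.168.1.0/24 cluster subnet and eth1 as the dedicated interconnect:

```shell
# Allow cluster traffic to and from the o2net port.
iptables -A INPUT -p tcp --dport 7777 -s 192.168.1.0/24 -j ACCEPT
iptables -A INPUT -p tcp --sport 7777 -s 192.168.1.0/24 -j ACCEPT

# Or, if cluster traffic rides its own switch/interface, allow it wholesale:
iptables -A INPUT -i eth1 -j ACCEPT
```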
Bret Palsson 2009-01-15 08:12:
>So it looks like iptables is what is stopping it from working. After
>disabli
Brett Worth 2009-01-14 20:00:
> Christophe BOUDER wrote:
>
> > but i can't format my new big device to use more than 16To for it.
>
> You should "Consider increasing the block size" to perhaps 16k. That should
> increase the size to 64TB
I think he meant cluster size.
http://oss.oracle.co
ile data? Are you seeing more disk reads in this
> configuration than in the one you're comparing with?
>
> Thanks,
> Herbert.
>
>
> Brian Kroth wrote:
>> I've got a question about tuning mem usage which may or may not be ocfs2
>> related. I have some VMs that share an
I've got a question about tuning mem usage which may or may not be ocfs2
related. I have some VMs that share an iSCSI device formatted with
ocfs2. They're all running a debian based 2.6.26 kernel. We basically
just dialed the kernel down to 100HZ rather than the default 1000HZ.
Everything else i
] ocfs2_statfs+0x1a5/0x2d3 [ocfs2] SS:ESP
0068:dee1fe68
[ 818.490805] ---[ end trace ae323790ea69e92a ]---
Ulf Zimmermann 2008-12-05 12:51:
> > -Original Message-
> > From: ocfs2-users-boun...@oss.oracle.com [mailto:ocfs2-users-
> > boun...@oss.oracle.com] On Behalf Of Bri
I'm working on setting up a mail cluster (imap, pop, mx) using ocfs2.
Does anyone have any advice or experiences they'd like to share?
I've done web and video clusters before with great success, however
those were structured in such a way that there was generally only ever
one write node active
Just for clarity, can you post the proper sequence you're now using to
take SAN based snapshots? I'd like to try this on a new cluster I'm
setting up.
Thanks,
Brian
Daniel Keisling <[EMAIL PROTECTED]> 2008-12-04 12:15:
> I've restarted the box and the heartbeat threads and messages are now
> gon
Dr J Pelan <[EMAIL PROTECTED]> 2008-05-20 17:47:
>
> On Tue, 20 May 2008, Tapas Mallick wrote:
>
> > I have installed OCFS2 as per the instruction in two of the Guest
> > OS(Both RHEL 4 AS) in VMware Workstation 6. In the first OS where I
> > export the file system,
>
> There is insufficient
Just to be clear, which versions are "latest"?
Thanks,
Brian
Tao Ma <[EMAIL PROTECTED]> 2008-04-13 15:49:
> yes you can.
> Please see
> http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html#RESIZE.
>
> One more thing about it. The online resize capability has already been
> ad