- Original Message -
| You know, that's a good point. We don't use GFS2 for any non-clustered
| fs right now, but why not? Are you saying I can do an online gfs2_grow
| even with lock_nolock?
|
| -Jeff
Hi Jeff,
Yes, you should be able to.
Regards,
Bob Peterson
Red Hat File Systems
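
For reference, a minimal sketch of the online grow on an LVM-backed,
lock_nolock GFS2 filesystem; the volume and mount point names are
hypothetical:

  # grow the underlying logical volume, e.g. by 10G
  lvextend -L +10G /dev/myvg/gfs2lv
  # gfs2_grow runs against the *mounted* filesystem and expands it
  # to fill the enlarged device
  gfs2_grow /mnt/gfs2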
> -----Original Message-----
> From: linux-cluster-boun...@redhat.com
> [mailto:linux-cluster-boun...@redhat.com]
> On Behalf Of Bob Peterson
> Sent: Tuesday, February 15, 2011 11:24 AM
> To: linux clustering
> Subject: Re: [Linux-cluster] Cluster with shared storage on low budget
> -----Original Message-----
> From: linux-cluster-boun...@redhat.com
> [mailto:linux-cluster-boun...@redhat.com]
> On Behalf Of Nikola Savic
> Sent: Tuesday, February 15, 2011 3:09 PM
> To: linux clustering
> Subject: Re: [Linux-cluster] Cluster with shared storage on low budget
Jeff Sturm wrote:
> We actually resize volumes often. Some of our storage volumes have 30
> LUNs or more. We have so many because we've virtualized most of our
> infrastructure, and some of the hosts are single-purpose hosts.
>
Can you please provide more information on how storage is organized…
Thomas Sjolshagen wrote:
Just so you realize: if you intend to use clvm (i.e. LVM in a cluster
where you expect to be able to write to the volume from more than one
node at/around the same time w/o a full-on failover), you will _not_
have snapshot support. And no, this isn't "not supported" a…
- Original Message -
| We don't want to allocate too much storage in advance, simply because
| it's easier to grow than to shrink. Stop the host, grow the volume,
| e2fsck/resize2fs, start up and go. Much nicer than increasing disk
| capacity on physical hosts.
These might be good for ext3…
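
A sketch of that offline ext3 sequence, with hypothetical volume names;
resize2fs refuses to grow an unmounted ext3 filesystem until a forced
fsck has been run:

  # with the host stopped and the filesystem unmounted:
  lvextend -L +20G /dev/vg0/guest01    # grow the volume
  e2fsck -f /dev/vg0/guest01           # forced check, required before resizing
  resize2fs /dev/vg0/guest01           # expand ext3 to fill the volume
  # start the host up again and go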
> -----Original Message-----
> From: linux-cluster-boun...@redhat.com
> [mailto:linux-cluster-boun...@redhat.com]
> On Behalf Of Gordan Bobic
> Sent: Tuesday, February 15, 2011 7:05 AM
>
> Volume resizing is, IMO, over-rated and unnecessary in most cases,
> except where data growth is quite mind-boggling…
Gordan Bobic wrote:
> Something else just occurs to me - you mentioned MySQL. You do realize
> that the performance of it will be atrocious on a shared cluster file
> system (ANY shared cluster file system), right? Unless you only intend
> to run mysqld on a single node at a time (in which case the…
Gordan Bobic wrote:
>> What is the main reason for you not to use LVM on top of DRBD? Is it
>> just that you didn't require the benefits it brings? Or does it create
>> more problems, in your opinion?
>
> Traditionally, CLVM didn't provide any tangible benefits (no
> snapshots), and I never found myself in a sit…
Thomas Sjolshagen wrote:
On Tue, 15 Feb 2011 12:49:38 +0100, Nikola Savic wrote:
> This is an interesting approach. I understand that DRBD with GFS2
> doesn't require LVM in between, but it does bring some inflexibility:
>
> * For each logical volume, one has to set up a separate DRBD resource
> * Cluster-wide logical volume resizing n…
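
To illustrate the first point: without LVM underneath, every replicated
block device is its own stanza in drbd.conf. A sketch in DRBD 8.3-era
syntax, with hypothetical hosts, devices and ports:

  resource www {
    device    /dev/drbd0;
    disk      /dev/sda5;
    meta-disk internal;
    on node1 { address 10.0.1.1:7788; }
    on node2 { address 10.0.1.2:7788; }
  }
  # a second logical volume means a second resource, device and port
  resource mysql {
    device    /dev/drbd1;
    disk      /dev/sda6;
    meta-disk internal;
    on node1 { address 10.0.1.1:7789; }
    on node2 { address 10.0.1.2:7789; }
  }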
Digimer wrote:
> Once, and *only* if the fence was successful, the cluster will reform.
> Once the cluster configuration is in place, recovery of the file system
> can begin (ie: the journal can be replayed). Finally, normal operation
> can continue, albeit with one less node. This is also where the…
Gordan Bobic wrote:
> DRBD and GFS will take care of that for you. DRBD directs reads to
> nodes that are up to date until everything is in sync.
>
> Make sure that in drbd.conf you put in a stonith parameter pointing at
> your fencing agent with suitable parameters, and set the timeout to
> slightly…
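
The DRBD side of that advice looks roughly as below (8.3-era syntax; the
handler path is an example only - point fence-peer at whatever script
drives your cluster's fencing agent):

  resource r0 {
    disk {
      # freeze I/O and fence the peer when replication is lost
      fencing resource-and-stonith;
    }
    handlers {
      # example handler: a script that invokes the cluster's fencing
      # (e.g. obliterate-peer.sh on cman/RHCS clusters)
      fence-peer "/usr/lib/drbd/obliterate-peer.sh";
    }
  }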
On Tue, Feb 15, 2011 at 4:57 PM, Gordan Bobic wrote:
> Nikola Savic wrote:
>> If I understand you well, even before sync is completely done DRBD
>> will take care of reading and writing of dirty blocks on problematic
>> node that got back online? Let's say that node was down for a longer
>> time and…
I have an in-progress tutorial, which I would recommend as a guide only.
If you are interested, I will send you the link off-list.
As for your question; no, you can read/write to the shared storage at
the same time without the need for iSCSI. DRBD can run in
"Primary/Primary[/Primary]" mode. The…
Digimer wrote:
> First, it will rejoin the other DRBD members. These members will have a
> "dirty block" list in memory which will allow them to quickly bring the
> recovered server back into sync. During this time, you can bring that
> node online (ie: set it primary and start accessing it via GFS…
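
On the recovered node that amounts to a couple of commands once DRBD has
reconnected (hypothetical resource and mount point; as noted above, you
need not wait for the resync to complete):

  drbdadm adjust r0                    # reconnect with the configured settings
  cat /proc/drbd                       # optionally watch resync progress
  drbdadm primary r0                   # promote the recovered node
  mount -t gfs2 /dev/drbd0 /mnt/shared # and start using it again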
Hello,
I need to set up a cluster of 3 servers without a separate storage
device (SAN). The servers should join their local hard drives to create
shared storage space. Every server in the cluster has a public (100Mbps)
and a private (1Gbps) NIC. The private 1Gbit network will be used for
exchange of data (files)…