Hi Jackie,
Here is the sample output of scrub status where two files are corrupted.
root@FEDORA2:$ gluster vol bitrot master scrub status
Volume name : master
State of scrub: Active (Idle)
Scrub impact: lazy
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log
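For automated checks, the human-readable fields above can be parsed with standard tools. A minimal sketch (the sample text is copied from the output above; field names may differ across Gluster versions, so treat the pattern as an assumption):

```shell
#!/bin/sh
# Parse the "State of scrub" field from saved `gluster vol bitrot <vol> scrub status`
# output. The canned sample mirrors the output shown above; on a live system,
# pipe the command itself instead of the variable.
status_output='Volume name : master
State of scrub: Active (Idle)
Scrub impact: lazy
Scrub frequency: biweekly
Bitrot error log location: /var/log/glusterfs/bitd.log'

# Extract the value after the first ": ", keyed on the field name.
state=$(printf '%s\n' "$status_output" | awk -F': ' '/^State of scrub/ {print $2}')
echo "scrub state: $state"   # -> scrub state: Active (Idle)
```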
On 10/26/2016 03:42 PM, Lindsay Mathieson wrote:
On 27/10/2016 8:14 AM, Joe Julian wrote:
To be fair, though, I can't blame ceph. We had a cascading hardware
failure with those storage trays. Even still, if it had been gluster
- I would have had files on disks.
Ouch :(
In that regard how do you view sharding? why not as simple as pulling
On 10/26/2016 02:54 PM, Lindsay Mathieson wrote:
Maybe a controversial question (and hopefully not trolling), but any
particular reason you chose gluster over ceph for these larger
setups, Joe?
For myself, gluster is much easier to manage and provides better
performance on my small non-enterprise setup, plus it plays nice with zfs.
But
On 10/26/2016 02:12 PM, Gandalf Corvotempesta wrote:
2016-10-26 23:07 GMT+02:00 Joe Julian :
And yes, they can fail, but 20TB is small enough to heal pretty quickly.
20TB small enough to heal quickly? On which network? Gluster doesn't
have a dedicated cluster network; if the cluster is being heavily
accessed, the healing
2016-10-26 23:09 GMT+02:00 Joe Julian :
> Open Compute WiWynn Knox trays. I don't recommend them but they are pretty.
> https://goo.gl/photos/tmkRE58xKKaWKdL96
What are you hosting on that huge cluster?
10GbE network, I suppose.
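The heal-time concern raised here is easy to quantify. A back-of-envelope sketch, assuming the network is the only bottleneck and runs at full line rate (real heals won't reach this, so treat these as lower bounds):

```python
# Rough lower bound on time to re-replicate a given amount of data
# over a single link running at full line rate.
def heal_hours(data_tb: float, link_gbps: float) -> float:
    bits = data_tb * 1e12 * 8            # TB -> bits
    seconds = bits / (link_gbps * 1e9)   # at line rate, no overhead
    return seconds / 3600

print(round(heal_hours(20, 10), 1))  # 10 GbE: 4.4 hours
print(round(heal_hours(20, 1), 1))   # 1 GbE: 44.4 hours
```

On a bonded 3x1G link like the one mentioned further down the thread, the real figure sits somewhere between those two, depending on how well the bond balances the heal traffic.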
On 10/26/2016 02:06 PM, Gandalf Corvotempesta wrote:
2016-10-26 23:04 GMT+02:00 Joe Julian :
I just add enough disks to saturate (and I don't like zfs, personally)
per-brick. So with 30 disks on a server, I typically do 5-disk raid-0 and
create 6 bricks per server.
30 disks per server? which chassis are you using?
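Joe's brick layout quoted above (30 disks, 5-disk raid-0 sets, 6 bricks per server) can be sanity-checked with quick arithmetic. A sketch; the three-server count in the replica math is an assumption for illustration, not stated in the thread:

```python
# 30 disks per server, grouped into 5-disk RAID-0 arrays -> bricks per server.
disks_per_server = 30
disks_per_raid0 = 5
bricks_per_server = disks_per_server // disks_per_raid0
print(bricks_per_server)  # 6

# With replica 3 across (say) 3 servers, the 18 bricks form
# 6 three-way replica sets; raw-to-usable capacity ratio is 3:1.
replica = 3
servers = 3  # assumed for illustration
replica_sets = servers * bricks_per_server // replica
print(replica_sets)  # 6
```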
On 10/26/2016 02:04 PM, Joe Julian wrote:
On 10/26/2016 02:02 PM, Gandalf Corvotempesta wrote:
2016-10-26 22:59 GMT+02:00 Joe Julian :
Personally, I prefer raid0 bricks just to get the throughput to
saturate my
network, then I use replicate to meet my availability
2016-10-26 23:04 GMT+02:00 Joe Julian :
> I just add enough disks to saturate (and I don't like zfs, personally)
> per-brick. So with 30 disks on a server, I typically do 5-disk raid-0 and
> create 6 bricks per server.
30 disks per server? which chassis are you using?
Why
On 27/10/2016 6:59 AM, Joe Julian wrote:
Personally, I prefer raid0 bricks just to get the throughput to
saturate my network, then I use replicate to meet my availability
requirements (typically replica 3).
My network is the limiting factor already :( Only 1G * 3 Bond. Cheap and
nasty D-Link
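The approach Joe describes (raid0 bricks for throughput, replica 3 for availability) maps to a volume-create invocation along these lines. A sketch only: hostnames and brick paths are placeholders, not taken from the thread:

```shell
# Hypothetical hosts and paths; one raid0-backed brick per server shown
# for brevity. Bricks listed in order form each 3-way replica set.
gluster volume create myvol replica 3 \
    server1:/bricks/raid0-a/brick \
    server2:/bricks/raid0-a/brick \
    server3:/bricks/raid0-a/brick
gluster volume start myvol
```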
On 10/26/2016 02:02 PM, Gandalf Corvotempesta wrote:
2016-10-26 22:59 GMT+02:00 Joe Julian :
Personally, I prefer raid0 bricks just to get the throughput to saturate my
network, then I use replicate to meet my availability requirements
(typically replica 3).
Isn't the ZFS cache on SSD enough to saturate the network?
On 27/10/2016 6:58 AM, mabi wrote:
Sorry yes I meant vmstat, I was doing too much ionice/iostat today ;)
right now it's averaging at 45000. Low load though.
--
Lindsay Mathieson
___
Gluster-users mailing list
Gluster-users@gluster.org
On 27/10/2016 6:56 AM, Gandalf Corvotempesta wrote:
Velociraptors: are they still around? I heard they went EOL a couple of years ago.
Legacy hardware :) I must admit they last really well.
--
Lindsay Mathieson
On 10/26/2016 01:56 PM, Gandalf Corvotempesta wrote:
2016-10-26 22:31 GMT+02:00 Lindsay Mathieson :
Yah, RAID10.
- Two nodes with 4 WD 3TB RED
I really hate RAID10.
Personally, I prefer raid0 bricks just to get the throughput to saturate
my network, then I use
Sorry yes I meant vmstat, I was doing too much ionice/iostat today ;)
Original Message
Subject: Re: [Gluster-users] Production cluster planning
Local Time: October 26, 2016 10:56 PM
UTC Time: October 26, 2016 8:56 PM
From: lindsay.mathie...@gmail.com
To:
2016-10-26 22:31 GMT+02:00 Lindsay Mathieson :
> Yah, RAID10.
>
> - Two nodes with 4 WD 3TB RED
I really hate RAID10.
I'm evaluating 2 RAIDZ2 on each gluster node (12 disks: 6+6 on each
RAIDZ2) or one huge RAIDZ3 with 12 disks.
The biggest drawback with RAIDZ is that
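The capacity side of that comparison works out as follows. A quick sketch; it counts whole data disks and ignores ZFS metadata overhead and slop space:

```python
# 12 disks: two 6-disk RAIDZ2 vdevs vs one 12-disk RAIDZ3 vdev.
def data_disks(disks: int, parity: int) -> int:
    """Disks left for data after subtracting parity disks."""
    return disks - parity

two_raidz2 = 2 * data_disks(6, 2)   # 8 data disks; tolerates 2 failures per vdev
one_raidz3 = data_disks(12, 3)      # 9 data disks; tolerates any 3 failures
print(two_raidz2, one_raidz3)  # 8 9
```

So RAIDZ3 yields one extra disk of capacity, while the 2x RAIDZ2 split gives two independent vdevs (and thus more IOPS, since ZFS stripes across vdevs).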
On 27/10/2016 6:35 AM, mabi wrote:
I was wondering, with the setup you mention, how high are your context
switches? I mean what is your typical average context switch rate and what
are your highest context switch peaks (as seen in iostat).
Wouldn't that be vmstat?
--
Lindsay Mathieson
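Indeed, vmstat reports context switches per second in its `cs` column, and averaging it is a one-liner. A sketch against canned vmstat-style output (on a live box, pipe `vmstat 1 5` instead; the column position assumes procps vmstat's layout):

```shell
#!/bin/sh
# Canned `vmstat`-style output: two header lines, then samples.
sample='procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 812345 123456 654321    0    0     5    10  900 44800  3  1 95  1  0
 0  0      0 812300 123456 654330    0    0     0     8  950 45200  2  1 96  1  0'

# Average the cs column (12th field), skipping the two header lines.
printf '%s\n' "$sample" | awk 'NR>2 {sum+=$12; n++} END {print sum/n}'
# -> 45000
```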
I was wondering, with the setup you mention, how high are your context
switches? I mean what is your typical average context switch rate and what are your
highest context switch peaks (as seen in iostat).
Best,
M.
Original Message
Subject: Re: [Gluster-users] Production
2016-10-05 23:48 GMT+02:00 Lindsay Mathieson :
> It's enough? I also run 10 Windows VMs per node.
>
>
> My servers typically run at 4-6% max ioload. They idle under 1%
Are you using any ZFS RAID on your servers?
On 26 October 2016 at 19:47, Leung, Alex (398C)
wrote:
> Does anyone have any idea how to troubleshoot the following problem?
>
> Alex
Can you please provide the gluster client logs (in /var/log/glusterfs) and
the gluster volume info?
Regards,
Nithya
That’s great, thank you! We plan to automatically look for these signs.
For “scrub status”, do you have a sample output for a positive hit you could
share easily?
> On Oct 26, 2016, at 12:05 AM, Kotresh Hiremath Ravishankar
> wrote:
>
> Correcting the command..I had
Does anyone have any idea how to troubleshoot the following problem?
Alex
[root@pdsimg-6 alex]# rsync -av
pdsraid1:/export/pdsdata1/mro/safed/rsds/09000_0_ops-120210/crism/ops_crism_vnir_09000_0/
.
root@pdsraid1's password:
receiving incremental file list
./
4A_03_B42000_01.DAT
Hi all!
This week's meeting was GD.
We got rid of the regular updates, and just had an open floor. This
had the intended effect of more conversations.
We discussed 3 main topics today,
- How do we recognize contributors and their contributions [manikandan]
- What's happening with memory
Correcting the command: I had missed the 'scrub' keyword.
"gluster vol bitrot scrub status"
Thanks and Regards,
Kotresh H R
- Original Message -
> From: "Kotresh Hiremath Ravishankar"
> To: "Jackie Tung"
> Cc: gluster-users@gluster.org
> Sent:
I'm really sorry, I wrote to the wrong mailing list. :(
On 26 Oct 2016 8:30 AM, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:
> As I'm planning some server migrations and a new mail architecture, I
> would like to create an HA cluster
>
> Any advice on which kind of
As I'm planning some server migrations and a new mail architecture, I
would like to create an HA cluster.
Any advice on which kind of shared storage I should use? Is gluster's
performance with small files good enough for dovecot? Any other solution?
It's mandatory to avoid any split-brains or similar