> > set Bit 4 set Bit 6 set Bit 9 set Bit 11 set Bit 13 set Bit 14 set
> > Bit 18 set Bit 23 set Bit 25 set Bit 27 set Bit 30 set Bit 35 set
> > Bit 36 set Bit 37 set Bit 39 set Bit 41 set Bit 42 set Bit 48 set Bit
> > 57 set Bit 58 set Bit 59 set
> >
> > So all it's done is *adde
chooseleaf firstn 0 type host
step emit
}
# end crush map
On Thu, Feb 23, 2017 at 7:37 PM, Brad Hubbard <bhubb...@redhat.com> wrote:
> Did you dump out the crushmap and look?
>
> On Fri, Feb 24, 2017 at 1:36 PM, Schlacta, Christ <aarc...@aarcane.org> wrote:
>>
insofar as I can tell, yes. Everything indicates that they are in effect.
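For anyone following along, a compiled crushmap can be dumped and decompiled with the standard tools (the filenames here are just examples):

```shell
# Dump the compiled crushmap from the cluster, then decompile it to text:
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
less crushmap.txt

# After editing, recompile and inject it back:
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```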
On Thu, Feb 23, 2017 at 7:14 PM, Brad Hubbard <bhubb...@redhat.com> wrote:
> Is your change reflected in the current crushmap?
>
> On Fri, Feb 24, 2017 at 12:07 PM, Schlacta, Christ <aarc...@aar
-- Forwarded message --
From: Schlacta, Christ <aarc...@aarcane.org>
Date: Thu, Feb 23, 2017 at 6:06 PM
Subject: Re: [ceph-users] Upgrade Woes on suse leap with OBS ceph.
To: Brad Hubbard <bhubb...@redhat.com>
So setting the above to 0 by sheer brute force didn't w
-- Forwarded message --
From: Schlacta, Christ <aarc...@aarcane.org>
Date: Thu, Feb 23, 2017 at 5:56 PM
Subject: Re: [ceph-users] Upgrade Woes on suse leap with OBS ceph.
To: Brad Hubbard <bhubb...@redhat.com>
They're from the suse leap ceph team. They maintain cep
"require_feature_tunables": 1,
"require_feature_tunables2": 1,
"has_v2_rules": 0,
"require_feature_tunables3": 1,
"has_v3_rules": 0,
"has_v4_buckets": 0,
"require_feature_tunables5": 1,
"has_v5_rules": 0
}
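For reference, those tunables can be inspected and changed from the CLI without hand-editing the crushmap:

```shell
# Show the crush tunables currently in effect (the JSON above):
ceph osd crush show-tunables

# Switch to a named tunables profile; note this can trigger data movement:
ceph osd crush tunables optimal
```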
On Thu,
So I updated suse leap, and now I'm getting the following error from
ceph. I know I need to disable some features, but I'm not sure what
they are. It looks like bits 14, 57, and 59, but I can't figure out what
they correspond to, nor, therefore, how to turn them off.
libceph: mon0 10.0.0.67:6789
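As an aside, the feature mask in a libceph "feature set mismatch" error can be decoded with a few lines of Python. A minimal sketch (the mask here is a constructed example, not the value from the log above):

```python
def missing_bits(mask: int) -> list[int]:
    """Return the bit positions set in a feature mask."""
    return [i for i in range(mask.bit_length()) if mask >> i & 1]

# Example: a mask with bits 14, 57, and 59 set
mask = (1 << 14) | (1 << 57) | (1 << 59)
print(missing_bits(mask))  # [14, 57, 59]
```

Paste the hex value from the kernel log as `int("40000...", 16)` and the bit numbers fall out directly.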
hind OSDs?
>>
>> -Paul
>>
>> Ceph on native Infiniband may be available some day, but it seems
>> impractical with the current releases. IP-over-IB is also known to work.
>>
>>
>> On Apr 21, 2016, at 8:12 PM, Schlacta, Christ <aarc...@aarcane.org> wrote:
> On Apr 21, 2016, at 8:12 PM, Schlacta, Christ <aarc...@aarcane.org> wrote:
Is it possible? Can I use fibre channel to interconnect my ceph OSDs?
Intuition tells me it should be possible, yet experience (Mostly with
fibre channel) tells me no. I don't know enough about how ceph works
to know for sure. All my googling returns results about using ceph as
a BACKEND for
What do you use as an interconnect between your osds, and your clients?
On Mar 20, 2016 11:39 AM, "Mike Almateia" <mike.almat...@gmail.com> wrote:
> 18-Mar-16 21:15, Schlacta, Christ wrote:
>
>> Insofar as I've been able to tell, both BTRFS and ZFS provide similar
On Mar 18, 2016 4:31 PM, "Lionel Bouton"
>
> Will bluestore provide the same protection against bitrot as BTRFS?
> I.e.: with BTRFS the deep-scrubs detect inconsistencies *and* the OSD(s)
> with invalid data get IO errors when trying to read corrupted data and
>
I posted about this a while ago, and someone else has since inquired,
but I am seriously wanting to know if anybody has figured out how to
boot from a RBD device yet using ipxe or similar. Last I read,
loading the kernel and initrd from object storage would be
theoretically easy, and would only
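For what it's worth, a minimal sketch of how the "kernel and initrd over HTTP" half might look as an iPXE script; the hostname and paths are hypothetical, and the initramfs would then be responsible for running `rbd map` against the root image:

```
#!ipxe
# Hypothetical: kernel and initrd served over plain HTTP (e.g. exported
# through the RADOS gateway); the initramfs maps the RBD root afterwards.
kernel http://boot.example.com/vmlinuz root=/dev/rbd0 ro
initrd http://boot.example.com/initrd.img
boot
```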
Insofar as I've been able to tell, both BTRFS and ZFS provide similar
capabilities back to CEPH, and both are sufficiently stable for the
basic CEPH use case (Single disk -> single mount point), so the
question becomes this: Which actually provides better performance?
Which is the more highly
If you can swing 2u chassis and 2.5" drives instead, you can trivially get
between 15 and 24 drives across the front and rear of a beautiful hot-swap
chassis. There are numerous makes and models available, from custom builds
down to used gear on ebay. Worth a peek.
On Thu, Feb 11, 2016 at
In just the last week I've seen at least two failures as a result of
replication factor two. I would highly suggest that for any critical data
you choose an rf of at least three.
With your stated capacity, you're looking at a mere 16TB with rf3. You'll
need to look into slightly more capacity or
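A quick sketch of the arithmetic behind that figure, assuming roughly 48TB raw (inferred from the 16TB-at-rf3 number, which is not stated in the thread):

```python
def usable_capacity(raw_tb: float, replication_factor: int) -> float:
    """Usable space under N-way replication: raw capacity divided by N."""
    return raw_tb / replication_factor

print(usable_capacity(48, 3))  # 16.0 TB at rf=3
print(usable_capacity(48, 2))  # 24.0 TB at rf=2, but two failures can lose data
```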
and rbd. I'll be posting a blog post
about it later, but for now, I just thought I'd share the facts in case
anyone here cares besides me.
On Wed, Jan 29, 2014 at 1:52 AM, zorg <z...@probesys.com> wrote:
Hello
we use libvirt from wheezy-backports
Le 29/01/2014 04:13, Schlacta, Christ
On Jan 29, 2014 10:44 AM, Dimitri Maziuk <dmaz...@bmrb.wisc.edu> wrote:
On 01/29/2014 12:40 PM, Schlacta, Christ wrote:
Why can't you compile it yourself using rhel's equivalent of dkms?
Because of
fully supported RedHat
^^^
DKMS is Red Hat technology
I can only comment on the log. I would recommend using three logs (6 disks
as mirror pairs) per system, and add a crush map hierarchy level for cache
drive so that any given pg will never mirror twice to the same log. That'll
also reduce your failure domain.
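A hedged sketch of what that extra hierarchy level might look like in crushmap syntax; the bucket type name `loggroup` is hypothetical, standing in for a level grouping the hosts that share one log device:

```
# Hypothetical rule: spread replicas across loggroup buckets so no two
# copies of a pg ever land behind the same shared log device.
rule replicated_by_log {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type loggroup
    step emit
}
```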
On Jan 29, 2014 4:26 PM, Geraint Jones
I'll have to look at the iscsi and zfs initramfs hooks, and see if I can
model it most concisely on what they currently do. Between the two, I
should be able to hack something up.
On Mon, Jan 27, 2014 at 9:46 PM, Stuart Longland <stua...@vrt.com.au> wrote:
On 28/01/14 15:29, Schlacta, Christ
Is the list misconfigured? Clicking Reply in my mail client on nearly
EVERY list sends a reply to the list, but for some reason, this list is one
of the very, extremely, exceedingly few lists where that doesn't work as
expected. Is the list misconfigured? Anyway, if someone could fix this,
it
I'm pasting this in here piecemeal, due to a misconfiguration of the list.
I'm posting this back to the original thread in the hopes of the
conversation being continued. I apologize in advance for the poor
formatting below.
On Mon, Jan 27, 2014 at 12:50 PM, Schlacta, Christ aarc
So on Debian wheezy, qemu is built without ceph/rbd support. I don't know
about everyone else, but I use backported qemu. Does anyone provide a
trusted, or official, build of qemu from Debian backports that supports
ceph/rbd?
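A quick way to check whether a given qemu build has rbd support compiled in; exact binary names vary by distro:

```shell
# A qemu built with rbd support links against librbd:
ldd "$(command -v qemu-system-x86_64)" | grep -i rbd

# qemu-img lists rbd among its supported formats when built with it:
qemu-img --help | grep -ow rbd
```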
There are some seldom-used files (namely install ISOs) that I want to throw
in ceph to keep them widely available, but throughput and response times
aren't critical for them, nor is redundancy. Is it possible to throw them
into OSDs on cheap, bulk offline storage, and more importantly, will idle
What guarantees does ceph place on data integrity? Zfs uses a Merkle tree
to guarantee the integrity of all data and metadata on disk and will
ultimately refuse to return duff data to an end user.
I know ceph provides some integrity mechanisms and has a scrub feature.
Does it provide
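For reference, scrubs can also be triggered by hand rather than waiting for the schedule; the pg id below is just an example:

```shell
ceph pg scrub 0.1f         # light scrub: compares object sizes/metadata
ceph pg deep-scrub 0.1f    # deep scrub: reads and compares object data
ceph osd deep-scrub 0      # deep-scrub all pgs primary on osd.0
```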
So I just have a few more questions that are coming to mind. Firstly, I
have OSDi whose underlying filesystems can be... dun dun dun... resized!
If I choose to expand my allocation to ceph, I can in theory do so by
expanding the quota on the OSDi. (I'm using ZFS.) Similarly, if the OSD is
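A sketch of that resize flow, with hypothetical dataset and OSD names:

```shell
# Grow the ZFS quota backing the OSD's filesystem:
zfs set quota=2T tank/ceph/osd.0

# Then tell CRUSH about the new size so placement follows;
# by convention the crush weight is the capacity in TB:
ceph osd crush reweight osd.0 2.0
```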