n 56: invalid start byte
>
> Which is certainly not the type of error expected.
> But it is hard to detect any 0x86 in the arguments.
Are you able to reproduce this problem manually? I.e., in the src dir, start
the cluster using vstart.sh:
./vstart.sh -n
Check it is running:
./ceph -s
Repea
cular examples? Because what I see in the
master is usually like below:
if LINUX
xio_server_LDADD += -ldl
endif
If you see -ldl added unconditionally somewhere, it likely needs to be
fixed the same way.
--
Mykola Golub
and
my patch requires some work to apply on the current master.
--
Mykola Golub
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
;ceph_arch_neon"
>
> So off to find where ceph_arch_neon is, and why it seems not defined.
> Perhaps as simple as loading the shared libs??
You have to add -export-dynamic to LDFLAGS, something like in this
patch:
https://github.com/trociny/ceph/commit/dcee0c0635d37f2b36257c55a3cc69d05b5afe5e#diff-ef3c0ccbdde56cca8
dev cluster
> > ./ceph -s # check it works
> >
>
> So what backend for the OSDs are you using here?
The default one, which is FileStore. I have not done much testing
though...
--
Mykola Golub
git clone --recursive -b wip-freebsd https://github.com/trociny/ceph.git
cd ceph
./install-deps.sh
./do_autogen.sh
gmake
cd src
./vstart.sh # start dev cluster
./ceph -s # check it works
--
Mykola Golub
On Fri, Oct 16, 2015 at 08:32:12AM +0300, Mykola Golub wrote:
> Thank you all for your comments! I will come back with a pull request.
Here it is: https://github.com/ceph/ceph/pull/6369
--
Mykola Golub
Thank you all for your comments! I will come back with a pull request.
--
Mykola Golub
On Thu, Oct 15, 2015 at 11:29:56AM -0700, Josh Durgin wrote:
> On 10/15/2015 06:45 AM, Sage Weil wrote:
> >On Thu, 15 Oct 2015, Mykola Golub wrote:
> >>On Thu, Oct 15, 2015 at 08:47
set(uint64_t), et al.
For C++ I was also thinking about xyz_set(const std::string&),
xyz_set(uint64_t) variants, i.e:
int rbd::Image::options::set(int optname, const std::string& val);
int rbd::Image::options::set(int optname, uint64_t val);
...
--
Mykola Golub
it code with 32bit possible (supported)?
Also, for this particular (char*) case, length would actually be the
length of the string, not the pointer length. From my example:
const char* journal_object_pool = "journal";
r = rbd_image_options_set(opts, RBD_OPTION_JOURNAL_OBJECT_POOL,
> >>> uint64_t stripe_unit = 65536;
> >>> r = rbd_image_options_set(opts, RBD_OPTION_STRIPE_UNIT,
> >>>                           &stripe_unit, sizeof(stripe_unit));
> >>> assert(r == 0);
> >>> uint64_t stripe_count = 16;
> >>> r = rb
My initial thought was that a user would want to set features and order
more frequently than other options, so keeping them as additional
arguments would be useful. But now, thinking more about it, I agree that
they can be moved to options.
> --
>
> Jason Dillaman
>
>
> -
ON_JOURNAL_OBJECT_POOL,
journal_object_pool, strlen(journal_object_pool) + 1);
assert(r == 0);
r = rbd_create4(io, name, size, features, &order, opts);
cleanup:
rbd_image_options_destroy(opts);
--
Mykola Golub
>
> Thanks Mykola for implementing this, I will rebase the priority
> setting stuff against it.
Ah, so you are working on adding per-pool priority options? Because I
have just started working on this too, but if you are already doing it I
will abandon mine.
--
Mykola Golub
On Fri, Sep 25, 2015 at 11:02:36AM -0700, Sage Weil wrote:
> It's just this last commit, right?
>
> https://github.com/ceph/ceph/pull/6084
Yes, thanks.
--
Mykola Golub
On Fri, Sep 25, 2015 at 07:09:53AM -0700, Sage Weil wrote:
> Hi Mykola,
>
> On Fri, 25 Sep 2015, Mykola Golub wrote:
> > What do you think about this implementation, which adds a dictionary
> > for pool options to pg_pool_t?
> >
> > https://github.com/ceph/ceph/
Hi,
On Mon, Sep 21, 2015 at 04:32:19PM +0300, Mykola Golub wrote:
> On Wed, Sep 16, 2015 at 09:23:07AM -0700, Sage Weil wrote:
> > On Wed, 16 Sep 2015, GuangYang wrote:
> > > Hi Sam,
> > > As part of the effort to solve problems similar to issue #13104
> > > (
On Wed, Sep 23, 2015 at 09:33:14AM +0300, Mykola Golub wrote:
> Also, I am not sure we should specify this way, as it is
> not consistent with other rbd commands. By default rbd operates on
> 'rbd' pool, which can be changed by --pool option.
The same reasoning for these c
rbd feature show []
BTW, where do you think these default feature flags will be stored?
Storing them in pg_pool_t::flags I suppose is the easiest, but it looks
like a layering violation.
--
Mykola Golub
rbd consistency-group attach volume/volume1 volume/cgroup1
rbd consistency-group attach volume/volume2 volume/cgroup1
I guess before attaching I will need to disable journaling for the
volumes (because it was automatically enabled previously with 'rbd
mirror pool enable')? Will attaching automatically enable the mirroring
feature for the attached images?
>
> --
>
> Jason Dillaman
>
> [1] http://tracker.ceph.com/projects/ceph/wiki/RBD_Async_Mirroring
>
>
>
--
Mykola Golub
them instead?
BTW, I see we already have in pg_pool_t:
map<string,string> properties; ///< OBSOLETE
I wonder what it was supposed to be used for and why it is marked
obsolete?
--
Mykola Golub
The second one is for extra config parameters, which are specified on
the command line like below:
./vstart.sh -n -o 'debug ms = 20'
The section is at the bottom so it can override options set earlier in
the config.
--
Mykola Golub
On Fri, Sep 11, 2015 at 04:24:23PM +0300, Mykola Golub wrote:
> On Fri, Sep 11, 2015 at 05:59:56AM -0700, Sage Weil wrote:
>
> > I wonder if, in addition, we should also allow scrub and deep-scrub
> > intervals to be set on a per-pool basis?
>
> ceph osd pool set
On Fri, Sep 11, 2015 at 05:59:56AM -0700, Sage Weil wrote:
> I wonder if, in addition, we should also allow scrub and deep-scrub
> intervals to be set on a per-pool basis?
ceph osd pool set <pool> [deep-]scrub_interval N ?
I could do this too.
--
Mykola Golub
On Fri, Sep 11, 2015 at 11:08:29AM +0100, Gregory Farnum wrote:
> On Fri, Sep 11, 2015 at 7:42 AM, Mykola Golub wrote:
> > Hi,
> >
> > I would like to add new pool flags: noscrub and nodeep-scrub, to be
> > able to control scrubbing on per pool basis. In our case it co
Before I create a pull request, I would like to see if other people
consider this useful or maybe have some other suggestions.
--
Mykola Golub
On Mon, Jan 19, 2015 at 07:17:58AM -0800, Sage Weil wrote:
> On Mon, 19 Jan 2015, Gregory Farnum wrote:
> > On Sun, Jan 18, 2015 at 11:02 PM, Mykola Golub wrote:
> > > On Sun, Jan 18, 2015 at 10:33:05AM -0800, Sage Weil wrote:
> > >> On Sun, 18 Jan 2015, Mykola
: previously the method always calculated crc; now it calculates it
only if crc is enabled in the config. This means crc cannot be disabled
if there are monitors of an older version in the cluster. I suppose we
can't fix this, but it might be worth documenting in the release notes?
--
Mykola Golub
e
> osd_pool_default_flag_nopgchange = true
> osd_pool_default_flag_nosizechange = true
>
> The big question for me is should we enable these by default in hammer?
Please see https://github.com/ceph/ceph/pull/3409
I have not enabled any by default.
Also, I have not mirrored CEPH_FEATURE_OSDHASHPSPOOL, should I?
--
Mykola Golub
On Sun, Jan 18, 2015 at 10:33:05AM -0800, Sage Weil wrote:
> On Sun, 18 Jan 2015, Mykola Golub wrote:
> > Hi Ceph,
> >
> > Right now, for a monitor that is not the leader, if a received
> > command is not supported locally but is supported by the leader, it
> > is forwarded to
>
e:
https://github.com/trociny/ceph/commit/98f835357e378b1c5f05b32ba90a8b8537ba1ad8
But maybe we need a more general solution? We might face a similar
issue in the future when adding a new command that is not expected to
be forwarded to the leader (like injectargs).
--
Mykola Golub
On Mon, Jan 12, 2015 at 10:22:51AM +0200, Mykola Golub wrote:
> On Sun, Jan 11, 2015 at 09:33:57AM -0800, Sage Weil wrote:
>
> > By the way I took another look and I'm not sure that it is worth
> > duplicating all of the tree logic for a tree view. It seems easier to
simpler than the tree traversal stack)... or generalize it somehow?
Note, we already have duplication, at least CrushWrapper::dump_tree()
and OSDMap::print_tree(). I will work on a generalization, I think some
generic tree dumper in CrushWrapper.
--
Mykola Golub
ate.
BTW, wouldn't disk usage in bytes (size used avail) be useful in this
output too? I.e something like below:
# id weight reweight size used avail %util var
0  1.0    1.0     886G 171G 670G 19.30 1.00
...
--
total size/used/avail: 886G/171G/670G
avg %util: 41.78 min/max var: 0.
st osd3
 2 1.00 1.00 45.66 1.09      osd.2
-5 1.00    - 44.15 1.06 host osd4
 3 1.00 1.00 44.15 1.06      osd.3
-6 1.00    - 36.82 0.88 host osd5
 4 1.00 0.80 36.82 0.88      osd.4
--
AVG %UTIL: 41.78 MIN/MAX VAR: 0.88/1.09 DEV: 6.19
--
al reverse DNS HostNames
> std::map<std::string, std::string> user_map; ///< optional user KV map
> void resolve(); ///< reverse DNS OSD IPs and store in HostNames
> } whereis_t;
>
> static int whereis(IoCtx &ioctx, const std::string &oid,
> s