2015 at 5:38 AM, Owen Synge <osy...@suse.com> wrote:
> > On Mon, 14 Sep 2015 13:57:26 -0700
> > Gregory Farnum <gfar...@redhat.com> wrote:
> >
> >> The OSD is supposed to stay down if any of the networks are
> >> missing. Ceph is a CP system in CAP parlance
On Mon, 14 Sep 2015 13:57:26 -0700
Gregory Farnum wrote:
> The OSD is supposed to stay down if any of the networks are missing.
> Ceph is a CP system in CAP parlance; there's no such thing as a CA
> system. ;)
I know I am being fussy, but within my team your email was cited in the
first action items.
-Ali
- Original Message -
From: Owen Synge osy...@suse.com
To: Ali Maredia amare...@redhat.com, ceph-devel@vger.kernel.org
Sent: Tuesday, August 4, 2015 6:42:31 AM
Subject: Re: Transitioning Ceph from Autotools to CMake
Dear Ali,
I am glad you are making
Dear Ali,
I am glad you are making progress.
Sadly I don't yet know cmake.
Please consider the systemd wip branch. It might be wise to leave
autotools around a little longer, until all functionality is in CMake.
Best regards
Owen
On 07/30/2015 09:01 PM, Ali Maredia wrote:
After
On 08/04/2015 12:13 PM, Owen Synge wrote:
On 08/03/2015 09:07 PM, Sage Weil wrote:
On Mon, 3 Aug 2015, Owen Synge wrote:
I will check the rgw.
It is not working due to missing:
/usr/lib/ceph-radosgw/ceph-radosgw-prestart.sh
which is a useful check tool, available in this commit:
https
On 08/03/2015 09:07 PM, Sage Weil wrote:
On Mon, 3 Aug 2015, Owen Synge wrote:
Dear all,
My plan is to make a fedora22-systemd branch. I will leave fedora 20 as
sysvinit.
OK, just done my first proper install of the systemd ceph branch on Fedora 22.
I can confirm most of the issues.
I am
On 08/04/2015 03:07 PM, Sage Weil wrote:
On Tue, 4 Aug 2015, Owen Synge wrote:
On 08/04/2015 12:13 PM, Owen Synge wrote:
On 08/03/2015 09:07 PM, Sage Weil wrote:
On Mon, 3 Aug 2015, Owen Synge wrote:
I will check the rgw.
It is not working due to missing:
/usr/lib/ceph-radosgw/ceph
On 07/29/2015 04:08 PM, Alex Elsayed wrote:
Sage Weil wrote:
On Wed, 29 Jul 2015, Alex Elsayed wrote:
Travis Rhoden wrote:
On Tue, Jul 28, 2015 at 12:13 PM, Sage Weil sw...@redhat.com wrote:
Hey,
I've finally had some time to play with the systemd integration branch
on
fedora 22.
Dear all,
My plan is to make a fedora22-systemd branch. I will leave fedora 20 as
sysvinit.
OK, just done my first proper install of the systemd ceph branch on Fedora 22.
I can confirm most of the issues.
I am giving up for the day, but so far applying SUSE/opensuse code to
Fedora ceph-deploy code
On 07/29/2015 06:50 PM, Vasiliy Angapov wrote:
Hi colleagues,
I see some systemd-related actions here. Can you please also have a
look at how I managed to rule Ceph with systemd -
https://github.com/angapov/ceph-systemd/ ?
It uses systemd generator script, which is called every time host
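For readers unfamiliar with generators: a systemd generator is a small program run very early at boot that writes unit files into a directory systemd then loads. The sketch below only illustrates the idea; the function name and layout are assumptions for illustration, not taken from the linked repo.

```sh
# Hypothetical sketch of an OSD unit generator: emit one
# ceph-osd@ID.service per OSD data directory found under a base dir.
gen_osd_units() {
    out="$1"; base="$2"
    for osd in "$base"/ceph-*; do
        [ -d "$osd" ] || continue
        id="${osd##*-}"
        printf '[Service]\nExecStart=/usr/bin/ceph-osd -f --id %s\n' "$id" \
            > "$out/ceph-osd@$id.service"
    done
}

# Demo against a scratch layout instead of /var/lib/ceph/osd:
tmp=$(mktemp -d)
mkdir -p "$tmp/osd/ceph-3" "$tmp/out"
gen_osd_units "$tmp/out" "$tmp/osd"
ls "$tmp/out"
```

A real generator would be installed under /usr/lib/systemd/system-generators and receive its output directories as arguments from systemd.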
On 07/28/2015 09:13 PM, Sage Weil wrote:
Hey,
I've finally had some time to play with the systemd integration branch on
fedora 22. It's in wip-systemd and my current list of issues includes:
- after mon creation ceph-create-keys isn't run automagically
- Personally I kind of hate how
Owen, I'd like to get this just a tad bit more functional and then merge
ASAP, then fix up any issues in the weeks leading up to infernalis. What say
ye?
I will look into this today and deploy a cluster on fedora that is close
to equivalent to what we have on SUSE.
I'll give you a report at the
was also expanded to be more complete than needed for
this discussion, in part for some new members of the SUSE ceph team to
understand the constraints and structure of a ceph cluster deployment.
Best regards
Owen
- Travis
On Tue, Jul 14, 2015 at 4:24 AM, Owen Synge osy...@suse.com
On 07/09/2015 09:58 PM, Travis Rhoden wrote:
On Jul 9, 2015, at 4:59 AM, Owen Synge osy...@suse.com wrote:
On 07/09/2015 12:46 PM, John Spray wrote:
Owen,
Hi John,
thanks for your reasonable mail.
Please can you say what your overall goal is with recent ceph-deploy
patches?
To give
Dear Travis,
We clearly disagree in this area.
I hope me explaining my perspective is not seen as unhelpful.
On 07/09/2015 07:00 PM, Travis Rhoden wrote:
(2B) inflexible / complex include paths for shared code between facade
implementations.
I disagree here. There are plenty of places
cross
language enough.
Thanks
Owen
PS
Realising it is actually 12 years since a collaborator at CERN gave
me Python test scripts to maintain when they left CERN, for the C++
project I was working on, is really scary.
-Greg
On Tue, Jul 14, 2015 at 11:41 AM, Owen Synge osy...@suse.com
Dear all,
ceph-deploy is, to quote:
It is not a generic deployment system, it is only for Ceph, and is
designed for users who want to quickly get Ceph running with sensible
initial settings without the overhead of installing Chef, Puppet or Juju.
It does not handle client configuration
regards
Owen
On 07/10/2015 07:03 AM, Travis Rhoden wrote:
On Jul 9, 2015, at 12:45 PM, Owen Synge osy...@suse.com wrote:
Typo:
On 07/09/2015 09:37 PM, Owen Synge wrote:
Dear all,
There are other details to be discussed, and hopefully agreed, but
let's get to issue #1
Dear all,
Let's put a positive spin on this thread and set all misunderstandings on
my side :)
I propose that John clarified, and I misunderstood, the upstream
ceph-deploy maintainers' position; the style guide includes:
(0) Opposes duplication of code.
(1) Opposes duplication of code for each
Small correction due to not proofreading enough.
On 07/09/2015 06:28 PM, Owen Synge wrote:
Dear all,
Let's put a positive spin on this thread and set all misunderstandings on
my side :)
I propose that John clarified, and I misunderstood, the upstream
ceph-deploy maintainers' position
PPS I did not intentionally kill the branch and will investigate why it's
missing.
Correction it is not missing and is still here.
https://github.com/ceph/ceph-deploy/pull/320
I still feel that I should close it in case people stop reviewing my
patches though.
Best regards
Owen
this has occurred.
Best regards
Owen
PS I have to fork ceph-deploy rgw today as I have deadlines to get
something out of the door.
PPS I did not intentionally kill the branch and will investigate why it's
missing.
John
On 09/07/15 11:08, Owen Synge wrote:
Dear all,
The facade pattern
Dear all,
The facade pattern (or façade pattern) is a software design pattern
commonly used with object-oriented programming. The name is by analogy
to an architectural facade. (wikipedia)
I am frustrated with the desire to standardise on one bad-practice
implementation of the facade
Typo:
On 07/09/2015 09:37 PM, Owen Synge wrote:
Dear all,
There are other details to be discussed, and hopefully agreed, but
let's get to issue #1. The style issues still apply to
ceph and ceph-deploy.
From what you said, in my opinion the boat anchor in ceph-deploy
Dear all,
There are other details to be discussed, and hopefully agreed, but
let's get to issue #1. The style issues still apply to
ceph and ceph-deploy.
From what you said, in my opinion the boat anchor in ceph-deploy is
redefined as coupling of the facade pattern, where all data is
Dear All,
We have working SUSE systemd support using the master systemd files.
The only thing that proves tricky is merging it to master.
You can find it working here
https://github.com/SUSE/ceph/tree/distro/suse-0-80-9
We also have a release for hammer.
The problem is that nearly every
On 06/19/2015 02:10 AM, Sage Weil wrote:
Hey Owen,
Hey Sage.
As of master merging
02ef5cf9b3ab1a4dbe13bdb3f036591f3ed0b6f7
you can finally build ceph as-is from master on openSUSE 13.2 and SLE12 and
make an RPM.
Hopefully upstream can act on this news, and add this to their test suites.
At suse
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi Robert,
I have a pull request open for exactly this use case.
https://github.com/ceph/ceph/pull/4911
I find it particularly useful for debugging spec file issues when
running on operating systems other than SUSE.
The pull request gives two
On 06/10/2015 01:06 AM, Ken Dreyer wrote:
On 06/09/2015 11:19 AM, Owen Synge wrote:
we can remove many hard-coded values, replacing them with variables,
and their number will probably only grow; for example
%if 0%{?rhel} || 0%{?fedora}
--with-systemd-libexec-dir=/usr/libexec
Sorry to catch this thread late.
I come here via patch
https://github.com/ceph/ceph/pull/4911#issuecomment-110422312
I think you guys are missing that configure is doing something here.
(1) Configure is generating the spec file.
(2) It could also generate the deb files.
What no one has done
we can remove many hard-coded values, replacing them with variables,
and their number will probably only grow; for example
%if 0%{?rhel} || 0%{?fedora}
--with-systemd-libexec-dir=/usr/libexec/ceph \
%endif
%if 0%{?opensuse} || 0%{?suse_version}
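The mail is truncated at this point, so the SUSE branch below is only a hedged guess at how the two conditionals might pair up; the SUSE libexec path is an assumption for illustration, not taken from the actual spec file.

```
%if 0%{?rhel} || 0%{?fedora}
    --with-systemd-libexec-dir=/usr/libexec/ceph \
%endif
%if 0%{?opensuse} || 0%{?suse_version}
    --with-systemd-libexec-dir=/usr/lib/ceph \
%endif
```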
On 06/09/2015 08:44 PM, Sage Weil wrote:
On Tue, 9 Jun 2015, Owen Synge wrote:
On 06/09/2015 07:22 PM, Sage Weil wrote:
On Tue, 9 Jun 2015, Owen Synge wrote:
Sorry to catch this thread late.
There were two goals here:
1- make the generated tarball deterministic and independent
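On the first goal, a hedged illustration of what "deterministic" buys you: archiving straight from git ties the tarball bytes to the tree and commit alone, so two runs produce identical output. This is a sketch of the property, not the project's actual release tooling.

```sh
# Build the same tarball twice from one commit and compare the bytes.
tmp=$(mktemp -d); cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name dev
echo hello > f.txt
git add f.txt && git commit -qm init
git archive --format=tar --prefix=ceph-0.0/ HEAD > ../a.tar
git archive --format=tar --prefix=ceph-0.0/ HEAD > ../b.tar
cmp ../a.tar ../b.tar && echo "identical"
```

Uncompressed tar output of `git archive` uses the commit timestamp for mtimes, which is what makes the result reproducible; piping through plain gzip can reintroduce a timestamp.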
On 06/03/2015 06:26 PM, Sage Weil wrote:
On Wed, 3 Jun 2015, Owen Synge wrote:
Dear ceph-devel,
Linux has more than one init systems.
We in SUSE are in the process of upstreaming our spec files, and all
our releases are systemd based.
Ceph seems more tested with sysVinit upstream.
We
Dear ceph-devel,
Linux has more than one init systems.
We in SUSE are in the process of upstreaming our spec files, and all
our releases are systemd based.
Ceph seems more tested with sysVinit upstream.
We have 3 basic options for doing this in a packaged upstream system.
1) We don't install
An erasure encoded pool cannot be accessed directly using rbd. For this
reason we need a cache pool and an erasure pool. This not only allows
supporting rbd but increases performance.
http://karan-mj.blogspot.de/2014/04/erasure-coding-in-ceph.html
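The cache-tier setup described above usually follows a sequence along these lines; pool names and PG counts here are placeholders for illustration, and the commands would need a running cluster.

```sh
ceph osd pool create ecpool 64 64 erasure
ceph osd pool create cachepool 64
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool
rbd create --size 1024 --pool ecpool testimage
```

With the overlay set, clients address the erasure-coded base pool while reads and writes are transparently serviced through the replicated cache pool.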
Dear all,
By default ceph-disk will do the following:
# ceph-disk prepare --fs-type xfs --cluster ceph -- /dev/sdk
DEBUG:ceph-disk:Preparing osd data dir /dev/sdk
No block device /dev/sdk exists so ceph-disk decides a block device
is not wanted and makes a directory for an OSD.
I think
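The fallback being described can be reduced to roughly this check; a sketch of the observed behaviour, not ceph-disk's actual code.

```sh
# If the target is a block device, prepare it as a disk; otherwise
# (including a path that does not exist at all) fall back to
# preparing a plain OSD data directory.
prepare_target() {
    if [ -b "$1" ]; then
        echo disk
    else
        echo dir
    fi
}

prepare_target /no/such/device   # prints "dir"
```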
On 08/01/2014 05:10 PM, Sage Weil wrote:
On Fri, 1 Aug 2014, Owen Synge wrote:
Dear all,
By default ceph-disk will do the following:
# ceph-disk prepare --fs-type xfs --cluster ceph -- /dev/sdk
DEBUG:ceph-disk:Preparing osd data dir /dev/sdk
No block device /dev/sdk exists so
Dear All,
This email is about
$ ceph-deploy osd create ceph-node4:vdb
and it not behaving identically to:
$ ceph-deploy osd prepare ceph-node2:vdb
$ ceph-deploy osd activate ceph-node2:vdb1
It is my understanding that the following sequence should deploy ceph
correctly and
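Spelled out, a minimal ceph-deploy run of that era looked roughly like this; hostnames and devices are placeholders, and the commands need real target hosts.

```sh
ceph-deploy new ceph-node1
ceph-deploy install ceph-node1 ceph-node2
ceph-deploy mon create-initial
ceph-deploy osd prepare ceph-node2:vdb
ceph-deploy osd activate ceph-node2:vdb1
```

`osd create` is meant to be shorthand for the final prepare/activate pair, which is why a behavioural difference between the two paths is a bug.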
Dear all,
I chatted with Alfredodeza on IRC and will try to make this code work
without depending upon udev.
Regards
Owen
On 07/10/2014 12:19 PM, Owen Synge wrote:
Dear All,
This email is about
$ ceph-deploy osd create ceph-node4:vdb
and it not behaving identically to