Re: [ceph-users] Mimic 13.2.3?

2019-01-08 Thread Ken Dreyer
On Fri, Jan 4, 2019 at 12:23 PM Gregory Farnum  wrote:
> I imagine we will discuss all this in more detail after the release,
> but everybody's patience is appreciated as we work through these
> challenges.

We have some people on the list asking for more frequent releases, and
some people on the list asking for more quality releases. It's hard to
achieve speed and quality with our current resources.

For CentOS, this is one of the reasons I want to expose this sort
of thing via the "testing" and "release" SIG repositories.
Leading-edge Ceph admins will still be able to upgrade within hours of
a new release, while conservative users can have something that's a
few days old. Ideally our "testing" users would give us more feedback
before we push things out to the main "release" repos.
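Sketching how that opt-in might look for a CentOS admin: the centos-release-ceph-luminous package exists today, but the "testing" repo file below is illustrative (the baseurl follows the SIG's current buildlogs layout, and gpgcheck=0 because buildlogs builds are unsigned):

```shell
# Released SIG packages:
yum -y install centos-release-ceph-luminous

# Opt in to pre-release "testing" builds (illustrative repo file):
cat > /etc/yum.repos.d/ceph-luminous-testing.repo <<'EOF'
[ceph-luminous-testing]
name=Ceph Luminous testing builds (CentOS Storage SIG)
baseurl=https://buildlogs.centos.org/centos/7/storage/$basearch/ceph-luminous/
enabled=1
gpgcheck=0
EOF

yum update ceph
```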

- Ken
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Should ceph build against libcurl4 for Ubuntu 18.04 and later?

2018-11-26 Thread Ken Dreyer
On Thu, Nov 22, 2018 at 11:47 AM Matthew Vernon  wrote:
>
> On 22/11/2018 13:40, Paul Emmerich wrote:
> > We've encountered the same problem on Debian Buster
>
> It looks to me like this could be fixed simply by building the Bionic
> packages in a Bionic chroot (ditto Buster); maybe that could be done in
> future? Given I think the packaging process is being reviewed anyway at
> the moment (hopefully 12.2.10 will be along at some point...)

That's how we're building it currently. We build ceph in pbuilder
chroots that correspond to each distro.

On master, debian/control has Build-Depends: libcurl4-openssl-dev so
I'm not sure why we'd end up with a dependency on libcurl3.

Would you please give me a minimal set of `apt-get` reproduction steps
on Bionic for this issue? Then we can get it into tracker.ceph.com.
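For reference, a minimal reproduction would look something like this on a clean Bionic machine (the repo line and package choice are assumptions; adjust to whatever actually triggers the conflict):

```shell
apt-get update && apt-get install -y curl gnupg ca-certificates
curl -fsSL https://download.ceph.com/keys/release.asc | apt-key add -
echo "deb https://download.ceph.com/debian-luminous bionic main" \
    > /etc/apt/sources.list.d/ceph.list
apt-get update
# If the bionic packages were linked against Xenial's libcurl, this
# install should surface the libcurl3/libcurl4 dependency conflict:
apt-get install -y radosgw
```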

- Ken


Re: [ceph-users] Packaging bug breaks Jewel -> Luminous upgrade

2018-11-08 Thread Ken Dreyer
Hi Matthew,

What's the full apt-get command you're running?
On Thu, Nov 8, 2018 at 9:31 AM Matthew Vernon  wrote:
>
> Hi,
>
> in Jewel, /etc/bash_completion.d/radosgw-admin is in the radosgw package
> In Luminous, /etc/bash_completion.d/radosgw-admin is in the ceph-common
> package
>
> ...so if you try and upgrade, you get:
>
> Unpacking ceph-common (12.2.8-1xenial) over (10.2.9-0ubuntu0.16.04.1) ...
> dpkg: error processing archive ceph-common_12.2.8-1xenial_amd64.deb
> (--install):
>   trying to overwrite '/etc/bash_completion.d/radosgw-admin', which is
> also in package radosgw 10.2.9-0ubuntu0.16.04.1
>
> This is a packaging bug - ceph-common needs to declare (via Replaces and
> Breaks) that it's taking over some of the radosgw package -
> https://www.debian.org/doc/debian-policy/ch-relationships.html#overwriting-files-in-other-packages
>
> The exact versioning would depend on when the move was made (I presume
> either Jewel -> Kraken or Kraken -> Luminous). Does anyone know?
>
> [would you like this reported formally, or is the fix trivial enough to
> just be done? :-) ]
>
> Regards,
>
> Matthew
>
>
> --
>  The Wellcome Sanger Institute is operated by Genome Research
>  Limited, a charity registered in England with number 1021457 and a
>  company registered in England with number 2742969, whose registered
>  office is 215 Euston Road, London, NW1 2BE.
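The fix Matthew describes is a couple of lines in ceph-common's stanza in debian/control, something like the following, assuming the file moved at the start of the Luminous cycle (the exact boundary version would need to be confirmed):

```
Package: ceph-common
[...]
Replaces: radosgw (<< 12.0.0)
Breaks: radosgw (<< 12.0.0)
```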


Re: [ceph-users] ceph 12.2.9 release

2018-11-07 Thread Ken Dreyer
On Wed, Nov 7, 2018 at 8:57 AM Kevin Olbrich  wrote:
> We solve this problem by hosting two repos. One for staging and QA and one 
> for production.
> Every release gets to staging (for example directly after building a scm tag).
>
> If QA passed, the stage repo is turned into the prod one.
> Using symlinks, it would be possible to switch back if problems occur.
> Example: https://incoming.debian.org/

With the CentOS Storage SIG's cbs.centos.org, we have the ability to
tag builds as "-candidate", "-testing", and "-released". I think that
mechanism could help here, so brave users can run "testing" early
before it goes out to the entire world in "released".

We would have to build out something like that for Ubuntu, maybe
copying around the binaries.
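With the koji-based cbs client, that promotion is just re-tagging the same build, roughly as below (the tag and build names follow the SIG's conventions, but treat the exact strings as illustrative):

```shell
# A build starts out in -candidate; promote it for brave testers:
cbs tag-build storage7-ceph-luminous-testing ceph-12.2.9-0.el7

# After QA feedback, promote the identical build to -released:
cbs tag-build storage7-ceph-luminous-release ceph-12.2.9-0.el7
```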

- Ken


Re: [ceph-users] can we drop support of centos/rhel 7.4?

2018-09-24 Thread Ken Dreyer
On Thu, Sep 13, 2018 at 8:48 PM kefu chai  wrote:
> my question is: is it okay to drop the support of centos/rhel 7.4? so
> we will solely build and test the supported Ceph releases (luminous,
> mimic) on 7.5 ?

CentOS itself does not support old point releases, and I don't think
we should imply that we do either.

From #centos-devel earlier:

< fidencio> TrevorH: avij: do you know whether there's an official EOL
  announcement for 6.9?
< avij> fidencio: it happens every time there's a 6.(x+1) release
< avij> it's always implied
< avij> RH has some support for past RHEL releases, but there's no such
  mechanism in CentOS

- Ken


Re: [ceph-users] ceph-ansible

2018-09-24 Thread Ken Dreyer
Hi Alfredo,

I've packaged the latest version in Fedora, but I didn't update EPEL.
I've submitted the update for EPEL now at
https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2018-7f8d3be3e2 .
solarflow99, you can test this package and report "+1" in Bodhi there.

It's also in the CentOS Storage SIG
(http://cbs.centos.org/koji/buildinfo?buildID=23004) . Today I've
tagged that build in CBS into storage7-ceph-luminous-testing and
storage7-ceph-mimic-testing, so it will show up at
https://buildlogs.centos.org/centos/7/storage/x86_64/ceph-luminous/
soon. solarflow99, you could test this as well (although CentOS does
not have a feedback mechanism like Fedora's Bodhi yet)
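For anyone hitting the TypeError below: as Alfredo notes, ceph-ansible's validate() needs notario 0.0.13 or newer, and EPEL only had 0.0.11. A pre-flight check is easy to sketch (the helper is illustrative, not part of ceph-ansible):

```python
def version_tuple(version):
    """Turn a dotted version string like '0.0.11' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

REQUIRED = (0, 0, 13)   # minimum notario for ceph-ansible's validate()
installed = "0.0.11"    # what EPEL shipped at the time

if version_tuple(installed) < REQUIRED:
    print("notario %s is too old; need >= 0.0.13" % installed)
```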
On Fri, Sep 21, 2018 at 4:43 AM Alfredo Deza  wrote:
>
> On Thu, Sep 20, 2018 at 7:04 PM solarflow99  wrote:
> >
> > oh, was that all it was...  git clone https://github.com/ceph/ceph-ansible/
> > I installed the notario package from EPEL,
> > python2-notario-0.0.11-2.el7.noarch, and that's the newest they have
>
> Hey Ken, I thought the latest versions were being packaged; is there
> something I've missed? The tags have changed format, it seems, from
> 0.0.11
> >
> >
> >
> >
> > On Thu, Sep 20, 2018 at 3:57 PM Alfredo Deza  wrote:
> >>
> >> Not sure how you installed ceph-ansible, the requirements mention a
> >> version of a dependency (the notario module) which needs to be 0.0.13
> >> or newer, and you seem to be using an older one.
> >>
> >>
> >> On Thu, Sep 20, 2018 at 6:53 PM solarflow99  wrote:
> >> >
> >> > Hi, trying to get this to do a simple deployment, and I'm getting a
> >> > strange error. Has anyone seen this? I'm using CentOS 7, rel 5,
> >> > ansible 2.5.3, python version = 2.7.5
> >> >
> >> > I've tried with mimic, luminous and even jewel, no luck at all.
> >> >
> >> >
> >> >
> >> > TASK [ceph-validate : validate provided configuration] 
> >> > **
> >> > task path: 
> >> > /home/jzygmont/ansible/ceph-ansible/roles/ceph-validate/tasks/main.yml:2
> >> > Thursday 20 September 2018  14:05:18 -0700 (0:00:05.734)   
> >> > 0:00:37.439 
> >> > The full traceback is:
> >> > Traceback (most recent call last):
> >> >   File 
> >> > "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", 
> >> > line 138, in run
> >> > res = self._execute()
> >> >   File 
> >> > "/usr/lib/python2.7/site-packages/ansible/executor/task_executor.py", 
> >> > line 561, in _execute
> >> > result = self._handler.run(task_vars=variables)
> >> >   File 
> >> > "/home/jzygmont/ansible/ceph-ansible/plugins/actions/validate.py", line 
> >> > 43, in run
> >> > notario.validate(host_vars, install_options, defined_keys=True)
> >> > TypeError: validate() got an unexpected keyword argument 'defined_keys'
> >> >
> >> > fatal: [172.20.3.178]: FAILED! => {
> >> > "msg": "Unexpected failure during module execution.",
> >> > "stdout": ""
> >> > }
> >> >
> >> > NO MORE HOSTS LEFT 
> >> > **
> >> >
> >> > PLAY RECAP 
> >> > **
> >> > 172.20.3.178   : ok=25   changed=0unreachable=0
> >> > failed=1
> >> >


Re: [ceph-users] packages names for ubuntu/debian

2018-08-21 Thread Ken Dreyer
Yes, this is a bummer.

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-November/022687.html

Unfortunately we chose to add the Ubuntu distro codename suffixes
like "xenial" to the ceph.com packages long ago, because who knew that
the release names would ever wrap around :)

If we were to switch to "ubuntuXX.XX" suffixes (like ubuntu.com does),
the old "xenial" suffix would still sort above them, because "x" comes
after "u" in "ubuntu". Maybe we can fix this bug before Ubuntu releases
hit "u" codenames again in 2028.

If you're do-release-upgrade'ing from Xenial to Bionic, once you've
ensured that you've disabled the xenial repos and enabled the bionic
ones on your cluster nodes:

A) If you were running the latest point release of Ceph, you'll need
to "downgrade" to get the bionic builds

B) If your Xenial boxes happened to be behind the latest Ceph point
release, then you can use apt to get to the latest Ceph point release
without the apt "downgrade" operation.

- Ken



On Mon, Aug 20, 2018 at 6:56 PM, Alfredo Daniel Rezinovsky
 wrote:
> On 20/08/18 03:50, Bastiaan Visser wrote:
>>
>> you should only use the 18.04 repo in 18.04, and remove the 16.04 repo.
>>
>> use:
>> https://download.ceph.com/debian-luminous bionic main
>>
>> - Bastiaan
>
>
> Right. But if I came from a working 16.04 system upgraded to 18.04, the ceph
> (xenial) packages are already there and won't upgrade to the bionic ones
> because the version names imply a downgrade.
>
>
>> - Original Message -
>> From: "Alfredo Daniel Rezinovsky"
>> 
>> To: "ceph-users" 
>> Sent: Sunday, August 19, 2018 10:15:00 PM
>> Subject: [ceph-users] packages names for ubuntu/debian
>>
>> Last packages for ubuntu 16.04 are version 13.2.1-1xenial
>> while last packages for ubuntu 18.04 are 13.2.1-1bionic
>>
>> I recently upgraded from ubuntu 16 to 18 and the ceph packages stayed in
>> xenial because alphabetically xenial > bionic.
>>
>> I had to set the pinning to force the upgrade to bionic (which was treated
>> as a downgrade)
>>
>> In Ubuntu mailing lists they said those packages are "wrongly versioned"
>>
>> I think the names should be 13.2.1-1ubuntu16.04-xenial and
>> 13.2.1-1ubuntu18.04-bionic.
>>
>
> --
> Alfredo Daniel Rezinovsky
> Director de Tecnologías de Información y Comunicaciones
> Facultad de Ingeniería - Universidad Nacional de Cuyo
>


Re: [ceph-users] CephFS Snapshots in Mimic

2018-07-31 Thread Ken Dreyer
On Tue, Jul 31, 2018 at 9:23 AM, Kenneth Waegeman
 wrote:
> Thanks David and John,
>
> That sounds logical now. When I did read "To make a snapshot on directory
> “/1/2/3/”, the client invokes “mkdir” on “/1/2/3/.snap” directory
> (http://docs.ceph.com/docs/master/dev/cephfs-snapshots/)" it didn't come to
> mind that I should create the subdirectory immediately.

That does sound unclear to me as well. Here's a proposed docs change:
https://github.com/ceph/ceph/pull/23353

- Ken


Re: [ceph-users] [Ceph-maintainers] download.ceph.com repository changes

2018-07-30 Thread Ken Dreyer
On Fri, Jul 27, 2018 at 1:28 AM, Fabian Grünbichler
 wrote:
> On Tue, Jul 24, 2018 at 10:38:43AM -0400, Alfredo Deza wrote:
>> Hi all,
>>
>> After the 12.2.6 release went out, we've been thinking on better ways
>> to remove a version from our repositories to prevent users from
>> upgrading/installing a known bad release.
>>
>> The way our repos are structured today means every single version of
>> the release is included in the repository. That is, for Luminous,
>> every 12.x.x version of the binaries is in the same repo. This is true
>> for both RPM and DEB repositories.
>>
>> However, the DEB repos don't allow pinning to a given version because
>> our tooling (namely reprepro) doesn't construct the repositories in a
>> way that this is allowed. For RPM repos this is fine, and version
>> pinning works.
>
> If you mean that reprepro does not support referencing multiple versions
> of packages in the Packages file, there is a patched fork that does
> (that seems well-supported):
>
> https://github.com/profitbricks/reprepro

Thanks for this link. That's great to know someone's working on this.

What's the status of merging that back into the main reprepro code, or
else shipping that fork as the new reprepro package in Debian /
Ubuntu? The Ceph project could end up responsible for maintaining that
reprepro fork if the main Ubuntu community does not pick it up :) The
fork is several years old, and the latest update on
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=570623 was over a
year ago.

- Ken


Re: [ceph-users] download.ceph.com repository changes

2018-07-24 Thread Ken Dreyer
On Tue, Jul 24, 2018 at 8:54 AM, Dan van der Ster  wrote:
> On Tue, Jul 24, 2018 at 4:38 PM Alfredo Deza  wrote:
>>
>> Hi all,
>>
>> After the 12.2.6 release went out, we've been thinking on better ways
>> to remove a version from our repositories to prevent users from
>> upgrading/installing a known bad release.
>>
>> The way our repos are structured today means every single version of
>> the release is included in the repository. That is, for Luminous,
>> every 12.x.x version of the binaries is in the same repo. This is true
>> for both RPM and DEB repositories.
>>
>> However, the DEB repos don't allow pinning to a given version because
>> our tooling (namely reprepro) doesn't construct the repositories in a
>> way that this is allowed. For RPM repos this is fine, and version
>> pinning works.
>>
>> To remove a bad version we have two proposals (and would like to hear
>> ideas on other possibilities), one that would involve symlinks and the
>> other one which purges the known bad version from our repos.
>
> What we did with our mirror was: `rm -f *12.2.6*; createrepo --update
> .` Took a few seconds. Then disabled the mirror cron.

Unfortunately with Debian repositories, reprepro is a lot more
complicated, and then we have to re-sign the new repository metadata,
so it's a little more involved there.

BUT perfect is the enemy of the good, so maybe we should have just done
your suggestion for RPMs at least.

- Ken


Re: [ceph-users] Luminous 12.2.6 release date?

2018-07-11 Thread Ken Dreyer
Sage, does http://tracker.ceph.com/issues/24597 cover the full problem
you're describing?

- Ken

On Wed, Jul 11, 2018 at 9:40 AM, Sage Weil  wrote:
> Please hold off on upgrading.  We discovered a regression (in 12.2.5
> actually) but the triggering event is OSD restarts or other peering
> combined with RGW workloads on EC pools, so unnecessary OSD restarts
> should be avoided with 12.2.5 until we have it sorted out.
>
> sage
>
>
> On Wed, 11 Jul 2018, Dan van der Ster wrote:
>
>> And voila, I see the 12.2.6 rpms were released overnight.
>>
>> Waiting here for an announcement before upgrading.
>>
>> -- dan
>>
>> On Tue, Jul 10, 2018 at 10:08 AM Sean Purdy  wrote:
>> >
>> > While we're at it, is there a release date for 12.2.6?  It fixes a 
>> > reshard/versioning bug for us.
>> >
>> > Sean


Re: [ceph-users] Does anyone use rcceph script in CentOS/SUSE?

2018-01-11 Thread Ken Dreyer
Please drop it, it has been untested for a long time.

- Ken

On Thu, Jan 11, 2018 at 4:49 AM, Nathan Cutler  wrote:
> To all who are running Ceph on CentOS or SUSE: do you use the "rcceph"
> script? The ceph RPMs ship it in /usr/sbin/rcceph
>
> (Why I ask: more-or-less the same functionality is provided by the
> ceph-osd.target and ceph-mon.target systemd units, and the script is no
> longer maintained, so we'd like to drop it from the RPM packaging unless
> someone is using it.)
>
> Thanks,
> Nathan


Re: [ceph-users] Any way to get around selinux-policy-base dependency

2017-12-06 Thread Ken Dreyer
Hi Bryan,

Why not upgrade to RHEL 7.4? We don't really build Ceph to run on
older RHEL releases.

- Ken

On Mon, Dec 4, 2017 at 11:26 AM, Bryan Banister
 wrote:
> Hi all,
>
>
>
> I would like to upgrade to the latest Luminous release but found that it
> requires the absolute latest selinux-policy-base.  We aren’t using selinux,
> so was wondering if there is a way around this dependency requirement?
>
>
>
> [carf-ceph-osd15][WARNIN] Error: Package: 2:ceph-selinux-12.2.2-0.el7.x86_64
> (ceph)
>
> [carf-ceph-osd15][WARNIN]Requires: selinux-policy-base >=
> 3.13.1-166.el7_4.5
>
> [carf-ceph-osd15][WARNIN]Installed:
> selinux-policy-targeted-3.13.1-102.el7_3.13.noarch
> (@rhel7.3-rhn-server-production/7.3)
>
>
>
> Thanks for any help!
>
> -Bryan
>
>
> 
>
> Note: This email is for the confidential use of the named addressee(s) only
> and may contain proprietary, confidential or privileged information. If you
> are not the intended recipient, you are hereby notified that any review,
> dissemination or copying of this email is strictly prohibited, and to please
> notify the sender immediately and destroy this email and any attachments.
> Email transmission cannot be guaranteed to be secure or error-free. The
> Company, therefore, does not make any guarantees as to the completeness or
> accuracy of this email or any attachments. This email is for informational
> purposes only and does not constitute a recommendation, offer, request or
> solicitation of any kind to buy, sell, subscribe, redeem or perform any type
> of transaction of a financial product.
>


Re: [ceph-users] Ubuntu upgrade Zesty => Aardvark, Implications for Ceph?

2017-11-21 Thread Ken Dreyer
As a tangent, it's a problem for download.ceph.com packages that
"xenial" happens to sort alphabetically after "bionic", because
do-release-upgrade will consider (for example) "ceph_12.2.1-1xenial"
to be newer than "ceph_12.2.1-1bionic". I think we need to switch to
using "ubuntu16.04", "ubuntu18.04" suffixes instead.

- Ken

On Mon, Nov 13, 2017 at 4:41 AM, Ranjan Ghosh  wrote:
> Hi everyone,
>
> In January, support for Ubuntu Zesty will run out and we're planning to
> upgrade our servers to Aardvark. We have a two-node-cluster (and one
> additional monitoring-only server) and we're using the packages that come
> with the distro. We have mounted CephFS on the same server with the kernel
> client in FSTab. AFAIK, Aardvark includes Ceph 12.0. What would happen if we
> used the usual "do-release-upgrade" to upgrade the servers one-by-one? I
> assume the procedure described here
> "http://ceph.com/releases/v12-2-0-luminous-released/"; (section "Upgrade from
> Jewel or Kraken") probably won't work for us, because "do-release-upgrade"
> will upgrade all packages (including the ceph ones) at once and then reboots
> the machine. So we cannot really upgrade only the monitoring nodes. And I'd
> rather avoid switching to PPAs beforehand. So, what are the real
> consequences if we upgrade all servers one-by-one with "do-release-upgrade"
> and then reboot all the nodes? Is it only the downtime why this isnt
> recommended or do we lose data? Any other recommendations on how to tackle
> this?
>
> Thank you / BR
>
> Ranjan
>
>
>
>


Re: [ceph-users] Ceph mirrors

2017-10-09 Thread Ken Dreyer
On Thu, Oct 5, 2017 at 1:35 PM, Stefan Kooman  wrote:
> Hi,
>
> Sorry for empty mail, that shouldn't have happened. I would like to
> address the following. Currently the repository list for debian-
> packages contain _only_ the latest package version. In case of a
> (urgent) need to downgrade you cannot easily select an older version.
> You then need to resort to download packages manually. I want to suggest
> that we keep the older packages in the repo list. They are on the
> mirrors anyway (../debian/pool/main/{c,r}/ceph/).

This is a limitation of the way reprepro works. It only provides Apt
metadata for the latest version of a package.

Do you know of another open-source software project that provides Apt
repos that contain metadata for all older versions of packages? (in a
single debian repo URL?)

If so, I would like to look at their repository and ask them what
tools they are using to generate their repositories. As far as I know,
debian.org and ubuntu.com both use reprepro.
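Until then, a downgrade means going straight to the pool directory the mirrors already carry. Roughly as follows, where the version, distro suffix, and architecture are placeholders:

```shell
base=https://download.ceph.com/debian-luminous/pool/main/c/ceph
wget "$base/ceph-common_12.2.1-1xenial_amd64.deb"
dpkg -i ceph-common_12.2.1-1xenial_amd64.deb
# Keep apt from immediately "upgrading" it back:
apt-mark hold ceph-common
```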

- Ken


Re: [ceph-users] Ceph release cadence

2017-09-06 Thread Ken Dreyer
On Wed, Sep 6, 2017 at 9:23 AM, Sage Weil  wrote:
> * Keep even/odd pattern, but force a 'train' model with a more regular
> cadence
>
>   + predictable schedule
>   - some features will miss the target and be delayed a year

This one (#2, regular release cadence) is the one I will value the most.

- Ken


Re: [ceph-users] cephfs file size limit 0f 1.1TB?

2017-05-25 Thread Ken Dreyer
On Wed, May 24, 2017 at 12:36 PM, John Spray  wrote:
>
> CephFS has a configurable maximum file size, it's 1TB by default.
>
> Change it with:
>   ceph fs set  max_file_size 

How does this command relate to "ceph mds set max_file_size"? Is it different?

I've put some of the information in this thread into a docs PR:
https://github.com/ceph/ceph/pull/15287
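Incidentally, the "1.1TB" in the subject line is just the 1 TiB default read in decimal units:

```python
max_file_size = 2 ** 40  # CephFS default max_file_size, in bytes (1 TiB)

print(max_file_size)                     # 1099511627776
print(round(max_file_size / 10**12, 1))  # 1.1 decimal terabytes
```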

- Ken


Re: [ceph-users] RGW: removal of support for fastcgi

2017-05-15 Thread Ken Dreyer
On Fri, May 5, 2017 at 1:51 PM, Yehuda Sadeh-Weinraub  wrote:
>
> TL;DR: Does anyone care if we remove support for fastcgi in rgw?

Please remove it as soon as possible. The old libfcgi project's code
is a security liability. When upstream died, there was a severe lack
of coordination around distributing patches to fix CVE-2012-6687. I
expect a similar level of chaos if another CVE surfaces in this
library. There are also unanswered questions about libfcgi's continued
use of poll vs select, see
https://bugs.launchpad.net/ubuntu/+source/libfcgi/+bug/933417/comments/5
.

- Ken


Re: [ceph-users] A Jewel in the rough? (cache tier bugs and documentation omissions)

2017-03-13 Thread Ken Dreyer
At a general level, is there any way we could update the documentation
automatically whenever src/common/config_opts.h changes?

- Ken

On Tue, Mar 7, 2017 at 2:44 AM, Nick Fisk  wrote:
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
>> John Spray
>> Sent: 07 March 2017 01:45
>> To: Christian Balzer 
>> Cc: ceph-users@lists.ceph.com
>> Subject: Re: [ceph-users] A Jewel in the rough? (cache tier bugs and 
>> documentation omissions)
>>
>> On Tue, Mar 7, 2017 at 12:28 AM, Christian Balzer  wrote:
>> >
>> >
>> > Hello,
>> >
>> > It's now 10 months after this thread:
>> >
>> > http://www.spinics.net/lists/ceph-users/msg27497.html (plus next
>> > message)
>> >
>> > and we're at the fifth iteration of Jewel and still
>> >
>> > osd_tier_promote_max_objects_sec
>> > and
>> > osd_tier_promote_max_bytes_sec
>> >
>> > are neither documented (master or jewel), nor mentioned in the
>> > changelogs and most importantly STILL default to the broken reverse 
>> > settings above.
>>
>> Is there a pull request?
>
> Mark fixed it in this commit, but looks like it was never marked for backport 
> to Jewel.
>
> https://github.com/ceph/ceph/commit/793ceac2f3d5a2c404ac50569c44a21de6001b62
>
> I will look into getting the documentation updated for these settings.
>
>>
>> John
>>
>> > Anybody coming from Hammer or even starting with Jewel and using cache
>> > tiering will be having a VERY bad experience.
>> >
>> > Christian
>> > --
>> > Christian Balzer        Network/Systems Engineer
>> > ch...@gol.com   Global OnLine Japan/Rakuten Communications
>> > http://www.gol.com/


Re: [ceph-users] [Tendrl-devel] Calamari-server for CentOS

2017-02-17 Thread Ken Dreyer
I think the most up-to-date source of Calamari CentOS packages would
be https://shaman.ceph.com/repos/calamari/1.5/

On Fri, Feb 17, 2017 at 7:38 AM, Martin Kudlej  wrote:
> Hello all,
>
> I would like to ask again about calamari-server package for CentOS 7. Is
> there any plan to have calamari-server in Storage SIG in CentOS 7, please?
>
>
> On 01/17/2017 02:31 PM, Martin Kudlej wrote:
>>
>> Hello Ceph users,
>>
>> I've installed Ceph from
>> SIG(https://wiki.centos.org/SpecialInterestGroup/Storage) on CentOS 7.
>> I would like to install Calamari server too. It is not available in
>> SIG(http://mirror.centos.org/centos/7/storage/x86_64/ceph-jewel/). I've
>> found
>> https://github.com/ksingh7/ceph-calamari-packages/tree/master/CentOS-el7
>> but there I cannot be sure
>> that it is well formed and maintained version for CentOS.
>>
>> Where can I find Calamari server package for CentOS 7, please?
>>
>
> --
> Best Regards,
> Martin Kudlej.
> RHSC/USM Senior Quality Assurance Engineer
> Red Hat Czech s.r.o.
>
> Phone: +420 532 294 155
> E-mail:mkudlej at redhat.com
> IRC:   mkudlej at #brno, #gluster, #storage-qa, #rhs, #rh-ceph, #usm-meeting
> @ redhat
>   #tendrl-devel @ freenode
>
> ___
> Tendrl-devel mailing list
> tendrl-de...@redhat.com
> https://www.redhat.com/mailman/listinfo/tendrl-devel


Re: [ceph-users] Kernel 4 repository to use?

2017-01-11 Thread Ken Dreyer
On Wed, Jan 11, 2017 at 3:27 PM, Marc Roos  wrote:
> We are going to setup a test cluster with kraken using CentOS7. And
> obviously like to stay as close as possible to using their repositories.

Ilya has backported the latest kernel code to CentOS 7.3's kernel, so
I'd recommend the version in the distro (kernel-3.10.0-514.2.2.el7).

- Ken


Re: [ceph-users] Ubuntu Xenial - Ceph repo uses weak digest algorithm (SHA1)

2017-01-05 Thread Ken Dreyer
Apologies for the thread necromancy :)

We've (finally) configured our signing system to use sha256 for GPG
digests, so this issue should no longer appear on Debian/Ubuntu.

- Ken

On Fri, May 27, 2016 at 6:20 AM, Saverio Proto  wrote:
> I started to use Xenial... does everyone have this error ? :
>
> W: http://ceph.com/debian-hammer/dists/xenial/InRelease: Signature by
> key 08B73419AC32B4E966C1A330E84AC2C0460F3994 uses weak digest
> algorithm (SHA1)
>
> Saverio


Re: [ceph-users] Unwanted automatic restart of daemons during an upgrade since 10.2.5 (on Trusty)

2016-12-19 Thread Ken Dreyer
On Mon, Dec 19, 2016 at 12:31 PM, Ken Dreyer  wrote:
> On Tue, Dec 13, 2016 at 4:42 AM, Francois Lafont
>  wrote:
>> So, now with 10.2.5 version, in my process, OSD daemons are stopped,
>> then automatically restarted by the upgrade and then stopped again
>> by the reboot. This is not an optimal process of course. ;)
>
> We do not intend for anything in the packaging to restart the daemons.
>
> The last time I looked into this issue, it behaved correctly (dpkg did
> not restart the daemons during the apt-get process - the PID files
> were the same before and after the upgrade).

I looked into this again on a Trusty VM today. I set up a single
mon+osd cluster on v10.2.3, with the following:

  # status ceph-osd id=0
  ceph-osd (ceph/0) start/running, process 1301

  # ceph daemon osd.0 version
  {"version":"10.2.3"}

I ran "apt-get upgrade" to get go 10.2.3 -> 10.2.5, and the OSD PID
(1301) and version from the admin socket (v10.2.3) remained the same.

Could something else be restarting the daemons in your case?

- Ken


Re: [ceph-users] Unwanted automatic restart of daemons during an upgrade since 10.2.5 (on Trusty)

2016-12-19 Thread Ken Dreyer
On Tue, Dec 13, 2016 at 4:42 AM, Francois Lafont
 wrote:
> So, now with 10.2.5 version, in my process, OSD daemons are stopped,
> then automatically restarted by the upgrade and then stopped again
> by the reboot. This is not an optimal process of course. ;)

We do not intend for anything in the packaging to restart the daemons.

The last time I looked into this issue, it behaved correctly (dpkg did
not restart the daemons during the apt-get process - the PID files
were the same before and after the upgrade).

Did you dig further to find out what is restarting them? Are you using
any configuration management system (ansible, chef, puppet) to do your
package upgrades?

- Ken


Re: [ceph-users] Server crashes on high mount volume

2016-12-13 Thread Ken Dreyer
On Tue, Dec 13, 2016 at 6:45 AM, Diego Castro
 wrote:
> Thank you for the tip.
> Just found out the repo is empty, am i doing something wrong?
>
> http://mirror.centos.org/centos/7/cr/x86_64/Packages/
>

Sorry for the confusion. CentOS 7.3 shipped a few hours ago.

- Ken


Re: [ceph-users] Server crashes on high mount volume

2016-12-12 Thread Ken Dreyer
On Fri, Dec 9, 2016 at 2:28 PM, Diego Castro
 wrote:
> Hello, my case is very specific but i think other may have this issue.
>
> I have a ceph cluster up and running hosting block storage for my openshift
> (kubernetes) cluster.
> Things goes bad when i "evacuate" a node, which is move all containers to
> other hosts, when this happens i can see a lot of map/mount commands and
> suddenly the node crashes, here is the log [1].
>
>
> 1.https://gist.github.com/spinolacastro/ff2bb85b3768a71d3ff6d1d6d85f00a2
>
> [root@n-13-0 ~]# cat /etc/redhat-release
> CentOS Linux release 7.2.1511 (Core)
>
> [root@n-13-0 ~]# uname -a
> Linux n-13-0 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016
> x86_64 x86_64 x86_64 GNU/Linux

Does this still happen with the kernel that is (soon to be) in CentOS
7.3? The Ceph kernel code got a big update in 7.3, with a rebase to
the latest RBD and CephFS (libceph.ko, rbd.ko and ceph.ko) upstream.

CentOS 7.3 is not GA yet, but you can get kernel-3.10.0-514.2.2.el7
from the CR repository:
https://wiki.centos.org/AdditionalResources/Repositories/CR

- Ken


Re: [ceph-users] nfs-ganesha and rados gateway, Cannot find supported RGW runtime. Disabling RGW fsal build

2016-11-16 Thread Ken Dreyer
On Fri, Nov 4, 2016 at 2:14 AM, 于 姜  wrote:
> ceph version 10.2.3
> ubuntu 14.04 server
> nfs-ganesha 2.4.1
> ntirpc 1.4.3
>
> cmake -DUSE_FSAL_RGW=ON ../src/
>
> -- Found rgw libraries: /usr/lib
> -- Could NOT find RGW: Found unsuitable version ".", but required is at
> least "1.1" (found /usr)
> CMake Warning at CMakeLists.txt:571 (message):
> Cannot find supported RGW runtime. Disabling RGW fsal build
>
> Hello, everyone, Will the nfs-ganesha in ceph 10.2.3 version available?

Unfortunately nfs-ganesha 2.4 will not build with vanilla Ceph
v10.2.3. You probably need some or all of the patches here:
https://github.com/ceph/ceph/pull/11335 (or more?)

I think this is fixed in Ceph v10.2.4.

- Ken


Re: [ceph-users] Memory leak in radosgw

2016-10-24 Thread Ken Dreyer
Hi Trey,

If you run the upstream curl releases, please note that curl has a
poor security record and it's important to stay on top of updates.
https://curl.haxx.se/docs/security.html indicates that 7.44 has
security problems, and in fact there are eleven more security
announcements coming soon
(https://curl.haxx.se/mail/lib-2016-10/0076.html)

If you could provide us more information about the memory leak you're
seeing, we can coordinate that with the curl maintainers in RHEL and
see if it's feasible to get a fix into RHEL's 7.29.
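
For anyone checking whether a host is exposed, a simple version comparison can flag an old libcurl before radosgw trips over it. A hedged sketch (the version strings here are illustrative; on a real host feed in the output of `rpm -q --qf '%{VERSION}' libcurl`):

```shell
# Flag a libcurl older than the 7.44 release that fixed the leak in this thread.
# "installed" is an illustrative value, not read from the system.
ver_lt() {
    # true if $1 sorts strictly before $2 in version order
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

installed=7.29.0   # e.g. the RHEL/CentOS 7 base package
required=7.44.0

if ver_lt "$installed" "$required"; then
    echo "libcurl $installed predates $required: multisite leak is likely"
fi
```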

- Ken

On Mon, Oct 24, 2016 at 10:31 AM, Trey Palmer  wrote:
> Updating to libcurl 7.44 fixed the memory leak issue.   Thanks for the tip,
> Ben.
>
> FWIW this was a massive memory leak, it rendered the system untenable in my
> testing.   RGW multisite will flat not work with the current CentOS/RHEL7
> libcurl.
>
> Seems like there are a lot of different problems caused by libcurl
> bugs/incompatibilities.
>
>-- Trey
>
> On Fri, Oct 21, 2016 at 11:04 AM, Trey Palmer  wrote:
>>
>> Hi Ben,
>>
>> I previously hit this bug:
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1327142
>>
>> So I updated from libcurl 7.29.0-25 to the new update package libcurl
>> 7.29.0-32 on RHEL 7, which fixed the deadlock problem.
>>
>> I had not seen the issue you linked.   It doesn't seem directly related,
>> since my problem is a memory leak and not CPU.   Clearly, though, older
>> libcurl versions remain problematic for multiple reasons, so I'll give a
>> newer one a try.
>>
>> Thanks for the input!
>>
>>-- Trey
>>
>>
>>
>> On Fri, Oct 21, 2016 at 3:21 AM, Ben Morrice  wrote:
>>>
>>> What version of libcurl are you using?
>>>
>>> I was hitting this bug with RHEL7/libcurl 7.29 which could also be your
>>> catalyst.
>>>
>>> http://tracker.ceph.com/issues/15915
>>>
>>> Kind regards,
>>>
>>> Ben Morrice
>>>
>>> __
>>> Ben Morrice | e: ben.morr...@epfl.ch | t: +41-21-693-9670
>>> EPFL ENT CBS BBP
>>> Biotech Campus
>>> Chemin des Mines 9
>>> 1202 Geneva
>>> Switzerland
>>>
>>> On 20/10/16 21:41, Trey Palmer wrote:
>>>
>>> I've been trying to test radosgw multisite and have a pretty bad memory
>>> leak. It appears to be associated only with multisite sync.
>>>
>>> Multisite works well for small numbers of objects. However, it all
>>> fell over when I wrote in 8M 64K objects to two buckets overnight for
>>> testing (via cosbench).
>>>
>>> The leak appears to happen on the multisite transfer source -- that is,
>>> the
>>> node where the objects were written originally.   The radosgw process
>>> eventually dies, I'm sure via the OOM killer, and systemd restarts it.
>>> Then repeat, though multisite sync pretty much stops at that point.
>>>
>>> I have tried 10.2.2, 10.2.3 and a combination of the two.   I'm running
>>> on
>>> CentOS 7.2, using civetweb with SSL.   I saw that the memory profiler
>>> only
>>> works on mon, osd and mds processes.
>>>
>>> Anyone else seen anything like this?
>>>
>>>-- Trey
>>>
>>>
>>>


Re: [ceph-users] ceph website problems?

2016-10-11 Thread Ken Dreyer
I think this may be related:
http://www.dreamhoststatus.com/2016/10/11/dreamcompute-us-east-1-cluster-service-disruption/

On Tue, Oct 11, 2016 at 5:57 AM, Sean Redmond  wrote:
> Hi,
>
> Looks like the ceph website and related sub domains are giving errors for
> the last few hours.
>
> I noticed the below that I use are in scope.
>
> http://ceph.com/
> http://docs.ceph.com/
> http://download.ceph.com/
> http://tracker.ceph.com/
>
> Thanks
>


Re: [ceph-users] CEPHFS file or directories disappear when ls (metadata problem)

2016-09-29 Thread Ken Dreyer
kernel-3.10.0-327.*.el7 comes from RHEL 7.2 (although I don't know
what the "ug" in your version string means).

kernel-3.10.0-493.el7 ships in the RHEL 7.3 Beta. You need a RHEL
subscription to access the RHEL betas (CentOS doesn't build them).

- Ken


On Thu, Sep 29, 2016 at 7:19 AM, Kenneth Waegeman
 wrote:
>
>
> On 29/09/16 14:29, Yan, Zheng wrote:
>>
>> On Thu, Sep 29, 2016 at 8:13 PM, Kenneth Waegeman
>>  wrote:
>>>
>>> Hi all,
>>>
>>> Following up on this thread:
>>>
>>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008537.html
>>>
>>> we still see files missing when doing ls on cephfs with
>>> 3.10.0-327.18.2.el7.ug.x86_64
>>>
>>> Is there already a solution for this?I don't see anything ceph related
>>> popping up in the release notes of the newer kernels..
>>>
>> try updating your kernel. The newest fixes are included in
>> kernel-3.10.0-448.el7
>
> Thanks!! We are running centos 7.2.. Is there a way to get the
> 3.10.0-448.el7 kernel yet?
>
> K
>
>>
>> Regards
>> Yan, Zheng
>>
>>
>>> Thanks !!
>>>
>>> Kenneth
>>>


Re: [ceph-users] High CPU load with radosgw instances

2016-09-16 Thread Ken Dreyer
On Fri, Sep 16, 2016 at 2:03 PM, Ken Dreyer  wrote:
> Hi Casey,
>
> That warning message tells users to upgrade to a new version of
> libcurl. Telling users to upgrade to a newer version of a base system
> package like that sets the user on a trajectory to have to maintain
> their own curl packages forever, decreasing the security of their
> overall system in the long run. For example ceph.com itself shipped a
> newer el6 curl package for a while in "ceph-extras", until it fell off
> everyone's radar, no one updated it, and it had many outstanding CVEs
> until we finally dropped el6 support altogether.
>

I got the details wrong here - in ceph-extras, it was qemu-kvm on el6
that had a bunch of unfixed security issues, and on Fedora it was
libcurl :)

- Ken


Re: [ceph-users] High CPU load with radosgw instances

2016-09-16 Thread Ken Dreyer
Hi Casey,

That warning message tells users to upgrade to a new version of
libcurl. Telling users to upgrade to a newer version of a base system
package like that sets the user on a trajectory to have to maintain
their own curl packages forever, decreasing the security of their
overall system in the long run. For example ceph.com itself shipped a
newer el6 curl package for a while in "ceph-extras", until it fell off
everyone's radar, no one updated it, and it had many outstanding CVEs
until we finally dropped el6 support altogether.

Would you please remove this last sentence from the RGW log message?

- Ken


On Fri, Sep 16, 2016 at 12:02 PM, Casey Bodley  wrote:
> In the meantime, we've made changes to radosgw so that it can detect and
> work around this libcurl bug. You can track the progress of this workaround
> (currently in master and pending backport to jewel) at
> http://tracker.ceph.com/issues/16695.
>
> Casey
>
>
>
> On 09/16/2016 01:38 PM, Ken Dreyer wrote:
>>
>> Hi Lewis,
>>
>> This sounds a lot like https://bugzilla.redhat.com/1347904 , currently
>> slated for the upcoming RHEL 7.3 (and CentOS 7.3).
>>
>> There's an SRPM in that BZ that you can rebuild and test out. This
>> method won't require you to keep chasing upstream curl versions
>> forever (curl has a lot of CVEs).
>>
>> Mind testing that out and reporting back?
>>
>> - Ken
>>
>>
>> On Fri, Sep 16, 2016 at 11:06 AM, lewis.geo...@innoscale.net
>>  wrote:
>>>
>>> Hi Yehuda,
>>> Well, again, thank you!
>>>
>>> I was able to get a package built from the latest curl release, and after
>>> upgrading on my radosgw hosts, the load is no longer running high. The
>>> load
>>> is just sitting at almost nothing and I only see the radosgw process
>>> using
>>> CPU when it is actually doing something now.
>>>
>>> So, I am still curious if this would be considered a bug or not, since
>>> the
>>> curl version from the base CentOS repo seems to have an issue.
>>>
>>> Have a good day,
>>>
>>> Lewis George
>>>
>>>
>>>
>>> 
>>> From: "lewis.geo...@innoscale.net" 
>>> Sent: Friday, September 16, 2016 7:28 AM
>>> To: "Yehuda Sadeh-Weinraub" 
>>>
>>> Cc: "ceph-users@lists.ceph.com" 
>>> Subject: Re: [ceph-users] High CPU load with radosgw instances
>>>
>>> Hi Yehuda,
>>> Thank you for the idea. I will try to test that and see if it helps.
>>>
>>> If that is the case, would that be considered a bug with radosgw? I ask
>>> because, that version of curl seems to be what is currently standard on
>>> RHEL/CentOS 7 (fully updated). I will have to either compile it or search
>>> 3rd-party repos for newer version, which is not usually something that is
>>> great.
>>>
>>> Have a good day,
>>>
>>> Lewis George
>>>
>>>
>>> 
>>> From: "Yehuda Sadeh-Weinraub" 
>>> Sent: Thursday, September 15, 2016 10:42 PM
>>> To: lewis.geo...@innoscale.net
>>> Cc: "ceph-users@lists.ceph.com" 
>>> Subject: Re: [ceph-users] High CPU load with radosgw instances
>>>
>>> On Thu, Sep 15, 2016 at 4:53 PM, lewis.geo...@innoscale.net
>>>  wrote:
>>>>
>>>> Hi,
>>>> So, maybe someone has an idea of where to go on this.
>>>>
>>>> I have just setup 2 rgw instances in a multisite setup. They are working
>>>> nicely. I have add a couple of test buckets and some files to make sure
>>>> it
>>>> works is all. The status shows both are caught up. Nobody else is
>>>> accessing
>>>> or using them.
>>>>
>>>> However, the CPU load on both hosts is sitting at like 3.00, with the
>>>> radosgw process taking up 99% CPU constantly. I do not see anything in
>>>> the
>>>> logs happening at all.
>>>>
>>>> Any thoughts or direction?
>>>>
>>> We've seen that happening when running on a system with older version
>>> of libcurl (e.g., 7.29). If that's the case upgrading to a newer
>>> version should fix it for you.
>>>
>>> Yehuda
>>>
>>>
>>>> Have a good day,
>>>>
>>>> Lewis George
>>>>
>>>>


Re: [ceph-users] High CPU load with radosgw instances

2016-09-16 Thread Ken Dreyer
Hi Lewis,

This sounds a lot like https://bugzilla.redhat.com/1347904 , currently
slated for the upcoming RHEL 7.3 (and CentOS 7.3).

There's an SRPM in that BZ that you can rebuild and test out. This
method won't require you to keep chasing upstream curl versions
forever (curl has a lot of CVEs).

Mind testing that out and reporting back?

- Ken


On Fri, Sep 16, 2016 at 11:06 AM, lewis.geo...@innoscale.net
 wrote:
> Hi Yehuda,
> Well, again, thank you!
>
> I was able to get a package built from the latest curl release, and after
> upgrading on my radosgw hosts, the load is no longer running high. The load
> is just sitting at almost nothing and I only see the radosgw process using
> CPU when it is actually doing something now.
>
> So, I am still curious if this would be considered a bug or not, since the
> curl version from the base CentOS repo seems to have an issue.
>
> Have a good day,
>
> Lewis George
>
>
>
> 
> From: "lewis.geo...@innoscale.net" 
> Sent: Friday, September 16, 2016 7:28 AM
> To: "Yehuda Sadeh-Weinraub" 
>
> Cc: "ceph-users@lists.ceph.com" 
> Subject: Re: [ceph-users] High CPU load with radosgw instances
>
> Hi Yehuda,
> Thank you for the idea. I will try to test that and see if it helps.
>
> If that is the case, would that be considered a bug with radosgw? I ask
> because, that version of curl seems to be what is currently standard on
> RHEL/CentOS 7 (fully updated). I will have to either compile it or search
> 3rd-party repos for newer version, which is not usually something that is
> great.
>
> Have a good day,
>
> Lewis George
>
>
> 
> From: "Yehuda Sadeh-Weinraub" 
> Sent: Thursday, September 15, 2016 10:42 PM
> To: lewis.geo...@innoscale.net
> Cc: "ceph-users@lists.ceph.com" 
> Subject: Re: [ceph-users] High CPU load with radosgw instances
>
> On Thu, Sep 15, 2016 at 4:53 PM, lewis.geo...@innoscale.net
>  wrote:
>> Hi,
>> So, maybe someone has an idea of where to go on this.
>>
>> I have just setup 2 rgw instances in a multisite setup. They are working
>> nicely. I have add a couple of test buckets and some files to make sure it
>> works is all. The status shows both are caught up. Nobody else is
>> accessing
>> or using them.
>>
>> However, the CPU load on both hosts is sitting at like 3.00, with the
>> radosgw process taking up 99% CPU constantly. I do not see anything in the
>> logs happening at all.
>>
>> Any thoughts or direction?
>>
>
> We've seen that happening when running on a system with older version
> of libcurl (e.g., 7.29). If that's the case upgrading to a newer
> version should fix it for you.
>
> Yehuda
>
>
>> Have a good day,
>>
>> Lewis George
>>
>>


Re: [ceph-users] ceph-dbg package for Xenial (ubuntu-16.04.x) broken

2016-08-03 Thread Ken Dreyer
For some reason, during the v10.2.2 release,
ceph-dbg_10.2.2-1xenial_amd64.deb did not get transferred to
http://download.ceph.com/debian-jewel/pool/main/c/ceph/
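
Until the missing .deb is synced, one workaround is to keep apt's candidate versions consistent by pinning a single origin so that ceph and ceph-dbg resolve to a matching pair. A hedged sketch of the preferences file (the origin and priority are illustrative; it writes to a scratch directory so it can be tried safely before copying into /etc/apt/preferences.d):

```shell
# Pin one ceph origin so "ceph" and "ceph-dbg" resolve to matching versions.
# Pinning the Ubuntu archive (o=Ubuntu) is one option while the ceph.com pool
# lacks the -dbg package; priority 1001 allows a downgrade if needed.
dest=$(mktemp -d)
cat > "$dest/ceph.pref" <<'EOF'
Package: ceph ceph-dbg
Pin: release o=Ubuntu
Pin-Priority: 1001
EOF
cat "$dest/ceph.pref"
```

With the pin in place, `apt-get install ceph ceph-dbg` should select the matching Ubuntu pair; remove the file again to go back to the ceph.com builds.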

- Ken

On Wed, Aug 3, 2016 at 12:27 PM, J. Ryan Earl  wrote:
> Hello,
>
> New to the list.  I'm working on performance tuning and testing a new Ceph
> cluster built on Ubuntu 16.04 LTS and newest "Jewel" Ceph release.  I'm in
> the process of collecting stack frames as part of a profiling inspection
> using FlameGraph (https://github.com/brendangregg/FlameGraph) to inspect
> where the CPU is spending time but need to load the 'dbg' packages to get
> symbol information.  However, it appears the 'ceph-dbg' package has broken
> dependencies:
>
> ceph1.oak:/etc/apt# apt-get install ceph-dbg
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming. The following information may help to
> resolve the situation:
>
> The following packages have unmet dependencies:
>  ceph-dbg : Depends: ceph (= 10.2.2-0ubuntu0.16.04.2) but 10.2.2-1xenial is to be installed
> E: Unable to correct problems, you have held broken packages.
>
> Any ideas on how to quickly work around this issue so I can continue
> performance profiling?
>
> Thank you,
> -JR
>


Re: [ceph-users] CephFS Samba VFS RHEL packages

2016-07-20 Thread Ken Dreyer
The Samba packages in Fedora 22+ do enable the Samba VFS:
https://bugzilla.redhat.com/1174412

From what Ira said downthread, this is pretty experimental, so you
could run your tests on a Fedora system and see how it goes :)
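
For reference, a minimal share definition using the Ceph VFS module looks roughly like the fragment below. The share name, path, and user_id are assumptions, and vfs_ceph options vary by Samba version; it writes to a scratch file here rather than /etc/samba/smb.conf:

```shell
# Sketch of an smb.conf fragment enabling the CephFS VFS module (vfs_ceph).
# All names/paths are illustrative; check the vfs_ceph man page on your version.
dest=$(mktemp)
cat > "$dest" <<'EOF'
[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no
    # CephFS handles its own locking; kernel share modes must be off
    kernel share modes = no
EOF
# Validate with testparm if available, otherwise just show the fragment
testparm -s "$dest" 2>/dev/null || cat "$dest"
```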

- Ken

On Tue, Jul 19, 2016 at 11:45 PM, Blair Bethwaite
 wrote:
> Hi all,
>
> We've started a CephFS Samba PoC on RHEL but just noticed the Samba
> Ceph VFS doesn't seem to be included with Samba on RHEL, or we're not
> looking in the right place. Trying to avoid needing to build Samba
> from source if possible. Any pointers appreciated.
>
> --
> Cheers,
> ~Blairo


Re: [ceph-users] ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-1: (13) Permission denied

2016-07-06 Thread Ken Dreyer
On Wed, Jul 6, 2016 at 9:31 AM, RJ Nowling  wrote:
> Sam, George, thanks for your help!
>
> The problem is that the data directory is symlinked to /home and the systemd
> unit file had `ProtectHome=true`

Good to know that feature works! :D

- Ken


Re: [ceph-users] CEPH/CEPHFS upgrade questions (9.2.0 ---> 10.2.1)

2016-05-25 Thread Ken Dreyer
On Wed, May 25, 2016 at 8:05 AM, Gregory Farnum  wrote:
> On Tue, May 24, 2016 at 9:54 PM, Goncalo Borges
>  wrote:
>> Thank you Greg...
>>
>> There is one further thing which is not explained in the release notes and
>> that may be worthwhile to say.
>>
>> The rpm structure (for redhat compatible releases) changed in Jewel where
>> now there is a ( ceph + ceph-common + ceph-base + ceph-mon/osd/mds + others
>> ) packages while in infernalis there was only  ( ceph + ceph-common + others
>> ) packages
>>
>> I haven't tested things yet myself but the standard upgrade instructions
>> just say to do a 'yum update && yum install ceph' and I actually wonder how
>> this will pull ceph-mon in a MON, ceoh-osd in an OSD server or ceph-mds in a
>> MDS. Unless everything is pulled together in each service (even if not used
>> afterwards).
>
> I have no idea in specific, but in general the package managers
> provide mechanisms to do this correctly and we've succeeded in the
> past, so just working on the ceph meta-package *should* do the
> trick...

Right. Goncalo, try "yum update" without a "-y" and see what happens :)

- Ken


Re: [ceph-users] jewel 10.2.1 lttng & rbdmap.service

2016-05-25 Thread Ken Dreyer
On Wed, May 25, 2016 at 8:00 AM, kefu chai  wrote:
> On Tue, May 24, 2016 at 5:23 AM, Max Vernimmen
>  wrote:
>> Hi,
>>
>> I upgraded to 10.2.1 and noticed that lttng is a dependency for the RHEL
>> packages in that version. Since I have no intention of doing traces on ceph
>> I find myself  wondering why ceph is now requiring these libraries to be
>> installed. Since the lttng packages are not included in RHEL/CentOS 7 I’ll
>> need to pull these in from http://packages.efficios.com/ which is easy
>> enough but I’m not really looking forward to adding and managing another
>> package source for my systems. Anyone have some info on the reasoning behind
>> the dependency ?
>
> lttng is also included[1] in epel-7 [2]. the lttng dependency was
> added in https://github.com/ceph/ceph/pull/7857 by me.
> personally, i am good either way: enable lttng or disable it in the
> rpm packages.
>
> Ken, do you have any insight on this?
>

Max can you please confirm that you have epel enabled?

On CentOS you do this by running "yum install epel-release"

- Ken


Re: [ceph-users] v10.2.1 Jewel released

2016-05-17 Thread Ken Dreyer
On Mon, May 16, 2016 at 11:14 PM, Karsten Heymann
 wrote:
> the updated debian packages are *still* missing ceph-{mon,osd}.target.
> Was it intentional to release the point release without the fix?

It was not intentional.

Teuthology does not test systemd, so these sort of things tend to fall
off the radar when it comes to pre-release testing.

- Ken


Re: [ceph-users] yum install ceph on RHEL 7.2

2016-03-08 Thread Ken Dreyer
On Tue, Mar 8, 2016 at 4:11 PM, Shinobu Kinjo  wrote:
> If you register subscription properly, you should be able to install
> the Ceph without the EPEL.

The opposite is true (when installing upstream / ceph.com).

We rely on EPEL for several things, like leveldb and xmlstarlet.

- Ken


Re: [ceph-users] Upgrade from Hammer LTS to Infernalis or wait for Jewel LTS?

2016-03-04 Thread Ken Dreyer
On Fri, Mar 4, 2016 at 1:53 AM, Luis Periquito  wrote:
> On Wed, Mar 2, 2016 at 9:32 AM, Mihai Gheorghe  wrote:
> From previous history the last 2 LTS versions are supported (currently
> Firefly and Hammer).

Note that Firefly reached end-of-life in January, and we're no longer
issuing releases for it.

- Ken


Re: [ceph-users] systemd & sysvinit scripts mix ?

2016-03-01 Thread Ken Dreyer
In theory the RPM should contain either the init script, or the
systemd .service files, but not both.

If that's not the case, you can file a bug @ http://tracker.ceph.com/
. Patches are even better!

- Ken

On Tue, Mar 1, 2016 at 2:36 AM, Florent B  wrote:
> By the way, why /etc/init.d/ceph script packaged in Infernalis "ceph"
> package, is not the same script as git's one ?
> (https://github.com/ceph/ceph/blob/master/systemd/ceph) ?
> Is it expected ? I think there's too mixing things between both systems...


[ceph-users] babeltrace and lttng-ust headed to EPEL 7

2016-03-01 Thread Ken Dreyer
lttng is destined for EPEL 7, so we will finally have lttng
tracepoints in librbd for our EL7 Ceph builds, as we've done with the
EL6 builds.

https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2016-200bd827c6
https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2016-8c74b0b27f

If you are running RHEL 7 or CentOS 7 with EPEL enabled, please try
those two packages out on a test machine. If they don't cause issues
for you, add positive karma in Fedora's Bodhi tool (note that you have
to log in for it to take effect). Karma points will allow the updates
to land in EPEL sooner than the normal waiting period (2 weeks).

- Ken


Re: [ceph-users] systemd & sysvinit scripts mix ?

2016-02-29 Thread Ken Dreyer
I recommend we simply drop the init scripts from the master branch.
All our supported platforms (CentOS 7 or newer, and Ubuntu Trusty or
newer) use upstart or systemd.

- Ken

On Mon, Feb 29, 2016 at 3:44 AM, Florent B  wrote:
> Hi everyone,
>
> On a few servers, updated from Hammer to Infernalis, and from Debian
> Wheezy to Jessie, I can see that it seems to have some mixes between old
> sysvinit "ceph" script and the new ones on systemd.
>
> I always have an /etc/init.d/ceph old script, converted as a service by
> systemd :
>
> # systemctl status ceph
> ● ceph.service - LSB: Start Ceph distributed file system daemons at boot
> time
>Loaded: loaded (/etc/init.d/ceph)
>Active: active (running) since Mon 2016-01-25 13:48:31 CET; 1 months
> 4 days ago
>CGroup: /system.slice/ceph.service
>└─13458 /usr/bin/python /usr/sbin/ceph-create-keys --cluster
> ceph ...
>
>
> And some new systemd services as "ceph.target" are inactive & disabled :
>
> # systemctl status ceph.target
> ● ceph.target - ceph target allowing to start/stop all ceph*@.service
> instances at once
>Loaded: loaded (/lib/systemd/system/ceph.target; disabled)
>Active: inactive (dead)
>
> But others are loaded & enabled :
>
> # systemctl status ceph-osd@*
> ● ceph-osd@0.service - Ceph object storage daemon
>Loaded: loaded (/lib/systemd/system/ceph-osd@.service; disabled)
>Active: active (running) since Thu 2015-12-03 17:51:55 CET; 2 months
> 26 days ago
>  Main PID: 13350 (ceph-osd)
>CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
>└─13350 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser
> ceph ...
>
> ● ceph-osd@1.service - Ceph object storage daemon
>Loaded: loaded (/lib/systemd/system/ceph-osd@.service; disabled)
>Active: active (running) since Thu 2015-12-03 17:55:02 CET; 2 months
> 26 days ago
>  Main PID: 57626 (ceph-osd)
>CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@1.service
>└─57626 /usr/bin/ceph-osd -f --cluster ceph --id 1 --setuser
> ceph ...
>
>
> Isn't there any misconfiguration there ? I think "/etc/init.d/ceph"
> script should have been deleted on upgrade by Infernalis, isn't it ?
>
> What are the official recommendations about this ? Should I have to
> delete old "ceph" script myself and enable all new services ? (and why
> does it have to be done manually ?)
>
> Thank you.
>
> Florent
>


Re: [ceph-users] State of Ceph documention

2016-02-26 Thread Ken Dreyer
On Fri, Feb 26, 2016 at 6:08 AM, Andy Allan  wrote:
> including a nice big obvious version switcher banner on every
> page.

We used to have something like this, but we didn't set it back up when
we migrated the web servers to new infrastructure a while back. It was
using https://github.com/alfredodeza/ayni

I think it would be simpler to incorporate a patch into the sphinx
template itself.
https://github.com/ceph/ceph/blob/master/doc/_templates/layout.html

- Ken


[ceph-users] "ceph-installer" in GitHub

2016-02-25 Thread Ken Dreyer
Hi folks,

A few of us at RH are working on a project called "ceph-installer",
which is a Pecan web app that exposes endpoints for running
ceph-ansible under the hood.

The idea is that other applications will be able to consume this REST
API in order to orchestrate Ceph installations.

Another team within Red Hat is also working on a GUI component that
will interact with the ceph-installer web service, and that is
https://github.com/skyrings

These are all nascent projects that are very much works-in-progress,
and so the workflows are very rough and there are a hundred things we
could do to improve the experience and integration, etc. We welcome
feedback from the rest of the community :)

- Ken


Re: [ceph-users] Ceph extras package support for centos kvm-qemu

2015-12-08 Thread Ken Dreyer
When we re-arranged the download structure for packages and moved
everything from ceph.com to download.ceph.com, we did not carry
ceph-extras over.

The reason is that the packages there were unmaintained. The EL6 QEMU
binaries were vulnerable to VENOM (CVE-2015-3456) and maybe other
CVEs, and no users should rely on them any more.

If you need QEMU with RBD support on CentOS, I recommend that you
upgrade from CentOS 6 to CentOS 7.1+. Red Hat's QEMU package in RHEL
7.1 is built with librbd support.
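
A quick way to confirm whether a given qemu binary has rbd support is to ask it for its supported drive formats, as the original poster did. A hedged sketch (the binary path is the RHEL/CentOS location; other distros typically use qemu-system-x86_64):

```shell
# Check the qemu binary's supported block formats for "rbd".
QEMU=${QEMU:-/usr/libexec/qemu-kvm}

if [ -x "$QEMU" ] && "$QEMU" --drive format=? 2>&1 | grep -qw rbd; then
    result="rbd supported"
else
    result="rbd NOT supported (or qemu not found)"
fi
echo "$result"
```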

On Thu, Nov 19, 2015 at 1:59 AM, Xue, Chendi  wrote:
> Hi, All
>
>
>
> We noticed ceph.com/packages url is no longer available, we used to download
> rbd supported centos qemu-kvm from http://ceph.com/packages/ceph-extras/rpm
> as instructed below.
>
>
>
> Is there another way to fix this? Or any other qemu-kvm version is rbd
> supported?
>
>
>
> [root@client03]# /usr/libexec/qemu-kvm --drive format=?
>
> Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed
> vhdx parallels nbd blkdebug host_cdrom host_floppy host_device file gluster
> gluster gluster gluster
>
>
>
> Sadly no rbd listed in the supported format
>
>
>
> Original URL we followed:
>
>
>
> [ceph-qemu]
> name=Ceph Packages for QEMU
> baseurl=http://ceph.com/packages/ceph-extras/rpm/{distro}/$basearch
> enabled=1
> priority=2
> gpgcheck=1
> type=rpm-md
> gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>
> [ceph-qemu-noarch]
> name=Ceph QEMU noarch
> baseurl=http://ceph.com/packages/ceph-extras/rpm/{distro}/noarch
> enabled=1
> priority=2
> gpgcheck=1
> type=rpm-md
> gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>
> [ceph-qemu-source]
> name=Ceph QEMU Sources
> baseurl=http://ceph.com/packages/ceph-extras/rpm/{distro}/SRPMS
> enabled=1
> priority=2
> gpgcheck=1
> type=rpm-md
> gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>
>
>
>
>
> Best Regards,
>
> Chendi
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>


Re: [ceph-users] http://gitbuilder.ceph.com/

2015-12-08 Thread Ken Dreyer
Yes, we've had to move all of our hardware out of the datacenter in
Irvine, California to a new home in Raleigh, North Carolina. The
backend server for gitbuilder.ceph.com had a *lot* of data and we were
not able to sync all of it to an interim server in Raleigh before we
had to unplug the old one.

Since you brought up fastcgi, it's a good idea to transition your
cluster from Apache+mod_fastcgi and start using RGW's Civetweb server
instead. Civetweb is much simpler, and future RGW optimizations are
all going into the Civetweb stack.

- Ken

On Tue, Dec 8, 2015 at 2:54 AM, Xav Paice  wrote:
> Hi,
>
> Just wondering if there's a known issue with http://gitbuilder.ceph.com/ -
> if I go to several urls, e.g.
> http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-trusty-x86_64-basic, I
> get a 403.  That's still the right place to get deb's, right?
>


Re: [ceph-users] [Ceph-maintainers] ceph packages link is gone

2015-12-03 Thread Ken Dreyer
On Thu, Dec 3, 2015 at 5:53 PM, Dan Mick  wrote:
> This was sent to the ceph-maintainers list; answering here:
>
> On 11/25/2015 02:54 AM, Alaâ Chatti wrote:
>> Hello,
>>
>> I used to install qemu-ceph on centos 6 machine from
>> http://ceph.com/packages/, but the link has been removed, and there is
>> no alternative in the documentation. Would you please update the link so
>> I can install the version of qemu that supports rbd.
>>
>> Thank you
>
> Packages can be found on http://download.ceph.com/

When we re-arranged the download structure for packages and moved
everything from ceph.com to download.ceph.com, we did not carry
ceph-extras over.

The reason is that the packages there were unmaintained. The EL6 QEMU
binaries were vulnerable to VENOM (CVE-2015-3456) and maybe other
CVEs, and no users should rely on them any more.

We've removed all references to ceph-extras from our test framework
upstream (eg https://github.com/ceph/ceph-cm-ansible/pull/137) and I
recommend that everyone else do the same.

If you need QEMU with RBD support on CentOS, I recommend that you
upgrade from CentOS 6 to CentOS 7.1+. Red Hat's QEMU package in RHEL
7.1 is built with librbd support.


Re: [ceph-users] Problem with infernalis el7 package

2015-11-17 Thread Ken Dreyer
You're right stijn, I apologize that we did not bump the release
number in this case. That would have been the correct thing to do, but
our build system simply isn't set up to do that easily, and we wanted
to get a fix out as soon as possible.

- Ken

On Wed, Nov 11, 2015 at 1:34 AM, Stijn De Weirdt
 wrote:
> did you recreate new rpms with same version/release? it would be better
> to make new rpms with different release (e.g. 9.2.0-1). we have
> snapshotted mirrors and nginx caches between ceph yum repo and the nodes
> that install the rpms, so cleaning the cache locally will not help.
>
> stijn
>
> On 11/11/2015 01:06 AM, Ken Dreyer wrote:
>> On Mon, Nov 9, 2015 at 6:03 PM, Bob R  wrote:
>>> We've got two problems trying to update our cluster to infernalis-
>>
>>
>> This was our bad. As indicated in http://tracker.ceph.com/issues/13746
>> , Alfredo rebuilt CentOS 7 infernalis packages, re-signed them, and
>> re-uploaded them to the same location on download.ceph.com. Please
>> clear your yum cache (`yum makecache`) and try again.


Re: [ceph-users] can't stop ceph

2015-11-17 Thread Ken Dreyer
The version of the documentation you were browsing was for "argonaut",
which is very old, and predates Upstart integration. Here's the
version of the docs for firefly (0.80.z), the version that you're
using on Ubuntu:

http://docs.ceph.com/docs/firefly/rados/operations/operating/

This version has the command you're looking for regarding stopping all OSDs:

  sudo stop ceph-osd-all

- Ken


On Mon, Nov 16, 2015 at 6:41 PM, Yonghua Peng  wrote:
> Thanks a lot. that works.
> Do you know how to stop all ceph-osd daemons via one command?
>
>
>
> On 2015/11/17 (Tue) 9:35, wd_hw...@wistron.com wrote:
>>
>> Hi,
>> You may try the following command
>> 'sudo stop ceph-mon id=ceph2'
>>
>> WD
>>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Yonghua Peng
>> Sent: Tuesday, November 17, 2015 9:34 AM
>> To: ceph-users@lists.ceph.com
>> Subject: [ceph-users] can't stop ceph
>>
>> Hello,
>>
>> My system is ubuntu 12.04, ceph 0.80.10 installed.
>> I followed the document here,
>> http://docs.ceph.com/docs/argonaut/init/
>>
>> But couldn't stop a mon daemon successfully.
>>
>> root@ceph2:~# ps -efw|grep ceph-
>> root  2763 1  0 Oct28 ?00:05:11 /usr/bin/ceph-mon
>> --cluster=ceph -i ceph2 -f
>> root  4299 1  0 Oct28 ?00:21:49 /usr/bin/ceph-osd
>> --cluster=ceph -i 0 -f
>> root  4703 1  0 Oct28 ?00:21:44 /usr/bin/ceph-osd
>> --cluster=ceph -i 1 -f
>> root 12353 1  0 Oct29 ?00:21:08 /usr/bin/ceph-osd
>> --cluster=ceph -i 2 -f
>> root 19143 17226  0 09:29 pts/400:00:00 grep --color=auto ceph-
>> root@ceph2:~#
>> root@ceph2:~# service ceph -v stop mon
>> root@ceph2:~# echo $?
>> 0
>> root@ceph2:~# ps -efw|grep ceph-
>> root  2763 1  0 Oct28 ?00:05:11 /usr/bin/ceph-mon
>> --cluster=ceph -i ceph2 -f
>> root  4299 1  0 Oct28 ?00:21:49 /usr/bin/ceph-osd
>> --cluster=ceph -i 0 -f
>> root  4703 1  0 Oct28 ?00:21:44 /usr/bin/ceph-osd
>> --cluster=ceph -i 1 -f
>> root 12353 1  0 Oct29 ?00:21:08 /usr/bin/ceph-osd
>> --cluster=ceph -i 2 -f
>> root 19184 17226  0 09:29 pts/400:00:00 grep --color=auto ceph-
>>
>>
>> Can you help? thanks.
>>
>> ---
>> This email contains confidential or legally privileged information and is
>> for the sole use of its intended recipient.
>> Any unauthorized review, use, copying or distribution of this email or the
>> content of this email is strictly prohibited.
>> If you are not the intended recipient, you may reply to the sender and
>> should delete this e-mail immediately.
>>
>> ---
>>


Re: [ceph-users] Unable to install ceph

2015-11-13 Thread Ken Dreyer
Looks like ceph-deploy 1.5.20 thinks your Ubuntu 15.10 wily system is
an Upstart-based system, when it is actually systemd-based.

- Ken

On Fri, Nov 13, 2015 at 7:51 AM, Robert Shore  wrote:
> OS: Linux b260-space-pc62 4.2.0-18-generic #22-Ubuntu SMP Fri Nov 6 18:25:50
> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
>(wily werewolf)
>
> Command: ceph-deploy mon create-initial
>
> Log fragment:
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /home/dfs/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy mon
> create-initial
> [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts b260-space-pc39
> b260-space-pc61 b260-space-pc62
> [ceph_deploy.mon][DEBUG ] detecting platform for host b260-space-pc39 ...
> [b260-space-pc39][DEBUG ] connection detected need for sudo
> [b260-space-pc39][DEBUG ] connected to host: b260-space-pc39
> [b260-space-pc39][DEBUG ] detect platform information from remote host
> [b260-space-pc39][DEBUG ] detect machine type
> [ceph_deploy.mon][INFO  ] distro info: Ubuntu 15.10 wily
> [b260-space-pc39][DEBUG ] determining if provided host has same hostname in
> remote
> [b260-space-pc39][DEBUG ] get remote short hostname
> [b260-space-pc39][DEBUG ] deploying mon to b260-space-pc39
> [b260-space-pc39][DEBUG ] get remote short hostname
> [b260-space-pc39][DEBUG ] remote hostname: b260-space-pc39
> [b260-space-pc39][DEBUG ] write cluster configuration to
> /etc/ceph/{cluster}.conf
> [b260-space-pc39][DEBUG ] create the mon path if it does not exist
> [b260-space-pc39][DEBUG ] checking for done path:
> /var/lib/ceph/mon/ceph-b260-space-pc39/done
> [b260-space-pc39][DEBUG ] done path does not exist:
> /var/lib/ceph/mon/ceph-b260-space-pc39/done
> [b260-space-pc39][INFO  ] creating keyring file:
> /var/lib/ceph/tmp/ceph-b260-space-pc39.mon.keyring
> [b260-space-pc39][DEBUG ] create the monitor keyring file
> [b260-space-pc39][INFO  ] Running command: sudo ceph-mon --cluster ceph
> --mkfs -i b260-space-pc39 --keyring
> /var/lib/ceph/tmp/ceph-b260-space-pc39.mon.keyring
> [b260-space-pc39][DEBUG ] ceph-mon: set fsid to
> 74e0d24e-8cd7-4e17-bd9c-d504fc709c31
> [b260-space-pc39][DEBUG ] ceph-mon: created monfs at
> /var/lib/ceph/mon/ceph-b260-space-pc39 for mon.b260-space-pc39
> [b260-space-pc39][INFO  ] unlinking keyring file
> /var/lib/ceph/tmp/ceph-b260-space-pc39.mon.keyring
> [b260-space-pc39][DEBUG ] create a done file to avoid re-doing the mon
> deployment
> [b260-space-pc39][DEBUG ] create the init path if it does not exist
> [b260-space-pc39][DEBUG ] locating the `service` executable...
> [b260-space-pc39][INFO  ] Running command: sudo initctl emit ceph-mon
> cluster=ceph id=b260-space-pc39
> [b260-space-pc39][WARNING] initctl: Unable to connect to Upstart: Failed to
> connect to socket /com/ubuntu/upstart: Connection refused
> [b260-space-pc39][ERROR ] RuntimeError: command returned non-zero exit
> status: 1
> [ceph_deploy.mon][ERROR ] Failed to execute command: initctl emit ceph-mon
> cluster=ceph id=b260-space-pc39
> [ceph_deploy.mon][DEBUG ] detecting platform for host b260-space-pc61 ...
>
> The same log pattern (with the error) repeats for the other 2 hosts.
>
> Any suggestions? More info?
>


Re: [ceph-users] No Presto metadata available for Ceph-noarch ceph-release-1-1.el7.noarch.rp FAILED

2015-11-10 Thread Ken Dreyer
On Fri, Oct 30, 2015 at 2:02 AM, Andrey Shevel  wrote:
> ceph-release-1-1.el7.noarch.rp FAILED
> http://download.ceph.com/rpm-giant/el7/noarch/ceph-release-1-1.el7.noarch.rpm:
> [Errno 14] HTTP Error 404 - Not Found
> ]  0.0 B/s |0 B  --:--:-- ETA
> Trying other mirror.
>
>
> Error downloading packages:
>   ceph-release-1-1.el7.noarch: [Errno 256] No more mirrors to try.

This was our bad. As indicated in http://tracker.ceph.com/issues/13746
, Alfredo has fixed the ceph-release package on download.ceph.com.
Please clear your yum cache (`yum makecache`) and try again.


Re: [ceph-users] SHA1 wrt hammer release and tag v0.94.3

2015-11-10 Thread Ken Dreyer
On Fri, Oct 30, 2015 at 7:20 PM, Artie Ziff  wrote:
> I'm looking forward to learning...
> Why the different SHA1 values in two places that reference v0.94.3?

95cefea9fd9ab740263bf8bb4796fd864d9afe2b is the commit where we bump
the version number in the debian packaging.
b2503b0e15c0b13f480f0835060479717b9cf935 is not a commit, but an
annotated tag (it is a separate object in Git's database, hence the
unique sha1)
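
You can see the relationship in any repository. Here is a quick sketch
in a throwaway repo -- the tag name echoes v0.94.3, but the repo and
the sha1s it produces are purely local:

```shell
# Throwaway repo: show that an annotated tag is a separate object with
# its own sha1, distinct from the commit it points at.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'bump version to 0.94.3'
git -c user.name=demo -c user.email=demo@example.com \
    tag -a v0.94.3 -m 'v0.94.3 release'
commit_sha=$(git rev-parse 'v0.94.3^{commit}')   # the commit object
tag_sha=$(git rev-parse v0.94.3)                 # the annotated tag object
echo "commit: $commit_sha"
echo "tag:    $tag_sha"                          # a different sha1
git cat-file -t "$tag_sha"                       # object type: "tag"
```

`git rev-parse 'v0.94.3^{commit}'` (or `'v0.94.3^{0}'`) "peels" an
annotated tag down to the commit it points at.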



Re: [ceph-users] Issues with CentOS RDO Liberty (OpenStack) and Ceph Repo (dependency resolution failed)

2015-11-10 Thread Ken Dreyer
On Fri, Nov 6, 2015 at 4:05 PM, c...@dolphin-it.de  wrote:
> Error: Package: 1:cups-client-1.6.3-17.el7.x86_64 (core-0)
>Requires: cups-libs(x86-64) = 1:1.6.3-17.el7
>Installed: 1:cups-libs-1.6.3-17.el7_1.1.x86_64 (@updates)
>cups-libs(x86-64) = 1:1.6.3-17.el7_1.1
>Available: 1:cups-libs-1.6.3-17.el7.x86_64 (core-0)
>cups-libs(x86-64) = 1:1.6.3-17.el7
>  You could try using --skip-broken to work around the problem
>  You could try running: rpm -Va --nofiles --nodigest
>

You need cups-client-1.6.3-17.el7_1.1, but yum is trying to install an
older version, cups-client-1.6.3-17.el7.

That newer ceph-cups package shipped back in June
(https://lists.centos.org/pipermail/centos-announce/2015-June/021178.html).
My guess is that your CentOS mirror is slightly out of date, or your
maybe your client. Run `yum makecache` and try again?

- Ken


Re: [ceph-users] Problem with infernalis el7 package

2015-11-10 Thread Ken Dreyer
On Mon, Nov 9, 2015 at 6:03 PM, Bob R  wrote:
> We've got two problems trying to update our cluster to infernalis-


This was our bad. As indicated in http://tracker.ceph.com/issues/13746
, Alfredo rebuilt CentOS 7 infernalis packages, re-signed them, and
re-uploaded them to the same location on download.ceph.com. Please
clear your yum cache (`yum makecache`) and try again.


Re: [ceph-users] Upgrade from Hammer to Infernalis with ceph-deploy fails on Centos7

2015-11-10 Thread Ken Dreyer
On Mon, Nov 9, 2015 at 5:20 PM, Jason Altorf  wrote:
> On Tue, Nov 10, 2015 at 7:34 AM, Ken Dreyer  wrote:
>> It is not a known problem. Mind filing a ticket @
>> http://tracker.ceph.com/ so we can track the fix for this?
>>
>> On Mon, Nov 9, 2015 at 1:35 PM, c...@dolphin-it.de  
>> wrote:
>>>
>>>
>>> Dear Ceph-users,
>>>
>>> I am trying to upgrade from Hammer to Infernalis but "ceph-deploy install 
>>> --release infernalis host1 host2 ..." fails with:
>>>
>>> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm -Uvh 
>>> --replacepkgs 
>>> http://ceph.com/rpm-infernalis/el7/noarch/ceph-release-1-0.el7.noarch.rpm
>>>
>>> The resolution would be to download "ceph-release-1-1.el7.noarch.rpm" 
>>> instead of v1.0.
>>>
>>> Is this a known problem?
>
> I hit it this morning, haven't had time to look into it further, did
> you get a chance fill in a ticket for it?
>

This was our bad. As indicated in http://tracker.ceph.com/issues/13746
, Alfredo rebuilt CentOS 7 infernalis packages, re-signed them, and
re-uploaded them to the same location on download.ceph.com. Please
clear your yum cache (`yum makecache`) and try again.


Re: [ceph-users] Problem with infernalis el7 package

2015-11-10 Thread Ken Dreyer
Yeah, this was our bad. As indicated in
http://tracker.ceph.com/issues/13746 , Alfredo rebuilt CentOS 7
infernalis packages so that they don't have this dependency, re-signed
them, and re-uploaded them to the same location. Please clear your yum
cache (`yum makecache`) and try again.

On Tue, Nov 10, 2015 at 9:01 AM, Kenneth Waegeman
 wrote:
> Because our problem was not related to ceph-deploy, I created a new ticket:
> http://tracker.ceph.com/issues/13746
>
>
> On 10/11/15 16:53, Kenneth Waegeman wrote:
>>
>>
>>
>> On 10/11/15 02:07, c...@dolphin-it.de wrote:
>>>
>>>
>>> Hello,
>>>
>>> I filed a new ticket:
>>> http://tracker.ceph.com/issues/13739
>>>
>>> Regards,
>>> Kevin
>>>
>>> [ceph-users] Problem with infernalis el7 package (10-Nov-2015 1:57)
>>> From:   Bob R
>>> To:ceph-users@lists.ceph.com
>>>
>>>
>>> Hello,
>>>
>>>
>>> We've got two problems trying to update our cluster to infernalis-
>>>
>>>
>>> ceph-deploy install --release infernalis neb-kvm00
>>>
>>>
>>>
>>> [neb-kvm00][INFO  ] Running command: sudo rpm --import
>>> https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>>> [neb-kvm00][INFO  ] Running command: sudo rpm -Uvh --replacepkgs
>>> http://ceph.com/rpm-infernalis/el7/noarch/ceph-release-1-0.el7.noarch.rpm
>>> [neb-kvm00][WARNIN] curl: (22) The requested URL returned error: 404 Not
>>> Found
>>> [neb-kvm00][WARNIN] error: skipping
>>> http://ceph.com/rpm-infernalis/el7/noarch/ceph-release-1-0.el7.noarch.rpm -
>>> transfer failed
>>> [neb-kvm00][DEBUG ] Retrieving
>>> http://ceph.com/rpm-infernalis/el7/noarch/ceph-release-1-0.el7.noarch.rpm
>>> [neb-kvm00][ERROR ] RuntimeError: command returned non-zero exit status:
>>> 1
>>> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm -Uvh
>>> --replacepkgs
>>> http://ceph.com/rpm-infernalis/el7/noarch/ceph-release-1-0.el7.noarch.rpm
>>>
>>>
>>> ^^ the ceph-release package is named "ceph-release-1-1.el7.noarch.rpm"
>>>
>>>
>>> Trying to install manually on that (or any other) host we're seeing a
>>> dependency which makes us think the package is built improperly-
>>>
>>>
>>> --> Running transaction check
>>> ---> Package ceph.x86_64 1:9.2.0-0.el7 will be an update
>>> --> Processing Dependency:
>>> /home/jenkins-build/build/workspace/ceph-build-next/ARCH/x86_64/DIST/centos7/venv/bin/python
>>> for package: 1:ceph-9.2.0-0.el7.x86_64
>>> ---> Package selinux-policy.noarch 0:3.13.1-23.el7 will be updated
>>> ---> Package selinux-policy.noarch 0:3.13.1-23.el7_1.21 will be an update
>>> --> Processing Dependency:
>>> /home/jenkins-build/build/workspace/ceph-build-next/ARCH/x86_64/DIST/centos7/venv/bin/python
>>> for package: 1:ceph-9.2.0-0.el7.x86_64
>>> --> Finished Dependency Resolution
>>> Error: Package: 1:ceph-9.2.0-0.el7.x86_64 (Ceph)
>>> Requires:
>>> /home/jenkins-build/build/workspace/ceph-build-next/ARCH/x86_64/DIST/centos7/venv/bin/python
>>>   You could try using --skip-broken to work around the problem
>>>   You could try running: rpm -Va --nofiles --nodigest
>>
>> Hi,
>>
>> We also see this last problem with the 9.2.0 release. We tried to update
>> from 0.94.5 to infernalis, and got this jenkins dependency thingy. Our
>> packages are not installed with ceph-deploy. I'll update the ticket with our
>> logs.
>>
>> K
>>>
>>>
>>> Thanks -Bob
>>
>>
>
>


Re: [ceph-users] Upgrade from Hammer to Infernalis with ceph-deploy fails on Centos7

2015-11-09 Thread Ken Dreyer
It is not a known problem. Mind filing a ticket @
http://tracker.ceph.com/ so we can track the fix for this?

On Mon, Nov 9, 2015 at 1:35 PM, c...@dolphin-it.de  wrote:
>
>
> Dear Ceph-users,
>
> I am trying to upgrade from Hammer to Infernalis but "ceph-deploy install 
> --release infernalis host1 host2 ..." fails with:
>
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm -Uvh 
> --replacepkgs 
> http://ceph.com/rpm-infernalis/el7/noarch/ceph-release-1-0.el7.noarch.rpm
>
> The resolution would be to download "ceph-release-1-1.el7.noarch.rpm" instead 
> of v1.0.
>
> Is this a known problem?
>
> Regards,
> Kevin
>
>


Re: [ceph-users] rsync mirror download.ceph.com - broken file on rsync server

2015-10-29 Thread Ken Dreyer
On Wed, Oct 28, 2015 at 7:54 PM, Matt Taylor  wrote:
> I still see rsync errors due to permissions on the remote side:
>

Thanks for the heads' up; I bet another upload rsync process got
interrupted there.

I've run the following to remove all the oddly-named RPM files:

  for f in $(locate '*.rpm.*'); do rm -i "$f"; done

Please let us know if there are other problems like this.
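
If any mirrors want to sweep for the same pattern, here is a sketch
using find rather than locate (find needs no mlocate database; the
directory and file names below are invented for the demo):

```shell
# Invented demo tree: an interrupted rsync upload leaves a temp file
# named <file>.rpm.XXXXXX next to the real packages.
set -e
repo=$(mktemp -d)
touch "$repo/ceph-0.94.5-0.el7.x86_64.rpm"          # a real package
touch "$repo/ceph-0.94.5-0.el7.x86_64.rpm.3xQnIQ"   # rsync leftover
find "$repo" -name '*.rpm.*' -print    # dry run first: only the leftover
find "$repo" -name '*.rpm.*' -delete   # then remove it
ls "$repo"                             # the real .rpm survives
```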

- Ken


Re: [ceph-users] rsync mirror download.ceph.com - broken file on rsync server

2015-10-27 Thread Ken Dreyer
Thanks, I've deleted it from the download.ceph.com web server.

- Ken

On Tue, Oct 27, 2015 at 11:06 AM, Alfredo Deza  wrote:
> Yes that file can (should) be deleted
>
> On Tue, Oct 27, 2015 at 12:49 PM, Ken Dreyer  wrote:
>> On Tue, Oct 27, 2015 at 2:51 AM, Björn Lässig  
>> wrote:
>>> indeed there is:
>>>
>>> [09:40:49] ~ > rsync -4 -L
>>> download.ceph.com::ceph/debian-hammer/pool/main/c/ceph/.ceph-dbg_0.94.5-1trusty_amd64.deb.3xQnIQ
>>> -rw--- 91,488,256 2015/10/26 19:36:46
>>> .ceph-dbg_0.94.5-1trusty_amd64.deb.3xQnIQ
>>>
>>> i would be thankful, if you could remove this broken file or complete the
>>> mirror process.
>>>
>>
>> Alfredo this looks like a leftover from some rsync upload process. I
>> think we can just delete this file, right?
>>
>> - Ken


Re: [ceph-users] fedora core 22

2015-10-27 Thread Ken Dreyer
On Tue, Oct 27, 2015 at 7:13 AM, Andrew Hume  wrote:
> a while back, i had installed ceph (firefly i believe) on my fedora core 
> system and all went smoothly.
> i went to repeat this yesterday with hammer, but i am stymied by lack of 
> packages. there doesn’t
> appear anything for fc21 or fc22.
>
> i initially tried ceph-deploy, but it fails because of the above issue.
> i then looked at the manual install documentation but am growing nervous 
> because
> it is clearly out of date (contents of ceph.conf are different than what 
> ceps-deploy generated).
>
> how do i make progress?
>
> andrew


The Fedora distribution itself ships the latest Hammer package [1], so
you can use the ceph packages from there. I think ceph-deploy's
"--no-adjust-repos" flag will keep it from trying to contact ceph.com?

- Ken

[1] https://bodhi.fedoraproject.org/updates/?packages=ceph


Re: [ceph-users] rsync mirror download.ceph.com - broken file on rsync server

2015-10-27 Thread Ken Dreyer
On Tue, Oct 27, 2015 at 2:51 AM, Björn Lässig  wrote:
> indeed there is:
>
> [09:40:49] ~ > rsync -4 -L
> download.ceph.com::ceph/debian-hammer/pool/main/c/ceph/.ceph-dbg_0.94.5-1trusty_amd64.deb.3xQnIQ
> -rw--- 91,488,256 2015/10/26 19:36:46
> .ceph-dbg_0.94.5-1trusty_amd64.deb.3xQnIQ
>
> i would be thankful, if you could remove this broken file or complete the
> mirror process.
>

Alfredo this looks like a leftover from some rsync upload process. I
think we can just delete this file, right?

- Ken


Re: [ceph-users] Can we place the release key on download.ceph.com?

2015-10-15 Thread Ken Dreyer
Good suggestion! Done: http://download.ceph.com/keys/release.asc

The gitbuilder key is now there also:
http://download.ceph.com/keys/autobuild.asc

- Ken

On Wed, Oct 14, 2015 at 1:22 PM, Wido den Hollander  wrote:
> Hi,
>
> Currently the public keys for signing the packages can be found on
> git.ceph.com:
> https://git.ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
>
> git.ceph.com doesn't have IPv6, but it also isn't mirrored to any system.
>
> It would be handy if http://download.ceph.com/release.asc would exist.
>
> Any objections against mirroring the pubkey there as well? If not, could
> somebody do it?
>
> --
> Wido den Hollander
> 42on B.V.
> Ceph trainer and consultant
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on


Re: [ceph-users] download.ceph.com unreachable IPv6 [was: v9.1.0 Infernalis release candidate released]

2015-10-15 Thread Ken Dreyer
I wonder if there are routing issues with IPv6 in the DreamHost cloud?

http://ipv6-test.com/validate.php prints the right IP, but then "web
server is unreachable : Connection timed out"

- Ken

On Thu, Oct 15, 2015 at 12:41 AM, Björn Lässig  wrote:
> On 10/14/2015 09:19 PM, Wido den Hollander wrote:
>
> unfortunately this site is not reachable at the moment.
>
>> http://eu.ceph.com/debian-testing/dists/wheezy/InRelease
>
>
> this one works fine for all sites. I'll tell my puppetmaster to deploy the
> source and we will debug this issue later in detail.
>
> Thanks.
>
>   Björn
>
>
>
>


Re: [ceph-users] Annoying libust warning on ceph reload

2015-10-08 Thread Ken Dreyer
On Wed, Sep 30, 2015 at 7:46 PM, Goncalo Borges
 wrote:
> - Each time logrotate is executed, we received a daily notice with the
> message
>
> libust[8241/8241]: Warning: HOME environment variable not set. Disabling
> LTTng-UST per-user tracing. (in setup_local_apps() at lttng-ust-comm.c:305)

Thanks for this detailed report!

Would you mind filing a new bug in tracker.ceph.com for this? It would
be nice to fix this in Ceph or LTTNG without having to set the HOME
env var.
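
In the meantime, one local workaround is to set HOME in the logrotate
stanza that signals the daemons. This is only a sketch -- the log path,
rotation options, and postrotate command below are illustrative, not
the stock packaging:

```
/var/log/ceph/*.log {
    rotate 7
    compress
    sharedscripts
    postrotate
        # LTTng-UST consults $HOME at startup; giving the reload a HOME
        # silences the per-user tracing warning
        HOME=/root killall -q -1 ceph-mon ceph-osd ceph-mds || true
    endscript
}
```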

- Ken


Re: [ceph-users] Ceph-deploy error

2015-10-08 Thread Ken Dreyer
This issue with the conflicts between Firefly and EPEL is tracked at
http://tracker.ceph.com/issues/11104
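
The usual mitigation is to make sure the Ceph repo outranks EPEL via
yum-plugin-priorities (with the plugin installed, lower numbers win and
repos default to 99). A sketch of a repo stanza -- the repo id and
baseurl here are illustrative:

```
[ceph]
name=Ceph packages for $basearch
baseurl=http://download.ceph.com/rpm-firefly/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
```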

On Sun, Aug 30, 2015 at 4:11 PM, pavana bhat
 wrote:
> In case someone else runs into the same issue in future:
>
> I came out of this issue by installing epel-release before installing
> ceph-deploy. If the order of installation is ceph-deploy followed by
> epel-release, the issue is being hit.
>
> Thanks,
> Pavana
>
> On Sat, Aug 29, 2015 at 10:02 AM, pavana bhat 
> wrote:
>>
>> Hi,
>>
>> I'm trying to install ceph for the first time following the quick
>> installation guide. I'm getting the below error, can someone please help?
>>
>> ceph-deploy install --release=firefly ceph-vm-mon1
>>
>> [ceph_deploy.conf][DEBUG ] found configuration file at:
>> /home/cloud-user/.cephdeploy.conf
>>
>> [ceph_deploy.cli][INFO  ] Invoked (1.5.28): /usr/bin/ceph-deploy install
>> --release=firefly ceph-vm-mon1
>>
>> [ceph_deploy.cli][INFO  ] ceph-deploy options:
>>
>> [ceph_deploy.cli][INFO  ]  verbose   : False
>>
>> [ceph_deploy.cli][INFO  ]  testing   : None
>>
>> [ceph_deploy.cli][INFO  ]  cd_conf   :
>> 
>>
>> [ceph_deploy.cli][INFO  ]  cluster   : ceph
>>
>> [ceph_deploy.cli][INFO  ]  install_mds   : False
>>
>> [ceph_deploy.cli][INFO  ]  stable: None
>>
>> [ceph_deploy.cli][INFO  ]  default_release   : False
>>
>> [ceph_deploy.cli][INFO  ]  username  : None
>>
>> [ceph_deploy.cli][INFO  ]  adjust_repos  : True
>>
>> [ceph_deploy.cli][INFO  ]  func  : <function install at 0x7f34b410e938>
>>
>> [ceph_deploy.cli][INFO  ]  install_all   : False
>>
>> [ceph_deploy.cli][INFO  ]  repo  : False
>>
>> [ceph_deploy.cli][INFO  ]  host  :
>> ['ceph-vm-mon1']
>>
>> [ceph_deploy.cli][INFO  ]  install_rgw   : False
>>
>> [ceph_deploy.cli][INFO  ]  repo_url  : None
>>
>> [ceph_deploy.cli][INFO  ]  ceph_conf : None
>>
>> [ceph_deploy.cli][INFO  ]  install_osd   : False
>>
>> [ceph_deploy.cli][INFO  ]  version_kind  : stable
>>
>> [ceph_deploy.cli][INFO  ]  install_common: False
>>
>> [ceph_deploy.cli][INFO  ]  overwrite_conf: False
>>
>> [ceph_deploy.cli][INFO  ]  quiet : False
>>
>> [ceph_deploy.cli][INFO  ]  dev   : master
>>
>> [ceph_deploy.cli][INFO  ]  local_mirror  : None
>>
>> [ceph_deploy.cli][INFO  ]  release   : firefly
>>
>> [ceph_deploy.cli][INFO  ]  install_mon   : False
>>
>> [ceph_deploy.cli][INFO  ]  gpg_url   : None
>>
>> [ceph_deploy.install][DEBUG ] Installing stable version firefly on cluster
>> ceph hosts ceph-vm-mon1
>>
>> [ceph_deploy.install][DEBUG ] Detecting platform for host ceph-vm-mon1 ...
>>
>> [ceph-vm-mon1][DEBUG ] connection detected need for sudo
>>
>> [ceph-vm-mon1][DEBUG ] connected to host: ceph-vm-mon1
>>
>> [ceph-vm-mon1][DEBUG ] detect platform information from remote host
>>
>> [ceph-vm-mon1][DEBUG ] detect machine type
>>
>> [ceph_deploy.install][INFO  ] Distro info: Red Hat Enterprise Linux Server
>> 7.1 Maipo
>>
>> [ceph-vm-mon1][INFO  ] installing Ceph on ceph-vm-mon1
>>
>> [ceph-vm-mon1][INFO  ] Running command: sudo yum clean all
>>
>> [ceph-vm-mon1][DEBUG ] Loaded plugins: fastestmirror, priorities
>>
>> [ceph-vm-mon1][DEBUG ] Cleaning repos: epel rhel-7-ha-rpms
>> rhel-7-optional-rpms rhel-7-server-rpms
>>
>> [ceph-vm-mon1][DEBUG ]   : rhel-7-supplemental-rpms
>>
>> [ceph-vm-mon1][DEBUG ] Cleaning up everything
>>
>> [ceph-vm-mon1][DEBUG ] Cleaning up list of fastest mirrors
>>
>> [ceph-vm-mon1][INFO  ] Running command: sudo yum -y install epel-release
>>
>> [ceph-vm-mon1][DEBUG ] Loaded plugins: fastestmirror, priorities
>>
>> [ceph-vm-mon1][DEBUG ] Determining fastest mirrors
>>
>> [ceph-vm-mon1][DEBUG ]  * epel: kdeforge2.unl.edu
>>
>> [ceph-vm-mon1][DEBUG ]  * rhel-7-ha-rpms:
>> rhel-repo.eu-biere-1.t-systems.cloud.cisco.com
>>
>> [ceph-vm-mon1][DEBUG ]  * rhel-7-optional-rpms:
>> rhel-repo.eu-biere-1.t-systems.cloud.cisco.com
>>
>> [ceph-vm-mon1][DEBUG ]  * rhel-7-server-rpms:
>> rhel-repo.eu-biere-1.t-systems.cloud.cisco.com
>>
>> [ceph-vm-mon1][DEBUG ]  * rhel-7-supplemental-rpms:
>> rhel-repo.eu-biere-1.t-systems.cloud.cisco.com
>>
>> [ceph-vm-mon1][DEBUG ] Package epel-release-7-5.noarch already installed
>> and latest version
>>
>> [ceph-vm-mon1][DEBUG ] Nothing to do
>>
>> [ceph-vm-mon1][INFO  ] Running command: sudo yum -y install
>> yum-plugin-priorities
>>
>> [ceph-vm-mon1][DEBUG ] Loaded plugins: fastestmirror, priorities
>>
>> [ceph-vm-mon1][DEBUG ] Loading mirror speeds from cached hostfile
>>
>> [ceph-vm-mon1][DEBUG ]  * epel: kdeforge2.unl.edu
>>
>> [ceph-vm-mon1][DEBUG

Re: [ceph-users] Potential OSD deadlock?

2015-10-06 Thread Ken Dreyer
On Tue, Oct 6, 2015 at 8:38 AM, Sage Weil  wrote:
> Oh.. I bet you didn't upgrade the osds to 0.94.4 (or latest hammer build)
> first.  They won't be allowed to boot until that happens... all upgrades
> must stop at 0.94.4 first.

This sounds pretty crucial. is there Redmine ticket(s)?

- Ken


Re: [ceph-users] Can not download from http://ceph.com/packages/ceph-extras/rpm/centos6.3/

2015-10-02 Thread Ken Dreyer
When we re-arranged the download structure for packages and moved
everything to download.ceph.com, we did not carry ceph-extras over.

The reason is that the packages there were unmaintained. The EL6 QEMU
binaries were vulnerable to VENOM (CVE-2015-3456) and maybe other
CVEs, and no users should rely on them any more.

We've removed all references to ceph-extras from our test framework
upstream (eg https://github.com/ceph/ceph-cm-ansible/pull/137) and I
recommend that everyone else do the same.

If you need QEMU with RBD support on CentOS, I recommend that you
upgrade from CentOS 6 to CentOS 7.1+. Red Hat's QEMU package in RHEL
7.1 is built with librbd support.

- Ken


On Thu, Oct 1, 2015 at 8:01 PM, MinhTien MinhTien
 wrote:
> Dear all,
>
> I can't download from http://ceph.com/packages/ceph-extras/rpm/centos6.3/
>
> Please fix!
>
>
>
> 
> Thank you!
>


Re: [ceph-users] rbd/rados packages in python virtual environment

2015-10-02 Thread Ken Dreyer
On Thu, Oct 1, 2015 at 9:32 PM, shiva rkreddy  wrote:
> Hi,
> Any one has tried installing python-rbd and python-rados packages in python
> virtual environment?
> We are planning to have OpenStack services (cinder/glance) run in the virtual
> environment. There are no pip-installable packages available for python-rbd and
> python-rados, at least on pypi.python.org.
>
> Alternate is to copy the files manually or make own package.

This occasionally comes up in the context of openstack.

There is a Redmine ticket for it, at http://tracker.ceph.com/issues/5900
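One stopgap that works until pip packages exist (a sketch, not an official workflow): create the virtualenv with access to the system site-packages, so the distro-installed C-extension bindings (python-rados, python-rbd) stay importable inside it. Shown here with Python 3's stdlib `venv`; the `virtualenv` tool of that era accepts the same `--system-site-packages` flag.

```shell
# Create a venv that can still see system-wide packages such as python-rados.
venv_dir="$(mktemp -d)/ceph-venv"
python3 -m venv --system-site-packages "$venv_dir"
# The generated config records the flag:
grep include-system-site-packages "$venv_dir/pyvenv.cfg"
# -> include-system-site-packages = true
```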

- Ken


Re: [ceph-users] Important security noticed regarding release signing key

2015-09-22 Thread Ken Dreyer
Hi Songbo, it's been removed from Ansible now:
https://github.com/ceph/ceph-cm-ansible/pull/137

- Ken

On Tue, Sep 22, 2015 at 8:33 PM, wangsongbo  wrote:
> Hi Ken,
> Thanks for your reply. But in the ceph-cm-ansible project scheduled by
> teuthology, "ceph.com/packages/ceph-extras" is in used now, such as
> qemu-kvm-0.12.1.2-2.415.el6.3ceph, qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph
> etc.
> Any new releases will be provided ?
>
>
> On 15/9/22 下午10:24, Ken Dreyer wrote:
>>
>> On Tue, Sep 22, 2015 at 2:38 AM, Songbo Wang  wrote:
>>>
>>> Hi, all,
>>>  Since last week's attack, “ceph.com/packages/ceph-extras”
>>> can no longer be opened; where can I get the ceph-extras releases now?
>>>
>>> Thanks and Regards,
>>> WangSongbo
>>>
>> The packages in "ceph-extras" were old and subject to CVEs (the big
>> one being VENOM, CVE-2015-3456). So I don't intend to host ceph-extras
>> in the new location.
>>
>> - Ken
>
>


Re: [ceph-users] Important security noticed regarding release signing key

2015-09-22 Thread Ken Dreyer
On Tue, Sep 22, 2015 at 2:38 AM, Songbo Wang  wrote:
> Hi, all,
> Since last week's attack, “ceph.com/packages/ceph-extras” can no
> longer be opened; where can I get the ceph-extras releases now?
>
> Thanks and Regards,
> WangSongbo
>

The packages in "ceph-extras" were old and subject to CVEs (the big
one being VENOM, CVE-2015-3456). So I don't intend to host ceph-extras
in the new location.

- Ken


Re: [ceph-users] debian repositories path change?

2015-09-21 Thread Ken Dreyer
On Sat, Sep 19, 2015 at 7:54 PM, Lindsay Mathieson
 wrote:
> I'm getting:
>
>   W: GPG error: http://download.ceph.com wheezy Release: The following
> signatures couldn't be verified because the public key is not available:
> NO_PUBKEY E84AC2C0460F3994
>
>
> Trying to update from there

Hi Lindsay, did you add the new release key?

As described at
http://ceph.com/releases/important-security-notice-regarding-signing-key-and-binary-downloads-of-ceph/

  sudo apt-key del 17ED316D
  curl https://git.ceph.com/release.asc | sudo apt-key add -
  sudo apt-get update

- Ken


Re: [ceph-users] debian repositories path change?

2015-09-18 Thread Ken Dreyer
On Fri, Sep 18, 2015 at 9:28 AM, Sage Weil  wrote:
> On Fri, 18 Sep 2015, Alfredo Deza wrote:
>> The new locations are in:
>>
>>
>> http://packages.ceph.com/
>>
>> For debian this would be:
>>
>> http://packages.ceph.com/debian-{release}
>
> Make that download.ceph.com .. the packages url was temporary while we got
> the new site ready and will go away shortly!
>
> (Also, HTTPS is enabled now.)


To avoid confusion here, I've deleted packages.ceph.com from DNS
today, and the change will propagate soon.

Please use download.ceph.com (it's the same IP address and server,
173.236.248.54)

- Ken


Re: [ceph-users] Ruby bindings for Librados

2015-07-23 Thread Ken Dreyer


- Original Message -
> From: "Ken Dreyer" 
> To: ceph-users@lists.ceph.com
> Sent: Tuesday, July 14, 2015 9:06:01 PM
> Subject: Re: [ceph-users] Ruby bindings for Librados
> 
> On 07/13/2015 02:11 PM, Wido den Hollander wrote:
> > On 07/13/2015 09:43 PM, Corin Langosch wrote:
> >> Hi Wido,
> >>
> >> I'm the dev of https://github.com/netskin/ceph-ruby and still use it in
> >> production on some systems. It has everything I
> >> need so I didn't develop any further. If you find any bugs or need new
> >> features, just open an issue and I'm happy to
> >> have a look.
> >>
> > 
> > Ah, that's great! We should look into making a Ruby binding "official"
> > and moving it to Ceph's Github project. That would make it more clear
> > for end-users.
> > 
> > I see that RADOS namespaces are currently not implemented in the Ruby
> > bindings. Not many bindings have them though. Might be worth looking at.
> > 
> > I'll give the current bindings a try btw!
> 
> I'd like to see this happen too. Corin, would you be amenable to moving
> this under the "ceph" GitHub org? You'd still have control over it,
> similar to the way Wido manages https://github.com/ceph/phprados
> 

After some off-list email with Wido and Corin, I've set up 
https://github.com/ceph/ceph-ruby and a "ceph-ruby" GitHub team with Corin as 
the admin (similar to Wido's admin rights to phprados).

Have fun!

- Ken


Re: [ceph-users] SSL for tracker.ceph.com

2015-07-14 Thread Ken Dreyer
On 07/14/2015 04:14 PM, Wido den Hollander wrote:
> Hi,
> 
> Curently tracker.ceph.com doesn't have SSL enabled.
> 
> Every time I log in I'm sending my password over plain text which I'd
> rather not.
> 
> Can we get SSL enabled on tracker.ceph.com?
> 
> And while we are at it, can we enable IPv6 as well? :)
> 

File a ... tracker ticket for it! :D

I'm not sure what is involved with getting IPv6 on the rest of our
servers, but we need to look into it. Particularly git.ceph.com.

- Ken


Re: [ceph-users] Ruby bindings for Librados

2015-07-14 Thread Ken Dreyer
On 07/13/2015 02:11 PM, Wido den Hollander wrote:
> On 07/13/2015 09:43 PM, Corin Langosch wrote:
>> Hi Wido,
>>
>> I'm the dev of https://github.com/netskin/ceph-ruby and still use it in 
>> production on some systems. It has everything I
>> need so I didn't develop any further. If you find any bugs or need new 
>> features, just open an issue and I'm happy to
>> have a look.
>>
> 
> Ah, that's great! We should look into making a Ruby binding "official"
> and moving it to Ceph's Github project. That would make it more clear
> for end-users.
> 
> I see that RADOS namespaces are currently not implemented in the Ruby
> bindings. Not many bindings have them though. Might be worth looking at.
> 
> I'll give the current bindings a try btw!

I'd like to see this happen too. Corin, would you be amenable to moving
this under the "ceph" GitHub org? You'd still have control over it,
similar to the way Wido manages https://github.com/ceph/phprados

- Ken




Re: [ceph-users] Redhat Storage Ceph Storage 1.3 released

2015-07-02 Thread Ken Dreyer
On 07/02/2015 12:16 AM, Stefan Priebe - Profihost AG wrote:
> Hi,
> Am 01.07.2015 um 23:35 schrieb Loic Dachary:
>> Hi,
>>
>> The details of the differences between the Hammer point releases and the 
>> RedHat Ceph Storage 1.3 can be listed as described at
>>
>> http://www.spinics.net/lists/ceph-devel/msg24489.html reconciliation between 
>> hammer and v0.94.1.2
>>
>> The same analysis should be done for 
>> https://github.com/ceph/ceph/releases/tag/v0.94.1.3 which presumably matches 
>> RedHat Ceph Storage 1.3.
> 
> can you clarify this? In the past the ceph inktank releases were exactly
> based on git tags. Is there now a "hidden" git repo for the ceph
> releases done by redhat? Or how can we understand this?
> 

- The git repo for RHCS is "hidden" in the sense that the authoritative
Git repository for the code in the RHCS product does reside behind our
corporate firewall, so users can't clone it directly.

- The patches themselves that we apply on top of upstream are not
"hidden", in the sense that you can download the SRPM from
http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RHCEPH/SRPMS/
and unpack to see what is in it yourself. Every binary package that Red
Hat publishes as part of the downstream RHCS product is represented by
those SRPMs. This will always be the case - Ceph is LGPL so Red Hat is
under obligation to publish the source for anything we distribute to
customers.

- Additionally, I have been pushing our changes to GitHub in the form of
"rhcs" branches. One of the reasons I do this is that it's a little
easier to coordinate with Loic and the rest of the developers when
dealing with Git directly. I can't promise that this is going to
continue forever if we switch to some other method of patch management
or something, but it just happens to be the most convenient option for
now :)

- Ken


Re: [ceph-users] Redhat Storage Ceph Storage 1.3 released

2015-07-01 Thread Ken Dreyer
On 07/01/2015 03:02 PM, Vickey Singh wrote:
> - What's the exact version number of the open source Ceph provided with
> this Product?

It is Hammer, specifically 0.94.1 with several critical bugfixes on top
as the product went through QE. All of the bugfixes have been proposed
or merged to Hammer upstream, IIRC, so the product has many of the
serious bug fixes that were in 0.94.2 or the upcoming 0.94.3.

> - The RHCS 1.3 features that are mentioned in the blog: will all of them be
> present in open source Ceph?

Yep! That blog post describes many of the changes from Firefly -> Hammer.

- Ken


Re: [ceph-users] Unexpected period of iowait, no obvious activity?

2015-06-23 Thread Ken Dreyer
On 06/23/2015 09:09 AM, Scottix wrote:
> Ya, Ubuntu has a process called mlocate which runs updatedb.
>
> We basically turn it off as shown
> here: http://askubuntu.com/questions/268130/can-i-disable-updatedb-mlocate
>
> If you still want it you could edit the settings /etc/updatedb.conf and
> add a prunepath to your ceph directory

The way I'm attempting to fix this "out of the box" is to get the change
into mlocate in the various distros, so there are tickets linked off
http://tracker.ceph.com/issues/7451 for Fedora, Ubuntu, Debian, etc.
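Until those distro changes land, the updatedb.conf workaround quoted above looks roughly like this (the Ceph path is an example; adjust it to your OSD data directory):

```
# /etc/updatedb.conf -- add the Ceph data directory to PRUNEPATHS so the
# nightly updatedb scan skips it:
PRUNEPATHS = "/tmp /var/spool /media /var/lib/ceph"
```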

- Ken


Re: [ceph-users] CEPH on RHEL 7.1

2015-06-10 Thread Ken Dreyer
- Original Message -
> From: "Varada Kari" 
> To: "ceph-devel" 
> Cc: "ceph-users" 
> Sent: Wednesday, June 10, 2015 3:33:08 AM
> Subject: [ceph-users] CEPH on RHEL 7.1
> 
> Hi,
> 
> We are trying to build CEPH on RHEL7.1. But facing some issues with the build
> with "Giant" branch.
> Enabled the redhat server rpms and redhat ceph storage rpm channels along
> with optional, extras and supplementary. But we are not able to find
> gperftools, leveldb and yasm rpms in the channels.
> Did anyone try building ceph in RHEL 7.1? Do I need to add any channels to
> get these packages?

The leveldb-devel, gperftools-devel, and yasm packages in EPEL should allow you 
to build Ceph. The ceph.com builds use the EPEL packages as dependencies.

- Ken


Re: [ceph-users] Ceph on RHEL7.0

2015-06-01 Thread Ken Dreyer
For the sake of providing more clarity regarding the Ceph kernel module
situation on RHEL 7.0, I've removed all the files at
https://github.com/ceph/ceph-kmod-rpm and updated the README there.

The summary is that if you want to use Ceph's RBD kernel module on RHEL
7, you should use RHEL 7.1 or later. And if you want to use the kernel
CephFS client on RHEL 7, you should use the latest upstream kernel
packages from ELRepo.

Hope that clarifies things from a RHEL 7 kernel perspective.

- Ken


On 05/28/2015 09:16 PM, Luke Kao wrote:
> Hi Bruce,
> RHEL7.0 kernel has many issues on filesystem sub modules and most of
> them fixed only in RHEL7.1.
> So you should consider to go to RHEL7.1 directly and upgrade to at least
> kernel 3.10.0-229.1.2
> 
> 
> BR,
> Luke
> 
> 
> *From:* ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of
> Bruce McFarland [bruce.mcfarl...@taec.toshiba.com]
> *Sent:* Friday, May 29, 2015 5:13 AM
> *To:* ceph-users@lists.ceph.com
> *Subject:* [ceph-users] Ceph on RHEL7.0
> 
> We’re planning on moving from Centos6.5 to RHEL7.0 for Ceph storage and
> monitor nodes. Are there any known issues using RHEL7.0?
> 
> Thanks
> 
> 
> 
> 
> 
> 



Re: [ceph-users] ceph_argparse packaging error in Hammer/debian?

2015-05-07 Thread Ken Dreyer
On 05/07/2015 12:53 PM, Andy Allan wrote:
> Hi Loic,
> 
> Sorry for the noise! I'd looked when I first ran into it and didn't
> find any reports or PRs, I should have checked again today.
> 
> Thanks,
> Andy

That's totally fine. If you want, you can review that PR and give a
thumbs up or down comment there :) More eyes on the Debian-related
changes are always a good thing.

- Ken


[ceph-users] systemd unit files and multiple daemons

2015-04-22 Thread Ken Dreyer
I could really use some eyes on the systemd change proposed here:
http://tracker.ceph.com/issues/11344

Specifically, on bullet #4 there, should we have a single
"ceph-mon.service" (implying that users should only run one monitor
daemon per server) or if we should support multiple "ceph-mon@" services
(implying that users will need to specify additional information when
starting the service(s)). The version in our tree is "ceph-mon@". James'
work for Ubuntu Vivid is only "ceph-mon" [2]. Same thing for ceph-mds vs
ceph-mds@.

I'd prefer to keep Ubuntu downstream the same as Ceph upstream.

What do we want to do for this?

How common is it to run multiple monitor daemons or mds daemons on a
single host?
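For anyone unfamiliar with the distinction: a templated unit like "ceph-mon@.service" carries an instance name after the "@", which systemd substitutes for "%i", so several monitors can run on one host (e.g. `systemctl start ceph-mon@a ceph-mon@b`). A minimal sketch of such a template, not the exact file in the tree:

```
[Unit]
Description=Ceph cluster monitor daemon

[Service]
ExecStart=/usr/bin/ceph-mon -f --cluster ceph --id %i

[Install]
WantedBy=multi-user.target
```

A plain "ceph-mon.service" would be the same file with a hard-coded id and no "%i", which is why it implies one monitor per server.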

- Ken


[1] https://github.com/ceph/ceph/tree/master/systemd
[2]
http://bazaar.launchpad.net/~ubuntu-branches/ubuntu/vivid/ceph/vivid/files/head:/debian/lib-systemd/


Re: [ceph-users] ceph-deploy : systemd unit files not deployed to a centos7 nodes

2015-04-17 Thread Ken Dreyer
As you've seen, a set of systemd unit files has been committed to git,
but the packages do not yet use them.

There is an open ticket for this task,
http://tracker.ceph.com/issues/11344 . Feel free to add yourself as a
watcher on that if you are interested in the progress.

- Ken

On 04/17/2015 06:22 AM, Alexandre DERUMIER wrote:
> Oh,
> 
> I didn't see that a sysvinit file was also deployed.
> 
> works fine with /etc/init.d/ceph  
> 
> 
> - Mail original -
> De: "aderumier" 
> À: "ceph-users" 
> Envoyé: Vendredi 17 Avril 2015 14:11:45
> Objet: [ceph-users] ceph-deploy : systemd unit files not deployed to a
> centos7 nodes
> 
> Hi, 
> 
> I'm currently trying to deploy a new ceph test cluster on centos7 (hammer) 
> 
> from ceph-deploy (on a debian wheezy). 
> 
> And it seems that the systemd unit files are not deployed. 
> 
> It seems that the ceph git repo has systemd unit files: 
> https://github.com/ceph/ceph/tree/hammer/systemd 
> 
> I don't have look inside the rpm package. 
> 
> 
> (This is my first install on centos, so I don't know if it works with 
> previous releases) 
> 
> 
> I have deployed with: 
> 
> ceph-deploy install --release hammer ceph1-{1,2,3} 
> ceph-deploy new ceph1-{1,2,3} 
> 
> 
> Is it normal ? 
> 



Re: [ceph-users] Purpose of the s3gw.fcgi script?

2015-04-14 Thread Ken Dreyer
On 04/13/2015 07:35 PM, Yehuda Sadeh-Weinraub wrote:
> 
> 
> - Original Message -
>> From: "Francois Lafont" 
>> To: ceph-users@lists.ceph.com
>> Sent: Monday, April 13, 2015 5:17:47 PM
>> Subject: Re: [ceph-users] Purpose of the s3gw.fcgi script?
>>
>> Hi,
>>
>> Yehuda Sadeh-Weinraub wrote:
>>
>>> You're not missing anything. The script was only needed when we used
>>> the process manager of the fastcgi module, but it has been very long
>>> since we stopped using it.
>>
>> Just to be sure, so if I understand well, these parts of the documentation:
>>
>> 1.
>> 
>> http://docs.ceph.com/docs/master/radosgw/config/#create-a-cgi-wrapper-script
>> 2.
>> 
>> http://docs.ceph.com/docs/master/radosgw/config/#adjust-cgi-wrapper-script-permission
>>
>> can be completely skipped. Is it correct?
>>
> 
> Yes.
> 
> Yehuda

I've filed http://tracker.ceph.com/issues/11396 to track the doc changes
needed. Looks like qa/qa_scripts/rgw_install_config.pl refers to this
file as well.

- Ken


Re: [ceph-users] CentOS 7.1: Upgrading (downgrading) from 0.80.9 to bundled rpms

2015-04-10 Thread Ken Dreyer
Hi Dan, Arne, KB,

I got a chance to look into this this afternoon. In a CentOS 7.1 VM
(that's not using EPEL), I found that ceph.com's 0.80.9 fails to install
due to an Epoch issue. I've opened a ticket for that:
http://tracker.ceph.com/issues/11371
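The Epoch issue comes down to RPM's ordering rules: an EVR with epoch 1 sorts newer than any epoch-0 EVR regardless of the version numbers, and a missing epoch counts as 0. A minimal sketch of that ordering for dotted numeric versions (a simplification for illustration, not the full rpmvercmp algorithm):

```python
def evr_key(evr):
    # RPM compares epoch before version; a missing epoch counts as 0.
    epoch, sep, version = evr.partition(":")
    if not sep:
        epoch, version = "0", evr
    return (int(epoch), tuple(int(p) for p in version.split(".")))

# EPEL shipped 1:0.80.7 while ceph.com shipped 0.80.9 (epoch 0):
newest = max(["0.80.9", "1:0.80.7"], key=evr_key)
print(newest)  # -> 1:0.80.7
```

This is why yum prefers EPEL's older-looking 1:0.80.7 build over upstream's 0.80.9.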

I think you're asking about the reverse, though - how do you downgrade
your server from upstream's 0.80.9 to RHEL's 0.80.7?

The problem with the distro-sync operation you've pasted below is that
there is no "ceph" RPM in Base RHEL. The solution is to "yum uninstall
ceph", then "yum distro-sync". Then you will have 0.80.7-2.el7 installed
on your system.

- Ken

On 04/10/2015 12:45 PM, Karan Singh wrote:
> Hi Dan
> 
> You could give a try to the fix mentioned here:
>  http://tracker.ceph.com/issues/11345
> 
> 
> 
> Karan Singh 
> Systems Specialist , Storage Platforms
> CSC - IT Center for Science,
> Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
> mobile: +358 503 812758
> tel. +358 9 4572001
> fax +358 9 4572302
> http://www.csc.fi/
> 
> 
>> On 10 Apr 2015, at 18:37, Irek Fasikhov > > wrote:
>>
>> I use Centos 7.1. The problem is that in the basic package repository
>> has "ceph-common".
>>
>> [root@ceph01p24 cluster]# yum --showduplicates list ceph-common
>> Loaded plugins: dellsysid, etckeeper, fastestmirror, priorities
>> Loading mirror speeds from cached hostfile
>>  * base: centos-mirror.rbc.ru 
>>  * epel: be.mirror.eurid.eu 
>>  * extras: ftp.funet.fi 
>>  * updates: centos-mirror.rbc.ru 
>> Installed Packages
>> ceph-common.x86_64    0.80.7-0.el7.centos    @Ceph
>> Available Packages
>> ceph-common.x86_64    0.80.6-0.el7.centos    Ceph
>> ceph-common.x86_64    0.80.7-0.el7.centos    Ceph
>> ceph-common.x86_64    0.80.8-0.el7.centos    Ceph
>> ceph-common.x86_64    0.80.9-0.el7.centos    Ceph
>> ceph-common.x86_64    1:0.80.7-0.4.el7       epel
>> ceph-common.x86_64    1:0.80.7-2.el7         base
>>
>> I make the installation as follows:
>>
>> rpm -ivh http://ceph.com/rpm-firefly/el7/noarch/ceph-release-1-0.el7.noarch.rpm
>> yum install redhat-lsb-core-4.1-27.el7.centos.1.x86_64
>> gperftools-libs.x86_64 yum-plugin-priorities.noarch ntp -y
>> yum install librbd1-0.80.7-0.el7.centos
>> librados2-0.80.7-0.el7.centos.x86_64.rpm -y
>> yum install gdisk cryptsetup leveldb python-jinja2 hdparm -y
>>
>> yum install --disablerepo=base --disablerepo=epel
>> ceph-common-0.80.7-0.el7.centos.x86_64 -y
>> yum install --disablerepo=base --disablerepo=epel
>> ceph-0.80.7-0.el7.centos -y
>>
>> 2015-04-10 17:57 GMT+03:00 Dan van der Ster > >:
>>
>> Hi Ken,
>>
>> Do you happen to know how to upgrade a CentOS 7.1 Ceph client that is
>> today using 0.80.9 from ceph.com  [1] to the
>> suite of ceph rpms which
>> are now bundled with 7.1 ? We're getting distrosync errors like this:
>>
>> # yum distro-sync --skip-broken
>> Loaded plugins: changelog, fastestmirror, kernel-module, priorities,
>> rpm-warm-cache, tsflags, versionlock
>> Loading mirror speeds from cached hostfile
>> 78 packages excluded due to reposi

Re: [ceph-users] Firefly - Giant : CentOS 7 : install failed ceph-deploy

2015-04-09 Thread Ken Dreyer
On 04/08/2015 03:00 PM, Travis Rhoden wrote:
> Hi Vickey,
> 
> The easiest way I know of to get around this right now is to add the
> following line in section for epel in /etc/yum.repos.d/epel.repo
> 
> exclude=python-rados python-rbd
> 
> So this is what my epel.repo file looks like: http://fpaste.org/208681/
> 
> It is those two packages in EPEL that are causing problems.  I also
> tried enabling epel-testing, but that didn't work either.

My wild guess is that enabling epel-testing is not enough, because the
offending 0.80.7-0.4.el7 build in the stable EPEL repository is still
visible to yum.

When you set that "exclude=" parameter in /etc/yum.repos.d/epel.repo,
like "exclude=python-rados python-rbd python-cephfs", *and* also try
"--enablerepo=epel-testing", does it work?

- Ken


Re: [ceph-users] Firefly - Giant : CentOS 7 : install failed ceph-deploy

2015-04-07 Thread Ken Dreyer
Hi Vickey,

Sorry about the issues you've been seeing. This looks very similar to
http://tracker.ceph.com/issues/11104 .

Here are two options you can try in order to work around this:

- If you must run Firefly (0.80.x) or Giant (0.87.x), please try
enabling the "epel-testing" repository on your system prior to
installing Ceph. There is an update in epel-testing that should help
with this issue.
https://admin.fedoraproject.org/updates/FEDORA-EPEL-2015-1607/ceph-0.80.7-0.5.el7

- If you can run Hammer (0.94), please try testing that out. The Hammer
release's packages have been split up to match the split that happened
in EPEL.

- Ken

On 04/07/2015 04:09 PM, Vickey Singh wrote:
> Hello There
> 
> I am trying to install Giant on CentOS7 using ceph-deploy and
> encountered below problem.
> 
> [rgw-node1][DEBUG ] Package python-ceph is obsoleted by python-rados,
> but obsoleting package does not provide for requirements
> [rgw-node1][DEBUG ] ---> Package cups-libs.x86_64 1:1.6.3-17.el7 will be
> installed
> [rgw-node1][DEBUG ] --> Finished Dependency Resolution
> [rgw-node1][DEBUG ]  You could try using --skip-broken to work around
> the problem
> [rgw-node1][WARNIN] Error: Package:
> 1:ceph-common-0.87.1-0.el7.centos.x86_64 (Ceph)
> [rgw-node1][WARNIN]Requires: python-ceph = 1:0.87.1-0.el7.centos
> [rgw-node1][WARNIN]Available:
> 1:python-ceph-0.86-0.el7.centos.x86_64 (Ceph)
> [rgw-node1][WARNIN]python-ceph = 1:0.86-0.el7.centos
> [rgw-node1][WARNIN]Available:
> 1:python-ceph-0.87-0.el7.centos.x86_64 (Ceph)
> [rgw-node1][WARNIN]python-ceph = 1:0.87-0.el7.centos
> [rgw-node1][WARNIN]Available:
> 1:python-ceph-0.87.1-0.el7.centos.x86_64 (Ceph)
> [rgw-node1][WARNIN]python-ceph = 1:0.87.1-0.el7.centos
> [rgw-node1][WARNIN] Error: Package: 1:ceph-0.87.1-0.el7.centos.x86_64 (Ceph)
> [rgw-node1][WARNIN]Requires: python-ceph = 1:0.87.1-0.el7.centos
> [rgw-node1][WARNIN]Available:
> 1:python-ceph-0.86-0.el7.centos.x86_64 (Ceph)
> [rgw-node1][WARNIN]python-ceph = 1:0.86-0.el7.centos
> [rgw-node1][WARNIN]Available:
> 1:python-ceph-0.87-0.el7.centos.x86_64 (Ceph)
> [rgw-node1][WARNIN]python-ceph = 1:0.87-0.el7.centos
> [rgw-node1][WARNIN]Available:
> 1:python-ceph-0.87.1-0.el7.centos.x86_64 (Ceph)
> [rgw-node1][WARNIN]python-ceph = 1:0.87.1-0.el7.centos
> [rgw-node1][DEBUG ]  You could try running: rpm -Va --nofiles --nodigest
> [rgw-node1][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y
> install ceph
> 
> [root@ceph-node1 ceph]#
> [root@ceph-node1 ceph]#
> [root@ceph-node1 ceph]#
> [root@ceph-node1 ceph]# ceph-deploy --version
> 1.5.22
> [root@ceph-node1 ceph]#
> [root@ceph-node1 ceph]# ceph -v
> ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)
> [root@ceph-node1 ceph]#
> 
> 
> On rgw-node1 macine 
> 
> /etc/yum.repos.d/ceph.repo seems to be correct 
> 
> [root@rgw-node1 yum.repos.d]# cat ceph.repo
> [Ceph]
> name=Ceph packages for $basearch
> baseurl=http://ceph.com/rpm-giant/el7/$basearch
> enabled=1
> gpgcheck=1
> type=rpm-md
> gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> priority=1
> 
> [Ceph-noarch]
> name=Ceph noarch packages
> baseurl=http://ceph.com/rpm-giant/el7/noarch
> enabled=1
> gpgcheck=1
> type=rpm-md
> gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> priority=1
> 
> [ceph-source]
> name=Ceph source packages
> baseurl=http://ceph.com/rpm-giant/el7/SRPMS
> enabled=1
> gpgcheck=1
> type=rpm-md
> gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
> priority=1
> 
> 
> When i visit this directory http://ceph.com/rpm-giant/el7 , i can see
> multiple versions of python-ceph i.e. 
> python-ceph-0.86-0.el7.centos.x86_64
> python-ceph-0.87-0.el7.centos.x86_64
> python-ceph-0.87-1.el7.centos.x86_64
> 
> *This is the reason yum is getting confused and installing the latest
> available version, python-ceph-0.87-1.el7.centos.x86_64. This issue looks
> like a yum priority plugin and RPM obsoletes problem.*
> 
> http://tracker.ceph.com/issues/10476
> 
> [root@rgw-node1 yum.repos.d]# cat /etc/yum/pluginconf.d/priorities.conf
> [main]
> enabled = 1
> check_obsoletes = 1
> 
> [root@rgw-node1 yum.repos.d]#
> 
> [root@rgw-node1 yum.repos.d]#
> [root@rgw-node1 yum.repos.d]# uname -r
> 3.10.0-229.1.2.el7.x86_64
> [root@rgw-node1 yum.repos.d]# cat /etc/redhat-release
> CentOS Linux release 7.1.1503 (Core)
> [root@rgw-node1 yum.repos.d]#
> 
> 
> However it worked *fine 1 week back* on CentOS 7.0
> 
> [root@ceph-node1 ceph]# uname -r
> 3.10.0-123.20.1.el7.x86_64
> [root@ceph-node1 ceph]# cat /etc/redhat-release
> CentOS Linux release 7.0.1406 (Core)
> [root@ceph-node1 ceph]#
> 
> 
> Any fix to this is highly appreciated. 
> 
> Regards
> VS
> 
> 
> 

Re: [ceph-users] Install problems GIANT on RHEL7

2015-04-06 Thread Ken Dreyer
On 04/04/2015 02:49 PM, Don Doerner wrote:
> Key problem resolved by actually installing (as opposed to simply
> configuring) the EPEL repo.  And with that, the cluster became viable. 
> Thanks all.

Hi Don,

I'm not sure I understand what you did to fix this. Can you share more
information about the steps you took to fix this?

- Ken



Re: [ceph-users] Where is the systemd files?

2015-03-30 Thread Ken Dreyer
The systemd service unit files were imported into the tree, but they
have not been added into any upstream packaging yet. See the discussion
at https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=769593 or "git log
-- systemd". I don't think there are any upstream tickets in Redmine for
this yet.

Since Hammer is very close to being released, the service unit files
will not be available in the Hammer packages. The earliest we would ship
them would be the Infernalis release series.

I've recently added a "_with_systemd" conditional to the RPM spec
(ceph.spec.in) in master in order to support socket directory creation
using tmpfiles.d. That same "_with_systemd" logic could be extended to
ship the service unit files on the relevant RPM-based platforms and ship
SysV-init scripts on the older platforms (eg RHEL 6).
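The tmpfiles.d mechanism referred to above is a one-line config; a sketch of what such a snippet looks like (installed as e.g. /usr/lib/tmpfiles.d/ceph.conf, it recreates the socket directory at boot, since /run is a tmpfs):

```
d /run/ceph 0770 ceph ceph -
```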

I'm not quite sure how we ought to handle that on Debian-based packages.
Is there a way to conditionalize the Debian packaging to "use systemd on
some versions of the distro, and use upstart on other versions" ?

- Ken

On 03/26/2015 11:13 PM, Robert LeBlanc wrote:
> I understand that Giant should have systemd service files, but I don't
> see them in the CentOS 7 packages.
> 
> https://github.com/ceph/ceph/tree/giant/systemd
> 
> [ulhglive-root@mon1 systemd]# rpm -qa | grep --color=always ceph
> ceph-common-0.93-0.el7.centos.x86_64
> python-cephfs-0.93-0.el7.centos.x86_64
> libcephfs1-0.93-0.el7.centos.x86_64
> ceph-0.93-0.el7.centos.x86_64
> ceph-deploy-1.5.22-0.noarch
> [ulhglive-root@mon1 systemd]# for i in $(rpm -qa | grep ceph); do rpm
> -ql $i | grep -i --color=always systemd; done
> [nothing returned]
> 
> Thanks,
> Robert LeBlanc



Re: [ceph-users] v0.80.8 and librbd performance

2015-03-03 Thread Ken Dreyer
On 03/03/2015 04:19 PM, Sage Weil wrote:
> Hi,
> 
> This is just a heads up that we've identified a performance regression in 
> v0.80.8 from previous firefly releases.  A v0.80.9 is working its way 
> through QA and should be out in a few days.  If you haven't upgraded yet 
> you may want to wait.
> 
> Thanks!
> sage

Hi Sage,

I've seen a couple Redmine tickets on this (eg
http://tracker.ceph.com/issues/9854 ,
http://tracker.ceph.com/issues/10956). It's not totally clear to me
which of the 70+ unreleased commits on the firefly branch fix this
librbd issue.  Is it only the three commits in
https://github.com/ceph/ceph/pull/3410 , or are there more?

- Ken


Re: [ceph-users] CentOS7 librbd1-devel problem.

2015-02-17 Thread Ken Dreyer
On 02/17/2015 01:07 AM, Leszek Master wrote:
> Hello all. I have to install qemu on one of my ceph nodes to test
> some things. I added a ceph-giant repository there and connected it to
> the ceph cluster. The problem is that I need to build qemu from source
> with rbd support, and there is no librbd1-devel in the ceph repository.
> Also in EPEL I have only librbd1-devel at version 0.80.7, while my installed
> ceph version is 0.87, so there is a dependency problem. How can I get
> it working properly? Where can I find librbd1-devel at version 0.87 that
> I can install on my CentOS 7?

The -devel RPMs were split up downstream in EPEL's 0.80.7 packages, but
this change has not yet been done in the upstream packaging. It's in
progress, at http://tracker.ceph.com/issues/10884

If you need RBD headers for the v0.87 release, you can install the
"ceph-devel-0.87" RPM.
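
As a sketch, the full sequence might look like the following (assumptions: the ceph.com giant repo is already configured on the node, "qemu-src" is a placeholder for your qemu source tree, and --enable-rbd is qemu's own configure flag, not something from the ceph packaging):

```shell
# Install the v0.87 (giant) RBD/RADOS development headers
yum install ceph-devel-0.87

# Build qemu from source with rbd support enabled
cd qemu-src
./configure --enable-rbd
make
```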

- Ken


Re: [ceph-users] Installation failure

2015-02-16 Thread Ken Dreyer
On 02/16/2015 08:44 AM, HEWLETT, Paul (Paul)** CTR ** wrote:
> Thanks for that Travis.  Much appreciated.
> 
> Paul Hewlett
> Senior Systems Engineer
> Velocix, Cambridge
> Alcatel-Lucent
> t: +44 1223 435893 m: +44 7985327353
> 
> 
> 
> 
> From: Travis Rhoden [trho...@gmail.com]
> Sent: 16 February 2015 15:35
> To: HEWLETT, Paul (Paul)** CTR **
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Installation failure
> 
> Hi Paul,
> 
> Looking a bit closer, I do believe it is the same issue.  It looks
> like python-rbd in EPEL (and others like python-rados) were updated in
> EPEL on January 21st, 2015.  This update included some changes to how
> dependencies were handled between EPEL and RHEL for Ceph.  See
> http://pkgs.fedoraproject.org/cgit/ceph.git/commit/?h=epel7
> 
> Fedora and EPEL both split out the older python-ceph package into
> smaller subsets (python-{rados,cephfs,rbd}), but these changes are not
> upstream yet (from the ceph.com hosted packages).  So if repos enable
> both ceph.com and EPEL, the EPEL packages will override the ceph.com
> packages because the RPMs have "obsoletes: python-ceph" in them, even
> though the EPEL packages are older.
> 
> It's a bit of a problematic transition period until the upstream
> packaging splits in the same way.  I do believe that using
> "check_obsoletes=1" in /etc/yum/pluginconf.d/priorities.conf will take
> care of the problem for you.  However, it may be the case that you
> would need to make your ceph .repo files that point to rpm-giant be
> "priority=1".
> 
> That's my best advice of something to try for now.
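
For reference, the workaround quoted here amounts to roughly the following yum configuration (a sketch only: the repo stanza and baseurl are illustrative, not copied from an actual install):

```ini
# /etc/yum/pluginconf.d/priorities.conf -- enable obsoletes checking
[main]
enabled = 1
check_obsoletes = 1

# Hypothetical /etc/yum.repos.d/ceph.repo entry pointing at ceph.com giant;
# priority=1 makes it win over EPEL if check_obsoletes alone is not enough
[ceph]
name=Ceph giant packages
baseurl=http://ceph.com/rpm-giant/el7/x86_64/
enabled=1
gpgcheck=1
priority=1
```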

Thanks Travis for the spot-on diagnosis.

We're tracking this breakage upstream at
http://tracker.ceph.com/issues/10893 , "Install failing on RHEL-family
machines (due to mixed package sources?)"

and downstream at https://bugzilla.redhat.com/1193182 , "ceph has
unversioned obsoletes"
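
To make the "unversioned obsoletes" failure mode concrete, here is a minimal sketch of RPM Obsoletes matching semantics (my own illustration, not yum's actual resolver code; the naive string comparison stands in for real RPM version comparison):

```python
# Sketch of why an unversioned "Obsoletes:" lets an *older* EPEL build
# replace a newer ceph.com package during dependency resolution.
def obsoletes_applies(obsoletes_name, obsoletes_version,
                      installed_name, installed_version):
    """Return True if the Obsoletes tag matches the installed package."""
    if obsoletes_name != installed_name:
        return False
    if obsoletes_version is None:
        # Unversioned Obsoletes matches ANY version of the named package,
        # even one newer than the obsoleting package itself.
        return True
    # Versioned Obsoletes only matches older installs (naive comparison
    # here; real RPM uses EVR comparison).
    return installed_version < obsoletes_version

# EPEL's python-rbd 0.80.7 carries "Obsoletes: python-ceph" with no
# version, so it displaces even the newer python-ceph 0.87 from ceph.com:
print(obsoletes_applies("python-ceph", None, "python-ceph", "0.87"))      # True
# A versioned "Obsoletes: python-ceph < 0.80.7" would not:
print(obsoletes_applies("python-ceph", "0.80.7", "python-ceph", "0.87"))  # False
```

With a versioned Obsoletes, the older EPEL build would no longer displace the newer ceph.com package, which is exactly what the downstream bug asks for.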

- Ken


Re: [ceph-users] Compilation problem

2015-02-09 Thread Ken Dreyer
On 02/09/2015 08:17 AM, Gregory Farnum wrote:
> I think there's ongoing work to backport (portions of?) Ceph to RHEL5,
> but it definitely doesn't build out of the box. Even beyond the
> library dependencies you've noticed you'll find more issues with e.g.
> the boost and gcc versions. :/

As far as I know, only librados v0.79 is under development so far. I was
curious about the location of the current work, and found that it's by
Rohan Mars, at https://github.com/droneware/ceph/tree/librados-aix-port

- Ken


Re: [ceph-users] RBD deprecated?

2015-02-05 Thread Ken Dreyer
On 02/05/2015 08:55 AM, Don Doerner wrote:
> I have been using Ceph to provide block devices for various, nefarious
> purposes (mostly testing ;-).  But as I have worked with various Linux
> distributions (RHEL7, CentOS6, CentOS7) and various Ceph releases
> (firefly, giant), I notice that the only combination for which I seem
> able to find the needed kernel modules (rbd, libceph) is RHEL7-firefly.

Hi Don,

The RBD kernel module is not deprecated; quite the opposite in fact.

A year ago things were a bit rough regarding supporting the Ceph kernel
modules on RHEL 6 and 7. All Ceph kernel module development goes
upstream first into Linus' kernel tree, and that tree is very different
than what ships in RHEL 6 (2.6.32 plus a lot of patches) and RHEL 7
(3.10.0 plus a lot of patches). This meant that it was historically much
harder for the Ceph developer community to integrate what was going on
upstream with what was happening in the downstream RHEL kernels.

Currently, Red Hat's plan is to ship rbd.ko and some of the associated
firefly userland bits in RHEL 7.1. You mention that you've been testing
on RHEL 7, so I'm guessing you've got a RHEL subscription. As it turns
out, you can try the new kernel package out today in the RHEL 7.1 Beta
that's available to all RHEL subscribers. It's a beta, so please open
support requests with Red Hat if you happen to hit bugs with those new
packages.

Unfortunately CentOS does not rebuild and publish the public RHEL Betas,
so for CentOS 7, you'll have to wait until RHEL 7.1 reaches GA and
CentOS 7.1 rebuilds it. (I suppose you could jump ahead of the CentOS
developers here and rebuild your own kernel package and ceph userland if
you're really eager... but you're really on your own there :)

- Ken


Re: [ceph-users] got "XmlParseFailure" when libs3 client accessing radosgw object gateway

2015-01-14 Thread Ken Dreyer
On 01/06/2015 02:21 AM, Liu, Xuezhao wrote:
> But when I using libs3 (clone from http://github.com/wido/libs3.git ),
> the s3 commander does not work as expected:

Hi Xuezhao,

Wido's fork of libs3 is pretty old and not up to date [1]. It's best to
use Bryan's repository instead: https://github.com/bji/libs3

- Ken

[1] https://github.com/wido/libs3/pull/3


Re: [ceph-users] Building Ceph

2015-01-05 Thread Ken Dreyer
On 01/05/2015 11:26 AM, Garg, Pankaj wrote:
> I’m trying to build Ceph on my RHEL (Scientific Linux 7 – Nitrogen),
> with 3.10.0.
> 
> I am using the configure script and I am now stuck on “libkeyutils not
> found”.
> 
> I can’t seem to find the right library for this. What Is the right yum
> update name for this library?


The package name is not exactly intuitive: it's keyutils-libs-devel

Some of the autoconf AC_CHECK_LIB functions fail with messages that are
slightly more helpful when you're trying to figure this stuff out. I've
altered the libkeyutils check to do the same:

https://github.com/ceph/ceph/pull/3293

(I'm not a Debian/Ubuntu expert yet, but I'm guessing the package name
is just "libkeyutils-dev", right? That's what's in ./debian/control,
anyway.)
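
In package-manager terms (names taken straight from the message above and from debian/control):

```shell
# RHEL / CentOS / Fedora: the header package is keyutils-libs-devel
yum install keyutils-libs-devel

# Debian / Ubuntu: the equivalent is libkeyutils-dev (per debian/control)
apt-get install libkeyutils-dev
```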

- Ken


Re: [ceph-users] ceph-deploy Errors - Fedora 21

2015-01-02 Thread Ken Dreyer
On 01/02/2015 12:38 PM, Travis Rhoden wrote:
> Hello,
> 
> I believe this is a problem specific to Fedora packaging.  The Fedora
> package for ceph-deploy is a bit different than the ones hosted at
> ceph.com.  Can you please tell me the output of "rpm
> -q python-remoto"?
> 
> I believe the problem is that the python-remoto package is too old, and
> there is not a correct dependency on it when it comes to versions.  The
> minimum version should be 0.0.22, but the latest in Fedora is 0.0.21
> (and latest upstream is 0.0.23).  I'll push to get this updated
> correctly.  The Fedora package maintainers will need to put out a new
> release of python-remoto, and hopefully update the spec file for
> ceph-deploy to require >= 0.0.22.

Thanks Travis for tracking this down!

Federico has granted me access to the Fedora Rawhide and F21 branches
today (thanks Federico!) in Fedora's package database [1].

I've built python-remoto 0.0.23 in Rawhide (Fedora 22) and Fedora 21 [2].

deeepdish, you can grab the Fedora 21 build directly from [3]
immediately if you wish.

If you'd rather wait for signed builds, you can wait a few days for the
Fedora infra admins to sign the package and push it out to the Fedora
mirrors [4]. When that's done, you can run "yum
--enablerepo=updates-testing update python-remoto", and yum will then
update your system to python-remoto-0.0.23-1.fc21 .

Either way, we'd really welcome your feedback and confirmation that this
does in fact fix your issue.

- Ken

[1] https://admin.fedoraproject.org/pkgdb/package/python-remoto/
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1146478
[3] http://koji.fedoraproject.org/koji/taskinfo?taskID=8516634
[4] https://admin.fedoraproject.org/updates/python-remoto-0.0.23-1.fc21



Re: [ceph-users] ceph-deploy Errors - Fedora 21

2015-01-02 Thread Ken Dreyer
On 12/29/2014 08:24 PM, deeepdish wrote:
> Hello.
> 
> I’m having an issue with ceph-deploy on Fedora 21.   
> 
> - Installed ceph-deploy via ‘yum install ceph-deploy'
> - created non-root user
> - assigned sudo privs as per documentation
> - http://ceph.com/docs/master/rados/deployment/preflight-checklist/
>  
> $ ceph-deploy install smg01.erbus.kupsta.net
>  
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /cephfs/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.20): /bin/ceph-deploy install
> [hostname]
> [ceph_deploy.install][DEBUG ] Installing stable version firefly on
> cluster ceph hosts [hostname]
> [ceph_deploy.install][DEBUG ] Detecting platform for host [hostname] ...
> [ceph_deploy][ERROR ] RuntimeError: connecting to
> host: [hostname] resulted in errors: TypeError __init__() got an
> unexpected keyword argument 'detect_sudo'

Hi deeepdish,

Sorry you're having issues with ceph-deploy. Would you mind filing a bug
at http://tracker.ceph.com/ so this doesn't get lost? There's a lot of
traffic on ceph-users and it's best if we have a ticket for this.

If you don't already have an account in our tracker, you can register
for a new account using the "Register" link in the upper-right corner.

Also, it would be useful to have a bit more information from you:

1) What is the host OS and version of smg01.erbus.kupsta.net ?
2) What is the content of your .cephdeploy.conf file?

- Ken

