CentOS 7 - the upgrade was done simply with "yum update -y ceph" on each node,
one by one, so the package order would have been determined by yum.
From: Jason Dillaman
Sent: Monday, June 6, 2016 10:42 PM
To: Adrian Saul
Hi,
I think you can specify the pool name in client settings.
for example, in our environment,
# rbd ls
rbd: error opening pool rbd: (2) No such file or directory
# rbd ls -p block
f7470c3f-e051-4f3d-86ff-52e8ba78ac4a
022e9944-122c-4ad0-b652-9e52ba32e2c0
Here the -p pool_name option was specified.
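(For reference - and this is an assumption on my part, not something stated above - the default pool that the rbd tool opens can also be set once in ceph.conf on the client, so -p is not needed every time. A minimal sketch:
    [client]
    rbd default pool = block
With that in place a plain "rbd ls" should list the "block" pool instead of failing on the missing "rbd" pool.)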
Hello,
On Mon, 6 Jun 2016 14:14:17 -0500 Brady Deetz wrote:
> This is an interesting idea that I hadn't yet considered testing.
>
> My test cluster is also looking like 2K per object.
>
> It looks like our hardware purchase for a one-half sized pilot is getting
> approved and I don't really
On 06/06/2016 18:41, Gregory Farnum wrote:
> We had several metadata caching improvements in ceph-fuse recently which I
> think went in after Infernalis. That could explain it.
Ok, in this case, it could be good news. ;)
I had doubts concerning my fio bench. I know that benchmarks can be tricky
Hey cephers,
So we have gone from not having a Ceph Tech Talk this month…to having
two! As a part of our regularly scheduled Ceph Tech Talk series, Lenz
Grimmer from OpenATTIC will be talking about the architecture of their
management/GUI solution, which will also include a live demo.
As a
Blargh, sounds like we need a better error message there, thanks!
-Sam
On Mon, Jun 6, 2016 at 12:16 PM, Tu Holmes wrote:
> It was a permission issue. While I followed the process and haven't changed
> any data. The "current map" files on each OSD were still listed as owner
>
It was a permission issue. While I followed the process and hadn't changed
any data, the "current map" files on each OSD were still listed as owned by
root, as they were created while the older ceph processes were still
running.
Changing that after the fact was still a necessity and I will make sure
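(A minimal sketch of the kind of ownership fix being described - assuming the default OSD data path and that the OSDs are stopped first; these are not necessarily the exact commands used:
    systemctl stop ceph-osd.target
    chown -R ceph:ceph /var/lib/ceph/osd
    systemctl start ceph-osd.target
The chown hands the on-disk files, including the "current" directories created while running as root, over to the ceph user that Jewel's daemons run as.)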
This is an interesting idea that I hadn't yet considered testing.
My test cluster is also looking like 2K per object.
It looks like our hardware purchase for a one-half sized pilot is getting
approved and I don't really want to modify it when we're this close to
moving forward. So, using spare
Oh, what was the problem (for posterity)?
-Sam
On Mon, Jun 6, 2016 at 12:11 PM, Tu Holmes wrote:
> It totally did and I see what the problem is.
>
> Thanks for your input. I truly appreciate it.
>
>
> On Mon, Jun 6, 2016 at 12:01 PM Samuel Just wrote:
>>
It totally did and I see what the problem is.
Thanks for your input. I truly appreciate it.
On Mon, Jun 6, 2016 at 12:01 PM Samuel Just wrote:
> If you reproduce with
>
> debug osd = 20
> debug filestore = 20
> debug ms = 1
>
> that might make it clearer what is going on.
>
If you reproduce with
debug osd = 20
debug filestore = 20
debug ms = 1
that might make it clearer what is going on.
-Sam
On Mon, Jun 6, 2016 at 11:53 AM, Tu Holmes wrote:
> Hey cephers. I have been following the upgrade documents and I have done
> everything regarding
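(As an aside - a sketch of one way those debug levels can be applied to a running OSD without a restart, assuming an admin keyring is available; the alternative is putting them under [osd] in ceph.conf and restarting the OSDs:
    ceph tell osd.* injectargs '--debug-osd 20 --debug-filestore 20 --debug-ms 1'
The increased logging then shows up in /var/log/ceph/ceph-osd.*.log.)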
Hey cephers. I have been following the upgrade documents and I have done
everything regarding upgrading the client to the latest version of Hammer,
then to Jewel.
I made sure that the owner of log partitions and all other items is the
ceph user and I've gone through the process as was described
Hi,
thank you for your suggestion.
Rsync will copy the whole file anew if the size is different.
Since we are talking about raw image files of virtual servers, rsync is not an option.
We need something which will copy just the deltas inside a file.
Something like lvmsync (which only works with
On Mon, Jun 6, 2016 at 11:43 AM, Oliver Dzombic wrote:
> Hi,
>
> we will have to copy all data
>
> from: hammer cephfs
>
> to: jewel cephfs
>
> and i would like to keep the resulting downtime low for the underlying
> services.
>
> So does anyone know a good way/tool to
Hi,
we will have to copy all data
from: hammer cephfs
to: jewel cephfs
and I would like to keep the resulting downtime low for the underlying
services.
So does anyone know a good way/tool to make a "pre" copy of the files
and then just copy the deltas since the last copy?
Thank you !
--
Mit
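(One common two-pass pattern for the "pre copy, then deltas" approach asked about above - a sketch only, with placeholder mount points, and assuming rsync's in-file delta transfer (--inplace --no-whole-file) is acceptable for the raw images:
    rsync -aHX --inplace --no-whole-file /mnt/cephfs-hammer/ /mnt/cephfs-jewel/
    rsync -aHX --inplace --no-whole-file --delete /mnt/cephfs-hammer/ /mnt/cephfs-jewel/
The first pass runs while the services are still up; the second pass runs during the short downtime window and only transfers what changed since the first pass.)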
On Mon, Jun 6, 2016 at 7:06 AM, Christian Balzer wrote:
>
> Hello,
>
> On Fri, 3 Jun 2016 15:43:11 +0100 David wrote:
>
> > I'm hoping to implement cephfs in production at some point this year so
> > I'd be interested to hear your progress on this.
> >
> > Have you considered SSD
We do the same thing. OSPF between ToR switches, BGP to all of the hosts
with each one advertising its own /32 (each has 2 NICs).
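(Purely for illustration - a minimal sketch of what advertising a host's own /32 could look like in a Quagga/FRR style bgpd configuration; the ASNs and addresses here are made up:
    ! uplinks to the two ToR switches; this host announces its service /32
    router bgp 65101
     neighbor 10.0.0.1 remote-as 65000
     neighbor 10.0.1.1 remote-as 65000
     network 192.0.2.11/32
The /32 would typically live on a loopback or dummy interface on the host.)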
On Mon, Jun 6, 2016 at 6:29 AM, Luis Periquito wrote:
> Nick,
>
> TL;DR: works brilliantly :)
>
> Where I work we have all of the ceph nodes
Separate OSPF areas would make this unnecessarily complex. In a world where
(some) routers are built to accommodate over half a million Internet prefixes,
your few hundred or few thousand /32s represent very little load to a modern
network element.
The number of links will
We had several metadata caching improvements in ceph-fuse recently which I
think went in after Infernalis. That could explain it.
-Greg
On Monday, June 6, 2016, Francois Lafont wrote:
> Hi,
>
> I have a little Ceph cluster in production with 5 cluster nodes and 2
> client
On 06/06/2016 03:26 PM, David Turner wrote:
> Best practices in general say to do them separate. If something
> doesn't work... Is it the new kernel, some package that are
> different on 16.04, Jewel, etc. The less things in that list the
> easier
There isn't a recommendation to use 3.16; it is our own experience with our
setup that forces us to use 3.16.
We upgraded from 3.13 to 3.19 on our storage nodes to avoid occasional XFS
kernel panics, but we found that we would randomly have load spikes that would
lock up the storage node and show
Hi Ken,
Thanks for your reply. The ceph cluster runs well.
:~$ sudo ceph -s
    cluster 285441d6-c059-405d-9762-86bd91f279d0
     health HEALTH_OK
     monmap e1: 1 mons at {strony-pc=10.132.138.233:6789/0}
            election epoch 9, quorum 0 strony-pc
     osdmap e200: 2 osds: 2 up, 2 in
Hi,
first I have one remark: you run both a "ceph-deploy mon create-initial" and a
"ceph-deploy mon create + ceph-deploy gatherkeys". Choose one or the other, not
both.
Then, I notice that you are zapping and deploying using drive /dev/sda, which is
usually the system disk. So next question
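(For reference, a sketch of one consistent ceph-deploy sequence - hostname and data disk are placeholders, and a non-system disk such as sdb is assumed:
    ceph-deploy new strony-pc
    ceph-deploy mon create-initial
    ceph-deploy disk zap strony-pc:sdb
    ceph-deploy osd prepare strony-pc:sdb
"mon create-initial" both deploys the monitor and gathers the keys, so no separate "mon create" or "gatherkeys" step is needed afterwards.)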
Hey cephers,
We’re about 2.5 weeks away from our next monthly Ceph Tech Talk, and I
need a volunteer for this month and next month. If you have a
ceph-related technical topic that you would like to share with the
community, please let me know and we’ll get you added to the list.
> -Original Message-
> From: Luis Periquito [mailto:periqu...@gmail.com]
> Sent: 06 June 2016 14:30
> To: Nick Fisk
> Cc: Ceph Users
> Subject: Re: [ceph-users] OSPF to the host
>
> Nick,
>
> TL;DR: works brilliantly :)
Excellent, just
Hi David,
> Am 06.06.2016 um 15:26 schrieb David Turner :
>
> Best practices in general say to do them separate. If something doesn't
> work... Is it the new kernel, some package that are different on 16.04,
> Jewel, etc. The less things in that list the easier
Nick,
TL;DR: works brilliantly :)
Where I work we have all of the ceph nodes (and a lot of other stuff) using
OSPF and BGP server attachment. With that we're able to implement solutions
like Anycast addresses, removing the need to add load balancers, for the
radosgw solution.
The biggest issues
Would it be beneficial for anyone to have an archive copy of an osd
that took more than 4 days to export? All but an hour of that time was
spent exporting 1 pg (that ended up being 197MB). I can even send
along the extracted pg for analysis...
--
Adam
On Fri, Jun 3, 2016 at 2:39 PM, Adam Tygart
Best practices in general say to do them separately. If something doesn't work...
is it the new kernel, some package that is different on 16.04, Jewel, etc.? The
fewer things in that list, the easier it is to track down the issue and fix it.
As far as order, hammer 0.94.5 wasn't built with 16.04 in
What OS are you using? It actually sounds like the plugins were
updated, the Infernalis OSD was reset, and then the Jewel OSD was
installed.
On Sun, Jun 5, 2016 at 10:42 PM, Adrian Saul
wrote:
>
> Thanks Jason.
>
> I don’t have anything specified explicitly for
Hi,
I have a little Ceph cluster in production with 5 cluster nodes and 2
client nodes. The clients are using cephfs via fuse.ceph. Recently, I
have upgraded my cluster from Infernalis to Jewel (servers _and_ clients).
When the cluster was on the Infernalis version, the fio command below gave
me
Hi All,
Has anybody had any experience with running the network routed down all the
way to the host?
I know the standard way most people configure their OSD nodes is to bond
the two NICs, which then talk via a VRRP gateway, and from then on the
networking is probably all Layer 3.
On Mon, Jun 6, 2016 at 12:23 PM, qisy wrote:
> Yan, Zheng:
>
> Thanks for your reply.
> But after changing to Jewel, the application reads/writes the disk slowly,
> which confirms the fio-tested IOPS.
Does your application use buffered IO or direct IO? direct-IO in
hammer actually is
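(The fio command referenced in this thread is not shown above. Purely as an illustration of the buffered vs direct IO distinction being asked about - every parameter below is an assumption:
    fio --name=randread --ioengine=libaio --direct=1 --rw=randread --bs=4k --iodepth=32 --size=1g --runtime=60 --filename=/mnt/cephfs/fio-test
Dropping --direct=1 (or setting --direct=0) makes fio go through the page cache, i.e. buffered IO.)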
hello,
Does the ceph cluster work right? Run ceph -s and ceph -w to see more
details.
2016-06-06 16:17 GMT+08:00 strony zhang :
> Hi,
>
> I am a new learner in ceph. Now I install an All-in-one ceph on the host
> A. Then I tried accessing the ceph from another host B
Hi,
I am new to ceph. I have installed an all-in-one ceph on host A.
Then I tried accessing the ceph cluster from another host B with librados and
librbd installed.
From host B, I run python to access the ceph on host A:
>>> import rados
>>> cluster1 =
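(The snippet above is cut off. A minimal sketch of the kind of client code involved - assuming host B has a readable /etc/ceph/ceph.conf and keyring for this cluster, and that a pool named "rbd" exists:
    import rados, rbd
    # connect using the local ceph.conf
    cluster1 = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster1.connect()
    print(cluster1.get_fsid())
    # open an IO context on a pool and list the RBD images in it
    ioctx = cluster1.open_ioctx('rbd')
    print(rbd.RBD().list(ioctx))
    ioctx.close()
    cluster1.shutdown()
If connect() hangs or fails, the usual suspects are the mon address in ceph.conf and the keyring permissions on host B.)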
Hello,
On Fri, 3 Jun 2016 15:43:11 +0100 David wrote:
> I'm hoping to implement cephfs in production at some point this year so
> I'd be interested to hear your progress on this.
>
> Have you considered SSD for your metadata pool? You wouldn't need loads
> of capacity although even with