Hi,
On 16.11.2016 19:01, Vincent Godin wrote:
> Hello,
>
> We now have a full cluster (Mon, OSD & Clients) in jewel 10.2.2
> (initial was hammer 0.94.5) but we still have some big problems in our
> production environment:
>
> * some ceph filesystems are not mounted at startup and we have to
>
Hello Pedro...
These are extremely generic questions and therefore hard to answer. Nick did
a good job of defining the risks.
In our case, we have been running a Ceph/CephFS system in production for over a
year, and before that, we spent a year trying to understand Ceph as well.
Ceph is incredibly
Hi,
Yes, I can't think of anything else at this stage. Could you maybe repost some
dump_historic_ops output now that you have turned off snapshots? I wonder if
it might reveal anything.
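For reference, a hedged example of pulling those from an OSD's admin socket
(the OSD id here is illustrative; pick one that is reporting blocked ops):

# ceph daemon osd.12 dump_historic_ops
# ceph daemon osd.12 dump_ops_in_flight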
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Thomas
Danan
Sent: 16
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Pedro Benites
> Sent: 16 November 2016 17:51
> To: ceph-users@lists.ceph.com
> Subject: [ceph-users] how possible is that ceph cluster crash
>
> Hi,
>
> I have a ceph cluster with 50 TB,
On Wed, Nov 16, 2016 at 5:13 AM, Jan Krcmar wrote:
> hi,
>
> I've found a problem/feature in pool snapshots
>
> when I delete an object from a pool which was previously snapshotted,
> I cannot list the object name in the snapshot anymore.
>
> steps to reproduce
>
> # ceph -v
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Vincent Godin
Sent: 16 November 2016 18:02
To: ceph-users
Subject: [ceph-users] Help needed ! cluster unstable after upgrade from Hammer
to Jewel
Hello,
We now have a full cluster (Mon,
On Wed, 16 Nov 2016, Yuri Weinstein wrote:
> Sage,
>
> We had discussed xenial support in sepia today and right now jobs
> asking for it from smithi and mira nodes will fail because there are
> no bare-metal machines provisioned for it.
>
> The question is - how do we split nodes between 14.04, 16.04
Sage,
We had discussed xenial support in sepia today and right now jobs
asking for it from smithi and mira nodes will fail because there are
no bare-metal machines provisioned for it.
The question is - how do we split nodes between 14.04, 16.04 and centos?
Thx
YuriW
On Mon, Nov 14, 2016 at 6:24 AM,
Hello,
We now have a full cluster (Mon, OSD & Clients) in jewel 10.2.2 (initial
was hammer 0.94.5) but we still have some big problems in our production
environment:
- some ceph filesystems are not mounted at startup and we have to mount
them with the "/bin/sh -c 'flock /var/lock/ceph-disk
Hi,
I have a 50 TB ceph cluster with 15 OSDs; it has been working fine for
a year and I would like to grow it and migrate all my old storage,
about 100 TB, to ceph, but I have a doubt. How likely is it that the
cluster fails and everything goes very bad? How reliable is ceph? What is
the risk
Hi Nick,
We have deleted all snapshots and observed the system for several hours.
From what I see, this did not help to reduce the blocked ops and the IO freezes
on the ceph client side.
We have also tried to increase the PG count a little (by 8, then by 128) because
this is something we should do, and we
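(For reference, a hedged sketch of the usual PG bump; the pool name and target
counts are illustrative, and pgp_num has to be raised to follow pg_num:)

# ceph osd pool set rbd pg_num 1024
# ceph osd pool set rbd pgp_num 1024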
On Fri, Nov 4, 2016 at 2:14 AM, 于 姜 wrote:
> ceph version 10.2.3
> ubuntu 14.04 server
> nfs-ganesha 2.4.1
> ntirpc 1.4.3
>
> cmake -DUSE_FSAL_RGW=ON ../src/
>
> -- Found rgw libraries: /usr/lib
> -- Could NOT find RGW: Found unsuitable version ".", but required is at
> least
Hello John.
I'm sorry for the lack of information in my first post.
The same version is in use for servers and clients.
About the workload, it varies.
On one server it's about *5 files created/written and then fully read per
second*.
On the other server it's about *5 to 6 times that number*, so
I'm sorry, by server, I meant cluster.
On one cluster the rate of files created and read is about 5 per second.
On another cluster it's from 25 to 30 files created and read per second.
On Wed, Nov 16, 2016 at 2:03 PM Webert de Souza Lima
wrote:
> Hello John.
>
> I'm sorry
On Wed, Nov 16, 2016 at 3:15 PM, Webert de Souza Lima
wrote:
> hi,
>
> I have many clusters running cephfs, and in the last 45 days or so, 2 of
> them started giving me the following message in ceph health:
> mds0: Client dc1-mx02-fe02:guest failing to respond to capability
hi,
I have many clusters running cephfs, and in the last 45 days or so, 2 of
them started giving me the following message in *ceph health:*
*mds0: Client dc1-mx02-fe02:guest failing to respond to capability release*
When this happens, cephfs stops responding. It will only come back
after I
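(A hedged aside for anyone chasing the same message: the client session that is
holding the caps can usually be identified through the MDS admin socket; the
daemon name "mds.a" below is illustrative:)

# ceph daemon mds.a session ls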
Hello,
now I did an
'apt-get autoremove ceph-common',
reinstalled hammer with
'apt-get install --reinstall ceph=0.94.9-1xenial'
and re-added the systemd unit files from backup after changing the
ExecStart entry to a command string without the --setuser and --setgroup
options
that hammer daemon binaries
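(For illustration, a hedged sketch of what that edited line could look like,
assuming the stock jewel ceph-osd@.service layout with its usual placeholders:)

ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i

i.e. the jewel line minus '--setuser ceph --setgroup ceph', which the hammer
binaries do not understand.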
hi,
I've found a problem/feature in pool snapshots:
when I delete an object from a pool which was previously snapshotted,
I cannot list the object name in the snapshot anymore.
steps to reproduce
# ceph -v
ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)
# rados -p test ls
stats
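The steps are truncated here; a hedged sketch of how the remainder presumably
goes, reusing the pool and object names from the fragment above (the snapshot
name is illustrative):

# rados -p test mksnap snap1
# rados -p test rm stats
# rados -p test -s snap1 ls

Per the report, that last listing no longer shows 'stats'.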
>>> Robert Sander wrote on Wednesday, 16 November 2016 at 10:23:
> On 16.11.2016 09:05, Steffen Weißgerber wrote:
>> Hello,
>>
Hello,
>> we started upgrading ubuntu on our ceph nodes to Xenial and noticed
>> that during the upgrade ceph was automatically
The snapshot works by using Copy On Write. If you dirty even a 4kb section of a
4MB object in the primary RBD, that entire 4MB object then needs to be read and
then written into the snapshot RBD.
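A rough worked example of the implied amplification (assuming the default 4MB
object size): a single 4kB random write that first touches an object costs a
4MB read plus a 4MB write for the snapshot copy, i.e. roughly 4096/4 = 1024x
the client I/O for that write, so a stream of small random writes over many
cold objects can multiply backend traffic by orders of magnitude until each
touched object has been copied once.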
From: Thomas Danan [mailto:thomas.da...@mycom-osi.com]
Sent: 16 November 2016 12:58
To: Thomas
Hi Nick,
Yes, our application is doing small random IO and I did not realize that the
snapshotting feature could degrade performance so much in that case.
We have just deactivated it and deleted all snapshots. Will notify you if it
drastically reduces the blocked ops and consequently the IO
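(A hedged illustration of that cleanup; the pool and image names are invented
here, and 'rbd snap purge' removes every snapshot of the given image:)

# rbd snap purge rbd/vm-disk-1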
On Wed, Nov 16, 2016 at 12:18 PM, Burkhard Linke
wrote:
> Hi,
>
>
> On 11/16/2016 11:17 AM, John Spray wrote:
>>
>> On Wed, Nov 16, 2016 at 1:16 AM, Patrick Donnelly
>> wrote:
>>>
>>> On Tue, Nov 15, 2016 at 8:40 AM, Hauke
Hi,
On 11/16/2016 11:17 AM, John Spray wrote:
On Wed, Nov 16, 2016 at 1:16 AM, Patrick Donnelly wrote:
On Tue, Nov 15, 2016 at 8:40 AM, Hauke Homburg wrote:
In the last weeks we enabled dir fragmentation for testing. The result
is that we
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Thomas
Danan
Sent: 15 November 2016 21:14
To: Peter Maloney
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] ceph cluster having blocked requests very frequently
Very
I didn't find documentation about how to do that on-premise, as opposed to on
AWS itself.
On Nov 16, 2016 13:20, "Haomai Wang" wrote:
> On Wed, Nov 16, 2016 at 7:19 PM, fridifree wrote:
> > Hi,
> > Thanks
> > This is for rados, not for s3 with nodejs
> > If someone
On Wed, Nov 16, 2016 at 7:19 PM, fridifree wrote:
> Hi,
> Thanks
> This is for rados, not for s3 with nodejs
> If someone can send examples of how to do that, I would appreciate it
oh, if you are referring to S3, you can get Node.js support from the AWS docs
>
> Thank you
>
>
> On Nov 16,
Hi,
Thanks
This is for rados, not for s3 with nodejs
If someone can send examples of how to do that, I would appreciate it
Thank you
On Nov 16, 2016 13:07, "Haomai Wang" wrote:
> https://www.npmjs.com/package/rados
>
> On Wed, Nov 16, 2016 at 6:29 PM, fridifree
https://www.npmjs.com/package/rados
On Wed, Nov 16, 2016 at 6:29 PM, fridifree wrote:
> Hi Everyone,
>
> Does someone know how to use Node.js with Ceph S3 (Radosgw)?
> I succeeded in doing that in Python using boto, but I can't find any examples of
> how to do this in Node.js.
> If someone can
Since you have the ceph packages installed, you could comment out the ceph line in sources.list.
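(A hedged illustration; the actual file and repository line on the node may
differ, this is just what a commented-out ceph.com entry in
/etc/apt/sources.list.d/ceph.list could look like:)

# deb http://ceph.com/debian-hammer/ xenial main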
Hi,
after doing 'apt-mark hold ceph' the upgrade failed.
It seems to be due to some kind of fetch failure:
...
OK http://archive.ubuntu.com trusty-backports/universe amd64 Packages
Fehl http://ceph.com xenial/main Translation-en
Hi Everyone,
Does someone know how to use Node.js with Ceph S3 (Radosgw)?
I succeeded in doing that in Python using boto, but I can't find any examples of
how to do this in Node.js.
If someone can share examples with me I would be happy.
Thanks
On Wed, Nov 16, 2016 at 8:55 AM, James Wilkins
wrote:
> Hello,
>
>
>
> Hoping to pick any users' brains in relation to production CephFS deployments
> as we’re preparing to deploy CephFS to replace Gluster for our container
> based storage needs.
>
>
>
> (Target OS is
On 16.11.2016 09:05, Steffen Weißgerber wrote:
> Hello,
>
> we started upgrading ubuntu on our ceph nodes to Xenial and noticed that during
> the upgrade ceph was automatically upgraded from hammer to jewel as well.
>
> Because we don't want to upgrade ceph and the OS at the same time we
>
Hi,
looks good.
Because I've made an image of the node's system disk I can revert to
the state before the upgrade and restart the whole process.
Thank you.
Steffen
>>> "钟佳佳" 16.11.2016 09:32 >>>
hi:
you can google apt-mark:
apt-mark hold PACKAGENAME
I assume you mean you only had 1 mon and it crashed, so effectively the iSCSI
suddenly went offline?
I suspect that you have somehow corrupted the NTFS volume; are there any errors
in the event log?
You may be able to use some disk recovery tools to try and fix the FS. Maybe
also try
hi:
you can google apt-mark:
apt-mark hold PACKAGENAME
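For example (package names assumed; adjust to what the node actually has
installed):

# apt-mark hold ceph ceph-common
# apt-mark showhold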
-- Original --
From: "Steffen Weißgerbe";
Date: Wed, Nov 16, 2016 04:05 PM
To: "CEPH list";
Subject: [ceph-users] hammer on xenial
Hello,
we started
Hello,
we started upgrading ubuntu on our ceph nodes to Xenial and noticed that during
the upgrade ceph was automatically upgraded from hammer to jewel as well.
Because we don't want to upgrade ceph and the OS at the same time, we uninstalled
the jewel ceph components and reactivated