Alexandre,
Based on discussion with them at Ceph Day in Tokyo, JP, they have frozen
their own fork of the Ceph repository,
and their own team has been optimizing the code to meet their requirements.
AFAICT they have not submitted any PRs.
Cheers,
Shinobu
----- Original Message -----
From: "Alexandre
Hi,
I was reading this presentation from SK telecom about flash optimisations
AFCeph: Ceph Performance Analysis & Improvement on Flash [Slides]
http://fr.slideshare.net/Inktank_Ceph/af-ceph-ceph-performance-analysis-and-improvement-on-flash
Byung-Su Park, SK Telecom
They seem to have made
Hello,
On Tue, 12 Apr 2016 09:56:32 -0400 (EDT) Sage Weil wrote:
> Hi all,
>
> I've posted a pull request that updates any mention of ext4 in the docs:
>
> https://github.com/ceph/ceph/pull/8556
>
> In particular, I would appreciate any feedback on
>
>
>
Hello,
On Tue, 12 Apr 2016 09:00:19 +0200 Michael Metz-Martini | SpeedPartner
GmbH wrote:
> Hi,
>
> On 11.04.2016 at 23:39, Sage Weil wrote:
> > ext4 has never been recommended, but we did test it. After Jewel is
> > out, we would like to explicitly recommend *against* ext4 and stop
> > testing
Hello,
On Tue, 12 Apr 2016 09:56:13 +0200 Udo Lembke wrote:
> Hi Sage,
Not Sage, but since he hasn't piped up yet...
> we run ext4 only on our 8-node cluster with 110 OSDs and are quite happy
> with ext4.
> We started with xfs but the latency was much higher compared to ext4...
>
Welcome to
Hello,
On Tue, 12 Apr 2016 09:46:55 +0100 (BST) Andrei Mikhailovsky wrote:
> I've done the ceph osd reweight-by-utilization and it seems to have
> solved the issue. However, not sure if this will be the long term
> solution.
>
No.
As I said in my reply, use "crush reweight" to permanently
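(A minimal sketch of the permanent fix - with a hypothetical osd.12 and a
~2TB drive; by convention the CRUSH weight is the disk size in TiB:

$ ceph osd crush reweight osd.12 1.82

Unlike 'ceph osd reweight', which only sets a temporary 0-1 override, the
CRUSH weight is stored in the CRUSH map and persists across out/in cycles.)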
I apologise, I probably should have dialed down a bit.
I'd like to personally apologise to Sage, who was so patient with my ranting.
To be clear: We are so lucky to have Ceph. It was something we sorely needed
and for the right price (free).
It was a dream come true for cloud providers - and
On Tue, Apr 12, 2016 at 1:33 PM, Jan Schermer wrote:
> Still the answer to most of your points from me is "but who needs that?"
> Who needs to have exactly the same data in two separate objects (replicas)?
> Ceph needs it because "consistency"?, but the app (VM filesystem) is
Thank you for the votes of confidence, everybody. :)
It would be good if we could keep this thread focused on who is harmed
by retiring ext4 as a tested configuration at what speed, and break
out other threads for other issues. (I'm about to do that for one of
them!)
-Greg
Hi Jan,
I can answer your question very quickly: We.
We need that!
We need and want a stable, self-healing, scalable, robust, reliable
storage system which can talk to our infrastructure in different languages.
I fully understand that people who are using an infrastructure,
which is
> On 12 Apr 2016 at 23:09, Nick Fisk wrote the
> following:
>
> Jan,
>
> I would like to echo Sage's response here. It seems you only want a subset
> of what Ceph offers, whereas RADOS is designed to offer a whole lot more,
> which requires a lot more intelligence
On 12/04/2016 22:33, Jan Schermer wrote:
> I don't think it's apples and oranges.
> If I export two files via losetup over iSCSI and make a raid1 swraid out of
> them in guest VM, I bet it will still be faster than ceph with bluestore.
> And yet it will provide the same guarantees and do the same
Jan,
I would like to echo Sage's response here. It seems you only want a subset
of what Ceph offers, whereas RADOS is designed to offer a whole lot more,
which requires a lot more intelligence at the lower levels.
I must say I have found your attitude to both Sage and the Ceph project as a
whole
On Tue, 12 Apr 2016, Jan Schermer wrote:
> Still the answer to most of your points from me is "but who needs that?"
> Who needs to have exactly the same data in two separate objects
> (replicas)? Ceph needs it because "consistency"?, but the app (VM
> filesystem) is fine with whatever version
Still the answer to most of your points from me is "but who needs that?"
Who needs to have exactly the same data in two separate objects (replicas)?
Ceph needs it because "consistency"?, but the app (VM filesystem) is fine with
whatever version because the flush didn't happen (if it did the
I thought that I had corrected that already, but apparently I was wrong.
The permissions set on the MDS for the user mounting the filesystem need to be
"rw". Mine were set to "r".
ceph auth caps client.cephfs mon 'allow r' mds 'allow rw' osd 'allow rwx
pool=cephfs_metadata,allow rwx pool=cephfs_data'
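(To double-check the caps after changing them - a quick sketch, assuming the
same client name as above:

$ ceph auth get client.cephfs

The mds cap must read 'allow rw', otherwise the mount stays read-only.)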
Okay, I'll bite.
On Tue, 12 Apr 2016, Jan Schermer wrote:
> > Local kernel file systems maintain their own internal consistency, but
> > they only provide what consistency promises the POSIX interface
> > does--which is almost nothing.
>
> ... which is exactly what everyone expects
> ... which
On Tue, Apr 12, 2016 at 12:20 PM, Nate Curry wrote:
> I am seeing an issue with cephfs where I am unable to write changes to the
> file system in any way. I am running commands using sudo with a user
> account as well as the root user itself to modify ownership of files,
The "out" OSD was "out" before the crash and doesn't hold any data as it
was weighted out prior.
Restarting the OSDs named as repeat offenders by 'ceph health
detail' has cleared the problems.
Thanks to all for the guidance and suffering my panic,
--
Eric
On 4/12/16 12:38 PM, Eric Hall
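(For the archive, the restart step sketched out - assuming Upstart on Ubuntu
and a hypothetical osd.12 taken from the 'ceph health detail' output:

$ ceph health detail
$ sudo restart ceph-osd id=12            # Upstart
$ sudo systemctl restart ceph-osd@12     # systemd equivalent

Repeat for each OSD named as a repeat offender.)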
On 12/04/2016 21:19, Jan Schermer wrote:
>
>> On 12 Apr 2016, at 20:00, Sage Weil wrote:
>>
>> On Tue, 12 Apr 2016, Jan Schermer wrote:
>>> I'd like to raise these points, then
>>>
>>> 1) some people (like me) will never ever use XFS if they have a choice
>>> given no choice,
On Tue, 12 Apr 2016, Jan Schermer wrote:
> I'd like to raise these points, then
>
> 1) some people (like me) will never ever use XFS if they have a choice
> given no choice, we will not use something that depends on XFS
Huh?
> 3) doesn't majority of Ceph users only care about RBD?
Well, half
I am seeing an issue with cephfs where I am unable to write changes to the
file system in any way. I am running commands using sudo with a user
account as well as the root user itself to modify ownership of files,
delete files, and create new files, and all I get is "Permission denied".
At first
> On 12 Apr 2016, at 20:00, Sage Weil wrote:
>
> On Tue, 12 Apr 2016, Jan Schermer wrote:
>> I'd like to raise these points, then
>>
>> 1) some people (like me) will never ever use XFS if they have a choice
>> given no choice, we will not use something that depends on XFS
>>
On Tue, 12 Apr 2016, Jan Schermer wrote:
> I'd like to raise these points, then
>
> 1) some people (like me) will never ever use XFS if they have a choice
> given no choice, we will not use something that depends on XFS
>
> 2) choice is always good
Okay!
> 3) doesn't majority of Ceph users
Hi,
looks like one of your OSDs has been marked as out. Just make sure it's in, so
you can read '67 osds: 67 up, 67 in' rather than '67 osds: 67 up, 66 in' in the
'ceph -s' output.
You can quickly check which one is not in with the 'ceph osd tree' command.
JC
> On Apr 12, 2016, at 11:21, Joao
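A short sketch of JC's suggestion, assuming a hypothetical osd.3 turns out to
be the one that is out:

$ ceph osd tree   # the odd one out shows 'down' or a reweight of 0
$ ceph osd in 3   # mark it back in
$ ceph -s         # should now read '67 osds: 67 up, 67 in'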
On 04/12/2016 07:16 PM, Eric Hall wrote:
Removed mon on mon1, added mon on mon1 via ceph-deploy. Mons now have
quorum.
I am left with:
cluster 5ee52b50-838e-44c4-be3c-fc596dc46f4e
health HEALTH_WARN 1086 pgs peering; 1086 pgs stuck inactive; 1086
pgs stuck unclean; pool vms has too
Removed mon on mon1, added mon on mon1 via ceph-deploy. Mons now have
quorum.
I am left with:
cluster 5ee52b50-838e-44c4-be3c-fc596dc46f4e
health HEALTH_WARN 1086 pgs peering; 1086 pgs stuck inactive; 1086
pgs stuck unclean; pool vms has too few pgs
monmap e5: 3 mons at
On 04/12/2016 06:38 PM, Eric Hall wrote:
Ok, mon2 and mon3 are happy together, but mon1 dies with
mon/MonitorDBStore.h: 287: FAILED assert(0 == "failed to write to db")
I take this to mean mon1:store.db is corrupt as I see no permission issues.
So... remove mon1 and add a mon?
Nothing special
Ok, mon2 and mon3 are happy together, but mon1 dies with
mon/MonitorDBStore.h: 287: FAILED assert(0 == "failed to write to db")
I take this to mean mon1:store.db is corrupt as I see no permission issues.
So... remove mon1 and add a mon?
Nothing special to worry about re-adding a mon on mon1,
On 04/12/2016 05:06 PM, Joao Eduardo Luis wrote:
On 04/12/2016 04:27 PM, Eric Hall wrote:
On 4/12/16 9:53 AM, Joao Eduardo Luis wrote:
So this looks like the monitors didn't remove version 1, but this may
just be a red herring.
What matters, really, is the values in 'first_committed' and
On 04/12/2016 04:27 PM, Eric Hall wrote:
On 4/12/16 9:53 AM, Joao Eduardo Luis wrote:
So this looks like the monitors didn't remove version 1, but this may
just be a red herring.
What matters, really, is the values in 'first_committed' and
'last_committed'. If either first or last_committed
On 4/12/16 9:53 AM, Joao Eduardo Luis wrote:
So this looks like the monitors didn't remove version 1, but this may
just be a red herring.
What matters, really, is the values in 'first_committed' and
'last_committed'. If either first or last_committed happens to be '1',
then there may be a bug
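One way to read those two values off a stopped mon, sketched with the default
store path (newer versions of the tool also want the backend type, e.g.
'leveldb', as a first argument):

$ ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon1/store.db get osdmap first_committed
$ ceph-kvstore-tool /var/lib/ceph/mon/ceph-mon1/store.db get osdmap last_committed

Stop the monitor first; the tool needs exclusive access to the store.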
On 04/12/2016 03:33 PM, Eric Hall wrote:
On 4/12/16 9:02 AM, Gregory Farnum wrote:
On Tue, Apr 12, 2016 at 4:41 AM, Eric Hall
wrote:
On 4/12/16 12:01 AM, Gregory Farnum wrote:
Exactly what values are you reading that's giving you those values?
The "real" OSDMap
Thank you so much Ilya!
This is exactly what I have searched for!!
-----Original Message-----
From: Ilya Dryomov
To: Mathias Buresch
Cc: ceph-us...@ceph.com
Subject: Re: [ceph-users] CephFS and Ubuntu Backport Kernel
On 4/12/16 9:02 AM, Gregory Farnum wrote:
On Tue, Apr 12, 2016 at 4:41 AM, Eric Hall wrote:
On 4/12/16 12:01 AM, Gregory Farnum wrote:
Exactly what values are you reading that's giving you those values?
The "real" OSDMap epoch is going to be at least 38630...if
Hi all,
I've posted a pull request that updates any mention of ext4 in the docs:
https://github.com/ceph/ceph/pull/8556
In particular, I would appreciate any feedback on
https://github.com/ceph/ceph/pull/8556/commits/49604303124a2b546e66d6e130ad4fa296602b01
both on substance
On Tue, Apr 12, 2016 at 4:08 PM, Mathias Buresch
wrote:
>
> Hi there,
>
> I have an issue with using Ceph and Ubuntu Backport Kernel newer than
> 3.19.0-43.
>
> Following is the setup I have:
>
> Ubuntu 14.04
> Kernel 3.19.0-43 (Backport Kernel)
> Ceph 0.94.6
>
> I am using
On Tue, Apr 12, 2016 at 3:08 PM, Mathias Buresch
wrote:
>
> Hi there,
>
> I have an issue with using Ceph and Ubuntu Backport Kernel newer than
> 3.19.0-43.
>
> Following is the setup I have:
>
> Ubuntu 14.04
> Kernel 3.19.0-43 (Backport Kernel)
> Ceph 0.94.6
>
> I am using
Hi there,
I have an issue with using Ceph and Ubuntu Backport Kernel newer than
3.19.0-43.
Following is the setup I have:
Ubuntu 14.04
Kernel 3.19.0-43 (Backport Kernel)
Ceph 0.94.6
I am using CephFS! The kernel 3.19.0-43 was the last working kernel.
Every newer kernel is failing and has a kernel
On Tue, Apr 12, 2016 at 4:41 AM, Eric Hall wrote:
> On 4/12/16 12:01 AM, Gregory Farnum wrote:
>>
>> On Mon, Apr 11, 2016 at 3:45 PM, Eric Hall
>> wrote:
>>>
>>> Power failure in data center has left 3 mons unable to start with
>>>
On 4/12/16 12:01 AM, Gregory Farnum wrote:
On Mon, Apr 11, 2016 at 3:45 PM, Eric Hall wrote:
Power failure in data center has left 3 mons unable to start with
mon/OSDMonitor.cc: 125: FAILED assert(version >= osdmap.epoch)
Have found a similar problem discussed at
On Tue, Apr 12, 2016 at 12:21 PM, Simon Ferber
wrote:
> On 12.04.2016 at 12:09, Florian Haas wrote:
>> On Tue, Apr 12, 2016 at 11:53 AM, Simon Ferber
>> wrote:
>>> Thank you! That's it. I have installed the Kernel from the Jessie
> On 12 April 2016 at 12:21, Florian Haas wrote:
>
>
> Hi everyone,
>
> I wonder what others think about the following suggestion: running an
> even number of mons almost never makes sense, and specifically two
> mons never does at all. Wouldn't it make sense to
On Tue, 12 Apr 2016 10:53:50 +0200 Alwin Antreich wrote:
>
> On 04/12/2016 01:48 AM, Christian Balzer wrote:
> > On Mon, 11 Apr 2016 09:25:35 -0400 (EDT) Jason Dillaman wrote:
> >
> > > In general, RBD "fancy" striping can help under certain workloads
> > > where small IO would normally be
On Tue, 12 Apr 2016 12:21:51 +0200 Simon Ferber wrote:
> On 12.04.2016 at 12:09, Florian Haas wrote:
> > On Tue, Apr 12, 2016 at 11:53 AM, Simon Ferber
> > wrote:
> >> Thank you! That's it. I have installed the Kernel from the Jessie
> >> backport. Now the
Hi everyone,
I wonder what others think about the following suggestion: running an
even number of mons almost never makes sense, and specifically two
mons never does at all. Wouldn't it make sense to just flag a
HEALTH_WARN state if the monmap contained an even number of mons, or
maybe only if
On 12.04.2016 at 12:09, Florian Haas wrote:
> On Tue, Apr 12, 2016 at 11:53 AM, Simon Ferber
> wrote:
>> Thank you! That's it. I have installed the Kernel from the Jessie
>> backport. Now the crashes are gone.
>> How often do these things happen? It would be a
On Tue, Apr 12, 2016 at 11:53 AM, Simon Ferber
wrote:
> Thank you! That's it. I have installed the Kernel from the Jessie
> backport. Now the crashes are gone.
> How often do these things happen? It would be a worst-case scenario if
> a system update breaks a
Thank you! That's it. I have installed the kernel from the Jessie
backport. Now the crashes are gone.
How often do these things happen? It would be a worst-case scenario if
a system update breaks a production system.
Best
Simon
On 11.04.2016 at 16:58, Ilya Dryomov wrote:
> On Mon, Apr 11, 2016
Hello!
On Tue, Apr 12, 2016 at 07:48:58AM +, Maxime.Guyot wrote:
> Hi Adrian,
> Looking at the documentation RadosGW has multi region support with the
> “federated gateways”
> (http://docs.ceph.com/docs/master/radosgw/federated-config/):
> "When you deploy a Ceph Object Store
On 04/12/2016 01:48 AM, Christian Balzer wrote:
> On Mon, 11 Apr 2016 09:25:35 -0400 (EDT) Jason Dillaman wrote:
>
> > In general, RBD "fancy" striping can help under certain workloads where
> > small IO would normally be hitting the same object (e.g. small
> > sequential IO).
> >
>
> While the
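For reference, a sketch of what the "fancy" striping being discussed looks
like at image creation time (pool/image names and values purely illustrative;
the defaults are stripe-unit = object size and stripe-count = 1):

$ rbd create testpool/striped-img --size 10240 --image-format 2 \
    --stripe-unit 65536 --stripe-count 16

Small sequential IO then fans out across 16 objects instead of queueing
against a single 4MB object at a time.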
Hi,
> However, creating a bucket using *s3cmd mb s3://buck* gives an error
> message:
> DEBUG: ConnMan.get(): creating new connection:
> http://buck.s3.amazonaws.com:7480
> ERROR: [Errno 110] Connection timed out
Can anyone suggest a path forward to check this further?
Not sure if all of these settings
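One guess, given that DEBUG line: s3cmd is resolving the bucket against
amazonaws.com instead of your gateway. A sketch of the relevant ~/.s3cfg
entries, with rgw.example.com standing in for your actual RGW host:

host_base = rgw.example.com:7480
host_bucket = %(bucket)s.rgw.example.com:7480

Note the bucket-style hostname also needs a wildcard DNS record pointing at
the gateway, and 'rgw dns name' set accordingly in ceph.conf.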
I've done the ceph osd reweight-by-utilization and it seems to have solved the
issue. However, not sure if this will be the long term solution.
Thanks for your help
Andrei
----- Original Message -----
> From: "Shinobu Kinjo"
> To: "Andrei Mikhailovsky"
Hello!
On Mon, Apr 11, 2016 at 05:39:37PM -0400, sage wrote:
> Hi,
> ext4 has never been recommended, but we did test it. After Jewel is out,
> we would like to explicitly recommend *against* ext4 and stop testing it.
1. Does filestore_xattr_use_omap fix issues with ext4? So, can I continue
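Not Sage either, but if I recall the start of this thread correctly, the ext4
problem is the short maximum object/xattr name length rather than something
filestore_xattr_use_omap can paper over; the workaround floated for those
staying on ext4 was to cap object name lengths in ceph.conf, roughly:

[osd]
osd max object name len = 256
osd max object namespace len = 64

That is only viable for RBD-style short object names; RGW workloads can blow
past it.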
Hi All,
I am trying to create a bucket using s3cmd on the Ceph radosgw. I am able to
get the list of buckets using
#s3cmd ls
2016-04-12 07:02 s3://my-new-bucket
2016-04-11 14:46 s3://new-bucket-6f2327c1
However, creating a bucket using *s3cmd mb s3://buck* gives an error
message:
DEBUG:
At this stage the RGW component is further down the line - pretty much just a
concept while we build out the RBD side first.
What I wanted to get out of EC was distributing the data across multiple DCs
such that we were not simply replicating data - which would give us much better
storage efficiency
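To make the EC idea concrete, a sketch of a profile that would spread chunks
across DCs - profile name and k/m purely illustrative, and it assumes a CRUSH
hierarchy with a 'datacenter' level (the key is 'crush-failure-domain' on
later releases):

$ ceph osd erasure-code-profile set multi-dc k=4 m=2 \
    ruleset-failure-domain=datacenter
$ ceph osd pool create ecpool 128 128 erasure multi-dc

With k=4, m=2 that is 1.5x raw usage versus 3x for triple replication, traded
against cross-DC latency on writes and recovery.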
Hi Sage,
we run ext4 only on our 8-node cluster with 110 OSDs and are quite happy
with ext4.
We started with xfs but the latency was much higher compared to ext4...
But we use RBD only with "short" filenames like
rbd_data.335986e2ae8944a.000761e1.
If we can switch from Jewel to K* and
Hi Adrian,
Looking at the documentation RadosGW has multi region support with the
“federated gateways”
(http://docs.ceph.com/docs/master/radosgw/federated-config/):
"When you deploy a Ceph Object Store service that spans geographical locales,
configuring Ceph Object Gateway regions and
I'd like to raise these points, then
1) some people (like me) will never ever use XFS if they have a choice
given no choice, we will not use something that depends on XFS
2) choice is always good
3) doesn't majority of Ceph users only care about RBD?
(Angry rant coming)
Even our last
On Mon, 11 Apr 2016 10:01:15 +0100 Nick Fisk wrote:
> > -Original Message-
> > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> > Of nick
> > Sent: 11 April 2016 08:26
> > To: ceph-users@lists.ceph.com
> > Subject: Re: [ceph-users] How can I monitor current ceph
Hi,
On 11.04.2016 at 23:39, Sage Weil wrote:
> ext4 has never been recommended, but we did test it. After Jewel is out,
> we would like to explicitly recommend *against* ext4 and stop testing it.
Hmmm. We're currently migrating away from xfs as we had some strange
performance issues which were
Hi Sage,
I suspect most people nowadays run tests and develop on ext4. Not supporting
ext4 in the future means we'll need to find a convenient way for developers to
run tests against the supported file systems.
My 2cts :-)
On 11/04/2016 23:39, Sage Weil wrote:
> Hi,
>
> ext4 has never been
Hi,
The next Ceph Breizh meetup will be organized at Nantes, on April 19th,
in the Suravenir Building
at 2 Impasse Vasco de Gama, 44800 Saint-Herblain.
Here is the doodle:
http://doodle.com/poll/3mxqqgfkn4ttpfib
See you soon at Nantes!
--
Eric Mourgaya,
Let's respect the planet!
Let's fight