Hi,
I've been playing around with the rados gateway and RBD and have some
questions about user access restrictions. I'd like to be able to set up
a cluster that would be shared among different clients without any
conflicts...
Is there a way to limit S3/Swift clients to be able to write data
On 2015-07-01 16:11, Gregory Farnum wrote:
On Wed, Jul 1, 2015 at 9:02 AM, flisky yinjif...@lianjia.com wrote:
Hi list,
I meet a strange problem:
sometimes I cannot see a file/directory created by another ceph-fuse
client. It becomes visible after I touch/mkdir the same name.
Any
Hi Valery,
With the old account, did you try to give FULL access to the new user ID?
The process should be:
From the OLD account, add FULL access for the NEW account (S3 ACL with CloudBerry, for
example)
With radosgw-admin, update the link from the OLD account to the NEW account (linking allows the user
to see the bucket with
Thanks Mark
Are there any plans for a ZFS-like L2ARC in Ceph, or is cache tiering what
should work like this in the future?
I have tested cache tiering + an EC pool, and that created too much load on our
servers, so it was not viable to use.
I was also wondering if EnhanceIO would be a good
On 07/01/2015 01:39 PM, Tuomas Juntunen wrote:
Thanks Mark
Are there any plans for a ZFS-like L2ARC in Ceph, or is cache tiering what
should work like this in the future?
I have tested cache tiering + an EC pool, and that created too much load on our
servers, so it was not viable to use.
We
Hi,
I think it's because the secret key for the swift subuser was not generated:
radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret
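For completeness, a sketch of the full sequence (the johndoe uid is the one used in this thread; run this against your own gateway):

```shell
# Create the swift subuser under the existing S3 user, if it doesn't exist yet
radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift --access=full

# Generate the swift secret for that subuser
radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret

# Verify: the swift_keys array in the output should now list johndoe:swift
radosgw-admin user info --uid=johndoe
```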
Mikaël
On 01/07/2015 14:50, Jimmy Goffaux wrote:
radosgw-agent = 1.2.1 (trusty)
Ubuntu 14.04
English version :
Hello,
According to
Hi Cephers,
On Sunday evening we upgraded Ceph from 0.87 to 0.94. After the upgrade, VMs
running on Proxmox freeze for 3-4 s in 10-minute periods (applications stop
responding on Windows). Before the upgrade everything was working fine. In
/proc/diskstats, field 7 (time spent reading (ms)) and 11
Hi community
Do you know if there is a page listing all the official Ceph clusters deployed, with
number of nodes, volume, protocol (block / file / object)?
If not, would you agree to creating such a list on the Ceph site?
Thanks
Sent from my iPhone
___
ceph-users
It's not really a problem; the swift johndoe user works as long as it has a
record in swift_keys.
The s3 secret key of johndoe user is here :
"keys": [
  { "user": "johndoe",
    "access_key": "91KC4JI5BRO39A22JY9I",
    "secret_key": "Z5kLaBtg870xBhYtb4RKY82qGsbiqRpGs\/KQUXKF" },
I tested swift and s3
hi,
Thank you for your reply, but
I just regenerated the user completely and I confirm that I still have a
problem :(
radosgw-admin user create --uid=johndoe --display-name="John Doe"
--email=m...@email.com
"subusers": [],
"keys": [
  { "user": "johndoe",
    "access_key":
OK, I think I found the answer to the second question:
http://wiki.ceph.com/Planning/Blueprints/Giant/Add_QoS_capacity_to_librbd
...librbd doesn't support any QoS for now...
Can anyone shed some light on the namespaces and limiting S3 users to
one bucket?
J
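On limiting S3 users to one bucket: one common approach (a sketch, not something confirmed in this thread; the clientA uid is purely illustrative) is to cap bucket creation per user with radosgw-admin's --max-buckets and pre-create the single bucket for them:

```shell
# Allow the user to own at most one bucket
radosgw-admin user create --uid=clientA --display-name="Client A" --max-buckets=1
```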
On 07/01/2015 10:31 AM, Jacek
Yes, it also works... It's more that I did not expect to have a
johndoe:swift element in keys.
Thank you for providing answers.
On Wed, 01 Jul 2015 15:28:15 +0200, Mikaël Guichard wrote:
It's not really a problem; the swift johndoe user works as long as it has a
record in swift_keys.
The s3 secret key of
On Tue, 16 Jun 2015 10:04:26 +0200,
Marcus Forness pixel...@gmail.com wrote:
hi! Is anyone able to provide some tips on a performance issue on a newly
installed all-flash Ceph cluster? When we do write tests we get 900 MB/s
write, but read tests are only 200 MB/s. All servers are on 10 Gbit
Hi,
I am new to the Ceph project. I am trying to benchmark erasure coding on Intel
and I am getting the following error.
[root@nitin ceph]#
CEPH_ERASURE_CODE_BENCHMARK=src/ceph_erasure_code_benchmark
PLUGIN_DIRECTORY=src/.libs qa/workunits/erasure-code/bench.sh
seconds KB plugin k m
Hi Nitin,
Have you installed the YASM compiler?
David
On 07/01/2015 01:46 PM, Nitin Saxena wrote:
Hi,
I am new to the Ceph project. I am trying to benchmark erasure coding on
Intel and I am getting the following error.
[root@nitin ceph]#
CEPH_ERASURE_CODE_BENCHMARK=src/ceph_erasure_code_benchmark
On 06/30/2015 10:42 PM, Tuomas Juntunen wrote:
Hi
For seq reads here's the latencies:
lat (usec) : 2=0.01%, 10=0.01%, 20=0.01%, 50=0.02%, 100=0.03%
lat (usec) : 250=1.02%, 500=87.09%, 750=7.47%, 1000=1.50%
lat (msec) : 2=0.76%, 4=1.72%, 10=0.19%, 20=0.19%
Random reads:
On Wed, Jul 1, 2015 at 3:10 PM, Jacek Jarosiewicz
jjarosiew...@supermedia.pl wrote:
OK, I think I found the answer to the second question:
http://wiki.ceph.com/Planning/Blueprints/Giant/Add_QoS_capacity_to_librbd
...librbd doesn't support any QoS for now...
But libvirt/qemu can do QoS: see
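For example, libvirt can throttle a guest's RBD-backed disk at the hypervisor level with blkdeviotune (the domain name myvm and target device vda below are illustrative):

```shell
# Cap the device at 500 IOPS and 50 MB/s total, applied to the running guest
virsh blkdeviotune myvm vda --total-iops-sec 500 --total-bytes-sec 52428800
```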
Hello all,
I've got a coworker who put filestore_xattr_use_omap = true in the
ceph.conf when we first started building the cluster. Now he can't
remember why. He thinks it may be a holdover from our first Ceph
cluster (running dumpling on ext4, iirc).
In the newly built cluster, we are using XFS
Hi,
Like David said: the most probable cause is that no recent yasm is
installed. You can run ./install-deps.sh to ensure the necessary dependencies are
installed.
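A quick way to check (sketch; the package name applies to Debian/Ubuntu):

```shell
# See whether yasm is available before building
yasm --version || sudo apt-get install -y yasm
# or, from the Ceph source tree, pull in everything at once:
./install-deps.sh
```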
Cheers
On 01/07/2015 13:46, Nitin Saxena wrote:
Hi,
I am new to the Ceph project. I am trying to benchmark erasure coding on
On Tue, Jun 30, 2015 at 10:36 AM, Daniel Schneller
daniel.schnel...@centerdevice.com wrote:
Hi!
We are seeing a strange - and problematic - behavior in our 0.94.1
cluster on Ubuntu 14.04.1. We have 5 nodes, 4 OSDs each.
When rebooting one of the nodes (e.g. for a kernel upgrade), the OSDs
It doesn't matter; I think filestore_xattr_use_omap is a 'noop' and not used
in Hammer.
Thanks & Regards
Somnath
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Adam
Tygart
Sent: Wednesday, July 01, 2015 8:20 AM
To: Ceph Users
Subject:
Hi
I'll look into the possibility of testing EnhanceIO. I'll report back on this.
Thanks
Br,T
-Original Message-
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: 1 July 2015 21:51
To: Tuomas Juntunen; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Very low 4k randread
Hi cephers,
Is there anyone out there who has implemented EnhanceIO in a production
environment? Any recommendations? Any perf output to share showing the
difference between using it and not?
Thanks in advance,
*German*
On 07/01/2015 03:02 PM, Vickey Singh wrote:
- What's the exact version number of the open-source Ceph provided with
this product?
It is Hammer, specifically 0.94.1 with several critical bugfixes on top
as the product went through QE. All of the bugfixes have been proposed
or merged to Hammer
This can happen if your OSDs are flapping... Hope your network is stable.
Thanks & Regards
Somnath
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Tuomas
Juntunen
Sent: Wednesday, July 01, 2015 2:24 PM
To: 'ceph-users'
Subject: [ceph-users] One of our nodes has logs
Hi,
I asked the same question a week or so ago (just search the mailing list
archives for EnhanceIO :) and got some interesting answers.
It looks like the project is pretty much dead since it was bought by HGST.
Even their website has some broken links regarding EnhanceIO.
I'm keen to try
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Gregory Farnum
Sent: 01 July 2015 16:56
To: Daniel Schneller
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Node reboot -- OSDs not logging off from cluster
On Tue, Jun 30,
Hello Ceph lovers
You may have noticed that Red Hat recently released Red Hat Ceph
Storage 1.3:
http://redhatstorage.redhat.com/2015/06/25/announcing-red-hat-ceph-storage-1-3/
My questions are:
- What's the exact version number of the open-source Ceph provided with this
product?
- RHCS 1.3
Hi
One of our nodes has OSD logs that say 'wrongly marked me down' for every OSD
at some point. What could be the reason for this? Does anyone have similar
experiences?
The other nodes work totally fine and they are all identical.
Br, T
Hi,
The details of the differences between the Hammer point releases and Red Hat
Ceph Storage 1.3 can be listed as described at
http://www.spinics.net/lists/ceph-devel/msg24489.html (a reconciliation between
hammer and v0.94.1.2).
The same analysis should be done for
I would like to get some clarification on the size of the journal disks
that I should get for the new Ceph cluster I am planning. I read about the
journal settings at
http://ceph.com/docs/master/rados/configuration/osd-config-ref/#journal-settings
but that didn't really clarify it for me; either that or I
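For reference, the rule of thumb from those docs is journal size >= 2 * (expected throughput * filestore max sync interval). A quick sketch with assumed example figures (100 MB/s of sustained writes and a 5 s sync interval; substitute your own numbers):

```shell
THROUGHPUT_MBS=100   # expected OSD write throughput in MB/s (assumed example)
SYNC_INTERVAL_S=5    # filestore max sync interval in seconds (assumed example)
JOURNAL_MB=$((2 * THROUGHPUT_MBS * SYNC_INTERVAL_S))
echo "suggested minimum journal size: ${JOURNAL_MB} MB"
```

With those figures you land on a 1000 MB minimum; most people round up generously, since journal partitions are small compared to the SSDs they live on.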
I've been wrestling with IO performance in my cluster and one area I have
not yet explored thoroughly is whether or not performance constraints on
mon hosts would be likely to have any impact on OSDs. My mons are quite
small, and one in particular has rather high IO waits (frequently 30% or
more)
On 07/01/2015 12:13 PM, Tuomas Juntunen wrote:
Hi
Yes, the OSDs are on spinning disks and we have 18 SSDs for journals, one
SSD per two OSDs.
The OSDs are:
Model Family: Seagate Barracuda 7200.14 (AF)
Device Model: ST2000DM001-1CH164
As I understand it, the journals are not used as
Hi
Yes, the OSDs are on spinning disks and we have 18 SSDs for journals, one
SSD per two OSDs.
The OSDs are:
Model Family: Seagate Barracuda 7200.14 (AF)
Device Model: ST2000DM001-1CH164
As I understand it, the journals are not used as a read cache at all, just
for writing. Would SSD
Hi list,
I meet a strange problem:
sometimes I cannot see a file/directory created by another ceph-fuse
client. It becomes visible after I touch/mkdir the same name.
Any thoughts?
Thanks!
Hi everybody,
We have 3 monitors in our ceph cluster: 2 in one local site (2 data centers a
few km away from each other), and the 3rd one on a remote site, with a maximum
round-trip time (RTT) of 30ms between the local site and the remote site. All
OSDs run on the local site. The reason for the
On 07/01/2015 09:38 AM, - - wrote:
Hi everybody,
We have 3 monitors in our ceph cluster: 2 in one local site (2 data centers a
few km away from each other), and the 3rd one on a remote site, with a maximum
round-trip time (RTT) of 30ms between the local site and the remote site. All
OSDs
Hey Patrick,
Looks like the GMT+8 time for the 1st day is wrong, should be 10:00 pm - 7:30
am?
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Patrick McGarry
Sent: Tuesday, June 30, 2015 11:28 PM
To: Ceph Devel;
On Wed, Jul 1, 2015 at 8:38 AM, - - francois.pe...@san-services.com wrote:
Hi everybody,
We have 3 monitors in our ceph cluster: 2 in one local site (2 data centers a
few km away from each other), and the 3rd one on a remote site, with a maximum
round-trip time (RTT) of 30ms between the local
On Wed, Jul 1, 2015 at 9:02 AM, flisky yinjif...@lianjia.com wrote:
Hi list,
I meet a strange problem:
sometimes I cannot see a file/directory created by another ceph-fuse
client. It becomes visible after I touch/mkdir the same name.
Any thoughts?
What version are you running? We've
I've checked the network; we use IPoIB and all nodes are connected to the
same switch, and there are no breaks in connectivity while this happens. My
constant ping shows 0.03-0.1 ms. I would say this is OK.
This happens almost every time deep scrubbing is running. Our loads on
this particular
Yeah, this can happen during deep scrub and also during rebalancing... I forgot
to mention that.
Generally, it is a good idea to throttle those. For deep scrub, you can try
using (got it from an old post; I never used it):
osd_scrub_chunk_min = 1
osd_scrub_chunk_max = 1
osd_scrub_sleep = 0.1
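Those settings can go in ceph.conf under [osd], or be pushed to running OSDs without a restart via injectargs (a sketch, using the values quoted above):

```shell
ceph tell osd.* injectargs '--osd_scrub_chunk_min 1 --osd_scrub_chunk_max 1 --osd_scrub_sleep 0.1'
```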
I would probably go with smaller OSD disks; 4 TB is too much to lose in
case of a broken disk, so maybe more OSD daemons with smaller disks, maybe 1 TB
or 2 TB in size. A 4:1 ratio is good enough; also, I think that a 200 GB disk
for the journals would be OK, so you can save some money there. The OSDs
of
Re: your previous question
I will not elaborate on this much more; I hope some of you will try it if you
have NUMA systems and see for yourselves.
But I can recommend some docs:
http://globalsp.ts.fujitsu.com/dmsp/Publications/public/wp-ivy-bridge-ep-memory-performance-ww-en.pdf
Hello,
On Wed, 1 Jul 2015 15:24:13 + Somnath Roy wrote:
It doesn't matter; I think filestore_xattr_use_omap is a 'noop' and not
used in Hammer.
Then what was this functionality replaced with, especially considering
ext4-based OSDs?
Chibi
Thanks & Regards
Somnath
-Original
It also depends a lot on the size of your cluster... I have a test cluster I'm
standing up right now with 60 nodes, a total of 600 OSDs, each at 4 TB... If I
lose 4 TB, that's a very small fraction of the data. My replicas are going to
be spread out across a lot of spindles, and
I'm interested in such a configuration; can you share some performance
tests/numbers?
Thanks in advance,
Best regards,
*German*
2015-07-01 21:16 GMT-03:00 Shane Gibson shane_gib...@symantec.com:
It also depends a lot on the size of your cluster... I have a test
cluster I'm standing up right