Hi guys,
I'm testing rbd with fio. I had been testing for more than 3 hours when fio
started showing an instantaneous IOPS of 0. I then captured a trace of the rbd
device with blktrace and found that the kernel worker thread "kworker" was
busy writing metadata to rbd with the flag "WM". I think t
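For reference, a rough sketch of the sort of trace capture described above (the device name /dev/rbd0 and the output prefix are assumptions, not taken from the report):
$ blktrace -d /dev/rbd0 -o rbd0trace    # record block-layer events for the mapped rbd device
$ blkparse rbd0trace | grep ' WM '      # filter for write-metadata ("WM") requests, e.g. from kworker
In blkparse output the RWBS field "WM" marks a metadata write, which is the flag mentioned above.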
On 07/31/2014 06:37 PM, James Eckersall wrote:
Hi,
The stacktraces are very similar. Here is another one with complete
dmesg: http://pastebin.com/g3X0pZ9E
$ decodecode < tmp.oops
[ 28.636837] Code: dc 00 00 49 8b 50 08 4d 8b 20 49 8b 40 10 4d 85 e4 0f
84 17 01 00 00 48 85 c0 0f 84 0e 01 00 0
On Thu, 31 Jul 2014, James Eckersall wrote:
> Ah, thanks for the clarification on that. We are very close to the 250 limit,
> so that is something we'll have to look at addressing, but I don't think
> it's actually relevant to the panics as since reverting the auth key changes
> I made appears to ha
Hi Ilya,
I think you need to upgrade the kernel version of that Ubuntu
server. I had a similar problem, and after upgrading the kernel to 3.13
the problem was resolved.
Best regards,
German Anders
--- Original message ---
Subject: Re: [ceph-users] 0.80.5-1pre
On Fri, Aug 1, 2014 at 12:08 AM, Larry Liu wrote:
> I'm on Ubuntu 12.04, kernel 3.2.0-53-generic. Just deployed Ceph 0.80.5.
> Creating pools & images works fine, but I get this error when trying to map an
> rbd image or mount CephFS:
> rbd: add failed: (5) Input/output error
>
> I did try flipping
I'm on Ubuntu 12.04, kernel 3.2.0-53-generic. Just deployed Ceph 0.80.5.
Creating pools & images works fine, but I get this error when trying to map an rbd
image or mount CephFS:
rbd: add failed: (5) Input/output error
I did try flipping the crush tunables as instructed at
http://ceph.com/docs/master/rado
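In case it is useful, a minimal sketch of the tunables fallback usually suggested for old kernel clients such as 3.2 (run with an admin keyring; "legacy" is one of several available profiles, and the pool/image names are placeholders):
$ ceph osd crush tunables legacy
$ rbd map <pool>/<image>
This trades away the newer CRUSH behaviour, so it is a workaround for old kernels rather than a general fix.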
Hi there,
I’m glad to announce that Ceph is now part of the mirrors iWeb provides.
It is available over both IPv4 and IPv6:
- HTTP on http://mirror.iweb.ca/ or directly on http://ceph.mirror.iweb.ca/
- rsync on ceph.mirror.iweb.ca::ceph
The mirror provides 4 Gbps of connectivity and is located
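A quick usage sketch for pulling from the rsync endpoint above (the local destination path is an assumption):
$ rsync -rt ceph.mirror.iweb.ca::ceph /srv/mirror/ceph/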
On Thu, Jul 31, 2014 at 2:35 PM, David Graham wrote:
> Question: I've not gone through a setup yet, just an interested lurker
> reading and interpreting capabilities at this time.
>
> my understanding of Ceph journal is that one can use a partition or a file
> on a Filesystem if i use a files
These sorts of questions are good for ceph-de...@vger.kernel.org,
which I've added. :)
On Thu, Jul 31, 2014 at 12:24 PM, yuelongguang wrote:
> hi,all
> recently i dive into the source code, i am a little confused about them,
> maybe because of many threads,wait,seq.
>
> 1. what does apply_manager
Question: I've not gone through a setup yet, just an interested lurker
reading and interpreting capabilities at this time.
My understanding of the Ceph journal is that one can use a partition or a file
on a filesystem. If I use a filesystem, I can have assured integrity if
the FS does this for me.
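For context, a minimal ceph.conf sketch of the two journal layouts being contrasted here (the paths and the 5120 MB size are illustrative assumptions):
[osd]
    ; journal as a plain file on the OSD's filesystem
    osd journal = /var/lib/ceph/osd/$cluster-$id/journal
    osd journal size = 5120
[osd.0]
    ; or point an individual OSD's journal at a raw partition
    osd journal = /dev/sdb1
Whether the filesystem's integrity guarantees are in play depends on which of the two layouts is used, which is the distinction the question is about.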
Dear ,
I have a test environment with Ceph Firefly 0.80.4 on Debian 7.5.
I do not have enough SSDs for each OSD.
I want to test Ceph performance by putting the journal on a ramdisk or tmpfs,
but when I add a new OSD using separate devices for OSD data and journal, it
fails.
First, I have tested Ram m
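For anyone attempting the same benchmark, a rough sketch of the tmpfs approach (mount point, size and OSD id 12 are placeholders; this is for testing only, since a journal kept in RAM is lost on reboot along with the OSD's recent writes):
# mount -t tmpfs -o size=4G tmpfs /mnt/ramjournal
(stop the OSD, point "osd journal" in ceph.conf at a file under /mnt/ramjournal, then:)
# ceph-osd -i 12 --mkjournal
# /etc/init.d/ceph start osd.12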
Hi all,
Recently I dove into the source code, and I am a little confused,
maybe because of the many threads, waits, and seqs.
1. What does apply_manager do? It is related to FileStore and FileJournal.
2. What does SubmitManager do?
3. How do they interact and work together?
what a big question :), th
On Wed, Jul 30, 2014 at 5:08 PM, Matan Safriel wrote:
> I'm looking for a distributed file system, for large JSON documents. My file
> sizes are roughly between 20M and 100M, so they are too small for couchbase,
> mongodb, even possibly Riak, but too small (by an order of magnitude) for
> HDFS. Wo
Hi Kenneth,
On Thu, 31 Jul 2014, Kenneth Waegeman wrote:
> Hi all,
>
> We have a erasure coded pool 'ecdata' and a replicated pool 'cache' acting as
> writeback cache upon it.
> When running 'rados -p ecdata bench 1000 write', it starts filling up the
> 'cache' pool as expected.
> I want to see w
Hi all,
We have an erasure-coded pool 'ecdata' and a replicated pool 'cache'
acting as a writeback cache on top of it.
When running 'rados -p ecdata bench 1000 write', it starts filling up
the 'cache' pool as expected.
I want to see what happens when it starts evicting, therefore I've done:
ceph osd
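For reference, a hedged sketch of the thresholds the cache agent needs before it will flush/evict (the byte value and ratios are arbitrary placeholders):
$ ceph osd pool set cache hit_set_type bloom
$ ceph osd pool set cache target_max_bytes 100000000000
$ ceph osd pool set cache cache_target_dirty_ratio 0.4
$ ceph osd pool set cache cache_target_full_ratio 0.8
Without target_max_bytes (or target_max_objects) the agent has no notion of "full" to evict against.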
Hello Alfredo,
Yes, I've used ceph-deploy. I can't remember any serious
warnings/errors. You can find the output here:
https://paste.cybertinus.nl/p/yt0sl9z6eL (this link will stop working
in a week's time).
Met vriendelijke groet/With kind regards,
Tijn Buijs
t...@cloud.nl
Hello Vincenzo,
Yes, those 6 OSDs are on different hosts. I've got 3 VMs, each with 2
OSDs. So this should be enough for the requirement to have 3 replicas
(even though I set it back to 2, as suggested in the howtos).
I will try to have the replicas only over the OSDs and not over the
hosts
On Thu, Jul 31, 2014 at 10:36 AM, Tijn Buijs wrote:
> Hello everybody,
>
> At cloud.nl we are going to use Ceph. So I find it a good idea to get
> some handson experience with it, so I can work with it :). So I'm
> installing a testcluster in a few VirtualBox machines on my iMac, which
> runs OS
Are the 6 OSDs on different hosts?
The default ruleset that Ceph applies to pools states that object replicas
(3 by default) should be placed on OSDs of different hosts.
This cannot be satisfied if you don't have OSDs on separate hosts.
I ran into this issue myself and wrote down the steps I nee
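For completeness, a rough sketch of the CRUSH edit being referred to (file names are placeholders); it moves the failure domain of the replicated rule from host to OSD so a single-host or few-host test cluster can satisfy placement:
$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
(in the rule, change "step chooseleaf firstn 0 type host" to "... type osd")
$ crushtool -c crushmap.txt -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new
Alternatively, "osd crush chooseleaf type = 0" in ceph.conf achieves the same, but only if set before the cluster is first created.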
Do not go to a 3.15 or later Ubuntu kernel at this time if you are
using krbd. See bug 8818. The Ubuntu 3.14.x kernels seem to work fine
with krbd on Trusty.
The mainline packages from Ubuntu should be helpful in testing.
Info: https://wiki.ubuntu.com/Kernel/MainlineBuilds
Package
Hello everybody,
At cloud.nl we are going to use Ceph. So I figured it would be a good idea to get
some hands-on experience with it, so I can work with it :). So I'm
installing a test cluster in a few VirtualBox machines on my iMac, which
runs OS X 10.9.4, of course. I know I will get lousy performance, bu
I am at the point of the install where I am creating 2 OSDs and it is failing.
My setup is 1 "head node" (mon, admin) and 2 OSD nodes; all 3 are
running Scientific Linux 6.5.
I am following the quick deploy guide found here [
ceph.com/docs/master/start/quick-ceph-deploy/ ]
I get to the point (wi
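For comparison, a minimal sketch of the OSD-creation step from that guide as it is typically run (hostnames and directory paths are assumptions):
$ ceph-deploy osd prepare osdnode1:/var/local/osd0 osdnode2:/var/local/osd1
$ ceph-deploy osd activate osdnode1:/var/local/osd0 osdnode2:/var/local/osd1
The exact error output from these commands would make the failure much easier to diagnose.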
Hi,
I'm looking for a distributed file system for large JSON documents. My
file sizes are roughly between 20M and 100M, so they are too large for
couchbase, mongodb, even possibly Riak, but too small (by an order of
magnitude) for HDFS. Would you recommend Ceph for this kind of scenario?
Additio
Add a parameter to the OSD's config file
"osd crush update on start = false"
I'd recommend creating a section for just your SSD OSDs which sets this, as
that will let any of your other disks that move continue to be updated. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
O
On Thu, Jul 31, 2014 at 2:41 AM, yuelongguang wrote:
> hi,all
> 1.
> it seems that there are 2 kinds of functions that get/set xattrs.
> One kind starts with collection_*, the other starts with omap_*.
> What are the differences between them, and which xattrs use which kind of
> function?
IIRC,
Hello List,
I hope I'm not asking a question that has already been answered, but if so,
a pointer to the solution would be nice.
Although the subject is probably a bit misleading, that is essentially
my problem (as far as I understand it).
Is it possible to apply OSDs directly to a root to circumvent a pool
Hi Ashish,
Thank you for the reply.
* swiftclient version:
# swift --version
swift 2.1.0.22.g394cb57
* hostname -f is EXFS1.oqnet.org
Changing config file /etc/httpd/conf.d/rgw.conf:
ServerName EXFS1 → ServerName EXFS1.oqnet.org
/etc/init.d/httpd restart
/etc/init.d/ceph-radosgw restart
# swift -
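One gateway-side setting that is sometimes involved when radosgw rejects requests based on the Host header, shown here only as a hedged suggestion (the section name below is a common convention, not taken from this setup):
[client.radosgw.gateway]
    rgw dns name = EXFS1.oqnet.org
radosgw needs a restart after changing it, as above.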
The mainline packages from Ubuntu should be helpful in testing.
Info: https://wiki.ubuntu.com/Kernel/MainlineBuilds
Packages: http://kernel.ubuntu.com/~kernel-ppa/mainline/?C=N;O=D
On 31/07/2014 10:31, James Eckersall wrote:
Ah, thanks for the clarification on that.
We are very close to the 250
Hi all,
We have been working closely with Kostis on this and we have some results we
thought we should share.
Increasing the PGs was mandatory for us since we have been noticing
fragmentation* issues on many OSDs. Also, we were below the recommended
number for our main pool for quite some time
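For readers following along, the PG increase is done per pool roughly like this (pool name and target count are placeholders; pgp_num must be raised after pg_num for rebalancing to actually start):
$ ceph osd pool set <pool> pg_num 2048
$ ceph osd pool set <pool> pgp_num 2048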
Hi There,
I have done the exact steps you mentioned here, but I could not find the
issue in my case (I was using Ubuntu 14.04, though).
We are getting 400 Bad Request from swiftclient.
So I would request you to send me the version of swiftclient you are using;
also, to be on the safer side, can you have '
On Thu, Jul 31, 2014 at 12:37 PM, James Eckersall
wrote:
> Hi,
>
> The stacktraces are very similar. Here is another one with complete dmesg:
> http://pastebin.com/g3X0pZ9E
>
> The rbd's are mapped by the rbdmap service on boot.
> All our ceph servers are running Ubuntu 14.04 (kernel 3.13.0-30-ge
Ah, thanks for the clarification on that.
We are very close to the 250 limit, so that is something we'll have to look
at addressing, but I don't think it's actually relevant to the panics,
since reverting the auth key changes I made appears to have resolved the
issue (no panics yet - 20 hours ish
On Thu, 31 Jul 2014 10:13:11 +0100 James Eckersall wrote:
> Hi,
>
> I thought the limit was in relation to ceph and that 0.80+ fixed that
> limit
> - or at least raised it to 4096?
>
Yes and yes. But 0.80 only made it into kernels 3.14 and beyond. ^o^
> If there is a 250 limit, can you confirm
Hi,
I thought the limit was in relation to ceph and that 0.80+ fixed that limit
- or at least raised it to 4096?
If there is a 250 limit, can you confirm where this is documented?
Thanks
J
On 31 July 2014 09:50, Christian Balzer wrote:
>
> Hello,
>
> are you per-chance approaching the maxim
Hello,
are you perchance approaching the maximum number of kernel mappings,
which is somewhat shy of 250 in any kernel below 3.14?
If you can easily upgrade to 3.14, see if that fixes it.
Christian
On Thu, 31 Jul 2014 09:37:05 +0100 James Eckersall wrote:
> Hi,
>
> The stacktraces are very s
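A quick way to check how close a client is to that limit is to count its currently mapped devices, e.g. via the standard krbd sysfs directory:
$ ls /sys/bus/rbd/devices | wc -l
or simply by counting the lines of "rbd showmapped" output.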
Hi,
The stacktraces are very similar. Here is another one with complete dmesg:
http://pastebin.com/g3X0pZ9E
The rbds are mapped by the rbdmap service on boot.
All our ceph servers are running Ubuntu 14.04 (kernel 3.13.0-30-generic).
Ceph packages are from the Ubuntu repos, version 0.80.1-0ubunt
Hi all,
The swift stat and list commands work, but I cannot create a container.
The swift post command returns an error:
(Container POST failed: http://EXFS1/swift/v1/test 400 Bad Request [first 60
chars of response]).
# swift --debug -A http://EXFS1/auth -U testuser:swift -K
"bVwgc167mmLDVwpw4ufTkurK
On Thu, Jul 31, 2014 at 11:44 AM, James Eckersall
wrote:
> Hi,
>
> I've had a fun time with ceph this week.
> We have a cluster with 4 OSD (20 OSD's per) servers, 3 mons and a server
> mapping ~200 rbd's and presenting cifs shares.
>
> We're using cephx and the export node has its own cephx auth k
Hi,
I've had a fun time with ceph this week.
We have a cluster with 4 OSD servers (20 OSDs each), 3 mons, and a server
mapping ~200 rbds and presenting CIFS shares.
We're using cephx and the export node has its own cephx auth key.
I made a change to the key last week, adding rwx access to anothe
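For context, the sort of cap change described here is normally applied with "ceph auth caps", roughly as below (the client name and pool names are made-up placeholders):
$ ceph auth caps client.cifs-export mon 'allow r' osd 'allow rwx pool=shares, allow rwx pool=newpool'
Note that "ceph auth caps" replaces the whole cap string, so the existing pools have to be restated alongside the newly added one.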
Yes, I have made changes in my config.py file. I do see the populate.py module
in the paddles/commands directory. However, when I run the pecan populate
command, I get this error:
pecan: argument command: invalid choice: 'populate' (choose from 'create',
'shell', 'serve')
Regards,
Kapil.
>>> Sar