Hello,
log [INF] : 3.136 repair ok, 0 fixed
Thank you Greg, I did like that, it worked well.
Laurent
On 25/11/2013 19:10, Gregory Farnum wrote:
On Mon, Nov 25, 2013 at 8:10 AM, Laurent Barbe laur...@ksperis.com wrote:
Hello,
Since yesterday, scrub has detected an inconsistent pg :( :
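(For reference, the repair referred to above is presumably the standard per-PG repair command, run against the pg id shown in the log line at the top of this message, e.g.:
$ ceph pg repair 3.136
)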
Hi all
I have 3 OSDs, named sdb, sdc, sdd.
Suppose the OSD on device */dev/sdc* dies; my server has only sdb and
sdc at the moment, because the disk that was /dev/sdd has taken over the /dev/sdc name.
I have the following configuration:
[osd.0]
host = data-01
devs = /dev/sdb1
[osd.1]
host = data-01
On 2013-11-25 21:55, Kyle Bader wrote:
Several people have reported issues with combining OS and OSD
journals
on the same SSD drives/RAID due to contention.
I wonder whether this has been due to the ATA write cache flush command
bug that was found yesterday. It would seem to explain why a
Hi all
I have 3 OSDs, named sdb, sdc, sdd.
Suppose the OSD on device /dev/sdc dies; my server has only sdb and sdc
at the moment, because the disk that was /dev/sdd has taken over the /dev/sdc name.
Can you just use one of the /dev/disk/by-something/identifier symlinks?
Eg
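(The example is cut off in the archive; a hypothetical illustration, with a made-up disk id, would be to point the OSD's devs entry at a persistent symlink instead of the raw device name:
[osd.1]
host = data-01
devs = /dev/disk/by-id/ata-ST3000DM001-XYZ1234-part1
)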
Hi James,
The problem is: why does Ceph not recommend using the device UUID in ceph.conf,
when the above error can occur?
--
TuanTaBa
On 11/26/2013 04:04 PM, James Harper wrote:
Hi all
I have 3 OSDs, named sdb, sdc, sdd.
Suppose the OSD on device /dev/sdc dies; my server has only sdb and sdc
at the
It seems that the client admin socket has a life cycle: if any operation is issued to
rbd, an rbd admin socket appears in the /var/run/ceph directory; however, when you
use this admin socket to dump perf counters, the rbd cache perf counters are not in
the results:
The output of ' ceph --admin-daemon
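(As a sketch of the kind of command being discussed, a perf counter dump through a client admin socket looks roughly like the following; the socket file name is an assumption, since it depends on the client id and pid:
$ ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok perf dump
)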
Hi,
I found in doc: http://ceph.com/docs/master/start/os-recommendations/
Putting multiple ceph-osd daemons using XFS or ext4 on the same host
will not perform as well as they could.
For now recommended filesystem is XFS.
Does this mean that for the best performance the setup should be 1 OSD per host?
Ok, Thanks :-)
--
Regards
Dominik
2013/11/26 Jens Kristian Søgaard j...@mermaidconsulting.dk:
Hi,
Putting multiple ceph-osd daemons using XFS or ext4 on the same host
will not perform as well as they could.
Does this mean that for the best performance the setup should be 1 OSD per host?
The
Hi Ceph,
31 January 2014, the day before FOSDEM ( http://fosdem.org/ ), a meetup is being
organized in the center of Brussels, within walking distance of bars and
restaurants. The location is an office with internet access, and the participants will
be invited to explain their use cases or their projects.
Hi,
Organizing a meetup is easy. Much much easier than organizing a Ceph day ;-) I
just published the announcement for the one in advance of FOSDEM.
http://ceph.com/community/event/brussels-ceph-meetup-in-advance-of-fosdem/
We can discuss the details on IRC if you'd like.
Cheers
On
On Tue, Nov 26, 2013 at 9:21 PM, nicolasc nicolas.cance...@surfsara.nl wrote:
Hi every one,
The official doc for rbd mentions the --stripe_count and --stripe_unit
options. If I understand correctly, these allow fancy striping — i.e.
having multiple stripes in a single object. However, I tried
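(For illustration only: a striped image would typically be created with something like the command below; the sizes are arbitrary, and exact flag spellings can vary by release, so treat this as a sketch rather than the poster's exact invocation:
$ rbd create mypool/striped-img --size 10240 --image-format 2 --order 22 --stripe-unit 65536 --stripe-count 8
)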
Adding ceph-user.
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Tue, Nov 26, 2013 at 1:49 AM, haiquan...@sina.com wrote:
Hi ,
I'm from China , recently we are testing ceph block storage
On Tue, Nov 26, 2013 at 11:43:07AM +0100, Dominik Mostowiec wrote:
Hi,
I found in doc: http://ceph.com/docs/master/start/os-recommendations/
Putting multiple ceph-osd daemons using XFS or ext4 on the same host
will not perform as well as they could.
For now recommended filesystem is XFS.
Hi Yan,
Thank you for that quick answer. I will try the fuse client. Maybe it
would not hurt to make this a little more explicit in the docs?
Best regards,
Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)
On 11/26/2013 02:34 PM, Yan, Zheng wrote:
On Tue, Nov 26, 2013 at
Good morning (or afternoon),
The first stream for the Ceph Developer Summit Day 2 is now live.
Feel free to get settled in for a 7a PST start (roughly 25mins from
now). You can see the stream, and all related information, at the
following locations:
G+ Event Page:
On Tue, 26 Nov 2013, Christoph Hellwig wrote:
On Tue, Nov 26, 2013 at 11:43:07AM +0100, Dominik Mostowiec wrote:
Hi,
I found in doc: http://ceph.com/docs/master/start/os-recommendations/
Putting multiple ceph-osd daemons using XFS or ext4 on the same host
will not perform as well as they
Does rbd-fuse support cephx? I can't see any reference in the docs:
http://ceph.com/docs/master/man/8/rbd-fuse/
On 2013-11-26 14:40, nicolasc wrote:
Thank you for that quick answer. I will try the fuse client.
Dear Team
After executing : ceph-deploy -v osd prepare ceph-node2:/home/ceph/osd1
i'm getting some error :
[ceph-node2][DEBUG ] connected to host: ceph-node2
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info:
Hi all
we have a ceph 0.67.4 cluster with 24 OSDs
I have noticed that the two servers that have 9 OSD each, have around 10'000
threads running - a number that went up significantly 2 weeks ago.
Looking at the threads:
root@h2:/var/log/ceph# ps -efL | grep ceph-osd | awk '{ print $2 }' | uniq
On 11/26/2013 08:14 AM, Patrick McGarry wrote:
Adding ceph-user.
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
On Tue, Nov 26, 2013 at 1:49 AM, haiquan...@sina.com wrote:
Hi ,
I'm from
On 11/26/13, 4:04 AM, Mihály Árva-Tóth wrote:
Hello,
Is there any idea? I don't know whether this is an S3 API limitation or a missing feature.
Thank you,
Mihaly
Hi Mihaly,
If all you are looking for is the current size of the bucket this can be
found from the adminops api[1] or when you get do the
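(On the gateway host itself, rather than through the REST adminops API, something like the following should report it as well; the bucket name here is hypothetical:
$ radosgw-admin bucket stats --bucket=mybucket
The output includes a usage section with the total size and object count.)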
Yes.. all the userspace and kernel stuff supports it.
s
On Tue, 26 Nov 2013, James Pearce wrote:
Does rbd-fuse support cephx? I can't see any reference in the docs:
http://ceph.com/docs/master/man/8/rbd-fuse/
On 2013-11-26 14:40, nicolasc wrote:
Thank you for that quick answer.
On Tue, Nov 26, 2013 at 9:19 AM, upendrayadav.u upendrayada...@zohocorp.com
wrote:
Dear Team
After executing : *ceph-deploy -v osd prepare ceph-node2:/home/ceph/osd1*
i'm getting some error :
[ceph-node2][DEBUG ] connected to host: ceph-node2
[ceph-node2][DEBUG ] detect platform
The largest group of threads is those from the network messenger — in
the current implementation it creates two threads per process the
daemon is communicating with. That's two threads for each OSD it
shares PGs with, and two threads for each client which is accessing
any data on that OSD.
-Greg
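(To put purely illustrative numbers on that model: an OSD sharing PGs with 23 other OSDs and serving roughly 500 clients would run on the order of 2 x (23 + 500), about 1,050 messenger threads, so a host carrying 9 such OSDs could plausibly reach the ~10,000 threads observed above.)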
The second (of three) sessions for CDS day 2 is about to start.
YouTube: http://youtu.be/7SRHYyHNBCc
IRC: #ceph-summit
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
YouTube: http://youtu.be/Rvli9pFLnMk
IRC: #ceph-summit (oftc)
Best Regards,
Patrick McGarry
Director, Community || Inktank
http://ceph.com || http://inktank.com
@scuttlemonkey || @ceph || @inktank
Hi All,
There is a new release of ceph-deploy, the easy deployment tool for Ceph.
The most important (non-bug) change for this release is the ability to
specify repository mirrors when installing ceph. This can be done with
environment variables or flags in the `install` subcommand.
Full
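(A sketch of how the mirror support can be used; the URLs are placeholders and the exact flag names should be checked against the release notes, so treat this as an assumption rather than the definitive syntax:
$ ceph-deploy install --repo-url http://mirror.example.com/ceph/debian-emperor --gpg-url http://mirror.example.com/ceph/release.asc node1
)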
ceph-deploy 1.3.3 just got released and you should not see this with the
new version.
On Tue, Nov 26, 2013 at 9:56 AM, Alfredo Deza alfredo.d...@inktank.com wrote:
On Tue, Nov 26, 2013 at 9:19 AM, upendrayadav.u
upendrayada...@zohocorp.com wrote:
Dear Team
After executing :
On 11/26/2013 09:45 PM, Alvin Starr wrote:
I have not gotten so far as to get things working with XenServer, but I
have followed the installation instructions from the technology preview
and installed using that repo.
I have a ceph pool defined.
I can create and destroy volumes
but I cannot read
On Tue, Nov 26, 2013 at 1:27 PM, Dominik Mostowiec
dominikmostow...@gmail.com wrote:
Hi,
We have 2 clusters with copy of objects.
On one of them we split all large buckets (the largest had 17 million objects) into
256 buckets each (shards), and we have added 3 extra servers (6-9).
The old bucket was created in
Hello all,
bad news, the problem showed up again today!
I had a bindfs mount and an NFS server running on the same proxy server;
when I ran a massive chmod on a directory with a large amount of data I
got part of the directories that disappeared!
The problem wasn't showing up as fast as it was with
From ceph-users archive 08/27/2013:
On 08/27/2013 01:39 PM, Timofey Koolin wrote:
Is there a way to know the real size of an rbd image and rbd snapshots?
rbd ls -l writes the declared size of the image, but I want to know the real size.
You can sum the sizes of the extents reported by:
rbd diff pool/image[@snap]
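For example, a common way to sum those extents into a total (the image name is hypothetical) is:
$ rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'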
Hi Alfredo-
Have you looked at adding the ability to specify a proxy on the ceph-deploy
command line? Something like:
ceph-deploy install --proxy {http_proxy}
That would then need to run all the remote commands (rpm, curl, wget, etc) with
the proxy. Not sure how complex that would
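(As a possible interim workaround, and only as a sketch since it assumes the remote package managers honour these variables, the proxy can sometimes be set in the environment on the target nodes, e.g. in /etc/environment or the package manager's own configuration:
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
)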
On Tue, Nov 26, 2013 at 4:33 PM, Gruher, Joseph R
joseph.r.gru...@intel.com wrote:
Hi Alfredo-
Have you looked at adding the ability to specify a proxy on the ceph-deploy
command line? Something like:
ceph-deploy install --proxy {http_proxy}
That would then need to run all the
Hello Cephers
I was following http://ceph.com/docs/master/rbd/rbd-openstack/ for Ceph and
OpenStack integration; using this document I have done all the changes
required for this integration.
I am not sure how I should test my configuration, how I should make sure the
integration is
Hi Sage,
If I recall correctly during the summit you mentioned that it was possible to
disable the journal.
Is it still part of the plan?
Sébastien Han
Cloud Engineer
“Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address
Hi,
Well after restarting the services run:
$ cinder create 1
Then you can check both status in Cinder and Ceph:
For Cinder run:
$ cinder list
For Ceph run:
$ rbd -p cinder-pool ls
If the image is there, you’re good.
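(Assuming the default Cinder RBD volume naming, the new image typically shows up in that listing as volume-<cinder-volume-uuid>.)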
Cheers.
Sébastien Han
Cloud Engineer
Always give 100%. Unless
I'm curious about this too. With a leveldb backend, how would we invoke
leveldb to get around our current journaling requirements?
On 11/26/2013 05:06 PM, Sebastien Han wrote:
Hi Sage,
If I recall correctly during the summit you mentioned that it was possible to
disable the journal.
Is it
Hi Karan,
You should monitor the OSDs to see if the objects get created in the pool etc.
as Sebastian suggested.
I followed the links below:
http://ceph.com/docs/master/rados/operations/monitoring-osd-pg/
http://ceph.com/docs/master/rados/operations/control/
Another thing you could do is to
On Wed, 27 Nov 2013, Sebastien Han wrote:
Hi Sage,
If I recall correctly during the summit you mentioned that it was possible to
disable the journal.
Is it still part of the plan?
For the kv backend, yeah, since the key/value store will handle making
things transactional.
Haomai, I still
On Wed, Nov 27, 2013 at 5:56 AM, Alphe Salas asa...@kepler.cl wrote:
Hello all,
bad news, the problem showed up again today!
I had a bindfs mount and an NFS server running on the same proxy server;
when I ran a massive chmod on a directory with a large amount of data I got
part of the directories
I am working with a small test cluster, but the problems described
here will remain in production. I have an external fiber channel storage
array and have exported two 3 TB disks (just as JBODs). I can use
ceph-deploy to create an OSD for each of these disks on a node named
Vashti. So far
Ahh. Now that is a fish of a different colour.
I was trying vol-upload and vol-download and your explanation makes it
clear why they would not work.
I wonder if sticking an rbd-fuse mount under libvirt to allow copy in
and out functions to work would make sense.
I will back up and try to see
Is there any way to manually configure which OSDs are started on which
machines? The osd configuration block includes the osd name and host, so is
there a way to say that, say, osd.0 should only be started on host vashti
and osd.1 should only be started on host zadok? I tried using this
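(A sketch of what that would look like in ceph.conf, using the same per-OSD sections seen earlier in this digest and the host names given by the poster; whether the init script honours it in this setup is exactly the open question:
[osd.0]
host = vashti
[osd.1]
host = zadok
)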
Hi Mark
Thanks for your reply!
Yes, for the 33T OSDs we use 1 disk per OSD.
- Do you use SSDs for journals? Answer: we don't use SSDs for journals; how
much could performance be improved if we used SSDs to store the journals?
- What CPU do you have in each node? Answer: CPU: Intel(R)
On Wed, Nov 27, 2013 at 7:39 AM, Sage Weil s...@inktank.com wrote:
On Wed, 27 Nov 2013, Sebastien Han wrote:
Hi Sage,
If I recall correctly during the summit you mentioned that it was
possible to disable the journal.
Is it still part of the plan?
For the kv backend, yeah, since the
Hello Sebastien / Community
I tried the commands mentioned in the email below.
[root@rdo ~]#
[root@rdo ~]# cinder create 1
+----------+-------+
| Property | Value |