Hello,
I noticed that commit/apply latency reported using:
ceph pg dump -f json-pretty
is very different from the values reported when querying the OSD admin sockets.
What is your opinion? What are the targets that I should fetch metrics
from in order to be as precise as possible?
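For reference, a minimal sketch of the two query paths being compared (assuming osd.0 and the default admin socket path):

    ceph pg dump -f json-pretty            # monitor-aggregated per-OSD stats
    ceph daemon osd.0 perf dump            # queried directly on the OSD's host
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump    # same, via the socket path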
Hello,
after rebooting a ceph node, while its OSDs are booting and rejoining
the cluster, we experience slow requests that get resolved immediately
after the cluster recovers. It is important to note that before the node
reboot we set the noout flag in order to prevent recovery - so there are
only
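For context, the sequence around the reboot is roughly the following (a sketch):

    ceph osd set noout      # keep OSDs from being marked out while the node is down
    reboot                  # the node's OSDs rejoin the cluster on boot
    ceph osd unset noout    # clear the flag once all OSDs are back up and in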
On Mon, Jul 13, 2015 at 11:25 AM, Kostis Fardelas dante1...@gmail.com wrote:
Hello,
it seems that new packages for firefly have been uploaded to the repo.
However, I can't find any details in the Ceph release notes. There is only
one thread in ceph-devel [1], but it is not clear what this new
version
Maybe this can help to get the origin of the problem.
If I run ceph pg dump, at the end of the response I get:
osdstat  kbused  kbavail  kb       hb in              hb out
0        36688   5194908  5231596  [1,2,3,4,5,6,7,8]  []
1        34004   5197592  5231596  []                 []
2        34004   5197592  5231596  [1]                []
3        34004   5197592  5231596
On 13 July 2015 at 21:36, Emmanuel Florac eflo...@intellique.com wrote:
I've benchmarked it and found it has almost exactly the same performance
profile as the He6. Compared to the Seagate 6TB it draws much less
power (almost half), and that's the main selling point IMO, with
durability.
On 13-07-15 14:07, alberto ayllon wrote:
On 13-07-15 13:12, alberto ayllon wrote:
Maybe this can help to get the origin of the problem.
If I run ceph pg dump, at the end of the response I get:
What does 'ceph osd tree' tell you?
It seems there is something wrong with your
Hi Wido.
Thanks again.
I will rebuild the cluster with bigger disk.
Again thanks for your help.
2015-07-13 14:15 GMT+02:00 Wido den Hollander w...@42on.com:
On 13-07-15 14:07, alberto ayllon wrote:
On 13-07-15 13:12, alberto ayllon wrote:
Maybe this can help to get the origin of the
On Mon, Jul 13, 2015 at 9:49 AM, Ilya Dryomov idryo...@gmail.com wrote:
On Fri, Jul 10, 2015 at 9:36 PM, Jan Pekař jan.pe...@imatic.cz wrote:
Hi all,
I think I found a bug in cephfs kernel client.
When I create directory in cephfs and set layout to
ceph.dir.layout=stripe_unit=1073741824
Hello,
it seems that new packages for firefly have been uploaded to the repo.
However, I can't find any details in the Ceph release notes. There is only
one thread in ceph-devel [1], but it is not clear what this new
version is about. Is it safe to upgrade from 0.80.9 to 0.80.10?
Regards,
Kostis
[1]
On Fri, Jul 10, 2015 at 9:36 PM, Jan Pekař jan.pe...@imatic.cz wrote:
Hi all,
I think I found a bug in cephfs kernel client.
When I create directory in cephfs and set layout to
ceph.dir.layout=stripe_unit=1073741824 stripe_count=1
object_size=1073741824 pool=somepool
attempts to write
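For illustration, the layout can be applied field by field with setfattr (a sketch; the mount point /mnt/cephfs and the directory name are assumptions):

    mkdir /mnt/cephfs/testdir
    setfattr -n ceph.dir.layout.stripe_unit  -v 1073741824 /mnt/cephfs/testdir
    setfattr -n ceph.dir.layout.stripe_count -v 1          /mnt/cephfs/testdir
    setfattr -n ceph.dir.layout.object_size  -v 1073741824 /mnt/cephfs/testdir
    setfattr -n ceph.dir.layout.pool         -v somepool   /mnt/cephfs/testdir
    getfattr -n ceph.dir.layout /mnt/cephfs/testdir        # verify the resulting layout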
Hi,
I am building a ceph cluster on ARM. Is there any limitation for 32-bit in
regard to the number of nodes, storage capacity, etc.?
Please suggest.
Thanks.
Daleep Singh Bais
On 13-07-15 12:25, Kostis Fardelas wrote:
Hello,
it seems that new packages for firefly have been uploaded to the repo.
However, I can't find any details in the Ceph release notes. There is only
one thread in ceph-devel [1], but it is not clear what this new
version is about. Is it safe to upgrade
Hello everybody and thanks for your help.
Hello, I'm a newbie in Ceph; I'm trying to install a Ceph cluster for test
purposes.
I have just installed a Ceph cluster with three VMs (Ubuntu 14.04); each one
has one mon daemon and three OSDs, and each server has 3 disks.
The cluster has only one pool (rbd)
On 13-07-15 13:12, alberto ayllon wrote:
Maybe this can help to get the origin of the problem.
If I run ceph pg dump, at the end of the response I get:
What does 'ceph osd tree' tell you?
It seems there is something wrong with your CRUSHMap.
Wido
Thanks for your answer Wido.
Here is
Sorry for reviving an old thread, but could I get some input on this, pretty
please?
ext4 has 256-byte inodes by default (at least according to docs)
but the fragment below says:
OPTION(filestore_max_inline_xattr_size_other, OPT_U32, 512)
The default of 512 bytes is too much if the inode is only 256 bytes,
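For illustration, the kind of ceph.conf override being considered might look like this (a sketch; the value is only an example, not a recommendation):

    [osd]
    # ext4 inodes are 256 bytes by default, so keep inline xattrs smaller than that
    filestore_max_inline_xattr_size_other = 254

Alternatively, the ext4 OSD filesystems could be created with larger inodes, e.g. mkfs.ext4 -I 2048.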
On 13-07-15 13:12, alberto ayllon wrote:
Maybe this can help to get the origin of the problem.
If I run ceph pg dump, at the end of the response I get:
What does 'ceph osd tree' tell you?
It seems there is something wrong with your CRUSHMap.
Wido
osdstat  kbused  kbavail  kb  hb in  hb out
On Wed, 8 Jul 2015 10:28:17 +1000,
Blair Bethwaite blair.bethwa...@gmail.com wrote:
Does anyone have any experience with the newish HGST He8 8TB Helium
filled HDDs?
I've benchmarked it and found it has almost exactly the same performance
profile as the He6. Compared to the Seagate 6TB it
Why do you stick to 32bit?
Kinjo
On Mon, Jul 13, 2015 at 7:35 PM, Daleep Bais daleepb...@gmail.com wrote:
Hi,
I am building a ceph cluster on ARM. Is there any limitation for 32-bit in
regard to the number of nodes, storage capacity, etc.?
Please suggest.
Thanks.
Daleep Singh Bais
Does the ceph health detail show anything about stale or unclean PGs, or
are you just getting the blocked ops messages?
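If it is just the blocked ops, a quick sketch of what to check next (osd.3 is a placeholder; take the real ids from the health output):

    ceph health detail                      # names the OSDs the blocked requests are sitting on
    ceph daemon osd.3 dump_ops_in_flight    # run on that OSD's host: ops currently stuck
    ceph daemon osd.3 dump_historic_ops     # recently completed slow ops with per-step timings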
On 7/13/15, 5:38 PM, Deneau, Tom tom.den...@amd.com wrote:
I have a cluster where over the weekend something happened and successive
calls to ceph health detail show things
Thank you Lionel,
This was very helpful. I actually chose to split the partition and then
recreated the OSDs. Everything is up and running now.
Rimma
On 7/13/15 6:34 PM, Lionel Bouton wrote:
On 07/14/15 00:08, Rimma Iontel wrote:
Hi all,
[...]
Is there something that needed to be done
Hi,
I have existing hardware which I have to use.
Please suggest how I could implement it accordingly.
Thanks
On Mon, Jul 13, 2015 at 5:51 PM, Shinobu Kinjo shinobu...@gmail.com wrote:
Why do you stick to 32bit?
Kinjo
On Mon, Jul 13, 2015 at 7:35 PM, Daleep Bais daleepb...@gmail.com
Hello,
to quote Sherlock Holmes:
Data, data, data. I cannot make bricks without clay.
That the number of blocked requests is varying is indeed interesting, but
I presume you're more interested in fixing this than dissecting this
particular tidbit?
If so...
Start with the basics, all relevant
Hello everybody,
I was testing a ceph cluster with osd_pool_default_size = 2, and while
rebuilding the OSD on one ceph node, a disk in another node started
getting read errors and ceph kept taking that OSD down. Instead of
executing ceph osd set nodown while the other node was rebuilding, I
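For reference, the flag mentioned here is set and cleared like this (a sketch; nodown keeps the monitors from marking flapping OSDs down and should not be left in place longer than needed):

    ceph osd set nodown      # stop the cluster from marking the flapping OSD down
    ceph osd unset nodown    # clear it once the failing disk has been dealt with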
Hi,
I have a Ruby application which currently talks S3, but I want to have
the application talk native RADOS.
Now looking online I found various Ruby bindings for librados, but none
of them seem official.
What I found:
* desperados: https://github.com/johnl/desperados
* ceph-ruby:
Hi Wido,
I'm the dev of https://github.com/netskin/ceph-ruby and still use it in
production on some systems. It has everything I
need so I didn't develop any further. If you find any bugs or need new
features, just open an issue and I'm happy to
have a look.
Best
Corin
On 13.07.2015 at 21:24
Hi,
I have just expanded our ceph cluster (7 nodes) with one 8TB HGST disk
(changed from 4TB to 8TB) on each node (plus 11 4TB HGST disks).
But I have set the primary affinity to 0 for the 8TB disks... so in this
case my performance values are not 8TB-disk related.
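For reference, setting that looks roughly like this (osd.12 is a placeholder for one of the 8TB OSDs; as far as I remember, firefly/hammer monitors also need mon_osd_allow_primary_affinity enabled first):

    ceph tell mon.\* injectargs '--mon_osd_allow_primary_affinity=true'
    ceph osd primary-affinity osd.12 0    # this OSD is no longer chosen as primary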
Udo
On 08.07.2015 02:28, Blair Bethwaite
inline
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jan
Schermer
Sent: Monday, July 13, 2015 2:32 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] xattrs vs omap
Sorry for reviving an old thread, but could I get some input on
Thanks John. I will back the test down to the simple case of 1 client
without the kernel driver and only running NFS Ganesha, and work forward
till I trip the problem and report my findings.
Eric
On Mon, Jul 13, 2015 at 2:18 AM, John Spray john.sp...@redhat.com wrote:
On 13/07/2015 04:02,
This is to announce that ceph has been packaged for openSUSE 13.2,
openSUSE Factory, and openSUSE Tumbleweed. It is building in the
OpenSUSE Build Service (OBS), filesystems:ceph project, from the
development branch of what will become SUSE Enterprise Storage 2.
On 07/13/2015 09:43 PM, Corin Langosch wrote:
Hi Wido,
I'm the dev of https://github.com/netskin/ceph-ruby and still use it in
production on some systems. It has everything I
need so I didn't develop any further. If you find any bugs or need new
features, just open an issue and I'm happy
On 2015-07-13 12:01, Gregory Farnum wrote:
On Mon, Jul 13, 2015 at 9:49 AM, Ilya Dryomov idryo...@gmail.com wrote:
On Fri, Jul 10, 2015 at 9:36 PM, Jan Pekař jan.pe...@imatic.cz wrote:
Hi all,
I think I found a bug in cephfs kernel client.
When I create directory in cephfs and set layout to
Hi all,
I am trying to set up a three-node ceph cluster. Each node is running
RHEL 7.1 and has three 1TB HDDs for OSDs (sdb, sdc, sdd) and an
SSD partition (/dev/sda6) for the journal.
I zapped the HDDs and used the following to create OSDs:
# ceph-deploy --overwrite-conf osd create
I have a cluster where over the weekend something happened and successive calls
to ceph health detail show things like below.
What does it mean when the number of blocked requests goes up and down like
this?
Some clients are still running successfully.
-- Tom Deneau, AMD
HEALTH_WARN 20
Hi,
I'm running a small CephFS cluster (21 TB, 16 OSDs of different sizes between
400 GB and 3.5 TB) that is used as a file warehouse (both small and big
files).
Every day there are times when a lot of processes running on the client servers
(using either fuse or the kernel client) become
On 07/14/15 00:08, Rimma Iontel wrote:
Hi all,
[...]
Is there something that needed to be done to the journal partition to
enable sharing between multiple OSDs? Or is there something else
that's causing the issue?
IIRC you can't share a volume between multiple OSDs. What you could do
if
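One way that thought could continue (a sketch, assuming the SSD is /dev/sda and partitions 7-9 are free): give each OSD its own small journal partition instead of sharing /dev/sda6, e.g.

    sgdisk --new=7:0:+10G /dev/sda    # journal for the sdb OSD (size illustrative)
    sgdisk --new=8:0:+10G /dev/sda    # journal for the sdc OSD
    sgdisk --new=9:0:+10G /dev/sda    # journal for the sdd OSD

and then re-create each OSD pointing at its own partition (/dev/sda7, /dev/sda8, /dev/sda9).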
On 13 Jul 2015, at 4:58 pm, Abhishek Varshney abhishekvrs...@gmail.com
wrote:
I have a requirement wherein I wish to set up Ceph where hostname resolution
is not supported and I just have IP addresses to work with. Is there a way
through which I can achieve this in Ceph? If yes, what are
Hi,
I have a requirement wherein I wish to set up Ceph where hostname resolution
is not supported and I just have IP addresses to work with. Is there a way
through which I can achieve this in Ceph? If yes, what are the caveats
associated with that approach?
PS: I am using ceph-deploy for
Hi,
Could you try to use the hosts file instead of DNS? Defining all Ceph hosts
in /etc/hosts with their IPs
should solve the problem.
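For illustration, a minimal /etc/hosts sketch (names and addresses made up):

    192.168.10.11   ceph-mon1
    192.168.10.12   ceph-mon2
    192.168.10.13   ceph-mon3
    192.168.10.21   ceph-osd1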
thanks,
Peter Calum
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Abhishek
Varshney
Sent: 13 July 2015 08:59
To:
Hi Peter and Nigel,
I have tried /etc/hosts and it works perfectly fine! But I am looking for
an alternative (if any) to do away completely with hostnames and just use
IP addresses instead.
Thanks
Abhishek
On 13 July 2015 at 12:40, Nigel Williams nigel.d.willi...@gmail.com wrote:
On 13 Jul
Yes: clients need an MDS key that says allow, and an OSD key that
permits it access to the RADOS pool you're using as your CephFS data pool.
If you're already trying that and getting an error, please post the caps
you're using.
Thanks,
John
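For illustration, caps along those lines might be created like this (the client name is a placeholder and the data pool is assumed to be called cephfs_data; adjust to your own pool name):

    ceph auth get-or-create client.cephfs_user \
        mon 'allow r' \
        mds 'allow' \
        osd 'allow rw pool=cephfs_data'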
On 12/07/2015 14:12, Bernhard Duebi wrote:
Hi,
On 13-07-15 09:13, Abhishek Varshney wrote:
Hi Peter and Nigel,
I have tried /etc/hosts and it works perfectly fine! But I am looking
for an alternative (if any) to do away completely with hostnames and
just use IP addresses instead.
It's just that ceph-deploy wants DNS, but if you go
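For what it's worth, the daemons and clients themselves can address the monitors purely by IP in ceph.conf (a sketch, addresses made up):

    [global]
    mon host = 192.168.10.11,192.168.10.12,192.168.10.13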
On 13/07/2015 04:02, Eric Eastman wrote:
Hi John,
I am seeing this problem with Ceph v9.0.1 with the v4.1 kernel on all
nodes. This system is using 4 Ceph FS client systems. They all have
the kernel driver version of CephFS loaded, but none are mounting the
file system. All 4 clients are