This was resolved when I found more versioned libs that were also out of
sync in the Python neighborhood: /usr/local/lib/python2.7/site-packages
This exercise motivated me to install multiple versions of Ceph in
versioned app directories, such as /usr/local/ceph-hammer and
/usr/local/ceph-infernal
Thank you very much, Haomai, and others, too!
ms_crc_header was set consistently across nodes at all times. :)
Root cause of my problems was the mismatched libraries being picked up when
shared libs load. Plain & simple. Next time, I should not allow errant
input from other "cooks in the kitchen"
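The versioned-prefix approach mentioned above can be sketched roughly as follows; the paths and flags are assumptions based on the directories named earlier, not an exact record of what was run:

```shell
# Sketch: one install prefix per Ceph release, so stale shared libs from
# an older build can never shadow a newer one at load time.
PREFIX=/usr/local/ceph-hammer   # e.g. /usr/local/ceph-infernal for another release

# ./configure --prefix="$PREFIX" && make && make install   # build steps elided

# Run a specific release by pointing the loader at its lib dir explicitly:
export LD_LIBRARY_PATH="$PREFIX/lib"
echo "$LD_LIBRARY_PATH"
```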
>>
> Yes, this is definitely an old version of librados getting picked up
> somewhere in your library load path.
>
> You can find where the old librados is via:
>
> strace -f -e open ./rbd ls 2>&1 | grep librados.so
Thanks very much, Josh!
That's a big help.
Indeed! I had a pile of Ceph libs here
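Josh's strace suggestion, plus a couple of related checks, as a hedged sketch (`./rbd` is assumed to be the locally built client binary):

```shell
# Which librados does ./rbd actually open at runtime? (Josh's suggestion;
# newer kernels may report openat instead of open.)
# strace -f -e trace=open,openat ./rbd ls 2>&1 | grep 'librados\.so'

# Which librados would the dynamic linker resolve, without running it?
# ldd ./rbd | grep librados

# Every librados the linker cache knows about (candidates for a stale copy):
ldconfig -p 2>/dev/null | grep librados || echo "no librados in ldconfig cache"
```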
Hello!
I see similar behavior on a build of version 9.2.0-702-g7d926ce
Single node.
Ceph mon is the only service running.
In Ceph configuration file (/etc/ceph/ceph.conf)
ms_crc_header = false
$ ceph -s
2015-11-13 16:06:24.594453 7f6944221700 0 -- 17.10.10.60:0/1019560 >>
17.10.10.60:6789/0 p
Hello...
I want to share some more info about the rbd problem I am having.
I took Jason's suggestion to heart, as there was a past errant configure
run where the prefix was not set, so it installed into /. I never did
clean that up, but today I configured a Ceph Makefile for / and ran
`make unin
>
>
>
> I think we expect to enable header crc at least. If you want to
> disable it, you need to make all osd/client to disable it.
>
>
Thanks so much for the feedback.
It helps as I can see where I should add clarifications.
In my case, I saw it fail with a single monitor node when bringing up
When I run `rbd create` => seg fault
It worked on previous pulls/builds.
I would need to regress/rebuild to provide version info that worked last.
Ubuntu 14.04.3 LTS
Linux ceph-mon-node 3.13.0-65-generic #106-Ubuntu SMP / x86_64 x86_64
x86_64 GNU/Linux
configure --prefix=/usr/local --sysconfdir=
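Haomai's note above (header crc must be disabled everywhere or nowhere) can be sketched as a ceph.conf fragment; the `[global]` placement is an assumption, since the setting would need to match on every mon, osd, and client:

```
[global]
ms_crc_header = false   ; must match on all daemons and clients
```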
Greetings Ceph Users everywhere!
I was hoping to locate an entry for this Ceph configuration setting:
ms_crc_header
Would it be here:
http://docs.ceph.com/docs/master/rados/configuration/ms-ref/
Or perhaps it is deprecated?
I have searched Google but I am not satisfied. ;)
Does the "ms crc header
Hello,
In the RELEASE INFORMATION section of the hammer v0.94.3 issue tracker [1]
the git commit SHA1 is: b2503b0e15c0b13f480f0835060479717b9cf935
On the github page for Ceph Release v0.94.3 [2], when I click on the
"95cefea" link [3]
we see the commit SHA1 of: 95cefea9fd9ab740263bf8bb4796fd864d9
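One common (hedged) explanation for two differing "release" SHA1s is that an annotated tag is its own git object, separate from the commit it points to, and different pages may show either one. A self-contained demo in a throwaway repo (names are illustrative, not Ceph's actual objects):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "release commit"
git -c user.name=demo -c user.email=demo@example.com \
    tag -a v0.94.3 -m "annotated release tag"
echo "tag object:    $(git rev-parse v0.94.3)"
echo "tagged commit: $(git rev-parse 'v0.94.3^{commit}')"
# The two SHA1s differ even though both identify the same release.
```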
Hello,
I have one ceph cluster that works fine and one that is not starting.
On the VM cluster Ceph works OK. On my native hardware Ceph does not start.
The OS is the same: a recently updated Ubuntu 14
Following the *exact* same procedure as the cluster that is working, as
the fetch/checkout/build/post-ins
My inquiry may be a fundamental Linux thing and/or may require basic
Ceph guidance.
According to the CBT ReadMe -- https://github.com/ceph/cbt
Currently CBT looks for specific partition labels in
/dev/disk/by-partlabel for the Ceph OSD data and journal partitions.
...each OSD host parti
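A hedged sketch of setting such labels with sgdisk; the `osd-device-<n>-data` / `osd-device-<n>-journal` naming follows the CBT README, but the device, partition numbers, and exact scheme here are assumptions, and the sgdisk lines are commented out because they rewrite partition tables:

```shell
i=0
data_label="osd-device-${i}-data"
journal_label="osd-device-${i}-journal"
# sudo sgdisk --change-name=1:"$data_label"    /dev/sdb   # assumption: /dev/sdb
# sudo sgdisk --change-name=2:"$journal_label" /dev/sdb
# ls /dev/disk/by-partlabel/    # where CBT looks the labels up
echo "$data_label $journal_label"
```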
Hello Ceph-users!
This is my first attempt at getting ceph running.
Does the following, in isolation, indicate any potential troubleshooting
directions
# ceph -s
2015-10-15 18:12:45.586529 7fc86041b700 0 -- :/1006343 >>
10.10.20.60:6789/0 pipe(0x7fc85c00d4c0 sd=3 :0 s=1 pgs=0 cs=0 l=1
c=0x7fc85