The lack of developer response (I reported the issue on Jun 4) leads me to
believe that it's not a trivial problem and we should all be getting prepared
for a hard time playing with osdmaptool...
On Jun 9, 2018, 02:10 +0300, Paul Emmerich wrote:
> Hi,
>
> we are also seeing this (I've also posted
Hi,
we are also seeing this (I've also posted to the issue tracker). It only
affects clusters upgraded from Luminous, not new ones.
Also, it's not about re-using OSDs. Deleting any OSD seems to trigger this
bug for all new OSDs on upgraded clusters.
We are still using the pre-Luminous way to remove OSDs ...
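(For readers who aren't sure what the two procedures look like, here is a rough
sketch of the difference; the OSD id is a placeholder, not taken from this thread:)

# pre-Luminous style removal, step by step
ceph osd out <id>                # wait for data to rebalance away
systemctl stop ceph-osd@<id>
ceph osd crush remove osd.<id>
ceph auth del osd.<id>
ceph osd rm <id>

# Luminous and later collapse this into a single command
ceph osd purge <id> --yes-i-really-mean-it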
I'm getting the same issue.
I increased the volume size from 1GB to 10GB and that did the trick. Thanks
for the hint!
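(For anyone hitting the same error: the fix amounts to recreating the LV with a
larger size and re-running ceph-volume. A minimal sketch, reusing the cah_foo/ceph
names from later in this thread; the sizes are simply what worked here:)

lvremove -y cah_foo/ceph                     # drop the 1G LV that was too small
lvcreate -L 10G -n ceph cah_foo              # recreate it with more room
ceph-volume lvm create --data cah_foo/ceph   # bluestore setup now has enough space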
On Fri, Jun 8, 2018 at 1:30 PM, Alfredo Deza wrote:
>
>
> On Fri, Jun 8, 2018 at 3:59 PM, Rares Vernica wrote:
>
>> Thanks, I will try that.
>>
>> Just to verify I don't need to create a file system or any
On Fri, Jun 8, 2018 at 3:59 PM, Rares Vernica wrote:
> Thanks, I will try that.
>
> Just to verify I don't need to create a file system or any partition table
> on the volume, right? Ceph seems to be trying to create the file system.
>
Right, no need to do anything here for filesystems.
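(To make that concrete: ceph-volume expects the bare logical volume and does all
the formatting itself. A sketch using the cah_foo/ceph LV from this thread, with
no mkfs and no partition table beforehand:)

lvcreate -L 10G -n ceph cah_foo               # a plain LV is all that is needed
ceph-volume lvm prepare --data cah_foo/ceph   # ceph-volume writes the bluestore metadata itself
ceph-volume lvm activate --all                # or use 'lvm create' to do both steps at once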
>
> On
Hi everyone,
I appreciate the suggestions. However, this is still an issue. I've tried
adding the OSD using ceph-deploy, and manually from the OSD host. I'm not able
to start newly added OSDs at all, even if I use a new ID. It seems the OSD is
added to Ceph but I cannot start it. OSDs that exist ...
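(A generic checklist that may help others debugging the same symptom; <id> is a
placeholder, and none of this is specific to this cluster:)

ceph-volume lvm list                 # confirm the OSD was prepared and which LV/ID it maps to
ceph-volume lvm activate --all       # recreate the tmpfs mount and enable the systemd unit
systemctl status ceph-osd@<id>       # see how/why the daemon exits
journalctl -u ceph-osd@<id> -n 50    # recent log lines from the failed start
ceph osd tree                        # check that the new id is present and where it landed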
Thanks, I will try that.
Just to verify, I don't need to create a file system or any partition table
on the volume, right? Ceph seems to be trying to create the file system.
On Fri, Jun 8, 2018 at 12:56 PM, Alfredo Deza wrote:
>
>
> On Fri, Jun 8, 2018 at 3:17 PM, Rares Vernica wrote:
>
>> Yes,
On Fri, Jun 8, 2018 at 3:17 PM, Rares Vernica wrote:
> Yes, it exists:
>
> # ls -ld /var/lib/ceph/osd/ceph-0
> drwxr-xr-x. 2 ceph ceph 6 Jun 7 15:06 /var/lib/ceph/osd/ceph-0
> # ls -ld /var/lib/ceph/osd
> drwxr-x---. 4 ceph ceph 34 Jun 7 15:59 /var/lib/ceph/osd
>
> After I ran the ceph-volume c
Yes, it exists:
# ls -ld /var/lib/ceph/osd/ceph-0
drwxr-xr-x. 2 ceph ceph 6 Jun 7 15:06 /var/lib/ceph/osd/ceph-0
# ls -ld /var/lib/ceph/osd
drwxr-x---. 4 ceph ceph 34 Jun 7 15:59 /var/lib/ceph/osd
After I ran the ceph-volume command, I see the directory is mounted:
# mount
...
tmpfs on /var/li
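(For comparison, on a working bluestore OSD the activation ends with a small tmpfs
mounted on the OSD directory; roughly what one would expect to see for osd.0, based
on the listing above:)

mount | grep ceph-0
# tmpfs on /var/lib/ceph/osd/ceph-0 type tmpfs (rw,relatime)
ls /var/lib/ceph/osd/ceph-0
# block  ceph_fsid  fsid  keyring  ready  type  whoami   (typical contents; block is a symlink to the LV)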
On Fri, Jun 8, 2018 at 2:47 PM, Rares Vernica wrote:
> Hi,
>
> I'm following the Manual Deployment guide at
> http://docs.ceph.com/docs/master/install/manual-deployment/ I'm not able
> to move past the ceph-volume lvm create part. Here is what I do:
>
> # lvcreate -L 1G -n ceph cah_foo
> Logica
Hi,
I'm following the Manual Deployment guide at
http://docs.ceph.com/docs/master/install/manual-deployment/ and I'm not able
to move past the ceph-volume lvm create part. Here is what I do:
# lvcreate -L 1G -n ceph cah_foo
Logical volume "ceph" created.
# ceph-volume lvm create --data cah_foo/ce
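(When ceph-volume lvm create fails like this, the full traceback is usually easier
to read in its own log than in the console output; a generic place to start looking:)

tail -n 100 /var/log/ceph/ceph-volume.log     # full output of the failed create
ceph-volume lvm list cah_foo/ceph             # shows whether the LV was already tagged as an OSD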
Hi all,
Maybe this will help:
The issue is with shards 3, 4 and 5 of PG 6.3f:
Logs of OSDs 16, 17 & 36 (the ones crashing on startup).
*Log OSD.16 (shard 4):*
2018-06-08 08:35:01.727261 7f4c585e3700 -1 bluestore(/var/lib/ceph/osd/ceph-16)
_txc_add_transaction error (2) No such file or directory ...
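(For anyone following the tracker issues referenced below: the workaround usually
discussed there is to export and then remove the broken shard with
ceph-objectstore-tool while the OSD is stopped. This is only a sketch with
placeholder paths and shard ids, and the remove step is destructive, so keep the
export and confirm on the tracker before running anything like it:)

systemctl stop ceph-osd@16
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-16 \
    --pgid 6.3fs4 --op export --file /root/6.3fs4.export    # save a copy of the shard first
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-16 \
    --pgid 6.3fs4 --op remove --force                       # then let the cluster backfill it
systemctl start ceph-osd@16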
- ceph-disk was replaced for two reasons: (1) Its design was
centered around udev, and it was terrible. We have been plagued for years
with bugs due to race conditions in the udev-driven activation of OSDs,
mostly variations of "I rebooted and not all of my OSDs started." It's
horrible to observe ...
Hi all,
I seem to be hitting these tracker issues:
https://tracker.ceph.com/issues/23145
http://tracker.ceph.com/issues/24422
PGs 6.1 and 6.3f are having the issues.
When I list all PGs of a down OSD with:
ceph-objectstore-tool --dry-run --type bluestore --data-path
/var/lib/ceph/osd/ceph-17/
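(The command above is cut off in this excerpt; an assumed completion, if the goal
is to list the PGs present on the stopped OSD, would be the list-pgs op:)

ceph-objectstore-tool --dry-run --type bluestore \
    --data-path /var/lib/ceph/osd/ceph-17/ --op list-pgs    # --op list-pgs is an assumed completion, not from the original mail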
On Fri, 8 Jun 2018, Alfredo Deza wrote:
> On Fri, Jun 8, 2018 at 8:13 AM, Sage Weil wrote:
> > I'm going to jump in here with a few points.
> >
> > - ceph-disk was replaced for two reasons: (1) Its design was
> > centered around udev, and it was terrible. We have been plagued for years
> > with
On Fri, Jun 8, 2018 at 8:13 AM, Sage Weil wrote:
> I'm going to jump in here with a few points.
>
> - ceph-disk was replaced for two reasons: (1) Its design was
> centered around udev, and it was terrible. We have been plagued for years
> with bugs due to race conditions in the udev-driven activation ...
I'm going to jump in here with a few points.
- ceph-disk was replaced for two reasons: (1) Its design was
centered around udev, and it was terrible. We have been plagued for years
with bugs due to race conditions in the udev-driven activation of OSDs,
mostly variations of "I rebooted and not all of my OSDs started." ...
http://docs.ceph.com/docs/master/ceph-volume/simple/
?
Only 'scan' & 'activate'. Not 'create'.
k
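(For context on the 'scan' and 'activate' point: the ceph-volume simple subcommands
take over existing ceph-disk OSDs without touching their data, which is how the udev
dependency is removed without redeploying. A sketch, with /dev/sdb1 as a placeholder
data partition:)

ceph-volume simple scan /dev/sdb1       # captures the OSD's metadata into /etc/ceph/osd/<id>-<fsid>.json
ceph-volume simple activate --all       # mounts and starts the scanned OSDs via systemd, no udev involved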
On Fri 8 June 2018 at 12:35, Marc Roos wrote:
>
> I am getting the impression that not everyone understands the subject
> that has been raised here.
>
Or they do, and they do not agree with your vision of how things should be
done.
That is a distinct possibility one has to consider when using so ...
> Answers:
> - unify setup, support for crypto & more
Unify setup by adding a dependency? Isn't there already support for
crypto now?
> - none
The costs of LVM can be argued: an extra layer to go through is worse
than no layer to go through.
https://www.researchgate.net/publication/284897
Beuh ...
I have other questions:
- why not use LVM and stick with direct disk access?
- what is the cost of LVM (performance, latency, etc.)?
Answers:
- unify setup, support for crypto & more (see the sketch below)
- none
TL;DR: that technical choice is fine; nothing to argue about.
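(As one concrete illustration of the crypto point: with ceph-volume, encryption is
part of the same workflow via a --dmcrypt flag. A minimal sketch, reusing the
cah_foo/ceph LV name from earlier in this digest:)

ceph-volume lvm create --dmcrypt --data cah_foo/ceph   # same create path, the LV is just wrapped in dm-crypt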
On 06/08/2018 07:15 AM, Marc Ro
I am getting the impression that not everyone understands the subject
that has been raised here.
Why do OSDs need to go via LVM, and why not stick with direct disk
access as it is now?
- Bluestore was created to cut out some fs overhead,
- everywhere 10Gb is recommended because of better latency ...
http://docs.ceph.com/docs/master/ceph-volume/simple/
?
From: ceph-users On Behalf Of Konstantin Shalygin
Sent: 08 June 2018 11:11
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Why the change from ceph-disk to ceph-volume and lvm?
(and just not stick with direct disk access)
Wh
What is the reasoning behind switching to LVM? Does it make sense to go
through (yet) another layer to access the disk? Why create this
dependency and added complexity? It is fine as it is, is it not?
In fact, the question is why one tool is replaced by another without
preserving functionality.
Why
On Fri, Jun 8, 2018 at 6:37 AM, Tracy Reed wrote:
> On Thu, Jun 07, 2018 at 09:30:23AM PDT, Jason Dillaman spake thusly:
>> I think what Ilya is saying is that it's a very old RHEL 7-based
>> kernel (RHEL 7.1?). For example, the current RHEL 7.5 kernel includes
>> numerous improvements that have b
Hi Pardhiv,
On 06/08/2018 05:07 AM, Pardhiv Karri wrote:
We recently added a lot of nodes to our Ceph clusters. To mitigate a lot
of problems (we are using the tree algorithm) we first added an empty node
to the crushmap and then added OSDs with zero weight, made sure the ceph
health was OK, and then ...
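(For anyone wanting to follow the same approach, a rough sketch of the commands
involved; node13 and osd.42 are placeholders, not taken from this cluster:)

ceph osd crush add-bucket node13 host        # add the empty host bucket first
ceph osd crush move node13 root=default      # place it in the hierarchy before it has any OSDs

# in ceph.conf on the new host, so freshly created OSDs start with zero crush weight:
# [osd]
# osd crush initial weight = 0

ceph osd crush reweight osd.42 0.5           # then raise weights in small steps, waiting for HEALTH_OK each time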
Hi all,
In our current production cluster we have the following CRUSH
hierarchy, see https://pastebin.com/640Q4XSH or the attached image.
This reflects the real physical deployment 1:1. We also currently use a
replication factor of 3 with the following CRUSH rule on our pools:
rule hdd_replicated {
id
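(The rule is cut off in this excerpt; for reference, a typical Luminous-style
replicated rule limited to a device class looks roughly like the following. The id
and class are illustrative, not copied from this cluster:)

rule hdd_replicated {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class hdd
        step chooseleaf firstn 0 type host
        step emit
}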