n are you running?
>
>
> Thanks,
>
> Igor
> On 10/8/2019 3:52 PM, Lazuardi Nasution wrote:
Hi,
Does anybody have the same problem as mine?
Best regards,
Hi,
I get the following weird negative object count on tiering. Why is this
happening? How do I get it back to normal?
Best regards,
[root@management-a ~]# ceph df detail
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED     OBJECTS
    446T     184T      261T         58.62         22092k
POOLS:
Hi,
While inspecting a newly installed cluster (Nautilus), I found the following
result. The ssd-test pool is a cache pool for the hdd-test pool. After
running some RBD bench runs and deleting all RBD images used for
benchmarking, there are still some hidden objects inside both pools (except
rbd_directory, rbd_info and
Hi,
Is there any way to query the list of dirty objects inside a tier/hot pool?
I only know how to see their number per pool.
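For context, the per-pool number I mean comes from "ceph df detail"; a hedged
sketch (the pool name ssd-test is an assumption, and I am not aware of a
direct per-object dirty listing):

```shell
# Per-pool DIRTY object counts show up in "ceph df detail" (cache pools only).
ceph df detail

# I am not aware of a direct "list dirty objects" command; listing all
# objects in the hot pool is the closest approximation.
rados -p ssd-test ls
```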
Best regards,
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hi,
Is the 4MB default RBD object size still relevant for BlueStore OSDs? Is
there any guideline for the best RBD object size on BlueStore OSDs,
especially on high performance media (SSD, NVMe)?
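For reference, the object size can be set per image at creation time; a
hedged sketch (pool and image names are made-up examples):

```shell
# Create an image with a non-default object size instead of the 4M default.
rbd create --size 100G --object-size 8M mypool/testimage

# Confirm the resulting object size / order.
rbd info mypool/testimage
```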
Best regards,
Hi,
I'm starting with Nautilus and have created and deleted some pools. When I
check with "ceph status" I find something weird with the "pools" number
after all pools have been deleted. Is the meaning of the "pools" number
different than in Luminous? As there is no pool and no PG, why is there
usage in "ceph
senger on Nautilus in the future.
>
>
>
> Thanks,
>
> Orlando
Hi Orlando and Haodong,
Has there been any response to this thread? I'm interested in this too.
Best regards,
Date: Fri, 26 Jan 2018 21:53:59 +
> From: "Moreno, Orlando"
> To: "ceph-users@lists.ceph.com" , Ceph
> Development
> Cc: "Tang, Haodong"
> Subject: [ceph-users] Ceph OSDs
Hi David and Justinas,
I'm interested in this old thread. Has it been solved? Would you mind
sharing the solution and a reference regarding David's statement about some
threads on the ML about RDMA?
Best regards,
> Date: Fri, 02 Mar 2018 06:12:18 +
> From: David Turner
> To: Justinas
think it is
just for cosmetic purposes only. It seems that I have to do it with manual
VG and LV creation if I want to do it with ceph-ansible.
Best regards,
On Sat, May 11, 2019, 02:42 Alfredo Deza wrote:
> On Fri, May 10, 2019 at 3:21 PM Lazuardi Nasution
> wrote:
> >
Hi,
Let's say I have the following devices on a host.
/dev/sda
/dev/sdb
/dev/nvme0n1
How can I do a ceph-volume batch which creates BlueStore OSDs on the HDDs
and on the NVMe (divided into 4 OSDs), and puts the block.db of the HDDs on
the NVMe too? Following is what I'm expecting for the created LVs.
/dev/sda: DATA0
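As far as I know, a single ceph-volume batch call cannot express this layout,
so a manual VG/LV route is needed; a hedged sketch (VG/LV names and sizes are
made-up examples, not a recommendation):

```shell
# Carve the NVMe into LVs: 4 OSD data LVs plus 2 small block.db LVs.
vgcreate vg_nvme /dev/nvme0n1
for i in 0 1 2 3; do lvcreate -L 300G -n osd_data_$i vg_nvme; done
lvcreate -L 60G -n db_sda vg_nvme
lvcreate -L 60G -n db_sdb vg_nvme

# HDD-backed OSDs with block.db on the NVMe LVs.
ceph-volume lvm create --bluestore --data /dev/sda --block.db vg_nvme/db_sda
ceph-volume lvm create --bluestore --data /dev/sdb --block.db vg_nvme/db_sdb

# NVMe-backed OSDs on the remaining LVs.
for i in 0 1 2 3; do
  ceph-volume lvm create --bluestore --data vg_nvme/osd_data_$i
done
```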
regating them physically.
Hi,
I'm looking for information about where the RDMA messaging of Ceph happens:
on the cluster network, the public network, or both (it seems both, CMIIW)?
I'm talking about the configuration of ms_type, ms_cluster_type and
ms_public_type.
In the case of OpenStack integration with RBD, which of the above three is
Hi,
I'm looking for an example Ceph configuration and topology for a full
layer 3 networking deployment. Maybe all daemons can use loopback alias
addresses in this case. But how do I set the cluster network and public
network configuration, using a supernet? I think using loopback alias
addresses can prevent the
Hi,
What is the best way to proceed if the network segments differ between OSD
to OSD, OSD to MON, and OSD to client due to some networking policy? What
should I put for public_addr and cluster_addr? Is it simply "as is",
depending on the connected network segments of each OSD and MON? If it is not
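For illustration, a hedged ceph.conf sketch of the supernet plus per-daemon
address idea (all addresses are made-up examples, not a recommendation):

```
[global]
# Supernets wide enough to cover every segment the daemons sit on.
public_network  = 10.0.0.0/16
cluster_network = 10.1.0.0/16

[osd.0]
# Per-daemon override when a daemon sits on an unusual segment.
public_addr  = 10.0.3.1
cluster_addr = 10.1.3.1
```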
Hi Jason,
I understand. Thank you for your explanation.
Best regards,
On Mar 9, 2018 3:45 AM, "Jason Dillaman" <jdill...@redhat.com> wrote:
> On Thu, Mar 8, 2018 at 3:41 PM, Lazuardi Nasution
> <mrxlazuar...@gmail.com> wrote:
> > Hi Jason,
> >
> >
Hi Jason,
If there is a case where the gateway cannot access Ceph, I think you are
right. Anyway, I put the iSCSI gateway on the MON nodes.
Best regards,
On Mar 9, 2018 1:41 AM, "Jason Dillaman" <jdill...@redhat.com> wrote:
On Thu, Mar 8, 2018 at 12:47 PM, Lazuardi Nas
On Mar 9, 2018 12:35 AM, "Jason Dillaman" <jdill...@redhat.com> wrote:
On Thu, Mar 8, 2018 at 11:59 AM, Lazuardi Nasution
<mrxlazuar...@gmail.com> wrote:
> Hi Mike,
>
> Since I have moved from LIO to TGT, I can do full ALUA (active/active) of
> multiple gateway
Hi,
I want to do load-balanced multipathing (multiple iSCSI gateway/exporter
nodes) of iSCSI backed by RBD images. Should I disable the exclusive-lock
feature? What if I don't disable that feature? I'm using TGT (the manual
way) since I got so many CPU stuck error messages when I was using LIO.
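For context, the feature toggle in question looks like this (pool/image
names are examples; my understanding, which I'd like confirmed, is that
object-map, fast-diff and journaling depend on exclusive-lock and must be
disabled first if enabled):

```shell
# See which features are currently enabled on the image.
rbd info rbd/iscsi-lun0

# Disable dependent features first, then exclusive-lock itself.
rbd feature disable rbd/iscsi-lun0 object-map fast-diff journaling
rbd feature disable rbd/iscsi-lun0 exclusive-lock
```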
Best
Hi,
I'm still looking for the answers to these questions. Maybe someone can
share their thoughts on them. Any comment will be helpful too.
Best regards,
On Sat, Sep 16, 2017 at 1:39 AM, Lazuardi Nasution <mrxlazuar...@gmail.com>
wrote:
> Hi,
>
> 1. Is it possible configu
Hi,
1. Is it possible to configure osd_data not as a small partition on the OSD
but as a folder (e.g. on the root disk)? If yes, how do I do that with
ceph-disk, and what are the pros/cons of doing so?
2. Is the WAL & DB size calculated based on OSD size or on expected
throughput, as with the journal device of FileStore? If not,
ach OSD’s
> journal.
>
> So I have an NVMe drive that serves as a journal for 10 OSD’s, thus I have
> 10 separate partitions, 1 for each OSD.
>
> Hope that helps,
>
> Reed
>
> > On Jun 7, 2017, at 1:54 PM, Lazuardi Nasution <mrxlazuar...@gmail.com>
> wrote
Hi,
Is it possible to have a single external journal for more than one OSD
without doing any partitioning, or with at most a single partition on the
journal disk? For example, I want to have a single SSD as an external
journal for several OSD HDDs but without doing any partitioning, or with at
most a single
Hi,
I'm looking for a way to update an erasure profile when adding nodes. Let's
say at first I have 5 OSD nodes with a 3+2 erasure profile, so all chunks,
including the coding chunks, will be spread over all OSD nodes. In the
future, let's say I add 2 OSD nodes and I want to have a 5+2
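For reference, as far as I know the profile of an existing pool can't be
changed in place; the sketch below assumes a new-profile-plus-new-pool
migration (profile/pool names and PG counts are made-up examples):

```shell
# Define the new 5+2 profile and a pool that uses it.
ceph osd erasure-code-profile set ec-5-2 k=5 m=2 crush-failure-domain=host
ceph osd pool create ecpool-new 128 128 erasure ec-5-2
# Data then has to be migrated, e.g. "rados cppool" or at the application level.
```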
Hi,
For use with OpenStack Cinder multi-attached volumes, is it possible to
disable the RBD cache for specific multi-attached volumes only? Single
attached volumes still need the RBD cache enabled for better performance.
If I disable the RBD cache in /etc/ceph/ceph.conf, is
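One per-image possibility I'd like confirmed is the image-meta config
override (pool/volume names are examples; librbd honoring conf_* overrides
is an assumption that may depend on the release):

```shell
# Override the client-side cache for one image only.
rbd image-meta set volumes/multi-attach-vol conf_rbd_cache false

# Verify the override.
rbd image-meta list volumes/multi-attach-vol
```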
, 22 Dec 2016 09:12:44 +0700 Lazuardi Nasution wrote:
>
> > Hi Christian,
> >
> > Thank you for your explanation. Based on your suggestion, I have put it in
> > writeback cache-mode. But currently the write ops are much better than the
> > read ops. I mean "dd if=/
here?
Best regards,
On Dec 22, 2016 08:57, "Christian Balzer" <ch...@gol.com> wrote:
>
> Hello,
>
Hi,
I'm looking for a way to set up a read-only cache tier where objects
updated in the backing store are evicted from the cache (if present there)
and promoted to the cache again on the next read miss. This would keep the
cache from ever containing stale objects. Which cache mode
s: transactions per sec: 7550
> Rick
Hi,
I'm looking for the pros and cons of mounting /var/lib/mysql via CephFS or
RBD for the best performance. MySQL stores data as files in most
configurations, but the I/O is block access because the files stay open
while MySQL is up. This case gives us both options for storing the data
files. For
Hi,
I have retried the test, but with the FUSE CephFS client. It seems
everything is OK. Any explanation? Is the kernel CephFS client less
featured (more limited) and/or less stable than the FUSE CephFS client, as
with RBD?
Best regards,
> Date: Thu, 25 Aug 2016 00:25:22 +0700
> From: Lazuardi Na
caps: [mon] allow *
caps: [osd] allow *
Best regards,
On Thu, Aug 25, 2016 at 8:15 PM, Yan, Zheng <uker...@gmail.com> wrote:
> On Thu, Aug 25, 2016 at 11:12 AM, Lazuardi Nasution
> <mrxlazuar...@gmail.com> wrote:
> > Hi,
> >
> > My setup is defaul
Hi,
I have rebooted the Nova compute node that had the D-state process, so I
cannot check that anymore.
Best regards,
On Thu, Aug 25, 2016 at 7:58 PM, Yan, Zheng <uker...@gmail.com> wrote:
> On Thu, Aug 25, 2016 at 11:12 AM, Lazuardi Nasution
> <mrxlazuar...@gmail.com> wrote:
> > Hi
Hi Gregory,
Since I have mounted it via /etc/fstab, it is of course the kernel client.
Which log do you mean? I cannot find anything related in dmesg.
Best regards,
On Aug 25, 2016 00:46, "Gregory Farnum" <gfar...@redhat.com> wrote:
> On Wed, Aug 24, 2016 at 10:25 AM
Hi,
I have a problem with CephFS when writing big files. I found that my
OpenStack Nova backup stopped working after I changed the RBD-based mount
of /var/lib/nova/instances/snapshots to a CephFS-based one (mounted via
/etc/fstab on all Nova compute nodes). I couldn't realize the cause until I
tried
) on top
of CephFS (different directory) or just directly mounts the same directory
of CephFS?
Best regards,
On Wed, Aug 3, 2016 at 11:14 PM, John Spray <jsp...@redhat.com> wrote:
> On Wed, Aug 3, 2016 at 5:10 PM, Lazuardi Nasution
> <mrxlazuar...@gmail.com> wrote:
> >
Hi,
I'm looking for an example of what to put in /etc/fstab if I want to
auto-mount CephFS with failover multi-MDS (only one MDS active), especially
with Jewel. My target is to build load-balanced file/web servers with a
CephFS backend.
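A hedged /etc/fstab sketch (monitor addresses, secret file and mount point
are made-up examples); my understanding is that the kernel client fails over
across the listed MONs and follows whichever MDS is active on its own:

```
# All MONs listed so the mount survives a MON failure; MDS failover is automatic.
10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0
```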
Best regards,
set_period will helpful?
Best regards,
On Thu, Jun 23, 2016 at 7:23 AM, Christian Balzer <ch...@gol.com> wrote:
>
> Hello,
>
> On Wed, 22 Jun 2016 15:40:40 +0700 Lazuardi Nasution wrote:
>
> > Hi Christian,
> >
> > If I have several cache pool on the same SS
Available
size? If different, how can I know whether one cache pool needs more size
than another?
Best regards,
Date: Mon, 20 Jun 2016 09:34:05 +0900
> From: Christian Balzer <ch...@gol.com>
> To: ceph-users@lists.ceph.com
> Cc: Lazuardi Nasution <mrxlazuar...@gmail.com>
> Subje
Hi,
Is it possible to do cache tiering for several storage pools with the same
cache pool? What will happen if the cache pool is broken, or at least
doesn't have quorum, while the storage pool is OK?
Best regards,
To: ceph-users@lists.ceph.com, Lazuardi Nasution
> <mrxlazuar...@gmail.com>
> Subject: Re: [ceph-users] RBD Stripe/Chunk Size (Order Number) Pros
> Cons
t; small IO.
> >
> > Basically the answer is that there are pluses and minuses, and the exact
> > behavior will depend on your kernel configuration, hardware, and use
> > case. I think 4MB has been a fairly good default thus far (might change
> > with bluestore), but t
Hi,
I'm looking for some pros and cons related to the RBD stripe/chunk size
indicated by the image order number. The default is 4MB (order 22), but
OpenStack uses 8MB (order 23) as its default. If we use a smaller size
(lower order number), isn't there more chance that image objects are spread
across OSDs and
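As a reminder of the arithmetic, the object size is 2^order bytes; a quick
sketch:

```shell
# Object size in bytes for a given order: 2^order.
order=22
objsize=$((1 << order))
echo "$objsize"    # 4194304 bytes = 4 MiB

# Objects backing a fully written 10 GiB image at that object size.
echo $(( 10 * 1024 * 1024 * 1024 / objsize ))    # 2560
```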
nagement-b ~]# rados -p rbd lssnap
0 snaps
[root@management-b ~]# rados -p rbd_cache lssnap
0 snaps
Best regards,
Hi,
Is there any relation between the PGs of a cache pool and the PGs of its
storage pool in cache tiering? Is it mandatory to set pg(p)_num of the
cache pool and the storage pool to be equal?
Best regards,
Hi,
I have removed cache tiering due to a "missing hit_sets" warning. After
removing it, I want to try adding the tier again with the same cache pool
and storage pool, but I can't, even though the cache pool is empty or
forced to be cleared. Following is some output. How can I deal with this?
Is it possible to
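For reference, the re-add sequence I'm attempting is roughly this (pool
names and hit_set values are examples):

```shell
# Re-attach the cache pool and restore writeback mode.
ceph osd tier add rbd rbd_cache
ceph osd tier cache-mode rbd_cache writeback
ceph osd tier set-overlay rbd rbd_cache

# hit_set settings the original warning complained about.
ceph osd pool set rbd_cache hit_set_type bloom
ceph osd pool set rbd_cache hit_set_count 1
ceph osd pool set rbd_cache hit_set_period 3600
```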
The cluster should have
> automatically healed after the OSDs were marked out of the cluster .
> Else this will be a manual process for us every time the disk fails
> which is very regular.
>
> Thanks
> Gaurav
>
> On Tue, May 17, 2016 at 11:06 AM, Lazuardi Nasut
> What could be the difference between two scenarions :
>
> 1. OSD down due to H/W failure.
> 2. OSD daemon killed .
>
> When I remove the 12 osds from the crushmap manually or do ceph osd
> crush remove for those osds, the cluster recovers just fine.
>
> Thanks
> Gaurav
Hi Wido,
The 75% happened on 4 nodes of 24 OSDs each, with a pool size of two and a
minimum size of one. Is there any relation between this configuration and
the 75%?
Best regards,
On Tue, May 17, 2016 at 3:38 AM, Wido den Hollander <w...@42on.com> wrote:
>
> > On 14 May 2016 at 12:36, La
wrote:
>
> > On 13 May 2016 at 11:55, Lazuardi Nasution <mrxlazuar...@gmail.com> wrote:
> >
> >
> > Hi Wido,
> >
> > The status is the same after 24 hours of running. It seems that the
> > status will not go to fully active+clean until all down
,
On Fri, May 13, 2016 at 4:44 PM, Wido den Hollander <w...@42on.com> wrote:
>
> > On 13 May 2016 at 11:34, Lazuardi Nasution <mrxlazuar...@gmail.com> wrote:
> >
> >
> > Hi,
> >
> > After disaster and restarting for automatic recover
Hi,
After a disaster and restarting for automatic recovery, I found the
following ceph status. Some OSDs cannot be restarted due to file system
corruption (it seems that XFS is fragile).
[root@management-b ~]# ceph status
cluster 3810e9eb-9ece-4804-8c56-b986e7bb5627
health HEALTH_WARN
Hi,
Suddenly some of our Infernalis OSD nodes went down with a "kernel: BUG:
soft lockup" message. Nothing could be done after that until rebooting.
When I did recovery by restarting the down OSDs one by one, while adding
additional OSDs too, I got the same error again on the same nodes. I'm not
sure which
Hi,
How can I make Ceph keep working if the cluster loses all OSD hosts except
one in a disaster, where the capacity of the single host is bigger than the
total used data? I need to minimize downtime while recovering/reinstalling
the lost hosts. More generally, how can I make Ceph choose the highest
number of types
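For context, the knob involved is the CRUSH failure domain; a hedged
crushmap fragment in classic crushtool syntax (rule name and numbers are
made-up examples):

```
rule replicated_host {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    # "type host" spreads replicas across hosts; this is the constraint
    # that blocks placement when only one host survives.
    step chooseleaf firstn 0 type host
    step emit
}
```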
; SSD volume internal to the VM.
>
> --
>
> Jason Dillaman
>
>
> - Original Message -
>
> > From: "Lazuardi Nasution" <mrxlazuar...@gmail.com>
> > To: ceph-users@lists.ceph.com
> > Sent: Sunday, November 8, 2015 12:34:16 PM
> > Subje
Hi,
I'm new to Ceph cache tiering. Is it possible to have multiple cache pools
with a single backing storage pool? For example, I have some compute nodes,
each with its own local SSDs. I want each compute node to have its own
cache using its own local SSDs. The target is to minimize load on the
storage pool
Hi,
I'm looking for a recommended client-side local SSD caching strategy on
OpenStack Compute where the backend storage is a Ceph cluster. The target
is to reduce compute-to-storage traffic. I am not aware that librbd
supports local SSD caching. Besides, I'm not sure whether block SSD caching
of local
Hi,
I'm looking for the pros and cons of combining MON and OSD functionality on
the same nodes. The mostly recommended configuration is to have dedicated
MON nodes in odd numbers. What I'm thinking of is more like a single-node
deployment, but with more than one node: if we have 3 nodes, we have 3 MONs
with 3
Hi,
I prefer to use bcache or a similar local write-back cache on SSD since it
is only related to the local HDDs. I think it will reduce the risk of
errors on cache flushing compared to Ceph cache tiering, which still uses
the cluster network for flushing. After the data has been written to the
On 07/06/2014 02:42 PM, Lazuardi Nasution wrote:
Hi,
Is it possible to combine MDS and MON or OSD inside the same node? Which
one is better, MON with MDS or OSD with MDS?
Yes, not a problem. Be aware, the MDS can be memory hungry
Hi,
I'm thinking of using SSDs as cache on Ceph where the SSDs are on the same
OSD nodes as the HDDs. My options are Ceph cache tiering or another caching
layer like bcache or FlashCache. With the second option, the SSDs will only
cache data related to the HDDs on the same node.
Any
Hi,
Is it possible to combine MDS with MON or OSD on the same node? Which is
better, MON with MDS or OSD with MDS?
How do I configure OSD and MDS to allow two kinds of public network
connections, standard (1 GbE) and jumbo (10 GbE)? I want to take advantage
of jumbo frames for some