Re: [ceph-users] Ceph Negative Objects Number

2019-10-14 Thread Lazuardi Nasution
n are you running? > > > Thanks, > > Igor > On 10/8/2019 3:52 PM, Lazuardi Nasution wrote: > > Hi, > > I get following weird negative objects number on tiering. Why is this > happening? How to get back to normal? > > Best regards, > > [root@management-a ~]#

Re: [ceph-users] Ceph Negative Objects Number

2019-10-14 Thread Lazuardi Nasution
Hi, Anybody having the same problem as my case? Best regards, On Tue, Oct 8, 2019, 19:52 Lazuardi Nasution wrote: > Hi, > > I get following weird negative objects number on tiering. Why is this > happening? How to get back to normal? > > Best regards, > > [root@managem

[ceph-users] Ceph Negative Objects Number

2019-10-08 Thread Lazuardi Nasution
Hi, I get the following weird negative objects number on tiering. Why is this happening? How can I get back to normal? Best regards, [root@management-a ~]# ceph df detail GLOBAL: SIZE AVAIL RAW USED %RAW USED OBJECTS 446T 184T 261T 58.62 22092k POOLS:

[ceph-users] Hidden Objects

2019-10-06 Thread Lazuardi Nasution
Hi, On inspecting a newly installed cluster (Nautilus), I find the following result. The ssd-test pool is the cache pool for the hdd-test pool. After running some RBD benchmarks and deleting all rbd images used for benchmarking, there are still some hidden objects inside both pools (except rbd_directory, rbd_info and

[ceph-users] Tiering Dirty Objects

2019-10-03 Thread Lazuardi Nasution
Hi, Is there any way to query the list of dirty objects inside a tier/hot pool? I just know how to see the number of them per pool. Best regards,

Re: [ceph-users] RBD Object Size for BlueStore OSD

2019-09-30 Thread Lazuardi Nasution
Ceph cluster? Contact us at https://croit.io > > croit GmbH > Freseniusstr. 31h > 81247 München > www.croit.io > Tel: +49 89 1896585 90 > > On Mon, Sep 30, 2019 at 6:44 AM Lazuardi Nasution > wrote: > > > > Hi, > > > > Is 4MB default RBD ob

Re: [ceph-users] Nautilus Ceph Status Pools & Usage

2019-09-30 Thread Lazuardi Nasution
oking for help with your Ceph cluster? Contact us at https://croit.io > > croit GmbH > Freseniusstr. 31h > 81247 München > www.croit.io > Tel: +49 89 1896585 90 > > On Sun, Sep 29, 2019 at 4:41 PM Lazuardi Nasution > wrote: > > > > Hi, > > > > I'm s

[ceph-users] RBD Object Size for BlueStore OSD

2019-09-29 Thread Lazuardi Nasution
Hi, Is the 4MB default RBD object size still relevant for BlueStore OSDs? Any guideline for the best RBD object size for BlueStore OSDs, especially on high-performance media (SSD, NVMe)? Best regards,
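For anyone searching the archive: a minimal sketch of setting a non-default object size at image creation time (the pool and image names below are placeholders, not from the original thread):

  rbd create --size 100G --object-size 8M mypool/testimage
  rbd info mypool/testimage    # the order / object size is reported in the output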

[ceph-users] Nautilus Ceph Status Pools & Usage

2019-09-29 Thread Lazuardi Nasution
Hi, I'm starting with Nautilus and have created and deleted some pools. When I check with "ceph status" I find something weird with the "pools" number after all pools have been deleted. Is the meaning of the "pools" number different than in Luminous? As there is no pool and no PG, why is there usage on "ceph

Re: [ceph-users] Ceph OSDs fail to start with RDMA

2019-05-31 Thread Lazuardi Nasution
senger on Nautilus in the future. > > > > Thanks, > > Orlando > > > > > > *From:* Lazuardi Nasution [mailto:mrxlazuar...@gmail.com] > *Sent:* Saturday, May 25, 2019 9:14 PM > *To:* Moreno, Orlando ; Tang, Haodong < > haodong.t...@intel.com> >

Re: [ceph-users] ceph-users Digest, Vol 60, Issue 26

2019-05-25 Thread Lazuardi Nasution
Hi Orlando and Haodong, Is there any response to this thread? I'm interested in this too. Best regards, Date: Fri, 26 Jan 2018 21:53:59 + > From: "Moreno, Orlando" > To: "ceph-users@lists.ceph.com" , Ceph > Development > Cc: "Tang, Haodong" > Subject: [ceph-users] Ceph OSDs

Re: [ceph-users] Ceph and multiple RDMA NICs

2019-05-23 Thread Lazuardi Nasution
Hi David and Justinas, I'm interested in this old thread. Has it been solved? Would you mind sharing the solution and a reference regarding David's statement about some threads on the ML about RDMA? Best regards, > Date: Fri, 02 Mar 2018 06:12:18 + > From: David Turner > To: Justinas

Re: [ceph-users] Custom Ceph-Volume Batch with Mixed Devices

2019-05-10 Thread Lazuardi Nasution
think it is just for cosmetic purposes only. It seems that I have to do it with manual VG and LV creation if I want to do it with ceph-ansible. Best regards, On Sat, May 11, 2019, 02:42 Alfredo Deza wrote: > On Fri, May 10, 2019 at 3:21 PM Lazuardi Nasution > wrote: > > &g

Re: [ceph-users] Custom Ceph-Volume Batch with Mixed Devices

2019-05-10 Thread Lazuardi Nasution
019 at 2:43 PM Lazuardi Nasution > wrote: > > > > Hi, > > > > Let's say I have following devices on a host. > > > > /dev/sda > > /dev/sdb > > /dev/nvme0n1 > > > > How can I do ceph-volume batch which create bluestore OSD on HDDs and >

[ceph-users] Custom Ceph-Volume Batch with Mixed Devices

2019-05-10 Thread Lazuardi Nasution
Hi, Let's say I have the following devices on a host: /dev/sda /dev/sdb /dev/nvme0n1 How can I do a ceph-volume batch which creates bluestore OSDs on the HDDs and on the NVMe (divided into 4 OSDs) and puts the block.db of the HDDs on the NVMe too? The following is what I'm expecting for the created LVs. /dev/sda: DATA0
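A rough sketch of the manual VG/LV route that the follow-ups point to (LV names and sizes are invented for illustration; adjust them to the actual device capacities):

  vgcreate vg_nvme /dev/nvme0n1
  lvcreate -L 300G -n nvme_data0 vg_nvme    # repeat for nvme_data1..nvme_data3
  lvcreate -L 30G -n db_sda vg_nvme
  lvcreate -L 30G -n db_sdb vg_nvme
  ceph-volume lvm create --bluestore --data /dev/sda --block.db vg_nvme/db_sda
  ceph-volume lvm create --bluestore --data /dev/sdb --block.db vg_nvme/db_sdb
  ceph-volume lvm create --bluestore --data vg_nvme/nvme_data0    # repeat for the other NVMe LVs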

Re: [ceph-users] Full L3 Ceph

2019-03-18 Thread Lazuardi Nasution
.@gentoo.org): > > On Fri, Nov 23, 2018 at 04:03:25AM +0700, Lazuardi Nasution wrote: > > > I'm looking example Ceph configuration and topology on full layer 3 > > > networking deployment. Maybe all daemons can use loopback alias > address in > > > this case. B

Re: [ceph-users] OpenStack with Ceph RDMA

2019-03-11 Thread Lazuardi Nasution
regating them physically. > > On Sat, Mar 9, 2019, 11:10 AM Lazuardi Nasution > wrote: > >> Hi, >> >> I'm looking for information about where is the RDMA messaging of Ceph >> happen, on cluster network, public network or both (it seem both, CMIIW)? &g

[ceph-users] OpenStack with Ceph RDMA

2019-03-09 Thread Lazuardi Nasution
Hi, I'm looking for information about where the RDMA messaging of Ceph happens: on the cluster network, the public network, or both (it seems both, CMIIW)? I'm talking about the configuration of ms_type, ms_cluster_type and ms_public_type. In the case of OpenStack integration with RBD, which of the above three is
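For reference, the three options map onto ceph.conf roughly as in the hedged sketch below (ms_cluster_type/ms_public_type override ms_type per network; the RDMA device name is a placeholder):

  [global]
  ms_type = async+posix              # default for anything not overridden
  ms_cluster_type = async+rdma       # replication/heartbeat traffic between OSDs
  ms_public_type = async+posix       # client (e.g. OpenStack/librbd) traffic stays on TCP
  ms_async_rdma_device_name = mlx5_0 # placeholder RDMA device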

Re: [ceph-users] ceph-users Digest, Vol 70, Issue 23

2018-11-24 Thread Lazuardi Nasution
/plain; charset="us-ascii" > > On Fri, Nov 23, 2018 at 04:03:25AM +0700, Lazuardi Nasution wrote: > > I'm looking example Ceph configuration and topology on full layer 3 > > networking deployment. Maybe all daemons can use loopback alias address > in > >

[ceph-users] Full L3 Ceph

2018-11-22 Thread Lazuardi Nasution
Hi, I'm looking for an example Ceph configuration and topology for a full layer 3 networking deployment. Maybe all daemons can use a loopback alias address in this case. But how to set the cluster network and public network configuration, using a supernet? I think using a loopback alias address can prevent the
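A minimal sketch of the supernet idea, assuming each daemon binds to a /32 loopback alias that is routed by the fabric (all addresses below are invented):

  [global]
  public_network = 10.10.0.0/16    # supernet covering every host's public /32
  cluster_network = 10.20.0.0/16   # supernet covering every host's cluster /32
  [osd.0]
  public_addr = 10.10.1.1
  cluster_addr = 10.20.1.1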

[ceph-users] Multi Networks Ceph

2018-03-19 Thread Lazuardi Nasution
Hi, What is the best way to proceed if the network segments are different between OSD to OSD, OSD to MON and OSD to client due to some networking policy? What should I put for public_addr and cluster_addr? Is it simply "as is", depending on the connected network segments of each OSD and MON? If it is not

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-08 Thread Lazuardi Nasution
Hi Jason, I understand. Thank you for your explanation. Best regards, On Mar 9, 2018 3:45 AM, "Jason Dillaman" <jdill...@redhat.com> wrote: > On Thu, Mar 8, 2018 at 3:41 PM, Lazuardi Nasution > <mrxlazuar...@gmail.com> wrote: > > Hi Jason, > > > >

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-08 Thread Lazuardi Nasution
Hi Jason, If there is a case where the gateway cannot access Ceph, I think you are right. Anyway, I put the iSCSI gateway on a MON node. Best regards, On Mar 9, 2018 1:41 AM, "Jason Dillaman" <jdill...@redhat.com> wrote: On Thu, Mar 8, 2018 at 12:47 PM, Lazuardi Nas

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-08 Thread Lazuardi Nasution
, On Mar 9, 2018 12:35 AM, "Jason Dillaman" <jdill...@redhat.com> wrote: On Thu, Mar 8, 2018 at 11:59 AM, Lazuardi Nasution <mrxlazuar...@gmail.com> wrote: > Hi Mike, > > Since I have moved from LIO to TGT, I can do full ALUA (active/active) of > multiple gateway

Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-08 Thread Lazuardi Nasution
-03-07 > > > > shadowlin > > > > > > > > *From:* Mike Christie <mchri...@redhat.com> > > *Sent:* 2018-03-

[ceph-users] iSCSI Multipath (Load Balancing) vs RBD Exclusive Lock

2018-03-06 Thread Lazuardi Nasution
Hi, I want to do load-balanced multipathing (multiple iSCSI gateway/exporter nodes) for iSCSI backed by RBD images. Should I disable the exclusive-lock feature? What if I don't disable that feature? I'm using TGT (the manual way) since I got so many CPU stuck error messages when I was using LIO. Best
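For the record, the feature can be turned off per image with something like the sketch below (pool/image names are placeholders; depending on the release, features that depend on exclusive-lock, such as object-map, fast-diff or journaling, may need to be disabled first):

  rbd feature disable mypool/myimage object-map fast-diff   # only if these are enabled
  rbd feature disable mypool/myimage exclusive-lock
  rbd info mypool/myimage | grep features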

Re: [ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-20 Thread Lazuardi Nasution
Hi, I'm still looking for the answer to these questions. Maybe someone can share their thoughts on these. Any comment will be helpful too. Best regards, On Sat, Sep 16, 2017 at 1:39 AM, Lazuardi Nasution <mrxlazuar...@gmail.com> wrote: > Hi, > > 1. Is it possible configu

[ceph-users] Bluestore OSD_DATA, WAL & DB

2017-09-15 Thread Lazuardi Nasution
Hi, 1. Is it possible to configure osd_data not as a small partition on the OSD but as a folder (e.g. on the root disk)? If yes, how to do that with ceph-disk, and what are the pros/cons of doing that? 2. Is the WAL & DB size calculated based on OSD size or on expected throughput, like the journal device of filestore? If not,
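As a hedged illustration of where fixed WAL/DB sizes would be set, these options are read at OSD creation time (the values below are arbitrary examples, not recommendations):

  [osd]
  bluestore_block_db_size = 32212254720   # ~30 GiB block.db per OSD
  bluestore_block_wal_size = 2147483648   # 2 GiB block.wal per OSD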

Re: [ceph-users] Single External Journal

2017-06-07 Thread Lazuardi Nasution
ach OSD’s > journal. > > So I have an NVMe drive that serves as a journal for 10 OSD’s, thus I have > 10 separate partitions, 1 for each OSD. > > Hope that helps, > > Reed > > > On Jun 7, 2017, at 1:54 PM, Lazuardi Nasution <mrxlazuar...@gmail.com> > wrote

[ceph-users] Single External Journal

2017-06-07 Thread Lazuardi Nasution
Hi, Is it possible to have a single external journal for more than one OSD without doing any partitioning, or at least with only a single partition on the journal disk? For example, I want to have a single SSD as an external journal for several OSD HDDs but without doing any partitioning, or at least with only a single

[ceph-users] Erasure Profile Update

2017-02-09 Thread Lazuardi Nasution
Hi, I'm looking for the way to update an erasure profile when adding nodes. Let's say at first I have 5 OSD nodes with a 3+2 erasure profile, so all chunks, including the coding chunks, will be spread across all OSD nodes. In the future, let's say I add 2 OSD nodes and I want to have a 5+2
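A sketch of defining the new profile (the profile and pool names are made up; on pre-Luminous releases the failure-domain key is ruleset-failure-domain rather than crush-failure-domain, and the pg numbers are only examples). Note that an existing pool's profile generally cannot be changed in place, so the usual route is a new pool plus data migration:

  ceph osd erasure-code-profile set ec-5-2 k=5 m=2 crush-failure-domain=host
  ceph osd erasure-code-profile get ec-5-2
  ceph osd pool create ecpool_new 256 256 erasure ec-5-2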

[ceph-users] RBD Cache & Multi Attached Volumes

2017-01-03 Thread Lazuardi Nasution
Hi, For use with OpenStack Cinder multi-attached volumes, is it possible to disable the RBD cache for specific multi-attached volumes only? Single-attached volumes still need the RBD cache enabled for better performance. If I disable the RBD cache in /etc/ceph/ceph.conf, is
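One possible per-image override, assuming a librbd release that honours conf_* keys in image metadata (the pool/volume name is a placeholder):

  rbd image-meta set volumes/volume-shared conf_rbd_cache false
  rbd image-meta list volumes/volume-shared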

Re: [ceph-users] Read Only Cache Tier

2016-12-21 Thread Lazuardi Nasution
, 22 Dec 2016 09:12:44 +0700 Lazuardi Nasution wrote: > > > Hi Christian, > > > > Thank you for your explanation. Based on your suggestion, I have put > > writeback cache-mode. But, currently the write ops is more better than > read > > ops. I mean "dd if=/

Re: [ceph-users] Read Only Cache Tier

2016-12-21 Thread Lazuardi Nasution
here? Best regards, On Dec 22, 2016 08:57, "Christian Balzer" <ch...@gol.com> wrote: > > Hello, > > On Thu, 22 Dec 2016 01:47:36 +0700 Lazuardi Nasution wrote: > > > Hi, > > > > I'm looking for the way of setting up read only cache tier bu

[ceph-users] Read Only Cache Tier

2016-12-21 Thread Lazuardi Nasution
Hi, I'm looking for a way of setting up a read-only cache tier where objects updated in the backend store are evicted from the cache (if present in the cache) and promoted to the cache again on the next missed read. This will ensure the cache never contains stale objects. Which cache-mode
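For context, the cache-mode is switched with commands along these lines (the pool name is a placeholder; depending on the release, readonly may require the extra confirmation flag, while readproxy proxies reads through the tier without keeping a writable copy):

  ceph osd tier cache-mode cachepool readproxy
  ceph osd tier cache-mode cachepool readonly --yes-i-really-mean-it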

Re: [ceph-users] /var/lib/mysql, CephFS vs RBD

2016-08-31 Thread Lazuardi Nasution
s: transactions per sec: 7550 > Rick > > On Aug 31, 2016, at 10:59 AM, Lazuardi Nasution <mrxlazuar...@gmail.com> > wrote: > > > > Hi, > > > > I'm looking for pros and cons of mounting /var/lib/mysql with CephFS or > RBD for getting best performance. My

[ceph-users] /var/lib/mysql, CephFS vs RBD

2016-08-31 Thread Lazuardi Nasution
Hi, I'm looking for the pros and cons of mounting /var/lib/mysql with CephFS or RBD for getting the best performance. MySQL saves data as files in most configurations, but the I/O is block access because the file stays open until MySQL goes down. This case gives us both options for storing the data files. For

Re: [ceph-users] CephFS Big Size File Problem

2016-08-27 Thread Lazuardi Nasution
Hi, I have retried the test, but with the FUSE CephFS client. It seems everything is OK. Any explanation? Is the kernel CephFS client less featured (more limited) and/or less stable than the FUSE CephFS client, like with RBD? Best regards, > Date: Thu, 25 Aug 2016 00:25:22 +0700 > From: Lazuardi Na

Re: [ceph-users] CephFS Big Size File Problem

2016-08-25 Thread Lazuardi Nasution
caps: [mon] allow * caps: [osd] allow * Best regards, On Thu, Aug 25, 2016 at 8:15 PM, Yan, Zheng <uker...@gmail.com> wrote: > On Thu, Aug 25, 2016 at 11:12 AM, Lazuardi Nasution > <mrxlazuar...@gmail.com> wrote: > > Hi, > > > > My setup is defaul

Re: [ceph-users] CephFS Big Size File Problem

2016-08-25 Thread Lazuardi Nasution
Hi, I have rebooted the Nova compute node with the process in D state, so I cannot check that anymore. Best regards, On Thu, Aug 25, 2016 at 7:58 PM, Yan, Zheng <uker...@gmail.com> wrote: > On Thu, Aug 25, 2016 at 11:12 AM, Lazuardi Nasution > <mrxlazuar...@gmail.com> wrote: > > Hi

Re: [ceph-users] CephFS Big Size File Problem

2016-08-24 Thread Lazuardi Nasution
> On Thu, Aug 25, 2016 at 1:25 AM, Lazuardi Nasution > <mrxlazuar...@gmail.com> wrote: > > Hi, > > > > I have problem with CephFS on writing big size file. I have found that my > > OpenStack Nova backup was not working after I change the rbd based mount > of &

Re: [ceph-users] CephFS Big Size File Problem

2016-08-24 Thread Lazuardi Nasution
Hi Gregory, Since I have mounted it via /etc/fstab, of course it is the kernel client. What log do you mean? I cannot find anything related in dmesg. Best regards, On Aug 25, 2016 00:46, "Gregory Farnum" <gfar...@redhat.com> wrote: > On Wed, Aug 24, 2016 at 10:25 AM

[ceph-users] CephFS Big Size File Problem

2016-08-24 Thread Lazuardi Nasution
Hi, I have a problem with CephFS when writing big files. I found that my OpenStack Nova backup was not working after I changed the RBD-based mount of /var/lib/nova/instances/snapshots to a CephFS-based one (mounted via /etc/fstab on all Nova compute nodes). I couldn't realize the cause until I tried

Re: [ceph-users] Automount Failovered Multi MDS CephFS

2016-08-03 Thread Lazuardi Nasution
) on top of CephFS (different directory) or just directly mounts the same directory of CephFS? Best regards, On Wed, Aug 3, 2016 at 11:14 PM, John Spray <jsp...@redhat.com> wrote: > On Wed, Aug 3, 2016 at 5:10 PM, Lazuardi Nasution > <mrxlazuar...@gmail.com> wrote: > >

[ceph-users] Automount Failovered Multi MDS CephFS

2016-08-03 Thread Lazuardi Nasution
Hi, I'm looking for an example of what to put in /etc/fstab if I want to auto-mount CephFS with failover multi-MDS (only one MDS active), especially with Jewel. My target is to build load-balanced file/web servers with a CephFS backend. Best regards,
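A typical kernel-client /etc/fstab line for reference (monitor names, mount point and secret file path are placeholders; the client fails over between the listed monitors):

  mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 0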

Re: [ceph-users] Cache Tiering with Same Cache Pool

2016-06-23 Thread Lazuardi Nasution
set_period will be helpful? Best regards, On Thu, Jun 23, 2016 at 7:23 AM, Christian Balzer <ch...@gol.com> wrote: > > Hello, > > On Wed, 22 Jun 2016 15:40:40 +0700 Lazuardi Nasution wrote: > > > Hi Christian, > > > > If I have several cache pool on the same SS

Re: [ceph-users] Cache Tiering with Same Cache Pool

2016-06-22 Thread Lazuardi Nasution
Available size? If different, how can I know if one cache pool needs more size than another? Best regards, Date: Mon, 20 Jun 2016 09:34:05 +0900 > From: Christian Balzer <ch...@gol.com> > To: ceph-users@lists.ceph.com > Cc: Lazuardi Nasution <mrxlazuar...@gmail.com> > Subje

[ceph-users] Cache Tiering with Same Cache Pool

2016-06-19 Thread Lazuardi Nasution
Hi, Is it possible to do cache tiering for several storage pools with the same cache pool? What will happen if the cache pool is broken or at least doesn't meet quorum while the storage pool is OK? Best regards,

Re: [ceph-users] RBD Stripe/Chunk Size (Order Number) Pros Cons

2016-06-17 Thread Lazuardi Nasution
To: ceph-users@lists.ceph.com, Lazuardi Nasution > <mrxlazuar...@gmail.com> > Subject: Re: [ceph-users] RBD Stripe/Chunk Size (Order Number) Pros > Cons > Message-ID: <100239228.47.1466160323...@ox.pcextreme.nl> > Content-Type: text/plain; charset=UTF-

Re: [ceph-users] RBD Stripe/Chunk Size (Order Number) Pros Cons

2016-06-17 Thread Lazuardi Nasution
t; small IO. > > > > Basically the answer is that there are pluses and minuses, and the exact > > behavior will depend on your kernel configuration, hardware, and use > > case. I think 4MB has been a fairly good default thus far (might change > > with bluestore), but t

[ceph-users] RBD Stripe/Chunk Size (Order Number) Pros Cons

2016-06-16 Thread Lazuardi Nasution
Hi, I'm looking for some pros and cons related to the RBD stripe/chunk size indicated by the image order number. The default is 4MB (order 22), but OpenStack uses 8MB (order 23) as its default. What if we use a smaller size (lower order number)? Isn't there a better chance that image objects are spread across OSDs and

Re: [ceph-users] Clearing Incomplete Clones State

2016-06-14 Thread Lazuardi Nasution
nagement-b ~]# rados -p rbd lssnap 0 snaps [root@management-b ~]# rados -p rbd_cache lssnap 0 snaps Best regards, On Tue, Jun 14, 2016 at 4:55 AM, Lazuardi Nasution <mrxlazuar...@gmail.com> wrote: > Hi, > > I have removed cache tiering due to "missing hit_sets" warning. Aft

[ceph-users] PGs Realationship on Cache Tiering

2016-06-13 Thread Lazuardi Nasution
Hi, Is there any relation between the PGs of the cache pool and the PGs of the storage pool in cache tiering? Is it mandatory to set pg(p)_num of the cache pool and the storage pool to be equal? Best regards,

[ceph-users] Clearing Incomplete Clones State

2016-06-13 Thread Lazuardi Nasution
Hi, I have removed cache tiering due to a "missing hit_sets" warning. After removing it, I want to try to add tiering again with the same cache pool and storage pool, but I can't, even though the cache pool is empty or forced to clear. The following is some output. How can I deal with this? Is it possible to
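For reference, re-attaching a tier normally follows a sequence along these lines (using the rbd/rbd_cache pool names that appear later in the thread; the hit_set settings, which the earlier warning refers to, carry example values only):

  ceph osd tier add rbd rbd_cache
  ceph osd tier cache-mode rbd_cache writeback
  ceph osd tier set-overlay rbd rbd_cache
  ceph osd pool set rbd_cache hit_set_type bloom
  ceph osd pool set rbd_cache hit_set_count 1      # example value
  ceph osd pool set rbd_cache hit_set_period 3600  # example value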

Re: [ceph-users] Ceph Recovery

2016-05-18 Thread Lazuardi Nasution
The cluster should have > automatically healed after the OSDs were marked out of the cluster . > Else this will be a manual process for us every time the disk fails > which is very regular. > > Thanks > Gaurav > > On Tue, May 17, 2016 at 11:06 AM, Lazuardi Nasut

Re: [ceph-users] Ceph Recovery

2016-05-16 Thread Lazuardi Nasution
> What could be the difference between two scenarions : > > 1. OSD down due to H/W failure. > 2. OSD daemon killed . > > When I remove the 12 osds from the crushmap manually or do ceph osd > crush remove for those osds, the cluster recovers just fine. > > Thanks > Gaurav

Re: [ceph-users] Ceph Recovery

2016-05-16 Thread Lazuardi Nasution
Hi Wido, The 75% happens on 4 nodes of 24 OSDs each, with a pool size of two and a minimum size of one. Is there any relation between this configuration and the 75%? Best regards, On Tue, May 17, 2016 at 3:38 AM, Wido den Hollander <w...@42on.com> wrote: > > > Op 14 mei 2016 om 12:36 schreef La

Re: [ceph-users] Ceph Recovery

2016-05-14 Thread Lazuardi Nasution
wrote: > > > Op 13 mei 2016 om 11:55 schreef Lazuardi Nasution < > mrxlazuar...@gmail.com>: > > > > > > Hi Wido, > > > > The status is same after 24 hour running. It seem that the status will > not > > go to fully active+clean until all down

Re: [ceph-users] Ceph Recovery

2016-05-13 Thread Lazuardi Nasution
, On Fri, May 13, 2016 at 4:44 PM, Wido den Hollander <w...@42on.com> wrote: > > > Op 13 mei 2016 om 11:34 schreef Lazuardi Nasution < > mrxlazuar...@gmail.com>: > > > > > > Hi, > > > > After disaster and restarting for automatic recover

[ceph-users] Ceph Recovery

2016-05-13 Thread Lazuardi Nasution
Hi, After a disaster and restarting for automatic recovery, I found the following ceph status. Some OSDs cannot be restarted due to file system corruption (it seems that XFS is fragile). [root@management-b ~]# ceph status cluster 3810e9eb-9ece-4804-8c56-b986e7bb5627 health HEALTH_WARN

[ceph-users] Kernel:BUG: Soft Lockup, H/W or S/W Issue?

2016-05-12 Thread Lazuardi Nasution
Hi, Suddenly some of our Infernalis OSD nodes are down with a "kernel:BUG: soft lockup" message. Nothing can be done after that until rebooting. When I do recovery by restarting the down OSDs one by one, while adding additional OSDs too, I get the same error again on the same nodes. I'm not sure which

[ceph-users] Single OSD Nodes Rest on Disaster

2016-05-11 Thread Lazuardi Nasution
Hi, How can I make Ceph still work if the cluster loses all OSD hosts except one in a disaster, where the capacity of a single host is bigger than the total used data? I need to minimize downtime while recovering/reinstalling the lost hosts. More generally, how can I make Ceph choose the highest number of types

Re: [ceph-users] Multiple Cache Pool with Single Storage Pool

2015-11-09 Thread Lazuardi Nasution
; SSD volume internal to the VM. > > -- > > Jason Dillaman > > > - Original Message - > > > From: "Lazuardi Nasution" <mrxlazuar...@gmail.com> > > To: ceph-users@lists.ceph.com > > Sent: Sunday, November 8, 2015 12:34:16 PM > > Subje

[ceph-users] Multiple Cache Pool with Single Storage Pool

2015-11-08 Thread Lazuardi Nasution
Hi, I'm new to Ceph cache tiering. Is it possible to have multiple cache pools with a single storage pool backend? For example, I have some compute nodes with their own local SSDs. I want each compute node to have its own cache using its own local SSDs. The target is to minimize load on the storage pool

[ceph-users] Client Local SSD Caching

2015-09-21 Thread Lazuardi Nasution
Hi, I'm looking for a recommended client local SSD caching strategy on OpenStack Compute where the backend storage is a Ceph cluster. The target is to reduce compute-to-storage traffic. I am not aware that librbd supports local SSD caching. Besides, I'm not sure if block SSD caching of local

[ceph-users] Combining MON OSD Nodes

2015-06-25 Thread Lazuardi Nasution
Hi, I'm looking for the pros and cons of combining MON and OSD functionality on the same nodes. The mostly recommended configuration is to have dedicated MON nodes in an odd number. What I'm thinking of is more like a single-node deployment but consisting of more than one node: if we have 3 nodes we have 3 MONs with 3

Re: [ceph-users] ceph-users Digest, Vol 18, Issue 8

2014-07-10 Thread Lazuardi Nasution
Hi, I prefer to use bcache or a similar local write-back cache on SSD since it is only related to local HDDs. I think it will reduce the risk of error on cache flushing compared to Ceph cache tiering, which still uses the cluster network for flushing. After the data has been written to the

Re: [ceph-users] Using large SSD cache tier instead of SSD

2014-07-10 Thread Lazuardi Nasution
Hi, I prefer to use bcache or a similar local write-back cache on SSD since it is only related to local HDDs. I think it will reduce the risk of error on cache flushing compared to Ceph cache tiering, which still uses the cluster network for flushing. After the data has been written to the

Re: [ceph-users] Combining MDS Nodes

2014-07-07 Thread Lazuardi Nasution
On 07/06/2014 02:42 PM, Lazuardi Nasution wrote: Hi, Is it possible to combine MDS and MON or OSD inside the same node? Which one is better, MON with MDS or OSD with MDS? Yes, not a problem. Be aware, the MDS can be memory hungry

[ceph-users] CEPH Cache Tiering

2014-07-07 Thread Lazuardi Nasution
Hi, I'm thinking of using SSDs for cache on Ceph where the SSDs are on the same OSD nodes as the HDDs. My options are using Ceph cache tiering or using other caching software like bcache or FlashCache. With the second option, the SSDs will only cache data related to HDDs on the same node. Any

[ceph-users] Combining MDS Nodes

2014-07-06 Thread Lazuardi Nasution
Hi, Is it possible to combine MDS and MON or OSD on the same node? Which one is better, MON with MDS or OSD with MDS? How do I configure OSD and MDS to allow two kinds of public network connections, standard (1 GbE) and jumbo (10 GbE)? I want to take advantage of jumbo frames for some