[ceph-users] recommended Linux distro for Ceph Pacific small cluster

2022-06-27 Thread Bobby
Hi,

What is the recommended Linux distro for Ceph Pacific? I would like to set
up a small cluster with around 4-5 OSDs, one monitor node and one client
node.
So far I have been using CentOS. Is it recommended to continue with
CentOS, or should I go for another distro? Please do comment.

Looking forward to the reply.

Thanks
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Archive in Ceph similar to Hadoop Archive Utility (HAR)

2022-02-25 Thread Bobby
thanks Anthony and Janne... exactly what I have been looking for!

On Fri, Feb 25, 2022 at 9:25 AM Janne Johansson  wrote:

> Den fre 25 feb. 2022 kl 08:49 skrev Anthony D'Atri <
> anthony.da...@gmail.com>:
> > There was a similar discussion last year around Software Heritage’s
> archive project, suggest digging up that thread.
> > Some ideas:
> >
> > * Pack them into (optionally compressed) tarballs - from a quick search
> it sorta looks like HAR uses a similar model.  Store the tarballs as RGW
> objects, or as RBD volumes, or on CephFS.
>
> After doing several different kinds of storage solutions in my career,
> this above advice is REALLY important. Many hard-to-solve problems
> have started out with "it is just one million files/objects", and when
> you reach 50M and sound the alarm, people try to throw money at the
> problem instead; then you reach 2-3-400M and you can't ask
> for the index in finite time without it being invalid by the time the
> list is complete.
>
> If you have a possibility to stick 10,100,1000 small items into a
> .tar, into a .zip, into whatever, DO IT. Do it before the numbers grow
> too large to handle. When the numbers grow too big, you seldom get the
> chance to both keep running in the too-large setup AND re-pack them at
> the same time.
>
> --
> May the most significant bit of your life be positive.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Archive in Ceph similar to Hadoop Archive Utility (HAR)

2022-02-24 Thread Bobby
Hi,

Is there any archive utility in Ceph similar to the Hadoop Archive Utility
(HAR)? Or, in other words, how can one archive small files in Ceph?

Thanks
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Monitor dashboard notification: "will be full in less than 5 days......"

2022-02-01 Thread Bobby
Hello all,

Please excuse me if my question is too basic and I should have known the
answer already. Although the OSDs in my Ceph storage cluster are not at all
full, I get this monitor notification: "will be full in less than 5 days
assuming the average fill-up rate of the past 48 hours".

Any idea how to handle this? And what should be my approach?

Thanks in advance
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph source code build bug in Pacific for Ubuntu 18.04?

2022-01-08 Thread Bobby
Hi,

Is there any Ubuntu 18.04-related bug in the latest Ceph release, Pacific? I
have never had a problem building the Ceph source code before, but now every
build fails with CMake-related errors. I am using CMake 3.22 and have made
sure all dependencies are installed, yet I still fail to build the source
code. Maybe something in the Ceph CMake files is broken specifically for
Ubuntu 18.04 and I am missing it?

many thanks in advance
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Cephalocon 2022 deadline extended?

2021-12-10 Thread Bobby
one typing mistake... I meant 19 December 2021

On Fri, Dec 10, 2021 at 8:21 PM Bobby  wrote:

>
> Hi all,
>
> Has the CfP deadline for Cephalocon 2022 been extended to 19 December
> 2022? Please confirm if anyone knows it...
>
>
> Thanks
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Cephalocon 2022 deadline extended?

2021-12-10 Thread Bobby
Hi all,

Has the CfP deadline for Cephalocon 2022 been extended to 19 December 2022?
Please confirm if anyone knows...


Thanks
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph rbd-nbd performance benchmark

2021-06-23 Thread Bobby
Hi,

I am trying to benchmark Ceph rbd-nbd performance. Are there any
published rbd-nbd benchmark results I could use for comparison?


BR

Bobby
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: struggling to achieve high bandwidth on Ceph dev cluster - HELP

2021-02-16 Thread Bobby
@Marc: thanks a lot... your results have helped me understand.

@Mark: mainly HDDs... not even one SSD... so yes, pretty slow.

On Wed, Feb 10, 2021 at 9:22 PM Marc  wrote:

> > Some more questions please:
> > How many OSDs have you been using in your second email tests for 1gbit
> > [1]
> > and 10gbit [2] ethernet? Or to be precise, what is your cluster for
>
> When I was testing with 1gbit ethernet I had 11 osds on 4 servers, but
> this already showed saturated 1Gbit links. Now on the 10gbit ethernet DAC
> it is with 30 hdd's or so. Keep in mind that the default rados bench is
> using 16 threads.
>
> If I do 1 thread I get something like yours [1]; if I do the same on the
> ssd pool, I get this [2]. And if I remove the 3x replication on the ssd
> pool, this [3]; the 16-thread run on the ssd pool with 3x on is [4].
>
> A side note: I did not fully tune my cluster for performance. I still have
> processors doing frequency/powerstate switching, and slower sata hdd
> drives combined with faster sas ones. But this fits my use case.
>
> What I have should not be of interest to you. You have to determine what
> you need and describe your use case; there are quite a few good
> people here who can advise you how to realize that, or tell you it is not
> possible with ceph ;)
>
>
> [@~]# rados bench -t 1 -p rbd 10 write
> hints = 1
> Maintaining 1 concurrent writes of 4194304 bytes to objects of size
> 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_c01_3768767
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg
> lat(s)
> 0   0 0 0 0 0   -
>  0
> 1   1 5 4   15.9973160.278477
> 0.240159
> 2   110 9   17.9973200.162663
> 0.219858
> 3   11716 21.3328 0.21535
> 0.181435
> 4   12625   24.9965360.154064
> 0.158931
> 5   13332   25.5966280.119773
> 0.153031
> 6   14241   27.3295360.064895
> 0.144242
> 7   15049   27.9962320.192591
> 0.142036
> 8   15958   28.9961360.108623
> 0.137699
> 9   16968   30.218340   0.0684741
> 0.132143
>10   1787730.796360.118075
>  0.12872
> Total time run: 10.1903
> Total writes made:  79
> Write size: 4194304
> Object size:4194304
> Bandwidth (MB/sec): 31.01
> Stddev Bandwidth:   7.78603
> Max bandwidth (MB/sec): 40
> Min bandwidth (MB/sec): 16
> Average IOPS:   7
> Stddev IOPS:1.94651
> Max IOPS:   10
> Min IOPS:   4
> Average Latency(s): 0.128988
> Stddev Latency(s):  0.0571245
> Max latency(s): 0.385165
> Min latency(s): 0.0608502
> Cleaning up (deleting benchmark objects)
> Removed 79 objects
> Clean up completed and total clean up time :2.49933
>
> [2]
> [@~]# rados bench -t 1 -p rbd.ssd 10 write
> hints = 1
> Maintaining 1 concurrent writes of 4194304 bytes to objects of size
> 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_c01_3769249
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg
> lat(s)
> 0   0 0 0 0 0   -
>  0
> 1   13938   151.992   152   0.0318137
>  0.0258572
> 2   18079   157.985   164   0.0239471
>  0.0250284
> 3   1   122   121   161.315   168   0.0240444
>  0.0247604
> 4   1   163   162   161.981   164   0.0270316
> 0.024625
> 5   1   204   203162.38   164   0.0235799
>  0.0245714
> 6   1   246   245   163.313   168   0.0296698
>  0.0244574
> 7   1   286   285   162.836   160   0.0232353
>  0.0245383
> 8   1   326   325   162.479   160   0.0236261
>  0.0245476
> 9   1   367   366   162.646   164   0.0249223
>  0.0245132
>10   1   408   407   162.779   164   0.0229952
>  0.0245034
> Total time run: 10.0277
> Total writes made:  409
> Write size: 4194304
> Object size:4194304
> Bandwidth (MB/sec): 163.149
> Stddev Bandwidth:   4.63801
> Max bandwidth (MB/sec): 168
> Min bandwidth (MB/sec): 152
> Average IOPS:   40
> Stddev IOPS:1.1595
> Max IOPS:   42
> Min IOPS:   38
> Average Latency(s): 0.0245153
> Stddev Latency(s):  0.00212425
> Max latency(s): 0.0343171
> Min latency(s): 0.0202639
> Cleaning up (deleting benchmark objects)
> Removed 409 objects
> Clean up completed and total clean up time :0.521216
>
> [3]
> [@~]# rados bench -t 1 -p rbd.ssd.r1 10 write
> hints = 1
> 

[ceph-users] how far can we go using vstart.sh script for fake dev cluster-HELP

2021-02-11 Thread Bobby
Hi,

The Ceph source code contains a script called vstart.sh which allows
developers to quickly test their code using a simple deployment on their
development system.

Here: https://docs.ceph.com/en/latest//dev/quick_guide/

I am really curious how far we can go with the vstart.sh script.

While my development cluster is running, I use tools like rados bench, rbd
and rbd-nbd to benchmark simple workloads and test my code. Do we have
options to change the network settings in the fake cluster built by the
vstart script and then benchmark it? For example, trying 1gbit ethernet
and 10gbit ethernet.

Thanks
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: struggling to achieve high bandwidth on Ceph dev cluster - HELP

2021-02-10 Thread Bobby
thanks.

The Ceph source code contains a script called vstart.sh which allows
developers to quickly test their code using a simple deployment on their
development system.

Here: https://docs.ceph.com/en/latest//dev/quick_guide/

Although I completely agree with your point about manual deployment, I
thought maybe the script can also give a good idea. Maybe I need to ask in
another email how far I can go with the script.


Some more questions please:
How many OSDs were you using in the tests from your second email for 1gbit [1]
and 10gbit [2] ethernet? Or, to be precise, what was your cluster setup for both?

On Wed, Feb 10, 2021 at 11:40 AM Marc  wrote:

>
> > And you had the hit the nail by asking about *replication factor*.
> > Because
> > I don't know how to change the replication factor. AFAIK, by default it
> > is
> > *3x*. But I would like to change, for example to* 2x*.
>
> ceph osd pool get rbd size
> https://docs.ceph.com/en/latest/man/8/ceph/
>
> > So please excuse me for two naive questions  before my cluster info [1]:
> >
> > - How can I change my replication factor? I am assuming I can change it
> >through vstart script.
>
> I have no idea what vstart is. If you want to learn ceph (and you should,
> if you are going to play with large amounts of other people's data) install
> it manually. IMHO deployment tools are for making deployments easier and
> faster, and not for "I don't know, so let's run a script".
>
>
> > - How can I change ethernet speed on test cluster? For example, 1gbit
> > ethernet
> >   and 10gbit ethernet. Like you had done it. Assuming I can change it
> > through  vstart script.
>
> Don't do it, it is a waste of time; it is just for reference. It was
> something I wanted to know when I started creating my test cluster.
>
> >  [1]
> > I am running a minimal cluster of 4 OSDs .
>
> I am not sure you are going to get much more performance out of it then,
> because you do not utilize the power of many osd's.
>
> This is how my individual drives perform under the same rados bench test:
> all around 20MB/s.
>
> [@~]# dstat -d -D sdb,sdc,sdd,sdf,sdl,sdg,sdh,sdi
>
> --dsk/sdb-dsk/sdc-dsk/sdd-dsk/sdf-dsk/sdl-dsk/sdg-dsk/sdh-dsk/sdi--
>  read  writ: read  writ: read  writ: read  writ: read  writ: read  writ:
> read  writ: read  writ
> 3664k  284k:2507k  172k:2692k  204k:6676k  467k:2405k  322k:3220k
> 230k:1932k  196k:2050k  202k
>0 0 :   0 0 :   0 0 :   0 0 :   0 0 :   0 0 :
>  0 0 :   0 0
>0  8192B:   0 0 :   028k:   044k:   0   928k:   028k:
>  0 0 :   012k
>0  4096B:   0 0 :   036k:  68k   32k:   0 0 :   0 0 :
>  0 0 :   0 0
>0 0 :   0 0 :   0 0 :   0 0 :   0 0 :   0 0 :
>  0 0 :   0 0
>0 0 :   0 0 :   0 0 :   024k:   0 0 :   0 0 :
>  0 0 :   0 0
> 4096B  104k:   0 0 :4096B   20k:   0 0 :8192B  152k:   080k:
>  0 0 :   072k
>0 0 :   0 0 :   0 0 :   0 0 :   0 0 :   0 0 :
>  0 0 :   0  4096B
>0 0 :   012k:   0 0 :   0 0 :   012k:   024k:
>  0 0 :   012k
>072k:   016k:   032k:  20k  100k:   0  4096B:   0 0 :
>  0 0 :   024k
>0  8200k:   0 0 :   020M:  12k   20M:   028M:   020M:
>  012M:   020M
>016M:   012M:   024M:   016M:   047M:   012M:
>  0  8212k:   020M
>024M:   011M:   028M:   028M:   049M:   044M:
>  012M:   024M
>038M:   013M:   042M:   032M:   028M:   031M:
>  0  4104k:   021M
>050M:   0  8204k:   028M:4096B   44M:   061M:   033M:
>  0  8204k:   012M
>032M:   020M:4096B   38M:   020M:   055M:8192B   39M:
>  032M:   024M
>016M:   024M:4096B   29M:   036M:   028M:   017M:
>  037M:   0 0
> 4096B   44M:   016M:   040M:  44k   31M:4096B   28M:8192B   32M:
>  012M:   024M
>012M:   028M:   0  6196k:   018M:   052M:   032M:
>  046M:  12k   40M
>020M:   018M:   038M:   052M:   032M:   024M:
>  027M:   043M
>0   128k:   0  2056k:   016k:  20k   12M:   0 0 :   0
> 8212k:4096B   12k:8192B 9804k
>0   520k:   0   116k:   0   280k:   0   452k:   0   364k:   0   208k:
>  0   152k:   0   144k
>064k:   088k:   064k:   0   132k:4096B  156k:   088k:
>  072k:   0   184k
>0   140k:   0 0 :   0 0 :8192B   20k:   012k:   0   112k:
>  0 0 :   0 0
>0 0 :   0  8192B:   012k:  32k 1044k:   0 0 :4096B
>  16k:4096B0 :   024k
>0 0 :   0 0 :   036k:   012k:   0 0 :   0 0 :
>  0 0 :   0 0
>0 0 :   0 0 :   0 0 :   0 0 :   020k:   0 0 :
>  0 0 :   0 0
>092k:   024k:   0

[ceph-users] Re: struggling to achieve high bandwidth on Ceph dev cluster - HELP

2021-02-10 Thread Bobby
thanks, this looks really helpful and it shows me that I am not doing it
the right way.

And you hit the nail on the head by asking about the *replication factor*,
because I don't know how to change it. AFAIK, by default it is *3x*, but I
would like to change it, for example to *2x*.

So please excuse two naive questions before my cluster info [1]:

- How can I change my replication factor? I am assuming I can change it
  through the vstart script.

- How can I change the ethernet speed on the test cluster? For example,
  1gbit ethernet and 10gbit ethernet, like you had done. Again assuming I
  can change it through the vstart script.


 [1]
I am running a minimal cluster of 4 OSDs.
I am passing the following shell parameters to vstart.sh:
MDS=1 RGW=1 MON=1 OSD=4 ../src/vstart.sh -d -l -n --bluestore


cluster:
id: fce9b3c6-2814-4df2-a5e5-ee0d001a8f4f
health: HEALTH_OK

  services:
mon: 1 daemons, quorum a (age 4m)
mgr: x(active, since 4m)
osd: 4 osds: 4 up (since 3m), 4 in (since 3m)
rgw: 1 daemon active (8000)

  data:
pools:   5 pools, 112 pgs
objects: 329 objects, 27 KiB
usage:   4.0 GiB used, 400 GiB / 404 GiB avail
pgs: 0.893% pgs not active
 111 active+clean
 1   peering


On Wed, Feb 10, 2021 at 10:47 AM Marc  wrote:

> You have to tell a bit about your cluster setup, like nr of osd's, 3x
> replication on your testing pool?
>
> Eg. this[1] was my test on a cluster with only 1gbit ethernet, 3x repl hdd
> pool. This[2] with 10gbit and more osd's added
>
> [2]
> [root@c01 ~]# rados bench -p rbd 10 write
> hints = 1
> Maintaining 16 concurrent writes of 4194304 bytes to objects of size
> 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_c01_3576497
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg
> lat(s)
> 0   0 0 0 0 0   -
>  0
> 1  164125   99.9948   1000.198773
>  0.41148
> 2  16   10185   169.984   2400.203578
> 0.347027
> 3  16   172   156   207.979   284   0.0863202
> 0.296866
> 4  16   245   229   228.975   2920.139681
> 0.268933
> 5  16   322   306   244.772   3080.107296
> 0.257353
> 6  16   385   369245.97   2520.601879
> 0.250782
> 7  16   460   444   253.684   3000.154803
> 0.247178
> 8  16   541   525   262.467   3240.274302
> 0.241951
> 9  16   604   588 261.3   252 0.11929
> 0.238717
>10  16   672   656   262.367   2720.134654
> 0.241424
> Total time run: 10.1504
> Total writes made:  673
> Write size: 4194304
> Object size:4194304
> Bandwidth (MB/sec): 265.212
> Stddev Bandwidth:   63.0823
> Max bandwidth (MB/sec): 324
> Min bandwidth (MB/sec): 100
> Average IOPS:   66
> Stddev IOPS:15.7706
> Max IOPS:   81
> Min IOPS:   25
> Average Latency(s): 0.241012
> Stddev Latency(s):  0.154282
> Max latency(s): 1.05851
> Min latency(s): 0.0702826
> Cleaning up (deleting benchmark objects)
> Removed 673 objects
> Clean up completed and total clean up time :1.26346
>
> [1]
> [@]# rados bench -p rbd 10 write --no-cleanup
> hints = 1
> Maintaining 16 concurrent writes of 4194304 bytes to objects of size
> 4194304 for up to 10 seconds or 0 objects
> Object prefix: benchmark_data_c01_18283
>   sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg
> lat(s)
> 0   0 0 0 0 0   -
>  0
> 1  162711   43.9884440.554119
> 0.624979
> 2  164731   61.984180 1.04112
> 0.793553
> 3  16574154.65440 1.33104
> 0.876273
> 4  167559   58.9869720.840098
>  0.97091
> 5  169781   64.786488 1.02915
> 0.922043
> 6  16   10589   59.320732  1.2471
> 0.915408
> 7  16   129   113   64.5582960.616579
> 0.947882
> 8  16   145   129   64.486664 1.09397
> 0.921441
> 9  16   163   147   65.3201720.885566
> 0.906388
>10  16   166   150   59.988112 1.22834
> 0.909591
>11  13   167   154   55.988916 2.30029
> 0.942798
> Total time run: 11.141939
> Total writes made:  167
> Write size: 4194304
> Object size:4194304
> Bandwidth (MB/sec): 59.9537
> Stddev Bandwidth:   28.7889
> Max bandwidth (MB/sec): 96
> Min bandwidth (MB/sec): 12
> Average IOPS:   14
> Stddev IOPS:7
> Max IOPS:   24
> Min IOPS:   3
> Average 

[ceph-users] struggling to achieve high bandwidth on Ceph dev cluster - HELP

2021-02-10 Thread Bobby
Hi,

I am using the rados bench tool. Currently I am running it on the
development cluster after running the vstart.sh script. It works fine and
I am interested in benchmarking the cluster. However, I am struggling to
achieve good bandwidth (MB/sec). My target throughput is at least 50 MB/sec
or more, but mostly I am achieving around 15-20 MB/sec, so very poor.

I am quite sure I am missing something. Either I have to change my cluster
through the vstart.sh script, or I am not fully utilizing the rados bench
tool, or maybe both: not the right cluster and also not using the rados
bench tool correctly.

Some of the shell examples I have been using to build the cluster are
below:
MDS=0 RGW=1 ../src/vstart.sh -d -l -n --bluestore
MDS=0 RGW=1 MON=1 OSD=4 ../src/vstart.sh -d -l -n --bluestore

While using the rados bench tool I have been trying different block sizes:
4K, 8K, 16K, 32K, 64K, 128K, 256K, 512K. I have also been changing the
-t parameter on the command line to increase concurrent IOs.


Looking forward to your help.

Bobby
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: rados bench error after running vstart script- HELP PLEASE

2021-01-26 Thread Bobby
yes, correct... it was in older releases... I remember once I saw a
video from Sage where he ran rados bench on the default 'rbd' pool.

On Tue, Jan 26, 2021 at 2:23 PM Janne Johansson  wrote:

> Den tis 26 jan. 2021 kl 14:20 skrev Bobby :
> > well, you are right. I forgot to create the pool. I thought 'rbd' pool is
> > created by default. Now it works after creating it :-)
>
> It was on older releases, I think many old clusters have an unused "rbd"
> pool.
>
> --
> May the most significant bit of your life be positive.
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: rados bench error after running vstart script- HELP PLEASE

2021-01-26 Thread Bobby
well, you are right. I forgot to create the pool. I thought the 'rbd' pool
was created by default. Now it works after creating it :-)

On Tue, Jan 26, 2021 at 1:52 PM Eugen Block  wrote:

> The message is quite clear, it seems as if you don't have a pool
> "rbd", do you?
>
>
> Zitat von Bobby :
>
> > Hello,
> >
> >
> > I am having an error while trying to run rados benchmark after running
> > vstart script
> >
> > I run :
> >
> > ../src/vstart.sh -d -n -l
> >
> > and then when I try to run:
> >
> > bin/rados -p rbd bench 30 write
> >
> > it gives me error saying:
> >
> > error opening pool rbd: (2) No such file or directory
> >
> > Can someone please help me in this. I will be grateful
> >
> >
> > Thanks
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] rados bench error after running vstart script- HELP PLEASE

2021-01-26 Thread Bobby
Hello,


I am getting an error while trying to run a rados benchmark after running
the vstart script.

I run :

../src/vstart.sh -d -n -l

and then when I try to run:

bin/rados -p rbd bench 30 write

it gives me an error saying:

error opening pool rbd: (2) No such file or directory

Can someone please help me with this? I would be grateful.


Thanks
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph on vector machines

2020-12-08 Thread Bobby
Hi all,


Just out of curiosity: considering vector machines are being used in HPC
applications to accelerate certain kernels, do you think there are some
workloads in Ceph that could be good candidates to be offloaded to and
accelerated on vector machines?


Thanks in advance.

BR
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] uniform and list crush bucket algorithm usage in data centers

2020-11-25 Thread Bobby
Hi all,

For placement purposes Ceph uses straw2 as the default bucket algorithm. I am
curious whether the other two bucket algorithms, uniform and list, are also
being used in present-day data center use cases. Are there any use
cases where straw2 is not being used at all?


BR
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] using fio tool in ceph development cluster (vstart.sh)

2020-11-20 Thread Bobby
Hi,

I am using the Ceph development cluster through the vstart.sh script. I would
like to measure/benchmark read and write performance (benchmark Ceph at a
low level). For that I want to use the fio tool.

Can we use fio on the development cluster? AFAIK, we can. I have seen
the fio option in the CMakeLists.txt of the Ceph source code.

Thanks in advance.

BR
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [Ceph-qa] Using rbd-nbd tool in Ceph development cluster

2020-11-18 Thread Bobby
Thanks a lot ! It works :-)

On Mon, Nov 16, 2020 at 2:15 PM Mykola Golub 
wrote:

> On Mon, Nov 16, 2020 at 12:19:35PM +0100, Bobby wrote:
>
> > My question is: Can we use this *rbd-nbd* tool in the Ceph cluster? By
> Ceph
> > cluster I mean the development cluster we build through *vstart.sh*
> script.
> > I am quite sure we could use it. I have this script running. I can
> *start*
> >  and *stop* the cluster. But I am struggling to use this rbd-nbd tool in
> > the development cluster which we build through vstart.sh script.
>
> Sure. Running this from the build directory should just work:
>
>   sudo ./bin/rbd-nbd map $pool/$image
>   ./bin/rbd-nbd list-mapped
>   sudo ./bin/rbd-nbd unmap $pool/$image
>
> Doesn't it work for you?
>
> --
> Mykola Golub
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Using rbd-nbd tool in Ceph development cluster

2020-11-16 Thread Bobby
Hi,

I have to use the *rbd-nbd* tool from Ceph. It is part of the Ceph source
code.
Here: https://github.com/ceph/ceph/tree/master/src/tools/rbd_nbd

My question is: can we use this *rbd-nbd* tool in the Ceph cluster? By Ceph
cluster I mean the development cluster we build through the *vstart.sh* script.
I am quite sure we could use it. I have the script running, and I can *start*
and *stop* the cluster. But I am struggling to use the rbd-nbd tool in
the development cluster built through vstart.sh.

Looking for help.

Thanks.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] rbd-nbd multi queue

2020-09-15 Thread Bobby
Hi,

I have come across an old thread (2017) on the topic of rbd-nbd performance.

Here: https://www.spinics.net/lists/ceph-devel/msg36645.html

It says they tried adding multi-connection support to rbd-nbd with the
newest nbd driver, so that the nbd driver can create multiple IO queues,
and each IO queue is associated with one socket connection used to talk to
rbd-nbd for sending requests and receiving responses.

I have to work on a similar use case. My question is: does the current
rbd-nbd tool have multi-connection support by default, so that I can
create multiple IO queues in the nbd driver?

Thanks
Bobby
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Bonus Ceph Tech Talk: Edge Application - Stream Multiple Video Sources

2020-08-20 Thread Bobby
thanks!

On Thursday, August 20, 2020, Mike Perez  wrote:
> Here's the video in case you missed it:
>
> https://www.youtube.com/watch?v=Q8bU-m07Czo
>
> On 8/20/20 10:03 AM, Mike Perez wrote:
>>
>> And we're live! Please join us and bring questions!
>>
>> https://bluejeans.com/908675367
>>
>> On 8/17/20 11:03 AM, Mike Perez wrote:
>>>
>>> Hi all,
>>>
>>> We have a bonus Ceph Tech Talk for August. Join us August 20th at 17:00
UTC to hear Neeha Kompala and Jason Weng present on Edge Application -
Streaming Multiple Video Sources.
>>>
>>> Don't forget on August 27th at 17:00 UTC, Pritha Srivastava will also
be presenting on this month's Ceph Tech Talk: Secure Token Service in the
Rados Gateway.
>>>
>>> If you're interested in giving a Ceph Tech Talk for September 24th or
October 22nd, please let me know!
>>>
>>> https://ceph.io/ceph-tech-talks/
>>>
>>> --
>>>
>>> Mike Perez
>>>
>>> He/Him
>>>
>>> Ceph Community Manager
>>>
>>> Red Hat Los Angeles 
>>>
>>> thin...@redhat.com 
>>> M: 1-951-572-2633  IM: IRC Freenode/OFTC: thingee
>>>
>>> 494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
>>>
>>> @Thingee 
>>> 
>>> 
>>>
>> --
>>
>> Mike Perez
>>
>> He/Him
>>
>> Ceph Community Manager
>>
>> Red Hat Los Angeles 
>>
>> thin...@redhat.com 
>> M: 1-951-572-2633  IM: IRC Freenode/OFTC: thingee
>>
>> 494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
>>
>> @Thingee 
>> 
>> 
>>
> --
>
> Mike Perez
>
> He/Him
>
> Ceph Community Manager
>
> Red Hat Los Angeles 
>
> thin...@redhat.com 
> M: 1-951-572-2633  IM: IRC Freenode/OFTC: thingee
>
> 494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
>
> @Thingee 
> 
> 
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Bonus Ceph Tech Talk: Edge Application - Stream Multiple Video Sources

2020-08-20 Thread Bobby
Hi... Will it be available on YouTube?

On Thursday, August 20, 2020, Marc Roos  wrote:
>
> Can't join as guest without enabling mic and/or camera???
>
> -Original Message-
> From: Mike Perez [mailto:mipe...@redhat.com]
> Sent: donderdag 20 augustus 2020 19:03
> To: ceph-users@ceph.io
> Subject: [ceph-users] Re: Bonus Ceph Tech Talk: Edge Application -
> Stream Multiple Video Sources
>
> And we're live! Please join us and bring questions!
>
> https://bluejeans.com/908675367
>
> On 8/17/20 11:03 AM, Mike Perez wrote:
>>
>> Hi all,
>>
>> We have a bonus Ceph Tech Talk for August. Join us August 20th at
>> 17:00 UTC to hear Neeha Kompala and Jason Weng present on Edge
>> Application - Streaming Multiple Video Sources.
>>
>> Don't forget on August 27th at 17:00 UTC, Pritha Srivastava will also
>> be presenting on this month's Ceph Tech Talk: Secure Token Service in
>> the Rados Gateway.
>>
>> If you're interested in giving a Ceph Tech Talk for September 24th or
>> October 22nd, please let me know!
>>
>> https://ceph.io/ceph-tech-talks/
>>
>> --
>>
>> Mike Perez
>>
>> He/Him
>>
>> Ceph Community Manager
>>
>> Red Hat Los Angeles 
>>
>> thin...@redhat.com 
>> M: 1-951-572-2633  IM: IRC Freenode/OFTC: thingee
>>
>> 494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
>>
>> @Thingee 
>> 
>> 
>>
> --
>
> Mike Perez
>
> He/Him
>
> Ceph Community Manager
>
> Red Hat Los Angeles 
>
> thin...@redhat.com 
> M: 1-951-572-2633  IM: IRC Freenode/OFTC: thingee
>
> 494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
>
> @Thingee 
> 
> 
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an
> email to ceph-users-le...@ceph.io
>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Can you block gmail.com or so!!!

2020-08-06 Thread Bobby
No please :-( ! I'm a Ceph user with a gmail account.

On Thursday, August 6, 2020, David Galloway  wrote:
> Oh, interesting.  You appear to be correct.  I'm running each of the
> mailing lists' services in their own containers so the private IP makes
> sense.
>
> I just commented on a FR for Hyperkitty to disable posting via Web UI:
> https://gitlab.com/mailman/hyperkitty/-/issues/264
>
> Aside from that, I can confirm my new SPF filter has already blocked one
> spam e-mail from getting through so that's good.
>
> Thanks for the tip.
>
> On 8/6/20 2:34 PM, Tony Lill wrote:
>> I looked at the received-from headers, and it looks to me like these
>> messages are being fed into the list from the web interface. The first
>> received from is from mailman web and a private IP.
>>
>> On 8/6/20 2:09 PM, David Galloway wrote:
>>> Hi all,
>>>
>>> As previously mentioned, blocking the gmail domain isn't a feasible
>>> solution since the vast majority of @gmail.com subscribers (about 500 in
>>> total) are likely legitimate Ceph users.
>>>
>>> A mailing list member recommended some additional SPF checking a couple
>>> weeks ago which I just implemented today.  I think what's actually
>>> happening is a bot will subscribe using a gmail address and then
>>> "clicks" the confirmation link.  They then spam from a different domain
>>> pretending to be coming from gmail.com but it's not.  The new config I
>>> put in place should block that.
>>>
>>> Hopefully this should cut down on the spam.  I took over the Ceph
>>> mailing lists last year and it's been a never-ending cat and mouse game
>>> of spam filters/services, configuration changes, etc.  I'm still
>>> learning how to be a mail admin so your patience and understanding is
>>> appreciated.
>>>
>>
>>
>> ___
>> ceph-users mailing list -- ceph-users@ceph.io
>> To unsubscribe send an email to ceph-users-le...@ceph.io
>>
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] which exact decimal value is meant here for S64_MIN in CRUSH Mapper

2020-08-01 Thread Bobby
Hi,

In the *mapper.c* file of Ceph CRUSH, I am trying to understand the definition
of the Linux macro ```S64_MIN``` used in the following ```else```
branch, i.e. ```draw = S64_MIN```.

Which exact decimal value is meant here by ```S64_MIN```?

```
if (weights[i]) {
        u = hash(bucket->h.hash, x, ids[i], r);
        u &= 0xffff;
        ln = crush_ln(u) - 0x1000000000000ll;

        __s64 draw = div64_s64(ln, weights[i]);
} else {
        __s64 draw = S64_MIN;

        // #define S64_MAX ((s64)(U64_MAX >> 1))
        // #define S64_MIN ((s64)(-S64_MAX - 1))
}
if (i == 0 || draw > high_draw) {
        high = i;
        high_draw = draw;
}
}
return bucket->h.items[high];
}
```
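
For reference, here is a minimal sketch (my own illustration, not code from
mapper.c) of what those two macros work out to, assuming the usual
two's-complement 64-bit kernel types (s64/u64):

```
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t u64_max = UINT64_MAX;                /* 18446744073709551615 */
    int64_t  s64_max = (int64_t)(u64_max >> 1);   /*  9223372036854775807 = 2^63 - 1 */
    int64_t  s64_min = -s64_max - 1;              /* -9223372036854775808 = -2^63 */

    printf("S64_MAX = %lld\n", (long long)s64_max);
    printf("S64_MIN = %lld\n", (long long)s64_min);
    return 0;
}
```

So S64_MIN is -2^63, i.e. -9223372036854775808; assigning it to draw in the
else branch simply guarantees that an item with zero weight can never
produce the highest draw.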


Thanks

Bobby !
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] weight_set array in Ceph CRUSH

2020-07-24 Thread Bobby
Hi,

I am confused about a struct data type in the Ceph CRUSH source code.
The header file is here:
https://github.com/ceph/ceph/blob/master/src/crush/crush.h

If you look below, there is a struct member named _weight_set_. From what I
have understood going through different CRUSH maps, this _weight_set_ array
should effectively be a 2D array. Am I right?
struct crush_choose_arg {
        __s32 *ids;             /*!< values to use instead of items */
        __u32 ids_size;         /*!< size of the __ids__ array */
        struct crush_weight_set *weight_set; /*!< weight replacements for a given position */
        __u32 weight_set_positions;
};
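
In case it helps to make the question concrete, this is how I currently read
the layout (a small sketch of my own, assuming the crush_weight_set
definition from the same header, i.e. a weights array plus its size;
conceptually it behaves like weights[position][item]):

/* Sketch only: arg->weight_set is an array of weight_set_positions entries;
 * each entry carries its own weights[] array with one 16.16 fixed-point
 * weight per bucket item, so the whole thing reads like a 2D array. */
__u32 get_weight(const struct crush_choose_arg *arg, int position, int item)
{
        const struct crush_weight_set *ws = &arg->weight_set[position];
        return ws->weights[item];
}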

BR
Bobby
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: script for compiling and running the Ceph source code

2020-07-21 Thread Bobby
And to put it more precisely, I would like to figure out how many times
this particular function is called during the execution of the program.
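
To illustrate what I mean by counting invocations, this is roughly the kind
of instrumentation I have in mind (a minimal standalone sketch of my own,
not Ceph code; the function name is just a placeholder):

#include <stdio.h>
#include <stdlib.h>
#include <stdatomic.h>

/* Hypothetical instrumentation: count how often one function is entered
 * and report the total when the program exits. */
static atomic_ulong call_count;

static void report_count(void)
{
        fprintf(stderr, "function_of_interest called %lu times\n",
                (unsigned long)atomic_load(&call_count));
}

void function_of_interest(void)           /* placeholder for the real function */
{
        atomic_fetch_add(&call_count, 1); /* one line added at the top of the body */
        /* ... original body ... */
}

int main(void)
{
        atexit(report_count);             /* print the total at exit */
        for (int i = 0; i < 1000; i++)
                function_of_interest();
        return 0;
}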

BR
Bobby !

On Tue, Jul 21, 2020 at 1:24 PM Bobby  wrote:

>
> Hi,
>
> I am trying to profile the number of invocations to a particular function
> in  Ceph source code. I have instrumented the code with time functions.
>
> Can someone please share the script for compiling and running the Ceph
> source code? I am struggling with it. That would be great help !
>
> BR
> Bobby !
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] script for compiling and running the Ceph source code

2020-07-21 Thread Bobby
Hi,

I am trying to profile the number of invocations of a particular function
in the Ceph source code. I have instrumented the code with time functions.

Can someone please share the script for compiling and running the Ceph
source code? I am struggling with it. That would be a great help!

BR
Bobby !
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: client - monitor communication.

2020-07-15 Thread Bobby
Hi Budai,

When you ask "*how often is the client retrieving the Cluster Map?*", the
obvious answer is that there is no 'often' in it: whenever there is
a change in the map, the monitor will inform the client.

I think you need to read about the CRUSH algorithm in Ceph, because that
will explain the map changes and data movement to you. While going through
CRUSH, forget there is a monitor node. Just suppose there is a client
machine and this client machine READs/WRITEs to a cluster (a number of OSDs).
Theoretically, a Ceph client can also be a monitor (not at all
recommended for practical purposes). Once you have understood CRUSH, I am
quite sure that will answer many of your questions.

And feel free to ask about CRUSH. I would be glad to answer.

BR





On Wed, Jul 15, 2020 at 8:54 AM Budai Laszlo  wrote:

>
> to be more specific: if we have an RBD volume used by a client (a
> hypervisor, or or mapped with rbd), we assume continuous activity on the
> volume. How often will the RBD client contact the monitor to get the
> current map? Are you aware of any documentation page that describes this
> interaction?
>
> Thank you,
> Laszlo
>
>
> On 7/15/20 8:12 AM, Budai Laszlo wrote:
> > Hi Nghia,
> >
> > in the docs (https://docs.ceph.com/docs/master/architecture/#about-pools)
> there is the statement "Ceph Clients retrieve a Cluster Map from a Ceph
> Monitor, and write objects to pools." My question is how often the client
> is retrieving the Cluster Map? How does the client get the knowledge about
> a change in the cluster?
> >
> > Thank you,
> > Laszlo
> >
> > On 7/15/20 7:57 AM, Nghia Viet Tran wrote:
> >> Hi Laszlo,
> >>
> >> Which client are you talking about?
> >>
> >> On 7/15/20, 11:54, "Budai Laszlo"  wrote:
> >>
> >> Hello everybody,
> >>
> >> I'm trying to figure out how often the ceph client is contacting
> the monitors for updating its own information about the cluster map.
> >> Can anyone point me to a document describing this client <->
> monitor communication?
> >>
> >> Thank you,
> >> Laszlo
> >> ___
> >> ceph-users mailing list -- ceph-users@ceph.io
> >> To unsubscribe send an email to ceph-users-le...@ceph.io
> >>
> >
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph and Red Hat Summit 2020

2020-07-15 Thread Bobby
Hi,

Is there any Ceph-related event happening at Red Hat Summit 2020 today?

BR
Bobby !
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] multiple BLK-MQ queues for Ceph's RADOS Block Device (RBD) and CephFS

2020-07-13 Thread Bobby
Hi,

I have a question regarding support for multiple BLK-MQ queues for Ceph's
RADOS Block Device (RBD). The link below says that the driver has been
using the BLK-MQ interface for a while, but not actually multiple queues
until now, when it gained a queue per CPU. There is also a change to not
hold onto caps that aren't actually needed. These improvements and more are
part of the Ceph changes for Linux 5.7, which should be released as stable
in early June.

https://www.phoronix.com/scan.php?page=news_item=Linux-5.7-Ceph-Performance

My question is: is it possible to develop a multi-queue driver for Ceph
through CephFS with FUSE (Filesystem in Userspace)? I am asking because
this way I can avoid kernel space. (
https://docs.ceph.com/docs/nautilus/start/quick-cephfs/)

Looking forward for some help

BR
Bobby
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Research and Industrial conferences for Ceph research results

2020-07-12 Thread Bobby
Hi Cephers,

Can someone please point me to research and industrial conferences
where one can publish new Ceph-related research results? Additionally, are
there any conferences that are particularly interested in Ceph results? I
would like to know all suitable conferences. Thanks :-)

Looking forward to hearing from you.

BR
Bobby !!
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Pointers in __crush_do_rule__ function of CRUSH mapper file

2020-06-26 Thread Bobby
Hi all,

I have a question regarding pointer variables used in the __crush_do_rule__
function of CRUSH __mapper.c__. Can someone please help me understand the
purpose of the following four pointer variables inside __crush_do_rule__:

int *b = a + result_max;
int *c = b + result_max;
int *w = a;
int *o = b;


The function __crush_do_rule__ is below:


/**
 * crush_do_rule - calculate a mapping with the given input and rule
 * @map: the crush_map
 * @ruleno: the rule id
 * @x: hash input
 * @result: pointer to result vector
 * @result_max: maximum result size
 * @weight: weight vector (for map leaves)
 * @weight_max: size of weight vector
 * @cwin: Pointer to at least map->working_size bytes of memory or NULL.
 */
int crush_do_rule(const struct crush_map *map,
 int ruleno, int x, int *result, int result_max,
 const __u32 *weight, int weight_max,
 void *cwin, const struct crush_choose_arg *choose_args)
{
int result_len;
struct crush_work *cw = cwin;
int *a = (int *)((char *)cw + map->working_size);
int *b = a + result_max;
int *c = b + result_max;
int *w = a;
int *o = b;
int recurse_to_leaf;
int wsize = 0;
int osize;
int *tmp;
const struct crush_rule *rule;
__u32 step;
int i, j;
int numrep;
int out_size;
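
From reading the rest of mapper.c, my current understanding of those four
pointers is the following (my own notes written as comments, not an
authoritative description; please correct me if I read it wrong):

/* Scratch layout as I read it:
 *
 *   int *a = (int *)((char *)cw + map->working_size);
 *       a points just past the per-map crush_work area in the caller's
 *       scratch buffer; three vectors of result_max ints follow from there.
 *   int *b = a + result_max;    second scratch vector
 *   int *c = b + result_max;    third scratch vector, used for the leaf
 *                               output when a rule recurses to leaves
 *   int *w = a;                 "working" vector: the input items for the
 *                               current rule step
 *   int *o = b;                 "output" vector: the items chosen by the
 *                               current rule step
 *
 * After each choose step the code swaps w and o (via tmp), so the output of
 * one step becomes the working set of the next step.
 */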



Thanks

Bobby !
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph and linux multi queue block IO layer

2020-06-22 Thread Bobby
Hi all,

One question please. Does Ceph uses the linux multi queue block IO layer  ?

BR
Bobby
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] struct crush_bucket **buckets in Ceph CRUSH

2020-06-16 Thread Bobby
Hi,

I have a question regarding Ceph CRUSH. I have been going through the crush.h
file. It says that *struct crush_bucket **buckets* (below) is an array of
pointers. My understanding is that this particular array of pointers is a
collection of addresses of buckets, each holding six scalar values, namely
__s32 id, __u16 type, __u8 alg, __u8 hash, __u32 weight and __u32 size, and
that the reason it is a double pointer **buckets is that each bucket in turn
points to another pointer, namely __s32 *items. Please correct me if I am wrong.
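
To make my mental model concrete, this is how I picture the array being used
(a small sketch of my own, assuming crush.h is included for the struct
definitions; it is not taken from the Ceph sources):

#include <stdio.h>
/* assumes the crush.h header above has been included for crush_map / crush_bucket */

/* Sketch: map->buckets holds max_buckets pointers; each non-NULL entry
 * points to one crush_bucket, and each bucket carries its own items[]
 * array of bucket->size children. */
void dump_buckets(const struct crush_map *map)
{
        for (__s32 i = 0; i < map->max_buckets; i++) {
                const struct crush_bucket *b = map->buckets[i]; /* may be NULL */
                if (!b)
                        continue;
                printf("bucket id=%d type=%u alg=%u weight=0x%x size=%u\n",
                       b->id, b->type, b->alg, b->weight, b->size);
                for (__u32 j = 0; j < b->size; j++)
                        printf("  item[%u] = %d\n", j, b->items[j]);
        }
}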


/** @ingroup API
 *
 * A crush map define a hierarchy of crush_bucket that end with leaves
 * (buckets and leaves are called items) and a set of crush_rule to
 * map an integer to items with the crush_do_rule() function.
 *
 */
struct crush_map {
/*! An array of crush_bucket pointers of size __max_buckets__.
 * An element of the array may be NULL if the bucket was removed
with
 * crush_remove_bucket(). The buckets must be added with
crush_add_bucket().
 * The bucket found at __buckets[i]__ must have a crush_bucket.id
== -1-i.
 */
struct crush_bucket **buckets;
/*! An array of crush_rule pointers of size __max_rules__.
 * An element of the array may be NULL if the rule was removed
(there is
 * no API to do so but there may be one in the future). The rules
must be added
 * with crush_add_rule().
 */
struct crush_rule **rules;
__s32 max_buckets; /*!< the size of __buckets__ */
__u32 max_rules; /*!< the size of __rules__ */
/*! The value of the highest item stored in the crush_map + 1
 */
__s32 max_devices;

/*! Backward compatibility tunable. It implements a bad solution
 * and must always be set to 0 except for backward compatibility
 * purposes
 */
__u32 choose_local_tries;
/*! Backward compatibility tunable. It implements a bad solution
 * and must always be set to 0 except for backward compatibility
 * purposes
 */
__u32 choose_local_fallback_tries;
/*! Tunable. The default value when the CHOOSE_TRIES or
 * CHOOSELEAF_TRIES steps are omitted in a rule. See the
 * documentation for crush_rule_set_step() for more
 * information
 */
__u32 choose_total_tries;
/*! Backward compatibility tunable. It should always be set
 *  to 1 except for backward compatibility. Implemented in 2012
 *  it was generalized late 2013 and is mostly unused except
 *  in one border case, reason why it must be set to 1.
 *
 *  Attempt chooseleaf inner descent once for firstn mode; on
 *  reject retry outer descent.  Note that this does *not*
 *  apply to a collision: in that case we will retry as we
 *  used to.
 */
__u32 chooseleaf_descend_once;
/*! Backward compatibility tunable. It is a fix for bad
 *  mappings implemented in 2014 at
 *  https://github.com/ceph/ceph/pull/1185. It should always
 *  be set to 1 except for backward compatibility.
 *
 *  If non-zero, feed r into chooseleaf, bit-shifted right by
*  (r-1) bits.  a value of 1 is best for new clusters.  for
*  legacy clusters that want to limit reshuffling, a value of
*  3 or 4 will make the mappings line up a bit better with
*  previous mappings.
 */
__u8 chooseleaf_vary_r;

/*! Backward compatibility tunable. It is an improvement that
 *  avoids unnecessary mapping changes, implemented at
 *  https://github.com/ceph/ceph/pull/6572 and explained in
 *  this post: "chooseleaf may cause some unnecessary pg
 *  migrations" in October 2015
 *
https://www.mail-archive.com/ceph-devel@vger.kernel.org/msg26075.html
 *  It should always be set to 1 except for backward compatibility.
 */
__u8 chooseleaf_stable;

/*! @cond INTERNAL */
/* This value is calculated after decode or construction by
  the builder. It is exposed here (rather than having a
  'build CRUSH working space' function) so that callers can
  reserve a static buffer, allocate space on the stack, or
  otherwise avoid calling into the heap allocator if they
  want to. The size of the working space depends on the map,
  while the size of the scratch vector passed to the mapper
  depends on the size of the desired result set.

  Nothing stops the caller from allocating both in one swell
  foop and passing in two points, though. */
size_t working_size;

#ifndef __KERNEL__
/*! @endcond */
/*! Backward compatibility tunable. It is a fix for the straw
 *  scaler values for the straw algorithm which is deprecated
 *  (straw2 replaces it) implemented at
 *  https://github.com/ceph/ceph/pull/3057. It should always
 *  be set to 1 except for backward compatibility.
 *
*/
__u8 straw_calc_version;

/*! @cond INTERNAL */
/*
* allowed bucket algs is a bitmask, here the bit positions
* are CRUSH_BUCKET_*.  note that these are 

[ceph-users] Purpose of crush_ln() function

2020-06-10 Thread Bobby
  Hi,

I am trying to understand the straw2 bucket of the Ceph CRUSH algorithm.

Can someone please tell me what the purpose of *crush_ln* is? And what does
"compute 2^44 * log2(input+1)" mean in the comment above the *crush_ln*
function?
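
For context, my current (possibly wrong) reading of how the logarithm is
used in straw2, written out as a floating-point sketch of my own rather than
the fixed-point kernel code:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Floating-point illustration of the straw2 selection idea: every item
 * draws log(u)/w for a pseudo-random u in (0,1]; the largest draw wins,
 * which makes the win probability proportional to the item's weight.
 * As far as I understand, crush_ln() is a fixed-point approximation of
 * that logarithm (hence the "2^44 * log2(input+1)" scaling in the comment)
 * so the real code can avoid floating point in the kernel. */
static int straw2_pick(const double *weights, int n)
{
        int best = -1;
        double best_draw = 0.0;

        for (int i = 0; i < n; i++) {
                if (weights[i] <= 0.0)
                        continue;                  /* zero weight never wins */
                double u = (rand() + 1.0) / ((double)RAND_MAX + 1.0); /* (0,1] */
                double draw = log(u) / weights[i]; /* negative; larger weight
                                                      means closer to zero */
                if (best < 0 || draw > best_draw) {
                        best = i;
                        best_draw = draw;
                }
        }
        return best;
}

int main(void)
{
        double w[3] = { 1.0, 2.0, 1.0 };
        int wins[3] = { 0, 0, 0 };

        for (int i = 0; i < 100000; i++)
                wins[straw2_pick(w, 3)]++;
        /* the middle item should win roughly twice as often as the others */
        printf("wins: %d %d %d\n", wins[0], wins[1], wins[2]);
        return 0;
}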

Thanks in advance
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Maximum size of data in crush_choose_firstn Ceph CRUSH source code

2020-06-09 Thread Bobby
Hi all,

I have a question regarding a function called *crush_choose_firstn* in the
Ceph source code, namely in *mapper.c*. This function has the following
pointer parameters:

- const struct crush_map *map,
- struct crush_work *work,
- const struct crush_bucket *bucket,
- int *out,
- const __u32 *weight,
- int *out2,
- const struct crush_choose_arg *choose_args

What is the maximum size of data involved here? I mean, what is the upper
bound?

BR
Bobby
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph and iSCSI

2020-05-29 Thread Bobby
Hi all,

I am new to Ceph, but I have a good understanding of the iSCSI protocol. I
will dive into Ceph because it looks promising, and I am particularly
interested in Ceph RBD. I have a request: can you please tell me what
common similarities, if any, there are between iSCSI and Ceph? If someone
had to work on a common model for iSCSI and Ceph, what significant points
would you suggest to someone who has some understanding of iSCSI?

Looking forward to answers. Thanks in advance :-)

BR
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Unit testing of CRUSH Algorithm

2020-05-08 Thread Bobby
Hi,

Are there any more unit-test resources for the CRUSH algorithm other than
the test cases here:
https://github.com/ceph/ceph/tree/master/src/test/crush

Or would more unit testing of CRUSH, apart from these test cases, be
overkill?

BR
Bobby !
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Workload in Unit testing

2020-05-05 Thread Bobby
Hi all,

The Ceph documentation mentions it has two types of tests: *unit tests* (also
called make check tests) and *integration tests*. Strictly speaking, the *make
check tests* are not “unit tests”, but rather tests that can be run easily
on a single build machine after compiling Ceph from source.

unit tests: https://github.com/ceph/ceph/tree/master/src/test

In order to develop on Ceph, I am using a Ceph utility, *vstart.sh*, which
allows me to deploy a fake local cluster for development purposes. I am doing
unit testing, and these tests are helping me. Thanks!

My question: how realistic and how big is the workload of the unit tests? Are
these tests enough for profiling function call counts, loop counts and
parallelism to a good extent?

Thanks in advance !

BR
Bobby !
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: asynchronous/non-sequential example read and write test codes Librados

2020-05-05 Thread Bobby
 Hi Casey,
 Hi all,

Casey, thanks a lot for your reply! That was really helpful.

A question, please: do these tests reflect a realistic workload? Basically I
am profiling (CPU profiling) the computations in these tests, and naturally
I am interested in a big workload. I have started with CRUSH, and here I would
like to ask you and fellow Ceph people about the workload in:
https://github.com/ceph/ceph/tree/master/src/test/crush.

Thanks in advance

BR
Bobby!

On Mon, May 4, 2020 at 6:19 PM Casey Bodley  wrote:

> On Mon, May 4, 2020 at 9:30 AM Bobby  wrote:
> >
> > Hi Cephers,
> >
> > I am working on Ceph librados. Currently I can test
> sequential/synchronous
> > read and write tests both in C and C++. However I am struggling with
> > asynchronous/non-sequential test codes. Are there any test repositories
> > which contain   asynchronous/non-sequential examples codes?
>
> You can find some test cases for the async interfaces in
> src/test/librados/aio.cc and src/test/librados/aio_cxx.cc
>
> >
> > Thanks in advance
> >
> > Bobby !
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
> >
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] asynchronous/non-sequential example read and write test codes Librados

2020-05-04 Thread Bobby
Hi Cephers,

I am working on Ceph librados. Currently I can run sequential/synchronous
read and write tests both in C and C++. However, I am struggling with
asynchronous/non-sequential test code. Are there any test repositories
which contain asynchronous/non-sequential example code?
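
For illustration, the kind of non-blocking pattern I am after looks roughly
like this (a minimal sketch of my own against the librados C AIO calls; the
object name and the error handling are simplified, and the ioctx is assumed
to be already connected as in the librados intro example):

#include <string.h>
#include <rados/librados.h>

/* One asynchronous write, then wait for it to complete. More completions
 * could be queued before waiting to keep several operations in flight. */
int async_write_example(rados_ioctx_t io)
{
        const char *buf = "Hello World!";
        rados_completion_t comp;
        int err;

        err = rados_aio_create_completion(NULL, NULL, NULL, &comp);
        if (err < 0)
                return err;

        err = rados_aio_write(io, "hw-async", comp, buf, strlen(buf), 0);
        if (err < 0) {
                rados_aio_release(comp);
                return err;
        }

        rados_aio_wait_for_complete(comp);      /* block until it is done */
        err = rados_aio_get_return_value(comp); /* result of the write */
        rados_aio_release(comp);
        return err;
}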

Thanks in advance

Bobby !
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph crushtool in developer mode

2020-04-30 Thread Bobby
Hi Cephers,

Can we use *crushtool* in developer mode? I have deployed a fake local
cluster for development purposes as described in the Ceph documentation here (
https://docs.ceph.com/docs/mimic/dev/dev_cluster_deployement/)

Best regards
Bobby !
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Newbie Question: CRUSH and Librados Profiling

2020-04-29 Thread Bobby
Hi once again! Can someone please help me with this question :-)

Bobby !

On Wed, Apr 29, 2020 at 2:05 PM Bobby  wrote:

>
>
> Hi,
>
> It is a newbie question. I would be really thankful if you can answer it
> please.  I want to compile the Ceph source code. Because I want to profile
> *Librados* and *CRUSH* function stacks. Please verify if this is the
> right track I am following:
>
> - I have cloned the Ceph from Ceph git repository
> - I have installed the build code dependencies from script
> *install-deps.sh*
> - Because I would like to use the* gdb debug* client program later, the
> client program will  depend on the librados library, so I must compile ceph
> in debug mode. Therefore I would modify the parameters of ceph cmake in
> *do_cmake.sh* script accordingly.
> - Then I compile *do_cmake*
> *- *In build I run* make - j 32*
> *- *To start the developer mode, I run *make vstart.*
> *- *In the developer mode I can write *READ* and *WRITE* tests...compile
> these tests and then use some profiling tool to call the compiled
> executable to profile the function stacks.
>
> Is this the correct way for *CPU profiling*? Please let me know if it is
> fine or is there something more also.
>
>
> Bobby !
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Newbie Question: CRUSH and Librados Profiling

2020-04-29 Thread Bobby
Hi,

It is a newbie question; I would be really thankful if you could answer it.
I want to compile the Ceph source code because I want to profile the
*Librados* and *CRUSH* function stacks. Please verify if this is the right
track I am following:

- I have cloned Ceph from the Ceph git repository.
- I have installed the build dependencies with the *install-deps.sh* script.
- Because I would like to use *gdb* to debug the client program later, and
  the client program will depend on the librados library, I must compile
  Ceph in debug mode. Therefore I would modify the parameters of the Ceph
  cmake in the *do_cmake.sh* script accordingly.
- Then I run *do_cmake*.
- In the build directory I run *make -j 32*.
- To start the developer mode, I run *make vstart*.
- In the developer mode I can write *READ* and *WRITE* tests, compile
  these tests and then use some profiling tool to call the compiled
  executables to profile the function stacks.

Is this the correct way to do *CPU profiling*? Please let me know if it is
fine or if there is something more to it.
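
As an example of the last step, the kind of measurement wrapper I have in
mind around a single librados call looks like this (my own sketch, not Ceph
code; the write itself follows the librados intro example and the ioctx is
assumed to be set up already):

#include <stdio.h>
#include <string.h>
#include <time.h>
#include <rados/librados.h>

/* Wall-clock timing around one synchronous write, as a starting point
 * before moving to a real profiler (perf, gprof, valgrind/callgrind). */
int timed_write(rados_ioctx_t io, const char *oid)
{
        const char *buf = "Hello World!";
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        int err = rados_write(io, oid, buf, strlen(buf), 0);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("rados_write(%s) returned %d in %.3f ms\n", oid, err, ms);
        return err;
}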


Bobby !
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Increase number of read and writes

2020-04-23 Thread Bobby
Hi Janne,

Thanks a lot! I should have checked it earlier... I got it :-)

Basically I would like to compile the client read and write C/C++ code and
then later profile the executables with valgrind and other profiling
tools, the reason being that I want to see the function calls, execution
time, etc. This is very easy with the given Librados example. I am already
doing the profiling of the executables.

What you have pointed out regarding *fio* is exactly my next goal (you
read my mind).

Given where I am at the moment (a Ceph deployment cluster) and given what I
want to achieve (profile the executables of the read/write test code with a
high number of reads and writes), how can I bring *fio* into it? Maybe there
is already some Ceph test code with a high number of write and read calls
in parallel?

I have come across this example *librbd* test code in the Ceph repository
( https://github.com/ceph/ceph/blob/master/examples/librbd/hello_world.cc )

..



On Thu, Apr 23, 2020 at 4:16 PM Janne Johansson  wrote:

>
>
> Den tors 23 apr. 2020 kl 16:07 skrev Bobby :
>
>> Hi,
>>
>> I am using Ceph in developer mode. Currently I am implementing Librados
>> examples which are also available in Introduction to Librados section
>>
>> https://docs.ceph.com/docs/master/rados/api/librados-intro/#step-3-creating-an-i-o-context
>> .
>> It says once your app has a cluster handle and a connection to a Ceph
>> Storage Cluster, you may create an I/O Context and begin reading and
>> writing data.  For example,
>>
>> *err = rados_write(io, "hw", "Hello World!", 12, 0);
>>
>
>
>>
>> My question, Is "12" is the number of writes? Because I want to test the
>> with high number of read and writes.
>>
>> Looking for help !
>>
>
> Just check what parameters the function takes:
> CEPH_RADOS_API int rados_write(rados_ioctx_t io, const char *oid,
>                                const char *buf, size_t len, uint64_t off)
> <https://docs.ceph.com/docs/master/rados/api/librados/#c.rados_write>
>
> Write *len* bytes from *buf* into the *oid* object, starting at offset
> *off*. The value of *len* must be <= UINT_MAX/2.
>
>
> The 12 seems to be the length of "Hello World!" in bytes, which matches
> what a normal write() call would need.
> In order to test high number of writes, you need to send lots of write
> calls in parallel.
>
> (Or just get fio with rbd support compiled in, this is a solved problem
> already how to benchmark ceph at a low level)
>
> --
> May the most significant bit of your life be positive.
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Increase number of read and writes

2020-04-23 Thread Bobby
Hi,

I am using Ceph in developer mode. Currently I am implementing the Librados
examples which are also available in the Introduction to Librados section:
https://docs.ceph.com/docs/master/rados/api/librados-intro/#step-3-creating-an-i-o-context.
It says that once your app has a cluster handle and a connection to a Ceph
Storage Cluster, you may create an I/O Context and begin reading and
writing data. For example:









err = rados_write(io, "hw", "Hello World!", 12, 0);
if (err < 0) {
        fprintf(stderr, "%s: Cannot write object \"neo-obj\" to pool %s: %s\n",
                argv[0], poolname, strerror(-err));
        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        exit(1);
} else {
        printf("\nWrote \"Hello World\" to object \"neo-obj\".\n");
}

My question, Is "12" is the number of writes? Because I want to test the
with high number of read and writes.

Looking for help !
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io