On Thu, Jan 24, 2013 at 9:27 AM, Cesar Mello <cme...@gmail.com> wrote:
> Hi!
>
> I have successfully prototyped read/write access to ceph from Windows
> using the S3 API, thanks so much for the help.
>
> Now I would like to do some prototypes targeting performance
> evaluation. My scenario typically requires parallel storage of data
> from tens of thousands of loggers, but scalability to hundreds of
> thousands is the main reason for investigating ceph.
>
> My tests using a single laptop running ceph with 2 local OSDs and a
> local radosgw allow writing on average 2.5 small objects per second
> (100 objects in 40 seconds). Is this the expected performance? It
> seems to be I/O bound because the HDD LED stays on during the
> PutObject requests. Any suggestions or documentation pointers for
> profiling would be much appreciated.

Hi Mello,

2.5 objects/sec seems terribly slow, even on your laptop.  How "small"
are these objects?  You might try benchmarking without the disk as a
potential bottleneck, either by putting your osd data and journals in
/tmp (for benchmarking only, of course) or by creating and mounting a
tmpfs and pointing your osd backends there.
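
If it helps as a rough baseline, here is a minimal PUT loop using the
boto S3 bindings against radosgw.  The host, port, credentials, bucket
name and payload size are all placeholders (I'm guessing at what
"small" means for you), so adjust them for your gateway:

#!/usr/bin/env python
# Crude PUT benchmark against radosgw's S3 API (boto 2.x).
# Host, port, credentials, bucket name and object size are placeholders.
import time
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',        # placeholder
    aws_secret_access_key='SECRET_KEY',    # placeholder
    host='localhost', port=80,             # assumed radosgw endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())

bucket = conn.create_bucket('bench')       # created if it doesn't exist
payload = 'x' * 1024                       # ~1 KB "small" object
count = 100

start = time.time()
for i in range(count):
    bucket.new_key('obj-%06d' % i).set_contents_from_string(payload)
elapsed = time.time() - start
print('%d objects in %.1fs -> %.1f objects/sec' % (count, elapsed, count / elapsed))

Running the same loop against a tmpfs-backed osd and against your HDD
setup should make it clear whether the disk (and journal syncs in
particular) is the bottleneck.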

>
> I am afraid the S3 API is not a good fit for my scenario, because
> there is no way to append data to existing objects (so I won't be able
> to model a single object per data collector). If that is the case, then
> I would need to store billions of small objects. I would like to know
> how much disk space each object instance requires beyond the object's
> content length.
>
> If the S3 API is not well suited to my scenario, then my effort would
> be better directed toward porting or writing a native ceph client for
> Windows. I just need an API to read and write/append blocks to files.
> Any comments are really appreciated.

Hopefully someone with more Windows experience will give you better
info/advice than I can.

You could try to port the rados API to Windows.  It's purely userspace,
but it does rely on pthreads and other libc/gcc specifics.  With
something like Cygwin, though, a port might not be too hard.  If you
decide to go that route, let us know how you progress!
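
For what it's worth, librados does give you the append semantics you
were missing in S3.  Here is a tiny sketch using the Python bindings
(python-rados); it assumes a reachable cluster, a readable
/etc/ceph/ceph.conf plus keyring, and a pool named 'data', so treat the
names as placeholders:

# Append to a rados object directly, bypassing the S3 gateway.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('data')      # pool name is an assumption
try:
    ioctx.write_full('logger-0001', 'first record\n')   # create/overwrite
    ioctx.append('logger-0001', 'second record\n')      # append in place
    print(ioctx.read('logger-0001'))
finally:
    ioctx.close()
    cluster.shutdown()

The bindings sit on top of librados itself, of course, so on Windows
you would still need the port (or Cygwin) mentioned above before any of
this runs natively.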

-sam


>
> Thank you a lot for the attention!
>
> Best regards
> Mello