Thanks! Fixed in https://github.com/ceph/ceph/pull/6783. Please review.
On Thu, Dec 3, 2015 at 3:19 AM, James (Fei) Liu-SSI
wrote:
> Hi Haomai,
> I happened to run ceph_objectstore_bench against the key value store on master
> branch. It always crashed at
On Thu, Dec 3, 2015 at 8:17 AM, Somnath Roy wrote:
> Hi Sage/Sam,
> As discussed in today's performance meeting , I am planning to change the
> queue_transactions() interface to the following.
>
> int queue_transactions(Sequencer *osr, list<Transaction*>& tls,
>
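For context, the rough shape of that interface (a self-contained sketch from
memory, with stand-in types; not the exact declaration on master):

#include <list>
using std::list;

// Stand-in types so the sketch compiles on its own; in the tree these are
// ObjectStore::Sequencer, ObjectStore::Transaction, and Context.
struct Sequencer {};
struct Transaction {};
struct Context {};

// Queue a list of transactions on a sequencer, with completion callbacks
// for the "readable" and "committed to disk" stages.
int queue_transactions(Sequencer *osr, list<Transaction*>& tls,
                       Context *onreadable, Context *ondisk = 0);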
On Mon, Nov 30, 2015 at 1:44 AM, Willem Jan Withagen wrote:
> Hi,
>
> Not unlike many others running FreeBSD I'd like to see if I/we can get
> Ceph to build and run on FreeBSD. If not all components, then at least
> certain components.
>
> With compilation I do get quite some
>
> --
> hzwulibin
> 2015-11-26
>
> -
> From: "hzwulibin"<hzwuli...@gmail.com>
> Sent: 2015-11-23 09:00
> To: Sage Weil, Haomai Wang
> Cc: ceph-devel
On Thu, Nov 19, 2015 at 11:26 PM, Libin Wu wrote:
> Hi, cephers
>
> I have a cluster of 6 OSD servers; every server has 8 OSDs.
>
> I marked 4 OSDs out on every server, and then my client IO blocked.
>
> I rebooted my client and created a new rbd device, but the new
> device also
On Fri, Nov 20, 2015 at 9:08 PM, Sage Weil <s...@newdream.net> wrote:
> On Fri, 20 Nov 2015, Haomai Wang wrote:
>> On Fri, Nov 20, 2015 at 7:41 PM, Sage Weil <s...@newdream.net> wrote:
>> > On Fri, 20 Nov 2015, changtao381 wrote:
>> >>
On Fri, Nov 20, 2015 at 7:41 PM, Sage Weil wrote:
> On Fri, 20 Nov 2015, changtao381 wrote:
>> Hi All,
>>
>> Thanks for your reply!
>>
>> If direct IO + async IO requires that alignment, it still shouldn't be
>> necessary to align each journal entry to a page boundary.
>> For it may write many
On Fri, Nov 20, 2015 at 4:33 PM, changtao381 wrote:
> Hi All,
>
> Why does a journal entry need to be aligned by CEPH_PAGE_MASK? It
> causes the journal write data to be amplified by 2X for small IO
>
Linux AIO/DIO requires this.
> For example write io size
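To make the alignment rule concrete, here is a minimal sketch (assuming 4K
pages; the function and constant names are illustrative) of why a sub-page
journal entry gets padded to a full page under O_DIRECT:

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define PAGE_SIZE_ASSUMED 4096  /* assume 4K pages for the example */

/* With O_DIRECT (and hence Linux AIO/DIO), the buffer address, the file
 * offset, and the I/O size must all be block/page aligned, so a small
 * journal entry is padded to a full page -- that padding is the ~2x
 * amplification discussed above. */
int write_journal_entry(int fd, const void *entry, size_t len, off_t pos)
{
    size_t padded = (len + PAGE_SIZE_ASSUMED - 1) &
                    ~(size_t)(PAGE_SIZE_ASSUMED - 1);
    void *buf;
    if (posix_memalign(&buf, PAGE_SIZE_ASSUMED, padded))
        return -1;
    memcpy(buf, entry, len);
    memset((char *)buf + len, 0, padded - len);  /* zero-fill the padding */
    ssize_t r = pwrite(fd, buf, padded, pos);    /* fd opened with O_DIRECT */
    free(buf);
    return r < 0 ? -1 : 0;
}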
On Tue, Nov 10, 2015 at 2:19 AM, Samuel Just wrote:
> Ops are hashed from the messenger (or any of the other enqueue sources
> for non-message items) into one of N queues, each of which is serviced
> by M threads. We can't quite have a single thread own a single queue
> yet
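A minimal sketch (names illustrative, not the actual OSD code) of the layout
Sam describes -- ops hashed into one of N shards, each drained by M threads:

#include <cstdint>
#include <deque>
#include <mutex>
#include <condition_variable>
#include <functional>
#include <vector>

struct OpShard {
  std::mutex lock;
  std::condition_variable cond;
  std::deque<std::function<void()>> ops;
};

struct ShardedOpQueue {
  std::vector<OpShard> shards;                       // N queues
  explicit ShardedOpQueue(size_t n) : shards(n) {}

  void enqueue(uint64_t hash_key, std::function<void()> op) {
    OpShard &s = shards[hash_key % shards.size()];   // hash op to a shard
    std::lock_guard<std::mutex> l(s.lock);
    s.ops.push_back(std::move(op));
    s.cond.notify_one();  // wake one of the M worker threads on this shard
  }
};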
Hi Sage,
Could you let us know your progress on refactoring MSubOP and the
hobject_t / pg_stat_t decode problem?
We could work on this based on your work, if any.
On Thu, Nov 5, 2015 at 1:29 AM, Haomai Wang <hao...@xsky.com> wrote:
> On Thu, Nov 5, 2015 at 1:19 AM, piotr.da...@ts.fujitsu.com
On Thu, Nov 5, 2015 at 1:19 AM, piotr.da...@ts.fujitsu.com
wrote:
>> -----Original Message-----
>> From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
>> ow...@vger.kernel.org] On Behalf Of ???
>> Sent: Wednesday, November 04, 2015 4:34 PM
>> To: Gregory Farnum
On Tue, Oct 27, 2015 at 9:12 AM, hzwulibin wrote:
> Hi, develops
>
> I am also concerned about this problem. And my question is: how many threads
> will qemu-system-x86 have?
> When will it cut down the threads?
It's because of the network model: each connection will have two
On Tue, Oct 20, 2015 at 8:47 PM, Sage Weil wrote:
> On Tue, 20 Oct 2015, Z Zhang wrote:
>> Hi Guys,
>>
>> I am trying latest ceph-9.1.0 with rocksdb 4.1 and ceph-9.0.3 with
>> rocksdb 3.11 as OSD backend. I use rbd to test performance, and the
>> following is my cluster info.
>>
>>
Actually keyvaluestore also submits transactions with the sync flag
(relying on the keyvaluedb implementation's journal/logfile).
Yes, if we disable the sync flag, keyvaluestore's performance will
increase a lot, but we don't provide that option now.
On Tue, Oct 20, 2015 at 9:22 PM, Z Zhang
On Tue, Oct 20, 2015 at 3:49 AM, Sage Weil wrote:
> The current design is based on two simple ideas:
>
> 1) a key/value interface is a better way to manage all of our internal
> metadata (object metadata, attrs, layout, collection membership,
> write-ahead logging, overlay data,
XFS has three storage formats for xattrs: inline, btree, and extents.
We only want xattrs stored inline, so reads won't need to hit disk.
So we need to limit the number of xattrs
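The FileStore knobs that bound this look roughly like the following (option
names from memory, values purely illustrative -- check the config reference
for your release):

[osd]
# keep xattrs small enough to stay in XFS's inline format; anything
# larger or beyond the count limit spills into omap instead
filestore_max_inline_xattr_size = 254
filestore_max_inline_xattrs = 10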
On Thu, Oct 15, 2015 at 10:54 PM, Somnath Roy wrote:
> Sage,
> Why we are using XFS max
On Wed, Oct 14, 2015 at 1:03 AM, Sage Weil wrote:
> On Mon, 12 Oct 2015, Robert LeBlanc wrote:
>> -----BEGIN PGP SIGNED MESSAGE-----
>> Hash: SHA256
>>
>> After a weekend, I'm ready to hit this from a different direction.
>>
>> I replicated the issue with Firefly so it doesn't
It's a good tradeoff.
>
> Regards
> Somnath
>
>
> -----Original Message-----
> From: Haomai Wang [mailto:haomaiw...@gmail.com]
> Sent: Monday, October 12, 2015 11:35 PM
> To: Somnath Roy
> Cc: Mark Nelson; ceph-devel; ceph-us...@lists.ceph.com
> Subject: Re: [ceph
ker threads
3. Move more work out of the messenger thread.
>
> Could you please send out any documentation around the async messenger? I tried
> to google it, but not even a blueprint is popping up.
>
>
>
>
>
> Thanks & Regards
>
> Somnath
>
> From: ceph-users [mail
resend
On Tue, Oct 13, 2015 at 10:56 AM, Haomai Wang <haomaiw...@gmail.com> wrote:
> COOL
>
> Interesting that the async messenger consumes more memory than simple; in my
> mind I always thought async should use less memory. I will take a look at this
>
> On Tue, Oct 13
resend to ML
On Sat, Oct 10, 2015 at 11:20 AM, Haomai Wang <haomaiw...@gmail.com> wrote:
>
>
> On Sat, Oct 10, 2015 at 5:49 AM, Sage Weil <s...@newdream.net> wrote:
>>
>> Hey Marcus,
>>
>> On Fri, 2 Oct 2015, Marcus Watts wrote:
>> > wip-addr
resend
On Thu, Oct 1, 2015 at 7:56 PM, Haomai Wang <haomaiw...@gmail.com> wrote:
>
>
> On Thu, Oct 1, 2015 at 6:44 PM, Tom Nakamura <tnakam...@eml.cc> wrote:
>>
>> Hello ceph-devel,
>>
>> My lab is concerned with developing data mining application
It's really cool. Do you plan to push it upstream? I think it
would be more convenient if we made the fio repo a submodule.
On Sat, Sep 12, 2015 at 5:04 PM, Haomai Wang <haomaiw...@gmail.com> wrote:
> I found the cause of my segfault:
>
> because fio links librbd/librados from my
@ssi.samsung.com>
>> To: "Casey Bodley" <cbod...@redhat.com>
>> Cc: "Haomai Wang" <haomaiw...@gmail.com>, ceph-devel@vger.kernel.org
>> Sent: Friday, September 11, 2015 1:18:31 PM
>> Subject: RE: About Fio backend with ObjectStore API
>>
>
Yesterday I had a chat with wangrui, and the reason is that "infos" (the
legacy oid) is missing. I'm not sure why it's missing.
PS: resend again because of plain text
On Fri, Sep 11, 2015 at 8:56 PM, Sage Weil wrote:
> On Fri, 11 Sep 2015, ?? wrote:
>> Thank Sage Weil:
>>
>> 1. I
On Fri, Sep 11, 2015 at 10:09 PM, Sage Weil <s...@newdream.net> wrote:
> On Fri, 11 Sep 2015, Haomai Wang wrote:
>> On Fri, Sep 11, 2015 at 8:56 PM, Sage Weil <s...@newdream.net> wrote:
>> On Fri, 11 Sep 2015, ?? wrote:
>> > Thank Sage Weil:
&g
Hi Sage,
I noticed your post on the rocksdb page about making rocksdb aware of
short-lived key/value pairs.
I think it would be great if a key/value db implementation could support
different key types with different storage behaviors. But it looks
difficult for me to add this feature to an existing db.
So
On Tue, Sep 8, 2015 at 10:12 PM, Gregory Farnum <gfar...@redhat.com> wrote:
> On Tue, Sep 8, 2015 at 3:06 PM, Haomai Wang <haomaiw...@gmail.com> wrote:
>> Hit "Send" by accident for previous mail. :-(
>>
>> some points about pglog:
>> 1. short-
omap keys.
5. a simple loopback impl is efficient and simple
On Tue, Sep 8, 2015 at 9:58 PM, Haomai Wang <haomaiw...@gmail.com> wrote:
> Hi Sage,
>
> I noticed your post on the rocksdb page about making rocksdb aware of
> short-lived key/value pairs.
>
> I think it would be g
ing without any problems. I
> wonder if there's a problem with the autotools build? I've only tested it
> with cmake. When I find some time, I'll rebase it on master and do another
> round of testing.
>
> Casey
>
> ----- Original Message -----
>> From: "James (Fei
On Fri, Aug 28, 2015 at 2:35 PM, Jianhui Yuan zuiwany...@gmail.com wrote:
Hi Haomai,
when we use the async messenger, the client (e.g. ceph -s) always gets stuck
in WorkerPool::barrier for 30 seconds. It seems the wakeup doesn't work.
What are the Ceph and OS versions? It should be a bug we already
On Thu, Aug 27, 2015 at 3:47 PM, Jianhui Yuan zuiwany...@gmail.com wrote:
Hi Haomai,
In my environment, I suffer from a long timeout when connecting to a
broken-down node, so I wrote some code to support non-blocking connect in
async, and it seems to be working well. So, I want to know if non-block
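A minimal sketch of what such a non-blocking connect could look like
(illustrative, not the actual async messenger patch): start the connect with
O_NONBLOCK, then wait for writability with a bounded timeout instead of
hanging in a blocking connect() against a dead node.

#include <cerrno>
#include <fcntl.h>
#include <netinet/in.h>
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

int connect_nonblock(const sockaddr_in &addr, int timeout_ms)
{
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  if (fd < 0)
    return -1;
  fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
  int r = connect(fd, (const sockaddr *)&addr, sizeof(addr));
  if (r < 0 && errno == EINPROGRESS) {
    pollfd pfd = { fd, POLLOUT, 0 };
    r = poll(&pfd, 1, timeout_ms);          // bounded wait, not 30s+
    if (r == 1) {
      int err = 0;
      socklen_t len = sizeof(err);
      getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &len);
      r = err ? -1 : 0;                     // 0 == connected
    } else {
      r = -1;                               // timeout or poll error
    }
  }
  if (r < 0) {
    close(fd);
    return -1;
  }
  return fd;
}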
On Wed, Aug 26, 2015 at 11:16 PM, huang jun hjwsm1...@gmail.com wrote:
Hi, all,
we created a 2TB rbd image; after mapping it locally,
formatting it to xfs with 'mkfs.xfs /dev/rbd0' took 318
seconds to finish, while a local physical disk of the same size needs just
6 seconds.
I think librbd
On Tue, Aug 25, 2015 at 5:28 AM, Sage Weil sw...@redhat.com wrote:
Hi Haomai,
How did your most recent async messenger run go?
If there aren't major issues, we'd like to start mixing it in with the
regular rados suite by doing 'ms type = random'...
From the last run, we have no async-related
On Thu, Aug 20, 2015 at 2:35 PM, Dałek, Piotr
piotr.da...@ts.fujitsu.com wrote:
-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Blinick, Stephen L
Sent: Wednesday, August 19, 2015 6:58 PM
[..
Regarding the all-HDD or
Message-
From: Chaitanya Huilgol
Sent: Thursday, July 02, 2015 3:50 AM
To: James (Fei) Liu-SSI; Allen Samuels; Haomai Wang
Cc: ceph-devel
Subject: RE: Inline dedup/compression
Hi James et al.,
Here is an example for clarity,
1. Client Writes object object.abcd
2. Based on the crush rules
On Wed, Aug 19, 2015 at 1:36 PM, Somnath Roy somnath@sandisk.com wrote:
Mark,
Thanks for verifying this. Nice report !
Since there is a big difference in memory consumption with jemalloc, I would
say recovery performance data or client performance data during recovery
would be
I hope this PR can be pushed into Infernalis :-) It seems to have waited so long.
https://github.com/ceph/ceph/pull/3595
On Thu, Aug 13, 2015 at 5:20 AM, Sage Weil sw...@redhat.com wrote:
The infernalis feature freeze is coming up Real Soon Now. I've marked
some of the pull requests on github that I would
-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Haomai Wang
Sent: Tuesday, August 11, 2015 7:50 PM
To: Yehuda Sadeh-Weinraub
Cc: Samuel Just; Sage Weil; ceph-devel@vger.kernel.org
Subject: Re: Async reads, sync writes, op thread model
On Wed, Aug 12, 2015 at 6:34 AM, Yehuda Sadeh-Weinraub
ysade...@redhat.com wrote:
Already mentioned it on irc, adding to ceph-devel for the sake of
completeness. I did some infrastructure work for rgw and it seems (at
least to me) that it could at least be partially useful here.
Basically it's
On Wed, Aug 12, 2015 at 5:48 AM, Dałek, Piotr
piotr.da...@ts.fujitsu.com wrote:
-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
ow...@vger.kernel.org] On Behalf Of Sage Weil
Sent: Tuesday, August 11, 2015 10:11 PM
I went ahead and implemented both of
Could you print the backtraces of all threads via 'thread apply all bt' in gdb?
On Thu, Aug 6, 2015 at 7:52 PM, Gurjar, Unmesh unmesh.gur...@hp.com wrote:
Hi,
On a Ceph Firefly cluster (version [1]), OSDs are configured to use separate
data and journal disks (using the ceph-disk utility). It is observed,
://paste.openstack.org/show/411139/
Regards,
Unmesh G.
IRC: unmeshg
-----Original Message-----
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Thursday, August 06, 2015 5:31 PM
To: Gurjar, Unmesh
Cc: ceph-devel@vger.kernel.org
Subject: Re: OSD sometimes stuck in init phase
Could you
)!
[1] - http://paste.openstack.org/show/411161/
[2] - http://paste.openstack.org/show/411162/
[3] - http://tracker.ceph.com/issues/9768
Regards,
Unmesh G.
IRC: unmeshg
-----Original Message-----
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Thursday, August 06, 2015 6:22 PM
Agree
On Thu, Aug 6, 2015 at 5:38 AM, Somnath Roy somnath@sandisk.com wrote:
Thanks, Sage, for digging down. I was suspecting something similar. As I
mentioned in today's call, even at idle syncfs is taking ~60ms. I have
64 GB of RAM in the system.
The workaround I was talking about
It's interesting to see ondisk_finisher taking 1ms. Could you
replay this workload and check via iostat whether there is any read IO? I
guess that may help to find the cause.
On Wed, Aug 5, 2015 at 12:13 AM, Somnath Roy somnath@sandisk.com wrote:
Yes, it has to re-acquire pg_lock today..
But,
-qa-suite/pull/518) and hope
to fix this point.
On Fri, Jul 31, 2015 at 5:50 PM, Haomai Wang haomaiw...@gmail.com wrote:
Hi all,
I ran a test
suite(http://pulpito.ceph.com/haomai-2015-07-29_11:40:40-rados-master-distro-basic-multi/)
and found the failed jobs are failed by 2015-07-29 10:52
Hi all,
I ran a test
suite(http://pulpito.ceph.com/haomai-2015-07-29_11:40:40-rados-master-distro-basic-multi/)
and found the failed jobs are failed by 2015-07-29 10:52:35.313197
7f16ae655780 -1 unrecognized ms_type 'async'
Then I found the failed jobs(like
On Wed, Jul 29, 2015 at 5:08 AM, Somnath Roy somnath@sandisk.com wrote:
Hi,
Eventually, I have a working prototype and am able to gather some performance
comparison data with the changes I was talking about in the last performance
meeting. Mark's suggestion of a write-up was long pending,
get_ioengine(). All I can suggest is that you verify that your job
file is pointing to the correct fio_ceph_objectstore.so. If you've made any
other interesting changes to the job file, could you share it here?
Casey
----- Original Message -----
From: Haomai Wang haomaiw...@gmail.com
To: Casey
(), make sure you're
pointing it to an empty xfs directory with the directory= option.
Casey
On Tue, Jul 14, 2015 at 2:45 AM, Haomai Wang haomaiw...@gmail.com wrote:
Has anyone successfully run fio with this external IO engine,
ceph_objectstore?
It's strange that it always hits a segment
?
On Fri, Jul 10, 2015 at 3:51 PM, Haomai Wang haomaiw...@gmail.com wrote:
I have rebased the branch on master and pushed it to the ceph upstream
repo. https://github.com/ceph/ceph/compare/fio-objectstore?expand=1
Please let me know who is working on this. Otherwise, I would like to
improve
: Haomai Wang haomaiw...@gmail.com
To: Josh Durgin jdur...@redhat.com
Cc: ceph-devel@vger.kernel.org, Jason Dillaman dilla...@redhat.com
Sent: Thursday, July 9, 2015 11:16:14 PM
Subject: Re: About Adding eventfd support for LibRBD
I made a simple draft about adding async event notification support
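The generic pattern under discussion looks like this (a sketch of the eventfd
idea, not the librbd API; names are illustrative): the AIO completion callback
only signals an eventfd, and the caller's event loop polls that fd and reaps
completions itself.

#include <poll.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

static int evfd = -1;

int setup_notification(void)
{
    evfd = eventfd(0, EFD_NONBLOCK);   /* counter starts at 0 */
    return evfd;
}

/* hooked up as the AioCompletion-style callback: just kick the loop */
void on_aio_complete(void *arg)
{
    uint64_t one = 1;
    (void)write(evfd, &one, sizeof(one));
}

void event_loop(void)
{
    struct pollfd pfd = { evfd, POLLIN, 0 };
    while (poll(&pfd, 1, -1) > 0) {
        uint64_t n = 0;
        (void)read(evfd, &n, sizeof(n));   /* n completions are ready */
        /* ... reap/process the n completed AIOs here ... */
    }
}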
I suggest we split this PR into the plugindb implementation and the
ceph-disk/init-script pieces. I think that will make it easier to get
merge-ready.
On Thu, Jul 9, 2015 at 4:31 PM, Varada Kari varada.k...@sandisk.com wrote:
Hi Sage/Sam/Haomai,
Sent pull requests for two enhancement for key value store.
-
From: Casey Bodley [mailto:cbod...@gmail.com]
Sent: Thursday, July 09, 2015 12:32 PM
To: James (Fei) Liu-SSI
Cc: Haomai Wang; ceph-devel@vger.kernel.org
Subject: Re: About Fio backend with ObjectStore API
Hi James,
Are you looking at the code from
https://github.com/linuxbox2/linuxbox-ceph
at 11:46 AM, Haomai Wang haomaiw...@gmail.com wrote:
On Wed, Jul 8, 2015 at 11:08 AM, Josh Durgin jdur...@redhat.com wrote:
On 07/07/2015 08:18 AM, Haomai Wang wrote:
Hi All,
Currently librbd supports aio_read/write with a specified
callback (AioCompletion). It would be nice for simple caller logic
,
Thanks a lot.
Regards,
James
-----Original Message-----
From: Casey Bodley [mailto:cbod...@gmail.com]
Sent: Tuesday, June 30, 2015 2:16 PM
To: James (Fei) Liu-SSI
Cc: Haomai Wang; ceph-devel@vger.kernel.org
Subject: Re: About Fio backend with ObjectStore API
Hi,
When Danny
On Wed, Jul 8, 2015 at 11:08 AM, Josh Durgin jdur...@redhat.com wrote:
On 07/07/2015 08:18 AM, Haomai Wang wrote:
Hi All,
Currently librbd supports aio_read/write with a specified
callback (AioCompletion). It is nice for simple caller logic, but
it also has some problems:
1. Performance
Yes, some fields are only used for special ops, but a union may increase
the complexity of the struct.
And the extra memory may not be a problem, because the number of Ops in
one transaction should be within ten.
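A sketch of the tradeoff (field names invented for illustration): the union
variant saves a few bytes per Op, but with at most ~10 Ops per transaction
the plain struct's clarity arguably wins.

#include <cstdint>
#include <string>

struct OpPlain {                // current style: every field always present
  uint32_t op;
  uint64_t off, len;            // used by write/zero/truncate
  std::string name;             // used only by a few special ops
};

struct OpUnion {                // union style: smaller, harder to reason about
  uint32_t op;
  union {
    struct { uint64_t off, len; } extent;      // write/zero/truncate
    struct { uint64_t expected_size; } hint;   // alloc-hint style ops
  } u;                          // which member is valid depends on 'op'
};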
On Thu, Jul 2, 2015 at 10:05 PM, Dałek, Piotr
piotr.da...@ts.fujitsu.com wrote:
Hello,
In ObjectStore.h we
Hi all,
Long, long ago, did someone mention an fio backend for the Ceph
ObjectStore API? With it we could use the existing, mature fio facility
to benchmark the Ceph objectstore.
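A hypothetical job-file sketch for such a backend (the engine path and
directory are illustrative), using fio's external-ioengine loading:

; load the ObjectStore backend as an external ioengine and point it
; at an empty scratch directory
[global]
ioengine=external:./fio_ceph_objectstore.so
directory=/srv/osd-bench
size=1g
bs=4k
rw=randwrite

[objectstore-job]
iodepth=16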
--
Best Regards,
Wheat
about it.
Regards,
James
-----Original Message-----
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Friday, June 26, 2015 8:55 PM
To: James (Fei) Liu-SSI
Cc: ceph-devel
Subject: Re: Inline dedup/compression
On Sat, Jun 27, 2015 at 2:03 AM, James (Fei) Liu-SSI
james
comparison. But it is a good reference. Datacenters need a cost-effective
solution.
Regards,
James
-----Original Message-----
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Thursday, June 25, 2015 8:08 PM
To: James (Fei) Liu-SSI
Cc: ceph-devel
Subject: Re: Inline dedup/compression
On Fri, Jun 26, 2015 at 6:01 AM, James (Fei) Liu-SSI
james@ssi.samsung.com wrote:
Hi Cephers,
It is not easy to ask when Ceph is going to support inline
dedup/compression across OSDs in RADOS, because it is not an easy task
to answer. Ceph provides replication and EC for
Hi Patrick,
It looks confusing to use this. Do we need to upload a txt file
describing the blueprint instead of editing it directly online?
On Wed, May 27, 2015 at 5:05 AM, Patrick McGarry pmcga...@redhat.com wrote:
It's that time again, time to gird up our loins and submit blueprints
for all
On Sat, Jun 6, 2015 at 2:07 PM, Dałek, Piotr piotr.da...@ts.fujitsu.com wrote:
-Original Message-
From: ceph-devel-ow...@vger.kernel.org [mailto:ceph-devel-
I'm digging into perf and the code to see where/how I might be able to
improve performance for small I/O around 16K.
I ran fio
We could wait for the next benchmark until this
PR (https://github.com/ceph/ceph/pull/4775) is merged
On Sat, Jun 6, 2015 at 11:06 PM, Robert LeBlanc rob...@leblancnet.us wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
I found similar results in my testing as well. Ceph is certainly great
On Tue, Jun 2, 2015 at 5:28 PM, Li Wang liw...@ubuntukylin.com wrote:
I think for scrub we have a relatively easy way to solve it:
add a field to the object metadata whose value is either UNSTABLE
or STABLE. The algorithm is as below:
1. Mark the object UNSTABLE.
2. Perform the object data
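A minimal sketch of the marker protocol implied by steps 1-2 (names invented
for illustration): writes are bracketed by UNSTABLE/STABLE marks, so scrub
can tell an in-flight object apart from a genuinely inconsistent one.

enum class ObjState { STABLE, UNSTABLE };

struct ObjMeta {
  ObjState state = ObjState::STABLE;
};

template <typename WriteFn>
void guarded_write(ObjMeta &meta, WriteFn &&do_write)
{
  meta.state = ObjState::UNSTABLE;  // 1. mark (and persist) UNSTABLE
  do_write();                       // 2. perform the object data write
  meta.state = ObjState::STABLE;    // presumably re-marked once durable
}

// Scrub side: an UNSTABLE object is skipped or re-checked later rather
// than being reported as an inconsistency.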
I guess it should be something like sam designed in
CDS(https://wiki.ceph.com/Planning/Blueprints/Infernalis/osd%3A_Tiering_II_(Warm-%3ECold))
On Wed, May 27, 2015 at 4:39 PM, Marcel Lauhoff m...@irq0.org wrote:
Hi,
I wrote a prototype for an OSD-based object stub feature. An object stub
On Wed, May 27, 2015 at 4:41 PM, Li Wang liw...@ubuntukylin.com wrote:
I have just noticed the new store development, and had a
look at the idea behind it (http://www.spinics.net/lists/ceph-
devel/msg22712.html). So my understanding is that we want to avoid the
double-write penalty of
On Thu, May 28, 2015 at 1:40 AM, Robert LeBlanc rob...@leblancnet.us wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
With all the talk of tcmalloc and jemalloc, I decided to do some
testing of the different memory allocation technologies between KVM
and Ceph. These tests were done a
that running on X86 works fine; never saw a bad crc error.
2015-05-16 17:30 GMT+08:00 Haomai Wang haomaiw...@gmail.com:
Does this always happen, or only occasionally?
On Sat, May 16, 2015 at 10:10 AM, huang jun hjwsm1...@gmail.com wrote:
hi,steve
2015-05-15 16:36 GMT+08:00 Steve Capper steve.cap
On Thu, Apr 30, 2015 at 12:38 AM, Sage Weil sw...@redhat.com wrote:
On Wed, 29 Apr 2015, Chen, Xiaoxi wrote:
Hi Mark,
Really good test :) I only played a bit on SSD; the parallel WAL
threads really help, but we still have a long way to go, especially in
the all-SSD case. I tried this
Not yet; we currently only focus on bug fixes and stability. But
I think performance improvements will be picked up soon (May?); the
problem is clear, I think.
On Wed, Apr 29, 2015 at 2:10 PM, Alexandre DERUMIER aderum...@odiso.com wrote:
Thanks! So far we've gotten a report that
Thanks for your benchmark!
Yeah, the async messenger has a bottleneck under high concurrency and
IOPS, because of an annoying lock related to the CRC calculation.
Now my main job is focused on passing the QA tests for the async
messenger. If no tests fail, I will solve this problem.
On Tue, Apr
On Tue, Apr 21, 2015 at 2:43 PM, Chen, Xiaoxi xiaoxi.c...@intel.com wrote:
Hi Sage,
Well, that's
submit_transaction -- submit a transaction; whether it blocks
waiting for fdatasync depends on rocksdb-disable-sync.
submit_transaction_sync -- queue
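In interface form, the distinction reads roughly like this (a sketch against
a KeyValueDB-like API; the names are illustrative):

struct Transaction;

struct KeyValueDBLike {
  // May return before the data is durable; whether it blocks for an
  // fdatasync is governed by the db's disable-sync style option.
  virtual int submit_transaction(Transaction *t) = 0;

  // Returns only once the write is durable (blocks for the sync).
  virtual int submit_transaction_sync(Transaction *t) = 0;

  virtual ~KeyValueDBLike() = default;
};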
19526457241940 195222632 1% /root/ceph-0-db-wal
/dev/sdb1 156172796 10519532 145653264 7% /var/lib/ceph/osd/ceph-0
-----Original Message-----
From: Mark Nelson [mailto:mnel...@redhat.com]
Sent: Friday, April 17, 2015 8:11 PM
To: Sage Weil
Cc: Somnath Roy; Chen, Xiaoxi; Haomai Wang
On Fri, Apr 17, 2015 at 8:38 AM, Sage Weil s...@newdream.net wrote:
On Thu, 16 Apr 2015, Mark Nelson wrote:
On 04/16/2015 01:17 AM, Somnath Roy wrote:
Here is the data with omap separated to another SSD and after 1000GB of fio
writes (same profile)..
omap writes:
-
On Wed, Apr 15, 2015 at 2:01 PM, Somnath Roy somnath@sandisk.com wrote:
Hi Sage/Mark,
I did some WA experiment with newstore with the similar settings I mentioned
yesterday.
Test:
---
64K Random write with 64 QD and writing total of 1 TB of data.
Newstore:
Fio
On Wed, Apr 8, 2015 at 10:58 AM, Sage Weil s...@newdream.net wrote:
On Tue, 7 Apr 2015, Mark Nelson wrote:
On 04/07/2015 02:16 PM, Mark Nelson wrote:
On 04/07/2015 09:57 AM, Mark Nelson wrote:
Hi Guys,
I ran some quick tests on Sage's newstore branch. So far given that
this is a
Regards
Somnath
-----Original Message-----
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Friday, April 03, 2015 9:47 PM
To: Somnath Roy
Cc: ceph-devel@vger.kernel.org
Subject: Re: Regarding ceph rbd write path
On Sat, Apr 4, 2015 at 8:30 AM, Somnath Roy somnath@sandisk.com wrote
On Sat, Apr 4, 2015 at 8:30 AM, Somnath Roy somnath@sandisk.com wrote:
In fact, we can probably do it from the OSD side like this.
1. A thread in the sharded opWq takes the ops within a pg by acquiring a
lock in the pg_for_processing data structure.
2. Now, before taking the job, it
On Thu, Mar 19, 2015 at 5:22 PM, Xinze Chi xmdx...@gmail.com wrote:
hi, all:
Currently in keyvaluestore, the osd sends a sync
request (submit_transaction_sync) to filestore when it finishes a
transaction. But a SATA disk is not suitable for sync requests; an SSD
disk is more suitable.
I think
On Mon, Mar 16, 2015 at 10:04 PM, Xinze Chi xmdx...@gmail.com wrote:
How does the primary process the write request?
Thanks.
2015-03-16 22:01 GMT+08:00 Haomai Wang haomaiw...@gmail.com:
AFAIR, Pipe and AsyncConnection will both mark themselves faulted and shut
down the socket, and the peer will detect this reset
AFAIR, Pipe and AsyncConnection will both mark themselves faulted and shut
down the socket, and the peer will detect this reset. So each side has a
chance to rebuild the session.
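A rough sketch of that fault path (names invented for illustration):

#include <sys/socket.h>
#include <unistd.h>

struct ConnectionLike {
  int sd = -1;
  bool faulted = false;

  void fault() {
    faulted = true;
    if (sd >= 0) {
      ::shutdown(sd, SHUT_RDWR);  // peer's next read fails -> sees the reset
      ::close(sd);
      sd = -1;
    }
    // a reconnect/backoff path would then renegotiate and rebuild the
    // session state on both sides
  }
};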
On Mon, Mar 16, 2015 at 9:19 PM, Xinze Chi xmdx...@gmail.com wrote:
For example, the client sends a write request to osd.0 (primary), osd.0 sends
On Mon, Mar 9, 2015 at 1:26 PM, Nicheal zay11...@gmail.com wrote:
2015-03-07 16:43 GMT+08:00 Haomai Wang haomaiw...@gmail.com:
On Sat, Mar 7, 2015 at 12:03 AM, Sage Weil sw...@redhat.com wrote:
Hi!
[copying ceph-devel]
On Fri, 6 Mar 2015, Nicheal wrote:
Hi Sage,
Cool for issue #3878
On Sat, Mar 7, 2015 at 12:03 AM, Sage Weil sw...@redhat.com wrote:
Hi!
[copying ceph-devel]
On Fri, 6 Mar 2015, Nicheal wrote:
Hi Sage,
Cool for issue #3878, the duplicated pg_log write, which was posted
earlier in my issue #3244; and a single omap_setkeys transaction improves
the performance in
On Fri, Feb 27, 2015 at 1:19 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 27 Feb 2015, Haomai Wang wrote:
Anyway, this leads to a few questions:
- Who is interested in using Manila to attach CephFS to guest VMs?
Yeah, actually we are doing this
(https://www.openstack.org/vote-vancouver
Hmm, we have already observed this duplicate omap key set from pglog
operations. And I think we need to resolve it in the upper layer; of
course, coalescing omap operations in FileStore is also useful.
@Somnath Do you do this dedup work in KeyValueStore already?
On Thu, Feb 26, 2015 at 10:28 PM, Andreas
On Fri, Feb 27, 2015 at 8:01 AM, Sage Weil sw...@redhat.com wrote:
Hi everyone,
The online Ceph Developer Summit is next week[1] and among other things
we'll be talking about how to support CephFS in Manila. At a high level,
there are basically two paths:
1) Ganesha + the CephFS FSAL
Added an inject-error stress test for the lossless_peer_reuse policy; it
can reproduce the issue easily.
On Wed, Feb 25, 2015 at 2:27 AM, Gregory Farnum gfar...@redhat.com wrote:
On Feb 24, 2015, at 7:18 AM, Haomai Wang haomaiw...@gmail.com wrote:
On Tue, Feb 24, 2015 at 12:04 AM, Greg Farnum gfar
On Tue, Feb 24, 2015 at 12:04 AM, Greg Farnum gfar...@redhat.com wrote:
On Feb 12, 2015, at 9:17 PM, Haomai Wang haomaiw...@gmail.com wrote:
On Fri, Feb 13, 2015 at 1:26 AM, Greg Farnum gfar...@redhat.com wrote:
Sorry for the delayed response.
On Feb 11, 2015, at 3:48 AM, Haomai Wang haomaiw
I don't have detailed perf numbers for sync IO latency now.
But a few days ago I did a single-OSD, single-IO-depth benchmark. In
short: Firefly vs. Dumpling vs. Hammer per-op latency.
It's great to see Mark's benchmark result! As for PCIe SSD, I think
Ceph can't make full use of it currently for one OSD. We
!
Let me understand the K/V code base more in-depth and will come back to you
on this.
Regards
Somnath
-----Original Message-----
From: Sage Weil [mailto:s...@newdream.net]
Sent: Thursday, February 19, 2015 11:46 AM
To: Somnath Roy
Cc: Haomai Wang; sj...@redhat.com; Gregory Farnum; Ceph
Actually, I'm concerned about the correctness of benchmarking with
MemStore. AFAIR it may cause lots of memory fragmentation and hugely
degraded performance. Maybe setting filestore_blackhole=true is more
precise?
On Fri, Feb 20, 2015 at 5:49 PM, Blair Bethwaite
blair.bethwa...@gmail.com wrote:
Hi
So cool!
A few notes:
1. What about the sync thread in NewStore?
2. Could we consider skipping the WAL for large overwrites (backfill, RGW)?
3. Sorry, what does [aio_]fsync mean?
On Fri, Feb 20, 2015 at 7:50 AM, Sage Weil sw...@redhat.com wrote:
Hi everyone,
We talked a bit about the proposed KeyFile
cases, and it may trigger a lookup
operation (get_onode).
On Fri, Feb 20, 2015 at 11:00 PM, Sage Weil sw...@redhat.com wrote:
On Fri, 20 Feb 2015, Haomai Wang wrote:
So cool!
A little notes:
1. What about sync thread in NewStore?
My thought right now is that there will be a WAL thread and (maybe
It seems the wiki is better for recording blueprints, so other users (not
devs?) can have a nice look. If we put everything in a pad, it may get
messy across different blueprints.
We have some facts:
1. Blueprints need to be formatted and have a unified view for viewers.
2. Heavy changes will be applied mainly during CDS.
3.
On Fri, Feb 13, 2015 at 1:26 AM, Greg Farnum gfar...@redhat.com wrote:
Sorry for the delayed response.
On Feb 11, 2015, at 3:48 AM, Haomai Wang haomaiw...@gmail.com wrote:
Hmm, I got it.
There is another problem that I'm not sure is captured by the upper layer:
two monitor nodes (A, B
is the object name used for vs the omap key?
Regards,
Nigel Cook
Intel Fellow Cloud Chief Architect
Cloud Platforms Group
Intel Corporation
+1 720 319 7508
-----Original Message-----
From: Haomai Wang [mailto:haomaiw...@gmail.com]
Sent: Tuesday, February 10, 2015 10:17 PM
To: Cook, Nigel
I guess we can't increase connect_seq when reconnecting? We need to
let the peer side detect a remote reset via connect_seq == 0.
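In other words (a simplified sketch of the rule, not the actual Pipe code):
a fresh session starts from connect_seq 0, so an incoming 0 against
remembered state means the remote lost its side.

#include <cstdint>

// Peer side: an existing session is known, but the connector starts over
// from zero -> infer a remote reset rather than a normal reconnect.
bool peer_sees_remote_reset(uint32_t incoming_connect_seq,
                            uint32_t existing_connect_seq)
{
  return incoming_connect_seq == 0 && existing_connect_seq > 0;
}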
On Tue, Feb 10, 2015 at 12:00 AM, Gregory Farnum gfar...@redhat.com wrote:
- Original Message -
From: Haomai Wang haomaiw...@gmail.com
To: Gregory Farnum gfar