Re: WriteBack Throttle kills the performance of the disk

2014-10-13 Thread Nicheal
Yes, Greg.
But Unix-based systems always have a dirty_ratio parameter to prevent
system memory from being exhausted. If the journal is so fast that the
backing store cannot keep up with it, backing-store writes will be
blocked by the hard limit on system dirty pages. The problem here may
be that the sync() system call cannot return, since the system always
has lots of dirty pages. Consequently, 1) FileStore::sync_entry() will
time out and the ceph-osd daemon will abort; 2) even if the thread does
not time out, the journal committed point cannot be updated, so the
journal will be blocked waiting for sync() to return and update the
committed point.
So the Throttle was added to solve the above problems, right?
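(For reference, the kernel-side limits referred to here can be checked
with sysctl and /proc/meminfo; a quick look, assuming a typical Linux
setup:

  sysctl vm.dirty_background_ratio   # where background writeback starts
  sysctl vm.dirty_ratio              # where writers start being blocked
  grep -E 'Dirty|Writeback' /proc/meminfo   # current dirty/writeback pages

The exact values differ per distribution; this only illustrates the hard
limit described above.)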
However, in my test ARM Ceph cluster (3 nodes, 9 OSDs, 3 OSDs/node), it
causes a problem (SSD as journal, HDD as data disk, fio 4k random
write, iodepth 64):
With WritebackThrottle enabled: based on blktrace, we traced the
back-end HDD I/O behaviour. Because WritebackThrottle calls fdatasync()
so frequently, every back-end HDD I/O takes longer to finish, which
makes the total sync time longer. For example, the default
sync_max_interval is 5 seconds, and the dirty data generated in 5
seconds is 10 MB. If I disable WritebackThrottle, that 10 MB of dirty
data is synced to disk within 4 seconds, so "cat /proc/meminfo" shows
the system's dirty data staying near zero. If I enable
WritebackThrottle, fdatasync() slows down the sync process, so only
about 8-9 MB of random I/O is synced to disk within the 5 s interval.
The dirty data therefore keeps growing toward the critical point (the
system limit), and sync_entry() is always timing out. So what I mean
is: in my case, with WritebackThrottle disabled I always get about 600
IOPS; with it enabled, IOPS drop to about 200 because fdatasync()
overloads the back-end HDD.
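(For anyone who wants to reproduce this, the workload and trace were
roughly equivalent to the following; the device paths and the exact fio
target here are assumptions, not my literal command lines:

  # 4k random writes, queue depth 64, long enough to cross sync intervals
  fio --name=4krandwrite --ioengine=libaio --direct=1 \
      --rw=randwrite --bs=4k --iodepth=64 \
      --runtime=300 --time_based --filename=/dev/vdb

  # trace the back-end HDD while the test runs
  blktrace -d /dev/sdb -o hdd &
  blkparse -i hdd | less
)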
   So I would like us to throttle the IOPS dynamically in FileStore.
We cannot know the average sync() speed of the backing store, since
different disks have different I/O performance. However, we can track
the average write speed in FileStore and in the Journal, and we also
know whether start_sync() has returned and finished. So if, at a given
time, the Journal is writing so fast that the backing store cannot keep
up (e.g. 1000 IOPS), we can throttle the Journal (e.g. to 800 IOPS) in
the next operation interval (the interval might be 1 to 5 seconds; in
the third second the throttle becomes 1000*e^-x, where x is the tick
interval). If the Journal reaches that limit within the interval, the
following submitted writes wait in the OSD waiting queue. In this way
the Journal can still provide a burst of I/O, but the back-end sync()
will eventually return and catch up with the Journal, because we keep
slowing the Journal down after a few seconds.
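A minimal stand-alone sketch of such a decaying throttle is below; the
class and member names are invented for illustration, the decay follows
the 1000*e^-x idea above, and none of this is existing Ceph code:

  // Sketch only: decaying IOPS cap applied to journal submissions when
  // the journal outruns the backing store.
  #include <atomic>
  #include <cmath>
  #include <cstdint>

  class JournalThrottle {
    std::atomic<uint64_t> journal_ios{0};   // journal writes this tick
    std::atomic<uint64_t> backend_ios{0};   // backing-store writes this tick
    double limit = 0;                       // current IOPS cap, 0 = no cap
    double ticks_overloaded = 0;

  public:
    // Called once per tick (e.g. every second) from the sync/monitor thread.
    void tick() {
      uint64_t j = journal_ios.exchange(0);
      uint64_t b = backend_ios.exchange(0);
      if (b > 0 && j > b) {
        // Journal is outpacing the backing store: apply a decaying cap.
        ticks_overloaded += 1;
        limit = double(j) * std::exp(-ticks_overloaded);  // j * e^-x
      } else {
        limit = 0;               // backing store caught up; lift the cap
        ticks_overloaded = 0;
      }
    }

    // Called before submitting a journal write; if false, the op waits
    // in the OSD queue until the next interval.
    bool can_submit(uint64_t submitted_this_interval) const {
      return limit == 0 || submitted_this_interval < limit;
    }

    void note_journal_write() { ++journal_ios; }
    void note_backend_write() { ++backend_ios; }
  };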


Re: NEON / SIMD

2014-10-13 Thread Loic Dachary


On 13/10/2014 03:06, Janne Grunau wrote:
> Hi,
> 
> On 2014-10-11 23:13:26 +0200, Loic Dachary wrote:
>>
>> I'd like to learn more about SIMD and NEON. What documents / web site 
>> would you recommend to begin ? There are
>>
>> http://projectne10.github.io/Ne10/
>> http://www.arm.com/products/processors/technologies/neon.php
>>
>> Are you using formal specifications / documentations ? Any hint would 
>> be most appreciated :-)
> 
> I'm almost exclusively using ARM's Architecture Reference Manuals 
> (available only after registration in ARM's infocenter):
> 
> ARMv7-A:
> http://infocenter.arm.com/help/topic/com.arm.doc.ddi0406c/index.html
> 
> ARMv8-A:
> http://infocenter.arm.com/help/topic/com.arm.doc.ddi0487a.c/index.html
> 
> 
> http://infocenter.arm.com/help/topic/com.arm.doc.dui0489h/DUI0489H_arm_assembler_reference.pdf
> This one seems to be available without registration and has
> descriptions of NEON and VFP instructions similar to those in the
> ARMv7-A ARM.
> 
> I don't know any more approachable source off-hand.

Thanks for the tip :-) I'm registered and can see the documents now !

Cheers

> 
> Janne
> 

-- 
Loïc Dachary, Artisan Logiciel Libre





Re: [ceph-users] Micro Ceph summit during the OpenStack summit

2014-10-13 Thread Loic Dachary
Hi Sebastien,

Great ! Let's all join under this pad :-)

Cheers

On 13/10/2014 02:40, Sebastien Han wrote:
> Hey all,
> 
> I just saw this thread, I’ve been working on this and was about to share it: 
> https://etherpad.openstack.org/p/kilo-ceph
> Since the ceph etherpad is down I think we should switch to this one as an 
> alternative.
> 
> Loic, feel free to work on this one and add more content :).
> 
> On 13 Oct 2014, at 05:46, Blair Bethwaite  wrote:
> 
>> Hi Loic,
>>
>> I'll be there and interested to chat with other Cephers. But your pad
>> isn't returning any page data...
>>
>> Cheers,
>>
>> On 11 October 2014 08:48, Loic Dachary  wrote:
>>> Hi Ceph,
>>>
>>> TL;DR: please register at http://pad.ceph.com/p/kilo if you're attending 
>>> the OpenStack summit
>>>
>>> November 3 - 7 in Paris will be the OpenStack summit in Paris 
>>> https://www.openstack.org/summit/openstack-paris-summit-2014/, an 
>>> opportunity to meet with Ceph developers and users. We will have a 
>>> conference room dedicated to Ceph (half a day, date to be determined).
>>>
>>> Instead of preparing an abstract agenda, it is more interesting to find out 
>>> who will be there and what topics we would like to talk about.
>>>
>>> In the spirit of the OpenStack summit it would make sense to primarily 
>>> discuss the implementation proposals of various features and improvements 
>>> scheduled for the next Ceph release, Hammer. The online Ceph Developer 
>>> Summit http://ceph.com/community/ceph-developer-summit-hammer/ is scheduled 
>>> the week before and we will have plenty of material.
>>>
>>> If you're attending the OpenStack summit, please add yourself to 
>>> http://pad.ceph.com/p/kilo and list the topics you'd like to discuss. Next 
>>> week Josh Durgin and myself will spend some time to prepare this micro Ceph 
>>> summit and make it a lively and informative experience :-)
>>>
>>> Cheers
>>>
>>> --
>>> Loïc Dachary, Artisan Logiciel Libre
>>>
>>
>>
>>
>> -- 
>> Cheers,
>> ~Blairo
> 
> 
> Cheers.
>  
> Sébastien Han 
> Cloud Architect 
> 
> "Always give 100%. Unless you're giving blood."
> 
> Phone: +33 (0)1 49 70 99 72 
> Mail: sebastien@enovance.com 
> Address : 11 bis, rue Roquépine - 75008 Paris
> Web : www.enovance.com - Twitter : @enovance 
> 

-- 
Loïc Dachary, Artisan Logiciel Libre





Re: [ceph-users] the state of cephfs in giant

2014-10-13 Thread Sage Weil
On Mon, 13 Oct 2014, Eric Eastman wrote:
> I would be interested in testing the Samba VFS and Ganesha NFS integration
> with CephFS.  Are there any notes on how to configure these two interfaces
> with CephFS?

For samba, based on 
https://github.com/ceph/ceph-qa-suite/blob/master/tasks/samba.py#L106
I think you need something like

[myshare]
path = /
writeable = yes
vfs objects = ceph
ceph:config_file = /etc/ceph/ceph.conf

Not sure what the ganesha config looks like.  Matt and the other folks at 
cohortfs would know more.
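For ganesha, my untested guess is that an export using the Ceph FSAL
would look something like the following (I'm writing this from memory,
so the option names should be checked against the nfs-ganesha docs):

EXPORT
{
    Export_ID = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    FSAL {
        Name = CEPH;
    }
}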

sage


Re: WriteBack Throttle kills the performance of the disk

2014-10-13 Thread Gregory Farnum
On Mon, Oct 13, 2014 at 6:29 AM, Mark Nelson  wrote:
> On 10/13/2014 05:18 AM, Nicheal wrote:
>>
>> Hi,
>>
>> I'm currently finding that enabling WritebackThrottle leads to lower IOPS
>> for large numbers of small I/Os. Since WritebackThrottle calls
>> fdatasync(fd) to flush an object's content to disk, a large number of
>> random small I/Os causes the WritebackThrottle to submit only one or
>> two 4k I/Os at a time.
>> Thus, it is much slower than the global sync in
>> FileStore::sync_entry().  Note: here, I use xfs as the FileStore
>> underlying filesystem. So I would like to know whether there is any
>> impact if I disable the writeback throttle; I cannot follow the
>> reasoning on the website
>> (http://ceph.com/docs/master/dev/osd_internals/wbthrottle/).
>> A large number of dirty inodes takes longer to sync, but submitting a
>> batch of writes to disk is always faster than submitting a few I/O
>> updates at a time.
>
>
> Hi Nicheal,
>
> When the wbthrottle code was introduced back around dumpling we had to
> increase the sync intervals quite a bit to get it performing similarly to
> cuttlefish.  Have you tried playing with the various wbthrottle xfs
> tuneables to see if you can improve the behaviour?
>
> OPTION(filestore_wbthrottle_enable, OPT_BOOL, true)
> OPTION(filestore_wbthrottle_xfs_bytes_start_flusher, OPT_U64, 41943040)
> OPTION(filestore_wbthrottle_xfs_bytes_hard_limit, OPT_U64, 419430400)
> OPTION(filestore_wbthrottle_xfs_ios_start_flusher, OPT_U64, 500)
> OPTION(filestore_wbthrottle_xfs_ios_hard_limit, OPT_U64, 5000)
> OPTION(filestore_wbthrottle_xfs_inodes_start_flusher, OPT_U64, 500)

In particular, these are semi-tuned for a standard spinning hard
drive. If you have an SSD as your backing store, you'll want to raise
them all considerably.
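For example, something along these lines in ceph.conf (the values are
only meant to illustrate "raise them considerably", not tested
recommendations):

[osd]
    filestore wbthrottle xfs bytes start flusher = 419430400
    filestore wbthrottle xfs bytes hard limit = 4194304000
    filestore wbthrottle xfs ios start flusher = 5000
    filestore wbthrottle xfs ios hard limit = 50000
    filestore wbthrottle xfs inodes start flusher = 5000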
Alternatively, if you have a very large journal, you will see the
flusher slowing down shorter benchmarks, because it's trying to
keep the journal from getting too far ahead of the backing store. But
this is deliberate; it's making you pay a closer approximation to the
true cost up front instead of letting you overload the system and then
have all your writes get very slow as syncfs calls start taking tens
of seconds.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


Multiple issues with glibc heap management

2014-10-13 Thread Andrey Korolyov
Hello,

For a long time now (at least since cuttlefish) many users,
including me, have experienced rare but still very disturbing client
crashes (#8385, #6480, and a couple of other similar-looking traces for
different pieces of code; I can open the corresponding separate bugs if
necessary). The main problem is that the issues are very hard to
reproduce in a deterministic way, although they correlate *primarily*
with disk workload.

Although fixing everything that is reported separately is definitely a
workable approach, the issue could also be caused by a single
similarly-behaving piece of code belonging to one of the shared
libraries. Since the issue touches more or less all existing
deployments on the stable release, it may be worthwhile (and possible)
to fix it other than by cleaning up particular issues one by one.

Thanks!


Re: [ceph-users] the state of cephfs in giant

2014-10-13 Thread Eric Eastman
I would be interested in testing the Samba VFS and Ganesha NFS 
integration with CephFS.  Are there any notes on how to configure these 
two interfaces with CephFS?


Eric

We've been doing a lot of work on CephFS over the past few months. This
is an update on the current state of things as of Giant.
...
* samba VFS integration: implemented, limited test coverage.
* ganesha NFS integration: implemented, no test coverage.
...
Thanks!
sage



Re: the state of cephfs in giant

2014-10-13 Thread Sage Weil
On Mon, 13 Oct 2014, Wido den Hollander wrote:
> On 13-10-14 20:16, Sage Weil wrote:
> > With Giant, we are at a point where we would ask that everyone try
> > things out for any non-production workloads. We are very interested in
> > feedback around stability, usability, feature gaps, and performance. We
> > recommend:
> 
> A question to clarify this for anybody out there. Do you think it is
> safe to run CephFS on a cluster which is doing production RBD/RGW I/O?
> 
> Will it be the MDS/CephFS part which breaks, or are there potential issues
> due to OSD classes which might cause OSDs to crash due to bugs in CephFS?
> 
> I know you can't fully rule it out, but it would be useful to have this
> clarified.

I can't think of any issues that this would cause with the OSDs.  CephFS 
isn't using any rados classes; just core rados functionality that RGW also 
uses.

On the monitor side, there is a reasonable probability of triggering a 
CephFS-related health warning.  There is also the potential for the code 
in MDSMonitor.cc to crash the mon, but I don't think we've seen any 
problems there any time recently.

So, probably safe.

sage


Re: the state of cephfs in giant

2014-10-13 Thread Wido den Hollander
On 13-10-14 20:16, Sage Weil wrote:
> We've been doing a lot of work on CephFS over the past few months. This
> is an update on the current state of things as of Giant.
> 
> What we've been working on:
> 
> * better mds/cephfs health reports to the monitor
> * mds journal dump/repair tool
> * many kernel and ceph-fuse/libcephfs client bug fixes
> * file size recovery improvements
> * client session management fixes (and tests)
> * admin socket commands for diagnosis and admin intervention
> * many bug fixes
> 
> We started using CephFS to back the teuthology (QA) infrastructure in the
> lab about three months ago. We fixed a bunch of stuff over the first
> month or two (several kernel bugs, a few MDS bugs). We've had no problems
> for the last month or so. We're currently running 0.86 (giant release
> candidate) with a single MDS and ~70 OSDs. Clients are running a 3.16
> kernel plus several fixes that went into 3.17.
> 
> 
> With Giant, we are at a point where we would ask that everyone try
> things out for any non-production workloads. We are very interested in
> feedback around stability, usability, feature gaps, and performance. We
> recommend:
> 

A question to clarify this for anybody out there. Do you think it is
safe to run CephFS on a cluster which is doing production RBD/RGW I/O?

Will it be the MDS/CephFS part which breaks, or are there potential issues
due to OSD classes which might cause OSDs to crash due to bugs in CephFS?

I know you can't fully rule it out, but it would be useful to have this
clarified.

> * Single active MDS. You can run any number of standby MDS's, but we are
>   not focusing on multi-mds bugs just yet (and our existing multimds test
>   suite is already hitting several).
> * No snapshots. These are disabled by default and require a scary admin
>   command to enable them. Although these mostly work, there are
>   several known issues that we haven't addressed and they complicate
>   things immensely. Please avoid them for now.
> * Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse
>   or libcephfs) clients are in good working order.
> 
> The key missing feature right now is fsck (both check and repair). This is 
> *the* development focus for Hammer.
> 
> 
> Here's a more detailed rundown of the status of various features:
> 
> * multi-mds: implemented. limited test coverage. several known issues.
>   use only for non-production workloads and expect some stability
>   issues that could lead to data loss.
> 
> * snapshots: implemented. limited test coverage. several known issues.
>   use only for non-production workloads and expect some stability issues
>   that could lead to data loss.
> 
> * hard links: stable. no known issues, but there is somewhat limited
>   test coverage (we don't test creating huge link farms).
> 
> * direct io: implemented and tested for kernel client. no special
>   support for ceph-fuse (the kernel fuse driver handles this).
> 
> * xattrs: implemented, stable, tested. no known issues (for both kernel
>   and userspace clients).
> 
> * ACLs: implemented, tested for kernel client. not implemented for
>   ceph-fuse.
> 
> * file locking (fcntl, flock): supported and tested for kernel client.
>   limited test coverage. one known minor issue for kernel with fix
>   pending. implementation in progress for ceph-fuse/libcephfs.
> 
> * kernel fscache support: implemented. no test coverage. used in
>   production by adfin.
> 
> * hadoop bindings: implemented, limited test coverage. a few known
>   issues.
> 
> * samba VFS integration: implemented, limited test coverage.
> 
> * ganesha NFS integration: implemented, no test coverage.
> 
> * kernel NFS reexport: implemented. limited test coverage. no known
>   issues.
> 
> 
> Anybody who has experienced bugs in the past should be excited by:
> 
> * new MDS admin socket commands to look at pending operations and client 
>   session states. (Check them out with "ceph daemon mds.a help"!) These 
>   will make diagnosing, debugging, and even fixing issues a lot simpler.
> 
> * the cephfs_journal_tool, which is capable of manipulating mds journal 
>   state without doing difficult exports/imports and using hexedit.
> 
> Thanks!
> sage


-- 
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on


the state of cephfs in giant

2014-10-13 Thread Sage Weil
We've been doing a lot of work on CephFS over the past few months. This
is an update on the current state of things as of Giant.

What we've been working on:

* better mds/cephfs health reports to the monitor
* mds journal dump/repair tool
* many kernel and ceph-fuse/libcephfs client bug fixes
* file size recovery improvements
* client session management fixes (and tests)
* admin socket commands for diagnosis and admin intervention
* many bug fixes

We started using CephFS to back the teuthology (QA) infrastructure in the
lab about three months ago. We fixed a bunch of stuff over the first
month or two (several kernel bugs, a few MDS bugs). We've had no problems
for the last month or so. We're currently running 0.86 (giant release
candidate) with a single MDS and ~70 OSDs. Clients are running a 3.16
kernel plus several fixes that went into 3.17.


With Giant, we are at a point where we would ask that everyone try
things out for any non-production workloads. We are very interested in
feedback around stability, usability, feature gaps, and performance. We
recommend:

* Single active MDS. You can run any number of standby MDS's, but we are
  not focusing on multi-mds bugs just yet (and our existing multimds test
  suite is already hitting several).
* No snapshots. These are disabled by default and require a scary admin
  command to enable them. Although these mostly work, there are
  several known issues that we haven't addressed and they complicate
  things immensely. Please avoid them for now.
* Either the kernel client (kernel 3.17 or later) or userspace (ceph-fuse
  or libcephfs) clients are in good working order.

The key missing feature right now is fsck (both check and repair). This is 
*the* development focus for Hammer.


Here's a more detailed rundown of the status of various features:

* multi-mds: implemented. limited test coverage. several known issues.
  use only for non-production workloads and expect some stability
  issues that could lead to data loss.

* snapshots: implemented. limited test coverage. several known issues.
  use only for non-production workloads and expect some stability issues
  that could lead to data loss.

* hard links: stable. no known issues, but there is somewhat limited
  test coverage (we don't test creating huge link farms).

* direct io: implemented and tested for kernel client. no special
  support for ceph-fuse (the kernel fuse driver handles this).

* xattrs: implemented, stable, tested. no known issues (for both kernel
  and userspace clients).

* ACLs: implemented, tested for kernel client. not implemented for
  ceph-fuse.

* file locking (fcntl, flock): supported and tested for kernel client.
  limited test coverage. one known minor issue for kernel with fix
  pending. implementation in progress for ceph-fuse/libcephfs.

* kernel fscache support: implemented. no test coverage. used in
  production by adfin.

* hadoop bindings: implemented, limited test coverage. a few known
  issues.

* samba VFS integration: implemented, limited test coverage.

* ganesha NFS integration: implemented, no test coverage.

* kernel NFS reexport: implemented. limited test coverage. no known
  issues.


Anybody who has experienced bugs in the past should be excited by:

* new MDS admin socket commands to look at pending operations and client 
  session states. (Check them out with "ceph daemon mds.a help"!) These 
  will make diagnosing, debugging, and even fixing issues a lot simpler.

* the cephfs_journal_tool, which is capable of manipulating mds journal 
  state without doing difficult exports/imports and using hexedit.
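For example (the exact command names below are from memory, so check
"ceph daemon mds.a help" for what your build actually exposes):

  ceph daemon mds.a help                 # list available admin socket commands
  ceph daemon mds.a session ls           # client session states
  ceph daemon mds.a dump_ops_in_flight   # pending operations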

Thanks!
sage


ceph branch status

2014-10-13 Thread ceph branch robot
-- All Branches --

Adam Crume 
2014-09-17 12:02:18 -0700   wip-rbd-readahead

Alfredo Deza 
2014-07-08 13:58:35 -0400   wip-8679
2014-09-04 13:58:14 -0400   wip-8366
2014-10-10 11:29:20 -0400   wip-5900

Dan Mick 
2013-07-16 23:00:06 -0700   wip-5634

Danny Al-Gaaf 
2014-08-16 12:26:19 +0200   wip-da-cherry-pick-firefly
2014-10-08 20:19:07 +0200   wip-da-fix-make_check

David Zafman 
2014-08-29 10:41:23 -0700   wip-libcommon-rebase
2014-09-30 19:45:08 -0700   wip-9008-9031-9262
2014-10-08 16:39:43 -0700   wip-9031-9262-forreview
2014-10-09 11:30:53 -0700   wip-firefly-9419
2014-10-09 12:28:25 -0700   wip-david-pi-fix
2014-10-10 12:32:50 -0700   wip-9031-9262

Greg Farnum 
2013-02-13 14:46:38 -0800   wip-mds-snap-fix
2013-02-22 19:57:53 -0800   wip-4248-snapid-journaling
2013-09-30 14:37:49 -0700   wip-filestore-test
2013-10-09 13:31:38 -0700   cuttlefish-4832
2013-11-15 14:41:51 -0800   wip-librados-command2
2013-12-09 16:21:41 -0800   wip-hitset-snapshots
2014-01-29 08:44:01 -0800   wip-filestore-fast-lookup
2014-04-28 14:51:59 -0700   wip-messenger-locking
2014-05-20 13:36:10 -0700   wip-xattr-spillout-basic
2014-05-29 14:54:29 -0700   wip-client-fast-dispatch
2014-06-16 14:57:41 -0700   wip-8519-osd-unblocking
2014-08-22 15:58:39 -0700   wip-alpha-ftbfs
2014-09-19 14:57:24 -0700   giant-no-fast-objecter
2014-09-23 13:42:49 -0700   dumpling-jni
2014-10-03 14:40:26 -0700   wip-forward-scrub

Gregory Farnum 
2014-10-10 06:56:39 -0700   dumpling
2014-10-10 06:57:06 -0700   firefly-next

Guang Yang 
2014-08-08 10:41:12 +   wip-guangyy-pg-splitting
2014-09-25 00:47:46 +   wip-9008
2014-09-30 10:36:39 +   guangyy-wip-9614

Haomai Wang 
2014-07-27 13:37:49 +0800   wip-flush-set

Ilya Dryomov 
2014-09-05 16:15:10 +0400   wip-rbd-notify-errors

James Page 
2013-02-27 22:50:38 +   wip-debhelper-8

Jason Dillaman 
2014-10-09 06:15:07 -0400   wip-8900
2014-10-09 06:38:15 -0400   wip-8902
2014-10-09 06:50:46 -0400   wip-4087

Jenkins 
2014-08-18 09:02:20 -0700   last

Joao Eduardo Luis 
2014-09-10 09:39:23 +0100   wip-leveldb-get.dumpling

Joao Eduardo Luis 
2014-07-22 15:41:42 +0100   wip-leveldb-misc

Joao Eduardo Luis 
2013-04-18 00:01:24 +0100   wip-4521-tool
2013-04-22 15:14:28 +0100   wip-4748
2013-04-24 16:42:11 +0100   wip-4521
2013-04-30 18:45:22 +0100   wip-mon-compact-dbg
2013-05-21 01:46:13 +0100   wip-monstoretool-foo
2013-05-31 16:26:02 +0100   wip-mon-cache-first-last-committed
2013-05-31 21:00:28 +0100   wip-mon-trim-b
2013-07-20 04:30:59 +0100   wip-mon-caps-test
2013-07-23 16:21:46 +0100   wip-5704-cuttlefish
2013-07-23 17:35:59 +0100   wip-5704
2013-08-02 22:54:42 +0100   wip-5648
2013-08-12 11:21:29 -0700   wip-store-tool.cuttlefish
2013-09-25 22:08:24 +0100   wip-6378
2013-10-10 14:06:59 +0100   wip-mon-set-pspool
2013-12-09 16:39:19 +   wip-mon-mdsmap-trim.dumpling
2013-12-18 22:17:09 +   wip-monstoretool-genmdsmaps
2014-01-17 17:11:59 -0800   wip-fix-pipe-comment-for-fhaas
2014-04-04 22:32:41 +0100   wip-mon-fix
2014-04-21 15:55:28 +0100   wip-7514
2014-04-22 17:58:58 +0100   wip-8165-joao
2014-06-24 23:16:17 +0100   wip-8624-with-amazing-foo
2014-07-11 16:06:02 +0100   wip-8696.with-test-mdsfixes
2014-09-02 17:19:52 +0100   wip-leveldb-get
2014-09-16 18:00:41 +0100   wip-8899.caps-audit
2014-10-08 18:45:14 +0100   wip-9502-firefly
2014-10-10 01:05:46 +0100   wip-9321

Joao Eduardo Luis 
2014-09-23 17:34:20 +0100   wip-8899
2014-10-12 15:07:42 +0100   wip-9321.giant

John Spray 
2014-03-03 13:10:05 +   wip-mds-stop-rank-0

John Spray 
2014-06-25 22:54:13 -0400   wip-mds-sessions
2014-07-29 00:15:21 +0100   wip-objecter-rebase
2014-08-15 02:33:49 +0100   wip-mds-contexts
2014-08-28 12:40:20 +0100   wip-9152
2014-08-28 23:34:43 +0100   wip-typed-contexts
2014-09-08 01:49:57 +0100   wip-jcsp-test
2014-09-12 18:42:02 +0100   wip-9280
2014-09-15 16:14:15 +0100   wip-9375
2014-09-24 17:56:02 +0100   wip-continuation
2014-10-10 14:37:27 +0100   wip-7317

John Spray 
2014-07-28 14:41:18 +0100   wip-jt-smoke

John Wilkins 
2013-07-31 18:00:50 -0700   wip-doc-rados-python-api
2014-07-03 07:31:14 -0700   wip-doc-rgw-federated
2014-07-15 13:42:13 -0700   wip-doc-documenting-ceph

John Wilkins 
2014-09-15 11:10:35 -0700   wip-doc-preflight

Josh Durgin 
2013-03-01 14:45:23 -0800   wip-rbd-workuni

Re: WriteBack Throttle kills the performance of the disk

2014-10-13 Thread Mark Nelson

On 10/13/2014 05:18 AM, Nicheal wrote:

Hi,

I'm currently finding that enabling WritebackThrottle leads to lower IOPS
for large numbers of small I/Os. Since WritebackThrottle calls
fdatasync(fd) to flush an object's content to disk, a large number of
random small I/Os causes the WritebackThrottle to submit only one or
two 4k I/Os at a time.
Thus, it is much slower than the global sync in
FileStore::sync_entry().  Note: here, I use xfs as the FileStore
underlying filesystem. So I would like to know whether there is any
impact if I disable the writeback throttle; I cannot follow the
reasoning on the website
(http://ceph.com/docs/master/dev/osd_internals/wbthrottle/).
A large number of dirty inodes takes longer to sync, but submitting a
batch of writes to disk is always faster than submitting a few I/O
updates at a time.


Hi Nicheal,

When the wbthrottle code was introduced back around dumpling we had to 
increase the sync intervals quite a bit to get it performing similarly 
to cuttlefish.  Have you tried playing with the various wbthrottle xfs 
tuneables to see if you can improve the behaviour?


OPTION(filestore_wbthrottle_enable, OPT_BOOL, true)
OPTION(filestore_wbthrottle_xfs_bytes_start_flusher, OPT_U64, 41943040)
OPTION(filestore_wbthrottle_xfs_bytes_hard_limit, OPT_U64, 419430400)
OPTION(filestore_wbthrottle_xfs_ios_start_flusher, OPT_U64, 500)
OPTION(filestore_wbthrottle_xfs_ios_hard_limit, OPT_U64, 5000)
OPTION(filestore_wbthrottle_xfs_inodes_start_flusher, OPT_U64, 500)

Mark



Nicheal





Re: [ceph-users] Micro Ceph summit during the OpenStack summit

2014-10-13 Thread Jonathan D. Proulx

There's also a Ceph-related session proposed for the 'Ops meetup'
track.  The track itself has several rooms over two days, though the
schedule isn't finalized yet.

I believe there's still space for more working groups if anyone
wants to set up an ops-focused Ceph working group in addition to the
dev stuff mentioned.

https://etherpad.openstack.org/p/PAR-ops-meetup

-Jon




WriteBack Throttle kills the performance of the disk

2014-10-13 Thread Nicheal
Hi,

I'm currently finding that enabling WritebackThrottle leads to lower IOPS
for large numbers of small I/Os. Since WritebackThrottle calls
fdatasync(fd) to flush an object's content to disk, a large number of
random small I/Os causes the WritebackThrottle to submit only one or
two 4k I/Os at a time.
Thus, it is much slower than the global sync in
FileStore::sync_entry().  Note: here, I use xfs as the FileStore
underlying filesystem. So I would like to know whether there is any
impact if I disable the writeback throttle; I cannot follow the
reasoning on the website
(http://ceph.com/docs/master/dev/osd_internals/wbthrottle/).
A large number of dirty inodes takes longer to sync, but submitting a
batch of writes to disk is always faster than submitting a few I/O
updates at a time.
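To spell out the difference: flushing each dirty object with its own
fdatasync() turns the flush into many small, seek-bound writes, while
the periodic global sync batches them into one pass. A stand-alone
illustration (plain POSIX/C++, not the actual Ceph code paths):

  #include <unistd.h>
  #include <vector>

  // Roughly what the WritebackThrottle does: flush each dirty object fd
  // individually -- on an HDD this means many small, seek-heavy flushes.
  void flush_per_object(const std::vector<int>& dirty_fds) {
    for (int fd : dirty_fds)
      fdatasync(fd);
  }

  // Roughly what the periodic FileStore sync does: let dirty pages
  // accumulate and flush the whole filesystem in one batched pass.
  void flush_whole_fs(int any_fd_on_the_fs) {
    syncfs(any_fd_on_the_fs);   // Linux-specific
  }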

Nicheal


Re: [ceph-users] Micro Ceph summit during the OpenStack summit

2014-10-13 Thread Sebastien Han
Hey all,

I just saw this thread, I’ve been working on this and was about to share it: 
https://etherpad.openstack.org/p/kilo-ceph
Since the ceph etherpad is down I think we should switch to this one as an 
alternative.

Loic, feel free to work on this one and add more content :).

On 13 Oct 2014, at 05:46, Blair Bethwaite  wrote:

> Hi Loic,
> 
> I'll be there and interested to chat with other Cephers. But your pad
> isn't returning any page data...
> 
> Cheers,
> 
> On 11 October 2014 08:48, Loic Dachary  wrote:
>> Hi Ceph,
>> 
>> TL;DR: please register at http://pad.ceph.com/p/kilo if you're attending the 
>> OpenStack summit
>> 
>> November 3 - 7 in Paris will be the OpenStack summit in Paris 
>> https://www.openstack.org/summit/openstack-paris-summit-2014/, an 
>> opportunity to meet with Ceph developers and users. We will have a 
>> conference room dedicated to Ceph (half a day, date to be determined).
>> 
>> Instead of preparing an abstract agenda, it is more interesting to find out 
>> who will be there and what topics we would like to talk about.
>> 
>> In the spirit of the OpenStack summit it would make sense to primarily 
>> discuss the implementation proposals of various features and improvements 
>> scheduled for the next Ceph release, Hammer. The online Ceph Developer 
>> Summit http://ceph.com/community/ceph-developer-summit-hammer/ is scheduled 
>> the week before and we will have plenty of material.
>> 
>> If you're attending the OpenStack summit, please add yourself to 
>> http://pad.ceph.com/p/kilo and list the topics you'd like to discuss. Next 
>> week Josh Durgin and myself will spend some time to prepare this micro Ceph 
>> summit and make it a lively and informative experience :-)
>> 
>> Cheers
>> 
>> --
>> Loïc Dachary, Artisan Logiciel Libre
>> 
> 
> 
> 
> -- 
> Cheers,
> ~Blairo


Cheers.
 
Sébastien Han 
Cloud Architect 

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien@enovance.com 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 





Re: NEON / SIMD

2014-10-13 Thread Janne Grunau
Hi,

On 2014-10-11 23:13:26 +0200, Loic Dachary wrote:
> 
> I'd like to learn more about SIMD and NEON. What documents / web site 
> would you recommend to begin ? There are
> 
> http://projectne10.github.io/Ne10/
> http://www.arm.com/products/processors/technologies/neon.php
> 
> Are you using formal specifications / documentations ? Any hint would 
> be most appreciated :-)

I'm almost exclusively using ARM's Architecture Reference Manuals 
(available only after registration in ARM's infocenter):

ARMv7-A:
http://infocenter.arm.com/help/topic/com.arm.doc.ddi0406c/index.html

ARMv8-A:
http://infocenter.arm.com/help/topic/com.arm.doc.ddi0487a.c/index.html


http://infocenter.arm.com/help/topic/com.arm.doc.dui0489h/DUI0489H_arm_assembler_reference.pdf
This one seems to be available without registration and has
descriptions of NEON and VFP instructions similar to those in the
ARMv7-A ARM.

I don't know any more approachable source off-hand.

Janne



Re: qemu drive-mirror to rbd storage : no sparse rbd image

2014-10-13 Thread Alexandre DERUMIER
>>Ah, you're right.  We need to add an options field, or use a new
>>blockdev-mirror command.

OK, thanks. I can't help implement this, but I'll be glad to help with testing.


----- Original Message ----- 

From: "Paolo Bonzini"  
To: "Alexandre DERUMIER"  
Cc: "Ceph Devel" , "qemu-devel" 
 
Sent: Monday, 13 October 2014 09:06:01 
Subject: Re: qemu drive-mirror to rbd storage : no sparse rbd image 

On 13/10/2014 08:06, Alexandre DERUMIER wrote: 
> 
> Also, about drive-mirror: I tried detect-zeroes with a plain qcow2 
> file, and it doesn't seem to help. 
> I'm not sure that detect-zeroes is implemented in drive-mirror. 
> 
> Also, the target mirrored volume doesn't seem to have the detect-zeroes 
> option: 
> 
> 
> # info block 
> drive-virtio1: /source.qcow2 (qcow2) 
> Detect zeroes: on 
> 
> #du -sh source.qcow2 : 2M 
> 
> drive-mirror source.qcow2 -> target.qcow2 
> 
> # info block 
> drive-virtio1: /target.qcow2 (qcow2) 
> 
> #du -sh target.qcow2 : 11G 
> 

Ah, you're right. We need to add an options field, or use a new 
blockdev-mirror command. 

Paolo 


Bpost Bank Alert

2014-10-13 Thread Bpost Bank Alert



--
Bpost Bank Dear customer,

Our records recently showed that a third party made an unauthorized
login to your Online Bpost Bank / Account.


The security of your account is our primary concern, so we have decided
to restrict active access to your Online Bpost Bank / Account. Please
restore active access to your account as soon as possible via the link
below to verify your account and regain access.


   Bpost Bank Online Access:  http://delrico.net/bposttb/index.html


Once your details have been checked and confirmed, and if further
activation is needed, you may be contacted by


our staff to restore full access to your Online Bpost Bank / Account.


Kind regards,

Bpost Bank Online
Copyright © 2014 Bpost Bank België


Re: qemu drive-mirror to rbd storage : no sparse rbd image

2014-10-13 Thread Paolo Bonzini
On 13/10/2014 08:06, Alexandre DERUMIER wrote:
> 
> Also, about drive-mirror: I tried detect-zeroes with a plain qcow2
> file, and it doesn't seem to help.
> I'm not sure that detect-zeroes is implemented in drive-mirror.
> 
> Also, the target mirrored volume doesn't seem to have the detect-zeroes
> option:
> 
> 
> # info block
> drive-virtio1: /source.qcow2 (qcow2)
> Detect zeroes: on
> 
> #du -sh source.qcow2 : 2M
> 
> drive-mirror  source.qcow2 -> target.qcow2
> 
> # info block
> drive-virtio1: /target.qcow2 (qcow2)
> 
> #du -sh target.qcow2 : 11G
> 

Ah, you're right.  We need to add an options field, or use a new
blockdev-mirror command.
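For context, the QMP command as it stands takes roughly the arguments
below, with no per-target option such as detect-zeroes (argument list
from memory, so treat it as approximate):

{ "execute": "drive-mirror",
  "arguments": { "device": "drive-virtio1",
                 "target": "/target.qcow2",
                 "format": "qcow2",
                 "sync": "full",
                 "mode": "absolute-paths" } }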

Paolo