THANKS a lot. This fixes it. I've merged your branch into next and I
wasn't able to trigger the osd crash again. So please include this in 0.48.
Greets
Stefan
On 26.06.2012 20:01, Sam Just wrote:
Stefan,
Sorry for the delay, I think I've found the problem. Could you give
Dear list,
I'm currently writing a Boost.Asio-like interface to librados.hpp. Most
things are working as expected, but there are some things I couldn't figure
out from the available docs and code. I have itemized a couple of questions
below.
* to avoid allocations done by rados, I'm currently
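As a rough illustration of the kind of bridge being described, here is a
minimal sketch of handing a librados AioCompletion off to a
boost::asio::io_service so user handlers run only on Asio threads. The
names async_read, ReadCtx, and on_complete are illustrative, not part of
librados, and error handling is elided:

#include <boost/asio.hpp>
#include <rados/librados.hpp>
#include <functional>
#include <string>

// Per-operation state; owning the bufferlist here means rados fills a
// caller-provided buffer rather than allocating one per read.
struct ReadCtx {
  boost::asio::io_service *ios;
  librados::AioCompletion *comp;
  librados::bufferlist bl;
  std::function<void(int, librados::bufferlist&)> handler;
};

// Runs on a librados callback thread; reposts the result onto the
// io_service so the user's handler never runs on rados threads.
static void on_complete(librados::completion_t, void *arg) {
  ReadCtx *ctx = static_cast<ReadCtx*>(arg);
  int r = ctx->comp->get_return_value();
  ctx->ios->post([ctx, r]() {
    ctx->handler(r, ctx->bl);
    ctx->comp->release();
    delete ctx;
  });
}

// Issue an async read whose handler is invoked on the io_service.
static int async_read(boost::asio::io_service &ios, librados::IoCtx &io,
                      const std::string &oid, size_t len, uint64_t off,
                      std::function<void(int, librados::bufferlist&)> h) {
  ReadCtx *ctx = new ReadCtx;
  ctx->ios = &ios;
  ctx->handler = std::move(h);
  ctx->comp = librados::Rados::aio_create_completion(ctx, on_complete, nullptr);
  return io.aio_read(oid, ctx->comp, &ctx->bl, len, off);
}

A boost::asio::io_service::work object would keep io_service::run() from
returning while reads are still in flight.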
Hello list,
I'm still thinking about optimal OSD hardware, and while reading through
the mailing list and wiki I came up with some questions.
I want to use SSDs, so my idea was to use a fast single-socket CPU with
8-10 SSD disks per OSD.
I got the following recommendations from the mailing list:
Dual
On 6/27/12 8:04 AM, Stefan Priebe - Profihost AG wrote:
Hello list,
I'm still thinking about optimal OSD hardware, and while reading through
the mailing list and wiki I came up with some questions.
I want to use SSDs, so my idea was to use a fast single-socket CPU with
8-10 SSD disks per OSD.
I got the
Hi Mark,
On 06/27/2012 07:55 AM, Mark Nelson wrote:
For what it's worth, I've got a pair of Dell R515s set up with a single 2.8GHz
6-core Opteron 4184, 16GB of RAM, and 10 SSDs that are capable of about 200MB/s
each. Currently I'm topping out at about 600MB/s with rados bench using half
of
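(For reference, rados bench is typically invoked as something like
rados -p <pool> bench 60 write -t 16; the pool name, duration, and
concurrency shown here are only illustrative, not Mark's actual test.)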
Hi Mark,
On 27.06.2012 15:55, Mark Nelson wrote:
On 6/27/12 8:04 AM, Stefan Priebe - Profihost AG wrote:
Hi Stefan,
I'm not entirely clear on how you are coming to the conclusion regarding
the CPU requirements. If we go by the 1GHz per OSD suggestion, does
that mean you plan to have
On Wed, 27 Jun 2012, Stefan Priebe - Profihost AG wrote:
THANKS a lot. This fixes it. I've merged your branch into next and I wasn't
able to trigger the osd crash again. So please include this in 0.48.
Excellent. Thanks for testing! This is now in next.
sage
Greets
Stefan
Am
On 27.06.2012 16:55, Jim Schutt wrote:
This is my current best tuning for my hardware, which uses
24 SAS drives/server, and 1 OSD/drive with a journal partition
on the outer tracks and btrfs for the data store.
Which RAID level do you use?
I'd be very curious to hear how these work for
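For context, a one-OSD-per-drive layout with a separate journal partition
like Jim describes might look roughly like this in ceph.conf (the host name,
path, and device name below are illustrative, not Jim's actual config):
[osd.0]
    host = node1
    ; btrfs data store on the drive's main partition
    osd data = /data/osd.0
    ; small journal partition placed on the outer tracks
    osd journal = /dev/sda1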
On Wed, 27 Jun 2012, Rutger ter Borg wrote:
Dear list,
I'm currently writing a Boost.Asio-like interface to librados.hpp. Most things
are working as expected, but there are some things I couldn't figure out from
the available docs and code. I have itemized a couple of questions below.
* to
On 27.06.2012 17:21, Gregory Farnum wrote:
Well, as we said, 1GHz/OSD was a WAG (wild-ass guess), but 3.6GHz+/OSD
is farther outside of that range than I would have expected. It might
just be a consequence of using SSDs, since they can sustain so much more
throughput.
Sure it was just so
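(To put rough numbers on it: at the 1GHz/OSD guess, a box with 8-10 OSDs
would need about 8-10GHz of aggregate CPU, while at the observed 3.6GHz+/OSD
it would need closer to 29-36GHz, which is hard to reach on a single socket.)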
On 06/27/2012 09:55 AM, Jim Schutt wrote:
Hi Mark,
On 06/27/2012 07:55 AM, Mark Nelson wrote:
For what it's worth, I've got a pair of Dell R515s set up with a single
2.8GHz 6-core Opteron 4184, 16GB of RAM, and 10 SSDs that are capable
of about 200MB/s each. Currently I'm topping out at about
On 06/27/2012 10:28 AM, Stefan Priebe wrote:
On 27.06.2012 17:21, Gregory Farnum wrote:
Well, as we said, 1GHz/OSD was a WAG (wild-ass guess), but 3.6GHz+/OSD
is farther outside of that range than I would have expected. It might
just be a consequence of using SSDs, since they can sustain
On 06/27/2012 09:19 AM, Stefan Priebe wrote:
On 27.06.2012 16:55, Jim Schutt wrote:
This is my current best tuning for my hardware, which uses
24 SAS drives/server, and 1 OSD/drive with a journal partition
on the outer tracks and btrfs for the data store.
Which RAID level do you use?
No
On 27.06.2012 at 19:23, Jim Schutt jasc...@sandia.gov wrote:
On 06/27/2012 09:19 AM, Stefan Priebe wrote:
On 27.06.2012 16:55, Jim Schutt wrote:
This is my current best tuning for my hardware, which uses
24 SAS drives/server, and 1 OSD/drive with a journal partition
on the outer tracks
On 06/27/2012 11:54 AM, Stefan Priebe wrote:
On 27.06.2012 at 19:23, Jim Schutt jasc...@sandia.gov wrote:
On 06/27/2012 09:19 AM, Stefan Priebe wrote:
On 27.06.2012 16:55, Jim Schutt wrote:
This is my current best tuning for my hardware, which uses
24 SAS drives/server, and 1 OSD/drive
On 27.06.2012 20:38, Jim Schutt wrote:
Actually, when my 166-client test is running,
ps -o pid,nlwp,args -C ceph-osd
tells me that I typically have ~1200 threads/OSD.
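(In the ps output, nlwp is the number of threads in each process.)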
Huh, I see only 124 threads per OSD even with your settings.
Hmmm. The only other obvious difference, based on
what I
On 06/27/2012 12:48 PM, Stefan Priebe wrote:
On 27.06.2012 20:38, Jim Schutt wrote:
Actually, when my 166-client test is running,
ps -o pid,nlwp,args -C ceph-osd
tells me that I typically have ~1200 threads/OSD.
Huh, I see only 124 threads per OSD even with your settings.
FWIW:
2
Hi,
I'm running into trouble with systems going unresponsive,
and perf suggests it's excessive CPU usage by isolate_freepages().
I'm currently testing 3.5-rc4, but I think this problem may have
first shown up in 3.4. I'm only just learning how to use perf,
so I only currently have results to
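(If it helps anyone reproduce this: one typical way to capture such a profile
is perf record -a -g -- sleep 30 followed by perf report; the duration and
options here are only illustrative, not necessarily what Jim used.)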
On 06/27/2012 04:59 PM, Jim Schutt wrote:
Hi,
I'm running into trouble with systems going unresponsive,
and perf suggests it's excessive CPU usage by isolate_freepages().
I'm currently testing 3.5-rc4, but I think this problem may have
first shown up in 3.4. I'm only just learning how to use
On 06/28/2012 06:59 AM, Jim Schutt wrote:
Hi,
I'm running into trouble with systems going unresponsive,
and perf suggests it's excessive CPU usage by isolate_freepages().
I'm currently testing 3.5-rc4, but I think this problem may have
first shown up in 3.4. I'm only just learning how to
On 06/26/2012 09:19 PM, ramu wrote:
Hi all,
I started ceph with /etc/init.d/ceph -a start, but it will not stop. I
tried /etc/init.d/ceph -a stop, /etc/init.d/ceph -v -a stop, and
service ceph -a stop as well; there are no error logs showing. I tried
/etc/init.d/ceph -a killall
On 06/28/2012 09:28 AM, Rik van Riel wrote:
On 06/27/2012 07:59 PM, Minchan Kim wrote:
I suspect compaction tries to migrate continuously even though we have no
free memory.
Could you apply this patch and retest?
https://lkml.org/lkml/2012/6/21/30
Another possibility is that compaction is
On 06/28/2012 09:52 AM, David Rientjes wrote:
On Wed, 27 Jun 2012, Rik van Riel wrote:
I suspect compaction tries to migrate continuously even though we have no free
memory.
Could you apply this patch and retest?
https://lkml.org/lkml/2012/6/21/30
Not sure if Jim is using memcg;
On Thu, 28 Jun 2012, Minchan Kim wrote:
https://lkml.org/lkml/2012/6/21/30
Not sure if Jim is using memcg; if not, then this won't be helpful.
It isn't related to memcg.
If compaction_alloc can't find a suitable migration target, it returns NULL.
Then, migrate_pages should be
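(For context: as far as I can tell, compaction_alloc() is the get_new_page
callback that migrate_pages() invokes to obtain a target page for each page
being migrated, so a NULL return abandons that migration attempt.)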
On 06/27/2012 08:52 PM, David Rientjes wrote:
On Wed, 27 Jun 2012, Rik van Riel wrote:
Another possibility is that compaction is succeeding every time,
but since we always start scanning all the way at the beginning
and end of each zone, we waste a lot of CPU time rescanning the
same pages
On 06/28/2012 10:06 AM, David Rientjes wrote:
On Thu, 28 Jun 2012, Minchan Kim wrote:
https://lkml.org/lkml/2012/6/21/30
Not sure if Jim is using memcg; if not, then this won't be helpful.
It isn't related to memcg.
If compaction_alloc can't find a suitable migration target, it returns
Hi all,
On Mon, 2012-06-18 at 09:44 -0700, Sage Weil wrote:
On Mon, 18 Jun 2012, James Page wrote:
Laszlo - thanks for sending the original email - I'd like to get
everything as closely in-sync as possible between the three packaging
sources as well.
It seems I could sync with Ubuntu a