On Dec 18, 2014, at 10:54 AM, Wido den Hollander w...@42on.com wrote:
On 12/18/2014 11:13 AM, Wido den Hollander wrote:
On 12/17/2014 07:42 PM, Gregory Farnum wrote:
On Wed, Dec 17, 2014 at 8:35 AM, Wido den Hollander w...@42on.com wrote:
Hi,
Today I've been playing with CephFS and the
Hi all,
For a given file in cephfs, I would like to determine:
1) the number of PGs
2) the PG IDs
3) the offsets handled by each PG
4) the stripe unit (i.e. bytes per block of data)
preferably using a C API.
I found the getfattr command line tool which provides (1) and (4).
It appears that
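Beyond getfattr, items (2) and (3) follow from the file-to-object striping arithmetic plus the pool's object-to-PG hashing. The striping part can be sketched directly (a minimal sketch; the layout values in the example are illustrative defaults, and the real ones come from the file's layout xattr):

```python
def file_offset_to_object(offset, stripe_unit, stripe_count, object_size):
    """Map a byte offset in a file to (object number, offset inside that
    object) under RAID-0-style striping, the way Ceph stripes files
    across RADOS objects."""
    su_per_object = object_size // stripe_unit    # stripe units per object
    blockno   = offset // stripe_unit             # stripe-unit index overall
    stripeno  = blockno // stripe_count           # which stripe (row)
    stripepos = blockno % stripe_count            # which object in the set
    objectset = stripeno // su_per_object         # which set of objects
    objectno  = objectset * stripe_count + stripepos
    off_in_obj = (stripeno % su_per_object) * stripe_unit \
        + offset % stripe_unit
    return objectno, off_in_obj

MB = 1 << 20
# Default layout: 4 MiB objects, stripe_count=1 -> plain 4 MiB chunking.
print(file_offset_to_object(5 * MB, 4 * MB, 1, 4 * MB))   # -> (1, 1048576)
```

Each resulting object name (inode number in hex, a dot, then the object number) hashes to a PG within the file's data pool, which is what `ceph osd map <pool> <object>` reports from the command line.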
On Dec 15, 2014, at 2:42 PM, Gregory Farnum g...@gregs42.com wrote:
On Mon, Dec 15, 2014 at 10:54 AM, Atchley, Scott atchle...@ornl.gov wrote:
Hi all,
For a given file in cephfs, I would like to determine:
1) the number of PGs
2) the PG IDs
3) the offsets handled by each PG
4
On Dec 15, 2014, at 4:10 PM, Gregory Farnum g...@gregs42.com wrote:
On Mon, Dec 15, 2014 at 12:06 PM, Atchley, Scott atchle...@ornl.gov wrote:
On Dec 15, 2014, at 2:42 PM, Gregory Farnum g...@gregs42.com wrote:
On Mon, Dec 15, 2014 at 10:54 AM, Atchley, Scott atchle...@ornl.gov wrote:
Hi
Can a single RADOS system support more than one POSIX namespace (i.e. two or
more distinct POSIX namespaces sharing the same set of RADOS servers)?
Scott--
On Jan 7, 2014, at 2:52 PM, Yaron Haviv yar...@mellanox.com wrote:
Scott, See below
-Original Message-
From: Atchley, Scott [mailto:atchle...@ornl.gov]
Sent: Monday, January 06, 2014 5:55 PM
To: Matt W. Benjamin
Cc: Sage Weil; ceph-devel; Yaron Haviv; Eyal Salomon
Subject: Re
On Dec 11, 2013, at 8:33 PM, Matt W. Benjamin m...@cohortfs.com wrote:
Hi Sage,
inline
- Sage Weil s...@inktank.com wrote:
Hi Matt,
Thanks for posting this! Some comments and questions below.
I was originally thinking that xio was going to be more mellanox-specific, but
On Nov 9, 2013, at 4:18 AM, Sage Weil s...@inktank.com wrote:
The SimpleMessenger implementation of the Messenger interface has grown
organically over many years and is one of the cruftier bits of code in
Ceph. The idea of building a fresh implementation has come up several
times in the
On Aug 14, 2013, at 3:21 AM, Andreas Bluemle andreas.blue...@itxperts.de
wrote:
Hi,
maybe some information about the environment I am working in:
- CentOS 6.4 with custom kernel 3.8.13
- librdmacm / librspreload from git, tag 1.0.17
- application started with librspreload in LD_PRELOAD
On Aug 13, 2013, at 10:06 AM, Andreas Bluemle andreas.blue...@itxperts.de
wrote:
Hi Matthew,
I found a workaround for my (our) problem: in the librdmacm
code, rsocket.c, there is a global constant polling_time, which
is set to 10 microseconds at the moment.
I raise this to 1 - and
read case) and the repair case.
-Sam
On Tue, Jul 2, 2013 at 10:14 AM, Atchley, Scott atchle...@ornl.gov
wrote:
On Jul 2, 2013, at 10:07 AM, Atchley, Scott atchle...@ornl.gov
wrote:
On Jul 1, 2013, at 7:00 PM, Loic Dachary l...@dachary.org wrote:
Hi,
Today Sam pointed out that the API
of available chunks to costs. The costs might allow us to consider the
difference between reading local chunks vs remote chunks. This should be
sufficient to cover the read case (esp the degraded read case) and the
repair case.
-Sam
On Tue, Jul 2, 2013 at 10:14 AM, Atchley, Scott atchle
the parity chunks to write just one of them? I assume the latter but ... I'm not sure ;-)
Yes, you can recover a parity chunk just as you would a data chunk. I have not
used the jerasure library, so I do not know what it requires.
Cheers
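A toy illustration of that symmetry, using plain XOR parity rather than the jerasure codes under discussion: the parity chunk is recovered by exactly the same operation that recovers a data chunk.

```python
from functools import reduce

def xor_chunks(chunks):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

data = [b"\x01\x02\x03", b"\x04\x05\x06", b"\x07\x08\x09"]
parity = xor_chunks(data)            # write path: parity over all data

# Losing a data chunk: rebuild it from parity plus the survivors.
assert xor_chunks([parity] + data[1:]) == data[0]
# Losing the parity chunk: just recompute it the same way.
assert xor_chunks(data) == parity
```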
On 05/07/2013 17:02, Atchley, Scott wrote:
On Jul 5
the degraded read case) and the repair case.
-Sam
On Tue, Jul 2, 2013 at 10:14 AM, Atchley, Scott atchle...@ornl.gov
wrote:
On Jul 2, 2013, at 10:07 AM, Atchley, Scott atchle...@ornl.gov
wrote:
On Jul 1, 2013, at 7:00 PM, Loic Dachary l...@dachary.org wrote:
Hi,
Today Sam pointed out
On Jul 1, 2013, at 7:00 PM, Loic Dachary l...@dachary.org wrote:
Hi,
Today Sam pointed out that the API for LRC (Xorbas Hadoop Project Page,
Locally Repairable Codes (LRC) http://smahesh.com/HadoopUSC/ for instance)
would need to be different from the one initially proposed:
An
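To make the locality idea concrete, a toy sketch of an LRC-style layout (plain XOR local parities, purely illustrative, not the actual Xorbas construction): repairing one lost chunk touches only its local group rather than the whole stripe.

```python
from functools import reduce

def xor_chunks(chunks):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

# Six data chunks split into two local groups, each with its own parity.
data = [bytes([i]) * 4 for i in range(1, 7)]
groups = [data[:3], data[3:]]
local_parity = [xor_chunks(g) for g in groups]

# Repairing data[1] reads only its local group (two survivors plus the
# local parity) instead of every chunk in the stripe:
recovered = xor_chunks([local_parity[0], data[0], data[2]])
assert recovered == data[1]
```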
On Jul 2, 2013, at 10:07 AM, Atchley, Scott atchle...@ornl.gov wrote:
On Jul 1, 2013, at 7:00 PM, Loic Dachary l...@dachary.org wrote:
Hi,
Today Sam pointed out that the API for LRC (Xorbas Hadoop Project Page,
Locally Repairable Codes (LRC) http://smahesh.com/HadoopUSC/ for instance
On Jan 17, 2013, at 11:19 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
10GbE should get close to 1.2 GB/s compared to 1 GB/s for IB SDR. Latency
again depends on the Ethernet driver.
10GbE faster than IB SDR? Really
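The numbers are less surprising once line coding is factored in; a back-of-the-envelope check (both links signal at 10 Gbit/s, IB SDR with 8b/10b encoding, 10GbE with 64b/66b):

```python
GB = 1e9

# IB SDR: 10 Gbit/s signaling, 8b/10b encoding -> 20% overhead
ib_sdr = 10e9 * 8 / 10 / 8        # payload bytes/s

# 10GbE: 10 Gbit/s signaling, 64b/66b encoding -> ~3% overhead
tengbe = 10e9 * 64 / 66 / 8       # payload bytes/s

print(round(tengbe / GB, 2), round(ib_sdr / GB, 2))   # -> 1.21 1.0
```

So at the physical layer 10GbE really does carry more payload than IB SDR; framing and protocol overhead then eat into both figures in practice.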
On Jan 22, 2013, at 4:06 PM, Atchley, Scott atchle...@ornl.gov wrote:
On Jan 17, 2013, at 11:19 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
10GbE should get close to 1.2 GB/s compared to 1 GB/s for IB SDR. Latency
again
On Jan 17, 2013, at 8:37 AM, Mark Nelson mark.nel...@inktank.com wrote:
On 01/17/2013 07:32 AM, Joseph Glanville wrote:
On 17 January 2013 20:46, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/16 Mark Nelson mark.nel...@inktank.com:
I don't know if I have to use a
On Jan 17, 2013, at 10:07 AM, Andrey Korolyov and...@xdel.ru wrote:
On Thu, Jan 17, 2013 at 7:00 PM, Atchley, Scott atchle...@ornl.gov wrote:
On Jan 17, 2013, at 9:48 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
IB DDR
On Jan 17, 2013, at 10:14 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
IPoIB appears as a traditional Ethernet device to Linux and can be used as
such. Ceph has no idea that it is not Ethernet.
Ok. Now it's clear.
AFAIK
On Jan 17, 2013, at 11:01 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2013/1/17 Atchley, Scott atchle...@ornl.gov:
Yes. It should get close to 1 GB/s where 1GbE is limited to about 125 MB/s.
Lower latency? Probably since most Ethernet drivers set interrupt coalescing
On Nov 8, 2012, at 5:00 PM, Joseph Glanville joseph.glanvi...@orionvm.com.au
wrote:
On 9 November 2012 08:21, Dieter Kasper d.kas...@kabelmail.de wrote:
Joseph,
I've downloaded and read the presentation from 'Sean Hefty / Intel
Corporation'
about rsockets, which sounds very promising to
On Nov 8, 2012, at 3:22 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2012/11/8 Mark Nelson mark.nel...@inktank.com:
I haven't done much with IPoIB (just RDMA), but my understanding is that it
tends to top out at like 15Gb/s. Some others on this mailing list can
probably
On Nov 8, 2012, at 10:00 AM, Scott Atchley atchle...@ornl.gov wrote:
On Nov 8, 2012, at 9:39 AM, Mark Nelson mark.nel...@inktank.com wrote:
On 11/08/2012 07:55 AM, Atchley, Scott wrote:
On Nov 8, 2012, at 3:22 AM, Gandalf Corvotempesta
gandalf.corvotempe...@gmail.com wrote:
2012/11/8
On Nov 8, 2012, at 11:19 AM, Andrey Korolyov and...@xdel.ru wrote:
On Thu, Nov 8, 2012 at 7:02 PM, Atchley, Scott atchle...@ornl.gov wrote:
On Nov 8, 2012, at 10:00 AM, Scott Atchley atchle...@ornl.gov wrote:
On Nov 8, 2012, at 9:39 AM, Mark Nelson mark.nel...@inktank.com wrote:
On 11/08
On Nov 7, 2012, at 10:01 AM, Mark Nelson mark.nel...@inktank.com wrote:
On 11/07/2012 06:28 AM, Gandalf Corvotempesta wrote:
2012/11/7 Sage Weil s...@inktank.com:
On Wed, 7 Nov 2012, Gandalf Corvotempesta wrote:
I'm evaluating some SSD drives as journal.
Samsung 840 Pro seems to be the
On Nov 7, 2012, at 11:20 AM, Mark Nelson mark.nel...@inktank.com wrote:
Right now I'm doing 3 journals per SSD, but topping out at about
1.2-1.4GB/s from the client perspective for the node with 15+ drives and
5 SSDs. It's possible newer versions of the code and tuning may
increase that.
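A quick sanity check on those figures (node layout as stated; the per-SSD number is implied by the totals, not measured):

```python
drives, ssds = 15, 5
journals_per_ssd = drives // ssds        # 3 journals per SSD, as stated

node_throughput = 1.3e9                  # midpoint of the 1.2-1.4 GB/s range
per_ssd = node_throughput / ssds         # journal write load per SSD

print(journals_per_ssd, per_ssd / 1e6)   # -> 3 260.0 (MB/s per SSD)
```

Since every write passes through a journal, each SSD has to absorb roughly 260 MB/s of journal writes before it becomes the bottleneck.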
On Aug 31, 2012, at 11:15 AM, Tommi Virtanen wrote:
On Fri, Aug 31, 2012 at 10:37 AM, Stephen Perkins perk...@netmass.com wrote:
Would this require 2 clusters because of the need to have RADOS keep N
copies on one and 1 copy on the other?
That's doable with just multiple RADOS pools, no
On Aug 24, 2012, at 11:49 AM, Stephen Perkins wrote:
Hi all,
I'd like to get feedback from folks as to where the best place would be to
insert a shim into the RADOS object storage.
Currently, you can configure RADOS to use copy based storage to store
redundant copies of a file (I like 3