Hi Sage,
I'm using Debian, but with a custom build. Should I use the debian branch
or the stable branch to build?
Thanks,
Stefan
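P.S. For context, this is roughly how I build; a sketch, where the branch
name is exactly the part I'm unsure about:

  $ git clone https://github.com/ceph/ceph.git
  $ cd ceph
  $ git checkout debian        # or: git checkout stable
  $ dpkg-buildpackage -us -uc  # unsigned local package build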
On 12.06.2012 04:41, Sage Weil wrote:
Hi Laszlo,
Can you take a look at the last 4 commits of
https://github.com/ceph/ceph/commits/debian
and let me kn
On 06/12/2012 01:00 PM, Sage Weil wrote:
> Yep. This was just fixed yesterday, in the testing-next branch, by
> 'libceph: transition socket state prior to actual connect'.
>
> Are you still hitting the bio null deref?
>
No,
Cheers
Yan, Zheng
On Tue, 12 Jun 2012, Yan, Zheng wrote:
> On Thu, May 31, 2012 at 3:35 AM, Alex Elder wrote:
> > Start explicitly keeping track of the state of a ceph connection's
> > socket, separate from the state of the connection itself. Create
> > placeholder functions to encapsulate the state transitions.
>
On Thu, May 31, 2012 at 3:35 AM, Alex Elder wrote:
> Start explicitly keeping track of the state of a ceph connection's
> socket, separate from the state of the connection itself. Create
> placeholder functions to encapsulate the state transitions.
>
>
> | NEW* | transient initial
Hi Sage,
On Mon, 2012-06-11 at 19:41 -0700, Sage Weil wrote:
> Can you take a look at the last 4 commits of
> https://github.com/ceph/ceph/commits/debian
> and let me know if they address the issues you mentioned?
Yes, they fix the issues I mentioned. However, you could keep
ceph-kdum
Hi Laszlo,
Can you take a look at the last 4 commits of
https://github.com/ceph/ceph/commits/debian
and let me know if they address the issues you mentioned?
Thanks-
sage
Hi Laszlo,
On Mon, 11 Jun 2012, Laszlo Boszormenyi (GCS) wrote:
> Hi Loic, Sage,
>
> On Sun, 2012-06-10 at 14:21 +0200, Loic Dachary wrote:
> > On 06/09/2012 05:01 PM, Laszlo Boszormenyi (GCS) wrote:
> > > On Sat, 2012-06-09 at 15:39 +0200, Loic Dachary wrote:
> > >> Amazingly quick answer ;-) Di
On 06/11/12 23:39, Yehuda Sadeh wrote:
>> If one of the Ceph guys could provide a quick comment on this, I can
>> send a patch to the man page RST. Thanks.
>>
>
> Minimum required to create a user:
>
> radosgw-admin user create --uid= --display-name=
>
> The user id is actually a user 'account'
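So, for the man page example, something like this (hypothetical values;
only --uid and --display-name are required, per the above):

  $ radosgw-admin user create --uid=johndoe --display-name="John Doe"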
On Mon, Jun 11, 2012 at 2:35 PM, Florian Haas wrote:
> Hi,
>
> just noticed that radosgw-admin comes with a bit of confusing content in
> its man page and usage message:
>
> EXAMPLES
> Generate a new user:
>
> $ radosgw-admin user gen --display-name="johnny rotten"
> --email=joh...@rot
Hi,
just noticed that radosgw-admin comes with a bit of confusing content in
its man page and usage message:
EXAMPLES
Generate a new user:
$ radosgw-admin user gen --display-name="johnny rotten"
--email=joh...@rotten.com
As far as I remember "user gen" is gone, and it's now "user
I have two questions. My newly created cluster has XFS on all OSDs,
Ubuntu Precise, kernel 3.2.0-23-generic, Ceph 0.47.2-1precise.
pool 0 'data' rep size 3 crush_ruleset 0 object_hash rjenkins pg_num
64 pgp_num 64 last_change 1228 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 3 crush_
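For reference, those pool lines came from something like the following;
a sketch, assuming the CLI of this release:

  $ ceph osd dump | grep ^pool        # list per-pool settings
  $ ceph osd pool set data size 3     # e.g. change the replication size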
On 06/11/2012 10:07 AM, Guido Winkelmann wrote:
On Monday, June 11, 2012, 09:30:42, Sage Weil wrote:
On Mon, 11 Jun 2012, Guido Winkelmann wrote:
On Friday, June 8, 2012, 06:55:19, Sage Weil wrote:
On Fri, 8 Jun 2012, Oliver Francke wrote:
Are you guys able to reproduce the corruption wi
On Mon, 11 Jun 2012, Guido Winkelmann wrote:
> On Monday, June 11, 2012, 09:30:42, Sage Weil wrote:
> > On Mon, 11 Jun 2012, Guido Winkelmann wrote:
> > > On Friday, June 8, 2012, 06:55:19, Sage Weil wrote:
> > > > On Fri, 8 Jun 2012, Oliver Francke wrote:
>
> > > > Are you guys able to reprodu
On Sun, Jun 10, 2012 at 3:51 AM, udit agarwal wrote:
> Thanks for your reply. I just want virtualization on my Ceph system. I will
> explain the exact implementation that I want in my Ceph system. I want to
> run 10 virtual machine instances on my Ceph client whilst utilizing the
> storage
>
On Monday, June 11, 2012, 09:30:42, Sage Weil wrote:
> On Mon, 11 Jun 2012, Guido Winkelmann wrote:
> > On Friday, June 8, 2012, 06:55:19, Sage Weil wrote:
> > > On Fri, 8 Jun 2012, Oliver Francke wrote:
> > > Are you guys able to reproduce the corruption with 'debug osd = 20' and
> > >
> > >
On Sun, 10 Jun 2012, Josh Durgin wrote:
> On 06/10/2012 10:03 PM, Sage Weil wrote:
> > Hey-
> >
> > The librados api tests were calling a dummy "test_exec" method in cls_rbd
> > that apparently got removed. We probably want to replace the test with
> > *something*, though... maybe a "version" or
On Mon, 11 Jun 2012, Yan, Zheng wrote:
> From: "Yan, Zheng"
>
> PGMap->num_pg_by_state maps each PG state to the number of PGs in that
> state. PGMonitor::update_logger wrongly interprets the mapping.
Thanks, applied!
sage
>
> Signed-off-by: Yan, Zheng
> ---
> src/mon/PGMonitor.cc | 12 +
On Mon, 11 Jun 2012, Guido Winkelmann wrote:
> On Friday, June 8, 2012, 06:55:19, Sage Weil wrote:
> > On Fri, 8 Jun 2012, Oliver Francke wrote:
> > > Hi Guido,
> > >
> > > yeah, there is something weird going on. I just started to establish some
> > > test-VM's. Freshly imported from running *.
On Friday, June 8, 2012, 06:55:19, Sage Weil wrote:
> On Fri, 8 Jun 2012, Oliver Francke wrote:
> > Hi Guido,
> >
> > yeah, there is something weird going on. I just started to establish some
> > test-VM's. Freshly imported from running *.qcow2 images.
> > Kernel panic with INIT, seg-faults and
On 06/08/2012 01:17 AM, Hannes Reinecke wrote:
> On 06/06/2012 04:10 PM, Alex Elder wrote:
>> On 06/06/2012 03:03 AM, Yan, Zheng wrote:
>>> From: "Yan, Zheng"
>>>
>>> The bug can cause NULL pointer dereference in write_partial_msg_pages
>>
>> Although this looks simple enough, I want to study it a
On Mon, Jun 11, 2012 at 5:32 AM, John Axel Eriksson wrote:
>
> Also, when PUTting something through radosgw, does ceph/rgw return as
> soon as all data has been received or does it return
> when it has ensured N replicas? (I've seen quite a delay after all
> data has been sent before my PUT return
On Saturday, June 9, 2012, 20:04:20, Sage Weil wrote:
> On Fri, 8 Jun 2012, Guido Winkelmann wrote:
> > On Friday, June 8, 2012, 07:50:36, Josh Durgin wrote:
> > > On 06/08/2012 06:55 AM, Sage Weil wrote:
> > > > On Fri, 8 Jun 2012, Oliver Francke wrote:
> > > >> Hi Guido,
> > > >>
> > > >> yeah
OK, so that's the reason for the keys then? To be able to stop an OSD
from connecting to the cluster? Won't "ceph osd down" or something
like that stop it from connecting?
Also, the rados command - doesn't that use the config file to connect
to the OSDs? If I have the correct config file won't that
On 06/11/2012 02:41 PM, John Axel Eriksson wrote:
Oh sorry, I don't think I was clear on the auth question. What I meant
was whether the admin.keyring and the keys for the OSDs are really necessary
in a private Ceph cluster.
I'd say: Yes
With keys in place you can ensure that a rogue machine starts
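A sketch of the distinction (hypothetical OSD id, and assuming the auth
subcommands in this release; marking an OSD down is not the same as
revoking its key):

  $ ceph osd down 0        # mark osd.0 down; it can simply rejoin
  $ ceph auth del osd.0    # delete its key so it can no longer authenticate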
Oh sorry, I don't think I was clear on the auth question. What I meant
was whether the admin.keyring and the keys for the OSDs are really necessary
in a private Ceph cluster.
On Mon, Jun 11, 2012 at 2:40 PM, Wido den Hollander wrote:
> Hi,
>
>
> On 06/11/2012 02:32 PM, John Axel Eriksson wrote:
>>
>> Is
Hi,
On 06/11/2012 02:32 PM, John Axel Eriksson wrote:
Is there a point to having auth enabled if I run Ceph on an internal
network, only for use with radosgw (i.e. the object storage part)?
It seems to complicate the setup unnecessarily, and Ceph doesn't use
encryption anyway as far as I understan
Is there a point to having auth enabled if I run Ceph on an internal
network, only for use with radosgw (i.e. the object storage part)?
It seems to complicate the setup unnecessarily, and Ceph doesn't use
encryption anyway as far as I understand; it's only auth.
If my network is trusted and I know wh
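If it is safe to turn off, the change I have in mind is roughly this;
a sketch, assuming 'auth supported' is the right knob in this release:

  [global]
      auth supported = none    # hypothetically disables cephx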
On Sat, Jun 09, 2012 at 12:25:57AM -0700, Andrew Morton wrote:
> And... it seems that I misread what's going on. The individual
> filesystems are doing the rcu freeing of their inodes, so it is
> appropriate that they also call rcu_barrier() prior to running
> kmem_cache_free(). Which is what Ki
Hi,
On 06/11/2012 08:47 AM, eric_yh_c...@wiwynn.com wrote:
Dear all:
I would like to know whether the journal size influences disk
performance.
If each of my disks is 1 TB, how much space should I reserve
for the journal?
Your journal should be able to hold the writes for
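As a rough sketch of the usual sizing rule of thumb (hedged; check the
documentation for your release):

  [osd]
      # journal size in MB, e.g.
      # 2 * (expected throughput in MB/s * filestore max sync interval in s)
      osd journal size = 1024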