On Mon, Dec 3, 2012 at 10:51 PM, Josh Durgin wrote:
> It looks like you've got a good start on integrating it into the main
> ceph infrastructure. Skimming over it, the general structure makes sense
> to me. One part that might be a bit tricky is making the new parsing
> backwards compatible with
On 12/03/2012 01:57 AM, Roald van Loon wrote:
Hi list, ceph devs,
I've been doing some toying with Ceph lately in my spare time and one
of the things I'm doing is trying to refactor some of the
configuration handling / argument parsing. I had some trouble finding
specific runtime options and configuration settings
I'm reading README.rst in the ceph-deploy sources, and I've executed the
command, but I didn't really analyze what it did when I ran it; I just
assumed from that doc that it would be necessary. It's possible your
instructions obviate the need for that step, but I'd have to look at the
instructions
Hi Dan,
In the version of the Ceph installation instructions I was given, I don't have
a "ceph-deploy gatherkeys" step. Is there a newer version, or can you briefly
describe the use of this command?
Thanks,
Todd
Todd Blackwell
HARRIS Corporation
321-984-6911
954-817-3662
eblac...@h
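For context, the "gatherkeys" step pulls the admin and bootstrap keyrings from an already-created monitor so that later ceph-deploy steps can authenticate. A minimal sketch of the usual invocation (assuming a monitor host named mon1 has already been set up; the host name is a placeholder):
# ceph-deploy gatherkeys mon1
After that, ceph.client.admin.keyring and the bootstrap keyrings should sit in the working directory; if your instructions never run it, later osd/mds steps may fail with authentication errors.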
Reviewed-by: Josh Durgin
On 11/30/2012 08:05 AM, Alex Elder wrote:
When rbd_req_sync_notify_ack() calls rbd_do_request() it supplies
rbd_simple_req_cb() as its callback function. Because the callback
is supplied, an rbd_req structure gets allocated and populated so it
can be used by the callba
Reviewed-by: Josh Durgin
On 11/30/2012 08:04 AM, Alex Elder wrote:
When rbd_do_request() is called it allocates and populates an
rbd_req structure to hold information about the osd request to be
sent. This is done for the benefit of the callback function (in
particular, rbd_req_cb()), which us
On 04/12/12 09:50, Smart Weblications GmbH - Florian Wiessner wrote:
> On 03.12.2012 21:39, David Clarke wrote:
>> On 04/12/12 08:49, Christopher Kunz wrote:
>>> On 03.12.12 20:14, Josh Durgin wrote:
On 12/03/2012 11:05 AM, Oliver Francke wrote:
On 11/30/2012 06:00 PM, Simon Frerichs | Fremaks GmbH wrote:
Hi,
we are starting to see this error on some images:
-> rbd info kvm1207
error opening image kvm1207: (2) No such file or directory
2012-12-01 02:58:27.556677 7ffd50c60760 -1 librbd: error finding header:
(2) No such file or directory
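One hedged way to narrow this down (assuming the image lives in the default 'rbd' pool and is a format 1 image, whose header is stored in an object named <image>.rbd) is to check whether the header object still exists:
# rados -p rbd ls | grep kvm1207
# rados -p rbd stat kvm1207.rbd
If kvm1207.rbd is missing, that would match the "error finding header" message above.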
On 03.12.2012 21:39, David Clarke wrote:
> On 04/12/12 08:49, Christopher Kunz wrote:
>> On 03.12.12 20:14, Josh Durgin wrote:
>>> On 12/03/2012 11:05 AM, Oliver Francke wrote:
> Also, if you're concerned about the time it takes to reboot a machine
On 04/12/12 08:49, Christopher Kunz wrote:
> On 03.12.12 20:14, Josh Durgin wrote:
>> On 12/03/2012 11:05 AM, Oliver Francke wrote:
>>> Hi *,
>>>
>>> well, even if 0.48.2 is really stable and reliable, that is not always the
>>> case with the Linux kernel
>>
On 12/03/2012 10:53 AM, Blackwell, Edward wrote:
Hi Dan,
Thanks for the welcome and the advice. There indeed was a problem with the host name and
capitalization as you described, but once I corrected that, a new issue began to occur
when I ran the "ceph-deploy mon" command. The command app
On Mon, Dec 3, 2012 at 12:13 PM, Oliver Francke wrote:
> if you go through all the BIOS, POST and RAID-controller checks, Linux boot,
> openvswitch STP setup and so on, one can imagine that a reboot takes a
> couple of minutes; normally with our setup, after 30 seconds the cluster
> shall detect so
Hi Florian,
On 03.12.2012 at 20:45, Smart Weblications GmbH - Florian Wiessner wrote:
> On 03.12.2012 20:21, Oliver Francke wrote:
>> Hi Josh,
>>
>> On 03.12.2012 at 20:14, Josh Durgin wrote:
>>
>>> On 12/03/2012 11:05 AM, Oliver Francke wrote:
Hi *,
well, even if 0.48.2
On 03.12.12 20:14, Josh Durgin wrote:
> On 12/03/2012 11:05 AM, Oliver Francke wrote:
>> Hi *,
>>
>> well, even if 0.48.2 is really stable and reliable, that is not
>> always the case with the Linux kernel. We have a couple of nodes
>> where an update would make life better.
>> So, as our OSD-nodes
On 03.12.2012 20:21, Oliver Francke wrote:
> Hi Josh,
>
> On 03.12.2012 at 20:14, Josh Durgin wrote:
>
>> On 12/03/2012 11:05 AM, Oliver Francke wrote:
>>> Hi *,
>>>
>>> well, even if 0.48.2 is really stable and reliable, that is not always the
>>> case with the Linux kernel. We have a couple of nodes
Are we thinking about including this in 0.56?
Mark
On 12/03/2012 01:31 PM, Samuel Just wrote:
I rejected it only because I didn't manage to measure a performance
increase. The wip-filestore patches look fine though.
-Sam
On Mon, Dec 3, 2012 at 3:38 AM, Sage Weil wrote:
While playing with re
I rejected it only because I didn't manage to measure a performance
increase. The wip-filestore patches look fine though.
-Sam
On Mon, Dec 3, 2012 at 3:38 AM, Sage Weil wrote:
> While playing with recovery on the plane I saw big latency spikes caused
> by draining the filestore op queue. This p
Hi Josh,
On 03.12.2012 at 20:14, Josh Durgin wrote:
> On 12/03/2012 11:05 AM, Oliver Francke wrote:
>> Hi *,
>>
>> well, even if 0.48.2 is really stable and reliable, that is not always the
>> case with the Linux kernel. We have a couple of nodes where an update would
>> make life better.
>> S
On 12/03/2012 11:05 AM, Oliver Francke wrote:
Hi *,
well, even if 0.48.2 is really stable and reliable, that is not always the
case with the Linux kernel. We have a couple of nodes where an update would make
life better.
So, as our OSD-nodes have to care for VMs too, it's not a problem to let
Hi *,
well, even if 0.48.2 is really stable and reliable, that is not always the
case with the Linux kernel. We have a couple of nodes where an update would make
life better.
So, as our OSD-nodes have to care for VMs too, it's not a problem to let
them drain and migrate all of them to other nodes
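For the reboot itself, the common approach (a sketch of the usual commands, assuming the goal is just to avoid re-replication while a node is briefly down) is to tell the monitors not to mark OSDs out, reboot, and clear the flag afterwards:
# ceph osd set noout
... reboot / kernel update ...
# ceph osd unset noout
The OSDs are still reported down during the reboot, but no data movement should start.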
Hi Dan,
Thanks for the welcome and the advice. There indeed was a problem with the
host name and capitalization as you described, but once I corrected that, a new
issue began to occur when I ran the "ceph-deploy mon" command. The command
appears to run successfully (no output from the command
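If ceph-deploy itself prints nothing, one way to see whether the monitor actually started (a sketch, assuming the default admin socket path and that the mon id equals the short hostname) is to query it on the monitor node:
# ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname).asok mon_status
That should report the monitor's rank and quorum state, or fail if the daemon never came up.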
Hi,
> Can you attach/post the whole log somewhere? I'm curious what is leading
> up to it not having secret_id=0. Ideally with 'debug auth = 20' and
> 'debug osd = 20' and 'debug ms = 1'.
I reproduced the problem with debug auth = 10 and debug ms = 1 (no
debug osd ... that's just too verbose,
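For reference, a sketch of how those levels can be set in ceph.conf on the node in question (restart the daemon afterwards, or inject the values at runtime):
[osd]
        debug auth = 20
        debug osd = 20
        debug ms = 1
Putting them under [global] instead also works if the monitors should log at the same levels.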
Hi,
I've pushed a "working" (read: compiling) branch to github with the
basics of the new configuration handling I mentioned earlier;
http://github.com/roaldvanloon/ceph.git
branch = wip-config
As an example, I've copied the ceph_osd to ceph_osd2 and altered the
initialization in main() such th
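For anyone wanting to look at it, fetching the branch is the usual git dance (using the URL and branch name above):
$ git clone http://github.com/roaldvanloon/ceph.git
$ cd ceph
$ git checkout wip-config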
While playing with recovery on the plane I saw big latency spikes caused
by draining the filestore op queue. This patch makes it only wait for
dequeued ops (currently handed to threads) and not anything else. Passed
the rados suite without problems, and some basic testing on my laptop with
lots
Yeah. Here I also used "cephfs foomount set_layout --pool 3 -c 1 -u 4194304 -s
4194304 " instead of "cephfs foomount set_layout --pool 3 "
On Mon, Dec 3, 2012 at 4:30 PM, hemant surale wrote:
> Yeah. Here I also used "cephfs foomount --pool 3 -c 1 -u 4194304 -s
> 4194304 " instead of "cephfs foom
Yeah. Here I also used "cephfs foomount --pool 3 -c 1 -u 4194304 -s
4194304 " instead of "cephfs foomount --pool 3 "
On Mon, Dec 3, 2012 at 2:50 PM, Sylvain Munaut
wrote:
>> # cephfs foo set_layout --pool 3
>
> From early tests I seem to remember that just setting the pool using
> set_layout wa
On 2012-12-02 18:18, Matthew Anderson wrote:
Hi All,
I've run into a corruption bug when the RBD client cache is set to
false under QEMU-KVM. With the cache on everything is fine but write
speeds drop considerably, 4KB sequential goes from 5.1MB/s to 1.8MB/s
no matter what size the cache is or i
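While debugging, it may help to pin the cache behaviour explicitly rather than relying on defaults; a hedged sketch of the two QEMU drive configurations being compared (pool/image is a placeholder, and behaviour differs between QEMU versions, so treat this as illustration only):
-drive file=rbd:pool/image:rbd_cache=true,format=raw,cache=writeback
-drive file=rbd:pool/image:rbd_cache=false,format=raw,cache=none
Newer QEMU (1.2 and later, if I recall correctly) derives rbd_cache from the cache= mode, while older versions need it spelled out in the file string.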
Hi list, ceph devs,
I've been doing some toying with Ceph lately in my spare time and one
of the things I'm doing is trying to refactor some of the
configuration handling / argument parsing. I had some trouble finding
specific runtime options and configuration settings, so I thought this
was a goo
> # cephfs foo set_layout --pool 3
From early tests I seem to remember that just setting the pool using
set_layout wasn't accepted by the tool; I had to reset all the layout
parameters in the command.
Cheers,
Sylvain
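Concretely, the full form that was accepted looked like the one hemant quoted above (a sketch; -c is the stripe count and -u/-s are the stripe unit and object size in bytes, here just the 4 MB defaults):
# cephfs foomount set_layout --pool 3 -c 1 -u 4194304 -s 4194304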
Hi,
I am working on a sample app to test object storage and connectivity with
the Ceph cluster (basically an app that talks directly to the Ceph object store).
Initially the aim is to keep one Ceph cluster on one VM and try the object
operations from a second VM. We won't be using RADOS Gateway, CEPH
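As a starting point, here is a sketch of such a client against the librados C API (the pool name "data" and the object name are placeholders; most error handling is trimmed to bare return codes):

#include <rados/librados.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        rados_t cluster;
        rados_ioctx_t io;
        char buf[64] = {0};
        const char *payload = "hello from librados";

        /* read /etc/ceph/ceph.conf and connect as client.admin */
        if (rados_create(&cluster, NULL) < 0)
                return 1;
        rados_conf_read_file(cluster, "/etc/ceph/ceph.conf");
        if (rados_connect(cluster) < 0)
                return 1;

        /* open an I/O context on an existing pool, write and read back one object */
        if (rados_ioctx_create(cluster, "data", &io) < 0)
                return 1;
        rados_write_full(io, "test-object", payload, strlen(payload));
        rados_read(io, "test-object", buf, sizeof(buf) - 1, 0);
        printf("read back: %s\n", buf);

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
}

Build with something like: gcc sample.c -o sample -lrados (the librados development package needs to be installed on the client VM).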