On 06/08/2012 02:17 PM, Hannes Reinecke wrote:
> On 06/06/2012 04:10 PM, Alex Elder wrote:
>> On 06/06/2012 03:03 AM, Yan, Zheng wrote:
>>> From: "Yan, Zheng"
>>>
>>> The bug can cause a NULL pointer dereference in write_partial_msg_pages
>>
>> Although this looks simple enough, I want to study it a
On 06/06/2012 04:10 PM, Alex Elder wrote:
> On 06/06/2012 03:03 AM, Yan, Zheng wrote:
>> From: "Yan, Zheng"
>>
>> The bug can cause a NULL pointer dereference in write_partial_msg_pages
>
> Although this looks simple enough, I want to study it a little more
> before committing it. I've been wantin
On Thursday, June 7, 2012 at 9:53 PM, Martin Wilderoth wrote:
> Hello,
>
> Now my mds are all crashing after a while, one by one.
> Is it possible to recover without removing my rbd images?
This is a pretty familiar MDS crash that we haven't tracked down yet. Sorry. :(
However, it has absolutel
Hello,
Now my mds are all crashing after a while, one by one.
Is it possible to recover without removing my rbd images?
/Best Regards Martin
logfile from start to finish
2012-06-08 06:46:10.232863 7f999039b700 0 mds.-1.0 ms_handle_connect on 10.0.6.10:6789/0
2012-06-08 06:46:10.246006 7f999
Hi Mandell,
On Thu, 7 Jun 2012, Mandell Degerness wrote:
> I am thinking about data reliability issues and I'd like to know if we
> can recover a cluster if we have most of the OSD data intact (i.e.
> there are enough copies of all of the PGs), but we have lost all of
> the monitor data.
All of t
On Fri, 8 Jun 2012, eric_yh_c...@wiwynn.com wrote:
> Dear All:
>
> In my testing environment, we deployed a ceph cluster with version 0.43 on
> kernel 3.2.0.
> (We deployed it several months ago, so the version is not the latest one.)
> There are 5 MONs and 8 OSDs in the cluster. We have 5 servers for the mo
Dear All:
In my testing environment, we deployed a ceph cluster with version 0.43 on kernel 3.2.0.
(We deployed it several months ago, so the version is not the latest one.)
There are 5 MONs and 8 OSDs in the cluster. We have 5 servers for the monitors,
and two storage servers with 4 OSDs each.
We meet a sit
I am thinking about data reliability issues and I'd like to know if we
can recover a cluster if we have most of the OSD data intact (i.e.
there are enough copies of all of the PGs), but we have lost all of
the monitor data.
On Thu, Jun 7, 2012 at 2:36 PM, Guido Winkelmann wrote:
> Again, I'll try that tomorrow. BTW, I could use some advice on how to go about
> that. Right now, I would stop one osd process (not the whole machine), reformat and
> remount its btrfs devices as XFS, delete the journal, restart the osd, wait
>
On Thursday 07 June 2012 15:53:18 Marcus Sorensen wrote:
> Maybe I did something wrong with your iotester, but I had to mkdir
> ./iotest to get it to run. I straced and found that it died on 'no
> such file'.
It's a bit quick and dirty... You are supposed to pass the directory where it
is to put
Maybe I did something wrong with your iotester, but I had to mkdir
./iotest to get it to run. I straced and found that it died on 'no
such file'.
On Thu, Jun 7, 2012 at 12:37 PM, Guido Winkelmann wrote:
> On Thursday, 7 June 2012, 20:18:52, Stefan Priebe wrote:
>> I think the test script woul
On Thursday 07 June 2012 12:48:05 Josh Durgin wrote:
> On 06/07/2012 11:04 AM, Guido Winkelmann wrote:
> > Hi,
> >
> > I'm using Ceph with RBD to provide network-transparent disk images for
> > KVM-
> > based virtual servers. The last two days, I've been hunting some weird
> > elusive bug where da
On Thursday 07 June 2012 23:54:04 Andrey Korolyov wrote:
> Hmm, can't reproduce that (phew!). Qemu-1.1-release, 0.47.2, guest/host
> mainly debian wheezy. The only main difference between my setup and
> yours is the underlying fs - I'm tired of btrfs's unpredictable load
> issues and moved back to xfs.
I
Hmm, can't reproduce that (phew!). Qemu-1.1-release, 0.47.2, guest/host
mainly debian wheezy. The only main difference between my setup and
yours is the underlying fs - I'm tired of btrfs's unpredictable load
issues and moved back to xfs.
BTW, you calculate sha1 in the test suite, not sha256 as you mentioned
On 06/07/2012 11:04 AM, Guido Winkelmann wrote:
Hi,
I'm using Ceph with RBD to provide network-transparent disk images for KVM-
based virtual servers. The last two days, I've been hunting some weird elusive
bug where data in the virtual machines would be corrupted in weird ways. It
usually manif
On Thu, Jun 7, 2012 at 11:52 AM, John Axel Eriksson wrote:
> Do you recommend btrfs or perhaps xfs for osds etc?
Right now I think the preference order is xfs, btrfs, ext4, but we keep
seeing performance & reliability issues fairly evenly these days.
Btrfs apparently still has a tendency to get slo
Thanks Tommi!
Do you recommend btrfs or perhaps xfs for osds etc?
On Thu, Jun 7, 2012 at 6:33 PM, Tommi Virtanen wrote:
> On Thu, Jun 7, 2012 at 3:16 AM, John Axel Eriksson wrote:
>> In general I really like ceph but it does seem like it has a few rough
>> edges.
>
> We're putting a significant
Hi Guido,
unfortunately this sounds very familiar to me. We have been on a long road with
similar "weird" errors.
Our setup is something like "start a couple of VMs (qemu-*), let them create
a 1G-file each and randomly seek and write 4MB blocks filled with md5sums of
the block as payload, to
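The write pattern described above (4MB blocks whose payload is built from md5sums, so corruption can later be detected by recomputation) can be sketched in Python roughly as follows; this is only an illustration, not the poster's actual tool, and the file name, block count and seeding scheme are assumptions.

# Sketch of the write phase: fill a 1 GiB test file with 4 MiB blocks whose
# payload is built from repeated md5 digests, so a later pass can recompute
# the expected payload per block and detect corruption.
import hashlib
import os
import random

FILE_SIZE = 1 << 30      # 1 GiB test file (illustrative)
BLOCK_SIZE = 4 << 20     # 4 MiB blocks, as in the description above
NUM_BLOCKS = FILE_SIZE // BLOCK_SIZE

def block_payload(index):
    # Deterministic payload: the md5 digest of the block index, repeated
    # until it fills the whole 4 MiB block.
    digest = hashlib.md5(str(index).encode()).digest()
    return digest * (BLOCK_SIZE // len(digest))

def write_random_blocks(path, count):
    # Pre-allocate the file, then overwrite randomly chosen blocks in place.
    with open(path, "wb") as f:
        f.truncate(FILE_SIZE)
    with open(path, "r+b") as f:
        for _ in range(count):
            idx = random.randrange(NUM_BLOCKS)
            f.seek(idx * BLOCK_SIZE)
            f.write(block_payload(idx))
            f.flush()
            os.fsync(f.fileno())

if __name__ == "__main__":
    write_random_blocks("testfile.bin", count=256)

A verification pass would walk the same file, recompute block_payload(idx) for each block, and flag any block that is neither untouched (all zeros) nor equal to the expected payload.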
On Thursday, 7 June 2012, 20:18:52, Stefan Priebe wrote:
> I think the test script would help a lot so others can test too.
Okay, I've attached the program. It's barely 2 KB. You need Boost 1.45+, CMake
2.6+ and Crypto++ to compile it.
Warning: This will fill up your harddisk completely, whi
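For readers without the Boost/Crypto++ toolchain, the same fill-then-verify idea can be approximated in Python; the directory name ./iotest, the 4 MiB chunk size and the use of sha256 are assumptions for this sketch, not details of the attached program.

# Fill a directory with checksummed files until the disk is full, then
# re-read everything and report files whose checksum no longer matches.
import hashlib
import os

CHUNK = 4 << 20  # 4 MiB per file (illustrative)

def fill(directory):
    # Write random chunks until the filesystem is full; return {path: sha256}.
    os.makedirs(directory, exist_ok=True)
    sums = {}
    i = 0
    try:
        while True:
            data = os.urandom(CHUNK)
            path = os.path.join(directory, "chunk-%06d" % i)
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
            sums[path] = hashlib.sha256(data).hexdigest()
            i += 1
    except OSError:
        # Typically ENOSPC once the disk is full; stop and verify what we wrote.
        pass
    return sums

def verify(sums):
    # Return the files whose contents no longer match the recorded checksum.
    bad = []
    for path, expected in sums.items():
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != expected:
                bad.append(path)
    return bad

if __name__ == "__main__":
    checksums = fill("./iotest")
    print("corrupted files:", verify(checksums))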
I think the test script would help a lot so others can test too.
On 07.06.2012 at 20:04, Guido Winkelmann wrote:
> Hi,
>
> I'm using Ceph with RBD to provide network-transparent disk images for KVM-
> based virtual servers. The last two days, I've been hunting some weird
> elusive
> bug wher
Hi,
I'm using Ceph with RBD to provide network-transparent disk images for KVM-
based virtual servers. The last two days, I've been hunting some weird elusive
bug where data in the virtual machines would be corrupted in weird ways. It
usually manifests in files having some random data - usually
On 6/7/12 6:25 AM, Alexandre DERUMIER wrote:
other tests done today: (kernel 3.4 - ubuntu precise)
3 nodes with 5 osd with btrfs, 1GB journal in tmpfs, forced to writeahead
3 nodes with 1 osd with xfs, 8GB journal in tmpfs
3 nodes with 1 osd with btrfs, 8GB journal in tmpfs, forced to writeahead
3 n
On Wed, Jun 6, 2012 at 11:56 PM, udit agarwal wrote:
> I have set up a ceph system with a client, mon and mds on one system which is
> connected to 2 osds. . The ceph setup worked fine and I did some tests on it
> and
> they too worked fine. Now, I want to set up virtual machines on my system and
>>> On 6/8/2012 at 12:41 AM, Tommi Virtanen wrote:
> On Thu, Jun 7, 2012 at 12:22 AM, Guan Jun He wrote:
>> Hi,
>>
>> Maybe there is a typo in this wiki:
>>
>> http://ceph.com/wiki/Rbd
>>
>> it's located in the last title "Snaphots", I think it should be "Snapshots".
>
> Fixed.
>
On Thu, Jun 7, 2012 at 12:22 AM, Guan Jun He wrote:
> Hi,
>
> Maybe there is a typo in this wiki:
>
> http://ceph.com/wiki/Rbd
>
> it's located in the last title "Snaphots", I think it should be "Snapshots".
Fixed.
More up to date documentation is at
http://ceph.com/docs/master/rbd/rados-rbd-cmd
On Thu, Jun 7, 2012 at 3:16 AM, John Axel Eriksson wrote:
> In general I really like ceph but it does seem like it has a few rough
> edges.
We're putting a significant amount of effort in the documentation,
installability and manageability aspects of Ceph, and hope that more
of the fruit of that
On 07.06.2012 13:33, Amon Ott wrote:
On Wednesday 06 June 2012, Stefan Priebe - Profihost AG wrote:
On 06.06.2012 12:57, Amon Ott wrote:
On Wednesday 06 June 2012, Stefan Priebe - Profihost AG wrote:
/usr/include/unistd.h:
extern int syncfs (int __fd) __THROW;
/usr/include/gnu/stubs-64.h:
#
On Wednesday 06 June 2012, Stefan Priebe - Profihost AG wrote:
> On 06.06.2012 12:57, Amon Ott wrote:
> > On Wednesday 06 June 2012, Stefan Priebe - Profihost AG wrote:
> >> Hi Amon,
> >>
> >> i've added your patch:
> >> # strings /lib/libc-2.11.3.so |grep -i syncfs
> >> syncfs
> >>
> >> But config
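The question running through this thread is whether glibc actually exports syncfs(2) and whether the build detects it. Besides running strings on libc, a quick runtime probe is possible; the following Python/ctypes sketch is only an illustration (it assumes Linux with glibc, and the mount point is an example), not something from the thread.

# Probe whether libc exports syncfs(2) and, if so, flush a given filesystem.
import ctypes
import os

def have_syncfs():
    # Symbol lookup on the CDLL raises AttributeError if syncfs is missing.
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    return hasattr(libc, "syncfs")

def syncfs(path):
    # Flush the filesystem containing `path`, like syncfs(2).
    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    fd = os.open(path, os.O_RDONLY)
    try:
        if libc.syncfs(fd) != 0:
            raise OSError(ctypes.get_errno(), "syncfs failed")
    finally:
        os.close(fd)

if __name__ == "__main__":
    print("libc exports syncfs:", have_syncfs())
    if have_syncfs():
        syncfs("/")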
other tests done today: (kernel 3.4 - ubuntu precise)
3 nodes with 5 osd with btrfs, 1GB journal in tmpfs, forced to writeahead
3 nodes with 1 osd with xfs, 8GB journal in tmpfs
3 nodes with 1 osd with btrfs, 8GB journal in tmpfs, forced to writeahead
3 nodes with 5 osd with btrfs, 20G journal on disk
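For context, a journal-in-tmpfs OSD set up for this kind of test could look roughly like the ceph.conf fragment below. The paths, sizes and option names (osd journal, osd journal size, filestore journal writeahead) are assumptions for illustration, not the poster's actual settings:

[osd]
    ; journal size in MB (illustrative)
    osd journal size = 1024
    ; force writeahead journaling even on btrfs, which otherwise runs parallel
    filestore journal writeahead = true

[osd.0]
    host = node1
    ; journal file placed on tmpfs
    osd journal = /dev/shm/osd.0.journal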
Thanks! That seems to have been the issue.
In general I really like ceph but it does seem like it has a few rough
edges. We're currently using another distributed object storage that
has been slightly unreliable and we've lost data twice now. That's why
we're looking at alternatives and ceph is at
On Wed, 6 Jun 2012, Guido Winkelmann wrote:
I'm asking because BackupPC is the only application I know of that actually
makes heavy use of hardlinks.
I used to run "rsync --link-dest" to create snapshot-like directories with
hard links but have been using real ZFS snapshots lately.
--jerker
Hi,
Maybe there is a typo in this wiki:
http://ceph.com/wiki/Rbd
it's located in the last title "Snaphots", I think it should be "Snapshots".
thanks,
Guanjun
On Thu, Jun 7, 2012 at 12:03 AM, John Axel Eriksson wrote:
> Hi I'm new to the list.
>
> We've been looking at Ceph as a possible replacement for our current
> distributed storage system. In particular
> we're interested in the object storage, so I started researching the
> radosgw. It did take me
Hi,
if you want the easy way,
I have done the integration of rbd in the proxmox 2.0 kvm distribution (currently in
the proxmox testing repos).
You only need mon and osd, as kvm only uses rbd and not the ceph filesystem.
I think you could also use libvirt, but you need to have a qemu-kvm package
with librbd su
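For the libvirt route mentioned above, an rbd-backed disk is declared as a network disk in the domain XML. A minimal sketch (pool/image name, monitor address and target device are placeholders, and cephx authentication is left out):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='rbd/vm-disk-1'>
      <host name='10.0.0.1' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>

This still requires a qemu-kvm build with librbd enabled, which is the point made above.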
Hi I'm new to the list.
We've been looking at Ceph as a possible replacement for our current
distributed storage system. In particular
we're interested in the object storage, so I started researching the
radosgw. It did take me some time to get set up
and the docs/wiki is missing lots of informatio