On 25.06.2012 22:23, Josef Bacik wrote:
On Mon, Jun 25, 2012 at 02:20:31PM -0600, Stefan Priebe wrote:
On 25.06.2012 22:11, Josef Bacik wrote:
On Mon, Jun 25, 2012 at 01:33:09PM -0600, Stefan Priebe wrote:
The same happens with v3.4. I can't go back any further, as this really
results in very fast corruption. Any ideas how to debug this?


What workload are you running?  I have an SSD here with discard support that I
can try to reproduce on.  Thanks,

I'm using fio with 50 jobs doing random 4k writes in ceph, but I don't
know exactly what load ceph then generates. ;-(


That's fine, I have this handy "create a local ceph cluster" script from an
earlier problem; just send me your fio job and I'll run it locally.  Thanks,
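
The script itself isn't shown in the thread; one common way to spin up a
throwaway local cluster from a Ceph source tree is the bundled vstart.sh
helper, roughly along these lines (flags are illustrative and may differ
between Ceph versions):

  cd src
  ./vstart.sh -n -d    # -n: create a fresh cluster, -d: run daemons with debug output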

OK, my fio job is running in a KVM guest using an RBD volume:

fio --filename=$DISK --direct=1 --rw=randwrite --bs=4k --size=200G --numjobs=50 --runtime=300 --group_reporting --name=file1
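
How $DISK maps to the RBD volume isn't spelled out above; with qemu's rbd
driver the guest disk is typically attached roughly like this (the pool and
image names here are made up for illustration, not taken from the report):

  qemu-system-x86_64 ... \
      -drive file=rbd:rbd/fio-test,cache=none,if=virtio

Inside the guest the volume then shows up as a virtio block device
(e.g. /dev/vdb), which is what $DISK would point at.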

Backed by 3 servers with 4 OSDs (all Intel SSDs) each, running btrfs on
top.
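
For context, a btrfs-backed OSD of that era was usually configured via
ceph.conf along these lines; the host name, paths, and the discard mount
option below are illustrative assumptions, not details from Stefan's setup:

  [osd]
      osd mkfs type = btrfs
      osd mount options btrfs = rw,noatime,discard

  [osd.0]
      host = server1
      osd data = /var/lib/ceph/osd/ceph-0

Mounting btrfs with the discard option is what makes it issue TRIM to the
SSDs as extents are freed, which is the code path under suspicion here.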

THANKS!

Greets
Stefan