Hi,
This is Xing, a graduate student at the University of Utah. I am playing with
Ceph and have a few questions about futex calls in the OSD process. I am using
Ceph v0.79. The data node machine has 7 disks: one for the OS and the other 6
for running 6 OSDs. I set replica size = 1 for all three
. If the efficiency is about 60%, what could be causing this? Could it be
because of locks (the futex calls I mentioned in my previous email) or
something else?
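One way to get a per-syscall breakdown for a single OSD is something like the
following (just a sketch; 'pgrep -o ceph-osd' picks the oldest ceph-osd
process, so adjust it to the OSD you care about):
$ sudo strace -c -f -p "$(pgrep -o ceph-osd)"   # Ctrl-C prints the summary
The -c summary shows how much time the process spends in futex relative to the
other syscalls.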
Thanks very much for any feedback.
Thanks,
Xing
. I also measured the maximum
bandwidth we can get from each disk in version 0.79. It does not improve
significantly: we can still only get 90~100 MB/s from each disk.
Thanks,
Xing
On Apr 25, 2014, at 2:42 PM, Gregory Farnum g...@inktank.com wrote:
Bobtail is really too old to draw any
Hi Mark,
That seems pretty good. What is the block-level sequential read
bandwidth of your disks? What configuration did you use? What were the
replica size, the read_ahead setting for your RBDs, and the number of
workloads you used? I used btrfs in my experiments as well.
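(By read_ahead I mean the per-device read-ahead of the mapped RBD; a minimal
sketch for checking and raising it, assuming the image is mapped as rbd0:
$ cat /sys/block/rbd0/queue/read_ahead_kb
$ echo 4096 | sudo tee /sys/block/rbd0/queue/read_ahead_kb
The value is in KB.)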
Thanks,
Xing
On 04
concern is that I am not sure how close RADOS bench results are to
kernel RBD performance. I would appreciate it if you could do that. Thanks,
Xing
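For reference, the kind of RADOS bench run I have in mind is something like
the following (a sketch; "testpool" is only a placeholder pool name):
$ rados bench -p testpool 60 write --no-cleanup
$ rados bench -p testpool 60 seq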
On 04/25/2014 04:16 PM, Mark Nelson wrote:
I don't have any recent results published, but you can see some of the
older results from bobtail
are running ~10 concurrent
workloads. In your case, you were running 256 concurrent read streams against
6/8 disks, so I would expect the aggregate bandwidth to be lower than 100 MB/s
per disk. Any thoughts?
Thanks,
Xing
On Apr 25, 2014, at 5:47 PM, Xing Lin xing...@cs.utah.edu wrote:
Hi Mark,
Hi Sage,
Thanks for applying these two patches. I will try to accumulate more fixes and
submit pull requests via github later.
Thanks,
Xing
On Nov 3, 2013, at 12:17 AM, Sage Weil s...@inktank.com wrote:
Applied this one too!
BTW, an easier workflow than sending patches to the list
by Noah.
6efc2b54d5ce85fcb4b66237b051bcbb5072e6a3
Noah, do you have any feedback?
Thanks,
Xing
On Nov 3, 2013, at 1:22 PM, Xing xing...@cs.utah.edu wrote:
Hi all,
I was able to compile and run vstart.sh yesterday. When I merged the ceph
master branch into my local master branch, I am
Thanks, Noah!
Xing
On 11/3/2013 3:17 PM, Noah Watkins wrote:
Thanks for looking at this. Unless there is a good solution, I think
reverting it is OK, as breaking the compile on a few platforms is not
OK. I'll be looking at this tonight.
Free allocated memory before returning because of NULL input
Signed-off-by: Xing Lin xing...@cs.utah.edu
---
src/osd/ErasureCodePluginJerasure/jerasure.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/osd/ErasureCodePluginJerasure/jerasure.c
b/src/osd
When bitmatrix is NULL, this function returns NULL.
Signed-off-by: Xing Lin xing...@cs.utah.edu
---
src/osd/ErasureCodePluginJerasure/jerasure.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/osd/ErasureCodePluginJerasure/jerasure.c
b/src/osd/ErasureCodePluginJerasure
Hi Sage,
I would like to help here as well.
Thanks,
Xing
On 10/31/2013 5:30 PM, Sage Weil wrote:
Hi everyone,
When I sent this out several months ago, Danny Al-Gaaf stepped up and
submitted an amazing number of patches cleaning up the most concerning
issues that Coverity had picked up. His
Thanks very much for all your explanations. It is much clearer to me
now. Have a great day!
Xing
On 03/05/2013 01:12 PM, Greg Farnum wrote:
All the data goes to the disk in write-back mode so it isn't safe yet
until the flush is called. That's why it goes into the journal first
No, I am using XFS. The same thing happens even when I specify the
journal mode explicitly, as follows:
filestore journal parallel = false
filestore journal writeahead = true
Xing
On 03/04/2013 09:32 AM, Sage Weil wrote:
Are you using btrfs? In that case, the journaling
to use the bobtail series. However, I started to make
small changes with Argonaut (0.48) and already ported my changes once to
0.48.2 when it was released. I think I am good to continue with it for
the moment. I may consider porting my changes to the bobtail series at a
later time. Thanks,
Xing
Maybe it is easier to put it this way:
what we want is for newly written data to stay on the
journal disk for as long as possible, so that write workloads do not
compete with read workloads for disk heads. Is there any way to achieve
that in Ceph? Thanks,
Xing
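(One pair of knobs that looks related is the filestore sync intervals, which
bound how long writes may sit in the journal before being flushed to the data
disk. An untested sketch for ceph.conf, with purely illustrative values in
seconds:
[osd]
    filestore min sync interval = 10
    filestore max sync interval = 60
I have not verified how far these can be pushed.)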
On 03/04/2013 09:55 AM
, it will ask us for the ssh password seven times. So, we'd better
set up public keys first. :)
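For example, something like this, once per node (the user and host names are
placeholders):
$ ssh-copy-id ceph@node1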
Xing
On 01/22/2013 11:24 AM, Gandalf Corvotempesta wrote:
Hi all,
i'm trying my very first ceph installation following the 5-minutes quickstart:
http://ceph.com/docs/master/start/quick-start/#install-debian
, mon processes. So, currently, I am using polysh to run the
same commands on all hosts (mostly to restart the ceph service before every
measurement). Thanks.
Xing
On 01/22/2013 12:35 PM, Neil Levine wrote:
Out of interest, would people prefer that the Ceph deployment script
didn't try to handle
I did not notice that such a parameter exists. Thanks, Dan!
Xing
On 01/22/2013 02:11 PM, Dan Mick wrote:
The '-a/--allhosts' parameter spreads the command across the
cluster; that is, 'service ceph -a start' will start Ceph across the cluster.
You can change the number of replicas at runtime with the following command:
$ ceph osd pool set {poolname} size {num-replicas}
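For example, for a pool named rbd:
$ ceph osd pool set rbd size 2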
Xing
On 01/15/2013 03:00 PM, Gandalf Corvotempesta wrote:
Is it possible to change the number of replicas in real time?
It seems so: Ceph will shuffle data to rebalance in situations such
as when we change the replica number or when some nodes or disks go down.
Xing
On 01/15/2013 03:26 PM, Gandalf Corvotempesta wrote:
2013/1/15 Xing Lin xing...@cs.utah.edu:
You can change the number of replicas at runtime
So it's absolutely safe to start with just 2 servers, make all the
necessary tests and when
in the
crush_do_rule() or my simple bucket_directmap_choose() does not give me
any output. Thanks.
Xing
:/users/utos# /usr/bin/rbd -v
ceph version 0.48.2argonaut
(commit:3e02b2fad88c2a95d9c0c86878f10d1beb780bfe)
root@client:/users/utos# /usr/local/bin/rbd -v
ceph version 0.48.2argonaut.fast
(commit:0)
Xing
On 01/05/2013 08:00 PM, Mark Kirkwood wrote:
I'd
It works now. The old versions of the .so files in /usr/lib were being
linked, instead of the new versions installed in /usr/local/lib.
Thanks, Sage.
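(A quick way to double-check which libraries a binary resolves, and to force
the new ones, is something like:
$ ldd /usr/local/bin/rbd | grep -E 'librados|librbd'
$ LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/rbd -v
just a sketch.)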
Xing
On 01/05/2013 09:46 PM, Sage Weil wrote:
Hi,
The rbd binary is dynamically linking to librbd1.so and librados2.so
(usually
extern void crush_destroy_bucket(struct crush_bucket *b);
extern void crush_destroy(struct crush_map *map);
-static inline int crush_calc_tree_node(int i)
-{
-	return ((i+1) << 1)-1;
-}
-
#endif
Xing
implementation. It would be appreciated if someone could
confirm it. Thank you!
Xing
On 12/20/2012 11:54 AM, Xing Lin wrote:
Hi,
I was trying to add a simple replica placement algorithm to Ceph. This
algorithm simply returns the r_th item in a bucket for the r_th replica. I
have made that change
throughputs.
--
[global]
rw=read
bs=4m
thread=0
time_based=1
runtime=300
invalidate=1
direct=1
sync=1
ioengine=sync
[sr-vda]
filename=${DEV}
-
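To run this job against a mapped RBD device, the invocation would be something
like the following (the job-file name and device path are placeholders; fio
expands ${DEV} from the environment):
$ DEV=/dev/rbd0 fio seq-read.fio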
Does anyone have any suggestions or hints for me to try? Thank you very
much!
Xing