On Sat, Jan 17, 2015 at 11:47 AM, Lindsay Mathieson
wrote:
> On Fri, 16 Jan 2015 08:48:38 AM Wido den Hollander wrote:
>> In Ceph world 0.72.2 is ancient and pretty old. If you want to play with
>> CephFS I recommend you upgrade to 0.90 and also use at least kernel 3.18
>
> Does the kernel version
On Fri, 16 Jan 2015 08:48:38 AM Wido den Hollander wrote:
> In Ceph world 0.72.2 is ancient and pretty old. If you want to play with
> CephFS I recommend you upgrade to 0.90 and also use at least kernel 3.18
Does the kernel version matter if you are using ceph-fuse?
This is a long-awaited bugfix release for firefly. It has several
important (but relatively rare) OSD peering fixes, fixes for performance issues
when snapshots are trimmed, several RGW fixes, a paxos corner case fix, and
some packaging updates.
We recommend that all v0.80.x firefly users upgrade whe
On Wed, Jan 14, 2015 at 3:36 PM, Межов Игорь Александрович
wrote:
> What is the more right way to do it:
>
> - replace 12x1tb drives with 12x2tb drives, so we will have 2 nodes full of
> 2tb drives and
>
> other nodes remain in a 12x1tb config
>
> - or replace 1tb with 2tb drives in a more uniform way,
You’re using a file system on 2 hosts that is not cluster aware. Metadata
written on hosta is not sent to hostb in this case. You may be interested in
looking at cephfs for this use case.
Michael Kuriger
mk7...@yp.com
818-649-7235
MikeKuriger (IM)
From: Rafał Michalak mailto:rafa...@gmail.co
It has just been pointed out to me that you can also workaround this
issue on your existing system by increasing the osd_max_write_size
setting on your OSDs (default 90MB) to something higher, but still
smaller than your osd journal size. That might get you on a path to
having an accessible filesystem
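For reference, a minimal sketch of how that setting could be raised, assuming the default ceph.conf layout; the value 200 (MB) is only an illustration and must stay below the journal size:
# in ceph.conf on the OSD hosts, [osd] section, followed by an OSD restart
osd max write size = 200
# or injected at runtime on all OSDs
ceph tell osd.* injectargs '--osd_max_write_size 200'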
On 14 January 2015 at 12:08, JM wrote:
> Hi Roland,
>
> You should tune your Ceph Crushmap with a custom rule in order to do that
> (write first on s3 and then to others). This custom rule will be applied
> then to your proxmox pool.
> (what you want to do is only interesting if you run VM from h
Thanks! I'll give this a shot.
On Thu, Jan 8, 2015 at 8:51 AM, Travis Rhoden wrote:
> Hi Noah,
>
> The root cause has been found. Please see
> http://tracker.ceph.com/issues/10476 for details.
>
> In short, it's an issue between RPM obsoletes and yum priorities
> plugin. Final solution is pendi
Hello,
In the placement groups documentation
(http://ceph.com/docs/giant/rados/operations/placement-groups/) we have the
message below:
“When using multiple data pools for storing objects, you need to ensure that
you balance the number of placement groups per pool with the number of
placement
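As a rough illustration of that balance (all numbers here are made up): with 12 OSDs, replica size 3 and 4 pools of similar expected size, the common rule of thumb of (OSDs x 100) / replicas gives (12 x 100) / 3 = 400 PGs for the whole cluster, i.e. roughly 100 per pool, usually rounded up to a power of two:
ceph osd pool create pool1 128 128
ceph osd pool create pool2 128 128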
On 16 January 2015 at 17:15, Gregory Farnum wrote:
> > I have set up 4 machines in a cluster. When I created the Windows 2008
> > server VM on S1 (I corrected my first email: I have three Sunfire X
> series
> > servers, S1, S2, S3) since S1 has 36GB of RAM and 8 x 300GB SAS drives, it
> > was run
Thanks for the hints.
My original configuration has "rgw print continue = false", but it does not
work.
Just now I tested changing it to "true" and restarting the radosgw and apache2
services, and strangely everything works now.
Best,
Xuezhao
>
> This sounds like you're having trouble wi
Hi,
I have a problem removing one file in CephFS. With the ls command, all
the attributes show as ???.
ls: cannot access refseq/source_step2: No such file or directory
total 0
drwxrwxr-x 1 dtohara BioInfoHSL Users    0 Jan 15 15:01 .
drwxrwxr-x 1 dtohara BioInfoHSL Users 3.8G Jan 15
On Fri, Jan 16, 2015 at 2:52 AM, Roland Giesler wrote:
> On 14 January 2015 at 21:46, Gregory Farnum wrote:
>>
>> On Tue, Jan 13, 2015 at 1:03 PM, Roland Giesler
>> wrote:
>> > I have a 4 node ceph cluster, but the disks are not equally distributed
>> > across all machines (they are substantiall
On Wed, 14 Jan 2015 02:20:21 PM Rafał Michalak wrote:
> Why is the data not replicating when mounting the fs?
> I tried with the ext4 and xfs filesystems.
> The data is visible only when unmounted and mounted again.
Because you are not using a cluster aware filesystem - the respective mounts
don't know when changes
Thanks!
Of course, I know about OSD weights and the ability to adjust them to make the
distribution more or less uniform. And we use ceph-deploy to bring up OSDs and
have already noticed that the weights of different-sized OSDs are chosen in
proportion to their sizes.
But the question is slightly about diffe
Hi Megov,
you should weight the OSDs so the weight represents the size (like a weight of
3.68 for a 4TB HDD).
ceph-deploy does this automatically.
Nevertheless, even with the correct weight the disks are not filled in an
equal distribution. For that purpose you can use reweight for single
OSDs, or automatically wi
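A minimal sketch of both variants, with osd.5 as a made-up example id and 3.68 as the weight of a 4TB disk:
ceph osd crush reweight osd.5 3.68      # permanent CRUSH weight matching the disk size
ceph osd reweight osd.5 0.9             # temporary override between 0 and 1
ceph osd reweight-by-utilization        # automatic reweighting of over-full OSDs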
We are quickly approaching the Hammer feature freeze but have a few more
dev releases to go before we get there. The headline items are
subtree-based quota support in CephFS (ceph-fuse/libcephfs client support
only for now), a rewrite of the watch/notify librados API used by RBD and
RGW, OSDMa
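For the quota item, a hedged sketch of how the subtree quotas are set through extended attributes, assuming a ceph-fuse mount at /mnt/cephfs and a made-up directory name:
setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/projects   # 100 GB cap
setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/projects
getfattr -n ceph.quota.max_bytes /mnt/cephfs/projects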
Hi all,
Context: Ubuntu 14.04 LTS, firefly 0.80.7
I recently encountered the same issue as described below.
Maybe I missed something between July and January…
I found that the http request wasn't correctly built by
/usr/lib/python2.7/dist-packages/radosgw_agent/client.py
I did the changes b
Agree. I was about to upgrade to 0.90, but have postponed it due to this error.
Any chance for me to recover it first before upgrading it?
Thanks Wido.
Regards,
Bazli
-Original Message-
From: ceph-devel-ow...@vger.kernel.org
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Wido den
Hmm, upgrading should help here, as the problematic data structure
(anchortable) no longer exists in the latest version. I haven't
checked, but hopefully we don't try to write it during upgrades.
The bug you're hitting is more or less the same as a similar one we
have with the sessiontable in the
On 14 January 2015 at 21:46, Gregory Farnum wrote:
> On Tue, Jan 13, 2015 at 1:03 PM, Roland Giesler
> wrote:
> > I have a 4 node ceph cluster, but the disks are not equally distributed
> > across all machines (they are substantially different from each other)
> >
> > One machine has 12 x 1TB SA
On 14.01.2015 14:20, Rafał Michalak wrote:
>
> #node1
> mount /dev/rbd/rbd/test /mnt
>
> #node2
> mount /dev/rbd/rbd/test /mnt
If you want to mount a filesystem on one block device onto multiple
clients, the filesystem has to be clustered, e.g. OCFS2.
A "normal" local filesystem like ext4 or XF
Dear Ceph-Users, Ceph-Devel,
Apologies if you get a double post of this email.
I am running a ceph cluster version 0.72.2 and one MDS (in fact there are 3:
2 down and only 1 up) at the moment.
Plus I have one CephFS client mounted to it.
Now, the MDS always gets aborted after recovery and active fo
On Wed, Jan 14, 2015 at 7:27 PM, Liu, Xuezhao wrote:
> Thanks for the reply.
>
> After disabling the default site (a2dissite 000-default), I can use libs3's
> command-line tool s3 to create/list buckets; getting an object also works.
>
> But putting an object failed:
>
> root@xuezhaoUbuntu74:~# s3 -u put bucket11/seqd
Hi all!
I would like to expand our Ceph cluster and add a second OSD node.
In this node I will have ten 4TB disks dedicated to Ceph.
What is the proper way of adding them to the already available Ceph
cluster?
I guess that the first thing to do is to prepare them with ceph-deploy
and mark the
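A hedged sketch of those ceph-deploy steps, assuming the new node is called node2 and its first data disk is sdb (repeat for the other nine disks; CRUSH will rebalance as they come up):
ceph-deploy disk list node2
ceph-deploy disk zap node2:sdb
ceph-deploy osd create node2:sdb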
Thanks for the reply.
After disabling the default site (a2dissite 000-default), I can use libs3's
command-line tool s3 to create/list buckets; getting an object also works.
But putting an object failed:
root@xuezhaoUbuntu74:~# s3 -u put bucket11/seqdata filename=seqdata
it hangs forever, and on the object gateway
Hi,
if I want to clone a running vm-hdd, would it be enough to "cp", or do I
have to "snap, protect, flatten, unprotect, rm" the snapshot to get as
consistent a clone as possible?
Or: does cp use an internal snapshot while copying the blocks?
Thanks,
Fabian
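If you go the snapshot route, a minimal sketch with made-up image names; note it only gives a crash-consistent copy unless the VM is frozen or flushed first:
rbd snap create rbd/vm-disk@copy-src
rbd snap protect rbd/vm-disk@copy-src
rbd clone rbd/vm-disk@copy-src rbd/vm-disk-copy
rbd flatten rbd/vm-disk-copy                 # make the copy independent of the snapshot
rbd snap unprotect rbd/vm-disk@copy-src
rbd snap rm rbd/vm-disk@copy-src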
2015-01-15 1:08 GMT-08:00 Walter Valenti :
>
> - Original message -
>> From: Yehuda Sadeh
>> To: Walter Valenti
>> Cc: "ceph-users@lists.ceph.com"
>> Sent: Tuesday, 13 January 2015, 1:13
>> Subject: Re: [ceph-users] Problem with Rados gateway
>>
>> Try setting 'rgw print conti
On 01/16/2015 08:37 AM, Mohd Bazli Ab Karim wrote:
> Dear Ceph-Users, Ceph-Devel,
>
> Apologies if you get a double post of this email.
>
> I am running a ceph cluster version 0.72.2 and one MDS (in fact there are 3:
> 2 down and only 1 up) at the moment.
> Plus I have one CephFS client mounted to
- Original message -
> From: Yehuda Sadeh
> To: Walter Valenti
> Cc: "ceph-users@lists.ceph.com"
> Sent: Tuesday, 13 January 2015, 1:13
> Subject: Re: [ceph-users] Problem with Rados gateway
>
> Try setting 'rgw print continue = false' in your ceph.conf.
>
> Yehuda
Thanks, but
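For reference, a minimal ceph.conf sketch with a made-up client section name; as far as I know 'false' is for frontends without 100-continue support, while a fastcgi module patched for 100-continue wants 'true':
[client.radosgw.gateway]
rgw print continue = false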
Hi all,
I am trying to compile and use Ceph on a cluster of Raspberry Pi
single-board computers with Raspbian as the operating system. I tried it
this way:
wget http://ceph.com/download/ceph-0.91.tar.bz2
tar -xvjf ceph-0.91.tar.bz2
cd ceph-0.91
./autogen.sh
./configure --without-tcmalloc
make -j2
But res
On 14/01/2015 18:33, Udo Lembke wrote:
> Hi Loic,
> thanks for the answer. I hope it's not like in
> http://tracker.ceph.com/issues/8747 where the issue happens with a
> patched version, if I understand right.
http://tracker.ceph.com/issues/8747 is a duplicate of
http://tracker.ceph.com/issues/80
So you can see my server names and their OSDs too...
# id    weight   type name      up/down   reweight
-1      11.13    root default
-2      8.14     host h1
1       0.9      osd.1          up        1
3       0.9      osd.3          up        1
4       0.9      osd.4          up        1
5       0.68
# Get the compiled crushmap
root@server01:~# ceph osd getcrushmap -o /tmp/myfirstcrushmap
# Decompile the compiled crushmap above
root@server01:~# crushtool -d /tmp/myfirstcrushmap -o
/tmp/myfirstcrushmap.txt
then give us your /tmp/myfirstcrushmap.txt file.. :)
2015-01-14 17:36 GMT+01:00 Roland
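And, for completeness, the reverse direction once the decompiled map has been edited (same made-up file names as above):
# Recompile the edited crushmap
crushtool -c /tmp/myfirstcrushmap.txt -o /tmp/mynewcrushmap
# Load it back into the cluster
ceph osd setcrushmap -i /tmp/mynewcrushmap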
Hi Loic,
thanks for the answer. I hope it's not like in
http://tracker.ceph.com/issues/8747 where the issue happens with a
patched version, if I understand right.
So I must only wait a few months ;-) for a backport...
Udo
Am 14.01.2015 09:40, schrieb Loic Dachary:
> Hi,
>
> This is http://tracker.c
Yes, it's active/active and I found that VMware can switch from path to
path with no issues or service impact.
I posted some config files here: github.com/jak3kaj/misc
One set is from my LIO nodes, both the primary and secondary configs so you
can see what I needed to make unique. The other set
Hello list,
I'm currently trying to understand what I can do with Ceph to optimize
it for a cold-storage (write-once, read-very-rarely) like scenario,
trying to compare cost against LTO-6 tape.
There is a single main objective:
- minimal cost/GB/month of operations (including power, DC)
To achi
I tracked down the bug. Please try the attached patch
Regards
Yan, Zheng
> On 13 Jan 2015, at 07:40, Gregory Farnum wrote:
>
> Zheng, this looks like a kernel client issue to me, or else something
> funny is going on with the cap flushing and the timestamps (note how
>
On 01/14/2015 07:37 AM, John Spray wrote:
On Tue, Jan 13, 2015 at 1:25 PM, James wrote:
I was wondering if anyone has Mesos running on top of Ceph?
I want to test/use Ceph in lieu of HDFS.
You might be interested in http://ceph.com/docs/master/cephfs/hadoop/
It allows you to expose CephFS to
Hi Roland,
You should tune your Ceph Crushmap with a custom rule in order to do that
(write first on s3 and then to others). This custom rule will be applied
then to your proxmox pool.
(what you want to do is only interesting if you run VMs from host s3)
Can you give us your crushmap ?
2015-01-
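A hedged sketch of what such a rule could look like, assuming the crushmap defines a bucket s3root containing only host s3 and a bucket others containing the remaining hosts (so the later replicas cannot land on s3 again); bucket names and ruleset number are made up:
rule primary_on_s3 {
        ruleset 3
        type replicated
        min_size 1
        max_size 10
        step take s3root
        step chooseleaf firstn 1 type host
        step emit
        step take others
        step chooseleaf firstn -1 type host
        step emit
}
ceph osd pool set yourpool crush_ruleset 3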
Hi all,
My Ceph cluster is stuck when re-creating PGs; the following is the information.
Version 0.9, OS CentOS 7.0.
How can I analyze this further? Thanks.
[ceph@dev140 ~]$ ceph -s
cluster 84d08382-9f31-4d1b-9870-75e6975f69fe
health HEALTH_WARN
9 pgs stuck inactive
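A few commands that usually help narrowing such a state down (the pg id below is a made-up example):
ceph health detail
ceph pg dump_stuck inactive
ceph pg 0.3f query
ceph osd tree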
Hi, I am adding two monitors in osd0 (node2) and osd1 (node3) in the "ADD
MONITORS" step http://ceph.com/docs/master/start/quick-ceph-deploy/
But it failed to create the 2 monitors: http://pastebin.com/aSPwKs0H
Can you help me figure out why?