On Tue, Sep 4, 2012 at 9:19 AM, Andrew Thompson wrote:
> Yes, it was my `data` pool I was trying to grow. After renaming and removing
> the original data pool, I can `ls` my folders/files, but not access them.
Yup, you're seeing ceph-mds being able to access the "metadata" pool,
but all the directory entries point at file data stored in the old
"data" pool, which no longer exists.
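A quick way to see the split Tommi is describing (a hedged sketch, assuming a roughly argonaut-era cluster):

    # List all pools: "metadata" holds the MDS's directory tree, which is
    # why `ls` still works, while file contents live in the data pool.
    ceph osd lspools

    # Dump the MDS map and check which data pool id(s) it is configured to use.
    ceph mds dump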
On Tue, 4 Sep 2012, Andrew Thompson wrote:
> On 9/4/2012 11:59 AM, Tommi Virtanen wrote:
> > On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson
> > wrote:
> > > Looking at old archives, I found this thread which shows that to mount a
> > > pool as cephfs, it needs to be added to mds:
> > >
> > > http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/5685
On 9/4/2012 11:59 AM, Tommi Virtanen wrote:
On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson wrote:
Looking at old archives, I found this thread which shows that to mount a
pool as cephfs, it needs to be added to mds:
http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/5685
I started a `rados cppool data tempstore`…
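For reference, the step the linked thread describes looks roughly like this (a hedged sketch; `add_data_pool` takes the numeric pool id, and the id 3 below is a made-up example):

    # Find the numeric id of the pool you want cephfs to use.
    ceph osd dump | grep '^pool'

    # Tell the MDS it may place file data in that pool (takes the pool id).
    ceph mds add_data_pool 3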
On Fri, Aug 31, 2012 at 11:58 PM, Andrew Thompson wrote:
> Looking at old archives, I found this thread which shows that to mount a
> pool as cephfs, it needs to be added to mds:
>
> http://permalink.gmane.org/gmane.comp.file-systems.ceph.devel/5685
>
> I started a `rados cppool data tempstore`…
On 8/31/2012 11:05 PM, Sage Weil wrote:
Sadly you can't yet adjust pg_num for an active pool. You can create a
new pool:
ceph osd pool create <poolname> <pg_num>
I would aim for 20 * num_osd, or thereabouts; see
http://ceph.com/docs/master/ops/manage/grow/placement-groups/
Then you can copy the data into the new pool with `rados cppool`.
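Put together, the whole grow-by-copy dance looks something like this (a sketch only; the 6-OSD sizing and pool names are assumptions, and the copy should be verified before anything is deleted):

    # 6 osds * 20 pgs/osd = 120; round up to a power of two.
    ceph osd pool create tempstore 128

    # Copy every object from the old pool into the new one.
    rados cppool data tempstore

    # Swap the pools, keeping the old one around until the copy checks out.
    ceph osd pool rename data data.old
    ceph osd pool rename tempstore data

The catch the rest of this thread runs into: the MDS references pools by numeric id, not by name, so renaming pools out from under cephfs leaves the metadata pointing at the old pool id.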
On 09/01/2012 11:05 AM, Sage Weil wrote:
On Sat, 1 Sep 2012, Xiaopong Tran wrote:
On 09/01/2012 12:05 AM, Sage Weil wrote:
On Fri, 31 Aug 2012, Xiaopong Tran wrote:
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while other disks are almost empty.
On Sat, 1 Sep 2012, Xiaopong Tran wrote:
> On 09/01/2012 12:39 AM, Gregory Farnum wrote:
> > On Fri, Aug 31, 2012 at 9:24 AM, Andrew Thompson
> > wrote:
> > > On 8/31/2012 12:10 PM, Sage Weil wrote:
> > > >
> > > > On Fri, 31 Aug 2012, Andrew Thompson wrote:
> > > > >
> > > > > Have you been reweight-ing osds?
On Sat, 1 Sep 2012, Xiaopong Tran wrote:
> On 09/01/2012 12:05 AM, Sage Weil wrote:
> > On Fri, 31 Aug 2012, Xiaopong Tran wrote:
> > > Hi,
> > >
> > > Ceph storage on each disk in the cluster is very unbalanced. On each
> > > node, the data seems to go to one or two disks, while other disks
> > > are almost empty.
On 09/01/2012 12:05 AM, Sage Weil wrote:
On Fri, 31 Aug 2012, Xiaopong Tran wrote:
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while other disks
are almost empty.
I can't find anything wrong from the crush map, it's just the default for now.
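Since the map is just the default, one sanity check is to pull it out of the cluster and read the weights by hand (assuming `crushtool` is installed alongside ceph):

    # Fetch the compiled crush map from the monitors.
    ceph osd getcrushmap -o /tmp/crushmap

    # Decompile to text; check the per-osd weights inside each host bucket.
    crushtool -d /tmp/crushmap -o /tmp/crushmap.txt

If every osd shows the same weight there, the imbalance is more likely pg count or reweight overrides than the map itself.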
On 09/01/2012 12:39 AM, Gregory Farnum wrote:
On Fri, Aug 31, 2012 at 9:24 AM, Andrew Thompson wrote:
On 8/31/2012 12:10 PM, Sage Weil wrote:
On Fri, 31 Aug 2012, Andrew Thompson wrote:
Have you been reweight-ing osds? I went round and round with my cluster a
few days ago reloading different crush maps only to find that re-injecting a crush map didn't seem to overwrite reweights.
On Fri, 31 Aug 2012, Andrew Thompson wrote:
> On 8/31/2012 12:10 PM, Sage Weil wrote:
> > On Fri, 31 Aug 2012, Andrew Thompson wrote:
> > > Have you been reweight-ing osds? I went round and round with my cluster a
> > > few days ago reloading different crush maps only to find that
> > > re-injecting a crush map didn't seem to overwrite reweights.
On Fri, Aug 31, 2012 at 9:24 AM, Andrew Thompson wrote:
> On 8/31/2012 12:10 PM, Sage Weil wrote:
>>
>> On Fri, 31 Aug 2012, Andrew Thompson wrote:
>>>
>>> Have you been reweight-ing osds? I went round and round with my cluster a
>>> few days ago reloading different crush maps only to find that
>>> re-injecting a crush map didn't seem to overwrite reweights.
On 8/31/2012 12:10 PM, Sage Weil wrote:
On Fri, 31 Aug 2012, Andrew Thompson wrote:
Have you been reweight-ing osds? I went round and round with my
cluster a few days ago reloading different crush maps only to find
that re-injecting a crush map didn't seem to overwrite reweights.
Take a look…
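The crush weight and the per-osd reweight really are tracked separately; a hedged way to inspect and reset the latter (osd number 3 is just an example):

    # The osd lines of the osd map include each osd's reweight (1 = no override).
    ceph osd dump | grep '^osd'

    # Reset an overridden osd back to full weight.
    ceph osd reweight 3 1.0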
On Fri, 31 Aug 2012, Andrew Thompson wrote:
> On 8/31/2012 7:11 AM, Xiaopong Tran wrote:
> > Hi,
> >
> > Ceph storage on each disk in the cluster is very unbalanced. On each
> > node, the data seems to go to one or two disks, while other disks
> > are almost empty.
> >
> > I can't find anything wrong from the crush map, it's just the default for now.
On Fri, 31 Aug 2012, Xiaopong Tran wrote:
> Hi,
>
> Ceph storage on each disk in the cluster is very unbalanced. On each
> node, the data seems to go to one or two disks, while other disks
> are almost empty.
>
> I can't find anything wrong from the crush map, it's just the
> default for now. Attached is the crush map.
On 8/31/2012 7:11 AM, Xiaopong Tran wrote:
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while other disks
are almost empty.
I can't find anything wrong from the crush map, it's just the
default for now. Attached is the crush map.
Hi,
Ceph storage on each disk in the cluster is very unbalanced. On each
node, the data seems to go to one or two disks, while other disks
are almost empty.
I can't find anything wrong from the crush map, it's just the
default for now. Attached is the crush map.
Here is the current situation on
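One way to gather the per-disk picture being described (a hedged sketch, assuming the common per-osd mount points of this era):

    # Per-osd disk usage on one node; adjust the path to your layout.
    df -h /var/lib/ceph/osd/*

    # Or ask the cluster: the osdstat table at the end of a pg dump lists
    # per-osd kb used / avail.
    ceph pg dump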