Thanks Brad,
I have looked through OCFS2 and it does exactly what I wanted.
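For anyone landing on this thread later, a minimal sketch of what that ends up
looking like on top of the RBD image from the original post (assuming the o2cb
cluster stack is already configured in /etc/ocfs2/cluster.conf on both nodes;
device names are taken from the thread below):

# run once, on either node: replace the XFS filesystem with a cluster-aware one
sudo mkfs.ocfs2 -L shared_data /dev/rbd/data/data_01

# run on both nodeA and nodeB
sudo service o2cb start      # or however the o2cb stack is started on your distro
sudo mount -t ocfs2 /dev/rbd/data/data_01 /mnt

Unlike XFS, OCFS2 coordinates the two writers through its cluster stack, so
files written on nodeA become visible on nodeB.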
On Tue, Jun 28, 2016 at 1:04 PM, Brad Hubbard wrote:
> On Tue, Jun 28, 2016 at 4:17 PM, Ishmael Tsoaela
> wrote:
> > Hi,
> >
> > I am new to Ceph and most of the concepts are new.
> >
> > image mounted on nodeA, FS is XFS
On Tue, Jun 28, 2016 at 4:17 PM, Ishmael Tsoaela wrote:
> Hi,
>
> I am new to Ceph and most of the concepts are new.
>
> image mounted on nodeA, FS is XFS
>
> sudo mkfs.xfs /dev/rbd/data/data_01
>
> sudo mount /dev/rbd/data/data_01 /mnt
>
> cluster_master@nodeB:~$ mount|grep rbd
> /dev/rbd0 on /mnt type xfs (rw)
Hi,
I am new to Ceph and most of the concepts are new.
image mounted on nodeA, FS is XFS
sudo mkfs.xfs /dev/rbd/data/data_01
sudo mount /dev/rbd/data/data_01 /mnt
cluster_master@nodeB:~$ mount|grep rbd
/dev/rbd0 on /mnt type xfs (rw)
Basically I need a way to write on nodeA, mount the same image on nodeB, and
read that data there.
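For context, the image behind /dev/rbd/data/data_01 would have been created
and mapped roughly like this before the mkfs/mount steps above (a sketch; the
10G size is an assumption, pool 'data' and image 'data_01' come from the
device path):

rbd create data/data_01 --size 10240     # size in MB
sudo rbd map data/data_01                # creates /dev/rbd0 and the /dev/rbd/data/data_01 symlink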
Hello,
On Mon, 27 Jun 2016 17:00:42 +0200 Ishmael Tsoaela wrote:
> Hi ALL,
>
> Anyone who can help with this issue would be much appreciated.
>
Your subject line has nothing to do with your "problem".
You're alluding to OSD replication problems, obviously assuming that one
client would write to OSDs and the other client would then see those writes.
On Tue, Jun 28, 2016 at 1:00 AM, Ishmael Tsoaela wrote:
> Hi ALL,
>
> Anyone who can help with this issue would be much appreciated.
>
> I have created an image on one client and mounted it on both of the 2
> clients I have set up.
>
> When I write data on one client, I cannot access the data on the other
> client. What could be causing this issue?
Hi ALL,
Anyone who can help with this issue would be much appreciated.
I have created an image on one client and mounted it on both of the 2
clients I have set up.
When I write data on one client, I cannot access the data on the other
client. What could be causing this issue?
root@nodeB:/mnt# ceph osd tree
Hi Michael,
I finally rebuilt the cluster with xfs. I was suffering from the bug you
told me about, and after a few minutes the ops/s would drop to 6-10.
Now I constantly get 136 op/s, and the cluster could handle more, because
the process using it is rsync and its CPU is maxed out. So the
bottleneck is on the client.
Hi Michael,
Thank you for your responses. You helped me a lot. I'm loving ceph. I
brought down the node, and everything worked. I even rebuilt the osd
from scratch and everything worked. I brought down both servers,
rebooted them and, imagine, it still works. I optimized the network and
it's great.
> Can I safely remove the default pools?
Yes, as long as you're not using the default pools to store data, you can
delete them.
> Why is the total size about 1GB? It should be about 500MB, since there are
> 2 replicas.
I'm assuming that you're talking about the output of 'ceph df' or 'rados
df'. These commands report raw cluster space, so every replica counts
toward the usage they show.
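For example, a sketch assuming the stock 'data' and 'metadata' pools really
hold nothing (the pool name is deliberately given twice by the CLI as a
safety check):

ceph osd pool delete data data --yes-i-really-really-mean-it
ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
ceph df     # GLOBAL is raw space, so ~500MB stored with 2 replicas shows up as ~1GB used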
Hi Michael,
It worked. I hadn't realized this, because in the docs it installs two osd
nodes and says that it would become active+clean after installing them.
(Something that didn't work for me because of the 3-replicas problem.)
http://ceph.com/docs/master/start/quick-ceph-deploy/
Now I can shut down a node and the cluster keeps working.
Hi all,
First, thank you all for your answers. I will try to respond to everyone
and to everything.
First, ceph osd dump | grep pool
pool 0 'data' replicated size 2 min_size 2 crush_ruleset 0 object_hash
rjenkins pg_num 100 pgp_num 64 last_change 80 owner 0 flags hashpspool
crash_replay_interval
You may also want to check your 'min_size'... if it's 2, then you'll be
incomplete even with 1 complete copy.
ceph osd dump | grep pool
You can reduce the min size with the following syntax:
ceph osd pool set <pool-name> min_size 1
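For example, for the 'data' pool shown in the dump earlier in the thread (a
sketch):

ceph osd pool set data min_size 1
ceph osd pool get data min_size     # verify the change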
Thanks,
Michael J. Kidd
Sent from my mobile device. Please excuse brevity.
Hi again
I looked at your ceph -s output.
You have only 2 OSDs, one on each node. The default replica count is 2, the
default crush map says each replica goes on a different host, or maybe you set
it to 2 different OSDs. Anyway, when one of your OSDs goes down, Ceph can no
longer find another OSD to host the second replica.
Hi
Do you have chooseleaf type host or type node in your crush map?
How many OSDs do you run on each host?
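A quick way to check both of those (a sketch; the temporary file names are
just examples):

ceph osd tree                                  # shows how many OSDs sit under each host
ceph osd getcrushmap -o /tmp/crush.bin
crushtool -d /tmp/crush.bin -o /tmp/crush.txt
grep chooseleaf /tmp/crush.txt                 # e.g. "step chooseleaf firstn 0 type host"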
Thx
JC
On Saturday, April 19, 2014, Gonzalo Aguilar Delgado <gagui...@aguilardelgado.com> wrote:
> Hi,
>
> I'm building a cluster where two nodes replicate objects inside. I found
> that shutting down just one of the nodes (the second one) makes
> everything "incomplete".
Hi,
I'm building a cluster where two nodes replicate objects inside. I
found that shutting down just one of the nodes (the second one) makes
everything "incomplete".
I cannot find out why, since the crushmap looks good to me.
After shutting down one node:
cluster 9028f4da-0d77-462b-be9b-dbdf7f
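For readers who hit the same state, a few commands that narrow down which PGs
are incomplete and why (a sketch; <pgid> is a placeholder for one of the
stuck PGs):

ceph health detail | grep -i incomplete
ceph pg dump_stuck inactive
ceph pg <pgid> query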