Hi,
I have two nodes (node0 and node1) with ovirt 3.5, and I created a replicated
volume with one brick on each server.
Now I have added a third node (node2) and I would like to pull node1 out of
the whole system. Currently that is impossible because there is a brick on node1.
How can I mov
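One possible way to migrate that brick, sketched outside the original mail: the volume name (g1sata) and the brick path (/data/sata/brick) are only assumptions taken from later messages in this digest, so adjust them to the real setup.

# add the new node to the trusted pool
gluster peer probe node2.itsmart.cloud
# swap the node1 replica for an empty brick on node2
gluster volume replace-brick g1sata \
    node1.itsmart.cloud:/data/sata/brick \
    node2.itsmart.cloud:/data/sata/brick commit force
# let self-heal repopulate the new brick, then watch progress
gluster volume heal g1sata full
gluster volume heal g1sata info
# once no bricks remain on node1, it can be removed from the pool
gluster peer detach node1.itsmart.cloud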
Hi,
I have a 2-node replicated volume under ovirt 3.5.
My self-heal daemon is not running. I have a lot of unhealed VMs on my
glusterfs.
[root@node1 ~]# gluster volume heal g1sata info
Brick node0.itsmart.cloud:/data/sata/brick/
Number of entries: 1
Brick node1.itsmart.cloud:/da
idea?
Tibor
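A quick sketch for checking and restarting the self-heal daemon; the volume name g1sata comes from the heal output above, and the service command assumes a systemd-based node as seen elsewhere in this digest.

# check whether the self-heal daemon is shown online on both nodes
gluster volume status g1sata
# restarting glusterd also brings the self-heal daemon back if it is down
systemctl restart glusterd
# trigger a full heal and re-check the pending entries
gluster volume heal g1sata full
gluster volume heal g1sata info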
- Original message -
> On Mon, Oct 20, 2014 at 09:04:28AM +0200, Demeter Tibor wrote:
> > Hi,
> >
> > This is the full nfs.log after delete & reboot.
> > It refers to a portmap registering problem.
> >
> > [root@node0 gluster
ISTEN      4709/rpcbind
udp    0      0 0.0.0.0:111      0.0.0.0:*                 4709/rpcbind
udp6   0      0 :::111           :::*                      4709/rpcbind
Demeter Tibor
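rpcbind listening on port 111 only shows that the portmapper itself is up; a sketch for checking whether the gluster NFS server actually registered with it (hostnames as in the thread, nothing else assumed):

# list the programs registered with the portmapper
rpcinfo -p localhost | egrep 'portmapper|mountd|nfs'
# the gluster NFS server logs registration failures here
less /var/log/glusterfs/nfs.log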
- Original message -
> Hi,
>
> Th
5:21 node1.itsmart.cloud systemd[1]: Started RPC bind service.
Thanks in advance
Tibor
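If the problem is only start-up ordering after a reboot, a minimal check (assuming systemd on the nodes, as the journal line above suggests):

# rpcbind must be up before the gluster NFS server tries to register
systemctl enable rpcbind
systemctl status rpcbind glusterd
# if glusterd came up first, restarting it makes gluster NFS re-register
systemctl restart glusterd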
- Original message -
> On 10/19/2014 06:56 PM, Niels de Vos wrote:
> > On Sat, Oct 18, 2014 at 01:24:12PM +0200, Demeter Tibor wrote:
> >> Hi,
> >>
> >> [root@node0 ~
> pending frames:
> > frame : type(0) op(0)
> >
> > patchset: git://git.gluster.com/glusterfs.git
> > signal received: 11
> > time of crash: 2014-10-18 07:41:06
> > configuration details:
> > argp 1
> > backtrace 1
> > dlfcn 1
> > fdatasync 1
3.5.2
Regards,
Demeter Tibor
Email: tdemeter@itsmart.hu
Skype: candyman_78
Phone: +36 30 462 0500
Web: www.itsmart.hu
IT SMART KFT.
2120 Dunakeszi Wass Albert utca 2. I. em 9.
Phone: +36 30 462-0500 Fax: +36 27 637-486
erruption to all clients accessing volume engine over nfs.
> Thanks,
> Anirban
> On Saturday, 18 October 2014 1:03 AM, Demeter Tibor
> wrote:
> Hi,
> I have set up a glusterfs with NFS support.
> I don't know why, but after a reboot the NFS does not listen on localhost,
&
Hi,
I have set up a glusterfs with NFS support.
I don't know why, but after a reboot the NFS does not listen on localhost, only
on gs01.
[root@node0 ~]# gluster volume info engine
Volume Name: engine
Type: Replicate
Volume ID: 2ea009bf-c740-492e-956d-e1bca76a0bd3
Status: Started
Numb
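A sketch for testing the export on the local interface; the mount point /mnt/test is just an example, and the built-in gluster NFS server speaks NFSv3 only.

# show the exports and the registered NFS program on localhost
showmount -e localhost
rpcinfo -p localhost | grep nfs
# trial mount of the engine volume over NFSv3
mount -t nfs -o vers=3 localhost:/engine /mnt/test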
stent or resistant to gluster split-brain. Anyone
> else that's tried this is welcome to put me right on this.
>
> Cheers
>
> Alex
>
>
> On 29/09/14 15:10, Demeter Tibor wrote:
> > Hi,
> >
> > I would like to use glusterfs as ovirt-vmsto
In the glusterfs documentation the recommended bonding mode is mode=6.
My switch (D-Link DGS-1510) supports 802.3ad mode; in this case, is that better
than mode=6?
Tibor
- Original message -
> Indeed. Only the rr (round robin) mode will get higher performance on a
> single stream. It also means th
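For comparison, a sketch of an 802.3ad (mode=4) bond on CentOS 7; all values are examples, and neither mode=4 nor mode=6 makes a single TCP stream faster than one NIC, they only spread multiple connections across the slaves.

# /etc/sysconfig/network-scripts/ifcfg-bond0  (example values)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
ONBOOT=yes
MTU=9000
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"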
Hi,
I would like to use glusterfs as an ovirt vmstore.
In this case, will one VM that is running on one compute node use only one TCP
connection?
Thanks
- Original message -
> > Ok, I mean this is a network-based solution, but I think 100MB/sec is
> > possible with one NIC too.
> > I
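One way to see how many TCP connections a client mount actually holds (a replica 2 volume normally means one connection per brick plus one to the management daemon; this is just a generic check, not from the thread):

# on the compute node, list established connections of the glusterfs client
ss -tnp | grep glusterfs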
Hi,
I ran some short tests with glusterfs and bonding, but I have performance issues
(a quick raw-network test sketch follows the environment list below).
Environment:
- bonding mode=4 (with switch support) or mode=6
- centos7
- vlans
- two servers with 4 NICs per node: one NIC on the internet (this is the default
route) and 3 NICs as a bonded interface
- MTU 9000 o
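A minimal raw-network test, independent of gluster, to see what the bond itself delivers; iperf3 is assumed to be installed (e.g. from EPEL), and hostnames are taken from the thread.

# on node1
iperf3 -s
# on node0: single stream, then four parallel streams
iperf3 -c node1.itsmart.cloud -t 30
iperf3 -c node1.itsmart.cloud -t 30 -P 4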
]
> > On 09/24/2014 11:59 AM, Demeter Tibor wrote:
> > > Hi,
> > > Is there any method in glusterfs, like raid-5?
> > > I have three nodes, each node has 5 TB of disk. I would like to utilize all
> > > of
Hi,
Could anybody help me?
Tibor
> [+gluster-users]
> On 09/24/2014 11:59 AM, Demeter Tibor wrote:
> > Hi,
> > Is there any method in glusterfs, like raid-5?
> > I have three nodes, each node has 5 TB of disk. I would like to utilize all of
> >
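The closest equivalent to raid-5 is a dispersed (erasure coded) volume, which needs a newer GlusterFS release than 3.5 (3.6 or later). A sketch with assumed volume and brick names; with three 5 TB bricks and redundancy 1, roughly 10 TB stays usable.

# create and start a 2+1 dispersed volume across the three nodes
gluster volume create dispvol disperse 3 redundancy 1 \
    node0:/data/brick node1:/data/brick node2:/data/brick
gluster volume start dispvol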