On 11/24/2014 08:44 PM, noc wrote:
Warning: option deprecated, use lost_tick_policy property of kvm-pit
instead.
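(For reference, this warning is typically emitted for the old -no-kvm-pit-reinjection
flag; a sketch of the replacement spelling, assuming that flag is the one in use:

  qemu-kvm ... -no-kvm-pit-reinjection                     # deprecated form
  qemu-kvm ... -global kvm-pit.lost_tick_policy=discard    # equivalent property form
)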
[2014-11-24 15:09:13.069041] E [rpc-clnt.c:362:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x186)[0x7f0f46d79396] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind
On 11/12/2014 09:16 PM, Nir Soffer wrote:
Hi Mario,
Please open a bug for this.
Include these logs in the bug for the ovirt engine host, one hypervisor node
that
had no trouble, and one hypervisor node that had trouble (ovirt-node01?).
/var/log/messages
/var/log/sanlock.log
/var/log/vdsm.log
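(As an illustrative sketch, the three logs can be bundled per host for attaching
to the bug -- the archive name is arbitrary:

  tar czf logs-$(hostname).tar.gz /var/log/messages /var/log/sanlock.log /var/log/vdsm.log
)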
On 10/30/2014 06:45 PM, Jiri Moskovcak wrote:
On 10/30/2014 09:22 AM, Jaicel R. Sabonsolin wrote:
Hi Guys,
I need help with my ovirt Hosted-Engine HA setup. I am running on 2
ovirt hosts and 2 gluster nodes with replicated volumes. I already have
VMs running on my hosts and they can migrate nor
On 09/25/2014 07:44 PM, ml ml wrote:
Hello List,
I have a two-node replicated Gluster.
I am running about 15 VMs on each host so that in case one node fails
the other one can take over.
My question is how the GlusterFS self-healing-daemon works.
I disconnected the two nodes on purpose and reco
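(For anyone following along, the self-heal daemon's view can be inspected and a
heal triggered from the gluster CLI -- VOLNAME is a placeholder:

  gluster volume heal VOLNAME info                # entries still pending heal
  gluster volume heal VOLNAME                     # kick off healing of pending entries
  gluster volume heal VOLNAME info split-brain    # entries needing manual resolution
)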
On 08/29/2014 07:34 PM, David King wrote:
Paul,
Thanks for the response.
You mention that the issue is orphaned files during updates when one
node is down. However I am less concerned about adding and removing
files because the file server will be predominately VM disks so the file
structure i
On 07/22/2014 07:21 AM, Itamar Heim wrote:
On 07/22/2014 04:28 AM, Vijay Bellur wrote:
On 07/21/2014 05:09 AM, Pranith Kumar Karampuri wrote:
On 07/21/2014 02:08 PM, Jiri Moskovcak wrote:
On 07/19/2014 08:58 AM, Pranith Kumar Karampuri wrote:
On 07/19/2014 11:25 AM, Andrew Lau wrote:
On
On 07/22/2014 06:15 PM, Itamar Heim wrote:
On 07/16/2014 06:46 PM, Demeter Tibor wrote:
Hi,
We have a production environment with KVM+centos6 and we want to switch
to ovirt.
At this moment we have 12 VMs on three independent servers.
These VMs use the local disks of the servers; we don't have a centr
On 07/18/2014 05:43 PM, Andrew Lau wrote:
On Fri, Jul 18, 2014 at 10:06 PM, Vijay Bellur <vbel...@redhat.com> wrote:
[Adding gluster-devel]
On 07/18/2014 05:20 PM, Andrew Lau wrote:
Hi all,
As most of you ha
[Adding gluster-devel]
On 07/18/2014 05:20 PM, Andrew Lau wrote:
Hi all,
As most of you have got hints from previous messages, hosted engine
won't work on gluster. A quote from BZ1097639
"Using hosted engine with Gluster backed storage is currently something
we really warn against.
I think
I am sorry, this missed my attention over the last few days.
On 05/23/2014 08:50 PM, Ted Miller wrote:
Vijay, I am not a member of the developer list, so my comments are at end.
On 5/23/2014 6:55 AM, Vijay Bellur wrote:
On 05/21/2014 10:22 PM, Federico Simoncelli wrote:
- Original
On 05/23/2014 05:25 PM, Gabi C wrote:
On problematic node:
[root@virtual5 ~]# ls -la /var/lib/glusterd/peers/
total 20
drwxr-xr-x. 2 root root 4096 May 21 16:33 .
drwxr-xr-x. 9 root root 4096 May 21 16:33 ..
-rw-------. 1 root root 73 May 21 16:33
85c2a08c-a955-47cc-a924-cf66c6814654
-rw--
On 05/21/2014 07:22 PM, Kanagaraj wrote:
Ok.
I am not sure deleting the file or re-peer probe would be the right way
to go.
Gluster-users can help you here.
On 05/21/2014 07:08 PM, Gabi C wrote:
Hello!
I haven't changed the IP, nor reinstalled nodes. All nodes are updated
via yum. All I can t
On 05/21/2014 10:22 PM, Federico Simoncelli wrote:
- Original Message -
From: "Giuseppe Ragusa"
To: fsimo...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, May 21, 2014 5:15:30 PM
Subject: sanlock + gluster recovery -- RFE
Hi,
- Original Message -
From: "Ted Miller"
To: "u
On 05/11/2014 02:04 AM, Vadims Korsaks wrote:
HI!
Created a 2-node setup with oVirt 3.4 and CentOS 6.5; for storage, created a
2-node replicated gluster (3.5) fs on the same hosts as oVirt.
mount looks like this:
127.0.0.1:/gluster01 on
/rhev/data-center/mnt/glusterSD/127.0.0.1:_gluster01 type fuse.glu
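(That line corresponds to a plain glusterfs fuse mount; done by hand it would
look roughly like this, with an assumed mount point:

  mount -t glusterfs 127.0.0.1:/gluster01 /mnt/gluster01
)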
On 02/09/2014 11:08 PM, ml ml wrote:
Yes, the only thing which brings the write I/O almost to my host level
is enabling viodiskcache = writeback.
As far as I can tell this means caching is enabled for the guest and the host,
which is critical if a sudden power loss happens.
Can I turn this on if I
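(For reference, viodiskcache corresponds to the cache= option of qemu's -drive;
a sketch of the trade-off, with a hypothetical disk path:

  qemu-kvm ... -drive file=/path/disk.img,if=virtio,cache=none       # O_DIRECT on the host, safer on power loss
  qemu-kvm ... -drive file=/path/disk.img,if=virtio,cache=writeback  # host page cache, faster but unflushed writes can be lost
)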
On 02/09/2014 09:11 PM, ml ml wrote:
I am on CentOS 6.5 and I am using:
[root@node1 ~]# rpm -qa | grep gluster
glusterfs-rdma-3.4.2-1.el6.x86_64
glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-fuse-3.4.2-1.el6.x86_64
glusterfs-libs-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64
glusterfs-api-3.4
On 01/25/2014 01:31 AM, Steve Dainard wrote:
Not sure what a good method to bench this would be, but:
An NFS mount point on virt host:
[root@ovirt001 iso-store]# dd if=/dev/zero of=test1 bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.95399 s, 104 MB/s
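(Note that dd into a fresh file largely measures the page cache; a more honest
number comes from bypassing it, e.g. -- block size chosen arbitrarily:

  dd if=/dev/zero of=test1 bs=1M count=400 oflag=direct   # write around the host page cache
)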
On 01/17/2014 12:55 PM, Gianluca Cecchi wrote:
On Fri, Nov 29, 2013 at 11:48 AM, Dan Kenigsberg wrote:
On Fri, Nov 29, 2013 at 04:04:03PM +0530, Vijay Bellur wrote:
There are two ways in which GlusterFS can be used as a storage domain:
a) Use gluster native/fuse access with POSIXFS
b) Use
Adding gluster-users.
On 01/06/2014 12:25 AM, Amedeo Salvati wrote:
Hi all,
I'm testing ovirt+glusterfs with only two nodes for all (engine,
glusterfs, hypervisors), on CentOS 6.5 hosts, following the guides at:
http://community.redhat.com/blog/2013/09/up-and-running-with-ovirt-3-3/
http://www.glust
Adding gluster-users.
On 01/02/2014 08:50 PM, gregoire.le...@retenodus.net wrote:
Hello,
I have a Gluster volume in distributed/replicated mode. I have 2 hosts.
When I try to create a VM with a preallocated disk, it uses 100% of the
available CPU and bandwidth (I have 1 Gigabit network card).
T
On 12/23/2013 06:24 PM, gregoire.le...@retenodus.net wrote:
Hi,
For the 2-host scenario, disabling quorum will allow you to do this.
I just disabled quorum and disabled the auto migration for my cluster.
Here is what I get :
To remind, the path of my storage is localhost:/path and I selected
"
On 12/17/2013 02:00 AM, tristan...@libero.it wrote:
Yes, my idea is to start with 1 node (storage+compute) and then expand with
more servers to add storage and compute.
What do you think?
Definitely doable. I have not come across many instances of this in the
community and would recommend
On 12/15/2013 10:30 PM, tristan...@libero.it wrote:
I have 1 physical node with SSD disks; can I install oVirt with GlusterFS
storage as backend, and later add a new oVirt node (same hardware) and
attach it to the first one to create compute and storage HA?
gluster volumes can be expan
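(A sketch of the expansion step being described, with hypothetical hostnames and
brick paths -- the second brick turns the single brick into a replica pair:

  gluster peer probe node2
  gluster volume add-brick VOLNAME replica 2 node2:/bricks/brick1
  gluster volume heal VOLNAME full   # sync existing data onto the new brick
)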
On 12/09/2013 07:32 PM, lofyer wrote:
I was installing ovirt-engine-3.3.1 on CentOS-6.5 and got the dependency
error below:
Error: Package: glusterfs-cli-3.4.0-8.el6.x86_64 (glusterfs-epel)
Requires: glusterfs-lib = 3.4.0-8.el6.x86_64
Available: glusterfs-3.4.0-8.el6.x86_64 (glusterfs-epel)
glusterfs
On 11/29/2013 03:35 AM, tristan...@libero.it wrote:
Hello everybody,
I'm successfully using oVirt with 16 physical nodes, in an FC cluster with a
very BIG Dell Compellent (and so expensive) enterprise storage ;)
I'm researching a new architecture for a new cluster, and I want to
understand more bet
On 11/22/2013 11:18 PM, Bob Doolittle wrote:
On 11/22/2013 12:21 PM, Vijay Bellur wrote:
On 11/22/2013 06:23 PM, Bob Doolittle wrote:
On 11/22/2013 06:54 AM, Kristaps wrote:
Bob Doolittle writes:
On 11/21/2013 12:57 PM, Itamar Heim wrote:
On 11/21/2013 07:38 PM, Bob Doolittle wrote
On 11/22/2013 06:23 PM, Bob Doolittle wrote:
On 11/22/2013 06:54 AM, Kristaps wrote:
Bob Doolittle writes:
On 11/21/2013 12:57 PM, Itamar Heim wrote:
On 11/21/2013 07:38 PM, Bob Doolittle wrote:
On 11/21/2013 12:00 PM, Itamar Heim wrote:
On 11/21/2013 06:32 PM, Bob Doolittle wrote:
Yay!
On 11/22/2013 01:00 PM, Itamar Heim wrote:
On 11/22/2013 06:08 AM, Vijay Bellur wrote:
On 11/22/2013 06:25 AM, Bob Doolittle wrote:
On 11/21/2013 07:53 PM, Itamar Heim wrote:
On 11/22/2013 02:52 AM, Bob Doolittle wrote:
On 11/21/2013 07:48 PM, Itamar Heim wrote:
On 11/22/2013 02:33 AM
On 11/22/2013 06:25 AM, Bob Doolittle wrote:
On 11/21/2013 07:53 PM, Itamar Heim wrote:
On 11/22/2013 02:52 AM, Bob Doolittle wrote:
On 11/21/2013 07:48 PM, Itamar Heim wrote:
On 11/22/2013 02:33 AM, Bob Doolittle wrote:
On 11/21/2013 12:57 PM, Itamar Heim wrote:
On 11/21/2013 07:38 PM, B
On 10/25/2013 11:57 AM, Fabian Deutsch wrote:
On Thursday, 24.10.2013 at 19:59 +0200, Saša Friedrich wrote:
I reinstalled the node and remounted / rw, then I checked the fs before
activating host (in oVirt Engine) and after (which files have been
changed)... The "ro" problem seems to be in /var/lib/
On 10/19/2013 10:44 AM, Gianluca Cecchi wrote:
On Wed, Oct 16, 2013 at 1:01 PM, Itamar Heim wrote:
Hope to meet many next week in LinuxCon Europe/KVM Forum/oVirt conference in
Edinburgh
Unfortunately not me ;-(
- LINBIT published "High-Availability oVirt-Cluster with
iSCSI-Storage"[9]
On 09/25/2013 11:51 AM, Gianluca Cecchi wrote:
On Wed, Sep 25, 2013 at 8:11 AM, Vijay Bellur wrote:
Have the following configuration changes been done?
1) gluster volume set <volname> server.allow-insecure on
2) Edit /etc/glusterfs/glusterd.vol on all gluster nodes to contain this
line
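(The glusterd.vol line the preview cuts off before is, per the gluster
documentation of that era, presumably the management-daemon counterpart of the
volume option above:

  # in /etc/glusterfs/glusterd.vol, inside the "volume management" section
  option rpc-auth-allow-insecure on
  # then restart the management daemon:
  service glusterd restart
)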
On 09/25/2013 11:36 AM, Gianluca Cecchi wrote:
qemu-system-x86_64: -drive
file=gluster://ovnode01/gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161,if=none,id=drive-virtio-disk0,format=raw,serial=d004045e-620b-4d90-8a7f-6c6d
On 09/23/2013 08:08 PM, David Riedl wrote:
Hello everyone,
I recently created my first ovirt/vdsm/gluster cluster. I did everything
as it is described in the ovirt and glusterfs quick start.
The glusterfs Domain is recognized in the UI and is also mounted in the
system. Everything looks fine to m
On 07/17/2013 10:20 PM, Steve Dainard wrote:
Completed changes:
*gluster> volume info vol1*
Volume Name: vol1
Type: Replicate
Volume ID: 97c3b2a7-0391-4fae-b541-cf04ce6bde0f
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ovirt001.miovision.corp:/mnt/storage1/vol1
On 07/17/2013 09:04 PM, Steve Dainard wrote:
*Web-UI displays:*
VM VM1 is down. Exit message: internal error process exited while
connecting to monitor: qemu-system-x86_64: -drive
file=gluster://ovirt001/vol1/a87a7ef6-2c74-4d8e-a6e0-a392d0f791cf/images/238cc6cf-070c-4483-b686-c0de7ddf0dfa/ff2
On 07/16/2013 06:02 PM, Itamar Heim wrote:
On 07/02/2013 12:01 AM, Steve Dainard wrote:
Creating /var/lib/glusterd/groups/virt on each node and adding
parameters found here:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Storage/2.0/html/Quick_Start_Guide/chap-Quick_Start_Guide-Virtu
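(For context, the virt group file from that guide carries volume options along
these lines -- values from the Red Hat Storage 2.0 virt profile, so verify
against your version -- applied with 'gluster volume set VOLNAME group virt':

  # /var/lib/glusterd/groups/virt
  quick-read=off
  read-ahead=off
  io-cache=off
  stat-prefetch=off
  eager-lock=enable
  remote-dio=enable
)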
On 04/01/2013 06:13 PM, russell muetzelfeldt wrote:
On 01/04/2013, at 8:53 PM, Itamar Heim wrote:
On 04/01/2013 12:33 PM, russell muetzelfeldt wrote:
Is there any supported way (or advice on the best unsupported way) to provision
a 2-node cluster using local storage?
have you considered us
Is this still an issue with xfs?
There are no known problems with recent kernels. There are quite a few
enterprise storage solutions that run on xfs.
Thanks,
Vijay
On Fri, Mar 29, 2013 at 1:08 AM, Vijay Bellur <vbel...@redhat.com> wrote:
On 03/28/2013 08:19 PM, Tony Fel
On 03/28/2013 08:19 PM, Tony Feldmann wrote:
I have been trying for a month or so to get a 2 node cluster up and
running. I have the engine installed on the first node, then added each
system as a host to a POSIX DC. Both boxes have 4 data disks. After
adding the hosts I create a distributed re
On 03/07/2013 04:36 PM, Dave Neary wrote:
Hi Rob,
On 03/06/2013 05:59 PM, Rob Zwissler wrote:
On one hand I like oVirt, I think you guys have done a good job with
this, and it is free software so I don't want to complain.
But on the other hand, if you release a major/stable release (ie:
oVirt
On 02/01/2013 07:38 PM, Kanagaraj wrote:
On 02/01/2013 06:47 PM, Joop wrote:
Shireesh Anjal wrote:
On 02/01/2013 05:13 PM, noc wrote:
On 1-2-2013 11:07, Kanagaraj wrote:
Hi Joop,
Looks like the problem is because of the glusterfs version you are
using. vdsm could not parse the output from g
On 01/22/2013 03:28 PM, T-Sinjon wrote:
Hi, everyone:
Recently, I did a fresh install of ovirt 3.1 from
http://resources.ovirt.org/releases/stable/rpm/Fedora/17/noarch/,
and the node uses
http://resources.ovirt.org/releases/stable/tools/ovirt-node-iso-2.5.5-0.1.fc17.iso
wh
On 01/21/2013 01:50 PM, Kanagaraj Mayilsamy wrote:
Hi Jithin,
By looking at the logs, it seems you already had a volume named 'vol1' in
gluster and tried to create another volume with the same name from
the UI. That's why you were able to see the volume 'vol1' even after the
c
On 01/11/2013 12:56 PM, Jithin Raju wrote:
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 1929, in connectStorageServer
conObj.connect()
File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
self._mount.mount(self.options, self._vfsT
On 01/04/2013 02:12 AM, Joop wrote:
Just saw the question about what route to follow post-3.2, and picked up
on something I didn't know about; I was going to ask whether it was possible
to implement setting permissions on the volume folder when
creating a gluster volume.
But when trying it out I f
On 01/03/2013 10:14 PM, Adrian Gibanel wrote:
This is what I'm missing right now in oVirt 3.1:
Better GlusterFS support.
===
1. Add a checkbox when creating a volume: "Set oVirt permissions" so
that the vdsm : kvm permissions are set. I don't want it to be
on by default because I'd
On 12/28/2012 03:14 PM, Joop wrote:
Vijay Bellur wrote:
On 12/28/2012 03:24 AM, Joop wrote:
qemu-img-1.2.0-25.fc17.x86_64
qemu-common-1.2.0-25.fc17.x86_64
qemu-kvm-1.2.0-25.fc17.x86_64
qemu-kvm-tools-1.2.0-25.fc17.x86_64
ipxe-roms-qemu-20120328-1.gitaac9718.fc17.noarch
qemu-system-x86-1.2.0
On 12/28/2012 03:24 AM, Joop wrote:
qemu-img-1.2.0-25.fc17.x86_64
qemu-common-1.2.0-25.fc17.x86_64
qemu-kvm-1.2.0-25.fc17.x86_64
qemu-kvm-tools-1.2.0-25.fc17.x86_64
ipxe-roms-qemu-20120328-1.gitaac9718.fc17.noarch
qemu-system-x86-1.2.0-25.fc17.x86_64
Other logs are available but I don't know whe
On 11/08/2012 03:55 PM, Joop wrote:
Continuing my quest for a system consisting of oVirt and gluster I came
across the following.
I'm using the latest nightlies and had tried something with gluster but
it didn't work out. So I tried starting over and did a stop on my gluster
volumes, that went OK,
On 10/29/2012 11:46 AM, Daniel Rowe wrote:
Hi
I can't seem to get a gluster storage domain added. I am using Fedora
17 on both the nodes and the management machine. I have the gluster volumes
showing in ovirt and I can manually mount the gluster volume both
locally on the node and on the management machi
On 10/03/2012 08:53 PM, Mike Burns wrote:
Action Items
* mburns to follow up with maintainers to figure out feature lists for
3.2 -- DUE by 10-Oct
* mburns to pull all of these features into the 3.2 release summary page
Apologies for missing the due date. We plan to add th
On 07/09/2012 12:41 PM, Justin Clift wrote:
Hi all,
Saw the ongoing thread on oVirt 3.1 and Gluster, discussing
how to handle/portray storage networks in the Engine UI.
What's the right way to approach this, for people who use
non-IP based storage networks? (Infiniband, Fibre Channel,
etc).
On 07/04/2012 12:58 PM, Robert Middleswarth wrote:
On 07/04/2012 03:15 AM, Vijay Bellur wrote:
On 07/04/2012 12:18 PM, Robert Middleswarth wrote:
I was just able to repeat the issue. If you only have one node active
it will activate and work fine. But if you have 2 or more hosts / nodes
it
On 07/04/2012 12:18 PM, Robert Middleswarth wrote:
I was just able to repeat the issue. If you only have one node active
it will activate and work fine. But if you have 2 or more hosts / nodes
it will just round-robin through the hosts with each host contending on
each round. I don't have that
On 06/21/2012 07:35 AM, зоррыч wrote:
Vijay?
-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com]
Sent: Thursday, June 21, 2012 12:47 AM
To: зоррыч
Cc: 'Daniel Paikov'; users@ovirt.org; Vijay Bellur
Subject: Re: [Users] Ovirt 3.1 and gluster (creation in ovirt)
On
On 06/16/2012 11:08 AM, Robert Middleswarth wrote:
I am seeing the same thing. I also notice that glusterfs seems to die
every time I try. I am wondering if this could be a glusterfs / F17 issue.
Are you running GlusterFS 3.2.x in Fedora 17? For this volume creation
to complete successfully