I am having a very strange issue with ovirt 3.5.1 and gluster. I have a
gluster volume with 4 nodes. One node is specifically set as the node
hosting the gluster volume in my ovirt cluster; however, today it died. I
tried working around it by modifying the hostname in the entry to another
node tha
Nathan,
Did you find a work around for this? I am running into the same issue.
Is there a way to force vdsm to see gluster? Or a way to manually run the
search so I can see why it fails?
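One place to start the manual check is `gluster peer status` on each node: vdsm generally cannot see the volume if a peer is down. Below is a minimal sketch that picks disconnected peers out of that command's output; the sample text is illustrative (in practice you would feed in `subprocess.check_output(["gluster", "peer", "status"]).decode()`), and the exact state strings can vary by gluster version.

```python
# Parse `gluster peer status` output and report peers that are not connected.
# The sample below stands in for real command output.
import re

def disconnected_peers(status_text):
    """Return hostnames whose state does not end in '(Connected)'."""
    peers = []
    for block in status_text.strip().split("\n\n"):
        host = re.search(r"Hostname:\s*(\S+)", block)
        state = re.search(r"State:\s*(.+)", block)
        if host and state and not state.group(1).strip().endswith("(Connected)"):
            peers.append(host.group(1))
    return peers

sample = """Hostname: node2.example.com
Uuid: 9b1f...
State: Peer in Cluster (Connected)

Hostname: node3.example.com
Uuid: 4c2a...
State: Peer in Cluster (Disconnected)"""

print(disconnected_peers(sample))  # → ['node3.example.com']
```

If a peer shows up disconnected here, fixing gluster membership first is usually simpler than trying to force vdsm's view of it.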
nathan stratton | vp technology | broadsoft, inc | +1-240-404-6580
| www.broadsoft.com
On Fri, Jun
Is there a way to pass oVirt user login details (name) to the VM in the
form of an environment variable? Would that be something cloud-init or
ovirt-guest-agent handles?
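As far as I know neither tool injects the portal login name automatically, but oVirt's Run Once / Initial Run dialog lets you pass a custom cloud-init payload, and you could template the user name into it there. A hypothetical cloud-config sketch (the name "alice" and the file path are placeholders you would substitute when building the payload):

```yaml
#cloud-config
# Hypothetical sketch: expose a user name to every login shell as
# $OVIRT_LOGIN_USER. oVirt does not fill this in for you; whatever
# generates the Run Once payload must substitute the real name.
write_files:
  - path: /etc/profile.d/ovirt_user.sh
    permissions: '0644'
    content: |
      export OVIRT_LOGIN_USER="alice"
```

After the VM boots, any interactive shell would see the variable via the profile script.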
--
Patrick Pierson
___
Users mailing list
Users@ovirt.org
http://lists.ov
AM, Itamar Heim wrote:
> On 09/29/2014 04:24 PM, Antoni Segura Puimedon wrote:
>
>>
>>
>> - Original Message -
>>
>>> From: "Pat Pierson"
>>> To: users@ovirt.org
>>> Sent: Monday, September 29, 2014 3:07:53 PM
>&g
I am attempting to use Snort as an IDS on my network. Currently all
traffic on my router's uplink port is mirrored to an unused port on an
oVirt node. I have created a network that only has access to that port
and assigned that network to my Snort VM. I am able to
se
Agreed, thanks for the input.
On Tue, Sep 16, 2014 at 11:02 AM, Shahar Havivi wrote:
> On 16.09.14 10:41, Pat Pierson wrote:
> > I had a feeling you were going to ask that, just finished installing
> fedora
> > 19 from an iso image and tested. fedora vm's cloud-init is
wrote:
> On 16.09.14 08:37, Pat Pierson wrote:
> > Shahar,
> > Thank you for your response. Version is
> cloud-init-0.7.4-2.el6.noarch
> >
> > On Sun, Sep 14, 2014 at 3:12 AM, Shahar Havivi
> wrote:
> >
> > > On 11.09.14 14:06, Pat Pierson w
Shahar,
Thank you for your response. Version is cloud-init-0.7.4-2.el6.noarch
On Sun, Sep 14, 2014 at 3:12 AM, Shahar Havivi wrote:
> On 11.09.14 14:06, Pat Pierson wrote:
> > I am running ovirt 3.4.3 on a Fedora 19 manager and have 1 node running
> > Fedora 19 as well. I a
I am running ovirt 3.4.3 on a Fedora 19 manager and have 1 node running
Fedora 19 as well. I am attempting to get cloud-init to work on a CentOS
VM but I am running into issues. I can see where in the log it mounts
/dev/sr1 to /tmp/tmp_random_location and where it reads the meta-data.json
and use
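Once the config drive is mounted, the metadata is plain JSON and easy to inspect by hand. A minimal sketch, assuming illustrative field names (the actual keys depend on the datasource oVirt generates; the JSON string stands in for a file read from the mounted /dev/sr1):

```python
# Inspect a cloud-init metadata file. The JSON below stands in for the
# meta-data read from the mounted config drive; field names are
# illustrative, not guaranteed to match what oVirt writes.
import json

def missing_keys(meta, required=("instance-id", "local-hostname")):
    """Return required keys that the metadata is missing."""
    return [k for k in required if k not in meta]

sample_meta = '''{
  "instance-id": "vm-centos-01",
  "local-hostname": "centos-vm.example.com"
}'''

meta = json.loads(sample_meta)
print(meta["local-hostname"])  # → centos-vm.example.com
print(missing_keys(meta))      # → []
```

Comparing the mounted file against what cloud-init logs it parsed can show whether the failure is in the data or in cloud-init itself.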
qemu process is still running on your host.
> can you see which host the VM is running on? can you try to log in the
> host and check if there are any qemu processes running there.
>
> regards,
> Maor
>
>
> On 06/17/2014 01:32 PM, Pat Pierson wrote:
> > I am running ovirt 3.3.2
I am running ovirt 3.3.2 and gluster 3.4 and recently had a pretty
catastrophic failure of my small 3-node cluster. Long story short, I lost
the disk belonging to a VM and decided to delete it (and start over), but
it now magically re-appears as "external-vmname". When I attempt to delete
it again, it deletes, but th
Looks like one of your peers is not connected anymore; depending on your
gluster setup this could be harmless, so long as you replace it soon.
On Thu, Feb 27, 2014 at 1:54 AM, yfw...@daicy.net wrote:
>
> hi,
> it is my glusterfs log,
> [2014-02-27 10:44:35.565367] I [client.c:1883:client_rpc_not
Using Assaf's information I was able to accomplish my A/B network. I put a
quick write-up about it here:
http://izen.ghostpeppersrus.com/setting-up-networks/
On Feb 6, 2014 3:50 AM, "Assaf Muller" wrote:
>
>
> - Original Message -
> > From: "Pa
Thanks Assaf. I will give that a try.
On Thu, Feb 6, 2014 at 3:50 AM, Assaf Muller wrote:
>
>
> - Original Message -
> > From: "Pat Pierson"
> > To: users@ovirt.org
> > Sent: Wednesday, February 5, 2014 10:17:54 PM
> > Subject: [Users]
I am having some issues wrapping my head around this, but what I am trying
to set up is an A/B testing environment with a 3-node cluster. Each node has
2 NICs, 1 for ovirtmgmt and 1 for the VLANed A/B network. I guess what I am
trying to understand is whether ovirt is tagging the VLANs I set up and is
properl
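One way to confirm tagging is actually happening is to capture frames on the host's trunk NIC (e.g. with tcpdump) and look for the 802.1Q header, which sits right after the two MAC addresses with TPID 0x8100. A minimal decoder sketch over a hand-built frame (the frame bytes are fabricated for illustration):

```python
# Decode the VLAN ID from a raw Ethernet frame carrying an 802.1Q tag.
# The frame below is hand-built for illustration; in practice you would
# capture real frames on the host's trunk NIC.
import struct

def vlan_id(frame):
    """Return the 802.1Q VLAN ID, or None if the frame is untagged."""
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x8100:          # TPID for 802.1Q
        return None
    tci = struct.unpack("!H", frame[14:16])[0]
    return tci & 0x0FFF              # low 12 bits carry the VLAN ID

# dst MAC + src MAC + TPID 0x8100 + TCI (priority 0, VLAN 100) + inner type
frame = bytes(6) + bytes(6) + b"\x81\x00" + b"\x00\x64" + b"\x08\x00"
print(vlan_id(frame))  # → 100
```

If frames leaving the node carry the expected tag, the question reduces to whether the switch and the other nodes are configured for the same VLAN IDs.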
Sorry if this is a simple answer, but I could not find anything on Google
about this. I keep having to reconfirm my membership with this list
because I assume Google is kicking back messages.
Message is as follows:
Your membership in the mailing list Users has been disabled due to
excessive bounce
all machines.
On Fri, Jan 10, 2014 at 1:20 PM, David Li wrote:
> Is this the /etc/hosts file on the engine machine or the node machine?
>
> --
> *From:* Pat Pierson
> *To:* David Li
> *Cc:* "users@ovirt.org"
> *Sent:* Fr
You can set a static FQDN in /etc/hosts if you don't have a DNS server;
however, if you do this, set the same FQDNs on all hosts, for each host:
192.168.0.1 node1.test.com node1
192.168.0.2 node2.test.com node2
192.168.0.3 node3.test.com node3
Use that FQDN for your engine/node duri
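When several hosts each carry their own /etc/hosts copy, a common failure is the files drifting apart. A small sketch that checks a set of hosts-file snippets all agree on the FQDN for each IP (the input strings stand in for files you would read from each machine; the mismatched entry is fabricated for the example):

```python
# Check that /etc/hosts-style entries on several hosts agree: every host
# should map each IP to the same FQDN. The strings stand in for real
# files read from each machine.
def parse_hosts(text):
    """Return {ip: fqdn} from hosts-file text, ignoring comments/blanks."""
    table = {}
    for line in text.splitlines():
        parts = line.split("#", 1)[0].split()
        if len(parts) >= 2:
            table[parts[0]] = parts[1]   # first name is the FQDN
    return table

def mismatches(files):
    """Return IPs whose FQDN differs between any two hosts' files."""
    merged = {}
    bad = set()
    for text in files:
        for ip, fqdn in parse_hosts(text).items():
            if merged.setdefault(ip, fqdn) != fqdn:
                bad.add(ip)
    return sorted(bad)

node1 = "192.168.0.1 node1.test.com node1\n192.168.0.2 node2.test.com node2"
node2 = "192.168.0.1 node1.test.com node1\n192.168.0.2 node2.example.com node2"
print(mismatches([node1, node2]))  # → ['192.168.0.2']
```

An empty result means every host resolves every IP to the same name, which is what the engine and nodes need during deployment.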
uster?
>
> Thanks,
> Kanagaraj
>
>
>
> On 12/12/2013 05:28 PM, Pat Pierson wrote:
>
> Kanagaraj,
> Thank you for the response. vdsm-gluster-4.13.0-11.el6.noarch is
> installed on the host that is currently on the gluster cluster as well as
> the host I am tryin
would have done a host re-install before moving the host to the
> gluster-cluster, vdsm-gluster would not have been installed.
>
> Thanks,
> Kanagaraj
>
>
> On 12/12/2013 05:43 AM, Pat Pierson wrote:
>
> I am in the process of upgrading my cluster while at the same time moving
> to
I am in the process of upgrading my cluster while at the same time moving
to gluster. My engine is version 3.3.1 and I have a NFS cluster running in
3.1 compatibility mode that I am moving to a 3.3 GlusterFS cluster. Host3
runs the engine and is on the NFS cluster while host2 is running a single