Hi Guys,
Quick question, I have my nodes on a bond-bridge-privateVlan setup, and
my engine on a bond-bridge-publicVlan setup for remote monitoring.
Understandably, the nodes are complaining that they are failing updates.
(They're on a private VLAN, and only configured with IPs in that VLAN,
Yaniv,
Thanks for the reply.
Didi,
Duly noted!
Thank you all for the reply. I got it all fixed.
Regards,
--
Fernando Fuentes
ffuen...@txweather.org
http://www.txweather.org
On Tue, Aug 16, 2016, at 12:56 AM, Yaniv Dary wrote:
> This looks like a DWH, not a reports issue. Are you sure you
On 08/16/2016 08:20 PM, Huan He (huhe) wrote:
> Hi Juan,
>
> Thanks! It works.
>
> One more question, do you know how to do "save network configuration" in
> the api? I did the following
>
> params.Action(force=1, check_connectivity=1, host_nics=host_nics)
>
> but the gui says the network
Hi Juan,
Thanks! It works.
One more question, do you know how to do "save network configuration" in
the api? I did the following
params.Action(force=1, check_connectivity=1, host_nics=host_nics)
but the gui says the network configuration is not saved. I can't find any
relevant params in the
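For what it's worth, in the oVirt Python SDK v3 the GUI's "Save network configuration" corresponds to a separate host action, `commitnetconfig`, invoked after `setupnetworks`. The sketch below is illustrative only: the `host` object is assumed to be an SDK v3 host, and the wrapper name is hypothetical.

```python
def setup_and_save_networks(host, action):
    """Apply a setupnetworks action, then persist it on the host.

    `host` is assumed to expose the oVirt SDK v3 methods setupnetworks()
    and commitnetconfig(); `action` would be the params.Action(...) shown
    in the message above. This wrapper itself is a hypothetical sketch.
    """
    host.setupnetworks(action)  # apply the new NIC layout
    host.commitnetconfig()      # the "save network configuration" step
```

Without the `commitnetconfig()` call, the changes are applied but the GUI keeps reporting the configuration as unsaved.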
We experienced severe performance degradation with a 5TB volume with
500GB of data on it. So much so that we went ahead and upgraded to
10GbE. Our setup was a 1GbE interface for all gluster communication and
client access. We have experienced no performance hits since switching
to 10GbE.
Hi all.
I understand using 10Gb interfaces when using Gluster is advised to
help with data replication, especially in situations where a node went
down for a while and needs to re-sync data.
However, can anyone tell me if using one dedicated 1Gb interface in
hosts with 1.8 TB of Raw
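To put the re-sync concern in rough numbers, here is an idealized back-of-the-envelope sketch (it assumes full line rate, ignoring protocol overhead and disk limits):

```python
def transfer_hours(terabytes, gbit_per_s):
    """Hours to move `terabytes` (decimal TB) over a `gbit_per_s` link."""
    bits = terabytes * 8 * 1000**4        # decimal TB -> bits
    return bits / (gbit_per_s * 1000**3) / 3600

# Re-syncing 1.8 TB: roughly 4 hours at 1 GbE vs roughly 0.4 hours at 10 GbE.
print(round(transfer_hours(1.8, 1), 2), round(transfer_hours(1.8, 10), 2))
```

Real heal times will be longer, but the tenfold gap between the two link speeds is the point.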
Thanks. That did it.
On 08/16/2016 10:44 AM, Nir Soffer wrote:
> On Tue, Aug 16, 2016 at 7:28 PM, Edward Clay wrote:
>> So I've run into an issue where I add
>> "-obackup-volfile-servers=10.4.16.19:10.4.16.12"
> -o is added by vdsm on the host, try:
>
>
On Tue, Aug 16, 2016 at 7:28 PM, Edward Clay wrote:
> So I've run into an issue where I add
> "-obackup-volfile-servers=10.4.16.19:10.4.16.12"
-o is added by vdsm on the host, try:
backup-volfile-servers=10.4.16.19:10.4.16.12
> to the storage domain
> object and
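Nir's fix above is just the removal of the leading `-o` (vdsm adds it itself when invoking mount). A tiny helper, hypothetical and not part of vdsm, illustrates the difference:

```python
def normalize_mount_options(opts):
    """Return storage-domain mount options with any leading '-o' stripped.

    vdsm passes the options to mount with '-o' already prepended, so
    entering "-obackup-volfile-servers=..." in the UI produces an invalid
    option string. This helper is an illustration only, not vdsm code.
    """
    opts = opts.strip()
    if opts.startswith("-o"):
        opts = opts[2:].lstrip(" =")
    return opts

print(normalize_mount_options("-obackup-volfile-servers=10.4.16.19:10.4.16.12"))
# backup-volfile-servers=10.4.16.19:10.4.16.12
```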
So I've run into an issue where I add
"-obackup-volfile-servers=10.4.16.19:10.4.16.12" to the storage domain
object and click ok. Then I get an error that says "Failed to connect
Host hv5.domain.com to the Storage Domains SANB". Am I getting the
mount option correct? Any thoughts on what I'm
On 08/16/2016 11:58 AM, Arsène Gschwind wrote:
> Hi,
>
> has anybody been able to configure Foreman with oVirt 4? When trying to
> add Foreman as an external provider and test the login, it always returns:
> Failed to communicate with the external provider, see log for
> additional details.
>
>
On 08/16/2016 03:52 AM, like...@cs2c.com.cn wrote:
> Hello,
>
> I'm using oVirt 3.6.7, and I want to use the QoS function via the REST
> API. But I found I can't update the QoS to unlimited.
> For example, I assigned a QoS named qos1 to a vnic profile named
> vprofile1, then I wanted to set the qos of
Hi Elad,
Am 16.08.2016 um 10:52 schrieb Elad Ben Aharon:
Please be sure that ovirtmgmt is not part of the iSCSI bond.
Yes, I made sure it is not part of the bond.
It does seem to have a conflict between default and enp9s0f0/enp9s0f1.
Try to put the host in maintenance and then delete the
Oh yeah :)
I mistakenly used a root certificate from a local CA for
/etc/pki/ovirt-engine/apache-ca.pem.
Now I understood, and it works.
Thanks again.
16.08.2016, 16:15, "Jiri Belka" :
> IMO you "owe" explanation what was wrong, so other users
> could learn from your
IMO you "owe" explanation what was wrong, so other users
could learn from your mistakes and this mailing-list archive
would thus be beneficial for them when searching for help ;)
Anyway, that's great news!
j.
- Original Message -
From: "aleksey maksimov"
To:
Thank you, Jiri !
I did everything step by step and SPICE HTML5 browser client now works.
16.08.2016, 10:46, "Jiri Belka" :
> So,
>
> I used this for my own ca test:
>
> OWN CA AND OWN ENGINE KEY/CRT
> =============================
>
> 0> CA
>
> # awk '/my-/ || $1 ~
Hi,
Am 16.08.2016 um 09:26 schrieb Elad Ben Aharon:
Currently, your host is connected through a single initiator, the
'Default' interface (Iface Name: default), to 2 targets: tgta and tgtb
I see what you mean, but the "Iface Name" is somewhat misleading here;
it does not mean that the wrong
So,
I used this for my own ca test:
OWN CA AND OWN ENGINE KEY/CRT
=============================
0> CA
# awk '/my-/ || $1 ~ /^[^#]*_default/' /etc/pki/tls/openssl.cnf
certificate = $dir/my-ca.crt        # The CA certificate
crl = $dir/my-ca.crl                # The current CRL
Jiri, I did not hide information. Tell me what the log file should show and I
will show
16.08.2016, 10:29, "Jiri Belka" :
> It does have logs, filenames "hide" real data.
>
> You should reveal logs and what each file is and
> which exact commands you were executing.
>
> Vague
It does have logs, filenames "hide" real data.
You should reveal logs and what each file is and
which exact commands you were executing.
Vague statements won't help much. It does work for me;
there must be something strange in your setup, but we
cannot know what without details.
j.
Currently, your host is connected through a single initiator, the 'Default'
interface (Iface Name: default), to 2 targets: tgta and tgtb (Target:
iqn.2005-10.org.freenas.ctl:tgta and Target: iqn.2005-10.org.freenas.ctl:tgtb).
Hence, each LUN is exposed from the storage server via 2 paths.
Since
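The path count in the explanation above is simply initiators times targets; e.g. the single 'default' iface against tgta and tgtb gives two paths per LUN. A toy illustration:

```python
def path_count(initiator_ifaces, targets):
    """Number of iSCSI paths per LUN: one per (iface, target) pair."""
    return initiator_ifaces * targets

print(path_count(1, 2))  # one 'default' iface to tgta + tgtb -> 2
```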
Hi,
Am 15.08.2016 um 16:53 schrieb Elad Ben Aharon:
Is the iSCSI domain that is supposed to be connected through the bond the
current master domain?
No, it isn't. An NFS share is the master domain.
Also, can you please provide the output of 'iscsiadm -m session -P3' ?
Yes, of course