On 29.08.2016 12:25, Nir Soffer wrote:
> On Thu, Aug 25, 2016 at 2:37 PM, InterNetX - Juergen Gotteswinter
> wrote:
>> currently, iSCSI multipathed with a Solaris-based filer as backend. But
>> this is already in the process of being migrated to a different, less
>> fragile,
On Thu, Aug 25, 2016 at 2:37 PM, InterNetX - Juergen Gotteswinter
wrote:
> currently, iSCSI multipathed with a Solaris-based filer as backend. But
> this is already in the process of being migrated to a different, less
> fragile, platform. oVirt is nice, but too bleeding edge.
On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter
wrote:
>
> iSCSI & oVirt is an awful combination, no matter if multipathed or
> bonded. It's always a gamble how long it will work, and when it fails,
> why it failed.
>
> It's supersensitive to
On Fri, Aug 26, 2016 at 1:33 PM, InterNetX - Juergen Gotteswinter
<j...@internetx.com> wrote:
>
>
> On 25.08.2016 15:53, Yaniv Kaul wrote:
> >
> >
> > On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter
> > >
On 26/08/2016 12:33, InterNetX - Juergen Gotteswinter wrote:
On 25.08.2016 15:53, Yaniv Kaul wrote:
On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter
> wrote:
iSCSI & oVirt is an
One more thing, which I am sure most people are not aware of.
When using thin-provisioned disks for VMs hosted on an iSCSI SAN, oVirt
uses what is, to me, an unusual way to do this.
oVirt adds a new LVM LV for a VM and generates a thin qcow2 image which is
written directly, raw, onto that LV. So far, ok, can be
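The layout described above can be probed from the shell: a qcow2 image written raw onto an LV means the device starts with the qcow2 magic bytes "QFI\xfb". The sketch below simulates this with a plain file standing in for the LV (the file name is made up for the demo); on a real host one would point `qemu-img info` at the LV device node instead.

```shell
# Simulate the first bytes of such an LV: the qcow2 magic is "QFI" + 0xFB
# (octal \373). A plain file stands in for the real /dev/<vg>/<lv> device.
printf 'QFI\373' > fake_lv.bin

# Probing the first three bytes reveals the qcow2 signature on a "raw" LV.
head -c 3 fake_lv.bin    # prints: QFI
echo
```

On an actual oVirt host, `qemu-img info /dev/<vg>/<lv>` (with the LV activated) would report `file format: qcow2` for such thin-provisioned disks.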
On Wed, Aug 24, 2016 at 6:15 PM, InterNetX - Juergen Gotteswinter <
juergen.gotteswin...@internetx.com> wrote:
> iSCSI & oVirt is an awful combination, no matter if multipathed or
> bonded. It's always a gamble how long it will work, and when it fails,
> why it failed.
>
I disagree. In most
On 25/08/2016 13:37, InterNetX - Juergen Gotteswinter wrote:
On 24.08.2016 17:15, InterNetX - Juergen Gotteswinter wrote:
iSCSI & oVirt is an awful combination, no matter if multipathed or
bonded. It's always a gamble how long it will work, and when it fails,
why it failed.
We are
On 25.08.2016 08:42, Uwe Laverenz wrote:
> Hi Jürgen,
>
> On 24.08.2016 17:15, InterNetX - Juergen Gotteswinter wrote:
>> iSCSI & oVirt is an awful combination, no matter if multipathed or
>> bonded. It's always a gamble how long it will work, and when it fails,
>> why it failed.
>>
>>
Hi Jürgen,
On 24.08.2016 17:15, InterNetX - Juergen Gotteswinter wrote:
iSCSI & oVirt is an awful combination, no matter if multipathed or
bonded. It's always a gamble how long it will work, and when it fails,
why it failed.
It's supersensitive to latency, and superfast at setting a host
iSCSI & oVirt is an awful combination, no matter if multipathed or
bonded. It's always a gamble how long it will work, and when it fails,
why it failed.
It's supersensitive to latency, and superfast at setting a host to
inactive because the engine thinks something is wrong with it. In most
Hi Elad,
thank you very much for clearing things up.
Initiator/iface 'a' tries to connect to target 'b' and vice versa. As 'a'
and 'b' are in completely separate networks, this can never work as long
as there is no routing between the networks.
So it seems the iSCSI bonding feature is not
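The cross-connect symptom described above (iface 'a' asked to reach target 'b' on an unreachable network) can be avoided by binding each open-iscsi iface to one NIC and discovering each portal only through the iface that shares its subnet. A sketch, assuming the interface names from this thread (enp9s0f0/enp9s0f1) and made-up portal addresses:

```shell
# Bind one open-iscsi iface to each dedicated storage NIC.
iscsiadm -m iface -I enp9s0f0 --op=new
iscsiadm -m iface -I enp9s0f0 --op=update -n iface.net_ifacename -v enp9s0f0
iscsiadm -m iface -I enp9s0f1 --op=new
iscsiadm -m iface -I enp9s0f1 --op=update -n iface.net_ifacename -v enp9s0f1

# Discover each target only via the iface on its own network, so iface 'a'
# is never asked to log in to target 'b'. Portal IPs here are hypothetical.
iscsiadm -m discovery -t sendtargets -p 10.0.131.10 -I enp9s0f0
iscsiadm -m discovery -t sendtargets -p 10.0.132.10 -I enp9s0f1
```

Note this is manual open-iscsi configuration, not something oVirt's iSCSI-bond feature does for you; the bond pairs every selected network with every selected target, which is exactly what fails when the networks are isolated.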
Thanks.
You're getting an iSCSI connection timeout [1], [2]. It means the host
cannot connect to the targets from iface enp9s0f1 nor from iface enp9s0f0.
This causes the host to lose its connection to the storage, and the
connection to the engine also becomes inactive. Therefore, the host changes
Hi Elad,
I sent you a download message.
thank you,
Uwe
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
Network configuration seems OK.
Please provide engine.log and vdsm.log
Thanks
On Wed, Aug 24, 2016 at 3:22 PM, Uwe Laverenz wrote:
> Hi,
>
> sorry for the delay, I reinstalled everything, configured the networks,
> attached the iSCSI storage with 2 interfaces and finally
Hi,
sorry for the delay, I reinstalled everything, configured the networks,
attached the iSCSI storage with 2 interfaces and finally created the
iSCSI-bond:
[root@ovh01 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default
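The routing table above is truncated, but the check it supports can be sketched with a captured table and awk: each dedicated iSCSI subnet should map to exactly one NIC. Subnet addresses below are made up for illustration; only the interface names come from this thread.

```shell
# A captured routing table (hypothetical iSCSI subnets, real NIC names
# from the thread). awk answers: which interface serves this destination?
cat > routes.txt <<'EOF'
Destination Gateway Genmask       Iface
10.0.131.0  0.0.0.0 255.255.255.0 enp9s0f0
10.0.132.0  0.0.0.0 255.255.255.0 enp9s0f1
EOF

awk '$1 == "10.0.131.0" { print $4 }' routes.txt   # prints: enp9s0f0
```

On a live host, `ip route get <portal-ip>` gives the same answer per portal without parsing the table by hand.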
I don't think it's necessary.
Please provide the host's routing table and interface list ('ip a' or
'ifconfig') while it's configured with the bond.
Thanks
On Tue, Aug 16, 2016 at 4:39 PM, Uwe Laverenz wrote:
> Hi Elad,
>
> On 16.08.2016 10:52, Elad Ben Aharon wrote:
>
>
Hi Elad,
On 16.08.2016 10:52, Elad Ben Aharon wrote:
Please be sure that ovirtmgmt is not part of the iSCSI bond.
Yes, I made sure it is not part of the bond.
There does seem to be a conflict between default and enp9s0f0/enp9s0f1.
Try to put the host in maintenance and then delete the
Hi,
On 16.08.2016 09:26, Elad Ben Aharon wrote:
Currently, your host is connected through a single initiator, the
'Default' interface (Iface Name: default), to 2 targets: tgta and tgtb
I see what you mean, but the "Iface Name" is somewhat misleading here;
it does not mean that the wrong
Currently, your host is connected through a single initiator, the 'Default'
interface (Iface Name: default), to 2 targets: tgta and tgtb (Target:
iqn.2005-10.org.freenas.ctl:tgta and Target: iqn.2005-10.org.freenas.ctl:tgtb).
Hence, each LUN is exposed from the storage server via 2 paths.
Since
Hi,
On 15.08.2016 16:53, Elad Ben Aharon wrote:
Is the iSCSI domain that is supposed to be connected through the bond the
current master domain?
No, it isn't. An NFS share is the master domain.
Also, can you please provide the output of 'iscsiadm -m session -P3' ?
Yes, of course
Hi,
Is the iSCSI domain that is supposed to be connected through the bond the
current master domain?
Also, can you please provide the output of 'iscsiadm -m session -P3' ?
Thanks
On Mon, Aug 15, 2016 at 4:31 PM, Uwe Laverenz wrote:
> Hi all,
>
> I'd like to test iSCSI
Hi all,
I'd like to test iSCSI multipathing with oVirt 4.0.2 and see the
following problem: if I try to add an iSCSI bond, the host loses
connection to _all_ storage domains.
I guess I'm doing something wrong. :)
I have built a small test environment for this:
The storage is provided by a