Re: [Gluster-users] 3.7.13 two node ssd solid rock

2016-08-04 Thread Leno Vo
ahh, I'm giving up on running Gluster in production...  Good thing I have 
replication and can do hybrid networking with my DR site, but there's an app 
that is network-sensitive, so I have to bump up my bandwidth; hopefully I'm not 
going to pay extra for it.  By the way, my two-node HP SAN also died in the 
blackout but never had corruption. 



Re: [Gluster-users] 3.7.13 two node ssd solid rock

2016-08-03 Thread Leno Vo
I had to reboot each node, with an interval of 5-8 minutes between them, to get 
it working again. After that it was stable, but lots of shards still didn't 
heal, though there's no split-brain.  Some VMs lost their .vmx files, so I 
created new VMs and attached them to the storage to get them working, whew!!!
Sharding is still faulty; I won't recommend it yet.  Going back to running 
without it. 
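
For reference, the heal backlog can be inspected per volume from any node; a 
minimal check, assuming an illustrative volume name gv0:

  # list entries still pending heal on each brick
  gluster volume heal gv0 info

  # per-brick count of entries pending heal
  gluster volume heal gv0 statistics heal-count

  # sharding is a volume-level option; note that turning it off on a
  # volume that already holds sharded files is generally unsafe
  gluster volume set gv0 features.shard off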


Re: [Gluster-users] 3.7.13 two node ssd solid rock

2016-08-03 Thread Leno Vo
My mistake, the corruption happened after 6 hours; some VMs had shards that 
won't heal, but there's no split-brain. 
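
For what it's worth, split-brain can be confirmed or ruled out explicitly, 
again assuming an illustrative volume name gv0:

  # lists only entries in split-brain; empty output means none
  gluster volume heal gv0 info split-brain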


Re: [Gluster-users] 3.7.13 two node ssd solid rock

2016-08-03 Thread Ted Miller

I would say you are very lucky.  I would not use anything less than replica 3 
in production.
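
For anyone reading the archive later, a sketch of what replica 3 looks like at 
volume-creation time; the host and brick paths here are made up:

  # three full copies; quorum can then block writes that would
  # otherwise lead to split-brain
  gluster volume create gv0 replica 3 \
      server1:/bricks/gv0 server2:/bricks/gv0 server3:/bricks/gv0

  # alternative: replica 3 arbiter 1, two data copies plus a
  # metadata-only arbiter brick as the quorum tie-breaker
  gluster volume create gv0 replica 3 arbiter 1 \
      server1:/bricks/gv0 server2:/bricks/gv0 server3:/bricks/arb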


Ted Miller
Elkhart, IN, USA

[Gluster-users] 3.7.13 two node ssd solid rock

2016-08-03 Thread Leno Vo
One of my Gluster 3.7.13 clusters is on two nodes only, with Samsung 1TB SSD 
Pros in RAID 5 (x3). It has already crashed twice because of brownouts and 
blackouts, and it holds production VMs, about 1.3TB.
It never got split-brain, and healed quickly.  Can we say 3.7.13 on two nodes 
with SSDs is rock solid, or were we just lucky?
My other Gluster is on three 3.7.13 nodes, but one node never came up (an old 
ProLiant server that wants to retire). It has SSD RAID 5 mixed with SSHDs (lol, 
laptop Seagates); it never healed about 586 occurrences, but there's no 
split-brain there either, and the VMs are intact, working fine and fast.
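
(For reference, a two-node replica volume like the one described is created 
along these lines; the volume, host, and brick names are illustrative. With 
only two replicas there is no third vote for quorum, which is part of why such 
setups are exposed to split-brain after power events.)

  gluster volume create gv0 replica 2 \
      node1:/bricks/gv0 node2:/bricks/gv0
  gluster volume start gv0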
Ah, and never turn on caching on the array, or ESXi might not come up right 
away: you need to go into setup first to make it work and restart, and then you 
can go into the array setup (HP array, F8) and turn off caching; then ESXi 
finally boots up.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users