…modified payload limit to resolve this:
> - nagios-plugins-nrpe
> - nrpe
>
> Have you done that?
>
>
> On 10/09/2015 07:17 AM, Punit Dambiwal wrote:
>
> Hi Ramesh,
>
> Even after recompiling nrpe with the increased value, it's still the same issue...
>
> Thanks,
> Punit
Hi,
I am getting the following error :-
[root@monitor-001 yum.repos.d]# /usr/lib64/nagios/plugins/gluster/discovery.py -c ssd -H stor1
Traceback (most recent call last):
File "/usr/lib64/nagios/plugins/gluster/discovery.py", line 510, in
clusterdata =
Hi Ramesh,
Even after recompiling nrpe with the increased value, it's still the same issue...
Thanks,
Punit
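For context, a minimal sketch of the usual NRPE payload-limit rebuild, assuming a source build of both packages (the 8192 value is illustrative; both ends must use the same limit):
# in include/common.h, raise the packet buffer before building:
#   #define MAX_PACKETBUFFER_LENGTH 8192   (stock value is 1024)
./configure
make all
# install the rebuilt nrpe daemon on the monitored host and the rebuilt
# check_nrpe plugin on the Nagios server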
On Fri, Oct 9, 2015 at 9:21 AM, Punit Dambiwal <hypu...@gmail.com> wrote:
> Hi Ramesh,
>
> Thanks for the update...as I have installed nagios and nrpe via yum, should I
questions/613288/truncating-return-data-as-it-is-bigger-then-nrpe-allows.
>
>
> Let me know if you need any more info.
>
> Regards,
> Ramesh
>
>
> On 10/08/2015 02:48 PM, Punit Dambiwal wrote:
>
> Hi,
>
> I am getting the following error :-
>
replicate to distributed replicate...
[image: Inline image 3]
Is there anybody who can help me with this...now it's more and more complex...
Thanks,
Punit
On Mon, Jul 20, 2015 at 5:12 PM, Ramesh Nachimuthu rnach...@redhat.com
wrote:
On 07/20/2015 12:44 PM, Punit Dambiwal wrote:
Hi Atin,
Earlier I
Hi Sathees,
With 3 bricks I can get the gluster volume status...but after adding more
bricks I cannot get the gluster volume status
On Sun, Jul 12, 2015 at 11:09 AM, SATHEESARAN sasun...@redhat.com wrote:
On 07/11/2015 02:46 PM, Atin Mukherjee wrote:
On 07/10/2015 03:03 PM, Punit Dambiwal
...@integrafin.co.uk wrote:
On 24/04/15 09:14, Punit Dambiwal wrote:
Hi,
I want to use glusterfs with the following architecture :-
1. 3 * Supermicro servers as storage nodes.
2. Every server has 10 SATA HDDs (JBOD) and 2 SSDs for caching (2
additional on the backplane for the OS).
3. Gluster should be replica=3
Hi,
I want to use glusterfs with the following architecture :-
1. 3 * Supermicro servers as storage nodes.
2. Every server has 10 SATA HDDs (JBOD) and 2 SSDs for caching (2 additional
on the backplane for the OS).
3. Gluster should be replica=3
4. 10G network Connection
The question is how to … ?
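As a reference point, a minimal sketch of how a 3-node replica 3 volume is typically created, assuming hypothetical hostnames stor1-stor3 and one brick path per disk (all names are illustrative):
gluster peer probe stor2
gluster peer probe stor3
# replica 3 keeps a copy of every file on all three servers
gluster volume create vol01 replica 3 \
  stor1:/bricks/b1 stor2:/bricks/b1 stor3:/bricks/b1
gluster volume start vol01
With 10 HDDs per server you would list ten such brick triples on the create line, which yields a distributed replicated volume.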
On Fri, Apr 10, 2015 at 11:45 AM, Punit Dambiwal hypu...@gmail.com
wrote:
Hi Ben,
That means if I don't attach the SSD to a brick...even without installing
glusterfs on the server...it gives me throughput of about 300 MB/s, but once I
install glusterfs and add this SSD into a glusterfs volume
Hi Ben,
-Scheduler {noop or deadline } :- *noop*
-No read ahead! :- *yes*
-No RAID! :- *Yes no RAID*
-Make sure the kernel sees them as SSDs :- *Yes*
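For reference, a minimal sketch of how those settings are commonly checked and applied per disk (the device name sdb is illustrative):
cat /sys/block/sdb/queue/scheduler           # shows e.g. noop deadline [cfq]
echo noop > /sys/block/sdb/queue/scheduler   # select the noop scheduler
blockdev --setra 0 /dev/sdb                  # disable read-ahead
cat /sys/block/sdb/queue/rotational          # 0 means the kernel sees an SSD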
On Thu, Apr 9, 2015 at 10:04 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi Ben,
Yes...I am using 2 * 10G (LACP bonding)...
[root@cpu02
Punit,
yum info glusterfs should help you. AFAIK glusterfs 3.6.0 was
added to the CentOS 7.1 base and updates repositories, so you need to
exclude those packages if you want to stay with 3.5.
Cheers,
Luf
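A minimal sketch of that exclusion, assuming the stock CentOS-Base.repo layout (add the line to both sections):
[base]
exclude=glusterfs*
[updates]
exclude=glusterfs*
With those lines in place, yum will resolve glusterfs only from whatever dedicated 3.5 repository is configured.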
Punit Dambiwal wrote on Wed 08. 04. 2015 at 12:17 +0800:
Hi,
I tried to install gluster
bs=64k count=4k oflag=dsync
4096+0 records in
4096+0 records out
268435456 bytes (268 MB) copied, 62.6922 s, 4.3 MB/s
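The command above is truncated at the start; it was presumably of the form (output path illustrative):
dd if=/dev/zero of=/mnt/ds01/testfile bs=64k count=4k oflag=dsync
oflag=dsync forces every 64 KB write to be committed synchronously, so the 4.3 MB/s figure measures synchronous write latency over the replica path rather than raw SSD throughput.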
Please let me know what I should do to improve the performance of my
glusterfs…
Thanks,
Punit Dambiwal
yum info glusterfs --showduplicates
or, for a shorter list,
yum list glusterfs --showduplicates
I think we're moving off-topic in these lists.
Cheers,
Luf
Punit Dambiwal wrote on Wed 08. 04. 2015 at 14:02 +0800:
Hi,
Please find the attached..
[root@cpu06 ~]# yum info
:55 AM, Ben Turner btur...@redhat.com wrote:
- Original Message -
From: Vijay Bellur vbel...@redhat.com
To: Punit Dambiwal hypu...@gmail.com, gluster-users@gluster.org
Sent: Wednesday, April 8, 2015 6:44:42 AM
Subject: Re: [Gluster-users] Glusterfs performance tweaks
On 04/08
On Wed, Apr 8, 2015 at 6:44 PM, Vijay Bellur vbel...@redhat.com wrote:
On 04/08/2015 02:57 PM, Punit Dambiwal wrote:
Hi,
I am getting very slow throughput in glusterfs (dead slow...even
SATA is better)...I am using all SSDs in my environment.
I have the following setup :-
A. 4* host
On Wed, Apr 8, 2015 at 12:22 PM, John Gardeniers
jgardeni...@objectmastery.com wrote:
Hi Punit,
I had the same problem and found the easiest way was to download and
install the RPMs for v3.5.3
regards,
John
On 08/04/15 14:17, Punit Dambiwal wrote:
Hi,
I tried to install
Hi,
I tried to install gluster 3.5 on the 4 servers which are installed with
CentOS 7...I have modified the ovirt-3.5-dependencies.repo with the gluster
version 3.5, but the servers still keep trying to install gluster 3.6
I want to install a stable gluster version and manage it through
Ovirt
=DEFAULT_TIMEOUT) and remove the
/usr/share/vdsm/storage/outOfProcess.pyc file and restart vdsm and
supervdsm service on all hosts
Thanks,
Punit Dambiwal
On Mon, Mar 23, 2015 at 9:18 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi All,
Still I am facing the same issue...please help me
(timeout=DEFAULT_TIMEOUT) and remove the
/usr/share/vdsm/storage/outOfProcess.pyc file and restart vdsm and
supervdsm service on all hosts
Thanks,
Punit Dambiwal
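A minimal sketch of that cleanup as shell commands, assuming systemd hosts (the timeout edit itself, in outOfProcess.py, is truncated above):
rm -f /usr/share/vdsm/storage/outOfProcess.pyc   # drop the stale compiled module
systemctl restart vdsmd supervdsmd               # restart vdsm and supervdsm
Repeat on every host in the cluster.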
On Thu, Mar 19, 2015 at 10:08 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
Is there anybody who can help me to solve this issue...I
~kaushal
On Wed, Mar 25, 2015 at 3:09 PM, Punit Dambiwal hypu...@gmail.com wrote:
Hi All,
With the help of the gluster community and the ovirt-china community...my issue
got resolved...
The main root cause was the following :-
1. the glob operation takes quite a long time, longer than the ioprocess
a RAID and you have a
properly aligned file system, you may find NOOP provides better
performance; you need to do some testing with your hardware to
determine whether this is the case.
On Mon, Mar 23, 2015 at 9:43 PM, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
I want to use Glusterfs
Hi,
I want to use Glusterfs with Ovirt 3.5...please help me to make the
architecture stable for production use :-
I have 4 servers...every server can host 24 SSD disks (as bricks)...I want to
deploy distributed replicated storage with replica = 2...I don't want to
use hardware RAID...as I
[root@cpu01 log]#
[image: Inline image 3]
Thanks,
Punit
On Thu, Mar 19, 2015 at 2:53 PM, Michal Skrivanek
michal.skriva...@redhat.com wrote:
On Mar 19, 2015, at 03:18 , Punit Dambiwal hypu...@gmail.com wrote:
Hi All,
Is there anyone who has any idea about this problem...it seems
Hi,
Can anybody help me here to solve this problem?
Thanks,
Punit
On Fri, Mar 20, 2015 at 12:30 PM, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
I have seen some errors in the brick log file...
[2015-03-20 04:10:07.688859] I [server-handshake.c:585
cpu04 Connected
9b61b0a5-be78-4ac2-b6c0-2db588da5c35 localhost Connected
[root@cpu01 log]#
[image: Inline image 3]
Thanks,
Punit
On Thu, Mar 19, 2015 at 2:53 PM, Michal Skrivanek
michal.skriva...@redhat.com wrote:
On Mar 19, 2015, at 03:18 , Punit Dambiwal hypu
Hi,
I have seen some errors in the brick log file...
[2015-03-20 04:10:07.688859] I [server-handshake.c:585:server_setvolume]
0-ds01-server: accepted client from
cpu01-20541-2015/03/20-04:10:02:198340-ds01-client-0-0-0 (version: 3.6.2)
[2015-03-20 04:10:12.930118] I
be messed up by a reboot, then it seems
not a good and stable technology for production storage
Thanks,
Punit
On Wed, Mar 18, 2015 at 3:51 PM, Michal Skrivanek
michal.skriva...@redhat.com wrote:
On Mar 18, 2015, at 03:33 , Punit Dambiwal hypu...@gmail.com wrote:
Hi,
Is there any one from
Hi All,
Is there anyone who has any idea about this problem...it seems it's a bug
either in Ovirt or Glusterfs...that's why no one has any idea about
it...please correct me if I am wrong
Thanks,
Punit
On Wed, Mar 18, 2015 at 5:05 PM, Punit Dambiwal hypu...@gmail.com wrote:
Hi Michal,
Would
Hi Vijay,
Please find the gluster client logs here :-
http://paste.ubuntu.com/10618869/
On Wed, Mar 18, 2015 at 10:45 AM, Vijay Bellur vbel...@redhat.com wrote:
On 03/18/2015 07:37 AM, Punit Dambiwal wrote:
Where can I find the gluster client logs :-
[root@cpu07 glusterfs]# ls
Hi,
I am facing one strange issue with ovirt/glusterfs...still haven't found whether
this issue is related to glusterfs or Ovirt
Ovirt :- 3.5.1
Glusterfs :- 3.6.1
Host :- 4 Hosts (Compute + Storage)...each server has 24 bricks
Guest VM :- more than 100
Issue :- When I deploy this cluster first
glustershd.log
nfs.log
rhev-data-center-mnt-glusterSD-10.10.0.14:_ds01.log
[root@cpu07 glusterfs]#
Thanks,
Punit
On Wed, Mar 18, 2015 at 1:20 AM, Vijay Bellur vbel...@redhat.com wrote:
On 03/17/2015 01:39 PM, Punit Dambiwal wrote:
Hi,
I am facing one strange issue with ovirt/glusterfs...still didn't
Hi,
Is there anyone from the community who can help me solve this issue...??
Thanks,
Punit
On Tue, Mar 17, 2015 at 12:52 PM, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
I am facing one strange issue with ovirt/glusterfs...still haven't found whether
this issue is related to glusterfs or Ovirt
Hi,
Can anybody help on this...or is it a bug in ovirt or gluster..??
On Thu, Feb 26, 2015 at 12:07 PM, Punit Dambiwal hypu...@gmail.com wrote:
Hi Vered,
Please find the attached logs...
Thanks,
Punit
On Wed, Feb 25, 2015 at 2:16 PM, Vered Volansky ve...@redhat.com wrote:
Please send
Hi,
I am facing one strange issue with ovirt/glusterfs...still haven't found whether
this issue is related to glusterfs or Ovirt
Ovirt :- 3.5.1
Glusterfs :- 3.6.1
Host :- 4 Hosts (Compute + Storage)...each server has 24 bricks
Guest VM :- more than 100
Issue :- When I deploy this cluster first
Hi,
In my ovirt infra...ovirt (3.5.1) with glusterfs (3.6.1)...now when I try
to remove a VM...the VM is removed successfully but the VM disk (vdisk)
remains there...if I try to remove this unattached disk, it fails to
remove
[image: Inline image 1]
[image: Inline image 2]
Thanks,
Punit
On Tue, Feb 17, 2015 at 6:16 AM, Ben Turner btur...@redhat.com wrote:
- Original Message -
From: Joe Julian j...@julianfamily.org
To: Punit Dambiwal hypu...@gmail.com, gluster-users@gluster.org,
Humble Devassy Chirammal
humble.deva...@gmail.com
Sent: Monday, February 16, 2015 3:32:31
better after the image file has grown bigger,
and it's no longer necessary to resize the sparse image.
Best regards,
Samuli Heinonen
On 13.2.2015, at 8.58, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
I have seen that gluster performance is dead slow on small files...even
though I am using SSDs
Hi,
I have seen that gluster performance is dead slow on small files...even
though I am using SSDs...the performance is too bad...I am even getting better
performance in my SAN with normal SATA disks...
I am using distributed replicated glusterfs with replica count=2...I have
all SSD disks on the
Hi,
Is there anyone who can help me ??
Thanks,
Punit
On Tue, Feb 10, 2015 at 9:44 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
I want to know the best way to replace a failed brick...I am using
glusterfs with ovirt...and my compute and storage run on the same
node...
I have 4
://bugzilla.redhat.com/show_bug.cgi?id=991084).
On 02/10/2015 05:31 PM, Punit Dambiwal wrote:
Hi,
Is there anyone who can help me ??
Thanks,
Punit
On Tue, Feb 10, 2015 at 9:44 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
I want to know the best way to replace a failed brick...I am
Hi,
I want to know the best way to replace a failed brick...I am using
glusterfs with ovirt...and my compute and storage run on the same
node...
I have 4 nodes and each server has 24 bricks with distributed replicated...
Now if any brick fails on any node...I have a spare HDD to
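For a replica volume, a minimal sketch of the usual replacement procedure, assuming gluster 3.x syntax (volume name, host, and brick paths are illustrative):
# swap the dead brick for a freshly formatted one, then let self-heal rebuild it
gluster volume replace-brick ds01 cpu01:/bricks/b12 cpu01:/bricks/b12-new commit force
gluster volume heal ds01 full    # trigger a full heal onto the new brick
gluster volume heal ds01 info    # watch the heal progress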
Hi Atin,
What if I use glusterfs 3.5 ?? Will this bug affect 3.5 also
??
On Tue, Jan 13, 2015 at 3:00 PM, Atin Mukherjee amukh...@redhat.com wrote:
On 01/13/2015 12:12 PM, Punit Dambiwal wrote:
Hi Atin,
Please find the output from here :- http://ur1.ca/jf4bs
Looks like
dev, please help on this?
Thanks,
Kanagaraj
- Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: Kanagaraj kmayi...@redhat.com
Cc: Martin Pavlík mpav...@redhat.com, Vijay Bellur
vbel...@redhat.com, Kaushal M kshlms...@gmail.com,
us...@ovirt.org, gluster-users@gluster.org
peer status' output from host cpu04? Also on the
same host, search for any errors in the vdsm.log file.
Thanks,
Kanagaraj
- Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: Kanagaraj Mayilsamy kmayi...@redhat.com
Cc: Martin Pavlík mpav...@redhat.com, gluster-users
:28,830::caps::716::root::(_getKeyPackages)
rpm package ('glusterfs-geo-replication',) not found
M.
On 09 Jan 2015, at 11:13, Punit Dambiwal hypu...@gmail.com wrote:
Hi Kanagaraj,
Please find the attached logs :-
Engine Logs :- http://ur1.ca/jdopt
VDSM Logs :- http://ur1.ca/jdoq9
/var/log/vdsm/vdsm.log and
/var/log/vdsm/vdsmd.log
it would be really helpful if you provided the exact steps to reproduce
the problem.
regards
Martin Pavlik - rhev QE
On 08 Jan 2015, at 03:06, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
I tried to add a gluster volume but it failed
Hi,
I am facing one strange issue in ovirt with glusterfs...I want to
reactivate one of my host nodes, but it fails with the following error
:-
Gluster command [gluster peer status cpu04.zne01.hkg1.ovt.com] failed on
server cpu04.
Engine Logs :- http://ur1.ca/jczdp
-
From: Martin Pavlík mpav...@redhat.com
To: Punit Dambiwal hypu...@gmail.com
Cc: gluster-users@gluster.org, Kaushal M kshlms...@gmail.com,
us...@ovirt.org
Sent: Wednesday, January 7, 2015 9:36:24 PM
Subject: Re: [ovirt-users] Failed to find host Host in gluster peer
list from Host
Hi
Hi,
I tried to add a gluster volume but it failed...
Ovirt :- 3.5
VDSM :- vdsm-4.16.7-1.gitdb83943.el7
KVM :- 1.5.3 - 60.el7_0.2
libvirt-1.1.1-29.el7_0.4
Glusterfs :- glusterfs-3.5.3-1.el7
Engine Logs :-
2015-01-08 09:57:52,569 INFO
[org.ovirt.engine.core.bll.lock.InMemoryLockManager]
Hi All,
Is there anyone who can help me here ??
On Fri, Dec 12, 2014 at 10:44 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi Dan,
Yes..it's glusterfs
glusterfs logs :- http://ur1.ca/j3b5f
OS Version: RHEL - 7 - 0.1406.el7.centos.2.3
Kernel Version: 3.10.0 - 123.el7.x86_64
KVM Version
: glusterfs-3.6.1-1.el7
Qemu Version : QEMU emulator version 1.5.3 (qemu-kvm-1.5.3-60.el7_0.2)
Thanks,
punit
On Thu, Dec 11, 2014 at 5:47 PM, Dan Kenigsberg dan...@redhat.com wrote:
On Thu, Dec 11, 2014 at 03:41:01PM +0800, Punit Dambiwal wrote:
Hi,
Suddenly all of my VMs on one host paused
Hi,
Suddenly all of my VMs on one host paused with the following error :-
vm has paused due to unknown storage error
I am using glusterfs storage with distributed replicate replica=2...my
storage and compute both run on the same node...
engine logs :- http://ur1.ca/j31iu
Host logs :-
Hi,
I have the following setup :-
4 * nodes in distributed replicated
replica = 2
24 * bricks in each node
I am using each brick as a 256GB SSD...
I have 2 * 10Gb LAN with bonding for storage purposes
Even though I am using SSDs, I am still getting very slow I/O in my guest
VMs.
[root@centos7-2 ~]#
Can anyone help me here ???
On Fri, Dec 5, 2014 at 4:47 PM, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
I have the following setup :-
4 * nodes in distributed replicated
replica = 2
24 * bricks in each node
I am using each brick as a 256GB SSD...
I have 2 * 10Gb LAN with bonding
Hi,
I have the following architecture :-
1. 4 * Ovirt host nodes with gluster (distributed replicated
storage...replica=2)...with 8 bricks on each server
i.e. I am using the same hosts for compute as well as storage...
My question is: if one day one of my nodes is completely dead...and it seems I have
' to the ExecStart line in
the service file.
~kaushal
On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
Can anybody help me on this ??
On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal hypu...@gmail.com
wrote:
Hi Kaushal,
Thanks for the detailed reply
GlusterD attempts to
start but fails.
~kaushal
On Dec 2, 2014 8:03 PM, Punit Dambiwal hypu...@gmail.com wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyhs5 and http://ur1.ca/iyhue
Thanks,
punit
On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M kshlms...@gmail.com wrote:
Hey
Hi,
Can anybody help me on this ??
On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi Kaushal,
Thanks for the detailed reply...let me explain my setup first :-
1. Ovirt Engine
2. 4 * hosts as well as storage machines (host and gluster combined)
3. Every host has
Is there anyone who can help on this ??
Thanks,
punit
On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
My Glusterfs version is :- glusterfs-3.6.1-1.el7
On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy kmayi...@redhat.com
wrote:
[+Gluster-users@gluster.org
automatically after
the reboot. But the service is successfully started later manually by the
user.
Can somebody from gluster-users please help with this?
glusterfs version: 3.5.1
Thanks,
Kanagaraj
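As a starting point, a minimal sketch of the usual first checks on a systemd host (assuming the standard glusterd unit name):
systemctl enable glusterd     # make sure the unit is enabled at boot
systemctl status glusterd     # current state and last exit code
journalctl -u glusterd -b     # boot-time log for the unit
A common cause is glusterd starting before the network or the brick mounts are up, which the journal output should reveal.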
- Original Message -
From: Punit Dambiwal hypu...@gmail.com
To: Kanagaraj kmayi
two additional nodes for HA/LB purposes.
Please suggest a good way to achieve this
On Mon, Oct 27, 2014 at 6:19 PM, Vijay Bellur vbel...@redhat.com wrote:
On 10/23/2014 01:35 PM, Punit Dambiwal wrote:
On Mon, Oct 13, 2014 at 11:54 AM, Punit Dambiwal hypu...@gmail.com
Hi,
Is there anybody who has some reference and an update...
Thanks,
punit
On Wed, Oct 15, 2014 at 12:30 PM, Punit Dambiwal hypu...@gmail.com wrote:
Hi All,
Is there anybody who can help me ???
On Mon, Oct 13, 2014 at 11:54 AM, Punit Dambiwal hypu...@gmail.com
wrote:
Hi,
I have one question
Hi All,
Is there anybody who can help me ???
On Mon, Oct 13, 2014 at 11:54 AM, Punit Dambiwal hypu...@gmail.com wrote:
Hi,
I have one question regarding the gluster failover...let me explain my
current architecture...I am using Ovirt with gluster...
1. One Ovirt Engine (Ovirt 3.4)
2. 4
Hi,
I have one question regarding the gluster failover...let me explain my
current architecture...I am using Ovirt with gluster...
1. One Ovirt Engine (Ovirt 3.4)
2. 4 * Ovirt Nodes as well as Gluster storage nodes...with 12 bricks in one
node...(Gluster Version 3.5)
3. All 4 nodes in distributed
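On the failover question, a minimal sketch of the commands typically used to watch a replica volume recover after a node failure (volume name is illustrative):
gluster peer status             # which peers are connected
gluster volume status ds01      # which bricks are online
gluster volume heal ds01 info   # files still pending self-heal
With replica 2 the surviving copy can keep serving I/O (subject to quorum settings) while self-heal resynchronises the returning node.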
Hi Vijay,
The architecture is based on replica 2, not on replica 3...yes, it's better that I
raise this issue on the Ovirt user list...Thanks.
Thanks,
punit
On Mon, Aug 18, 2014 at 8:07 PM, Vijay Bellur vbel...@redhat.com wrote:
On 08/18/2014 11:51 AM, Punit Dambiwal wrote:
Hi Vijay,
Thanks
,
Taira
2014-08-15 16:39 GMT+09:00 Punit Dambiwal hypu...@gmail.com:
Hi,
I want to use gluster distributed replicate storage with 4 nodes with the
below configuration :-
- 4 x *Quanta STRATOS S210-X22RQ*
- 4 x 12 x 512 GB, 2.5″ SSD (front, hot swappable main storage)
- 4 x 2 x
Hi,
I want to use gluster distributed replicate storage with 4 nodes with the
below configuration :-
- 4 x *Quanta STRATOS S210-X22RQ*
- 4 x 12 x 512 GB, 2.5″ SSD (front, hot swappable main storage)
- 4 x 2 x 128 GB, 2.5″ SSD (*rear, hot swappable* OS drives, awesome
feature!)
-