Hi,
We will look into the " failed to get index" error.
It shouldn't affect the normal working. Do let us know if you face any
other issues.
Regards,
Hari.
On 02-Aug-2017 11:55 PM, "Dmitri Chebotarov" <4dim...@gmail.com> wrote:
On Wed, 2 Aug 2017 at 19:27, Mark Connor wrote:
> Sorry, I meant Red Hat's Gluster Storage Server 3.2, which is the latest and
> greatest.
>
For RHGS-related questions/issues, please get in touch with Red Hat support.
This forum is for the community Gluster version.
Hello
I reattached the hot tier to a new, empty EC volume and started to copy data to
the volume.
The good news is I can now see files on the SSD bricks (hot tier): 'find
/path/to/brick -type f' shows files, whereas before 'find' would only show dirs.
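For reference, the reattach was roughly the following (brick hosts and paths
are placeholders; this is the tier CLI as I recall it from the 3.7-era docs):

    gluster volume tier VOLNAME attach replica 2 \
        node1:/ssd/brick1 node2:/ssd/brick2
    # verify files (not just dirs) are landing on the hot tier
    find /ssd/brick1 -type f | head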
But I've got a 'rebalance' error in the glusterd.log file after I
On Wed, Aug 2, 2017 at 8:27 PM, Tom Cannaerts - INTRACTO <
tom.cannae...@intracto.com> wrote:
> I added a peer to a 50GB replica volume and the initial replication seems to
> go rather slowly. It's about 50GB but has a lot of small files and a lot of
> files in the same folder.
>
> What would happen if
I added a peer to a 50GB replica volume and the initial replication seems to go
rather slowly. It's about 50GB but has a lot of small files and a lot of
files in the same folder.
What would happen if I try to access a file on the new peer? Will it just
fail? Will gluster fetch it seamlessly from the
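As far as I know, reads of a not-yet-healed file are served from a healthy
replica while self-heal copies it onto the new brick. A rough way to watch the
initial sync (VOLNAME is a placeholder):

    gluster volume heal VOLNAME info                    # entries still pending heal
    gluster volume heal VOLNAME statistics heal-count   # pending count per brick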
Sorry, I meant RedHat's Gluster Storage Server 3.2 which is latest and
greatest.
On Wed, Aug 2, 2017 at 9:28 AM, Kaushal M wrote:
> On Wed, Aug 2, 2017 at 5:07 PM, Mark Connor
> wrote:
> > Can the glusterd daemon be restarted on all storage nodes
On Wed, Aug 2, 2017 at 5:07 PM, Mark Connor wrote:
> Can the glusterd daemon be restarted on all storage nodes without causing
> any disruption to data being served or the cluster in general? I am running
> gluster 3.2 using distributed replica 2 volumes with fuse clients.
Hi, since I upgraded to version 10.0.4 I have a problem with the auth.allow
option.
If I allow an IP address to access a volume, other clients can still mount
the volume, for no apparent reason...
[gluster v set vol_test auth.allow IP_ADDRESS1] => IP_ADDRESS2 can easily mount
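A minimal reproduction sketch (addresses and mount paths are placeholders):

    gluster volume set vol_test auth.allow 192.0.2.10
    gluster volume get vol_test auth.allow    # confirms only 192.0.2.10 is allowed
    # from a client that is NOT 192.0.2.10:
    mount -t glusterfs server1:/vol_test /mnt/test    # should be denied, but succeeds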
Hi, I am currently working on version 10.0.4.
I would like to add a node to my current trusted pool (composed of 6 nodes
serving a dispersed 4+2 volume).
When I perform [gluster peer probe NEW_NODE] I get a peer probe rejected.
I have followed the recommendation of the documentation "Resolving Peer
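For context, the usual recovery steps from that doc section, as I understand
them, run on the rejected node (assumes systemd and the default
/var/lib/glusterd path):

    systemctl stop glusterd
    # clear local state but keep this node's UUID file (glusterd.info)
    cd /var/lib/glusterd
    find . -mindepth 1 ! -name glusterd.info -delete
    systemctl start glusterd
    gluster peer probe GOOD_NODE    # any healthy member of the pool
    systemctl restart glusterd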
Hi Sanoj,
I copied over the quota.conf file from the affected volume (node 1) and opened
it in a hex editor, but I cannot really recognize anything except for the
first few header/version bytes. I have attached it to this mail (compressed
with bzip2) as requested.
Should I recreate them
Can the glusterd daemon be restarted on all storage nodes without causing
any disruption to data being served or the cluster in general? I am running
gluster 3.2 using distributed replica 2 volumes with fuse clients.
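What I have in mind is a node-at-a-time rolling restart, assuming systemd:

    systemctl restart glusterd
    gluster peer status      # wait until every peer shows Connected
    gluster volume status    # confirm all bricks are online before the next node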
Regards,
Mark
Mabi,
We have fixed a couple of issues in the quota list path.
Could you also please attach the quota.conf file
(/var/lib/glusterd/vols/patchy/quota.conf)?
(Ideally, the first few bytes would be ASCII characters followed by 17
bytes per directory on which a quota limit is set.)
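If you want to eyeball the records yourself, a rough sketch (assuming the
header is a single newline-terminated line and xxd is available; each output
line should then be one 16-byte GFID plus a one-byte type):

    hdr=$(head -n 1 quota.conf | wc -c)
    tail -c +$((hdr + 1)) quota.conf | xxd -c 17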
Regards,
Sanoj
Hello!
We're restarting regular GD2 updates. This is the first one, and I
expect to send these out every other week.
In the last month, we've identified a few core areas that we need to
focus on. With solutions in place for these, we believe we're ready to
start deeper integration with
Hi,
The issue you are seeing is a little complex, but the information you have
provided is very limited. Could you please share:
- Volume info
- Volume status
- What kind of IO is going on?
- Is any brick down?
- A snapshot of the top command
- Anything you are seeing in the glustershd, mount, or brick logs?
Commands to gather most of this are sketched below.
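Assuming default log locations (adjust VOLNAME and paths for your install):

    gluster volume info VOLNAME
    gluster volume status VOLNAME
    top -b -n 1 | head -n 40    # one-shot top snapshot while the IO is running
    tail -n 100 /var/log/glusterfs/glustershd.log
    tail -n 100 /var/log/glusterfs/bricks/*.log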
On Fri, Jul 28, 2017 at 5:55 PM, ABHISHEK PALIWAL
wrote:
> Hi Team,
>
> Whenever I am performing IO operations on the gluster volume, the CPU load
> increases, sometimes reaching up to 70-80.
>
> when we started debugging,
Thank you very much indeed, I'll try and add an arbiter node.
--
Best Regards,
Seva Gluschenko
CTO @ http://webkontrol.ru
+7 916 172 6 170
August 1, 2017 12:29 AM, "WK" wrote:
On 7/31/2017 1:12 AM, Seva Gluschenko wrote:
Hi folks,
I'm running a simple gluster setup with
What I've just noticed: the brick in question does show up as:
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-GROUP-WORK   N/A   N/A   N   N/A
for one particular vol. Status for other vols (so far) shows
it OK.
Would this be a volume problem or a brick problem,
Also, now after the upgrade gluster reports, on some vols, a long list of
entries in heal info, amongst them these:
Brick
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-USER-HOME
Status: Connected
what are these entries?
On 02/08/17 02:19, Atin Mukherjee wrote:
This means shd client is not
On 02/08/17 02:22, Atin Mukherjee wrote:
Are you referring to the other names in the peer status output? If
so, a peerinfo entry having other names populated
means the peer might have multiple network interfaces, or the
reverse address resolution is picking up this name. But why
are you worried about the
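A quick way to see what reverse resolution returns for a peer address (the IP
here is a placeholder):

    getent hosts 10.5.6.32    # prints the name the resolver maps this IP to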