It's difficult to give an ETA at this point; we are still discussing the tech stack
to implement S3.
However, please let us know which S3 features you use most often.
On 03/08/2017 05:54 PM, Gandalf Corvotempesta wrote:
2017-03-08 13:09 GMT+01:00 Saravanakumar Arumugam :
We are working
On 8 March 2017 at 23:34, Jarsulic, Michael [CRI] <
mjarsu...@bsd.uchicago.edu> wrote:
> I am having issues with one of my systems that houses two bricks and want
> to bring it down for maintenance. I was able to remove the first brick
> successfully and committed the changes. The second brick is
Not necessarily. ZFS does things fairly differently from other filesystems and
can be faster than HW RAID. I'd recommend spending a bit of time reading up -
the Linux ZFS-discuss list archives are a great place to start -
http://list.zfsonlinux.org/pipermail/zfs-discuss/ .
That said, if
I am having issues with one of my systems that houses two bricks and want to
bring it down for maintenance. I was able to remove the first brick
successfully and committed the changes. The second brick is giving me a lot of
problems with the rebalance when I try to remove it. It seems like it
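For readers following along, the brick-removal-with-rebalance workflow being described is usually driven with commands along these lines; the volume name and brick path below are placeholders, not taken from the original message:

# Start migrating data off the brick, watch progress, then commit once complete
gluster volume remove-brick myvol server1:/data/brick2 start
gluster volume remove-brick myvol server1:/data/brick2 status
gluster volume remove-brick myvol server1:/data/brick2 commit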
Just to let you know: I have reverted to glusterfs 3.4.2 and
everything is working again. No more disconnects, no more errors in the
kernel log. So there *has* to be some kind of regression in the newer
versions. Sadly, I guess, it will be hard to find.
2016-12-20 13:31 GMT+01:00 Micha
Hi Karan,
>>Are you reading a small-file data set or a large-file data set and, secondly,
>>which protocol is the volume mounted with?
I am using a 1 MB block size to test over the RDMA transport.
--
Deepak
> On Mar 8, 2017, at 2:48 AM, Karan Sandha wrote:
>
> Are you reading a
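As a rough illustration of the 1 MB block-size test mentioned above (the mount point, file name, and sizes are placeholders, assuming the volume is FUSE-mounted):

# Write a test file in 1 MB blocks on the mounted volume, then read it back
dd if=/dev/zero of=/mnt/glustervol/testfile bs=1M count=1024
dd if=/mnt/glustervol/testfile of=/dev/null bs=1M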
Thanks for your feedback.
May I know what the shard-block-size was?
One way to fix this would be to make the shard translator delete only the base
file (0th shard) in the IO path and move
the deletion of the rest of the shards to the background. I'll work on this.
-Krutika
On Fri, Mar 3, 2017 at 10:35
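For reference, the shard-block-size asked about above can be inspected through the volume-options interface; the volume name here is just a placeholder:

# Show the configured shard block size for the volume (shows the default if never set)
gluster volume get myvol features.shard-block-size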
2017-03-08 13:09 GMT+01:00 Saravanakumar Arumugam :
> We are working on a custom solution which will avoid gluster-swift
> altogether.
> We will update here once it is ready. Stay tuned.
Any ETA?
On 03/08/2017 04:55 PM, Gandalf Corvotempesta wrote:
I'm really interested in this.
Cool.
Let me know if I understood properly: is it now possible to access a
Gluster volume as object storage via the S3 API?
Yes. It is possible.
Authentication is currently turned off. You can expect updates on
I'm really interested in this.
Let me know if I understood properly: is it now possible to access a
Gluster volume as object storage via the S3 API?
Is Gluster-swift (and with that, the rings, auth and so on coming from
OpenStack) still needed?
2017-03-08 9:53 GMT+01:00 Saravanakumar Arumugam
2017-03-08 11:48 GMT+01:00 Karan Sandha :
> Hi Deepak,
>
> Are you reading a small-file data set or a large-file data set and, secondly,
> which protocol is the volume mounted with?
>
> For the small-file data set:
>
> gluster volume set vol-name cluster.lookup-optimize on
Hi Deepak,
Are you reading a small-file data set or a large-file data set and,
secondly, which protocol is the volume mounted with?
For the small-file data set:
gluster volume set <vol-name> cluster.lookup-optimize on   (default: off)
gluster volume set <vol-name> server.event-threads 4
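A quick way to confirm that such reconfigured options took effect is to query the volume options afterwards; 'myvol' is a placeholder:

# List the current values of the tuned options
gluster volume get myvol cluster.lookup-optimize
gluster volume get myvol server.event-threads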
Hi,
I have posted a blog about accessing a Gluster volume via an S3 interface. [1]
Here, the Gluster volume is exposed as object storage.
The object storage functionality is implemented with changes to Swift storage,
and the swift3 plugin is used to expose the S3 interface. [4]
gluster-object is available as
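As a rough sketch of what access through such an S3 interface can look like (not taken from the blog post; the endpoint, port, bucket, and object names are placeholders, and authentication is assumed to be disabled as mentioned earlier in the thread):

# Create a bucket, upload an object, and read it back over the S3-style HTTP API
curl -X PUT http://gluster-host:8080/mybucket
curl -X PUT -T ./hello.txt http://gluster-host:8080/mybucket/hello.txt
curl http://gluster-host:8080/mybucket/hello.txt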