Re: [Gluster-users] Tiering dropped ?

2019-09-04 Thread Amar Tumballi
On Wed, Sep 4, 2019 at 1:18 AM Carl Sirotic  wrote:

> So,
>
> I am running 4.1.x and I started to use tiering.
>
> I ran into a load of problems where my email server would get kernel
> panics, starting 12 hours after the change.
>
> I am in the process of detaching the tier.
>
> I saw that in version 6, tier feature was completely removed.
>
> I am under the impression there were some bugs.
>
>
That is -almost- correct. There were also issues in making it perform
better, and a lack of someone to take care of it full time.


>
> Is LVM cache supposed to be a viable solution for NVMe/SSD caching for a
> sharded volume?
>
>
What we saw is that dm-cache (i.e., LVM cache) actually performed better.
I recommend trying it to check performance with your workload.
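
For reference, a minimal sketch of what an LVM cache (dm-cache) setup for a
brick might look like. The volume group name "gluster_vg", the brick LV
"brick1", the cache sizes and the device /dev/nvme0n1 are assumptions;
adapt them to your layout:

  # Add the fast device to the volume group that holds the brick LV
  pvcreate /dev/nvme0n1
  vgextend gluster_vg /dev/nvme0n1

  # Create cache data and metadata LVs on the fast device
  lvcreate -L 100G -n brick1_cache gluster_vg /dev/nvme0n1
  lvcreate -L 1G -n brick1_cache_meta gluster_vg /dev/nvme0n1

  # Combine them into a cache pool and attach it to the brick LV
  lvconvert --type cache-pool --poolmetadata gluster_vg/brick1_cache_meta \
      gluster_vg/brick1_cache
  lvconvert --type cache --cachepool gluster_vg/brick1_cache gluster_vg/brick1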

-Amar


>
> Carl
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Tiering dropped ?

2019-09-03 Thread Carl Sirotic

So,

I am running 4.1.x and I started to use tiering.

I ran into a load of problems where my email server would get kernel
panics, starting 12 hours after the change.


I am in the process of detaching the tier.

I saw that in version 6, tier feature was completely removed.

I am under the impression there were some bugs.


Is LVM cache supposed to be a viable solution for NVMe/SSD caching for a
sharded volume?



Carl

___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Tiering and arbitrated replicated volumes

2018-06-29 Thread D'ALESSANDRO Davide
Hello,

I read from Red Hat website that Gluster has this limit:


- Tiering is neither compatible with arbitrated replicated volumes nor with Samba

I wonder if this limitation still applies to versions 3.14 and 4.0.

Indeed, I was planning to use tiering with SSD drives while keeping the 2x3
configuration, with one arbiter in each distribution group.

Thanks

Kind regards

Davide D'ALESSANDRO

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] tiering

2018-03-05 Thread Hari Gowtham
Hi,

There isn't a way to replace a failing tier brick through a single command,
as we don't have support for replace-brick, remove-brick or add-brick on a
tiered volume.
Once you bring the brick online (volume start force), the data on the brick
will be rebuilt by the self-heal daemon (because it is a replicated tier).
But adding a brick will still not work.

If you use the force option instead, the detach will go through, but any
data still on the hot tier will be lost while detaching the tier.

The volume start force will start a new brick process for the brick
that was down.
If it doesn't, we need to check the logs for the reason why it is not
starting.
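
To make the sequence explicit, the recovery flow on a replicated tier is
roughly the following (volume name taken from this thread; the heal
commands are the standard replicate ones, and using them on a tiered
volume is an assumption, so verify on your version):

  # Start a brick process for the brick that is down
  gluster volume start labgreenbin force

  # Confirm the brick shows up as online again
  gluster volume status labgreenbin

  # Watch the self-heal daemon rebuild the data on that brick
  gluster volume heal labgreenbin info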

On Mon, Mar 5, 2018 at 3:07 PM, Curt Lestrup  wrote:
> Hi Hari,
>
> I tried and now understand the implications of “detach force”.
> The brick failure was not caused by glusterfs, so it has nothing to do with
> the glusterfs version.
>
> In fact, my question is about how to replace a failing tier brick.
>
> The setup is replicated, so shouldn't there be a way to attach a new brick
> without terminating and rebuilding the tier?
> Or at least allow a “force” to “gluster volume tier labgreenbin detach start”?
>
> How should I understand the below? Is it not misleading, since it refers to
> the offline brick?
>> volume tier detach start: failed: Pre Validation failed on labgfs51. Found
>> stopped brick labgfs51:/gfs/p1-tier/mount. Use force option to remove the
>> offline brick
>
> Tried "gluster volume start labgreenbin force" but that did not reinitialize 
> a new brick.
>
> /C
>
> On 2018-03-05, 07:31, "Hari Gowtham"  wrote:
>
> Hi Curt,
>
> gluster volume tier labgreenbin detach force will convert the volume
> from a tiered volume to a normal volume by detaching all the hot
> bricks.
> The force command won't move the data to the cold bricks. If the hot
> brick had data, it will not be moved.
>
> Here you can copy the data on the hot brick to the mount point after 
> detach.
>
> Or you can do a "gluster volume start labgreenbin force" to restart
> the brick that has went down.
>
> Did it happen with 3.12.6 too?
> If yes, do you have any idea about how the brick went down?
>
>
> On Sun, Mar 4, 2018 at 8:08 PM, Curt Lestrup  wrote:
> > Hi,
> >
> >
> >
> > Have a glusterfs 3.10.10 (tried 3.12.6 as well) volume on Ubuntu 16.04 
> with
> > a 3 ssd tier where one ssd is bad.
> >
> >
> >
> > Status of volume: labgreenbin
> > Gluster process                            TCP Port  RDMA Port  Online  Pid
> > ------------------------------------------------------------------------------
> > Hot Bricks:
> > Brick labgfs81:/gfs/p1-tier/mount           49156     0          Y       4217
> > Brick labgfs51:/gfs/p1-tier/mount           N/A       N/A        N       N/A
> > Brick labgfs11:/gfs/p1-tier/mount           49152     0          Y       643
> > Cold Bricks:
> > Brick labgfs11:/gfs/p1/mount                49153     0          Y       312
> > Brick labgfs51:/gfs/p1/mount                49153     0          Y       295
> > Brick labgfs81:/gfs/p1/mount                49153     0          Y       307
> >
> >
> >
> > Cannot find a command to replace the ssd so instead trying detach the 
> tier
> > but:
> >
> >
> >
> > # gluster volume tier labgreenbin detach start
> >
> > volume tier detach start: failed: Pre Validation failed on labgfs51. 
> Found
> > stopped brick labgfs51:/gfs/p1-tier/mount. Use force option to remove 
> the
> > offline brick
> >
> > Tier command failed
> >
> >
> >
> > ‘force’ results in Usage:
> >
> > # gluster volume tier labgreenbin detach start force
> >
> > Usage:
> >
> > volume tier <VOLNAME> status
> > volume tier <VOLNAME> start [force]
> > volume tier <VOLNAME> stop
> > volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force]
> > volume tier <VOLNAME> detach <start|stop|commit|force>
> >
> >
> >
> > So trying to remove the brick:
> >
> > # gluster v remove-brick labgreenbin replica 2 
> labgfs51:/gfs/p1-tier/mount
> > force
> >
> > Removing brick(s) can result in data loss. Do you want to Continue? 
> (y/n) y
> >
> > volume remove-brick commit force: failed: Removing brick from a Tier 
> volume
> > is not allowed
> >
> >
> >
> > Succeeded removing the tier with:
> >
> > # gluster volume tier labgreenbin detach force
> >
> >
> >
> > but what does that mean? Will the content of tier get lost?
> >
> >
> >
> > How to solve this situation?
> >
> > /Curt
> >
> >
> >
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > 

Re: [Gluster-users] tiering

2018-03-04 Thread Hari Gowtham
Hi Curt,

gluster volume tier labgreenbin detach force will convert the volume
from a tiered volume to a normal volume by detaching all the hot
bricks.
The force command won't move the data to the cold bricks. If the hot
brick had data, it will not be moved.

Here you can copy the data on the hot brick to the mount point after detach.
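
As an illustration only (paths and host name are assumptions), leftover
data on the hot brick could be pushed back in through a client mount after
the force detach, skipping gluster's internal metadata directory:

  # Mount the volume through a client
  mount -t glusterfs labgfs11:/labgreenbin /mnt/labgreenbin

  # Copy files left on the old hot brick back in via the mount
  rsync -av --exclude='.glusterfs' /gfs/p1-tier/mount/ /mnt/labgreenbin/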

Or you can do a "gluster volume start labgreenbin force" to restart
the brick that has gone down.

Did it happen with 3.12.6 too?
If yes, do you have any idea about how the brick went down?


On Sun, Mar 4, 2018 at 8:08 PM, Curt Lestrup  wrote:
> Hi,
>
>
>
> Have a glusterfs 3.10.10 (tried 3.12.6 as well) volume on Ubuntu 16.04 with
> a 3 ssd tier where one ssd is bad.
>
>
>
> Status of volume: labgreenbin
> Gluster process                            TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Hot Bricks:
> Brick labgfs81:/gfs/p1-tier/mount           49156     0          Y       4217
> Brick labgfs51:/gfs/p1-tier/mount           N/A       N/A        N       N/A
> Brick labgfs11:/gfs/p1-tier/mount           49152     0          Y       643
> Cold Bricks:
> Brick labgfs11:/gfs/p1/mount                49153     0          Y       312
> Brick labgfs51:/gfs/p1/mount                49153     0          Y       295
> Brick labgfs81:/gfs/p1/mount                49153     0          Y       307
>
>
>
> Cannot find a command to replace the ssd so instead trying detach the tier
> but:
>
>
>
> # gluster volume tier labgreenbin detach start
>
> volume tier detach start: failed: Pre Validation failed on labgfs51. Found
> stopped brick labgfs51:/gfs/p1-tier/mount. Use force option to remove the
> offline brick
>
> Tier command failed
>
>
>
> ‘force’ results in Usage:
>
> # gluster volume tier labgreenbin detach start force
>
> Usage:
>
> volume tier <VOLNAME> status
> volume tier <VOLNAME> start [force]
> volume tier <VOLNAME> stop
> volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force]
> volume tier <VOLNAME> detach <start|stop|commit|force>
>
>
>
> So trying to remove the brick:
>
> # gluster v remove-brick labgreenbin replica 2 labgfs51:/gfs/p1-tier/mount
> force
>
> Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
>
> volume remove-brick commit force: failed: Removing brick from a Tier volume
> is not allowed
>
>
>
> Succeeded removing the tier with:
>
> # gluster volume tier labgreenbin detach force
>
>
>
> but what does that mean? Will the content of tier get lost?
>
>
>
> How to solve this situation?
>
> /Curt
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Regards,
Hari Gowtham.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] tiering

2018-03-04 Thread Curt Lestrup
Hi,

Have a glusterfs 3.10.10 (tried 3.12.6 as well) volume on Ubuntu 16.04 with a
3-SSD tier where one SSD is bad.

Status of volume: labgreenbin
Gluster process                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick labgfs81:/gfs/p1-tier/mount           49156     0          Y       4217
Brick labgfs51:/gfs/p1-tier/mount           N/A       N/A        N       N/A
Brick labgfs11:/gfs/p1-tier/mount           49152     0          Y       643
Cold Bricks:
Brick labgfs11:/gfs/p1/mount                49153     0          Y       312
Brick labgfs51:/gfs/p1/mount                49153     0          Y       295
Brick labgfs81:/gfs/p1/mount                49153     0          Y       307

I cannot find a command to replace the SSD, so instead I am trying to detach the tier, but:

# gluster volume tier labgreenbin detach start
volume tier detach start: failed: Pre Validation failed on labgfs51. Found 
stopped brick labgfs51:/gfs/p1-tier/mount. Use force option to remove the 
offline brick
Tier command failed

‘force’ results in Usage:
# gluster volume tier labgreenbin detach start force
Usage:
volume tier <VOLNAME> status
volume tier <VOLNAME> start [force]
volume tier <VOLNAME> stop
volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force]
volume tier <VOLNAME> detach <start|stop|commit|force>

So trying to remove the brick:
# gluster v remove-brick labgreenbin replica 2 labgfs51:/gfs/p1-tier/mount force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Removing brick from a Tier volume is 
not allowed

Succeeded removing the tier with:
# gluster volume tier labgreenbin detach force

but what does that mean? Will the content of tier get lost?

How to solve this situation?
/Curt

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Tiering Volumes

2018-02-11 Thread Hari Gowtham
Hi,

You can convert a normal volume into a tiered volume in Gluster. Once a
volume (for example, volume1) is converted into a tiered volume, it will
have two tiers (hot and cold).
So naming the volumes "hot tier" and "cold tier" doesn't make much sense.

About specifying the tiers: the volume you currently have will become the
cold tier. So if you want the cold tier to be replicated, create a replica
volume and then attach your hot tier (of whichever volume type you need) to it.

As a general suggestion for a tiered volume, the cold tier is usually an EC
(disperse) volume and the hot tier a replica volume.
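
As a rough example (volume name and brick paths are assumptions), attaching
a replica-3 SSD hot tier to an existing volume would look like this; the
existing volume becomes the cold tier and the newly attached bricks form
the hot tier:

  gluster volume tier volume1 attach replica 3 \
      server1:/bricks/ssd/hot server2:/bricks/ssd/hot server3:/bricks/ssd/hot

  # To undo it later, migrate the data off the hot tier and detach:
  #   gluster volume tier volume1 detach start
  #   gluster volume tier volume1 detach commit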

On Sat, Feb 10, 2018 at 12:23 AM, David Brown  wrote:
> Hello everyone.
> I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier"
> volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the
> tiers for each volume?
>
> I will be adding 2 more HDDs to each server. I would then like to change
> from a Replicate to Distributed-Replicated. Not sure if that makes a
> difference in the tiering setup.
>
> [root@Glus1 ~]# gluster volume info
>
> Volume Name: ColdTier
> Type: Replicate
> Volume ID: 1647487b-c05a-4cf7-81a7-08102ae348b6
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: Glus1:/data/glusterfs/ColdTier/brick1
> Brick2: Glus2:/data/glusterfs/ColdTier/brick2
> Brick3: Glus3:/data/glusterfs/ColdTier/brick3
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
>
> Volume Name: HotTier
> Type: Replicate
> Volume ID: 6294035d-a199-4574-be11-d48ab7c4b33c
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: Glus1:/data/glusterfs/HotTier/brick1
> Brick2: Glus2:/data/glusterfs/HotTier/brick2
> Brick3: Glus3:/data/glusterfs/HotTier/brick3
> Options Reconfigured:
> auth.allow: 10.0.1.*
> transport.address-family: inet
> nfs.disable: on
> performance.client-io-threads: off
> [root@Glus1 ~]#
>
>
> Thank you all for your help!
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users



-- 
Regards,
Hari Gowtham.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Tiering Volumes

2018-02-10 Thread David Brown
Hello everyone.
I have a new GlusterFS setup with 3 servers and 2 volumes. The "HotTier"
volume uses NVMe and the "ColdTier" volume uses HDDs. How do I specify the
tiers for each volume?

I will be adding 2 more HDDs to each server. I would then like to change
from a Replicate to Distributed-Replicated. Not sure if that makes a
difference in the tiering setup.

[root@Glus1 ~]# gluster volume info

Volume Name: ColdTier
Type: Replicate
Volume ID: 1647487b-c05a-4cf7-81a7-08102ae348b6
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: Glus1:/data/glusterfs/ColdTier/brick1
Brick2: Glus2:/data/glusterfs/ColdTier/brick2
Brick3: Glus3:/data/glusterfs/ColdTier/brick3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Volume Name: HotTier
Type: Replicate
Volume ID: 6294035d-a199-4574-be11-d48ab7c4b33c
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: Glus1:/data/glusterfs/HotTier/brick1
Brick2: Glus2:/data/glusterfs/HotTier/brick2
Brick3: Glus3:/data/glusterfs/HotTier/brick3
Options Reconfigured:
auth.allow: 10.0.1.*
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@Glus1 ~]#


Thank you all for your help!
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread Mohammed Rafi K C
Yes, you are correct. On a sharded volume, promotion and demotion between
the hot and cold tiers would be based on the sharded chunks.

I'm stressing the point Krutika mentioned in her mail: we haven't tested
this use case in depth.


Regards
Rafi KC

On 09/06/2016 06:38 PM, Krutika Dhananjay wrote:
> Theoretically whatever you said is correct (at least from shard's
> perspective).
> Adding Rafi who's worked on tiering to know if he thinks otherwise.
>
> It must be mentioned that sharding + tiering hasn't been tested as
> such till now by us at least.
>
> Did you try it? If so, what was your experience?
>
> -Krutika 
>
> On Tue, Sep 6, 2016 at 5:59 PM, Gandalf Corvotempesta wrote:
>
> Anybody?
>
>
> On 05 Sep 2016 at 22:19, "Gandalf Corvotempesta" wrote:
>
> Is tiering with sharding usefull with VM workload?
> Let's assume a storage with tiering and sharding enabled, used for
> hosting VM images.
> Each shard is subject to tiering, thus the most frequent part
> of the
> VM would be cached on the SSD, allowing better performance.
>
> Is this correct?
>
> To put it simple, very simple, let's assume a webserver VM,
> with the
> following directory structure:
>
> /home/user1/public_html
> /home/user2/public_html
>
> both are stored on 2 different shards (i'm semplyfing).
> /home/user1/public_html has much more visits than user2.
>
> Would that shard cached on hot tier allowing faster access by
> the webserver?
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org 
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
>
>

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread David Gossage
On Tue, Sep 6, 2016 at 7:29 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> Anybody?
>
>
While I have not tested it myself, in the two email threads I have seen from
users trying it, performance has been worse rather than improved. Perhaps
those using it successfully are just quiet and haven't responded when others
had issues.



On 05 Sep 2016 at 22:19, "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com> wrote:
>
>> Is tiering with sharding useful with VM workload?
>> Let's assume a storage with tiering and sharding enabled, used for
>> hosting VM images.
>> Each shard is subject to tiering, thus the most frequent part of the
>> VM would be cached on the SSD, allowing better performance.
>>
>> Is this correct?
>>
>> To put it simple, very simple, let's assume a webserver VM, with the
>> following directory structure:
>>
>> /home/user1/public_html
>> /home/user2/public_html
>>
>> both are stored on 2 different shards (I'm simplifying).
>> /home/user1/public_html has much more visits than user2.
>>
>> Would that shard be cached on the hot tier, allowing faster access by the
>> webserver?
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread Krutika Dhananjay
Theoretically whatever you said is correct (at least from shard's
perspective).
Adding Rafi who's worked on tiering to know if he thinks otherwise.

It must be mentioned that sharding + tiering hasn't been tested as such
till now by us at least.

Did you try it? If so, what was your experience?

-Krutika

On Tue, Sep 6, 2016 at 5:59 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> Anybody?
>
> On 05 Sep 2016 at 22:19, "Gandalf Corvotempesta" <
> gandalf.corvotempe...@gmail.com> wrote:
>
>> Is tiering with sharding useful with VM workload?
>> Let's assume a storage with tiering and sharding enabled, used for
>> hosting VM images.
>> Each shard is subject to tiering, thus the most frequent part of the
>> VM would be cached on the SSD, allowing better performance.
>>
>> Is this correct?
>>
>> To put it simple, very simple, let's assume a webserver VM, with the
>> following directory structure:
>>
>> /home/user1/public_html
>> /home/user2/public_html
>>
>> both are stored on 2 different shards (I'm simplifying).
>> /home/user1/public_html has much more visits than user2.
>>
>> Would that shard be cached on the hot tier, allowing faster access by the
>> webserver?
>>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread Dan Lambright


- Original Message -
> From: "Gandalf Corvotempesta" <gandalf.corvotempe...@gmail.com>
> To: "gluster-users" <Gluster-users@gluster.org>
> Sent: Tuesday, September 6, 2016 8:29:06 AM
> Subject: Re: [Gluster-users] Tiering and sharding for VM workload
> 
> 
> 
> Anybody?

Paul Cruzner did some tests with sharding+tiering, I think the intent was to 
investigate the VM workload case. 

In general, at the moment, the larger the transfer size, the better chance 
tiering will be able to help you. Shards of VM images would (I suppose) be 
"large", so your idea may see benefit. The set of webpages/VM shards in your 
example would have to stay relatively stable over time and fit on the hot tier.
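
For reference, the behaviour Dan describes can be tuned with tier volume
options along these lines. The option names below are from memory of the
3.7-era tier feature and the volume name "vmstore" is an assumption;
confirm what exists on your build with "gluster volume set help":

  # Promote/demote based on access frequency counters
  gluster volume set vmstore cluster.tier-mode cache

  # Start demoting when the hot tier is ~75% full; demote aggressively at ~90%
  gluster volume set vmstore cluster.watermark-low 75
  gluster volume set vmstore cluster.watermark-hi 90

  # How often (in seconds) the migration daemon promotes/demotes files
  gluster volume set vmstore cluster.tier-promote-frequency 120
  gluster volume set vmstore cluster.tier-demote-frequency 3600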

> 
> On 05 Sep 2016 at 22:19, "Gandalf Corvotempesta" <
> gandalf.corvotempe...@gmail.com> wrote:
> 
> 
> Is tiering with sharding useful with VM workload?
> Let's assume a storage with tiering and sharding enabled, used for
> hosting VM images.
> Each shard is subject to tiering, thus the most frequent part of the
> VM would be cached on the SSD, allowing better performance.
> 
> Is this correct?
> 
> To put it simple, very simple, let's assume a webserver VM, with the
> following directory structure:
> 
> /home/user1/public_html
> /home/user2/public_html
> 
> both are stored on 2 different shards (I'm simplifying).
> /home/user1/public_html has much more visits than user2.
> 
> Would that shard be cached on the hot tier, allowing faster access by the webserver?
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Tiering and sharding for VM workload

2016-09-06 Thread Gandalf Corvotempesta
Anybody?

On 05 Sep 2016 at 22:19, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:

> Is tiering with sharding useful with VM workload?
> Let's assume a storage with tiering and sharding enabled, used for
> hosting VM images.
> Each shard is subject to tiering, thus the most frequent part of the
> VM would be cached on the SSD, allowing better performance.
>
> Is this correct?
>
> To put it simple, very simple, let's assume a webserver VM, with the
> following directory structure:
>
> /home/user1/public_html
> /home/user2/public_html
>
> both are stored on 2 different shards (I'm simplifying).
> /home/user1/public_html has much more visits than user2.
>
> Would that shard be cached on the hot tier, allowing faster access by the
> webserver?
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Tiering and Sharding?

2016-03-28 Thread Mohammed Rafi K C


On 03/26/2016 06:01 AM, Alan Millar wrote:
> How does tiering interact with sharding?   
>
>
> If a volume is both sharded and tiered, and a large file is split into 
> shards, will the entire logical file be moved between hot and cold tiers?  Or 
> will only individual shards be migrated?  

Only shards will be migrated across the tiers. Meaning, if you are
accessing only one particular chunk of a file, then only that chunk is
eligible for migration.
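
For context, a minimal sketch of the sharding side (the volume name vol1 is
an assumption; the tier is attached separately as usual): once sharding is
enabled, each file is stored as chunks of the configured block size, and it
is those chunks that the tier migrates individually.

  # Enable sharding and choose the chunk size
  gluster volume set vol1 features.shard on
  gluster volume set vol1 features.shard-block-size 64MB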

By the way, if possible, I would like to know more about your
configuration and testing results.

Regards
Rafi KC

>
>
> I didn't see this covered in the documentation.  Thanks
>
> - Alan
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Tiering and Sharding?

2016-03-25 Thread Alan Millar
How does tiering interact with sharding?   


If a volume is both sharded and tiered, and a large file is split into shards, 
will the entire logical file be moved between hot and cold tiers?  Or will only 
individual shards be migrated?  


I didn't see this covered in the documentation.  Thanks

- Alan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Tiering

2015-12-09 Thread Lindsay Mathieson
I see that 3.7 has settings for tiering; from the wording I presume
hot/cold SSD tiering.


Is this beta yet? Testable? Are there any usage docs yet?

Thanks,

Lindsay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] tiering demo Thursday

2015-04-17 Thread Vijay Bellur

On 04/16/2015 07:40 PM, Dan Lambright wrote:

Hello folks,

Our hangout session has concluded [1], and I expect we will do another next 
month from the US which hopefully will have better interactivity.

In the meantime, below [2] is the gluster volume info display we are 
considering for tiered volumes. Let us know any feedback on how it looks.

[1]
goo.gl/auENCG

[2]
Proposal for gluster v info for the tier case:

Volume Name: t
Type: Tier
Volume ID: 320d4795-4eae-4d83-8e55-1681813c8549
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:

hot
Number of Bricks: 3 x 2 = 6
Type: Distribute Replicate
Brick1: gprfs018:/home/t6
Brick2: gprfs018:/home/t5

cold
Type: Distributed Replicate
Number of Bricks: 3 x 2 = 6
Brick3: gprfs018:/home/t1
Brick4: gprfs018:/home/t2
Brick5: gprfs018:/home/t3
Brick6: gprfs018:/home/t4



I like the overall idea. The number-of-bricks value in each tier seems to
be incorrect in the proposal. When we start supporting different transport
types across hot and cold tiers, we can display the transport type per tier
too.


-Vijay
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] tiering demo Thursday

2015-04-16 Thread Dan Lambright
Hello folks,

Our hangout session has concluded [1], and I expect we will do another next 
month from the US which hopefully will have better interactivity. 

In the meantime, below [2] is the gluster volume info display we are 
considering for tiered volumes. Let us know any feedback on how it looks.

[1]
goo.gl/auENCG

[2]
Proposal for gluster v info for the tier case:

Volume Name: t
Type: Tier
Volume ID: 320d4795-4eae-4d83-8e55-1681813c8549
Status: Created
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:

hot
Number of Bricks: 3 x 2 = 6
Type: Distribute Replicate
Brick1: gprfs018:/home/t6
Brick2: gprfs018:/home/t5

cold
Type: Distributed Replicate
Number of Bricks: 3 x 2 = 6
Brick3: gprfs018:/home/t1
Brick4: gprfs018:/home/t2
Brick5: gprfs018:/home/t3
Brick6: gprfs018:/home/t4

- Original Message -
 From: Dan Lambright dlamb...@redhat.com
 To: Gluster Devel gluster-de...@gluster.org
 Sent: Wednesday, April 15, 2015 12:09:27 PM
 Subject: tiering demo Thursday
 
 
 Hello folks,
 
 We are scheduling a Hangout[1] session Thursday 1:30PM UTC regarding the
 upcoming Gluster tiering feature in GlusterFS. This session would include
 a preview of the feature, implementation details and quick demo.
 
 Please plan to join the Hangout session and spread the word around.
 
 [1] goo.gl/auENCG
 
 Dan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] tiering demo Thursday

2015-04-14 Thread Dan Lambright
Hello folks,

We are scheduling a Hangout[1] session Thursday 1:30PM UTC regarding the 
upcoming Tiering feature in GlusterFS. This session would include a preview 
of the feature, implementation details and quick demo.

Please plan to join the Hangout session and spread the word around.

[1] goo.gl/auENCG

Dan
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users