Re: [Gluster-devel] Volume usage mismatch problem

2016-02-02 Thread Manikandan Selvaganesh
Hi,

The size was not immediately updated because the accounting done by the
marker translator happens asynchronously, so it takes some time for the size
to be updated. Also, you mentioned that the size got updated after an ls
command. The problem could be that an fsync (flush) failed at that time, so
the size was not updated properly.
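
As a quick check (a sketch based on your transcript below, assuming the
parent gfid of the volume root is 00000000-0000-0000-0000-000000000001), the
size accounted so far is encoded in the leading bytes of the contri xattr on
the brick, so the 500MB file should eventually show 0x1f400000
(524288000 bytes) once marker has caught up:

root@SERVER: # printf '%d\n' 0x1f400000    # decode the hex byte count
524288000
root@SERVER: # getfattr -n trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1 -e hex /tpool/tvol1/500m.1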

--
Thanks & Regards,
Manikandan Selvaganesh.

- Original Message -
From: "박성식" <mulg...@gmail.com>
To: "Manikandan Selvaganesh" <mselv...@redhat.com>
Sent: Tuesday, February 2, 2016 1:22:54 PM
Subject: Re: [Gluster-devel] Volume usage mismatch problem

Thank you for the answer.

After writing a 500MB file, the volume usage is not updated.

root@CLIENT: # dd if=/dev/urandom of=/mnt/500m.1 bs=1048576 count=500
root@CLIENT: # df -k /mnt
38.38.38.101:/tvol1  104857600  491776 104365824   1% /mnt

root@SERVER: # getfattr -d -m . -e hex /tpool/tvol1/500m.1
getfattr: Removing leading '/' from absolute path names
# file: tpool/tvol1/500m.1
trusted.bit-rot.version=0x020056af8f54b5e6
trusted.gfid=0xfb1159d18ff4476bac1b9e2f49ffe348
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x1b510e01
trusted.pgfid.00000000-0000-0000-0000-000000000001=0x00000001

0x1b510e01 <- NOT 500MB (the information is updated after the 'ls' command
below is executed.)

root@CLIENT: # ls -al /mnt
total 512006
drwxr-xr-x  4 root root 9 Feb  2 07:43 .
drwxr-xr-x 22 root root  4096 Jan 28 07:48 ..
-rw-r--r--  1 root root 524288000 Feb  2 07:44 500m.1
drwxr-xr-x  3 root root 6 Feb  2 07:42 .trashcan

root@CLIENT: # df -k /mnt
38.38.38.101:/tvol1  104857600  512000 104345600   1% /mnt

root@SERVER: # getfattr -d -m . -e hex /tpool/tvol1/500m.1
getfattr: Removing leading '/' from absolute path names
# file: tpool/tvol1/500m.1
trusted.bit-rot.version=0x020056afdf7a000220b0
trusted.gfid=0x1f0868d8f78541f8bca26eb83138c991
trusted.glusterfs.quota.00000000-0000-0000-0000-000000000001.contri.1=0x1f41  <- 500MB OK
trusted.pgfid.00000000-0000-0000-0000-000000000001=0x00000001
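
If it is the lookup triggered by ls that refreshes the accounting, a crawl
from the mount should force it for a whole tree (a hedged suggestion, not
verified in this thread):

root@CLIENT: # find /mnt -exec stat {} \; > /dev/null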

Is this a compatibility issue between ZFS and GlusterFS? (With XFS,
everything is OK.)

Thanks.

-- 
Sungsik, Park/corazy [박성식, 朴成植]
Software Development Engineer
Email: mulg...@gmail.com


Re: [Gluster-devel] Volume usage mismatch problem

2016-02-01 Thread Manikandan Selvaganesh
Hi,

Please find my comments inline.

> Hi all
>
> A 'Quota' problem occurs with Gluster 3.7.6 in the following test case.
>
> (It does not occur if the volume quota is not enabled.)
>
> A volume usage mismatch occurs when using glusterfs-3.7.6 on ZFS.
>
> Can you help with the following problem?
>
>
> 1. zfs disk pool information
>
> root@server-1:~# zpool status
> pool: pool
> state: ONLINE
> scan: none requested
> config:
>
> NAME STATE READ WRITE CKSUM
> pool ONLINE 0 0 0
> pci-0000:00:10.0-scsi-0:0:1:0 ONLINE 0 0 0
> pci-0000:00:10.0-scsi-0:0:2:0 ONLINE 0 0 0
> pci-0000:00:10.0-scsi-0:0:3:0 ONLINE 0 0 0
>
> errors: No known data errors
>
> root@server-2:~# zpool status
> pool: pool
> state: ONLINE
> scan: none requested
> config:
>
> NAME STATE READ WRITE CKSUM
> pool ONLINE 0 0 0
> pci-0000:00:10.0-scsi-0:0:1:0 ONLINE 0 0 0
> pci-0000:00:10.0-scsi-0:0:2:0 ONLINE 0 0 0
> pci-0000:00:10.0-scsi-0:0:3:0 ONLINE 0 0 0
>
> errors: No known data errors
>
> 2. zfs volume list information
>
> root@server-1:~# zfs list
> NAME USED AVAIL REFER MOUNTPOINT
> pool 179K 11.3T 19K /pool
> pool/tvol1 110K 11.3T 110K /pool/tvol1
>
> root@server-2:~# zfs list
> NAME USED AVAIL REFER MOUNTPOINT
> pool 179K 11.3T 19K /pool
> pool/tvol1 110K 11.3T 110K /pool/tvol1
>
> 3. gluster volume information
>
> root@server-1:~# gluster volume info
> Volume Name: tvol1
> Type: Distribute
> Volume ID: 02d4c9de-e05f-4177-9e86-3b9b2195d7ab
> Status: Started
> Number of Bricks: 2
> Transport-type: tcp
> Bricks:
> Brick1: 38.38.38.101:/pool/tvol1
> Brick2: 38.38.38.102:/pool/tvol1
> Options Reconfigured:
> features.quota-deem-statfs: on
> features.inode-quota: on
> features.quota: on
> performance.readdir-ahead: on

In the 'gluster volume info' output you can see the feature quota-deem-statfs,
which is turned 'on' by default. When it is on and you do a df, the usage is
reported with the quota limits taken into consideration rather than the
actual disk space.
To put it simply: even if the disk space on /pool is 20GB, if you have set a
quota limit of 10GB on /pool, then a quota list or a df on /pool will show
the available space as 10GB, because quota-deem-statfs is turned on. If you
want df to show the actual disk space without taking quota limits into
consideration, turn deem-statfs off with
'gluster volume set VOLNAME quota-deem-statfs off'
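
For example (a sketch using the tvol1 volume from this thread), after turning
it off, a df on the client reports the capacity of the backing filesystem
(~11.3T here) instead of the 100GB quota limit:

root@server-1:~# gluster volume set tvol1 quota-deem-statfs off
root@client:~# df -k /mnt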

> 4. gluster volume quota list
>
> root@server-1:~# gluster volume quota tvol1 list
> Path  Hard-limit   Soft-limit    Used     Available  Soft-limit exceeded?  Hard-limit exceeded?
> ------------------------------------------------------------------------------------------------
> /     100.0GB      80%(80.0GB)   0Bytes   100.0GB    No                    No
>
> 5. brick disk usage
>
> root@server-1:~# df -k
> Filesystem 1K-blocks Used Available Use% Mounted on
> pool 12092178176 0 12092178176 0% /pool
> pool/tvol1 12092178304 128 12092178176 1% /pool/tvol1
> localhost:tvol1 104857600 0 104857600 0% /run/gluster/tvol1

pool/tvol1 is your brick, and it shows 128 as used space. That is because
the backend contains internal directories such as .glusterfs, which account
for the 128 of used space. No files have been created from the mountpoint,
localhost:tvol1, hence it shows 0. Also, ZFS and the GlusterFS quota may
account for space somewhat differently.
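
You can confirm where that baseline usage comes from by measuring the
internal .glusterfs directory on the brick itself (a sketch using the brick
path from this thread):

root@server-1:~# du -sk /pool/tvol1/.glusterfs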

>
> root@server-2:~# df -k
> Filesystem 1K-blocks Used Available Use% Mounted on
> pool 12092178176 0 12092178176 0% /pool
> pool/tvol1 12092178304 128 12092178176 1% /pool/tvol1
>
> 6. client mount / disk usage
>
> root@client:~# mount -t glusterfs 38.38.38.101:/tvol1 /mnt
> root@client:~# df -k
> Filesystem 1K-blocks Used Available Use% Mounted on
> 38.38.38.101:/tvol1 104857600 0 104857600 0% /mnt

As you can see, no files have been created from the mountpoint, and quota
accounts only for files created from the mountpoint, hence the used space
is 0.
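
To illustrate the distinction (a hypothetical test, not from the original
thread): a file written directly to the brick path bypasses the marker
translator entirely and is never accounted by quota, while the same write
through the mount is accounted once marker catches up. Writing directly to a
brick is shown only for illustration and should be avoided on a live volume:

root@server-1:~# dd if=/dev/zero of=/pool/tvol1/direct.bin bs=1048576 count=10   # not accounted
root@client:~# dd if=/dev/zero of=/mnt/via-mount.bin bs=1048576 count=10         # accounted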

> 7. Write a file using the dd command
>
> root@client:~# dd if=/dev/urandom of=/mnt/10m bs=1048576 count=10
> 10+0 records in
> 10+0 records out
> 10485760 bytes (10 MB) copied, 0.751261 s, 14.0 MB/s
>
> 8. client disk usage
>
> root@client:~# df -k
> Filesystem 1K-blocks Used Available Use% Mounted on
> 38.38.38.101:/tvol1 104857600 0 104857600 0% /mnt

You have written a file of size 10MB. The accounting done by the marker
translator happens asynchronously and does not take effect immediately, so
you can expect a delay before the usage is updated.
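
If you want to watch the accounting catch up (a sketch; the mount path is
from the transcript), re-run df until the Used column changes:

root@client:~# watch -n 1 df -k /mnt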

> 9. brick disk usage
>
> root@server-1:~# df -k
> Filesystem 1K-blocks Used Available Use% Mounted on
> pool 12092167936 0 12092167936 0% /pool
> pool/tvol1 12092178304 10368 12092167936 1% /pool/tvol1
> localhost:tvol1 104857600 0 104857600 0% /run/gluster/tvol1
>
> root@server-2:~# df -k
> Filesystem 1K-blocks Used Available Use% Mounted on
> pool 12092178176 0 12092178176 0% /pool
> pool/tvol1 12092178304 128 12092178176 1% /pool/tvol1
>
> 10. zfs volume list information
>
> root@server-1:~# zfs list
> NAME USED AVAIL REFER MOUNTPOINT
> pool 10.2M 11.3T 19K /pool
> 

[Gluster-devel] Volume usage mismatch problem

2016-01-29 Thread 박성식
Hi all

A 'Quota' problem occurs with Gluster 3.7.6 in the following test case.

(It does not occur if the volume quota is not enabled.)

A volume usage mismatch occurs when using glusterfs-3.7.6 on ZFS.

Can you help with the following problem?


1. zfs disk pool information

root@server-1:~# zpool status
  pool: pool
 state: ONLINE
  scan: none requested
config:

NAME STATE READ WRITE CKSUM
pool ONLINE   0 0 0
 pci-0000:00:10.0-scsi-0:0:1:0  ONLINE   0 0 0
 pci-0000:00:10.0-scsi-0:0:2:0  ONLINE   0 0 0
 pci-0000:00:10.0-scsi-0:0:3:0  ONLINE   0 0 0

errors: No known data errors

root@server-2:~# zpool status
  pool: pool
 state: ONLINE
  scan: none requested
config:

NAME STATE READ WRITE CKSUM
pool ONLINE   0 0 0
 pci-0000:00:10.0-scsi-0:0:1:0  ONLINE   0 0 0
 pci-0000:00:10.0-scsi-0:0:2:0  ONLINE   0 0 0
 pci-0000:00:10.0-scsi-0:0:3:0  ONLINE   0 0 0

errors: No known data errors

2. zfs volume list information

root@server-1:~# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
pool         179K  11.3T    19K  /pool
pool/tvol1   110K  11.3T   110K  /pool/tvol1

root@server-2:~# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
pool         179K  11.3T    19K  /pool
pool/tvol1   110K  11.3T   110K  /pool/tvol1

3. gluster volume information

root@server-1:~# gluster volume info

Volume Name: tvol1
Type: Distribute
Volume ID: 02d4c9de-e05f-4177-9e86-3b9b2195d7ab
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 38.38.38.101:/pool/tvol1
Brick2: 38.38.38.102:/pool/tvol1
Options Reconfigured:
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

4. gluster volume quota list

root@server-1:~# gluster volume quota tvol1 list
Path  Hard-limit   Soft-limit    Used     Available  Soft-limit exceeded?  Hard-limit exceeded?
------------------------------------------------------------------------------------------------
/     100.0GB      80%(80.0GB)   0Bytes   100.0GB    No                    No

5. brick disk usage

root@server-1:~# df -k
Filesystem         1K-blocks   Used    Available  Use%  Mounted on
pool             12092178176      0  12092178176    0%  /pool
pool/tvol1       12092178304    128  12092178176    1%  /pool/tvol1
localhost:tvol1    104857600      0    104857600    0%  /run/gluster/tvol1

root@server-2:~# df -k
Filesystem      1K-blocks   Used    Available  Use%  Mounted on
pool          12092178176      0  12092178176    0%  /pool
pool/tvol1    12092178304    128  12092178176    1%  /pool/tvol1

6. client mount / disk usage

root@client:~# mount -t glusterfs 38.38.38.101:/tvol1 /mnt
root@client:~# df -k
Filesystem            1K-blocks  Used  Available  Use%  Mounted on
38.38.38.101:/tvol1  104857600   0 104857600   0% /mnt

7. Write a file using the dd command

root@client:~# dd if=/dev/urandom of=/mnt/10m bs=1048576 count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.751261 s, 14.0 MB/s

8. client disk usage

root@client:~# df -k
Filesystem            1K-blocks  Used  Available  Use%  Mounted on
38.38.38.101:/tvol1  104857600   0 104857600   0% /mnt

9. brick disk usage

root@server-1:~# df -k
Filesystem         1K-blocks   Used    Available  Use%  Mounted on
pool             12092167936      0  12092167936    0%  /pool
pool/tvol1       12092178304  10368  12092167936    1%  /pool/tvol1
localhost:tvol1    104857600      0    104857600    0%  /run/gluster/tvol1

root@server-2:~# df -k
Filesystem      1K-blocks   Used    Available  Use%  Mounted on
pool          12092178176      0  12092178176    0%  /pool
pool/tvol1    12092178304    128  12092178176    1%  /pool/tvol1

10. zfs volume list information

root@server-1:~# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
pool        10.2M  11.3T    19K  /pool
pool/tvol1  10.1M  11.3T  10.1M  /pool/tvol1

root@server-2:~# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
pool         188K  11.3T    19K  /pool
pool/tvol1   110K  11.3T   110K  /pool/tvol1

11. gluster volume quota list

root@server-1:~# gluster volume quota tvol1 list
Path  Hard-limit   Soft-limit    Used       Available  Soft-limit exceeded?  Hard-limit exceeded?
--------------------------------------------------------------------------------------------------
/     100.0GB      80%(80.0GB)   512Bytes   100.0GB    No                    No

12. File listing from the client

root@client:~# ls -al /mnt
total 10246
drwxr-xr-x  4 root root 9 Jan 30 02:23 .