Re: [Gluster-devel] Non Shared Persistent Gluster Storage with Kubernetes

2016-07-05 Thread Shyam

On 07/01/2016 01:45 AM, B.K.Raghuram wrote:

I have not gone through this implementation nor the new iscsi
implementation being worked on for 3.9 but I thought I'd share the
design behind a distributed iscsi implementation that we'd worked on
some time back based on the istgt code with a libgfapi hook.

The implementation used the idea of one file representing one
block (of a chosen size), thus allowing us to use gluster as the backend
to store these files while presenting a single block device of possibly
infinite size. We used a fixed file naming convention based on the block
number, which allows the system to determine which file(s) need to be
operated on for the requested byte offset. This gave us the advantage of
automatically accessing all of gluster's file-based functionality
underneath to provide a fully distributed iscsi implementation.

Would this be similar to the new iscsi implementation that's being worked
on for 3.9?




Ultimately the idea would be to use sharding, as a part of the gluster 
volume graph, to distribute (or rather shard) the blocks, instead of 
having the disk image on one distribute subvolume, and hence scale disk 
sizes to the size of the cluster. Further, sharding should work well 
here, as this is a single-client access case (or are we past that 
hurdle already?).


What this achieves is similar to the iSCSI implementation that you talk 
about, but with gluster doing the block splitting and hence the 
distribution, rather than the iSCSI target (istgt) doing the same.


<I did a cursory check on the blog post, but did not find a shard 
reference, so maybe others could pitch in here, if they know about the 
direction>


Further, in your original proposal, how do you maintain device 
properties, such as the size of the device and used/free blocks? I ask 
about used and free because computing them is an overhead if each block 
is maintained as a separate file, and it is difficult to achieve 
consistency between the size and block updates (as they are separate 
operations). Just curious.
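A naive sketch of that bookkeeping (illustrative Python; the `meta.json` file and `blk_` prefix are hypothetical) shows the concern: the device size lives in one file while used space must be derived by scanning every block file, and the two updates are separate, non-atomic operations:

```python
import json
import os


def write_device_meta(devdir, size_bytes, block_size):
    # the device size is a separate metadata update...
    with open(os.path.join(devdir, "meta.json"), "w") as f:
        json.dump({"size": size_bytes, "block_size": block_size}, f)


def used_bytes(devdir):
    # ...while "used" costs a stat() per allocated block file and can
    # race with concurrent block creation
    return sum(os.path.getsize(os.path.join(devdir, name))
               for name in os.listdir(devdir)
               if name.startswith("blk_"))
```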

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Question on merging zfs snapshot support into the mainline glusterfs

2016-07-05 Thread sriram
Hi,
 
I tried to go through the patch to find the reason behind the question
posted, but couldn't get any concrete details about it.
 
While going through the mail chain, I saw mentions of a generic
snapshot interface. I'd be interested in doing the changes if you
could fill me in with some initial information. Thanks.
 
Sriram
 
 
On Mon, Jul 4, 2016, at 01:59 PM, B.K.Raghuram wrote:
> Hi Rajesh,
> I did not want to respond to the question that you'd posed on the zfs
> snapshot code (about the volume backend backup) as I am not too
> familiar with the code, and the person who coded it is not with us
> anymore. This was done in a bit of a hurry, so it could be that it was
> just kept for later...
>
> However, Sriram, who is cc'd on this email, has been helping us by
> starting to look at the gluster code and has expressed an interest in
> taking on the zfs code changes. So he can probably dig out an answer
> to your question. Sriram, Rajesh had a question on one of the zfs
> related patches -
> (https://github.com/fractalio/glusterfs/commit/39a163eca338b6da146f72f380237abd4c671db2#commitcomment-18109851)
>
> Sriram is also interested in contributing to the process of creating a
> generic snapshot interface in the gluster code which you and Pranith
> mentioned above. If this is ok with you all, could you fill him in on
> what your thoughts are on that and how he could get started?
> Thanks!
> -Ram
>
> On Wed, Jun 22, 2016 at 11:45 AM, Rajesh Joseph
>  wrote:
>>
>>
>> On Tue, Jun 21, 2016 at 4:24 PM, Pranith Kumar Karampuri
>>  wrote:
>>> hi,
>>>   Is there a plan to come up with an interface for snapshot
>>>   functionality? For example, in handling different types of
>>>   sockets in gluster all we need to do is to specify which
>>>   interface we want to use and ib,network-socket,unix-domain
>>>   sockets all implement the interface. The code doesn't have to
>>>   assume anything about the underlying socket type. Do you
>>>   think it is a worthwhile effort to separate out the snapshot
>>>   interface from the code that uses it? I see quite a few
>>>   if (strcmp ("zfs", fstype)) checks which could all be removed
>>>   if we did this. Supporting btrfs snapshots in the future would
>>>   be a breeze as well; all we would need to do is implement the
>>>   snapshot interface using btrfs snapshot commands. I am not
>>>   talking about this patch per se. Just wanted to seek your
>>>   inputs about future plans for ease of maintaining the feature.
>>
>>
>> As I said in my previous mail, this is planned and we will be doing
>> it, but due to other priorities it has not been taken up yet.
>>
>>
>>>
>>>
>>> On Tue, Jun 21, 2016 at 11:46 AM, Atin Mukherjee
>>>  wrote:


 On 06/21/2016 11:41 AM, Rajesh Joseph wrote:
  > What kind of locking issues do you see? If you can provide some
  > more information, I may be able to help you.

 That's related to the stale lock issues on GlusterD which are there in
 3.6.1, since the fixes landed in the branch post 3.6.1. I have
 already provided the workaround/way to fix them [1].

  
 [1] http://www.gluster.org/pipermail/gluster-users/2016-June/thread.html#26995

  ~Atin


>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>
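The transport-style interface discussed in this thread could be sketched as follows (an illustrative Python sketch; gluster's actual code is C, and the backend commands shown are simplified assumptions):

```python
# Hypothetical pluggable snapshot interface, in the spirit of gluster's
# socket transports: callers never check fstype with strcmp(); each
# backend implements the same operations behind one dispatch point.

class SnapshotBackend:
    """Interface every filesystem-specific backend implements."""
    def create(self, brick_path, snap_name):
        raise NotImplementedError
    def restore(self, brick_path, snap_name):
        raise NotImplementedError


class LvmSnapshots(SnapshotBackend):
    def create(self, brick_path, snap_name):
        # illustrative command only
        return ["lvcreate", "-s", "-n", snap_name, brick_path]
    def restore(self, brick_path, snap_name):
        return ["lvconvert", "--merge", "%s/%s" % (brick_path, snap_name)]


class ZfsSnapshots(SnapshotBackend):
    def create(self, brick_path, snap_name):
        return ["zfs", "snapshot", "%s@%s" % (brick_path, snap_name)]
    def restore(self, brick_path, snap_name):
        return ["zfs", "rollback", "%s@%s" % (brick_path, snap_name)]


BACKENDS = {"lvm": LvmSnapshots, "zfs": ZfsSnapshots}


def backend_for(fstype):
    # the single dispatch point: adding btrfs support would mean adding
    # one class and one table entry, with no scattered fstype checks
    return BACKENDS[fstype]()
```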
 

[Gluster-devel] Minutes from todays Gluster Bug Triage meeting

2016-07-05 Thread Saravanakumar Arumugam

Hi,

Thanks all who joined!

Next week at the same time (Tuesday 12:00 UTC) we will have another bug
triage meeting to catch the bugs that have not yet been handled by
developers and maintainers. We'll keep repeating this meeting as a
safety net so that bugs get initial attention and developers can
immediately start working on the issues that were reported.

Bug triaging (in general, no need to only do it during the meeting) is
intended to help developers, in the hope that developers can focus on
writing bug fixes instead of spending much of their valuable time
troubleshooting incorrectly or incompletely reported bugs.

More details about bug triaging can be found here:
http://gluster.readthedocs.io/en/latest/Contributors-Guide/Bug-Triage/

Meeting minutes below.

Thanks,
Saravana


#gluster-meeting: Gluster Bug Triage
Meeting started by Saravanakmr at 12:01:03 UTC (full logs).

Meeting summary

agenda: https://public.pad.fsfe.org/p/gluster-bug-triage 
(Saravanakmr, 12:01:13)


Roll call (Saravanakmr, 12:01:20)
Next week's meeting host (Saravanakmr, 12:04:08)
ACTION: skoduri will host July 12 meeting (Saravanakmr, 12:05:14)

Action Items (Saravanakmr, 12:06:13)
ndevos need to decide on how to provide/use debug builds 
(Saravanakmr, 12:06:36)
ACTION: ndevos need to decide on how to provide/use debug 
builds (Saravanakmr, 12:07:24)


ndevos to propose some test-cases for minimal libgfapi test 
(Saravanakmr, 12:07:42)
skoduri to remind the developers working on test-automation to 
triage their own bugs (Saravanakmr, 12:12:29)
http://nongnu.13855.n7.nabble.com/Reminder-Triaging-and-Updating-Bug-status-td213287.html 
(Saravanakmr, 12:15:06)


jiffin will try to add an error for bug ownership to check-bugs.py 
(Saravanakmr, 12:16:18)
ACTION: jiffin will try to add an error for bug ownership to 
check-bugs.py (Saravanakmr, 12:17:02)


Group Triage (Saravanakmr, 12:17:34)
bugs to triage have been added to 
https://public.pad.fsfe.org/p/gluster-bugs-to-triage (Saravanakmr, 12:17:44)


Open Floor (Saravanakmr, 12:23:23)



Meeting ended at 12:24:58 UTC (full logs).

Action items

skoduri will host July 12 meeting
ndevos need to decide on how to provide/use debug builds
jiffin will try to add an error for bug ownership to check-bugs.py



Action items, by person

ndevos
ndevos need to decide on how to provide/use debug builds
skoduri
skoduri will host July 12 meeting



People present (lines said)

Saravanakmr (55)
ndevos (17)
skoduri (14)
kkeithley (10)
zodbot (3)
hgowtham (2)



[Gluster-devel] REMINDER: Gluster Bug Triage starts at 12:00 UTC

2016-07-05 Thread Saravanakumar Arumugam

Hi all,

The Gluster Bug Triage Meeting will start in approx. 1 hour 30 minutes from now.
Please join if you are interested in getting a decent status of bugs
that have recently been filed and that maintainers/developers have not
picked up yet.

The meeting also includes a little bit about testing and other misc
stuff related to bugs.

See you there!

Thanks,
Saravanakumar

Agenda: https://public.pad.fsfe.org/p/gluster-bug-triage
Location: #gluster-meeting on Freenode IRC
- https://webchat.freenode.net/?channels=gluster-meeting
Date: Tuesday July 5, 2016
Time: 12:00 UTC, 13:00 CET, 7:00 EST (to get your local time, run: date -d "12:00 UTC")
Chair: Saravanakumar


1. Agenda
  -  Roll Call

2. Action Items

1. ndevos need to decide on how to provide/use debug builds

2. ndevos to propose some test-cases for minimal libgfapi test

3. Manikandan and gem to wait until Nigel gives access to test the scripts

4. skoduri to remind the developers working on test-automation to
   triage their own bugs

5. jiffin will try to add an error for bug ownership to check-bugs.py

3. Group Triage

4. Open Floor


[Gluster-devel] [PATCH] block/gluster: add support to choose libgfapi logfile

2016-07-05 Thread Prasanna Kumar Kalever
Currently all the libgfapi logs default to '/dev/stderr', as it was hardcoded
in the call to the glfs logging API; if the debug level is set to DEBUG/TRACE,
gfapi logs will be huge and fill/overflow the console view.

This patch provides a command-line option to specify a log file path, which
helps in logging to the specified file and also helps in persisting the
gfapi logs.

Usage: -drive file=gluster://hostname/volname/image.qcow2,file.debug=9,\
 file.logfile=/var/log/qemu/qemu-gfapi.log

Signed-off-by: Prasanna Kumar Kalever 
---
 block/gluster.c | 31 +--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 16f7778..6875429 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -24,6 +24,7 @@ typedef struct GlusterAIOCB {
 typedef struct BDRVGlusterState {
 struct glfs *glfs;
 struct glfs_fd *fd;
+const char *logfile;
 bool supports_seek_data;
 int debug_level;
 } BDRVGlusterState;
@@ -34,6 +35,7 @@ typedef struct GlusterConf {
 char *volname;
 char *image;
 char *transport;
+const char *logfile;
 int debug_level;
 } GlusterConf;
 
@@ -181,7 +183,8 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
 ret = qemu_gluster_parseuri(gconf, filename);
 if (ret < 0) {
 error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
-   "volname/image[?socket=...]");
+   "volname/image[?socket=...][,file.debug=N]"
+   "[,file.logfile=/path/filename.log]");
 errno = -ret;
 goto out;
 }
@@ -197,7 +200,7 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
 goto out;
 }
 
-ret = glfs_set_logging(glfs, "-", gconf->debug_level);
+ret = glfs_set_logging(glfs, gconf->logfile, gconf->debug_level);
 if (ret < 0) {
 goto out;
 }
@@ -256,6 +259,8 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
 }
 
 #define GLUSTER_OPT_FILENAME "filename"
+#define GLUSTER_OPT_LOGFILE "logfile"
+#define GLUSTER_LOGFILE_DEFAULT "-" /* '-' handled in libgfapi as /dev/stderr */
 #define GLUSTER_OPT_DEBUG "debug"
 #define GLUSTER_DEBUG_DEFAULT 4
 #define GLUSTER_DEBUG_MAX 9
@@ -271,6 +276,11 @@ static QemuOptsList runtime_opts = {
 .help = "URL to the gluster image",
 },
 {
+.name = GLUSTER_OPT_LOGFILE,
+.type = QEMU_OPT_STRING,
+.help = "Logfile path of libgfapi",
+},
+{
 .name = GLUSTER_OPT_DEBUG,
 .type = QEMU_OPT_NUMBER,
 .help = "Gluster log level, valid range is 0-9",
@@ -339,6 +349,12 @@ static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
 
 filename = qemu_opt_get(opts, GLUSTER_OPT_FILENAME);
 
+s->logfile = qemu_opt_get(opts, GLUSTER_OPT_LOGFILE);
+if (!s->logfile) {
+s->logfile = GLUSTER_LOGFILE_DEFAULT;
+}
+gconf->logfile = s->logfile;
+
 s->debug_level = qemu_opt_get_number(opts, GLUSTER_OPT_DEBUG,
  GLUSTER_DEBUG_DEFAULT);
 if (s->debug_level < 0) {
@@ -422,6 +438,7 @@ static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
 
 gconf = g_new0(GlusterConf, 1);
 
+gconf->logfile = s->logfile;
 gconf->debug_level = s->debug_level;
 reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
 if (reop_s->glfs == NULL) {
@@ -556,6 +573,11 @@ static int qemu_gluster_create(const char *filename,
 char *tmp = NULL;
 GlusterConf *gconf = g_new0(GlusterConf, 1);
 
+gconf->logfile = qemu_opt_get_del(opts, GLUSTER_OPT_LOGFILE);
+if (!gconf->logfile) {
+gconf->logfile = GLUSTER_LOGFILE_DEFAULT;
+}
+
 gconf->debug_level = qemu_opt_get_number_del(opts, GLUSTER_OPT_DEBUG,
  GLUSTER_DEBUG_DEFAULT);
 if (gconf->debug_level < 0) {
@@ -949,6 +971,11 @@ static QemuOptsList qemu_gluster_create_opts = {
 .help = "Preallocation mode (allowed values: off, full)"
 },
 {
+.name = GLUSTER_OPT_LOGFILE,
+.type = QEMU_OPT_STRING,
+.help = "Logfile path of libgfapi",
+},
+{
 .name = GLUSTER_OPT_DEBUG,
 .type = QEMU_OPT_NUMBER,
 .help = "Gluster log level, valid range is 0-9",
-- 
2.7.4



Re: [Gluster-devel] [Gluster-users] Glusterfs 3.7.11 with LibGFApi in Qemu on Ubuntu Xenial does not work

2016-07-05 Thread André Bauer
Just for the record...

In the meantime I also filed a bug on the apparmor bug tracker:

https://bugs.launchpad.net/apparmor/+bug/1595451

Unfortunately they have not been able to help so far :-(

Regards
André

Am 22.06.2016 um 12:42 schrieb André Bauer:
> Hi Vijay,
> 
> I just used "tail -f /var/log/glusterfs/*.log" and also "tail -f
> /var/log/glusterfs/bricks/glusterfs-vmimages.log" on all 4 nodes to
> check for new log entries when trying to migrate a VM to the host.
> 
> There are no new log entries from the start of VM migration until the error.
> 
> Does anybody have this (qemu / libgfapi access) running in Ubuntu 16.04?
> 
> Regards
> André
> 
> 
> 
> Am 17.06.2016 um 04:44 schrieb Vijay Bellur:
>> On Wed, Jun 15, 2016 at 8:07 AM, André Bauer  wrote:
>>> Hi Prasanna,
>>>
>>> Am 15.06.2016 um 12:09 schrieb Prasanna Kalever:
>>>

 I think you have missed enabling bind-insecure, which is needed for
 libgfapi access. Please try again after following the steps below:

 => edit /etc/glusterfs/glusterd.vol by adding "option
 rpc-auth-allow-insecure on" #(on all nodes)
 => gluster vol set $volume server.allow-insecure on
 => systemctl restart glusterd #(on all nodes)

>>>
>>> No, that's not the case. All services are up and running correctly,
>>> allow-insecure is set, and the volume works fine with libgfapi access
>>> from my Ubuntu 14.04 KVM/Qemu servers.
>>>
>>> Just the server which was updated to Ubuntu 16.04 can't access the
>>> volume via libgfapi anymore (fuse mount still works).
>>>
>>> GlusterFS logs are empty when trying to access the GlusterFS nodes, so I
>>> think the requests are blocked on the client side.
>>>
>>> Maybe apparmor again?
>>>
>>
>> Might be worth a check again to see if there are any errors seen in
>> glusterd's log file on the server. libvirtd seems to indicate that
>> fetch of the volume configuration file from glusterd has failed.
>>
>> If there are no errors in glusterd or glusterfsd (brick) logs, then we
>> can possibly blame apparmor ;-).
>>
>> Regards,
>> Vijay
>>
> 
> 


-- 
Mit freundlichen Grüßen
André Bauer

MAGIX Software GmbH
André Bauer
Administrator
August-Bebel-Straße 48
01219 Dresden
GERMANY

tel.: 0351 41884875
e-mail: aba...@magix.net
www.magix.com

Geschäftsführer | Managing Directors: Dr. Arnd Schröder, Klaus Schmidt
Amtsgericht | Commercial Register: Berlin Charlottenburg, HRB 127205

--
The information in this email is intended only for the addressee named
above. Access to this email by anyone else is unauthorized. If you are
not the intended recipient of this message any disclosure, copying,
distribution or any action taken in reliance on it is prohibited and
may be unlawful. MAGIX does not warrant that any attachments are free
from viruses or other defects and accepts no liability for any losses
resulting from infected email transmissions. Please note that any
views expressed in this email may be those of the originator and do
not necessarily represent the agenda of the company.
--

[Gluster-devel] Bugs with incorrect status

2016-07-05 Thread Niels de Vos
1279747 (mainline) MODIFIED: spec: add CFLAGS=-DUSE_INSECURE_OPENSSL to 
configure command-line for RHEL-5 only
  ** mchan...@redhat.com: No change posted, but bug 1279747 is in MODIFIED **

1338593 (mainline) ASSIGNED: clean posix locks based on client-id as part of 
server_connection_cleanup
  [master] I661ebe posix/locks: associate posix locks with client-uuid (NEW)
  ** spa...@redhat.com: Bug 1338593 should be in POST, change I661ebe under 
review **

1332073 (mainline) ASSIGNED: EINVAL errors while aggregating the directory size 
by quotad
  [master] If8a267 quotad: fix potential buffer overflows (NEW)
  [master] Iaa quotad: fix potential buffer overflows (NEW)
  [master] If8a267 quotad: fix potential buffer overflows (NEW)
  ** mselv...@redhat.com: Bug 1332073 should be in POST, change If8a267 under 
review **

1349284 (mainline) ASSIGNED: [tiering]: Files of size greater than that of high 
watermark level should not be promoted
  [master] Ice0457 cluster/tier: dont promote if estimated block consumption > 
hi watermark (NEW)
  ** mchan...@redhat.com: Bug 1349284 should be in POST, change Ice0457 under 
review **

1202717 (mainline) MODIFIED: quota: re-factor quota cli and glusterd changes 
and remove code duplication
  ** mselv...@redhat.com: No change posted, but bug 1202717 is in MODIFIED **

1352482 (3.7.12) POST: qemu libgfapi clients hang when doing I/O with 3.7.12
  ** b...@gluster.org: No change posted, but bug 1352482 is in POST **

1153964 (mainline) MODIFIED: quota: rename of "dir" fails in case of quota 
space availability is around 1GB
  [master] Iaad907 quota: No need for quota-limit check if rename is under same 
parent (ABANDONED)
  [master] I2c8140 quota: For a link operation, do quota_check_limit only till 
the common ancestor of src and dst file (MERGED)
  [master] Ia1e536 quota: For a rename operation, do quota_check_limit only 
till the common ancestor of src and dst file (MERGED)
  ** mselv...@redhat.com: Bug 1153964 should be CLOSED, v3.7.12 contains a fix 
**

1351154 (3.8.0) NEW: nfs-ganesha disable doesn't delete nfs-ganesha folder from 
/var/run/gluster/shared_storage
  [release-3.8] Icc09b3 ganesha/scripts : delete nfs-ganesha folder from shared 
storage during clean up (MERGED)
  ** b...@gluster.org: Bug 1351154 should be MODIFIED, change Icc09b3 has been 
merged **

1008839 (mainline) POST: Certain blocked entry lock info not retained after the 
lock is granted
  [master] Ie37837 features/locks : Certain blocked entry lock info not 
retained after the lock is granted (ABANDONED)
  ** ata...@redhat.com: Bug 1008839 is in POST, but all changes have been 
abandoned **

1339166 (mainline) POST: distaf: Added timeout value to wait for rebalance to 
complete and removed older rebalance library file
  [master] I89e2e4 Added timeout value to wait for rebalance to complete and 
removed older rebalance library file (MERGED)
  ** aloga...@redhat.com: Bug 1339166 should be MODIFIED, change I89e2e4 has 
been merged **

1337899 (mainline) POST: Misleading error message on rebalance start when one 
of the glusterd instance is down
  [master] I5827d3 Glusterd: printing the node details on error message of 
rebalance (MERGED)
  ** hgowt...@redhat.com: Bug 1337899 should be MODIFIED, change I5827d3 has 
been merged **

1316178 (3.7.8) POST: changelog/rpc: Memory leak- rpc_clnt_t object is never 
freed
  ** khire...@redhat.com: No change posted, but bug 1316178 is in POST **

1215596 (3.6.3) MODIFIED: "case sensitive = no" is not honored when "preserve 
case = yes" is present in smb.conf
  ** rta...@redhat.com: No change posted, but bug 1215596 is in MODIFIED **

1258144 (3.7.5) ON_QA: Data Tiering: Tier deamon crashed when detach tier start 
was issued while IOs were happening
  ** dlamb...@redhat.com: No change posted, but bug 1258144 is in ON_QA **

1310445 (3.7.7) ASSIGNED: Gluster not resolving hosts with IPv6 only lookups
  [release-3.7] Idd7513 glusterd: Bug fixes for IPv6 support (MERGED)
  ** nithind1...@yahoo.in: Bug 1310445 should be CLOSED, v3.7.12 contains a fix 
**

1334164 (mainline) POST: Worker dies with [Errno 5] Input/output error upon 
creation of entries at slave
  [master] Ic559c2 cluster/distribute: heal layout in discover codepath too 
(ABANDONED)
  [master] I84b204 cluster/distribute: heal layout in discover codepath too 
(ABANDONED)
  [master] I4323c2 cluster/distribute: heal layout in discover codepath too 
(ABANDONED)
  [master] I4259d8 cluster/distribute: heal layout in discover codepath too 
(MERGED)
  [master] I1bd815 cluster/distribute: use a linked inode in directory heal 
codepath (MERGED)
  ** rgowd...@redhat.com: Bug 1334164 should be MODIFIED, change I1bd815 has 
been merged **

1336354 (mainline) POST: Provide a way to configure gluster source location in 
devel-vagrant
  [master] I7057a9 extra/devel-vagrant: accept gluster src location from user 
(MERGED)
  ** rjos...@redhat.com: Bug 1336354 should be MODIFIED, change I7057a9 has 
been merged **