Hi Krishna,
I looked into the code, but it seems that the server is not actually
performing authentication of clients based on their IP addresses. Can you
confirm this?
On 4/24/07, Krishna Srinivas <[EMAIL PROTECTED]> wrote:
> Hi John,
> Try comma as the separator.
> option auth.ip.brick.allow 1
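(For reference, a minimal 1.3-style server spec showing where this option
lives; the export path and subnet patterns below are placeholders, not
values from this thread:)

volume brick
  type storage/posix
  option directory /home/export
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick
  # comma-separated IP patterns, as suggested above (placeholder subnets)
  option auth.ip.brick.allow 192.168.1.*,10.0.0.*
end-volume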
Unfortunately, our mapserv isn't compiled with debugging symbols. If it
comes down to it, I can try to arrange that.
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1209813296 (LWP 14678)]
0x080c54e3 in __libc_csu_init ()
(gdb) bt
#0 0x080c54e3 in __libc_csu_init ()
Erik,
please send the output of gdb's 'bt' command, and if possible an
strace log of running mapserv over glusterfs (to see the last FS
operation or other clues)
regards,
avati
On Tue, May 01, 2007 at 05:32:12PM -0700, Erik Osterman wrote:
> If I try to run our mapserv application from a glusterfs
If I try to run our mapserv application from a glusterfs mounted volume
I consistently get segfaults. Yet if I copy that binary from the
glusterfs volume to /tmp, it works fine.
I only have the posix translator enabled on the glusterfsd processes.
The clients are running afr (*:2) between
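(For context, a client spec of the shape Erik describes might look like
the sketch below; the host and volume names are hypothetical, and only
the afr (*:2) option comes from his mail.)

volume client1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1          # hypothetical host
  option remote-subvolume brick
end-volume

volume client2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2          # hypothetical host
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  subvolumes client1 client2
  option replicate *:2                # keep two copies of every file
end-volume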
Wow, it almost looked like the patch fixed the issue with using
stat-prefetch, but see below. I was almost unable to get it to crash with
du's or rm's on complex directories, as it did fairly easily before.
Also, I think it fixed a tiny anomaly that I had noticed but ignored.
Previously, even
Awesome. I'll give it a try today sometime. Personally, I don't need the
pattern matching, but sounds like a neat feature.
Never ceases to amaze me how fast you guys implement features.
Best,
Erik
Anand Avati wrote:
> Erik,
> the fixed-uid/gid feature is currently not there in glusterfs. this
> will come as a translator if at all in the future.
Same here, and the loadbalance sounds like a great addition.
Thanks,
Brent
On Tue, 1 May 2007, Majied Najjar wrote:
> That makes all the sense in the world to me to have replication on the
> server side. I especially like the idea about network failover and not
> having to depend on client mounts
That makes all the sense in the world to me to have replication on the server
side. I especially like the idea about network failover and not having to
depend on client mounts to maintain consistency on the server side.
Majied
On Tue, 1 May 2007 09:05:28 -0700
Anand Avati <[EMAIL PROTECTED]> wrote:
> here is a design proposal about some changes to afr and related.
> currently AFR is totally handled on the client side, where the client
> does the replication as well as failover. the AFR translator is
> essentially doing _two_ features - 1. replication 2. failover.
> In view of the recent race conditions
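(To make the truncated proposal concrete: one reading of server-side
replication is afr loaded in the server spec, stacked over a local posix
brick and a protocol/client pointing at the mirror peer. All names and
the exact layout here are assumptions, not avati's final design.)

volume local-brick
  type storage/posix
  option directory /export/brick      # hypothetical path
end-volume

volume mirror
  type protocol/client
  option transport-type tcp/client
  option remote-host server2          # hypothetical peer server
  option remote-subvolume brick
end-volume

volume afr
  type cluster/afr
  subvolumes local-brick mirror       # replication handled server-side
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes afr
  option auth.ip.afr.allow *          # clients just mount 'afr'
end-volume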
Thanks, committed
avati
On Mon, Apr 30, 2007 at 07:41:01PM +0530, Harshavardhana wrote:
> Hi avati,
>
> Please find the attached patch. Now the configure script will exit for
> the byacc parser, since byacc's y.tab.h does not declare yylval as
> YYSTYPE, while bison's does
>
> I tried adding a bel
> > > * How do I cleanly shut down the bricks making sure that they remain
> > > consistent?
> >
> > For 1.3 you have to kill the glusterfsd manually. You can get the pid
> > from the pidfile (${datadir}/run/glusterfsd.pid)
>
> That's not a problem, my question is how do I shut down two mirrored
> Erik,
> the fixed-uid/gid feature is currently not there in glusterfs. this
> will come as a translator if at all in the future.
which is not far away, i just committed features/fixed-id xlator to
the tla. realized it was so simple that it was just a couple of minutes
of coding. load it in y
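(A sketch of how the new xlator might be loaded in a spec file; the
option names fixed-uid/fixed-gid and their values are assumptions;
check the features/fixed-id source for the actual ones.)

volume fixed
  type features/fixed-id
  subvolumes brick
  option fixed-uid 48                 # assumed option name, example uid
  option fixed-gid 48                 # assumed option name, example gid
end-volume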
On Fri, 2007-04-27 at 16:25 -0700, Anand Avati wrote:
> > * How do I cleanly shut down the bricks making sure that they remain
> > consistent?
>
> For 1.3 you have to kill the glusterfsd manually. You can get the pid
> from the pidfile (${datadir}/run/glusterfsd.pid)
That's not a problem, my question is how do I shut down two mirrored
Erik,
the fixed-uid/gid feature is currently not there in glusterfs. this
will come as a translator if at all in the future. are you in need of
this feature?
avati
On Mon, Apr 30, 2007 at 06:29:27PM -0700, Erik Osterman wrote:
> Erik Osterman wrote:
> >Is it possible to pass options that cause
> I was wondering if you could describe patch-134 a little? I was curious as
> to whether or not it could be related to the stat-prefetch or the NFS
> reexport issues.
this was a bug in afr which could have triggered for anybody who used
AFR and accessed a directory. the functions forming the r
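(For readers following the stat-prefetch part of this thread, that
translator stacks over whatever volume it should accelerate; a minimal
sketch, assuming the client-side afr volume from earlier messages:)

volume prefetch
  type performance/stat-prefetch
  subvolumes afr                      # stacked over the replicated volume
end-volume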