Re: [Gluster-users] HadoopFS-like gluster setup

2009-06-30 Thread Shehjar Tikoo

Peng Zhao wrote:

Hi, all,
I'm new to gluster, but I find it interesting. I want to set up gluster in
a way similar to HDFS.

Here is my sample vol-file:
volume posix
 type storage/posix
 option directory /data1/gluster
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
 type performance/io-threads
 subvolumes locks
end-volume

volume server
 type protocol/server
 option transport-type tcp
 option auth.addr.brick.allow *
 subvolumes brick
end-volume

volume compute-5-0
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-0
 option remote-subvolume brick
end-volume

volume compute-5-1
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-1
 option remote-subvolume brick
end-volume

volume compute-5-2
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-2
 option remote-subvolume brick
end-volume

volume compute-5-3
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-3
 option remote-subvolume brick
end-volume

volume compute-5-4
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-4
 option remote-subvolume brick-ns
end-volume

volume primary
 type cluster/replicate
 option local-volume-name primary
 subvolumes compute-5-0 compute-5-1
end-volume

volume secondary
 type cluster/replicate
 option local-volume-name secondary
 subvolumes compute-5-2 compute-5-3
end-volume

volume unified
 type cluster/unify
 option scheduler rr
 option local-volume-name unified  # do I need this?
 option namespace compute-5-4   # do I need this?
 subvolumes primary secondary
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes unified
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

The glusterd is up & running and there are no error messages in the logs.
However, it reports some errors when I try to mount it:
[2009-07-01 09:37:36] E [xlator.c:736:xlator_init_rec] xlator: 
Initialization of volume 'fuse' failed, review your volfile again
[2009-07-01 09:37:36] E [glusterfsd.c:498:_xlator_graph_init] glusterfs: 
initializing translator failed
[2009-07-01 09:37:36] E [glusterfsd.c:1191:main] glusterfs: translator 
initialization failed. exiting


I guess this is a very common question. Does anyone have any ideas?
BR,


Try generating the log file with the log level set to DEBUG. You
can do so by using the "-L DEBUG" command line parameter.

The debug log level will give us a better idea of what
exactly is failing.
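
For instance, the client mount could be started along these lines (the
volfile path, log file and mount point here are only placeholders, not
taken from your post):

  glusterfs -L DEBUG -l /var/log/glusterfs/client.log -f /path/to/client.vol /mnt/glusterfs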

-Shehjar

Gnep




___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users





Re: [Gluster-users] HELP: problem of stripe expansion !!!!!!!!!!!!!!!!!!!!!!!!

2009-06-30 Thread Anand Babu Periasamy
You cannot expand stripe directly.  You have to use
distribute + stripe, where you scale in stripe sets.

For example, if you have 8 nodes, you create
=>  distribute(stripe(4)+stripe(4))
Now if you want to scale your storage cluster, you should do so
in stripe sets. Add 4 more nodes like this:
=>  distribute(stripe(4)+stripe(4)+stripe(4))

Distributed-stripe not only makes stripe scalable, but also improves
load balancing and reduces disk contention.
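
As a rough volfile sketch (assuming eight protocol/client volumes named
client1 through client8 are already defined; the names here are
illustrative, not from this mail), the distribute-over-stripe layout
would be:

volume stripe-set-1
  type cluster/stripe
  subvolumes client1 client2 client3 client4
end-volume

volume stripe-set-2
  type cluster/stripe
  subvolumes client5 client6 client7 client8
end-volume

volume distribute
  type cluster/distribute
  subvolumes stripe-set-1 stripe-set-2
end-volume

Adding another stripe set later means defining a stripe-set-3 the same way
and appending it to the distribute subvolumes line.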
--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://unlocksmith.org]
GlusterFS [http://www.gluster.org]
GNU/Linux [http://www.gnu.org]


eagleeyes wrote:
> Hello:
>    Today I tested stripe expansion, expanding from two volumes to four.
> When I vi or cat a file, the log shows:
> [2009-07-01 11:25:55] W [stripe.c:1920:stripe_open_getxattr_cbk] stripe: 
> client8 returned error No such file or directory
> [2009-07-01 11:25:55] W [stripe.c:1920:stripe_open_getxattr_cbk] stripe: 
> client7 returned error No such file or directory
> [2009-07-01 11:25:55] W [stripe.c:1871:stripe_open_cbk] stripe: client7 
> returned error No such file or directory
> [2009-07-01 11:25:55] W [stripe.c:1871:stripe_open_cbk] stripe: client8 
> returned error No such file or directory
> [2009-07-01 11:25:55] W [fuse-bridge.c:639:fuse_fd_cbk] glusterfs-fuse: 149: 
> OPEN() /file => -1 (No such file or directory)
>  
> Is this a bug like the one with dht expansion? What should we do to deal
> with this problem?
>  
> My client config change was from "subvolumes client5 client6" to
> "subvolumes client5 client6 client7 client8".
>  
>  
> 2009-07-01
> 
> eagleeyes
> 
> 
> 
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] HELP: problem of stripe expansion !!!!!!!!!!!!!!!!!!!!!!!!

2009-06-30 Thread eagleeyes
Hello:
   Today I tested stripe expansion, expanding from two volumes to four. When
I vi or cat a file, the log shows:
[2009-07-01 11:25:55] W [stripe.c:1920:stripe_open_getxattr_cbk] stripe: 
client8 returned error No such file or directory
[2009-07-01 11:25:55] W [stripe.c:1920:stripe_open_getxattr_cbk] stripe: 
client7 returned error No such file or directory
[2009-07-01 11:25:55] W [stripe.c:1871:stripe_open_cbk] stripe: client7 
returned error No such file or directory
[2009-07-01 11:25:55] W [stripe.c:1871:stripe_open_cbk] stripe: client8 
returned error No such file or directory
[2009-07-01 11:25:55] W [fuse-bridge.c:639:fuse_fd_cbk] glusterfs-fuse: 149: 
OPEN() /file => -1 (No such file or directory)

Is this a bug like the one with dht expansion? What should we do to deal with this problem?
 
My client config change was from "subvolumes client5 client6" to
"subvolumes client5 client6 client7 client8".


2009-07-01 



eagleeyes
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] HadoopFS-like gluster setup

2009-06-30 Thread Peng Zhao
Hi, all,
I'm new to gluster, but I find it interesting. I want to set up gluster in a
way similar to HDFS.
Here is my sample vol-file:
volume posix
 type storage/posix
 option directory /data1/gluster
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
 type performance/io-threads
 subvolumes locks
end-volume

volume server
 type protocol/server
 option transport-type tcp
 option auth.addr.brick.allow *
 subvolumes brick
end-volume

volume compute-5-0
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-0
 option remote-subvolume brick
end-volume

volume compute-5-1
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-1
 option remote-subvolume brick
end-volume

volume compute-5-2
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-2
 option remote-subvolume brick
end-volume

volume compute-5-3
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-3
 option remote-subvolume brick
end-volume

volume compute-5-4
 type protocol/client
 option transport-type tcp
 option remote-host compute-5-4
 option remote-subvolume brick-ns
end-volume

volume primary
 type cluster/replicate
 option local-volume-name primary
 subvolumes compute-5-0 compute-5-1
end-volume

volume secondary
 type cluster/replicate
 option local-volume-name secondary
 subvolumes compute-5-2 compute-5-3
end-volume

volume unified
 type cluster/unify
 option scheduler rr
 option local-volume-name unified  # do I need this?
 option namespace compute-5-4   # do I need this?
 subvolumes primary secondary
end-volume

volume writebehind
  type performance/write-behind
  option cache-size 1MB
  subvolumes unified
end-volume

volume cache
  type performance/io-cache
  option cache-size 512MB
  subvolumes writebehind
end-volume

The glusterd is up & running and there are no error messages in the logs.
However, it reports some errors when I try to mount it:
[2009-07-01 09:37:36] E [xlator.c:736:xlator_init_rec] xlator:
Initialization of volume 'fuse' failed, review your volfile again
[2009-07-01 09:37:36] E [glusterfsd.c:498:_xlator_graph_init] glusterfs:
initializing translator failed
[2009-07-01 09:37:36] E [glusterfsd.c:1191:main] glusterfs: translator
initialization failed. exiting

I guess this is a very common question. Does anyone have any ideas?
BR,
Gnep
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Is it stable when the process died without vmp umount() ?

2009-06-30 Thread Shehjar Tikoo

Daesung Kim wrote:
In my application, many Apache processes use the APIs in
libglusterfsclient.so to act as glusterfs clients.

The VMP (virtual mount point) is mounted once when each process is initialized.

So if I kill the process, the VMP list in the library would be cleared.

In this case, is it safe for the process to die without a VMP umount?

Does it leave garbage data that is never cleaned up somewhere on the
glusterfs servers?


Yes. This is not a problem. Once the process using libglusterfsclient
dies, the connections and related state at the server are also cleaned
up.

-Shehjar


 


Thanks.




___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users





[Gluster-users] Is it stable when the process died without vmp umount() ?

2009-06-30 Thread Daesung Kim
In my application, many Apache processes use the APIs in
libglusterfsclient.so to act as glusterfs clients.

The VMP (virtual mount point) is mounted once when each process is initialized.

So if I kill the process, the VMP list in the library would be cleared.

In this case, is it safe for the process to die without a VMP umount?

Does it leave garbage data that is never cleaned up somewhere on the
glusterfs servers?

 

Thanks.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS slowness due to updatedb cron job

2009-06-30 Thread Anand Babu Periasamy

At a number of deployments we have seen users experiencing slowness of
GlusterFS over a period of time (as the volume usage grows). Sometimes it
happens once a day. This is due to the updatedb cron job, which wakes up
once a day to index all the files it can find. It is configured by default
to ignore network file systems such as NFS and Lustre; please add
fuse.glusterfs to the list as well.

Edit /etc/updatedb.conf:
PRUNEFS="fuse.glusterfs NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre_lite tmpfs usbfs udf"
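
To confirm the filesystem type string that PRUNEFS has to match, you can
check how the volume is reported in the mount table; it should show up with
the type fuse.glusterfs:

  mount | grep glusterfs
  grep glusterfs /proc/mounts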



--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://unlocksmith.org]
GlusterFS [http://www.gluster.org]
GNU/Linux [http://www.gnu.org]

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] GlusterFS Virtual Storage for Red Hat Virtualization

2009-06-30 Thread Anand Babu Periasamy

Red Hat's enterprise virtualization management is built on oVirt (ovirt.org),
libvirt (libvirt.org) and the default KVM (linux-kvm). It is in beta stage, but
usable.
We have integrated GlusterFS into this project. It is fairly easy to install.
Give it a try and let us know your feedback at dl-ovirt(at)gluster.com.

http://www.gluster.org/docs/index.php/GlusterFS_oVirt_Setup_Guide

The GlusterFS RPMs in this repository have fixes for poor mmap write
performance, using the fuse-2.8 big-writes capability. The next GlusterFS
release, 2.0.3, scheduled for this week, will include this improvement by
default.

We will include a series of improvements aimed at virtual machine workloads
in the upcoming releases of GlusterFS (such as background healing, incremental
block-level healing of large files, and persistent open across system reboots).
Your feedback is crucial.

Happy Hacking,
--
Anand Babu Periasamy
GPG Key ID: 0x62E15A31
Blog [http://unlocksmith.org]
GlusterFS [http://www.gluster.org]
GNU/Linux [http://www.gnu.org]

___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] flush-behind behaving synchronously

2009-06-30 Thread Zak Wilcox

[Apologies if this gets posted twice]

Hi,

I can't seem to get flush-behind to flush behind and I was wondering  
if anyone would be kind enough to lend some brain cells to help me  
see what I'm missing.


I'm using glusterfs 2.0.1 on Debian/x86.  I've got a "cluster/replicate"
volume replicated between two servers that are connected (over a VPN) over
a slow (ADSL) link.  Synchronous creation of large files over such a link
is very slow, so I added the "performance/write-behind" layer with "option
flush-behind on".  When I write a file to the volume using cp I see the
write()s returning implausibly quickly for the speed of the link, as
expected, but the final close() still suffers a long delay as though I'd
never specified flush-behind.  The following strace shows what happens when
a 100kB file gets copied into the gluster volume (which is mounted on
/mnt/tmp).  strace prints timestamps at syscall entry.



quince:/mnt# strace -ttt cp ~/typically-sized-files/hundredkay /mnt/tmp/100k
1246310229.679320 execve("/bin/cp", ["cp", "/root/typically-sized-files/hund"..., "/mnt/tmp/100k"], [/* 17 vars */]) = 0

[...]
1246310230.343767 open("/root/typically-sized-files/hundredkay", O_RDONLY|O_LARGEFILE) = 3
1246310230.343818 fstat64(3, {st_mode=S_IFREG|0644, st_size=102400, ...}) = 0
1246310230.343861 open("/mnt/tmp/100k", O_WRONLY|O_CREAT|O_EXCL|O_LARGEFILE, 0644) = 4

1246310230.563629 fstat64(4, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
1246310230.563694 read(3, "MZ\220\0\3\0\0\0\4\0\0\0\377\377\0\0\270\0\0\0\0\0\...@\0\0\0\0\0\0\0\0"..., 8192) = 8192
1246310230.563758 write(4, "MZ\220\0\3\0\0\0\4\0\0\0\377\377\0\0\270\0\0\0\0\0\...@\0\0\0\0\0\0\0\0"..., 8192) = 8192
1246310231.480604 read(3, "M\364_^d\211\r\0\0\0\0\311\303\213\3013\311\211h\4\211h\10\211h\f\...@\20\1\0\0\0"..., 8192) = 8192
1246310231.480667 write(4, "M\364_^d\211\r\0\0\0\0\311\303\213\3013\311\211h\4\211h\10\211h\f\...@\20\1\0\0\0"..., 8192) = 8192
1246310231.480940 read(3, "\377\377C;_\10|\360\213E\f\211u\0109p\10\17\216\220\0\0\0\213e\f\213m\10\...@\f\213"..., 8192) = 8192
1246310231.480990 write(4, "\377\377C;_\10|\360\213E\f\211u\0109p\10\17\216\220\0\0\0\213e\f\213m\10\...@\f\213"..., 8192) = 8192

[...]
1246310231.481945 read(3, "\312u\1\0\336u\1\0\352u\1\0\374u\1\0\16v\1\0\36v\1\0,v\1\0:v\1\0H"..., 8192) = 8192
1246310231.481986 write(4, "\312u\1\0\336u\1\0\352u\1\0\374u\1\0\16v\1\0\36v\1\0,v\1\0:v\1\0H"..., 8192) = 8192
1246310231.482063 read(3, "\377\271^\0\377\302v\0\322\247m\0\364\313\226\0\377\300h\0\374\277h\0\372\304x\0\340\261p\0\341"..., 8192) = 4096
1246310231.482102 write(4, "\377\271^\0\377\302v\0\322\247m\0\364\313\226\0\377\300h\0\374\277h\0\372\304x\0\340\261p\0\341"..., 4096) = 4096

1246310231.482174 close(4)  = 0
1246310235.602280 close(3)  = 0
[...]
1246310235.602419 exit_group(0) = ?


This delay at close (4.2 seconds here) gets longer as the file gets  
bigger - exactly as you'd expect if write()ing was asynchronous but  
close()ing/flush()ing was synchronous.  Here's the (slightly more  
abbreviated) timings for a 1MB file:


===
quince:/mnt# strace -ttt cp ~/typically-sized-files/onemeg /mnt/tmp/
1246315526.483531 execve("/bin/cp", ["cp", "/root/typically-sized-files/onem"..., "/mnt/tmp/"], [/* 17 vars */]) = 0

[...]
1246315527.034448 read(3, "MZ\220\0\3\0\0\0\4\0\0\0\377\377\0\0\270\0\0\0\0\0\...@\0\0\0\0\0\0\0\0"..., 8192) = 8192
1246315527.034515 write(4, "MZ\220\0\3\0\0\0\4\0\0\0\377\377\0\0\270\0\0\0\0\0\...@\0\0\0\0\0\0\0\0"..., 8192) = 8192
1246315527.500059 read(3, "M\364_^d\211\r\0\0\0\0\311\303\213\3013\311\211h\4\211h\10\211h\f\...@\20\1\0\0\0"..., 8192) = 8192
1246315527.500113 write(4, "M\364_^d\211\r\0\0\0\0\311\303\213\3013\311\211h\4\211h\10\211h\f\...@\20\1\0\0\0"..., 8192) = 8192

[...]
1246315527.522606 write(4, "\362\303\34:?/\5+\351\262\277\35\2413\5(eP\0\357\223\226w\373\311r4\325\347\215\30\r\270"..., 8192) = 8192
1246315527.522733 read(3, ""..., 8192)  = 0
1246315527.522770 close(4)  = 0
1246315563.780185 close(3)  = 0
===

That's 36.26 seconds or ~28kB/s, which is typical of my upload  
speeds, strongly suggesting glusterfs is copying the file before  
returning from close().


Each of the two slow-connected machines is a gluster server with a  
localhost gluster client.  I'm using server-based replication (only  
out of desperation when client-based flush-behind didn't work).  This  
latest config does the write-back on the server, but I've also tried  
doing it on the client, with the same problem.  I'll include my most  
recent server-replication-based/server-write-delaying config because  
it's where those straces came from.
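
For reference, the server-side write-behind definition being described would
look roughly like the following. This is only a sketch: the subvolume name
"replicate" and the cache-size value are guesses, since the full server
volfile isn't shown here (the volume name "delayedwrite" is the one the
client config below refers to):

volume delayedwrite
  type performance/write-behind
  option flush-behind on          # let flush()/close() return without waiting for the write-out
  option cache-size 4MB           # example value
  subvolumes replicate            # the cluster/replicate volume on this server
end-volume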


Client config (identical both ends):


volume loopback
  type protocol/client
  option transport-type tcp
  option remote-host 127.0.0.1
  option remote-subvolume delayedwrite  # name of the remot

Re: [Gluster-users] Storage Cluster Question

2009-06-30 Thread Francisco Cabrita
On Mon, Jun 29, 2009 at 11:37 PM, Mickey Mazarick 
wrote:

> Have you seen a distributed parallel fault tolerant file system that
> doesn't take a serious hit doing mmaps or direct io?
> This is a serious question; I've installed Lustre to contrast recently and
> it didn't measure up, but I'm wondering what other DPFT filers anyone has
> tried and for what applications?
> The wiki list is here:
> http://en.wikipedia.org/wiki/List_of_file_systems#Distributed_parallel_fault_tolerant_file_systems


This is a very interesting topic. I would also appreciate ML feedback on
this.


>
> 
>
> Can anyone give a positive or negative on any of these? Perhaps we can use
> some of the positives of others to drive gluster development.


Well, it's not listed there, and in fact I don't know whether I could add it
to that DPFT list, but on FreeBSD we have an interesting framework for
handling devices, GEOM: Modular Disk Transformation Framework [
http://www.freebsd.org/doc/en/books/handbook/geom.html ]. On top of this I
can add "geoms". In a distributed scenario I would plan to have a base GEOM
with a bunch of distributed disks aggregated via GEOM GGATE, and above that
some RAID0 or so.
I have very positive impressions of this; friends are using it in
production.

Francisco


>
>
> -Mic
>
>
> Nathan Stratton wrote:
>
>> On Mon, 29 Jun 2009, Todd Daugherty wrote:
>>
>>  I have not used it in production yet but I did a test with:
>>> http://gluster.org/docs/index.php/Translators/cluster/stripe
>>>
>>> That was quite nice across 8 servers for Quicktime files (average file
>>> 1GB).
>>>
>>> Why would you say no to Xen VM files?
>>>
>>
>> Because with xen you need to --disable-direct-io-mode to get it to work
>> and that kills performance.
>>
>> Raw disk:
>> 8589934592 bytes (8.6 GB) copied, 21.0523 seconds, 408 MB/s
>>
>> Gluster:
>> 8589934592 bytes (8.6 GB) copied, 47.4356 seconds, 181 MB/s
>>
>> Gluster --disable-direct-io-mode
>> 8589934592 bytes (8.6 GB) copied, 336.514 seconds, 25.5 MB/
>>
>>
>>  <>
>>>
>> Nathan Stratton                CTO, BlinkMind, Inc.
>> nathan at robotics.net         nathan at blinkmind.com
>> http://www.robotics.net        http://www.blinkmind.com
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>>
>
>
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>



-- 
blog: http://sufixo.com/raw
http://www.linkedin.com/in/franciscocabrita
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] Storage Cluster Question

2009-06-30 Thread Francisco Cabrita
On Mon, Jun 29, 2009 at 8:46 PM, Nathan Stratton wrote:

> On Mon, 29 Jun 2009, Todd Daugherty wrote:
>
>  I have not used it in production yet but I did a test with:
>> http://gluster.org/docs/index.php/Translators/cluster/stripe
>>
>> That was quite nice across 8 servers for Quicktime files (average file
>> 1GB).
>>
>> Why would you say no to Xen VM files?
>>
>
> Because with xen you need to --disable-direct-io-mode to get it to work and
> that kills performance.
>
> Raw disk:
> 8589934592 bytes (8.6 GB) copied, 21.0523 seconds, 408 MB/s
>
> Gluster:
> 8589934592 bytes (8.6 GB) copied, 47.4356 seconds, 181 MB/s
>
> Gluster --disable-direct-io-mode
> 8589934592 bytes (8.6 GB) copied, 336.514 seconds, 25.5 MB/


Ouch, this is a huge penalty!

My primary VMs are for Nagios, Cacti and MySQL. Both Nagios and Cacti
interact heavily with the MySQL VM, which stores things on the "file
system". So I think this is the key factor in moving forward and choosing
another technology.

Francisco


>
>
>
>  <>
>>
> Nathan Stratton                CTO, BlinkMind, Inc.
> nathan at robotics.net         nathan at blinkmind.com
> http://www.robotics.net        http://www.blinkmind.com
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
>



-- 
blog: http://sufixo.com/raw
http://www.linkedin.com/in/franciscocabrita
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


Re: [Gluster-users] help friend :java nio problem

2009-06-30 Thread Trygve Hardersen
I don't think your example would copy the file correctly, but I'm getting
the same error when I run a similar test on our file system. It works on the
local file system.

Trygve

On Mon, Jun 29, 2009 at 4:59 PM, eagleeyes  wrote:

>  Thanks. The attachment is a Java NIO test using mmap; you could use it for
> testing.
> You should mount GFS at the directory /alfresco and touch a file named
> file.txt with some words in it, then in /alfresco run "javac
> FcTest.java" and "java FcTest". If it creates an outFile.txt with the same
> words as file.txt, the test succeeded. If there is an error like:
> [r...@iaas-1 alfresco]# java FcTest
> > java.io.IOException: No such device
> > at sun.nio.ch.FileChannelImpl.map0(Native Method)
> > at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:747)
> > at FcTest.main(FcTest.java:18)
> the test failed.
>
>
> 2009-06-29
> --
>  eagleeyes
> --
> *From:* Trygve Hardersen
> *Sent:* 2009-06-29  19:36:54
> *To:* eagleeyes
> *Cc:* Anand Babu
> *Subject:* Re: help friend :java nio problem
>  Hello
>
> Can you send me a snippet of the NIO code that you're using? We're using
> NIO, but probably not the mmap function.
>
> Trygve
>
> On Mon, Jun 29, 2009 at 8:47 AM, eagleeyes  wrote:
>
>>  Dear friends:
>> You had said that you used Java NIO with gluster very well; could you
>> help me?
>>  My Java NIO code uses the mmap model, not the position model, so NIO
>> couldn't create files in the gluster directory.
>> What configuration did you use successfully? I couldn't find a fuse mmap
>> operation in the Gluster source code.
>>
>> 2009-06-29
>> --
>> eagleeyes
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users


[Gluster-users] Use of mod_glusterfs

2009-06-30 Thread eagleeyes
HELLO:
  Has anyone used mod_glusterfs?
  I installed mod_glusterfs with Apache 2.2 following
http://www.gluster.org/docs/index.php/Getting_modglusterfs_to_work step by
step, but how do I use it? And how do I make authentication work?

2009-06-30 



eagleeyes 
___
Gluster-users mailing list
Gluster-users@gluster.org
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users