Re: [Gluster-users] What is the default root password for the Gluster installed using ISO?
The default root password is 'glusteradmin'; it is shown when you install Gluster Storage Platform.

-----Original Message-----
From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Xybrek
Sent: Thursday, October 20, 2011 2:30 AM
To: gluster-users@gluster.org
Subject: [Gluster-users] What is the default root password for the Gluster installed using ISO?

Hi,

I have installed Gluster 3.0.5 from the ISO download. The default 'glusteradmin' password for the web interface works, but I can't find a documented root password to access the shell. What is the default root password?

Thanks,
Xybrek

___
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
Re: [Gluster-users] gluster map/reduce performance..
Yes, I used the GlusterFS plugin. The Gluster version is 3.3 beta 2.

Volumes:
Distributed-mirroring volume: 4 servers in a 2 (brick) x 2 (replica) configuration
Stripe-mirroring volume: 4 servers in a 4 (stripe count) x 2 (replica) configuration

For the map/reduce system I used 6 servers (4 are brick servers and the other 2 are just for map/reduce).

I checked your source file, but I can't find any clue for the performance degradation in the merging stage (I think it is connected with writing). Actually, in the writing test Gluster was quite good, so I'm a little confused right now.

Regards,
Andrew

From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Venky Shankar
Sent: Thursday, October 20, 2011 1:35 AM
To: andrew; gluster-users@gluster.org
Subject: Re: [Gluster-users] gluster map/reduce performance..

Hi there,

We'd appreciate it if you could share the following information with us:

* Are you using the GlusterFS Hadoop plugin (available at http://download.gluster.com/pub/gluster/glusterfs/qa-releases/3.3-beta-2/glusterfs-hadoop-0.20.2-0.1.x86_64.rpm and still in beta), or are you using GlusterFS as an additional layer below Hadoop's FileSystem (HDFS)? The latter is basically configuring Hadoop to use a GlusterFS mount point (e.g. a FUSE mount) as the data directory for Hadoop's DFS.

Let us know your setup (including the GlusterFS version) so we can debug further.

Thanks,
-Venky

From: gluster-users-boun...@gluster.org [gluster-users-boun...@gluster.org] on behalf of andrew [sstrato.k...@gmail.com]
Sent: Wednesday, October 19, 2011 6:15 PM
To: gluster-users@gluster.org
Subject: [Gluster-users] gluster map/reduce performance..

Hi all,

I am checking the map/reduce performance of the Gluster file system. The mapper-side speed is quite good, sometimes faster than Hadoop's map jobs, but the reduce-side jobs are much slower than Hadoop's. I analyzed the results and found that the primary reason for the slow speed is poor performance in the merging stage.

Would you have any suggestions for this issue? FYI, see the blog http://storage4com.blogspot.com/

Thanks.
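For anyone reproducing this setup: with the plugin route, Hadoop is pointed at GlusterFS through core-site.xml. The sketch below is an assumption based on the glusterfs-hadoop 0.20.2 plugin mentioned above; the exact property names and values should be checked against the README shipped in the RPM.

```xml
<!-- core-site.xml (sketch): route Hadoop FileSystem calls to GlusterFS.
     Property names are assumptions; confirm them in the plugin README. -->
<configuration>
  <property>
    <name>fs.glusterfs.impl</name>
    <value>org.apache.hadoop.fs.glusterfs.GlusterFileSystem</value>
  </property>
  <property>
    <!-- hypothetical server name and port, for illustration only -->
    <name>fs.default.name</name>
    <value>glusterfs://brick1:9000</value>
  </property>
  <property>
    <!-- local FUSE mount point the plugin reads and writes through -->
    <name>fs.glusterfs.mount</name>
    <value>/mnt/glusterfs</value>
  </property>
</configuration>
```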
Re: [Gluster-users] Problem with VM images when one node goes online (self-healing) on a 2 node replication gluster for VMware datastore
The stall time might be decreased by setting cluster.data-self-heal-algorithm to 'diff', but don't expect too much of a speed-up.

From: gluster-users-boun...@gluster.org [mailto:gluster-users-boun...@gluster.org] On Behalf Of Peter Linder
Sent: Tuesday, October 11, 2011 6:06 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Problem with VM images when one node goes online (self-healing) on a 2 node replication gluster for VMware datastore

With 3.2.4, no operation is allowed on a file while it is being self-healed, so your VMs will stall and time out if the self-heal isn't finished quickly enough. Gluster 3.3 will fix this, but I don't know when it will be released. There are betas to try out, though :). Perhaps somebody else can say how stable 3.3-beta2 is compared to 3.2.4?

On 10/11/2011 10:57 AM, keith wrote:

Hi all,

I am testing gluster-3.2.4 on a 2-node storage setup with replication as our VMware datastore. The setup runs replication on 2 nodes with ucarp, and VMware mounts the gluster volume over NFS as a datastore.

Volume Name: GLVOL1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: t4-01.store:/EXPORT/GLVOL1
Brick2: t4-03.store:/EXPORT/GLVOL1
Options Reconfigured:
performance.cache-size: 4096MB

High-availability testing goes smoothly without any problem or data corruption: when either node is down, all VM guests run normally. The problem arises when I bring the failed node back up and it starts self-healing. All my VM guests get kernel error messages and finally end up with "EXT3-fs error: ext3_journal_start_sb: detected aborted journal", remounting the root filesystem read-only.
Below are some of the VM guest kernel errors generated when I bring up the failed gluster node for self-healing:

Oct 11 15:57:58 testvm3 kernel: pvscsi: task abort on host 1, 8100221c90c0
Oct 11 15:57:58 testvm3 kernel: pvscsi: task abort on host 1, 8100221c9240
Oct 11 15:57:58 testvm3 kernel: pvscsi: task abort on host 1, 8100221c93c0
Oct 11 15:58:34 testvm3 kernel: INFO: task kjournald:2081 blocked for more than 120 seconds.
Oct 11 15:58:34 testvm3 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 11 15:58:34 testvm3 kernel: kjournald D 810001736420 0 2081 14 2494 2060 (L-TLB)
Oct 11 15:58:34 testvm3 kernel: 81003c087cf0 0046 810030ef2288 81003f5d6048
Oct 11 15:58:34 testvm3 kernel: 037685c8 000a 810037c53820 80314b60
Oct 11 15:58:34 testvm3 kernel: 1883cb68d47d 2c4e 810037c53a08 3f5128b8
Oct 11 15:58:34 testvm3 kernel: Call Trace:
Oct 11 15:58:34 testvm3 kernel: [] do_gettimeofday+0x40/0x90
Oct 11 15:58:34 testvm3 kernel: [] sync_buffer+0x0/0x3f
Oct 11 15:58:34 testvm3 kernel: [] io_schedule+0x3f/0x67
Oct 11 15:58:34 testvm3 kernel: [] sync_buffer+0x3b/0x3f
Oct 11 15:58:34 testvm3 kernel: [] __wait_on_bit+0x40/0x6e
Oct 11 15:58:34 testvm3 kernel: [] sync_buffer+0x0/0x3f
Oct 11 15:58:34 testvm3 kernel: [] out_of_line_wait_on_bit+0x6c/0x78
Oct 11 15:58:34 testvm3 kernel: [] wake_bit_function+0x0/0x23
Oct 11 15:58:34 testvm3 kernel: [] :jbd:journal_commit_transaction+0x553/0x10aa
Oct 11 15:58:34 testvm3 kernel: [] lock_timer_base+0x1b/0x3c
Oct 11 15:58:34 testvm3 kernel: [] try_to_del_timer_sync+0x7f/0x88
Oct 11 15:58:34 testvm3 kernel: [] :jbd:kjournald+0xc1/0x213
Oct 11 15:58:34 testvm3 kernel: [] autoremove_wake_function+0x0/0x2e
Oct 11 15:58:34 testvm3 kernel: [] keventd_create_kthread+0x0/0xc4
Oct 11 15:58:34 testvm3 kernel: [] :jbd:kjournald+0x0/0x213
Oct 11 15:58:34 testvm3 kernel: [] keventd_create_kthread+0x0/0xc4
Oct 11 15:58:34 testvm3 kernel: [] kthread+0xfe/0x132
Oct 11 15:58:34 testvm3 kernel: [] child_rip+0xa/0x11
Oct 11 15:58:34 testvm3 kernel: [] keventd_create_kthread+0x0/0xc4
Oct 11 15:58:34 testvm3 kernel: [] kthread+0x0/0x132
Oct 11 15:58:34 testvm3 kernel: [] child_rip+0x0/0x11
Oct 11 15:58:34 testvm3 kernel:
Oct 11 15:58:34 testvm3 kernel: INFO: task crond:3418 blocked for more than 120 seconds.
Oct 11 15:58:34 testvm3 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 11 15:58:34 testvm3 kernel: crond D 810001736420 0 3418 1 3436 3405 (NOTLB)
Oct 11 15:58:34 testvm3 kernel: 810036c55ca8 0086 80019e3e
Oct 11 15:58:34 testvm3 kernel: 00065bf2 0007 81003ce4b080 80314b60
Oct 11 15:58:34 testvm3 kernel: 18899ae16270 00023110 81003ce4b268 8804ec00
Oct 11 15:58:34 testvm3 kernel: Call Trace:
Oct 11 15:58:34 testvm3 kernel: [] __getblk+0x25/0x22c
Oct 11 15:58:34 testvm3 kernel: [] do_gettimeofday+0x40/0x90
Oct 11 15:58:34 testvm3 kernel: [] sync_buffer+0x0/0x3f
Oct 11 15:58:34 testvm3 kernel: [] io_schedule+0x3f/0x67
Oct 11 15:58:34 test
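For reference, the self-heal algorithm change suggested at the top of this thread is applied per volume with the gluster CLI. This is a sketch against keith's volume, not something run in the thread; verify the exact option name with `gluster volume set help` on your build, and note it requires a running glusterd:

```shell
# Switch the replicate translator from whole-file self-heal to a
# block-wise diff: only changed blocks are copied, which shortens the
# window in which a large VM image is locked for healing.
gluster volume set GLVOL1 cluster.data-self-heal-algorithm diff

# Confirm the option now appears under "Options Reconfigured".
gluster volume info GLVOL1
```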
[Gluster-users] Is gluster buildable on OSX?
I'm trying to build the gluster 3.3-beta2 release, but I've found one thing that is not compatible with OSX: the '_PATH_MOUNTED' macro, which stands for "/etc/mtab". I believe it's used for checking mount status, but "/etc/mtab" doesn't exist at all on OSX. Is there any way to handle this?

Regards,
Andrew
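As background for anyone attempting the port: _PATH_MOUNTED comes from <paths.h> and names the mount table file, which has no file counterpart on OSX; there the kernel's mount table is queried instead (mount(8) from the shell, getmntinfo(3) from C). A quick portable illustration, nothing gluster-specific:

```shell
# _PATH_MOUNTED (from <paths.h>) expands to "/etc/mtab" on Linux, where
# that file lists the currently mounted filesystems. OSX ships no such
# file; the equivalent data comes from mount(8) or getmntinfo(3).
if [ -e /etc/mtab ]; then
    echo "Linux-style mount table file present:"
    head -n 3 /etc/mtab
else
    echo "No /etc/mtab (OSX-style); querying the kernel instead:"
    mount | head -n 3
fi
```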
Re: [Gluster-users] gluster's cpu load is too high on a specific brick daemon
Here's my result of strace -f -c -p glusterPID:

Process 1993 detached
Process 1994 detached
Process 1996 detached
Process 2004 detached
Process 2006 detached
Process 2013 detached
Process 3469 detached

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 38.39   23.261809       11899      1955        86 lgetxattr
 28.31   17.155779       35667       481        52 futex
 16.58   10.046922       70258       143           epoll_wait
 15.33    9.292713      929271        10           nanosleep
  1.36    0.826873      826873         1           restart_syscall
  0.01    0.007801          51       154           getdents
  0.00    0.002299           4       572           readv
  0.00    0.002176           1      1787           lstat
  0.00    0.001078          90        12           read
  0.00    0.000896           6       141           writev
  0.00    0.000472           3       142           lseek
  0.00    0.000207           2       126           clock_gettime
  0.00    0.000194          49         4           munmap
  0.00    0.000079           4        22           fcntl
  0.00    0.000000           0        26           open
  0.00    0.000000           0        25           close
  0.00    0.000000           0        20           stat
  0.00    0.000000           0         4           fstat
  0.00    0.000000           0         4           mmap
  0.00    0.000000           0         4           statfs
------ ----------- ----------- --------- --------- ----------------
100.00   60.599298                  5633       138 total

-----Original Message-----
From: Pavan T C [mailto:t...@gluster.com]
Sent: Wednesday, July 27, 2011 6:18 PM
To: 공용준 (Yongjoon Kong) / Cloud Computing Technology Division / SKCC
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] gluster's cpu load is too high on a specific brick daemon

On Wednesday 27 July 2011 01:45 PM, 공용준 (Yongjoon Kong) / Cloud Computing Technology Division / SKCC wrote:
> Hello,
>
> I'm running gluster in distributed-replicated mode (4 brick servers),
> and 10 client servers mount the gluster volume from the brick1 server
> (mount -t glusterfs brick1:/volume /mnt).
>
> There's a very strange thing: brick1's CPU load is too high. From the
> 'top' command it's over 400%, but the other bricks' load is very low.

It is possible that an AFR self-heal is getting triggered. On the brick, run the following command:

strace -f -c -p <pid>

and provide the output.

Pavan

> Is there any reason for this? Or is there any way of tracking down this issue?
>
> Thanks.
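For context on why lgetxattr dominates a trace like this: AFR tracks pending self-heal state in trusted.afr.* extended attributes on each brick, and a self-heal crawl reads them for every file it examines. They can be dumped directly on a brick to see which files still need healing; the path below is illustrative, and reading the trusted.* namespace requires root:

```shell
# Dump the AFR changelog xattrs for one file on a brick (path made up).
# Non-zero values indicate pending heal operations; a crawl reading
# these across thousands of files shows up as a storm of lgetxattr
# calls, matching the profile above.
getfattr -d -m 'trusted.afr' -e hex /export/brick1/volume/somefile
```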
[Gluster-users] gluster's cpu load is too high on a specific brick daemon
Hello,

I'm running gluster in distributed-replicated mode (4 brick servers), and 10 client servers mount the gluster volume from the brick1 server (mount -t glusterfs brick1:/volume /mnt).

There's a very strange thing: brick1's CPU load is too high. From the 'top' command it's over 400%, but the other bricks' load is very low.

Is there any reason for this? Or is there any way of tracking down this issue?

Thanks.
Re: [Gluster-users] how to rotate log of client side
Well, I adjusted the log level to TRACE; that may be why it's more than 3 GB.

From: Luis Cerezo [mailto:l...@luiscerezo.org]
Sent: Wednesday, July 20, 2011 12:27 AM
To: 공용준 (Yongjoon Kong) / Cloud Computing Technology Division / SKCC
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] how to rotate log of client side

Why is it getting too big? What is in it?

On Jul 19, 2011, at 4:03 AM, 공용준 (Yongjoon Kong) / Cloud Computing Technology Division / SKCC wrote:

Hi,

I'm using gluster 3.1.4. Is there any way to rotate the client side's log? It's getting bigger every day.

Luis E. Cerezo
http://www.luiscerezo.org
http://twitter.com/luiscerezo
http://flickr.com/photos/luiscerezo
photos for sale: http://photos.luiscerezo.org
Voice: 412 223 7396
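One common way to keep the client log bounded is an external logrotate policy with copytruncate, since the glusterfs mount process holds the log file open. A sketch follows; the path and limits are illustrative, not from the thread:

```
# /etc/logrotate.d/glusterfs-client (sketch; adjust path and limits)
/var/log/glusterfs/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    # the glusterfs client keeps its log file open, so truncate it in
    # place instead of moving it out from under the running process
    copytruncate
}
```

With TRACE-level logging, as in the reply above, rotating daily with a size trigger (e.g. `size 500M`) would be a more realistic limit.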
[Gluster-users] how to rotate log of client side
Hi,

I'm using gluster 3.1.4. Is there any way to rotate the client side's log? It's getting bigger every day.