Re: [Gluster-users] Writing is slow when there are 10 million files.
Our application also stores the path of each file in a database, and accessing a file directly is normally pretty speedy. However, getting the files into the database required searching parts of the filesystem, which was really slow. We also had users working on the filesystem with the usual Unix shell tools (ls/cp/mv etc.), and again, really slow. The biggest problem I had was that if one of the nodes went down for a reboot/patching/whatever, "resyncing" the filesystems took weeks because of the huge number of files.

thanks,
liam

On Tue, Apr 15, 2014 at 3:15 AM, Terada Michitaka wrote:
>
> >> To Liam:
> > I had about 100 million files in Gluster and it was unbelievably
> > painfully slow. We had to ditch it for other technology.
>
> Did the slowdown occur when writing files, when listing files, or both?
>
> In our application, the path of the data is managed in a database.
> "ls" is slow, but that does not affect my application; a slowdown when
> writing files is critical.
Re: [Gluster-users] Writing is slow when there are 10 million files.
On 15 Apr 2014 18:15, Terada Michitaka wrote:
>
> >> To Liam:
> > I had about 100 million files in Gluster and it was unbelievably
> > painfully slow. We had to ditch it for other technology.
>
> Did the slowdown occur when writing files, when listing files, or both?
>
> In our application, the path of the data is managed in a database.
> "ls" is slow, but that does not affect my application; a slowdown when
> writing files is critical.

Throughput with the fuse client is very good; as long as you access files directly you won't have any problems with slow directory reads. In my experience it's better than NFS, especially if you have many clients.
Re: [Gluster-users] Writing is slow when there are 10 million files.
>> To Liam:
> I had about 100 million files in Gluster and it was unbelievably
> painfully slow. We had to ditch it for other technology.

Did the slowdown occur when writing files, when listing files, or both?

In our application, the path of the data is managed in a database.
"ls" is slow, but that does not affect my application; a slowdown when
writing files is critical.

>> To All:

I uploaded statistics from a writing test (32 KByte x 10 million files, 6 bricks):

http://gss.iijgio.com/gluster/gfs-profile_d03r2.txt

At line 15 of that file, the average-latency value is about 30 ms.
I cannot judge whether this is normal performance or not.

Is it slow?

Thanks,
--Michitaka Terada

2014-04-15 16:05 GMT+09:00 Franco Broi:
>
> My bug report is here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1067256
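A rough way to read that 30 ms figure (an order-of-magnitude sketch, assuming one fully synchronous create-and-write per file; real clients overlap requests, so treat this only as a sanity check):

```shell
# At ~30 ms of latency per file operation, a single synchronous writer
# can complete roughly 1000/30 = 33 creates per second, which for
# 32 KByte files works out to about 1 MByte/sec.
awk -v lat_ms=30 -v file_kb=32 'BEGIN {
    ops_per_sec = 1000 / lat_ms                 # file creates per second
    mb_per_sec  = ops_per_sec * file_kb / 1024  # MByte/sec for one writer
    printf "%.1f files/sec, %.2f MByte/sec per synchronous writer\n",
           ops_per_sec, mb_per_sec
}'
```

Six bricks can serve creates in parallel, but a single-threaded writer still pays the full round trip per file, so a per-file latency of that order is at least consistent with the 3.6 MByte/sec observed on the cluster.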
Re: [Gluster-users] Writing is slow when there are 10 million files.
My bug report is here:
https://bugzilla.redhat.com/show_bug.cgi?id=1067256

On Mon, 2014-04-14 at 23:51 -0700, Joe Julian wrote:
> If you experience pain using any filesystem, you should see your
> doctor.
>
> If you're not actually experiencing pain, perhaps you should avoid
> hyperbole and instead talk about what version you tried, what your
> tests were, how you tried to fix it, and what the results were.
>
> If you're using a current version with a kernel that has readdirplus
> support for fuse it shouldn't be that bad. If it is, file a bug report
> - especially if you have the skills to help diagnose the problem.
Re: [Gluster-users] Writing is slow when there are 10 million files.
If you experience pain using any filesystem, you should see your doctor.

If you're not actually experiencing pain, perhaps you should avoid hyperbole and instead talk about what version you tried, what your tests were, how you tried to fix it, and what the results were.

If you're using a current version with a kernel that has readdirplus support for fuse it shouldn't be that bad. If it is, file a bug report - especially if you have the skills to help diagnose the problem.

On April 14, 2014 11:30:26 PM PDT, Liam Slusser wrote:
> I had about 100 million files in Gluster and it was unbelievably
> painfully slow. We had to ditch it for other technology.

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.
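On the readdirplus point: FUSE gained readdirplus support in the Linux 3.9 era (treat that exact threshold as an assumption and verify against your distro's kernel changelog), so a quick client-side check is a version comparison against the running kernel. A minimal sketch; the `kernel_at_least` helper is mine, not a standard tool:

```shell
# kernel_at_least MIN [VERSION]: succeeds if VERSION (default: the running
# kernel, with the distro suffix stripped) is at least MIN.
# sort -V performs the version-aware comparison.
kernel_at_least() {
    min="$1"
    ver="${2:-$(uname -r | cut -d- -f1)}"
    [ "$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n1)" = "$min" ]
}

if kernel_at_least 3.9; then
    echo "kernel new enough for FUSE readdirplus"
else
    echo "kernel predates FUSE readdirplus"
fi
```

On an older enterprise kernel this matters: without readdirplus every entry returned by readdir triggers a separate lookup, which is a large part of why `ls` on big directories hurts.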
Re: [Gluster-users] Writing is slow when there are 10 million files.
We consolidated the hardware into a single large ZFS server with a redundant "hot" slave.

thanks,
liam

On Mon, Apr 14, 2014 at 11:33 PM, Jeffrey 'jf' Lim wrote:
> On Tue, Apr 15, 2014 at 2:30 PM, Liam Slusser wrote:
> >
> > I had about 100 million files in Gluster and it was unbelievably
> > painfully slow. We had to ditch it for other technology.
>
> and what is (or was) that other technology?
>
> -jf
Re: [Gluster-users] Writing is slow when there are 10 million files.
On Tue, Apr 15, 2014 at 2:30 PM, Liam Slusser wrote:
>
> I had about 100 million files in Gluster and it was unbelievably
> painfully slow. We had to ditch it for other technology.

and what is (or was) that other technology?

-jf

--
He who settles on the idea of the intelligent man as a static entity
only shows himself to be a fool.

Mensan / Full-Stack Technical Polymath / System Administrator
12 years over the entire web stack: Performance, Sysadmin, Ruby and Frontend
Re: [Gluster-users] Writing is slow when there are 10 million files.
I had about 100 million files in Gluster and it was unbelievably painfully slow. We had to ditch it for other technology.

On Mon, Apr 14, 2014 at 11:24 PM, Franco Broi wrote:
>
> I seriously doubt this is the right filesystem for you; we have problems
> listing directories with a few hundred files, never mind millions.
Re: [Gluster-users] Writing is slow when there are 10 million files.
I seriously doubt this is the right filesystem for you; we have problems listing directories with a few hundred files, never mind millions.

On Tue, 2014-04-15 at 10:45 +0900, Terada Michitaka wrote:
> Dear All,
>
> I have a problem with slow writing when there are 10 million files.
> (There are 2,500 top-level directories.)
>
> I configured a GlusterFS distributed cluster (3 nodes).
[Gluster-users] Writing is slow when there are 10 million files.
Dear All,

I have a problem with slow writing when there are 10 million files.
(There are 2,500 top-level directories.)

I configured a GlusterFS distributed cluster (3 nodes).
Each node's spec is below:

CPU: Xeon E5-2620 (2.00GHz, 6 cores)
HDD: SATA 7200rpm 4TB x 12 (RAID 6)
NW: 10GbE
GlusterFS: glusterfs 3.4.2 built on Jan 3 2014 12:38:06

The volume is mounted on CentOS via the FUSE client.
This volume is the storage for our application, and I want to store 300 million to 5 billion files in it.

I performed a writing test, writing 32 KByte files x 10 million to this volume, and encountered a problem.

(1) Writing is slow, and it slows down further as the number of files increases.
Outside the cluster (a single node), this node's random write speed is 40 MByte/sec, but the write speed on the cluster is 3.6 MByte/sec.

(2) The ls command is very slow: about 20 seconds. Directory creation takes about 10 seconds at best.

Questions:

1) Is it possible to store 5 billion files in GlusterFS?
Has anyone succeeded in storing a billion files in GlusterFS?

2) Could you give me a link to a tuning guide, or some other tuning information?

Thanks.

-- Michitaka Terada
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
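The write test above can be sketched as a small script: create COUNT files of 32 KByte each under TARGET and report aggregate throughput. TARGET and COUNT are placeholders of mine, not anything from the original test; point TARGET at the FUSE mount of the volume and raise COUNT to approach the 10-million-file run.

```shell
# Defaults keep the sketch runnable anywhere; override via environment.
TARGET="${TARGET:-$(mktemp -d)/writetest}"
COUNT="${COUNT:-1000}"

mkdir -p "$TARGET"
start=$(date +%s)
i=0
while [ "$i" -lt "$COUNT" ]; do
    # one 32 KByte file per iteration
    dd if=/dev/zero of="$TARGET/f$i" bs=32k count=1 status=none
    i=$((i + 1))
done
end=$(date +%s)
elapsed=$((end - start)); [ "$elapsed" -gt 0 ] || elapsed=1
# 32 KByte = 32768 bytes; report MByte/sec
awk -v n="$COUNT" -v t="$elapsed" \
    'BEGIN { printf "%d files in %ds: %.2f MByte/sec\n", n, t, n*32768/t/1048576 }'
```

Running it first against a local disk gives the single-node baseline (the ~40 MByte/sec figure above), and then against the Gluster mount; the gap isolates per-file network round trips from raw disk speed.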