Re: Considering deprecation and removal of XZ compression (hbase-compression-xz)

2024-04-09 Thread Wei-Chiu Chuang
+1, same here: gzip/LZO in the past, Snappy or zstd now. On Tue, Apr 2, 2024 at 7:50 PM 张铎 (Duo Zhang) wrote: > For me I've never seen people actually use the xz compression. > For size, usually people will choose gzip, and for speed, in the past people will choose lzo and now they choose snappy or zstd. …

Re: Considering deprecation and removal of XZ compression (hbase-compression-xz)

2024-04-09 Thread Andrew Purtell
Let's remove it in 2.6.0. I will submit a PR. On Tue, Apr 2, 2024 at 7:50 PM 张铎 (Duo Zhang) wrote: > For me I've never seen people actually use the xz compression. > For size, usually people will choose gzip, and for speed, in the past people will choose lzo and now they choose snappy or zstd. …

Re: Considering deprecation and removal of XZ compression (hbase-compression-xz)

2024-04-02 Thread Duo Zhang
For me, I've never seen people actually use the xz compression. For size, usually people will choose gzip, and for speed, in the past people chose lzo and now they choose snappy or zstd. So I prefer we just deprecate the xz compression immediately and remove it in 2.6.0. …

Considering deprecation and removal of XZ compression (hbase-compression-xz)

2024-04-01 Thread Andrew Purtell
…master/hbase-compression/hbase-compression-xz). We depend on version 1.9 of xz-java, which was published in 2021, well before maintenance changes in the project and the involvement of a person who is now believed to be a malicious actor. Projects like HBase that depend on xz-java have no reason …

Compression codec snappy not supported, aborting RS construction

2023-12-22 Thread Adam Sjøgren
…anode start up fine, but when I try starting the HBase RegionServer I get: 10:23:52.522 [main] ERROR org.apache.hadoop.hbase.regionserver.HRegionServer - Failed construction RegionServer java.io.IOException: Compression codec snappy not supported, aborting RS construction …

Re: Clarification for WAL Compression doc

2020-04-23 Thread Andrey Elenskiy
…(https://github.com/apache/hbase/blob/7877e09b6023c80e8bacd25fb8e0b9273ed7d258/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java#L171), WAL compression isn't actually block based; it's entry based, and the dictionary doesn't need to be flushed explicitly, as it's written out as data is written. As …
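
For readers who want to try the feature being discussed, here is a minimal sketch of enabling WAL compression via its documented property, set programmatically only for illustration (it normally belongs in hbase-site.xml, and region servers read it at startup):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableWalCompression {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Same effect as setting hbase.regionserver.wal.enablecompression=true
        // in hbase-site.xml on every region server.
        conf.setBoolean("hbase.regionserver.wal.enablecompression", true);
        System.out.println(
            conf.getBoolean("hbase.regionserver.wal.enablecompression", false));
      }
    }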

Re: Clarification for WAL Compression doc

2020-04-21 Thread Stack
On Tue, Apr 14, 2020 at 1:16 PM Andrey Elenskiy wrote: > Hello, I'm trying to understand the extent of the following issue mentioned in the "WAL Compression" doc: https://hbase.apache.org/book.html#wal.compression > A possible downside to WAL compression is …

Clarification for WAL Compression doc

2020-04-14 Thread Andrey Elenskiy
Hello, I'm trying to understand the extent of the following issue mentioned in the "WAL Compression" doc: https://hbase.apache.org/book.html#wal.compression > A possible downside to WAL compression is that we lose more data from the last block in the WAL if it is ill-terminated mid-write. …

Exploring Flash and Compression Acceleration

2018-05-04 Thread Thad Omura
…compression. We have been able to prove some nice gains on YCSB synthetic workloads (complete system benchmark info and results are summarized at http://www.scaleflux.com/downloads/ScaleFlux_HBase_Solution_Brief.pdf), but we'd like to reach out to the community to see if there are any HBase users …

Re: Using snappy compression with standalone HBase?

2017-10-04 Thread schausson
Hi, finally I figured it out: I had to download libhadoop.so and reference its location with HBASE_LIBRARY_PATH. Now it works fine! -- Sent from: http://apache-hbase.679495.n3.nabble.com/HBase-User-f4020416.html

Re: Using snappy compression with standalone HBase?

2017-10-04 Thread schausson
Hi Ted, thanks for your help. I deployed HBase 1.2.5, and in the lib folder I can see a bunch of Hadoop jars, all of them from the 2.5.1 release: hadoop-annotations-2.5.1.jar hadoop-auth-2.5.1.jar hadoop-client-2.5.1.jar hadoop-common-2.5.1.jar hadoop-hdfs-2.5.1.jar hadoop-mapreduce-client-app-2.5.1.jar …

Re: Using snappy compression with standalone HBase?

2017-10-03 Thread Ted Yu
…at 10:21 AM, schausson wrote: > Hi, I'm experimenting with HBase on a brand new Linux VM (Ubuntu), as a standalone installation (I don't have any Hadoop distribution on my VM, which is worth mentioning). I would like to test compression options, but couldn't …

Using snappy compression with standalone HBase?

2017-10-03 Thread schausson
Hi, I'm experimenting with HBase on a brand new Linux VM (Ubuntu), as a standalone installation (I don't have any Hadoop distribution on my VM, which is worth mentioning). I would like to test compression options, but couldn't figure out how to make it work: I manually installed …

Re: After compression, the table folders under hdfs://hbase are empty

2015-01-29 Thread Artem Ervits
…'hbase shell'. It shows nothing related to data loading. BTW, I use happybase (Python's HBase package) to load data into HBase. > I cannot find any similar files in the hdfs://hbase folder; what I can find are the empty column family folders. …

Re: After compression, the table folders under hdfs://hbase are empty

2015-01-29 Thread Artem Ervits
…into HBase. > I cannot find any similar files in the hdfs://hbase folder; what I can find are the empty column family folders. But if I switch back to COMPRESSION => 'NONE' then all the files appear in those column family folders. …

Re: After compression, the table folders under hdfs://hbase are empty

2015-01-28 Thread kennyut
…empty column family folders. But if I switch back to COMPRESSION => 'NONE' then all the files appear in those column family folders. -- View this message in context: http://apache-hbase.679495.n3.nabble.com/After-compression-the-table-folders-under-hdfs-hbase-are-empty-tp4067921p4067947.html Sent from the HBase User mailing list archive at Nabble.com.

Re: After compression, the table folders under hdfs://hbase are empty

2015-01-28 Thread kennyut
Thanks Jean and Ted. I think I've found it; it's under /var/log/hbase. I am looking into the file and will post an update after. -- View this message in context: http://apache-hbase.679495.n3.nabble.com/After-compression-the-table-folders-under-hdfs-hbase-are-empty-tp4067921p4067946.html

Re: After compression, the table folders under hdfs://hbase are empty

2015-01-28 Thread Jean-Marc Spaggiari
> 0.98.1-cdh5.1.3, rUnknown, Tue Sep 16 20:19:34 PDT 2014 > But where can I find my HBase region server log? I used all default options when installing HBase, including all configurations. > Thanks for your reply! …

Re: After compression, the table folders under hdfs://hbase are empty

2015-01-28 Thread Ted Yu
…configurations. > Thanks for your reply! -- View this message in context: http://apache-hbase.679495.n3.nabble.com/After-compression-the-table-folders-under-hdfs-hbase-are-empty-tp4067921p4067938.html Sent from the HBase User mailing list archive at Nabble.com.

Re: After compression, the table folders under hdfs://hbase are empty

2015-01-28 Thread kennyut
…your reply! -- View this message in context: http://apache-hbase.679495.n3.nabble.com/After-compression-the-table-folders-under-hdfs-hbase-are-empty-tp4067921p4067938.html Sent from the HBase User mailing list archive at Nabble.com.

Re: After compression, the table folders under hdfs://hbase are empty

2015-01-27 Thread Ted Yu
Which HBase release are you using? Have you checked the region server log(s) and looked for SMT_KO2? Cheers On Tue, Jan 27, 2015 at 2:54 PM, kennyut wrote: > I tried to test HBase's data compression using the two separate create statements below: > non-compression: create '…

After compression, the table folders under hdfs://hbase are empty

2015-01-27 Thread kennyut
I tried to test HBase's data compression using the two separate create statements below. Non-compression: create 'SMT_KO1', {NAME=>'info', COMPRESSION=>'NONE', VERSIONS => 5}, {NAME=>'usg', COMPRESSION=>'NONE', VERSIONS => 5} …

Re: Hbase: Bulk Loading with Compression and DBE

2014-12-14 Thread Ted Yu
> emits ImmutableBytesWritable, KeyValue pairs. I declare a pre-split table where the column families have compression set to SNAPPY and data block encoding set to PREFIX_TREE (hcd.setCompressionType(Algorithm.SNAPPY); and hcd.setDataBlockEncoding(DataBlockEncoding.PREFIX_TREE)) …

Hbase: Bulk Loading with Compression and DBE

2014-12-14 Thread Shashwat Mishra
Hi all, I am trying to bulk load some network data into an HBase table. My mapper emits ImmutableBytesWritable, KeyValue pairs. I declare a pre-split table where the column families have compression set to SNAPPY and data block encoding set to PREFIX_TREE (hcd.setCompressionType(Algorithm.SNAPPY); and hcd.setDataBlockEncoding(DataBlockEncoding.PREFIX_TREE)) …
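
A minimal sketch of the column-family setup described above, assuming the 0.98-era client API; the table name "network_data" and family name "d" are illustrative:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;

    public class BulkLoadFamilySetup {
      public static void main(String[] args) {
        // Hypothetical table and family names, for illustration only.
        HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("network_data"));
        HColumnDescriptor hcd = new HColumnDescriptor("d");
        hcd.setCompressionType(Algorithm.SNAPPY);                // on-disk block compression
        hcd.setDataBlockEncoding(DataBlockEncoding.PREFIX_TREE); // key encoding inside blocks
        desc.addFamily(hcd);
        System.out.println(desc);
      }
    }

When HFileOutputFormat's incremental-load configuration is pointed at such a table, the generated HFiles pick up the family's compression and encoding settings.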

Re: Snappy compression not working with HBase 0.98.3

2014-07-23 Thread Hanish Bansal
…libhadoop.so and libsnappy.so to the HBase native library folder at $HBASE_HOME/lib/native/Linux-amd64-64/. > It also didn't work. > *Ran a compression test using the tool, getting the below error:* > [root@IMPETUS-I0141 hbase-0.98.3-hadoop2]# bin/hbase …

Re: Snappy compression not working with HBase 0.98.3

2014-07-14 Thread Stack
…Linux-amd64-64 > As hadoop holds the hadoop and snappy libraries, it should work. But it didn't. > 2. Copied libhadoop.so and libsnappy.so to the HBase native library folder at $HBASE_HOME/lib/native/Linux-amd64-64/. > It also didn't work. > *Ran a compression test …

Re: Snappy compression not working with HBase 0.98.3

2014-07-14 Thread Jean-Marc Spaggiari
…hadoop holds the hadoop and snappy libraries, it should work. But it didn't. > 2. Copied libhadoop.so and libsnappy.so to the HBase native library folder at $HBASE_HOME/lib/native/Linux-amd64-64/. > It also didn't work. > *Ran a compression test using the tool, getting the below error:* …

Re: Snappy compression not working with HBase 0.98.3

2014-07-14 Thread Hanish Bansal
…libsnappy.so to the HBase native library folder at $HBASE_HOME/lib/native/Linux-amd64-64/. It also didn't work. *Ran a compression test using the tool, getting the below error:* [root@IMPETUS-I0141 hbase-0.98.3-hadoop2]# bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy 2014-07-11 …

Re: Snappy compression not working with HBase 0.98.3

2014-07-13 Thread Stack
On Sun, Jul 13, 2014 at 10:28 PM, Esteban Gutierrez wrote: > Hello Ankit, > The only reason the test can fail on the master is that the snappy native libraries are not installed correctly. Have you tried to run the compression test (hbase org.apache.hadoop.hbase.util.CompressionTest …

Re: Snappy compression not working with HBase 0.98.3

2014-07-13 Thread Esteban Gutierrez
Hello Ankit, the only reason the test can fail on the master is that the snappy native libraries are not installed correctly. Have you tried to run the compression test (hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp snappy) on the master? Does it work? If it works correctly …

Re: Snappy compression not working with HBase 0.98.3

2014-07-11 Thread Ankit Jain
…Hanish, > Since 0.95 a test for compression was added to the HBase Master; you now need to make sure the native libraries are installed on the HBase Master(s) and not just on the Region Servers (see HBASE-6370 for details about this change). > Regards, Esteban. …

Re: Snappy compression not working with HBase 0.98.3

2014-07-11 Thread Esteban Gutierrez
Hello Hanish, since 0.95 a test for compression was added to the HBase Master; you now need to make sure the native libraries are installed on the HBase Master(s) and not just on the Region Servers (see HBASE-6370 for details about this change). Regards, Esteban. -- Cloudera, Inc. On Fri …

Re: Snappy compression not working with HBase 0.98.3

2014-07-11 Thread Ted Yu
Please see http://hbase.apache.org/book.html#snappy.compression.installation Cheers On Fri, Jul 11, 2014 at 3:37 AM, Hanish Bansal < hanish.bansal.agar...@gmail.com> wrote: > We are using hbase 0.98.3 with hadoop 2.4.0. > Ran a compression test using the tool, getting the below error: …

RE: Snappy compression not working with HBase 0.98.3

2014-07-11 Thread Kashif Jawed Siddiqui
Add hadoop\lib\native to the HBase classpath; $HADOOP_HOME\lib\native contains the snappy libs. Thumbs up! KASHIF -Original Message- From: Hanish Bansal [mailto:hanish.bansal.agar...@gmail.com] Sent: 11 July 2014 16:08 To: user@hbase.apache.org Subject: Re: Snappy compression not working with HBase 0.98.3

Re: Snappy compression not working with HBase 0.98.3

2014-07-11 Thread Hanish Bansal
We are using HBase 0.98.3 with Hadoop 2.4.0. Ran a compression test using the tool, getting the below error: [root@IMPETUS-I0141 hbase-0.98.3-hadoop2]# bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy 2014-07-11 16:05:10,572 INFO [main] Configuration.deprecation …

Snappy compression not working with HBase 0.98.3

2014-07-11 Thread Hanish Bansal
Hi All, recently I upgraded our HBase environment from 0.94 to 0.98.3 and am now trying to use snappy compression with it. I installed the snappy library per the guide at https://hbase.apache.org/book/snappy.compression.html When I create a table with snappy compression enabled, I …

Re: Does compression ever improve performance?

2014-06-16 Thread Michael Segel
That works since you don't need a region to be splittable… On Jun 14, 2014, at 4:36 PM, Kevin O'dell wrote: > Hi Jeremy, > I always recommend turning on snappy compression; I have seen ~20% performance increases. On Jun 14, 2014 10:25 AM, "Ted Yu" wrote: …

Re: Does compression ever improve performance?

2014-06-15 Thread lars hofhansl
…on the one hand you fit more data into the block cache (which is unlike compression, where the data is uncompressed before the blocks get cached), but on the other hand much more garbage is produced during scanning, and more CPU and memory bandwidth is used. So you need to test for your use case …

Re: Does compression ever improve performance?

2014-06-15 Thread Ted Yu
…but this question seems relevant: > Does data block encoding also help performance, or does it just enable more efficient compression? > --Tom > On Saturday, June 14, 2014, Guillermo Ortiz wrote: > I would like to see the times they got doing some scans …

Re: Does compression ever improve performance?

2014-06-15 Thread Tom Brown
I don't mean to hijack the thread, but this question seems relevant: does data block encoding also help performance, or does it just enable more efficient compression? --Tom On Saturday, June 14, 2014, Guillermo Ortiz wrote: > I would like to see the times they got doing some scans …

Re: Does compression ever improve performance?

2014-06-14 Thread Guillermo Ortiz
I would like to see the times they got doing scans or gets in the benchmark on compression and block encoding, to figure out how much time you save if your data are smaller but you have to decompress them. On Saturday, June 14, 2014, Kevin O'dell wrote: > Hi Jeremy, …

Re: Does compression ever improve performance?

2014-06-14 Thread Kevin O'dell
Hi Jeremy, I always recommend turning on snappy compression; I have seen ~20% performance increases. On Jun 14, 2014 10:25 AM, "Ted Yu" wrote: > You may have read Doug Meil's writeup where he tried out different ColumnFamily compressions: > https://blogs.apache.org/hbase/ …

Re: Does compression ever improve performance?

2014-06-14 Thread Ted Yu
You may have read Doug Meil's writeup where he tried out different ColumnFamily compressions: https://blogs.apache.org/hbase/ Cheers On Fri, Jun 13, 2014 at 11:33 AM, jeremy p wrote: > Thank you -- I'll go ahead and try compression. > --Jeremy > On Fri, Jun 13, 2014 …

Re: Does compression ever improve performance?

2014-06-13 Thread jeremy p
Thank you -- I'll go ahead and try compression. --Jeremy On Fri, Jun 13, 2014 at 10:59 AM, Dima Spivak wrote: > I'd highly recommend it. In general, compressing your column families will improve performance by reducing the resources required to get data from disk …

Re: Does compression ever improve performance?

2014-06-13 Thread Dima Spivak
…wrote: > Hey all, > Right now, I'm not using compression on any of my tables, because our data doesn't take up a huge amount of space. However, I would turn on compression if there was a chance it would improve HBase's performance. By performance, I'm …

Does compression ever improve performance?

2014-06-13 Thread jeremy p
Hey all, right now I'm not using compression on any of my tables, because our data doesn't take up a huge amount of space. However, I would turn on compression if there was a chance it would improve HBase's performance. By performance, I'm talking about the speed with which …

Re: How to specify a compression algorithm when creating a table with the HBaseAdmin object?

2014-06-12 Thread jeremy p
…> Hi Jeremy, > Here is some code that creates a table using the HBaseAdmin API, with a bunch of options such as compression and specified key boundaries: > http://pastebin.com/KNcv03bj > The user-specified options …

Re: How to specify a compression algorithm when creating a table with the HBaseAdmin object?

2014-06-12 Thread Jean-Marc Spaggiari
…On Wed, Jun 11, 2014 at 4:34 PM, Subbiah, Suresh wrote: > Hi Jeremy, > Here is some code that creates a table using the HBaseAdmin API, with a bunch of options such as compression and specified key boundaries: > http://pastebin.com/KNcv03bj …

Re: How to specify a compression algorithm when creating a table with the HBaseAdmin object?

2014-06-12 Thread jeremy p
Awesome -- thank you both! --Jeremy On Wed, Jun 11, 2014 at 4:34 PM, Subbiah, Suresh wrote: > Hi Jeremy, > Here is some code that creates a table using the HBaseAdmin API, with a bunch of options such as compression and specified key boundaries: > http://pastebin.com/KNcv03bj …

RE: How to specify a compression algorithm when creating a table with the HBaseAdmin object?

2014-06-11 Thread Subbiah, Suresh
Hi Jeremy, here is some code that creates a table using the HBaseAdmin API, with a bunch of options such as compression and specified key boundaries: http://pastebin.com/KNcv03bj The user-specified options will be in the StringArrayList tableOptions. This is part of the Trafodion code …
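
For reference, a minimal sketch of what such code looks like with the 0.94/0.98-era HBaseAdmin API; this is not the Trafodion code itself, and the table, family, and key names are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CreateCompressedTable {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("mytable"));
        HColumnDescriptor cf = new HColumnDescriptor("cf");
        cf.setCompressionType(Algorithm.SNAPPY); // compression is a per-family option
        desc.addFamily(cf);
        // Pre-split into 10 regions between the given start and end keys.
        admin.createTable(desc, Bytes.toBytes("row-00000"), Bytes.toBytes("row-99999"), 10);
        admin.close();
      }
    }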

Re: How to specify a compression algorithm when creating a table with the HBaseAdmin object?

2014-06-11 Thread Jean-Marc Spaggiari
…jeremy p: > I'm currently creating a table using the HBaseAdmin object. The reason I'm doing it with the HBaseAdmin object is that I need to pre-split the table by specifying the start key, end key, and number of regions. I want to use Snappy compression for this table …

How to specify a compression algorithm when creating a table with the HBaseAdmin object?

2014-06-11 Thread jeremy p
I'm currently creating a table using the HBaseAdmin object. The reason I'm doing it with the HBaseAdmin object is that I need to pre-split the table by specifying the start key, end key, and number of regions. I want to use Snappy compression for this table; however, I haven't seen …

Re: test compression in hbase

2014-03-25 Thread Shahab Yunus
It says: RemoteException(java.io.IOException): /hbase/test is non empty. Is the directory empty, or are there files from some previous runs? Does the user have access to delete the data here? Regards, Shahab On Tue, Mar 25, 2014 at 7:42 AM, Mohamed Ghareb wrote: > How can I test snappy compression …

test compression in hbase

2014-03-25 Thread Mohamed Ghareb
How can I test snappy compression in HBase? I ran the command below: hbase org.apache.hadoop.hbase.util.CompressionTest /hbase/test snappy The test table exists and is empty, but I get an error: 14/03/25 13:12:01 DEBUG util.FSUtils: Creating file=/hbase/test with permission=rwxrwxrwx 14/03/25 13 …

Re: Snappy compression question

2014-01-09 Thread Rural Hunter
Yes, I followed that part to build it, but I didn't follow the configuration part (core-site.xml). On 2014/1/10 8:48, Ted Yu wrote: Rural: Just to confirm, you were following the instructions here: https://code.google.com/p/hadoop-snappy/ Cheers

Re: Snappy compression question

2014-01-09 Thread Ted Yu
Rural: Just to confirm, you were following the instructions here: https://code.google.com/p/hadoop-snappy/ Cheers On Wed, Jan 8, 2014 at 5:34 PM, Ted Yu wrote: > It's okay. Either J-M or myself can come up with a patch. > Cheers > On Wed, Jan 8, 2014 at 5:32 PM, Rural Hunter wrote: …

Re: Snappy compression question

2014-01-08 Thread Ted Yu
It's okay. Either J-M or myself can come up with a patch. Cheers On Wed, Jan 8, 2014 at 5:32 PM, Rural Hunter wrote: > Sorry, I think my English is not good enough to provide a patch for the documentation, so I just created a JIRA and put my thoughts in it: > https://issues.apache.org/jira/browse/HBASE-10303

Re: Snappy compression question

2014-01-08 Thread Rural Hunter
Sorry, I think my English is not good enough to provide a patch for the documentation, so I just created a JIRA and put my thoughts in it: https://issues.apache.org/jira/browse/HBASE-10303 On 2014/1/4 20:23, Jean-Marc Spaggiari wrote: Hi Rural, if you have any recommendation on the way to complete it, …

Re: Snappy compression question

2014-01-08 Thread Ted Yu
…64 bits OS, as the libhadoop.so in the binary package is only for 32 bits OS. It also didn't mention that you actually need both snappy and hadoop-snappy. On 2014/1/3 19:20, 张玉雪 wrote: > Hi: …

Re: Snappy compression question

2014-01-04 Thread Jean-Marc Spaggiari
…64 bits OS, as the libhadoop.so in the binary package is only for 32 bits OS. It also didn't mention that you actually need both snappy and hadoop-snappy. > On 2014/1/3 19:20, 张玉雪 wrote: > Hi: > When I used hadoop 2.2.0 and hbase 0.96.1.1 to use snappy compression …

Re: Snappy compression question

2014-01-03 Thread Rural Hunter
…only for 32 bits OS. It also didn't mention that you actually need both snappy and hadoop-snappy. On 2014/1/3 19:20, 张玉雪 wrote: Hi: When I used hadoop 2.2.0 and hbase 0.96.1.1 to use snappy compression, I followed the topic http://hbase.apache.org/book/snappy.compression.html, …

Re: Snappy compression question

2014-01-03 Thread Jean-Marc Spaggiari
Shameless plug ;) http://www.spaggiari.org/index.php/hbase/how-to-install-snappy-with-1 Keep us posted. 2014/1/3 Ted Yu: > See this thread: > http://search-hadoop.com/m/LviZD1WPToG/Snappy+libhadoop&subj=RE+Setting+up+Snappy+compression+in+Hadoop > On Jan 3, 2014, at …

Re: Snappy compression question

2014-01-03 Thread Ted Yu
See this thread: http://search-hadoop.com/m/LviZD1WPToG/Snappy+libhadoop&subj=RE+Setting+up+Snappy+compression+in+Hadoop On Jan 3, 2014, at 3:20 AM, 张玉雪 wrote: > Hi: > When I used hadoop 2.2.0 and hbase 0.96.1.1 to use snappy compression, > I followed …

Snappy compression question

2014-01-03 Thread 张玉雪
Hi: When I used hadoop 2.2.0 and hbase 0.96.1.1 to use snappy compression, I followed the topic http://hbase.apache.org/book/snappy.compression.html, but I got some errors; can someone help me? [hadoop@master bin]$ hbase org.apache.hadoop.hbase.util.CompressionTest …

Re: How to verify your COMPRESSION policy takes effect?

2013-11-13 Thread Jean-Marc Spaggiari
Create another table with the same schema but without the compression, insert the same thing into the two tables, and compare the footprint? On 2013-11-13 05:58, "Jia Wang" wrote: > Hi Folks > I added the COMPRESSION value after creating a table, so here is the table …
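
A minimal sketch of that footprint comparison, assuming the pre-0.96 layout where tables live directly under /hbase; SPLIT_TEST_BIG is the table from this thread, while SPLIT_TEST_BIG_NOCOMP is a hypothetical uncompressed twin:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CompareFootprint {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        FileSystem fs = FileSystem.get(conf);
        // Sum the bytes stored under each table's directory.
        for (String table : new String[] {"SPLIT_TEST_BIG", "SPLIT_TEST_BIG_NOCOMP"}) {
          long bytes = fs.getContentSummary(new Path("/hbase", table)).getLength();
          System.out.println(table + ": " + bytes + " bytes");
        }
      }
    }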

How to verify your COMPRESSION policy takes effect?

2013-11-13 Thread Jia Wang
Hi Folks, I added the COMPRESSION value after creating a table, so here is the table description: {NAME => 'SPLIT_TEST_BIG', SPLIT_POLICY => 'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy', MAX_FILESIZE => '107374182400', FAMILIES => [{NAME …

Re: Hbase Compression

2013-09-24 Thread aiyoh79
…119.b19289cf9b1400c6daddc347337bac03. in 1163ms, sequenceid=687077, compaction requested=false > It seems like it will first flush into a tmp file where the memsize is 122.1m, but when it is finally added, the size is 64.4m. Lastly, there are 2 more …

Re: Hbase Compression

2013-09-24 Thread aiyoh79
…added, the size is 64.4m. Lastly, there are 2 more parameters, which are 128.2m and 48.0m for currentsize. > I never specified the hbase.regionserver.codecs property in my hbase-site.xml file, so is the size difference still because of compression? > Thanks …

Re: Hbase Compression

2013-09-24 Thread Jean-Daniel Cryans
…requested=false > It seems like it will first flush into a tmp file where the memsize is 122.1m, but when it is finally added, the size is 64.4m. Lastly, there are 2 more parameters, which are 128.2m and 48.0m for currentsize. > I never specified the hbase.regionserver.codecs property …

Re: Hbase Compression

2013-09-24 Thread Ted Yu
…when it is finally added, the size is 64.4m. Lastly, there are 2 more parameters, which are 128.2m and 48.0m for currentsize. > I never specified the hbase.regionserver.codecs property in my hbase-site.xml file, so is the size difference still because of compression? > Thanks, …

Hbase Compression

2013-09-24 Thread aiyoh79
…when it is finally added, the size is 64.4m. Lastly, there are 2 more parameters, which are 128.2m and 48.0m for currentsize. I never specified the hbase.regionserver.codecs property in my hbase-site.xml file, so is the size difference still because of compression? Thanks, aiyoh79 -- View this message in context: http…

Re: Activating LZO compression on HBase

2013-06-09 Thread Azuryy Yu
…purpose. > @Ashwanth: thank you, I have already seen this link, but it's not helping me. > On Sat, Jun 8, 2013 at 7:40 PM, Kevin O'dell wrote: > I don't want to start a compression war here, but is there a reason you are trying to use LZO over Snappy? …

RE: Compression class loading mismatch in 0.94.2

2013-06-09 Thread Levy Meny
…@gmail.com] Sent: Sunday, June 09, 2013 4:33 PM To: user@hbase.apache.org Cc: Yaniv Ofer Subject: Re: Compression class loading mismatch in 0.94.2 The change happened in 0.94.5; please see HBASE-5458. Cheers On Sun, Jun 9, 2013 at 5:35 AM, Levy Meny wrote: > Hi, > Anyone know why org.apache.hadoop.hbase.io.hfile.Compression …

Re: Compression class loading mismatch in 0.94.2

2013-06-09 Thread Ted Yu
The change happened in 0.94.5; please see HBASE-5458. Cheers On Sun, Jun 9, 2013 at 5:35 AM, Levy Meny wrote: > Hi, > Anyone know why org.apache.hadoop.hbase.io.hfile.Compression was changed in 0.94.2 to use the SystemClassLoader to load the snappy class, instead of the ContextClassLoader in previous versions (e.g. 0.92.1)? …

Compression class loading mismatch in 0.94.2

2013-06-09 Thread Levy Meny
Hi, anyone know why org.apache.hadoop.hbase.io.hfile.Compression was changed in 0.94.2 to use the SystemClassLoader to load the snappy class, instead of the ContextClassLoader in previous versions (e.g. 0.92.1)? private CompressionCodec buildCodec(Configuration conf) { try { …
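
For context, a minimal sketch of the class-loading difference being asked about; the codec class name is Hadoop's, but the surrounding code is illustrative, not the actual HBase source:

    public class CodecLoading {
      public static void main(String[] args) throws Exception {
        String codec = "org.apache.hadoop.io.compress.SnappyCodec";
        // Pre-0.94.2 behavior: the context class loader also sees jars that
        // the running application or container added at runtime.
        Class<?> viaContext = Class.forName(
            codec, true, Thread.currentThread().getContextClassLoader());
        // 0.94.2 behavior: the system class loader only sees the JVM's
        // original launch classpath, so runtime-added jars are missed.
        Class<?> viaSystem = Class.forName(
            codec, true, ClassLoader.getSystemClassLoader());
        System.out.println(viaContext + " / " + viaSystem);
      }
    }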

Re: Activating LZO compression on HBase

2013-06-08 Thread priyanka raichand
@Alok: thanks for correcting it. @Kevin: I need to use both snappy and LZO, just for some analysis purposes. @Ashwanth: thank you, I have already seen this link, but it's not helping me. On Sat, Jun 8, 2013 at 7:40 PM, Kevin O'dell wrote: > I don't want to start a compression war …

Re: Activating LZO compression on HBase

2013-06-08 Thread Kevin O'dell
I don't want to start a compression war here, but is there a reason you are trying to use LZO over Snappy? On Sat, Jun 8, 2013 at 9:54 AM, Ashwanth Kumar wrote: > Check this out: > https://github.com/twitter/hadoop-lzo/issues/35 > On Sat, Jun 8, 2013 at 7:20 PM …

Re: Activating LZO compression on HBase

2013-06-08 Thread Ashwanth Kumar
Check this out: https://github.com/twitter/hadoop-lzo/issues/35 On Sat, Jun 8, 2013 at 7:20 PM, Alok Singh Mahor wrote: > On Sat, Jun 8, 2013 at 7:04 PM, priyanka raichand < raichand.priya...@gmail.com> wrote: > Hello everyone, > I am trying to activate LZO …

Re: Activating LZO compression on HBase

2013-06-08 Thread Alok Singh Mahor
On Sat, Jun 8, 2013 at 7:04 PM, priyanka raichand < raichand.priya...@gmail.com> wrote: > Hello everyone, > I am trying to activate LZO compression in HBase with the help of the following: > http://www.nosql.se/2011/09/activating-lzo-compression-in-hbase/ > I am getting an error at …

Activating LZO compression on HBase

2013-06-08 Thread priyanka raichand
Hello everyone, I am trying to activate LZO compression in HBase with the help of the following: http://www.nosql.se/2011/09/activating-lzo-compression-in-hbase/ I am getting an error at the 7th step in that link (ant compile-native); you can see the error here: http://paste.ubuntu.com/5745079/ I have …

Re: RPC Replication Compression

2013-06-04 Thread Stack
On Tue, Jun 4, 2013 at 6:48 PM, Jean-Daniel Cryans wrote: > Replication doesn't need to know about compression at the RPC level, so it won't refer to it, and as far as I can tell you need to set compression only on the master cluster and the slave will figure it out. …

Re: RPC Replication Compression

2013-06-04 Thread Jean-Daniel Cryans
Replication doesn't need to know about compression at the RPC level, so it won't refer to it, and as far as I can tell you need to set compression only on the master cluster and the slave will figure it out. Looking at the code though, I'm not sure it works the same way it used …

Re: RPC Replication Compression

2013-06-04 Thread Asaf Mesika
If RPC has compression abilities, how come Replication, which also works over RPC, does not get it automatically? On Tue, Jun 4, 2013 at 12:34 PM, Anoop John wrote: > > 0.96 will support HBase RPC compression > Yes > > Replication between master and slave will enjoy …

Re: RPC Replication Compression

2013-06-04 Thread Anoop John
> 0.96 will support HBase RPC compression Yes > Replication between master and slave will enjoy it as well (important since bandwidth between geographically distant data centers is scarce and more expensive) But I cannot see it being utilized in replication. Maybe we can do an improvement …

RPC Replication Compression

2013-06-04 Thread Asaf Mesika
Hi, just wanted to make sure I read the internet correctly: 0.96 will support HBase RPC compression, thus Replication between master and slave will enjoy it as well (important since bandwidth between geographically distant data centers is scarce and more expensive).
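
A minimal sketch of turning on client RPC compression in 0.96+, using the hbase.client.rpc.compressor property from the reference guide; it is set programmatically here only for illustration, and servers normally get it from hbase-site.xml:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class EnableRpcCompression {
      public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();
        // Compress RPC traffic with gzip; both sides of the connection
        // need the setting for it to take effect.
        conf.set("hbase.client.rpc.compressor",
            "org.apache.hadoop.io.compress.GzipCodec");
        System.out.println(conf.get("hbase.client.rpc.compressor"));
      }
    }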

Re: should i use compression?

2013-04-03 Thread Marcos Luis Ortiz Valmaseda
…supposedly faster. Compress using: - store the size of the common prefix - save the column family once in the first KeyValue - use integer compression for key, value and prefix (7-bit encoding) - use bits to avoid duplicating key length, value length and type if the same as the previous - store in 3 bits the length …

Re: should i use compression?

2013-04-03 Thread Marcos Luis Ortiz Valmaseda
…prakash kadel: > Thank you very much. I will try snappy compression with data_block_encoding. > On Wed, Apr 3, 2013 at 11:21 PM, Kevin O'dell wrote: > Prakash, …

Re: should i use compression?

2013-04-03 Thread Marcos Luis Ortiz Valmaseda
…any documentation anywhere regarding the differences between PREFIX, DIFF and FAST_DIFF? > 2013/4/3 prakash kadel: > Thank you very much. I will try snappy compression with data_block_encoding. > On Wed, Apr 3, 2013 …

Re: should i use compression?

2013-04-03 Thread Jean-Marc Spaggiari
Is there any documentation anywhere regarding the differences between PREFIX, DIFF and FAST_DIFF? 2013/4/3 prakash kadel: > Thank you very much. I will try snappy compression with data_block_encoding. > On Wed, Apr 3, 2013 at 11:21 PM, Kevin O'dell wrote: …

Re: should i use compression?

2013-04-03 Thread prakash kadel
Thank you very much. I will try snappy compression with data_block_encoding. On Wed, Apr 3, 2013 at 11:21 PM, Kevin O'dell wrote: > Prakash, > Yes, I would recommend Snappy compression. > On Wed, Apr 3, 2013 at 10:18 AM, Prakash Kadel wrote: …

Re: should i use compression?

2013-04-03 Thread Ted Yu
Another commonly used encoding is FAST_DIFF. Cheers On Wed, Apr 3, 2013 at 7:18 AM, Prakash Kadel wrote: > Thanks, is there any specific compression that is recommended for the use case I have? > Since my values are all null, will compression help? > I am thinking …

Re: should i use compression?

2013-04-03 Thread Kevin O'dell
Prakash, yes, I would recommend Snappy compression. On Wed, Apr 3, 2013 at 10:18 AM, Prakash Kadel wrote: > Thanks, is there any specific compression that is recommended for the use case I have? > Since my values are all null, will compression help? > I am thinking …

Re: should i use compression?

2013-04-03 Thread Prakash Kadel
Thanks, is there any specific compression that is recommended for the use case I have? Since my values are all null, will compression help? I am thinking of using the PREFIX data_block_encoding. Sincerely, Prakash Kadel On Apr 3, 2013, at 10:55 PM, Ted Yu wrote: > You should use data block encoding …

Re: should i use compression?

2013-04-03 Thread Marcos Luis Ortiz Valmaseda
+1 for Ted's advice. Using compression can save a lot of space in memory and on disk, so it's a good recommendation. 2013/4/3 Ted Yu: > You should use data block encoding (in 0.94.x releases only). It is helpful for reads. > You can also enable compression. > Cheers …

Re: should i use compression?

2013-04-03 Thread Ted Yu
You should use data block encoding (in 0.94.x releases only). It is helpful for reads. You can also enable compression. Cheers On Wed, Apr 3, 2013 at 6:42 AM, Prakash Kadel wrote: > Hello, I have a question. I have a table where I store data in the column qualifiers (the values themselves are null) …

should i use compression?

2013-04-03 Thread Prakash Kadel
Hello, I have a question. I have a table where I store data in the column qualifiers (the values themselves are null). I just have one column family, and the number of columns per row is variable (1 to a few thousand). Currently I don't use compression or data_block_encoding. Should I …
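
A minimal sketch of enabling the encoding and compression recommended in this thread, assuming the 0.94-era descriptor API (note the Compression class lived under io.hfile back then); the family name "f" is illustrative:

    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
    import org.apache.hadoop.hbase.io.hfile.Compression.Algorithm;

    public class EncodingExample {
      public static void main(String[] args) {
        HColumnDescriptor hcd = new HColumnDescriptor("f");
        hcd.setDataBlockEncoding(DataBlockEncoding.FAST_DIFF); // shrinks repeated keys/qualifiers
        hcd.setCompressionType(Algorithm.SNAPPY);              // compresses blocks on disk
        System.out.println(hcd);
      }
    }

FAST_DIFF suits this use case because the values are null and all the bytes live in the keys, which is exactly what data block encoding compacts.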

Re: Change the compression algorithm

2012-12-01 Thread Jean-Marc Spaggiari
Perfect, thanks. I will use the hbase.regionserver.codecs option to make sure the new algorithm is working fine. JM 2012/12/1, Kevin O'dell: > JM, > You are correct. Just disable the table, alter the compression, and then major and minor compactions will run their course. …

Re: Change the compression algorithm

2012-12-01 Thread Kevin O'dell
JM, you are correct. Just disable the table, alter the compression, and then major and minor compactions will run their course. Please remember to run the compression test first to make sure there is nothing wrong with the new algorithm. On Dec 1, 2012 3:21 PM, "Jean-Marc Spaggiari" …

Change the compression algorithm

2012-12-01 Thread Jean-Marc Spaggiari
Hi, what's the right way to change the compression algorithm for a CF? Can I simply disable the table, alter the table with the new compression algorithm info, and enable it back? As a result, I will still have a table compressed with the previous algorithm, but at some point in the future, it …
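
A minimal sketch of the disable/alter/enable/compact flow confirmed in this thread, assuming the 0.94-era HBaseAdmin API; the table and family names are illustrative:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.io.hfile.Compression.Algorithm;
    import org.apache.hadoop.hbase.util.Bytes;

    public class ChangeCompression {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        admin.disableTable("mytable");
        HColumnDescriptor hcd = admin.getTableDescriptor(Bytes.toBytes("mytable"))
            .getFamily(Bytes.toBytes("cf"));
        hcd.setCompressionType(Algorithm.SNAPPY); // the new algorithm for the family
        admin.modifyColumn("mytable", hcd);       // alter while the table is disabled
        admin.enableTable("mytable");
        // Existing HFiles are rewritten with the new codec as compactions run;
        // a major compaction forces the rewrite immediately.
        admin.majorCompact("mytable");
        admin.close();
      }
    }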
