+1 same here. gzip/lzo in the past, Snappy or zstd now.
Let's remove it in 2.6.0. I will submit a PR.
For me, I've never seen people actually use the xz compression.
For size, people will usually choose gzip, and for speed, in the past
people would choose LZO and now they choose Snappy or zstd.
So I prefer we just deprecate the xz compression immediately
and remove it in 2.6.0.
…master/hbase-compression/hbase-compression-xz).
We depend on version 1.9 of xz-java, which was published in 2021, well
before maintenance changes in the project and the involvement of a person
who is now believed to be a malicious actor. Projects like HBase that
depend on xz-java have no reason…
…anode start up fine, but when I try starting the HBase RegionServer I get:
10:23:52.522 [main] ERROR
org.apache.hadoop.hbase.regionserver.HRegionServer - Failed construction
RegionServer
java.io.IOException: Compression codec snappy not supported, aborting RS
construction
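That error usually comes from the hbase.regionserver.codecs check: if a codec
listed in that property can't be loaded (for example because the snappy native
libraries aren't on the RegionServer's library path), the RegionServer aborts at
startup by design. A minimal hbase-site.xml sketch of that safety check,
assuming you do want the RS to verify snappy before serving:

  <property>
    <name>hbase.regionserver.codecs</name>
    <value>snappy</value>
  </property>

You can verify the codec independently with the CompressionTest tool, e.g.
bin/hbase org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy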
…(https://github.com/apache/hbase/blob/7877e09b6023c80e8bacd25fb8e0b9273ed7d258/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/WALCellCodec.java#L171),
WAL compression isn't actually block based, it's entry based, and the
dictionary doesn't need to be flushed explicitly, as it's written out as
data is written. …
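As an aside, WAL compression itself is switched on with a single property in
hbase-site.xml (the per-entry dictionary described above is then maintained
internally); a minimal sketch:

  <property>
    <name>hbase.regionserver.wal.enablecompression</name>
    <value>true</value>
  </property>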
Hello,
I'm trying to understand the extent of the following issue mentioned in the
"WAL Compression" doc: https://hbase.apache.org/book.html#wal.compression
> A possible downside to WAL compression is that we lose more data from the
> last block in the WAL if it is ill-terminated mid-write. …
…compression. We have been able to prove some nice gains on YCSB synthetic
workloads (complete system benchmark info and results summarized at
http://www.scaleflux.com/downloads/ScaleFlux_HBase_Solution_Brief.pdf) but we'd
like to reach out to the community to see if there are any HBase users…
Hi,
Finally I figured it out: I had to download libhadoop.so and reference its
location with HBASE_LIBRARY_PATH.
Now it works fine!
Hi Ted,
Thanks for your help.
I deployed HBase 1.2.5, and in the lib folder I can see a bunch of Hadoop
jars, all of them for the 2.5.1 release:
hadoop-annotations-2.5.1.jar
hadoop-auth-2.5.1.jar
hadoop-client-2.5.1.jar
hadoop-common-2.5.1.jar
hadoop-hdfs-2.5.1.jar
hadoop-mapreduce-client-app-2.5.1.jar
…
Hi,
I'm experimenting with HBase on a brand new Linux VM (Ubuntu), as a
standalone installation (I don't have any Hadoop distribution on my VM, which
is worth mentioning). I would like to test compression options, but couldn't
figure out how to make it work:
I manually installed…
…'hbase shell'. It shows nothing related to data loading. BTW, I use
happybase (Python's HBase package) to load data into HBase.
I cannot find any similar files in the hdfs://hbase folder; what I can find
are the empty column family folders. But if I switch back to
compression='NONE' then all the files appear in those column family folders.
Thanks Jean and Ted. I think I've found it. It's under:
/var/log/hbase
I am looking into the file. Will post an update after.
> 0.98.1-cdh5.1.3, rUnknown, Tue Sep 16 20:19:34 PDT 2014
>
> But where can I find my HBase region server log? I used all the default
> options when installing HBase, including all configurations.
>
> Thanks for your reply!
Which HBase release are you using?
Have you checked the region server log(s) and looked for SMT_KO2?
Cheers
I tried to test HBase's data compression. I used the two separate create
statements below:
non-compression code:
create 'SMT_KO1', {NAME => 'info', COMPRESSION => 'NONE', VERSIONS => 5},
{NAME => 'usg', COMPRESSION => 'NONE', VER…
Hi all,
I am trying to bulk load some network data into an HBase table. My mapper emits
ImmutableBytesWritable, KeyValue pairs. I declare a pre-split table where
the column families have compression set to SNAPPY and Data Block Encoding set
to PREFIX_TREE (hcd.setCompressionType(Algorithm.SNAPPY) and
hcd.setDataBlockEncoding(DataBlockEncoding.PREFIX_TREE)). …
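For reference, a rough sketch of that column-family setup with the 0.98-era
Java client (table name, family name and split points below are illustrative,
not the poster's actual ones):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateNetworkTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // one family with on-disk compression and block encoding
    HColumnDescriptor hcd = new HColumnDescriptor("d");
    hcd.setCompressionType(Algorithm.SNAPPY);
    hcd.setDataBlockEncoding(DataBlockEncoding.PREFIX_TREE);

    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("network_data"));
    desc.addFamily(hcd);

    // pre-split the table on illustrative boundary keys
    byte[][] splits = { Bytes.toBytes("10."), Bytes.toBytes("172."), Bytes.toBytes("192.") };
    admin.createTable(desc, splits);
    admin.close();
  }
}

HFileOutputFormat's configureIncrementalLoad should pick the family's
compression and encoding up from the table, so the mapper itself shouldn't need
to set anything (worth verifying on your version, though).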
…amd64-64
As hadoop holds the hadoop and snappy libraries, it should work. But it didn't.

2. Copied libhadoop.so and libsnappy.so to the HBase native library folder
at $HBASE_HOME/lib/native/Linux-amd64-64/.

It also didn't work.

Ran a compression test using the tool, getting the below error:

[root@IMPETUS-I0141 hbase-0.98.3-hadoop2]# bin/hbase
org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy
2014-0…
Hello Ankit,
The only reason the test can fail on the master is that the snappy native
libraries are not installed correctly. Have you tried running the
compression test (hbase org.apache.hadoop.hbase.util.CompressionTest
file:///tmp snappy) on the master? Does it work? If it works correctly…
Hello Hanish,
Since 0.95 a test for compression was added to the HBase Master, so now you
need to make sure the native libraries are installed on the HBase Master(s)
and not just on the Region Servers (see HBASE-6370 for details about this
change).
Regards,
Esteban.
--
Cloudera, Inc.
Please see http://hbase.apache.org/book.html#snappy.compression.installation
Cheers
Add hadoop\lib\native to the HBASE CLASSPATH
The $HADOOP_HOME\lib\native contains the snappy libs
Thumbs Up !
KASHIF
We are using HBase 0.98.3 with Hadoop 2.4.0.
Running a compression test using the tool, I get the below error:
[root@IMPETUS-I0141 hbase-0.98.3-hadoop2]# bin/hbase
org.apache.hadoop.hbase.util.CompressionTest file:///tmp/test.txt snappy
2014-07-11 16:05:10,572 INFO [main] Configuration.deprecation…
Hi All,
Recently I upgraded our HBase environment from 0.94 to 0.98.3 and am now
trying to use snappy compression with it.
I have installed the snappy library as per the guide at
https://hbase.apache.org/book/snappy.compression.html
When I am creating a table with snappy compression enabled, I am…
That works since you don’t need a region to be splittable…
…on the one hand you fit more data into the block cache (which is unlike
compression, where the data is uncompressed before the blocks get cached), but
on the other hand much more garbage is produced during scanning and more CPU
and memory bandwidth is used. So you need to test for your use case.
I don't mean to hijack the thread, but this question seems relevant:
Does data block encoding also help performance, or does it just enable more
efficient compression?
--Tom
I would like to see the times they got doing some scans or gets with the
benchmark on compression and block encoding, to figure out how much time you
save if your data is smaller but you have to decompress it.
Hi Jeremy,
I always recommend turning on snappy compression; I've seen ~20%
performance increases.
You may have read Doug Meil's writeup where he tried out different ColumnFamily
compressions:
https://blogs.apache.org/hbase/
Cheers
Thank you -- I'll go ahead and try compression.
--Jeremy
On Fri, Jun 13, 2014 at 10:59 AM, Dima Spivak wrote:
> I'd highly recommend it. In general, compressing your column families will
> improve performance by reducing the resources required to get data from
> disk (ev…
Hey all,
Right now, I'm not using compression on any of my tables, because our data
doesn't take up a huge amount of space. However, I would turn on
compression if there was a chance it would improve HBase's performance. By
performance, I'm talking about the speed with which…
Awesome -- thank you both!
--Jeremy
Hi Jeremy,
Here is some code that creates a table using the HBaseAdmin API, with a bunch
of options such as compression and specified key boundaries:
http://pastebin.com/KNcv03bj
The user-specified options will be in the StringArrayList tableOptions.
This is part of the Trafodion code.
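In case the pastebin link ever goes stale, here is a minimal sketch of a
pre-split table with Snappy compression via HBaseAdmin (not the actual paste;
table/family names and keys are made up):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
import org.apache.hadoop.hbase.util.Bytes;

public class CreatePreSplitSnappyTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    HColumnDescriptor hcd = new HColumnDescriptor("cf");
    hcd.setCompressionType(Algorithm.SNAPPY);  // Snappy for this family

    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("mytable"));
    desc.addFamily(hcd);

    // start key, end key and total region count; HBase computes the split points
    admin.createTable(desc, Bytes.toBytes("row0000000"), Bytes.toBytes("row9999999"), 10);
    admin.close();
  }
}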
I'm currently creating a table using the HBaseAdmin object. The reason I'm
doing it with the HBaseAdmin object is that I need to pre-split the table
by specifying the start key, end key, and number of regions. I want to use
Snappy compression for this table; however, I haven't seen…
It says:
RemoteException(java.io.IOException): /hbase/test is non empty
Is the directory empty or are there files from some previous runs? Does the
user have access to delete the data here?
Regards,
Shahab
How can I test snappy compression in HBase?
I ran the below command:
hbase org.apache.hadoop.hbase.util.CompressionTest /hbase/test snappy
The test table exists and is empty, but I get an error:
14/03/25 13:12:01 DEBUG util.FSUtils: Creating file=/hbase/test with
permission=rwxrwxrwx
14/03/25 13…
Yes, I followed the part about building it, but I didn't follow the
configuration part (core-site.xml).
Rural:
Just to confirm, you were following instructions here:
https://code.google.com/p/hadoop-snappy/
Cheers
It's Okay.
Either J-M or myself can come up with a patch.
Cheers
Sorry, I think my English is not good enough to provide a patch for the
documentation, so I just created a JIRA and put my thoughts in it:
https://issues.apache.org/jira/browse/HBASE-10303
On 2014/1/4 20:23, Jean-Marc Spaggiari wrote:
Hi Rural,
If you have any recommendation on the way to complete it, …
…64-bit OS, as the libhadoop.so in the binary package is only for a 32-bit
OS. It also didn't mention that you actually need both snappy and
hadoop-snappy.
Shameless plug ;)
http://www.spaggiari.org/index.php/hbase/how-to-install-snappy-with-1
Keep us posted.
See this thread:
http://search-hadoop.com/m/LviZD1WPToG/Snappy+libhadoop&subj=RE+Setting+up+Snappy+compression+in+Hadoop
Hi:
When I used Hadoop 2.2.0 and HBase 0.96.1.1 with snappy compression,
I followed the topic http://hbase.apache.org/book/snappy.compression.html,
but I get some errors. Can someone help me?
[hadoop@master bin]$ hbase org.apache.hadoop.hbase.util.CompressionTest…
Create another table with the same schema without the compression, insert
the same thing in the two tables and compare the footprint?
Hi Folks,
I added a COMPRESSION value after creating a table; here is the table
description:
{NAME => 'SPLIT_TEST_BIG', SPLIT_POLICY =>
'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy',
MAX_FILESIZE => '107374182400', FAMILIES => [{NA…
…119.b19289cf9b1400c6daddc347337bac03. in 1163ms, sequenceid=687077,
compaction requested=false

It seems like it will first flush into a tmp file and the memsize is 122.1m,
but when it is finally added, the size is 64.4m. Lastly, there are 2 more
parameters, 128.2m and 48.0m, for currentsize.
I never specified the hbase.regionserver.codecs property in my hbase-site.xml
file, so is the size difference still because of compression?
Thanks,
aiyoh79
The change happened in 0.94.5
Please see HBASE-5458
Cheers
Hi,
Does anyone know why org.apache.hadoop.hbase.io.hfile.Compression changed in
0.94.2 to use the SystemClassLoader to load the snappy class, instead of the
ContextClassLoader used in previous versions (e.g. in 0.92.1)?
private CompressionCodec buildCodec(Configuration conf) {
try { …
@Alok: thanks for correcting it.
@Kevin: I need to use both snappy and LZO, just for some analysis purposes.
@Ashwanth: thank you, I have already seen this link, but it's not helping me.
I don't want to start a compression war here, but is there a reason you are
trying to use LZO over Snappy?
Check this out
https://github.com/twitter/hadoop-lzo/issues/35
Hello everyone,
I am trying to activate LZO compression in HBase with the help of the following:
http://www.nosql.se/2011/09/activating-lzo-compression-in-hbase/
I am getting an error at the 7th step in that link (ant compile-native);
you can see the error here: http://paste.ubuntu.com/5745079/
I have…
Replication doesn't need to know about compression at the RPC level, so
it won't refer to it, and as far as I can tell you need to set
compression only on the master cluster and the slave will figure it
out.
Looking at the code though, I'm not sure it works the same way it used…
If RPC has compression abilities, how come Replication, which also works over
RPC, does not get it automatically?
> 0.96 will support HBase RPC compression
Yes
> Replication between master and slave will enjoy it as well (important since
> bandwidth between geographically distant data centers is scarce and more
> expensive)
But I cannot see it being utilized in replication. Maybe we can do
improvements…
Hi,
Just wanted to make sure I read the internet correctly: 0.96 will
support HBase RPC compression, thus Replication between master and slave
will enjoy it as well (important since bandwidth between geographically
distant data centers is scarce and more expensive).
…supposedly faster.
Compress using:
- store the size of the common prefix
- save the column family once, in the first KeyValue
- use integer compression for key, value and prefix (7-bit encoding)
- use bits to avoid duplicating key length, value length and type if they are
the same as in the previous KeyValue
- store in 3 bits the length…
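A concrete (purely illustrative) example: take two adjacent keys such as
  user00001/d:name
  user00002/d:name
The second key shares the prefix user0000 with the first, so instead of
repeating the whole row/family/qualifier the encoder can store roughly
"common prefix length = 8" plus the one differing byte, omit the family
(already saved in the first KeyValue), and use the flag bits above when the
key/value lengths and type repeat. Sorted, repetitive keys are the reason
these encodings shrink blocks so well.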
Is there any documentation anywhere regarding the differences between
PREFIX, DIFF and FAST_DIFF?
Thank you very much.
I will try snappy compression with data_block_encoding.
Another commonly used encoding is FAST_DIFF
Cheers
Prakash,
Yes, I would recommend Snappy Compression.
Thanks,
Is there any specific compression that is recommended for the use case I
have? Since my values are all null, will compression help?
I am thinking of using the PREFIX data_block_encoding.
Sincerely,
Prakash Kadel
+1 for Ted's advice.
Using compression can save a lot of space in memory and on disk, so it's a
good recommendation.
You should use data block encoding (in 0.94.x releases only). It is helpful
for reads.
You can also enable compression.
Cheers
Hello,
I have a question.
I have a table where I store data in the column qualifiers (the values
themselves are null).
I just have 1 column family.
The number of columns per row is variable (1 to a few thousand).
Currently I don't use compression or data_block_encoding.
Should…
Perfect, thanks.
I will use the hbase.regionserver.codecs option to make sure the new
algorithm is working fine.
JM
JM,
You are correct. Just disable the table, alter the compression, and then
major and minor compactions will run their course.
Please remember to run the compression test first to make sure there is
nothing wrong with the new algorithm.
Hi,
What's the right way to change the compression algorithm for a CF?
Can I simply disable the table, alter the table with the new
compression algorithm info, and enable it back? As a result, I will still
have a table compressed with the previous algorithm, but at some
point in the future, it…
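For the record, a rough sketch of that disable/alter/enable cycle with the Java
admin API (method names from the 0.96/0.98-era HBaseAdmin; table and family
names are illustrative, and the shell's disable/alter/enable does the same thing):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
import org.apache.hadoop.hbase.util.Bytes;

public class ChangeFamilyCompression {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);

    // fetch the existing family descriptor so its other settings (versions, TTL, ...) are kept
    HTableDescriptor td = admin.getTableDescriptor(Bytes.toBytes("mytable"));
    HColumnDescriptor hcd = td.getFamily(Bytes.toBytes("cf"));
    hcd.setCompressionType(Algorithm.SNAPPY);  // the new algorithm

    admin.disableTable("mytable");
    admin.modifyColumn("mytable", hcd);        // alter the column family
    admin.enableTable("mytable");

    // existing HFiles are only rewritten as compactions run;
    // optionally request a major compaction to rewrite them sooner
    admin.majorCompact("mytable");
    admin.close();
  }
}

Run the compression test against the new algorithm first, as recommended above.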