Hi,
I have a use case to read files from HDFS and the local file system
depending on a configuration parameter. I found that Apache Commons VFS
supports various file systems, and the latest developer release also has an
implementation for HDFS (though only read support is provided
currently). I
Thanks
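The config-driven switch Commons VFS enables can be sketched with a stdlib-only helper that picks the URI scheme from the parameter; in the real code the resolved URI would be handed to Commons VFS (e.g. its FileSystemManager), and the `fs.type` values, host, and paths below are hypothetical placeholders:

```java
import java.net.URI;

public class FsSelector {
    // Hypothetical config values; in a real setup these would come from
    // a properties file or job configuration, and the resulting URI would
    // be passed to a VFS-style resolver.
    static URI resolve(String fsType, String path) {
        switch (fsType) {
            case "hdfs":
                return URI.create("hdfs://namenode:8020" + path);
            case "local":
                return URI.create("file://" + path);
            default:
                throw new IllegalArgumentException("unknown fs.type: " + fsType);
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve("hdfs", "/data/in.txt"));
        System.out.println(resolve("local", "/data/in.txt"));
    }
}
```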
On 2014-08-01 12:00, Devopam Mittra wrote:
If you have MySQL as your metastore, you may use something similar to
the below:
SELECT tbl.TBL_NAME,COUNT(DISTINCT part.PART_NAME) AS partition_count
FROM metastore_db.TBLS tbl, metastore_db.PARTITIONS part
WHERE tbl.TBL_ID = part.TBL_ID
AND
Hi.
In Hive, can I make use of the centralized cache management introduced in Hadoop
2.3
(http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html)?
If it is not implemented yet, is it on the roadmap?
My use case is that I want to pin a fact table that
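For reference, pinning a table's files with the HDFS 2.3 cache CLI looks roughly like this (the pool name and warehouse path are placeholders; whether Hive's readers then actually hit the cache is exactly the open question in this thread):

```shell
# Create a cache pool (name is a placeholder)
hdfs cacheadmin -addPool factPool

# Pin the warehouse directory of the fact table (path is a placeholder)
hdfs cacheadmin -addDirective -path /user/hive/warehouse/fact_table -pool factPool

# Verify the directive was registered
hdfs cacheadmin -listDirectives
```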
Please take a look at Hive with Tez as the execution engine on Hadoop 2.3;
it may help you compare it with what you want to achieve.
On Fri, Aug 1, 2014 at 4:13 PM, Uli Bethke uli.bet...@sonra.io wrote:
Hi.
in Hive can I make use of the centralized cache management introduced in
Hadoop 2.3 (
I am already using Tez as the execution engine and used hdfs cacheadmin to pin a
file to memory. However, querying that file through Hive still goes to disk.
Any ideas?
On 01 August 2014 at 11:46 Nitin Pawar nitinpawar...@gmail.com wrote:
Please take a look at hive with tez as execution
Hi,
I want to pass different configuration variables for different users in
Hive. I have tested this by putting a hive-site.xml in each user's home
directory; for that user, Hive grabs the hive-site.xml from their home
directory and executes accordingly.
This works for the CLI but it doesn't from
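For completeness, a per-user override in a home-directory hive-site.xml is just a standard Hadoop-style property block; the property and value shown here are only illustrative:

```xml
<!-- Example per-user hive-site.xml; the property and value are
     illustrative only. -->
<configuration>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive-alice</value>
  </property>
</configuration>
```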
Hi team,
Quick question.
I am writing a Hive generic UDF, in which I want to have this:
HashMap<String, String> vpDefinition = new HashMap<String, String>();
vpDefinition.put("auction_id", "22");
The second line always gives me: FAILED: ClassCastException
org.apache.hadoop.io.Text cannot be cast to
Hi Dan,
Take a look at this:
http://javarevisited.blogspot.com/2012/12/how-to-solve-javalangclasscastexception-java.html
__Birm
Ricardo Birmele, CISSP
Security Data Scientist
Microsoft IT Security Operations
From: Dan Fan [mailto:d...@appnexus.com]
Sent: Friday,
Hi,
I am trying to test some optimizations that partitioning and clustering tables
can provide, but I have a doubt about how the SORT BY clause works in a table.
The case is the following:
I create a simple bucketed table as:
CREATE TABLE USERS(ID INT, NAME STRING, OTHER INT)
CLUSTERED BY (ID) SORTED
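For reference, the full bucketed-and-sorted DDL shape in Hive is as follows (the bucket count here is an arbitrary example):

```sql
-- Bucketed table, sorted within each bucket; 4 is an arbitrary bucket count.
CREATE TABLE users (id INT, name STRING, other INT)
CLUSTERED BY (id) SORTED BY (id ASC) INTO 4 BUCKETS;
```

Note the distinction: SORTED BY in the DDL orders rows within each bucket file at write time, while the SORT BY clause in a SELECT orders rows within each reducer at query time.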
Would probably have to see the whole code of your evaluate() function.
Is this while trying to treat the arguments to the UDF as a String object?
The argument was probably passed into the GenericUDF as a Text object (the
Hadoop Writable version of the string type), not a String object. It would have to be
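In other words, the usual fix inside evaluate() is to call toString() on the incoming object rather than casting it to String (or to go through the ObjectInspector). A runnable stand-in sketch — MiniText below is a hypothetical mini class mimicking org.apache.hadoop.io.Text so the snippet runs without Hadoop on the classpath; in a real GenericUDF you would use Text itself:

```java
import java.util.HashMap;

public class UdfArgDemo {
    // Stand-in for org.apache.hadoop.io.Text (hypothetical, for illustration):
    // like Text, it is NOT a String, but toString() yields one.
    static final class MiniText {
        private final byte[] bytes;
        MiniText(String s) { this.bytes = s.getBytes(); }
        @Override public String toString() { return new String(bytes); }
    }

    static String safeString(Object udfArg) {
        // (String) udfArg would throw ClassCastException for a Text-like
        // argument; toString() works for Text and String alike.
        return udfArg.toString();
    }

    public static void main(String[] args) {
        HashMap<String, String> vpDefinition = new HashMap<>();
        Object arg = new MiniText("auction_id"); // what the UDF actually receives
        vpDefinition.put(safeString(arg), "22");
        System.out.println(vpDefinition.get("auction_id")); // prints 22
    }
}
```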