How to set an idle Hive JDBC connection timeout from Java code using Hive JDBC

2015-11-29 Thread reena upadhyay
I am using Hive JDBC 1.0 in my Java application to create a connection to
Hive Server and execute queries. I want to set the idle Hive connection
timeout from Java code. For example: the user first creates the Hive
connection, and if the connection remains idle for the next 10 minutes, the
connection object should expire. If the user then uses this same connection
object to execute a query after 10 minutes, Hive JDBC should throw an
error. Can you please tell me how to achieve this from Java code?

I know there is a property hive.server2.idle.session.timeout in Hive, but
I don't know whether this is the right property to set from Java code or
whether there is some other property. I tried setting this property in the
JDBC connection string, but it did not work.

try {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
} catch (ClassNotFoundException e) {
    LOG.error(ExceptionUtils.getStackTrace(e));
}

String jdbcUrl = "jdbc:hive2://localhost:1/idw?hive.server2.idle.session.timeout=1000ms";
Connection con = DriverManager.getConnection(jdbcUrl, "root", "");

Thread.sleep(3000);

Below I use the connection object again. I had set the idle timeout to
1000 ms and the connection sat idle for 3000 ms, so I expected Hive JDBC to
throw an error here, but it did not:

ResultSet rs = con.createStatement().executeQuery("select * from idw.emp");

Need help on this.
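One note on the question above: `hive.server2.idle.session.timeout` is enforced by HiveServer2 on the server side, and in the 1.x line I would not expect setting it in the JDBC URL's conf list to take effect per connection, so a client-side expiry like the one described generally has to be tracked in application code. A minimal sketch of such a guard follows; the class and its API are hypothetical, not part of Hive JDBC:

```java
import java.sql.Connection;

// Hypothetical client-side idle guard (not part of Hive JDBC): tracks when
// the connection was last used and refuses further use once it has been
// idle longer than the configured timeout.
class IdleConnectionGuard {
    private final Connection delegate;     // the real Hive JDBC connection
    private final long idleTimeoutMillis;
    private long lastUsedMillis;

    IdleConnectionGuard(Connection delegate, long idleTimeoutMillis, long nowMillis) {
        this.delegate = delegate;
        this.idleTimeoutMillis = idleTimeoutMillis;
        this.lastUsedMillis = nowMillis;
    }

    // Returns the underlying connection, or throws once the idle window has
    // passed. An unchecked exception keeps the sketch simple; a real guard
    // might throw SQLException instead.
    Connection use(long nowMillis) {
        if (nowMillis - lastUsedMillis > idleTimeoutMillis) {
            throw new IllegalStateException(
                "Connection idle longer than " + idleTimeoutMillis + " ms");
        }
        lastUsedMillis = nowMillis;
        return delegate;
    }
}
```

Call sites would then go through `guard.use(System.currentTimeMillis()).createStatement()...` instead of touching the Connection directly.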


Re: Hotels.com

2015-11-29 Thread Amrit Jangid
??

On Mon, Nov 30, 2015 at 11:56 AM, Roshini Johri 
wrote:

> [image: Inline image 2]
>
> Roshini Johri
>
> [image: Inline image 1] 
>
>
>
> She borrowed the book from him many years ago and hasn't yet returned it.
> She advised him to come back at once. Someone I know recently combined
> Maple Syrup & buttered Popcorn thinking it would taste like caramel
> popcorn. It didn’t and they don’t recommend anyone else do it either. A
> glittering gem is not enough. They got there early, and they got really
> good seats.
>



-- 

Regards,
Amrit

Hotels.com

2015-11-29 Thread Roshini Johri
[image: Inline image 2]

Roshini Johri

[image: Inline image 1] 



She borrowed the book from him many years ago and hasn't yet returned it.
She advised him to come back at once. Someone I know recently combined
Maple Syrup & buttered Popcorn thinking it would taste like caramel
popcorn. It didn’t and they don’t recommend anyone else do it either. A
glittering gem is not enough. They got there early, and they got really
good seats.


RE: Building Rule Engine/ Rule Transformation

2015-11-29 Thread Mahender Sarangam
We are not experts in Java programming; we work in .NET code. But there is
no support for .NET UDFs.

Subject: Re: Building Rule Engine/ Rule Transformation
From: jornfra...@gmail.com
Date: Sun, 29 Nov 2015 11:33:05 +0100
CC: user-h...@hive.apache.org
To: user@hive.apache.org

Why not implement Hive UDF in Java?
On 28 Nov 2015, at 21:26, Mahender Sarangam  
wrote:




 Hi team,

We need expert input on how to implement a rule engine in Hive. Do you have
any references on implementing rules in Hive/Pig?

We are migrating our stored procedures into multiple Hive queries, but this
is becoming complex to maintain. Hive is not a procedural language, so we
could not write IF/ELSE logic or any other kind of procedural construct.
Can anyone suggest which HDP technology can serve as a procedural-language
replacement? We are thinking of Pig: is it a good fit for rule/data
transformation?

Our data is in structured tables with around 250 columns. Our rules include
updating columns based on lookup-table values, deleting rows that don't
satisfy a condition, updates of multiple columns (with CASE expressions and
so on), and some date-conversion columns. Please suggest the best way to
implement this rule engine. We previously used SQL Server as the rule
engine; now that we have migrated the application to big data, we are
looking for references on performing this rule transformation.

Some of our findings are:

1. Make use of Hive streaming.
2. Writing Pig.

We are .NET developers. Could we write an .EXE application, stream row-wise
data to the .EXE, and apply rules on top of each row? Would that be a
better solution, or is it better to implement in Pig? Implementing in Pig
doesn't seem to give us much benefit compared with Hive. Can you please
comment on the above approaches?

Thanks,
Mahender
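The "Hive streaming" finding above maps to Hive's TRANSFORM clause: Hive writes each row to the child process as a tab-separated line on stdin and reads transformed rows back from stdout, so the row logic can be written in any language (which is also how the .EXE idea would plug in). A minimal sketch in Java; the column positions and the lookup rule are made up for illustration:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;

// Sketch of a row transformer usable with Hive's TRANSFORM clause: rows
// arrive as tab-separated lines on stdin, transformed rows are written to
// stdout. The column layout and lookup rule here are hypothetical.
class RuleTransformer {

    // Rule: replace the value in column 1 using a lookup table, and drop
    // the row (return null) when there is no mapping.
    static String transformRow(String line, Map<String, String> lookup) {
        String[] cols = line.split("\t", -1);
        String mapped = lookup.get(cols[1]);
        if (mapped == null) return null;   // row deleted by the rule
        cols[1] = mapped;
        return String.join("\t", cols);
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> lookup = new HashMap<>();
        lookup.put("NY", "New York");      // stand-in for a real lookup table
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        for (String line; (line = in.readLine()) != null; ) {
            String out = transformRow(line, lookup);
            if (out != null) System.out.println(out);
        }
    }
}
```

It would be wired in with something like `SELECT TRANSFORM(id, state, name) USING 'run_rules' AS (id, state, name) FROM src;` (script name hypothetical).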
  

Re: Hive version with Spark

2015-11-29 Thread Xuefu Zhang
Sofia,

What specific problem did you encounter when trying spark.master other than
local?

Thanks,
Xuefu

On Sat, Nov 28, 2015 at 1:14 AM, Sofia Panagiotidi <
sofia.panagiot...@taiger.com> wrote:

> Hi Mich,
>
>
> I never managed to run Hive on Spark with a spark master other than local
> so I am afraid I don’t have a reply here.
> But do try some things. Firstly, run hive as
>
> hive --hiveconf hive.root.logger=DEBUG,console
>
>
> so that you are able to see what the exact error is.
>
> I am afraid I cannot be much of a help as I think I reached the same point
> (where it would work only when setting spark.master=local) before
> abandoning.
>
> Cheers
>
>
>
> On 27 Nov 2015, at 01:59, Mich Talebzadeh  wrote:
>
> Hi Sophia,
>
>
> There is no Hadoop-2.6. I believe you should use Hadoop-2.4 as shown below
>
>
> mvn -Phadoop-2.4 -Dhadoop.version=2.6.0 -DskipTests clean package
>
> Also if you are building it for Hive on Spark engine, you should not
> include Hadoop.jar files in your build.
>
> For example I tried to build spark 1.3 from source code (I read that this
> version works OK with Hive, having tried unsuccessfully spark 1.5.2).
>
> The following command created the tar file
>
> ./make-distribution.sh --name "hadoop2-without-hive" --tgz
> "-Pyarn,hadoop-provided,hadoop-2.4,parquet-provided"
>
> spark-1.3.0-bin-hadoop2-without-hive.tar.gz
>
>
> Now I have other issues making Hive to use Spark execution engine
> (requires Hive 1.1 or above )
>
> In hive I do
>
> set spark.home=/usr/lib/spark;
> set hive.execution.engine=spark;
> set spark.master=spark://127.0.0.1:7077;
> set spark.eventLog.enabled=true;
> set spark.eventLog.dir=/usr/lib/spark/logs;
> set spark.executor.memory=512m;
> set spark.serializer=org.apache.spark.serializer.KryoSerializer;
> use asehadoop;
> select count(1) from t;
>
> I get the following
>
> OK
> Time taken: 0.753 seconds
> Query ID = hduser_20151127003523_e9863e84-9a81-4351-939c-36b3bef36478
> Total jobs = 1
> Launching Job 1 out of 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
>   set mapreduce.job.reduces=<number>
> Failed to execute spark task, with exception
> 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark
> client.)'
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.spark.SparkTask
>
> HTH,
>
> Mich
>
>
> From: Sofia [mailto:sofia.panagiot...@taiger.com]
> Sent: 18 November 2015 16:50
> To: user@hive.apache.org
> Subject: Hive version with Spark
>
> Hello
>
> After various failed tries to use my Hive (1.2.1) with my Spark (Spark
> 1.4.1 built for Hadoop 2.2.0), I decided to try building Spark with Hive
> again.
> I would like to know the latest Hive version that can be used to build
> Spark at this point.
>
> When downloading Spark 1.5 source and trying:
>
> mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Phive -Phive-1.2.1
> -Phive-thriftserver -DskipTests clean package
>
> I get:
>
> The requested profile "hive-1.2.1" could not be activated because it does
> not exist.
>
> Thank you
> Sofia
>
>
>


Re: Building Rule Engine/ Rule Transformation

2015-11-29 Thread Jörn Franke
Why not implement Hive UDF in Java?
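To make the suggestion concrete: a Hive UDF is a Java class exposing an evaluate() method, packaged into a jar and registered in Hive. The sketch below shows only the rule logic (a made-up date-conversion rule) as plain Java; in Hive, evaluate() would live in a class extending org.apache.hadoop.hive.ql.exec.UDF with hive-exec on the classpath:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

// Core of a hypothetical date-conversion rule written UDF-style. In Hive
// this class would extend org.apache.hadoop.hive.ql.exec.UDF.
class ToIsoDate {
    private static final DateTimeFormatter IN = DateTimeFormatter.ofPattern("MM/dd/yyyy");

    String evaluate(String s) {
        if (s == null) return null;  // Hive calls UDFs with NULLs; pass them through
        return LocalDate.parse(s, IN).format(DateTimeFormatter.ISO_LOCAL_DATE);
    }
}
```

After packaging, it would be registered with something like `ADD JAR rules.jar; CREATE TEMPORARY FUNCTION to_iso AS 'ToIsoDate';` (jar and function names hypothetical) and then used inside SELECT/UPDATE rules.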

> On 28 Nov 2015, at 21:26, Mahender Sarangam  
> wrote:
> 
>  Hi team,
> 
> We need expert input on how to implement a rule engine in Hive. Do you have 
> any references on implementing rules in Hive/Pig?
> 
> We are migrating our stored procedures into multiple Hive queries, but this 
> is becoming complex to maintain. Hive is not a procedural language, so we 
> could not write IF/ELSE logic or any other kind of procedural construct. 
> Can anyone suggest which HDP technology can serve as a procedural-language 
> replacement? We are thinking of Pig: is it a good fit for rule/data 
> transformation?
> 
> Our data is in structured tables with around 250 columns. Our rules include 
> updating columns based on lookup-table values, deleting rows that don't 
> satisfy a condition, updates of multiple columns (with CASE expressions and 
> so on), and some date-conversion columns. Please suggest the best way to 
> implement this rule engine. We previously used SQL Server as the rule 
> engine; now that we have migrated the application to big data, we are 
> looking for references on performing this rule transformation.
> 
> Some of our findings are:
> 
> 1. Make use of Hive streaming.
> 2. Writing Pig.
> 
> We are .NET developers. Could we write an .EXE application, stream row-wise 
> data to the .EXE, and apply rules on top of each row? Would that be a 
> better solution, or is it better to implement in Pig? Implementing in Pig 
> doesn't seem to give us much benefit compared with Hive. Can you please 
> comment on the above approaches?
> 
> Thanks,
> Mahender


Re: Building Rule Engine/ Rule Transformation

2015-11-29 Thread Dmitry Tolpeko
Mahender,

Please try the Hive HPL/SQL tool first. It will be included in Hive 2.0,
but for now it is available at hplsql.org. The tool attempts to implement
procedural SQL for Hive and supports SQL Server syntax as well. If there
are any issues, please contact me by email.

Dmitry

On Sat, Nov 28, 2015 at 11:26 PM, Mahender Sarangam <
mahender.bigd...@outlook.com> wrote:

>  Hi team,
>
> We need expert input on how to implement a rule engine in Hive. Do you
> have any references on implementing rules in Hive/Pig?
>
> We are migrating our stored procedures into multiple Hive queries, but
> this is becoming complex to maintain. Hive is not a procedural language,
> so we could not write IF/ELSE logic or any other kind of procedural
> construct. Can anyone suggest which HDP technology can serve as a
> procedural-language replacement? We are thinking of Pig: is it a good fit
> for rule/data transformation?
>
> Our data is in structured tables with around 250 columns. Our rules
> include updating columns based on lookup-table values, deleting rows that
> don't satisfy a condition, updates of multiple columns (with CASE
> expressions and so on), and some date-conversion columns. Please suggest
> the best way to implement this rule engine. We previously used SQL Server
> as the rule engine; now that we have migrated the application to big data,
> we are looking for references on performing this rule transformation.
>
> Some of our findings are:
>
>    1. Make use of Hive streaming.
>    2. Writing Pig.
>
> We are .NET developers. Could we write an .EXE application, stream
> row-wise data to the .EXE, and apply rules on top of each row? Would that
> be a better solution, or is it better to implement in Pig? Implementing in
> Pig doesn't seem to give us much benefit compared with Hive. Can you
> please comment on the above approaches?
>
> Thanks,
> Mahender
>