Unable to execute hive script on AWS with arguments from java program

2013-09-09 Thread Puneet Khatod
Hi,

I am new to Hive and am trying to execute a Hive script/query on AWS using the Java
API (the StepFactory class). My Hive query requires some arguments.
I am able to start the cluster and install Hive on it, but I get an error at the Hive
execution step (the error is that AWS is unable to understand the arguments).

Below is the code snippet that I have tried:

  StepConfig runHive = new StepConfig()
      .withName("Run Hive")
      .withActionOnFailure("TERMINATE_JOB_FLOW")
      /* .withHadoopJarStep(
             stepFactory.newRunHiveScriptStep(
                 eventSubsetHiveScript,
                 " -d", " S3_INPUT_BUCKET=" + in_bucketname
                     + " -d S3_OUT_BUCKET=" + out_bucketname
                     + " -d DT=" + in_dt)); */
      .withHadoopJarStep(
          stepFactory.newRunHiveScriptStep(
              eventSubsetHiveScript,
              " -d", " S3_INPUT_BUCKET=" + in_bucketname,
              " -d", " S3_OUT_BUCKET=" + out_bucketname,
              " -d", " DT=" + in_dt));

The error that I get in the stderr logs is:
Unrecognised option  -d

I have tried the commented-out snippet too, but with the same result :(.
This time the error is:
Unrecognised option  -d S3_INPUT_BUCKET=s3://my-input -d S3_OUT_BUCKET=s3://my-output...

Please help me get this right. How do I pass arguments to the Hive query using the
Java API?
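
For what it is worth, the "Unrecognised option  -d" message suggests the leading spaces inside the argument strings are reaching Hive as part of the option itself. Below is a minimal sketch of passing each "-d" flag and each KEY=value pair as its own unpadded string; this is an assumption rather than a confirmed fix, and it reuses the variable names from the snippet above.

  StepConfig runHive = new StepConfig()
      .withName("Run Hive")
      .withActionOnFailure("TERMINATE_JOB_FLOW")
      .withHadoopJarStep(
          stepFactory.newRunHiveScriptStep(
              eventSubsetHiveScript,
              // Each flag and each KEY=value is its own argument, with no leading spaces.
              "-d", "S3_INPUT_BUCKET=" + in_bucketname,
              "-d", "S3_OUT_BUCKET=" + out_bucketname,
              "-d", "DT=" + in_dt));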


Regards,
Puneet Khatod | puneet.kha...@tavant.com
Technical Lead | T: +91 120 4030300 | F: +91 120 403 0301




How to validate data type in Hive

2013-08-26 Thread Puneet Khatod
Hi,

I have a requirement to validate the data types of the values present in my flat
file (which is the source for my Hive table). I am unable to find any Hive
feature/function that would do that.
Is there any way to validate the data types of the values present in the underlying
file? Something like BCP (bulk copy program), as used in SQL Server.
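
One approach worth sketching here (an assumption on my part, not something from this thread): load the file into a staging table whose columns are all STRING, then use CAST, which returns NULL when a value does not parse as the target type, to flag bad rows. The table and column names below are made up for illustration.

  -- Hypothetical staging table read straight from the flat file, every column as STRING.
  -- CAST(amount AS INT) is NULL whenever the text is not a valid integer, so a non-NULL
  -- string that casts to NULL is a type violation.
  SELECT *
  FROM   staging_table
  WHERE  amount IS NOT NULL
    AND  CAST(amount AS INT) IS NULL;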

Please reply; my whole project is stuck due to this issue.

Thanks,
Puneet

From: Yin Huai [mailto:huaiyin@gmail.com]
Sent: Monday, August 26, 2013 5:10 PM
To: user@hive.apache.org
Cc: dev; Eric Chu
Subject: Re: DISTRIBUTE BY works incorrectly in Hive 0.11 in some cases

Forgot to add in my last reply: to generate correct results, you can set
hive.optimize.reducededuplication to false to turn off ReduceSinkDeDuplication.
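
For reference, applying that workaround at the session level before running the affected query looks like this (a small sketch; the setting name is as given above):

  -- Turn off ReduceSinkDeDuplication for this session, then run the
  -- GROUP BY / DISTRIBUTE BY / SORT BY query with the custom reducer script.
  SET hive.optimize.reducededuplication=false;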

On Sun, Aug 25, 2013 at 9:35 PM, Yin Huai <huaiyin@gmail.com> wrote:
Created a jira https://issues.apache.org/jira/browse/HIVE-5149

On Sun, Aug 25, 2013 at 9:11 PM, Yin Huai <huaiyin@gmail.com> wrote:
Seems ReduceSinkDeDuplication picked the wrong partitioning columns.

On Fri, Aug 23, 2013 at 9:15 PM, Shahansad KP <s...@rocketfuel.com> wrote:
I think the problem lies within the group by operation. For this optimization
to work, the group by's partitioning should be on column 1 only.

It won't affect the correctness of the group by; it can make it slower, but in this case
it will speed up the overall query.

On Fri, Aug 23, 2013 at 5:55 PM, Pala M Muthaia <mchett...@rocketfuelinc.com> wrote:
I have attached the Hive 0.10 and 0.11 query plans for the sample query below, for
illustration.

On Fri, Aug 23, 2013 at 5:35 PM, Pala M Muthaia <mchett...@rocketfuelinc.com> wrote:
Hi,

We are using DISTRIBUTE BY with custom reducer scripts in our query workload.

After upgrading to Hive 0.11, queries with GROUP BY/DISTRIBUTE BY/SORT BY and
custom reducer scripts produced incorrect results. In particular, rows with the same
value in the DISTRIBUTE BY column end up in multiple reducers and thus produce
multiple rows in the final result, when we expect only one.

I investigated a little bit and discovered the following behavior for Hive 0.11:

- Hive 0.11 produces a different plan for these queries with incorrect results. 
The extra stage for the DISTRIBUTE BY + Transform is missing and the Transform 
operator for the custom reducer script is pushed into the reduce operator tree 
containing GROUP BY itself.

- However, if the SORT BY in the query has a DESC order in it, the right plan 
is produced, and the results look correct too.

Hive 0.10 produces the expected plan with right results in all cases.


To illustrate, here is a simplified repro setup:

Table:

CREATE TABLE test_cluster (grp STRING, val1 STRING, val2 INT, val3 STRING, val4 
INT) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n' 
STORED AS TEXTFILE;

Query:

ADD FILE reducer.py;

FROM(
  SELECT grp, val2
  FROM test_cluster
  GROUP BY grp, val2
  DISTRIBUTE BY grp
  SORT BY grp, val2  -- add DESC here to get correct results
) a

REDUCE a.*
USING 'reducer.py'
AS grp, reducedValue


If I understand correctly, this is a bug. Is this a known issue? Any other 
insights? We have reverted to Hive 0.10 to avoid the incorrect results while we 
investigate this.

I have the repro sample, with test data and scripts, if anybody is interested.



Thanks,
pala








RE: New to hive.

2013-07-17 Thread Puneet Khatod
Hi,

There are many online tutorials and blogs that provide quick, get-set-go sort of
information. To start with, you can learn Hadoop. For detailed knowledge you
will have to go through the e-books mentioned by Lefty.
These books are bulky but will cover every bit of Hadoop.

I recently came across an Android app called 'Big data Xpert', which has tips
and tricks about big data technologies. I think it can be a quick and good
reference for beginners as well as experienced developers.
For reference:
https://play.google.com/store/apps/details?id=com.mobiknights.xpert.bigdata

Thanks,
Puneet

From: Lefty Leverenz [mailto:le...@hortonworks.com]
Sent: Thursday, June 20, 2013 11:05 AM
To: user@hive.apache.org
Subject: Re: New to hive.

"Programming Hive" and "Hadoop: The Definitive Guide" are available at the 
O'Reilly website (http://oreilly.com/) and on Amazon.

But don't forget the Hive wiki:

  *   Hive Home -- https://cwiki.apache.org/confluence/display/Hive/Home
  *   Getting Started -- 
https://cwiki.apache.org/confluence/display/Hive/GettingStarted
  *   Hive Tutorial -- https://cwiki.apache.org/confluence/display/Hive/Tutorial
- Lefty


On Wed, Jun 19, 2013 at 7:02 PM, Mohammad Tariq <donta...@gmail.com> wrote:
Hello ma'am,

  Hive queries are parsed using ANTLR and are converted into the corresponding
MR jobs (actually a lot of things happen under the hood). I had answered a similar
question a few days ago on SO; you might find it helpful. But I would suggest you go
through the original paper, which explains all these things in proper detail. I would
also recommend you go through the book "Programming Hive". It's really nice.

HTH

Warm Regards,
Tariq
cloudfront.blogspot.com

On Thu, Jun 20, 2013 at 4:24 AM, Bharati <bharati.ad...@mparallelo.com> wrote:
Hi Folks,

I am new to Hive and need information, tutorials, etc. that you can point to. I
have installed Hive to work with MySQL.

I can run queries. Now I would like to understand how the map and reduce
classes are created, and how I can look at the data for the map job and the map
class that the Hive query generates. Also, is there a way to create custom map
classes?
I would appreciate it if anyone can help me get started.
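
(As an aside, not part of the original thread: the map/reduce plan Hive generates for a query can be inspected with EXPLAIN. The query below is only an illustration; the table name is made up.)

  -- Prints the operator tree and the map/reduce stages Hive will run for this query.
  EXPLAIN
  SELECT grp, COUNT(*)
  FROM   some_table
  GROUP BY grp;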

Thanks,
Bharati

Sent from my iPad





Need help with percentile calculation

2012-11-21 Thread Puneet Khatod
Hi,

I am trying to use the percentile function of Hive but am getting an exception from
the Amazon EMR service.
I am using version 0.7.

Please assist. It is very critical and urgent.

Below is the code snippet:

CREATE EXTERNAL TABLE IF NOT EXISTS server_d
(
  ag_date STRING,
  median_time BIGINT,
  95percentile_time BIGINT
) COMMENT 'server'
PARTITIONED BY (dt STRING, hh STRING, min STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS TEXTFILE
LOCATION '${hiveconf:S3_INPUT_BUCKET}/server/';

INSERT OVERWRITE TABLE server_d PARTITION(dt='${hiveconf:DT}', hh='${hiveconf:HH}', min='${hiveconf:MIN}')
SELECT '${hiveconf:DT}' as ag_date,
percentile(total_time, 0.50) as median_time,
percentile(total_time, 0.95) as 95percentile_time
from my_log my
where my.date_req = '${hiveconf:DT}' and my.dt = '${hiveconf:DT}';


The exception that I am getting is:

Exception in thread "Thread-239" java.lang.RuntimeException: Error while 
reading from task log url
at 
org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:130)
at 
org.apache.hadoop.hive.ql.exec.JobDebugger.showJobFailDebugInfo(JobDebugger.java:211)
at org.apache.hadoop.hive.ql.exec.JobDebugger.run(JobDebugger.java:81)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Server returned HTTP response code: 400 for 
URL: 
http://10.***.***.**:9103/tasklog?taskid=attempt_2010835_0005_m_01_3&start=-8193
at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436)
at java.net.URL.openStream(URL.java:1010)
at 
org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getErrors(TaskLogProcessor.java:120)
... 3 more

Thanks,
Puneet

From: Felix.徐 [mailto:ygnhz...@gmail.com]
Sent: Wednesday, November 21, 2012 2:22 PM
To: user@hive.apache.org
Subject: Bugs exist in SEMI JOIN?

Hi,
I am using version 0.9.0 and my tables are the same as in the TPC-H benchmark.

Here is a simple query (it works correctly):

Q1
INSERT OVERWRITE TABLE customer_orders_statistics
 SELECT C_CUSTKEY FROM CUSTOMER
 LEFT SEMI JOIN(
  SELECT O_CUSTKEY FROM ORDERS WHERE unix_timestamp(O_ORDERDATE, 'yyyy-MM-dd')
> unix_timestamp('1995-12-31','yyyy-MM-dd')
 ) tempTable ON tempTable.O_CUSTKEY=CUSTOMER.C_CUSTKEY

It means inserting the keys of customers who have orders since 1995-12-31 into
another table.
But if I write the query like this:

Q2
INSERT OVERWRITE TABLE customer_orders_statistics
 SELECT C_CUSTKEY FROM CUSTOMER
 LEFT SEMI JOIN ORDERS
 ON CUSTOMER.C_CUSTKEY=ORDERS.O_CUSTKEY
 AND unix_timestamp(ORDERS.O_ORDERDATE, 'yyyy-MM-dd') >
unix_timestamp('1995-12-31','yyyy-MM-dd')

I will get an exception from Hive:


FAILED: Hive Internal Error: java.lang.NullPointerException(null)
java.lang.NullPointerException
  at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFilterPlan(SemanticAnalyzer.java:1566)
  at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.pushJoinFilters(SemanticAnalyzer.java:5254)
  at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6754)
  at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7531)
  at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
  at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:431)
  at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:909)
  at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:258)
  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:215)
  at 
org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
  at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
  at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:557)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)
  at org.apache.hadoop.util.RunJar.main(RunJar.java:156)


Also, if I write the query like this:
Q3
INSERT OVERWRITE TABLE customer_orders_statistics
 SELECT C_CUSTKEY FROM CUSTOMER
 LEFT SEMI JOIN ORDERS
 ON CUSTOMER.C_CUSTKEY=ORDERS.O_CUSTKEY
 WHERE unix_timestamp(ORDERS.O_ORDERDATE, 'yyyy-MM-dd') >
unix_timestamp('1995-12-31','yyyy-MM-dd')

Then this query can be executed (so the right-hand side of a SEMI JOIN can now be
referenced in the WHERE clause?), but the result is wrong (compared to Q1; Q1's
result is the same as MySQL's).

RE: Continuous log analysis requires 'dynamic' partitions, is that possible?

2012-07-24 Thread Puneet Khatod
If you are using Amazon (AWS), you can use 'recover partitions' to pick up all
top-level partitions.
This will add the required dynamicity.
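
For example (this uses EMR's Hive extension; the table name here is hypothetical):

  -- EMR-specific statement: scans the table's location and adds any partition
  -- directories that are not yet registered in the metastore.
  ALTER TABLE apache_logs RECOVER PARTITIONS;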

Regards,
Puneet Khatod

From: Bertrand Dechoux [mailto:decho...@gmail.com]
Sent: 24 July 2012 21:15
To: user@hive.apache.org
Subject: Continuous log analysis requires 'dynamic' partitions, is that 
possible?

Hi,

Let's say logs are stored inside HDFS using the following file tree:
/<type>/<month>/<day>.
So for apache, that would be :
/apache/01/01
/apache/01/02
...
/apache/02/01
...

I would like to know how to define a table for this information. I found out
that the table should be external and should use partitions.
However, I did not find any way to dynamically create the partitions. Is there
no automatic way to define them?
In that case, the partition 'template' would be <month>/<day>, with the root
being apache.

I know how to 'hack a fix': create a script that generates all the "add
partition" statements and runs them without caring about the results, because
partitions may not exist or may already have been added.
Better, I could parse the result of 'show partitions' for the table and run only
the relevant statements, but it still feels like a hack.
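
(For the record, the statements such a script would emit look roughly like the sketch below; the table name and paths are illustrative, and IF NOT EXISTS needs a Hive version that supports it, so treat that part as an assumption.)

  -- One statement per day directory; IF NOT EXISTS makes re-running the script harmless.
  ALTER TABLE apache_logs ADD IF NOT EXISTS
    PARTITION (month='01', day='02') LOCATION '/apache/01/02';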

Is there any clean way to do it?

Regards,

Bertrand Dechoux


Not in clause in hive query

2012-07-18 Thread Puneet Khatod
Hi,

I am working on Hive 0.7. I am migrating SQL queries to Hive and facing issues
with queries that use the 'NOT IN' clause.

Example:
select * from customer where cust_id not in (12022,11783);

I am getting:
FAILED: Parse Error: line 1:38 cannot recognize input near ' cust_id ' 'not' 
'in' in expression specification.

Is there any alternative available in Hive to replicate the behaviour of the 'IN' and
'NOT IN' clauses?
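
(A common workaround on that version, offered here as my own sketch rather than something from the thread, is to rewrite a constant-list NOT IN as explicit inequality predicates:)

  -- Equivalent to "cust_id not in (12022, 11783)" for non-NULL cust_id values.
  select * from customer
  where cust_id <> 12022
    and cust_id <> 11783;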

Regards,
Puneet

From: Saggau, Arne [mailto:arne.sag...@ottogroup.com]
Sent: 18 July 2012 12:30
To: user@hive.apache.org
Subject: AW: not able to access Hive Web Interface

Hi,

you have to give the relative path in hive-site.xml.
So try using lib/hive-hwi-0.8.1.war

Regards
Arne

From: yogesh.kuma...@wipro.com [mailto:yogesh.kuma...@wipro.com]
Sent: Wednesday, 18 July 2012 08:41
To: user@hive.apache.org; bejoy...@yahoo.com
Subject: RE: not able to access Hive Web Interface
Importance: High

Hi all :-),

I am trying to access the Hive Web Interface but it fails.

I have these changes in hive-site.xml:




<property>
  <name>hive.hwi.listen.host</name>
  <value>0.0.0.0</value>
  <description>This is the host address the Hive Web Interface will listen on</description>
</property>

<property>
  <name>hive.hwi.listen.port</name>
  <value></value>
  <description>This is the port the Hive Web Interface will listen on</description>
</property>

<property>
  <name>hive.hwi.war.file</name>
  <value>/HADOOP/hive/lib/hive-hwi-0.8.1.war</value>  <!-- (Here is the hive directory) -->
  <description>This is the WAR file with the jsp content for Hive Web Interface</description>
</property>

***

And I also export the ANT lib, like:

export ANT_LIB=/Yogesh/ant-1.8.4/lib
export PATH=$PATH:$ANT_LIB


Now when I run the command

hive --service hwi    it results in:

12/07/17 18:03:02 INFO hwi.HWIServer: HWI is starting up
12/07/17 18:03:02 WARN conf.HiveConf: DEPRECATED: Ignoring hive-default.xml 
found on the CLASSPATH at /HADOOP/hive/conf/hive-default.xml
12/07/17 18:03:02 FATAL hwi.HWIServer: HWI WAR file not found at 
/HADOOP/hive/lib/hive-hwi-0.8.1.war


And if I run

hive --service hwi --help    it results in:

Usage ANT_LIB= hive --service hwi


Although when I go to the /HADOOP/hive/lib directory I find

1) hive-hwi-0.8.1.war
2) hive-hwi-0.8.1.jar

These files are present there.

What am I doing wrong :-( ?

Please help and suggest.

Greetings
Yogesh Kumar

From: Gesli, Nicole [nicole.ge...@memorylane.com]
Sent: Wednesday, July 18, 2012 12:50 AM
To: user@hive.apache.org; 
bejoy...@yahoo.com
Subject: Re: DATA UPLOADTION
For the Hive query approach, check the string functions 
(https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-StringFunctions)
 or write your own (UDF), if needed. It depends on what you are trying to get. 
Example:

SELECT TRIM(SUBSTR(data, LOCATE(LOWER(data), ' this '), LOCATE(LOWER(data), ' that ')+5)) my_string
FROM   log_table
WHERE  LOWER(data) LIKE '%this%and%that%'


From: Bejoy KS <bejoy...@yahoo.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>, "bejoy...@yahoo.com" <bejoy...@yahoo.com>
Date: Monday, July 16, 2012 11:39 PM
To: "user@hive.apache.org" <user@hive.apache.org>
Subject: Re: DATA UPLOADTION

Hi Yogesh

You can connect reporting tools like Tableau, MicroStrategy, etc. directly with
Hive.

If you are looking for static reports based on aggregate data, you can
process the data in Hive, move the resultant data into some RDBMS, and use some
common reporting tools over the same. I know quite a few projects following
this model.
Regards
Bejoy KS

Sent from handheld, please excuse typos.

From: <yogesh.kuma...@wipro.com>
Date: Tue, 17 Jul 2012 06:33:43 +
To: <user@hive.apache.org>; <bejoy...@yahoo.com>
Reply-To: user@hive.apache.org
Subject: RE: DATA UPLOADTION

Thanks Gesli and Bejoy,

I have created tables in Hive and uploaded data into them. I can run queries on
them; please suggest how to generate reports from those tables.

Mr. Gesli,
If I create a table with a single string column, like ( create table Log_table(
Data STRING); ), then how can I perform a condition-based query over the data in
Log_table ?


Thanks & Regards :-)
Yogesh Kumar

From: Gesli, Nicole [nicole.ge...@memorylane.com]
Sent: Monday, July 16, 2012 11:30 PM
To: user@hive.apache.org; bejoy...@yahoo.com
Cc: u...@hbase.apache.org
Subject: Re: DATA UPLOA

Error in Hive execution on cluster : Wrong FS: hdfs

2012-05-31 Thread Puneet Khatod
Hi,

I am facing the below error whenever I fire a query in Hive.
My Hive setup is present on the master node of my cluster. Hadoop is configured
using IP addresses in the configuration XMLs and in the masters/slaves files, and it is
running fine. The error only arises when a Hive query is executed whose table
location is on HDFS. It seems Hive expects the configuration to be done using
hostnames.

Please help me configure Hive so that it can understand an IP-address-based
configuration. I am using Hadoop 0.20.2 and Hive 0.7.1.
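
(A note on the likely cause, offered as an assumption rather than a confirmed fix: the "Wrong FS ... expected ..." mismatch in the trace usually means that fs.default.name and the URIs Hive builds for scratch directories and table locations use different authorities, one an IP address and the other a hostname. One option is to make fs.default.name on every node use exactly the same authority, IP or hostname, that appears in the table locations, for example:)

  <!-- core-site.xml sketch; 192.0.2.10 is a placeholder for the master node's IP. -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.0.2.10:9100</value>
  </property>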

Regards,
Puneet

________________________________________

hive> show tables;
OK
test_page_view
test_page_view_stg
Time taken: 131.309 seconds
hive>
> select * from test_page_view_stg;
FAILED: Hive Internal Error: java.lang.RuntimeException(Error while making MR 
scratch directory - check filesystem config (null))
java.lang.RuntimeException: Error while making MR scratch directory - check 
filesystem config (null)
at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:196)
at org.apache.hadoop.hive.ql.Context.getMRTmpFileURI(Context.java:247)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:900)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6594)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:340)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:164)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:241)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:456)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.lang.IllegalArgumentException: Wrong FS: 
hdfs://:9100/tmp/hive-hadoop/hive_2012-05-31_11-29-19_844_3368974040204630542,
 expected: hdfs://.local:9100
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.checkPath(DistributedFileSystem.java:99)
at org.apache.hadoop.fs.FileSystem.makeQualified(FileSystem.java:222)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.makeQualified(DistributedFileSystem.java:116)
at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:146)
at org.apache.hadoop.hive.ql.Context.getMRScratchDir(Context.java:190)
... 14 more
