Regards
Sanjiv Singh
Mob : +1 571-599-5236
sql.jdbc.Driver",
  partitionColumn = "id",
  lowerBound = 1,
  upperBound = maxId,
  numPartitions = 100
).load()
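The truncated snippet above configures Spark's partitioned JDBC source. As a hedged sketch of what the full call typically looks like in Spark 1.x (the URL, driver class, table name, and maxId value below are placeholder assumptions, not taken from the original mail):

```scala
// Sketch of a partitioned JDBC read in Spark 1.x; connection details are
// placeholder assumptions, not from the original thread.
val maxId = 1000000L // usually fetched first, e.g. SELECT MAX(id) FROM mytable

val df = sqlContext.read.format("jdbc").options(Map(
  "url"             -> "jdbc:mysql://dbhost:3306/mydb",
  "driver"          -> "com.mysql.jdbc.Driver",
  "dbtable"         -> "mytable",
  "partitionColumn" -> "id",            // numeric column the scan is split on
  "lowerBound"      -> "1",             // stride bounds, NOT row filters
  "upperBound"      -> maxId.toString,
  "numPartitions"   -> "100"            // up to 100 concurrent JDBC connections
)).load()
```

Note that lowerBound and upperBound only decide how the id range is divided among partitions; rows outside that range are still read.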
On Wed, Aug 10, 2016 at 6:35 AM, Siva A <siva9940261...@gmail.com> wrote:
Hi All,
We are using the following versions with Spark SQL:
- Hive: 1.2.1
- Spark: 1.3.1
- Hadoop: 2.7.1
Let me know if you need other details to debug the issue.
On Sun, Mar 13, 2016 at 1:07 AM, Mich Talebzadeh <mich.talebza...@gmail.com> wrote:
scala> sqlContext.table("default.foo").count // Gives 2, no compaction required.
compaction, Spark SQL starts recognizing delta files.
Let me know if you need other details to find the root cause.
On Tue, Feb 23, 2016 at 2:28 PM, Varadharajan Mukundan <srinath...@gmail.com> wrote:
> That's interesting. I'm not sure why first c
elta files
Now run major compaction:
hive> ALTER TABLE default.foo COMPACT 'MAJOR';
scala> sqlContext.table("default.foo").count // Gives 1
hive> insert into foo values(20);
scala> sqlContext.table("default.foo").count // Gives 2, no compaction required.
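For context, the progress of a queued compaction can be checked from Hive itself with standard Hive 1.x commands; a short sketch using the same table as above:

```sql
-- Queue a major compaction (merges base and delta files into a new base):
ALTER TABLE default.foo COMPACT 'MAJOR';

-- Monitor progress; the request moves through the states
-- 'initiated' -> 'working' -> 'ready for cleaning':
SHOW COMPACTIONS;
```

Until the compaction completes, Spark SQL in this version keeps reading only the base files, which is why the count lags behind Hive.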
m mytable ;
+------+
| _c0  |
+------+
| 44   |
+------+
1 row selected (1.196 seconds)
HIVE JDBC:
1: jdbc:hive2://myhost:1> select count(*) from mytable ;
+------+
| _c0  |
+------+
| 44   |
+------+
1 row selected (0.121 seconds)
db.db/mytable/delta_086_086
drwxr-xr-x - root hdfs 0 2016-02-23 11:41
/apps/hive/warehouse/mydb.db/mytable/delta_087_087
On Mon, Feb 22, 2016 at 1:38 PM, Varadharajan Mukundan <srinath...@gmail.com> wrote:
> Actual
The documentation is misleading sometimes.
On Mon, Feb 22, 2016 at 9:49 AM, Varadharajan Mukundan <srinath...@gmail.com> wrote:
> Yes, I was burned by this issue a couple of weeks back. This also means
> that after every insert job, compa
ws'='3',
'rawDataSize'='0',
'totalSize'='11383',
'transactional'='true',
'transient_lastDdlTime'='1455864121') ;
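The properties above ('transactional'='true') indicate a Hive ACID table. As a hedged sketch, such a table is typically declared like this in Hive 1.x (the column names and bucket count are illustrative, not from the original table):

```sql
-- Illustrative DDL; column names and bucket count are placeholders.
-- Hive 1.x ACID requires ORC storage, bucketing, and transactional=true.
CREATE TABLE mytable (id INT, name STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
```

Inserts into such a table land in delta_* directories under the table path, which is exactly what the directory listings earlier in the thread show.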
On Mon, Feb 22, 2016 at 9:01 AM, Varadharajan Mukundan <srinath...@gmail.com> wrote:
> Hi,
>
> Is the transaction
is working on plain Apache setup.
Let me know if you need other details.
Any help on this would be appreciated.
On Wed, Jan 27, 2016 at 10:25 PM, @Sanjiv Singh <sanjiv.is...@gmail.com> wrote:
> Hi Ted,
> It's a typo.
> On Wed, Jan 27, 2016 at 9:13 PM, T
ssor.java:55)
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:744)
ory and JDBC queries are
getting results.
conf/spark-env.sh (executor memory configuration not picked up by the thrift server)
export SPARK_JAVA_OPTS="-Dspark.executor.memory=512M"
export SPARK_EXECUTOR_MEMORY=512M
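SPARK_JAVA_OPTS was deprecated in Spark 1.x, and spark-env.sh settings are not always honored by the Thrift server start script. As a hedged sketch, two alternatives that the Thrift server does pick up (the value is illustrative):

```shell
# Pass the setting directly to the start script
# (it forwards spark-submit options):
./sbin/start-thriftserver.sh --executor-memory 512m

# Or set it in conf/spark-defaults.conf:
#   spark.executor.memory  512m
```

Command-line options passed to start-thriftserver.sh take precedence over spark-defaults.conf, which in turn takes precedence over most environment variables.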
On Thu, Jan 28, 2016 at 10:5
Hi Ted,
It's a typo.
On Wed, Jan 27, 2016 at 9:13 PM, Ted Yu <yuzhih...@gmail.com> wrote:
> In the last snippet, temptable is shown by 'show tables' command.
> Yet you queried tampTable.
>
> I believe this was just a typo :-)
>
he table through "show tables", but when I run the query, it either hangs or returns nothing.
Hi Karthik,
Can you provide more detail about the dataset that you want to parallelize with
SparkContext.parallelize(data);
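For reference, SparkContext.parallelize distributes a local collection across the cluster as an RDD; a minimal sketch with made-up data:

```scala
// Minimal parallelize sketch; the data is illustrative.
val data = Seq(1, 2, 3, 4, 5)
val rdd = sc.parallelize(data, 2) // split across 2 partitions
println(rdd.count())              // 5
```

The behavior (and any serialization error) depends heavily on the element type of the collection, which is why knowing what `data` actually is matters here.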
Regards,
Sanjiv Singh
On Sun, Oct 12, 2014 at 11:45 AM, rapelly kartheek <kartheek.m...@gmail.com> wrote:
Hi,
I am