automatically
as well to rewrite small HFiles as you are creating new ones (generating new
stats).
On 9/19/19 4:50 PM, Ankit Singhal wrote:
> Please schedule a compaction on the SYSTEM.STATS table to clear the old entries.
>
> On Thu, Sep 19, 2019 at 1:48 PM Stepan Migunov
> <mai
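For reference, a major compaction of the stats table can be triggered from the HBase shell. A minimal sketch, assuming the table is stored under its default name (if namespace mapping is enabled, the table lives at 'SYSTEM:STATS' instead):

```
# From the HBase shell: schedule a major compaction of the Phoenix stats table.
# Use 'SYSTEM:STATS' if phoenix.schema.isNamespaceMappingEnabled=true.
major_compact 'SYSTEM.STATS'
```

This only schedules the compaction; it completes asynchronously on the region servers.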
RegionServer logs/metrics? Any obvious
saturation issues (e.g. handlers consumed, JVM GC pauses, host CPU
saturation)?
Turn on DEBUG log4j client side (beware of chatty ZK logging) and see if
there's something obvious from when the EXPLAIN is slow.
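As a sketch of what that client-side change might look like, assuming the classic log4j 1.x log4j.properties used by the Phoenix client (logger names and appender layout here are illustrative; adjust for your setup):

```
# Client-side log4j.properties sketch: DEBUG for the Phoenix and HBase client,
# while keeping ZooKeeper quieter (its DEBUG output is very chatty).
log4j.rootLogger=INFO, console
log4j.logger.org.apache.phoenix=DEBUG
log4j.logger.org.apache.hadoop.hbase=DEBUG
log4j.logger.org.apache.zookeeper=WARN
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c: %m%n
```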
On 9/17/19 3:58 AM, Stepan Migunov wrote:
Hi
We have an issue with our production environment - from time to time we notice
a significant performance degradation for some queries. The strange thing is
that the EXPLAIN operator for these queries takes the same time as the query
execution itself (5 minutes or more). So, I guess, the issue is relat
apping.html
Francis
On 24/05/2018 5:23 AM, Stepan Migunov wrote:
> Thank you for the response, Josh!
>
> I got something like "Inconsistent namespace mapping properties" and
> thought it was because it's impossible to set
> "isNamespaceMappingEnabled" for the ODB
of PQS -- *not* the ODBC
driver. The trivial thing you can check would be to validate that the
hbase-site.xml which PQS references is up to date and that PQS was restarted
to pick up the newest version of hbase-site.xml
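For illustration, the property in question would appear in the hbase-site.xml on the PQS classpath roughly like this (a sketch; the value must match what the HBase cluster itself uses):

```xml
<!-- hbase-site.xml fragment as seen by PQS; must agree with the cluster -->
<property>
  <name>phoenix.schema.isNamespaceMappingEnabled</name>
  <value>true</value>
</property>
```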
On 5/22/18 4:16 AM, Stepan Migunov wrote:
Hi,
Is the ODBC driver from Hortonworks the only way to access Phoenix from .NET
code now?
The problem is that the driver has some critical limitations - it seems the
driver doesn't support Namespace Mapping (it cannot connect to Phoenix if
phoenix.schema.isNamespaceMappingEnabled=true)
rsion of Phoenix and HBase
you’re using. Your example should work as expected barring declaration of
the table as immutable or COL2 being part of the primary key.
Thanks,
James
On Fri, Apr 27, 2018 at 6:13 AM Stepan Migunov <
stepan.migu...@firstlinesoftware.com> wrote:
Hi,
Could you please clarify how I can set a value to NULL?
After upsert into temp.table (ROWKEY, COL1, COL2) values (100, "ABC", null);
the value of COL2 still has a previous value (COL1 has "ABC" as expected).
Or is the only way to set STORE_NULLS = true?
Thanks,
Stepan.
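A minimal SQL sketch of the behaviour under discussion (the table and columns are illustrative; note Phoenix string literals use single quotes, double quotes denote identifiers):

```sql
-- Illustrative mutable table; names are placeholders.
CREATE TABLE temp.t (rowkey BIGINT PRIMARY KEY, col1 VARCHAR, col2 VARCHAR);

UPSERT INTO temp.t (rowkey, col1, col2) VALUES (100, 'ABC', 'X');
UPSERT INTO temp.t (rowkey, col1, col2) VALUES (100, 'ABC', NULL);
-- Per James's reply: on a mutable table, with COL2 not part of the primary
-- key, the second upsert should clear COL2 so it reads back as NULL.
SELECT col2 FROM temp.t WHERE rowkey = 100;
```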
oop/blob/release-2.7.1/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StopWatch.java
On Tue, Mar 27, 2018, 4:47 AM Stepan Migunov <
stepan.migu...@firstlinesoftware.com> wrote:
Hi,
Phoenix 4.12.0-HBase-1.1, hadoop 2.6.4, hive 2.1.1
I have setup Hive for
Hi,
Phoenix 4.12.0-HBase-1.1, hadoop 2.6.4, hive 2.1.1
I have set up Hive to use external Phoenix tables. But after phoenix-hive.jar
was included in hive-site.xml, the Hive console throws an exception on some
operations (e.g. show databases, or queries with an order by clause):
Exception in threa
otherwise precluded by a different rowkey structure via logic such as a
skip-scan (which would be shown in the EXPLAIN plan).
You may actually find that using the built-in UPSERT SELECT logic may
out-perform the Spark integration since you aren't actually doing any
transformation logic insi
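A sketch of the built-in UPSERT SELECT approach being suggested here (table and column names are illustrative):

```sql
-- Server-driven copy between Phoenix tables, no Spark round-trip needed
-- when no transformation logic is involved:
UPSERT INTO target_table (rowkey, col1, col2)
  SELECT rowkey, col1, col2 FROM source_table;
```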
nix to Hive - 13310 sec
From Hive to Phoenix - > 30240 sec
We use Spark 2.2.1; HBase 1.1.2, Phoenix 4.13, Hive 2.1.1
So it seems that Spark + Phoenix leads to significant performance degradation.
Any thoughts?
On 2018/03/04 11:08:56, Stepan Migunov
wrote:
nix. We'd need you to
help quantify this better :)
On 3/4/18 6:08 AM, Stepan Migunov wrote:
In our software we need to combine fast interactive access to the data with
quite complex data processing. I know that Phoenix is intended for fast access,
but I hoped that I could also use Phoenix as a source for complex
processing with Spark. Unfortunately, Phoenix + Spark shows ver
Hi,
Could you please suggest how I can change the pool size / queue size when using
the thin client? I have added the following options to hbase-site.xml:
<property>
  <name>phoenix.query.threadPoolSize</name>
  <value>2000</value>
</property>
<property>
  <name>phoenix.query.queueSize</name>
  <value>10</value>
</property>
restarted HBase (master and region servers), but still receive the fo
Hi,
I have just upgraded my cluster to Phoenix 4.12 and got an issue with tasks
running on Spark 2.2 (yarn cluster mode). Any attempt to use the method
phoenixTableAsDataFrame to load data from an existing database causes an
exception (see below).
The tasks worked fine on version 4.11. I have check
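For context, the call in question looks roughly like this in the phoenix-spark integration (a sketch; the table name, column list, and ZooKeeper quorum are placeholders):

```scala
// Sketch of the phoenix-spark call under discussion; names are placeholders.
import org.apache.phoenix.spark._  // adds phoenixTableAsDataFrame to SQLContext

val df = sqlContext.phoenixTableAsDataFrame(
  "MY_TABLE",                      // Phoenix table to load
  Seq("ROWKEY", "COL1"),           // columns to read
  zkUrl = Some("zk-host:2181"))    // ZooKeeper quorum of the HBase cluster
df.show()
```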