From: jeff saremi <jeffsar...@hotmail.com>
Sent: Friday, August 4, 2017 2:25:52 PM
To: user@hbase.apache.org
Subject: Re: Baffling RPC exceptions with our Thrift servers
actually going further back in the RS logs I see these:
java.io.IOException: Got error, status m
Shutting down co4aap80c321419,16020,1502140564865 would force the
region to be re-assigned.
Thanks
Stephen
On Mon, Aug 7, 2017 at 5:10 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> How can this be fixed? I have run hbck -repair a few times but
>
> it doesnt make a diff.
>
How can this be fixed? I have run hbck -repair a few times but
it doesn't make a difference.
This is what's reported in the hbck log:
java.io.IOException: Region {ENCODED => ff1472457c0dba52ca09f464bf691244, NAME
=>
Just checked.
The keys are URLs. The region name has that, and the intended lookup key is also
a URL. So I think they are consistent.
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Monday, August 7, 2017 11:18:52 AM
To: user@hbase.apache.org
Subject: Re:
The key (which starts with http) looks to be different from
the region (which starts with the table name).
Is this expected?
On Mon, Aug 7, 2017 at 10:34 AM, jeff saremi <jeffsar...@hotmail.com> wrote:
> This happens so frequently to us and we still haven't figured out why
>
> We have this problem a
This happens so frequently to us and we still haven't figured out why
We have this problem a few times a day where the Thrift server reports errors
like:
2017-08-07 10:28:45,686 INFO [thrift-worker-54] client.RpcRetryingCaller: Call
exception, tries=31, retries=35, started=452454 ms ago,
run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Friday, August 4, 2017 2:22:54 PM
To: user@hbase.apache.org
Subject: Baffling RPC exceptions with our Thrift servers
Every once in a while (and this is getting more frequent) our Thrift clients
report errors all over.
I check say one of the Thrift server logs. I see a lot of lines like the
following:
2017-08-04 14:15:17,089 INFO [thrift-worker-29] client.RpcRetryingCaller: Call
exception, tries=14,
Please check the following parameter:
hbase.master.balancer.stochastic.writeRequestCost
Its default value is 5, much smaller than the default value for the region count
cost (500).
Consider raising the value so that the load balancer reacts more responsively.
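In hbase-site.xml, that suggestion might look like the following sketch; the value 50 is purely illustrative (not a recommendation from this thread) and should be tuned per cluster:

```xml
<!-- Raise the stochastic balancer's write-request cost so that write-heavy
     regions influence balancing decisions more strongly (default is 5). -->
<property>
  <name>hbase.master.balancer.stochastic.writeRequestCost</name>
  <value>50</value> <!-- illustrative value; tune for your workload -->
</property>
```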
On Thu, Jul 27, 2017 at 12:17 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> We haven't done enough testing for me to say this with certainty but as we
> insert data and new regions get created, it could be a while before those
> regions are distributed. As such and if the data injection
We haven't done enough testing for me to say this with certainty, but as we
insert data and new regions get created, it can be a while before those
regions are distributed. As such, if the data injection continues, the load
on the region servers becomes overwhelming.
Is there a way to
inside of
the RegionServer/Master process (note the maven module and the fact that
it's a template file for the webUI).
You would have to invoke an RPC to the server to get the list of Tasks.
I'm not sure if such an RPC already exists.
On 7/13/17 12:45 PM, jeff saremi wrote:
> would someone throw
would someone throw this dog a bone please? thanks
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Tuesday, July 11, 2017 10:55:40 AM
To: user@hbase.apache.org
Subject: How to get a list of running tasks in hbase shell?
I sent this earlier in another thread. Thought I'd create its own to get an
answer. thanks
How do you get an instance of TaskMonitor in JRuby (bin/hbase shell)?
I tried the following and it didn't produce anything:
java>
List tasks = taskMonitor.getTasks();
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Monday, July 10, 2017 11:20 AM
To: user@hbase.apache.org
Subject: Re: Command line tool to get the list of Failed Regions
yep it looks like sky is the limit wi
> HBCK can do this, something like `hbase hbck -summary `.
>
> This wouldn't be easily machine-consumable, but the content should be
> there.
>
>
> On 7/7/17 3:48 PM, jeff saremi wrote:
>
>> Is there a command line option that would give us a list of offline
>> Regions for a table? or a list of all regions and t
Subject: Re: Command line tool to get the list of Failed Regions
HBCK can do this, something like `hbase hbck -summary `.
This wouldn't be easily machine-consumable, but the content should be there.
On 7/7/17 3:48 PM, jeff saremi wrote:
> Is there a command line option that would give us a list of offline Regions
> for a table? or a
Is there a command line option that would give us a list of offline Regions for
a table? Or a list of all regions and their status, similar to the Tasks section
of the master-status web page?
thanks
We're seeing errors like these in our master logs. The problem is manifested on
the master-status page in section "Regions in Transition" in the following form
(in red):
4498c6d15d8b92f0519de59853d76992
ImageFeaturesTable,fa/mKp,1496337620103.4498c6d15d8b92f0519de59853d76992.
about the behavior of hbck
bq. See logs for detail
Did you get a clue from the hbck log?
Which HBase release are you using?
Please check out this JIRA:
HBASE-16008 A robust way deal with early termination of HBCK
Cheers
On Wed, May 31, 2017 at 3:00 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
I'm running hbck like the following:
bin\hbase.cmd hbck -repair
Then I get this exception printed out to the console. However the program does
not exit or seem to be doing anything else. When I kill it using CTRL-C, I'm
not able to run subsequent hbck commands unless I go and clear the locks
wonderful! thanks
From: Yu Li <car...@gmail.com>
Sent: Tuesday, May 30, 2017 4:33:59 AM
To: jeff saremi
Cc: d...@hbase.apache.org; hbase-user
Subject: Re: What is Dead Region Servers and how to clear them up?
Thanks for the confirmation Jeff, have opened
Yes Yu. What you're suggesting would work for us too and would still be
appreciated.
thanks a lot
jeff
From: Yu Li <car...@gmail.com>
Sent: Sunday, May 28, 2017 10:13:38 AM
To: jeff saremi
Cc: d...@hbase.apache.org; hbase-user
Subject: Re: What is Dead
:59 AM, jeff saremi <jeffsar...@hotmail.com>
wrote:
> Yes. we don't have fixed servers with the exceptions of ZK machines.
>
> We have 3 yarn jobs one for each of master, region, and thrift servers
> each launched separately with different number of nodes. I hope that's not
hbase processes inside Yarn
container ?
Cheers
On Sat, May 27, 2017 at 10:58 AM, jeff saremi <jeffsar...@hotmail.com>
wrote:
> Thanks @Yu Li<mailto:car...@gmail.com>
>
> You are absolutely correct. Dead RS's will happen regardless. My issue
> with this is more "psycho
n is the easiest way to identify.
>
> Enis
>
> On Fri, May 26, 2017 at 12:14 PM, jeff saremi <jeffsar...@hotmail.com>
> wrote:
>
> > thanks Enis
> >
> > I apologize for earlier
> >
> > This looks very close to our
rk is incredibly counterproductive.
-Dima
On Fri, May 26, 2017 at 11:03 AM, jeff saremi <jeffsar...@hotmail.com>
wrote:
> Thank you for the GFY answer
>
> And i guess to figure out how to fix these I can always go through the
> HBase source code.
>
>
band maintenance task and
will be cleaned up by the HMasters eventually.
On Fri, May 26, 2017 at 2:03 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> Thank you for the GFY answer
>
> And i guess to figure out how to fix these I can always go
names containing ".meta."
- Restart HBase master.
Upon restart, you can see that these do not show up anymore. For more
technical details, please refer to the jira link.
Enis
On Fri, May 26, 2017 at 11:03 AM, jeff saremi <jeffsar...@hotmail.com>
wrote:
> Thank you for the GFY
Subject: Re: What is Dead Region Servers and how to clear them up?
Sending this back to the user mailing list.
RegionServers can die for many reasons. Looking at your RegionServer log
files should give hints as to why it's happening.
-Dima
On Fri, May 26, 2017 at 9:48 AM, jeff saremi <jeffsar...@hotmail.com> wrote:
va/org/apache/hadoop/hbase/client/TestAsyncProcess.java#L1222
On Fri, May 26, 2017 at 12:47 PM, jeff saremi <jeffsar...@hotmail.com>
wrote:
> Hi Stack
>
> no there are no details in the exception. I mentioned that in another
> thread. When you perform a Batch operati
com> on behalf of Stack
<st...@duboce.net>
Sent: Friday, May 26, 2017 12:05:36 AM
To: Hbase-User
Subject: Re: What is the cause for RegionTooBusyException?
On Mon, May 22, 2017 at 9:31 AM, jeff saremi <jeffsar...@hotmail.com> wrote:
> while I'm still trying to find anything useful in
I'm still looking to get hints on how to remove the dead regions. thanks
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Wednesday, May 24, 2017 12:27:06 PM
To: user@hbase.apache.org
Subject: Re: What is Dead Region Servers and how to clear them up?
i'm
Can you describe the specific inconsistencies you were trying to resolve?
Depending on the inconsistencies, advice can be given on the best known
hbck command arguments to use.
Feel free to pastebin master log if needed.
On Wed, May 24, 2017 at 12:10 PM, jeff saremi <jeffsar...@hotmail.c
only 3 are
listed here with "-splitting" at the end of their names and they contain one
single file like: 1493846660401..meta.1493922323600.meta
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Wednesday, May 24, 2017 9:04:11 AM
To: user@hbase.ap
ent we can do is selectively logging which host(s) was
involved in the UnknownHostException's.
BTW was hbase-site.xml on the classpath of your client ?
On Tue, May 23, 2017 at 3:28 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> We get errors like below which are not helping at all.
We get errors like below, which are not helping at all.
For instance, in this case we have no clue what server/region it is talking
about. Is there a setting we missed, or could we somehow augment this
information to help us? We're using HBase 1.2.5.
Failed 10456 actions: UnknownHostException:
signed region do you have? You can try to assign them
> > manually in hbase shell
> >
> > On Tue, May 23, 2017 at 1:25 PM, jeff saremi <jeffsar...@hotmail.com>
> > wrote:
> >
> > > Are dead region servers to blame? Is this possibly stale information in
You should check the RS logs to see why regions cannot be assigned.
Get the RS name from the master log and check the RS log.
-Vlad
On Tue, May 23, 2017 at 11:47 AM, jeff saremi <jeffsar...@hotmail.com>
wrote:
> Our write code throws exceptions lik
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Tuesday, May 23, 2017 11:36:11 AM
To: user@hbase.apache.org
Subject: Regions in Transition: FAILED_CLOSE status
Why are a few hundred of our regions in this state? and what can we do to fix
this?
I have been running hbck a few times (is running one time enough?) to no avail.
Internet search does not come up with anything useful either.
I have restarted all masters and all region servers with no luck.
out of physical storage. The client could see slowness
due to many writes, but throwing exceptions was unheard of.
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Friday, May 19, 2017 8:18:59 PM
To: user@hbase.apache.org
Subject: Re: What is the
memstore limit, " +
Which hbase release are you using ?
Cheers
On Fri, May 19, 2017 at 3:59 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> We're getting errors like this. Where should we be looking into to solve
> this?
>
>
> Failed 69261 acti
ecific
RegionTooBusy exception.
cheers,
--James
On Fri, May 19, 2017 at 6:59 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> We're getting errors like this. Where should we be looking into to solve
> this?
>
>
> Failed 69261 actions: RegionTooBusyException: 12695 times,
We're getting errors like this. Where should we be looking into to solve this?
Failed 69261 actions: RegionTooBusyException: 12695 times,
RemoteWithExtrasException: 56566 times
thanks
Jeff
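For context on where the "memstore limit" answer above points: RegionTooBusyException on writes is typically a region blocking updates because its memstore exceeded the flush size times the block multiplier. A hedged hbase-site.xml sketch of the knobs usually involved (the values shown are the common 1.x defaults; verify them against your release):

```xml
<!-- A write blocks with RegionTooBusyException ("above memstore limit") once a
     region's memstore exceeds flush.size * block.multiplier. -->
<property>
  <name>hbase.hregion.memstore.flush.size</name>
  <value>134217728</value> <!-- 128 MB, the usual default -->
</property>
<property>
  <name>hbase.hregion.memstore.block.multiplier</name>
  <value>4</value> <!-- raising this delays blocking, at the cost of heap -->
</property>
```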
thanks Ted.
From: Ted Yu <yuzhih...@gmail.com>
Sent: Wednesday, May 3, 2017 3:32:12 PM
To: user@hbase.apache.org
Subject: Re: Is stop row included in the scan or not?
bq. stopRow - row to stop scanner before (exclusive)
On Wed, May 3, 2017 at 3:08 PM
By reading the docs for 1.2
(https://hbase.apache.org/1.2/apidocs/org/apache/hadoop/hbase/client/Scan.html)
I'm not able to tell whether the stop row is returned in the results from a Scan
or not. Could someone clear this up please? thanks
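The answer quoted above settles it: startRow is inclusive and stopRow is exclusive. A small self-contained Java sketch (plain unsigned-byte comparison, no HBase dependency) illustrates which rows a scan over [startRow, stopRow) would return:

```java
import java.util.ArrayList;
import java.util.List;

public class ScanRangeDemo {
    // Lexicographic comparison of unsigned bytes, the ordering HBase row keys use.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Rows selected by a scan: startRow inclusive, stopRow exclusive.
    static List<String> scan(String[] sortedRows, String start, String stop) {
        List<String> out = new ArrayList<>();
        for (String row : sortedRows) {
            if (compare(row.getBytes(), start.getBytes()) >= 0
                    && compare(row.getBytes(), stop.getBytes()) < 0) {
                out.add(row);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String[] rows = {"row1", "row2", "row3", "row4"};
        // A scan from "row2" to "row4" returns row2 and row3, but NOT row4.
        System.out.println(scan(rows, "row2", "row4"));
    }
}
```

A common trick when the stop row itself is wanted is to append a zero byte to it, making the bound effectively inclusive.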
�master:16000K��W�,�PBUF
cZxid = 0x1000a7f01
ctime = Mon Mar 27 16:50:52 UTC 2017
mZxid = 0x1000a7f17
mtime = Mon Mar 27 16:50:52 UTC 2017
pZxid = 0x1000a7f01
cversion = 0
dataVersion = 2
On Tue, Apr 25, 2017 at 4:09 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> BTW on the page
said. Let's get
tableExists and createTable the same for now
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Wednesday, April 26, 2017 8:31:23 AM
To: user@hbase.apache.org
Subject: Re: Baffling situation with tableExists and createTable
yes i had
me = Mon Mar 27 16:50:52 UTC 2017
> mZxid = 0x1000a7f17
> mtime = Mon Mar 27 16:50:52 UTC 2017
> pZxid = 0x1000a7f01
> cversion = 0
> dataVersion = 2
>
> On Tue, Apr 25, 2017 at 4:09 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
>
BTW on the page
http://localhost:16010/master-status#userTables
there is no sign of the supposedly existing table either
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Tuesday, April 25, 2017 4:05:56 PM
To: user@hbase.apache.org
Subject: Baffling situation with tableExists and createTable
I have a super simple piece of code which tries to create a test table if it
does not exist.
Calling admin.tableExists(TableName.valueOf(table)) returns false, causing
control to pass to the line that creates it:
admin.createTable(tableDescriptor). Then I get an exception that the table
f http://slider.incubator.apache.org/ (since you mentioned
Yarn) ?
Slider provides several methods of monitoring region server health.
FYI
On Wed, Mar 29, 2017 at 9:57 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> We have our region servers assigned by Yarn and occasionally we get a
> fal
We have our region servers assigned by Yarn, and occasionally we get a false
list of servers in ZooKeeper.
I'm writing a monitor program, and instead of pinging the server I'd like to
perform a simple query to get the health of a specific region server. Is this
possible? How? thanks
Jeff
Custom Compaction Policy
Have you taken look at http://hbase.apache.org/book.html#ops.date.tiered ?
Cheers
On Wed, Mar 22, 2017 at 12:29 PM, jeff saremi <jeffsar...@hotmail.com>
wrote:
> I mentioned some of this in another thread. We have a readonly database
> which get bulk loaded
I mentioned some of this in another thread. We have a readonly database which
gets bulk loaded using HFiles.
We want to keep only two versions/generations of data. Since the size of the
data is massive, we need to delete the older generation.
Since we write one single HFile for each region for each
The default Initial Heap Occupancy
Percentage (IHOP) in G1 is 45%. You can raise this, but I am not sure whether
having it above 80% is really advisable.
-Anoop-
On Fri, Mar 17, 2017 at 11:50 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> I'll go through these recommendations
ned on. From an upfront design perspective, make sure you pre-split your
tables so your first few bulk loads don't cause split and compaction
pains. Hope this helps!
On Fri, Mar 17, 2017 at 1:32 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> We're creating a readonly database and would like to
We're creating a readonly database and would like to know the recommended
optimizations we could do. We'd be loading data via direct writes to HFiles.
One thing I could immediately think of is to eliminate the memory for the
Memstore. What is the minimum that we could get away with?
How about
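For a read-only, bulk-loaded workload, a hedged starting point is shrinking the global memstore fraction and handing the heap to the block cache. The property names below are the standard 1.x ones, but the values are illustrative guesses (HBase enforces a floor on the memstore fraction and a cap on the combined total, so verify against your release):

```xml
<!-- Shrink the heap fraction reserved for memstores (default 0.4); for a
     read-only workload most of the heap is better spent on the block cache. -->
<property>
  <name>hbase.regionserver.global.memstore.size</name>
  <value>0.1</value> <!-- illustrative; HBase enforces a minimum -->
</property>
<property>
  <name>hfile.block.cache.size</name>
  <value>0.6</value> <!-- give the reclaimed heap to the read path -->
</property>
```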
To: user@hbase.apache.org
Subject: Re: Is deploying Region server as a YARN job a customary thing to do?
Related:
https://slider.incubator.apache.org/
Consider polling Slider mailing list.
FYI
On Mon, Mar 6, 2017 at 10:08 AM, jeff saremi <jeffsar...@hotmail.com> wrote:
> We have the option of run
We have the option of running our region server dynamically as a YARN job. I'd
like to know if this is what everyone else does. Is this recommended at all?
thanks
Yu
Of the patches attached to HBASE-15160, do I need to apply all (v2, v3, ...) or
just HBASE-15160.patch ?
Also how would I know against what version this patch was created?
thanks
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Friday, March 3, 2017
ritical path of writing request
However, for your original question, monitoring the whole trace of a
single request, I'm afraid there is no mature solution for the time being,
just as Stack mentioned.
Hope my answer helps (smile).
Best Regards,
Yu
On 4 March 2017 at 00:48, jeff saremi <jeffsar...@hot
Another way is to check the
http://<regionserver-host>:<info-port>/jmx
page of a running regionserver; you can see all metrics there.
Best Regards,
Yu
On 4 March 2017 at 01:54, jeff saremi <jeffsar...@hotmail.com> wrote:
> Is there a page listing all metrics in HBase?
>
> I have checked this:
>
> http:/
Is there a page listing all metrics in HBase?
I have checked this:
http://hbase.apache.org/book.html#hbase_metrics
but it lists only the most important ones, not the whole thing.
processing a request
On Thu, Mar 2, 2017 at 10:26 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> So i'd like to come back to my original question on how to get about
> separating the latency of HDFS from HBase.
>
>
That is a simple question to which we do not have an a
So I'd like to come back to my original question on how to go about separating
the latency of HDFS from HBase.
Is there a most appropriate log4j TRACE option that could print out this
information to the logs?
Thanks
From: jeff saremi <jeffsar...@hotmail.
HBASE-14451 goes in, an issue that has been put aside for a
while now. Sorry if you've burned time on this to date.
Yours,
St.Ack
On Thu, Mar 2, 2017 at 6:28 AM, jeff saremi <jeffsar...@hotmail.com> wrote:
> Where would i seek help for issues revolving around HTrace and zipkin?
> Here? Because I ha
oop/hbase/HTableDescriptor.html#getRegionReplication--
On Tue, Feb 28, 2017 at 3:30 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> Enis
>
> just one more question. How would i go about getting the count of the
> replica's for a table or columngroup? thanks
>
> _
Where would I seek help for issues revolving around HTrace and Zipkin? Here?
Because I have configured everything the way the documentation said, but I see
nothing in the Zipkin server or in the logs. Nothing at all.
From: jeff saremi <jeffsar...@hotmail.com>
etting detailed elapsed times in every stage of
processing a request
Have you looked at:
http://hbase.apache.org/book.html#tracing
On Tue, Feb 28, 2017 at 12:37 PM, jeff saremi <jeffsar...@hotmail.com>
wrote:
> I think we need to get detailed information from HBase RegionServer logs
>
I think we need to get detailed information from HBase RegionServer logs on how
a request (read or write) is processed. Specifically speaking, I need to know,
of say 100 ms spent processing a write, how much of it was spent
waiting for HDFS.
What is the most efficient way of enabling
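One hedged, low-effort starting point for the log-based approach is to raise verbosity only on the write path. The logger names below are assumptions based on the common HBase 1.x package layout (WAL classes live under `regionserver.wal`); confirm them against your running version before relying on the output:

```properties
# log4j.properties fragment: turn up only the write-path classes so the logs
# show WAL/HDFS sync timings without enabling TRACE for everything.
log4j.logger.org.apache.hadoop.hbase.regionserver.wal=TRACE
log4j.logger.org.apache.hadoop.hdfs.DFSClient=DEBUG
```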
you can do load-balancing with a get.setReplicaId(random() %
num_replicas) kind of pattern.
Enis
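The pattern Enis describes, picking a random replica per read, can be sketched in plain Java. The `Get` class here is a minimal stand-in for the real `org.apache.hadoop.hbase.client.Get` (which has a `setReplicaId(int)` method), and in real code the replica count would come from the table descriptor:

```java
import java.util.concurrent.ThreadLocalRandom;

public class ReplicaLoadBalance {
    // Minimal stand-in for org.apache.hadoop.hbase.client.Get: just enough
    // to show where the replica id would be set on a real request.
    static class Get {
        int replicaId = -1; // -1 means "default", i.e. the primary replica
        void setReplicaId(int id) { this.replicaId = id; }
    }

    // Spread reads across replicas: pick a replica id uniformly at random.
    static Get randomReplicaGet(int numReplicas) {
        Get get = new Get();
        get.setReplicaId(ThreadLocalRandom.current().nextInt(numReplicas));
        return get;
    }

    public static void main(String[] args) {
        int numReplicas = 3; // in real code: from the table's region replication
        Get get = randomReplicaGet(numReplicas);
        System.out.println("routing read to replica " + get.replicaId);
    }
}
```

As the follow-up below notes, reads routed to secondary replicas can return slightly stale data, so this only suits workloads that tolerate eventual consistency.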
On Thu, Feb 16, 2017 at 9:41 AM, Anoop John <anoop.hb...@gmail.com> wrote:
> Never saw this kind of discussion.
>
> -Anoop-
>
> On Thu, Feb 16, 2017 at 10:13 PM, jeff sa
This is specifically true for Regions metrics such as
Namespace_hbase_table_meta_region_1657623790_metric_storeCount=1.
Even though the Hadoop metrics2 framework allows for "tags" (which are called
dimensions everywhere else), HBase does not take full advantage of that and
instead spews
Keep one thing in mind: this data from secondary regions will be a bit
out of sync, as the replica is eventually consistent. For this
reason, changing the client so as to share the load across different RSs
might be tough.
-Anoop-
On Sun, Feb 12, 2017 at 8:13 AM, jeff saremi <jeffsar...@hotmail.c
When we use a FileSink to log metrics from HBase we can see more names than
when we use a custom metric sink. Is there something undocumented that we're
missing?
For instance when using FileSink, we can see WAL, RegionServer,
Replication,Server , and Regions in the log.
However if we use a
This turned out to be a result of multiple versions of Apache HttpComponents.
I made sure my code used the same versions of dependencies as the HBase
instance we had.
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Tuesday, February 14, 2017 12:10:58 PM
To
This is really not an HBase issue but rather a Hadoop metrics issue (and it may
not be that either), but I'll just post here to see if someone knows why this
is happening.
I've created a new Metrics2 Sink which will send the metrics over http to some
server.
I can test this locally with no
g/jira/browse/HBASE-10070.
Your first question would be answered by that document.
Cheers
On Sat, Feb 11, 2017 at 2:06 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> The first time I heard replicas in HBase the following thought immediately
> came to my mind:
> To alleviate t
The Thrift server is just wrapping the HBase
client you'd use in your own Java application and exposing it in a
different manner.
jeff saremi wrote:
> Thanks Josh
>
> I made a mistake in mentioning the master. It looks like the client contacts
> the Region Server which holds the Meta tab
http://hbase.apache.org/book.html#_architecture
jeff saremi wrote:
> I'd like to understand if there are any considerations on why one would use
> thrift versus the direct client?
>
> I was told that Thrift server allow key-caching which would result in faster
> key-to-regionserv
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
On Tue, Jan 24, 2017 at 11:17 PM, jeff saremi <jeffsar...@hotmail.com>
wrote:
> Has anyone found a re-usable way of exporting HBase metrics t
I'd like to understand if there are any considerations on why one would use
Thrift versus the direct client?
I was told that the Thrift server allows key-caching, which would result in
faster key-to-regionserver queries, as opposed to getting that from the HBase
master nodes. It would also alleviate
rently in master branch) ?
See
hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/example/datasources/AvroSource.scala
and
hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/DefaultSourceSuite.scala
for examples.
There may be other options.
FYI
On Fri, Jan 27, 2017 at 7:28 PM, jeff
se-spark/src/main/scala/org/apache/hadoop/hbase/spark/
> >> example/datasources/AvroSource.scala
> >> and hbase-spark/src/test/scala/org/apache/hadoop/hbase/spark/
> >> DefaultSourceSuite.scala
> >> for examples.
> >>
> >> There may be other options.
Hi
I'm seeking some pointers/guidance on what we could do to insert billions of
records that we already have in Avro files in Hadoop into HBase.
I read some articles online, and one of them recommended using the HFile
format. I took a cursory look at the documentation for that. Given the complexity
, 2017 at 8:49 PM jeff saremi <jeffsar...@hotmail.com> wrote:
> it looks like Jolokia is the required adapter for JMX based systems.
>
>
> From: jeff saremi <jeffsar...@hotmail.com>
> Sent: Tuesday, January 24, 2017 8:17 PM
> To:
Jan 24, 2017 at 1:42 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> We are enabling reader replicas. We're also using Thrift endpoints for our
> HBase. How could we enable Consistency.Timeline for Thrift server?
> thanks
>
> Jeff
>
We are enabling reader replicas. We're also using Thrift endpoints for our
HBase. How could we enable Consistency.Timeline for Thrift server?
thanks
Jeff
equest for them
That would be wonderful.
Please log a JIRA, polish the C# example and attach to the JIRA.
In HBase, we're not at the stage of reviewing / committing pull requests yet.
On Fri, Jan 13, 2017 at 3:45 PM, jeff saremi <jeffsar...@hotmail.com> wrote:
> sorry Ted for wa
can create a
pull request for them
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Friday, January 13, 2017 2:11 PM
To: user@hbase.apache.org
Subject: Re: HBase Thrift Client for C#: OutofMemoryException
Thanks Ted.
I looked at this. We didn'
issues-in-c-sharp
Can you take a look at the answer to see if it is relevant ?
Cheers
On Fri, Jan 13, 2017 at 11:10 AM, jeff saremi <jeffsar...@hotmail.com>
wrote:
> The result is the same. OutofMemoryException.
>
> I again ran my C++ client to make sure nothing weird is going
I did an x64 compilation.
I get a
{"Cannot read, Remote side has closed"} Thrift.TException
{Thrift.Transport.TTransportException}
with no further details.
From: jeff saremi <jeffsar...@hotmail.com>
Sent: Friday, January 13, 2017 1
To: user@hbase.apache.org
Subject: Re: HBase Thrift Client for C#: OutofMemoryException
Which Thrift version did you use to generate the C# code?
HBase uses 0.9.3.
Can you pastebin the whole stack trace for the exception?
I assume you run your code on a 64-bit machine.
Cheers
On Fri, Jan 13, 2017 at 9:53 AM, jeff
I have cloned the latest Thrift and HBase code. I used the Thrift generator to
generate C# code from
hbase-thrift\src\main\resources\org\apache\hadoop\hbase\thrift. Then I created
a single VS solution with the generated code, the Thrift lib for C#
(thrift\lib\csharp\src\Thrift.csproj), and I also added