Yes, I fear you have to shade the conflicting dependencies and create an uber-jar.
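For readers landing here later: shading is typically done at build time. Below is a minimal sketch of a Maven Shade Plugin configuration, assuming a Maven build; the plugin coordinates are real, but the relocation pattern (`org.json4s`) is only an illustration of the technique, not a confirmed fix for this thread's conflict.

```xml
<!-- pom.xml excerpt: build an uber-jar and relocate (shade) a
     conflicting dependency. The relocation pattern is illustrative. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.2.4</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>org.json4s</pattern>
            <shadedPattern>shaded.org.json4s</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

After `mvn package`, the shaded jar can be passed to `spark-shell --jars` as a single artifact instead of a list of individual jars.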
> On 17.02.2020 at 23:27, Mich Talebzadeh wrote:
>
>
> I stripped everything from the jar list. This is all I have
>
> spark-shell --jars shc-core-1.1.1-2.1-s_2.11.jar, \
> json4s-native_2.11-3.5.3.jar, \
> json
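A side note on the quoted command: `--jars` takes a single comma-separated list, and a space after a comma makes the shell treat the next jar as a separate argument. A sketch using the jar names from the thread (local paths assumed):

```shell
# Single comma-separated list, no spaces after the commas.
# Jar names are from the thread; adjust paths as needed.
spark-shell --jars shc-core-1.1.1-2.1-s_2.11.jar,json4s-native_2.11-3.5.3.jar
```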
Hi Mich!
Please try to keep your thread on a single mailing list. If you do need to move a discussion to a new list, it's much easier for people there if you give a brief summary of the discussion and a pointer to the original thread (lists.apache.org is great for this).
It looks like you're using "SHC", aka the "Spark HBase Connector".
Hi,
Does anyone have any further suggestions for the error I reported below, please?
Thanks,
Mich
This might be due to the serializer being used.
This stackoverflow answer might help:
https://stackoverflow.com/questions/44414429/spark-negativearraysizeexception
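The linked answer points at the Kryo serializer's buffer growing past the 2 GB array limit. A hedged sketch of the usual mitigations, with illustrative values and a hypothetical application jar name:

```shell
# Option 1: cap the Kryo buffer below 2 GB (value illustrative)
spark-submit \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.kryoserializer.buffer.max=512m \
  your-app.jar

# Option 2: fall back to the Java serializer to rule Kryo out
spark-submit \
  --conf spark.serializer=org.apache.spark.serializer.JavaSerializer \
  your-app.jar
```

If the exception disappears under the Java serializer, that strongly suggests a serialization-size issue rather than a bug in the generated Catalyst code.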
On Sun, Feb 23, 2020 at 1:39 PM Proust (Feng Guizhou) [Travel Search &
Discovery] wrote:
> Hi, Spark Users
>
> I encounter below Nega
Hi, Spark Users
I encounter the NegativeArraySizeException below when running Spark SQL. The Catalyst-generated code for "apply2_19" and "apply1_11" is attached, along with the related DTO.
It is difficult to understand how this problem could happen; please help if you have any idea.
I can see maybe https://issues.apache.
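For context on the exception itself: a NegativeArraySizeException is thrown when an array is allocated with a negative length, which typically happens when a 32-bit length field overflows. This is a minimal standalone illustration of the mechanism, not the user's code:

```scala
// Minimal illustration: an Int length field overflows past
// Int.MaxValue and wraps to a negative value, so the array
// allocation throws NegativeArraySizeException.
object NegativeArrayDemo {
  def main(args: Array[String]): Unit = {
    val claimedLength: Int = Int.MaxValue + 1 // wraps to Int.MinValue
    try {
      val buf = new Array[Byte](claimedLength)
      println(s"allocated ${buf.length} bytes")
    } catch {
      case e: NegativeArraySizeException =>
        println(s"caught: $e")
    }
  }
}
```

In the Spark case, the negative size usually comes from a corrupted or oversized serialized length, which is why the serializer settings above are worth checking first.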