[ https://issues.apache.org/jira/browse/HIVE-21240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16771441#comment-16771441 ]

BELUGA BEHR commented on HIVE-21240:
------------------------------------

I do not believe this failed unit test is related.  Please consider the latest 
patch for inclusion into the project. [^HIVE-24240.8.patch] 

{code:java}
2019-02-18T14:55:57,783 DEBUG [pool-17-thread-1] clients.NetworkClient: [Consumer clientId=958935173, groupId=] Initiating connection to node localhost:9093 (id: -1 rack: null)
2019-02-18T14:55:57,785 DEBUG [pool-17-thread-1] network.Selector: [Consumer clientId=958935173, groupId=] Connection with localhost/127.0.0.1 disconnected
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_191]
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:1.8.0_191]
        at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:152) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:471) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.kafka.common.network.Selector.poll(Selector.java:425) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:271) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:218) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.kafka.clients.consumer.internals.Fetcher.getTopicMetadata(Fetcher.java:274) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1774) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.kafka.clients.consumer.KafkaConsumer.partitionsFor(KafkaConsumer.java:1742) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.hadoop.hive.kafka.KafkaInputFormat.fetchTopicPartitions(KafkaInputFormat.java:189) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.hadoop.hive.kafka.KafkaInputFormat.lambda$buildFullScanFromKafka$0(KafkaInputFormat.java:96) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.hadoop.hive.kafka.RetryUtils.retry(RetryUtils.java:93) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.hadoop.hive.kafka.RetryUtils.retry(RetryUtils.java:116) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.hadoop.hive.kafka.RetryUtils.retry(RetryUtils.java:109) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.hadoop.hive.kafka.KafkaInputFormat.buildFullScanFromKafka(KafkaInputFormat.java:98) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at org.apache.hadoop.hive.kafka.KafkaInputFormat.lambda$computeSplits$5(KafkaInputFormat.java:135) ~[kafka-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]
2019-02-18T14:55:57,787 DEBUG [pool-17-thread-1] clients.NetworkClient: [Consumer clientId=958935173, groupId=] Node -1 disconnected.
2019-02-18T14:55:57,787  WARN [pool-17-thread-1] clients.NetworkClient: [Consumer clientId=958935173, groupId=] Connection to node -1 could not be established. Broker may not be available.
2019-02-18T14:55:57,787 DEBUG [pool-17-thread-1] internals.ConsumerNetworkClient: [Consumer clientId=958935173, groupId=] Cancelled request with header RequestHeader(apiKey=METADATA, apiVersion=6, clientId=958935173, correlationId=32) due to node -1 being disconnected
2019-02-18T14:55:57,888 DEBUG [pool-17-thread-1] clients.NetworkClient: [Consumer clientId=958935173, groupId=] Give up sending metadata request since no node is available
2019-02-18T14:55:57,990 DEBUG [pool-17-thread-1] clients.NetworkClient: [Consumer clientId=958935173, groupId=] Give up sending metadata request since no node is available
2019-02-18T14:55:58,056 DEBUG [pool-17-thread-1] clients.NetworkClient: [Consumer clientId=958935173, groupId=] Give up sending metadata request since no node is available
{code}
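
The ConnectException above is simply the Kafka broker at localhost:9093 being unreachable while the test ran, and the RetryUtils frames show the metadata fetch was already going through a retry loop when it gave up. As a rough sketch of that retry pattern (class name, signature, and backoff policy here are illustrative, not the actual Hive RetryUtils API):

```java
import java.util.concurrent.Callable;

// Illustrative retry-with-backoff helper in the spirit of the frames above;
// not Hive's RetryUtils -- names and behavior are assumptions for this sketch.
final class RetrySketch {
    static <T> T retry(Callable<T> task, int maxAttempts, long backoffMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;                      // remember the most recent failure
                if (attempt < maxAttempts) {
                    Thread.sleep(backoffMs);   // fixed pause before the next attempt
                }
            }
        }
        throw last;                            // every attempt failed
    }
}
```

With a broker that never comes up, as in the log above, all attempts fail and the last ConnectException propagates out of the retry loop.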

> JSON SerDe Re-Write
> -------------------
>
>                 Key: HIVE-21240
>                 URL: https://issues.apache.org/jira/browse/HIVE-21240
>             Project: Hive
>          Issue Type: Improvement
>          Components: Serializers/Deserializers
>    Affects Versions: 4.0.0, 3.1.1
>            Reporter: BELUGA BEHR
>            Assignee: BELUGA BEHR
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
>         Attachments: HIVE-21240.1.patch, HIVE-21240.1.patch, 
> HIVE-21240.2.patch, HIVE-21240.3.patch, HIVE-21240.4.patch, 
> HIVE-21240.5.patch, HIVE-21240.6.patch, HIVE-21240.7.patch, 
> HIVE-21240.8.patch, HIVE-21240.8.patch, HIVE-24240.8.patch, 
> HIVE-24240.8.patch, HIVE-24240.8.patch, HIVE-24240.8.patch
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The JSON SerDe has a few issues, I will link them to this JIRA.
> * Use Jackson Tree parser instead of manually parsing
> * Added support for base-64 encoded data (the expected format when using JSON)
> * Added support to skip blank lines (returns all columns as null values)
> * Current JSON parser accepts, but does not apply, custom timestamp formats 
> in most cases
> * Added some unit tests
> * Added cache for column-name to column-index searches, currently O\(n\) for 
> each row processed, for each column in the row



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
