[jira] [Commented] (FLUME-2820) Support New Kafka APIs

2016-01-06 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15086952#comment-15086952
 ] 

Ping Wang commented on FLUME-2820:
--

Hi Jeff, will the fix version be 1.7.0?  Will there be a patch for the 
released Flume 1.6.0?  Looking forward to this update, thanks. 


> Support New Kafka APIs
> --
>
> Key: FLUME-2820
> URL: https://issues.apache.org/jira/browse/FLUME-2820
> Project: Flume
>  Issue Type: Improvement
>Reporter: Jeff Holoman
>Assignee: Jeff Holoman
>
> Flume utilizes the older Kafka producer and consumer APIs. We should add 
> implementations of the Source, Channel and Sink that utilize the new APIs.
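
For context, a minimal agent-configuration sketch of what a source built on the new consumer API could look like. The property names (kafka.bootstrap.servers, kafka.topics) are assumptions modeled on the new clients' configuration keys, not necessarily the names this change uses:

```properties
# Hypothetical sketch: a Kafka source configured against the new consumer API,
# pointing at brokers directly instead of ZooKeeper.
a1.sources = r1
a1.channels = c1
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.channels = c1
# New-API style: bootstrap brokers replace the old zookeeperConnect setting.
a1.sources.r1.kafka.bootstrap.servers = broker1:9092,broker2:9092
a1.sources.r1.kafka.topics = my-topic
```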



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLUME-2491) Include Kite morphline dependencies for Morphline Solr sink

2016-03-09 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15186824#comment-15186824
 ] 

Ping Wang commented on FLUME-2491:
--

Hi,
I'm also struggling with installing the dependency jars for 
MorphlineSolrSink, and found there's a JIRA for this. Is there any plan to 
deliver the fix? 
Thanks.

> Include Kite morphline dependencies for Morphline Solr sink
> ---
>
> Key: FLUME-2491
> URL: https://issues.apache.org/jira/browse/FLUME-2491
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.5.0.1
>Reporter: Roshan Naik
>Assignee: Roshan Naik
> Attachments: FLUME-2491.patch
>
>
> Currently, the kite morphline library version required by flume does not 
> appear to be available for download or direct customer install by end user. 
> So pulling them into the flume binary distribution through maven.





[jira] [Commented] (FLUME-2433) Add kerberos support for Hive sink

2016-03-19 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202772#comment-15202772
 ] 

Ping Wang commented on FLUME-2433:
--

Hi Roshan,
I am using Flume with Hive in a Kerberos cluster and found your patch here, but 
I hit a problem after applying your fix. I searched for "Server not found in 
Kerberos database (7) - UNKNOWN_SERVER" and tried the solutions from several 
websites, but none helped. Could you please give me some pointers? 
Thanks a lot! 
..
2016-03-19 01:28:44,253 (hive-k1-call-runner-0) [INFO - 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:377)]
 Trying to connect to metastore with URI thrift://***:9083
2016-03-19 01:28:44,316 (hive-k1-call-runner-0) [DEBUG - 
org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:243)] 
opening transport org.apache.thrift.transport.TSaslClientTransport@34ae96b5
2016-03-19 01:28:44,334 (hive-k1-call-runner-0) [ERROR - 
org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:296)] SASL 
negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Server not found in Kerberos 
database (7) - UNKNOWN_SERVER)]
at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at 
org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at 
org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at 
org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at 
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at 
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at 
org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:432)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:237)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:182)
at 
org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.(HiveClientCache.java:330)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at 
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:86)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at 
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:118)
at 
org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:231)
at 
org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:227)
at 
com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
at 
org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:227)
at 
org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:202)
at 
org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:558)
at 
org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.getMetaStoreClient(HiveEndPoint.java:448)
at 
org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.(HiveEndPoint.java:274)
at 
org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.(HiveEndPoint.java:243)
at 
org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnectionImpl(HiveEndPoint.ja

[jira] [Created] (FLUME-2912) thrift Sources/Sinks can only authenticate with kerberos principal in format with hostname

2016-05-23 Thread Ping Wang (JIRA)
Ping Wang created FLUME-2912:


 Summary: thrift Sources/Sinks can only authenticate with kerberos 
principal in  format with hostname
 Key: FLUME-2912
 URL: https://issues.apache.org/jira/browse/FLUME-2912
 Project: Flume
  Issue Type: Bug
  Components: Sinks+Sources
Affects Versions: v1.6.0
Reporter: Ping Wang
 Fix For: v1.7.0


When using Thrift Sources/Sinks in a Kerberos environment, the Flume agents
only work with a principal in the format "name/_h...@your-realm.com".  
Using another valid principal in the format "n...@your-realm.com" hits the 
ERROR "GSS initiate failed".  

Here's the configuration file:
g1.sources.source1.type = spooldir
g1.sources.source1.spoolDir = /test
g1.sources.source1.fileHeader = false
g1.sinks.sink1.type = thrift
g1.sinks.sink1.hostname = localhost
g1.sinks.sink1.port = 5
g1.channels.channel1.type = memory
g1.channels.channel1.capacity = 1000
g1.channels.channel1.transactionCapacity = 100
g1.sources.source1.channels = channel1
g1.sinks.sink1.channel = channel1
g2.sources = source2
g2.sinks = sink2
g2.channels = channel2
g2.sources.source2.type = thrift
g2.sources.source2.bind = localhost
g2.sources.source2.port = 5
g2.sinks.sink2.type = hdfs
g2.sinks.sink2.hdfs.path = /tmp
g2.sinks.sink2.hdfs.filePrefix = thriftData
g2.sinks.sink2.hdfs.writeFormat = Text
g2.sinks.sink2.hdfs.fileType = DataStream
g2.channels.channel2.type = memory
g2.channels.channel2.capacity = 1000
g2.channels.channel2.transactionCapacity = 100
g2.sources.source2.channels = channel2
g2.sinks.sink2.channel = channel2
g1.sinks.sink1.kerberos = true
g1.sinks.sink1.client-principal = flume/hostn...@xxx.com
g1.sinks.sink1.client-keytab
= /etc/security/keytabs/flume-1563.server.keytab
g1.sinks.sink1.server-principal = flume/hostn...@xxx.com
g2.sources.source2.kerberos = true
g2.sources.source2.agent-principal = flume/hostn...@xxx.com
g2.sources.source2.agent-keytab
= /etc/security/keytabs/flume-1563.server.keytab

Using another valid principal like "t...@ibm.com" as below hits the error:

g1.sinks.sink1.kerberos = true
g1.sinks.sink1.client-principal = t...@ibm.com
g1.sinks.sink1.client-keytab = /home/test/test.keytab
g1.sinks.sink1.server-principal = t...@ibm.com
g2.sources.source2.kerberos = true
g2.sources.source2.agent-principal = t...@ibm.com
g2.sources.source2.agent-keytab = /home/test/test.keytab


Agent g1:
ERROR server.TThreadPoolServer: Error occurred during processing of
message.
java.lang.RuntimeException:
org.apache.thrift.transport.TTransportException: Peer indicated failure:
GSS initiate failed
    at org.apache.thrift.transport.TSaslServerTransport
$Factory.getTransport(TSaslServerTransport.java:219)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run
(TThreadPoolServer.java:189)
    at java.util.concurrent.ThreadPoolExecutor.runWorker
(ThreadPoolExecutor.java:1142)

Agent g2:
ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by
GSSException: No valid credentials provided (Mechanism level: Server not
found in Kerberos database (7) - UNKNOWN_SERVER)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge
(GssKrb5Client.java:211)
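
The strict format requirement can be illustrated with a small, self-contained sketch: Thrift's GSSAPI transport needs a separate service name and server host, which the sink derives by splitting server-principal on "/". The parseServerPrincipal helper and the principals below are hypothetical (not Flume's actual code); the sketch only shows why a simple principal with no host part cannot satisfy that split:

```java
public class PrincipalSketch {

    // Hypothetical helper (not Flume's actual code): split a Kerberos server
    // principal into the [service, host] pair that Thrift's GSSAPI transport
    // needs as its protocol and serverName arguments.
    static String[] parseServerPrincipal(String principal) {
        String withoutRealm = principal.split("@", 2)[0]; // drop "@REALM"
        String[] parts = withoutRealm.split("/");
        if (parts.length != 2) {
            // A simple principal such as "test@EXAMPLE.COM" carries no host
            // component, so no service/host pair can be derived -- the strict
            // limit this issue reports.
            throw new IllegalArgumentException(
                "expected service/host@REALM, got: " + principal);
        }
        return parts;
    }

    public static void main(String[] args) {
        String[] ok = parseServerPrincipal("flume/host1.example.com@EXAMPLE.COM");
        System.out.println(ok[0] + " / " + ok[1]);
        try {
            parseServerPrincipal("test@EXAMPLE.COM");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```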










[jira] [Commented] (FLUME-2912) thrift Sources/Sinks can only authenticate with kerberos principal in format with hostname

2016-05-23 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297753#comment-15297753
 ] 

Ping Wang commented on FLUME-2912:
--

Thrift Source/Sink Kerberos authentication was enabled via FLUME-2631.
In our test, only a host-based service principal can authenticate; a simple 
principal like s...@example.com cannot. This is not flexible, and it would be 
better to relax this strict limit.

> thrift Sources/Sinks can only authenticate with kerberos principal in  format 
> with hostname
> ---
>
> Key: FLUME-2912
> URL: https://issues.apache.org/jira/browse/FLUME-2912
> Project: Flume
>  Issue Type: Bug
>  Components: Sinks+Sources
>Affects Versions: v1.6.0
>Reporter: Ping Wang
> Fix For: v1.7.0
>
>
> When using Thrift Sources/Sinks in a Kerberos environment, the Flume agents
> only work with a principal in the format "name/_h...@your-realm.com".  
> Using another valid principal in the format "n...@your-realm.com" hits the 
> ERROR "GSS initiate failed".  
> Here's the configuration file:
> g1.sources.source1.type = spooldir
> g1.sources.source1.spoolDir = /test
> g1.sources.source1.fileHeader = false
> g1.sinks.sink1.type = thrift
> g1.sinks.sink1.hostname = localhost
> g1.sinks.sink1.port = 5
> g1.channels.channel1.type = memory
> g1.channels.channel1.capacity = 1000
> g1.channels.channel1.transactionCapacity = 100
> g1.sources.source1.channels = channel1
> g1.sinks.sink1.channel = channel1
> g2.sources = source2
> g2.sinks = sink2
> g2.channels = channel2
> g2.sources.source2.type = thrift
> g2.sources.source2.bind = localhost
> g2.sources.source2.port = 5
> g2.sinks.sink2.type = hdfs
> g2.sinks.sink2.hdfs.path = /tmp
> g2.sinks.sink2.hdfs.filePrefix = thriftData
> g2.sinks.sink2.hdfs.writeFormat = Text
> g2.sinks.sink2.hdfs.fileType = DataStream
> g2.channels.channel2.type = memory
> g2.channels.channel2.capacity = 1000
> g2.channels.channel2.transactionCapacity = 100
> g2.sources.source2.channels = channel2
> g2.sinks.sink2.channel = channel2
> g1.sinks.sink1.kerberos = true
> g1.sinks.sink1.client-principal = flume/hostn...@xxx.com
> g1.sinks.sink1.client-keytab
> = /etc/security/keytabs/flume-1563.server.keytab
> g1.sinks.sink1.server-principal = flume/hostn...@xxx.com
> g2.sources.source2.kerberos = true
> g2.sources.source2.agent-principal = flume/hostn...@xxx.com
> g2.sources.source2.agent-keytab
> = /etc/security/keytabs/flume-1563.server.keytab
> Using another valid principal like "t...@ibm.com" as below hits the error:
> g1.sinks.sink1.kerberos = true
> g1.sinks.sink1.client-principal = t...@ibm.com
> g1.sinks.sink1.client-keytab = /home/test/test.keytab
> g1.sinks.sink1.server-principal = t...@ibm.com
> g2.sources.source2.kerberos = true
> g2.sources.source2.agent-principal = t...@ibm.com
> g2.sources.source2.agent-keytab = /home/test/test.keytab
> Agent g1:
> ERROR server.TThreadPoolServer: Error occurred during processing of
> message.
> java.lang.RuntimeException:
> org.apache.thrift.transport.TTransportException: Peer indicated failure:
> GSS initiate failed
>     at org.apache.thrift.transport.TSaslServerTransport
> $Factory.getTransport(TSaslServerTransport.java:219)
>     at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run
> (TThreadPoolServer.java:189)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker
> (ThreadPoolExecutor.java:1142)
> Agent g2:
> ERROR transport.TSaslTransport: SASL negotiation failure
> javax.security.sasl.SaslException: GSS initiate failed [Caused by
> GSSException: No valid credentials provided (Mechanism level: Server not
> found in Kerberos database (7) - UNKNOWN_SERVER)]
>     at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge
> (GssKrb5Client.java:211)





[jira] [Updated] (FLUME-2427) java.lang.NoSuchMethodException and warning on HDFS (S3) sink

2016-10-11 Thread Ping Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-2427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ping Wang updated FLUME-2427:
-
Attachment: FLUME-2427-0.patch

I also hit this issue, and agree that it causes confusion; it would be better 
to hide the exception from the log. I attached my fix for this, please review 
it. Thanks!
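
The warning comes from a reflective capability probe: the HDFS sink looks up optional methods such as isFileClosed and getNumCurrentReplicas at runtime and treats their absence as "feature unavailable". A minimal, hypothetical sketch of that pattern (using java.lang.String methods as stand-ins, since the real S3 classes aren't needed to show the idea):

```java
import java.lang.reflect.Method;

public class ReflectProbeSketch {

    // Sketch of the reflective probe the HDFS sink performs: look up an
    // optional method and treat its absence as "feature unavailable" rather
    // than an error worth a full stack trace in the log. The class and method
    // names here stand in for NativeS3FileSystem.isFileClosed(Path).
    static Method probe(Class<?> cls, String name, Class<?>... argTypes) {
        try {
            return cls.getMethod(name, argTypes);
        } catch (NoSuchMethodException e) {
            // Expected on filesystems (e.g. S3) that don't expose the method;
            // a quieter log line without the exception avoids the confusion.
            return null;
        }
    }

    public static void main(String[] args) {
        // String has length() but no isFileClosed(), mimicking both outcomes.
        System.out.println("length available: "
            + (probe(String.class, "length") != null));
        System.out.println("isFileClosed available: "
            + (probe(String.class, "isFileClosed") != null));
    }
}
```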

> java.lang.NoSuchMethodException and warning on HDFS (S3) sink 
> --
>
> Key: FLUME-2427
> URL: https://issues.apache.org/jira/browse/FLUME-2427
> Project: Flume
>  Issue Type: Question
>Reporter: Bijith Kumar
>Priority: Minor
> Attachments: FLUME-2427-0.patch
>
>
> The below warning and Exception is thrown every time a file is written to S3 
> using HDFS sink. Looks like a jar mismatch to me. Tried latest hadoop and 
> jets3 jars  but didn't work 
> 17 Jul 2014 23:30:18,159 INFO  [hdfs-s3sink-engagements-call-runner-6] 
> (org.apache.flume.sink.hdfs.AbstractHDFSWriter.reflectGetNumCurrentReplicas:184)
>   - FileSystem's output stream doesn't support getNumCurrentReplicas; 
> --HDFS-826 not available; 
> fsOut=org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream;
>  err=java.lang.NoSuchMethodException: 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.getNumCurrentReplicas()
> 17 Jul 2014 23:30:18,160 WARN  
> [SinkRunner-PollingRunner-DefaultSinkProcessor] 
> (org.apache.flume.sink.hdfs.BucketWriter.getRefIsClosed:210)  - isFileClosed 
> is not available in the version of HDFS being used. Flume will not attempt to 
> close files if the close fails on the first attempt
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem.isFileClosed(org.apache.hadoop.fs.Path)
>   at java.lang.Class.getMethod(Class.java:1665)
>   at 
> org.apache.flume.sink.hdfs.BucketWriter.getRefIsClosed(BucketWriter.java:207)
>   at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:295)
>   at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:554)
>   at 
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:426)
>   at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>   at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>   at java.lang.Thread.run(Thread.java:744)





[jira] [Created] (FLUME-3012) Sending a term signal can not shutdown Flume agent when KafkaChannel starting has exceptions

2016-10-21 Thread Ping Wang (JIRA)
Ping Wang created FLUME-3012:


 Summary: Sending a term signal can not shutdown Flume agent when 
KafkaChannel starting has exceptions
 Key: FLUME-3012
 URL: https://issues.apache.org/jira/browse/FLUME-3012
 Project: Flume
  Issue Type: Bug
  Components: Kafka Channel
Affects Versions: v1.6.0
 Environment: Flume 1.6.0+Kafka 0.9 
Reporter: Ping Wang
 Fix For: v1.8.0


Use Kafka Channel in the agent configuration:
tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1
tier1.sources.source1.type = exec
tier1.sources.source1.command = /usr/bin/vmstat 1
tier1.sources.source1.channels = channel1
tier1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
tier1.channels.channel1.kafka.bootstrap.servers = a.b.c.d:6667
tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /tmp/kafka/channel
tier1.sinks.sink1.hdfs.rollInterval = 5
tier1.sinks.sink1.hdfs.rollSize = 0
tier1.sinks.sink1.hdfs.rollCount = 0
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.channel = channel1

If kafka.bootstrap.servers is accidentally set incorrectly, errors will be thrown:
..
)] Waiting for channel: channel1 to start. Sleeping for 500 ms
2016-10-21 01:15:50,739 (conf-file-poller-0) [INFO - 
org.apache.flume.node.Application.startAllComponents(Application.java:161)] 
Waiting for channel: channel1 to start. Sleeping for 500 ms
2016-10-21 01:15:51,240 (conf-file-poller-0) [INFO - 
org.apache.flume.node.Application.startAllComponents(Application.java:161)] 
Waiting for channel: channel1 to start. Sleeping for 500 ms
2016-10-21 01:15:51,735 (lifecycleSupervisor-1-6) [INFO - 
org.apache.flume.channel.kafka.KafkaChannel.start(KafkaChannel.java:115)] 
Starting Kafka Channel: channel1
2016-10-21 01:15:51,737 (lifecycleSupervisor-1-6) [INFO - 
org.apache.kafka.common.config.AbstractConfig.logAll(AbstractConfig.java:165)] 
ProducerConfig values: 
compression.type = none
metric.reporters = []
metadata.max.age.ms = 30
metadata.fetch.timeout.ms = 6
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [a.b.c.d:6667]
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 33554432
timeout.ms = 3
key.serializer = class 
org.apache.kafka.common.serialization.StringSerializer
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.keystore.type = JKS
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
max.block.ms = 6
sasl.kerberos.min.time.before.relogin = 6
connections.max.idle.ms = 54
ssl.truststore.password = null
max.in.flight.requests.per.connection = 5
metrics.num.samples = 2
client.id = 
ssl.endpoint.identification.algorithm = null
ssl.protocol = TLS
request.timeout.ms = 3
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
acks = all
batch.size = 16384
ssl.keystore.location = null
receive.buffer.bytes = 32768
ssl.cipher.suites = null
ssl.truststore.type = JKS
security.protocol = PLAINTEXT
retries = 0
max.request.size = 1048576
value.serializer = class 
org.apache.kafka.common.serialization.ByteArraySerializer
ssl.truststore.location = null
ssl.keystore.password = null
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 3
partitioner.class = class 
org.apache.kafka.clients.producer.internals.DefaultPartitioner
send.buffer.bytes = 131072
linger.ms = 0

2016-10-21 01:15:51,742 (lifecycleSupervisor-1-6) [DEBUG - 
org.apache.kafka.common.metrics.Metrics.sensor(Metrics.java:201)] Added sensor 
with name bufferpool-wait-time
2016-10-21 01:15:51,743 (lifecycleSupervisor-1-6) [DEBUG - 
org.apache.kafka.common.metrics.Metrics.sensor(Metrics.java:201)] Added sensor 
with name buffer-exhausted-records
2016-10-21 01:15:51,744 (lifecycleSupervisor-1-6) [INFO - 
org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:613)] 
Closing the Kafka producer with timeoutMillis = 0 ms.
2016-10-21 01:15:51,744 (lifecycleSupervisor-1-6) [DEBUG - 
org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:654)] 
The Kafka producer has closed.
2016-10-21 01:15:51,744 (lifecycleSupervisor-1-6) [ERROR - 
org.apache.flume.lifecycle.LifecycleSupervisor$MonitorRunnable.run(LifecycleSupervisor.java:253)]
 Unable to start org.apache.flume.channel.kafka.KafkaChannel{name: channel1} - 
Exception follows.
org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at 
org.apache.kafka.clients.pro

[jira] [Updated] (FLUME-3012) Sending a term signal can not shutdown Flume agent when KafkaChannel starting has exceptions

2016-10-21 Thread Ping Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ping Wang updated FLUME-3012:
-
Attachment: threaddumps.log

Attaching the thread dump from jstack.

> Sending a term signal can not shutdown Flume agent when KafkaChannel starting 
> has exceptions
> 
>
> Key: FLUME-3012
> URL: https://issues.apache.org/jira/browse/FLUME-3012
> Project: Flume
>  Issue Type: Bug
>  Components: Kafka Channel
>Affects Versions: v1.6.0
> Environment: Flume 1.6.0+Kafka 0.9 
>Reporter: Ping Wang
> Fix For: v1.8.0
>
> Attachments: threaddumps.log
>
>
> Use Kafka Channel in the agent configuration:
> tier1.sources = source1
> tier1.channels = channel1
> tier1.sinks = sink1
> tier1.sources.source1.type = exec
> tier1.sources.source1.command = /usr/bin/vmstat 1
> tier1.sources.source1.channels = channel1
> tier1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
> tier1.channels.channel1.kafka.bootstrap.servers = a.b.c.d:6667
> tier1.sinks.sink1.type = hdfs
> tier1.sinks.sink1.hdfs.path = /tmp/kafka/channel
> tier1.sinks.sink1.hdfs.rollInterval = 5
> tier1.sinks.sink1.hdfs.rollSize = 0
> tier1.sinks.sink1.hdfs.rollCount = 0
> tier1.sinks.sink1.hdfs.fileType = DataStream
> tier1.sinks.sink1.channel = channel1
> If kafka.bootstrap.servers is accidentally set incorrectly, errors will be 
> thrown:
> ..
> )] Waiting for channel: channel1 to start. Sleeping for 500 ms
> 2016-10-21 01:15:50,739 (conf-file-poller-0) [INFO - 
> org.apache.flume.node.Application.startAllComponents(Application.java:161)] 
> Waiting for channel: channel1 to start. Sleeping for 500 ms
> 2016-10-21 01:15:51,240 (conf-file-poller-0) [INFO - 
> org.apache.flume.node.Application.startAllComponents(Application.java:161)] 
> Waiting for channel: channel1 to start. Sleeping for 500 ms
> 2016-10-21 01:15:51,735 (lifecycleSupervisor-1-6) [INFO - 
> org.apache.flume.channel.kafka.KafkaChannel.start(KafkaChannel.java:115)] 
> Starting Kafka Channel: channel1
> 2016-10-21 01:15:51,737 (lifecycleSupervisor-1-6) [INFO - 
> org.apache.kafka.common.config.AbstractConfig.logAll(AbstractConfig.java:165)]
>  ProducerConfig values: 
>   compression.type = none
>   metric.reporters = []
>   metadata.max.age.ms = 30
>   metadata.fetch.timeout.ms = 6
>   reconnect.backoff.ms = 50
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   bootstrap.servers = [a.b.c.d:6667]
>   retry.backoff.ms = 100
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   buffer.memory = 33554432
>   timeout.ms = 3
>   key.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   sasl.kerberos.service.name = null
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   ssl.keystore.type = JKS
>   ssl.trustmanager.algorithm = PKIX
>   block.on.buffer.full = false
>   ssl.key.password = null
>   max.block.ms = 6
>   sasl.kerberos.min.time.before.relogin = 6
>   connections.max.idle.ms = 54
>   ssl.truststore.password = null
>   max.in.flight.requests.per.connection = 5
>   metrics.num.samples = 2
>   client.id = 
>   ssl.endpoint.identification.algorithm = null
>   ssl.protocol = TLS
>   request.timeout.ms = 3
>   ssl.provider = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   acks = all
>   batch.size = 16384
>   ssl.keystore.location = null
>   receive.buffer.bytes = 32768
>   ssl.cipher.suites = null
>   ssl.truststore.type = JKS
>   security.protocol = PLAINTEXT
>   retries = 0
>   max.request.size = 1048576
>   value.serializer = class 
> org.apache.kafka.common.serialization.ByteArraySerializer
>   ssl.truststore.location = null
>   ssl.keystore.password = null
>   ssl.keymanager.algorithm = SunX509
>   metrics.sample.window.ms = 3
>   partitioner.class = class 
> org.apache.kafka.clients.producer.internals.DefaultPartitioner
>   send.buffer.bytes = 131072
>   linger.ms = 0
> 2016-10-21 01:15:51,742 (lifecycleSupervisor-1-6) [DEBUG - 
> org.apache.kafka.common.metrics.Metrics.sensor(Metrics.java:201)] Added 
> sensor with name bufferpool-wait-time
> 2016-10-21 01:15:51,743 (lifecycleSupervisor-1-6) [DEBUG - 
> org.apache.kafka.common.metrics.Metrics.sensor(Metrics.java:201)] Added 
> sensor with name buffer-exhausted-records
> 2016-10-21 01:15:51,744 (lifecycleSupervisor-1-6) [INFO - 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:613)]
>  Closing the Kafka producer with timeoutMillis = 0 ms.
> 2016-10-21 01:15:51,744 (lifecycleSupervisor-1-6) [DEBUG - 
> org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.ja

[jira] [Commented] (FLUME-3012) Sending a term signal can not shutdown Flume agent when KafkaChannel starting has exceptions

2016-10-21 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15594537#comment-15594537
 ] 

Ping Wang commented on FLUME-3012:
--

I found flume-ng-node/src/main/java/org/apache/flume/node/Application.java 
already has a shutdown hook to handle SIGTERM:

final Application appReference = application;
Runtime.getRuntime().addShutdownHook(new Thread("agent-shutdown-hook") {
  @Override
  public void run() {
    appReference.stop();
  }
});

In this scenario, the shutdown hook was invoked, but the stop operation never completed. 
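
A JDK-only sketch (hypothetical class and thread names, not Flume's actual code) of this kind of hang: if stop() waits without a bound on a thread that never finishes, the shutdown hook never returns and the JVM cannot exit on SIGTERM; interrupting the worker, as a fix might do, lets shutdown complete:

```java
public class ShutdownSketch {

    // Returns {aliveBeforeInterrupt, aliveAfterInterrupt}.
    static boolean[] demo() throws InterruptedException {
        // Stand-in for a component whose start() never finishes, like a
        // channel endlessly retrying an unreachable broker.
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(Long.MAX_VALUE); // endless "connect" loop
            } catch (InterruptedException e) {
                // interruption is the only way out
            }
        }, "lifecycle-sketch");
        worker.setDaemon(true);
        worker.start();

        // An unbounded join here would model the hanging stop(): it never
        // returns, so the shutdown hook never completes on SIGTERM.
        worker.join(100);
        boolean aliveBefore = worker.isAlive();

        // A stop() that interrupts the stuck worker lets shutdown finish.
        worker.interrupt();
        worker.join(1000);
        boolean aliveAfter = worker.isAlive();
        return new boolean[] {aliveBefore, aliveAfter};
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] r = demo();
        System.out.println("alive before interrupt: " + r[0]
            + ", after interrupt: " + r[1]);
    }
}
```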


> Sending a term signal can not shutdown Flume agent when KafkaChannel starting 
> has exceptions
> 
>
> Key: FLUME-3012
> URL: https://issues.apache.org/jira/browse/FLUME-3012
> Project: Flume
>  Issue Type: Bug
>  Components: Kafka Channel
>Affects Versions: v1.6.0
> Environment: Flume 1.6.0+Kafka 0.9 
>Reporter: Ping Wang
> Fix For: v1.8.0
>
> Attachments: threaddumps.log
>
>
> Use Kafka Channel in the agent configuration:
> tier1.sources = source1
> tier1.channels = channel1
> tier1.sinks = sink1
> tier1.sources.source1.type = exec
> tier1.sources.source1.command = /usr/bin/vmstat 1
> tier1.sources.source1.channels = channel1
> tier1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
> tier1.channels.channel1.kafka.bootstrap.servers = a.b.c.d:6667
> tier1.sinks.sink1.type = hdfs
> tier1.sinks.sink1.hdfs.path = /tmp/kafka/channel
> tier1.sinks.sink1.hdfs.rollInterval = 5
> tier1.sinks.sink1.hdfs.rollSize = 0
> tier1.sinks.sink1.hdfs.rollCount = 0
> tier1.sinks.sink1.hdfs.fileType = DataStream
> tier1.sinks.sink1.channel = channel1
> If kafka.bootstrap.servers is accidentally set incorrectly, errors will be 
> thrown:
> ..
> )] Waiting for channel: channel1 to start. Sleeping for 500 ms
> 2016-10-21 01:15:50,739 (conf-file-poller-0) [INFO - 
> org.apache.flume.node.Application.startAllComponents(Application.java:161)] 
> Waiting for channel: channel1 to start. Sleeping for 500 ms
> 2016-10-21 01:15:51,240 (conf-file-poller-0) [INFO - 
> org.apache.flume.node.Application.startAllComponents(Application.java:161)] 
> Waiting for channel: channel1 to start. Sleeping for 500 ms
> 2016-10-21 01:15:51,735 (lifecycleSupervisor-1-6) [INFO - 
> org.apache.flume.channel.kafka.KafkaChannel.start(KafkaChannel.java:115)] 
> Starting Kafka Channel: channel1
> 2016-10-21 01:15:51,737 (lifecycleSupervisor-1-6) [INFO - 
> org.apache.kafka.common.config.AbstractConfig.logAll(AbstractConfig.java:165)]
>  ProducerConfig values: 
>   compression.type = none
>   metric.reporters = []
>   metadata.max.age.ms = 30
>   metadata.fetch.timeout.ms = 6
>   reconnect.backoff.ms = 50
>   sasl.kerberos.ticket.renew.window.factor = 0.8
>   bootstrap.servers = [a.b.c.d:6667]
>   retry.backoff.ms = 100
>   sasl.kerberos.kinit.cmd = /usr/bin/kinit
>   buffer.memory = 33554432
>   timeout.ms = 3
>   key.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>   sasl.kerberos.service.name = null
>   sasl.kerberos.ticket.renew.jitter = 0.05
>   ssl.keystore.type = JKS
>   ssl.trustmanager.algorithm = PKIX
>   block.on.buffer.full = false
>   ssl.key.password = null
>   max.block.ms = 6
>   sasl.kerberos.min.time.before.relogin = 6
>   connections.max.idle.ms = 54
>   ssl.truststore.password = null
>   max.in.flight.requests.per.connection = 5
>   metrics.num.samples = 2
>   client.id = 
>   ssl.endpoint.identification.algorithm = null
>   ssl.protocol = TLS
>   request.timeout.ms = 3
>   ssl.provider = null
>   ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>   acks = all
>   batch.size = 16384
>   ssl.keystore.location = null
>   receive.buffer.bytes = 32768
>   ssl.cipher.suites = null
>   ssl.truststore.type = JKS
>   security.protocol = PLAINTEXT
>   retries = 0
>   max.request.size = 1048576
>   value.serializer = class 
> org.apache.kafka.common.serialization.ByteArraySerializer
>   ssl.truststore.location = null
>   ssl.keystore.password = null
>   ssl.keymanager.algorithm = SunX509
>   metrics.sample.window.ms = 3
>   partitioner.class = class 
> org.apache.kafka.clients.producer.internals.DefaultPartitioner
>   send.buffer.bytes = 131072
>   linger.ms = 0
> 2016-10-21 01:15:51,742 (lifecycleSupervisor-1-6) [DEBUG - 
> org.apache.kafka.common.metrics.Metrics.sensor(Metrics.java:201)] Added 
> sensor with name bufferpool-wait-time
> 2016-10-21 01:15:51,743 (lifecycleSupervisor-1-6) [DEBUG - 
> org.apache.kafka.common.metrics.M

[jira] [Created] (FLUME-3026) Add Kafka 0.10 support for Flume

2016-11-18 Thread Ping Wang (JIRA)
Ping Wang created FLUME-3026:


 Summary: Add Kafka 0.10 support for Flume  
 Key: FLUME-3026
 URL: https://issues.apache.org/jira/browse/FLUME-3026
 Project: Flume
  Issue Type: Improvement
  Components: Channel, Sinks+Sources
Affects Versions: v1.7.0
Reporter: Ping Wang


Kafka 0.10 has been released, and it introduces some new APIs that are not 
compatible with the old ones. When building against Kafka 0.10, the unit tests 
hit compilation errors:
...
[ERROR] 
/root/flume/flume-ng-sinks/flume-ng-kafka-sink/src/test/java/org/apache/flume/sink/kafka/TestKafkaSink.java:[526,14]
 error: method createTopic in class AdminUtils cannot be applied to given types;
...
[ERROR] 
/root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java:[23,19]
 error: cannot find symbol
[ERROR] package kafka.common
[ERROR] 
/root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java:[109,13]
 error: cannot find symbol
[ERROR] class TestKafkaSource
[ERROR] 
/root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/KafkaSourceEmbeddedKafka.java:[134,14]
 error: method createTopic in class AdminUtils cannot be applied to given types;
...






[jira] [Updated] (FLUME-3026) Add Kafka 0.10 support for Flume

2016-11-18 Thread Ping Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLUME-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ping Wang updated FLUME-3026:
-
Attachment: FLUME-3026.patch

> Add Kafka 0.10 support for Flume  
> --
>
> Key: FLUME-3026
> URL: https://issues.apache.org/jira/browse/FLUME-3026
> Project: Flume
>  Issue Type: Improvement
>  Components: Channel, Sinks+Sources
>Affects Versions: v1.7.0
>Reporter: Ping Wang
> Attachments: FLUME-3026.patch
>
>
> Kafka 0.10 has already been released, and it introduces new APIs that are not 
> compatible with the old ones. When building against Kafka 0.10, the unit tests 
> hit compilation errors:
> ...
> [ERROR] 
> /root/flume/flume-ng-sinks/flume-ng-kafka-sink/src/test/java/org/apache/flume/sink/kafka/TestKafkaSink.java:[526,14]
>  error: method createTopic in class AdminUtils cannot be applied to given 
> types;
> ...
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java:[23,19]
>  error: cannot find symbol
> [ERROR] package kafka.common
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java:[109,13]
>  error: cannot find symbol
> [ERROR] class TestKafkaSource
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/KafkaSourceEmbeddedKafka.java:[134,14]
>  error: method createTopic in class AdminUtils cannot be applied to given 
> types;
> ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLUME-3026) Add Kafka 0.10 support for Flume

2016-11-18 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15676509#comment-15676509
 ] 

Ping Wang commented on FLUME-3026:
--

I attached a patch created against trunk. Please take a look, thanks.

> Add Kafka 0.10 support for Flume  
> --
>
> Key: FLUME-3026
> URL: https://issues.apache.org/jira/browse/FLUME-3026
> Project: Flume
>  Issue Type: Improvement
>  Components: Channel, Sinks+Sources
>Affects Versions: v1.7.0
>Reporter: Ping Wang
> Attachments: FLUME-3026.patch
>
>
> Kafka 0.10 has already been released, and it introduces new APIs that are not 
> compatible with the old ones. When building against Kafka 0.10, the unit tests 
> hit compilation errors:
> ...
> [ERROR] 
> /root/flume/flume-ng-sinks/flume-ng-kafka-sink/src/test/java/org/apache/flume/sink/kafka/TestKafkaSink.java:[526,14]
>  error: method createTopic in class AdminUtils cannot be applied to given 
> types;
> ...
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java:[23,19]
>  error: cannot find symbol
> [ERROR] package kafka.common
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java:[109,13]
>  error: cannot find symbol
> [ERROR] class TestKafkaSource
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/KafkaSourceEmbeddedKafka.java:[134,14]
>  error: method createTopic in class AdminUtils cannot be applied to given 
> types;
> ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLUME-2933) Does Flume support Kafka 0.10 version

2016-11-18 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15676514#comment-15676514
 ] 

Ping Wang commented on FLUME-2933:
--

Hi, I created FLUME-3026 with a patch attached to support Kafka 0.10.

> Does Flume support Kafka 0.10 version
> -
>
> Key: FLUME-2933
> URL: https://issues.apache.org/jira/browse/FLUME-2933
> Project: Flume
>  Issue Type: Question
>  Components: Docs
>Reporter: Joseph Aliase
>
> Does Flume support Kafka 0.10 version or there is a plan to support Kafka 0.10



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLUME-2662) Upgrade to Commons-IO 2.4

2016-11-20 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15681059#comment-15681059
 ] 

Ping Wang commented on FLUME-2662:
--

FLUME-2717 is also about upgrading Commons-IO to 2.4. Please mark it as a duplicate.

> Upgrade to Commons-IO 2.4
> -
>
> Key: FLUME-2662
> URL: https://issues.apache.org/jira/browse/FLUME-2662
> Project: Flume
>  Issue Type: Bug
>  Components: Build
>Affects Versions: v1.5.1
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>  Labels: Dependencies
> Attachments: FLUME-2662.patch
>
>
> Hadoop 2.7 is now switching to apache-commons-io v2.4. Hbase 1.0 is also 
> using commons-io v2.4.
> Flume is currently at 2.1.
> Flume runs into issues like this when tests run against them:
> testSequenceFile(org.apache.flume.sink.hdfs.TestUseRawLocalFileSystem)  Time 
> elapsed: 77 sec  <<< ERROR!
> java.lang.NoClassDefFoundError: org/apache/commons/io/Charsets
> at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:854)
> at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:273)
> at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:582)
> at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:98)
> at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:78)
> at org.apache.flume.sink.hdfs.HDFSSequenceFile.open(HDFSSequenceFile.java:69)
> at 
> org.apache.flume.sink.hdfs.TestUseRawLocalFileSystem.testSequenceFile(TestUseRawLocalFileSystem.java:89)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> I am planning to submit a patch to upgrade commons-io to 2.4. Just wanted to 
> be cautious and check whether we have witnessed any issues in the past 
> when upgrading Apache Commons libraries.
> Based on what I see here:
> http://commons.apache.org/proper/commons-io/upgradeto2_4.html  and
> http://commons.apache.org/proper/commons-io/upgradeto2_2.html
> Commons-IO 2.4 is binary compat with 2.2 which is in turn binary compat
> with 2.1.
> There is what they call a "rare" case of source incompat as described in
> https://issues.apache.org/jira/browse/IO-318
> Doesn't look like we are affected.
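For the NoClassDefFoundError above, a quick way to confirm which classes the runtime can actually see is a presence probe. This is a hypothetical diagnostic helper, not part of Flume; the assumption is that org.apache.commons.io.Charsets is the class Hadoop 2.7 expects, which older commons-io jars lack:

```java
// Hypothetical classpath diagnostic; not part of Flume.
public class ClasspathCheck {

    // Report whether a class can be loaded, without initializing it.
    public static boolean isPresent(String className) {
        try {
            Class.forName(className, false,
                          ClasspathCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // JDK class: always present.
        System.out.println(isPresent("java.nio.charset.StandardCharsets"));
        // False unless a commons-io jar providing Charsets is on the
        // classpath -- the false case mirrors the test failure above.
        System.out.println(isPresent("org.apache.commons.io.Charsets"));
    }
}
```

Running this with the same classpath as the failing test would show directly whether the commons-io version in use provides the class Hadoop references.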



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLUME-3026) Add Kafka 0.10 support for Flume

2016-11-20 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15681066#comment-15681066
 ] 

Ping Wang commented on FLUME-3026:
--

Hi Jeff, FLUME-2820 is a closed JIRA for supporting Kafka 0.9 in Flume 1.7.0. 
In my scenario, I am using the newer Kafka 0.10.1.0 and hit incompatible APIs, 
so I opened this JIRA to support Kafka 0.10.

> Add Kafka 0.10 support for Flume  
> --
>
> Key: FLUME-3026
> URL: https://issues.apache.org/jira/browse/FLUME-3026
> Project: Flume
>  Issue Type: Improvement
>  Components: Channel, Sinks+Sources
>Affects Versions: v1.7.0
>Reporter: Ping Wang
> Attachments: FLUME-3026.patch
>
>
> Kafka 0.10 has already been released, and it introduces new APIs that are not 
> compatible with the old ones. When building against Kafka 0.10, the unit tests 
> hit compilation errors:
> ...
> [ERROR] 
> /root/flume/flume-ng-sinks/flume-ng-kafka-sink/src/test/java/org/apache/flume/sink/kafka/TestKafkaSink.java:[526,14]
>  error: method createTopic in class AdminUtils cannot be applied to given 
> types;
> ...
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java:[23,19]
>  error: cannot find symbol
> [ERROR] package kafka.common
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java:[109,13]
>  error: cannot find symbol
> [ERROR] class TestKafkaSource
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/KafkaSourceEmbeddedKafka.java:[134,14]
>  error: method createTopic in class AdminUtils cannot be applied to given 
> types;
> ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLUME-3026) Add Kafka 0.10 support for Flume

2016-11-21 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-3026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15682945#comment-15682945
 ] 

Ping Wang commented on FLUME-3026:
--

Hi Jeff, thanks for the information. I see: since the next Flume version is not 
yet decided, which Kafka version to support cannot be decided either. You can 
cancel this JIRA.

> Add Kafka 0.10 support for Flume  
> --
>
> Key: FLUME-3026
> URL: https://issues.apache.org/jira/browse/FLUME-3026
> Project: Flume
>  Issue Type: Improvement
>  Components: Channel, Sinks+Sources
>Affects Versions: v1.7.0
>Reporter: Ping Wang
> Attachments: FLUME-3026.patch
>
>
> Kafka 0.10 has already been released, and it introduces new APIs that are not 
> compatible with the old ones. When building against Kafka 0.10, the unit tests 
> hit compilation errors:
> ...
> [ERROR] 
> /root/flume/flume-ng-sinks/flume-ng-kafka-sink/src/test/java/org/apache/flume/sink/kafka/TestKafkaSink.java:[526,14]
>  error: method createTopic in class AdminUtils cannot be applied to given 
> types;
> ...
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java:[23,19]
>  error: cannot find symbol
> [ERROR] package kafka.common
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/TestKafkaSource.java:[109,13]
>  error: cannot find symbol
> [ERROR] class TestKafkaSource
> [ERROR] 
> /root/flume/flume-ng-sources/flume-kafka-source/src/test/java/org/apache/flume/source/kafka/KafkaSourceEmbeddedKafka.java:[134,14]
>  error: method createTopic in class AdminUtils cannot be applied to given 
> types;
> ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLUME-2427) java.lang.NoSuchMethodException and warning on HDFS (S3) sink

2017-02-20 Thread Ping Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/FLUME-2427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15874192#comment-15874192
 ] 

Ping Wang commented on FLUME-2427:
--

Thanks Mike! 

> java.lang.NoSuchMethodException and warning on HDFS (S3) sink 
> --
>
> Key: FLUME-2427
> URL: https://issues.apache.org/jira/browse/FLUME-2427
> Project: Flume
>  Issue Type: Question
>Reporter: Bijith Kumar
>Assignee: Ping Wang
>Priority: Minor
> Fix For: v1.8.0
>
> Attachments: FLUME-2427-0.patch
>
>
> The warning and exception below are thrown every time a file is written to S3 
> using the HDFS sink. Looks like a jar mismatch to me. Tried the latest hadoop 
> and jets3 jars, but that didn't work.
> 17 Jul 2014 23:30:18,159 INFO  [hdfs-s3sink-engagements-call-runner-6] 
> (org.apache.flume.sink.hdfs.AbstractHDFSWriter.reflectGetNumCurrentReplicas:184)
>   - FileSystem's output stream doesn't support getNumCurrentReplicas; 
> --HDFS-826 not available; 
> fsOut=org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream;
>  err=java.lang.NoSuchMethodException: 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.getNumCurrentReplicas()
> 17 Jul 2014 23:30:18,160 WARN  
> [SinkRunner-PollingRunner-DefaultSinkProcessor] 
> (org.apache.flume.sink.hdfs.BucketWriter.getRefIsClosed:210)  - isFileClosed 
> is not available in the version of HDFS being used. Flume will not attempt to 
> close files if the close fails on the first attempt
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem.isFileClosed(org.apache.hadoop.fs.Path)
>   at java.lang.Class.getMethod(Class.java:1665)
>   at 
> org.apache.flume.sink.hdfs.BucketWriter.getRefIsClosed(BucketWriter.java:207)
>   at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:295)
>   at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:554)
>   at 
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:426)
>   at 
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>   at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>   at java.lang.Thread.run(Thread.java:744)
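The WARN above is expected behavior rather than a failure: BucketWriter probes optional HDFS methods reflectively and degrades gracefully when a filesystem (here NativeS3FileSystem) does not provide them. A minimal sketch of that probe pattern, using stand-in class and method names rather than Flume's actual code:

```java
import java.lang.reflect.Method;

// Minimal sketch of a reflective capability probe; stand-in names, not
// Flume's actual BucketWriter implementation.
public class MethodProbe {

    // Return the method if this class provides it, or null if not --
    // the null case corresponds to the logged WARN, not to an error.
    public static Method probe(Class<?> cls, String name, Class<?>... params) {
        try {
            return cls.getMethod(name, params);
        } catch (NoSuchMethodException e) {
            return null; // caller falls back to behavior without the feature
        }
    }

    public static void main(String[] args) {
        // String.length() exists; "isFileClosed" does not exist on String,
        // mirroring NativeS3FileSystem lacking the HDFS-only method.
        System.out.println(probe(String.class, "length") != null);       // true
        System.out.println(probe(String.class, "isFileClosed") != null); // false
    }
}
```

When the probe returns null, the sink simply skips the optional close-retry behavior, which is why the stack trace is logged at WARN and the write itself still succeeds.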



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLUME-3091) Swift Source

2017-05-11 Thread Ping Wang (JIRA)
Ping Wang created FLUME-3091:


 Summary: Swift Source 
 Key: FLUME-3091
 URL: https://issues.apache.org/jira/browse/FLUME-3091
 Project: Flume
  Issue Type: Bug
Reporter: Ping Wang
 Fix For: 2.0.0


There is already a JIRA for S3 Source support (FLUME-2437).
Similar to S3, OpenStack Object Store (Swift) is also cloud storage.
I opened this JIRA for a new Swift Source.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)