[ https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889395#comment-15889395 ]
Kevin Risden commented on SOLR-10215:
-------------------------------------

I can confirm that 6.4.1 doesn't work with HDFS NameNode HA. 6.3.0 works just fine. The nightly build of 6.5.0 from https://builds.apache.org/job/Solr-Artifacts-6.x/lastSuccessfulBuild/artifact/solr/package/solr-6.5.0-254.tgz works as well.

My testing setup: https://github.com/risdenk/solr_hdfs_ha_docker

This works well with 32 GB of RAM on AWS. I was using something similar to this:

{code}
docker-machine create --driver amazonec2 \
  --amazonec2-region us-west-2 \
  --amazonec2-request-spot-instance \
  --amazonec2-spot-price 0.50 \
  --amazonec2-root-size 50 \
  --amazonec2-instance-type m4.2xlarge \
  aws01
eval $(docker-machine env aws01)
./run.sh
{code}


> Cannot use the namenode for HDFS HA as of Solr 6.4
> --------------------------------------------------
>
>                 Key: SOLR-10215
>                 URL: https://issues.apache.org/jira/browse/SOLR-10215
>             Project: Solr
>          Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public)
>      Components: Hadoop Integration
>    Affects Versions: 6.4.0, 6.4.1
>        Reporter: Cassandra Targett
>        Priority: Blocker
>             Fix For: 6.4.2
>
>
> As of Solr 6.4, it seems it's no longer possible to use the HA nameservice
> name instead of a specific server address with the {{solr.hdfs.home}}
> parameter when configuring Solr with HDFS high availability (HA).
> Startup is fine, but when trying to create a collection, this error is in
> the logs:
> {code}
> 2017-02-27 22:22:57.359 ERROR (qtp401424608-21) [c:testing s:shard1 x:testing_shard1_replica1] o.a.s.c.CoreContainer Error creating core [testing_shard1_replica1]: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
> org.apache.solr.common.SolrException: Error Instantiating Update Handler, solr.DirectUpdateHandler2 failed to instantiate org.apache.solr.update.UpdateHandler
> {code}
> And after the full stack trace (which I will put in a comment), there is this:
> {code}
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: mycluster
> {code}
> I started Solr with the params configured as system properties instead of in
> {{solrconfig.xml}}, so my {{solr.in.sh}} has this:
> {code}
> SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS \
>   -Dsolr.directoryFactory=HdfsDirectoryFactory \
>   -Dsolr.lock.type=hdfs \
>   -Dsolr.hdfs.home=hdfs://mycluster:8020/solr-index \
>   -Dsolr.hdfs.confdir=/etc/hadoop/conf/"
> {code}
> Solr in this case is running on the same nodes as Hadoop (Hortonworks HDP 2.5).
> I tried a couple of variations of the Solr home parameter:
> * {{hdfs://mycluster:8020/solr-index}}
> * {{hdfs://mycluster/solr-index}}
> * {{solr-index}}
> None of these worked with Solr 6.4.1 (the first two produced the same error
> as above; the last was simply wrong, so it got a different error).
> I believe this problem is isolated to Solr 6.4.x. I tested the same setup
> (as in the {{solr.in.sh}} above) with 6.3.0 and it worked fine. Using the
> server address also works fine, but that negates the high availability
> feature (which is like failover, for those who don't know).
> _edit: the problem isn't just 6.4.1; I believe it's probably in 6.4.0 also_


--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
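
For anyone reproducing the {{UnknownHostException: mycluster}} above: a logical HA nameservice name like {{mycluster}} only resolves if the client-side HDFS configuration (here, the files under {{solr.hdfs.confdir}}) defines it. A minimal sketch of the relevant {{hdfs-site.xml}} entries follows; the nameservice id {{mycluster}}, the NameNode ids {{nn1}}/{{nn2}}, and the hostnames are illustrative placeholders that must match your cluster's actual config:

{code:xml}
<!-- Logical nameservice; clients refer to it as hdfs://mycluster (no port) -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>

<!-- NameNode ids within the nameservice (illustrative) -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>

<!-- RPC address of each NameNode (hostnames are placeholders) -->
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>

<!-- Client-side failover proxy provider shipped with Hadoop HA -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
{code}

Note that with a nameservice the URI should be {{hdfs://mycluster/solr-index}} with no port; a port only makes sense when pointing at a concrete NameNode host.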