Dave,

I'm investigating whether the datanode is configured correctly.  I don't see
any issues in the logs, and the NameNode web UI shows the node as live, so it
looks to be fine.
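
As an aside, here is a minimal sketch (not from this thread) of checking the
live datanodes programmatically rather than through the NameNode web UI. It
assumes core-site.xml/hdfs-site.xml are on the classpath and fs.defaultFS
points at the cluster under test; the class name is just illustrative:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeCheck {
  public static void main(String[] args) throws IOException {
    // picks up core-site.xml/hdfs-site.xml from the classpath
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    if (fs instanceof DistributedFileSystem) {
      // one entry per datanode the namenode currently reports
      for (DatanodeInfo dn : ((DistributedFileSystem) fs).getDataNodeStats()) {
        System.out.println(dn.getHostName() + " remaining bytes=" + dn.getRemaining());
      }
    }
  }
}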

Mike,
I have only been remote debugging when accumulo init is run, not when running
custom code.

Thanks,

On Wed, Dec 15, 2021 at 11:32 AM Mike Miller <mmil...@apache.org> wrote:

> When you say "the FileSKVWriter that is used when createMetadataFile is
> called", is this code that you have extended, or are you calling it through
> a client? If you are using the FileSKVWriter directly, then it may not have
> the configuration properly passed to it. That interface is not in the
> public API and should not be used.
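>
> For what it's worth, here is a rough sketch of writing through the public
> client API instead. The instance name, ZooKeeper host, table, and
> credentials below are placeholders, not taken from your setup:
>
> import java.nio.charset.StandardCharsets;
>
> import org.apache.accumulo.core.client.Accumulo;
> import org.apache.accumulo.core.client.AccumuloClient;
> import org.apache.accumulo.core.client.BatchWriter;
> import org.apache.accumulo.core.data.Mutation;
> import org.apache.accumulo.core.data.Value;
>
> public class PublicApiWrite {
>   public static void main(String[] args) throws Exception {
>     try (AccumuloClient client = Accumulo.newClient()
>         .to("myinstance", "zkhost:2181") // placeholders
>         .as("user", "password").build();
>         BatchWriter writer = client.createBatchWriter("mytable")) {
>       Mutation m = new Mutation("row1");
>       m.put("cf", "cq", new Value("value".getBytes(StandardCharsets.UTF_8)));
>       writer.addMutation(m);
>     }
>   }
> }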
>
> On Wed, Dec 15, 2021 at 10:45 AM Dave Marion <dmario...@gmail.com> wrote:
>
> > Is that datanode configured correctly? I wonder why it's excluded.
> >
> > On Wed, Dec 15, 2021 at 9:45 AM Vincent Russell <vincent.russ...@gmail.com> wrote:
> >
> > > Thank you Christopher,
> > >
> > > I was able to determine that the ssl settings in core-site.xml are being
> > > picked up and used.  In fact, when accumulo init is run, accumulo is able
> > > to create the /accumulo directory in HDFS.  What is weird is that the
> > > FileSKVWriter that is used when createMetadataFile is called throws an
> > > exception when close is called.
> > >
> > >
> > > I get:
> > >
> > > org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> > > /accumulo/tables/!0/table_info/0_1.rf could only be written to the 0 of
> > > the 1 minReplication nodes.  There are 1 datanode(s) running and 1
> > > node(s) are excluded in this operation.
> > >         at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1720)
> > >         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3389)
> > >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:683)
> > >         at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:214)
> > >         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:495)
> > >         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> > >         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> > >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
> > >         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
> > >         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)
> > >
> > > I don't get this error when I disable ssl on hadoop.
> > >
> > > Any insight would be greatly appreciated.
> > >
> > > Thanks,
> > >
> > > On Tue, Dec 14, 2021 at 2:23 PM Vincent Russell <vincent.russ...@gmail.com> wrote:
> > >
> > > > Thanks Chris.
> > > >
> > > > Yes, I do get an error during the init (I can't remember the exact
> > > > message now because it's on a separate computer), and I get a
> > > > MagicNumber exception on the datanode during this process which says
> > > > something like "maybe encryption isn't turned on."
> > > >
> > > > But let me make sure that core-default.xml and core-site.xml are on
> > > > the classpath.  They may not be.
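> > > >
> > > > (One quick way to check, just as a sketch: ask the classloader for the
> > > > file directly.)
> > > >
> > > > public class ClasspathCheck {
> > > >   public static void main(String[] args) {
> > > >     // null means core-site.xml is not visible on the classpath,
> > > >     // so new Configuration() would not pick it up
> > > >     System.out.println(Thread.currentThread()
> > > >         .getContextClassLoader().getResource("core-site.xml"));
> > > >   }
> > > > }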
> > > >
> > > > Thanks again.
> > > >
> > > > On Tue, Dec 14, 2021 at 2:13 PM Christopher <ctubb...@apache.org>
> > wrote:
> > > >
> > > >> I have not personally tested HDFS configured for SSL/TLS, but `new
> > > >> Configuration()` will load the core-default.xml and core-site.xml
> > > >> files it finds on the class path. So, it looks like it should work.
> > > >> Have you tried it? Did you get an error?
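> > > >>
> > > >> A quick way to sanity check that (just a sketch; the property names
> > > >> below are only examples of SSL-related settings that might live in
> > > >> core-site.xml / hdfs-site.xml):
> > > >>
> > > >> import org.apache.hadoop.conf.Configuration;
> > > >>
> > > >> public class ConfCheck {
> > > >>   public static void main(String[] args) {
> > > >>     Configuration conf = new Configuration();
> > > >>     // toString() lists the resources that were actually loaded,
> > > >>     // e.g. "Configuration: core-default.xml, core-site.xml"
> > > >>     System.out.println(conf);
> > > >>     // example property lookups; null means the setting was not loaded
> > > >>     System.out.println(conf.get("hadoop.rpc.protection"));
> > > >>     System.out.println(conf.get("dfs.encrypt.data.transfer"));
> > > >>   }
> > > >> }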
> > > >>
> > > >>
> > > >> On Tue, Dec 14, 2021 at 1:54 PM Vincent Russell
> > > >> <vincent.russ...@gmail.com> wrote:
> > > >> >
> > > >> > Thank you Mike,
> > > >> >
> > > >> > but it appears that accumulo uses those settings to connect to
> > > >> > accumulo, not to connect to hdfs.
> > > >> >
> > > >> > For instance, the VolumeManagerImpl just does this:
> > > >> >
> > > >> > VolumeConfiguration.create(new Path(volumeUriOrDir), hadoopConf));
> > > >> >
> > > >> > where the hadoopConf is just instantiated in the Initialize class:
> > > >> >
> > > >> > Configuration hadoopConfig = new Configuration();
> > > >> > VolumeManager fs = VolumeManagerImpl.get(siteConfig, hadoopConfig);
> > > >> >
> > > >> > Thanks,
> > > >> > Vincent
> > > >> >
> > > >> > On Tue, Dec 14, 2021 at 12:18 PM Mike Miller <mmil...@apache.org>
> > > >> wrote:
> > > >> >
> > > >> > > Check out the accumulo client properties that start with the "ssl"
> > > >> > > prefix:
> > > >> > > https://accumulo.apache.org/docs/2.x/configuration/client-properties
> > > >> > >
> > > >> > > This blog post from a few years ago may help:
> > > >> > > https://accumulo.apache.org/blog/2014/09/02/generating-keystores-for-configuring-accumulo-with-ssl.html
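> > > >> > >
> > > >> > > For example (only a sketch; the hosts, paths, and credentials are
> > > >> > > placeholders, and these ssl.* client properties configure SSL for
> > > >> > > Accumulo's own RPC rather than for HDFS):
> > > >> > >
> > > >> > > import java.util.Properties;
> > > >> > >
> > > >> > > import org.apache.accumulo.core.client.Accumulo;
> > > >> > > import org.apache.accumulo.core.client.AccumuloClient;
> > > >> > >
> > > >> > > public class SslClientExample {
> > > >> > >   public static void main(String[] args) throws Exception {
> > > >> > >     Properties props = new Properties();
> > > >> > >     props.setProperty("instance.name", "myinstance");        // placeholder
> > > >> > >     props.setProperty("instance.zookeepers", "zkhost:2181"); // placeholder
> > > >> > >     props.setProperty("ssl.enabled", "true");
> > > >> > >     props.setProperty("ssl.truststore.path", "/path/to/truststore.jks"); // placeholder
> > > >> > >     try (AccumuloClient client =
> > > >> > >         Accumulo.newClient().from(props).as("user", "password").build()) {
> > > >> > >       // simple call to confirm the client can talk to the servers
> > > >> > >       System.out.println(client.tableOperations().list());
> > > >> > >     }
> > > >> > >   }
> > > >> > > }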
> > > >> > >
> > > >> > > On Tue, Dec 14, 2021 at 9:58 AM Vincent Russell <vincent.russ...@gmail.com> wrote:
> > > >> > >
> > > >> > > > Hello,
> > > >> > > >
> > > >> > > > I am trying to init a test accumulo instance with an hdfs running
> > > >> > > > with SSL.  Is this possible?  I am looking at the code, and it
> > > >> > > > doesn't look like this is possible.
> > > >> > > >
> > > >> > > > The Initialize class just instantiates a Hadoop config and passes
> > > >> > > > that into the VolumeManager without sending over any hadoop
> > > >> > > > configs from the core-site.xml file.
> > > >> > > >
> > > >> > > > Am I missing something?
> > > >> > > >
> > > >> > > > Thanks in advance for your help,
> > > >> > > > Vincent
> > > >> > > >
> > > >> > >
> > > >>
> > > >
> > >
> >
>
