Re: Heads-up - manual toolchain update required soon

2017-05-14 Thread Lars Volker
I recently bumped the Breakpad version in the toolchain repo and now would
like to pull that into master. The change to do so is here:
https://gerrit.cloudera.org/#/c/6883/

Should I wait until gflags has been pulled into master and rebase? Or would
you like me to bump gflags in my change, too?

On Tue, May 9, 2017 at 12:21 AM, Henry Robinson  wrote:

> I'm about to start the process of getting IMPALA-5174 committed to the
> toolchain. This patch changes gflags to allow 'hidden' flags that won't
> show up on /varz etc.
>
> The toolchain glog has a dependency on gflags, meaning that the installed
> glog library needs to be built against the installed gflags library. So when
> the new gflags is pulled in, you will need the new glog as well.
>
> However, the toolchain scripts won't detect that anything has changed for
> glog, because there's no version number change (changing the toolchain
> build ID doesn't cause the toolchain scripts to invalidate dependencies).
>
> Rather than introduce a spurious version bump with an empty patch file or
> something, I figured in this case it's easiest to ask developers to
> manually delete their local glog, and then bin/bootstrap_toolchain.py will
> download the most recent glog, which is built against the new gflags. This
> is a one-time thing.
>
> I'll send out instructions about how to do this when the toolchain is
> updated.
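>
> Roughly, the manual step will look something like this (the toolchain
> location and the glog directory name below are guesses about a typical
> setup, so treat it as a sketch rather than the final instructions):
>
>   # Remove the locally cached glog package so bootstrap re-downloads it.
>   # The path assumes the default toolchain layout under $IMPALA_HOME.
>   rm -rf "$IMPALA_HOME"/toolchain/glog-*
>
>   # Re-run bootstrap; it only downloads packages whose directories are
>   # missing, so this fetches the glog that was built against the new gflags.
>   ./bin/bootstrap_toolchain.py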
>


Re: test_insert.py failing with "File does not exist: ..."

2017-05-14 Thread Lars Volker
I reloaded my local cluster from a snapshot before seeing your mail and
that fixed it. I assume reloading the single table would have worked, too.
Thank you for your help!
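
For reference, the full reload I did was along these lines (the script name
and flag are from memory, so they may be slightly off):

  ./testdata/bin/create-load-data.sh -snapshot_file <path-to-snapshot>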

On Sun, May 14, 2017 at 12:07 AM, Alexander Behm  wrote:

> Have you tried reloading the alltypesnopart_insert table?
>
> bin/load-data.py -f -w functional-query --table_names=alltypesnopart_insert
>
> You may have to run this first:
>
> bin/create_testdata.sh
>
>
> On Sat, May 13, 2017 at 2:44 PM, Lars Volker  wrote:
>
> > I cannot run test_insert.py anymore on master. I tried clean.sh, rebuilt
> > from scratch, and removed the whole toolchain, but it still won't work. At
> > first glance it looks like the test setup code tries to drop the Hive
> > default partition but cannot find a file for it. Has anyone seen this error
> > before? Could this be related to the cdh_components update? Thanks, Lars
> >
> > -- executing against localhost:21000
> > select count(*) from alltypesnopart_insert;
> >
> > FAILED-- closing connection to: localhost:21000
> >
> > = short test summary info =
> > FAIL tests/query_test/test_insert.py::TestInsertQueries::()::test_insert[exec_option:
> > {'batch_size': 0, 'num_nodes': 0, 'sync_ddl': 0, 'disable_codegen': False,
> > 'abort_on_error': 1, 'exec_single_node_rows_threshold': 0} | table_format: text/none]
> >
> > = FAILURES =
> > TestInsertQueries.test_insert[exec_option: {'batch_size': 0, 'num_nodes': 0,
> > 'sync_ddl': 0, 'disable_codegen': False, 'abort_on_error': 1,
> > 'exec_single_node_rows_threshold': 0} | table_format: text/none]
> > tests/query_test/test_insert.py:119: in test_insert
> >     multiple_impalad=vector.get_value('exec_option')['sync_ddl'] == 1)
> > tests/common/impala_test_suite.py:332: in run_test_case
> >     self.execute_test_case_setup(test_section['SETUP'], table_format_info)
> > tests/common/impala_test_suite.py:448: in execute_test_case_setup
> >     self.__drop_partitions(db_name, table_name)
> > tests/common/impala_test_suite.py:596: in __drop_partitions
> >     partition, True), 'Could not drop partition: %s' % partition
> > shell/gen-py/hive_metastore/ThriftHiveMetastore.py:2513: in drop_partition_by_name
> >     return self.recv_drop_partition_by_name()
> > shell/gen-py/hive_metastore/ThriftHiveMetastore.py:2541: in recv_drop_partition_by_name
> >     raise result.o2
> > E   MetaException: MetaException(_message='File does not exist:
> > /test-warehouse/functional.db/alltypesinsert/year=__HIVE_DEFAULT_PARTITION__
> > \n\tat org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:2296)
> > \n\tat org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:4545)
> > \n\tat org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:1087)
> > \n\tat org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.getContentSummary(AuthorizationProviderProxyClientProtocol.java:563)
> > \n\tat org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:873)
> > \n\tat org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> > \n\tat org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
> > \n\tat org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
> > \n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
> > \n\tat org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
> > \n\tat java.security.AccessController.doPrivileged(Native Method)
> > \n\tat javax.security.auth.Subject.doAs(Subject.java:415)
> > \n\tat org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
> > \n\tat org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211)\n')
> > ! Interrupted: stopping after 1 failures !!
> > === 1 failed, 1 passed in 22.29 seconds ===
> >
>