Thanks for the list of failing tests. Curiously, earlier I got all but
one of them to pass, so I'd like to look at those logs and see what's going on.

So far I've only been testing with 0.20 to get one version totally working.

--travis



On Mon, Oct 22, 2012 at 5:27 PM, Chris Drome <[email protected]> wrote:
> Hi Travis,
>
> These are the tests that fail:
> HCat_Negative_1
> HCat_Negative_2
> HCat_Negative_3
> HCat_Negative_4
> HCat_Negative_5
> HCat_Negative_6
> HCat_Negative_7
>
> These tests check functionality that was disabled in the hcat_client code.
> Using hive instead of hcat_client means that this functionality is
> actually implemented, hence the tests fail.
>
>
> These are the tests that abort:
> Pig_HBase_1
> Pig_HBase_2
> Hadoop_HBase_1
>
>
> In addition, if you are working with Hadoop23 you may also see
> HCat_DropTable_3 fail depending on the version of Hadoop23 you are using.
> If this is the case it will also generate 3 dependency failures as a
> result.
>
> We have an internal script which generates a bash script that configures
> environment variables, generates the test data, and runs the tests. A side
> effect of this script is that it cleans up everything (unless disabled)
> when generating the test data or running the tests.
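>
> For illustration only (a sketch, not the actual internal script; the
> paths and the ant property name below are made up), the generated
> wrapper does roughly this:
>
> {code}
> #!/bin/bash
> # Configure the environment, clean up state from a previous run, then
> # regenerate the test data and run the e2e suite.
> set -e
>
> # Locations of the installed components (illustrative paths).
> export HADOOP_HOME=/opt/hadoop
> export HIVE_HOME=/opt/hive
> export HCAT_HOME=/opt/hcatalog
> export PATH=$HADOOP_HOME/bin:$HIVE_HOME/bin:$PATH
>
> cd $HCAT_HOME/src/test/e2e/hcatalog
>
> # Clean out artifacts from the previous run, then generate data and
> # run the tests (the property name here is illustrative, not exact).
> ant clean
> ant test -Dharness.conf=$HADOOP_HOME/conf
> {code}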
>
> chris
>
> On 10/22/12 3:00 PM, "Travis Crawford" <[email protected]> wrote:
>
>>Hey Chris -
>>
>>Yeah, the issues I've been seeing are due to managed tables being dropped
>>and their partitions deleted. I switched from the MySQL metastore backend
>>to derby and now trash the data files before starting tests, and the tests
>>are quite a ways in without failures. In the HDFS audit log I was also
>>seeing deletes when tables were dropped.
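>>
>>The reset I do before each run looks roughly like this (a sketch; the
>>metastore directory, HDFS path, and audit log location are from my
>>local setup and will differ elsewhere):
>>
>>{code}
>># Wipe the embedded derby metastore so no tables survive between runs.
>>rm -rf metastore_db derby.log
>>
>># Remove any previously generated test data from HDFS.
>>hadoop fs -rmr /user/$USER/hcat-e2e
>>
>># Watch the namenode audit log during the run to confirm whether
>># dropping tables is deleting their data directories.
>>tail -f $HADOOP_HOME/logs/hdfs-audit.log | grep 'cmd=delete'
>>{code}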
>>
>>A couple questions:
>>
>>(a) What are the tests that are known to fail? I'd love to disable
>>those to avoid spending a bunch of time troubleshooting something
>>known to be broken.
>>
>>(b) What setup steps do you do before running the tests? Do you trash
>>the HiveMetaStore state? Any other "prepare the environment" steps?
>>
>>--travis
>>
>>
>>On Mon, Oct 22, 2012 at 1:39 PM, Travis Crawford
>><[email protected]> wrote:
>>> Interesting that it takes 50% longer on a single pseudo-distributed
>>> setup. For cheap queries like these I would expect local to be a bit
>>> faster, actually. Thanks for that data point.
>>>
>>> Most of the tests I'm seeing fail because there are no input records. I
>>> think this has something to do with "drop table if exists" commands: when
>>> the table is managed and the partition location points at the generated
>>> data, the drop deletes the data. I just enabled audit logging in my HDFS
>>> config to test this theory.
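>>>
>>> Something like this would explain it (table name and paths are made
>>> up, just to illustrate the managed-table drop behavior):
>>>
>>> {code}
>>> # A managed table whose location points at the pre-generated input.
>>> hive -e "CREATE TABLE e2e_input (a STRING)
>>>          LOCATION '/user/$USER/e2e/input'"
>>>
>>> # Dropping a managed table deletes its data directories (including
>>> # partition directories), so later tests see no input records.
>>> hive -e "DROP TABLE IF EXISTS e2e_input"
>>> hadoop fs -ls /user/$USER/e2e/input   # gone
>>>
>>> # An EXTERNAL table would leave the files in place when dropped.
>>> hive -e "CREATE EXTERNAL TABLE e2e_input (a STRING)
>>>          LOCATION '/user/$USER/e2e/input'"
>>> {code}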
>>>
>>> --travis
>>>
>>>
>>>
>>> On Mon, Oct 22, 2012 at 10:53 AM, Chris Drome <[email protected]>
>>>wrote:
>>>> Hi Travis,
>>>>
>>>> That seems a little long considering the number of failures.
>>>>
>>>> Final results , PASSED: 107 FAILED: 8 SKIPPED: 0 ABORTED: 3
>>>> FAILED DEPENDENCY: 3
>>>>
>>>> Total time: 64 minutes 55 seconds
>>>>
>>>>
>>>> These are the test results run against Hadoop23.
>>>> Normally there are 7 failures, 3 aborts, 0 failed dependencies.
>>>>
>>>> The failures come from broken negative tests.
>>>> The three aborts are from broken hbase tests.
>>>>
>>>> The additional failure here is a known issue with the latest version of
>>>> Hadoop23 and causes the 3 failed dependencies. You should not see this if
>>>> you are building and testing against Hadoop20.
>>>>
>>>> Hope this helps.
>>>>
>>>> chris
>>>>
>>>>
>>>> On 10/20/12 1:02 PM, "Travis Crawford (JIRA)" <[email protected]> wrote:
>>>>
>>>>>
>>>>>    [ https://issues.apache.org/jira/browse/HCATALOG-535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13480822#comment-13480822 ]
>>>>>
>>>>>Travis Crawford commented on HCATALOG-535:
>>>>>------------------------------------------
>>>>>
>>>>>Without much tinkering most of the tests pass; the run took ~90 minutes.
>>>>>Question for someone who's run these before - how long do they typically
>>>>>take?
>>>>>
>>>>>{code}
>>>>>[exec] Final results , PASSED: 96 FAILED: 21 SKIPPED: 0 ABORTED: 4
>>>>>FAILED DEPENDENCY: 0
>>>>>Total time: 91 minutes 30 seconds
>>>>>{code}
>>>>>
>>>>>Hopefully the ones that have failed are due to some common reason. Will
>>>>>check the logs out to see what's up.
>>>>>
>>>>>> HCatalog e2e tests should run locally with minimal configuration
>>>>>> ----------------------------------------------------------------
>>>>>>
>>>>>>                 Key: HCATALOG-535
>>>>>>                 URL:
>>>>>>https://issues.apache.org/jira/browse/HCATALOG-535
>>>>>>             Project: HCatalog
>>>>>>          Issue Type: Improvement
>>>>>>            Reporter: Travis Crawford
>>>>>>            Assignee: Travis Crawford
>>>>>>
>>>>>> Setting up the environment to run e2e tests is documented here:
>>>>>> https://cwiki.apache.org/confluence/display/HCATALOG/How+To+Test
>>>>>> It's extremely time-consuming to set up because there are so many
>>>>>>moving parts. Some are very machine-specific, like configuring SSH and
>>>>>>installing MySQL for your platform. However, some of it we can automate
>>>>>>for the developer, like downloading, installing & configuring all the
>>>>>>Java pieces. We should do that to simplify setup.
>>>>>> Also, tests do not run from a git repo because of the svn external.
>>>>>>This would be very helpful to fix. Developing with Git is WAAAAAY nicer
>>>>>>because branching is so easy.
>>>>>
>>>>>--
>>>>>This message is automatically generated by JIRA.
>>>>>If you think it was sent incorrectly, please contact your JIRA
>>>>>administrators
>>>>>For more information on JIRA, see:
>>>>>http://www.atlassian.com/software/jira
>>>>
>
