Re: Pre-Commit build is failing

2017-07-25 Thread Sean Busbey
-dev@yetus to bcc, since I think this is a Hadoop issue and not a yetus
issue.

Please review/commit HADOOP-14686 (which I am providing as a
volunteer/contributor on the Hadoop project).
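The fix Allen describes below amounts to copying trunk's .gitignore onto branch-2.7. Here is a minimal sketch of that update, using a throwaway repository as a stand-in for the real Hadoop checkout; the .gitignore entries and commit messages are illustrative assumptions, not the actual patch:

```shell
#!/bin/sh
# Hypothetical sketch: bring a branch's stale .gitignore up to date with
# trunk's copy. A throwaway repo stands in for the real Hadoop checkout.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.org
git config user.name dev
printf 'target/\npatchprocess/\n' > .gitignore   # trunk's up-to-date copy
git add .gitignore
git commit -qm 'trunk .gitignore'
git branch -m trunk
git checkout -qb branch-2.7
printf 'target/\n' > .gitignore                  # stale branch-2.7 copy
git commit -qam 'stale .gitignore'
# The actual fix: pull trunk's .gitignore into branch-2.7 and commit it.
git checkout trunk -- .gitignore
git commit -qm 'HADOOP-14686. Update .gitignore from trunk'
grep patchprocess .gitignore   # prints: patchprocess/
```

`git checkout trunk -- .gitignore` stages the file as well as updating the working tree, so the follow-up commit needs no `-a`.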

On Tue, Jul 25, 2017 at 7:54 PM, Allen Wittenauer wrote:

>
> Again: just grab the .gitignore file from trunk and update it in
> branch-2.7. It hasn't been touched (outside of one patch) in years.  The
> existing jobs should then work.
>
> The rest of this stuff, yes, I know and yes it's intentional.  The
> directory structure was inherited from the original jobs that Nigel set up
> with the old version of test-patch.  Maybe some day I'll fix it.  But
> that's a project for a different day.  In order to fix it, it means taking
> down the patch testing for Hadoop while I work it out.  You'll notice that
> all of the other Yetus jobs for Hadoop have a much different layout.
>
>
>
>
> > On Jul 25, 2017, at 7:24 PM, suraj acharya  wrote:
> >
> > Hi,
> >
> > Seems like the issue was an incorrect/unclean checkout.
> > I made a few changes[1] to the directories where the checkout happens,
> > and it is now running.
> > Of course, this build[2] will take some time to run, but at the moment
> > it is running maven install.
> >
> > I am not sure who sets up/manages the Jenkins job of HDFS and don't
> > want to change that, but I will keep the dummy job around for a couple
> > of days in case anyone wants to see it.
> > Also, I see that you all were using the master branch of Yetus. If
> > there is no patch on master that you need, then I would recommend
> > using the latest stable release, version 0.5.0.
> >
> > If you have more questions, feel free to ping dev@yetus.
> > Hope this helps.
> >
> > [1]: https://builds.apache.org/job/PreCommit-HDFS-Build-Suraj-Copy/configure
> > [2]: https://builds.apache.org/job/PreCommit-HDFS-Build-Suraj-Copy/12/console
> >
> > -Suraj Acharya
> >
> > On Tue, Jul 25, 2017 at 6:57 PM, suraj acharya wrote:
> > For anyone looking: I created another job here [1],
> > set up with debug enabled to see the issue.
> > The error can be seen here [2].
> > From the looks of it, the checkout is not happening cleanly.
> > I will continue to look at it, but feel free to jump in.
> >
> > [1] : https://builds.apache.org/job/PreCommit-HDFS-Build-Suraj-Copy/
> > [2] : https://builds.apache.org/job/PreCommit-HDFS-Build-Suraj-Copy/11/console
> >
> > -Suraj Acharya
> >
> > On Tue, Jul 25, 2017 at 6:28 PM, Konstantin Shvachko <shv.had...@gmail.com> wrote:
> > Hi Yetus developers,
> >
> > We cannot build Hadoop branch-2.7 anymore. Here is a recent example of a
> > failed build:
> > https://builds.apache.org/job/PreCommit-HDFS-Build/20409/console
> >
> > It seems the build is failing because Yetus cannot apply the patch
> > from the jira.
> >
> > ERROR: HDFS-11896 does not apply to branch-2.7.
> >
> > As far as I understand this is a Yetus problem, probably in 0.3.0.
> > I can apply this patch successfully, but the Yetus test-patch.sh
> > script clearly failed to apply it. I cannot say why, because Yetus
> > does not report it.
> > I also ran Hadoop's test-patch.sh script locally and it passed
> > successfully on branch-2.7.
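A local run like the one Konstantin describes can be sketched as follows. The install path and basedir are placeholders, and the flags reflect a typical Yetus 0.5.0 test-patch setup rather than the exact Jenkins invocation:

```shell
# Hypothetical sketch of a local precommit run with a released Yetus
# (paths are placeholders; flags per the Yetus test-patch documentation).
/path/to/yetus-0.5.0/bin/test-patch \
    --project=hadoop \
    --basedir=/path/to/hadoop \
    --branch=branch-2.7 \
    HDFS-11896
```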
> >
> > Could anybody please take a look and help fix the build?
> > This would be very helpful for the release (2.7.4) process.
> >
> > Thanks,
> > --Konst
> >
> > On Mon, Jul 24, 2017 at 10:41 PM, Konstantin Shvachko <shv.had...@gmail.com> wrote:
> >
> >
> > > Or should we backport the entire HADOOP-11917?
> > >
> > > Thanks,
> > > --Konst
> > >
> > > On Mon, Jul 24, 2017 at 6:56 PM, Konstantin Shvachko <shv.had...@gmail.com> wrote:
> > >
> > >> Allen,
> > >>
> > >> Should we add "patchprocess/" to .gitignore, is that the problem
> > >> for 2.7?
> > >>
> > >> Thanks,
> > >> --Konstantin
> > >>
> > >> On Fri, Jul 21, 2017 at 6:24 PM, Konstantin Shvachko <
> > >> shv.had...@gmail.com> wrote:
> > >>
> > >>> What stuff? Is there a jira?
> > >>> It did work like a week ago. Is it a new Yetus requirement?
> > >>> Anyway, I can commit a change to fix the build on our side.
> > >>> I just need to know what is missing.
> > >>>
> > >>> Thanks,
> > >>> --Konst
> > >>>
> > >>> On Fri, Jul 21, 2017 at 5:50 PM, Allen Wittenauer <
> > >>> a...@effectivemachines.com> wrote:
> > >>>
> > 
> >  > On Jul 21, 2017, at 5:46 PM, Konstantin Shvachko <
> >  shv.had...@gmail.com> wrote:
> >  >
> >  > + d...@yetus.apache.org
> >  >
> >  > Guys, could you please take a look. Seems like Yetus problem with
> >  > pre-commit build for branch-2.7.
> > 
> > 
> >  branch-2.7 is missing stuff in .gitignore.
> > >>>
> > >>>
> > >>>
> > >>
> > >
> >
> >
>
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Created] (HADOOP-14686) Branch-2.7 .gitignore is out of date

2017-07-25 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-14686:


 Summary: Branch-2.7 .gitignore is out of date
 Key: HADOOP-14686
 URL: https://issues.apache.org/jira/browse/HADOOP-14686
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, precommit
Affects Versions: 2.7.4
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker


.gitignore is out of date on branch-2.7, which is causing issues in precommit 
checks for that branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)





Re: zstd compression

2017-07-25 Thread Andrew Wang
Thanks Sean, Owen. I've opened an issue on their github here:

https://github.com/facebook/zstd/issues/775

I figure it doesn't hurt to ask, particularly if they intend for zstd to be
a replacement for the commonly-embedded zlib.

On Tue, Jul 25, 2017 at 6:17 AM, Owen O'Malley wrote:

> I'd support asking Facebook to change it with both my hadoop and orc hats
> on.
>
> .. Owen
>
> > On Jul 24, 2017, at 23:43, Sean Busbey  wrote:
> >
> > Nope. Once I found out HBase's use was compliant as an optional runtime
> > dependency I stopped looking.
> >
> >> On Jul 24, 2017 7:22 PM, "Andrew Wang" wrote:
> >>
> >> I think it'd still be worth asking FB to relicense zstandard. Being able
> >> to bundle it in the release would make it easier to use, since I doubt
> >> there are zstandard packages in the default OS repos.
> >>
> >> Sean, have you already filed an issue with zstandard?
> >>
> >> On Mon, Jul 17, 2017 at 1:30 PM, Jason Lowe wrote:
> >>
> >>> I think we are OK to leave support for the zstd codec in the Hadoop
> >>> code base.  I asked Chris Mattmann for clarification, noting that the
> >>> support for the zstd codec requires the user to install the zstd
> >>> headers and libraries and then configure it to be included in the
> >>> native Hadoop build.  The Hadoop releases are not shipping any zstd
> >>> code (e.g.: headers or libraries) nor does it require zstd as a
> >>> mandatory dependency.  Here's what he said:
> >>>
> >>> On Monday, July 17, 2017 11:07 AM, Chris Mattmann wrote:
> >>>
> >>>> Hi Jason,
> >>>>
> >>>> This sounds like an optional dependency on a Cat-X software. This
> >>>> isn’t the only type of compression that is allowed within Hadoop,
> >>>> correct? If it is truly optional and you have gone to that level of
> >>>> detail below to make the user opt in, and if we are not shipping
> >>>> zstd with our products (source code releases), then this is an
> >>>> acceptable usage.
> >>>>
> >>>> Cheers,
> >>>> Chris
> >>>
> >>>
> >>> So I think we are in the clear with respect to zstd usage as long as
> >>> we keep it as an optional codec where the user needs to get the
> >>> headers and libraries for zstd and configure it into the native
> >>> Hadoop build.
> >>>
> >>> Jason
> >>>
> >>> On Monday, July 17, 2017 9:44 AM, Sean Busbey wrote:
> >>>
> >>> I know that the HBase community is also looking at what to do about
> >>> our inclusion of zstd. We've had it in releases since late 2016. My
> >>> plan was to request that they relicense it.
> >>>
> >>> Perhaps the Hadoop PMC could join HBase in the request?
> >>>
> >>> On Sun, Jul 16, 2017 at 8:11 PM, Allen Wittenauer wrote:
> >>>
> >>>> It looks like HADOOP-13578 added Facebook's zstd compression codec.
> >>>> Unfortunately, that codec is using the same 3-clause BSD (LICENSE
> >>>> file) + patent grant license (PATENTS file) that React is using and
> >>>> RocksDB was using.
> >>>>
> >>>> Should that code get reverted?
> >>>
> >>> --
> >>> busbey
> >>
>
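Jason's opt-in arrangement above corresponds, roughly, to the user-side build steps sketched below. The package name and the `require.zstd` Maven property are assumptions about the native build setup, not details confirmed in this thread:

```shell
# Hypothetical opt-in: the user installs the zstd headers/libraries
# themselves, then enables the codec when compiling Hadoop's native
# code. Nothing zstd-related ships in the Hadoop release itself.
sudo apt-get install -y libzstd-dev          # Debian/Ubuntu package name
mvn clean package -Pnative -Drequire.zstd -DskipTests
```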


[jira] [Created] (HADOOP-14685) Test jars to exclude from hadoop-client-minicluster jar

2017-07-25 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HADOOP-14685:
---

 Summary: Test jars to exclude from hadoop-client-minicluster jar
 Key: HADOOP-14685
 URL: https://issues.apache.org/jira/browse/HADOOP-14685
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.0.0-beta1
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


This jira is to discuss which test jars should be included in/excluded from 
hadoop-client-minicluster.





