aining for some time. After we
> >>>> clean
> >>>> up our code and tests to conform (to these[3] and other requirements)
> we
> >>>> would like to contribute it to Hadoop. We have many customers using
> the
> >>>> connector in high-throughput production Hadoop clusters; we'd like to
> >>>> make
> >>>> it easier and faster to use Hadoop and GCS.
> >>>>
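For readers unfamiliar with the connector, it is wired into a cluster through ordinary Hadoop configuration. A minimal sketch follows; the property names match the connector's public docs at the time, but verify against your release, and the project id is a placeholder:

```xml
<!-- core-site.xml: minimal GCS connector wiring (sketch; check the
     connector documentation for the exact properties in your version) -->
<property>
  <name>fs.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
</property>
<property>
  <name>fs.AbstractFileSystem.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
</property>
<property>
  <name>fs.gs.project.id</name>
  <!-- hypothetical project id -->
  <value>my-gcp-project</value>
</property>
```

With the connector jar on the classpath, `gs://bucket/path` URIs then resolve through it, e.g. `hadoop fs -ls gs://my-bucket/`.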
> >>>> Timeline:
> >>>> Presently, we are working on the beta of Google Cloud Dataproc[4]
> which
> >>>> limits our time a bit, so we're targeting late Q1 2016 for creating a
> >>>> JIRA
> >>>> issue and adapting our connector code as needed.
> >>>>
> >>>> Our (quick) questions:
> >>>> * Do we need to take any (non-coding) action for this beyond
> submitting
> >>>> a
> >>>> JIRA when we are ready?
> >>>> * Are there any up-front concerns or questions which we can (or will
> >>>> need
> >>>> to) address?
> >>>>
> >>>> Thank you!
> >>>>
> >>>> James Malone
> >>>> On behalf of the Google Big Data OSS Engineering Team
> >>>>
> >>>> Links:
> >>>> [1] -
> >>>>
> https://github.com/GoogleCloudPlatform/bigdata-interop/tree/master/gcs
> >>>> [2] - https://cloud.google.com/hadoop/google-cloud-storage-connector
> >>>> [3] -
> >>>>
> https://github.com/GoogleCloudPlatform/bigdata-interop/tree/master/gcs
> >>>> [4] - https://cloud.google.com/dataproc
--
jay vyas
Also if they are general Hadoop big data examples we're happy to carry them in
bigtop as well ... Especially if they touch multiple areas of the Hadoop
ecosystem
> On Jun 23, 2015, at 11:56 PM, Andrew Wang wrote:
>
> Yea, throw them under dev-support. It'd be good to link them up on the wiki
>
rs?
> http://mail-archives.apache.org/mod_mbox/hadoop-common-dev/201407.mbox/%3CCALEq1Z8QvHof1A3zO0W5WGfbNjCOpfNo==jktq8jiu6efm_...@mail.gmail.com%3E
> --
> Best regards / Met vriendelijke groeten,
>
> Niels Basjes
>
> Rich Haase| Sr. Software Engineer | Pandora
> m (303) 887-1146 | rha...@pandora.com<mailto:awils...@pandora.com>
--
jay vyas
>
> > >> Tata Consultancy Services Limited
> > >> 415/21-24, Kumaran Nagar,
> > >> Sholinganallur,
> > >> Old Mahabalipuram,
> > >> Chennai - 600 119,Tamil Nadu
> > >> India
> > >> Cell:- +91-9840141129
> > >> Mailto: madhan.sundarara...@tcs.com
> > >> Website: http://www.tcs.com
> > >>
> > >> Experience certainty. IT Services
> > >> Business Solutions
> > >> Consulting
> > >>
--
jay vyas
One easy place to contribute in small increments could be reproducing bugs in
JIRAs that are filed and open.
If every day you spent an hour reproducing a bug filed in a JIRA, you could
come up to speed eventually on a lot of sharp corners of the source code, and
probably contribute som
Yup, that's a great summary. More details...
The HCFS wiki page will give you insight into some tests you can run to test
your FileSystem plugin class, which you will put in a jar file described below.
In general, Hadoop apps are written to the FileSystem interface, which is
loaded at runtime, s
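To make the runtime-loading point concrete, a third-party FileSystem is usually registered by mapping a URI scheme to an implementation class in the cluster config. The scheme and class name below are hypothetical placeholders:

```xml
<!-- core-site.xml: map a URI scheme to a FileSystem implementation.
     "myfs" and the class below are hypothetical placeholders. -->
<property>
  <name>fs.myfs.impl</name>
  <value>org.example.hadoop.MyFileSystem</value>
</property>
```

The jar containing `org.example.hadoop.MyFileSystem` goes on the cluster classpath, and paths like `myfs://host/dir` are then dispatched to it when applications call `FileSystem.get()`.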
jay vyas created HADOOP-11251:
-
Summary: Confirm that all contract tests are run by RawLocalFS
Key: HADOOP-11251
URL: https://issues.apache.org/jira/browse/HADOOP-11251
Project: Hadoop Common
jay vyas created HADOOP-11072:
-
Summary: better Logging in DNS.java
Key: HADOOP-11072
URL: https://issues.apache.org/jira/browse/HADOOP-11072
Project: Hadoop Common
Issue Type: Improvement
These appear to be Java errors related to your JDK?
Maybe your JDK doesn't match up well with your OS.
Consider trying Red Hat 6+ or Fedora 20?
On Jul 8, 2014, at 5:45 AM, "moses.wang (JIRA)" wrote:
> moses.wang created HADOOP-10795:
> ---
>
>
jay vyas created HADOOP-10723:
-
Summary: FileSystem deprecated filesystem name warning : Make
error message HCFS compliant
Key: HADOOP-10723
URL: https://issues.apache.org/jira/browse/HADOOP-10723
I think breaking backwards compat is sensible since it's easily caught by the
compiler, and in this case the alternative is a runtime error that can result
in terabytes of mucked-up output.
> On May 29, 2014, at 6:11 AM, Matt Fellows
> wrote:
>
> As someone who doesn't really contribute
nizes ?
Thanks !
--
Jay Vyas
http://jayunit100.blogspot.com
which contains other jhist files (which *are*
recognized)?
Also I've created a jira for finer grained logging during the directoryScan(..)
operation: https://issues.apache.org/jira/browse/MAPREDUCE-5902
> On May 22, 2014, at 1:37 PM, Jay Vyas wrote:
>
> (sorry, i meant THROW a NPE,
(sorry, I meant THROW an NPE, not "return a null"). Big difference of
course!
On Thu, May 22, 2014 at 1:36 PM, Jay Vyas wrote:
> Hi hadoop ... Is there a reason why line 220, below, should ever return
> null when
> being called through the code path of job.getCounters() ?
>
response.setCounters(TypeConverter.toYarn(job.getAllCounters()));
224 return response;
225 }
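The thread above argues for throwing early rather than returning null. A self-contained sketch of that fail-fast pattern; the class and method names here are hypothetical stand-ins, not the actual MapReduce classes:

```java
import java.util.Objects;

// Hypothetical stand-in for a counters holder.
class Counters {}

// Hypothetical stand-in for a job whose counters may not be set yet.
class Job {
    private final Counters counters; // may be null mid-transition

    Job(Counters counters) { this.counters = counters; }

    // Returning null here would push an NPE to some distant caller;
    // throwing immediately points at the real problem.
    Counters getAllCounters() {
        return Objects.requireNonNull(counters,
            "counters not initialized yet for this job");
    }
}

public class FailFastDemo {
    public static void main(String[] args) {
        // Happy path: counters are present.
        System.out.println(new Job(new Counters()).getAllCounters() != null);
        // prints "true"

        // Fail-fast path: the NPE fires at the getter, with a message,
        // instead of somewhere far downstream.
        try {
            new Job(null).getAllCounters();
        } catch (NullPointerException e) {
            System.out.println("failed fast: " + e.getMessage());
            // prints "failed fast: counters not initialized yet for this job"
        }
    }
}
```

The design point is simply that a null return silently defers the failure to whichever caller dereferences it first, while `Objects.requireNonNull` names the broken invariant at the source.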
--
Jay Vyas
http://jayunit100.blogspot.com
Couple more questions:
- what is "source" vs. "modules" in Steve's above outline?
- Should individual JIRAs be submitted to start doing this for segments of
the code, and if so at what granularity?
ed in the dir, distribute
> the krb5.conf and the keytab file to your clients. Config the clients to pick
> up the krb5.conf, and you are done.
>
> thx
>
> Alejandro
> (phone typing)
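For reference, the krb5.conf being distributed is just an INI-style file; a minimal sketch, where the realm and KDC host are placeholders for your own site:

```ini
# Minimal krb5.conf sketch; EXAMPLE.COM and kdc.example.com are placeholders.
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc.example.com
        admin_server = kdc.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
```

Clients pick this up from /etc/krb5.conf by default (or from the path in the KRB5_CONFIG environment variable), which is why distributing it alongside the keytab is enough.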
>
> > On Apr 17, 2014, at 8:28, Jay Vyas wrote:
> >
> > ah .. thats nice to
that is confidential,
> > privileged and exempt from disclosure under applicable law. If the reader
> > of this message is not the intended recipient, you are hereby notified
> that
> > any printing, copying, dissemination, distribution, disclosure or
> > forwarding of this communication is strictly prohibited. If you have
> > received this communication in error, please contact the sender
> immediately
> > and delete it from your system. Thank You.
>
--
Jay Vyas
http://jayunit100.blogspot.com
jay vyas created HADOOP-10505:
-
Summary: LinuxContainerExecutor is incompatible with Simple
Security mode.
Key: HADOOP-10505
URL: https://issues.apache.org/jira/browse/HADOOP-10505
Project: Hadoop Common
Slf4j is definitely a great step forward. Log4j is restrictive for complex and
multi-tenant apps like Hadoop.
Also the fact that slf4j doesn't use any magic when binding to its log provider
makes it way easier to swap out its implementation than tools of the past.
> On Apr 10, 2014, at 2:16 AM,
jay vyas created HADOOP-10464:
-
Summary: Make TestTrash compatible with HADOOP-10461 .
Key: HADOOP-10464
URL: https://issues.apache.org/jira/browse/HADOOP-10464
Project: Hadoop Common
Issue Type
jay vyas created HADOOP-10463:
-
Summary: Bring RawLocalFileSystem test coverage to 100%
Key: HADOOP-10463
URL: https://issues.apache.org/jira/browse/HADOOP-10463
Project: Hadoop Common
Issue
jay vyas created HADOOP-10461:
-
Summary: Runtime DI based injector for FileSystem tests
Key: HADOOP-10461
URL: https://issues.apache.org/jira/browse/HADOOP-10461
Project: Hadoop Common
Issue
jay vyas created HADOOP-10405:
-
Summary: CLOVER coverage analysis for Hadoop-Common tests
Key: HADOOP-10405
URL: https://issues.apache.org/jira/browse/HADOOP-10405
Project: Hadoop Common
Issue
ankit nadig wrote:
> thanks a lot!
>
>
> On Tue, Sep 24, 2013 at 6:49 PM, Jay Vyas wrote:
>
> > And also, if you want to help out: we are developing blueprints in the
> > bigtop project specifically for people who want to learn how real world
> > bigdata workflows loo
And also, if you want to help out: we are developing blueprints in the bigtop
project specifically for people who want to learn how real world bigdata
workflows look.
> On Sep 24, 2013, at 4:52 AM, Steve Loughran wrote:
>
> Hi.
>
> You need to know that we don't really consider Hadoop a good
/svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20/src/examples/org/apache/hadoop/examples/MultiFileWordCount.java
What should be the correct behaviour of getPos() in the RecordReader?
http://stackoverflow.com/questions/18708832/hadoop-rawlocalfilesystem-and-getpos
--
Jay Vyas
http://jayunit100.blogspot.com