I do not know exactly how the distribution and splitting of deflate files
works, if that is your question, but you will probably find something useful
in the *Codec classes, where implementations of a few compression formats
are located. Deflate files are just one type of compressed file that you can
use for storing data in your system. There are several other types,
depending on your needs and the tradeoffs you are dealing with (space versus
time spent compressing).
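As a plain-Java illustration (not the Hadoop codec API itself - Hadoop's DeflateCodec wraps the same zlib/deflate algorithm that the JDK exposes through java.util.zip), here is a minimal sketch of compressing and restoring some bytes with deflate. The class name DeflateDemo and the sample text are mine, just for the example:

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DeflateDemo {
    public static void main(String[] args) throws Exception {
        byte[] input = "hello hadoop, hello deflate".getBytes("UTF-8");

        // Compress with the raw deflate algorithm (the same algorithm
        // Hadoop's DeflateCodec uses under the hood).
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] compressed = new byte[256];
        int compressedLen = deflater.deflate(compressed);
        deflater.end();

        // Decompress back to the original bytes.
        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, compressedLen);
        byte[] restored = new byte[256];
        int restoredLen = inflater.inflate(restored);
        inflater.end();

        System.out.println(new String(restored, 0, restoredLen, "UTF-8"));
    }
}
```

Note that a bare deflate stream like this has no internal sync markers, which is why (as far as I know) plain .deflate files are not splittable across map tasks the way block-compressed formats are.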

Globs, I think, are just a matching strategy for matching files/folders
together using wildcard patterns, similar in spirit to regular expressions.
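To show what glob matching looks like in practice, here is a sketch using the JDK's java.nio PathMatcher rather than Hadoop's FileSystem.globStatus (the pattern semantics are similar, but this is not the Hadoop API; the file names are made up for the example):

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;
import java.nio.file.Paths;

public class GlobDemo {
    public static void main(String[] args) {
        // Glob patterns use wildcards (* and ?), not full regular expressions.
        PathMatcher matcher =
            FileSystems.getDefault().getPathMatcher("glob:*.deflate");

        Path a = Paths.get("part-00000.deflate");
        Path b = Paths.get("part-00000.txt");

        System.out.println(matcher.matches(a)); // true
        System.out.println(matcher.matches(b)); // false
    }
}
```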


On 22 February 2012 19:29, Jay Vyas <jayunit...@gmail.com> wrote:

> Hi guys!
>
> I'm trying to understand the way globStatus / deflate files work in HDFS. I
> can't read them using the globStatus API in the Hadoop FileSystem, from
> Java. The specifics are here if anyone wants some easy stackoverflow
> points :)
>
>
> http://stackoverflow.com/questions/9400739/hadoop-globstatus-and-deflate-files
>
> On Wed, Feb 22, 2012 at 7:39 AM, Merto Mertek <masmer...@gmail.com> wrote:
>
> > Hm.. I would first try to stop all the daemons with
> > $hadoop_home/bin/stop-all.sh. Afterwards check that no daemons are
> > running on the master and on one of the slaves (jps). Maybe you could
> > also check whether the conf for the jobtracker on your tasktrackers is
> > pointing to the right place (mapred-site.xml). Do you see any error in
> > the jobtracker log too?
> >
> >
> > On 22 February 2012 09:44, Adarsh Sharma <adarsh.sha...@orkash.com>
> wrote:
> >
> > > Any update on the below issue.
> > >
> > > Thanks
> > >
> > >
> > > Adarsh Sharma wrote:
> > >
> > >> Dear all,
> > >>
> > >> Today I am trying to configure hadoop-0.20.205.0 on a 4 node cluster.
> > >> When I start my cluster, all daemons start except the tasktracker;
> > >> I don't know why the task tracker fails, with the following error logs.
> > >>
> > >> The cluster is in a private network. My /etc/hosts file contains all
> > >> IP/hostname resolution entries on all nodes.
> > >>
> > >> 2012-02-21 17:48:33,056 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter:
> > >> MBean for source TaskTrackerMetrics registered.
> > >> 2012-02-21 17:48:33,094 ERROR org.apache.hadoop.mapred.TaskTracker:
> > >> Can not start task tracker because java.net.SocketException: Invalid
> > >> argument
> > >>       at sun.nio.ch.Net.bind(Native Method)
> > >>       at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
> > >>       at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> > >>       at org.apache.hadoop.ipc.Server.bind(Server.java:225)
> > >>       at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
> > >>       at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
> > >>       at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
> > >>       at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
> > >>       at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:772)
> > >>       at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1428)
> > >>       at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3673)
> > >>
> > >> Any comments on the issue?
> > >>
> > >>
> > >> Thanks
> > >>
> > >>
> > >
> >
>
>
>
> --
> Jay Vyas
> MMSB/UCHC
>
