ialLen());
> }
>
> even if the comment still says it is not supported, it seems to do
> something.
>
> So this makes me think that append is not supported on Hadoop's
> LocalFileSystem.
>
> Is that correct?
>
> Thanks,
> Olivier
>
> On Th
pache.org/mod_mbox/hadoop-core-user/ I only see a
> browse per month...
>
> Thanks,
> Olivier
>
>
> On Thu, May 28, 2009 at 10:57 AM, Sasha Dolgy wrote:
>
>> append isn't supported without modifying the configuration file for
>> hadoop. check out the mailing
; java.io.IOException: Not supported
> at
> org.apache.hadoop.fs.ChecksumFileSystem.append(ChecksumFileSystem.java:290)
> at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:525)
> at com.neodatis.odb.hadoop.HadoopIO.main(HadoopIO.java:38)
>
>
> 1) Can someone tell me what I am doing wrong?
>
> 2) How can I update the file (for example, just update the first 10 bytes of
> the file)?
>
>
> Thanks,
> Olivier
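On question 2: HDFS has no random-write API, so on HDFS the usual answer is to rewrite the file. For a plain local file, though, an in-place update of the first 10 bytes works with `java.io.RandomAccessFile`. A minimal sketch (the file name and contents are made up for illustration):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Paths;

public class UpdateFirstBytes {
    public static void main(String[] args) throws IOException {
        // Hypothetical local file, just for illustration.
        Files.write(Paths.get("demo.bin"), "0123456789abcdef".getBytes());

        // In-place update of the first 10 bytes -- this works on a plain
        // local file, but HDFS exposes no equivalent random-write call.
        try (RandomAccessFile raf = new RandomAccessFile("demo.bin", "rw")) {
            raf.seek(0);
            raf.write("XXXXXXXXXX".getBytes());
        }

        System.out.println(new String(Files.readAllBytes(Paths.get("demo.bin"))));
        // XXXXXXXXXXabcdef
    }
}
```

Note this only illustrates the local-filesystem case; it does not make append or overwrite work through the Hadoop `FileSystem` API.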
>
--
Sasha Dolgy
sasha.do...@gmail.com
h, FsPermission)
> */
> public abstract FSDataOutputStream create(Path f,
> FsPermission permission,
> boolean overwrite,
> int bufferSize,
> short replication,
> long blockSize,
> Progressable progress) throws IOException;
>
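The quoted javadoc shows the full `create()` overload. A hedged sketch of calling it through a concrete `FileSystem` might look like this (assumes Hadoop jars on the classpath and a reachable NameNode; the path and parameter values are illustrative, not recommendations, so there is no runnable test here):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class CreateExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Every argument below is an assumption for the sake of the example.
        FSDataOutputStream out = fs.create(
                new Path("/tmp/example.dat"),      // hypothetical path
                FsPermission.getDefault(),
                true,                // overwrite
                4096,                // bufferSize
                (short) 3,           // replication
                64 * 1024 * 1024L,   // blockSize
                null);               // Progressable callback (optional)
        out.write("something".getBytes());
        out.close();
    }
}
```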
> On Fri, May 15, 20
nor is there a way for you to even
> know
>> where the data nodes are located. That is all done by the Hadoop client
> code
>> and done silently under the covers by Hadoop itself.
>>
>> Bill
>>
>> -Original Message-
>> From: sdo...
a datanode nor is there a way for you to even know
> where the data nodes are located. That is all done by the Hadoop client code
> and done silently under the covers by Hadoop itself.
>
> Bill
>
> -Original Message-
> From: sdo...@gmail.com [mailto:sdo...@gmail.com] On Be
alse);
os.write("something".getBytes());
os.close();
Should the client be connecting to a data node to create the file as
indicated in the graphic above?
If connecting to a data node is possible and suggested, where can I find
more details about this process?
Thanks in advance,
-sasha
--
Sasha Dolgy
sasha.do...@gmail.com
;
> http://issues.apache.org/jira/browse/HADOOP-5744
>
> Hope that helps,
> -Todd
>
> On Fri, May 15, 2009 at 6:35 AM, Sasha Dolgy wrote:
>
> > Hi there, forgive the repost:
> >
> > Right now data is received in parallel and is written to a queue, then a
> >
on the
> hour/day/etc to avoid small-file proliferation.
>
> If you want to track the work being done around append and sync, check out
> HADOOP-5744 and the issues referenced therein:
>
> http://issues.apache.org/jira/browse/HADOOP-5744
>
> Hope that helps,
> -Todd
>
>
Hi there, forgive the repost:
Right now data is received in parallel and is written to a queue, then a
single thread reads the queue and writes those messages to a
FSDataOutputStream which is kept open, but the messages never get flushed.
Tried flush() and sync() with no joy.
1.
outputStream.wri
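The pattern described above (one long-lived stream fed by a queue, with periodic flushes) can be sketched as below. This assumes Hadoop jars and a running HDFS, and whether `sync()` actually makes the bytes visible to readers depends on the Hadoop version and the append/sync work tracked in HADOOP-5744, so no runnable test accompanies it:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class QueueWriter {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Hypothetical path; kept open for the life of the writer thread.
        FSDataOutputStream out = fs.create(new Path("/logs/stream.log"), true);
        for (String msg : new String[] {"a", "b", "c"}) { // stand-in for the queue
            out.write((msg + "\n").getBytes());
            out.sync(); // push buffered bytes toward the datanodes
                        // (renamed hflush() in later Hadoop APIs)
        }
        out.close();
    }
}
```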
search this list for that variable name. i made a post last week
inquiring about appends() and was given enough information to go hunt
down the info on google and jira
On Thu, May 14, 2009 at 2:01 PM, Vishal Ghawate
wrote:
> where did you find that property
yep, i'm using it in 0.19.1 and have used it in 0.20.0
-sasha
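For reference, the property being discussed is normally enabled with a config fragment like the following (file name varies by version: `hadoop-site.xml` on 0.19, `hdfs-site.xml` on 0.20; shown here as a sketch, not a recommendation to turn it on in production):

```xml
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```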
On Thu, May 14, 2009 at 1:35 PM, Vishal Ghawate
wrote:
> is this property available in 0.20.0
> since i don't think it is there in prior versions
> Vishal S. Ghawate
> ____
> Fr
ell about Append functionality in Hadoop. Is it available
> now in 0.20 ??
>
> Regards,
>
> Wasim
--
Sasha Dolgy
sasha.do...@gmail.com
t and isn't being written to file...
On Tue, May 12, 2009 at 5:26 PM, Sasha Dolgy wrote:
> Right now data is received in parallel and is written to a queue, then a
> single thread reads the queue and writes those messages to a
> FSDataOutputStream which is kept open, but the message
ttp://wiki.apache.org/hadoop/Chukwa
>
>
> On Sat, May 9, 2009 at 9:44 AM, Sasha Dolgy wrote:
> > Would WritableFactories not allow me to open one outputstream and
> continue
> > to write() and sync() ?
> >
> > Maybe I'm reading into that wrong. Although
Does anyone have any vague ideas when append() may be available for
production usage?
Thanks in advance
-sasha
--
Sasha Dolgy
sasha.do...@gmail.com
Would WritableFactories not allow me to open one outputstream and continue
to write() and sync() ?
Maybe I'm reading into that wrong. Although UUID would be nice, it would
still leave me in the problem of having lots of little files instead of a
few large files.
-sd
On Sat, May 9, 2009 at 8:37
yes, that is the problem. two or hundreds...data streams in very quickly.
On Fri, May 8, 2009 at 8:42 PM, jason hadoop wrote:
> Is it possible that two tasks are trying to write to the same file path?
>
>
> On Fri, May 8, 2009 at 11:46 AM, Sasha Dolgy wrote:
>
> > H
o enable compression (use block
> compression), and SequenceFiles are designed to work well with
> MapReduce.
>
> Cheers,
>
> Tom
>
> On Wed, May 6, 2009 at 12:34 AM, Sasha Dolgy
> wrote:
> > hi there,
> > working through a concept at the moment and was attempt
y-value pair would represent one of
> your little files (you can have a null key, if you only need to store
> the contents of the file). You can also enable compression (use block
> compression), and SequenceFiles are designed to work well with
> MapReduce.
>
> Cheers,
>
>
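Tom's suggestion of packing many small payloads into one SequenceFile with a null key and block compression can be sketched as follows (assumes Hadoop jars; the path, key/value types, and payloads are illustrative, and it needs a filesystem to write to, so no runnable test is attached):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.SequenceFile;

public class PackSmallFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, new Path("/data/packed.seq"),   // hypothetical path
                NullWritable.class, BytesWritable.class,
                SequenceFile.CompressionType.BLOCK);      // block compression
        for (byte[] contents : new byte[][] {"one".getBytes(), "two".getBytes()}) {
            // null key: only the contents of each little file are stored
            writer.append(NullWritable.get(), new BytesWritable(contents));
        }
        writer.close();
    }
}
```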
client
127.0.0.1 because current leaseholder is trying to recreate file.
i hope this makes sense. still a little bit confused.
thanks in advance
-sd
--
Sasha Dolgy
sasha.do...@gmail.com