Hi,

It's pretty clear that the two versions differ. I just can't make out why,
except that maybe the data-transfer version of the build is higher than the
one I use (I triple-checked that I always use the same Hadoop version!).

Unfortunately, compiling Hadoop fails with an error on my machine (it must be
Windows-related), so it's hard for me to build a custom hadoop-core and see
which data-transfer version each side actually has.
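
Since I can't build it, here is the kind of check I had in mind: reading the
constant straight out of each hadoop-core jar via reflection, with no Hadoop
compile needed. This is only a sketch; it assumes the constant lives in
org.apache.hadoop.dfs.FSConstants (as in 0.17). Run it once against each jar
and compare the printed values:

import java.lang.reflect.Field;

// Prints the wire-protocol constant of whatever hadoop-core jar is on the
// classpath; run once per jar and compare the numbers.
public class PrintTransferVersion {
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName("org.apache.hadoop.dfs.FSConstants");
        Field f = c.getField("DATA_TRANSFER_VERSION");
        // f.get(null) works whether the field is declared int or short
        System.out.println("DATA_TRANSFER_VERSION = " + f.get(null));
    }
}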

Also, I'm unable to post a bug report: I always get redirected to the list
page. It would be very helpful if someone else could look into it, or at
least confirm the bug. The code is all there in my first email.

Thanks,
Thibaut



Shengkai Zhu wrote:
> 
> I've checked the code in DataNode.java, exactly where you get the error:
> 
> ...
> DataInputStream in = null;
> in = new DataInputStream(
>         new BufferedInputStream(s.getInputStream(), BUFFER_SIZE));
> short version = in.readShort();
> if (version != DATA_TRANSFER_VERSION) {
>     throw new IOException("Version Mismatch");
> }
> ...
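>
> The other half of that handshake is on the client: whatever connects to
> the datanode's data port is expected to write that same short as the first
> two bytes of the stream. A minimal sketch of such a client (not the actual
> DFSClient code; port and constant are the 0.17 defaults as I understand
> them):
>
> import java.io.DataOutputStream;
> import java.net.Socket;
> import org.apache.hadoop.dfs.FSConstants;
>
> // The first two bytes on the wire must be the data-transfer version, or
> // the datanode's readShort() above sees something else and throws the
> // "Version Mismatch" IOException.
> Socket s = new Socket("localhost", 50010);          // datanode data port
> DataOutputStream out = new DataOutputStream(s.getOutputStream());
> out.writeShort(FSConstants.DATA_TRANSFER_VERSION);  // must match datanode
> // ... opcode and request payload follow ...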
> 
> Maybe this is useful for you.
> 
> On 7/11/08, Thibaut_ <[EMAIL PROTECTED]> wrote:
>>
>>
>> Hi, I'm trying to access the HDFS of my Hadoop cluster from a non-Hadoop
>> application. Hadoop 0.17.1 is running on the standard ports.
>>
>> This is the code I use:
>>
>> FileSystem fileSystem = null;
>> String hdfsurl = "hdfs://localhost:50010";
>> fileSystem = new DistributedFileSystem();
>>
>> try {
>>     fileSystem.initialize(new URI(hdfsurl), new Configuration());
>> } catch (Exception e) {
>>     e.printStackTrace();
>>     System.out.println("init error:");
>>     System.exit(1);
>> }
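>>
>> (As an aside on what this code assumes: 50010 is the datanode's
>> data-transfer port in a default setup, while hdfs:// URLs normally point
>> at the namenode's RPC address, i.e. whatever fs.default.name is set to.
>> The same initialization against an assumed namenode address of
>> hdfs://localhost:9000 would look like this:)
>>
>> // Assumed fs.default.name value; substitute your cluster's setting.
>> String hdfsurl = "hdfs://localhost:9000";
>> FileSystem fileSystem = new DistributedFileSystem();
>> fileSystem.initialize(new URI(hdfsurl), new Configuration());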
>>
>>
>> which fails with the exception:
>>
>>
>> java.net.SocketTimeoutException: timed out waiting for rpc response
>>        at org.apache.hadoop.ipc.Client.call(Client.java:559)
>>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:212)
>>        at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>>        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:313)
>>        at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:102)
>>        at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:178)
>>        at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:68)
>>        at com.iterend.spider.conf.Config.getRemoteFileSystem(Config.java:72)
>>        at tests.RemoteFileSystemTest.main(RemoteFileSystemTest.java:22)
>> init error:
>>
>>
>> The Hadoop logfile contains the following error:
>>
>> 2008-07-10 23:05:47,840 INFO org.apache.hadoop.dfs.Storage: Storage directory \hadoop\tmp\hadoop-sshd_server\dfs\data is not formatted.
>> 2008-07-10 23:05:47,840 INFO org.apache.hadoop.dfs.Storage: Formatting ...
>> 2008-07-10 23:05:47,928 INFO org.apache.hadoop.dfs.DataNode: Registered FSDatasetStatusMBean
>> 2008-07-10 23:05:47,929 INFO org.apache.hadoop.dfs.DataNode: Opened server at 50010
>> 2008-07-10 23:05:47,933 INFO org.apache.hadoop.dfs.DataNode: Balancing bandwith is 1048576 bytes/s
>> 2008-07-10 23:05:48,128 INFO org.mortbay.util.Credential: Checking Resource aliases
>> 2008-07-10 23:05:48,344 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
>> 2008-07-10 23:05:48,346 INFO org.mortbay.util.Container: Started HttpContext[/static,/static]
>> 2008-07-10 23:05:48,346 INFO org.mortbay.util.Container: Started HttpContext[/logs,/logs]
>> 2008-07-10 23:05:49,047 INFO org.mortbay.util.Container: Started [EMAIL PROTECTED]
>> 2008-07-10 23:05:49,244 INFO org.mortbay.util.Container: Started WebApplicationContext[/,/]
>> 2008-07-10 23:05:49,247 INFO org.mortbay.http.SocketListener: Started SocketListener on 0.0.0.0:50075
>> 2008-07-10 23:05:49,247 INFO org.mortbay.util.Container: Started [EMAIL PROTECTED]
>> 2008-07-10 23:05:49,257 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
>> 2008-07-10 23:05:49,535 INFO org.apache.hadoop.dfs.DataNode: New storage id DS-2117780943-192.168.1.130-50010-1215723949510 is assigned to data-node 127.0.0.1:50010
>> 2008-07-10 23:05:49,586 INFO org.apache.hadoop.dfs.DataNode: 127.0.0.1:50010In DataNode.run, data = FSDataset{dirpath='c:\hadoop\tmp\hadoop-sshd_server\dfs\data\current'}
>> 2008-07-10 23:05:49,586 INFO org.apache.hadoop.dfs.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 60000msec
>> 2008-07-10 23:06:04,636 INFO org.apache.hadoop.dfs.DataNode: BlockReport of 0 blocks got processed in 11 msecs
>> 2008-07-10 23:19:54,512 ERROR org.apache.hadoop.dfs.DataNode: 127.0.0.1:50010:DataXceiver: java.io.IOException: Version Mismatch
>>        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:961)
>>        at java.lang.Thread.run(Thread.java:619)
>>
>>
>> Any ideas how I can fix this? The Hadoop cluster and my application are
>> both using the same hadoop jar!
>>
>> Thanks for your help,
>> Thibaut
> 
> 
