Re: Data migration in Hadoop

2011-09-13 Thread Vikas Srivastava
Thanks Ayon and Sonal,

One more thing.

*Question: does a size imbalance on any datanode in the cluster create a
problem or have any bad impact?*

According to what you are saying,

my cluster would have 10 datanodes with 2 TB HDDs and 1 datanode with an
8 TB HDD. Does this have any bad impact?

Please suggest. All of this is configured with 16 GB of RAM.

Regards
Vikas Srivastava

On Tue, Sep 13, 2011 at 11:20 PM, Ayon Sinha  wrote:

> What you can do for each node:
> 1. decommission node (or 2 nodes if you want to do this faster). You can do
> this with the excludes file.
> 2. Wait for blocks to be moved off the decommed node(s)
> 3. Replace the disks and put them back in service.
> 4. Repeat until done.
>
> -Ayon
> See My Photos on Flickr 
> Also check out my Blog for answers to commonly asked 
> questions.
>
> --
> *From:* Vikas Srivastava 
> *To:* user@hive.apache.org
> *Sent:* Tuesday, September 13, 2011 5:27 AM
> *Subject:* Re: Data migration in Hadoop
>
> Hey Sonal!
>
> Actually, right now we have an 11-node cluster, each node having 8 disks of
> 300 GB and 8 GB of RAM.
>
> Now what we want to do is replace those 300 GB disks with 1 TB disks so
> that we can have more space per server.
>
> We have replication factor 2.
>
> My suggestion is:
> 1. Add a node with 8 TB to the cluster and run the balancer to balance the load.
> 2. Free up any one node (the replacement node).
>
> Question: does a size imbalance on any datanode in the cluster create a
> problem or have any bad impact?
>
> regards
> Vikas Srivastava
>
>
> On Tue, Sep 13, 2011 at 5:37 PM, Sonal Goyal wrote:
>
> Hi Vikas,
>
> This was discussed in the groups recently:
>
> http://lucene.472066.n3.nabble.com/Fixing-a-bad-HD-tt2863634.html#none
>
> Are you looking at replacing all your datanodes, or only a few? how big is
> your cluster?
>
> Best Regards,
> Sonal
> Crux: Reporting for HBase 
> Nube Technologies 
>
> On Tue, Sep 13, 2011 at 1:52 PM, Vikas Srivastava <
> vikas.srivast...@one97.net> wrote:
>
> Hi,
>
> Can anyone tell me how we can migrate Hadoop data or replace old hard disks
> with new, bigger HDDs?
>
> Actually, I need to replace old 300 GB HDDs with 1 TB ones, so how can I do
> this efficiently?
>
> The problem is migrating the data from one HDD to the other.
>
>
> --
> With Regards
> Vikas Srivastava
>
> DWH & Analytics Team
> Mob:+91 9560885900
> One97 | Let's get talking !
>
>
>
>
>
> --
> With Regards
> Vikas Srivastava
>
> DWH & Analytics Team
> Mob:+91 9560885900
> One97 | Let's get talking !
>
>
>
>


-- 
With Regards
Vikas Srivastava

DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !


HBase and Hivehandler compatibly

2011-09-13 Thread Naila karim
Dear All,

I have the Hadoop 0.20.2-append version and HBase 0.90.3.
I am trying to build Hive 0.9.0-SNAPSHOT for the aforementioned versions of
Hadoop and HBase.
I have to integrate Hive and HBase. For this purpose, *I want to know about
the compatibility of the Hive 0.9.0-SNAPSHOT version with HBase 0.90.3.*
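
For context, the integration I am after is the usual HBase-backed table
mapping. A generic sketch following the documented pattern (the table name,
column family, and HBase table name below are placeholders, not from my
setup):

CREATE TABLE hbase_table_1 (key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");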

Thanks
Naila


Re: Data migration in Hadoop

2011-09-13 Thread Sonal Goyal
Vikas,

I would suggest running production clusters with a replication factor of 3.
Then you could decommission 2 nodes at a time, as Ayon suggests; otherwise,
one node at a time.
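
A minimal sketch of raising the replication factor, assuming the cluster has
enough capacity for the third copy (the warehouse path below is only an
illustration):

# for files written from now on: set dfs.replication to 3 in the client-side hdfs-site.xml
# for data that already exists: raise replication recursively; re-replication proceeds in the background
hadoop fs -setrep -R 3 /user/hive/warehouse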

Best Regards,
Sonal
Crux: Reporting for HBase 
Nube Technologies 







On Tue, Sep 13, 2011 at 11:20 PM, Ayon Sinha  wrote:

> What you can do for each node:
> 1. decommission node (or 2 nodes if you want to do this faster). You can do
> this with the excludes file.
> 2. Wait for blocks to be moved off the decommed node(s)
> 3. Replace the disks and put them back in service.
> 4. Repeat until done.
>
> -Ayon
> See My Photos on Flickr 
> Also check out my Blog for answers to commonly asked 
> questions.
>
> --
> *From:* Vikas Srivastava 
> *To:* user@hive.apache.org
> *Sent:* Tuesday, September 13, 2011 5:27 AM
> *Subject:* Re: Data migration in Hadoop
>
> Hey Sonal!
>
> Actually, right now we have an 11-node cluster, each node having 8 disks of
> 300 GB and 8 GB of RAM.
>
> Now what we want to do is replace those 300 GB disks with 1 TB disks so
> that we can have more space per server.
>
> We have replication factor 2.
>
> My suggestion is:
> 1. Add a node with 8 TB to the cluster and run the balancer to balance the load.
> 2. Free up any one node (the replacement node).
>
> Question: does a size imbalance on any datanode in the cluster create a
> problem or have any bad impact?
>
> regards
> Vikas Srivastava
>
>
> On Tue, Sep 13, 2011 at 5:37 PM, Sonal Goyal wrote:
>
> Hi Vikas,
>
> This was discussed in the groups recently:
>
> http://lucene.472066.n3.nabble.com/Fixing-a-bad-HD-tt2863634.html#none
>
> Are you looking at replacing all your datanodes, or only a few? how big is
> your cluster?
>
> Best Regards,
> Sonal
> Crux: Reporting for HBase 
> Nube Technologies 
>
> 
> On Tue, Sep 13, 2011 at 1:52 PM, Vikas Srivastava <
> vikas.srivast...@one97.net> wrote:
>
> Hi,
>
> Can anyone tell me how we can migrate Hadoop data or replace old hard disks
> with new, bigger HDDs?
>
> Actually, I need to replace old 300 GB HDDs with 1 TB ones, so how can I do
> this efficiently?
>
> The problem is migrating the data from one HDD to the other.
>
>
> --
> With Regards
> Vikas Srivastava
>
> DWH & Analytics Team
> Mob:+91 9560885900
> One97 | Let's get talking !
>
>
>
>
>
> --
> With Regards
> Vikas Srivastava
>
> DWH & Analytics Team
> Mob:+91 9560885900
> One97 | Let's get talking !
>
>
>
>


Re: Change in serdeproperties does not update existing partitions

2011-09-13 Thread Maxime Brugidou
Thanks Ashutosh for your answer. I actually use external tables, so I don't
lose my partition data.

This is still odd behavior to me, and I don't see why someone would expect
it. Whenever I need to add a column to a table (my table here represents a
log, and it is common to add fields to logs), I need to drop all partitions
and recreate them. How do people handle this in general?

Do you have a use case where people want to alter a table and not update the
existing partitions? Is it so that if your file format evolves you don't
have to convert the whole history?

Best,
Maxime

On Tue, Sep 13, 2011 at 7:03 PM, Ashutosh Chauhan wrote:

> Hey Maxime,
>
> Yeah, that's intended behavior. After you alter the table, all subsequent
> actions on the table and on newly added partitions will inherit from it. To
> modify the properties of already-existing partitions, you would need
> something like 'alter table test_table partition (day='2011-09-02') set
> serdeproperties ('input.regex' = '(.*)')'. Unfortunately this is not
> supported currently. Feel free to file a bug for that.
>
> A workaround (applicable only because you are using an external table) is to
> drop the partitions and then add them again. When you drop a partition from
> an external table, only the metadata gets wiped out; the data is not
> deleted, so when you add the partition again it will inherit the table's
> serde properties and you will get what you are looking for. Use this
> workaround with care; you don't want to lose your data while recreating
> partitions.
>
> Hope it helps,
> Ashutosh
>
> On Tue, Sep 13, 2011 at 06:03, Maxime Brugidou 
> wrote:
>
>> Hello,
>>
>> I am using Hive 0.7 from cloudera cdh3u0 and I encounter a strange
>> behavior when I update the serdeproperties of a table (for example for the
>> RegexSerDe).
>>
>> If you have a simple partitioned table like
>>
>> create external table test_table (
>> id int)
>> partitioned by (day string)
>> row format serde 'org.apache.hadoop.contrib.serde2.RegexSerDe'
>> with serdeproperties (
>> 'input.regex' = '.* ([^ ]*)'
>> );
>>
>> alter table test_table add partition (day='2011-09-01');
>>
>> alter table test_table set serdeproperties  (
>> 'input.regex' = '(.*)'
>> );
>>
>> alter table test_table add partition (day='2011-09-02');
>>
>>
>> The first partition will still use the older regex and the new one will
>> use the new regex. Is this intended behavior? Why?
>>
>> Thanks for your help,
>> Maxime
>>
>>
>


Re: Data migration in Hadoop

2011-09-13 Thread Ayon Sinha
What you can do for each node:
1. decommission node (or 2 nodes if you want to do this faster). You can do 
this with the excludes file.
2. Wait for blocks to be moved off the decommed node(s)
3. Replace the disks and put them back in service.
4. Repeat until done.
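
For the excludes-file part, a minimal sketch of the mechanics on a 0.20-era
cluster, assuming dfs.hosts.exclude in hdfs-site.xml already points at an
excludes file (the host name and file path below are assumptions):

# 1. add the node(s) to retire to the excludes file, one hostname per line
echo "datanode-07" >> /etc/hadoop/conf/excludes

# 2. tell the namenode to start re-replicating blocks off those nodes
hadoop dfsadmin -refreshNodes

# 3. watch the node until it reports "Decommission Status : Decommissioned"
hadoop dfsadmin -report

# 4. stop the datanode, swap the disks, remove the host from the excludes file,
#    run -refreshNodes again, and restart the datanode to put it back in service
hadoop-daemon.sh stop datanode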
 
-Ayon
See My Photos on Flickr
Also check out my Blog for answers to commonly asked questions.




From: Vikas Srivastava 
To: user@hive.apache.org
Sent: Tuesday, September 13, 2011 5:27 AM
Subject: Re: Data migration in Hadoop


Hey Sonal!

Actually, right now we have an 11-node cluster, each node having 8 disks of
300 GB and 8 GB of RAM.

Now what we want to do is replace those 300 GB disks with 1 TB disks so
that we can have more space per server.

We have replication factor 2.

My suggestion is:
1. Add a node with 8 TB to the cluster and run the balancer to balance the load.
2. Free up any one node (the replacement node).

Question: does a size imbalance on any datanode in the cluster create a
problem or have any bad impact?

regards 
Vikas Srivastava



On Tue, Sep 13, 2011 at 5:37 PM, Sonal Goyal  wrote:

Hi Vikas,
>
>
>This was discussed in the groups recently:
>
>
>http://lucene.472066.n3.nabble.com/Fixing-a-bad-HD-tt2863634.html#none
>
>
>Are you looking at replacing all your datanodes, or only a few? how big is 
>your cluster?
>
>Best Regards,
>Sonal
>Crux: Reporting for HBase
>Nube Technologies 
>
>On Tue, Sep 13, 2011 at 1:52 PM, Vikas Srivastava  
>wrote:
>
>Hi,
>>
>>Can anyone tell me how we can migrate Hadoop data or replace old hard disks
>>with new, bigger HDDs?
>>
>>Actually, I need to replace old 300 GB HDDs with 1 TB ones, so how can I do
>>this efficiently?
>>
>>The problem is migrating the data from one HDD to the other.
>>
>>
>>-- 
>>With Regards
>>Vikas Srivastava
>>
>>DWH & Analytics Team
>>Mob:+91 9560885900
>>One97 | Let's get talking !
>>
>


-- 
With Regards
Vikas Srivastava

DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !

Re: Change in serdeproperties does not update existing partitions

2011-09-13 Thread Ashutosh Chauhan
Hey Maxime,

Yeah, that's intended behavior. After you alter the table, all subsequent
actions on the table and on newly added partitions will inherit from it. To
modify the properties of already-existing partitions, you would need
something like 'alter table test_table partition (day='2011-09-02') set
serdeproperties ('input.regex' = '(.*)')'. Unfortunately this is not
supported currently. Feel free to file a bug for that.

A workaround (applicable only because you are using an external table) is to
drop the partitions and then add them again. When you drop a partition from
an external table, only the metadata gets wiped out; the data is not
deleted, so when you add the partition again it will inherit the table's
serde properties and you will get what you are looking for. Use this
workaround with care; you don't want to lose your data while recreating
partitions.
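
On the example table from this thread, the workaround looks roughly like
this (again, only safe for EXTERNAL tables, where dropping a partition
leaves the underlying files in place):

alter table test_table drop partition (day='2011-09-01');
alter table test_table add partition (day='2011-09-01');
-- the re-added partition now picks up the table-level 'input.regex' = '(.*)'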

Hope it helps,
Ashutosh

On Tue, Sep 13, 2011 at 06:03, Maxime Brugidou wrote:

> Hello,
>
> I am using Hive 0.7 from cloudera cdh3u0 and I encounter a strange behavior
> when I update the serdeproperties of a table (for example for the
> RegexSerDe).
>
> If you have a simple partitioned table like
>
> create external table test_table (
> id int)
> partitioned by (day string)
> row format serde 'org.apache.hadoop.contrib.serde2.RegexSerDe'
> with serdeproperties (
> 'input.regex' = '.* ([^ ]*)'
> );
>
> alter table test_table add partition (day='2011-09-01');
>
> alter table test_table set serdeproperties  (
> 'input.regex' = '(.*)'
> );
>
> alter table test_table add partition (day='2011-09-02');
>
>
> The first partition will still use the older regex and the new one will use
> the new regex. Is this intended behavior? Why?
>
> Thanks for your help,
> Maxime
>
>


Re: Hive issue

2011-09-13 Thread Ashutosh Chauhan
+ user@hive

Siddharth,

>> java.sql.SQLException: Method not supported
>> at
org.apache.hadoop.hive.jdbc.HiveConnection.createStatement(HiveConnection.java:207)

Hive's JDBC implementation is not fully JDBC-compliant yet and doesn't
support all of the interface's methods.
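
As a point of reference, a minimal sketch of going through the old driver
directly; the host, port, and query below are assumptions, and the
commented-out call is the kind of overload that raises "Method not
supported":

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
    Connection con = DriverManager.getConnection(
        "jdbc:hive://localhost:10000/default", "", "");
    // the plain no-argument form is implemented in this driver
    Statement stmt = con.createStatement();
    // Statement s2 = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
    //     ResultSet.CONCUR_READ_ONLY); // throws "Method not supported" here
    ResultSet rs = stmt.executeQuery("SELECT gener FROM gener_liking2 LIMIT 10");
    while (rs.next()) {
      System.out.println(rs.getString(1));
    }
    con.close();
  }
}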

Hope it helps,
Ashutosh


On Tue, Sep 13, 2011 at 06:22, Siddharth Tiwari
wrote:

>  Hi ashutosh,
>
> The error was the same; I didn't have my core jar on the path. But now I
> tried to run a simple query, SELECT gener FROM gener_liking2;, through
> Report Designer and it threw the following error:
>
> org.pentaho.reporting.engine.classic.core.ReportDataFactoryException:
> Failed at query: SELECT gener from 'gener_liking2';
>
> at
> org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SimpleSQLReportDataFactory.queryData(SimpleSQLReportDataFactory.java:254)
> at
> org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SQLReportDataFactory.queryData(SQLReportDataFactory.java:95)
> at
> org.pentaho.reporting.ui.datasources.jdbc.ui.JdbcPreviewWorker.run(JdbcPreviewWorker.java:103)
> at java.lang.Thread.run(Thread.java:679)
> ParentException:
> java.sql.SQLException: Method not supported
> at
> org.apache.hadoop.hive.jdbc.HiveConnection.createStatement(HiveConnection.java:207)
> at
> org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SimpleSQLReportDataFactory.parametrizeAndQuery(SimpleSQLReportDataFactory.java:333)
> at
> org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SimpleSQLReportDataFactory.queryData(SimpleSQLReportDataFactory.java:250)
> at
> org.pentaho.reporting.engine.classic.core.modules.misc.datafactory.sql.SQLReportDataFactory.queryData(SQLReportDataFactory.java:95)
> at
> org.pentaho.reporting.ui.datasources.jdbc.ui.JdbcPreviewWorker.run(JdbcPreviewWorker.java:103)
> at java.lang.Thread.run(Thread.java:679)
>
> please help
>
> ****
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
>
>


Change in serdeproperties does not update existing partitions

2011-09-13 Thread Maxime Brugidou
Hello,

I am using Hive 0.7 from cloudera cdh3u0 and I encounter a strange behavior
when I update the serdeproperties of a table (for example for the
RegexSerDe).

If you have a simple partitioned table like

create external table test_table (
id int)
partitioned by (day string)
row format serde 'org.apache.hadoop.contrib.serde2.RegexSerDe'
with serdeproperties (
'input.regex' = '.* ([^ ]*)'
);

alter table test_table add partition (day='2011-09-01');

alter table test_table set serdeproperties  (
'input.regex' = '(.*)'
);

alter table test_table add partition (day='2011-09-02');


The first partition will still use the older regex and the new one will use
the new regex. Is this intended behavior? Why?

Thanks for your help,
Maxime


question about applying patch

2011-09-13 Thread Chalcy Raja
Hi,

I am trying to import a SQL Server table into a Hive table using Sqoop. The
import failed on an nvarchar field. There is a patch released as per
https://issues.apache.org/jira/browse/SQOOP-323?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel#issue-tabs.

How do I apply this patch to the existing Sqoop installation?
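
The usual pattern is to apply the .patch file attached to the JIRA against a
source checkout and rebuild. A minimal sketch, assuming you build Sqoop from
source (the patch file name, checkout directory, and build target below are
assumptions, not taken from the ticket):

cd sqoop-src
# download the attachment from SQOOP-323, then apply it at the root of the checkout
patch -p0 < SQOOP-323.patch     # or: git apply SQOOP-323.patch
# rebuild and replace the Sqoop jar on the machines that run the import
ant jar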

Thank you,
Chalcy

The error I am getting is,

Hive does not support the SQL type for column   -- the column is 
nvarchar
at 
com.cloudera.sqoop.hive.TableDefWriter.getCreateTableStmt(TableDefWriter.java:150)
at com.cloudera.sqoop.hive.HiveImport.importTable(HiveImport.java:187)
at com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:362)
at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:423)
at com.cloudera.sqoop.Sqoop.run(Sqoop.java:144)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:180)
at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:218)
at com.cloudera.sqoop.Sqoop.main(Sqoop.java:228)



Re: Data migration in Hadoop

2011-09-13 Thread Vikas Srivastava
Hey Sonal!

Actually, right now we have an 11-node cluster, each node having 8 disks of
300 GB and 8 GB of RAM.

Now what we want to do is replace those 300 GB disks with 1 TB disks so
that we can have more space per server.

We have replication factor 2.

My suggestion is:
1. Add a node with 8 TB to the cluster and run the balancer to balance the load.
2. Free up any one node (the replacement node).

Question: does a size imbalance on any datanode in the cluster create a
problem or have any bad impact?
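
For step 1, a minimal sketch of running the balancer once the new node is up
(the threshold is only an illustration; it is the allowed deviation, in
percent, from the average datanode utilization):

# on the new node, once it is in the slaves/includes configuration
hadoop-daemon.sh start datanode

# from any machine with the cluster configuration on its classpath
hadoop balancer -threshold 5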

regards
Vikas Srivastava


On Tue, Sep 13, 2011 at 5:37 PM, Sonal Goyal  wrote:

> Hi Vikas,
>
> This was discussed in the groups recently:
>
> http://lucene.472066.n3.nabble.com/Fixing-a-bad-HD-tt2863634.html#none
>
> Are you looking at replacing all your datanodes, or only a few? how big is
> your cluster?
>
> Best Regards,
> Sonal
> Crux: Reporting for HBase 
> Nube Technologies 
>
> On Tue, Sep 13, 2011 at 1:52 PM, Vikas Srivastava <
> vikas.srivast...@one97.net> wrote:
>
>> Hi,
>>
>> Can anyone tell me how we can migrate Hadoop data or replace old hard disks
>> with new, bigger HDDs?
>>
>> Actually, I need to replace old 300 GB HDDs with 1 TB ones, so how can I do
>> this efficiently?
>>
>> The problem is migrating the data from one HDD to the other.
>>
>>
>> --
>> With Regards
>> Vikas Srivastava
>>
>> DWH & Analytics Team
>> Mob:+91 9560885900
>> One97 | Let's get talking !
>>
>>
>


-- 
With Regards
Vikas Srivastava

DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !


Re: Data migration in Hadoop

2011-09-13 Thread Sonal Goyal
Hi Vikas,

This was discussed in the groups recently:

http://lucene.472066.n3.nabble.com/Fixing-a-bad-HD-tt2863634.html#none

Are you looking at replacing all your datanodes, or only a few? how big is
your cluster?

Best Regards,
Sonal
Crux: Reporting for HBase 
Nube Technologies 







On Tue, Sep 13, 2011 at 1:52 PM, Vikas Srivastava <
vikas.srivast...@one97.net> wrote:

> Hi,
>
> Can anyone tell me how we can migrate Hadoop data or replace old hard disks
> with new, bigger HDDs?
>
> Actually, I need to replace old 300 GB HDDs with 1 TB ones, so how can I do
> this efficiently?
>
> The problem is migrating the data from one HDD to the other.
>
>
> --
> With Regards
> Vikas Srivastava
>
> DWH & Analytics Team
> Mob:+91 9560885900
> One97 | Let's get talking !
>
>


Data migration in Hadoop

2011-09-13 Thread Vikas Srivastava
Hi,

Can anyone tell me how we can migrate Hadoop data or replace old hard disks
with new, bigger HDDs?

Actually, I need to replace old 300 GB HDDs with 1 TB ones, so how can I do
this efficiently?

The problem is migrating the data from one HDD to the other.


-- 
With Regards
Vikas Srivastava

DWH & Analytics Team
Mob:+91 9560885900
One97 | Let's get talking !


how to run command in hive and script hive

2011-09-13 Thread 陳春宏
Hello,

I am analyzing Apache logs with Hive, but there is a problem.

When I put the Hive command in a script file and schedule it with crontab,
the result is different from running it in the Hive shell.

The attached files show the two runs in detail:

Hive_error.txt shows the Hive command run from the script.
Hive_normal.txt shows the Hive command run in the Hive shell.
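
One thing worth ruling out when a scripted run differs from an interactive
run is the extra layer of shell quoting around hive -e (the backslashes in
the regexes are an obvious suspect). A minimal sketch of keeping the query
in a file instead, so cron executes exactly the same text as the interactive
session (the file name, schedule, and log path are illustrative):

# /home/hadoop/varnish_load.hql contains the INSERT OVERWRITE ... FROM varnish; statement verbatim
# crontab entry:
0 1 * * * /home/hadoop/hive-0.6.0/bin/hive -f /home/hadoop/varnish_load.hql >> /home/hadoop/varnish_load.log 2>&1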

 

 

 

Best Regard

MSN:   chen0...@hotmail.com

SKYPE:  chen0727

Mobil: 886-937545215

Tel: 886-2-8798-2988 #222

Fax:886-2-8751-5499

 

hadoop@hadoop-00:~$ uname
Linux
hadoop@hadoop-00:~$ cat /proc/version
Linux version 2.6.35-22-server (buildd@allspice) (gcc version 4.4.5 
(Ubuntu/Linaro 4.4.4-14ubuntu4) ) #33-Ubuntu SMP Sun Sep 19 20:48:58 UTC 2010
/home/hadoop/hive-0.6.0/bin/hive -e "insert overwrite table varnish_data select 
host,to_date(from_unixtime(unix_timestamp(regexp_extract(time, '^.([^:]*):.*', 
1),'dd/MMM/'))),substr(regexp_extract(request, 'http://(\\S*)\/..', 1), 1, 
instr(regexp_extract(request, 'http://(\\S*)\/..', 
1),'/')-1),status,size,regexp_extract(referer, 'http://(\\S*)/', 1),agent FROM 
varnish;"
hive> desc varnish_data;
OK
host    string
time    string
request string
status  string
size    int
referer string
agent   string
Time taken: 0.073 seconds
hive> select * from varnish_data limit 5;
OK
115.87.233.142  2011-08-04  200 5012"Mozilla/4.0 
(compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET 
CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2)"
202.76.19.134   2011-08-04  200 397 "Mozilla/4.0 
(compatible; MSIE 6.0; Windows NT 5.2; SV1; .NET CLR 1.1.4322)"
171.243.127.174 2011-08-04  200 941 "Mozilla/4.0 
(compatible; MSIE 6.0; Windows NT 5.1; SV1; GTB7.1; MS Internet Explorer; .NET 
CLR 2.0.50727)"
121.33.94.452011-08-04  200 4273"Mozilla/4.0 
(compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET4.0C; .NET4.0E)"
118.68.36.972011-08-04  200 941 "Mozilla/4.0 
(compatible; MSIE 6.0; Windows NT 5.1; SV1)"
Time taken: 0.491 seconds




hadoop@hadoop-00:~$ uname
Linux
hadoop@hadoop-00:~$ cat /proc/version
Linux version 2.6.35-22-server (buildd@allspice) (gcc version 4.4.5 
(Ubuntu/Linaro 4.4.4-14ubuntu4) ) #33-Ubuntu SMP Sun Sep 19 20:48:58 UTC 2010
/home/hadoop/hive-0.6.0/bin/hive -e "insert overwrite table varnish_data select 
host,to_date(from_unixtime(unix_timestamp(regexp_extract(time, '^.([^:]*):.*', 
1),'dd/MMM/'))),substr(regexp_extract(request, 'http://(\\S*)\/..', 1), 1, 
instr(regexp_extract(request, 'http://(\\S*)\/..', 
1),'/')-1),status,size,regexp_extract(referer, 'http://(\\S*)/', 1),agent FROM 
varnish;"
hive> desc varnish_data;
OK
host    string
time    string
request string
status  string
size    int
referer string
agent   string
Time taken: 0.073 seconds

hive> insert overwrite table varnish_data select 
host,to_date(from_unixtime(unix_timestamp(regexp_extract(time, '^.([^:]*):.*', 
1),'dd/MMM/'))),substr(regexp_extract(request, 'http://(\\S*)\/..', 1), 1, 
instr(regexp_extract(request, 'http://(\\S*)\/..', 
1),'/')-1),status,size,regexp_extract(referer, 'http://(\\S*)/', 1),agent FROM 
varnish;
hive> select * from varnish_data limit 5;
OK
115.87.233.142  2011-08-04  image.11.com200 
50127uwak.11.com"Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 
5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 
3.5.30729; InfoPath.2)"
202.76.19.134   2011-08-04  image.8899988.com   200 397 
dxiqd.8899988.com   "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2; 
SV1; .NET CLR 1.1.4322)"
171.243.127.174 2011-08-04  image.32.com200 941 
a5vpni7x.32.com "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; 
SV1; GTB7.1; MS Internet Explorer; .NET CLR 2.0.50727)"
121.33.94.452011-08-04  image.8899988.com   200 4273
u1u1u.8899988.com   "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; 
SV1; .NET4.0C; .NET4.0E)"
118.68.36.972011-08-04  image.32.com200 941 
q9r8m.32.com"Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; 
SV1)"
Time taken: 0.418 seconds