[Hadoop Wiki] Update of "PoweredBy" by spookysam

2011-06-17 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "PoweredBy" page has been changed by spookysam:
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=312&rev2=313

   * [[http://www.web-alliance.fr|Web Alliance]]
* We use Hadoop for our internal search engine optimization (SEO) tools. It allows us to store, index, and search data much faster.
* We also use it for log analysis and trend prediction.
-  * [[http://www.worldlingo.com/|WorldLingo]]
+  * [[http://www.worldlingo.com/|WorldLingo]] and [[http://itshumour.blogspot.com/2010/06/twenty-hilarious-funny-quotes.html|Funny Quotes]]
* Hardware: 44 servers (each server has: 2 dual core CPUs, 2TB storage, 8GB 
RAM)
* Each server runs Xen with one Hadoop/HBase instance and another instance 
with web or application servers, giving us 88 usable virtual machines.
* We run two separate Hadoop/HBase clusters with 22 nodes each.


[Hadoop Wiki] Update of "UsingLzoCompression" by DougMeil

2011-06-17 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "UsingLzoCompression" page has been changed by DougMeil:
http://wiki.apache.org/hadoop/UsingLzoCompression?action=diff&rev1=24&rev2=25

Comment:
Per stack, changing the repo to Todd's version of LZO

  
  This distro doesn't contain all bug fixes (such as the fix for when LZO header or block header data falls on a read boundary).
  
- Please get the latest distro with all fixes from http://github.com/kevinweil/hadoop-lzo
+ Please get the latest distro with all fixes from https://github.com/toddlipcon/hadoop-lzo
  
  == Why compression? ==
  When compression is enabled, the store file (HFile) applies a compression algorithm to blocks as they are written (during flushes and compactions), so those blocks must be decompressed when they are read.
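
For illustration only (this sketch is not part of the wiki page), enabling LZO on a column family from an HBase-0.90-era Java client looks roughly like the following, assuming the hadoop-lzo native libraries discussed above are installed on every region server; the table and family names are hypothetical.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.io.hfile.Compression;

public class CreateLzoTable {
  public static void main(String[] args) throws Exception {
    // Table and column family names are hypothetical.
    HTableDescriptor table = new HTableDescriptor("mytable");
    HColumnDescriptor family = new HColumnDescriptor("cf");
    // Store file (HFile) blocks for this family are LZO-compressed as they
    // are written (flushes/compactions) and decompressed when read.
    family.setCompressionType(Compression.Algorithm.LZO);
    table.addFamily(family);
    HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
    admin.createTable(table);
  }
}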


[Hadoop Wiki] Update of "FrontPage" by prosch

2011-06-17 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "FrontPage" page has been changed by prosch:
http://wiki.apache.org/hadoop/FrontPage?action=diff&rev1=263&rev2=264

  = Apache Hadoop =
- [[http://hadoop.apache.org/|Apache Hadoop]] is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named [[HadoopMapReduce|Map/Reduce]], where the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system ([[DFS|HDFS]]) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both Map/Reduce and the distributed file system are designed so that node failures are automatically handled by the framework.
+ [[http://hadoop.apache.org/|Apache Hadoop]] is a framework for running applications on large clusters built of commodity hardware. The Hadoop framework transparently provides applications both reliability and data motion. Hadoop implements a computational paradigm named [[HadoopMapReduce|Map/Reduce]], where the application is divided into many small fragments of work, each of which may be executed or re-executed on any node in the cluster. In addition, it provides a distributed file system ([[DFS|HDFS]]) that stores data on the compute nodes, providing very high aggregate bandwidth across the cluster. Both Map/Reduce and the distributed file system are designed so that node failures are automatically handled by the framework[[http://www.uebersetzung1.wordpress.com|.]]
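
The paragraph above describes the Map/Reduce paradigm in prose; as a minimal illustrative sketch (not part of the wiki page), a word-count job written against the org.apache.hadoop.mapreduce API shows the same split/map/reduce flow. Class and job names are hypothetical.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenMapper
      extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      // Emit (word, 1) for every token in this fragment of the input.
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);
        }
      }
    }
  }

  public static class SumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      // Sum the counts produced by the mappers for this word.
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}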
  
  == General Information ==
   * [[http://hadoop.apache.org/|Official Apache Hadoop Website]]: download, 
bug-tracking, mailing-lists, etc.


svn commit: r1137058 [2/2] - in /hadoop/common/site: common/ common/author/src/documentation/content/xdocs/ common/publish/ hdfs/ hdfs/author/src/documentation/content/xdocs/ hdfs/publish/ hdfs/publis

2011-06-17 Thread llu
Modified: hadoop/common/site/mapreduce/publish/version_control.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/common/site/mapreduce/publish/version_control.pdf?rev=1137058&r1=1137057&r2=1137058&view=diff
==
--- hadoop/common/site/mapreduce/publish/version_control.pdf (original)
+++ hadoop/common/site/mapreduce/publish/version_control.pdf Fri Jun 17 
22:36:09 2011
@@ -69,10 +69,10 @@ endobj
 >>
 endobj
 16 0 obj
-<< /Length 1605 /Filter [ /ASCII85Decode /FlateDecode ]
+<< /Length 1621 /Filter [ /ASCII85Decode /FlateDecode ]
  >>
 stream
 endstream
 endobj
 17 0 obj
@@ -156,10 +156,10 @@ endobj
 24 0 obj
 << /Type /Annot
 /Subtype /Link
-/Rect [ 90.0 524.532 348.96 512.532 ]
+/Rect [ 90.0 524.532 422.964 512.532 ]
 /C [ 0 0 0 ]
 /Border [ 0 0 0 ]
-/A << /URI (http://svn.apache.org/viewcvs.cgi/hadoop/mapreduce/)
+/A << /URI (http://svn.apache.org/viewcvs.cgi/hadoop/common/trunk/mapreduce/)
 /S /URI >>
 /H /I
 >>
@@ -167,10 +167,10 @@ endobj
 25 0 obj
 << /Type /Annot
 /Subtype /Link
-/Rect [ 279.648 472.198 524.94 460.198 ]
+/Rect [ 90.0 445.798 409.296 433.798 ]
 /C [ 0 0 0 ]
 /Border [ 0 0 0 ]
-/A << /URI (http://svn.apache.org/repos/asf/hadoop/mapreduce/)
+/A << /URI (http://svn.apache.org/repos/asf/hadoop/common/trunk/mapreduce/)
 /S /URI >>
 /H /I
 >>
@@ -178,7 +178,7 @@ endobj
 26 0 obj
 << /Type /Annot
 /Subtype /Link
-/Rect [ 303.288 458.998 323.94 446.998 ]
+/Rect [ 226.632 432.598 247.284 420.598 ]
 /C [ 0 0 0 ]
 /Border [ 0 0 0 ]
 /A << /URI (http://www.apache.org/dev/version-control.html#anon-svn)
@@ -189,10 +189,10 @@ endobj
 27 0 obj
 << /Type /Annot
 /Subtype /Link
-/Rect [ 250.656 406.664 500.616 394.664 ]
+/Rect [ 90.0 367.064 413.964 355.064 ]
 /C [ 0 0 0 ]
 /Border [ 0 0 0 ]
-/A << /URI (https://svn.apache.org/repos/asf/hadoop/mapreduce/)
+/A << /URI (https://svn.apache.org/repos/asf/hadoop/common/trunk/mapreduce/)
 /S /URI >>
 /H /I
 >>
@@ -200,7 +200,7 @@ endobj
 28 0 obj
 << /Type /Annot
 /Subtype /Link
-/Rect [ 297.288 393.464 317.94 381.464 ]
+/Rect [ 220.632 353.864 241.284 341.864 ]
 /C [ 0 0 0 ]
 /Border [ 0 0 0 ]
 /A << /URI (http://www.apache.org/dev/version-control.html#https-svn)
@@ 

svn commit: r1137065 - in /hadoop/common/trunk/common: CHANGES.txt src/test/core/org/apache/hadoop/conf/TestConfiguration.java

2011-06-17 Thread eli
Author: eli
Date: Fri Jun 17 22:53:10 2011
New Revision: 1137065

URL: http://svn.apache.org/viewvc?rev=1137065&view=rev
Log:
HADOOP-7402. TestConfiguration doesn't clean up after itself. Contributed by 
Aaron T. Myers

Modified:
hadoop/common/trunk/common/CHANGES.txt

hadoop/common/trunk/common/src/test/core/org/apache/hadoop/conf/TestConfiguration.java

Modified: hadoop/common/trunk/common/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/common/CHANGES.txt?rev=1137065&r1=1137064&r2=1137065&view=diff
==
--- hadoop/common/trunk/common/CHANGES.txt (original)
+++ hadoop/common/trunk/common/CHANGES.txt Fri Jun 17 22:53:10 2011
@@ -321,6 +321,8 @@ Trunk (unreleased changes)
 HADOOP-7377. Fix command name handling affecting DFSAdmin. (Daryn Sharp
 via mattf)
 
+HADOOP-7402. TestConfiguration doesn't clean up after itself. (atm via eli)
+
 Release 0.22.0 - Unreleased
 
   INCOMPATIBLE CHANGES

Modified: 
hadoop/common/trunk/common/src/test/core/org/apache/hadoop/conf/TestConfiguration.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/common/src/test/core/org/apache/hadoop/conf/TestConfiguration.java?rev=1137065&r1=1137064&r2=1137065&view=diff
==
--- 
hadoop/common/trunk/common/src/test/core/org/apache/hadoop/conf/TestConfiguration.java
 (original)
+++ 
hadoop/common/trunk/common/src/test/core/org/apache/hadoop/conf/TestConfiguration.java
 Fri Jun 17 22:53:10 2011
@@ -33,6 +33,7 @@ import java.util.regex.Pattern;
 import junit.framework.TestCase;
 import static org.junit.Assert.assertArrayEquals;
 
+import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.fs.Path;
 import org.codehaus.jackson.map.ObjectMapper; 
 
@@ -246,7 +247,12 @@ public class TestConfiguration extends T
 
   public void testGetLocalPath() throws IOException {
     Configuration conf = new Configuration();
-    conf.set("dirs", "a, b, c ");
+    String[] dirs = new String[]{"a", "b", "c"};
+    for (int i = 0; i < dirs.length; i++) {
+      dirs[i] = new Path(System.getProperty("test.build.data"), dirs[i])
+          .toString();
+    }
+    conf.set("dirs", StringUtils.join(dirs, ","));
     for (int i = 0; i < 1000; i++) {
       String localPath = conf.getLocalPath("dirs", "dir" + i).toString();
       assertTrue("Path doesn't end in specified dir: " + localPath,
@@ -258,7 +264,12 @@ public class TestConfiguration extends T
   
   public void testGetFile() throws IOException {
     Configuration conf = new Configuration();
-    conf.set("dirs", "a, b, c ");
+    String[] dirs = new String[]{"a", "b", "c"};
+    for (int i = 0; i < dirs.length; i++) {
+      dirs[i] = new Path(System.getProperty("test.build.data"), dirs[i])
+          .toString();
+    }
+    conf.set("dirs", StringUtils.join(dirs, ","));
     for (int i = 0; i < 1000; i++) {
       String localPath = conf.getFile("dirs", "dir" + i).toString();
       assertTrue("Path doesn't end in specified dir: " + localPath,




[Hadoop Wiki] Update of "Hive/Tutorial" by StevenWong

2011-06-17 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "Hive/Tutorial" page has been changed by StevenWong:
http://wiki.apache.org/hadoop/Hive/Tutorial?action=diff&rev1=37&rev2=38

  == What is Hive ==
  Hive is a data warehousing infrastructure based on Hadoop. Hadoop provides massive scale-out and fault-tolerance capabilities for data storage and processing (using the map-reduce programming paradigm) on commodity hardware.
  
- Hive is designed to enable easy data summarization, ad-hoc querying and 
analysis of large volumes of data. It provides a simple query language called 
Hive QL, which is based on SQL and which enables users familiar with SQL to do 
ad-hoc querying, summarization and data analysis easily. At the same time, Hive 
QL also allows traditional map/reduce programmers to be able to plug in their 
custom mappers and reducers to do more sophisticated analysis that may not be 
supported by the built-in capabilities of the language. 
+ Hive is designed to enable easy data summarization, ad-hoc querying and 
analysis of large volumes of data. It provides a simple query language called 
Hive QL, which is based on SQL and which enables users familiar with SQL to do 
ad-hoc querying, summarization and data analysis easily. At the same time, Hive 
QL also allows traditional map/reduce programmers to be able to plug in their 
custom mappers and reducers to do more sophisticated analysis that may not be 
supported by the built-in capabilities of the language.
  
  == What is NOT Hive ==
  Hadoop is a batch processing system, and Hadoop jobs tend to have high latency and incur substantial overheads in job submission and scheduling. As a result, latency for Hive queries is generally very high (minutes) even when the data sets involved are very small (say, a few hundred megabytes). Hive therefore cannot be compared with systems such as Oracle, where analyses are conducted on a significantly smaller amount of data but proceed much more iteratively, with response times between iterations of less than a few minutes. Hive aims to provide acceptable (but not optimal) latency for interactive data browsing, queries over small data sets, or test queries.
@@ -55, +55 @@

  . |→DOUBLE
   . |→BIGINT
. |→INT
-. |→TINYINT 
+. |→TINYINT
   . |→FLOAT
. |→INT
 . |→TINYINT
 . |→STRING
 . |→BOOLEAN
  
- 
  This type hierarchy defines how types are implicitly converted in the query language. Implicit conversion is allowed for types from a child to an ancestor: when a query expression expects type1 and the data is of type2, type2 is implicitly converted to type1 if type1 is an ancestor of type2 in the type hierarchy. Apart from these fundamental rules for implicit conversion based on the type system, Hive also allows the following special case of conversion:
  
   * STRING → DOUBLE
@@ -86, +85 @@

   * Relational Operators - The following operators compare the passed operands 
and generate a TRUE or FALSE value depending on whether the comparison between 
the operands holds or not.
  
  ||'''Relational Operator''' ||'''Operand types''' ||'''Description''' ||
- || ''???'' surely there are operators for equality and lack of equality? ||
+ ||''???'' surely there are operators for equality and lack of equality? ||
  ||A < B ||all primitive types ||TRUE if expression A is  less than expression 
B otherwise FALSE ||
  ||A <= B ||all primitive types ||TRUE if expression A is less than or equal 
to expression B otherwise FALSE ||
  ||A > B ||all primitive types ||TRUE if expression A is greater than 
expression B otherwise FALSE ||
@@ -112, +111 @@

  ||~A ||all number types ||Gives the result of bitwise NOT of A. The type of 
the result is the same as the type of A. ||
  
  
- 
- 
   * Logical Operators - The following operators provide support for creating 
logical expressions. All of them return boolean TRUE or FALSE depending upon 
the boolean values of the operands.
  
  ||''' Logical Operators ''' ||'''Operands types''' ||'''Description''' ||
@@ -124, +121 @@

  ||NOT A ||boolean ||TRUE if A is FALSE, otherwise FALSE ||
  ||! A ||boolean ||Same as NOT A ||
  
+ 
  * Operators on Complex Types - The following operators provide mechanisms to 
access elements in Complex Types
- 
  ||'''Operator''' ||'''Operand types''' ||'''Description''' ||
  ||A[n] ||A is an Array and n is an int ||returns the nth element in the array 
A. The first element has index 0 e.g. if A is an array comprising of ['foo', 
'bar'] then A[0] returns 'foo' and A[1] returns 'bar' ||
  ||M[key] ||M is a Map and key has type K ||returns the value 
corresponding to the key in the map e.g. if M is a map comprising of {'f' -> 
'foo', 'b' -> 'bar', 'all' -> 'foobar'} then M['all'] returns 'foobar' ||
@@ -155, +152 @@

  ||string ||regexp_replace(string A, string B, string C) ||returns the string 
resulting

[Hadoop Wiki] Update of "Hive/Tutorial" by StevenWong

2011-06-17 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "Hive/Tutorial" page has been changed by StevenWong:
http://wiki.apache.org/hadoop/Hive/Tutorial?action=diff&rev1=38&rev2=39

Comment:
Fix typo.

  ||string ||regexp_replace(string A, string B, string C) ||returns the string 
resulting from replacing all substrings in B that match the Java regular 
expression syntax(See 
[[http://java.sun.com/j2se/1.4.2/docs/api/java/util/regex/Pattern.html|Java 
regular expressions syntax]]) with C. For example, regexp_replace('foobar', 
'oo|ar', ) returns 'fb' ||
  ||int ||size(Map) ||returns the number of elements in the map type ||
  ||int ||size(Array) ||returns the number of elements in the array type ||
- || ||cast(expr as ) ||converts the results of the expression expr to  e.g. cast('1' as BIGINT) will convert the string '1' to its integral representation. A null is returned if the conversion does not succeed. ||
+ ||'''Expected "=" to follow "type"'''||cast(expr as ) ||converts the results of the expression expr to  e.g. cast('1' as BIGINT) will convert the string '1' to its integral representation. A null is returned if the conversion does not succeed. ||
  ||string ||from_unixtime(int unixtime) ||convert the number of seconds from 
unix epoch (1970-01-01 00:00:00 UTC) to a string representing the timestamp of 
that moment in the current system time zone in the format of "1970-01-01 
00:00:00" ||
  ||string ||to_date(string timestamp) ||Return the date part of a timestamp 
string: to_date("1970-01-01 00:00:00") = "1970-01-01" ||
  ||int ||year(string date) ||Return the year part of a date or a timestamp 
string: year("1970-01-01 00:00:00") = 1970, year("1970-01-01") = 1970 ||
@@ -461, +461 @@

   * Dynamic partition insert could potentially be a resource hog, in that it could generate a large number of partitions in a short time. To guard against this, we define three parameters (a usage sketch follows this section):
* '''hive.exec.max.dynamic.partitions.pernode''' (default value being 100) is the maximum number of dynamic partitions that can be created by each mapper or reducer. If one mapper or reducer creates more than the threshold, a fatal error will be raised from that mapper/reducer (through a counter) and the whole job will be killed.
* '''hive.exec.max.dynamic.partitions''' (default value being 1000) is the total number of dynamic partitions that can be created by one DML statement. If no single mapper/reducer exceeds its limit but the total number of dynamic partitions does, then an exception is raised at the end of the job before the intermediate data are moved to the final destination.
-   * '''hive.max.created.files''' (default value being 10) is the maximum total number of files created by all mappers and reducers. This is implemented by having each mapper/reducer update a Hadoop counter whenever a new file is created. If the total number exceeds hive.max.created.files, a fatal error will be thrown and the job will be killed.
+   * '''hive.exec.max.created.files''' (default value being 10) is the maximum total number of files created by all mappers and reducers. This is implemented by having each mapper/reducer update a Hadoop counter whenever a new file is created. If the total number exceeds hive.exec.max.created.files, a fatal error will be thrown and the job will be killed.
  
   * Another situation we want to protect against in dynamic partition insert is that the user may accidentally specify all partitions to be dynamic partitions without specifying any static partition, while the original intention is just to overwrite the sub-partitions of one root partition. We define another parameter, hive.exec.dynamic.partition.mode=strict, to prevent the all-dynamic-partition case. In strict mode, you have to specify at least one static partition. The default mode is strict. In addition, we have a parameter hive.exec.dynamic.partition=true/false to control whether to allow dynamic partitions at all. The default value is false.
   * In Hive 0.6, dynamic partition insert does not work with hive.merge.mapfiles=true or hive.merge.mapredfiles=true, so it internally turns off the merge parameters. Merging files in dynamic partition inserts is supported in Hive 0.7 (see JIRA HIVE-1307 for details).
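
As referenced above, a minimal sketch of a dynamic partition insert issued over JDBC, assuming a 2011-era Hive server and the org.apache.hadoop.hive.jdbc.HiveDriver driver; the table and column names follow the tutorial's page_view example but are otherwise hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DynamicPartitionInsert {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
    Connection con =
        DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
    Statement stmt = con.createStatement();
    // Allow dynamic partitions; "strict" (the default) would require at
    // least one static partition column in the INSERT.
    stmt.execute("SET hive.exec.dynamic.partition=true");
    stmt.execute("SET hive.exec.dynamic.partition.mode=nonstrict");
    // Caps described above, guarding against a runaway insert.
    stmt.execute("SET hive.exec.max.dynamic.partitions.pernode=100");
    stmt.execute("SET hive.exec.max.dynamic.partitions=1000");
    // The dynamic partition column (dt) is taken from the last SELECT column.
    stmt.execute("INSERT OVERWRITE TABLE page_view PARTITION (dt) "
        + "SELECT viewTime, userid, page_url, dt FROM page_view_staging");
    con.close();
  }
}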