[Hadoop Wiki] Update of Hive/AuthDev by HeYongqiang

2011-01-12 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The Hive/AuthDev page has been changed by HeYongqiang.
http://wiki.apache.org/hadoop/Hive/AuthDev?action=diff&rev1=6&rev2=7

--

  
  First, try the user name:
  
- first try to deny this access by looking up the deny tables by user name:
- 
- 1. If there is an entry in 'user' that denies this access, return DENY
- 
- 2. If there is an entry in 'db' that denies this access, return DENY
- 
- 3. If there is an entry in 'table' that denies this access, return DENY
- 
- 4. If there is an entry in 'column' that denies this access, return DENY
- 
- Perform the above steps for each group/role that the user belongs to.
- 
- If the deny check fails, go through all privilege levels with the user name:
- 
- 5. If there is an entry in 'user' that accepts this access, return ACCEPT
+ 1. If there is an entry in 'user' that accepts this access, return ACCEPT
  
- 6. If there is an entry in 'db' that accepts this access, return ACCEPT
+ 2. If there is an entry in 'db' that accepts this access, return ACCEPT
  
- 7. If there is an entry in 'table' that accepts this access, return ACCEPT
+ 3. If there is an entry in 'table' that accepts this access, return ACCEPT
  
- 8. If there is an entry in 'column' that accepts this access, return ACCEPT
+ 4. If there is an entry in 'column' that accepts this access, return ACCEPT
  
  Second, try the user's group/role names one by one until we get an ACCEPT.
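The lookup order described here can be sketched in a few lines (a hypothetical Python sketch, not Hive's actual implementation; `grants` is an assumed in-memory stand-in for the metastore privilege tables):

```python
# Hypothetical sketch of the privilege lookup order described above.
# `grants` maps (principal, level) -> set of privilege names; it is an
# assumed in-memory stand-in for the metastore tables, not Hive's API.

LEVELS = ["user", "db", "table", "column"]  # the four privilege levels

def check_access(grants, user, groups, priv):
    # First, try the user name at each privilege level.
    for level in LEVELS:
        if priv in grants.get((user, level), set()):
            return "ACCEPT"
    # Second, try the user's group/role names one by one.
    for group in groups:
        for level in LEVELS:
            if priv in grants.get((group, level), set()):
                return "ACCEPT"
    return "DENY"
```

For example, a grant recorded at the 'table' level for the user name is found in the first pass, while a grant held only by one of the user's groups is found in the second pass.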
  
@@ -387, +373 @@

  
  The authorization decision manager manages a set of authorization providers, each of which can vote to accept or deny an access; the decision manager makes the final decision. The combining policy can be vote based, deny on any -1, or accept on any +1. Each authorization provider decides whether to accept or deny an access based on its own information.
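The combining policies mentioned here (vote based, deny on any -1, accept on any +1) can be sketched as follows (hypothetical Python; the provider and policy names are illustrative, not Hive's actual interface):

```python
# Hypothetical sketch of a decision manager combining provider votes.
# Each provider is a callable returning +1 (accept), -1 (deny), or
# 0 (abstain); the policy names are illustrative, not Hive's API.

def decide(providers, access, policy="one_deny"):
    votes = [p(access) for p in providers]
    if policy == "one_deny":        # any -1 denies the access
        return "DENY" if -1 in votes else "ACCEPT"
    if policy == "one_accept":      # any +1 accepts the access
        return "ACCEPT" if 1 in votes else "DENY"
    if policy == "vote":            # accept if the vote sum is positive
        return "ACCEPT" if sum(votes) > 0 else "DENY"
    raise ValueError("unknown policy: %s" % policy)
```

Under "one_deny" a single dissenting provider blocks the access; under "one_accept" a single approving provider suffices; "vote" takes a simple tally.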
  
+ = 8. Metastore upgrade script for MySQL =
+ 
+ {{{
+ --
+ -- Table structure for table `ROLES`
+ --
+ 
+ DROP TABLE IF EXISTS `ROLES`;
+ CREATE TABLE `ROLES` (
+   `ROLE_ID` bigint(20) NOT NULL,
+   `CREATE_TIME` int(11) NOT NULL,
+   `OWNER_NAME` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `ROLE_NAME` varchar(128) character set latin1 collate latin1_bin default NULL,
+   PRIMARY KEY  (`ROLE_ID`),
+   UNIQUE KEY `ROLEENTITYINDEX` (`ROLE_NAME`)
+ ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+ 
+ 
+ --
+ -- Table structure for table `ROLE_MAP`
+ --
+ 
+ DROP TABLE IF EXISTS `ROLE_MAP`;
+ CREATE TABLE `ROLE_MAP` (
+   `ROLE_GRANT_ID` bigint(20) NOT NULL,
+   `ADD_TIME` int(11) NOT NULL,
+   `GRANT_OPTION` smallint(6) NOT NULL,
+   `GRANTOR` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `GRANTOR_TYPE` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `PRINCIPAL_NAME` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `PRINCIPAL_TYPE` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `ROLE_ID` bigint(20) default NULL,
+   PRIMARY KEY  (`ROLE_GRANT_ID`),
+   UNIQUE KEY `USERROLEMAPINDEX` (`PRINCIPAL_NAME`,`ROLE_ID`,`GRANTOR`,`GRANTOR_TYPE`),
+   KEY `ROLE_MAP_N49` (`ROLE_ID`),
+   CONSTRAINT `ROLE_MAP_FK1` FOREIGN KEY (`ROLE_ID`) REFERENCES `ROLES` (`ROLE_ID`)
+ ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+ 
+ --
+ -- Table structure for table `GLOBAL_PRIVS`
+ --
+ 
+ DROP TABLE IF EXISTS `GLOBAL_PRIVS`;
+ CREATE TABLE `GLOBAL_PRIVS` (
+   `USER_GRANT_ID` bigint(20) NOT NULL,
+   `CREATE_TIME` int(11) NOT NULL,
+   `GRANT_OPTION` smallint(6) NOT NULL,
+   `GRANTOR` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `GRANTOR_TYPE` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `PRINCIPAL_NAME` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `PRINCIPAL_TYPE` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `USER_PRIV` varchar(128) character set latin1 collate latin1_bin default NULL,
+   PRIMARY KEY  (`USER_GRANT_ID`),
+   UNIQUE KEY `GLOBALPRIVILEGEINDEX` (`PRINCIPAL_NAME`,`PRINCIPAL_TYPE`,`USER_PRIV`,`GRANTOR`,`GRANTOR_TYPE`)
+ ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
+ 
+ 
+ --
+ -- Table structure for table `DB_PRIVS`
+ --
+ 
+ DROP TABLE IF EXISTS `DB_PRIVS`;
+ CREATE TABLE `DB_PRIVS` (
+   `DB_GRANT_ID` bigint(20) NOT NULL,
+   `CREATE_TIME` int(11) NOT NULL,
+   `DB_ID` bigint(20) default NULL,
+   `GRANT_OPTION` smallint(6) NOT NULL,
+   `GRANTOR` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `GRANTOR_TYPE` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `PRINCIPAL_NAME` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `PRINCIPAL_TYPE` varchar(128) character set latin1 collate latin1_bin default NULL,
+   `DB_PRIV` varchar(128) character set latin1 collate latin1_bin default NULL,
+   PRIMARY KEY  (`DB_GRANT_ID`),
+   UNIQUE KEY `DBPRIVILEGEINDEX` 

svn commit: r1058343 - in /hadoop/common/trunk: CHANGES.txt src/java/core-default.xml src/java/org/apache/hadoop/io/SequenceFile.java

2011-01-12 Thread shv
Author: shv
Date: Wed Jan 12 22:42:25 2011
New Revision: 1058343

URL: http://svn.apache.org/viewvc?rev=1058343&view=rev
Log:
HADOOP-7102. Remove fs.ramfs.impl field from core-deafult.xml. Contributed by 
Konstantin Shvachko.

Modified:
hadoop/common/trunk/CHANGES.txt
hadoop/common/trunk/src/java/core-default.xml
hadoop/common/trunk/src/java/org/apache/hadoop/io/SequenceFile.java

Modified: hadoop/common/trunk/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/CHANGES.txt?rev=1058343&r1=1058342&r2=1058343&view=diff
==
--- hadoop/common/trunk/CHANGES.txt (original)
+++ hadoop/common/trunk/CHANGES.txt Wed Jan 12 22:42:25 2011
@@ -258,6 +258,8 @@ Release 0.22.0 - Unreleased
 HADOOP-6811. Remove EC2 bash scripts. They are replaced by Apache Whirr
 (incubating, http://incubator.apache.org/whirr). (tomwhite)
 
+HADOOP-7102. Remove fs.ramfs.impl field from core-deafult.xml (shv)
+
   OPTIMIZATIONS
 
 HADOOP-6884. Add LOG.isDebugEnabled() guard for each LOG.debug(..).

Modified: hadoop/common/trunk/src/java/core-default.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/core-default.xml?rev=1058343&r1=1058342&r2=1058343&view=diff
==
--- hadoop/common/trunk/src/java/core-default.xml (original)
+++ hadoop/common/trunk/src/java/core-default.xml Wed Jan 12 22:42:25 2011
@@ -298,12 +298,6 @@
 </property>
 
 <property>
-  <name>fs.ramfs.impl</name>
-  <value>org.apache.hadoop.fs.InMemoryFileSystem</value>
-  <description>The FileSystem for ramfs: uris.</description>
-</property>
-
-<property>
   <name>fs.har.impl</name>
   <value>org.apache.hadoop.fs.HarFileSystem</value>
   <description>The filesystem for Hadoop archives. </description>

Modified: hadoop/common/trunk/src/java/org/apache/hadoop/io/SequenceFile.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/trunk/src/java/org/apache/hadoop/io/SequenceFile.java?rev=1058343&r1=1058342&r2=1058343&view=diff
==
--- hadoop/common/trunk/src/java/org/apache/hadoop/io/SequenceFile.java 
(original)
+++ hadoop/common/trunk/src/java/org/apache/hadoop/io/SequenceFile.java Wed Jan 
12 22:42:25 2011
@@ -3371,9 +3371,6 @@ public class SequenceFile {
   public boolean nextRawKey() throws IOException {
 if (in == null) {
   int bufferSize = getBufferSize(conf); 
-      if (fs.getUri().getScheme().startsWith("ramfs")) {
-        bufferSize = conf.getInt("io.bytes.per.checksum", 512);
-      }
   Reader reader = new Reader(conf,
  Reader.file(segmentPathName), 
  Reader.bufferSize(bufferSize),




svn commit: r1058344 - in /hadoop/common/branches/branch-0.22: CHANGES.txt src/java/core-default.xml src/java/org/apache/hadoop/io/SequenceFile.java

2011-01-12 Thread shv
Author: shv
Date: Wed Jan 12 22:48:30 2011
New Revision: 1058344

URL: http://svn.apache.org/viewvc?rev=1058344&view=rev
Log:
HADOOP-7102. Merge -r 1058342:1058343 from trunk to branch 0.22.

Modified:
hadoop/common/branches/branch-0.22/CHANGES.txt
hadoop/common/branches/branch-0.22/src/java/core-default.xml

hadoop/common/branches/branch-0.22/src/java/org/apache/hadoop/io/SequenceFile.java

Modified: hadoop/common/branches/branch-0.22/CHANGES.txt
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.22/CHANGES.txt?rev=1058344&r1=1058343&r2=1058344&view=diff
==
--- hadoop/common/branches/branch-0.22/CHANGES.txt (original)
+++ hadoop/common/branches/branch-0.22/CHANGES.txt Wed Jan 12 22:48:30 2011
@@ -199,6 +199,8 @@ Release 0.22.0 - Unreleased
 HADOOP-6811. Remove EC2 bash scripts. They are replaced by Apache Whirr
 (incubating, http://incubator.apache.org/whirr). (tomwhite)
 
+HADOOP-7102. Remove fs.ramfs.impl field from core-deafult.xml (shv)
+
   OPTIMIZATIONS
 
 HADOOP-6884. Add LOG.isDebugEnabled() guard for each LOG.debug(..).

Modified: hadoop/common/branches/branch-0.22/src/java/core-default.xml
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.22/src/java/core-default.xml?rev=1058344&r1=1058343&r2=1058344&view=diff
==
--- hadoop/common/branches/branch-0.22/src/java/core-default.xml (original)
+++ hadoop/common/branches/branch-0.22/src/java/core-default.xml Wed Jan 12 
22:48:30 2011
@@ -298,12 +298,6 @@
 </property>
 
 <property>
-  <name>fs.ramfs.impl</name>
-  <value>org.apache.hadoop.fs.InMemoryFileSystem</value>
-  <description>The FileSystem for ramfs: uris.</description>
-</property>
-
-<property>
   <name>fs.har.impl</name>
   <value>org.apache.hadoop.fs.HarFileSystem</value>
   <description>The filesystem for Hadoop archives. </description>

Modified: 
hadoop/common/branches/branch-0.22/src/java/org/apache/hadoop/io/SequenceFile.java
URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.22/src/java/org/apache/hadoop/io/SequenceFile.java?rev=1058344&r1=1058343&r2=1058344&view=diff
==
--- 
hadoop/common/branches/branch-0.22/src/java/org/apache/hadoop/io/SequenceFile.java
 (original)
+++ 
hadoop/common/branches/branch-0.22/src/java/org/apache/hadoop/io/SequenceFile.java
 Wed Jan 12 22:48:30 2011
@@ -3371,9 +3371,6 @@ public class SequenceFile {
   public boolean nextRawKey() throws IOException {
 if (in == null) {
   int bufferSize = getBufferSize(conf); 
-      if (fs.getUri().getScheme().startsWith("ramfs")) {
-        bufferSize = conf.getInt("io.bytes.per.checksum", 512);
-      }
   Reader reader = new Reader(conf,
  Reader.file(segmentPathName), 
  Reader.bufferSize(bufferSize),




svn commit: r1058384 [2/2] - in /hadoop/site: author/src/documentation/ author/src/documentation/content/xdocs/ publish/

2011-01-12 Thread nigel
Modified: hadoop/site/publish/who.pdf
URL: 
http://svn.apache.org/viewvc/hadoop/site/publish/who.pdf?rev=1058384&r1=1058383&r2=1058384&view=diff
==
--- hadoop/site/publish/who.pdf (original)
+++ hadoop/site/publish/who.pdf Thu Jan 13 01:36:00 2011
[binary diff of who.pdf (ASCII85-encoded PDF streams) omitted]
 

svn commit: r1058408 - /hadoop/common/branches/branch-0.20-security/

2011-01-12 Thread acmurthy
Author: acmurthy
Date: Thu Jan 13 05:04:14 2011
New Revision: 1058408

URL: http://svn.apache.org/viewvc?rev=1058408&view=rev
Log:
Branching to merge Yahoo! patchset to Apache

Added:
hadoop/common/branches/branch-0.20-security/   (props changed)
  - copied from r1058407, hadoop/common/branches/branch-0.20/

Propchange: hadoop/common/branches/branch-0.20-security/
--
--- svn:ignore (added)
+++ svn:ignore Thu Jan 13 05:04:14 2011
@@ -0,0 +1,7 @@
+build
+logs
+.classpath
+.project
+.settings
+
+.externalToolBuilders

Propchange: hadoop/common/branches/branch-0.20-security/
--
--- svn:mergeinfo (added)
+++ svn:mergeinfo Thu Jan 13 05:04:14 2011
@@ -0,0 +1,4 @@
+/hadoop/common/trunk:910709
+/hadoop/core/branches/branch-0.19:713112
+/hadoop/core/trunk:727001,727117,727191,727212,727217,727228,727255,727869,728187,729052,729987,732385,732572,732613,732777,732838,732869,733887,734870,734916,736426,738328,738697,740077,740157,741703,741762,743745,743816,743892,744894,745180,746010,746206,746227,746233,746274,746338,746902-746903,746925,746944,746968,746970,747279,747289,747802,748084,748090,748783,749262,749318,749863,750533,752073,752609,752834,752836,752913,752932,753112-753113,753346,754645,754847,754927,755035,755226,755348,755370,755418,755426,755790,755905,755938,755960,755986,755998,756352,757448,757624,757849,758156,758180,759398,759932,760502,760783,761046,761482,761632,762216,762879,763107,763502,764967,765016,765809,765951,771607,771661,772844,772876,772884,772920,773889,776638,778962,778966,779893,781720,784661,785046,785569
+/hadoop/hdfs/trunk:882708




[Hadoop Wiki] Update of QuickStart by DavidBeckwith

2011-01-12 Thread Apache Wiki
Dear Wiki user,

You have subscribed to a wiki page or wiki category on Hadoop Wiki for change 
notification.

The QuickStart page has been changed by DavidBeckwith.
http://wiki.apache.org/hadoop/QuickStart?action=diff&rev1=21&rev2=22

--

  Congratulations, you have just successfully run your first MapReduce with Hadoop.
  
  == Stage 2: Pseudo-distributed Configuration ==
- You can in fact run everything on a single host. To run things this way, put the following in `conf/hadoop-site.xml`
+ You can in fact run everything on a single host. To run things this way, put the following in `conf/hdfs-site.xml` (`conf/hadoop-site.xml` in versions < 0.20)
  {{{
  <configuration>