[HACKERS] Coding style guide

2011-02-17 Thread Daniel Loureiro
Is there any official style guide for PostgreSQL code? Something like the
Google style guide
(http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml)?

Regards,
--
Daniel Loureiro



Re: [HACKERS] Anyone for SSDs?

2010-12-11 Thread Daniel Loureiro
 You can believe whatever you want, that doesn't make it true.
Completely agree. Like yours, it's just my point of view, not reality.

I agree with most points here, but I wonder how many good ideas are
killed by the thought: this will be a performance killer with so
much random access, let's discard it. If in the '80s sequential
access had been more expensive than random access (ok, that is not
exactly the SSD case), would PostgreSQL have the same design that it
has nowadays?

--
Daniel Loureiro.



Re: [HACKERS] Anyone for SSDs?

2010-12-10 Thread Daniel Loureiro
 Most of you already know I am new to this list and newer to any OSS
 development. However, while browsing the source code (of 9.0.1) I find
 that there is only one way to store relations on disk - the magnetic
 disk.

The fact that it's called md.c is a hangover from the '80s.  These days,
the logic that the Berkeley guys envisioned being at that code level
is generally in kernel device drivers.  md.c can drive anything that
behaves as a block device + filesystem, which is pretty much everything
of interest.

I believe that PostgreSQL has been developed and optimized for
sequential access. To take full advantage of SSDs it would be necessary
to rewrite almost the whole project - there is so much code written
with the sequential-access mechanism in mind.

--
Daniel Loureiro



Re: [HACKERS] Anyone for SSDs?

2010-12-10 Thread Daniel Loureiro
 You can believe whatever you want, that doesn't make it true.
Completely agree. Like yours, it's just my point of view, not reality.

I agree with some points here, but I wonder how many good ideas are
killed by the thought: this will be a performance killer with so
much random access, let's discard it. A quicksort over data on disk
is just awful to contemplate in a non-SSD world, but it becomes
feasible on an SSD.

If in the '80s sequential access had been more expensive than random
access, would PostgreSQL have the same design that it has nowadays?

--
Daniel Loureiro



Re: [HACKERS] DELETE with LIMIT (or my first hack)

2010-12-01 Thread Daniel Loureiro
It's pretty clear to me that there are two different needs here, both linked
to DELETE/UPDATE behavior.

A) a MySQL-like feature which will DELETE/UPDATE just K tuples
B) a feature to protect the database in case the DBA forgets the WHERE
clause

I think that the first feature is pretty reasonable for many reasons - some
of them listed below, not in order of importance (a workaround sketch in
plain SQL follows the list):
 1) MySQL compatibility: it makes porting between the two easier
 2) speed: why scan the whole table if it is expected to affect just one row?
 3) the possibility of batching operations (paginating UPDATE/DELETE)
 4) ease of use in some operations (like deleting the row with the highest Y
field): this also needs ORDER BY to be implemented
 5) some other independent (and possibly weird) needs that I forget
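
For reference, the usual workaround today drives the DELETE through a ctid
subquery; this is only a sketch, and the table foo, column y, and the
some_condition predicate are made up for illustration:

  -- delete at most one row: the matching row with the highest y
  DELETE FROM foo
   WHERE ctid IN (SELECT ctid FROM foo
                   WHERE some_condition
                   ORDER BY y DESC
                   LIMIT 1);

  -- or delete in batches of 1000, re-running the statement
  -- until it reports 0 affected rows
  DELETE FROM foo
   WHERE ctid IN (SELECT ctid FROM foo WHERE some_condition LIMIT 1000);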

The second feature is something to make PostgreSQL more secure: in
other words, armor against the DBA. The syntax would maybe be something like
DELETE ... ASSERT 1, or an explicit keyword for it. So,
the mechanism would be to give an error and roll back if the command affects
more than the specified number of tuples. IMHO this is very weird syntax and very
non-standard SQL, so I believe it is not such a necessary feature. OK, I know
that I started this discussion (around this weird feature, not the first and
reasonable feature), but it was good to instigate other thoughts.
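
Something close to that behavior can already be approximated with a DO
block; the following is only a sketch of the intended semantics, with a
hypothetical table foo and predicate some_condition:

  DO $$
  DECLARE
      affected bigint;
  BEGIN
      DELETE FROM foo WHERE some_condition;
      GET DIAGNOSTICS affected = ROW_COUNT;
      IF affected > 1 THEN
          -- the unhandled exception aborts the statement,
          -- so the DELETE above is rolled back
          RAISE EXCEPTION 'expected at most 1 row, deleted %', affected;
      END IF;
  END;
  $$;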

Regards,
--
Daniel Loureiro


2010/11/30 Bruce Momjian br...@momjian.us

 Daniel Loureiro wrote:
   3. This doesn't work tremendously well for inheritance trees, where
   ModifyTable acts as sort of an implicit Append node.  You can't just
   funnel all the tuples through one Sort or Limit node because they
 aren't
   all the same rowtype.  (Limit might perhaps not care, but Sort will.)
   But you can't have a separate Sort/Limit for each table either, because
   that would give the wrong behavior.  Another problem with funneling all
   the rows through one Sort/Limit is that ModifyTable did need to know
   which table each row came from, so it can apply the modify to the right
   table.
 
  So I guess that I chose the wrong hack to start with.
 
  Just out of curiosity, why is the result of the WHERE filter (in
  SELECT/DELETE/UPDATE) not put in memory, i.e. an array of ctids, like a
  buffer, and then processed by the SELECT/DELETE/UPDATE all at once?

 Informix dbaccess would prompt a user for confirmation if it saw a
 DELETE with no WHERE.

 --
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] DELETE with LIMIT (or my first hack)

2010-11-30 Thread Daniel Loureiro
 3. This doesn't work tremendously well for inheritance trees, where
 ModifyTable acts as sort of an implicit Append node.  You can't just
 funnel all the tuples through one Sort or Limit node because they aren't
 all the same rowtype.  (Limit might perhaps not care, but Sort will.)
 But you can't have a separate Sort/Limit for each table either, because
 that would give the wrong behavior.  Another problem with funneling all
 the rows through one Sort/Limit is that ModifyTable did need to know
 which table each row came from, so it can apply the modify to the right
 table.

So I guess that I chose the wrong hack to start with.

Just out of curiosity, why is the result of the WHERE filter (in
SELECT/DELETE/UPDATE) not put in memory, i.e. an array of ctids, like a
buffer, and then processed by the SELECT/DELETE/UPDATE all at once?
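
At the SQL level the idea can be mimicked by materializing the matching
ctids first; foo and some_condition below are hypothetical, and note that
ctids are only stable in the absence of concurrent updates:

  -- collect the matching ctids into an array, then delete exactly those rows
  DELETE FROM foo
   WHERE ctid = ANY (ARRAY(SELECT ctid FROM foo WHERE some_condition));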

Greets,
--
Daniel Loureiro


Re: [HACKERS] DELETE with LIMIT (or my first hack)

2010-11-30 Thread Daniel Loureiro
To me the key is security - it's an anti-DBA-with-lack-of-attention feature.
If I forget the WHERE clause, I will still delete some valid tuples and
mess up the db, but that is less bad than wiping out the whole table. A DBA
who has never forgotten a WHERE in a DELETE is not a DBA. Just kidding, but
this happens often enough.

Is there another option to implement this? Could it be done by a
plugin/extension (in a Firefox-browser style)?

Regards,
--
Daniel Loureiro
--

2010/11/30 Andrew Dunstan and...@dunslane.net



 On 11/30/2010 09:57 AM, Csaba Nagy wrote:


 So it is really an ideological thing and not lack of demand or
 implementation attempts... I for myself can't write working C code
 anyway, so I made my peace with the workaround - I wish you good luck
 arguing Tom :-)




 We need a convincing use case for it. So far the only one that's seemed at
 all convincing to me is the one about deleting in batches. But that might be
 enough.

 As for it being illogical, I don't think it's any more so than

 DELETE FROM foo WHERE random() < 0.1;

 and you can do that today.

 cheers

 andrew




[HACKERS] DELETE with LIMIT (or my first hack)

2010-11-29 Thread Daniel Loureiro
Hi,

Frequently I have accidents with DELETE/UPDATE commands. In fact, sometimes
over the last 8 or 9 years (ok, a lot of times) I have forgotten the entire
WHERE clause, or had a “not so perfect“ WHERE clause, with an awful surprise.
There are no words to describe the horror every time I see that the number of
affected rows is not one or two as expected, but the entire table. So I
planned to make a hack to make the “LIMIT” directive available to the “DELETE”
command.

So, can anyone help me with how to do this? This is my plan: 1) change the
lex grammar (where is the file?), 2) change the parser to accept the new
grammar, 3) change the executor to stop after “n” successful iterations. Is
this correct?

Greets,
--

Daniel Loureiro
--
http://diffcoder.blogspot.com/


Re: [HACKERS] DELETE with LIMIT (or my first hack)

2010-11-29 Thread Daniel Loureiro
Good point. But when you use LIMIT in a SELECT statement you WANT n RANDOM
tuples - is it wrong to get RANDOM tuples? So, by the same logic, is it wrong
to delete n random tuples? Besides, if you want to DELETE just one tuple, why
does the executor have to scan the entire table instead of just stopping after
finding that one tuple? Why should the LIMIT clause be used to speed up only
SELECT statements? If the programmer knows the expected number of affected
rows, why not use it to speed up DELETE/UPDATE?

cheers,
--
Daniel Loureiro
http://diffcoder.blogspot.com/

2010/11/30 Jaime Casanova ja...@2ndquadrant.com

 On Mon, Nov 29, 2010 at 9:08 PM, Daniel Loureiro loureir...@gmail.com
 wrote:
 
  3) change the executor to stop after “n” successful iterations. Is
  this correct ?
 

 no. It means you will delete the first n tuples that happen to be
 found; if you don't have a WHERE clause it is very possible that
 you will delete something you don't want to... the correct solution is to
 always run DELETEs inside transactions and only issue a COMMIT once you
 see the right thing happening.

 besides, I think this has been proposed and rejected before

 --
 Jaime Casanova www.2ndQuadrant.com
 Professional PostgreSQL: Soporte y capacitación de PostgreSQL
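
A minimal sketch of the transaction-wrapping habit Jaime describes, with a
hypothetical table foo:

  BEGIN;
  DELETE FROM foo WHERE id = 42;
  -- check the reported row count (and SELECT, if needed) before deciding
  COMMIT;    -- or ROLLBACK; if more rows than expected were affected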