On Apr 11, 5:45 am, [EMAIL PROTECTED] (Albe Laurenz) wrote:
Format the output.
For example, the 17408 in the query above is a result from the
first query.
If you had triggers, constraints, rules or indexes associated
with the table, or if the table INHERITed another table, you'd probably
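As an aside, if the 17408 above is a table's OID, it can be mapped back
to a relation name with a regclass cast. A minimal sketch, using just
the OID quoted in the post:

  -- resolve a pg_class OID to the relation's name
  SELECT 17408::regclass;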
Hi all, I'm trying to set up PITR on my PostgreSQL (I run version 8.1.11);
following the docs and tips from the list, this is what I did:
1) set up a crontab job which copies the last-created WAL file into
/home/postgres/WAL
2) set up a shell script as the archive_command: it copies WAL files
from
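For reference, the copying can be driven entirely by the server: it runs
archive_command once for each completed WAL segment, so no cron polling
is needed. A minimal postgresql.conf sketch, assuming the
/home/postgres/WAL directory from the post (in 8.1/8.2 a non-empty
archive_command is what turns archiving on; the separate archive_mode
setting only appeared in 8.3):

  # refuse to overwrite an existing file, as the docs recommend
  archive_command = 'test ! -f /home/postgres/WAL/%f && cp %p /home/postgres/WAL/%f'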
Jaisen N.D. wrote:
Hi, I use Debian Etch. I have a problem with postgresql 8.1. I have
uninstalled postgresql-8.3, which I had installed from Debian backports,
removed its configuration files, and also the user postgres.
It sounds like you didn't remove the data directory,
On Apr 12, 2008, at 7:11 AM, Jaisen N.D. wrote:
localhost:/home/user# su - postgres
[EMAIL PROTECTED]:~$ /usr/lib/postgresql/8.1/bin/initdb -D /var/
lib/postgresql/data
The files belonging to this database system will be owned by user
postgres.
This user must also own the server process.
I would like to create a rule that, by updating a view, allows me to update
one table and insert into another.
The following example illustrates what I'm trying to do:
--Create Tables
CREATE TABLE my_table
(
my_table_id serial,
a character varying(255),
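The quoted DDL is cut off by the digest, but the mechanism asked for is
an ON UPDATE rule on a view. A minimal sketch, assuming a hypothetical
log table my_log and a view over my_table (everything beyond my_table,
my_table_id and a is invented for illustration):

  CREATE TABLE my_log (
      my_log_id serial,
      my_table_id integer,
      old_a character varying(255)
  );

  CREATE VIEW my_view AS
      SELECT my_table_id, a FROM my_table;

  -- rewrite an UPDATE on the view into an UPDATE on my_table
  -- plus an INSERT into my_log
  CREATE RULE my_view_upd AS ON UPDATE TO my_view
  DO INSTEAD (
      UPDATE my_table SET a = NEW.a WHERE my_table_id = OLD.my_table_id;
      INSERT INTO my_log (my_table_id, old_a)
          VALUES (OLD.my_table_id, OLD.a)
  );

After this, UPDATE my_view SET a = 'x' WHERE my_table_id = 1 updates
my_table and logs the old value in my_log.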
Hello:
This is a newbie question as I have not used the product before. After
installation of PG 8.2 I noticed that I have 2 servers: postgresql database
server 8.2 (localhost:5432) and another one differing only in the name (the
second one is 8.3). The problem is that I did not request the
My premise is that someone will make mistakes in the PHP code and I'd like to
mitigate the effect of these mistakes.
- Prepared statements are the only bulletproof technique
- You can use a database abstraction layer (there are many
libraries for PHP). Fast to implement, all queries go
On Sat, Apr 12, 2008 at 03:05:41PM +, william wayne wrote:
This is a newbie question as I have not used the product before. After
installation of PG 8.2 I noticed that I have 2 servers: postgresql
database server 8.2 (localhost:5432) and another one differing only
in the name (the second
On Sat, 12 Apr 2008 11:11:48 -0400
Jonathan Bond-Caron [EMAIL PROTECTED] wrote:
My premise is that someone will make mistakes in the PHP code and
I'd like to mitigate the effect of these mistakes.
- Prepared statements are the only bulletproof technique
I'm not looking for something bullet
On Fri, Apr 11, 2008 at 04:48:03PM -0500, Erik Jones wrote:
On Apr 11, 2008, at 4:23 PM, Stefan Sturm wrote:
I don't want to kill them. So how can I find out what is locking them?
Is there a tool which shows me such information?
There is a system catalog view called pg_locks that has an
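To expand on that: pg_locks can be joined to pg_stat_activity to see
which session holds (or is waiting for) each lock. A minimal sketch for
8.x, where the activity view's columns are still procpid and
current_query (later releases renamed them to pid and query):

  -- one row per lock, with the query of the session involved
  SELECT l.locktype, l.relation::regclass AS relation,
         l.mode, l.granted, a.procpid, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid;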
Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
I may sound naive, but having a way to protect the DB from this kind
of injection looks like a common problem; I'd have thought there was
already a common solution.
Use prepared statements.
regards, tom lane
On Fri, Apr 11, 2008 at 2:54 PM, Craig Ringer
[EMAIL PROTECTED] wrote:
Pavan Deolasee wrote:
I wonder if it would make sense to add support for mounting a database in
*read-only* mode from multiple servers though. I am thinking about
data warehouse kind of operations where multiple servers can be
On Sat, Apr 12, 2008 at 11:00 PM, Dawid Kuroczko [EMAIL PROTECTED] wrote:
Not quite workable. Remember that table data is not always available on
the block device -- there are pages modified in the buffer cache (shared
memory), and other machines have no access to the other's shared
On Sat, 12 Apr 2008 12:39:38 -0400
Tom Lane [EMAIL PROTECTED] wrote:
Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
I may sound naive, but having a way to protect the DB from this
kind of injection looks like a common problem; I'd have thought
there was already a common solution.
Use prepared
On Sat, Apr 12, 2008 at 8:11 PM, Pavan Deolasee
[EMAIL PROTECTED] wrote:
On Sat, Apr 12, 2008 at 11:00 PM, Dawid Kuroczko [EMAIL PROTECTED] wrote:
Not quite workable. Remember that table data is not always available on
the block device -- there are pages modified in the buffer cache
On Fri, Apr 11, 2008 at 9:21 PM, Ivan Sergio Borgonovo
[EMAIL PROTECTED] wrote:
Is there a switch (PHP side or pg side) to prevent things like:
pg_query("select id from table1 where a=$i");
from becoming
pg_query("select id from table1 where a=1 and 1=1; do something
nasty; --");
So that
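For the record, the parameterized form of that query keeps the value of
$i out of the SQL text entirely, so nothing in it can terminate the
statement. A minimal sketch at the SQL level, using the names from the
post (in PHP, pg_query_params() or pg_prepare()/pg_execute() achieve
the same separation through the protocol):

  -- the parameter travels separately from the statement text,
  -- so its value is never parsed as SQL
  PREPARE get_id(integer) AS
      SELECT id FROM table1 WHERE a = $1;
  EXECUTE get_id(1);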
Ivan Sergio Borgonovo wrote:
Yeah... but how can I effectively enforce the policy that ALL input
will be passed through prepared statements?
Code reviews are about the only way to enforce this.
If I can't, and I doubt there is a system that will let me enforce
that policy at a reasonable
Pavan Deolasee wrote:
[...]
I am not suggesting one read-write and many read-only architecture. I am
rather suggesting all read-only systems. I would be interested in this
setup if I run large read-only queries on historical data and need easy
scalability. With read-only setup, you can easily
paul rivers wrote:
Ivan Sergio Borgonovo wrote:
Yeah... but how can I effectively enforce the policy that ALL input
will be passed through prepared statements?
Code reviews are about the only way to enforce this.
That's not entirely true - if you have a policy that says
paul rivers [EMAIL PROTECTED] writes:
If I can't, and I doubt there is a system that will let me enforce
that policy at a reasonable cost, why not provide a safety net that
will at least raise the bar for the attacker at a very cheap cost?
How do you do this? Disallow string concatenation
Gregory Stark wrote:
paul rivers [EMAIL PROTECTED] writes:
If I can't, and I doubt there is a system that will let me enforce
that policy at a reasonable cost, why not provide a safety net that
will at least raise the bar for the attacker at a very cheap cost?
How do you do this?
Pavan Deolasee [EMAIL PROTECTED] writes:
I am not suggesting one read-write and many read-only architecture. I am
rather suggesting all read-only systems. I would be interested in this
setup if I run large read-only queries on historical data and need easy
scalability. With read-only setup,
Ivan Sergio Borgonovo [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] wrote:
Use prepared statements.
Yeah... but how can I effectively enforce the policy that ALL input
will be passed through prepared statements?
Modify the PHP code (at whatever corresponds to the DBD layer)
to always
On Sat, 12 Apr 2008 20:25:36 +0100
Gregory Stark [EMAIL PROTECTED] wrote:
paul rivers [EMAIL PROTECTED] writes:
If I can't, and I doubt there is a system that will let me
enforce that policy at a reasonable cost, why not provide a
safety net that will at least raise the bar for the
I dumped the db with:
pg_dump -C -f dbname.dump
$ uname -a
Linux myhost.mydomain.net 2.6.22.14-72.fc6 #1 SMP Wed Nov 21 14:10:25
EST 2007 x86_64 x86_64 x86_64 GNU/Linux
Postgresql 8.2.7
And restored in another machine:
$ uname -a
Linux dkt 2.6.24.4-64.fc8 #1 SMP Sat Mar 29 09:54:46 EDT 2008
My mistake. Sorry for the noise.
Regards, Clodoaldo
2008/4/12, Clodoaldo [EMAIL PROTECTED]:
I dumped the db with:
pg_dump -C -f dbname.dump
$ uname -a
Linux myhost.mydomain.net 2.6.22.14-72.fc6 #1 SMP Wed Nov 21 14:10:25
EST 2007 x86_64 x86_64 x86_64 GNU/Linux
Postgresql 8.2.7
On Sat, Apr 12, 2008 at 11:06:42PM +0200, Ivan Sergio Borgonovo wrote:
But what about already written code that uses pg_query?
If you rewrite the database interface then it doesn't matter, the calls
to pg_query will end up being calls to prepare/execute underneath so
you'll have their protection.
paul rivers wrote:
Ivan Sergio Borgonovo wrote:
Yeah... but how can I effectively enforce the policy that ALL input
will be passed through prepared statements?
Code reviews are about the only way to enforce this.
(Note: I'm clueless about PHP, so I'm basing this on perl/python/etc):
I am running into problems vacuuming my larger tables. It seems that for
tables with more than 1 million rows, VACUUM just hangs. I could leave it
running for hours and it never comes to completion.
Things like copying the whole table to a temp table with bulk insert such as
(SELECT * INTO temp
Paragon [EMAIL PROTECTED] writes:
I am running into problems vacuuming my larger tables. It seems that for
tables with more than 1 million rows, VACUUM just hangs. I could leave it
running for hours and it never comes to completion.
Is it actually *doing* anything, like consuming CPU or I/O --
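One quick way to tell the difference is to look for ungranted locks
while the VACUUM runs (VACUUM VERBOSE also reports on each table and
index as it finishes them). A minimal sketch:

  -- if the VACUUM backend's pid shows up here, it is waiting
  -- on a lock rather than doing work
  SELECT pid, locktype, relation::regclass AS relation, mode
  FROM pg_locks
  WHERE NOT granted;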