Greg Sabino Mullane wrote:
So, my doubt is: if the return type is int instead of unsigned int,
is this function testable for negative return values?
A quick glance at the code in fe-exec.c and fe-protocol3.c shows that
the underlying variable starts at 0 as an int and is incremented by
Hi,
I just fetched the newest 8.3 from CVS head, compiled it, and ran it.
When I set logging to stderr and enter a query with an error, I get this
information in the logs:
ERROR: subquery in FROM must have an alias
HINT: For example, FROM (SELECT ...) [AS] foo.
STATEMENT: select count(*) from (select x from q order by
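As the HINT says, the fix is simply to name the subquery. A minimal sketch
(completing the truncated query with an illustrative column):

```sql
-- Fails: subquery in FROM must have an alias.
-- select count(*) from (select x from q order by x);

-- Works: the subquery is named; the AS keyword is optional.
select count(*) from (select x from q order by x) as sub;
```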
Hello All,
I have installed pgpoolAdmin on a Linux box. At first the pgpool configuration
pages work properly; after that, when I try to log in, nothing happens.
Following is the error in the apache server log
PHP Warning: fetch(templates_c/%%6A^6A5^6A537DD8%%login.tpl.php): failed to
open stream:
Hi,
Is it possible to extract data directly from the data files?
I have two tablespaces one for data and one for indexes.
After filesystem crash I lost my /var/lib/postgresql/data folder :( All
data is in /lost+found :(. I found folders with data and index tablespaces
that look ok.
Is it possible to
Hi Ashish,
Looks like a smarty issue and not a pgpooladmin issue. Check your smarty
global variables for folder paths.
Regards,
Moiz Kothari
--
Hobby Site : http://dailyhealthtips.blogspot.com
On 9/26/07, Ashish Karalkar [EMAIL PROTECTED] wrote:
Hello All,
I have installed pgpoolAdmin on
Hi,
I was using PostgreSQL in version postgresql-8.2.4-1-binaries-no-installer.zip
under Windows. I did the following:
1. I unzipped PostgreSQL into D:\PostgreSQL and created directory named
database inside.
2. I executed (on the non-administrator account postgres): initdb -D
D:\PostgreSQL\database
Michael Glaesemann wrote:
On Sep 25, 2007, at 17:30 , Alvaro Herrera wrote:
Michael Glaesemann wrote:
select dom_id,
dom_name,
usr_count
from domains
natural join (select usr_dom_id as dom_id,
count(usr_dom_id) as usr_count
from
Madison Kelly wrote:
Thanks for your reply!
Unfortunately, in both cases I get the error:
nmc=> SELECT dom_id, dom_name, COUNT(usr_dom_id) AS usr_count FROM domains
JOIN users ON (usr_dom_id=dom_id) HAVING COUNT (usr_dom_id) > 0 ORDER BY
dom_name;
ERROR: syntax error at or near "COUNT"
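For the record, a form of the query that should parse: an aggregate in the
select list needs a matching GROUP BY (a sketch against the quoted table names):

```sql
SELECT dom_id, dom_name, COUNT(usr_dom_id) AS usr_count
FROM domains
JOIN users ON (usr_dom_id = dom_id)
GROUP BY dom_id, dom_name
-- With an inner join every group has at least one user, so this
-- HAVING is redundant, but it matches the original intent:
HAVING COUNT(usr_dom_id) > 0
ORDER BY dom_name;
```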
Hi there,
I have some global national statistical data sets.
The table design is like this for each variable:
name          2001  2002  2003  2004  2005
------------------------------------------
Afghanistan
Albania
I would like to
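The question is cut off above, but the usual advice for one-column-per-year
layouts is to normalize to one row per (name, year) pair; a sketch with
hypothetical names, in case that is where this was going:

```sql
CREATE TABLE stat_value (
    country text,
    year    integer,
    value   numeric,
    PRIMARY KEY (country, year)
);

-- Queries then keep the same shape no matter how many years exist:
SELECT value FROM stat_value WHERE country = 'Albania' AND year = 2003;
```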
Hi friends,
I am new to this group and need some help.
I want to fetch a lakh (100,000) of records from the database, but it
is taking too much time. What can I do about this?
Thanks in advance.
Madison Kelly [EMAIL PROTECTED] writes:
SELECT d.dom_id, d.dom_name FROM domains d WHERE (SELECT COUNT(*) FROM users u
WHERE u.usr_dom_id=d.dom_id) > 0 ORDER BY d.dom_name ASC;
Which gives me just the domains with at least one user under them, but not
the count. This is not ideal, and I
Alvaro Herrera wrote:
Madison Kelly wrote:
Thanks for your reply!
Unfortunately, in both cases I get the error:
nmc=> SELECT dom_id, dom_name, COUNT(usr_dom_id) AS usr_count FROM domains
JOIN users ON (usr_dom_id=dom_id) HAVING COUNT (usr_dom_id) > 0 ORDER BY
dom_name;
ERROR: syntax
On Sep 26, 2007, at 7:41 , Madison Kelly wrote:
Unfortunately, in both cases I get the error:
Um, the two cases could not be giving the same error as they don't
both contain the syntax that the error is complaining about: the
first case uses count in a subquery so it couldn't throw
Gregory Stark wrote:
Madison Kelly [EMAIL PROTECTED] writes:
SELECT d.dom_id, d.dom_name FROM domains d WHERE (SELECT COUNT(*) FROM users u
WHERE u.usr_dom_id=d.dom_id) > 0 ORDER BY d.dom_name ASC;
Which gives me just the domains with at least one user under them, but not
the count. This is
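A LEFT JOIN variant returns both names and counts in one pass, and keeps
zero-user domains as well (a sketch against the quoted table names; drop the
LEFT to hide empty domains):

```sql
SELECT d.dom_id, d.dom_name, COUNT(u.usr_dom_id) AS usr_count
FROM domains d
LEFT JOIN users u ON u.usr_dom_id = d.dom_id
GROUP BY d.dom_id, d.dom_name
ORDER BY d.dom_name;
```

COUNT(u.usr_dom_id) counts only non-null values, so domains with no users
report 0 rather than 1.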
Bruce Momjian [EMAIL PROTECTED] writes:
Greg Sabino Mullane wrote:
There may be some other safeguards in place I did not see to prevent this,
but I don't see a reason why we shouldn't use unsigned int or
unsigned long int here, both for ntups and the return value of the
function.
On
On 9/26/07, Martin Bednář [EMAIL PROTECTED] wrote:
Hi,
It's possible to extract data directly from data files ?
I have two tablespaces one for data and one for indexes.
After filesystem crash I lost my /var/lib/postgresql/data folder :( All
data is in /lost+found :(, I found folders with
Just wondering - when using a newer pg_dump to dump from an older
Postgres, does pg_dump automatically generate INSERT statements for the
data rather than using COPY?
I noticed this today when transferring data to a newer server - pg_dump
generated INSERTs although I didn't ask for them. Not
Hi everyone,
I would like to know the best way to implement a DAG in PostgreSQL. I
understand there has been some talk of recursive queries, and I'm
wondering if there has been much progress on this.
Are there any complete examples of DAGs which work with PostgreSQL? I
would like to be able to
Hi,
We suddenly stumbled upon duplicate entities. Some of our databases
ended up with two 'public' schemas and several duplicate user tables
(sharing the same oid).
After checking through the logs, it doesn't appear to be a problem
resulting from wrap-around OID's. Though the logs mention
Tom Lane wrote:
Alessandra Bilardi [EMAIL PROTECTED] writes:
ERROR: could not write to hash-join temporary file: No space left on
device
Check your queries. I suspect you've written an incorrectly constrained
join that is producing many more rows than you expect.
I have a database schema which has a central table with several
others depending on it. The dependent tables all have foreign key
constraints with ON DELETE CASCADE so that I can remove tuples from
the central table and have the dependent rows removed automatically.
This all works, but it's very
I'm stuck trying to tune a big-ish postgres db and wondering if anyone
has any pointers.
I cannot get Postgres to make good use of plenty of available RAM and
stop thrashing the disks.
One main table. ~30 million rows, 20 columns all integer, smallint or
char(2). Most have an index. It's a
Hi folks,
sorry, I cannot get it right and have to ask now.
I manually compiled PostgreSQL on my Kubuntu machine to /usr/local/opt/
pgsql and did all this stuff like creating a postgres user and I
have a startup script in /etc/init.d.
But if I try to start PostgreSQL by running sudo
On Sep 25, 11:02 am, [EMAIL PROTECTED] (Alvaro Herrera)
wrote:
Dan99 wrote:
Hi,
I found out this morning that I cannot get pg_dump to work at all on
my database. It refuses to create a dump and instead just freezes.
When using the verbose option (-v) i get the following output and
On Sep 25, 10:32 am, [EMAIL PROTECTED] (Scott Marlowe) wrote:
On 9/18/07, Dan99 [EMAIL PROTECTED] wrote:
Hi,
I found out this morning that I cannot get pg_dump to work at all on
my database. It refuses to create a dump and instead just freezes.
When using the verbose option (-v) i
Greetings list,
I have created a function which inserts a row into a table that has 2 unique
indexes on two different columns.
I was wondering whether, in the case of a UNIQUE_VIOLATION exception, there is
a way to find out which index would have been violated?
Petri Simolin
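On the 8.x releases there is no dedicated diagnostic for the constraint name,
but the error text contains it, so one workaround is to match on SQLERRM inside
the handler. A sketch with made-up table, column, and constraint names:

```sql
CREATE OR REPLACE FUNCTION insert_item(p_code text, p_email text)
RETURNS void AS $$
BEGIN
    INSERT INTO items (code, email) VALUES (p_code, p_email);
EXCEPTION WHEN unique_violation THEN
    -- 8.x error text looks like:
    --   duplicate key violates unique constraint "items_code_key"
    IF SQLERRM LIKE '%items_code_key%' THEN
        RAISE NOTICE 'duplicate code: %', p_code;
    ELSE
        RAISE NOTICE 'duplicate email: %', p_email;
    END IF;
END;
$$ LANGUAGE plpgsql;
```

Much later releases expose the name directly (GET STACKED DIAGNOSTICS ...
CONSTRAINT_NAME), which is more robust than string matching.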
After I clustered the primary key index of a table with about 300,000
rows, my vacuum/analyze on that table is taking too long ... over 15
mins when originally it was 15 seconds! Nothing else has been changed
with this table. Is clustering not good for vacuums?
---(end of
Dan99 wrote:
Update your pgsql. 7.4.2 is old in two ways. The 7.4 branch is
pretty old; plan an upgrade as soon as you can get this backup to
work. Secondly, pg 7.4 is up to a number near 20 now, i.e. 7.4.18.
There are over two years of bug fixes you're missing, and one of them
Raymond O'Donnell wrote:
Just wondering - when using a newer pg_dump to dump from an older Postgres,
does pg_dump automatically generate INSERT statements for the data rather
than using COPY?
No.
I noticed this today when transferring data to a newer server - pg_dump
generated INSERTs
In response to James Williams [EMAIL PROTECTED]:
I'm stuck trying to tune a big-ish postgres db and wondering if anyone
has any pointers.
I cannot get Postgres to make good use of plenty of available RAM and
stop thrashing the disks.
One main table. ~30 million rows, 20 columns all
Raymond O'Donnell wrote:
Just wondering - when using a newer pg_dump to dump from an older
Postgres, does pg_dump automatically generate INSERT statements for
the data rather than using COPY?
I noticed this today when transferring data to a newer server -
pg_dump generated INSERTs although I
Johann Maar wrote:
But if I try to start PostgreSQL by running sudo /etc/init.d/
postgresql start it will fail because it tries to write a PID file to
/var/run/postgresql which does not exist. If I create this directory
and set the permissions for postgres to write it works (!), but after
the
James Williams wrote:
The box has 4 x Opterons, 4Gb RAM five 15k rpm disks, RAID 5. We
wanted fast query/lookup. We know we can get fast disk IO.
RAID 5 is usually advised against here. It's not particularly fast or
safe, IIRC. Try searching the ML archives for RAID 5 ;)
--
Alban Hertroys
I set up a test server using the latest 8.2 as suggested by the list and
did a pg_dump of the old database.
I created a new empty database with the same name and created a user with
the same name as was on the old server.
I then tried to do a restore using webmin just as a test and got errors.
I am
paul.dorman [EMAIL PROTECTED] writes:
Hi everyone,
I would like to know the best way to implement a DAG in PostgreSQL. I
understand there has been some talk of recursive queries, and I'm
wondering if there has been much progress on this.
The ANSI recursive queries didn't make it into 8.3.
On 26/09/2007 16:26, Carlos Moreno wrote:
Maybe you used the switch -d to specify the database? (like with psql
and some other client applications).
Duhhh! I've just realised my mistake - here's my command line:
pg_dump -h 192.168.200.2 -U postgres -d assetreg -f assetreg.txt -E utf8
I
Have you tried clustering tables based on the most-frequently used
indexes to improve locality?
http://www.postgresql.org/docs/8.2/static/sql-cluster.html
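In the 8.2 syntax that page documents, that looks like this (illustrative
table and index names):

```sql
-- 8.2 syntax: physically reorder big_table by the named index.
CLUSTER big_table_pkey ON big_table;
-- CLUSTER is a one-time reorder; repeat it periodically (and run
-- ANALYZE afterwards) as the table churns.
```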
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bill Moran
Sent: Wednesday, September 26, 2007
Bill Moran [EMAIL PROTECTED] writes:
Give it enough shared_buffers and it will do that. You're estimating
the size of your table @ 3G (try a pg_relation_size() on it to get an
actual size). If you really want to get _all_ of it in all the time,
you're probably going to need to add RAM to the
Conal [EMAIL PROTECTED] writes:
I have a database schema which has a central table with several
others depending on it. The dependent tables all have foreign key
constraints with ON DELETE CASCADE so that I can remove tuples from
the central table and have the dependent rows removed
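The reply is truncated above, but the classic cause of painfully slow cascaded
deletes is a missing index on the referencing column: the FK constraint creates
no index on the child side, so each delete from the central table scans every
dependent table. A sketch with hypothetical names:

```sql
-- The FK constraint alone does not create this; without it each
-- cascaded delete sequentially scans the dependent table.
CREATE INDEX dependent_central_id_idx ON dependent (central_id);
```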
Phoenix Kiula wrote:
After I clustered the primary key index of a table with about 300,000
rows, my vacuum/analyze on that table is taking too long ... over 15
mins when originally it was 15 seconds! Nothing else has been changed
with this table. Is clustering not good for vacuums?
No.
On Wed, Sep 26, 2007 at 10:51:43AM +0200, Romain Roure wrote:
Hi,
We suddenly stumbled upon duplicate entities. Some of our databases
ended up with two 'public' schemas and several duplicate user tables
(sharing the same oid).
After checking through the logs, it doesn't appear to be a
On Wed, Sep 26, 2007 at 11:59:28AM +0200, Martin Bednář wrote:
Hi,
Is it possible to extract data directly from the data files?
I have two tablespaces one for data and one for indexes.
After filesystem crash I lost my /var/lib/postgresql/data folder :( All
data is in /lost+found :(, I found
Johann Maar wrote:
Hi folks,
sorry I do not get it right and I have to ask now.
I manually compiled PostgreSQL on my Kubuntu machine to /usr/local/opt/
pgsql and did all this stuff like creating a postgres user and I
have a startup script in /etc/init.d.
But if I try to start PostgreSQL by
Martijn van Oosterhout [EMAIL PROTECTED] writes:
On Wed, Sep 26, 2007 at 10:51:43AM +0200, Romain Roure wrote:
After checking through the logs, it doesn't appear to be a problem
resulting from wrap-around OID's. Though the logs mention
transaction-wraparound may have happened.
Please show
David Siebert [EMAIL PROTECTED] writes:
I set up a test server using the latest 8.2 as suggested by the list and
did a pg_dump of the old database.
I created a new empty database with the same name and created a user with
the same name as was on the old server.
I then tried to do a restore using
Carlos Moreno wrote:
Johann Maar wrote:
But if I try to start PostgreSQL by running sudo /etc/init.d/
postgresql start it will fail because it tries to write a PID file to
/var/run/postgresql which does not exist. If I create this directory
and
On 9/26/07, James Williams [EMAIL PROTECTED] wrote:
The last is based mostly on the observation that another tiddly
unrelated mysql db which normally runs fast, grinds to a halt when
we're querying the postgres db (and cpu, memory appear to have spare
capacity).
Just a quick observation
Take a look at contrib/ltree.
On Wed, 26 Sep 2007, paul.dorman wrote:
Hi everyone,
I would like to know the best way to implement a DAG in PostgreSQL. I
understand there has been some talk of recursive queries, and I'm
wondering if there has been much progress on this.
Are there any complete
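The ltree suggestion above targets labeled trees; for a general DAG the usual
fallback is a plain edge table (a sketch with hypothetical names):

```sql
CREATE TABLE node (id integer PRIMARY KEY, label text);
CREATE TABLE edge (
    parent integer NOT NULL REFERENCES node (id),
    child  integer NOT NULL REFERENCES node (id),
    PRIMARY KEY (parent, child)
);
-- Without server-side recursive queries (not in 8.3), walking the
-- graph means looping joins in plpgsql or on the client.
```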
what does this mean?
{postgres=arwdRxt/postgres,username=r/postgres}
cheers, jzs
Johann Maar wrote:
But if I try to start PostgreSQL by running sudo /etc/init.d/
postgresql start it will fail because it tries to write a PID file
to /var/run/postgresql which does not exist. If I create this
directory and set the permissions for postgres to write it works (!),
but after the
--- John Smith [EMAIL PROTECTED] wrote:
what does this mean?
{postgres=arwdRxt/postgres,username=r/postgres}
This link describes each of the letters:
http://www.tldp.org/LDP/intro-linux/html/sect_03_04.html
Regards,
Richard Broersma Jr.
On Sep 26, 2007, at 14:51 , John Smith wrote:
what does this mean?
{postgres=arwdRxt/postgres,username=r/postgres}
http://www.postgresql.org/docs/8.2/interactive/sql-grant.html
If you provide a bit more information (such as where and how you see
this information), you might assist those
On Wed, Sep 26, 2007 at 10:05:21PM +0200, Peter Eisentraut wrote:
I tried to change the location of the PID target directory in
postgresql.conf, but then clients like psql still try to find the PID
file in /var/run/postgresql and fail.
You must be mistaken about this. psql shouldn't
When loading (inserting) data into a table with COPY I have read in the
documentation that rows are appended to the end of the table instead of
being added to existing table pages, so I'm wondering about memory
utilization. Our application uses a number of COPY statements in
parallel, so COPY
Hi,
I have a file to import into postgresql that has an unusual date format.
For example, Feb 20 2007 is 20022007, in DDMMYYYY format, without any
separator. I know that 20070220 (YYYYMMDD) is ok, but I don't know how
to handle the DDMMYYYY dates.
I tried and tried but I can't import those
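One common approach is to COPY into a staging table that keeps the date as
text, then convert with to_date; a sketch with hypothetical names, assuming
the raw values are day, month, four-digit year run together (DDMMYYYY):

```sql
-- Stage the raw file with the odd date column as plain text.
CREATE TABLE staging (d_raw text, amount numeric);
-- COPY staging FROM '/path/to/file.csv' WITH CSV;

-- Convert while moving rows into the real table:
INSERT INTO real_table (d, amount)
SELECT to_date(d_raw, 'DDMMYYYY'), amount FROM staging;
-- to_date('20022007', 'DDMMYYYY') => 2007-02-20
```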
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Greg Sabino Mullane wrote:
There may be some other safeguards in place I did not see to prevent this,
but I don't see a reason why we shouldn't use unsigned int or
unsigned long int here, both for ntups and the return value of the
On Sep 26, 2007, at 3:42 PM, Diego Gil wrote:
Hi,
I have a file to import into postgresql that has an unusual date format.
For example, Feb 20 2007 is 20022007, in DDMMYYYY format, without any
separator. I know that 20070220 (YYYYMMDD) is ok, but I don't know how
to handle the DDMMYYYY
On 9/26/07, Diego Gil [EMAIL PROTECTED] wrote:
Hi,
I have a file to import into postgresql that has an unusual date format.
For example, Feb 20 2007 is 20022007, in DDMMYYYY format, without any
separator. I know that 20070220 (YYYYMMDD) is ok, but I don't know how
to handle the DDMMYYYY
Dear Sir/Madam,
We have developed an accounting solution using Linux and Postgresql as an
open source database. Storing data in English is not an issue but we need
your assistance to guide us on how we can store multi-language
English/Arabic characters using Postgresql.
Your assistance is
Feature request: allow some way to RETURN NEXT a set of values. Usage:
recursive stored procedures to walk a tree. Example: given a table, find all
tables that inherit from it.
Right now, as far as I can tell, that requires a little extra effort to merge
the results from different levels of
On 9/26/07, Michael Glaesemann [EMAIL PROTECTED] wrote:
On Sep 26, 2007, at 14:51 , John Smith wrote:
what does this mean?
{postgres=arwdRxt/postgres,username=r/postgres}
http://www.postgresql.org/docs/8.2/interactive/sql-grant.html
...purrrfect! thanks michael. i got there by \dp
a
Scott Ribe wrote:
Feature request: allow some way to RETURN NEXT a set of values. Usage:
recursive stored procedures to walk a tree. Example: given a table, find all
tables that inherit from it.
There is a new RETURN QUERY in 8.3 that may be what you want.
--
Alvaro Herrera
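For reference, a minimal 8.3 sketch (an illustrative function that lists the
direct children of a table via pg_inherits):

```sql
CREATE OR REPLACE FUNCTION child_tables(p_parent regclass)
RETURNS SETOF regclass AS $$
BEGIN
    RETURN QUERY SELECT inhrelid::regclass
                   FROM pg_inherits
                  WHERE inhparent = p_parent;
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM child_tables('my_base_table');
```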
On Wed, 26 Sep 2007 at 17:22 -0500, Erik Jones wrote:
On Sep 26, 2007, at 3:42 PM, Diego Gil wrote:
Hi,
I have a file to import into postgresql that has an unusual date format.
For example, Feb 20 2007 is 20022007, in DDMMYYYY format, without any
separator. I know that
On Wed, 26 Sep 2007 at 17:24 -0500, Scott Marlowe wrote:
On 9/26/07, Diego Gil [EMAIL PROTECTED] wrote:
Hi,
I have a file to import into postgresql that has an unusual date format.
For example, Feb 20 2007 is 20022007, in DDMMYYYY format, without any
separator. I know that
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
This is silly. Have you forgotten that the max number of columns is
constrained to 1600 on the backend side?
Uh, this is the number of returned rows, right? How does this relate to
columns?
Duh, brain fade on my part, sorry. Still,
Maybe I'm just missing something but I can't seem to get pg_dump to output
copy statements. Regardless of the -d / --inserts flag it always outputs
insert statements. The doc says that pg_dump will output copy statements by
default and will only output insert statements with the -d / --inserts
Matthew Dennis [EMAIL PROTECTED] writes:
Maybe I'm just missing something but I can't seem to get pg_dump to output
copy statements. Regardless of the -d / --inserts flag it always outputs
insert statements.
I'm betting this is the same type of pilot error discussed earlier
today:
Matthew Dennis wrote:
Maybe I'm just missing something but I can't seem to get pg_dump to output
copy statements. Regardless of the -d / --inserts flag it always outputs
insert statements. The doc says that pg_dump will output copy statements by
default and will only output insert statements
There is a new RETURN QUERY in 8.3 that may be what you want.
Sounds good.
--
Scott Ribe
[EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice
On Wednesday 26 September 2007 20:24:12 Tom Lane wrote:
Matthew Dennis [EMAIL PROTECTED] writes:
Maybe I'm just missing something but I can't seem to get pg_dump to
output copy statements. Regardless of the -d / --inserts flag it always
outputs insert statements.
I'm betting this is the
On Sep 26, 2007, at 5:24 PM, Scott Marlowe wrote:
On 9/26/07, Diego Gil [EMAIL PROTECTED] wrote:
Hi,
I have a file to import into postgresql that has an unusual date format.
For example, Feb 20 2007 is 20022007, in DDMMYYYY format, without any
separator. I know that 20070220 (YYYYMMDD) is
Dear Sir/Madam,
We have developed an accounting solution using Linux and Postgresql as an
open source database. Storing data in English is not an issue but we need
your assistance to guide us on how we can store multi-language
English/Arabic characters using Postgresql.
Hello
simply, use
Jan de Visser [EMAIL PROTECTED] writes:
In my world two identical pilot errors within a short timeframe are
indicative of a bad interface.
Yeah, it's inconsistent. How many people's dump scripts do you want to
break to make it more consistent?
regards, tom lane
After upgrading to the 8.2.4 version of PostgreSQL (Suse Linux, compiled from
source), the function display in psql has changed.
In 8.1, the \df+ command showed the function description as entered
while creating it. In 8.2 this seems to have changed. There are
additional characters and