owner. It then builds fine. Seems
like there has to be an easier way.
Anyway, after installing the new RPMs on my FC4 dev server and
rebuilding my programs, the programs do now run on my web server (stock
FC4 PostgreSQL).
Thanks for your help
--
Bryan White, ArcaMax
will not be cleaned up.
initdb: cannot be run as root
Please log in (using, e.g., "su") as the (unprivileged) user that will
own the server process.
--
Bryan White, ArcaMax Publishing Inc.
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
ve multiple copies of libpq around and
select it at compile time. The client library doesn't really have to
match the server version...
As I asked in another thread: Does it work to install the
postgresql-server RPM from the 8.1 version and the others from the
Fedora 4 included 8.0 version?
Tom Lane wrote:
Bryan White <[EMAIL PROTECTED]> writes:
I am having problems with my libpq programs crashing. This seems to be
a version incompatibility and I want to find out how to best proceed.
My main database is running Fedora Core 5 with the supplied PostgreSQL
8.1.4.
My web
.x on my dev server and
had no problems running the produced programs on a live server with
8.0.x libraries.
--
Bryan White, ArcaMax Publishing Inc.
Bryan is used to being beast of burden to other people's needs.
Very sad life. Probably have very sad death. But, at least
Tom Lane wrote:
Bryan White <[EMAIL PROTECTED]> writes:
ec=# \z bulkuploadcfg
Access privileges for database "ec"
Schema | Table |A
Tom Lane wrote:
You need to revoke them as that user, likely. REVOKE really means
"revoke grants I made", not "revoke any grant anybody made".
OK, I tried logging in as that user. Oddly, after the revoke the only
grant that disappeared was one I had created.
Maybe it has something to do with 'g
privileges I have set show up after the original user
privileges in the \z output.
How can I clean this up? Would dropping the user have any effect?
This is on 7.4 if that makes a difference.
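A sketch of the fix Tom describes: the REVOKE has to be issued by the role that made the grant (the role and user names here are hypothetical; the table name is from the \z output above):

```sql
-- Connect as the role that originally granted the privileges, then:
REVOKE ALL ON bulkuploadcfg FROM some_user;
-- On later versions a superuser can impersonate the grantor instead:
SET SESSION AUTHORIZATION original_grantor;
REVOKE ALL ON bulkuploadcfg FROM some_user;
```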
--
Bryan White, ArcaMax Publishing Inc.
The world ends when your dead.
Until then you got more punishment
e to escape is the single
quote character.
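For reference, the standard way to escape a single quote inside a SQL string literal is to double it (the `customer` table is only an example):

```sql
-- 'O''Brien' produces the string O'Brien
INSERT INTO customer (name) VALUES ('O''Brien');
```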
-
Bryan White
> My hard disk partition with the postgres data directory got full. I
> tried to shut down postgres so I could clear some space, nothing
> happened. So I did a reboot. On restart (after clearing some
> pg_sorttemp.XX files), I discovered that all my tables appear empty!
> When I check in the data
> Hello,
>
> I'm a bit new to postgres. Is there anyway to tell the current number of
> connections on a database or server? I'm having a connection closing
> problem and would like to debug it somehow. I know on Sybase you can check
> a sys table to determine this. Not familiar with how to
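On later PostgreSQL versions the usual answer is the `pg_stat_activity` view, one row per open connection (the view postdates the era of this post):

```sql
-- Total connections to the cluster:
SELECT count(*) FROM pg_stat_activity;
-- Broken down by database:
SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;
```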
I need to insert a bunch of records in a transaction. The transaction must
not abort if a duplicate is found.
I know I have seen the syntax for this before. Can someone jog my memory?
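For readers finding this in the archives: on PostgreSQL 9.5 and later this is `INSERT ... ON CONFLICT`; on older versions a savepoint per insert keeps a duplicate-key error from aborting the whole transaction. Table and column names below are hypothetical:

```sql
-- PostgreSQL 9.5+:
INSERT INTO mytable (id, val) VALUES (1, 'x')
ON CONFLICT (id) DO NOTHING;

-- Older versions (8.0+): roll back only the failed statement:
BEGIN;
SAVEPOINT s1;
INSERT INTO mytable (id, val) VALUES (1, 'x');
-- on a duplicate-key error: ROLLBACK TO SAVEPOINT s1;
COMMIT;
```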
Bryan White, ArcaMax.com, VP of Technology
The avalanche has already begun.
It is too late for the pebbles
> I'll take a look, but in the meantime you might be faced with an initdb
> to bring the OID counter back under 2G :-(
I am doing this on my backup db server. The database is recreated nightly
from a dump from the main database server. Maybe I need to do an initdb
nightly as well?
I tried crea
| text | not null default ''
zip| text | not null default ''
country| text | not null default ''
phone | text | not null default ''
batchid| text | not null default
> Whenever a query is executed (not found in cache, etc.), the caching
> system would simply store the query, the results, and a list of tables
> queried. When a new query came in, it would do a quick lookup in the query
> hash to see if it already had the results. If so, whammo. Whenever
> Fascinating. Looks like a possible framework for building a standalone
> dumping utility for migration.
It could be turned into that. It already does all the parsing, you would
just have to change the output functions for the desired format.
> I tried to increase the block size from 8K to 32K and received an IPC
> error. Now IPC is compiled into the kernel, so why would I get this error? I
> switched it back to 8K and it runs fine.
Did you dump your database(s) before the change and initdb/reload them
after? I presume this is needed a
nformation.
It is available here http://www.arcamax.com/pg_check/
I am looking for suggestions as to how to make it more useful so please look
it over.
Bryan White, ArcaMax.com, VP of Technology
You can't deny that it is not impossible, can you.
>
> I would like to move some data from an older installation of PostgreSQL to
> a newer. When doing
> "pg_dump persondb > db.out" I get the following error message:
>
> "dumpSequence(person_sek): 0 (!=1) tuples returned by SELECT"
>
> The "person_sek" is a sequence in the database.
>
I believ
27;. I am thinking that
instead I will need to pipe pg_dumps output into gzip thus avoiding the
creation of a file of that size.
Does anyone have experience with this sort of thing?
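A sketch of the pipeline being described (the database name is illustrative):

```shell
# Compress the dump on the fly; no intermediate uncompressed file is created
pg_dump persondb | gzip > db.out.gz

# Restore later with:
gunzip -c db.out.gz | psql persondb
```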
Bryan White, ArcaMax.com, VP of Technology
You can't deny that it is not impossible, can you.
> > Server process (pid 2864) exited with status 139 at Thu Sep 14 10:13:11 2000
>
> That should produce a coredump --- can you get a backtrace?
I found a core file. I am not all that familiar with gdb but the backtrace
looks useless:
#0 0x8064fb4 in ?? ()
#1 0x809da10 in ?? ()
#2 0x809e538 i
Here is a follow up. I did a hex/ascii dump of the 3 bad tuples. In the
dump I could pick out an email address. This is an indexed field. I did a
select on each of them in the live database. The 1st and 3rd were not
found. The second worked ok if I only selected the customer id (an int4 and
ct the answer is
to perform surgery on the bad pages and then rebuild indexes but this is a
scary idea. Has anyone else created tools to deal with this kind of
problem?
Bryan White, ArcaMax.com, VP of Technology
You can't deny that it is not impossible, can you.
> Greetings all,
>
> At long last, here are the results of the benchmarking tests that
> Great Bridge conducted in its initial exploration of PostgreSQL. We
> held it up so we could test the shipping release of the new
> Interbase 6.0. This is a news release that went out today.
>
> The release
> Shut down the postmaster and then copy the entire db (including pg_log
> file) and it should work. The catch is to make sure pg_log is in sync
> with your table files.
I would rather not leave my database down long enough to copy the entire db
(3.5GB). I have control over when changes are app
> Hmm. Assuming that it is a corrupted-data issue, the only likely
> failure spot that I see in CopyTo() is the heap_getattr macro.
> A plausible theory is that the length word of a variable-length field
> (eg, text column) has gotten corrupted, so that when the code tries to
> access the next fi
> Status 139 indicates a SEGV trap on most Unixen. There should be a core
> dump left by the crashed backend --- can you get a backtrace from it
> with gdb?
>
> I concur that this probably indicates corrupted data in the file. We
> may or may not be able to guess how it got corrupted, but a sta
I have been looking at the new syntax in create table such as unique and
primary key constraints (new as in I just noticed it, I don't know when it
was added). It seems to me there is a minor gotcha when using pg_dump/psql
to reload a database.
When indexes were created separately pg_dump would
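The constraint syntax in question looks like this; PRIMARY KEY and UNIQUE each create a unique index implicitly, which is what interacts with pg_dump's ordering (the table is illustrative):

```sql
CREATE TABLE account (
    id    int4 PRIMARY KEY,       -- implicitly creates a unique index
    email text UNIQUE NOT NULL    -- likewise
);
```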
> Can I make a sequence use an int8 instead of int4?
>
> I have an application where, over a few years, it's quite possible to hit
> the ~2 billion limit. (~4 billion if I start the sequence at -2
> billion.)
>
> There won't be that many records in the table, but there will be that many
> inserts
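For later readers: modern PostgreSQL sequences are 64-bit, and since version 10 the type can be stated explicitly (names are illustrative):

```sql
CREATE SEQUENCE big_seq AS bigint;    -- PostgreSQL 10+ syntax
CREATE TABLE events (
    id bigserial PRIMARY KEY,         -- bigint column fed by its own sequence
    payload text
);
```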
> Error return is that it is not able to find the attribute
"any_column_name" in the table.
This may be obvious, but have you looked at the table layout to see if the
column exists? You may have a problem with spaces in the name or upper case
letters in the name. In either case you must quote th
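A quick illustration of the quoting rule: unquoted identifiers are folded to lower case, so a column created with spaces or upper-case letters must be double-quoted thereafter (names are hypothetical):

```sql
CREATE TABLE t ("ZipCode" text, "phone number" text);
SELECT "ZipCode", "phone number" FROM t;   -- works
-- SELECT ZipCode FROM t;                  -- fails: looks for column "zipcode"
```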
I would just like to check an assumption. I "vacuum analyze" regularly. I
have always assumed that this did a plain vacuum in addition to gathering
statistics. Is this true? The documentation never states explicitly one
way or the other but it almost suggests that they are independent
operatio
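The assumption is correct in current releases, where the three forms are:

```sql
VACUUM;           -- reclaim dead-row space only
ANALYZE;          -- gather planner statistics only
VACUUM ANALYZE;   -- both in one pass
```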
> That's good, but does it mean that 7.0 is slower about adding index
> entries than 6.5 was? Or did you have fewer indexes on the table
> when you were using 6.5?
No, the indexes have been there all along. My impression is the performance
loss was between 6.5.0 and 6.5.3. I had just ignored th
> I have set index_scan off for tomorrow morning's run. I will let you know
> what happens.
I think my problem is fixed. By disabling index scan while creating the
cursors and then turning it back on again for the small query that occurs in
my inner loop the performance has returned to normal (
> Well, when you have 2.7 million records in a database, the code might be
> as good as it can be.
I have recovered the performance lost when I moved to Postgres 7.0 by
executing
SET enable_indexscan = OFF before creating my cursors and turning it back on
for the inner loop query. It may even be
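A sketch of the technique described (table and cursor names are hypothetical; cursors must be declared inside a transaction):

```sql
BEGIN;
SET enable_indexscan = OFF;   -- steer the planner away from index scans
DECLARE report_cur CURSOR FOR SELECT * FROM orders;
SET enable_indexscan = ON;    -- restore it for the small inner-loop queries
-- FETCH from report_cur and run the inner-loop queries here
COMMIT;
```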
I have a large report that I run once a day. Under 6.5.3 it took just over
3hrs to run. Under 7.0 it is now taking 8 hours to run. No other changes
were made.
This is on RedHat Linux 6.2. A PIII 733 with 384MB Ram, and 2 IDE 7200 RPM
disks. One disk contains the Postgres directory including
I have a very simple table defined as
CREATE TABLE custlist (
listid int4 NOT NULL,
custid int4 NOT NULL);
Somehow a record has gotten into the table with both values null. When I
try to delete using
delete from custlist where custid is null;
or
delete from custlist where
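When the usual WHERE clause can't reach a damaged row, one approach is to delete it by its physical address, `ctid` (the tuple ID below is only an example):

```sql
-- Find the row's physical location:
SELECT ctid FROM custlist WHERE listid IS NULL AND custid IS NULL;
-- Then delete exactly that tuple, substituting the ctid returned above:
DELETE FROM custlist WHERE ctid = '(0,1)';
```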
> hi, all experts there, greetings!
>
> Just minutes ago, my boss found out one of the attributes in a
> table is too short (varchar 64 for url), we need to make
> it wider to 85 A.S.A.P. Seems that alter table can not do it.
> So, I used pg_dump, (how to do it gracefully?) immediately drop the ta
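For the archives: PostgreSQL 8.0 and later can widen the column in place, avoiding the dump-and-reload dance (the table name is hypothetical):

```sql
ALTER TABLE mytable ALTER COLUMN url TYPE varchar(85);
```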
not reflect the default value for new rows added after the alter
statement.
I could work around this if someone could tell me how to modify the system
tables to specify a default value. There does not seem to be much
documentation for the layout of the system tables.
Bryan White
ArcaMax Inc
If I add a column using:
alter table mytable add column myint int not null default 0;
The default value does not seem to make it into the data dictionary.
This is using Postgres 6.5.
Is there is way to manually poke a default value into the data dictionary?
Bryan White
ArcaMax
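Later versions answer this directly with ALTER COLUMN ... SET DEFAULT, so no system-table surgery is needed (table and column names are from the message above):

```sql
ALTER TABLE mytable ALTER COLUMN myint SET DEFAULT 0;
-- Note: this affects future inserts only; existing rows are unchanged.
```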
`bench.out'
gmake: *** [runtest] Quit (core dumped)
Quit (core dumped)
--
Has anybody run into this before?
Bryan White
ArcaMax Inc.
Yorktown VA
www.arcamax.com
down the
postmaster and then move and/or copy directories in /usr/local/pgsql/data.
Can someone confirm this?
Bryan White
ArcaMax Inc.
Yorktown VA
www.arcamax.com
this be some sort of resource problem?
Bryan White
ArcaMax Inc.
Yorktown VA
www.arcamax.com
>Now that I have a database functional, I need to allow other users
>to have access. Using createuser I can give other users access to
>the postmaster, but I need to give them access to my database as well.
>
>Could someone enlighten me.
Access to tables is controlled with the Grant and Revoke S
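A minimal sketch of that (the user and table names are hypothetical):

```sql
GRANT SELECT, INSERT ON mytable TO alice;
REVOKE INSERT ON mytable FROM alice;
```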
>Hello, I'm currently looking for any documentation anyone on the list
>knows of that shows the performance of Linux on any Intel hardware
>(preferably a dual PII) for serving web/database services. I know I'm
>stretching the off-topicness here, but please bear with me. I promise
>I'll have som