I have set up a PG9.2 database slightly over 110 GB in size. In addition to
the master server, I have 2 servers running as active standby nodes.
When I run pg_basebackup to set up the standby servers, it completes in
about 60 minutes, which is okay for my needs. However, I also need to set
up anot
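For reference, seeding a 9.2 standby with pg_basebackup might look like the sketch below; the host name, replication role, and target directory are hypothetical, and `-X stream` (new in 9.2) ships WAL alongside the copy while `-P` reports progress:

```shell
# Copy the master's data directory into an empty standby directory.
# Requires a replication-capable role and a free walsender slot on the master.
pg_basebackup -h master.example.com -U repluser \
    -D /var/lib/pgsql/9.2/standby -X stream -P
```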
Hi,
When I execute an insert statement, older table records are deleted even though
my insert statement works. It caps the table at a maximum of 4200 records: if I
add 3 more records, it deletes the first three
records in the table even though it adds the 3 new
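Behavior like this usually comes from a trigger or rule defined on the table rather than from INSERT itself. A quick way to check (the database and table names below are placeholders):

```shell
# \d lists any triggers and rules defined on the table.
psql -d mydb -c '\d mytable'
# Or query the catalogs directly for trigger names:
psql -d mydb -c "SELECT tgname FROM pg_trigger WHERE tgrelid = 'mytable'::regclass;"
```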
On Tue, Oct 10, 2006 at 10:15:42AM -0500, Jim C. Nasby scratched on the wall:
> On Fri, Oct 06, 2006 at 10:37:26AM -0500, Jay A. Kreibich wrote:
> > These are generally referred to as "Hierarchical Queries" and center
> > around the idea of a self-referencing t
r any more than three levels deep) you can also join the table
to itself multiple times. This can get really confusing very quickly,
and it is not an overly general solution, but it can be done in "pure" SQL
in a fairly straightforward (if a bit complex) way.
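A sketch of the self-join approach, using a hypothetical emp(id, name, boss_id) table and a fixed depth of three levels (each extra level is one more self-join):

```shell
# Walk a three-level hierarchy by joining the table to itself twice.
psql -d mydb <<'SQL'
SELECT e1.name AS level1, e2.name AS level2, e3.name AS level3
  FROM emp e1
  LEFT JOIN emp e2 ON e2.boss_id = e1.id
  LEFT JOIN emp e3 ON e3.boss_id = e2.id
 WHERE e1.boss_id IS NULL;  -- start from the root rows
SQL
```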
-j
--
ot support this very well, if
at all.
-j
--
Jay A. Kreibich | CommTech, Emrg Net Tech Svcs
[EMAIL PROTECTED] | Campus IT & Edu Svcs
<http://www.uiuc.edu/~jak> | University of Illinois at U/C
---
want to have a look at this:
http://www.postgresql.org/docs/8.0/static/backup-online.html
-j
Do the pg_dump/pg_dumpall utility programs generate any
return codes that I could analyze to determine the success or failure of the
PostgreSQL database backups? It would
seem logical that they should. Please advise, and point
me to a piece of doc that might be of help if it exists.
Thanks in
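pg_dump and pg_dumpall do follow the usual convention: exit status 0 on success and non-zero on failure, so a wrapper script can check $?. A minimal sketch (the pg_dump invocation shown in the comment, with its database name and output path, is hypothetical):

```shell
# Runs any backup command and reports success or failure via its exit code.
run_backup() {
    "$@"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "backup failed with exit code $status" >&2
        return "$status"
    fi
    echo "backup completed"
}

# Example invocation (hypothetical database and path):
# run_backup pg_dump -f /backups/mydb.sql mydb
```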
'm not a switch specialist but given the ability
> to do routing I was imagining the arping trick...
Yes, but such systems are generally routers first and switches second.
-j
the same company.
IGMP snooping is also really tricky to do right, and there are still
some situations where you are forced to flood traffic.
-j
"cross-over" cable, not a "roll" cable. There is
a difference.
With many newer, high quality NICs, you don't even need that. Many
modern NICs do auto MDI/MDI-X detection, so any standard cable will do.
-j
aches (our application writes
data at a very very slow rate; the main reason we have RAIDs is
protection, size, and *read* speed).
-j
?
Work in the same timezone. EST and EDT are not the same.
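To illustrate at the session level (the zone names are examples): a full zone name like 'America/New_York' tracks daylight-saving transitions, while the abbreviation 'EST' is a fixed UTC-5 offset year-round.

```shell
# Full zone names follow DST; fixed abbreviations do not.
psql -c "SET timezone = 'America/New_York'; SELECT now();"
psql -c "SET timezone = 'EST'; SELECT now();"
```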
-j
--
You can also use netstat to see if it is listening.
Postgres usually listens on port 5432, so examine netstat -na and
see if anything is listening on tcp port 5432.
If you are using a recent version of linux, netstat has the -p option that
will name the process listening on that port. Oth
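The checks described above boil down to something like this sketch (exact netstat flags vary a bit between platforms):

```shell
# Is anything listening on PostgreSQL's default port?
netstat -na | grep ':5432'
# On Linux, -p additionally names the owning process (may require root):
netstat -natp | grep ':5432'
```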
I know someone already pointed out the cache differences between the
processors, and that is likely to contribute to the differences you have
observed.
As was stated in the message about cache, databases are extremely I/O
bound. It is worth noting that Celerons have a front-side bus speed of
66
Look in the Admin Docs: Starting the Server. pg_ctl -o "-i".
Or edit the tcpip_socket parameter in postgresql.conf.
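Both approaches sketched out (7.x-era syntax; the postmaster must be restarted for the config change to take effect):

```shell
# Pass -i through to the postmaster to enable TCP/IP sockets:
pg_ctl -o "-i" start
# Equivalently, set  tcpip_socket = true  in postgresql.conf and restart.
```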
Jay
[EMAIL PROTECTED]
www.sysadmincorner.com
"Andy Jenks" <[EMAIL PROTECTED]> wrote in message
news:9flpfb$559$[EMAIL PROTECTED]
/usr partition, or to a NFS.
can this be done easily, and if so, what would be
the procedure to move the files with no losses (all databases are currently
active on the Internet to the public, but downtime is acceptable for moving
them)
Thanks in advance.
Jay Hodges
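The usual recipe for relocating the cluster goes roughly as follows; all paths here are hypothetical, and note that running a live cluster over NFS is generally discouraged on these lists for reliability reasons:

```shell
# Stop the server, copy the cluster preserving ownership and permissions,
# then symlink the old location to the new one and restart.
pg_ctl stop -m fast
cp -Rp /var/lib/pgsql/data /usr/pgsql-data
mv /var/lib/pgsql/data /var/lib/pgsql/data.old
ln -s /usr/pgsql-data /var/lib/pgsql/data
pg_ctl start
```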
unning only a vacuum?
Thanks,
Jay Summet
that ^D after the time, but when I pasted
this line into psql it worked just fine.
Keep in mind, this program worked correctly with version 6.2, and all of my
other programs work as they should still.
My question: Can anybody spot my problem right off? If not, what code should
I use to report the actual error message generated by the backend
(as opposed to just detecting that an error has occurred, as I do now)?
Thanks,
Jay Summet
I have 6.2.1 if it would be of any help.
-Jay
> Neither are options :( We didn't start *properly* tagging things until
> well after v6.1, so you can't just pull a 6.1 version out :( That was one
> of things I was gonna try with the v6.3 problem awhile back...
>
> F
So, I edited the defs.h file and made the following
change
-#define MAXTABLE 32500
+#define MAXTABLE 64500
and re-installed it. This fixed the problem.
So for BSDI4.0, the default yacc doesn't have a big enough table
size.
Jay Summet
essage when I try to run createuser. I've
checked the pg_hba.conf file and it should be allowing access from
the local machine.
I'm about at the limit of my knowledge of what to do here...
Any help greatly appreciated.
Jay W. Summet