In reference to the pg_clog errors I'm having, I am still looking for
tips or help. Here's the info again:
"ERROR: could not access status of transaction 143934068
DETAIL: could not open file "/usr/local/pgsql/data/pg_clog/0089": No
such file or directory
Now, despite creating an 8k fil
: could not open file "/usr/local/pgsql/data/pg_clog/0089": No
such file or directory
I looked, and the "0089" file was gone again. Is there anything I
can do to save the situation? (The PG version is 7.4.2)
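The way I've been recreating the segment is sketched below -- a stopgap only, since zero bytes read as "transaction not committed", which may or may not be the right answer for your data. The demo directory stands in for the real pg_clog path:

```shell
# Recreate a missing clog segment as zero-filled 8k pages.
# DEMO_DIR stands in for /usr/local/pgsql/data/pg_clog -- change it, and
# stop the postmaster first, before trying this on a real cluster.
DEMO_DIR=/tmp/pg_clog_demo
mkdir -p "$DEMO_DIR"
# One 8k page of zeros; raise count= until the failing offset is covered.
dd if=/dev/zero of="$DEMO_DIR/0089" bs=8192 count=1 2>/dev/null
wc -c "$DEMO_DIR/0089"
```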
steve wolfe
---(end of broadcast)-
I would very much like to see the same test with Fsync on.
A test that does not reflect real-world use has less value than one that
just shows how fast it can go.
>
> For a read-only database, fsync could be turned off. For any other
> system it would be hare-brained and nobody in their right mind
> The only time I've seen high cpu and memory bandwidth load with
> near-zero i/o load like you describe was on Oracle and it turned out
> to be an sql optimization problem.
> What caused it was a moderate but not very large table on which a very
> frequent query was doing a full table scan (= seq
> I wanted to do some quick load testing on a postgres database. Does
> anyone have any tips on how to go about doing this?
> Thanks much.
Sure. Test after the manner in which the database is normally used,
and with "real-world" data.
I've seen far too many people "benchmark" a database sy
> Is there any way of reading the whole database into memory? By changing
> the
> number of shared buffers the first query still takes long time and the
> subsequent similar queries are fast.
> If there is way of pinning the database in the memory all similar
> queries will take the same time. My
> What I said: "SCSI disks cost a lot more than comparable IDE disks."
>
> What you said: "No, because I found some cheap SCSI disks that
> don't have comparable IDE models."
That's not what I said. If you're going to quote me, get it right.
> My statement only operates in the domain where
>
> > (As an aside, one person was in a heated argument about how much
> > cheaper IDE was than SCSI. I got on pricewatch, found some prices,
> > and would have been able to put together a very fast SCSI system for
> > the same price as his IDE array.)
>
> That's nuts: SCSI disks cost a lot more
> Tuning is somewhat of a black art to get the right balance. If you
> have to make a choice, buy fewer processors, faster disks, and as much
> RAM as the board will handle.
Wow. I'd buy more RAM and processors, and maybe skimp a *little* on the
disks. The RAID array in my machine is made u
> I'm not sure how much a 2+ way system will help. Most of the db work
> will be done in one long serialized process, and that can't be spread
> out over multiple processors (with current day postgresql).
That's assuming that only one query will ever be executed at once. As a
new backend is
> I'd probably go for a 3ware RAID instead... IDE disks are so much
> cheaper nowadays than SCSI, and the premium isn't really justifiable
> anymore.
Having used IDE and SCSI disks, when I'm serious about performance, IDE
doesn't even enter my mind. Also, over on the XFS list, there are a few
> > 2) More importantly, is it possible to prevent a customer from
> > peeking into said database once it is deployed on their machine?
ROT13 it, then threaten them with the DMCA.
(Yes, that was a joke.)
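(Though for the record, ROT13 really is just one tr invocation away, which is why it's a joke and not protection:)

```shell
# ROT13 with tr: rotate every letter 13 places; running it twice is a no-op.
echo 'SELECT * FROM secrets;' | tr 'A-Za-z' 'N-ZA-Mn-za-m'
```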
steve
> None of them. Run FreeBSD. It's better.
Or, it will be, once the SMP code is improved. : )
steve
> > 1) Distribution of Linux to have the largest number of "out of the
> > box" security holes. Check back and look at the security reports.
> > Count them if you insist.
>
> And check for the number of them being Red Hat specific.
I consider things like the portmapper being enabled by default
> > Even though it may appear that your server is doing a lot, it's not
> > facing the load of a highly scaled enterprise level e-commerce site,
> > where RedHat just doesn't cut it.
>
> That claim is bogus. Red Hat Linux is the number one linux by far in
> enterprise deployments.
Well, Microso
> Previous to version 7.1, RHL wasn't very secure by default. This is
> one of the most common complaints I hear. 7.1 can be made quite secure
> out of the box without any special config -- just leave the firewall
> config at the default of 'HIGH' -- of course, I've now heard complaints
> that it
> ...This is not the same in my book, since I don't care
> to run RHL in any kind of production environment...
>
>
> What is it about RHL that various people wouldn't
> recommend running it in a production environment?
> I don't have a contrary view, so much as I'd like to
> know what's specifical
> > In my experience DB pricing structures seem to be a mystery whichever
> > platform you look at :)
>
> 20k per CPU. See:
Word is that the 2006 Microsoft automobiles will have similar pricing
structures. You'll pay $8,000 per year for each "seat-license" that you
want, and another $2,000
> THESE ARE NOT WHINING GRIPES. They are perfectly valid points about
> how the current website design is flawed. Your responses were pretty
> much "Make your own bookmark", "it doesn't work", and "you aren't
> working hard enough to find things." This attitude is unacceptable.
I thought tha
> > Will PostgreSQL take advantage of more than 1 CPU? If so, are there
> > any benchmarks showing how it scales with multiple CPUs?
We run PG on a quad xeon, and it works wonderfully. From PG's design
of forking off a new backend for each connection, it is inherently
scalable, as each backen
> It looks like Red Hat has announced an open source database called
> Red Hat Database:
>
> (via slashdot:)
>
> http://dailynews.yahoo.com/h/cn/20010619/tc/red_hat_to_play_in_oracle_s_arena_1.html
>
> Why is this not PostgreSQL? Why can't the Red Hat folks sell support to
> PostgreSQL, rather than
Also the comment that RHDB would be better for small businesses than
large ones makes me think that it's not exactly high-performance or
full-featured. : )
steve
> > I mean allowing an unlimited amount of connections (as much as the
> > process table will allow) that are served in a round-robin basis by
> > oracle, much like an httpd process. I'm tired of having to up the
> > number of user connections just because we get an unprecedented surge
> > in interes
> > Let's compare removing the RPM's:
>
> rpm -e `rpm -qa |grep postgresql`
Nice. I like it.
> > #rpm --erase php-pgsql-3.0.15-2
>
> That's not what you're doing for your manual install...
>
> > rm -rf ~postgres/*
>
> So postgres doesn't install its binaries in /usr/local/bin, libraries
> in
> On Thursday 31 May 2001 16:22, Steve Wolfe wrote:
> > something else fills up /var, PG isn't hosed. And if PG fills up its
> > partition, other services aren't hosed.
>
> Make a partition mounted on /var/lib/pgsql. :-)
Touche!
> > Now, p
> > Unfortunately, I can't just "compile" since I need to be able to
> > replace my current 7.0.3 installation, installed via RPMs. How do I
> > go about this so I don't mess everything up (leftover files and such,
> > in addition to the mandatory pg_dump) ?
>
> Install the "compiled" version some
> > Now, play some villainous music, and enter RedHat wearing a black
> > cape, with small, beady eyes.
>
> I don't have a cape, but I do have a red hat. And blue eyes, normal
> size.
I was going for the melodrama. : )
> > They insist that an OS should not touch /usr/local, and they're
> > ri
> What PG version are you using?
>
> IIRC, in pre-7.1 code, backends allocate OIDs in blocks of 32 (?? more
> or less anyway); so if a backend uses one OID and then exits, you wasted
> 31 OIDs. This does not happen anymore with 7.1, though.
>
> Another possibility is that you're creating lots of
> > Also, when the OID's reach the limit of an int4, if I recall
> > correctly, they simply wrap around, and things keep working unless
> > you depend on unique OID's. Is that correct?
>
> That's correct.
>
> cheers,
> t.
>
> p.s. and rumor has it that the universe will start shrinking as soon
> > Perhaps someone should write a platform-specific FAQ for Linux that
> > answers this question, so that the link provided would actually be
> > useful.
>
>
> http://www.ca.postgresql.org/devel-corner/docs/postgres/kernel-resources.html#SYSVIPC
Perhaps it should be put in the platform-specific F
> I get this error when trying to start postgres with a block size > the
> default of 64...
>
> [postgres@roam pgsql]$ /usr/local/pgsql/bin/postmaster -B 2000 -D
> /usr/local/pgsql/data
> IpcMemoryCreate: shmget failed (Invalid argument) key=5432001,
> size=33652736, permission=600
> This type of
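The usual cause is the kernel's SHMMAX limit being smaller than what -B asks for. A rough sizing sketch -- buffers times block size, which is an approximation rather than the exact formula (the server adds per-connection overhead on top):

```shell
# Rough estimate of the shared memory that "postmaster -B 2000" requests.
BUFFERS=2000
BLCKSZ=8192               # default postgres block size
NEEDED=$((BUFFERS * BLCKSZ))
echo "need roughly $NEEDED bytes of shared memory"
# On Linux, check the current kernel limit and (as root) raise it:
#   cat /proc/sys/kernel/shmmax
#   echo 134217728 > /proc/sys/kernel/shmmax
```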
> Yes they are better web servers than DBMS servers, but if your
> database is small enough to cache in RAM then who cares if
> the Netra uses slow disks?
>
> All that talk on this list about Linux vs. BSD is silly.
> Why bother when you can have Solaris 8 on SPARC hardware?
Easy: Cost.
(And,
> First off, I'd just like to thank everyone for their help with my last
> problem. It worked, but now it gives me another error:
>
> IpcMemoryCreate: shmget failed (Permission denied) key=5432010, size=144,
> permission=700
> This type of error is usually caused by an improper
> shared memory or
> It's not surprising to see software raid outperforming hardware raid (both
> of modern vintage).
That's provided, of course, that you have a good amount of free CPU
cycles. On machines that are otherwise heavily CPU-loaded, software raid
has been (in my experience) horrible.
steve
--
> Trying to rebuild v6.5.3 and the procedure calls for
> gmake.
>
> A google search is a dog-chasing-tail exercise.
>
> Can anybody help with a URL or an alternative.
www.freshmeat.net is a good place to look for sources, but in this case,
try searching for "gnu make", which will yield:
h
> On Fri, Mar 16, 2001 at 03:53:22PM -0600, Alex Howansky wrote:
> > levels. Still, you'd think that even a "slow" RAID-5 configuration
> > would be faster than a $98 IDE drive...
>
> I wouldn't.
You'd be wrong. : )
I've also copied large amounts of data from an IDE drive to an old AMI
Mega
> Yes, it is RAID-5 on the big box. Unfortunately, I don't have any
> spare RAID equipped boxes sitting around, so I can't experiment with
> the different RAID levels. Still, you'd think that even a "slow"
> RAID-5 configuration would be faster than a $98 IDE drive...
Yes, it certainly should b
> > will give a divide by zero error as A=(y>0) and B=(x/y>1) can be
> > evaluated in any order (A and B = B and A). I obviously would like
> > (y>0) to happen first, but I don't see how this can be achieved..
> > Any ideas?
I have one idea that would be nifty to implement. In some compilers, you
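Meanwhile, the workaround I'd reach for is to move the guard somewhere the evaluation order is guaranteed -- in SQL, a CASE expression tests its WHEN before evaluating the matching THEN (with caveats around constant-folding). The shell's && makes the same left-to-right promise, which is easy to demonstrate:

```shell
# With &&, the right-hand test never runs when y = 0, so the division
# is never attempted -- unlike SQL's "A AND B", which may evaluate B first.
x=10
y=0
if [ "$y" -gt 0 ] && [ $((x / y)) -gt 1 ]; then
  echo "x/y > 1"
else
  echo "guarded: division skipped"
fi
```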
> It is not performance I would be concerned about, but reliability. NFS
> has no state for reliability. I have to ask why they are using NFS
> rather than putting it on a drive local to the machine. If they say
> they want to share the data between two machines, that is even crazier.
They m
OK, one of my tables was getting duplicate entries, and I thought I'd
create a unique index on a particular field. So, I created the index:
domains=# create unique index idx_domain_name on domain (domain_name);
CREATE
Then, tried a vacuum:
domains=# vacuum analyze;
NOTICE: Index idx_domai
> Shouldn't it be possible to build vacuum as an ongoing internal PG
> process, instead of a separate operation? How does Oracle bypass this?
> Must be some way that can be implemented.
Well, here's what it comes down to: Do you want updates to happen
quickly, and vacuum when load is low, or
> I don't think recovery mode actually does much in 7.0.* --- I think it's
> just a stub (Vadim might know better though). In 7.1 it means the thing
> is replaying the WAL log after a crash. In any case it shouldn't
> create a lockup condition like that.
>
> The only cases I've ever heard of whe
From: <[EMAIL PROTECTED]>
> In order to escape from .htaccess, I want to save user passwords in my
> database.
> However, passwords appear clear in the database.
> How passwords can be crypted in the database?
>
> I use PHP 3.0.16, Cobalt Raq3i.
Hmmm linux users that haven't heard of
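The usual scheme, sketched below: store a one-way hash instead of the password, and compare hashes at login. md5sum is used purely as an illustration, and the table/column names in the comment are made up:

```shell
# Hash the password once and store HASH instead of the cleartext.
# (echo -n avoids including the trailing newline in the hash.)
HASH=$(echo -n 'password' | md5sum | cut -d' ' -f1)
echo "$HASH"
# At login, hash the submitted password the same way and compare, e.g.
#   SELECT 1 FROM users WHERE name = '...' AND pw_hash = '<HASH>';
# (the users table and pw_hash column are hypothetical names)
```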
> I have a server for which PG_DATA = /home/maxtor, an NFS mounted
> disk. Normally it works great, but when I try to vacuum, it takes
> FOREVER. Should I not even try to use remote storage like this? Has
> anybody else run into a similar problem?
NFS is slow, and very CPU-intensive. On a 1
> > I am considering splitting the database into tables residing on separate
> > machines, and connect them on one master node.
> >
> > The question I have is:
> >
> > 1) How can I do this using PostgreSQL?
>
> You can't.
I'll jump in with a bit more info. Splitting tables across multiple
mac
> > (Incidentally, we've toyed around with developing a query-caching
> > system that would sit between PostgreSQL and our DB libraries.
>
> Sounds amazing, but requires some research, I guess. However, in many
> cases one would be more than happy with cached connections. Of
> cour
> Even after that, you have a long way to go before you will hit 1000
> transactions per second from any SQL database.
I guess they could always buy a few Sun E1's on the backend, and a
large room of rack-mountable PC's for web/CGI serving. Nothing like
plopping down ten or twenty millio
> Even after that, you have a long way to go before you will hit 1000
> transactions per second from any SQL database.
Since my last post probably wasn't too useful, here's some information
that might be a little more help. It's a little long, I know, but hopefully
it will be of use to som
> Or they could buy a single IBM S/390, run Linux/390 and PostgreSQL on
> that. Probably would cost less, and be more reliable. And they can
> always load another Linux/390 VM -- an S/390 can run something like
> 41,000 virtual machines each running Linux/390 and Apache.
Yeah I'm very opt
> Use the "-F" option.
>
> I start PostgreSQL with this line:
> su -l postgres -c "/usr/bin/pg_ctl -D $PGDATA -p /usr/bin/postmaster -o
> '-i -B 4096 -o -F' start >/dev/null 2>&1" < /dev/null
Interesting... trying to start postgres 7.0.2 with the "-F" simply returns
a list of options to me, a
> > OK, here's a situation. One of the programmers at your company runs
> > the disk out of space. You're going to go bonk him on the head, but
> > first, there are more pressing matters. PostgreSQL 6.5 has horked up
> > the tables, and needs to be fixed. 7.0 is released, which has a fix
> > for the
> Red Hat is a distribution. It should not put anything in /usr/local,
> which should be reserved for the _local_ administrator's use.
Hmm so, if the local administrator wants to compile the source, it
should go in /usr/local. If he wants to use a package manager, it should go
somewhere el
> > The last thing that a system admin needs when upgrading PostgreSQL
> > is "Oh, crap, I forgot to uninstall the RPM of the old one first."
>
> Then upgrade the RPM's. It isn't hard.
OK, here's a situation. One of the programmers at your company runs the
disk out of space. You're going t
> In some cases yes, in some no. Simple text should compress/decompress
> quickly and the cpu time wasted is made up for by less hardware access
> time and smaller db files. If you have a huge database the smaller db
> files could be critical.
Hmm... that doesn't seem quite right to me. Whe
> > A trick you can use in 7.0.* to squeeze out a little more space is
> > to declare your large text fields as "lztext" --- this invokes
> > inline compression, which might get you a factor of 2 or so on typical
> > mail messages. lztext will go away again in 7.1, since TOAST supersedes
> > it,
> My current backups made with pg_dump are currently 1.3GB. I am wondering
> what kind of headaches I will have to deal with once they exceed 2GB.
>
> What will happen with pg_dump on a Linux 2.2.14 i386 kernel when the
> output exceeds 2GB?
There are some ways around it if your program suppor
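The classic tricks are piping the dump through gzip, or splitting it into sub-2GB chunks you can cat back together. The production pipelines (a sketch, from memory of the era's FAQ) are in the comments; the runnable part demonstrates the split/rejoin mechanics on a small generated file:

```shell
# In production (sketch):
#   pg_dump dbname | gzip > dump.sql.gz          # compress below 2GB, or
#   pg_dump dbname | split -b 1000m - dump.part. # split into <2GB chunks
#   cat dump.part.* | psql dbname                # restore from the chunks
# Demo of the split/rejoin mechanics with tiny chunks:
cd /tmp
seq 1 1000 > fake_dump.sql
split -b 1024 fake_dump.sql fake_dump.part.
cat fake_dump.part.* > fake_dump.rejoined
cmp fake_dump.sql fake_dump.rejoined && echo "chunks rejoin losslessly"
```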
> can anyone recommend a windows ui tool for viewing data in postgres?
> essentially i want to use something other than the command line for
> looking at results. as an example, i'm looking for the equivalent of
> TOAD for Oracle. is the easiest approach to connect in MS Access via
> odbc?
Wel
Can any of you running PostgreSQL on Alphas comment on the performance vs.
Intel-based servers?
steve
> here is the vacuum verbose :
> pqReadData() -- backend closed the channel unexpectedly.
> This probably means the backend terminated abnormally
> before or while processing the request.
> We have lost the connection to the backend, so further processing is
> impossible. Termina
> I've seen brief posts regarding server loads, distribution for heavy
> loads, etc, but don't recall if there were any solutions...
>
> Anyways...We are running a Postgres DB against multiple frontend
> webservers. For most of the time, everything runs fine. Then, all of a
> sudden, everything
> Actually, more RAM would permit you to increase both the -B parameters as
> well as the -S one ... which are both noted for providing performance
> increases ... -B more on repetitive queries and -S on anything involving
> ORDER BY or GROUP BY ...
For a while now, I've been meaning to investi
> 1) Using only ODBC drivers. I don't know how much of an impact a
> driver can make but it would seem that using native drivers would
> shut down one source of objections.
Using ODBC is guaranteed to slow down the benchmark. I've seen native
database drivers beat ODBC by anywhere from a facto
> How suitable is PG for doing larger databases? The need I am
> considering would be a financial database that does maybe up to 100k
> transactions/day.
In a day? I think a lot of us do that much in an hour
> Obviously, it needs to be very reliable, and have
> minimal scheduled, and no
How does one escape parenthesis in a regular expression in Postgres?
An example is:
select * from subcategories where subcategory ~* '401(k)';
Which does not work. I've tried:
select * from subcategories where subcategory ~* '401\(k\)';
That still didn't work. Any advice would be much
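Two layers of escaping are fighting you: the regex wants \( for a literal paren, but PostgreSQL's string-literal parser consumes one level of backslashes first, so (if memory of the 7.x behavior serves) the query needs them doubled, as in ~* '401\\(k\\)'. The escaping idea itself, shown where no string-literal layer is in the way:

```shell
# A backslash before ( and ) makes them literal in an extended regex.
# (Single quotes keep the shell from eating the backslashes; in psql the
# string literal DOES eat one layer, hence the doubled '401\\(k\\)'.)
echo 'My 401(k) plan' | grep -E '401\(k\)'
```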
> Since most RAID servers can't even flood a 100 mbit connection,
> you're more than safe with that much bandwidth if most of your traffic
> is going to be database related. You might want to factor in all the
> other network traffic that will be going over those lines though. For
> instance, if t
> 1) Postgres 6.5.3 is a known quantity, and there are more people
> familiar with the common, critical problems, so there are more people
> who can help solve problems.
> 2) 7.0.2 undoubtedly has some new bugs and problems that are not
> apparent yet - bugs that could sink a commercial bus
> I tried this and got what I thought was a fairly reasonable error
> message:
>
> FATAL: StreamServerPort: bind() failed: Permission denied
> Is another postmaster already running on that port?
> If not, remove socket node (/tmp/.s.PGSQL.5432) and retry.
> postmaster: cannot creat
I've run across yet another cause that can make Postgres not be able to
bind to the port it desires (or any other port, for that matter). I don't
recall it being discussed before, so I'll throw it out.
When postgres binds to any port, it likes to write a file in /tmp, along
the lines of "
> I've got the same problem with readline and psql using 7.0 RC1.
>
> I found that there was something odd when ran configure. Here's the
> readline stuff I grepped out of config.cache:
>
> ac_cv_header_readline_h=${ac_cv_header_readline_h='no'}
>
ac_cv_header_readline_history_h=${ac_cv_header_
> > /var/lib/pgsql reports that we're running 6.3 - which I don't find
> > on the FTP site. Is that not a valid distribution number, or is the
> > source not available?
>
> Oh, I remember running 6.3. That was a looong time ago. Current
> version is 6.5.3 and 7.0 is about to be released. I wou
First, I really appreciate the suggestions. On to a reply...
> Restore the /usr/lib/perl5/site_perl directory from a working backup.
You'd think so, wouldn't you? : )
Unfortunately, the last time we put a cleaning tape in the DAT drive, the
drive died, and now needs to be replaced. Be
/var/lib/pgsql reports that we're running 6.3 - which I don't find on the
FTP site. Is that not a valid distribution number, or is the source not
available?
steve
Today, our newest employee thought he'd upgrade the Perl interface to
Postgres. So, he went into the source directory, and did "gmake install"
in the /src/interfaces/Perl5 directory. Now, everything's broken. Trying
to use it, we get:
perl: error in loading shared libraries:
/usr/lib/perl5
> maybe we need a keyword DOS|UNIX or perhaps TEXT|BINARY to tell
> postgresql to pick DOS style or UNIX style line endings...
Maybe we just need to make sure that the files we are using are in the
correct format for the platform they're being processed on. ; )
steve
Our company is getting to where we need to think of turning our Postgres
server into a cluster. We've come up with a few very viable ideas, but
we'd love to hear of anything that others have done along this line.
steve
Grr
Although I'm no longer getting LockRelease errors on this table, a daily
vacuum analyze still shows:
NOTICE: Rel reef: Uninitialized page 492 - fixing
NOTICE: Rel reef: Uninitialized page 498 - fixing
NOTICE: Rel reef: Uninitialized page 499 - fixing
The database is a fledgling mai
> > > I'm getting "LockRelease: locktable lookup failed, no lock"
> > > trying to insert a row into a database.
> >
> > Well, I think I tracked down the problem - an index on a "text"
> > field. Once I removed the index, everything works well. The odd thing
> > is that not *all* inserts would
Today, I told someone the syntax for "count" without thinking, and
got it wrong - and in so doing, I found some interesting behavior.
select count('table.fieldname');
works. however...
select count('table.fieldname') where table.fieldname='something';
not only barfs, it kills your connect
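For the record: the quotes make 'table.fieldname' a string constant (so the first form counts that constant, not the column), and the second form also lacks a FROM clause naming the table. What was probably wanted -- a sketch using placeholder names in place of the real table and field:

```shell
psql yourdb <<'SQL'
SELECT count(fieldname) FROM tablename WHERE fieldname = 'something';
SQL
```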