On Wed, Aug 27, 2008 at 10:53 PM, Tom Lane [EMAIL PROTECTED] wrote:
No, you were right the first time: just feed the dump script to psql.
Ok. Tried that. Because there's no database, I have to execute the
psql command as postgres otherwise it doesn't work.
So here's my command:
$ psql
Ok. Here is what it is.
OLD DB:
Since 2002. May contain non-UTF8 data. But I thought I had modified it
all when I converted it to a UTF-8 pgsql database (it was originally
Mysql). The database works very well on a very busy website.
Everything on that website is now UTF-8. I wish to mirror this
Is there a mechanism to check exactly which row may have characters
that are not UTF8? I am trying with this kind of a command:
select id from employee
where modify_date between '2008-03-01' and '2008-07-01'
and joint_field_of_name_etc <> convert(joint_field_of_name_etc, 'UTF8')
This is of course
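The same check can be prototyped client-side before touching the database. A minimal Python sketch (the helper name is illustrative, not from the thread): fetch the suspect column as raw bytes (e.g. via a bytea cast or a non-UTF8 client encoding) and try to decode each value.

```python
def find_invalid_utf8(rows):
    """Return ids of rows whose value is not valid UTF-8.

    `rows` is an iterable of (id, raw_bytes) pairs.
    """
    bad = []
    for row_id, raw in rows:
        try:
            raw.decode("utf-8")
        except UnicodeDecodeError:
            bad.append(row_id)
    return bad
```

Rows flagged this way are the ones that would trip a UTF-8 restore.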
I hope this is a simple question. I checked the PG wiki but didn't
find an answer:
http://wiki.postgresql.org/wiki/RPM_Installation
I have my 8.2.3 RPMs installed. I would like to upgrade to 8.2.9.
Because this is a minor version upgrade (within 8.2) I understand that
I should be able to upgrade
Thanks all. Turns out I didn't have any problems at all. I just ran
this command, all rpms together as Devrim Gunduz suggested -
--
rpm -Uvh postgresql-8.2.9-1PGDG.rhel4.i386.rpm \
    postgresql-contrib-8.2.9-1PGDG.rhel4.i386.rpm \
    postgresql-devel-8.2.9-1PGDG.rhel4.i386.rpm
On 8/26/08, Scott Marlowe [EMAIL PROTECTED] wrote:
Slony replication lets postgresql accomplish this, which is really
quite impressive. We just upgraded from an 8.1 server to an 8.3
server via slony, and it went smooth as silk. db downtime was
measured in seconds.
Thanks for this
On 8/26/08, Phoenix Kiula [EMAIL PROTECTED] wrote:
On 8/26/08, Scott Marlowe [EMAIL PROTECTED] wrote:
Slony replication lets postgresql accomplish this, which is really
quite impressive. We just upgraded from an 8.1 server to an 8.3
server via slony, and it went smooth as silk
You need to download Slony 1.2.14, which supports both 8.2 and 8.3.
You'll find it under Quick downloads.
Ok done. Slony is installed. Now what? How should I install a new
database which is 8.3.3?
I was also told that the postgresql.conf settings across 8.2 and 8.3
are different so I
On 8/26/08, Tomasz Ostrowski [EMAIL PROTECTED] wrote:
I think nobody would guide you step by step. Either read documentation
and do it yourself or hire an expert:
Thanks. I suppose that spirit is quite evident in the documentation.
Why make it easy or easily understandable when you can win
Hi.
I have googled and googled for good, simple instructions to upgrade
from 8.2.3 to 8.3.3 (latest stable at this time?)
I am on a Cpanel interface. Use Apache and PHP for most of my websites.
This seems to be the most often quoted resource on forums etc:
If you're going to run slony, then the pg_dump|pg_restore step is
completely wasted. Slony will restore all the data again. You do
need to run pg_dump -s, of course.
Thanks to everyone who replied. We have no experience with this
Slony. Any simple instructions on installing it and
In the second SQL, I meant this:
WHERE modify_date > '2008-01-01'
On 16/02/2008, Phoenix Kiula [EMAIL PROTECTED] wrote:
I have a table with an index on a field called modify_date.
This works well if I have SQL which ends in
WHERE modify_date = '2008-01-01'
But if I try
I have a table with an index on a field called modify_date.
This works well if I have SQL which ends in
WHERE modify_date = '2008-01-01'
But if I try this condition:
WHERE modify_date > '2008-01-01'
This index is not used. The EXPLAIN tells me it needs to do a seq
scan. Why is this?
This table is vacuumed and analyzed every hour, so yes, it's been
analyzed recently.
These are the EXPLAIN ANALYZE outputs for both the equality condition
and the greater than condition:
orguser=# explain analyze select alias from clients where modify_date =
'2008-01-01' ;
On 17/02/2008, Andrej Ricnik-Bay [EMAIL PROTECTED] wrote:
On 16/02/2008, Greg Smith [EMAIL PROTECTED] wrote:
top -bc | tee topdata
That will save everything to a file called topdata while also letting you
watch it scroll by. Not as easy to catch the bad periods that way, the
output is
On 17/02/2008, Shashank Tripathi [EMAIL PROTECTED] wrote:
On 17/02/2008, Andrej Ricnik-Bay [EMAIL PROTECTED] wrote:
On 17/02/2008, Phoenix Kiula [EMAIL PROTECTED] wrote:
~ top -b -d 1 | awk -f top.awk | tee topdata
awk: top.awk:24: for(i=8;i<last;i++
awk: top.awk:24
Hi,
I have an index on the user_id field in the query below:
myuser=# delete from clients where user_id like '64.22.91.%';
DELETE 22
Time: 220324.975 ms
Is there any reason why it's taking 220 seconds to run this simple
query? There are about 3 million rows in this table.
How can I
On 15/02/2008, Shoaib Mir [EMAIL PROTECTED] wrote:
On Fri, Feb 15, 2008 at 3:55 PM, Phoenix Kiula [EMAIL PROTECTED]
wrote:
Hi,
I have an index on the user_id field in the query below:
myuser=# delete from clients where user_id like '64.22.91.%';
DELETE 22
Time: 220324.975
On 15/02/2008, Richard Huxton [EMAIL PROTECTED] wrote:
First guess is that it's not using the index. What does
EXPLAIN ANALYSE SELECT user_id FROM clients WHERE user_id LIKE '...'
show?
Check the list archives for locale and like and text_pattern_ops too -
that's a good place to check.
Actually my host has just told me that I have a number of hung
semaphores in my server. And he is relating them to postgresql. I am
not surprised, because this is the only utility that has issues. All
the rest is working (apache, mysql, exim, etc). Any thoughts on where
I should start looking for
On 15/02/2008, Shoaib Mir [EMAIL PROTECTED] wrote:
You do not need to restart the database server for that purpose as all you
need is the pgstattuple.so file copied to PG-HOME/lib folder.
Do the following (in case you have installed server from source):
- Go to the
I'm glad I didn't go from 8.2.3 to 8.3 straight!
http://ogasawalrus.com/blog/node/462
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
On 15/02/2008, Richard Huxton [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
On 15/02/2008, Richard Huxton [EMAIL PROTECTED] wrote:
Ah, more new information! This does seem to point to the load,
particularly if it's exactly the same query each time. So what do
top/vmstat etc show
On 15/02/2008, Richard Huxton [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
On 15/02/2008, Richard Huxton [EMAIL PROTECTED] wrote:
Are you sure the two sets of vmstat/top figures are from when PG was
crashing/running queries slow? Everything seems idle to me in those
figures
On 16/02/2008, Greg Smith [EMAIL PROTECTED] wrote:
On Sat, 16 Feb 2008, Phoenix Kiula wrote:
The script you suggested doesn't work:
tmp ./trackusage.sh
-bash: ./trackusage.sh: /bin/sh: bad interpreter: Permission denied
Try changing the first line to
#!/bin/bash
Thanks
Hi,
I have been running PG for quite a while now.
Suddenly, the postmaster either hogs up memory or just croaks and
doesn't respond. The write process has become horribly slow.
Could it be that my data has grown so large that it's becoming an
issue? Even connecting to PGSQL as postgres user in
Thanks. Comments below. (PS: I am still unable to connect to
postgresql even in SSH! I see this message:
psql: could not connect to server: Connection timed out
Is the server running on host localhost and accepting
TCP/IP connections on port 5432?
Yes of course the
The Amazon Dynamo framework is going to replace RDBMS?
http://www.readwriteweb.com/archives/amazon_dynamo.php
On 01/10/2007, Martin Marques [EMAIL PROTECTED] wrote:
1) Is MD5's weakness true?
Yes, but not really for use in password functionality. You are
very unlikely to repeat a password but in any case you will have the
user ID to make it unique.
2) Is there any sha1() functions in
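The salting idea mentioned above (using the user ID to make each stored hash unique) can be sketched outside the database too. A minimal Python illustration; the function name is hypothetical, and in PostgreSQL the same thing would be done server-side with a hashing function:

```python
import hashlib

def password_hash(user_id, password):
    # Salting with the user id makes identical passwords hash
    # differently per user, blunting precomputed-table attacks.
    salted = (user_id + ":" + password).encode("utf-8")
    return hashlib.sha1(salted).hexdigest()
```

Two users with the same password now get different digests.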
A vacuum analyze that used to take about 3 minutes on a table of about
4 million rows is now taking up to 25 minutes. I changed the
statistics on two index columns to 100 recently, to improve planner
estimates. Could this have something to do with the lack of speed?
After I clustered the primary key index of a table with about 300,000
rows, my vacuum/analyze on that table is taking too long ... over 15
mins when originally it was 15 seconds! Nothing else has been changed
with this table. Is clustering not good for vacuums?
On 25/09/2007, Michael Fuhr [EMAIL PROTECTED] wrote:
How can I remove characters that form a part of regular expressions?
Why do you want to do that?
Because these values were inserted into the DB due to a faulty
application. So cleansing was called for.
I just ended up doing it with
On 24/09/2007, Vivek Khera [EMAIL PROTECTED] wrote:
my FSM is way bigger than I ever use (vacuum never reports shortage)
and I still get bloat that needs to be purged out with a reindex on
occasion.
Vivek,
I feel your pain. But I seem to have (mostly) solved my problem in three ways:
1.
On 25/09/2007, Vivek Khera [EMAIL PROTECTED] wrote:
Recommending I run vacuum intermixed with the data purge is a non-
starter; the vacuum on these tables takes a couple of hours. I'd
never finish purging my data with that kind of delay.
...
I will investigate the fill-factor. That seems
On 25/09/2007, Anoo Sivadasan Pillai [EMAIL PROTECTED] wrote:
Hi,
On further testing I found the same behaviour with unique keys too. The
following batch can reproduce the behaviour.
CREATE TABLE master ( m1 INT primary key , m2 int unique ) ;
INSERT INTO master VALUES ( 1, 1 ) ;
Hi,
How can I remove characters that form a part of regular expressions? I
would like to remove all instances of the following characters:
[
]
\
+
Given that these all mean something in regexp, I am trying to prefix
them with a backslash, but it doesn't work. I tried the following:
update
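The stripping the post is attempting can be sketched in Python. Inside a bracketed character class, `[`, `]` and `\` need escaping while `+` is already literal (escaping it anyway is harmless); the helper name is illustrative:

```python
import re

# Character class matching the four characters the post wants removed:
# '[', ']', '\' and '+', each escaped so they are taken literally.
STRIP_CLASS = re.compile(r"[\[\]\\+]")

def strip_meta(value):
    return STRIP_CLASS.sub("", value)
```

The same pattern idea carries over to `regexp_replace` on the server, with the escaping doubled as string-literal rules require.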
On 19/09/2007, Gregory Williamson [EMAIL PROTECTED] wrote:
...
Can't speak directly to PostgreSQL but in Informix the fill factor is
useful for tweaking indexes. A very high fill factor is useful for tables
that are static -- any inserts or changes to the index trigger a *lot* of
moving of
On 19/09/2007, Richard Broersma Jr [EMAIL PROTECTED] wrote:
--- Phoenix Kiula [EMAIL PROTECTED] wrote:
2. Is this fill factor enough to have on the table, or should I also
do a fill factor for specific indexes? Or both the table and the
index? (I have four btree indexes on the table)
I
Unfortunately, folks like Phoenix are looking for yes/no answers, and
with many of these questions, the _correct_ answer is it depends on
your workload
I wanted merely to simplify the advice that gets dispensed on this
list, often conflicting to novice ears like mine. So I appreciate your
On 18/09/2007, Sander Steffann [EMAIL PROTECTED] wrote:
Hi,
Can I add SHM with merely by managing the entry in sysctl.conf? My
current values:
kernel.shmmax = 536870912
kernel.shmall = 536870912
My shared_buffers in postgresql.conf is 2. From the website
Thanks for a very informative post! One question:
I'm not sure how to find the current value, but a smaller fill factor
on busy tables should lead to less fragmentation, thus more efficient
indexes over time. Keep in mind that a smaller fill factor will also
lead to larger indexes
The manual is vague. Several threads about this, in language that is
ambiguous to me.
So a YES/NO question:
Q: To get rid of index bloat, is a VACUUM ANALYZE enough? Or must I
reindex/cluster indexes?
Thanks.
Well first question: how can I check if autovacuum is working?
On 04/09/2007, Tom Lane [EMAIL PROTECTED] wrote:
Phoenix Kiula [EMAIL PROTECTED] writes:
Basically, what I am missing is some info on actually tweaking the
postgresql.conf to suit my system.
No, that's *not* what you're
We have a system that came with pg 8.1.9. When I try to uninstall
those RPMs, it works for all the rpms except for libs:
rpm -ev postgresql-libs-8.1.9-1.el5
error: Failed dependencies:
libpq.so.4 is needed by (installed) apr-util-1.2.7-6.i386
I am not sure what this is about and
On 14/09/2007, Phoenix Kiula [EMAIL PROTECTED] wrote:
We have a system that came with pg 8.1.9. When I try to uninstall
those RPMs, it works for all the rpms except for libs:
rpm -ev postgresql-libs-8.1.9-1.el5
error: Failed dependencies:
libpq.so.4 is needed by (installed) apr
On 11/09/2007, Phoenix Kiula [EMAIL PROTECTED] wrote:
The suggestion in this thread that a regex index will come into play
only when the WHERE condition specifically mentions it was indeed the
key for me.
Ok, I've hit a snag about this index. I think it's to do with how my
regex
On 13/09/2007, Alvaro Herrera [EMAIL PROTECTED] wrote:
Mike Charnoky wrote:
Alvaro: The cluster suggestion probably won't help in my case since data
in the table should already be naturally ordered by date.
It's not helpful only for reordering, but also for getting rid of dead
tuples.
On 13/09/2007, Tom Lane [EMAIL PROTECTED] wrote:
Phoenix Kiula [EMAIL PROTECTED] writes:
Apart from creating a new table, indexing it, then renaming it to
original table -- is there an alternative to CLUSTER that doesn't
impose a painful ACCESS EXCLUSIVE lock on the table? We
I have a varchar ID field which captures a user account. If there is
no user id, then we just store the IP address of the user. Realizing
that we would like to index only those tuples where the user ID is not
an IP, I am trying to create a partial index as follows:
CREATE INDEX
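The predicate such a partial index needs ("user_id is not an IP") can be prototyped client-side first. A rough dotted-quad format test in Python (format only, it does not range-check octets; names are hypothetical):

```python
import re

# Four groups of 1-3 digits separated by dots, anchored at both ends.
IPV4_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def looks_like_ip(user_id):
    return bool(IPV4_RE.match(user_id))
```

The partial index would use the negation of this pattern; note the planner only picks such an index when the query's WHERE clause implies the same predicate.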
Hello
We're trying to look for the most optimal config for a heavy duty
production server, and the following two are falling in the same price
range from our supplier:
Option 1:
2 x 300GB SCSI (10k rpm) with SAS and RAID 1
Option 2:
4 x 300GB SATA2 (7200 rpm, server grade) with RAID 10
I am
On 11/09/2007, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
It depends what you want to do with your database.
Do you have many reads (select) or a lot of writes (update,insert) ?
This one will be a hugely INSERT thing, very low on UPDATEs. The
INSERTS will have many TEXT fields as they are
On 11/09/2007, Tom Lane [EMAIL PROTECTED] wrote:
Richard Huxton [EMAIL PROTECTED] writes:
The planner isn't smart enough to figure out which queries can use this
index by examining them, it just looks for (NOT paid) in the WHERE
clause and if it doesn't find it, ignores the index.
Well,
Thanks Greg.
You're not going to get a particularly useful answer here without giving
some specifics about the two disk controllers you're comparing, how much
cache they have, and whether they include a battery backup.
Scenario 1, SATAII:
- Server: Asus RS120-E4/PA4 Dedicated Server
-
On 12/09/2007, Ron Johnson [EMAIL PROTECTED] wrote:
How (on average) large are the records you need to insert, and how
evenly spread across the 24 hour day do the inserts occur?
There will be around 15,000 inserts in a day. Each insert will have
several TEXT columns, so it is difficult to
On 12/09/2007, Greg Smith [EMAIL PROTECTED] wrote:
On Wed, 12 Sep 2007, Phoenix Kiula wrote:
Scenario 1, SATAII:
- Server: Asus RS120-E4/PA4 Dedicated Server
- CPU: Single Intel Quad Core Xeon X3210 processor, 2.13GHz
- RAM: 4Gb DDR2 Memory 667Mhz
- Hard disk: 4 x Seagate
On 04/09/07, Tom Lane [EMAIL PROTECTED] wrote:
Phoenix Kiula [EMAIL PROTECTED] writes:
On 04/09/07, Richard Broersma Jr [EMAIL PROTECTED] wrote:
That's odd, I wonder why the EXPLAIN ANALYZE time is so much less than
the logged select statement times?
Because the statement has been
A couple of questions about the most_common_vals stuff in pg_stats
for a high traffic table:
1. Can I tell the stats collector to collect only values of a column
where a certain regex is matched? It is currently collecting the 500
values where most of them are values that I don't want, so it's
On 03/09/07, Tom Lane [EMAIL PROTECTED] wrote:
Phoenix Kiula [EMAIL PROTECTED] writes:
most_common_vals will (and should) be empty if there aren't actually any
common values, but aren't you getting a histogram? Exactly what
performance do you think will be improved?
Lots of posts here
On 03/09/07, Alban Hertroys [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
As I understand it it's a sample of how the data is distributed.
Probably it's based on statistical mathematics that specifies a minimum
size for a representative sample of a given data set. It boils down to:
If you want
On 04/09/07, Richard Broersma Jr [EMAIL PROTECTED] wrote:
--- Phoenix Kiula [EMAIL PROTECTED] wrote:
LOG: duration: 93473.282 ms statement: select t_info, dstats, id
from trades where t_alias = '17huv' and status = 'Y'
---
Index Scan using trades_unique_t_alias
On 01/09/07, Ashish Karalkar [EMAIL PROTECTED] wrote:
Hello All,
I want to export data from PostgreSQL tables to MS Excel.
Is there any way?
Sure, write SQL in a program (php, perl, jsp, asp) to dump the tables
in HTML <table><tr><td> rows format. Then import that HTML page
into Excel
Hello,
I have a simple query as follows. It joins two very straightforward tables.
SELECT
trades.id,
trades.url,
trades.alias,
tradecount.t_count,
tradecount.u_count
FROM trades
LEFT JOIN tradecount ON trades.id = tradecount.id
WHERE trades.user_id = 'jondoe' and trades.status = 'Y'
On 01/09/07, Alban Hertroys [EMAIL PROTECTED] wrote:
On Sep 1, 2007, at 11:46, Phoenix Kiula wrote:
.
..snip
However, there's a nested loop in there as the EXPLAIN ANALYZE shows
below. What is causing this nested loop?
It looks like it's used to match trades to tradecounts. I think
I am getting this message when I start the DB:
psql: FATAL: could not access status of transaction 0
DETAIL: Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.
What is this about and how do I solve this? A df -h on my system shows this:
Filesystem Type
On 31/08/2007, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
I am getting this message when I start the DB:
psql: FATAL: could not access status of transaction 0
DETAIL: Could not write to file pg_subtrans/01F8 at offset 221184:
No space left on device.
What
On 31/08/2007, Josh Tolley [EMAIL PROTECTED] wrote:
On 8/31/07, Zoltan Boszormenyi [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
In addition to what others have already said, when things calm down
you should consider implementing some sort of monitoring system that
is configured to start
We're moving from MySQL to PG, a move I am rather enjoying, but we're
currently running both databases. As we web-enable our financial
services in fifteen countries, I would like to recommend the team that
we move entirely to PG.
In doing research on big installations of the two databases, I read
Hi,
We have big blobs of text (average 10,000 characters) in a database,
from which we would like to discover the most often repeated words or
phrases. Can tsearch be used for this kind of pattern search? I
suppose it's Text Mining 101 sort of stuff, nothing complex.
TIA!
On 25/08/07, Oleg Bartunov [EMAIL PROTECTED] wrote:
On Fri, 24 Aug 2007, Phoenix Kiula wrote:
Hi,
We have big blobs of text (average 10,000 characters) in a database,
from which we would like to discover the most often repeated words or
phrases. Can tsearch be used for this kind
On 23/08/07, Scott Marlowe [EMAIL PROTECTED] wrote:
Yeah, I'm not the biggest fan of CR, but it's worked with PostgreSQL
for quite some time now. We had it hitting a pg7.2 db back in the
day, when hip kids rode around in rag top roadsters and wore tshirts
with cigarettes rolled in their
Hi,
On 23/08/07, Dmitry Koterov [EMAIL PROTECTED] wrote:
And here are results of built-in Postgres test script:
Can you tell me how I can execute this script on my system? Where is
this script?
Thanks!
[Sorry for the length of this post. It stretched as I provided as much
info as possible..]
So the rubber meets the road. We've put postgresql in a production
environment with some heavy simultaneous usage. It works well in
general, but often PG doesn't respond. How should I test what is going
On 19/08/07, Phoenix Kiula [EMAIL PROTECTED] wrote:
[Sorry for the length of this post. It stretched as I provided as much
info as possible..]
So the rubber meets the road. We've put postgresql in a production
environment with some heavy simultaneous usage. It works well in
general
On 19/08/07, Magnus Hagander [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
.snipped
I can merrily increase the max_fsm_pages directive, but the manual
also caveats that with this can use more system V memory than
available on your system. My full verbose vacuum info below includes
On 19/08/07, Magnus Hagander [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
There are ways to do this, but if you can't just use timeouts to expire
from the cache, things can become pretty complicated pretty fast. But
perhaps you can isolate some kinds of queries that can be cached for n
On 19/08/07, Magnus Hagander [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
On 19/08/07, Magnus Hagander [EMAIL PROTECTED] wrote:
should we do one (VACUUM FULL) now given that we've overrun our
max_fsm_pages?
Yes, but not until you've fixed it. And only once.
Fixed what
On 19/08/07, Magnus Hagander [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
...snip
There should be a line like this at the end of a VACUUM VERBOSE command:
INFO: free space map contains 33 pages in 74 relations
DETAIL: A total of 1184 page slots are in use (including overhead).
1184
Btw, related to one my earlier questions: where can I see how many
connections are being made to the DB, what was the maximum number
attempted at any given time, and so on? The connections related info.
Thanks!
On 19/08/07, Magnus Hagander [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
No need to match. If you have 150 relations, 200 is a reasonable value.
But once you get the proper output from the vacuum command, it tells you
that as well (74 in my example above)
Found it! You get those words
On 19/08/07, Gavin M. Roy [EMAIL PROTECTED] wrote:
We use PHP, but think of it as a universal PgSQL proxy.. If you connect to
a connection you set up in pgBouncer via psql, it looks like a normal
database. Nothing is different in your code but where you connect (for us,
it's the same as our
I am writing some simple batch scripts to login to the DB and do a
pg_dump. Also, when I login to do my own SQL tinkering, I'd like not
to be asked for a password every time (which, for silly corporate
reasons, is quite a convoluted one).
So I read up on .pgpass. Where should this file be
On 18/08/07, Magnus Hagander [EMAIL PROTECTED] wrote:
Phoenix Kiula wrote:
I am writing some simple batch scripts to login to the DB and do a
pg_dump. Also, when I login to do my own SQL tinkering, I'd like not
to be asked for a password every time (which, for silly corporate
reasons
On 18/08/07, Ron Johnson [EMAIL PROTECTED] wrote:
On 08/18/07 06:02, Phoenix Kiula wrote:
[snip]
Thanks for this. I am logged in as root. Put it there and it works. I
Well, that's your first problem.
And second. And third.
Thanks
I'm loving the fact that while I am doing some one-time updates to the
DB, users can still SELECT away to glory. This is a major boon in
comparison to my experience with another major opensource database.
However, I am a little frustrated by the amount of time PGSQL takes to
complete tasks. Just
On 17/08/07, Martijn van Oosterhout [EMAIL PROTECTED] wrote:
On Fri, Aug 17, 2007 at 07:49:08PM +0800, Phoenix Kiula wrote:
However, I am a little frustrated by the amount of time PGSQL takes to
complete tasks. Just to accommodate these tasks, my conf file has the
following
On 17/08/07, hubert depesz lubaczewski [EMAIL PROTECTED] wrote:
On Fri, Aug 17, 2007 at 07:49:08PM +0800, Phoenix Kiula wrote:
I have dropped all indexes/indicises on my table, except for the
primary key. Still, when I run the query:
UPDATE mytable SET mycolumn = lower(mycolumn);
can
On 17/08/07, hubert depesz lubaczewski [EMAIL PROTECTED] wrote:
On Fri, Aug 17, 2007 at 10:22:55PM +0800, Phoenix Kiula wrote:
Wow, smartest advice of the day! Yes, a lot of our data in that column
has dots and numbers (800,000 compared to 6 million), so I wanted to
get only to the stuff
I'm noticing that some of my data has been imported as junk text:
For instance:
klciã«
What would be the SQL to find data of this nature? My column can only
have alphanumeric data, and the only symbols allowed are - and _,
so I tried this regexp query:
select id, t_code
from
On 18/08/07, Michael Glaesemann [EMAIL PROTECTED] wrote:
[Please reply to the list so that others may benefit from and
participate in the discussion.]
If you're including - in a range as a character, doesn't it have to
go first?
Try this:
WHERE t_code ~ $re$[^-A-Za-z0-9_]$re$
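The same whitelist check, sketched client-side in Python (assuming digits are allowed in the column, with `-` placed first in the class so it reads literally; the helper name is hypothetical):

```python
import re

# Matches any character outside letters, digits, '-' and '_':
# exactly the "junk" the post wants to locate.
JUNK_RE = re.compile(r"[^-A-Za-z0-9_]")

def has_junk(t_code):
    return JUNK_RE.search(t_code) is not None
```

Rows where this returns True are the ones the server-side regex would flag.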
On 16/08/07, Richard Broersma Jr [EMAIL PROTECTED] wrote:
--- Phoenix Kiula [EMAIL PROTECTED] wrote:
On 16/08/07, Rodrigo De León [EMAIL PROTECTED] wrote:
On Aug 15, 11:46 pm, [EMAIL PROTECTED] (Phoenix Kiula) wrote:
Appreciate any tips, because it would
be nasty to have to do
Probably an optimistic question - can a user with access to two
databases create a function in one to access tables in the other? Or
triggers?
On 16/08/07, Rainer Bauer [EMAIL PROTECTED] wrote:
Gregory Stark wrote:
Rainer Bauer [EMAIL PROTECTED] writes:
Anyway, what Phoenix is trying to say is that 2 queries are required: One
to
get the total count and one to get the tuples for the current page. I
reckon
it would help,
I am trying to force a column to have lowercase because Postgresql is
case-sensitive in queries. For the time being I've made an expression
index on lower(KEY). But I would like to have just lower case data and
then drop this expression index.
However, I see some inconsistent behavior from
I suspect you're not showing us the exact queries you're running. For
one, you can't have a table named TABLE (without quotes) in PostgreSQL.
Of course. The data is a tad private, hence the simple table and
column names represented in uppercase.
Perhaps something else you changed when
On 17/08/07, Tom Lane [EMAIL PROTECTED] wrote:
Phoenix Kiula [EMAIL PROTECTED] writes:
However, I see some inconsistent behavior from Postgresql. When I issue
an UPDATE command , it shows me a duplicate violation (which could be
correct) --
-# update TABLE set ACOLUMN = lower(ACOLUMN
On 15/08/07, Ivan Zolotukhin [EMAIL PROTECTED] wrote:
Hello,
Imagine a web application that process text search queries from
clients. If one types a text search query in a browser it then sends
proper UTF-8 characters and application after all needed processing
(escaping, checks, etc) passes
you do a lot of queries like that and the id,s_id restriction isn't very
selective you might look into tsearch2 which can index that type of query.
Thanks. Does tsearch2 come installed with 8.2.3? I am not techie
enough to do all the compiling stuff so I'm hoping it does! How can I
check?
I'm grappling with a lot of reporting code for our app that relies on
queries such as:
SELECT COUNT(*) FROM TABLE WHERE (conditions)...
And I still do not find, from the discussions on this thread, any
truly viable solution for this. The one suggestion is to have a
separate counts
On 15/08/07, Gregory Stark [EMAIL PROTECTED] wrote:
Phoenix Kiula [EMAIL PROTECTED] writes:
I'm grappling with a lot of reporting code for our app that relies on
queries such as:
SELECT COUNT(*) FROM TABLE WHERE (conditions)...
...
The number of such possibilities
In some examples posted to this forum, it seems to me that when people
execute queries in the psql window, they also see 90 ms taken
(milliseconds), which denotes the time taken to execute the query.
Where can I set this option because I'm not seeing it in my psql
window on both Win XP and Linux.