Hello, Tom.
I don't understand the relation between constraints and indexes.
By using EMS PostgreSQL Manager Lite, I created indexes on columns, some of
which hold unique values.
But when I open it in PgAdmin, all such "unique" indexes are listed as
constraints and there are no indexes in Indexes sec
Gauthier, Dave wrote:
I’ll answer my own question...
select * from myfunc();
(dumb, dumb, dumb)
If it makes you feel any better, it is a common mistake :)
Joshua D. Drake
-dave
* From: * [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] *On Behalf Of *Gauthier, Dave
*Sent:* Monda
I'll answer my own question...
select * from myfunc();
(dumb, dumb, dumb)
-dave
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Gauthier, Dave
Sent: Monday, July 09, 2007 4:07 PM
To: pgsql-general@postgresql.org
Subject: [GENER
On 07/09/07 16:18, Mike wrote:
Hi,
What is an efficient approach to storing a web application's user
data? How do applications such as Basecamp, SalesForce or QuickBooks
Online store their data? Is it in one DB with huge tables, each record
having a user account's foreign key, or do they create a
On Monday 09 July 2007 18:33:49 turbovince wrote:
> Hello, I would like to perform some accent-insensitive searches on my
> database, which means that a select query with condition, say, WHERE
> NAME = 'HELLÔ' would return records where name is 'HELLO' as well.
>
> My data is encoded in Unicode (UT
Mike <[EMAIL PROTECTED]> writes:
> I see. Thank you for the elaborate response. I have a clearer idea of
> what is going on now. In designing my application I was thinking of
> storing pieces of my data as serialized python data structures into a
> binary field (no more than 15KB), while a friend
On Thursday 12 July 2007 10:44, Koen Vermeer wrote:
> On Mon, 2007-07-09 at 09:05 +0200, Koen Vermeer wrote:
> > I want to set up a database server that is connected to the Internet.
> > Therefore, I want my users to use SSL/TLS certificates. I provide these
> > certificates to the users, so I can
* Tom Lane ([EMAIL PROTECTED]) wrote:
> If you're worried about having the system insecure even transiently
> against local bad guys, it's possible to do this without opening any
> hole, but it requires taking the DB down for a few minutes so you can
> do the password change in standalone mode.
An
[EMAIL PROTECTED] writes:
> I am running three ways: sequential scan, bitmap index scan and index scan.
> The I/O cost for the index scan is 24+ times more than the other two.
> I do not understand why this happens. If I am using a clustered index,
> it is my understanding that there should b
Dat Chu <[EMAIL PROTECTED]> writes:
> I am the new sysadmin and was given the task to move a postgresql
> installation from one box to another. So I want to do pg_dumpall .
> However, there is no record of postgresql superuser/root password. I
> do have root access to the system however.
Edit pg_
[EMAIL PROTECTED] writes:
> It turned out he was right for our current set up. When I needed to
> empty the project table to re-parse data, doing a cascading delete
> could take up to 10 minutes!
You mean ON CASCADE DELETE foreign keys? Usually the reason that's
slow is you forgot to put an index
* Narasimha Rao P.A ([EMAIL PROTECTED]) wrote:
> Does postgreSQL support distributive query processing?
PostgreSQL does not directly support splitting one query across multiple
nodes (cpus, machines, whatever). It's certainly possible to set up
distributed load balancing over some set of PostgreS
* [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
> I am running three ways: sequential scan, bitmap index scan and index scan.
> The I/O cost for the index scan is 24+ times more than the other two.
> I do not understand why this happens. If I am using a clustered index,
> it is my understandin
* [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
> Since I'm not an expert in Postgres database design, I'm assuming I've
> done something sub-optimal. Are there some common techniques for
> tuning postgres performance? Do we need beefier hardware?
Honestly, it sounds like the database design might
* Gauthier, Dave ([EMAIL PROTECTED]) wrote:
> stdb=# select myfunc();
> ERROR: set-valued function called in context that cannot accept a set
select * from myfunc(); ?
Stephen
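The fix Stephen suggests works because a set-returning function generally has to be called in the FROM clause, not the select list. A minimal sketch (the function body here is hypothetical, invented to reproduce the error):

```sql
-- A simple set-returning function (hypothetical example)
CREATE FUNCTION myfunc() RETURNS SETOF integer AS $$
    SELECT generate_series(1, 3);
$$ LANGUAGE sql;

-- Calling a SETOF function in the select list is what raises
-- "set-valued function called in context that cannot accept a set"
-- for PL/pgSQL functions:
--   SELECT myfunc();

-- Calling it in the FROM clause works:
SELECT * FROM myfunc();
```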
Hello all -
I'm working on a postgres project after coming from a MySQL background
(no flames, please :). We are importing fairly large xml datasets
(10-20 MB of xml files per 'project', currently 5 projects) into the
database for querying.
We are using PHP to create a web interface where users
Hi
I know that I can limit the total number of connections in
postgresql.conf with max_connections, but... can I limit the max
number of connections from a specific IP?
For example, I'd like a total max of 40 connections, but the same IP
can't have more than 5 connections open.
Is it possibl
Hello,
if I have this table:
CREATE TABLE teste (
id integer,
picos real[]
);
and insert these values:
INSERT INTO teste VALUES(1, '{{1,2,3},{1,2,3},{1,2,3},{1,2,3},
{1,2,3}}');
INSERT INTO teste VALUES(2, '{{1,2,3},{1,2,3},{1,2,3},{1,2,3},{1,2,3},
{1,2,3},{1,2,3}}');
INSERT INTO teste VAL
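Multidimensional arrays like the ones above can be read back with subscripts and slices. A sketch against the `teste` table (assuming the inserts succeeded):

```sql
-- One element: row 2, column 3 of the array for id = 1
SELECT picos[2][3] FROM teste WHERE id = 1;

-- A slice: rows 1-2, all three columns
SELECT picos[1:2][1:3] FROM teste WHERE id = 1;

-- Length of the first dimension (number of "rows") per array
SELECT id, array_upper(picos, 1) FROM teste;
```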
Hello:
I am running the following query:
SELECT COUNT(*) FROM orders WHERE o_orderdate < date('1995-03-15');
Here are some stats for the orders relation:
select relname, relpages, reltuples from pg_class where relname = 'orders';
"orders";29278;1.49935e+06
For my query above, the reduction facto
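The planner's row estimate for a query like the one above is roughly reltuples multiplied by the predicate's selectivity (the "reduction factor"). A back-of-the-envelope sketch in Python, using the pg_class numbers quoted above; the selectivity value and the cost constants are illustrative defaults, not something PostgreSQL reported for this query:

```python
# Rough planner arithmetic for
# "SELECT COUNT(*) FROM orders WHERE o_orderdate < date('1995-03-15')".
reltuples = 1.49935e06   # estimated row count for "orders" (from pg_class)
relpages = 29278         # estimated number of heap pages (from pg_class)
selectivity = 0.45       # hypothetical fraction of rows matching the WHERE clause

# The planner's row estimate is reltuples scaled by the selectivity.
estimated_rows = reltuples * selectivity

# Simplified sequential-scan cost model using the default cost parameters:
# one unit per page read plus a small CPU charge per tuple examined.
seq_page_cost = 1.0
cpu_tuple_cost = 0.01
seq_scan_cost = relpages * seq_page_cost + reltuples * cpu_tuple_cost

print(round(estimated_rows))  # planner's estimated matching rows
print(round(seq_scan_cost))   # approximate total cost of a sequential scan
```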
PostgreSQL 8.2 on Windows 2003 Server.
Is it possible to limit number of connections per username?
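PostgreSQL can cap concurrent connections per role directly; a sketch (the role names are hypothetical):

```sql
-- Limit an existing role to at most 5 concurrent connections
ALTER ROLE webuser CONNECTION LIMIT 5;

-- Or set the limit at creation time
CREATE ROLE reportuser LOGIN CONNECTION LIMIT 3;
```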
Does postgreSQL support distributive query processing?
I've googled this one and tried everything (except the correct solution
of course) until tears are starting to flow. Please help. Complete
example below. 7.4.13 on suse-64 x86
create table mytable (mystring varchar, myreal real);
insert into mytable (mystring,myreal) values ('abc',1.2
Hi,
I am the new sysadmin and was given the task to move a postgresql
installation from one box to another. So I want to do pg_dumpall .
However, there is no record of postgresql superuser/root password. I
do have root access to the system however.
Does anyone have a suggestion on what I should
I am a beginner with PostgreSQL and the full-text indexing enabled by Tsearch2.
I have recently set all the configuration to run Tsearch2 on my
databases (Postgresql 8.2), under Linux (Fedora).
Everything worked great for a period of time. The queries I frequently
used were :
- select finddoc('word
Hello, I would like to perform some accent-insensitive searches on my
database, which means that a select query with condition, say, WHERE
NAME = 'HELLÔ' would return records where name is 'HELLO' as well.
My data is encoded in Unicode (UTF8) and therefore I cannot use
Postgres's to_ascii() trick t
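In later PostgreSQL releases this is commonly solved with the contrib unaccent module (shipped since 9.0; CREATE EXTENSION requires 9.1+), which was not available on the era of server discussed in this thread. A sketch, with hypothetical table and column names:

```sql
-- Requires PostgreSQL 9.1+; unaccent is a contrib extension.
CREATE EXTENSION IF NOT EXISTS unaccent;

-- Accent-insensitive comparison: strip accents on both sides
-- (table and column names are hypothetical)
SELECT * FROM people
WHERE unaccent(name) = unaccent('HELLÔ');
```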
On 7/10/07, CG <[EMAIL PROTECTED]> wrote:
Can't wait... Got to have it now. :(
I patched xpath.c and created a function which takes a third parameter which
are the known namespaces in a space delimited hash list ... I based the patch
on some example code from the libxml website which works per
I see. Thank you for the elaborate response. I have a clearer idea of
what is going on now. In designing my application I was thinking of
storing pieces of my data as serialized python data structures into a
binary field (no more than 15KB), while a friend was arguing I should
store the data in oth
Hi,
What is an efficient approach to storing a web application's user
data? How do applications such as Basecamp, SalesForce or QuickBooks
Online store their data? Is it in one DB with huge tables, each record
having a user account's foreign key, or do they create a separate
database for each of th
Hi all,
I want to use postgres to store data and large files, typically audio
files from 100 KB to 20 MB. For those files, I just need to store and
retrieve them, in an ACID way. (I don't need search, substring, or
other functionality.)
I saw postgres offers at least 2 methods: bytea column
Hi,
I'm new to PostgreSQL and need support for an IT-related problem.
One of my customers is not able to start the PostgreSQL service.
The log said that ..
record with zero length at 0/2E16910
invalid primary checkpoint record
record with zero length at 0/2E168D0
invalid secondary checkpoint recor
Harpreet Dhaliwal wrote:
> How can one roll a sequence back to zero after deleting records from
> a table that uses such a sequence?
> I see it starting with the last value of the sequence inserted.
You can use setval(), but normally you just leave it alone. Having
numbers not starting f
How can one roll a sequence back to zero after deleting records from a
table that uses such a sequence?
I see it starting with the last value of the sequence inserted.
On 7/11/07, Tom Allison <[EMAIL PROTECTED]> wrote:
On Jul 10, 2007, at 3:09 PM, Tom Lane wrote:
>
> "Harpreet Dhaliwal" <
Joshua D. Drake wrote:
> Benjamin Arai wrote:
> RAID 5 with 16 spindles? RAID 10 will give you better performance I
> would think.
I'd say RAID 5 is not that bad for this kind of query, at least if the
controller is worth anything. RAID 10 is the best choice for OLTP, but
if the main job for this
imageguy wrote:
> I am building an application with Postgres as the backend foundation.
> This is my first application and it has struck me that as we add
> features/functionality to the application and database with each new
> version, we will need some method of obtaining the current structure
>
Oleg,
I am running the latest 8.2.4. I am using GIN. The data is static. I do
a batch upload every week of about 500GB and the data is never touched
again; it is always add, never delete or update.
From your slides you state:
GIN_FUZZY_SEARCH_LIMIT - maximum number of
returned rows
GIN_F
Benjamin,
what version of postgres and what type of index did you use?
The best setup is to use partitioning, with a rather small table for
new data using a GiST index and a big archive table of static data
using a GIN index. I have some slides from PGDay
http://www.sai.msu.su/~megera/postgres/talks/fts-pg
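The split Oleg describes can be sketched with 8.2-era inheritance partitioning. All table, column, and index names here are hypothetical; in 8.2, Tsearch2 was a contrib module providing the tsvector type and operators:

```sql
-- Parent table plus two partitions: a small "recent" table indexed with
-- GiST (cheaper to update) and a large static archive indexed with GIN
-- (faster to query).
CREATE TABLE docs (id integer, body_ts tsvector);

CREATE TABLE docs_recent () INHERITS (docs);
CREATE TABLE docs_archive () INHERITS (docs);

CREATE INDEX docs_recent_ts_idx ON docs_recent USING gist (body_ts);
CREATE INDEX docs_archive_ts_idx ON docs_archive USING gin (body_ts);

-- Queries against the parent search both partitions
SELECT id FROM docs WHERE body_ts @@ to_tsquery('word');
```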
On Jul 12, 2007, at 10:07 AM, imageguy wrote:
I am building an application with Postgres as the backend foundation.
This is my first application and it has struck me that as we add
features/functionality to the application and database with each new
version, we will need some method of obtainin
> Benjamin Arai wrote:
>> Hi,
>>
>> I have a really big Tsearch2 table (100s GB) that takes a while to
>> perform
>> queries and takes days to index. Is there any way to fix these issues
>> using UNIONs or partitioning? I was thinking that I could partition the
>> data by date but since I am alwa
Andrew Edson <[EMAIL PROTECTED]> writes:
> PANIC: right sibling's left-link doesn't match
> Any clue what's causing the panic message to occur?
Corrupt btree index. If you can determine which table is being
inserted/updated when this happens (admittedly the message is not being
helpful), R
I am building an application with Postgres as the backend foundation.
This is my first application and it has struck me that as we add
features/functionality to the application and database with each new
version, we will need some method of obtaining the current structure
of the customers database
Benjamin Arai wrote:
Hi,
I have a really big Tsearch2 table (100s GB) that takes a while to perform
queries and takes days to index. Is there any way to fix these issues
using UNIONs or partitioning? I was thinking that I could partition the
data by date but since I am always performing querie
Tom
Thanks works fine!
best regards
mdc
--- Tom Lane <[EMAIL PROTECTED]> wrote:
> marcelo Cortez <[EMAIL PROTECTED]> writes:
> > I downloaded the postgres 8.2.4 sources, expanded them, and ran
> > ./configure and install with success.
> > Now I need one module from the contrib directory,
> > fuzzystrmatch
Hi,
I have a really big Tsearch2 table (100s GB) that takes a while to perform
queries and takes days to index. Is there any way to fix these issues
using UNIONs or partitioning? I was thinking that I could partition the
data by date but since I am always performing queries on the Tsearch2
field
I tried to install postgres onto my macbook via 'fink' and don't
like it all that much.
I decided to install from source, it's a fallback to my slackware
days.
But fink already created a user postgres and I can't seem to find
anything to change its configuration settings for shell, home
Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
> On Thu, Jul 12, 2007 at 09:04:38AM +0200, Hannes Dorbath wrote:
>> This god like faith of some admins in package maintainers, that they
>> know what's right, good and stable for them, sometimes really worries me.
> The problem is the mismatch be
One of the servers I am responsible for maintaining was apparently having
problems earlier today. The servers are all running SuSE 9.2, Apache 2 (not
sure what version), and Postgres 8.1.4. Our main server is running Slony 1.1,
I think, creating Log-Shipping records that the rest of the server
Hi,
I am using libpq to set up an SSL connection to a PostgreSQL server. The
client key is protected by a pass phrase, so when I issue a
PQconnectdb(), I get a prompt stating:
Enter PEM pass phrase:
Instead, I would like to give the user a nice message box, asking for
his/her password. So, how d
On Jul 12, 2007, at 8:09 AM, Csaba Nagy wrote:
Hi all,
I just had the following error on one of our data bases:
ERROR: could not access status of transaction 1038286848
DETAIL: could not open file "pg_clog/03DE": No such file or directory
I researched on the mailing list and it looks like t
On Mon, 2007-07-09 at 09:05 +0200, Koen Vermeer wrote:
> I want to set up a database server that is connected to the Internet.
> Therefore, I want my users to use SSL/TLS certificates. I provide these
> certificates to the users, so I can control the information that is in
> there. Now, I know that
On Thu, 2007-07-12 at 16:18, Simon Riggs wrote:
> The corruption could only migrate if the WAL records themselves caused
> the damage, which is much less likely than corruption of the data blocks
> at hardware level. ISTM that both Slony and Log shipping replication
> protect fairly well against bl
On Thu, 2007-07-12 at 15:09 +0200, Csaba Nagy wrote:
> Luckily I remembered I have a WAL logging based replica, so I
> recovered
> the rest of the truncated file from the replica's same file... this
> being an insert only table I was lucky I guess that this was an
> option.
> To my surprise, the sa
Hi all,
I just had the following error on one of our data bases:
ERROR: could not access status of transaction 1038286848
DETAIL: could not open file "pg_clog/03DE": No such file or directory
I researched on the mailing list and it looks like the usual suspect is
disk page corruption. There ar
On 7/12/07, Tom Allison <[EMAIL PROTECTED]> wrote:
I tried to install postgres onto my macbook via 'fink' and don't like
it all that much.
I decided to install from source, it's a fallback to my slackware days.
Try MacPorts (http://www.macports.org/), a modern BSD-style ports
system for OS X. T
> I tried to install postgres onto my macbook via 'fink' and don't like
> it all that much.
> I decided to install from source, it's a fallback to my slackware days.
>
> But fink already created a user postgres and I can't seem to find
> anything to change its configuration settings for shell, h
I tried to install postgres onto my macbook via 'fink' and don't like
it all that much.
I decided to install from source, it's a fallback to my slackware days.
But fink already created a user postgres and I can't seem to find
anything to change its configuration settings for shell, home
di
On Thu, Jul 12, 2007 at 09:04:38AM +0200, Hannes Dorbath wrote:
> This god like faith of some admins in package maintainers, that they
> know what's right, good and stable for them, sometimes really worries me.
The problem is the mismatch between what distributors want and what the
postgres team w
On 11.07.2007 23:07, Anton Melser wrote:
Just a word of advice... unless you plan to spend lots of time on your
db (like you want to think about it more than twice a week sort of
thing...), just go with what you have in terms of the distro. We are
running 8.1.4. And it just works, yes, even after