Christopher Browne wrote:
The poll isn't about "OSS"; it's a popularity contest for proprietary
software that runs on Linux.
It's interesting to see that MySQL is only third at the moment.
regards,
Robin
> Great, that works out fine!
>
> So, the SQL I tested with is:
> select * from mytable order by convert(name, 'utf8', 'gb18030');
Sorry, what I wanted to say was:
SELECT * FROM t1 WHERE ... ORDER BY CONVERT(your_chinese_character using
utf_8_to_gb_18030);
Of course your example is fine too (a
A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] ("Ed L.") wrote:
> On Thursday March 24 2005 7:07, [EMAIL PROTECTED] wrote:
>> Thank you for registering!
>>
>> http://www.sys-con.com/linux/readerschoice2004
>
> Curiously, I see PostgreSQL is not even on the ballot I see at
> the site
[EMAIL PROTECTED] ("Joshua D. Drake") wrote:
> On Mon, 2005-03-28 at 11:03 -0700, Ed L. wrote:
>> On Thursday March 24 2005 7:07, [EMAIL PROTECTED] wrote:
>> > Thank you for registering!
>> >
>> > http://www.sys-con.com/linux/readerschoice2004
>>
>> Curiously, I see PostgreSQL is not even on the b
I figured out the solution, but thought I'd share it for others who run into
the same issue.
The solution was just to restart postgresql and it all started working.
Question for gentoo people -
Is it required to restart daemons, such as postgresql, after an emerge
world?
select version();
Great, that works out fine!
So, the SQL I tested with is:
select * from mytable order by convert(name, 'utf8', 'gb18030');
It produces the correct output.
Thanks Tatsuo!
Jian
On Tue, 29 Mar 2005 10:25:58 +0900 (JST), Tatsuo Ishii
<[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > I installed postgres
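As a client-side sanity check, the byte order that `convert(name, 'utf8', 'gb18030')` sorts by can be reproduced with Python's own gb18030 codec; a minimal sketch (the sample strings are arbitrary, not from the thread):

```python
# Sorting by the GB18030 byte sequence mimics what
# ORDER BY convert(name, 'utf8', 'gb18030') does on the server.
names = ["中", "国", "a"]
ordered = sorted(names, key=lambda s: s.encode("gb18030"))
```

This is handy for verifying the expected order of a few values before wiring the expression into a query.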
Thanks for the input everyone. I think Harald's approach will work
well, so I'm planning on doing what he suggested, with a few
modifications. I think I can still use a sequence-backed INTEGER rather
than TIMESTAMP, and have the trigger that sets the revision to NULL also
NOTIFY the daemon th
> One way to do this is to add a write_access column to actions and use
> a constraint to force it to be true.
> Create a UNIQUE key of
> (name, write_access) for user_data and then add a FOREIGN KEY
> reference from (name, write_access) in actions to (name, write_access)
> in user_data.
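The scheme above (force a constant column, give the parent a composite UNIQUE key, then point a composite FOREIGN KEY at it) can be prototyped quickly. A sketch using Python's sqlite3 standing in for Postgres, with the table and column names from the thread:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("""
    CREATE TABLE user_data (
        name TEXT,
        write_access INTEGER DEFAULT 0,
        UNIQUE (name, write_access)        -- the composite key the FK points at
    )""")
conn.execute("""
    CREATE TABLE actions (
        action TEXT,
        name TEXT,
        write_access INTEGER DEFAULT 1 CHECK (write_access = 1),
        FOREIGN KEY (name, write_access)
            REFERENCES user_data (name, write_access)
    )""")
conn.execute("INSERT INTO user_data VALUES ('alice', 1)")
conn.execute("INSERT INTO user_data VALUES ('bob', 0)")

conn.execute("INSERT INTO actions VALUES ('edit', 'alice', 1)")  # alice: allowed
rejected = False
try:
    # No ('bob', 1) row exists in user_data, so the FK rejects this.
    conn.execute("INSERT INTO actions VALUES ('edit', 'bob', 1)")
except sqlite3.IntegrityError:
    rejected = True
```

Because the CHECK pins `actions.write_access` to 1, only users whose `user_data` row has write access can ever appear in `actions`.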
Yes the
On Mon, Mar 28, 2005 at 16:13:59 -0600,
Dale Sykora <[EMAIL PROTECTED]> wrote:
>
> CREATE TABLE user_data(
> name varchar(32),
> write_access bool DEFAULT 'f'
> );
> CREATE TABLE actions(
> action varchar(32),
> user varchar(32) -- somehow make sure user = user_data.name
Actually, it is common for "obvious" facts to be entirely incorrect.
-> ext3 wouldn't "die" with a file of that size; it supports files up
to about 2TB in size, and 8GB shouldn't be an "uncomfortable" size
-> PostgreSQL normally switches to a new file at 1GB intervals, so
that no file is ev
> Hi,
>
> I installed postgres 8.0 for windows on my win xp (Simplified Chinese
> version). The encoding is unicode. When I set pgsql client encoding to
> gb18030, I could insert Chinese text from the command line to
> postgres.
>
> However, I could not get the sort order of Chinese varchar field
Friends, I have XP Professional with Service Pack 1.
I am trying to install version 8 on Windows, and when I try to configure the
service window with the postgres user and give it a password, it throws the
following error:
“Invalid usernamed specified:Error de inicio de s
> > create index prdt_new_url_dx on prdt_new (url)
> > create index prdt_new_sku_dx on prdt_new (sku)
> > create index prdt_old_sku_dx on prdt_old (sku)
> > create index prdt_new_url_null_dx on prdt_new (url) where prdt_new.url
> > IS NULL
I added the indexes and reran the ANALYZE - the query plan looks bett
On Mon, 2005-03-28 at 16:02, Scott Marlowe wrote:
> On Mon, 2005-03-28 at 15:38, Yudie Pg wrote:
> > > Also, this is important, have you analyzed the table? I'm guessing no,
> > > since the estimates are 1,000 rows, but the hash join is getting a little
> > > bit more than that. :)
> > >
> > > Ana
I am trying to develop a database table column that is contrainted to a
subset of another table column. I have tried using foreign key, check,
and inheritance, but I cannot figure out how to do it. I have a
user_data table that has various user columns including name and the
bool column writ
On Mon, 2005-03-28 at 15:38, Yudie Pg wrote:
> > Also, this is important, have you analyzed the table? I'm guessing no,
> > since the estimates are 1,000 rows, but the hash join is getting a little
> > bit more than that. :)
> >
> > Analyze your database and then run the query again.
>
> I analyz
> Also, this is important, have you analyzed the table? I'm guessing no,
> since the estimates are 1,000 rows, but the hash join is getting a little
> bit more than that. :)
>
> Analyze your database and then run the query again.
I analyzed the table and it decreased the number of rows in the nested loop o
Looks like you need to create some indexes, probably on (groupnum) and
possibly on (groupnum,sku) on both tables.
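For reference, the index DDL is the same shape in most SQL engines; a sketch with sqlite3 standing in for Postgres (index names are illustrative, columns taken from the schema posted earlier in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prdt_new (groupnum INTEGER, sku TEXT, url TEXT)")
conn.execute("CREATE INDEX prdt_new_group_dx ON prdt_new (groupnum)")
conn.execute("CREATE INDEX prdt_new_group_sku_dx ON prdt_new (groupnum, sku)")

# Ask the planner how it would run a lookup on the indexed column.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM prdt_new WHERE groupnum = 1"
).fetchall()
plan_text = " ".join(str(row) for row in plan)
```

In Postgres the equivalent check is `EXPLAIN SELECT ... WHERE groupnum = 1` after running ANALYZE.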
Hope this helps,
On Mon, Mar 28, 2005 at 01:50:06PM -0600, Yudie Gunawan wrote:
> > Hold on, let's diagnose the real problem before we look for solutions.
> > What does explain tell
On Mon, 2005-03-28 at 13:50, Yudie Gunawan wrote:
> > Hold on, let's diagnose the real problem before we look for solutions.
> > What does explain tell you? Have you analyzed the database?
>
>
> This is the QUERY PLAN
> Hash Left Join (cost=25.00..412868.31 rows=4979686 width=17)
> Hash Cond
Andrus Moor wrote:
thank you for the reply. There was a typo in my code. The second table should be
CREATE TABLE info (
code1 CHAR(10),
code2 CHAR(10),
FOREIGN KEY ('1', code1) REFERENCES classifier,
FOREIGN KEY ('2', code2) REFERENCES classifier
);
I'll try to explain my problem more precisely.
I can i
> Hold on, let's diagnose the real problem before we look for solutions.
> What does explain tell you? Have you analyzed the database?
This is the QUERY PLAN
Hash Left Join (cost=25.00..412868.31 rows=4979686 width=17)
Hash Cond: (("outer".groupnum = "inner".groupnum) AND
(("outer".sku)::tex
Alvaro Herrera wrote:
> On Sun, Mar 27, 2005 at 06:02:25PM -0600, Guy Rouillier wrote:
>
>> With the current implementation, it appears I need to either (1)
>> always commit after every inserted row, or (2) single thread my
>> entire insert logic. Neither of these two alternatives is very
>> desi
On Mon, 28 Mar 2005 21:06:12 +0200, Avishai Weissberg <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I am trying to find a suitable FTI component.
>
> I am aware of tsearch2, but as far as I understand it doesn't really suit my
> purposes. I want to be able to run a search on a huge TEXT column, where t
From what I have gathered on the performance list, JFS seemed to be the
best overall choice, but I'd say check the archives of
pgsql-performance, because so much of your I/O profile depends on what
you're going to be doing with your database.
-tfo
--
Thomas F. O'Connell
Co-Founder, Information Arc
Scott Ribe <[EMAIL PROTECTED]> writes:
>> Now how the heck did that happen? That's not some kind of weird UPDATE
>> failure, because the rows have different OIDs ... it seems like the
>> newer row must have been explicitly inserted, and it should surely have
>> been blocked by the unique index on
On Mon, 2005-03-28 at 13:02, Yudie Gunawan wrote:
> I actually need to join 2 tables. Both of them are similar and have
> more than 4 million records.
>
> CREATE TABLE prdt_old (
> groupnum int4 NOT NULL,
> sku varchar(30) NOT NULL,
> url varchar(150),
> );
>
> CREATE TABLE prdt_new(
> groupn
Hello,
I am trying to find a suitable FTI component.
I am aware of tsearch2, but as far as I understand it doesn't really suit my
purposes. I want to be able to run a search on a huge TEXT column, where the
column's content is made of words (each 'word' is an email address) separated by
white-s
I actually need to join 2 tables. Both of them are similar and have
more than 4 million records.
CREATE TABLE prdt_old (
groupnum int4 NOT NULL,
sku varchar(30) NOT NULL,
url varchar(150),
);
CREATE TABLE prdt_new(
groupnum int4 NOT NULL,
sku varchar(30) NOT NULL,
url varchar(150) NOT NULL,
On Mon, 2005-03-28 at 11:03 -0700, Ed L. wrote:
> On Thursday March 24 2005 7:07, [EMAIL PROTECTED] wrote:
> > Thank you for registering!
> >
> > http://www.sys-con.com/linux/readerschoice2004
>
> Curiously, I see PostgreSQL is not even on the ballot I see at
> the site above for Best Linux Datab
> Now how the heck did that happen? That's not some kind of weird UPDATE
> failure, because the rows have different OIDs ... it seems like the
> newer row must have been explicitly inserted, and it should surely have
> been blocked by the unique index on datname. Are there subdirectories
> under
> Now how the heck did that happen? That's not some kind of weird UPDATE
> failure, because the rows have different OIDs ... it seems like the
> newer row must have been explicitly inserted, and it should surely have
> been blocked by the unique index on datname. Are there subdirectories
> under
On Mon, 2005-03-28 at 11:32 -0600, Yudie Gunawan wrote:
> I have a table with more than 4 million records, and when I do a select
> query it gives me an "out of memory" error.
> Does postgres have a feature like table partitioning to handle tables with
> very many records?
> Just wondering what you guys do t
On Thursday March 24 2005 7:07, [EMAIL PROTECTED] wrote:
> Thank you for registering!
>
> http://www.sys-con.com/linux/readerschoice2004
Curiously, I see PostgreSQL is not even on the ballot I see at
the site above for Best Linux Database ... could be a browser
config issue, but I do see the oth
On Monday, 28 March 2005 18:06, Tom Lane wrote:
> "Janning Vygen" <[EMAIL PROTECTED]> writes:
> > My disk was running full with 100 GB (!) of data/pg_xlog/ files.
>
> The only way for pg_xlog to bloat vastly beyond what it's supposed to be
> (which is to say, about twice your checkpoint_segments s
On Mon, Mar 28, 2005 at 11:32:04AM -0600, Yudie Gunawan wrote:
> I have a table with more than 4 million records, and when I do a select
> query it gives me an "out of memory" error.
What's the query and how are you issuing it? Where are you seeing
the error? This could be a client problem: the client
On Mon, 2005-03-28 at 11:32, Yudie Gunawan wrote:
> I have a table with more than 4 million records, and when I do a select
> query it gives me an "out of memory" error.
> Does postgres have a feature like table partitioning to handle tables with
> very many records?
> Just wondering what you guys do to deal
On Monday, 28 March 2005 13:46, Gustavo Franklin Nóbrega - Planae wrote:
> Hi Janning!
>
> You need to expand your pg_xlog partition. If you use reiserfs, you can
> do this with resize_reiserfs. If you use ext2/ext3 you may try resize2fs.
This is not an option for me at the moment, because my d
I have a table with more than 4 million records, and when I do a select
query it gives me an "out of memory" error.
Does postgres have a feature like table partitioning to handle tables with
very many records?
Just wondering what you guys do to deal with very large tables?
Thanks!
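One common way around a client-side "out of memory" on a big result set is to stream rows instead of materializing them all at once (in Postgres that means a server-side cursor, e.g. `DECLARE ... CURSOR FOR SELECT ...` plus `FETCH`, or a driver's named-cursor support). The shape of the client loop, sketched with sqlite3 standing in for Postgres:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prdt (sku TEXT)")
conn.executemany("INSERT INTO prdt VALUES (?)", ((str(i),) for i in range(10000)))

cur = conn.execute("SELECT sku FROM prdt")
seen = 0
while True:
    batch = cur.fetchmany(1000)   # pull 1000 rows at a time, not all at once
    if not batch:
        break
    seen += len(batch)            # process each batch here
```

Memory use is then bounded by the batch size rather than the table size.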
-
Scott Ribe <[EMAIL PROTECTED]> writes:
>> select ctid,oid,xmin,cmin,xmax,cmax,* from pg_database
> ctid | oid | xmin | cmin | xmax | cmax | datname | datdba |
> encoding | datistemplate | datallowconn | datlastsysoid | datvacuumxid |
> datfrozenxid | dattablespace | datconfig | dat
> What's the result of the following query?
>
> SELECT tableoid, xmin, xmax, oid, datname, datdba,
>datvacuumxid, datfrozenxid
> FROM pg_database
> ORDER BY datname, oid;
tableoid | xmin | xmax | oid | datname | datdba | datvacuumxid |
datfrozenxid
--++--+
> select ctid,oid,xmin,cmin,xmax,cmax,* from pg_database
ctid | oid | xmin | cmin | xmax | cmax | datname | datdba |
encoding | datistemplate | datallowconn | datlastsysoid | datvacuumxid |
datfrozenxid | dattablespace | datconfig | datacl
---+++--+--+-
> My first thought would be to look at the query preceding this error
> message (dumpDatabase in pg_dump.c) and try to determine why you are
> getting more than one database. Just from looking at this code, it
> seems like a mismatch between database version and pg_dump version
> might be a likely
>Hi,
>
>I installed postgres 8.0 for windows on my win xp (Simplified Chinese
>version). The encoding is unicode. When I set pgsql client encoding to
>gb18030, I could insert Chinese text from the command line to
>postgres.
>
>However, I could not get the sort order of Chinese varchar field to
>wor
Hi,
I installed postgres 8.0 for windows on my win xp (Simplified Chinese
version). The encoding is unicode. When I set pgsql client encoding to
gb18030, I could insert Chinese text from the command line to
postgres.
However, I could not get the sort order of Chinese varchar field to
work properl
On Mon, Mar 28, 2005 at 09:41:03AM -0600, Thomas F.O'Connell wrote:
> I've only been explaining general database theory and the rules of SQL
> in response to your posts because I'm still having a difficult time
> understanding what you're trying to accomplish.
I think he's trying to exploit ON
FYI,
MITRE, a nonprofit think tank catering to the US Department of Defense
(DoD), wrote an interesting study about the use of FOSS in the US Military.
PostgreSQL is not mentioned, but MySQL is. Seems to be an interesting
relaxation trend in the DoD, for those of us who do this kind of work.
htt
"Janning Vygen" <[EMAIL PROTECTED]> writes:
> My disk was running full with 100 GB (!) of data/pg_xlog/ files.
The only way for pg_xlog to bloat vastly beyond what it's supposed to be
(which is to say, about twice your checkpoint_segments setting) is if
checkpoints are somehow blocked from happeni
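As a rough sanity check, the steady-state pg_xlog size implied by that rule is simple arithmetic: WAL segments are 16 MB each, and the 8.0-era guidance is roughly 2 * checkpoint_segments + 1 files (treat the exact formula as an approximation):

```python
SEGMENT_MB = 16                   # size of one WAL segment file
checkpoint_segments = 3           # the 8.0 default setting

expected_files = 2 * checkpoint_segments + 1
expected_mb = expected_files * SEGMENT_MB   # with the defaults, well under 1 GB
```

With the defaults that is only about 112 MB, which is why 100 GB of pg_xlog points at checkpoints not completing rather than normal operation.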
Interesting what Stonebraker said in his interview about
parallel query processing:
http://searchenterpriselinux.techtarget.com/qna/0,289202,sid39_gci1025832,00.html
Putting aside Larry Ellison, would you say, anything should have been done differently?
Stonebraker: We made a couple of significan
Dan,
You can get a sense of how much memory you will need by the shorthand
presented in table 16-2 for calculating the value of SHMMAX:
http://www.postgresql.org/docs/8.0/static/kernel-
resources.html#SYSVIPC-PARAMETERS
Otherwise, you'll need to include some estimate of work_mem and
maintena
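That Table 16-2 shorthand boils down to simple arithmetic; a sketch using the approximate per-item costs from the 8.0 docs (the constants are approximations taken from that table, not exact values):

```python
def shmmax_kb(shared_buffers, max_connections):
    # ~250 kB fixed overhead, ~8.2 kB per shared_buffers page,
    # ~14.2 kB per allowed connection (all approximate).
    return 250 + 8.2 * shared_buffers + 14.2 * max_connections

estimate = shmmax_kb(shared_buffers=1000, max_connections=100)
```

Any work_mem or maintenance_work_mem headroom would come on top of this shared-memory figure.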
If you declare parent.code to be a primary key, you're asserting that
you want it to be unique across all rows in parent. Thus, you will only
ever (be able to) have a single row with a value of 1.
If you do this:
INSERT INTO parent VALUES ('1');
INSERT INTO parent VALUES ('2');
UPDATE parent SET
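The uniqueness rule described above is easy to see in any SQL engine; a minimal sketch with sqlite3 standing in for Postgres (column name from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (code TEXT PRIMARY KEY)")
conn.execute("INSERT INTO parent VALUES ('1')")
conn.execute("INSERT INTO parent VALUES ('2')")

duplicate_rejected = False
try:
    conn.execute("INSERT INTO parent VALUES ('1')")  # second '1' violates the PK
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The second insert of '1' fails exactly because the primary key promises a single row per value.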
Is the goal to have code1 always equal 1 and code2 always equal 2?
If that is your goal, and you are trying to ensure no one enters anything
other than a 1 in code1 or a 2 in code2, is a check constraint what you are
after?
I guess if the 2 columns code1 and code2 have fixed values 1 and 2 it se
On Mon, Mar 28, 2005 at 12:51:19AM -0700, Scott Ribe wrote:
> pg_dumpall is failing with this error:
>
> pg_dump: query returned more than one (2) pg_database entry for database
> "pedcard"
> pg_dumpall: pg_dump failed on database "pedcard", exiting
>
> This is 8.0.1 on OS X; where do I start on
Andrus, it's still not clear to me that you're understanding the role
of referential integrity in database design. It exists to guarantee
that the values in a column in a given table correspond exactly to the
values in a column in another table on a per-row basis. It does not
exist to guarantee
Scott Ribe <[EMAIL PROTECTED]> writes:
> pg_dumpall is failing with this error:
> pg_dump: query returned more than one (2) pg_database entry for database
> "pedcard"
> pg_dumpall: pg_dump failed on database "pedcard", exiting
> This is 8.0.1 on OS X; where do I start on straightening this out?
On Sun, Mar 27, 2005 at 23:58:35 -0500,
Mike Mascari wrote:
>
> Without parallel query, the *only* way to decrease the execution time of
> a single query whose data has been fully cached is to buy the
> latest-and-greatest which is increasing in speed at decreasing rates,
> rather than scali
On Sun, Mar 27, 2005 at 19:46:41 -0600,
>
> Related question: Once I switch the 8.0.1 system over to be the production,
> can I reverse the direction and restore .dmp files on the 7.4.5 system or
> are the tablespace terms in the dump files going to cause problems?
The 8.0.1 dumps will probably
On Mar 28, 2005, at 2:51 AM, Scott Ribe wrote:
This is 8.0.1 on OS X; where do I start on straightening this out? (There is
only 1 postmaster running, and it seems OK from client apps, both my own app
and psql.)
My first thought would be to look at the query preceding this error
message (dumpDat
On Mon, Mar 28, 2005 at 06:09:22PM +0530, Rajarshi Mukherjee wrote:
>
> i am not able to set the default autocommit feature of PG to off.
> i am using the PG 8.0 Windows version, and the following command:
> SET AUTOCOMMIT TO OFF
> throws an error:
> ERROR: SET AUTOCOMMIT TO OFF is no longer s
On Mon, 28 Mar 2005 08:46:09 -0300, Gustavo Franklin Nóbrega - Planae
<[EMAIL PROTECTED]> wrote:
> Hi Janning!
>
> You need to expand your pg_xlog partition. If you use reiserfs, you can
> do this with resize_reiserfs. If you use ext2/ext3 you may try resize2fs.
>
> If you need to repartit
After a long battle with technology, [EMAIL PROTECTED] ("Joseph M. Day"), an
earthling, wrote:
> Can anyone recommend a filesystem to use for Postgres. I currently
> have one table that has 80 mil rows, and will take roughly 8GB of
> space without indexing. Obviously EXT3 will die for a file size
On Sun, Mar 27, 2005 at 06:02:25PM -0600, Guy Rouillier wrote:
> With the current implementation, it appears I need to either (1) always
> commit after every inserted row, or (2) single thread my entire insert
> logic. Neither of these two alternatives is very desirable. And it is
> only a partia
Hi Janning!
You need to expand your pg_xlog partition. If you use reiserfs, you can
do this with resize_reiserfs. If you use ext2/ext3 you may try resize2fs.
If you need to repartition your filesystem, from my own experience I
recommend that you use LVM. With LVM, you can expand easily, add
You might want to look at:
http://www.postgresql.org/docs/8.0/interactive/largeobjects.html
or at the binary type in Postgres, also in the
docs.
Sean
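For the bytea route, the client-side pattern is just a parameterized binary insert and read-back; a sketch with sqlite3 standing in for Postgres (the file name and payload are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pictures (name TEXT, data BLOB)")

image_bytes = b"\x89PNG\r\n\x1a\n fake image payload"  # normally read from a file
conn.execute("INSERT INTO pictures VALUES (?, ?)", ("photo.png", image_bytes))

stored = conn.execute(
    "SELECT data FROM pictures WHERE name = ?", ("photo.png",)
).fetchone()[0]
```

Passing the bytes as a bound parameter (never string-formatted into the SQL) is what keeps the binary data intact.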
----- Original Message -----
From: dbalinglung
To: pgsql-general@postgresql.org
Sent: Monday, March 28, 2005 4:39 AM
Hello all,
I am not able to set the default autocommit feature of PG to off.
I am using the PG 8.0 Windows version, and the following command:
SET AUTOCOMMIT TO OFF
throws an error:
ERROR: SET AUTOCOMMIT TO OFF is no longer supported
Please suggest an alternative.
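The server-side autocommit setting was removed after 7.3; the alternatives are client-side: in psql, `\set AUTOCOMMIT off`, or explicit BEGIN/COMMIT blocks, or your driver's transaction mode. The driver-level idea, sketched with Python's sqlite3 standing in for a Postgres driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # statements run inside implicit transactions
conn.execute("CREATE TABLE t (v INTEGER)")
conn.commit()

conn.execute("INSERT INTO t VALUES (1)")   # pending, inside an open transaction
conn.rollback()                            # discard it, as BEGIN/ROLLBACK would
after_rollback = conn.execute("SELECT count(*) FROM t").fetchone()[0]

conn.execute("INSERT INTO t VALUES (2)")
conn.commit()                              # make it permanent, as COMMIT would
after_commit = conn.execute("SELECT count(*) FROM t").fetchone()[0]
```

The point is that turning autocommit "off" is now the client's job: group statements and decide explicitly when to commit or roll back.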
---
The plpython fix works on all of my functions :-)
Thank you muchly.
Sim
"Marc G. Fournier" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
>
> In order to provide more extensive testing for 8.0.2, we have just
> packaged up a Beta1 available from:
>
> http://www.postgresql.org/ftp
Hi,
I do a nightly CLUSTER and VACUUM on one of my production databases.
Yesterday morning the vacuum process was still running after 8 hours.
That was very unusual and I didn't know exactly what to do, so I tried to stop
the process. When that didn't work, I killed -9 the VACUUM process. I res
Dear guys,
How do I save my picture file into a table?
Thanks,
regards,
Alam Surya