Re: [GENERAL] Old source code needed

2015-11-27 Thread NTPT
Thanks for the help, I grabbed the source code that matches the old cluster filesystem backup.

However: should it run fine when compiled with a recent gcc 4.9.3?

When compiled with that gcc, I get a lot of strange errors like

ERROR:  could not identify an ordering operator for type name at character 3336
HINT:  Use an explicit ordering operator or modify the query.

with \dt, \list, and other commands and selects.


However, compiled with gcc 3.4.6, everything works.

Is this intended (expected) behavior, or a compiler bug? (Being on Gentoo, a 
compiler bug scares me a lot.)

thanx



-- Original message --
From: Adrian Klaver <adrian.kla...@aklaver.com>
To: NTPT <n...@seznam.cz>, pgsql-general@postgresql.org
Date: 26. 11. 2015 22:49:13
Subject: Re: [GENERAL] Old source code needed

"On 11/26/2015 01:29 PM, NTPT wrote:
> Hi all, I need to find the old source code for the PostgreSQL 9.0 BETA releases.
> Could you help me?
>

http://git.postgresql.org/gitweb/?p=postgresql.git;a=tags

Then, say for REL9_0_BETA2 click on commit which will take you to:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=dcd52a64bd9d3baa252a8bea662b08f7780035a1

Click on the snapshot link and you will get a postgres*.tar.gz file that 
is the code at 9.0beta2.

-- 
Adrian Klaver
adrian.kla...@aklaver.com"

[GENERAL] Old source code needed

2015-11-26 Thread NTPT
Hi all, I need to find the old source code for the PostgreSQL 9.0 BETA releases.
Could you help me?



[GENERAL] Query failed: ERROR: character with byte sequence 0xc2 0x96 in encoding "UTF8" has no equivalent in encoding "WIN1250"

2015-11-25 Thread NTPT
Hi, all. 

I need help.

pg_exec(): Query failed: ERROR: character with byte sequence 0xc2 0x96 in 
encoding "UTF8" has no equivalent in encoding "WIN1250"

This is strange. First there was a database with LATIN2 encoding.

An application connected to this database with "set client encoding to win1250" 
and manipulated data.

Then the database was dumped with pg_dump -E UTF8.

Then the database was restored with pg_restore into a UTF8-encoded database on 
another cluster.

Then the application connected to the new database with "set client encoding to win1250"

and - query failed


How, in this scenario, could invalid characters reach the database???

And how to solve this? The error message is not very useful, because it does not 
provide any hint (at least the column and row).
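
A minimal sketch of how the affected rows might be located in the restored UTF8 database (table and column names are hypothetical; connect with a UTF8 client encoding so the output itself does not fail):

-- Hypothetical names; convert_from() turns the problem byte pair 0xc2 0x96
-- into a one-character text value we can search for.
SELECT id, my_column
  FROM my_table
 WHERE position(convert_from('\xc296'::bytea, 'UTF8') IN my_column) > 0;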



Re: [GENERAL] full_page_writes on SSD?

2015-11-25 Thread NTPT
Hi, 


I investigated a bit how SSDs work and why they need to be aligned.

I conclude that in an ideal world we would need a general --ebs=xxx switch 
in various Linux tools to ensure alignment, or we have to do the calculation by hand.

On the market there are SSDs with a page size of 4 or 8 kB. But SSDs have another 
characteristic property: the EBS, or Erase Block Size. If the disk writes even a 
single sector, the whole erase block must be read by the drive electronics, 
modified, and written back to the drive.

Devices on the market come with various EBS sizes: 128, 256, 512, 1024, 1536, 2048 KiB, etc.
In my case, a Samsung 850 EVO, there are 8 kB pages and a 1536 KiB erase block.

So the first alignment problem: the partition should start on an erase block 
boundary. An --ebs switch in partitioning tools for proper alignment would be 
practical; otherwise calculate by hand. In my case 1536 KiB = 1536 * 1024 / 512 = 3072 512-byte sectors.

Things get more complicated if you use mdadm RAID. Because the RAID superblock sits 
at the beginning of the RAID device and does not fill a whole erase block, it is 
practical at array creation time to set an offset so that the real filesystem start 
falls on the next erase block boundary from the beginning of the RAID device; that 
way the underlying filesystem is aligned as well. So an --ebs=xxx switch on mdadm 
would also be practical.

And now ext4, with a block size of 4096. Because the page size of the SSD is 8 kB, 
setting the stride (the smallest unit the filesystem operates with on one disk) to 
2 blocks, to fill one SSD page, is practical. The stripe size can then be set to 
EBS/pagesize, or to the whole EBS. And maybe it would also be useful to use an 
ext4 offset up to the EBS.

This should align the partition, the RAID, and the filesystem. Correct me if I am wrong.

And now it is the database storage engine's turn. I think that trying to write on 
erase block boundaries, in erase-block-sized amounts of data, may have some 
benefit: not in speed, but in lower wear-out of the entire SSD.

  

  



-- Original message --
From: Marcin Mańk 
To: PostgreSQL 
Date: 24. 11. 2015 20:07:30
Subject: [GENERAL] full_page_writes on SSD?

"

I saw this: http://blog.pgaddict.com/posts/postgresql-on-ssd-4kb-or-8kB-pages



It made me wonder: if SSDs have 4kB/8kB sectors, and we'd make the Postgres 
page size equal to the SSD page size, do we still need full_page_writes?




Regards

Marcin Mańk


"

[GENERAL] Re: [GENERAL] Query failed: ERROR: character with byte sequence 0xc2 0x96 in encoding "UTF8" has no equivalent in encoding "WIN1250"

2015-11-25 Thread NTPT
Hi, 


Thanks for the explanation.


But how to solve this in an existing database?



-- Original message --
From: Albe Laurenz <laurenz.a...@wien.gv.at>
To: 'NTPT *EXTERN*' <n...@seznam.cz>, pgsql-general@postgresql.org 
Date: 25. 11. 2015 12:54:17
Subject: Re: [GENERAL] Query failed: ERROR: character with byte sequence 0xc2 0x96 in encoding "UTF8" has no equivalent in encoding "WIN1250"

"NTPT wrote:
> I need help.
> 
> pg_exec(): Query failed: ERROR: character with byte sequence 0xc2 0x96 in 
encoding "UTF8" has no
> equivalent in encoding "WIN1250"
> 
> It is a strange. First there was a database with latin2 encoding.
> 
> to this database connect an aplicaton with "set client encoding to win1250"
> and manipulating data
> 
> then database was dumped with pg_dump -E UTF8
> 
> then database was restored pg_restore on another cluster in database with 
UTF8 encoding
> 
> then application connect to new database with "set client encoding to win1250"
> 
> and - query failed
> 
> 
> How in this scenario could invaid characters reach the database ???
> 
> And how to solve this ? Errort message is not very useful, because does 
not provide any hint (at least
> column and row)

I can reproduce that, and I think it is a bug.

Hex 96 is Unicode Code Point 2013 in Windows-1250, that is an "en dash".

1) You enter this byte into a Latin 2 database with client_encoding WIN1250,
and it gets stored as hex 96 in the database.

2) You dump this database with -E UTF8 and get hex C2 96 in the dump.

3) You restore this database to a new UTF8 database, the data end up
as hex C2 96.

4) You query with client_encoding WIN1250 and get the error you quote.

Now I think that the bug is in step 1).
Wikipedia says that hex 96 is undefined in Latin 2
(https://en.wikipedia.org/wiki/ISO/IEC_8859-2),
so instead of storing this byte, PostgreSQL should have complained that it 
cannot be converted to Latin 2, since indeed there is no "en dash" defined
in Latin 2.

The bug seems to be in
backend/utils/mb/conversion_procs/latin2_and_win1250/latin2_and_win1250.c,
function win12502mic().
I think that the entries in win1250_2_iso88592 corresponding to undefined 
characters
should be 0x00 to produce an error.

Yours,
Laurenz Albe"

[GENERAL] RE: [GENERAL] Query failed: ERROR: character with byte sequence 0xc2 0x96 in encoding "UTF8" has no equivalent in encoding "WIN1250"

2015-11-25 Thread NTPT
But how to update the affected columns? The error message does not provide a 
single clue (at least a row identifier).

And dump/restore? I do not understand how that could help: dumped as Unicode and 
restored as Unicode, I am at the same point... and dumping as LATIN2 and restoring 
to UTF8 will end with the same errors, I suspect.
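
A minimal sketch of one possible in-place fix in the existing UTF8 database (table and column names are hypothetical; it assumes the stray 0xc2 0x96 byte pair should really be an en dash, U+2013):

-- Hypothetical names; replaces the mis-encoded character with a proper en dash.
UPDATE my_table
   SET my_column = replace(my_column,
                           convert_from('\xc296'::bytea, 'UTF8'),
                           chr(8211))
 WHERE my_column LIKE '%' || convert_from('\xc296'::bytea, 'UTF8') || '%';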


-- Original message --
From: Albe Laurenz <laurenz.a...@wien.gv.at>
To: 'NTPT *EXTERN*' <n...@seznam.cz>
Date: 25. 11. 2015 14:47:46
Subject: RE: [GENERAL] Query failed: ERROR: character with byte sequence 0xc2 0x96 in encoding "UTF8" has no equivalent in encoding "WIN1250"

"NTPT wrote:
> but how to solve this in existing database ?

Either update all affected columns in the source database
or edit the database dump...

Yours,
Laurenz Albe"

[GENERAL] Migrate whole cluster to utf8

2015-10-21 Thread NTPT
Hi all

I have a DB cluster (around 50 GB of data) in LATIN2 encoding. Now I need to dump 
the whole cluster because of an upgrade to a newer version of PostgreSQL, but I 
need the new cluster created with UTF8 encoding, and the databases in that cluster 
too (with the cs_CZ locale).

What is the best / safest practice?

thanx for help


[GENERAL] Named / preparsed / preplaned(prepared) queries - Feature proposal

2011-11-06 Thread NTPT
Hi all

I have an idea for extending the PostgreSQL feature set.


I suggest a named / preparsed / preplanned query mechanism, where a named query is 
the simplest form and a preplanned query is the most sophisticated 
(implementation-wise). That is, the ability to assign a global, and maybe 
persistent, NAME (or alias) to some query for some user. It is similar to what 
PREPARE does, but AFAIK prepared statements are limited to the current user on the 
current connection only.
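
For comparison, the existing per-connection mechanism looks like this (a sketch using the table and column names from the proposal below):

-- PREPARE is visible only to the current session and disappears when it ends.
PREPARE myquery (int) AS
    SELECT foo FROM bar WHERE blah = $1;
EXECUTE myquery(42);
DEALLOCATE myquery;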

The proposed syntax would look like this:

NAMEDQUERY select foo from bar where blah=$1 AS myquery [PREPARSED | 
PREPLANED  REPLAN EVERY condition] [PERSISTENT] [FINAL] [FOR USER user1,user2]


And then modify user privileges: add a privilege to create a NAMEDQUERY, a 
privilege to create one for other users (typically useful for the database 
administrator), and a NAMED QUERY ONLY privilege that restricts some user, for 
example guest, to executing named queries.


PERSISTENT tells the server that this named query should survive a server restart.

FINAL: if the administrator sets a named query as FINAL, no other user is able to 
override it.

PREPARSED: the rationale behind preparsed queries is that most web application 
workloads consist of simple, uncomplicated selects of data that is often in the 
heap, in memory, or in an in-memory index, where actually fetching the data is 
faster than parsing the query string. So I think a global cache of preparsed 
NAMEDQUERY statements is a good idea and could boost performance significantly for 
some workloads.

PREPLANNED: the same as preparsed, but the query plan is cached too. For frequent 
simple queries (common in AJAX/Comet applications, which often run a simple select 
to query session status, etc.), I think it could remove a significant burden 
(parsing and planning the query) from the CPU.

Yes, I know that a query plan that is superior now could be inferior a few minutes 
later. So I suggest a REPLAN EVERY condition, where the condition can be whatever 
the user likes: a time interval, N percent of tuples changed, etc. Let the user 
decide which mechanism is best for them.
 

This feature could lead to two interesting things. First, it decouples the 
database from the application level and keeps the two realms relatively separate: 
if the underlying DB structure changes, just redefine the named queries and do not 
touch the application. It creates a sort of easy-to-use abstraction layer. It is a 
slightly object-like approach, because you can create a self-contained database 
with data and the methods for manipulating it (if a named query is set as 
PERSISTENT, dump/restore and replication solutions should handle persistent named 
queries as well).


The second thing is a powerful, simple, yet easy-to-use security feature with as 
much granularity as possible. For example, user guest, who has the NAMED QUERY 
ONLY privilege set, will be limited to the queries the database admin has defined 
for them, no matter what. I think it could be an easy-to-use privilege separation 
mechanism.

I have used PostgreSQL in my projects since the 6.x branch and I am happy with it, 
but the proposed feature is on my PostgreSQL wishlist.
Any comments welcome.

PS: Excuse my poor English.



[GENERAL] Re: Re: [GENERAL] Named / preparsed / preplaned(prepared) queries - Feature proposal

2011-11-06 Thread NTPT

I do not know. Should it be?

I think that features are proposed because user(s) want them and need them, 
because they can be useful for other people too, and because with them people can 
solve their problems and/or improve the performance and security of their 
applications. Not because someone else has something similar...

#  Original message 
# From: John R Pierce pie...@hogranch.com
# Subject: Re: [GENERAL] Named / preparsed / preplaned(prepared) queries - Feature proposal
# Date: 06.11.2011 17:35:57
# 
# On 11/06/11 6:07 AM, NTPT wrote:
#  I use postgresql  in my projects since 6.x branch and I am happy with it, but
# proposed feature is on my Postgresql wishlist.
#  Any comments wellcome.
#
# is this based on any existing feature in the SQL specification, or in
# other popular database engines?
#
#
# --
# john r pierceN 37, W 122
# santa cruz ca mid-left coast
#


[GENERAL] Open source database design tool , alternative to MicroOLDAP

2009-09-10 Thread NTPT
Hi all. Is there a freeware and/or open-source visual database design tool 
available for PostgreSQL? Something like the commercial microOLAP?


thanx for help



Re: [GENERAL] Rapid Seek Devices (feature request)

2009-08-17 Thread NTPT
AFAIK PostgreSQL measures characteristics of the data distribution in tables and 
indexes (that is what VACUUM ANALYZE does), but the results of those measurements 
are **weighted by** random_page_cost and seq_page_cost. So the measurements are 
correct, but the costs (weights) should reflect the real speed of sequential and 
random operations on the storage device(s) (tablespaces) involved.


Jeremy Harris wrote:

On 08/17/2009 03:24 AM, Craig Ringer wrote:

On 16/08/2009 9:06 PM, NTPT wrote:

So I suggest we should have random_page_cost and
Sequential_page_cost configurable on per tablespace basis.


That strikes me as a REALLY good idea, personally, though I don't know
enough about the planner to factor in implementation practicalities and
any cost for people _not_ using the feature.


Could not pgsql *measure* these costs (on a sampling basis, and with long
time-constants)?

- Jeremy





[GENERAL] Rapid Seek Devices (feature request)

2009-08-16 Thread NTPT

Hi all

I have some idea/feature request.

There are now several devices available that could be called rapid seek 
devices (RSD in the following text). I mean SSD disks, devices like the 
Gigabyte i-RAM, and other (semi)professional RAM-disk-like solutions, for 
example the Acard ANS-9010. "Rapid seek" because there are no moving parts, 
so non-sequential access (seeks) carries no penalty (no head-movement time). 
I think it would be cool to have explicit support for this kind of device.



1: PostgreSQL has tablespaces, so creating a tablespace on an RSD and putting 
some speed-critical indexes/tables there is optimal from the cost point of view 
(RSDs are not cheap, so using them for the speed-critical tasks within a single 
database is reasonable), but that is only half of what is needed. The query 
planner is strongly biased toward sequential reads when it tries to guess the 
best query plan. This bias is fine for classic disk-based storage, but is not 
necessary on an RSD. So, for tables/indexes on an RSD, a suboptimal query plan 
is likely to be generated (correct me if I am wrong).


So I suggest we should have random_page_cost and sequential_page_cost 
configurable on a per-tablespace basis, and a query planner that is aware of 
this, including situations where tables and their corresponding indexes live 
on tablespaces of different speeds.


Imagine this scenario: the default tablespace is on a SATA disk, with 
random_page_cost=8 and sequential_page_cost=3 (from the config file), and then:


CREATE TABLESPACE fast /path/to/multiple_15k_rpm_SCSI_hw_RAID random_page_cost=4 sequential_page_cost=2
CREATE TABLESPACE lightspeed /path/to/SSD_OR_I-RAM random_page_cost=1 sequential_page_cost=1.2


and now scatter your tables / indexes around new tablespaces :-)
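
Placing individual tables and indexes on such tablespaces already works with the existing syntax; a minimal sketch with hypothetical object names (the per-tablespace cost settings above are the proposed part):

-- Hypothetical names; the tablespaces are the ones created above.
ALTER TABLE hot_table SET TABLESPACE fast;
CREATE INDEX hot_table_speed_idx ON hot_table (some_column) TABLESPACE lightspeed;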


2: Many of these RSD devices are not very reliable (power outage for a RAM disk, 
a weak auxiliary battery in i-RAM-like devices, block wear-out in SSDs). Moving 
only the indexes to such a device (I found this article showing there IS a big 
difference from having only the indexes on an SSD: 
http://linux.com/archive/feature/142658), and just reindexing in the worst case, 
may be appropriate, but it is not suitable in a high-availability environment.


So, to solve the reliability problems of some RSDs, I suggest something like this:


CREATE TABLESPACE lightspeed /path/to/SSD_OR_i-RAM random_page_cost=1 sequential_page_cost=1.2 TEE name_of_the_slow_tablespace


i.e. read from the fast tablespace, write to both the fast and the slow one, and 
reconstruct the fast one from the slow one when appropriate.


Thanks for your attention.

PS: Excuse my poor English.




Re: [GENERAL] In memory Database for postgres

2009-04-13 Thread NTPT
If you need to run some database really fast, try putting only your indexes onto 
a RAM disk. Look here: http://www.linux.com/feature/142658 . They use an SSD to 
store the indexes (not the data) for PostgreSQL. I think the same conclusions 
should apply to a RAM disk too.

And in the worst case (power loss for a RAM disk, or wear-out for an SSD) you 
only need a REINDEX to rebuild your indexes.
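
A minimal sketch of that setup (hypothetical path and object names; the directory must already exist on the mounted RAM disk and be owned by the postgres user):

-- Hypothetical names; put only indexes on the volatile tablespace.
CREATE TABLESPACE ramdisk LOCATION '/mnt/ramdisk/pgidx';
CREATE INDEX orders_customer_idx ON orders (customer_id) TABLESPACE ramdisk;
-- After the RAM disk contents are lost, recreate the directory and rebuild:
REINDEX INDEX orders_customer_idx;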


Scott Marlowe wrote:

On Mon, Apr 13, 2009 at 2:06 PM, aravind chandu avin_frie...@yahoo.com wrote:
  

Hello,

   Thanks for your reply,but what I am actually looking for is
database should be an in-memory database and at the same i want to store
that data into disk so that data won't be lost when the system restarts or
in case of power failure. Can you guys tell me the procedure how to do this?
your help will he greatly appreciated.



But that's just the point people have been making.  A small enough db
will be cached completely in memory, and the only time you'll have to
access the disks is the first read, and during writes, which can be
made to happen mostly after the fact and not influence the rest of the
db.

You're trying to reinvent a wheel to solve a non-existent problem.

  




[GENERAL] MULE_INTERNAL translation to win1250

2007-01-28 Thread NTPT

Hi.

I have a strange problem in Postgres 8.1.4 (Gentoo, 64-bit, on the AMD64 
platform).

My database is created with LATIN2 encoding for correct viewing of national 
characters (Czech language).

Inside the code of my PHP application I set the client encoding to win1250, 
because I need the query output in this encoding.

On some parts of the data I get an error:

Query failed: ERROR: character 0x829a of encoding MULE_INTERNAL has no
equivalent in WIN1250

Without set client_encoding to win1250 the query works. I am curious why 
MULE_INTERNAL is mentioned at all, when \l+ says that the corresponding database 
(and even the whole cluster) is created with LATIN2 encoding.

Strangely enough, ALL INSERTS were done with the WIN1250 client encoding too.
Maybe a bug in the charset translation routines of Postgres?


And how can I repair it, preferably in the whole database?


Thanx for help.





Re: [GENERAL] MULE_INTERNAL translation to win1250

2007-01-28 Thread NTPT
I did some further investigation. I found and identified the exact row in the 
database and the exact column that causes the problem. I am able to SELECT the 
column into a test table, and in the test table it retains its bad behavior. 
Fortunately, this row does not contain vital data, so I can drop it without a 
bigger problem, but I would like to know why.

I am able to identify the single character that causes the problem, in the real 
data and in the test table too (rather a character combination, found using the 
substring function: it seems that at a certain point two characters are taken as 
a single 16-bit one), but I am not able to reproduce the behavior on a fresh 
table using INSERT and SELECT statements. Please give me some tips on where to 
search and what other information to provide.


thank you.

- Original Message - 
From: Tom Lane [EMAIL PROTECTED]

To: NTPT [EMAIL PROTECTED]
Cc: pgsql-general@postgresql.org
Sent: Monday, January 29, 2007 12:33 AM
Subject: Re: [GENERAL] MULE_INTERNAL translation to win1250



NTPT [EMAIL PROTECTED] writes:
Without set client_encoding to win1250 query works. I am curious why there
is a MULE_INTERNAL mentioned even when \l+ say that corresponding database
is created with (and even all the cluster) LATIN2 encoding.


The conversions between LATIN2 and WIN1250 go by way of MULE_INTERNAL to
reduce duplication of code.  It shouldn't make any difference to the end
result though.  Are you sure that the characters you're using are
supposed to have representations in both character sets?


May be a bug in charset translation routines of postgres ?


If you think that, you need to provide us with the exact codes that are
being mistranslated and what you think they should translate to.

regards, tom lane











[GENERAL] \dt gives ERROR: did not find '}' at end of input node

2005-07-24 Thread NTPT



Please, I need help.

I receive the error in the subject line when trying to run a \dt or \di command 
in psql. What is wrong and how can it be fixed?

The data in this database are several GB and are vital for me...




[GENERAL] Filesystem level backup and 32 / 64 bit

2005-03-15 Thread NTPT
I have one box with a native 64-bit Linux distribution (Gentoo on AMD) running 
pg 8.x, and another box with a 32-bit distro running a version of pg 8.x.

Is it possible to take a filesystem-level backup (copying everything in the 
$PGDATA directory) from the 64-bit system and use it as $PGDATA on the native 
32-bit system with the same version of PostgreSQL?

I.e. to have one big 64-bit server and a small but cheap 32-bit box as an 
emergency backup?



Re: [GENERAL] More concurent transaction over single connection

2005-03-06 Thread NTPT
OK, let's take some model scenarios. Say it is a web server with an embedded 
language like PHP.

1: Multi-process server (like Apache 1.x): each process uses one persistent 
connection, right? One process can serve only one request at a time, right? When 
the request is finished, the process holds its connection open and awaits a new 
request. From the point of view of transactions this is OK, because transactions 
over one persistent connection are serialized by nature.

2: One process, but multiple threads. If each thread has its own separate DB 
connection, it is OK; it is like the previous example, just substitute the word 
"process" with "thread".

3: One process, multiple threads, all threads sharing the same single persistent 
connection. Because one thread serves one request at a time, but threads can run 
concurrently (AFAIK), I am afraid that multiple transactions over the single 
connection in this scenario will result in a complete mess. Am I right?

Please excuse my poor English.



- Original Message - 
From: Richard Huxton dev@archonet.com
To: NTPT [EMAIL PROTECTED]
Cc: Postgres General pgsql-general@postgresql.org
Sent: Wednesday, February 09, 2005 11:45 AM
Subject: Re: [GENERAL] More concurent transaction over single connection


NTPT wrote:
AFAIK (7.4.x) there is one limitation in persistant connections to 
postgresql from various frontends ( 
http://cz.php.net/manual/en/features.persistent-connections.php ), 
because it can not use transactions in situation where more concurent 
tasks use a single connection (execuse my wrong english)


I suggest to add  some sort of context identificator to 
frontend/backend protocol to overcome this limit. Ie frontend - ( like 
PHP for example ) make ONE persistant connection  and different scripts 
are served over this connection. But frontend add for each instance  of 
script a unique context identificator and postgresql server  will treat 
different contexts as they was send by different connections. The 
results wil be sorted by context  by frontend and feeded to apprpriate 
instance of the php script
You've just reinvented connections. The problem is at the application end 
really, since PHP doesn't provide a middle-ware layer to manage this sort 
of stuff. Typically, java-based application servers manage this sort of 
thing for you.

I think it may add some benefit  to avoiding connection starting costs, 
especially in case where database and client are in greater network 
distance and/or need to use some expensive procedure to start connection 
and allow a relay simple and transparent  connection pooling,  may be a 
some type od spare servers like in Apache (MinSpareServers and Max 
SpareServers configuration directive )
Perhaps take a look at pgpool connection pooling.
--
  Richard Huxton
  Archonet Ltd


Re: [GENERAL] row numbering

2005-02-28 Thread NTPT
If you insert the results of your query into a table with a serial
column, the serial column will do what you want..
Plus it adds a huge overhead... ?

Having some sort of row numbering in the query result would be nice...
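
A minimal sketch of the serial-column approach quoted above (hypothetical table and column names; the serial column numbers the rows in the order the INSERT ... SELECT produces them):

-- Hypothetical names.
CREATE TEMP TABLE numbered (rownum serial, val text);
INSERT INTO numbered (val)
    SELECT some_column FROM some_table ORDER BY some_column;
SELECT rownum, val FROM numbered ORDER BY rownum;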



- Original Message - 
From: Mike Harding [EMAIL PROTECTED]
To: Peter Eisentraut [EMAIL PROTECTED]
Cc: josue [EMAIL PROTECTED]; pgsql-general@postgresql.org
Sent: Saturday, February 26, 2005 1:27 AM
Subject: Re: [GENERAL] row numbering


If you insert the results of your query into a table with a serial
column, the serial column will do what you want..
On Sat, 2005-02-26 at 01:10 +0100, Peter Eisentraut wrote:
josue wrote:
 is there a way return a column with the row number automatically
 generated according the way the rows were processed by the query.
No, but you can easily keep a counter in the client.
--
Mike Harding [EMAIL PROTECTED]


[GENERAL] More concurent transaction over single connection ?

2005-02-08 Thread NTPT
AFAIK (7.4.x) there is one limitation of persistent connections to PostgreSQL 
from various frontends ( 
http://cz.php.net/manual/en/features.persistent-connections.php ): transactions 
cannot be used when several concurrent tasks share a single connection (excuse 
my poor English).


I suggest adding some sort of context identifier to the frontend/backend protocol 
to overcome this limit. I.e. the frontend (PHP, for example) makes ONE persistent 
connection and different scripts are served over this connection, but the 
frontend adds a unique context identifier for each script instance, and the 
PostgreSQL server treats different contexts as if they had been sent over 
different connections. The results are then sorted by context by the frontend and 
fed to the appropriate instance of the PHP script.

I think this could help avoid connection startup costs, especially where the 
database and the client are separated by a greater network distance and/or need 
some expensive procedure to start a connection, and it would allow really simple 
and transparent connection pooling, maybe with some kind of spare servers as in 
Apache (the MinSpareServers and MaxSpareServers configuration directives).

What do you think about it  ? 



[GENERAL] How many connections now ?

2005-02-05 Thread NTPT
Is there a way to determine how many connections to the database are active?
Some form of SELECT? I.e. how many client applications are connected to the 
server? I need this value for a client application...
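
A sketch of one way to get it (assumes the statistics collector is enabled, which is the default):

-- Count the current backends across all databases:
SELECT count(*) FROM pg_stat_activity;
-- Or broken down per database:
SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;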

Thanks for the help.



[GENERAL] Postgresql and Athlon64 ?

2005-01-31 Thread NTPT



Will I get some advantage, better performance etc., using Postgres 7.4 or 
Postgres 8.x on an Athlon64 system with a 64-bit Linux distro? Are there any 
benchmarks available, or does someone have personal experience?



Or should I stay on a 32-bit platform for a while?

Thanx for help 


Re: [GENERAL] Postgresql and Athlon64 ?

2005-01-31 Thread NTPT
So PostgreSQL can benefit from a 64-bit architecture?

- Original Message - 
From: P.J. Josh Rovero [EMAIL PROTECTED]
To: NTPT [EMAIL PROTECTED]
Cc: 'PgSql General' pgsql-general@postgresql.org
Sent: Monday, January 31, 2005 2:33 PM
Subject: Re: [GENERAL] Postgresql and Athlon64 ?


It runs fine, and is quite peppy on Fedora Core 2 for AMD64.
I have not run into any problems.
NTPT wrote:
Will I have some advantages, better performance etc using postgres 7.4 
or postgres 8.x on Athlon64 system with 64 bit Linux distro ?Are there 
asome benchmark available or someone  personal experience ?  
 
 
Or should I stay in 32 bit platform  for a while ?
--
P. J. Josh Rovero Sonalysts, Inc.
Email: [EMAIL PROTECTED]www.sonalysts.com215 Parkway North
Work: (860)326-3671 or 442-4355 Waterford CT 06385
***



[GENERAL] Exact or less specific match ?

2004-10-27 Thread NTPT
Hi.
I have a table like this:

create table my_data (
    cond_1 int8, cond_2 varchar, cond_3 ... cond_n whatever, data text)

This table represents a simple tree structure with a known maximum level (n). 
The table is filled with data, but the branches do not all have the same depth.

Now I need to select from the table:

select data from my_data where cond_1=x AND cond_2='blah' AND cond_3=... AND cond_n=whatever

But this tree does not have the same depth in all its branches. So I need to 
select the exact match and, if an exact match is not possible (i.e. there is no 
row that satisfies the WHERE condition), to select with WHERE cond_1=x AND 
cond_2='blah' AND cond_3=... AND cond_(n-1)=whatever, and so on, until 'data' is 
not empty or the top of the tree is reached (i.e. if there is no match, find the 
data from an upper node of the tree).

I know that a similar effect can be reached with COALESCE:

select coalesce(
    (select data from my_data where cond_1=x AND cond_2='blah' AND cond_3=... AND cond_n=whatever),
    (select data from my_data where cond_1=x AND cond_2='blah' AND cond_3=... AND cond_(n-1)=whatever),
    ...,
    (select data from my_data where cond_1=x))

but I think it is not ideal, because it performs N subselects, which can eat a 
lot of machine time...


Is there some other way to do an exact-or-less-specific match?
Thank you. Please excuse my bad English.
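
One possible shape for such a query, sketched with three levels and hypothetical constant values; it still spells out the per-level subselects, but runs as a single statement and returns only the most specific hit (whether it beats the COALESCE form depends on the indexes):

SELECT data
  FROM (SELECT data, 3 AS specificity FROM my_data
         WHERE cond_1 = 1 AND cond_2 = 'blah' AND cond_3 = 42
        UNION ALL
        SELECT data, 2 FROM my_data
         WHERE cond_1 = 1 AND cond_2 = 'blah'
        UNION ALL
        SELECT data, 1 FROM my_data
         WHERE cond_1 = 1) AS candidates
 ORDER BY specificity DESC
 LIMIT 1;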



Re: [GENERAL] Index on TEXT versus CHAR(32)... fast exact TEXT matching

2004-09-06 Thread NTPT
What about using a CRC32 checksum of the text, computed by the client 
application, and then indexing the CRC32 value?

i.e.

ALTER TABLE mytable ADD COLUMN crc int4;
ALTER TABLE mytable ADD COLUMN md5 varchar(255);
CREATE INDEX blabla ON mytable (crc);
-- or even
CREATE INDEX blabla2 ON mytable (crc, md5);

and query like: ... WHERE crc = 'crc32 of your searched text' AND md5 = 'md5 hash'

I am not sure, but this should work pretty fast.


- Original Message - 
From: Jon Lapham [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, September 04, 2004 3:04 PM
Subject: [GENERAL] Index on TEXT versus CHAR(32)... fast exact TEXT matching


 I have a table that stores TEXT information.  I need query this table to
 find *exact* matches to the TEXT... no regular expressions, no LIKE
 queries, etc.  The TEXT could be from 1 to 1+ characters in length,
 quite variable.  If it matters, the TEXT may contain UNICODE characters...

 Example:
 CREATE TABLE a (id SERIAL, thetext TEXT);
 SELECT id FROM a WHERE thetext='Some other text';

 One way I thought to optimize this process would be to store an MD5 hash
 of the thetext column and query on that:

 CREATE TABLE a (id SERIAL, thetext TEXT, thetext_md5 CHAR(32));
 SELECT id FROM a WHERE thetext_m5d=md5('Some other text');

 Now, obviously I would want to build an INDEX on either thetext or
 thetext_md5, depending on which way I decide to make the table.

 My question is, what is the absolute fastest way to find the exact match
 to a TEXT column?  Any amount of pre-processing is fine (such as
 calculating the MD5's of all the TEXT tuples), but the query must be
 extremely fast.

 Has anyone compared (theoretical or practical) the performance of
 querying a TEXT-based INDEX versus a CHAR(32)-based INDEX?  Is my MD5
 idea a waste of time?  Is there something better than MD5?  Would it be
 better to store the fingerprint of the TEXT as an integer somehow, so
 that the INDEX could be based on a INT* column?

 Thanks for any help!

 -Jon

 -- 
 -**-*-*---*-*---*-*---*-*-*-*---*-*---*-*-*-*-*---
   Jon Lapham  [EMAIL PROTECTED]Rio de Janeiro, Brasil
   Personal: http://www.jandr.org/
 ***-*--**---***---








[GENERAL] Relocation error pg_dumpall undefined symbol get_progname in 7.4.3

2004-06-20 Thread NTPT



I recently compiled PostgreSQL 7.4.3 with gcc 3.4.0 on RH 7.3. The regression 
tests work well, all passed, but some binaries report a relocation error:

pg_dumpall: undefined symbol: get_progname

This error occurs in most of the binaries. What is wrong?



[GENERAL] Numbering a records

2004-02-18 Thread NTPT
I have this table:

content (id int8, owner int8, position int8, timestamp int8, description text, batch int8)

The table gets frequent inserts and deletes; 'id' is almost random.



I insert the following set of rows into the table:

 12345, 1000, 1, timestamp, blabla, 0
 12349, 1000, 2, timestamp, blabla, 0
 12355, 1001, 1, timestamp, blabla, 0
 12389, 1000, 3, timestamp, blabla, 0
 etc. There are many of these records.

Now I need to do a select like this:

select * from content where owner='1000' order by timestamp, with some limits, 
offsets etc. That is OK, no problem.

Another select, needing the records of user 1000 WHERE position > 5 AND 
position < 150, is also OK.

But now some records are inserted, some are deleted, and some have the timestamp 
column updated, so the 'position' column is no longer sequential. I need some 
UPDATE ... WHERE owner='id of the owner' ORDER BY timestamp that recalculates the 
'position' column to contain the actual position within the timestamp-ordered set 
(i.e. the position column holds the actual order of the records owned by 'owner', 
ordered by timestamp). Please note that plain LIMIT/OFFSET is not what I need.


Closely related to this, I have another problem. I NEED to assign a batch number 
to the records from this example, i.e. in the table content, where owner='id of 
the owner', ordered by timestamp, the first 500 records should share the 'batch' 
number '1', the second set of 500 records should have batch number '2', and so on.

Is it possible, and how can it be done?
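
A sketch of one way to do both at once (hypothetical owner value; assumes ids are unique and a PostgreSQL version with window functions, 8.4 or later):

-- Renumber one owner's rows in timestamp order and assign 500-row batches.
UPDATE content c
   SET "position" = s.rn,
       batch      = (s.rn - 1) / 500 + 1
  FROM (SELECT id,
               row_number() OVER (ORDER BY "timestamp") AS rn
          FROM content
         WHERE owner = 1000) AS s
 WHERE c.id = s.id;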


PS: Excuse my bad English.



Re: [GENERAL] Specifying many rows in a table

2004-01-30 Thread NTPT

- Original message - 
From: Steve Atkins [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: 28 January 2004 20:32
Subject: [GENERAL] Specifying many rows in a table


 I have a large table (potentially tens or hundreds of millions of rows) and
 I need to extract some number of these rows, defined by an integer primary
 key.
 
 So, the obvious answer is
 
   select * from table where id in (1,3,4);


Shouldn't it be   select * from table where id in ('1','3','4'); ?
I.e. add single quotes around the numbers? I think I read a statement in the 
documentation that without the single quotes an index scan may not always be used?


 
 But I may want to extract a large number of rows, many thousands
 
   select * from table where id in (1, 3, 5, , 10, 100017, 23000);
 
 This falls over when it exceeds the maximum expression depth of 10,000.
 And I have a sneaky feeling that increasing max_expr_depth isn't the
 right fix.
 
 Performance is pretty important, so does anyone have a good suggestion
 for how to phrase this query so that it'll at worst only be a single
 seq-scan through the mondo table, and make a reasonable choice as to
 whether to use an index-scan or seq-scan, depending on the number
 of rows I'm pulling out?
 
 Cheers,
   Steve
 


[GENERAL] Touch row ?

2004-01-23 Thread NTPT
Is it possible to add a column to a table that will automatically contain the 
date+time (or, preferably, a Unix timestamp) of when the row was touched/changed, 
i.e. by an INSERT or UPDATE?
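
A minimal sketch using a trigger (hypothetical table and column names; assumes the plpgsql language is installed in the database):

-- Hypothetical names.
ALTER TABLE mytable ADD COLUMN touched timestamptz;

CREATE OR REPLACE FUNCTION touch_row() RETURNS trigger AS '
BEGIN
    -- For a Unix timestamp instead, use an int8 column and extract(epoch from now()).
    NEW.touched := now();
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER mytable_touch
    BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE touch_row();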
