Re: [GENERAL] How to find greatest record before known values fast

2014-10-03 Thread Andrus

Hi!

So kellaaeg is a time? Your best bet here would be to create an index that 
is an actual timestamp comprised of both kuupaev and kellaaeg. You could 
do this with to_timestamp by concatenating both fields together, or it may 
be easier to replace the space in kellaaeg with a colon and cast it to 
time, then add the two:

  kuupaev + replace( kellaaeg, ' ', ':' )::time
I know you can't alter the table, but can you create a view on top of the 
table? If you did that, you could have a real timestamp field in the view 
that is calculated from kuupaev and kellaaeg and you can create a 
functional index that uses the same calculation. That would be the easiest 
way to use this.
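A sketch of that approach, using the column names from the thread (the table name `sales` and the 'HH MI' format of kellaaeg are assumptions; note the index expression must match the query's expression exactly for the planner to use it):

```sql
-- view exposing a real timestamp computed from the two columns
CREATE VIEW sales_with_ts AS
SELECT s.*,
       s.kuupaev + replace(s.kellaaeg, ' ', ':')::time AS ts
FROM sales s;

-- matching functional (expression) index on the base table
CREATE INDEX sales_ts_idx
    ON sales ((kuupaev + replace(kellaaeg, ' ', ':')::time));
```

A query like SELECT ... FROM sales_with_ts ORDER BY ts DESC LIMIT 1 can then use the index.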


Thank you.
I solved this by creating composite index on 3 columns and re-writing query 
as Tom recommended.

It looks like Tom's recommendation is simpler for me.

Andrus. 




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Getting my Database name in a C Extension

2014-10-03 Thread Cedric Berger

On 02/10/14 18:13, Tom Lane wrote:


Cedric Berger ced...@precidata.com writes:


1) What is the easiest way to get that directly in C?


The usual locution is get_database_name(MyDatabaseId).


Ok, but then how do I find MyDatabaseId in, say,
a BeginForeignScan() or GetForeignRelSize() FDW callback?

http://www.postgresql.org/docs/9.3/static/fdw-callbacks.html

Sorry for these beginner's questions, but I've trouble navigating
the documentation / source / examples for this kind of info.

PS: is "which see for additional details" really good English
in the fdw-callbacks.html documentation?


2) Is there a way to get this information in the SQL extension
installation/update scripts (like the @/extschema/@ substitution)?


Nope.  Doesn't seem to me like a remarkably good idea to refer to it
in that sort of way anyway.  What would happen if someone renamed
the database after the extension is installed?


Ok,
Makes sense,

Thanks
Cedric





Re: [GENERAL] PostgreSQL Inheritance and column mapping

2014-10-03 Thread Achilleas Mantzios

On 03/10/2014 05:54, Jim Nasby wrote:

On 10/2/14, 9:00 AM, Tom Lane wrote:

Achilleas Mantzios ach...@matrix.gatewaynet.com writes:

Was there ever any discussion/thought about being able to follow a non-strict
by-name column mapping between inherited tables and parent tables?

No.  You could use a view with UNION ALL perhaps.

FWIW, I've had some less than stellar results with that (admittedly, back on 
8.4).

The other thing you could do is something like:

ALTER TABLE invoice_document RENAME TO invoice_document_raw;
ALTER TABLE invoice_document_raw RENAME invoice_no TO doc_no;
CREATE VIEW invoice_document AS
SELECT ...
, doc_no AS invoice_no
, ...
FROM invoice_document_raw
;

If you make that view writable then no one needs to know that you renamed the 
column in the underlying table.
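For what it's worth, on PostgreSQL 9.3+ a simple single-table view like this is automatically updatable; on older releases an INSTEAD OF trigger would be needed, roughly like this (a sketch only; it assumes a primary key column `id` and handles just the renamed column):

```sql
CREATE OR REPLACE FUNCTION invoice_document_upd() RETURNS trigger AS $$
BEGIN
    -- map the view's invoice_no back to the renamed base column doc_no
    UPDATE invoice_document_raw
       SET doc_no = NEW.invoice_no
     WHERE id = OLD.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER invoice_document_upd
INSTEAD OF UPDATE ON invoice_document
FOR EACH ROW EXECUTE PROCEDURE invoice_document_upd();
```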


That is a brilliant idea, thank you!
One problem is that the tables are part of a 100-node replication system based
on a heavily hacked version of DBMirror, over a non-TCP/IP satellite network.
That would require rewriting rules and deploying this across the remote nodes.
I would be afraid to run the ALTER TABLE ... RENAME TO command in this system.
So, we could just bite the bullet and have our team rewrite all programs.

--
Achilleas Mantzios
Head of IT DEV
IT DEPT
Dynacom Tankers Mgmt





Re: [GENERAL] Getting my Database name in a C Extension

2014-10-03 Thread Albe Laurenz
Cedric Berger wrote:
 1) What is the easiest way to get that directly in C?

 The usual locution is get_database_name(MyDatabaseId).
 
 Ok, but then how do I find MyDatabaseId in, say,
 a BeginForeignScan() or GetForeignRelSize() FDW callback?

It is a global; all you should have to do is
#include "miscadmin.h"

Yours,
Laurenz Albe



Re: [GENERAL] Getting my Database name in a C Extension

2014-10-03 Thread Michael Paquier
On Fri, Oct 3, 2014 at 4:57 PM, Albe Laurenz laurenz.a...@wien.gv.at
wrote:
 Cedric Berger wrote:
 1) What is the easiest way to get that directly in C?

 The usual locution is get_database_name(MyDatabaseId).

 Ok, but then how do I find MyDatabaseId in, say,
 a BeginForeignScan() or GetForeignRelSize() FDW callback?

 It is a global; all you should have to do is
 #include "miscadmin.h"
When looking for a global variable, a command like this is generally useful:
$ git grep MyDatabaseId -- *.h
src/include/access/xact.h:  Oid dbId;           /* MyDatabaseId */
src/include/miscadmin.h:extern PGDLLIMPORT Oid MyDatabaseId;
Regards,
-- 
Michael


Re: [GENERAL] deadlock of lock-waits (on transaction and on tuple) using same update statement

2014-10-03 Thread Andrej Vanek
Hi,

retested:
yes, this is still an issue in 9.3.5; the same deadlock errors occurred.
Do you need me to extract a simplified reproducible test case?

Best Regards, Andrej


Re: [GENERAL] Getting my Database name in a C Extension

2014-10-03 Thread Cedric Berger

On 03/10/14 10:23, Michael Paquier wrote:


On Fri, Oct 3, 2014 at 4:57 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
  Cedric Berger wrote:
  1) What is the easiest way to get that directly in C?
 
  The usual locution is get_database_name(MyDatabaseId).
 
  Ok, but then how do I find MyDatabaseId in, say,
  a BeginForeignScan() or GetForeignRelSize() FDW callback?
 
  It is a global; all you should have to do is
  #include "miscadmin.h"
When looking for a global variable, a command like this is generally useful:
$ git grep MyDatabaseId -- *.h
src/include/access/xact.h:  Oid dbId;           /* MyDatabaseId */
src/include/miscadmin.h:extern PGDLLIMPORT Oid MyDatabaseId;


Thanks a lot!

I didn't think 'MyDatabaseId' was a real variable name...

Cedric





Re: [GENERAL] How to find greatest record before known values fast

2014-10-03 Thread Merlin Moncure
On Fri, Oct 3, 2014 at 1:28 AM, Andrus kobrule...@hot.ee wrote:
 Hi!

 So kellaaeg is a time? Your best bet here would be to create an index that
 is an actual timestamp comprised of both kuupaev and kellaaeg. You could do
 this with to_timestamp by concatenating both fields together, or it may be
 easier to replace the space in kellaaeg with a colon and cast it to time,
 then add the two:
   kuupaev + replace( kellaaeg, ' ', ':' )::time
 I know you can't alter the table, but can you create a view on top of the
 table? If you did that, you could have a real timestamp field in the view
 that is calculated from kuupaev and kellaaeg and you can create a functional
 index that uses the same calculation. That would be the easiest way to use
 this.


 Thank you.
 I solved this by creating composite index on 3 columns and re-writing query
 as Tom recommended.
 It looks like Tom's recommendation is simpler for me.

Also,

*) quit using the char() type.  use varchar() -- at least in postgres,
it's better in every respect.  the char type pads out the fields on
disk.  (this is a common newbie error in postgres, since that may not
necessarily be true in other databases)

*) the numeric type gives fixed-point operations and clean comparisons and
so is generally necessary, but in some cases floating point
(float4/float8) is more compact and gives better performance without
much downside.

*) 9.0 is up to 9.0.18.  Time to upgrade. (it's a binary only replacement).

*) indexes like this: (kuupaev), (kuupaev, kellaaeg) are mostly
redundant because the composite index can service queries on kuupaev
nearly as well as the single field index.  Either kill the single
field index to get better memory utilization or reverse the fields in
the composite index to (kellaaeg, kuupaev) if you make frequent
searches on 'kellaaeg'.

Indexes match quals in left to right order to give the best
performance.   So, an index on a,b,c gives good performance for
searches on (a), (a,b), and (a,b,c).  There are certain limited
exceptions to this rule but it's a good design principle to know.
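The left-to-right matching rule can be illustrated with a sketch (table and column names hypothetical):

```sql
CREATE INDEX t_abc_idx ON t (a, b, c);

-- these predicates can use t_abc_idx efficiently:
SELECT * FROM t WHERE a = 1;
SELECT * FROM t WHERE a = 1 AND b = 2;
SELECT * FROM t WHERE a = 1 AND b = 2 AND c = 3;

-- this one generally cannot, since the leading column a is unconstrained:
SELECT * FROM t WHERE b = 2;
```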

merlin




Re: [GENERAL] installing on mac air development machine

2014-10-03 Thread john gale

The GUI installer for Mac OS X downloaded from postgresql.org works fine.

~ john

On Oct 2, 2014, at 3:50 PM, john.tiger john.tigernas...@gmail.com wrote:

 we've always installed on linux so need help with a new mac air running 
 latest osx
 
 in the instructions it shows several methods:
 1) enterprisedb (but this does not look open source ?)
 2) fink
 3) macports
 4) source
 etc
 
 what do most use ?  thks
 
 ps:  is there a mac build for 9.4 beta 3 yet ?  we would use that if 
 available and not too hard to install
 
 
 





[GENERAL] Really strange foreign key constraint problem blocking delete

2014-10-03 Thread Tim Mickelson
Even if I try to delete the data entry in the table channel_mapping with
idaut 1622, it is not possible to delete from the table
tmp_autenticazionesocial with the error below. How is this even possible
since there is no channel_mapping with idaut 1622 any more?? I tried
this in Java under a transaction, but also in pgAdmin III.


ERROR:  update or delete on table "tmp_autenticazionesocial" violates
foreign key constraint "channel_mapping_idaut_fkey" on table
"channel_mapping"
DETAIL:  Key (idautenticazionesocial)=(1622) is still referenced from
table "channel_mapping".
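When chasing an error like this, two quick checks can help (a sketch using the names from the error message; adjust schema qualification as needed):

```sql
-- are there really no channel_mapping rows pointing at key 1622?
SELECT * FROM channel_mapping WHERE idaut = 1622;

-- list every foreign key that references tmp_autenticazionesocial,
-- in case more than one constraint (or schema) is involved
SELECT conname, conrelid::regclass AS referencing_table
FROM pg_constraint
WHERE contype = 'f'
  AND confrelid = 'tmp_autenticazionesocial'::regclass;
```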





[GENERAL] faster way to calculate top tags for a resource based on a column

2014-10-03 Thread Jonathan Vanasco
I've been able to fix most of my slow queries into something more acceptable, 
but I haven't been able to shave any time off this one.  I'm hoping someone has 
another strategy.

I have 2 tables:
resource
resource_2_tag

I want to calculate the top 25 tag_ids in resource_2_tag  for resources 
that match a given attribute on the resource table.

both tables have around 1.6 million records.

If the database needs to warm up and read into cache, this can take 60 seconds
to read the data off disk.
If the database doesn't need to warm up, it averages 1.76 seconds.

The 1.76s time is troubling me.
Searching for the discrete elements of this is pretty lightweight.

here's an explain --  http://explain.depesz.com/s/PndC 

I tried a subquery instead of a join, and the query optimizer produced the
same plan.

i'm hoping someone will see something that I just don't see.



  Table "public.resource_2_tag"
   Column    |  Type   | Modifiers
-------------+---------+-----------
 resource_id | integer |
 tag_id      | integer |
Indexes:
    "_idx_speed_resource_2_tag__resource_id" btree (resource_id)
    "_idx_speed_resource_2_tag__tag_id" btree (tag_id)

  Table "public.resource"
         Column          |  Type   |                      Modifiers
-------------------------+---------+-------------------------------------------------------
 id                      | integer | not null default nextval('resource_id_seq'::regclass)
 resource_attribute1_id  | integer |
 (lots of other columns) |         |
Indexes:
    "resource_attribute1_idx" btree (resource_attribute1_id)



select count(*) from resource;
-- 1669729

select count(*) from resource_2_tag;
-- 1676594

select count(*) from resource where resource_attribute1_id = 614;
-- 5184
-- 4.386ms

select id from resource where resource_attribute1_id = 614;
-- 5184
-- 87.303ms

popping the 5k elements into an IN clause will run the query in around 100ms.
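For reference, that IN-clause form looks like this (the literal id list is elided; the first three ids are purely illustrative):

```sql
SELECT tag_id, count(tag_id) AS counted
FROM resource_2_tag
WHERE resource_id IN (101, 102, 103)  -- ... the ~5k ids returned by the query above
GROUP BY tag_id
ORDER BY counted DESC
LIMIT 25 OFFSET 0;
```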


EXPLAIN ANALYZE
SELECT 
resource_2_tag.tag_id AS resource_2_tag_tag_id, 
count(resource_2_tag.tag_id) AS counted 
FROM 
resource_2_tag 
JOIN resource ON resource.id = resource_2_tag.resource_id 
WHERE 
resource.resource_attribute1_id = 614 
GROUP BY resource_2_tag.tag_id 
ORDER BY counted DESC 
LIMIT 25 OFFSET 0;



 Limit  (cost=76659.61..76659.68 rows=25 width=4) (actual time=1502.902..1502.913 rows=25 loops=1)
   ->  Sort  (cost=76659.61..76672.47 rows=5141 width=4) (actual time=1502.900..1502.906 rows=25 loops=1)
         Sort Key: (count(resource_2_tag.tag_id))
         Sort Method: top-N heapsort  Memory: 26kB
         ->  HashAggregate  (cost=76463.13..76514.54 rows=5141 width=4) (actual time=1487.016..1495.206 rows=13887 loops=1)
               ->  Hash Join  (cost=35867.88..76437.42 rows=5141 width=4) (actual time=97.654..1453.337 rows=27068 loops=1)
                     Hash Cond: (resource_2_tag.resource_id = resource.id)
                     ->  Seq Scan on resource_2_tag  (cost=0.00..25847.94 rows=1676594 width=8) (actual time=0.032..513.046 rows=1676594 loops=1)
                     ->  Hash  (cost=35803.88..35803.88 rows=5120 width=4) (actual time=97.576..97.576 rows=5184 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 183kB
                           ->  Bitmap Heap Scan on resource  (cost=272.68..35803.88 rows=5120 width=4) (actual time=5.911..90.264 rows=5184 loops=1)
                                 Recheck Cond: (resource_attribute1_id = 614)
                                 ->  Bitmap Index Scan on resource_attribute1_idx  (cost=0.00..271.40 rows=5120 width=0) (actual time=3.575..3.575 rows=5184 loops=1)
                                       Index Cond: (resource_attribute1_id = 614)
 Total runtime: 1503.146 ms






Re: [GENERAL] Really strange foreign key constraint problem blocking delete

2014-10-03 Thread Vick Khera
Using my magick powers of mind reading, I will guess you made circular
dependencies.

On Fri, Oct 3, 2014 at 12:17 PM, Tim Mickelson
tim_mickel...@bigfoot.com wrote:
 Even if I try to delete the data entry in the table channel_mapping with
 idaut 1622, it is not possible to delete from the table
 tmp_autenticazionesocial with the error below. How is this even possible
 since there is no channel_mapping with idaut 1622 any more?? I tried
 this in Java under a transaction, but also in pgAdmin III.


 ERROR:  update or delete on table tmp_autenticazionesocial violates
 foreign key constraint channel_mapping_idaut_fkey on table
 channel_mapping
 DETAIL:  Key (idautenticazionesocial)=(1622) is still referenced from
 table channel_mapping.







Re: [GENERAL] How to find greatest record before known values fast

2014-10-03 Thread Andrus

Hi!

Thank you for explanations.


 the char type pads out the fields on disk.


It looks like you wrote that char takes more disk space.

from

http://www.pgcon.org/2013/schedule/attachments/269_tour-of-postgresql-data-types.pdf

page 28:

Unlike many databases, char(n) is NOT stored as a fixed-sized field
in Postgres. It is treated exactly the same as
varchar(n) except for being padded


So char type does not take more space than varchar.

Andrus. 






Re: [GENERAL] How to find greatest record before known values fast

2014-10-03 Thread Merlin Moncure
On Fri, Oct 3, 2014 at 3:28 PM, Andrus kobrule...@hot.ee wrote:
 Hi!

 Thank you for explanations.

  the char type pads out the fields on disk.


 It looks like you wrote that char takes more disk space.

 from

 http://www.pgcon.org/2013/schedule/attachments/269_tour-of-postgresql-data-types.pdf

 page 28:

 Unlike many databases, char(n) is NOT stored as a fixed-sized field in
 Postgres. It is treated exactly the same as
 varchar(n) except for being padded

 So char type does not take more space than varchar.

I beg to differ:

postgres=#  create table t1(v char(100));
CREATE TABLE
postgres=#  create table t2(v varchar(100));
CREATE TABLE
postgres=# insert into t1 select '' from generate_series(1,1000000);
INSERT 0 1000000
Time: 5951.023 ms
postgres=# insert into t2 select '' from generate_series(1,1000000);
INSERT 0 1000000
Time: 2083.323 ms
postgres=# select pg_size_pretty(pg_relation_size(oid)) from pg_class
where relname = 't1';
 pg_size_pretty
----------------
 128 MB
(1 row)

postgres=# select pg_size_pretty(pg_relation_size(oid)) from pg_class
where relname = 't2';
 pg_size_pretty
----------------
 35 MB
(1 row)

merlin




[GENERAL] Processor usage/tuning question

2014-10-03 Thread Israel Brewster
I have a Postgresql 9.3.5 server running on CentOS 6.5. In looking at some stats today, I saw that it was handling about 4-5 transactions/second (according to the SELECT sum(xact_commit+xact_rollback) FROM pg_stat_database; query), and an instance of the postmaster process was consistently showing 40%-80% utilization to handle this. I didn't think anything of that (the machine has plenty of capacity) until I mentioned it to a friend of mine, who said that utilization level seemed high for that many transactions. So if that level of utilization IS high, what might I need to tune to bring it down to a more reasonable level?

Some details:
Processors: 2x4core 2.5 GHz Xeon
Total Memory: 16GB
Hard Disk: SSD RAID 10
wa value from top is typically 0.0%, sometimes up to 0.1%

The database consists (primarily) of a single table with 5 indexes and 11 columns. The majority of transactions are probably single-row inserts (live location data from numerous aircraft). Current record count is 1,282,706, and kept fairly static on a day-to-day basis by a cleanup routine that runs each night and deletes old records (if that makes a difference). This database is streamed to a secondary hot read-only spare using streaming replication. The replica is using less than 1% processor on average.

I have followed the directions here: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server however I'm sure there are nuances I missed, or values that could be better selected for my usage case.

Some current postgres config values:
shared_buffers: 4GB
effective_cache_size: 12GB

Not sure what other information I should provide, so let me know what useful data I missed!

---
Israel Brewster
Systems Analyst II
Ravn Alaska
5245 Airport Industrial Rd
Fairbanks, AK 99709
(907) 450-7293
---




Re: [GENERAL] Processor usage/tuning question

2014-10-03 Thread Alan Hodgson
On Friday, October 03, 2014 11:24:31 AM Israel Brewster wrote:
 I have a Postgresql 9.3.5 server running on CentOS 6.5. In looking at some
 stats today, I saw that it was handling about 4-5 transactions/second
 (according to the SELECT sum(xact_commit+xact_rollback) FROM
 pg_stat_database; query), and an instance of the postmaster process was
 consistently showing 40%-80% utilization to handle this. I didn't think
 anything of that (the machine has plenty of capacity) until I mentioned it
 to a friend of mine, who said that utilization level seemed high for that
 many transactions. So if that level of utilization IS high, what might I
 need to tune to bring it down to a more reasonable level?
 

You probably have some read queries not properly indexed that are sequentially 
scanning that 1.2 million row table over and over again. Enable slow query 
logging and see what's going on.
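Enabling slow-query logging on 9.3 is a postgresql.conf change plus a reload; the threshold below is only an illustrative starting point:

```sql
-- in postgresql.conf:
--   log_min_duration_statement = 250   # log statements slower than 250 ms; -1 disables
-- then, as superuser:
SELECT pg_reload_conf();
```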





Re: [GENERAL] How to find greatest record before known values fast

2014-10-03 Thread Adrian Klaver

On 10/03/2014 01:28 PM, Andrus wrote:

Hi!

Thank you for explanations.


 the char type pads out the fields on disk.


It looks like you wrote that char takes more disk space.

from

http://www.pgcon.org/2013/schedule/attachments/269_tour-of-postgresql-data-types.pdf


page 28:

Unlike many databases, char(n) is NOT stored as a fixed-sized field
in Postgres. It is treated exactly the same as
varchar(n) except for being padded

So char type does not take more space than varchar.


Which directly contradicts the information on page 27:

Character  Types  (or  Strings)
Name
Description

varchar(n)
variable-length with limit

char(n)
fixed-length, blank padded

text
variable unlimited length


and the docs:

http://www.postgresql.org/docs/9.3/interactive/datatype-character.html

Values of type character are physically padded with spaces to the 
specified width n, and are stored and displayed that way. However, the 
padding spaces are treated as semantically insignificant. Trailing 
spaces are disregarded when comparing two values of type character, and 
they will be removed when converting a character value to one of the 
other string types. Note that trailing spaces are semantically 
significant in character varying and text values, and when using pattern 
matching, e.g. LIKE, regular expressions.



Tip: There is no performance difference among these three types, apart 
from increased storage space when using the blank-padded type, and a few 
extra CPU cycles to check the length when storing into a 
length-constrained column. While character(n) has performance advantages 
in some other database systems, there is no such advantage in 
PostgreSQL; in fact character(n) is usually the slowest of the three 
because of its additional storage costs. In most situations text or 
character varying should be used instead.





Andrus.





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] How to find greatest record before known values fast

2014-10-03 Thread Tom Lane
Adrian Klaver adrian.kla...@aklaver.com writes:
 page 28:

 Unlike many databases, char(n) is NOT stored as a fixed-sized field
 in Postgres. It is treated exactly the same as
 varchar(n) except for being padded

 Which directly contradicts the information on page 27:

This info is probably not as well worded as it could be, but it's not
really wrong.  The key point is that char(N) is blank-padded (thereby
wasting space) to be N *characters*, but that is not necessarily N
*bytes*, because of possible multi-byte characters.  Therefore the engine
has to treat it as a variable-length datatype.

I believe in some other DBMSes, char(N) means N *bytes* and is
treated as a fixed-size datatype.  Dunno what they do with multibyte
encodings.
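The character-vs-byte point can be seen directly with octet_length, which reports the stored size of the padded value (a sketch; the exact byte counts depend on the server encoding, here assumed to be UTF-8):

```sql
-- 'é' is 2 bytes in UTF-8; char(5) pads to 5 *characters*,
-- so the stored value exceeds 5 bytes in a multi-byte encoding
SELECT octet_length('é'::char(5))    AS bpchar_bytes,
       octet_length('é'::varchar(5)) AS varchar_bytes;
```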

regards, tom lane




Re: [GENERAL] How to find greatest record before known values fast

2014-10-03 Thread Adrian Klaver

On 10/03/2014 03:59 PM, Tom Lane wrote:

Adrian Klaver adrian.kla...@aklaver.com writes:

page 28:

Unlike many databases, char(n) is NOT stored as a fixed-sized field
in Postgres. It is treated exactly the same as
varchar(n) except for being padded



Which directly contradicts the information on page 27:


This info is probably not as well worded as it could be, but it's not
really wrong.  The key point is that char(N) is blank-padded (thereby
wasting space) to be N *characters*, but that is not necessarily N
*bytes*, because of possible multi-byte characters.  Therefore the engine
has to treat it as a variable-length datatype.


Well that is a distinction I had missed, thanks for the heads up. So the
bottom line is char(N) is variable length, but values that do not
reach length N will require more storage space than with varchar(n).




I believe in some other DBMSes, char(N) means N *bytes* and is
treated as a fixed-size datatype.  Dunno what they do with multibyte
encodings.

regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Processor usage/tuning question

2014-10-03 Thread Andy Colson

On 10/03/2014 04:40 PM, Alan Hodgson wrote:

On Friday, October 03, 2014 11:24:31 AM Israel Brewster wrote:

I have a Postgresql 9.3.5 server running on CentOS 6.5. In looking at some
stats today, I saw that it was handling about 4-5 transactions/second
(according to the SELECT sum(xact_commit+xact_rollback) FROM
pg_stat_database; query), and an instance of the postmaster process was
consistently showing 40%-80% utilization to handle this. I didn't think
anything of that (the machine has plenty of capacity) until I mentioned it
to a friend of mine, who said that utilization level seemed high for that
many transactions. So if that level of utilization IS high, what might I
need to tune to bring it down to a more reasonable level?



You probably have some read queries not properly indexed that are sequentially
scanning that 1.2 million row table over and over again. Enable slow query
logging and see what's going on.





Yep, do that... and then:

https://wiki.postgresql.org/wiki/Slow_Query_Questions

-Andy





Re: [GENERAL] Really strange foreign key constraint problem blocking delete

2014-10-03 Thread Tim Mickelson
I don't think so, but here are the table definitions:


CREATE TABLE bulldog.channel_mapping
(
  idchannel integer NOT NULL,
  idaut integer NOT NULL,
  CONSTRAINT channel_mapping_pk PRIMARY KEY (idchannel, idaut),
  CONSTRAINT channel_mapping_idaut_fkey FOREIGN KEY (idaut)
  REFERENCES cubesocialnetwork.tmp_autenticazionesocial
(idautenticazionesocial) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT channel_mapping_idchannel_fkey FOREIGN KEY (idchannel)
  REFERENCES bulldog.social_channel (idchannel) MATCH SIMPLE
  ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE
);
ALTER TABLE bulldog.channel_mapping
  OWNER TO postgres;





-- Table: cubesocialnetwork.tmp_autenticazionesocial

-- DROP TABLE cubesocialnetwork.tmp_autenticazionesocial;

CREATE TABLE cubesocialnetwork.tmp_autenticazionesocial
(
  idautenticazionesocial serial NOT NULL,
  contratto text NOT NULL,
  idlocation numeric NOT NULL,
  textuser text,
  textpassword text,
  datacrea timestamp without time zone NOT NULL DEFAULT now(),
  idsocial numeric NOT NULL,
  location text,
  username text,
  link_foto text,
  valid text,
  link_profilo text,
  tweetmonitored boolean DEFAULT false,
  idutente text,
  tipologia text,
  api_key text,
  api_secret text,
  CONSTRAINT tmp_autenticazionesocial_pkey PRIMARY KEY
(idautenticazionesocial),
  CONSTRAINT tipo_social_fk FOREIGN KEY (idsocial)
  REFERENCES cubesocialnetwork.tipo_social (cd_social) MATCH FULL
  ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE
);
ALTER TABLE cubesocialnetwork.tmp_autenticazionesocial
  OWNER TO postgres;

-- Index: cubesocialnetwork.indice_tmp_autenticazione

-- DROP INDEX cubesocialnetwork.indice_tmp_autenticazione;

CREATE INDEX indice_tmp_autenticazione
  ON cubesocialnetwork.tmp_autenticazionesocial
  USING btree
  (textpassword COLLATE pg_catalog."default");

-- Index: cubesocialnetwork.indicetextuser

-- DROP INDEX cubesocialnetwork.indicetextuser;

CREATE INDEX indicetextuser
  ON cubesocialnetwork.tmp_autenticazionesocial
  USING btree
  (textuser COLLATE pg_catalog."default");

-- Index: cubesocialnetwork.indicidentificativosocial

-- DROP INDEX cubesocialnetwork.indicidentificativosocial;

CREATE INDEX indicidentificativosocial
  ON cubesocialnetwork.tmp_autenticazionesocial
  USING btree
  (idsocial);

-- Index: cubesocialnetwork.tmpautenticazione

-- DROP INDEX cubesocialnetwork.tmpautenticazione;

CREATE INDEX tmpautenticazione
  ON cubesocialnetwork.tmp_autenticazionesocial
  USING btree
  (idautenticazionesocial);

-- Index: cubesocialnetwork.tmpautenticazionecontrattoidlocation

-- DROP INDEX cubesocialnetwork.tmpautenticazionecontrattoidlocation;

CREATE INDEX tmpautenticazionecontrattoidlocation
  ON cubesocialnetwork.tmp_autenticazionesocial
  USING btree
  (contratto COLLATE pg_catalog."default", idlocation);

-- Index: cubesocialnetwork.tmpauteticazionesocial

-- DROP INDEX cubesocialnetwork.tmpauteticazionesocial;

CREATE INDEX tmpauteticazionesocial
  ON cubesocialnetwork.tmp_autenticazionesocial
  USING btree
  (username COLLATE pg_catalog."default");


On 03/10/2014 20:38, Vick Khera wrote:
 Using my magick powers of mind reading, I will guess you made circular
 dependencies.

 On Fri, Oct 3, 2014 at 12:17 PM, Tim Mickelson
 tim_mickel...@bigfoot.com wrote:
 Even if I try to delete the data entry in the table channel_mapping with
 idaut 1622, it is not possible to delete from the table
 tmp_autenticazionesocial with the error below. How is this even possible
 since there is no channel_mapping with idaut 1622 any more?? I tried
 this in Java under a transaction, but also in pgAdmin III.


 ERROR:  update or delete on table tmp_autenticazionesocial violates
 foreign key constraint channel_mapping_idaut_fkey on table
 channel_mapping
 DETAIL:  Key (idautenticazionesocial)=(1622) is still referenced from
 table channel_mapping.






