Re: [GENERAL] multilib environment with postgresql92 from CentOS-repos? (9.2.4)

2013-06-16 Thread Martín Marqués

On 09/06/13 18:15, Markus Renner wrote:

Hello everybody,

Since PostgreSQL 9.2.x it seems such 32-bit libs are gone for 64-bit
hosts at the repository level; in other words, they are not installable
via yum. I've recently installed postgresql92* from the
pgdg-centos92 repository. I used pgdg-centos92-9.2-6.noarch to set up
the repo. We still have to compile 32-bit applications on some machines
for the foreseeable future.

This applies to CentOS 6.4 and CentOS 5.9 on 64-bit architecture.
Is there a way to install such libs without building a custom RPM,
preferably via yum? We are looking for an easy way to deliver these
32-bit libs to a large number of servers. I only need one package, named
postgresql-libs.i686.


I can only suggest that you install the PGDG repo, copy the repo file,
edit the baseurl, and disable it:

# cp /etc/yum.repos.d/pgdg-92-fedora.repo /etc/yum.repos.d/pgdg-92-fedora.386.repo

In /etc/yum.repos.d/pgdg-92-fedora.386.repo, change the baseurl line,
the enabled line, and the tag at the beginning as follows (remove the
source repo section below it, or change it to an appropriate value):


[pgdg92-i386]
baseurl=http://yum.postgresql.org/9.2/fedora/fedora-$releasever-i386
enabled=0

Then you can install the PostgreSQL libs RPM like this:

# yum install --enablerepo=pgdg92-i386 postgresql92-libs.i386


On Fedora 19 I could simply install such i686 libs from the standard
Fedora repos.


But not with the PGDG repos:

# yum list postgresql\*-libs\*
Loaded plugins: langpacks, presto, refresh-packagekit, security
Installed Packages
postgresql-libs.x86_64      9.2.4-1.fc18      @updates
postgresql92-libs.x86_64    9.2.4-3PGDG.f18   @pgdg92
Available Packages
postgresql-libs.i686        9.2.4-1.fc18      updates

--
Martín Marqués         http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] could not write to hash-join temporary file: No space left on device

2013-06-16 Thread Martín Marqués

On 14/06/13 18:25, Moshe Jacobson wrote:

My database totals around 100 GB of data, and I have 50 GB free on
the volume, yet I get this error message. It was a simple join. Is there
a way to see more information on why I got this error?


Hash joins can use a lot of space in temp files. Try running the query
and, at the same time, monitoring disk usage in the database's temporary
file directory ($PGDATA/base/pgsql_tmp/ for the default tablespace).
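
If watching the directory from a shell is not convenient, here is a
minimal JDBC sketch (connection details are placeholders) that polls the
cumulative temp-file counters pg_stat_database exposes as of 9.2:

import java.sql.*;

public class TempFileMonitor {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; point this at the database
        // running the problematic query.
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", "postgres", "postgres");
        PreparedStatement pst = con.prepareStatement(
                "SELECT temp_files, temp_bytes FROM pg_stat_database "
              + "WHERE datname = current_database()");
        for (int i = 0; i < 12; i++) {  // sample every 5 seconds for a minute
            ResultSet rs = pst.executeQuery();
            if (rs.next()) {
                System.out.println("temp files: " + rs.getLong(1)
                        + ", temp bytes: " + rs.getLong(2));
            }
            rs.close();
            Thread.sleep(5000);
        }
        pst.close();
        con.close();
    }
}

Note that these counters are cumulative since the last stats reset, so
watch the delta while the join runs.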


--
Martín Marqués         http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] could not write to hash-join temporary file: No space left on device

2013-06-16 Thread Martín Marqués

On 14/06/13 18:25, Moshe Jacobson wrote:

My database totals around 100 GB of data, and I have 50 GB free on
the volume, yet I get this error message. It was a simple join. Is there
a way to see more information on why I got this error?


Forgot to say: a low value of work_mem makes it more likely that
PostgreSQL will spill to temp files when many sorts or hashes need a
fair amount of memory.

You might as well try different values of work_mem to see what happens
(watch out for large values of work_mem, since every sort or hash in
every session can use that much memory), or rethink your query
altogether.
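
For example, a minimal JDBC sketch of raising work_mem for one session
only (the tables in the join are placeholders for the real query):

import java.sql.*;

public class WorkMemTest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details.
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", "postgres", "postgres");
        Statement st = con.createStatement();
        // Session-local setting; the server-wide default is untouched.
        st.execute("SET work_mem = '256MB'");
        // Placeholder query standing in for the failing hash join.
        ResultSet rs = st.executeQuery(
                "SELECT count(*) FROM orders o JOIN customers c USING (cust_id)");
        if (rs.next()) {
            System.out.println("rows: " + rs.getLong(1));
        }
        rs.close();
        st.close();
        con.close();
    }
}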


Cheers,

--
Martín Marqués         http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] could not write to hash-join temporary file: No space left on device

2013-06-16 Thread Jeff Janes
On Fri, Jun 14, 2013 at 2:25 PM, Moshe Jacobson mo...@neadwerx.com wrote:

 My database totals around 100 GB of data, and I have 50 GB free on
 the volume, yet I get this error message. It was a simple join. Is there
 a way to see more information on why I got this error?


How big are the particular tables that participate in the query?  How many
of the columns participate in the query?

It would be useful to see the same information for this query as is
recommended here:

http://wiki.postgresql.org/wiki/Slow_Query_Questions

Cheers,

Jeff


Re: [GENERAL] prepared statement functioning range

2013-06-16 Thread 高健
Thanks a lot!

I understand this now.

Regarding the following:

In general you only want to have as many actual connections to PG
as you have CPU cores in your database server.

This made me consider the appropriate value for max_connections. That
might be another topic, I think.

I am wondering how to decide the appropriate number of PG processes
serving clients. I think if some activities use the disk heavily, there
might be some difference.

To make it simple, I consider the following scenario: suppose there are
some batch programs communicating with PG every day. Maybe I can collect
the CPU run time and compare it with the total time the programs run
over a period.

If the CPU runtime is close to the whole runtime of a program, I can say
that the CPUs or CPU cores are fully used.

If the CPU runtime is much smaller than the runtime of a program, I can
say that the CPUs or CPU cores are not busy, so I can increase the value
of max_connections in order to make full use of the CPUs' capacity.

But on the other hand, I might then need to replace the disks with
higher-speed ones.

Is there any common calculation method for deciding the max_connections
value?

Thanks

2013/6/14 Stephen Frost sfr...@snowman.net

 * 高健 (luckyjack...@gmail.com) wrote:
  So I can draw a conclusion:
 
  Prepared statements are only for use in the same session in which they
  have been prepared.

 Prepared statements are session-local.

  It can not be shared via multiple sessions.

 Correct.

  That is, in some special situations, if I have to use multiple
  connections between the client application and the PostgreSQL database,
  I must consider this point if I want to benefit from prepared
  statements.

 Yes.  If you're using a connection pooling system of some kind, it can
 be useful to have it automatically set up all of your prepared
 statements when it first connects to a new backend.  If it opens new
 backend connections preemptively and ensures it's always got spares
 available, this can be done with minimal impact to the application.  Or,
 of course, you can simply have your application check if a given
 statement has been prepared yet and, if not, prepare it before executing
 it.  That adds an extra round-trip to the database, of course, but you
 could also cache and keep local the set of statements that you know
 you've prepared for a given database connection too.

 In general, having a good connection pooler of some kind is really
 critical if you're going to have a lot of application threads talking to
 PG.  In general you only want to have as many actual connections to PG
 as you have CPU cores in your database server.

  So I am now thinking about the reason that prepared statement  can not
  cross over sessions.
 
  Maybe it is because of MVCC control? So, in order to keep it simple, the
  prepared statement is scoped to one session?

 It doesn't have anything to do with MVCC.  afaik, the reason it's
 implemented this way is that it was much simpler to implement, as it
 doesn't require any shared memory access or coordination between
 backends; it's not hard to work around, and it isn't a terribly often
 requested feature.

 There are also a lot of backend parameters which can change what a single
 'prepare' ends up doing: search_path, constraint_exclusion, other
 planner tunables, all of which need to be the same across all of the
 sessions for the same plan to be the 'correct' one in all the backends,
 not to mention the roles and permissions of the users involved.

 Simply put, it'd be quite a bit of work, would probably make things
 slower due to the cross-backend communication required, and would really
 only work for these "my application uses all the same prepared
 statements and always connects as the same user and with all the same
 parameters all the time" cases.

 Thanks,

 Stephen
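
For illustration, a minimal sketch of the per-connection statement cache
idea described above (class and method names are hypothetical; standard
JDBC API only):

import java.sql.*;
import java.util.HashMap;
import java.util.Map;

// Caches PreparedStatement objects per connection so each SQL string is
// prepared at most once on a given connection. Error handling omitted.
public class StatementCache {
    private final Connection con;
    private final Map<String, PreparedStatement> cache = new HashMap<>();

    public StatementCache(Connection con) {
        this.con = con;
    }

    public PreparedStatement prepare(String sql) throws SQLException {
        PreparedStatement pst = cache.get(sql);
        if (pst == null) { // not yet prepared on this connection
            pst = con.prepareStatement(sql);
            cache.put(sql, pst);
        }
        return pst;
    }
}

Whether the cache lives in the application or in the pooler, the point
is the same: the prepare round trip happens at most once per connection.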



Re: [GENERAL] prepared statement functioning range

2013-06-16 Thread Stephen Frost
* 高健 (luckyjack...@gmail.com) wrote:
 Is there any common calculation method for deciding the max_connections
 value?

max_connections is a hard limit and so you'd want to have that higher
than the number of connections you actually expect to have.  The general
recommendation is to have the same number of connections as you have
CPUs.  If your system ends up being I/O-bound, adding more requestors to
the queue waiting for I/O isn't likely to really help, unless you have
multiple I/O zones (e.g., using tablespaces or similar).  At that point,
it really gets to be pretty specific to your environment.

Note also that batch-reporting work can often be done on a hot-standby
slave rather than the master, eliminating that I/O load from the master
system.

Thanks,

Stephen




[GENERAL] JDBC prepared statement is not treated as prepared statement

2013-06-16 Thread 高健
Hello:



I have a question about prepared statements.

I use Java via JDBC and execute a prepared statement.

I thought that the pg_prepared_statements view would have one record
after my execution, but I can't find one.

Is a JDBC prepared statement different from SQL executed via the
PREPARE command?

http://www.postgresql.org/docs/current/static/sql-prepare.html

My simple Java program is the following:



import java.sql.*;

public class Test01 {

    public static void main(String argsv[]) {
        try {
            Class.forName("org.postgresql.Driver").newInstance();
            String url = "jdbc:postgresql://localhost:5432/postgres";

            Connection con = DriverManager.getConnection(url, "postgres", "postgres");

            // Phase 1: --- select data from table ---
            System.out.println("Phase 1start");

            String strsql = "select * from customers where cust_id = ?";
            PreparedStatement pst = con.prepareStatement(strsql);
            pst.setInt(1, 3); // find the customer with cust_id of 3

            ResultSet rs = pst.executeQuery();
            while (rs.next()) {
                System.out.print("cust_id:" + rs.getInt("cust_id"));
                System.out.println("...cust_name:" + rs.getString("cust_name"));
            }
            System.out.println("Phase 1end\n");

            // Phase 2: --- use the connection again, to select from the data dictionary ---
            System.out.println("Phase 2start");

            strsql = "select * from pg_prepared_statements";
            pst = con.prepareStatement(strsql);

            rs = pst.executeQuery();
            while (rs.next()) {
                System.out.println("statement:" + rs.getString("statement"));
            }
            System.out.println("Phase 2end\n");

            // Phase 3: --- use the connection again, to select data from table ---
            System.out.println("Phase 3start");

            strsql = "select * from customers";
            pst = con.prepareStatement(strsql);

            rs = pst.executeQuery();
            while (rs.next()) {
                System.out.print("cust_id:" + rs.getInt("cust_id"));
                System.out.println("...cust_name:" + rs.getString("cust_name"));
            }
            System.out.println("Phase 3end\n");

            rs.close();
            pst.close();
            con.close();
        } catch (Exception ee) {
            System.out.print(ee.getMessage());
        }
    }
}



The result of its execution is:

Phase 1start
cust_id:3...cust_name:Taylor
Phase 1end

Phase 2start
Phase 2end

Phase 3start
cust_id:1...cust_name:Smith
cust_id:2...cust_name:Brown
cust_id:3...cust_name:Taylor
Phase 3end

That is to say, my prepared statement is not cached by PG?

Then how do I write a Java program so that PG treats its prepared
statement as a server-side prepared statement?

Thank you.
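
For reference: the PostgreSQL JDBC driver only switches to a named
server-side prepared statement after the same PreparedStatement object
has been executed prepareThreshold times (five by default), which is why
pg_prepared_statements shows nothing in Phase 2. A minimal sketch,
assuming the pgjdbc PGStatement interface, that forces server-side
preparation from the first execution:

import java.sql.*;
import org.postgresql.PGStatement;

public class Test02 {
    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", "postgres", "postgres");

        PreparedStatement pst =
                con.prepareStatement("select * from customers where cust_id = ?");
        // Use a named server-side prepared statement from the first
        // execution instead of after the default five executions.
        ((PGStatement) pst).setPrepareThreshold(1);

        pst.setInt(1, 3);
        ResultSet rs = pst.executeQuery();
        while (rs.next()) {
            System.out.println("cust_name:" + rs.getString("cust_name"));
        }
        rs.close();

        // The server-side statement should now be visible in this session.
        Statement st = con.createStatement();
        rs = st.executeQuery("select * from pg_prepared_statements");
        while (rs.next()) {
            System.out.println("statement:" + rs.getString("statement"));
        }
        rs.close();
        st.close();
        pst.close();
        con.close();
    }
}

The same behavior can also be requested for all statements by adding
prepareThreshold=1 to the connection URL.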


Re: [GENERAL] WIN1251 localization

2013-06-16 Thread Petko Godev
On Sat, Jun 15, 2013 at 11:04:28PM +0400, Yuriy Rusinov wrote:
 Postgres 9.1
 
 utf-8

Is this the system locale or the PG cluster encoding? What is the output
of pg_lsclusters?

Consider converting those initial scripts to UTF-8; if this is not an
option, then just create a new cluster with cp1251 encoding.

Regards
--
Petko Godev



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general