Re: [GENERAL] Why are stored procedures looked on so negatively?

2013-07-25 Thread V S P
I do not see why stored procedures are particularly better for
asynchronous application design.  This can be done, as some pointed
out before, using standard libraries.



Furthermore, while this does not apply to databases that do not burden
users with heavy per-CPU licensing costs, for many companies that build
software to sell, it is a selling point that your system is light on
database CPU utilization, so that your clients are not required to buy
grotesquely overpowered DB servers just because your application put
its logic there.



Also, frameworks, libraries, and general community contributions of
source code, as well as code coverage tools, are much more accessible
in general-purpose programming languages.


On Thu, Jul 25, 2013, at 04:51 AM, Bèrto ëd Sèra wrote:

Hi,

>the whole design of this application is asynchronous in nature.
Then you'll be MUCH better off with SPs, from an architectural POV, as
you can basically design "building blocks" by initially just making SPs
that deliver a mock result, and have the entire development of the app
server be independent of the SQL development. This way neither branch
blocks the other (provided that you can actually freeze the
design).
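
For example, a stub like this (a minimal sketch; the function name and
result are made up) freezes the interface while the real SQL is written
behind it later:

CREATE OR REPLACE FUNCTION get_user_balance(p_user_id integer)
RETURNS numeric AS $$
BEGIN
    -- mock result only: the app server team codes against this
    -- signature while the real query is developed independently
    RETURN 42.00;
END;
$$ LANGUAGE plpgsql;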

Cheers
Bèrto



On 25 July 2013 09:44, Some Developer <[1]someukdevelo...@gmail.com>
wrote:

On 25/07/13 08:14, Vincenzo Romano wrote:

2013/7/25 Luca Ferrari <[2]fluca1...@infinito.it>:



On Thu, Jul 25, 2013 at 2:57 AM, Some Developer
<[3]someukdevelo...@gmail.com> wrote:

The added advantage of removing load from the app servers so they can
actually deal with serving the app is a bonus.





Uhm... I don't know what application you are developing, but I don't
buy your explanation.

While it is true that you are moving CPU cycles from the application
server to the database server, you will probably end up with the
application server waiting for the database to acknowledge (and
therefore not serving requests), and usually the computation is not
that heavy for an online transaction (it would be better to do it as a
batch if it really is heavy). Therefore this is not an advantage for
me.

Again, the only reason to use database facilities (like stored
procedures) is to arm the database so that even a different
application/connection/user will interact following as many business
rules as possible.



Moreover, please also note that one reason developers tend to avoid
database facilities is that they are using some kind of
stack/ORM/automagical library that does not allow the usage of deep
features, for the sake of portability.


I'm not planning on creating a complex application in the database in
its own right, just augmenting what is already available with a few
time savers and (a couple of) speed optimisations for commonly carried
out tasks.


I don't understand the "time saving" argument: you have to implement
the logic either in the application or in the database, so let's say
the implementation time is the same. The only advantage of the
database is the code reuse. But take into account that there are
drawbacks, like debugging, which is not always so simple.



Luca





I could be wrong, but the main advantage you gain by using stored
procedures is what Luca says: a unique data access interface.
Just that.
I don't think you'll save a single CPU cycle by moving logic from
"application" to "DB" (or the other way around).
That logic needs to be implemented (and run) on one side or the other.
The only saving would happen if you push the logic straight to the
client.
And keep in mind that not all PLs are the same or have the same
effectiveness.

So, for example, instead of INSERTing rows from the program, you could
SELECT from a stored procedure which does the INSERT, possibly with
the very same checks you would do in the application -- only put
together in a single place: the stored procedure.
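
A minimal sketch of that pattern (the table, the checks, and all names
are made up for illustration):

CREATE OR REPLACE FUNCTION add_order(p_customer integer, p_amount numeric)
RETURNS bigint AS $$
DECLARE
    new_id bigint;
BEGIN
    -- the same validation the application would otherwise do,
    -- kept in exactly one place
    IF p_amount <= 0 THEN
        RAISE EXCEPTION 'amount must be positive';
    END IF;

    INSERT INTO orders (customer_id, amount)
    VALUES (p_customer, p_amount)
    RETURNING id INTO new_id;

    RETURN new_id;
END;
$$ LANGUAGE plpgsql;

-- the client then simply does:
-- SELECT add_order(17, 99.95);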



Finally, I fear this is a kind of "religion" war. So feel free to
follow any side or establish your own.



The bottom line here is: PLs are OK. It just depends on what you do and
how.


When I was talking about improving speed I was talking about reducing
load on the app servers by putting more of the workload on the
database server. I know that it won't actually save CPU cycles (one of
the machines has to do the work) but it will reduce load on the app
servers. As I said above, using the asynchronous abilities of libpq
helps keep the app servers serving requests whilst the database gets
on with its tasks.



In fact the whole design of this application is asynchronous in nature.




--
Sent via pgsql-general mailing list ([4]pgsql-general@postgresql.org)
To make changes to your subscription:
[5]http://www.postgresql.org/mailpref/pgsql-general




--
==
If Pac-Man had affected us as kids, we'd all be running around in a
darkened room munching pills and listening to repetitive music.

References

1. mailto:someukdevelo...@gmail.com
2. mailto:fluca1...@infinito.it
3. mailto:someukdevelo...@

Re: [GENERAL] How useful is the money datatype?

2009-10-04 Thread V S P

Within PG procedures, at least in PL/pgSQL, it is impossible to do
'money' calculations
without a loss of precision.

There is an open source library by IBM that I use in my C++ code to do
this, and maybe it can
be incorporated into PG.

It is called decNumber:
http://speleotrove.com/decimal/decnumber.html

Micropayment systems (such as the one I am implementing) require
reasonably good precision. Support for currencies such as yen also
dictates
that reasonably large numbers are supported.

In my case, all my money calculations are done in C++ using decNumber
(which makes
the only useful feature of Cobol available in C++ :-) );
then I convert them to a string, and send them via Postgres ODBC to a
NUMERIC(19,6) field.

(The Postgres ODBC driver does not support a 'native' number type, so I
convert to text.)
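
The receiving side, as a minimal sketch (table and column names are
made up; the point is the exact numeric type and the text round-trip):

CREATE TABLE payments (
    id     bigserial PRIMARY KEY,
    amount numeric(19,6) NOT NULL
);

-- the value arrives from the client as text and is cast on the way in,
-- so no binary floating point ever touches it
INSERT INTO payments (amount) VALUES ('12345.678901'::numeric(19,6));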







On Sat, 03 Oct 2009 17:19 +0100, "Sam Mason"  wrote:
> On Sat, Oct 03, 2009 at 11:49:50AM -0400, Merlin Moncure wrote:
> > On Sat, Oct 3, 2009 at 11:40 AM, Sam Mason  wrote:
> > > it's still a computer and thus can't represent anything
> > > with infinite precision (just numeric fractions in PG's case, let alone
> > > irrational numbers).
> > 
> > I don't quite agree with your statement (I agree with your point, just
> > not the way you worded it).
> 
> Maybe I didn't emphasize "numeric" enough; the current implementation
> of numeric datatypes in PG does not allow fractions to be represented
> accurately.  Is that any better?
> 
> > I could make a type, 'rational', define
> > the numerator, denominator, and do calculations like the above with
> > zero loss.
> 
> Yes, if you defined a datatype like this then it would be able to
> express a strictly larger subset of all numbers.
> 
> > So it depends how you define 'represent'.
> > Computers can do pretty much any type of bounded calculation given
> > enough time and memory.
> 
> Which is why I said "with infinite precision".  Assuming infinite time
> or space doesn't seem to help with any real world problem, it's the
> details of the assumptions made and the use case(s) optimized for that
> tend to be interesting.
> 
> -- 
>   Sam  http://samason.me.uk/
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
-- 
Vlad P
author of C++  ORM  http://github.com/vladp/CppOrm/tree/master


-- 
http://www.fastmail.fm - One of many happy users:
  http://www.fastmail.fm/docs/quotes.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Q] optmizing postgres for 'single client' / many small queries

2009-09-02 Thread V S P
I do not know of any for C++.

That's why I started my own (which is not the one used for the problem
I am having :-) ):
http://github.com/vladp/CppOrm
It works with Pg 8.3+ and the VC++ compiler so far (but support for
more platforms and DBs will be added in the future).
My ORM is not really an ORM because I did not implement anything that
would 'traverse' object instance
relationships (which is what the ORM in question here is doing).
Instead I just automagically generate SQL code for insert/update/deletes
for classes that map to tables (one-to-one).


The basic problem is that the C++ standards committee, in my view, just
sucks... I do not have any better words
for it.  It is because of the lack of reflection (the ability to
identify variable and function names at runtime)
that an ORM, an HTTP session storage/retrieval mechanism, or JSON/XML
parsers that parse text right into class instances
-- cannot be implemented.
Basically, these are the things that are needed to deal with 'typeless'
data at runtime (such as XML/JSON/database query results)
and to map that data to C++ object instances.

Which is, in the high-level view, why C++ is not used for web
development.


Yes, there are 'attempts' in that area -- but all are different, require
quite a bit of sophistication,
and are not complete (the reflection mechanism I implemented for my
CppOrm is not complete either).

If C++ had supported reflection -- then there would be
C++_Hibernate, C++_LINQ, C++_json, C++_xml, C++_HTTP, C++_HTTPSession
and so on... (and no, they would not have been memory hogs -- thanks to
the now standard reference counting in C++ via shared_ptr and good use
of allocators)


Sorry for the rant;
still looking for any bright ideas on optimizing for many small
queries / local DB host situations.

Thanks


On Wed, 02 Sep 2009 14:45 -0400, "Tom Lane"  wrote:
> "V S P"  writes:
> > Well, actually
> > somebody has written a C++ ORM 
> > [ that is causing all your problems and you say you can't discard ]
> 
> Just out of curiosity, does anyone know of any ORM anywhere that doesn't
> suck?  They seem to be uniformly awful, at least in terms of their
> interfaces to SQL databases.  If there were some we could recommend,
> maybe people would be less stuck with these bogus legacy architectures.
> 
>   regards, tom lane
-- 
Vlad P
author of C++  ORM  http://github.com/vladp/CppOrm/tree/master


-- 
http://www.fastmail.fm - Faster than the air-speed velocity of an
  unladen european swallow


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Q] optmizing postgres for 'single client' / many small queries

2009-09-02 Thread V S P
Well, actually,
somebody has written a C++ ORM
that allows one to do things like this (just a sketch):

class L : public CDbCapable
{
  public:
  int prop1;
  int prop2;
};


class A : public CDbCapable
{
  public:
  // element type assumed to be L (the class above); the template
  // argument appears to have been lost in the archive
  QArray<L> list_of_props_xyz;
};


int main ()
{
   A inst1;

   // creates or migrates the schema derived from the class definitions
   inst1.create_or_update_DbSchemaIfNeeded();


   inst1.readFromDb();
   // modifying something

   inst1.writeToDb();

   return 0;
}



As is well known, C++'s lack of reflection prevents it from having
standardized data serialization
libraries, whether for files or for databases.

So an in-house mechanism was developed to do the above.  It took some
time and it is not possible to just yank it out.


Of course, internally in the ORM's implementation a somewhat
questionable decision was made:
processing the arrays of 'children' of a given instance requires
separate SQL statements.


That's where the problem comes from. I understand what needs to be done
to redesign the approach, etc., and that
will take more time than is currently available.

Therefore, I just wanted to ask if there are some things in Pg
that I can experiment with
(local client/server communication via IPC, reducing the cost of SQL
parsing, any other possible tricks).
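
One concrete thing to try -- a sketch with made-up column names, using
standard SQL-level prepared statements:

PREPARE get_children (integer) AS
    SELECT id, child_tb_key_id FROM tb1 WHERE parent_id = $1;

EXECUTE get_children (42);

Each EXECUTE skips the parse/plan step, which is where much of the
per-query overhead of thousands of tiny selects goes.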



Thank you  in advance




On Wed, 02 Sep 2009 14:26 -0400, "Tom Lane"  wrote:
> "V S P"  writes:
> > The application typicall goes like this
> 
> > select id, child_tb_key_id,  from tb1
> 
> > then for each selected row above
> >  select from the child table do a select (like the above)
> 
> > and so on -- many levels deep
> 
> Seems like you need to fix your data representation so that this
> operation can be collapsed into one query.  The main problem looks
> to be a bogus decision to have separate child tables rather than
> one big table with an extra key column.
> 
>   regards, tom lane
-- 
Vlad P
author of C++  ORM  http://github.com/vladp/CppOrm/tree/master


-- 
http://www.fastmail.fm - mmm... Fastmail...


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Q] optmizing postgres for 'single client' / many small queries

2009-09-02 Thread V S P
Hi,
yes, I am sure I have to continue supporting Postgres;
at this time it would take enormous effort to change to something else.
But yes, SQLite or Tokyo Cabinet would in my view be good options (the
decision was made
some time ago, unfortunately).



On Wed, 02 Sep 2009 19:49 +0200, "Pavel Stehule"
 wrote:
> Hello
> 
> Are you sure, so you have to use PostgreSQL - maybe SQLite or
> memcached is better for your task.
> 
> regards
> Pavel Stehule
> 
> 2009/9/2 V S P :
> > Hi,
> > our application is using Postgres in a rather unusuall way.
> > It is used by a GUI application to store several hundred
> > thousand 'parameters'.  Basically it is used like a big INI
> > file.
> >
> > There are about 50 tables with various parameters.
> >
> > The application typicall goes like this
> >
> > select id, child_tb_key_id,  from tb1
> >
> > then for each selected row above
> >     select from the child table do a select (like the above)
> >
> > and so on -- many levels deep
> >
> >
> >
> > I know that it is not a proper way to use SQL
> > Instead we should be selecting many rows at once, joining them/etc
> >
> > But it is  what it is now...
> >
> > Queries are very fast though, Postgres reports that the
> > all the queries for a typical 'load' operation take 0.8 seconds
> > -- however overall time that the GUI user perceives is 8 seconds.
> > Out of that 8 seconds a big chunk is in the sending of the SQL
> > statements/receiving results back -- just network traffic, parsing/etc
> >
> > There are total about 2400 queries that happen in that period of time
> > (just selects)
> >
> >
> >
> >
> > I am trying to figure out how can I optimize PG configuration
> > to suite such a contrived deployment of Postgres.
> >
> > For example, we do not mind PG running on the same machine
> > as the Client app (it is connected via Qt Sql Pg plugin (so it uses
> > Pg native access library underneath).
> >
> > Are there any optmization can be done for that?
> >
> >
> > Also this is a 'single' client/single connection system
> > what optimizations can be done for that?
> >
> > and finally since most of the queries are very quick index-based
> > selects what can be done to optimize the traffic between pg and
> > the client?
> >
> >
> >
> > thank you in advance for
> > any recommendations/pointers.
> >
> >
> >
> >
> > --
> > Vlad P
> > author of C++  ORM  http://github.com/vladp/CppOrm/tree/master
> >
> >
> > --
> > http://www.fastmail.fm - Send your email first class
> >
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> >
-- 
Vlad P
author of C++  ORM  http://github.com/vladp/CppOrm/tree/master


-- 
http://www.fastmail.fm - Accessible with your email software
  or over the web


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Q] optmizing postgres for 'single client' / many small queries

2009-09-02 Thread V S P
Hi,
our application is using Postgres in a rather unusual way.
It is used by a GUI application to store several hundred
thousand 'parameters'.  Basically it is used like a big INI
file.

There are about 50 tables with various parameters.

The application typically goes like this:

select id, child_tb_key_id, ... from tb1

then for each row selected above,
 do a select from the child table (like the above)

and so on -- many levels deep



I know that this is not a proper way to use SQL;
instead we should be selecting many rows at once, joining them, etc.

But it is what it is now...

Queries are very fast though: Postgres reports that
all the queries for a typical 'load' operation take 0.8 seconds
-- however the overall time that the GUI user perceives is 8 seconds.
Out of that 8 seconds a big chunk is in the sending of the SQL
statements/receiving results back -- just network traffic, parsing, etc.

There are in total about 2400 queries that happen in that period of time
(just selects)




I am trying to figure out how I can optimize the PG configuration
to suit such a contrived deployment of Postgres.

For example, we do not mind PG running on the same machine
as the client app (it is connected via the Qt SQL Pg plugin, so it uses
the Pg native access library underneath).

Are there any optimizations that can be done for that?


Also, this is a single-client / single-connection system;
what optimizations can be done for that?

And finally, since most of the queries are very quick index-based
selects, what can be done to optimize the traffic between PG and
the client?



thank you in advance for
any recommendations/pointers.




-- 
Vlad P
author of C++  ORM  http://github.com/vladp/CppOrm/tree/master


-- 
http://www.fastmail.fm - Send your email first class


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Q] parsing out String array

2009-08-15 Thread V S P



 Thank you very much, your suggestion
 helped a lot.

 But I have a bit more of a challenge now:

 my array is being generated by the 'client' (it is not in the database),
 so I am trying to employ your method on a
 'string',

 but I am running into a problem, as I cannot typecast correctly.
 
 
 
 select V[i][j]
 FROM
 (select '{{A,B,C},{D,E,F}}') as V
 
  CROSS JOIN generate_series(1, 3) i
  CROSS JOIN generate_series(1,2) j
 
 
 
 This does not work, because V is not an array (it is a string)
 and I do not know where to put the typecast
 ::text[][]

 anywhere I tried, I get a syntax error.
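
 One placement that should work -- the cast goes inside the subselect
 and the column gets an alias; note the series bounds follow the
 literal's dimensions, 2 rows of 3 elements:

 select v.a[i][j]
 FROM
 (select '{{A,B,C},{D,E,F}}'::text[][] as a) as v

  CROSS JOIN generate_series(1, 2) i
  CROSS JOIN generate_series(1, 3) j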
 
 
 
 
thank you in advance for help
> 
> On Wed, 12 Aug 2009 20:52 -0700, "Scott Bailey" 
> wrote:
> > V S P wrote:
> > > if I have field declared
> > > myvalue text[][]
> > > 
> > > insert into vladik (myval)
> > > values
> > > (
> > > '{{"\",A", "\"B"}, {"Y", "Q"}}'
> > > )
> > > 
> > > 
> > > What do you guys use in your treasure chest of 'addons'
> > > to successfully parse out the above trickery
> > > and get the 4 strings
> > > ",A
> > > "B
> > > Y
> > > Q
> > > 
> > > from within Postgres stored procedure as well as C++ or other client
> > > code.
> > > 
> > > 
> > > It seems to me that it is not possible with any built-in command
> > > available
> > > to easily extract the strings out to a multidimensional array
> > 
> > 
> > Actually its pretty easy.
> > 
> > SELECT myval[i][j]
> > FROM vladik
> > CROSS JOIN generate_series(1, array_upper(myval, 1)) i
> > CROSS JOIN generate_series(1, array_upper(myval, 2)) j
> > 
-- 
Vlad P
author of C++  ORM  http://github.com/vladp/CppOrm/tree/master


-- 
http://www.fastmail.fm - Accessible with your email software
  or over the web


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Q] parsing out String array

2009-08-12 Thread V S P
if I have field declared
myvalue text[][]

insert into vladik (myval)
values
(
'{{"\",A", "\"B"}, {"Y", "Q"}}'
)


What do you guys use in your treasure chest of 'addons'
to successfully parse out the above trickery
and get the 4 strings
",A
"B
Y
Q

from within Postgres stored procedure as well as C++ or other client
code.


It seems to me that it is not possible with any built-in command
available
to easily extract the strings out to a multidimensional array


thank you
-- 
Vlad P
author of C++  ORM  http://github.com/vladp/CppOrm/tree/master


-- 
http://www.fastmail.fm - One of many happy users:
  http://www.fastmail.fm/docs/quotes.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Q] single image Table across multiple PG servers

2009-07-14 Thread V S P
Hi,
thank you for the links


I read through the presentation,

and it did not solve the issue for me -- which is presenting a
table from multiple
shards as one single table (at least for reads) to ODBC clients.


I also do not think that skytools does that:

they have essentially implemented an API on top of their shards
(separate DB servers)
that does the table querying.  That means that I have to write a
separate API every time
a user decides on a new query.


Maybe I misunderstood the approaches, but none of them actually
figures out how to
utilize the computing power/memory of multiple servers to satisfy
requests that span across
servers.

I think Oracle supports the ability to at least reference a table
in another server; I do not think
PG does that.

It is possible that I have to look into the free DB2 server
offering (as the free version is exactly meant
to run on underpowered computers).

I just wanted to ask the list first.


thank you


On Tue, 14 Jul 2009 13:04 +0530, "Ransika de Silva"
 wrote:

  Hello,



We were also in search of having a table split across multiple
databases but then found out about skytools and at the same
time the following article:
[1]http://www.jurriaanpersyn.com/archives/2009/02/12/database-sharding-at-netlog-with-mysql-and-php/
True, it's not done with PG, but the same thing can be done
with PG as well.



I assume this will be helpful for you.



Regards,

Ransika

On Tue, Jul 14, 2009 at 4:20 PM, Scott Marlowe
<[2]scott.marl...@gmail.com> wrote:

On Mon, Jul 13, 2009 at 11:16 PM, V S P<[3]torea...@fastmail.fm>
wrote:
> Hello
>
> I am researching how to store the data for easy 'user-driven'
> reporting (where I do not need to develop application for
> every user request).
>
> The data will typically be number ranges and text strings with
> unique Id for each row
>
> I hope there will be a lot of data :-).
>
> So in that anticipation I am looking for a way
> to allow
> SQL/ODBC access to the data
>
> but in a way that each table resides on more than one
> PG server
>
> for example:
>
> table 1 lives in 3 PG instances (I can partition the data by
date range)
> table 2 lives in the same 3 instances plus another one (because
it's
> bigger)
>
>
> and I would like users to be able to issue SQL from within ODBC
that
> joins them.


  I think that skype's skytools could be used to create such a
  solution,
  in particular pl/proxy.


--
Sent via pgsql-general mailing list
([4]pgsql-gene...@postgresql.org)
To make changes to your subscription:
[5]http://www.postgresql.org/mailpref/pgsql-general

--
Ransika De Silva
SCMAD 1.0, SCJP 1.4,
BSc.(Hons) Information Systems

References

1. 
http://www.jurriaanpersyn.com/archives/2009/02/12/database-sharding-at-netlog-with-mysql-and-php/
2. mailto:scott.marl...@gmail.com
3. mailto:torea...@fastmail.fm
4. mailto:pgsql-general@postgresql.org
5. http://www.postgresql.org/mailpref/pgsql-general
-- 
Vlad P
author of C++  ORM  http://github.com/vladp/CppOrm/tree/master


-- 
http://www.fastmail.fm - The professional email service



[GENERAL] [Q] single image Table across multiple PG servers

2009-07-13 Thread V S P
Hello

I am researching how to store the data for easy 'user-driven'
reporting (where I do not need to develop an application for
every user request).

The data will typically be number ranges and text strings with a
unique ID for each row.

I hope there will be a lot of data :-).

So in anticipation of that I am looking for a way
to allow
SQL/ODBC access to the data

but in a way that each table resides on more than one
PG server

for example:

table 1 lives in 3 PG instances (I can partition the data by date range)
table 2 lives in the same 3 instances plus another one (because it's
bigger)


and I would like users to be able to issue SQL from within ODBC that
joins them.

I do not mind if I have to, for example, name tables like

tb1_srv1  and so on,
but then somehow have them joined via a global view -- if such
functionality exists.

That is, the users with SQL/ODBC will only be doing reads -- not
updates -- and somehow they have to see a 'single' table that resides
on multiple hosts.
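
A sketch of the kind of thing I mean, assuming contrib/dblink is
installed (table, column, and connection names are all made up):

CREATE VIEW tb1 AS
    SELECT id, val FROM tb1_srv1            -- local shard
    UNION ALL
    SELECT id, val
    FROM dblink('host=srv2 dbname=mydb',
                'SELECT id, val FROM tb1_srv2')
         AS t(id bigint, val text);         -- remote shard

(Though I realize this pulls the remote rows over to the local server
rather than distributing the work.)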


I do not mind the complexity for the inserts (because I can program for
it) -- but selects need to be easy from things like MS Access and
Crystal Reports, or, for more sophisticated users, from packages like
'R'.

The reason why I think the data will not fit into one database
is that I just do not have money for servers (everything is coming
out
of my small pocket), so I just want to deploy inexpensive computers
and add them as I get more data to serve.

I looked at Hypertable but there is no ODBC driver for it.

Alternatively, I also looked at making the host operating systems into
a single-image OS, which appears to be possible with

http://www.kerrighed.org/wiki/index.php/Main_Page

However, I do not know if PG will even run there.
Obviously top-of-the-line performance is not necessary for me,
but it must not be unusable.


thank you in advance
-- 
Vlad P
author of C++  ORM  http://github.com/vladp/CppOrm/tree/master


-- 
http://www.fastmail.fm - A fast, anti-spam email service.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Q] sequence and index name limits in 8.4

2009-06-27 Thread V S P
I searched the docs and the web but only saw references to older
versions of Postgres,
where changing the source code (NAMEDATALEN) was recommended if more
than 31 characters
in the names are desired.

I wanted to ask if this is still the case in 8.4 (I need at least 128).


thank you in advance,
VSP
-- 
Vlad P
author of C++  ORM  http://github.com/vladp/CppOrm/tree/master


-- 
http://www.fastmail.fm - Send your email first class


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Q] rollback after select?

2009-03-29 Thread V S P
Hi,
I think I resolved my second question (about rollback modifying the
search_path).

I did not realize that I had to issue a commit
after SET search_path,

so what was happening is that when I did the rollback, the search path
was also gone.

The same problem occurred with my SET client_encoding (my other email);
both problems were related to the fact that in PG I have to issue
commits even for the 'non-SQL' statements.
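
Reduced to a minimal sequence (with autocommit off, as in my ODBC
setup):

SET search_path TO prod;  -- takes effect, but only inside the open transaction
-- ... some statement fails ...
ROLLBACK;                 -- undoes the SET as well: search_path reverts

SET search_path TO prod;
COMMIT;                   -- committed: the setting survives later rollbacks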


sorry I did not figure this out before posting,
Vlad





On Sun, 29 Mar 2009 20:20 -0400, "V S P"  wrote:
> I have read that
> if a given connection has an error,
> it first must be rolledback (before any other statement on that
> connection can
> be executed).
> 
> Is this true for Selects as well?
> In other words if select fails, should the connection be 'rolledback'?
> 
> I am trying to test my error handing
> and having problems:
> 
> In am multithreaded ODBC c++ app, I do select, it is successful,
>  then 'update'
> 
> update fails (because I misspelled 'WHERE'), I rollback
> but then when I try the same select, the DB says 
> relation XYZ does not exist (where XYZ is my table)
> 
> I do not understand how it is happening.
> 
> all my tables are in a new schema I call 'prod', when I connect to the
> database
> I set right away 'set search_path to prod', assuming that all of the
> SQLs now 
> will be going to 'prod'.
> 
> But may be this somehow gets screwed up after a rollback or any other
> error?
> 
> 
> thanks,
> Vlad
> -- 
>   V S P
>   torea...@fastmail.fm
> 
> -- 
> http://www.fastmail.fm - One of many happy users:
>   http://www.fastmail.fm/docs/quotes.html
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - And now for something completely different…


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Q] rollback after select?

2009-03-29 Thread V S P
I have read that
if a given connection has an error,
it first must be rolled back (before any other statement on that
connection can
be executed).

Is this true for selects as well?
In other words, if a select fails, should the connection be 'rolled
back'?

I am trying to test my error handling
and am having problems:

In a multithreaded ODBC C++ app, I do a select, it is successful,
 then an 'update'.

The update fails (because I misspelled 'WHERE'); I roll back,
but then when I try the same select, the DB says
relation XYZ does not exist (where XYZ is my table).

I do not understand how this is happening.

All my tables are in a new schema I call 'prod'; when I connect to the
database
I right away do 'set search_path to prod', assuming that all of the
SQLs now
will be going to 'prod'.

But maybe this somehow gets screwed up after a rollback or some other
error?


thanks,
Vlad
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - One of many happy users:
  http://www.fastmail.fm/docs/quotes.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Q] LOG: failed to commit client_encoding

2009-03-29 Thread V S P
Hi,
I am seeing this message in my console (WinXP) log,
using Pg 8.3 with UTF-8 encoding.

The client is the latest Pg ODBC;
every time I open a connection I set the
client encoding to UTF-8 (to make sure my C++ app is getting UTF-8
strings via the ASCII version of pgODBC):

"SET client_encoding='UTF8'"

What does this LOG string mean, and what should I do?

If there is a place with all the error messages and explanations,
I would certainly appreciate the link (I just tried regular searches
but could not find it).



thank you in advance,
Vlad
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - Email service worth paying for. Try it for free


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Q] ODBC Decimal(19,6)

2009-03-16 Thread V S P
I declared a field as DECIMAL(19,6).

When doing select thatfield from tb1,

for some reason ODBC thinks that it is a double.


I think it has to be a string, otherwise precision is lost --
or am I not understanding this right?

I tried to do
select thatfield\\:\\:varchar

but for some reason that did not work yet.
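
If the problem is the colons (OTL treats :name as a bind placeholder,
which is why '::' needs escaping), the standard CAST spelling avoids
them entirely -- a sketch:

select CAST(thatfield AS varchar) from tb1;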

thanks in advance,
Vlad
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - Access your email from home and the web


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Q] ODBC connect shows RELEASE / SAVEPOINT on selects

2009-03-16 Thread V S P
Hello, 
a newbie question:

I am using ODBC on windows (the unicode version) to connect to 8.3.3
running on
the same machine (XP 32 bit).

My C++ program uses OTL C++ library (it's ODBC functions)

Every time I execute a simple 
select fld from mytable;

I see this

LOG:  duration: 0.000 ms  statement: RELEASE _EXEC_SVP_01B06868
LOG:  duration: 0.000 ms  statement: SAVEPOINT _EXEC_SVP_01B06868

my connection is set to no autocommit

so I do not understand why I am seeing savepoints and release

(I have some other code against the same DB instance but from PHP, and I
did not see
those messages in the log).



Thanks in advance,
Vlad
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - A no graphics, no pop-ups email service


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Q] Cluster design for geographically separated dbs

2009-03-08 Thread V S P
Thank you.
Is there a way, along the same idea,
to make PostgreSQL 'skip', say, every 100 numbers when generating
the 'next' value of a bigserial?
(Or to ensure that every number generated is evenly divisible by 100,
and then another DB would use 99, and so on.)

In Oracle, if I remember right, there was something called a 'step'
for the sequence values.


Vlad



On Sun, 08 Mar 2009 01:13 -0700, "Scott Marlowe"
 wrote:
> On Sat, Mar 7, 2009 at 2:03 PM, V S P  wrote:
> 
> > And wanted to ask what would be the main challenges I am facing with --
> > from the experience of the users on this list.
> >
> > Especially I am not sure how to for example manage 'overlapping unique
> > IDs' data.
> 
> I'm not expert on a lot of what you're doing, but the standard trick
> here is to partition your bigserials.
> 
> The max value for the underlying sequence is 9223372036854775807 which
> should give you plenty of space to work in.  So, When creating your
> bigserials, you can then alter the sequence underneath them to use a
> different range on each machine.
> 
> smarlowe=# create table a1 (id bigserial, info text);
> NOTICE:  CREATE TABLE will create implicit sequence "a1_id_seq" for
> serial column "a1.id"
> smarlowe=# create table a2 (id bigserial, info text);
> NOTICE:  CREATE TABLE will create implicit sequence "a2_id_seq" for
> serial column "a2.id"
> smarlowe=# alter sequence a1_id_seq minvalue  maxvalue 199
> start 100;
> ALTER SEQUENCE
> smarlowe=# alter sequence a2_id_seq minvalue 200 maxvalue
> 299 start 200;
> ALTER SEQUENCE
> 
> Now those two sequences can't run into each other, and if you move a
> record from one machine to another it won't bump into what's already
> there.  Partitioning by 10billion gives you 922337203 possible
> partitions, so if you need bigger but fewer partitions there's plenty
> of wiggle room to play with.
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - IMAP accessible web-mail


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Q] string to int hash function for small range

2009-03-08 Thread V S P
OK, thank you.

Using   \df *hash*  at the psql prompt
I can see that this is how I have to access the function:

select pg_catalog.hashtext('myusername')

I will also play with the other suggestion of get_byte
on the MD5 result decoded to bytes.
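
Two sketches for landing in 0..255:

select pg_catalog.hashtext('myusername') & 255;        -- mask the int4 hash
select get_byte(decode(md5('myusername'), 'hex'), 0);  -- first byte of the MD5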

thanks again for all the replies,
Vlad





On Sat, 07 Mar 2009 21:17 -0700, "Scott Marlowe"
 wrote:
> On Sat, Mar 7, 2009 at 7:33 PM, V S P  wrote:
> > I would like to have a function
> > given a user id varchar(64) to get a hash number between 0 and 255.
> >
> > I first tried md5 but the number it returns is very big and cannot
> > be converted to an INT
> >
> > there is likely a way to simply add ascii values of the userId together
> > to get a small integer, but wanted to know if there are any other
> > 'built-in' ways
> >
> > this is to get the database id based on user id.
> 
> In psql type this:
> 
> \df *hash*
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - The professional email service


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Q] string to int hash function for small range

2009-03-07 Thread V S P
I would like to have a function that,
given a user id varchar(64), gets a hash number between 0 and 255.

I first tried md5, but the number it returns is very big and cannot
be converted to an INT.

There is likely a way to simply add the ASCII values of the userId
together
to get a small integer, but I wanted to know if there are any other
'built-in' ways.

This is to get the database id based on the user id.

thanks in advance,
Vlad
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - One of many happy users:
  http://www.fastmail.fm/docs/quotes.html


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Q] Cluster design for geographically separated dbs

2009-03-07 Thread V S P
Hello,
I am designing a DB to hold often-changed user data.

I just wanted to write down what I am thinking and ask people on the
list to comment
if they have any experience in that area.

My thought is to have,
say, a West Coast and an East Coast data center, and each user will
go to either the East Coast or the West Coast;

and then within each coast, I would want to partition by a hash on the
user id.

I am reviewing the Skype paper on the subject


http://kaiv.wordpress.com/2007/07/27/postgresql-cluster-partitioning-with-plproxy-part-i/


And I wanted to ask what the main challenges are that I am facing --
from the experience of the users on this list.

Especially, I am not sure how to manage, for example, 'overlapping
unique IDs' in the data.

First, say I have a user who is trying to register with the same ID as
somebody
else, only in a different data center -- that means that I always have
to check
first in each data center whether the ID exists.  Then based on his/her
IP address
I decide which data center is closest (but IP addresses are often not a
good indication of the geographical location of the user either, so I
will give them a 'manual' select option).

Then, if I have say a BIG serial in my tables: since there is more
than one database, the 'big-serial' in one database can well overlap
the one in another database.

So if I have any tables that must contain data from different databases
-- I have to add something else to the 'foreign' key -- besides the
reference to the big serial. And so on...

Right now - on paper - I am having quite a few 'extra' fields in my
tables just to support 'uniqueness' of the records across clusters.

I am not sure if I am doing it the right way (because then I also have
to, at some point in time, 'defragment' the IDs, as the data with
BIGserial keys can be deleted).

It looks to me that if I design things to take advantage of Skype's
pl/proxy -- I will be able to leverage what appears to be a relatively
easy way to get data between databases (for reports that span clusters).


thanks in advance for any comments,
Vlad
-- 
  V S P
  torea...@fastmail.fm

-- 
http://www.fastmail.fm - Email service worth paying for. Try it for free


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_get_serial_sequence Strangeness/Unreliable?

2008-11-25 Thread V S P

Did you first insert into
public.news_status:

insert into public.news_status (status_id)
values (DEFAULT)

and then get the sequence?


Also, since you have a schema 'public', I personally
always do 'set search_path to public' before doing
any SQLs -- this way I know that I do not need
to prefix my table names with 'public' all the time.
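
One more thought: pg_get_serial_sequence relies on the sequence being
'owned' by the column; if the sequence was created by hand, or the
ownership link was lost somewhere (a guess -- I have not tested this
against your schema), declaring it explicitly should cure the NULL:

ALTER SEQUENCE status_status_id_seq OWNED BY news_status.status_id;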


V S P




On Tue, 25 Nov 2008 09:46:37 -0400, "Jeff MacDonald" <[EMAIL PROTECTED]>
said:
> Hello everyone,
> 
> I'm using "PostgreSQL 8.2.5 on amd64-portbld-freebsd6.1, compiled by  
> GCC cc (GCC) 3.4.4 [FreeBSD] 20050518"
> 
> The issue, is that when I run pg_get_serial_sequence on a particular  
> table/column it returns NULL.
> 
> Here are my tests:
> 
> simplyas_associations=> \d news_status
>   Table "public.news_status"
>  Column | Type  |  
> Modifiers
> ---+--- 
> +
>   status_id | integer   | not null default  
> nextval('status_status_id_seq'::regclass)
>   newsletter_id | integer   | not null
>   status| boolean   | not null
>   indate| character varying(15) | not null
> Indexes:
>  "status_pkey" PRIMARY KEY, btree (status_id)
>  "status_newsletter_id" btree (newsletter_id)
> 
> simplyas_associations=> select  
> pg_get_serial_sequence('news_status','status_id');
>   pg_get_serial_sequence
> 
> 
> (1 row)
> 
> So, for fun I added a new column to this table , and ran  
> pg_get_serial_sequence there
> 
> simplyas_associations=> alter table news_status add column test_id  
> serial;
> NOTICE:  ALTER TABLE will create implicit sequence  
> "news_status_test_id_seq" for serial column "news_status.test_id"
> ALTER TABLE
> simplyas_associations=> select  
> pg_get_serial_sequence('news_status','test_id');
>   pg_get_serial_sequence
> 
>   public.news_status_test_id_seq
> 
> 
> (1 row)
> 
> So my new column works. The only next step I could think of was to  
> compare my 2 sequences with \d, turns out their attributes are both  
> identical.
> 
> Ideas?
> 
> Thanks
> 
> Jeff MacDonald
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
-- 
  V S P
  [EMAIL PROTECTED]

-- 
http://www.fastmail.fm - Or how I learned to stop worrying and
  love email again


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Using Postgres to store high volume streams of sensor readings

2008-11-23 Thread V S P
While most of my experience is with Oracle/Informix,

I would also recommend:

a) Partitioning at the DB level (see the sketch after this list).
Put partitions on separate hard disks, have the system be
at least dual-core, and attach the disks via a SCSI
controller (not IDE) for parallel performance.


b) Partitioning at the application level (that is, having
the insert code dynamically figure out which DB and which table to
write to; this complicates the application for inserts as well as for
reports).



c) Maybe there is a chance to remove the index (if all you are doing
is inserts) -- and then recreate it later?

d) I did not see the type of index, but if the values of at least
some of the indexed fields repeat a lot -- Oracle had what's called a
'bitmap index'.
PostgreSQL might have something similar, where that type of index
is optimized for the fact that values are the same for the majority
of the rows (it becomes much smaller, and therefore quicker to update).

e) Review that there are no insert triggers or
constraints (either field or foreign) on those tables;
if there are -- validate why they are there and see if they can
be removed -- the application would then need to guarantee
correctness.
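
For (a), a minimal sketch of inheritance-based partitioning by
timestamp (all names are made up; the CHECK constraint is what lets the
planner skip irrelevant partitions):

CREATE TABLE readings (
    sensor_id integer,
    ts        timestamptz,
    value     numeric
);

CREATE TABLE readings_2008_11 (
    CHECK (ts >= '2008-11-01' AND ts < '2008-12-01')
) INHERITS (readings);

CREATE INDEX readings_2008_11_ts ON readings_2008_11 (ts);

-- inserts target the child table directly (or go through a trigger on
-- the parent), keeping each partition's index small enough to stay in
-- RAM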

VSP


On Sun, 23 Nov 2008 08:34:57 +0200, "Ciprian Dorin Craciun"
<[EMAIL PROTECTED]> said:
> On Sun, Nov 23, 2008 at 1:02 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> > Alvaro Herrera <[EMAIL PROTECTED]> writes:
> >> The problem is, most likely, on updating the indexes.  Heap inserts
> >> should always take more or less the same time, but index insertion
> >> requires walking down the index struct for each insert, and the path to
> >> walk gets larger the more data you have.
> >
> > It's worse than that: his test case inserts randomly ordered keys, which
> > means that there's no locality of access during the index updates.  Once
> > the indexes get bigger than RAM, update speed goes into the toilet,
> > because the working set of index pages that need to be touched also
> > is bigger than RAM.  That effect is going to be present in *any*
> > standard-design database, not just Postgres.
> >
> > It's possible that performance in a real-world situation would be
> > better, if the incoming data stream isn't so random; but it's
> > hard to tell about that with the given facts.
> >
> > One possibly useful trick is to partition the data by timestamp with
> > partition sizes chosen so that the indexes don't get out of hand.
> > But the partition management might be enough of a PITA to negate
> > any win.
> >
> >regards, tom lane
> 
> Thanks for your feedback! This is just as I supposed, but i didn't
> had the Postgres experience to be certain.
> I'll include your conclusion to my report.
> 
> Ciprian Craciun.
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
-- 
  V S P
  [EMAIL PROTECTED]

-- 
http://www.fastmail.fm - Email service worth paying for. Try it for free


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] [Q]updating multiple rows with Different values

2008-11-23 Thread V S P
Thank you very much --
this is exactly what I am looking for.

As well as the example provided,
'  case when id=1 then 10  '

-- that will work as well.


Now just one more question:
I will not have a lot of values to update (fewer than 1000
at a time) -- but the values for col1 will be text that is
up to 64K.  So I will not be able to construct SQL strings
and just send them (because they would probably exceed the character
limits for SQL statements).

Instead, what I plan to do is to generate the SQL string as a prepared
statement in PDO, and then bind values to it, so I will have

UPDATE tbl_1 SET col1 = t.col1 FROM (VALUES
(:id1, :val1),
(:id2, :val2),
(:id3, :val3)
) AS t(id, col1)
WHERE tbl_1.id = t.id;

$count = 1;
foreach ($upd_arr as $upd_row)
{
  $stmt->bindValue(':id'.$count,  $upd_row->id);
  $stmt->bindValue(':val'.$count, $upd_row->val);
  $count = $count + 1;
}


Is this, approximately, how I should be doing the update?
Is there a limit on the total size of the statement
when it gets out of PDO and into Postgres?

If yes, what is it?
I will just split the loop into chunks --
I just wanted to know.


Thank you again for such a quick help.





On Sun, 23 Nov 2008 10:11:56 +0100, "Gerhard Heift"
<[EMAIL PROTECTED]> said:
> On Sat, Nov 22, 2008 at 10:04:48PM -0500, V S P wrote:
> > Hello,
> > searched documentation, FAQ and mailing list archives
> > (mailing list archive search is volumous :-) )
> > 
> > but could not find an answer:
> > 
> > I would like to be able to update
> > several rows to different values at the same time
> > 
> > In oracle this used to be called Array update or 
> > 'collect' update or 'bulk' update -- but those
> > keywords did not bring anything for Postgresql.
> > 
> > for example tbl_1 has two columns id and col1
> > 
> > 
> > update tbl_1  set
> >col1=3  where id=25,
> >col1=5  where id=26
> 
> Something like this?
> 
> UPDATE tbl_1 SET col1 = t.col1 FROM (VALUES
>   (25, 3),
>   (26, 5)
> ) AS t(id, col1)
> WHERE tbl_1.id = t.id;
> 
> > I am using PHP PDO (and hoping that if there is a mechanism
> > within postgresql to do that PDO will support it as well).
> > 
> > Thank you in advance,
> > VSP
> 
> Regards,
>   Gerhard
-- 
  V S P
  [EMAIL PROTECTED]

-- 
http://www.fastmail.fm - Or how I learned to stop worrying and
  love email again


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] [Q]updating multiple rows with Different values

2008-11-22 Thread V S P
Hello,
I searched the documentation, FAQ, and mailing list archives
(the mailing list archive search is voluminous :-) )

but could not find an answer:

I would like to be able to update
several rows to different values at the same time.

In Oracle this used to be called an array update, or
'collect' update, or 'bulk' update -- but those
keywords did not bring up anything for PostgreSQL.

For example, tbl_1 has two columns, id and col1:


update tbl_1  set
   col1=3  where id=25,
   col1=5  where id=26


I am using PHP PDO (and hoping that if there is a mechanism
within PostgreSQL to do this, PDO will support it as well).


Thank you in advance,
VSP

-- 
  V S P
  [EMAIL PROTECTED]

-- 
http://www.fastmail.fm - Access all of your messages and folders
  wherever you are


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general