[GENERAL] Query Improvement??

2017-11-05 Thread Bret Stern

Goal is to return all vendors which exist in all three companies

I think I got lucky figuring this out. Is there an obviously better way?

combined_item_master looks like this:
company_code character varying(10) NOT NULL,
primary_vendor_no character varying(7)
..more fields

data looks like this:

company_code | primary_vendor_no
AAA            003
BBB            004
CCC            001
CCC            004
AAA            123
BBB            123
CCC            123
BBB            003

The query returns each primary_vendor_no which exists in all three
companies (vendor_locations is the count of companies).
results:
vendor_locations
123


Here's the query

select primary_vendor_no, count(primary_vendor_no) as vendor_locations
from
(
  select distinct primary_vendor_no, company_code
  from combined_item_master
) as a
group by primary_vendor_no
having count(primary_vendor_no) = 3
order by vendor_locations desc, primary_vendor_no;
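For what it's worth, the dedup-then-count can collapse into a single aggregate using count(DISTINCT company_code), so the subquery isn't needed. A sketch (SQLite via Python only to make the example self-contained and runnable; the SQL itself is equally valid in Postgres):

```python
import sqlite3

# In-memory stand-in for combined_item_master, loaded with the sample data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE combined_item_master (company_code TEXT, primary_vendor_no TEXT)")
conn.executemany(
    "INSERT INTO combined_item_master VALUES (?, ?)",
    [("AAA", "003"), ("BBB", "004"), ("CCC", "001"), ("CCC", "004"),
     ("AAA", "123"), ("BBB", "123"), ("CCC", "123"), ("BBB", "003")],
)

# count(DISTINCT company_code) collapses duplicate (vendor, company) rows,
# so no DISTINCT subquery is needed.
result = conn.execute("""
    SELECT primary_vendor_no, count(DISTINCT company_code) AS vendor_locations
    FROM combined_item_master
    GROUP BY primary_vendor_no
    HAVING count(DISTINCT company_code) = 3
""").fetchall()
print(result)  # [('123', 3)]
```

HAVING count(DISTINCT company_code) = 3 also states the intent ("present in three companies") more directly than counting pre-deduplicated rows.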


Thanks
Bret





Re: [GENERAL] import CSV file to a table

2017-03-08 Thread Bret Stern
I'll throw in.

If tab-delimited is available, perhaps that option will work
better...or..
use Access to find the violations of the quote-comma-delimited
assumptions, then
export from Access and import.

Bret


On Wed, 2017-03-08 at 08:36 -0800, Karl Czajkowski wrote:

> I believe that in its full glory, you cannot reliably locate CSV
> record boundaries except by parsing each field in order, including
> quote processing.  Individual records may have arbitrary numbers of
> field and record separator characters within the values.
> 
> Karl
> 
> 
> On Mar 08, Rob Sargent modulated:
> > Since bash has been bandied about in this thread I presume awk is
> > available.  Here's how I would check just how 'csv'ish the incoming
> > file is.
> > ...
> 
> 
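Karl's point can be demonstrated with any quote-aware parser: record boundaries only emerge from full field-by-field parsing. A small sketch (Python's csv module, chosen purely for brevity; not part of the thread's toolchain):

```python
import csv
import io

# One quoted field contains both a comma and a newline; splitting the input
# on raw newlines would miscount the records, a quote-aware parser does not.
raw = 'vendor,address\nADEXU,"123 Main St,\nSuite 4"\nAGORA,"500 Pine Ave"\n'

rows = list(csv.reader(io.StringIO(raw)))
print(len(rows))  # 3: header plus two data rows, not four raw lines
print(rows[1])    # ['ADEXU', '123 Main St,\nSuite 4']
```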




[GENERAL] Query help

2016-10-04 Thread Bret Stern
Good evening,
I'm curious about a way to ask the following question of my vendors
table.

pseudo1 "select all vendors which exist in BUR and EBC and SNJ"

and

pseudo2 "select all vendors which DO NOT exist in all three showrooms"


The data is from a Sage accounting system which I pull out and place in
a pg db. What we are trying to do is identify which vendors are defined
in all three of our showrooms, and vice versa, which ones are not.

ap_vendors table
  company_code character varying(10) NOT NULL,
  ap_division_no character varying(2) NOT NULL,
  vendor_no character varying(7) NOT NULL,
  terms_code character varying(2),
  vendor_name character varying(30),
  address_line1 character varying(30),
  address_line2 character varying(30),
  address_line3 character varying(30),
  city character varying(20),
  state character varying(2),
  zip_code character varying(10),
  telephone_no character varying(17),
  fax_no character varying(17),
  CONSTRAINT ap_vendors_pk PRIMARY KEY (company_code, ap_division_no, vendor_no)

sample records:
"BUR";"00";"ADEXU";"30";"ADEX USA";"''";"''";"''";"''";"''";"92831";"''";"''"
"BUR";"00";"AGORA";"30";"Agora Natural Surfaces";"''";"''";"''";"''";"''";"90248";"''";"''"
"BUR";"00";"AKDOP";"30";"AKDO Pacific";"''";"''";"''";"''";"''";"94545";"''";"''"
"EBC";"00";"ADEXU";"30";"ADEX USA";"''";"''";"''";"''";"''";"92831";"''";"''"
"EBC";"00";"AGORA";"30";"Agora Natural Surfaces";"''";"''";"''";"''";"''";"90248";"''";"''"
"EBC";"00";"ARIZ01";"30";"Arizona Tile";"''";"''";"''";"''";"''";"94550";"''";"''"
"SNJ";"00";"AKDOP";"30";"AKDO Pacific";"''";"''";"''";"''";"''";"94545";"''";"''"
"SNJ";"00";"AGORA";"30";"Agora Natural Surfaces";"''";"''";"''";"''";"''";"90248";"''";"''"
"SNJ";"00";"ADEXU";"30";"ADEX USA";"''";"''";"''";"''";"''";"92831";"''";"''"

What I need is a query which I can modify to return only vendors which
exist in all three company_code values (BUR, EBC, SNJ; they represent
showroom locations)

eg; exists in BUR, EBC, SNJ
ADEXU
AGORA

OR

modify the query to return only the vendors which DO NOT exist in all
three showrooms based on the first column company_code

eg; 
AKDOP   only exists in BUR and SNJ
ARIZ01  only exists in EBC
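Both pseudo-queries reduce to grouping by vendor and counting distinct showrooms: `= 3` answers the first question, `< 3` the second. A self-contained sketch (SQLite via Python stands in for the pg db, with ap_vendors trimmed to the two relevant columns; the SQL is the same in Postgres):

```python
import sqlite3

# ap_vendors abbreviated to the two columns the question needs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ap_vendors (company_code TEXT, vendor_no TEXT)")
conn.executemany("INSERT INTO ap_vendors VALUES (?, ?)", [
    ("BUR", "ADEXU"), ("BUR", "AGORA"), ("BUR", "AKDOP"),
    ("EBC", "ADEXU"), ("EBC", "AGORA"), ("EBC", "ARIZ01"),
    ("SNJ", "AKDOP"), ("SNJ", "AGORA"), ("SNJ", "ADEXU"),
])

# pseudo1: vendors present in all three showrooms
in_all = conn.execute("""
    SELECT vendor_no FROM ap_vendors
    WHERE company_code IN ('BUR', 'EBC', 'SNJ')
    GROUP BY vendor_no
    HAVING count(DISTINCT company_code) = 3
    ORDER BY vendor_no
""").fetchall()

# pseudo2: vendors missing from at least one showroom (flip = to <)
not_in_all = conn.execute("""
    SELECT vendor_no FROM ap_vendors
    WHERE company_code IN ('BUR', 'EBC', 'SNJ')
    GROUP BY vendor_no
    HAVING count(DISTINCT company_code) < 3
    ORDER BY vendor_no
""").fetchall()

print(in_all)      # [('ADEXU',), ('AGORA',)]
print(not_in_all)  # [('AKDOP',), ('ARIZ01',)]
```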

Thanks
Bret




Re: [GENERAL] Multi-Table Insert/Update Strategy - Use Functions/Procedures?

2016-01-27 Thread Bret Stern
On this track (possibly unrelated)...can a view be used as part of the
multi-table update?
Just curious


 On Wed, 2016-01-27 at 14:48 -0500, Don Parris wrote:
> I have several tables related to people and their contact information,
> and want db users to be able to add or update a given person and their
> respective contact information in one step, and get all the
> information into the correct tables.
> 
> 
> 
> I think I am ok with setting the privileges on the tables and columns
> as appropriate to allow each group to select, insert and update the
> appropriate data, and I can create appropriate views for them to view
> data they need to see. However, I am not really sure about the best
> way to allow someone to actually insert/update the data.
> 
> 
> 
> For instance, given the following tables:
> 
> 
> core.category
> contact.entity
> 
> 
> contact.person
> 
> contact.entity_category --linking table between entity and category
> 
> 
> contact.person_entity --linking table between entity & person
> 
> ... --other tables for address and phone contact info
> 
> 
> 
> I haven't really given much thought as to how such a procedure might
> look, but I'm guessing something along the lines of:
> CREATE FUNCTION record_insert() RETURNS integer AS $$
> 
> BEGIN
> INSERT statements... --need PK from entity & category tables to insert
> into entity_category table.
> 
> END
> 
> $$
> 
> language plpgsql;
> 
> 
> Ideally, the db user just says "I want to enter Joe Public, and Joe is
> affiliated with the Widget Corp entity, and has the phone numbers..."
> 
> 
> 
> Am I on the right track, or is there some better way to set this up?
> My understanding is that views really aren't meant for insert/update
> operations, and I have seen on the web that using views to
> insert/update is a bit tricky - and still requires a procedure with a
> rule on the view.
> 
> 
> 
> Thanks,
> 
> Don
> 
> -- 
> 
> D.C. Parris, FMP, Linux+, ESL Certificate
> Minister, Security/FM Coordinator, Free Software Advocate
> 
> http://dcparris.net/
> GPG Key ID: F5E179BE
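On capturing the PKs for the linking tables: in Postgres the function body would typically use INSERT ... RETURNING to fetch each new key before inserting the link row. A rough sketch of that shape (Python with SQLite so it is self-contained and runnable; lastrowid plays the role of RETURNING, and all table/column names here are illustrative, not the actual contact/core schema):

```python
import sqlite3

# Illustrative schema only; the real tables live in contact/core schemas.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entity (entity_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE person (person_id INTEGER PRIMARY KEY, full_name TEXT);
    CREATE TABLE person_entity (person_id INTEGER, entity_id INTEGER);
""")

def record_insert(conn, full_name, entity_name):
    """Insert a person, an entity, and their link row in one transaction."""
    with conn:  # commits on success, rolls back if any INSERT fails
        cur = conn.cursor()
        cur.execute("INSERT INTO entity (name) VALUES (?)", (entity_name,))
        entity_id = cur.lastrowid   # in Postgres: INSERT ... RETURNING entity_id
        cur.execute("INSERT INTO person (full_name) VALUES (?)", (full_name,))
        person_id = cur.lastrowid
        cur.execute("INSERT INTO person_entity VALUES (?, ?)", (person_id, entity_id))
    return person_id, entity_id

ids = record_insert(conn, "Joe Public", "Widget Corp")
print(ids)  # (1, 1)
```

Wrapping all the INSERTs in one transaction (implicit inside a plpgsql function) is what keeps the linked rows consistent if any single insert fails.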



Re: [GENERAL] Let's Do the CoC Right

2016-01-22 Thread Bret Stern
Frankly,

Can we create another COC (Code of Content) for this specific list?

My mailbox is full of non-technical (in my opinion) CoC discussions.
Which I grow tired of.

And to add to this completely impossible COC solution; in my life I've
constantly BEEN offended.
I've been offended financially, technically, physically, grammatically
(as written), and my favorite ..golfically (can't putt).

I believe I'm a better everything by those who have offended me in the
name of life's lessons. I
don't go to Starbucks and expect a COC.

eg;
You shouldn't have used an int...and...
why the f%$ck did you use a godda$%m int you dumb son-of-a-bitch...are
the same thing to me,
but the latter clearly could have cost lives.

So for those of us who cannot be offended (no offense), no COC needed.


Cheers (no offense)
Bret Stern (no offense)


ps. If you do pull off the Holy Grail (no offense), I'll be sure to
adhere to it.




Re: [GENERAL] WIP: CoC

2016-01-12 Thread Bret Stern
On Tue, 2016-01-12 at 09:07 -0500, Regina Obe wrote:

> >  If the attacker goes public, we point people at the exchange that happened 
> > where Tom has presumably already discussed the reasons that the 
> > patch/feature/&c isn't being accepted.
> 
> If someone wanted to out someone, they would study them carefully.  They 
> would find Tom's buttons and push them.
> They will show proof of Tom saying fuck you trans thing (probably something 
> worse) and all that and it would  be a bad reflection on Tom and our 
> community.
> It's because they don't have a Coc that Tom is such a jerk.  They let the 
> power get to his head.
> They would have proof is my point in an email trail.  
> 
> Luckily I think Tom doesn't have many visible buttons to push, but others in 
> our community do.
> At any rate, I think it's looking more like a CoC will hurt us more than do us 
> good.  This is beginning to feel too much like high-school Lincoln-Douglas 
> debating, which I never enjoyed.
> I just want to get back to programming something useful.
> 

Amen.

> 
> > I don't think I understand your point. So I get 100 friends to come here 
> > and ask for Tom to be outed, we ask for the reason and when they don't 
> > produce a valid one, nothing happens because none of us have any power.
> They will ask, they'll point at a random link.  Like this one - 
> https://twitter.com/krainboltgreene/status/611569515315507200
> 
> You'll be too lazy to read it and assume they read it and they are right.  
> Tom will be persecuted for some link everyone was too lazy to read.
> News of Tom's jerkiness would spread across the internet like a virus.  
> The jerk thing would be echoed by everyone until everyone believes it and 
> takes it to heart. "Tom is a big jerk. How can the PostgreSQL project allow 
> such a jerk to be running the show."
> Tom will feel bad and think - "No good deed goes unpunished", he'll step down.
> 
> THE END
> 
> 
> Thanks,
> Regina
> 

In that vein..Being the insensitive MF that I am, I don't care what Tom
says. Tom's value (in the list) is his commitment to
the project and (for me) the last word on most topics..even if he is a
liberal sob.
Feel the love,
Bret Stern


> 
> 




[GENERAL] Code of Conduct

2016-01-11 Thread Bret Stern
Maybe Trump should write this


Re: [GENERAL] Code of Conduct: Is it time?

2016-01-05 Thread Bret Stern
On Tue, 2016-01-05 at 22:41 -0600, Jim Nasby wrote:

> On 1/5/16 10:03 PM, John R Pierce wrote:
> > On 1/5/2016 5:31 PM, Jim Nasby wrote:
> >> IMHO, the real problem here is not simply a CoC, it is that the
> >> Postgres community doesn't focus on developing the community itself.
> >> The closest we come to "focus" is occasional talk on -hackers about
> >> how we need more developers. There is no formal
> >> discussion/leadership/coordination towards actively building and
> >> strengthening our community. Until that changes, I fear we will always
> >> have a lack of developers. More importantly, we will continue to lack
> >> all the other ways that people could contribute beyond writing code.
> >> IE: the talk shouldn't be about needing more developers, it should be
> >> about needing people who want to contribute time to growing the
> >> community.
> >
> >
> > That sounds like a bunch of modern marketing graduate mumbo-jumbo to
> > me.  The postgres community are the people who actually support it on
> > the email lists and IRC, as well as the core development teams, and
> > IMO, they are quite strong and effective.  When you start talking
> > about social marketing and facebook and twitter and stuff, that's just a
> > bunch of feelgood smoke and mirrors.  The project's output is what
> > supports it, not having people going out 'growing community'; that is
> > just a bunch of hot air.  You actively 'grow community' when you're
> > pushing worthless products (soda pop, etc) based on slick marketing
> > plans rather than actually selling something useful.
> 
> Then why is it that there is almost no contribution to the community 
> other than code and mailing list discussion?
> 
> Why is the infrastructure team composed entirely of highly experienced 
> code contributors, of which there are ~200 on the planet, when there are 
> literally 100s of thousands (if not millions) of people out there that 
> could do that work (and could probably do it better if it's what they do 
> for a living, no offense to the efforts of the infrastructure team).
> 
> Why is there a lack of developers? And a serious lack of code reviewers?
> -- 
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
> 

As long as I've participated in the list, I've had access to the very
best conversations and technical discussions from my fellow decorated
contributors.

The CoC sounds like a Washington politics play, but as long as the best
still engage in this forum, I could care less. The list serves its
purpose without overhead...a rare resource in today's flood of
incoherent technical chatter.

Happy New Year!
Bret Stern
President
Machine Management



Re: [GENERAL] native api or odbc?

2015-06-23 Thread Bret Stern
I use odbc in a windows environment.

No complaints; I can do inserts, selects, updates, joins, and execute
stored procedures with little fuss...more than fast enough for my
purposes.

There are some issues with 32/64-bit odbc depending on your respective
OS, as far as installing the correct odbc version. To be expected.

Otherwise I couldn't be happier. For what it's worth.


On Tue, 2015-06-23 at 15:56 -0700, Ted Toth wrote:
> What are peoples experiences with either/both, pluses/minuses?
> 
> 
> 
> Ted



Re: [GENERAL] SSD Drives

2014-04-03 Thread Bret Stern
On Thu, 2014-04-03 at 12:32 -0700, David Rees wrote:
> On Thu, Apr 3, 2014 at 12:13 PM, Merlin Moncure  wrote:
> > On Wed, Apr 2, 2014 at 2:37 PM, Bret Stern
> >  wrote:
> >> Any opinions/comments on using SSD drives with postgresql?
> >
> > Here's a single S3700 smoking an array of 16 15k drives (poster didn't
> > realize that; was too focused on synthetic numbers):
> > http://dba.stackexchange.com/questions/45224/postgres-write-performance-on-intel-s3700-ssd
> 
> I just ran a quick test earlier this week on an old Dell 2970 (2
> Opteron 2387, 16GB RAM) comparing a 6-disk RAID10 with 10k 147GB SAS
> disks to a 2-disk RAID1 with 480GB Intel S3500 SSDs and found the SSDs
> are about 4-6x faster using pgbench and a scaling factor of 1100. Some
> sort of MegaRAID controller according to lspci and has BBU. TPS
> numbers below are approximate.
> 
> RAID10 disk array:
> 8 clients: 350 tps
> 16 clients: 530 tps
> 32 clients: 800 tps
> 
> RAID1 SSD array:
> 8 clients: 2100 tps
> 16 clients: 2500 tps
> 32 clients: 3100 tps
> 
> So yeah, even the slower, cheaper S3500 SSDs are way fast. If your
> write workload isn't too high, the S3500 can work well. 

Is a write cycle anywhere on the drive different than a re-write?

Or is a write a write!

The feedback/comments are awesome. I'm shopping..


> We'll see how
> the SMART drive lifetime numbers do once we get into production, but
> right now we estimate they should last at least 5 years and from what
> we've seen it seems that SSDs seem to wear much better than expected.
> If not, we'll pony up and go for the S3700 or perhaps move the xlog
> back on to spinning disks.
> 
> -Dave




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSD Drives

2014-04-02 Thread Bret Stern
Care to share the SSD hardware you're using?

I've used none to date, and have some critical data I would like
to put on a development server to test with.

Regards,

Bret Stern

On Wed, 2014-04-02 at 15:31 -0500, Shaun Thomas wrote:
> On 04/02/2014 02:50 PM, Brent Wood wrote:
> 
> > http://it-blog.5amsolutions.com/2010/08/performance-of-postgresql-ssd-vs.html
> 
> While interesting, these results are extremely out of date compared to 
> current drives. Current chips and firmware regularly put out 2-10 times 
> better performance than even the best graphs on this page, depending on 
> what you buy.
> 
> We moved all of our performance-critical servers to NVRAM-based storage 
> years ago. For us, it was well worth the added expense.
> 
> -- 
> Shaun Thomas
> OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
> 312-676-8870
> stho...@optionshouse.com
> 
> __
> 
> See http://www.peak6.com/email_disclaimer/ for terms and conditions related 
> to this email
> 
> 






[GENERAL] SSD Drives

2014-04-02 Thread Bret Stern
Any opinions/comments on using SSD drives with postgresql?





Re: [GENERAL] Moving data from M$ JetDB file to Postgres on Linux

2014-03-01 Thread Bret Stern
FYI,
Pretty sure Apache OpenOffice Base has native support to open Access .mdb
files on a linux box





Re: [GENERAL] Replacing Ordinal Suffixes

2014-02-28 Thread Bret Stern
This is a kick *ss forum. I must say.


On Fri, 2014-02-28 at 14:17 -0800, Steve Atkins wrote:
> On Feb 28, 2014, at 2:04 PM, George Weaver  wrote:
> 
> > Hi list,
> >  
> > I'm stumped.
> >  
> > I am trying to use Regexp_Replace to replace ordinal suffixes in addresses 
> > (eg have '126th' want '126') for comparison purposes.  So far no luck.
> >  
> > I have found that
> >  
> > SELECT REGEXP_REPLACE(LOWER('300 North 126th Street'), 
> > '(?!/D)(st|nd|rd|th)', '', 'g');
> >   regexp_replace
> > --
> >  300 nor 126 reet
> >  
> > but
> >  
> > SELECT REGEXP_REPLACE(LOWER('300 North 126th Street'), 
> > '(?=/D)(st|nd|rd|th)', '', 'g');
> >  regexp_replace
> > 
> >  300 north 126th street
> >  
> > I'm a novice with regular expressions and google hasn't helped much.
> >  
> > Any suggestions?
> 
> Maybe this?
> 
> select regexp_replace('300 North 126th Street', '(\d+)(?:st|nd|rd|th)', '\1', 
> 'gi');
> 
> Cheers,
>   Steve
> 
> 
> 
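Steve's pattern can be checked quickly outside the database; the key fix is anchoring the suffix to preceding digits and keeping them with \1, rather than using lookarounds (the original /D was presumably a typo for \D). A sketch in Python's re (an assumption for illustration only; Python regex syntax, not POSIX, but the same idea):

```python
import re

def strip_ordinals(address: str) -> str:
    # (\d+) keeps the digits; the (?:st|nd|rd|th) suffix is consumed and
    # dropped; requiring \d+ before it is what protects 'Street' and 'North'.
    return re.sub(r'(?i)(\d+)(?:st|nd|rd|th)', r'\1', address)

print(strip_ordinals('300 North 126th Street'))  # 300 North 126 Street
print(strip_ordinals('1ST Ave & 22nd St'))       # 1 Ave & 22 St
```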






Re: [GENERAL] Help with connection issue - started today

2014-02-06 Thread Bret Stern
You checked pg_hba.conf for host 10.68.73.6?
Is there a matching log entry for the rejection?
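For reference, a pg_hba.conf entry matching that error might look roughly like this (illustrative only; the address range and auth method must fit the actual setup, and pg_hba.conf is matched top-down, first match wins):

```
# TYPE  DATABASE  USER    ADDRESS         METHOD
host    master    master  10.68.73.0/24   md5
```

A reload (pg_ctl reload, or SELECT pg_reload_conf() from psql) is needed after editing.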


On Wed, 2014-02-05 at 09:21 -0200, Edson Richter wrote:
> Dear all, I need your advise. Found a tricky situation.
> 
> Without any changes in the configuration files, a **local** connection 
> to a local VPN IP address could not be established with the following error:
> 
> 2014-02-05 09:12:21.111 - x - org.postgresql.util.PSQLException: 
> ERROR: could not establish connection
>Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user 
> "master", database "master", SSL off
> 
> org.postgresql.util.PSQLException: ERROR: could not establish connection
>Detail: FATAL:  no pg_hba.conf entry for host "10.68.73.6", user 
> "master", database "master", SSL off
> 
>  at 
> org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157)
>  at 
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886)
>  at 
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
>  at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555)
>  at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:403)
>  at 
> org.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:331)
>  at 
> org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
>  at 
> org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
>  at 
> org.apache.tomcat.dbcp.dbcp.DelegatingStatement.executeUpdate(DelegatingStatement.java:228)
>  at br.com...
> 
> If I run from the Local Server:
> [root@mylocalserver logs]# psql -h 10.68.73.6 -d master -U master
> Senha para usuário master:
> psql (9.2.6)
> Digite "help" para ajuda.
> 
> master=> select count(*) from pessoa;
>   count
> ---
>   9
> (1 registro)
> 
> 
> If I run from a Remote Server:
> 
> [root@myremoteserver ssh]# psql -h 10.68.73.6 -d master -U master
> Senha para usuário master:
> psql (9.2.6)
> Digite "help" para ajuda.
> 
> master=> select count(*) from pessoa;
>   count
> ---
>   9
> (1 registro)
> 
> 
> So, seems that only JDBC is getting in trouble to connect to a local IP 
> address.
> Current interfaces (and yes, the tunneling is running - otherwise the 
> remote server could not connect):
> 
> [root@mylocalserver logs]# ifconfig
> eth0  Link encap:Ethernet  Endereço de HW XX:XX:XX:XX:XX:XX
>inet end.: xxx.xx.xxx.85  Bcast:xxx.xx.xxx.87 
> Masc:255.255.255.248
>endereço inet6: ::xxx:::/64 Escopo:Link
>UP BROADCAST RUNNING MULTICAST  MTU:1500  Métrica:1
>RX packets:172557 errors:0 dropped:0 overruns:0 frame:0
>TX packets:293439 errors:0 dropped:0 overruns:0 carrier:0
>colisões:0 txqueuelen:1000
>RX bytes:24537681 (23.4 MiB)  TX bytes:227413210 (216.8 MiB)
>IRQ:28 Memória:fb00-fb7f
> 
> loLink encap:Loopback Local
>inet end.: 127.0.0.1  Masc:255.0.0.0
>endereço inet6: ::1/128 Escopo:Máquina
>UP LOOPBACK RUNNING  MTU:16436  Métrica:1
>RX packets:138156 errors:0 dropped:0 overruns:0 frame:0
>TX packets:138156 errors:0 dropped:0 overruns:0 carrier:0
>colisões:0 txqueuelen:0
>RX bytes:31993170 (30.5 MiB)  TX bytes:31993170 (30.5 MiB)
> 
> tun0  Link encap:Não Especificado  Endereço de HW 
> 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
>inet end.: 10.68.73.6  P-a-P:10.68.73.5 Masc:255.255.255.255
>UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500 Métrica:1
>RX packets:149106 errors:0 dropped:0 overruns:0 frame:0
>TX packets:274534 errors:0 dropped:199 overruns:0 carrier:0
>colisões:0 txqueuelen:100
>RX bytes:6291572 (6.0 MiB)  TX bytes:381732404 (364.0 MiB)
> 
> 
> 
> 
> 
> 






Re: [GENERAL] problem connecting to postgres via apache

2014-01-24 Thread Bret Stern
SELinux is a set of kernel-enforced security policies (mandatory access control).

If you are able to find it, I was suggesting you disable it
only to see if the Apache problem goes away.

Then to turn it back on.

If it stops working, then there is a setting which is preventing
Apache from being accessed. Locating the specific setting
would be the next step.





On Fri, 2014-01-24 at 11:55 -0800, Susan Cassidy wrote:
> I don't actually know what SELinux is.  What else will happen if I
> (find out how to) disable it?
> 
> 
> Susan
> 
> 
> 
> On Fri, Jan 24, 2014 at 9:47 AM, Bret Stern
>  wrote:
> Are you calling the perl from apache (assuming yes)..? Does
> the web
> user have the rights to execute the perl code?
> 
> Try disabling SELinux..
> You'll get it..
> 
> On Fri, 2014-01-24 at 09:35 -0800, Susan Cassidy wrote:
> > I've already checked that.  It is enabled.  I am running
> Scientific
> > Linux.
> >
> >
> >
> > On Fri, Jan 24, 2014 at 8:28 AM, Tom Lane
>  wrote:
> > Susan Cassidy
>  writes:
> > > $dbh = DBI->connect("dbi:Pg:dbname=$dbname;host=
> > ${dbserver};port=$dbport;",
> > > $dbuser, $dbpasswd) or
> > >  errexit( "Unable to connect to dbname $dbname,
> err:
> > $DBI::errstr");
> >
> > > The exact same connection string works fine in a
> standalone
> > perl program.
> >
> > Given the permissions errors you mentioned upthread,
> I'm
> > wondering whether
> > you're running on Red Hat/CentOS, and if so whether
> SELinux is
> > preventing
> > apache from connecting to unexpected port numbers.
>  I seem to
> > recall
> > that there's a SELinux boolean specifically intended
> to allow
> > or disallow
> > database connections from webservers, but I couldn't
> tell you
> > the name
> > offhand.
> >
> > regards, tom lane
> >
> >
> 
> 
> 
> 






Re: [GENERAL] problem connecting to postgres via apache

2014-01-24 Thread Bret Stern
Are you calling the perl from apache (assuming yes)..? Does the web
user have the rights to execute the perl code?

Try disabling SELinux..
You'll get it..

On Fri, 2014-01-24 at 09:35 -0800, Susan Cassidy wrote:
> I've already checked that.  It is enabled.  I am running Scientific
> Linux.
> 
> 
> 
> On Fri, Jan 24, 2014 at 8:28 AM, Tom Lane  wrote:
> Susan Cassidy  writes:
> > $dbh = DBI->connect("dbi:Pg:dbname=$dbname;host=
> ${dbserver};port=$dbport;",
> > $dbuser, $dbpasswd) or
> >  errexit( "Unable to connect to dbname $dbname, err:
> $DBI::errstr");
> 
> > The exact same connection string works fine in a standalone
> perl program.
> 
> Given the permissions errors you mentioned upthread, I'm
> wondering whether
> you're running on Red Hat/CentOS, and if so whether SELinux is
> preventing
> apache from connecting to unexpected port numbers.  I seem to
> recall
> that there's a SELinux boolean specifically intended to allow
> or disallow
> database connections from webservers, but I couldn't tell you
> the name
> offhand.
> 
> regards, tom lane
> 
> 






Re: [GENERAL] Forms for entering data into postgresql

2013-10-11 Thread Bret Stern
Agreed.
On Fri, 2013-10-11 at 09:06 -0700, Steve Atkins wrote:
> On Oct 11, 2013, at 8:57 AM, Bret Stern  
> wrote:
> 
> > My interpretation of "Forms these days are written in HTML" means
> > most interfaces are web front ends to the cloud. Not a GUI framework.
> 
> 
> Yup.
> 
> But embedding an HTML renderer in your desktop app does allow you to
> use HTML where it's appropriate - and it works really well for dynamically
> generated forms and tabular output.
> 
> The IBM 3270 wasn't the crowning achievement of data entry technology.
> 
> Cheers,
>   Steve
> 
> 
> 






Re: [GENERAL] Forms for entering data into postgresql

2013-10-11 Thread Bret Stern
My interpretation of "Forms these days are written in HTML" means
most interfaces are web front ends to the cloud. Not a GUI framework.

On Fri, 2013-10-11 at 14:47 +0200, Wolfgang Keller wrote:
> > Forms these days are written in HTML.
> 
> Only by clueless/careless morons.
> 
> HTML has never been, is not and will never be a usable GUI framework.
> 
> And "web apps" are unusable garbage.
> 
> Sincerely,
> 
> Wolfgang
> 
> 






Re: [GENERAL] Forms for entering data into postgresql

2013-10-11 Thread Bret Stern
With a brief look at the last 15 years of programming for the web, your
comments are a little harsh. Only a short time ago perl and cgi
were the rage.

I've been programming for 30 years...and still have clients who use
Lotus 123 (MS-DOS based). It's a big world out there, with lots of
solutions for a lot of problems.

PS. I wrote my first PostgreSQL trigger the other day. Awesome stuff.

On Fri, 2013-10-11 at 14:47 +0200, Wolfgang Keller wrote:
> > Forms these days are written in HTML.
> 
> Only by clueless/careless morons.
> 
> HTML has never been, is not and will never be a usable GUI framework.
> 
> And "web apps" are unusable garbage.
> 
> Sincerely,
> 
> Wolfgang
> 
> 






[GENERAL] Donation

2013-10-10 Thread Bret Stern
I'm amazed at how well pgAdmin, postgreSQL, all the interface options,
and this forum work.

It's time to reflect. I feel like donating. 
(Maybe I'll feel the same about our politicians someday) Nnn!

Is https://www.postgresql.us/donate the place to donate?








Re: [GENERAL] 回复:[GENERAL] SP to calc shipments vs receipts

2013-09-25 Thread Bret Stern
On Wed, 2013-09-25 at 07:46 -0700, Chris Travers wrote:
> 
> 
> 
> On Wed, Sep 25, 2013 at 7:27 AM, DDT <410845...@qq.com> wrote:
> By the way, you can try to save the current totals to another
> table.
> update it through triggers when the inventory transactions
> changed.
> it may lead to better performance on a large set of inventory
> transactions for query current totals
> 
> 
> If you are going to do this, my recommendation is to store periodic
> summaries (i.e. for sum through date) and then aggregate rolling
> forward.  This vastly simplifies querying and data validation if you
> are only appending data. 
> -- 
> Best Wishes,
> Chris Travers
> 
> 
> Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
> lock-in.
> http://www.efficito.com/learn_more.shtml

Thanks Chris for the suggestions.

With all the input from this forum, my project is looking pretty sweet.

B Stern





[GENERAL] Re: 回复:[GENERAL] SP to calc shipments vs receipts

2013-09-25 Thread Bret Stern
I like that idea.

Also thinking of creating two ALLOCATE transactions.

ALLO (Allocate Open)
ALLC (Allocate Closed) after allocation has been shipped.

This way I can still see the original allocation and allocation
ship transactions when running a movement report, but only
use ALLO to determine allocations still in our possession.

Thanks again for comments

On Wed, 2013-09-25 at 22:27 +0800, DDT wrote:
> By the way, you can try to save the current totals to another table.
> update it through triggers when the inventory transactions changed.
> it may lead to better performance on a large set of inventory
> transactions for query current totals
> 
> -
> 
> 
> 
> 
> Think I'll just do an UPDATE which changes the ALLOCATED
> transaction to a SHIP transaction and uses the current
> Ship Date/Time
> 
> On Tue, 2013-09-24 at 07:38 -0700, Bret Stern wrote:
> > Support at its finest.
> > Thinking maybe ALLOCATED transactions zero out
> > when the allocated qty is shipped, but I would like to save
> > the original allocated qty..maybe add another field in my
> > transaction table to save the allocated transaction qty.
> > 
> > Also don't see any problem with deleting the ALLOCATED
> > transaction record..normally I don't like deleting any transaction
> > records, but at this moment don't see the harm.
> > 
> > my table. 
> > 
> >   id serial NOT NULL,
> >   trans_date character varying(20),
> >   trans_time character varying(20),
> >   trans_type character varying(8),
> >   trans_user character varying(10),
> >   trans_qty real,
> >   trans_reference character varying(40),
> >   trans_comment character varying(80),
> >   part_no character varying(40),
> >   part_desc character varying(40),
> >   part_owner_id character varying(20),
> >   building character varying(4),
> >   isle character varying(2),
> >   rack character varying(2),
> >   shelf character varying(2),
> >   matrix character varying(2),
> >   CONSTRAINT ss_item_tran_key PRIMARY KEY (id)
> > 
> > You'all have me thinking. Thanks for taking time to
> > educate me.
> > 
> > 
> > On Tue, 2013-09-24 at 14:22 +0800, DDT wrote:
> > > hello, is the output calculated by following rule?
> > > 
> > > on_hand SUM(receipt) - SUM(shipment) - SUM(allocated)
> > > available SUM(receipt) - SUM(shipment)
> > > 
> > > sql can be:
> > > sum(case when trans_type='REC' then trans_qty when trans_type IN
> > > ('SHP', 'ALL') then -trans_qty else 0 end) as on_hand
> > > sum(case when trans_type='REC' then trans_qty when trans_type = 'SHP'
> > > then -trans_qty else 0 end) as available
> > > 
> > > but i'm courise about if something is allocated and then it
> shipped,
> > > will you delete the record or allocation?
> > > 
> > > 
> > > 
> > > 
> > > On 9/23/2013 10:13 PM, Bret Stern wrote:
> > > > I have an inventory transaction table with several fields,
> > > > specifically:
> > > > part_no
> > > > trans_type
> > > > trans_qty
> > > >
> > > > part_no | trans_type | trans_qty
> > > > abc REC 5000 (receipt)
> > > > abc REC 400 (receipt)
> > > > abc SHP 1000 (shipment)
> > > > abc ALL 1000 (allocated)
> > > >
> > > > Looking for the best way to show following totals with SQL
> > > >
> > > > on_hand | allocated | available
> > > > 3400 1000 4400
> > > 
> > > select part_no,
> > >  sum(case when trans_type='REC' then trans_qty else 0 end) as "on_hand",
> > >  sum(case when trans_type='ALL' then trans_qty else 0 end) as "allocated",
> > >  sum(case when trans_type='SHP' then trans_qty else 0 end) as "shipped"
> > >  from inventory_transaction_table
> > >  group by part_no;
> > > 
> > > 
> > > except, your example output doesn't correlate with your sample
> input 
> > > according to any rules I can see.
> > > 
> > > 
> > > -- 
> > > john r pierce  37N 122W
> > > somewhere on the middle of the left coast
> > > 
> > > 
> > > 
> > > -- 
> > > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > > To make changes to your subscription:
> > > http://www.postgresql.org/mailpref/pgsql-general
> > > .
> > > 
> > 
> > 
> > 
> > 
> 
> 
> 
> 
> 






[GENERAL] SP to calc shipments vs receipts

2013-09-24 Thread Bret Stern

Think I'll just do an UPDATE which changes the ALLOCATED
transaction to a SHIP transaction and uses the current
Ship Date/Time
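
A minimal sketch of that UPDATE, assuming the table is named ss_item_tran
(guessed from the ss_item_tran_key constraint above) and that the extra
column suggested earlier has been added as a hypothetical allocated_qty:

```sql
-- Sketch only: convert an ALLOCATED row into a SHP row, stamping the
-- current date/time and preserving the original allocation quantity in
-- a hypothetical allocated_qty column.
UPDATE ss_item_tran
   SET trans_type    = 'SHP',
       trans_date    = to_char(now(), 'YYYY-MM-DD'),
       trans_time    = to_char(now(), 'HH24:MI:SS'),
       allocated_qty = trans_qty          -- keep the original allocated qty
 WHERE id = 42                            -- the ALLOCATED row being shipped
   AND trans_type = 'ALL';
```

The `AND trans_type = 'ALL'` guard keeps the statement from silently
rewriting a row that has already been converted.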

On Tue, 2013-09-24 at 07:38 -0700, Bret Stern wrote:
> Support at its finest.
> Thinking maybe ALLOCATED transactions zero out
> when the allocated qty is shipped, but I would like to save
> the original allocated qty..maybe add another field in my
> transaction table to save the allocated transaction qty.
> 
> Also don't see any problem with deleting the ALLOCATED
> transaction record..normally I don't like deleting any transaction
> records, but at this moment don't see the harm.
> 
> my table. 
> 
>   id serial NOT NULL,
>   trans_date character varying(20),
>   trans_time character varying(20),
>   trans_type character varying(8),
>   trans_user character varying(10),
>   trans_qty real,
>   trans_reference character varying(40),
>   trans_comment character varying(80),
>   part_no character varying(40),
>   part_desc character varying(40),
>   part_owner_id character varying(20),
>   building character varying(4),
>   isle character varying(2),
>   rack character varying(2),
>   shelf character varying(2),
>   matrix character varying(2),
>   CONSTRAINT ss_item_tran_key PRIMARY KEY (id)
> 
> You'all have me thinking. Thanks for taking time to
> educate me.
> 
> 
> On Tue, 2013-09-24 at 14:22 +0800, DDT wrote:
> > hello, is the output calculated by following rule?
> > 
> > on_hand SUM(receipt) - SUM(shipment) - SUM(allocated)
> > available SUM(receipt) - SUM(shipment)
> > 
> > sql can be:
> > sum(case when trans_type='REC' then trans_qty when trans_type IN
> > ('SHP', 'ALL') then -trans_qty else 0 end) as on_hand
> > sum(case when trans_type='REC' then trans_qty when trans_type = 'SHP'
> > then -trans_qty else 0 end) as available
> > 
> > but I'm curious: if something is allocated and then shipped,
> > will you delete the record or the allocation?
> > 
> > 
> > 
> > 
> > On 9/23/2013 10:13 PM, Bret Stern wrote:
> > > I have an inventory transaction table with several fields,
> > > specifically:
> > > part_no
> > > trans_type
> > > trans_qty
> > >
> > > part_no | trans_type | trans_qty
> > > abc REC 5000 (receipt)
> > > abc REC 400 (receipt)
> > > abc SHP 1000 (shipment)
> > > abc ALL 1000 (allocated)
> > >
> > > Looking for the best way to show following totals with SQL
> > >
> > > on_hand | allocated | available
> > > 3400 1000 4400
> > 
> > select part_no,
> >  sum(case when trans_type='REC' then trans_qty else 0 end) as "on_hand",
> >  sum(case when trans_type='ALL' then trans_qty else 0 end) as "allocated",
> >  sum(case when trans_type='SHP' then trans_qty else 0 end) as "shipped"
> >  from inventory_transaction_table
> >  group by part_no;
> > 
> > 
> > except, your example output doesn't correlate with your sample input 
> > according to any rules I can see.
> > 
> > 
> > -- 
> > john r pierce  37N 122W
> > somewhere on the middle of the left coast
> > 
> > 
> > 
> > 
> 
> 
> 
> 






Re: [GENERAL] Re: [GENERAL] SP to calc shipments vs receipts

2013-09-24 Thread Bret Stern
Support at its finest.
Thinking maybe ALLOCATED transactions zero out
when the allocated qty is shipped, but I would like to save
the original allocated qty..maybe add another field in my
transaction table to save the allocated transaction qty.

Also don't see any problem with deleting the ALLOCATED
transaction record..normally I don't like deleting any transaction
records, but at this moment don't see the harm.

my table. 

  id serial NOT NULL,
  trans_date character varying(20),
  trans_time character varying(20),
  trans_type character varying(8),
  trans_user character varying(10),
  trans_qty real,
  trans_reference character varying(40),
  trans_comment character varying(80),
  part_no character varying(40),
  part_desc character varying(40),
  part_owner_id character varying(20),
  building character varying(4),
  isle character varying(2),
  rack character varying(2),
  shelf character varying(2),
  matrix character varying(2),
  CONSTRAINT ss_item_tran_key PRIMARY KEY (id)

You'all have me thinking. Thanks for taking time to
educate me.


On Tue, 2013-09-24 at 14:22 +0800, DDT wrote:
> hello, is the output calculated by following rule?
> 
> on_hand SUM(receipt) - SUM(shipment) - SUM(allocated)
> available SUM(receipt) - SUM(shipment)
> 
> sql can be:
> sum(case when trans_type='REC' then trans_qty when trans_type IN
> ('SHP', 'ALL') then -trans_qty else 0 end) as on_hand
> sum(case when trans_type='REC' then trans_qty when trans_type = 'SHP'
> then -trans_qty else 0 end) as available
> 
> but I'm curious: if something is allocated and then shipped,
> will you delete the record or the allocation?
> 
> 
> 
> 
> On 9/23/2013 10:13 PM, Bret Stern wrote:
> > I have an inventory transaction table with several fields,
> > specifically:
> > part_no
> > trans_type
> > trans_qty
> >
> > part_no | trans_type | trans_qty
> > abc REC 5000 (receipt)
> > abc REC 400 (receipt)
> > abc SHP 1000 (shipment)
> > abc ALL 1000 (allocated)
> >
> > Looking for the best way to show following totals with SQL
> >
> > on_hand | allocated | available
> > 3400 1000 4400
> 
> select part_no,
>  sum(case when trans_type='REC' then trans_qty else 0 end) as "on_hand",
>  sum(case when trans_type='ALL' then trans_qty else 0 end) as "allocated",
>  sum(case when trans_type='SHP' then trans_qty else 0 end) as "shipped"
>  from inventory_transaction_table
>  group by part_no;
> 
> 
> except, your example output doesn't correlate with your sample input 
> according to any rules I can see.
> 
> 
> -- 
> john r pierce  37N 122W
> somewhere on the middle of the left coast
> 
> 
> 
> 






[GENERAL] SP to calc shipments vs receipts

2013-09-23 Thread Bret Stern
I have an inventory transaction table with several fields,
specifically:
part_no
trans_type
trans_qty

part_no | trans_type | trans_qty
abc REC 5000 (receipt)
abc REC 400  (receipt)
abc SHP 1000 (shipment)
abc ALL 1000 (allocated)

Looking for the best way to show following totals with SQL

on_hand | allocated | available
3400      1000        4400

Thinking of writing a stored procedure that has the aggregate queries,
and returns the values defined above for this example.

Is this a recommended way?

B Stern
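
For reference, a single aggregate query can return all three totals in one
pass, with no stored procedure needed. A sketch, assuming the transaction
table is named ss_item_tran (guessed from the constraint name elsewhere in
the thread) and using the rule on_hand = receipts - shipments,
available = on_hand - allocated:

```sql
-- Sketch: per-part totals from the REC/SHP/ALL transaction codes above.
SELECT part_no,
       SUM(CASE WHEN trans_type = 'REC' THEN trans_qty
                WHEN trans_type = 'SHP' THEN -trans_qty
                ELSE 0 END)                                        AS on_hand,
       SUM(CASE WHEN trans_type = 'ALL' THEN trans_qty ELSE 0 END) AS allocated,
       SUM(CASE WHEN trans_type = 'REC' THEN trans_qty
                WHEN trans_type IN ('SHP', 'ALL') THEN -trans_qty
                ELSE 0 END)                                        AS available
  FROM ss_item_tran
 GROUP BY part_no;
```

With the sample rows this yields on_hand 4400, allocated 1000,
available 3400; note the example output above appears to have the
on_hand and available columns swapped.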









Re: [GENERAL] Using ODBC and VBA to pull data from a large object

2013-09-17 Thread Bret Stern
On Tue, 2013-09-17 at 08:32 -0400, Adam C Falkenberg wrote:
> Good Morning, 
> 
> I had a question about using ODBC with large objects.  When I query
> the pg_largeobject table and return data to an ADO recordset, I can
> only get the first 255 bytes even though the record contains 2048
> bytes of data (all the bytes after the first 255 show as 0).  When I
> checked the type of the record, it was a VarBinary.  Is there a way to
> have all of the data returned to the recordset?  Thanks for any help. 
> 
> Adam


Microsofts sample
http://support.microsoft.com/kb/258038





Re: [GENERAL] Call for design: PostgreSQL mugs

2013-09-08 Thread Bret Stern
PostgreSQL - (the worlds database)





Re: [GENERAL] Stored Procedure table/column args

2013-09-02 Thread Bret Stern
On Mon, 2013-09-02 at 18:20 -0700, Adrian Klaver wrote:
> On 09/02/2013 06:06 PM, Bret Stern wrote:
> > 'pseudo
> >
> > Can you create stored procedures that are built from parameters as
> > below, or does this defeat the pre-compiled purpose of an SP?
> >
> > create function item_exists (tbl character varying, col character
> > varying, col_val character varying)
> >
> > DECLARE
> >   x integer;
> >
> > PERFORM col FROM tbl
> > WHERE col="'" + col_val + "'"
> > IF FOUND THEN
> >  x := 1;
> > else
> >  x := 0;
> > end if;
> >
> > RETURN x;
> > 
> > 
> >
> > left out some syntax, but should deliver the idea
> >
> > trying to build a generic "check for existing" that isn't tied to a
> > specific table/column, without returning recordset object overhead
> >
> > any better ways
> 
> http://www.postgresql.org/docs/9.2/interactive/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN

It's all there. Regards. B stern
> 
> > thanks
> >
> >
> >
> 
> 






[GENERAL] Stored Procedure table/column args

2013-09-02 Thread Bret Stern
'pseudo

Can you create stored procedures that are built from parameters as
below, or does this defeat the pre-compiled purpose of an SP?

create function item_exists (tbl character varying, col character
varying, col_val character varying)

DECLARE
 x integer;

PERFORM col FROM tbl
   WHERE col="'" + col_val + "'"
IF FOUND THEN
 x := 1;
else
 x := 0;
end if;

RETURN x;



left out some syntax, but should deliver the idea

trying to build a generic "check for existing" that isn't tied to a
specific table/column, without returning recordset object overhead

any better ways?
thanks
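
The dynamic-SQL facility the reply points to (EXECUTE with format()) handles
exactly this. A hedged sketch of the generic existence check — function body
and names are illustrative, not the thread author's final code; %I escapes
identifiers and USING binds the value, so no manual quoting is needed:

```sql
-- Sketch: generic "does a row exist?" helper using PL/pgSQL dynamic SQL.
CREATE OR REPLACE FUNCTION item_exists(tbl text, col text, col_val text)
RETURNS integer AS $$
DECLARE
    found_one boolean;
BEGIN
    -- Table/column names interpolated safely with %I; value bound via $1.
    EXECUTE format('SELECT EXISTS (SELECT 1 FROM %I WHERE %I = $1)', tbl, col)
       INTO found_one
      USING col_val;
    RETURN CASE WHEN found_one THEN 1 ELSE 0 END;
END;
$$ LANGUAGE plpgsql;

-- e.g. SELECT item_exists('customers', 'customer_no', 'ABC123');
```

Because the query text is rebuilt per call, this trades the plan caching of
a static function for flexibility, which matches the "pre-compiled purpose"
concern raised above.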





Re: [GENERAL] Update from select

2013-05-13 Thread Bret Stern
Easy money indeed. Thanks for the help
On Mon, 2013-05-13 at 15:28 -0500, Justin Tocci wrote:
> em stands for "easy money"
> 
> update tbl1
> set col3=em.col3, col4=em.col4, col5=em.col5
> from
> (select col3, col4, col5 from tbl2 where col1='criteria') em
> 
> Regards,
> 
> Justin Tocci
> Programmer
> www.workflowproducts.com
> 7813 Harwood Road
> North Richland Hills, TX 76180
> phone 817-503-9545
> skype justintocci
> 
> On May 13, 2013, at 3:23 PM, Bret Stern  
> wrote:
> 
> > PG 8.4
> > 
> > Having trouble putting together an update query to update
> > multiple columns in tbl1 from columns in tbl2. 
> > 
> > update tbl1
> > set col3,col4,col5
> > from
> > (select col3, col4,col5 from tbl2 where col1="criteria")
> > 
> > 
> > 
> > Can someone add to the Postgres Docs (shown below) to help me with this.
> > 
> > UPDATE employees SET sales_count = sales_count + 1 WHERE id =
> >  (SELECT sales_person FROM accounts WHERE name = 'Acme Corporation');
> > 
> > 
> > Many thanks
> > 
> > Bret Stern
> > 
> > 
> > 
> 
> 
> 






[GENERAL] Update from select

2013-05-13 Thread Bret Stern
PG 8.4

Having trouble putting together an update query to update
multiple columns in tbl1 from columns in tbl2. 

update tbl1
set col3,col4,col5
from
(select col3, col4,col5 from tbl2 where col1="criteria")



Can someone add to the Postgres Docs (shown below) to help me with this.

UPDATE employees SET sales_count = sales_count + 1 WHERE id =
  (SELECT sales_person FROM accounts WHERE name = 'Acme Corporation');


Many thanks

Bret Stern
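
Adapting the docs' example to several columns — a sketch, assuming tbl1 and
tbl2 share a key column col1 to match rows on (without such a join
condition, an uncorrelated subquery in FROM would update every row of tbl1):

```sql
-- Sketch: update several columns of tbl1 from matching rows of tbl2.
-- Assumes a shared key column col1; string literals use single quotes.
UPDATE tbl1
   SET col3 = tbl2.col3,
       col4 = tbl2.col4,
       col5 = tbl2.col5
  FROM tbl2
 WHERE tbl2.col1 = tbl1.col1
   AND tbl2.col1 = 'criteria';
```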





Re: [GENERAL] Best import approach? Delimiters in strings

2013-02-16 Thread Bret Stern

The first value seems suspect; it should probably be quoted as
"490015496GW". Values that mix digits and letters are best enclosed in
quotes.


"Golders Green, Golders Green, stop GW" is normal if you want the
entire string between the quotes to arrive in a single column (with
the comma).




On Sat, 2013-02-16 at 17:02 +, Andrew Taylor wrote:
> Hi,
> 
> 
> A bulk import (script attached) is failing.
> 
> 
> Error message:
> psql:/home/andyt/projects/django-stringer/txc/stops_inport.txt:86:
> ERROR:  invalid input syntax for type double precision: "stop_lat"
> 
> 
> I think the reason may be a delimiters in strings such as "Golders
> Green, Golders Green, stop GW"
> 
> 
> 490015496GW,"Golders Green, Golders Green, stop GW",,51.57207,-0.19549
> 
> 
> What's a good way to handle this?
> 
> 
> Thanks,
> 
> 
> Andy
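
For what it's worth, COPY's CSV mode already understands quoted fields that
contain commas; the usual cause of this particular error is the header line
being read as data. A sketch — the file path and column list are
assumptions based on the sample line above:

```sql
-- Sketch: import a CSV whose quoted fields may contain commas.
-- HEADER skips the first line; without it the header text arrives as data
-- and produces errors like:
--   invalid input syntax for type double precision: "stop_lat"
COPY stops (atco_code, common_name, indicator, stop_lat, stop_lon)
FROM '/path/to/stops.csv'
WITH (FORMAT csv, HEADER true);
```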






[GENERAL] Windows Phone/Postgresql

2013-01-30 Thread Bret Stern
I'm thinking about picking up a new windows phone, and
would like to connect to a postgresql server from the
phone.

Anyone have some how-to links.

I've done some initial Google searches, and found some
leads, but hoping to draw on some experience in the group.

Regards,

Bret Stern





Re: [GENERAL] "could not write to output file: Permission denied" during pg_dump

2012-11-10 Thread Bret Stern
Sounds like a file sharing issue. In other words..writing to the same
file at the same time...from two separate pg_dump streams.

Perhaps adding a time var to the file name call below
and see if the error goes away.


On Sat, 2012-11-10 at 08:03 -0600, Tefft, Michael J wrote:
> We have several Postgres 9.4 databases on Solaris 10 that are
> structural clones but with different data . While running multiple
> concurrent pg_dump exports for these databases, we get sporadic errors
> like this:
> 
>  
> 
> pg_dump: dumping contents of table attachment
> 
> pg_dump: [custom archiver] could not write to output file: Permission
> denied
> 
> pg_dump: *** aborted because of error
> 
>  
> 
> This is after successfully dumping several tables. The table that
> triggers the failure varies from run to run, and is not (or, so far
> has not been) the first table processed. These failures only occur
> during concurrent runs – we have not seen them when we single-thread
> the pg_dump exports.
> 
>  
> 
> The command used to invoke pg_dump is as follows:
> 
> ${currentCodeDir}/thirdPartyLinks/postgres/bin/pg_dump --file=
> ${currentDataDir}/${db_name}.dmp --format=custom --host=
> ${PostgreSQLServer} --port=${db_port} --username= --no-privileges
> --verbose ${db_name}
> 
>  
> 
> Any ideas?
> 
>  
> 
> Michael Tefft
> 
> Snap-on Business Solutions 
> 
> 4025 Kinross Lakes Parkway
> 
> Richfield, OH 44286
> 
> 330-659-1840
> 
> michael.j.te...@snapon.com
> 
>  
> 
> 






Re: [GENERAL] Re: Suggestions for the best strategy to emulate returning multiple sets of results

2012-10-10 Thread Bret Stern
create a ramdrive 
On Wed, 2012-10-10 at 14:27 +0100, Seref Arikan wrote:
> Hi Merlin, 
> Thanks for the response. At the moment, the main function is creating
> two temp tables that drops on commit, and python functions fills
> these. Not too bad, but I'd like to push these temp tables to ram,
> which is a bit tricky due to not having a direct method of doing this
> with postgresql. (a topic that has been discussed in the past in this
> mail group) 
> 
> The global variable idea is interesting though. I have not encountered
> this before, is it the global dictionary SD/GD mentioned here:
> http://www.postgresql.org/docs/9.0/static/plpython-sharing.html ?
> It may help perform the expensive transformations once and reuse the
> results. 
> 
> Kind regards
> Seref
> 
> On Wed, Oct 10, 2012 at 2:06 PM, Merlin Moncure 
> wrote:
> On Mon, Oct 8, 2012 at 4:59 PM, Seref Arikan
>  wrote:
> > Thanks Merlin,
> > I've  tried arrays but plpython does not support returning
> arrays of custom
> > db types (which is what I'd need to do)
> 
> 
> 
> hm -- yeah.  can your custom types be broken down into plain
> SQL types
> (that is, composite types?).  maybe stash the results in
> global
> variable and return it in two calls, or insert into into a
>  tempt
> table that drops on commit?
> 
> merlin
> 






Re: [GENERAL] Add a check an a array column

2012-09-08 Thread Bret Stern
On Sat, 2012-09-08 at 21:24 +0200, Andreas Kretschmer wrote:
> Joel Hoffman  wrote:
> 
> > More concisely, you can compare directly against all values of the array:
> > 
> > # create table i (i int[] check (0 <= ALL(i) AND 1023 >= ALL(i)));
> > # insert into i values (ARRAY[0,1,2,3,1023]);
> > # insert into i values (ARRAY[0,1,2,3,-1]);
> > ERROR:  new row for relation "i" violates check constraint "i_i_check"
> 
> Nice! Didn't know that with all()
> 

A better place for validation is in the front-end, before
adding/attempting to add data to the db (my opinion).
Nice to see there are always other ways though. 

> 
> Andreas
> -- 
> Really, I'm not out to destroy Microsoft. That will just be a completely
> unintentional side effect.  (Linus Torvalds)
> "If I was god, I would recompile penguin with --enable-fly."   (unknown)
> Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°
> 
> 






[GENERAL] libpq Windows 7 64

2012-05-31 Thread Bret Stern
Got a new fancy pants Dell T3500 Windows 7 Ultimate 64 bit machine.
PostgreSql 9.1 

Trying to port a 32-bit XP libpq console app with Codeblocks.

Seems like the libpq.lib isn't linking quite right as the PQ_Connectdb,
PQstatus etc lib functions are reported as undefined.

Anyone out there get Windows 7 (64) libpq working?

PGadmin works great on my machine. It uses libpq.

Any and all help appreciated.




[GENERAL] Installer Questions (NSIS)

2012-04-12 Thread Bret Stern
I'm starting to put together an install script using NSIS,
for our client application. 

For the Postgresql database, I've been using the installer from
postgresql.org, courtesy of Dave Page.


Being an anal programmer type, I may want to pursue
a little more control over how our application is
distributed.

Is there a list which discusses this, or should I test the waters
here when the time comes?

Bret Stern





Re: [GENERAL] Regarding NOTIFY

2012-04-09 Thread Bret Stern
On Mon, 2012-04-09 at 09:03 +, Jasen Betts wrote:
> On 2012-03-09, Bret Stern  wrote:
> > We have a concrete batching application composed of two parts.
> > 1. The Monitor. 
> > The Monitor cycles every 60 seconds, and looks into a Postgresql table
> > for jobs to run. Primarily these jobs update Postgresql tables with
> > data from external applications.
> >
> > 2. The Client.
> > The client schedules orders etc.
> >
> > When a new product or customer is added to the Accounting or Batching
> > Controller (both external applications; and databases) the Client user
> > clicks a button and adds a job to run on the Monitor.
> >
> > Is it possible use the NOTIFY event to serve more like an interrupt,
> > and trigger the Monitor to run immediately.
> 
> Reasonably immediately.
> 
> > Can it be used with VB? or Should I use LibPQ?
> 
> I used libpq with VB6 when I needed this feature 3 years ago.
> 
> I had to write a little DLL to wrap the libpq calls so that VB could
> call them (actually I repackaged the wrapper that the main application uses)
> 
> > If this is the wrong list for these questions, let me know?
> 
> what language are you really using? VB is kind of vague it could mean
> .net (lots of people say VB when they mean .net, makes it hard to get
> questions about VB answered) 
> 
Was referring to VB6 (I still prefer the IDE), but I can write in most
languages (Code::Blocks for the libpq project).

> -- 
> ⚂⚃ 100% natural
> 
> 

Thanks for the comments. 





Re: [GENERAL] Move Tables From One Database to Another

2012-03-29 Thread Bret Stern
On Thu, 2012-03-29 at 14:49 -0500, Andy Colson wrote:
> On 3/29/2012 2:10 PM, Rich Shepard wrote:
> > I'm storing vector map attribute data in postgres tables and somehow
> > managed to create two databases (of similar names) rather than one. I want
> > to combine the two.
> >
> > For tables that exist in the one database I want to eliminate, I thought
> > to use pg_dump to create .sql files, then use pg_restore to add the
> > table to
> > the other database. Did this for one table (with 4201 rows), but
> > 'pg_restore
> > -d database_name -t table_name' appears to not complete; it seems to have
> > hung up somewhere. While I see nothing specific in the output file or the
> > pg_restore man page this must not be the proper approach.
> >
> > Also, I need suggestions on how to combine tables that exist in both
> > databases by adding rows from the source database not in the target
> > database
> > and modifying rows that differ.
> >
> > As I'm not a professional or full-time DBA I'm probably missing really
> > simple syntax and approaches. Your advice will be appreciated.
> >
> > Rich
> >
> >
> 
> 
> How many tables are we talking about.  If its a few tables, I'd rename them:
> 
> alter table lake rename to lake_old;
> ... etc
> 
> then dump it out and restore into the proper db.
> 
> The proper db will now have to tables, lake and lake_old, which you can 
> selective update some rows:
> 
> update lake
>set foo = (select foo from lake_old where lake_old.id = lake.id)
>where exists (select foo from lake_old where lake_old.id = lake.id);
> 
> !! The were exists is very important !!
> 
> and insert missing:
> 
> insert into lake
> select * from lake_old
> where not exists (select id from lake where lake.id = lake_old.id);
> 
>  > to use pg_dump to create .sql files, then use pg_restore to add the
>  > table to 
>  > it seems to have
>  > hung up somewhere.
> 
> 
> I wonder if a table was in use and pg_restore blocked on the drop table? 
>   If you don't mind replacing the entire table, this method should work. 
>   But if you want to merge the two tables, I would not go this route.
> 
> if you try the restore again, you can do:
> 
> ps ax|grep postg
> and see what statement its running.  You can also do:
> 
> select * from pg_locks where not granted;
> 
> and see if anything is blocked.
> 
> -Andy
> 

Good info. I think I'll plagiarize this thinking if you don't mind.
Thanks for the broad explanation.





Re: [GENERAL] Backing up through a database connection (not pg_dump)

2012-03-26 Thread Bret Stern
On Mon, 2012-03-26 at 16:16 -0700, John R Pierce wrote:
> On 03/26/12 4:05 PM, Tim Uckun wrote:
> > Is there a way to backup a database or a cluster though a database
> > connection?  I mean I want to write some code that connects to the
> > database remotely and then issues a backup command like it would issue
> > any other SQL command. I realize the backups would need to reside on
> > the database server.
> 
> there is no backup command in postgres SQL,   you could enumerate the 
> tables, and use /COPY tablename TO filepath;/ on each table, these files 
> would have to be in a directory writable by the postgres server process.
> 
> alternately, you could open a shell session on the dbserver and run 
> pg_dump there.frankly, this would be preferable.
> 
> 
> 
> 
> 
> -- 
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
> 
> 

pg_dump includes more than just the tables though, doesn't it? Meaning,
pg_dump includes stored procedures, views, table structure, etc.?

COPY won't serve as a backup replacement.

I'd go with John's recommendation of pg_dump

Also, pg_dump can write to remote servers (windows example below)

We're also using libpq to trigger backups using NOTIFY from a client
app. 


Here's a .bat file I run on a Windows 2003 box

rem for windows date stamped archive

@echo off
   for /f "tokens=1-4 delims=/ " %%i in ("%date%") do (
 set dow=%%i
 set month=%%j
 set day=%%k
 set year=%%l
   )
   set datestr=%month%_%day%_%year%
   echo datestr is %datestr%

   set BACKUP_FILE=SKYLINE_%datestr%.backup
   echo backup file name is %BACKUP_FILE%
   SET PGPASSWORD=your_password
   echo on
   C:\"Program Files"\"PostgreSQL"\8.3\bin\pg_dump -i -h batchfarm -p
5432 -U postgres -F c -b -v -f %BACKUP_FILE% SKYLINE
   C:\"Program Files"\"PostgreSQL"\8.3\bin\pg_dump -i -h batchfarm -p
5432 -U postgres -F c -b -v -f \\Acrm-backup\data\Library\Backup\Skyline
\BackupDB\%BACKUP_FILE% SKYLINE

   SET PGPASSWORD=















Re: [GENERAL] PostgreSQL 64 Bit XIDs - Transaction IDs

2012-03-23 Thread Bret Stern

VoltDB maybe
- Original Message -
On 06:31 AM 03/23/2012 Frank Lanitz wrote:

Am 23.03.2012 14:23, schrieb Adrian Klaver:
>  I would say either they got the numbers wrong or someone is pulling
> your leg. That rate is not going to happen.

Maybe twitter or facebook all in all...

Cheers,
Frank






Re: [GENERAL] Backups

2012-03-15 Thread Bret Stern
Perhaps a RAM disk could be considered in the equation.



On Thu, 2012-03-15 at 16:30 +0100, Albe Laurenz wrote:
> Richard Harley wrote:
> > Very simple question -  does pg_dump/dumpall hit the server in terms
> of database performance? We
> > currently do nightly backups and I want to move to hourly backups but
> not at the expense of hogging
> > all the resources for 5 mins.
> > 
> > Pg_dumpall is currently producing a 1GB file - that's the combined
> size of around 30 databases and it
> > takes around 5 mins to run.
> 
> pg_dump will be a performance hit, because it consumes disk I/O
> capacity.
> You could measure how the system is affected by your current backup.
> 
> On the other hand, instead of doing an hourly pg_dump, it might be
> preferable to do a filesystem backup and PITR. That way you have to
> do a backup only once a day (or week, depends how much traffic you have
> and how fast you have to restore) and can still recover to an
> arbitrary point in time.
> 
> Yours,
> Laurenz Albe
> 





Re: [GENERAL] Upgrade questions

2012-03-14 Thread Bret Stern
I felt pretty good about my server until I read this.
On Wed, 2012-03-14 at 00:24 -0700, John R Pierce wrote:
> On 03/13/12 8:41 PM, Carson Gross wrote:
> > Does anyone have a reasonable guess as to the inserts per second 
> > postgres is capable of these days on middle-of-the-road hardware?  Any 
> > order of magnitude would be fine: 10, 100, 1000, 10,000.
> 
> my dedicated database server in my lab, which is a 2U dual Xeon X5660 
> box with 12 cores at 2.8ghz, 48GB ram, and 20 15000rpm SAS drives in a 
> RAID10 with a 1GB flash-cached raid card, can pretty easily sustain 6000 
> or more writes/second given enough threads doing the work, although 
> indexes, and/or large rows would slow that down.a single 
> connection/thread will not get that much throughput.
> 
> thats my definition of a middle of the road database server.  I have no 
> idea what yours is.
> 
> 
> -- 
> john r pierceN 37, W 122
> santa cruz ca mid-left coast
> 
> 





Re: [GENERAL] Calculated update

2012-03-12 Thread Bret Stern
On Mon, 2012-03-12 at 17:39 -0700, Bosco Rama wrote:
> Bret Stern wrote:
> > 
> > trying to update a varchar numeric string column
> > by converting it to int, adding a numeric value and insert it back
> > as a varchar
> > 
> > Having trouble with cast
> 
> I assume you are doing an update as opposed to an insert.  You use
> both above (and both numeric and int as well).  Anyway, try:
> 
> update tbl set col = (col::int + 1);
> 
> or some variation thereof.  That should do the trick.
> 
> Bosco.

update open_orderheader
set order_id = (order_id::int + 3000)
where module_id='aggregate'

worked as advertised.
thanks for the help guys.





[GENERAL] Calculated update

2012-03-12 Thread Bret Stern

I'm trying to update a varchar column holding a numeric string by
converting it to int, adding a numeric value, and inserting it back
as a varchar.

I'm having trouble with the cast.




[GENERAL] Regarding NOTIFY

2012-03-09 Thread Bret Stern
We have a concrete batching application composed of two parts.
1. The Monitor. 
The Monitor cycles every 60 seconds, and looks into a Postgresql table
for jobs to run. Primarily these jobs update Postgresql tables with
data from external applications.

2. The Client.
The client schedules orders etc.

When a new product or customer is added to the Accounting or Batching
Controller (both external applications; and databases) the Client user
clicks a button and adds a job to run on the Monitor.

Is it possible to use the NOTIFY event more like an interrupt,
triggering the Monitor to run immediately?

Can it be used with VB?
or
Should I use LibPQ?

Any suggestions welcome.


If this is the wrong list for these questions, let me know?

Bret Stern




 




Re: [GENERAL] Show Databases via ODBC

2012-03-07 Thread Bret Stern
Works perfectly. I'll trap and show only our databases to the user.
Thanks, all.
On Wed, 2012-03-07 at 21:00 +0100, Szymon Guz wrote:
> On 7 March 2012 20:36, Bret Stern 
> wrote:
> Is it possible through ODBC to connect to a PostgreSql
> server and query for the available databases?
> 
> When our application upgrades, we typically create a
> new database. I want to design a form which allows
> the user to select the old database, which then
> migrates data to the new (currently connected)
> database.
> 
> Regards
> 
> 
> 
> 
> Hi,
> I think this query would be helpful: 
> 
> 
>   select datname from pg_database;
> 
> 
> regards
> Szymon





[GENERAL] Show Databases via ODBC

2012-03-07 Thread Bret Stern
Is it possible through ODBC to connect to a PostgreSql
server and query for the available databases?

When our application upgrades, we typically create a
new database. I want to design a form which allows
the user to select the old database, which then
migrates data to the new (currently connected)
database.

Regards




Re: [GENERAL] Looking for an intro-to-SQL book which is PostgreSQL-friendly

2011-09-02 Thread Bret Fledderjohn
SQL for Dummies is pretty agnostic. Follow that up with SQL Cookbook from
O'Reilly and you have a good one-two punch!

On 2 September 2011 19:48, Josh Berkus  wrote:

> All,
>
> I'm looking for an intro-to-SQL book for teaching a class, one aimed at
> folks who know *nothing* about RDBMSes, which is not based on MySQL or
> MSAccess.  The ones I have on my desk are all based on one or the other,
> except The Manga Guide to Databases, which I can't use in a serious class.
>
> The PostgreSQL books I've seen all make the assumption that the reader
> already knows what an RDBMS is and a little SQL.  The sole exception to
> this may be Beginning Databases with PostgreSQL from Apress, but that
> book is somewhat out-of-date (last edition, 2005), and teaches some bad
> habits around keys.
>
> Does anyone have other suggestions?
>
> --
> Josh Berkus
> PostgreSQL Experts Inc.
> http://pgexperts.com
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>



-- 

- Bret

"Why should I fret in microcosmic bonds
That chafe the spirit, and the mind repress,
When through the clouds gleam beckoning beyonds
Where shining vistas mock man's littleness?"
 - H.P. Lovecraft, "Phaeton" (1918)


[GENERAL] Using libpq with Visual Studio 2008

2011-05-18 Thread Bret Stern
Is this list suited for coding questions?

I'm using VS2008, connecting to postgresql with libpq and
have some initial questions.
Regards
Bret Stern




Re: [GENERAL] Web Hosting

2011-03-05 Thread Bret Fledderjohn
On 5 March 2011 16:08, matty jones  wrote:

> I already have a domain name but I am looking for a hosting company that I
> can use PG with.  The few I have contacted have said that they support MySQL
> only and won't give me access to install what I need or they want way to
> much.  I don't need a dedicated host which so far seems the only way this
> will work, all the companies I have researched so far that offer shared
> hosting or virtual hosting only use MySQL.  I will take care of the setup
> and everything myself but I have already written my code using PG/PHP and I
> have no intention of switching.
>
> Thanks.
>
 I am using A2 Hosting (www.a2hosting.com ) which offers 8.4...  They are
inexpensive and so far reliable.


-- 

- Bret

"Why should I fret in microcosmic bonds
That chafe the spirit, and the mind repress,
When through the clouds gleam beckoning beyonds
Where shining vistas mock man's littleness?"
 - H.P. Lovecraft, "Phaeton" (1918)


Re: [GENERAL] Libpq memory leak

2010-09-24 Thread Bret S. Lambert
On Fri, Sep 24, 2010 at 06:11:31PM +0400, Dmitriy Igrishin wrote:
> Hey Vladimir,
> 
> 
> 
> 2010/9/24 Polyakov Vladimir 
> 
> > Program written in C using Libpq, which receives large files (BYTEA)
> > has a memory leak.
> > I need to free ALL of the used memory after each sql query.
> >
> > after each call PQclear() I drop the buffer:
> >conn->inBuffer = realloc(conn->inBuffer, 8192);
> >conn->inBufSize = 8192;


This is a known unsafe use of the realloc() function. If it fails to
allocate memory, you just lost conn->inBuffer, thus leaking memory
in your own code. Fix this first, and then see if you still have the
issue with memory leaks, because it's possible you're just leaking it
with a bad realloc() idiom.

> >
> > It works, but ..
> > I noticed that in some cases PQclear() does not clear the memory.
> > This happens only when the program receives certain files...
> >
> > Why do you need realloc() after PQclear()?
> 
> -- 
> Regards,
> Dmitriy



[GENERAL] macro/inline table valued functions

2010-09-21 Thread Bret Green
Is there anything like a macro or an inline table-valued function in Postgres?
I.e., I define a query as a function/macro and reuse it in queries, and the
DBMS expands the function/macro to its definition, thus avoiding any overhead.

If not, what is the closest thing?

Thanks



  

Re: [GENERAL] how to insert multiple rows and get the ids back in a temp table (pgplsql)?

2010-09-19 Thread Bret Green
Thanks
Any solution without cursors perhaps?





From: hubert depesz lubaczewski 
To: Bret Green 
Cc: pgsql-general@postgresql.org
Sent: Sun, September 19, 2010 7:12:51 AM
Subject: Re: [GENERAL] how to insert multiple rows and get the ids back in a 
temp table (pgplsql)?

On Sat, Sep 18, 2010 at 06:43:49PM -0700, Bret Green wrote:
> how can I do the following in plpgsql? 
> insert multiple rows in a table
> get the ids (serial) into a temp table (not the client)

for temprec in insert into table (x) select y from z returning id loop
insert into temp table (q) values (temprec.id);
end loop;

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



  

Re: [GENERAL] how to insert multiple rows and get the ids back in a temp table (pgplsql)?

2010-09-19 Thread Bret Green
No I do need it inside pgplsql. I need pgplsql for other stuff. 




From: Alban Hertroys 
To: Bret Green 
Cc: pgsql-general@postgresql.org
Sent: Sun, September 19, 2010 9:25:46 AM
Subject: Re: [GENERAL] how to insert multiple rows and get the ids back in a 
temp table (pgplsql)?

On 19 Sep 2010, at 3:43, Bret Green wrote:

> how can I do the following in plpgsql?
> 
> insert multiple rows in a table
> 
> get the ids (serial) into a temp table (not the client)
> 
>  
> for one row it will be like this
> 
> insert into mytable(mycolumn)values(123)returning id into some_variable;

You can do that for multiple rows just fine, you don't even need plpgsql for 
that:

development=> \d test
      Table "public.test"
 Column |  Type   | Modifiers 
--------+---------+-----------
 i      | integer | 

development=> INSERT INTO test (i) select x from generate_series(4, 10) t(x) returning i;
 i  
----
  4
  5
  6
  7
  8
  9
 10
(7 rows)

INSERT 0 7


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll see there is no forest.




  

[GENERAL] how to insert multiple rows and get the ids back in a temp table (pgplsql)?

2010-09-18 Thread Bret Green
how can I do the following in plpgsql? 
insert multiple rows in a table
get the ids (serial) into a temp table (not the client)
 
for one row it will be like this
insert into mytable(mycolumn)values(123)returning id into some_variable;

now for multiple rows (using insert select) it will be like

insert into mytable(mycolumn)
select other_column from other_table
returning id into ???


  

Re: [GENERAL] What Linux edition we should chose?

2010-06-01 Thread Bret S. Lambert
On Wed, Jun 02, 2010 at 01:32:44AM -0400, Greg Smith wrote:
> Nilesh Govindarajan wrote:
> >I run my site (see my signature) on a self managed VPS. I was using
> >the default PGSQL RPM from the fedora repository, the site was getting
> >way slow. So I compiled all the stuff apache, php and postgresql with
> >custom gcc flags, which improved performance like hell

And were the versions the same? If you're going to go to the
trouble of hand-compiling, I'm willing to bet that you went to
the trouble of finding more recent versions of the software.

That is not how you test things.

> 
> Without breaking down how much of that speed increase was from
> Apache, PHP, and PostgreSQL respectively, I'm not sure what the
> people who package PostgreSQL can really learn from your data here.
> Reports on improving PostgreSQL performance by tweaking optimizer
> flags haven't been very repeatable for others when they've popped up
> in the past, so for all we know the bulk of your gain came from
> Apache and PHP optimizations.

Not to mention that compiler optimizations increase the chance of
hitting a compiler bug. Getting the wrong answer fast is not an
improvement over the right answer slow.

> 
> -- 
> Greg Smith  2ndQuadrant US  Baltimore, MD
> PostgreSQL Training, Services and Support
> g...@2ndquadrant.com   www.2ndQuadrant.us
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Row ordering after CREATE TABLE AS...SELECT regexp_split_to_table(source_text, regexp) AS new_column

2010-02-23 Thread Bret S. Lambert
On Wed, Feb 24, 2010 at 07:51:54AM +0100, John Gage wrote:
> This is a two-part question:
> 
> 1) I have a source_text that I want to divide into smaller subunits
> that will be contained in rows in a column in a new table.  Is it
> absolutely certain that the initial order of the rows in the
> resultant table after this  operation:
> 
> CREATE TABLE new_table AS SELECT regexp_split_to_table(source_text,
> E'regexp') as subunits FROM source_table;
> 
> will be the same as the order of these subunits in the original
> text?  Emphasis *initial order*.

I'd put money on not; this is not what databases are designed for.

> 
> 2) I would like to be able to create a serial-type column during
> CREATE TABLE AS in the new table that "memorizes" this order so that
> I can reconstruct the original text using ORDER BY on that serial
> column.  However, I am stumped how to do that.  I do not see how to
> put the name of that column into my SELECT statement which generates
> the table, and I do not see where else to put it.  Please forgive my
> stupidity.

Pre- or append an increasing serial number to the data, and use that
as a column named "initial_order" or something else that will make
it clear to you and other users what it is, and then import.

But if you have the original data, in order, why do you need to be
able to reconstruct it from a database dump? It just looks like
adding a step to add a step, to me.

> 
> The "work-around" to this problem has been to ALTER my table after
> its creation with a new serial-type column.  But this assumes that
> the answer to Question 1) above is always "Yes".
> 
> Thanking you for your understanding,
> 
> John
> 
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Multiple buffer cache?

2010-02-06 Thread Bret S. Lambert
On Sat, Feb 06, 2010 at 02:44:32PM +0100, Cédric Villemain wrote:
> 2010/2/6 Bret S. Lambert :
> > On Fri, Feb 05, 2010 at 11:41:13PM +0200, Alexei Vladishev wrote:
> >> Greetings,
> >>
> >> Is there a way of configuring PostgreSQL so that one specific table would
> >> use, say, 4GB of buffer cache while other tables would use the rest?
> >>
> >> I would like to keep the table and its indexes always in "hot"
> >> state, so that
> >> other queries won't pollute this part of the buffer cache. It would ensure
> >> reliable performance and much less disk IOPS working with the table.
> >
> > Fiddling with the buffer cache like that would require some sort of
> > OS support, if I'm not mistaken in what you're asking for.
> >
> > And then, even if the support is there, you'd need to outline exactly
> > how you're planning on pushing this button.
> >
> > Specifically, what's your usage pattern that would make this a
> > win for you?
> >
> > If the table and its indexes can already fit into the buffer cache,
> > and it's as commonly accessed as you think it is, the OS should
> > probably have it cached anyway.
> 
> that's all true.
> 
> I am working on pgfincore which allow in some way to prepare buffer cache.
> You need pg > 8.4 and linux (probably working with bsd too)

Why do something with a non-portable interface? Most OSes support
coherently mmap(2)'ing disk blocks into memory; in fact, I'm somewhat
taken aback that the postgres buffer cache isn't implemented in that
manner, but I'm willing to give the devs credit for having not done
so for good reasons.

> 
> I don't consider it ready for production, but fine for debugging
> things, if you reall care the buffer cache preload, tell me, I'll
> stabilize the code in a shorter time ;)
> http://villemain.org/projects/pgfincore
> 
> >
> >>
> >> Is it possible?
> >>
> >> Thanks for any hints!
> >>
> >> Alexei
> >>
> >> --
> >> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> >> To make changes to your subscription:
> >> http://www.postgresql.org/mailpref/pgsql-general
> >
> > --
> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> > To make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> >
> 
> 
> 
> -- 
> Cédric Villemain



Re: [GENERAL] Multiple buffer cache?

2010-02-06 Thread Bret S. Lambert
On Sat, Feb 06, 2010 at 03:46:58PM +0200, Alexei Vladishev wrote:
> Bret,
> 
> Thank you for your response!
> 
> >>Greetings,
> >>
> >>Is there a way of configuring PostgreSQL so that one specific table would
> >>use, say, 4GB of buffer cache while other tables would use the rest?
> >>
> >>I would like to keep the table and its indexes always in "hot"
> >>state, so that
> >>other queries won't pollute this part of the buffer cache. It would ensure
> >>reliable performance and much less disk IOPS working with the table.
> >
> >Fiddling with the buffer cache like that would require some sort of
> >OS support, if I'm not mistaken in what you're asking for.
> I am talking about PostgreSQL buffer cache not OS level. I believe
> it has nothing to do with
> OS support.

Well, kinda; but I'd been spending too much time doing admin, so I'd
completely spaced on Postgres terms when you used "buffer cache", so
sorry for the mixup.

> 
> It would be great to have support of multiple cache buffers assigned
> to different set of tables.
> Having this implemented, I would assign frequently accessed
> configuration tables (selects
> and updates) to one buffer and historical tables (lots of insert
> operations) to another buffer, so
> the sets would use independent buffers and won't affect each other.

Fair enough.

> 
> >And then, even if the support is there, you'd need to outline exactly
> >how you're planning on pushing this button.
> >
> >Specifically, what's your usage pattern that would make this a
> >win for you?
> Let me explain. I have a very busy application generating thousands
> of SQLs per second.
> There is an application level cache built into the application already.
> 
> The important part is that once per hour the application writes
> collected data to huge historical
> tables (100M up-to billions of records, partitioned). Since it
> happens every hour database buffer
> cache is already overwritten by data and indexes of other tables, so
> the write operation is very
> slow and requires huge amount of disk seeks causing 50-100x drop of
> performance.

The disk seeks will happen regardless of what Postgres does, as the
OS pulls in new disk blocks to perform the write. If your OS' buffer
cache is large enough to hold all the data you need, then your
best bet is likely partitioning data across multiple disks, so that
queuing the archive reads doesn't get in the way of production reads.

As I'm a unix admin mostly, I'm not qualified to give advice on whether
or not that's possible, or how to do it if it is ;)

> 
> So, my idea is to assign a separate buffer cache for the historical
> tables. It would guarantee that
> index data is always cached, so the write operation will be very fast.
> 
> Is it possible? Is there any other techniques available?

If it were at all possible, I'd actually set up a secondary archiving
server (unless you need the historical data on tap for the production
system as well), either on another port on the same machine, or on
another machine which won't impact your production system if it has to
suddenly do a bunch of disk I/O, and log the history to that.

> 
> >If the table and its indexes can already fit into the buffer cache,
> >and it's as commonly accessed as you think it is, the OS should
> >probably have it cached anyway.
> I see what you are saying but the problem is that it is normally
> accessed once per hour only.
> 
> Any thoughts?
> 
> Alexei



Re: [GENERAL] Multiple buffer cache?

2010-02-06 Thread Bret S. Lambert
On Fri, Feb 05, 2010 at 11:41:13PM +0200, Alexei Vladishev wrote:
> Greetings,
> 
> Is there a way of configuring PostgreSQL so that one specific table would
> use, say, 4GB of buffer cache while other tables would use the rest?
> 
> I would like to keep the table and its indexes always in "hot"
> state, so that
> other queries won't pollute this part of the buffer cache. It would ensure
> reliable performance and much less disk IOPS working with the table.

Fiddling with the buffer cache like that would require some sort of
OS support, if I'm not mistaken in what you're asking for.

And then, even if the support is there, you'd need to outline exactly
how you're planning on pushing this button.

Specifically, what's your usage pattern that would make this a
win for you?

If the table and its indexes can already fit into the buffer cache,
and it's as commonly accessed as you think it is, the OS should
probably have it cached anyway.

> 
> Is it possible?
> 
> Thanks for any hints!
> 
> Alexei
> 
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] data dump help

2010-01-18 Thread Bret S. Lambert
On Tue, Jan 19, 2010 at 06:35:10PM +1300, Andrej wrote:
> 2010/1/19 Bret S. Lambert :
> 
> > Isn't this just over-engineering? Why not let the database do
> > the work, and add the column with a default value of 0, so that
> > you don't have to modify whatever 3rd-party app dumps the data:
> 
> But what if his third-party software does something silly like a "select *"
> on the table and then gets a hissy fit because the data doesn't match
> the expectations any longer?

He said his app logs there, so I kind of assumed that it's write-only
as far as the app is concerned. If not, then, yes, there could be
issues.

But why not keep things as simple as possible?



Re: [GENERAL] data dump help

2010-01-18 Thread Bret S. Lambert
On Mon, Jan 18, 2010 at 05:49:32PM -0600, Terry wrote:
> On Mon, Jan 18, 2010 at 5:07 PM, Terry  wrote:
> > On Mon, Jan 18, 2010 at 4:48 PM, Andy Colson  wrote:
> >> On 1/18/2010 4:08 PM, Terry wrote:
> >>>
> >>> Hello,
> >>>
> >>> Sorry for the poor subject. ?Not sure how to describe what I need
> >>> here. ?I have an application that logs to a single table in pgsql.
> >>> In order for me to get into our log management, I need to dump it out
> >>> to a file on a periodic basis to get new logs. ?I am not sure how to
> >>> tackle this. ?I thought about doing a date calculation and just
> >>> grabbing the previous 6 hours of logs and writing that to a new log
> >>> file and setting up a rotation like that. ?Unfortunately, the log
> >>> management solution can't go into pgsql directly. ?Thoughts?
> >>>
> >>> Thanks!
> >>>
> >>
> >> How about a flag in the db, like: dumped.
> >>
> >> inside one transactions you'd be safe doing:
> >>
> >> begin
> >> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
> >> select * from log where dumped = 0;
> >> -- app code to format/write/etc
> >> update log set dumped = 1 where dumped = 0;
> >> commit;
> >>
> >> Even if other transactions insert new records, your existing transaction
> >> won't see them, and the update won't touch them.
> >>
> >> -Andy
> >>
> >
> > I like your thinking but I shouldn't add a new column to this
> > database.  It's a 3rd-party application.
> >
> 
> Although.  I really like your idea so I might create another table
> where I will log whether the data has been dumped or not.  I just need
> to come up with a query to check this with the other table.

Isn't this just over-engineering? Why not let the database do
the work, and add the column with a default value of 0, so that
you don't have to modify whatever 3rd-party app dumps the data:

ALTER TABLE log ADD COLUMN dumped boolean DEFAULT FALSE

(I don't do much ALTER TABLE, so that syntax may be all foobar'ed)

- Bret



Re: [GENERAL] code example for PQgetCopyData

2009-12-03 Thread Bret
Looks like fun.
 
Tom added some comments on the PQgetCopyData function.
 
If your environment allows, put a breakpoint on the line below and look
at the vars:
 
while (nLen = PQgetCopyData(pConn, buffer, false) > 0)
 
Perhaps this will get you working:
while (nLen = PQgetCopyData(pConn, &buffer, false) > 0)
 
 
here's the only sample I found
 
 while (!copydone)
  {
! ret = PQgetCopyData(g_conn, &copybuf, false);
! switch (ret) {
! case -1:
! copydone = true;
! break;
! case 0:
! case -2:
! write_msg(NULL, "SQL command to dump the contents of table \"%s\" failed:
PQgetCopyData() failed.\n", classname);
! write_msg(NULL, "Error message from server: %s", PQerrorMessage(g_conn));
! write_msg(NULL, "The command was: %s\n", q->data);
! exit_nicely();
! break;
! default:
! archputs(copybuf, fout);
! PQfreemem(copybuf);
! break;
  }



  _  

From: Dave Huber [mailto:dhu...@letourneautechnologies.com] 
Sent: Thursday, December 03, 2009 11:35 AM
To: 'bret_st...@machinemanagement.com'; pgsql-general@postgresql.org
Subject: RE: [GENERAL] code example for PQgetCopyData



 

> Where is it blowing up?

 

I'm sorry, I wasn't clear. It bombs on the PQgetCopyData call. If I comment
out the entire while loop, the program runs fine. If I simply comment out
the contents of the while loop... kablooey!

 

Dave


  _  

This electronic mail message is intended exclusively for the individual(s)
or entity to which it is addressed. This message, together with any
attachment, is confidential and may contain privileged information. Any
unauthorized review, use, printing, retaining, copying, disclosure or
distribution is strictly prohibited. If you have received this message in
error, please immediately advise the sender by reply email message to the
sender and delete all copies of this message.
THIS E-MAIL IS NOT AN OFFER OR ACCEPTANCE: Notwithstanding the Uniform
Electronic Transactions Act or any other law of similar import, absent an
express statement to the contrary contained in this e-mail, neither this
e-mail nor any attachments are an offer or acceptance to enter into a
contract, and are not intended to bind the sender, LeTourneau Technologies,
Inc., or any of its subsidiaries, affiliates, or any other person or entity.
WARNING: Although the company has taken reasonable precautions to ensure no
viruses are present in this email, the company cannot accept responsibility
for any loss or damage arising from the use of this email or attachments.

 
 



Re: [GENERAL] code example for PQgetCopyData

2009-12-03 Thread Bret
 



  _  

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Dave Huber
Sent: Thursday, December 03, 2009 9:18 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] code example for PQgetCopyData



Does anybody have a snippet where they use PQgetCopyData? I must be calling
it wrong as it keep crashing my program. I've attached my code below. I am
writing this for a Code Interface Node in LabVIEW.

 

Thanks,

Dave

 

MgErr CINRun(LStrHandle conninfo, LStrHandle copystr, TD1Hdl resultValues) {
  MgErr err = noErr;
  PGconn *pConn;
  PGresult* pResult = NULL;
  char* szCopyStr = NULL;
  char* errormsg = NULL;
  char** buffer = NULL;   // for retrieving the data
  int nLen;               // length of returned data

  // connect to the database
  char* szConnInfo = new char[LHStrLen(conninfo)+1];
  LToCStr((*(conninfo)), (CStr)szConnInfo);
  pConn = PQconnectdb(szConnInfo);
  delete [] szConnInfo;

  // check for errors connecting to database
  if (PQstatus(pConn) != CONNECTION_OK)
  {
    DbgPrintf("Connection to database failed: %s", PQerrorMessage(pConn));
  }
  else
  {
    // start the copy command
    szCopyStr = new char[LHStrLen(copystr)+1];
    LToCStr((*(copystr)), (CStr)szCopyStr);

    pResult = PQexec(pConn, szCopyStr);
    delete [] szCopyStr;

    // get the data
    int i = 0;
    while (nLen = PQgetCopyData(pConn, buffer, false) > 0)
    {
      if (err = SetCINArraySize((UHandle)resultValues, 2, ++i))
        goto out;
      if (err = NumericArrayResize(uB, 1L, (UHandle*)(&(*resultValues)->elt[i-1]), nLen-1))
        goto out;
      LStrLen(*(*resultValues)->elt[i-1]) = nLen-1;                     // set the LabVIEW string size
      MoveBlock(*buffer, LStrBuf(*(*resultValues)->elt[i-1]), nLen-1);  // copy the data to a new string

      PQfreemem(*buffer);   // free the memory from getCopy
    }
    (*resultValues)->dimSize = i;

out:
    PQclear(pResult);

    // see if there were errors
    if (nLen == -2)
    {
      DbgPrintf("Copy Out failed: %s", PQerrorMessage(pConn));
    }
  }
  // close the connection
  PQfinish(pConn);

  return err;
}

 
Where is it blowing up?
 
 






Re: [GENERAL] optimizing advice

2009-12-01 Thread Bret
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Scott Marlowe
> Sent: Tuesday, December 01, 2009 2:10 PM
> To: r.soeren...@mpic.de
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] optimizing advice
>
> 2009/12/1 Rüdiger Sörensen :
> > dear all,
> >
> > I am building a database that will be really huge and grow
> rapidly. It
> > holds data from satellite observations. Data is imported
> via a java application.
> > The import is organized via files, that are parsed by the
> application;
> > each file hods the data of one orbit of the satellite.
> > One of the tables will grow by about 40,000 rows per orbit,
> there are
> > roughly 13 orbits a day. The import of one day (13 orbits) into the
> > database takes 10 minutes at the moment. I will have to import data
> > back to the year 2000 or even older.
> > I think that there will be a performance issue when the table under
> > question grows, so I partitioned it using a timestamp
> column and one
> > child table per quarter. Unfortunately, the import of 13 orbits now
> > takes 1 hour instead of 10 minutes as before.  I can live
> with that,
> > if the import time will not grow sigificantly as the table
> grows further.
>
> I'm gonna guess you're using rules instead of triggers for
> partitioning?  Switching to triggers is a big help if you've
> got a large amount of data to import / store.  If you need
> some help on writing the triggers shout back, I had to do
> this to our stats db this summer and it's been much faster
> with triggers.
>
> --
> Sent via pgsql-general mailing list
> (pgsql-general@postgresql.org) To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

189,800,000 records per year..
Hope they are short records.
Not knowing what the report target is, perhaps break orbits
into separate servers (or at least DBs) by month or year, then
query to build your research data on another server..

Steve..how does this compare to the stats db??
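For reference, Scott's trigger-based routing looks roughly like this on an 8.x inheritance setup (table, column, and date ranges are illustrative only):

```sql
-- Parent table and one quarterly child (names are examples).
CREATE TABLE observations (obs_time timestamptz NOT NULL, payload text);
CREATE TABLE observations_2009q4 (
    CHECK (obs_time >= '2009-10-01' AND obs_time < '2010-01-01')
) INHERITS (observations);

-- Router trigger: send each row to its quarter's child table.
CREATE OR REPLACE FUNCTION observations_insert_trigger()
RETURNS trigger AS $$
BEGIN
    IF NEW.obs_time >= '2009-10-01' AND NEW.obs_time < '2010-01-01' THEN
        INSERT INTO observations_2009q4 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'no partition for %', NEW.obs_time;
    END IF;
    RETURN NULL;  -- the row has already been stored in the child
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER insert_observations
    BEFORE INSERT ON observations
    FOR EACH ROW EXECUTE PROCEDURE observations_insert_trigger();
```

Unlike a rewrite rule, the trigger evaluates once per row with the row already formed, which is where the bulk-load speedup tends to come from.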






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Synchronize filenames in table with filesystem

2009-12-01 Thread Bret
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Steve Atkins
> Sent: Tuesday, December 01, 2009 9:44 AM
> To: pgsql-general General
> Subject: Re: [GENERAL] Synchronize filenames in table with filesystem
>
>
> On Dec 1, 2009, at 9:19 AM, Ludwig Kniprath wrote:
>
> > Hi List,
> > not another question on how to store files (db or
> filesystem), i decided to use the filesystem.
> >
> > I'm now searching for a trigger, that deletes the physical
> file when deleting a database-record containing the filename
> in one of its fields. Is there a sample somewhere how this
> could be done? I'm runnig PG 8.4 on a windows machine.
>
> I've done that by having the trigger put the name of the file
> to be deleted in a "to be deleted" table. Then an external
> process polls that table and deletes any file it finds in
> there (using listen/notify if you need that to happen
> immediately, but just polling works fine if it's just garbage
> collection).
>
> That has the advantage of not deleting files until the
> transaction commits too.
>
> Cheers,
>   Steve
>
>
> --
> Sent via pgsql-general mailing list
> (pgsql-general@postgresql.org) To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

I'm with Ludwig..

Better to have the database perform its primary function, and stay away
from OS chores.
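For reference, Steve's queue approach keeps the database out of filesystem work; a sketch (the "documents" table and its "filename" column are illustrative assumptions):

```sql
-- Queue table polled by an external cleanup process.
CREATE TABLE files_to_delete (
    filename  text NOT NULL,
    queued_at timestamptz NOT NULL DEFAULT now()
);

-- On row deletion, record the file name instead of touching the OS.
CREATE OR REPLACE FUNCTION queue_file_delete() RETURNS trigger AS $$
BEGIN
    INSERT INTO files_to_delete (filename) VALUES (OLD.filename);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER documents_delete_file
    AFTER DELETE ON documents
    FOR EACH ROW EXECUTE PROCEDURE queue_file_delete();
```

Because the queue row is written inside the same transaction, a rolled-back DELETE never queues a file for removal.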






Re: [GENERAL] Allowing for longer table names (>64 characters)

2009-11-20 Thread Bret
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Allan Kamau
> Sent: Friday, November 20, 2009 1:42 AM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Allowing for longer table names (>64
> characters)
>
> On Fri, Nov 20, 2009 at 11:21 AM, A. Kretschmer
>  wrote:
> > In response to Allan Kamau :
> >> Hi all,
> >> I would like to increase the database objects names limit from 64
> >> characters to may be 128 characters to avoid name conflicts after
> >> truncation of long table/sequence names.
> >> I have seen a solution to this sometime back which
> includes (building
> >> from source) modifying a header file then recompiling, but I now
> >> cannot find this information.
> >
> > In the source-tree, src/include/pg_config_manual.h , change
> NAMEDATALEN.
> > But i think it's a bad idea ... 64 characters are enough for me.
> >
> >
> > Andreas
> > --
> > Andreas Kretschmer
> > Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr:
> -> Header)
> > GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2
> 3172 0C99
> >
> > --
> > Sent via pgsql-general mailing list
> (pgsql-general@postgresql.org) To
> > make changes to your subscription:
> > http://www.postgresql.org/mailpref/pgsql-general
> >
>
> Thanks Andreas, I too agree it may not be a good idea to have
> long for various reasons including porting/upgrading issues
> and so on, as I have many tables, I seem to have been caught
> up in describing table functionality in the table name :-)
>
> Allan.
>
> --
> Sent via pgsql-general mailing list
> (pgsql-general@postgresql.org) To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general




I can't imagine naming a table like you describe. A "users" table
name is pretty clear.

I would love to see an example of this. You should probably
get "un-caught-up" in using this concept. Perhaps read about
an existing model (e.g. Hungarian notation) and tweak it to fit
your needs.

But hey..maybe you're on to a new thing.

Bret














[GENERAL] Numeric Type and VB/ODBC

2009-11-10 Thread Bret
This may not be the right group, if so, just let me know.

I have a table with a type [numeric].

When executing queries, I get the data, which happens to
be (6.5) in this case, but my VB6 function which traps
for nulls (below) returns a null.

If I change the column to type [real], there are no problems.


Function IsNothing(varToTest As Variant) As Integer
'  Tests for a "logical" nothing based on data type
'  Empty and Null = Nothing
'  Number = 0 is Nothing
'  Zero length string is Nothing
'  Date/Time is never Nothing

    IsNothing = True

    Select Case VarType(varToTest)
        Case vbEmpty
            Exit Function
        Case vbNull
            Exit Function
        Case vbBoolean
            If varToTest Then IsNothing = False
        Case vbByte, vbInteger, vbLong, vbSingle, vbDouble, vbCurrency
            If varToTest <> 0 Then IsNothing = False
        Case vbDate
            IsNothing = False
        Case vbString
            If (Len(varToTest) <> 0 And varToTest <> " ") Then IsNothing = False
    End Select

End Function
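Worth noting: the Select Case above has no branch for vbDecimal (VarType 14), which is likely how the ODBC/ADO layer surfaces a PostgreSQL numeric, so such values fall through and IsNothing stays True. One hedged workaround is to cast on the PostgreSQL side to a type the Variant handles natively (the "items" table and "qty" column below are illustrative):

```sql
-- Cast numeric to float8 (arrives as vbDouble) or text (vbString),
-- so the driver never hands back a Decimal variant.
SELECT qty::float8 AS qty FROM items;

-- When exact digits matter, keep it textual instead:
SELECT qty::text AS qty FROM items;
```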


Bret Stern





[GENERAL] Backups

2009-11-08 Thread Bret

I need to back up a production database every night
on FreeBSD 7.2, running Postgresql 8.3.

Are there any good backup tips I should be aware of?

Typically, I make a backup based on the current day,
and rotate the seven days in the backup file name
 (eg; sat_backup, sun_backup etc).
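For reference, the weekday rotation can be scripted around pg_dump like this (a sketch only; the database name "mydb" and the backup path are placeholders, and the dump command is printed here rather than executed):

```shell
#!/bin/sh
# Weekday-rotating backup name: sat_backup, sun_backup, ...
# Seven files rotate automatically because the name repeats weekly.
DAY=$(LC_ALL=C date +%a | tr '[:upper:]' '[:lower:]')   # mon ... sun
OUTFILE="/usr/local/backups/${DAY}_backup.dump"
CMD="pg_dump -Fc mydb -f $OUTFILE"
echo "$CMD"   # run the printed command from cron, e.g. nightly at 02:00
```

The -Fc (custom) format compresses and allows selective restore with pg_restore; a plain-SQL dump works equally well for the rotation scheme.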

Thanks for all the chatter.

Bret Stern






Re: [GENERAL] ubuntu packages for 8.4

2009-07-09 Thread Bret Fledderjohn
2009/7/9 Tim Uckun 

> I don't see any ubuntu packages for 8.4 in the default repositories.
>
> Does anybody know if they will be upgrading the postgresql package to
> 8.4 or creating a new package for it.
>
> I'd rather use the packages than to compile it myself. If anybody has
> an argument as to why I should compile it I am all ears.
>

I just forwarded your message to Martin Pitt; he's the package maintainer for
Postgres for Ubuntu (and Debian, I believe).  I don't know if this is
planned for Karmic Koala (to be released in October).

-- 
- Bret


[GENERAL] using regexp_matches and array manipulation

2008-07-18 Thread Bret Schuhmacher

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Can anyone give me an example of how to use regexp_matches and use the 
captured values?


For instance, if I have a delimited string "a,b,c" and I want to put 
each letter into a variable so I can subsequently use those variables in 
an insert statement, how would I do that?  I know regexp_matches returns 
a text array, but how do I assign the value to an array and then access 
those values?


leaselog=# select regexp_matches('a,b,c','(.*),(.*),(.*)');

 regexp_matches
----------------
 {a,b,c}
(1 row)


I've tried select into, but that just created a table and didn't put the 
values into an array variable.

leaselog=# select regexp_matches('a,b,c','(.*),(.*),(.*)') into foo;
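For reference, the captured values can be reached by subscripting the returned text[] directly, or by selecting into an array variable rather than a table (a sketch; "target" is a hypothetical three-column table):

```sql
-- Subscript the result directly in SQL (arrays are 1-based):
SELECT (regexp_matches('a,b,c', '(.*),(.*),(.*)'))[2];   -- 'b'

-- Inside PL/pgSQL, capture into a text[] variable, then index it:
CREATE OR REPLACE FUNCTION split_csv3(s text) RETURNS void AS $$
DECLARE
    parts text[];
BEGIN
    SELECT regexp_matches(s, '(.*),(.*),(.*)') INTO parts;
    -- "target" is a hypothetical three-column table
    INSERT INTO target VALUES (parts[1], parts[2], parts[3]);
END;
$$ LANGUAGE plpgsql;
```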


Thanks in advance!


Rgds,

Bret
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.7 (MingW32)

iD8DBQFIgT2T/PgQIGRJuUcRAvMGAJ9VRNfc5ZZsFtS2LG8VJgPNNnL1wwCfewlf
Jih6ReqSTj6Pp9Ya3B2uMn8=
=HbPn
-END PGP SIGNATURE-




[GENERAL] PG/Tcl and Expect?

2007-10-18 Thread Bret Schuhmacher
Does the PG/Tcl interface allow expect scripts?  I want to create a
stored procedure that ssh's to another server and runs a Perl script.
Expect could do it, but can I load the expect module from pgtcl?

 

Thanks,

 

Bret



 



Re: [GENERAL] Suggestions for Remote Procedure Calls from PG, please?

2007-10-17 Thread Bret Schuhmacher
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
Tom Lane wrote:
>
>
> You've almost figured out the big problem with anything like this;
> the trouble spot is the other way around.  What if you launch some
> remote operation, and it succeeds, and then later your own transaction
> rolls back for some unrelated reason?  Action FOO did happen in the
> external world, but there is no change in the state of the database
> --- which at the minimum probably means you'll try to do FOO again
> later.  Lather, rinse, repeat.
> . 

Thanks for the reply, Tom.  I was thinking I could have my remote
process send a message back to PG via XMLBlaster, too.  XMLBlaster is
a MOM-like message-queuing app that guarantees delivery to
subscribers. (www.xmlblaster.org).  The problem, as you stated,
though, is transactional integrity :-(.  Hmmm, I'll see about the
to-do queue idea.
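For reference, the to-do queue can be combined with LISTEN/NOTIFY so the worker wakes immediately yet never sees work from rolled-back transactions (all names below are illustrative, and NEW.id assumes the triggering table has an id column):

```sql
-- A job queue written by a trigger and drained by an external worker.
CREATE TABLE remote_jobs (
    job_id  serial PRIMARY KEY,
    command text NOT NULL,
    done    boolean NOT NULL DEFAULT false
);

CREATE OR REPLACE FUNCTION enqueue_remote_job() RETURNS trigger AS $$
BEGIN
    INSERT INTO remote_jobs (command)
        VALUES ('run_remote_report ' || NEW.id);
    NOTIFY remote_jobs;   -- delivered only if this transaction commits
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

Because NOTIFY is deferred to commit, this sidesteps the integrity problem Tom described: the external worker is only ever woken for work that actually happened.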

Thanks again for your time!

Bret



-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.7 (MingW32)
 
iD8DBQFHFtagIeMC5lK637kRAg56AJsF6eNlQWPdpjb8ufiO+xRqZTXymgCfdJFG
4igU9pCasxaVSGOxC0DBbHg=
=qKK2
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Suggestions for Remote Procedure Calls from PG, please?

2007-10-17 Thread Bret Schuhmacher
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
 
What's the best way to invoke a process on another server from a PG
trigger or procedure?  I was thinking of using pl/java to invoke a web
service on the other box...  Can pl/tcl run Expect scripts?  That'd be
an option, too.  Or I could use XMLBlaster to send a message to the
other box to start the other process, but that's an asynchronous call
and I can't be sure if the remote procedure ran properly.

Does anyone else invoke a process on a remote server?  How do you do it?

Thanks,

Bret
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.7 (MingW32)
 
iD8DBQFHFsVCIeMC5lK637kRAvBvAKCRAgAg3H7jK/efm8KBlUKUifKV0ACgmo07
3eLZT6pB2XI8uTS47fdYcSw=
=rXIJ
-END PGP SIGNATURE-


---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Status of pgaccess

2007-06-27 Thread Bret Busby


What is happening (or not happening) with pgaccess?

In upgrading a computer from Debian Linux 3.1 (sarge) to Debian Linux 
4.0 (etch), the package pgaccess is no longer available.


My wife has it installed on her Ubuntu system, which is the version 
before the current one. However, pgaccess is not functional there. When 
I searched for the package on the Debian web site, pgaccess was only 
found to be available for Debian 3.1 (also now named "oldstable"), with 
the version number 1:0.98.8.20030520-1. By context, I believe that that 
version works with PostgreSQL 7.x and not 8.x, which is apparently why 
pgaccess was not functional, as she had upgraded PostgreSQL on her 
system from 7.4 to 8.1.


Also, that version number of pgaccess, I believe, appears to indicate 
that it was released on 20 May 2003, which is, I believe, before the 
release of PostgreSQL 8.x .


On searching on the PostgreSQL web site for pgaccess, the page with the 
results includes the statement "Based on your search term, we recommend 
the following links:

* http://www.pgaccess.org/
"

At that web site, is the title "PgAccess (Redux)" , so I do not know 
whether it has been renamed "Redux", although searching on that name, 
in the Debian packages search engine, returns "not found for all suites, 
sections, and architectures".


Also on the web site at http://www.pgaccess.org , is stated "Last stable 
version is 0.98.7 , released on 27 January 2001. Read what's new. The 
first new release is expected later in June or July 2002.

"

So, it appears that that web site has not been updated since 2001 or 
2002. Most of the links on that web site home page, to information about 
pgaccess, are broken, and a stable version appears to have been 
released, after the last stable version that is mentioned on that web 
site.


On the web page at http://pgaccess.projects.postgresql.org/ is stated
"PgAccess at PgFoundry
nothing here yet "

From the Debian website webpage for information about the pgaccess 
package, at http://packages.qa.debian.org/p/pgaccess.html , is a link to 
the web page at 
http://packages.qa.debian.org/p/pgaccess/news/20060816T210827Z.html , 
which states that, as at 16 August 2006, the package was removed from 
the Debian "testing" distribution, which was then "etch", which is not 
the Debian "stable" distribution.


Has any work been done on pgaccess, since 2003 (the assumed date of the 
latest Debian release of the application package), and, does a version 
exist, that is compatible with PostgreSQL 8.x?


Thank you in anticipation.

--
Bret Busby
Armadale
West Australia
..

"So once you do know what the question actually is,
 you'll know what the answer means."
- Deep Thought,
  Chapter 28 of Book 1 of
  "The Hitchhiker's Guide to the Galaxy:
  A Trilogy In Four Parts",
  written by Douglas Adams,
  published by Pan Books, 1992



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] Returning multiple rows from a function?

2006-11-27 Thread Bret Schuhmacher

Hi all,

I'm trying to return multiple rows from a function, but all I can get 
with the code below is the first row.  I got most of the function below 
off the net and I think the problem is the first "RETURN" statement, 
which stops the loop.


CREATE OR replace function getOnCallVol() RETURNS RECORD AS $$
DECLARE
   r RECORD;  
BEGIN

   FOR r IN
   select fname,lname,phone1,phone2,phone3,phone4,phone5
   from events e,volunteer v
   where (now() >= starttime and now()<=endtime and e.v_id = 
v.v_id)

   OR (fname='Backup') limit 2

   LOOP
   return r;
   END LOOP;
  
   RETURN null;


END;
$$ Language plpgsql;

When I run the SQL alone, I get two rows, as I should:
Mary    Smith    11    22    33
Backup    Cellphone    3319993

However, if I run it via the function (i.e. select getOnCallVol()), I 
get this:

(Mary,Smith,11,22,33,"","")

Is there another way to get each row returned?  I played around with 
making the function return a "SETOF RECORD" and using "RETURN NEXT", but 
had no luck. 
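For reference, the SETOF-plus-RETURN NEXT route does work once the function is declared SETOF record and the caller supplies a column definition list; a sketch against the same tables (column types assumed to be text):

```sql
CREATE OR REPLACE FUNCTION getOnCallVol()
RETURNS SETOF record AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN
        SELECT fname, lname, phone1, phone2, phone3, phone4, phone5
          FROM events e, volunteer v
         WHERE (now() >= starttime AND now() <= endtime
                AND e.v_id = v.v_id)
            OR fname = 'Backup'
         LIMIT 2
    LOOP
        RETURN NEXT r;   -- emit this row and keep looping
    END LOOP;
    RETURN;              -- end of result set
END;
$$ LANGUAGE plpgsql;

-- SETOF record callers must name the output columns:
SELECT * FROM getOnCallVol()
    AS t(fname text, lname text, phone1 text, phone2 text,
         phone3 text, phone4 text, phone5 text);
```

Declaring the return type as SETOF of a named composite type (or using OUT parameters) avoids the column definition list at each call site.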


Thanks,

Bret


--
Bret Schuhmacher
[EMAIL PROTECTED]



---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] Training and certification

2004-07-08 Thread Bret Busby

I realise that these have been discussed before, but a couple of things 
have happened that caused me to bring this up again, and to raise some 
questions.

A couple of nights ago, a seminar was presented in Perth, Western 
Australia, by an institution offering IT masters degrees. One of the 
masters degrees, was a Master of Systems Development (MSD). Their 
masters degrees incorporated industry certification. The MSD 
incorporated the Microsoft MCAD and MCSD, and related to .NET systems 
development. The institution is also investigating possibly offering a 
similar masters degree, relating specifically to, or emphasising, 
databases. After the presentation, I asked the presenter about the 
possibility of incorporating open source, rather than proprietary system 
development, with open source industry certification. Unfortunately, the 
only open source industry certification that was relevant, that I could 
cite, was the MySQL certification set, along with LPI and RHCE 
certification, with no apparent industry certification for PostgreSQL. 
The presenter said that the institution was considering Oracle as a 
possibility in the future, Oracle having industry certification.

Thus, recognised, international, industry certification of 
open source application systems development, either involving PostgreSQL 
as a database backend by itself, or, involving PostgreSQL as a factor 
(eg, open source database applications development involving either 
PostgreSQL or MySQL as separate options, or, competency with both), 
could be useful, apart from having the internationally recognised 
industry certification in its own right, encouraging acceptance of open 
source software development in such qualifications as these masters 
degrees with their incorporated recognised international industry 
certification.

In trying to find the organisation that I had understood to be the main 
one for providing PostgreSQL certification (I had understood that it was 
PostgreSQL.com, or something like that, or, possibly pgsql.com), and, 
being unable to find any details of any competency-based certification 
at these sites, I did a bit of searching, and I found a postgresql.org 
web page at http://www.postgresql.org/survey.php?View=1&SurveyID=22 , 
which gave the results of a survey, with the question "Should we create 
a standard worldwide PostgreSQL training course?", with 79.691% of the 
votes, being votes for the yes side - and of that percentage, 50.386% 
"strongly yes". The survey is not dated, so I do not know how long ago 
it was held. However, it appears to have indicated support for "a 
standard  worldwide PostgreSQL training course". 

On the web page, whilst other survey questions are listed, no further 
reference is made to the result of the survey, for example, "The 
PostgreSQL gurus are developing a 
standard worldwide PostgreSQL training course, in response to the survey 
results".

So, I ask, given the result of the poll, however old it is, is any 
action being taken, to develop "a standard worldwide PostgreSQL training 
course", or set of such courses (eg, core, DBA, developer, 
PHP|Perl/PostgreSQL web developer, etc)?

I also found a web page at 
http://advocacy.postgresql.org/advantages/?lang=en , in which the text 
was included;
"our training programs are generally regarded as being more 
cost-effective, manageable, and practical in the real world than that 
of the leading proprietary database vendors.".

To what training programs, does this refer? Are they standardised, or, 
does this refer to separate, independent, autonomous, individual 
training programs that are not standardised?

Also, in my searching, I did manage to find a certification at 
http://www.postgresql.at/certification.html , which also has a link to 
training provided by that company. From the web site, it appears that 
the training/certification provided by that company, is of ten days 
duration, assumedly of 8 hours per day, thence 80 hours, the equivalent 
of about 6 hours per week over a 13 week period, plus two hours, thence, 
the equivalent of a single semester, university unit.

That appeared to be the only detailed training/certification course that 
I could find in PostgreSQL.

From the web page at http://techdocs.postgresql.org/companies.php , that 
company appears to be a small company in Austria, and the company and 
certification appear to be recognised by PostgreSQL.org .

Is that the only PostgreSQL certification that is recognised? Is it 
recognised internationally?

Thank you in anticipation, for answers to all of these questions.

-- 
Bret Busby
Armadale
West Australia

Re: [GENERAL] book for postgresql

2004-05-31 Thread Bret Busby
On Mon, 9 Feb 2004, Christopher Browne wrote:

> Date: Mon, 09 Feb 2004 20:57:13 -0500
> From: Christopher Browne <[EMAIL PROTECTED]>
> To: "[EMAIL PROTECTED]"@postgresql.org
> Subject: Re: [GENERAL] book for postgresql
> 
> A long time ago, in a galaxy far, far away, [EMAIL PROTECTED] (Ed Wong) wrote:
> > I am an oracle dba and new to postgresql.  Could you tell me what is
> > the best postgres book out there to start with?  I am looking for a
> > book which is sort of a complete reference including some dba chapters
> > as well.
> 
> There are three fairly good books in printed form:
> 
> - New Riders has one that is about the most recent one published,
>   which, it seems to me, has about the best explanation of the query
>   optimizer going, as well as quite a lot of other fairly deep
>   technical material;
> 
> - Addison Wesley published Bruce Momjian's book which is getting a bit
>   dated, but which is still quite good;
> 
> - O'Reilly has a third one that is also "dated but good."
> 
> The online material is also a good source, and is commonly included
> along with the PostgreSQL installation.
> 
> All four of these sources are quite good.  I have all but Bruce's book
> on my desk...
> 

For some strange reason, the message above, took somewhat over 
three months to come through, maybe due to the To address in the header 
(?).

I am wondering whether, given that Bruce Momjian's book appears to be 
regarded as "a bit dated", a 2nd Edition is on the way?

-- 
Bret Busby
Armadale
West Australia



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Is my MySQL Gaining ?

2003-12-31 Thread Bret Busby
On Wed, 31 Dec 2003, Tony wrote:

> 
> I have been a consultant with Microsoft Operating Systems for sometime 
> now, but never sat any of their exams, because my experience with 
> Network Operating Systems speaks for itself.  I've never had my 
> abilities questioned by an employer (only by employment agency staff 
> that don't know their subject and insist that no one can be put forward 
> for this contract without at least an MCP)  not even by Microsoft 
> when I did work for them.
> 
> 

Perhaps I should have clarified - in referring to Microsoft 
certifications, I was referring to the MCAD and MCSD certifications.

-- 
Bret Busby
Armadale
West Australia





Re: [GENERAL] Is my MySQL Gaining ?

2003-12-30 Thread Bret Busby
struggle with code to 
understand it. He used one-character variable names, and as a 
consequence the cost of having a consultant come in to do maintenance on 
this program was considerably higher than it should have been. Let's do 
the math to illustrate the point. Average billable hours went to this 
company at $55 per hour. It took eight hours just to figure out what 
this piece of code did. That time cost the company $440. Keep in mind, 
all that happened during that time was that the consultant read the code 
and traced its thread of execution. It took two hours to make and test 
the change, time billable for a total of $110. If we assume that 
self-documenting code could have reduced the research time by half, the 
cost for making a minor change to the program drops by $220. The point 
is that self-documenting code reduces the cost of owning a software 
system considerably. Variable names figure into that cost reduction as a 
significant factor."

So, good practices save time and money. Formalised, structured, 
standardised, training and certification, can increase the use of good 
practices, and, the confidence that good practices will be used, and, 
therefore, the confidence of efficiency.

It is like the use of the CMMI assessment for software developers, be 
they small businesses, or corporations.

I attended a .NET Community Of Practice seminar, a few months ago, and 
encountered a concept of which I was not previously aware, and I am not 
sure of the name for it; where a form allows SQL code instead of values, 
to be input into an input field in a form, allowing hacking into the 
database. The seminar warned against allowing such security breaches, 
and, mentioned various options and best ways of performing tasks. And, 
no, I am not of the .NET world, but, I learnt from the seminar. The 
inclusion of such issues, in formalised training, would also increase 
public confidence in software, which I understand to be one of the 
issues in software engineering.

Formalised, standardised, structured, training and certification, can 
increase a prospective employer's confidence, both that an employee is 
more than just a hack-programmer, and, that the employee, apart from 
having a reasonable idea of what the employee is doing, does what the 
employee is supposed to do, properly, and most efficiently, producing 
the most reliable and efficient result.


> > If you cannot see the advantages of formalised, structured, standardised 
> > training and certification, then I assume that you have no 
> > qualifications, and did not graduate from secondary school? 
> 
> Well this was just plain snobbish. There are benefits to secondary
> school but they do not pertain to each individual and it has been
> proven time and time again that secondary school (college) can actually
> hamper the minds, creativity and capabilities for a person to grown.
> Bill Gates, and Michael Dell come to mind.
> 
> The above of course is not par for the course for everyone. Some people
> need to be taught, some can teach themselves, some can only teach
> themselves within one arena of talent, some are complete morons... it
> depends on the individual.
> 

It, surely, is all about the basic principle of public education; 
ensuring that people are educated to the same level(s). That is the 
great advantage - being educated to prescribed levels, and, in knowing 
that a person has been educated to a particular level, and therefore, 
attributing a particular level of skills to the person.

> 
> > Such things 
> > are generally implemented at secondary school and further education, and 
> > Informix and Oracle and Microsoft have such things, from my 
> > understanding. 
> 
> As someone who has passed the MS exams, you don't need them, they are
> joke. The A+ was more difficult than the memorize the side bars and
> select letter "C" testing that Microsoft offers.
> 
> 

Did you complete the MCAD and MCSD courses?

-- 
Bret Busby
Armadale
West Australia



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Is my MySQL Gaining ?

2003-12-30 Thread Bret Busby
On Tue, 30 Dec 2003, Nigel J. Andrews wrote:

> Date: Tue, 30 Dec 2003 11:12:05 + (GMT)
> From: Nigel J. Andrews <[EMAIL PROTECTED]>
> To: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
> Subject: Re: [GENERAL] Is my MySQL Gaining ?
> 
> 
> Just to poke fun at MySQL:
> 
> On Tue, 30 Dec 2003, Bret Busby wrote:
> > ...
> > It is alright for people in this thread, to say "But they are MySQL, and 
> > MySQL is not as powerful as PostgreSQL, so who cares what advantages 
> > there are in MySQL", but MySQL appears to be more mature, as it has 
> > things like standardised, formalised, structured, training courses and 
> > certifications, and, the "Teach Yourself MySQL in 21 Days" book, and 
> > that series of books has set exercises, etc, to aid the learning, 
> > ...
> 
> I thought MySQL was supposed to be easy to install, admin and use, how come it
> takes 21 days to learn it and needs formalised training courses?
> 
> 
> --
> Nigel
> 

So, in the absence of those benefits for PostgreSQL, all you can do is 
poke fun at a better offering?

Have you read and worked through the book, as either a person who has 
not worked with MySQL or the Perl DBI, or the APIs in the book, or as 
a person with no experience with databases? If not, how then can you 
say it should take more time or less time? 

Have you undertaken the MySQL certifications? If not, how can you say 
that they are not worthwhile?

"Easy" is in the eye of the beholder, and, is affected by the depth into 
which a person goes.

If you cannot see the advantages of formalised, structured, standardised 
training and certification, then I assume that you have no 
qualifications, and did not graduate from secondary school? Such things 
are generally implemented at secondary school and further education, and 
Informix and Oracle and Microsoft have such things, from my 
understanding. And, isn't passing secondary school level exams, easy? If 
not, perhaps, you should try it again, and again, until you can 
confidently pass.

Some people find secondary school exams easy, others do not. Depending 
on where you were educated, most countries have had formalised, 
standardised, structured, education and certification at secondary 
school, and, some kind of accreditation for technical college education 
and for university education. Maybe not where you were educated.

Instead of going out of your way to ridicule MySQL, perhaps you should 
instead, try to do what I have done; have a look at what MySQL has, 
that PostgreSQL has not, and, consider how it could benefit PostgreSQL. 
Unless, of course, you want for PostgreSQL to not be taken seriously, 
and instead, to be similarly an object of ridicule, as its community 
would appear unable to achieve anything other than ridiculing others.

It is like some sections in the Linux community, who apparently feel 
that Linux has nothing to offer, and should not be taken serioulsy, so 
they devote their time and effort, to ridiculing Microsoft, instead of 
promoting the benefits of Linux, as they clearly believe that ridiculing 
Microsoft, can apparently hide their belief that Linux is not worthwhile 
in itself and that Linux has nothing to offer.

If some want to similarly regard PostgreSQL, as being so worthless, that 
the best way to conceal its worthlessness, is to ridicule MySQL, then 
that is unfortunate, as I believe that PostgreSQL is supposed to be 
better than MySQL, it just happens to lack some of the maturity of 
MySQL, as indicated in my paragraph, quoted above.

Oh, and, on that basis, remember the Beta video format? It was supposed 
to be far better than VHS. But, it disappeared because VHS had greater 
marketing. And, OS/2 was supposed to be far superior to MS Windows, 
but, similarly, the same fate befell that, and, similarly, with IBM 
PC-DOS and MS-DOS. 

As it was mentioned that PostgreSQL would be around, long after MySQL 
was dead and gone, perhaps not - perhaps, it may be the other way 
around.

It all depends on whether the PostgreSQL community is prepared to learn 
from others - remember that quote? "Those who do not learn from history, 
are doomed to repeat it". It would be unfortunate, for PostgreSQL to 
disappear, like the Beta video format, due to the PostgreSQL community 
not being willing to learn from others.

-- 
Bret Busby
Armadale
West Australia



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [GENERAL] Is my MySQL Gaining ?

2003-12-30 Thread Bret Busby
On Tue, 30 Dec 2003, Tom Lane wrote:

> Date: Tue, 30 Dec 2003 02:07:23 -0500
> From: Tom Lane <[EMAIL PROTECTED]>
> To: Bret Busby <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED], [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Is my MySQL Gaining ? 
> 
> Bret Busby <[EMAIL PROTECTED]> writes:
> > Does PostgreSQL yet allow the user or programmer, to determine where the 
> > database will be stored?
> 
> You speak as though you think that would be a good idea.
> 
> In my mind, "where the database is stored" is not a matter for users,
> nor for programmers, but for DBAs --- that is, the grunts who have to
> worry about backup policies and suchlike.  This is not an issue that
> should be exposed at the SQL-command level, and therefore it does not
> concern either users or database programmers.
> 
> That's not to say that we don't have work to do here.  There's
> considerable interest in developing "tablespace" features to help the
> DBA manage his problems.  But I absolutely will not buy into any
> suggestion that user foo's tables must be stored in user foo's home
> directory (even if I thought that Postgres user foo must correspond
> to a local Unix user foo ... which I don't ...)
> 
>   regards, tom lane
> 
> 

This is where terminology becomes amusing.

I meant the OS user, not the DBMS user, and I am not suggesting that 
DBMS users should be able to set where their tables are stored.

All kinds of scenarios can arise; where the DBA and the developer are 
the same person, or, employed in the same department of the same 
company; where the DBA is employed by the company, and the developer is 
a contractor, or an employee of a contractor, and, as I previously 
mentioned, the scenario where an ISP, by hosting a web site with a 
database backend, has a database in the same holding area as is held all 
the databases of all of the ISP's clients who similarly have web sites 
with database backends.

I would feel more confident about having a personal database "on the 
Internet"; a backend to my web site, if I knew that the database wasn't 
thrown into the same storage area as everyone of the ISP's other account 
holders, who also have the same DBMS database backends to their web 
sites. You never know what else is sharing the same storage area, or how 
safe your database is in there. It is a bit like having a cat; I would 
rather that the cat is with me, and that I know where it is, and what is 
happening with the cat, than having the cat locked away in a common room 
for all cats. Also, using that analogy, if I decide to move away with my 
cat, if it is with me, it is much simpler, and, cleaner, for me to 
simply pick up the cat and take it with me, than to try to find all of 
its bits, in a common room full of other cats. If I have a database 
system hosted by an ISP, and I try to move it to another ISP, surely, it 
would be simpler and cleaner, if I know that the database is stored in 
or under my home directory with the ISP, than having the database stored 
in a central repository with all of the other accounts holders' 
databases.

There is also the issue of security, in the same context; I would feel 
much more secure, with a database hosted by an ISP, if I could control 
the privileges on the database directory, rather than allowing the ISP 
the control. Having been a user on various UNIX systems, I have seen 
some pretty lax security by systems administrators, and other users, and 
I am reminded of a senior university computing lecturer, who had the 
exam for an advanced computing unit, with such lax security that some 
students wandering through the system, found the exam, and, when they 
sat the exam, were surprisingly well prepared (no, I was not one of the 
students), resulting in all the students in the unit, having to re-sit 
the exam, and, other effects. A DBA should be able to control where a 
database is stored, and the level of security applicable to where the 
database is stored (privileges applicable to the directory, etc), and, 
as I have previously mentioned, it can occur that the DBA and the 
developer/programmer, are the same person.

As an example, on a personal basis, if I ever get the number of names in 
my genealogy system, up to around 10,000, I would really want, if using 
a database backend (which would, I believe, be required), to have 
control over where the data is stored, so that I can easily and reliably 
back it up, as such data can be irreplaceable, and can take decades to 
accumulate.

Similarly, for commercial databases, now that DVDs are writable, 
backing up a largish database using OS-level backups would be much 
better, and more so with the data for a database stored where it is 
wanted.

I am not sure whether it can all be done with symbolic links, to place 
Po

Re: [GENERAL] Is my MySQL Gaining ?

2003-12-29 Thread Bret Busby
On Sat, 27 Dec 2003, Chris Travers wrote:

> Date: Sat, 27 Dec 2003 18:44:48 +0700
> From: Chris Travers <[EMAIL PROTECTED]>
> To: Marc G. Fournier <[EMAIL PROTECTED]>
> Cc: [EMAIL PROTECTED], [EMAIL PROTECTED],
>  [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Is my MySQL Gaining ?
> 
> 



> In short, I do not see MySQL as any sort of threat to PostgreSQL, near or
> long-term.  PostgreSQL will continue when MySQL no longer exists.  Firebird
> is a more serious competitor long-term, though I found it to be hard to
> learn when compared to PostgreSQL.  It has a long way to go before being as
> easy to use as PostgreSQL.
> 
> 

I suggest that it is a bit premature, to suggest that MySQL will 
disappear, and that PostgreSQL will still exist.

Each does have its advantages, and, people develop things in parallel in 
the two different systems.

For example, on the perl-gedcom list, people have developed, in 
parallel, genealogy database systems that they use, some using MySQL, 
some using PostgreSQL. People have their preferences, as some still use 
(or require to be used) MS Access, or Foxpro, or SQL-Server, or 
Informix, etc.

Does PostgreSQL yet allow the user or programmer, to determine where the 
database will be stored? From memory, that has (or had) been a 
shortcoming of PostgreSQL; there was no control as to where the database 
was stored, so that, for example, from my understanding, where an ISP 
allowed PostgreSQL usage for web sites, all of the PostgreSQL databases 
of all the ISP account holders, were stored in the same location, which 
was not under the account-holder's home directory; similarly, if I, on a 
LAN, create a database InventoryThing, as user frednerk, and, create a 
database AccountsThing, as user joebloggs, my understanding is that both 
databases will be stored in a central PostgreSQL repository, rather than 
under each user home directory. Thus, if the frednerk home directory and 
everything under it, is backed up by frednerk, it appears that 
InventoryThing is not backed up, and, similarly, with joebloggs and 
AccountsThing. Likewise with separate ISP accounts and any PostgreSQL 
databases that they have and use on their web sites. Clarification of 
whether my understanding is correct, would be appreciated.
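[Archive note: the "tablespace" work mentioned in the quoted reply above did later land in PostgreSQL. A minimal sketch of how it answers part of this question — the path, database, and table names here are hypothetical, and the directory must already exist and be owned by the database server's OS user:

```sql
-- A DBA-created tablespace at a directory the DBA chooses.
CREATE TABLESPACE frednerk_space LOCATION '/home/frednerk/pgdata';

-- A whole database, or an individual table, can then be placed there.
CREATE DATABASE inventorything TABLESPACE frednerk_space;
CREATE TABLE stock_items (id integer) TABLESPACE frednerk_space;
```

Where things are stored remains a DBA decision, as the quoted reply argues, but the location is no longer fixed to a single central data directory.]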






Re: [pgsql-advocacy] [GENERAL] Is my MySQL Gaining ?

2003-12-29 Thread Bret Busby
On Mon, 29 Dec 2003, Martin Marques wrote:

> 
> I haven't read the latest review of Bruce's book, but I can recall that the 
> original version started with:
> 
>  In this chapter, you will learn how to connect to the database server and 
> issue simple commands to the POSTGRESQL server.
> 
> At this point, the book makes the following assumptions:
> 
> * You have installed POSTGRESQL.
> * You have a running POSTGRESQL server.
> * You are configured as a POSTGRESQL user.
> * You have a database called test. 
> ==
> 
> Now, Joshua was talking about getting PostgreSQL started, which Bruce assumes 
> you already know.
> 
> Anyway, I must admit that if you have PG installed and running, which is very 
> simple on normal Linux distributions, this book gives a huge boost to any 
> newbie.
> 
> 

And, if a person did not already have it installed and set up, would the 
person then have not been required to find elsewhere, how to do those?






Re: [pgsql-advocacy] [GENERAL] Is my MySQL Gaining ?

2003-12-29 Thread Bret Busby
On Mon, 29 Dec 2003, Jeff Eckermann wrote:

> 
> Isn't this what books are supposed to be for? i.e. to
> fill in the gaps or provide the coverage, tips, howtos
> etc. etc. that no-one really expects formal
> documentation to cover.  There are quite a few good
> books out there, including two accessible online, with
> links from the www.postgresql.org page (that must have
> been modesty on your part ;-) ).  Bruce's book, even
> referring to an outdated version of PostgreSQL, still
> gives a pretty good introduction for an SQL newbie in
> how to get started.
> 
> We have plenty of good stuff already out there, the
> issue here appears to be more one of presentation and
> organization.
> 
> 

But, do these things have set exercises, relevant to the material, to 
ensure the reader understands the material?

It is one thing to present a worked example, but, without getting a 
student to perform an exercise "create a database named supermarket, 
with tables groceryline and socklevel and itemprice, input 100 stock 
lines of varying stock levels, and of varying values, then create a 
report of the total value of the stock, and a report listing the stock 
lines with an item value over $5.00, and the total value of stock with 
item prices over $5.00", to show whether the student actually 
understands what to do, and how to do it, so the student can realise 
whether the student needs to go back and cover the material again, or 
whether the student can move on.
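[A rough SQL sketch of that set exercise, for concreteness — the exercise's three tables are collapsed into one here for brevity, and the column names are my own guesses at what is intended:

```sql
CREATE DATABASE supermarket;

-- One row per stock line.
CREATE TABLE groceryline (
    line_no    serial PRIMARY KEY,
    item_name  text NOT NULL,
    stocklevel integer NOT NULL,
    itemprice  numeric(10,2) NOT NULL
);

-- Report 1: total value of the stock.
SELECT sum(stocklevel * itemprice) AS total_stock_value
  FROM groceryline;

-- Report 2: stock lines with an item price over $5.00,
-- and the total value of those lines.
SELECT line_no, item_name, stocklevel, itemprice
  FROM groceryline
 WHERE itemprice > 5.00
 ORDER BY line_no;

SELECT sum(stocklevel * itemprice) AS total_value_over_five
  FROM groceryline
 WHERE itemprice > 5.00;
```

Working through something like this, rather than only reading a worked example, is exactly the point being made about exercises.]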

To give a person knowledge, increases the person's memorised 
information; to require the person to use the knowledge, makes the 
person learn, and increases the person's skills.

That is why I have repeatedly referred to the need for a "Teach Yourself 
PostgreSQL in 21 Days" book, to have such exercises, etc.






Re: [GENERAL] Is my MySQL Gaining ?

2003-12-29 Thread Bret Busby
reSQL skills, through the certifications, 
available to the common people, rather than making PostgreSQL 
programming, a black art with a secret society atmosphere, with the 
policy "If you can find it, you might be able to learn it".

It is useful, to have the resources that exist, including the support 
from the mailing lists, but, what is sorely lacking, is the existence 
of the things that I have repeatedly mentioned; formalised, 
standardised, structured, training and certification, and, a "Teach 
Yourself PostgreSQLin 21 Days" book, with appropriate set exercises, as 
in any good trauining course.

When PostgreSQL has these, then it will have achieved the maturity of 
MySQL, and other DBMS's, like Oracle, etc., and, then, PostgreSQL might 
become widely used, and displace the other DBMS's.

Until then, it will likely still be regarded as a hacker's DBMS, as Perl 
is regarded as a language for hackers, or hack programmers.






Re: [GENERAL] [NOVICE] PostgreSQL Training

2003-12-12 Thread Bret Busby
On Fri, 12 Dec 2003, Bruce Momjian wrote:

> Date: Fri, 12 Dec 2003 08:41:39 -0500 (EST)
> From: Bruce Momjian <[EMAIL PROTECTED]>
> To: Chris Travers <[EMAIL PROTECTED]>
> Cc: Stephan Szabo <[EMAIL PROTECTED]>, Bret Busby <[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED], [EMAIL PROTECTED]
> Subject: Re: [GENERAL] [NOVICE] PostgreSQL Training
> 
> Chris Travers wrote:
> > I think though that there is an opportunity, though, for us to perhaps
> > work together in developing a Postgresql training base curriculum.  We
> > can pool some resources and perhaps develop at least a list of the
> > things which ought to be covered. Perhaps this can lead to books on the
> > subject, etc.  I am thinking that an open curriculum might be something
> > very helpful particularly for novices.  It doesn't have to lead to
> > certification, but it could enable third parties (including Brainbench)
> > to build certifications that they could charge for.
> 
> All my class presentations are on my home page --- the only thing that
> isn't there is the exercises.
> 
> 

And, from what I have seen of the Table of Contents of the book, 
as listed on the Internet, exercises are also not there.

Exercises make a lot of difference to training and learning, as, from 
exercises, comes understanding and remembering.

That is one of the reasons that I seek formalised, standardised 
training, and a Teach Yourself PostgreSQL In 21 Days book (to which I 
recently alluded in a particular query about the book), apart from the 
certifications.






Re: [GENERAL] [NOVICE] PostgreSQL Training

2003-12-10 Thread Bret Busby
software developer, by profession. She also trains people, 
and has trained lecturers, in some of the development software in which 
she develops. 

However, when the issue of open source software development, such as in 
PostgreSQL, arises, her employer company apparently steers away from it, 
instead, steering toward software development, using software tools that 
are internationally recognised and in which certification is available, 
and, I believe that her employer company regards things like PostgreSQL, 
as the dark and murky unknown, especially in the absence of any 
recognised formal training and certification.

It is one thing to say that PostgreSQL is big and powerful, and that it 
is (as I believe) the most ANSI-SQL standards-compliant DBMS, 
and that it is used for such major projects as (as I believe) the .org 
registry, but, in the absence of recognition of PostgreSQL as being 
backed by formal training and certification, it is difficult to obtain 
acceptance of PostgreSQL.

But, the issue of formal and structured training and certification in 
PostgreSQL, is something to be decided by the PostgreSQL gurus, I 
believe, and, until they implement these things, we are left in the 
dark, and, required to do things such as travel the path that I have 
mentioned, via MySQL. And, it is always possible, that, in 
following such a path, and having obtained MySQL certification, a person 
may stay with MySQL, thus, the path of formalised training and 
certification, taking potential software developers, and, thence, 
potential customers, to MySQL instead of PostgreSQL.

Thus, whilst, if I chose that path, it might not be any great loss, if 
others followed that path, and, went to MySQL instead of PostgreSQL, due 
to the lack of formalised training and certification of PostgreSQL 
skills, it would be a loss of potential usage and acceptance, by 
PostgreSQL, kind of like PostgreSQL shooting itself in its feet.







[GENERAL] unicode/japanese support

2000-08-21 Thread Bret A. Barker

Hi all, 
I'm interested in using PostgreSQL as the database for a project that needs to 
use Japanese text. Can anyone point me to resources on using Postgres with 
unicode/japanese characters, especially with JDBC?
Also, is there an archive of the mailing lists somewhere?

Thanks,
-bret

-- 
-* bret a. barker
-* cto, gamelet.com [ [EMAIL PROTECTED] ]
-* take a break, play a game:
-* [ http://www.gamelet.com ]
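
[Archive note: on reasonably current PostgreSQL, Japanese text is normally handled by creating the database with a Unicode encoding; the database name below is illustrative:

```sql
-- Store Japanese text directly in text/varchar columns.
CREATE DATABASE gamedb ENCODING 'UTF8';

-- Make sure the client session uses the same encoding.
SET client_encoding = 'UTF8';
```

From JDBC, a connection URL of the form jdbc:postgresql://host:5432/gamedb with the official driver should then round-trip UTF-8 strings. The mailing list archives asked about here are searchable on the postgresql.org site.]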