[ADMIN] I'm Interested In Posting On Your Site
Hello,

My name is Jeremy Mollet and I'm very interested in posting a job on your site. It's with a great company located here in the San Francisco Bay Area. I have included the job description below. Let me know the process I need to follow in order to post. Thanks.

Job Description:

A technology firm in Walnut Creek, CA is looking for a PostgreSQL DBA who can support a Perl/Postgres-based financial billing system. Advanced Perl scripting, database design, and strong SQL skills are required. The position requires the ability to work and communicate effectively with other professionals in Information Systems, Finance, and Legal disciplines. Previous financial system experience is a plus.

Requirements:
- Postgres (or similar database experience): 5 years
- Linux/Unix scripting: 3 years
- Perl: 5 years

Jeremy Mollet
Technical Recruiter
Sapphire Technologies
27 Maiden Lane, Suite 300
San Francisco, CA 94108
Office: 415-788-8488
Fax: 1-415-788-2592
Visit www.sapphire.com today to view almost 1000 active jobs
[ADMIN] Heavy postgres process
Hi Admin,

I'm new to this and have a few queries, listed below:

1) How do I find the number of connections made to a particular database?
2) How can I check which process (PID) is responsible for each connection?
3) What can make a postgres process grow as heavy as 70-80 MB in size?

I need these answers soon, so I would highly appreciate it if you could find time to answer my questions.

Regards,
~Vivek
[ADMIN] open source ERD for postgresql database
I would like to use an ERD tool for postgres that is open source. Any suggestions?

--
Regards,

Barbara Stephenson
EDI Specialist/Programmer
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30507
tel: (678)989-3020 fax: (404)935-6171
[EMAIL PROTECTED]
www.ohlogistics.com

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] open source ERD for postgresql database
I've been using GNU Ferret for a while. It's OK for simple tasks and can produce table graphs and even output rudimentary PostgreSQL DDL in version 0.6, but it doesn't support PostgreSQL's full range of types yet, and the handling is somewhat awkward. Version 0.7 looks much more promising, at least from the screenshots, but that hasn't been released yet: http://www.gnuferret.org/

On Fri, 2008-09-12 at 09:59 -0400, Barbara Stephenson wrote:
> I would like to use an ERD tool for postgres that is open source. Any
> suggestions?
[ADMIN] Recommend dba maintenance tasks on a regular basis
Hello,

We are currently using PostgreSQL 8.3.3 on Red Hat 4 and our largest database is around 8454 MB.

I have recommended the tasks below to my group, but I'm not sure whether reindexing should also be involved, since autovacuum is on?

How can I be sure autovacuuming is working fine? We haven't had any problems, plus I run a query that lists all the tables and shows the last run of autovacuum and autoanalyze. Is that it?

1 - pg_dump - binary dump every midday and nightly

2 - autovacuum
    autovacuum = on
    log_autovacuum_min_duration = 0
    autovacuum_max_workers = 3
    autovacuum_naptime = 1min
    autovacuum_vacuum_threshold = 50
    autovacuum_vacuum_scale_factor = 0.2
    autovacuum_analyze_scale_factor = 0.1

3 - rotate data logs

--
Regards,

Barbara Stephenson
EDI Specialist/Programmer
Turbo, division of Ozburn-Hessey Logistics
2251 Jesse Jewell Pkwy NE
Gainesville, GA 30507
tel: (678)989-3020 fax: (404)935-6171
[EMAIL PROTECTED]
www.ohlogistics.com
Re: [ADMIN] Recommend dba maintenance tasks on a regular basis
On Fri, Sep 12, 2008 at 11:49:46AM -0400, Barbara Stephenson wrote:
> I have recommend the below to my group but not sure if reindexing should be
> involved since autovacuum is on?

No, there's no reason to reindex regularly if everything is working as expected.

> How can I be sure auto vacumming is working fine?

Check the last_autovacuum and last_autoanalyze columns in pg_stat_all_tables.

A

--
Andrew Sullivan
[EMAIL PROTECTED]
+1 503 667 4564 x104
http://www.commandprompt.com/
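A minimal sketch of that check, run against the statistics views that ship with 8.3 (the schema filter is just one reasonable way to limit the output to user tables):

```sql
-- Most recent autovacuum/autoanalyze per user table. A NULL in these
-- columns for a frequently updated table suggests autovacuum is not
-- reaching it.
SELECT relname, last_autovacuum, last_autoanalyze
  FROM pg_stat_all_tables
 WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
 ORDER BY relname;
```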
Re: [ADMIN] Recommend dba maintenance tasks on a regular basis
On Fri, 12 Sep 2008, Barbara Stephenson wrote:
> 1 - pg_dump - binary dump every midday and nightly
> 2 - autovacuum
> 3 - rotate data logs

You should also consider running a script which does a VACUUM VERBOSE weekly or twice monthly and emails you the last 8 lines of output. This will allow you to keep your FSM settings up to date. Jim Nasby's article here: http://decibel.org/~decibel/pervasive/fsm.html has some good info about the Free Space Map if you're not familiar with it.

In addition, it's probably worth setting log_min_duration_statement to something like 500 or 1000 (500ms or 1s) so that you log slow queries. Note that 500ms is just an example; set it to a value you consider slow so that it will log your slow queries. Then, after you've gathered some data, run it through pgFouine: http://pgfouine.projects.postgresql.org/

--
Jeff Frost, Owner <[EMAIL PROTECTED]>
Frost Consulting, LLC
http://www.frostconsultingllc.com/
Phone: 916-647-6411 FAX: 916-405-4032
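The scripted check described above boils down to something like this (superuser is needed to vacuum the whole database, and the FSM summary appears in the closing lines of the output):

```sql
-- Run as superuser. The final lines of the VERBOSE output report how
-- many free-space-map pages and relations are in use versus the
-- max_fsm_pages / max_fsm_relations settings (8.3 and earlier).
VACUUM VERBOSE;
```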
Re: [ADMIN] Recommend dba maintenance tasks on a regular basis
On Fri, Sep 12, 2008 at 9:49 AM, Barbara Stephenson <[EMAIL PROTECTED]> wrote:
> We are currently using Postgresql 8.3.3 on Red Hat 4 and our largest database
> is around 8454 MB.
>
> 1 - pg_dump - binary dump every midday and nightly

If uptime is critical, also look at setting up a PITR server.

> 2 - autovacuum
>     autovacuum = on
>     log_autovacuum_min_duration = 0
>     autovacuum_max_workers = 3
>     autovacuum_naptime = 1min
>     autovacuum_vacuum_threshold = 50
>     autovacuum_vacuum_scale_factor = 0.2
>     autovacuum_analyze_scale_factor = 0.1

What's your autovacuum sleep? 10 to 20 is pretty reasonable. If you set it much higher, autovacuum may not run fast enough to keep up.

Run a regular VACUUM VERBOSE on the database and examine the last 20 or so lines (you need to be a superuser, I think) to see how many free space map slots you need. FSM slots are super cheap (6 bytes each), but they do come from shared memory. A good setting for a database in your size range with high updates will be between the default of around 200k and 1M.

> 3 - rotate data logs

pgsql has this ability built in, as does your OS. I prefer letting pgsql log, as I like the options better than syslog's, plus it seems to be faster.

Also, don't forget to add long-term periodic maintenance: things like taking the server down every 6 or 12 months to blow out dust, run memory and drive tests, etc.
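The settings being discussed can be inspected from psql; the names below are the 8.3-era ones (the "autovacuum sleep" is autovacuum_vacuum_cost_delay, and the FSM parameters were removed in 8.4 when the free space map became self-managing):

```sql
SHOW autovacuum_vacuum_cost_delay;  -- the "autovacuum sleep", in ms
SHOW max_fsm_pages;                 -- total free-space-map page slots
SHOW max_fsm_relations;             -- relations tracked in the FSM
```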
Re: [ADMIN] Heavy postgres process
On Fri, Sep 12, 2008 at 2:12 AM, Vivek_Sharan <[EMAIL PROTECTED]> wrote:
> 1) Number of connections made with a particular database.

Wait, do you want to find out how many connections there are, or how many a particular db can handle? For this kind of thing, look at the admin functions in the docs: http://www.postgresql.org/docs/8.3/interactive/functions-admin.html

Specifically, you want something like:

select datname from pg_stat_activity;
select datname, count(datname) from pg_stat_activity group by datname;

> 2) And how can I check which process (PID) is responsible for the
> connection and

That same view up there ^^^

> 3) what all can make a postgres process as heavy as 70-80 MB in size

You may not be measuring properly. When you say it's using 70-80 MB, how do you know this? The numbers you see in top aren't necessarily what some folks think they are.
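A slightly fuller version of those queries, also pulling the backend PID asked about in question 2. The column names are the 8.x-era ones; in 9.2 and later, procpid was renamed to pid and current_query to query:

```sql
-- 1) connections per database
SELECT datname, count(*) AS connections
  FROM pg_stat_activity
 GROUP BY datname;

-- 2) which backend process (PID) holds each connection, and what it
--    is currently running
SELECT procpid, datname, usename, current_query
  FROM pg_stat_activity;
```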
Re: [ADMIN] Recommend dba maintenance tasks on a regular basis
The help for REINDEX says:

    Also, for B-tree indexes a freshly-constructed index is somewhat faster
    to access than one that has been updated many times, because logically
    adjacent pages are usually also physically adjacent in a newly built
    index. (This consideration does not currently apply to non-B-tree
    indexes.) It might be worthwhile to reindex periodically just to improve
    access speed.

I reindex my databases periodically. (Apologies for the Google translation from Spanish.)

I have PostgreSQL 8.1.x. Our databases get many updates and were becoming slow. Investigating, I found that some indexes were bigger than they should have been; their size did not match the size of the table. I reindexed the table, the query became much faster, and the index stayed very small. So we set up a scheduled task that reindexes my databases periodically.

> Date: Fri, 12 Sep 2008 12:40:19 -0400
> From: [EMAIL PROTECTED]
> To: pgsql-admin@postgresql.org
> Subject: Re: [ADMIN] Recommend dba maintenance tasks on a regular basis
>
> On Fri, Sep 12, 2008 at 11:49:46AM -0400, Barbara Stephenson wrote:
> > I have recommend the below to my group but not sure if reindexing should be
> > involved since autovacuum is on?
>
> No, there's no reason to reindex regularly if everything is working as
> expected.
>
> > How can I be sure auto vacumming is working fine?
>
> Check the last_autovacuum and last_autoanalyze entries in
> pg_stat_all_tables.
>
> A
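A sketch of the periodic job described above, using a hypothetical table name. Note that REINDEX takes an exclusive lock on the table while it runs, which is why scheduling it off-hours matters:

```sql
-- Check the index size before and after (pg_relation_size and
-- pg_size_pretty are available from 8.1 on); the index name here is
-- hypothetical.
SELECT pg_size_pretty(pg_relation_size('my_table_pkey'));

-- Rebuild every index on the table in one pass.
REINDEX TABLE my_table;
```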
[ADMIN] exporting/importing BLOB's (lo_* style)
Hi All;

I want to dump a set of BLOBs from one db/server and import them into another. I assume I cannot do a pg_dump unless I also dump the related pg_largeobject rows for the referenced OIDs, so I assume I'll need to export via distinct lo_export commands and use lo_import to insert into the second database server.

Is there a way to do a bulk transfer or export/import of lo_*-style large objects?

Thanks in advance
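For reference, the one-object-at-a-time approach described above looks like this with the server-side functions (so the paths are on the database server's filesystem and the calls require superuser; the OID and path below are hypothetical):

```sql
-- On the source server: write large object 16437 out to a file.
SELECT lo_export(16437, '/tmp/blob_16437.bin');

-- On the target server: read the file back in; returns the new OID,
-- which will generally differ from the original.
SELECT lo_import('/tmp/blob_16437.bin');
```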
Re: [ADMIN] exporting/importing BLOB's (lo_* style)
kevin kempter <[EMAIL PROTECTED]> writes:
> Hi All;
> I want to dump a set of BLOB's from one db/server and import to another.
> I assume I cannot do a pg_dump unless I also dump related
> pg_largeobject rows for the referenced OID
> I assume I'll need to export via distinct lo_export commands and use
> lo_import to insert into the second database server.

Why don't you just use pg_dump?

regards, tom lane
Re: [ADMIN] Recommend dba maintenance tasks on a regular basis
On Fri, Sep 12, 2008 at 12:51 PM, Fabricio <[EMAIL PROTECTED]> wrote:
> I reindex my databases periodically.
>
> The databases we have get many updates and were becoming slow. I found
> that some indexes were bigger than they should have been. I reindexed
> the table, the query became much faster, and the index stayed very small.

This is generally a sign that you aren't vacuuming aggressively enough. Some usage patterns, however, lend themselves to off-hours vacuuming instead of during the day. If your machine doesn't have enough bandwidth to handle vacuuming during the day, then it's probably better in the long run to look at faster I/O subsystems.
Re: [ADMIN] exporting/importing BLOB's (lo_* style)
kevin kempter wrote:
> Hi All;
>
> I want to dump a set of BLOB's from one db/server and import to another.
>
> I assume I cannot do a pg_dump unless I also dump related pg_largeobject
> rows for the referenced OID

pg_dump should dump large objects automatically. If you're dumping a single table, make sure you use the -b switch. Saith the manpage:

    -b
    --blobs
        Include large objects in the dump. This is the default behavior
        except when --schema, --table, or --schema-only is specified, so
        the -b switch is only useful to add large objects to selective
        dumps.

--
Alvaro Herrera                       http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
[ADMIN] Triggers & BLOB's
Hi All;

Can I put an update/insert/delete trigger on a BLOB (lo_* style) column? It looks like the trigger would actually have to live on the data column of the pg_largeobject table to be effective. Is this correct?

Thanks in advance...
Re: [ADMIN] Triggers & BLOB's
kevin kempter wrote:
> Hi All;
>
> Can I put an update/insert/delete trigger on a BLOB (lo_* style) column?
> It looks like the trigger should actually live on the data column of the
> pg_largeobject table to be effective. Is this correct?

You can't put a trigger on pg_largeobject. Of course, you can put a trigger on the table that you're storing the OIDs in.

--
Alvaro Herrera                       http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
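A minimal sketch of that idea, with hypothetical table and function names: a trigger on the referencing table that unlinks the old large object when a row is deleted or its OID reference changes. (The contrib/lo module's lo_manage trigger serves a similar purpose.)

```sql
-- Hypothetical table holding references to large objects.
CREATE TABLE documents (id serial PRIMARY KEY, blob_oid oid);

CREATE FUNCTION documents_lo_cleanup() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        -- Row is going away: drop its large object, if any.
        IF OLD.blob_oid IS NOT NULL THEN
            PERFORM lo_unlink(OLD.blob_oid);
        END IF;
        RETURN OLD;
    END IF;
    -- UPDATE: drop the old object only if the reference changed.
    IF OLD.blob_oid IS NOT NULL
       AND OLD.blob_oid IS DISTINCT FROM NEW.blob_oid THEN
        PERFORM lo_unlink(OLD.blob_oid);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER documents_blob_cleanup
    AFTER UPDATE OR DELETE ON documents
    FOR EACH ROW EXECUTE PROCEDURE documents_lo_cleanup();
```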