Re: [ADMIN] Selecting HW RAID controller

2004-09-05 Thread Tony Reina
[EMAIL PROTECTED] (pginfo) wrote in message news:[EMAIL PROTECTED]...
 Hi all,
 I am searching for a good, working HW RAID controller. I will be
 running Linux as the OS, and the disks will be SCSI.
 I read the docs on the Adaptec AHA2120S, but couldn't find any comments
 about driver support or how stable it is under Linux.
 Any ideas?
 

I've used 3ware stuff in the past for PG on RH Linux.

http://www.3ware.com

-Tony



[ADMIN] Password with libpq

2004-06-27 Thread Tony Reina
I'm setting my PostgreSQL server to demand md5 encrypted passwords
from users. In my C programs (using libpq as the interface), it's no
big deal to pass the password through PQconnectdb.

However, what about the general case, where my client program doesn't
know in advance whether a password is required? Is there a libpq
function to ask the server whether one is needed? Or perhaps PQconnectdb
has a return value that would tell me the connection failed because of a
missing password? (In that case, I could prompt for the password and try
the connection again.)

Thanks.
-Tony



Re: [ADMIN] Postgresql ssl mode

2004-05-19 Thread Tony Reina
 Hi,
 
 The certificate was created in the data directory, and its permissions
 are set to the postgres user, but the database does not start.
 

Check the postmaster log file (I think it's $PGDATA/serverlog). It
should tell you specifically why SSL can't start (permissions, a missing
certificate file, etc.).

-Tony



Re: [ADMIN] Database storage bloat -- FOLLOWUP

2004-04-11 Thread Tony Reina
I thought I'd post a followup to the question of storage bloat. I tried
approaching the problem by playing a little loosely with database
normalization and using arrays instead of flattening everything out in
the tables.

So my original table,

CREATE TABLE original (
    dbIndex  integer,
    index1   smallint,
    index2   smallint,
    index3   smallint,
    dvalue   real
);

becomes

CREATE TABLE newtable (
    dbIndex  integer,
    dvalue   real[][][]
);

Now I'd expect better storage here just from dropping the three smallint
index fields (2 bytes x 3 = 6 bytes per value), plus most of the per-row
tuple overhead, since many values are now packed into a single array
tuple.

Still, in actuality I'm seeing about a 30-fold decrease:

original table = 48 MB
newtable       = 1.6 MB !

If I'm understanding the data directory correctly, the array values are
being TOASTed (which I think means they get compressed). The main heap
file for newtable is only 8 kB, and the pg_toast table that goes with it
is 1.6 MB.

So I'm very pleased with the storage here. Plus, reads are faster than
the original design. All in all, I think I've found a good solution
for my kind of data.

Now I know that the use of arrays is considered taboo in the books, but
in my case the data really do fall into an array (matrix) structure
naturally: it's a time-based recording, so there's a sample for each
time point at each X,Y position -- a natural 3D matrix.
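
For illustration, here's roughly how one recording goes in (just a
sketch -- the dbIndex value and the tiny 2 x 2 x 3 dimensions are made
up; a real recording is far larger):

-- one recording stored as a single 3D array value
-- (X position x Y position x time sample)
INSERT INTO newtable (dbIndex, dvalue)
VALUES (1, '{{{0.1,0.2,0.3},{0.4,0.5,0.6}},{{0.7,0.8,0.9},{1.0,1.1,1.2}}}');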

What are the possible downsides that I may be missing?



[ADMIN] Database storage bloat

2004-04-08 Thread Tony Reina
I'm developing a database for scientific recordings. These recordings
are traditionally saved as binary flat files for simplicity and
compact storage. Although I think ultimately having a database is
better than 1,000s of flat files in terms of data access, I've found
that the database (or at least my design) is pretty wasteful on
storage space compared with the binary flat files.

In particular, I tried importing all of the data from a binary flat
file that is 1.35 MB into a PostgreSQL database (a very small test
file; average production file is probably more like 100 MB). The
database directory ballooned from 4.1 MB to 92 MB (a bloat of 65X the
original storage of the binary flat file).

Now I know that table design and normalization are important. As far as
my partner and I can tell, we've made good use of normalization (no
redundancy), we've set field sizes as small as theoretically possible,
and we've made use of foreign keys and views. I'm also aware that
indices/keys and other database internals will necessarily make the DBMS
solution bloated in terms of storage space. However, a 65X bloat in
space seems excessive.

Has anyone run across similar storage concerns? I'd be interested in
knowing if I just have really poorly designed tables, or if something
else is going on here. I figure a bloat of 3-4X would be permissible
(and possibly expected). But this bloat just seems too much.
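
For anyone who wants to poke at the same question, one way to see which
relations are taking the space is a query like this (just a sketch;
relpages counts 8 kB blocks and is only refreshed by VACUUM/ANALYZE):

-- biggest tables, indexes, and TOAST tables by on-disk pages
SELECT relname, relkind, relpages, relpages * 8 AS approx_kb
FROM pg_class
ORDER BY relpages DESC
LIMIT 20;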

Thanks.
-Tony



Re: [ADMIN] Do Petabyte storage solutions exist?

2004-04-05 Thread Tony Reina
[EMAIL PROTECTED] (scott.marlowe) wrote in message news:[EMAIL PROTECTED]...
 For quite some time.  I believe the max table size of 32 TB was in
 effect as far back as 6.5 or so.  It's not some new thing.  The 8k row
 barrier, on the other hand, was broken with 7.1; I personally found that
 to be the bigger problem back then.  7.1 broke it in 2001, almost
 exactly three years ago, and 6.5 came out on 1999-06-09, so the limit on
 table sizes was gone a very long time ago.
 

The PostgreSQL limitations on the users' page
(http://www.postgresql.org/users-lounge/limitations.html) still says
that tables are limited to 16 TB, not 32 TB.

Perhaps it should be updated?

-Tony



[ADMIN] Do Petabyte storage solutions exist?

2004-04-01 Thread Tony Reina
I have a database that will hold massive amounts of scientific data.
Potentially, some estimates are that we could get into needing
Petabytes (1,000 Terabytes) of storage.

1. Do off-the-shelf servers exist that will do Petabyte storage?

2. Is it possible for PostgreSQL to segment a database across
multiple servers? (I was looking at a commercial vendor whose
product took rarely used data in Oracle databases and migrated it
to another server, keeping frequently accessed data more readily
available.)

Thanks.
-Tony



[ADMIN] Are there programs to graph database schema?

2004-01-12 Thread Tony Reina
I know that there are standard ways to graph relational databases. 

It occurred to me that perhaps programs exist which simply parse the
table definitions and plot the graphs for you.

Anyone know of such a thing?

Thanks.
-Tony



Re: [ADMIN] Tape/DVD Backup Suggestions?

2002-07-18 Thread Tony Reina

Thanks everyone. I think I'll go with one of the tape solutions (DLT,
Exabyte VXA-2, or AIT) after all. If I can get 80 GB native / 160 GB
compressed, then I can probably get by with one or two tapes. That
should make my single-file restores a little less onerous.

-Tony




[EMAIL PROTECTED] (Kurt Gunderson) wrote in message 
news:[EMAIL PROTECTED]...
 You might try looking into DLT technology.  DLT tape drives will give
 you 80 GB of storage with ~6-12 MB/s throughput.  They are an industry
 standard supported by the heavyweights (HP/IBM/Dell), but a little more
 pricey than your current DDS technology.
 
 If you are looking for speed AND storage, I would suggest a combination
 of disk and tape.  Back up your 'production' data to a separate 'backup'
 drive, then write the 'backup' drive contents to tape.
 
 If you have money to burn and require very little downtime during
 backup, place your data on a mirrored disk.  At backup time, split
 the mirror and back up the stale mirror to tape.  Once you are done,
 re-sync the stale mirror with your production mirror (which can be done
 online).
 
 My CDN$.02,
 K.





[ADMIN] Tape/DVD Backup Suggestions?

2002-07-17 Thread Tony Reina

I'm looking into new ways of backing up the data in our lab, including
the PostgreSQL database. Currently, we have a single DDS-2 tape drive
capable of holding 8 GB compressed. However, it is slow (it takes about
a day to back up roughly 40 GB of information), it spans multiple tapes,
and it makes it hard to find just a single file or two on the backup (I
have to go through many tapes before I can find and extract one or two
files).  Our CDRW backups are easier to manage and relatively fast,
but require dozens of CDRWs.

Can anyone make suggestions on backup systems? I was thinking that
some sort of DVD writing system would be good for accessing one or two
files in the backup quickly.  It would probably also complete backups
faster. However, I think DVDs only hold a few GB each. Are there
systems with these features that could handle, say, 20 GB per
medium?

Thanks.
-Tony




Re: [ADMIN] Tape/DVD Backup Suggestions?

2002-07-17 Thread Tony Reina

Yes, I'm a little wary of hard disk based systems as my sole backup. I 
prefer something that allows me to have at least 2-3 different media 
backups (e.g. one this week, one last week, and one the week before last). 
Also, I like to be able to take a copy of the media home just because I'm 
paranoid that the lab will burn down or something silly like that ;)

I've been looking at the Exabyte systems which hold 80 GB native / 160 GB 
compressed and write at about 10 GB/hour.  Anyone have experience with these? 
How onerous is it to look at the table of contents or restore a specific file 
from these tapes?

-Tony



At 11:35 PM 7/17/02 +0100, [EMAIL PROTECTED] wrote:
On Wed, Jul 17, 2002 at 04:48:10PM -0500, Nathan Mueller wrote:
   Are there systems with these features that could handle, say, 20 GB
   per medium?
 
  Where I work we just started using a new disk-based backup system that
  we wrote in-house.  Disk is a little bit cheaper than tape -- plus much
  faster.  Another bonus is that your compression rate does not depend on
  how fast your data is flowing.  I'd suggest you buy a few 160 GB IDE
  drives and just dump your data there.  It's faster, it's easier, and
  you don't need to change tapes.  If you're interested in the source to
  our system, just let me know.

I advise against hard-disk-based backups:

It's not as cheap as it looks: sure, a tape drive is expensive, but the
tapes themselves are quite cheap.

The cheapness of the tapes lets you use several of them for different
backup strategies (weekly and daily incrementals and a monthly full
backup, for example) and keep older data on saved tapes.

It's easier. There are programs like Arkeia (free for one Linux server and
two clients (win32/Linux)) that make tape and backup management a matter of
a few clicks (though a read of the documentation is still needed). As a
side note, Arkeia supports direct dumping and backup from several RDBMSs,
PostgreSQL included.

It's more reliable. If the backup disk fails, all of your backups are lost
and replacing it is a hassle. If a tape breaks, you still have the other
tapes from last week/day/etc. Granted, a tape drive can fail, or a bad
tape can damage the drive, but I haven't heard of anyone to whom that has
happened. In the hardware world, quality normally comes at a higher price.

That being said, I don't have a tape drive myself; too much for my
pocket. :) So I made a script that creates, compresses, splits, and burns
backups to CDs. I'll switch to DVDs when the price of DVD recorders drops
to EUR 100 and blank DVDs to EUR 2...

Your solution is still a valid one, but more a special case for those
without too much $$...

Regards,
Luciano Rocha

--
Consciousness: that annoying time between naps.





[ADMIN] Alias for field name

2002-02-28 Thread Tony Reina

I know this is probably a no-no, but I was just curious. 

I have two databases that are almost identical. Some of the minor
differences include different field names. For instance, in one db a
field called 'cell_name' is the same (more or less) as a field called
'file_name' in another db (actually cell_name is a subset of
file_name).

Would it be possible for PostgreSQL to know that if I ask for
'file_name' in the first db that I am really talking about
'cell_name'? In other words, is there an alias declaration?

e.g.    CREATE TABLE testtable (
            cell_name    int4    AKA file_name
        );

Note that I don't think this would be true referential integrity,
because 'file_name' doesn't exist anywhere in the first database (the
relationship is between the two databases).
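
The closest workaround I can think of is a view that just renames the
column (a sketch -- the table and view names here are made up, and I'd
still prefer a real alias if one exists):

CREATE TABLE testtable (
    cell_name    int4
);

-- expose the same column under the other database's field name
CREATE VIEW testtable_alias AS
    SELECT cell_name AS file_name
    FROM testtable;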

Thanks.
-Tony




Re: [ADMIN] performance problem with version 7.0.3

2002-01-28 Thread Tony Reina

[EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]...
 Hello there
 
  Please help me in analyzing this output. After vacuum/analyze, the
 performance is ok for another month.
 

Are you regularly doing a VACUUM ANALYZE? This is a critical maintenance
step, because PostgreSQL doesn't reclaim the space from deleted/updated
tuples until you vacuum. To keep queries fast, you'll need to run VACUUM
ANALYZE regularly; how often depends on how heavily the database is
updated, deleted from, and inserted into.
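
For example (just a sketch -- "mytable" is a placeholder, and the right
schedule depends on your update rate), run by hand or from cron:

-- reclaim dead tuples and refresh the planner's statistics
VACUUM ANALYZE;

-- or target one heavily-updated table
VACUUM ANALYZE mytable;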

HTH,
-Tony




[ADMIN] Is this list working?

2002-01-10 Thread Tony Reina

I'm reading the list through Google Groups, but I haven't seen much
activity on admin, hackers, sql, etc. over the last few weeks. Are
there just not many posts, or is something screwy with the Google Groups
news server?

I was hoping to see the status of 7.2.

-Tony

[EMAIL PROTECTED]




[ADMIN] Inherited tables vs non-inherited tables

2002-01-02 Thread Tony Reina

I've been trying an inherited-table schema for my database and seem to
be getting a performance hit. The old table looked like this:

CREATE TABLE center_out (
    subject              text,
    arm                  char,
    target               int4,
    rep                  int4,
    success              int2,   -- end of primary key
    exp_date             date,
    exp_time             time,
    inter_target_radius  int4,
    target_radius        int4
);

Since the fields subject, arm, target, and rep appeared in just about
every other table as the primary key, I made it an inherited table for
the new schema:

CREATE TABLE center_out (
    subject    text,
    arm        char,
    target     int2,
    rep        int4,
    success    int2
);

CREATE UNIQUE INDEX pkcenter_out ON center_out (subject, arm, target,
rep, success);

CREATE TABLE center_out_behavior (
    exp_date             date,
    exp_time             time,
    inter_target_radius  int2,
    target_radius        int2
) INHERITS (center_out);

However, queries such as SELECT DISTINCT subject FROM center_out
seem to take 2-3 times longer in the new schema. Does this make sense?
I was hoping that the inherited information would lead to an
easier-to-follow, smaller, and faster database since there wouldn't be
redundant information across tables.
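
For anyone who wants to reproduce this, the comparison boils down to the
following (just a sketch; ONLY restricts the query to the parent table
and skips the inherited child):

-- plan for the default query, which includes inherited children
EXPLAIN SELECT DISTINCT subject FROM center_out;

-- plan for the parent table alone
EXPLAIN SELECT DISTINCT subject FROM ONLY center_out;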

Thanks.
-Tony 

 PostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC 2.96




Re: [ADMIN] Allowing TCP/IP connections

2001-12-31 Thread Tony Reina

Marc [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]...
 In the postgres configuration file, which options do I have to enable to
 allow TCP/IP connections:
 
 #
 #   Connection Parameters
 #
 #tcpip_socket = false
 #ssl = false
 
 #max_connections = 32 # 1-1024
 
 #port = 5432
 #hostname_lookup = false
 #show_source_port = false
 
 #unix_socket_directory = ''
 #unix_socket_group = ''
 #unix_socket_permissions = 0777
 
 #virtual_host = ''
 
 #krb_server_keyfile = ''

Set tcpip_socket = true, remove the leading # comment marker, and restart
the postmaster so the change takes effect.
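
So that section ends up looking like this (the rest can stay commented
out). Note that pg_hba.conf must also have an entry allowing the client
host to connect:

#
#   Connection Parameters
#
tcpip_socket = true
#ssl = false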

-Tony




Re: [ADMIN] Primary Key Problems

2001-12-28 Thread Tony Reina

[EMAIL PROTECTED] (Phill Kenoyer) wrote in message 
news:[EMAIL PROTECTED]...
 Here is a good one.  I have three fields set for my primary key.  Now I
 thought that a primary key was unique, and dups can not be inserted.
 

Actually, a primary key is unique by definition: when you declare a
PRIMARY KEY constraint, PostgreSQL creates a unique index on those
columns, so duplicates shouldn't get in if the constraint is really
there. I'd double-check that the constraint actually exists on the table
(psql's \d command will show it). Declared explicitly, your schema would
look something like:


CREATE TABLE inventory (
    stock           character varying(50) NOT NULL,
    inventory_type  character varying(20) DEFAULT 'unknown' NOT NULL,
    client_id       integer NOT NULL,
    [...]
    UNIQUE (stock, inventory_type, client_id),   -- redundant with the PK, but harmless
    CONSTRAINT inventory_pkey
        PRIMARY KEY (stock, inventory_type, client_id)
);

HTH,
-Tony


BTW, I'm using Google's newsgroup archive to view the Postgres messages,
but I haven't seen posted messages on the Hackers list in several days.
Does anyone know whether this is a Google glitch, or is the hackers list
just not very active during the holidays?




[ADMIN] psql option for writing schema

2001-11-30 Thread Tony Reina

I seem to remember a psql option that dumped out the table schema. I
can't seem to find it in the psql man page. Could someone point out the
option to me?

Thanks.
-Tony




Re: [ADMIN] pg_dump windows (what a mess...!)

2001-09-07 Thread Tony Reina

[EMAIL PROTECTED] wrote in message 
 connectDB() -- connect() failed: No such file or directory
 Is the postmaster running at 'localhost' and accepting connections on Unix
 socket '5432'?
 
 I figured that, although everything works fine through Windows clients,
 maybe from the command line it's not recognizing where Postgres is. Maybe
 I missed some configuration parameter or something else (TCP/IP perhaps?
 What's pg_hba.conf for? How can I change it?)

If your Windows clients are accessing the database correctly (via ODBC
I presume), then you probably just don't have the correct
environmental variables set for DOS.

Try using this:

psql -h hostname -d database_name

where hostname is the IP address of the machine containing the
database and database_name is the database name.

Alternatively, you can set up your DOS environmental variables. Add
the following to your autoexec.bat or any other file that will execute
when you drop into a DOS shell.

set pghost=host_computer_name
set pgdata=c:\directory_where_the_db_exists

where host_computer_name is the IP address for the machine containing
your database and c:\directory_where_the_db_exists is the directory
containing the data.

pg_hba.conf is the file that allows you to change which
users/computers can access the database. It's where you can set up
security in your connections. You should be able to edit it with a
standard text editor providing that you have read/write access to the
file.
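
For example, a line like the following (the address and mask are just
placeholders for your own network) lets machines on that subnet connect
with a password:

# TYPE   DATABASE   IP_ADDRESS     MASK             AUTH_TYPE
host     all        192.168.1.0    255.255.255.0    password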

-Tony




[ADMIN] Re: Finding number of current connections

2001-08-09 Thread Tony Reina

[EMAIL PROTECTED] (Tauren Mills) wrote in message 
news:[EMAIL PROTECTED]...
 I need to find a way to determine how many concurrent connections are being
 made to the database server at any given time.  

You may find what you are looking for with Bruce Momjian's pgmonitor
tool.

http://greatbridge.org/project/pgmonitor/projdisplay.php

Alternatively, you could just use a simple shell script. All of the
Postgres connections show up as separate processes whose command lines
begin with postgres:. So you could simply do:

ps ax | grep postgres:

on a RH Linux machine.

With some sed/awk commands, you could slice the process information even
finer. For example, to find out which users are logged into the
database at any given time:

ps ax | grep postgres: | awk '{print $6}' | sort | uniq

I'm sure if you fooled around with this, you could come up with
something tailored to your needs in no time.
-Tony
