FYI,
I upgraded the ODBC driver on the client and everything worked.
thanks
On May 3, 2007, at 1:45 PM, Warren Little wrote:
I'm getting the following error from a query generated by MS Access:
character 0xefbfbd of encoding "UTF8" has no equivalent in "LATIN9"
tigris=# show client_encoding;
 client_encoding
-----------------
 UTF8
(1 row)
What else should I be looking at?
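(Editorial note for the archives: the byte sequence in the error message is itself a clue. 0xEF 0xBF 0xBD is the UTF-8 encoding of U+FFFD, the Unicode replacement character, which usually means the text was already mangled by an earlier conversion before it ever reached the server. A quick check in plain Python, nothing Postgres-specific:)

```python
# 0xEF 0xBF 0xBD decodes to U+FFFD, the Unicode "replacement character"
# that decoders substitute for bytes they cannot interpret.
ch = b"\xef\xbf\xbd".decode("utf-8")
print(hex(ord(ch)))    # 0xfffd
print(ch == "\ufffd")  # True
```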
thanks
Warren Little
Chief Technology Officer
Meridias Capital Inc
ph 866.369.7763
some way to determine the timeline of the corrupted segment, i.e. what the original time of the last restored transaction was.
On Mar 30, 2007, at 5:16 AM, Simon Riggs wrote:
On Fri, 2007-03-23 at 17:16 -0600, Warren Little wrote:
My concern is that there were many more logfiles to be played
START WAL LOCATION: 11A/EE4E0060 (file 0001011A00EE)
STOP WAL LOCATION: 11A/EFF68AB8 (file 0001011A00EF)
CHECKPOINT LOCATION: 11A/EE4E0060
START TIME: 2007-03-17 20:29:16 MDT
LABEL: 076_pgdata.tar
STOP TIME: 2007-03-18 05:16:17 MDT
Does the line "incorrect resource manager data checksum in record at 11A/FD492B20" mean there is a corrupted WAL log file?
Any insight here would be helpful
version PG 8.1.2 64 bit Linux
thanks
a database shutdown, execute a full-database VACUUM in "preR14".
There are a few databases in this cluster (about 6).
Any suggestions would be greatly appreciated.
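(Editorial note: that hint is the transaction-ID wraparound warning, and the standard response is a database-wide VACUUM in each database. A sketch, assuming superuser access:)

```sql
-- Connect to the database named in the hint and run a database-wide VACUUM;
-- a plain VACUUM with no table name is what advances the wraparound counter.
\c preR14
VACUUM;
-- Repeat for each database in the cluster, or run "vacuumdb --all" from the shell.
```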
ly 35GB of "uncleaned"
data.
Is this a case where I should run the vacuum manually, or is autovacuum all that should be necessary to track and mark the updated tuple space as ready for re-use?
thanks
/indexes to help the planner select a better plan?
--
Husam
http://firstdba.googlepages.com
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Warren Little
Sent: Thursday, May 18, 2006 9:06 AM
To: pgsql-admin@postgresql.org
Hello,
My team is in the process of migrating some data from a MySQL (5.0) database to our core Postgres (8.1.3) database.
We are running into some performance issues with the Postgres versions of the queries:
MySQL takes about 150ms to run the query, whereas Postgres takes 2500ms.
The servers
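(Editorial note: for questions like this, the usual first step on the list is to post the plan. A sketch, with made-up table and column names:)

```sql
-- Refresh planner statistics, then compare estimated vs. actual row counts
-- in the plan; large mismatches usually point at stale statistics or a
-- missing index.
ANALYZE loan;                                              -- hypothetical table
EXPLAIN ANALYZE SELECT * FROM loan WHERE status = 'open';  -- the slow query goes here
```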
Tom,
thanks much for your help; the CLUSTER command did the trick.
FYI, running 8.1.2.
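(Editorial note for anyone finding this in the archives: the 8.1-era CLUSTER syntax was index-first. A sketch with hypothetical names:)

```sql
-- CLUSTER rewrites the table in index order; as a side effect it rewrites
-- the table's TOAST data too, reclaiming space a plain VACUUM could not.
CLUSTER casedocument_pkey ON casedocument;
```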
On Sat, 2006-04-29 at 14:48 -0400, Tom Lane wrote:
> Warren Little <[EMAIL PROTECTED]> writes:
> > Could this be the reference to the toast table that is preventing the
> > vacuum from de
this be the reference to the toast table that is preventing the
vacuum from deleting the toast data? And what purges "dropped" columns
if not a full vacuum?
On Sat, 2006-04-29 at 06:52 -0600, Warren Little wrote:
> I am now a little confused.
>
> I ran the following with
If I'm reading the output correctly, it appears no rows in the toast table were removed.
What else could be holding onto the data in "pg_toast_24216115"?
On Fri, 2006-04-28 at 16:03 -0400, Tom Lane wrote:
> Warren Little <[EMAIL PROTECTED]> writes:
> > 3
Tom,
I'll run the vacuum over the weekend and see how that goes.
And, yes, large PDF documents (4-24 MB apiece).
thanks
On Fri, 2006-04-28 at 16:03 -0400, Tom Lane wrote:
> Warren Little <[EMAIL PROTECTED]> writes:
> > 3) I know that once upon a time the table had a by
get rid of
the related toast data?
thanks
--
Warren Little
Chief Technology Officer
Meridias Capital Inc
1018 W Atherton Dr
Salt Lake City, UT 84123
ph: 866.369.7763
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
ok
thanks
On Wed, 2006-04-26 at 19:22 -0400, Tom Lane wrote:
> Warren Little <[EMAIL PROTECTED]> writes:
> > which appears to be the pg_toast entry. Shouldn't there be a pg_class
> > whose reltoastrelid equals the reltoastidxid of the pg_toast instance
>
> No.
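(Editorial note: one way to see which table owns a given TOAST table, sketched against the OID-bearing name from this thread:)

```sql
-- Each regular table's pg_class row points at its TOAST table via
-- reltoastrelid, so the owning table can be looked up directly.
SELECT c.relname AS owning_table
FROM pg_class c
WHERE c.reltoastrelid = 'pg_toast.pg_toast_24216115'::regclass;
```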
determine if I have some tuples that are not being
vacuumed.
thanks
Sorry,
forgot the attachment.
On Mon, 2006-01-02 at 15:24 -0700, warren little wrote:
> The dump/restore failed even with zero_damaged_pages=true.
> The logfile (postgresql-2006-01-02_130023.log) did not have much in the
> way of useful info. I've attached the section
dump have landed?
Regarding your comments about losing the evidence: the data I'm trying to load is in another database in the same cluster, which I have no intention of purging until I can get the table moved to the new database.
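(Editorial note: zero_damaged_pages is a session-settable superuser option, so it can be enabled just for the failing read. It zeroes the damaged pages rather than preserving them, so it is best used on a copy of the data. A sketch:)

```sql
SET zero_damaged_pages = on;        -- superuser only; zeroes pages with bad
                                    -- headers instead of raising an error
SELECT count(*) FROM casedocument;  -- hypothetical damaged table
SET zero_damaged_pages = off;
```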
thanks
On Mon, 2006-01-02 at 16:34 -0500, Tom Lane wrote:
arrives (about a week).
thanks
On Mon, 2006-01-02 at 15:10 -0500, Tom Lane wrote:
> warren little <[EMAIL PROTECTED]> writes:
> > I received the following error message when trying to copy a table from
> > one database to another on the same cluster:
>
> > pg_dump: Th
cluster back into production.
note running:
PostgreSQL 8.1beta4 on x86_64-unknown-linux-gnu, compiled by GCC gcc
(GCC) 3.4.4 20050721 (Red Hat 3.4.4-2)
FOREIGN KEY (typepid) REFERENCES
casedocumenttype (pid) ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH OIDS;
ALTER TABLE casedocument OWNER TO tigris;
Is there any way to determine what data the COPY doesn't like?
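(Editorial note: one common reason COPY rejects data is bytes that aren't valid in the server encoding. A sketch that scans a COPY data file for undecodable lines; the file name and function are made up for illustration:)

```python
# Report the line numbers in a raw COPY data file that are not valid UTF-8.
def bad_lines(raw: bytes):
    for n, line in enumerate(raw.splitlines(), 1):
        try:
            line.decode("utf-8")
        except UnicodeDecodeError:
            yield n

# e.g. raw = open("dump.txt", "rb").read()
sample = b"good row\n\xff broken row\nanother good row\n"
print(list(bad_lines(sample)))  # [2]
```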
thanks
, Tom Lane wrote:
Warren Little <[EMAIL PROTECTED]> writes:
> When I start postgres (pg_ctl -l logfile start). The following text is
> all that exists in the logfile:
> <2005-03-01 14:21:43 MST>FATAL: incorrect checksum in control file
> What control file is it
it.
When I start postgres (pg_ctl -l logfile start), the following text is
all that exists in the logfile:
<2005-03-01 14:21:43 MST>FATAL: incorrect checksum in control file
What control file is it referring to? Is there some way to generate
more verbose logging?
thanks
Warren
Hello,
Does PG 7.4.x support resizing a varchar column,
i.e. varchar(30) -> varchar(200)?
If not, does the feature in 8.0 for changing column types support this?
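(Editorial note: the 8.0 feature does cover this. A sketch with hypothetical table and column names:)

```sql
-- 8.0+: change the column's declared type in place.
ALTER TABLE borrower ALTER COLUMN name TYPE varchar(200);
```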
thx
--
Warren Little
Senior Vice President
Secondary Markets and IT Manager
Security Lending Wholesale,
Thanks to all who responded.
Found the pg_dumplo tool in contrib, which did exactly what I needed.
On Wed, 2004-02-18 at 05:54, Jeff Boes wrote:
> At some point in time, [EMAIL PROTECTED] (Warren Little) wrote:
>
> >I migrated my database from 7.3 to 7.4 this weekend using the pg_dum
. Another
option I was looking at was to restore the archived database with the
blobs intact and then restore the production version over the top
without destroying the blob data.
All suggestions welcome, I'm dying here.
I'm in a bit of a pickle on this, so if anyone has an immediate
suggestion, it would be very much appreciated.
On Tue, 2004-02-17 at 10:10, Warren Little wrote:
> In an attempt to migrate from 7.3 to 7.4 doing a pg_dumpall I did not
> get any of my large objects. Is there a special pr
In an attempt to migrate from 7.3 to 7.4 using pg_dumpall, I did not
get any of my large objects. Is there a special process which needs to
take place, and is there a way to simply copy the large objects
separately?
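(Editorial note: 7.x pg_dumpall did not dump large objects, but a per-database pg_dump in a non-plain format could include them. A sketch with hypothetical database names:)

```shell
# Custom-format dump with blobs (-b), then restore into the new database.
pg_dump -Fc -b -f mydb.dump mydb
pg_restore -d newdb mydb.dump
```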
might be causing this?
Note: we call setAutoCommit(false) on every connection when it is created.
--
Warren Little
Senior Vice President
Secondary Marketing
Security Lending Wholesale, LC
www.securitylending.com
Tel: 866-369-7763
Fax: 866-849-8082
Is there any way to override the statement_timeout value set
in postgresql.conf when creating a connection?
I would like the value set to something like 60 seconds on a
regular basis, but my vacuumdb runs take longer and time out without
completing the vacuum.
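(Editorial note: statement_timeout can be overridden per session or persisted per role; "maint" below is a hypothetical role for the vacuum job:)

```sql
-- Per-session override, issued right after connecting:
SET statement_timeout = 0;                  -- 0 disables the timeout
-- Or persist it for the role that runs vacuumdb:
ALTER USER maint SET statement_timeout = 0;
```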