Hi,
I need to be able to store special characters (German umlauts) in my tables. This
works when using phpPgAdmin to store the same value to the same field, but when
using the C library it doesn't: the stored fields are garbled.
I checked with \l which encoding the database uses; it is UTF8.
Sorry, there is a copy-paste error, actually the code really is:
const char *cString = [sql cStringUsingEncoding:defaultEncoding];
if (cString == NULL) {
    // This just catches cases where cString failed to encode;
    // the error handling itself is elided here.
}
res = PQexec(pgconn, cString);
On 22.03.2012 at 09:02, Oliver Kohll - Mailing Lists wrote:
I'm doing some SELECTs from information_schema.views to find views
with dependencies on other views,
i.e.
SELECT table_name FROM information_schema.views WHERE view_definition
ILIKE '%myviewname%';
and each is taking about half a second, which is getting
On 22 Mar 2012, at 10:17, Albe Laurenz wrote:
Or is there a better way of finding view dependencies? I see there's a
pg_catalog entry for tables
that a view depends on but that's not what I'm after.
You can use pg_depend and pg_rewrite as follows:
SELECT DISTINCT r.ev_class::regclass
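The quoted query is cut off above; for reference, a sketch of how the pg_depend/pg_rewrite join is typically written (the view name 'myview' is a placeholder):

```sql
-- Find views whose rewrite rules depend on the given view.
-- 'myview'::regclass is a placeholder for the view being inspected.
SELECT DISTINCT r.ev_class::regclass AS dependent_view
FROM pg_depend d
JOIN pg_rewrite r ON r.oid = d.objid
WHERE d.refobjid = 'myview'::regclass
  AND r.ev_class <> 'myview'::regclass;
```

This resolves the dependency through the catalogs directly, avoiding the ILIKE scan over view_definition.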
Hi,
I found out that there is apparently an error in PGSQLKit, either in PQescapeStringConn
or in the way it is being used.
From the documentation I take it this is meant to prevent SQL injection attacks. I removed
the escaping step, and the issue ceased; everything works fine.
The call is here:
-(NSString
Alexander Reichstadt wrote:
I need to be able to store special characters (German umlauts) in my tables. This
works when using
phpPgAdmin to store the same value to the same field, but when using the
C library it doesn't: the stored fields
are garbled.
I checked using \l to see what encoding the
Thanks, Albe, I had checked this too, and it was OK. I already posted the
solution to the board. It was an error due to an incorrect conversion between
an object instance and a C string (char *). It works now.
On 22.03.2012 at 12:50, Albe Laurenz wrote:
Alexander Reichstadt wrote:
I need to be able to
Help needed in parsing PostgreSQL CSV Log
Hello friends,
I am working on a section of an application which needs to parse CSV logs
generated by a PostgreSQL server.
- The logs are stored in C:\Program Files\PostgreSQL\9.0\data\pg_log
- The server version is 9.0.4
- The application is developed in C
Hey!
On a host that I'm currently in the process of migrating, I'm
experiencing massive memory usage when importing the dump (generated
using a plain pg_dump without options) using psql. The massive memory
usage happens when the CREATE INDEX commands are executed, and for a
table with about
On Thu, 2012-03-22 at 09:32, Arvind Singh wrote:
Help needed in parsing PostgreSQL CSV Log
Hello friends,
I am working on a section of an application which needs to parse CSV logs
generated by a PostgreSQL server.
- The logs are stored in C:\Program Files\PostgreSQL\9.0\data\pg_log
- The
On Wed, Mar 21, 2012 at 2:31 PM, Kjetil Nygård polpo...@gmail.com wrote:
Hi,
We are considering migrating some of our databases to PostgreSQL.
We wonder if someone could give some hardware / configuration specs for
large PostgreSQL installations.
We're interested in:
- Number of
Heiko Wundram modeln...@modelnine.org writes:
On a host that I'm currently in the process of migrating, I'm
experiencing massive memory usage when importing the dump (generated
using a plain pg_dump without options) using psql. The massive memory
usage happens when the CREATE INDEX commands
On Thu, Mar 22, 2012 at 8:46 AM, Merlin Moncure mmonc...@gmail.com wrote:
large result sets) or cached structures like plpgsql plans. Once you
go over 50% memory into shared, it's pretty easy to overcommit your
server and burn yourself. Of course, 50% of 256GB server is a very
different
On Thu, Mar 22, 2012 at 10:02 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, Mar 22, 2012 at 8:46 AM, Merlin Moncure mmonc...@gmail.com wrote:
large result sets) or cached structures like plpgsql plans. Once you
go over 50% memory into shared, it's pretty easy to overcommit your
Arvind Singh wrote:
Help needed in parsing PostgreSQL CSV Log
[...]
**However, the main problem is that the log format is not readable.**
A Sample Log data line
2012-03-21 11:59:20.640 IST,postgres,stock_apals,3276,localhost:1639,4f697540.ccc,10,idle,2012-03-21 11:59:20
On Thu, Mar 22, 2012 at 9:29 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Thu, Mar 22, 2012 at 10:02 AM, Scott Marlowe scott.marl...@gmail.com
wrote:
There's other issues you run into with large shared_buffers as well.
If you've got a large shared_buffers setting, but only regularly hit a
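As a rough illustration of the sizing rule being discussed, a hypothetical postgresql.conf fragment for a 32 GB machine (all values illustrative, not recommendations):

```
shared_buffers = 8GB            # well under 50% of RAM, leaving room for the OS cache
maintenance_work_mem = 1GB      # per-operation memory used by CREATE INDEX, VACUUM
effective_cache_size = 24GB     # planner hint only; nothing is actually allocated
```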
Thank you sir,
I have sorted out the problem on
The columns that are not quoted are guaranteed not to contain a comma.
But I have another query: the structure of the PG log CSV, as mentioned in the manual
and shown below, has 24 columns
On Thu, Mar 22, 2012 at 10:57 AM, Scott Marlowe scott.marl...@gmail.com wrote:
On Thu, Mar 22, 2012 at 9:29 AM, Merlin Moncure mmonc...@gmail.com wrote:
On Thu, Mar 22, 2012 at 10:02 AM, Scott Marlowe scott.marl...@gmail.com
wrote:
There's other issues you run into with large shared_buffers
On 22.03.2012 at 15:48, Tom Lane wrote:
What PG version are we talking about, and what exactly is the
problematic index?
Index is on (inet, integer, smallint, timestamp w/o timezone), btree
and a primary key.
There was a memory leak in the last-but-one releases for index
operations on inet
2012/3/21 Tom Lane t...@sss.pgh.pa.us
BTW, I experimented with that a little bit and found that the relmapper
is not really the stumbling block, at least not after applying this
one-line patch:
On Thu, Mar 22, 2012 at 00:20, Martijn van Oosterhout klep...@svana.org wrote:
That, and a good RAID controller with BBU cache will go a long way to
relieving the pain of fsync.
Well a BBU cache RAID is helpful, but fsyncs are a minor problem in
data warehouse workloads, since inserts are done
Fabrízio de Royes Mello fabriziome...@gmail.com writes:
2012/3/21 Tom Lane t...@sss.pgh.pa.us
BTW, I experimented with that a little bit and found that the relmapper
is not really the stumbling block, at least not after applying this
one-line patch:
Hi all.
Database version used 9.0.4 on FreeBSD 7.3.
Today, after a restart of the replica DB, I got the following warning in the log:
2012-03-23 03:10:08.221 MSK 55096 @ from [vxid:1/0 txid:0] []LOG:
invalid resource manager ID 128 at 44E/4E7303B0
I searched the mailing lists, but I am still not sure whether it
v8.3.4 on Linux
I have a check constraint on a column. The constraint decides pass/fail based
on the returned status of a stored procedure call that returns either OK or
NO. So whenever the stored procedure is called, an insert or update of a
record is being attempted.
Question: Is there a
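A hypothetical sketch of the setup described (table, column, and function names are invented for illustration); the function should be immutable for the constraint to behave predictably:

```sql
-- Function that decides pass/fail, returning 'OK' or 'NO'.
CREATE FUNCTION record_status(val text) RETURNS text AS $$
BEGIN
    IF val IS NOT NULL AND val <> '' THEN
        RETURN 'OK';
    END IF;
    RETURN 'NO';
END;
$$ LANGUAGE plpgsql IMMUTABLE;

-- The CHECK constraint delegates its decision to the function.
ALTER TABLE mytable
    ADD CONSTRAINT mytable_val_ok CHECK (record_status(val) = 'OK');
```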
I am very confused after reading the guide as follows. Does it mean I only need
to set search_path so that pg_temp is the last entry, or do I need to
configure search_path and at the same time create the function as SECURITY
DEFINER? Can anybody help me?
Thank you very much. I really appreciate
Is SECURITY DEFINER a built-in function, or do I need to create a SECURITY DEFINER
function first so that users can call it? How does it work? I am pretty new to PostgreSQL.
Please help. Thanks.
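To the question above: SECURITY DEFINER is not a built-in function but a property you attach to a function you create, and the guide's advice is to combine it with a pinned search_path in which pg_temp comes last. A sketch following that documented pattern (schema, table, and function names are placeholders):

```sql
CREATE FUNCTION check_password(uname text, pass text)
RETURNS boolean AS $$
    SELECT EXISTS (SELECT 1 FROM admin.pwds
                   WHERE username = uname AND pwd = pass);
$$ LANGUAGE sql
   SECURITY DEFINER
   -- Pin the search_path so a caller cannot substitute malicious
   -- objects; pg_temp goes last so temporary objects never shadow
   -- the trusted schema.
   SET search_path = admin, pg_temp;
```

The function then runs with the privileges of its owner, and users are simply granted EXECUTE on it.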
Hello,
With a database admin of a commercial database system I've discussed that
they have to provide, and actually achieve, 2^31 transactions per SECOND!
As PostgreSQL uses transaction IDs (XIDs) in the range of 2^31, these
would wrap around in about one second.
How can one achieve this with