Re: [GENERAL] ERROR: could not read block 4707 of relation 1663/16384/16564: Success
I am not sure how big your table is, but one way we handled this here was to select the clean rows and output them to a CSV file. The affected rows we had to load from a backup; luckily we had a clean backup.

For example: assume you have rows 1-100 and the corrupted range is 60-70. I output the clean rows 1-59 and 71-100 to a CSV file and loaded them into a new table. The corrupted range was restored from the backup. This is just one way of doing it; there may be better approaches the experts here can suggest, and I am interested to see other answers as well. My way is time consuming, and if you have a very large table (or several tables affected) it is a nightmare to fix them. Good luck with your recovery.

Thanks
Deepak

On Jul 31, 2011, at 11:27 PM, Deniz Atak wrote:

> Deepak, Tom, thanks for answering.
>
> Tom, we have psql 8.1.18. So you are right, this weird message is because of
> the old version. I will check with my colleague about the possible reasons.
> What can I do if there is a messed-up table?
>
> Regards,
> Deniz
>
> On Sat, Jul 30, 2011 at 11:45 PM, Tom Lane wrote:
> Deniz Atak writes:
> > I am using postgresql on Glassfish server and I have EJB 3.0 for ORM. I am
> > trying to run a query in PSQL but receiving following error:
> >
> > Local Exception Stack:
> > Exception [EclipseLink-4002] (Eclipse Persistence Services -
> > 2.0.0.v20091031-r5713): org.eclipse.persistence.exceptions.DatabaseException
> > Internal Exception: org.postgresql.util.PSQLException: ERROR: could not read
> > block 4707 of relation 1663/16384/16564: Success
>
> What Postgres server version is that?
>
> If it's 8.2 or older, this probably indicates a partial block at the end
> of the file. Newer versions produce a more sensible error message for
> the case, but that's just cosmetic --- the real problem is a messed-up
> table. Have you had a filesystem corruption or an out-of-disk-space
> condition on this machine?
>
> regards, tom lane
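The procedure above can be sketched in SQL. This is a minimal illustration only, assuming a hypothetical table `events` with an integer `id` column and corruption in rows 60-70; note that `COPY (SELECT ...)` requires PostgreSQL 8.2 or later, so on 8.1 you would capture the SELECT output with psql's `\o` instead:

```sql
-- Export the readable rows on either side of the corrupted range.
-- (Hypothetical table and column names; adjust to your schema.)
COPY (SELECT * FROM events WHERE id <= 59) TO '/tmp/clean_low.csv'  WITH CSV;
COPY (SELECT * FROM events WHERE id >= 71) TO '/tmp/clean_high.csv' WITH CSV;

-- Rebuild the table and load the clean rows back in.
CREATE TABLE events_recovered (LIKE events INCLUDING ALL);
COPY events_recovered FROM '/tmp/clean_low.csv'  WITH CSV;
COPY events_recovered FROM '/tmp/clean_high.csv' WITH CSV;

-- Rows 60-70 must then be restored from a known-good backup.
```

The server-side `COPY ... TO '/path'` form writes files as the postgres OS user; from a client machine, psql's `\copy` variant is the usual substitute.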
Re: [GENERAL] ERROR: could not read block 4707 of relation 1663/16384/16564: Success
My guess is that someone moved the data folder, or the directory got deleted (/var/lib/pgsql/9.0/data/.../...1663/16384/16564), without a server restart. I am sure some of the experts here are going to answer this very well.

Thanks
Deepak

On Jul 30, 2011, at 2:01 AM, Deniz Atak wrote:

> Hi,
>
> I am using postgresql on Glassfish server and I have EJB 3.0 for ORM. I am
> trying to run a query in PSQL but receiving following error:
>
> Local Exception Stack:
> Exception [EclipseLink-4002] (Eclipse Persistence Services -
> 2.0.0.v20091031-r5713): org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: org.postgresql.util.PSQLException: ERROR: could not read
> block 4707 of relation 1663/16384/16564: Success
> Error Code: 0
> Call: select src_ip from table where date > '2011.07.29' AND date <
> '2011.07.30' AND ( (src_ip = 'anIP' ) OR ( src_ip = 'anotherIP' ) )
> group by src_ip;
> bind => [2011-07-29 00:00:00.0, 2011-07-30 00:00:00.0, 195.122.20.236,
> 195.122.20.88, 500, 0]
> Query: ResultSetMappingQuery(sql="select src_ip from table where date >
> '2011.07.29' AND date < '2011.07.30' AND ( (src_ip = 'anIP' ) OR ( src_ip
> = 'anotherIP' ) ) group by src_ip; ")
>
> It is kind of weird to have an error that ends with success :)
>
> Do you have any opinion about this problem? Thanks in advance.
>
> Deniz
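For reference, the three numbers in the error (1663/16384/16564) are the tablespace OID, the database OID, and the relation's filenode. A sketch of how to identify which table is affected, using standard system catalogs (the literal OIDs below are taken from the error message above):

```sql
-- Which database has OID 16384? (run from any database)
SELECT datname FROM pg_database WHERE oid = 16384;

-- Then, connected to that database: which relation has filenode 16564?
SELECT relname, relkind FROM pg_class WHERE relfilenode = 16564;
```

Knowing the table name is the first step before attempting any per-table recovery such as the CSV-export approach discussed elsewhere in this thread.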
Re: [GENERAL] SHMMAX and SHMALL question
Thank you so much for the script.

~deepak

On Jan 22, 2011, at 10:18 AM, Greg Smith wrote:

> DM wrote:
>> RAM = 16GB, what value should i set for shmall?
>
> Given that PostgreSQL rarely sees increasing improvement as shared_buffers
> goes over 50% of RAM, I just use that figure for shmall and then compute
> shmmax based on the page size to match it. I use the attached script to do
> all the hard work; I haven't found a Linux system yet where it didn't do the
> right thing. It sounds like you might have the math on the relation between
> the two backwards; look at the output and code of this once and that should
> sort things out for you.
>
> --
> Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
> "PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
>
> #!/bin/bash
>
> # Output lines suitable for sysctl configuration based
> # on total amount of RAM on the system. The output
> # will allow up to 50% of physical memory to be allocated
> # into shared memory.
>
> # On Linux, you can use it as follows (as root):
> #
> # ./shmsetup >> /etc/sysctl.conf
> # sysctl -p
>
> # Early FreeBSD versions do not support the sysconf interface
> # used here. The exact version where this works hasn't
> # been confirmed yet.
>
> page_size=`getconf PAGE_SIZE`
> phys_pages=`getconf _PHYS_PAGES`
>
> if [ -z "$page_size" ]; then
>   echo Error: cannot determine page size
>   exit 1
> fi
>
> if [ -z "$phys_pages" ]; then
>   echo Error: cannot determine number of memory pages
>   exit 2
> fi
>
> shmall=`expr $phys_pages / 2`
> shmmax=`expr $shmall \* $page_size`
>
> echo \# Maximum shared segment size in bytes
> echo kernel.shmmax = $shmmax
> echo \# Maximum total shared memory in pages
> echo kernel.shmall = $shmall

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
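As a sanity check of the script's arithmetic for the 16 GB machine asked about above, here is the same calculation mirrored in Python (the 4 KiB page size is an assumption; it is the common Linux default, but `getconf PAGE_SIZE` is authoritative on a real system):

```python
# Mirror the shmsetup arithmetic for a hypothetical 16 GB box with 4 KiB pages.
ram_bytes = 16 * 1024**3            # 16 GB of physical RAM (assumed)
page_size = 4096                    # typical Linux PAGE_SIZE (assumed)

phys_pages = ram_bytes // page_size  # what getconf _PHYS_PAGES would report
shmall = phys_pages // 2             # in PAGES: allow up to 50% of RAM
shmmax = shmall * page_size          # in BYTES: largest single segment

print(f"kernel.shmmax = {shmmax}")   # 8589934592 (8 GB)
print(f"kernel.shmall = {shmall}")   # 2097152 pages
```

This also shows the relationship Greg describes: shmall is counted in pages and shmmax in bytes, which is the detail that is easy to get backwards.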