Yuri, are you sure the pg xlogs are going where you expect them to? Have you 
gone through your conf file with a fine-tooth comb for log-file-related settings? 
Log files may well be directed to a filesystem other than /data/postgres (as is 
common in our environments, for example).
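
One quick way to confirm where things actually land (the SHOW commands are stock 
PostgreSQL; the /data/postgres path below is just the one you quoted):

  $ psql -c "SHOW data_directory;"
  $ psql -c "SHOW log_directory;"        # a relative value lives under data_directory
  $ psql -c "SHOW logging_collector;"
  $ ls -l /data/postgres/pg_xlog         # is pg_xlog a symlink onto another filesystem?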

Do a $ df -h on the various FSes involved...
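
For example (directories here assume the stock layout under /data/postgres; 
adjust to whatever the SHOW commands above report):

  $ df -h /data/postgres /data/postgres/pg_xlog /tmp /var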

Are you using Solaris 10 ACLs? Dig deeper into Tom's point about user-specific 
quotas. Is ZFS in use? The various quota settings under Solaris can get you really 
unexpected mileage.
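
If ZFS is involved, a rough sketch of what to look at (dataset and user names 
below are placeholders, not your actual ones):

  $ zfs list -o name,used,avail,quota,reservation -r yourpool
  $ zfs get -r quota,reservation yourpool/postgres
  $ quota -v postgres        # UFS user quota for the postgres user, if any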

Lou Picciano

----- Original Message -----
From: Yuri Levinsky 
To: pgsql-bugs@postgresql.org
Sent: Tue, 25 Jun 2013 12:23:00 -0000 (UTC)
Subject: [BUGS] Postgres crash? could not write to log file: No space left on 
device

Dear All,

I have the following issue on Sun Solaris 10, PostgreSQL version 9.2.3. WAL 
logging is set to minimal and there is no archiving. The DB has restarted several 
times; the box has been up for the last 23 days. The PostgreSQL installation and 
its files are under /data/postgres, which is half empty. Could some other 
destination be causing the problem? Can I log the space consumption and the 
directory where the problem is happening via some debug level or trace setting?

PANIC:  could not write to log file 81, segment 125 at offset 13959168, length 1392640: No space left on device
LOG:  process 10203 still waiting for ShareLock on transaction 3010915 after 1004.113 ms
STATEMENT:  UPDATE tctuserinfo SET clickmiles = clickmiles + $1, periodicalclickmiles = periodicalclickmiles + $2, active = $3, activeupdatetime = $4, activationsetby = $5, smscid = $6, sdrtime = $7, simvalue = simvalue + $8, totalsimvalue = totalsimvalue + $9, firstclick = $10, lastclick = $11, firstactivationtime = $12, cbchannel = $13, clickmilesupdatetime = $14, ci = $15, lac = $16, bscid = $17, lastlocationupdatetime = $18, subscriptiontype = $19, contentcategory = $20, livechannels = $21, contextclicks = $22 WHERE phonenumber = $23
LOG:  WAL writer process (PID 10476) was terminated by signal 6
LOG:  terminating any other active server processes
FATAL:  the database system is in recovery mode

But it looks OK:

dbnetapp:/vol/postgres    90G    44G    46G    49%    /data/postgres

Is it possible that “heavy” queries were consuming disk space (as temporary 
space), and that after the crash and recovery it became OK again?
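
For the trace question, one approach worth trying (a sketch only, assuming the 
default directory layout under /data/postgres): log_temp_files in postgresql.conf 
logs every temporary file with its size when it is removed, and in 9.2 the 
pg_stat_database view carries cumulative temp-file counters per database.

  log_temp_files = 0          # in postgresql.conf: log every temp file and its size
  $ psql -c "SELECT datname, temp_files, temp_bytes FROM pg_stat_database;"
  $ du -sk /data/postgres/pg_xlog /data/postgres/base/pgsql_tmp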
