Hi,
In /var/spool/cups there is a tmp directory that I can only open as root,
not as a normal user. Then I did:
sed -i 's/LogLevel warn/LogLevel debug2/' /etc/cups/cupsd.conf
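If sed did what I intended, /etc/cups/cupsd.conf should now contain the
line "LogLevel debug2"; I would double-check it with something like:
grep -i loglevel /etc/cups/cupsd.conf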

Then, as root, I restarted CUPS with /etc/rc.d/init.d/cups restart
and tried to print (lp File.txt). Nothing changed.
I kept searching and tried lpq. Here is the output:
lpq: get-jobs failed: client-error-bad-request
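If it would help, I can also try lpstat -o or lpq -P Samsung (the queue
name from the log below), but I have not run those yet.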

Manage Jobs in the web interface gave the same result as in my previous mail.
To finish, here are the lines from my error_log that concern my last lp
before I wrote this mail.
I [13/Sep/2005:18:23:02 +0200] Listening to 0:631
I [13/Sep/2005:18:23:02 +0200] Loaded configuration file "/etc/cups/cupsd.conf"
I [13/Sep/2005:18:23:02 +0200] Configured for up to 100 clients.
I [13/Sep/2005:18:23:02 +0200] Allowing up to 100 client connections per host.
I [13/Sep/2005:18:23:02 +0200] Full reload is required.
I [13/Sep/2005:18:23:02 +0200] LoadPPDs: Read "/etc/cups/ppds.dat", 16 PPDs...
I [13/Sep/2005:18:23:02 +0200] LoadPPDs: No new or changed PPDs...
I [13/Sep/2005:18:23:02 +0200] Full reload complete.
I [13/Sep/2005:18:23:21 +0200] Adding start banner page "none" to job 8.
I [13/Sep/2005:18:23:21 +0200] Adding end banner page "none" to job 8.
I [13/Sep/2005:18:23:21 +0200] Job 8 queued on 'Samsung' by 'root'.
I [13/Sep/2005:18:23:21 +0200] Started filter /usr/lib/cups/filter/texttops (PID 3116) for job 8.
I [13/Sep/2005:18:23:21 +0200] Started filter /usr/lib/cups/filter/pstops (PID 3117) for job 8.
I [13/Sep/2005:18:23:21 +0200] Started filter /usr/lib/cups/filter/foomatic-rip (PID 3118) for job 8.
I [13/Sep/2005:18:23:21 +0200] Started backend /usr/lib/cups/backend/parallel (PID 3119) for job 8.
E [13/Sep/2005:18:23:21 +0200] PID 3118 stopped with status 22!
I [13/Sep/2005:18:23:21 +0200] Hint: Try setting the LogLevel to "debug" to find out more
-- 
http://linuxfromscratch.org/mailman/listinfo/blfs-support
FAQ: http://www.linuxfromscratch.org/blfs/faq.html
Unsubscribe: See the above information page
