> I suggest that you get a little more data.
> 
> First, LPRng has more debugging stuff than a case of banned
> DDT.  You can enable this by doing:
> 
> lpc debug colorA 4
> lpc debug tek350 4
> lpc debug tek380 4
> lpc debug optra40 4
> 
> This is the overkill value... 2 is not as chatty.

> Now you fire up a job:
> 
> echo hi |lpr -PcolorA

I had already done this earlier after looking through the FAQ.
Excellent FAQ by the way.

> and you can look in the log files in the spool directory for more
> tracing information than you will ever want.

The colorA load-balancing queue seems to work fine, and generates
a tremendous log file of ~5900 lines. I am poking through it,
trying to compare it with the optra40 output, which is only
79 lines long. Direct printing to the subqueues, like optra40,
is what is not working for me.
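
In case the setup matters: the printcap layout follows the
load-balancing recipe from the LPRng documentation, with colorA
listing the printers as subservers via sv= and each subqueue
pointing back at it via ss=. Roughly like this (paraphrased from
memory, not a paste of my real printcap; the tek entries are
elided, and only optra40's lp= matches my actual setup):

    colorA:
            :sv=tek350,tek380,optra40
            :sd=/var/spool/lpd/colorA

    optra40:
            :ss=colorA
            :lp=L1@netgear
            :sd=/var/spool/lpd/optra40

    tek350:
            :ss=colorA
            :lp=...
            :sd=/var/spool/lpd/tek350

(tek380 has the same shape as tek350.)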

Here's a copy of the debug level 4 output from optra40, a subqueue.
I'm not clear enough on the meaning of some of the debug output,
or on what SHOULD be happening, to say what is missing. I am assuming
the line near the end, "Do_queue_jobs: nothing to do", indicates
there is a problem, but I'm not sure what it is. I may try
LPRng-3.4.1BETA just for comparison purposes if further
analysis doesn't get me anywhere.

2000-10-16-11:17:01.331 q [6543] (Server)  optra40: Find_str_value: key 'debug', value 
'4'
2000-10-16-11:17:01.331 q [6543] (Server)  optra40: Split: str 0x80a92fe '4', sep '    
 ,;:', sort 0, keysep '<NULL>', uniq 0, trim 0
2000-10-16-11:17:01.332 q [6543] (Sub)  optra40: Find_str_value: key 'printer', value 
'optra40'
2000-10-16-11:17:01.332 q [6543] (Sub)  optra40: Update_spool_info: printer 'optra40'
2000-10-16-11:17:01.332 q [6543] (Sub)  optra40: Find_str_value: key 'spooldir', value 
'/var/spool/lpd/optra40'
2000-10-16-11:17:01.332 q [6543] (Sub)  optra40: Find_str_value: key 
'queue_control_file', value 'control.optra40'
2000-10-16-11:17:01.332 q [6543] (Sub)  optra40: Find_str_value: key 'printer', value 
'optra40'
2000-10-16-11:17:01.333 q [6543] (Sub)  optra40: Find_str_value: key 'hf_name', value 
'<NULL>'
2000-10-16-11:17:01.333 q [6543] (Sub)  optra40: Find_value: key 'server', value '0'
2000-10-16-11:17:01.333 q [6543] (Sub)  optra40: Find_flag_value: key 'server', value 
'0'
2000-10-16-11:17:01.333 q [6543] (Sub)  optra40: Find_value: key 'done_time', value '0'
2000-10-16-11:17:01.333 q [6543] (Sub)  optra40: Find_flag_value: key 'done_time', 
value '0'
2000-10-16-11:17:01.334 q [6543] (Sub)  optra40: Get_spool_control: dir 
'/var/spool/lpd/optra40', control file 'control.optra40', path 
'/var/spool/lpd/optra40/control.optra40'
2000-10-16-11:17:01.334 q [6543] (Sub)  optra40: Get_file_image: 
'/var/spool/lpd/optra40/control.optra40', maxsize 0
2000-10-16-11:17:01.334 q [6543] (Sub)  optra40: Checkread: file 
'/var/spool/lpd/optra40/control.optra40'
2000-10-16-11:17:01.334 q [6543] (Sub)  optra40: Checkread: 
'/var/spool/lpd/optra40/control.optra40' fd 5, size 19
2000-10-16-11:17:01.335 q [6543] (Sub)  optra40: Get_fd_image: fd 5
2000-10-16-11:17:01.335 q [6543] (Sub)  optra40: Get_fd_image: len 19
2000-10-16-11:17:01.335 q [6543] (Sub)  optra40: Split: str 0x80a92d8 'change=0x1
debug=4
', sep '
^L^D^T', sort 1, keysep '       =#@', uniq 1, trim 1
2000-10-16-11:17:01.335 q [6543] (Sub)  optra40: Dump_line_list: Get_spool_control- 
info - 0x80a9460, count 2, max 102, list 0x80af978
2000-10-16-11:17:01.335 q [6543] (Sub)  optra40:   [ 0] 0x80a9260 ='change=0x1'
2000-10-16-11:17:01.336 q [6543] (Sub)  optra40:   [ 1] 0x80af370 ='debug=4'
2000-10-16-11:17:01.336 q [6543] (Sub)  optra40: Find_value: key 'change', value '0x1'
2000-10-16-11:17:01.336 q [6543] (Sub)  optra40: Find_flag_value: key 'change', value 
'1'
2000-10-16-11:17:01.336 q [6543] (Sub)  optra40: Init_tempfile: temp file 
'/var/spool/lpd/optra40'
2000-10-16-11:17:01.337 q [6543] (Sub)  optra40: Make_temp_fd: fd 5, name 
'/var/spool/lpd/optra40/temp001Abagj'
2000-10-16-11:17:01.337 q [6543] (Sub)  optra40: Set_spool_control: path 
'/var/spool/lpd/optra40/control.optra40', tempfile 
'/var/spool/lpd/optra40/temp001Abagj'
2000-10-16-11:17:01.337 q [6543] (Sub)  optra40: Dump_line_list: Set_spool_control- 
info - 0x80a79a4, count 2, max 102, list 0x80af468
2000-10-16-11:17:01.338 q [6543] (Sub)  optra40:   [ 0] 0x80aeeb0 ='change=0x0'
2000-10-16-11:17:01.338 q [6543] (Sub)  optra40:   [ 1] 0x80a92f8 ='debug=4'
2000-10-16-11:17:01.338 q [6543] (Sub)  optra40: *** Dump_subserver_info: 
'Do_queue_jobs - after setup' - 1 subservers
2000-10-16-11:17:01.339 q [6543] (Sub)  optra40:  server 0 - 0x80a9460, count 7, max 
102, list 0x80af978
2000-10-16-11:17:01.339 q [6543] (Sub)  optra40:   [ 0] 0x80aedd0 ='change=0x0'
2000-10-16-11:17:01.339 q [6543] (Sub)  optra40:   [ 1] 0x80af370 ='debug=4'
2000-10-16-11:17:01.339 q [6543] (Sub)  optra40:   [ 2] 0x80af350 ='done_time=0x0'
2000-10-16-11:17:01.339 q [6543] (Sub)  optra40:   [ 3] 0x80a92d8 ='printer=optra40'
2000-10-16-11:17:01.339 q [6543] (Sub)  optra40:   [ 4] 0x80aee88 
='queue_control_file=control.optra40'
2000-10-16-11:17:01.340 q [6543] (Sub)  optra40:   [ 5] 0x80afb18 ='server=0'
2000-10-16-11:17:01.340 q [6543] (Sub)  optra40:   [ 6] 0x80aee60 
='spooldir=/var/spool/lpd/optra40'
2000-10-16-11:17:01.340 q [6543] (Sub)  optra40: Do_queue_jobs: after subservers next 
fd 5
2000-10-16-11:17:01.340 q [6543] (Sub)  optra40: Scan_queue: START dir 
'/var/spool/lpd/optra40'
2000-10-16-11:17:01.341 q [6543] (Sub)  optra40: Scan_queue: printable 0, held 0, move 0
2000-10-16-11:17:01.341 q [6543] (Sub)  optra40: Do_queue_jobs: printable 0, held 0, 
move 0
2000-10-16-11:17:01.341 q [6543] (Sub)  optra40: Do_queue_jobs: after Scan_queue next 
fd 5
2000-10-16-11:17:01.341 q [6543] (Sub)  optra40: Do_queue_jobs: MAIN LOOP
2000-10-16-11:17:01.341 q [6543] (Sub)  optra40: Do_queue_jobs: MAIN LOOP next fd 5
2000-10-16-11:17:01.342 q [6543] (Sub)  optra40: Unlink_tempfiles: unlinking 
'/var/spool/lpd/optra40/temp001Abagj'
2000-10-16-11:17:01.342 q [6543] (Sub)  optra40: Do_queue_jobs: Susr1 before scan 0
2000-10-16-11:17:01.342 q [6543] (Sub)  optra40: Dump_line_list: Do_queue_jobs - sort 
order printable - 0x80a7494, count 0, max 0, list 0x0
2000-10-16-11:17:01.342 q [6543] (Sub)  optra40: Find_value: key 'printing_disabled', 
value '0'
2000-10-16-11:17:01.342 q [6543] (Sub)  optra40: Find_flag_value: key 
'printing_disabled', value '0'
2000-10-16-11:17:01.343 q [6543] (Sub)  optra40: Find_value: key 'printing_aborted', 
value '0'
2000-10-16-11:17:01.343 q [6543] (Sub)  optra40: Find_flag_value: key 
'printing_aborted', value '0'
2000-10-16-11:17:01.343 q [6543] (Sub)  optra40: Find_str_value: key 'forwarding', 
value '<NULL>'
2000-10-16-11:17:01.343 q [6543] (Sub)  optra40: Do_queue_jobs: printing_enabled '1', 
forwarding '<NULL>'
2000-10-16-11:17:01.343 q [6543] (Sub)  optra40: *** Dump_subserver_info: 
'Do_queue_jobs- checking for server' - 1 subservers
2000-10-16-11:17:01.344 q [6543] (Sub)  optra40:  server 0 - 0x80a9460, count 7, max 
102, list 0x80af978
2000-10-16-11:17:01.344 q [6543] (Sub)  optra40:   [ 0] 0x80aedd0 ='change=0x0'
2000-10-16-11:17:01.344 q [6543] (Sub)  optra40:   [ 1] 0x80af370 ='debug=4'
2000-10-16-11:17:01.344 q [6543] (Sub)  optra40:   [ 2] 0x80af350 ='done_time=0x0'
2000-10-16-11:17:01.344 q [6543] (Sub)  optra40:   [ 3] 0x80a92d8 ='printer=optra40'
2000-10-16-11:17:01.344 q [6543] (Sub)  optra40:   [ 4] 0x80aee88 
='queue_control_file=control.optra40'
2000-10-16-11:17:01.345 q [6543] (Sub)  optra40:   [ 5] 0x80afb18 ='server=0'
2000-10-16-11:17:01.345 q [6543] (Sub)  optra40:   [ 6] 0x80aee60 
='spooldir=/var/spool/lpd/optra40'
2000-10-16-11:17:01.345 q [6543] (Sub)  optra40: Find_value: key 'server', value '0'
2000-10-16-11:17:01.345 q [6543] (Sub)  optra40: Find_flag_value: key 'server', value 
'0'
2000-10-16-11:17:01.345 q [6543] (Sub)  optra40: Find_str_value: key 'printer', value 
'optra40'
2000-10-16-11:17:01.346 q [6543] (Sub)  optra40: Do_queue_jobs: printer 'optra40', 
server 0
2000-10-16-11:17:01.346 q [6543] (Sub)  optra40: Do_queue_jobs: job_to_do -1, 
use_subserver -1, working 0
2000-10-16-11:17:01.346 q [6543] (Sub)  optra40: Do_queue_jobs: nothing to do
2000-10-16-11:17:03.354 q [6543] (Sub)  optra40: Do_queue_jobs: Susr1 at end 0
2000-10-16-11:17:03.354 q [6543] (Sub)  optra40: cleanup: signal No signal, Errorcode 
0, exits 0
2000-10-16-11:17:03.355 q [6543] (Sub)  optra40: Killchildren: pid 6543, signal 
Interrupt, count 0
2000-10-16-11:17:03.355 q [6543] (Sub)  optra40: *** Dump_pinfo Killchildren - after - 
count 0 ***
2000-10-16-11:17:03.355 q [6543] (Sub)  optra40: *** done ***
2000-10-16-11:17:03.355 q [6543] (Sub)  optra40: cleanup: done, exit(0)


Here's my lpq -llll -Poptra40:

q:/var/spool/lpd# lpq -llll -Poptra40
Server Printer: optra40@q (dest L1@netgear) 'ECS Optra 40' (serving colora)
 Queue: 1 printable job
 Server: no server active
 Rank   Owner/ID                  Class Job Files                 Size Time
1      vincent@gypsy+926            A   926 colorcir.ps           1815 11:17:06

> > From [EMAIL PROTECTED] Sun Oct 15 11:19:40 2000
> > Date: Sun, 15 Oct 2000 13:17:12 -0400 (EDT)
> > From: Vincent Fox <[EMAIL PROTECTED]>
> > To: [EMAIL PROTECTED]
> > Subject: LPRng: load balancing queues: subqueues not reliable
> >
> > I am sure I am doing something moronic, but I am having much
> > trouble getting load balancing in LPRng 3.6.26 to work correctly.
> > Actually the load balancing queue itself seems to work just fine.
> > It's that the subqueues don't seem to work/print. Perhaps I am making
> > some really basic mistake that someone can point out.
> >
> > I have a load balance queue colorA, and three
> > subqueues associated with it tek350, tek380, optra40.
> > If I print to colorA all works fine. If I print to any
> > of the subqueues, it usually puts the job in the queue
> > but never makes it "active", e.g.
> >
> > Server Printer: optra40@q (dest L1@netgear) 'ECS Optra 40' (serving colorA)
> >  Queue: 1 printable job
> >  Server: no server active
> >  Status: job 'vincent@gypsy+253' removed at 12:16:34.229
> >  Rank   Owner/ID                  Class Job Files                 Size Time
> > 1      vincent@gypsy+885            A   885 colorcir.ps           1815 12:51:15    
> > If I do an lpc status all:
> >
> > q:/var/spool/lpd/tek380# lpc status all
> >  Printer           Printing Spooling Jobs  Server Subserver Redirect Status/(Debug)
> > ripley@q            enabled  enabled    0    none    none
> > jonesy@q            enabled  enabled    0    none    none
> > bishop@q            enabled  enabled    0    none    none
> > call@q              enabled  enabled    0    none    none
> > apone@q             enabled  enabled    0    none    none
> > colorA@q            enabled  enabled    0    none tek350,tek380,optra40
> > tek380@q            enabled  enabled    0    none    none
> > tek350@q            enabled  enabled    0    none    none
> > tek600@q            enabled  enabled    0    none    none
> > tek2@q              enabled  enabled    0    none    none
> > cgc-color@q         enabled  enabled    0    none    none
> > optra40@q           enabled  enabled    1    none    none          (2)
> > ibb-delenn@q        enabled  enabled    0    none    none
> >
> > If I do an lpc release all or lpc up all, it will start up
> > the subserver and print the job. However, the problem persists:
> > for subsequent prints I have to do the "lpc up all" once again.
> > I've tried cleaning all the queues, running through checkpc -V
> > and assorted other ground-zero solutions with no success.
> > Any ideas what is going on here?
> 
> I suggest that you get a little more data.
> 
> First, LPRng has more debugging stuff than a case of banned
> DDT.  You can enable this by doing:
> 
> lpc debug colorA 4
> lpc debug tek350 4
> lpc debug tek380 4
> lpc debug optra40 4
> 
> This is the overkill value... 2 is not as chatty.
> 
> Now you fire up a job:
> 
> echo hi |lpr -PcolorA
> 
> and you can look in the log files in the spool directory for more
> tracing information than you will ever want.
> 
> You can also do the following.  Replace your normal filters with:
> 
> #!/bin/sh
> #  dummy print filter
> echo $0 "$@" >&2
> sleep 30
> cat
> exit 0
> 
> 
> 
> And also set:  :lp=/dev/null
> 
> Now you can fling jobs at the queues without wasting paper.
> 
> I am curious to know what the problem is,  because I don't seem
> to have any problems here...
> 
> Of course,  if you feel brave,  you can try the 3.4.1Beta release...
> 
> ftp://ftp.astart.com/pub/LPRng/private/LPRng-3.4.1.....
> 
> Patrick
> 
> -----------------------------------------------------------------------------
> YOU MUST BE A LIST MEMBER IN ORDER TO POST TO THE LPRNG MAILING LIST
> The address you post from MUST be your subscription address
> 
> If you need help, send email to [EMAIL PROTECTED] (or lprng-requests
> or lprng-digest-requests) with the word 'help' in the body.  For the impatient,
> to subscribe to a list with name LIST,  send mail to [EMAIL PROTECTED]
> with:                           | example:
> subscribe LIST <mailaddr>       |  subscribe lprng-digest [EMAIL PROTECTED]
> unsubscribe LIST <mailaddr>     |  unsubscribe lprng [EMAIL PROTECTED]
> 
> If you have major problems,  send email to [EMAIL PROTECTED] with the word
> LPRNGLIST in the SUBJECT line.
> -----------------------------------------------------------------------------
> 


-- 
        "Who needs horror movies when we have Microsoft"?
         -- Christine Comaford, PC Week, 27/9/95

