Re: pf queues
On 2023-11-29, Daniel Ouellet wrote:
>> yes, all this can be done without hierarchy, only with priorities
>> (because hierarchy is priorities), but who decided, and why, that
>> eight would be enough? the one who created cbq created it for
>> practical tasks. but this "hateful eight" and this "flat earth" --
>> i don't understand what use they are; they can't even solve such a
>> simplified task :\
>> so what am i missing?
>
> man pf.conf
>
> Look for set tos. Just a few lines below set prio in the man page.
>
> You can have more than 8 if you need/have to.

Only useful if devices upstream of the PF router know their available
bandwidth and can do some QoS themselves.
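For illustration, a minimal pf.conf sketch combining the two mechanisms
mentioned above; the interface name and the chosen TOS value are
assumptions, not from this thread:

    # set prio selects one of pf's 8 internal priority levels (0-7)
    pass out on em0 proto tcp to port ssh set prio 6
    # set tos marks the IP TOS/DSCP field instead, so upstream devices
    # that do their own QoS can classify the packet; "lowdelay" is one
    # of the symbolic names pf.conf(5) accepts
    pass out on em0 proto udp to port domain set tos lowdelay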
Re: pf queues
On 2023-11-29, 4 wrote:
> here is a simple task, there are millions of such tasks. there is an
> internet connection, and although it is declared as symmetrical
> 100mbit, it's 100 for download, but for upload it depends on the time
> of day, so we can forget about the channel width and focus on the
> only thing that matters: priorities.

But wait. If you don't know how much bandwidth is available,
everything else goes out the window.

If you don't know how much bw is available in total, you can't decide
how much to allocate to each connection, so even basic bandwidth
control can't really work, let alone prioritising access to the
available capacity.

Priorities work when you are trying to transmit more out of an
interface than the bandwidth available on that interface. Say you have
a box running PF with a 1Gb interface to a (router/modem/whatever)
with an uplink of somewhere between 100-200Mb. If you use only
priorities in PF, they can only take effect when you have >1Gb of
traffic to send out.

If you queue with a max bw of 200Mb, but only 100Mb is available on
the line at times, then during those times all that happens is that
you defer any queueing decisions to the (router/modem).

The only way to get bandwidth control out of PF in that case is either
to limit to _less than the guaranteed minimum_ (say 100Mb in that
example), losing capacity at other times, or, if you have some way to
fetch the real line speed at various times, to adjust the queue speed
in the ruleset.

> --|
> --normal
> ---|
> ---low
>
> but hierarchy is not enough, we need priorities, since each of these
> three queues can contain other queues. for example, the "high" queue
> may contain, in addition to the "normal" queue, "icmp" and "ssh"
> queues, which are more important than the "normal" queue, in which,
> for example, we will have http, ftp and other non-critical traffic.
> therefore, we assign priority 0 to the "normal" queue, priority 1 to
> the "ssh" queue and limit its maximum bandwidth to 10mb (so that ssh
> does not eat up the entire channel when copying files), and assign
> priority 2 to the "icmp" queue (icmp is more important than ssh).
> i.e. icmp packets will leave first, then ssh packets, and then
> packets from the "normal" queue and its subqueues (or they won't
> leave if we don't restrict ssh and it eats up the entire channel)

If PF doesn't know the real bandwidth, it _can't_ defer sending lower-
priority traffic until after higher-prio has been sent, because it
doesn't know whether it will make it over the line...
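One way to act on the "fetch the real line speed" idea is a periodic
job that queries the uplink rate and reloads pf with a queue sized to
match. A rough sketch under stated assumptions: the rate query
command, the include file path and the 95% margin are all
hypothetical, and pf.conf must contain an include line for the
generated file:

    #!/bin/sh
    # hypothetical: obtain the current uplink rate in Kbit/s from the
    # modem, e.g. via SNMP or its web interface; placeholder command
    RATE=$(query_modem_uplink_kbit) || exit 1
    # size the root queue slightly below the measured rate so that pf,
    # not the modem, stays the bottleneck and keeps queueing control
    echo "queue rootq on em0 bandwidth $((RATE * 95 / 100))K" > /etc/pf.queues
    # /etc/pf.conf contains: include "/etc/pf.queues"
    pfctl -f /etc/pf.conf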
Re: PineView not using the whole screen
> 2. How can I make the VGA the primary globally,
> so that users don't need to say that in their ~/.xsession?

As already mentioned by others, looking at your own xrandr -q output,
where the LVDM entry is missing, it is evident that the problem is
related to your hardware configuration.

Following Crystal's suggestion, you can try managing it through
xorg.conf, also disabling some of these options:

Section "ServerFlags"
#       Option "AutoAddGPU" "on"
#       Option "AutoBindGPU" "on"
EndSection

Try to also play with:

xrandr --listproviders
xrandr --setprovideroutputsource

==
Nowarez Market
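A minimal usage sketch of those two commands; the provider indices are
assumptions and must be taken from your own --listproviders output:

    # list the render/output providers X knows about
    xrandr --listproviders
    # make provider 1 display through provider 0 (indices assumed)
    xrandr --setprovideroutputsource 1 0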
Re: PineView not using the whole screen
On Wed, Nov 29, 2023 at 07:23:48PM +0100, Jan Stary wrote:
> (Same with other WM's, which makes me think
> it's a X problem, not a wm problem.)

It's an X 'feature'.

You can have several physical displays (monitor, television,
projector, etc.) that show the same desktop. Ignore for the moment the
concept of one 'desktop' spanning several displays; that is different.
Here we are talking about one 'desktop' which is shown in one way or
another on several monitors or other devices.

If those displays have the same resolution (and refresh rate), then
it's easy: you just send the same pixel data to all of them.

If those displays have different resolutions, then something has to
happen. You have various choices: scale the display to fit, show only
part of the display, etc.

Example: maybe you have a laptop with 1920x1080 and you are doing a
conference with a projector that only displays 1280x1024. In that case
you might choose to display only the top-left part of the desktop on
the projector, and use the right hand side for things that you only
want to see on the laptop screen (such as a clock, or other widget).
But in that case, if you 'maximise' a window, you would probably want
it to size itself as 1280x1024 and fill the projector screen, not size
as 1920x1080 to fill the laptop screen and have some of it missing on
the smaller projector display.

So it is not really an 'X problem'; it's an intended feature.

> 1. Why are there two "outputs" recognized
> when the machine only has a (visible) VGA
> connected by a VGA cable to the monitor?

Because your hardware supports those outputs at a lower level. The
board you have is a D525MW. There was another version of that board,
D525MWV, which actually had the LVDM connector header on it.

> 2. How can I make the VGA the primary globally,
> so that users don't need to say that in their ~/.xsession?

An xorg.conf similar to the one I sent you (with the names of the
outputs changed to what the machines actually have) does that on at
least two multi-headed machines that I have here. However, neither of
those is using Pineview or the intel driver; they are much newer and
using the modesetting driver.

Off the top of my head I would try adding a monitor section for the
LVDS output and including:

Option "Disable" "true"

but that's just speculation, as none of the machines I've tested
required it.
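Spelled out as a full (and equally speculative) config fragment,
binding a monitor section to the LVDS output by name; the output name
LVDS1 is taken from the xrandr output quoted elsewhere in this thread,
not verified on a D525MW:

    Section "Device"
            Identifier "My device"
            Option "Monitor-LVDS1" "Internal panel"
    EndSection

    Section "Monitor"
            Identifier "Internal panel"
            # speculative, per the paragraph above: keep this output off
            Option "Disable" "true"
    EndSection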
Re: PineView not using the whole screen
On Nov 28 20:30:39, kolip...@exoticsilicon.com wrote:
> On Sun, Nov 26, 2023 at 04:12:39PM +0100, Jan Stary wrote:
> > Can I make the VGA output the default globally, so that
> > individual users don't need to xrandr in their .xsession?
>
> Does this /etc/xorg.conf fix it?

Thanks for the hint, but no. See below.

> Section "Device"
>         Identifier "My device"
>         Option "Monitor-VGA1" "My monitor"
> EndSection
>
> Section "Monitor"
>         Identifier "My monitor"
>         Option "Primary" "true"
>         Option "PreferredMode" "1680x1050"
> EndSection

The Xorg log has this to say (full log below):

[ 9246.525] (II) intel(0): switch to mode 1680x1050@60.0 on VGA1 using pipe 0, position (0, 0), rotation normal, reflection none
[ 9246.555] (II) intel(0): switch to mode 1280x800@58.1 on LVDS1 using pipe 1, position (0, 0), rotation normal, reflection none
[ 9246.573] (II) intel(0): Setting screen physical size to 430 x 270

So it recognizes the modes for the two outputs, and even recognizes
the right physical size. But I am still seeing the same problem: a
"maximized" window will only occupy the upper left 1280x800 rectangle
of the full 1680x1050 screen.

Upon logging in via xenodm, xrandr says:

Screen 0: minimum 8 x 8, current 1680 x 1050, maximum 32767 x 32767
LVDS1 connected primary 1280x800+0+0 (normal left inverted right x axis y axis) 338mm x 211mm
   1280x800      58.14*+
   1280x720      59.86    59.74
   1024x768      60.00
   1024x576      59.90    59.82
   960x540       59.63    59.82
   800x600       60.32    56.25
   864x486       59.92    59.57
   640x480       59.94
   720x405       59.51    58.99
   640x360       59.84    59.32
VGA1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 434mm x 270mm
   1680x1050     59.95*+
   1280x1024     75.02    60.02
   1280x960      60.00
   1152x864      75.00
   1024x768      75.03    60.00
   832x624       74.55
   800x600       75.00    60.32    56.25
   640x480       75.00    59.94
   720x400       70.08
VIRTUAL1 disconnected (normal left inverted right x axis y axis)

So the 1680x1050 mode is set, but that is only a mode of the VGA,
which is _not_ the primary; LVDS is.

Upon making VGA the primary via xrandr, as described before, the
problem disappears, i.e. the running cwm starts making full-screen
windows full screen. (Same with other WM's, which makes me think it's
an X problem, not a wm problem.)

Repeating myself: does anyone please know the following?

1. Why are there two "outputs" recognized
when the machine only has a (visible) VGA
connected by a VGA cable to the monitor?

2. How can I make the VGA the primary globally,
so that users don't need to say that in their ~/.xsession?

	Jan

[ 9246.081] (--) checkDevMem: using aperture driver /dev/xf86
[ 9246.133] (--) Using wscons driver on /dev/ttyC4
[ 9246.171] X.Org X Server 1.21.1.9
X Protocol Version 11, Revision 0
[ 9246.171] Current Operating System: OpenBSD bzm.stare.cz 7.4 GENERIC.MP#1468 amd64
[ 9246.171]
[ 9246.171] Current version of pixman: 0.42.2
[ 9246.171] Before reporting problems, check http://wiki.x.org
	to make sure that you have the latest version.
[ 9246.171] Markers: (--) probed, (**) from config file, (==) default setting,
	(++) from command line, (!!) notice, (II) informational,
	(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[ 9246.173] (==) Log file: "/var/log/Xorg.0.log", Time: Wed Nov 29 19:08:08 2023
[ 9246.174] (==) Using config file: "/etc/xorg.conf"
[ 9246.174] (==) Using system config directory "/usr/X11R6/share/X11/xorg.conf.d"
[ 9246.175] (==) No Layout section. Using the first Screen section.
[ 9246.175] (==) No screen section available. Using defaults.
[ 9246.175] (**) |-->Screen "Default Screen Section" (0)
[ 9246.175] (**) |   |-->Monitor "<default monitor>"
[ 9246.177] (==) No device specified for screen "Default Screen Section".
	Using the first device section listed.
[ 9246.177] (**) |   |-->Device "My device"
[ 9246.177] (==) No monitor specified for screen "Default Screen Section".
	Using a default monitor configuration.
[ 9246.177] (==) Automatically adding devices
[ 9246.177] (==) Automatically enabling devices
[ 9246.177] (==) Not automatically adding GPU devices
[ 9246.177] (==) Automatically binding GPU devices
[ 9246.177] (==) Max clients allowed: 256, resource mask: 0x1f
[ 9246.178] (==) FontPath set to:
	/usr/X11R6/lib/X11/fonts/misc/,
	/usr/X11R6/lib/X11/fonts/TTF/,
	/usr/X11R6/lib/X11/fonts/OTF/,
	/usr/X11R6/lib/X11/fonts/Type1/,
	/usr/X11R6/lib/X11/fonts/100dpi/,
	/usr/X11R6/lib/X11/fonts/75dpi/
[ 9246.178] (==) ModulePath set to "/usr/X11R6/lib/modules"
[ 9246.178] (II) The server relies on wscons to provide the list of input devices.
	If no devices become available, reconfigure wscons or disable AutoAddDevices.
[ 9246.178] (II) Loader magic: 0x4ff320a1530
[ 9246.178] (II) Modul
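For reference, the per-user workaround mentioned above ("making VGA
the primary via xrandr") is typically a line like the following in
~/.xsession, with the output name taken from the xrandr listing above:

    # make the external VGA monitor the primary output
    xrandr --output VGA1 --primary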
Re: pf queues
> yes, all this can be done without hierarchy, only with priorities
> (because hierarchy is priorities), but who decided, and why, that
> eight would be enough? the one who created cbq created it for
> practical tasks. but this "hateful eight" and this "flat earth" --
> i don't understand what use they are; they can't even solve such a
> simplified task :\
> so what am i missing?

man pf.conf

Look for set tos. Just a few lines below set prio in the man page.

You can have more than 8 if you need/have to.
Re: OpenBSD 7.4 released -- Oct 16, 2023
Just dropping a thanks for release 7.4. After using it for some days I
noticed an average cpu temperature almost six degrees lower than the
usual one (despite the lower environment temperature of the season).

==
Nowarez Market
Re: pf queues
> On Wed, Nov 29, 2023 at 12:12:02AM +0300, 4 wrote:
>> i haven't used queues for a long time, but now there is a need.
>> previously, queues had not only a hierarchy, but also a priority.
>> now there is no priority, only the hierarchy exists. i was
>> surprised, but i thought that this is quite in the way of Theo, and
>> that it is possible to simplify the queue mechanism to only the
>> hierarchy, meaning that if a queue stands higher in the hierarchy,
>> its priority is higher. but in order for it to work this way, it is
>> necessary to allow assigning packets to any queue, and not just to
>> the last one, because when you assign only to the last queue in the
>> hierarchy, in practice it means that you have no hierarchy and no
>> queues. and although a rule with an assignment to a queue above the
>> last one is not syntactically incorrect, in practice the assignment
>> is not performed, and the packets fall into the default (last)
>> queue. am i missing something, or is it really idiocy that humanity
>> has not seen yet?
>
> How long ago is it that you did anything with queues?
>
> The older ALTQ system was replaced by a whole new system back in
> OpenBSD 5.5 (or actually, altq lived on as oldqueue through 5.6),
> and the syntax is both very different and in most things much
> simpler to deal with.
>
> The most extensive treatment available is in The Book of PF, 3rd
> edition (actually the introduction of the new queues was the reason
> for doing that revision). If for some reason the book is out of
> reach, you can likely glean most of the useful information from the
> relevant slides in the PF tutorial
> https://home.nuug.no/~peter/pftutorial/ with the traffic shaping
> part starting at https://home.nuug.no/~peter/pftutorial/#68

looks like i'm phenomenally dumb :(

queue rootq on $ext_if bandwidth 20M
queue main parent rootq bandwidth 20479K min 1M max 20479K qlimit 100
queue qdef parent main bandwidth 9600K min 6000K max 18M default
queue qweb parent main bandwidth 9600K min 6000K max 18M
queue qpri parent main bandwidth 700K min 100K max 1200K
queue qdns parent main bandwidth 200K min 12K burst 600K for 3000ms
queue spamd parent rootq bandwidth 1K min 0K max 1K qlimit 300

-- this is a flat model. no hierarchy here, because no priorities. it
looks as if a hierarchy exists, but this is "fake news" :\ i can't
immediately come up with even one task where such a thing would be
needed.. probably no such task exists.

pass proto tcp to port ssh set prio 6

-- hard-coded eight queues/priorities and no bandwidth controls. but
this case is at least useful, because priorities are much more
important than bandwidth limits. i have a feeling that the person who
came up with this is the Mad Hatter from Wonderland :\ what was wrong
with the cbq engine, where all of this was in one?

here is a simple task, there are millions of such tasks. there is an
internet connection, and although it is declared as symmetrical
100mbit, it's 100 for download, but for upload it depends on the time
of day, so we can forget about the channel width and focus on the only
thing that matters: priorities. we make three queues, hierarchically
connected to one another:

root
-|
-high
--|
--normal
---|
---low

but hierarchy is not enough, we need priorities, since each of these
three queues can contain other queues.
for example, the "high" queue may contain, in addition to the "normal" queue, "icmp" and "ssh" queues, which are more important than the "normal" queue, in which, for example, we will have http, ftp and other non-critical traffic. therefore, we assign priority 0 to the "normal" queue, priority 1 to the "ssh" queue and limit its maximum bandwidth to 10mb(so that ssh does not eat up the entire channel when copying files), and assign priority 2 to the "icmp" queue(icmp is more important than ssh). i.e. icmp packets will leave first, then ssh packets, and then packets from the "normal" queue and its subqueues(or they won't leave if we don't restrict ssh and it eats up the entire channel). now: root -| -high[normal(0),ssh(1),icmp(2)] --| --normal[low(0),default(1),http(2),ftp(2)] ---| ---low[bittorrent(0),putin(0),vodka(0)] yes, all this can be make without hierarchy, only with priorities(because hierarchy it's priorities), but who and why decided that eight would be enough? the one who created cbq- he created it for practical tasks. but this "hateful eight" and this "flat-earth"- i don't understand what use they are, they can't even solve such the simplified task :\ so what am i missing?