Please can we do something about programmatic interfaces to ZFS?
Hi, Since updating to 15-CURRENT, I have been unable to get some existing code that uses libzfs_core to take snapshots working again. There are a lot of reasons this could have broken, and it’s hard to track down:
- We ship both libnv and libnvpair. These define the same data structure but with different APIs, and the two are incompatible. I believe libnv can create the serialised data structures that the ZFS ioctls expect.
- We don’t install headers for libnvpair or libzfs_core (or libzfs), and so any code using these has to either depend on things in the src tree (which depend on OpenSolaris headers that are incompatible with FreeBSD ones, so must be kept in separate compilation units) or provide its own definitions, which may get out of sync with the libraries.
- We don’t provide any documentation of the underlying ZFS ioctls (and there is some code that suggests that these vary between platforms), and so the *only* API for interacting with ZFS is libzfs_core.
- The APIs in libzfs_core are also poorly documented.
This makes it incredibly difficult to interact with ZFS via anything other than the `zfs` command-line tool. When things break, I have no idea which of these layers caused the breakage. David
Is anyone using libzfs_core on 15-CURRENT?
Hi all, I have some code using libzfs_core that works fine on 13, but seems not to work on 15-CURRENT. The lzc_snapshot function is failing with exactly the same nvlist argument. It is failing with errno 2 (ENOENT) from the ZFS ioctl (and not returning an nvlist of errors). My understanding is that the zfs command-line tool wraps libzfs_core, so it must be working somehow. Has anything changed between 13 and 14 that would be expected to cause incompatibilities? The fact that we don’t install usable headers for these libraries makes it quite difficult to be sure that I haven’t done anything wrong, but stepping through the libzfs_core bits in a debugger, everything looks correct up to the ioctl call. David
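For anyone trying to reproduce this, the call sequence in question looks roughly like the sketch below. This is a hedged illustration, not the code from the report: the pool and snapshot names are hypothetical, and because FreeBSD does not install these headers you have to point the compiler at a src/OpenZFS checkout, which is part of the problem described above.

```c
/* Minimal libzfs_core snapshot sketch. Hypothetical dataset name; requires a
 * real ZFS pool and the (uninstalled) OpenZFS headers to build and run. */
#include <libnvpair.h>
#include <libzfs_core.h>
#include <stdio.h>

int main(void) {
    if (libzfs_core_init() != 0) {
        perror("libzfs_core_init");
        return 1;
    }
    nvlist_t *snaps = fnvlist_alloc();
    /* Snapshot names are boolean-typed keys in the request nvlist. */
    fnvlist_add_boolean(snaps, "tank/home@backup");
    nvlist_t *errlist = NULL;
    int err = lzc_snapshot(snaps, NULL, &errlist);
    if (err != 0)
        fprintf(stderr, "lzc_snapshot failed: %d\n", err);
    fnvlist_free(snaps);
    if (errlist != NULL)
        fnvlist_free(errlist);
    libzfs_core_fini();
    return err;
}
```

A failure of this pattern with ENOENT and no error nvlist is exactly the symptom described above.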
Re: sanitizers broken (was RE: libc/libsys split coming soon)
On 21 Feb 2024, at 20:00, Brooks Davis wrote: > > The sanitizers reach somewhat questionably into libc internals that are > exported to allow rtld to update them. I was unable to find an solution > that didn't break this and I felt that fixing things like closefrom() > using non-deprecated syscalls was more important than avoiding changes > to the sanitizer interface. On Darwin, Apple added a special __interpose section that contains pairs of functions to be replaced and replacements. Within the library supplying the interposer, the symbol is resolved to the next version along, but everything that links to the interposing library sees the wrapped version. I wonder if it’s worth teaching rtld to do something equivalent. It’s a fairly lightweight generic mechanism that avoids a lot of the hacks that the sanitisers (and other things, such as instrumented malloc wrappers) do. David
Re: libc/libsys split coming soon
On 3 Feb 2024, at 09:15, Mateusz Guzik wrote: > > Binary startup is very slow, for example execve of a hello world > binary in a Linux-based chroot on FreeBSD is faster by a factor of 2 > compared to a native one. As such perf-wise this looks like a step in > the wrong direction. Have you profiled this? Is the Linux version using BIND_NOW (which comes with a load of problems, but is often the default for Linux systems and reduces the number of slow-path entries into rtld)? Do they trigger the same number of CoW faults? Is there a path in rtld that’s slower than the equivalent ld-linux.so path? David
Re: How to upgrade an EOL FreeBSD release or how to make it working again
On 15 Jan 2024, at 16:46, Mario Marietto wrote: > > The ARM Chromebook is based on armv7,it is still recent. For reference, the ARMv7 architecture was introduced in 2005. The last cores that implemented the architecture were released in 2014. This is not a ‘recent’ architecture; it’s one that’s 19 years old and has been largely dead for several years. > But let's change perspective for a moment,don't think about the ARM > Chromebook. My question is : how to upgrade FreeBSD when it goes EOL. Generally, run `freebsd-update`. This is a very different question from ‘how do I do a new install of an old and unsupported version?’ > I ask this because there is a huge difference here between FreeBSD and Linux. > Today if you need to use , for example Ubuntu 14.0, you can use it as is. > Yes,there will be a lot of bugs,but it will work without crashes. But if you > want to use an old FreeBSD system,nothing will work for you. So,do you know > some methods to install even packages or ports ? You know,there are cases > when you need to do some experiments so that you can keep your machine off > the internet,so you aren't scared that someone can compromise it. Totally > prohibiting the users to use an old system,removing ports and packages is not > a choice that I approve of. And I'm not the only one that thinks like this. If you want to use an old and unsupported version of FreeBSD, no one is stopping you, but:
- You will need to build the releases. The source code is still in git, so you can. The scripts for building the release images are right there in the repo. Just grab the relevant release or releng branch and go.
- You will need to build packages. Newer versions of the ports tree will not be tested with the older release, so you may need to use an older checkout of the ports tree. Poudriere will build a package repo for you.
In both cases, if you’re using older versions you almost certainly *will* have security vulnerabilities.
The project strongly advises you not to do this and not to blame us when you install known-insecure software and end up compromised. The project does not have enough active contributors to keep maintaining things indefinitely. This is why releases have a five-year supported lifetime. If you want to pick up an old branch and maintain it, you’re welcome to. In the past, companies have picked up old branches and maintained them for customers that had a dependency on them. If you want to pay someone to maintain an old branch (and have deep pockets) then there are probably a few companies that will happily take your money. Maintaining binaries is a slightly different issue, but it’s not totally unrelated. Keeping old packages around consumes disk space and costs the project money (remember, every package is mirrored across the CDN, so this isn’t just a single disk). Even if it were free, philosophically, I think making it easy for users to install known-insecure software is a bad idea, but if you want to keep a package repo with out-of-date packages online indefinitely then you can. You can run Poudriere and even cross-compile from a fairly beefy cloud machine quite easily. It’s been a while since I did a full package build, but I would guess that you could do a single package build (all ports) for about $50 on a cloud VM, more (2-3x) if it’s emulated. Storing the results for a small number of users will cost around $10-20/month. If you think this is an important thing to do, then you are absolutely welcome to spend your own money on doing it. David
Re: Move u2f-devd into base?
On 8 Jan 2024, at 16:30, Tomoaki AOKI wrote: > > So it should be in ports to adapt for latest products more quickly than > in base, I think. We push out a new release of each of the -STABLE branches every 6 months and can do ENs if a product ships and becomes popular in under six months. This shouldn’t be a reason to not do things in the base system. Streamlining the process for ENs (automating them so that there’s a simple flow from review request for a commit on a stable branch to generating the binaries and sending out the announcement) would help a lot and would almost certainly make Colin happier about his workload. David
Re: Continually count the number of open files
On 12 Sep 2023, at 17:19, Bakul Shah wrote: > > FreeBSD > should add inotify. inotify is also probably not the right thing. If someone is interested in adding this, Apple’s fsevents API is a better inspiration. It is carefully designed to ensure that the things monitoring for events can’t ever block filesystem operations from making progress. I think there’s a nice design possible with a bloom filter in the kernel of events that ensures that monitors may get spurious events but don’t miss out on anything. On macOS, files have a stronger idea of which directory they live in, and so it’s easier to have an API that notifies for changes to files in a directory. inotify monitors individual files but loses notifications if you write through the wrong hard link to a file (hard link foo from a/foo to b/foo, use inotify to watch a, write through b/foo, observe no notification). I think the right kernel API would walk the directory and add the vnodes to a bloom filter and trigger a notification on a match in the filter. You’d then have occasional spurious notifications but you’d have something that could be monitored via kqueue and could be made to not block anything else in the kernel. If anyone is interested in improving the current kqueue code here: there’s currently no mechanism for tracking when the last writable file descriptor for a file has been closed, which is useful for consuming files that are dropped via sftp or similar. NOTE_CLOSE_WRITE is hard to use without race conditions and tells you only that *a* file descriptor with write permission has closed, not that the last one has closed. I’m currently resorting to a process that runs as root that uses libprocstat to walk the entire list of open file descriptors and report when they’re closed, which is incredibly suboptimal. David
Re: The future for support of BeagleBone Black and its variants
Hi, What are the changes to the DTS files? If there are problems with DTC handling the new files, please can you raise issues here: https://github.com/davidchisnall/dtc/issues If there are problems with the kernel’s handling of the dtb, please ignore me. David > On 10 Aug 2023, at 13:24, George Abdelmalik wrote: > > Hi all, > > For a long while now CURRENT has not been working on the BBB due to > incompatible upstream changes in vendor DTS files. I know there are some > patches available which at least get FreeBSD running but those have yet to be > incorporated into the project's repository. > > This leaves me with the feeling that BBB support in FreeBSD is on the path to > being dropped. Is there someone from the FreeBSD project that could speak > directly to this? > > As a user it would be helpful to know if I should be searching of an > alternate SBC platform or another OS for my projects. > > If it still remains the plan to keep supporting BBB into 14.0 and beyond then > I'd like to know what work is missing to make that happen. I'm willing and > able to help. > > Regards, > George. > > >
Re: Review of patch that uses "volatile sig_atomic_t"
On 2 Aug 2023, at 00:33, Rick Macklem wrote: > > Just trying to understand what you are suggesting... > 1 - Declare the variable _Atomic(int) OR atomic_int (is there a preference) > and > not volatile. Either is fine (the latter is a typedef for the former). I am not a huge fan of the typedefs; some people like them; as far as I can tell it’s purely personal preference. > 2 - Is there a need for signal_atomic_fence(memory_order_acquire); before the > assignment of the variable in the signal handler. (This exists in > one place in > the source tree (bin/dd/misc,c), although for this example, > neither volatile nor > _Atomic() are used for the variable's declaration. You don’t need a fence if you use an atomic variable. The fence prevents the compiler from reordering things across it; using atomic operations also prevents this. You might be able to use a fence and not use an atomic, but I’d have to reread the spec very carefully to convince myself that this didn’t trigger undefined behaviour. > 3 - Is there any need for other atomic_XXX() calls where the variable is used > outside of the signal handler? No. By default, _Atomic variables use sequentially consistent semantics. You need to use the `atomic_` functions only for explicit memory orderings, which you might want to do for optimisation (very unlikely in this case). Reading it outside the signal handler is the equivalent of doing `atomic_load` with a sequentially consistent memory order. This is a stronger guarantee than you need, but it’s unlikely to cause performance problems if you’re doing more than a few dozen instructions’ worth of work between checks. > In general, it is looking like FreeBSD needs to have a standard way of dealing > with this and there will be assorted places that need to be fixed?
If we used a language that let you build abstractions, this would be easy: I have a C++ class that provides a static epoch counter that’s incremented in a signal handler and a local copy for each instance, so you can check whether the signal handler has fired since it was last used. It’s trivial to reuse in C++ projects, but C doesn’t give you the tools for building this kind of abstraction. David
Re: Review of patch that uses "volatile sig_atomic_t"
Hi, This bit of the C spec is a bit of a mess. There was, I believe, a desire to return volatile to its original use and make any use of volatile other than MMIO discouraged. This broke too much legacy code, and so we are now in a confusing state. The requirements for volatile are that the compiler must not elide loads or stores and may not narrow them (I am not sure if it’s allowed to widen them). Loads and stores to a volatile variable may not be reordered with respect to other loads or stores to the same object, but *may* be reordered with respect to any other accesses. The sig_atomic_t typedef just indicates an integer type that can be loaded and stored with a single instruction and so is immune to tearing if updated from a signal handler. There is no requirement to use this from signal handlers in preference to int on FreeBSD (whether other types work is implementation defined, and int works on all supported architectures for us). The weak ordering guarantees for volatile mean that any code using volatile for detecting whether a signal has fired is probably wrong if it does not include a call to atomic_signal_fence(). This guarantees that the compiler will not reorder the load of the volatile with respect to other accesses. In practice, compilers tend to be fairly conservative about reordering volatile accesses and so it probably won’t break until you upgrade your compiler in a few years’ time. My general recommendation is to use _Atomic(int) (or ideally an enum type) for this. If you just use it like a normal int, you will get sequentially consistent atomics. On a weakly ordered platform like Arm this will include some more atomic barrier instructions, but it will do the right thing if you add additional threads monitoring the same variable later.
In something like mountd, the extra performance overhead from the barriers is unlikely to be measurable; if it is, then you can weaken the atomicity (sequentially consistent unless specified otherwise is a good default in C/C++, for once prioritising correctness over performance). David > On 1 Aug 2023, at 06:14, Rick Macklem wrote: > > Hi, > > I just put D41265 up on phabricator. It is a trivial > change to mountd.c that defines the variable set > by got_sighup() (the SIGHUP handler) as > static volatile sig_atomic_t > instead of > static int > > I did list a couple of reviewers, but if you are familiar > with this C requirement, please take a look at it and > review it. > > Thanks, rick > ps: I was unaware of this C requirement until Peter Eriksson > reported it to me yesterday. Several of the other NFS > related daemons probably need the same fix, which I will > do after this is reviewed. >
Re: Surprise null root password
On 30/05/2023 20:11, Dag-Erling Smørgrav wrote: David Chisnall writes: There was a very nasty POLA violation a release or two ago. OpenSSH defaults to disallowing empty passwords and so having a null password was a convenient way of allowing people to su or locally log into that user but disallowing ssh. This option does not work in recent versions of FreeBSD. Turning on the option to permit root login while keeping the root password blank used to be (mostly) safe because it permitted su to root from people in the wheel group, root login via SSH key remotely (for ‘everything is broken I can’t log in as a user whose home directory is not on the root filesystem’ recovery) and local login as root from consoles marked as secure. It now permits root login from the network with a blank password. That is incorrect. PermitRootLogin defaults to “no” in FreeBSD and to “prohibit-password” upstream (and presumably in the port), while PermitEmptyPasswords defaults to “no” both in FreeBSD and upstream, cf. crypto/openssh/servconf.c (search for “permit_root” and “permit_empty”). I didn't say it defaulted to anything else, but if you enable PermitRootLogin then you have a nasty surprise because PermitEmptyPasswords=no does not do anything and you can still log in via an empty password. There is presumably something I can put in pam.d that will prevent password-based login (without fully disabling keyboard-interactive from sshd_config) but I have never successfully understood anything after reading the PAM documentation. David
Re: Surprise null root password
On 27 May 2023, at 03:52, Mike Karels wrote: > > On 26 May 2023, at 21:28, bob prohaska wrote: > >> It turns out all seven hosts in my cluster report >> a null password for root in /usr/src/etc/master.passwd: >> root::0:0::0:0:Charlie &:/root:/bin/sh >> >> Is that intentional? > > Well, it has been that way in FreeBSD since 1993, and in BSD since > 1980 (4.0BSD). I guess you would say that it is intentional. The > alternative would be to have a well-known password like root, but > then it wouldn’t be as obvious that a local password had not been > set. There was a very nasty POLA violation a release or two ago. OpenSSH defaults to disallowing empty passwords and so having a null password was a convenient way of allowing people to su or locally log into that user but disallowing ssh. This option does not work in recent versions of FreeBSD. Turning on the option to permit root login while keeping the root password blank used to be (mostly) safe because it permitted su to root from people in the wheel group, root login via SSH key remotely (for ‘everything is broken I can’t log in as a user whose home directory is not on the root filesystem’ recovery) and local login as root from consoles marked as secure. It now permits root login from the network with a blank password. David
Re: GitHub Code Search [Re: Tooling Integration and Developer Experience]
On 01/02/2023 06:05, Yetoo wrote: On Tue, Jan 31, 2023, 9:47 AM David Chisnall wrote: On 30/01/2023 21:39, Yetoo wrote: If github is going to be considered for issue tracking I just want to say, after having extensively using it for issue tracking, it tends to be difficult to find an issue if the exact title isn't entered and many duplicate reports are made as a result. Code search sucks and doesnt show you the multiple places in a file where a match was made so a user may be left wondering if code exists or not, but this doesn't seem to be germane given this is about issue tracking. Which search are you using? The old GitHub search is not great, but cs.github.com has replaced local search for me in the FreeBSD tree. It's not*quite* as good as fxr, but it's close. For example, searching for sys_cap_enter: https://cs.github.com/?scopeName=All+repos&scope=&q=sys_cap_enter+repo%3Afreebsd%2Ffreebsd-src+ David Clicking that link greets me with a login screen and a requirement to join a waitlist. Yes, this is beta and may change in the future, but not thrilled that an account is required to use and opt into it. Sorry, I thought it was open to everyone now. If the issue is needing a GitHub account, I don't have a work around - in my experience having one has been unavoidable if you want to contribute to the F/OSS ecosystem for about 10 years now. I used to be a big proponent of projects hosting their own infrastructure on open platforms, but after seeing massive jumps in the number of active contributors for projects that moved to GitHub workflows, I've decided to pick other battles. David
GitHub Code Search [Re: Tooling Integration and Developer Experience]
On 30/01/2023 21:39, Yetoo wrote: If github is going to be considered for issue tracking I just want to say, after having extensively using it for issue tracking, it tends to be difficult to find an issue if the exact title isn't entered and many duplicate reports are made as a result. Code search sucks and doesnt show you the multiple places in a file where a match was made so a user may be left wondering if code exists or not, but this doesn't seem to be germane given this is about issue tracking. Which search are you using? The old GitHub search is not great, but cs.github.com has replaced local search for me in the FreeBSD tree. It's not *quite* as good as fxr, but it's close. For example, searching for sys_cap_enter: https://cs.github.com/?scopeName=All+repos&scope=&q=sys_cap_enter+repo%3Afreebsd%2Ffreebsd-src+ David
Re: Header symbols that shouldn't be visible to ports?
On 7 Sep 2022, at 15:55, Cy Schubert wrote: > > This is exactly what happened with DMD D. When 64-bit statfs was introduced > all DMD D compiled programs failed to run and recompiling didn't help. The > DMD upstream failed to understand the problem. Eventually the port had to > be removed. I’m not sure that I understand the problem. This should matter only for libraries that pass a statbuf across their ABI boundary. Anyone using libc will see the old version of the symbol and just use the old statbuf. Anyone using the old syscall number and doing system calls directly will see the compat version of the structure. Anyone taking the statbuf and passing it to a C library compiled with the newer headers will see compat problems (but the same is true for a C library asking a C program to pass it a statbuf and having the two compiled against different kernel versions). There’s a lot that we could do in system headers to make them more FFI-friendly. For example:
- Use `enum`s rather than `#define`s for constants.
- Add the flags-enum attribute for flags, so that FFI layers that can parse attributes get more semantic information.
- Add non-null attributes on all arguments and returns that must not be null.
- Use `static inline` functions instead of macros where possible, and expose them with a macro for `static inline` so that an FFI layer can compile the headers in a mode that turns these functions into symbols that it can link against. For Rust, this can be compiled to LLVM IR and linked against and inlined into the Rust code, so things like the Capsicum permissions bitmap setting code wouldn’t need duplicating in Rust.
- Mark functions with availability attributes so that an FFI layer knows when it’s using deprecated / unstable values and can make strong ABI guarantees.
- Add tests for the headers to the tree.
In 12.0, someone decided to rewrite a load of kernel headers to use macros instead of inline functions, which then broke C++ code in the kernel by changing properly namespaced things into macros that would replace every matching identifier. I’d love to see a concerted effort to use a post-1999 style for our headers. David
Re: Accessibility in the FreeBSD installer and console
On 08/07/2022 13:18, Stefan Esser wrote: On 08.07.22 at 12:53, Hans Petter Selasky wrote: Hi, Here is the complete patch for Voice-Over in the FreeBSD console: https://reviews.freebsd.org/D35754 You need to install espeak from pkg and then install the /etc/devd/accessibility.conf file and then run sysctl kern.vt.accessibility.enable=1 after booting the new kernel. It is freaking awesome! There might be some bugs, but it worked fine for me! The espeak port is marked for deletion on 2022-06-30 (but has not been deleted, yet): DEPRECATED= Last release in 2014 and deprecated upstream EXPIRATION_DATE=2022-06-30 There is espeak-ng, which took over the sources, and I have prepared a port update. Many years ago, I added the speech synthesis APIs from OS X to GNUstep using flite: https://www.freshports.org/audio/flite/ flite is small (the port contains separate .so and .a files for each voice; a minimal version needs only one), has no dependencies outside of the base system, and is permissively licensed. I haven't used it for a while (apparently it's had a new major release since I last did), but I was happily using it for text-to-speech on FreeBSD 10-15 years ago and it is still in ports. David
Re: Deprecating ISA sound cards
On 19 Mar 2022, at 21:24, Chris wrote: > > On 2022-03-18 09:08, Ed Maste wrote: >> ISA sound cards have been obsolete for more than a decade, and it is >> (past) time to retire their drivers. This includes the following >> drivers/devices: >> snd_ad1816 Analog Devices AD1816 SoundPort >> snd_ess Ensoniq ESS >> snd_gusc Gravis UltraSound >> snd_mss Microsoft Sound System >> snd_sbc Creative Sound Blaster >> I have a review open to add deprecation notices: >> https://reviews.freebsd.org/D34604 >> I expect to commit this in the near future, then MFC to stable >> branches and remove these drivers from main. Please follow up if >> there's a reason we should postpone the removal of any of these >> drivers. > This only hurts from a nostalgic perspective. Those GUS cards were incredible! > I have a board running freebsd that has 2 GUS cards in it running Exactly my reaction. You can tell you’re old when drivers are removed from the tree for mainstream hardware that you never owned but wished that you could afford. David
Re: Dragonfly Mail Agent (dma) in the base system
On 30/01/2022 14:01, michael.osi...@siemens.com wrote: Sendmail: The biggest problem is that authentication strictly requires Cyrus SASL, even for stupid ones like PLAIN/LOGIN, accourding to the handbook you must recompile sendmail from base with Cyrus SASL from ports to make this possible. A showstopper actually, for two reasons: 1. I don't like mixing base and ports, it just creates a messy system. 2. While this may work with hosts, when you have jails running off a RELEASE in Bastille this obviously will not work. Not going to work with sendmail easily. I think this is a critical point: at the moment, we're paying the cost of having a full-featured MTA in the base system, without getting most of the benefits. Around 2003, I hit exactly this problem. The instructions after update were slightly terrifying: after each base system or ports update, I potentially had to recompile my own sendmail. There's now a sendmail+sasl configuration in packages and so I was incredibly happy to be able to move away from using sendmail in base. Now I have two copies of sendmail on some machines. The one in ports, for compatibility reasons, looks for config in /etc/mail not under LOCALBASE, which is a layering violation and means that freebsd-update periodically tries to corrupt my config. I have no strong opinions about where we move to, but moving *from* shipping a limited sendmail in base would make me very happy. David
Re: Deprecating smbfs(5) and removing it before FreeBSD 14
On 22/01/2022 23:20, Rick Macklem wrote: Mark Saad wrote: [stuff snipped] So I am looking at the Apple and Solaris code, provided by rick. I am not sure if the illumos code provides SMB2 support. They based the solaris code on Apple SMB-217.x, which is from OSX 10.4, which I am sure predates smb2. https://github.com/apple-oss-distributions/smb/tree/smb-217.19 If I am following this correctly we need to look at Apple's smb client from OSX 10.9, which is where I start to see bits about smb2 https://github.com/apple-oss-distributions/smb/tree/smb-697.95.1/kernel/netsmb This is also where this stuff starts to look less and less like FreeBSD. Let me ask some of the illumos people I know to see if there is anything they can point to. Yes. Please do so. I saw the "old" calls for things like open and the new ntcreate version, so I assumed that was the newer SMB. If it is not, there is no reason to port it. The new Apple code is a monster: 10x the lines of C and a lot of weird stuff that looks Apple-specific. It might actually be easier to write SMBv2 from the spec than port the Apple stuff. --> I'll try and look at whatever Microsoft publishes w.r.t. SMBv2/3. Thanks for looking at this, rick The docs are public: https://docs.microsoft.com/en-gb/openspecs/windows_protocols/ms-smb2/5606ad47-5ee0-437a-817e-70c366052962 Note that the spec is 480 pages; it is not a trivial protocol to implement from scratch. David
Re: Benchmarks: FreeBSD 13 vs. NetBSD 9.2 vs. OpenBSD 7 vs. DragonFlyBSD 6 vs. Linux
While I agree on most of your points, the value of Phoronix is that it tests the default install. As an end user, I don’t care that a particular program is twice as fast on a particular Linux distro as it is on FreeBSD because of kernel features, compiler options, or dependency choices. I would love to see the base system include the ThinLTO (LLVM IR) .a files so that I can do inlining from libc into my program. I would love for ports to default to ThinLTO unless they break with it. Apple flipped that switch a few years ago, so a lot of things that broke with ThinLTO are now fixed. The FreeBSD memcpy / memset implementations look like they’re slower than the latest ones, which can give a 5-10% perf boost on some workloads. LLVM just landed the automemcpy framework, which was designed by some Google folks to synthesise efficient memcpy implementations tailored to different workloads. FreeBSD often wins versus glibc-based distros because jemalloc is faster than dlmalloc (the default malloc implementations in FreeBSD libc and glibc, respectively). I’ve been using snmalloc in my libc for a while and it generally gives me a few percent more perf. Unfortunately, FreeBSD decided to expose all of the jemalloc non-standard functions from libc, which means I can’t contribute it to upstream without implementing all of those on top of snmalloc or it would be an ABI break. It would be great if someone could pick up the Phoronix benchmark suite and do some profiling: where is FreeBSD spending more time than Linux? Are there Linux-specific code paths that hit slow paths on FreeBSD and fast paths on Linux that could have FreeBSD-specific fast paths added (e.g. futex vs _umtx_op)? David > On 11 Dec 2021, at 10:17, dmilith . wrote: > > 1. Where are compiler options for BSDs? > 2. Why they compare -O2 to -O3 code in some benchmarks? Why they enable > fast math in some, and disable it for others? > 3. Why they don't mention powerd setup for FreeBSD?
By default it may use > slowest CPU mode. Did they even load cpufreq kernel module? > 4. Did they even care about default FreeBSD mitigations (via sysctl) > enabled, or it's only valid for Linuxes? ;) > 5. What happened to security and environment details of BSDs? > > It's kinda known that guys from Phroenix lack basic knowledge of how to do > proper performance testing and lack basic knowledge about BSD systems. > Nothing new. Would take these results with a grain of salt. > > On Sat, 11 Dec 2021 at 10:53, beepc.ch wrote: > >>> I am surprised to see that the BSD cluster today has much worse >> performance >>> than Linux. >>> What do you think of this? >> >> "Default" FreeBSD install setting are quite conservative. >> The Linux common distros are high tuned, those benchmark is in my >> opinion comparison of apples and oranges. >> >> Comparing "default" FreeBSD install with "default" Slackware install >> would be more interesting, because Slackware builds are at most vanilla. >> >> > > -- > Daniel Dettlaff > Versatile Knowledge Systems > verknowsys.com
Re: failure of pructl (atexit/_Block_copy/--no-allow-shlib-undefined)
On 02/12/2021 09:51, Dimitry Andric wrote: Apparently the "block runtime" is supposed to provide the actual object, so I guess you have to explicitly link to that runtime? The block runtime provides this symbol. If you use this libc API, you must be compiling with a toolchain that supports blocks and must provide the blocks symbols. If you don't use `atexit_b` or any of the other `_b` APIs then you don't need to link the blocks runtime. I am not sure why this is causing linker failures - if it's a weak symbol and it's not defined then that's entirely expected: the point of a weak symbol is that it might not be defined. This avoids the need to link libc to the blocks runtime for code that doesn't use blocks (i.e. most code that doesn't come from macOS). This code is not using `atexit_b`, but because it is using `atexit` the linker is complaining that the compilation unit containing `atexit` refers to a symbol that isn't defined. David
Re: VDSO on amd64
Great news! Note that your example of throwing an exception from a signal handler works because the signal is delivered during a system call. The compiler generates correct unwind tables for calls because any call may throw. If you did something like a division by zero to get a SIGFPE, or a null-pointer dereference to get a SIGSEGV, then the throw would probably not work (or, rather, would be delivered to the right place but might corrupt some register state). Neither clang nor GCC currently supports non-call exceptions by default. This mechanism is more useful for Java VMs and similar; some Linux-based implementations (including Android) use it to avoid null-pointer checks in Java.

The VDSO mechanism in Linux is also used for providing some syscall implementations: in particular, getting the current approximate time and getting the current CPU (either by reading from the VDSO's data section or by doing a real syscall, without userspace knowing which). It also provides the syscall stub that is used for the kernel transition for all 'real' syscalls. This doesn't matter so much on amd64, but on i386 it lets them select between int 80h, syscall, or sysenter, depending on what the hardware supports.

A few questions about future plans:

- Do you have plans to extend the VDSO to provide system call entry points and fast-path syscalls? It would be really nice if we could move all of the libsyscalls bits into the VDSO, so that any compartmentalisation mechanism that wanted to interpose on syscalls just needed to provide a replacement for the VDSO.
- It looks as if the Linux VDSO mechanism isn't yet using this. Do you plan on moving it over?
- I can't quite tell from kern_sharedpage.c (this file has almost no comments) - is the userspace mapping of the VDSO randomised?
This has been done on Linux for a while, because the VDSO is an incredibly high-value target for code-reuse attacks (it can do system calls, and it can restore the entire register state from the contents of an on-stack buffer if you can jump into it).

David

On 25/11/2021 02:36, Konstantin Belousov wrote:
I have mostly finished an implementation of a "proper" vdso for amd64 native binaries, both 64-bit and 32-bit. The vdso wraps the signal trampolines into a real dynamic shared object, which is prelinked into the dynamically linked image. The main (and in fact, now the only) reason for wrapping the trampolines into a vdso is to provide proper unwind annotations for the signal frame, without a need to teach each unwinder about special frame types. In reality, most of them are already aware of our signal trampolines, since there is no other way to walk over them except to match the instruction sequence in the frame. Also, we provide the sysctl kern.proc.sigtramp, which reports the location of the trampoline. So this patch should not make much difference for e.g. gdb or lldb. On the other hand, I noted that the llvm13 unwinder with the vdso is able to catch exceptions thrown from the signal handler, which was a surprise to me. Corresponding test code is available at https://gist.github.com/b886401fcc92dc37b49316eaf0e871ca

Another advantage for us is that having a vdso allows changing the trampoline code without breaking unwinders. The vdsos for both the 64-bit and 32-bit ABIs are put into the existing shared page. This means that the total size of both objects should be below 4k, and some more space needs to be left available for stuff like timehands and fxrng. Using linker tricks, which is where most of the complexity in this patch lies, I was able to reduce the size of the objects below 1.5k. I believe some more space saving could be achieved, but I stopped there for now. Or we might extend the shared region object to two pages, if the current situation proves too tight.
The implementation can be found at https://reviews.freebsd.org/D32960 Signal delivery for old i386 ELF (FreeBSD 4.x) and a.out binaries has not yet been tested. Your reviews, testing, and any other form of feedback are welcome. The work was sponsored by The FreeBSD Foundation.
Re: Deprecating smbfs(5) and removing it before FreeBSD 14
On 28/10/2021 16:26, Shawn Webb wrote:
> I wonder if providing a 9pfs client would be a good step in helping deprecate smbfs.

Note: WSL2 uses 9p-over-VMBus, but most of the Linux world is moving away from 9p-over-VirtIO to FUSE-over-VirtIO. This has a few big advantages:

- The kernel already has solid FUSE support, so this isn't a completely new code path.
- FUSE is designed around POSIX filesystem semantics, 9p isn't, and this mismatch causes problems in places.
- FUSE filesystems can be exposed almost directly to the guest. For example, if you have a networked filesystem, you can run the FUSE FS in an unprivileged userspace process and remove the entire host kernel storage stack from the attack surface for the guest.
- FUSE allows exposing buffer cache pages. The FUSE-over-VirtIO mechanism makes it fairly easy to expose read-only root filesystem images to guests.

The last point is especially important for container workloads, where you may have hundreds of containers in lightweight VMs on a single node, all using the same base layer.

David
Re: FreeBSD base pkg (packaging) and critical ports build alongside
Hi, I think your best option would be to do the opposite of what you suggest. Poudriere can build pkgbase sets from a source tree and populate a jail from them. The flow that I'd suggest is:

- poudriere jail, to build a jail from an existing source tree.
- If there are kernel changes, install the packages on the package builder and reboot.
- poudriere bulk in the new jail, to build the new package set.

Note: You can *normally* skip the second step (drm ports, for example, will be built against the new kernel sources in the jail, though they might not be loadable on the host), but there's no guarantee that you can run a newer userland on an older kernel, so things may break. If you enable reproducible builds in the src.conf that you use for building the jail, then you should be able to just diff the kernel binary to see if anything has changed.

If you have bhyve, or are running on a cloud platform, then you can replace the second step with a poudriere image invocation to build a VM image containing poudriere and your newly built base system, and deploy this to build the packages. I'm planning on working on some tooling to do this in Azure with GitHub Actions.

Note that poudriere uses packages installed on the host system to build a jail. If you have, for example, installed llvm12, then you can put a line in your src-env.conf for the jail to tell it to use that as an external toolchain and skip the toolchain-bootstrap phase of the build. This means that the base build is fairly fast even on quite modest hardware (it still builds clang, but at least it does it only once).

David

On 29/09/2021 09:28, FreeBSD User wrote:
Hello, I use FreeBSD-base packages built on self-hosted systems to update 13-STABLE and CURRENT hosts.
I run into the problem that the packages of the FreeBSD base, built via the FreeBSD framework and from the most recent 13-STABLE sources, are often out of synchronisation with our poudriere packaging builders; that is especially true for critical ports with kernel modules, like i915 drm, virtualbox and so on. The problem is obvious: the 13-STABLE sources, and probably the API, change more rapidly than those of the appropriate builder hosts for poudriere, and since it takes a bunch of days to build a whole poudriere package repository, there is often a gap between the revision of the kernel and the port containing kernel modules. So, the question is: how can I add ports to the building process of the FreeBSD source tree so that they get built every time I build the FreeBSD-base packages alongside the OS? Thanks in advance, oh
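Sketched as commands, the flow suggested above looks roughly like this. The jail name and paths are hypothetical and exact flags vary between poudriere versions, so treat this as an outline and check poudriere-jail(8) and poudriere-bulk(8):

```sh
# 1. Build a jail from an existing source tree (the src method builds
#    world from that tree and populates the jail from the result).
poudriere jail -c -j base13 -m src=/usr/src

# 2. If the kernel changed, install the new base packages on the builder
#    host and reboot before building ports.

# 3. Build the ports package set inside the freshly built jail.
poudriere bulk -j base13 -p default -f /usr/local/etc/poudriere.d/pkglist
```

Because the jail and the package set are always rebuilt from the same source tree, the kernel-module ports can never drift out of sync with the kernel the way they do when the poudriere builder lags the base packages.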
Re: Building ZFS disk images
On 05/08/2021 15:06, David Chisnall wrote:
>> Would poudriere work for you? man poudriere-image
>
> Wow, there's a lot of stuff I didn't know poudriere could do! It looks as if it can produce a GPT partition table with all of the bootable bits, or it can produce a ZFS disk image. I guess it wouldn't be too difficult to teach it to do both?

FYI: I have raised a PR[1] that allows me to create a ZFS disk image: a ZFS root image that I can boot as a Gen1 or Gen2 Hyper-V VM. I have not yet tried it in Azure, but it should work.

David

[1] https://github.com/freebsd/poudriere/pull/921
Re: -CURRENT compilation time
On 09/09/2021 00:04, Tomoaki AOKI wrote:
> devel/ninja/Makefile has USES= python in it, so it may require python to run, or at least to build.

You could probably remove that line without anyone noticing. Ninja uses Python for precisely one thing (or, at least, did the last time I looked): there is a debugging mode that will generate a visualisation of all of the dependencies in the project and run a web server that allows you to view this visualisation in your web browser. In about 10 years of using Ninja, I have used this functionality precisely once, and that was immediately after poking the code to find out why it had a Python dependency, discovering this mode existed, and looking to see what it did. Nothing on the build paths depends on Python, and Ninja doesn't require Python to build itself.

David
Re: -CURRENT compilation time
On 08/09/2021 11:52, Gary Jennejohn wrote:
> Seems to me that there was an earlier mail about getting CMake to work with FreeBSD builds. Could be worthwhile to look into getting ninja to work also. But I could understand that there might be push-back, since the project prefers to use utilities from the source tree.

CMake is a build-system generator, Ninja is a build system. Usually the two are used together: CMake generates Ninja files, Ninja runs the build. Ninja is explicitly designed not to be written by hand. CMake can also emit other things, including POSIX Makefiles, but the Ninja build is usually the fastest. CMake and Ninja are both in the package systems for Windows, macOS, *BSD, and all the Linux distros that I've seen, unlike bmake, so they are universally easy to depend on for cross-builds. Cross compiling with bmake is much harder from anything that isn't FreeBSD.

David
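For concreteness, the usual pairing looks like this (directory names are hypothetical; `-S`/`-B` need CMake 3.13 or later):

```sh
# CMake generates the build.ninja file; Ninja executes the build.
cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release
ninja -C build
```

The same CMakeLists.txt can generate Makefiles instead by swapping the `-G` argument, which is why projects that adopt CMake get Ninja support essentially for free.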
Re: -CURRENT compilation time
On 07/09/2021 18:02, Stefan Esser wrote:
> Wouldn't this break META_MODE?

I have never managed to get META_MODE to work, but my understanding is that META_MODE addresses a problem that doesn't really exist in any other build system that I've used: that dependencies are not properly tracked. When I do a build of LLVM with the upstream build system with no changes, it takes Ninja approximately a tenth of a second to stat all of the relevant files and tell me that I have no work to do. META_MODE apparently lets the FreeBSD build system extract these dependencies and do something similar, but it's not enabled by default and it's difficult to make work.

> I'd rather be able to continue building the world within a few minutes (generally much less than 10 minutes, as long as there is no major LLVM upgrade) than have a faster LLVM build and then a slower build of the world ...

The rest of this thread has determined that building LLVM accounts for half of the build time in a clean FreeBSD build. LLVM's CMake is not a great example: it has been incrementally improved since CMake 2.8 and doesn't yet use any of the modern CMake features that allow encapsulating targets and providing import / export configurations. In spite of that, it generates a Ninja file that compiles *significantly* faster than the bmake-based system in FreeBSD. In other projects that I've worked on with a similar-sized codebase to FreeBSD that use CMake + Ninja, I've never had the same problems with build speed that I have with FreeBSD. Working on LLVM, I generally spend well under 10% of my time either waiting for builds or fighting the build system. Working on FreeBSD, I generally spend over 90% of my time waiting for builds or fighting the build system. This means that my productivity contributing to FreeBSD is almost zero.
For reference, changes to LLVM typically build for me in under 30 seconds with Ninja, unless I've changed a header that everything depends on. In particular, building FreeBSD on a 10-24 core machine has very long periods where a number of the cores are completely idle. Ninja also has a few other nice features that improve performance relative to bmake:

- It lets you put jobs in different pools. In LLVM this is used to put link and compile jobs in different pools, because linking with LLD uses multiple threads and a lot more memory than compilation, so a 10-core machine may want to do 12 compile jobs in parallel but only 2 link jobs. This makes it much easier to completely saturate the machine.
- Ninja provides each parallel build task with a separate pipe for stdout and stderr, and does not print their output unless a build step fails (or unless you build with -v). With bmake, if a parallel build fails I have to rerun the build without -j, because the output is interleaved with that of succeeding jobs and it's difficult to see what actually failed. With Ninja, the only output is from the failed jobs, with no interleaving.

David
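The pool mechanism described above is only a few lines in a Ninja file. A minimal hand-written sketch (the rule commands are placeholders; in practice CMake generates this file):

```ninja
# Allow at most 2 concurrent link jobs, while compile jobs are bounded
# only by the global -j limit.
pool link_pool
  depth = 2

rule cc
  command = cc -MD -MF $out.d -c $in -o $out
  depfile = $out.d

rule link
  command = cc $in -o $out
  pool = link_pool

build foo.o: cc foo.c
build bar.o: cc bar.c
build app: link foo.o bar.o
```

With this, Ninja will happily run many `cc` rules at once but will never have more than two `link` rules in flight, which is what keeps a memory-hungry multi-threaded linker from starving the compile jobs.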
Re: -CURRENT compilation time
On 06/09/2021 20:34, Wolfram Schneider wrote:
> With the option WITHOUT_TOOLCHAIN=yes the world build time is 2.5 times faster (real or user+sys), down from 48 min to 19.5 min real time.

Note that building LLVM with the upstream CMake + Ninja build system is *significantly* faster on a decent multicore machine than with the FreeBSD bmake-based in-tree version. One of the things I'd love to prototype, if I had time, is a CMake-based build system for FreeBSD, so that we could get all of the tooling integration that comes from compile_commands.json, reuse LLVM's build system (and that of any other contrib software that uses CMake) without having to recreate it, and be able to use Ninja to build.

David
Re: Move the Handbook into source tree
Hi, I think there are two conflated things here:

- Moving the handbook into the source tree.
- Creating branches in the handbook that track particular releases.

The first one is largely irrelevant to anyone other than people contributing to the handbook, so I'll focus on the second. This is quite an easy way of having a per-release snapshot, but it means that the handbook will diverge for different releases over time. If the new docs are documenting things that are new, that's fine, but if new docs are added to -CURRENT for things that are in a release, then they will need to follow the same MFC process as code flowing from current to releases, which brings up the potential for merge conflicts and so on. This is made even more complex in cases where code is MFC'd: typically (unfortunately) docs are added to the handbook after a feature is committed, and so they will need to be MFC'd at the same time as the corresponding features, or later if they're written after the original MFC.

I see two possible workflows, the current one and the proposed one, both of which have disadvantages:

- With the current workflow, you need to explicitly track which release things apply to in the document.
- With your proposed workflow, you need to explicitly track which release things apply to by merging them at the correct time.

It's not clear to me that either of these is necessarily easier than the other.

David

On 07/09/2021 08:01, Mehmet Erol Sanliturk wrote:
Dear All, in many of my messages to FreeBSD mailing lists I am mentioning the following view: "Please move the Handbook into the source tree, and maintain it with respect to the current release, without mixing sliding releases. If you mix sliding releases, maintenance of a correct Handbook is IMPOSSIBLE, because of the maintenance of the associated IF statements about releases.
" When we look at the following web pages , we see the following : https://www.freebsd.org/cgi/man.cgi FreeBSD Manual Pages In the second box of "All sections" line , there are lines about all of the FreeBSD releases with many more other systems . In spite of this , in the following page : https://docs.freebsd.org/en/books/handbook/ FreeBSD Handbook The FreeBSD Documentation Project " Abstract Welcome to FreeBSD! This handbook covers the installation and day to day use of FreeBSD 13.0-RELEASE, FreeBSD 12.2-RELEASE and FreeBSD 11.4-RELEASE. ... " A Handbook which ( for me , exactly , for the others , perhaps ) with many errors ... I think that , it is NOT extraordinarily a difficult process to move the Handbook into source tree and maintaining it with respect to per release and insert into the above web page a part similar to the manual pages to display the requested Handbook with respect to releases . In the present case , previous handbooks are lost , because of the difficulty of finding them . Thank you very much and my best wishes for you and humanity in these pandemic days ... Mehmet Erol Sanliturk
Re: -CURRENT compilation time
On 06/09/2021 09:08, Jeremie Le Hen wrote:
> Compiling C++ seems extremely CPU heavy and this is made worse by the fact LLVM is built twice (once for build/cross tools, once for the actual world).

Note that you need to build LLVM twice only if you are actively debugging LLVM or creating reproducible deployment images. You actually don't need to build it at all: you can use an external toolchain to skip the first build, and you can build with WITHOUT_TOOLCHAIN to avoid building the version that's installed and then install a toolchain from packages: https://wiki.freebsd.org/ExternalToolchain

David
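Concretely, the knobs involved look something like this (llvm13 is an example package name; the exact versioned name tracks the ports tree, and the wiki page above is the authoritative recipe):

```sh
# Install a toolchain from packages and use it as the external toolchain,
# skipping the in-tree cross-tools LLVM build:
pkg install llvm13
make CROSS_TOOLCHAIN=llvm13 buildworld

# And in /etc/src.conf, skip building the toolchain that would otherwise
# be compiled for installation into the new world:
#   WITHOUT_TOOLCHAIN=yes
```

Between the two settings, no copy of LLVM is compiled at all during buildworld; the packaged compiler does all of the work.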
Re: ses ioctl API/ABI stability
On 25/08/2021 22:19, Alan Somers wrote:
> We usually try to maintain backwards compatibility forever. But is that necessary for the ses(4) ioctls? There are several problems with them as currently defined. They lack type safety, lack automatic copyin/copyout handling, and one of them can overrun a user buffer. I would like to fix them, but adding backwards-compatibility versions would almost negate the benefit. Or can we consider this to be an internal API, changeable at will, as long as sesutil's CLI remains the same?
> -Alan

I've been pondering for a little while the possibility of using CUSE for compat ioctls (particularly for jails, but potentially in general). This might be a good candidate. If you rename ses and provide a CUSE implementation of ses that runs in a Capsicum sandbox with access to the new device, then the worst that a type-safety bug can do is issue the wrong ioctl (but not an invalid one, because the kernel will catch that with the new interfaces). sesutil can move to the new interface, and so only things that want to talk directly to the old interface (for example, sesutil in a FreeBSD 12 jail) will need to load the userspace compat interface.

David
Tooling for UCLification
Hi everyone, A few years ago at BSDCam I started working on a tool that would parse data structures using libclang and provide libucl wrappers for serialising and deserialising them. After working on it for a bit, I came to the conclusion that the approach was the wrong way around and what I actually wanted to do was describe the config file and reflect that into code. I've written a tool (still quite WIP, but now in a usable state) that takes a JSON Schema describing the config file and produces some modern idiomatic C++ wrappers for exposing it. JSON Schema is the same format that UCL can validate, so you can write the schema once, use it to validate config files, and also use it to generate the code that exposes the config files to your program. I hope it's useful to anyone working on adding UCL support to tools: https://github.com/davidchisnall/config-gen

There are a couple of simple examples in the tests directory that show the schema, some example configs, and the code used for accessing them.

David
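For readers unfamiliar with the format, a schema of the kind such a tool consumes might look like this. All of the field names here are invented for illustration; see the tests directory in the repository for real examples:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "example daemon configuration",
  "type": "object",
  "properties": {
    "listen_address": { "type": "string" },
    "port": { "type": "integer", "minimum": 1, "maximum": 65535 },
    "verbose": { "type": "boolean", "default": false }
  },
  "required": [ "port" ]
}
```

The same document can be fed to UCL's validator to reject malformed config files and to a code generator to produce typed accessors, so the schema becomes the single source of truth for the config format.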
Re: Building ZFS disk images
On 05/08/2021 13:53, Alan Somers wrote:
> I don't know of any way to do it using the official release scripts either. One problem is that every ZFS pool and file system is supposed to have a unique GUID. So any kind of ZFS release builder would need to re-guid the pool on first boot.

Is there a tool / command to do this? I've hit this problem in the past: I have multiple FreeBSD VMs that are all created from the same template, and if one dies I can't import its zpool into another, because they have the same UUID. It doesn't matter for modern deployments where the VM is stateless and reimaged periodically, but it's annoying for classic deployments where I have things I care about on the VM.

David
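For the pool GUID at least, OpenZFS does ship a command that rewrites it in place ('tank' below is a placeholder pool name; this does not touch GPT partition UUIDs, which are a separate problem):

```sh
# Give a cloned pool a fresh GUID so that copies of the same template
# image can be imported side by side on one host.
zpool reguid tank
```

A first-boot rc script that runs this against the root pool would cover the re-guid step Alan describes for a release builder.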
Re: Building ZFS disk images
On 05/08/2021 14:01, Juraj Lutter wrote:
> On 5 Aug 2021, at 14:53, Alan Somers wrote:
>> I don't know of any way to do it using the official release scripts either. One problem is that every ZFS pool and file system is supposed to have a unique GUID. So any kind of ZFS release builder would need to re-guid the pool on first boot.
>>
>> On Thu, Aug 5, 2021, 6:41 AM David Chisnall wrote:
>>> Hi, Does anyone know how to build ZFS disk images from any existing tooling? I haven't used UFS for over a decade now and the official cloud images are all UFS, so I end up doing an install from the CD ISO into Hyper-V locally and then exporting the VHD, but that can't be the most efficient way of getting a FreeBSD VHD with ZFS. I haven't been able to find any documentation and reading the release scripts they seem to hard-code UFS.
>
> Would poudriere work for you? man poudriere-image

Wow, there's a lot of stuff I didn't know poudriere could do! It looks as if it can produce a GPT partition table with all of the bootable bits, or it can produce a ZFS disk image. I guess it wouldn't be too difficult to teach it to do both?

David
Building ZFS disk images
Hi, Does anyone know how to build ZFS disk images from any existing tooling? I haven't used UFS for over a decade now, and the official cloud images are all UFS, so I end up doing an install from the CD ISO into Hyper-V locally and then exporting the VHD, but that can't be the most efficient way of getting a FreeBSD VHD with ZFS. I haven't been able to find any documentation, and from reading the release scripts they seem to hard-code UFS.

David
Re: PATH: /usr/local before or after /usr ?
On 16/07/2021 16:50, Cameron Katri via freebsd-current wrote:
> On Fri, Jul 16, 2021 at 09:01:49AM -0600, Alan Somers wrote:
>> FreeBSD has always placed /usr/local/X after /usr/X in the default PATH. AFAICT that convention began with SVN revision 37 "Initial import of 386BSD 0.1 othersrc/etc". Why is that? It would make sense to me that /usr/local/X should come first. That way programs installed from ports can override FreeBSD's defaults. Is there a good reason for this convention, or is it just inertia?
>
> The biggest example I can think of where this would be a problem is having GNU binutils installed: it will cause any calls to the elftoolchain or llvm-binutils tools to go to GNU binutils, which is platform-specific, so cross compiling or LTO could be broken, because GNU binutils doesn't support cross compiling or LTO.

FWIW: in about 20 years of using FreeBSD, my $PATH has always had /usr/local/bin before /usr/bin and I have never once encountered a problem from this. If I install something from ports that's already in the base system, it's invariably because I want to use it in preference to the base-system version.

David
Re: WSLg update on 1-5-2021 - BSD / WSL
On 09/05/2021 04:55, Daniel Nebdal wrote:
> On Thu, 6 May 2021 at 19:05, David Chisnall wrote:
>> [ Disclaimer: I work for Microsoft, but not on WSL and this is my own opinion ] (...) David
>
> Just as a counterpoint to Rozhuk's take, that all sounds sensible enough to me - FreeBSD would probably gain more from this than MS. So the WSL2 TODO would be something like this:
>
> * Ballooning driver. Seems like a proof of concept would be doable enough - could you model it as an unkillable task (userland or kernel) that wants to allocate a lot of memory, where anything it gets it hands back to the host?

There's an in-tree Xen balloon driver that works in this way: it allocates pages of memory from the kernel and then returns them to the hypervisor. It appears that Hyper-V actually supports two kinds of dynamic memory, the balloon interface and a mechanism based on hotplug. The balloon mechanism effectively defines a maximum amount of physical memory and lets the guest return some of it. The hotplug mechanism boots with a smaller amount of memory but can dynamically add and remove physical memory. I don't know which is used in WSL2.

> * Some sort of boot support. Maybe as a shim that chainloads an unmodified kernel? Probably finicky, but also self-contained.

To start, you could kexec the FreeBSD kernel from a minimal Linux install.

> * File systems. Is / also 9p-over-HyperV-channels? If so, that's kind of crucial and perhaps the hardest part.

I think WSL2 provides a block device for /, which is why Linux-native filesystem performance is faster than in WSL1. It would be great to have a ZFS image instead of ext4 here!

> Oh, and how does the terminal work? You support multiple ttys, so I guess it's not straight emulated serial?

I believe that WSL2 uses SSH connections, rather than exposing a serial terminal.

David ___ freebsd-current@freebsd.org mailing list https://lists.freebsd.org/mailman/listinfo/freebsd-current To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
Re: Building ZFS-based VM images
On 06/05/2021 16:17, Alan Somers wrote:
> It's easy to build a UFS-based VM image just by setting WITH_VMIMAGES in release.conf and running release.sh. But what about ZFS-based images? What's the easiest way to build a ZFS-based VM image, using a pool layout similar to what the interactive installer uses?

The only way that I've deployed FreeBSD VMs in Azure has been to run the installer in Hyper-V locally and then upload it as a template. You need to be *really* careful with this mode though, because ZFS gets really confused if two pools or two VDEVs have the same UUIDs. This means that you can't just attach one VM's disk to another for recovery (I also have a UFS image that I use for recovery). It would be great if it were possible to set a flag somewhere telling the storage subsystem to regenerate the UUIDs of everything (including GPT partitions) on first boot.

David
Re: WSLg update on 1-5-2021 - BSD / WSL
On 07/05/2021 11:17, Rozhuk Ivan wrote:
> On Thu, 6 May 2021 10:57:16 +0100 David Chisnall wrote:
>> Whether Microsoft or the FreeBSD project should do the work really comes down to who has more to gain. Windows 10 is installed on around 1.3 billion devices and any of these users can run Ubuntu with a single click in the Microsoft Store, so it feels as if the FreeBSD project has a lot to gain from being able to reach them.
>
> Do the job for free - make more money for MS. Make win10 support more features to increase Windows' value...

How much money? 'Making money' doesn't just mean money coming in; it means more money coming in than is going out. Adding features to a product costs money. How many people will buy Windows 10 if it has a good FreeBSD compatibility layer who wouldn't buy it without one? I very much doubt that this is a sufficient number to cover the cost of the engineering work.

>> Microsoft, in contrast, is driven by requests from customers who spend money on our products and services. Around a hundred people commented on the WSL issue to add FreeBSD support.
>
> Ok, do your job and add FBSD support to WSL.

First, this is nowhere near related to my job: I don't work on Windows, let alone WSL. Second, I have already explained why this is not a sufficiently large market to impact engineering decisions. If you assume that 1% of the people who want the feature commented, then this gives around 10,000 folks who really want a FreeBSD equivalent of WSL.

> They give money to MS, they ask MS to do the job for that money.

They give money to MS, and they get a Windows 10 license in return. They are happy to buy Windows without a FreeBSD compat layer. They are buying Windows on the basis of some subset of a large number of features. The lack of a FreeBSD compat layer is not preventing them from buying Windows; they have not shown that this is the deal-breaker feature. Microsoft, like any other POTS software vendor, will prioritise the features that impact the most customers.
There are things on User Voice with tens or hundreds of thousands of votes, and those tend to be prioritised. Something with under a hundred votes is so niche that it's only going to be a target of investment if it impacts another product or service. It's pretty hard to justify a feature in Windows that only 0.001% of Windows users will use. If you want to change that arithmetic, then next time your organisation is renewing M365 or Azure service subscriptions, tell your sales rep that FreeBSD support is important to your company.

> There are many other hosting services that have FBSD support. So this is an MS/Azure problem.

Azure already officially supports FreeBSD, and we have contributed a load of code to improve that support over the years. From the numbers I've seen, I strongly suspect that we've spent more on it than we've gained in revenue. You are asking Microsoft to throw money at a thing that will definitely cost time and money (and comes with the associated opportunity cost, because developer time spent on this feature is developer time not spent on other features) but with no clear indication that it will increase revenue. Effectively, you are asking us to do work for free, and you're also doing so quite rudely.

Personally, I'd love to have a FreeBSD compat layer. The license would even make it possible to embed the FreeBSD kernel in Windows and so get the best aspects of WSL1 and WSL2. From a business perspective, however, I can't argue that this would be a great use of engineer time. There are a load of features that would positively impact a lot more users and so would be a higher priority. If you want this to happen and you want Microsoft to do it, then you need to help people inside the company provide this business case. Things that don't help include:

- I want it.
- You suck for not doing it.
- It would make you money in unspecified ways.
Things that do help include:

- We are a FreeBSD shop with 1,000 workstations; we would switch to Windows on the desktop with this feature.
- We are a large cloud customer with 10,000 VMs deployed; we would switch to Azure with this feature.
- We are a Windows shop with a load of desktops, but we are planning to switch to Macs because we want a BSD-style userland.

If you just want it to happen, then you don't need Microsoft to do anything. All of the code required to build a Linux system that integrates with WSL2 is open source, and you can implement something compatible for FreeBSD. You can probably even skip a bunch of the boot requirements by using Linux as a bootloader and having a tiny Linux image that just kexecs a FreeBSD kernel.

David
Re: WSLg update on 1-5-2021 - BSD / WSL
On 03/05/2021 22:37, Pete Wright via freebsd-current wrote:
> On 5/1/21 12:42 PM, Chargen wrote:
>> Dear all, please note that I hope this message will be discussed, to get this on the roadmap for FreeBSD. Perhaps there is already talk about && work done on that. I would like to suggest having a BSD side to Microsoft's FOSS ambitions, and getting to know the BSD license. I hope the tech people here know which nuts and bolts would be needed to boot a *BSD subsystem kernel and make it available on Windows 10 installations.
>
> I believe most of the effort to make this happen lies with Microsoft - it is their product after all. WSL under the covers is Hyper-V, which supports FreeBSD pretty well. I believe most of the work would be on the Windows side, to get the plumbing in place to spin up a FreeBSD VM. There are open discussions on the WSL github system where people have asked for this, but it has not gained much traction with Microsoft.

[ Disclaimer: I work for Microsoft, but not on WSL, and this is my own opinion ]

WSL is actually two things.

WSL1 is similar to the FreeBSD Linuxulator: it is a Linux syscall ABI in the NT kernel that implements *NIX abstractions that are not present in NT and forwards other things to the corresponding NT subsystems. Like the Linuxulator, it lacks a bunch of features (e.g. seccomp-bpf support, which is required for things like Docker and Chrome) and is always playing catch-up with Linux. I'd personally love to see a FreeBSD version of this (though I'd be 90% happy if ^T did the *BSD thing), but it's something that only Microsoft can do, and it is currently quite difficult because the picoprocess abstraction in the NT kernel only allows one kind of picoprocess, so it would need a new abstraction layer to support both.

WSL2 is a lightweight Hyper-V VM that is set up to integrate tightly with the host. This includes:

- Aggressively using the memory ballooning driver, so that a VM can start with a very small amount of committed memory and grow as needed.
- Using Hyper-V sockets to forward things between the guest and the host.
- Using 9p-over-VMBus (which, I hope, will eventually become VirtIO-over-VMBus, but I don't know of any concrete plans for this) to expose filesystems from the host to the guest.
- Starting to use the LCOW infrastructure, which loads the kernel directly rather than going via an emulated UEFI boot process.

FreeBSD is currently missing the balloon driver, I believe, has a Hyper-V socket implementation contributed by Microsoft (Wei Hu), and has a 9p-over-VirtIO implementation that could probably be tweaked fairly easily to do 9p-over-VMBus. The WSL2 infrastructure is designed to make it possible to bring your own kernel. I think FreeBSD would need to support the Linux boot protocol (initial memory layout, mechanism for passing kernel arguments in memory) to fit into this infrastructure, but that wouldn't require any changes to any closed-source components.

Whether Microsoft or the FreeBSD project should do the work really comes down to who has more to gain. Windows 10 is installed on around 1.3 billion devices and any of these users can run Ubuntu with a single click in the Microsoft Store, so it feels as if the FreeBSD project has a lot to gain from being able to reach them. If you believe that FreeBSD provides a better experience (I certainly believe it provides a better developer experience) than Ubuntu, then making it easy to reach every Windows user is of huge value to the FreeBSD project and community.

Microsoft, in contrast, is driven by requests from customers who spend money on our products and services. Around a hundred people commented on the WSL issue to add FreeBSD support. If you assume that 1% of people who want the feature commented, then this gives around 10,000 folks who really want a FreeBSD equivalent of WSL. It's pretty hard to justify a feature in Windows that only 0.001% of Windows users will use.
If you want to change that arithmetic, then next time your organisation is renewing M365 or Azure service subscriptions, tell your sales rep that FreeBSD support is important to your company. If, for example, a large company is spending a lot with a different cloud provider because they have better FreeBSD support than Azure, then that's the kind of thing that can be used to justify investing in FreeBSD. Currently, from what I know of FreeBSD deployments in Azure, Microsoft is already investing a disproportionate amount in FreeBSD relative to the number of users. WSL makes it easy for folks to develop on Windows and deploy in Azure. A lot of people are running Linux in Azure and so there's a big incentive to make this seamless. If a load of people are deploying FreeBSD on Azure and can't develop on Windows as easily, that's an incentive for Microsoft to improve the FreeBSD client-side integration. David
Re: Stop installing /usr/bin/clang
On 16/08/2019 10:10, Konstantin Belousov wrote: You did not read neither review summary nor followups. This is needlessly insulting and this kind of attitude from you towards people on the mailing lists is one of the main reasons that my engagement with the FreeBSD project tends to be in brief bursts. If this were a one-off, then I would be happy to assume that you were unusually stressed, but this is a long-term repeated pattern of behaviour. I was not aware that devel/llvm was anything other than a meta-port that installed the latest devel/llvm{version} (I have only ever installed the packages when I need a specific version and so do not have the devel/llvm port installed). You could have clarified that. Instead, you chose to launch a personal attack. You are not Linus and the FreeBSD project does not need a Linus. David ___ freebsd-current@freebsd.org mailing list https://lists.freebsd.org/mailman/listinfo/freebsd-current To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
Re: Stop installing /usr/bin/clang
On 15/08/2019 17:48, Konstantin Belousov wrote:
> Please look at https://reviews.freebsd.org/D21060
> I propose to stop installing /usr/bin/clang, clang++, clang-cpp.
>
> It probably does not matter when all your software comes from ports or packages, but is actually very annoying when developing on FreeBSD. In particular, you never know which `clang' is called in the user environment, because it depends on the $PATH elements ordering.

What is the confusion here? The binary that is invoked as clang is from the base system. The binary that is invoked as clang{version number} is from ports. If the user has built clang from source and has set up their path to put that first, then they will get a different clang, but there's no way we can stop that kind of behaviour. For reference, on my machine, I have:

clang <- this one is from the base system
clang60 <- this one is from ports
clang70 <- this one is from ports
clang80 <- this one is from ports
clang-devel <- this one is from ports

Nothing in my PATH order affects this. The only source of confusion that I regularly encounter comes from the fact that FreeBSD packages install clang80, when every other system installs clang-8, so I end up having to have a special case in CMake logic for finding specific versions of tools like clang-format on FreeBSD.

That said, I don't know what the impact would be on configure scripts if we didn't have a clang binary. CMake seems to run ${CC} -v and parse the output, so it's quite happy finding that cc is clang (and the specific version). How do most autoconf things handle this? Apple shipped a gcc symlink to clang for years because, in the absence of a gcc binary, a load of programs detected /usr/bin/cc and decided not to enable any GNU extensions. We've managed to avoid having to do that, but how many things look for clang, gcc, and cc in the path and enable features based on which one they find?
David
Re: FreeBSD Core Team Response to Controversial Social Media Posts
On 19 May 2019, at 20:43, Igor Mozolevsky wrote: > > the best > explanation of democracy I have ever heard was: "two wolves and a > sheep deciding what to have for dinner!" If you believe that this quote in any way supports your argument, then I would suggest that you work through the game theoretic implications of this structure. (Hint: if the sheep can abstain, the sheep is never eaten and even without abstention the sheep isn’t going to be eaten today) David
Re: CFT: FreeBSD Package Base
On 29/04/2019 21:12, Joe Maloney wrote: With the CFT version you chose to build and package individual components such as sendmail with a port option. That does not entirely solve the problem of being able to reinstall sendmail after the fact without a rebuild of the userland (base) port, but perhaps base flavors could solve that problem assuming flavors could extend beyond python. This sounds very much like local optimisation. It's now easy to create a custom base image. Great. But how do I express dependencies in ports on a specific base configuration? This is easy if I depend on a specific base package, but how does this work in your model? For example, if I have a package that depends on a library that is an optional part of the base system, how do I express that pkg needs to either refuse to install it, or install a userland pkg that includes that library in place of my existing version as part of the install process? More importantly for the container use case, if I want to take a completely empty jail and do pkg ins nginx (for example), what does the maintainer of the nginx port need to do to express the minimum set of the base system that needs to be installed to allow nginx to work? One of the goals for the pkg base concept was to allow this kind of use case, easily creating a minimal environment required to run a single service. With a monolithic base package set, you're going to need some mechanism other than packages to express the specific base subset package that you need and I think that you need to justify why this mechanism is better than using small individual packages. David
Re: CFT: FreeBSD Package Base
On 29/04/2019 14:19, Lev Serebryakov wrote: I'm not very interested in packetized base for "big servers" which contains full FreeBSD installation 'Big servers' may have a full FreeBSD installation in the base system, but they may also have hundreds of jails that want the absolute minimum required for the service that they're exporting. FreeBSD is currently suffering quite a lot from the lack of any solid story here. The vast majority of cloud deployments are now using some combination of Docker and Kubernetes or equivalents to spin up a large number of VMs and an even larger number of microservice containers within them. This should be something that FreeBSD is ideal for - jails perform better and provide a more coherent interface than the mess of cgroups and seccomp-bpf that Linux containers use. It *ought* to be trivial to create a jail that has basically nothing other than the core libraries (and maybe a shell) and is managed from the outside. Even the few FreeBSD core utilities that support jails don't really work like this (for example, I can use pkg to install something in a jail, but doing so implicitly installs a copy of the pkg tool inside the jail and invokes that). David
Re: Building freebsd on another OS
On 19/03/2019 00:01, Eric Joyner wrote: On Sun, Mar 17, 2019 at 6:35 AM Hans Petter Selasky wrote: See the freebsd-build utils package for Linux. --HPS Is there anything for Windows? Your best bet on Windows is to use the Windows Subsystem for Linux (WSL). This lets you install a Linux distro's userland on top of the NT kernel. If you install vcxsrv (available in chocolatey) then you can also run graphical applications. That said, FreeBSD also runs very well under Hyper-V, so if you have enough RAM then you may find that a better option. In my experience, compilers that spawn a new process for every file (e.g. gcc, clang) are noticeably faster in a FreeBSD VM on Windows than in WSL or native in Windows (and a *lot* faster than their cygwin versions). David
Re: GNU binutils 2.17.50 retirement planning
On 23 Nov 2018, at 16:23, Ed Maste wrote: > > For some time we have been incrementally working to retire the use of > obsolete GNU Binutils 2.17.50 tools. At present we still install three > binutils by default: > > as > ld.bfd > objdump We probably need to kill ld.bfd before 12.0. It predates ifunc and so interprets anything with an ifunc as requiring a copy relocation. This means that if you use it to link against any shared library (like, say, libc.so.7 in FreeBSD 12.0) that uses ifuncs then it will insert a relocation so that the ifunc resolver (which contains PC-relative addresses of other functions) will be copied into the main binary. This then causes your program to crash the first time anything calls memcpy, in a very difficult-to-debug way (it jumps into a random bit of your main binary, runs for a bit, and then dies). David
Re: Warnings about dlclose before thread exit. __cxa_thread_call_dtors
The FreeBSD implementation here looks racy. If one thread dlcloses an object while another thread is exiting, we can end up calling a function at an invalid memory address. It also looks as if it may be possible to unload one library, load another at the same address, and end up executing entirely the wrong code, which would have some serious security implications. The GNU/Linux equivalent of this function locks the DSO in memory until all references to it have gone away. A call to dlclose() on GNU/Linux will not actually unload the library until all threads with destructors in that library have exited. I believe that this reuses the same reference counting mechanism that allows the same library to be dlopened and dlclosed multiple times. It would be nice if the FreeBSD version had the same behaviour, because this is almost certainly expected in code written on other platforms. David

> On 18 Aug 2018, at 14:18, Willem Jan Withagen wrote:
>
> Hi,
>
> I've sent the question below to the Ceph-devel list, asking if any recent changes would be able to cause this. But then of course this could stem from FreeBSD libs, and of ports. So the question here is if anybody has gotten these "warnings" in other tools.
>
> --WjW
>
> Forwarded Message
> Subject: Warnings about dlclose before thread exit
> Date: Sat, 18 Aug 2018 14:46:35 +0200
> From: Willem Jan Withagen
> To: Ceph Development
>
> Hi,
>
> I have upgraded to FreeBSD ALPHA 12.0, but I don't think the errors stem from there. Although they could be in one of the libs that came along with the upgrade.
>
> I'm getting these warnings during rbd and ceph (maybe even more) invocations that indicate a possible problem because:
> ===
> It could be possible that a dynamically loaded library uses a thread_local variable but is dlclose()'d before thread exit. The destructor of this variable will then try to access the address, for calling it but it's unloaded, so it'll crash.
> We're using __elf_phdr_match_addr() to detect and prevent such cases and so prevent the crash.
> ===
> this is from: https://github.com/freebsd/freebsd/blob/master/lib/libc/stdlib/cxa_thread_atexit_impl.c
>
> Now it could be that dlclose() and thread exit are just close to one another. But still this has been embedded in libc since 2017, so I'm sort of expecting that a recent change has caused this.
>
> And as indicated it is a possible cause for crashes, because thread_exit is going to clean up things that are no longer there.
>
> Now the 20 dollar question is: Where was this introduced??
>
> Otherwise I'll have to try and throw my best gdb capabilities at it, and try to invoke an rbd call and see where it activates this warning.
>
> --WjW
Re: clang manual page?
On 6 Apr 2018, at 01:30, Pete Wright wrote: > > > On 04/05/2018 17:15, Steve Kargl wrote: >> This assumes that a gcc(1) is available on the system. >> >> % man gcc >> No manual entry for gcc >> >> If the system compiler is clang/clang++, then it ought to be >> documented better than it currently is. Ian's suggestion of >> 'clang --help' is even worse >> >> % clang --help | grep -- -std >> -cl-std= OpenCL language standard to compile for. >> -std=Language standard to compile for >> -stdlib= C++ standard library to use >> >> Does == ? >> > a quick google search turns up the following additional information: > > "clang supports the -std option, which changes what language mode clang uses. > The supported modes for C are c89, gnu89, c99, gnu99, c11, gnu11, c17, gnu17, > and various aliases for those modes. If no -std option is specified, clang > defaults to gnu11 mode. Many C99 and C11 features are supported in earlier > modes as a conforming extension, with a warning. Use |-pedantic-errors| to > request an error if a feature from a later standard revision is used in an > earlier mode." > > https://clang.llvm.org/docs/UsersManual.html I believe that the clang user manual referenced here is written in Sphinx, which is able to generate mandoc output as well as HTML. Perhaps we should incorporate the generated file in the next import? David
Re: atomic in i386 Current after CLANG 6 upgrade
On 15 Jan 2018, at 17:00, Jan Beich wrote: > > It wouldn't help (see below). Clang 6 accidentally made __atomic* work > enough to satisfy configure check but not for the port to build. I guess, > it also confuses configure in net/librdkafka and net-mgmt/netdata. > Can we (by which I probably mean emaste@) push out an EN that adds the atomic.c from compiler-rt to our libgcc_s? That should provide all of these helper functions. Clang assumes that they exist because both compiler-rt and vaguely recent libgcc_s provide them. Recent GCC will also assume that they exist and so the correct fix is probably for us to make them to exist. If this is difficult, then we can perhaps provide a port that compiles atomic.c into libatomic_fudge.so or similar and provides a libgcc_s.so that’s a linker script that forces linking to libatomic_fudge.so and libgcc_s.so. David
Re: inconsistent for() and while() behavior when using floating point
On 15 Jan 2018, at 14:49, Hans Petter Selasky wrote: > > The "seq" utility should use two 64-bit integers to represent the 10-base > decimal number instead of float/double. And then you need to step this pair > of integers. As the saying goes: > Sometimes, people think 'I have a problem and I will solve it with floating > point values' and then they have 1. problems. David
Re: [self base packages] pkg: packages for wrong OS version: FreeBSD:12:amd64
On 10 Jan 2018, at 18:53, Baptiste Daroussin wrote: > > I need to figure out a mechanism to make this simpler to handle to upgrade of > base system while keeping this safety belt for users. > > Any idea is welcome I believe the apt approach to this is to have a different verb (dist-upgrade vs upgrade) to perform complete version upgrades. Ideally, the proper fix would probably be to depend on a base package version, rather than OSVERSION, and if the base packages are not being used to synthesise a phantom set of base package metadata based on OSVERSION. David
Re: Programmatically cache line
On 5 Jan 2018, at 02:46, Jon Brawn wrote: > This idea of Arm big.LITTLE systems having cache lines of different lengths > really, really bothers me - how on earth is the cache coherency supposed to > work in such a system? I doubt the usual cache coherency protocols would work > - probably need a really MESSY protocol to deal with this config :-) I believe that the systems that have different cache line sizes (which ARM explicitly tells partners not to do) don’t allow cores from both the big and little clusters to be active at the same time - the OS is supposed to migrate everything entirely from one cluster to the other. The more complex designs, that allow mixes of cores from two or three different clusters that I’m aware of all have the same cache line size. David
Re: Programmatically cache line
On 3 Jan 2018, at 22:12, Nathan Whitehorn wrote: > > On 01/03/18 13:37, Ed Schouten wrote: >> 2018-01-01 11:36 GMT+01:00 Konstantin Belousov : >> On x86, the CPUID instruction leaf 0x1 returns the information in >> %ebx register. > Hm, weird. Why don't we extend sysctl to include this info? >>> For the same reason we do not provide a sysctl to add two integers. >> I strongly agree with Kostik on this one. Why add stuff to the kernel, >> if userspace is already capable of extracting this? Adding that stuff >> to sysctl has the downside that it will effectively introduce yet >> another FreeBSDism, whereas something generic already exists. >> > > Well, kind of. The userspace version is platform-dependent and not always > available: for example, on PPC, you can't do this from userland and we > provide a sysctl machdep.cacheline_size to userland. It would be nice to have > an MI API. On ARMv8, similarly, sometimes the kernel needs to advertise the wrong size. A few big.LITTLE cores have 64-byte cache lines on one cluster and 32-byte on the other. If you query the size from userspace while running on a 64-byte cluster, then issue the zero-cache-line instruction while migrated to the 32-byte cluster, you only clear half the size. Linux works around this by trapping and emulating the instruction to query the cache size and always reporting the size for the smallest cache lines. ARM tells people not to build systems like this, but it doesn’t always stop them. Trapping and emulating is much slower than just providing the information in a shared page, elf aux args vector, or even (often) a system call. To give another example, Linux provides a very cheap way for a userspace process to enquire which core it’s running on. Some more recent high-performance mallocs use this to have a second-layer per-core cache after the per-thread cache for free blocks. 
Unlike the per-thread cache, the per-core cache does need a lock, but it’s very unlikely to be contended (it will only be contended if either a thread is migrated in between checking and locking, so acquires the wrong CPU’s lock, or if a thread is preempted in the middle of the very brief fill operation). The author of the SuperMalloc paper tried doing this with CPUID and found that it was slower by a sufficient margin to almost entirely offset the benefits of the extra layer of caching. Just because userspace can get at the information directly from the hardware doesn’t mean that this is the most efficient or best way for userspace to get at it. Oh, and some of these things are useful in portable code, so having to write some assembly for every target to get information that the kernel already knows is wasteful. David
Re: Programmatically cache line
On 1 Jan 2018, at 05:09, Adrian Chadd wrote: > > On 30 December 2017 at 00:28, Konstantin Belousov wrote: >> On Sat, Dec 30, 2017 at 07:50:19AM +, blubee blubeeme wrote: >>> Is there some way to programmatically get the CPU cache line sizes on >>> FreeBSD? >> >> There are, all of them are MD. >> >> On x86, the CPUID instruction leaf 0x1 returns the information in >> %ebx register. > > Hm, weird. Why don't we extend sysctl to include this info? It would be nice to expose this kind of information via VDSO or similar. There are a lot of similar bits of info that people want to use for ifunc, and SVE is going to have a bunch of similar requirements. David
Re: libfuzzer support
On 24 Oct 2017, at 10:29, Kurt Jaeger wrote: > >> >> is libfuzzer (see https://llvm.org/docs/LibFuzzer.html) supported on FreeBSD >> head? >> It seems that it is not supported by /usr/bin/clang... >> >> Am I wrong and it is supported or is someone working on it? > > I searched in the Port devel/llvm50 and it looks like there's no support > for libfuzzer right now. > > But that would probably(?) be the best place to add it first. It’s been a while, but I’m pretty sure I had libFuzzer working from the upstream sources on FreeBSD. I found AFL (which also works on FreeBSD out of the box) more useful, so I didn’t spend much time with it. David
Re: There is *NO* abi stability in -head
On 23 Oct 2017, at 21:35, Mateusz Guzik wrote: > > Instead, the same can be reshuffled: > struct crap2 { >int i1; >int i2; >void *p1; >void *p2; > }; > > With offsets: > > 0x1000 i1 > 0x1004 i2 > 0x1008 p1 > 0x1010 p2 > > This is only 24 bytes. 2 ints can be placed together and since they add > up to 8 the p1 pointer gets the right alignment without extra padding. If you are making changes of this nature, please consider sorting in the other order. When we start seeing 128-bit pointers (which, with CHERI-like systems, may be sooner than you think) then this ordering will give you lots of padding, whereas putting the pointers first will not. David
Re: [RFC] future of drm1 in base
On 3 Sep 2017, at 06:31, Cy Schubert wrote: > > Thanks for the heads up Johannes. I currently have three machines that each > run ATI r128, mach64 and the last one an mga card. I normally use my i945 > and i915 laptops (mostly the former) but on occasion I may fire up X on one > of the other three. Having a drm-legacy port in the tree would benefit to > me. It’s been quite a while since I used any of these (though much of the list is very familiar), but the last machine I ran with a mach64 card was faster with the vesa driver than with the ‘accelerated’ driver (as I recall, it was a 500MHz Pentium III). I suspect the real question is not whether people have machines that use these cards, but whether they do anything with them where they’d notice the lack of acceleration. David
Re: svn commit: r322875 - head/sys/dev/nvme
On 25 Aug 2017, at 07:32, Mark Millard wrote: > > As I remember _Static_assert is from C11, not > the older C99. In pre-C11 dialects of C, _Static_assert is an identifier reserved for the implementation. sys/cdefs.h defines it to generate a zero-length array if the condition is true or a negative-length array if it is false, emulating the behaviour (though giving less helpful error messages). > > As I understand head/sys/dev/nvme/nvme.h use by > C++ code could now reject attempts to use > _Static_assert . In C++, _Static_assert is an identifier reserved for the implementation, but in C++11 or newer static_assert is a keyword. sys/cdefs.h defines _Static_assert to static_assert for newer versions of C++ and defines it to the C-before-11-compatible version for C++-before-11. TL;DR: We have gone to a lot of effort to ensure that these keywords work in all C/C++ dialects, please use them, please report bugs if you find a case where they don’t work. David
Re: swapfile query
On 19 Aug 2017, at 17:54, Cy Schubert wrote: > >> 3. should total swap be 1x 2x or some other multiple of RAM these days? > > Depends. If you're running some kind of database server or OLTP > application. Some vendors recommend no swap whatsoever while others > recommend some. What does your application vendor recommend? The main advantage of swap these days (on machines with that sort of amount of RAM) is to allow you to keep some file-backed memory objects in memory in preference to leaked (or very cold) heap memory. David
Re: [libelftc] internal library or not?
On 1 Aug 2017, at 12:36, Michael Zhilin wrote: > > Hi Ed, freebsd-current, > > I want to add C++ demangling into sysutils/pstack. In man pages I've found > elftc_demangle, exactly what I need. This function belongs to libelftc. "make > installworld" installs man pages and include files, but there is no > installed library. As a result, a compilation error occurs. Is pstack written in C++ or linking anything that is? If so, you will get __cxa_demangle[1] provided by the C++ ABI library (libcxxrt on FreeBSD, which currently uses the libelftc implementation, though might switch soon). If not, adding -lcxxrt will provide it. David [1] https://itanium-cxx-abi.github.io/cxx-abi/abi.html#demangler
Re: libstdc++ build failures on MIPS, PowerPC, Sparc
On 23 Jul 2017, at 23:54, Mark Millard wrote: > >>c++ -isystem ${OUTDIR}/tmp/usr/include/c++/v1 -std=c++11 -nostdinc++ >> -isystem ${OUTDIR}/tmp/usr/include -L${OUTDIR}/tmp/usr/lib >> -B${OUTDIR}/tmp/usr/lib --sysroot=${OUTDIR}/tmp -B${OUTDIR}/tmp/usr/bin -O >> -pipe -G0 -EB -mabi=32 -msoft-float -DIN_GLIBCPP_V3 -DHAVE_CONFIG_H >> -I${SRCDIR}/gnu/lib/libstdc++ -I${SRCDIR}/contrib/libstdc++/libsupc++ >> -I${SRCDIR}/contrib/gcc -I${SRCDIR}/contrib/libstdc++/include >> -I${SRCDIR}/contrib/gcclibs/include -I${SRCDIR}/contrib/libstdc++/include >> -I. -frandom-seed=RepeatabilityConsideredGood -fno-implicit-templates >> -ffunction-sections -fdata-sections -Wno-deprecated -c >> ${SRCDIR}/contrib/libstdc++/src/bitmap_allocator.cc -o bitmap_allocator.o This is quite a surprising build command. It’s using usr/include/c++/v1 for system includes, but usr/include/c++/v1 is the libc++ header directory. libstdc++ shouldn’t need to be built with C++11 support, but libc++ does, so this command looks like a combination of both libc++ and libstdc++ build flags all mashed together. David
Re: Getting PID of socket client
On 9 Jul 2017, at 14:25, Stefan Ehmann wrote: > > Don't know why the structs are not compatible, maybe because: > "The process ID cmcred_pid should not be looked up (such as via the > KERN_PROC_PID sysctl) for making security decisions. The sending process > could have exited and its process ID already been reused for a new process." Note that having the kernel provide a process descriptor instead of a PID would allow the userspace process to have race-free access to the PID. David
Re: [libltdl] removal from gnu ports
On 7 Jun 2017, at 10:33, blubee blubeeme wrote: > > Hi > > I'm sure I was reading yesterday on a different machine about the linker > flag -ld which has something to do with gnu dlopen and how it's ok to > remove those from your Makefile since FreeBSD handles dlopen and a few > other things from that header in the standard libc. > > Is that correct? Do you mean -ldl? If so, then yes. On Linux, the dl* symbols are only exported from ld-linux.so if you link against libdl. On FreeBSD, they are exported from rtld regardless. David
Re: build src with colored output?
On 16 May 2017, at 07:42, Johannes Lundberg wrote: > > Gonna answer myself here. Think I found a way. > > Add CFLAGS=-fcolor-diagnostics to env or /etc/src.conf > Do clean build, that is no -DNO_CLEAN,KERNFAST, etc. > > Makes it a lot easier to find the errors in a 16 threads build output... > > The mystery still remains though, why is color disabled for parallel > builds? It’s disabled for two reasons. The first is aesthetic - some people don’t like coloured output. I’m not going to debate that one. The other is technical. Unlike modern build tools, such as Ninja, bmake’s handling of multithreaded output is very bad. It simply gives every task the same output device, whereas Ninja gives each parallel job a pipe back to the build process and then merges the output itself. This means that you periodically encounter the case where one child process has sent a colour escape sequence to the output and then another process sends the next line, giving weird visual effects and reducing the utility of colour output. Ideally, we’d solve this by fixing bmake to behave more like a modern build tool and: - Giving each sub-process its own pipe. - Emitting the full compile command for all failed tasks. - Displaying only a summary for successful commands. Or we could find someone with the time to spend giving FreeBSD a modern build system, which would probably save us 1-2 man years of developer time each year overall. David
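The Ninja behaviour described above (each job writes to its own pipe and the build tool merges complete outputs) can be sketched like this; the jobs here are stand-ins for compile commands:

```python
import subprocess

# Instead of letting parallel jobs share the terminal (bmake's
# behaviour), run each job with its own pipe and only emit a job's
# output once it has completed, so colour escape sequences from
# different jobs can never interleave.

jobs = [
    ["sh", "-c", "printf 'job-%d line-1\\njob-%d line-2\\n' 1 1"],
    ["sh", "-c", "printf 'job-%d line-1\\njob-%d line-2\\n' 2 2"],
]

# Start every job first so they run concurrently...
procs = [subprocess.Popen(j, stdout=subprocess.PIPE, text=True) for j in jobs]

# ...then merge: each job's buffered output is appended as one unit.
merged = []
for p in procs:
    out, _ = p.communicate()
    merged.extend(out.splitlines())
# Each job's lines stay contiguous in `merged`, however the jobs raced.
```

The merging step is where a summary-for-success / full-command-for-failure policy would also slot in, since the tool sees each job's complete output and exit status.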
Re: libstd++ missing in drm-next-4.7
On 20 Jan 2017, at 14:06, Christian Schwarz wrote: > > I know clang is no longer built and installed as part of buildworld in > the FreeBSDDesktop repo, > but why isn't libstd++ present? > > … > > clang39 # now run clang, won't work, see below > Shared object "libc++.so.1" not found, required by "clang" For clarification, are you missing libc++ or libstdc++? libstdc++ hasn’t been a default part of the base system since 10.0 for x86 architectures. David smime.p7s Description: S/MIME cryptographic signature
Re: vt(4) chops off the leftmost three columns
On 13 Jan 2017, at 01:00, Ernie Luzar wrote: > > VT should have had better testing before becoming the default in 11.0. The choice was VT or no acceleration in X11, because all of the new DRI drivers depend on KMS, which requires VT. We only got VT in a usable state (and therefore usable accelerated X11 for most of the 10.x series) because the Foundation funded the work. More contributions to VT are always welcome, but the choice was not between VT or something better tested, it was between VT and black screens. David
Re: Is there possible run a MacOS X binary
On 5 Dec 2016, at 19:31, Kevin P. Neal wrote: > >> Is there any emulator like linuxator to run Mac OS X binaries, or >> is ther any licensing problem? > > It may be possible to make an emulator for Darwin (the OS that Mac OS sits > on top of), but an emulator for Mac OS would probably require a legal copy > of Mac OS. > NetBSD started working on one, and had it in a state where it could run the Darwin version of XFree86 (which should give you some idea of how old it was), but it couldn’t run the Mac Window Server and I don’t think it’s maintained. It’s not very interesting, because you need all of the frameworks and programs from OS X for it to be useful, and the license prohibits using them in such a way (and even if you did get it to work, you wouldn’t be able to run Mac apps at the same time as X apps without running XQuartz, and at that point you may as well just run macOS). There is a more recent project called Darling (https://www.darlinghq.org) that tries to run Mac apps on Linux using a custom Mach-O loader, bits of GNUstep, and some of their own stuff. No one has ever tried using it on FreeBSD, to the best of my knowledge. It’s GPLv3, so I’m not motivated to contribute to it (though it does incorporate some of my code in various places). David
Re: Optimising generated rules for SAT solving (5/12 are duplicates)
On 23 Nov 2016, at 18:11, A. Wilcox wrote: > > Or you could just, I don't know, email the diff as a patch using git > send-email like normal people instead of using GitHub's walled garden. > That way, people without GitHub accounts can still comment on it. GitHub pull requests are branches in the recipient’s git repo. Anyone can see the patch without logging in, either via the web interface or by pulling the relevant branch. If you want to send comments via email based on this copy of the patch, then that’s up to you, though personally I’d much prefer the GitHub code review interface to anything email based. David
Re: how to reduce the size of /usr/share/i18n data?
On 3 Nov 2016, at 19:34, Rodney W. Grimes wrote: > > Depressing fact: The i18n directory alone is larger than > a full 386BSD 0.1+sources+patchkit install. Is the depressing thing here that even something as recent as 386BSD 0.1 assumed that ASCII was enough for the whole world? David
Re: Call for Testing: Switching back to our BSD licensed dtc(1)
On 24 Jul 2016, at 12:42, David Chisnall wrote: > > I’ve now fixed both of these in the version here: > > https://github.com/davidchisnall/dtc Andy filed a number of issues on GitHub, which are now all fixed. In particular, /include/ should now work everywhere and /delete-node/ now works. I’ve added a CMake build system and tests to the version on GitHub, which should make it easier for people to build out of tree (for example, in a port). Please file issues as you find them. I hope to find time to look at the overlay support soon; I believe that what it requires from dtc is relatively small. David
Re: Wayland work status
On 12 Aug 2016, at 00:18, Lundberg, Johannes wrote: > > Currently by default evdev create /dev/input/eventX devices with 600 > permission. These need to be accessible for non-root users. What is the > best solution? Should we create a "input" group similar to "video" group is > being used for rights to access /dev/drm devices? There is a similar problem for the drm devices (by default, users can’t use 3D acceleration). A devfs.conf policy can change the permissions. I’d suggest that we create a default group called something like console or local, put new users there by default, and make drm and evdev devices accessible by this group. David
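A hedged sketch of what such a policy could look like (the `local` group name and the rule numbers are assumptions for illustration, not an agreed convention):

```
# /etc/devfs.rules (hypothetical ruleset)
[localrules=10]
add path 'input/event*' mode 0660 group local
add path 'drm/*'        mode 0660 group local

# /etc/rc.conf
devfs_system_ruleset="localrules"
```

Users in the `local` group would then get access to both the evdev and drm nodes without running as root.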
Re: SafeStack in base
On 27 Jul 2016, at 23:55, Shawn Webb wrote: > > I'm interested in getting SafeStack working in FreeBSD base. Below is a > link to a simplistic (maybe too simplistic?) patch to enable SafeStack. > The patch applies against HardenedBSD's hardened/current/master branch. > Given how simple the patch is, it'd be extremely easy to port over to > FreeBSD (just line numbers would change). We’ve worked with the authors of the SafeStack work. There are some changes to libc and a few other support libraries needed for it to work, which are in the GitHub repository. They’ve also done some work to address issues with things like Firefox and V8 that need to be able to walk the stack, allocate their own stacks for userspace threads, and so on. It was not enabled for FreeBSD 11 because SafeStack imposes a lot of long-term ABI constraints that it’s not clear we want to support indefinitely given the ‘Missing the point(er)’ Oakland paper last year. It does increase the work factor for attackers, so has some security benefit, but if bypassing it is something that’s going to be added to exploit toolkits then it’s of little practical benefit. One middle-ground that we’ve considered is only supporting it for statically linked binaries. This absolves us of the need to support the ABI indefinitely, and still provides a lot of the benefit. David
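For the statically linked middle ground, the compiler side is just a flag; a hedged sketch (clang's `-fsanitize=safe-stack` is the real option, the rest of the command line is illustrative):

```
# Build a statically linked binary with the unsafe stack split out.
# Static linking avoids the long-term shared-library ABI commitment
# discussed above.
clang -fsanitize=safe-stack -static -O2 -o prog prog.c
```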
Re: Call for Testing: Switching back to our BSD licensed dtc(1)
Thanks, On 19 Jul 2016, at 20:49, Emmanuel Vadot wrote: > > > Hello, > > I've just tried bsd dtc on all arm dts that we have. > It doesn't seems to handle multiple include directories. > Here is how to reproduce : > > $ export SRCROOT=/path/to/fbsd/src > $ export MACHINE=arm > $ cd $SRCROOT/sys/boot/fdt/dts/arm > $ $SRCROOT/sys/tools/fdt/make_dtb.sh $SRCROOT/sys > beaglebone-black.dts . converting beaglebone-black.dts > -> ./beaglebone-black.dtb Unable to open file > '/home/elbarto/Work/freebsd/freebsd.git//sys/boot/fdt/dts/arm/am33xx-clocks.dtsi'. > No such file or directory Unable to open file > '/home/elbarto/Work/freebsd/freebsd.git//sys/boot/fdt/dts/arm/tps65217.dtsi'. > No such file or directory > > Both dtsi files are include with /include/ (i.e. not handled by cpp). > make_dtb.sh specify : -i $S/boot/fdt/dts/${MACHINE} -i > $S/gnu/dts/${MACHINE}, it looks like the second one isn't added to the > list. It actually is added to the list and found. The bug here is in error reporting - it was reporting an error for each file that it tried to open but couldn’t, even when it subsequently found the correct file. > > Trying tegra124-jetson-tk1-fbsd.dts give this : > converting tegra124-jetson-tk1-fbsd.dts > -> /tmp/bsd_dtb//tegra124-jetson-tk1-fbsd.dtb Error on line 1214: > Expected node name interrupt-affinity = <&{/cpus/cpu@0}>, > ^ > Error on line 1214: Expected ; at end of property > interrupt-affinity = <&{/cpus/cpu@0}>, > ^ > Failed to parse tree. Unhappy face! There was a FIXME relating to this in the code. I’ve now fixed both of these in the version here: https://github.com/davidchisnall/dtc I’ll push the changes to FreeBSD svn soon, but in the meantime anyone wanting to test can just clone that repo over the dtc directory in their src checkout. David
Re: Call for Testing: Switching back to our BSD licensed dtc(1)
On 23 Jul 2016, at 05:16, Warner Losh wrote: > > On Fri, Jul 22, 2016 at 1:03 AM, David Chisnall wrote: >> On 22 Jul 2016, at 03:40, Warner Losh wrote: >>> >>> On Wed, Jul 20, 2016 at 9:51 AM, David Chisnall >>> wrote: >>>> On 20 Jul 2016, at 16:46, Warner Losh wrote: >>>>> >>>>> I've been trying to get the final spec for it. Right now it's a >>>>> disorganized series >>>>> of patches, some of which have been merge some that haven't. I'll send >>>>> you a >>>>> copy when I can find something better than "here's the code." >>>> >>>> Thanks. From the information I can find, it looks as if most of the >>>> machinery required to implement it is already in dtc, so it should >>>> (hopefully) just be a matter of adding a new keyword to detect plugins, a >>>> scan to find the cross-references (or possibly reusing the existing one) >>>> and then a little bit of extra logic. >>> >>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/overlay-notes.txt >>> has the specs. >> >> Hmm, that’s even less complete than the docs that I’d found. > > Can you share that then? I only found tutorial-style things: https://www.raspberrypi.org/documentation/configuration/device-tree.md https://learn.adafruit.com/introduction-to-the-beaglebone-black-device-tree/device-tree-overlays I was hoping for something a bit more like a spec. >> From that, it seems as if the only thing that dtc needs to do is support the >> /plugin/ syntax and emit a section describing unresolved references? > > I believe so, yes. That sounds like it should be easy. > >> Or is dtc also expected to be able to do the merging? > > I think that's TBD. We'll need, at the very least, an update libfdt > from upstream that knows how to do the merging, as well as changes to > /boot/loader to be able to pick and choose which plugins to add to the > base. 
> If we can do all that with the BSDL DTC and it passes all the > other GPL test cases, then we may have a winner and we can get started > integrating plugin support to /boot/loader. I know my RPi would be > happier if it had a 'standard' DTB with a plugin for whatever 1-wire > stuff I'm playing with today. Patrick Wildt was running the GPL’d dtc test suite and I fixed all of the things that he reported broken. We’re now using a less-efficient algorithm for calculating the cross-references so that we resolve things in the same order, which makes doing a diff on the dts produced by the two tools easier. David
Re: Call for Testing: Switching back to our BSD licensed dtc(1)
On 22 Jul 2016, at 03:40, Warner Losh wrote: > > On Wed, Jul 20, 2016 at 9:51 AM, David Chisnall wrote: >> On 20 Jul 2016, at 16:46, Warner Losh wrote: >>> >>> I've been trying to get the final spec for it. Right now it's a >>> disorganized series >>> of patches, some of which have been merge some that haven't. I'll send you a >>> copy when I can find something better than "here's the code." >> >> Thanks. From the information I can find, it looks as if most of the >> machinery required to implement it is already in dtc, so it should >> (hopefully) just be a matter of adding a new keyword to detect plugins, a >> scan to find the cross-references (or possibly reusing the existing one) and >> then a little bit of extra logic. > > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/devicetree/overlay-notes.txt > has the specs. Hmm, that’s even less complete than the docs that I’d found. From that, it seems as if the only thing that dtc needs to do is support the /plugin/ syntax and emit a section describing unresolved references? Or is dtc also expected to be able to do the merging? David
Re: Call for Testing: Switching back to our BSD licensed dtc(1)
On 20 Jul 2016, at 16:46, Warner Losh wrote: > > I've been trying to get the final spec for it. Right now it's a > disorganized series > of patches, some of which have been merge some that haven't. I'll send you a > copy when I can find something better than "here's the code." Thanks. From the information I can find, it looks as if most of the machinery required to implement it is already in dtc, so it should (hopefully) just be a matter of adding a new keyword to detect plugins, a scan to find the cross-references (or possibly reusing the existing one) and then a little bit of extra logic. David
Re: Call for Testing: Switching back to our BSD licensed dtc(1)
On 20 Jul 2016, at 05:48, Warner Losh wrote: > > My concern with this is overlay support. Overlay support for "shields" or > "badges" is going into upstream GPL dtc shortly. The patches have been > kicking around for a while and are almost ready to go in. We need this feature > and I'd planned on bringing it into the tree as soon as it goes into the git > repo. It's sorely needed for things like RPi and BBB that make it super > easy to add boards with parts that need a bit more DTB into the FDT to > describe them. There's been patches to bring this functionality into the > loader and into the in-tree gpl dtc floating around for a while now. Is the syntax / semantics for overlays documented somewhere? David
Re: FreeBSD-11.0-BETA1-amd64-disc1.iso is too big for my 700MB CD-r
On 13 Jul 2016, at 10:17, O. Hartmann wrote: > > A CD is still a used media, but it starts getting squeezy on it as certain > software starts to grow - as CLANG/LLVM does. Maybe it is time to have also > CDs > as "miniboot" and DVDs for a more complete installation media? I completely agree. If you’re installing somewhere that’s not firewalled off completely from the Internet, then it’s typically faster to boot the bootonly ISO and then download the rest on the target machine (from a local mirror if necessary). Even 100Mb/s ethernet is faster than most CD drives. If you need offline installs, then the DVD with a bunch of other packages on it is probably what you want, not a CD that just contains the base system. Even if you can’t boot from USB (as I can’t on one of my FreeBSD machines), it should be possible to stick the base distributions and a package repo on a USB stick, use the boot-only ISO to boot and then install from USB. David
Re: CURRENT: bhyve and Kernel SamePage Merging
If this paper is the one that I think it is, then I was one of the reviewers. Their attack is neat, but it depends quite a lot on being able to deterministically trigger deduplication. Their proof-of-concept exploit was on Windows (and the JavaScript attack was really fun) and I’m not convinced that it would work reliably on Linux or VMware ESX, which both defer deduplication for as long as possible to avoid NUMA-related overheads. We don’t currently have a FreeBSD implementation, but if someone wanted to provide one then a defence against this attack would be fairly simple: count the number of CoW faults that a process is receiving and, if it reaches a certain threshold, remove all of its memory from the set of eligible pages. The attack relies on being able to repeatedly trigger CoW faults on the same set of pages and time whether they occur. At least some existing implementations will make this impossible, as these pages will repeatedly be deduplicated and then duplicated, and this is already a pathological case that most memory deduplication implementations need to handle (as well as being a security hole, it’s also a big performance killer). Kib has been working on ASLR for FreeBSD (I think it’s in 11?), but at this point it’s more of a checkbox item than a serious mitigation technique. It adds a little bit of work for attackers, but there are so many attacks that can bypass ASLR even with strong entropy that it just increases the work factor slightly. If you’re running code written in C, then you’re better off relying on Capsicum compartmentalisation to limit what the attacker can do once they’ve compromised it. David > On 8 Jun 2016, at 16:01, O. Hartmann wrote: > > A couple of days I got as a responsible personell for a couple of systems a > warning about > the vulnerabilities of the mechanism called "Kernel SamePage Merging". On this > year's IEEE > symposion there has been submitted a paper by Bosman et al., 2016, describing > an attack > on KSM.
> This technique, also referred to as memory/page deduplication, seems > to be > vulnerable by design under certain circumstances. I guess the experts of the > readers here > do already know, but I consider myself a non-expert and therefore, I'd like > to ask about > the status of that kind of development in FreeBSD. I read about a project of > last year's > Google Summer of Code 2015 targetting KSM on FreeBSD. > > In Linux, this deduplication techniques is implemented since kernel 2.6.38 > and Windows > Kernel uses this techniques since Windows 8.1 and sibblings (also Windows > Server). We > were strongly advised to disable those "features" in Windows clients, servers > and Linux > servers, if used. > > Other papers describe successful attacks on memory contents and ASLR by > misusing KSM. On > Windows, mmap() entropy is 19bit, on Linux usually 28bit. And FreeBSD (if > planned/used/already implemented?)? > > If you are interested I could provide links or PDFs of the papers I already > gathered > about that subject (it is not much, simply google for "KSM FReeBSD" or KSM > deduplication > ASLR). > > Thanks in advance, > > oh
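The CoW-fault-threshold defence David sketches (count faults per process; past a threshold, evict that process's pages from the dedup-eligible set) in toy form; every name below is hypothetical:

```python
# Toy model of the CoW-fault-threshold defence (hypothetical names;
# a real implementation would live in the VM subsystem).
COW_FAULT_THRESHOLD = 3

class DedupEngine:
    def __init__(self):
        self.eligible = {}       # process id -> set of dedup-eligible pages
        self.cow_faults = {}     # process id -> CoW fault count

    def register(self, proc, pages):
        self.eligible[proc] = set(pages)
        self.cow_faults[proc] = 0

    def on_cow_fault(self, proc):
        # An attacker probing dedup timing triggers repeated CoW faults;
        # past the threshold, stop deduplicating this process entirely,
        # removing the timing side channel.
        self.cow_faults[proc] += 1
        if self.cow_faults[proc] >= COW_FAULT_THRESHOLD:
            self.eligible[proc] = set()

engine = DedupEngine()
engine.register("attacker", ["p1", "p2"])
engine.register("innocent", ["p3"])

for _ in range(5):               # attacker hammers the same pages
    engine.on_cow_fault("attacker")
engine.on_cow_fault("innocent")  # normal workload: one stray fault
```

The innocent process keeps the memory savings; the hammering process loses deduplication, which is the pathological case anyway.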
Re: why 100 packages are evil
On 25 Apr 2016, at 06:48, Gerrit Kühn wrote: > >> Yes. It will be replaced by 'pkg upgrade' -- as far as I know, that's >> the plan for 11.0-RELEASE. > > Hm... I never had any troubles with freebsd-update, it always "just > worked" for me. OTOH, I remember having several issues with pkg, requiring > to fix databases manually and so on. There are two kinds of issues with freebsd-update. The first is fairly common: it’s slow and creates a lot of files. If you read the forums, you’ll find a lot of issues about this. Updates from one patchlevel to the next are pretty straightforward, but on both the VMs that I use for FreeBSD and a slow AMD machine with ZFS it takes around an hour (sometimes more) for freebsd-update to jump one major release. After that, it takes pkg a minute or two to update the 2-3GB of packages that need upgrading. Minor releases can often take tens of minutes on these systems. The many-files issue can cause inode exhaustion. On one machine, I just checked and have 20K files in /var/db/freebsd-update/files. If you’re using UFS for /var, it’s fairly easy for freebsd-update to run out of inodes. Trying to recover a FreeBSD system that can no longer create files in /var is not the most fun use of my time. The second issue is that it sometimes just fails to work. I have twice had freebsd-update manage to become confused about versions and install a kernel that couldn’t read the filesystem. I’ve had similar confusion where (on a box that I administer mostly via the network and where physical access is a pain) it installed a version of ifconfig from an older userland than the kernel. These are all on machines where freebsd-update has been responsible for every upgrade after the initial install from CD. Most depressingly, it spends ages doing checksums of every file in the system, determines that they don’t match the expected ones, and then installs the wrong one anyway.
I have been using the testing versions of pkg on most FreeBSD machines since it became available. Since pkg 1.0 was released, I have had far fewer issues with pkg than with freebsd-update and almost all of those were to do with poor information in the ports tree and the rest were either UI or performance issues. We have a lot tighter control over the packaging metadata for the base system. David
Re: [CFT] packaging the base system with pkg(8)
On 21 Apr 2016, at 21:48, Dan Partelly wrote: > > Yes, you are right it misses the media change handler in devd.conf. > maybe it should bementioned somewhere in a man page if it is not > already there. Thanks for the pointer. > > Anyway, if I would have written the system, what I would have done > is to consolidate both user mode daemons into one and make this > daemon a client of devd, fstyp a library, and handle all removable > media inside transparent to the user, without requiring any modifications > to devd.conf, and without relaying on shell scripts. > > Is there any reason you did not took this approach ? This is not > criticism, I am genuinely interested. One of the current shortcomings of devd is that it does not provide a good mechanism for other running processes to request notification of events dynamically. Ideally, when the automounter daemon starts, it should connect to devd via an IPC channel and request notification of the specific events that it wants. This is a known problem (which extends to more than just the automounter) and there are some tentative plans to fix it, but they’re not yet concrete and won’t be in 11.0, though hopefully will appear at some point in the 11.x series. David
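For reference, the kind of static devd.conf handler being discussed looks roughly like this (the device pattern and action script path are illustrative assumptions):

```
# /etc/devd.conf fragment: react to a newly created disk device
# (e.g. inserted removable media).  Today this is fixed at devd start;
# a running daemon cannot subscribe to such events dynamically.
notify 100 {
        match "system"    "GEOM";
        match "subsystem" "DEV";
        match "type"      "CREATE";
        match "cdev"      "da[0-9]+";
        action "/usr/local/sbin/automount-attach $cdev";
};
```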
Re: [CFT] packaging the base system with pkg(8)
On 20 Apr 2016, at 15:53, Paul Mather wrote: > > Arguably, a packaged base will make it easier to help people, because it > makes more explicit the dependencies of different parts of the system. It's > been my experience that the interactions and impact of the various > /etc/src.conf settings are not entirely well known, at least to end-users. In particular, with libxo output from pkg, we can get a machine-readable detailed dump of all of the base system packages installed and provide a single ‘system dump’ command that includes this, relevant information from dmesg, and so on for bug triaging. David
Re: [CFT] packaging the base system with pkg(8)
On 20 Apr 2016, at 06:06, Julian Elischer wrote: > > my problem with 400 packages is that is is hard to decide what you are > actually running.. or is it FreeBSD 11? is it FreeBSD 10.95342453? > you have no way to tell exactly what you have without comparing all the > packages to a known list. > uname doesn't mean much, nor does "__FreeBSD_version" if everything comes > with its own versions. I think that it’s very important, for the purpose of a constructive discussion, to separate the two concerns: 1) The number of packages that the base system has. 2) The user interface by which the packages are presented. I believe (and, please, correct me if I’m wrong), that all of the complaints in this thread have been about the UI, not about the underlying mechanism. That’s not to say that they’re unimportant (quite the reverse), but that they can be solved concurrently with the task of preparing the base system for distribution in packaged form. Having fine-grained packages makes a lot of things possible that are difficult otherwise, but we do need to fix the UI. > the 'leaf' concept in pkg helps with this a bit, but we've always considered > FreeBSD bas as a sort of monalithic entity that moves forward together. > > you are running 10.1p8 pr 10.2p1 that tells you all you need to know. > If you now need to take into account 400 different dimensions you have a much > harder way to describe what you have.. > > I mentioned this before but I think hte answer is to make a change on the > way that "meta packages" are displayed by default in pkg. Part of the problem is that we don’t actually have metapackages. A metapackage is a package that *contains* other packages. What we actually have is empty packages that *depend on* other packages. 
The package tool has no way of distinguishing a package that you install for the sole purpose of installing its dependencies from one that you install because you want it (though having no files inside it might serve as an heuristic that would work). > If I install the meta package, I really don't want to see all the sub > packages tat are unchanged unless I add '-v'. On the other hand if I upgrade > a sub package I want to see that in the context of the metapackage. Similarly > if I uninstall of the subpackages. Doing this properly also requires the notion of optional default and non-default subpackages. I should be prevented from uninstalling (at least, without a lot of -f) non-optional subpackages. For example, on a small system where I’m not using zfs, I might uninstall the libzfs subpackage from freebsd-libs, but if I try to uninstall the libc package then the system should shout at me. > > so something like this would remove most of my objections: > > # pkg info > = system packages > FreeBSD-networking-11.0.2_1FreeBSD networking subsystem and > commands > - ipfw-11.0.2-1 ipfw tools (uninstalled) > - fbsd-tcpdump-11.0.2-1 Built in tcpdump tools (uninstalled) > * openssl-11.0.2-2Openssl support (upgraded CVE-123456 > FreeBSD-base-base-11.0.2-1 The absolute minimum booting base > system > [...] > external packages == > apache22-2.2.31Version 2.2.x of Apache web server with > prefork MPM. > apr-1.5.2.1.5.4Apache Portability Library > autoconf-2.69 Automatically configure source code on many > Un*x platforms > autoconf-wrapper-20131203 Wrapper script for GNU autoconf > [...] > > > Maybe I uninstalled ipfw because I use pf and I install the ports tcpdump so > I can remove the built in one. > I have installed a new openssl due to a bugfix.. > > This gives me a real instant feel for what I'm running.. 
> if I add -v then I see all 400 packages, but I really don't want to see them > 99.99% of the time > > I believe the "leaf" method gives close to this but if we could get the above > I'd have absolutely no objections. Thank you for this suggestion. I think that this is the sort of UI that makes a lot of sense (though having subpackage support would also be useful for ports). It’s also the kind of thing that I think we could persuade the Foundation to fund if there is not enough volunteer time to implement it. David
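The contains/depends distinction discussed above, in concrete terms: today's 'metapackage' is an empty package whose manifest only lists dependencies, roughly like this hypothetical UCL fragment (all package names and versions are illustrative):

```
# Hypothetical +MANIFEST for an empty "metapackage": no files, only deps.
name: "FreeBSD-11.0"
version: "11.0"
comment: "Meta-package pulling in the full base system"
deps {
    "FreeBSD-base":       { version: "11.0" }
    "FreeBSD-libs":       { version: "11.0" }
    "FreeBSD-networking": { version: "11.0" }
}
```

Nothing in this manifest tells pkg that the listed packages are *part of* the metapackage rather than ordinary prerequisites, which is why the UI cannot currently fold them away without a heuristic.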
Re: [CFT] packaging the base system with pkg(8)
On 19 Apr 2016, at 08:44, Julian Elischer wrote: > >> All this can be done by meta-packages which depend on larger package groups. > Currently Metapackage is a way to make 10 packages look like 11 packages. > The framework needs to understand to hide the 10 internal packages if they > are part of a metapackage. I agree, and patches to do this are very welcome. Currently, pkg is short of contributors. I see basically three use cases for a packaged base: 1) People wanting a FreeBSD install to use as a server or workstation. These people will install the FreeBSD 11 metapackage and not care that it is a few hundred MBs. It would be nice if the pkg tool could present this as a single package in list views, but that’s a UI issue with pkg, not an issue with the number of packages in the base system. 2) People wanting to install embedded systems. Anyone who has tried to run FreeBSD on a system with a small amount of flash storage will have encountered the pain of having to use some kind of ad-hoc update. Being able to manage updates to these systems with the same packaging tool as you manage big systems is a big improvement. 3) People wanting to install service jails (sorry, containerised applications). These want the smallest possible attack surface and so want the smallest amount of the base system that they can have. Here, small packages are an advantage. It will take a little while for ports to learn enough about the granularity of the base system for this to really be useful, but it would be great to be able to install nginx, for example, in a jail and have only the handful of libraries that it needs. The big advantage of going with small packages initially, however, is that it will allow us to get some data on what the correct groupings are. If we have large packages, then it’s very hard to tell which subsets of the packages people want. That’s exactly the situation that we’re in now: we know some people don’t want docs or games, but that’s about all that we know. 
It’s easy to move to a model where we have *fewer* packages in the future, but it’s harder to split them. That also applies to dependencies. If I know that a port depends on the shell, then it’s easy to update it from depending on a sh package to depending on a core system utilities package automatically, but it’s very hard to do an automatic update in the other direction. David
Re: Packaging the FreeBSD base system with pkg(8)
On 5 Apr 2016, at 10:07, Gergely Czuczy wrote: > > Also, quite often entries from the base system are changed manually, think of > root's/toor's password. Are such cases going to be dealt with properly > between upgrades, including self-built-and-packaged base systems? Currently > it can be a PITA with mergemaster to handle things like master.passwd > properly between upgrades, automation so far wasn't famous on doing it > properly. Mergemaster uses a 2-way merge: it has the version that you have installed and the version that’s being proposed for installation. Etcupdate and pkg perform a 3-way merge: they have the pristine version, the version that you have made changes to, and the new version. If you have changed an entry and so has the package, then you will get a conflict that you have to resolve manually. If you have added lines and so has the upstream version, then that should cleanly apply. Similarly, if you and upstream have both modified different lines, then there should be no problem. David
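The difference can be seen with a record-level 3-way merge over master.passwd-style entries (a simplified sketch keyed by username; real etcupdate merges are line-based, and the entries here are made up):

```python
def three_way_merge(base, mine, theirs):
    """Merge keyed records: keep whichever side changed a record
    relative to the pristine base; flag a conflict only when both
    sides changed the same record differently."""
    merged, conflicts = {}, []
    for key in set(base) | set(mine) | set(theirs):
        b, m, t = base.get(key), mine.get(key), theirs.get(key)
        if m == t:                 # same on both sides (or both deleted)
            if m is not None:
                merged[key] = m
        elif m == b:               # only upstream changed it
            if t is not None:
                merged[key] = t
        elif t == b:               # only the admin changed it
            if m is not None:
                merged[key] = m
        else:                      # both changed it: manual resolution
            conflicts.append(key)
    return merged, conflicts

base   = {"root": "root:*:0:0", "daemon": "daemon:*:1:1"}
mine   = {"root": "root:$hash:0:0",        # admin set root's password
          "daemon": "daemon:*:1:1",
          "jdoe": "jdoe:$hash2:1001:1001"} # ...and added a user
theirs = {"root": "root:*:0:0",
          "daemon": "daemon:*:1:1",
          "tests": "tests:*:977:977"}      # upstream added an entry

merged, conflicts = three_way_merge(base, mine, theirs)
# Root's local password and both added entries survive, with no conflict.
```

A 2-way merge lacks `base`, so it cannot tell "the admin changed this" from "upstream changed this" and must ask about every difference.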
Re: FreeBSD MachO File format, your comments on it.
On 24 Mar 2016, at 13:42, Damjan Jovanovic wrote:
>
> ELF itself is a disaster. Symbol lookup in ELF is process scoped, not
> library scoped like Windows's PE and Mac's Mach-O, so same named
> symbols from different libraries in the same process (loaded through
> any number of levels of indirection) can and do clash, resulting in
> memory corruption. This is why hacks like symbol versioning,
> RTLD_DEEPBIND on GNU's libc and -Bdirect on Solaris were invented.

This problem is addressed by some of the work that Sony has done recently that they are about to upstream to Clang/LLVM.

> We suffer from this problem badly on FreeBSD, as Clang's C++ standard
> library and GCC's standard library don't have fully compatible ABIs,
> so when both are loaded into the same process and the incompatible C++
> features are used -> memory corruption -> crash. Eg. compile Apache
> OpenOffice with GCC on a system built with Clang, and you'll see even
> the unit tests crash.

That shouldn’t happen, as libstdc++ and libc++ have different symbols (libc++ puts its symbols in the std::__1 inline namespace). The problem can come from mixing libsupc++ and libcxxrt, but that’s only an issue if you have not built libstdc++ against libcxxrt.

David
Re: FreeBSD MachO File format, your comments on it.
On 24 Mar 2016, at 12:05, mokhi wrote: > > Hi. > > I'm agreed with point you told about improvements we can do for fat > format (or more). > And I'm ready to do them (with your helps, sure :D). > > But we need short steps and more of them (a local proverb :D) IMO. > If we completely do this image activator, then we can have 2 sub plans > for OSX emulation and/or fat data segment redesign. FatELF binaries do not depend on this work. Fat Mach-O binaries do, but the pain of working with Mach-O is not worth it (talk to some of the Apple toolchain team some time about how much effort Mach-O is - I’m glad it’s their problem and not mine). I don’t believe that the work to support FatELF would be particularly large. The format is pretty simple (basically a small header that tells you where within the binaries to find the real ELF for your architecture). Teaching all of the associated bits of the toolchain (especially debuggers) about it is a lot of tedious work, but not particularly hard if someone is motivated to do it. Teaching clang and lld to produce fat binaries as part of normal ELF compilation would be a bit more work. > I saw netbsd's way of mach-kernel/darwin emulation. > They have been stopped in porting/simulating quartz (the reason > described lack of developers' interest IIRC), and that relates to OSX > emulating. That wasn’t the only reason. The XNU kernel interfaces for graphics and sound are large and mostly undocumented (at least, publicly) and change between OS X revisions. Even if you implement *all* of this, then you’d still need most of an OS X userland to be able to run OS X applications. This would involve violating the OS X EULA unless you ran it on a Mac and the only thing that you’d then be able to do that you couldn’t with OS X is run FreeBSD binaries in the background or in XQuartz (which you can already do pretty well with xhyve on OS X). If you are willing to violate the OS X EULA then you should probably just run OS X in a VM. 
> If we wanna complete/continue that way, first we need this image > activator, what's your opinion about it? > > BTW, in brief I believe we can have strategies to do (sub plans) and > it worth (at least for me, because I'll learn good things). What's > your opinion? As a learning exercise, I definitely encourage you to continue. Writing a new image activator will teach you a lot. If you want to do some of the rtld work to make a partial Mach-O rtld then you’ll learn even more. I just don’t think that the end result will be something that’s particularly useful to anyone. David
Re: FreeBSD MachO File format, your comments on it.
Hi, I’d slightly question the assertion that Mach-O is a well-designed format. For example, it has a hard limit of 16 section types, doesn’t support COMDATs and so on. OS X uses a load of magic section names to work around these limitations. Note that a Mach-O image activator is relatively easy, but a Mach-O rtld is far more complex. It might be possible to port dyld from OS X, but as I recall it depends quite heavily on the Mach kernel interfaces. On fat binaries, note that the support in the file format is pretty trivial. Far more support is needed in the image activator and rtld to determine the correct parts and map only them. If you’re interested in doing this work, then I’d recommend looking at this proposed specification for fat ELF binaries: https://icculus.org/fatelf/ That said, I’m not totally convinced that fat binaries are actually a good solution (unless you’re willing to go a step further than Apple did and merge data sections) - NeXT managed very well shipping fat bundles without using fat binaries and even had a special mode in ditto to strip out the foreign architectures when copying a bundle from a network share to a local filesystem. Persuading clang to emit FreeBSD Mach-O binaries is probably harder than you think. It’s quite easy to persuade it that Mach is a valid file format for FreeBSD, but there are a *lot* of places where people conflate ‘is Mach’ with ‘is Darwin’ in the Clang and LLVM sources. Finding all of these and making sure that they’re really checking the correct one is difficult. Emulating OS X binaries may be interesting. NetBSD had a Mach / XNU compat layer for a while. The problem here is that the graphics stack interfaces on OS X are completely different from any other *NIX system (as are the kernel interfaces for sound), so the most that they could do was run command-line and X11 Mac apps - not especially useful. 
Actually emulating OS X apps will need far more than that - OS X ships with about 500MB of frameworks, many of which are used by most applications. The GNUstep project is undermanned and hasn’t been able to keep up with the changes to the core Foundation and AppKit frameworks, let alone the rest. David > On 24 Mar 2016, at 09:13, mokhi wrote: > > Hi guys. > I'm Mahdi Mokhtari (aka Mokhi between FreeBSD friends). > > I am working on adding Mach-O binary format to supported formats for FreeBSD. > Not for emulations on first step, but as a native supported format > just like a.out [or Elf] > (though it can go in both ways too). > > There are good reasons to have Mach-O format support IMO. > It's well/clear designed file format. > Can supports multiple Arch by default (It's Fat Format). > Because of its Fat Format support, it can even help porting/packaging easier > for > projects such as Freebsd-arm or others IMO :D. > At end (even not among its interesting parts, maybe :D) point, it > leads and helps to have > OSX emulation support on FreeBSD. > > BTW, i've coded[1] Mach-O support for FreeBSD with helps of > FreeBSD-ppl on IRC about various aspects of this works (from > fundamental points of VM-MAP, to SysEntVec for Mach-O format) and > with help of Elf and a.out format codes and mach-o references. > It's in Alpha state (or before it) IMO, as I'm not sure about some of > its parts, but I've tested a mach-o formatted binary with it and it at > least loads and maps it segments correctly :D. (it was actually a > simple "return 0" C Code, compiled in a OSX, if you know how can I > force my FreeBSD clang to produce mach-o files instead of ELF I'd be > happy to know it, and I appreciate :D) > > I'd like to have your helps and comments on it, in hope to make it better > and make it ready for review. > > Thanks and thousands of regards, Mokhi. 
> > == > [1] https://github.com/m0khi/FreeBSD_MachO
Re: mount_smbfs(8): support for SMBv3.02?
On 8 Mar 2016, at 17:59, Kurt Jaeger wrote: > >> Indeed. Both Solaris and OS X have SMB2 implementations. If >> anyone is interested in working on this, then the Apple implementation >> may provide some inspiration: >> >> http://www.opensource.apple.com/source/smb/ > > Is there any way to download this as tgz or something ? > > It looks painful to get it from that site. Tarballs are here: http://www.opensource.apple.com/tarballs/smb/ Latest one is here: http://www.opensource.apple.com/tarballs/smb/smb-759.40.1.tar.gz David
Re: [CFT] packaging the base system with pkg(8)
On 8 Mar 2016, at 15:14, Slawa Olhovchenkov wrote: > > Yes, I undertund this. But what profit of this? Addtional size is > small, many small packages is bad. We already have expirense with > spliting Xorg to many small packages -- no profit of this. The X.org case is similar, but not quite the same. The X.org split was to ease development, but came at a cost of packaging because you almost always want all of X (or, at least, most of it - there are a few things such as Xephyr that many users may want to skip). In FreeBSD, we *do* have a compelling case for installing a small subset of the base system: service jails (or ‘containerised applications’ as the kids are calling them). We want to be able to install, for example, owncloud and nginx or ejabberd in a jail with only the bare minimum required for them to start and run. We want updates to these jails to be fast and we want disk usage (and install time) to be low. In such a jail, I want a shell, the parts of sbin needed to do network setup, the libraries that these ports depend on, *and nothing else*. We’re still a way away from doing that. Comparing the installed sets can be simplified with some improvements to the pkg UI, for example allowing a set of packages to be aggregated into a single entry. This is not quite the same as the metapackage concept. If you install everything, then a FreeBSD-base-all metapackage might show up as a single thing unless you ask for a verbose output. We can also present these in a hierarchical manner, so that you can drill down and see more detail if you want to. In terms of comparing packages, if you’re doing that visually then you are likely to have problems anyway, unless your eyes and brain work far better than most humans. We can make that much easier by providing libxo output in pkg and allowing you to have a simple jq script that tells you what the differences are. 
David
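As a sketch of the comparison workflow described above: `pkg query '%n-%v'` (a real pkg format string) dumps the installed set as plain text, and comm(1) then reports the differences mechanically, with no visual diffing needed. The sample data below stands in for the output you would get on two real hosts:

```shell
# On each system you would run:  pkg query '%n-%v' | sort > host-X.txt
# Sample data standing in for two hosts' installed package sets:
printf 'FreeBSD-clibs-11.0\nFreeBSD-rc-11.0\nFreeBSD-sh-11.0\n' > host-a.txt
printf 'FreeBSD-clibs-11.0\nFreeBSD-sh-11.0\nnginx-1.8.1\n' > host-b.txt

# comm(1) on the sorted lists gives the differences directly:
echo '--- only on host-a:'
comm -23 host-a.txt host-b.txt
echo '--- only on host-b:'
comm -13 host-a.txt host-b.txt
```

With libxo-style JSON output from pkg, the same comparison could be done structurally with a short jq script instead of sorted text lists.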
Re: mount_smbfs(8): support for SMBv3.02?
On 8 Mar 2016, at 13:19, Miroslav Lachman <000.f...@quip.cz> wrote: > > It would be really nice if somebody can bring better support for FreeBSD's > SMB/CIFS mount. Maybe through FreeBSD Foundation projects. Indeed. Both Solaris and OS X have SMB2 implementations. If anyone is interested in working on this, then the Apple implementation may provide some inspiration: http://www.opensource.apple.com/source/smb/ David
Re: Packaging the FreeBSD base system with pkg(8)
On 28 Jan 2016, at 17:45, NGie Cooper wrote:
>
> Also, consider that you're going to be allowing upgrades from older RELEASE
> versions of the OS which might be using a fixed copy of pkgng -- how are you
> going to support that?

I believe that the plan is to promote the pkg tool somewhat closer to the base system. Upgrades will do the same sort of thing that they do currently for ports:

1. First check if the version of pkg is the latest
2. If not, upgrade it
3. Do the real upgrade

The package for pkg itself is simply a tarball. It may be advantageous to separate the pkg and pkg-static binaries into different packages, so that pkg can always install pkg-static and pkg-static can always update pkg. There is no guarantee that the pkg tool from X.Y can install any packages from X+n.Y.m other than the pkg-static binary, which can then upgrade the rest of the system. The provision of pkg-static prevents us from being in the situation that I encountered trying to upgrade a Debian system (and ending up with a mess requiring a full reinstall) where apt needed a newer glibc and the glibc package needed a newer apt to install it. We will always provide a pkg-static for every supported branch that can be installed by any earlier version of pkg (because it’s just extracting a single-file archive - and in the absolute worst case you can do this by hand) and can install newer packages.

David

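Because the package really is just a tarball, the absolute worst-case bootstrap described above is a plain tar extraction. A sketch with a stand-in package (the file name and layout are illustrative; real pkg packages are xz-compressed, gzip here just keeps the sketch portable):

```shell
# Build a stand-in for a pkg package: a tarball containing one binary.
mkdir -p stage/usr/local/sbin
printf '#!/bin/sh\necho pkg-static stub\n' > stage/usr/local/sbin/pkg-static
chmod +x stage/usr/local/sbin/pkg-static
tar -czf pkg-1.7.tar.gz -C stage usr

# Worst-case bootstrap: extract just the static binary by hand, then
# let it take over and upgrade everything else.
mkdir -p dest
tar -xzf pkg-1.7.tar.gz -C dest usr/local/sbin/pkg-static
dest/usr/local/sbin/pkg-static   # on a real system: pkg-static upgrade
```

No working pkg, shared libraries, or matching glibc-style dependency chain is needed for that extraction, which is exactly what breaks the apt/glibc deadlock described above.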
Re: Too low PTHREAD_STACK_MIN value?
On 23 Jan 2016, at 08:58, Maxim Sobolev wrote: > > For what it's worth, I agree with David. This looks like definite misuse of > the constant. If app X requires min size of stack of Y, it's fullish of it if > to expect our PTHREAD_STACK_MIN somehow accommodate that. It should be really > using MAX(PTHREAD_STACK_MIN, Y) to set its stack instead. Should be easy to > patch and it needs to be reported to the upstream vendor(s) instead. Don't > forget that bumping this limit, no matter how small, will get multiplied by > the number of threads running, which could be in many thousands After talking to Ed, I’m not sure I was correct in my initial assessment. The code in pthread’s thread exit routine is calling the unwinder, and that’s what’s exhausting the stack space. This means that a thread function that just does return 0 will run out of stack space. That said, existing values of PTHREAD_STACK_MIN are part of the ABI and if we’re going to bump it then we need to make sure that we do something clever with existing binaries to ensure that, when they ask for 2KB of stack, we give them more (which can be problematic if they’re allocating their own stack). I’d much prefer a solution where we don’t expose the HP unwind interfaces from libeh (it’s fine to do so from a separate libunwind) and we don’t allocate that much space in the unwinder. David
Re: Too low PTHREAD_STACK_MIN value?
On 21 Jan 2016, at 16:02, Ed Maste wrote: > > I found that lang/polyml uses PTHREAD_STACK_MIN for a trivial signal > handler thread it creates[1]. They found it was too small and > implemented a 4K minimum bound to fix polyml on FreeBSD[2]. Even if > this isn't really the intended use of PTHREAD_STACK_MIN it suggests > the 2K x86 minimum may indeed be too low. > > I ran into this while trying LLVM's libunwind, which requires more > stack space. 2K is certainly too low with LLVM libunwind. Is it > reasonable to just increase it to say 8K? I don’t really like this solution. PTHREAD_STACK_MIN is the size for a stack that does not do anything. You should never use it without adding the amount that you are going to need (which might be nothing if you are running code from a language that does not use a conventional C-style stack, but still wants to use OS threads). Making it larger because a specific kind of thing that some consumers want to do with it needs more space is definitely against the spirit of the value and potentially harmful as it means that people using it correctly will be using a lot more memory per thread. David
Re: Is updating contrib/gcc desirable?
On 31 Dec 2015, at 11:59, Yuri wrote: > > Would it be the right way of solving the problem if I submitted an update of > contrib/gcc and contrib/gcclibs from the gcc-5.3.0 tree? No. > Any pitfalls with this? The newer versions of GCC are GPLv3 and so are unacceptable for the FreeBSD base system. Most of libgcc in base now comes from compiler-rt. The correct solution would be to identify the missing functionality in compiler-rt so that it can be fixed (with patches ideally, but even a list of the missing functions and what they do would be helpful). Things like libquadmath, which are only needed for the port, belong in ports. David
Re: Dwarf problem with gcc and gdb
The gdb in the base system doesn’t support DWARF4. Use gdb791 or lldb-devel from ports (I believe gdb791 is probably a better bet on ARM, currently). David > On 8 Dec 2015, at 09:02, Ray Newman wrote: > > Hi, > > Compiled using gcc (FreeBSD Ports Collection) 4.8.5 on arm (Raspberry Pi - > several versions); BSDmakefile attached (make test used). > gdb gives: > GNU gdb 6.1.1 [FreeBSD] > Copyright 2004 Free Software Foundation, Inc. > GDB is free software, covered by the GNU General Public License, and you are > welcome to change it and/or distribute copies of it under certain conditions. > Type "show copying" to see the conditions. > There is absolutely no warranty for GDB. Type "show warranty" for details. > This GDB was configured as "armv6-marcel-freebsd"...Dwarf Error: wrong > version in compilation unit header (is 4, should be 2) [in module > /home/ray/mumps/mumps] > > I need to fix this to find the *real* problem. > > Thanks, Ray > > > # Makefile for MUMPS BSD > # Copyright (c) Raymond Douglas Newman, 1999 - 2014 > # with help from Sam Habiel > > CC = gcc > LIBS = -lm -lcrypt > EXTRA = -O -Wall -Iinclude > > .ifmake test > EXTRA = -O0 -g -gdwarf-2 -gstrict-dwarf -ggdb -Wall -Iinclude > .endif > > SUBDIRS=compile database init runtime seqio symbol util xcall > > RM=rm -f > > PROG = mumps > > OBJS= compile/dollar.o \ >compile/eval.o \ >compile/localvar.o \ >compile/parse.o \ >compile/routine.o \ >database/db_buffer.o \ >database/db_daemon.o \ >database/db_get.o \ >database/db_ic.o \ >database/db_kill.o \ >database/db_locate.o \ >database/db_main.o \ >database/db_rekey.o \ >database/db_set.o \ >database/db_uci.o \ >database/db_util.o \ >database/db_view.o \ >init/init_create.o \ >init/init_run.o \ >init/init_start.o \ >init/mumps.o \ >runtime/runtime_attn.o \ >runtime/runtime_buildmvar.o \ >runtime/runtime_debug.o \ >runtime/runtime_func.o \ >runtime/runtime_math.o \ >runtime/runtime_pattern.o \ >runtime/runtime_run.o \ >runtime/runtime_ssvn.o \ 
>runtime/runtime_util.o \ >runtime/runtime_vars.o \ >seqio/SQ_Util.o \ >seqio/SQ_Signal.o \ >seqio/SQ_Device.o \ >seqio/SQ_File.o \ >seqio/SQ_Pipe.o \ >seqio/SQ_Seqio.o \ >seqio/SQ_Socket.o \ >seqio/SQ_Tcpip.o \ >symbol/symbol_new.o \ >symbol/symbol_util.o \ >util/util_key.o \ >util/util_lock.o \ >util/util_memory.o \ >util/util_routine.o \ >util/util_share.o \ >util/util_strerror.o \ >xcall/xcall.o > > .c.o: >${CC} ${EXTRA} -c $< -o $@ > > all: ${OBJS} >${CC} ${EXTRA} -o ${PROG} ${OBJS} ${LIBS} > > test: ${OBJS} >${CC} ${EXTRA} -o ${PROG} ${OBJS} ${LIBS} > > clean: >rm -f ${OBJS} ${PROG} ${PROG}.core
Re: libXO-ification - tangent on JSON choices
On 16 Nov 2015, at 21:04, Garance A Drosehn wrote:
>
> First let me say that I wish I had more time to contribute to the project,
> but I seem to be caught in variety of long drawn-out hassles in real-life.
> Otherwise I would already know the answer to this question:
>
> Is there some specification for what JSON is created by the various FBSD
> utilities? I did not see any discussion of that in the earlier threads
> on this. I don't mean "what is syntatically correct JSON?", I mean some
> kind of guidelines of what property-names would be used across commands,
> and what values should be for those properties.

There is not, currently. Until 11.0 is branched, it should be considered to be in flux, but after that we are going to provide backwards compatibility (i.e. you can add fields to the JSON, but you can’t remove them). Before this point, it would be good to have some consensus on what properties should be common and what each should be providing, and regression tests for the JSON (the XML is isomorphic to the JSON, so probably doesn’t need testing separately). If you want to provide a first draft of some recommendations, it would be very welcome.

David
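The add-only compatibility promise above implies a rule for consumers: read only the fields you rely on and ignore the rest, so that fields added in a later release cannot break you. A sketch in Python (the JSON shape here is a hypothetical example loosely modelled on libxo output from wc, not a documented schema):

```python
import json

# Hypothetical libxo-style JSON from a base utility.
output = '''
{"wc": {"file": [
  {"filename": "/etc/motd", "lines": 24, "words": 172, "chars": 1031,
   "bytes": 1031}
]}}
'''

def total_lines(doc: str) -> int:
    """Pick out only the fields we rely on. Any extra fields a later
    release adds (say, "bytes") are simply never looked at, which is
    exactly what the add-only compatibility promise permits."""
    data = json.loads(doc)
    return sum(f["lines"] for f in data["wc"]["file"])

print(total_lines(output))  # → 24
```

Regression tests for the JSON would then assert on exactly these relied-upon property names, which is what makes "you can add fields but not remove them" an enforceable promise.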
Re: libXO-ification - Why - and is it a symptom of deeper issues?
On 16 Nov 2015, at 17:09, Elizabeth Myers wrote:
>
> It seems to boil down to the golden rule: he who has the gold, makes the
> rules. Juniper wanted it, they're a non-trivial donor to the FreeBSD
> foundation and employ many devs, so they got their way.
>
> That's all there is to it.

I think that’s a mischaracterisation. (Core hat on:) Juniper’s status as a donor to the FreeBSD Foundation has absolutely no bearing on their ability to get code committed. The libxo code was accepted because it solves a problem that a number of FreeBSD users (and downstream consumers of FreeBSD) have. Libucl is primarily developed by a PhD student. He is not backed by a large corporation or an organisation that donates to the FreeBSD Foundation. His code is accepted for precisely the same reason as libxo: it solves a problem that many people have identified is real. Development is, however, driven by people willing to actually do the work, and being willing to listen to feedback from other developers. If someone started committing a load of code that is only of use to them and makes life harder for everyone else, then Core would be quick to request that it be reverted. This rarely happens, because we try hard to avoid giving commit bits to people who don’t play well with others. Phil has put a lot of effort into libxo and, most importantly, listened to community feedback. For example, his recent changes to libxo from feedback at BSDCam (where he led a session discussing it and related topics) mean that libxo can now be used to trivially add localisation to a load of base system utilities. This is something that was not in the Juniper system that inspired libxo, because it is not something that they need (Juniper’s interface provides a choice between US English and US English).
This is part of the reason why Phil was recently awarded his commit bit: he isn’t just writing code that Juniper wants, he’s writing code that benefits both Juniper and the wider community and is willing to adapt it to provide wider benefits. This is *precisely* how open source is supposed to work: Juniper benefits by (eventually) being able to reduce their diffs to upstream, everyone else benefits from having the new features, and development is led by consensus on what is useful.

(Core hat off:) I slightly disagree with Alan’s comment that librarification of base system utilities addresses the same problems. There are three related problems:

- Being able to expose the same functionality as the base system utilities to C code.
- Being able to expose this functionality via bindings to high-level languages.
- Being able to drive complex scripting from the command line and shell scripts.

Libxo directly addresses the last of these points and inefficiently addresses the first and second. Librarification would address the first and (possibly) the second. They are overlapping requirements. For the second, the combination would likely be the best solution for a lot of requirements (i.e. have library calls that produce the JSON that Lua/Python/Ruby/JavaScript/Intercal can turn into native objects). I would very much like to see all of the base system functionality exposed in terms of libraries, but this is a huge challenge. Good API design is *hard*. Tools like libucl and libxo allow people to build high-level wrappers and experiment with API design easily outside of the base system, in such a way that does not give us difficult C API compatibility requirements that we have to respect for the next few decades, and will allow us to be more informed when it comes to designing these APIs.

David