Re: process killed: text file modification
On 2017. 03. 21. 3:40, Rick Macklem wrote: Gergely Czuczy wrote: [stuff snipped] Actually I want to test it, but you guys are so vehemently discussing it, I thought it would be better to do so once you had settled your analysis of the code. Also, me not hitting the problem wouldn't necessarily mean it's solved, since that would only mean the codepath for my specific use case works. There might be other things in there as well that I don't hit. I hope by vehemently, you didn't find my comments nasty. If they did come out that way, it was not what I intended and I apologize. Let me know which patch I should test, and I will see to it in the next couple of days, when I get the time to do it. I've attached it here again and, yes, I would agree that the results you get from testing are just another data point and not definitive. (I'd say this statement is true of all testing of nontrivial code.) Thanks in advance for any testing you can do, rick So, I've copied the patched kernel over, and apparently it's working properly. I'm not getting the error anymore. So far I've only done a quick test; should I do something more extensive, like build a couple of ports or something over NFS? ___ freebsd-current@freebsd.org mailing list https://lists.freebsd.org/mailman/listinfo/freebsd-current To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
Re: process killed: text file modification
On Thu, Mar 23, 2017 at 12:55:09AM +, Rick Macklem wrote: > Wow, this is looking good to me. I had thought that the simple way to make > ncl_putpages() go through the buffer cache was to replace ncl_writerpc() with > VOP_WRITE(). My concern was all the memory<->memory copying that would > go on between the pages being written and the buffers allocated by > VOP_WRITE(). > If there is a way to avoid some (if not all) of this memory<->memory copying, > then > I think it would be a big improvement. UIO_NOCOPY means that the uio is only updated to indicate the operation as performed, but no real copying occurs. This is exactly what the _putpages() case needs, since the data is already in the pages. When buffers are created for the corresponding file offsets, the appropriate pages are put into the buffer's page array and the data appears in the buffer with zero copying. This is how the generic putpages code works for local filesystems, e.g. UFS. > > As far as the commit goes, you don't need to do anything if you are calling > VOP_WRITE(). > (The code below VOP_WRITE() takes care of all of that.) > --> You might want to implement a function like nfs_write(), but with extra > arguments. > If you did that, you could indicate when you want the writes to happen > synchronously > vs. async/delayed and that would decide when FILESYNC would be > specified. > Yes, this is what I want to improve in the patch. As I noted, I added translation of the VM_PAGER_PUT_* flags into IO_* flags, but the IO_* flags need more code. Most important is IO_ASYNC, which probably should become similar to the current !IO_SYNC ncl_write(), but without clustering. You mentioned that NFSWRITE_FILESYNC/NFSWRITE_UNSTABLE should be specified, and it seems that this is managed by the B_NEEDCOMMIT buffer flag. I see that B_NEEDCOMMIT is cleared in ncl_write(). > As far as I know, the unpatched ncl_putpages() is badly broken for the > UNSTABLE/commit case. 
For UNSTABLE writes, the client is supposed to > know how to write the data again if the server crashes/reboots before > a Commit RPC is successfully done for the data. (The ncl_clearcommit() > function is the one called when the server indicates it has rebooted > and needs this. It makes no sense whatsoever and breaks the client > to call it in ncl_putpages() when mustcommit is set. All mustcommit > being set indicates is that the write RPC was done UNSTABLE and the > above applies to it. Some servers always do FILESYNC, so it isn't ever > necessary to do a Commit RPC or redo the write RPCs.) > > Summary. If you are calling VOP_WRITE() or a similar call above the > buffer cache, then you don't have to worry about any of this. Ok, thanks. > > > Things that need to be done include adding the missing handling of the IO flags to > > ncl_write(). > > > + if (error == 0 || !nfs_keep_dirty_on_error) > > vnode_pager_undirty_pages(pages, rtvals, count - > > uio.uio_resid); > If the data isn't copied, will this data still be available to the NFS buffer > cache code, > so that it can redo the writes for the UNSTABLE case, if the server reboots > before a > Commit RPC has succeeded? As long as the buffers are there (i.e., not marked clean), the data is there. Of course, userspace can modify the data in the pages if a writeable mapping exists, but that is expected. Oh, I remembered one more question I wanted to ask in the previous mail. With the patch, ncl_write() can be called from delayed contexts like the pagedaemon, or after all writeable file descriptors referencing the file are closed. Wouldn't some calls to VOP_OPEN()/VOP_CLOSE() around the VOP_WRITE() be needed there? > > > - if (must_commit) > > - ncl_clearcommit(vp->v_mount); > No matter what else we do, this should go away. As above, it breaks the NFS > client > and basically forces all dirty buffer cache blocks to be rewritten when it > shouldn't > be necessary. 
Re: process killed: text file modification
On 2017. 03. 21. 3:40, Rick Macklem wrote: Gergely Czuczy wrote: [stuff snipped] Actually I want to test it, but you guys are so vehemently discussing it, I thought it would be better to do so once you had settled your analysis of the code. Also, me not hitting the problem wouldn't necessarily mean it's solved, since that would only mean the codepath for my specific use case works. There might be other things in there as well that I don't hit. I hope by vehemently, you didn't find my comments nasty. If they did come out that way, it was not what I intended and I apologize. Let me know which patch I should test, and I will see to it in the next couple of days, when I get the time to do it. I've attached it here again and, yes, I would agree that the results you get from testing are just another data point and not definitive. (I'd say this statement is true of all testing of nontrivial code.) Thanks in advance for any testing you can do, rick I finally had the time to give it a go, but unfortunately there was something wrong with the built image: it was unable to find the root device during boot. I will try to just copy the kernel over a bit later and see how it goes. I hope there are no ABI changes between the two revisions (the previously built world and the patched kernel).
Re: CURRENT: aarch64 /poudriere installation fails with: sh: cc: not found
On 2017-Mar-22, at 7:53 PM, Mark Millard wrote: > O. Hartmann ohartmann at walstatt.org wrote on Wed Mar 22 14:10:00 UTC 2017: > > . . . >> make[2]: "/pool/CURRENT/src/share/mk/bsd.compiler.mk" line 145: Unable >> to determine compiler type for CC=cc -target aarch64-unknown-freebsd12.0 >> --sysroot=/usr/obj/arm64.aarch64/pool/CURRENT/src/tmp >> -B/usr/local/aarch64-freebsd/bin/. Consider setting COMPILER_TYPE. >> *** Error code 1 > > . . . > > > See bugzilla 215561 for a prior report (powerpc64 context). > > > Other poudriere related notes: > > When I experimented some with poudriere I also submitted: > > 216084 > 216083 > 215561 (referenced above) > 215541 > > I've not tried much since then but will get back to it > someday. > > Comments 10 and 11 of: > > https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=216229 > > might also be relevant. In effect: avoid using CFLAGS+= > or CXXFLAGS+= in a SRCCONF/SRC_ENV_CONF file used in > poudriere. (The problem is more general than that.) > CFLAGS.clang+= and the like should work okay. Or be > sure to use a __MAKE_CONF file in poudriere for the > likes of CFLAGS+= . (But this last has issues if > system vs. port building needs different options.) > > > Other notes tied to arm64 or pine64+ 2GB specifically: > > Because you happen to be using arm64 you may want to > look at bugzilla 217239 and 217138 (which I've since > judged as likely to have the same underlying cause). > 217138's original context was tied to buildworld -j4 on > a pine64+ 2GB (but I've managed to make a small example > program or two that shows relevant behavior). > > With 2GB of RAM buildworld -j4 can force some processes > to be swapped-out at times [zero RES(ident memory)]. > There can be problems with trashed (zeroed) memory > pages when swapped back in if the memory was allocated > before the parent process forks. (That is my small > example's way of producing the issue.) 
The parent, child, > and other ancestor processes that were swapped out can see > zeroed memory in the same general address range(s) > as the child does. (Nasty cross-process damage.) > > There is more to it (it is complicated): See the > last half of: > > https://lists.freebsd.org/pipermail/freebsd-arm/2017-March/015934.html > > for a summary without all the code examples and the > like, including avoiding going through my learn-as-I-went > issues. (Also submitted to freebsd-hackers asking for > information.) I have occasionally typed amd64 in my > various materials where it should have been arm64. > > The zeros caused my self-hosted buildworlds to stop > (sh asserting) and I had to restart them twice per > buildworld on the pine64+ 2GB (presumes certain things > were rebuilt). > > I've seen the memory trashing on an rpi3 as well, with > no device in common with the pine64+ 2GB. > > Another issue is that while I've been able to do > builds on the pine64+ 2GB I have found that running > 4 "openssl speed" commands at the same time causes > an eventual sudden/silent shutdown, probably for > insufficient thermal control. This is with 6 heat > sinks and a fan. So the pine64+ 2GB may be marginal > from that point of view. (Yep: powerd was running.) > I've not tried 2 or 3 "openssl speed"s in parallel. > Nor have I tried on an rpi3: I was targeting having > more RAM. > > > Yet other notes: > > With some local adjustments I did get as far as having > an amd64-host to-armv6/v7 cross-build environment. > But I ended up deciding that I'd need to have access to > a more substantial amd64 environment than I had used in > order to satisfy my time preferences and in order to > deal with the resource limitations of the context that > I used for the experiments --ones that I did not want > to change in that context. > > I ended up deleting the packages, jail, and all the > files involved. I'll get back to such someday. 
Bryan Drewery just added a Comment #19 to Bugzilla 215561: It's most likely the same issue as Bug 212877. There's a patch in there if you want to try it out. See Also: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=212877 === Mark Millard markmi at dsl-only.net
Re: CURRENT: aarch64 /poudriere installation fails with: sh: cc: not found
O. Hartmann ohartmann at walstatt.org wrote on Wed Mar 22 14:10:00 UTC 2017: . . . > make[2]: "/pool/CURRENT/src/share/mk/bsd.compiler.mk" line 145: Unable > to determine compiler type for CC=cc -target aarch64-unknown-freebsd12.0 > --sysroot=/usr/obj/arm64.aarch64/pool/CURRENT/src/tmp > -B/usr/local/aarch64-freebsd/bin/. Consider setting COMPILER_TYPE. > *** Error code 1 . . . See bugzilla 215561 for a prior report (powerpc64 context). Other poudriere related notes: When I experimented some with poudriere I also submitted: 216084 216083 215561 (referenced above) 215541 I've not tried much since then but will get back to it someday. Comments 10 and 11 of: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=216229 might also be relevant. In effect: avoid using CFLAGS+= or CXXFLAGS+= in a SRCCONF/SRC_ENV_CONF file used in poudriere. (The problem is more general than that.) CFLAGS.clang+= and the like should work okay. Or be sure to use a __MAKE_CONF file in poudriere for the likes of CFLAGS+= . (But this last has issues if system vs. port building needs different options.) Other notes tied to arm64 or pine64+ 2GB specifically: Because you happen to be using arm64 you may want to look at bugzilla 217239 and 217138 (which I've since judged as likely to have the same underlying cause). 217138's original context was tied to buildworld -j4 on a pine64+ 2GB (but I've managed to make a small example program or two that shows relevant behavior). With 2GB of RAM buildworld -j4 can force some processes to be swapped-out at times [zero RES(ident memory)]. There can be problems with trashed (zeroed) memory pages when swapped back in if the memory was allocated before the parent process forks. (That is my small example's way of producing the issue.) The parent, child, and more ancestor processes that swapped-out can see zeroed memory in the same general address range(s) as the child does. (Nasty cross-process damage.) 
There is more to it (it is complicated): See the last half of: https://lists.freebsd.org/pipermail/freebsd-arm/2017-March/015934.html for a summary without all the code examples and the like, including avoiding going through my learn-as-I-went issues. (Also submitted to freebsd-hackers asking for information.) I have occasionally typed amd64 in my various materials where it should have been arm64. The zeros caused my self-hosted buildworlds to stop (sh asserting) and I had to restart them twice per buildworld on the pine64+ 2GB (presumes certain things were rebuilt). I've seen the memory trashing on an rpi3 as well, with no device in common with the pine64+ 2GB. Another issue is that while I've been able to do builds on the pine64+ 2GB I have found that running 4 "openssl speed" commands at the same time causes an eventual sudden/silent shutdown, probably for insufficient thermal control. This is with 6 heat sinks and a fan. So the pine64+ 2GB may be marginal from that point of view. (Yep: powerd was running.) I've not tried 2 or 3 "openssl speed"s in parallel. Nor have I tried on an rpi3: I was targeting having more RAM. Yet other notes: With some local adjustments I did get as far as having an amd64-host to-armv6/v7 cross-build environment. But I ended up deciding that I'd need to have access to a more substantial amd64 environment than I had used in order to satisfy my time preferences and in order to deal with the resource limitations of the context that I used for the experiments --ones that I did not want to change in that context. I ended up deleting the packages, jail, and all the files involved. I'll get back to such someday. === Mark Millard markmi at dsl-only.net
Re: crash: umount_nfs: Current
It’s VERY intermittent (i.e., not easy to reproduce). Sorry. -- Larry Rosenman http://www.lerctr.org/~ler Phone: +1 214-642-9640 E-Mail: l...@lerctr.org US Mail: 17716 Limpia Crk, Round Rock, TX 78664-7281 On 3/22/17, 9:20 PM, "Rick Macklem" wrote: Larry Rosenman wrote: > Err, I’m at r315289…. I think the attached patch (only very lightly tested by me) will fix this crash. If you have an easy way to test it, that would be appreciated, rick
Re: crash: umount_nfs: Current
Larry Rosenman wrote: > Err, I’m at r315289…. I think the attached patch (only very lightly tested by me) will fix this crash. If you have an easy way to test it, that would be appreciated, rick [attachment: clntcrash.patch]
Re: process killed: text file modification
Konstantin Belousov wrote: [stuff snipped] > Below is something to discuss. This is not finished, but it worked for > the simple tests I performed. Clustering should be somewhat handled by > the ncl_write() as is. As an additional advantage, I removed the now > unneeded phys buffer allocation. > > If you agree with the approach on principle, I want to ask what to do > about the commit stuff there (I simply removed that for now). Wow, this is looking good to me. I had thought that the simple way to make ncl_putpages() go through the buffer cache was to replace ncl_writerpc() with VOP_WRITE(). My concern was all the memory<->memory copying that would go on between the pages being written and the buffers allocated by VOP_WRITE(). If there is a way to avoid some (if not all) of this memory<->memory copying, then I think it would be a big improvement. As far as the commit goes, you don't need to do anything if you are calling VOP_WRITE(). (The code below VOP_WRITE() takes care of all of that.) --> You might want to implement a function like nfs_write(), but with extra arguments. If you did that, you could indicate when you want the writes to happen synchronously vs. async/delayed and that would decide when FILESYNC would be specified. As far as I know, the unpatched ncl_putpages() is badly broken for the UNSTABLE/commit case. For UNSTABLE writes, the client is supposed to know how to write the data again if the server crashes/reboots before a Commit RPC is successfully done for the data. (The ncl_clearcommit() function is the one called when the server indicates it has rebooted and needs this. It makes no sense whatsoever and breaks the client to call it in ncl_putpages() when mustcommit is set. All mustcommit being set indicates is that the write RPC was done UNSTABLE and the above applies to it. Some servers always do FILESYNC, so it isn't ever necessary to do a Commit RPC or redo the write RPCs.) Summary. 
If you are calling VOP_WRITE() or a similar call above the buffer cache, then you don't have to worry about any of this. > Things that need to be done include adding the missing handling of the IO flags to > ncl_write(). > + if (error == 0 || !nfs_keep_dirty_on_error) > vnode_pager_undirty_pages(pages, rtvals, count - > uio.uio_resid); If the data isn't copied, will this data still be available to the NFS buffer cache code, so that it can redo the writes for the UNSTABLE case, if the server reboots before a Commit RPC has succeeded? > - if (must_commit) > - ncl_clearcommit(vp->v_mount); No matter what else we do, this should go away. As above, it breaks the NFS client and basically forces all dirty buffer cache blocks to be rewritten when it shouldn't be necessary. rick
Re: CURRENT: slow like crap! ZFS scrubbing and ports update > 25 min
On 2017-03-22 16:02, O. Hartmann wrote: > CURRENT (FreeBSD 12.0-CURRENT #82 r315720: Wed Mar 22 18:49:28 CET 2017 > amd64) is > annoyingly slow! While scrubbing is working on my 12 GB ZFS volume, updating > /usr/ports > takes >25 min(!). That is an absolute record now. > > I do an almost daily update of world and ports tree and have periodic > scrubbing ZFS > volumes every 35 days, as it is defined in /etc/defaults. Ports tree hasn't > grown much, > the content of the ZFS volume hasn't changed much (~ 100 GB, its fill is > about 4 TB now) > and this is now for ~ 2 years constant. > > I've experienced before that while scrubbing the ZFS volume, some operations, > even the > update of /usr/ports which resides on that ZFS RAIDZ volume, takes a bit > longer than > usual - but never that long like now! > > Another box is quite unusable while it is scrubbing and it has been usable > times before. > The change is dramatic ... > > Regards, > > Oliver > Due to differences in the kern.hz setting between IllumOS and FreeBSD, the result is that FreeBSD doesn't de-prioritize scrub I/O as much as IllumOS does. Try: sysctl vfs.zfs.scrub_delay=40 This will sleep for 40 ticks (40 ms on FreeBSD) between each scrub I/O, allowing your ports update to proceed more quickly. 'zpool list' will show how fragmented your pool is, and how full it is; these may also provide insight. If you run 'top -S' while it is performing badly, what is the CPU load like? Is your /usr/ports dataset compressed? -- Allan Jude
Re: CURRENT: slow like crap! ZFS scrubbing and ports update > 25 min
> On 22 Mar 2017, at 21:02, O. Hartmann wrote: > > CURRENT (FreeBSD 12.0-CURRENT #82 r315720: Wed Mar 22 18:49:28 CET 2017 > amd64) is > annoyingly slow! While scrubbing is working on my 12 GB ZFS volume, updating > /usr/ports > takes >25 min(!). That is an absolute record now. > > I do an almost daily update of world and ports tree and have periodic > scrubbing ZFS > volumes every 35 days, as it is defined in /etc/defaults. Ports tree hasn't > grown much, > the content of the ZFS volume hasn't changed much (~ 100 GB, its fill is > about 4 TB now) > and this is now for ~ 2 years constant. > > I've experienced before that while scrubbing the ZFS volume, some operations, > even the > update of /usr/ports which resides on that ZFS RAIDZ volume, takes a bit > longer than > usual - but never that long like now! > > Another box is quite unusable while it is scrubbing and it has been usable > times before. > The change is dramatic ... > What do "zpool list", "gstat" and "zpool status" show?
CURRENT: slow like crap! ZFS scrubbing and ports update > 25 min
CURRENT (FreeBSD 12.0-CURRENT #82 r315720: Wed Mar 22 18:49:28 CET 2017 amd64) is annoyingly slow! While scrubbing is working on my 12 TB ZFS volume, updating /usr/ports takes >25 min(!). That is an absolute record now. I do an almost daily update of world and ports tree and have periodic scrubbing of ZFS volumes every 35 days, as it is defined in /etc/defaults. The ports tree hasn't grown much, the content of the ZFS volume hasn't changed much (~ 100 GB; its fill is about 4 TB now), and this has been constant for ~ 2 years. I've experienced before that while scrubbing the ZFS volume, some operations, even the update of /usr/ports which resides on that ZFS RAIDZ volume, take a bit longer than usual - but never as long as now! Another box is quite unusable while it is scrubbing, and it has been usable during scrubs before. The change is dramatic ... Regards, Oliver
CURRENT: aarch64 /poudriere installation fails with: sh: cc: not found
Hello List(s), I'm pretty new to cross compiling on FreeBSD, so to make the introduction short: amazed by the possibilities of FreeBSD on Pine64 and poudriere as the basis of our own ports repository, I try to build a repository of selected ports via poudriere for arm64.aarch64 and - fail! When installing the jail from a prebuilt world via "-m src=...", the installation fails with: [...] make[2]: "/pool/CURRENT/src/share/mk/bsd.compiler.mk" line 145: Unable to determine compiler type for CC=cc -target aarch64-unknown-freebsd12.0 --sysroot=/usr/obj/arm64.aarch64/pool/CURRENT/src/tmp -B/usr/local/aarch64-freebsd/bin/. Consider setting COMPILER_TYPE. *** Error code 1 Stop. make[1]: stopped in /pool/sources/CURRENT/src *** Error code 1 [...] I use the option "-m src=" with all of my jails, where for amd64 the source tree and object tree reside in /usr/src and /usr/obj respectively from a buildworld. For poudriere jails intended to be used for cross building, I checked out the whole CURRENT tree (along with 11 and 10) at /pool/CURRENT/src (or /pool/11-STABLE/src or /pool/10.3-RELEASE/src) to keep the main tree clean and intact in case I have to patch too much. The hosting system is a 12-CURRENT as of recent date: 12.0-CURRENT #30 r315698: Wed Mar 22 06:09:40 CET 2017 amd64. Building a "buildworld" for arm64.aarch64 has been performed successfully via env MAKEOBJDIRPREFIX=/pool/11-STABLE/obj SRCCONF=/dev/null \ __MAKE_CONF=/dev/null TARGET=arm64 make -j12 buildworld After a successful build, there is an object folder structure /pool/CURRENT/obj/arm64.aarch/ containing (obviously?) the world without a kernel. Since I use some optimisation flags and special settings in /etc/src.conf and /etc/make.conf, I needed to neutralise those settings and followed the examples and ways I've learned from using NanoBSD. Now, I try to install this world as the base of my arm64.aarch64 jail, which is supposed to build the ports tree for arm64.aarch64 platforms. 
As a prerequisite, I have already installed the most recent port emulators/qemu-user-static (qemu-user-static-2.8.50.g20170307) and it has been started as a service, as kldstat seems to indicate: kldstat [...] 41 0x81901000 23d0 filemon.ko 51 0x81904000 14fe imgact_binmisc.ko Well, now I try to install the jail: poudriere jail -c -j head-aarch64 -a arm64.aarch64 \ -M /pool/poudriere/jails/head-aarch64 -m src=/pool/CURRENT/src -v head and as a desperate try also with option "-x". But either way, I fail to install the jail with the error shown above. Something is missing, and I think the recommendation of setting the COMPILER_TYPE has a deeper sense here ;-) I tried to google some advice, but I stumbled only over some "simple and easy" advice which led me to the failure seen above. Maybe nullifying the SRCCONF and __MAKE_CONF isn't a good idea at that point, but I'd like to await the professionals' advice. Thanks in advance, Oliver
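For what it's worth, the bsd.compiler.mk error itself points at one thing to try: telling the build what the cross compiler is instead of letting it probe. A minimal sketch of such a SRCCONF fragment follows — this is an assumption drawn from the error message, not a verified fix (the PRs referenced in the replies discuss the real underlying issue), and whether `X_COMPILER_TYPE` is also needed depends on how the external toolchain is wired up:

```make
# Hypothetical SRCCONF fragment for the poudriere jail build.
# COMPILER_TYPE comes straight from the bsd.compiler.mk message;
# X_COMPILER_TYPE is an assumption for the external cross cc.
COMPILER_TYPE=clang
X_COMPILER_TYPE=clang
```

If this merely papers over the probe failure, the build may still break later, which is why checking the referenced bug reports first is worthwhile.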
Re: how to SVN regenerate [ man awk ]
On Wed, 22 Mar 2017 12:55:58 +, Jamie Landeg-Jones wrote: > "Jeffrey Bouquet" wrote: > > > > If you intend to use "svn up", you should probably review, and > > > follow the instructions in, /usr/src/UPDATING. > > > > but just for one binary? and one man page update? > > As in, it is only two files, how to update singly if it does not require a > > buildworld... > > I've no idea what is causing your current problem, but it's perfectly fine to > do: > > cd /usr/src/usr.bin/awk (for example) > make > make install > make clean > make cleandepend > > I do this kind of thing all the time. Obvious caveats apply: > > 1) You are no longer tracking a "standard" installation. > 2) You may create a problem with mismatched versions of things. > 3) Relating to 2), some things may not compile or work as intended due > to changes elsewhere that would need to also be applied to the system. > > But other than that, things should work (to use your example, awk should work fine) > > Are you doing the 'make install' rather than installing manually? > > cheers, Jamie Hmm... I crafted a reply, then noticed the email was to me. I've been using:

cd /usr/src/usr.sbin/[someplace]
svn up .   [indent so as not to appear in history]
make cleandepend
make depend
make obj

and if that fails, mmv /etc/make.conf / [ not precisely that but same thing ] and then zsh /tmp/llvm39.sh:

#!/bin/sh
export CC=/usr/local/bin/clang39
export CXX=/usr/local/bin/clang++39
export CPP=/usr/local/bin/clang-cpp
cd $1   # /usr/src/usr.sbin/pw
make cleandepend
make obj
make depend
make install

... to make the build with a new clang that may work better. .. 
I've more or less given up until a new buildworld/installworld. Risky here: installworld sometimes fails. I only hope that the many migrating to BSD I see on linux.reddit.com and elsewhere don't pick up on ZFS so much as on improvements and a robust current that fails-installworld-less-often; reinvoke/reinvigorate portmanager [of old], pkgdb -F [long since forgotten], portmaster, portupgrade, etc., to be again 2004-style seamlessly integrated into/with packages, and the other wish-list stuff I once wrote about [parallel pkg-fetch for production machines depending upon a flat-file /var/db/pkg 'local.sqlite3' that can be awked/sed'ed into temporary forgiveness upon a hosed pkg upgrade, then switched back to the newer pkg commands in a duplicate-pkg-methodology on the same production machine, a framework that would stand far above anything debian, arch, ubuntu, centos, etc. could do].

I remain mystified by the terse, not-up-to-newbie-speed /usr/src/UPDATING, which doesn't cover all the edge cases, mfsbsd methodologies, etc., that people detail in the forums and on blog posts, and that never actually make it into local new-install documentation — which could be done if persons wrote as verbosely and profusely as I seem to do, but only in an ask-ask-ask and never-produce manner, for which I apologize, for I have other demands upon my time that conflict and, as you can see, broadly limit my input to email and the once-yearly bugzilla of note, or that passes into irrelevance upon the next iteration of an installworld.

OTOH I closely follow HAMMER HAMMER2 improvements as a one-up on UFS2, and openbsd forum details for howtos that do not appear sometimes in the FreeBSD forum. ..

So thanks for the reply to my email. ... 
Things are peachy more or less here with FreeBSD — for instance uptime, 16:04 — despite daily core files and my bland ignorance of kgdb upon the debugging of custom kernel backtraces that appear during dmesg, starting Xorg, umounting backup filesystems, and the daily browser freeze if I've too many pages open, or too-large pages in links -g. But enough of wall of text. Thanks again. Jeff PS sorry for the rough draft. I am out of time. ..
Re: how to SVN regenerate [ man awk ]
"Jeffrey Bouquet" wrote: > > If you intend to use "svn up", you should probably review, and > > follow the instructions in, /usr/src/UPDATING. > > but just for one binary? and one man page update? > As in, it is only two files, how to update singly if it does not require a > buildworld... I've no idea what is causing your current problem, but it's perfectly fine to do: cd /usr/src/usr.bin/awk (for example) make make install make clean make cleandepend I do this kind of thing all the time. Obvious caveats apply: 1) You are no longer tracking a "standard" installation. 2) You may create a problem with mismatched versions of things. 3) Relating to 2), some things may not compile or work as intended due to changes elsewhere that would need to also be applied to the system. But other than that, things should work (to use your example, awk should work fine). Are you doing the 'make install' rather than installing manually? cheers, Jamie
[Bug 217994] Kernel panic in native_lapic_setup with 12-CURRENT on EC2 machine
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=217994

Konstantin Belousov changed:

  What |Removed                     |Added
  -----+----------------------------+------
  CC   |freebsd-current@FreeBSD.org |

-- You are receiving this mail because: You are on the CC list for the bug.
[Bug 217994] Kernel panic in native_lapic_setup with 12-CURRENT on EC2 machine
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=217994

Sylvain Garrigues changed:

           What    |Removed |Added
           CC      |        |a...@freebsd.org,
                   |        |freebsd-current@FreeBSD.org,
                   |        |j...@freebsd.org,
                   |        |k...@freebsd.org

--
You are receiving this mail because:
You are on the CC list for the bug.
Re: process killed: text file modification
On Tue, Mar 21, 2017 at 09:41:19PM +, Rick Macklem wrote:
> Konstantin Belousov wrote:
> > Anyway, my position is that nfs VOP_PUTPAGES() should do write through
> > buffer cache, not issuing the direct rpc call with the pages as source.
> Hmm. Interesting idea. Since a "struct buf" can only refer to
> contiguous bytes, I suspect each page might end up as a separate
> "struct buf", at least until some clustering algorithm succeeded in
> merging them.
>
> I would agree that it would be nice to change VOP_PUTPAGES(), since
> it currently results in a lot of 4K writes (with FILE_SYNC I think?)
> and this is normally slow/inefficient for the server. (It would be
> interesting to try your suggestion above and see if the pages would
> cluster into larger writes. Also, the "struct buf" code knows how to
> do UNSTABLE writes followed by a Commit.)

Below is something to discuss. It is not finished, but it worked for
the simple tests I performed. Clustering should be handled reasonably
well by ncl_write() as is. As an additional advantage, I removed the
now-unneeded phys buffer allocation.

If you agree with the approach in principle, I want to ask what to do
about the commit handling there (I simply removed it for now). What
still needs to be done is to add the missing handling of the IO flags
to ncl_write().

diff --git a/sys/fs/nfsclient/nfs_clbio.c b/sys/fs/nfsclient/nfs_clbio.c
index 1c225c1469a..562754609b1 100644
--- a/sys/fs/nfsclient/nfs_clbio.c
+++ b/sys/fs/nfsclient/nfs_clbio.c
@@ -266,9 +266,7 @@ ncl_putpages(struct vop_putpages_args *ap)
 {
 	struct uio uio;
 	struct iovec iov;
-	vm_offset_t kva;
-	struct buf *bp;
-	int iomode, must_commit, i, error, npages, count;
+	int ioflags, i, error, npages, count;
 	off_t offset;
 	int *rtvals;
 	struct vnode *vp;
@@ -322,44 +320,34 @@ ncl_putpages(struct vop_putpages_args *ap)
 	}
 	mtx_unlock(&np->n_mtx);
 
-	/*
-	 * We use only the kva address for the buffer, but this is extremely
-	 * convenient and fast.
-	 */
-	bp = getpbuf(&ncl_pbuf_freecnt);
-
-	kva = (vm_offset_t) bp->b_data;
-	pmap_qenter(kva, pages, npages);
 	PCPU_INC(cnt.v_vnodeout);
 	PCPU_ADD(cnt.v_vnodepgsout, count);
 
-	iov.iov_base = (caddr_t) kva;
+	iov.iov_base = unmapped_buf;
 	iov.iov_len = count;
 	uio.uio_iov = &iov;
 	uio.uio_iovcnt = 1;
 	uio.uio_offset = offset;
 	uio.uio_resid = count;
-	uio.uio_segflg = UIO_SYSSPACE;
+	uio.uio_segflg = UIO_NOCOPY;
 	uio.uio_rw = UIO_WRITE;
 	uio.uio_td = td;
 
-	if ((ap->a_sync & VM_PAGER_PUT_SYNC) == 0)
-		iomode = NFSWRITE_UNSTABLE;
-	else
-		iomode = NFSWRITE_FILESYNC;
+	ioflags = IO_VMIO;
+	if (ap->a_sync & (VM_PAGER_PUT_SYNC | VM_PAGER_PUT_INVAL))
+		ioflags |= IO_SYNC;
+	else if ((ap->a_sync & VM_PAGER_CLUSTER_OK) == 0)
+		ioflags |= IO_ASYNC;
+	ioflags |= (ap->a_sync & VM_PAGER_PUT_INVAL) ? IO_INVAL : 0;
+	ioflags |= (ap->a_sync & VM_PAGER_PUT_NOREUSE) ? IO_NOREUSE : 0;
+	ioflags |= IO_SEQMAX << IO_SEQSHIFT;
 
-	error = ncl_writerpc(vp, &uio, cred, &iomode, &must_commit, 0);
+	error = VOP_WRITE(vp, &uio, ioflags, cred);
 	crfree(cred);
 
-	pmap_qremove(kva, npages);
-	relpbuf(bp, &ncl_pbuf_freecnt);
-
-	if (error == 0 || !nfs_keep_dirty_on_error) {
+	if (error == 0 || !nfs_keep_dirty_on_error)
 		vnode_pager_undirty_pages(pages, rtvals, count - uio.uio_resid);
-		if (must_commit)
-			ncl_clearcommit(vp->v_mount);
-	}
-	return rtvals[0];
+	return (rtvals[0]);
 }
 
 /*