Re: lua kernel library?
On Wed, Jun 17, 2015 at 12:17 PM, Andrew Cagney andrew.cag...@gmail.com wrote: On 17 June 2015 at 00:48, Lourival Vieira Neto lourival.n...@gmail.com wrote: Is the kernel's version of Lua available as a library? You can use it as a regular kernel module. Take a look at this patch [1] which adds Lua to NPF, as an example. thanks, I'd looked briefly at the lua et al. modules. Please notice that the modules already present in base work like Lua libraries (that is, they are called by Lua) and npf_lua works like a host program (calling Lua). I think the latter is more appropriate to your use case. I'm not so sure. If I create a module and then have that bind to ddb and lua then yes. However, if I try to make this work before modules have even been loaded then, no. My point here isn't that you should use lua(4) as a loadable kernel module. Actually, I'm trying to answer your original question: yes, lua(4) is also a library. It defines an API to use Lua in-kernel. (See sys/lua.h.) And npf_lua uses lua(4) in that way. (I think you should take a look at Section 4 and the extending vs. embedding discussion in Section 2 of [3].) [3] http://www.netbsd.org/~lneto/dls14.pdf (...) to sys/ddb/files.ddb is not the best way to link Lua into DDB My understanding of this general approach is that it would make DDB dependent on a [possibly loadable] kernel module? Or perhaps I can add a lua pseudo device and start calling the underlying library directly? To me an in-kernel debugger needs to be both largely standalone and callable from very early in the boot process. For instance, just after the serial console is initialized and showing signs of working. Then, I think you should just compile lua(4) statically instead of reimplementing it tied to DDB. I'm not sure I'm following. Perhaps we're agreeing? Perhaps =). My lua binding needs access to DDB's code, and the DDB code needs access to my Lua instance. This is why I added the lua files to sys/ddb/files.ddb. It seemed tedious. 
I suspect it would be easier to create lua.a (built for the kernel) and have both the lua module and (optionally) ddb+kernel link against it. I was suggesting that you link lua(4) statically into the kernel, instead of reimplementing part of lua(4) in DDB. I never did that, but it should be as easy as any other module/driver linked statically in the kernel. (This is also why I need to change my hack so that it uses a static buffer for Lua's heap; using the kernel to allocate memory when debugging the kernel's memory allocator doesn't tend to work very well :-) Then, you should just call klua_newstate(9) passing your custom allocator =). BTW, perhaps Luadata [2] can help you to hack memory. [2] https://github.com/lneto/luadata/ Yes. For the moment I've a really simple binding: lua m=setmetatable({},{ __index=function(table,addr) return db_peek(addr); end}) lua print(m[db_debug_buf+1],m[db_debug_buf+2],m[db_debug_buf+3]) 697670 I don't know much about DDB; but if you have access to raw memory, you can use Luadata to access it from Lua as an array of bytes, bit fields and structured data in a straightforward way. What's your plan for DDB+Lua? =) literally, to see what happens when you put lua and dwarf in a kernel (see BSDCan) Pretty cool! Are the slides publicly available somewhere? (I couldn't find them.) They should appear shortly. I've also put them here https://bitbucket.org/cagney/netbsd/downloads Thanks! Moreover, I noticed that you've found a bug in the usage of snprintf. Was this the only one? Please report it next time! =) That was the only problem I found, which is impressive. Nice =). BTW, is Lua's test suite run against the in-kernel Lua module? Guilherme Salazar (cc'ed) is porting it as a GSoC project =). (See http://www.lua.inf.puc-rio.br/gsoc/ideas2015.html#kerneltest.) -- Lourival Vieira Neto
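The suggestion in the thread is that klua_newstate(9) accepts a custom allocator with Lua's lua_Alloc signature, which makes a static-buffer heap for the debugger straightforward. The sketch below is a plain user-space illustration of that idea under stated assumptions: the arena size, the name static_alloc, and the bump-pointer strategy are all made up for the example and are not NetBSD's implementation.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical static arena standing in for a pre-reserved debugger heap. */
static uint8_t arena[64 * 1024];
static size_t arena_used;

/*
 * Allocator with the lua_Alloc signature: (ud, ptr, osize, nsize).
 * nsize == 0 means "free"; a grow copies the old contents into fresh
 * space. A bump arena never reuses freed memory, but it also never
 * calls into the kernel allocator being debugged.
 */
static void *
static_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
{
	(void)ud;
	if (nsize == 0)
		return NULL;		/* "free": nothing to do in a bump arena */
	if (arena_used + nsize > sizeof(arena))
		return NULL;		/* out of arena: Lua raises a memory error */
	void *np = &arena[arena_used];
	arena_used += nsize;
	if (ptr != NULL)
		memcpy(np, ptr, osize < nsize ? osize : nsize);
	return np;
}
```

A production debugger heap would want to reclaim freed blocks; the point here is only that Lua's allocator callback isolates the interpreter from the allocator under inspection.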
Re: lua kernel library?
Hi Andrew, Is the kernel's version of Lua available as a library? You can use it as a regular kernel module. Take a look at this patch [1] which adds Lua to NPF, as an example. [1] http://www.netbsd.org/~lneto/pending/0005-added-npf_ext_lua.patch (...) to sys/ddb/files.ddb is not the best way to link Lua into DDB What's your plan for DDB+Lua? =) Regards, -- Lourival Vieira Neto
Re: lua kernel library?
Is the kernel's version of Lua available as a library? You can use it as a regular kernel module. Take a look at this patch [1] which adds Lua to NPF, as an example. thanks, I'd looked briefly at the lua et al. modules. Please notice that the modules already present in base work like Lua libraries (that is, they are called by Lua) and npf_lua works like a host program (calling Lua). I think the latter is more appropriate to your use case. (...) to sys/ddb/files.ddb is not the best way to link Lua into DDB My understanding of this general approach is that it would make DDB dependent on a [possibly loadable] kernel module? Or perhaps I can add a lua pseudo device and start calling the underlying library directly? To me an in-kernel debugger needs to be both largely standalone and callable from very early in the boot process. For instance, just after the serial console is initialized and showing signs of working. Then, I think you should just compile lua(4) statically instead of reimplementing it tied to DDB. (This is also why I need to change my hack so that it uses a static buffer for Lua's heap; using the kernel to allocate memory when debugging the kernel's memory allocator doesn't tend to work very well :-) Then, you should just call klua_newstate(9) passing your custom allocator =). BTW, perhaps Luadata [2] can help you to hack memory. [2] https://github.com/lneto/luadata/ What's your plan for DDB+Lua? =) literally, to see what happens when you put lua and dwarf in a kernel (see BSDCan) Pretty cool! Are the slides publicly available somewhere? (I couldn't find them.) Moreover, I noticed that you've found a bug in the usage of snprintf. Was this the only one? Please report it next time! =) -- Lourival Vieira Neto
Re: NetBSD Project volunteer
Hey Dimitri, My name is Dimitri and I'm from Brazil. Good to see more Brazilians around here! =D I have worked with infrastructure for six years, mainly virtualization. I have good skills with shell script and Perl but I'm not a skillful developer. I'm junior level. I really would like to be part of the NetBSD Project; I can work hard. I just need someone to teach me so I can improve my programming skills and help the community. The best way to get good is to work on a real project and I really believe in NetBSD. Can you give me a chance? If you have interest in Lua in the kernel, I would be glad to have your help. I was actively working on NPF-Lua [1], but it is somewhat dormant now due to my current lack of time =(. However, I hope to resume this project soon. [1] http://www.netbsd.org/~lneto/eurobsdcon14.pdf Regards, -- Lourival Vieira Neto
[GSoC 2015] NetBSD kernel Lua project in LabLua
Hi Folks, LabLua [1] has been accepted as a mentoring organization this year by Google Summer of Code and I've added a NetBSD kernel Lua project to the ideas list [2]. If you are an eligible student and have interest in kernel Lua development (perhaps your own use case or feature idea), please consider contacting us on the LabLua GSoC mailing list [3]. [1] http://www.lua.inf.puc-rio.br [2] http://www.lua.inf.puc-rio.br/gsoc/ideas2015.html#kerneltest [3] https://groups.google.com/forum/#!forum/labluagsoc Regards, -- Lourival Vieira Neto
lua: pending patches
Hi folks, Here are some pending patches which I want to commit: http://www.netbsd.org/~lneto/pending/. Please, could someone review them? Thank you in advance!

0007: lua: updated from 5.1 to 5.3 work3
* lua(1):
  - changed lua_Integer to intmax_t
  - updated distrib/sets/lists and etc/mtree
  - updated bsd.lua.mk
  - fixed bozohttpd (lua-bozo.c)
  - updated the gpio and sqlite bindings for compatibility
* lua(4):
  - removed floating-point and libc dependencies using '#ifndef _KERNEL'
  - fixed division by zero and exponentiation
  - libkern: added isalnum(), iscntrl(), isgraph(), isprint() and ispunct()
  - acpica: removed isprint() from acnetbsd.h
  - libc: moved strcspn.c, strpbrk.c and strspn.c to common
  - removed stub headers
  - updated the luapmf and luasystm bindings for compatibility
* reorganized luaconf.h
* updated doc/CHANGES and doc/RESPONSIBLE

0006: lua(4): added debug library

0005: lua(4): unified the KPI namespace using the 'klua_' prefix

0004: lua(4): using lua_CFunction

0003: lua(4): added support for running Lua scripts in interrupt context
* using kmem_intr in lua_alloc
* using a mutex directly in klua_lock
* added an ipl argument to klua_newstate()
* added the kluaL_newstate function
* fixed synchronization: locking the Lua state in luaioctl

0002: lua(4): preventing division by zero
* note: we should raise an error instead of returning INTMAX_MAX

0001: lua(4): cleaned stubs

Regards, -- Lourival Vieira Neto
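The behavior patch 0002 describes can be sketched in a few lines of C: kernel Lua has no floating-point escape hatch, so integer division by zero returns a sentinel rather than trapping. The function name kdiv is a hypothetical stand-in for illustration; using INTMAX_MAX as the sentinel follows the patch note, which itself says raising a Lua error would be the better behavior.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the guard described in patch 0002: integer division by
 * zero returns a sentinel instead of trapping. INTMAX_MAX follows the
 * patch note (which suggests a Lua error would be better long-term). */
static intmax_t
kdiv(intmax_t a, intmax_t b)
{
	if (b == 0)
		return INTMAX_MAX;	/* assumption: sentinel per the patch note */
	return a / b;
}
```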
Re: lua: pending patches
Hi Alexander, On Sat, Jul 12, 2014 at 5:01 PM, Alexander Nasonov al...@yandex.ru wrote: Lourival Vieira Neto wrote: Hi folks, Here are some pending patches which I want to commit: http://www.netbsd.org/~lneto/pending/. Please, could someone review them? Thank you in advance! -- 0007: lua: updated from 5.1 to 5.3 work3 I will review the changes later but I wonder why the rush to update lua to a work-in-progress version? I don't think we're rushing. Marc and I have discussed this and we concluded that Lua 5.3 is sufficiently stable and has significant advantages over Lua 5.2, such as an integer subtype and bitwise operators. Regards, -- Lourival Vieira Neto
[patch] Lua data library
Hi Folks, I've implemented the core features of the Lua data library; the patch is attached. Luadata provides C and Lua APIs to handle binary data using Lua scripts. Here is a brief description of both: 1. Lua API: 1.1 creation - data.new(table) Returns a new data object initialized with the given byte array. For example: d1 = data.new{0xFF, 0xFE, 0x00} -- returns a data object with 3 bytes. 1.2 layout - data.layout(table) Returns a new layout table based on the table argument, whose fields should have one of the following formats: (i) field = {offset, length [, endian]} or (ii) field = {offset = offset, length = length [, endian = endian]} where field is the name of the field, offset is the offset in bits (MSB 0), length is the length in bits, and endian is a string that indicates the field endianness ('host', 'net', 'little', 'big'). The default value for endian is 'big'. Here are a couple of examples: (i) l1 = data.layout{msb = {0, 1}, uint32 = {0, 32}, uint64le = {0, 64, 'little'}} (ii) l2 = data.layout{msb = {offset = 0, length = 1}, net_unaligned_uint16 = {offset = 1, length = 16, endian = 'net'}} - d:layout(layout | table) Applies a layout table to a given data object. If a regular table is passed, it calls data.layout(table) first. For example: d1:layout(l1) -- applies the l1 layout to the d1 data object d2:layout{byte = {0, 8}} -- creates and applies a new layout to the d2 data object 2. C API 2.1 creation - int ldata_newref(lua_State *L, void *ptr, size_t size); Creates a new data object pointing to ptr (without copying it), leaves the data object on the top of the Lua stack and returns a reference to it. The data object will not be garbage-collected until it is unreferenced. 2.2 deletion - void ldata_unref(lua_State *L, int ref); Removes the ptr from the data object and releases the data-object reference, allowing it to be garbage-collected. After that, it is safe to free the ptr pointer. 
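The offset/length convention above (bit offsets counted MSB 0, with 'big' as the default endianness) can be modeled in a few lines of C. This is a sketch of the semantics for illustration only, not Luadata's implementation; extract_bits is a hypothetical helper name.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Extract `length` bits starting at bit `offset` (MSB 0) from buf,
 * most significant bit first -- i.e., as a big-endian bit field. */
static uint64_t
extract_bits(const uint8_t *buf, size_t offset, size_t length)
{
	uint64_t v = 0;
	for (size_t i = 0; i < length; i++) {
		size_t bit = offset + i;
		v = (v << 1) | ((buf[bit / 8] >> (7 - bit % 8)) & 1);
	}
	return v;
}
```

With d1 = {0xFF, 0xFE, 0x00} as in the example above, a field {0, 3} reads the 3 most significant bits of 0xFF (that is, 7), and a field {8, 16} reads the next two bytes as 0xFE00.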
Regards, -- Lourival Vieira Neto Index: lib/lua/data/Makefile === RCS file: lib/lua/data/Makefile diff -N lib/lua/data/Makefile --- /dev/null 1 Jan 1970 00:00:00 - +++ lib/lua/data/Makefile 17 Jan 2014 11:47:53 - @@ -0,0 +1,9 @@ +LUA_MODULES= data + +LUA_SRCS.data= luadata.c +LUA_SRCS.data+= data.c +LUA_SRCS.data+= layout.c +LUA_SRCS.data+= luautil.c +LUA_SRCS.data+= binary.c + +.include <bsd.lua.mk> Index: lib/lua/data/binary.c === RCS file: lib/lua/data/binary.c diff -N lib/lua/data/binary.c --- /dev/null 1 Jan 1970 00:00:00 - +++ lib/lua/data/binary.c 17 Jan 2014 11:47:53 - @@ -0,0 +1,182 @@ +/* + * Copyright (c) 2013, 2014, Lourival Vieira Neto ln...@netbsd.org. + * All rights reserved. + * + * Redistribution and use in source and binary forms, with or without + * modification, are permitted provided that the following conditions + * are met: + * 1. Redistributions of source code must retain the above copyright + *notice, this list of conditions and the following disclaimer. + * 2. Redistributions in binary form must reproduce the above copyright + *notice, this list of conditions and the following disclaimer in the + *documentation and/or other materials provided with the distribution. + * 3. The name of the Author may not be used to endorse or promote products + *derived from this software without specific prior written permission. + * + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE + * ARE DISCLAIMED. 
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF + * SUCH DAMAGE. + */ +#include <limits.h> + +#include <sys/param.h> + +#include "binary.h" + +#define BYTE_MAX UCHAR_MAX +#define UINT64_BIT (64) + +inline static void +set_bits(uint64_t *value, uint64_t clear_mask, uint64_t set_mask) +{ + *value &= clear_mask; + *value |= set_mask; +} + +#define CONTIGUOUS_BITS(width, truncated) (width - truncated) + +static void +expand(uint64_t *value, size_t width, size_t msb_offset, byte_t truncated) +{ + size_t contiguous = CONTIGUOUS_BITS(width, truncated); + + size_t trunc_msb_offset = BYTE_BIT - truncated; + size_t trunc_lsb_offset = contiguous + trunc_msb_offset; + + size_t clear_offset = msb_offset + truncated; + + uint64_t clear_mask = UINT64_MAX >> clear_offset; + uint64_t trunc_mask = *value >> trunc_lsb_offset
Re: ptrdiff_t in the kernel
Is there a reason not to have ptrdiff_t defined in the kernel? Shouldn't it be OK to define it in sys/cdefs.h? Or even to have stddef.h itself in the kernel? It is defined in the kernel and comes from machine/ansi.h via sys/types.h. Actually, it isn't. Only _BSD_PTRDIFF_T_ is defined by machine/ansi.h. The ptrdiff_t type is defined only in stddef.h. That surprises me. Easy enough to add. http://www.netbsd.org/~matt/ptrdiff-diff.txt I replied to this in http://mail-index.netbsd.org/tech-kern/2013/12/04/msg016211.html. No, stddef.h is not allowed in the kernel. Symbols from it are provided via other means. I know. In fact, I'm asking if it would be alright to allow that. AFAIK, it would be inoffensive if available in the kernel. Actually, it would be offensive. Why? Regards, -- Lourival Vieira Neto
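For background on what is being requested: ptrdiff_t is the standard signed result type of pointer subtraction, which is why kernel code occasionally needs it declared. A minimal, NetBSD-agnostic illustration (the function name span is made up for the example):

```c
#include <assert.h>
#include <stddef.h>

/* ptrdiff_t is the signed result type of subtracting two pointers
 * into the same array object. */
static ptrdiff_t
span(const char *first, const char *last)
{
	return last - first;
}
```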
Re: [patch] put ptrdiff_t in the kernel and create sys/stddef.h
Hi Joerg, On Wed, Dec 4, 2013 at 12:25 PM, Joerg Sonnenberger jo...@britannica.bec.de wrote: On Wed, Dec 04, 2013 at 12:04:23PM -0200, Lourival Vieira Neto wrote: Hi Mindaugas, Here is a patch that puts ptrdiff_t in the kernel. It also creates a sys/stddef.h header. Why sys/stddef.h? Just keep them in sys/types.h please. To avoid redefining ptrdiff_t in stddef.h. I think it would be more coherent, since ptrdiff_t is a stddef.h definition and stddef.h shouldn't include sys/types.h. Weak reason. stddef.h must not include sys/types.h, but that doesn't mean they can't both define it. I didn't state that it can't be defined by both. I just said it would be more coherent to define it in just one place. Why is it a bad idea? Regards, -- Lourival Vieira Neto
Re: [patch] put ptrdiff_t in the kernel and create sys/stddef.h
Hi David, Alan and Matt, On Wed, Dec 4, 2013 at 7:38 PM, Matt Thomas m...@3am-software.com wrote: On Dec 4, 2013, at 1:33 PM, Alan Barrett a...@cequrux.com wrote: On Wed, 04 Dec 2013, David Holland wrote: (*) A complete scheme for doing it right removes all the _BSD_FOO_T_ drivel and ifdefs scattered in userland headers in favor of: - a single header file that defines all the needed types prefixed with __, which can be included anywhere; - in userland, include-guarded header files akin to sys/null.h that define single or common groups of the names without the __ prefixes, e.g. types/size_t.h; - including these header files in the proper places, such as in standard userland header files like stddef.h; - in the kernel, a single header file that defines all the types without the __, that is or is exposed to sys/types.h but does not affect userland. Yes, that's one way of doing it right. Until such time as somebody does it right, please follow the pattern of what's done already. which is what my suggested patch does. I got your point. Regards, -- Lourival Vieira Neto
ptrdiff_t in the kernel
Hi Folks, Is there a reason not to have ptrdiff_t defined in the kernel? Shouldn't it be OK to define it in sys/cdefs.h? Or even to have stddef.h itself in the kernel? Regards, -- Lourival Vieira Neto
Re: [patch] put Lua standard libraries into the kernel
It will be interesting to see by how much memory the addition of the standard libraries will grow lua(4). lneto claims it does not grow at all. If it should, we can still move the standard libraries to a kmod. I just double-checked now (using nm to confirm). In fact, I was commenting out the wrong portion of the Makefile to test. Sorry about that =(. Here is the result on amd64: 240K with stdlibs and auxlib, 166K with only auxlib and 154K solo. Anyway, I still think that 86K is not that much to have things like {base, string, table}lib. However, though I think the stdlibs could be in another kmod, I think it is not a good idea to have auxlib in another one. The Lua auxlib is just an extension of the Lua C API and 12K is really a fair price for a more complete Lua library in the kernel, IMO. Regards, -- Lourival Vieira Neto
Re: [patch] put Lua standard libraries into the kernel
On Fri, Nov 29, 2013 at 10:03 AM, Marc Balmer m...@msys.ch wrote: Am 29.11.13 12:38, schrieb Lourival Vieira Neto: It will be interesting to see by how much memory the addition of the standard libraries will grow lua(4). lneto claims it does not grow at all. If it should, we can still move the standard libraries to a kmod. I just double-checked now (using nm to confirm). In fact, I was commenting out the wrong portion of the Makefile to test. Sorry about that =(. Here is the result on amd64: 240K with stdlibs and auxlib, 166K with only auxlib and 154K solo. Anyway, I still think that 86K is not that much to have things like {base, string, table}lib. However, though I think the stdlibs could be in another kmod, I think it is not a good idea to have auxlib in another one. The Lua auxlib is just an extension of the Lua C API and 12K is really a fair price for a more complete Lua library in the kernel, IMO. We could for now just go ahead, put auxlib and the stdlibs in lua(4) as foreseen, and when the need arises, we can still factor out the stdlibs to their own kmod. Agreed. Does anyone oppose? Regards, -- Lourival Vieira Neto
Re: [patch] put Lua standard libraries into the kernel
On Thu, Nov 28, 2013 at 6:13 AM, Marc Balmer m...@msys.ch wrote: Am 27.11.13 22:23, schrieb Martin Husemann: Can't it be a per-state option, passed by luactl when creating the state? That is actually an excellent idea. So what should be the default, stdlibs enabled or not enabled? IMO, it should be disabled by default. Thus, it would mimic the behavior of the Lua library, where Lua states are created empty; if you want to use the stdlibs, you must load them explicitly (e.g., by calling luaL_openlibs()). Regards, -- Lourival Vieira Neto
Re: [patch] put Lua standard libraries into the kernel
Whether the standard libraries should be a separate kmod or the code should just reside in lua(4) is another question. I really see no reason to put it in another kmod. Moreover, following the user-space analogy argument, liblua is linked as a single library. Regards, -- Lourival Vieira Neto
Re: [patch] put Lua standard libraries into the kernel
On Wed, Nov 27, 2013 at 6:15 PM, dieter roelants dieter.net...@pandora.be wrote: On Tue, 26 Nov 2013 12:50:16 +0100 Marc Balmer m...@msys.ch wrote: My suggestion is this: Build with lauxlib and also build the standard libraries Create a new sysctl kern.lua.stdlib, set to one by default If a new state is created in lua(4), run lua_openlib(...) on that state if kern.lua.stdlib is set to a value != 0 So by default a kernel Lua state would then have the stdlibs available, much like you have them available when running lua(1). Maybe it's just me, but it seems strange to have this depend on a (global) sysctl. I assume you have a reason for being able to disable it. But if you then want to run other lua code that needs the stdlibs, it will be enabled for your original code as well. Did I misunderstand? Yes, you're right. Regards, -- Lourival Vieira Neto
Re: [patch] put Lua standard libraries into the kernel
On Wed, Nov 27, 2013 at 7:23 PM, Martin Husemann mar...@duskware.de wrote: Can't it be a per-state option, passed by luactl when creating the state? I think this is a better option. Regards, -- Lourival Vieira Neto
Re: [patch] using luaL_register() on luacore kernel module
On Tue, Nov 26, 2013 at 6:22 AM, Marc Balmer m...@msys.ch wrote: Am 26.11.13 03:26, schrieb Lourival Vieira Neto: Hi again.. Just a tiny patch to use luaL_register() on the luacore kernel module. Regards You are now using the auxiliary library, which I did not include in kernel Lua on purpose (to keep stuff smaller). That doesn't seem reasonable. The Lua standard libraries don't add a significant footprint to the Lua kernel module. In fact, on amd64, both versions (with or without the Lua stdlib) have 240K. So what is the gain of including the Lua auxiliary library in kernel space Lua? I think it should be discussed for five minutes. Using luaL_register() instead of reinventing it is worth the price for me (which is 0KB =). Also, I think that Lua is really less useful without the standard libraries. Regards, -- Lourival Vieira Neto
Re: [patch] put Lua standard libraries into the kernel
Hi Marc, On Tue, Nov 26, 2013 at 6:18 AM, Marc Balmer m...@msys.ch wrote: Am 26.11.13 02:50, schrieb Lourival Vieira Neto: Hi Folks, Here is a patch that puts some Lua standard libraries into the kernel: - Auxiliary library (C API); - Base library; - String library; - Table library. In the kernel, Lua states are created empty _on purpose_. So the Lua standard library should be a module (or modules) in kernel space. Why? Note, the Lua states are still empty, but, with this patch, you can call luaL_openlibs(). Whether all standard libraries should be one module, or rather multiple kernel modules, is to be discussed. Let's discuss then =). But please don't load them automatically. Do you mean 'don't link the Lua stdlib with Lua'? What is the reason? Regards, -- Lourival Vieira Neto
Re: [patch] put Lua standard libraries into the kernel
On Tue, Nov 26, 2013 at 9:50 AM, Marc Balmer m...@msys.ch wrote: Am 26.11.13 12:13, schrieb Lourival Vieira Neto: Hi Marc, On Tue, Nov 26, 2013 at 6:18 AM, Marc Balmer m...@msys.ch wrote: Am 26.11.13 02:50, schrieb Lourival Vieira Neto: Hi Folks, Here is a patch that puts some Lua standard libraries into the kernel: - Auxiliary library (C API); - Base library; - String library; - Table library. In the kernel, Lua states are created empty _on purpose_. So the Lua standard library should be a module (or modules) in kernel space. Why? Note, the Lua states are still empty, but, with this patch, you can call luaL_openlibs(). Whether all standard libraries should be one module, or rather multiple kernel modules, is to be discussed. Let's discuss then =). My suggestion is this: Build with lauxlib and also build the standard libraries Create a new sysctl kern.lua.stdlib, set to one by default If a new state is created in lua(4), run lua_openlib(...) on that state if kern.lua.stdlib is set to a value != 0 So by default a kernel Lua state would then have the stdlibs available, much like you have them available when running lua(1). It is just fine for me =). Regards, -- Lourival Vieira Neto
Re: A Library for Converting Data to and from C Structs for Lua
On Sun, Nov 24, 2013 at 10:06 PM, James K. Lowden jklow...@schemamania.org wrote: On Sat, 23 Nov 2013 11:46:19 -0200 Lourival Vieira Neto lourival.n...@gmail.com wrote: On Sat, Nov 23, 2013 at 1:22 AM, James K. Lowden jklow...@schemamania.org wrote: On Mon, 18 Nov 2013 09:07:52 +0100 Marc Balmer m...@msys.ch wrote: After discussion with lneto@ and others we realised that there are several such libraries around, and that I as well as lneto@ wrote one. So we decided to merge our work How do you deal with the usual issues of alignment and endianism? d = data.new{0xF0, 0xFF, 0x00} -- creates a new data object with 3 bytes. d:layout{ x = { __offset = 0, __length = 3 }, y = { __offset = 8, __length = 16, __endian = 'net' }, z = { __offset = 0, __step = 9 } } d.x -- returns the 3 most significant bits from d (that is, 7) d.y -- returns 16 bits counting from bit-8 most significant. -- in this case, these 2 bytes are converted using ntohs(3), that is 0xFF00. d.z[1] -- returns the 9 most significant bits from d (that is, 0x1E1). Hi Lourival, Thanks for your answer. A few questions and observations, if I may. You are welcome. Of course you may =). 1. What is the significance of the leading underscores? It is used as a mark to distinguish parameters from field names. It is especially useful for nested fields and for the global behavior of the layout. For example: d:layout{__endian = 'net', }, would use big endian for all fields (except those that set it explicitly). 2. I assume you mean d.x represents the three *least* significant bits. Why? I really meant the three *most* significant bits. In this example: [* 1 | 1 | 1 * | 1 | 0 | 0 | 0 | 0 ]. I don't understand step, not that it matters. Sorry, I didn't describe the API itself; I just illustrated a little example. The __step parameter is used for array accessing. 
In the previous example, d.z could be indexed using Lua array notation, where each position corresponds to 9 bits of data, starting from bit-0 most significant (__offset = 0 and __step = 9). For purposes of extracting/packing values in a buffer, offset and length are all you need. In fact, you only *need* bit operations for that. However, I think it could be more pleasant to have a declarative API for that. Semantics require a type system for the bit patterns. I guess y is implied to be a 16-bit integer, since it has endianism, but its signedness is unspecified. I suggest you enumerate all types you will support, and that that set encompass all types that a C compiler can generate. I'm only handling integers. If you include an ignore type (cf. Perl's pack/unpack functions), you can drop offset from your description, for which you'll be glad eventually. I'm considering an alternative syntax for suppressing the offset declaration: l = data.layout{ { 'x', 1 }, -- most significant bit { 3 }, -- 3 bits of padding { 'y', 4 } -- 4 subsequent bits } For purposes of binary transfer, host endianism is unimportant; what matters is the endianism of the wire format. TCP/IP uses big-endian format by definition. ISTM that should be your default, too, else the same code compiled on two different machines means two different things. It is not my intent to support network-only applications. In fact, one of my use cases is to support writing device drivers in Lua. A 2-byte integer starting at a 5-bit offset is weird for a byte-addressable machine. I don't see a need to support bitfields unless you have an existing use case; bit arrays can always be transmitted as character arrays, which after all is how they appear in memory. Sure, it *is* weird. But how should we handle data that is structured in that way? By alignment I was asking about padding and offsets in data structures that the C language leaves up to the implementation. 
You have to describe exactly the layout you want to access. If you don't describe a specific offset, you cannot reach it. In the above example, there is an explicit padding declaration. Then, you can do d:layout(l) and d.x or d.y, but you cannot reach those 3 bits of padding. The same is true if you omit offset ranges (e.g., everything after the first byte is inaccessible using that layout). For your extract format (fmt), you might want to consider the gdb x/fmt command because it encompasses everything you could need and is the soul of brevity. It sounds like a good tip =). As far as I can tell, by the way, you're reinventing part of ASN.1. Nothing wrong with that, in and of itself; perhaps you can create something more convenient to use. But you might want to use it as a reference for functionality, and be ready to explain why your library should be used instead. I really don't think that I'm reinventing ASN.1 nor BER. I'm just designing a little API to handle binary data in Lua, not a standard. Note, I'm using Lua tables to describe data layouts, not another
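The __step array semantics discussed in this thread (d.z[1] yielding the 9 most significant bits of {0xF0, 0xFF, 0x00}, i.e. 0x1E1) can be modeled in a few lines of C. This is an illustrative sketch of the indexing rule only, not Luadata's code; step_index is a made-up name, and Lua's 1-based indexing is preserved.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Model of a __step array: element i (1-based, as in Lua) covers
 * bits [(i-1)*step, i*step), counted MSB 0 from the start of buf. */
static uint64_t
step_index(const uint8_t *buf, size_t step, size_t lua_index)
{
	size_t offset = (lua_index - 1) * step;
	uint64_t v = 0;
	for (size_t i = 0; i < step; i++) {
		size_t bit = offset + i;
		v = (v << 1) | ((buf[bit / 8] >> (7 - bit % 8)) & 1);
	}
	return v;
}
```

With the thread's example bytes, step_index(d, 9, 1) reproduces d.z[1] = 0x1E1.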
[patch] put Lua standard libraries into the kernel
Hi Folks, Here is a patch that puts some Lua standard libraries into the kernel: - Auxiliary library (C API); - Base library; - String library; - Table library. Regards, -- Lourival Vieira Neto lua_kernel_stdlib.patch Description: Binary data
Re: [patch] put Lua standard libraries into the kernel
On Tue, Nov 26, 2013 at 12:08 AM, Paul Goyette p...@whooppee.com wrote: Hopefully, this actually puts the libraries into the _optional_ lua kernel module, and not into the kernel itself? :) heh.. course.. s/kernel/Lua kernel module/ =) Regards, -- Lourival Vieira Neto
[patch] using luaL_register() on luacore kernel module
Hi again.. Just a tiny patch to use luaL_register() on luacore kernel module. Regards -- Lourival Vieira Neto lua_kernel_luacore.patch Description: Binary data
Re: A Library for Converting Data to and from C Structs for Lua
Hi James, On Sat, Nov 23, 2013 at 1:22 AM, James K. Lowden jklow...@schemamania.org wrote: On Mon, 18 Nov 2013 09:07:52 +0100 Marc Balmer m...@msys.ch wrote: After discussion with lneto@ and others we realised that there are several such libraries around, and that I as well as lneto@ wrote one. So we decided to merge our work and have a library that allows one to encode/decode structured binary data in one call (my work) and that allows one to access individual fields as well (lneto@'s work). How do you deal with the usual issues of alignment and endianism? I'm working on this approach: d = data.new{0xF0, 0xFF, 0x00} -- creates a new data object with 3 bytes. d:layout{ x = { __offset = 0, __length = 3 }, y = { __offset = 8, __length = 16, __endian = 'net' }, z = { __offset = 0, __step = 9 } } d.x -- returns the 3 most significant bits from d (that is, 7) d.y -- returns 16 bits counting from bit-8 most significant. -- in this case, these 2 bytes are converted using ntohs(3), that is 0xFF00. d.z[1] -- returns the 9 most significant bits from d (that is, 0x1E1). (BTW, I'm curious what unstructured binary data might be.) I don't think that Marc meant 'structured binary data' in opposition to 'unstructured binary data'. We are both working with structured data. The main difference is that I'm working on random access and he is working on encoding/decoding data at once (e.g., x, y, z = d:extract(fmt)). Regards, -- Lourival Vieira Neto
Re: A Library for Converting Data to and from C Structs for Lua
On Sat, Nov 23, 2013 at 11:46 AM, Lourival Vieira Neto lourival.n...@gmail.com wrote: Hi James, On Sat, Nov 23, 2013 at 1:22 AM, James K. Lowden jklow...@schemamania.org wrote: On Mon, 18 Nov 2013 09:07:52 +0100 Marc Balmer m...@msys.ch wrote: After discussion with lneto@ and others we realised that there are several such libraries around, and that I as well as lneto@ wrote one. So we decided to merge our work and have a library that allows one to encode/decode structured binary data in one call (my work) and that allows one to access individual fields as well (lneto@'s work). How do you deal with the usual issues of alignment and endianism? I'm working on this approach: d = data.new{0xF0, 0xFF, 0x00} -- creates a new data object with 3 bytes. d:layout{ x = { __offset = 0, __length = 3 }, y = { __offset = 8, __length = 16, __endian = 'net' }, z = { __offset = 0, __step = 9 } } d.x -- returns the 3 most significant bits from d (that is, 7) d.y -- returns 16 bits counting from bit-8 most significant. -- in this case, these 2 bytes are converted using ntohs(3), that is 0xFF00. d.z[1] -- returns the 9 most significant bits from d (that is, 0x1E1). s/0x1E1/0x1E1 if the platform is big endian or 0x1FF if little/ Note, in this case there is no conversion (the default __endian is 'host'). For non-aligned fields, it fetches the smallest aligned amount of data that suffices, then masks it to access only the specified length. In this case, it fetches the 16 most significant bits (which could be 0xF0FF or 0xFFF0, depending on endianness) and masks them to access only the 9 most significant bits (that is, 0x1E1 or 0x1FF). Regards, -- Lourival Vieira Neto
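The host-endian correction above becomes deterministic once both byte orders are spelled out, so it can be checked with a small C sketch. top_bits_16 is a hypothetical helper that computes both interpretations explicitly instead of reading the platform's native order; it is not Luadata code.

```c
#include <assert.h>
#include <stdint.h>

/* Read the first two bytes of buf as a 16-bit word in the requested
 * byte order, then keep only the top `nbits` bits of that word --
 * mirroring "fetch the smallest aligned amount, then mask". */
static unsigned
top_bits_16(const uint8_t *buf, int big_endian, unsigned nbits)
{
	uint16_t w = big_endian
	    ? (uint16_t)((buf[0] << 8) | buf[1])	/* 0xF0FF for the example */
	    : (uint16_t)((buf[1] << 8) | buf[0]);	/* 0xFFF0 for the example */
	return w >> (16 - nbits);
}
```

For {0xF0, 0xFF, 0x00} and a 9-bit field at offset 0, this yields 0x1E1 under the big-endian interpretation and 0x1FF under the little-endian one, matching the correction in the message.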
Re: A Library for Converting Data to and from C Structs for Lua
On Wed, Nov 20, 2013 at 6:36 AM, Marc Balmer m...@msys.ch wrote: Am 18.11.13 09:07, schrieb Marc Balmer: Am 17.11.13 13:05, schrieb Marc Balmer: I came across a small library for converting data to and from C structs for Lua, written by Roberto Ierusalimschy: http://www.inf.puc-rio.br/~roberto/struct/ I plan to import it and to make it available to both lua(1) and lua(4) as follows: The source code will be imported into ${NETBSDSRCDIR}/sys/external/mit/struct unaltered and then be modified to compile on NetBSD. Then ${NETBSDSRCDIR}/sys/module/luastruct/ and ${NETBSDSRCDIR}/lib/lua/struct/ directories will be added with the respective Makefiles etc. After discussion with lneto@ and others we realised that there are several such libraries around, and that I as well as lneto@ wrote one. So we decided to merge our works and have a library that allows to encode/decode structured binary data in one call (my work) and that allows to access individual fields as well (lneto@'s work). Now we need a name that covers both use cases. It could be memory because it deals with memory, or just data, which I favour. Opinions on the name? Since no one replied, it will go by the name 'data' and be available for both Luas. 'Data' is fine for me; I don't have a better suggestion anyway =). @lneto: I will start with the pack/unpack parts, you can then add your stuff whenever you want, ok? Just fine for me. However, I think we need to define the API before (re)starting coding. Regards, -- Lourival Vieira Neto
Re: A Library for Converting Data to and from C Structs for Lua
On Wed, Nov 20, 2013 at 7:31 AM, Marc Balmer m...@msys.ch wrote: Am 20.11.13 10:26, schrieb Justin Cormack: On 20 Nov 2013 08:38, Marc Balmer m...@msys.ch wrote: Now we need a name that covers both use cases. It could be memory because it deals with memory, or just data, which I favour. Opinions on the name? Since no one replied, it will go by the name 'data' and be available for both Luas. @lneto: I will start with the pack/unpack parts, you can then add your stuff whenever you want, ok? I don't have opinions on the name but I do have a set of feature requirements. I am currently using luaffi which needs some work to remove the non portable parts, but that's not far off. Happy to switch but I do need access to struct members like tables, nested structs, unions, casts, metatables for structs. If there was an outline design doc that would be helpful. I suggest we discuss this library in Toulouse, OK? I think it would be very nice. But, please send me your thoughts after =). -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sun, Nov 17, 2013 at 7:35 AM, Marc Balmer m...@msys.ch wrote: Am 17.11.13 04:36, schrieb Lourival Vieira Neto: On Sat, Nov 16, 2013 at 10:44 PM, Christos Zoulas chris...@zoulas.com wrote: On Nov 16, 9:30pm, lourival.n...@gmail.com (Lourival Vieira Neto) wrote: -- Subject: Re: [patch] changing lua_Number to int64_t | On Sat, Nov 16, 2013 at 8:52 PM, Christos Zoulas chris...@astron.com wrote: | In article 52872b0c.5080...@msys.ch, Marc Balmer m...@msys.ch wrote: | Changing the number type to int64_t is certainly a good idea. Two | questions, however: | | Why not intmax_t? | | My only argument is that int64_t has a well-defined width and, AFAIK, | intmax_t could vary. But I have no strong feelings about this. Do you | think intmax_t would be better? Bigger is better. And you can use %jd to print which is a big win. I agree that bigger is better and %jd is much better than % PRI/SCN. But don't you think that to know the exact width is even better? You can always use sizeof if the need to know the size arises. I mean knowing it as a script programmer. I think it would be helpful to know the exact lua_Number width when you are writing a script. AFAIK, you don't have sizeof functionality in Lua. So, IMHO, the lua_Number width should be fixed and documented. Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sun, Nov 17, 2013 at 7:37 AM, Marc Balmer m...@msys.ch wrote: Am 17.11.13 04:49, schrieb Terry Moore: I believe that if you want the Lua scripts to be portable across NetBSD deployments, you should choose a well-known fixed width. I don't see this as very important. Lua scripts will hardly depend on the size of an integer. But they could. I think that the script programmers should know if the numeric data type is enough for their usage (e.g., time diffs). Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sun, Nov 17, 2013 at 8:52 AM, Alexander Nasonov al...@yandex.ru wrote: Marc Balmer wrote: The basic issue here is that Lua has only _one_ numerical data type, which is an integral type in kernel, but a floating point type in userspace. Right, not everyone here knows this I guess. Thanks for making it clear. Sorry, I thought that it was clear. Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sun, Nov 17, 2013 at 9:30 AM, Alexander Nasonov al...@yandex.ru wrote: Mouse wrote: Also, using an exact-width type assumes that the hardware/compiler in question _has_ such a type. It's possible that lua, NetBSD, or the combination of the two is willing to write off portability to machines where one or both of those potential portability issues becomes actual. But that seems to be asking for trouble to me; history is full of "but nobody will ever want to port this to one of _those_" that come back to bite people. I was perfectly fine with long long because it's long enough to represent all integers in range [-2^53-1, 2^53-1]. As Marc pointed out, Lua has a single numeric type which is double by default. Many Lua libraries don't need FP and they use a subset of exactly representable integers (not all of them do range checks, though). Extending the range when porting from userspace to kernel will decrease the pain factor of porting. I think that decreasing the pain factor of porting is not the main point here. Porting Lua user-space libraries should be painful enough independently of the lua_Number type. IMHO, the main point here is to define a lua_Number type adjusted to kernel needs. As we have already broken compatibility with user-space libraries (for several reasons, not only the lack of floating-point numbers), I think it shouldn't matter much now. Regards, -- Lourival Vieira Neto
Re: A Library for Converting Data to and from C Structs for Lua
On Sun, Nov 17, 2013 at 11:23 AM, Marc Balmer m...@msys.ch wrote: Am 17.11.13 13:32, schrieb Hubert Feyrer: On Sun, 17 Nov 2013, Marc Balmer wrote: I plan to import it and to make it available to both lua(1) and lua(4) I wonder if we really need to get all this into NetBSD, instead of moving it to pkgsrc somehow. Yes, we need it to handle structured binary data. BTW, I'm developing a bitwiser library with a similar purpose. It is not ready yet, but it could be helpful for handling binary data without copying it from/to a Lua string in the future. LHF has the lpack library [1], which could be an alternative to Roberto's struct. [1] http://www.tecgraf.puc-rio.br/~lhf/ftp/lua/#lpack Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sun, Nov 17, 2013 at 12:58 PM, Christos Zoulas chris...@zoulas.com wrote: On Nov 17, 1:36am, lourival.n...@gmail.com (Lourival Vieira Neto) wrote: -- Subject: Re: [patch] changing lua_Number to int64_t | Bigger is better. And you can use %jd to print which is a big win. | | I agree that bigger is better and %jd is much better than % PRI/SCN. | But don't you think that to know the exact width is even better? Why? You can always compute it at runtime if you need to. Yes, but I think that is not a common practice in Lua. AFAIK, Lua has only math.huge [1] to tell what the biggest number value available is (which isn't the same as having the lua_Number width). Thus, we would need to provide an interface to return this information. I'd prefer to have it defined at compile time independently of the platform. Anyway, as I stated before, I have no strong feelings about that.. if everybody else prefers intmax_t, I have no objection. [1] http://www.lua.org/manual/5.1/manual.html#pdf-math.huge Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sun, Nov 17, 2013 at 2:02 PM, Christos Zoulas chris...@zoulas.com wrote: On Nov 17, 10:46am, lourival.n...@gmail.com (Lourival Vieira Neto) wrote: -- Subject: Re: [patch] changing lua_Number to int64_t | On Sun, Nov 17, 2013 at 7:37 AM, Marc Balmer m...@msys.ch wrote: | Am 17.11.13 04:49, schrieb Terry Moore: | I believe that if you want the Lua scripts to be portable across NetBSD | deployments, you should choose a well-known fixed width. | | I don't see this as very important. Lua scripts will hardly depend on | the size of an integer. | | But they could. I think that the script programmers should know if the | numeric data type is enough for their usage (e.g., time diffs). By making it the biggest type possible, you never need to be worried. Right.. you just convinced me.. if no one opposes, I'll change that to intmax_t and get rid of PRI/SCNd64 =). Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sun, Nov 17, 2013 at 3:10 PM, Justin Cormack jus...@specialbusservice.com wrote: On Sun, Nov 17, 2013 at 4:52 PM, Lourival Vieira Neto lourival.n...@gmail.com wrote: On Sun, Nov 17, 2013 at 2:02 PM, Christos Zoulas chris...@zoulas.com wrote: On Nov 17, 10:46am, lourival.n...@gmail.com (Lourival Vieira Neto) wrote: -- Subject: Re: [patch] changing lua_Number to int64_t | On Sun, Nov 17, 2013 at 7:37 AM, Marc Balmer m...@msys.ch wrote: | Am 17.11.13 04:49, schrieb Terry Moore: | I believe that if you want the Lua scripts to be portable across NetBSD | deployments, you should choose a well-known fixed width. | | I don't see this as very important. Lua scripts will hardly depend on | the size of an integer. | | But they could. I think that the script programmers should know if the | numeric data type is enough for their usage (e.g., time diffs). By making it the biggest type possible, you never need to be worried. Right.. you just convinced me.. if no one opposes, I'll change that to intmax_t and get rid of PRI/SCNd64 =). 1. Lua 5.3 will have 64 bit integer support as standard, which will make interop and reuse between kernel and userspace code much easier, iff we use int64_t If they are using int64_t for integers, I think it is a good reason for us to stick to int64_t. Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sat, Nov 16, 2013 at 9:25 PM, Lourival Vieira Neto lourival.n...@gmail.com wrote: (...) I moved strtoimax.c to common/libc. Don't know if someone sees a problem on that. BTW, is it OK? Could someone review this? Regards, -- Lourival Vieira Neto
Re: A Library for Converting Data to and from C Structs for Lua
On Sun, Nov 17, 2013 at 4:39 PM, David Holland dholland-t...@netbsd.org wrote: On Sun, Nov 17, 2013 at 01:32:03PM +0100, Hubert Feyrer wrote: I plan to import it and to make it available to both lua(1) and lua(4) I wonder if we really need to get all this into NetBSD, instead of moving it to pkgsrc somehow. This... I think that would be nice to have Lua kernel modules in pkgsrc, if possible. Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sun, Nov 17, 2013 at 6:13 PM, Justin Cormack jus...@specialbusservice.com wrote: On Sun, Nov 17, 2013 at 7:56 PM, Christos Zoulas chris...@zoulas.com wrote: On Nov 17, 3:36pm, lourival.n...@gmail.com (Lourival Vieira Neto) wrote: -- Subject: Re: [patch] changing lua_Number to int64_t | 1. Lua 5.3 will have 64 bit integer support as standard, which will | make interop and reuse between kernel and userspace code much easier, | iff we use int64_t | | If they are using int64_t for integers, I think it is a good reason for us to | stick to int64_t. This is not relevant. The numeric type will still be double, so forget about compatibility between kernel and userland. There is no need for the interpreter to use a fixed width type, but rather it is convenient to use the largest numeric type the machine can represent. There will be two numeric types as standard, int64_t and double. It should be possible to compile the kernel Lua with only int64_t and no double support I would think, so integer only userland programs would be compatible which is a very useful feature. But the semantics of the Lua integer type will be such that it wraps at 64 bit, unlike some hypothetical larger type (that doesn't yet exist and which the kernel doesn't yet use). Well, I don't think I fully understood that; mainly because I'm not familiar with Lua 5.3. Will it provide two number types for scripts? Or are you just talking about the lua_Integer type on the C side? Lua 5.1 already has a lua_Integer type that is defined as ptrdiff_t. Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sun, Nov 17, 2013 at 6:45 PM, Justin Cormack jus...@specialbusservice.com wrote: On Sun, Nov 17, 2013 at 8:39 PM, Lourival Vieira Neto lourival.n...@gmail.com wrote: Well, I don't think I fully understood that; mainly because I'm not familiar with Lua 5.3. Will it provide two number types for scripts? Or are you just talking about the lua_Integer type on the C side? Lua 5.1 already has a lua_Integer type that is defined as ptrdiff_t. Yes, that is correct: 2 will be an integer and 5.3 a float, and operations will be defined in terms of how they convert, so there will be int and float division. The draft manual is here http://www.lua.org/work/doc/manual.html (see 3.4.1). This will not happen for a while, but it will make it much easier in future for interfaces like the kernel that need 64 bit int support, which is why it is being implemented. So not being compatible with this seems a mistake. Humm.. I think that §2.1 brings a good argument: Standard Lua uses 64-bit integers and double-precision floats, (...). I think it would not hurt to stick to the future standard, since 64 bits is good enough for kernel purposes. Regards, -- Lourival Vieira Neto
Re: A Library for Converting Data to and from C Structs for Lua
On Sun, Nov 17, 2013 at 7:57 PM, Marc Balmer m...@msys.ch wrote: Am 17.11.13 14:43, schrieb Lourival Vieira Neto: On Sun, Nov 17, 2013 at 11:23 AM, Marc Balmer m...@msys.ch wrote: Am 17.11.13 13:32, schrieb Hubert Feyrer: On Sun, 17 Nov 2013, Marc Balmer wrote: I plan to import it and to make it available to both lua(1) and lua(4) I wonder if we really need to get all this into NetBSD, instead of moving it to pkgsrc somehow. Yes, we need it to handle structured binary data. BTW, I'm developing a bitwiser library with a similar purpose. It is not ready yet, but it could be helpful for handling binary data without copying it from/to a Lua string in the future. LHF has the lpack library [1], which could be an alternative to Roberto's struct. [1] http://www.tecgraf.puc-rio.br/~lhf/ftp/lua/#lpack So before I commit anything (if at all), we will coordinate this effort and maybe take the best of all libraries. So that we have only one module dealing with data. OK.. Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sat, Nov 16, 2013 at 6:21 AM, Marc Balmer m...@msys.ch wrote: Changing the number type to int64_t is certainly a good idea. Two questions, however: 1) Why do you remove the sys/modules/lua/inttypes.h file? Because this placeholder is no longer necessary. I have just replaced it with sys/inttypes.h (adding -I${S}/sys to CPPFLAGS). 2) In sys/modules/lua/luaconf.h, lua_str2number is still #defined as strtoll(), which assumes long long. Shouldn't that be changed as well to a function taking int64_t as argument? Yes, my bad; sorry, I forgot about str2number. I've adjusted the patch. Now, I'm using (int64_t) strtoimax() instead of strtoll(). I think it should be alright because intmax_t is always greater than or equal to int64_t in width. I moved strtoimax.c to common/libc. I don't know if anyone sees a problem with that. The patch is attached. Regards, -- Lourival Vieira Neto lua_kernel_int64_t-3.patch Description: Binary data
Re: [patch] changing lua_Number to int64_t
On Sat, Nov 16, 2013 at 8:52 PM, Christos Zoulas chris...@astron.com wrote: In article 52872b0c.5080...@msys.ch, Marc Balmer m...@msys.ch wrote: Changing the number type to int64_t is certainly a good idea. Two questions, however: Why not intmax_t? My only argument is that int64_t has a well-defined width and, AFAIK, intmax_t could vary. But I have no strong feelings about this. Do you think intmax_t would be better? Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sat, Nov 16, 2013 at 9:47 PM, Alexander Nasonov al...@yandex.ru wrote: Lourival Vieira Neto wrote: On Sat, Nov 16, 2013 at 8:52 PM, Christos Zoulas chris...@astron.com wrote: In article 52872b0c.5080...@msys.ch, Marc Balmer m...@msys.ch wrote: Changing the number type to int64_t is certainly a good idea. Two questions, however: Why not intmax_t? My only argument is that int64_t has a well-defined width and, AFAIK, intmax_t could vary. But I have no strong feelings about this. Do you think intmax_t would be better? int64_t should be enough to cover the range of exactly representable integers in a userspace Lua program where lua_Number is double. I don't think that keeping compatibility with userspace Lua is the right argument. We have already lost this kind of compatibility by using an integer type for lua_Number. Expecting that kernel lua_Number works just like userspace lua_Number could lead to misunderstandings. I don't see a need for a bigger type unless mainstream Lua switches to long double. I don't expect it to happen any time soon. Why should it matter if Lua switches to a bigger floating-point type? PS Why do you still use a shadow copy of luaconf.h? Please add your changes to the main luaconf.h. Only because that is the way it was committed. I haven't taken the time yet to unify that. But you are right, I'll try to do that in the future. If you guard your kernel changes properly with _KERNEL, they will not affect userspace. Yes, I know. I think the guards are just fine. PPS %PRId64 may break in C++11, space between the literals should fix it. I don't think there are plans to change the kernel language to C++ ;-), but it doesn't hurt to write it in clean C. Just out of curiosity.. do you know why it is not allowed in C++11? Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sat, Nov 16, 2013 at 10:44 PM, Christos Zoulas chris...@zoulas.com wrote: On Nov 16, 9:30pm, lourival.n...@gmail.com (Lourival Vieira Neto) wrote: -- Subject: Re: [patch] changing lua_Number to int64_t | On Sat, Nov 16, 2013 at 8:52 PM, Christos Zoulas chris...@astron.com wrote: | In article 52872b0c.5080...@msys.ch, Marc Balmer m...@msys.ch wrote: | Changing the number type to int64_t is certainly a good idea. Two | questions, however: | | Why not intmax_t? | | My only argument is that int64_t has a well-defined width and, AFAIK, | intmax_t could vary. But I have no strong feelings about this. Do you | think intmax_t would be better? Bigger is better. And you can use %jd to print which is a big win. I agree that bigger is better and %jd is much better than % PRI/SCN. But don't you think that to know the exact width is even better? Regards, -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
On Sun, Nov 17, 2013 at 3:30 AM, Terry Moore t...@mcci.com wrote: From: Lourival Vieira Neto [mailto:lourival.n...@gmail.com] Watch out, by the way, for compiled scripts; I have not checked Lua 5.x, but you may find if not careful that the compiled binary is not loadable on machines with different choices for LP64, ILP32, etc. This is somewhat independent of the choice of lua_Number mapping. Yes, Lua bytecode isn't portable, in fact. BTW, loading Lua bytecode is not recommended by the Lua team. It isn't safe. It's not *much* less safe than compiling and executing a string in the kernel. The only additional attack surfaces are that you can write things that the compiler wouldn't write. This can (1) cause a crash at load time, or it can (2) cause surprising behavior later. I suspect that anyone who would want to allow Lua in the kernel would be somewhat indifferent to type-2 safety issues. If you don't trust the input string, you had better not give it to the kernel (whether or not it's compiled). Type-1 safety issues ought to be fixed, as careful examination of the input data only slows down the load process -- it has no run-time effects. Given that Lua byte codes can be translated into C arrays and then loaded via the API, portability issues can creep in through the back door. Best regards, --Terry Sorry. I think I didn't make myself clear here. By 'it isn't safe' I meant that loading bytecode is not _reliable_ unless you have compiled it with the same Lua build that you are using to run it. Otherwise, it can lead to malfunctioning in either kernel or user space. I was not referring to possible attacks. Note that this is not only about portability among platforms; bytecode is not portable among different builds of the Lua interpreter. In fact, the Lua interpreter makes no compatibility promise about bytecode exchange; the authors' idea is to distribute Lua source code instead of bytecode (AFAIK). 
My point is that it is not a good idea to compile a Lua script with one Lua interpreter build and then run it on another one (even on the same platform). Moreover, there is no significant difference in the time cost of loading strings versus bytecode. Lua compilation is really fast (even compared to the loading time of bytecode). Regards, -- Lourival Vieira Neto
[patch] changing lua_Number to int64_t
Hi Folks, Here are a patch that changes lua_Number from 'long long' to 'int64_t'. Regards, -- Lourival Vieira Neto lua_kernel_int64_t.patch Description: Binary data
Re: [patch] changing lua_Number to int64_t
*is On Sat, Nov 16, 2013 at 12:53 AM, Lourival Vieira Neto lourival.n...@gmail.com wrote: Hi Folks, Here are a patch that changes lua_Number from 'long long' to 'int64_t'. Regards, -- Lourival Vieira Neto -- Lourival Vieira Neto
Re: [patch] changing lua_Number to int64_t
fixed a little issue.. (s/LUA_NUMBER_SCAN %PRId64/LUA_NUMBER_SCAN %SCNd64).. sorry about that.. =/ On Sat, Nov 16, 2013 at 12:55 AM, Lourival Vieira Neto lourival.n...@gmail.com wrote: *is On Sat, Nov 16, 2013 at 12:53 AM, Lourival Vieira Neto lourival.n...@gmail.com wrote: Hi Folks, Here are a patch that changes lua_Number from 'long long' to 'int64_t'. Regards, -- Lourival Vieira Neto -- Lourival Vieira Neto -- Lourival Vieira Neto lua_kernel_int64_t-2.patch Description: Binary data
Re: Lua in-kernel (lbuf library)
Hi Christoph, Firstly, thanks for your comments. I really appreciated that =). BTW, I renamed lbuf to Lua bitwiser and removed support for unnamed array access (buf:mask(alignment [, offset, length]) and buf:mask{ length_pos1, length_pos2, ... }). Instead, I introduced a new way to provide array access (see below). On Sun, Oct 20, 2013 at 8:05 PM, Christoph Badura b...@bsd.de wrote: On Tue, Oct 15, 2013 at 06:01:29PM -0300, Lourival Vieira Neto wrote: Also, having to switch mentally between zero-based arrays in the kernel C code and 1-based arrays in the Lua code makes my head ache. It's something that doesn't bug me so much.. But, if necessary it could be changed to 0-based in this userdata. When you create your own data structures, I guess it is a wash. You have to adjust +/-1 in infrequent circumstances in either scenario. But in this case you are creating a special purpose language that operates in a universe of zero-based arrays. And that's not only the kernel code. Every Internet protocol specification that I remember is using zero-based indexing. For someone dealing with both sides (the world and your Lua library), it makes the difference between constantly having to be alert to remember to do the offset adjustment. That is a lot more mental work for anyone working with this library. If you use 1-based indices talking to protocol people will be funny too: ``Anyone know why the flags in byte 6 of this packet are funny?'' ``Sure, that's most likely because the flags are in byte 5.'' I think it is worth thinking hard about this. Yes.. I'll keep thinking about this. For now, I'm using 1-based indices for Lua array access on buffers and 0-based offsets for bitmask definitions. What is buffer and how does it relate to mbufs? Note, non-contiguous buffers are still an open problem in lbuf. I don't know if I should use a ptrdiff_t to pass the distance to the 'next' field, a 'next()' to return the 'next' field, or something else. You seem to be talking about implementation. 
I was talking about the interface of the library. Yes, sorry. At this stage, sometimes, it is a little difficult to separate these things. However, you could create a lbuf from a mbuf header as follows: lbuf_new(L, mbuf->m_data, mbuf->m_len, NULL, true); I don't think that is a good way. You say you want to inspect packet data in the kernel. Well, the packet's data can be spread over a chain of mbufs. Also, mbufs may have internal or external storage. You don't want to deal with that as the user of your library. As a user, I want an interface like this: lbuf_from_mbuf(L, mbuf, NULL, true); I'm thinking of providing an interface like this in an adapter layer. Thus, we could use the Lua bitwiser library in other areas. I was just giving an example of how to use it with the current implementation (again, sorry for not separating interface discussion from implementation). Yes, mea culpa =(. I wasn't clear about that. The 'net' flag was the way I found to 'record' the buffer endianness. That means true if the buffer uses BE and false if it uses host endianness. It has the same semantics as the hton* and ntoh* functions. I don't know if it is better to pass the endianness itself as a flag (e.g., enum { BIG_ENDIAN, LITTLE_ENDIAN, HOST_ENDIAN }). What do you think? For me the most convenient interface would be if I didn't have to mention the host byte order. Just record what byte order the buffer is in, and convert when appropriate. Alan made a good point. It may be convenient and/or necessary to specify a different byte order in a mask. I'm working on it; thinking of the following form: bitwiser.mask{ field = { offset, length, sign, endian, step }}, where sign is a boolean, endian is a string like 'host', 'h', 'big', 'b', 'little', 'l', 'net' or 'n', and step is a number between [1, 64]. 
And (to allow omitting or non-ordering of parameters): bitwiser.mask{ field = { __offset = offset, __length = length, __sign = sign, __endian = endian, __step = step }} defaults are __sign = true, __endian = 'host' and __step = undef. If step is present or length is omitted, the lib assumes that it is a segment field, which means that it should return a bitwiser.buffer userdatum when accessed, which can be accessed like an array (using step, if it is defined, or else the step of the original buffer, to determine the length of each field). It could also be masked to use field access. For example: m = bitwiser.mask{ type = { 0, 4 }, flags = { 4, 4, __step = 1 }, payload = { 8 } } b = bitwiser.buffer{ 0xff, 0, 0xff, 0 } -- new buffers have step = 8, by default b[1] -- 0xff b:mask(m) b.flags[1] = false -- unsets bit-4 (0-based) b.flags[4] -- returns bit-7 (0-based), 1 in this case b.payload:mask{ padding = { 0, 8 }, data = { 8, __step = 16 } } b.payload.data[1] -- returns 2 bytes from bit-16 (0-based) of the original buffer, -- 0x00ff or 0xff00 depending on platform endianness, in this case
Re: Lua in-kernel (lbuf library)
Hi Artem, On Wed, Oct 23, 2013 at 5:10 PM, Artem Falcon lo...@gero.in wrote: On Tue, Oct 15, 2013 at 06:01:29PM -0300, Lourival Vieira Neto wrote: Also, having to switch mentally between zero-based arrays in the kernel C code and 1-based arrays in the Lua code make my head ache. Above this well-discussed inconvenience there is a thing which may hurt more. It's the Lua's stack-based C API and all the stack composition pottering it imposes on you. This may be customary for those having an experience with concatenative languages, but it'll be a source of errors for the others. See [1]'s Stack based API is harder for more on it. [1] http://julien.danjou.info/blog/2011/why-not-lua -- dukzcry Here is another nice discussion about C APIs of scripting languages: www.inf.puc-rio.br/~roberto/docs/jucs-c-apis.pdf Regards, -- Lourival Vieira Neto
Re: Why do we need lua in-tree again? Yet another call for actual evidence, please. (was Re: Moving Lua source codes)
Hi, On Fri, Oct 18, 2013 at 11:09 AM, Matt W. Benjamin m...@linuxbox.com wrote: Hi, The linked research was performed on Linux, which has NFSv4.1 and pNFS client implementations. Evidently, you can do this kind of thing with an out-of-tree Lua kernel extension. Matt Evidently. I'm not arguing that we need that. I'm just arguing that I see benefits and no harm. Regards, -- Lourival Vieira Neto
Re: Why do we need lua in-tree again? Yet another call for actual evidence, please. (was Re: Moving Lua source codes)
On Fri, Oct 18, 2013 at 9:08 AM, Taylor R Campbell riastr...@netbsd.org wrote: Date: Thu, 17 Oct 2013 19:16:16 -0300 From: Lourival Vieira Neto lourival.n...@gmail.com Lua is a tool, not an end in itself. I think that you are formulating a chicken-and-egg problem: we need the basic support for then having applications, and we need applications for then having basic support. This is not a chicken-and-egg problem. You can make an experimental kernel with Lua support and make an experimental application in Lua, all before anything has to be committed to HEAD[*]. Then you can show that the application serves a useful function, has compelling benefits over writing it in C, and can offer confidence in robustness. [*] You could do this in a branch, you could do this in a private Git repository, or you could even just do this in a local CVS checkout (since kernel Lua requires no invasive changes, right?). Yes, but how do we do device driver development? Are we to branch the tree for each non-intrusive and disabled-by-default device driver? If we have developed a device driver for an uncommon device, do we have to put it in a branch? (Please note, I'm asking that in a friendly way). That is not about needing, but about supporting a certain kind of agile development, prototyping, customization and experimentation in the NetBSD kernel (how could it be hurtful?). Prototyping and experimentation is great! Show examples! What hurts is getting bitrotten code that nobody actually maintains or uses (when was the last Lua update in src?) and provides a new Turing machine with device access in the kernel for attack vectors. I don't see how an optional module could be used for attacks. If users enable it, they should know what they are doing (as with loading any kernel module). 
[1] https://github.com/dergraf/PacketScript [2] http://www.pdsw.org/pdsw12/papers/grawinkle-pdsw12.pdf In the two links you gave, I found precisely five lines of Lua code, buried in the paper, and those five lines seemed to exist only for the purpose of measuring how much overhead Lua adds to the existing pNFS code or something. I'm just showing examples of how it could be useful for user applications. I understand that you do not agree with that. But I'm not arguing that we have to add these applications into the tree. I'm arguing that we could benefit users with such a tool. Regards, -- Lourival Vieira Neto
Re: Why do we need lua in-tree again? Yet another call for actual evidence, please. (was Re: Moving Lua source codes)
Hi, On Fri, Oct 18, 2013 at 1:31 PM, Aleksej Saushev a...@inbox.ru wrote: (...) Lua is a tool, not an end in itself. I think that you are formulating a chicken-and-egg problem: we need the basic support for then having applications, and we need applications for then having basic support. The problem with your approach is that such chicken-and-egg problems are to be solved _at_once_ rather than laying eggs everywhere around and have everyone else wait till at least one chicken appears. No. I'm talking about putting just one egg, just a device driver. Sure, we do not *need* a script language interpreter embedded in the kernel, as we do not need a specific file system. But I do not get why we should not. There is application development being done right now. Also, there are a few interesting works that used Lunatik in Linux [1, 2] that could be done more easily now in NetBSD just because we have the right environment for that. That is not about needing, but about supporting a certain kind of agile development, prototyping, customization and experimentation in the NetBSD kernel (how could it be hurtful?). I think that is why we *should* (not need) have this on the tree. IMHO. I have to point out that interesting work is commonly used as a sort of euphemism to refer to highly experimental work with unclear future. Yes. But I'm talking about interesting *user* work. I'm not claiming that they should be in the kernel. I'm just saying that, IMHO, we should incorporate a small device driver that facilitates this kind of development (outside the tree). You tell that there's interesting work using Lua in Linux. Was it accepted in any experimental Linux distribution like Fedora? What was the outcome of discussion among linux kernel developers? Currently there's no indication that it was accepted anywhere. I really don't know. I'm not a member of these communities, nor am I claiming to incorporate such works here. 
However, I think that there was a discussion about PacketScript on OpenWRT, but I don't know how it evolved. I doubt very much that we want such unreliable development practices as agile ones in the kernel, and experimentation work can be done more easily and better in a branch or a personal repository. I agree with you on this point: experimental work should be done apart from the tree. And last. The appeal to "why not" is defective. NetBSD is not your personal playground; there exist other people who have to deal with the inadvertent mess you can leave behind you. That's why you ought to present solid arguments that justify why other people should tolerate your experimentations. I guess you misunderstood that. I'm not arguing that we should do it just because there is no contrary argument. I sincerely asked 'why not?' trying to understand the contrary argumentation. Also, I'm not saying that you should tolerate my experimentation. Far from it. I haven't committed anything nor tried to impose anything. I'm just trying to present a point of view and understand yours. When I talked about experimentation, I was trying to say that providing support for that kind of experimentation for users sounds like a good idea to me, and I don't see how it is harmful. Which doesn't mean that I'm proposing that my personal experimentation should be in the tree. Regards, -- Lourival Vieira Neto
Re: Why do we need lua in-tree again? Yet another call for actual evidence, please. (was Re: Moving Lua source codes)
Lua is a tool, not an end in itself. I think that you are formulating a chicken-and-egg problem: we need the basic support before we can have applications, and we need applications before we can justify the basic support. The problem with your approach is that such chicken-and-egg problems are to be solved _at_once_ rather than laying eggs everywhere around and having everyone else wait till at least one chicken appears. No. I'm talking about putting just one egg: just a device driver. Sorry, but this is not just one egg. "And counting" was your reaction to complaints that almost all the code related to Lua is code to support Lua itself rather than anything else. "And counting" == there is ongoing work happening outside the tree. Sure, we do not *need* a scripting language interpreter embedded in the kernel, just as we do not need any specific file system. But I do not see why we should not have one. There is development of applications being done right now. Also, there are a few interesting works that used Lunatik in Linux [1, 2] that could be done more easily now in NetBSD, just because we have the right environment for that. It is not about needing; it is about supporting a certain kind of agile development, prototyping, customization and experimentation in the NetBSD kernel (how could it be hurtful?). I think that is why we *should* (not need to) have this in the tree. IMHO. I have to point out that "interesting work" is commonly used as a sort of euphemism for highly experimental work with an unclear future. Yes. But I'm talking about interesting *user* work. I'm not claiming that those works should be in the kernel. I'm just saying that, IMHO, we should incorporate a small device driver that facilitates this kind of development (outside the tree). I'm of the opinion that this device driver can and should stay outside the tree until its utility can be demonstrated without this much strain. At least, this is one of the reasons why we support kernel modules. Understood. 
You say that there's interesting work using Lua in Linux. Was it accepted in any experimental Linux distribution like Fedora? What was the outcome of discussion among Linux kernel developers? Currently there's no indication that it was accepted anywhere. I really don't know. I'm not a member of those communities, nor am I proposing to incorporate such works here. However, I think that there was a discussion about PacketScript on OpenWRT, but I don't know how it evolved. This demonstrates that Lua isn't actually useful in the kernel. I don't think so. It could perhaps suggest that, but it does not demonstrate it. And last. The appeal to "why not" is defective. NetBSD is not your personal playground; there exist other people who have to deal with the inadvertent mess you can leave behind you. That's why you ought to present solid arguments that justify why other people should tolerate your experimentations. I guess you misunderstood that. I'm not arguing that we should do it just because there is no contrary argument. I sincerely asked 'why not?' trying to understand the contrary argumentation. Also, I'm not saying that you should tolerate my experimentation. Far from it. I haven't committed anything nor tried to impose anything. On my side it sounded like that; sorry if I'm wrong. It may have sounded that way, but it wasn't what I meant. I'm just trying to present a point of view and understand yours. When I talked about experimentation, I was trying to say that providing support for that kind of experimentation for users sounds like a good idea to me, and I don't see how it is harmful. Which doesn't mean that I'm proposing that my personal experimentation should be in the tree. The problem as I see it is that we have one developer (two at most) pushing hard for Lua in base and in the kernel and providing no satisfactory arguments why this is to be done at all. The lack of any real code for years reinforces such doubts. "Why not" sounds like an argument for highly experimental work in this context. 
And I wouldn't have anything against this "why not" if all the work were dressed accordingly. For now I'd say that Lua support hasn't demonstrated any benefit. I'd say that it should be removed and the work continued in a branch until the benefits become clearer. Understood. Regards, -- Lourival Vieira Neto
Re: Why do we need lua in-tree again? Yet another call for actual evidence, please. (was Re: Moving Lua source codes)
Lua is a tool, not an end in itself. I think that you are formulating a chicken-and-egg problem: we need the basic support before we can have applications, and we need applications before we can justify the basic support. This is not a chicken-and-egg problem. You can make an experimental kernel with Lua support and make an experimental application in Lua, all before anything has to be committed to HEAD[*]. Then you can show that the application serves a useful function, has compelling benefits over writing it in C, and can offer confidence in robustness. [*] You could do this in a branch, you could do this in a private Git repository, or you could even just do this in a local CVS checkout (since kernel Lua requires no invasive changes, right?). Yes, but how do we do device driver development? Do we branch the tree for every non-intrusive, disabled-by-default device driver? If we have developed a device driver for an uncommon device, do we have to put it in a branch? (Please note, I'm asking that in a friendly way.) We didn't import yet another programming language interpreter for driver development previously. Besides, what drivers have been developed in Lua so far? If I understand it correctly, the only driver is the Lua interpreter itself. I meant traditional device drivers, but never mind. That is not about needing, but it is about supporting a certain kind of agile development, prototyping, customization and experimentation in the NetBSD kernel (how could it be hurtful?). Prototyping and experimentation are great! Show examples! What hurts is getting bitrotten code that nobody actually maintains or uses (when was the last Lua update in src?) and that provides a new Turing machine with device access in the kernel as an attack vector. I don't see how an optional module could be used for attacks. If users enable it, they should know what they are doing (as with loading any kernel module). Was anything done to warn users? The code is not even linked in yet. Regards, -- Lourival Vieira Neto
Re: Why do we need lua in-tree again? Yet another call for actual evidence, please. (was Re: Moving Lua source codes)
Hi Jeff, On Thu, Oct 17, 2013 at 1:26 PM, Jeff Rizzo r...@tastylime.net wrote: On 10/14/13 1:46 PM, Marc Balmer wrote: It is entirely plausible to me that we could benefit from using Lua in base, or sysinst, or maybe even in the kernel. But that argument must be made by showing evidence of real, working code that has compelling benefits, together with confidence in its robustness -- not by saying that if we let users do it then it will happen. There is real-world, real working code. In userland and in kernel space. There are developers waiting for the kernel parts to be committed, so they can continue their work as well. *Where* is this code? The pattern I see happening over and over again is: NetBSD Community: Please show us the real working code that needs this. mbalmer: the code is there! (pointer to actual code not in evidence) I do not doubt that something exists, but the onus is on the person proposing the import to convince the skeptics, or at least to make an actual effort. I see lots of handwaving, and little actual code. YEARS after the import of lua into the main tree, I see very little in-tree evidence of its use. In fact, what I see is limited to:
1) evidence of lua bindings for netpgp.
2) evidence of some tests in external/bsd/lutok
3) the actual lua arc in external/mit/lua
4) gpio and sqlite stuff in liblua
5) some lua bindings in libexec/httpd (bozohttpd)
6) two example files in share/examples/lua
7) the luactl/lua module/lua(4) stuff you imported yesterday
...and counting. There is also ongoing work happening =). Am I missing something major here? The only actual usage I see is netpgp and httpd; the rest is all in support of lua itself. I do not see evidence that anyone is actually using lua in such a way that requires it in-tree. When you originally proposed importing lua back in 2010, you talked a lot about how uses would materialize. It's now been 3 years, and I just don't see them. 
If I am wrong about this, I would love some solid pointers to evidence of my wrongness. Now you're using very similar arguments for bringing lua into the kernel; I would very much like to see some real, practical, *useful* code demonstrating just why this is a good thing. Beyond the 'gee, whiz' factor, I just don't see it. Lua is a tool, not an end in itself. I think that you are formulating a chicken-and-egg problem: we need the basic support for then having applications, and we need applications for then having basic support. Sure, we do not *need* a script language interpreter embedded in the kernel, as we do not need a specific file system. But I do not get why we should not. There is current development of applications being done right now. Also, there is a few interesting works that used Lunatik in Linux [1, 2] that could be done more easily now in NetBSD just because we have the right environment for that. That is not about needing, but it is about supporting a certain kind of agile development, prototyping, customization and experimentation in the NetBSD kernel (how could it be hurtful?). I think that is why we *should* (not need) have this on the tree. IMHO. [1] https://github.com/dergraf/PacketScript [2] http://www.pdsw.org/pdsw12/papers/grawinkle-pdsw12.pdf +j Regards, -- Lourival Vieira Neto
Re: Lua in-kernel (lbuf library)
Hi Justin, On Tue, Oct 15, 2013 at 7:38 PM, Justin Cormack jus...@specialbusservice.com wrote: On Thu, Oct 10, 2013 at 7:15 PM, Lourival Vieira Neto lourival.n...@gmail.com wrote: Hi folks, It has been a long time since my GSoC project and though I have tried to come back, I've experienced some personal issues. However, now I'm coding again. I'm developing a library to handle buffers in Lua, named lbuf. It is being developed as part of my efforts to perform experimentation in the kernel network stack using Lua. Initially, I intended to bind mbuf to allow, for example, writing protocol dissectors in Lua. For example, calling a Lua function to inspect network packets: function filter(packet) if packet.field == value then return DROP end return PASS end Thus, I started to design a Lua binding to mbuf inspired by '#pragma pack' and the bitfields of the C language. Then, I realized that this Lua library could be useful in other kernel (and user-space) areas, such as device drivers and user-level protocols. So, I started to develop this binding generically, as an independent library that gives random access to the bits in a buffer. It is still at a very early stage, but I want to share some thoughts. I have been using the luajit ffi and luaffi, which let you directly use C structs (with bitfields) in Lua to do this. It makes it easier to reuse stuff that is already defined in C. (luaffi is not in its current state portable, but my plan is to strip out the non-portable bits, which are the function call support.) Justin I have never used luaffi. It sounds very interesting and I think it could be very useful for binding already-defined C structs, but my purpose is to dynamically define data layouts using Lua syntax (without parsing C code). Regards, -- Lourival Vieira Neto
Re: Lua in-kernel (lbuf library)
On Wed, Oct 16, 2013 at 3:50 AM, Marc Balmer m...@msys.ch wrote: Am 15.10.13 23:01, schrieb Lourival Vieira Neto: [...] Also, having to switch mentally between zero-based arrays in the kernel C code and 1-based arrays in the Lua code make my head ache. It's something that doesn't bug me so much.. But, if necessary it could be changed to 0-based in this userdata. In C an array index is actually an offset from the top, so 0 is the natural way to denote element nr. 1 in C. In Lua, a numeric array index is not an offset, but the ordinal array position. So 1 is the natural way to denote the first element. Strictly speaking, it's actually C that is weird: Index n denotes array element n + 1... Following the principle of least astonishment, I would not recommend starting to do 0 based stuff in Lua, a Lua programmer certainly expects things to start at 1. [...] Indeed. -- Lourival Vieira Neto
Re: Lua in-kernel (lbuf library)
On Wed, Oct 16, 2013 at 11:45 AM, Justin Cormack jus...@specialbusservice.com wrote: (...) Yes, absolutely it makes more sense if already defined in C. For parsing binary stuff I would look at Erlang for inspiration too; it has one of the nicer designs. Justin I have never gone that far with Erlang. It looks really interesting [1]. I'll take a deeper look later. Thanks! Regards, -- Lourival Vieira Neto
Re: Lua in-kernel (lbuf library)
Hi Christoph, On Mon, Oct 14, 2013 at 10:02 AM, Christoph Badura b...@bsd.de wrote: First, I find the usage of the buf terminology confusing. In kernel context I associate buf with the file system buffer cache's buf structure. Packet buffers are called mbufs. I would appreciate it if the terminology were consistent with the kernel, or at least not confusing. This is due to my lack of creativity =).. I'm quite open to naming suggestions. Also, having to switch mentally between zero-based arrays in the kernel C code and 1-based arrays in the Lua code makes my head ache. It's something that doesn't bug me so much.. But, if necessary, it could be changed to 0-based in this userdata. On Thu, Oct 10, 2013 at 03:15:54PM -0300, Lourival Vieira Neto wrote: C API: lbuf_new(lua_State *L, void *buffer, size_t length, lua_Alloc free, bool net); * creates a new lbuf userdatum and pushes it on the Lua stack. The net flag indicates if it is necessary to perform endianness conversion. What is buffer and how does it relate to mbufs? How do I create a new lbuf from an mbuf? Or from an array of bytes? Note, non-contiguous buffers are still an open problem in lbuf. I don't know if I should use a ptrdiff_t to pass the distance to the 'next' field, a 'next()' to return the 'next' field, or something else. However, you could create an lbuf from an mbuf header as follows: lbuf_new(L, mbuf->m_data, mbuf->m_len, NULL, true); or from an array: uint8_t array[ N ]; lbuf_new(L, (void *) array, N, NULL, false); // 'false' means 'use the platform endianness' Then, you could call a Lua function passing this lbuf, for example: lua_getglobal(L, "handler"); lbuf_new(L, mbuf->m_data, mbuf->m_len, NULL, true); lua_pcall(L, 1, 0, 0); In order to indicate that endianness conversion is necessary I need to know the future uses of the buffer. Clairvoyance excepted, that is kinda hard. It's a generic data structure that could be used to handle bit fields or nonaligned data. 
If you are going to make the buffers endianness-aware, why not record the endianness that the packet is encoded in? Then byteswapping can be performed automatically depending on the consumer's endianness. I think a lot of redundant code can be avoided this way. And you don't describe under what circumstances endianness conversion is performed. Yes, mea culpa =(. I wasn't clear about that. The 'net' flag was the way I found to 'record' the buffer endianness. That means: true if the buffer uses BE (big-endian) and false if it uses HE (host endianness). It has the same semantics as the hton* and ntoh* functions. I don't know if it would be better to pass the endianness itself as a flag (e.g., enum { BIG_ENDIAN, LITTLE_ENDIAN, HOST_ENDIAN }). What do you think? So, if you set the net flag true, when you access a bit field the conversion to and from big-endian, if needed, is done automatically, taking the smallest aligned set of bits. For example: buf:rawget(0, 9) ~ if the net flag is *true*: takes 16 bits from the beginning of the buffer (as is); converts these 2 bytes from BE to HE (if necessary); and returns these 2 bytes masked to preserve only the most significant 9 bits (zeroing the remaining bits) and shifted to the LSB. If net is *false*: just returns the first 2 bytes masked and shifted (without conversion). 
Then these 2 bytes are expanded to the lua_Number type (int64_t in the kernel). That is:
a) If the net flag is _true_ and the platform is LE:
1- Takes 16 bits: [ b0 | b1 | b2 | b3 | b4 | b5 | b6 | b7 ][ b8 | b9 | b10 | b11 | b12 | b13 | b14 | b15 ]
2- Converts it to LE: [ b8 | b9 | b10 | b11 | b12 | b13 | b14 | b15 ][ b0 | b1 | b2 | b3 | b4 | b5 | b6 | b7 ]
3- Returns the first 2 bytes masked and shifted: [ b1 | b2 | b3 | b4 | b5 | b6 | b7 | b8 ][ 0 | 0 | 0 | 0 | 0 | 0 | 0 | b0 ]
b) If the net flag is _false_ and the platform is LE:
1- Takes 16 bits: [ b0 | b1 | b2 | b3 | b4 | b5 | b6 | b7 ][ b8 | b9 | b10 | b11 | b12 | b13 | b14 | b15 ]
2- Returns the first 2 bytes masked and shifted: [ b9 | b10 | b11 | b12 | b13 | b14 | b15 | b0 ][ 0 | 0 | 0 | 0 | 0 | 0 | 0 | b8 ]
c) If the net flag is _true or false_ and the platform is BE:
1- Takes 16 bits: [ b0 | b1 | b2 | b3 | b4 | b5 | b6 | b7 ][ b8 | b9 | b10 | b11 | b12 | b13 | b14 | b15 ]
2- Returns the first 2 bytes masked and shifted: [ 0 | 0 | 0 | 0 | 0 | 0 | 0 | b0 ][ b1 | b2 | b3 | b4 | b5 | b6 | b7 | b8 ]
Lua API: - array access (1) lbuf:mask(alignment [, offset, length]) buf[ix] ~ accesses 'alignment' bits from the 'alignment*(ix-1)+offset' position e.g.: buf:mask(3) buf[3] ~ accesses 3 bits from the bit-6 position What does that mean? Does it return the top-most 2 bits from the first byte plus the least significant bit from the second byte of the buffer? It means the 2 least significant bits from the first byte and the LSB from the second. What is 'length' for? Offset and length could be used to impose boundaries on the mask. For example, if you want to analyse a segment of the buffer that is organized
Re: lua_Number in the kernel
Hi Alexander, On Mon, Sep 30, 2013 at 7:24 PM, Alexander Nasonov al...@yandex.ru wrote: [ Cc'ing Justin, who seems to be interested in Lua in NetBSD, but I'm not sure whether he's subscribed to tech-kern@. ] Like some other people, I believed that the Lua kernel project was dormant and was just waiting for any activity before starting a discussion here, but Marc replied today to an ongoing discussion on developers@. Hence, my post. It is not dormant. Marc, mainly, and I are still working on it, but while it is developed in the attic, it will always look dormant =(. I'd like to propose that lua_Number in the kernel should always be int64_t (*). This type will guarantee regular arithmetic rules for the range (-2^53, 2^53) and for the 32-bit signed integer range, in particular. I think we have already discussed that in the past and I agree with you. In fact, I really want to commit that change, but we need to have the code in the tree for that. Regards, -- Lourival Vieira Neto
Re: Lua in-kernel (lbuf library)
On Tue, Oct 15, 2013 at 7:22 PM, Alexander Nasonov al...@yandex.ru wrote: Lourival Vieira Neto wrote: I'm developing a library to handle buffers in Lua, named lbuf. It is being developed as part of my efforts to perform experimentation in the kernel network stack using Lua. Initially, I intended to bind mbuf to allow, for example, writing protocol dissectors in Lua. For example, calling a Lua function to inspect network packets: function filter(packet) if packet.field == value then return DROP end return PASS end Thus, I started to design a Lua binding to mbuf inspired by '#pragma pack' and the bitfields of the C language. Then, I realized that this Lua library could be useful in other kernel (and user-space) areas, such as device drivers and user-level protocols. So, I started to develop this binding generically, as an independent library that gives random access to the bits in a buffer. It is still at a very early stage, but I want to share some thoughts. I wonder if you have looked at Lua support in Wireshark [1]? Unfortunately, it's GPL and they even have a special section 'Beware the GPL' on the wiki. [1] http://wiki.wireshark.org/Lua Alex Yes. In fact, I have already implemented a Wireshark dissector in Lua for a proprietary protocol that I was designing, inspired by ERP, to detect network loops. WS Lua dissectors also served as inspiration. However, I just used the API; I never looked at the binding implementation. Wireshark Lua dissectors are a good example of what can be done with Lua in that sense. But I'm looking for a more generic API that allows random bit access in a buffer using Lua table notation, and that could also be used to communicate with devices, for example. I think (IMHO) that lbuf masks are more straightforward. Regards, -- Lourival Vieira Neto
Lua in-kernel (lbuf library)
Hi folks, It has been a long time since my GSoC project and though I have tried to come back, I've experienced some personal issues. However, now I'm coding again. I'm developing a library to handle buffers in Lua, named lbuf. It is being developed as part of my efforts to perform experimentation in the kernel network stack using Lua. Initially, I intended to bind mbuf to allow, for example, writing protocol dissectors in Lua. For example, calling a Lua function to inspect network packets: function filter(packet) if packet.field == value then return DROP end return PASS end Thus, I started to design a Lua binding to mbuf inspired by '#pragma pack' and the bitfields of the C language. Then, I realized that this Lua library could be useful in other kernel (and user-space) areas, such as device drivers and user-level protocols. So, I started to develop this binding generically, as an independent library that gives random access to the bits in a buffer. It is still at a very early stage, but I want to share some thoughts. Here is a draft of the lbuf API:
C API:
lbuf_new(lua_State *L, void *buffer, size_t length, lua_Alloc free, bool net);
* creates a new lbuf userdatum and pushes it on the Lua stack. The net flag indicates if it is necessary to perform endianness conversion.
Lua API:
- array access (1): lbuf:mask(alignment [, offset, length]) buf[ix] ~ accesses 'alignment' bits from the 'alignment*(ix-1)+offset' position e.g.: buf:mask(3) buf[3] ~ accesses 3 bits from the bit-6 position
- array access (2): buf:mask{ length_pos1, length_pos2, ... } buf[ix] ~ accesses 'length_pos(ix)' bits from the 'length_pos1 + ... + length_pos(ix-1)' position e.g.: buf:mask{ 2, 2, 32, 9 } buf[2] ~ accesses 2 bits from the bit-2 position
- fields access: buf:mask{ field = { offset, length }, ... } buf.field ~ 'field.length' bits from the 'offset' position e.g.: buf:mask{ type = { 0, 2 }, -- 1 bit padding flag = { 4, 1 }, xyz = { 15, 17 }, seg = { flagX = { 32, 1 }, flagY = { 33, 1 }, flagZ = { 34, 1 }, } } buf.flag ~ 1 bit from the bit-4 position buf.xyz ~ 17 bits from the bit-15 position buf.seg.flagY ~ 1 bit from the bit-34 position
- raw access: buf:rawget(3, 30) ~ gets 30 bits from the bit-3 position buf:rawset(3, 30, value) ~ sets 'value' into 30 bits from the bit-3 position
- segment: buf:segment(offset [, length]) returns a new lbuf corresponding to a segment of 'buf'.
- mask reuse: lbuf.mask{ ... } creates a mask without associating it with a specific buffer. Thus, you can call buf:mask() passing an already created mask. For example: ethernet_mask = lbuf.mask{ type = { ethertype_offset, ethertype_len }} lldp_mask = lbuf.mask{ version = { version_offset, version_len }} function filter(packet) packet:mask(ethernet_mask) if packet.type == 0x88CC then lldp_pdu = packet:segment(payload_offset):mask(lldp_mask) if lldp_pdu.version ~= 1 then return DROP end end return PASS end
The code is hosted at https://github.com/lneto/lbuf. Currently, only array and raw access are working (partially). I think this API could be useful for device-driver and protocol prototyping. Looking forward to hearing from you. Regards, -- Lourival Vieira Neto
Re: [ANN] Lunatik -- NetBSD kernel scripting with Lua (GSoC project
On Tue, Oct 19, 2010 at 8:23 AM, Antti Kantee po...@cs.hut.fi wrote: On Tue Oct 12 2010 at 02:17:35 -0300, Lourival Vieira Neto wrote: On Tue, Oct 12, 2010 at 1:50 AM, David Holland dholland-t...@netbsd.org wrote: On Tue, Oct 12, 2010 at 12:53:10AM -0300, Lourival Vieira Neto wrote: A signature only tells you whose neck to wring when the script misbehaves. :-) Since a Lua script running in the kernel won't be able to forge a pointer (right?), or conjure references to methods or data that weren't in its environment at the outset, you can run it in a highly restricted environment so that many kinds of misbehavior are difficult or impossible. Or I would *think* you can restrict the environment in that way; I wonder what Lourival thinks about that. I wouldn't say it better =). That's exactly how I'm thinking about addressing this issue: restricting access for each Lua environment. For example, a script running in packet filtering should have access to a different set of kernel functions than a script running in process scheduling. ...so what do you do if the script calls a bunch of kernel functions and then crashes? If a script crashes, it raises an exception that can be caught by the kernel (as an error code). Right... so how do you restore the kernel to a valid state? Why wouldn't it be in a valid state after a script crash? I didn't get that. Can you give an example? I *guess* what David means is that to make decisions you need a certain level of atomicity. For example, just drawing something out of a hat, if you want to decide which thread to schedule next, you need to make sure the selected thread object exists from fetching the candidate list through the actual scheduling. For this you use a lock or a reference counter or whatever. So if your lua script crashes between fetching the candidates and doing the actual scheduling, you need some way of releasing the lock or decrementing the refcounter. 
While you can of course push an error branch stack into lua or write the interfaces to follow a strict model where you commit state changes only at the last possible moment, it is additional work and probably quite error-prone. Although, on the non-academic side of things, if your thread scheduler crashes, you're kinda screwed anyway. Hi Antti, Sorry for the delay. I agree: we need a certain level of atomicity. I think that level should be provided by the libraries that expose kernel internals to Lua (binding libraries), by the kernel code that calls Lua, and by the Lunatik state's mutex. The functions of the binding libraries should not finish their execution with locks (or other resources) held. If it is really necessary, the binding libraries could provide functions to validate the state after the Lua execution. However, I don't think it is a good idea to allow scripts to call functions that take a lock without releasing it. Moreover, we can use the Lunatik state's mutex to perform the synchronization (between the kernel and the script code). In your scheduling example, we can use a refcounter (as you said) stored in the Lua state and protected by the Lunatik state's mutex. Thus, if our Lua script crashes between fetching and scheduling, the caller can trace that and handle it appropriately (e.g., restoring the refcounter, deleting that script function, and calling a predefined function to perform the thread scheduling). Although, I think a better approach to that problem would be to provide a scheduling function that checks whether the selected thread exists and fails if not (returning an error code to the script). In short, I think the functions provided to the scripts should be self-contained and all the locks should be managed by the kernel code. If functions of the binding libraries need to share and synchronize their execution state (e.g., a refcounter), they need to do so by storing the desired state in Lua. -- Lourival Vieira Neto
Re: [ANN] Lunatik -- NetBSD kernel scripting with Lua (GSoC project results)
On Tue, Oct 19, 2010 at 8:10 AM, Antti Kantee po...@cs.hut.fi wrote: On Tue Oct 05 2010 at 18:24:48 -0300, Lourival Vieira Neto wrote: Hi folks, I'm glad to announce the results of my GSoC project this year [1]. We've created support for scripting the NetBSD kernel with Lua, which we call Lunatik. It is composed of a port of the Lua interpreter to the kernel, a kernel programming interface for extending subsystems, and a user-space interface for loading user scripts into the kernel. You can see more details in [2]. I am currently working on improving its implementation, on the documentation, and on the integration between Lunatik and other subsystems, such as npf(9), to provide a real usage scenario. Cool. I'm looking forward to seeing your evaluation of real usage scenarios. If you can find some existing policy code written in C and convert it to lua, it would make a strong case. The main metric I'm interested in is convenience, and performance to some degree, depending on what kind of places you plan to put lua scripts in. At least in the packet filter use case the performance is quite critical. I'm not too worried about performance issues before running it and taking some measurements. Anyway, I'm quite open to suggestions =). Do you have any particular existing policy in mind? I don't know how well the fibonacci example performs (and the performance is not very critical there), but I'm sure you'll agree that from the convenience pov it is a very strong case _against_ lua ;) (yes, I realize it's not provided for demonstrating convenience) Fibonacci is just a toy example and it was the first Lua code that I ran inside the kernel. It was supposed to be as useful as a 'hello world', nothing else. However, I do not agree with you; I think it shows a strength of Lua: simplicity ;-) Anyway, you can replace the fibo code with any more useful function. -- Lourival Vieira Neto
Re: [ANN] Lunatik -- NetBSD kernel scripting with Lua (GSoC project
On Mon, Nov 8, 2010 at 8:46 PM, Antti Kantee po...@cs.hut.fi wrote: On Mon Nov 08 2010 at 20:32:14 -0200, Lourival Vieira Neto wrote: Right... so how do you restore the kernel to a valid state? Why wouldn't it be a valid state after a script crash? I didn't get that. Can you exemplify it? I *guess* what David means is that to perform decisions you need a certain level of atomicity. For example, just drawing something out of a hat, if you want to decide which thread to schedule next, you need to make sure the selected thread object exists over fetching the candidate list and the actual scheduling. For this you use a lock or a reference counter or whatever. So if your lua script crashes between fetching the candidates and doing the actual scheduling, you need some way of releasing the lock or decrementing the refcounter. While you can of course push an error branch stack into lua or write the interfaces to follow a strict model where you commit state changes only at the last possible moment, it is additional work and probably quite error-prone. Although, on the non-academic side of things, if your thread scheduler crashes, you're kinda screwed anyway. Hi Antti, Sorry for the delay. I agree: we need a certain level of atomicity. I think that level should be provided by the libraries that expose kernel internals to Lua (binding libraries), the kernel code that calls Lua and the Lunatik state's mutex. The functions of the binding libraries should not finish their execution with locks (or other resources) held. If it is really necessary, the binding libraries could provide functions to validate the state, after the Lua execution. However, I don't think that is a good idea to allow scripts to call functions that uses a lock without releasing it. Moreover, we can use the Lunatik state's mutex to perform the synchronization (between the kernel and the script code). 
In your scheduling example, we can use a refcounter (as you said) stored in the Lua state and protected by the Lunatik state's mutex. Thus, if our Lua script crashes between fetching and scheduling, the caller can detect that and handle it appropriately (e.g., restoring the refcounter, deleting that script function, and calling a predefined function to perform the thread scheduling). Although, I think a better approach for that problem would be to provide a scheduling function that checks if the selected thread exists and fails if not (returning an error code to the script). How would it check if a thread has exited? By listing the existing threads. You either need to keep some log of object lifecycle Why? (and when do you free that information, I don't free that information, the kernel does. I only consult it. i.e. how is it fundamentally different from anything else listed above?), The fundamental difference is who releases the locks (or other resources). When we provide functions (for the scripts) that access the kernel internals and then release the locks, the scripts can crash without compromising the kernel operation. When we provide functions which expect the users to release the locks, we are asking for trouble. My point is to give safer and higher-level interfaces to the scripts, instead of using just the same kernel internal interfaces, to mitigate the concerns about script crashes. give every object some UUID to make sure the identifiers were not recycled so that you're sure you get the same object when you relookup it, I think I missed that; threads already have a unique ID in-kernel. or register some sort of callback from thread exit to the lua code. We could do that too, having our own data structure to handle threads and synchronizing it with the kernel internal implementation. But I think it is a better idea to only bind the kernel data.
For a scheduler an oops every now and then from scheduling the wrong thread might not be a big deal, but if you for example mess up credentials it's a bigger oops. In short, I think the functions provided for the scripts should be self-contained and all the locks should be managed by the kernel code. If functions of the binding libraries need to share and synchronize their execution state (e.g., a refcounter), they need to do so by storing the desired state in Lua. Having some working code would be more convincing ;) I agree. As soon as possible I will give a concrete example =). (But I wouldn't expect it too soon, because I'm quite busy with my MSc thesis =() -- Lourival Vieira Neto
Re: [ANN] Lunatik -- NetBSD kernel scripting with Lua (GSoC project results)
On Tue, Oct 12, 2010 at 3:47 AM, Alan Barrett a...@cequrux.com wrote: [cross-posting removed] On Tue, 05 Oct 2010, Lourival Vieira Neto wrote: We've created the support for scripting the NetBSD kernel with Lua, Instead of using long long as the C data type for Lua variables, I suggest using int64_t (which is the same size on all existing and future platforms), or intmax_t (which is the largest available type on any particular platform). long long has neither of these attributes. --apb (Alan Barrett) Hi Alan, You're right! I'll change it. -- Lourival Vieira Neto
Re: [ANN] Lunatik -- NetBSD kernel scripting with Lua (GSoC project results)
On Mon, Oct 11, 2010 at 11:50 PM, Matthew Mondor mm_li...@pulsar-zone.net wrote: On Sun, 10 Oct 2010 19:45:41 -0600 Samuel Greear l...@evilcode.net wrote: I didn't like the fact that the only option for loading a script into the kernel was to load the script source. I would make loading pre-compiled scripts the preferential method. In fact, I would probably tear eval out of the kernel lua implementation and only support loading of precompiled byte-code into the kernel. If the tokenizer is considered heavy, or a potential source of exploits, or if scripts are expected to be loaded frequently and a performance bottleneck exists, I also think that loading pre-tokenized bytecode would be a good idea. No, it is not heavy (see [1]). However, there are several things to consider: some systems (e.g., Java) do important sanity checks at tokenization time. Is this important for Lua? Yes, it does important verification in the lexer/parser. Secondly, is the Lua bytecode using a stable, well-defined instruction set which is unlikely to change? Otherwise, as it improves and gets updated, any pre-tokenized scripts might need to be regenerated. Of course, that's probably not an issue if everything is part of the base system and always gets rebuilt together. Yes, Lua bytecode is stable and uses a well-defined instruction set, but Lua doesn't perform bytecode verification (see [2]). Thanks, -- Matt [1] http://marc.info/?l=lua-l&m=128676702329567&w=2 [2] http://marc.info/?l=lua-l&m=128676669829325&w=2 Cheers, -- Lourival Vieira Neto
Re: [ANN] Lunatik -- NetBSD kernel scripting with Lua (GSoC project results)
On Tue, Oct 12, 2010 at 12:32 AM, David Holland dholland-t...@netbsd.org wrote: On Sat, Oct 09, 2010 at 07:34:43PM -0300, Lourival Vieira Neto wrote: A signature only tells you whose neck to wring when the script misbehaves. :-) Since a Lua script running in the kernel won't be able to forge a pointer (right?), or conjure references to methods or data that weren't in its environment at the outset, you can run it in a highly restricted environment so that many kinds of misbehavior are difficult or impossible. Or I would *think* you can restrict the environment in that way; I wonder what Lourival thinks about that. I wouldn't say better =). That's exactly how I'm thinking about addressing this issue: restricting access to each Lua environment. For example, a script running in packet filtering should have access to a different set of kernel functions than a script running in process scheduling. ...so what do you do if the script calls a bunch of kernel functions and then crashes? If a script crashes, it raises an exception that can be caught by the kernel (as an error code). -- Lourival Vieira Neto
Re: [ANN] Lunatik -- NetBSD kernel scripting with Lua (GSoC project results)
On Tue, Oct 12, 2010 at 1:50 AM, David Holland dholland-t...@netbsd.org wrote: On Tue, Oct 12, 2010 at 12:53:10AM -0300, Lourival Vieira Neto wrote: A signature only tells you whose neck to wring when the script misbehaves. :-) Since a Lua script running in the kernel won't be able to forge a pointer (right?), or conjure references to methods or data that weren't in its environment at the outset, you can run it in a highly restricted environment so that many kinds of misbehavior are difficult or impossible. Or I would *think* you can restrict the environment in that way; I wonder what Lourival thinks about that. I wouldn't say better =). That's exactly how I'm thinking about addressing this issue: restricting access to each Lua environment. For example, a script running in packet filtering should have access to a different set of kernel functions than a script running in process scheduling. ...so what do you do if the script calls a bunch of kernel functions and then crashes? If a script crashes, it raises an exception that can be caught by the kernel (as an error code). Right... so how do you restore the kernel to a valid state? Why wouldn't it be a valid state after a script crash? I didn't get that. Can you exemplify it? -- Lourival Vieira Neto
Re: [ANN] Lunatik -- NetBSD kernel scripting with Lua (GSoC project results)
On Sun, Oct 10, 2010 at 10:45 PM, Samuel Greear l...@evilcode.net wrote: (...) My brief notes (from memory): I didn't see any bindings; maybe there were some, but if I missed them there can't be very many. Lua in the kernel is fairly useless unless you can call into the public kernel API. Our proposal this summer was to deliver a port of Lua and mechanisms for loading scripts into the kernel, not to provide bindings to the entire kernel. However, you are totally welcome to join this effort and write some bindings ;-). I didn't like the fact that the only option for loading a script into the kernel was to load the script source. I would make loading pre-compiled scripts the preferential method. In fact, I would probably tear eval out of the kernel lua implementation and only support loading of precompiled byte-code into the kernel. I'm planning to provide a kind of luac [1] for Lunatik, but it is not my main focus right now; I'm prioritizing the creation of a use case. Anyway, I would like to know what your motivation is for making pre-compiled scripts the preferential method and what worries you about loading source. I hope you continue working on this, I see potential and at some point I would like to evaluate this for inclusion in DragonFly BSD. It would be great =). [1] http://www.lua.org/manual/5.1/luac.html Cheers, -- Lourival Vieira Neto
Re: [ANN] Lunatik -- NetBSD kernel scripting with Lua (GSoC project results)
On Sun, Oct 10, 2010 at 10:45 PM, Samuel Greear l...@evilcode.net wrote: On Tue, Oct 5, 2010 at 3:24 PM, Lourival Vieira Neto lourival.n...@gmail.com wrote: Hi folks, I'm glad to announce the results of my GSoC project this year [1]. We've added support for scripting the NetBSD kernel with Lua, which we call Lunatik; it is composed of a port of the Lua interpreter to the kernel, a kernel programming interface for extending subsystems, and a user-space interface for loading user scripts into the kernel. You can see more details at [2]. I am currently working on improving its implementation, on the documentation, and on the integration between Lunatik and other subsystems, such as npf(9), to provide a real usage scenario. I'd like to take this space also to publicly thank Marc Balmer, for his kind support; prof. Roberto Ierusalimschy, for his comprehension and support; and NetBSD developers for their prompt help. [1] http://socghop.appspot.com/gsoc/student_project/show/google/gsoc2010/netbsd/t127230760748 [2] http://netbsd-soc.sourceforge.net/projects/luakern/ Cheers, -- Lourival Vieira Neto I eagerly awaited the results of this project all summer; I perused the code as soon as you put it up on the Google Code hosting site. My brief notes (from memory): I didn't see any bindings; maybe there were some, but if I missed them there can't be very many. Lua in the kernel is fairly useless unless you can call into the public kernel API. I didn't like the fact that the only option for loading a script into the kernel was to load the script source. I would make loading pre-compiled scripts the preferential method. In fact, I would probably tear eval out of the kernel lua implementation and only support loading of precompiled byte-code into the kernel. I hope you continue working on this, I see potential and at some point I would like to evaluate this for inclusion in DragonFly BSD.
Best, Sam Folks, The message above was sent to both the tech-kern and Lua mailing lists and we had valuable considerations about loading pre-compiled scripts there. If you want to keep track, here is the archive [http://marc.info/?t=12863140021r=1w=2]. Cheers, -- Lourival Vieira Neto
[ANN] Lunatik -- NetBSD kernel scripting with Lua (GSoC project results)
Hi folks, I'm glad to announce the results of my GSoC project this year [1]. We've added support for scripting the NetBSD kernel with Lua, which we call Lunatik; it is composed of a port of the Lua interpreter to the kernel, a kernel programming interface for extending subsystems, and a user-space interface for loading user scripts into the kernel. You can see more details at [2]. I am currently working on improving its implementation, on the documentation, and on the integration between Lunatik and other subsystems, such as npf(9), to provide a real usage scenario. I'd like to take this space also to publicly thank Marc Balmer, for his kind support; prof. Roberto Ierusalimschy, for his comprehension and support; and NetBSD developers for their prompt help. [1] http://socghop.appspot.com/gsoc/student_project/show/google/gsoc2010/netbsd/t127230760748 [2] http://netbsd-soc.sourceforge.net/projects/luakern/ Cheers, -- Lourival Vieira Neto