Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-06-19 Thread Marco Schulze

On 16-03-2012 15:13, Matthew Toseland wrote:
Updating its own binaries is incompatible with the standard unix way 
of doing things, isn't it? Even if it's not technically a violation of 
FHS?


I'd just like to point out that this is not the case at all, especially 
because flexibility is a major characteristic of this Unix Way of Doing 
Stuff. Where it might be problematic, if at all, is at the package 
management level:


- The ugly custom installer would have to be replaced by a 
distribution-specific package;
- Some distributions have special rules regarding Java packages. You'd 
have to check those;


You _can_ conform to the FHS without any change by being installed under 
/opt. This will make fred accessible system-wide, so you might want to 
check if it's ok to let multiple users delve inside the Freenet 
directory tree. However, AFAIR, install scripts can do almost anything, 
including creating a fred-specific user and group, and allowing 
freenet{,-ext}.jar to be updated by that user without root privileges.


The new directory layout might look like this:

/etc: freenet.ini.
/usr/bin: shell scripts to launch and update freenet.
/usr/lib: native libraries.
/usr/lib/fred: jars. This might be dependent on the distribution.
/srv/fred: default download location (if system-wide daemon, ~/Downloads 
otherwise).

/var/cache/fred: datastore and other miscellaneous persistent files.
/var/cache/fred/plugins: plugins directory trees.
/var/log: logs.
/var/log/old/fred: compressed old logs (do you really need those?).

Plus, distribution specific scripts to control the daemon (run.sh-ish).
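To make the proposal concrete, the layout above could be captured in a small path resolver. This is a hypothetical sketch; the class and method names are illustrative, not actual fred code:

```java
import java.io.File;

// Hypothetical resolver for the proposed FHS layout above; not real fred code.
class FredPaths {
    private final boolean systemWide;
    private final File userHome;

    FredPaths(boolean systemWide, File userHome) {
        this.systemWide = systemWide;
        this.userHome = userHome;
    }

    File configFile()   { return new File("/etc/freenet.ini"); }
    File jarDir()       { return new File("/usr/lib/fred"); }
    File datastoreDir() { return new File("/var/cache/fred"); }
    File pluginDir()    { return new File("/var/cache/fred/plugins"); }

    // /srv/fred for a system-wide daemon, ~/Downloads otherwise.
    File downloadDir() {
        return systemWide ? new File("/srv/fred") : new File(userHome, "Downloads");
    }
}
```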
___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl


Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-22 Thread Matthew Toseland
On Wednesday 21 Mar 2012 00:30:42 Ximin Luo wrote:
 Implicit dependency means that changing some code in one area results in a
 logical error in the other, that is not immediately apparent in the
 documentation nor any compile-time checking built into the language. Obviously
 this is not necessarily a bad thing, but there is a lot of it in the config
 system, and elsewhere, including in how the configured values are used.

Okay...
 
 For example, for the longest time there was no centralised management of
 run-time files used by Freenet, and even now they are just a bunch of extra
 fields in Node.java because I had no existing structure to fit such a system
 into. However, Node.java is such a fucking disorganised mess, with several
 hundred fields, that it's not immediately obvious to everyone that they should
 be using new File(Node.nodeDir(), fileName) rather than new 
 File(fileName).
 I cleaned it up when I first wrote that part, but it looks like misuses have
 slowly crept in again.

So your complaint is that there is no single central point to put all the 
folder names in? Feel free to create one.

Making stuff static is tempting but means we can't do multi-nodes-in-one-VM 
tests, although I believe classloader hacks can solve that? Is there much of a 
performance cost? Otherwise we can just use a global object and grab it from 
Node, or ClientContext.
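Such a central point might be a minimal per-node holder, passed explicitly rather than made static, so multi-nodes-in-one-VM tests keep working. A hypothetical sketch, not existing fred code:

```java
import java.io.File;

// Hypothetical per-node directory registry: one instance per Node, no statics,
// so several nodes can coexist in one JVM for tests.
final class NodeDirs {
    private final File nodeDir;

    NodeDirs(File nodeDir) { this.nodeDir = nodeDir; }

    // The intended replacement for the error-prone `new File(fileName)`.
    File file(String name) { return new File(nodeDir, name); }

    File pluginDir() { return new File(nodeDir, "plugins"); }
}
```

Callers would grab the instance from Node or ClientContext, per the suggestion above.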
 
 Yes, configs have EVERYTHING to do with writing the values out. All the
 read/write logic is wrapped up into a ConfigCallback (which throws an
 InvalidConfigException) which is ONLY PASSED TO THE CONFIG SUBSYSTEM. Even
 ignoring the obvious intention of whoever named these classes, the target of
 the read/write logic is completely detached from the logic itself.

ConfigCallback doesn't do any writing out. And the getters don't throw. It can 
only throw when a value is changed. So it's still not clear what the problem is.
 
 Your explanation about the ObjectContainer also assumes a severely stunted
 events framework can't support fine grained thread control. Why can't there, 
 in
 your imagination, exist SOMEWHERE, a framework that lets you mandate only run
 jobs of type J in thread T?

And those jobs are called with parameters (ObjectContainer, ClientContext) ? 
Since we'll have to subclass it anyway, it's going to be more messy than the 
current solution, isn't it? Anyway DBJob/DBJobRunner is pretty simple (although 
it's hidden in NodeClientCore along with ClientContext initialisation which is 
kind of messy).

I agree we should make *Executor use the standard APIs though - while keeping 
the ability to change thread priorities.
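A sketch of that combination, using only java.util.concurrent: a standard ExecutorService whose ThreadFactory still sets thread priority (illustrative, not the actual *Executor code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

// Standard-API pool that keeps the ability to set thread priorities.
class PriorityExecutors {
    static ExecutorService newFixedPool(int threads, final int priority) {
        ThreadFactory factory = new ThreadFactory() {
            public Thread newThread(Runnable r) {
                Thread t = new Thread(r);
                t.setPriority(priority); // e.g. Thread.MIN_PRIORITY for background work
                t.setDaemon(true);
                return t;
            }
        };
        return Executors.newFixedThreadPool(threads, factory);
    }
}
```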
 
 An events framework could handle errant plugins by setting a timeout for all
 blocking jobs. If they don't complete within X time, interrupt the thread
 and/or kill it if it doesn't respond to the interrupt. If the plugin can't
 handle it, tough shit their fault. (This is how certain companies run reliable
 services on a massive scale.)

Can't be done. Period.

It's Java. Java does not support killing threads. According to the 
documentation the reason for this is that killing a thread might cause 
synchronization problems. Which is true, but I would have thought it was 
solvable ... Anyway if you need to be able to kill stuff it has to run in *a 
separate JVM*.
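The interrupt-on-timeout half of Ximin's suggestion is doable with the standard API; forcible kill is not, as stated above. A sketch using Future.cancel(true), which only *delivers* an interrupt: a job that ignores it keeps running, hence the separate-JVM point:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Run a job with a deadline; on timeout, interrupt it. Cooperative jobs stop;
// jobs that swallow the interrupt cannot be killed from inside the JVM.
class JobWithTimeout {
    static boolean run(ExecutorService pool, Runnable job, long millis) {
        Future<?> f = pool.submit(job);
        try {
            f.get(millis, TimeUnit.MILLISECONDS);
            return true;                 // finished in time
        } catch (TimeoutException e) {
            f.cancel(true);              // delivers Thread.interrupt() to the worker
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;
        }
    }
}
```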
 
 As for the updater, simply seeing where the currently-running JARs are, is not
 enough, if an update introduces extra JARs as a dependency. It's necessary to
 have an explicit list of all the dependencies. Unfortunately it's not enough 
 to
 simply use the Class-Path attribute of the manifest of freenet.jar, because
 that hard-codes the path to the other JARs, whereas we need it to be variable
 for packaging purposes. So, this information can only be figured out at
 *package time* (for arbitrary package layouts), and the updater needs a
 mechanism to read this information, and probably update that as well.

You are planning to have a packaged version which is able to 
auto-update-over-Freenet?

Currently the Class-Path hardcodes freenet-ext.jar, for convenience more than 
much else. 99% of installs use the wrapper.
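For reference, an updater could read that Class-Path attribute at run time via java.util.jar.Manifest; a minimal sketch (the manifest text in the usage example is illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.jar.Attributes;
import java.util.jar.Manifest;

// Read the space-separated Class-Path attribute from a jar manifest. Note the
// entries are paths relative to the jar's location, which is exactly the
// hard-coding problem discussed above.
class ManifestClassPath {
    static String[] classPathOf(InputStream manifest) throws IOException {
        Manifest mf = new Manifest(manifest);
        String cp = mf.getMainAttributes().getValue(Attributes.Name.CLASS_PATH);
        return cp == null ? new String[0] : cp.trim().split("\\s+");
    }
}
```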
 
 X
 
 On 20/03/12 23:18, Matthew Toseland wrote:
  On Monday 19 Mar 2012 01:52:19 Ximin Luo wrote:
  On 16/03/12 18:13, Matthew Toseland wrote:
  On Thursday 15 Mar 2012 21:02:26 Ximin Luo wrote:
  (Top-posting because previous post is too long to reply to in-line.)
 
  ## refactoring
 
  Refactoring the code helps to make it more understandable for other 
  coders. One
  of the reasons why I don't work more on freenet myself is because it's
  extremely difficult to make any changes that are more than a simple bug 
  fix.
 
  Hmmm, good point. But from FPI point of view, unless it's a very quick 
  and high impact refactoring,  it's probably too long term.
 
  When I was writing code for the config subsystem to separate out 
  different
  files into different directories, this was very frustrating and took up 
  much
  more time than I expected.
 
  

Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-22 Thread Ximin Luo
On 22/03/12 12:00, Matthew Toseland wrote:
 On Wednesday 21 Mar 2012 00:30:42 Ximin Luo wrote:
 Implicit dependency means that changing some code in one area results in a
 logical error in the other, that is not immediately apparent in the
 documentation nor any compile-time checking built into the language. 
 Obviously
 this is not necessarily a bad thing, but there is a lot of it in the config
 system, and elsewhere, including in how the configured values are used.
 
 Okay...

 For example, for the longest time there was no centralised management of
 run-time files used by Freenet, and even now they are just a bunch of extra
 fields in Node.java because I had no existing structure to fit such a system
 into. However, Node.java is such a fucking disorganised mess, with several
 hundred fields, that it's not immediately obvious to everyone that they 
 should
 be using new File(Node.nodeDir(), fileName) rather than new 
 File(fileName).
 I cleaned it up when I first wrote that part, but it looks like misuses have
 slowly crept in again.
 
 So your complaint is that there is no single central point to put all the 
 folder names in? Feel free to create one.
 
 Making stuff static is tempting but means we can't do multi-nodes-in-one-VM 
 tests, although I believe classloader hacks can solve that? Is there much of 
 a performance cost? Otherwise we can just use a global object and grab it 
 from Node, or ClientContext.

 Yes, configs have EVERYTHING to do with writing the values out. All the
 read/write logic is wrapped up into a ConfigCallback (which throws an
 InvalidConfigException) which is ONLY PASSED TO THE CONFIG SUBSYSTEM. Even
 ignoring the obvious intention of whoever named these classes, the target of
 the read/write logic is completely detached from the logic itself.
 
 ConfigCallback doesn't do any writing out. And the getters don't throw. It 
 can only throw when a value is changed. So it's still not clear what the 
 problem is.

Irrelevant. To repeat:

- the target of the read/write logic[1] is completely detached from the logic
itself[2], and too strongly-coupled with the config system[3]
- too much implicit dependency, due to the way the variables are poorly managed

[1] whatever get/set acts on
[2] ConfigCallback
[3] via .register(), so only the config system sees it

Who said anything about making stuff static? Stop making up random straw men.

Feel free to create one - stop making excuses for not cleaning up your shit
and telling someone else to do it when they complain! Node.java is a complete
fucking mess. To make any sort of progress in having a well-structured config
system, that needs to be cleaned out first.


 Your explanation about the ObjectContainer also assumes a severely stunted
 events framework can't support fine grained thread control. Why can't there, 
 in
 your imagination, exist SOMEWHERE, a framework that lets you mandate only 
 run
 jobs of type J in thread T?
 
 And those jobs are called with parameters (ObjectContainer, ClientContext) ? 
 Since we'll have to subclass it anyway, it's going to be more messy than the 
 current solution, isn't it? Anyway DBJob/DBJobRunner is pretty simple 
 (although it's hidden in NodeClientCore along with ClientContext 
 initialisation which is kind of messy).
 

This is bullshit: there is already a DBJobWrapper that implements Runnable and
needs nothing extra passed in. In the general case, there are *always* ways to
get around fiddly crap like this, and they are much simpler than WRITING YOUR
OWN EVENTS FRAMEWORK (and logging framework and config framework etc etc etc).

 I agree we should make *Executor use the standard APIs though - while keeping 
 the ability to change thread priorities.

 An events framework could handle errant plugins by setting a timeout for all
 blocking jobs. If they don't complete within X time, interrupt the thread
 and/or kill it if it doesn't respond to the interrupt. If the plugin can't
 handle it, tough shit their fault. (This is how certain companies run 
 reliable
 services on a massive scale.)
 
 Can't be done. Period.
 
 It's Java. Java does not support killing threads. According to the 
 documentation the reason for this is that killing a thread might cause 
 synchronization problems. Which is true, but I would have thought it was 
 solvable ... Anyway if you need to be able to kill stuff it has to run in *a 
 separate JVM*.

Then interrupt is fine and covers most cases.


 As for the updater, simply seeing where the currently-running JARs are, is 
 not
 enough, if an update introduces extra JARs as a dependency. It's necessary to
 have an explicit list of all the dependencies. Unfortunately it's not enough 
 to
 simply use the Class-Path attribute of the manifest of freenet.jar, because
 that hard-codes the path to the other JARs, whereas we need it to be variable
 for packaging purposes. So, this information can only be figured out at
 *package time* (for arbitrary package layouts), and the updater 

Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-20 Thread Matthew Toseland
On Monday 19 Mar 2012 01:52:19 Ximin Luo wrote:
 On 16/03/12 18:13, Matthew Toseland wrote:
  On Thursday 15 Mar 2012 21:02:26 Ximin Luo wrote:
  (Top-posting because previous post is too long to reply to in-line.)
 
  ## refactoring
 
  Refactoring the code helps to make it more understandable for other 
  coders. One
  of the reasons why I don't work more on freenet myself is because it's
  extremely difficult to make any changes that are more than a simple bug 
  fix.
  
  Hmmm, good point. But from FPI point of view, unless it's a very quick and 
  high impact refactoring,  it's probably too long term.
 
  When I was writing code for the config subsystem to separate out different
  files into different directories, this was very frustrating and took up 
  much
  more time than I expected.
  
  I'm still curious as to why.
 
 The code is spread out over many files not related to config management; 

It is? How so?

The code for handling the config files simply takes a value and applies it to 
that subsystem. Short of converting it to get*/set*'ers, using reflection, I 
don't see how we could make it much simpler.

 this
 creates lots of implicit dependencies, which are not directly apparent in the
 syntax/structure of the code nor the semantics of the language. In other words,
 spaghetti code.

I still don't follow - implicit dependencies?
 
 Config management code should not be mixed into the thing they are 
 configuring,
 this is just Good Programming. 

I don't get it. Evidently I didn't take that course.

 Changing variables on-the-fly is a completely
 separate issue from config management and actually writing out those values,
 and it was a mistake to bundle the two together in the code.

The code that is spread out over many files has nothing to do with writing 
the values out. All it does is change the values. Sometimes it is necessary to 
store a copy of the configured value so that it can be returned exactly, most 
of the time it just gets and sets a variable.
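A simplified, hypothetical illustration of that get/set pattern (not the real freenet.config API): the callback only changes a value, and keeps a stored copy so the configured value can be returned exactly even if the subsystem clamps what it actually uses:

```java
// Hypothetical, simplified stand-in for the real config callback classes.
abstract class IntCallback {
    abstract int get();
    abstract void set(int val) throws Exception;
}

class BandwidthConfig {
    private int outputLimit;     // live value the subsystem actually uses
    private int configuredLimit; // stored copy, returned exactly as configured

    final IntCallback callback = new IntCallback() {
        int get() { return configuredLimit; }
        void set(int val) throws Exception {
            if (val < 0) throw new Exception("limit must be >= 0");
            configuredLimit = val;
            outputLimit = Math.max(val, 1024); // illustrative internal clamp
        }
    };

    int liveLimit() { return outputLimit; }
}
```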
 
 Using the guice framework would help this a lot as well.

I am not convinced.
 
 
  NB: I'm not implying you should be doing all the work toad, please don't 
  think
  this. And please don't think that my opinions are automatically invalid / 
  less
  worthy just because I haven't committed code for ages. I just want to 
  express
  my view of What I Would Do If I Were God.
  
  Okay. :) I don't mean to criticise un-constructively, just have a useful 
  conversation.
 
  People can do what they want but it doesn't mean it's a good idea. 
  Granted, I
  consider code quality independently of any issues such as funding, but
  focusing too much on the latter leads to bad software. If there's pressure 
  like
  this, ignore it, find something else in the meantime until it goes away.
  
  I plan to continue working for FPI for the time being. Partly because it's 
  part time and fits conveniently with studying. Partly because getting a 
  programming job in the UK requires a degree even if you have some rather 
  odd experience.
 
  ## freenet Events API
 
  freenet does not provide a great API for running callbacks and scheduled 
  tasks.
  
  I'm not sure what you mean here. For example, the client layer has unique 
  requirements. We simply can't just change it to use java.util.concurrent. 
  Database jobs must run on the database thread, and no other thread may have 
  access to the database, because the client layer is not designed to deal 
  with simultaneous transactions. This avoids performance problems (separate 
  caches for each client; simultaneous disk I/O), and complexity (refreshing 
  everything constantly, which requires a more query-oriented architecture; 
  we'd have to change pretty much everything, and it'd probably be slower). 
  But it's messy, in particular it requires passing around ObjectContainer's.
 
 Why can't these restrictions be handled by a 3rd-party events framework?

They can't. Period.

The long answer is you can't pass an ObjectContainer to any other thread, and 
any DBJob needs both the ObjectContainer and the ClientContext (which isn't 
stored in any persistent class and is effectively the means to get all the 
objects created for this run of the node that aren't persisted otherwise, plus 
some global stuff like the FEC queue).

The simplest way to enforce this rule is simply the ugly convention of always 
passing the ObjectContainer in, and NOT either making it a singleton global or 
storing it anywhere.

The expensive way to deal with this would involve even more unwritten rules: 
You could write a wrapper for ObjectContainer that throws if you try to access 
it from another thread, for example.
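That wrapper idea can be sketched generically with a dynamic proxy that confines an interface to the thread that created it (hypothetical; the Container interface here stands in for db4o's ObjectContainer):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Stand-in for ObjectContainer; any interface works with the guard below.
interface Container {
    Object query();
}

// Wrap an interface so every call throws if made from the wrong thread.
class ThreadGuard {
    @SuppressWarnings("unchecked")
    static <T> T confine(final T target, Class<T> iface) {
        final Thread owner = Thread.currentThread();
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
            new Class<?>[] { iface },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args)
                        throws Throwable {
                    if (Thread.currentThread() != owner)
                        throw new IllegalStateException(
                            "database access from wrong thread: "
                            + Thread.currentThread().getName());
                    return m.invoke(target, args);
                }
            });
    }
}
```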

The really expensive way to deal with this is to have parallel transactions. 
However, this way lies madness, because:
1. Typical end-user PCs' disks may be slower rather than faster with more 
parallel transactions.
2. More importantly, there is *one cache per transaction*.
3. Code-wise, we would have to 

Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-20 Thread Ximin Luo
Holy crap toad you need to take your head out of your ass. I'm losing patience
with your persistent denial that the freenet code simply SUCKS BALLS in quite a
lot of areas.

Implicit dependency means that changing some code in one area results in a
logical error in the other, that is not immediately apparent in the
documentation nor any compile-time checking built into the language. Obviously
this is not necessarily a bad thing, but there is a lot of it in the config
system, and elsewhere, including in how the configured values are used.

For example, for the longest time there was no centralised management of
run-time files used by Freenet, and even now they are just a bunch of extra
fields in Node.java because I had no existing structure to fit such a system
into. However, Node.java is such a fucking disorganised mess, with several
hundred fields, that it's not immediately obvious to everyone that they should
be using new File(Node.nodeDir(), fileName) rather than new File(fileName).
I cleaned it up when I first wrote that part, but it looks like misuses have
slowly crept in again.

Yes, configs have EVERYTHING to do with writing the values out. All the
read/write logic is wrapped up into a ConfigCallback (which throws an
InvalidConfigException) which is ONLY PASSED TO THE CONFIG SUBSYSTEM. Even
ignoring the obvious intention of whoever named these classes, the target of
the read/write logic is completely detached from the logic itself.

Your explanation about the ObjectContainer also assumes a severely stunted
events framework can't support fine grained thread control. Why can't there, in
your imagination, exist SOMEWHERE, a framework that lets you mandate only run
jobs of type J in thread T?

An events framework could handle errant plugins by setting a timeout for all
blocking jobs. If they don't complete within X time, interrupt the thread
and/or kill it if it doesn't respond to the interrupt. If the plugin can't
handle it, tough shit their fault. (This is how certain companies run reliable
services on a massive scale.)

As for the updater, simply seeing where the currently-running JARs are, is not
enough, if an update introduces extra JARs as a dependency. It's necessary to
have an explicit list of all the dependencies. Unfortunately it's not enough to
simply use the Class-Path attribute of the manifest of freenet.jar, because
that hard-codes the path to the other JARs, whereas we need it to be variable
for packaging purposes. So, this information can only be figured out at
*package time* (for arbitrary package layouts), and the updater needs a
mechanism to read this information, and probably update that as well.

X

On 20/03/12 23:18, Matthew Toseland wrote:
 On Monday 19 Mar 2012 01:52:19 Ximin Luo wrote:
 On 16/03/12 18:13, Matthew Toseland wrote:
 On Thursday 15 Mar 2012 21:02:26 Ximin Luo wrote:
 (Top-posting because previous post is too long to reply to in-line.)

 ## refactoring

 Refactoring the code helps to make it more understandable for other 
 coders. One
 of the reasons why I don't work more on freenet myself is because it's
 extremely difficult to make any changes that are more than a simple bug 
 fix.

 Hmmm, good point. But from FPI point of view, unless it's a very quick and 
 high impact refactoring,  it's probably too long term.

 When I was writing code for the config subsystem to separate out different
 files into different directories, this was very frustrating and took up 
 much
 more time than I expected.

 I'm still curious as to why.

 The code is spread out over many files not related to config management; 
 
 It is? How so?
 
 The code for handling the config files simply takes a value and applies it to 
 that subsystem. Short of converting it to get*/set*'ers, using reflection, I 
 don't see how we could make it much simpler.
 
 this
 creates lots of implicit dependencies, which are not directly apparent in the
 syntax/structure of the code nor the semantics of the language. In other 
 words,
 spaghetti code.
 
 I still don't follow - implicit dependencies?

 Config management code should not be mixed into the thing they are 
 configuring,
 this is just Good Programming. 
 
 I don't get it. Evidently I didn't take that course.
 
 Changing variables on-the-fly is a completely
 separate issue from config management and actually writing out those values,
 and it was a mistake to bundle the two together in the code.
 
 The code that is spread out over many files has nothing to do with writing 
 the values out. All it does is change the values. Sometimes it is necessary 
 to store a copy of the configured value so that it can be returned exactly, 
 most of the time it just gets and sets a variable.

 Using the guice framework would help this a lot as well.
 
 I am not convinced.


 NB: I'm not implying you should be doing all the work toad, please don't 
 think
 this. And please don't think that my opinions are automatically invalid / 
 less
 worthy just because I haven't 

Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-19 Thread Marco Schulze

On 18-03-2012 21:57, Matthew Toseland wrote:

On Friday 16 Mar 2012 20:38:46 Marco Schulze wrote:

On 16-03-2012 15:13, Matthew Toseland wrote:

Updating its own binaries is incompatible with the standard unix way
of doing things, isn't it? Even if it's not technically a violation of
FHS?

I'd just like to point out that this is not the case at all, especially
because flexibility is a major characteristic of this Unix Way of Doing
Stuff. Where it might be problematic, if at all, is at the package
management level:

- The ugly custom installer would have to be replaced by a
distribution-specific package;
- Some distributions have special rules regarding Java packages. You'd
have to check those;

You _can_ conform to the FHS without any change by being installed under
/opt. This will make fred accessible system-wide, so you might want to
check if it's ok to let multiple users delve inside the Freenet
directory tree. However, AFAIR, install scripts can do almost anything,
including creating a fred-specific user and group, and allowing
freenet{,-ext}.jar to be updated by that user without root privileges.

The new directory layout might look like this:

/etc: freenet.ini.
/usr/bin: shell scripts to launch and update freenet.

This is the wrong place for a global daemon. Is apache in /usr/bin? Freenet 
does not, and should not, run once per user with globally shared binaries. And 
if it did, **it wouldn't be able to update them**, for security reasons - you 
can't have multiple users running the same binary and all having write access 
to it!


/usr/lib: native libraries.
/usr/lib/fred: jars. This might be dependent on the distribution.

See above.

That's mainly a permission problem, but yes, /usr/bin is wrong.


/srv/fred: default download location (if system-wide daemon, ~/Downloads
otherwise).
/var/cache/fred: datastore and other miscellaneous persistent files.
/var/cache/fred/plugins: plugins directory trees.
/var/log: logs.
/var/log/old/fred: compressed old logs (do you really need those?).

Yes, in many cases. Although mainly it's because with higher log levels (or 
when unlucky), logs tend to get very big if they are not rotated and compressed 
regularly.

Plus, distribution specific scripts to control the daemon (run.sh-ish).

IMHO Freenet itself (the GUI installer jar) should install to the installing user's home 
directory. Freenet should provide whatever is reasonably needed for packages, but it's 
not really our (fred's) problem.



Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-19 Thread Marco Schulze


On 18-03-2012 22:11, Ximin Luo wrote:

On 19/03/12 01:09, Ximin Luo wrote:

On 16/03/12 23:09, Marco Schulze wrote:

Well, the obvious question is 'why?'. Using /opt + /usr/bin scripts + service
scripts seems to be good enough. Either way, fred .jar paths are configurable,
the jars themselves should have 6** permissions and be owned by the fred user.
Why shouldn't autoupdate work?


/opt is not FHS.
I guess using /opt is not strict FHS, but it's commonly used for 
packages that, for some reason, can't be spread out in the filesystem.



/usr/bin is not FHS in the way that you're proposing. It should be used for us


Forgot to finish this sentence. I meant to say, /usr/bin is used for user-run
programs only. Maintenance scripts go in /usr/lib/freenet or
/usr/share/freenet, and the daemon script should be /etc/init.d/freenet.

True.


debian-staging follows FHS to as much of an extent as possible, although there
was one minor issue possibly with its use of /var/run/freenet.

Why shouldn't autoupdate work - if you look through how the current updater
works, it's fairly self-evident why it won't work for the package built by
debian-staging.
The question was more general: why shouldn't autoupdate work using an 
FHS layout?





An unrelated question: it seems debian-staging not only builds slightly
different jars, but is also locked on specific build numbers. Why?


I haven't had time to update the submodule pointers to the latest commits and
verify that the package still works. Does this answer your question?


On 16-03-2012 17:51, Ximin Luo wrote:

We already have a layout that adheres to the FHS, see debian-staging for 
details :)

But this is only for run-time data, NOT the binaries themselves. That part is
rigid and would require much more work, because (to implement it properly)
would need to integrate well with all the various existing installers, as well
as the built-in updater.

I'll respond to the other points some other time, need to be off somewhere now.

(Theoretically it would be possible for fproxy to expose an APT repo under e.g.
localhost:/debian/ that actually gets its data from freenet, but this is
again extra work.)

X

On 16/03/12 20:38, Marco Schulze wrote:

On 16-03-2012 15:13, Matthew Toseland wrote:

Updating its own binaries is incompatible with the standard unix way of doing
things, isn't it? Even if it's not technically a violation of FHS?

I'd just like to point out that this is not the case at all, especially because
flexibility is a major characteristic of this Unix Way of Doing Stuff. Where it
might be problematic, if at all, is at the package management level:

- The ugly custom installer would have to be replaced by a
distribution-specific package;
- Some distributions have special rules regarding Java packages. You'd have to
check those;

You _can_ conform to the FHS without any change by being installed under /opt.
This will make fred accessible system-wide, so you might want to check if it's
ok to let multiple users delve inside the Freenet directory tree. However,
AFAIR, install scripts can do almost anything, including creating a
fred-specific user and group, and allowing freenet{,-ext}.jar to be updated by
that user without root privileges.

The new directory layout might look like this:

/etc: freenet.ini.
/usr/bin: shell scripts to launch and update freenet.
/usr/lib: native libraries.
/usr/lib/fred: jars. This might be dependent on the distribution.
/srv/fred: default download location (if system-wide daemon, ~/Downloads
otherwise).
/var/cache/fred: datastore and other miscellaneous persistent files.
/var/cache/fred/plugins: plugins directory trees.
/var/log: logs.
/var/log/old/fred: compressed old logs (do you really need those?).

Plus, distribution specific scripts to control the daemon (run.sh-ish).

Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-18 Thread Matthew Toseland
On Friday 16 Mar 2012 20:38:46 Marco Schulze wrote:
 On 16-03-2012 15:13, Matthew Toseland wrote:
  Updating its own binaries is incompatible with the standard unix way 
  of doing things, isn't it? Even if it's not technically a violation of 
  FHS?
 
 I'd just like to point out that this is not the case at all, especially 
 because flexibility is a major characteristic of this Unix Way of Doing 
 Stuff. Where it might be problematic, if at all, is at the package 
 management level:
 
 - The ugly custom installer would have to be replaced by a 
 distribution-specific package;
 - Some distributions have special rules regarding Java packages. You'd 
 have to check those;
 
 You _can_ conform to the FHS without any change by being installed under 
 /opt. This will make fred accessible system-wide, so you might want to 
 check if it's ok to let multiple users delve inside the Freenet 
 directory tree. However, AFAIR, install scripts can do almost anything, 
 including creating a fred-specific user and group, and allowing 
 freenet{,-ext}.jar to be updated by that user without root privileges.
 
 The new directory layout might look like this:
 
 /etc: freenet.ini.
 /usr/bin: shell scripts to launch and update freenet.

This is the wrong place for a global daemon. Is apache in /usr/bin? Freenet 
does not, and should not, run once per user with globally shared binaries. And 
if it did, **it wouldn't be able to update them**, for security reasons - you 
can't have multiple users running the same binary and all having write access 
to it!

 /usr/lib: native libraries.
 /usr/lib/fred: jars. This might be dependent on the distribution.

See above.

 /srv/fred: default download location (if system-wide daemon, ~/Downloads 
 otherwise).
 /var/cache/fred: datastore and other miscellaneous persistent files.
 /var/cache/fred/plugins: plugins directory trees.
 /var/log: logs.
 /var/log/old/fred: compressed old logs (do you really need those?).

Yes, in many cases. Although mainly it's because with higher log levels (or 
when unlucky), logs tend to get very big if they are not rotated and compressed 
regularly.
 
 Plus, distribution specific scripts to control the daemon (run.sh-ish).

IMHO Freenet itself (the GUI installer jar) should install to the installing 
user's home directory. Freenet should provide whatever is reasonably needed for 
packages, but it's not really our (fred's) problem.


___
Devl mailing list
Devl@freenetproject.org
https://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl

Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-18 Thread Ximin Luo
On 16/03/12 23:09, Marco Schulze wrote:
 Well, the obvious question is 'why?'. Using /opt + /usr/bin scripts + service
 scripts seems to be good enough. Either way, fred .jar paths are configurable,
 the jars themselves should have 6** permissions and be owned by the fred user.
 Why shouldn't autoupdate work?
 

/opt is not FHS.
/usr/bin is not FHS in the way that you're proposing. It should be used for us

debian-staging follows the FHS as closely as possible, although there was
possibly one minor issue with its use of /var/run/freenet.

Why shouldn't autoupdate work - if you look through how the current updater
works, it's fairly self-evident why it won't work for the package built by
debian-staging.

 An unrelated question: it seems debian-staging not only builds slightly
 different jars, but are locked on specific build numbers. Why?
 

I haven't had time to update the submodule pointers to the latest commits and
verify that the package still works. Does this answer your question?

 On 16-03-2012 17:51, Ximin Luo wrote:
 We already have a layout that adheres to the FHS, see debian-staging for 
 details :)

 But this is only for run-time data, NOT the binaries themselves. That part is
 rigid and would require much more work, because (to implement it properly)
 would need to integrate well with all the various existing installers, as 
 well
 as the built-in updater.

 I'll respond to the other points some other time, need to be off somewhere 
 now.

 (Theoretically it would be possible for fproxy to expose an APT repo under 
 e.g.
 localhost:/debian/ that actually gets its data from freenet, but this is
 again extra work.)

 X

 On 16/03/12 20:38, Marco Schulze wrote:
 On 16-03-2012 15:13, Matthew Toseland wrote:
 Updating its own binaries is incompatible with the standard unix way of 
 doing
 things, isn't it? Even if it's not technically a violation of FHS?
 I'd just like to point out that this is not the case at all, specially 
 because
 flexibility is a major characteristic in this Unix Way of Doing Stuff. 
 Where it
 might be problematic, if at all, is on the package management level:

 - The ugly custom installer would have to be replaced by a
 distribution-specific package;
 - Some distributions have special rules regarding Java packages. You'd have 
 to
 check those;

 You _can_ conform to the FHS without any change by being installed under 
 /opt.
 This will make fred accessible system-wide, so you might want to check if 
 it's
 ok to let multiple users delve inside the Freenet directory tree. However,
 AFAIR, install scripts can do almost anything, including creating a
 fred-specific user and group, and allowing freenet{,-ext}.jar to be updated 
 by
 that user without root privileges.

 The new directory layout might look like this:

 /etc: freenet.ini.
 /usr/bin: shell scripts to launch and update freenet.
 /usr/lib: native libraries.
 /usr/lib/fred: jars. This might be dependent on the distribution.
 /srv/fred: default download location (if system-wide daemon, ~/Downloads
 otherwise).
 /var/cache/fred: datastore and other miscellaneous persistent files.
 /var/cache/fred/plugins: plugins directory trees.
 /var/log: logs.
 /var/log/old/fred: compressed old logs (do you really need those?).

 Plus, distribution specific scripts to control the daemon (run.sh-ish).



 
 


-- 
GPG: 4096R/5FBBDBCE
https://github.com/infinity0
https://bitbucket.org/infinity0
https://launchpad.net/~infinity0




Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-18 Thread Ximin Luo
On 19/03/12 01:09, Ximin Luo wrote:
 On 16/03/12 23:09, Marco Schulze wrote:
 Well, the obvious question is 'why?'. Using /opt + /usr/bin scripts + service
 scripts seems to be good enough. Either way, fred .jar paths are 
 configurable,
 the jars themselves should have 6** permissions and be owned by the fred 
 user.
 Why shouldn't autoupdate work?

 
 /opt is not FHS.
 /usr/bin is not FHS in the way that you're proposing. It should be used for us
 

Forgot to finish this sentence. I meant to say, /usr/bin is used for user-run
programs only. Maintenance scripts go in /usr/lib/freenet or
/usr/share/freenet, and the daemon script should be /etc/init.d/freenet.
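The layout being converged on in this thread can be collected in one place. The sketch below is only a summary of the paths discussed above (freenet.ini in /etc, jars under /usr/lib/fred, datastore under /var/cache/fred, etc.); none of these paths are an agreed spec, and the class name is hypothetical.

```java
import java.io.File;

/** Illustrative mapping of fred's runtime locations to the FHS layout
 *  discussed in this thread. All paths are assumptions, not a spec. */
public class FhsLayout {
    public final File config    = new File("/etc/freenet.ini");      // locked, root-owned
    public final File jars      = new File("/usr/lib/fred");         // may vary by distribution
    public final File datastore = new File("/var/cache/fred");
    public final File plugins   = new File("/var/cache/fred/plugins");
    public final File logs      = new File("/var/log");
    public final File downloads = new File("/srv/fred");             // system-wide daemon only
}
```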

 debian-staging follows FHS to as much of an extent as possible, although there
 was one minor issue possibly with its use of /var/run/freenet.
 
 Why shouldn't autoupdate work - if you look through how the current updater
 works, it's fairly self-evident why it won't work for the package built by
 debian-staging.
 
 An unrelated question: it seems debian-staging not only builds slightly
 different jars, but are locked on specific build numbers. Why?

 
 I haven't had time to update the submodule pointers to the latest commits and
 verify that the package still works. Does this answer your question?
 
 On 16-03-2012 17:51, Ximin Luo wrote:
 We already have a layout that adheres to the FHS, see debian-staging for 
 details :)

 But this is only for run-time data, NOT the binaries themselves. That part 
 is
 rigid and would require much more work, because (to implement it properly)
 would need to integrate well with all the various existing installers, as 
 well
 as the built-in updater.

 I'll respond to the other points some other time, need to be off somewhere 
 now.

 (Theoretically it would be possible for fproxy to expose an APT repo under 
 e.g.
 localhost:/debian/ that actually gets its data from freenet, but this is
 again extra work.)

 X

 On 16/03/12 20:38, Marco Schulze wrote:
 On 16-03-2012 15:13, Matthew Toseland wrote:
 Updating its own binaries is incompatible with the standard unix way of 
 doing
 things, isn't it? Even if it's not technically a violation of FHS?
 I'd just like to point out that this is not the case at all, specially 
 because
 flexibility is a major characteristic in this Unix Way of Doing Stuff. 
 Where it
 might be problematic, if at all, is on the package management level:

 - The ugly custom installer would have to be replaced by a
 distribution-specific package;
 - Some distributions have special rules regarding Java packages. You'd 
 have to
 check those;

 You _can_ conform to the FHS without any change by being installed under 
 /opt.
 This will make fred accessible system-wide, so you might want to check if 
 it's
 ok to let multiple users delve inside the Freenet directory tree. However,
 AFAIR, install scripts can do almost anything, including creating a
 fred-specific user and group, and allowing freenet{,-ext}.jar to be 
 updated by
 that user without root privileges.

 The new directory layout might look like this:

 /etc: freenet.ini.
 /usr/bin: shell scripts to launch and update freenet.
 /usr/lib: native libraries.
 /usr/lib/fred: jars. This might be dependent on the distribution.
 /srv/fred: default download location (if system-wide daemon, ~/Downloads
 otherwise).
 /var/cache/fred: datastore and other miscellaneous persistent files.
 /var/cache/fred/plugins: plugins directory trees.
 /var/log: logs.
 /var/log/old/fred: compressed old logs (do you really need those?).

 Plus, distribution specific scripts to control the daemon (run.sh-ish).





 
 
 
 






Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-18 Thread Ximin Luo
On 16/03/12 18:13, Matthew Toseland wrote:
 On Thursday 15 Mar 2012 21:02:26 Ximin Luo wrote:
 (Top-posting because previous post is too long to reply to in-line.)

 ## refactoring

 Refactoring the code helps to make it more understandable for other coders. 
 One
 of the reasons why I don't work more on freenet myself is because it's
 extremely difficult to make any changes that are more than a simple bug fix.
 
 Hmmm, good point. But from FPI point of view, unless it's a very quick and 
 high impact refactoring,  it's probably too long term.

 When I was writing code for the config subsystem to separate out different
 files into different directories, this was very frustrating and took up much
 more time than I expected.
 
 I'm still curious as to why.

The code is spread out over many files not related to config management; this
creates lots of implicit dependencies, which are apparent neither in the
syntax/structure of the code nor in the semantics of the language. In other
words, spaghetti code.

Config management code should not be mixed into the things it configures;
this is just Good Programming. Changing variables on-the-fly is a completely
separate issue from config management and actually writing out those values,
and it was a mistake to bundle the two together in the code.

Using the guice framework would help this a lot as well.
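The separation being argued for here can be sketched even without a framework: the component takes its settings as plain values, and the ini parsing lives in one place outside it. Guice would then just automate this wiring. Class names, keys, and defaults below are invented for illustration, not fred's real config API.

```java
import java.util.Properties;

/** Sketch: the datastore knows nothing about ini files; it just takes values. */
class Datastore {
    final String dir;
    final long maxBytes;
    Datastore(String dir, long maxBytes) { this.dir = dir; this.maxBytes = maxBytes; }
}

/** Reading settings lives in one place, outside the classes it configures. */
class ConfigLoader {
    static Datastore datastoreFrom(Properties ini) {
        return new Datastore(
            ini.getProperty("node.storeDir", "/var/cache/fred"),
            Long.parseLong(ini.getProperty("node.storeSize", "1073741824")));
    }
}
```

With Guice, `ConfigLoader` would become a module binding these values, and `Datastore` would receive them via constructor injection.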


 NB: I'm not implying you should be doing all the work toad, please don't 
 think
 this. And please don't think that my opinions are automatically invalid / 
 less
 worthy just because I haven't committed code for ages. I just want to express
 my view of What I Would Do If I Were God.
 
 Okay. :) I don't mean to criticise un-constructively, just have a useful 
 conversation.

 People can do what they want but it doesn't mean it's a good idea. Granted, I
 consider code quality independently of any issues such as funding, but
 focusing too much on the latter leads to bad software. If there's pressure 
 like
 this, ignore it, find something else in the meantime until it goes away.
 
 I plan to continue working for FPI for the time being. Partly because it's 
 part time and fits conveniently with studying. Partly because getting a 
 programming job in the UK requires a degree even if you have some rather odd 
 experience.

 ## freenet Events API

 freenet does not provide a great API for running callbacks and scheduled 
 tasks.
 
 I'm not sure what you mean here. For example, the client layer has unique 
 requirements. We simply can't just change it to use java.util.concurrent. 
 Database jobs must run on the database thread, and no other thread may have 
 access to the database, because the client layer is not designed to deal with 
 simultaneous transactions. This avoids performance problems (separate caches 
 for each client; simultaneous disk I/O), and complexity (refreshing 
 everything constantly, which requires a more query-oriented architecture; 
 we'd have to change pretty much everything, and it'd probably be slower). But 
 it's messy, in particular it requires passing around ObjectContainer's.
 

Why can't these restrictions be handled by a 3rd-party events framework?

 Converting everything to use java.util.concurrent and/or
 com.google.common.util.concurrent would help this a lot. Of course, Library 
 can
 and currently does implement this itself, but it's fairly sloppy, and other
 plugins would benefit if such a framework were provided.
 
 If you simply mean replacing Ticker and Executor, that's fine by me.

 Some common tasks that plugins would like to do, which really should be
 provided by the underlying application:
 - run tasks in the background
 - run tasks according to a particular schedule
 - cancel running tasks
 
 How do you propose to cancel a task once it's started? I guess it depends 
 what sort of task it is. If it has a boolean and periodically checks it then 
 fine; this would require a subclass ...
 

Depends on what the task is. The writer of the task will need to add code to
support cancellation, just like Future.cancel().
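The cooperative pattern both sides are describing (toad's "has a boolean and periodically checks it", and `Future.cancel()`) looks roughly like this. A minimal sketch, not fred's code: the task polls its interrupt flag, so `Future.cancel(true)` can stop it mid-run.

```java
import java.util.concurrent.*;

/** Sketch of cooperative cancellation: the task polls its interrupt flag,
 *  so Future.cancel(true) can stop it partway through. */
public class CancellableTask implements Runnable {
    volatile boolean sawInterrupt = false;

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Thread.yield(); // stand-in for one unit of real work
        }
        sawInterrupt = true; // clean-up path after cancellation
    }
}
```

The key point is that cancellation only works because the task's author wrote the check; `cancel(true)` merely sets the interrupt flag.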

 - handle dependencies between tasks, so that e.g. if A depends on B and I
 cancel B, A is automatically cancelled.
 
 That's a nice bit of functionality yeah. Nothing in fred needs it at present, 
 although radical refactorings might make it more useful.
 

Having this functionality would make Library's code a lot simpler. (The
dependency management that is, not necessarily the cancellation.) We talked
about it a while back but I never really sat down to solve the issue.
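The dependency-management half (cancelling B cancels everything depending on B) is mostly bookkeeping. A minimal sketch under invented names — a real version would hold futures rather than strings and integrate with the scheduler:

```java
import java.util.*;

/** Sketch of dependency-aware cancellation: cancelling a task also cancels
 *  everything registered as depending on it, transitively. Hypothetical API. */
public class TaskGraph {
    private final Map<String, Set<String>> dependents = new HashMap<>();
    private final Set<String> cancelled = new HashSet<>();

    /** Declare that 'task' cannot proceed without 'dependency'. */
    public void dependsOn(String task, String dependency) {
        Set<String> s = dependents.get(dependency);
        if (s == null) { s = new HashSet<>(); dependents.put(dependency, s); }
        s.add(task);
    }

    /** Cancel a task and, transitively, everything that depends on it. */
    public void cancel(String task) {
        if (!cancelled.add(task)) return; // already cancelled; stop recursion
        Set<String> s = dependents.get(task);
        if (s != null) for (String t : s) cancel(t);
    }

    public boolean isCancelled(String task) { return cancelled.contains(task); }
}
```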

 - group tasks into related categories so one group doesn't affect the other.
 (e.g. tasks for the group Library/b-tree-write/index-URL won't starve the
 resources of group WoT/background-trust-calculation)
 
 If you are scheduling tasks in a limited pool this makes sense. Freenet 
 generally doesn't do this because most tasks are either CPU intensive and 
 complete quickly, or are blocking / I/O intensive and take ages but don't use 
 much 

Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-16 Thread Matthew Toseland
On Thursday 15 Mar 2012 21:02:26 Ximin Luo wrote:
 (Top-posting because previous post is too long to reply to in-line.)
 
 ## refactoring
 
 Refactoring the code helps to make it more understandable for other coders. 
 One
 of the reasons why I don't work more on freenet myself is because it's
 extremely difficult to make any changes that are more than a simple bug fix.

Hmmm, good point. But from FPI's point of view, unless it's a very quick and 
high-impact refactoring, it's probably too long term.
 
 When I was writing code for the config subsystem to separate out different
 files into different directories, this was very frustrating and took up much
 more time than I expected.

I'm still curious as to why.
 
 NB: I'm not implying you should be doing all the work toad, please don't think
 this. And please don't think that my opinions are automatically invalid / less
 worthy just because I haven't committed code for ages. I just want to express
 my view of What I Would Do If I Were God.

Okay. :) I don't mean to criticise un-constructively, just have a useful 
conversation.
 
 People can do what they want but it doesn't mean it's a good idea. Granted, I
 consider code quality independently of any issues such as funding, but
 focusing too much on the latter leads to bad software. If there's pressure 
 like
 this, ignore it, find something else in the meantime until it goes away.

I plan to continue working for FPI for the time being. Partly because it's part 
time and fits conveniently with studying. Partly because getting a programming 
job in the UK requires a degree even if you have some rather odd experience.
 
 ## freenet Events API
 
 freenet does not provide a great API for running callbacks and scheduled 
 tasks.

I'm not sure what you mean here. For example, the client layer has unique 
requirements. We simply can't just change it to use java.util.concurrent. 
Database jobs must run on the database thread, and no other thread may have 
access to the database, because the client layer is not designed to deal with 
simultaneous transactions. This avoids performance problems (separate caches 
for each client; simultaneous disk I/O), and complexity (refreshing everything 
constantly, which requires a more query-oriented architecture; we'd have to 
change pretty much everything, and it'd probably be slower). But it's messy, in 
particular it requires passing around ObjectContainer's.

 Converting everything to use java.util.concurrent and/or
 com.google.common.util.concurrent would help this a lot. Of course, Library 
 can
 and currently does implement this itself, but it's fairly sloppy, and other
 plugins would benefit if such a framework were provided.

If you simply mean replacing Ticker and Executor, that's fine by me.
 
 Some common tasks that plugins would like to do, which really should be
 provided by the underlying application:
 - run tasks in the background
 - run tasks according to a particular schedule
 - cancel running tasks

How do you propose to cancel a task once it's started? I guess it depends what 
sort of task it is. If it has a boolean and periodically checks it then fine; 
this would require a subclass ...

 - handle dependencies between tasks, so that e.g. if A depends on B and I
 cancel B, A is automatically cancelled.

That's a nice bit of functionality yeah. Nothing in fred needs it at present, 
although radical refactorings might make it more useful.

 - group tasks into related categories so one group doesn't affect the other.
 (e.g. tasks for the group Library/b-tree-write/index-URL won't starve the
 resources of group WoT/background-trust-calculation)

If you are scheduling tasks in a limited pool this makes sense. Freenet 
generally doesn't do this because most tasks are either CPU intensive and 
complete quickly, or are blocking / I/O intensive and take ages but don't use 
much resources. Also many tasks are latency sensitive. And on modern JVMs, lots 
of threads are cheap, although memory is an issue; we do need to keep it down 
if possible ...
 
 ## Library algorithm
 
 You keep talking about specific things like only fetching a single block in
 the common case. Nobody else knows what this means, even I don't (off the top
 of my head) because I haven't looked at the code in several years.

I quoted a bug explaining this.

 
 On 15/03/12 13:03, Matthew Toseland wrote:
  
  https://bugs.freenetproject.org/view.php?id=4066
  And categories: 
  library
  b-tree-index
  

Back to your mail:
 
 Having a proper specification that separates different concerns out into 
 layers
 (e.g. logical structure; on-freenet structure; data format; network
 performance) helps greatly to make the algorithm understandable to other
 people, and yourself if you don't work on it for ages.
 
 If, instead of saying only fetch a single block you say the on-freenet
 structure format has persistence issues, and here's a potential solution, 
 it's
 easier for other people to understand what you 

Re: [freenet-dev] Refactoring Freenet and Library was Re: Gun.IO and Freenet

2012-03-15 Thread Ximin Luo
(Top-posting because previous post is too long to reply to in-line.)

## refactoring

Refactoring the code helps to make it more understandable for other coders. One
of the reasons why I don't work more on freenet myself is because it's
extremely difficult to make any changes that are more than a simple bug fix.

When I was writing code for the config subsystem to separate out different
files into different directories, this was very frustrating and took up much
more time than I expected.

NB: I'm not implying you should be doing all the work toad, please don't think
this. And please don't think that my opinions are automatically invalid / less
worthy just because I haven't committed code for ages. I just want to express
my view of What I Would Do If I Were God.

People can do what they want but it doesn't mean it's a good idea. Granted, I
consider code quality independently of any issues such as funding, but
focusing too much on the latter leads to bad software. If there's pressure like
this, ignore it, find something else in the meantime until it goes away.

## freenet Events API

freenet does not provide a great API for running callbacks and scheduled tasks.
Converting everything to use java.util.concurrent and/or
com.google.common.util.concurrent would help this a lot. Of course, Library can
and currently does implement this itself, but it's fairly sloppy, and other
plugins would benefit if such a framework were provided.

Some common tasks that plugins would like to do, which really should be
provided by the underlying application:
- run tasks in the background
- run tasks according to a particular schedule
- cancel running tasks
- handle dependencies between tasks, so that e.g. if A depends on B and I
cancel B, A is automatically cancelled.
- group tasks into related categories so one group doesn't affect the other.
(e.g. tasks for the group Library/b-tree-write/index-URL won't starve the
resources of group WoT/background-trust-calculation)
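The first three items in this list map directly onto `java.util.concurrent`, which is part of the argument for adopting it. A sketch (not fred's API):

```java
import java.util.concurrent.*;

/** Background execution, scheduling, and cancellation straight from
 *  java.util.concurrent -- the first three wish-list items above. */
public class SchedulerDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);

        // run a task in the background
        Future<Integer> bg = pool.submit(() -> 6 * 7);

        // run a task according to a particular schedule
        ScheduledFuture<?> tick = pool.scheduleAtFixedRate(
                () -> { /* periodic work, e.g. a b-tree flush */ }, 0, 1, TimeUnit.SECONDS);

        // cancel a running/scheduled task
        tick.cancel(false);

        System.out.println(bg.get()); // prints 42
        pool.shutdown();
    }
}
```

The last two items (dependencies between tasks, and group isolation) are not provided out of the box; dependencies need something like Guava's futures or custom bookkeeping, and group isolation needs separate pools.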

## Library algorithm

You keep talking about specific things like only fetching a single block in
the common case. Nobody else knows what this means, even I don't (off the top
of my head) because I haven't looked at the code in several years.

Having a proper specification that separates different concerns out into layers
(e.g. logical structure; on-freenet structure; data format; network
performance) helps greatly to make the algorithm understandable to other
people, and yourself if you don't work on it for ages.

If, instead of saying only fetch a single block you say the on-freenet
structure format has persistence issues, and here's a potential solution, it's
easier for other people to understand what you mean.

## config management

Code for reading/writing the config is embedded inside the classes it controls,
which clutters up the code and makes it confusing. Config code should be
separated from the actual application. It would also be nice if this were
exposed to plugins.

Being a daemon is why the current config system is NOT SUFFICIENT. I need to be
able to lock certain config settings, such as where to keep the datastore /
run-time files themselves, to conform to the FHS. It's best that such settings
are kept in a separate read-only file. The current system only reads one file,
freenet.ini.
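One way to meet the locked-settings requirement is a second, root-owned overrides file whose keys always win over freenet.ini. The sketch below shows only the merge logic; the file path and key names are assumptions, not an existing fred feature.

```java
import java.util.Properties;

/** Sketch: user-editable freenet.ini plus a root-owned locked file
 *  (e.g. /etc/freenet/locked.ini -- path is an assumption). Locked keys
 *  always win, so a package can pin the datastore location per the FHS. */
public class LayeredConfig {
    private final Properties user, locked;

    public LayeredConfig(Properties user, Properties locked) {
        this.user = user;
        this.locked = locked;
    }

    public String get(String key, String def) {
        String v = locked.getProperty(key); // locked file takes precedence
        return v != null ? v : user.getProperty(key, def);
    }

    /** The UI could grey out settings for which this returns true. */
    public boolean isLocked(String key) { return locked.containsKey(key); }
}
```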

Updating its own binaries is not a problem if freenet knows where the binaries
are. The current installer puts them in a rigid place; however, this is
incompatible with the FHS.

X

On 15/03/12 13:03, Matthew Toseland wrote:
 On Wednesday 14 Mar 2012 11:49:33 Ximin Luo wrote:
 Library has some architectural problems. I don't think it's a great use of 
 time
 to try to improve the existing code directly.

 - We didn't create a proper specification for how the data structure should
 work over freenet
 - I made a mistake in trying to shoe-horn a stub remote data structure into 
 the
 Java collections framework.
 - To support loading from remote, I wrote my own event handling framework,
 which was not a very clean design as I wasn't familiar with
 java.util.concurrent at the time.
 - I did write an asynchronous merge algorithm (SkelBTreeMap.update) which
 currently does the majority of the write work, but it's quite complex and
 doesn't integrate well with the rest of the code. Toad also made some
 adjustments on top of it for performance, which makes understanding it even
 more complex as it distracts from the pure form of the algorithm.

 What is needed to get Library working well is to:

 1. specify the data structure, and associated algorithms, properly. I have
 notes, I can help with this
 2. use an event handling framework for Java (something similar to Twisted for
 python, but also has the ability to use threads if necessary)
 3. use this to implement the algorithm in a concise way
 
 Some functional stuff for Library (related to on-network format), that may or 
 may not fit in depending on how much is being thrown out and rewritten:
 - B+tree (no data in the nodes so