Re: [tahoe-dev] Precise Puppy (linux) tahoe-lafs 1.10.0 initial report
Hi again, sorry for replying so late... The Pis used for storage nodes are in general not used for anything else, and we try to keep X turned off to save resources. You can also lower the amount of RAM reserved for the GPU to a minimal 16 MB in the config.txt file in the Pi boot partition. Also, some of our Pis crashed when stressed, e.g. by uploading a bunch of very large files, until the SD card was replaced by a SanDisk SDSDX-016G-X46. The Pi is notoriously sensitive about what card is being used. Finally, if lack of memory is limiting performance, it is possible to set up a swap partition on the Pi. It will slow things down horribly, of course, but may just get the job done.

Regards, Anders

On 28 Sep 2013, at 05:20, Garonda Rodian deeps...@hotmail.com wrote:

Thank you for the report on the Raspberry Pi being used in production - are you and your friends running just one storage node on the Pi, or are you also running any other software (second storage node, Tor, I2P, OpenVPN)? My RPi consistently simply dies during the trial - no errors, it just... stops - but based on your feedback, I'll continue.

As I'm hoping to run some medium-scale tests, I'm going to have to have something to generate a lot of nodes all at once, and I hate wasting effort. At this point, I'm targeting something more like the old terminal/3270/DOS menus and/or wizards - simple walkthroughs with questions to answer that can be used to create the files for an entire grid, or add to an existing grid's files, hopefully with some manner of wrapper (Tor, I2P, OpenVPN) capabilities available as well.

Does anyone have a good Python tutorial for experienced programmers? My C and assembly used to be pretty good and my SQL is excellent, but I haven't picked up a new language in a long time, and I never dealt with parallelization much.

P.S.
the Precise Puppy 5.7.1 VM at 768 MB fails with the GUI, but succeeds at the command line with everything nonessential (the CUPS printer daemon) disabled, so the critical memory limit for the trial is very close to there, OS overhead included.

From: anders.gen...@gmail.com
Date: Thu, 26 Sep 2013 19:54:44 +0200
To: zoo...@gmail.com
CC: tahoe-dev@tahoe-lafs.org
Subject: Re: [tahoe-dev] Precise Puppy (linux) tahoe-lafs 1.10.0 initial report

P.S. If I'm lucky, the Raspberry Pi has completed its trial run, though if this is the RAM requirement, I'm not holding out much hope.

It is too bad about #1476, because I really like to be able to run unit tests everywhere and all the time. However, I believe that the gateway or storage server itself will run fine on Raspberry Pi, even if (due to #1476) the tests will fail.

Just to chime in: We have several storage nodes running off of RPis in our friendnet, and they seem to work fine as such. We would absolutely love a setup menu - many of our participating friends have never used a terminal. Looking forward to being dazzled!

Regards, Anders

___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev
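As a concrete illustration of the GPU memory split Anders mentions in his Pi-tuning advice above, the line below is the kind of entry one would add to config.txt on the SD card's boot partition (a sketch; the 16 MB figure is the minimum he refers to):

```ini
# /boot/config.txt on the Raspberry Pi (boot partition of the SD card)
# Reserve only the minimal 16 MB of RAM for the GPU, freeing the rest
# for the Tahoe-LAFS storage node (useful with X turned off).
gpu_mem=16
```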
Re: [tahoe-dev] Precise Puppy (linux) tahoe-lafs 1.10.0 initial report
P.S. If I'm lucky, the Raspberry Pi has completed its trial run, though if this is the RAM requirement, I'm not holding out much hope.

It is too bad about #1476, because I really like to be able to run unit tests everywhere and all the time. However, I believe that the gateway or storage server itself will run fine on Raspberry Pi, even if (due to #1476) the tests will fail.

Just to chime in: We have several storage nodes running off of RPis in our friendnet, and they seem to work fine as such. We would absolutely love a setup menu - many of our participating friends have never used a terminal. Looking forward to being dazzled!

Regards, Anders
Re: [tahoe-dev] Dynamic address
I believe it has been suggested that the introducer should detect incoming connections and thereby keep track of the addresses of connected nodes. If that were implemented, you would only need to set the introducer furl and tub.port in your storage node config. The tub.port is of course needed for Tahoe to know what port you have opened through your firewall for Tahoe data traffic.

Until that is implemented, we have a little script running on the storage nodes of our friendnet that automatically updates the node config file and restarts the node if the IP address changes. If you're on Linux, you are welcome to try it.

Regards, Anders

On 15 Sep 2013, at 16:04, Jerzy Łogiewa jerz...@interia.eu wrote:

Is some hash-based ident + discovery possible instead? Take, for example, BitTorrent Sync. A secret looks like UF5O7G6XIMQKC7OH4J6NIPHDKVONMITO, and that is all that is needed to find the share!

-- Jerzy Łogiewa -- jerz...@interia.eu

On Sep 15, 2013, at 3:24 PM, Pierre Abbat wrote:

On Friday, September 13, 2013 13:35:28 Jerzy Łogiewa wrote: If my Tahoe storage node IP changes, must I update it in the tracker manually? Is there some way to automate it, to make the storage node tell the tracker the new IP?

I have some names pointing to my computer and these lines in tahoe.cfg:

[node]
tub.port = 14159
tub.location = bezitopo.org:14159

bezitopo.org is not a dynamic DNS name; I have to update it on the rare occasions that it changes, but if you have a DDNS name, it should update automatically. My storage node is behind a firewall, so it doesn't know its external IP address.

Pierre
-- Jews use a lunisolar calendar; Muslims use a solely lunar calendar.
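The IP-update script Anders describes could be sketched roughly as follows. This is not his actual script; the config path, lookup URL, and restart command are illustrative assumptions. It fetches the node's current external IP, rewrites the tub.location line in tahoe.cfg if it changed, and restarts the node:

```python
# Sketch of an "update tub.location when the external IP changes" helper.
# Assumptions: node directory, lookup URL, and restart command are placeholders.
import re
import subprocess
import urllib.request

NODE_DIR = "/home/tahoe/.tahoe"          # assumed node directory
CONFIG = NODE_DIR + "/tahoe.cfg"
PORT = 8098                              # the forwarded tub.port

def external_ip():
    # myexternalip.com is the lookup service mentioned later in this digest.
    with urllib.request.urlopen("https://myexternalip.com/raw", timeout=10) as r:
        return r.read().decode().strip()

def update_location(cfg_text, ip, port):
    """Return cfg_text with the tub.location line replaced; rest untouched."""
    return re.sub(r"(?m)^tub\.location\s*=.*$",
                  f"tub.location = {ip}:{port}", cfg_text)

def run_once():
    ip = external_ip()
    with open(CONFIG) as f:
        text = f.read()
    new_text = update_location(text, ip, PORT)
    if new_text != text:                 # IP changed: rewrite and restart
        with open(CONFIG, "w") as f:
            f.write(new_text)
        subprocess.run(["tahoe", "restart", NODE_DIR], check=True)

# run_once() would typically be invoked from cron every few minutes.
```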
Re: [tahoe-dev] Random ports
Just to show what we currently have, below is the output of the aforementioned script. It relies on nmap to test connectivity. Currently the script just prints whether there is a mismatch between the announced port and the default one (8098), but we could also print the announced port and its connectivity. The output on my node (below) currently shows one mismatch. The output on the introducer node currently shows three mismatches.

Node        Online  8098    Mismatch
guldburken  yes     open    no
PIburken    yes     open    no
genell      yes     open    no
patrik      yes     open    no
mackan      no      -       -
monk        yes     open    yes
Skogis..    no      -       -
Perra       yes     open    no

Best regards, /Anders

On 24 Aug 2013, at 15:39, Anders Genell anders.gen...@gmail.com wrote:

I just wanted to bump this issue in order to clarify whether I am misinterpreting what to expect. I have updated my previously posted awk script (https://www.dropbox.com/s/e6re3l1sranipuy/check_nodes.awk) to also check ports. As we consistently set port 8098 as tub.port and in tub.location, I would expect that port to be the one reported in the WUI in the Address column, but I might be wrong? Most nodes report port 8098 there most of the time, but not all of them always. Therefore my awk script checks access to port 8098 for all reported IPs, as well as access to any alternative ports that are reported. The latter are basically never accessible, because only 8098 is forwarded through the firewalls in the routers of the friends running their respective nodes. So far, 8098 seems always accessible for all nodes that are reported to be online. At one point (this has since been remedied) one node was reported as online with its port shown as 8098 while 8098 was in fact being filtered by a firewall.

This all makes me wonder: what are the requirements for a node being reported as online? And where is the port number found that is reported for a node in the WUI?

Regards, /Anders

On 12 Aug 2013, at 14:00, Zooko O'Whielacronx zoo...@gmail.com wrote:

On Sun, Aug 11, 2013 at 2:24 PM, Anders Genell anders.gen...@gmail.com wrote:

We now have seven nodes in our friendnet and can soon start to rely on it as a long-term cloud backup. One thing we have noticed is that the nodes sometimes report different ports in the web interface(s) than what has been set for tub.location and tub.port. Checking /private/storage.furl shows the intended port, and the system seems to work, so it's just a matter of easing our worried minds about why e.g. 2 out of 7 nodes report wrong ports in the web interface?

I'm not aware of any bug about this. Are you sure you're not confusing tub.port with web.port or something?

https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/configuration.rst?rev=0a89b738bc05f17597555786b8f59dc05c46be0f#overall-node-configuration

Please give more information about the mysterious behavior of the 2 (out of 7) nodes: what port number do they show? Is there anything listening on that port?

I would love to hear how your friendnet goes. Most friendnets fail, unfortunately. Some of the people don't use the friendnet, and the ones who aren't using it don't invest a lot of effort in maintaining the servers (to serve those who do use it). I heard another story of such a failed friendnet from some people I met at DefCon. If anybody out there reading this has a story of a friendnet (either a failure or a success), I would love to hear it, to try to figure out what makes a successful one.

Regards, Zooko
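The connectivity test the script performs with nmap can also be done without external tools: just attempt a TCP connection to each node's announced address. A dependency-free sketch (the node list is an illustrative placeholder, not the real grid):

```python
# Minimal stand-in for the nmap-based port check: try a TCP connect to
# each node's forwarded port and report open vs. closed/filtered.
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder addresses; a real list would come from the WUI or a config file.
NODES = {"guldburken": ("192.0.2.10", 8098)}

def report(nodes):
    """Format a small status table like the one in the message above."""
    lines = []
    for name, (host, port) in sorted(nodes.items()):
        status = "open" if port_open(host, port) else "closed/filtered"
        lines.append(f"{name:12s} {host}:{port}  {status}")
    return "\n".join(lines)
```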
Re: [tahoe-dev] Random ports
I just wanted to bump this issue in order to clarify whether I am misinterpreting what to expect. I have updated my previously posted awk script (https://www.dropbox.com/s/e6re3l1sranipuy/check_nodes.awk) to also check ports.

As we consistently set port 8098 as tub.port and in tub.location, I would expect that port to be the one reported in the WUI in the Address column, but I might be wrong? Most nodes report port 8098 there most of the time, but not all of them always. Therefore my awk script checks access to port 8098 for all reported IPs, as well as access to any alternative ports that are reported. The latter are basically never accessible, because only 8098 is forwarded through the firewalls in the routers of the friends running their respective nodes. So far, 8098 seems always accessible for all nodes that are reported to be online. At one point (this has since been remedied) one node was reported as online with its port shown as 8098 while 8098 was in fact being filtered by a firewall.

This all makes me wonder: what are the requirements for a node being reported as online? And where is the port number found that is reported for a node in the WUI?

Regards, /Anders

On 12 Aug 2013, at 14:00, Zooko O'Whielacronx zoo...@gmail.com wrote:

On Sun, Aug 11, 2013 at 2:24 PM, Anders Genell anders.gen...@gmail.com wrote:

We now have seven nodes in our friendnet and can soon start to rely on it as a long-term cloud backup. One thing we have noticed is that the nodes sometimes report different ports in the web interface(s) than what has been set for tub.location and tub.port. Checking /private/storage.furl shows the intended port, and the system seems to work, so it's just a matter of easing our worried minds about why e.g. 2 out of 7 nodes report wrong ports in the web interface?

I'm not aware of any bug about this. Are you sure you're not confusing tub.port with web.port or something?
https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/configuration.rst?rev=0a89b738bc05f17597555786b8f59dc05c46be0f#overall-node-configuration

Please give more information about the mysterious behavior of the 2 (out of 7) nodes: what port number do they show? Is there anything listening on that port?

I would love to hear how your friendnet goes. Most friendnets fail, unfortunately. Some of the people don't use the friendnet, and the ones who aren't using it don't invest a lot of effort in maintaining the servers (to serve those who do use it). I heard another story of such a failed friendnet from some people I met at DefCon. If anybody out there reading this has a story of a friendnet (either a failure or a success), I would love to hear it, to try to figure out what makes a successful one.

Regards, Zooko
Re: [tahoe-dev] Random ports
Sorry, that was my intention, but I missed replying to all...

On 13 Aug 2013, at 00:27, Zooko O'Whielacronx zoo...@gmail.com wrote:

Very cool! I think you should post this description to tahoe-dev! Regards, Zooko

On Mon, Aug 12, 2013 at 1:57 PM, Anders Genell anders.gen...@gmail.com wrote:

I'll be happy to report here as we go along! I suppose I will be one of the administrators of our friendnet, and that is one of the reasons for me constantly disturbing the peace here.

We have created a Pidora image with a tahoe-daemon user and allmydata-1.10 preinstalled, so that friends can get up and running easily with an RPi and a USB disk (1 TB is most common for now). We have included the previously posted ugly-hack script that updates the tahoe.cfg file when the external IP of the node changes, so people should not need to run any dyndns services, as long as the introducer has a static IP, which it has.

As for the ports, we have set tub.port and the port sections of tub.location to 8098, which is also the port forwarded through the routers/firewalls of all participating friends. Right now 3 of 8 (yay! another friend in the grid!) nodes report strange ports in the WUI welcome page of my node - 64884, 56377 and 53545 respectively. They still show up green, and they are still possible to reach when just testing with telnet IP 8098, so it does not seem to affect functionality, but I can't figure out what the port number in the WUI represents. Actually, when testing just now, one node is reporting a different port than 8098 while it is also unreachable on 8098 (likely due to port-forward failure), but it is still green in the WUI.

I have written an awk script that parses the WUI welcome page and outputs the nickname and connection status of each node. My idea is to use this to keep track of nodes that fall off the grid, and prompt the friend hosting the node to take measures.
The RPis tend to overheat when placed in a case and running the first full upload (most friends have more than 30 GB of files to back up, some as much as 400 GB, which takes days if not weeks to complete), and since the users are meant to deploy the node and then more or less forget about it - running backups using Duplicati on their main computer(s) - they might not notice for a while. So if the connection status is not viable for monitoring the availability of nodes, I need to fix something else (like parsing for the node IPs and scanning for port 8098).

Best regards, /Anders

On 12 Aug 2013, at 14:00, Zooko O'Whielacronx zoo...@gmail.com wrote:

On Sun, Aug 11, 2013 at 2:24 PM, Anders Genell anders.gen...@gmail.com wrote:

We now have seven nodes in our friendnet and can soon start to rely on it as a long-term cloud backup. One thing we have noticed is that the nodes sometimes report different ports in the web interface(s) than what has been set for tub.location and tub.port. Checking /private/storage.furl shows the intended port, and the system seems to work, so it's just a matter of easing our worried minds about why e.g. 2 out of 7 nodes report wrong ports in the web interface?

I'm not aware of any bug about this. Are you sure you're not confusing tub.port with web.port or something?

https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/docs/configuration.rst?rev=0a89b738bc05f17597555786b8f59dc05c46be0f#overall-node-configuration

Please give more information about the mysterious behavior of the 2 (out of 7) nodes: what port number do they show? Is there anything listening on that port?

I would love to hear how your friendnet goes. Most friendnets fail, unfortunately. Some of the people don't use the friendnet, and the ones who aren't using it don't invest a lot of effort in maintaining the servers (to serve those who do use it). I heard another story of such a failed friendnet from some people I met at DefCon.
If anybody out there reading this has a story of a friendnet (either a failure or a success), I would love to hear it, to try to figure out what makes a successful one.

Regards, Zooko
[tahoe-dev] Random ports
Dear list!

We now have seven nodes in our friendnet and can soon start to rely on it as a long-term cloud backup. One thing we have noticed is that the nodes sometimes report different ports in the web interface(s) than what has been set for tub.location and tub.port. Checking /private/storage.furl shows the intended port, and the system seems to work, so it's just a matter of easing our worried minds about why e.g. 2 out of 7 nodes report wrong ports in the web interface?

Best regards, Anders
[tahoe-dev] How to use /bin/tahoe backup
Dear list!

Most of our friendnet members are using Duplicati to upload backups to our grid, but I'd like to be able to upload files as they are, not lumping the data into 10 MB zipped (optionally encrypted) packages. I have tentatively investigated the tahoe backup command, but can't really figure out how it works. I tried pointing it at a newly created directory in the grid, and a new directory called 'Archives' was created in there, but no files showed up while running the backup. Do I need to do something particular to see the files?

Also, the info I could find mentions the backup being immutable, but I'd like to be able to perform incremental backups (as I believe Duplicati does) and was wondering if tahoe backup can do that too? Ideally I'd like to be able to back up a directory so that I can open the corresponding write-cap URI in the tahoe web interface and plainly see all files. Is that doable?

Best regards, Anders
[tahoe-dev] Node alert function
Dear devs!

We are continuing to deploy our friendnet, dubbed RegnmolNET (the rain cloud, in Swedish). One thing we thought of, in addition to making the web welcome page usable for the color-blind, was to have a daemon monitor nodes and, if one has been disconnected for more than XX hours or days, send a notification to e.g. an email address. Would this be something worth including in tahoe? If not, is there a simple text file listing nodes that are connected/disconnected that we can parse to automate an alert? If not, is there some Python API call or similar to that end?

Best regards, Anders
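The alert daemon Anders asks about could be approximated by polling the welcome page and tracking when each node was last seen connected. A sketch under loud assumptions: the `nickname=.../status=...` text format below is invented for illustration (the real WUI markup differs and a real script would adapt the regex), and the notification step is left out:

```python
# Sketch of a "node down too long" check. The page format parsed here is
# an assumed simplification, NOT the actual Tahoe-LAFS WUI markup.
import re

STATUS_RE = re.compile(r'nickname="(?P<nick>[^"]+)"\s+status="(?P<status>\w+)"')

def update_last_seen(page_text, now, last_seen):
    """Record `now` for every node the page reports as connected.

    last_seen maps nickname -> epoch seconds of the last sighting.
    """
    for m in STATUS_RE.finditer(page_text):
        if m.group("status") == "connected":
            last_seen[m.group("nick")] = now

def overdue(last_seen, now, max_age_hours=24):
    """Return nicknames not seen within max_age_hours (alert candidates)."""
    limit = max_age_hours * 3600
    return sorted(n for n, t in last_seen.items() if now - t > limit)
```

A cron job would fetch the welcome page, call `update_last_seen`, and email whatever `overdue` returns.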
[tahoe-dev] Friendnet tub.location and tub.port
Dear list!

I posted a related question recently and got some useful answers, but I am still somewhat confused as to what is required concerning incoming WAN access for friendnet nodes. Our original idea was to just set tub.port on all nodes and make sure that port was forwarded through any routers/firewalls along the way. We would then expect the introducer to detect the IPs of incoming connections from nodes and announce each node with that IP combined with the corresponding port number, as set in each node's tub.port.

It would however seem that we actually need to explicitly set the tub.location IP address of each node for the nodes to see each other. Without it, all nodes see the introducer, and the introducer sees all nodes, but the nodes don't see each other. There have been suggestions here to let the introducer(s) handle IPs, and if I understand correctly, that would work in more or less the way we assumed it already did? Right now we have ugly-hacked a script to update the tahoe.cfg file and restart the node whenever the IP changes, by regularly checking e.g. myexternalip.com. Most nodes will run on Raspberry Pi hardware, so a bash script is sufficient, but a bit of Python should make it more platform-independent, I suppose.

My question is: should we need to set both tub.location and tub.port? Should we need to ugly-hack to update the IP, or alternatively use some dyndns equivalent? The introducer detects incoming IPs anyway; couldn't that be reported back to the node?

Apart from that, we now have the required minimum of four nodes running, and all seems to work fine. We can upload and download as expected, mostly using Duplicati so far.

Best regards, Anders
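For readers following along, a sketch of the node-side configuration this thread keeps referring to; the nickname and the external IP are placeholders, not values from the actual grid:

```ini
# tahoe.cfg on a friendnet storage node (placeholder values)
[node]
nickname = pi-node
# The port opened/forwarded through the router firewall for Tahoe traffic:
tub.port = 8098
# Currently has to be set explicitly to the node's external address,
# which is why a dynamic IP forces rewriting this line:
tub.location = 203.0.113.7:8098
```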
[tahoe-dev] Feature request: Brighter colors
My friend who is running the introducer for our little friendnet is colorblind, and he says the particular shades of red and green used to represent nodes being accessible or not in the Web interface are completely indistinguishable. He would like to request a way of listing node status without color-coded information, or at least making the colors differ more in brightness (e.g. bright red and camouflage green). I don't know how many Tahoe users might suffer from the same condition, but I guess it would be fairly easy to change?

Best regards, /Anders
Re: [tahoe-dev] how to find out your own IP address
If I understand things correctly, making the introducer the manager of IPs would also make sure all nodes can connect to each other even if nodes in a friendnet use dynamic IPs? The tub.location would then only be needed in the introducer(s)?

We just added the third node, adding to the introducer (at my friend's place) and my storage node, and we realized that while the introducer knows about both nodes, my own web interface shows the introducer as online (green) but the third node as offline (red). Similarly, the third node shows my node red. We have not specifically set any tub.location, but have set a tub.port, and are forwarding that port through respective routers/firewalls.

Best regards, Anders

On 26 Jun 2013, at 17:24, Zooko O'Whielacronx zoo...@gmail.com wrote:

On Wed, Jun 26, 2013 at 2:50 AM, Greg Troxel g...@ir.bbn.com wrote:

I think removing autodetection of IP address would be a significant regression for people running storage nodes that might move.

Okay, thanks for the feedback. I think it would be good to figure out what the problem really is. I don't understand why this is so hard.

The PATH issue makes me want to have configure-time finding of the right programs, based on OS, and then to invoke them with the configured path, period.

That's what we currently do, except that it isn't at configure-time but at run-time, and it tries only a specific path:

https://tahoe-lafs.org/trac/tahoe-lafs/browser/trunk/src/allmydata/util/iputil.py?rev=08590b1f6a880d51751fdcacea6a007ebc568f2e#L160

Alternate hosting of that same code, on GitHub:

https://github.com/tahoe-lafs/tahoe-lafs/blob/08590b1f6a880d51751fdcacea6a007ebc568f2e/src/allmydata/util/iputil.py#L160

Is anyone having an issue on BSD? There, /sbin/ifconfig is quite stable. So is the issue that various Linux flavors have withdrawn the previously-standard interfaces?

That is one of several problems.
Here's the full set: https://tahoe-lafs.org/trac/tahoe-lafs/query?status=!closed&keywords=~iputil

An alternative would be to have the introducer look at the address that appears at it when the node connects, and use that to advertise, or perhaps just send it down the wire so the client can decide.

Good idea! I've updated a very old ticket (#50, opened 6 years ago) to suggest this instead of the STUNT/ICE thing that it formerly suggested:

https://tahoe-lafs.org/trac/tahoe-lafs/ticket/50 - "ask a peer to tell you what your IP address is (similar to STUNT/ICE)"

Regards, Zooko
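The idea in ticket #50 - let a peer tell you what address your connection appears to come from, instead of guessing it locally by parsing ifconfig output - is simple enough to sketch in a few lines. This is an illustration of the principle, not Tahoe-LAFS/Foolscap code:

```python
# Minimal "what is my address as seen by you?" echo, in the spirit of
# ticket #50 (and of STUN): the server reports the peer's observed
# host:port back down the same connection.
import socket
import threading

def echo_addr_server(sock):
    """Accept one connection and send back the peer's observed address."""
    conn, (host, port) = sock.accept()
    with conn:
        conn.sendall(f"{host}:{port}".encode())

def ask_my_address(server_host, server_port):
    """Connect to the echo server and return the address it observed."""
    with socket.create_connection((server_host, server_port), timeout=5) as s:
        return s.recv(64).decode()
```

In a grid, the introducer would play the server role, so a node behind NAT learns its external address without any local interface parsing.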
[tahoe-dev] dynamic ip
Dear list!

I am slightly uncertain about the inner workings of the tahoe grid when it comes to (storage) nodes with dynamically assigned IP addresses. I would have thought that as long as the introducer is running on a static IP, all clients should be able to announce their respective IP addresses if and when they change? Or does every node need a dyndns-equivalent account?

Best regards,
-- Anders anders.gen...@gmail.com
[tahoe-dev] Remove node?
Hi list!

I'm new here, so please excuse my breaking of any code of conduct. We are some friends setting up a small Tahoe grid to use as a cloud-based backup solution. A friend runs the introducer and I run a storage node connected to it. More nodes are to come as more friends join in.

When testing things, I created a node on one machine but then decided to run my node on a Raspberry Pi as a low-power server. I didn't manage to move the previous node to the Pi, so I created a new one. I now would like to remove my old node from the grid but cannot find any instructions on how to accomplish that. I'm sure I've overlooked something obvious, so I humbly ask for someone to point me in the right direction.

Best regards, Anders
Re: [tahoe-dev] Remove node?
Thank you kindly, gentlemen! I will ask my friend with the introducer to do so. This makes me wonder how it all would work when/if multiple introducers are possible in a grid... Oh well, we'll cross that bridge when we come to it (or, as a directly translated Swedish proverb goes, "That day, that sorrow").

Best regards, Anders

On 6 Jun 2013, at 11:26, Oleksandr Drach luckyred...@gmail.com wrote:

Hi guys! Ed, you are absolutely right with your reply :) BTW I have added the question to the Tahoe-LAFS FAQ. Thanks!

Sincerely, Oleksandr Drach.

On 6 Jun 2013, Ed Kapitein e...@kapitein.org wrote:

On Thu, 2013-06-06 at 09:43 +0200, Anders Genell wrote:

Hi list! I'm new here, so please excuse my breaking of any code of conduct. We are some friends setting up a small Tahoe grid to use as a cloud-based backup solution. A friend runs the introducer and I run a storage node connected to it. More nodes are to come as more friends join in. When testing things, I created a node on one machine but then decided to run my node on a Raspberry Pi as a low-power server. I didn't manage to move the previous node to the Pi, so I created a new one. I now would like to remove my old node from the grid but cannot find any instructions on how to accomplish that. I'm sure I've overlooked something obvious, so I humbly ask for someone to point me in the right direction. Best regards, Anders

Hi Anders, I am not an expert, but I think restarting the introducer would do the trick. Kind regards, Ed
Re: [tahoe-dev] Remove node?
Yay! I feel proud like a school kid getting his first A :-)

While I'm at it, I suggest using garbage collection among nodes as well. If a node has been inactive longer than XX hours/days/weeks, it could be automatically removed from all lists. I'm sure there are a lot of things to consider, but it seems to work well for files.

Thank you for your help, Oleksandr!

Best regards, Anders

On 6 Jun 2013, at 13:33, Oleksandr Drach luckyred...@gmail.com wrote:

Hello Anders, As you've said, there can be only one introducer at the moment (see FAQ Q17); that's true so far. BTW the bridge is not far away, so we have started our preparations :-) Have a nice day!

Sincerely, Oleksandr Drach.

On 6 Jun 2013, Anders Genell anders.gen...@gmail.com wrote:

Thank you kindly, gentlemen! I will ask my friend with the introducer to do so. This makes me wonder how it all would work when/if multiple introducers are possible in a grid... Oh well, we'll cross that bridge when we come to it (or, as a directly translated Swedish proverb goes, "That day, that sorrow"). Best regards, Anders

On 6 Jun 2013, at 11:26, Oleksandr Drach luckyred...@gmail.com wrote:

Hi guys! Ed, you are absolutely right with your reply :) BTW I have added the question to the Tahoe-LAFS FAQ. Thanks! Sincerely, Oleksandr Drach.

On 6 Jun 2013, Ed Kapitein e...@kapitein.org wrote:

On Thu, 2013-06-06 at 09:43 +0200, Anders Genell wrote:

Hi list! I'm new here, so please excuse my breaking of any code of conduct. We are some friends setting up a small Tahoe grid to use as a cloud-based backup solution. A friend runs the introducer and I run a storage node connected to it. More nodes are to come as more friends join in. When testing things, I created a node on one machine but then decided to run my node on a Raspberry Pi as a low-power server. I didn't manage to move the previous node to the Pi, so I created a new one. I now would like to remove my old node from the grid but cannot find any instructions on how to accomplish that.
I'm sure I've overlooked something obvious, so I humbly ask for someone to point me in the right direction. Best regards, Anders

Hi Anders, I am not an expert, but I think restarting the introducer would do the trick. Kind regards, Ed
Re: [tahoe-dev] Remove node?
Hehe, yes, worry is a very human trait :-) But I can also imagine some slight practical use in having an up-to-date list. We plan to invite friends who are not so very computer-savvy, and it would be good to see if their nodes are suddenly disconnected for some reason. If the list starts getting cluttered by legacy nodes, I imagine it could be slightly less obvious which ones are a real problem. But if there is no real reason for doing anything about it, then I'll survive without a nifty, mind-soothing clean-up.

Best regards, Anders

On 7 Jun 2013, at 02:00, Zooko O'Whielacronx zoo...@gmail.com wrote:

It doesn't bother the Tahoe-LAFS clients to know about old servers that are no longer connected. The main harm from it is that it enlarges the list of known servers and makes humans worry about whether the old, long-gone ones are causing some kind of trouble by still being in that list.

Regards, Zooko