Re: Testing the new tahoe-dev

2021-08-30 Thread Natanael
Here's a test response for you; I suppose if this works too, then the
migration worked?

On Mon, 30 Aug 2021 20:25, Sajith Sasidharan  wrote:

> Greetings!
>
> On Mon, 2021-08-30 at 14:15 -0400, Sajith Sasidharan wrote:
>
> > Hello tahoe-dev@lists.tahoe-lafs.org!
>
> Okay, in case anyone was wondering what that was about:
>
> As discussed a few months earlier, the tahoe-dev list has been in the
> process of moving to infrastructure managed by Oregon State University's
> Open Source Lab (https://osuosl.org/).  Thanks to OSUOSL for generously
> helping us with this transition.
>
> If you are able to read this message, this process should now be complete.
> Please feel free to respond to the list, so that we can test things out a
> bit.
>
> Please note that the new list address is tahoe-dev@lists.tahoe-lafs.org.
> The list membership page is now available at https://lists.tahoe-lafs.org/,
> and archives are at https://lists.tahoe-lafs.org/pipermail/tahoe-dev/.
> Please update your address book and bookmarks accordingly.  We will proceed
> to shut down the old list once the new list is found to be working fine.
>
> Depending on whether the last email from the old list was delivered
> correctly to you or not, you may or may not have received the news: meejah
> announced the availability of the tahoe-lafs 1.16.0 release candidate last
> week, which completes the Python 3 porting effort.  That should be the
> only email missing from the new archives.
>
> https://tahoe-lafs.org/pipermail/tahoe-dev/2021-August/010015.html
>
> If you're testing the latest release candidate, please report problems
> here, or via IRC.  In case you missed certain recent events, #tahoe-lafs
> is at irc.libera.chat these days.
>
> Thank you, and sorry about all the hassle!
>
> Regards,
> Sajith.
>
>
> ___
> tahoe-dev mailing list
> tahoe-dev@lists.tahoe-lafs.org
> https://lists.tahoe-lafs.org/mailman/listinfo/tahoe-dev
>
___
tahoe-dev mailing list
tahoe-dev@lists.tahoe-lafs.org
https://lists.tahoe-lafs.org/mailman/listinfo/tahoe-dev


Re: Sybil attack?

2015-02-17 Thread Natanael
On 17 Feb 2015 13:51, str4d st...@i2pmail.org wrote:

 I2P is an anonymous analogue of the Internet, so it is disingenuous to
 compare it to a user's home network. If you have a private network
 that you have full control over and that can be isolated from the
 outside, then you have no need of the anonymity (and overhead) that
 I2P provides.

I was explicitly only talking about the particular use case where public
introducers are used with no restrictions on who can participate as storage
nodes, because as far as I can see that's the most common way of combining
I2P with Tahoe-LAFS. When I was talking about the I2P version, I was
including those configuration differences.
___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


Re: Sybil attack?

2015-02-16 Thread Natanael
On 16 Feb 2015 12:53, Adonay Sanz adonay.s...@gmail.com wrote:

 Hi Natanael,
 Thanks for answering. Really, I'm a noob in this area.

 I wanted to use the I2P version. So, is there no possible way, present or
future, to block it? Is it really a threat?
 Can you explain to me better what "configured to mimic Freenet" means?

Freenet caches files you request, and caches some files other people
request via you. Then it replies to requests from others and forwards
requests as well. Tahoe-LAFS in I2P mode mimics this behaviour.
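
To make that concrete, here is a toy sketch (my own illustration in Python,
not actual Tahoe-LAFS or Freenet code) of a node that answers from its cache
when it can, and otherwise forwards the request to its neighbours and caches
whatever passes through it:

    # Toy caching/forwarding node; all names here are made up for illustration.
    class Node:
        def __init__(self, name, store=None):
            self.name = name
            self.store = dict(store or {})   # data this node originally holds
            self.cache = {}                  # data learned by relaying requests
            self.neighbours = []

        def request(self, key, visited=None):
            visited = visited if visited is not None else set()
            visited.add(self.name)
            if key in self.store:
                return self.store[key]
            if key in self.cache:
                return self.cache[key]
            for peer in self.neighbours:     # forward to peers not yet asked
                if peer.name in visited:
                    continue
                data = peer.request(key, visited)
                if data is not None:
                    self.cache[key] = data   # keep a copy of what passed through
                    return data
            return None

    # B relays A's data to C and keeps a cached copy of it along the way.
    a, b, c = Node("A", {"k": b"hello"}), Node("B"), Node("C")
    b.neighbours, c.neighbours = [a], [b]
    assert c.request("k") == b"hello" and "k" in b.cache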

In I2P it isn't certain you'll be able to find honest nodes that have the
data you are asking for. Malicious nodes can send fake replies, and even
though your software will know those replies are wrong, it will have to
spend a lot of effort searching for the correct data. So a Sybil attack can
be used for censorship.

But if you know of honest nodes, and the honest nodes are well connected in
the network, then the requests will reach those nodes and the correct reply
will come back. But the fake nodes can overload the honest nodes to slow
things down.
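
A tiny simulation of that point (mine, with made-up numbers): because the
client knows the hash of the data it wants, every fake reply is detected, so
a Sybil flood only costs extra queries and can never substitute wrong data:

    import hashlib
    import random

    real = b"the real erasure-coded share"
    wanted_hash = hashlib.sha256(real).digest()

    # 90 Sybil nodes answering garbage, 10 honest nodes holding the real data.
    replies = [b"garbage from a sybil node"] * 90 + [real] * 10
    random.shuffle(replies)

    queries = 0
    for reply in replies:
        queries += 1
        if hashlib.sha256(reply).digest() == wanted_hash:
            print("got verified data after", queries, "queries")
            break
    else:
        print("data unreachable: denial of service, but never forged data")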
___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


Re: Sybil attack?

2015-02-16 Thread Natanael
On 16 Feb 2015 11:42, Adonay Sanz adonay.s...@gmail.com wrote:

 Hi,
 I'm reading your documentation; does Tahoe-LAFS have any Sybil attack
protection? Or does it really matter? Could some solution be implemented?

Tahoe-LAFS in the default configuration doesn't rely on unknown nodes, so
Sybil attacks aren't a problem there. And because of the built-in
authentication of all data, a Sybil attack could at most prevent access to
the data (or just to chosen parts or versions of it), a form of denial of
service. The data itself can't be faked.
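
A minimal sketch of the "can't fake the data" point (the idea only, not
Tahoe-LAFS's real capability format): a read capability embeds a hash of the
content, so a malicious storage node can refuse to serve data, but anything
it does serve can be checked:

    import hashlib

    def make_cap(content):
        # Toy capability: just a content hash. Real capabilities carry more.
        return "URI:toy:" + hashlib.sha256(content).hexdigest()

    def verify(cap, content):
        return cap == make_cap(content)

    block = b"some stored share"
    cap = make_cap(block)
    assert verify(cap, block)                      # honest node: accepted
    assert not verify(cap, b"attacker's forgery")  # Sybil node: rejected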

The I2P version could be affected (as it is configured to mimic Freenet).
Standard usage isn't.
___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


Re: Tahoe for P2P networks

2014-09-15 Thread Natanael
It wouldn't do that well on its own, but look at the I2P version of it and
the accompanying custom configuration. It behaves like Freenet and allows
you to share a private URL pointing to your published folders.

- Sent from my tablet
On 15 Sep 2014 15:28, kuba kozłowicz jakub.kozlow...@gmail.com wrote:

 Hi,
 I wonder if Tahoe could be used for simulating a Peer-to-Peer network?

 If it could, then what would be the recommended way to start (any
 tutorials, code samples)?

 I am working on an anonymous file sharing application based on P2P
 networks. I am going to write it in Python (Twisted) and I need to somehow
 simulate a network (peers that will request/share content).

 Cheers,
 Jakub

 ___
 tahoe-dev mailing list
 tahoe-dev@tahoe-lafs.org
 https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


Re: Using Tahoe-Lafs capabilities on mobile devices

2014-02-24 Thread Natanael
That sounds like how the I2P-configured version of Tahoe-LAFS behaves (it
kind of mimics Freenet). You should take a look at it and see how you might
be able to reuse parts of that configuration.

- Sent from my phone
On 24 Feb 2014 10:35, Rudenko Pavlo 9fin...@ukr.net wrote:

  Hello everybody,

 My name is Pavlo and I'm a member of a team currently working on an open
 source P2P TV platform project called Nebel.tv.
 We are considering using Tahoe-LAFS functionality as distributed storage
 (meta content - releases, media content) across connected mobile devices
 over an IPv6-based protocol.

 I'm writing to ask whether anybody can advise on using Tahoe-LAFS
 capabilities in our project, or whether you could be interested in
 contributing.

 Here are a few more details:
 This has to be a network of vastly distributed nodes, each with both client
 and server capabilities. So every node stores some information as well as
 reads it. This is a P2P architecture similar to what you can find in
 BitTorrent, but different in that it is applied to lossy storage of media
 data. Using LAFS we plan to remove the need for tracker nodes, as all
 members of the network will become 'trackers' for their own published
 content as well as the content of their directly visible peers.
 Please keep in mind that a single network user might contribute several
 nodes to the network, as the same application can be installed across
 multiple personal devices: smartphones, tablets, and always-on devices such
 as HDMI dongles or set-top boxes.

 Looking forward to your replies.

 Regards,
 Pavlo

 ___
 tahoe-dev mailing list
 tahoe-dev@tahoe-lafs.org
 https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


Re: Fwd: Erasure Coding

2013-12-02 Thread Natanael
You just need to fake a lot of connections, then.

- Sent from my phone
On 2 Dec 2013 06:15, David Vorick david.vor...@gmail.com wrote:

 You don't get mining money until you are actually storing data. Nodes are
 randomly selected. If you randomly end up hosting the same piece of data
 10,000 times, you will be able to cheat.

 But if the network is sufficiently large, the probability of 1 node
 getting to host the same data twice is very slim, and engineering a
 coincidence would be very expensive because nodes are randomly selected
 and the minimum amount of time you can rent storage for is 1 month.
 Furthermore, there is rotation, meaning that after a certain amount of time
 a different random host will be selected to keep data, which means any
 engineered coincidence will not last very long.


 On Sun, Dec 1, 2013 at 5:29 PM, Natanael natanae...@gmail.com wrote:

 Yeah, so how will anybody stop you from hosting 10 GB but pretending to
 be 10 000 nodes, thus making as much as somebody that would be storing 100
 TB?

 - Sent from my phone
 On 1 Dec 2013 22:37, David Vorick david.vor...@gmail.com wrote:

 Thanks Dirk, I'll be sure to check all those out as well. Haven't yet
 heard of spinal codes.

 Natanael, all of the mining is based on the amount of storage that you
 are contributing. If you are hosting 100 nodes each with 10GB, you will
 mine the same amount as if you had just one node with 1TB. The only way you
 could mine extra credits is if you could convince the system that you are
 hosting more storage than you are actually hosting.
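
One generic way such a system can check that a host really holds what it
claims to hold (my own sketch of a common technique, not necessarily the
mechanism David's design uses) is a challenge/response over random slices of
the stored file, keyed with a fresh nonce so answers can't be precomputed:

    import hashlib
    import hmac
    import os
    import random

    def respond(file_bytes, offset, length, nonce):
        # Answering requires touching the actual stored bytes.
        return hmac.new(nonce, file_bytes[offset:offset + length],
                        hashlib.sha256).digest()

    data = os.urandom(1 << 16)   # the file the host claims to be storing

    # One audit round. (In practice the verifier would keep precomputed
    # answers or a Merkle root rather than its own full copy of the file.)
    nonce = os.urandom(16)
    offset = random.randrange(len(data) - 1024)
    expected = respond(data, offset, 1024, nonce)
    answer = respond(data, offset, 1024, nonce)   # what an honest host returns
    assert hmac.compare_digest(expected, answer)

A host that has deleted the data cannot answer such audits without fetching
the bytes back from somewhere, which is the property a storage-based mining
scheme needs.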


 On Sun, Dec 1, 2013 at 2:40 PM, jason.john...@p7n.net wrote:

 What if you gave them the node to use. Like they had to register for a
 node. I started something like this but sort of stopped because I’m lazy.



 *From:* tahoe-dev-boun...@tahoe-lafs.org [mailto:
 tahoe-dev-boun...@tahoe-lafs.org] *On Behalf Of *Natanael
 *Sent:* Sunday, December 1, 2013 1:37 PM
 *To:* David Vorick
 *Cc:* tahoe-dev@tahoe-lafs.org
 *Subject:* Re: Fwd: Erasure Coding



 Can't you pretend to run more nodes than you actually are running in
 order to mine more credits? What could prevent that?

 - Sent from my phone

 On 1 Dec 2013 17:25, David Vorick david.vor...@gmail.com wrote:



 -- Forwarded message --
 From: *David Vorick* david.vor...@gmail.com
 Date: Sun, Dec 1, 2013 at 11:25 AM
 Subject: Re: Erasure Coding
 To: Alex Elsayed eternal...@gmail.com

 Alex, thanks for those resources. I will check them out later this week.

 I'm trying to create something that will function as a market for cloud
 storage. People can rent out storage on the network for credit (a
 cryptocurrency - not bitcoin but something heavily inspired from bitcoin
 and the other altcoins) and then people who have credit (which can be
 obtained by trading over an exchange, or by renting to the network) can
 rent storage from the network.

 So the clusters will be spread out over large distances. With RAID5 and
 5 disks, the network needs to communicate 4 bits to recover each lost bit.
 That's really expensive. The computational cost is not the concern, the
 bandwidth cost is the concern. (though there are computational limits as
 well)

 When you buy storage, all of the redundancy and erasure coding happens
 behind the scenes. So a network that needs 3x redundancy will be 3x as
 expensive to rent storage from. To be competitive, this number should be as
 low as possible. If we had Reed-Solomon and infinite bandwidth, I think we
 could safely get the redundancy below 1.2. But with all the other
 requirements, I'm not sure what a reasonable minimum is.

 Since many people can be renting many different clusters, each machine
 on the network may (will) be participating in many clusters at once
 (probably in the hundreds to thousands). So the cost of handling a failure
 should be fairly cheap. I don't think this requirement is as extreme as it
 may sound, because if you are participating in 100 clusters each renting an
 average of 50gb of storage, your overall expenses should be similar to
 participating in a few clusters each renting an average of 1tb. The
 important part is that you can keep up with multiple simultaneous network
 failures, and that a single node is never a bottleneck in the repair
 process.



 We need 100s - 1000s of machines in a single cluster for multiple
 reasons. The first is that it makes the cluster roughly as stable as the
 network as a whole. If you have 100 machines randomly selected from the
 network, and on average 1% of the machines on the network fail per day,
 your cluster shouldn't stray too far from 1% failures per day. Even more so
 if you have 300 or 1000 machines. But another reason is that the network is
 used to mine currency based on how much storage you are contributing to the
 network. If there is some way you can trick the network into thinking you
 are storing data when you aren't (or you can somehow lie about the volume),
 then you've broken the network

Re: Fwd: Erasure Coding

2013-12-01 Thread Natanael
Can't you pretend to run more nodes than you actually are running in order
to mine more credits? What could prevent that?

- Sent from my phone
On 1 Dec 2013 17:25, David Vorick david.vor...@gmail.com wrote:



 -- Forwarded message --
 From: David Vorick david.vor...@gmail.com
 Date: Sun, Dec 1, 2013 at 11:25 AM
 Subject: Re: Erasure Coding
 To: Alex Elsayed eternal...@gmail.com


 Alex, thanks for those resources. I will check them out later this week.

 I'm trying to create something that will function as a market for cloud
 storage. People can rent out storage on the network for credit (a
 cryptocurrency - not bitcoin but something heavily inspired from bitcoin
 and the other altcoins) and then people who have credit (which can be
 obtained by trading over an exchange, or by renting to the network) can
 rent storage from the network.

 So the clusters will be spread out over large distances. With RAID5 and 5
 disks, the network needs to communicate 4 bits to recover each lost bit.
 That's really expensive. The computational cost is not the concern, the
 bandwidth cost is the concern. (though there are computational limits as
 well)

 When you buy storage, all of the redundancy and erasure coding happens
 behind the scenes. So a network that needs 3x redundancy will be 3x as
 expensive to rent storage from. To be competitive, this number should be as
 low as possible. If we had Reed-Solomon and infinite bandwidth, I think we
 could safely get the redundancy below 1.2. But with all the other
 requirements, I'm not sure what a reasonable minimum is.
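
For a rough feel for those numbers, here is a back-of-the-envelope sketch
(parameters are made up for illustration, assuming a plain k-of-n
Reed-Solomon-style code where any k of n shares reconstruct the data and a
classic repair downloads k shares):

    def redundancy(k, n):
        return n / k              # bytes stored per byte of payload

    def repair_traffic(k, share_size=1):
        return k * share_size     # shares downloaded to rebuild one lost share

    # RAID5-like 4-of-5: 1.25x overhead, 4 units of traffic per repaired share.
    print(redundancy(4, 5), repair_traffic(4))

    # 10-of-12: overhead drops to 1.2x, but each repair now pulls 10 units.
    print(redundancy(10, 12), repair_traffic(10))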

 Since many people can be renting many different clusters, each machine on
 the network may (will) be participating in many clusters at once (probably
 in the hundreds to thousands). So the cost of handling a failure should be
 fairly cheap. I don't think this requirement is as extreme as it may sound,
 because if you are participating in 100 clusters each renting an average of
 50gb of storage, your overall expenses should be similar to participating
 in a few clusters each renting an average of 1tb. The important part is
 that you can keep up with multiple simultaneous network failures, and that
 a single node is never a bottleneck in the repair process.

 We need 100s - 1000s of machines in a single cluster for multiple reasons.
 The first is that it makes the cluster roughly as stable as the network as
 a whole. If you have 100 machines randomly selected from the network, and
 on average 1% of the machines on the network fail per day, your cluster
 shouldn't stray too far from 1% failures per day. Even more so if you have
 300 or 1000 machines. But another reason is that the network is used to
 mine currency based on how much storage you are contributing to the
 network. If there is some way you can trick the network into thinking you
 are storing data when you aren't (or you can somehow lie about the volume),
 then you've broken the network. Having many nodes in every cluster is one
 of the ways cheating is prevented. (there are a few others too, but
 off-topic).
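
A quick check of the "roughly as stable as the network" claim, treating
daily failures as independent with probability 1% (an idealized model of my
own, not something taken from the design):

    from math import comb

    def p_more_than(failures, n, p=0.01):
        # P(more than `failures` machines of an n-machine cluster fail in a day)
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(failures + 1, n + 1))

    print(p_more_than(3, 100))    # ~2%: chance a 100-node cluster loses over 3%
    print(p_more_than(30, 1000))  # well under 1e-8 for the same 3% threshold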

 Cluster size should be dynamic (fountain codes?) to support a cluster that
 grows and shrinks in demand. Imagine if some of the files become public
 (for example, youtube starts hosting videos over this network). If one
 video goes viral, the bandwidth demands are going to spike and overwhelm
 the network. But if the network can automatically expand and shrink as
 demand changes, you may be able to solve the 'Reddit hug' problem.

 And finally, machines that only need to be on some of the time give the
 network a tolerance for things like power failures, without needing to
 immediately assume that the lost node is gone for good.



 ___
 tahoe-dev mailing list
 tahoe-dev@tahoe-lafs.org
 https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


Re: [tahoe-dev] List server leaves invalid DKIM signatures

2013-11-25 Thread Natanael
1: Why is this off topic thread still going?

2: You can't draw any scientific conclusions from biased samples. We don't
really have any sets of women and men with circumstances equal enough to
tell what causes the differences in these fields. The actual scientific
studies that exist don't show drastic differences in capability; they
suggest that both are equally capable (although there is support for
preferences differing on average).

Now, back on topic please.

- Sent from my phone
On 25 Nov 2013 14:04, James A. Donald jam...@echeque.com wrote:

 On 2013-11-25 21:48, Lukas Pirl wrote:

 Interesting - are there such correlations for hair colors also?


 Darwinism tells us that females are likely to be worse than males at
 certain activities, and better than males at other activities, in
 particular and especially, at creating life.  See Darwin's lengthy
 discussion of sexual selection and male/female differences in The Descent
 of Man

 This is likely to explain the observed underperformance of females at a
 wide range of activities.  We should no more expect a significant number of
 females in a group selected for exceptional ability in fields that involve
 logic and maths, than a significant number of females in a group selected
 for running fast.

 Indeed, if a group selected for running fast had any females at all in it,
 the heavy hand of political correctness would be obvious, and we would
 expect the females in the group to lag conspicuously behind.

 There are plenty of women that can run faster than me, but there are no
 women running athletes that can run as fast as a male running athlete.

 Since group differences exist, if we select people to perform some
 difficult task, then, if we are highly selective, if the task is hard, some
 groups will be massively overrepresented among those so selected, and some
 groups massively underrepresented, or, quite often, entirely absent.

 Women should be content that they are clearly superior at the most
 important job that there is.

 Who discovered radon?

 The discovery of radon happened at roughly the same time as the discovery
 of radium, and was far more important, because radon revealed that
 radioactivity involved elements decaying from one element into another, the
 transformation of the elements.

 The fact that you know who discovered radium, but do not know who
 discovered the other hundred odd elements without looking them up, should
 tell you Marie Curie was not famous for being a scientist, but famous for
 being a *woman* scientist, received her Nobel for doing science while
 female, much as a dancing bear is not famous for dancing well, but because
 it can dance at all.

 Indeed Marie Curie did not discover radium.  She was the most junior
 person on the three person team that discovered radium, though the course
 of action that led to the discovery was her idea.  You not only don't
 remember who discovered radon, you do not remember the other two people on
 the team that discovered radium.

 If women were equal on average to men in stem fields, you would have a
 more impressive poster girl than Marie Curie.

 ___
 tahoe-dev mailing list
 tahoe-dev@tahoe-lafs.org
 https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev

___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev


Re: [tahoe-dev] [cryptography] SNARKs, constant-time proofs of computation

2013-11-07 Thread Natanael
Resending since it bounced the last time.

 SCIPR is another one. http://www.scipr-lab.org/

 If it became efficient it could be useful for mining in a Bitcoin fork
(commonly called altcoins). I don't know what kind of computations you'd
actually want it to do, though. Most meaningful computations could easily be
made obsolete by better algorithms, forcing you to switch algorithms often.
You also have the problem of achieving consensus on what to compute.

 What exactly would it be used for in Tahoe-LAFS?

 - Sent from my phone

 On 7 Nov 2013 18:54, Steve Weis stevew...@gmail.com wrote:

 Hi Andrew. You may be interested in contacting an early-phase startup
called Tegos:
 http://www.tegostech.com/

 They're in stealth mode and haven't posted any info online, but they are
legitimate and relevant to this work.

 On Thu, Nov 7, 2013 at 8:39 AM, Andrew Miller amil...@cs.ucf.edu wrote:

 Here's a possible Tesla Coils and Corpses discussion I'd like to have
 sometime a few weeks from now maybe:
SNARKs (Succinct Non-interactive Arguments of Knowledge) are a
 recent hot topic in modern cryptography. A generic SNARK scheme lets
 you take an arbitrary computation (e.g., the routine that checks a
 signature and a Merkle tree branch) and compile it to a *constant
 size* compressed representation, called the verification key. An
 untrusted server can execute the computation on behalf of the client,
 and produce a *constant size* proof that it was carried out correctly.
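
For concreteness, here is the kind of computation the paragraph above
mentions, written as an ordinary function (my own illustration, covering
only the Merkle-branch check and leaving out the signature check; this is
the plain computation, not a SNARK circuit). A SNARK scheme would compile
such a function into a constant-size verification key and let an untrusted
server produce a constant-size proof that it evaluated it correctly:

    import hashlib

    def h(x):
        return hashlib.sha256(x).digest()

    def check_branch(leaf, branch, root):
        # `branch` is a list of (sibling_hash, side) pairs from leaf to root.
        acc = h(leaf)
        for sibling, side in branch:
            acc = h(sibling + acc) if side == "L" else h(acc + sibling)
        return acc == root

    # Tiny 4-leaf tree: prove that leaf b"b" is included under `root`.
    l0, l1, l2, l3 = (h(x) for x in [b"a", b"b", b"c", b"d"])
    n01, n23 = h(l0 + l1), h(l2 + l3)
    root = h(n01 + n23)
    assert check_branch(b"b", [(l0, "L"), (n23, "R")], root)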


 ___
 cryptography mailing list
 cryptogra...@randombit.net
 http://lists.randombit.net/mailman/listinfo/cryptography

___
tahoe-dev mailing list
tahoe-dev@tahoe-lafs.org
https://tahoe-lafs.org/cgi-bin/mailman/listinfo/tahoe-dev