Colin et al., That's a good introduction to consciousness. *We need a more direct/practical approach to AGI* - the hybrid system may be the fastest and least expensive route to AGI, and anyone from computer science, electronics and nanotechnology to neuroscience can contribute.

4 The hybrid approach to AGI
The origins of the problem go back a few decades, when action potentials were mistakenly approximated as stereotyped digital events. As a result, many scientists were encouraged to imagine that brain computations could be thoroughly simulated and mapped onto digital computers using connectionist models. This became the majority opinion and, in spite of recent refutation, the flawed view continues to be sustained; all the brain initiatives have followed this vision. "*Don't be trapped by dogma, which is living with the results of other people's thinking*" - as we have for six decades. *Understanding the brain's language and the development of AI techniques are highly co-dependent.*

To understand the main problem we can start with two relevant examples of algorithmic development:

a. A simulation on a digital computer can faithfully reproduce the characteristics of flight.

b. "Realistic" models of neurons (e.g., Hodgkin-Huxley) simulated on a digital computer do not display or generate intelligent behavior.

The gap between (a) and (b) is easily explained. In the first case the simulation on a digital computer succeeds because the model is able to realistically include the physics of flight. In the second case, *the biological structure uses molecular/quantum computations to integrate meaningful information*. The biophysics responsible for intelligent behavior is not included in current models (e.g., Hodgkin-Huxley), nor in any AGI attempt. Since molecular/quantum computations can hardly be reproduced on digital computers, replicating these computations with any algorithmic approach is far more difficult. We already know that wiring together a set of non-AGI systems may never generate AGI. What is the solution? We know that the loss of natural biophysics is what undermines the second model. Clearly, to solve the problem one needs to find a way to include the full model of computation generated within a biological structure.
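To make example (b) concrete, here is a minimal sketch of the kind of "realistic" neuron model in question: a single Hodgkin-Huxley compartment integrated on a digital computer (standard squid-axon textbook parameters, simple forward-Euler stepping). It reproduces the membrane voltage and spiking faithfully, yet nothing in it represents the molecular-scale biophysics argued about above - the point being that the simulation is complete as electrophysiology while still omitting that level entirely.

```python
import math

# Standard Hodgkin-Huxley parameters (squid giant axon; mV, ms, uA/cm^2)
C_M = 1.0                              # membrane capacitance (uF/cm^2)
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # maximal conductances (mS/cm^2)
E_NA, E_K, E_L = 50.0, -77.0, -54.387  # reversal potentials (mV)

# Voltage-dependent gating rate functions (textbook forms)
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)

def simulate(i_ext, t_max=50.0, dt=0.01):
    """Forward-Euler integration; returns the membrane voltage trace (mV)."""
    v = -65.0
    # start gating variables at their steady-state values for v
    m = alpha_m(v) / (alpha_m(v) + beta_m(v))
    h = alpha_h(v) / (alpha_h(v) + beta_h(v))
    n = alpha_n(v) / (alpha_n(v) + beta_n(v))
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)   # sodium current
        i_k = G_K * n**4 * (v - E_K)          # potassium current
        i_l = G_L * (v - E_L)                 # leak current
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        trace.append(v)
    return trace

# A 10 uA/cm^2 step current makes the model fire action potentials;
# with no input the voltage stays near the -65 mV resting level.
spiking = simulate(i_ext=10.0)
resting = simulate(i_ext=0.0)
```

Running this produces a perfectly serviceable spike train; the gap the text describes is that such a trace is a *description* of membrane electrics, not an instance of the underlying molecular computation.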
Building a system that evolves in a way similar to how our brains do will solve the problem and guarantee that the resulting "computing machine" is able to integrate meaningful information. At least two phases are needed to construct a mind from biological building blocks.

*A.* The first phase requires growing a biological structure, either from natural stem cells or from induced pluripotent cells. Providing nutrients, oxygen and environmental interaction is needed to shape the structure and control the spatial organization of cells.

*B.* The second phase creates a virtual world in which the evolving biostructure can be trained to learn and to experience live scenes, following a specific, gradual program. It is likely that after training the hybrid system will be able to mimic human behavior in the 'real' world.

The first phase will require developing a system and technology to grow a biological structure. The entire development will be regulated through a computer interface equipped with microcontrollers and various nanosensors. The digital computer will obtain real-time information on the state of the evolving structure and detect its need for neurotrophic factors, nutrients and oxygen. This phase will allow the biological building blocks to self-assemble and organize into discrete, interdependent domains. Different ways to deliver nutrients and oxygen, and to achieve spatial and temporal control of living tissue by manipulating molecular and genetic technology, can be explored (Delcea et al., 2011; Lewandowski et al., 2013; Takebe et al., 2013; Deisseroth and Schnitzer, 2013; Wickner and Schekman, 2005). Dielectrophoretic actuation will be used for cell manipulation to shape the evolving 3D structure (Pethig et al., 2010; Reyes, 2013; Velugotla et al., 2012). In addition, carbon nanotubes will provide the physical support for development.
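As an illustrative sketch only - the sensor names, threshold values and command strings below are hypothetical, not any existing interface - the first-phase regulation loop described above (nanosensors reporting to a digital controller that triggers delivery of oxygen, nutrients and neurotrophic factors) could be as simple as a threshold controller:

```python
# Hypothetical sketch of the phase-one regulation loop: one cycle of
# nanosensor readings in, delivery commands for the microcontrollers out.
# All quantity names and threshold values are illustrative placeholders.

# Minimum acceptable sensor readings (arbitrary illustrative units)
THRESHOLDS = {"oxygen": 0.80, "glucose": 0.60, "neurotrophic_factor": 0.40}

def control_step(readings):
    """Compare one cycle of nanosensor readings against the thresholds
    and return the list of delivery commands to issue."""
    commands = []
    for quantity, minimum in THRESHOLDS.items():
        level = readings.get(quantity, 0.0)  # missing sensor => treat as depleted
        if level < minimum:
            commands.append(f"deliver:{quantity}")
    return commands

# One monitoring cycle in which oxygen has dropped below its threshold:
readings = {"oxygen": 0.55, "glucose": 0.72, "neurotrophic_factor": 0.45}
commands = control_step(readings)   # -> ["deliver:oxygen"]
```

A real system would of course involve closed-loop dynamics, sensor fusion and safety interlocks; the sketch only fixes the shape of the idea - digital supervision of a growing biological structure.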
They can be used to create conductive structures that perform bidirectional communication between the evolving biostructure and computers. This will allow monitoring the evolution of neurons, glial cells, ... delivering neurotrophic factors and engineering all the structures.

The second phase will require building bidirectional communication between the evolving brain and the computer in order to create a virtual world and enhance learning. One can read and interpret the information processed in the evolving structure from the data recorded by the different nanosensors. Using computer technology, the virtual world will provide accelerated training. Substitutional reality will enhance learning, and the evolving brain will be able to mimic human behavior in the real world. The entire model can be schematically conceptualized as an interactive training system that shapes the development of the biological structure based on natural language and visual information.

This hybrid approach is *a direct path to generating general intelligence*. One can shape and "program" a biological structure and connect it with digital computers to develop human-like intelligence. In addition to algorithms that run on digital computers, one can use biological building blocks to build *a full model of computation*. Building such a system will be the first step toward reliably solving natural language processing tasks - "hard problems" for any algorithmic design. The hybrid system will be a new tool for discovery, far more powerful than any digital system alone. H-AGI can be seen as a transitional step, required to understand which parts can be fully replicated in synthetic form to build a more powerful computing system.

Note: *IGI is a game-changing strategy* - it brings together AI, AGI, neuroscience and nanotechnology to design and build a full model of computation. We need someone like Steve Jobs - *that will make all the difference for IGI*.
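In the same hedged spirit, here is a minimal sketch of the second-phase training program: present virtual-world scenes, score the biostructure's response through the sensor interface, and advance the gradual curriculum only once a stage is mastered. Every name here (the curriculum stages, the response-scoring stub) is hypothetical; the stub simply stands in for whatever readout the nanosensor interface would provide.

```python
# Hypothetical sketch of the phase-two virtual-world training program:
# a gradual curriculum that advances only when the evolving structure
# responds adequately. All names and values are illustrative.

CURRICULUM = ["simple shapes", "object permanence", "spoken words", "live scenes"]
PASS_SCORE = 0.75  # illustrative mastery threshold

def present_scene(stage, trial):
    """Stub for the virtual world: in a real system this would render
    stage-appropriate visual and natural-language stimuli and score the
    biostructure's sensed response. Here it returns a fake score that
    improves with practice."""
    return min(1.0, 0.4 + 0.1 * trial)

def train():
    """Run each curriculum stage until the response score passes threshold;
    return (stage, trials-needed) pairs as a training log."""
    log = []
    for stage in CURRICULUM:
        trial = 0
        while present_scene(stage, trial) < PASS_SCORE:
            trial += 1
        log.append((stage, trial))
    return log
```

The design choice worth noting is the gating: the "specific gradual program" in the text corresponds to not advancing the curriculum until the current stage is mastered, exactly what the `while` loop encodes.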
I tried to keep it simple; please feel free to correct, add.... Dorian

*PS. "And most important, have the courage to follow your heart and intuition..... **Don’t be trapped by dogma**"*

On Mon, May 25, 2015 at 4:56 PM, Colin Hales <[email protected]> wrote:

> Dorian et al.
>
> Installment #2 of my stab at a paper.
> It is section 5 in the original docx. This is a section on the synthetic approach and the science of consciousness .... again with a slant on AGI investment.
>
> Section 4 is next and is where I'll need Dorian's contribution for the organic synthetic AGI program example.
> I have put in a section for references, although only a few have been put in as yet.
> I suggest an acknowledgement section.
>
> Because my personal circumstances mean I can't spend time in discussion until next week, if I could continue my 'seagull' depositing technique it would be greatly appreciated.
> ===================================
> 5 Machine consciousness and the synthetic AGI approach
>
> Synthetic AGI, whatever the chosen hybridization level, cannot divorce itself from dealing with consciousness. Indeed, in introducing synthetic approaches to AGI such as those described above, it becomes quite clear that the discipline of AGI itself and the science of consciousness are deeply connected. We find ourselves faced with the realization that the science of consciousness and the AGI program may actually be regarded, eventually, as the same thing. It seems worth acknowledging the possibility that the explicit recognition of that state of affairs is actually central to the proposed changes in AGI approach.
>
> To see this confronting possibility we can use the established vocabulary of the youthful science of consciousness (Hales, 2014). In the most general sense that can be used in a science context, the word consciousness refers to the first-person-perspective (1PP) of *anything*.
We can > consider consciousness of X to be ‘*what it is like to be X from the > first person perspective of being X*’. To scientifically study > consciousness is to construct some kind of account predictive of the 1PP of > some part of the natural world. We need have no theory of consciousness to > speak of it this way. Nor need we attribute any relationship between > consciousness-as-the-1PP and any behaviour or memory or any other state of > affairs. We need not presuppose any particular chunk of the natural world > to speak of consciousness this way. It is a completely general concept. It > is one of the few concrete positions that the science of consciousness has > been able to formulate. > > > > Consider ‘being’ a rock. What might the scientific statement of the > consciousness, the 1PP, of a rock be? Rocks cannot behave. Yet we have to > admit that from the perspective of *being* the rock there may be a 1st > person perspective of some kind. It may be an experience of ‘happy’ or > ‘cold’ or something more sophisticated. For example there may be a visual > scene, from the point of view of being the rock, of everything surrounding > the rock. If we had a science of consciousness and we were able to claim, > scientifically, that ‘*it is not like anything from the 1PP of a rock’* > and that claim was to be scientifically accepted, what would that > scientific statement look like? The answer to this riddle is that currently > we do not know. What we can demonstrate, however, is that central to the > synthetic AGI science program is the potential to be able to say something > about consciousness – the 1PP – in a way that was previously impossible. > That is why we have to accept, from its inception, that synthetic AGI and > the matter of the science of consciousness are deeply enmeshed. > > > > This can be a difficult mental leap to make for some investigators. 
> To help, consider the 1PP of a bacterium, worm, mouse, dog, computer, a neuromorphic chipset, tree, rock, human. Of all these things the only thing we know for sure is that ‘it is like something’ to be that part of the natural world called a human or, better, to ‘be a human brain’. It is also one of the few proved facts of the science of consciousness that whatever the physics involved in the generation of a 1PP, it is contained within human brain tissue only and no other part of the human. This knowledge of the existence of a 1PP is accepted despite us being unable to scientifically prove it to each other. This is because we cannot observe observations (the mental experiential life of another human) themselves. The science of consciousness is a scientific account of how we observe at all – in the first place. All we can actually observe with consciousness is brain material delivering consciousness - an act of observation - to the brain itself, from the 1PP.
>
> Some deny consciousness exists at all (Dennett, 1991). Some accept consciousness as real but irrelevant to intelligence and cognition. We are forced here to accept that there is something to explain, not because any particular position is right or wrong, but merely because the argument exists at all. The argument itself – a capacity to be unsure or confused about consciousness – is all that is needed to justify that a science of it is required. Here we can show that synthetic AGI can be used to get closer to an answer to the question of consciousness in both humans and elsewhere, including machines. In doing so we also get to understand the relationship between consciousness and intelligence. We must also concede that we may never solve the problem. It is, however, possible to see how synthetic AGI may take us as close to these answers as we can ever get. Perhaps in its maturity there may be more clarity in this.
> At this juncture of the birth of synthetic AGI options, however, this seems to be what is afoot.
>
> To see the centrality of synthetic AGI approaches to a science of consciousness and its connection to intelligence, simply ask these questions:
>
> (a) “*What is it like to be a computer (100% analytic in software) AGI inside a robot body X?*”
> (b) “*What is it like to be a neuromorphic chipset (100% analytic in hardware) AGI inside a robot body X?*”
> (c) “*What is it like to be a 100% synthetic organic brain AGI inside a robot body X?*”
> (d) “*What is it like to be a neuromorphic chipset (100% inorganic synthetic in hardware) AGI inside a robotic body X?*”
> (e) “*What is it like to be a H% synthetic hybrid AGI inside a robotic body X?*”
>
> Notice that in every case robot body X is the same. The same sensory/motor capabilities exist in each instance. Robots (a) to (e) may behave differently in identical contexts. They may have different abilities to learn and behave, and different levels of actual or potential intelligence. Whatever the differences between (a)…(e), they relate entirely to differences in the brain of the complete robot. What is particularly striking is (c). Consider a synthesised *organic* human brain that is grown to become (somehow, by means yet to be found) identical to a human brain except to the extent that its peripherals must accommodate the robot body's sensory/motor capacities. In this case we are faced with a very compelling case that, because all of the physics of the brain is retained, the resultant brain must have consciousness. If anyone decides to deny consciousness in the synthetic brain, then that claimant must make an argument that a brain originating synthetically has some different level/kind of consciousness compared to an identical naturally arising brain. We do not have to make that claim here or prove it either way.
> What we do here is demonstrate how the synthetic brain idea confronts this prospect and suggests a scientific way to make some progress: through development of synthetic AGI brains.
>
> Consider next the inorganic synthetic brains (d) and (e). We know that whatever it is in the physics of the brain that results in consciousness may be literally incorporated in the physics of a new kind of synthetic-style neuromorphic chipset. Actual brain physics, albeit in inorganic form, exists in the chipset. That being the case, we can therefore make an argument that ‘it may be like something (or not)’ to be that chipset. Some brain physics may be necessary for consciousness, some brain physics may not. The question of which physics is essential and which is not is, from the perspective of a synthetic AGI program, a scientific question that has a testable answer. That is, for the first time ever, a completely analytic AGI that *models* the physics can be compared to a hybrid AGI that actually uses the physics. If a difference in behaviour can be found to be critically involved in the difference in physical instantiation, then that difference can, potentially, be linked to an argument about the essentialness of the physics to AGI behaviour and, eventually, to arguments about consciousness. This lineage of empirical testing can therefore, in its maturity, become a scientific program with testable results that would not otherwise exist. Pure analytic AGI on its own cannot answer such questions. Pure synthetic AGI on its own is afflicted with a different kind of the same problem: that which has dogged the science of consciousness all along. Together, however, the analytic/synthetic contrast can provide answers that would otherwise be missing.
>
> Now go back to (b), the 100% analytic and present-day neuromorphic hardware AGI brain.
> There is a huge industry involved in these chips. They are growing in sophistication and in application week by week. If synthetic AGI approaches join them, become mainstream, and a new kind of neuromorphic chipset emerges that actually has some brain physics on it instead of what we currently build, then within that community we would expect to find the question “*What is it like to be one of these neuromorphic chipsets?*” in papers, in workshops and in conferences. The scientific posing of such a question, and an expectation that it has a scientific answer that involves the community in the science of consciousness, would be expected to become normal practice. It is a little sobering to imagine the ultimate impact that synthetic AGI approaches may have on the existing neuromorphic engineering community. The area has to undergo a profound shift in thinking. That shift means that the science of consciousness, computer science and related engineering fields will eventually have synthetic approaches as a normal part of training curricula – something that seems very foreign at this point in time.
>
> We can now see how synthetic AGI approaches, however they are implemented, combined with analytic approaches, speak directly to the role (or not) of consciousness in intelligent behaviour *and*, potentially, the origins and nature of consciousness itself. Such claims are, within a mature form of the approach, in-principle scientifically testable.
>
> The cultural impact is, however, a little more far-reaching. Observe the altered state of science itself that is implicit in synthetic AGI approaches. It involves, for the first time in history, the science of a first person perspective. As practised in neuroscience at present, or as might be practised in the synthetic AGI approach proposed here, this is unique as science. No other science is expected to involve itself in the direct account of a first person perspective.
> That account is unique in that it is actually an account (albeit indirectly) of ourselves (scientists) in our role as the scientific observer. Whatever science itself looks like after a science of consciousness is accessed, the introduction of synthetic approaches to AGI is a fundamental part of its progress. Currently we have particle physics smashing the matter of the universe into ever smaller components in the CERN particle supercollider. It is not without a sense of irony that while we invest vast amounts of funding in this amazing science of the infinitesimal, at the other end of science, where we meet the most complex single object in the known universe, the brain, the equivalent of the CERN supercollider – the creation of a massive ‘particle’ called a brain through synthetic AGI experimentation – speaks a story of how scientists do science at all, and yet has been essentially missing the synthetic half of its natural science program … until now.
> 5.1 Summary: Synthetic AGI and the route to a science of consciousness
>
> We leave this section having identified a significant complexity entailed in the way synthetic AGI is deeply involved in the science of consciousness and in its impact on the nature of science itself. It is part of a major shift in science practice.
>
> However complicated this sounds, synthetic AGI practice need not concern itself with any of this to start and operate successfully.
>
> Synthetic AGI needs no theory of consciousness to proceed. Synthetic AGI practice need not concern itself with abstract considerations of science practice and culture. How can this be? This is because synthetic AGI is actually a reversion to a form of discovery by empirical investigation. Fire was used millennia prior to a physics of combustion. We burned things to *acquire* a theory of combustion. Likewise we flew first, in ignorance, in order to acquire the physics of flight.
> In exactly the same way, the practice of synthetic AGI can be seen as a way to, in principle, build a conscious machine without a theory of consciousness. Instead we first build the conscious machine in order to find a theory of consciousness.
>
> The synthetic AGI approach can now be accurately seen for what it is. It is not a new kind of discovery. In fact it is a *reversion* to a kind of science practice that has simply been sidelined while the birth of analytic AGI took centre stage for a while. With the maturity of analytic AGI, and with a new, modern vocabulary, we can now see the events of the last half century from a new perspective. The introduction of synthetic AGI can now be seen to be a reversion to a form of empirical science that has always been there and that has always presented this kind of possibility. Synthetic AGI presents a way to reconnect a relatively estranged community to centuries of empirical practice; a mode of discovery that was simply set aside at the birth of what we now see to be analytic AGI.
>
> This presents another way of viewing synthetic AGI as a new investment opportunity: it involves nothing new. It is actually backed by a history of success of empirical science itself that is centuries older than the computer revolution. What is actually novel is the *pair* of approaches. Analytic *and* synthetic AGI form a duo of approaches that, together, form a novel route to science’s future. That will only happen, of course, with investment in the synthetic half of the duo that has, to date, been lacking.
>
> ================================end of section
>
> cheers for now. Really sorry I can't play ball elsewhere here. Gotta go.
>
> regards
>
> Colin Hales
>
> On Sun, May 24, 2015 at 11:36 AM, Colin Hales <[email protected]> wrote:
>
>> Dear IGI enthusiasts,
>> Here's a stab at an intro to a paper that I hope begins to capture the essence of what is proposed.
>> I don't claim it as perfect or the final product.
>> What I need to know is if it speaks in a way that might lead to the change we are looking for.
>>
>> *=========================================*
>> *AGI Directions: towards Hybrid (H) and Synthetic (S) Forms.*
>> By
>> Dorian Aur (see previous posts)
>>
>> (blame for this bit is accepted by Colin Hales)
>> others? TBA.
>> 1 Introduction
>>
>> Here we seek to instigate a broadening of approaches to artificial general intelligence (AGI). Be it an artificial brain the size of a worm's, ant's, bee's, dog's or human's, such an artificial intelligence is recognized here as a kind of AGI. The original science program, coined ‘artificial intelligence’ (AI) in 1956 {refs}, set sail at the birth of computing with the goal of creating machines that potentially have human-level intelligence or better. What has actually happened since then is the application of computers to a vast array of technical challenges, within which great successes have occurred and are ongoing. However, in practice AI successes fell, and continue to fall, within a now well-recognized category called ‘narrow’ or ‘domain-bound’ AI. Within the atmosphere of its successes, however, the original goal of human-level intelligence has, at least so far, evaded the energies of a huge investment. Such has been the prevalence of this pattern that it can now be called a kind of syndrome, and in recognition of that syndrome the pursuit of the original goal of human-level AI has in recent years taken on two main forms.
>>
>> The first approach to human-level AI is one of simple assumption: that by attending to the AI ‘parts’, the route to the AGI ‘whole’ will become apparent or emerge naturally. This activity, now industrialised, forms the backbone of AI investment at the present time. Its successes emerge almost weekly now. The second approach is one of a concerted direct attack on human-level AI.
>> This is a recent phenomenon manifest in a comparatively small community of investigators, with commensurate levels of investment, who have explicitly coined the name of the goal: AGI. In doing so the target is explicitly recognised as being of a nature deserving of an integrated, holistic approach. This, too, is having its successes, but once again the syndrome of narrow-AI outcomes tends to be what the practice achieves.
>>
>> Throughout all this history one thing has been invariant: the use of the computer, or more generally the use of models of intelligence, as an instance of machine intelligence. This document signals the beginning of another approach: one where the computer (model) approach is joined (to an extent to be determined) by its natural counterpart. This new approach, for whatever reason, is essentially untried and invisible to the AI community. It was always an option. All we do here is get it off the shelf and dust it off as an AGI option. This paper is a vehicle for the clear expression of an untried approach. As such, it is hoped that AI and AGI acquire a suite of ideas and new scientific assessment techniques that will improve AI generally as a science discipline, based on a new kind of empirical testing. Investment in the approach has been zero since day one of AI. We seek here to make a case that if investment in this new approach were non-zero, a cost-effective, dramatic shift may occur in our understanding of the potential kinds of machine intelligence. Specifically we seek to introduce the concepts of synthetic and hybrid AGI.
>> 2 Computation and AGI – a perspective on practice
>>
>> To understand what follows we need to carefully compare and contrast two fundamentally different forms of computation. Formally, their difference is best captured by the words analytic computation and synthetic computation.
>> The first kind, analytic, is easily recognised as model-based computation. This is where, by whatever means chosen, an abstract model is explored by its designers. Its usefulness is inherent in what the computation tells us upon interpretation. Within the model are representations of the characteristics that are being studied. A voltage in a model may be used, for example, to represent the actual voltage of what is being modelled. That *representation* of something is not an *instance of* the original thing. Recognizable forms of analytic computation include that of the analog or digital computer (Turing machines). Its distinguishing feature is that however the computation is carried out, its meaning is ultimately inherent in the mental processes of a designer or in some explicit, separate document such as software or the circuit diagram of a model. However complex the model is, it is best thought of as a description of something. The description itself is the analytic form. Clearly the analytic form is responsible for a dramatic change and for technological advances in science over decades – the computer revolution itself.
>>
>> The second kind of computation, synthetic, is best understood as simply the regularity of nature itself. Synthetic computation occurs when nature itself is simply regarded as computation. Synthetic computation, too, may have a designer. That is, the distinction between analytic and synthetic computation is not held up as the distinction between ‘human-made’ and ‘naturally occurring’. Synthetic computation is when the regularity of nature itself is accepted as, or configured to be, the computation. There may be documents needed to establish the initial conditions of the ‘computation’. For example, an engineer builds and configures the initial conditions of natural material as an automobile. The result is a synthetic computation called ‘the automobile’ or ‘transport’.
>> No documents are needed to further interpret the meaning of the result of the computation. Nature itself is the outcome of synthetic computation. Another simple example of such computation may be seen in the concept of flight. A bird ‘computes’ those aspects of the physics of flight suited to the needs of a bird. Humans have used those same synthetic computations (manifest in air/flight-surface interactions) to create artificial flight. The result is a regularity in nature accepted as a form of computation. Physically, the result is flight. That being the case, what is ‘analytic flight’? We all recognise this: the flight simulator.
>>
>> The program of future directions proposed here is one that recognises the two different kinds of computation in the very specialized science of the brain, where the analytic/synthetic distinction can be shown to be under-developed and potentially confused. The brain is unique in that it is a synthetic object with a specialised role: to become the natural regularity that forms the control system of natural organisms. It embodies the intellect of whatever creature it inhabits. The kinds of tasks such a control system does can be, and have been, modelled to great effect in analytic approaches. The question is: *“What is the difference, in application to the brain, between the analytic and the synthetic approach?”* Asking that question, and expecting a scientific answer, is what this paper is seeking.
>>
>> For over half a century, approaches to creating an artificial brain have been entirely confined to analytic forms. These analytic approaches are explorations of models of the brain made by humans. That being the case, the hyper-critical issue is in understanding the conditions under which the analytic is indistinguishable from the synthetic. If there is a difference, then how does that difference manifest in the capability of an AGI?
>> For the brain, for these many decades, the synthetic half of the route to AGI has simply been neglected, for a variety of reasons. The actual reasons for the absence of synthetic approaches to AGI are something historians can evaluate. The practical restoration of the synthetic approach is the goal here. The restoration of the synthetic approach is necessary to scientifically test the difference between analytic and synthetic AGI. Whatever that difference is, the whole AGI enterprise has been living within a realm of that difference for reasons that are essentially unexplored. *Scientifically* evaluating the analytic/synthetic difference (or the lack of it) is the goal of the proposed shift in methodology.
>>
>> In summary: the prospect of restoration of a synthetic approach to AGI is our topic. We look at a potential change in the direction of AGI science, and therefore the investment profile, where the analytic, the synthetic and their hybrid are formally recognised as separate, and where scientific testing is then applied to compare and contrast their scope and effectiveness in application to the science of the artificial brain as AGI. In the creation of such a brain the approach can be
>>
>> 1. Nil% synthetic computation (entirely analytic), or
>> 2. 100% synthetic computation, or
>> 3. H% synthetic – a hybrid form of both.
>>
>> That is, the inclusion of synthetic computation to some desired level becomes an experimental parameter. Natural brain tissue can be regarded as a naturally occurring object based on (2), synthetic computation. In application to artificial brain tissue (AGI) so far, option (1) has been the only approach. This has achieved all of the progress in artificial intelligence to date. Here we suggest that the success of analytic approaches be joined by synthetic approaches to AGI.
>> If indeed the time has arrived for the formal introduction of (2) synthetic AGI and (3) hybrid AGI as viable prospects, then we need to open a discourse. What would the new AGI science look like? What does it tell us about the scope, nature and expectations inherent in the purely analytic approach? What does it add to the nearly 60-year-old AGI program?
>>
>> (end of section)
>> ============================
>>
>> This is offered up for discussion as the possible first part of the document Dorian started. I have a lot more to add.
>>
>> regards
>>
>> Colin Hales
