Re: [Gluster-users] 40 gig ethernet

2013-06-16 Thread Nathan Stratton
niband, how can that be?

[Gluster-users] 40 gig ethernet

2013-06-14 Thread Nathan Stratton
with guys like Arista. My question is, is this worth it at all for something like Gluster? The port to port latency looks impressive at under 4 microseconds, but I don't yet know what total system to system latency would look like assuming QSFP+ copper cables and the Linux stack.

Re: [Gluster-users] about HA infrastructure for hypervisors

2012-06-28 Thread Nathan Stratton
ce and only on small packets. As far as power, original PHYs were almost 7 Watts per port! Today, almost every switch out there is less than 1 Watt per port and falling based on Moore's Law.

Re: [Gluster-users] about HA infrastructure for hypervisors

2012-06-28 Thread Nathan Stratton
s. It took me a full day to figure out it was not my switch or config, but the stupid ixgbe driver!

Re: [Gluster-users] about HA infrastructure for hypervisors

2012-06-28 Thread Nathan Stratton
On Thu, 28 Jun 2012, Brian Candler wrote: On Wed, Jun 27, 2012 at 05:28:43PM -0500, Nathan Stratton wrote: [root@virt01 ~]# dd if=/dev/zero of=foo bs=1M count=5k 5120+0 records in 5120+0 records out 5368709120 bytes (5.4 GB) copied, 26.8408 s, 200 MB/s But doing a dd if=/dev/zero bs=1024k
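
For reference, a minimal sketch of the comparison being run in this thread, with purely illustrative paths for a local disk and a gluster mount; conv=fdatasync is added so the number is not just page cache:

  # write 5 GB to the local disk, then to the gluster mount, and compare throughput
  dd if=/dev/zero of=/data/local/foo bs=1M count=5k conv=fdatasync
  dd if=/dev/zero of=/mnt/gluster/foo bs=1M count=5k conv=fdatasync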

Re: [Gluster-users] about HA infrastructure for hypervisors

2012-06-27 Thread Nathan Stratton
=foo bs=1M count=5k 5120+0 records in 5120+0 records out 5368709120 bytes (5.4 GB) copied, 172.706 s, 31.1 MB/s While this is slower than what I would like to see, it's faster than what I was getting to my NetApp and it scales better! :)

Re: [Gluster-users] about HA infrastructure for hypervisors

2012-06-27 Thread Nathan Stratton
here all servers are running)? I don't see how that would help - I expect you would mount both volumes on both KVM nodes anyway, to allow you to do live migration. Yep

[Gluster-users] 0-share-replicate-0: Returning 0, call_child: 1, last_index: -1

2012-06-23 Thread Nathan Stratton
0, call_child: 1, last_index: -1 http://share.robotics.net/share.log

Re: [Gluster-users] How Fatal?

2012-06-23 Thread Nathan Stratton
harry mangalam writes: > > To check whether the point version skew might have an effect, I compiled > gluster from the source: > > and tried it again. However, even though the server and client are now > from (I ass

Re: [Gluster-users] Fedora 17 GlusterFS 3.3.0 problmes

2012-06-22 Thread Nathan Stratton
unt -t glusterfs localhost:/share /share On Fri, Jun 22, 2012 at 7:49 AM, Nathan Stratton wrote: When I do an NFS mount and do an ls I get: [root@ovirt share]# ls ls: reading directory .: Too many levels of symbolic l
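
For context, the two mounts being compared here look roughly like the following; the NFS line is a sketch and assumes Gluster's built-in NFS server, which only speaks NFSv3 over TCP:

  # FUSE client mount (as quoted above)
  mount -t glusterfs localhost:/share /share
  # NFS mount of the same volume (hedged example)
  mount -t nfs -o vers=3,proto=tcp,nolock localhost:/share /mnt/share-nfs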

[Gluster-users] Fedora 17 GlusterFS 3.3.0 problmes

2012-06-22 Thread Nathan Stratton
et_root_inode_on_first_lookup] 0-share-replicate-2: added root inode [2012-06-21 19:24:39.669379] I [afr-common.c:1964:afr_set_root_inode_on_first_lookup] 0-share-replicate-3: added root inode

Re: [Gluster-users] Gluster 3.2.6 for XenServer

2012-04-19 Thread Nathan Stratton
t you changed the NFS port to 2049 in the configuration and that should fix pretty much the front-end issues with Gluster and Xen. Is the self heal issue still there in 3.3 where the file needs to be locked to heal?
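
The port change referred to above is presumably the nfs.port volume option; a hedged example, assuming the Gluster 3.2/3.3 CLI and a volume actually named share:

  # make Gluster's NFS server listen on the standard NFS port so XenServer can mount it
  gluster volume set share nfs.port 2049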

Re: [Gluster-users] Exorbitant cost to achieve redundancy??

2012-03-30 Thread Nathan Stratton
more nodes you have the more likely you will have a node failure. This is what I think Jeff was worried about.

Re: [Gluster-users] >1PB

2012-02-17 Thread Nathan Stratton
here. At least on our end, we don't have backups. We just make sure we have 2 copies of the data on RAID6 hardware.

Re: [Gluster-users] >1PB

2012-02-17 Thread Nathan Stratton
best bet is to split the data over as many servers as possible, which provides the best uptime.

Re: [Gluster-users] Exorbitant cost to achieve redundancy??

2012-02-13 Thread Nathan Stratton
ddressed soon, until then, about the only thing you can do is rely on underlying hardware redundancy. We have just over 100 TB on Gluster with every brick using hardware RAID 6 cards with a hot standby. I know it's not the solution you're looking for, but for now it's all we have with Gluster.

Re: [Gluster-users] Red Hat..

2011-10-04 Thread Nathan Stratton
On Tue, 4 Oct 2011, Luis Cerezo wrote: congrats folks. Ditto. Just further validates technology some of us have grown to love over the last few years.

Re: [Gluster-users] ZFS + Linux + Glusterfs for a production ready 100+ TB NAS on cloud

2011-09-25 Thread Nathan Stratton
On Sun, 25 Sep 2011, Jon Tegner wrote: Sorry for a stupid question, but would there be issues using glusterfs based on several 11 TB ext4-bricks? Nope! I have used that in a config, 16 750 disks in RAID 6 on a 3ware card formatted ext4.

Re: [Gluster-users] HW raid or not

2011-08-08 Thread Nathan Stratton
On Mon, 8 Aug 2011, Uwe Kastens wrote: Hi, I know that there is no general answer to this question :) Is it better to use HW Raid or LVM as gluster backend or raw disks? HW Raid.

Re: [Gluster-users] infiniband speed

2011-01-11 Thread Nathan Stratton
On Mon, 10 Jan 2011, Joe Landman wrote: Try this: # (on bravo) dd if=/dev/zero of=/cluster/shadow/big.file bs=1M count=20k This will write a 20GB file to the same partition. We need to see how fast that write is (outside of cache). Do the same test on the other machine. Inf

Re: [Gluster-users] Gluster

2010-07-03 Thread Nathan Stratton
On Sat, 3 Jul 2010, Andy Pace wrote: What did you end up using instead? NetApp 960s I bought off eBay. -Nathan

Re: [Gluster-users] Gluster

2010-07-03 Thread Nathan Stratton
n xen. There was one other issue, but I don't remember what it was right now.

[Gluster-users] Gluster

2010-07-03 Thread Nathan Stratton
Each server will have 60 TB of raw space and two servers will be unified together for redundancy. Pairs of servers will be distributed into one 240 TB volume in each city. The plan needs to scale to 4 PB in each city within 1 year.

Re: [Gluster-users] mysql on gluster

2009-11-25 Thread Nathan Stratton
On Tue, 24 Nov 2009, Pablo Godel wrote: Hello, I continue to run some tests against glusterfs. I am running OpenVZ containers where their filesystem is in a glusterfs mount. For some reason, mysql takes ages to start up, probably up to a minute, where it usually takes a few seconds. Any idea why t

[Gluster-users] open file heal

2009-11-20 Thread Nathan Stratton
all the new data. Is there a way to see what node GlusterFS is using for an open file, and is there a way to force an update to both files while it is open?

Re: [Gluster-users] xen Hotplug scripts not working

2009-11-11 Thread Nathan Stratton
"Device 51712 (vbd) could not be connected. Hotplug scripts not working". Sounds like you are running tap:aio in your xen config; try changing that to file.
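
The change being suggested lives in the domU config's disk line; a sketch with a hypothetical image path:

  # before: blktap AIO backend (uses O_DIRECT, which FUSE of this era cannot satisfy without --disable-direct-io-mode)
  # disk = ['tap:aio:/share/xen/vm01.img,xvda,w']
  # after: plain file-backed driver
  disk = ['file:/share/xen/vm01.img,xvda,w']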

Re: [Gluster-users] Tutorial/Howto/more information about how to run Glusterfs

2009-10-26 Thread Nathan Stratton
On Mon, 26 Oct 2009, Daniel Meier wrote: Hi list, I'm new to GlusterFS. I managed to compile and install the current GlusterFS on two machines. I configured them as "distributed, replicated volumes" according to the wiki. glusterfsd is now running on both systems. However, they don't communi

[Gluster-users] Two Networks

2009-10-12 Thread Nathan Stratton
uster with two networks connecting all boxes together, and does it actually use both networks to increase performance or does it only work as failover? If this is possible, how is it set up? I can't seem to find any gluster configs where two networks are used to connect all nodes together.

Re: [Gluster-users] 2.1 xen direct io

2009-10-12 Thread Nathan Stratton
On Mon, 12 Oct 2009, samp...@neutraali.net wrote: 2.1 RC1 is scheduled for 19th Oct. You will need a 2.6.26+ kernel for the performance enhancements. There are no custom kernel fuse modules from 2.6.26 onwards. 2.6.18 will still need --disable-direct-io-mode. Let me get back to you with the tap:aio

Re: [Gluster-users] 2.1 xen direct io

2009-10-09 Thread Nathan Stratton
On Thu, 8 Oct 2009, Anand Avati wrote: 2.1 RC1 is scheduled for 19th Oct. You will need a 2.6.26+ kernel for the performance enhancements. There are no custom kernel fuse modules from 2.6.26 onwards. 2.6.18 will still need --disable-direct-io-mode. Let me get back to you with the tap:aio answer.

[Gluster-users] 2.1 xen direct io

2009-10-07 Thread Nathan Stratton
question... will 2.1 allow me to use tap:aio?

Re: [Gluster-users] GlusterFS 2.0.7 released

2009-10-01 Thread Nathan Stratton
have not been able to make it work.

Re: [Gluster-users] Fedora 11 - 2.6.31 Kernel - Fuse 2.8.0 - Infiniband

2009-09-22 Thread Nathan Stratton
On Tue, 22 Sep 2009, Anand Avati wrote: Then I guess even IPoIB is not working for you? I'm not sure if you Actually, IPoIB is working just fine. might have to upgrade to a new OFED for either libibverbs or the mthca uverbs driver may not be compatible with the latest kernel IB drivers. Ya

Re: [Gluster-users] Fedora 11 - 2.6.31 Kernel - Fuse 2.8.0 - Infiniband

2009-09-22 Thread Nathan Stratton
On Tue, 22 Sep 2009, Anand Avati wrote: What does the server log have to say? Can you also check if port-1 is the active port in ibv_devinfo? Looks like ib-verbs messaging is not happening. Does ibv_srq_pingpong give sane results? [3.890311] ib_mthca: Mellanox InfiniBand HCA driver v1.0
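
For anyone hitting the same wall, the two checks being asked for are stock libibverbs example utilities; roughly:

  # confirm port 1 is PORT_ACTIVE and note its state
  ibv_devinfo | grep -E 'port:|state'
  # verbs sanity test: run with no arguments on one node, then point the other node at it
  ibv_srq_pingpong
  ibv_srq_pingpong <server-hostname>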

Re: [Gluster-users] Fedora 11 - 2.6.31 Kernel - Fuse 2.8.0 - Infiniband

2009-09-22 Thread Nathan Stratton
On Tue, 22 Sep 2009, Anand Avati wrote: Hate to post again, but anyone have any ideas on this? What does the server log have to say? Can you also check if port-1 is the active port in ibv_devinfo? Looks like ib-verbs messaging is not happening. Does ibv_srq_pingpong give sane results? [

Re: [Gluster-users] Fedora 11 - 2.6.31 Kernel - Fuse 2.8.0 - Infiniband

2009-09-22 Thread Nathan Stratton
Hate to post again, but does anyone have any ideas on this? -Nathan On Fri, 18 Sep 2009, Nathan Stratton wrote: Has anyone been able to get Infiniband working with the 2.6.31 kernel and fuse 2.8.0? My config works fine on my CentOS 2.6.18 box, so I know that is ok. Infiniband looks good: [r

[Gluster-users] Fedora 11 - 2.6.31 Kernel - Fuse 2.8.0 - Infiniband

2009-09-18 Thread Nathan Stratton
n root failed. [2009-09-18 20:06:18] W [fuse-bridge.c:1841:fuse_statfs_cbk] glusterfs-fuse: 2: ERR => -1 (Transport endpoint is not connected)

[Gluster-users] Problmes with XFS and Gluster 2.0.6

2009-09-18 Thread Nathan Stratton
found that the problem happened on all 4 nodes. I just don't know how 4 XFS partitions on 4 different boxes could all become corrupted at one time. Whatever happened, it is badly wrong, because xfs can't even fix it: http://share.robotics.net/xfs-crash.txt

Re: [Gluster-users] Interesting experiment

2009-08-19 Thread Nathan Stratton
On Wed, 19 Aug 2009, Hiren Joshi wrote: Is it worth bonding? This looks like I'm maxing out the network connection. Yes, but you should also check out Infiniband. http://www.robotics.net/2009/07/30/infiniband/

Re: [Gluster-users] 2.0.6

2009-08-18 Thread Nathan Stratton
have been fortunate to watch this project.

Re: [Gluster-users] is it possible to use ib-verbs and tcp transport-types on the same volume?

2009-08-10 Thread Nathan Stratton
pe protocol/server option transport-type tcp/server subvolumes brick option auth.addr.brick.allow * end-volume volume ib-server type protocol/server option transport-type ib-verbs/server subvolumes brick option auth.addr.brick.allow * end-volume

[Gluster-users] tcp/server and ib-verbs/server

2009-08-05 Thread Nathan Stratton
rver type protocol/server option transport-type ib-verbs/server subvolumes brick option auth.addr.brick.allow * end-volume
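
Reassembled from the fragment above, this appears to be two protocol/server volumes exporting the same brick over both transports; a sketch in 2.0-era vol-file syntax, where the name tcp-server for the first volume and the default listen ports are assumptions:

  volume tcp-server
    type protocol/server
    option transport-type tcp/server
    option auth.addr.brick.allow *
    subvolumes brick
  end-volume

  volume ib-server
    type protocol/server
    option transport-type ib-verbs/server
    option auth.addr.brick.allow *
    subvolumes brick
  end-volume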

Re: [Gluster-users] FUSE

2009-08-05 Thread Nathan Stratton
On Thu, 6 Aug 2009, Harshavardhana wrote: Hi Nathan, You should be using fuse-2.8.0pre3 released from their sf.net website. Let us know your findings. So I am running Fedora 11 with 2.6.30-rc6, xen 3.4.1-rc11, gluster 2.0 git, and fuse 2.8.0pre3. I start glusterfs and it looks like things s

[Gluster-users] FUSE

2009-08-05 Thread Nathan Stratton
I am trying to get more than 25 MB/s out of a raw 500 MB/s with --disable-direct-io-mode. I have been able to build a Xen Dom0 and Infiniband support into the latest kernel, 2.6.31-rc4. I also have FUSE in the kernel; should I be using that, or should I use fuse-2.7.4glfs11?
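
For reference, the flag in question is passed to the 2.0-era FUSE client at mount time; a sketch, assuming a client vol file at /etc/glusterfs/glusterfs.vol and a mount point of /share:

  # mount via FUSE with direct I/O disabled (needed on kernels older than 2.6.26)
  glusterfs --disable-direct-io-mode -f /etc/glusterfs/glusterfs.vol /share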

[Gluster-users] directory lock durning file self-healing

2009-08-03 Thread Nathan Stratton
something I could put in production!

Re: [Gluster-users] hi, all! how to make GlusterFS wokr like RAID 5??

2009-07-30 Thread Nathan Stratton
On Thu, 30 Jul 2009, Yi Ling wrote: there are 3 servers in my clustered system, and now it's required to provide RAID 5-like functionality. Could anyone here tell me how to combine the translators to implement this?? Well, it would not be RAID 5, but you would get redundancy and 2/3rd the

Re: [Gluster-users] any configuration guidelines?

2009-07-29 Thread Nathan Stratton
On Wed, 29 Jul 2009, Liam Slusser wrote: The preferred way is using the client and not the backend server. There is some documentation somewhere about it - I'll see if I can dig it up. The downside is that I am using this for xen so I need to disable direct-io. This makes client-side recovery

Re: [Gluster-users] Xen - Backend or Frontend or Both?

2009-07-29 Thread Nathan Stratton
Sorry, with config: On Wed, 29 Jul 2009, Nathan Stratton wrote: I have a client config (see below) shared across 6 boxes. I am using distribute across 3 replicate pairs. Since I am running xen I need --disable-direct-io-mode, and that slows things down quite a bit. My thought was to move
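
For readers without the attached config, the layout described (distribute across 3 replicate pairs on 6 boxes) looks roughly like this in 2.0-era client vol-file syntax; the remote1..remote6 protocol/client volumes pointing at the six servers are assumed and omitted here:

  volume rep1
    type cluster/replicate
    subvolumes remote1 remote2
  end-volume

  volume rep2
    type cluster/replicate
    subvolumes remote3 remote4
  end-volume

  volume rep3
    type cluster/replicate
    subvolumes remote5 remote6
  end-volume

  volume dist
    type cluster/distribute
    subvolumes rep1 rep2 rep3
  end-volume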

[Gluster-users] Xen - Backend or Frontend or Both?

2009-07-29 Thread Nathan Stratton
using the same raw volumes?

Re: [Gluster-users] any configuration guidelines?

2009-07-29 Thread Nathan Stratton
On Tue, 28 Jul 2009, Wei Dong wrote: Hi All, We've been using GlusterFS 2.0.1 on our lab cluster to host a large number of small images for distributed processing with Hadoop and it has been working fine without human intervention for a couple of months. Thanks for the wonderful project --

Re: [Gluster-users] GlusterFS & XenServer Baremetal

2009-07-17 Thread Nathan Stratton
ideas for getting a better situation? If not, what scenario did you switch to? I still use gluster, just not for xen images. We keep xen images on DRBD, but that is limited to primary/primary on 2 servers only.

Re: [Gluster-users] GlusterFS & XenServer Baremetal

2009-07-17 Thread Nathan Stratton
On Sat, 18 Jul 2009, Alexandre Emeriau wrote: Did you try GlusterFS on ArchLinux (http://www.gluster.org/docs/index.php/GlusterFS_on_ArchLinux)? As XenServer is based on, as far as I know, CentOS 5.2

Re: [Gluster-users] GlusterFS & XenServer Baremetal

2009-07-17 Thread Nathan Stratton
irect-io-mode.

Re: [Gluster-users] Storage Cluster Question

2009-06-29 Thread Nathan Stratton
, 336.514 seconds, 25.5 MB/

Re: [Gluster-users] Storage Cluster Question

2009-06-29 Thread Nathan Stratton
en one 12T Gluster across all servers for general file storage.

Re: [Gluster-users] Xen and Glusterfs

2009-04-03 Thread Nathan Stratton
On Fri, 3 Apr 2009, Matt Lawrence wrote: Ok, tried that. Installed on a local disk and it worked. Changed the "tap:aio" to "file", made sure it still worked, then copied it to the gluster file system and it works. Trying to do an install to the gluster file system still hangs. So I'm guess

Re: [Gluster-users] Performance expectations

2009-03-23 Thread Nathan Stratton
en tap:aio. Is there something I can do to bring the performance closer to the native disk speed, or are there some inherent limitations somewhere that I should be aware of? You can try some caching, but overall it did not help me out a lot.

Re: [Gluster-users] Gluster on a Xen cloud - Questions

2009-03-02 Thread Nathan Stratton
o run into another error before I fix a disk. I saw the note in the technical FAQ about --disable-direct-io-mode. What does this do, and why is it needed to perform Xen migration? Not sure, I would need to ask someone else; all I know is it takes me from over 100 MB/s to 22 MB/s. :(

Re: [Gluster-users] Gluster on a Xen cloud - Questions

2009-03-02 Thread Nathan Stratton
is an option, but we just let distribute throw them where it wants. I would like to thank in advance anyone who answers these questions for me. I am new to Gluster and distributed filesystems so I apologize if any of my questions are stupid. Did not see any. :)

[Gluster-users] Xen issues with Gluster

2009-02-27 Thread Nathan Stratton
Loading xenblk.ko module Registering block device major 202 xvda: Scanning and configuring dmraid supported devices Creating root device. server and client configs: http://share.robotics.net/glusterfs.vol http://share.robotics.net/glusterfsd.vol

Re: [Gluster-users] xen+glusterfs+live migration?

2009-02-26 Thread Nathan Stratton
On Tue, 24 Feb 2009, Visco Shaun wrote: Mounting the glusterfs was done with option -d disable. Besides, as I saw in some article, I tried 'tap:aio' instead of 'file' but that resulted in the VM failing to boot from node 1 itself. Also the same xend-config.sxp file is there on both nodes.

[Gluster-users] core

2009-02-24 Thread Nathan Stratton
[0x2b0d6ec90e8d] /lib64/libpthread.so.0[0x2b0d6e30a2f7] /lib64/libc.so.6(clone+0x6d)[0x2b0d6e5f0e3d] -

[Gluster-users] Gluster single file I/O

2009-02-24 Thread Nathan Stratton
) copied, 87.7885 seconds, 97.8 MB/s Boxes are connected with 10 gig Infiniband so that should not be an issue. http://share.robotics.net/glusterfs.vol http://share.robotics.net/glusterfsd.vol

[Gluster-users] Interleave or not

2009-02-23 Thread Nathan Stratton
distribute unify - block 1 block2 block3 block4

Re: [Gluster-users] Gluster single file I/O

2009-02-22 Thread Nathan Stratton
Moved from afr to replicate and unify to distribute and that helped a bit. I am now seeing 109 MB/s with Gluster vs 167 MB/s with DRBD. My configs: http://share.robotics.net/glusterfs.vol http://share.robotics.net/glusterfsd.vol -Nathan On Sat, 21 Feb 2009, Nathan Stratton wrote: Direct: [r

Re: [Gluster-users] core

2009-02-22 Thread Nathan Stratton
?

Re: [Gluster-users] Gluster single file I/O

2009-02-22 Thread Nathan Stratton
infiniband: [r...@xen0 share]# dd if=/dev/zero of=/share/bar bs=1G count=8 8+0 records in 8+0 records out 8589934592 bytes (8.6 GB) copied, 60.8988 seconds, 141 MB/s At 06:59 PM 2/21/2009, Nathan Stratton wrote: Direct: [r...@xen0 unify]# dd if=/dev/zero of=/sdb2/bar bs=1G count=8 8+0 reco

[Gluster-users] core

2009-02-21 Thread Nathan Stratton
[0x2b0d6ec90e8d] /lib64/libpthread.so.0[0x2b0d6e30a2f7] /lib64/libc.so.6(clone+0x6d)[0x2b0d6e5f0e3d] -

[Gluster-users] Gluster single file I/O

2009-02-21 Thread Nathan Stratton
) copied, 87.7885 seconds, 97.8 MB/s Boxes are connected with 10 gig Infiniband so that should not be an issue. http://share.robotics.net/glusterfs.vol http://share.robotics.net/glusterfsd.vol