Re: [Gluster-users] 3.3.0 - 3.4.2 / Rolling upgrades with no downtime

2014-03-06 Thread Bryan Whitehead
best practice for maximum VM stability: if the VM is frozen, the decision may be to delete the oldest qcow2/FVM file, or to choose at random if there is no difference. Best regards, --joão On 05-03-2014 20:26, Bryan Whitehead wrote: I followed this blog: http://geekshell.org/~pivo/glusterfs

Re: [Gluster-users] 3.3.0 - 3.4.2 / Rolling upgrades with no downtime

2014-03-05 Thread Bryan Whitehead
commands). It's good news it went well. Best regards, joao. On 04-03-2014 18:30, Bryan Whitehead wrote: I just did this last week from 3.3.0-3.4.2. I never got the peer probe problems - but I did end up with 2 files being in a split-brain situation. Note: I only had ~100 files

Re: [Gluster-users] 3.3.0 - 3.4.2 / Rolling upgrades with no downtime

2014-03-04 Thread Bryan Whitehead
I just did this last week from 3.3.0-3.4.2. I never got the peer probe problems - but I did end up with 2 files being in a split-brain situation. Note: I only had ~100 files, which are qcow2 images for KVM, so 2 files getting split-brain is about a 2% filesystem problem. On Tue, Mar 4, 2014 at 1:43

Re: [Gluster-users] ESXI cannot access stripped gluster volume

2014-03-02 Thread Bryan Whitehead
I don't have ESXi experience but the first thing that jumps out at me is you probably need to mount NFS/tcp. NFS/udp doesn't work on glusterfs (unless this has changed and I've not been paying close enough attention lately). On Sat, Mar 1, 2014 at 8:35 AM, Carlos Capriotti

Re: [Gluster-users] GlusterFS HA testing feedback

2013-10-22 Thread Bryan Whitehead
So gluster is just running on 10Mbit nic cards or 56Gbit Infiniband? With 1G nic cards, assuming only replica=2, you are looking at pretty limited IO for gluster to work with. That can cause long pauses and other timeouts in my experience. On Tue, Oct 22, 2013 at 2:42 AM, José A. Lausuch Sales

Re: [Gluster-users] Mounting Gluster volume on multiple clients.

2013-09-29 Thread Bryan Whitehead
No. I write to the same Volume from many clients at the same time all day. You just can't write to the same file in a Volume at the same time (without using posix locks). On Sat, Sep 28, 2013 at 9:37 PM, Bobby Jacob bobby.ja...@alshaya.com wrote: Hi, Again, my query is: When multiple

Re: [Gluster-users] Feedback Requested: Gluster Community Governance

2013-09-05 Thread Bryan Whitehead
This all looks really good. I think with this official governance plan I'll be able to contribute more and be more involved. I wouldn't mind listening in on the board meeting coming up on September 18th; will this be possible? On Wed, Sep 4, 2013 at 12:41 AM, Anand Avati av...@gluster.org

Re: [Gluster-users] Tunning GlusterFS

2013-08-02 Thread Bryan Whitehead
Many small files are in general not going to give you good performance - a lot of time is spent locking each individual file (and directory) as it is written. You would see vast improvements simply by cramming all the small files into a big file (obviously this would require changes in the

Re: [Gluster-users] Create a volume with one brick

2013-07-30 Thread Bryan Whitehead
This is done all the time for testing and it is really easy. Install the RPMs, start up the glusterd service. 1) Make a directory for gluster to use. Never touch anything in this directory: mkdir /var/gluster 2) Create your volume: gluster volume create MyVolume localhost:/var/gluster gluster
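
A minimal sketch of the full sequence the snippet walks through, reusing its volume name and brick path; the start and mount steps are my addition and the mount point is hypothetical:

    mkdir /var/gluster                                     # brick directory - never touch its contents directly
    gluster volume create MyVolume localhost:/var/gluster
    gluster volume start MyVolume
    mkdir /mnt/MyVolume
    mount -t glusterfs localhost:/MyVolume /mnt/MyVolume   # access files only through this mount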

Re: [Gluster-users] [Gluster-devel] [FEEDBACK] Governance of GlusterFS project

2013-07-29 Thread Bryan Whitehead
Weekend activities kept me away from watching this thread, wanted to add in more of my 2 cents... :) Major releases would be great to happen more often - but keeping current releases more current is really what I was talking about. Example, 3.3.0 was a pretty solid release but some annoying bugs

Re: [Gluster-users] [FEEDBACK] Governance of GlusterFS project

2013-07-26 Thread Bryan Whitehead
I would really like to see releases happen regularly and more aggressively. So maybe this plan needs a community QA guy or the release manager needs to take up that responsibility to say this code is good for inclusion in the next version. (Maybe this falls under process and evaluation?) For

Re: [Gluster-users] KVM guest I/O errors with xfs backed gluster volumes

2013-07-16 Thread Bryan Whitehead
I'm using gluster 3.3.0 and 3.3.1 with xfs bricks and kvm based VM's using qcow2 files on gluster volume fuse mounts. CentOS6.2 through 6.4 w/CloudStack 3.0.2 - 4.1.0. I've not had any problems. Here is 1 host in a small 3-host cluster (using the CloudStack terminology). About 30 VM's are running

Re: [Gluster-users] KVM guest I/O errors with xfs backed gluster volumes

2013-07-16 Thread Bryan Whitehead
No, I've never used raw; I've used LVM (local block device) and qcow2. I think you should use the libvirt tools to run VM's and not directly use qemu-kvm. Are you creating the qcow2 file with qemu-img first? Example: qemu-img create -f qcow2 /var/lib/libvirt/images/xfs/kvm2.img 200G [root@ ~]#

Re: [Gluster-users] Rev Your (RDMA) Engines for the RDMA GlusterFest

2013-06-21 Thread Bryan Whitehead
+1 On Fri, Jun 21, 2013 at 8:05 AM, Ryan Aydelott ry...@mcs.anl.gov wrote: +1 same story here… On Jun 21, 2013, at 10:04 AM, Matthew Nicholson matthew_nichol...@harvard.edu wrote: Yes please! Busy day 'round here, but we do have a 3.4 beta3 RDMA cluster up, just need to get to the tests...

Re: [Gluster-users] 40 gig ethernet

2013-06-20 Thread Bryan Whitehead
Weird, I have a bunch of servers with Areca ARC-1680 8-ports and they have never given me a problem. The first thing I did was update the firmware to the latest - my brand new cards had firmware 2 years old - and didn't recognize disks > 1TB. On Thu, Jun 20, 2013 at 7:11 AM, Shawn Nock

Re: [Gluster-users] 40 gig ethernet

2013-06-17 Thread Bryan Whitehead
. On Sat, Jun 15, 2013 at 5:34 PM, Justin Clift jcl...@redhat.com wrote: On 14/06/2013, at 8:13 PM, Bryan Whitehead wrote: I'm using 40G Infiniband with IPoIB for gluster. Here are some ping times (from host 172.16.1.10): [root@node0.cloud ~]# ping -c 10 172.16.1.11 PING 172.16.1.11

Re: [Gluster-users] 40 gig ethernet

2013-06-14 Thread Bryan Whitehead
I'm using 40G Infiniband with IPoIB for gluster. Here are some ping times (from host 172.16.1.10): [root@node0.cloud ~]# ping -c 10 172.16.1.11 PING 172.16.1.11 (172.16.1.11) 56(84) bytes of data. 64 bytes from 172.16.1.11: icmp_seq=1 ttl=64 time=0.093 ms 64 bytes from 172.16.1.11: icmp_seq=2

Re: [Gluster-users] 40 gig ethernet

2013-06-14 Thread Bryan Whitehead
/0.628/0.116 ms Note: The Infiniband interfaces have a constant load of traffic from glusterfs. The Nic cards comparatively have very little traffic. On Fri, Jun 14, 2013 at 12:40 PM, Stephan von Krawczynski sk...@ithnet.com wrote: On Fri, 14 Jun 2013 12:13:53 -0700 Bryan Whitehead dri

Re: [Gluster-users] glusterfs + cloudstack setup

2013-05-17 Thread Bryan Whitehead
I think you are referring to this statement: After adding a gluster layer (fuse mount) write speeds per process are at ~150MB/sec. For raw fs write speed, with 1 or more processes against a mountpoint on xfs, I get ~300MB/sec. After the fuse layer is added (with replica=2 as shown in my config)

Re: [Gluster-users] Performance for KVM images (qcow)

2013-04-10 Thread Bryan Whitehead
I just want to say that trying to run a VM on almost any glusterfs fuse mount is going to suck when using 1G nic cards. That said your setup looks fine except you need to change replica 4 to replica 2. I'm assuming you want redundancy and speed. Replicating to all 4 bricks is probably not what

Re: [Gluster-users] Performance for KVM images (qcow)

2013-04-08 Thread Bryan Whitehead
This looks like you are replicating every file to all bricks? What is tcp running on? 1G nics? 10G? IPoIB (40-80G)? I think you want Distributed-Replicate. So 4 bricks with replica = 2. Unless you are running at least 10G nics you are going to have serious IO issues in your KVM/qcow2
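
A minimal sketch of the 4-brick distributed-replicated layout being suggested; host names and brick paths are hypothetical:

    gluster volume create kvmvol replica 2 transport tcp \
        server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 server4:/bricks/b1
    gluster volume info kvmvol   # should report Type: Distributed-Replicate, Number of Bricks: 2 x 2 = 4

With replica 2, consecutive bricks on the create command line form the replica pairs, so each file is written to two bricks instead of all four.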

Re: [Gluster-users] Slow read performance

2013-03-11 Thread Bryan Whitehead
This is clearly something Linus should support (forcing ext4 fix). There is an ethos Linus always champions and that is *never* break userspace. NEVER. Clearly this ext4 change has broken userspace. GlusterFS is not in the kernel at all and this change has broken it. On Mon, Mar 11, 2013 at

Re: [Gluster-users] Slow read performance

2013-02-27 Thread Bryan Whitehead
How are you doing the read/write tests on the fuse/glusterfs mountpoint? Many small files will be slow because all the time is spent coordinating locks. On Wed, Feb 27, 2013 at 9:31 AM, Thomas Wakefield tw...@cola.iges.org wrote: Help please- I am running 3.3.1 on Centos using a 10GB
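
For reference, a simple large-file streaming test of the kind being asked about, assuming a fuse mount at the hypothetical path /mnt/gluster:

    # write test: stream 1 GB through the fuse mount and flush it before dd reports a rate
    dd if=/dev/zero of=/mnt/gluster/ddtest bs=1M count=1024 conv=fsync
    # read test: drop the page cache first so the read actually goes through glusterfs
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/gluster/ddtest of=/dev/null bs=1M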

Re: [Gluster-users] Slow read performance

2013-02-27 Thread Bryan Whitehead
789252 805040 123810 119088 On Feb 27, 2013, at 2:46 PM, Bryan Whitehead dri...@megahappy.net wrote: How are you doing the read/write tests on the fuse/glusterfs mountpoint? Many small files will be slow because all the time is spent coordinating locks. On Wed, Feb 27, 2013 at 9:31 AM

Re: [Gluster-users] high CPU load on all bricks

2013-02-14 Thread Bryan Whitehead
Is transport tcp or tcp,rdma? I'm using transport=tcp for IPoIB and get pretty fantastic speeds. I noticed when I used tcp,rdma as my transport I had problems. Are you mounting via fuse or nfs? I don't have any experience using the NFS mount, but fuse works really well. Additionally, how are you using
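
A quick way to check which transport a volume was created with (VOLNAME is a placeholder):

    gluster volume info VOLNAME | grep Transport-type
    # "tcp" is what you want for IPoIB; "tcp,rdma" enables both transports on the volume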

Re: [Gluster-users] hanging of mounted filesystem (3.3.1)

2013-01-31 Thread Bryan Whitehead
Remove the transport rdma and try again. When using RDMA I've also had extremely bad CPU-eating issues. I currently run gluster with IPoIB to get the speed of infiniband without the crazy cpu usage of rdma gluster. On Thu, Jan 31, 2013 at 9:20 AM, Michael Colonno m...@hpccloudsolutions.com

Re: [Gluster-users] Errors in documentation for 3.2 to 3.3 upgrade path

2013-01-23 Thread Bryan Whitehead
Side note: The same annoying conversions seem to happen when using Skype. On Wed, Jan 23, 2013 at 5:29 PM, Toby Corkindale toby.corkind...@strategicdata.com.au wrote: On 24/01/13 10:57, Joe Julian wrote: On 01/23/2013 03:43 PM, Toby Corkindale wrote: Hi, Last night I attempted to

Re: [Gluster-users] MacOSX Finder performance woes

2012-12-19 Thread Bryan Whitehead
This sounds like a perfect use case for https://github.com/jdarcy/negative-lookup Jeff Darcy made this as an example of building your own translators. But it basically caches negative lookups so repeated asking of the same question goes away. Hell, might be beneficial to make a translator that

Re: [Gluster-users] Infiniband performance issues answered?

2012-12-17 Thread Bryan Whitehead
Does anyone have 3.4.0qa5 rpm's available? I'd like to give them a whirl. On Mon, Dec 17, 2012 at 5:17 PM, Sabuj Pattanayek sab...@gmail.com wrote: and yes on some Dells you'll get strange network and RAID controller performance characteristics if you turn on the BIOS power management. On

Re: [Gluster-users] Rebalance may never finish, Gluster 3.2.6

2012-12-13 Thread Bryan Whitehead
Use 10G nics or Infiniband. On Thu, Dec 13, 2012 at 7:21 AM, Kent Nasveschuk knasvesc...@mbl.edu wrote: Hi Guys, I have a rebalance that is going so slow it may never end. Particulars on system: 3 nodes, 6 bricks, ~55TB, about 10% full. The use of data is very active during the day and less

Re: [Gluster-users] Does brick fs play a large role on listing files client side?

2012-12-04 Thread Bryan Whitehead
I think performance.cache-refresh-timeout *might* cache directory listings, so you can try bumping that value up. But probably someone else on the list needs to clarify if that will actually cache a directory (might only cache a file). If not, you can write a translator to cache directory
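
If you want to experiment with that option, the syntax would look like this (VOLNAME is a placeholder; 60 seconds is an arbitrary example value):

    gluster volume set VOLNAME performance.cache-refresh-timeout 60
    gluster volume info VOLNAME   # the change shows up under "Options Reconfigured"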

Re: [Gluster-users] Self healing of 3.3.0 cause our 2 bricks replicated cluster freeze (client read/write timeout)

2012-11-28 Thread Bryan Whitehead
When you mount xfs, also use the inode64 option. That will help with xfs performance. My offhand guess is you are likely running into limited network bandwidth for the 2 bricks to sync. As the network gets flooded, NFS response gets poor. Make sure you are getting full-duplex connections - or
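
A sketch of mounting an xfs brick with inode64, assuming a hypothetical device and brick path:

    mount -t xfs -o inode64 /dev/sdb1 /bricks/brick1
    # or, to make it persistent, an /etc/fstab entry such as:
    # /dev/sdb1  /bricks/brick1  xfs  inode64  0 0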

Re: [Gluster-users] Rebuilt RAID array, now heal is failing

2012-11-23 Thread Bryan Whitehead
You'll need to share more information. gluster volume info to start would be helpful. So far I have no clue what your setup looks like. Example: if you have a distributed setup with no replication then all your files on that volume are just lost. On Fri, Nov 23, 2012 at 9:55 AM, Gerald Brandt

Re: [Gluster-users] File has different size on different bricks

2012-10-28 Thread Bryan Whitehead
Is md5sum or some other hash different? On Sun, Oct 28, 2012 at 4:49 AM, Nux! n...@li.nux.ro wrote: On 28.10.2012 11:33, Nux! wrote: The problem: While server B was rebooting the wget transfer in server A froze for a few seconds then resumed. When server B came back online it started to
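
One way to check, with a hypothetical brick path: compute the hash directly against each replica's brick copy (not through the client mount) and compare the results:

    md5sum /bricks/brick1/path/to/suspect-file   # run on each replica server and compare the output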

Re: [Gluster-users] slow write to non-hosted replica in distributed-replicated volume

2012-10-22 Thread Bryan Whitehead
gluster volume create vol1 replica 2 transport tcp server1:/brick1 server2:/brick2 server3:/brick3 server4:/brick4 server1:/brick1 and server2:/brick2 are the first replica pair server3:/brick3 and server4:/brick4 are the second replica pair server1.. file1 goes into brick1/brick2 - fast

[Gluster-users] Gluster 3.3.1

2012-10-12 Thread Bryan Whitehead
I didn't see any announcement of gluster 3.3.1, but I see it was dropped yesterday: http://bits.gluster.com/pub/gluster/glusterfs/3.3.1/x86_64/ Also no blog post about it? Is 3.3.1 not ready yet? -Bryan

[Gluster-users] mkfs.xfs inode size question

2012-10-03 Thread Bryan Whitehead
Look at this guide: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Storage/2.0/html-single/Administration_Guide/index.html#chap-User_Guide-Setting_Volumes I noticed this: "you must increase the inode size to 512 bytes from the default 256 bytes", with an example mkfs.xfs like: mkfs.xfs -i
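
The full command from that guide would look roughly like this (the target device is a hypothetical placeholder), and xfs_info can confirm the result on an existing brick:

    mkfs.xfs -i size=512 /dev/sdb1
    xfs_info /bricks/brick1 | grep isize   # a brick formatted this way reports isize=512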

Re: [Gluster-users] Setting xattrs failed

2012-09-18 Thread Bryan Whitehead
I'd open a reproducible bug on bugzilla.redhat.com https://bugzilla.redhat.com/page.cgi?id=browse.htmltab=product=GlusterFSbug_status=open On Mon, Sep 17, 2012 at 1:28 AM, Jan Krajdl s...@spamik.cz wrote: Bump. Does anybody have any idea? It's quite critical for me and I don't know what to try

Re: [Gluster-users] QEMU-GlusterFS native integration demo video

2012-08-28 Thread Bryan Whitehead
I'd just like to also state this is fantastic. I hope the performance will match. On Tue, Aug 28, 2012 at 2:34 AM, Fernando Frediani (Qube) fernando.fredi...@qubenet.net wrote: Thanks for sharing it with us Bharata. I saw you have two nodes. Have you done any performance tests and if so how

Re: [Gluster-users] Peer Rejected (Connected) how to resolve

2012-08-24 Thread Bryan Whitehead
Check your networking. It is likely one or more hosts is rejecting connections. On Thu, Aug 23, 2012 at 8:01 PM, 符永涛 yongta...@gmail.com wrote: Dear experts, I'm using glusterfs 3.2.5 and I got a cluster of 6 peers. Now one of the peers says all the other 5 peers are in Peer Rejected (Connected)
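
A quick connectivity check along those lines, assuming the default port (glusterd listens on TCP 24007) and a hypothetical peer hostname:

    gluster peer status                 # run on the node reporting "Peer Rejected"
    telnet peer-hostname 24007          # confirm glusterd on each peer is actually reachable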

Re: [Gluster-users] Gluster failure testing

2012-08-15 Thread Bryan Whitehead
Are you using ext4 with redhat/centos? There is a previous thread that shows some kind of bug with ext4 that causes similar sounding problems. If you are using ext4, try using xfs. On Tue, Aug 14, 2012 at 11:12 PM, Brian Candler b.cand...@pobox.com wrote: On Tue, Aug 14, 2012 at 08:19:27PM

Re: [Gluster-users] question about list directory missing files or hang

2012-08-14 Thread Bryan Whitehead
Can you post more details, like gluster volume info, gluster peer status, output of mount, and df? On Mon, Aug 13, 2012 at 10:42 PM, 符永涛 yongta...@gmail.com wrote: Hi all, can anyone help? More information about this issue. For example, if I create abc.zip by touch abc.zip then run ls, it

[Gluster-users] RDMA high cpu usage and poor performance

2012-07-12 Thread Bryan Whitehead
I see both glusterfsd and glusterfs eat a good 70-100% of CPU while dd runs (see below) [root@lab0 ~]# gluster volume info Volume Name: testrdma Type: Replicate Volume ID: bf7b42aa-5680-4f5c-8027-d0a56cc5e65d Status: Started Number of Bricks: 1 x 2 = 2 Transport-type: rdma Bricks: Brick1:

Re: [Gluster-users] BUG: 764964 (dead lock)

2012-05-10 Thread Bryan Whitehead
Can you explain where glusterfs is being used? Is this lockup happening on a VM running on a file-based disk image on top of gluster? Is gluster itself causing this timeout? On Wed, May 9, 2012 at 6:59 PM, chyd c...@ihep.ac.cn wrote: Hi all, I'm encountering a lockup problem many times when

Re: [Gluster-users] Performance gluster 3.2.5 + QLogic Infiniband

2012-05-05 Thread Bryan Whitehead
, Bryan Whitehead wrote: I'm confused, you said everything works ok (IPoIB) but later you state you are using RDMA? Can you post details of your setup? Maybe the output from gluster volume info volumename? On Sat, Apr 21, 2012 at 1:40 AM, Michael Mayer mich...@mayer.cx wrote: Hi all, thanks

Re: [Gluster-users] Performance gluster 3.2.5 + QLogic Infiniband

2012-04-24 Thread Bryan Whitehead
glusterfsd so I can test this? I guess so, since it's just a tcp mount? On Wed, Apr 11, 2012 at 1:43 PM, Harry Mangalam harry.manga...@uci.edu wrote: On Tuesday 10 April 2012 15:47:08 Bryan Whitehead wrote: with my infiniband setup I found my performance was much better by setting up a TCP

Re: [Gluster-users] Replace-Brick usage

2012-04-24 Thread Bryan Whitehead
It is safe. On Tue, Apr 24, 2012 at 8:45 AM, Philip flip...@googlemail.com wrote: Hi, I'd like to replace servers in my cluster with the replace brick command. I'm not sure if this command is safe to use when there are changes on the FS e.g. new files or deletes. Can this command be safely

Re: [Gluster-users] Performance issues with striped volume over Infiniband

2012-04-20 Thread Bryan Whitehead
Max out the number of IO threads and apply a patch to make gluster more aggressive about spawning threads as in this thread: http://gluster.org/pipermail/gluster-users/2012-February/009590.html (the above thread is actually pretty good for getting performance out of gluster with infiniband (I use
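
Maxing out the IO thread count is a one-line volume option (VOLNAME is a placeholder; 64 is the commonly cited upper limit for this option):

    gluster volume set VOLNAME performance.io-thread-count 64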

Re: [Gluster-users] Gluster 3.2.6 for XenServer

2012-04-19 Thread Bryan Whitehead
Where are these RPM's? I wouldn't mind giving them a test drive myself. On Thu, Apr 19, 2012 at 7:46 AM, Gerald Brandt g...@majentis.com wrote: Hi, I have Gluster 3.2.6 RPM's for Citrix XenServer 6.0.  I've installed and mounted exports, but that's where I stopped. My issues are: 1.

Re: [Gluster-users] GlusterFS + Virtual Machine

2012-04-03 Thread Bryan Whitehead
Yes. I currently use glusterfs to replicate qcow2 files with running qemu-kvm. An annoying issue with the 3.2.x series is that the entire qcow2 file can get blocked if a rebuild is needed. The 3.3 series (still in beta) fixes this. On Tue, Apr 3, 2012 at 2:48 AM, s...@gmx.de wrote: Hey Guys, I want

Re: [Gluster-users] Slow performance from simple tar -x rm -r benchmark

2012-03-21 Thread Bryan Whitehead
...@davidcoulson.net wrote: On 3/20/12 2:47 AM, Bryan Whitehead wrote: I'm going to start off and say that I misstated, I must have been doing my *many-file* tests *inside* VM's running on top of glusterfs. I'll post a loopback test later this week. Can you repeat the test using NFS rather than Fuse

Re: [Gluster-users] Slow performance from simple tar -x rm -r benchmark

2012-03-21 Thread Bryan Whitehead
AM, David Coulson da...@davidcoulson.net wrote: Weird - Actually slower than fuse. Does the 'nolock' nfs mount option make a difference? On 3/21/12 1:22 PM, Bryan Whitehead wrote: [root@lab0-v3 ~]# mount -t nfs -o tcp,nfsvers=3 localhost:/images /mnt [root@lab0-v3 ~]# cd /mnt [root@lab0-v3

Re: [Gluster-users] Data consistency with Gluster 3.5

2012-03-13 Thread Bryan Whitehead
Are all the clocks in sync on the servers? You probably should configure memcache to be the banner cache (a quick search for OpenX banner cache shows that is an option). You can't have 4 clients all opening/writing to the same file at the same time. On Mon, Mar 12, 2012 at 6:55 AM, Sean Fulton

Re: [Gluster-users] Need advice for choosing proper glusterfs cluster hardware

2012-03-13 Thread Bryan Whitehead
No matter what you do, you will be limited by your network speed. Consider getting 10Gig network cards or Infiniband. All writes will need to hit both servers, so the max throughput you'll get is about 50MB/sec with a 1G card. You can get a full 100MB/sec by using the NFS/gluster connector to one

Re: [Gluster-users] Write performance in a replicated/distributed setup with KVM?

2012-03-02 Thread Bryan Whitehead
I'd try putting all hostnames in /etc/hosts. Also, can you post ping times between each host? On Fri, Mar 2, 2012 at 8:55 AM, Harald Hannelius harald.hannel...@arcada.fi wrote: On Fri, 2 Mar 2012, Brian Candler wrote: On Fri, Mar 02, 2012 at 05:25:18PM +0200, Harald Hannelius wrote:

Re: [Gluster-users] gluster volume set performance.io-thread-count N

2012-03-01 Thread Bryan Whitehead
, Bryan Whitehead dri...@megahappy.net wrote: How long does it take for gluster volume set performance.io-thread-count 64 (as an example) to propagate? I've noticed that it seems like mounted volumes don't get the performance boost until I restart glusterd on the boxes. Is this wrong

Re: [Gluster-users] GlusterFS: problem with direct-io-mode

2012-02-14 Thread Bryan Whitehead
Search the archives but I'm pretty sure direct I/O doesn't work over fuse without some patches. On Fri, Feb 10, 2012 at 5:11 AM, Ionescu, A. a.ione...@student.vu.nl wrote: Dear GlusterFS users, We are trying to use some C programs that rely on direct I/O operations on files residing on a

[Gluster-users] Distributed-Replicated adding/removing nodes

2012-02-13 Thread Bryan Whitehead
I have 3 servers, but want replica = 2. To do this I have 2 bricks on each server. Example output: Volume Name: images Type: Distributed-Replicate Status: Started Number of Bricks: 3 x 2 = 6 Transport-type: rdma Bricks: Brick1: lab0:/g0 Brick2: lab1:/g0 Brick3: lab2:/g0 Brick4: lab0:/g1
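
A sketch of how a volume with that brick layout would be created, inferring the ordering from the truncated volume info above; the order matters because consecutive bricks form each replica pair, and this ordering keeps every pair on two different servers:

    gluster volume create images replica 2 transport rdma \
        lab0:/g0 lab1:/g0 lab2:/g0 lab0:/g1 lab1:/g1 lab2:/g1
    # pairs: (lab0:/g0, lab1:/g0), (lab2:/g0, lab0:/g1), (lab1:/g1, lab2:/g1)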

Re: [Gluster-users] Rebuild Mirror

2010-04-01 Thread Bryan Whitehead
You need to read each one of the files to trigger syncing up the changed files. So do a find /path/to/gluster/mount -type f -exec tail {} > /dev/null \; That should bring everything into sync. On Thu, Apr 1, 2010 at 5:25 AM, Rafael Pappert raf...@pappert.biz wrote: Hello List, I'm evaluating the
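
The same idea is often written with stat instead of tail in the Gluster self-heal instructions of that era; a sketch, assuming a hypothetical fuse mount at /mnt/gluster:

    find /mnt/gluster -noleaf -print0 | xargs --null stat > /dev/null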

Re: [Gluster-users] gluster local vs local = gluster x4 slower

2010-03-29 Thread Bryan Whitehead
 subvolumes writebehind end-volume Any suggestions appreciated.  thx-    Jeremy On 3/26/2010 6:09 PM, Bryan Whitehead wrote: One more thought, looks like (from your emails) you are always running the gluster test first. Maybe the tar file is being read from disk when you do the gluster test

Re: [Gluster-users] gluster local vs local = gluster x4 slower

2010-03-26 Thread Bryan Whitehead
the same way direct disk does? thx-    Jeremy P.S. I'll be posting results w/ performance options completely removed from gluster as soon as I get a chance.    Jeremy On 3/24/2010 4:23 PM, Bryan Whitehead wrote: I'd like to see results with this: time ( tar xzf /scratch/jenos/intel