Re: [Sursound] old Ambisonia links -

2014-05-14 Thread Michael Chapman
 On 2014-05-13, Jörn Nettingsmeier wrote:

 marc, i'd rather mirror the .amb files directly on my server than
 invest the time to maintain a permanent tracking seeder (which i used
 to do many years ago, but it kept breaking). let me know if you need
 the space and/or bandwidth.

 If you give me permission, I could ask some Pirate Party people
 about establishing a tracker too.
 --

Jörn and I were suggesting _no_ seeds.

Rather, that the ability to simply download would enhance 'outreach' (and,
more bluntly, not be such a f-ing pain).

Jörn has offered to solve the bandwidth problem ( _if_ there is one).

Michael




Re: [Sursound] old Ambisonia links -

2014-05-14 Thread Marc Lavallée
On Tue, 13 May 2014 16:51:49 +0200,
Jörn Nettingsmeier netti...@stackingdwarves.net wrote:
 marc, i'd rather mirror the .amb files directly on my server than
 invest the time to maintain a permanent tracking seeder (which i used
 to do many years ago, but it kept breaking). let me know if you need
 the space and/or bandwidth.
 
 best, 
 jörn

Jörn,

Thanks a lot for your generous offer. Last December, Ben Bloomberg also
offered hosting at MIT. So I'm thinking about a hosting model with
HTTP mirrors for more than 35 GB...

With the HTTP mirroring model, there could be a possibility to
auto-host some files. Similar to the BitTorrent model, the idea
would be to encourage participation in the mirroring effort, and let a
download manager decide which source is best. Since there are ways to
set up a hybrid system with BitTorrent and direct HTTP downloads, maybe
it'd be possible to benefit from both protocols, for robustness.
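
For example, BEP 19 "web seeding" lets a single .torrent also list plain
HTTP mirrors as sources, so clients fall back to direct downloads when
few peers are seeding. A rough sketch, assuming mktorrent is available;
the tracker and mirror URLs are only placeholders:

  mktorrent -a udp://tracker.example.org:1337/announce \
            -w http://mirror1.example.org/ambisonia/ \
            -w http://mirror2.example.org/ambisonia/ \
            -o ambisonia.torrent ambisonia/

Clients that understand web seeds then mix HTTP mirrors and peers
automatically.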

--
Marc



Re: [Sursound] old Ambisonia links -

2014-05-14 Thread Michael Chapman


 Jörn,

 Thanks a lot for your generous offer. Last December, Ben Bloomberg also
 offered hosting at MIT. So I'm thinking about a hosting model with
 HTTP mirrors for more than 35 GB...

 With the HTTP mirroring model, there could be a possibility to
 auto-host some files. Similar to the BitTorrent model, the idea
 would be to encourage participation in the mirroring effort, and let a
 download manager decide which source is best. Since there are ways to
 set up a hybrid system with BitTorrent and direct HTTP downloads, maybe
 it'd be possible to benefit from both protocols, for robustness.


It's over ten years since I did anything like this ... but

If you keep ambisonia.com and/or www.ambisonia.com at York and create
downloads.ambisonia.com, then there is a way to use DNS to distribute
requests to downloads.ambisonia.com between MIT and Jörn (and York?).
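
Something like this zone snippet would do it (purely illustrative; the
addresses are RFC 5737 documentation examples, not real mirror IPs):

  downloads.ambisonia.com.  300  IN  A  192.0.2.10     ; e.g. MIT
  downloads.ambisonia.com.  300  IN  A  198.51.100.20  ; e.g. Jörn's server
  downloads.ambisonia.com.  300  IN  A  203.0.113.30   ; e.g. York

Most clients try the addresses in the order they receive them and fall
back to the next one if a connection fails.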

The quick and dirty way (if you are using Apache) is not to use DNS but
to set up conditional redirects in the Apache config file (conditional on
the geographic origin of the client and/or estimated server loads).
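
For instance (an illustrative sketch only, assuming mod_rewrite and
mod_geoip are loaded; the mirror hostname is a placeholder):

  GeoIPEnable On
  RewriteEngine On
  # send European clients to the European mirror, keep the rest local
  RewriteCond %{ENV:GEOIP_CONTINENT_CODE} ^EU$
  RewriteRule ^/ambisonia/(.*)$ http://eu-mirror.example.org/ambisonia/$1 [R=302,L]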

DNS has the advantage that it allows a list of IP addresses in suggested
order, so it is more robust (if one of the two/three servers is playing up).

All you need is root access to the York server ;-)

Do contact me directly if you want me to dig out some stuff, rather than
top-of-the-head memories ...

Michael



Re: [Sursound] old Ambisonia links -

2014-05-14 Thread Marc Lavallée

Hi Michael.

Thanks for your help.

I don't want to flood the list about this topic,
but here's what I would use:

http://mirrorbrain.org/

--
Marc




[Sursound] TetraMic and Jaunt VR in Time, Gizmodo and Engadget (Virtual Reality Recording System)

2014-05-14 Thread Len Moskowitz
Jaunt VR has developed a virtual reality camera. They're using TetraMic for 
recording audio, decoding with headtracking for playback over headphones and 
speakers. For video playback they're using the Oculus Rift.


http://time.com/49228/jaunt-wants-to-help-hollywood-make-virtual-reality-movies/
http://gizmodo.com/meet-the-crazy-camera-that-could-make-movies-for-the-oc-1557318674
http://www.engadget.com/2014/04/03/jaunt-vr/


Len Moskowitz (mosko...@core-sound.com)
Core Sound LLC
www.core-sound.com
Home of TetraMic



[Sursound] Tetra Mic (for sale)

2014-05-14 Thread Navid Navab
I have a TetraMic for sale, along with its many cables, adaptors, and
accessories.

The mic has been used on only a few occasions and has been sitting in a
secure box most of the time.

I'm located in Montreal but can ship anywhere.

If interested, please contact me off-list.

thanks,
-Navid


Re: [Sursound] TetraMic and Jaunt VR in Time, Gizmodo and Engadget (Virtual Reality Recording System)

2014-05-14 Thread Stefan Schreiber

Len Moskowitz wrote:

Jaunt VR has developed a virtual reality camera. They're using 
TetraMic for recording audio, decoding with headtracking for playback 
over headphones and speakers. For video playback they're using the 
Oculus Rift.


http://time.com/49228/jaunt-wants-to-help-hollywood-make-virtual-reality-movies/ 

http://gizmodo.com/meet-the-crazy-camera-that-could-make-movies-for-the-oc-1557318674 




Quoting from this link:

"A close-up of the 3D microphone that allows for 3D spacialized audio.
If you're wearing headphones, there's actually headtracking for the
Oculus to tell which direction you're looking--when you change your
view, the sound mix will also change to match, in order to keep the
sound in the same space."



I have suggested this possibility before, for example here:

http://comments.gmane.org/gmane.comp.audio.sursound/5172

(obviously thinking of some audio-only application, without any
video. It was already clear that the Oculus Rift included all necessary
hardware for HT audio decoding, although Oculus didn't do this in 2012
or 2013.)


This suggestion led (by influence or coincidence) to some further 
developments, which could be followed on the sursound list:


http://comments.gmane.org/gmane.comp.audio.sursound/5387

To be frank: at least two groups of people on this list have
demonstrated head-tracked decoding of FOA recently, and before Jaunt
VR, in a very similar fashion. I could name Hector Centeno and
Bo-Erik Sandholm (Bo-Erik introduced the external HT hardware, whereas
the Android app by Hector already existed), as well as Matthias
Kronlachner at IEM Graz. If not more people...
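
(The head-tracking part of such a decoder is conceptually simple: the
B-format signals are counter-rotated by the head yaw before the binaural
rendering stage. A minimal Python/numpy sketch, only to illustrate the
idea and not anyone's actual implementation; it assumes traditional
W, X, Y, Z channels and yaw in radians:

  import numpy as np

  def rotate_foa_yaw(w, x, y, z, yaw):
      """Counter-rotate a block of B-format samples by the head yaw."""
      c, s = np.cos(yaw), np.sin(yaw)
      # W and Z are invariant under rotation about the vertical axis;
      # only X and Y mix, via a plain 2D rotation.
      x_r = c * x + s * y
      y_r = -s * x + c * y
      return w, x_r, y_r, z

In practice the yaw is read from the head tracker and the rotation is
updated, or interpolated, once per audio block.)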


Far from complaining about this, I would welcome this influence or
coincidence. (The VR movie people and our list colleagues use
basically the same HT decoder technology, and maybe even the same
decoding software.) It all shows that Ambisonics is mature enough to be
used even for some very sophisticated applications, such as cinematic
VR demonstrations... (We are all using the power of HT decoded FOA,
in VR worlds, VR movies, and maybe even for 3D audio music
recordings... ;-) )


Seeing the recent and ongoing development activities in areas like UHD,
MPEG-H 3D Audio aka ISO/IEC 23008-3, gaming, VR, 3D movies and now VR
movies (not a technical term yet), it is probably a good
question why surround sound/3D audio is used in so many areas, but
still not for (published) music recordings. (This situation looks
increasingly unbelievable.)



Anyway: Congratulations to Len and TetraMic, who are involved in these 
activities!



Now, I have some suggestions for further improvement, for our colleagues
and also for Jaunt VR/TetraMic:


If the reference quality for HT binaural systems is roughly this,

http://smyth-research.com/technology.html,

you would still have to employ personalized HRTF (HRIR/BRIR) data sets
in your decoder. (HRIR is anechoic. BRIR includes room acoustics.)


It is probably possible to calculate both HRIR and BRIR data sets from
3D scans, or even from plain photographs. (This has been done at least
in the case of HRIR/HRTF data sets, derived from optical 3D scans or
photographs of the torso/head/ear shapes. There is probably still ample
room to improve the existing methods for calculating HRIRs/HRTFs from
optical data. For example, you could compare your calculation algorithm
against corresponding real-world acoustical measurements and follow an
evolutionary improvement strategy, matching calculation results and
actual measurements more and more closely with each algorithm
generation. Just a quick idea...)


To calculate a (reverberant) BRIR data set (the transfer function for a
listener in a room), you could maybe apply some form of acoustical
raytracing.
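
(As a very rough illustration of that idea: the classic image-source
method, a close cousin of raytracing, mirrors the source across the walls
of a shoebox room and adds each image's delayed, attenuated impulse to
the response. A toy Python sketch with only first-order reflections and
made-up parameters; real BRIR synthesis would also need the HRIR for each
arrival direction:

  import numpy as np

  def shoebox_ir_first_order(room, src, mic, fs=48000, c=343.0,
                             absorption=0.3):
      """Toy impulse response: direct path plus 6 first-order reflections."""
      images = [tuple(src)]
      for axis, dim in enumerate(room):
          for wall in (0.0, dim):
              img = list(src)
              img[axis] = 2.0 * wall - src[axis]  # mirror across this wall
              images.append(tuple(img))
      ir = np.zeros(int(0.1 * fs))  # 100 ms is plenty for a toy example
      for n, img in enumerate(images):
          d = np.linalg.norm(np.subtract(img, mic))
          gain = (1.0 if n == 0 else 1.0 - absorption) / d
          idx = int(round(d / c * fs))
          if idx < len(ir):
              ir[idx] += gain
      return ir

Higher reflection orders, frequency-dependent absorption and real room
geometry are of course where the actual work is.)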


It would be far easier to calculate personalized HRIR/BRIR data
sets than to measure them. (Acoustical full-sphere measurements
would require measuring hundreds or thousands of different positions
over a full, or at least half, 3D sphere.)
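
(Once such a data set exists, the usual way to apply it is the "virtual
loudspeaker" approach: decode the rotated B-format to a handful of
virtual speaker feeds and convolve each feed with the left/right HRIR for
that direction. Another minimal numpy sketch, with an assumed horizontal
square of virtual speakers and a simple first-order decode; none of this
is Jaunt's actual signal chain:

  import numpy as np
  from scipy.signal import fftconvolve

  def foa_to_binaural(w, x, y, hrirs):
      """hrirs: one (left, right) HRIR pair per virtual speaker, all of
      the same length. Height (Z) is ignored in this horizontal-only toy."""
      azimuths = np.radians([45.0, 135.0, 225.0, 315.0])  # square layout
      left, right = 0.0, 0.0
      for az, (h_l, h_r) in zip(azimuths, hrirs):
          # Basic first-order decode gain for one virtual speaker.
          feed = 0.5 * (np.sqrt(2.0) * w + np.cos(az) * x + np.sin(az) * y)
          left = left + fftconvolve(feed, h_l)
          right = right + fftconvolve(feed, h_r)
      return left, right

Swapping in a personalized HRIR set only changes the hrirs argument; the
rest of the chain stays the same.)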



Besides the suggestion to investigate the use of individual HRIRs/HRTFs,
I have a direct question for Jaunt VR:


What specific set of HRIRs/HRTFs (or BRIRs?) are you currently using as
part of your Ambisonics to head-tracked binaural decoder?


(I would imagine that you will have tested some existing collections
and chosen a specific set according to your listening results.
Because you are using data sets, and probably also software, of other
people/parties, I believe it would be fair enough to answer this question.)



Best regards,

Stefan

P.S.: If possible, I would also be curious to hear what HT
update frequency you are using for the audio decoder, and maybe to ask
some other questions.







