Re: [pygame] Possible to detect collision with an image?
In that example, you need to detect a collision with any one of two triangles or a circle, excluding the other two. That could be a useful function; it's not on the wiki. Is that enough information? Or, if you need an optimization, there are some strategies on the wiki for only checking collisions with nearby objects. Are they adequate?

On Fri, Oct 7, 2011 at 4:51 PM, Florian Krause siebenhundertz...@gmail.com wrote:

Well, you could make the background of the image transparent and then create a mask from the pygame surface. This one can be checked for collisions.

Florian

On Fri, Oct 7, 2011 at 11:44 PM, Alec Bennett wrybr...@gmail.com wrote:

I'm using an image with transparency as a Sprite. The image has an irregular shape: http://sinkingsensation.com/stuff/shape.png

I'd like to detect collisions with it. My current strategy is to draw a polygon (pygame.draw.polygon) behind the image. That works, but it's difficult to make the polygon accurately reflect the image, and I need to make a lot of them.

So I'm wondering if there's some way to detect collisions with the image itself? Or maybe there's some clever way to draw a polygon based on an image? I don't need pixel-by-pixel accuracy; anything close would work.
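Florian's mask suggestion above can be sketched as follows. This is a minimal example assuming pygame is installed; the two drawn circles are stand-ins for images loaded with per-pixel alpha (such as shape.png loaded via pygame.image.load with convert_alpha):

```python
import pygame

# Two small surfaces with per-pixel alpha, standing in for loaded sprites.
a = pygame.Surface((10, 10), pygame.SRCALPHA)
pygame.draw.circle(a, (255, 0, 0, 255), (5, 5), 5)

b = pygame.Surface((10, 10), pygame.SRCALPHA)
pygame.draw.circle(b, (0, 255, 0, 255), (5, 5), 5)

# from_surface() builds a bitmask of the opaque pixels (alpha > 127 by
# default), so only the visible shape counts for collisions.
mask_a = pygame.mask.from_surface(a)
mask_b = pygame.mask.from_surface(b)

# The offset is b's position relative to a. overlap() returns the first
# overlapping point, or None if the opaque pixels never touch.
touching = mask_a.overlap(mask_b, (3, 3))    # circles overlap here
apart = mask_a.overlap(mask_b, (20, 0))      # fully separated

print(touching is not None)  # True
print(apart is None)         # True
```

For sprites, pygame also provides pygame.sprite.collide_mask, which does the offset arithmetic from the sprites' rects for you.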
[pygame] Surprising result of applying pygame.sndarray.make_sound() to output of numpy.fft.rfft()
Hello all,

I had a look in the mailing list archives for FFT and Fourier, and couldn't find anything that looked relevant. The following code has a surprising result: it outputs sound.wav twice. I'd expect some random-sounding noise the second time. There's nothing in the documentation for pygame.sndarray about make_sound understanding FFTs. How/why does it work this way? I've tried this with a few different sounds in case it was a property of the one sound.

I'm interested in playing with real-number Fourier coefficients to manipulate and produce sounds. However, I'm not sure what pygame is doing exactly, which makes it harder to work on. Perhaps it's something about FFTs I don't understand? Can anyone explain?

I've just noticed that the second playback is only on one side, whilst the first is on both. Curiouser and curiouser...

Russell

import time

import numpy as np
import pygame
import pygame.mixer as pm
import pygame.sndarray as sa

pygame.init()
pm.init(frequency=44100, size=16, channels=1, buffer=4096)

sample = sa.array(pm.Sound("sound.wav"))
fft = np.fft.rfft(np.array(sample, dtype=np.int32))

ch = None
while not ch:
    ch = pm.find_channel()
    time.sleep(1)

s = sa.make_sound(np.array(np.fft.irfft(fft), dtype=np.int16))
ch.queue(s)
time.sleep(len(sample) * 2 / 44100.0)

s = sa.make_sound(np.array(fft, dtype=np.int16))
ch.queue(s)
time.sleep(len(sample) * 2 / 44100.0)
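One half of this is unsurprising and can be checked in isolation: irfft(rfft(x)) reconstructs x up to floating-point error, which is why the first playback sounds identical to the original file. A minimal numpy-only sketch (the array here is random data standing in for the decoded samples):

```python
import numpy as np

# Stand-in for a decoded 16-bit sound buffer.
x = np.random.randint(-32768, 32767, size=1024).astype(np.int16)

# Round trip: forward real FFT, then inverse. Passing n=len(x) makes the
# inverse transform return exactly the original number of samples.
y = np.fft.irfft(np.fft.rfft(x.astype(np.int32)), n=len(x))

print(np.allclose(x, y))  # True: the transform pair is lossless
```

The puzzle in the code above is therefore only the second queue(), where the raw FFT coefficients are cast to int16 and played directly.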
Re: [pygame] Surprising result of applying pygame.sndarray.make_sound() to output of numpy.fft.rfft()
On Sat, Oct 8, 2011 at 5:04 PM, Russell Jones russell.jo...@gmail.com wrote:

[quoted message snipped]

It's definitely not make_sound understanding FFTs. You should make sure that the FFT you're taking actually looks like you expect it to, because I don't think it does.

Under the default mixer settings, sndarray.array returns a 2-dimensional array, indexed first on the sample number and second on the channel. numpy.fft.rfft takes the 1-dimensional FFT, so when you call it on this array, you're getting the FFT of thousands of length-2 arrays. I suspect what you really want is the FFT of two arrays that each have thousands of samples. Do you get what you want when you take the FFT of the transpose of the sound array? Check the shapes of all the arrays you're working with to see if they're what you expect.

You're setting channels=1 in mixer.init, but this is probably not taking effect if you're hearing the second playback on only one side. You should make a call to mixer.pre_init with channels=1 if you want a single audio channel.

-Christopher
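Christopher's point about shapes can be verified directly. A minimal numpy sketch (the array size is made up for illustration; random data stands in for a stereo sound buffer): rfft works along the last axis by default, so on an (n_samples, 2) array it transforms thousands of length-2 rows rather than two long signals. It also shows why the result still sounds like the original: the FFT of a length-2 pair [l, r] is [l + r, l - r], so "channel 0" of the mis-shaped result is just the mono mix.

```python
import numpy as np

# Stand-in for a stereo sound array: (n_samples, channels).
stereo = np.random.randint(-1000, 1000, size=(4096, 2)).astype(np.int32)

wrong = np.fft.rfft(stereo)          # transforms each length-2 row
right = np.fft.rfft(stereo, axis=0)  # transforms each channel

print(wrong.shape)   # (4096, 2): 4096 tiny 2-point transforms
print(right.shape)   # (2049, 2): two 4096-point transforms

# The 2-point FFT of [l, r] is [l + r, l - r], so the "wrong" result's
# first column is the sum of the channels -- a mono mix of the sound.
print(np.allclose(wrong[:, 0], stereo[:, 0] + stereo[:, 1]))  # True
```

Transposing the array, as suggested above, is equivalent to passing axis=0 here (followed by transposing back before make_sound).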