The application I am currently developing processes each frame using native
code, and it should record video as well. I tried the SDK for this purpose,
but certain restrictions prevented me from doing so, so I switched to the
NDK for the video-recording piece.
My algorithm is quite CPU-intensive, using up to 70% in the worst case.
Before I actually start working on a video recorder, I wanted to try the
following approach.
I will process the preview frames on one Android phone and send them to
another phone (running the same application, same model) for recording.
My questions are:
1. Should I try Wi-Fi instead of Bluetooth? I am developing the application
for API 8, so I don't have Wi-Fi Direct and would have to do some socket
programming myself, which would probably complicate things a bit for me,
since Bluetooth can easily be set up using the SDK.
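For what it's worth, plain TCP sockets over Wi-Fi are not much code. A minimal sketch of the idea in desktop Java (the length-prefix framing and the frame size are my own assumptions, and on API 8 / Java 6 you would replace try-with-resources with try/finally):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class FrameSocketDemo {
    public static void main(String[] args) throws Exception {
        // Receiving phone: listen on a TCP port, read one length-prefixed frame.
        ServerSocket server = new ServerSocket(0); // 0 = any free port (demo only)
        Thread receiver = new Thread(() -> {
            try (Socket s = server.accept();
                 DataInputStream in = new DataInputStream(s.getInputStream())) {
                int len = in.readInt();      // frame size is sent first
                byte[] frame = new byte[len];
                in.readFully(frame);         // then the raw frame bytes
                System.out.println("received " + frame.length + " bytes");
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        receiver.start();

        // Sending phone: connect and push one dummy NV21 preview frame.
        byte[] frame = new byte[320 * 240 * 3 / 2]; // 320x240 at 12 bits/pixel
        try (Socket s = new Socket("127.0.0.1", server.getLocalPort());
             DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
            out.writeInt(frame.length);
            out.write(frame);
        }
        receiver.join();
        server.close();
    }
}
```

On the phones you would use a fixed port and the receiver's IP address instead of the loopback address, but the framing logic stays the same.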
2. Will I be able to record the frames as a video at the receiving end? I
will receive each frame with certain metadata embedded in it and need to
record them on the other phone. I doubt I can do it using the SDK, so the
NDK along with FFmpeg seems to be the best choice? Any suggestions related
to this question are more than welcome.
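If FFmpeg does turn out to be the way, one option that sidesteps a lot of NDK work is to append the received frames to a single raw file and encode it afterwards. A hedged sketch, assuming 320x240 NV21 frames (the real size, rate, and pixel format come from your camera parameters, and the file name is a placeholder):

```java
import java.io.FileOutputStream;

public class FrameDump {
    public static void main(String[] args) throws Exception {
        int w = 320, h = 240, frames = 15;      // assumed preview size / count
        byte[] frame = new byte[w * h * 3 / 2]; // NV21: 12 bits per pixel
        // Append each received frame to one raw file; FFmpeg can encode it later.
        try (FileOutputStream out = new FileOutputStream("frames.nv21")) {
            for (int i = 0; i < frames; i++) {
                out.write(frame); // in the real app this is the received buffer
            }
        }
        System.out.println("wrote " + (frames * frame.length) + " bytes");
        // Then, with the FFmpeg command line (or equivalent libav calls via NDK):
        //   ffmpeg -f rawvideo -pix_fmt nv21 -s 320x240 -r 15 -i frames.nv21 out.mp4
    }
}
```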
3. Here comes the best part. I record the video at the lowest resolution,
which, after compression, takes no more than 14 MB for a 10-minute video. I
have to get at the raw frames to send them to the other end, then encode
and compress them there. Any ideas on the possible flooding of the
Bluetooth/Wi-Fi link by large raw frames?
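Rough arithmetic for the raw-frame bandwidth, assuming 320x240 NV21 preview frames at 15 fps (both numbers are guesses; plug in your actual camera parameters):

```java
public class BandwidthCheck {
    public static void main(String[] args) {
        int w = 320, h = 240;                  // assumed lowest preview size
        int fps = 15;                          // assumed preview frame rate
        long bitsPerFrame = (long) w * h * 12; // NV21/YV12: 12 bits per pixel
        long kbitPerSec = bitsPerFrame * fps / 1000;
        System.out.println("raw stream: " + kbitPerSec + " kbit/s");
        // ~13800 kbit/s: far above what Bluetooth 2.x sustains in practice
        // (roughly 2000 kbit/s), but within reach of 802.11g Wi-Fi, so raw
        // frames would flood Bluetooth unless compressed before sending.
    }
}
```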
Any other approaches or answers will be much appreciated. Thanks.
--
You received this message because you are subscribed to the Google
Groups "Android Developers" group.
To post to this group, send email to android-developers@googlegroups.com
To unsubscribe from this group, send email to
android-developers+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/android-developers?hl=en