So, I’ve got the camera working in video capture mode, at 320×240 and lower resolutions. The colorspace conversion is done by IJG libjpeg, with some minor byte swapping in my code.
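As a rough illustration of the “minor byte swapping” step, here is a sketch assuming the driver hands out 16-bit pixels with the two bytes in the wrong order; the actual pixel format the A1200’s camera uses may well differ:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: swap the two bytes of each 16-bit pixel in place.
 * This assumes the sensor delivers 16-bit samples in the opposite
 * endianness from what libjpeg expects; the real format may differ. */
static void swap_pixel_bytes(uint16_t *buf, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++)
        buf[i] = (uint16_t)((buf[i] >> 8) | (buf[i] << 8));
}
```

A pass like this over the frame buffer, before feeding the data to the JPEG compressor, is usually all the “conversion glue” that is needed on the application side.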
Here’s what I get when the frames are concatenated together:

(The embedded video appears to have been lost when the blog was migrated.)
Now comes the interesting part. The image captured from raw camera data is pretty dark and washed out. I’ve checked the colorspace conversion routines and they are fine — this is simply what the camera puts out. The phone itself, by contrast, shows vivid colors under the same conditions, and the image is not dark at all. So there must be some brightness and color post-processing code doing all these fancy things. I’m not an expert in image processing, but such post-processing looks like the only way the phone could produce such nice pictures.
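Just to make the idea concrete, here is a minimal sketch of the kind of gain-and-offset pass that could lift a dark frame. The function name and parameters are mine; whatever the phone’s firmware actually does is almost certainly fancier (gamma correction, white balance, saturation boost):

```c
#include <stddef.h>
#include <stdint.h>

/* Clamp an intermediate value back into the 0..255 byte range. */
static uint8_t clamp_u8(int v)
{
    return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
}

/* Illustrative brightening pass over packed 8-bit RGB data:
 * out = in * gain_num / gain_den + offset, clamped per byte.
 * Integer-only on purpose -- see the note on FP performance below. */
static void brighten_rgb(uint8_t *rgb, size_t nbytes,
                         int gain_num, int gain_den, int offset)
{
    for (size_t i = 0; i < nbytes; i++)
        rgb[i] = clamp_u8(rgb[i] * gain_num / gain_den + offset);
}
```

For example, `brighten_rgb(buf, len, 3, 2, 10)` would apply a 1.5x gain plus a small constant lift, using only integer arithmetic.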
But there is one disappointing fact: the image-writing bottleneck. If you capture video on an A1200, you may notice that the resulting stream is not uniform — most frames run smoothly, but occasionally (every sixth frame or so) a frame is dropped. Now I know why that happens: at that very moment the stream is being written to storage, be it the SD/MMC card or internal phone memory, and the write forces frames to be dropped.
I’m not ready to create Motion-JPEG files yet, but it looks like I’ll need multiple buffers, plus heavily optimized code to copy memory regions from the mmap’ed driver area to userspace.
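The multiple-buffer idea could look something like the sketch below: a small ring of frame slots that the capture loop fills from the mmap’ed driver area, while a separate writer thread drains them to storage so capture never blocks on SD/MMC I/O. All names here are mine, not from any real driver, and the locking/condition variables a real two-thread version needs are omitted for brevity:

```c
#include <stddef.h>
#include <string.h>

#define NSLOTS     4
#define FRAME_SIZE (320 * 240 * 2)  /* 16-bit pixels at 320x240 */

/* A ring of preallocated frame slots. The capture side pushes frames
 * copied out of the driver buffer; the writer side pops them and
 * writes to the card at its own pace. */
struct frame_ring {
    unsigned char slots[NSLOTS][FRAME_SIZE];
    size_t head;   /* next slot to fill (capture side) */
    size_t tail;   /* next slot to drain (writer side) */
    size_t count;  /* number of filled slots           */
};

/* Capture side: copy one frame in; if the ring is full, the frame is
 * dropped -- exactly the stutter described above, but now bounded by
 * how fast the writer drains the ring, not by every single write. */
static int ring_push(struct frame_ring *r, const unsigned char *frame)
{
    if (r->count == NSLOTS)
        return -1;  /* dropped frame */
    memcpy(r->slots[r->head], frame, FRAME_SIZE);
    r->head = (r->head + 1) % NSLOTS;
    r->count++;
    return 0;
}

/* Writer side: hand back the oldest filled slot, or NULL if empty. */
static const unsigned char *ring_pop(struct frame_ring *r)
{
    const unsigned char *p;
    if (r->count == 0)
        return NULL;
    p = r->slots[r->tail];
    r->tail = (r->tail + 1) % NSLOTS;
    r->count--;
    return p;
}
```

With something like this, a storage write only costs a dropped frame when the card stalls long enough to fill all the slots, rather than on every flush.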
It is a shame, but this is the first time in my life I have faced real, noticeable performance degradation from using double-precision calculations instead of integer-only ones. Sometimes you need to see your code running on a low-speed CPU to understand that every fancy thing has its price.
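The usual cure on an FPU-less CPU like this phone’s ARM core, where every double operation is emulated in software, is fixed-point arithmetic. A minimal Q16.16 sketch (16 integer bits, 16 fractional bits — my own formatting choice, not anything from the phone’s code):

```c
#include <stdint.h>

/* Q16.16 fixed point: the value x is stored as the integer x * 65536. */
typedef int32_t q16_16;

#define Q16_ONE (1 << 16)

static q16_16 q16_from_int(int v)  { return (q16_16)(v << 16); }
static int    q16_to_int(q16_16 v) { return (int)(v >> 16); }

/* Multiply two Q16.16 values. The intermediate product needs 64 bits
 * to avoid overflow before the final shift back down. */
static q16_16 q16_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);
}
```

A single `q16_mul` is one integer multiply and a shift — orders of magnitude cheaper than a software-emulated double multiply, at the cost of range and precision you have to budget yourself.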