1. There is a FaceDetector class in the Android framework: you give it a bitmap and it returns an array of faces. Each face carries a midpoint (the point between the two eyes).
2. I will write a SurfaceView and make the camera draw its frames onto it. While the camera is running the preview and capturing preview frames, I will try to intercept them using callbacks.
3. In the callback function, I will attempt to create a bitmap and pass it along to the FaceDetector class. When I receive the midpoints, I will try to plot their positions on the SurfaceView. This is going to be time-critical, and I am not sure it will actually work.
Assuming the camera is held still, the midpoints returned by successive calls to FaceDetector should be almost unchanged, so I should be able to locate heads in the image while the preview is running.
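Under those assumptions, the preview-callback pipeline I have in mind might be sketched roughly like this. Only FaceDetector, Camera, and the preview-callback interface are real framework APIs here; decodeYuvToRgb565() and plotMidpoint() are my own placeholder names for the YUV conversion and the SurfaceView drawing, which are the hard parts:

```java
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.hardware.Camera;
import android.media.FaceDetector;

// Sketch only: wiring a camera preview callback to FaceDetector.
// decodeYuvToRgb565() and plotMidpoint() are hypothetical helpers.
public abstract class FaceFinder implements Camera.PreviewCallback {
    private static final int MAX_FACES = 5;

    public void onPreviewFrame(byte[] yuvData, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();
        // FaceDetector requires an RGB_565 bitmap whose width is even.
        Bitmap frame = decodeYuvToRgb565(yuvData, size.width, size.height);

        FaceDetector detector =
                new FaceDetector(size.width, size.height, MAX_FACES);
        FaceDetector.Face[] faces = new FaceDetector.Face[MAX_FACES];
        int found = detector.findFaces(frame, faces);

        PointF midpoint = new PointF();
        for (int i = 0; i < found; i++) {
            faces[i].getMidPoint(midpoint); // point between the eyes
            plotMidpoint(midpoint);         // mark it on the SurfaceView
        }
    }

    // To be filled in: the conversion and drawing steps.
    protected abstract Bitmap decodeYuvToRgb565(byte[] yuv, int w, int h);
    protected abstract void plotMidpoint(PointF p);
}
```

Whether findFaces() can be called on every frame without stalling the preview is exactly what I need to measure.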
I just found out that the creators of the two applications I mentioned in the first post, i.e. Layar and Wikitude, have exposed their APIs to the public. I will try to use them in my application and see if I get anywhere.
This is but a simple AR experiment among many possibilities. If you follow the Android Market, you will notice that augmented reality is the latest fad (even though it is only getting started, I think), and a number of released apps leverage it.
The latest I came to know of are:
SomaView (this is also an ADC2 challenge entry!)
I find myself grappling with a few problems:
1. The preview frames captured are in a YUV (YCbCr) format, which I cannot use directly to create an android.graphics.Bitmap object to feed to the android.media.FaceDetector class. I need to convert them to RGB_565. Converter routines are available, but I am not sure they can convert each image as fast as the camera delivers frames (15 fps).
2. The default face detector implementation takes about a minute to recognize up to 5 faces in a picture. Not nearly fast enough for my problem.
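For problem 1, a plain-Java conversion from the NV21 layout (Y plane followed by interleaved VU pairs) that the preview callback typically delivers, to packed RGB565 pixels, could look roughly like the sketch below. It uses an integer-approximated BT.601 transform; whether a loop like this keeps up with 15 fps on a handset is precisely the open question:

```java
public class Nv21ToRgb565 {
    /**
     * Converts an NV21 (YCrCb 4:2:0, VU interleaved) preview frame
     * into RGB565 pixels, one int (0..65535) per pixel.
     */
    public static int[] convert(byte[] nv21, int width, int height) {
        int[] out = new int[width * height];
        int frameSize = width * height;
        for (int j = 0; j < height; j++) {
            // Each chroma row is shared by two luma rows.
            int uvp = frameSize + (j >> 1) * width;
            int u = 0, v = 0;
            for (int i = 0; i < width; i++) {
                int y = nv21[j * width + i] & 0xFF;
                if ((i & 1) == 0) { // new VU pair every two pixels
                    v = (nv21[uvp++] & 0xFF) - 128;
                    u = (nv21[uvp++] & 0xFF) - 128;
                }
                // Integer approximation of BT.601 YCbCr -> RGB
                // (1436/1024 ~ 1.402, 352/1024 ~ 0.344, etc.)
                int r = y + ((1436 * v) >> 10);
                int g = y - ((352 * u + 731 * v) >> 10);
                int b = y + ((1815 * u) >> 10);
                r = Math.max(0, Math.min(255, r));
                g = Math.max(0, Math.min(255, g));
                b = Math.max(0, Math.min(255, b));
                // Pack as RGB565: 5 bits red, 6 bits green, 5 bits blue.
                out[j * width + i] =
                        ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3);
            }
        }
        return out;
    }
}
```

The per-pixel work is all integer shifts and clamps, so the real cost on a device will likely be the allocation of the output buffer per frame; reusing one buffer across callbacks would be the first optimization to try.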
subject: Writing mobile applications that augment Reality...