Allocating Camera memory faster on Android Part Two


In Part 1, I talked about how to avoid GCs so you can get reasonable speeds when taking frames from the Android Camera's onPreviewFrame callback and processing them without dropping any. The approach was basically as follows (let's call this Method A; a rough sketch follows the list):

1. Get faster memory allocation for the small pieces of memory (byte[]) using the tricks mentioned there. The number of byte[] needed for the slowest device is the maximum number of frames to process (N).

2. Hand the frame from onPreviewFrame off to another thread.

3. The other thread processes the data and then gives the buffer back to the Camera.
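Here is a minimal sketch of Method A, assuming the legacy android.hardware.Camera API (setPreviewCallbackWithBuffer / addCallbackBuffer); processFrame() is a hypothetical stand-in for the real work, and error handling and shutdown are omitted:

```java
import android.hardware.Camera;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class MethodA implements Camera.PreviewCallback {
    private final Camera camera;
    private final BlockingQueue<byte[]> pending;  // frames handed off to the worker

    MethodA(Camera camera, int n, int singleFrameSize) {
        this.camera = camera;
        this.pending = new ArrayBlockingQueue<>(n);
        // Pre-allocate all N byte[] up front (ideally with the large-chunk-first
        // trick from Part 1) and register every one of them with the Camera.
        for (int i = 0; i < n; i++) {
            camera.addCallbackBuffer(new byte[singleFrameSize]);
        }
    }

    void start() {
        camera.setPreviewCallbackWithBuffer(this);
        // Worker thread: process each frame, then return its buffer to the Camera
        // so it can be filled again.
        new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    byte[] frame = pending.take();
                    processFrame(frame);              // hypothetical processing step
                    camera.addCallbackBuffer(frame);  // buffer goes back into rotation
                }
            } catch (InterruptedException ignored) {
            }
        }, "frame-worker").start();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Just hand the frame to the worker; this buffer stays out of the Camera's
        // rotation until the worker returns it via addCallbackBuffer().
        pending.offer(data);
    }

    private void processFrame(byte[] frame) { /* ... real work ... */ }
}
```

With all N buffers registered, the Camera always has a free byte[] to write into, so no frames are dropped as long as processing catches up within N frames.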

It turns out there is another way to do it that's much faster, Method B (a sketch follows the list):

1. Get faster memory allocation for the small pieces of memory (byte[]) using the tricks mentioned there. The number of byte[] needed for the slowest device is only about 10.

2. Copy the frame from onPreviewFrame into a shared large ByteBuffer queue that's big enough to fit the maximum number of frames to process (create this queue with ByteBuffer.allocateDirect(N * singleFrameSize)), then give the byte[] back to the Camera right away.

3. Another thread manages the queue independently of the onPreviewFrame thread (processing frames, dropping frames under pressure, etc.).
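And a minimal sketch of Method B under the same assumptions. The big direct ByteBuffer is treated as a simple ring of N frame-sized slots; synchronization around slots that are still being processed is omitted for brevity, and processSlot() is hypothetical:

```java
import android.hardware.Camera;
import java.nio.ByteBuffer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class MethodB implements Camera.PreviewCallback {
    private static final int CALLBACK_BUFFERS = 10;   // only ~10 byte[] on the Java heap

    private final Camera camera;
    private final ByteBuffer queue;                   // N * singleFrameSize, off-heap
    private final int singleFrameSize;
    private final int slots;                          // N
    private final BlockingQueue<Integer> readySlots;  // slot indices awaiting processing
    private int writeSlot = 0;                        // only touched on the camera thread

    MethodB(Camera camera, int n, int singleFrameSize) {
        this.camera = camera;
        this.singleFrameSize = singleFrameSize;
        this.slots = n;
        this.queue = ByteBuffer.allocateDirect(n * singleFrameSize);
        this.readySlots = new ArrayBlockingQueue<>(n);
        for (int i = 0; i < CALLBACK_BUFFERS; i++) {
            camera.addCallbackBuffer(new byte[singleFrameSize]);
        }
    }

    void start() {
        camera.setPreviewCallbackWithBuffer(this);
        // Worker thread drains the queue independently of the camera thread.
        new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    int slot = readySlots.take();
                    processSlot(queue, slot * singleFrameSize, singleFrameSize);
                }
            } catch (InterruptedException ignored) {
            }
        }, "queue-worker").start();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Drop the frame if the worker is too far behind (all slots in use).
        if (readySlots.remainingCapacity() > 0) {
            int slot = writeSlot;
            writeSlot = (writeSlot + 1) % slots;
            // Copy the frame into its slot of the big direct buffer.
            ByteBuffer dst = queue.duplicate();
            dst.position(slot * singleFrameSize);
            dst.put(data, 0, singleFrameSize);
            readySlots.offer(slot);
        }
        // Either way, the small byte[] goes straight back to the Camera.
        camera.addCallbackBuffer(data);
    }

    private void processSlot(ByteBuffer buf, int offset, int length) { /* ... real work ... */ }
}
```

The key difference from Method A is that the Camera only ever cycles through roughly 10 small heap arrays; the N-frame backlog lives in one direct allocation outside the Java heap.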

Comparison on cold launch: 

Method A: requires creating N byte[] in Java, for a total of N * singleFrameSize bytes.

Method B: requires creating 10 byte[] in Java plus an (N + 1) * singleFrameSize direct memory block, for a total of (N + 11) * singleFrameSize bytes.

Even when done right, Method A can trigger a lot of GCs, costing roughly 0.1 s per allocation on average. For 180 frames, and assuming only the last 20% of allocations still trigger a GC thanks to the large-chunk-first trick, that comes to about 3 s. With Method B, the allocation cost is roughly 0.1 s * 11, so it only takes about 1 s.
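Spelling that out, taking the ~0.1 s per GC-triggering allocation figure above at face value:

- Method A: 180 frames × 20% that still GC × ~0.1 s ≈ 3.6 s, i.e. on the order of 3 s.
- Method B: ~11 allocations × ~0.1 s ≈ 1.1 s, i.e. about 1 s.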