Added support for a magic GSON_TYPE_ADAPTER field in a class. When this field is present, its adapter is invoked automatically. The field must be declared on the class itself (not inherited from a supertype) and must be strongly typed as TypeAdapter<T>.
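A rough sketch of what a class using this hook might look like. The exact field modifiers and adapter API aren't spelled out here, so this stand-in TypeAdapter (string-based instead of Gson's JsonReader/JsonWriter streams) and the static-final declaration are assumptions for illustration only:

```java
// Minimal stand-in for com.google.gson.TypeAdapter, for illustration only.
abstract class TypeAdapter<T> {
    abstract String toJson(T value);   // stand-in for write(JsonWriter, T)
    abstract T fromJson(String json);  // stand-in for read(JsonReader)
}

class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    // The magic field: declared on the class itself (not inherited),
    // and strongly typed as TypeAdapter<Point>.
    static final TypeAdapter<Point> GSON_TYPE_ADAPTER = new TypeAdapter<Point>() {
        @Override String toJson(Point p) { return p.x + "," + p.y; }
        @Override Point fromJson(String s) {
            String[] parts = s.split(",");
            return new Point(Integer.parseInt(parts[0]), Integer.parseInt(parts[1]));
        }
    };
}
```

The point of the convention: a serializer can find the field reflectively by name and type, so the class carries its own adapter without any registration step.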
The underlying problem was that the doubleCapacity function dropped parent links when a doubling moved all nodes to the same side. The cause: AvlIterator destroys parent links as it walks, and we weren't restoring them with AvlBuilder. This change removes an incorrect optimization and fixes the problem.
Also moves LinkedHashTreeMap from test back into main.
This attempts to address issue 402, wherein subclassing ThreadLocal pins a reference to a class, which transitively pins the entire application in containers like Tomcat.
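A sketch of the pinning pattern and one common mitigation. This is illustrative, not the actual code from the change; the mechanism is that an anonymous ThreadLocal subclass is a class loaded by the application's class loader, and as long as a pooled container thread retains a reference to that ThreadLocal or its value, the loader can't be collected:

```java
class ThreadLocals {
    // Leak-prone pattern: the anonymous subclass is a new class loaded by the
    // application's class loader. A pooled container thread that still holds
    // this ThreadLocal's value after undeploy keeps that loader reachable.
    static final ThreadLocal<StringBuilder> SUBCLASSED = new ThreadLocal<StringBuilder>() {
        @Override protected StringBuilder initialValue() { return new StringBuilder(); }
    };

    // Mitigation sketch: a plain ThreadLocal instance with explicit lazy
    // initialization, so no application-defined ThreadLocal subclass exists.
    static final ThreadLocal<StringBuilder> PLAIN = new ThreadLocal<StringBuilder>();

    static StringBuilder getBuffer() {
        StringBuilder sb = PLAIN.get();
        if (sb == null) {
            sb = new StringBuilder();
            PLAIN.set(sb);
        }
        return sb;
    }
}
```

Both forms behave identically for lookups; the difference is only in which class loader defines the ThreadLocal's concrete class.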
The most interesting optimization is replacing ArrayDeque with a manual linked list that reuses the nodes' 'parent' field. Together these optimizations save about 20%.
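A rough sketch of the idea (not the actual implementation): instead of pushing nodes onto an ArrayDeque during traversal, thread the stack through the nodes' own 'parent' field, so the walk allocates nothing. The Node class and the in-order sum are hypothetical; note that this technique clobbers parent links, which is exactly why such an iterator is destructive:

```java
class Node {
    final int value;
    Node left, right, parent;
    Node(int value) { this.value = value; }
}

class TreeWalk {
    // In-order traversal using the 'parent' field as an intrusive stack:
    // no ArrayDeque, no per-push allocation.
    static int sum(Node root) {
        Node stackTop = null;  // head of the intrusive stack
        Node n = root;
        int total = 0;
        while (n != null || stackTop != null) {
            while (n != null) {      // push the left spine
                n.parent = stackTop; // reuse 'parent' as the stack link
                stackTop = n;
                n = n.left;
            }
            Node popped = stackTop;  // pop
            stackTop = popped.parent;
            total += popped.value;
            n = popped.right;
        }
        return total;
    }
}
```

The trade-off: the parent links are garbage afterward and must be rebuilt before the tree is used as a tree again.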
Compared to LinkedTreeMap, this is slower for small (size=5) maps: 124% slower to get() and 33% slower to create and populate. It's a win for large (size=500) maps: 46% faster to get() but 8% slower to create and populate. And it's a big win for very large (size=50,000) maps: 81% faster to get() and 46% faster to create and populate.
http://microbenchmarks.appspot.com/run/limpbizkit@gmail.com/com.google.common.collect.MapBenchmark
I'm going to follow this up with some simple optimizations: caching local fields and simplifying access. That should narrow the performance gap.
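For concreteness, a hedged sketch of what "caching local fields" means; the class and method names are made up for illustration:

```java
class Totals {
    private int[] data = {1, 2, 3, 4};

    // Re-reads the 'data' field on every loop iteration.
    int slow() {
        int total = 0;
        for (int i = 0; i < data.length; i++) {
            total += data[i];
        }
        return total;
    }

    // Copies the field into a local once; subsequent reads are local reads.
    // (A JIT may do this automatically, but interpreters often don't.)
    int fast() {
        int[] d = data;
        int total = 0;
        for (int i = 0; i < d.length; i++) {
            total += d[i];
        }
        return total;
    }
}
```

Both methods compute the same result; the difference is only how many field loads the loop performs.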
Not yet adopted in our code.
Known critical bugs:
- throws ClassCastException when get() is called with a non-comparable key
- throws NullPointerException on get(null)
This makes Hotspot slower. My before/after measurements with ParseBenchmark, in microseconds:
TWEETS: 350 -> 370 (+6%)
READER_SHORT: 77 -> 76 (-1%)
READER_LONG: 870 -> 940 (+8%)
But it makes Dalvik faster by a greater margin. Before/after measurements, in milliseconds:
TWEETS: 25 -> 20 (-20%)
READER_SHORT: 5.6 -> 4.7 (-16%)
READER_LONG: 52 -> 47 (-10%)
It's a net win because we're saving a greater fraction of time, and because we're helping the platform that needs the most help. We're paying microseconds on Hotspot to gain milliseconds on Dalvik.