The initial rumors about the A6, months before release, said it was headed in the quad-core direction, to thwart some of the more-cores-are-obviously-better marketing to the masses. That turned out to be a head fake. Subsequent reports on release day said it was dual-core, likely ARM Cortex-A15 based. But as the first information comes out of Chipworks and the benchmarking sites, a more compelling picture is emerging.
In my previous Smart Mobile SoC: Apple post, I pointed out that Apple's string of acquisitions included P. A. Semi and Intrinsity, both of which had unique EDA technology that could be coming to bear inside future processors. Some of our community members questioned whether that was pertinent, since much of the human capital has moved on to other destinations as buyout agreements expired. Whether or not the same bodies are still in Apple's employ, the impact of what Apple has learned about EDA is starting to become clear.
Where the A5X and its predecessors used ARM physical IP for the cores, the A6 is a radical departure. According to Chipworks, the processor cores in the A6 appear to be an optimized, hand-crafted layout, meaning the dual cores are neither a Cortex-A9 nor a Cortex-A15. This is similar to the strategy Qualcomm has applied with Krait: start with an instruction set license from ARM, and optimize the layout (and presumably the interconnect) for performance.
The A6 is fabbed in Samsung’s 32nm HKMG process, and my belief is Samsung has this business for the foreseeable future. We know from other efforts that the level of cooperation between Apple, their EDA vendor, ARM, and Samsung to pull off something of this magnitude is substantial and not easily reproduced at another foundry, in huge volumes.
Apple was rather vague about CPU performance at their launch event, probably to avoid a repeat of the dustup with NVIDIA from the iPad 3 launch. Rather than make specific claims, they made a sweeping self-comparative statement: the iPhone 5 would be 2x faster than the iPhone 4S.
Tests from AnandTech reveal the thing is fast, even faster than advertised at the launch event. Geekbench results put the improvement from iPhone 4S to iPhone 5 at 2.13x. Observations indicate the A6 CPU may be clocked as high as 1.3GHz, typically running somewhere between 800MHz and 1.2GHz.
Moving out of self-comparison mode, the A6 is faster than anything out there right now, including the Samsung quad-core Exynos 4 (in the international Samsung Galaxy S III) and the Intel Z2460 (inside the Lava XOLO X900).
The A6 features a tri-core GPU, based on the Imagination PowerVR SGX543MP3. This puts it right at the top of graphics tests, approached only by the Qualcomm Adreno 320 found inside the LG Optimus G. The note of interest in the reports is that the A6 uses a 64-bit memory interface, instead of the 128-bit interface on the A5X found in the iPad 3. This strongly suggests there will be iPhone parts (A6) and iPad parts (A6X) going forward.
Once again, Apple hasn't integrated the baseband on the A6. I'm starting to think this is a red herring for the critics out there. The fact of the matter is that different carriers in different markets require different baseband implementations. This is why we're seeing international versions of phones launched with one chipset, while US 4G LTE versions launch with Qualcomm Snapdragon parts. (See the Motorola RAZR i, nowhere near the US anytime soon.) Apple has gone with the Qualcomm Gobi MDM9615 baseband part.
The maturing of the ARM processor ecosystem, with Apple and others starting to hand-craft cores, is an interesting trend in itself. The level of investment in a mobile processor design, fab, and software is mind-boggling, but it's the entry fee to ship hundreds of millions of units a year. We saw this week that TI is retargeting OMAP (no, not dead, just refocusing on markets beyond mobile), partly in response to these trends. Apple is erecting a huge barrier for its competitors, and hopefully it won't drive innovation out of the rest of the high-end SoC market.
EDA teams should take note: the new competitive edge will come from designing the processor to run the software, not from designing the software to run on the processor.
[Disclaimer: I do not own $AAPL nor an Apple device.]