When you first think about partitioning a design, you think of the technical reasons for doing so. Tools today have a sweet spot of around 500K instances. Larger than that, run times get prohibitively long, and the iteration loop from an RTL update to a completed physical design stretches out too far. Designs also partition naturally in certain ways depending on the bus structure, the floorplan, and other aspects of the SoC itself.
What I hadn't thought of was that the human aspects of the design team are just as important. The most obvious is geographical location. If you have a design team in, say, Bangalore, then you want to give them ownership of some comprehensible part of the design. And for the technical reasons above, you can't just take a large block and chop it in two: the constraints needed to put it all back together later don't exist and aren't well understood, and the communication required to close the design would be completely excessive.
Another major factor in partitioning is how stable that part of the design is. If you are putting a standard piece of IP onto the chip, such as a USB controller, then it isn't going to change much, and you can put it in a partition with other stable blocks and get that part of the design completed early. On the other hand, if part of the design is in flux, you want it in its own partition so you are not constantly redoing a large unchanging portion just because it was grouped with something volatile. Unstable partitions need more uncommitted spare gates held in reserve, whereas stable ones can be squeezed down harder.
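The rules of thumb above can be sketched as a toy partitioner: volatile blocks each get their own partition with extra uncommitted-gate margin, and stable blocks get packed together under the ~500K-instance sweet spot. Everything here, including the function name, the margin percentages, and the block list, is hypothetical and purely illustrative; a real flow would also have to respect the floorplan and bus structure.

```python
# Toy sketch of stability-aware partitioning. All names and numbers
# are illustrative assumptions, not taken from any real tool or flow.

SWEET_SPOT = 500_000  # rough per-partition instance budget for today's tools


def partition(blocks, margin_volatile=0.20, margin_stable=0.05):
    """blocks: list of (name, instance_count, is_stable) tuples.

    Volatile blocks are isolated, one per partition, with a larger
    uncommitted-gate reserve so churn doesn't spill into stable work.
    Stable blocks are packed greedily (largest first) under the budget
    and squeezed with only a small margin.
    """
    partitions = []

    # Volatile blocks: one partition each, generous spare-gate reserve.
    for name, count, stable in blocks:
        if not stable:
            partitions.append({"blocks": [name],
                               "instances": count,
                               "spare": int(count * margin_volatile)})

    # Stable blocks: next-fit greedy packing, largest block first.
    current = None
    for name, count, stable in sorted((b for b in blocks if b[2]),
                                      key=lambda b: -b[1]):
        if current is None or current["instances"] + count > SWEET_SPOT:
            current = {"blocks": [], "instances": 0, "spare": 0}
            partitions.append(current)
        current["blocks"].append(name)
        current["instances"] += count
        current["spare"] = int(current["instances"] * margin_stable)

    return partitions
```

For example, a stable USB controller, DDR PHY, and UART would end up packed into one partition well under the budget, while an in-flux accelerator block lands alone with a 20% gate reserve.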
The challenge in getting this right is that what is optimal for chip area (stay as flat as possible) is not optimal for the tools and the schedule (keep to the sweet spot), and may be especially suboptimal on the human dimension. Everyone seemed to have rules of thumb for doing this, and EDA tools (such as floorplanners) only address the technical dimension.