natoal So it's safe to assume that performance is poor; otherwise the PR team would be falling over themselves to praise it
It's not safe to assume that at all. And I think the PR team is already working overtime advertising Gemini.
natoal From this I'd say it's reasonable to conclude that Snapdragon is superior for generative AI. Tensor's selling point is security, so it's not a surprise anyway
I am not sure what you mean by superior. A faster chip won't make your AI model any smarter. There are many factors at play, and I only know a few of them. Chip A might support quantization more efficiently, allowing for greater performance when such quantization is used. Chip B might only be efficient with 8-bit integer or floating point, one form of quantization. Usually memory bandwidth is one of the most important factors.
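To see why memory bandwidth matters so much, here's a rough roofline-style sketch: during decode, generating each token streams roughly the entire set of weights through memory once, so throughput is capped at bandwidth divided by model size in bytes. All the numbers below are hypothetical, not specs of any real chip; the point is just that halving bytes per weight (e.g. int8 instead of fp16) roughly doubles the bandwidth ceiling.

```python
# Back-of-envelope estimate of the memory-bandwidth ceiling on LLM
# decode throughput. All numbers are illustrative assumptions.

def max_tokens_per_sec(n_params: float, bytes_per_weight: float,
                       bandwidth_gb_s: float) -> float:
    """Each decoded token streams ~all weights once, so throughput
    is capped at memory bandwidth / model size in bytes."""
    model_bytes = n_params * bytes_per_weight
    return (bandwidth_gb_s * 1e9) / model_bytes

# Hypothetical 3B-parameter on-device model, 50 GB/s mobile bandwidth.
fp16 = max_tokens_per_sec(3e9, 2.0, 50)   # 16-bit weights
int8 = max_tokens_per_sec(3e9, 1.0, 50)   # 8-bit quantized weights

print(f"fp16 ceiling: {fp16:.1f} tok/s")  # ~8.3 tok/s
print(f"int8 ceiling: {int8:.1f} tok/s")  # ~16.7 tok/s
```

This is only an upper bound: compute, cache behavior, and how well the chip's int8 paths are actually used all push real throughput below it, which is why raw chip speed alone doesn't settle which one is "superior".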
AFAIK tensor cores are for doing tensor things, like running LLMs. As for power efficiency, this is nothing new. Even 7-10 years ago an Nvidia GPU was 3-5x more efficient than a Xeon for inference; an NPU or tensor core being more efficient than a CPU is exactly what you'd expect.