t-SNE Generates Different Results on Different Machines
Solution 1:
t-SNE is a heuristic, and like most heuristics it can behave quite differently under small changes. The core issue is that only local convergence is guaranteed, so the result is not very robust. This follows from basic optimization theory and is stated in the docs:
t-SNE has a cost function that is not convex, i.e. with different initializations we can get different results.
While you explained that, in your opinion, the lack of seeding is not the culprit (this is hard to measure; benchmarking is hard), you should check your sklearn versions, as the t-SNE code is one of the more actively developed parts of sklearn, with many changes over time.
Each of these changes can produce observations like yours when only one example is tried; a larger benchmark/test set would of course be a better approach for comparing t-SNE implementations.
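A minimal sketch of both checks: print the sklearn version on each machine, and pin the seed and initialization so runs are comparable. The dataset and parameter values here are illustrative, not from the question.

```python
# Sketch: compare library versions across machines and make t-SNE runs
# reproducible by fixing random_state and the initialization.
# (Assumes scikit-learn is installed; note that TSNE defaults have
# changed between versions, which is part of the problem described above.)
import sklearn
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE

print(sklearn.__version__)  # compare this value on both machines

X = load_iris().data
# Deterministic init ("pca") plus a fixed seed removes run-to-run variance
# on a single machine; cross-machine differences can still remain.
emb = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)
print(emb.shape)
```

With identical versions, seeds, and BLAS backends the embeddings should match; if they still differ, the version history of the t-SNE module is the next place to look.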
Regarding the remark that one of the computers keeps getting better results:
This claim is broad, as there are at least two different interpretations:
- rate the result visually/perceptually
- compare the kl_divergence_ achieved after optimization
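The second interpretation can be checked directly: a fitted sklearn TSNE estimator exposes the final objective value as kl_divergence_, and a lower value means a better optimum. A minimal sketch (the dataset and seeds are illustrative):

```python
# Sketch: compare the optimization objective (kl_divergence_) reached by
# several t-SNE fits instead of judging the embeddings by eye.
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE

X = load_iris().data
for seed in (0, 1):
    model = TSNE(n_components=2, random_state=seed).fit(X)
    # Lower KL divergence = the embedding better preserves the
    # high-dimensional neighborhood structure for this run.
    print(seed, model.kl_divergence_)
```

Running this on both machines gives a quantitative comparison, though a single example is still weak evidence; averaging over many seeds and datasets is the more reliable benchmark.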