scanpy.api.tl.tsne

scanpy.api.tl.tsne(adata, n_pcs=None, use_rep=None, perplexity=30, early_exaggeration=12, learning_rate=1000, random_state=0, use_fast_tsne=True, n_jobs=None, copy=False)

t-SNE [Maaten08] [Amir13] [Pedregosa11].

t-distributed stochastic neighborhood embedding (t-SNE) [Maaten08] has been proposed for visualizing single-cell data by [Amir13]. Here, by default, we use the implementation of scikit-learn [Pedregosa11]. You can achieve a huge speedup and better convergence if you install Multicore-tSNE by [Ulyanov16], which will be automatically detected by Scanpy.
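Since the default backend is scikit-learn, the computation this function performs can be sketched with a direct call to sklearn.manifold.TSNE. The toy matrix below is an assumption standing in for adata.X; the parameter values mirror the defaults documented here.

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy "cells x genes" matrix standing in for adata.X (assumption: demo data only).
rng = np.random.RandomState(0)
X = rng.rand(100, 20)

# Parameters mirror the scanpy defaults documented on this page.
tsne = TSNE(
    n_components=2,
    perplexity=30,
    early_exaggeration=12,
    learning_rate=1000,
    random_state=0,
)
X_tsne = tsne.fit_transform(X)
print(X_tsne.shape)  # one 2-D coordinate pair per observation
```

With Multicore-tSNE installed, Scanpy swaps this backend transparently; the parameters keep the same meaning.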
Parameters

adata : AnnData
    Annotated data matrix.
n_pcs : int or None, optional (default: None)
    Use this many PCs. If n_pcs==0 use .X if use_rep is None.
use_rep : {None, 'X'} or any key for .obsm, optional (default: None)
    Use the indicated representation. If None, the representation is chosen automatically: for .n_vars < 50, .X is used, otherwise 'X_pca' is used. If 'X_pca' is not present, it's computed with default parameters.
perplexity : float, optional (default: 30)
    The perplexity is related to the number of nearest neighbors that is used in other manifold learning algorithms. Larger datasets usually require a larger perplexity. Consider selecting a value between 5 and 50. The choice is not extremely critical since t-SNE is quite insensitive to this parameter.
early_exaggeration : float, optional (default: 12.0)
    Controls how tight natural clusters in the original space are in the embedded space and how much space will be between them. For larger values, the space between natural clusters will be larger in the embedded space. Again, the choice of this parameter is not very critical. If the cost function increases during initial optimization, the early exaggeration factor or the learning rate might be too high.
learning_rate : float, optional (default: 1000)
    Note that the R package "Rtsne" uses a default of 200. The learning rate can be a critical parameter; it should be between 100 and 1000. If the cost function increases during initial optimization, the early exaggeration factor or the learning rate might be too high. If the cost function gets stuck in a bad local minimum, increasing the learning rate sometimes helps.
random_state : int or None, optional (default: 0)
    Change this to use different initial states for the optimization. If None, the initial state is not reproducible.
use_fast_tsne : bool, optional (default: True)
    Use the MulticoreTSNE package by D. Ulyanov if it is installed.
n_jobs : int or None (default: sc.settings.n_jobs)
    Number of jobs.
copy : bool (default: False)
    Return a copy instead of writing to adata.
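The interplay of n_pcs and use_rep described above amounts to an optional PCA step before t-SNE. A minimal sketch of that preprocessing with scikit-learn, assuming toy data with .n_vars >= 50 so that the automatic 'X_pca' path would apply:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Toy matrix with 60 variables (assumption): since .n_vars >= 50,
# the representation chosen automatically would be 'X_pca'.
rng = np.random.RandomState(0)
X = rng.rand(150, 60)

# Emulate the PCA preprocessing: reduce to 50 principal components,
# then run t-SNE on the reduced representation.
X_pca = PCA(n_components=50, random_state=0).fit_transform(X)
X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_pca)
print(X_tsne.shape)  # 2-D embedding, one row per observation
```

Passing n_pcs explicitly would change only the n_components value in the PCA step of this sketch.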
Returns

Depending on copy, returns or updates adata with the following fields.

X_tsne : np.ndarray (adata.obs, dtype float)
    t-SNE coordinates of data.