scanpy.tl.tsne
- scanpy.tl.tsne(adata, n_pcs=None, use_rep=None, perplexity=30, early_exaggeration=12, learning_rate=1000, random_state=0, use_fast_tsne=False, n_jobs=None, copy=False, *, metric='euclidean')
t-SNE [Maaten08] [Amir13] [Pedregosa11].
t-distributed stochastic neighbor embedding (t-SNE) [Maaten08] was proposed for visualizing single-cell data by [Amir13]. Here, by default, we use the implementation of scikit-learn [Pedregosa11]. You can achieve a substantial speedup and better convergence if you install Multicore-tSNE by [Ulyanov16], which is automatically detected by Scanpy.
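A minimal usage sketch (the bundled pbmc68k_reduced dataset and the 'bulk_labels' key used for coloring are illustrative choices, not part of this function's API):

    import scanpy as sc

    # Any AnnData object works; this bundled dataset is just for illustration.
    adata = sc.datasets.pbmc68k_reduced()

    # Compute the t-SNE embedding with default parameters.
    sc.tl.tsne(adata)

    # Plot the embedding, colored by an .obs annotation.
    sc.pl.tsne(adata, color='bulk_labels')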
- Parameters
- adata : AnnData
  Annotated data matrix.
- n_pcs : int | None (default: None)
  Use this many PCs. If n_pcs==0, use .X if use_rep is None.
- use_rep : str | None (default: None)
  Use the indicated representation. 'X' or any key for .obsm is valid. If None, the representation is chosen automatically: for .n_vars < 50, .X is used, otherwise 'X_pca' is used. If 'X_pca' is not present, it is computed with default parameters.
- perplexity : float | int (default: 30)
  The perplexity is related to the number of nearest neighbors used in other manifold learning algorithms. Larger datasets usually require a larger perplexity. Consider selecting a value between 5 and 50. The choice is not extremely critical, since t-SNE is quite insensitive to this parameter.
- metric : str (default: 'euclidean')
  Distance metric to calculate neighbors on.
- early_exaggeration : float | int (default: 12)
  Controls how tight natural clusters in the original space are in the embedded space and how much space there will be between them. For larger values, the space between natural clusters will be larger in the embedded space. The choice of this parameter is not very critical. If the cost function increases during initial optimization, the early exaggeration factor or the learning rate might be too high.
- learning_rate : float | int (default: 1000)
  Note that the R package "Rtsne" uses a default of 200. The learning rate can be a critical parameter; it should be between 100 and 1000. If the cost function increases during initial optimization, the early exaggeration factor or the learning rate might be too high. If the cost function gets stuck in a bad local minimum, increasing the learning rate sometimes helps.
- random_state : None | int | RandomState (default: 0)
  Change this to use different initial states for the optimization. If None, the initial state is not reproducible.
- n_jobs : int | None (default: None)
  Number of jobs for parallel computation. None means using scanpy._settings.ScanpyConfig.n_jobs.
- copy : bool (default: False)
  Return a copy instead of writing to adata.
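As a sketch of how the parameters above interact (the specific values are illustrative, not recommendations):

    import scanpy as sc

    adata = sc.datasets.pbmc68k_reduced()

    # Make sure a PCA representation exists, then embed it explicitly.
    sc.pp.pca(adata, n_comps=30)
    sc.tl.tsne(
        adata,
        use_rep='X_pca',     # embed the PCA representation rather than .X
        n_pcs=30,            # number of PCs taken from that representation
        perplexity=30,       # roughly the effective neighborhood size
        learning_rate=200,   # the Rtsne-style default mentioned above
        random_state=0,      # fixed seed for a reproducible layout
    )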
- Return type
  AnnData | None
- Returns
  Depending on copy, returns or updates adata with the following fields.
  - X_tsne : np.ndarray (adata.obsm, dtype float)
    tSNE coordinates of data.
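A short sketch of how the result is stored and how copy changes the call (again using a bundled dataset purely for illustration):

    import scanpy as sc

    adata = sc.datasets.pbmc68k_reduced()

    # In-place call: coordinates are written to .obsm under the key 'X_tsne'.
    sc.tl.tsne(adata)
    coords = adata.obsm['X_tsne']
    print(coords.shape)   # (n_obs, 2)

    # With copy=True, the original object is left alone and a modified copy is returned.
    adata_copy = sc.tl.tsne(adata, copy=True)
    assert 'X_tsne' in adata_copy.obsm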